Saturday, November 27, 2010

Retrieving message counts of a remote MSMQ queue using PowerShell

My current client does work for the retail industry. This weekend is the beginning of the holiday shopping season, a very important period to the retail industry.

This is also the first time that I’ve seen the planning and operations that the Retail IT groups go through. I’m really impressed by the amount of planning and support that is in place to make sure that IT-related issues don’t interfere with holiday shoppers. There are bridge calls that begin at 2am, around-the-clock monitoring of servers, hourly reports, call-ins every 30 minutes, and lots of people either active or on-call.

My part in this was to spend three hours monitoring 13 remote servers. To make life a little easier, we used a PowerShell script similar to this to retrieve the queue lengths of some remote MSMQ queues.

$queuesToCheck = 'Q1', 'Q2'
$servers = 'myserver1', 'myserver2'

$queues = @()
$servers | ForEach-Object {
    $ServerName = $_

    # queue names reported by the perf counter take the form "<server>\<queue>"
    $machineQueuesToCheck = $queuesToCheck | ForEach-Object { "$ServerName\$_".ToLower() }
    $queues += gwmi -Class Win32_PerfRawData_MSMQ_MSMQQueue -ComputerName $ServerName | Where-Object {
        $machineQueuesToCheck -contains $_.Name.Trim() -and $_.MessagesInQueue -gt 0
    }
}

$queues | Format-Table Name, MessagesInQueue

Monday, September 6, 2010

Ascendo DataVault & Blackberry Desktop Manager v6

I’ve been a Blackberry Storm 9530 owner since the device was released. I’ve been really happy with the device, especially with the application ecosystem; there are lots of apps for the Storm and for Blackberry devices in general.

As a computer professional, I spend a lot of time on the Internet. I’m a member of a large number of sites and that leads to a lot of username/password combinations to keep track of. An app for tracking passwords is a necessity for me. I don’t have the memory to track them all. Single-sign-on can’t come fast enough.

So I tried a bunch of password apps and ended up with DataVault from Ascendo, mainly due to its flexibility and the fact that there is a version that runs on Windows and the Blackberry Storm. The two versions will sync between each other through the Blackberry Desktop Manager.

This worked well until about a month ago when RIM released v6.0 of the Desktop Manager. According to an FAQ on Ascendo’s site, RIM removed an API that Ascendo was using to support synchronization. Unfortunately I didn’t notice that synchronization was broken until today.

The FAQ suggests upgrading to the latest version, 4.7.1, which has a fix for the problem. Still no syncing for me.

Another FAQ suggests upgrading the device software to the latest version. Still no syncing for me.

On a whim, I tried running the Desktop Manager as Administrator. Surprise! The sync dialog appeared.

Only two hours lost.

Sunday, August 29, 2010

Register Spring.Net objects with code

Anyone who has worked with Spring.Net is probably familiar with configuring the IOC container using an XML document. A typical example would be:

<?xml version="1.0" encoding="utf-8"?>
<objects xmlns="http://www.springframework.net"
         xmlns:v="http://www.springframework.net/validation"
         xmlns:aop="http://www.springframework.net/aop"
         xmlns:db="http://www.springframework.net/db">

  <object id="ConsoleWriter" type="SimpleCalculatorWithComplexTree.Writers.ConsoleWriter" singleton="false">
    <constructor-arg name="formatter" ref="HexFormatter" />
  </object>

  <object id="Calculator" type="SimpleCalculatorWithComplexTree.Calculator" singleton="false">
    <constructor-arg name="writer" ref="ConsoleWriter" />
  </object>

  <object id="HexFormatter" type="SimpleCalculatorWithComplexTree.Formatters.HexFormatter" singleton="false" />

</objects>

The last couple of years have seen an anti-XML movement begin to form. In the world of IOC containers, this has materialized as a move away from XML configuration and toward code constructs and “convention over configuration.” I’m not against XML. After all, almost everything has a place.


I recently did a presentation on Spring.Net for a .NET user group. I wanted to introduce the IOC container without overwhelming people with XML. A quick search found an article about XML-less configuration of the container, but that approach felt like it would be a distraction from my goal of getting to the container.


Luckily I stumbled into an article from early 2008 that discussed the configuration API. From it I was able to create this method, which extends GenericApplicationContext and allows for easy registration of a type in the container:



Imports System.Runtime.CompilerServices
Imports Spring.Context.Support
Imports Spring.Objects.Factory.Support

Public Module SpringExtension

    <Extension()>
    Public Sub RegisterType(Of T)(ByVal ctx As GenericApplicationContext, ByVal builderConfig As Action(Of ObjectDefinitionBuilder))
        Dim objectDefinitionFactory As IObjectDefinitionFactory = New DefaultObjectDefinitionFactory()

        Dim builder As ObjectDefinitionBuilder = ObjectDefinitionBuilder.RootObjectDefinition(objectDefinitionFactory, GetType(T))
        builderConfig.Invoke(builder)

        ctx.RegisterObjectDefinition(builder.ObjectDefinition.ObjectType.Name, builder.ObjectDefinition)

    End Sub

End Module

Here’s a quick example of its use:



Dim ctx = New GenericApplicationContext()

ctx.RegisterType(Of Calculator)(Sub(b As ObjectDefinitionBuilder) b _
    .SetAutowireMode(Spring.Objects.Factory.Config.AutoWiringMode.AutoDetect) _
    .SetSingleton(False))

ctx.RegisterType(Of HexFormatter)(Sub(b As ObjectDefinitionBuilder) b _
    .SetAutowireMode(Spring.Objects.Factory.Config.AutoWiringMode.AutoDetect) _
    .SetSingleton(False))

ctx.RegisterType(Of ConsoleWriter)(Sub(b As ObjectDefinitionBuilder) b _
    .SetAutowireMode(Spring.Objects.Factory.Config.AutoWiringMode.AutoDetect) _
    .SetSingleton(False))

Sunday, August 22, 2010

Disable password expiration on Windows Hyper-V server

Back in April I wrote a quick entry about not being able to use the Hyper-V Remote Manager to access the Hyper-V server. I was getting the error “Cannot connect to the RPC service on computer…” because my credentials on the server had expired. Today I finally got around to changing the account security policy on the server to prevent passwords from expiring. The command is:

NET accounts /MAXPWAGE:UNLIMITED

Monday, August 16, 2010

Spring.Net and Common.Logging 2.0

Versions 1.2 and 1.3 of Spring.Net bind to Common.Logging 1.2. If you are using or need to use v2.0 of Common.Logging, you can use an assembly redirect to force the assembly loader to the updated version. Put the following into your application’s configuration file:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Common.Logging"
                          publicKeyToken="AF08829B84F0328E" />
        <bindingRedirect oldVersion="1.2.0.0"
                         newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Sunday, May 30, 2010

TF31002, VS2008 and TFS2010

It’s funny (but not at the time) how easily you can find the answer to a question once you’ve already answered it.

Last Friday I spent way too much time trying to figure out why I was getting the error TF31002 when trying to connect my Visual Studio 2008 instance to Team Foundation Server 2010. I knew it could be done. I have the configuration running at home. I have multiple co-workers with successful configurations. Yet, on these two machines, nothing but TF31002.

Today I can search and find all kinds of solutions as long as I don’t include TF31002 in my search. Searching with the terms ‘VS2008 TFS2010’ returns the solution as the first hit. Searching on TF31002 turns up nothing useful. That’s why I put TF31002 in the title of this post.

The setup:

VS2008 with SP1 already installed.

We installed TFS2010.

We then installed Team Explorer for VS2008 and the Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 (Installer). I tried to get to the TFS2010 box and received the error TF31002. Oh. I was told that you’re supposed to include the URL to the team project collection:

http://server:8080/tfs/collection

That got me a nice message about not allowing ‘http:’, ‘https:’ or ‘/’ in the server name.

Sound familiar?

Sadly, the answer is in the sequence of the installs. Since I installed Team Explorer after installing VS2008 SP1, I had to reinstall SP1 and then the Forward Compatibility Update.

Sunday, May 9, 2010

Creating a NetBIOS alias for the local machine

My current assignment is with a client that has a system consisting of multiple components where each component is a separate .EXE. This wouldn’t be so bad except for the configuration problem it creates. Each component has its own config file, and some values must stay consistent across those files. In addition, the system is deployed to multiple environments and clients.

Configuration is a nightmare and we’re still doing it manually.

This past week, one of the team members came up with the idea of using aliases instead of actual machine names (for file and application locations). The thought was that we could define the aliases in the Hosts file and point them at the loopback address (127.0.0.1).

Eh, no.

Using the Hosts file works well for HTTP but has no effect on UNC paths.

Luckily this client has an infrastructure group and they were able to help us out. They identified four registry entries that have to be changed in order to create aliases for the local machine.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters
Add a new Multi-String value called OptionalNames. Enter one or more aliases, one per line.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters
Add a new DWORD value called DisableStrictNameChecking and set it to 1.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
Add a new DWORD value called DisableLoopbackCheck and set it to 1.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
Add a new Multi-String value called BackConnectionHostNames. Enter one or more aliases, one per line.
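As a sketch, the two DWORD changes can be captured in a .reg file. This is illustrative only (the Multi-String values, OptionalNames and BackConnectionHostNames, are easier to create by hand in regedit):

```reg
Windows Registry Editor Version 5.00

; allow the server service to respond to names other than the real machine name
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters]
"DisableStrictNameChecking"=dword:00000001
; also add OptionalNames (Multi-String) here with your aliases, one per line

; skip the loopback authentication check so local connections via an alias succeed
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"DisableLoopbackCheck"=dword:00000001
; also add BackConnectionHostNames (Multi-String) under
; HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 with the same aliases
```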

You’ll have to restart the machine for the configuration to take effect.

Saturday, May 8, 2010

Hyper-V: “No virtual machines were found on this server” and “Access Denied”

It seems that my adventures with Hyper-V will never end. Today I was finishing up the rebuild of my laptop, which included installing the Hyper-V manager. Luckily I had previously taken good notes; I just wish I had followed them. I remembered that I had to get the Remote Server Administration Toolkit for Windows 7. After installing and enabling it, I tried to connect to my host server and was rewarded with the Access Denied message. After a few minutes, the Access Denied changed to the No virtual machines… message.

Sadly I spent more time on this than I should have. After reviewing my notes, I realized that I hadn’t made the COM security changes using the DCOMCNFG tool. Thirty seconds to make the change and no more error messages.

Laptop Rebuild

I decided to join the cool kids and purchase an SSD. An opportunity appeared and I decided to take advantage of it. I did a little bit of research and settled on the Intel X25-M. There is an assortment of 128GB drives available, but the little extra for the 160GB seemed worth it.

I must say that I’m really impressed with the speed of the drive but my favorite “feature” is the sound level. There is no noise and I really like that.

Now, the downside to a new drive is installing all the software that you accumulate over the years. It’s amazing. Here’s the list of everything that I just installed:

  • AVG
  • Ascendo DataVault
  • Foxit
  • Office 2010
  • Outlook Connector
  • Office Communicator
  • SharedView
  • Silverlight 4
  • PersonalBrain
  • Quicken
  • Blackberry Desktop Manager
  • blu
  • VMWare Workstation
  • Windows Home Server Connector
  • Windows IM
  • Windows Live Writer
  • Zune
  • Live Meeting
  • Hyper-V Manager
  • Adobe Digital Edition
  • ImgBurn
  • Intel SSD Tools
  • All the necessary drivers

Yes, there are no development tools in the list. I do all of my development in VMs. It really simplifies things.

Saturday, April 17, 2010

Hyper-V Server - “Cannot connect to the RPC service on computer <computer-name>. Make sure your RPC service is running.”

I installed Microsoft Hyper-V server back in November 2008. I almost never have any problems. It just sits there and runs. The only problem that I have is that periodically I can’t connect to it remotely from the Hyper-V Manager. I get the “Cannot connect to the RPC service on computer <computer-name>. Make sure your RPC service is running.” error. I addressed this issue here.

Today I encountered this error again. So I fired up Remote Desktop to connect to the server. Instead of connecting, I was greeted with a message about not being able to connect to the Local Security Authority.

I’ve seen this before and I should have written about it then. Doing so would have saved me a couple of minutes.

This is my home network and both the server and my laptop run in a workgroup. The effect is that, while the same account exists on both machines, the accounts are managed separately. The password for the account on the Hyper-V server had expired. I used RDP to connect as the admin and reset the password for my non-admin user. Now all is good and I won’t have this problem for another 45 days.

Monday, March 8, 2010

ALTER COLUMN on XML column results in Error 511

We frequently change the schema that constrains an XML column in one of the tables. When it’s time to update the schema collection and re-type the column, we use the following steps:

  1. Remove the type with the ALTER TABLE statement:
    ALTER TABLE <table> ALTER COLUMN <xml-column> XML;
  2. Change the schema collection by recreating it.
  3. Reset permissions:
    GRANT EXECUTE ON XML SCHEMA COLLECTION…
  4. Reapply the schema constraint to the column:
    ALTER TABLE <table> ALTER COLUMN <xml-column> XML(<schema collection>)
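As a concrete sketch of the four steps, using hypothetical names (dbo.Orders, OrderXml, OrderSchemas, OrderAppRole):

```sql
-- 1. drop the type so the column becomes untyped XML
ALTER TABLE dbo.Orders ALTER COLUMN OrderXml XML;

-- 2. recreate the schema collection with the updated schema
DROP XML SCHEMA COLLECTION OrderSchemas;
CREATE XML SCHEMA COLLECTION OrderSchemas AS N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- updated schema goes here -->
</xsd:schema>';

-- 3. reset permissions
GRANT EXECUTE ON XML SCHEMA COLLECTION::OrderSchemas TO OrderAppRole;

-- 4. re-type the column against the updated collection
ALTER TABLE dbo.Orders ALTER COLUMN OrderXml XML(OrderSchemas);
```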

Twice I’ve started getting error 511 on step one:

Msg 511, Level 16, State 1, Line 1
Cannot create a row of size 8073 which is greater than the allowable maximum of 8060.
The statement has been terminated.

This thread from the SQL Server XML forum indicates that the behavior is “by design.” There seems to be a limit to the number of times that you can alter a column. The good news is that there is a work-around. Rebuilding the indexes on the table appears to reset the alter-count.
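In other words, something like this (table name hypothetical) appears to reset the count and lets step one succeed again:

```sql
ALTER INDEX ALL ON dbo.Orders REBUILD;
```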

Sunday, February 28, 2010

VMWare NAT Service (vmnat.exe) goes wild

Tonight I decided to upgrade to VMWare Workstation 7. I’d been running on 6.5 since it was released but held off on 7.0.

The install went smoothly, as expected. The fun started after the final reboot. My usually well-responsive laptop was anything but. I fired up the Windows Task Manager and sorted by CPU but nothing registered at more than 1%. I then clicked the Show processes from all users option and vmnat.exe appeared at the top of the list with ~50% CPU and 5.9MB Memory.

Restarting the laptop didn’t solve the problem.

A web search turned up the following thread VMWare Workstation 6.5.3 + Windows 7 Enterprise 64 = VMNAT.EXE issue? in which the poster is complaining about high CPU utilization from vmnat.exe. The resolution to his problem was to remove NAT from VMNET0.

I used the Virtual Network Editor to check my configuration. No NAT on VMNET0. I used the Restore Default option and after it completed, vmnat.exe had calmed down to 0% CPU and 1.6MB Memory.

Saturday, February 13, 2010

Upgrading TFS 2010 B2 to the RC

The Release Candidates of Visual Studio and Team Foundation Server were made available last week and I’m finally getting around to upgrading my installation of TFS. In this post, I’ll record the steps that I took and the problems encountered.

1. Backup the server
I’m actually running TFS in a VM hosted on a Hyper-V 2008 server. To back up my server, I shut down the VM and used the Hyper-V Manager to create a snapshot of it.

2. Uninstall TFS 2010 Beta 2


Happily, the uninstall didn’t encounter any problems.

3. Using the Hyper-V Manager, I mounted the TFS RC ISO.

4. Next I started the install of the Release Candidate.

5. After the install completed, I selected the Upgrade wizard. After stepping through the pages and providing all the information, the Configure step is kicked off to do the real work.


6. After a very short amount of time – success!


The final test was to fire up a VM containing an install of VS 2008 and access the new TFS installation. Unlike the rest of the upgrade, this didn’t work the first time; instead I was met with an error.


The Forward Compatibility Update (KB974558) is a patch to VS2008 SP1 that enables interoperability with TFS 2010. I didn’t need this to access the Beta 2 install but who am I to argue with the computer…

KB974558 is a short download and a quick install. I launched VS2008 again and Team Explorer happily reached out to the TFS server!

And with that – my TFS 2010 installation has been upgraded to the RC. Kudos to the TFS team!!

Monday, February 1, 2010

Outlook Date Tricks

I love it when a piece of software surprises me in a good way - since it so rarely happens. In this case, I’m talking about Microsoft Outlook and some convenient functionality that’s embedded within its date fields. I actually ran into this a rather long time ago but have only now gotten around to writing it up.

Say that you’re creating a new meeting request for tomorrow. You could always use the scheduling bars to select the start and end times of the meeting. The green bar represents the start time and the red bar the end. It works well when the meeting is within the next couple of days. But what if the meeting is for next Wednesday? You could:

  1. Scroll some number of days on the Scheduling page
  2. Calculate the date and type it in to the Start Time box
  3. Use the drop-down calendar to select the appropriate date
  4. Enter “next Wednesday” into the Start Time box

Yup. If you enter “next Wednesday” and press tab, Outlook will do the calculation for you:

(screenshot: the Start time box shows next Wednesday’s date; Outlook calculates the correct value)

Try it!

Through experimentation, I’ve discovered that the following also works:

  • the name of a day (e.g. Monday, Tuesday)
  • “today”, “tomorrow”, “yesterday”
  • “last” {name of day} (e.g. “last Wednesday”)
  • “[number] week[s] from {today | tomorrow | name-of-a-day}” (e.g. “2 weeks from Monday”)
  • “last day of {this | next | last} month”
  • “Christmas” (I was surprised!)

There’s probably more.

Friday, January 8, 2010

DateTime.MinValue != SQL Server minimum datetime value

I’m sure that most people know this but I’m betting that there are a few that don’t. I say this because I just found a line of code that is trying to insert DateTime.MinValue into a SQL Server datetime column. The result is:

Arithmetic overflow error converting expression to data type datetime.

The value of DateTime.MinValue is 1 Jan 0001 12:00:00am.

The minimum value that you can put into a SQL Server datetime column is 1 Jan 1753. On the other hand, if you’re lucky enough to be using SQL Server 2008 and you have control over the table definitions, you can use the new datetime2 datatype. Datetime2 has an extended date range of 1 Jan 0001 to 31 Dec 9999.
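A quick illustration, assuming SQL Server 2008 or later for the datetime2 half:

```sql
-- fails at runtime: datetime only reaches back to 1 Jan 1753
DECLARE @d datetime = '00010101';

-- fine: datetime2 covers 1 Jan 0001 through 31 Dec 9999
DECLARE @d2 datetime2 = '00010101';
```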

Saturday, January 2, 2010

Automatically Backing up a Blogger.com Blog – Part 2

I started writing about my blog backup solution in a previous post. In that post I briefly discussed the Blogger API that I used. In this post I’d like to go over the rest.

The Solution


The solution is simple, unimpressive, and consists of only a few classes. I discussed the actual API calls in the previous post. Those are the heart of the Exporter and Authenticator classes. Here’s the rest of the Authenticator class:
public class Authenticator
{
    private static ILog logger = LogManager.GetLogger(typeof(Authenticator));

    private string _email;
    private string _password;

    private int _timeoutInMs = 100000;

    public Authenticator(string email, string password)
    {
        _email = email;
        _password = password;
    }

    /// <summary>
    /// The request timeout in milliseconds
    /// </summary>
    public int TimeoutInMs
    {
        get { return _timeoutInMs; }
        set
        {
            _timeoutInMs = value;
            if (logger.IsDebugEnabled) logger.Debug("Authentication request timeout changed to: " + this._timeoutInMs);
        }
    }

    private void CheckRequiredConfiguration()
    {
        if (string.IsNullOrEmpty(this._email) || string.IsNullOrEmpty(this._password))
            throw new ConfigurationErrorsException("Either the username or password has not been supplied. Please check the configuration.");
    }

    /// <summary>
    /// Make an authentication request
    /// </summary>
    /// <returns>an instance of <see cref="AuthenticationResult"/> containing the result of the authentication request</returns>
    /// <exception cref="InvalidDataException">Occurs if no response or an unexpected response is received from the Blogger service.</exception>
    /// <exception cref="AuthenticationException">Occurs if the status code is 'OK' but no Auth token was returned.</exception>
    public AuthenticationResult Authenticate()
    {
        this.CheckRequiredConfiguration();

        HttpWebRequest authenticationRequest = CreateAuthenticationRequest();

        HttpWebResponse response;
        try
        {
            response = (HttpWebResponse)authenticationRequest.GetResponse();
        }
        catch (WebException webException)
        {
            response = webException.Response as HttpWebResponse;
        }

        if (null == response) throw new InvalidDataException("An invalid response was received from the service.");

        var result = new AuthenticationResult();

        var status = response.StatusCode.ToString();
        if (logger.IsDebugEnabled) logger.Debug("Authentication response result code: " + status);
        result.Status = status;

        var sr = new StreamReader(response.GetResponseStream());
        string responseBody = sr.ReadToEnd();
        if (logger.IsTraceEnabled) logger.Trace("Authentication response body: " + responseBody);
        response.Close();

        if (result.Status == "OK")
        {
            string authTokenValue = ParseAuthenticationToken(responseBody);
            result.Token = this.BuildAuthToken(this._email, authTokenValue);
        }

        return result;
    }

    private AuthorizationToken BuildAuthToken(string email, string authValue)
    {
        return new AuthorizationToken(email, authValue);
    }

    private string ParseAuthenticationToken(string responseBody)
    {
        int authTokenPosition = responseBody.IndexOf("Auth=");
        if (-1 == authTokenPosition)
        {
            var ex = new AuthenticationException(
                "Authentication request returned 'OK' but auth token was missing. Check exception data for authentication response.");
            ex.Data.Add("authResponse", responseBody);
            throw ex;
        }

        return responseBody.Substring(authTokenPosition + 5);
    }

    private HttpWebRequest CreateAuthenticationRequest()
    {
        var uri = new Uri("https://www.google.com/accounts/ClientLogin");
        HttpWebRequest authenticationRequest = (HttpWebRequest)WebRequest.Create(uri);
        authenticationRequest.AllowAutoRedirect = false;
        authenticationRequest.Method = "POST";
        authenticationRequest.ContentType = "application/x-www-form-urlencoded";
        authenticationRequest.KeepAlive = false;
        authenticationRequest.Expect = string.Empty;
        authenticationRequest.Headers.Add("GData-Version", "2");
        authenticationRequest.Timeout = this.TimeoutInMs;

        var postBody = new StringBuilder();
        postBody.Append("accountType=GOOGLE&");
        postBody.AppendFormat("Email={0}&", this._email.ToUrlEncoded());
        postBody.AppendFormat("Passwd={0}&", this._password.ToUrlEncoded());
        postBody.Append("service=blogger&");
        postBody.AppendFormat("source={0}", "malevy.net-Blogger.Backup-1".ToUrlEncoded());

        if (logger.IsTraceEnabled) logger.Trace("body of post: " + postBody.ToString());

        byte[] encodedData = (new ASCIIEncoding()).GetBytes(postBody.ToString());

        authenticationRequest.ContentLength = encodedData.Length;
        var stream = authenticationRequest.GetRequestStream();
        stream.Write(encodedData, 0, encodedData.Length);
        stream.Close();

        return authenticationRequest;
    }
}

The call to Authenticate returns an instance of AuthenticationResult. AuthenticationResult is a simple container that holds a success or failure indicator (shocking – I know!). In the event that the authentication succeeded, the token that was received from the AuthenticationService is wrapped in an instance of AuthorizationToken, along with the email. Those two pieces of information are required by the Exporter:

public class Exporter
{
    private string _blogId;
    private readonly IArchiveWriter _fileWriter;
    private int _timeoutInMs = 100000;

    private static ILog logger = LogManager.GetLogger(typeof(Exporter));

    public Exporter(string blogId, IArchiveWriter archiveWriter)
    {
        _blogId = blogId;
        _fileWriter = archiveWriter;
    }

    /// <summary>
    /// The request timeout in milliseconds
    /// </summary>
    public int TimeoutInMs
    {
        get { return _timeoutInMs; }
        set
        {
            _timeoutInMs = value;
            if (logger.IsDebugEnabled) logger.Debug("Export request timeout changed to: " + this._timeoutInMs);
        }
    }

    /// <summary>
    /// Begins the export process
    /// </summary>
    /// <param name="authToken">The authorization token that was returned from the <see cref="Authenticator"/></param>
    public void Export(AuthorizationToken authToken)
    {
        if (authToken == null) throw new ArgumentNullException("authToken");

        if (string.IsNullOrEmpty(this._blogId))
            throw new ArgumentException("Exporter must be configured with a Blogger blog ID.");

        if (null == this._fileWriter) throw new ArgumentException("Exporter must be configured with a writer");

        var request = this.CreateExportRequest(authToken);

        var response = request.GetResponse();
        using (var responseStream = response.GetResponseStream())
        {
            this._fileWriter.ArchiveBlogStream(responseStream, this._blogId);
        }

        if (logger.IsInfoEnabled) logger.Info(string.Format("Blogger blog {0} was successfully backed up.", this._blogId));
    }

    private HttpWebRequest CreateExportRequest(AuthorizationToken token)
    {
        // I originally wanted to get all the feeds using the archive functionality but I'm having problems getting archive
        // to work when authenticating using ClientLogin. I'm going to punt to this method and hope that I get a response
        // from the blogger api newsgroup.
        const string archiveUriFormat = "http://www.blogger.com/feeds/{0}/posts/full?updated=1990-01-01T00:00:00&orderby=updated";
        var uri = new Uri(string.Format(archiveUriFormat, this._blogId));
        HttpWebRequest exportRequest = (HttpWebRequest)WebRequest.Create(uri);

        exportRequest.AllowAutoRedirect = false;
        exportRequest.Method = "GET";
        exportRequest.KeepAlive = false;
        exportRequest.Expect = string.Empty;
        exportRequest.Headers.Add("GData-Version", "2");
        exportRequest.Timeout = this.TimeoutInMs;

        exportRequest.Headers.Add(token.ToAuthorizationHeader());

        return exportRequest;
    }
}

If the Export request succeeds, the response is passed to the IArchiveWriter. In my case, this will be an instance of ArchiveFileWriter. ArchiveFileWriter persists the stream to a file (another shocking discovery!). I’ll skip the code for this one.

The driver that pulls all of this together is an instance of the Agent class:

public class Agent
{
    private readonly Authenticator _authenticator;
    private readonly Exporter _exporter;

    private static ILog logger = LogManager.GetLogger(typeof(Agent));

    public Agent(Authenticator authenticator, Exporter exporter)
    {
        if (authenticator == null) throw new ArgumentNullException("authenticator");
        if (exporter == null) throw new ArgumentNullException("exporter");

        _authenticator = authenticator;
        _exporter = exporter;
    }

    public void Execute()
    {
        var authenticationResult = this._authenticator.Authenticate();

        if ("OK" != authenticationResult.Status)
        {
            logger.Error("Unable to authenticate with the Blogger service.");
            return;
        }

        try
        {
            this._exporter.Export(authenticationResult.Token);
        }
        catch (Exception e)
        {
            logger.Error("BloggerBackup caught an unhandled exception.", e);
        }
    }
}

When the Execute() method is called, Agent uses the instance of Authenticator to make the authentication request. If that succeeds, the authorization token is passed to the exporter to retrieve the actual blog contents.

Now, the observant reader (you of course!) may have noticed that Agent accepts an instance of Authenticator and an instance of Exporter in its constructor. In my solution, all of the wiring is done using Spring.Net.  Here, I believe, is a good example of using an IOC container to support something other than testing. Although I will admit that taking this approach (i.e. Dependency Injection) does make testing much easier.

I decided to use Spring’s XML configuration. I know that there are many people that are very anti-XML for configuration but I don’t believe that the XML gets out of hand for this implementation. Here’s a portion of the XML that is used to wire all of the components together:

<object id="Backup-Agent" type="BloggerBackup.Agent.Agent, BloggerBackup.Agent">
  <constructor-arg name="authenticator" ref="authenticator" />
  <constructor-arg name="exporter" ref="exporter" />
</object>

<object id="authenticator" type="BloggerBackup.Agent.Authenticator, BloggerBackup.Agent">
  <constructor-arg name="email" value="${email}" />
  <constructor-arg name="password" value="${password}" />
</object>

<object id="exporter" type="BloggerBackup.Agent.Exporter, BloggerBackup.Agent">
  <constructor-arg name="blogId" value="${id-of-blog-to-backup}" />
  <constructor-arg name="archiveWriter" ref="fileWriter" />
</object>

<object id="fileWriter" type="BloggerBackup.Agent.ArchiveWriter.ArchiveFileWriter, BloggerBackup.Agent">
  <constructor-arg name="archiveRoot" value="${archive-root-folder}" />
</object>

When I ask for an instance of Agent, Spring.Net is nice enough to fill in the dependencies. Of course, that’s what IOC containers do.

When Configuration is not Configuration

I’ve always felt that there are two types of configuration metadata. The first is the type of metadata that represents a portion of the application. The Spring.Net config file is an example of this. I consider this file to be source code. It is describing the way that my application should be put together. In this case, Spring.Net is a factory that is creating objects for me.

The second type of configuration is “operational information.” For example, the timeout, email, and password values. I expect a support engineer to adjust these values as necessary but I don’t expect him or her to change Spring’s config file.

Why mention this? Well, if you look back at the Spring.Net config file, I’m injecting operational values into the objects.  A good example is the instance of ArchiveFileWriter that is given the name of the folder to store the archive files:

<object id="fileWriter" type="BloggerBackup.Agent.ArchiveWriter.ArchiveFileWriter, BloggerBackup.Agent">
  <constructor-arg name="archiveRoot" value="${archive-root-folder}" />
</object>

The funny syntax (${archive-root-folder}) is a placeholder that allows me to put the actual value in the app.config file. I’m using Spring.Net’s PropertyPlaceholderConfigurer to retrieve operational values from the <appSettings> section of the app.config file. This way, the support engineer can change those values without having to edit the Spring.Net config file.
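For completeness, the configurer itself is registered in the Spring.Net config like any other object. A minimal sketch, assuming the values live in the standard appSettings section:

```xml
<object type="Spring.Objects.Factory.Config.PropertyPlaceholderConfigurer, Spring.Core">
  <!-- resolve placeholders from the appSettings section of app.config -->
  <property name="ConfigSections" value="appSettings" />
</object>
```

With this in place, each appSettings key (archive-root-folder, email, password, and so on) is substituted wherever the matching placeholder appears.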

Pretty cool eh?

Making things automatic

There’s only one more piece to describe. I’m using an embedded scheduler called Quartz.Net. Now, I’ll be honest. I found using the native Quartz.Net API to be really frustrating. Luckily, Spring.Net has a really nice wrapper around Quartz.Net. In the Spring.Net config file, I define an instance of the SchedulerFactoryObject:

<object id="scheduler-factory" type="Spring.Scheduling.Quartz.SchedulerFactoryObject, Spring.Scheduling.Quartz">
  <property name="triggers">
    <ref object="Backup-Agent-Schedule"/>
  </property>
</object>

To it, I provide the schedule on which I want my backup to occur. I’m using an instance of CronTriggerObject:

<object id="Backup-Agent-Schedule" type="Spring.Scheduling.Quartz.CronTriggerObject, Spring.Scheduling.Quartz">
  <property name="JobDetail" ref="backupRunner" />
  <property name="CronExpressionString" value="${backup-agent-schedule-asCronExpression}" />
</object>
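
The ${backup-agent-schedule-asCronExpression} placeholder resolves from app.config just like the others. A hypothetical entry that runs the backup every day at 3am might look like this:

```xml
<appSettings>
  <!-- Quartz cron fields: seconds minutes hours day-of-month month day-of-week -->
  <add key="backup-agent-schedule-asCronExpression" value="0 0 3 * * ?" />
</appSettings>
```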

The CronTriggerObject is supplied with the Cron expression, which is being injected from the app.config file. It’s also given a reference to the JobDetail object. Now, if you look at the same Quartz.Net examples that I saw, you’re shown jobs that implement Quartz.Net’s IJob interface. I’m not crazy about this approach because it lets Quartz.Net leak into portions of the application where it doesn’t really need to be. The Spring.Net wrappers have a good solution to this with their MethodInvokingJobDetailFactoryObject. You configure it with a target object and the name of a method on that object, and the method will be called at the appropriate time:

<object id="backupRunner" type="Spring.Scheduling.Quartz.MethodInvokingJobDetailFactoryObject, Spring.Scheduling.Quartz">
  <property name="TargetObject" ref="Backup-Agent" />
  <property name="TargetMethod" value="Execute" />
</object>

The Host

The only thing left is to create the Spring.Net Application Context. When the context is created, all the objects will be instantiated, the schedule will start and everything will start working together. Just like a practiced orchestra. For this, I have a class called ContextHost:

public class ContextHost : IDisposable
{
    private static ILog Log = LogManager.GetLogger(typeof(ContextHost));

    private IApplicationContext _springContext = null;

    public void Create()
    {
        if (Log.IsDebugEnabled) Log.Debug("Starting scheduler");
        this._springContext = new XmlApplicationContext(false, "file://~/spring-config.xml");
        if (Log.IsInfoEnabled) Log.Info("Scheduler started");
    }

    public void Close()
    {
        if (Log.IsDebugEnabled) Log.Debug("Stopping scheduler");

        if (null != this._springContext)
        {
            this._springContext.Dispose();
            this._springContext = null;
        }

        if (Log.IsInfoEnabled) Log.Info("Scheduler stopped");
    }

    public void Dispose()
    {
        if (null != this._springContext) this.Close();
        GC.SuppressFinalize(this);
    }
}

ContextHost is strictly responsible for creating and destroying the ApplicationContext.

OK. I lied. Since I have this solution residing in a Windows Service, I need my service class. It looks like this:

public partial class Host : ServiceBase
{
    private static ILog Log = LogManager.GetLogger(typeof(Host));

    private ContextHost _contextHost = null;

    public Host()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        try
        {
            _contextHost = new ContextHost();
            _contextHost.Create();
        }
        catch (Exception e)
        {
            _contextHost.Close();
            Log.Error("Caught unhandled exception", e);
        }
    }

    protected override void OnStop()
    {
        if (null != _contextHost) _contextHost.Close();
    }
}

When my service starts, ContextHost is used to create the context. When my service stops, the application context is destroyed. Nothing could be simpler.

Friday, January 1, 2010

Automatically Backing up a Blogger.com Blog

I’ve been in the computer software industry for a very long time. Long enough to know that the unexpected does happen and sometimes it causes data loss. I can humbly say that there were times when work I had done was the cause of one or two “incidents”. With that said, I understand the importance of redundancy in some form.

Recently I stood up a Windows Home Server for the main purpose of automatically backing up the four computers that I have around the house. It does an excellent job. With that done, my next goal was to use it to back up my blog. I don’t have a lot of posts, but it’s still an investment that I don’t want to risk losing.

The manual way

I started the way that most computer people start – with a search engine. Unfortunately, I didn’t find anything that I liked. The most obvious answer was to use the Export feature that Blogger provides. On the Blog Settings page of the Blogger.com Dashboard is a set of tools for importing and exporting a blog.

The Export functionality does exactly what I want with one exception: it’s not automatic. There’s also nothing within the Dashboard that allows me to schedule an export. Almost there…

Blogger API

It turns out that Blogger.com has an API that can be used to access and manipulate a blog. They’ve even gone as far as offering clients in a number of different languages: .NET, Java, JavaScript, PHP, and Python. So I took this as an opportunity to start another side project. I decided to create a service that would use the Blogger API and export my blog on a regular basis. And I’ll use my Windows Home Server to host the service.

I decided to not use the .NET client that was available and instead go with the native HTTP protocol that was provided. Why? Just to be different…

The export request is straightforward to create but requires an authorization header. So I started with that.

Authentication

The Blogger API supports three authentication methods: AuthSub, ClientLogin and OAuth.

Both AuthSub and OAuth  were designed to allow web-based applications to authenticate on behalf of a user. Since I’m not writing a web app, they weren’t applicable. 

ClientLogin was designed to allow a user to authenticate from a desktop application. I’m not writing a desktop application but the idea holds. Basically, you pass the email and password to Google’s authentication service and Google returns an authorization token. You then pass the token back with any other request that is made on behalf of the user.

To make the Authentication request, you do a POST request to:

https://www.google.com/accounts/ClientLogin

I create the request like this:

private HttpWebRequest CreateAuthenticationRequest()
{
    var uri = new Uri("https://www.google.com/accounts/ClientLogin");
    HttpWebRequest authenticationRequest = (HttpWebRequest)WebRequest.Create(uri);
    authenticationRequest.AllowAutoRedirect = false;
    authenticationRequest.Method = "POST";
    authenticationRequest.ContentType = "application/x-www-form-urlencoded";
    authenticationRequest.KeepAlive = false;
    authenticationRequest.Expect = string.Empty;
    authenticationRequest.Headers.Add("GData-Version", "2");
    authenticationRequest.Timeout = this.TimeoutInMs;

    var postBody = new StringBuilder();
    postBody.Append("accountType=GOOGLE&");
    postBody.AppendFormat("Email={0}&", this._email.ToUrlEncoded());
    postBody.AppendFormat("Passwd={0}&", this._password.ToUrlEncoded());
    postBody.Append("service=blogger&");
    postBody.AppendFormat("source={0}", "malevy.net-Blogger.Backup-1".ToUrlEncoded());

    if (logger.IsTraceEnabled) logger.Trace("body of post: " + postBody.ToString());

    byte[] encodedData = (new ASCIIEncoding()).GetBytes(postBody.ToString());

    authenticationRequest.ContentLength = encodedData.Length;
    var stream = authenticationRequest.GetRequestStream();
    stream.Write(encodedData, 0, encodedData.Length);
    stream.Close();

    return authenticationRequest;
}
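
On the wire, the resulting request looks roughly like this (the email and password values here are fabricated):

```text
POST /accounts/ClientLogin HTTP/1.1
Host: www.google.com
Content-Type: application/x-www-form-urlencoded
GData-Version: 2

accountType=GOOGLE&Email=someone%40example.com&Passwd=secret&service=blogger&source=malevy.net-Blogger.Backup-1
```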

The ToUrlEncoded() is an extension method that Url encodes a string. I based it on a post from Rick Strahl.

(Just a word of caution, the service parameter is case-sensitive. I wasted *way* too much time on that little typo)

If the request succeeds, you’ll get back a 200 response with a body containing SID, LSID, and Auth values. The documentation says that we can ignore the SID and LSID values. Only the Auth value is required.
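
The body is plain name=value pairs, one per line, something like this (the token values here are made up, and the real ones are much longer):

```text
SID=DQAAAGgABCdefGh
LSID=DQAAAGsAIJklmNo
Auth=DQAAAGgAPQrstUv
```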

To use the Auth token, you attach it as a header to a request using the following format:

Authorization: GoogleLogin auth={the-returned-auth-value}

Archive/Export

This is where things broke down for me. I’ll save you the reading time with a quick summary. I couldn’t get the Archive function to work. The request is really simple:

GET http://www.blogger.com/feeds/{blog-id}/archive

I created the request and attached the authorization token as laid out in the documentation. I was rewarded with a response saying that the authorization header was not recognized. I then spent a lot of time searching the Internet and came up with nothing. I even tried formatting the request and authorization header differently and I still wasn’t able to get it to work. As a last resort, I posted a message to the Blogger Developer Group. Posts from new members are moderated so I’ve not even seen my question appear in the list yet.

Many of the Blogger APIs require that you supply the ID of your blog. If you’re like me and don’t know the value, this Blogger Help article will help.

The Workaround

In the meantime, I’ve decided to use the querying capabilities of the API to pull down all the posts. The request looks like:

GET http://www.blogger.com/feeds/{blog-id}/posts/full?updated=1990-01-01T00:00:00&orderby=updated

This retrieves all the posts that have been updated since midnight on 1 Jan 1990. I use the following code snippet to create the request:

private HttpWebRequest CreateExportRequest(AuthorizationToken token)
{
    const string archiveUriFormat = "http://www.blogger.com/feeds/{0}/posts/full?updated=1990-01-01T00:00:00&orderby=updated";
    var uri = new Uri(string.Format(archiveUriFormat, this._blogId));
    HttpWebRequest exportRequest = (HttpWebRequest)WebRequest.Create(uri);

    exportRequest.AllowAutoRedirect = false;
    exportRequest.Method = "GET";
    exportRequest.KeepAlive = false;
    exportRequest.Expect = string.Empty;
    exportRequest.Headers.Add("GData-Version", "2");
    exportRequest.Timeout = this.TimeoutInMs;

    exportRequest.Headers.Add(token.ToAuthorizationHeader());

    return exportRequest;
}

I’m going to stick with this approach until I can get some help with the Archive/Export functionality.
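
For reference, what comes back from that query is a standard Atom feed with one entry per post, roughly this shape (the IDs, dates, and titles here are illustrative):

```xml
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>tag:blogger.com,1999:blog-1234567890</id>
  <title type="text">My Blog</title>
  <entry>
    <id>tag:blogger.com,1999:blog-1234567890.post-111</id>
    <published>2010-01-01T09:00:00.000-05:00</published>
    <title type="text">A Post Title</title>
    <content type="html">...</content>
  </entry>
</feed>
```

Since it’s plain Atom, the saved files can be parsed later with any feed tooling.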

In the next post, I’ll discuss the form and architecture of the application. In particular I’ll discuss using Quartz.Net to kick off the archive functionality and the use of Spring.Net to pull everything together.