Programming Windows Azure : Building a Storage Client

10/10/2010 9:52:08 AM

You’ll never have to build a storage client. You’re almost always better off using one of the ones already available, whether the official one from Microsoft that ships with the Windows Azure SDK or one of the community-supported ones for various languages.

However, there is no better way to learn the innards of how storage access works than to build a storage client. Knowing how the underlying protocols work is crucial when debugging weird errors and bugs while interacting with the storage services. You may be wondering why you are building a library when Microsoft provides a reference implementation. It is important to remember that the actual API is the REST interface, and the wrapper is just that: a “wrapper.” The Microsoft implementation is just one wrapper around the storage services. You can find several other commonly used, community-written implementations, each optimized for different purposes and offering different features.

Even with the official Microsoft implementation, you’ll often find yourself looking at the low-level REST interface to diagnose/debug issues. For all these reasons, knowing how the REST interface works is critical to being able to work effectively with the Windows Azure storage services.

During this examination, you will build a trivial storage client, meant primarily to educate. It supports only the bare minimum: it doesn’t handle block lists, setting metadata on containers, or anything other than creating containers and blobs, and it lacks the error checking and performance optimizations that have gone into the official library. It is by no means robust.

Here, you will build this storage client piece by piece. You’ll start by creating some boilerplate data structure classes. You’ll add some code to construct the correct request path and headers. And finally, you’ll add the authentication and signing code. You can follow along with the examples, but you won’t be able to run it until the end, when all the pieces come together.

Let’s start with some boilerplate code. Example 1 shows a skeleton C# class that brings in the right namespaces, sets up some variables and constants you’ll be using, and does nothing else. You’ll be using these namespaces later to perform HTTP requests and to do some cryptographic work to sign your requests. This is just skeletal infrastructure code; you can’t run it at this point. You’ll be using this infrastructure code and building on top of it throughout the remainder of this article.

Example 1. Skeleton code
using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Text;
using System.Net;
using System.Security.Cryptography;
using System.Globalization;



namespace SimpleStorageClient
{
    public class BlobStorage
    {
        private const string CloudBlobHost = "blob.core.windows.net";

        private const string HeaderPrefixMS = "x-ms-";
        private const string HeaderPrefixProperties = "x-ms-prop-";
        private const string HeaderPrefixMetadata = "x-ms-meta-";
        private const string HeaderDate = HeaderPrefixMS + "date";

        private const string CarriageReturnLinefeed = "\r\n";

        private string AccountName { get; set; }
        private byte[] SecretKey { get; set; }
        private string Host { get; set; }

        public BlobStorage(string accountName, string base64SecretKey)
        {
            this.AccountName = accountName;
            this.SecretKey = Convert.FromBase64String(base64SecretKey);
            this.Host = CloudBlobHost; // Pick default blob storage URL
        }

        public BlobStorage(string host, string accountName, string base64SecretKey)
        {
            this.AccountName = accountName;
            this.SecretKey = Convert.FromBase64String(base64SecretKey);
            this.Host = host;
        }
    }
}


This code first sets up some constants specific to the Windows Azure storage service (the URL it resides at, the prefix it uses for its headers), and then sets up a simple wrapper class (BlobStorage) to store the storage credentials.
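If you’re following along in another language, the same setup is a few lines anywhere. Here is a minimal Python sketch (the account name, key, and container below are invented example values); the one easy-to-miss detail is that the key must be base64-decoded into raw bytes before it can be used for signing later.

```python
import base64

# Hypothetical example credentials -- not a real account or key
account_name = "myaccount"
base64_secret_key = base64.b64encode(b"example-key-bytes-0123456789abcd").decode()

# Like Convert.FromBase64String in the C# constructor: decode the
# base64 key into the raw bytes the HMAC signing step needs later.
secret_key = base64.b64decode(base64_secret_key)

# Default blob endpoint; requests go to <account>.<host>/<resource>
host = "blob.core.windows.net"
url = "http://" + account_name + "." + host + "/mycontainer"
```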

Containers are loosely analogous to directories in a filesystem, but that analogy goes only so far, because there are several differences between the two. For example, you cannot nest one container within another. Typically, containers are used to set permissions: you can specify whether the contents of a container are public or private. Don’t worry if this sounds fuzzy at this point.

What comes next in Example 2 is a little CreateContainer method that takes a container name and a Boolean argument specifying whether the container should be public (readable by everyone) or private (readable only by someone with access to your keys). Several storage operations work by setting custom headers, so these small methods pass in their own headers; in this case, you set x-ms-prop-publicaccess to true. Insert the CreateContainer method into the BlobStorage class.

Example 2. The CreateContainer method
public bool CreateContainer(string containerName, bool publicAccess)
{
    Dictionary<string, string> headers = new Dictionary<string, string>();

    if (publicAccess)
    {
        // Public access for container. Set x-ms-prop-publicaccess to true
        headers[HeaderPrefixProperties + "publicaccess"] = "true";
    }

    // To create a container, make a PUT request to
    // http://<account>.blob.core.windows.net/mycontainer
    HttpWebResponse response = DoStorageRequest(
        containerName,
        "PUT",
        headers,
        null /* No data */,
        null /* No content type */);

    bool ret = false;

    switch (response.StatusCode)
    {
        case HttpStatusCode.Created:
            // Returned HTTP 201. Container created as expected
            ret = true;
            break;

        default:
            // Unexpected status code.
            // Throw exception with description from HTTP.
            // Note that a 409 Conflict WebException means that
            // the container already exists.
            throw new Exception(response.StatusDescription);
    }

    return ret;
}


That wasn’t so difficult, was it? Note the line where the code calls DoStorageRequest and passes PUT. This method, DoStorageRequest, is where the meat of the action is (but you haven’t written it yet). It takes as parameters the path you want to make the request to and the HTTP method to use; custom headers are passed in as a Dictionary. It then constructs the right URL using some plain old string concatenation, and makes an HTTP request. In this case, since you are creating a container, you pass PUT to specify that you want to create one. Example 3 shows a first cut at this method. Insert it inside the BlobStorage class as well.

Example 3. Constructing requests to storage
private HttpWebResponse DoStorageRequest(string resourcePath,
    string httpMethod, Dictionary<string, string> metadataHeaders,
    byte[] data, string contentType)
{
    // Create request object for
    // http://<accountname>.blob.core.windows.net/<resourcepath>
    string url = "http://" + this.AccountName + "." +
        this.Host + "/" + resourcePath;

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = httpMethod;

    request.ContentLength = (data == null ? 0 : data.Length);
    request.ContentType = contentType;

    // Add x-ms-date header. This should be in RFC 1123 format, i.e.,
    // of the form Sun, 28 Jan 2008 12:11:37 GMT.
    // This is done by calling DateTime.ToString("R")
    request.Headers.Add(HeaderDate,
        DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture));

    // Add custom headers to request's headers
    if (metadataHeaders != null)
    {
        foreach (string key in metadataHeaders.Keys)
        {
            request.Headers.Add(key, metadataHeaders[key]);
        }
    }

    // Get authorization header value by signing request
    string authHeader = SignRequest(request);

    // Add authorization header. This is of the form
    // Authorization: SharedKey <accountname>:<authHeader>
    request.Headers.Add("Authorization",
        "SharedKey " + this.AccountName + ":" + authHeader);

    // Write data if any. Data is only present when uploading blobs,
    // table entities, or queue messages.
    if (data != null)
    {
        request.GetRequestStream().Write(data, 0, data.Length);
    }

    // Make HTTP request and return response object
    return (HttpWebResponse)request.GetResponse();
}


That was a bit more code than you’ve seen so far, so let’s walk through this in sequence. First, you declare a method that accepts several arguments: a resource path, the HTTP method to use, and any custom headers and data to pass in. You must then create the right path for the resource. Thankfully, that’s quite simple for containers. Just take the root account path (accountname.blob.core.windows.net) and append the container name/resource path after it.

Let’s skip over the lines that are related to authentication for now. (You’ll learn more about this in the next section.)

The last few lines make the actual request to the server. This returns a response object that contains the results of the operation. If the operation succeeds, this will contain an HTTP 2xx code. If it fails, it will contain an error code with an XML blob containing information on why it failed.
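For illustration, a failed request’s XML body looks roughly like the following. The error code shown here is the one returned when a container already exists; the exact message text varies, and real responses include additional diagnostic detail.

```xml
<?xml version="1.0" encoding="utf-8"?>
<Error>
  <Code>ContainerAlreadyExists</Code>
  <Message>The specified container already exists.</Message>
</Error>
```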

So, what is that mysterious authentication header in the middle? If you removed those lines, you would have the right URL and all the other headers correct, but your request would fail. Let’s explore that in a bit more detail.

1. Understanding Authentication and Request Signing

Signing the headers is arguably the most important part of writing a storage client library. Armed with just the URL descriptions above, you could make requests to your storage account. However, those requests won’t succeed, because they haven’t been signed with either your primary or secondary key.

You might be wondering at this point why you have all these keys to manage. What’s wrong with a good old username and password, and using HTTP Basic/Digest authentication like everyone else?

The answer lies in future flexibility. You may not always want to hand everyone your password. For example, you may want others to access your account with your keys, but limit them to a subset of all possible operations. Or you may want to give them access to your data, but charge them for each access. Another common scenario is having Microsoft charge your customers directly for all the data bandwidth and storage that you handle for them. None of these features exists today, but they become possible only with the flexibility of this scheme. Another important reason is that these keys are easier to regenerate than passwords that tie into your identity on the site. So, it’s somewhat safer to embed these keys in tools, scripts, and sites than to fork over your password.


Note: If you’re familiar with Amazon’s authentication mechanisms for Amazon Web Services (AWS), a lot of this will look similar to you. There are some major differences (such as the actual algorithm used), so even if you’ve written code against Amazon, you should still read through this discussion.

So, how do you use these keys? Every request you make to the storage service acts against a URL and carries a set of headers. You use the key to sign the request, and embed the signature in the request as a header. Even if the actual request were intercepted, an attacker cannot recover your private key from it. And since the date and time are part of what is signed, an attacker cannot replay the request later to make further requests as you. Of course, all this security holds only if your private keys stay private. If someone else gains access to your keys, the proverbial jig is up: she can make requests as you, which will be charged to your credit card or whatever method of payment you use for Azure services.
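To make that tamper-resistance concrete, here is a small Python illustration with an invented key and a simplified string-to-sign. Changing even one second of the signed timestamp produces an entirely different signature, so a captured request can’t be re-signed for a different time without the key.

```python
import base64
import hashlib
import hmac

def signature(key, string_to_sign):
    # HMAC-SHA256 over the request's string-to-sign, base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

key = b"hypothetical-secret-key"
sig1 = signature(key, "PUT\n\n\n\n"
                      "x-ms-date:Sun, 28 Jan 2008 12:11:37 GMT\n"
                      "/myaccount/mycontainer")
sig2 = signature(key, "PUT\n\n\n\n"
                      "x-ms-date:Sun, 28 Jan 2008 12:11:38 GMT\n"
                      "/myaccount/mycontainer")
# sig1 and sig2 differ, even though only one character of input changed
```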


Note: Microsoft has access to your keys as well. This may scare you a bit—why does Microsoft have access to the keys to my data? This is standard practice in these cloud storage systems. As you have learned, cloud computing involves a trust relationship with the vendor. It is up to you to decide whether you’re OK with this, depending on how sensitive your data is.

2. Using the Signing Algorithm

Let’s walk through the process for creating a signature, and then examine some code that does the same thing:

  1. You must construct an HTTP header of the form Authorization="SharedKey {Account_Name}:{Signature}" and embed the header in the final request that goes over the wire to the storage service.

  2. You start with a few important components of the request:

    1. The HTTP method name (GET/PUT/HEAD/POST).

    2. The MD5 hash of the data being uploaded. (This ensures that the signature doesn’t match if the data is corrupted along the way. This is an optional header; if you don’t want to specify this, just insert an empty line.)

    3. The MIME type of the data, and the date. (Again, these can be empty. When you create a container, no data is being uploaded, so there is no MIME type; and because the timestamp always travels in the x-ms-date header, the date line can be left empty as well.)

  3. You then must construct a canonicalized form of the custom headers. This is a fancy way of saying that you must consolidate all the headers that start with x-ms-. You do that by sorting all the custom headers lexicographically, and then consolidating repeating headers into one.

  4. You finally construct something called the canonicalized resource, which is nothing but the account name concatenated with the path of the resource you want to access.

The algorithm just described can be condensed to the following:

StringToSign = VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedHeaders +
CanonicalizedResource;
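This construction is language-agnostic. As an illustrative sketch (not the official library), here is the string-to-sign assembled in Python for a hypothetical create-container request; the account name and header values are invented examples.

```python
def string_to_sign(method, content_md5, content_type, date,
                   ms_headers, account, resource_path):
    # Canonicalized headers: lowercase x-ms-* names, sorted, one per line
    canonical_headers = "".join(
        "%s:%s\n" % (name.lower(), value)
        for name, value in sorted(ms_headers.items(),
                                  key=lambda kv: kv[0].lower()))
    # Canonicalized resource: '/' + account name + resource path
    canonical_resource = "/" + account + resource_path
    return (method + "\n" + content_md5 + "\n" + content_type + "\n" +
            date + "\n" + canonical_headers + canonical_resource)

# Hypothetical PUT-container request: the MD5, content type, and date
# lines are empty because x-ms-date carries the timestamp instead.
s = string_to_sign("PUT", "", "", "",
                   {"x-ms-date": "Sun, 28 Jan 2008 12:11:37 GMT",
                    "x-ms-prop-publicaccess": "true"},
                   "myaccount", "/mycontainer")
```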

Once you have the string you need to sign, the actual process of signing is quite trivial: you compute an HMAC-SHA256 hash of the string using your secret key, and base64-encode the result. That is quite a bit of work altogether. Example 4 shows the code required to do all of this. Insert this method into the BlobStorage class as well.

Example 4. Generating authentication header
private string SignRequest(HttpWebRequest request)
{
    StringBuilder stringToSign = new StringBuilder();

    // First element is the HTTP method - GET/PUT/POST/etc.
    stringToSign.Append(request.Method + "\n");

    // The second element is the MD5 hash of the data. This
    // is optional, so we can insert an empty line here.
    stringToSign.Append("\n");

    // Append content type of the request
    stringToSign.Append(request.ContentType + "\n");

    // Append date. Since we always add the
    // x-ms-date header, this can be an empty string.
    stringToSign.Append("\n");

    // Construct canonicalized headers.
    // Note that this doesn't implement parts of the
    // specification, like combining header fields with the
    // same name, unfolding long lines, or trimming whitespace.

    // Look for header names that start with x-ms-,
    // then sort them in a case-insensitive manner.
    List<string> httpStorageHeaderNameArray = new List<string>();
    foreach (string key in request.Headers.Keys)
    {
        if (key.ToLowerInvariant().StartsWith(HeaderPrefixMS,
            StringComparison.Ordinal))
        {
            httpStorageHeaderNameArray.Add(key.ToLowerInvariant());
        }
    }

    httpStorageHeaderNameArray.Sort();

    // Now go through each header's values in sorted order
    // and append them to the canonicalized string.
    // At the end of this, you should have a bunch of headers of the form
    // x-ms-somekey:value
    // x-ms-someotherkey:othervalue
    foreach (string key in httpStorageHeaderNameArray)
    {
        stringToSign.Append(key + ":" + request.Headers[key] + "\n");
    }

    // Finally, add the canonicalized resource.
    // This is done by prepending a '/' to the
    // account name and resource path.
    stringToSign.Append("/" + this.AccountName +
        request.RequestUri.AbsolutePath);

    // We now have a constructed string to sign. We now need to
    // generate an HMAC-SHA256 hash using our secret key,
    // and base64-encode it.
    byte[] dataToMAC =
        System.Text.Encoding.UTF8.GetBytes(stringToSign.ToString());

    using (HMACSHA256 hmacsha256 = new HMACSHA256(this.SecretKey))
    {
        return Convert.ToBase64String(hmacsha256.ComputeHash(dataToMAC));
    }
}


This code essentially follows the algorithm outlined previously. It starts by appending the HTTP method, an empty line for the MD5 hash, the ContentType, and an empty line for the date, each terminated by a newline.

The code then loops through all the headers, looking for header keys that start with x-ms-. When it finds one, it pulls it into a list, so that at the end you have all the custom headers you added. The code then sorts the list lexicographically using the built-in list sort function.

You then construct a long string of concatenated custom headers and their values. It is important that this be in the exact format the server expects; this is an easy place to go wrong.

If you read the algorithm’s specification, you’ll find that a few shortcuts have been taken here. In real-world usage, you’ll need to check for duplicate headers and for whitespace between values, and deal with them accordingly. After this, constructing the canonical resource path is quite simple. You take the account name and concatenate with the path to the resource.
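As a sketch of what fuller canonicalization involves, here is a Python version that also merges repeated headers into one comma-separated value and trims surrounding whitespace (the header names and values below are invented; consult the protocol specification for the authoritative rules):

```python
def canonicalize_ms_headers(raw_headers):
    """raw_headers: list of (name, value) pairs, possibly with repeats.

    Returns the canonicalized x-ms-* header block: names lowercased,
    repeated headers merged into one comma-separated value, values
    stripped of surrounding whitespace, lines sorted by header name.
    """
    merged = {}
    for name, value in raw_headers:
        lower = name.lower()
        if lower.startswith("x-ms-"):
            merged.setdefault(lower, []).append(value.strip())
    return "".join("%s:%s\n" % (name, ",".join(values))
                   for name, values in sorted(merged.items()))

block = canonicalize_ms_headers([
    ("X-MS-Meta-Tag", " a "),   # same header twice, different case
    ("x-ms-meta-tag", "b"),
    ("x-ms-date", "Sun, 28 Jan 2008 12:11:37 GMT"),
    ("Content-Type", "text/plain"),  # not x-ms-*, so excluded
])
```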

Whew! You now have a string that you can sign. The code first converts the string to UTF-8 bytes, since the HMAC algorithm operates on bytes rather than text. Signing then involves taking the secret key and using the HMAC-SHA256 algorithm to generate a digest. This is a binary digest, so you convert it into an HTTP-friendly form using base64 encoding.


Note: If you’re following along with implementing the storage client library for some new language/platform, you might get tripped up at this point. Some mainstream languages don’t have support for SHA-256 (the HMAC part is trivial to implement). For example, you would have to figure out an SHA-256 implementation as well as an HMAC implementation to get an Erlang version of this library (available at http://github.com/sriramk/winazureerl/). Obviously, spending long nights reading and debugging through cryptographic algorithms is no fun. On the other hand, it makes your requests safer.

You now have a signed header that is good to go. On the server side, the Windows Azure storage service goes through the exact same process and generates its own signature. It checks whether the signature you sent matches the one it computed. If it doesn’t, the service returns an HTTP 403 error.


Note: Debugging these errors when writing your own storage clients can be quite painful, because several things could go wrong. When debugging the storage clients, you should step through a storage client implementation that you know is reliable, and compare intermediate values at every step of the process.

3. Creating and Uploading Stuff

You now have a container for uploading stuff, so let’s do just that: upload stuff. After all the infrastructure you’ve built, the act of uploading a blob is a bit of an anticlimax in its simplicity. In short, you do an HTTP PUT to the right URL, and send the data you want to upload as the body of the HTTP request.

Example 5 shows a small helper method to do that. It’s interesting how small this is. The reason each additional bit of functionality is so small is that most of the heavy lifting is done in the core signing and path-construction code. Once you have that finished, the remaining features are quite simple to write.

Example 5. PUT blob
public bool CreateBlob(string containerName, string blobName,
    byte[] data, string contentType)
{
    HttpWebResponse response = DoStorageRequest(
        containerName + "/" + blobName,
        "PUT",
        null, /* No extra headers */
        data,
        contentType);

    bool ret = false;

    switch (response.StatusCode)
    {
        case HttpStatusCode.Created:
            // Returned HTTP 201. Blob created as expected
            ret = true;
            break;

        default:
            // Unexpected status code.
            // Throw exception with description from HTTP
            throw new Exception(response.StatusDescription);
    }

    return ret;
}


You can now make requests to create a container and upload a sample blob: a small text file containing the string "Hello world!". Since it is a text file, you set the ContentType to "text/plain". Add the following Program class, replacing the string USERNAME with your storage account name and the string KEY with your storage key:

class Program
{
    static void Main(string[] args)
    {
        BlobStorage storage = new BlobStorage("USERNAME", "KEY");
        storage.CreateContainer("testnetclient", true);
        storage.CreateBlob("testnetclient", "helloworld.txt",
            System.Text.Encoding.UTF8.GetBytes("Hello world!"),
            "text/plain");
    }
}


Note: This code tries to create the container without checking whether it already exists. Windows Azure storage returns a 409 error code and a message if the container already exists. More robust code would check the list of containers to ensure that it isn’t attempting to re-create an existing one. Windows Azure will always honor a request to create a blob, though, regardless of whether a blob with that name already exists; the existing blob is simply overwritten.

You can now access this file over normal HTTP. Open a web browser and go to http://<accountname>.blob.core.windows.net/testnetclient/helloworld.txt. You should see the text file you just uploaded to blob storage. Congratulations! You’ve uploaded your first bits into the Microsoft cloud.
