AWS PowerShell: “A parameter cannot be found that matches parameter name ‘Credentials’.”

I spent ages debugging this one a few months ago, and just hit it again, so I thought I’d share to save others some time.

If you have an older AWS PowerShell script, you may hit this error when running AWS PowerShell cmdlets, particularly if you’re using a cross-account role – e.g.:

$aws_role = Use-STSRole -RoleArn $arn -ExternalId $externalid -Region $region
$aws_creds = $aws_role.Credentials

Get-S3Bucket -Credentials $aws_creds -BucketName $bucket -Region $region
# will throw "A parameter cannot be found that matches parameter name 'Credentials'."

The problem is that at some point, the AWS PowerShell cmdlets renamed the ‘Credentials’ parameter to ‘Credential’ (no trailing s). Running an older script after upgrading AWS PowerShell triggers the error. To compound the problem, Get-Help doesn’t actually show the Credential parameter at all, presumably because of the way the shared parameters are implemented. I trawled through the release notes and was unable to find the version at which the parameter changed, or any sign that a warning was given.
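
If you want to check which parameter names your installed version actually supports, Get-Command will list them even though Get-Help won’t (a quick sanity check, assuming the AWSPowerShell module is loaded):

(Get-Command Get-S3Bucket).Parameters.Keys | Where-Object { $_ -like 'Credential*' }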

The fix is obviously to rename your -Credentials parameters to -Credential, and then shake your fist in the general direction of Amazon.
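
After the rename, the snippet from earlier becomes:

$aws_role = Use-STSRole -RoleArn $arn -ExternalId $externalid -Region $region
$aws_creds = $aws_role.Credentials

# 'Credential' (no trailing s) is the current parameter name
Get-S3Bucket -Credential $aws_creds -BucketName $bucket -Region $region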

Watch App Development Blog – Week 2

In Week 1, I got a (very) basic Haskell REST web service running that scraped the Transperth site for live train times. Now we’re up to:

Step 2: Build & Deployment

Like most developers working on side-projects, I don’t want to pay a bundle for hosting a service during development when it really doesn’t need many resources. However, when the product goes live and inevitably becomes a raging success, I need to be able to scale capacity quickly & easily. In the past I’ve used freemium PaaS providers like Heroku and AppHarbor, which are designed for exactly this scenario.

I started down the Heroku path using Joe Nelson’s buildpack, but I immediately hit Heroku’s 15-minute build timeout. There are a variety of ways around this, but it got me thinking (as I’ve pondered in the past about AppHarbor): why do I need to build on my hosting provider at all? Heroku was originally designed for deploying apps written in Ruby that didn’t need compilation; pushing source & compiling on the server seems like a hack to me.

Docker is the new hotness in packaging and application deployment, and is better suited to building a compiled web application locally and deploying to a cloud host. I thought I’d give this a go.

Docker Development on OS X

The Docker host relies on specific features of the Linux kernel, which means that working with containers locally on OS X or Windows requires running them inside a Docker host in a Linux VM. This starts to get a bit onioney. My initial inclination was to do Docker development using Vagrant – the same method I use for working on other web systems targeting a Linux host. After spending considerable time trying out different ways of running Docker through Vagrant, I came to the conclusion that it wasn’t worth the hassle for a simple deployment like this one. Instead, my model would be:

  1. While I’m developing locally, just run the service directly on OS X without using Docker.
  2. When I’m ready to deploy, spin up boot2docker and build the container.
  3. Commit & push the image to a remote Docker repo.
  4. Deploy the image to the cloud host from the repo.

I strongly recommend getting started with Docker using Chris Jones’ “Missing Guide”. I installed using the downloadable installer rather than Homebrew, but the only real config change I needed to make was to give the boot2docker VM more RAM – GHC struggles a bit unless it has plenty. Run boot2docker config > ~/.boot2docker/profile, then edit ~/.boot2docker/profile and change the ‘Memory’ setting (I gave it 4096). I didn’t configure any port-forwarding, as I’m only using Docker to build the image.
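
For reference, the steps look something like this (the value is in MB; the profile key shown is what boot2docker config emitted for me, so treat it as indicative):

boot2docker config > ~/.boot2docker/profile
# then edit ~/.boot2docker/profile and change:
Memory = 4096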

Building a Haskell Docker image

Docker Hub has an official Haskell image, which is a good starting point for development. I implemented a Dockerfile starting from the example at the end of the README. I needed to add an extra step to cater for my gps-1.2 requirement, which is still not available on Hackage at the time of writing.

FROM haskell:7.8

RUN cabal update

# Add .cabal file
ADD ./perthtransport.cabal /opt/app/perthtransport.cabal

# Install gps-1.2 from source
ADD gps /opt/app/gps
RUN cd /opt/app/gps && cabal install

# Docker will cache this command as a layer, freeing us up to
# modify source code without re-installing dependencies
RUN cd /opt/app && cabal install --only-dependencies -j4

# Add and Install Application Code
ADD . /opt/app
RUN cd /opt/app && cabal install

# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH

EXPOSE 3000

# Default Command for Container
WORKDIR /opt/app
CMD ["perthtransport"]

I also needed to create a .dockerignore to ensure the cabal sandbox was excluded from the build context.
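
Mine looked something like this (I’m assuming the default sandbox file names here; adjust to suit your layout):

.cabal-sandbox
cabal.sandbox.config
dist

Once this was done, my build process consisted of running: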

boot2docker up
docker build -t <repo:tag> .
docker push <repo:tag>
boot2docker down

Container Hosting in the cloud

Unfortunately the container hosting landscape seems a bit immature at present – I’d love a Heroku-like service that lets me deploy scalable containers as simply as doing a docker push. Also, while Docker is standardised at the container level, most providers (ECS, Digital Ocean, etc.) seem to be inventing their own clustering layers on top. Maybe Swarm will fix that – let’s wait and see.

I ended up going with Tutum – they have a good-looking, self-explanatory web interface, a web service API, and a CLI tool (brew install tutum). They don’t do the hosting themselves, though – you register your own cloud host account (AWS, Azure, Digital Ocean) with them and they manage the nodes for you. They do give you a private repository, plus the service is ‘free forever’ if you sign up as a developer now. I’m using an AWS t2.micro instance under the free usage tier as the only node at present.

I set up the initial service definition via the web UI; to redeploy the latest image from the repo, I just need to run tutum service redeploy <serviceid>.

Scripting the deployment

I used rake as a build scripting tool, for no other reason than that’s what I normally use for Xcode builds. The process is simple enough that you could probably just use a bash script though.

task :run do
  sh "cabal install --only-dependencies"
  sh "cabal build"
  sh "dist/build/perthtransport/perthtransport"
end

task :deploy do
  version = File.read("perthtransport.cabal").match(/^version:\s*([^\s]*)$/)[1]
  puts "Building version #{version}"
  begin
    sh "boot2docker up"
    sh "docker build -t #{DOCKER_REPO}:#{version} ."
    sh "docker push #{DOCKER_REPO}:#{version}"
    sh "tutum service redeploy #{TUTUM_SERVICE_ID}"
    sh "git tag -a #{version} -m 'Build #{version}' & git push origin tag #{version}"
  ensure
    sh "boot2docker down"
  end
end

So now I can build & run locally with rake run and deploy to an AWS node with rake deploy. Next week we’ll start on the actual watch app functionality. In the interim, the source code is available on Bitbucket.

Implementing AWS authentication for your own REST API

If you need to build an authentication mechanism for an HTTP-based REST API, a common approach is to use HTTP Basic – it’s simple, all clients have it built-in, it’s easy to test from the browser, and you can store passwords as hashes. The downside is that your credentials are transmitted in (nearly) plain text, which makes SSL (with its associated security restrictions and computational cost) a necessity.

If you’d like to implement a simple scheme for a non-sensitive API that doesn’t require SSL, things are more complicated. HTTP Digest requires the server to store passwords in plain text or as a password-equivalent hash, and requires a challenge-response conversation between server & client. Schemes like Kerberos and three-legged OAuth mean hanging your hat on a third-party authentication provider, and are awkward to implement in a client.

Luckily, in software, if you hit a problem you can usually copy somebody else’s solution. This is what the Microsoft Azure team did when implementing their API authentication – “Let’s just copy what Amazon does”. Amazon probably copied someone else. Who am I to argue with that approach?

The general concept behind these schemes is relatively simple:

  1. Come up with a way to generate API & secret keys for a client. These are usually base64-encoded cryptographically generated random byte arrays.
  2. For each request, hash a string containing the requested URL and specific headers (including the Date header) with the secret key using HMAC-SHA1.
  3. Add an Authorization header with a custom scheme name (e.g. ‘AWS’), containing the access key & base64-encoded signature separated by a colon.
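
For illustration, a signed request ends up carrying a header like this (the access key and signature values below are made up):

Authorization: AWS AKIAEXAMPLEKEY123456:frJIUN8DYpKDtOLCwo//yllqDzg=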

The client goes through this process to generate the Authorization header, then the server repeats the same computation using the stored secret key and compares signatures to authenticate the request. Additionally, the server checks the Date header value against server time to guard against replay attacks.
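
As a concrete (made-up) example, a GET request with no body, signed at a particular moment, would HMAC a string like the following – the two blank lines are the empty Content-MD5 and Content-Type values:

GET


Tue, 10 Mar 2015 06:30:00 GMT
/api/stations?from=perth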

Below is the source for an AuthorizeAttribute subclass (for an MVC3 REST API). The code is easily adaptable to other frameworks, such as an OpenRasta PipelineContributor. The injected Session property in this instance is an NHibernate ISession, and the ApiKey class is mapped to a database table. Successful authentication adds a custom IPrincipal to the HttpContext. Note that none of the x-amz headers are being used.

public class ApiAuthenticateAttribute : AuthorizeAttribute
{    
    private static System.Text.UTF8Encoding utf8 = new System.Text.UTF8Encoding();
 
    [Inject]
    public ISession Session { private get; set; }

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        IPrincipal principal = null;

        if (request.Headers["Authorization"] != null && request.Headers["Authorization"].StartsWith("AWS "))
        {
            // Amazon AWS authentication scheme.
            var credential = filterContext.HttpContext.Request.Headers["Authorization"].Substring(4).Split(':');
            var apiKey = Session.Query<ApiKey>().Where(k => k.AccessKey == credential[0]).FirstOrDefault();
            if (apiKey != null && !apiKey.IsDisabled && credential.Count() > 1)
            {
                // check the date header is present & within 15 mins
                DateTime clientDate;
                if (request.Headers["Date"] != null
                    && DateTime.TryParseExact(request.Headers["Date"], "R", DateTimeFormatInfo.CurrentInfo, DateTimeStyles.AdjustToUniversal, out clientDate)
                    && Math.Abs((clientDate - DateTime.UtcNow).TotalMinutes) <= 15)
                {
                    // build the signature & check for match
                    var stringToSign = String.Format("{0}\n{1}\n{2}\n{3}\n{4}",
                        request.HttpMethod,
                        request.Headers["Content-MD5"] ?? "",
                        request.Headers["Content-Type"] ?? "",
                        request.Headers["Date"] ?? "",
                        request.RawUrl);

                    var hmac = new HMACSHA1(utf8.GetBytes(apiKey.SecretKey));
                    var signature = Convert.ToBase64String(hmac.ComputeHash(utf8.GetBytes(stringToSign)));
                    if (signature == credential[1])
                    {
                        principal = apiKey.ToPrincipal();
                    }
                }
            }
        }

        if (principal == null)
        {
            filterContext.Result = new HttpUnauthorizedResult();
        }
        else
        {
            filterContext.HttpContext.User = principal;
        }
    }
}

Generating API Keys can be done like so:

public ApiKey()
{
    // Generate random keys by using RNGCryptoServiceProvider & Base64 encoding the output
    // Key lengths match AWS keys.
    var rngProvider = RNGCryptoServiceProvider.Create();
    var bytes = new byte[15];
    rngProvider.GetBytes(bytes);
    // Do some magic to ensure we have uppercase & digits only.
    AccessKey = Convert.ToBase64String(bytes).ToUpper().Replace("+", "0").Replace("/", "9");
    bytes = new byte[30];
    rngProvider.GetBytes(bytes);
    SecretKey = Convert.ToBase64String(bytes);
}

For the client, most languages have freely available Amazon client code that can be easily adapted. Reusing a popular scheme like this saves a lot of time & energy over rolling a completely custom solution, particularly where a number of disparate client platforms are likely to be used.
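
To make the client side concrete, here’s a minimal C# signing sketch written to mirror the server-side code above – the ApiClient class and Sign method are hypothetical names of my own, not from an Amazon SDK:

using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public static class ApiClient
{
    // Signs an HttpWebRequest to match the server-side scheme above.
    public static void Sign(HttpWebRequest request, string accessKey, string secretKey)
    {
        // Set the Date header so the server can enforce its 15-minute window.
        var utcNow = DateTime.UtcNow;
        request.Date = utcNow;

        // Same string-to-sign layout as the server: method, Content-MD5,
        // Content-Type, Date, and the path + query of the URL.
        var stringToSign = String.Format("{0}\n{1}\n{2}\n{3}\n{4}",
            request.Method,
            request.Headers["Content-MD5"] ?? "",
            request.ContentType ?? "",
            utcNow.ToString("R", CultureInfo.InvariantCulture),
            request.RequestUri.PathAndQuery);

        using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey)))
        {
            var signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            request.Headers["Authorization"] = "AWS " + accessKey + ":" + signature;
        }
    }
}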