IKEA desk hacking to mount a Rode mic arm

I’ve been working from home more frequently the past few months for some reason, so I eventually bit the bullet and constructed a permanent home workstation. I’ll post up more details of my setup later on, but when I tweeted a pic of my mic arm, it got a fair bit of interest, so I thought I’d give a bit more detail here.

I bought a Rode Podcaster dynamic mic a few months ago to start experimenting with recording screencasts – both to distribute over the web, and also as a backup for an epic presentation demo failure. Now the important thing with sensitive microphones is that they’re shock-mounted to protect from bumps & knocks (and even just vibrations on the desk from keyboard use), as these tend to create significant noise in the recording. This is what my first setup looked like:

Budget Shockmount

Suffice it to say, cheaping out on a proper shockmount & arm isn’t worth it – I supplemented this setup with the Rode PSM1 and PSA1 shortly afterwards.

The PSA1 comes with both a clamp-on mount and a through-desk mount. The clamp is the best option if you don’t want to irreversibly modify your desk, but it’s not suitable for all desks: mine doesn’t have an overhang to clamp to. The through-desk mount involves drilling a hole in your desk to mount an insert – this is how it’s done:

Step 1: Drill a hole in your desk. There are two important components to this step – the drilling of the hole, and knowing where to drill the hole.

  1. Drilling the hole – it’s best to use a holesaw, as a flat (spade) bit is not great for going through the particle board most desks are made of. 22mm is the right size for the Rode desk insert. Mine looked like this:
    Sutton holesaw
    Make sure you drill from the top; the surface may be splintered or chipped when the holesaw exits. That said, the insert will cover an area around the hole anyway.
  2. Knowing where to drill – the main considerations here are:
    • Will it reach where you want it to reach? It’s a bit difficult out of the mount, but I’d try holding the base of the arm in your proposed position and making sure the mic can be positioned comfortably. At full stretch, the arm will reach roughly 600mm from the centre of the hole.
    • If you mount in the corner of your desk (as is common), is there enough wall clearance for the back of the arm? When weighted with something as heavy as the Podcaster, the arm can stick back about 120-130mm; unweighted (or if you push it), it can move back about 200mm.
    • Don’t drill right on the edge – the lip of the insert extends nearly 30mm from the centre of the hole.
    • Make sure you have clearance underneath the desk; the insert will probably stick through about 50-60mm.

Step 2: Push the insert into the hole & screw up the nut underneath. This is knurled; it’s just meant to be finger-tight.

Step 3: In my case, the desk drawers go nearly all the way to the back of the desk, and needed surgery to clear the bottom of the insert. I removed the drawer, carefully cut down with a handsaw, and used a sharp chisel to remove the section. It looks a bit rough, but hopefully no-one will see it.

Drawer Clearance


Step 4: You’re done! Slot the arm into the insert and start recording.

Finished Mic Arm

Mary & Tom Poppendieck – The Scaling Dilemma

I’ve been a fan of the Poppendiecks’ work on Lean Software Development for a while, so I was quick to sign up when I saw they were coming back to Perth as part of a YOW! Night. The talk was entitled ‘The Scaling Dilemma’, and covered issues encountered in scaling development teams beyond the popular “2 pizza” size. I haven’t worked much with larger teams, but I didn’t want to miss the opportunity to see speakers of this calibre in Perth.

Mary presented (I think Mary always does the presentations) some intriguing anthropological background on why teams are typically the size they are – the 5–7 person inner circle, the 12–15 person sympathy group, the 30–50 person hunting party and the ~150 person clan. She showed some evidence of these organisation sizes recurring throughout human history – the Roman Army & other military groups, stone-age villages, University departments, Gore & Associates, etc. Based on her background in hardware product development, she was most familiar with ‘hunting party’-sized teams.

Other than that, some of my key takeaways were:

  • “Monopolies destroy collaboration” – if there’s a group/team/department in an organisation that doesn’t (want to or have to) accommodate others, this will eventually destroy inter-team trust & collaboration.
  • The application of the Theory of Constraints as an underlying principle behind some modern software best practices – eg continuous delivery can be viewed as an attempt to break the release cycle/integration constraint.

It was a really thought-provoking presentation; it’s great to see these sorts of speakers come to Perth where possible – YOW! and BankWest deserve full credit for continuing to make this happen.

“Unrecognized option: -files” in hadoop streaming job

I was recently working on an Elastic MapReduce Streaming setup that required copying a few required Python files to the nodes in addition to the mapper/reducer.
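For anyone unfamiliar with Hadoop streaming: the mapper and reducer are just scripts that read lines on stdin and write tab-separated key/value lines to stdout. A minimal word-count-style sketch of the pattern (illustrative only – not the actual scripts from this job):

```python
import sys
from itertools import groupby

def map_lines(lines):
    """Mapper: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_pairs(pairs):
    """Reducer: Hadoop delivers pairs sorted by key, so consecutive
    identical keys can be summed with groupby."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Run as a single process for demonstration; in a real streaming job,
    # mapper.py and reducer.py are separate scripts and Hadoop does the sort.
    for word, count in reduce_pairs(sorted(map_lines(sys.stdin))):
        print("%s\t%d" % (word, count))
```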

After much trial & error, I ended up using the following .NET AWS SDK code to accomplish the file upload:

var mapReduce = new StreamingStep {
    Inputs = new List<string> { "s3://<bucket>/input.txt" },
    Output = "s3://<bucket>/output/",
    Mapper = "s3://<bucket>/mapper.py",
    Reducer = "s3://<bucket>/reducer.py"
}.ToHadoopJarStepConfig();

var step = new StepConfig {
    Name = "python_mapreduce",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = mapReduce
};

// Then build & submit the RunJobFlowRequest

This generated the rather odd error:

ERROR org.apache.hadoop.streaming.StreamJob (main): Unrecognized option: -files

Odd, because -files most certainly is an option.

Prolonged googling later, and I discovered that the -files option needs to come first. However, StreamingStep doesn’t give me any way to change the order of the arguments – or does it?

I eventually realised I was being a bit dense. ToHadoopJarStepConfig() is a convenience method that just generates a regular JarStep… which exposes the args as a List. Change the code to this:

mapReduce.Args.Insert(0, "-files");
mapReduce.Args.Insert(1, "s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

and everything is awesome.

Basic Auth with a Web API 2 IAuthenticationFilter

MVC5/Web API 2 introduced a new IAuthenticationFilter (as opposed to the IAuthorizationFilter we needed to dual-purpose in the past), as well as a substantial overhaul of the user model with ASP.NET Identity. Unfortunately, the documentation is abysmal, and all the blog articles focus on the System.Web.Mvc.Filters.IAuthenticationFilter, not the System.Web.Http.Filters.IAuthenticationFilter, which is clearly something entirely different.

We had a project where we needed to support a Basic-over-SSL authentication scheme on the ApiControllers for a mobile client, as well as Forms auth for the MVC controllers running the admin interface. We were keen to leverage the new Identity model, mostly as it appears to be a much more coherent design than the legacy hodgepodge we’d used previously. This required a fair bit of decompilation and digging, but I eventually came up with something that worked.

Below is an excerpt of the relevant parts of our BasicAuthFilter class – it authenticates against a UserManager<T> (which could be the default EF version) and creates a (role-less) ClaimsPrincipal if successful.

public async Task AuthenticateAsync(HttpAuthenticationContext context, CancellationToken cancellationToken)
{
    var authHeader = context.Request.Headers.Authorization;
    if (authHeader == null || authHeader.Scheme != "Basic")
    {
        context.ErrorResult = Unauthorized(context.Request);
    }
    else
    {
        string[] credentials = ASCIIEncoding.ASCII.GetString(Convert.FromBase64String(authHeader.Parameter)).Split(':');

        if (credentials.Length == 2)
        {
            using (var userManager = CreateUserManager())
            {
                var user = await userManager.FindAsync(credentials[0], credentials[1]);
                if (user != null)
                {
                    var identity = await userManager.CreateIdentityAsync(user, "BasicAuth");
                    context.Principal = new ClaimsPrincipal(new ClaimsIdentity[] { identity });
                }
                else
                {
                    context.ErrorResult = Unauthorized(context.Request);
                }
            }
        }
        else
        {
            context.ErrorResult = Unauthorized(context.Request);
        }
    }
}

public Task ChallengeAsync(HttpAuthenticationChallengeContext context, CancellationToken cancellationToken)
{
    context.Result = new AddBasicChallengeResult(context.Result, realm);
    return Task.FromResult(0);
}

private class AddBasicChallengeResult : IHttpActionResult
{
    private IHttpActionResult innerResult;
    private string realm;

    public AddBasicChallengeResult(IHttpActionResult innerResult, string realm)
    {
        this.innerResult = innerResult;
        this.realm = realm;
    }

    public async Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        var response = await innerResult.ExecuteAsync(cancellationToken);

        if (response.StatusCode == HttpStatusCode.Unauthorized)
        {
            response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic", String.Format("realm=\"{0}\"", realm)));
        }

        return response;
    }
}
Note that you’ll need to use config.SuppressDefaultHostAuthentication() in your WebApiConfig in order to prevent redirection from unauthorised API calls.
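For completeness, the registration looks something like the sketch below – note that BasicAuthFilter taking a realm string in its constructor is an assumption based on the realm field used in ChallengeAsync above:

```csharp
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Stop the host's authentication (e.g. forms auth) from running for API
        // requests, so unauthorised calls get a 401 challenge, not a 302 redirect.
        config.SuppressDefaultHostAuthentication();

        // Apply Basic auth to all ApiControllers (it could equally be applied
        // per-controller as an attribute). "MyApp" is a placeholder realm.
        config.Filters.Add(new BasicAuthFilter("MyApp"));

        config.MapHttpAttributeRoutes();
    }
}
```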

Build Server Traffic Lights

Traffic Light

I’ve wanted real build server traffic lights since I first implemented a Continuous Integration server in the mid 2000s. In those days, the trendy thing to do was to hook up red & green lava lamps to your build server, but CCTray’s red/green/yellow status indicators always seemed better suited to traffic lights. However, it was something that always got put in the ‘someday’ pile. More recently, I’d become interested in hardware automation platforms like Arduino, and it seemed like an ideal first project, so I dusted off the concept.

Obtaining the traffic light unit itself was relatively straightforward – in WA, the old style incandescent units are being progressively replaced with LEDs, so the reasoning was there’d be some that are superfluous to requirements. A few phone calls later, I managed to track down the contractor handling the replacement and do a beverage-related deal for a second-hand traffic light. The hardest part was actually explaining what I intended to do with it!

These traffic light units don’t contain any switching logic or complex electronics at all – they have a 240VAC feed for each light, with industrial grade internal transformers stepping down to 10V and driving 20W high-pressure bulbs. I’d seen reports that the standard bulbs were too bright for indoor use, but a test run showed it was probably just okay, and it was certainly much simpler to keep the lighting as-is while I got the rest of the hardware working.

Traffic Light 3

The intention was to run the lights as a networked device (rather than a USB one, requiring an active host computer), as this would enable more flexibility in installation. I ordered an Arduino Ethernet and relay shield from Little Bird Electronics, and set about coding the controller software.

Traffic Light UI

The code is available online here – it’s adapted from a similar project by Dirk Engels. The Arduino runs a web server that serves a page displaying the current status of the light, as well as buttons to control the light and RESTful control URLs to provide build server integration. My main changes to the design were:

  • Integration of a DHCP library, to remove the hard-coded IP address and make it possible to move the light between networks without reprogramming.
  • Bonjour support, to advertise the light at ‘traffic-light.local’ and remove any requirement for DNS entries/DHCP reservations on the network.
  • A failover mode that flashes amber if the light has not heard from the build server in over 5 minutes. This mimics real world behaviour and seemed more appropriate than turning off or displaying the last known state indefinitely.

Traffic Light 2

Wiring in the controller was pretty simple – the 240V mains feed powers the 9V DC power supply for the Arduino, as well as the 10V transformers for the lights via the relay shield. Initially these were switched on the high-voltage side, but the inrush current appeared to play havoc with small switch-mode power supplies (i.e. phone chargers) on the same circuit, so I rewired to switch on the low-voltage side. This also allowed me to remove two of the transformers and freed up some internal space; I ended up being able to neatly mount the controller on one of the unused transformer brackets.

Traffic Light 4

Obviously the light needed a pole; I constructed one using a galvanised fence post and some sub-par oxy welding. I would have liked to run the wiring down inside the pole, but unfortunately the size of the mains plug was going to make this difficult (given I wanted the light to stay easily removable). A few coats of suitable yellow paint and it was good to go.

After installing the light in the office, we developed a small PowerShell script to query the build server and update the light. It’s had a significant benefit in putting the build status unavoidably in front of the developers, and the builds have become noticeably more ‘green’ than they have been for some time.
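The script is essentially just a mapping from build status to a light-control URL, polled on a timer. A shell sketch of the same idea – the build server endpoint, status strings and light URLs here are all assumptions (the light URLs modelled on the controller’s RESTful interface mentioned above), not the actual script:

```shell
#!/bin/sh
# Map a build status string to a traffic light colour.
light_for_status() {
  case "$1" in
    Success)  echo "green" ;;
    Failure)  echo "red" ;;
    *)        echo "yellow" ;;   # building / unknown
  esac
}

# Poll loop sketch: query the build server, then hit the light's control URL.
# STATUS=$(curl -s "http://buildserver/api/status")
# curl -s "http://traffic-light.local/$(light_for_status "$STATUS")"
```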

There are a few areas I’d design differently if I did it again:

  • Use a hardware flasher circuit for the failover mode (via the fourth relay) – the software flasher works okay, but there’s a noticeable stutter in the flashes if the controller is doing something else (like responding to a web request). I’m not enough of a hardware whiz to build one of these though.
  • Install bulkhead RJ45 & 3-pin PC power connections on the traffic light housing, so that the cables are detachable – this would permit variable cable lengths and potentially allow routing inside the pole.
  • Use low-wattage bulbs rather than the specialised 20W high pressure bulbs – the traffic light is a bit bright straight-on. Unfortunately the existing bulb holders have a unique bayonet mount and they’d need to be replaced with something else (e.g. automotive BA15S).

EntityPropertyMappingAttribute duplicated between assemblies

I was working on an entity class for an OData endpoint when I ran across the following doozy:

The type ‘System.Data.Services.Common.EntityPropertyMappingAttribute’ exists in both ‘…Microsoft.Data.OData.dll’ and ‘…System.Data.Services.Client.dll’

It looks like Microsoft has duplicated this type (plus a couple of others) between two different assemblies – in this instance I ran across it with the Azure.Storage package.

Thankfully, Jon Skeet to the rescue! To resolve:

  1. Select the System.Data.Services.Client reference and open the properties dialog
  2. Under ‘Aliases’, change ‘global’ to ‘global,SystemDataServicesClient’
  3. Add the following code at the top of the offending entity file:
extern alias SystemDataServicesClient;
using SystemDataServicesClient::System.Data.Services.Common;

You’ll also need to delete your other using System.Data.Services.Common directive, but at that point you should be compiling again.

Azure AD Single Sign On with multiple environments (Reply URLs)

As part of an effort to move some internal applications to the cloud (sorry, The Cloud™), I recently went through the process of implementing Azure AD single sign on against our Office365 tenant directory. Working through the excellent MSDN tutorial, I hit the following (where it was describing how to reconfigure Azure AD to deploy your app to production):

Locate the REPLY URL text box, and enter there the address of your target Windows Azure Web Site (for example, https://aadga.windowsazure.net/). That will let Windows Azure AD to return tokens to your Windows Azure Web Site location upon successful authentication (as opposed to the development time location you used earlier in the thread). Once you updated the value, hit SAVE in the command bar at the bottom of the screen.

Wait, what? This appears to imply Azure AD can’t authenticate an application in more than one environment (eg if you want to run a production & test environment, or, I don’t know, RUN IT LOCALLY) without setting up duplicate Azure applications and making fairly extensive changes to the web.config. Surely there’s a better way?

I noticed that the current version of the Azure management console allows for multiple Reply URL values:
Azure AD Reply URLs

However, just adding another URL didn’t work – the authentication still only redirected to the topmost value.

The key was the system.identityModel.services\federationConfiguration\wsFederation@reply attribute in web.config – adding this attribute sent through the reply URL and allowed authentication via the same Azure AD application from multiple environments, with only relatively minor web.config changes.

As the simplest solution, here’s an example Web.Release.config transform – more advanced scenarios could involve scripting xml edits during a build step to automatically configure by environment.

      <wsFederation reply="<<your prod url>>" xdt:Transform="SetAttributes" />
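For context, that attribute sits inside the identity configuration section, so the complete transform file looks roughly like this (the reply URL remains a placeholder):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.identityModel.services>
    <federationConfiguration>
      <wsFederation reply="<<your prod url>>" xdt:Transform="SetAttributes" />
    </federationConfiguration>
  </system.identityModel.services>
</configuration>
```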