Watch App Development Blog – Week 1

Okay, I’m trying something new – a weekly blog post talking about my progress developing an app. The theory is that the thought of my massive readership expectantly waiting for the next update will give me enough of an incentive to get something finished. Everyone practise their sad face to make me feel guilty if I don’t post an update.

I’m keen to get out an Apple Watch app. To be honest, I don’t think they’ll make much money, but most apps I build are for other people; it would be nice to put out something good, so I can say “I did that!”

The Concept

Over the last few months, I’ve tried to be aware of instances where I need some information off my phone, but pulling it out & launching an app seems like too much of a hassle. One scenario I noticed was when I was heading to the train station – I used to know the departure times off by heart, but now I’m not sure whether to run or dawdle. It would be great if there was an app on my watch that gave me live departure times for my nearest station – challenge accepted!

Step 1: The API

Transperth don’t have a public API, although there’s an unofficial third-party one that scrapes the website. Unfortunately some parts broke in a recent site update, and the developer has since moved to Melbourne, so fixes seem unlikely. I also have a few ideas for some custom API behaviour, so I made the probably ill-advised decision to build my own scraping API.

I really wanted to try out a web project in F#, but I didn’t want to develop on Windows and I ended up running into significant problems with Xamarin – broken project templates, unimplemented parts of the aspnetwebstack, etc. ASP.NET vNext looks promising, but I had issues with it also.

So I thought I’d give Haskell another go – I’ve tried this in the past, but I’m much gooder at Haskell now. The state of web frameworks in Haskell has also improved significantly since 2010. I went with Scotty – I like the simplicity of the Sinatra/NancyFx model, and there’s a great walk-through by Aditya Bhargava.

Haskell Web Development on OS X

If you’re playing along at home, here are the steps to get the API running:

  1. Install the Haskell Platform
  2. Run cabal sandbox init in your project directory. Cabal sandbox installs dependencies in a project scope, similar to Bundler in Ruby.
  3. Create a cabal file specifying your dependencies. I found this process a little odd – effectively you’re describing your executable as a cabal package – but it lets you leverage cabal’s dependency resolution. Use Adit’s cabal file as a base (a minimal sketch follows this list).
  4. Create a Main.hs and add your Scotty routes (check out the examples).
  5. Run cabal install && .cabal-sandbox/bin/<executable name>
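
For reference, a minimal cabal file for this kind of project might look something like the following (the package name and dependency list are just my guesses at a reasonable baseline – adapt to taste):

name:                transperth-api
version:             0.1.0.0
build-type:          Simple
cabal-version:       >=1.10

executable transperth-api
  main-is:            Main.hs
  build-depends:      base >=4.6 && <5,
                      scotty,
                      aeson,
                      tagsoup,
                      HTTP
  default-language:   Haskell2010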

After an extended compile time, you should now have a web server running on localhost:<port>.

JSON Response Types

Returning JSON can be done by defining record types that implement the ToJSON type class (from Aeson):

{-# LANGUAGE DeriveGeneric #-}
module Types where

import Data.Aeson
import GHC.Generics

data Departure = Departure
  { time        :: String
  , destination :: String
  , pattern     :: String
  , status      :: String
  } deriving (Generic, Show)

instance ToJSON Departure
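
With the Generic instance in place, Aeson derives the encoder mechanically. A quick GHCi sanity check looks something like this (my sketch – the exact field order in the output may vary):

λ> encode (Departure "12:34" "Fremantle" "All Stops" "On Time")
"{\"time\":\"12:34\",\"destination\":\"Fremantle\",\"pattern\":\"All Stops\",\"status\":\"On Time\"}"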

HTML Parsing

Parsing the DNN-generated web page is done using tagsoup. This differs from most other HTML parsing libraries I’ve used in that it doesn’t define a query API or CSS-like selector syntax over a DOM; it just converts the HTML into a flat list of tags that can be manipulated using regular list functions.

My scraping function, which is probably not brilliant Haskell, looks like the following (excluding some helpers):

getTrainTimes :: String -> IO [Departure]
getTrainTimes x = do tags <- fmap parseTags $ openURL $ "http://www.transperth.wa.gov.au/Timetables/Live-Train-Times?stationname=" ++ urlEncode x
                     let table = head $ tables tags -- first table in page
                     let rowArray = reverse . tail . reverse . tail $ rows table -- strip first & last rows
                     let times = map (f . cells) rowArray -- convert each row into a Departure
                     return times
                  where f cs = Departure (textFromCell $ cellAtColumn 0 cs) (destFromCell $ cellAtColumn 1 cs) (patternFromCell $ cellAtColumn 2 cs) (textFromCell $ cellAtColumn 3 cs)
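
The helpers are where tagsoup’s flat-list approach shows through. They’re not in this post, but written over tagsoup’s partitions and innerText they’d look roughly like the following sketch (openURL is the usual two-liner over the HTTP package; destFromCell and patternFromCell dig further into the cell markup, so I’ve left them out):

import Network.HTTP (simpleHTTP, getRequest, getResponseBody)
import Text.HTML.TagSoup

-- fetch a page as a String
openURL :: String -> IO String
openURL url = simpleHTTP (getRequest url) >>= getResponseBody

-- split a tag stream into tables / rows / cells
tables, rows, cells :: [Tag String] -> [[Tag String]]
tables = partitions (~== "<table>")
rows   = partitions (~== "<tr>")
cells  = partitions (~== "<td>")

-- pick the nth cell from a row
cellAtColumn :: Int -> [[Tag String]] -> [Tag String]
cellAtColumn = flip (!!)

-- collapse a cell to its trimmed inner text
textFromCell :: [Tag String] -> String
textFromCell = unwords . words . innerText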

‘Stations Near Me’

In addition to querying for live times, I also have a flat text file of station names & locations I lifted from Darcy’s project. This is used to respond to the ‘all stations’ API call, and also supports a geospatial query endpoint using the gps package. Initially I got build failures with this dependency – it was trying to compile GPX file support, which I don’t need. The latest version (1.2) of the gps code removes this dependency, but it’s not on Hackage yet.

Happily, this is solvable:

  1. Specify the required version of the package in your cabal file: gps >=1.2
  2. Put the source code in your project directory (e.g. git submodule add git@github.com:TomMD/gps.git)
  3. Specify the new source directory with cabal sandbox add-source gps

Using the Geo.Computations model, it’s then fairly straightforward to filter the list of stations based on distance to a given point.
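
As a rough sketch (assuming a Station record with lat/lon fields – the real field names may differ – and the pt/distance functions from Geo.Computations), the filter boils down to:

import Geo.Computations (pt, distance)

-- hypothetical record; the real one also carries more detail
data Station = Station { name :: String, lat :: Double, lon :: Double }

-- stations within ~5km of a given point (the radius is my arbitrary choice)
stationsNear :: Double -> Double -> IO [Station]
stationsNear y x = do
  ss <- stations                 -- the full list read from the flat file
  let here     = pt y x Nothing Nothing
      close st = distance here (pt (lat st) (lon st) Nothing Nothing) < 5000
  return (filter close ss)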

Bringing it Together

Once the stations, live times, and geospatial filtering were in place, it was just a case of defining the appropriate route functions in Scotty:

  scotty port $ do
    get "/train/" $ do
      list <- liftIO stations
      json list

    get "/train/near" $ do
      y <- param "lat"
      x <- param "long"
      list <- liftIO $ stationsNear y x
      json list

    get "/train/:station" $ do
      stationId <- param "station"
      found <- liftIO $ station stationId
      case found of
        Just s -> do
          times <- liftIO $ getTrainTimes $ name s
          json times
        Nothing ->
          Web.Scotty.status status404
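
With the server up, a quick smoke test from the command line looks like this (assuming scotty was started on port 3000 – substitute whatever port you used):

curl http://localhost:3000/train/
curl "http://localhost:3000/train/near?lat=-31.95&long=115.86"
curl http://localhost:3000/train/<station id>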

I haven’t touched cache control or more advanced error handling, and it would be nice to fall back on timetables if live times aren’t available, but I now have enough of an API running to support the basic functions of the watch app. One of the things I liked about doing it in Haskell was that once it compiled, it generally worked. It’s a pretty nice feeling.

I’ve put the code up on Bitbucket – feel free to have a look through it and send some feedback if you can’t stand my beginner Haskell.

Next Steps

Next is hosting – building and deploying my dinky API somewhere I can reach it. Tune in next week for another thrilling instalment!

FP101x Review

I’ve just finished Erik Meijer’s FP101x “Introduction to Functional Programming” course, and it was a bit of a mixed bag. Note that this was my first MOOC, so take this review with a grain of salt.

Firstly, the good. Erik’s lectures were well-paced and entertaining, and for the most part communicated concepts fairly well. I won’t say he’s the most polished presenter, but his knowledge, enthusiasm and sense of whimsy (a plate of bananas was used as a prop at one point) combined to make the content quite watchable.

The lab format was quite good also. Typically there was a downloadable Haskell file with some unimplemented functions, then the lab questions took you through what each of the functions should do. Once done, you would paste some test code into GHCi that called the function, and then paste the result into the lab quiz. Because this is Haskell, generally once it compiled with no type errors, you got the right answer.

As an aside, it would be nice if you could paste your code into the quiz directly and have some QuickCheck magic running in the background – this can’t be done on the very general-purpose EdX platform, but hopefully that’s how we’ll learn code in the future.

The content was fairly well graduated also – I have some exposure to Haskell and FP, and didn’t have too many problems early on, but started to get a lot more challenged as the course progressed. Once we hit the M word some of the exercises became quite difficult (but mostly in a good way).

Now, the bad. This was ostensibly a 6 week course, but the lectures & exercises dropped two modules at a time. It might just be the psychological impact of splitting it up in this way, but I often found I’d only get one module done in a week, and I started to fall further and further behind. There was plenty of time to finish all the assessments after the fact, but I wouldn’t have been so down on myself if the content had been spaced out a little more.

The course was intended to be language agnostic, but in practice it was very much a Haskell course. Certainly the FP concepts covered in the lectures were generic and widely applicable, and Erik did a good job of demoing functional style in other languages, but the assessments were nearly all Haskell-specific, and I can’t imagine anyone would have made it through without doing Haskell.

And now, the homework. There were a number of problems with this. Here’s a typical question, for context:

Higher Order Functions question

Answering questions in this format generally involved laboriously typing the possible answers (they’re images, so they can’t be copied & pasted) into GHCi and running each one to see if it worked – a fairly mind-numbing chore with a high chance of transcription errors that didn’t really aid understanding at all. For the simpler examples, like this one, it was sometimes more reliable to reason about the code in your head than to chance typing it incorrectly. Occasionally I found I would subconsciously correct the intentional errors in the functions while typing! This only really affected the scores, but it definitely detracted from the enjoyment of the course.

I can understand the value of including a few ‘choose all the correct implementations’ questions early on while people are getting to grips with different features of the language, but for the most part I wanted to see what the idiomatic Haskell code should look like (i.e. I’m fairly confident some of those solutions to ‘any’ wouldn’t be written by any self-respecting Haskell developer).

In some cases, the question was very abstract and it wasn’t really clear what the code was actually meant to do. These questions were really crying out for examples of inputs, or how the function is intended to be used – one example that I struggled with:

Consider expressions built up from non-negative numbers, greater or equal to zero using a subtraction operator that associates to the left.

A possible grammar for such expressions would look as follows:

expr ::= expr - nat | nat
nat ::= 0 | 1 | 2 | ...

However, this grammar is left-recursive and hence directly transliterating this grammar into parser combinators would result in a program that does not terminate because of the left-recursion. In the lectures of week 7 we showed how to factor recursive grammars using iteration.

Choose an iterative implementation of left-associative expressions that does not suffer from non-termination.
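
(For what it’s worth, the iterative factoring the question was fishing for looks something like this – my own sketch using the course’s Parsing module, not the official answer:)

-- parse a leading natural, then any number of "- natural" tails,
-- and fold them up left-associatively
expr :: Parser Int
expr = do n  <- natural
          ns <- many (symbol "-" >> natural)
          return (foldl (-) n ns)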

Lastly, there were a number of instances where entirely new concepts were introduced in the homework that hadn’t been mentioned at all in the lectures (e.g. redexes). I think some of these were covered in the book, but it had been mentioned earlier in the course that the book wasn’t really necessary, so I hadn’t been keeping up to date.

To sum up, I definitely know more Haskell now than when I started, so I’d have to call that a win. The course itself shows some promise, and I’d like to see the homework issues addressed. I think the excessive genericness and one-size-fits-all nature of the EdX platform detract from it a bit, but not terminally so. If you want to dip your toes into FP, don’t mind using Haskell to do it, and think you can cope with somewhat laborious homework tasks, you’ll definitely get value out of this.

IKEA desk hacking to mount a Rode mic arm

I’ve been working from home more frequently the past few months for some reason, so I eventually bit the bullet and constructed a permanent home workstation. I’ll post up more details of my setup later on, but when I tweeted a pic of my mic arm, it got a fair bit of interest, so I thought I’d give a bit more detail here.

I bought a Rode Podcaster dynamic mic a few months ago to start experimenting with recording screencasts – both to distribute over the web, and as a backup in case of an epic presentation demo failure. Now the important thing with sensitive microphones is that they’re shock-mounted to protect them from bumps & knocks (and even just vibrations on the desk from keyboard use), as these tend to create significant noise in the recording. This is what my first setup looked like:

Budget Shockmount

Suffice it to say, cheaping out on a proper shockmount & arm isn’t worth it – I supplemented this setup with the Rode PSM1 and PSA1 shortly afterwards.

The PSA1 comes with both a clamp-on mount and a through-desk mount. The clamp is the best option if you don’t want to irreversibly modify your desk, but it’s not suitable for all desks: mine doesn’t have an overhang to clamp to. The through-desk mount involves drilling a hole in your desk to mount an insert – this is how it’s done:

Step 1: Drill a hole in your desk. There are two important components to this step – the drilling of the hole, and knowing where to drill the hole.

  1. Drilling the hole – it’s best to use a holesaw – a flat (spade) bit is not great for going through the particle board that most desks are made out of. 22mm is right for the Rode desk insert. My one looked like this:
    Sutton holesaw
    Make sure you drill from the top; the surface may be splintered or chipped when the holesaw exits. That said, the insert will cover an area around the hole anyway.
  2. Knowing where to drill – the main considerations here are:
    • Will it reach where you want it to reach? It’s a bit difficult to judge out of the mount, but I’d try holding the base of the arm in your proposed position and making sure the mic can be positioned comfortably. At full stretch, the arm will reach roughly 600mm from the centre of the hole.
    • If you mount in the corner of your desk (as is common), is there enough wall clearance for the back of the arm? When weighted with something as heavy as the Podcaster, the arm can stick back about 120-130mm; unweighted (or if you push it), it can move back about 200mm.
    • Don’t drill right on the edge – the lip of the insert will overlap nearly 30mm from the centre of the hole.
    • Make sure you have clearance underneath the desk; the insert will probably stick through about 50-60mm.

Step 2: Push the insert into the hole & screw up the nut underneath. This is knurled; it’s just meant to be finger-tight.

Step 3: In my case, the desk drawers go nearly all the way to the back of the desk, and needed surgery to clear the bottom of the insert. I removed the drawer, carefully cut down with a handsaw, and used a sharp chisel to remove the section. It looks a bit rough, but hopefully no-one will see it.

Drawer Clearance

Step 4: You’re done! Slot the arm into the insert and start recording.

Finished Mic Arm

Mary & Tom Poppendieck – The Scaling Dilemma

I’ve been a fan of the Poppendiecks’ work on Lean Software Development for a while, so I was quick to sign up when I saw they were coming back to Perth as part of a YOW! Night. The talk was entitled ‘The Scaling Dilemma’, and covered issues encountered in scaling development teams beyond the popular “2 pizza” size. I haven’t worked much with larger teams, but I didn’t want to miss the opportunity to see speakers of this calibre in Perth.

Mary presented (I think Mary always does the presentations) some intriguing anthropological background on why teams are typically the size they are – the 5–7 person inner circle, the 12–15 person sympathy group, the 30–50 person hunting party and the ~150 person clan. She showed some evidence of these organisation sizes recurring throughout human history – the Roman Army & other military groups, stone-age villages, University departments, Gore & Associates, etc. Based on her background in hardware product development, she was most familiar with ‘hunting party’-sized teams.

Other than that, some of my key takeaways were:

  • “Monopolies destroy collaboration” – if there’s a group/team/department in an organisation that doesn’t (want to or have to) accommodate others, this will eventually destroy inter-team trust & collaboration.
  • The application of the Theory of Constraints as an underlying principle behind some modern software best practices – e.g. continuous delivery can be viewed as an attempt to break the release cycle/integration constraint.

It was a really thought-provoking presentation; it’s great to see these sorts of speakers come to Perth where possible – YOW! and BankWest deserve full credit for continuing to make this happen.

“Unrecognized option: -files” in Hadoop streaming job

I was recently working on an Elastic MapReduce Streaming setup that required copying a few supporting Python files to the nodes in addition to the mapper/reducer.

After much trial & error, I ended up using the following .NET AWS SDK code to accomplish the file upload:

var mapReduce = new StreamingStep {
    Inputs = new List<string> { "s3://<bucket>/input.txt" },
    Output = "s3://<bucket>/output/",
    Mapper = "s3://<bucket>/mapper.py",
    Reducer = "s3://<bucket>/reducer.py",
}.ToHadoopJarStepConfig();

mapReduce.Args.Add("-files");
mapReduce.Args.Add("s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

var step = new StepConfig {
    Name = "python_mapreduce",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = mapReduce
};

// Then build & submit the RunJobFlowRequest

This generated the rather odd error:

ERROR org.apache.hadoop.streaming.StreamJob (main): Unrecognized option: -files

Odd, because -files most certainly is an option.

After prolonged googling, I discovered that the -files option needs to come first. However, StreamingStep doesn’t give me any way to change the order of the arguments – or does it?

I eventually realised I was being a bit dense. ToHadoopJarStepConfig() is a convenience method that just generates a regular JarStep… which exposes the args as a List. Change the code to this:

mapReduce.Args.Insert(0, "-files");
mapReduce.Args.Insert(1, "s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

and everything is awesome.

Why Objective-C is doomed

My recent LinkedIn article ended with the conclusion:

One thing that is clear to observers – Objective-C’s days are numbered. Apple management are not shy about killing off technologies that no longer suit them, and once they feel maintaining Objective-C is preventing further improvement, it will be given an official end of life. This won’t happen overnight, but it will happen, and Objective-C developers should ignore the warning signs at their own peril.

This resulted in an animated comment discussion – many developers who are professionally (and emotionally) tied to Objective-C would be understandably resistant to the concept of its eventual demise. However, I still feel the writing is on the wall for Objective-C. I think this particular aspect deserves a bit more treatment, so I’m presenting the reasoning behind this viewpoint here.

Swift is transitional, not complementary

Apple have done a fantastic job making Swift and Objective-C code interoperate cleanly, and this leads some to believe that the two languages can coexist indefinitely. I get the feeling they regard Swift in the same vein as CoffeeScript – an alternative syntax to exactly the same code, delivering some convenience and terseness to the source without fundamentally changing the underlying programming model.

However, this isn’t an accurate comparison. Evan Swick posted a detailed look at Swift internals – if I can ineptly summarise: Swift, like Objective-C, is implemented on top of the objc runtime, but differs in key areas like method dispatch. It’s more of a peer to Objective-C than just a ‘client language’ compiling down to the same executable.

At a higher level, Swift departs from Objective-C in other aspects. Type-safety and the monadic option type will result in much more reliable apps, the various functional programming aspects will enable us to design software in different ways, and the (intentionally) less capable error handling and reflection features will force us to do so.

For application development, Swift will be a better tool for the job by almost any measure. I can’t envision a situation where, on an ongoing basis (and purely on language merit), Objective-C will be used for some parts of a new application, and Swift will be used for others. Swift is interoperable with Objective-C only so that we (and Apple) can leverage our existing code and slowly transition to the new world, not so that we have additional language choices for development.

Swift code will outgrow Objective-C

Swift programs inherit the standard Cocoa design patterns by virtue of using the Cocoa frameworks, but there are places the Swift programming model is likely to go where Objective-C can’t follow. Some of the issues are apparent already:

  • Objective-C APIs return all objects as optionals (and all primitives as non-optional), making optionals much less useful than they are in vanilla Swift code.
  • Objective-C APIs always deal with non-generic collections, resulting in a lot of typecasts and erosion of the benefits of type-safety.

The other area where Swift will diverge is the new language features. Generics, ADTs, and top-level and curried functions will allow new styles of API, but none of these are available from Objective-C. We’ll initially see third-party frameworks adopt a ‘Swift-first’ or ‘Swift-only’ API approach, but I expect there’ll be increasing pressure from developers for Apple to follow suit. Note this isn’t necessarily a rational process – see Erica Sadun’s recent anecdote on ‘Cocoaphobia’.

As an aside, I’ve been trying to think of similar scenarios where a platform vendor has replaced their core application language. The plethora of alternative JVM languages are all community-driven. Microsoft successfully replaced VB6 with .NET, although this also involved rewriting the application frameworks, and of course Win32 and MFC refuse to die. F# was born within Microsoft (Research), but they never really pushed it as a mainstream C# replacement – instead they’ve rolled F#-inspired enhancements into the C# language, and spun off F# as a community project.

Ultimately, Apple will not be able to continue advancing the platform & the tools while maintaining backward compatibility with Objective-C. We’ll start to see new frameworks released as ‘Swift-only’, and eventually I’d expect to see Foundation and AppKit/UIKit replaced with Swift equivalents.

Objective-C is not indispensable

It’s true that Swift isn’t capable of seamlessly interleaving C & assembly like Objective-C can, but this isn’t a deal-breaker for an application programming language – most other languages in this category (Ruby, C#, Java, Python, etc) can’t do this either.

The argument I’ve seen posed is that because of the C-level compatibility, Apple will always need Objective-C around to perform lower-level tasks (e.g. build the frameworks and tools), therefore they’ll continue making it available for third party devs to use.

There are a few problems with this:

  • Apple are quite okay keeping tools around for internal use only (e.g. YellowBox for Windows, which was used by WebObjects for a while). There’s a lot more involved in fully supporting a technology for third party developers vs using it internally.
  • The bulk of iOS & OS X’s low level code is written in C, C++, and assembly (like every other platform) – e.g. the kernel, the BSD subsystem, clang & LLVM, the objc runtime, CoreFoundation, CoreGraphics, OpenCL, Metal, WebKit, CoreAudio etc. I think most application developers just see the Cocoa layer and tend to overestimate its importance in the scheme of things.
  • The ‘Objective’ part of Objective-C is even less capable than Swift of writing low level ‘unsafe’ code, so if there is a lot of low-level code in the Foundation framework, conceptually you can regard it as being written in C.

The main reason why Objective-C is used to build the application frameworks is because it’s the language used for building applications. Once we transition to using Swift for applications, it will exist there only for legacy reasons.

Doomed! Doomed I say!

I’ll finish with a quote from Bill Gates, of all people:

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

Anyone today claiming iOS & OS X developers “don’t need to learn Objective-C” is jumping the gun – it’s going to be difficult to be a good developer without knowing the language the bulk of the existing code is written in. I expect Objective-C will still be around for most, if not all, of those 10 years.

However, anyone assuming that Objective-C will stick around forever is ignoring the warning signs. Swift and the Mac & iOS platforms will outlast Objective-C, hopefully by a significant margin.