FP101x Review

I’ve just finished Erik Meijer’s FP101x “Introduction to Functional Programming” course, and it was a bit of a mixed bag. Note that this was my first MOOC, so take this review with a grain of salt.

Firstly, the good. Erik’s lectures were well-paced and entertaining, and for the most part communicated concepts fairly well. I won’t say he’s the most polished presenter, but his knowledge, enthusiasm and sense of whimsy (a plate of bananas was used as a prop at one point) combined to make the content quite watchable.

The lab format was also quite good. Typically there was a downloadable Haskell file with some unimplemented functions, and the lab questions walked you through what each of the functions should do. Once done, you would paste some test code into GHCi that called the function, then paste the result into the lab quiz. Because this is Haskell, once it compiled with no type errors you generally got the right answer.
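
For a rough idea of the format, here's a minimal sketch of what a lab file and its GHCi check looked like – the module and function names are invented for illustration, not taken from the actual course materials:

-- Hypothetical lab skeleton: the lab sheet describes each function,
-- you fill in the body, then verify it from GHCi.
module Lab1 where

-- Keep every element that satisfies the predicate.
keepIf :: (a -> Bool) -> [a] -> [a]
keepIf p xs = [x | x <- xs, p x]

-- Sum the squares of the first n positive integers.
sumSquares :: Int -> Int
sumSquares n = sum (map (^ 2) [1 .. n])

-- Test snippet pasted into GHCi, with the result pasted back into the quiz:
-- > sumSquares 10
-- 385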

As an aside, it would be nice if you could paste your code directly into the quiz and have some QuickCheck magic run against it in the background – that can’t be done on the very general-purpose EdX platform, but hopefully it’s how we’ll learn to code in the future.
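
Something along these lines, say – a property that checks a submitted function against a reference implementation on random inputs (purely a sketch of the idea; nothing like this existed in the course):

import Test.QuickCheck

-- Hypothetical student submission: reverse a list using a fold.
studentReverse :: [a] -> [a]
studentReverse = foldl (flip (:)) []

-- The grader's property: the submission must agree with the Prelude's reverse.
prop_matchesReference :: [Int] -> Bool
prop_matchesReference xs = studentReverse xs == reverse xs

main :: IO ()
main = quickCheck prop_matchesReference   -- "+++ OK, passed 100 tests."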

The content was also fairly well graduated – I have some exposure to Haskell and FP, and didn’t have too many problems early on, but I was challenged a lot more as the course progressed. Once we hit the M word, some of the exercises became quite difficult (but mostly in a good way).

Now, the bad. This was ostensibly a six-week course, but the lectures & exercises were released two modules at a time. It might just be the psychological impact of splitting it up this way, but I often found I’d only get one module done in a week, and I started to fall further and further behind. There was plenty of time to finish all the assessments after the fact, but I wouldn’t have been so down on myself if the content had been spaced out a little more.

The course was intended to be language agnostic, but in practice it was very much a Haskell course. Certainly the FP concepts covered in the lectures were generic and widely applicable, and Erik did a good job of demoing functional style in other languages, but the assessments were nearly all Haskell-specific, and I can’t imagine anyone would have made it through without doing Haskell.

And now, the homework. There were a number of problems with this. Here’s a typical question, for context:

Higher Order Functions question

Answering this sort of question generally involved laboriously typing each of the possible answers (they’re images, so they can’t be copied & pasted) into GHCi and running it to see if it worked – it felt like a fairly mind-numbing chore, had a high chance of transcription errors, and didn’t really aid understanding at all. For the simpler examples, like this one, it was sometimes more reliable to reason about the code in your head than risk typing it incorrectly. Occasionally I found I would subconsciously correct the intentional errors in the functions while typing! This really only affected the scores, but it definitely detracted from the enjoyment of the course.

I can understand the value of including a few ‘choose all the correct implementations’ questions early on while people are getting to grips with different features of the language, but for the most part I wanted to see what the idiomatic Haskell code should look like – I’m fairly confident some of those alternative implementations of any would never be written by a self-respecting Haskell developer.
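
For what it’s worth, here’s my own take on what idiomatic definitions of any look like (a sketch, not the course’s model answer):

-- Two idiomatic ways to write 'any':
anyViaMap :: (a -> Bool) -> [a] -> Bool
anyViaMap p = or . map p

anyViaFold :: (a -> Bool) -> [a] -> Bool
anyViaFold p = foldr (\x acc -> p x || acc) False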

In some cases, the question was very abstract and it wasn’t really clear what the code was actually meant to do. These questions were crying out for example inputs, or some indication of how the function was intended to be used – here’s one example that I struggled with:

Consider expressions built up from non-negative numbers, greater or equal to zero, using a subtraction operator that associates to the left.

A possible grammar for such expressions would look as follows:

However, this grammar is left-recursive and hence directly transliterating this grammar into parser combinators would result in a program that does not terminate because of the left-recursion. In the lectures of week 7 we showed how to factor recursive grammars using iteration.

Choose an iterative implementation of left-associative expressions that does not suffer from non-termination.
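
The grammar itself was shown as an image, presumably something along the lines of expr ::= expr ‘-’ nat | nat. To illustrate what the question was driving at, here’s a self-contained sketch of an iterative, left-associative parser – the course used its own parser-combinator library, so this is a minimal stand-in rather than the actual quiz code:

import Data.Char (isDigit)

type Parser a = String -> [(a, String)]

-- Parse a natural number.
nat :: Parser Int
nat s = case span isDigit s of
          ([], _)        -> []
          (digits, rest) -> [(read digits, rest)]

-- Parse a literal character.
char :: Char -> Parser Char
char c (x:xs) | x == c = [(c, xs)]
char _ _               = []

-- Zero or more occurrences (greedy).
many0 :: Parser a -> Parser [a]
many0 p s = case p s of
              []            -> [([], s)]
              ((a, rest):_) -> [(a : as, rest') | (as, rest') <- many0 p rest]

-- expr ::= nat ('-' nat)*  -- iteration instead of left recursion,
-- with foldl giving the left-associative grouping.
expr :: Parser Int
expr s = [ (foldl (-) n ns, rest)
         | (n, s1)    <- nat s
         , (ns, rest) <- many0 minusNat s1 ]
  where
    minusNat t = [ (m, t2) | (_, t1) <- char '-' t, (m, t2) <- nat t1 ]

-- Example: expr "7-3-2" evaluates to [(2, "")], i.e. (7 - 3) - 2.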

Lastly, there were a number of instances where entirely new concepts were introduced in the homework that hadn’t been mentioned at all in the lectures (e.g. redexes). I think some of these were covered in the book, but it had been mentioned earlier in the course that the book wasn’t really necessary, so I hadn’t been keeping up with the reading.

To sum up, I definitely know more Haskell now than when I started, so I’d have to call that a win. The course itself shows some promise, and I’d like to see the homework issues improved. I think the excessive genericness and one-size-fits-all nature of the EdX platform detracts from it a bit, but not terminally so. If you want to dip your toes into FP, don’t mind using Haskell to do it, and think you can cope with somewhat laborious homework tasks, you’ll definitely get value out of this course.

IKEA desk hacking to mount a Rode mic arm

I’ve been working from home more frequently over the past few months for some reason, so I eventually bit the bullet and set up a permanent home workstation. I’ll post more details of my setup later on, but when I tweeted a pic of my mic arm it got a fair bit of interest, so I thought I’d give a bit more detail here.

I bought a Rode Podcaster dynamic mic a few months ago to start experimenting with recording screencasts – both to distribute over the web, and as a backup in case of an epic demo failure during a presentation. The important thing with sensitive microphones is that they’re shock-mounted to protect them from bumps & knocks (and even just desk vibrations from keyboard use), as these tend to create significant noise in the recording. This is what my first setup looked like:

Budget Shockmount

Suffice it to say, cheaping out on a proper shockmount & arm isn’t worth it – I supplemented this setup with the Rode PSM1 and PSA1 shortly afterwards.

The PSA1 comes with both a clamp-on mount and a through-desk mount. The clamp is the best option if you don’t want to irreversibly modify your desk, but it’s not suitable for all desks: mine doesn’t have an overhang to clamp to. The through-desk mount involves drilling a hole in your desk to mount an insert – this is how it’s done:

Step 1: Drill a hole in your desk. There are two important components to this step – the drilling of the hole, and knowing where to drill the hole.

  1. Drilling the hole – it’s best to use a holesaw; a flat (spade) bit is not great for going through the particle board most desks are made of. 22mm is the right size for the Rode desk insert. Mine looked like this:
    Sutton holesaw
    Make sure you drill from the top, as the surface may be splintered or chipped where the holesaw exits. That said, the insert will cover an area around the hole anyway.
  2. Knowing where to drill – the main considerations here are:
    • Will it reach where you want it to reach? It’s a bit awkward to judge with the arm out of its mount, but try holding the base of the arm in the proposed position and making sure the mic can be positioned comfortably. At full stretch, the arm will reach roughly 600mm from the centre of the hole.
    • If you mount in the corner of your desk (as is common), is there enough wall clearance for the back of the arm? When weighted with something as heavy as the Podcaster, the arm sticks back about 120–130mm; unweighted (or if you push it), it can move back about 200mm.
    • Don’t drill right on the edge – the lip of the insert will overlap nearly 30mm from the centre of the hole.
    • Make sure you have clearance underneath the desk; the insert will probably stick through about 50-60mm.

Step 2: Push the insert into the hole and screw the nut on from underneath. It’s knurled, so it’s only meant to be finger-tight.

Step 3: In my case, the desk drawers go nearly all the way to the back of the desk, and needed surgery to clear the bottom of the insert. I removed the drawer, carefully cut down with a handsaw, and used a sharp chisel to remove the section. It looks a bit rough, but hopefully no-one will see it.

Drawer Clearance


Step 4: You’re done! Slot the arm into the insert and start recording.

Finished Mic Arm

Mary & Tom Poppendieck – The Scaling Dilemma

I’ve been a fan of the Poppendiecks’ work on Lean Software Development for a while, so I was quick to sign up when I saw they were coming back to Perth as part of a YOW! Night. The talk was entitled ‘The Scaling Dilemma’, and covered issues encountered in scaling development teams beyond the popular “2 pizza” size. I haven’t worked much with larger teams, but I didn’t want to miss the opportunity to see speakers of this calibre in Perth.

Mary presented (I think Mary always does the presentations) some intriguing anthropological background on why teams are typically the size they are – the 5–7 person inner circle, the 12–15 person sympathy group, the 30–50 person hunting party and the ~150 person clan. She showed some evidence of these organisation sizes recurring throughout human history – the Roman Army & other military groups, stone-age villages, University departments, Gore & Associates, etc. Based on her background in hardware product development, she was most familiar with ‘hunting party’-sized teams.

Other than that, I found some of my key takeaways were:

  • “Monopolies destroy collaboration” – if there’s a group/team/department in an organisation that doesn’t (want to or have to) accommodate others, this will eventually destroy inter-team trust & collaboration.
  • The application of the Theory of Constraints as an underlying principle behind some modern software best practices – e.g. continuous delivery can be viewed as an attempt to break the release-cycle/integration constraint.

It was a really thought-provoking presentation; it’s great to see these sorts of speakers come to Perth where possible – YOW! and BankWest deserve full credit for continuing to make this happen.

“Unrecognized option: -files” in hadoop streaming job

I was recently working on an Elastic MapReduce Streaming setup that required copying a few extra Python files to the nodes in addition to the mapper and reducer.

After much trial & error, I ended up using the following .NET AWS SDK code to accomplish the file upload:

var mapReduce = new StreamingStep {
    Inputs = new List<string> { "s3://<bucket>/input.txt" },
    Output = "s3://<bucket>/output/",
    Mapper = "s3://<bucket>/mapper.py",
    Reducer = "s3://<bucket>/reducer.py",
}.ToHadoopJarStepConfig();

mapReduce.Args.Add("-files");
mapReduce.Args.Add("s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

var step = new StepConfig {
    Name = "python_mapreduce",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = mapReduce
};

// Then build & submit the RunJobFlowRequest

This generated the rather odd error:

ERROR org.apache.hadoop.streaming.StreamJob (main): Unrecognized option: -files

Odd, because -files most certainly is an option.

Prolonged googling later, I discovered that the -files option needs to come first. However, StreamingStep doesn’t give me any way to change the order of the arguments – or does it?

I eventually realised I was being a bit dense. ToHadoopJarStepConfig() is a convenience method that just generates a regular HadoopJarStepConfig… which exposes the args as a List. Change the code to this:

// Insert -files at the front so it precedes the other streaming options.
mapReduce.Args.Insert(0, "-files");
mapReduce.Args.Insert(1, "s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

and everything is awesome.

Why Objective-C is doomed

My recent LinkedIn article ended with the conclusion:

One thing that is clear to observers – Objective-C’s days are numbered. Apple management are not shy about killing off technologies that no longer suit them, and once they feel maintaining Objective-C is preventing further improvement, it will be given an official end of life. This won’t happen overnight, but it will happen, and Objective-C developers ignore the warning signs at their own peril.

This resulted in an animated comment discussion – many developers who are professionally (and emotionally) tied to Objective-C would be understandably resistant to the concept of its eventual demise. However, I still feel the writing is on the wall for Objective-C. I think this particular aspect deserves a bit more treatment, so I’m presenting the reasoning behind this viewpoint here.

Swift is transitional, not complementary

Apple have done a fantastic job making Swift and Objective-C code interoperate cleanly, and this leads some to believe that the two languages can coexist indefinitely. I get the feeling they regard Swift in the same vein as CoffeeScript – an alternative syntax for exactly the same code, delivering some convenience and terseness in the source without fundamentally changing the underlying programming model.

However, this isn’t an accurate comparison. Evan Swick posted a detailed look at Swift internals – if I can ineptly summarise, Swift, like Objective-C, is implemented on top of the objc runtime, but differs in key areas like method dispatch. It’s more of a peer to Objective-C rather than just a ‘client language’ compiling down to the same executable.

At a higher level, Swift departs from Objective-C in other aspects. Type-safety and the monadic option type will result in much more reliable apps, the various functional programming aspects will enable us to design software in different ways, and the (intentionally) less capable error handling and reflection features will force us to do so.

For application development, Swift will be a better tool for the job by almost any measure. I can’t envision a situation where, on an ongoing basis (and purely on language merit), Objective-C will be used for some parts of a new application, and Swift will be used for others. Swift is interoperable with Objective-C only so that we (and Apple) can leverage our existing code and slowly transition to the new world, not so that we have additional language choices for development.

Swift code will outgrow Objective-C

Swift programs inherit the standard Cocoa design patterns by virtue of using the Cocoa frameworks, but there are places the Swift programming model is likely to go where Objective-C can’t follow. Some of the issues are apparent already:

  • Objective-C APIs return all objects as optionals (and all primitives as non-optional), making optionals much less useful than they are in vanilla Swift code.
  • Objective-C APIs always deal with non-generic collections, resulting in a lot of typecasts and erosion of the benefits of type-safety.

The other area where Swift will diverge is its new language features. Generics, ADTs, and top-level and curried functions will allow new styles of API, but none of these are available from Objective-C. We’ll initially see third party frameworks adopt a ‘Swift-first’ or ‘Swift-only’ API approach, but I expect there’ll be increasing pressure from developers for Apple to follow suit. Note this isn’t necessarily a rational process – see Erica Sadun’s recent anecdote on ‘Cocoaphobia’.

As an aside, I’ve been trying to think of similar scenarios where a platform vendor has replaced their core application language. The plethora of alternative JVM languages are all community-driven. Microsoft successfully replaced VB6 with .NET, although this also involved rewriting the application frameworks, and of course Win32 and MFC refuse to die. F# was born within Microsoft (Research), but they never really pushed it as a mainstream C# replacement – instead they’ve rolled F#-inspired enhancements into the C# language, and spun off F# as a community project.

Ultimately, Apple will not be able to continue advancing the platform & the tools while maintaining backward compatibility with Objective-C. We’ll start to see new frameworks released as ‘Swift-only’, and eventually I’d expect to see Foundation and AppKit/UIKit replaced with Swift equivalents.

Objective-C is not indispensable

It’s true that Swift isn’t capable of seamlessly interleaving C & assembly like Objective-C can, but this isn’t a deal-breaker for an application programming language – most other languages in this category (Ruby, C#, Java, Python, etc) can’t do this either.

The argument I’ve seen posed is that because of the C-level compatibility, Apple will always need Objective-C around to perform lower-level tasks (e.g. build the frameworks and tools), therefore they’ll continue making it available for third party devs to use.

There are a few problems with this:

  • Apple are quite okay keeping tools around for internal use only (e.g. YellowBox for Windows, which was used by WebObjects for a while). There’s a lot more involved in fully supporting a technology for third party developers vs using it internally.
  • The bulk of iOS & OS X’s low level code is written in C, C++, and assembly (like every other platform) – e.g. the kernel, the BSD subsystem, clang & LLVM, the objc runtime, CoreFoundation, CoreGraphics, OpenCL, Metal, WebKit, CoreAudio etc. I think most application developers just see the Cocoa layer and tend to overestimate its importance in the scheme of things.
  • The ‘Objective’ part of Objective-C is even less capable than Swift of writing low level ‘unsafe’ code, so if there is a lot of low-level code in the Foundation framework, conceptually you can regard it as being written in C.

The main reason why Objective-C is used to build the application frameworks is because it’s the language used for building applications. Once we transition to using Swift for applications, it will exist there only for legacy reasons.

The end is at hand

Doomed! Doomed I say!

I’ll finish with a quote from Bill Gates, of all people:

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

Anyone today claiming iOS & OS X developers “don’t need to learn Objective-C” is jumping the gun – it’s going to be difficult to be a good developer without knowing the language the bulk of the code is written in. I expect Objective-C will still be around for most, if not all, of that 10 years.

However, anyone assuming that Objective-C will stick around forever is ignoring the warning signs. Swift and the Mac & iOS platforms will outlast Objective-C, hopefully by a significant margin.


Why Swift was a better option than Xamarin

Matt Baxter-Reynolds over at ZDNet claims that Swift is “a total mishit that’s badly judged the state of the software development industry” and that “What Apple should have done, years ago, was bought Xamarin”. I disagree with this on a number of levels.

I think the crux of the argument hinges on the flawed idea that ‘if only’ Apple supported some sort of standard third party language, switching apps & developers between platforms would be easy, and everybody would win. This doesn’t make any sense:

  • Learning a new language only takes a few weeks; learning frameworks, tools, UI idioms, platform processes and cloud service models takes years. I think there’s an implicit and significant overestimation of the actual benefit to developers of using an existing language.
  • Apple could go further and wholesale adopt a full-stack technology like J2ME or Xamarin, but this would eviscerate their ability to innovate & differentiate their platform. I struggle to see how this could possibly be attractive to them. There’s a definite sense of entitlement in demanding Apple cripple their business to make it slightly easier for third party developers to add value to competitor platforms.

I’d go further and say the industry as a whole would be negatively impacted by ‘language homogenisation’ — having a single language across all platforms would stifle development of new languages and technologies that work to make the industry better. Would we all genuinely be happier and more productive if Java 6 was the only mobile development language?

A couple of other choice quotes:

When I first heard about Swift I was pleased as I assumed that Apple would look to solve the key problem faced by mobile developers — specifically that there is little overlap between developer toolsets making cross-platform mobile development extremely difficult.

I’m one of those developers and that ‘key problem’ is barely even on my list of problems. If it was, I certainly wouldn’t assume it was Apple’s responsibility to solve it.

No, the problems I battle with most are diagnosing obscure app crashes, dealing with header files, dynamic typing issues, forgetting asterisks, counting square brackets and the like (not to mention f*#!ingblocksyntax). Swift goes a long way towards removing this pain.

It’s been a long time since the software development community accepted commercial, “ivory tower” organisations to dictate engineering approaches. The last time this happened was back in the era that brought us Java and .NET itself. Now the community itself decides.

It should make Matt happy to know that the community itself DOES have the ability to decide — there are plenty of cross-platform mobile dev environments available today, including Xamarin. The fact the community hasn’t decided to replace ‘native’ development with these platforms should prompt him to wonder why (and I don’t believe it’s due to them not being first party).

Oh come on — it doesn’t show the best in modern language thinking at all. Modern language thinking comes from the community, building incrementally within that community in a way that’s genuinely helpful to all of us.

I like to think of programming languages as a window into the general attitudes towards language design at the time they were created. Objective-C is clearly a child of the Smalltalk era. .NET was influenced by Java, which was influenced by C++. The language elements I see in Swift are a reflection of thoroughly modern thinking from (yes, community-driven) languages like Go, Rust, F#, Ruby, Python and Haskell.

It sounds like Matt’s arguing only open source languages have the right to be called ‘modern’, which seems a bit narrow-minded to me.

In essence, Apple had one job — create a new baseline tooling for iOS and show a sympatico approach with how the rest of the industry actually operates — and they blew it.

It looks like he’s called out two jobs there. It’s too early to judge whether they’ve blown the first (and most important) one, but at this stage they certainly appear to have delivered the goods. As for the second, maybe I just don’t understand ‘how the rest of the industry actually operates’, but Platform |> Developer Tools |> Software |> Profit! would be my guess. I get the feeling Matt looked at the language, decided he didn’t like it, and took it as a personal affront that Apple didn’t just adopt a language he was already comfortable with.

Lastly, I’ll just add that many commentators questioning the need for Swift seem to be assuming the language selection process was a relatively trivial window-shopping exercise — i.e. “Hey, this one looks nice!” However, that’s oversimplifying the number of factors that needed to be considered, and key amongst those was seamless compatibility with existing C & Objective-C code. None of the existing solutions (like Xamarin) come close to doing this as well as Swift.