22/10/2010
Attempting to `cabal install happstack` on OS X resulted in numerous errors, including:
ghc: could not execute: /Library/Frameworks/GHC.framework/Versions/612/usr/lib/ghc-6.12.3/ghc-asm
There’s a bug with the Haskell platform installer for OS X — the shebang line in the file /Library/Frameworks/GHC.framework/Versions/612/usr/lib/ghc-6.12.3/ghc-asm is incorrect (it’s pointing at the MacPorts location for perl, rather than system perl).
Open the file in your text editor of choice and change the first line from `#!/opt/local/bin/perl` (the MacPorts perl) to `#!/usr/bin/perl` (the system perl).
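If you'd rather do it from the terminal, something like the following should work (a sketch assuming the stock BSD sed that ships with OS X; `/opt/local/bin/perl` and `/usr/bin/perl` are the conventional MacPorts and system perl locations):

```shell
# Path taken from the ghc error message above
GHC_ASM=/Library/Frameworks/GHC.framework/Versions/612/usr/lib/ghc-6.12.3/ghc-asm

# Keep a backup, then rewrite the shebang line to point at the system perl
sudo cp "$GHC_ASM" "$GHC_ASM.bak"
sudo sed -i '' '1s|^#!.*perl.*$|#!/usr/bin/perl|' "$GHC_ASM"

head -1 "$GHC_ASM"   # should now print: #!/usr/bin/perl
```

The same fix applies to `ghc-split` in the same directory if it shows the identical symptom.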
21/10/2010
Just a quick observation on the newly announced Mac App Store: I found the most interesting thing about the app store to be what it isn’t — it isn’t iTunes. That’s mostly because a Mac app store wouldn’t make much sense in iTunes for Windows, but perhaps it marks the beginning of the end for iTunes as Apple’s universal delivery mechanism for content?
19/10/2010
I installed FluentNHibernate & NHibernate.Core 3.0 Beta through the excellent NuPack. Attempting to run a Linq query against the session resulted in:
Could not load file or assembly ‘Antlr3.Runtime, Version=3.1.3.42154, Culture=neutral, PublicKeyToken=3a9cab8f8d22bfb7’ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
It looks like the binary DLL distributed in the Antlr 3.1.3 package is actually version 3.1.0. Downloading the correct version from here and copying it over the top of the offending file (in <solution root>\packages\Antlr.3.1.3\lib) fixed the problem.
18/10/2010
The second YOW talk Dave Thomas delivered back on the 30th September was entitled “Why Real Developers Embrace Functional Programming and NoSQL: Data Confessions of an Object’holic and Stateful Sinner”. Needless to say, it contained a fair bit of controversial content, which Dave was unapologetic about, including the bold pronouncement that ‘C# and Java will be legacy platforms in 5 years’.
The gist of the argument was:
- Objects are not terribly good abstractions for most real-world problems.
- Good object-oriented design is hard, and Morts are still the bread & butter coders out there producing software.
- Objects are implemented inefficiently in almost all runtime environments — “A good JIT can generate fast code, but it will generate a lot of it”. They also don’t translate well to parallel execution environments.
- Serialisation/storage is still a problem — sending objects over the wire, or persisting to disk requires complicated, framework-heavy mapping.
- KLOCs kill!
Dave’s proposed solution is a wholesale movement towards Functional Programming. This has been contemplated before (ie for the last forty years), but I think there’s more appetite in mainstream development communities today. Functional languages are available for both the Java & .NET runtimes (and have been for some time), but more importantly we’re seeing some FP paradigms pop up in imperative languages — eg Ruby, Python and even Linq in C#. I suspect the future will be much more heavily skewed towards multi-paradigm languages than Dave would prefer, but I can certainly see it happening.
So, count me in as a convert. I’m foolishly attempting to teach myself Haskell by following the excellent wikibook and building toy web apps with Happstack — expect to see some more posts on this subject soon as I muddle my way through.
14/10/2010
I went along to an Agile Perth meetup last night – Shelley Beeston from Thoughtworks presenting a session on user stories. I’m always enthusiastic about hearing concrete examples of different Agile techniques from the trenches on (presumably) successful projects.
Requirements approaches are something that I’ve experimented a fair bit with in the past, but I’ve always ended up fairly dissatisfied with each one. When it boils down to it, I consider some sort of functional description to be a critical element of the enduring system doco (along with commented source code & the 5-page ‘this is how it really works’ design document – see Alex Papadimoulis’s recent diatribe on this subject). There’s too much lost context when you only have the source code to work off, particularly if there’s no-one around who was there as part of the original implementation. This typically leads me to err on the side of heavier requirements approaches & I often regress to use cases in the generally forlorn belief that they’re going to give me the document I’m after. In practice, they tend to be incomplete and out of date, with the ‘real’ requirements captured informally in an even more ephemeral form.
Shelley opened with a familiar description of the problems inherent in traditional hand-off based requirements management, before moving on to an outline of how she’s used user stories on projects for Thoughtworks. The interesting points I distilled were:
- She’s an enthusiastic supporter of the use of index cards for initial requirements discussions, even if they’re subsequently retyped into an electronic system. I’ve used cards for backlog management in the past, but I found the overhead of manually producing reports to take a lot of the joy out of it. This gives me an excuse to re-introduce the cards I still have sitting in my drawer.
- ‘Activities’ are documented separately to stories. If I’m interpreting/recalling this correctly, activities are high-level process interactions with the system, that may initially map one-to-one with stories & epics, but end up being one-to-many with the refined stories. The purpose seems to be a high-level roadmap of functionality that’s published to give the project team a locus for discussions of system progress/completeness.
- The analysts work on story splitting & refinement two iterations ahead of development. This is not too dissimilar to Dave Thomas’s recommendation of regular backlog maintenance, and would certainly ensure you’re kickstarting each iteration with ready-to-code stories, but systemic pipelining still makes me uneasy. It didn’t help that Shelley uttered the classic phrase ‘mini-waterfalls’ in relation to the iterations. It’s possible, though, that projects beyond a certain size simply can’t complete the full story lifecycle within a single iteration.
- Acceptance criteria. A structured mechanism for documenting the acceptance criteria for a story (that’s not a cumbersome list of test cases) is one of the key components I think I’ve been missing from my approaches in the past. Shelley nominated the format:
Scenario 1: Title
Given [some context]…
And [some more context]…
When [some event occurs]…
Then [some outcome]…
And [another outcome]…
with the recommendation there be no more than about 4-5 scenarios per story. The scenarios are stored separately, in a Word document or spreadsheet.
I’m keen to put some of these ideas into practice — I suspect some combination of the activity documentation and acceptance criteria will satisfy my doco desires.
07/10/2010
I was lucky enough to catch Dave Thomas’s promotional YOW presentation in Perth on Thursday night. His first talk was entitled “Improving the Quality and Productivity of Backlogs Through Envisioning: Collaborative Agile Product Analysis, Architecture and Design”. His concept of ‘Envisioning’ consists of establishing old-school product design functions in order to improve the quality of the Product Backlog. I particularly liked:
- A little more emphasis on upfront requirements — not to the level of a BDUF, but I’ve always struggled with pulling together a useful backlog as part of a Sprint 0, let alone in a half-day planning session. Much of the focus in Agile seems to revolve around managing an existing project, rather than bootstrapping a new one.
- ‘Architect is a role, not a job!’ Dave made a strong point that ‘Architecture’ should be done by full-fledged members of the development team(s) that will have to implement it. I’ve long felt that a good technical architecture can only be done in conjunction with actually putting the design into practice, so it was nice to see this approach advocated. He used the analogy of a hockey playing coach — I guess that’s a captain-coach in AFL/cricket terminology.
- A good acceptance test being worth 100 ‘fragile’ automated unit tests. I’ve predominantly worked in small teams in the past and it’s difficult to justify 100% unit test coverage, particularly when a large number of tests break with every refactoring. In contrast, we’ve always struggled to get user testing resources, and investment in automation in that area would be much more beneficial.
- The value of prototyping, even if just paper prototypes, should not be underestimated. I recently read an interview with Chris Clark, UX designer at iOS development shop Black Pixel, and was struck by how much of the design process consists of mockups and prototypes.
In all, Dave achieved his objective of making me want to attend YOW, although unfortunately I won’t make it this year.