Watch App Development Blog – Week 4 *cough* 5

I’m blogging my progress in developing an Apple Watch App. Read the previous instalments here.

So, um, I missed a week. Sorry about that. Sad face.

Step 4: The Pebble

We don’t have access to any Apple Watch hardware yet, so it’s difficult to get a good feel for how you will use your app in context. I was given a Pebble for Christmas though, and thought it may be worthwhile getting the core information displayed on a Pebble app so I can experience and analyse the usage flow behind a watch app with this data.

Pebble: the Palm Pilot of Smartwatches

First, my impressions of the Pebble:

  • Battery life is not too bad – I get about a week of normal usage. It doesn’t compare favourably to several years of battery life on a traditional watch (like my trusty Tag 2000), but it should soundly spank most newer-generation smartwatches, including the Apple Watch, based on the rumours to date.
  • The screen is awful if you’re used to a high-resolution smartphone display. The B&W 144 x 168 display looks like 90s tech.
  • The watch itself is plasticky (it lasted a whole day before it copped a small but visible scratch on the face), and too large and ungainly for most wrists.
  • The app/watchface marketplace is fairly limited & most of the apps have a ‘hobbyist’ feel to them. I don’t necessarily mean that in a negative manner – it’s great to see the enthusiasm, but without a good mechanism to monetise apps there’s not the same level of investment & innovation that I see in the iOS App Store or the Play Store.
  • The hardware is fairly slow and anaemic.
  • Development for the Pebble is difficult. The Pebble C API is very low level, requires a lot of careful memory management, can only run on the device, and can’t be debugged. They’ve released a JavaScript API to try to make the experience a bit better, though I haven’t tried it.
  • If you’re using your Pebble with an iPhone, the experience is less than seamless due to Apple’s bluetooth accessory restrictions. Some functionality (e.g. network access) stops working if the Pebble iOS app has been terminated from the background. Third party iOS apps have a single Pebble connection to share, and communication can only be initiated from the phone.

To me, the product is very reminiscent of the early Palm Pilots – clunky B&W screen, an awkward developer experience, small hobbyist developer community etc. The future’s yet to be written, but the Pebble will need to undergo radical and ruthless improvement to keep pace with the latest smartwatches.

The Pebble App

The general concept behind the watch app is a simple display showing the departure times for the closest station. This should provide the ‘in context’ component of the most important watch app functionality. The initial UI design (pictured) includes the nearest station, and the destination, pattern, and minutes remaining for the next four departures from the station.

Pebble Mockup

The simplest way to manage communication between the phone app & the watch app is the AppSync API. The general semantics of the API involve syncing a dictionary of shared data between the phone & watch; it also makes data storage on the watch more convenient. The downside is that this requires a specific key for each individual data element synced to the watch – i.e. specific numbered departures rather than a variable array of scheduled trains.

With that in mind, the keys were defined thus:

#define KEY_STATION 0
#define KEY_DEST_1 1
#define KEY_TIME_1 2
#define KEY_DEST_2 3
#define KEY_TIME_2 4
#define KEY_DEST_3 5
#define KEY_TIME_3 6
#define KEY_DEST_4 7
#define KEY_TIME_4 8
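The scheme encodes a departure’s zero-based index into its pair of keys. A couple of helper functions (a sketch only – these aren’t in the actual app code) make the mapping explicit:

```c
// Sketch only: departure i (zero-based) stores its destination under
// key i*2 + 1 and its departure time under key i*2 + 2, matching the
// #defines above.
static inline int dest_key(int i) { return i * 2 + 1; }
static inline int time_key(int i) { return i * 2 + 2; }
```

This is the same arithmetic the watch-side drawing code uses to look up tuples in its loop.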

The main issue I ran into with AppSync was a storage limitation. The sample code includes a 30-byte sync buffer, which is insufficient for most data sync requirements, but it wasn’t immediately obvious that this was what the DICT_NOT_ENOUGH_STORAGE error was referring to. Upping the buffer to 128 bytes solved the issue. That should be enough for anybody.

Once the data is synced, I update the UI using the following function:

static void drawText() {
  const Tuple *tuple;
  if ((tuple = app_sync_get(&s_sync, KEY_STATION))) {
    if (tuple->value->cstring[0] == 0) {
      text_layer_set_text(station_text_layer, "Waiting for data");
    } else {
      text_layer_set_text(station_text_layer, tuple->value->cstring);
    }
  }
  for (int i = 0; i < 4; i++) {
    if ((tuple = app_sync_get(&s_sync, i * 2 + 1))) {
      if (tuple->value->cstring[0] == 0) {
        text_layer_set_text(dest_text_layers[i], "");
      } else {
        text_layer_set_text(dest_text_layers[i], tuple->value->cstring);
      }
    }
    if ((tuple = app_sync_get(&s_sync, i * 2 + 2))) {
      if (!tuple->value->int32) {
        text_layer_set_text(time_text_layers[i], "");
      } else {
        time_t departure_time = tuple->value->int32;
        time_t current_time = time(NULL);
        int minutes = (departure_time - current_time)/60;
        char time_str[5];
        snprintf(time_str, 5, "%dm", minutes);
        text_layer_set_text(time_text_layers[i], time_str);
      }
    }
  }
}

…where dest_text_layers and time_text_layers are four element arrays containing references to the text layers on the watch UI.

Can you spot the bug? If you haven’t done much work with embedded systems it’s not obvious. Critically, the documentation for text_layer_set_text says:

The string is not copied, so its buffer most likely cannot be stack allocated, but is recommended to be a buffer that is long-lived, at least as long as the TextLayer is part of a visible Layer hierarchy.

time_str is not copied when passed to text_layer_set_text; it goes out of scope as soon as drawText returns, so the text is never displayed on the watch face. The solution is a set of string buffers referenced statically – I used a static char pointer array, and malloced/freed the buffers in window_load/window_unload.

// at the top of the file
#define TIME_LABEL_LENGTH 5
static char *time_strings[4];

// in the window_load() function
  for (int i = 0; i < 4; i++) {
    time_strings[i] = malloc(sizeof(char[TIME_LABEL_LENGTH]));
  }

// drawText() changes to:
  snprintf(time_strings[i], TIME_LABEL_LENGTH, "%dm", minutes);
  text_layer_set_text(time_text_layers[i], time_strings[i]);

The iOS App

Pebble integration doesn’t require a substantial amount of code – drag in the frameworks and pull a PBWatch reference from PBPebbleCentral.defaultCentral().lastConnectedWatch(). Because I want to be able to show the number of minutes until a train leaves, I changed the earlier code from a HH:MM string to a ZonedDate/NSDate in the Haskell & Swift code. I then implemented Pebble communication using the following (dest/pattern is abbreviated to economise on transfer bandwidth & Pebble display size):

    func updatePebble(station: Station, _ times : [Departure]) {
        var pebbleUpdate : [NSNumber: AnyObject] = [
            KeyStation : station.name,
        ]
        for i: Int in 0..<4 {
            let destKey = NSNumber(int: Int32(i * 2 + 1))
            let timeKey = NSNumber(int: Int32(i * 2 + 2))
            pebbleUpdate[destKey] = times[i].shortDescription
            pebbleUpdate[timeKey] = NSNumber(int32: Int32(times[i].time.timeIntervalSince1970))
        }
        self.watch?.appMessagesPushUpdate(pebbleUpdate, withUUID: appUUID, onSent: { (w, Dict, e) in
            if let error = e {
                println("Error sending update to pebble: \(error.localizedDescription)")
            } else {
                println("Sent update to pebble!")
            }
        })
    }

There was a problem though: the Pebble showed around -470 minutes for each train (i.e. 8 hours out – suspicious, as local time is +8:00). Turns out Pebble has no concept of timezone at all. The docs spin this as: “Note that the epoch is adjusted for Timezones and Daylight Savings.” Not sure that qualifies as epoch time, but it was clear the conversion is meant to happen on the phone. The following code sorted the issue:

  let adjustedEpoch = Int(times[i].time.timeIntervalSince1970) + NSTimeZone.localTimeZone().secondsFromGMT
  pebbleUpdate[timeKey] = NSNumber(int32: Int32(adjustedEpoch))
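To see why adding the offset on the phone works: the watch’s time(NULL) returns an epoch that’s already shifted by the local offset, so shifting the departure time by the same amount means the offset cancels out in the subtraction. A sketch of the arithmetic (illustrative values only, not Pebble SDK code):

```c
// Both the departure time (shifted on the phone) and the watch's clock
// carry the same local offset, so it cancels in the subtraction.
int minutes_remaining(long departure_utc, long now_utc, long offset) {
    long adjusted_departure = departure_utc + offset; // what the phone now sends
    long watch_now = now_utc + offset;                // what time(NULL) returns on the Pebble
    return (int)((adjusted_departure - watch_now) / 60);
}
```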

Glorious 1 bit UI

The result (I have a promising future as a watch model). I’ll give it a good workout near the train station over the next week.

Pebble Running

As always, the code is available here, here and here.

A last note: The fact that I’m on week 5 of my watch app development journey and am yet to touch WatchKit is not lost on me. I should hopefully start hitting WatchKit code this week. With a bit of luck.


The operation couldn’t be completed. (SSErrorDomain error 100.)

If you’re trying to test iOS App Store receipt validation, and you perform a receipt refresh using SKReceiptRefreshRequest, you are almost certainly going to come across the mysterious and enigmatic SSErrorDomain Error 100. There’s not a lot of information on the googles, so here’s what I know/suspect.

As far as I can tell, code 100 is the App Store’s way of telling you “Sorry, I have no receipt for that bundle ID for that user”. That’s unlikely to happen in production unless shenanigans are underway (a receipt is generated even for a free app ‘Get’), but it can happen often in development. The sandbox App Store appears to have the ability to generate fake receipts when requested, but all ducks need to be in a row for this to happen.

In the sandbox (Development/Ad Hoc builds):

  • If you don’t have an app record set up in iTunes Connect, you’ll get a Code 100
  • If you’re signed in with your regular Apple ID instead of a sandbox account: Code 100
  • If you’re signed in with a sandbox account associated with a different iTunes Connect account: Code 100

The story is a bit different for Apple TestFlight builds – these are production builds with special handling for in-app purchases, and the App Store (currently) does NOT generate a fake original purchase receipt. I haven’t tested this myself, but from a developer report on the dev forums (login required):

  • If you have a virgin install from TestFlight, you’ll get a Code 100
  • If you’ve previously installed the App Store version of the app, you’ll get a receipt
  • If you have a virgin install from TestFlight but have made an in-app purchase, you’ll get a receipt

Hopefully this saves others some frustration.

Watch App Development Blog – Week 3

In weeks 1 & 2, I got a Transperth-scraping REST API built and deployed to an AWS-based cloud host. This week I’ll get started on:

Step 3: The iPhone App

Third party apps on Apple Watch are very limited and rely heavily on the companion iPhone app for logic, network access, location etc, so the first place to start with the Watch app is on the iPhone.

The phone app itself will need useful functionality otherwise it’s unlikely to be approved. I have a few ideas for cool features for the phone, but for the moment it can just display the live times for the nearest station. This will require:

  1. Getting the list of stations from the server
  2. Finding the nearest station based on the user’s current location
  3. Getting the live times for that station from the server.

Let’s get started. I’m going to be building the app in Swift (of course). Mattt Thompson of NSHipster/AFNetworking fame has written a Swift-only networking framework called Alamofire, so we’ll start with it.

After setting up the framework following the instructions, I created a new Swift file called ‘ApiClient’. Downloading JSON from the server using Alamofire looks like the following:

func getAllStations(f: [Station] -> ()) {
    Alamofire.request(.GET, "\(hostname)/train")
        .responseJSON { (_, _, json, _) in
           if let json = json as? [NSDictionary] {
                let s = json.map({ Station(dictionary: $0) })
                f(s)
            } else { f([]) }
    }
}

struct Station {
    let id : String
    let name : String
    let location: CLLocation

    init(dictionary: NSDictionary) {
        self.id = dictionary["id"] as String
        self.name = dictionary["name"] as String
        let lat = Double(dictionary["lat"] as NSNumber)
        let long = Double(dictionary["long"] as NSNumber)
        self.location = CLLocation(latitude: lat, longitude: long)
    }
}

I’m using a global function (to be more functional) with a callback parameter that takes an array of stations. The returned JSON array is mapped over to convert the NSDictionary instances into Station values. If anything goes wrong, an empty array is passed to the callback – this isn’t brilliant error handling and will probably change, but it’s clearer to show as-is for now.

The user’s location can then be retrieved from a CLLocationManager, and the nearest location calculated like so:

    func nearestStation(loc: CLLocation) -> Station? {
        return stations.filter({ $0.distanceFrom(loc) <= 1500 }).sorted({ $0.distanceFrom(loc) < $1.distanceFrom(loc) }).first
    }

    // where distanceFrom is defined on Station as
    func distanceFrom(otherLocation: CLLocation) -> CLLocationDistance {
        return location.distanceFromLocation(otherLocation)
    }

This will filter out all stations greater than 1.5km away, and return the nearest of the remainder (or nil if there are no nearby stations).
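The filter-then-sort approach above works, but sorting the whole list just to take the first element is more than is strictly needed – a single pass can find the minimum. A sketch in plain C (distances precomputed; the real app asks CLLocation for them):

```c
// Given precomputed distances (metres) to each station, return the index
// of the nearest one within max_dist, or -1 if none qualifies.
int nearest_station(const double dists[], int n, double max_dist) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (dists[i] <= max_dist && (best < 0 || dists[i] < dists[best]))
            best = i;
    }
    return best;
}
```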

From there, we can retrieve the live times from the server using the code:

func getLiveTrainTimes(station: String, f: [Departure] -> ()) {
    Alamofire.request(.GET, "\(hostname)/train/\(station)")
        .responseJSON { (_, _, json, _) in
            if let json = json as? [NSDictionary] {
                let s = json.map({ Departure(dictionary: $0) })
                f(s)
            } else { f([]) }
    }
}

Wait – this looks pretty much identical to getAllStations, just with a different URL and return type. Let’s refactor:

protocol JSONConvertible {
    init(dictionary: NSDictionary)
}

func get<T: JSONConvertible>(path: String, f: [T] ->()) {
    Alamofire.request(.GET, hostname + path)
        .responseJSON { (_, _, json, _) in
        if let json = json as? [NSDictionary] {
            f(json.map({ T(dictionary: $0) }))
        } else { f([]) }
    }
}

// we can then redefine the other request methods as:
func getAllStations(f: [Station] -> ()) {
    get("/train", f)
}

func getLiveTrainTimes(station: String, f: [Departure] -> ()) {
    get("/train/\(station)", f)
}

The UI for the app (which I won’t go through here) is just a regular UITableView with a row for each train at the nearest station. However, it’s worth considering how the app will operate at different phases of its lifecycle – I want the live times to be updated in the background (for reasons that will become apparent later).

While the app is in the foreground, the location updates will come through as normal – I have a 50m distance filter on, as this is unlikely to change your nearest station. When entering the background, the app will switch to the ‘significant change’ location service – again, accuracy is not super important, so the coarse-grained significant change should work fine.

For the live train times network requests: in the foreground these will be triggered by a 30s NSTimer, and in the background by the background fetch API. I haven’t used background fetch before, but it seems like the right technology to use in this case – allow the OS to decide if the app should refresh its data, based on battery life, network connectivity, and usage patterns of the app.

The various services are switched on & off like so:

    func applicationDidBecomeActive(application: UIApplication) {
        locationManager.startUpdatingLocation()
        timer = NSTimer.scheduledTimerWithTimeInterval(NSTimeInterval(30), target: self, selector: "timerFired:", userInfo: nil, repeats: true)
        timer?.fire()
    }

    func applicationWillResignActive(application: UIApplication) {
        locationManager.stopUpdatingLocation()
        timer?.invalidate()
        timer = nil
    }

    func applicationDidEnterBackground(application: UIApplication) {
        locationManager.startMonitoringSignificantLocationChanges()
    }

    func applicationWillEnterForeground(application: UIApplication) {
        locationManager.stopMonitoringSignificantLocationChanges()
    }

This is enough to start road-testing the app functionality on the phone, and maybe start formulating a few ideas around the best functionality to prioritise on the watch. Speaking of the watch, next week I’ll have a surprise along those lines. As before, the code is available on Bitbucket.

Bird Nerd 1.0

So, I built a game.

Bird Nerd Icon

Following the yanking of Flappy Bird from the App Store (and the subsequent proliferation of indie apps), a colleague said: “We should build an app called ‘Flappy Word’, where instead of flying through pipes, you collect letters and make words”. This sounded like an absolute winner of an idea, so I went home and coded up the basic game in SpriteKit that evening. We spent the next week and a bit refining gameplay, designing artwork, gathering feedback from users, changing the name to something that doesn’t ‘leverage a popular app’, and then submitted it to the App Store.

I have to confess to not being much of a gamer (and I haven’t built a game before), so I’m not well equipped to assess whether the game is any good, or predict how many will download it. However, the process of building it was certainly enjoyable, and as I learnt a lot doing it, I’ll go through some of the key design decisions here.

SpriteKit

I haven’t done much work with game engines, but for a game newbie, SpriteKit is a very well-designed framework (if you’re okay with iOS 7+ only). It’s allegedly heavily inspired by the popular Cocos2D, but beyond that, it’s a first-class Apple framework with the expected levels of integration and consistency with the rest of UIKit & Core Foundation. I’d picked up a copy of Dmitry Volevodz’s ‘iOS 7 Game Development’, which uses an endless runner game as an example, and was able to use this to ramp up on the framework pretty quickly.

SpriteKit has a relatively simple and understandable model, which revolves around SKScenes, SKNodes and SKActions. For an 8-bit-style game, it required liberal use of node.texture.filteringMode = SKTextureFilteringNearest to stop texture filtering from blurring the low-res artwork when scaled up.

Some of the other tips/techniques I discovered were:

  • OpenGL really works the simulator – it always spins up the MacBook fans regardless of whether it’s doing much.
  • Prefer using SKActions for behaviour rather than dumping everything in the -update: method.
  • Subclassing SKSpriteNode for each node type is a good idea, in order to keep the code well separated.
  • Implementing UI elements like buttons in SpriteKit is pretty clunky. It would be nice if it was easier to mix UIKit controls into the scene.
  • Texture atlas support is nice, but they aren’t generated during command-line builds – this drove me crazy for a while trying to work out why the TestFlight builds kept crashing.
  • OpenGL really, really does not like running while the app’s in the background. Pause the scene in -applicationWillResignActive: and -applicationDidEnterBackground:

Word lists & game logic

The interesting problems that needed to be solved here were:

  1. spawning new letters in a random, yet playable order
  2. detecting when a valid (or invalid) word is formed

Item 2 requires a word list. As a writer of a non-US flavour of English, it was important to me that there be a choice of word lists (thanks to 12dicts). Despite the Apple documentation and some older references to the contrary, there is definitely a British English language preference in iOS, so it was relatively straightforward to load the correct list according to the user’s language settings. The word lists required some pre-processing in a simple Ruby script to remove words with punctuation and words shorter than 3 characters. Potentially, I could also strip out words that have a valid word as a prefix (e.g. ‘doggerel’ is unplayable as it’s prefixed with ‘dog’), but I’ve kept them in the list for the moment for potential game enhancements.

The word lists contain up to 75,000 words, which is workable as in-memory data, but really warrants an efficient lookup mechanism rather than scanning the entire array each time a letter is hit. Because I know the lists are sorted, I can use a binary search – Cocoa provides one via the -indexOfObject:inSortedRange:options:usingComparator: method of NSArray, which I implemented with a prefix check in the comparator. On each collision, the game can quickly check whether the current letter combination is a valid word, the start of a valid word, or invalid.
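The same idea can be sketched in plain C (illustrative only – the app uses NSArray’s built-in binary search rather than this hand-rolled version): binary-search the sorted list for the first entry not less than the candidate, then classify it.

```c
#include <string.h>

typedef enum { WORD_INVALID, WORD_PREFIX, WORD_EXACT } WordStatus;

// Binary-search a sorted word list for the first entry >= s (lower bound),
// then decide whether s is a complete word, the prefix of one, or a dead end.
WordStatus classify_word(const char *words[], int n, const char *s) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (strcmp(words[mid], s) < 0) lo = mid + 1;
        else hi = mid;
    }
    if (lo < n) {
        if (strcmp(words[lo], s) == 0) return WORD_EXACT;
        if (strncmp(words[lo], s, strlen(s)) == 0) return WORD_PREFIX;
    }
    return WORD_INVALID;
}
```

Note that a word like ‘dog’, which is both complete and a prefix of ‘doggerel’, reports as exact here – matching the game’s preference for scoring the word.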

For item 1, letters are spawned randomly using a very simple frequency weighting (i.e. vowels are more likely). Further localisation of the app would need the frequencies adjusted (and accented letters included, depending on the language). However, during play we noticed it got tedious waiting for a random spawn of the one letter you need, so I included a probabilistic component that spawns valid next letters more frequently. This results in the game hinting fairly explicitly in some instances (i.e. if there’s only one valid letter).
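Frequency-weighted spawning boils down to a cumulative-weight walk, and the hinting component then just means boosting the weights of letters that would extend the current word before rolling. A sketch (the game’s actual weights and boost factor are different from these illustrative numbers):

```c
// Pick a letter given per-letter weights and a roll in [0, total weight).
// The hint mechanic amounts to boosting weights[c - 'a'] for each letter
// that would extend the current word before generating the roll.
char weighted_letter(const int weights[26], int roll) {
    for (int i = 0; i < 26; i++) {
        if (roll < weights[i]) return (char)('a' + i);
        roll -= weights[i];
    }
    return '?'; // only reached if roll >= total weight
}
```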

Graphic design

It’s probably obvious neither of us are designers! We wanted to go for a retro style, both to make it more Flappy Bird-esque, and because we’re both ancient enough to have played C64/Amiga/classic Mac era games. It has an Australian bush theme, just for something different (see if you can spot the Western Grey Kangaroo).

As an example to try to illustrate the workflow, the bird sprite went through the following evolution:

Bird Nerd Sprite v1

S: “I slightly modified the Flappy Bird sprite to make it look more like a magpie lark”

H: “That’s meant to be a magpie lark?”

Bird Nerd Sprite v2

H: “Fek me drawing is hard!”

Bird Nerd Sprite v3

S: “I gave him a bigger eye.”

Bird Nerd Sprite v4

H: “Larger Bird Nerd sprite. I liked what you did with the glasses thing and have tried to enhance that further.”

Bird Nerd Sprite v5

S: “I think his eye still needs to be bigger. Should he have more white on his belly?”

Bird Nerd Sprite v6

H: “I think a coloured beak and legs looks pretty good?”

Bird Nerd Sprite v7

S: “Couldn’t help myself – I had to make his glasses bigger”

The coins were added late in development, after the sound effects were added, as they were quite reminiscent of the Mario-style coin collection and solved an issue with legibility and contrast in drawing the letters directly onto the background.

Game Physics

Bird Nerd Screenshot

The basic mechanics are close to that of Flappy Bird – tap to fly up. Flappy Bird appears to instantaneously set the upward velocity to a constant value rather than apply a set amount of force, i.e. it’s irrelevant how fast you were falling before the tap. I did wind back the gravity and upward velocity to make the game a bit more controllable – the early versions were incredibly difficult, and given the spelling component of the game is quite hard, it was a reasonable trade-off to make flying a little easier.
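In code terms, the difference is that a tap assigns the vertical velocity rather than adding an impulse to it. A minimal sketch of that integration step (the constants are made up, not the game’s actual values):

```c
typedef struct { float y, vy; } Bird;

#define FLAP_VELOCITY 300.0f  /* assumed: upward speed set by a tap */
#define GRAVITY      -900.0f  /* assumed: constant downward acceleration */

// A tap replaces the velocity outright, so how fast the bird was
// falling beforehand is irrelevant.
void tap(Bird *b) { b->vy = FLAP_VELOCITY; }

// Per-frame Euler integration of gravity.
void step(Bird *b, float dt) {
    b->vy += GRAVITY * dt;
    b->y  += b->vy * dt;
}
```

Tuning FLAP_VELOCITY and GRAVITY is exactly the ‘winding back’ described above.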

Initially we noticed players lurked at the top or bottom of the screen to wait for letters – we solved this by ‘landing’ and pausing the game if the bird got too low, and spawning letters all the way up to the top of screen (behind the score) to prevent staying high. Initially the letters were completely random with some hit-testing to prevent overlap, but once we went to letter coins it made more visual sense to lay them out on a grid.

The actual physics and collision code was remarkably simple thanks to SpriteKit; the only real issue I came across was a little bit of ‘drift’ in the player’s x position, presumably due to collisions (easily rectified by resetting in the SKScene -update). I shudder to think how much effort it would have been writing something like this in 6502 assembler on a C64.

Revenue Model

I went for iAd in-app advertising as a revenue model – even if I felt it was worth it, charging upfront for a game in such a crowded market is a really hard sell. In-App Purchase is where nearly all of the game revenue is, but it’s a fair bit of additional development work, and is generating wariness in some consumers thanks to an increasing number of slimy implementations. iAd isn’t regarded as a high earner, but I wanted to experiment with it (partly so I could have some stats for the next Perth iOS Meetup). Plus, I just can’t stand non-retina ads. And yes, I’m aware of the irony of complaining about pixelated ads in an intentionally pixelated game!

iAd implementation is quick & easy – the contract can be configured in about 10 minutes online, and integrating a banner view is only a couple of lines of code. There are a few gotchas – most of the Apple iAd documentation still refers to deprecated methods, and there are a handful of rejection-tempting faux pas you need to be aware of, such as displaying a blank banner view, or submitting screenshots showing test ads.

Final Thoughts

Working with someone else on an app was quite valuable, both in the sense of ‘two heads are better than one’, and to keep the motivation and development pace up. At this stage I have no idea how the app will be received, or how to go about marketing it, but I’ll add another post once the dust has settled and I’ve got some download and iAd stats. In the meantime, get your Bird Nerd on and post your score on Twitter!

Download Bird Nerd from the App Store

Book Review: Getting Started with LevelDB

Getting Started with LevelDB

I was asked to review the new LevelDB book from Packt (disclaimer: I know the author and was provided a free electronic copy). I’ve done a reasonable amount of experimental hacking using LevelDB on iOS in the past, so I was keen to see whether there was much I’d learn from a ‘Getting Started’ book.

Firstly, a bit about my background in this area. Thanks to my abiding hatred of all things Core Data (remind me to do a blog post on that later), I’d looked into LevelDB a little while ago as a possible alternative iOS persistence technology. I’ve used NoSQL (document) databases on web projects in the past, and I was looking for something that gives the performance, simplicity and syncability of something like CouchDB or RavenDB on iOS (before you comment, yes, I know about TouchDB and Couchbase Mobile). I found LevelDB to be a little too low-level to offer a complete persistence solution out of the box, and partially managed to avoid the temptation to go off and build one (another blog post). I knew Andy had a background in ISAM and embedded storage engines, not to mention that horrid C++ language, so I was hopeful he’d have some better approaches to real-world usage.

I also should mention there was a mild controversy over the book when it was released – the title implies generic LevelDB content, whereas the book focuses on practical implementation in OS X and iOS applications, which some people felt was misleading. I didn’t have a problem with it, but then I’m only really interested in using it in OS X & iOS applications.

The ‘building and installing’ chapters are quite detailed, and explicitly show things going wrong, provide an explanation why, and show how to rectify it. I found them a little bit tedious, but people inexperienced with building open-source C/C++ projects would probably need the extra hand-holding (to be honest, I ignored the makefile & built LevelDB in Xcode – kudos to the developers for not making the build too exotic).

Andy then goes on to examine the C++ API and explain the query semantics and some of the core idioms and public data structures. This lays important conceptual foundations, but given I was already familiar with these I skimmed over most of it. It’s worth noting the code in these chapters is all defiantly C++ (e.g. cout rather than NSLog etc) – I can understand why this was done, given the disappointing and inconsistent state of Objective-C API options (more on that in a moment), but it doesn’t make it any less jarring.

Finally we hit the Objective-C code and some of the OS X/iOS samples. I had a bit of a chuckle at the line:

Some people also have a strong aversion to C++ and will avoid anything that lacks an Objective-C interface.

(I suspect he’s probably talking about me). Complicating this task is the fact there’s no official or standard Objective-C API for LevelDB, so the book spends a bit of time covering the three most popular open source wrapper interfaces, and duplicates some of the sample code across the three. Eventually Andy settles on a customised version of APLevelDB for the remainder of the book (though I’m not sold on exposing the raw C++ DB reference), however ideally we’d instead have available a popular, stable, complete, well-maintained, idiomatic Cocoa LevelDB wrapper. This could have made the book much simpler and better, and allowed it to spend more time and energy discussing other topics. Hopefully someone who’s not me will write this someday.

Next we receive a walkthrough implementing a basic sample app – the book describes an OS X application, but it’s worth noting all the downloadable code samples include the iOS equivalent. There are also descriptions of some debugging tools, including dump, lev, and implementing a REPL on an iOS device via an embedded web server (quite a clever trick, though I’m not sure if I’ll ever use it). The sample app is then extended with more advanced functionality, and this is where (for me) things really started to get interesting. This included secondary indexes, key design considerations, custom comparators, record-splitting, and ‘schema support’ extensions to assist in maintaining keys. The last was quite a nifty idea – I can see the potential for a declarative (e.g. json file in the app bundle) index definition mechanism. Less code is always better™.

Chapter 9 was titled “A Document Database”, and I hoped it was something closer to what I’ve used in CouchDB. It wasn’t really – the sample was more geared towards a metadata database of external files. However, it does include a basic text indexing implementation, which is an area I’ve battled with before, so it’s given me enough to take this idea further. The links to test indexing algorithms and open source implementations are also invaluable – additional resources are extensively referenced throughout the book; it’s one of the things that’s been done very well.

Chapter 10, “Tuning and Key Policies” is gold, and probably worth the sticker price of the book on its own if you’re serious about implementing a LevelDB solution. It starts out describing LevelDB under the covers – memtables, SSTs, the eponymous levels, snapshots and Bloom filters. The various performance settings are discussed and recommendations made, then Andy gets into a discussion of how to structure data and keys to optimise performance, much of which revolves around understanding when & how often your data is read and updated. He also discusses some optimisations made by Basho in the Riak codebase.

Lastly, the appendix covers using LevelDB from three scripting languages (Ruby, Python & JavaScript (node)), which was worthy of inclusion given I’ve already come across the need to script data in & out of a database.

In summary, I think the book would be invaluable to anyone looking at using LevelDB from Objective-C, and most of it would still be very useful to developers on other platforms. It’s certainly given me a lot to mull over, and rekindled some of my excitement about LevelDB on mobile devices.

GC on iOS

The Whereoscope guys have put up a well-publicised post explaining why they prefer Android development to iOS. One of the main gripes they had was the lack of garbage collection; I felt I had to put forward my take on the GC question.

GC has been available in Cocoa since 2007, and there’s no technical reason (that I’m aware of) that it can’t be delivered as part of iOS. Apple has made an explicit choice to not ship GC in the OS. Without going into too much detail about the trade-offs involved, I think it’s fair to say that when Apple does need to make a trade-off between developer convenience/productivity and user experience, the user will win EVERY TIME. Developers who are expecting anything else are kidding themselves.

Regarding the other points — it’s much the same deal with the provisioning and App Store deployment processes; these are there primarily to deliver a safe, simple, reliable experience for the users. Criticisms of the documentation and Xcode are subjective, but probably not too far off the mark. I did find it quite odd that the Simulator was criticised as ‘too fast’, while Android’s emulator (too slow to be usable) is somehow presented as a good thing.