Watch App Development Blog – Week 3

In weeks 1 & 2, I got a Transperth-scraping REST API built and deployed to an AWS-based cloud host. This week I’ll get started on:

Step 3: The iPhone App

Third-party apps on Apple Watch are very limited and rely heavily on the companion iPhone app for logic, network access, location and so on, so the first place to start with the Watch app is on the iPhone.

The phone app itself will need useful functionality, otherwise it’s unlikely to be approved. I have a few ideas for cool features for the phone, but for the moment it can just display the live times for the nearest station. This will require:

  1. Getting the list of stations from the server
  2. Finding the nearest station based on the user’s current location
  3. Getting the live times for that station from the server

Let’s get started. I’m going to be building the app in Swift (of course). Mattt Thompson of NSHipster/AFNetworking fame has written a Swift-only networking framework called Alamofire, so we’ll start with it.

After setting up the framework following the instructions, I created a new Swift file called ‘ApiClient’. Downloading JSON from the server using Alamofire looks like the following:

func getAllStations(f: [Station] -> ()) {
    Alamofire.request(.GET, "\(hostname)/train")
        .responseJSON { (_, _, json, _) in
           if let json = json as? [NSDictionary] {
                let s = json.map({ Station(dictionary: $0) })
                f(s)
            } else { f([]) }
    }
}

struct Station {
    let id : String
    let name : String
    let location: CLLocation
    
    init(dictionary: NSDictionary) {
        self.id = dictionary["id"] as String
        self.name = dictionary["name"] as String
        let lat = Double(dictionary["lat"] as NSNumber)
        let long = Double(dictionary["long"] as NSNumber)
        self.location = CLLocation(latitude: lat, longitude: long)
    }
}

I’m using a global function (to be more functional) with a callback parameter that takes an array of stations. The returned JSON array is mapped over to convert the NSDictionary instances into Station values. If anything goes wrong, an empty array is passed to the callback – this isn’t brilliant error handling and will probably change, but it’s clearer to show as-is for now.
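
Calling it from the view controller then looks something like this (a hypothetical call site – the stations and tableView properties stand in for the real UI wiring):

getAllStations { stations in
    // Hypothetical: stash the result and refresh the table
    self.stations = stations
    self.tableView.reloadData()
}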

The user’s location can then be retrieved from a CLLocationManager, and the nearest station calculated like so:

    func nearestStation(loc: CLLocation) -> Station? {
        return stations
            .filter({ $0.distanceFrom(loc) <= 1500 })
            .sorted({ $0.distanceFrom(loc) < $1.distanceFrom(loc) })
            .first
    }

    // where distanceFrom is defined on Station as 
    func distanceFrom(otherLocation: CLLocation) -> CLLocationDistance {
        return location.distanceFromLocation(otherLocation)
    }

This will filter out all stations greater than 1.5km away, and return the nearest of the remainder (or nil if there are no nearby stations).

From there, we can retrieve the live times from the server using the code:

func getLiveTrainTimes(station: String, f: [Departure] -> ()) {
    Alamofire.request(.GET, "\(hostname)/train/\(station)")
        .responseJSON { (_, _, json, _) in
            if let json = json as? [NSDictionary] {
                let s = json.map({ Departure(dictionary: $0) })
                f(s)
            } else { f([]) }
    }
}

Wait – this looks pretty much identical to getAllStations, just with a different URL and return type. Let’s refactor:

protocol JSONConvertible {
    init(dictionary: NSDictionary)
}

func get<T: JSONConvertible>(path: String, f: [T] -> ()) {
    Alamofire.request(.GET, hostname + path)
        .responseJSON { (_, _, json, _) in
            if let json = json as? [NSDictionary] {
                f(json.map({ T(dictionary: $0) }))
            } else { f([]) }
    }
}

// we can then redefine the other request methods as:
func getAllStations(f: [Station] -> ()) {
    get("/train", f)
}

func getLiveTrainTimes(station: String, f: [Departure] -> ()) {
    get("/train/\(station)", f)
}
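
One thing not shown above: Station and Departure need to declare that they conform to JSONConvertible. Since they already define the required initialiser, that’s just a pair of empty extensions:

extension Station: JSONConvertible {}
extension Departure: JSONConvertible {}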

The UI for the app (which I won’t go through here) is just a regular UITableView with a row for each train at the nearest station. However, it’s worth considering how the app will operate at different phases of its lifecycle – I want the live times to be updated in the background (for reasons that will become apparent later).

While the app is in the foreground, the location updates will come through as normal – I have a 50m distance filter on, as moving less than that is unlikely to change your nearest station. When entering the background, the app will switch to the ‘significant change’ location service – again, accuracy is not super important, so the coarse-grained significant change service should work fine.
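
The location manager setup itself isn’t shown here – roughly, it amounts to something like this (plus an NSLocationAlwaysUsageDescription entry in Info.plist):

let locationManager = CLLocationManager()

func setupLocationManager() {
    locationManager.delegate = self
    locationManager.desiredAccuracy = kCLLocationAccuracyHundredMeters
    locationManager.distanceFilter = 50 // metres – small moves won't change the nearest station
    locationManager.requestAlwaysAuthorization() // 'always' is needed for significant-change updates in the background
}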

For the live train times network requests, in the foreground these will be triggered by a 30s NSTimer, and in the background by the background fetch API. I haven’t used background fetch before, but it seems like the right technology for this case – it allows the OS to decide whether the app should refresh its data, based on battery life, network connectivity, and the app’s usage patterns.
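
The app delegate’s fetch plumbing isn’t shown in the lifecycle code below; a rough sketch of what it needs (the nearestStationId and latestDepartures properties are hypothetical, and the Background fetch capability has to be switched on for the target) looks like:

func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject : AnyObject]?) -> Bool {
    // Let the OS decide how often to wake us up for a fetch
    application.setMinimumBackgroundFetchInterval(UIApplicationBackgroundFetchIntervalMinimum)
    return true
}

func application(application: UIApplication, performFetchWithCompletionHandler completionHandler: (UIBackgroundFetchResult) -> Void) {
    if let station = nearestStationId { // hypothetical property holding the current station id
        getLiveTrainTimes(station) { departures in
            self.latestDepartures = departures // hypothetical property the UI reads from
            if departures.isEmpty {
                completionHandler(.NoData)
            } else {
                completionHandler(.NewData)
            }
        }
    } else {
        completionHandler(.NoData)
    }
}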

The various services are switched on & off like so:

    func applicationDidBecomeActive(application: UIApplication) {
        locationManager.startUpdatingLocation()
        timer = NSTimer.scheduledTimerWithTimeInterval(NSTimeInterval(30), target: self, selector: "timerFired:", userInfo: nil, repeats: true)
        timer?.fire()
    }
    
    func applicationWillResignActive(application: UIApplication) {
        locationManager.stopUpdatingLocation()
        timer?.invalidate()
        timer = nil
    }
    
    func applicationDidEnterBackground(application: UIApplication) {
        locationManager.startMonitoringSignificantLocationChanges()
    }

    func applicationWillEnterForeground(application: UIApplication) {
        locationManager.stopMonitoringSignificantLocationChanges()
    }

This is enough to start road-testing the app’s functionality on the phone, and maybe start formulating a few ideas about which functionality to prioritise on the watch. Speaking of the watch, next week I’ll have a surprise along those lines. As before, the code is available on bitbucket.

Watch App Development Blog – Week 2

In Week 1, I got a (very) basic Haskell REST web service running that scraped the Transperth site for live train times. Now we’re up to:

Step 2: Build & Deployment

Like most developers working on side-projects, I don’t want to pay a bundle for hosting a service during development when it really doesn’t need many resources; however, when the product goes live and inevitably becomes a raging success, I need to be able to scale capacity quickly & easily. In the past I’ve used freemium PaaS providers like Heroku and AppHarbor, which are designed for exactly this scenario.

I started down the Heroku path using Joe Nelson’s buildpack, but I immediately hit Heroku’s 15 minute build timeout. There are a variety of ways around this, although it got me thinking (as I’ve pondered in the past about AppHarbor): why do I need to build on my hosting provider at all? Heroku was originally designed for deploying apps written in Ruby that didn’t need compilation; pushing source & compiling on the server seems like a hack to me.

Docker is the new hotness in packaging and application deployment, and is better suited to building a compiled web application locally and deploying to a cloud host. I thought I’d give this a go.

Docker Development on OS X

The Docker host relies on specific features of the Linux kernel, which means that working with containers locally on OS X or Windows requires running them inside a Docker host in a Linux VM. This starts to get a bit onioney. My initial inclination was to do Docker development using Vagrant – the same method I use for working on other web systems targeting a Linux host. After spending considerable time trying out different methods of running Docker through Vagrant, I came to the conclusion that it wasn’t worth the hassle for a simple deployment like this one. Instead, my model would be:

  1. While I’m developing locally, just run the service directly on OS X without using Docker.
  2. When I’m ready to deploy, spin up boot2docker and build the container.
  3. Commit & push the image to a remote Docker repo.
  4. Deploy the image to the cloud host from the repo.

I strongly recommend getting started with Docker using Chris Jones’ “Missing Guide”. I installed using the downloadable installer rather than homebrew, but the only real config change I needed to make was to give the boot2docker VM more RAM – GHC struggles a bit unless it has plenty. Run boot2docker config > ~/.boot2docker/profile, then edit that file and change the ‘Memory’ setting (I gave it 4096). I didn’t configure any port-forwarding as I’m only using Docker to build the image.
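
The relevant part of the generated profile ends up looking something like this (key names can vary between boot2docker versions, so check your own file):

# ~/.boot2docker/profile
Memory = 4096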

Building a Haskell Docker image

Docker Hub has an official Haskell image, which is a good starting point for development. I implemented a Dockerfile starting from the example at the end of the README. I needed to add an extra step to cater for my gps-1.2 requirement, which is (still) not available on Hackage at the time of writing.

FROM haskell:7.8

RUN cabal update

# Add .cabal file
ADD ./perthtransport.cabal /opt/app/perthtransport.cabal

# Install gps-1.2 from source
ADD gps /opt/app/gps
RUN cd /opt/app/gps && cabal install

# Docker will cache this command as a layer, freeing us up to
# modify source code without re-installing dependencies
RUN cd /opt/app && cabal install --only-dependencies -j4

# Add and Install Application Code
ADD . /opt/app
RUN cd /opt/app && cabal install

# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH

EXPOSE 3000

# Default Command for Container
WORKDIR /opt/app
CMD ["perthtransport"]

I also needed to create a .dockerignore to ensure the cabal sandbox was excluded from the context. Once this was done, my build process consisted of running:

boot2docker up
docker build -t <repo:tag> .
docker push <repo:tag>
boot2docker down

Container Hosting in the cloud

Unfortunately the container hosting landscape seems a bit immature at present – I’d love to have a Heroku-like service that lets me deploy scalable containers as simply as doing a docker push. Also, while Docker is standardised at the container level, most providers (ECS, Digital Ocean etc) seem to be inventing their own clustering layers on top. Maybe Swarm will fix that – let’s wait and see.

I ended up going with Tutum – they have a good-looking, self-explanatory web interface, a web service API, and a CLI tool (brew install tutum). They don’t do the hosting themselves though – you need to register your own cloud host account (AWS, Azure, Digital Ocean) with them & they manage the nodes for you. They do give you a private repository, plus the service is ‘free forever’ if you sign up as a developer now. I’m using an AWS t2.micro instance under the free usage tier as the only node at present.

I set up the initial service definition via the web UI; to redeploy the latest image from the repo, I just need to run tutum service redeploy <imageid>.

Scripting the deployment

I used rake as a build scripting tool, for no other reason than that’s what I normally use for Xcode builds. The process is simple enough that you could probably just use a bash script though.

task :run do
  sh "cabal install --only-dependencies"
  sh "cabal build"
  sh "dist/build/perthtransport/perthtransport"
end

task :deploy do
  version = File.open("perthtransport.cabal").read().match(/^version:\s*([^\s]*)$/)[1]
  puts "Building version #{version}"
  begin
    sh "boot2docker up"
    sh "docker build -t #{DOCKER_REPO}:#{version} ."
    sh "docker push #{DOCKER_REPO}:#{version}"
    sh "tutum service redeploy #{TUTUM_SERVICE_ID}"
    sh "git tag -a #{version} -m 'Build #{version}' && git push origin tag #{version}"
  ensure
    sh "boot2docker down"
  end
end

So now I can build & run locally with a rake run and deploy to an AWS node with rake deploy. Next week we’ll start on the actual watch app functionality. In the interim, the source code is available on bitbucket.

Watch App Development Blog – Week 1

Okay, I’m trying something new – a weekly blog post talking about my progress developing an app. The theory is that the thought of my massive readership expectantly waiting for the next update will give me enough of an incentive to get something finished. Everyone practise their sad face to make me feel guilty if I don’t post an update.

I’m keen to get out an Apple Watch app. To be honest, I don’t think they’ll make much money, but most apps I build are for other people; it would be nice to put out something good, so I can say “I did that!”

The Concept

Over the last few months, I’ve tried to be aware of instances where I need some information off my phone, but pulling it out & launching an app seems like too much of a hassle. One scenario I noticed was when I was heading to the train station – I used to know the departure times off by heart, but now I’m not sure whether to run or dawdle. It would be great if there was an app on my watch that gave me live departure times for my nearest station – challenge accepted!

Step 1: The API

Transperth don’t have a public API, although there’s an unofficial third-party one that scrapes the website. Unfortunately some parts were broken by a recent site update, and the developer now lives in Melbourne. I also have a few ideas for some custom API behaviour, so I made the probably ill-advised decision to build my own scraping API.

I really wanted to try out a web project in F#, but I didn’t want to develop on Windows and I ended up running into significant problems with Xamarin – broken project templates, unimplemented parts of the aspnetwebstack, etc. ASP.NET vNext looks promising, but I had issues with it also.

So I thought I’d give Haskell another go – I’ve tried this in the past, but I’m much gooder at Haskell now. The state of web frameworks in Haskell has also improved significantly since 2010. I went with Scotty – I like the simplicity of the Sinatra/NancyFx model, and there’s a great walk-through by Aditya Bhargava.

Haskell Web Development on OS X

If you’re playing along at home, you’ll need to follow these steps to run the API:

  1. Install the Haskell Platform
  2. Run cabal sandbox init in your project directory. Cabal sandbox installs dependencies in a project scope, similar to Bundler in Ruby.
  3. Create a cabal file specifying your dependencies. I found this process a little odd – effectively you’re specifying your executable as a library – but it allows you to leverage cabal’s dependency resolution. Use Adit’s cabal file as a base (there’s a minimal sketch just after this list).
  4. Create a Main.hs and add your Scotty routes (check out the examples).
  5. Run cabal install && .cabal-sandbox/bin/<executable name>
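
For reference, a stripped-down cabal file along those lines might look like the following – the package name and dependency list here are illustrative, so base yours on Adit’s:

name:                perthtransport
version:             0.1.0.0
build-type:          Simple
cabal-version:       >=1.10

executable perthtransport
  main-is:             Main.hs
  build-depends:       base, scotty, aeson, tagsoup
  default-language:    Haskell2010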

After an extended compile time, you should now have a web server running on localhost:<port>.

JSON Response Types

Returning JSON can be done by defining record types that implement the ToJSON type class (from Aeson):

{-# LANGUAGE DeriveGeneric #-}
module Types where

import Data.Aeson
import GHC.Generics

data Departure = Departure { time :: String, destination :: String, pattern :: String, status :: String } deriving (Generic, Show)
instance ToJSON Departure

HTML Parsing

Parsing the DNN-generated web page is done using tagsoup. This differs from most other HTML parsing libraries I’ve used in that it doesn’t define a query API or CSS-like selector syntax over a DOM, it just converts the HTML into a flat list of nodes that can be manipulated using regular list functions.

My scraping function, which is probably not brilliant Haskell, looks like the following (excluding some helpers, which I’ve sketched after the snippet):

getTrainTimes :: String -> IO [Departure]
getTrainTimes x = do tags <- fmap parseTags $  openURL $ "http://www.transperth.wa.gov.au/Timetables/Live-Train-Times?stationname=" ++ (urlEncode x)
                     let table = head $ tables tags -- first table in page
                     let rowArray = reverse . tail . reverse . tail $ rows table -- strip first & last rows
                     let times = map (f . cells) rowArray -- convert each row into a Departure
                     return times
                  where f cs = Departure (textFromCell $ cellAtColumn 0 cs) (destFromCell $ cellAtColumn 1 cs) (patternFromCell $ cellAtColumn 2 cs) (textFromCell $ cellAtColumn 3 cs)
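
The helpers themselves aren’t part of the snippet above; a rough sketch of how they can be written with tagsoup’s partitions and innerText (the real destFromCell and patternFromCell dig further into the cell markup, so I’ve left those out) looks like:

import Text.HTML.TagSoup

-- Split the flat tag stream into chunks, one per element of interest
tables, rows, cells :: [Tag String] -> [[Tag String]]
tables = partitions (isTagOpenName "table")
rows   = partitions (isTagOpenName "tr")
cells  = partitions (isTagOpenName "td")

cellAtColumn :: Int -> [[Tag String]] -> [Tag String]
cellAtColumn n cs = cs !! n

-- Collapse the text inside a cell, normalising whitespace
textFromCell :: [Tag String] -> String
textFromCell = unwords . words . innerText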

‘Stations Near Me’

In addition to querying for live times, I also have a flat text file of station names & locations I lifted from Darcy’s project. This is used to respond to the ‘all stations’ API call, and also supports a geospatial query endpoint using the gps package. Initially I started getting build failures with this dependency – it was trying to compile GPX file support, which I don’t need. The latest version (1.2) of the gps code has removed this dependency, but it’s not on Hackage yet.

Happily, this is solvable:

  1. Specify the specific version of the package in your cabal file: gps >=1.2
  2. Put the source code in your project directory (e.g. git submodule add git@github.com:TomMD/gps.git)
  3. Specify the new source directory with cabal sandbox add-source gps

Using the Geo.Computations model, it’s then fairly straightforward to filter the list of stations based on distance to a given point.
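
Something like the following is what I mean – I’m going from memory of the Geo.Computations API here (a pnt smart constructor plus a distance function returning metres), so treat it as a sketch rather than the exact code in the repo:

import Geo.Computations (Point, pnt, distance)

-- Assumed shape: a station name paired with its parsed location
type NamedStation = (String, Point)

-- Stations within 1.5km of a given latitude/longitude
stationsNear :: Double -> Double -> [NamedStation] -> [NamedStation]
stationsNear lat lon = filter (\(_, p) -> distance here p <= 1500)
  where here = pnt lat lon Nothing Nothing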

Bringing it Together

Once the station list, live times, and geospatial filtering were done, it was just a case of defining the appropriate route functions in Scotty:

  scotty port $ do
    get "/train/" $ do
      list <- liftIO stations
      json list
    
    get "/train/near" $ do
      y <- param "lat"
      x <- param "long"
      list <- liftIO $ stationsNear y x
      json list
    
    get "/train/:station" $ do
      stationId <- param "station"
      station <- liftIO $ station stationId
      case station of
        Just s -> do
          times <- liftIO $ getTrainTimes $ name s
          json times
        Nothing ->
          Web.Scotty.status status404

I haven’t touched cache control or more advanced error handling, and it would be nice to fall back on timetables if live times aren’t available, but I now have enough of an API running to support the basic functions of the watch app. One of the things I liked about doing it in Haskell was that once it compiled, it generally worked. It’s a pretty nice feeling.

I’ve put the code up on bitbucket – feel free to have a look through it and send some feedback if you can’t stand my beginner Haskell.

Next Steps

Next is hosting – building and deploying my dinky API somewhere I can reach it. Tune in next week for another thrilling instalment!

IKEA desk hacking to mount a Rode mic arm

I’ve been working from home more frequently the past few months for some reason, so I eventually bit the bullet and constructed a permanent home workstation. I’ll post up more details of my setup later on, but when I tweeted a pic of my mic arm, it got a fair bit of interest, so I thought I’d give a bit more detail here.

I bought a Rode Podcaster dynamic mic a few months ago to start experimenting with recording screencasts – both to distribute over the web, and also as a backup for an epic presentation demo failure. Now, the important thing with sensitive microphones is that they’re shock-mounted to protect them from bumps & knocks (and even just vibrations on the desk from keyboard use), as these tend to create significant noise in the recording. This is what my first setup looked like:

Budget Shockmount

Suffice it to say, cheaping out on a proper shockmount & arm isn’t worth it – I supplemented this setup with the Rode PSM1 and PSA1 shortly afterwards.

The PSA1 comes with both a clamp-on mount and a through-desk mount. The clamp is the best option if you don’t want to irreversibly modify your desk, but it’s not suitable for all desks: mine doesn’t have an overhang to clamp to. The through-desk mount involves drilling a hole in your desk to mount an insert – this is how it’s done:

Step 1: Drill a hole in your desk. There are two important components to this step – the drilling of the hole, and knowing where to drill the hole.

  1. Drilling the hole – it’s best to use a holesaw – a flat (spade) bit is not great for going through the particle board that most desks are made out of. 22mm is right for the Rode desk insert. My one looked like this:
    Sutton holesaw
    Make sure you drill from the top; the surface may be splintered or chipped when the holesaw exits.  That said, the insert will cover an area around the hole anyway.
  2. Knowing where to drill – the main considerations here are:
    • Will it reach where you want it to reach? It’s a bit difficult out of the mount, but I’d try holding the base of the arm in your proposed position and making sure the mic can be positioned comfortably. At full stretch, the arm will reach roughly 600mm from the centre of the hole.
    • If you mount in the corner of your desk (as is common), is there enough wall clearance for the back of the arm? When weighted with something as heavy as the Podcaster, the arm can stick back about 120-130mm; unweighted (or if you push it), it can move back about 200mm.
    • Don’t drill right on the edge – the lip of the insert will overlap nearly 30mm from the centre of the hole.
    • Make sure you have clearance underneath the desk; the insert will probably stick through about 50-60mm.

Step 2: Push the insert into the hole & screw up the nut underneath. This is knurled; it’s just meant to be finger-tight.

Step 3: In my case, the desk drawers go nearly all the way to the back of the desk, and needed surgery to clear the bottom of the insert. I removed the drawer, carefully cut down with a handsaw, and used a sharp chisel to remove the section. It looks a bit rough, but hopefully no-one will see it.

Drawer Clearance

Step 4: You’re done! Slot the arm into the insert and start recording.

Finished Mic Arm

Mary & Tom Poppendieck – The Scaling Dilemma

I’ve been a fan of the Poppendiecks’ work on Lean Software Development for a while, so I was quick to sign up when I saw they were coming back to Perth as part of a YOW! Night. The talk was entitled ‘The Scaling Dilemma’, and covered issues encountered in scaling development teams beyond the popular “2 pizza” size. I haven’t worked much with larger teams, but I didn’t want to miss the opportunity to see speakers of this calibre in Perth.

Mary presented (I think Mary always does the presentations) some intriguing anthropological background on why teams are typically the size they are – the 5–7 person inner circle, the 12–15 person sympathy group, the 30–50 person hunting party and the ~150 person clan. She showed some evidence of these organisation sizes recurring throughout human history – the Roman Army & other military groups, stone-age villages, University departments, Gore & Associates, etc. Based on her background in hardware product development, she was most familiar with ‘hunting party’-sized teams.

Other than that, some of my key takeaways were:

  • “Monopolies destroy collaboration” – if there’s a group/team/department in an organisation that doesn’t (want to or have to) accommodate others, this will eventually destroy inter-team trust & collaboration.
  • The application of the Theory of Constraints as an underlying principle behind some modern software best practices – e.g. continuous delivery can be viewed as an attempt to break the release cycle/integration constraint.

It was a really thought-provoking presentation; it’s great to see these sorts of speakers come to Perth where possible – YOW! and BankWest deserve full credit for continuing to make this happen.

“Unrecognized option: -files” in hadoop streaming job

I was recently working on an Elastic MapReduce Streaming setup that required copying a few extra Python files to the nodes in addition to the mapper/reducer.

After much trial & error, I ended up using the following .NET AWS SDK code to accomplish the file upload:

var mapReduce = new StreamingStep {
    Inputs = new List<string> { "s3://<bucket>/input.txt" },
    Output = "s3://<bucket>/output/",
    Mapper = "s3://<bucket>/mapper.py",
    Reducer = "s3://<bucket>/reducer.py",
}.ToHadoopJarStepConfig();

mapReduce.Args.Add("-files");
mapReduce.Args.Add("s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

var step = new StepConfig {
    Name = "python_mapreduce",
    ActionOnFailure = "TERMINATE_JOB_FLOW",
    HadoopJarStep = mapReduce
};

// Then build & submit the RunJobFlowRequest

This generated the rather odd error:

ERROR org.apache.hadoop.streaming.StreamJob (main): Unrecognized option: -files

Odd, because -files most certainly is an option.

Prolonged googling later, and I discovered that the -files option needs to come first. However, StreamingStep doesn’t give me any way to change the order of the arguments – or does it?

I eventually realised I was being a bit dense. ToHadoopJarStepConfig() is a convenience method that just generates a regular JarStep… which exposes the args as a List. Change the code to this:

mapReduce.Args.Insert(0, "-files");
mapReduce.Args.Insert(1, "s3://<bucket>/python_module_1.py,s3://<bucket>/python_module_2.py");

and everything is awesome.

Basic Auth with a Web API 2 IAuthenticationFilter

MVC5/Web API 2 introduced a new IAuthenticationFilter (as opposed to the IAuthorizationFilter we needed to dual-purpose in the past), as well as a substantial overhaul of the user model with ASP.NET Identity. Unfortunately, the documentation is abysmal, and all the blog articles focus on the System.Web.Mvc.Filters.IAuthenticationFilter, not the System.Web.Http.Filters.IAuthenticationFilter, which is clearly something entirely different.

We had a project where we needed to support a Basic-over-SSL authentication scheme on the ApiControllers for a mobile client, as well as Forms auth for the MVC controllers running the admin interface. We were keen to leverage the new Identity model, mostly as it appears to be a much more coherent design than the legacy hodgepodge we’d used previously. This required a fair bit of decompilation and digging, but I eventually came up with something that worked.

Below is an excerpt of the relevant parts of our BasicAuthFilter class – it authenticates against a UserManager<T> (which could be the default EF version) and creates a (role-less) ClaimsPrincipal if successful.

public async Task AuthenticateAsync(HttpAuthenticationContext context, CancellationToken cancellationToken)
{
    var authHeader = context.Request.Headers.Authorization;
    if (authHeader == null || authHeader.Scheme != "Basic")
        context.ErrorResult = Unauthorized(context.Request);
    else
    {
        string[] credentials = ASCIIEncoding.ASCII.GetString(Convert.FromBase64String(authHeader.Parameter)).Split(':');

        if (credentials.Length == 2)
        {
            using (var userManager = CreateUserManager())
            {
                var user = await userManager.FindAsync(credentials[0], credentials[1]);
                if (user != null)
                {
                    var identity = await userManager.CreateIdentityAsync(user, "BasicAuth");
                    context.Principal = new ClaimsPrincipal(new ClaimsIdentity[] { identity });
                }
                else
                    context.ErrorResult = Unauthorized(context.Request);
            }
        }
        else
            context.ErrorResult = Unauthorized(context.Request);
    }
}

public Task ChallengeAsync(HttpAuthenticationChallengeContext context, CancellationToken cancellationToken)
{
    context.Result = new AddBasicChallengeResult(context.Result, realm);
    return Task.FromResult(0);
}

private class AddBasicChallengeResult : IHttpActionResult
{
    private IHttpActionResult innerResult;
    private string realm;

    public AddBasicChallengeResult(IHttpActionResult innerResult, string realm)
    {
        this.innerResult = innerResult;
        this.realm = realm;
    }

    public async Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        var response = await innerResult.ExecuteAsync(cancellationToken);
        if (response.StatusCode == HttpStatusCode.Unauthorized)
            response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic", String.Format("realm=\"{0}\"", realm)));
        return response;
    }
} 

Note that you’ll need to use config.SuppressDefaultHostAuthentication() in your WebApiConfig in order to prevent redirection from unauthorised API calls.
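
For completeness, the WebApiConfig wiring ends up looking something like the sketch below – the BasicAuthFilter constructor and realm value are illustrative, since the class above is only an excerpt:

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Suppress the host's default (cookie/forms) authentication for Web API routes,
        // so unauthorised API calls get a 401 challenge rather than a redirect to the login page.
        config.SuppressDefaultHostAuthentication();

        // Register the Basic auth filter globally (constructor and realm are illustrative)
        config.Filters.Add(new BasicAuthFilter("myapp"));

        config.MapHttpAttributeRoutes();
    }
}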