Keep Calm 3 for Android

Today I published Keep Calm 3 for Android. This update contains lots of minor changes and improvements since the last version:

  • New icon
  • Like on iOS, you can begin a line with a lowercase letter so that it appears smaller (like the 'and' in the original poster)
  • Improved poster appearance and responsiveness
  • Removed ads
  • Slightly modified UI
  • Android 4+ only

As well as these minor changes, this update is also the first time that I've unified the code base for Keep Calm and Keep Calm Pro. However, I will not be shipping further updates for Keep Calm Pro as a separate app, because Keep Calm 3 will allow users to upgrade via an in-app purchase (this will be available within the next week).

You can get the latest update to Keep Calm for free on Google Play.

Building Keep Calm 3.3

Keep Calm is a fairly small app, clocking in at around 5000 lines of code. For my most recent update I decided to focus not on new features, but on improving the user experience in simple but effective ways. I also removed a lot of code for iOS 6 (I had been using FlatUIKit on iOS 6, but not on iOS 7 - I'm no longer using any external dependencies). This post details the wide variety of changes that I made and some of the technical challenges that arose.

Color picker

The old version of Keep Calm had a bit of a rubbish color picker that was essentially just three UISliders. The new color picker is a dramatic change, and relies heavily on Core Animation and OpenGL ES. This is the first time I've shipped an app that uses OpenGL, although I've been playing with it since December 2012. 

The structure of the control is really simple. There is a GLKView that draws the main saturation-brightness square. I originally tried drawing this with Quartz 2D, but that proved far too slow; after shifting the calculations around I was able to draw the square using GLSL in far less than a millisecond (the video is a little laggy, but the view runs at 60fps on a real device). Wikipedia has all of the formulas that you need to convert HSB to RGB. Most implementations branch depending on which multiple of 60 the hue falls between, but branching should be avoided in shaders, so I instead compile six separate programs that are generated at runtime, one per sextant. I had anticipated that this would be slow, but I've had no startup issues with the view.

The hue slider is made up of a static 1px wide image. This is created on the CPU using the formula described above (a simplified version of it, at least).
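For reference, here's a minimal sketch of the hue-to-RGB conversion used for a strip like this, with saturation and brightness fixed at 1 (the function is my own illustration, not the app's code):

// Standard HSV sextant formula with S = V = 1; hue is in [0, 1).
static void HueToRGB(CGFloat hue, CGFloat *r, CGFloat *g, CGFloat *b) {
    CGFloat h = hue * 6.0;                          // which of the six sextants we're in
    CGFloat x = 1.0 - fabs(fmod(h, 2.0) - 1.0);     // the 'ramping' channel
    switch ((int)h) {
        case 0:  *r = 1; *g = x; *b = 0; break;     // red -> yellow
        case 1:  *r = x; *g = 1; *b = 0; break;     // yellow -> green
        case 2:  *r = 0; *g = 1; *b = x; break;     // green -> cyan
        case 3:  *r = 0; *g = x; *b = 1; break;     // cyan -> blue
        case 4:  *r = x; *g = 0; *b = 1; break;     // blue -> magenta
        default: *r = 1; *g = 0; *b = x; break;     // magenta -> red
    }
}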

Finally, the two 'loupes' which appear on the hue slider and the saturation-brightness square are CAShapeLayers with a bunch of custom animations for when the user touches up and down. This is a really nice effect and works incredibly well on the device. It also makes it a hell of a lot easier to see the color that you are picking out.
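As a rough illustration of the idea (my own sketch, not the code that ships in the app), a loupe like this is essentially a circular CAShapeLayer whose scale is animated when a touch begins and ends:

// A circular loupe drawn as a CAShapeLayer.
CAShapeLayer *loupe = [CAShapeLayer layer];
loupe.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 30, 30)].CGPath;
loupe.fillColor = [UIColor clearColor].CGColor;
loupe.strokeColor = [UIColor whiteColor].CGColor;
loupe.lineWidth = 2.0;

// On touch down, pop the loupe up; run the same animation in reverse on touch up.
CABasicAnimation *grow = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
grow.fromValue = @1.0;
grow.toValue = @1.8;
grow.duration = 0.15;
grow.fillMode = kCAFillModeForwards;
grow.removedOnCompletion = NO;
[loupe addAnimation:grow forKey:@"grow"];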

I have released the color picker under the MIT license on GitHub.

Text editor

The new text editor is WYSIWYG, whereas the old one required you to begin 'small' lines with a lowercase letter:

This was really easy to write using TextKit and UITextView, but it wouldn't have been possible on iOS 6 (which is why I dropped support for it). The only issue I ran into was the UITextView scrolling bug, but I managed to use this solution to fix it.
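The core idea is simply to give 'small' lines a smaller font via an attributed string; a minimal sketch (the font sizes and the textView variable are illustrative, not the app's actual values):

// Render 'small' lines at a smaller point size within the same UITextView.
UIFont *bigFont = [UIFont boldSystemFontOfSize:48];
UIFont *smallFont = [UIFont boldSystemFontOfSize:24];

NSMutableAttributedString *text = [[NSMutableAttributedString alloc] init];
[text appendAttributedString:[[NSAttributedString alloc] initWithString:@"KEEP CALM\n"
                                                             attributes:@{NSFontAttributeName: bigFont}]];
[text appendAttributedString:[[NSAttributedString alloc] initWithString:@"and\n"
                                                             attributes:@{NSFontAttributeName: smallFont}]];
[text appendAttributedString:[[NSAttributedString alloc] initWithString:@"CARRY ON"
                                                             attributes:@{NSFontAttributeName: bigFont}]];
textView.attributedText = text;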

Slick UICollectionView

UICollectionView is an awesome class, and I've been using it since iOS 6 came out for my main grid of posters. Each cell contains a UIImageView with a thumbnail of its poster. I do this, rather than drawing each poster directly or using the poster view from my editor, because it has proven to be the least memory intensive and the fastest solution so far. The process for displaying images is fairly simple:

  • The data source method is called to fetch the cell
  • If the thumbnail for the poster is already in a cache, it is loaded immediately
  • Otherwise it is loaded asynchronously and set in a completion callback (see the sketch below)
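In code, that flow looks roughly like this (a sketch only; PosterCell, the thumbnail cache and the renderThumbnail helper are stand-ins for whatever the app actually uses):

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    PosterCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"PosterCell"
                                                                 forIndexPath:indexPath];
    Poster *poster = self.posters[indexPath.item];

    // Hit the in-memory cache first so scrolling stays smooth.
    UIImage *thumbnail = [self.thumbnailCache objectForKey:poster.identifier];
    if (thumbnail) {
        cell.imageView.image = thumbnail;
    } else {
        cell.imageView.image = nil;
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // Load or render the thumbnail off the main thread, then cache it.
            UIImage *image = [poster renderThumbnail]; // hypothetical helper
            [self.thumbnailCache setObject:image forKey:poster.identifier];
            dispatch_async(dispatch_get_main_queue(), ^{
                // The cell may have been reused, so ask the collection view for the current one.
                PosterCell *currentCell = (PosterCell *)[collectionView cellForItemAtIndexPath:indexPath];
                currentCell.imageView.image = image;
            });
        });
    }
    return cell;
}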

However, this still didn't produce really smooth scrolling, especially on an iPad 3 or iPhone 4. These devices have a fairly similar CPU to their predecessors (the iPhone 3GS and iPad 2) but four times the number of pixels (their GPUs are much better, though), which means that loading images on them tends to perform a lot worse than on their immediate successors (the iPhone 4S and iPad 4). My new solution is to asynchronously load the thumbnails for cells a few rows ahead of (or behind, if scrolling up) the currently visible cells. This doesn't produce a noticeable performance drop and ensures that posters are usually immediately visible when scrolling.
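A minimal sketch of that prefetching idea (again my own illustration; the helpers, properties and row size here are assumptions):

// Warm the thumbnail cache for cells a couple of rows beyond the visible region.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    BOOL scrollingDown = scrollView.contentOffset.y > self.lastContentOffsetY;
    self.lastContentOffsetY = scrollView.contentOffset.y;

    NSInteger itemsPerRow = 3; // assumed grid width
    NSInteger maxVisible = 0, minVisible = NSIntegerMax;
    for (NSIndexPath *indexPath in [self.collectionView indexPathsForVisibleItems]) {
        maxVisible = MAX(maxVisible, indexPath.item);
        minVisible = MIN(minVisible, indexPath.item);
    }
    if (minVisible == NSIntegerMax) return; // nothing visible yet

    // Prefetch two rows in the direction the user is scrolling.
    NSInteger start = scrollingDown ? maxVisible + 1 : MAX(0, minVisible - 2 * itemsPerRow);
    NSInteger end = scrollingDown ? MIN((NSInteger)self.posters.count, maxVisible + 1 + 2 * itemsPerRow) : minVisible;
    for (NSInteger item = start; item < end; item++) {
        [self prefetchThumbnailForItemAtIndex:item]; // hypothetical async loader
    }
}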

A framework for my apps

Since last October (with the release of Hipster Lab) I've been building a utility framework for use in my apps that simplifies a lot of common tasks. Keep Calm uses the latest version of this framework, but most of the development on it this year has been the result of a major update to Play Time that I've been working on. The framework will be available on GitHub at some point, but there is a lot I need to fix first.

Accessibility

I'd always held off adding accessibility support (for visually impaired users) to Keep Calm because I thought it was far too visual an app, but I decided that it would at least be a worthwhile learning experience. I found that it was incredibly easy to add support, and I encourage other developers to consider adding it to their apps.
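Most of the work comes down to setting a few UIAccessibility properties on the relevant views; a small sketch (the labels and view names here are illustrative rather than the app's actual strings):

// Expose a poster cell to VoiceOver with a meaningful label and trait.
cell.isAccessibilityElement = YES;
cell.accessibilityLabel = [NSString stringWithFormat:@"Poster: %@", poster.text];
cell.accessibilityTraits = UIAccessibilityTraitButton;

// Purely visual controls get a descriptive label and hint as well.
colorPicker.accessibilityLabel = @"Color picker";
colorPicker.accessibilityHint = @"Changes the poster's background color";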

Keep it simple, stupid


A lot of Keep Calm relied on custom drawing code in order to render the poster. The main poster view was constructed from two custom CALayers with several hundred lines of Quartz code. Now I've just gone for three UIViews: one for the background (which can contain nothing, a gradient layer or a UIImageView, depending on the background content), one for the crown (a UIImageView) and one for the text (a custom UILabel). The benefit of this over my previous approach is that it is now much faster - rotations are a lot more slick, for example.
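In outline, the poster view now amounts to something like this (my own sketch of the structure described above, not the shipping code):

// The poster is plain view composition rather than hundreds of lines of Quartz drawing.
@interface PosterView : UIView
@property (nonatomic, strong) UIView *backgroundView;      // empty, hosts a gradient layer, or a UIImageView
@property (nonatomic, strong) UIImageView *crownImageView; // the crown graphic
@property (nonatomic, strong) UILabel *textLabel;          // a custom UILabel subclass in the real app
@end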

The greatest lesson I've learnt from this version of Keep Calm is, by far, to just go with the simplest solution because this is often the one that works the best.

What's next

This fall I plan to release v4 of Keep Calm, as well as an update to the Android version. The next version will be iOS 8 only (I will probably do a bug fix 3.x version that supports iOS 7 and 8) and will include minor user interface changes and maybe an extension so that you can create posters from pictures directly within the Photos app. Keep Calm will continue to be developed in Objective-C for the foreseeable future (some of my apps will be getting a little Swift treatment, though), but I'm strongly considering Xamarin on the Android side.

Keep Calm 3.3

After nearly two years on the App Store, I've just released Keep Calm 3.3. This release isn't focused on adding new features, but the app is now much faster and easier to use. Previously, if you wanted to create posters with lines of text of varying heights you had to type 'KEEP CALM and CARRY ON'; now there is just a simple switch in the UI. Furthermore, there's a new OpenGL ES accelerated color picker which I'm really happy with:


As well as the new text editor and color picker, the app has had a lot of performance improvements and will consume a lot less space on your device. I've also implemented accessibility support so that visually impaired users can use the app more easily.

You can download the update from the App Store for free.

CGContext in Swift

A lot of the code that I've seen on StackOverflow for correctly getting the CGContext from an NSGraphicsContext in Swift doesn't seem to work on OSX 10.9. The following does work as of Xcode 6 beta 4 running on OSX 10.9:

var context:CGContextRef = reinterpretCast(NSGraphicsContext.currentContext().graphicsPort)

In OSX 10.10 NSGraphicsContext gains a CGContext property that returns the context directly, however I haven't had this working correctly with Quartz.
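On 10.10 that should reduce to a one-liner along these lines (untested beyond the basics on my side):

// OSX 10.10+: NSGraphicsContext exposes the Quartz context directly.
let context = NSGraphicsContext.currentContext()!.CGContext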

SQLite in Swift frameworks

I have an unusually specific use case for Swift: I want to use it to replace the model layer in one of my apps, and to do so I wanted it in a separate framework (given that these are so easy to create in Xcode 6). My model layer is built on top of FMDB + SQLite, so that was a must-have. However, the latest beta of Xcode 6 (b4) removed bridging headers from frameworks - instead you have to add Objective-C imports to the 'umbrella header' of the framework. Unfortunately sqlite3 is a 'non-modular header import', which meant that I couldn't import FMDB into the Swift framework at all:


This was very frustrating because the Objective-C version of the framework would build perfectly! The solution, however, is to use module maps. These are an LLVM compiler feature that allows you to map non-modular libraries and frameworks, such as sqlite3, into modules so that they can be used from Swift.

Here's what I did to set up FMDB:

  1. Created a new Objective-C framework for FMDB in an Xcode workspace, and added all of the FMDB headers and source files
  2. Ensured that all FMDB headers were made public and that they were included in FMDB.h
  3. Linked libsqlite3.dylib
  4. Ensured that 'Defines Module' was set to 'Yes' in the FMDB build settings

Then I created a Swift framework that used FMDB:

  1. Create a new Swift framework (I called it ModelSwift)
  2. Link it with FMDB
  3. Add #import <FMDB/FMDB.h> to the umbrella header (ModelSwift.h)
  4. Create a module map (sqlite3.modulemap) and add it to the project (you could place it anywhere, however):
module sqlite3 [system] {
    header "/Applications/Xcode6-Beta4.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS8.0.sdk/usr/include/sqlite3.h"
    link "sqlite3"
    export *
}

module sqlite3simulator [system] {
    header "/Applications/Xcode6-Beta4.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator8.0.sdk/usr/include/sqlite3.h"
    link "sqlite3"
    export *
}

You then have to add the path to the directory that the module map is stored in to the 'Import Paths' build setting of the Swift framework.

Once you've done this you'll be able to freely use FMDB in your Swift framework. Once you've built the framework and are importing it into an app, you will also need to add the import path to your app's build settings to ensure that it picks up the module map as well (I'm not quite sure why this is). I found that I also needed to add an empty Swift file to my Objective-C app so that it would allow me to set the import paths. You may also need to enable non-modular headers in the build settings of the app.
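With that in place, FMDB is usable directly from Swift inside the framework, for example something along these lines (the database path and query are placeholders):

// Inside ModelSwift: FMDB is visible thanks to the umbrella header import.
let database = FMDatabase(path: databasePath)
if database.open() {
    if let results = database.executeQuery("SELECT name FROM posters", withArgumentsInArray: []) {
        while results.next() {
            println(results.stringForColumn("name"))
        }
    }
    database.close()
}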

Hopefully a future release of the beta will fix all of this, but this definitely works for now.

Replicating Overcast's show notes

Earlier this week Marco Arment released Overcast, a really elegant new podcast app for iOS. The show notes aren't displayed by default in the player; instead you swipe up on the show artwork to view them:


This is an effect that I quite like, so I thought I would take a look at how it could be implemented. Firstly, the show notes are probably presented using a UIWebView, because most podcasts use (relatively simple) HTML in their show notes. Secondly, a UIWebView is backed by a UIScrollView, so it is possible to add a content offset to the web view and display the artwork in an image view behind the web view. Here's what that hierarchy looks like:

Therefore, all you really need to do is resize the image view as the web view, which is in front, is scrolled. This can be done with some simple code in the UIScrollViewDelegate:

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // Once the notes are pulled up, the artwork shrinks to a third of the screen width.
    CGFloat miniSize = CGRectGetWidth(self.view.frame) / 3;

    if (scrollView.contentOffset.y < 0) {
        // The web view is pulled down past its top, so some of the artwork is visible.
        CGFloat size = miniSize;
        if (scrollView.contentOffset.y < -miniSize) {
            // Interpolate between the full-size artwork (320pt) and the mini size.
            CGFloat offset = scrollView.contentOffset.y + 320;
            CGFloat fraction = 1 - offset / (320 - miniSize);
            size = fraction * (320 - miniSize) + miniSize;
        }
        self.artworkImageView.frame = CGRectMake(CGRectGetMaxX(self.view.frame) - size, 0, size, size);
        self.artworkScrollView.contentOffset = CGPointZero;
    }
    else {
        // The notes have scrolled past the artwork, so scroll the artwork away with them.
        self.artworkScrollView.contentOffset = scrollView.contentOffset;
    }
}


If the user has scrolled between the artwork and the 'mini size' then the show notes will be displayed directly underneath. When the show note title is between the bottom of the artwork and the top of the scroll view, the artwork stays fixed; it zooms once the title is below the artwork. The interaction itself is pretty simple, but I really like the way it works. You can find my full implementation on GitHub. Here's a demo video:

iOS Developer FAQ

For the last few weeks I've been working on an extensive list of FAQs for new iOS developers, because they commonly need answers to questions that they may not know how to find. In order to write the FAQ, which is available on GitHub, I drew on my own experiences, StackOverflow and /r/iOSProgramming.

I don't want this to be a static document, so I'm actively looking for new questions and answers through issues and pull requests.

OpenCL fractal generation

I've been meaning to play around with OpenCL for a while (like a couple of years), so I decided to experiment with some of the basics. In this post I'm going to be focussing on using OpenCL on OSX to create some Mandelbrot fractals, so I'll assume you've already read the first few chapters of Apple's documentation (don't worry, it doesn't take long). If you want to skip the post and get straight to the code, please check it out on GitHub.

Start out by creating a new command line tool (Foundation) in Xcode, linking it with AppKit.framework, Foundation.framework and OpenCL.framework (you're going to want to do this because we'll need to write a tiny bit of Objective-C to save the images). Import these frameworks in main.m:

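In other words, the top of main.m ends up looking something like this (the umbrella header names are the standard ones, but treat the exact list as an assumption on my part):

#import <Foundation/Foundation.h>
#import <AppKit/AppKit.h>       // for NSBitmapImageRep, used to save the PNG
#import <OpenCL/opencl.h>       // Apple's OpenCL framework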

The next step is to actually write the kernel. OpenCL kernels are basically programs written in a C-like language that execute on the stream processors of the GPU, a little like OpenGL shaders (but way more powerful). The kernel is based on this GLSL shader (so I won't go into detail on complex numbers):
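The kernel itself isn't reproduced here, but based on the description below it looks roughly like this (a sketch: the coordinate mapping and greyscale colouring are my own choices, not necessarily the original's):

// Mandelbrot kernel: one work item per pixel, writing into an RGBA output image.
__kernel void mandelbrot(__write_only image2d_t output,
                         int width,
                         int height,
                         int iterations)
{
    int x = get_global_id(0);
    int y = get_global_id(1);

    // Normalise the pixel coordinate into a region of the complex plane.
    float2 c = (float2)((float)x / width * 3.5f - 2.5f,
                        (float)y / height * 2.0f - 1.0f);
    float2 z = (float2)(0.0f, 0.0f);

    int i;
    for (i = 0; i < iterations; i++) {
        // z = z^2 + c, treating float2 as a complex number
        z = (float2)(z.x * z.x - z.y * z.y, 2.0f * z.x * z.y) + c;
        if (dot(z, z) > 4.0f) break;
    }

    float value = (float)i / iterations;
    write_imagef(output, (int2)(x, y), (float4)(value, value, value, 1.0f));
}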

The kernel takes several arguments, including the output image to write to, the width of the image, the height of the image (which are used to normalise the coordinates) and the number of iterations to perform. This is fairly similar to the original GLSL shader, and it acts in a similar way because it is executed once per pixel. Now we need the Objective-C/C code to run the kernel:
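Here's a sketch of what that host code looks like (the mandelbrot_kernel block name comes from the header Xcode generates from the .cl file, and the image size and output path are my own choices - the real code is in the GitHub repository):

#import <Foundation/Foundation.h>
#import <AppKit/AppKit.h>
#import <OpenCL/opencl.h>
#import "mandelbrot.cl.h" // generated by Xcode from the .cl file

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        const size_t width = 2048, height = 2048;
        const cl_int iterations = 256;

        // 1. A GCD queue that targets the GPU.
        dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

        // 2. Host memory for the result: 4 bytes (RGBA) per pixel.
        unsigned char *pixels = malloc(width * height * 4);

        // 3. Describe the image format for OpenCL: RGBA, one byte per component.
        cl_image_format format = { .image_channel_order = CL_RGBA,
                                   .image_channel_data_type = CL_UNORM_INT8 };

        // 4. Allocate an OpenCL image for the kernel to write into.
        cl_mem image = gcl_create_image(&format, width, height, 1, NULL);

        dispatch_sync(queue, ^{
            // 5. Describe the 2D range the kernel executes over (one work item per pixel).
            cl_ndrange range = {
                2,                  // work_dim
                {0, 0, 0},          // global_work_offset
                {width, height, 0}, // global_work_size
                {0, 0, 0}           // local_work_size (let OpenCL decide)
            };

            // 6. Run the kernel.
            mandelbrot_kernel(&range, image, (cl_int)width, (cl_int)height, iterations);

            // 7. Copy the rendered image back into host memory.
            const size_t origin[3] = {0, 0, 0};
            const size_t region[3] = {width, height, 1};
            gcl_copy_image_to_ptr(pixels, image, origin, region);
        });

        // 8. Wrap the bytes in an NSBitmapImageRep and write a PNG to disk.
        NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:&pixels
                          pixelsWide:width
                          pixelsHigh:height
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:NO
                      colorSpaceName:NSDeviceRGBColorSpace
                         bytesPerRow:width * 4
                        bitsPerPixel:32];
        NSData *png = [rep representationUsingType:NSPNGFileType properties:nil];
        [png writeToFile:[@"~/mandelbrot.png" stringByExpandingTildeInPath] atomically:YES];

        gcl_release_image(image);
        free(pixels);
    }
    return 0;
}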

This code does the following:

  1. Creates a dispatch queue for OpenCL. On OSX Apple has made it super easy to run OpenCL kernels by integrating them with GCD. On other platforms a lot more boiler-plate code is required
  2. Allocates some bytes for the image (notice that we allocate 4 bytes - 1 unsigned integer - per pixel for the RGBA channels)
  3. Creates a struct describing the image format (RGBA, 1 byte per component) for OpenCL
  4. Allocates OpenCL memory for the image
  5. On the OpenCL queue a range is created to describe the image (this should be familiar once you've read through Apple's docs)
  6. Execute the kernel
  7. Copy the image data back to the main memory from OpenCL's memory
  8. Create an NSBitmapImageRep for the data, encode that as a PNG and export to disk

Voila! You'll find this in your home directory:

As a bonus, I also stuck this in a loop and generated a video for the first 1000 iterations:

OpenCL is really powerful, and Apple has done an awesome job at integrating it into OSX and Xcode. This project doesn't even begin to scratch the surface of what you can do with it. At some point soon I'm going to take a look at some more advanced topics such as image processing and integrating with OpenGL.