Free iOS app: When You’ve Got Swag

(Screenshot: When You've Got Swag running in the iOS Simulator)

I first released When You've Got Swag on Android a few months ago and, to my surprise, it ended up getting around 30,000 downloads. Over the weekend I wanted to play around with FMDB, so I put together a simple iOS app, and it struck me as a good time to bring the wildly ridiculous 'swag' images commonly found on Instagram and Tumblr over to iOS.

Having written the model code (which really is pretty simple), FMDB proved to be really awesome, and I can see myself using it as an alternative to Core Data in all my apps. It was really easy to get started with, I didn't have to write a hundred lines of boilerplate, and I didn't have to worry about threading issues as much as I did with Core Data. When I wrote Keep Calm I actually ended up with a whole extra model layer on top of Core Data just to handle rendering on a separate thread (I now know that this could easily be rewritten).
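
For anyone who hasn't tried FMDB, the model code really is just a thin wrapper around SQL. A minimal sketch, using a hypothetical table rather than the app's actual schema:

#import "FMDatabase.h"

// Open (or create) a database in the app's Documents directory
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
FMDatabase *db = [FMDatabase databaseWithPath:[docs stringByAppendingPathComponent:@"swag.db"]];

if ([db open]) {
    // Plain SQL with parameter binding – no managed object model required
    [db executeUpdate:@"CREATE TABLE IF NOT EXISTS images (id INTEGER PRIMARY KEY, caption TEXT)"];
    [db executeUpdate:@"INSERT INTO images (caption) VALUES (?)", @"WHEN YOU'VE GOT SWAG"];

    // Read the rows back out
    FMResultSet *results = [db executeQuery:@"SELECT caption FROM images"];
    while ([results next]) {
        NSLog(@"%@", [results stringForColumn:@"caption"]);
    }
    [db close];
}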

The UI was created almost entirely with FlatUIKit (I had to use a tiny bit of UIAppearance as well), which proved to be just as awesome, and I plan on using FlatUIKit+FMDB a lot in the future :).

To choose which words are highlighted you enter them in block capitals (this doesn't affect the final image, as all of the pictures are rendered in capital text anyway). I had originally planned to try to detect #hashtags, @usernames and URLs, but I realised that simply detecting capital letters and the punctuation next to them would be both easier and more customisable. The rendering uses a regular expression to find the highlighted characters and then draws an attributed string with Core Text. I intend to add text sizing in a future release, along with iPad support (although the app is so simple, and was originally written as a test app, that I don't see either as urgent).
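
A rough sketch of that approach (the pattern and colour here are illustrative, not the app's actual ones):

#import <UIKit/UIKit.h>
#import <CoreText/CoreText.h>

NSString *text = @"when you've got #SWAG";
NSMutableAttributedString *styled = [[NSMutableAttributedString alloc] initWithString:text];

// Find runs of capital letters, optionally preceded by punctuation like # or @
NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"[#@]?[A-Z]+" options:0 error:NULL];
[regex enumerateMatchesInString:text options:0 range:NSMakeRange(0, text.length) usingBlock:^(NSTextCheckingResult *match, NSMatchingFlags flags, BOOL *stop) {
    // Colour each match; Core Text applies the attribute when the string is drawn
    [styled addAttribute:(__bridge id)kCTForegroundColorAttributeName
                   value:(__bridge id)[UIColor redColor].CGColor
                   range:[match range]];
}];
// The attributed string can then be rendered with a CTFramesetter as usual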

When You’ve Got Swag on the App Store (free)

Working in different languages on iOS

The standard language for developing iOS apps is Objective-C, and it is what the vast majority of tutorials and sample code for the platform are written in. Over the last year, however, I've seen several other languages gain ground on iOS. In this post I've tried to review and list as many of them as possible. I've ignored C/C++ from this post because they are already part of the Cocoa Touch toolchain. I've also ignored JavaScript and other web technologies because so many platforms and frameworks already exist for them – I'm more interested in compiled platforms.

  • C# is a great language and it is rapidly becoming more popular across many platforms. All C# outside of Microsoft is based on the Mono project, and there are two commercial options for iOS. Xamarin allows you to write C# code that integrates with every single Cocoa Touch API (they have day-one updates for each new iOS version, and I believe you can also join their beta programme to get new features at the same time as Apple's betas). Unfortunately the free license limits the compiled size of your app, but in my experience this isn't too much of an issue. Being able to write C# proves to be a huge advantage; however, I've found that deploying to a device can be quite slow, app start-up is often a few seconds slower (it is managed rather than native code), and some Objective-C patterns such as protocols/delegates don't transfer across well – you have to create a subclass of UITableViewDataSource, for example, rather than a class that adopts UITableViewDataSource as a protocol. Alternatively, if you want to write games and don't care about UIKit, Unity is a great tool that allows you to write your game in C# and deploy across several platforms. Again, I've found that the overhead of managed code means Unity games tend to be a tiny bit slower than their Objective-C equivalents.
  • Despite Apple’s allegiance to the language in the early OSX days, Java is not a viable option for iOS development, although a few bindings do exist. Codename One converts Java bytecode to native code across several platforms (iOS, Android and Windows Phone) with a native UI, although their site is unclear on whether they actually bind to UIKit or create a native-style UI of their own. Alternatively, Google’s J2ObjC converts Java code to Objective-C. This project is aimed at model code rather than UI code, and it seems like a sensible choice if you have business logic you need to run on iOS.
  • I had initially expected that there would be very few options for running Python on iOS, due to earlier restrictions on interpreted languages, but evidently the popularity of the language has led to several options emerging. The first, PyObjC, provides a bridge between Objective-C and Python and allows you to write apps that link with UIKit in Python but, as with C#, the differences between the two languages mean that the Python code feels a little clunky. Another option is PyMob, although I can’t find a public version of it to play around with. Finally, Kivy presents itself as a far more viable option for writing iOS games in Python; it makes no attempt to bind with UIKit, which brings the benefit that it is cross-platform, with support for iOS, Android, OSX, Windows and even the Raspberry Pi! I’ve also recently enjoyed Pythonista, an iOS app that allows you to write Python on your iPad or iPhone. I’ve been using it for about a month and I’ve really enjoyed it – the price tag is definitely worth it.
  • I’ve never played around with Ruby for any great length of time, but it has always struck me as the interpreted equivalent of Objective-C, which brings the benefit that the syntax doesn’t feel broken when bound to Objective-C – the MVC style, as well as protocols/delegates, is adopted by both languages. RubyMotion links directly with Cocoa/Cocoa Touch and compiles to native code. The license is $200, but compared to the other languages I’ve mentioned here that definitely seems worth it.
  • If you have experience with Flash you were probably disappointed by the decision that Flash wouldn’t be available on the iPhone; however, thanks to Adobe’s Flash platform and Flex it is now possible to deploy Flash apps to the App Store. This doesn’t strike me as a very good option, because there is absolutely no effort to work with the standard iOS frameworks – the aim seems simply to make it possible to get your Flash game onto iOS. Furthermore, you lose a lot in performance because the Flash runtime seems to be addicted to battery power, using nearly 100% CPU and every spare bit of RAM – these are some of the reasons, aside from lack of demand, that Flash was removed from the Play Store on Android.

When writing this blog post I came across a very common pattern: a language will often have several bindings available, but none of them can maintain the feel of the language (Ruby is the only partial exception), because they have to adopt Objective-C idioms that they don’t employ themselves. The two best examples of this are protocols/delegates and the way Objective-C methods are named compared to every other language. Ultimately, I’ve come to the conclusion that it is probably easiest to build your next iOS app in Objective-C, regardless of how tempting another language may be:

  • There is significantly more documentation available for Objective-C
  • There are disadvantages to working in Objective-C, and some things can be achieved in a lot less code in other languages, but otherwise it works perfectly well as a language. With a bit of experience it really isn’t too challenging to write good, reusable code that runs quickly
  • All the Cocoa Touch frameworks are built in Objective-C and are designed to be interacted with from Objective-C rather than another language with different idioms
  • You are potentially going to be making huge performance sacrifices. With an interpreted language like Python or JavaScript this is an obvious issue, but even C# performs very differently from native code on mobile devices. The simplest reason for this is garbage collection. Objective-C isn’t garbage collected (the feature was deprecated on the Mac and has never been available on the iPhone); it uses Automatic Reference Counting to manage memory instead. In some ways this is far more sensible, because the “garbage collection” effectively happens once at compile time, saving CPU cycles later; on the other hand it may confuse programmers coming from other languages – but that has always been a “feature” of Objective-C :).

Opening pages in Chrome from Safari on iOS

If you particularly like Google Chrome for iOS (it isn’t a badly designed app, just a little slow because of Apple’s restrictions) but don’t like having to copy links from the Safari address bar into Chrome, I’ve come up with a simple bookmarklet that will do it for you:

  1. Create a new bookmark in Safari (you can do this by adding this page to your bookmarks) and give it a useful title like ‘Open in Chrome’
  2. Delete the URL
  3. Paste this into the address box instead: javascript:window.location.replace(window.location.href.replace('https://','googlechromes://').replace('http://','googlechrome://'))
  4. When you’ve got a page open that you want to open in Chrome, just tap the bookmark. Occasionally it will ask if you want to open the link in Chrome, but most of the time it just switches apps straight away.

Using GLKit to do super simple antialiasing

I’ve been doing some work with OpenGL ES recently and I was getting irritated that the lines I drew had very jagged edges. After exploring for a while I discovered that iOS has supported multisampling since iOS 4, but it takes quite a lot of code, which I didn’t fancy writing, especially because I was using GLKit and therefore wasn’t managing my own render and frame buffers. I didn’t want to blur the line edges either; I just wanted full-screen antialiasing.
It turns out that iOS 5 and above support antialiasing if you are using a GLKView:

// self.view is a GLKView (as set up by a GLKViewController)
GLKView *glkView = (GLKView*)self.view;
// Request 4x multisampling on the drawable – GLKit manages the buffers for you
glkView.drawableMultisample = GLKViewDrawableMultisample4X;

Incredibly, I didn’t detect any performance lag – I can quite happily achieve 60fps on a retina device with 100,000 triangles!

Reducing the size of images in iOS apps

For a surprisingly simple app, Keep Calm on iOS has required a reasonable amount of maintenance. Like many iOS apps it uses a lot of images to improve the user experience, and in the first version I had over 150 pictures because I allow users to change the crown. As of the most recent version there are over 1,000 pictures, as users can optionally purchase extras. This has pushed the app download size up from 2MB to around 20MB, with around 19.5MB of that being images alone.

I decided that I wanted to reduce the amount of space the images took up in the app bundle, but without reducing their quality or how many the app shipped with.

The first option was to compress all of the PNGs using a tool like pngcrush, but I’m only storing PNGs with a single (alpha) channel, so this had virtually no effect. I had also considered shipping the original SVG files the images were generated from, but once I’d added an SVG rendering library, saving code and Core Image filters, I wouldn’t have seen any major reduction in bundle size. I also looked into zipping (and tarring) the files, but this reduced the size by less than 1% because of the compression already present in the PNG files.

The next option I decided to investigate was putting all of the images in one single file, like a sprite sheet. This would mean PNG compression could help to reduce the amount of overhead on each file whilst maintaining the original quality.

I then wrote some ridiculously simple code for a Mac app that read in all of the files and drew them onto a Quartz 2D canvas (they’re all 300px by 300px at most, so I just drew them in a square grid). This produced a PNG file that was around 18MB, so I didn’t really gain anything.
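
The code was roughly the following (a sketch from memory using AppKit drawing; the folder path and grid width are illustrative):

#import <Cocoa/Cocoa.h>

// Pack every 300x300 icon into one big grid and write out a single PNG
NSString *dir = @"/tmp/icons"; // hypothetical folder of icon PNGs
NSArray *files = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:dir error:NULL];
const CGFloat tile = 300.0;
const NSUInteger perRow = 32;
NSUInteger rows = (files.count + perRow - 1) / perRow;

NSImage *sheet = [[NSImage alloc] initWithSize:NSMakeSize(perRow * tile, rows * tile)];
[sheet lockFocus];
[files enumerateObjectsUsingBlock:^(NSString *name, NSUInteger i, BOOL *stop) {
    NSImage *icon = [[NSImage alloc] initWithContentsOfFile:[dir stringByAppendingPathComponent:name]];
    [icon drawInRect:NSMakeRect((i % perRow) * tile, (i / perRow) * tile, tile, tile)
            fromRect:NSZeroRect
           operation:NSCompositeSourceOver
            fraction:1.0];
}];
[sheet unlockFocus];

// NSImage -> PNG data via a bitmap rep
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[sheet TIFFRepresentation]];
[[rep representationUsingType:NSPNGFileType properties:nil] writeToFile:@"/tmp/sheet.png" atomically:YES];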

To check whether NSImage’s PNG compression was the problem, I exported the image from GIMP instead, but there was virtually no change. My images were basically one giant white PNG with an alpha channel, so in theory the file should have been a bit smaller (admittedly I was being optimistic: it is around 36 megapixels). I then added a black background to the image and exported it from GIMP without the alpha channel, and magically the file size dropped from 18MB to just under 4MB. This would keep my app size reasonably low and speed up installation, because iOS devices would have to unpack fewer than 200 files rather than over 1,000.

The next problem was that I would now have to ‘unpack’ all of the extra icons when the user purchased them. It turns out that with a UIImage category you can crop the images out of the original and save them to disk pretty quickly (according to an Xcode log I managed 40 images per second on an old-generation iPod Touch). I had been concerned that the device would not be able to load such a large image into memory, but incredibly I didn’t get any memory warnings when doing so.
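
The category boils down to CGImageCreateWithImageInRect (the category and method names here are hypothetical):

@implementation UIImage (SpriteSheet)

// Crop a single square tile out of the packed sheet
- (UIImage *)tileAtColumn:(NSUInteger)col row:(NSUInteger)row size:(CGFloat)size {
    CGImageRef tile = CGImageCreateWithImageInRect(self.CGImage,
                                                   CGRectMake(col * size, row * size, size, size));
    UIImage *cropped = [UIImage imageWithCGImage:tile];
    CGImageRelease(tile);
    return cropped;
}

@end

// Unpacking then just loops over the grid, saving each tile:
// [UIImagePNGRepresentation(tile) writeToFile:path atomically:YES];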

The next problem I faced was that I didn’t want the black background in each image. The easy solution is a Core Image filter called CIColorMatrix, which multiplies each colour channel by a vector and adds a bias onto it. I could multiply all of the RGB values by 0 and add 1, forcing the colour to white, and then take the new alpha value directly from one of the original RGB channels.
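
In code the filter setup looks something like this (the vector keys are CIColorMatrix’s actual parameters; the image name is illustrative):

#import <CoreImage/CoreImage.h>

UIImage *sheet = [UIImage imageNamed:@"sheet.png"]; // the packed image
CIImage *input = [CIImage imageWithCGImage:sheet.CGImage];
CIFilter *matrix = [CIFilter filterWithName:@"CIColorMatrix"];
[matrix setValue:input forKey:kCIInputImageKey];

// Zero out RGB and bias them back up to 1, forcing every pixel to white...
[matrix setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputRVector"];
[matrix setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputGVector"];
[matrix setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputBVector"];
[matrix setValue:[CIVector vectorWithX:1 Y:1 Z:1 W:0] forKey:@"inputBiasVector"];

// ...and take the new alpha straight from the old red channel
[matrix setValue:[CIVector vectorWithX:1 Y:0 Z:0 W:0] forKey:@"inputAVector"];

CIImage *output = [matrix valueForKey:kCIOutputImageKey];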

I then wrote some new code that loaded the image with UIImage and applied the filter with Core Image before running the same splitting routine. This worked perfectly (and at about the same speed) in the simulator, but I couldn’t get it working on the device – the images would crop, but they were completely blank, and I was also getting memory warnings.

From what I can gather, UIImage keeps the original compressed version of the image in memory, which is why it could load a 4MB, 36 megapixel image on the device. Core Image, on the other hand, needs the raw image data and so decompresses it – and holding 36 megapixels at 4 bytes per pixel means 144MB of data in RAM. Instead of feeding back a usable image, it just gave up and returned a blank 36 megapixel image. My final solution is therefore either to split the large image into several smaller sheets, or to apply the Core Image filter only when the images are displayed.

In conclusion, if you’ve got a large number of small images in your app (though probably fewer than a couple of hundred), you can probably reduce the app size significantly by packing them all into one image and unpacking the individual images from it on the device. On the other hand, if you have a small number of large images it is probably best to keep them as individual files so that your app doesn’t crash.

How the iPad Mini integrates with Human Interface Guidelines

The iPad Mini is going to be an odd device. It is, after all, effectively a ‘concentrated’ iPad 2. The fact that it has iPad 2 internals suggests that the 2010/2011-generation devices (iPad 2, iPhone 4 and, based on internals, the iPad Mini) will probably continue to be updated to the latest version of iOS up until around iOS 9. I’m basing this on the fact that the iPhone 3GS was released with iOS 3 but still received the following three updates; because the iPad Mini was released with an iOS version two releases newer, the same internals suggest that the iPhone 4 and iPad 2 may receive as many as five version updates.

When the iPad Mini was announced a lot of people trashed Apple for effectively backtracking on what Steve Jobs had said about 7-inch tablets. One problem Steve had cited was that developers would either have to scale up phone apps or scale down tablet apps, and either way they would be hard to use. It turns out he was wrong.

Apple’s Human Interface Guidelines for iOS are a collection of long documents that essentially encourage developers to make beautiful, easy-to-use apps. One explicit requirement is that apps should use touch targets of at least 44 by 44 points. A point is one pixel on a non-retina device and two pixels on a retina device (the physical size stays the same). This means the actual physical sizes of minimum touch targets on Apple’s devices are the following:
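
  • iPhone/iPod Touch (163 pixels per inch): 44pt ≈ 0.27 inches (6.9mm)
  • iPad and iPad 2 (132 pixels per inch): 44pt ≈ 0.33 inches (8.5mm)
  • iPad Mini (163 pixels per inch): 44pt in an iPad app ≈ 0.27 inches (6.9mm) – exactly the iPhone size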

Basically, Apple has sized the iPad Mini at an incredibly convenient size for developers, because touch targets in full-size iPad apps running on an iPad Mini still meet (exactly) the minimum physical size they have on an iPhone. The major benefit for developers is that they don’t have to scale up their touch targets to stay within the guidelines on the iPad Mini, and targets can remain their current size on the full-size iPad too. An alternative way of looking at it is that the iPad Mini could not have been any smaller without breaking Apple’s own guidelines.

Given that the touch targets are phone-sized, it also suggests that Apple reckons you’ll be holding the iPad Mini in your hands more. It is reasonable to sit a full-size iPad on a desk or your lap, but that probably won’t be happening with the iPad Mini – you don’t do that with your iPhone.