Lessons from iOS Development #3: Let’s forget this

I’ve blogged previously about memory management with ARC in iOS, but this time I want to cover some aspects that aren’t related to retain/release cycles and instead focus on when stuff really needs to be kept in memory. I’ve split this post into a few sections based on different areas of iOS development.

Core Data

Core Data is really great as an ORM (except if you’re using it with iCloud) and I’ve found it to be a worthy replacement for SQLite on iOS, albeit with some limitations. I’ve generally found that it uses a little more memory than SQLite does for the same or similar tasks, but keeping a single context and reusing it throughout your application (except on different threads, which need their own contexts) generally works quite well.
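As a rough sketch of that pattern (the persistentStoreCoordinator method here is assumed to be the usual boilerplate Xcode generates, not anything specific to this post), a lazily created shared main-thread context might look like this:

// A shared main-thread context, created once and reused everywhere.
// [self persistentStoreCoordinator] is assumed to exist elsewhere;
// background threads should still create their own contexts.
- (NSManagedObjectContext *)sharedContext {
    static NSManagedObjectContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [[NSManagedObjectContext alloc]
                   initWithConcurrencyType:NSMainQueueConcurrencyType];
        context.persistentStoreCoordinator = [self persistentStoreCoordinator];
    });
    return context;
}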

A major benefit of Core Data is NSFetchedResultsController. Normally, if you wanted to fetch objects from a Core Data store, you might do something like the following:

// Describe which entity to fetch ("Object" is the entity name in the model)
NSFetchRequest *fetchRequest = [NSFetchRequest new];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Object" inManagedObjectContext:self.context];
[fetchRequest setEntity:entity];

// Execute the fetch, capturing any error rather than passing nil
NSError *error = nil;
NSArray *results = [self.context executeFetchRequest:fetchRequest error:&error];

If you’ve only got a few attributes on your object with few (if any) relationships and there are only going to be a small number of them, then this approach works fine. But what if they’re bigger than you think they are? What if each object has an image attached to it and there are 1000 objects? Your users may surprise you.

Keeping a huge number of objects in memory at any one time is never going to be a sensible idea, so thankfully Apple has provided us with an alternative. I shall not explain all the details of how NSFetchedResultsController works because Ray Wenderlich has a pretty good tutorial on it, but in short it only fetches objects when they are actually needed and releases them again afterwards, which can massively reduce memory load and massively improve table/collection view performance.
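For reference, setting one up looks roughly like this (a minimal sketch: the entity and attribute names are placeholders, and the delegate methods are omitted):

// Minimal sketch: an NSFetchedResultsController backing a table view.
// "Object" and "name" are placeholder entity/attribute names.
NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Object"];
fetchRequest.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"name" ascending:YES]];
fetchRequest.fetchBatchSize = 20; // only pull a screenful of objects at a time

NSFetchedResultsController *frc =
    [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                        managedObjectContext:self.context
                                          sectionNameKeyPath:nil
                                                   cacheName:nil];
frc.delegate = self; // get told when the underlying data changes

NSError *error = nil;
[frc performFetch:&error];

// Then, in tableView:cellForRowAtIndexPath:, fetch only the object on screen:
// NSManagedObject *object = [frc objectAtIndexPath:indexPath];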

Images

Apple generally encourages developers to use separate image files for iPad, iPhone, Retina and non-Retina devices (although, based on normal release cycles, I suspect there will be no non-Retina devices on sale by the end of this year and no non-Retina devices receiving OS updates by the end of next year), and it is incredibly important that you only keep in memory an image at the highest resolution you actually need.

For example, say you are working on an app that does photo manipulation and users can load photos from their camera. On an iPhone, these images are going to be eight megapixels (nearer 7.9 million pixels, in fact), whereas the iPhone screen only has 0.7 million pixels – less than a tenth of that. The full image doesn’t need to be kept in memory, so it should be resized before being presented in the app. However, the user may wish to export their edited photo at a higher resolution, so the original photo should be saved to disk temporarily.

If you want some decent code that resizes UIImages accurately (and quickly), I recommend the UIImage categories described in this blog post. In general it is a lot quicker to use Quartz 2D for a task like this, because converting the image to a Core Image type can be slow, and converting it back can be slower.
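As a minimal sketch of the idea (not the code from the linked post), a Quartz-backed resize can be as simple as this:

// Minimal sketch of a Quartz-backed resize. targetSize is whatever
// fits your UI, e.g. the screen size in points.
- (UIImage *)resizedImage:(UIImage *)image toSize:(CGSize)targetSize {
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0); // 0.0 = device scale
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}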

Although not entirely related, always be wary of which format you are using for your images. Apple recommends PNGs everywhere, but JPEG often provides much better compression for photos.
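If you want to see the difference for yourself, a quick comparison might look like this (photo is an assumed UIImage variable, and 0.8 is an arbitrary quality setting):

// Rough check of how much smaller a photo is as JPEG vs PNG.
NSData *jpegData = UIImageJPEGRepresentation(photo, 0.8); // 0.8 = compression quality
NSData *pngData  = UIImagePNGRepresentation(photo);
NSLog(@"JPEG: %lu bytes, PNG: %lu bytes",
      (unsigned long)jpegData.length, (unsigned long)pngData.length);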

Strings and big data from the web

I’ve found storing large NSStrings (about a million characters) in memory to be generally a stupid idea because a) it slows iOS devices down big time and b) some NSString methods become incredibly slow. If you are only processing a small chunk at a time, it is therefore wise to read the file line by line or chunk by chunk instead. Alternatively, use SQLite if you can, so that the text can be split up.
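As a sketch of the chunked approach, assuming filePath points at a large UTF-8 text file:

// Minimal sketch: process a large text file in fixed-size chunks instead
// of loading it into one giant NSString. A real implementation would also
// handle chunk boundaries that split multi-byte characters.
NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:filePath];
NSData *chunk;
while ((chunk = [handle readDataOfLength:64 * 1024]).length > 0) {
    NSString *text = [[NSString alloc] initWithData:chunk
                                           encoding:NSUTF8StringEncoding];
    // ... process this chunk, then let it be released ...
}
[handle closeFile];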

Also be careful when loading data from the web. If you can absolutely guarantee the source and you know how big the download is, then downloading it with NSData and keeping it in memory is fine. On the other hand, if you can’t guarantee that what you are downloading is small enough not to fill up the RAM (which can be an issue on mobile devices, especially older ones), stream it to disk instead of loading it in one go.
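One way to do that (a sketch assuming iOS 7 or later, with a placeholder URL) is an NSURLSession download task, which streams the response to a temporary file rather than buffering it in memory:

// Minimal sketch: stream a download to disk instead of holding it in RAM.
NSURL *url = [NSURL URLWithString:@"https://example.com/big-file"]; // placeholder URL
NSURLSessionDownloadTask *task = [[NSURLSession sharedSession]
    downloadTaskWithURL:url
      completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
          // "location" is a temporary file on disk; move it somewhere
          // permanent rather than reading it all into memory.
          NSLog(@"Downloaded to %@", location);
      }];
[task resume];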


Building color palettes from images with C#

This image shows the Wikimedia ‘Best picture of 2012’ with the generated color palette.

Today I was playing around a little with C# and I wrote a simple tool that will generate a color palette of the colors that make up an image. You can get the source on GitHub.

The code works by first loading the image and then counting the number of pixels for each value of hue and saturation. The mean number of pixels per color is then calculated, and the results are plotted to a 255px by 360px image, with each color drawn according to how far its pixel count exceeds the mean. Below are some more samples of palettes that I produced: