
Using pixel colours to make seamless mobile apps (Swift/UIKit)

Development
Sun 18 October 2015


Rationale

Many applications display images, and quite often that content is fetched from the internet, so a responsive application should minimise the delay and any departure from a seemingly organic experience. A blank or coloured space (possibly bordered), or a placeholder image, is often used to mark where content is still being fetched, but this is less than ideal, and with many such images it can lend the whole app a clinical feel. Here I propose one method that has worked for me, along with some possible variations. The focus is on mobile applications written in Swift that show previews of images.

The Setup

As of writing I'm developing a mobile application to support those who still practise film development themselves, and as such it can feature many images, too many to make bundling them into the app practical. Since images are particularly pretty content and make the app look good, we want to put them front and centre, and since that content is dynamic and numerous, loading may not be immediate.

Putting in some Code

First we need to handle the base64 decoding. To save space we simply squash each byte's 0-255 value into the range 0-63, so that it takes up a single base64 character, and later expand it back towards 0-255 with some minor loss of tonal granularity, but not enough to make a difference. So if we had 25 pixels of gray tones, they would need only 25 base64-encoded characters.
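To make the squash-and-expand step concrete, here is a minimal round-trip sketch for a single gray value; encodeGray and decodeGray are hypothetical helper names used only for illustration, they are not part of the app's code.

let b64 = Array("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".characters)

// Encoding side: squash an 8-bit gray value (0-255) down to 0-63 and pick one base64 character.
func encodeGray(value: UInt8) -> Character {
    return b64[Int(value) / 4]
}

// Decoding side: expand the character back to an approximate 8-bit value (0, 4, 8, ... 252).
func decodeGray(c: Character) -> UInt8 {
    return UInt8(b64.indexOf(c)! * 4)
}

// A gray value of 200 encodes to "y" and decodes back to 200; at worst we lose 3 levels of tone.
let roundTrip = decodeGray(encodeGray(200))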

Here's our cell that handles displaying an image or its preview:

import UIKit
import Haneke

class ImageCollectionViewCell: UICollectionViewCell {
    @IBOutlet weak var view: UIImageView!

    // Maps each base64 character to an expanded 8-bit gray value (0-63 becomes 0-252).
    static let b64Map: [Character: UInt8] = {
        let b64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
        var b64Map = [Character: UInt8]()
        b64.characters.forEach { b64Map[$0] = UInt8(b64Map.count * 4) }
        return b64Map
    }()

    func setImage(url: NSURL, preview blurString: String) {
        // Decode the 25-character preview string into 5x5 gray pixel values.
        var pixels = blurString.characters.map { ImageCollectionViewCell.b64Map[$0]! }
        let cs = CGColorSpaceCreateDeviceGray()
        let ctx = CGBitmapContextCreate(&pixels, 5, 5, 8, 5, cs,
                                        CGImageAlphaInfo.None.rawValue)
        view.contentMode = UIViewContentMode.ScaleAspectFill
        if let img = CGBitmapContextCreateImage(ctx) {
            // Show the tiny decoded image (scaled up by the view) until the real one arrives.
            view.hnk_setImageFromURL(url, placeholder: UIImage(CGImage: img))
        } else {
            view.hnk_setImageFromURL(url)
        }
    }

    override func prepareForReuse() {
        super.prepareForReuse()
        view.image = nil
    }
}
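For context, here is a hypothetical sketch of how the cell might be fed from a collection view data source; the ImageItem model, the items array and the ImageCell reuse identifier are assumptions for illustration, not part of the app above.

import UIKit

struct ImageItem {
    let url: NSURL
    let preview: String   // the base64 pixel string described above
}

class GalleryViewController: UICollectionViewController {
    var items = [ImageItem]()   // populated from the network elsewhere

    override func collectionView(collectionView: UICollectionView,
                                 numberOfItemsInSection section: Int) -> Int {
        return items.count
    }

    override func collectionView(collectionView: UICollectionView,
                                 cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCellWithReuseIdentifier("ImageCell",
                       forIndexPath: indexPath) as! ImageCollectionViewCell
        // Hand the cell its image URL and the tiny preview string in one call.
        let item = items[indexPath.item]
        cell.setImage(item.url, preview: item.preview)
        return cell
    }
}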

Black and White (Grayscale) Images

Level of blur          Image string (rows separated by spaces)      Image result
The Original           -                                            [original image]
2x2 blur (4 pixels)    QO oV                                        [2x2 blur]
3x3 blur (9 pixels)    EQJ dgK rkP                                  [3x3 blur]
4x4 blur (16 pixels)   CNLI JaWH mxWJ ivWS                          [4x4 blur]
5x5 blur (25 pixels)   DKKJI EKiKJ WncRI g8kRK duiQW                [5x5 blur]
6x6 blur (36 pixels)   DILJLI FCdTGK KVchJK X8kYNI a5teMN YunaPa    [6x6 blur]

Colour Images

While my own purposes are (almost) strictly black and white, there may be colour images, and more than likely you'll want colour images, so let's extend the approach to handle them.

Previously we needed 25 pixels to make a good-looking gray scale preview. Colour has a profound effect: with three different channels the data is a little larger, but it also carries more contrast than gray scale alone, so I propose that only the four corner values are needed, as red, green and blue (RGB) values.

Since it works so well and is effective, we will keep the base64 encoding, but this time include the 4 corner pixels in the 3 channels of RGB. That's 12 values, concatenated as follows: RGBRGBRGBRGB. While you could express the full range of each channel, the 6-bit representation over base64 seems to work just fine, so let's stick with that to save bytes.

Handling both gray scale and colour image strings is easy; we don't even need an explicit way of differentiating them, since they are inherently different lengths: 25 characters for gray scale and 12 characters for the colour format. Thus we can tell from the length of the string which kind of preview to build.
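To make that length check concrete, here is a minimal sketch of a helper that picks the format from the string length and builds the matching bitmap. previewImage is a hypothetical name and is not part of the cell above; it reuses the same character map, and assumes the 25-character 5x5 gray format and the 12-character 2x2 colour format described here.

import UIKit

func previewImage(blurString: String) -> UIImage? {
    let values = blurString.characters.map { ImageCollectionViewCell.b64Map[$0]! }

    if values.count == 25 {
        // 5x5 grayscale: one byte per pixel, single gray channel.
        var pixels = values
        let ctx = CGBitmapContextCreate(&pixels, 5, 5, 8, 5,
                                        CGColorSpaceCreateDeviceGray(),
                                        CGImageAlphaInfo.None.rawValue)
        return CGBitmapContextCreateImage(ctx).map { UIImage(CGImage: $0) }
    }

    if values.count == 12 {
        // 2x2 colour: expand RGBRGBRGBRGB into RGBX pixels, since Core Graphics
        // wants 32 bits per pixel for 8-bit RGB bitmap contexts.
        var pixels = [UInt8]()
        for i in [0, 3, 6, 9] {
            pixels += [values[i], values[i + 1], values[i + 2], 255]
        }
        let ctx = CGBitmapContextCreate(&pixels, 2, 2, 8, 8,
                                        CGColorSpaceCreateDeviceRGB(),
                                        CGImageAlphaInfo.NoneSkipLast.rawValue)
        return CGBitmapContextCreateImage(ctx).map { UIImage(CGImage: $0) }
    }

    return nil
}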

Let's see how this works with an example image I've taken from my photo library.

Level of blur          Image string (rows separated by spaces)                Image result
The Original           -                                                      [original image]
2x2 blur (4 pixels)    hikehk caVZYU                                          [2x2 blur]
3x3 blur (9 pixels)    cgleinZel gggokdceg USNbZUQPM                          [3x3 blur]
4x4 blur (16 pixels)   cgleiobgnYdk cfikjhjjiadi edcojalhaYZb OLHVVRUUQLKG    [4x4 blur]

So we have our example strings; a handful of characters is enough to produce a recognisable preview.

Conclusions

Some more examples:

1. Preview: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
   Full: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
2. Preview: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
   Full: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
3. Preview: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
   Full: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
4. Preview: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
   Full: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
5. Preview: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
   Full: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
6. Preview: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg
   Full: http://blog.mitchellcurrie.com/post-images/pixellated-image-previews/img_on=ar001_sample1.jpg