In this tutorial, we'll look at some of the advanced features and usage patterns of the humble UIImage
class in iOS. By the end of this tutorial, you'll have learned the following: how to make images in code, how to create resizable images for UI elements like callouts, and how to create animated images.
Theoretical Overview
If you've ever had to display an image in your iOS app, you're probably familiar with UIImage
. It's the class that allows you to represent images on iOS. This is by far the most common way of using UIImage
and is quite straightforward: you have an image file in one of several standard image formats (PNG, JPEG, BMP, etc.) and you wish to display it in your app's interface. You instantiate a new UIImage
instance by sending the class message imageNamed:
. If you have an instance of UIImageView
, you can set its image
property to the UIImage
instance, and then you can stick the image view into your interface by adding it as a subview of your onscreen view:
UIImage *img = [UIImage imageNamed:filename];
UIImageView *imgView = [[UIImageView alloc] initWithImage:img];
// Set frame, etc.
...
[self.view addSubview:imgView];
You can also carry out the equivalent of the above procedure directly in Interface Builder.
There are some other ways of instantiating an image, such as from a URL, or from an archived image that was stored as an NSData
type, but we won't focus on those aspects in this tutorial.
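For the sake of orientation, though, here is a minimal sketch of those alternatives. The file path and URL below are placeholders, and the synchronous download is only for brevity; in a real app you'd fetch the data asynchronously:

// From a file on disk outside the app bundle (placeholder path):
UIImage *fromFile = [UIImage imageWithContentsOfFile:@"/path/to/picture.png"];

// From an NSData object, e.g. one fetched from a URL (placeholder URL; synchronous only for brevity):
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"https://example.com/picture.png"]];
UIImage *fromData = [UIImage imageWithData:data];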
Before we talk about creating images in code, recall that at the most primitive level a 2D image really is a two-dimensional array of pixel values. The region of memory representing the pixel array for an image is often referred to as its bitmap store. This is sometimes useful to keep in mind when reasoning about memory use. However, it is important to realize that a UIImage is a higher-level abstraction of an image than a pixel array, one that has been optimized for the demands and usage scenarios of a mobile platform. While it is theoretically possible to create an image by populating an entire pixel array with values, or to reach into an existing image's bitmap and read or modify the value of an individual pixel, doing so is rather inconvenient on iOS and is not really facilitated by the API. However, since most app developers seldom need to mess with images at the pixel level, this is usually not an issue.
What UIImage (or more generally, UIKit and Core Graphics) does make easy is creating a new image by compositing existing images in interesting ways, or generating an image by rasterizing a vector drawing constructed with UIKit's UIBezierPath class or Core Graphics' CGPath... functions. If you want to write an app that lets the user create a collage of their pictures, it's easy to do with UIKit and UIImage. If you've developed, say, a freehand drawing app and you want to let the user save their creation, the simplest approach involves extracting a UIImage from the drawing context. In the first section of this tutorial, you'll learn exactly how both of these ideas can be accomplished!
It is important to keep in mind that a UIImage constructed this way is no different from an image obtained by opening a picture from the photo album or downloading one from the Internet: it can be archived as data, saved to the photo album, or displayed in a UIImageView.
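To illustrate that point, here is a minimal sketch of both ideas; the image name and file path below are placeholders, and both calls use standard UIKit functions (UIImagePNGRepresentation() and UIImageWriteToSavedPhotosAlbum()):

UIImage *picture = [UIImage imageNamed:@"picture"]; // any image in your bundle

// An archive-friendly representation: PNG data that can be written to disk or stored elsewhere.
NSData *pngData = UIImagePNGRepresentation(picture);
[pngData writeToFile:@"/tmp/picture.png" atomically:YES];

// Save the image to the user's photo album; the nil/NULL arguments skip the completion callback.
UIImageWriteToSavedPhotosAlbum(picture, nil, NULL, NULL);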
Image resizing is an important type of image manipulation. Obviously you'd like to avoid enlarging an image, because enlarging causes the image's quality and sharpness to suffer. However, there are scenarios in which resizable images are needed, and there are sensible ways to resize that don't degrade the quality of the image. UIImage caters for this situation by permitting images that have an inner resizable area and "edge insets" on the image borders that resize in only one direction, or don't resize at all. Furthermore, the resizing can be carried out either by tiling or by stretching the resizable portions, two somewhat different effects that are useful in different situations.
The second section of this tutorial will show a concrete implementation of this idea. We'll write a nifty little class that can display any amount of text inside a resizable image!
Finally, we'll talk a bit about animating images with UIImage
. As you can probably guess, this means "playing" a series of images in succession, giving rise to the illusion of animation much like the animated GIFs that you see on the Internet. While this might seem a bit limited, in simple situations UIImage
's animated image support might be just what you need, and all it takes is a couple of lines of code to get up and running. That's what we'll look at in the third and final section of this tutorial! Time to roll up our sleeves and get to work!
1. Starting a New Project
Create a new iOS project in Xcode, with the "Empty Application" template. Call it "UIImageFun". Check the option for Automatic Reference Counting, but uncheck the options for Core Data and Unit Tests.
A small note, before we proceed: this tutorial uses several sets of images, and to obtain these you'll need to click where it says "Download Source Files" at the top of this page. After downloading and unzipping the archive, drag the folder named "Images" into the Project Navigator - the leftmost tab in the lefthand pane in Xcode. If the left pane isn't visible, then press the key combination ⌘ + 0 to make it visible and ensure the leftmost tab - whose icon looks like a folder - is selected.
The downloaded file also contains the complete Xcode project with the images already added to the project, in case you get stuck somewhere.
2. Creating an Image in Code
Create a new file for an Objective-C class, call it ViewController
and make it a subclass of UIViewController
. Ensure that the options related to iPad and XIB are left unchecked.
Replace all the code in ViewController.m with the following:
#import "ViewController.h" @interface ViewController () { UIImage *img; UIImageView *iv; NSMutableArray *ivs; } @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; // (1) Creating a bitmap context, filling it with yellow as "background" color: CGSize size = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height); UIGraphicsBeginImageContextWithOptions(CGSizeMake(size.width, size.height), YES, 0.0); [[UIColor yellowColor] setFill]; UIRectFill(CGRectMake(0, 0, size.width, size.height)); // (2) Create a circle via a bezier path and stroking+filling it in the bitmap context: UIBezierPath *bezierPath = [UIBezierPath bezierPathWithArcCenter:CGPointMake(size.width/2, size.height/2) radius:140 startAngle:0 endAngle:2 * M_PI clockwise:YES]; [[UIColor blackColor] setStroke]; bezierPath.lineWidth = 5.0; [bezierPath stroke]; [[UIColor redColor] setFill]; [bezierPath fill]; // (3) Creating an array of images: NSArray *rocks = @[[UIImage imageNamed:@"rock1"], [UIImage imageNamed:@"rock2"], [UIImage imageNamed:@"rock3"], [UIImage imageNamed:@"rock4"], [UIImage imageNamed:@"rock5"], [UIImage imageNamed:@"rock6"], [UIImage imageNamed:@"rock7"], [UIImage imageNamed:@"rock8"], [UIImage imageNamed:@"rock9"]]; // (4) Drawing rocks in a loop, each chosen randomly from the image set and drawn at a random position in a circular pattern: for ( int i = 0; i < 100; i++) { int idx = arc4random() % rocks.count; NSLog(@"idx = %d", idx); int radius = 100; int revolution = 360; float r = (float)(arc4random() % radius); float angle = (float)(arc4random() % revolution); float x = size.width/2 + r * cosf(angle * M_PI/180.0); float y = size.height/2 + r * sinf(angle * M_PI/180.0); CGSize rockSize = ((UIImage *)rocks[idx]).size; [rocks[idx] drawAtPoint:CGPointMake(x-rockSize.width/2, y-rockSize.height/2)]; } // (5) Deriving a new UIImage instance from the bitmap context: UIImage *fImg = UIGraphicsGetImageFromCurrentImageContext(); // (6) Closing the context: UIGraphicsEndImageContext(); // (7) Setting the image view's image property to the created image, and displaying iv = [[UIImageView alloc] initWithImage:fImg]; [self.view addSubview:iv]; } @end
Configure the App Delegate to use an instance of ViewController
as the root view controller by replacing the code in AppDelegate.m with the following:
#import "AppDelegate.h" #import "ViewController.h" @implementation AppDelegate - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]]; self.window.rootViewController = [[ViewController alloc] init]; self.window.backgroundColor = [UIColor whiteColor]; [self.window makeKeyAndVisible]; return YES; } @end
Let's examine the code for viewDidLoad:, where all the action happens. We'll refer to the numbered comments in the code.
1. We want to start by drawing an image, which means we need a "canvas". In proper terminology, this is called an image context (or bitmap context). We create one by calling the UIGraphicsBeginImageContextWithOptions() function. This function takes three arguments: a CGSize, which we've set to the size of our view controller's view, meaning the entire screen; a BOOL that tells us whether the context is opaque or not (an opaque context is more efficient, but you can't "see through" it, and since there's nothing of interest underneath our context, we set it to YES); and the scale factor, a float that we set to 0.0, a value that ensures a device-specific scale. Depending on whether the device has a Retina display or not, the scale factor will be set to 2.0 or 1.0 respectively. I'll talk a bit more about the scale factor shortly (see the short sketch after this list), but for a comprehensive discussion, I'll refer you to the official documentation (specifically, the "Points vs. Pixels" section in the Drawing and Printing Guide for iOS). Once we create an image context this way, it becomes the current context. This is important because to draw with UIKit, we must have a current drawing context where all the implicit drawing happens. We now set a fill color for the current context and fill in a rectangle the size of the entire context.
2. We now create a UIBezierPath instance in the shape of a circle, which we stroke with a thick outline and fill with a different color. This concludes the drawing portion of our image creation.
3. We create an array of images, with each image instantiated via the imageNamed: initializer of UIImage. It's important to observe here that we have two sets of rock images: rock1.png, rock2.png, ... and rock1@2x.png, rock2@2x.png, ..., the latter being twice the resolution of the former. One of the great features of UIImage is that at runtime the imageNamed: method automatically looks for an image with the @2x suffix (presumed to be of double resolution) on a Retina device. If one is available, it is used! If the suffixed image is absent or if the device is non-Retina, then the standard image is used. Note that we don't specify the suffix of the image in the initializer. The use of single- and double-resolution images, in conjunction with the device-dependent scale (a result of setting the scale to 0.0), ensures the actual size of the objects on screen will be the same. Naturally, the Retina images will be crisper because of the higher pixel density.
4. We compose our image in a loop by placing a randomly chosen rock from our picture set at a random point (constrained to lie in a circle) in each iteration. The UIImage method drawAtPoint: draws the chosen rock image at the specified point into the current image context.
5. We now extract a new UIImage object from the contents of the current image context by calling UIGraphicsGetImageFromCurrentImageContext().
6. The call to UIGraphicsEndImageContext() ends the current image context and cleans up memory.
7. Finally, we set the image we created as the image property of our UIImageView and display it on screen.
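As promised, a little more on the scale factor. Here's a minimal sketch (using the rock1 image we just added to the project) that logs a UIImage's point size, its scale, and the resulting pixel dimensions; this is an easy way to confirm which variant imageNamed: actually loaded:

// Drop this anywhere after the images have been added to the project, e.g. in viewDidLoad.
UIImage *rock = [UIImage imageNamed:@"rock1"];

// size is measured in points; multiplying by scale yields the pixel dimensions of the bitmap store.
NSLog(@"point size: %@", NSStringFromCGSize(rock.size));
NSLog(@"scale: %.1f", rock.scale);
NSLog(@"pixel size: %.0f x %.0f", rock.size.width * rock.scale, rock.size.height * rock.scale);

// The screen's scale tells you whether you're running on a Retina display (2.0) or not (1.0).
NSLog(@"screen scale: %.1f", [UIScreen mainScreen].scale);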
Build and run. The output should look like the following, only randomized differently:
By testing on both Retina and non-Retina devices, or by changing the device type in the Simulator under the Hardware menu, you can see that the two sets of rocks are flipped with respect to one another. I only did this so we could easily confirm that the right set of images is being picked at runtime. Normally, there's no reason for you to do this!
To recap - at the risk of belaboring the point - we created a new image (a UIImage object) by compositing images we already had on top of a drawing we made in code.
On to the next part of the implementation!
3. Resizable Images
Consider the figure below.
The left image shows a callout or "speech bubble" similar to the one seen in many messaging apps. Obviously, we would like the callout to expand or shrink according to the amount of text in it. Also, we'd like to use a single image from which we can generate callouts of any size. If we magnify the entire callout equally in all directions, the entire image gets pixellated or blurred depending on the resizing algorithm being used. However, note the way that the callout image has been designed. It can be expanded in certain directions without loss of quality simply by replicating (tiling) pixels as we go along. The corner shapes can't be resized without changing image quality, but on the other hand, the middle is just a block of pixels of uniform colour that can be made any size we like. The top and bottom sides can be stretched horizontally without losing quality, and the left and right sides vertically. All this is shown in the image on the right hand side.
Luckily for us, UIImage
has a couple of methods for creating resizable images of this sort. The one we're going to use is resizableImageWithCapInsets:
. Here the "cap insets" represent the dimensions of the non-stretchable corners of the image (starting from the top margin and moving counterclockwise) and are encapsulated in a struct
of type UIEdgeInsets
composed of four float
s:
typedef struct {
    CGFloat top, left, bottom, right;
} UIEdgeInsets;
The figure below should clarify what these numbers represent:
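Before we build something real with this, here's a minimal sketch of how cap insets are used in practice; the image name and inset values below are placeholders. resizableImageWithCapInsets: tiles the resizable regions by default, while the resizableImageWithCapInsets:resizingMode: variant available from iOS 6 lets you choose stretching instead:

UIImage *callout = [UIImage imageNamed:@"callout"];      // placeholder name for a callout-style image
UIEdgeInsets insets = UIEdgeInsetsMake(20, 20, 20, 20);  // placeholder inset values

// Tiling is the default behaviour (and the one we use in this tutorial):
UIImage *tiled = [callout resizableImageWithCapInsets:insets];

// On iOS 6 and later you can ask for stretching instead:
UIImage *stretched = [callout resizableImageWithCapInsets:insets
                                             resizingMode:UIImageResizingModeStretch];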
Let's exploit resizable UIImage
s to create a simple class that lets us enclose any amount of text in a resizable image!
Create an NSObject
subclass called Note
and enter the following code into Note.h and Note.m respectively.
#import <Foundation/Foundation.h> @interface Note : NSObject @property (nonatomic, readonly) NSString *text; @property (nonatomic, readonly) UIImageView *noteView; - (id)initWithText:(NSString *)text fontSize:(float)fontSize noteChrome:(UIImage *)img edgeInsets:(UIEdgeInsets)insets maximumWidth:(CGFloat)width topLeftCorner:(CGPoint)corner; @end
#import "Note.h" @implementation Note - (id)initWithText:(NSString *)text fontSize:(float)fontSize noteChrome:(UIImage *)img edgeInsets:(UIEdgeInsets)insets maximumWidth:(CGFloat)width topLeftCorner:(CGPoint)corner { if (self = [super init]) { #define LARGE_NUMBER 10000 // just a large (but arbitrary) number because we don't want to impose any vertical constraint on our note size _text = [NSString stringWithString:text]; CGSize computedSize = [text sizeWithFont:[UIFont systemFontOfSize:fontSize] constrainedToSize:CGSizeMake(width, LARGE_NUMBER) lineBreakMode:NSLineBreakByWordWrapping]; UILabel *textLabel = [[UILabel alloc] init]; textLabel.font = [UIFont systemFontOfSize:fontSize]; textLabel.text = self.text; textLabel.numberOfLines = 0; // unlimited number of lines textLabel.lineBreakMode = NSLineBreakByWordWrapping; textLabel.frame = CGRectMake(insets.left, insets.top, computedSize.width , computedSize.height); _noteView = [[UIImageView alloc] initWithFrame:CGRectMake(corner.x, corner.y, textLabel.bounds.size.width+insets.left+insets.right, textLabel.bounds.size.height+insets.top+insets.bottom)]; _noteView.image = [img resizableImageWithCapInsets:insets]; [_noteView addSubview:textLabel]; } return self; } @end
The initializer method for Note
, -initWithText:fontSize:noteChrome:edgeInsets:maximumWidth:topLeftCorner:
takes several parameters, including the text string to be displayed, the font size, the note "chrome" (which is the resizable UIImage
that will surround the text), its cap insets, the maximum width the note's image view may have, and the top-left corner of the note's frame.
Once initialised, the Note
class' noteView
property (of type UIImageView
) is the user interface element that we'll display on the screen.
The implementation is quite simple. We exploit a very useful method from NSString's UIKit additions, sizeWithFont:constrainedToSize:lineBreakMode:, which computes the size that a block of text will occupy on screen, given certain parameters. Once we've done that, we construct a text label (UILabel) and populate it with the provided text. By taking into account the inset sizes and the calculated text size, we assign the label an appropriate frame and make our noteView's image large enough (using the resizableImageWithCapInsets: method) so that the label fits comfortably on top of the interior area of the image.
In the figure below, the image on the left shows what a typical note containing a few lines' worth of text would look like.
Note that the interior has nothing of interest. We can actually "pare" the image down to its bare minimum (as shown on the right) by removing all the pixels in the interior with image editing software. In fact, in the documentation Apple recommends that, for best performance, the tiled interior area should be just 1 x 1 pixels. That's what the funny little image on the right represents, and that's the one we're going to pass to our Note initializer. Make sure it was added to your project as squeezednote.png when you dragged the Images folder into your project.
In ViewController.m, add the #import "Note.h" statement at the top. Comment out the previous viewDidLoad: implementation and enter the following:
- (void)viewDidLoad
{
    [super viewDidLoad];

    NSString *text1 = @"Hi!";
    NSString *text2 = @"I size myself according to my content!";
    NSString *text3 = @"Standard boring random text: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.";

    UIImage *noteChrome = [UIImage imageNamed:@"squeezednote"];
    UIEdgeInsets edgeInsets = UIEdgeInsetsMake(37, 12, 12, 12);

    Note *note1 = [[Note alloc] initWithText:text1
                                    fontSize:25.0
                                  noteChrome:noteChrome
                                  edgeInsets:edgeInsets
                                maximumWidth:300
                               topLeftCorner:CGPointMake(10, 10)];
    Note *note2 = [[Note alloc] initWithText:text2
                                    fontSize:30.0
                                  noteChrome:noteChrome
                                  edgeInsets:edgeInsets
                                maximumWidth:300
                               topLeftCorner:CGPointMake(200, 10)];
    Note *note3 = [[Note alloc] initWithText:text3
                                    fontSize:16.0
                                  noteChrome:noteChrome
                                  edgeInsets:edgeInsets
                                maximumWidth:300
                               topLeftCorner:CGPointMake(10, 200)];

    [self.view addSubview:note1.noteView];
    [self.view addSubview:note2.noteView];
    [self.view addSubview:note3.noteView];
}
We're simply creating Note
objects with different amounts of text. Build, run, and observe how nicely the "chrome" around each note resizes to accommodate the text inside its boundaries.
For the sake of comparison, here's what the output would look like if "squeezednote.png" were configured as a "normal" UIImage
instantiated with imageNamed:
and resized equally in all directions.
Admittedly, we wouldn't actually use a "minimal" image like "squeezednote" unless we were using resizable images in the first place, so the effect shown in the previous screenshot is greatly exaggerated. However, the blurring problem would definitely be there.
On to the final part of the tutorial!
4. Animated Images
By animated image, I actually mean a sequence of individual 2D images that are displayed in succession. This is basically just the sprite animation that is used in most 2D games. UIImage
has an initializer method animatedImageNamed:duration:
to which you pass a string that represents the prefix of the sequence of images to be animated, so if your images are named "robot1.png", "robot2.png", ..., "robot60.png", you'd simply pass in the string "robot" to this method. The duration of the animation is also passed in. That's pretty much it! When the image is added to a UIImageView
, it continuously animates on screen. Let's implement an example.
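(Just to illustrate the shape of the API, the hypothetical "robot" sequence mentioned above would be animated like this; our real example follows.)

// Loads robot1.png, robot2.png, ... and plays the whole sequence over one second, looping indefinitely.
UIImage *robotAnimation = [UIImage animatedImageNamed:@"robot" duration:1.0];
UIImageView *robotView = [[UIImageView alloc] initWithImage:robotAnimation];
[self.view addSubview:robotView];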
Comment out the previous version of viewDidLoad:
and enter the following version.
- (void)viewDidLoad
{
    [super viewDidLoad];

    ivs = [NSMutableArray array];

    // Create the animated image once; it cycles through explosion1.png, explosion2.png, ...
    img = [UIImage animatedImageNamed:@"explosion" duration:2.0];

    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                          action:@selector(explosion:)];
    [self.view addGestureRecognizer:tap];
}

- (void)explosion:(UITapGestureRecognizer *)t
{
    CGPoint loc = [t locationInView:self.view];

    // If there's already an explosion under the tap, remove it. We return immediately,
    // so mutating the array inside the loop is safe here.
    for (UIImageView *v in ivs) {
        if ([v pointInside:[v convertPoint:loc fromView:self.view] withEvent:nil]) {
            [ivs removeObject:v];
            [v removeFromSuperview];
            return;
        }
    }

    // Otherwise, add a new animating image view centered at the tap point.
    UIImageView *v = [[UIImageView alloc] initWithImage:img];
    v.center = loc;
    [ivs addObject:v];
    [self.view addSubview:v];
}
We added a set of PNG images to our project, explosion1.png through explosion81.png, which represent an animated sequence of a fiery explosion. Our code is quite simple: we detect a tap on the screen and either place a new explosion animation at the tap point or, if there was already an explosion going on at that point, remove it. Note that the essential code consists of just creating an animated image via animatedImageNamed:duration:, to which we pass the string @"explosion" and the duration in seconds.
You'll have to run the app on the simulator or a device yourself in order to enjoy the fireworks display, but here's an image that captures a single frame of the action, with several explosions going on at the same time.
Admittedly, if you were developing a fast-paced action game such as a shoot 'em up or a side-scrolling platformer, UIImage's support for animated images would seem quite primitive, and it wouldn't be your go-to approach for implementing animation. That's not really what UIImage is built for, but in less demanding scenarios it might be just the ticket! The animation runs continuously until you remove the animated image or its image view from the interface, so if you want an animation to stop after a prescribed time interval, you can send a delayed message with performSelector:withObject:afterDelay: or use an NSTimer.
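For example, a minimal sketch of the delayed-message approach would add one line inside explosion: plus a small, hypothetical helper method:

// In explosion:, right after [self.view addSubview:v], schedule the removal:
[self performSelector:@selector(removeExplosion:) withObject:v afterDelay:3.0];

// A hypothetical helper that takes the explosion's image view down again:
- (void)removeExplosion:(UIImageView *)v
{
    [ivs removeObject:v];
    [v removeFromSuperview];
}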
Conclusion
In this tutorial, we looked at some useful but lesser-known features of the UIImage class. I suggest you take a look at the UIImage Class Reference, because some of the features we discussed here have more options than we covered. For example, images can be composited using one of several blending options. Resizable images can be configured in one of two resizing modes, tiling (which is the one we used implicitly) or stretching. Even animated images can have insets. We didn't talk about the underlying CGImage opaque type that UIImage wraps. You need to deal with CGImages if you program at the Core Graphics level, the C-based API that sits one level below UIKit in the iOS framework stack. Core Graphics is more powerful to program with than UIKit, but not quite as easy. We also didn't talk about images created with data from a Core Image object, as that would make more sense in a Core Image tutorial.
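As a small taste of the blending options just mentioned, here is a minimal sketch (reusing two of the rock images from earlier) that composites one image over another with a multiply blend via UIImage's drawInRect:blendMode:alpha: method:

UIImage *base = [UIImage imageNamed:@"rock1"];
UIImage *overlay = [UIImage imageNamed:@"rock2"];
CGRect rect = CGRectMake(0, 0, base.size.width, base.size.height);

UIGraphicsBeginImageContextWithOptions(base.size, NO, 0.0);
[base drawInRect:rect];
// Multiply-blend the second image on top at 80% opacity.
[overlay drawInRect:rect blendMode:kCGBlendModeMultiply alpha:0.8];
UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();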
I hope you found this tutorial useful. Keep coding!