This tutorial builds upon the results of a previous Mobiletuts+ tutorial to create an enhanced version of the drawing app, in which the thickness of the pen stroke varies smoothly with the speed of the user's drawing, making the resulting sketch look even more stylistic and interesting.
Overview
If you haven't already done so, I strongly recommend that you work through the first tutorial before starting this one. I'll make passing references to the concepts and code from the first tutorial, but I won't go into the details here.
In the first tutorial we implemented an algorithm that interpolated the touch points acquired as the user drew on the screen with a finger. The interpolation was done with Bezier curve segments (provided by the UIBezierPath class in UIKit), with four consecutive touch points comprising a single Bezier segment. We then performed a smoothing operation on the junction point connecting two adjacent segments to achieve an overall smooth freehand curve.
Also recall that in order to maintain drawing performance and UI responsiveness, we would (at particular instants) render the drawing generated up to that point into a bitmap. This freed us to reset our UIBezierPath, preventing our app from becoming sluggish and unresponsive due to the excessive computation an indefinitely growing path would require. We carried out this step whenever the user lifted a finger off the screen.
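As a quick refresher, that snapshot step looks essentially like this (a condensed extract of the code we'll build below; path and incrementalImage are the instance variables from the first tutorial):

// Render everything drawn so far into a bitmap, then reset the path.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0.0);
[incrementalImage drawAtPoint:CGPointZero]; // previously rendered strokes
[path stroke];                              // the current UIBezierPath
incrementalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[path removeAllPoints];                     // safe to reset the path now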
Now let's talk about our objectives for this tutorial. In principle, our requirement is straightforward: as the user draws with her finger, keep track of how fast her finger moves, and vary the width of the pen stroke accordingly. The exact relationship between the speed and how thick we want the stroke to be can be modified to achieve different aesthetic effects.
Keeping track of the drawing speed is simple enough: the app samples the user's touch approximately 60 times per second (as long as there is no slowdown on the main thread), so the instantaneous speed of the user's touch will be proportional to the distance between two consecutive touch samples.
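In code, that amounts to nothing more than the distance formula. A minimal sketch (the helper name is mine, not from the tutorial's code):

static CGFloat distanceBetweenSamples(CGPoint previous, CGPoint current)
{
    // Touch samples arrive at a roughly constant rate (~60 Hz), so the distance
    // between two consecutive samples is proportional to the finger's speed.
    CGFloat dx = current.x - previous.x;
    CGFloat dy = current.y - previous.y;
    return sqrtf(dx * dx + dy * dy);
}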
The obvious approach that suggests itself would be to vary the lineWidth property of the UIBezierPath class with respect to the drawing speed. However, this simple idea has a couple of issues and ultimately is not good enough to meet our demands. Keeping with the spirit of the first tutorial, we will implement this approach first so we can examine its shortcomings, and then think about iteratively improving it or scrapping it altogether and trying something else. This is how real code development happens anyway!
As we develop our app, we'll discover that due to the new and more complex requirements, our app will benefit if we move some code to the background, in particular the bitmap drawing code. We'll use Apple's Grand Central Dispatch (GCD) for that.
Let's dive right in and write some code!
1. First Attempt: A "Naive" Algorithm
Step 1
Fire up Xcode and create a new project with the "Empty Application" template. Call it VariableStrokeWidthTut. Make it a Universal project and check "Use Automatic Reference Counting", leaving the other options unchecked.
Step 2
In the project summary, for both devices, choose any one mode as the only supported interface orientation; it doesn't matter which one. I've chosen portrait right-side-up in this tutorial. It's reasonable for a drawing app to maintain a single orientation.
As discussed before, we'll start with the simplest possible idea, varying the UIBezierPath's lineWidth property, and see what it gives us.
Step 3
Create a new file called NaiveVarWidthView and make it a subclass of UIView.
Replace all the code in NaiveVarWidthView.m with the following:
#import "NaiveVarWidthView.h" @implementation NaiveVarWidthView { UIBezierPath *path; UIImage *incrementalImage; CGPoint pts[5]; uint ctr; } - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { [self setMultipleTouchEnabled:NO]; path = [UIBezierPath bezierPath]; } return self; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { ctr = 0; UITouch *touch = [touches anyObject]; pts[0] = [touch locationInView:self]; } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint p = [touch locationInView:self]; ctr++; pts[ctr] = p; if (ctr == 4) { pts[3] = CGPointMake((pts[2].x + pts[4].x)/2.0, (pts[2].y + pts[4].y)/2.0); [path moveToPoint:pts[0]]; [path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]]; UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0.0); // ................. (1) if (!incrementalImage) { UIBezierPath *rectpath = [UIBezierPath bezierPathWithRect:self.bounds]; [[UIColor whiteColor] setFill]; [rectpath fill]; } [incrementalImage drawAtPoint:CGPointZero]; [[UIColor blackColor] setStroke]; float speed = 0.0; for (int i = 0; i < 3; i++) { float dx = pts[i+1].x - pts[i].x; float dy = pts[i+1].y - pts[i].y; speed += sqrtf(dx * dx + dy * dy); } // ................. (2) #define FUDGE_FACTOR 100 // emperically determined float width = FUDGE_FACTOR/speed; // ................. (3) [path setLineWidth:width]; [path stroke]; incrementalImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); [self setNeedsDisplay]; [path removeAllPoints]; // ................. (4) pts[0] = pts[3]; pts[1] = pts[4]; ctr = 1; } } - (void)drawRect:(CGRect)rect { [incrementalImage drawInRect:rect]; } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [self setNeedsDisplay]; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [self touchesEnded:touches withEvent:event]; } @end
This code has only a few modifications from the final version of the app from the first tutorial. I'll only discuss what's new here. Referring to the points in the code:
- (1) We're creating an off-screen bitmap to render (draw) into, as before. However, this time we're doing the off-screen rendering step after every drawing update (that is, after every sampling of four touch points, which comes to around 60/4 = 15 times per second). Why? It's because a single UIBezierPath instance can have only one value of lineWidth. Since our objective is to vary the line width according to the drawing speed, instead of having one long Bezier path to which we keep appending points (as in the first tutorial), we need to decompose our path into the smallest possible segments so each can have a different lineWidth value. Obviously, since four points go into defining a cubic Bezier, our segments can't be any shorter than that. So we'd need to allocate a new UIBezierPath object for every four points received until the offscreen rendering step happens. We'd have to keep allocating memory for new UIBezierPaths potentially indefinitely if we only did the bitmap rendering when the user lifted her finger off the screen. On the other extreme, we could do the offscreen rendering step after every four points acquired (around 60/4 = 15 times per second), so that we only need to keep the one instance of UIBezierPath with no more than four points in it, and that's what we've done here. We could also make a compromise and do the offscreen drawing step periodically but less frequently, creating new UIBezierPaths until that step happens.
- (2) We're using a simple heuristic for the "speed" value by computing the straight-line distance between adjacent points as a (rough) approximation of the length of the Bezier curve.
- (3) We're setting the lineWidth to be the inverse of the drawing speed times a "fudge factor" determined experimentally (such that the line has reasonable width at the average speed a typical user is expected to draw with). A worked example follows this list.
- (4) After the offscreen bitmap render, we can remove all the points in our UIBezierPath instance and start fresh. To reiterate, this step happens after every four touch points acquired.
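To put rough numbers on points (2) and (3): with FUDGE_FACTOR at 100, if consecutive touch samples are about 5 points apart, the three inter-point distances sum to a "speed" of roughly 15, giving a width of 100/15 ≈ 6.7 points; a fast flick with samples 20 points apart gives a speed of about 60 and a width of only 100/60 ≈ 1.7 points. (These sample spacings are illustrative figures of my own, not measurements from the tutorial.) So the slower you draw, the thicker the stroke.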
Step 4
Paste the following code into AppDelegate.m in order to configure the view controller and assign it a view which is an instance of NaiveVarWidthView.
#import "AppDelegate.h" #import "NaiveVarWidthView.h" @implementation AppDelegate - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]]; self.window.backgroundColor = [UIColor whiteColor]; UIViewController *vc = [[UIViewController alloc] init]; self.window.rootViewController = vc; vc.view = [[NaiveVarWidthView alloc] initWithFrame:self.window.bounds]; vc.view.frame = self.window.bounds; vc.view.backgroundColor = [UIColor whiteColor]; [self.window makeKeyAndVisible]; return YES; }
Step 5
Build and run the app. Scribble on your device and carefully note the result:
Here the line width is definitely changing with the variation in drawing speed, but the results are not really impressive. The width jumps rather abruptly instead of varying smoothly along the curve the way we would like. Let's look at these problems in more detail:
As we discussed previously, the lineWidth property is a fixed value for a single UIBezierPath instance and unfortunately can't be made to vary along its length. Even though we're using the smallest possible Bezier path (with only four points), the change in stroke width still takes place only at the junction of two adjacent paths, giving rise to a "jumpy" rather than continuous variation of width.
The second implementation-related issue is that even though Core Graphics uses the abstract concept of "points" to represent sizes such as lineWidth, our "canvas" is actually composed of discrete pixels. Depending on whether the device has a non-Retina or Retina display, one unit of length in points corresponds to one or two pixels, respectively. Like any good vector drawing API, Core Graphics internally employs some "tricks" (such as anti-aliasing) to visually depict non-integral line widths, but it isn't realistic to expect lines of arbitrary thickness: a line of width (say) 2.1 points will probably be rendered identically to a line of width 2.0 points, and a perceptible change in rendering occurs only for a large increment in the value of lineWidth. Note that this discretization issue is omnipresent, but the right approach or algorithm can make all the difference.
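If you want to see the point-to-pixel mapping for yourself, UIKit exposes the display's scale factor. A tiny illustrative helper (my own, not part of the app):

static CGFloat pixelsForPointWidth(CGFloat pointWidth)
{
    // [UIScreen mainScreen].scale is 1.0 on non-Retina and 2.0 on Retina displays.
    // A lineWidth of 2.1 points covers about 4.2 pixels on Retina, close enough to
    // the 4 pixels of a 2.0-point line that the two render virtually identically.
    return pointWidth * [UIScreen mainScreen].scale;
}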
You might be able to improve the results marginally by tinkering with the lineWidth calculation and so on, but I think this approach is fundamentally limited, so we need to approach the problem from a fresh perspective.
Before moving on to that, let's address the fact that we're now doing the offscreen rendering step periodically (up to 15 times per second, in fact) and, more significantly, in between touch point acquisition. On my iPhone 4, I determined (using a counter and a timer that fired every second) that this was causing the touch acquisition rate to drop from 60-63 per second (for the code from the first tutorial) to around 48-52 per second, which is a remarkable drop! Obviously this represents a decrease in the app's responsiveness, and it will further degrade the quality of the interpolation, making the resultant curve look less smooth. Strictly speaking, we ought to use the Instruments tool to analyze the app's performance, but for the purposes of this tutorial let's say we've done that and verified that the offscreen rendering operation is what's consuming the most time.
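If you'd like to reproduce that measurement, here's a rough sketch of the counter-and-timer idea (the method and ivar names are my own; add NSUInteger touchCount; and NSTimer *rateTimer; as instance variables of the view):

- (void)beginMeasuringTouchRate
{
    touchCount = 0;
    rateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                 target:self
                                               selector:@selector(logTouchRate:)
                                               userInfo:nil
                                                repeats:YES];
}

- (void)logTouchRate:(NSTimer *)timer
{
    // Log how many touchesMoved: calls arrived during the past second, then reset.
    NSLog(@"touch samples in the last second: %u", (unsigned)touchCount);
    touchCount = 0;
}

// ...and increment touchCount at the top of touchesMoved:withEvent:.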
The issue with our code lies in the touchesMoved:withEvent: method: after every fourth touch point is acquired, control enters the body of the if statement, executes the time-consuming rendering code, and only after completing it does it exit the method. Until that happens, the UI is unable to process the next touch.
This type of problem is, in general terms, not an uncommon one. We have a time-consuming operation (in this case, off-screen rendering) whose result (the bitmap) is useful only after the entire operation finishes. At the same time, we have short but frequent events that cannot tolerate latency (here, touch acquisition). If we have multiple processors to run our code, we'd like to separate the two "paths of code" so that they can execute independently, each on its own processor. Even with a single processor, we'd like to arrange things so that we have two separate code paths, with execution of the short, urgent one interspersed within the long one, the processor scheduling time for each code path according to its timing and priority requirements. Hopefully it's clear that we've just described multithreading (albeit in a greatly simplified way).
One clue that multithreading would be helpful in this situation is that we're required to draw the image only once for every four consecutive touch points, so in reality, if things are arranged properly, there is more time available for the bitmap drawing code to run than we made use of above.
2. Moving Off-Screen Drawing to the Background With GCD
In general terms, we want to move the rendering code away from the main thread, which is responsible for drawing on the screen and processing user events. The iOS SDK offers several options to achieve this, including manual threading, NSOperation, and Grand Central Dispatch (GCD). Here we'll be using GCD. It isn't possible to cover GCD in significant detail in this tutorial, so my plan is to explain the bits we use as I run you through the code. If you understand the "design pattern" we're going to apply and how it helps solve the problem at hand, you'll be able to adapt it to other problems of a similar nature, for instance downloading large amounts of Internet data or performing a complex filtering operation on an image, all while keeping the UI responsive.
Step 1
Create a new UIView subclass called NaiveVarWidthBGRenderingView.
Paste the following code into NaiveVarWidthBGRenderingView.m:
#import "NaiveVarWidthBGRenderingView.h" #define CAPACITY 100 // buffer capacity @implementation NaiveVarWidthBGRenderingView { UIImage *incrementalImage; CGPoint pts[5]; uint ctr; CGPoint pointsBuffer[CAPACITY]; // ................. (1) uint bufIdx; dispatch_queue_t drawingQueue; } - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { [self setMultipleTouchEnabled:NO]; drawingQueue = dispatch_queue_create("drawingQueue", NULL); // ................. (2) } return self; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { ctr = 0; bufIdx = 0; UITouch *touch = [touches anyObject]; pts[0] = [touch locationInView:self]; } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint p = [touch locationInView:self]; ctr++; pts[ctr] = p; if (ctr == 4) { pts[3] = CGPointMake((pts[2].x + pts[4].x)/2.0, (pts[2].y + pts[4].y)/2.0); pointsBuffer[bufIdx] = pts[0]; pointsBuffer[bufIdx + 1] = pts[1]; pointsBuffer[bufIdx + 2] = pts[2]; pointsBuffer[bufIdx + 3] = pts[3]; bufIdx += 4; CGRect bounds = self.bounds; dispatch_async(drawingQueue, ^{ // ................. (3) if (bufIdx == 0) return; // ................. (4) UIBezierPath *path = [UIBezierPath bezierPath]; for ( int i = 0; i < bufIdx; i += 4) { [path moveToPoint:pointsBuffer[i]]; [path addCurveToPoint:pointsBuffer[i+3] controlPoint1:pointsBuffer[i+1] controlPoint2:pointsBuffer[i+2]]; } UIGraphicsBeginImageContextWithOptions(bounds.size, YES, 0.0); if (!incrementalImage) // first time; paint background white { UIBezierPath *rectpath = [UIBezierPath bezierPathWithRect:self.bounds]; [[UIColor whiteColor] setFill]; [rectpath fill]; } [incrementalImage drawAtPoint:CGPointZero]; [[UIColor blackColor] setStroke]; float speed = 0.0; for (int i = 0; i < 3; i++) { float dx = pts[i+1].x - pts[i].x; float dy = pts[i+1].y - pts[i].y; speed += sqrtf(dx * dx + dy * dy); } #define FUDGE_FACTOR 100 // emperically determined float width = FUDGE_FACTOR/speed; [path setLineWidth:width]; [path stroke]; incrementalImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); dispatch_async(dispatch_get_main_queue(), ^{ // ................. (5) bufIdx = 0; [self setNeedsDisplay]; }); }); pts[0] = pts[3]; pts[1] = pts[4]; ctr = 1; } } - (void)drawRect:(CGRect)rect { [incrementalImage drawInRect:rect]; } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [self setNeedsDisplay]; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [self touchesEnded:touches withEvent:event]; } @end
Step 2
Modify AppDelegate.m to #import NaiveVarWidthBGRenderingView and to set the root view controller's view to be an instance of NaiveVarWidthBGRenderingView. Simply replacing the string NaiveVarWidthView with NaiveVarWidthBGRenderingView everywhere in AppDelegate.m will do the trick.
Run the code. We haven't touched our drawing code yet, so there's nothing new to see. Hopefully you'll be satisfied knowing that your code makes more effective use of your device's processing resources and probably performs better on older devices. On my iPhone 4, with the same test described above, the touch acquisition rate went back up to its maximum value (60-63 per second).
Now let's study the code, with reference to the numbered points in the code listing:
- (1) We've introduced an array to store incoming points, pointsBuffer. I'll explain exactly why in a bit. The size of the buffer (100) was chosen arbitrarily; in fact, we don't expect this buffer to be filled beyond the four points belonging to a single Bezier curve segment, but it's there to handle a certain situation that might conceivably arise.
- (2) GCD abstracts threads behind the concept of a queue. We submit tasks (units of work) on queues. There are two types of queues, concurrent and serial. We'll only talk about serial queues here, because that's the only type we're explicitly using. A serial queue executes the tasks placed on it strictly on a first-in, first-out basis, much like a first-come, first-served queue at a bank teller or a supermarket cashier. The word "serial" also indicates that a task will complete before the next one is run, much as a supermarket cashier won't start attending to the next customer before he's done serving the current one. Here we've created a queue and assigned it the identifier drawingQueue. It helps to bear in mind that all the code we normally write is tacitly executed on the always-existing main queue, which itself is a serial queue! So now we have two queues. We haven't actually scheduled any work on the drawing queue yet.
- (3) The call to the dispatch_async() function asynchronously schedules the bitmap drawing code, packaged in the block ^{ ... }, on drawingQueue. "Asynchronous" implies that while the task has been submitted, it hasn't executed yet. In fact, dispatch_async() returns control to the caller immediately, in this case the body of the touchesMoved:withEvent: method (on the main queue). This is a fundamental difference from our previous (non-threaded) implementation, where everything happened on the main queue and the bitmap drawing code had to execute to completion before moving on! Make sure you grasp this distinction. With our present implementation, on a multicore device it's quite possible that the drawing queue would run on a different core than the one processing the main queue, with both queues processed simultaneously, much like a small supermarket with two cashiers serving two queues of customers at the same time. To understand how things work on a single-processor device, consider the following analogy: imagine an office with a single photocopier. The "copy-machine guy" has a load of work that he received in bulk and is expected to take the whole day to complete. However, every now and then one of the office employees brings him a few pages to photocopy. Obviously, the smart thing for him to do is temporarily interrupt the time-consuming job he's been at throughout the day, complete the short (but ostensibly urgent) job submitted by the employee, and then get back to his previous duties. In this analogy, the employee's short but urgent photocopy job corresponds to high-priority tasks that appear on the main queue, such as touch events or on-screen drawing, while the bulk job corresponds to time-consuming tasks such as downloading data from the Internet or (in our case) drawing to an off-screen buffer. The operating system behaves like the smart copy-machine guy, scheduling tasks on the single processor (the lone photocopier) in a way that best serves the needs of the app (the office). (I hope this analogy wasn't too cheesy!) Anyway, the actual code submitted to the drawing queue is pretty much what we had in our earlier implementation, except for our use of a buffer to which we append our touch points, which I'll discuss next.
- (4) This bit of code has to do with our use of the pointsBuffer array. Consider the hypothetical scenario in which an off-screen drawing task gets enqueued on the drawing queue but for some reason doesn't get a chance to execute, and in the meanwhile the next four touch points are acquired on the main queue and another drawing task is enqueued behind the first one. Who knows, maybe our app was more complex and had other stuff going on at the same time as well. By buffering our touch points, we ensure that in the case of multiply-enqueued off-screen drawing tasks, the first one does all the drawing and the ones after it simply return because the points buffer is empty. As I said previously, this scenario of the drawing queue getting backed up with two or more drawing tasks might not occur at all, and if it occurs on a persistent basis, it might mean that our algorithm is too slow for the device, whether because of its complexity, poor design, or our app trying to do too many things. But on the off chance it happens, we've handled it.
- (5) All UI update actions must happen on the main queue, which we do with another asynchronous dispatch from within the drawing task on the drawing queue. As with the previous call to dispatch_async(), the task of updating the screen has been submitted, but this doesn't mean the app is going to drop what it's doing and execute it right then and there.
The pattern that we've implemented looks like this in general, and is applicable to many other scenarios:
// Main queue
dispatch_async(aSerialQueue, ^{
    // background processing
    dispatch_async(mainQueue, ^{
        // update UI with results
    });
});
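To make that concrete, here's the same pattern filled in for a hypothetical image-filtering task (filteredImage() and imageView are placeholders of mine, not part of this app):

dispatch_queue_t filterQueue = dispatch_queue_create("filterQueue", NULL); // a serial queue

dispatch_async(filterQueue, ^{
    UIImage *result = filteredImage(); // hypothetical time-consuming work
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = result;      // UI updates belong on the main queue
    });
});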
In general, writing multithreaded code may not be an easy task. But it isn't always as complex as you might think (as our own example indicates). It might sometimes seem like a "thankless chore" because there's no explicit "wow factor" that you get to show at the end of it. But always bear in mind that if your app's UI runs as smooth as butter then your users are much more likely to enjoy using it and come back to it again and again!
3. Developing a Better Algorithm
In the first iteration, we determined that we were unlikely to get a continuous and smoothly width-varying pen stroke with the "naive" approach we'd started with. So now let's try a new approach.
The method I'm going to present here is fairly straightforward, although it does require us to think out of the box. Instead of representing our drawn stroke with one Bezier curve as we were doing previously, we now represent it by the filled region between two Bezier paths, each slightly offset to either side of the imaginary curve traced out by the user's finger. By slightly varying the offsets of the control points that define these two Bezier curves, we manage to achieve a very plausible effect of a smoothly varying pen width.
The figure above shows this construction for a single cubic Bezier segment. The x's with red circles around them correspond to the captured touch points, and the dashed brown curve is the Bezier segment generated from these points. It corresponds to the Bezier path we drew in our previous implementations.
For each of the four touch points, a pair of offset points is generated, shown at either end of a green line segment. These green line segments are made perpendicular to the line segment joining two adjacent touch points. We thus generate two sets of four points, one on either side of the set of touch points, and each offset point set can be used to generate an offset Bezier curve lying on either side of the traced Bezier curve (the two solid brown curves). It should be clear from the figure that the width variation is controlled by the distances of the offset points (i.e. the lengths of the green line segments). If we fill the region between these two offset curves, we've effectively simulated a "stroke" of varying width!
This approach better leverages how vector drawing works in Core Graphics/UIKit, because it models continuous variation much better than the "abrupt" width changes of the "naive" method. Bottom line: it works well.
The main step we need to implement is a method that gives us the coordinates of these offset points. Let's specify the problem more precisely and geometrically. We have a line segment connecting points p1 = (x1, y1) and p2 = (x2, y2), which I'll denote as p1-p2. We'd like to find a line passing through p2, perpendicular to p1-p2. This problem is easy to solve if we formulate it in terms of vectors. The line segment p1-p2 can be represented by the equation p = p1 + (p2 - p1)t, where t is a variable parameter. Varying t from 0 to 1 causes p to "sweep" from p1 to p2 along the straight line connecting the two points. The two special cases are t = 0, corresponding to p = p1, and t = 1, corresponding to p = p2.
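The parametric form translates directly into code. A small illustrative helper (mine, not part of the app):

static CGPoint pointOnSegment(CGPoint p1, CGPoint p2, float t)
{
    // p = p1 + (p2 - p1)t: t = 0 yields p1, t = 1 yields p2, and t = 0.5 yields
    // the midpoint, which is exactly what touchesMoved:withEvent: computes for pts[3].
    return CGPointMake(p1.x + t * (p2.x - p1.x),
                       p1.y + t * (p2.y - p1.y));
}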
We can split up this parametric equation in terms of x and y coordinates to get the pair of equations x = x1 + t(x2 - x1) and y = y1 + t(y2 - y1), where p = (x, y). We need to invoke a theorem from geometry that states that the product of the slopes of two perpendicular lines is -1. The slope of the line through (x1, y1) and (x2, y2) is equal to (y2 - y1)/(x2 - x1). Using this property and some algebraic manipulation, we can work out the end points pa and pb of the line perpendicular to p1-p2, such that pa and pb are an equal distance from p2. The length of pa-pb can be controlled by a variable that expresses the ratio of the length of this line to that of p1-p2. Instead of writing out a bunch of messy equations, I've drawn a figure that should clarify everything.
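For the record, the algebra comes out quite cleanly. Writing d = (dx, dy) = (x2 - x1, y2 - y1) for the direction of p1-p2, the vector (dy, -dx) is perpendicular to it (their slopes, dy/dx and -dx/dy, multiply to -1). Stepping by f/2 times this vector to either side of p2 gives the two offset points:

pa = (x2 + (f/2)dy, y2 - (f/2)dx)
pb = (x2 - (f/2)dy, y2 + (f/2)dx)

so pa-pb has length f times the length of p1-p2, with p2 at its midpoint. This is exactly what the lineSegmentPerpendicularTo:ofRelativeLength: method in the code below computes.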
Step 1
Let's implement these ideas in code! Create FinalAlgView as a subclass of UIView and paste the following code into FinalAlgView.m. Also, don't forget to modify AppDelegate.m to use this class as the view controller's view:
#define CAPACITY 100
#define FF .2
#define LOWER 0.01
#define UPPER 1.0

#import "FinalAlgView.h"

typedef struct
{
    CGPoint firstPoint;
    CGPoint secondPoint;
} LineSegment; // ................. (1)

// Forward declarations for the helper functions defined at the bottom of this file.
float len_sq(CGPoint p1, CGPoint p2);
float clamp(float value, float lower, float higher);

@implementation FinalAlgView
{
    UIImage *incrementalImage;
    CGPoint pts[5];
    uint ctr;
    CGPoint pointsBuffer[CAPACITY];
    uint bufIdx;
    dispatch_queue_t drawingQueue;
    BOOL isFirstTouchPoint;
    LineSegment lastSegmentOfPrev;
}

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        [self setMultipleTouchEnabled:NO];
        drawingQueue = dispatch_queue_create("drawingQueue", NULL);
        UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(eraseDrawing:)];
        tap.numberOfTapsRequired = 2; // Tap twice to clear drawing!
        [self addGestureRecognizer:tap];
    }
    return self;
}

- (void)eraseDrawing:(UITapGestureRecognizer *)t
{
    incrementalImage = nil;
    [self setNeedsDisplay];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    ctr = 0;
    bufIdx = 0;
    UITouch *touch = [touches anyObject];
    pts[0] = [touch locationInView:self];
    isFirstTouchPoint = YES;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint p = [touch locationInView:self];
    ctr++;
    pts[ctr] = p;
    if (ctr == 4)
    {
        pts[3] = CGPointMake((pts[2].x + pts[4].x)/2.0, (pts[2].y + pts[4].y)/2.0);
        for (int i = 0; i < 4; i++)
        {
            pointsBuffer[bufIdx + i] = pts[i];
        }
        bufIdx += 4;
        CGRect bounds = self.bounds;
        dispatch_async(drawingQueue, ^{
            UIBezierPath *offsetPath = [UIBezierPath bezierPath]; // ................. (2)
            if (bufIdx == 0) return;
            LineSegment ls[4];
            for (int i = 0; i < bufIdx; i += 4)
            {
                if (isFirstTouchPoint) // ................. (3)
                {
                    ls[0] = (LineSegment){pointsBuffer[0], pointsBuffer[0]};
                    [offsetPath moveToPoint:ls[0].firstPoint];
                    isFirstTouchPoint = NO;
                }
                else
                    ls[0] = lastSegmentOfPrev;
                float frac1 = FF/clamp(len_sq(pointsBuffer[i], pointsBuffer[i+1]), LOWER, UPPER); // ................. (4)
                float frac2 = FF/clamp(len_sq(pointsBuffer[i+1], pointsBuffer[i+2]), LOWER, UPPER);
                float frac3 = FF/clamp(len_sq(pointsBuffer[i+2], pointsBuffer[i+3]), LOWER, UPPER);
                ls[1] = [self lineSegmentPerpendicularTo:(LineSegment){pointsBuffer[i], pointsBuffer[i+1]} ofRelativeLength:frac1]; // ................. (5)
                ls[2] = [self lineSegmentPerpendicularTo:(LineSegment){pointsBuffer[i+1], pointsBuffer[i+2]} ofRelativeLength:frac2];
                ls[3] = [self lineSegmentPerpendicularTo:(LineSegment){pointsBuffer[i+2], pointsBuffer[i+3]} ofRelativeLength:frac3];
                [offsetPath moveToPoint:ls[0].firstPoint]; // ................. (6)
                [offsetPath addCurveToPoint:ls[3].firstPoint controlPoint1:ls[1].firstPoint controlPoint2:ls[2].firstPoint];
                [offsetPath addLineToPoint:ls[3].secondPoint];
                [offsetPath addCurveToPoint:ls[0].secondPoint controlPoint1:ls[2].secondPoint controlPoint2:ls[1].secondPoint];
                [offsetPath closePath];
                lastSegmentOfPrev = ls[3]; // ................. (7)
                // Suggestion: Apply smoothing on the shared line segment of the two adjacent offsetPaths
            }
            UIGraphicsBeginImageContextWithOptions(bounds.size, YES, 0.0);
            if (!incrementalImage)
            {
                UIBezierPath *rectpath = [UIBezierPath bezierPathWithRect:self.bounds];
                [[UIColor whiteColor] setFill];
                [rectpath fill];
            }
            [incrementalImage drawAtPoint:CGPointZero];
            [[UIColor blackColor] setStroke];
            [[UIColor blackColor] setFill];
            [offsetPath stroke]; // ................. (8)
            [offsetPath fill];
            incrementalImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [offsetPath removeAllPoints];
            dispatch_async(dispatch_get_main_queue(), ^{
                bufIdx = 0;
                [self setNeedsDisplay];
            });
        });
        pts[0] = pts[3];
        pts[1] = pts[4];
        ctr = 1;
    }
}

- (void)drawRect:(CGRect)rect
{
    [incrementalImage drawInRect:rect];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Left as an exercise!
    [self setNeedsDisplay];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self touchesEnded:touches withEvent:event];
}

- (LineSegment)lineSegmentPerpendicularTo:(LineSegment)pp ofRelativeLength:(float)fraction
{
    CGFloat x0 = pp.firstPoint.x, y0 = pp.firstPoint.y, x1 = pp.secondPoint.x, y1 = pp.secondPoint.y;
    CGFloat dx, dy;
    dx = x1 - x0;
    dy = y1 - y0;
    CGFloat xa, ya, xb, yb;
    xa = x1 + fraction/2 * dy;
    ya = y1 - fraction/2 * dx;
    xb = x1 - fraction/2 * dy;
    yb = y1 + fraction/2 * dx;
    return (LineSegment){ (CGPoint){xa, ya}, (CGPoint){xb, yb} };
}

float len_sq(CGPoint p1, CGPoint p2)
{
    float dx = p2.x - p1.x;
    float dy = p2.y - p1.y;
    return dx * dx + dy * dy;
}

float clamp(float value, float lower, float higher)
{
    if (value < lower) return lower;
    if (value > higher) return higher;
    return value;
}

@end
Let's study this code, again with reference to the numbered comments:
- (1) LineSegment is a simple C structure that has been typedef'd to conveniently package the two CGPoints at the ends of a line segment. Nothing special.
- (2) The offsetPath is the path we'll fill and stroke to achieve our variably thick pen stroke. It's a closed path (meaning its first point is connected to its last one so that it can be filled), consisting of two Bezier subpaths offset to either side of the traced path, plus two straight line segments connecting the corresponding ends of the two subpaths.
- (3) Here we're dealing with the special case of the first touch, when the user puts a finger on the view. We don't create offset points for this first point.
- (4) This is the factor used to relate the stroke width to the speed of the drawing (taking the distance between two touch points as representing the user's speed). The function len_sq() returns the squared distance between two points. Why the squared distance? I'll explain that in the next point. As before, FF is a "fudge factor" that I decided upon after trial and error in order to get visually pleasing results. The clamp() function keeps the value of its argument from going below or above set thresholds, preventing our pen stroke from becoming too thick or too thin. Again, the values of LOWER and UPPER were chosen after some trial and error.
- (5) The method lineSegmentPerpendicularTo:ofRelativeLength: implements the geometrical idea our approach is based on, as discussed earlier. The first argument corresponds to p1-p2 from the figure. Observe from the figure that the longer p1-p2 is, the longer pa-pb will be (in absolute terms). So by making f inversely proportional to the length of p1-p2, we "cancel out" this dependence on length; for example, f = 1/length(p1-p2) would make pa-pb have length 1 point, independent of the length of p1-p2. To make pa-pb's length vary according to the length of p1-p2, I've divided by p1-p2's length again. This is the motivation for the inverse squared length factor from the previous point.
- (6) This just constructs the closed path by joining together the two Bezier subpaths and the two straight line segments. Note that the subpaths comprising the offsetPath have to be added in a particular sequence, such that each subpath begins at the last point of the previous one. Note in particular the direction of the second cubic Bezier segment. You might trace out the shape of a typical offsetPath by following the sequence in the code to understand how it forms.
- (7) This just enforces continuity between two adjacent offsetPaths.
- (8) We both stroke and fill the path. If we don't stroke, adjacent offsetPath segments sometimes appear non-contiguous.
Step 2
Build the app and run it. I think you'll agree that the subtle width variation of the sketched line as you draw makes for an interesting stylistic effect.
For comparison, here's the end effect with the fixed-stroke-width algorithm from the original tutorial:
Conclusion
We started with a freehand sketching app, enhanced it to incorporate multithreading, and introduced a stylistic effect in the drawing algorithm. As always, there's room for improvement. Touch ending (i.e. when the user lifts their finger after having drawn) needs to be handled so that the sketched line terminates gracefully. You might observe that if you scribble very fast in a zigzag pattern, the curve can become quite pinched at its turning points. The width variation algorithm could be made more sophisticated so that the thickness of the line varies more realistically, or it could simply be messed with to get some fun effects for a kids' app! You can also vary the properties of the Bezier in each iteration of the drawing cycle; for instance, you can introduce subtle effects by varying the color of the fill and stroke slightly, in addition to the thickness of the stroke. A small sketch of that last idea follows.
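Here's one quick sketch of the color idea (the mapping and the 100.0f scale are my own guesses, not from the tutorial): inside the drawing block of FinalAlgView.m, before stroking and filling, you might compute

// Map an estimate of the drawing speed (the distance between two consecutive
// touch samples) to a hue, reusing the clamp() and len_sq() helpers already
// defined in FinalAlgView.m.
float speedEstimate = sqrtf(len_sq(pointsBuffer[0], pointsBuffer[1]));
float hue = clamp(speedEstimate / 100.0f, 0.0f, 1.0f); // 100.0f is an arbitrary scale
UIColor *penColor = [UIColor colorWithHue:hue saturation:0.5 brightness:0.4 alpha:1.0];
[penColor setStroke];
[penColor setFill];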
I hope you found this tutorial beneficial and that it gave you some fresh ideas for your own drawing/sketching app. Happy coding!