Category Archives: iOS

Right Storyboard, Wrong Platform

If, in haste, you inadvertently add a storyboard file to your Mac or iOS project from the wrong platform palette, you’ll end up with a storyboard that compiles and installs into the app bundle, but which produces cryptic errors upon building and running. For example, a Mac storyboard lost in an iOS world:

*** Assertion failure in -[UIStoryboard initWithBundle:storyboardFileName:identifierToNibNameMap:designatedEntryPointIdentifier:], /SourceCache/UIKit_Sim/UIKit-3347.44.1/UIStoryboard.m:52

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: nibNameMap != nil'

The assertion above occurs when, internally to UIStoryboard, the Info.plist for your compiled “.storyboardc” file is consulted to determine its constituent “.nib” files. In the Mac case, the keys for storyboard Info.plist entries have an “NS” prefix, e.g. NSViewControllerIdentifiersToNibNames, whereas on iOS, it goes hunting for a UI-prefixed key: UIViewControllerIdentifiersToNibNames.

Granted, as soon as you proceeded to the next step, trying to populate the storyboard with UI elements that make sense for the platform, you would probably figure out your mistake. But if you’re just trying to get the ball rolling and end up immediately scratching your head over the failure, hopefully this blog post will have helped you figure out more quickly what was wrong.

I don’t think there’s any official way to change a created storyboard’s platform target. Best bet if you run into this is to delete the storyboard and recreate it from scratch, taking care to select the file template from the appropriate platform in Xcode’s “New File” panel.

Xcode Consolation

One of the best things any Mac or iOS developer can do to improve their craft is to simply watch another developer at work in Xcode. Regardless of the number of years or the diversity of projects that make up your experience with the tool, you will undoubtedly notice a neat trick here or there that changes everything about the way you work.

If you were sitting in my office on a typical workday, looking over my shoulder, that would be a little creepy. But one thing you’d notice that differentiates me from many others is the amount of time I spend typing into the Xcode debugger console.

In my experience, other developers fall into three groups when I mention that I use the console almost exclusively when interacting with the debugger:

  1. Those who share this approach. We exchange high-fives and move on.
  2. Those who appreciate the console for occasional tricks, but rely mostly on the variables view.
  3. Those who have never heard of or noticed the console.

Let’s assume for the sake of argument that you fall into group 3. You can show the console while debugging any process by selecting View -> Debug Area -> Activate Console from the menu bar. In fact, I do this so often that the keyboard shortcut, Cmd-Shift-C, is completely second nature to me.

Xcode window with console highlighted.

In fact I find the console so useful that it’s not unusual for me to hide much of the other junk, so I can have a mainline connection to the debugger.

Xcode window with large console area

So let’s go back again to the uncomfortable assumption that you’re sitting in my office, looking over my shoulder. What would you see me do in the console? All manner of things: For one thing I rarely click those visual buttons for stepping through code. Instead, I use these terse keyboard aliases, inherited from the bad old Gdb days. In lldb, these all come as standard abbreviations of more verbosely named commands:

  • n – step over the next line of source
  • s – step into the first line of source of a function
  • ni – step over the next instruction of assembly code
  • si – step into the first instruction of a function
  • c – continue
  • fin – continue until return from current stack frame

Those are the basics, but another trick, I believe it was called ret in Gdb, comes in handy often:

  • thread return – return immediately from the current stack frame

You could use this if you are stuck in some function that has crashed, for example, but you know that returning to the caller would allow the process to continue running as normal. Or, you could use it to completely circumvent a path of code by breaking on a function and bolting right out, optionally overriding the return value. Just to pick an absurd example with flair, let’s stub out AppKit’s +[NSColor controlTextColor]:

(lldb) b +[NSColor controlTextColor]
Breakpoint 1: where = AppKit`+[NSColor controlTextColor], address = 0x00007fff8fb05572
(lldb) break command add
Enter your debugger command(s).  Type 'DONE' to end.
> thread ret [NSColor redColor]
> c
> DONE
(lldb) c

Here we have opted to break on every single call to controlTextColor, returning to the caller with a fraudulent color that is apparently not consulted by the label for this popup menu in MarsEdit:

MarsEdit screenshot with labels drawn in red

In fact, mucking about with system symbols is one of the great tricks of the console, and lldb’s built-in “breakpoint” command brings with it superpowers that can’t be touched by Xcode’s dumbed down GUI-based controls. For example, what if we wanted to set a breakpoint that would catch not only +controlTextColor, but any similar variation? Using lldb’s support for regular expression breakpoints, it’s a snap:

(lldb) break set -r control.*Color -s AppKit
Breakpoint 1: 20 locations.

Here we request that a breakpoint be set for any symbol in the AppKit framework that contains the lower-cased “control,” followed by anything, then capitalized “Color.” And lldb just set 20 breakpoints at once. To see them all, just type “breakpoint list” and you’ll see all the variations listed out as sub-breakpoints:

(lldb) break list
Current breakpoints:
1: regex = 'control.*Color', locations = 20, resolved = 20, hit count = 0
  1.1: where = AppKit`+[NSColor controlTextColor], address = 0x00007fff8fb05572, resolved, hit count = 0 
  1.2: where = AppKit`+[NSDynamicSystemColor controlTextColor], address = 0x00007fff8fb05670, resolved, hit count = 0 
  1.3: where = AppKit`+[NSColor controlBackgroundColor], address = 0x00007fff8fb1465d, resolved, hit count = 0 
  1.4: where = AppKit`+[NSDynamicSystemColor controlBackgroundColor], address = 0x00007fff8fb1475d, resolved, hit count = 0 
  (etc...)

One major shortcoming of all this goodness is that Xcode has a mind of its own when it comes to which breakpoints are set on a given target. Anything you add from the lldb prompt will not register as a breakpoint in Xcode’s list, so you’ll have to continue to manipulate it through the console. You can enable or disable breakpoints precisely by specifying both the breakpoint number and its, umm, sub-number? For example, to disable the breakpoint on +[NSColor controlBackgroundColor] above, just type “break disable 1.3”.

Regular expressions can be very handy for setting breakpoints precisely on a subset of related methods, but they can also be very useful for casting about widely in the vain pursuit of a clue. For example, when I’m boggled by some behavior in my app, and suspect it has to do with some system-provided API, or even an internal framework method, I’ll just throw some verbs that seem related at a breakpoint, and see what I land on. For example, it seems like some delegate, somewhere, should be imposing this behavior:

(lldb) break set -r should.*:
Breakpoint 5: 735 locations.

That’s right, set a breakpoint for any ObjC method that includes the conventional “shouldBlahBlahBlah:” form in its selector name. Or if you are really at your wit’s end about an event-related issue, why not capture the whole lot and see where you stand?

(lldb) break set -r [eE]vent
Breakpoint 10: 9520 locations.

Sometimes I’ll set a breakpoint like this and immediately continue, to see what I land on, but often I’ll use lldb’s breakpoint system as a quick way of searching the universe of symbol names. After setting a breakpoint like the one above, I’ll do a “break list” and then search through the results for other substrings I’m interested in. This often gives me a clue about the existence of methods or functions that are pertinent to the problem at hand.

I’ve scratched the surface here of all the powerful things you can use Xcode’s debugger console for, but I hope it has brought you some increased knowledge about the tools at your disposal. If you’ve never used the console before, or only use it occasionally, perhaps I’ve pushed you a bit farther down the path of having its Cmd-Shift-C shortcut feel like second nature to you as well.

Share Extension Iterations

One of the many big surprises at WWDC this year was the news that iOS 8 and Mac OS X 10.10 would support app extensions, which give developers a variety of new ways in which data and services can be shared between apps.

One of these new types of extension, called a Share extension, essentially allows apps that generate user data to provide content to apps that publish user data. As a developer whose main app, MarsEdit, is a blog-publishing tool for the Mac, you can imagine how this might be of interest to me.

The first thing I noticed upon digging into app extension development is that it’s fairly fiddly to get set up, and the default build, run, invoke iteration loop is fraught with administrative overhead.

First of all, when you add a Share extension target to an existing project, you’ll discover that the new target can’t be run directly. To test or debug it, you have to invoke the Share extension from some app that supports … sharing. There are many default apps on the Mac and iOS that support sharing data of various formats. For example on Mac OS X the system Preview app makes a canonical sharing source for photo data, while the Notes app is one of many suitable test apps for sharing text. If you don’t do anything special after adding a Share extension target to your project, the auto-generated Xcode scheme will be configured to ask every time you run:

ChooseAnApp

This will get old fast, and 99% of the time what you’ll want to run is exactly the same dang app you just ran. Luckily, this is easily addressed in the scheme settings, where you can change the “Executable” selection in the “Info” section of the “Run” tab from “Ask on Launch” to a specific application of your choosing.

OtherApp

That’s all well and good, but you still have to go through the tedious task of selecting some suitable content in the target app, clicking the share button, and choosing your extension from the list, before you even have the luxury of examining your own code’s behavior.

Wouldn’t it be great if you could make changes to your app extension’s appearance or behavior, click Run, and moments later be looking at your Share extension’s UI without having to do anything else? Good news: you can, and I’m about to explain how to achieve it for both Mac and iOS targets.

Automatic Invocation

The problem with all the apps on your Mac or iOS device that may or may not support sharing pertinent data to your extension is that you have to walk them through the process of invoking you. When you’re deep into the development of a particular aspect of the user interface or mechanical behavior of your extension, what you really want is to repeatedly invoke the extension with the same, predictable kinds of data. Luckily, it’s possible on both Mac and iOS platforms for a host app to invoke sharing extensions programmatically, so you just need to choose a Run target that will predictably invoke your extension upon launching.

The only way I know to guarantee this is to write your own code that invokes the extension, and put it in a custom debugging-purposed target, or else put it into your main app as conditional code that only runs when building Debug variants and when prompted to take effect. I can imagine a sophisticated version of this approach involving a multi-faceted extension-debugging target that would be capable of fast-tracking extension invocation for a variety of data formats. But for the simplest solution, I recommend adding the ability to invoke your extension to the very app that hosts it. Then your simple test mode behavior will always coexist in the same source base as your extension code.

The way I approached this was to add an environment variable, “RSDebugAppExtension”, to the scheme configuration for my app extension, which is also configured now to target the main app as its Executable target. So when I build and run the app extension scheme, my main app runs with this environment variable set:

Xcode scheme editor showing the RSDebugAppExtension environment variable set for the Run action

Now in the main app’s applicationDidFinishLaunching: method, I just have to add the appropriate code for invoking the app extension’s UI. Without the platform-specific voodoo, it looks something like this:

#if DEBUG
	// Check for flag to invoke app extension
	NSDictionary* env = [[NSProcessInfo processInfo] environment];
	NSString* appExtensionDebug = [env objectForKey:@"RSDebugAppExtension"];
	if (appExtensionDebug != nil)
	{
		// RUN MY EXTENSION
	}
#endif

The #if DEBUG just ensures that I will never inadvertently ship this code in a release version of my app. The environment variable ensures I won’t be plagued by my extension’s UI appearing whenever I’m working on other aspects of my app besides the extension.

Now we just have to dig into the specifics of how to “RUN MY EXTENSION.” I’ll show you how that works for both Mac and iOS based Share extensions.

Invoking a Mac-based Share Extension

If you’re working on a Mac-based Share extension, the key to invoking it at runtime is to enlist the help of the NSSharingService infrastructure. You simply ask it for a service with your identifier, and ask it to “perform.”

NSString* serviceName = @"com.red-sweater.yeehaw";
NSSharingService* myService = [NSSharingService sharingServiceNamed:serviceName];
[myService performWithItems:@[@"Hello world"]];

Now when I build and run with the aforementioned environment variable set up, it pops up my Share extension, pre-populated with the text “Hello world.”

Invoking an iOS-based Share Extension

Unfortunately, iOS doesn’t support the same NSSharingService infrastructure that the Mac has. This seems like an area where we might be lucky to see more unification over the years, but for the time being the iOS notion of a sharing service is tied up with the existing “Social.framework” and its SLComposeViewController class. This is the interface that is supposed to be used to invoke a sharing panel for Facebook, Twitter, Tencent Weibo, etc. And luckily, although I don’t believe it’s documented, you can also pass in the extension identifier for your custom Share extension to instantiate a suitable view controller for hosting your extension:

NSString* serviceName = @"com.red-sweater.yeehaw";
SLComposeViewController* myController = [SLComposeViewController composeViewControllerForServiceType:serviceName];
[myController setInitialText:@"Hello World"];
[[self navigationController] presentViewController:myController animated:YES completion:nil];

You can see the basic gist of it is similar to the Mac, but you have to configure the view controller directly as opposed to providing a list of arbitrary input values. Also bear in mind that because the iOS-based solution above relies upon [self navigationController] being non-nil, you’ll need to add the special debugging code at a point in your app’s launch process when this is already set up, or else find another way of ensuring you can meaningfully present view controllers to your app’s main window.
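
If your app delegate doesn’t have a navigation controller handy at the point where the debugging code runs, one workaround (just a sketch, assuming a conventional single-window setup) is to present from the key window’s root view controller instead:

UIViewController* presentingController = [[[UIApplication sharedApplication] keyWindow] rootViewController];
[presentingController presentViewController:myController animated:YES completion:nil];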

Going Further

Notice that I hinge all the invocation on simply whether the RSDebugAppExtension environment variable is non-nil. Basically, while I’m debugging I have to go in and change the code in applicationDidFinishLaunching to invoke the extension in the way I want. This works OK for me, at least at this stage, but I can imagine down the road wanting to be able to fine-tune the behavior of how the extension is invoked. For this purpose, why not encode into the value of the environment variable itself certain clues about how to invoke the extension? It could specify, for example, what text should be passed to the extension, or other configuration as desired.
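
For example, the scheme could set the variable’s value to the text you want shared, and the launch-time check could pass whatever it finds along to the extension. Here’s a minimal sketch of that idea for the Mac case, reusing the service identifier from the earlier example (treat the details as hypothetical, not shipping code):

#if DEBUG
	NSDictionary* env = [[NSProcessInfo processInfo] environment];
	NSString* appExtensionDebug = [env objectForKey:@"RSDebugAppExtension"];
	if ([appExtensionDebug length] > 0)
	{
		// Use the variable's value itself as the text to share
		NSSharingService* myService = [NSSharingService sharingServiceNamed:@"com.red-sweater.yeehaw"];
		[myService performWithItems:@[appExtensionDebug]];
	}
#endif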

My intent in sharing this blog post is both to point out some potentially time-saving tricks that will help those of you developing Share app extensions, and to encourage you to look for creative ways to cut out the tedious aspects of whatever you work on. When all is said and done, the few monotonous steps during each iteration of work on a Share extension probably don’t add up to all that much in sheer time, but they take a lot of the joy and immediacy out of development, which frankly puts me in a bad mood. Clicking Run and seeing my work, including my latest changes, pop up instantly on the screen makes me feel a lot happier about my job.

Bridging iOS WebViews

I hinted in my previous post that similar techniques could be used to bridge JavaScript to Objective-C on iOS, but that it would require using undocumented methods. As with the Mac-based solution, it’s not such a big deal so long as you’re only going to use it for private, debugging builds of your app. So how can it be done?

WebKit on iOS is for the most part famously, frustratingly hidden from developer use. As apps like Safari show, there is an extensive, powerful WebKit framework just as there is on the Mac, but iOS developers are limited to the comparatively impotent UIWebView.

But here’s a hint: UIWebView is itself a client of “proper WebKit,” and thus implements many of the powerful delegate methods that you would implement as the Mac delegate of a WebView. So if I were an iOS developer and wanted to play around with bridging WebView to Objective-C, I would simply subclass UIWebView, and override the delegate methods I am interested in:

#if DEBUG
@interface UIWebView (PrivateMethods)
- (void)webView:(UIWebView*)webView
      didClearWindowObject:(id)windowObject
      forFrame:(id)frame;
@end

@interface MyWebView : UIWebView
@end

@implementation MyWebView

// Override methods on UIWebView that it itself employs
// as delegate of a "proper" WebKit WebView object
- (void)webView:(UIWebView*)webView
      didClearWindowObject:(id)windowObject
      forFrame:(id)frame
{
   [super webView:webView didClearWindowObject:windowObject forFrame:frame];

   id myDelegate = [[UIApplication sharedApplication] delegate];
   [windowObject setValue:myDelegate forKey:@"appDelegate"];
}

@end

#endif

Set your WebView to use the custom class “MyWebView”, and now your iOS-based WebView can pass whatever information it needs to directly to your delegate, just as I illustrated for the Mac version.

JavaScript Bug Traps

I am the kind of developer who leans heavily on automated systems for the improvement of my code. For this reason I tend to enable the maximum number of warnings, rely upon static analysis to discover nuanced bugs, develop and run unit tests to automatically confirm expected behaviors, and I encourage my apps to crash hard if they are going to crash at all.

I would characterize all of these behaviors as part of an overall attitude of vigilance against software defects. Learning about problems in code quickly and clearly is a key step in maintaining an overall high level of code quality.

Most of my software is developed in Objective-C, which enables me to leverage compile-time and run-time tools from Apple that catch my errors quickly. Between compile-time warnings and errors, and runtime exceptions and crashes, the vast majority of my programming errors in Objective-C are caught within minutes of writing them.

On the other hand, MarsEdit features a rich HTML text editor largely implemented in JavaScript, a language which tends not to afford any of these convenient alerts to my mistakes. I have often stared in bewilderment at the editor window’s perplexing behavior, only to discover, after many lost minutes or hours, that some subtle JavaScript error has been preventing the expected behavior of my code. I’ll switch to the WebKit inspector for the affected WebView, and discover the console is littered with red: errors being dutifully logged by the JavaScript runtime, but ignored by everybody.

Wouldn’t it be great if the JavaScript code crashed as hard as the Objective-C code does?

In fact, these days mine does. By a combination of JavaScript hooks that are alerted to the errors, Objective-C bridges that propagate them to the host app, and the judicious use of undocumented WebKit Inspector methods, development versions of my WebView-based apps now automatically present an inspector window the moment any error occurs in the view’s underlying JavaScript code.

For those of you who also develop JavaScript-heavy WebKit apps, I’ll share the steps for enabling this same type of behavior in your app. These steps all apply to Mac-based WebViews, but most of the same techniques could be used with iOS if you are willing to use some private methods to establish a bridge between the WebView and host app (for debug builds only, of course).

Step 1: Catch the error in JavaScript.

In whatever the main source content is for your WebView-based user interface, add some script code to register a callback for JavaScript errors:

<script type="text/javascript">
   function gotJavaScriptError(errorEvent)
   {
      console.log("Error: " + errorEvent.message);
   }

   window.addEventListener("error", gotJavaScriptError,  false);
</script>

Now if you encounter some error and happen to be attached with the web inspector, you’ll see additional confirmation that the error has in fact been noticed. If you’re following along in your own code while you read, I strongly encourage you to add a testing mechanism for generating errors. A simple “click” event callback (or “touchstart” on iOS) does the trick well:

<script type="text/javascript">
   function gotClick(theEvent)
   {
      blargh
   }

   window.addEventListener("click", gotClick, false);
</script>

Now when you click, you blargh. And when you blargh, the error is logged. Of course, just logging the error isn’t very useful because it’s quiet and goes ignored. Just like JavaScript errors usually do…

Step 2: Propagate the error to the host app.

You could do some interesting stuff in the WebView itself. For example you might convert the error into a string and display a prominent alert message. But since there’s so much more you can do once the host app is in charge, it’s worth going the extra mile for that functionality.

To tie the host app to the WebView we need to wait for the WebView’s frame to finish preparing its “window script object,” and then set a named attribute on that object that refers to an Objective-C instance. Make sure your WebView has a frameLoadDelegate configured, and within that delegate object implement this method:

- (void)webView:(WebView *)webView 
        didClearWindowObject:(WebScriptObject *)windowObject
        forFrame:(WebFrame *)frame
{
   // Expose ourselves as a JavaScript window attribute
   [[webView windowScriptObject] setValue:self forKey:@"appDelegate"];
}
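
If you haven’t already wired up the delegate in Interface Builder, it takes only one line of code from the host app (assuming the object implementing the method above has a reference to the WebView):

[[self webView] setFrameLoadDelegate:self];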

At this point any JavaScript code, including our error-handling callback, can reference the Objective-C delegate directly. But in order to make good use of the delegate we’ll have to add a couple other methods. The first lets the WebKit runtime know that we are not fussy about which of our methods are accessible to the WebView:

+ (BOOL) isSelectorExcludedFromWebScript:(SEL)aSelector
{
   return NO;
}

Now JavaScript code within the WebView can call any method it chooses on the delegate object. Implement something suitable for catching the news of the JavaScript error:

- (void) javaScriptErrorOccurred:(WebScriptObject*)errorEvent
{
   NSLog(@"WARNING JavaScript error: %@", errorEvent);
}

And back in your WebView HTML source file, amend the JavaScript error handler to call through to Objective-C instead of pointlessly logging to the JavaScript console:

<script type="text/javascript">
   function gotJavaScriptError(errorEvent)
   {
      console.log("Error: " + errorEvent.message);
      window.appDelegate.javaScriptErrorOccurred_(errorEvent);
   }

   ...
</script>

Notice that the colons in Objective-C method names are simply replaced with underscores to make them compatible with the JavaScript function naming rules. The errorEvent argument in this case is translated to a DOMEvent object instance in Objective-C, where it can be further interrogated. Alternatively you could pull the parts of the event you want out in JavaScript, and pass along only those bits.
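
If you’d rather not poke at the DOM event from Objective-C, the “pass along only those bits” approach might look something like this sketch, in which the JavaScript handler forwards just the message and line number (the method name and the JavaScript call shown in the comment are hypothetical):

// JavaScript side, inside gotJavaScriptError:
//    window.appDelegate.javaScriptErrorWithMessage_line_(errorEvent.message, errorEvent.lineno);

- (void) javaScriptErrorWithMessage:(NSString*)message line:(NSNumber*)lineNumber
{
   NSLog(@"WARNING JavaScript error: %@ (line %@)", message, lineNumber);
}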

Now we’ve done a considerable amount of work, only to upgrade our easy to ignore JavaScript errors from littering the web console to littering the system console. At least we’d notice these errors in the Xcode debugger console, and might eventually get around to taking a closer look. But we can still do much better.

Step 3: Get face to face with the error.

Of course from Objective-C there are any manner of ways you could handle the error. In a shipping app it might make sense to prompt the user with news of the issue and offer, much like a crash reporter would, to send information about the error to you. Or if the contents and behavior of your WebView are critical enough, maybe it’s worth forcing the app to quit just as it would for a native code crash. But for debugging builds I’ve found it very helpful simply to have the error force the Web Inspector open so I am no longer able to quietly ignore all the red console logging. In the delegate class, go back to the javaScriptErrorOccurred: method and replace the NSLog call with this string of fancy mumbo-jumbo:

- (void) javaScriptErrorOccurred:(id)errorEvent
{
#if DEBUG
   id inspector = [[self webView] performSelector:@selector(inspector)];
   [inspector performSelector:@selector(showConsole:) withObject:nil];
#endif
}

That’s it. Now when you run into a WebView-based JavaScript error, the web inspector appears and the list of pertinent errors is front-and-center.

I encourage you to leave the DEBUG barrier in place, because as the prolific use of “performSelector” may suggest, these are private WebKit methods that would probably not be viewed as acceptable by e.g. App Store reviewers. Anyway, you probably don’t want customers being pushed into the Web Inspector.

I hope this technique proves useful for those of you with extensive JavaScript source bases of your own. For everybody else, perhaps I’ve helped to drive home the idea that we should be as vigilant as possible against software defects. All developers write bugs, all the time. It’s only fair that we try to balance the scales and give ourselves a fighting chance by also being alerted to the bugs we write, all the time.

F-Script Anywhere With LLDB

Ever since the start of my career at Apple, working with the venerable Macsbug, I have prided myself on making the most of whatever debugging facilities are at my disposal. I came to be quite capable with Macsbug, adding custom commands and data templates that helped me speed through crucial debugging sessions that would have otherwise taken much longer.

When I moved from the classic Mac OS team to Mac OS X, I was forced, only slightly earlier than every other Mac developer, to adapt to gdb. I did so with modest aplomb, adding custom commands that made it easy to, for example, continue until the next branch instruction (as it happens, also the first real post on the Red Sweater Blog).

When Apple started shifting away from gdb to lldb a few years ago, I realized I would have to throw out all my old tricks and start building new ones. To my great shame, progress on this front has been slower than I wished. The shame is made greater by the fact that lldb is so delightfully extensible, it practically begs for a nerd like me to go to town adding every manner of finessing to suit my needs.

The nut of lldb’s extensibility is that much of its functionality is actually implemented in Python, and developers such as ourselves are invited to extend that functionality by providing Python modules of our own.

I finally decided to break the ice with lldb’s extensibility by adding a shortcut command for something I often want to do, but frequently put off because it’s too cumbersome: injecting F-Script into an arbitrary application running on my Mac. F-Script is a novel dynamic programming interface that lets you query the runtime of a Cocoa app using a custom language. It also features a handy tool for drilling down into an app’s view hierarchy and then navigating the various superviews and subviews, along with all their attributes. In some respects it’s very similar to a “web inspector,” only for native Objective-C applications on the Mac (and sadly, with far fewer features).

There are Automator workflows that aim to automate the process of injecting F-Script into a target app, by running the required commands, via gdb or lldb, to make the injection work seamlessly. For some reason, these workflows have never worked so seamlessly for me, so I’m always reduced to attaching to the process with lldb, and running the required commands manually to get the framework loaded.

Fortunately for me, “having to run lldb” is not such a big deal. Usually when I want to poke around at an app, I’m in lldb anyway, trying to break on a specific function or method, or examining the application’s windows and views via the command line. Once I’m attached to a process with lldb, getting F-Script to inject itself is as easy as running these two commands:

expr (void) [[NSBundle bundleWithPath:@"/Library/Frameworks/FScript.framework"] load]
expr (void) [FScriptMenuItem insertInMainMenu]

That’s all well and good, but to do that I always have to find the memo I took with the specific commands, then copy and paste them individually into lldb. Far too often, I wind up imagining the struggle of this work and putting it off, spending minutes if not hours doing things “the harder way” until I finally relent and load F-Script.

Today I decided that I need to stop manually copying and pasting these commands, and I need to finally learn the slightest bit about lldb’s Python-based script commands. The fruit of this effort is summarized in this GitHub gist. Follow the directions on that link, and you’ll be no further away than typing “fsa” in lldb from having all of F-Script’s utility instantly injected into the attached app.

Even if you’re not interested in F-Script Anywhere, the gist is a concise example of what it takes to install a simple, Python-backed macro command into lldb. Enjoy!

Brent’s Coding Nits

I agree with every one of Brent Simmons’s coding nits, even if I don’t practice what he preaches completely. Everybody has to make some mistakes to keep life interesting, right?

There’s also one point he makes:

All of your class names should have a prefix. All of them.

Which I think is particularly applicable to shared code such as the kind he’s reviewing on GitHub when compiling this list of nits. It’s not nearly so important, and in fact not really that important at all, for a project’s private classes to have prefixes. For example if you make a new app called “Motorcycle Derby,” you should be forgiven for calling your app’s delegate class “AppDelegate.m” if that is what suits you. By this point in history, Apple takes virtual namespacing seriously enough that you’re unlikely to clash with an unprefixed name from them, and so long as everybody distributing shared library code follows Brent’s rule above, you won’t clash with them either.

Still, you can’t go wrong by adding prefixes to be extra sure that you won’t conflict with anybody.

Transplanting Constraints

Over the past few months I have become quite taken by Auto Layout, Apple’s powerful layout specification framework for Mac and iOS.

For the past few years I’ve heard both that Auto Layout is brilliant and that it has a frustrating learning curve. I can now attest that both of these are true.

One of the problems people have complained most about with respect to Auto Layout is the extent to which Xcode’s Interface Builder falls short in providing assistance specifying constraints. As many people have noticed, Apple is addressing these complaints slowly but surely. Xcode 5’s UI for adding constraints and debugging layout issues is dramatically superior to the functionality in Xcode 4.

Still, there is much room for improvement.

One frustrating behavior arises when one decides to move a large number of views from one position in a view hierarchy to another. For example, the simple and common task of collecting a number of views and embedding them in a new superview. This task is so common that Apple provides a variety of helpful tools under Editor -> Embed In to streamline the task.

Here’s the big downer with respect to constraints: whenever you move a view from one superview to another, all of the constraints attached to the old superview, constraints that you may have laboriously fine-tuned over hours or days, simply disappear. Poof!

This isn’t such a big deal when your constraints happen to match what Interface Builder suggests for you. But even very simple interfaces may have a fairly large number of constraints. Consider this contrived example, in which three buttons are arranged to roughly share the width of a container view:

SimpleButtons 1

Nine constraints, and the removal or misconfiguration of any one will lead to incorrect layout in my app. Yet simply embedding the views in a custom view wipes them all out:

TestView2 xib 1

This problem is bad enough in the contrived scenario above, but in my much more complicated interfaces, a collection of views might comprise 50 or more customized constraints. Here’s a “simple” subsection of MarsEdit’s post editor side panel:

ServerOptions

Having to piece those all together again just because I want to rearrange some views, well it makes me mad. And when I get mad? I get … innovative!

A Pattern For Transplanting Constraints

Thanks to recent changes in Interface Builder’s file format for xib files, it’s more straightforward than ever to hand-tune the contents of a xib file outside of Xcode. It should go without saying that in doing so, you take your fate into your hands, etc., etc. But if you’re anything like me, a little hand-editing in BBEdit is worth the risk if it saves hours of much more intricate hand-editing back in Interface Builder. You’ll save valuable time and also reduce the very real risk of missing some nuanced detail as you try to reimplement everything by hand.

So without further ado, here are steps you can follow to transplant a set of views in a xib file such that the constraints from the old view follow over to the new view:

  1. Make a backup of your .xib file. You’re going to screw this up at least once, so you’ll want something “sane” to fall back on when you do.
  2. In Interface Builder, create the parent view if it doesn’t exist already. Give it a real obvious name like “New Parent View” so you’ll be able to spot it later:

    NewParentView xib

  3. Save changes in IB to make sure the .xib file is up-to-date.
  4. Open the .xib file in a text editor such as BBEdit, or right-click the file in Xcode and select Edit As -> Source Code to edit as text right in Xcode.
  5. Locate the new parent view by searching on the name you gave it. For example, in my sample project the view looks like this in the text file:
    <customView ... id="5M5-9Q-zMt" userLabel="New Parent View">
    ...
    </customView>
  6. Locate the old parent view. If you have trouble, you may want to give it a custom name as well before saving again in IB. In my trivial example, the old parent is the first and only top-level view in the xib file, so it looks like this:
    <customView id="1">
    ...
    </customView>
    
  7. Take note of the id for the old parent view and the new parent view. We’re going to need these in a minute to tie up some loose ends.
  8. Locate the constraints from the old parent view, cut them, and paste them into the new parent view’s XML content. Again in my case it’s trivial because I want all the constraints from the old parent view. I cut them out of the old and into the new so things look something like this:
    <customView ... id="5M5-9Q-zMt" userLabel="New Parent View">
            ...
            <constraints>
                    <constraint firstItem="rfg-hN-1Il" firstAttribute="leading" secondItem="1" secondAttribute="leading" constant="20" symbolic="YES" id="LOu-nX-awU"/>
                    <constraint firstItem="8Ju-hM-RbA" firstAttribute="baseline" secondItem="Sgd-MR-FMw" secondAttribute="baseline" id="Mwc-6y-uaP"/>
                    ...
            </constraints>
    </customView>
    
  9. Locate the subviews themselves from the old parent view, and cut and paste them in the same way, making sure they reside in a <subviews> node in the new parent view. You should now have a new parent view whose xml topology looks something like this:
    <customView ... id="5M5-9Q-zMt" userLabel="New Parent View">
    	<rect ... />
    	<autoresizingMask ... />
    	<subviews>
    		... your transplanted subviews here ...
    	</subviews>
    	<constraints>
    		... your transplanted constraints here ...
    	</constraints>
    </customView>
    

    We’re close! But not quite finished. If you save and try to use the .xib now, you’ll find that Interface Builder rejects it as corrupted. What’s wrong? The constraints we transplanted mostly reference only the other views that we transplanted, but some of them also reference the old parent view. To fix the integrity of these constraints, we need to update them to reference the new parent view instead.

  10. Refer back to the Interface Builder “id” values you noted in step 7. We need to locate any reference to the old parent view and adjust it so it references the new parent view. In our example, the old parent view id is “1” and the new parent view id is “5M5-9Q-zMt”. Specifically, we’re looking for attributes on our transplanted constraints where the “firstItem” or “secondItem” references the old parent ID:
    <constraint firstItem="rfg-hN-1Il" firstAttribute="leading" secondItem="1" secondAttribute="leading" constant="20" symbolic="YES" id="LOu-nX-awU"/>
    

    Change the value secondItem="1" to secondItem="5M5-9Q-zMt", and repeat for any other instances where the old parent view is referenced.

  11. Save the text-formatted .xib file, cross your fingers, and hope you didn’t make any mistakes.
  12. Reopen the .xib file in Interface Builder, or if you’re already in Xcode’s text editor, right-click the file and select Open As -> Interface Builder.

If your combination of luck and skill paid off as planned, then you’ll see something beautiful like this:

TestView xib

All of my views, now situated within the new parent view, and the desired constraints intact.

I hope this helps serve as a specific reference for folks in the same boat as I am in, wanting to shuffle views around without losing the hard work I’ve put into my constraints. And I hope it also serves to inspire you to think beyond the limitations of our tools. As great as Xcode, Interface Builder, and a host of other essential technologies are, they often fall short of desired behavior. When they do, it’s often in our power to work around the issues and carry on developing software as effectively as we know how.

Static Analysis

I am thus far primarily a Mac developer, though I have dipped my toes in the iOS development arena many times in the — sheesh! — 5 years since iOS 2.0 shipped with its developer-facing SDK.

My first, and only shipping app for iOS is Shush, a static noise generator that was inspired by my son Henry’s birth. He was born in August, 2008, months after iOS had been opened to the public. As you might imagine, I didn’t have a lot of spare time to play around with iOS programming, but I did have a screaming baby. For those of you who don’t know, static noise is famously soothing to small babies. Shush 1.0 was my bare-bones solution for dispensing infinite, soothing static noise from the magical device I could hold in my hands:

Shush1 0

Pretty hot, huh? It did the trick. I would hold crying, months-old Henry against my chest and, with the iPhone quietly shushing in my hand, he would drift off to sleep.

I mostly forgot about Shush after Henry got old enough to no longer benefit from it. Fast-forward 3 years, and my second son, Matthew, was about to be born. I realized I was going to need to dust off the old soothing machine, and it seemed like a great excuse to finally brush up the UI a little.

There isn’t much reason to look at the screen while using Shush: its primary purpose is generating audio noise. But despite Shush 1.0’s extremely minimalist design, I had always imagined the app was a prime candidate for a skeuomorphic design. In the old days of analog television, a common technique for generating this sound was simply to tune to an unused station. While the audio of the room filled with static white noise, the screen similarly rumbled with visual black-and-white “snow.” I thought it would be pretty cool to simulate this on the iPhone, as a throwback to those nostalgic days and so a Shush user would have something vaguely interesting to gaze at if they chose to.

It turns out, simulating the static television snow of my analog youth is extremely challenging to do, even on a fancy iPhone. Generating audio white noise is relatively easy: you can get close to the desired output by simply taking random numbers and feeding them to the audio system as samples. It seems reasonable to assume you could do the same for video. For each frame, you could simply generate a random grey between 0.0 and 1.0 for each pixel, rendering the result to an image:
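
Here is a minimal sketch of that naive approach, assuming a plain grayscale bitmap (this is illustrative, not the original Shush code):

#import <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

// Naive CPU approach: one random gray byte per pixel, wrapped up as a CGImage
CGImageRef CreateRandomSnowImage(size_t width, size_t height)
{
   size_t byteCount = width * height;
   uint8_t* pixels = malloc(byteCount);
   for (size_t i = 0; i < byteCount; i++)
   {
      pixels[i] = (uint8_t)arc4random_uniform(256);
   }

   CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
   CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width, graySpace, (CGBitmapInfo)kCGImageAlphaNone);
   CGImageRef snowImage = CGBitmapContextCreateImage(context);

   CGContextRelease(context);
   CGColorSpaceRelease(graySpace);
   free(pixels);

   return snowImage; // Caller is responsible for calling CGImageRelease
}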

This yields a pretty TV-snow-like image:

Static snow image

The problem is that it’s incredibly CPU intensive to calculate that many random numbers and construct an image. Even testing this naive approach again today on my relatively speedy iPhone 5, it produces an animation where the frames only alternate every 3 seconds or so. Clearly, this would not do.

I experimented with a variety of approaches to speed up the rendering. What if I didn’t generate a wholly random number, but just alternated between 0 and 1? Also, do I really need to generate a random value for every pixel? What if I clump the pixels together to cover more ground? I tried a variety of techniques to speed up the view drawing:
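
One of those attempts looked roughly like the following sketch (again illustrative, not the shipping code): skip true randomness in favor of flipping between black and white, and paint chunky blocks instead of individual pixels:

// In a UIView subclass: cheap "snow" by painting 4x4 point blocks of black or white
- (void)drawRect:(CGRect)rect
{
   const CGFloat kBlockSize = 4.0;
   CGContextRef context = UIGraphicsGetCurrentContext();
   for (CGFloat y = 0; y < CGRectGetHeight([self bounds]); y += kBlockSize)
   {
      for (CGFloat x = 0; x < CGRectGetWidth([self bounds]); x += kBlockSize)
      {
         CGFloat gray = (arc4random() & 1) ? 1.0 : 0.0; // just black or white
         CGContextSetGrayFillColor(context, gray, 1.0);
         CGContextFillRect(context, CGRectMake(x, y, kBlockSize, kBlockSize));
      }
   }
}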

The result was faster, but still not fast enough. What’s worse? It looked more like an homage to a Commodore 64 than to a vintage analog television set:

Another snow image

I was about ready to throw in the towel. Maybe this was simply not possible on an iPhone. I did some research on the web and it was not promising: not only is this a hard problem on an iPhone, it’s a hard problem everywhere. I learned that I’m not the first person who tried to generate an approximation of visual static on a digital computer, and that most people eventually resort to using a canned video animation of static. Modern television sets typically do this to give you the old-timey sense of “nothing is plugged in,” but if you look closely you can see repeating patterns in the static. In other words, it’s not really random. It’s not really static. If I couldn’t make this strange, skeuomorphic homage look more or less like real TV static, I was not interested in the challenge.

Up to this point all of my efforts had been at the level of the CPU: how can I fill this image buffer with random pixels faster? Having no experience with OpenGL or directly programming a GPU, I hadn’t even considered the possibility of approaching that. But a conversation with my friend Mike Ash put the idea in my head, and I ran with it. Since iOS devices are famously optimized for leveraging the GPU, I figured it might be a simple matter of asking the GPU to generate the random pixels for each frame on the fly, obviating the need for the generation of any CPU-bound image data.

I gave myself a crash course education in OpenGL, learning the bare minimum I needed to know before tackling the problem. To give you an idea of where I was starting, I had heard the term “shader” before, but honestly had no idea what it did. I eventually learned that probably what I needed was a bit of OpenGL magic called a “pixel shader.” Essentially it’s a chunk of code that runs on the video card and gets to choose what color each pixel in a given scene should be. For my scenario, I would be setting up OpenGL with a sheer surface pointed directly at the camera, so as to appear 2D. The shader’s job would be to fill that 2D surface up with random gray pixels.

Using Apple’s GLKViewController, I was able to skip over much of the hardcore OpenGL setup and get right to work on the shaders. I used some boilerplate code to get my GLKViewController wired to my pixel shader, and was able, for example, to demonstrate my ability to fill the surface with a specific color:

It works! And while I was working out how to make OpenGL do my bidding, I put some work into the TV frame appearance for the app:

Screenshot of Shush 2.0’s television frame appearance

At this point, it feels like I’m almost home. I just need to swap out the constant RGB values for ones that insert random values of gray. What’s that you say? There’s no random generator in OpenGL? Well, I’ll be damned.

Once again things appeared hopeless. I played around with the addition of a “vertex shader,” which is a shader that has access to additional information about the scene. Using the fact that a vertex shader and pixel shader can communicate with each other, I was able to incorporate the specific coordinate for the pixel being shaded. Scouring the web, I found example code for OpenGL that would take a varying number like this and “fuzz” it sufficiently that it appeared to be somewhat random. Thus, my next effort involved taking the x and y coordinates for the current pixel and transforming them into a seemingly random shade of gray:

Shush

Oh my god! It’s beautiful. We’re done, this is exactly what I’ve been striving to do for weeks now. Except… it doesn’t animate. It’s just a rendered scene of random grey pixels that are always exactly the same as before. Why? Because the inputs to the pseudo-random fuzz function are always the same: the coordinates of each pixel in the scene.

My final stroke of insight was to inject “just enough randomness” into the scene by hooking up a value that the pixel shader obtains from the client app. If I can supply random numbers to the shader, you may ask, what’s the big deal? Why not just supply all the random numbers? Because the facility for injecting values into the shader only gives the client app access once per complete rendering. Once it starts rendering, the determination of values for each of the pixels in the scene is completely up to the shader itself. But by combining the pseudo-random generation based on pixel coordinate, and further fuzzing that value with a random value injected once per rendering, the results are as in the image above, but beautifully, quickly rendered (video capture doesn’t do it full justice).
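
On the host app side, the injection amounts to handing the shader one fresh random value per frame through a uniform. Something like this sketch, in which the program handle and uniform name are assumptions:

// Once per rendered frame, feed the fragment shader a new random seed
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
   glClear(GL_COLOR_BUFFER_BIT);
   glUseProgram(_program);

   GLint seedLocation = glGetUniformLocation(_program, "u_randomSeed");
   glUniform1f(seedLocation, (GLfloat)arc4random_uniform(10000) / 10000.0f);

   // ... bind the full-screen quad's vertex attributes and draw it ...
   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}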

Here is a final code snippet of both the vertex and fragment shaders. You can pop them into a project in Apple’s OpenGL Shader Builder to get a better feel for how they work.

On January 17, 2012, I released Shush 2.0. The next day, Matthew was born. It worked great for the few months that I needed it, and just as I did before, I have since mostly stopped using the app myself. However, it was a great exercise in pushing the limits of what the iPhone seemed capable of doing. Hopefully this experience will inspire you to look deeper for solutions to the problems that vex you while working with these fascinating, limited devices.