Announcing Runtime

I'm very happy to announce Runtime, a simple run tracking app designed for iOS 7 and optimized for the iPhone 5s.

It was a busy summer for me (as I imagine it was for many iOS developers). In between WWDC and the launch of iOS 7, I also took some time off to hike the John Muir Trail. I haven't had time to write about the trip yet, but don't worry, I will soon. In the meantime, there are plenty of pictures up on 500px.

But really this summer was all about iOS 7. A lot of us spent the summer speculating on how iOS 7 would be received and how developers would adapt to it. Well, we don't have to speculate anymore. Many third-party apps have been updated, and today Apple released new versions of many of its own apps as well.

As I spent time talking to people about iOS 7, writing about it both here and for Mutual Mobile, and using the beta versions, I couldn't help but want to start building something with it. I saw iOS 7 as an opportunity to build better products for our users, and I wanted to put that feeling to the test by actually building something. Runtime is the result of that experiment.

I intend to write a lot over the next few weeks about the process of building Runtime, covering both the product and engineering sides of development. Part of the motivation for building Runtime was to learn as many of the new APIs in iOS 7 as quickly as possible. I've always learned better by doing, so using all of the new tools to build a product was the best way for me to learn. I'm looking forward to writing about my experiences with those APIs, especially the updates to MapKit, CoreMotion, and CoreLocation.

What I will say for now is that iOS 7 has been an absolute joy to develop for. The decision to go iOS 7 only for a new product was a very easy one. iOS 7 now has 64% share of the iOS install base. That's a huge market for the delightful experiences we can build with it.

Runtime is in its final round of testing and should be ready soon. I can't wait to show it to you. 

 

A Thru-hiker's Pantry

Another key part of planning a thru-hike is preparing your meals and food pickups. You don't have to carry all of your food for the entire trip; usually you can mail some ahead to a camp or post office, or buy extra along the way. Picking what to eat is important because your body needs fuel while you're hiking. There are different formulas for estimating how many calories you burn per hour of hiking, but it's a lot. If you eat 2,000 calories on a normal day, plan on nearly double that during a strenuous backpacking trip. The last thing you want is to be caught starving because you didn't bring enough food to keep your body going.

Beyond bringing plenty of food, everything needs to be light and compact enough to fit in your pack without adding too much extra weight. In many cases when you are traveling in bear country, you are required to pack your food in a bear canister like the one at the top left of this picture.

IMG 0882

The bear canister is both a blessing and a curse. It's heavy and difficult to pack, but it also forces you to think critically about all of your food items.

The food in the image above is everything I'm bringing for the first 8 days that I am on the trail.

1) Breakfast

The first meal of the day is always important. You generally want to eat something large to get your metabolism going. When I'm hiking, though, I tend to bring something to eat right when I wake up, and then a few things to munch on during the first few hours of the day. I find that having a constant stream of energy keeps me going longer and feeling better.

For breakfast I'm bringing a combination of Pop-Tarts, Clif Bars, and various energy foods like Stinger gummies or energy gels. I'll eat the Pop-Tarts or Clif Bars first, and keep the smaller items for later.

2) Lunch

Lunch is usually pretty simple. I'm bringing plenty of crackers, and a mix of summer sausage and chicken. Getting protein out on the trail is hard, so anything like chicken or tuna that can be packed along is a great value.

3) Snacks

I want to always have something to snack on when I need more energy. Nuts are a great snack food because they're so calorie dense. So are raisins, beef jerky, and of course extra Clif bars.

4) Dinner

I'm taking a mix of cold dinners that require no cooking, and hot dinners that require you to cook or boil water. The cold dinners are pretty simple: tortillas, peanut butter, and Nutella. The hot dinners are a bit more exciting. Tortillas are also an essential ingredient in making quesadillas, so that will be one meal. I'm also a big fan of Mac and Cheese and Chicken (MCC), which gets you plenty of protein and tastes delicious on a cold night after a long hike.

5) Coffee

Unfortunately taking coffee on a low impact backpacking trip is a bit of a nonstarter. You can't leave the used grounds out on the ground to attract animals, so your only option is to pack the grounds with you. Instead of worrying about that, which would be extremely messy and frustrating, I'm just taking some espresso beans that have been covered in chocolate. This'll get me my coffee fix and my candy fix at the same time. This is a great trick if you're looking to bring coffee on the trail but unsure of what to do about brewing and grounds.

All of this then has to fit into that bear canister and into my backpack. There's an identical set of food waiting at my resupply point too. Of course, this won't stop me from buying a cheeseburger in town along the way, if I can find one, and I'll absolutely be craving a taco when I get back, but this should be enough to keep me going along the way. In the backcountry, food is fuel to keep your body going, but you can still have a little fun with it if you want to.

16 Days, 210 Miles, 1 Bag

I'm about to set out on an exciting adventure to hike the John Muir Trail, a thru-hike from Yosemite Valley to Mount Whitney in California's High Sierra. I've been on plenty of backpacking trips, including several very long ones, but this will be my first thru-hike. Thru-hiking means hiking an entire trail from start to finish. While the JMT isn't quite as long as the Appalachian Trail, the Pacific Crest Trail, or the Continental Divide Trail, it presents the same challenge: you need to carry everything you need for the whole journey with you, and plan your food resupply stops along the way.

Selecting gear for a long backpacking trip, especially a thru-hike, is very important. There needs to be a balance between weight, size, utility, and at least a modicum of comfort. Here's a breakdown of the gear I'm planning to bring with me, as well as some explanation on the purpose and thoughts behind some of the items.

IMG 1049

1) Accommodations. 

When you're backpacking you have to bring your home with you for the whole journey. For this 16-day trip my home will include a Big Agnes Copper Spur UL2 tent, a Western Mountaineering UltraLite sleeping bag, and a Thermarest NeoAir sleeping pad. I picked these items because they're light and compact for the price, provide great protection from the elements, and are warm and cozy.

2) Clothes.

You don't have many chances to do laundry on the trail, but you also can't carry many changes of clothes. Because of this, I'm bringing one change of clothes only - plus a third pair of socks. Socks are one of the most important items when you're hiking long distances, and I'm very particular about the ones I bring. I've got three pairs of Teko socks, a brand that's served me well for many hundreds of miles.

One other important point about clothing is the notion of layering. When temperatures change, the easiest way to control your body temperature is by adding or removing layers. My clothing system consists of four layers:

- Baselayer, Capilene t-shirt

- Midlayer, Capilene zip-neck t-shirt with long sleeves

- Shell, GoreTex rain jacket which is wind and water proof

I'm bringing two midlayers on this trip so that I can double up on long sleeves, which also lets me leave the bulky, heavy fleece behind. The zip-t also lets you control your body temperature even more closely, since you can zip the collar up or down and roll the sleeves up or down to adjust warmth. This should be fine as long as the temperature doesn't drop below freezing during the day. At night, I can just jump in my sleeping bag to stay warm.

I'm also bringing two bandanas, which serve many purposes: first aid, towels, cooling rags, wash cloths, and more. They are extremely useful.

3) Camera

One of the main reasons I go backpacking is to take pictures. I'm bringing a Canon 6D DSLR, whose full frame sensor will be ideal for landscapes. My workhorse lens will be the 24-105 F4L, but I will also have a 300 F4L for wildlife. Keeping the camera powered will be difficult, so I'm packing 7 batteries in total as well as the charger, which I may be able to use periodically at staffed camps. I'm also bringing plenty of memory cards, in small to medium sizes, so that my risk of losing photos is low if a card fails. Camera gear represents the bulk of my pack weight, but capturing memorable images is worth it, so I'm bringing it all with me.

4) Electronics

Normally when I go out in the backcountry, I put aside technology and avoid the outside world. On this trip I intend to do things a bit differently. I'm bringing my iPhone, which I intend to use for a few purposes, and a Kindle Paperwhite with plenty of books stocked up on it. I'm also bringing a GoalZero Nomad 3.5 solar charger to keep both devices powered. There won't be any cell coverage where I'm going, so I won't have to worry about disconnecting from the outside world. But I am interested in seeing how I'll use those devices in the wilderness. I've got a few apps that I'm excited to try, like Peaks, Night Sky, and Project Noah (an app for tracking sightings of plants and animals). There are also practical reasons to bring the phone and Kindle. A Kindle weighs less than one paperback, so it's a major savings in weight. An iPhone is also a virtual Swiss Army knife: it can double as a backup flashlight, a GPS if you get lost, a first aid manual, an emergency contact device, a backup camera, a voice recorder, a backup map, and more. For the extra few ounces, it's absolutely worth bringing on the trail.

5) Miscellaneous

The rest of what I'm bringing is the simple odds and ends to keep me going on the trail. I have a basic first aid kit with pain medication and an ACE bandage. A cut-in-half toothbrush and a micro toothpaste (you really don't need that handle on the toothbrush). Micro Pur, a set of small tablets, for purifying water. Chapstick and sunscreen. Some cash for buying food along the trail, or hitchhiking in an emergency. A headlamp, watch, knife, sunglasses, and water bottles round out the smaller items. And of course, a map and compass too :)

One of the best items to have in your kit is concentrated camp soap. Just one drop of "camp suds" can wash a whole pot. A whole bottle can wash a 747. Just a little goes a long way, which is important if you want to do laundry or take a bath out in the wild.

 

IMG 1054

 

Carrying all of this will be my Osprey Variant 52 backpack. The Variant is more of a climbing approach pack and winter ski/snowboard pack than a long-distance hiking backpack, but there are two main things I like about it. The first is that it has a flat back panel, which is more my style; the curved mesh-panel packs don't fit me well. The second is that it's fairly minimalist and simple. It's a standard internal frame sheet pack, which is very lightweight, with a single large compartment for arranging all of your gear. It carries well, looks great, and is very durable.

If you're ever out in the wilderness for a few weeks, this should be all you need to survive and have a wonderful experience in the outdoors. I can't wait for the trip, and I'll be sure to post plenty of pictures once it's over.

Look and Feel

Today Apple unveiled what's going to become a whole new world for the iOS ecosystem. Alongside an impressive update to OS X and new hardware announcements like a refreshed MacBook Air and the long-awaited Mac Pro came a complete rethinking of iOS from what seems like the ground up. No stone was left unturned in this new beginning for iOS. I wanted to talk about what that may mean for the iOS ecosystem's users and developers.

Apple has always been a champion for great design. That's apparent in the look of their products. The iconic Macintosh for example, or the elegant and attractive iMacs and MacBooks. The iPhone 5 may be the most attractive device Apple has made yet, with details like the chamfered edging and matte black finish. But in many ways, more important than how it looks is how it feels to hold, and to use. An iPhone just feels right to many of us, the same way that a Mac felt right to us before smartphones.

My favorite Steve Jobs quote, which received a mention today at the keynote, is that 'design is how it works'. What Apple has done with iOS 7 is a true testament to that. The design of iOS is no longer about the appearance of rich controls and detailed artwork and buttons. The focus has shifted to how the software makes the experience feel. Instead of rich textures, the layers in an application show through to provide depth and context and keep the focus where it should be: on content. The chrome becomes more muted and gets out of the way when you don't need it and comes back when you do.

But more than just the depth and translucency, we can see a big emphasis on motion and interaction. Shifting a background image based on how you hold your phone. Changing the angle of tabs in Safari. Even something as small and simple as shrinking, but not hiding, the URL bar in Safari as you scroll up. All of these are delightful touches that help bring an app to life through interaction. Parallax, blur, and many of these techniques have been possible on iOS for some time now, but this shift in iOS 7 shows which direction Apple sees the design winds blowing, and I'm excited to see them embracing it and taking the lead on the future direction of the platform.

At Google I/O a few weeks ago, I was disappointed with the lack of focus on developers building immersive experiences centered around great look and feel. Design is clearly important to Google. The polish and design in many of their own recent products is a testament to that. And while there were some good sessions around animation technology and UI design, I don't think you could say that it was a major emphasis from them to developers.

And that's why I am so excited about iOS 7. To me, it's not just about the lack of skeuomorphic elements or linen backgrounds. It's about a focus from all angles on delightful user experience and engaging motion design in applications. As with the introduction of iOS, Apple has set the bar for developers with iOS 7, and is giving us great tools to build these same kinds of experiences on the platform. I'm excited that they are pushing us forward and can't wait to see what everyone builds on the new version.

MMRecord 1.0.2

About six weeks ago I released MMRecord: a simple way to build web service requests that interface with Core Data in an iOS or Mac application. Since then the feedback has been amazing. I never expected to see so many positive comments about the library or so much involvement from community members. Thank you!

At release time I wrote an article on the Mutual Mobile Engineering Blog that introduced the library, including how we built it and what it's good for. I wanted to touch base here on the changes that have been made to it since then and what the future may hold.

Many of the issue reports and subsequent fixes came from members of the community. The 1.0.2 update includes a few bug fixes and one new feature: Unix timestamp date parsing support. You can check out the full release notes and changelog here.

As for what's next, there are a few possibilities. There have been requests for two features that seem valuable to me: progress blocks and deleting orphaned records. I'm also interested in adding better logging support for easier debugging, and support for record insert and update validation checks.

Is there a feature you'd love to see? Get in touch on Github or on Twitter! And yes, we do accept pull requests :)

Google Glass

I recently spent some time with Google Glass while attending the Google I/O developers conference in San Francisco. I wrote about my experiences with Glass on The Push, but I thought I would expand on some of my thoughts here.

As a developer I'm very interested in Glass. New computing platforms are always exciting, and Glass is even more exciting than most. With Glass you get a display that you can always see, a camera that's always pointed where you want it to be, and a full complement of sensors. It's a lot to be excited about, even for an iOS developer. While the native way to develop for Glass will be an Android SDK called the GDK, there is also a RESTful API for Glass that any of us can use.

The big question though is what does Glass mean to users? To me it means that I don't have to pull my phone out of my pocket to see relevant information. I can glance up, check the notification that came in, and return to what I was doing. The camera that's always pointed in the right direction is compelling too. I've missed plenty of shots because I didn't have a camera ready. With Glass I can just reach up and push a button to take a picture.

But that's only the beginning of what people can use Glass for. Getting directions while walking around a new city would be very helpful. Instruction manuals for fixing something on your car, or building a model airplane. Possibly augmented reality at some point – though the off-in-the-corner display isn't well suited for that kind of application. Like any new platform, it will take time for the ecosystem of apps to mature as everyone learns what the best experiences on the device are. We saw the same thing when the iPad first came out. My only hope is that developers embrace Google's design guideline for the platform: that Glass apps should get out of the way of the user.

Finally, a few words about privacy. Most of the concerns I've seen discussed center on the camera on Glass, which, as I mentioned, is always pointed in the same direction as the person wearing it. The same feature that makes it convenient for capturing a memory could be unsettling to other members of the public. Only time will tell whether this concern will affect the product's future, but as for me, it's not something I'm worried about. Here's why.

Over the last 8 years, including my time as a photographer for the UT student yearbook and newspaper, I've photographed hundreds of events: parties, classes, meetings, disasters, parades, protests, conferences, and sporting events. Rarely, if ever, has someone told me that I couldn't take their picture. Journalistic photography is different from casual photography, but both have unspoken codes of conduct and etiquette around what is appropriate to photograph and what is not. Glass photographers would need to adhere to the same codes, and I see no reason why they would not. A lot of the concern around Glass seems to center on private events. Given the ubiquity of cell phone cameras, I don't think the existence of Glass will make things any worse for people who don't want their picture taken.

Then, there's Facebook. Turn back the clock 10 years and imagine telling parents that their kids would end up posting thousands of pictures of parties on the internet with their names attached. Even now, that's a scary thought. And yet Facebook is the most popular website on the internet and is used by 94% of teenagers who use social media. Just because a piece of technology is unsettling to some doesn't mean it won't become widely used. In the case of Facebook, privacy rules, responsible social media practices, and proper etiquette fell into place to help people feel more comfortable on the service. If the value proposition of Glass is high enough, then I think the same thing will happen there.

There's a recent story that highlights a benefit of ubiquitous photography. In the aftermath of the Boston Marathon tragedy the bombing suspects were identified within hours due to unprecedented levels of photography in the area around the race finish line. So while we all hope that something like that never happens again, we can still be thankful for all of the cameras that helped catch criminals before they could commit another act of terrorism.

For those looking to get a head start on Glass development, or to learn more about what it will be like, there are a few places to go for information. Mutual Mobile created a Glass simulator that you can try out here. That will give you some idea of what you'll see on the display. There's also an API emulator available here that will show you what developing for the RESTful Mirror API will be like. Finally, there's the Google developers site, which is full of great resources.

Review: Nifty MiniDrive

IMG 9819

Having plenty of storage options available has always been important to me as an avid photographer and computer user. On the desktop that's easy to accomplish using both internal and external drives. But now that I've been using a laptop for work, it's been harder to keep my storage options open. External drives are a hassle, and additional internal drives are not an option on the next generation of Mac laptops.

I was really excited when I saw the Nifty MiniDrive on Kickstarter. The Nifty MiniDrive extends the onboard storage on a Mac laptop by utilizing the SD card slot to house a Micro SD card. This is a genius solution for elegantly adding a fair bit of flash storage to your laptop. I had always viewed the SD slot on laptops with skepticism because I use Compact Flash cards for my photography work. But this solution seemed like a great way to utilize that card slot while also giving me more options for storage.

The instructions for installing the drive on the Nifty website are very helpful, and installation was straightforward. I did have to re-seat the Micro SD card once on my first installation to get it to sit correctly, but since then there have been no issues. The drive lines up very nicely on my laptop, as seen in the image above.

The Micro SD card I purchased for my MiniDrive is a Samsung 64GB card. The other option I considered was a SanDisk 64GB card. I typically use SanDisk cards for my photography work, but I decided to try the Samsung because it was on sale and showed similar performance characteristics to the SanDisk. Straight read/write performance for the Samsung has been about what I would expect: essentially similar to a USB 2.0 external drive for reads, and perhaps a bit slower for writes. Not all flash storage is created equal, and you're not going to set any speed records with SD cards like you might with USB 3.0 and Thunderbolt externals or newer SSDs.


Screen Shot 2013 02 26 at 9 44 39 PM


The Nifty Team lists a few possible uses for the drive on their Kickstarter page. I decided to use my MiniDrive as a Time Machine backup for general documents, settings, and works in progress. The practical consideration behind this is that the largest currently available MicroSD card is 64GB - too small for backing up my entire startup drive. But the reality is that this is still large enough for what I need to back up. Source code is backed up by an SCM, and applications are easily replaceable. Photos I keep backed up using other methods, so any photos stored on my laptop are disposable.

After my initial tests, I set up the drive as my Time Machine backup volume. I configured Time Machine to exclude all of my apps, repositories, system data, caches, and any other large files that don't need to be backed up. That placed my total backup size in the 10-12GB range.

I use Time Machine as part of my Mac Pro's backup system too. I recently switched backup volumes on my Mac Pro and performed a 400GB initial backup of my startup drive, which took about 3 hours. I knew that an SD card is nowhere near as fast as an onboard SATA drive, but I was still expecting that the initial backup wouldn't take more than a few hours.

Screen Shot 2013 02 27 at 9 15 41 AM


In the end, the initial backup to the SD card took almost a full day to finish. I assume this is because of how many small files were included in the backup: the random small-write speed of the SD card is much lower than the large sequential write speed I was testing above. The Activity Monitor screenshot above was taken while writing a single large file. Here's another screenshot taken while the backup was in progress. As you can see, the drive doesn't maintain a constant speed, so the backup takes longer to finish.

Screen Shot 2013 02 26 at 9 46 56 PM

I finished the initial backup about two weeks ago, and I've been using the drive as my Time Machine volume ever since. The subsequent backups have finished much faster, and I haven't noticed any performance issues while they run. Performance within Time Machine itself is good as well with the MiniDrive: scanning through file and folder versions for the past few weeks was fast and easy. I haven't yet needed to recover a file from the ether of time, but backup isn't only about that. It's about the peace of mind you get from knowing that your data is safe. The Nifty MiniDrive gives me that, in a stylish and elegant package to boot. In the end, that's what matters, and so I am very happy with my Nifty MiniDrive.

Renaissance

IMG 3677

 

This week I had the pleasure of attending the inaugural Renaissance conference in San Francisco.  I decided to go because, like the rest of the people there, I make apps.

 

All of us know that there is a lot that goes into making an app.  First, there's the idea.  What is the job that your app will be hired to do?  Then there's design.  What will your app look like?  How will it work?  And then finally, there's code, the part that makes your app do what you want it to do.  The great thing about Renaissance is how it addressed everything that goes into making an app.  There are lots of excellent conferences like WWDC that intensely focus on teaching developers how to leverage the powerful frameworks that allow us to build amazing apps.  But this was the first conference I've been to that kept a higher-level view, explaining more of the what and the why than the how.

 

The creative process is a big part of making an app.  Renaissance did a great job of setting the stage for a discussion around creativity with a talk by Brenda Chapman, the creator of Brave at Pixar.  Brenda's talk was great and was followed by an equally impressive talk by Phil Letourneau on Animating Your App To Life.  Animation is something that really enhances the experience in an app and Phil did a great job of explaining some lessons we can learn from Disney when it comes to designing the animations in our apps.  Mark Pospesel then explained some strategies that developers can use to make their animations seem more real.  The entire talk contained only a single line of code, but was extremely valuable for anyone trying to create engaging animations in their apps.

 

IMG 2694

 

Those talks really set the standard for the entire conference.  They were followed by great talks about the importance of type and copy, which are the true content of your application.  Bluetooth Low Energy was a high-profile technology at Renaissance, featuring several talks and a half-day lab where attendees could try out some of the technology being worked on by the presenters and ask questions about building apps that take advantage of CoreBluetooth and integrate with BLE devices.  Not to be left out was audio, one of the least-considered elements of a great app.  It was great to see several strong presentations on that topic as well.

 

The business side of making apps was well covered.  There was a nice variety of successful game developers (who shared some of their secrets for success), business leaders, and solo indie developers at the conference giving presentations and talking to attendees.  Enterprise app development is a huge market and was well represented.  Members of the teams from Push.io and Parse gave great talks on supporting enterprise development through powerful backend services, and Brent Simmons shared some of his experience with NewsGator and Glassboard, his new enterprise collaboration iPhone app.

 

One of the recent trends in iOS app development has been a focus on quality.  I was happy to see a lot of developers talking about this at the conference, and to see several great talks on the topic.  One of the best points from the first talk was that we should be testing to build better apps, not just to build bug-free apps.  One way to do this is to automate your testing, and Jim Puls from Square gave a great talk about KIF, a framework designed to do just that.  I had the chance to talk with Jim at length about KIF, and I feel really good about using it as the basis for fully automated regression testing.  I'm looking forward to writing more about that in the coming weeks.

 

One of my favorite quotes from Steve Jobs is "Design is how it works".  While developers still outnumbered designers at Renaissance, design was a major focus of the conference, and almost the entire last day was devoted to it.  One of the most insightful talks was by Justin Maxwell on the concept of path dependence.  His point, if I can attempt to convey it, was that it isn't skeuomorphism that people are against.  What frustrates users is a UI that diverges from their expectations along the spectrum of experience and functionality.  In the 90's, a UI for a sound mixer that matched a real-life sound mixer made a lot of sense, because that's what users of the application would understand and expect, since they were coming from a real-life sound mixer.  That expectation isn't the same for something like a podcasting app, where there is no real-life path dependence to shape a user's expectations.  His talk was fascinating and if the videos come out I definitely recommend checking it out.  Finally, the atmosphere at Renaissance was really good.

 

I enjoyed the single-track conference format, and I got to meet a lot of wonderful people before, during, and after the conference.  Everything about it was very worthwhile.  The venue was well chosen, partly because it offered the opportunity for exercise every morning, and partly for being open and close to downtown San Francisco.  I would definitely recommend it to anyone considering going next year.

 

Thank you very much to Tim Burks and Bill Dudney for organizing the conference, and to everyone else who came for making it a wonderful experience.

Blocks, Operations, and Retain Cycles

There have been some great discussions in the iOS community lately about the pitfalls of Objective-C and things to watch out for while developing iOS apps.  One of our projects at Mutual Mobile recently encountered a very difficult-to-diagnose issue that I wanted to describe so that hopefully others can avoid it.  The issue involved leaked images, which resulted in a memory pressure warning and a subsequent crash.  We knew what the symptoms were and how to reproduce it, but the root cause of the leak was extremely hard to pin down.  It used to be the case that most memory management issues were the result of programmer oversight.  But as you will see below, this issue would be extremely easy to overlook, which is why I felt it was important to share a detailed explanation of it here.
 
You can't start a conversation about memory management anymore without mentioning Automatic Reference Counting (ARC).  Apple knew that managing memory is a big deal and that a lot of people in the iOS community were doing it wrong.  At WWDC in 2011, they mentioned that around 90% of the crashes on the App Store were due to memory management issues.  ARC was their attempt to combat that.  Essentially, ARC synthesizes calls to retain and release for you so that you, the developer, can focus on actually building your app.  It seemed too good to be true at the time, but I think I can safely say now that ARC has been a huge success.  It's now extremely easy for us to manage memory in Objective-C because, well, we really don't have to do anything.  As long as we follow the simple conventions, the compiler takes care of the rest.  It really does work quite well.
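To illustrate what ARC takes off your plate, here is a hedged sketch (the property and surrounding class are hypothetical, not from any project mentioned here) contrasting a manual-retain-release setter with the plain assignment that suffices under ARC:

```objc
// Sketch only: the two halves below would live in different projects,
// since a target is compiled either with ARC or without it.

// Without ARC, a "retain" setter had to balance the counts by hand:
- (void)setImage:(UIImage *)newImage
{
    [newImage retain];      // +1 on the incoming object
    [_image release];       // -1 on the old value
    _image = newImage;
}

// Under ARC, the compiler synthesizes the equivalent retain/release
// calls for a strong property, so the setter reduces to:
- (void)setImage:(UIImage *)newImage
{
    _image = newImage;      // ARC retains newImage, releases the old value
}
```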
 
There are a few gotchas with ARC, though.  For starters, ARC doesn't really apply to C references (though it can help with converting from C to Objective-C, but that's another topic).  You do have to deal with C in iOS projects, because several frameworks like CoreGraphics have a C interface.  Instead of method calls to retain and release, C libraries include retain and release functions, such as CGContextRelease(), to manage their memory.  ARC does not synthesize those calls for you, so you still have to make them yourself.  CoreGraphics can be a tough framework to use, so my first thought was that the issue would be somewhere in this layer.  But it didn't take long to discover that this wasn't the case.  All it took was reading the code.  All of the calls to retain and release were balanced out correctly where the images were created.  Here's an example taken from the method that we thought may have been at fault:
 
    // The "Create" suffix means we own a +1 reference to the context.
    CGContextRef context = CGBitmapContextCreate(NULL, target_w, target_h, 8, 0, rgb, bmi);
    
    // The context retains the color space, so we can release our reference.
    CGColorSpaceRelease(rgb);
    
    UIImage *pdfImage = nil;
    
    if (context != NULL) {
        CGContextDrawPDFPage(context, page);
        
        // Again, "Create" means a +1 reference that we must release.
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
 
        pdfImage = [UIImage imageWithCGImage:imageRef scale:screenScale orientation:UIImageOrientationUp];
        CGImageRelease(imageRef);
    }
 
So as you can see, at the top of the method the context is created, so by convention it has a retain count of +1.  The context is then used to draw a PDF into it, which is copied out to an image so that it can be returned by the method.  The image ref also has a retain count of +1, because that's how the create convention works.  It is then the developer's responsibility to release those references, which you can see is done in this example.  So far so good.  We knew from Instruments that the image context was being leaked, but we still didn't know how or why.  But since we could tell, both from our own inspection and from reading Apple's sample code for this exact use case, that it was being done correctly, we went back to the drawing board.

The other main gotcha that remains with ARC is the retain cycle.  A retain cycle can take a few forms, but it typically means that object A retains object B, and object B retains object A, but nothing else retains object A or B.  They are still holding on to each other, so they never get dealloc'd, but nothing else is holding on to them so they both leak.  A lot of the time this can happen with a block, where the block retains the thing that created the block, and never gets released, so that the thing that created the block never gets released either.  That's clearly a problem.  It's also a problem that is tough to solve.  The static analyzer isn't well equipped to point these out to you, and Instruments isn't super effective at nailing them down either.  It will tell you something is leaking, but that's about it.  You do get a stack trace for the leak (or cycle if it detects one), but it's still up to you to pin down exactly what is causing the cycle.  You have to have a really strong understanding of how this stuff works and then do the detective work to parse through the code to find out what the cause could be.
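To make the shape of that trap concrete, here is a minimal sketch of a block-based retain cycle under ARC (the class, property, and method names are hypothetical, not taken from the project in question):

```objc
// Hypothetical sketch of the simplest block retain cycle under ARC.
@interface PhotoViewController : UIViewController
// "copy" means self holds a strong reference to the block.
@property (nonatomic, copy) void (^reloadBlock)(void);
@end

@implementation PhotoViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Cycle: self retains reloadBlock, and the block's capture of
    // self retains self.  Nothing else needs to hold either object
    // for both to stay alive forever.
    self.reloadBlock = ^{
        [self refreshImages];
    };
}

- (void)refreshImages
{
    // ...
}

@end
```

The standard escape hatch, shown later in this post, is to have the block capture a weak reference to self instead.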

In this case the team did understand retain cycles and did a lot to prevent them.  The view controller in question was responsible for showing all of the images that we are dealing with here.  That view controller creates dozens of blocks to perform image conversion operations from a PDF to an image.  The concern initially was that perhaps one of these blocks was holding onto the view controller, which holds the view hierarchy...and so maybe the view hierarchy isn't ever getting released.  There were indeed a few places where a block could have been "capturing" the view controller, and we took steps to prevent those.  Here's how you typically address that issue, by changing any reference to "self" to be a weak reference:

    __weak typeof(self) weakSelf = self;
    
    void (^ loadThumbnailBlock)(NSIndexPath *indexPath) = ^ (NSIndexPath *indexPath) {
        Page *page = [weakSelf.fetchedResultsController objectAtIndexPath:indexPath];
        [weakSelf loadThumbnailForPage:page forIndexPath:indexPath];
    };
    
So we eventually proved that the view controller was always being released and dealloc'd, and that all of its views were going away.  Still the problem persisted, so that was not the root cause.  The view controller was also handling the memory pressure warning correctly.  Whenever the app received the warning that it was using too much memory, it would remove the images it wasn't displaying from memory, as well as unload any views it wasn't using.  This was only a small fraction of the memory that had already been leaked, though, so it only delayed the inevitable.
 
Finally, though, we nailed it down.  We knew that the image contexts were being leaked, both from checking Instruments and because that's the only thing that could eat up that much memory.  The images were being rendered in blocks that were kicked off by the view controller.  We now knew that the view controller wasn't being held on to, because it was being dealloc'd, and thus it also couldn't be holding onto the blocks that were rendering the images.  So something else must have been holding onto those blocks...

Enter NSOperationQueue and NSBlockOperation.  They are built on top of Grand Central Dispatch to provide conveniences such as the ability to cancel an operation.  We were using this convenience to allow the app to cancel image conversion blocks if you closed the view controller before they finished.  Makes sense, right?  Well, it was actually this optimization that was killing us in terms of memory leaks.  Take a look at the block of code below:

    __block NSBlockOperation *operation = [[NSBlockOperation alloc] init];
    __weak typeof(self) weakSelf = self;
 
    MMVoidBlock thumbnailOperationBlock = ^ {
        if (!operation.isCancelled) {
            workerBlock();
        }
        
        [weakSelf.thumbnailOperationList removeObjectForKey:key];
    };
 
    [operation addExecutionBlock:thumbnailOperationBlock];

Notice any problems?  In this case we are building a thumbnail operation block that in turn calls a worker block.  The workerBlock() is actually what goes off and renders the PDF into a graphics context, converts that into an image, and saves the image.  But take a look at what else it's doing.  The block has a reference to the operation.  That sounds fine, until you look at the last line of that snippet.  The block, which holds a strong reference to the operation, is then being added to that same operation.  The addExecutionBlock: method retains the block, so now we have ourselves a retain cycle.  The block is holding onto the operation, so the operation won't be released.  When the operation finishes, the queue releases its own reference to it, but the cycle keeps the operation alive even though nothing outside the cycle references it, so it leaks.  The operation, in turn, has a reference to the block, so the block is never going to get released.  And finally, the block has a reference to the image and graphics contexts, which will now never be released either.  All because that silly block captured the operation it was added to.

Now the plot thickens even further.  The way you would typically solve this problem is by declaring the variable you want to use in the block as weak, using the __weak qualifier.  The earlier example inside the view controller illustrates that method.  But in this case we can't do that, because it will fail under ARC.  If you were to change __block operation to __weak operation in the example above, the operation would be released immediately, because nothing holds a strong reference to it yet.  It would be nil by the last line of the function, which would cause the app to not work at all.  The compiler knows this and will actually warn you not to use __weak there.  In this case, what the compiler is telling you to do is actually the wrong thing to do, which is why this problem is so hard to solve.  By being a good sport and doing what the compiler tells you, you are lulled into a false sense of security that you will not have a retain cycle.
 
Here is the actual solution below.
 
    __block NSBlockOperation *operation = [[NSBlockOperation alloc] init];
    __weak typeof(self) weakSelf = self;
    __weak typeof(operation) weakOp = operation;
 
    MMVoidBlock thumbnailOperationBlock = ^ {
        // Capturing only the weak reference keeps the block from
        // retaining the operation it belongs to.
        if (!weakOp.isCancelled) {
            workerBlock();
        }
        
        [weakSelf.thumbnailOperationList removeObjectForKey:key];
    };

    [operation addExecutionBlock:thumbnailOperationBlock];
 
What we had to do was just make the operation referenced by the block into a weak reference.  That way the block isn't holding onto the operation; it just has a weak reference to it.  It's worth pointing out that pre-ARC, __block was sufficient to prevent a retain cycle.  But under ARC, __block no longer carries the same meaning, and __weak must be used to prevent a retain cycle.  When the operation is finished and gets released by the operation queue, it will go away.  That will then release the block, which releases the image and graphics contexts, just as it's supposed to in the sample above.  That resulted in dramatically improved memory performance and completely normal memory behavior.  Literally just that two-line fix.
 
It is worth pointing out that, in some ways, this represents a possible bug (or questionable behavior) on the part of NSBlockOperation.  What appears to be happening is that when you add an execution block, it retains it.  So far so good.  But when the block executes, it does not release the block, which would have actually broken the cycle.  Instead, the block won't be dropped until the operation is dealloc'd because, for whatever reason, NSBlockOperation does not release its associated execution blocks until dealloc gets called.  What the reason for that is, I do not know, but that's just the icing on the cake for what is already a tremendously ridiculous issue.
 
There is always a risk when dealing with scarce resources and expensive operations if you are not careful about how you use them.  Retain cycles are preventable, but it takes a lot of thought to consider how they may be caused by the code we write.  This situation represents a perfect storm in terms of memory issues on iOS.  It's easy for retain cycles to go unnoticed.  This one became so critical because we were leaking images, not strings and numbers.  Leaking images will cause an app to get killed due to memory pressure, whereas strings and numbers probably never will.  Dealing with retain cycles can be extremely painful, and we should be especially careful to avoid them when dealing with expensive objects like images and other graphical assets.  In this case, even with extreme care it was still nearly impossible to avoid a cycle.  This was a really tough problem to solve, and I hope you all will remember it and avoid it in the future.

Calendaring on Mac and iOS

I am a fairly heavy user of calendars. This goes back to college, when I worked part-time jobs while keeping a full class load. But now, in the professional world, it's more important than ever to manage my calendar so that I can maintain my commitments to the people I work with, as well as maintain a sane work/life balance at home. Here's how I've set up my tools to manage calendars.

Mutual Mobile uses all of the Google services like Gmail and Google Calendar. While I don't use Gmail or Google Calendar for my primary personal email or calendaring service, I've been very impressed with both. Google Calendar in particular is an excellent service. The service is based on the CalDAV standard which is very widely used and supported. Apple's own Calendar apps on both iOS and Mac support Google Calendar as a backend data source, but not without some problems. I spent some time this past weekend identifying these limitations and configuring my setup to work around them.

The first problem is that Apple really seems to want all of your calendar data to live in iCloud. Normally this would be fine. When I migrated from MobileMe to iCloud I was pleased to find that all of my old events were still there. My iCloud account actually has events dating back to 2002, when I added an event to watch The Lord of the Rings: The Two Towers. But here's where things go wrong. If you leave your "Default Calendar" set to iCloud, and leave the "Automatically retrieve CalDAV invitations from Mail" setting on, then Calendar will create a new copy of all your events on your default iCloud calendar, even though those event invitations were sent from Google! This got to be really frustrating when I realized I had about two years' worth of duplicate calendar information. Useful tip: if you use Google for calendaring, either set that as your default calendar or just disable this setting.


Another issue is how alerts work on iOS. Google does have a push notification feature for Google Calendar, which works really well. But that service isn't without its flaws either. By default it also sends you an email reminder for every event. That obviously gets extremely annoying, so I turned that feature off, but it will also give you a browser pop-up alert 10 minutes before a meeting. Pop-ups are even more annoying than emails, but you can't disable them without disabling notifications altogether. Notifications are a problem solved pretty elegantly on iOS, and in OS X Mountain Lion, so I really just wanted to get that working without all the email and pop-up cruft. Well, it turns out that is possible, if you set the default alert for event reminders on Mac and iOS. This setting is off by default, where Apple strangely elects to leave responsibility for alerts to Google, but you can turn it on for individual accounts. I set mine to 5 minutes on iOS and 10 minutes on Mac. A word of warning, though: this event reminder is added to the calendar event the first time the event is synced to your Mac or iOS device. If you're enabling this for the first time, you should remove your Google calendars from your Mac or iOS device, enable the setting, and then re-add them to make sure you'll get all of your alerts when you expect them.


The last major issue is probably the biggest problem though. Unlike on the Mac, the Calendar app on iOS will not automatically update itself. This is a huge problem for a number of reasons. For one thing, people will add or change events on your calendar all the time during the day. If your calendar isn't being updated, you'll miss them all or go to the wrong place at the wrong time. The other main problem is that if you have alerts set up as I describe above, then if your calendar isn't up to date you won't get any alerts! This is because all of those alerts are device notifications, not push notifications. The event has to be on the device so that the app knows to give you a notification prior to the event start time. With the proliferation of push Email you would think we would have push calendaring by now, but sadly that hasn't happened yet. In order to keep your calendar up to date you'll have to resort to fetch.

Since I have all my email set to either push or manual fetch, I was in a bit of a pickle here. My personal email is iCloud, which uses push. For work email I use the new Gmail app, which also uses push, so I leave the iOS account setting as manual fetch. But when you change the iOS configuration for your Gmail account, you only get one choice for the whole account. I didn't feel like having all of my email fetched every 30 minutes, so I added a new account. iOS gives you the option to add a CalDAV account directly, so I disabled calendars on my Gmail account on iOS and added a new account just for my calendar. That enabled me to leave my email configured for manual fetch, and my calendars to fetch automatically every 30 minutes.


I've tested the performance a bit by leaving my phone off the charger overnight with the setting enabled. My phone seemed to lose between 2 and 3% of its battery idling for 8 hours at night, and my iPad lost about 2%. When you combine that with the other notifications and things my devices receive overnight, that seems like an acceptable loss of power to have an up-to-date calendar.

I'm hoping that doing this bit of house cleaning will also enable me to try out other great calendar apps like Agenda and Fantastical, which I have tried but never really used because my calendar information on iOS was never as reliable as I wanted it to be. That forced me to always rely on Google Calendar on the web, or the Mac Calendar app, to do my calendaring. Now I should be much more free to do the bulk of my calendaring on iOS, which I think is the way things should be.

Wildlife Photography

Woodpecker

I originally got into photography when I started camping. Visiting great state parks in Texas and venturing out into Canada, Colorado, and New Mexico was the perfect motivation to pick up a camera. But as I continued getting more into photography I became more and more interested in shooting sports. As with camping, I was very interested in the subject itself and absolutely loved getting to shoot all kinds of events. I still love shooting sports, but it's not a regular part of my life, nor is it something that's very accessible. The wilderness, however, is very accessible to those who want to venture out, so I turned my photographic attention back towards the outdoors.

Shooting wildlife was really an accident for me. I went on a trip this summer to Colorado and decided to take my brand new Canon 300mm f/4L. I wasn't planning on doing a lot of wildlife photography, but I did want to experiment with the lens a bit. I knew it was good for flowers and compressed landscape scenes, and that it was very light (for a 300mm), so I brought it along.

Marmot 1

Along a few hikes I happened to run across some great wildlife. I saw several marmots and other small critters, and a few gorgeous birds. I hadn't planned to focus on wildlife, but it quickly became my photographic focus on the trip. Much of the credit for that goes to the 300mm f/4L (which I originally bought for sports). The lens is perfect for wildlife: it's light enough to handhold yet sharp enough to create stunning images of close-up animals isolated against a gorgeous background blur. Just seeing a few of the images on the LCD was enough to get me hooked.

But there were a few other reasons why shooting wildlife was so appealing. For one thing, it's a lot like shooting sports. A large part of shooting sports is anticipation and fast reflexes. You have to understand the game and be able to predict what's going to happen, and then act instinctively by focusing on the right spot and quickly releasing the shutter. It's the same with shooting wildlife, except that it's even harder to predict what animals are going to do, especially birds. There are no white lines that animals have to stay within, and no goal to reach.

There's so much opportunity to experiment when photographing wildlife. You can try shooting at different angles to get different perspectives, or framing the scene differently to use a different background. You can get down low and shoot from ground-level for smaller animals, to see what things look like from their perspective, or hike up higher and shoot down on birds, to see what things look like from above. Like many types of photography, the background is very important when shooting wildlife. A lot of times an interesting background will really make a photograph more than an empty sky will.

Ruby Crowned Kinglet

Shooting wildlife is also the same kind of drug as shooting sports. Part of what makes shooting sports so addictive is waiting for the perfect shot. Photographers will often wait an entire game, or even an entire season, for that perfect shot of an outfielder diving to catch a ball, or a receiver diving across the goal line with the football. Capturing those images is rare, but it's something everyone wants to do. Likewise, listening for that bird you can hear in the trees and trying to catch an in-focus image of it in flight is just as addictive. But even without that goal, just being out in nature and enjoying the scenery is enough motivation to make wildlife photography an outstanding pastime, the same way just watching a game makes shooting sports that much more enjoyable.

If you're interested in taking up wildlife photography, then treat it like any other hobby. Just start doing it. Like sports photography, wildlife is also fairly gear intensive, but before you invest thousands in equipment make sure it's something you enjoy doing. You can easily get by with any DSLR and a medium-long range zoom lens. Go out in the back yard and take some photos, then go out for a hike at a state park and take some more. You'll see birds, deer, squirrels, rabbits, etc. If you're up north maybe you'll see some elk, or even a bear. Remember to respect whatever wildlife you do see though. When you're in the wild you're in their home. Don't feed animals or do anything to antagonize them. Just watch, take pictures, and enjoy the experience of observing nature.

Similarities between Backpacking and Software Development

At first glance it's hard to think of two activities with less in common than backpacking and software development.  Certainly many who engage in one activity don't usually partake in the other.  Most software developers who do backpack are probably trying to get away from technology for a while.  The ones that don't probably would if they could have a persistent connection to Twitter (or a hot shower).  I was a backpacker before I was a software developer, and the more I get into software development the more similar the two seem to me.  I thought about this while I was out backpacking recently, and I want to describe just some of the ways that they are similar.  

Preparation.  "Do I look like a guy with a plan?"  If you're The Joker, or a hiker who ran out of water, the answer may be no, but if you want to be successful as a developer or comfortable as a backpacker a plan is required.  Knowing what route you're going to take is important so you can decide how much water to carry.  Equally so for developers, failing to understand the requirements and plans of other teams can leave you duplicating effort or preventing others from completing their work.  Always have a plan.

Tell Someone Your Itinerary.  Speaking of a plan, it's generally best to tell that plan to other people.  If you break your ankle and don't make it home, will anyone know where to look for you?  If you start working on a critical piece of the data model without consulting other team members, you may waste effort by going down the wrong path, or you may have missed a requirement that someone else knows about.  Always tell someone what you're doing and where you are going.

YAGNI.  Ya Ain't Gonna Need It.  This is probably the biggest similarity between backpacking and developing software.  All too often we see developers who like to add gold plating and extra features to a class or framework where they may not be necessary.  Once I was designing a screen that had a complex custom layout and I had an idea for an additional feature.  I wanted to assign priorities to views so that they could be arranged automatically rather than by index.  I caught myself early as I realized that this absolutely wasn't necessary.  Likewise, people new to backpacking often pack far more than they really need.  "Will I need this hatchet?  Of course!  What if I need to chop firewood?"  Never mind that they are in Texas and there's a fire ban everywhere.  I call this behavior Shiny Object Syndrome (SOS).  When some people see a shiny object or a new toy they can't resist bringing it with them.  Unfortunately both of these behaviors have drastic consequences.  In a software project the team now has to maintain that shiny object forever, whether they use it or not.  When you're backpacking your backpack is often at least 30-40 lbs, and an extra 5-10 lbs of junk can really wear you down.  Sooner or later people realize that they never used that method or hatchet and they leave it out.  Trust me, ya ain't gonna need it.

Determination.  There's a certain amount of determination required to do anything difficult, be it backpacking, developing software, or watching a New York Jets game.  Especially when you're trying to race up a mountain before rain comes, or sprinting towards a release deadline.  Your body is telling you to stop, that you're tired, or that you're not going to make it.  It's up to the individual to put aside the doubts and press on.  Determination is something we all have to have.

Leave No Trace.  While LNT doesn't directly translate to software, since you do want to leave a documentation trail in your source code, there is one principle that ties in closely.  When you are in the wilderness you should strive to leave everything better than you found it.  That doesn't mean trimming trees or building rock chairs, but it does mean picking up trash, fixing downed trail signs, building and maintaining trail, etc.  The same principle applies to software.  If a developer sees a method or class that isn't built well, contains a bug, or isn't safe, that developer should strongly consider fixing or reporting it.  Ignoring technical debt has the same awful effect that littering does on the wilderness: it ruins the experience for everyone else.  Always leave things better than you found them.

Organization.  If the sun goes down and you can't see the inside of your pack, will you know where your flashlight is?  What if you need to meet someone at a trailhead, will you know where to go and how long it will take to get there?  This is equally as important in software because you need to know where things are in the code base, what features are on the roadmap, when other developers and designers will be finished with certain tasks, etc.  Know where things are.  Be organized.

Calm Under Pressure.  There are all kinds of pressure out there: deadlines, of course, but also weather, physical danger to yourself or others, lack of food or sleep, and more.  Staying calm when someone has broken a leg is incredibly difficult, but without calm a first responder cannot act appropriately.  Panic never helps.  Likewise, with a deadline approaching, team members have to stay calm and perform.  Panic is contagious, and if one person loses their calm they put the entire team at risk.  Take a deep breath, count to three, and stay calm.

Health.  All the determination, will power, and planning won't help you if you can't get out of bed or lose the energy to continue forward.  It's common for inexperienced backpackers not to eat enough on a long day.  Your body consumes roughly 500 calories per hour on a strenuous hike, so a 15-20 mile day could burn 5,000 calories.  If you aren't eating enough to replenish that, then you won't have enough energy to reach your destination.  It's equally important to drink enough water.  Dehydration is the most common injury in the backcountry, not bear attacks or twisted ankles.  Meanwhile, it's common for developers to chug coffee and Red Bull as they try to sprint towards the finish.  People have to think about what they eat and how they treat their body, or they won't be able to perform when they need to.  Be Healthy.

It's fun to think about the different aspects of our lives and how similar they are in terms of the basic skills and considerations they involve.  All of these principles are important across many areas of our lives.  Remember that how you approach a problem in one activity may help in another, which is a great reason to diversify your life and maintain a broader perspective.  Or in other words, go enjoy Nature!

Thoughts on the iPad Mini

For me, it's all about the weight.  Period.  It used to be about the Retina Display.  When you looked at it for the first time, your eyes could never again ignore the pixels in other displays.  It's the same with the iPad mini.  Once you pick one up and hold it, everything else just feels unbearably heavy.  It really is that striking of a difference.

I think that, like most developers and users, I was skeptical of the importance and value of a smaller iPad when it was rumored to be nearing release weeks and months ago.  The general consensus has been that 3.x-inch smartphones work very well, and that 10-inch tablets are similarly well suited.  Few were clamoring for a 7-inch device, so it's been interesting seeing the reception to the iPad Mini now that it's out, especially on the heels of the very successful iPhone 5, which also challenged assumptions about smartphone size with what was seen as a very useful 4-inch screen.  With the iPhone now in the larger size range, where does a smaller iPad fit in the lineup?

After a few days of using the iPad Mini it's clear to me that it fits right on top of my iPad (3) as my go-to device for general computing, which for me means reading, browsing, checking email, tweeting, messaging, etc.  And it has everything to do with the weight.  I take the iPad everywhere, so being lighter makes it more portable.  But more importantly it's just easier to hold in your hand.  What's probably most striking about the device is that you don't actually hold it, or at least I don't.  It's light enough that I can just rest it in my hand and let friction keep it from slipping out.  The larger iPad is heavy enough that you actually need to grip it, sometimes with two hands, or rest it on something or it will fall.  The iPad Mini is easily held in one hand.

I was really interested to see if I would use the iPad Mini differently than the iPad.  Yes it's smaller and lighter but what does that actually mean in real world use terms?  The first thing I noticed is that I preferred to hold the Mini in portrait orientation.  I've always been a Landscape iPad user, so that alone proved to me that the iPad Mini is different.  The weight also makes a big difference in the way I use the device.  In addition to holding it with one hand, I want to control it with one hand too.  When I am using the iPad I have a tendency to "hold" it with my left hand and use my right hand to tap controls and perform gestures.  With the iPad Mini that sort of interaction feels more like work.  What I find myself doing is trying to do everything with my left thumb.  That actually works well on some apps but not as well on others.  I wouldn't be surprised to see interfaces leaning more towards this type of interaction in the future, with tabs and buttons on the side instead of on the bottom.

It's clear to me that the iPad Mini is going to be a big deal.  It only took a day before I made the decision to recommend it to anyone I know who is buying their first iPad.  It just makes too much sense to buy the Mini.  It's more portable, it's more fun to use, it runs all of the existing apps, and it's cheaper.  The only bummer is the lack of a Retina Display, but it's not something that I have missed as much as I would have expected.  I peered closely at it and said "Yes, I can see pixels" and stopped worrying about it.  I just enjoy the lighter weight and ease of use too much to care about the pixel density that much.  Going back to an iPhone 3GS after using an iPhone 4 would have been impossible.  Going back to a non-Retina Display iPad may be hard for some people, but for me it was easy because of just how much more enjoyable the iPad Mini is to use.

The iPad Mini is going to open up the iPad experience to so many more people than before.  Price is important, and a $329 device is very accessible, not just to consumers but to businesses and schools.  One of the key markets for iPads is education.  Apple announced that more than 80% of the core curriculum is covered on the iBooks Store, which just adds to the value brought to the platform by the countless education apps available for the iPad.  So now schools can buy three iPad Minis for the price of two iPads.  That's 50% more devices for the same budget when outfitting an entire classroom with iPads.  That same savings extends to the enterprise as well.  Many companies are deploying iPads to their entire teams, or encouraging a Bring Your Own Device program.  A less expensive device which provides the same or better user experience as the full size iPad is extremely persuasive in all of these environments.

There's been some talk lately about what the ideal iPad would look like.  A few have speculated that the iPad Mini with a Retina Display would be the ideal iPad.  It's hard to argue with that reasoning.  As it stands now, I can safely say that I enjoy using the iPad Mini more than my Retina Display iPad (3).  If the Mini had a Retina Display it would absolutely be no contest.  But what if the iPad weighed the same as the Mini does now, or even a tad lighter?  I think my ideal iPad would actually be the full size one, but as thin and light as the Mini is today.  I'm betting that the Mini will get a Retina Display in two years, so that may be around the time that the full size iPad achieves the current weight of the iPad Mini.  When that happens, I can't wait to see if my assumption was right.  For now, though, I love the iPad Mini and I can't wait to see how the market and Apple's customers take to it now that it's out.

Speculation on XPC for iOS

I've read several articles lately from some talented iOS developers talking about the possibility that XPC services may make their way to iOS in a future version.  It's a really fascinating possibility and I hope it comes to pass.  XPC is a mechanism for interprocess communication that is tied in with Grand Central Dispatch.  Introduced in OS X Lion and refined in Mountain Lion, XPC is already included as a private framework in iOS 6, so indications are good that it could make its way into iOS 7.

One way that XPC could be used is to create Remote View Controllers.  Ole Begemann has a great overview of this on his blog, oleb.net, including some research into current iOS frameworks which are using remote view controllers.  Essentially an app creates another executable which can vend a remote view controller to other apps.  Now there's a lot that goes into how the app determines which of these remote view controllers may be useful to it (i.e. whether they can consume the type of content the app wants to share), but the gist of it is that the currently running app opens up a portal to a small piece of another app, and all of the touch events and user input get forwarded to that app, which is running in a separate process.  That lets the remote view controller take care of all the nitty gritty details of posting to Twitter without the currently running app having to worry about it.

I wanted to speculate a bit on what this could mean for iOS.  iOS has been a sandboxed platform since day one.  We didn't get multitasking until iOS 4, and even now it's not exactly open season on parallel tasks and communication between apps.  The only way that apps can communicate with each other now is through URL schemes.  There's been some creativity around URL schemes, such as how Facebook has a global login system between all of its applications (and apps that want to use Facebook), but it's still a cumbersome system.  It requires every app to know about every other app, including which URL schemes it responds to.  That's not a scalable system at all, which is why we need something better.

The obvious value that XPC and Remote View Controllers could bring is sharing.  Apps like Instapaper or Instagram could publish sharing activities and view controllers that allow any app to post a link to Instapaper or a picture to Instagram.  There's definitely a need for this type of service, so I hope it's something we see soon.  If you're worried about having too many sharing options, I could see a Notification Center-style interface for managing these options.  Since apps would have to publish what they can handle in terms of content, it should be straightforward to provide a static interface for managing that in Settings.

Some less obvious use cases for XPC and Remote View Controllers are global authentication schemes and remote detail views.  Authentication with various services is a very common problem on iOS.  Facebook and Twitter used to be the most common services that required some form of authentication, which would have to be baked into an app in the form of an SDK or OAuth implementation.  Now, thanks to integration in iOS, this is no longer a major issue.  But there are still plenty of other services (Dropbox, for example) that many apps need to authenticate with.  Wouldn't it be nice if the Dropbox app could vend an authentication view controller to any app that needed to log in to Dropbox?  It would definitely be a better experience than using URL schemes to go back and forth to the Dropbox app, and it would also keep the responsibility of safely authenticating in Dropbox's court, where it belongs.

We know that Apple's own App Store modal view controllers (from Mail and Safari) are using XPC under the hood.  There's plenty of potential here for other apps to offer similar forms of detail views.  Shopping is certainly a great use case for this as well with things like Amazon or Google Shopping being obvious possibilities.

Another place that XPC could kickstart a revolution on iOS is Notification Center.  When Notification Center debuted in iOS 5 we were greeted with widgets for Weather and Stocks.  What's been obvious since then is that those widgets are not system-wide, but rather tied to the existence of the corresponding Weather and Stocks apps.  That's why they are only on the iPhone, not the iPad, because those apps don't exist there.  This could be another use case for XPC, where apps could vend remote views which exist in Notification Center as small widgets.  There are some great user experience gains to be had with widgets, such as quick access to data (weather, scores, etc.), basic controls for things like timer apps, or quickly launching apps, similar to David Barnard's original vision for Launch Center.  Shoot, I know I wouldn't mind having a flashlight widget in my Notification Center.

Here's one last stretch of speculation for XPC.  What if Apple converted UIWebView to run in a separate process and use XPC?  There's long been a disadvantage for hybrid/web apps that run inside of a wrapped web view because they don't have access to the same Nitro JavaScript engine that Mobile Safari uses.  The reasoning I've heard for that is that it isn't secure enough when it's in the same process as the app.  Would putting the web view in another process and using XPC for it to communicate with the host app be enough to secure it such that any app could have access to Nitro?  That would certainly improve Apple's status as a leading platform for mobile web developers.  This is definitely a stretch, but it's exciting to think about.

There's just so much potential with XPC that I have to believe Apple is working on bringing this technology to iOS.  Hopefully it will make it for iOS 7, but even if it's still a few versions out I am still excited just thinking of the possibilities that it could bring to the platform.

Definitely check out Ole Begemann's series below, as well as Kyle Baxter's speculation and Federico Viticci's interview with Loren Brichter.  If you think any of this sounds cool too, definitely file an enhancement request:


http://oleb.net/blog/2012/10/remote-view-controllers-in-ios-6/

http://tightwind.net/2012/10/the-future-of-ios/

http://www.macstories.net/featured/a-conversation-with-loren-brichter/ 

http://bugreport.apple.com/

Multnomah Falls

Last weekend I attended the outstanding CocoaConf iOS and Mac Developers conference in Portland, Oregon.  It was absolutely awesome getting to meet so many developers and members of the Apple development community.  There were great people from all over the country (and the world) there to talk about the Apple dev platform, and we had a lot of great conversations.  One of the biggest topics of interest at the conference was all of the buzz around UI Automation.  I'm really looking forward to experimenting more with that over the course of the coming year.  I'm also looking forward to attending CocoaConf again next year in Dallas, and possibly other conferences as well.  I really enjoy WWDC, but there's a great energy around a smaller conference like CocoaConf that the larger ones don't always have.

Whenever I visit a new city like Portland I try to make a point to spend a little time getting off the beaten path and exploring.  In this case, I was also visiting an entirely new region; I'd never really been to the Pacific Northwest before.  As an avid hiker and backpacker, I had heard great things about Oregon in particular, so it was great to finally get to go there.

Since the conference was near the airport, I decided to rent a car on Sunday and drive around.  I wanted to see the city, but I also wanted to see the surrounding land.  When I was talking with some friends at the conference, one of them mentioned a place called Multnomah Falls.  It wasn't far from the city, so I got my rental car and headed out that way on Sunday morning.  I really didn't know what to expect, so I packed up my GR1 with my camera, jacket, rain gear, food, and water and put on my workout clothes to prepare for a hike.  What I found when I got there was simply shocking.

Let me explain something before going on.  I'm from Texas.  I've hiked thousands of miles in Texas, New Mexico, and Colorado.  The tallest waterfall I've ever seen was Yellowstone Falls, which is around 300 feet tall, but I didn't get anywhere near it.  Multnomah Falls is over 600 feet tall, and it's like 200 yards away from the highway.  I wasn't expecting either of those things at all.  From the moment I got there I was simply shocked by how beautiful this was.

I threw on my pack and my camera and headed out.  There's a little welcome center and observation area, but then right past it is the trail.  I headed straight for it and started hiking up.  At the first turn there's a great view of the bridge that stands above the lower falls.

I kept going up another few switchbacks and got to the bridge.  While it was drizzling on and off during the hike, near the falls it was like a constant downpour as the water turned to mist at the bottom and shot out at you like a shower.  I didn't want my camera to get soaked, so I stopped only briefly to capture a shot of the bottom of the upper falls before pressing on up the trail.

I continued up the trail for a few more switchbacks.  It was still raining, but I didn't feel the need to break out my rain jacket.  It was never really more than a mist, I think because of the trees.  The forest was so densely wooded that the trees seemed to catch most of the water before it got to me.  After one or two switchbacks most of the other hikers were gone.  Most people turned around at the bridge.  A few switchbacks up I was rewarded with my favorite view of the falls from between the trees, and one of my favorite landscape photos that I've ever taken.

I kept hiking up to the top of the hillside and then started to switchback down towards the top of the falls.  The trail down to the falls was pretty rough.  It was essentially a mudslide, unfortunately.  Park workers were working hard to put up erosion barriers so that the trail didn't completely wash away.  I went slowly and carefully across this stretch of a few hundred yards down to the water.  Right before you get to the top of the falls there are gorgeous views of the spring that feeds the falls, which are prime for shooting long exposures.

This one above is my favorite.  I've always loved shooting long exposures of water, but it's usually much harder than it sounds, largely because of the amount of available light.  Usually it's so bright during the day that you can't afford much more than a half-second exposure without washing out the water, even at ISO 100 and f/22.  The day was so overcast, and the forest so shaded, that I could afford nearly 2 seconds at those settings, which resulted in a silky smooth stream and excellent color and contrast in the image itself.  I could have stayed by this spring all day taking more pictures of it even without the falls.  Also, here's a tip.  If you don't have a tripod (as I often don't), you can still do long exposures like this without one.  My favorite trick is to use my backpack as a stand and set my camera to a two-second delayed shutter.  If you want to be even more careful, you can enable mirror lockup, which will further reduce the amount of camera shake that could affect the sharpness of your image.  All of these shots were taken while resting on top of my GR1.

Below is the small "mini" falls about 20 feet in front of the main drop off.  I like this one because it shows the whole stream in the background fading out of the frame.

Finally I got to the observation point at the top of the falls.  The view down was awesome above the roar of the falling water.  It was an excellent experience and a wonderful hike!

Finally, here's one more picture from just off of the trail itself.  The trail was very well maintained and the surrounding terrain was simply beautiful.  The forest was very colorful, as I think this photo illustrates.  I still appreciate the mountains and deserts of the southwest, but the beauty of Oregon was unmistakable and I look forward to discovering more of it in future visits.

I don't always publish these kinds of trip reports, but in this case I felt compelled to share.  Even though I only spent a few hours out in the forest, Oregon was easily one of the most beautiful places I've ever been to.  I've been an amateur landscape photographer for more than 10 years.  I love hiking and I've seen some simply amazing things in many parts of the United States.  But I've never seen a place that was as photogenic as this.  You can always tell as a photographer when you're shooting a subject that just makes it easy, and you appreciate it, because you know you've found something special.

If you're ever looking for a great place to visit, definitely check out Portland and Multnomah Falls.  I would absolutely go back, and I look forward to exploring other parts of the Pacific Northwest in the future.

If you want to see higher resolution versions of some of these pictures feel free to check them out at 500px (link on the header above).

One other note, about the Goruck GR1.  I was curious how the GR1 would hold up in a constant drizzle like this.  It had no problems at all.  My iPad and other electronics were inside, as well as keys, phone, wallet, etc., and nothing got wet, even without a rain cover.  I had tested out the pack in the rain before this, and read other articles where the authors had not had issues in the rain...but it was great to experience for myself how robust it really is.  I still love that pack and I would absolutely take it anywhere now.

iPad Week: Friday

My week of using an iPad at work has come to a close.  Overall it was a great experience.  I'll be glad to have my laptop back, but I definitely feel like I appreciate the iPad even more now.  It's a very personal device, and pushing the boundaries of what it is capable of makes me even more excited to be building apps for it.

I'm in a position as a developer where my job involves more than just writing code.  As such, I was able to test the experience of working with the iPad in a wide variety of activities.  All of my normal activities were possible with the iPad, but it was clear that the iPad just isn't that great for writing code yet.  Sure, you can do it, but it's far from ideal.  For every other aspect of my job though the iPad performed very well.  There's a lot that goes into software development, from team communication to researching frameworks and tools, reading specs, writing documentation, reviewing designs, maintaining build systems and testing apps.  Engineering is all about solving problems and I really feel like I was able to solve a lot of problems this week using the iPad.

When I'm at home I use my iPad a lot while I'm lying on the couch.  I spend a lot of time reading Twitter and RSS news to keep up to date on what is going on.  It occurred to me today while I was sitting at my desk that I hadn't tried that at work yet with the iPad.  I was researching an automated UI testing framework called KIF, so I lay down on one of our office couches and read about KIF for an hour.  It was really relaxing and I learned a lot about a new tool that I want to use.  I bet that when people start using tablets more in the workplace they'll be able to be more relaxed while still being very productive.  That's certainly how a lot of my experience using the iPad felt.

I've been thinking about how the iPad could actually become a better environment for software development.  I can see how Xcode could be built for the iPad.  The sliding panels in the desktop version would fit in well on the iPad, shown and hidden with swipe gestures.  Same situation with the console/debugger drawer at the bottom.  Whether or not the iPad has enough horsepower to be an effective build machine is another question, but I don't think it would be hard for Apple to build a version of Xcode that works on the iPad.  The issue that I see with it, though, is that development is still very mouse and keyboard based.  You have to type to write code.  We're not in a world yet where you can drag and drop functions (except for code snippets…which I think would get used a lot more on a tablet) to build an app.

We are in a world, though, where we have Interface Builder.  That's the one piece of iOS development that does lend itself to the tablet, since you can drag and drop bits of an interface around on the screen to build your nibs and storyboards.  So then it occurred to me: what if you could bake Interface Builder into an app itself?  What if you were building an app on the iPad, and while it was running on your iPad in "developer mode" you could actually move the pieces of the UI around as needed?  It would be the perfect way to avoid the problem of context switching between your code/interface editor and the app itself.  Why not just adjust your UI from the app while the app is running?  We all do it all the time, where we'll run an app in the simulator and say to ourselves "that text box should be 2 pixels over to the left."  With this system you could just drag the text box over, say by tapping and holding on it to move it.  You could use standard popovers to bring up a dialog for property changes, like font, color, and size.  I bet that would be a pretty compelling way to fine tune an app you were developing, and it's something that would be a far better experience on the iPad itself than it would be in the simulator running on a Mac.

One of the biggest surprises this week was that I found the 10" display on the iPad to be plenty large enough for me.  I'm a bit obsessed with screen real estate…I use dual monitors at home and at work I use three monitors.  I tried plugging the iPad into a monitor, but it just didn't feel right.  It turned out that the gorgeous 10" Retina display on the iPad was just fine for me.  I wouldn't have expected that to be the case.

I consider this to be a successful experiment.  I would definitely encourage other people, developers or otherwise, to try using an iPad as your primary computer for a week.  I think you'll be pleasantly surprised by how much you can do with it.  I think it's important to note that what made this experiment possible for me was the quality of the apps that I was able to use.  Panic in particular deserves a lot of credit.  Their iPad apps are simply top notch.  Edovia and Google deserve a lot of credit as well.  Screens, Google Drive, and Google Hangout are excellent experiences on the iPad.  If you do try this experiment and find places where the experience is lacking, think about what would improve that experience and write about it.  Maybe someone will be able to build an app to fill the need that you have.

Thanks to everyone at Mutual Mobile who made this experiment possible!

iPad Week: Thursday

I just checked the fast app switching bar on my iPad and Screens is on the 4th page!  

Today wasn't what I would call a hardcore development day, but I did take on a bit of coding today including a new challenge: adding files to the Xcode project file from the command line.

What went well?

Diet Coda continues to perform quite well for me.  I fiddled with the syntax highlighting options and the JavaScript setting actually works pretty well.  It dims out comments and highlights parentheses and some keywords.  It's better than nothing, though not nearly as full-featured as Textastic, and it works for me since I just can't use Textastic for my workflow.

Prompt is also great.  The interesting thing about Prompt and Textastic is that they both provide excellent solutions to user experience issues when you don't have an external keyboard.  Textastic, for example, has these ingenious side-swipe keys for adding parens, numbers, square brackets, etc. that you would otherwise have to switch keyboards for.  Prompt has similar keys for things like arrows (obviously required for a terminal app).  But since I am using the Apple Bluetooth Keyboard this isn't really a concern for me.  That's why I am able to get by entirely with Diet Coda for both code editing and terminal usage.

What didn't go well?

I was dreading needing to add files to an Xcode project.  I've had to do that already this week, but in those cases I just used Screens to do it on my Mac.  I wanted to see if I could do it without using Xcode through remote access.  I initially tried modifying the project file manually.  Bad idea.  Even though I understand the internal structure of the project file, there are just too many details to get wrong and too many steps to go through.  Even if I could make it work once, it wouldn't be repeatable over time.

Not long into this attempt I realized that there must be scripts out there to do this.  Otherwise apps like AppCode from JetBrains would be impossible.  I was able to find two command line tools for managing the project file that I've linked below.  I only tried the one from CocoaPods, but I would lean towards that one since it seems more actively maintained than xcs.

https://github.com/CocoaPods/Xcodeproj

http://bluezbox.com/blog/15/managing-xcode-projects-from-command-line

As this experiment is drawing to an end, I'm trying to collect my thoughts on what my takeaways are.  Are we at the point where a developer would be as productive on an iPad as they are on a Mac?  Certainly not.  But I can absolutely see us getting there.  The catch is that it's going to take a rethinking of the process of software development to get us there.  But maybe that's the key takeaway from this.  We see it all the time at Mutual Mobile: the iPad is changing the way entire industries conduct business.  It's amazing to watch.  And as a developer, it's fitting to expect that eventually the software development industry will be changed by it too.

iPad Week: Wednesday

Today was the first day that I didn't secretly wish I had my laptop hidden under my desk to work on.  I also did a lot more coding today, so it's starting to feel like I'm hitting my stride with the iPad.

What went well?

Diet Coda fits way better into my mental model of the universe than Textastic did.  Let me explain why.  Diet Coda is a client-server application.  It depends on a server to edit files.  It doesn't allow you to download local copies of files and edit them offline - you have to be constantly connected to a server.  For most people that's a disadvantage, but it actually works better for me.  Here's why.  Textastic requires you to download a copy of a folder or file to your iPad before you can make edits.  The problem is that you then have to upload them back to the server before you can build.  That wouldn't be so bad if Textastic were using some kind of version control system…but it's not.  Re-syncing your files to the server is an entirely manual process.  You have to pick the local files or folders that you want to move back to your server, AND then pick which ones on the server you want those to replace.  It's really not an intuitive process and I didn't enjoy it at all.

I felt much more at ease with Diet Coda, and I was actually able to jump in with it and crank out a couple of new features on a project.  I didn't have to worry about forgetting to sync a certain file, either to the device or back to the server.  Every file I needed was right there and simple to edit.  The best part, though, is really the integrated terminal.  I'm just a tap away from being able to run xcodebuild or check my changes in to Subversion.  Yes, you miss the Objective-C syntax highlighting that you get from Textastic, and yes, you miss cool features like the jump bar, but for me those were less important than quicker access to the terminal and less hassle when editing files.

I spent a fair amount of time in Google Docs again today, including some time collaboratively editing a document with someone.  While I did stumble upon one crash, I really was surprised by how good the experience was.  Everything he was typing showed up immediately on my screen, and vice versa.  Great stuff.

If you've ever compared Box.net to Google Drive and Dropbox the first thing you'll notice is that the Mac sync experience for Box is terrible compared to the others.  The iPad app for Box, however, is great.  This may be one of the best examples of a piece of my work that is actually better on the iPad.  Using Box to look at documents and videos was great today.

So what didn't go well?

I used Screens once today…I almost made it the full day but I wasn't able to :(  I needed to zip up a piece of source code to email to someone.  In retrospect, I probably could have zipped it up, copied it to Dropbox via the terminal, and then emailed it to them via the Dropbox app…but I didn't think of that at the time.  That's about it though.  The experience was far more positive today than negative.

The new experience that I noticed today in terms of my interaction while using the iPad was that I tended to pair program with people more.  Several issues came up today that I had to help with, and where before I might have just pulled out my laptop and dove in I tended to just sit with another developer and work through the problem with them.  I could look at what they were doing, while pulling up other classes and documentation on my iPad.  Sharing came into play again today, where I would find something in DocSets and hand the other person my iPad to show them a method or property that I'd found.  Sometimes you don't need to be at the keyboard to get something done.

The best lesson that I learned today though is that while it may be difficult to create a new app from scratch on the iPad, as I was trying to do yesterday, it's really not hard at all to edit an existing one.  Given what I experienced today, I am now absolutely confident that I could carry my iPad on a trip with me and be equipped to fix a bug or small issue that came up while I was away from a Mac.  Diet Coda + Prompt are more than capable of tackling that, especially with Screens to view a debugger if I got stuck trying to figure it out.

Speaking of a debugger, one of the biggest issues facing developers trying to develop an app solely on the iPad is the lack of a console or a debugger - anything to get feedback about the app while it's running.  In Xcode I depend heavily on the debugger.  It's such a great tool for diagnosing issues.  But before I learned how to use a debugger I used print statements in the console to diagnose issues.  Sometimes I still use them.  Printing out values and comparing them to what you expect is a great way to figure out if something is doing the right thing or not.  So then I asked myself, wouldn't it be great to have this kind of support in an app I was working on on the iPad?

Yesterday I started out trying to make a simple debugger library that can be added to another app.  It's actually very simple.  It's got two parts: a log macro, and a console view.  The log macro is very similar to other log macros, like DLog, except that it logs the text out to a file that gets cleared every time the app restarts (similar to how Xcode clears the console).  The console view is a text view that just prints out the contents of that file.  Simple enough.  That way you can just add debug log statements to your app that are only intended to be viewed in this console.  But you don't always want to see the console, so how do you handle that?  I added a global swipe gesture that shows and hides the console as needed.  It's super simple, but it's actually pretty effective.  Here's a screenshot of the sample app below:

 

That's all for today.  Overall the experience today was extremely positive.  With two days left, I am getting more confident that I'll reach my goal of not only using an iPad for work for a full week, but also not having to use Screens for a full day.  Wish me luck!

iPad Week: Tuesday

If you ever find yourself not appreciating Xcode, try doing iOS development on an iPad.

Today was the first day where I attempted a real development task on the iPad.  The goal was to start building a new UIView subclass.  I used Screens to create a new Xcode project, and add the necessary files.  Then I switched to Textastic to implement it.  The experience was workable, but I wouldn't consider it to be ideal.

Again, I'll start with what went well.

I was surprised to find out that Textastic includes a rudimentary form of code completion.  It knows a bit about Objective-C keywords and it's smart enough to offer those as suggestions while you are typing.  That's very cool.  It even works with properties, as seen above.  But what it lacks is true auto-completion with support for Cocoa classes, methods, types, etc.  That's a big deal when you forget if it's "UIAutoresizingFlexibleWidth" or "UIViewAutoresizingFlexibleWidth".  

My deployment system works.  I was able to initiate a build from the command line in Prompt and then download the built app about a minute later.  That's not an ideal turnaround time, but it's not terrible.  I actually wasn't expecting it to be as good as it is.

Documentation wasn't really an issue either.  The app DocSets is really fantastic.  I think that's a must-install for any developer.  It's available for free on Github, but if you like it then consider buying it on the App Store to support the project.  https://github.com/omz/DocSets-for-iOS

So what didn't go well?

What I missed most was the instant feedback that you get from Xcode and having a real compiler as part of your tool chain.  I wasn't as comfortable with the uncertainty of wondering if NSUserDefaults had a setString: method, or if it was actually setObject:.  In many of those cases I could look up the answer using DocSets, but that's still less efficient than letting Xcode just tell you which methods are available in-line while you're typing.

But the killer for me was the unknowns that led to compilation issues.  A couple of times I did things like forget a parenthesis or a square bracket, or slightly misspell a variable or method name.  Some of the time the output from xcodebuild and Bamboo is enough to see what the issue is, but in one case I did have to open up Screens and check the logs in Xcode to see what the problem was.  Not having that instant feedback is a big loss once you're used to having it around.  Or maybe this will help teach me to write perfect code with my eyes closed?

My goal with this experiment is to use the native iPad apps as much as possible and to rely on Screens and remote access to a Mac as little as possible.  To me, Screens is my security blanket.  I'm trying to wean myself off of it.  Yesterday I turned to Screens pretty often at the first sign of trouble.  I'm happy to report that today I didn't need to as much.  I only had to go to Screens to create the new project, fix the build issue mentioned above, and convert a file type that I had on my Mac at home.  I'm hoping that by the end of the week I can go a whole day without having to use Screens for any of my work.

Aside from development tasks, the iPad performed very well for me today.  One new addition to my workload was taking notes.  There were two meetings I attended today where I needed to take some notes.  I'd left my Bluetooth keyboard at my desk, however, so I had to do it all forefinger style.  I read a great suggestion some time ago on how to type quickly on the iPad.  On a physical keyboard we are taught to use all five fingers to hit the various keys.  The iPad keyboard isn't big enough for that, though, so what the author suggested was using just your first three fingers to do your typing and ignoring your thumb and pinky.  That system works extremely well for me, and I was very happy with the accuracy and speed of my note-taking today using that method.  Here's a link to the aforementioned article: http://whowritesforyou.com/2011/08/24/ipad-typing-tip-use-three-fingers/

I paid a bit closer attention to how I was using the iPad today.  Yesterday I spent more of my time leaving my iPad at my desk as if it were a laptop.  Today I made it a point to carry it around with me everywhere.  What I noticed was that having a tablet made me more likely to share my work with others, and by share I mean literally handing them my iPad to show them what I was doing.  I think this is a great way that the iPad can add value in the workplace, especially for software development.  Software engineers are infamously shy and tend to keep to themselves.  Couple those tendencies with the fact that it's sometimes difficult to crowd around a laptop to share your work with others and you have a recipe for poor communication between engineers.  I like how the iPad solves a sharing problem that a laptop or an iMac has: rather than crowd around a monitor, I could just pull up a source file or a Google doc and hand the iPad to someone to get their opinion.  Social interaction is a hallmark of mobility, and this is a perfect example of where tablets can add value to our work as software developers.

Tomorrow I'm going to attempt some more development work, with the goal of trying out Diet Coda as opposed to Textastic, and hopefully using Screens a bit less than before.  We'll see how it goes :)