Apple iPad: Observations on an Initial Development Project

Joe B. Taylor
University of Colorado at Colorado Springs
jtaylor8@uccs.edu

Abstract—Apple has released new hardware and operating systems this spring and plans an additional major operating system release this summer. This project explores the currently available software development kit for these recent releases. The author is new to the Mac environment and to iPhone/iPad development but has years of experience in Microsoft development and some experience with Java and other non-Microsoft languages. The intent of the project is to implement a simple mapping application using a map of the University of Colorado at Colorado Springs main campus and to gain a better understanding of this style of development.

Index Terms—Apple iPad, Objective-C, iPhone OS SDK 3.2

INTRODUCTION

Apple has had huge success with its iPhone and iPod products over the last several years. Now the company is attempting a major shift in how the average user relates to and interacts with technology by introducing an iPhone-like tablet computer. The iPad has been a big hit with consumers since its recent release. With this new hardware comes a new version of the iPhone operating system that introduces new possibilities in multi-device development. While the iPad's 3.2 version of the iPhone operating system does not complete the transition to multi-device development, it lays the groundwork for it. The expected summer release of yet another new version of the iPhone operating system promises to complete the transition.

This project explores iPhone/iPad development from the perspective of an experienced developer with little prior knowledge of the Mac environment or Mac/iPhone development. A custom map application will be built and basic concepts of iPad development explored.

Recent Changes From Apple

iPad Hardware

Two versions of Apple's new iPad began shipping in April 2010. They are primarily differentiated as Wi-Fi only or 3G. The Wi-Fi-only version has wireless networking capabilities, while the 3G version adds cellular data access and GPS to the mix. The iPad is about twice as wide and twice as tall as an iPhone and runs the iPhone OS 3.2 from Apple.

iPhone OS SDKs

The iPhone OS currently has a split personality: the most recent production release is version 3.2, but this version runs only on iPad hardware. Version 3.1.3 is the latest release for iPhone hardware and will not run on the iPad. iPhone OS 3.2 introduces the new concept of Universal Applications, which allows a single compiled application to run on all iPhone OS devices. This basically means that a Universal Application is an iPhone application and an iPad application compiled into a single binary. It does not mean, however, that the code is the same for all devices. Views must be created for each device form factor, such as iPhone and iPad, and application logic must take into account the various features that each platform provides if the application is to be fine-tuned for each platform.

Version 4.0 of the iPhone OS is expected to ship during summer 2010. This version is expected to run on iPhones and iPads, which will bring the Universal Application concept into full play. The current beta version of 4.0 is only available for iPhones. Multitasking is likely the most anticipated new feature in the 4.0 release. Current pre-4.0 iPhone development has been limited by the inability to allow code to continue running in the background. New options will soon be available to allow user code to continue operating in the background for a finite period of time, to continue operating as a service for long-term operation, and to access local notification queues to alert end users even when the application is not running.
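To make the finite-length background option more concrete, the sketch below shows how an application delegate might request extra background time using the beginBackgroundTaskWithExpirationHandler: and endBackgroundTask: methods described in the 4.0 beta documentation. This is a minimal illustration rather than code from this project, and the details could change before the final 4.0 release.

#import <UIKit/UIKit.h>

// Sketch only: a finite-length background task under the announced 4.0 APIs.
// Assumed to live in the application delegate; applicationDidEnterBackground:
// is called when the user leaves the application under iPhone OS 4.0.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    __block UIBackgroundTaskIdentifier taskId = UIBackgroundTaskInvalid;

    // Ask the system for extra time to finish work in the background.
    taskId = [application beginBackgroundTaskWithExpirationHandler:^{
        // Called if the allotted time expires before the work finishes.
        [application endBackgroundTask:taskId];
    }];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... finish the remaining work here, for example saving application state ...

        // Tell the system the work is done so the app can be suspended normally.
        [application endBackgroundTask:taskId];
    });
}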
iPhone Developer Program

To begin developing for iPhones and iPads, registration with the iPhone Developer Program is required. This initial registration is free and allows access to the iPhone OS SDK and documentation. The development environment provided in the SDK requires an Intel-based Mac running Mac OS X Snow Leopard version 10.6.2 or later.

The SDK provides a development environment called Xcode for writing code and a tool called Interface Builder for laying out the graphical aspects of a project. An iPad/iPhone simulator is also included, so the free registration is enough to get started developing and testing applications on the simulator.

Paid registration is required to move projects to physical hardware for testing and debugging. Moving an application to the App Store for distribution and/or sale also requires the paid registration. The fee for an individual developer is currently $99. Apple calls the process of setting up a device for development testing "provisioning" and provides an iPhone Provisioning Portal website to facilitate it. The basic process includes having a developer certificate generated for your account and then an application-specific certificate generated for your app. The whole process can seem fairly intimidating on the first attempt, but overall it is a fairly quick process to get everything arranged.

Development Project

To better understand the development process for iPhone OS based devices, a development project was undertaken to build a mapping application for the UCCS campus. An image of a campus map is displayed with sufficient detail to identify buildings and parking areas when viewed on a handheld device. The application scope was laid out to include scrolling the map image, zooming in and out, reorienting the map when the device rotates, and updating the device's location on the map with a red dot whenever location information can be obtained.

Scrolling and zooming an image is quite simple to implement once the intended set of classes is understood. Once the hierarchy of three classes was understood, very little effort was required to get the functionality working. The classes involved will be discussed in the next section.

Reorienting the display as the device is physically rotated takes considerably more understanding but only a few lines of code. Application code can receive an event notification when the device rotates. The code must then respond by rotating the user interface based on its previous orientation and the device's new orientation. Rotating the objects within the view is not overly complex, but doing so changes the field of view on the displayed image.

During the initial attempts at rotating the image, the portion of the image visible before the rotation would usually end up off the screen and a different portion of the image would be displayed after the rotation. To overcome this seemingly random image movement, additional calculations were required during the rotation event. First, the center point of the previously visible area had to be calculated in terms of the overall image. Next, the center point and the new visible area dimensions were used to calculate the upper-left corner coordinates required to maintain the previous center point. After the image is transformed to the new perspective, the center point is corrected by zooming to the appropriate location using the calculated values.
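A sketch of this center-point calculation is shown below. It is an illustrative reconstruction rather than the project's actual source: the class name MapViewController and the scrollView and visibleCenter members are assumptions introduced for the example, and the map image is assumed to live inside a UIScrollView.

#import <UIKit/UIKit.h>

// Hypothetical map view controller; only the rotation handling is shown.
@interface MapViewController : UIViewController {
    UIScrollView *scrollView;   // scroll view that holds the map image view (set up elsewhere)
    CGPoint visibleCenter;      // visible-area center, in unscaled image coordinates
}
@end

@implementation MapViewController

// Before the rotation begins, record the center of the visible area
// in unscaled image coordinates.
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toOrientation
                                duration:(NSTimeInterval)duration
{
    [super willRotateToInterfaceOrientation:toOrientation duration:duration];
    CGFloat zoom = scrollView.zoomScale;
    visibleCenter = CGPointMake(
        (scrollView.contentOffset.x + scrollView.bounds.size.width  / 2.0) / zoom,
        (scrollView.contentOffset.y + scrollView.bounds.size.height / 2.0) / zoom);
}

// After the rotation completes, compute the upper-left offset that keeps
// the recorded point centered in the new bounds.
- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromOrientation
{
    [super didRotateFromInterfaceOrientation:fromOrientation];
    CGFloat zoom = scrollView.zoomScale;
    CGPoint offset = CGPointMake(
        visibleCenter.x * zoom - scrollView.bounds.size.width  / 2.0,
        visibleCenter.y * zoom - scrollView.bounds.size.height / 2.0);

    // Clamp so the view cannot scroll past the edges of the image.
    offset.x = MAX(0.0, MIN(offset.x, scrollView.contentSize.width  - scrollView.bounds.size.width));
    offset.y = MAX(0.0, MIN(offset.y, scrollView.contentSize.height - scrollView.bounds.size.height));
    [scrollView setContentOffset:offset animated:NO];
}

@end

The same values could instead be fed to zoomToRect:animated:; the essential point is that the previously visible center is re-derived in image coordinates and then reapplied after the rotation.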
Finding the longitude and latitude coordinates for the device's location is straightforward to implement. However, converting these values into usable x,y coordinates on the image took me down several incorrect paths before I got it working correctly. The final solution came from calculating the difference between the minimum and maximum longitude, and between the minimum and maximum latitude, within the bounds of the map. These differences were compared to the map size in pixels to determine the ratio of pixels to units of change in longitude and latitude. This ratio, in conjunction with the longitude and latitude of the map corners, allowed the correct placement of the red dot on the map.

The map of the campus that I downloaded from the university website was drawn to scale, which was very useful for marking the current location accurately. In addition, the map was drawn as vectors in a PDF file, so it scaled very well when converted to an image format supported by the classes used in this project.

Lessons Learned

Device Orientation Coding

Managing the reconfiguration of the screen layout based on device orientation takes a fair amount of work. There are at least three options for handling this event: automatic handling, reframing, or swapping views. Automatically handling rotation and resizing sounds like a great idea, but it only works on very simple screens. Interface Builder provides checkbox options on objects such as rotatable and resizable. When these options are checked and the device is rotated, a basic set of transformation logic attempts to update the view. A view with four large buttons in a two-by-two arrangement works great with this automatic logic. There are a few additional options that can be specified to help the automatic logic work better, but complex interfaces require other approaches.

Reframing is a manual and somewhat tedious process, but it can provide usable views without having to depend on the automatic logic or build separate views. The basic concept of reframing is to lay out the interface using Interface Builder and record the X, Y, width, and height values for each object in each orientation. During the rotation event, you reframe each control by setting the appropriate X, Y, width, and height values to move it into its new position.

The multiple-view option involves building and saving separate views in Interface Builder for each orientation. This option is similar to reframing but requires less manual control adjustment from code. A single landscape view can be used for both right-landscape and left-landscape orientations, but the view must be transformed or rotated during the rotation event to display correctly.

Understanding Framework Classes

Implementing certain functionality can be either very easy or very complicated, depending on your knowledge of the framework. My lack of framework knowledge during this project caused me to explore various functionality in less than optimal ways. In particular, scrolling and zooming an image has both a hard way and an easy way.

Loading and displaying an image is not difficult. However, the apparently obvious way to display an image can make the next step of zooming and scrolling appear very difficult. I instantiated a UIImage object, which holds the bits of an image, and then assigned it to a UIImageView. So far, so good: an image now shows up on the display.

After this minor success, a review of the UIImageView methods and properties, as well as those on UIImage, shows no easy way to scroll or zoom. There are gesture events that can be captured and processed, but processing them requires tracking where one or more fingers touch the screen and where they end up before they are lifted. This would require keeping track of multiple coordinates between events and determining how they differ and what the resulting action should be for the image. It can all be accomplished, but not without a fair amount of thinking and coding. After being frustrated by the apparent complexity of implementing this universal iPhone functionality, I began searching for a higher-level solution.

The UIScrollView is the answer to all of these problems, if you know that you need another control to get there. By adding the UIImageView object to a UIScrollView object, scrolling and zooming are implemented and managed with very few lines of code. Properties for maximum and minimum zoom can be specified, and bouncing at the view's boundaries can optionally be configured. There is one aspect of this easier method that still required some new understanding: the view controller class must be made the delegate of the UIScrollView object's events, and those events must be implemented. This is similar to implementing an interface, but it can cause problems if you don't know that a scroll view requires a delegate. The event implementation turns out to be quite simple; one line of code in a properly defined event method gets it all working.
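The sketch below illustrates this easier approach. It continues the hypothetical MapViewController from the earlier rotation example, additionally assuming a UIImageView instance variable named imageView, conformance to the UIScrollViewDelegate protocol, and an illustrative image file name; none of these names are taken from the project source. The delegate method at the end is the one-line implementation referred to above.

#import <UIKit/UIKit.h>

// Continuing the hypothetical MapViewController: assumes the @interface also
// declares UIImageView *imageView and adopts the UIScrollViewDelegate protocol.
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Wrap the map image in an image view and place it inside a scroll view.
    UIImage *mapImage = [UIImage imageNamed:@"campus_map.png"];
    imageView = [[UIImageView alloc] initWithImage:mapImage];

    scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    scrollView.contentSize = mapImage.size;   // scrollable area equals the full image
    scrollView.minimumZoomScale = 0.25;       // zoom limits chosen for illustration
    scrollView.maximumZoomScale = 2.0;
    scrollView.delegate = self;               // required for zooming to work

    [scrollView addSubview:imageView];
    [self.view addSubview:scrollView];
}

// The "one line" delegate method: tells the scroll view which subview to zoom.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)sv
{
    return imageView;
}

Because the controller is assigned as the scroll view's delegate, its @interface must declare conformance to UIScrollViewDelegate; forgetting this, or forgetting to set the delegate at all, is exactly the pitfall described above.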
Future Work

In its current form the project would be of limited use to new UCCS students, but it could be expanded with additional functionality that would make it useful to many students. The map functionality could be enhanced with the ability to turn current-location updates on and off, as well as an option to automatically center the view on the current location.

The map itself could be enhanced by providing complete building names instead of just initials, or building names could be added as floating tags over the buildings with an option to turn them off. A list of buildings sorted by name would also be useful, along with an option to center the map on a selected building.

Other possible enhancements include news and events, a phone directory, and emergency contact information. Shuttle routes and times could also be a useful feature, as could course descriptions.

Having the application pull, and possibly cache, information from the university's main website would help keep the app up to date and prevent double-entry effort when new information is posted.

References

[1] T. Thompson, "The iPhone Isn't Easy," Dr. Dobb's Digest, pp. 5–14, Apr. 2010.
[2] J. Ray and S. Johnson, iPhone Application Development in 24 Hours. Indianapolis, IN: Sams Publishing, 2009, pp. 463–465, 570–581.
[3] Apple Computer Corporation, iPhone Development Library Documents [Online]. Available: ................