Apple iPad puts tablets and multitouch at the center of changes to consumer electronics and PCs. Speech, natural language processing, gestures and haptics augment tablet interfaces, leading to new usages and markets. Android and iOS impact the future of ecosystem players, such as Microsoft and HP.
Table of Contents
iPad and Beyond: What the Future of Computing Holds
- Tablet Consumer Electronic Hybrids
- Tablets Merging With PCs
- Mobile Computing Trends Are Changing What Consumers Carry With Them
- Future Interface Innovations Lead to New Usages
New Computing Interfaces Lead to New Markets
Figure 1. Interface Types
The iPad has created a transformational change in how people interact with computers. Multitouch on the iPad and other media tablets has liberated users from the hardware keyboard and pointing device (aka the mouse). As media tablets become more commonplace, users will expect the convenience and simplicity of multitouch user interfaces when they interact with other computing devices. Makers of PCs and consumer electronics are noticing the shift in consumer expectations and are incorporating features popularized by the iPad into the new products they are developing. Multitouch technology has become the de facto interface of high-end smartphones and media tablets, and will extend to additional consumer electronic devices and to PCs.
During the next five to 10 years, media tablets will instigate change in computing form factors. Modular designs will enable tablets to take on new functions, becoming the cross-platform controller and brain for hybrid consumer electronics and computers. Tablets will be substitutes for several of the consumer electronic devices that consumers often carry with them. Thin-and-light mobile PCs with tabletlike features will become mainstream, pushing out some bulkier PC styles that have been the norm.
Providers of consumer electronic devices are experimenting with designs in which tablets are docked in the devices as controllers and sometimes even as the computing brain of the system. In his report, "Tablets and Smartphones Give Rise to New Hybrid Devices," Ken Dulaney gives the following examples of hybrid tablet-consumer electronic devices being developed and tested: phones that use a docked tablet as a screen for videoconferencing, and home media systems that come with a tablet for controlling the TV and accessing the Internet. Integral to these new capabilities are ensemble interactions: user experiences that involve multiple linked devices or that dynamically span two or more personal devices (see ensemble interactions' profile in "Hype Cycle for Consumer Technologies, 2011"). An example of ensemble interactions is seamlessly transitioning from watching a movie on TV to watching it on a tablet in another room or on the go.
Some device hybrids use the tablet as a user interface for controlling the other device and reading messages about its status. Early products and prototypes include:
- Washing machines that enable the homeowner to start the machine or change settings through a tablet from another room in the house
- Point-of-sale systems that include tablets to enable staff to enter customer orders or data from anywhere in the store
- Printers that enable the user to control printer settings and to download documents from the tablet to print
Other hybrid designs leverage the flexibility of tablets to become the brains of consumer electronic devices. One tablet can replace multiple dedicated electronic devices by connecting with different peripherals. Tablets docked in the dashboards of cars can replace dedicated navigation devices, in-car entertainment systems and environmental controls. Wirelessly connect a blood pressure cuff, a bathroom scale and an oximeter to a tablet to create a home health monitor that can plot personal health trends and send the data to a doctor. Mount a tablet into a projector and it becomes digital signage in a retail store or a device for streaming media via the Internet.
The new thin-and-light PC designs coming to market incorporate tablet design features (see "Market Trends: Mobile PC Feature Integration Forecast, 2011"). Intel's Ultrabook concept combines the attributes of a tablet with the performance of a mobile PC. Of these attributes, the most important are a thin profile, light weight, instant-on (only a few seconds needed to get on the Internet after the PC is turned on) and long battery life. These features were chosen because they are what attract consumers to media tablets. Apple offers similar ultraportable PC designs in its MacBook Air, though the relatively high price contributes to it being a niche product.
Ultrabooks also bring the thin-and-light features of tablets to the PC. Ultrabooks with 13.3-inch screens are less than 0.7 inches thick and weigh about 2.3 pounds (see "Computex Taipei 2011 Highlighted the Changing Mobile Computing Landscape"). The Ultrabook reference designs are significantly thinner and lighter than most mobile PC form factors with 13-inch screens. Ultrabook designs will evolve to incorporate more tabletlike features over time. For example, in 2013 lower-power processors may enable 12 hours of use between charges. In addition, some Ultrabooks will be available with touchscreens. The popularity of touchscreens on a lightweight clamshell PC is questionable because tapping the screen may push the screen backward or tip over the PC. However, multitouch can be used on the PC's touchpad instead of directly on the screen. In addition, convertible Ultrabook PCs and hybrid tablet models are likely to be available, in which the screen folds over the keyboard to make a slate form factor.
PC vendors are experimenting with prototypes of tablets that morph into PCs. The idea of a tablet-PC hybrid is not new: convertible tablet computers have been available for 20 years; they have a hinge that enables the keyboard to be tucked under the tablet display. Newer hybrid ultraportable designs include those in which the screen is detachable for tablet usage and docks into the hardware keyboard to create a clamshell. The keyboard may include an additional processor and graphics card for more computing power when docked. Other experimental designs have two hinged screens that can be opened like a book. The marketplace may serve as a proving ground to see which of the variety of hybrid tablet PC designs are adopted by consumers.
Gartner has a Strategic Planning Assumption that, in 2015, the biggest distinction among connected TVs, all-in-one desktops and tablets will be screen size. For desk-based PCs, the all-in-one (AIO) form factors with touchscreens, such as the HP TouchSmart, can be thought of as oversized tablets. Though AIOs are not intended to be carried around, they can be more easily moved from a desk to a counter in a home than a traditional tower PC. AIOs look similar to flat-screen TVs and will be used increasingly as IPTVs or additional screens in the home for entertainment. Among desk-based PCs, the AIO form factor has the greatest potential for market growth through 2014 (see "SWOT: ASUS, PC Business, Worldwide").
Surface computers have multitouch displays that enable input via touch or gesture from multiple people at the same time (see "Cool Vendors in Multitouch User Interface, 2011"). The displays are larger than most AIOs and typically are integrated into tabletops, walls or kiosks. Because of the cost of the displays, surface computers have niche applications, such as restaurant menus, conference rooms, gaming tables, retail and hospitality. As multitouch becomes a ubiquitous interface for computing during the next decade, more uses for multitouch on the large displays of surface computers may emerge in areas such as education, venue sound and lighting, video editing and animation.
Underlying a tablet user experience are hardware and software elements essential for the success of multitouch, such as the OS. The multitouch technology profile in the "Hype Cycle for Human-Computer Interaction, 2011" discusses that while notable operating systems for smartphones and tablets, such as Android, iOS and BlackBerry, have incorporated multitouch, there is no successful multitouch OS for PCs. Apple's Mac OS X Lion, launched in July, may prove to be the first, and Microsoft Windows 8, due to launch in 2012, has the potential to provide significant improvements to multitouch user experiences on PCs compared with Windows 7. Without multitouch OSs on par with iOS and OS X Lion, the leadership position of Microsoft OSs may decline.
Developing a successful multitouch OS for tablets, smartphones and PCs is a task so large it even tripped up HP, the market leader for PCs. HP's effort to launch a tablet running webOS was unsuccessful and possibly contributed to HP's decision to divest its PC business (see "HP Shifts Focus to the Enterprise With Possible PSG Spinoff").
Tablets and smartphones will become more like digital wallets and change what people carry with them every day. What is in your wallet? When consumers go out for a couple of hours, they carry a mobile phone, keys, and sundry plastic cards and paper: cash, a driver's license, credit cards, an ATM card, loyalty cards, gift cards, membership cards, an employer ID card, insurance cards, library cards, photos, receipts and postage stamps. Tablets and smartphones are becoming mobile wallets that let users simplify what they carry. Plastic cards and slips of paper will be phased out of daily use; the printed words, images and electronic information can be accessed with a tap of an icon on a tablet or smartphone. Purchases can be made without cash or cards using technologies such as Near Field Communication (NFC) and Mobile Money Transfer (MMT). NFC allows users to make payments by waving their mobile phone or tablet in front of a compatible reader (see "Hype Cycle for Consumer Services and Mobile Applications, 2011"). MMT enables the transfer of money between accounts and between people; it can be used for retail purchases, by transferring money to a merchant's account, and for bill payment. Our smartphones or tablets will be able to vouch for our identity by analyzing our voiceprints or keystroke patterns (see "Best Practices in Mobile User Authentication and Layered Fraud Prevention"). Car keys will grow less common as doors can be unlocked and cars started using a smartphone or tablet instead of a key.
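At the core of an MMT service is a simple account-to-account transfer. The sketch below illustrates that logic only; the function names and the in-memory "ledger" are hypothetical, and a real service would add authentication, currency handling and an audit trail.

```python
# Minimal sketch of the account-to-account transfer behind a Mobile
# Money Transfer (MMT) service. All names here are illustrative
# assumptions, not a real payment API.

class InsufficientFunds(Exception):
    """Raised when the sender's balance cannot cover the transfer."""
    pass

def transfer(ledger, sender, recipient, amount):
    """Move `amount` from sender's account to recipient's account."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if ledger[sender] < amount:
        raise InsufficientFunds(sender)
    ledger[sender] -= amount
    ledger[recipient] += amount

ledger = {"alice": 100, "merchant": 0}
transfer(ledger, "alice", "merchant", 25)   # a retail purchase
print(ledger)  # {'alice': 75, 'merchant': 25}
```

The same `transfer` call covers both retail purchases (to a merchant's account) and person-to-person bill payment, which is why MMT generalizes across the uses named above.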
In everyday life, consumers carry smartphones or mobile phones plus a larger screen device if they expect to spend significant time viewing content, for example, browsing Internet pages, watching video, reading digital content or playing games. The widespread use of small, connected consumer electronic devices is driving the adoption of touchscreens. Multitouch is becoming the norm. Devices with small screens — less than 5 inches — provide access to communications and to the rich content on the Internet. Small screens are better suited to brief periods of entertainment and quick information searches. Media tablets make browsing the Internet and viewing content throughout the day a more pleasant experience, especially for people who have several minutes or more to spend at a time. The larger screen size of tablets, typically 10 inches, allows for a more immersive viewing experience for media while being highly portable. Gartner survey data confirms that a primary reason consumers bring a tablet when they go out is because the screen size on their mobile phones is too small to give a good viewing experience. The report led by Mika Kitagawa, "Survey Analysis: Consumers Prefer Mobile Devices to Be 'Pocketable'," shows that consumers most frequently take a mobile phone with them, but many also carry a lightweight device, such as a tablet, netbook or notebook PC, for a larger screen.
Interface technologies can be clustered around five basic modalities, as shown in Figure 1: state of mind (of the user), human-computer hybrids, action detection, speech and biosensing. The technologies outlined in this section are covered in the "Hype Cycle for Human-Computer Interaction, 2011." Of the interface modalities, action detection has been the most extensively used to this point because touch (for example, pressing a key, clicking a mouse or touching a screen) is the standard way to interact with computing endpoint devices, such as tablets, PCs and smartphones.
Source: Gartner (September 2011)
Action detection describes interfaces in which physical movement controls the computer. These include interfaces such as touch and multitouch, gestures, behavior analysis, manipulation of objects, and eye motions.
Since the introduction of the iPad in 2010, hundreds of thousands of applications have become available for tablets through Apple's App Store and the Android Market. The deluge of multitouch applications points to the creativity of consumers in seeking out new ways to use their tablets. In his report, "Multitouch Will Be One of the Most Disruptive Technologies of the Decade," Van Baker highlights the importance of the software ecosystem to the success of tablets. The multitouch user interface is integral to tablet applications and makes controlling the device more natural, enhancing the overall user experience.
The immediacy and simplicity of the multitouch user interface is compelling to users, regardless of their technical proficiency. Command icons for multitouch user interfaces sit on the top-level view where they can be seen. The structure of the multitouch user interface is likely much more gratifying than searching for a cryptic command buried in a hierarchical menu four layers down. There is less need to remember where a command is located in a hierarchy. Controlling the tablet with simple finger motions on the screen, such as to tap, sweep or pinch, is a more easily accessible way to interact with the device than a keyboard and mouse, and the gestures quickly become ingrained behaviors.
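The finger motions mentioned above (tap, sweep, pinch) are recognized by comparing touch points over time. The sketch below shows, under assumed coordinates and a hypothetical pixel threshold, how a two-finger gesture can be classified as a pinch or a spread from the change in distance between the fingers.

```python
import math

# Illustrative sketch (not any vendor's actual gesture engine):
# classify a two-finger gesture by comparing the distance between
# the two touch points at the start and at the end of the gesture.

def distance(p, q):
    """Euclidean distance between two (x, y) touch points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_two_finger(start, end, threshold=10.0):
    """start, end: pairs of (x, y) points; threshold is in pixels."""
    delta = distance(*end) - distance(*start)
    if delta > threshold:
        return "spread"   # fingers moved apart: zoom in
    if delta < -threshold:
        return "pinch"    # fingers moved together: zoom out
    return "tap"          # little change in spacing: treat as a tap

print(classify_two_finger([(100, 100), (200, 100)],
                          [(50, 100), (250, 100)]))  # spread
```

Because the classification depends only on relative finger spacing, the same gesture works anywhere on the screen, which is part of why these motions become ingrained so quickly.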
Even with its growing popularity, the technology behind multitouch interfaces continues to improve.
Multitouch displays on media tablets are controlled by sensor/processor pairs that comprise the multitouch controllers. In his report, "Market Trends: Touch Controllers, a Dynamic Space in Semiconductors," Adib Ghubril lays out the characteristics that strongly determine the quality of multitouch controllers. Functionality, maintainability and ruggedness are the most important characteristics. Capacitive technology provides the best quality at present, so most tablets have capacitive touchscreens. The proponents of optical and resistive touchscreens are not standing still, and the one that demonstrates significantly better properties may become the touchscreen technology of choice in the future.
Haptics enables sensory feedback through the fingertips, creating various textures at different points on the touchscreen or, more commonly, through a joystick. Commercial uses of haptics include video games and training simulators (see "Cool Vendors in Emerging Technologies, 2011"). The addition of haptics makes for a richer user experience on touchscreens, enhancing the multitouch experience. Haptics technology is being developed on touchscreens and will enable new capabilities and usages on tablets and other computing devices, such as:
Adding texture to images in advertising, games and photos
Having texture or a tingling sensation as feedback to users that they are touching the right spot on a display when typing or touching an icon
Guiding people through fingertip sensation when finding a pathway on a map or diagram
Senseg is one company developing a haptic interface on tablets (see "Cool Vendors in Imaging and Display Devices, 2011"). Its E-Sense solution uses coulombic forces from an electric field to attract the skin of the finger in ways that give the sensation of smooth cloth or rough rock. The textures provide sensory feedback: they can confirm to users that they have touched the right software button, guide a user's finger around the screen and give different textures to images on the screen. Other haptic solutions have required an appliance, such as a mouse or joystick, as an interface, but Senseg provides direct haptic interaction with the tablet display.
Gestures will be increasingly used to control computing devices. Gartner has a Strategic Planning Assumption that by 2016 half of consumers in mature markets will wave more frequently to their digital devices than to their friends. Gestures are movements of the body in 3D space to control a computing device. Gestures and multitouch are closely related, because multitouch can be thought of as making 2D gestures on the screen. In "Emerging Technology Analysis: Migrating Gesture Recognition From Entertainment to Enterprise," Adib Ghubril describes gestures as the evolution of the touch interface, providing users with a greater palette of commands. Gestures will move beyond their gaming applications popularized by Nintendo's Wii and Microsoft's Kinect. Future uses will arise in education, hospitality, shopping and transportation.
A more complex form of gesture-based input is gestural-based or behavioral-based video recognition, in which gestures and body movement are captured on video and analyzed via software algorithms to determine whether a person needs assistance or may be causing harm. For example, from video images of an elderly person at home alone, a gestural-based computer could determine whether the individual is performing the necessary tasks of daily living, seems to be ill, or has fallen. The computer acts according to the situation based on these visual cues, such as giving audio reminders to the person, turning off appliances or contacting emergency responders. Gestural-based or behavioral-based video recognition can also be used for public surveillance to recognize if a person is acting abnormally in a way that signals an intent to cause harm or break the law. The computer would act on the analysis in ways such as turning on alarms, playing an audio reminder not to smoke, or contacting security or medics.
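The decision step described above, separate from the video analysis itself, amounts to mapping a recognized state to an action. This toy sketch assumes the vision pipeline (not shown) has already labeled the scene; the labels and action strings are illustrative assumptions.

```python
# Toy sketch of the decision step in behavioral video recognition:
# an upstream vision pipeline labels the observed state, and the
# system maps that label to a response, as in the home-monitoring
# example above. All labels and actions are hypothetical.

ACTIONS = {
    "fallen": "contact emergency responders",
    "appliance left on": "turn off appliance",
    "missed daily task": "give audio reminder",
    "normal": "no action",
}

def respond(observed_state):
    """Return the configured action, or escalate unknown states."""
    return ACTIONS.get(observed_state, "flag for human review")

print(respond("fallen"))            # contact emergency responders
print(respond("acting abnormally")) # flag for human review
```

Keeping the mapping explicit and escalating unrecognized states to a human reviewer matters in both the home-care and surveillance uses, where a wrong automatic action carries real cost.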
Tangible user interfaces are real-world objects that, when manipulated, give input commands to the computer. The objects represent certain input assigned to them by the computer, such as letters of the alphabet or the police. For example, a piece representing paramedics placed on an interactive map display could signal a dispatch of paramedics to that particular location. Games can be played with a set of interface cubes, each showing a different letter of the alphabet; the player could see how many words can be made from the given letters in a minute, with the gaming device keeping score.
Eye tracking involves using video to gauge where a person's gaze is on a display. A person can provide input to the computer by staring at an icon on the screen for several seconds. In advertising, a computer could infer a person's interest in a product by how long they look at its ad on the computer screen. In immersive video games, eye tracking can move the scenery as if the player were running in the direction of their gaze. Eye tracking can also be useful for people who have lost the ability to speak or to move their hands to communicate with others.
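The "stare for several seconds" input described above is commonly called dwell selection. The following sketch shows one way it could work, under assumed numbers: a stream of gaze samples, a hypothetical 30 Hz sample rate and a 2-second dwell threshold.

```python
# Sketch of dwell-time selection for an eye-tracking interface:
# an icon is "clicked" once the gaze has rested on it long enough.
# The sample rate and 2-second threshold are assumptions.

def dwell_select(gaze_samples, icon_rect, dwell_s=2.0, hz=30):
    """gaze_samples: list of (x, y); icon_rect: (x0, y0, x1, y1).
    Returns True once the gaze stays inside icon_rect for dwell_s."""
    x0, y0, x1, y1 = icon_rect
    needed = int(dwell_s * hz)   # consecutive on-target samples
    run = 0
    for x, y in gaze_samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            run += 1
            if run >= needed:
                return True      # dwell complete: select the icon
        else:
            run = 0              # gaze left the icon: restart dwell
    return False

samples = [(12, 12)] * 60        # 60 samples at 30 Hz = 2 seconds
print(dwell_select(samples, (0, 0, 50, 50)))  # True
```

Resetting the counter when the gaze wanders is what keeps glances from triggering selections, the eye-tracking equivalent of debouncing a button.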
Speech as an interface to computers has several aspects, namely, speech recognition, control by speech and natural language processing (NLP). NLP turns questions posed in colloquial language, either by text or verbally, into encoded, structured input. The speech interface has the long-term potential to free users from the necessity of using touch and gesture interfaces. Speech-to-text combines speech recognition and control by speech to create text on the computer. Speech-to-text is important for hands-free applications and for uses where a keyboard cannot be used to input text. Examples include the transcription of doctors' comments during hospital rounds into patients' medical records and the ability to send text messages or email while driving. A computing device, such as a tablet, can capture what was said and send it as a message. Speech is emerging as a feature on search engines as providers compete to attract more users. Search engines will include options for spoken queries and use natural language processing to return more relevant answers.
Speech with NLP enables input spoken in conventional language, but the computer can provide answers in a more casual or conversational manner as well. Virtual assistants are computers that provide online customer support. They answer customer questions using NLP by finding answers in a large database on the back end. The virtual assistants can also be given a human appearance via avatars, which will make the interaction with the computer seem more like talking to another person.
Biosensing interfaces use signals given by or made through the human body to control a computer. Examples are bioacoustic sensing, computer-brain interfaces and biometric input. Biosensing interface technologies are in the research stage, with some proof-of-concept prototypes. With bioacoustic sensing, finger taps to the skin are captured by a bioacoustic sensing device. Where on the body a tap occurs determines what input is sent to the computer. Commands such as scrolling by sliding a finger on the forearm or zooming by pinching the thumb and index finger are in development. For computer-brain interfaces, shifts in brainwave patterns signal different commands. The electrical impulses can be detected through electrodes in a helmet or, for greater accuracy, through electrodes implanted in the brain. Biometric sensing comes from sensors attached to or implanted in the body that provide wireless input to a mobile computing device. Software algorithms analyze the sensor data and, if abnormal readings are found, the computer will take action, such as adjusting doses on an insulin pump, or contacting paramedics in situations such as a stroke, epileptic seizure or heart attack.
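The analysis step in biometric sensing boils down to checking readings against a normal range and acting on exceptions. The sketch below illustrates that pattern with a glucose reading; the thresholds and action strings are hypothetical illustrations, not medical guidance.

```python
# Illustrative sketch of the analysis step in biometric sensing:
# a mobile device reads a body-worn sensor and acts when readings
# fall outside an assumed normal range (70-180 mg/dL here).
# Thresholds and actions are hypothetical, not medical guidance.

def check_glucose(mg_dl, low=70, high=180):
    """Map one sensor reading to an action for the host device."""
    if mg_dl < low:
        return "alert user: low glucose"
    if mg_dl > high:
        return "signal insulin pump to adjust dose"
    return "log reading"

for reading in (95, 210, 62):
    print(reading, "->", check_glucose(reading))
```

The same reading-versus-range pattern generalizes to the emergency cases mentioned above, with the out-of-range action escalating to contacting paramedics instead of adjusting a pump.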
Computer interfaces can get input based on the emotional state of the user. Three interface types are affective computing, mood recognition and emotion detection. Affective computing technologies sense the emotional state of the user with cameras, sensors, microphones, interactions and software algorithms. Remote education is a focus of applications for affective computing because teachers cannot easily assess the mood of students, and adapt their teaching style or lesson, when they are not colocated in a classroom. The computer may suggest alternative texts or videos depending on the mood of the students. Mood recognition interface technologies use similar means to detect the mood of a user. Automotive applications are some of the first to consider mood recognition. If a driver seems fatigued or is yawning, the in-car computer may give a warning or turn on loud, fast-paced music to help the driver keep awake. If the driver seems enraged, yelling and gesticulating, the computer may turn on mellow music for relaxation. A third interface technology that senses the user's state of mind is emotion detection, which has near-term applications primarily in customer service and insurance. Changes in voice, such as pitch and frequency, can signal when a caller is angry or lying.
Human-computer hybrids include human augmentation and wearable computers. Human augmentation is a computer interface intended to increase human physical ability or to replace capability that has been lost through injury or illness. An example of human augmentation is power-assisted exoskeletons, which enable people wearing them to carry heavy loads on their backs. The electronics that control the suit determine the motions of the wearer through analysis of data from sensors placed throughout the suit. Human augmentation can help amputees with devices, such as DARPA's robotic arm, which is controlled by neural interfaces implanted in the brain.
Wearable computer interfaces are designed to be worn on the body to enable hands-free or eyes-free tasks; examples include head-mounted displays and computer screens fastened to the wrist. Clothing can include jackets or vests that have pockets or straps to attach a tablet or mobile PC. Some wearable computer interfaces are integrated into fabric. Smart fabric can have controllers or nanoelectronics embedded in the fabric, or conductive thread may be woven into the fabric to create electric circuits (see "Emerging Technology Analysis: Smart Fabrics Create New Opportunities for Interaction").
In his report, "Technology Trends That Matter," Steve Prentice outlines why alternative user interfaces, such as multitouch, are essential for extending the deployment of computing devices into new markets. Smartphones, tablets and tablet hybrids will become the first pathway to the Internet for many. The keyboard on PCs is a major barrier for those who have had no reason or opportunity to become facile with QWERTY. The multitouch interface will make tablets more attractive as a first computer purchase for people who would prefer not to use a keyboard. That includes people:
- Less familiar with technology, often people who are less educated and have lower incomes.
- Preferring to give written input on a touchscreen that can capture their handwriting and may convert it to text; handwriting recognition on touchscreen tablets may prove even more of a driver where the local languages do not use Western characters.
- With disabilities that limit dexterity.
Innovations first developed for people with disabilities may later be applied to products for other consumers. Touch interfaces will enable more people with disabilities to be included in the workforce. IBM's Human Ability and Accessibility Center develops innovations in touch interfaces that, when combined with other assistive technologies, make it possible for a wider range of people to earn an income. A key characteristic of emerging devices is "multimodal interfaces" that seamlessly link multiple options for input, such as voice, touch, pressure and gestures (see "Market Insight: Interview With IBM Lab's Frances West on Improving Accessibility Points to Futures on Tablets"). Accessible interfaces are designed for flexibility and simplicity in use, which allows the user interface to be easily adapted to the needs of the person. Multimodal interfaces enable users to choose the best way to interact with their computing device for specific tasks or situations. Thus, users can focus more on what they need or want from their computer, with less effort spent on the process of getting to results.