Extracting Value From the Massively Connected World of 2015


Published: 01 April 2005   ID: G00125949



Summary

By 2015, wirelessly networked sensors in everything we own will form a new Web. But it will only be of value if the "terabyte torrent" of data it generates can be collected, analyzed and interpreted.


Analysis

Think of a factory with a robotic assembly line building family cars. Imagine how, in the future, many of the parts in each vehicle will be tagged with sensor-equipped processors, and how these processors will measure and regularly report, over wireless networks, essential attributes of their provenance, performance or environment. The trend has already started. For example, some cars come with tires that constantly report their pressure status, and U.K. insurer Norwich Union recently introduced "pay-as-you-drive" insurance using wireless tracking systems to determine the time of day vehicles are driven.

As car factory assembly lines deliver more cars, so the total rate of data being generated will increase. Toyota has estimated that there were 740 million cars in the world in 2001 and that there will be 1 billion by 2010. By 2015, if half the world's billion or more cars contain 100 sensors — each of which sends a 16-byte message once a second — they would generate data at a rate of 6.4 Tbps (to put this in perspective, in 2004 some of the fastest IP networks ran at 40 Gbps).
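That 6.4 Tbps figure is straightforward to verify with a back-of-the-envelope calculation. The short Python sketch below uses only the assumptions stated above (half a billion cars, 100 sensors per car, one 16-byte message per sensor per second); it is illustrative arithmetic, not a forecast model.

    # Back-of-the-envelope check of the 6.4 Tbps figure, using the assumptions above.
    cars = 500_000_000        # half of the roughly 1 billion cars forecast for 2015
    sensors_per_car = 100     # assumed sensors per vehicle
    message_bytes = 16        # one 16-byte message per sensor per second

    bytes_per_second = cars * sensors_per_car * message_bytes
    terabits_per_second = bytes_per_second * 8 / 1e12

    print(f"{terabits_per_second:.1f} Tbps")  # prints 6.4 Tbps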

Now imagine all the world's factories making not just cars, but TVs, power tools, clothes and printers — everything, in fact. In the future, many of those factories will produce objects that generate and broadcast data from the moment of their creation to the time they become landfill. Some of this data will be processed and stored locally, but much of it will be forwarded for purposes like service monitoring, analytics and secure audit trails.

According to Airbus, from 2005, each new A380 jetliner will contain over 10,000 radio frequency identification (RFID) tags. It has been estimated that, if all the items passing through Wal-Mart become RFID tagged, they will generate at least 7TB of data daily. These are just early examples, exploiting mostly passive, low-reporting frequency devices. But by 2015, the amount of data generated by commonplace physical objects — which will be equipped with sensor nodes and network connected every minute of every day — will give new meaning to the expression "information overload."

As IT professionals, we are asked to do a lot more than imagine these scenarios. We will be required to build systems for our companies that connect to this massive network and to extract value from the terabyte torrents of data it generates.

In "The Real-World Web Will Connect Objects and Places," Gartner Fellow Jackie Fenn introduces some of the constituent elements that form this emerging world of connected objects and people. Advances in microelectromechanical systems (MEMS) already enable us to sense and measure acceleration, acidity, temperature, pressure, stress and many other factors. These devices will become smaller, cheaper and less power hungry. Progress in multimethod location finding will allow devices to report their whereabouts. The evolution in standards for identity tagging will allow every object to be given a globally unique reference. Low-power, self-organizing "mesh networks" will be developed to convey data from these sensors to the edges of local area networks, and onwards to the global Internet.

However, managing this new Web and generating economic value from it will present a serious challenge for many disciplines within IT. To provide the massive computing capabilities required, new architectures will be needed to organize application software elements and apply processing infrastructure reliably and economically. Considerable progress will have to be made in techniques for structuring and searching raw data, and in standards for metadata, so that software can automate the heavy lifting of analysis. Even after this hardware and software pretreatment, presenting the resulting synthesis in a form that exploits the human brain's ability to recognize patterns will require improvements in the way information is viewed. One key to that progress will be better displays.

Applications that manage processes and interactions between myriad objects will be complex, but they will also have to be highly adaptable to change, because managerial control will be decentralized. The ownership or stewardship of connected people and objects will change regularly as they switch between modes, move through supply chains and markets, or even pass between individuals and institutions. These trends will shift service-oriented architecture (SOA) for application software from beneficial to essential, requiring functions to become more granular and business process cycle times to shrink. However, SOA will have to move to the next stage in its development to cope with these requirements. In "Software Architectures Will Evolve From SOA and Events to Service Virtualization," Gartner Fellow Daryl Plummer explains how "services virtualization" will evolve.

In "Key Technologies Needed on the Road to Tera-Architectures," Gartner Fellow Martin Reynolds describes the emergence of new computing environments that will consist of very large numbers of inexpensive, standard processing modules. Redundancy will be central to designs in which hardware components, or even whole servers, that fail will be automatically taken out of service and then thrown away. New layers of software will combine and virtualize these components so that they are decoupled not only from applications, but even from the device management levels within the myriad operating system instances they concurrently run. Far from being whiteboard theory, Martin explains that these cutting-edge architectures are already being forged within the search engine industry.

In "Semantic Web Drives Data Management, Automation and Knowledge Discovery," Alexander Linden unravels the complexities of the "Semantic Web." Without metadata, the capacity of machines to assimilate and analyze higher-order information will remain restricted. Over the past two years, larger financial institutions have switched to stating their enterprise storage capacities in Petabytes (thousands of terabytes). It is clear we will need much more automated support to cope with search and analysis of such data mountains. In this area, the nature of progress is different, but basic logical methods developed over 20 years ago are now becoming very valuable. Alex demonstrates that real progress is being made by providing examples of new standards for metadata.

One further question remains — how will we as human beings deal with the information that this real-world Web senses and sends to us? To assimilate this information, we will need new graphical visualizations and must improve the bandwidth of the final few inches of the network journey — the transition to eye and brain.

We can't increase the processing capacity of humans yet, but today's computers and their displays don't even take advantage of our innate signal and image processing capabilities, which are extremely good. Larger screens are one step toward closing that gap.

For 20 years, the price-performance characteristics of computer displays have restricted the majority to viewing a rapidly expanding information universe through an unnaturally narrow aperture — perhaps just 17 inches across. In "Bigger and Better Displays Will Boost Productivity at Last," Mark Raskino explains how rapid progress in screen technology will offer bigger displays that will improve information worker productivity in some unexpected ways.

Advice

The first serious effects of the "real-world Web" will start to be seen within five years. But it will be a decade before mass deployment levels breach key tipping points within social groups and business markets.

Architects, strategic planners, policy makers and economists working to 10-year time frames should develop midrange probability scenarios in which any physical object worth more than $10 and with a working life of more than one day can be economically monitored. Creative scenarios should assume that, with local wireless relays, items can regularly and reliably report identity, location and recent activity history from anywhere in the world within 3 km of wide area network access. More complex sensing, such as chemical concentrations, should be assumed possible, but only economically viable for objects worth more than $100. Assume that application software and its supporting computing infrastructure can reliably track and logically analyze the status of a network that contains 100 million independent objects, each of which updates its event messages at the rate of one per second. Assume that a human operator can immediately interpret the status of all of these objects on a single standard desktop display costing less than $500.
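As a rough sizing of that scenario, the sketch below applies the object count and reporting rate from the preceding paragraph and carries over the 16-byte message size assumed in the earlier car example (that message size is an assumption, not part of the guidance above).

    # Rough sizing of the monitoring scenario described above.
    objects = 100_000_000        # independent connected objects
    messages_per_second = 1      # each object reports once per second
    message_bytes = 16           # assumed payload, carried over from the car example

    bytes_per_second = objects * messages_per_second * message_bytes
    gbps = bytes_per_second * 8 / 1e9
    tb_per_day = bytes_per_second * 86_400 / 1e12

    print(f"{gbps:.1f} Gbps sustained, about {tb_per_day:.0f} TB per day")
    # prints 12.8 Gbps sustained, about 138 TB per day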

Conclusion

Specific economic and social consequences of the massively connected world of 2015 are hard to predict, but we are confident that the impact will be highly disruptive. Our research shows that technology foundation stones for the "real-world Web" are already being laid.

Features

"The Real-World Web Will Connect Objects and Places" — Introduces some of the constituent elements that form this emerging world of connected objects and people, which we call the "real-world Web." By Jackie Fenn and Nick Jones

"Software Architectures Will Evolve From SOA and Events to Service Virtualization" — Explains how "services virtualization" will evolve. By Daryl Plummer

"Key Technologies Needed on the Road to Tera-Architectures" — Describes the emergence of what we call "Tera Architectures." By Martin Reynolds

"Semantic Web Drives Data Management, Automation and Knowledge Discovery" — Unravels the complexities of the "Semantic Web." By Alexander Linden

"Bigger and Better Displays Will Boost Productivity at Last" — Explains how rapid progress in screen technology will offer bigger display spaces that will improve human information work productivity in some unexpected ways. By Mark Raskino

© 2005 Gartner, Inc. and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.
