Hype Cycle for Embedded Software and Systems, 2013

1 August 2013 ID: G00251344
Analyst(s): Ganesh Ramamoorthy


Embedded software is a vital component of modern electronic systems. We chart the progress of embedded operating systems, tools, software, hardware, programming standards and design techniques. Eventually, the Internet of Things will transform the embedded software and systems industry.



What You Need to Know

The global semiconductor industry has traditionally been driven by system vendors looking to use standardized products to reduce overall system cost and achieve a faster time to market for their products, while using in-house resources to create differentiation. These two factors have been the primary drivers for the semiconductor market's growth for nearly a decade, and, although they remain so, the post-recession market environment presents new challenges for chip vendors. On the surface, these challenges include the declining average selling price of chips and the shortening shelf-life of products — driven by growing consumer demand for ever-cheaper and more-feature-rich products. However, beneath the surface the interplay of three fundamental factors is reshaping the competitive landscape of the semiconductor market:

  • The growing need for more functionality on semiconductor chips
  • The increasing concentration of end-market demand on a few high-volume electronic equipment devices
  • The increasing importance of embedded software in modern system-on-chip (SoC) designs

Of these three, the third arguably has the greatest potential to be disruptive. Despite the growing functional integration and performance improvements gained as a result of advanced chip designs, it is becoming increasingly evident that hardware alone will no longer be sufficient to differentiate end-market electronic devices. Because consumers are attracted more by the UI and the functionality delivered by a device, chip vendors are beginning to realize that adding capabilities through tightly integrated software is crucial to product differentiation. This change is driving chip vendors to build software development capabilities alongside chip design skills.

In the past, traditional electronic designs consisted of discrete, specialized devices that only required a basic application layer interface to be provided by the chip vendor to the equipment manufacturer. The equipment manufacturer would then develop, in-house, the necessary "specialized software" on top of a standard microprocessor platform. Over time, these microprocessor platforms were integrated into specialized application-specific standard products (ASSPs) and application-specific integrated circuits (ASICs), creating an environment where the semiconductor manufacturer had to provide the upper layers of a software stack that could be modified. The subsequent exponential growth of software code needed to run an ever-growing set of features and functions has left most, or all, of the core embedded software work on the shoulders of the chip manufacturer, while the OEMs saw a "brain drain" of software talent. OEMs now provide differentiation based on manufacturing talent and the ability to scale, whereas SoC vendors continue to struggle to provide robust, integrated embedded software designs because their heritage is in hardware engineering rather than software.

While the underlying chip still offers differentiation in terms of power consumption or specific functions, the ability to change the software (the final functionality layer) "on the fly" is becoming crucial. This is because chip vendors need to stretch designs across wider groups of usage scenarios and achieve a faster time to market. Also, they need to be able to respond to changing market conditions more effectively and quickly, while at the same time differentiating their offerings and introducing new features and functions. Chip vendors are, therefore, increasingly focusing on addressing this critical need through effective use of embedded software.

This growing focus is also evident from increased spending by chip vendors on embedded software development, especially in the advanced process nodes. The cost of embedded software development today is as much as the hardware development cost in leading-edge process nodes. Further, with the growing adoption of multicore processors in consumer devices, we expect a boom in the demand for embedded software development tools from both chip vendors and equipment makers. This heralds a big opportunity for embedded software developers and tool providers. However, financially strong chip vendors and equipment makers will also look to acquire embedded software tool providers and internalize the technology in order to gain competitive advantage and differentiate their products.

Beyond semiconductor and electronics technologies, embedded software and embedded systems also affect a wide range of other technologies: from operational technologies employed by a variety of manufacturing industries such as oil and gas, pharmaceuticals, and energy and utilities; to the Internet of Things (IoT) enabled by the growing penetration of connected devices and intelligent systems. This change necessitates embedded software and system designs that are connected and are capable of handling multiple functions; gone are the days of isolated, fixed-function devices.

Therefore, the growing role of embedded software in modern electronic devices will affect the way chip vendors, chip design service providers, device manufacturers, product engineering service providers, software developers and the embedded system developers plan for new product/service development activities. Also, it will considerably reshape the competitive landscape of the semiconductor and electronics industry in the coming years.

The Hype Cycle

In this Hype Cycle for embedded software and systems (see Figure 1), we describe and analyze some of the emerging technologies, techniques, standards and other relevant product developments that deliver value to the semiconductor industry as a whole, and to the embedded software and system development community in particular.

We profile 36 technologies in this Hype Cycle, several of which — such as software-defined radio for mobile devices, embedded hypervisor, the IoT and smart dust — can be classed as transformational. These kinds of technologies have the capability to affect and evolve embedded software and systems development techniques, reduce system costs and lower energy consumption. Their effect on existing technology markets could also be significant, since they have the potential to displace and alter many of today's established programming and product development practices.

All the technologies in this document are important — they represent significant market and investment opportunities. As such, clients are advised to study them carefully and seek more detailed information on technologies of interest. Technologies near the Peak of Inflated Expectations are not necessarily more important than others; they are simply enjoying a greater degree of hype, market expectation and press interest. Historically, the embedded software and systems market has been plagued by technical challenges that have, time and again, been overcome, and this is unlikely to change anytime soon.

This Hype Cycle provides a glimpse into tomorrow's technology solutions and represents Gartner's view of the progress of some of the most interesting and significant embedded software and system technologies. The list of technologies included is not exhaustive — technologies have been selected on the basis of industry interest and analyst preference. Their positions on the Hype Cycle are based on averages in recognition that sub-technologies exist, creating variability in positioning. Most of the technologies on this Hype Cycle should be viewed as underlying enabling technologies that facilitate and support a wide range of embedded systems. The adoption rate of technologies varies across these applications. For more detailed information on individual end applications please refer to the relevant Hype Cycle entries.

Figure 1. Hype Cycle for Embedded Software and Systems, 2013

Source: Gartner (July 2013)

The Priority Matrix

The Priority Matrix maps the time to mainstream adoption of a technology, against its benefit rating (see Figure 2). No technologies are designated as having a low benefit rating in this Hype Cycle. This does not indicate that no such embedded software and system technologies exist. Rather, it shows that only higher-benefit technologies were selected for inclusion in the Hype Cycle. Relatively few technologies are designated as having transformational benefit; this intuitively fits with the reality of technology development. Only one transformational technology — software-defined radio for mobile devices — will reach mainstream adoption in the next two to five years, followed by embedded hypervisors in the next five to 10 years. Most of the transformational technologies will take longer to mature, with several requiring more than 10 years. Smart dust and the IoT are examples of embryonic technologies that have the potential to cause significant disruption to the embedded software and systems industry, because of their radically new approaches and their promise of high utility. Most of the technologies on the Hype Cycle occupy the "middle ground" of high benefit and two to five years to mainstream adoption.

Figure 2. Priority Matrix for Embedded Software and Systems, 2013

Source: Gartner (July 2013)

On the Rise

Smart Dust

Analysis By: Ganesh Ramamoorthy

Definition: Smart dust "motes" are tiny wireless microelectromechanical systems (MEMS), robots or other devices that can detect everything from light, temperature and pressure to vibrations, magnetism and chemical composition. They operate on a wireless computer network and are distributed over an area to perform tasks, usually sensing through RFID. As they do not use large antennae, the range of these systems is limited to a few millimeters.

Position and Adoption Speed Justification: A single smart dust mote typically contains a semiconductor laser diode and a MEMS beam-steering mirror for active optical transmission; a MEMS corner-cube retroreflector for passive optical transmission; an optical receiver with signal-processing and control circuitry; and a power source based on thick-film batteries and solar cells.

Smart dust motes have tiny processors that run programs on a skeleton OS and access equally small banks of RAM and flash memory. They combine sensing, computing, wireless communication capabilities and autonomous power supplies within a volume of a few cubic millimeters, and are generally aimed at monitoring real-world phenomena without disturbing the original process. They are so small and light that they can remain suspended in the environment like ordinary dust particles. Air currents can also move them in the direction of flow and, once they are deployed, it is very hard to detect their presence and even harder to get rid of them.

The key applications for smart dust include:

  • Environmental protection (identification and monitoring of pollution).
  • Habitat monitoring (observing the behavior of animals in their natural environment).
  • Military (monitoring activities in inaccessible areas, accompanying soldiers and alerting them to any poisons or dangerous biological substances in the air).
  • Indoor/outdoor environmental monitoring.
  • Security and tracking.
  • Health and wellbeing monitoring (entering human bodies and checking for physiological problems).
  • Factory and process automation.
  • Seismic and structural monitoring.
  • Traffic monitoring and management.

As a complete sensor/communication system integrated into a cubic millimeter package is still a long time off, we have yet to see commercial applications based on smart dust. But some reasonably small motes are commercially available. One of these, the MICA2DOT, is available from Crossbow Technology. The unit becomes a mote once a small sensor board, coin battery and antenna are added to the product. Other commercially available motes are sold by Dust Networks, which offers motes that are about the size of a matchbox and operate for five years on two AA batteries. HP has made plans with Royal Dutch Shell to install a million matchbook-sized monitors to aid in oil exploration by measuring rock vibrations and movement. The sensors, which have already been developed, will cover an area of six square miles.

At present, much of the activity surrounding smart dust is concentrated in research laboratories, such as the DARPA-funded project at the USC Robotics Research Lab, the University of California, Berkeley, and JLH Labs. The main purpose of this research is to make smart dust motes as small as possible — which involves both evolutionary and revolutionary advances in miniaturization, integration and energy management — and to make them available at as low a price as possible. Given the wide range of applications and their benefits, however, we believe this technology will have a transformative effect on businesses and human lives across the globe.

User Advice: Smart dust motes that are currently available off the shelf can be configured with sensors that measure a variety of properties such as temperature, barometric pressure, humidity, light intensity, acceleration, vibration, magnetism, acoustic levels and location using GPS. The combination of these capabilities in a well-designed sensor network can potentially open up possibilities for delivering numerous services.

Business Impact: The benefits of smart dust are compelling and transformational. Given the embryonic stage of this technology, vendors should build a position for themselves through patent development for commercial applications, direct funding of research projects or equity funding of companies engaged in development. Smart dust will transform the way humans interact with their surroundings, open up new ways for businesses to deliver services and help save costs in the process. This will have wide-ranging implications for business, technological, social, economic and legal practices across the globe.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Crossbow Technology; Dust Networks

Intelligent System

Analysis By: Alfonso Velosa

Definition: An intelligent system framework enables an enterprise to better use the information present in its devices. An intelligent system starts by obtaining data from connected devices and transmitting it via a gateway to the enterprise's systems, where that data can be analyzed and converted into insights and actions by enterprise employees and automated systems.

Position and Adoption Speed Justification: There is a significant opportunity for businesses to achieve greater value from the data that is located in devices that are, or will be, spread throughout the enterprise. Unfortunately, this data has been locked into the devices — mostly due to a lack of connectivity — but also due to a lack of standards, systems and processes to obtain this data systematically, and even ignorance of the value of the information on those devices.

Systems born of operational technology (OT)/IT convergence and, more generally, the Internet of Things continue to be deployed throughout enterprises. Thus, there is an increasing need, possibility and opportunity for intelligent systems to collect, process, analyze and disseminate the data from devices in an intelligent way. To gain the full value of the data in these systems, enterprises will need a system that incorporates:

  • Devices. The device will need to collect data from either sensors or inputs from other systems. The device may also format or process some of the data internally.
  • Data. This data will need to be collected within appropriate enterprise contextual elements such as location, environmental parameters, and inputs from other devices. It must be presented in a format that can be accessed by the enterprise gateway or other enterprise connectivity systems.
  • Connectivity. The data will need to be transmitted from the devices to enterprise systems. This may occur via a gateway or server.
  • Application logic. The functions and applications that the device needs to execute its core tasks. This application logic may reside on the device and/or the gateway/server.
  • Security and authentication. A comprehensive security and authentication process is required to protect the integrity of the devices and the data. The process will need to be able to provide scheduled and ad hoc updates on the security profile.
  • Analytics and presentation layer. The data will need to be analyzed and presented in a format that facilitates the decision-making and action capabilities of enterprise IT and OT employees and automated systems.
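The layered flow described above — device, data with context, connectivity/security at a gateway, then analytics — can be sketched as a minimal pipeline. All class and function names here (`Device`, `Gateway`, `analyze`) are hypothetical illustrations, not any vendor's framework, and the "authentication" is a deliberately naive placeholder.

```python
class Device:
    """Collects a sensor reading and formats it with enterprise context."""
    def __init__(self, device_id, location):
        self.device_id = device_id
        self.location = location  # contextual element captured with each reading

    def read_sensor(self, raw_value):
        # Present the data in a format the gateway can consume.
        return {"device": self.device_id, "location": self.location,
                "value": raw_value}

class Gateway:
    """Connectivity layer: accepts readings from authorized devices only."""
    def __init__(self, authorized):
        self.authorized = authorized  # naive stand-in for real authentication
        self.store = []

    def ingest(self, reading):
        if reading["device"] not in self.authorized:
            return False  # security layer rejects unknown devices
        self.store.append(reading)
        return True

def analyze(store):
    """Analytics layer: aggregate readings per location for presentation."""
    summary = {}
    for r in store:
        summary[r["location"]] = summary.get(r["location"], 0.0) + r["value"]
    return summary

gw = Gateway(authorized={"pump-7"})
gw.ingest(Device("pump-7", "plant-A").read_sensor(3.5))
gw.ingest(Device("rogue-1", "plant-A").read_sensor(9.9))  # rejected
print(analyze(gw.store))  # {'plant-A': 3.5}
```

A real deployment would replace the set-membership check with certificate-based authentication and the in-memory store with enterprise messaging, but the division of responsibilities mirrors the layers listed above.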

User Advice: System developers will want to look at aspects of the business that may benefit from OT/IT integration of the devices through to the analytics layer. While the technical elements may be quite challenging, developers and their management should consider the cultural elements of the enterprise just as thoroughly: for example, how the data from an intelligent system will fit into the enterprise's work processes, and how to incentivize employees to leverage the data to its fullest potential.

Recognize that data format standards vary by industry, often by vendor and by legacy systems, so any intelligent system that pulls in enough data will need to be capable of addressing multiple formats and industry standards. Thus, ensure you understand the value and cost of the proper use of the data in the system, and how it will cover the costs of any necessary consulting work to audit and integrate all of your data sources and types.

Business Impact: The value of an intelligent system will depend on its ability to leverage industry-specific parameters. Intelligent systems have the potential to help the enterprises that implement them outperform their peers in maximizing the use of information, to extend operational elements such as asset management, and to create new value chains that drive new revenue streams and deliver value to customers.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Eurotech; GE; Microsoft

Constrained Application Protocol

Analysis By: Ganesh Ramamoorthy; Alfonso Velosa

Definition: The Constrained Application Protocol (CoAP) is a specialized Web transfer protocol for use with constrained networks and nodes for machine-to-machine (M2M) applications. Currently, CoAP is an Internet Engineering Task Force (IETF) draft.

Position and Adoption Speed Justification: Most of the new devices that are likely to connect to the Internet of Things (IoT) are embedded and connect wirelessly. While Internet Protocol version 6 (IPv6) helps connect any device to the Internet, other technologies help manage, communicate and visualize the information provided by these devices. Because current Internet technologies were developed with "high-power" devices in mind, they are not suited to the IoT environment of low-power embedded devices and networks. Therefore, specialized protocols that take into account the energy, memory and processing constraints of these devices are required. The IETF has released a RESTful application layer protocol for communications within embedded wireless networks, referred to as CoAP.

CoAP uses an interaction model similar to the client/server model of HTTP, but M2M interactions typically result in a CoAP implementation acting in both server and client roles (referred to as endpoints). The current specification for CoAP, however, faces challenges ranging from a lack of universal semantics to some security limitations. Also, little research has been done on testing CoAP against different M2M applications. CoAP is currently attracting attention from the embedded research community, and a few companies have released CoAP-based products, indicating that the RESTful trend for low-power embedded networks will keep gaining interest. This area has seen a big research push since 2011, indicating growing community interest in RESTful interactions within low-power wireless embedded networks. Furthermore, the open source community has been providing momentum for CoAP, with implementations in C# (ported to TinyOS), Java SE/Android and Python, for example.
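Part of what makes CoAP suitable for constrained nodes is its compact binary encoding: every message begins with a fixed 4-byte header (version, message type, token length, request/response code, message ID), versus the verbose text headers of HTTP. The sketch below, assuming the header layout from the IETF draft, packs that header; the function name `encode_header` is illustrative, and options/payload encoding are omitted for brevity.

```python
import struct

COAP_VERSION = 1
TYPE_CON = 0      # confirmable message (requires acknowledgment)
CODE_GET = 0x01   # request code 0.01 = GET

def encode_header(msg_type, token, code, message_id):
    # First byte: 2-bit version | 2-bit type | 4-bit token length.
    first = (COAP_VERSION << 6) | (msg_type << 4) | len(token)
    # Code is one byte; message ID is two bytes in network byte order.
    return struct.pack("!BBH", first, code, message_id) + token

hdr = encode_header(TYPE_CON, b"", CODE_GET, 0x1234)
print(hdr.hex())  # 40011234 -> Ver=1, CON, TKL=0, GET, MID=0x1234
```

The entire mandatory overhead is 4 bytes, which is why CoAP fits comfortably inside a single low-power radio frame where an HTTP request would not.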

User Advice: To integrate CoAP into Web applications (such as Twitter) or Web browsers (such as Firefox), developers should address security limitations, such as achieving DTLS/TLS translation when CoAP/HTTP mapping is used as a proxy, and assess the suitability of CoAP for monitoring applications. Standard programming language implementations will lower development cost and effort.

Business Impact: Vendors focused on developing applications for low-power networks should adopt CoAP, as it helps reduce development cost by maintaining reliability while allowing easy integration with existing Internet technologies.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Sensinode; Watteco

Tangible User Interfaces

Analysis By: Adib Carl Ghubril

Definition: In a tangible user interface (TUI), the user controls digital information and processes by manipulating physical, real-world objects (rather than digital, on-screen representations) that are meaningful in the context of the interaction.

Position and Adoption Speed Justification: Of late, TUIs have mainly made an impact in music-creation applications. Reactable Systems allows users to superimpose musical beats and tones by varying the orientation and relative positioning of physical cubes on a surface. The cubes vary in size and are marked with unique symbols.

Although TUIs will likely take most of the next decade to develop commercially because of the need to fundamentally rethink the user interface experience and application development toolset, we believe they provide the kind of tactile feedback that enriches the user experience; therefore, we expect this technology to continue to evolve, particularly in applications where the user is crafting something.

User Advice: TUIs can be a powerful educational tool for those experiencing the world for the first time. By handling a physical model of an animal, plant or object, infants can trigger narrative, musical and visual feedback about the model, which augments their learning in a more meaningful way.

Similarly, organizations with customer-facing, branded kiosk-style applications (such as retail and hospitality organizations) should evaluate opportunities for a highly engaging (and relatively high-cost) customer experience using TUIs. Others should wait for new classes of devices and peripherals to integrate the capability seamlessly.

Business Impact: TUIs will provide a natural way for people to bridge the physical and digital worlds. Applications focused primarily on customer-facing experiences, collaborative planning, and the design of real-world objects and places will benefit the most.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Microsoft; Reactable Systems; Samsung; Sifteo

Embedded Hypervisor

Analysis By: Satish R.M.

Definition: An embedded hypervisor provides software virtualization that allows multiple OSs or execution environments to run simultaneously on an embedded processor. In a multicore environment, a hypervisor can also distribute the OSs and applications across multiple cores. An embedded hypervisor differs from a conventional hypervisor in that it must meet the real-time and security requirements of embedded systems.

Position and Adoption Speed Justification: Hypervisor solutions debuted at large scale in the embedded space with mobile devices, enabling the reuse of legacy code on new hardware and achieving a faster time to market. Integrating baseband and application functions on a single chip is another use case that has reduced costs: baseband functions normally execute on a proprietary real-time operating system, while application functions execute on an open or commercial OS. The benefits of using a hypervisor as a virtualization platform include multicore enablement, functional integration, platform abstraction, legacy software support, reliability and security. The adoption of hypervisors to achieve standardization, manageability and security of the application environment will be determined by all, or a combination, of the following factors:

  • With dual-identity smartphones and tablets around the corner — which will be the main focus of bring your own device (BYOD) strategies — hypervisor solutions will become extremely critical. Type 1 hypervisor solutions, in which multiple OS environments are well-isolated because they all run directly on top of the hypervisor, are ideal for separating the work and personal environments on a mobile device; in Type 2 solutions, by contrast, the hypervisor is hosted within another OS. The constraint in using a Type 1 solution for a BYOD scheme is that it requires the device manufacturer to pre-load the hypervisor, with consideration for operator variants in different geographies. Moreover, OS vendors must also accommodate and support Type 1 solutions. Due to these obstacles, Type 2 solutions are being evaluated as alternatives, using application wrapping and containerization implementations. However, they lack the security that Type 1 solutions offer.
  • Although embedded virtualization has immense potential and is popular in the consumer industry, its usage is limited or nonexistent in other industries, such as automotive. In-vehicle infotainment systems will be the first to adopt hypervisor solutions to run both Linux (to meet real-time requirements) and Android (to meet demanding UI requirements). If consumer devices quickly adopt Type 1 solutions, they stand to gain by offering more user environments that implement BYOD strategies for enterprise applications on the one hand and UI (human-machine interface) implementations that meet the stringent driver distraction guidelines on the other. Security concerns with mobile devices (running open OSs) being part of connected vehicle or connected infrastructure can also be adequately addressed using trusted execution environment (TEE) with secure access control to hardware resources.
  • With in-vehicle Ethernet poised to connect the camera-based advanced driver assist systems (ADASs) and eventually become the backbone of vehicle control networks, embedded virtualization could create a paradigm shift. This shift will be amplified by the proliferation of multicore architectures and implementation of AUTomotive Open System ARchitecture (AUTOSAR) across the board. AUTOSAR already defines a standardized methodology termed virtual functional bus (VFB) to achieve virtual integration of software components. This enables software (applications) deployment that is independent of the underlying hardware at run time, using the AUTOSAR runtime environment (RTE). But RTE relies heavily on standardized interfaces and corresponding configurations at compile time, which introduces significant bottlenecks when meeting the tough real-time requirements. Embedded virtualization with a Type 1 hypervisor adopted across the board will complement AUTOSAR implementations and make them more efficient. Moreover, such a strategy has implications for ISO 26262 compliance because it provides redundant execution paths across multiple ECUs for safety-critical functions, and takeover of master control for certain safety-critical functions, under failure conditions for fail-safe operation.

User Advice: Choosing microkernel hypervisors (paravirtualization) over monolithic hypervisors (pure virtualization) becomes important to meet both real-time and security requirements. A microkernel hypervisor ensures a smaller trusted computing base (TCB), which reduces the attack surface but increases development time (compared with pure virtualization). To overcome this barrier, it is advisable for vendors to use pre-virtualization tools that modify sensitive calls (traps that change the processor state), converting them to hypervisor calls in the executable image file at compile time.

Semiconductor chip suppliers already provide hardware architectures (TrustZone by ARM and Trust Architecture by Freescale) to facilitate several security scenarios for embedded systems across applications. Without implementing security-aware applications and kernels, TEE is incomplete. OS vendors need to take advantage of the available hardware features when implementing secure application environments, isolating other application environments in accordance with GlobalPlatform and/or Trusted Computing Group standards.

Business Impact: Consumer devices will play a key role in enabling virtualization in industries where they will be used as hybrid devices (including automotive) to leverage their computing power. The proliferation of consumer devices in daily life, including the connected vehicle and connected infrastructure (spanning several industries), makes it crucial for the consumer industry to lead the way in embedded virtualization and to ensure that the major concern of embedded system security is adequately addressed, by guaranteeing the security of the system and user data. Once hypervisor solutions designed with TEE are proven in consumer devices, it will be easier for other industries to follow suit, as the solutions are easily scalable. Focusing on the consumer market initially and allowing these solutions to evolve and mature is therefore critical.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: AMD; Apple; ARM; Bosch; Broadcom; Citrix; Denso; Discretix; Elliptic Technologies; ENEA; Freescale; Gemalto; General Dynamics; Giesecke & Devrient; GM; Google; Green Hills Software; HTC; Hyundai; IBM; Inside Secure; Intel; LG Electronics; Microsoft; Nvidia; OpenSynergy; Orange; Qualcomm; Red Bend Software; Samsung; Sysgo; Texas Instruments; Toshiba; Trustonic; VMware; Volkswagen; Wind River; XtratuM

Recommended Reading: "Tablets Are Transforming the Future of In-Vehicle Infotainment"

"An Update on Mobile Virtualization and Trusted Environments"

"Smartphone Virtualization: Making Mobile Applications More Trustable"

Message Queue Telemetry Transport

Analysis By: Alfonso Velosa; Massimo Pezzini

Definition: Message queue telemetry transport (MQTT) is an open message protocol for machine-to-machine (M2M) communications for use with constrained networks. It enables the transfer of messages with telemetry-style data to a server from devices such as sensors, phones or computers on the basis of a publish-and-subscribe messaging paradigm.

Position and Adoption Speed Justification: MQTT is the open and royalty-free specification of a TCP/Internet Protocol (IP)-based, lightweight, publish-and-subscribe protocol targeting M2M integration over low-bandwidth, high-latency, unreliable networks. MQTT is a technology that processes large volumes of small information packets at a high rate. It was developed by IBM and Eurotech. MQTT was submitted to the Organization for the Advancement of Structured Information Standards for standardization in February 2013.

MQTT — currently in Version 3.1 (see http://mqtt.org/wiki/doku.php/mqtt_protocol#clarifications and http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html) — defines two roles: the client and the broker. The client is a device with a small footprint and a low-energy-consumption profile, or a software application, that subscribes and/or publishes to certain "topics" (types of messages). The broker sits between clients and dispatches to subscribers the messages it receives from publishers. MQTT use is not constrained to M2M scenarios, as the protocol can also be used to integrate phones and other portable devices.

MQTT identifies three levels of quality of service, defining how reliably the message exchange between a client and a broker must take place:

  • "At most once" — Messages can be lost or duplicated.
  • "At least once" — Messages are assured to arrive, but duplicates are possible.
  • "Exactly once" — Messages are delivered once and only once.

Messages can be "retained" — that is, stored in the broker — so that subscribers can receive messages sent while they were not operating.
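This compactness is easy to see on the wire. As an illustration only — a sketch of the PUBLISH packet layout described in the Version 3.1 specification, not code from any client library (production code should use an MQTT client library) — the following Python builds a PUBLISH packet by hand:

```python
# Illustration only: hand-encoding an MQTT 3.1 PUBLISH packet to show
# how compact the wire format is.

def encode_remaining_length(n: int) -> bytes:
    """MQTT's variable-length 'remaining length': 7 bits per byte,
    with the high bit set while more bytes follow."""
    out = bytearray()
    while True:
        n, digit = divmod(n, 128)
        out.append(digit | (0x80 if n else 0))
        if n == 0:
            return bytes(out)

def publish_packet(topic: str, payload: bytes, qos: int = 0,
                   retain: bool = False, packet_id: int = 1) -> bytes:
    """Fixed header (type 3 plus DUP/QoS/RETAIN flags), then the topic,
    an optional packet identifier (QoS > 0 only), then the payload."""
    t = topic.encode("utf-8")
    var_header = len(t).to_bytes(2, "big") + t
    if qos > 0:
        var_header += packet_id.to_bytes(2, "big")
    fixed = bytes([0x30 | (qos << 1) | int(retain)])
    body = var_header + payload
    return fixed + encode_remaining_length(len(body)) + body

# A QoS 1 publish of a 2-byte payload to topic "a/b" takes 11 bytes:
# 0x32 = PUBLISH with QoS 1, 0x09 = nine remaining bytes.
pkt = publish_packet("a/b", b"hi", qos=1, packet_id=10)
assert pkt == bytes([0x32, 0x09, 0x00, 0x03]) + b"a/b" + bytes([0x00, 0x0A]) + b"hi"
```

The entire QoS 1 publish above fits in 11 bytes, which is what makes the protocol practical over low-bandwidth, high-latency networks.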

Numerous MQTT implementations are available in open-source projects such as Apache ActiveMQ, Mosquitto, MQTT.js and RabbitMQ. Vendors providing closed-source MQTT implementations include:

  • IBM, which released the IBM MessageSight appliance and extensions to the popular WebSphere MQ message-oriented middleware.
  • Software AG, which supports MQTT in its webMethods Nirvana Messaging.
  • Eurotech, with its Everyware Cloud integration platform as a service.
  • LogMeIn, with the Xively service (in beta).

Although the specification has been available for more than 10 years, the industry has turned significant attention to MQTT only recently, as IBM promotes it in the context of its mobile computing and Smarter Planet initiatives, and as requirements grow for open, standards-based M2M integration technologies. However, adoption is still relatively low because of limited skills availability, the relative immaturity of some MQTT-based cloud services and the still-limited support from device manufacturers and application software vendors.

User Advice: The ability of developers to leverage a connectivity protocol designed for constrained environments will appeal to organizations that want to minimize IT costs for their mobile, embedded systems and operational technology (OT) initiatives.

The open-standard nature of MQTT and the availability of open-source implementations and cloud-service-based MQTT offerings provide user organizations with a range of deployment options. Therefore, user organizations wishing to implement multivendor M2M scenarios while minimizing vendor lock-in risk should consider MQTT as the underlying connectivity layer. One caveat: until there is an official standard for MQTT, commercial implementations differ enough that some level of vendor lock-in remains.

Moreover, MQTT use is not constrained to M2M use cases; it can also be used to integrate smartphones, tablets and other portable devices — for example, Facebook Messenger uses MQTT as its underlying messaging protocol. Therefore, users looking for a "universal" messaging layer to integrate M2M, mobile and enterprise applications should evaluate the suitability of MQTT.

Finally, because MQTT itself transmits in clear text, for secure or mission-critical applications ensure you understand the level of security add-ons you will need to put onto your memory-constrained Internet of Things (IoT) or M2M hardware.

Business Impact: MQTT is a proposed connectivity protocol for constrained environments. This standard for messaging will help businesses achieve faster time to market for their products and services by combining M2M, mobile and enterprise applications, especially as they leverage cloud solutions. They will also be able to maintain a reasonable degree of vendor independence and flexibility in their deployment options as a result.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Eurotech; IBM; LogMeIn; ThingWorx


LiteOS

Analysis By: Ganesh Ramamoorthy

Definition: LiteOS is a multithreaded, Unix-like operating system (OS) designed for wireless sensor networks (WSNs). It provides a familiar programming environment based on Unix-like abstractions, threads and the C programming language. It follows a hybrid programming model that allows both event-driven and thread-driven programming. LiteOS is open source, written in C, and runs on the Atmel-based MicaZ and Iris sensor networking platforms.
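
The hybrid model can be illustrated with a small sketch — plain Python, not LiteOS code — in which event-driven handlers (callbacks) and a thread-style worker that blocks between samples coexist:

```python
import queue
import threading

# Not LiteOS code: a plain-Python sketch of a hybrid programming model,
# mixing event-driven callbacks with a thread that blocks for data.

handlers = {}
def on(event, fn):
    """Event-driven style: register a callback for an event type."""
    handlers.setdefault(event, []).append(fn)

readings = queue.Queue()
results = []
def worker():
    """Thread-driven style: block on a queue, like a thread blocking
    until a sensor reading arrives."""
    while True:
        r = readings.get()
        if r is None:          # sentinel: shut down
            return
        results.append(r * 2)  # stand-in for real processing

on("sample", readings.put)     # bridge: the event feeds the thread

t = threading.Thread(target=worker)
t.start()
for event, value in [("sample", 1), ("sample", 2)]:
    for fn in handlers[event]:
        fn(value)
readings.put(None)
t.join()
assert results == [2, 4]
```

The callback side reacts immediately to events, while the worker reads in sequential, blocking style — the combination the hybrid model aims to offer.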

Position and Adoption Speed Justification: WSNs are hard-to-manage, memory-constrained systems. Multithreading, resource management and code debugging are some of the key challenges that reach new levels of complexity in WSNs. These complications become even more pronounced when WSNs must be handled in an obscure or unfamiliar programming environment — which is commonly the case. LiteOS is an open-source, interactive OS, designed specifically for WSNs, that helps programmers operate one or more WSNs in a Unix-like manner: transferring data, installing programs, retrieving results or configuring sensors. A key feature of LiteOS is LiteFS, a hierarchical distributed file system that allows the wireless mounting and management of motes, similar to the mechanisms used for connecting external devices in common Unix implementations. Another feature is LiteShell, which (as its name suggests) is a shell environment similar to those in Unix-based systems, allowing users to manage the WSN as if it were a regular Unix file system. Finally, LiteOS provides C-like source code with GDB (the GNU debugger) support, allowing a more seamless transition for experienced programmers.

The existing architectures of WSNs have certain inherent limitations, such as network-dependent application development or application-dependent network design and deployment, costly optimization or suboptimal efficiency, limited reusability, low return on investment and poor scalability. The aim of LiteOS is to reduce these limitations for programmers.

LiteOS differs both from current sensor network operating systems, such as TinyOS, and from more conventional embedded operating systems, such as VxWorks, the embedded configurable operating system (eCos), embedded Linux and Windows Embedded Compact (Windows CE). LiteOS is easy to use and provides a more familiar environment to the user. Features such as the shell and the hierarchical file system are either unavailable or only partially supported in existing sensor network operating systems. Compared with conventional embedded operating systems, LiteOS has a much smaller code footprint, running on platforms such as MicaZ, with an 8MHz CPU, 128 kilobytes of program flash and four kilobytes of RAM.

Given the growing hype around the Internet of Things and the increasing proliferation of sensors in applications such as building monitoring, environment control, wildlife habitat monitoring, forest fire detection, industrial automation, military, security and healthcare, the need for a reliable, easy-to-use OS with which to develop and manage these applications cannot be overemphasized. We therefore believe that operating systems designed with the specific requirements of WSNs in mind — and LiteOS specifically — will grow in popularity over the coming years.

User Advice: LiteOS supports C programming and provides Unix-like abstraction for WSNs. Therefore, it helps improve compatibility with other development platforms and simplifies sensor network programming. This can help reduce the application development time and cost for programming teams engaged in developing applications for WSNs.

Business Impact: LiteOS is a free OS for WSNs. It therefore opens the door for self-employed engineers and amateurs to develop applications for WSNs — in direct competition with established organizations. Semiconductor vendors, software development firms and system manufacturers that have a stake in WSNs or in the Internet of Things should therefore develop strategies to embrace LiteOS and its developer community, and use them to their advantage.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging


OneM2M

Analysis By: Ganesh Ramamoorthy; Alfonso Velosa

Definition: OneM2M is an initiative aimed at developing a common, global machine-to-machine (M2M) specification, initially focusing on the service layer. The goal of the initiative is to define a common service layer architecture by identifying the gaps in existing standards and creating specifications to fill those gaps.

Position and Adoption Speed Justification: M2M services often rely upon telecommunications providers for connectivity between the various devices and the M2M application servers. Telecommunications companies are optimizing networks to meet industry needs for M2M communications more effectively, and are in the process of developing standards for optimizing them. The growing adoption of M2M solutions and applications in a wide range of industries necessitates a common, widely available and cost-effective service layer that is readily embedded in M2M application servers — as well as in various pieces of hardware and software — in order to reduce time-to-market, development costs and operational expenses.

We therefore believe that the oneM2M initiative, which provides this common service layer standard, will gain traction as more organizations participate in the standards development process, as well as in specific aspects of M2M application development. The oneM2M initiative counts not just technology and telecommunications companies among its members, but also a broad set of standards development organizations, such as the Alliance for Telecommunications Industry Solutions (ATIS) and the China Communications Standards Association (CCSA), all focusing on the development of one globally agreed M2M specification standard.

User Advice: Application developers should adopt oneM2M to simplify application development, enabling utilization of applications across different service platforms, and intra- and inter-industry service integration.

Business Impact: OneM2M will help vendors achieve mass-market economies of scale through faster time to market for their products/services at lower capital and operating costs.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Alcatel-Lucent; Jasper Wireless; NEC; Nokia Siemens Networks


Biochips

Analysis By: Jim Tully

Definition: Biochips relate to a number of technologies that involve the merging of semiconductor and biological sciences. The most common form is based on an array of molecular sensors arranged on a small surface — typically referred to as "lab-on-chip." The underlying mechanism utilizes microfluidic micro-electromechanical systems (MEMS) technology. These devices are used to analyze biological elements such as DNA, ribonucleic acid (RNA) and proteins, in addition to certain chemicals.

Position and Adoption Speed Justification: Operation of lab-on-chip devices normally involves loading a reference quantity of the material to be tested onto the array before shipping to the place of use. The sample to be tested is then placed onto the array at the point of use. The chip detects if the samples match and sends an appropriate signal.

There are a number of uses for lab-on-chip devices emerging:

  • Medical applications for clinical purposes. An area of focus for specific devices is a biochip to detect flu viruses. H5N1 (bird flu) and H1N1 (swine flu) versions have been produced. Urinalysis is another application for detection of urinary tract infection and kidney function. One of the benefits of this technology is faster analysis than traditional techniques because multiple tests can be carried out in parallel.
  • Detection of food pathogens. This involves the analysis of bacterial contaminants in food and water. STMicroelectronics and Veredus Laboratories have developed a device that can detect E. coli, salmonella, listeria and other pathogens. The device can detect more than 10 such pathogens simultaneously.
  • Chemical and biohazard analysis. Further extensions of the technology are aimed at chemical analysis, particularly for detecting explosives and biohazards.

Mobile device manufacturers have experimented with adding biochips to the cases of mobile phones. This could facilitate health screening services offered by mobile operators or their partners. Biometric sensing, including DNA fingerprinting development, is also receiving attention.

The use of biochips for clinical applications is at the stage where penetration is growing steadily in clinical laboratories. The need to demonstrate consistent accuracy outside of R&D laboratories is a challenge for biochip vendors. In some markets, U.S. Food and Drug Administration approval is needed and this delays time to market considerably for this use of the technology.

Few biochip applications have been taken to a level where they can be administered by nonspecialists. For significant market growth to occur, biochip applications will need to move from specialist laboratories into doctors' surgeries and, later, into the consumer market. It will take at least five years for biochips to enter doctors' surgeries. Consumer biochips for self-diagnosis are probably 5 to 10 years away. The potential demand could be huge — provided the costs can be made sufficiently low.

Another form of biochip is less well-developed and involves the growing of biological material on silicon for implanting into the human body. For example, the growth of neurons on silicon is a promising area for retina and hearing implants. The generic term "biochip" refers to all these variations and usage of the technology.

The amount of reported progress in biochips has stalled in the past year or two, due in part to the depressed macroeconomic conditions. We have, therefore, held its position constant on the 2013 Hype Cycle.

User Advice:

  • Companies and government organizations involved in analyzing complex organic molecules or biological materials should examine the potential benefits of biochips.
  • These devices are likely to be viable sooner than you realize and specific areas of relevance include: medical diagnosis, pollution monitoring, biological research, food safety, airport security, and military uses (biological warfare).
  • Biochips represent an emerging market for vendors that have MEMS/microfluidic capabilities.
  • Capturing the growth opportunity for medical applications will, in practice, require an understanding of clinical processes to move this technology through clinical trials.

Business Impact:

  • There could be a significant impact on healthcare, if the technology fulfills its promise of faster and more accurate diagnosis, particularly during epidemics.
  • Other businesses will also be affected, most notably mobile device manufacturers and mobile operators, physical security screening organizations and semiconductor vendors.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Affymetrix; Agilent Technologies; Imec; Owlstone; STMicroelectronics

Recommended Reading: "Silicon Technology and Biotechnology Paths Converge"

Parallelization and Optimization Tools

Analysis By: Ganesh Ramamoorthy

Definition: Parallelization and optimization tools are software tools that help multicore programmers gain insight into a multicore program's execution, internal dependencies and memory behavior, with the aim of identifying compute and code-construct bottlenecks that prevent parallelism, calculating the impact of parallelization on application performance, and rebuilding the code that implements the parallel constructs.

Position and Adoption Speed Justification: Parallelization and optimization is a critical step in developing effective software for multicore processors. These programs must be correct, as well as amenable to modern software engineering practices for efficient life cycle management. The main factors determining their success are the parallelization algorithm used, the implementation language and interfaces, the programming environment and tools, and the target multicore platform. While there are many different approaches to parallelization and optimization, the basic approach is to find pieces of the sequential program that can run in parallel and to combine their results efficiently.
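
A minimal Python sketch of two calculations such tools automate — estimating the payoff of parallelizing part of a program (Amdahl's law) and restructuring a sequential reduction into independent chunks whose results are combined (illustrative only; real tools operate on compiled code and dependence graphs):

```python
# Hedged sketch with illustrative names, not any vendor's tool.
from concurrent.futures import ThreadPoolExecutor

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: the upper bound on speedup when only a fraction
    of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def parallel_sum(data, workers: int = 4):
    """Split the input into independent chunks, sum each in a worker
    and combine the partial results. (Threads are used for brevity;
    CPU-bound Python code would need processes to see a real speedup
    because of the global interpreter lock.)"""
    items = list(data)
    size = max(1, len(items) // workers)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

# If 90% of a program parallelizes, 4 cores give at most ~3.08x.
assert abs(amdahl_speedup(0.9, 4) - 3.0769) < 1e-3
assert parallel_sum(range(101)) == 5050
```

The Amdahl estimate is what makes the "impact of parallelization" calculable before any code is rewritten: the sequential remainder, not the core count, quickly dominates.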

Tools for developing multicore programs are based on four different approaches:

  • The first is to extend a compiler.
  • The second is to extend a sequential programming language and allow essential multicore programming schemes to be captured from known environments.
  • The third is to add a multicore programming layer; this is a layer on a sequential core that controls the creation and synchronization of processes and partitioning data.
  • The fourth is to create a new multicore programming language or framework, such as Fortran 90, High Performance Fortran, C*, OpenMP (a directive-based extension of C, C++ and Fortran) or the Message Passing Interface (MPI).

The adoption of parallelization and optimization tools by software programmers depends on the growth of the multicore programming paradigm. While there is growing demand for multicore programming today, the majority of code continues to be written without multiprocessing in mind; multicore programming is generally seen as a hard-to-achieve and time-consuming task, so many programmers avoid it as far as possible. A key reason is the lack of mature tools, which means that optimizing and parallelizing software code to take advantage of multicore architectures and multicore processors remains a key challenge — for example, for programmers developing applications for mobile devices such as smartphones and tablets.

However, given the growing penetration of multicore processors and the demand for embedded multicore applications, software programmers accustomed to sequential programming are likely to adopt the multicore programming paradigm — leading to increased use of parallelization and optimization, virtualization, and debugging tools.

User Advice: Software programmers and vendors should investigate the reduced risk and faster time to market for their multicore-optimized applications that parallelization and optimization tools promise.

Business Impact: Parallelization and optimization tools affect development time and have the potential to significantly minimize the cost of adopting new multicore platforms.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Vector Fabrics

At the Peak


OpenSL ES

Analysis By: Ganesh Ramamoorthy

Definition: Open Sound Library for Embedded Systems (OpenSL ES) is a royalty-free, cross-platform, hardware-accelerated application programming interface (API), based on the C programming language, for 2D and 3D audio. It is designed for mobile application and game developers, providing access to features such as 3D-positional audio and Musical Instrument Digital Interface (MIDI) playback. OpenSL ES is managed by the nonprofit technology consortium Khronos Group.

Position and Adoption Speed Justification: The OpenSL ES API has five major features:

  • Basic audio playback and recording
  • 3D audio effects (including 3D positional audio)
  • Music experience enhancing effects (including bass boost and environmental reverb)
  • Interactive music and ringtones (using SP-MIDI, Mobile DLS, Mobile XMF)
  • Buffer queues

To avoid fragmentation, OpenSL ES is divided into three profiles: phone, music, and game. Each profile is designed for the respective needs of the device with a specific set of audio functionalities. A vendor typically conforms to one or any combination of the profiles. An application usually queries the OpenSL ES implementation to find out which profiles are supported, and is designed to either work with only the common parts of the profiles, or adapt to the available functionality as given by the profiles in the device it is running on. The application developer can also specify both the minimum and the optimal profile requirements.

The design goal of OpenSL ES is to give developers of embedded applications (via an object-oriented design) access to advanced audio features, such as 3D-positional audio and MIDI playback, as well as the ability to port applications easily across different manufacturers and platforms. Although it was developed primarily for application developers in the mobile and gaming industries, the increasing proliferation of multimedia functionality in in-vehicle infotainment (IVI) systems will likely see its adoption in the automotive industry as well.

User Advice: Playing even one simple sound on different platforms requires different code, forcing application developers to port their audio source code across several proprietary APIs. OpenSL ES addresses this platform fragmentation and provides tangible user benefits by reducing development time and effort.

Business Impact: OpenSL ES can be deployed on a wide range of devices catering to different market segments, and is suitable for embedded audio acceleration implementation, in both hardware and software. The specification can be freely used by any company to create a product, but the implementation must be tested for conformance in Khronos's OpenSL ES Adopters' program before it can be made available to users.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Google; Nvidia; SRS Labs; Symbian


DASH7

Analysis By: Sylvain Fabre

Definition: DASH7 is a standard, promoted by the DASH7 Alliance, that leverages unlicensed spectrum in the 433MHz band, supporting the networking of sensors up to 2 km apart and in-building positioning to within one meter. Battery life for modules can be several years, and DASH7 supports 128-bit Advanced Encryption Standard (AES) encryption and data rates of 200 Kbps.

Position and Adoption Speed Justification: Early interoperability tests have occurred between a few vendors, but work remains to diffuse this technology to a broader set of vendors and promote wider adoption. Additionally, although DASH7 offers an attractive value proposition compared with competing standards such as ZigBee, it remains to be seen which standard will emerge as dominant over time.

User Advice: Companies that are active in, or considering engagement with, sensors, machine-to-machine communications and/or the Internet of Things should be aware of this open standard and make preparations to test it.

Business Impact: DASH7 has uses for smart-building scenarios, as well as location-based services, mobile advertising, vehicle and logistics applications (low latency for vehicle connectivity) and defense.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Bosch; SkyBitz; Texas Instruments; WizziLab

Internet of Things

Analysis By: Hung LeHong

Definition: The Internet of Things is the network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment.

Position and Adoption Speed Justification: While the Internet of Things is getting more attention overall, interest in the Internet of Things has grown faster than implementations. A good way to understand adoption and maturity in the Internet of Things is to observe the various types of enterprises and industry applications:

  • On the more advanced side, there are enterprises that are often in asset-intensive industries that have long had "connected" assets (e.g., utilities, industrial). These enterprises are dealing with the convergence of operational technology (OT) with information technology (IT) as they modernize from proprietary and silo-based systems to more integrated and standards-based systems.
  • Another advanced asset-intensive market is building and facilities management, where energy savings and environmental benefits create a good business case for the Internet of Things.
  • In both the private and public sector, we also see an increase in selectively augmenting existing assets with sensors and wireless connections to make these assets remotely manageable. Examples include city infrastructure (e.g., street lights) and healthcare assets. As major existing assets come to the end of their life cycles, we expect to see the increased purchase of new assets that come with Internet of Things capabilities "out of the box."
  • On the consumer side, there is a whole collection of startups that are responding to the maker movement. See crowdfunding sites, like Kickstarter, to get a good sense of what is being pursued. Much of the focus is on the connected home, where convenience and energy savings are the pursued benefits. Unlike the enterprise side, consumer applications are favoring convenience and gadget-appeal over cost savings as the main reasons for adopting the Internet of Things.
  • Most enterprises are currently at the "education" stage. They are looking to see how they might leverage the Internet of Things in their enterprise and with their customers.

On the technology side, there continues to be slow progress toward standardization. Internet of Things wireless protocols continue to vie for dominance, but no clear leader stands out universally. There are some exceptions. Bluetooth LE is getting strong adoption as the wireless protocol to connect things to smartphones, tablets and computers. We expect continued standardization, but also expect a heterogeneous and fragmented environment. As such, platforms and hubs used to connect and manage things using different standards and protocols have gained much popularity on the enterprise and consumer side. These platforms will become very important in complex ecosystems such as cities and building campuses.

User Advice: Enterprises should pursue these activities to increase their capabilities with the Internet of Things:

CIOs and enterprise architects:

  • Work on aligning IT with OT resources, processes and people. Success in enterprise Internet of Things is founded in having these two areas work collaboratively.
  • Ensure that EA teams are ready to incorporate Internet of Things opportunities and entities at all levels.
  • Look for standards in areas such as wireless protocols and data integration to make better investments in hardware, software and middleware for the Internet of Things.

Product managers:

  • Consider having your major products Internet-enabled. Experiment and work out the benefits to you and customers in having your products connected.
  • Start talking with your partners and seek out new partners to help your enterprise pursue Internet of Things opportunities.

Strategic planners and innovation leads:

  • For enterprises with innovation programs, experiment, and look to other industries as sources of innovative uses of the Internet of Things.

Information management:

  • Increase your knowledge and capabilities with big data. The Internet of Things will produce two challenges with information: volume and velocity. Knowing how to handle large volumes and/or real-time data cost-effectively is a requirement for the Internet of Things.

Information security managers:

  • Assign one or more individuals on your security team to fully understand the magnitude of how the Internet of Things will need to be managed and controlled. Have them work with their OT counterparts on security.

Business Impact: The Internet of Things has very broad applications. However, most applications are rooted in four usage scenarios. The Internet of Things will improve enterprise processes, asset utilization, and products and services in one of, or a combination of, the following ways:

  • Manage — Connected things can be monitored and optimized. For example, sensors on an asset can be optimized for maximum performance or increased yield and up time.
  • Charge — Connected things can be monetized on a pay-per-use basis. For example, automobile owners can be charged for insurance based on mileage.
  • Operate — Connected things can be remotely operated, avoiding the need to go on site. For example, field assets such as valves and actuators can be controlled remotely.
  • Extend — Connected things can be extended with digital services such as content, upgrades and new functionality. For example, connected healthcare equipment can receive software upgrades that improve functionality.

These four usage models will provide benefits in the enterprise and consumer markets.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Atos; Axeda; Bosch; Cisco; Eurotech; GE; Honeywell; IBM; Kickstarter; LogMeIn; Microsoft; QNX; Schneider Electric; Siemens

Recommended Reading: "Uncover Value From the Internet of Things With the Four Fundamental Usage Scenarios"

"The Internet of Things Is Moving to the Mainstream"

"The Information of Things: Why Big Data Will Drive the Value in the Internet of Things"

"Agenda Overview for Operational Technology Alignment With IT, 2013"

OpenGL ES 3.0

Analysis By: Michele Reitz

Definition: OpenGL for Embedded Systems (OpenGL ES) 3.0 refers to an industry standard, evolved from OpenGL, that specifies an API for interacting with graphics processing units (GPUs) to achieve hardware-accelerated, high-quality graphical image rendering on embedded systems — including mobile phones, video game consoles, appliances and vehicles. The standard itself (the current version being 3.0, released in August 2012), its evolution and its compliance program are managed by Khronos Group, the nonprofit technology consortium.

Position and Adoption Speed Justification: OpenGL ES was drawn up against the OpenGL standard and is focused primarily on the embedded systems and mobile market. Since its introduction, OpenGL ES versions have been adopted by most major semiconductor manufacturers in the market, including those in the Sample Vendors list and many others. OpenGL ES 3.0 is available and shipping in application processors using GPU cores provided by ARM (Mali-T604), Nvidia, Qualcomm (Adreno 305 and 320) and Vivante. The standard has also been adopted by Intel (HD 2500 and HD 4000), ARM (Mali-T624) and Imagination Technologies (PowerVR Rogue), and these implementations are expected to become available in application processors from Apple, MediaTek, Texas Instruments, Renesas Electronics and other system-on-chip vendors in the next product cycles.

Key benefits for adoption of this standard include:

  • It is an open, vendor-neutral, multiplatform-embedded graphics standard that is royalty-free for anyone or any company wishing to use it.
  • It is designed for small-footprint embedded systems with minimal data storage requirements and low power consumption, because the graphics software pipeline is optimized.
  • Compliance with the standard allows maximum portability of software and hardware for new embedded system products, with minimal to no changes to the interface.
  • It is an evolving standard that allows new hardware and software innovations to be updated or extended quickly.

OpenGL ES 3.0 has numerous advanced graphics capabilities that are desirable for new products and applications, including enhancements to shading, texture and techniques that reduce CPU and GPU overhead while performing common rendering operations.

User Advice: Makers of GPUs and software for embedded systems should track the OpenGL ES 3.0 and upcoming standards closely to ensure the next generation of devices are compliant. GPU companies should optimize new high-end GPU hardware to take advantage of the new capabilities to enable the most advanced graphics operations. Companies designing leading-edge game consoles, mobile phones and tablets will need to adopt this standard in their next-generation devices.

Business Impact: OpenGL ES 3.0 makes it easy and affordable to offer a variety of advanced 3D graphics and games across all major mobile and embedded platforms. Because OpenGL ES 3.0 is based on OpenGL, the standard ensures a migration path to and from the desktop OpenGL, which is the most widely used graphics API in the industry.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: ARM; Imagination Technologies; Intel; Nvidia; Qualcomm; Vivante


Contiki

Analysis By: Ganesh Ramamoorthy

Definition: Contiki is an open-source operating system (OS) for the Internet of Things (IoT), used in a wide variety of systems such as: city sound monitoring, street lights, networked electrical power meters, industrial monitoring, radiation monitoring, construction site monitoring, alarm systems and remote house monitoring. Contiki was created by Adam Dunkels at the Swedish Institute of Computer Science (SICS) in 2003.

Position and Adoption Speed Justification: Contiki is an open-source, highly portable, multitasking OS for memory-efficient networked embedded systems and wireless sensor networks. It is designed for microcontrollers with small amounts of memory: a typical Contiki configuration uses two kilobytes of RAM and 40 kilobytes of ROM. Contiki enables the use of Internet Protocol (IP) communications in low-power sensor networks for both IP version 4 (IPv4) and IP version 6 (IPv6). It contains two communication stacks: Micro IP (uIP) and Rime. uIP is an open-source, Request for Comments (RFC)-compliant TCP/IP stack that can be used with 8-bit and 16-bit microcontrollers, allowing Contiki to communicate over the Internet. Rime is a lightweight communication stack designed for low-power radios, supporting primitives ranging from best-effort local area broadcast to reliable multihop bulk data flooding.

Many key principles from Contiki have been widely adopted in the industry. The uIP-embedded IP stack is used in many systems such as freighter ships, satellites and oil-drilling equipment. Contiki and uIP are also recognized by the popular Network Mapper (Nmap) network-scanning tool. Contiki's protothreads are also used in many different embedded systems, ranging from digital TV decoders to wireless vibration sensors. The emergence of the Internet of Things will see the widespread adoption of Contiki in the coming years. Key candidates for Contiki include systems such as: city street lights, sound monitoring systems, smart meters, industrial process monitors, radiation monitors, construction site monitoring, alarm systems and remote house monitoring.

User Advice: Contiki's ability to run on a wide range of platforms, ranging from embedded microcontrollers to old home computers, makes it an attractive OS for application developers and programmers for embedded systems. Other factors that make it even more attractive for programmers and system developers include the fact that Contiki is written in the C programming language and is freely available as open source under a Berkeley Software Distribution (BSD)-style license. It supports IPv6 and IPv4, along with recent low-power wireless standards such as IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN), Routing Protocol for Low-Power and Lossy Networks (RPL), and Constrained Application Protocol (CoAP).

Business Impact: Contiki allows easier and faster development of applications than traditional embedded operating systems such as VxWorks and embedded Linux, as it is written in the C programming language. With its Cooja simulator, Contiki networks can be emulated before being burned into hardware, and Instant Contiki provides an entire development environment in a single download. Furthermore, as open-source software, Contiki can be used free of charge in both commercial and noncommercial systems, which means minimal application development costs.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Atmel; Cisco; SAP

Low-Cost Development Boards

Analysis By: Ganesh Ramamoorthy; Jim Tully

Definition: Low-cost development boards are typically open-source electronics prototyping platforms, based on flexible, easy-to-use hardware and software, that are priced low enough to be affordable for electronics enthusiasts, hobbyists, amateur designers or anyone else interested in creating new electronics systems.

Position and Adoption Speed Justification: Development boards have been available for many years. A key change is the availability and use of low-cost development boards, primarily driven by the Internet of Things (IoT). These boards are typically low-power, single-board computers and cost between $15 and $200, depending on the functionality and features required. However, they are targeted at different use cases and cannot be compared side by side, as there are large differences in energy requirements, operating systems and feature/functionality support.

Prominent examples of low-cost development boards include Raspberry Pi, Arduino, Gizmo Board, BeagleBoard, PandaBoard, XinoRF, Ciseco, Pinoccio, Cubieboard, mbed, openPicus, Hackberry, Udoo, Blueboard and Origen, to name a few. The majority of these are based on ARM processor cores, although several other processors are also supported. Additionally, a few of them aim to become fully fledged, production-ready boards in the future.

The trend toward low-cost development boards is clearly being driven by the growing proliferation of open-source software, such as operating systems and design tools, in combination with low-cost hardware platforms. These environments encourage users to share their work and support each other through a range of Web-based tools. As a result, a new class of electronics enthusiasts and hobbyists, self-employed amateur designers, students, electronic design companies and others interested in creating new electronic systems is using these development boards for piloting, experimenting and learning purposes.

Moreover, as there are strong links between hardware development and application development for smartphones and tablets, many of the solutions developed using these boards are controlled by smartphone apps. We believe the next "killer app" will likely emerge from the users currently utilizing low-cost development boards to create innovative solutions.

User Advice: Investors should carefully observe the use of these platforms in order to anticipate emerging market opportunities. Low-cost development boards are the most economical way for beginners, amateurs and students to experiment, learning Linux and other embedded operating systems quickly while building something that is commercially viable. These development boards are a good way for design companies to pilot or build simple proofs-of-concept, and innovation teams and labs should have them available in order to test IoT ideas.

Business Impact: For chip vendors, low-cost development boards and the associated developer communities and software offerings are very important in the success of particular chips. Chip vendors must encourage the use of these boards and the users'/developers' community through promotional offers and development support. They should observe the developer community's activities closely for the emergence of innovative new products and applications.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: ARM; Broadcom; Freescale; Google; Marvell Semiconductor; Qualcomm; Samsung; Texas Instruments

Raspberry Pi

Analysis By: Alfonso Velosa

Definition: The Raspberry Pi is a flexible, low-cost computer built onto a credit card-sized printed circuit board assembly. It was primarily created to be used in the teaching of computer science and to encourage students of all ages to build key computing skills.

Position and Adoption Speed Justification: The British Raspberry Pi Foundation was established in 2006 to promote computer education, after a group of colleagues at the University of Cambridge Computer Laboratory became concerned about the lack of basic computing skills demonstrated by students. The group used its core expertise in processors and operating systems to design a simplified, low-cost system. Priced at $35 as of May 2013, the Model B Raspberry Pi's specification includes:

  • Application processor: A Broadcom BCM2835 with a 700 MHz ARMv6 processor core and a VideoCore 4 graphics processor.
  • Memory: 512 MB of SDRAM and a Secure Digital memory card slot for additional memory.
  • Connectivity: Two USB 2.0 ports and a 10/100Mbps Ethernet network connection.
  • Video: The graphics processor is capable of resolutions between 640 x 350 and 1920 x 1200 and outputs video to composite video (RCA), HDMI and DSI connectors.
  • Power rating: 700 mA (3.5 W)
  • Operating Systems: Capable of running any OS based on a Linux kernel (such as Arch, Raspbian and Slackware ARM), RISC OS or Unix.

Despite being sold without a keyboard, a display, a real-time clock or a power supply, the Raspberry Pi has already shown real potential. The foundation has reported that over one million customers have purchased the board since its release in February 2012.

The popularity of this device and the proliferation of its use cases indicate that the Raspberry Pi is well on its way to fulfilling its objective of influencing computer science education. It is also demonstrating the capabilities that low-cost computing can bring to students. A large number of independent online communities have also started to develop projects and build standards around media streaming, gaming, photography and navigation. Key candidates for future use of Raspberry Pi-based systems include the Internet of Things and smart city projects that leverage the device's flexibility and low cost.

User Advice: Application developers and programmers can leverage use cases built upon familiar operating systems such as Unix and Linux. However, since most use is currently driven by the hobbyist communities, there is minimal official support, documentation or upgrades. You should also consider other boards such as Arduino, Netduino or BeagleBoard for development and niche projects.

Business Impact: Raspberry Pi allows for low-cost development of hardware applications and systems built upon standard open source operating systems. With its proliferation and the growth of a large pool of developers, it has the potential to become one of the main computing platforms for general development for the Internet of Things or for niche, low-volume applications.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: ARM; Broadcom; University of Cambridge Computer Laboratory


Open SCADA

Analysis By: Randy Rhodes

Definition: Supervisory control and data acquisition (SCADA) systems include a human-machine interface, control interfaces for plant systems, data acquisition units, alarm processing, remote terminal units or programmable logic controllers, trend analysis, and communication infrastructure. Open SCADA systems are built on industry and de facto standards, rather than closed, proprietary platforms. They may include open-source software.

Position and Adoption Speed Justification: Early SCADA systems were built on proprietary, event-driven operating systems. Today's SCADA systems increasingly depend on commonly available hardware and operating systems. Microsoft and Linux operating systems have been more readily accepted and are common among utility SCADA applications, particularly on client workstations. Most communication subsystems now depend on standard, openly published protocols — such as IEC 60870-5-101 or IEC 60870-5-104, IEC 61850 and Distributed Network Protocol 3 (DNP3) — rather than the vendor-specific protocols of the past. Support for the OPC Foundation's Unified Architecture is widespread (OPC, originally based on Microsoft Windows technology, stood for OLE for Process Control, but now stands for Open Process Control), and extensions for communications over TCP/IP are available from most vendors.

Adoption of open SCADA has slowed due to industry awareness of security issues. Utilities are showing some caution due to the mission-critical nature of modern utility SCADA systems; a worst-case SCADA security disruption could cause costly equipment failure. Network security vendors are addressing specialized security risks with ruggedized industrial firewall solutions for SCADA networks. SCADA vendors are adding enhanced security management and compliance monitoring features that are consistent with IT desktop management systems.

A few self-organizing groups offer open-source SCADA code — typically, these include Linux-based SCADA servers, a few distributed control system interfaces, SCADA alarm processing, and time-series trending. Ongoing support is still limited, however, and development road maps are uncertain at best.

User Advice: For electric, gas and water utility applications, open SCADA will be more commonly adopted by small and midsize municipal organizations, where there is less need for complex analytical applications requiring in-depth vendor customization of the overall technology stack. Utilities should rely not only on the business unit's technical operations staff, but also on the internal IT support staff to ensure that these systems are fully maintained throughout their entire life cycles.

The IT staff should assist in establishing operational technology (OT) governance on all SCADA projects — including network security, access monitoring, patch administration, and backup and restoration management. The IT staff also should take the lead in establishing clear accountability for ongoing operational support and maintenance via service-level agreements between the IT and OT staffs.

While open architecture systems offer improved flexibility and lower costs, commoditized platforms typically offer a broader "attack surface" to potential intruders. Additional security controls will likely introduce more technical complexity, unexpected implementation delays and increased support requirements.

Business Impact: Open SCADA changes will affect process control and distribution automation functions in electric, gas, water and wastewater utilities.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: CG Automation; Efacec ACS; GE Intelligent Platforms; Open Systems International (OSI); Survalent Technology

Recommended Reading: "Security Lessons Learned From Stuxnet"

"How CIOs Should Address Accountability and Responsibility for IT and OT"

"Five Mistakes to Avoid When Implementing Open-Source Software"


TinyOS

Analysis By: Alfonso Velosa; Ganesh Ramamoorthy

Definition: TinyOS is an OS designed for low-power wireless devices, such as those used in wireless sensor networks (WSNs), ubiquitous computing, personal-area networks, smart buildings and smart meters. It is an open-source, component-based OS written in the nesC programming language as a set of cooperating tasks and processes.

Position and Adoption Speed Justification: TinyOS, developed at the University of California, Berkeley, is one of the earliest event-based, microthreaded OSs for WSNs that allows the application software to directly access the hardware when required. It is designed to support high levels of concurrency in a very small amount of memory, provide modularized components with little processing and storage overhead, and guarantee concurrent data flows among hardware devices while managing hardware capabilities and resources effectively. Because TinyOS has a single stack, it provides tasks for managing larger computational operations: when a TinyOS component posts a task, the OS schedules it as appropriate. Tasks run in "first in, first out" order, which is adequate for most input-/output-centric applications.

TinyOS applications are written in a dialect of the C language — nesC — that is optimized for the memory limits of sensor nodes. TinyOS supports a wide range of hardware platforms and has been used in several generations of sensor nodes. Supported processors include the Texas Instruments MSP430 and the Atmel AVR. TinyOS applications can be compiled to run on any of these platforms without modification, hence its position on the Hype Cycle.

TinyOS is heavily hyped and is starting to see some adoption in a broad range of applications. It is also well-positioned to leverage the emerging machine-to-machine (M2M) and Internet of Things (IoT) devices and hype that we see in the market, although it faces competition from a broad range of OSs and proprietary approaches for M2M and IoT applications.

User Advice: Developers of embedded systems that have limited memory can leverage TinyOS. Even though TinyOS applications are written in nesC, developers benefit from a familiar set of shell-script front-end tools, and most of the associated libraries and tools are written in C. This can facilitate the development of custom applications for WSNs.

Business Impact: TinyOS is a free OS with an extensive set of libraries and tools and a fairly large number of developers. This provides an opportunity for semiconductor, software and services companies to shorten the time to market for their WSN and IoT projects by using applications that leverage this OS.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Atmel; Texas Instruments

On-Device Monitoring

Analysis By: Charlotte Patrick

Definition: On-device monitoring uses client software installed on a customer device to collect data on the use of that device (be it a mobile phone, IPTV, tablet, PC or broadband hub) and to report potential problems, such as network issues or rogue apps. It can be used for fault detection and resolution, to provide information to customer service agents during an interaction with the customer, or to provide data during device or service testing.

Position and Adoption Speed Justification: The actual number of communications service providers (CSPs) using on-device monitoring solutions remains small. Certain geographies, such as the U.S., have clusters of implementations that are now reasonably long-standing (four to five years), while other regions have been more reluctant to use them due to concerns around privacy. We estimate that between 5% and 10% of global CSPs have some sort of client with reasonably rich measurement and fault-detection functionality.

The adoption speed also varies depending on the device — with clients on devices such as mobile broadband dongles, set-top boxes (STBs) and fixed broadband hubs being more mature than those on mobile phones. Mobile adoption has historically been slowed by the large number of mobile operating systems and more recently by privacy concerns (especially after negative press coverage in the U.S.).

Future adoption will be influenced by some longer-term trends, including:

  • The increasing number of endpoints that need to be visible to operational departments — including machine to machine.
  • The growing enthusiasm for a more holistic view of the customer's experience.
  • The expansion of self-service functionality to include troubleshooting.

The requirements are the collection of information on issues and the exposure of this information to networks, customer services and/or the customer, allowing the provision of proactive resolutions (including mobile device management resolutions) or of information that will help a customer resolve the issue manually. This trend may positively affect on-device monitoring and see solutions consumed into wider functionality in the area of consumer mobile device management.

User Advice: The question of whether it is worth deploying a client versus using nonclient-based operational support systems equipment to look at the customer's experience is not straightforward. There is an information set that is only available from the client — including device information and some of the customer's actions on the device.

There are various choices of on-device monitoring functionality for mobiles, with different cost-benefit profiles:

  • A fully custom-made client installed in the factory. This can be heavier (more integrated into the device and of larger size) or lighter, depending on exactly what type of information is required.
  • One or two thousand clients put on devices to provide a relevant sample (these could be the CSP's own employees or a subsegment of opted-in customers).
  • The use of just a handful of clients to test devices before they go to market.
  • Installing a client over the air when a customer requires help from customer services.
  • The potential to install light clients at the point of sale in the future.

Any CSP wishing to go forward with an on-device mobile client should manage public relations very carefully. The ability to opt in as the device is switched on for the first time, or some type of customer education, will be far preferable to alarmist press headlines.

Business Impact: On-device clients speak strongly to some key CSP concerns. For example, how to maintain the quality of the network service where data networks are congested; and how to troubleshoot devices when there are issues and the device is some distance from customer-facing staff. On-device monitoring provides data that is not available from back-office systems for these types of activity — although the trade-off is always whether the cost of implementing the solution is worth the extra data that is collected.

In the medium term, we expect most CSPs to compromise, placing clients on a small but statistically relevant number of mobile devices to collect network data, with other data for customer services collected via mobile device management functionality as part of troubleshooting and proactive fixing activities. Clients for fixed broadband hubs or STBs will follow a different path, with continued rollout of on-device monitoring functionality.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Carrier IQ; Genius Digital; Mariner Partners; Mformation Technologies; V3D

Virtual Prototypes

Analysis By: Satish R.M.

Definition: Virtual prototypes are host-computer-based simulation models of a system and its components. They are constructed and used to simulate system behavior and component interactions in order to explore concepts, topology, architecture, design alternatives and validation techniques, and to eliminate errors. Virtual prototypes are employed from the concept definition phase through the production and postproduction phases.

Position and Adoption Speed Justification: Many existing complex embedded systems that require ever-growing maintenance effort are legacy systems, and creating virtual prototypes for them has proved a massive task, which is a barrier to migrating to embedded software and systems development using virtual prototypes. Despite the visible benefits of executable specifications and of automatic code generation, verification, validation and maintenance, only technology leaders in different industry verticals have adopted virtual-prototype-based development, and then only to a limited degree. Most smaller, less complex embedded systems and applications will never need virtual prototypes, and a major part of the Internet of Things will in fact be formed by these less complex systems.

Virtual prototypes are best suited to embedded systems or subsystems that will form part of emerging technologies and applications and will be launched in the future. The proliferation of multicore architectures can accelerate the adoption of virtual prototypes, as they enable debugging and error fixing early in the development cycle. Time-to-market pressures will keep adoption slow, given the additional time and effort involved in constructing the simulation models, but the perceived long-term benefits will eventually favor virtual prototypes.

User Advice: The need to construct and refine user and environment models is immense. Currently, system integrators (SIs) have achieved model-in-the-loop (MIL), software-in-the-loop (SIL) and hardware-in-the-loop (HIL) development. Solution providers can distinguish their offerings by implementing these refinements in the form of toolkits and libraries that establish real-world plant and user models. Solution providers focusing on toolkits and libraries that aid compliance with functional safety standards in the respective industries can accelerate the adoption of virtual prototypes.

SIs and component designers should follow a modular approach to software and systems design, allowing for future reuse across variants and derivatives and maximizing the benefits of using virtual prototypes. Although this demands an initial investment in time and resources, which serves as a barrier for many SIs and component designers, the long-term benefits of switching to virtual prototypes for all product development phases are enormous.

Business Impact: Organizations deploying virtual prototypes now should be ready for a long-term strategy because results and ROI can be realized only after a concerted effort in this direction.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Alstom; Bosch; Cadence Design Systems; Continental Automotive Group; Delphi; Denso; DSpace; Freescale; GE; Honeywell; MathWorks; Mentor Graphics; National Instruments; Nissan; Philips Electronics; Renesas Group; Siemens; STMicroelectronics; Sun Microsystems; Synopsys; Texas Instruments


MatrixSSL

Analysis By: Ganesh Ramamoorthy; Alfonso Velosa

Definition: MatrixSSL is a compact implementation of the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) cryptographic protocols, which provide communication security over the Internet, designed for small-footprint applications and devices in the embedded hardware environment. It is an open-source software package available under the GNU General Public License (GPL), which, when compiled, consists of a single library file with a simple application programming interface (API) set that a programmer can use to secure an application.

Position and Adoption Speed Justification: In embedded systems, network security comes at a very high cost. While there are freely available open-source solutions such as OpenSSL, the performance of most embedded processors alone is simply not adequate to perform cryptography for real-time applications. There have been attempts to use an embedded processor in conjunction with an Advanced Encryption Standard (AES) hardware accelerator on a real-time operating system (RTOS) such as uClinux, onto which the OpenSSL library has been ported and cross-compiled. The results of these experiments have shown that hardware acceleration improves the performance of OpenSSL cryptographic functions and, hence, of the SSL connection as well. However, because the standard OpenSSL library is over 1MB, implementing it in an embedded hardware environment that requires a smaller code footprint has always been a challenge.

MatrixSSL adds only 50KB to an application's install package, making it an ideal choice for distributing standard security with the application — dynamically over the Internet. MatrixSSL has been ported to most existing standard operating systems (such as FreeRTOS, BareMetal OS, embedded configurable operating system [eCos], VxWorks, uClinux, ThreadX, Windows Embedded Compact [Windows CE], Pocket PC, Palm OS, Portable Software On Silicon [pSOS], SMX, Binary Runtime Environment for Wireless [Brew], Mac OS X, Linux and Windows) and is also portable to new operating systems due to its high degree of platform independence. Also, as APIs operate on a memory buffer level, integration with existing application sockets code is simple and its cross-platform portability makes it compatible with applications that are deployed across multiple operating systems and architectures (such as: ARM, MIPS32, PowerPC, Hitachi H-8, Hitachi SH3, Intel 80386 [i386] and x86-64).

The small API set, limited platform requirements and clean C source code make MatrixSSL an ideal choice for secure management of remote devices and for the encryption layer in secure embedded Web servers. The latest version, MatrixSSL 3.4.2, was released on 28 February 2013 and incorporates a number of bug fixes and improvements, such as improved runtime checks of certificate algorithms against cipher suites and a fix for expired session resumption. It also contains the full cryptographic software module, which includes industry-standard public key and symmetric key algorithms.

The software can be obtained under either a GNU Public License model or a standard commercial license model. Under the GNU license, programmers can use the library free of charge, as long as all the code that links to, or uses, the library is made public. If the application source code is to remain proprietary, a commercial license is required; this comes with support, updates and additional software features such as client authentication and certificate/key generation.

User Advice: Embedded developers should use MatrixSSL to integrate and support security code in network applications, such as communications groupware, instant messaging, peer-to-peer systems, handheld data collection and collaboration applications, that are beginning to include network security features. Further, developers should integrate the MatrixSSL library with the application itself to make the application more secure, as an external library patch cannot guarantee the same security that an integrated library can provide.

Business Impact: As more devices connect to the Internet and develop more functionality, network security is vital in these applications in order to deliver uninterrupted services to users. MatrixSSL, besides securing the various network applications, also helps reduce support costs for businesses by allowing the entire solution to be tested in-house without depending on an external security library.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Discretix; Inside Secure; PeerSec Networks

Sliding Into the Trough

OMA Device Management

Analysis By: Alfonso Velosa

Definition: OMA Device Management (OMA DM) is a device management protocol from the Open Mobile Alliance (OMA) that is designed to manage mobile devices such as phones and PDAs. It supports device provisioning, configuration, software updates and fault management. OMA DM is designed to operate in an environment where memory and bandwidth are constrained and tight security is required.

Position and Adoption Speed Justification: Mobile phone and PDA vendors and developers operate in a heterogeneous world. OMA specifications are meant to work across cellular technologies that provide networking and data transport. The OMA DM specification in particular is designed for the management of small mobile devices such as mobile phones and PDAs.

OMA DM is a device management protocol specified by the OMA DM working group and the Data Synchronization (DS) working group, as part of the OMA's focus on developing open technical specifications for mobile service enablers that drive modularity, extensibility and consistency.

The OMA DM specification provides a standard way to support the following device management requirements:

  • Provisioning — Setup of the device to enable and/or disable features.
  • Configuration — Allows settings and parameters of the mobile device to be changed.
  • Software upgrades — Provide for loading new software onto a device.
  • Fault management — Diagnostics collection and query about the status of devices.

Note that the OMA has released the Lightweight machine-to-machine (M2M) standard to provide device management for the Internet of Things (IoT).

User Advice: Developers should adopt OMA DM to simplify mobile device management in a stateful communication protocol that manages authentication in a constrained memory and bandwidth environment.

If you have a heterogeneous set of systems deployed in the field, verify that the systems you have from different technology providers are running compatible versions of the OMA DM protocol. Some technology and service providers start with the nominal OMA DM protocol; in practice, however, they customize it for security and device management, which makes their solutions partially proprietary.

Business Impact: OMA DM will provide the standardized approach for mobile device management that will help vendors achieve faster time to market for their products and services with lower capital and operating costs. Furthermore, the Lightweight M2M standard will allow enterprises to manage a broader portfolio of devices, which is an increasingly important factor for firms with IoT deployments.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Google; IBM; Microsoft; Nokia; Sony; ZyXEL

Multicore Programming

Analysis By: Ganesh Ramamoorthy

Definition: Multicore programming is a software development paradigm whose purpose is to develop software that better utilizes the processing power provided by a multicore processor. This concept is a marked departure from the traditional sequential programming technique used by most embedded programmers in the past, and it requires programmers to change their entire way of thinking about software design and development issues, and then about the manner in which those issues are solved.

Position and Adoption Speed Justification: Single-core processors were the key embedded industry solution between 1980 and 2000, when large performance increases were being achieved on a yearly basis, fulfilling the prophecy of Moore's Law. Embedded multicore processors supporting various multicore architectures have been available for a long time now. However, software that fully utilizes the functionality and capabilities of these multicore processors has lagged in its development. The key reason is the need for programmers to think differently, or rather to think in parallel, while writing software for multicore processors. The lack of mature virtualization packages, performance/system profilers, debug tools, and other parallelization and optimization tools, together with the lack of multicore-enabled system software and programming models, has only added to the challenges of multicore programming for embedded systems. Moreover, C/C++, the de facto programming language for embedded software, poses a major challenge for programmers who write software for embedded multicore processors. Because of its sequential nature, C/C++ does not scale well for multicore processors. Therefore, most software programmers approach multicore programming with caution, and its adoption in embedded systems remains muted.

Using extensions, however, programmers have been able to partially enable C/C++ for multicore processors. Through formalized programming models and solutions such as OpenMP, Clean C, Parallel Studio, Unified Parallel C and pC++, programmers are enabling the use of C/C++ on multicore processors. As more such solutions emerge that enable the use of C/C++ (which is ingrained in the embedded market and therefore cannot be quickly replaced), along with new tools from electronic design automation vendors and software tool vendors for virtualization, parallelization and code optimization, multicore programming for embedded systems will likely gain momentum. Other higher-level programming languages, such as Erlang and Fortress, and Z-level programming languages, such as Chapel and Haskell, that are better able to utilize the advantages of multicore processing will also likely emerge over time and drive multicore programming into the future.

As the market for multicore processors in embedded applications begins to move into the product deployment stage, the need for multicore software will also increase in parallel in the coming years.

User Advice: Software developers who are experienced in sequential programming need to understand new concepts for developing software targeted at multicore processors. Engineering managers running technical teams of hardware and software engineers should be knowledgeable about development for multicore processors and the learning curve faced by their software developers. Project managers scheduling and delivering complex projects should appreciate, and appropriately schedule, projects targeting multicore processors. Test engineers developing tests to validate functionality and verify performance should write appropriate and efficient tests for multicore-related projects.

Business Impact: Multicore programming can bring significant cost savings to development projects. Those savings stem from greater developer efficiency in analyzing and designing the implementation, fewer bugs in the implementation, and an increased likelihood that the implementation meets performance requirements. Together, they lead to faster time to market and lower development costs.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Codeplay Software; Critical Blue; Freescale; Intel; Microsoft; Pervasive Software; Polycore Software; Rogue Wave Software; Scalable Solutions

Z-Wave

Analysis By: Sylvain Fabre; Jim Tully

Definition: Z-Wave is a wireless technology that enables home automation, catering especially to control applications in the home. It is implemented in short-range radio frequency modules added to household goods such as lamps, access control systems, entertainment hubs, pool controls and appliances. So far, almost all of its success has been in home security applications, such as door and window locks. Another wireless technology, ZigBee, has had far more success in appliance control (though this market is very young) and heating control (such as thermostats).

Position and Adoption Speed Justification: Over 700 products use Z-Wave and have been certified by the Z-Wave Alliance. There are 200 members of this consortium, an industry body made up of manufacturers of products built to the Z-Wave standard.

User Advice: Consider Z-Wave for home automation applications within a 30-meter range. Z-Wave is designed to be easily embedded into consumer electronics products, including battery-operated devices such as remote controls, smoke alarms and security sensors. Recognize, however, that Z-Wave seems to be losing market share to other wireless technologies, even in its core area of security, and has been struggling over the last couple of years. ZigBee, Bluetooth and even Wi-Fi appear to be encroaching on Z-Wave's traditional stronghold of security devices in the home.

OEMs of appliances should carefully analyze the cost of Z-Wave technology, especially in light of the silicon cost of alternatives. Although the cost of Wi-Fi silicon is falling, it remains more expensive than Z-Wave and ZigBee silicon.

Business Impact: Z-Wave is optimized for reliable, low-latency communication of small data packets. This differentiates it from Wi-Fi and other Institute of Electrical and Electronics Engineers (IEEE) 802.11-based technologies, which are geared toward high-data-rate applications. Z-Wave operates in the sub-gigahertz frequency range, around 900MHz. This band is also used by cordless handsets and other home electronics appliances; however, Z-Wave's operation in this band does not interfere with Wi-Fi and other standards that use the 2.4GHz band.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Early mainstream

Sample Vendors: 1010data; ADT Security Services; Danfoss; Evolve Guest Controls; Sigma Designs; Vivint

Embedded GUI

Analysis By: Adib Carl Ghubril

Definition: An embedded graphical user interface (GUI) runs on more-limited processing and memory resources, while providing an interaction similar to what is experienced on a desktop computer.

Position and Adoption Speed Justification: Embedded GUIs have typically run on Windows CE and embedded Linux — both of which entail a significant amount of memory external to a midtier processor. The introduction of Java Platform, Standard Edition Embedded (Java SE Embedded) started the migration of C++ widgets to JavaScript, so more powerful functionality was able to run on more-limited resources, such as those found on board microcontrollers. However, JavaScript can consume much of the limited onboard resources and restrict the functionality of an embedded Web browser GUI.

HTML5 is much more code-efficient, so its implementation offers developers the flexibility to customize further down the application stack. The announcements of various development toolmakers, such as mbed, Bsquare and QNX, bode well for the progression of HTML5, with the caveat that security and standardization across devices remain elusive and pose the gravest risk to reaching the Plateau of Productivity.

Indeed, the first version of HTML5 is due to be completed within the next two years and, at the outset, may not support pure HTML5 applications but, rather, require wrapped HTML hybrids.

User Advice: Perform a cost-benefit analysis on the implementation of widgets and WebSockets using the main programming and scripting language.

Business Impact: The Internet of Things is fueling the proliferation of embedded processors and microcontrollers. IoT devices running HTML5 on microcontrollers will not only maintain a good power-performance ratio, but also offer a compelling user experience via Web browser GUIs.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Bsquare; Micrium; QNX

Recommended Reading: "Hype Cycle for Web Computing, 2012"

"A Platform Approach for Websites, Portals and Mobile Apps Leads to Faster Time to Market and Improved User Experience"

Software-Defined Radio for Mobile Devices

Analysis By: Jim Tully; Sylvain Fabre

Definition: Software-defined radio (SDR) places significant parts of a wireless function under software, rather than traditional hardware, control. This allows devices to switch dynamically between protocols and frequencies via software. SDR is most attractive where standards are changing and uncertain, or where multiple standards are required. Smart antennas are an important part of SDR and are typically implemented as an array of elements with programmable characteristics such as field width, frequency and waveform shape.

Position and Adoption Speed Justification: SDR has great potential to reduce device cost and hardware complexity, as less radio hardware is required to support many protocols. SDR could also enable advanced concepts such as cognitive radio — in which devices dynamically negotiate protocol and spectrum use depending on their needs and the needs of other devices in the vicinity.

The ultimate goal is for the radio frequency and baseband components of the wireless device to be fully programmable and able to switch frequencies and protocols in milliseconds. The critical element of SDR is designing a programmable radio that operates over a broad range of frequencies, which is at least as challenging as implementing the baseband function.

While SDR technology has already been used in cellular base stations, it has had little impact on mobile devices. Chips implementing partial SDR solutions are available for use in some mobile devices (such as laptop data cards) and for selected components of wireless systems (such as baseband processors). SDR is likely to be more attractive for use on high-end devices, like smartphones, which must support many different wireless standards and protocols.

Practical SDR implementations will rely on relatively high-performance digital signal processor (DSP) technology. Current DSP-based processors, with the appropriate clock rate, consume too much power for most handsets, but significant developments are being made in power reduction. However, it will take many years before SDR is used on a wide range of mobile devices. One early example of a primarily SDR-based baseband chip from Nvidia's Icera is already shipping in ZTE's Mimosa X phone. For this reason, we have advanced the technology a little further down the slope this year.

User Advice: Communications equipment vendors should already be seriously evaluating this technology. In the long term, SDR has the potential to make mobile devices less expensive and more flexible.

Business Impact: SDR has the potential to reduce the cost of mobile devices and bring them to market more rapidly by lowering production costs. SDR enables multimode, multiband and/or multifunctional wireless devices that can be modified using software upgrades performed in the field, or over the airwaves.

Manufacturers of mobile devices that include multiple radios should pilot SDR technology as it becomes available, and adopt it when the economic case is proven and when the performance and power consumption of SDR are at an acceptable level.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: BitWave Semiconductor; Imec; mimoOn; Nvidia; Sandbridge Technologies; Silicon Hive; Tensilica; Vanu

Recommended Reading: "Magic Quadrant for LTE Network Infrastructure"

"Forecast: Mobile Data Traffic and Revenue, Worldwide, 2010-2015"

"Emerging Technology Analysis: Self-Organizing Networks, Hype Cycle for Wireless Networking Infrastructure"

"Dataquest Insight: IPR Issues Could Delay Growth in the Long Term Evolution Market"

Remote Monitoring and Device Management

Analysis By: Michael Maoz

Definition: Smart devices containing embedded sensors and monitors enable proactive services for process and product improvement, as well as business intelligence (BI). Remote monitoring and device management is sometimes referred to as machine to machine (M2M), smart devices, remote monitoring or support automation. It should accelerate the use of big data techniques, as remote monitoring and device management generates high volumes of data about products.

Position and Adoption Speed Justification: Remote monitoring and device management is a proactive service that can also feed contextual information that's valuable to marketing and sales. There has been an increase of more than 300% in the number of intelligent asynchronous devices, intelligent structures and monitored capital assets with remote monitors deployed between 2010 and the first half of 2013. Remote monitoring and device management passes the context of the machine, device or equipment — heat/vibration/capacity/location/state — along in a proactive way to a central alert system, where it can be analyzed against a variety of operational criteria.

User Advice: The use of software-based monitors and proactive diagnostics embedded in a range of physical devices from capital equipment (e.g., medical devices, printers, automated teller machines, kiosks, and oil and gas refineries) to physical structures (for example, bridges, walkways and stadiums) is radically changing support costs, improving uptime and moving the support equation from reactive to proactive. This is occurring through the real-time delivery of data monitoring, which removes the need for unnecessary maintenance checks, helps pinpoint failure points and highlights the most likely fixes.

Business Impact: Remote monitoring and device management software and systems increasingly will be incorporated into large capital projects, such as skyscrapers, stadiums, highways and bridges. Remote monitoring and device management is a basic technology approach in industries such as oil and gas, cable and airlines. It can be used to reduce carbon emissions, because it reduces the need for nonessential maintenance inspections and supports environmentally conscious IT. More-rapid detection of issues reduces unforeseen downtime through preventive repair and maintenance.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Axeda; Digi; NextNine; SmartSignal

System Prototyping

Analysis By: Satish R.M.

Definition: System prototyping is an embedded software and systems development methodology that employs tools for integration and validation of all system components. System components include the hardware board with the semiconductor chipsets assembled on it and numerous software components. System prototyping ensures that the overall system and its components meet the product specifications and function in an environment the product will be subjected to in the real world.

Position and Adoption Speed Justification: For over a decade, system-level prototyping solutions have relied on proprietary methods and tools to achieve their objectives. With increasing embedded-system complexity, time-to-market pressures and software development costs, rapid adoption of mature prototyping solutions has become essential. Standardized solutions are imperative for managing distributed software and systems development across multiple competencies worldwide.

Currently, the adoption of standardized solutions within organizations is restricted to projects that can afford the capital investment of building/buying and installing these expensive solutions across multiple sites. System integrators (SIs) have started employing commercial off-the-shelf (COTS) system prototyping solutions over the past few years to efficiently manage the product development cycle and optimize costs by achieving significant automation of the verification and validation process. SIs will quickly realize that upgrading and maintaining proprietary solutions is not sustainable. It is better to employ mature standardized solutions across the board.

User Advice: The maturity of the system prototyping solution is crucial for meeting product cost, quality and reliability requirements. For embedded real-time systems that are safety-critical, it is important for these solutions to provide an environment that is very similar to the real-world scenario. To meet this challenge, SIs and solution providers should focus on building solutions that support methods and formats to capture and use real-time data that represents the characteristics of the system stimuli in the real world. Real-time data could be captured from existing platforms to build a model and use this as a baseline for developing modifications that reflect the system specifications of the new product under development.

Moreover, it is also necessary for OEMs and SIs to ensure that all their vendors are synchronized by using the same set of standard tools and methods at all stages of the product verification and validation process, irrespective of competency. The general tendency in the industry is for each vendor to follow its own methods and tools, which defeats the purpose. With everyone involved using a standard verification/validation environment, there is scope for shortening the development cycle and optimizing product performance. Promoting use of the same toolsets across a product's value chain will make COTS solutions easier to sell and give OEMs and SIs an advantage in negotiations.

Business Impact: System prototyping is essential for ensuring that all hardware and software interfaces function as designed, and for validating the overall system functionality defined by performance parameters such as power consumption, thermal characteristics and electromagnetic interference/compatibility (EMI/EMC).

A mature system prototyping solution has a direct impact on the product development cycle, quality, cost and reliability. Standardizing on mature solutions and employing them across the board has far-reaching implications for the operational efficiency of organizations.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Bosch; Cadence Design Systems; Continental Automotive Group; Delphi; ETAS; Freescale; General Motors; GE; Honeywell; IBM; Infineon Technologies; Johnson Controls; Mentor Graphics; National Instruments; NXP Semiconductors; Philips Electronics; Renesas; Samsung; Schneider Electric; Siemens Healthcare; STMicroelectronics; Synopsys; Texas Instruments; Visteon

IC Subsystem Reuse

Analysis By: Ganesh Ramamoorthy; Jim Tully

Definition: Integrated circuit (IC) subsystem reuse refers to the reuse of multiple blocks or subsystems from one chip design in another. These subsystems typically comprise one or more processing elements, memory and other system functions integrated into a predesigned, preverified block. Examples include audio/video processing, graphics/image processing, power management and memory processing subsystems. Typically, embedded software is also included in this category.

Position and Adoption Speed Justification: IC subsystem reuse is critical for maintaining efficiencies within organizations that design chips, as design costs continue to rise substantially with each new process generation. It is usually very inefficient and costly to redesign these subsystems across different chips, so it makes sense to lift an entire subsystem and reuse it in a new chip. Although large blocks are being reused, the significant amount of customization and design effort involved — especially in the case of complex analog and mixed-signal subsystems such as radio frequency subsystems — outweighs the economic benefits of reuse. As a result, the adoption of subsystems by chip vendors has been sporadic. However, with chip vendors beginning to realize the economic implications of rising design costs in the post-recession market, the case for IC subsystem reuse has strengthened. The emergence of flexible interconnects that allow designers to work on a system-on-chip (SoC) independently, as well as tools to reduce customization and design effort, has strengthened it further. Therefore, over the next two to five years, as chip vendors embrace new approaches to keep design costs under control, we believe that adoption of IC subsystem reuse will grow rapidly. We have observed increased rates of subsystem reuse for multimedia functions, while other system functions show no major shifts in reuse. Similarly, with the increasing role of embedded software in system design, we are beginning to see approaches that enable the reuse of embedded software code across multiple designs to reduce design time and cost. We have therefore moved its position further ahead in this year's Hype Cycle.

User Advice: Developers of SoC devices should allocate significant resources to this technology, as it is critical to their business. Also, SoC designers should consider reusing subsystems within the first two to three years after the introduction of the original design, in order to derive the maximum benefits. This is because, with technology advancing rapidly and chip designs moving to advanced process nodes faster, the time frame available to reuse subsystems in new designs is currently very limited. Once a chip design has moved to an advanced process node, reusing a subsystem designed in a lagging-edge node will involve considerable redesign of the original subsystem in the advanced process node, and will therefore not deliver the desired results.

Business Impact: Subsystem reuse is crucial for complex SoC devices and for the onward trajectory of Moore's Law. The goal of subsystem reuse is a virtually plug-and-play environment in which a high degree of automation can be used. Analog functions, such as interfaces, are also important. Some companies are more successful than others and achieve varying degrees of reuse. This translates directly into time-to-market advantages and differentiation, and therefore accelerated growth, for successful companies.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Adolescent

Sample Vendors: ARM; Broadcom; Panasonic; Qualcomm; STMicroelectronics; Texas Instruments

Climbing the Slope

Software Components Over the Air (SCOTA)

Analysis By: Ganesh Ramamoorthy

Definition: Software components over the air (SCOTA) refers to the method used for upgrading firmware over the air (FOTA) for mobile phones and tablets. SCOTA is also referred to as a software update, firmware update or device management.

Position and Adoption Speed Justification: Traditionally, an update to a device's firmware could only be performed by the device manufacturer through an authorized service center. Alternatively, the user could update the firmware by connecting the device to a desktop computer via a USB cable. However, both methods are cumbersome: they require the user either to visit a service center every time an update is required or to be tech-savvy enough to perform the update with a PC, and users must also actively seek out updates. SCOTA eliminates these hassles. Any mobile device with SCOTA/FOTA capability can automatically receive firmware updates directly "over the air" from the mobile phone service provider or the mobile device manufacturer.

As a result, FOTA/SCOTA is now very popular among all mobile device manufacturers and operators and is also the de facto means for deploying firmware upgrades on mobile devices. Current phone manufacturers that produce FOTA-capable phones include LG, Samsung, HTC, NEC, Nokia, Motorola, Sanyo, Kyocera, Sharp, Sony Ericsson, Toshiba and others. With the growing proliferation of mobile devices and the increasing trend toward connecting all things to the Internet, SCOTA/FOTA capabilities on devices will therefore be a critical requirement to keep these devices up to date.

User Advice: Currently, only mobile devices that have SCOTA/FOTA capability can receive upgrades over the air. These are typically feature phones, smartphones and media tablets; basic phones do not possess this capability. However, the growing trend of connecting everything to the Internet will likely usher in future devices that all have this capability. Users should therefore inquire about this capability when making their purchase decisions.

Business Impact: SCOTA/FOTA allows manufacturers and operators to "push out" firmware upgrades to mobile consumers and ensure that they have the latest software upgrades, which reduces potential customer support costs and increases consumer satisfaction.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Bitfone; HP; Innopath; Mformation Technologies; Red Bend Software

ZigBee

Analysis By: Mark Hung

Definition: ZigBee is a wireless mesh networking technology based on the IEEE 802.15.4-2003 standard. Applications include consumer electronics, smart meter infrastructure, home automation and machine-to-machine communication. The technology can be used in several topologies, such as point-to-point or simple mesh. ZigBee's targeted battery life and data rates differentiate it from Wi-Fi.

Position and Adoption Speed Justification: The standard is agreed on, but the development of a large body of compliant products is evolving. It's unclear whether the standard is too broad, causing it to splinter into many incompatible substandards. Several competing technologies, such as Z-Wave and low-power Wi-Fi, have entered the market, making 802.15.4/ZigBee's potential as a standard questionable, especially in the consumer market.

Bluetooth 4.0 is a potential competitor in some areas, such as medical equipment and remote control applications; however, Bluetooth 4.0 has been more focused on consumer electronics applications. The pricing of a stand-alone radio is expected to be in the $1 to $2 range; however, implementations (for example, building automation) may require an advanced microcontroller, especially when security protocols demand it, potentially adding more cost to the bill of materials.

ZigBee's first significant win was in automated meter reading, leading to participation in the smart grid push by utilities. This has become a particularly active area for ZigBee, and this foray into the home may lead to ZigBee's inclusion in more home appliances. In 2009, the Radio Frequency for Consumer Electronics (RF4CE) consortium and the ZigBee Alliance agreed to work on a standard for remote controls, which have been primarily infrared-based.

User Advice: Early adopters and organizations with closed applications should consider ZigBee. All other potential users should monitor 802.15.4/ZigBee's progress.

Business Impact: This will affect machine-to-machine automation.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Freescale Semiconductor; GreenPeak; Marvell Technology Group; Silicon Laboratories; STMicroelectronics; Texas Instruments

MISRA-C 2012

Analysis By: Ganesh Ramamoorthy

Definition: The Motor Industry Software Reliability Association (MISRA) is a collaboration between vehicle manufacturers, component suppliers and engineering consultancies, all seeking to promote best practice in developing safety-related electronic systems in road vehicles and other embedded systems. MISRA C is a C programming language subset, originally developed by Ford and Rover, that has become the de facto standard for embedded C programming in safety-related industries.

Position and Adoption Speed Justification: MISRA C is a software development language subset, originally designed to promote safer use of the C programming language in safety-critical automotive applications meeting the requirements of Safety Integrity Level (SIL) 2 and above (as C is not ideal for safety-critical work). The first version, released in 1998 (MISRA-C:1998), and the 2004 version (MISRA-C:2004), which provided extensions and improvements, both centered on C90 (ISO/IEC 9899:1990), a very well-supported, reliable version of the language. The current significant members of MISRA include AB Automotive Electronics, Bentley Motors, Delphi Diesel Systems, Ford Motor, Jaguar Cars, Land Rover, Lotus Engineering, Mira, Protean Electric, Ricardo U.K., TRW Automotive Electronics, The University of Leeds and Visteon Engineering Services U.K.

MISRA-C:2012, published in March 2013, extends support to the C99 (ISO/IEC 9899:1999) version of the C language while maintaining guidelines for C90. Other improvements — many made as a result of user feedback — include better rationales for every guideline, identified decidability (so users can better interpret the output of checking tools), greater granularity of rules to allow more precise control, a number of expanded examples, and integration of the guidelines for applying MISRA-C:2004 in the context of automatic code generation.

Currently, MISRA is a widely adopted de facto safe-coding standard, designed to achieve software quality in automotive, aerospace, industrial, medical, defense and rail applications, where failure carries a high cost. As the use of coding standards in safety-critical embedded software development plays an essential role in reducing the risk of unsafe code escaping into production devices, we believe that MISRA and similar coding standards, such as the Jet Propulsion Laboratory (JPL) standard, will continue to gain traction. Even noncritical systems, such as consumer devices, are likely to adopt this coding standard to ensure high-quality, failure-resistant software applications.

User Advice: All software development organizations should adopt the MISRA-C:2012 guidelines because they enforce sound coding practices and, by addressing the ambiguities of the C programming language, help developers write code in a consistent manner, avoiding ambiguous constructions.

Business Impact: The role of software within numerous electronic systems is becoming increasingly critical. This has huge implications for the chip vendors, software developers and system companies involved in developing embedded software. Adhering to the MISRA-C:2012 coding standard is vital to ensure software quality, avoid the high cost of software and system failure, and reduce the cost and complexity of compliance, while enforcing consistent, best-practice and safe use of the C programming language in the development of critical embedded systems.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Atollic; IAR Systems; LDRA; MathWorks; Phaedrus Systems

ESL Design Tools and Methodologies

Analysis By: Ganesh Ramamoorthy; Jim Tully

Definition: Electronic system-level (ESL) design tools and methodologies are software tools and methodologies for the high-level definition of an electronic system. These tools generate individual hardware and software descriptions for implementation by lower-level tools. Use of ESL tools facilitates the design and verification of hardware and software at the same time. Therefore, they have the potential to reduce both costs and time to market.

Position and Adoption Speed Justification: ESL provides tools and methodologies that let designers describe and analyze chips on a level of abstraction at which they can functionally describe behavior without resorting to the details of the hardware register-transfer level implementations.

ESL tools and methodologies have been hyped for the past 10 years or more, and adoption over this period has been slow. System architects typically use a combination of standard office applications such as spreadsheets, high-level modeling tools, whiteboards and internally developed tools. The key idea is to model the behavior of the entire system using a high-level language such as C, C++ or MATLAB; a graphical model-based design tool such as LabVIEW, SystemVue or Simulink; or an abstract modeling language such as SystemC.

Over the years, other general-purpose system design languages like SysML and those specific to embedded system design like Semantic Model Definition Language and Store Schema Definition Language have emerged that enable the creation of a model at a higher level of abstraction. Many attempts have been made by electronic design automation software vendors to provide integrated tool suites, but efforts to generate software descriptions at the appropriate level have not been very successful.

ESL has become an established approach at most leading system-on-chip (SoC) design companies over the last couple of years. With the growing complexity of SoC designs, we are also seeing increased intellectual property (IP) reuse. Current SoC designs use, on average, at least 100 proven IP blocks, with each IP block sometimes comprising more than a couple of million gates. The increased use of third-party IP in SoC designs has shifted the development effort toward the integration and verification of these heterogeneous IP blocks, with original circuits designed mainly to differentiate SoCs. This shift appears to have been spurred by the recession.

However, with the growing focus on embedded systems and system-based design, the role of ESL is growing, mainly because of the productivity benefits the tools bring: companies using them can cut costs by reducing their engineering workforce. The recent acquisitions of Magma Design Automation by Synopsys and of the Flowmaster Group by Mentor Graphics are testimony to chip vendors' increasing focus on system-level design, and hence to the interest in ESL technology. Therefore, we are more confident of its future development.

User Advice: Companies involved in the design of electronic systems must evaluate these tools. In addition, users should press vendors for more effective offerings. Effective use of these tools and methodologies will be critical to the success of companies designing electronic systems over the next few years.

Business Impact: ESL tools enable designers to create a virtual prototype of their system design before the detailed hardware implementation is finalized. They provide a way to start the architectural analysis much earlier and validate the software to be implemented on the actual hardware to eliminate hardware-software integration bugs earlier in the design cycle.

This provides significant benefits to companies, including faster time to market, reduced product design cycles, increased hardware design productivity, faster creation of derivative products, improved communication between hardware and software teams, and improved product quality.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: ARM; AutoESL; Cadence Design Systems; Calypto Design Systems; CoFluent Design; Mentor Graphics; Synopsys

Offload Engines

Analysis By: Carl Claunch

Definition: Offload engines move processing for specific functions — such as TCP/IP stack processing, XML processing, Java Virtual Machine (JVM) processing and complex mathematical computations — from general-purpose processors onto one or more specialized components. Users cannot write programs to run on the offload engines, which are restricted to providing only defined functions.

Position and Adoption Speed Justification: Offload engines provide advantages in software licensing, performance and scalability by using specialized or hidden processors to handle work that would otherwise burden a standard server processor. This frees the server processor to handle the operations in which it excels. In some cases, offload engines can change the economics by reducing the size of the general server that is needed, thus cutting software license charges for any software installed on it, even if the offload engines aren't faster at running the offloaded tasks.

To efficiently accomplish the offload, the OS must participate in the offloading process, and the connection between the offload engine and the server's main memory must be quick, efficient and scalable. In many cases, the OS uses fast connections, such as Peripheral Component Interconnect Express (PCIe). In others, it may implement direct memory access. This dependence on OS support is what currently hinders the broader implementation of this technology.

Furthermore, because many server processors are underused, the desire to offload workloads for efficiency is less compelling until consolidation takes place. Finally, to win software vendors' agreement to ignore offload engines in their licensing fees, the engines must be completely invisible at the application layer and must not be convertible by users to general-purpose use.

Despite these challenges, offload engines will facilitate new levels of server performance and scalability in the next two to three years. Input/output (I/O) and bus-connected offload engines have been around for more than four decades. AMD and Intel have invested in specifications to better support offload engines at the chipset level. As another example of the use of this technology, IBM has supported offload engines in System z for at least a decade, and other types of offload engines (such as the cryptographic coprocessor) for much longer. Many high-speed Ethernet cards contain TCP/IP offload engines (TOEs), which accomplish processing that would otherwise require substantial amounts of processor capacity in the server.

The drive by system vendors to produce differentiated servers for specific workloads or roles will lead to more use of offload engines as a means of delivering a clearly different experience for the user of those differentiated systems.

If the offload engine is not dedicated to specific functions, but could run any code suitably created for it, it falls into the category of heterogeneous architectures.

User Advice: Enterprises should evaluate offload engine technology for tasks that are dragging down conventional server processors. This includes TCP/IP stack handling and JVM processing. Big-data-related processing is expected to be the next target of offload engines. However, enterprises must ensure that the OS vendor and the surrounding software ecosystem will fully support offloading. Furthermore, investments in offload engines are necessarily locked to that role, whereas money spent on general-purpose server systems can be protected if the original need goes away because the general-purpose processors can simply be assigned to run other software.

Business Impact: Offload engines either aren't conventional processors or are permanently inaccessible to users outside their specialty role. They can increase server performance without increasing the licensing costs associated with conventional processors.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: AMD; IBM; Intel; Oracle

Cortex M

Analysis By: Adib Carl Ghubril

Definition: Cortex M is ARM's line of 32-bit processors targeting real-time applications typically found in control systems where feedback timing is critical to system performance.

Position and Adoption Speed Justification: Cortex was first introduced in 2007 and was quickly adopted by major manufacturers of microcontrollers (MCUs). Companies such as STMicroelectronics and NXP quickly transitioned from the ARM7 core to the Cortex line and have since been joined by Texas Instruments, Atmel, Freescale, Infineon, Silicon Laboratories and more. In fact, ARM-based microcontrollers make up more than 50% of all 32-bit MCUs, by either units shipped or total revenue.

ARM Cortex has gone through a few iterations: the M4 adds a hardware multiply-accumulate (MAC) capability to support arithmetic-intensive applications, while some manufacturers holding designer licenses improved the performance of the M0 core, creating the M0+. Cortex-M now has three main versions (M0/M0+, M3 and M4), is supported by a vibrant community of embedded system designers, and is on the product road maps of many vendors in the top 15 of the MCU market.

User Advice: Specifying Cortex-based MCUs will give OEMs access to the largest 32-bit selection of firmware and tools, as well as a relatively low-power device in that bit class.

Business Impact: Cortex-based MCUs have brought down the barriers to entry into the 32-bit space by not only lowering aggregate average selling prices, but also fostering a strong ecosystem. Cortex cores are a big factor in the proliferation of 32-bit MCUs and enable the high compound annual growth rate forecast for that class of microcontrollers.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Early mainstream

Sample Vendors: Atmel; Freescale; NXP Semiconductors; STMicroelectronics


Hype Cycle Phases, Benefit Ratings and Maturity Levels

Table 1. Hype Cycle Phases



Innovation Trigger

A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations

During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment

Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment

Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity

The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption

The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2013)

Table 2. Benefit Ratings

Benefit Rating

Transformational

Enables new ways of doing business across industries that will result in major shifts in industry dynamics

High

Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise

Moderate

Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise

Low

Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings

Source: Gartner (July 2013)

Table 3. Maturity Levels

Maturity Level

Embryonic

  • In labs
  • None

Emerging

  • Commercialization by vendors
    Pilots and deployments by industry leaders
  • First generation
    High price
    Much customization

Adolescent

  • Maturing technology capabilities and process understanding
    Uptake beyond early adopters
  • Second generation
    Less customization

Early mainstream

  • Proven technology
    Vendors, technology and adoption rapidly evolving
  • Third generation
    More out of box

Mature mainstream

  • Robust technology
    Not much evolution in vendors or technology
  • Several dominant vendors

Legacy

  • Not appropriate for new developments
    Cost of migration constrains replacement
  • Maintenance revenue focus

Obsolete

  • Rarely used
  • Used/resale market only

Source: Gartner (July 2013)