Use Complex-Event Processing to Keep Up With Real-time Big Data
The conventional save-and-process paradigm is not fast enough to get immediate value from big data that is delivered at high rates of speed. Companies that need to analyze real-time data for time-critical applications, or add value to data before storing it, must use complex-event processing.
- The need for enterprises to respond faster to changing conditions is driving architects and project leaders to use complex-event processing (CEP) in more business processes.
- The conventional save-and-process paradigm is not fast enough to get immediate value from big data that is delivered at high rates of speed.
- No technology is capable of handling all types (real-time, near real-time and offline) of big data processing.
- Use CEP when latency must be low, the volume of inputs is high or event patterns are complex.
- Use CEP to help big data applications load high-volume event data more efficiently and effectively.
- Acquire a CEP-capable commercial-off-the-shelf (COTS) package that meets the project's specific business requirements.
- Acquire a general-purpose CEP-capable event processing platform and tailor it if an appropriate COTS package is not available.
- Run CEP, in-memory databases, and batch-oriented big data systems side by side when the same input needs to be processed in real-time and be saved for deeper post hoc analysis.
CEP distills incoming data about events into "complex" events. Complex events summarize the information value of the incoming data to provide insights into the current conditions in a company and its environment. CEP is "event-driven" because computation is triggered immediately when input data is received. The input data can come from transactional application systems, sensors, social computing systems, market data providers, Web-based news feeds, email systems or other sources.
For example, a trucking company can monitor the location, speed, engine temperature and other properties of its trucks by capturing a continuous stream of GPS and sensor readings (these are input or "base" events) from each truck. A CEP-based monitoring system correlates the base events and finds meaningful patterns in them to derive complex events such as "truck is off its planned route," "truck will arrive at its destination in 15 minutes," or "trend in temperature readings indicates likely engine failure in the next 30 minutes."
One complex event can be the result of calculations performed on dozens, hundreds or thousands of base events. Complex events are the key performance indicators within the monitoring system that give situation awareness to fleet operations managers, drivers and other interested parties. The system displays truck locations on a map on a business dashboard, issues email or text message alerts to notify workers at a loading dock to be ready to unload, and gives warnings of impending mechanical breakdowns. It can also correlate truck locations with traffic data to help re-route trucks around congested areas, and mash up weather information from a weather service to alert drivers about storms that might slow their progress.
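The truck example above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `BaseEvent` fields, threshold and rule are assumptions, not a real product's API): a stream of base sensor events is reduced to one derived complex event when a rising temperature trend crosses a threshold.

```python
from dataclasses import dataclass

@dataclass
class BaseEvent:
    truck_id: str
    lat: float
    lon: float
    engine_temp_c: float

# Hypothetical CEP rule: derive a complex event when the last three
# temperature readings are strictly rising and the latest exceeds a threshold.
def detect_overheating(events: list[BaseEvent], threshold: float = 95.0) -> list[str]:
    complex_events = []
    temps = [e.engine_temp_c for e in events]
    if len(temps) >= 3 and temps[-1] > threshold and temps[-3] < temps[-2] < temps[-1]:
        complex_events.append(
            f"trend in temperature readings for truck {events[-1].truck_id} "
            "indicates likely engine failure"
        )
    return complex_events

readings = [BaseEvent("T42", 40.7, -74.0, t) for t in (90.0, 94.0, 97.5)]
print(detect_overheating(readings))  # one derived complex event
```

A production CEP engine would express this as a declarative pattern query rather than hand-written Python, but the shape is the same: many base events in, few meaningful complex events out.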
Gartner identifies CEP as an important big data technology primarily because of its ability to process incoming data quickly. We define big data as high-volume, -velocity and -variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making (see "The Importance of 'Big Data': A Definition"). In addition to high velocity, CEP applications typically process a high volume of data, and some CEP applications process a wide variety of data types, including structured data and content. The majority of CEP applications have always been big data in a general sense. However, most are outside of the conventional image of big data because less than 5% of them currently involve MapReduce or Hadoop, the technologies popularly associated with the big data movement.
Source: Gartner (August 2012)
The need for enterprises to respond faster to changing conditions is driving architects and project leaders to use complex-event processing (CEP) in more business processes
Speed is a major component in most modern business strategies, such as time-based competition, real-time enterprise, predictive enterprise and zero latency enterprise. Enterprises that seek to run at a fast pace need early notification of emerging business threats and opportunities, just as a person driving 60 miles per hour needs more advance warning of road hazards than a person driving 30 miles per hour.
Companies can get CEP capabilities by:
- Acquiring a COTS application, monitoring system or other tool that has embedded CEP.
- Writing CEP logic as part of a custom-coded application.
- Acquiring a general-purpose (CEP-capable) event-processing platform and tailoring it to their specific business requirements.
Before 2004, almost all CEP was acquired through the first two approaches listed above — in COTS packages or custom coding — because general-purpose platforms were not widely available. Gartner is now tracking 21 event processing platform products (see Note 1), and developers are using them in a growing number of applications. However, companies still get most of their CEP capability from a COTS package (only a few do custom coding). For example, supply chain visibility, IT operations management (including system and network management, and business transaction monitoring), security information and event management, fraud detection, and some financial services trading platforms have purpose-built CEP logic in their respective packages. In many cases, buyers are unaware that the package is using CEP.
General-purpose event processing platforms are used by financial services companies for algorithmic trading, order routing, market surveillance, trader surveillance, risk management and foreign exchange (FX). Another common CEP application on these platforms is managing transportation operations for planes, trains, ships and trucking fleets. Other companies use CEP for business process monitoring and real-time cross-selling. The platforms are also used in operational technology (OT) applications that rely on sensor data regarding temperature, pressure, vibration and revolutions-per-second from physical devices. OT goes by various names in different industries, and is often owned and operated independently of business IT systems. More than 100 smart electric grid projects use CEP technology to track electricity transmission and consumption, and to monitor the health of equipment and networks. Other OT CEP applications include pipeline monitoring and oil refining.
CEP-based systems perform real-time (or near-real-time) computation on real-time (or near-real-time) data. This is inherently different from most business intelligence (BI), data discovery and advanced analytics systems, which process data that is historical (that is, minutes old, hours old, days old or older). BI, data discovery and advanced analytics may be "real time" in the sense that the answer to a query comes back within seconds, but the analysis still runs on historical data. Even systems in which all of the data is in memory are generally providing real-time analysis of historical data (data more than 15 minutes old).
- Acquire a CEP-capable COTS package if one is available that meets the project's specific business requirements.
- Acquire a general-purpose CEP-capable event processing platform and tailor it to the task if an appropriate COTS package is not available.
- Write CEP logic into a custom application only as a last resort.
The conventional save-and-process paradigm is not fast enough to get immediate value from big data that is delivered at high rates of speed
Most BI and other analytic applications store the data in a database and then apply a query to extract the relevant subset of the data to use in the analysis. The data may be stored in memory or on disk, and may use a row-oriented or column-store structure depending on the amount of data and the need for speed (in-memory column stores will often provide the best performance for analytic purposes). In this save-and-process paradigm, the query comes to the data.
CEP inverts that paradigm by bringing the data to the query — the query is in place before the data arrives so each message (event, row or record) is processed immediately and, optionally, stored later. The CEP query specifies the algorithms and rules for filtering (selecting a particular subset of incoming data), aggregating (computing sums, averages and counts, or ordering the data), enriching and finding patterns in the data. This approach minimizes latency because:
- CEP systems are event-driven — they run as soon as new data is received.
- Data is kept in memory in the CEP engine. In more-demanding applications, the CEP engine may store data in an in-memory data grid (IMDG) (see "What IT Leaders Need to Know About In-Memory Data Grids").
- Data is routed internally within the CEP engine as it arrives so that it is in place for immediate computation.
- Calculations are incremental. For example, when calculating the volume of customer service requests received in the past 10 minutes, the system does not process all of the data received during that 10-minute window. Instead, it starts from the result that was calculated one minute ago for the previous 10-minute window, then subtracts the data that is more than 10 minutes old, and adds in the data received in the most-recent minute.
- Other optimizations are employed to minimize context switches, unnecessary data movement and other overhead.
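The incremental-calculation point above can be made concrete with a small sketch. This is an illustrative implementation, not any vendor's engine: a sliding-window count that folds each new event into running state and subtracts only the events that have aged out, instead of rescanning the whole window on every update.

```python
from collections import deque

class SlidingWindowCount:
    """Incrementally maintain a count of events seen in the last
    `window_seconds`, rather than recomputing over the full window."""
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.timestamps = deque()
        self.count = 0

    def on_event(self, ts: float) -> int:
        # add the new event, then subtract events that fell out of the window
        self.timestamps.append(ts)
        self.count += 1
        while self.timestamps and self.timestamps[0] <= ts - self.window:
            self.timestamps.popleft()
            self.count -= 1
        return self.count

w = SlidingWindowCount(600)          # 10-minute window, as in the example above
for t in (0, 60, 120, 650):
    current = w.on_event(t)
print(current)  # 3: the event at t=0 has aged out of the 10-minute window
```

Each arriving event triggers O(1) amortized work, which is what keeps latency low even at high event rates.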
After processing, an appropriate response is triggered, and in some cases, a copy of the complex events and some (or occasionally all) of the input base events may be stored in a database or forwarded to other applications. This could be called a process-and-save paradigm. However, in most cases, the amount of data saved is much smaller than the input data, because the volume of input may be enormous and the business doesn't want to keep superfluous detail.
Use CEP in the following cases:
- Latency must be low (typically less than 100 milliseconds, but sometimes less than one millisecond, between the time that an event arrives and the time it is processed).
- Volume of input events per second is high (typically hundreds or a few thousand events per second, but into millions of events per second in extreme applications).
- Event patterns to be detected are complex (such as patterns based on temporal or spatial relationships).
No technology is capable of handling all types (real-time, near-real-time and offline) of big data processing
CEP, IMDG, in-memory databases, MapReduce and other big data technologies are naturally complementary because they address different aspects of handling high-volume, high-velocity and high-variety data. In addition, CEP can be used to:
- Filter arriving event data before it is loaded into the Hadoop Distributed File System (HDFS) so that extraneous data is omitted before a MapReduce process is undertaken.
- Process incoming base events into more-meaningful complex events to be stored in HDFS.
- Run in parallel with a MapReduce process, so events are delivered to Hadoop (or another big data system) and the CEP engine simultaneously.
In one real example of simultaneous usage, Web events are delivered to a CEP engine through the Hadoop Flume facility. The CEP system watches in real time for indications of security breaches so that immediate action can be taken to interrupt the problem as it is unfolding. The same incoming base Web events are loaded from Flume into HDFS for deeper, post hoc analysis a few hours or days later. The CEP system will detect security breaches that are new instances of a known pattern, whereas analysts can investigate the data after it has been mapped and reduced to discover previously unknown patterns that indicate security problems.
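The side-by-side pattern described above can be sketched as a simple fan-out. This is a hypothetical toy (in-memory lists stand in for Flume/HDFS, and the repeated-failed-login rule is an assumed "known pattern"): every event goes to the batch store for later analysis and, simultaneously, through a real-time CEP check.

```python
# Fan-out sketch: each incoming web event is appended to a batch store
# (standing in for Flume/HDFS) and checked in real time (the CEP path).
batch_store = []
alerts = []

FAILED_LOGIN_LIMIT = 3          # assumed threshold for the known breach pattern
failed_logins: dict[str, int] = {}

def on_web_event(event: dict) -> None:
    batch_store.append(event)                 # offline path: keep for post hoc analysis
    if event["type"] == "login_failed":       # real-time path: known breach pattern
        n = failed_logins.get(event["user"], 0) + 1
        failed_logins[event["user"]] = n
        if n >= FAILED_LOGIN_LIMIT:
            alerts.append(f"possible breach: repeated failed logins for {event['user']}")

for ev in [{"type": "login_failed", "user": "bob"}] * 3:
    on_web_event(ev)
print(len(batch_store), alerts)
```

The real-time path catches known patterns immediately; the saved copy supports the deeper MapReduce-style analysis that finds patterns nobody thought to query for.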
Some of the design principles used in MapReduce are also being applied to leading-edge event-processing platforms. Conventional event-processing platforms were designed as hubs and typically had a centralized topology. Newer event-processing platforms have added parallelism and, separately, can also be distributed geographically to implement hierarchical event-processing networks.
Some event-processing platforms, including IBM's InfoSphere Streams, HStreaming's Cloud and Enterprise, and Vitria's M3O Analytic Server, implement a MapReduce-style grid architecture that potentially scales efficiently to hundreds or more nodes, each handling many thousands or more events per second. HStreaming also leverages standard Apache Hadoop technologies such as Pig, HDFS, and Zookeeper, with some adjustments to accommodate the real-time nature of event processing. These event-processing platforms use MapReduce in a different way than it is implemented for more-typical MapReduce processes. The event-processing platforms do the mapping incrementally as the data arrives (mapping data in motion) and the data is reduced almost immediately thereafter, rather than the usual batch-style MapReduce processes. This provides CEP-class latencies in the milliseconds or seconds, compared to standard MapReduce latencies of ten minutes or more. These systems scale higher than single-hub event-processing software platforms, while running much faster than common MapReduce systems. Other event processing platforms use different (not MapReduce) approaches to parallelism to allow them to scale up.
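The "mapping data in motion" idea above can be illustrated with a toy sketch (the event shape and keys are assumptions for illustration): each event is passed through a map step and folded into the running reduction the moment it arrives, instead of waiting for a batch job over accumulated files.

```python
# Sketch of incremental map/reduce on data in motion: map each event as it
# arrives, then fold the emitted pairs into continuously maintained state.
from collections import defaultdict

running_totals = defaultdict(int)  # the continuously maintained "reduce" state

def map_event(event: dict):
    # map step: emit (key, value) pairs for a single event
    yield (event["symbol"], event["shares"])

def on_event(event: dict) -> None:
    for key, value in map_event(event):  # reduce step applied immediately
        running_totals[key] += value

for e in [{"symbol": "XYZ", "shares": 100},
          {"symbol": "ABC", "shares": 50},
          {"symbol": "XYZ", "shares": 25}]:
    on_event(e)
print(dict(running_totals))  # {'XYZ': 125, 'ABC': 50}
```

Because the reduction state is always current, results are available with millisecond-to-second latency, whereas a batch MapReduce job must first collect the data and then scan it end to end.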
- Use CEP to help big data applications load high-volume event data more efficiently and effectively by filtering out unwanted data and generating complex events that have more value than the arriving, raw base events.
- Run CEP, in-memory databases and batch-oriented big data systems side by side when the same input needs to be processed in real-time (or near-real-time) and also to be saved for deeper post hoc analysis.
- Use big-data style design concepts (such as MapReduce) and tools, with appropriate adjustments, as one way to improve the scalability of event processing platforms.
| Term | Definition |
|---|---|
| CEP | Complex-event processing — performing computation (including generating, reading, discarding or performing calculations) on complex events, which are events that are abstractions of one or more other events. |
| Continuous intelligence | Nonstop monitoring of events to generate insight into conditions. Continuous intelligence is an active, self-triggered analytical process, in contrast to on-demand analytics that run when requested by a person or software component. |
| Latency | The time it takes for a system to respond to an input. |
| OT | Operational technology: hardware and software that detects or causes a change, through the direct monitoring and/or control of physical devices, processes and events. |
| Situation awareness | Knowing what is going on so you can decide what to do. |
- FeedZai Pulse
- Guavus Reflex
- HStreaming Cloud, Enterprise
- IBM InfoSphere Streams
- IBM Operational Decision Management
- Informatica RulePoint
- LG CNS EventPro
- Microsoft StreamInsight
- OneMarketData's OneTick CEP
- Oracle CEP
- Progress Software Apama CEP
- Red Hat Drools Fusion/JBoss Enterprise BRMS
- ruleCore's Reactive CEP Rule Server
- StreamBase Systems Event Processing Platform
- SAP Sybase Event Stream Processor
- Software AG Business Events
- Starview Enterprise Platform
- Tibco BusinessEvents
- Vitria Technology M3O Analytic Server
- WestGlobal Vantify