Magic Quadrant for Application Performance Monitoring
In 2011, about $2 billion will be spent worldwide on application performance monitoring licenses and first-year maintenance contracts. This is a 15% increase over the $1.7 billion spent on APM in 2010, which grew by approximately 10%, compared with global spending in 2009.
This document was revised on 28 September 2011. For more information, see the Corrections page.
The $2 billion market for application performance monitoring (APM) technologies is subdivided into five dimensions of functionality:
- End-user experience monitoring
- Application runtime architecture discovery, modeling and display
- User-defined transaction profiling
- Component deep-dive monitoring in application context
- Application performance analytics
Source: Gartner (September 2011)
This Magic Quadrant assesses the APM market. First, we will define the market and explain why it's important to treat the five functional dimensions that together constitute its technology foundation in an integrated manner and to distinguish the APM market from others that are closely related to it. Next, we will set out the qualifying criteria for inclusion in this research and then survey some general characteristics of each of the resulting quadrants. This research will then conclude with a brief study of each participating vendor's APM strengths and weaknesses.
Defining the Market
Gartner defines an application as a set of algorithms with four characteristics:
- The execution of some members of the set may be initiated by an end user.
- The sequencing of some of the steps in some members of the set is dictated by business logic.
- The algorithms work in concert with one another as they execute.
- If the algorithms complete their concerted execution successfully, then they achieve well-defined goals that meet the established requirements of some end users or end-user communities.
Gartner defines APM as a process with five elements:
- It tracks in real time the execution of the algorithms that constitute an application.
- It measures and reports on the scarce hardware and software resources that are consumed as the algorithms execute.
- It determines whether the application has executed successfully.
- It records the latencies associated with some of the execution-step sequences.
- It determines why an application has failed to execute successfully or why resource consumption and latency levels have departed from expectations.
Market Size and Growth
With these definitions in mind, Gartner estimates that, in 2011, approximately $2 billion will be spent globally on APM licenses and first-year maintenance contracts. This represents a 15% increase, compared with the $1.7 billion spent on this technology in 2010, which, in turn, grew by about 10% over the global spend in 2009. These figures do not include the revenue associated with subscriptions to "APM as a service" offerings, although they do include technology purchases made by service providers that then go on to use such technologies as platforms for the provision of APM-managed service offerings.
Although rarely acknowledged and called out as a distinct IT operational process before 2005, APM has been an implicit part of IT operations since the mid-1990s. Vendors such as Patrol, EcoSystems Software, Mercury Interactive and Candle (eventually acquired by BMC, Compuware, HP and IBM, respectively) and service providers such as Keynote provided critical monitoring functionality to enterprises during the period in which classical client/server application architectures began to be set aside in favor of multitier, Web-oriented constructs. Nonetheless, since 2005, there has been an increasingly explicit acknowledgment of and high-profile focus on the APM process among Global 2000 enterprises. Gartner estimates that 20% of the Global 2000 are trying to reconstruct all their IT operational process frameworks in ways that accord the monitoring and management of applications, rather than infrastructure, a central place.
The factor most responsible for the increased attention now being paid to the APM process and the tools and services supporting it does not come from IT, but from the business side of the enterprise which has, during the past decade, fundamentally changed its attitude toward IT in general. Line of business and C-level executives now generally recognize that IT is not just infrastructure that supports background workflows, but is also, and more fundamentally, a direct generator of revenue and a key enabler of strategy.
However, heightened executive appreciation of IT has also meant greater executive willingness (and ability) to impose a model of what's important about IT on IT operations professionals. For the typical executive, what's important is the application portfolio that directly enables him or her to accomplish business goals. Unfortunately, at just the moment when executives have become keen about imposing an application-centric view of the world on IT operations, applications have become far more difficult to monitor; in general, architectures have become more modular, redundant, distributed and dynamic, often laying down the particular twists and turns that a code execution path could take at the latest possible moment.
The combined impact of modularity, redundancy, distribution and dynamism undermined the effectiveness of the technologies and techniques that traditionally supported the APM process. Products such as those offered by Patrol and EcoSystems Software represented minor variations on classical event correlation and analysis. A comparatively small number of thresholds indicating unacceptable levels of resource consumption were set in advance, and, should those thresholds be transgressed during the course of an application's execution, the transgression would be recorded and sent to a screen or data store.
Because application code was developed, stored and processed in large contiguous blocks, it was, in theory at least, possible to infer the application's overall state of health or lack thereof from a few threshold transgression signals. It was, in particular, generally assumed that such a sparse set of threshold transgression signals correlated reasonably well with application availability and latency as experienced by end users. If anything further was needed, especially when users accessed applications over a variably performing Internet, monitoring could be supplemented by technologies from vendors such as Candle or service providers such as Keynote that supported the launch of stereotyped synthetic transaction scripts from strategically placed software robots standing in for typical end users.
Modularity, redundancy, distribution and dynamism, however, combined to ensure that:
- Data gathered from the execution of one region of application code provided little information about what was happening at other times in other regions.
- Thresholds could no longer be meaningfully set a priori.
- Stereotyped synthetic transactions could no longer be defined in advance, as user interactions with an application became increasingly varied, and availability and response time characteristics became increasingly sensitive to even small differences in space and time.

In other words, a new approach to APM technology was required, and, between 2005 and 2008, the outlines of such an approach began to appear in the ways in which technology buyers conceptualized the APM problem space and how vendors were augmenting their product portfolios.
An Overview of Five-Dimensional APM
The fundamental problem was that applications built according to modern architectural principles needed to be monitored in a holistic, end-to-end manner. Detail remained important, of course, but that detail had to be set into a well-understood overall picture of system behavior. To that end, five distinct dimensions of, or perspectives on, end-to-end application performance have been assembled by market participants, each one essential and complementary to all the others:
- End-user experience monitoring — the capture of data about how end-to-end application availability, latency, execution correctness and quality appeared to the end user
- Runtime application architecture discovery, modeling and display — the discovery of the software and hardware components involved in application execution and the array of possible paths across which these components could communicate to enable that involvement
- User-defined transaction profiling — the tracing of events as they occur among the components or objects as they move across the paths discovered in the second dimension; this is generated in response to a user's attempt to cause the application to execute what the user regards as a logical unit of work
- Component deep-dive monitoring in application context — the fine-grained monitoring of resources consumed by and events occurring within the components discovered in the second dimension
- Analytics — the marshaling of techniques, including behavior learning engines, complex-event processing (CEP) platforms, log analysis, and multidimensional database analysis to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM
Although the technologies underlying each of these dimensions are typically deployed by different communities in an enterprise, and the dimensions themselves reflect the concerns of different stakeholders, there is, nonetheless, a high-level, circular workflow that weaves the five dimensions together. As a first step, end-user experience monitoring would pick up a problem as it affects the application's user.
Next, the application's runtime architecture would be generated and/or surveyed to establish the potential scope of the problem. After that, user-defined transactions would be examined, as they flow across some subset of the possible paths exhibited by the runtime architecture graph, to ascertain what nodes in that graph are the sources of the problem affecting the end user. Next, deep-dive monitoring of those nodes is carried out in the context of the results of the previous three steps. Finally, analytics are brought to bear to establish the root cause in the midst of the vast volumes of data generated in the first four steps, as well as to better anticipate and prepare for end-user experience problems that could emerge in the future.
End-User Experience Monitoring
End-user experience monitoring is particularly central to the APM process for three reasons:
- Because the end user's experience is precisely where business process execution interfaces with the IT stack, any monitoring effort that fails to capture the end user's experience is ultimately blind at the most direct point of encounter between IT and the business.
- For heavily modular, redundant, distributed and dynamic application architectures, the application may exist as an integral whole only when the end user is accessing it — i.e., the end user's experience may be the only vantage point from which the application meaningfully exists.
- End-user experience monitoring can be explained and justified to line-of-business and C-level executives in a relatively straightforward and visceral manner — e.g., everyone understands what it means, and can guess the impact of an unavailable application, or one with high end-user latency, on business process execution or customer satisfaction.
The four fundamentally distinct technologies being deployed for the purposes of end-user experience monitoring are described in the sections that follow.
Synthetic Transaction-Based Proxy Monitoring of the End-User Experience: The oldest commercialized approach to capturing end-user experience data, this technology requires the placement of software robots at various locations near which users might access an application; the robots then run scripts against the application in production and record response time and availability results. This technology has been frequently criticized because it does not deliver information about real user experiences, because it relies on a set of hard-to-create and even harder-to-maintain scripts, and because, by its nature, it places added traffic burdens on the application and the application's supporting infrastructure.
During the past year, however, interest in this approach has, to some extent, revived:
- Its reliance on scripts enables the testing of an application's performance characteristics during off-hours. This anticipates what could happen when real users begin to come online, and it aligns the technology closely with testing and quality management platforms. Hence, it's well-suited to the revived interest in application life cycle management (ALM).
- It's easy for a service provider to offer APM monitoring based on this approach, provided there is a sufficient upfront investment in points of presence. Because the technology is fundamentally noninvasive, it can even be packaged in a software-as-a-service (SaaS) delivery model.
Given the growing complexity at the edge of the Internet, the inaccessibility of that edge to monitoring systems located in the data center, and the emergence of similarly inaccessible public cloud-based services that make up at least part of what an end user experiences as end-to-end application performance, this approach is, for many enterprises, a necessary outside-in supplement to any inside-out performance vision achieved by data-center-bound technologies.
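The mechanics of such a software robot are simple: time a scripted interaction against the production application and record availability and latency. The following is a minimal sketch in Python; the plain HTTP GET standing in for a script, the probe URL and the 200-status success criterion are illustrative assumptions, not a description of any vendor's implementation.

```python
import time
import urllib.request


def timed_call(fn):
    """Run fn(), returning (succeeded, latency_seconds).

    Any exception counts as an availability failure; latency is
    recorded either way, since a slow failure is still informative."""
    start = time.monotonic()
    try:
        fn()
        return True, time.monotonic() - start
    except Exception:
        return False, time.monotonic() - start


def probe(url, timeout=10):
    """One synthetic 'robot' step: fetch the page and require HTTP 200."""
    def step():
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                raise RuntimeError(f"unexpected status {resp.status}")
    return timed_call(step)
```

A real deployment would run many such scripted steps from geographically distributed points of presence on a schedule, and ship the (succeeded, latency) pairs to a central store for reporting.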
Packet Capture and HTTP-Based Analysis of Real End-User Experience: Gartner estimates that, since September 2010, approximately 70% of the new deployments of end-user experience-monitoring capabilities among Global 2000 enterprises have been built on appliances (both real and virtual) that capture packets, typically from switched port analyzer (SPAN) ports, and analyze their HTTP content for data regarding availability, keyboard-to-eyeball response time and user-defined transaction completion rates. Ease of implementation and maintenance, combined with the fact that such appliances capture real (as opposed to proxy) end-user experience, have been the main factors driving the popularity of this approach. Furthermore, as many vendors have begun to deliver their appliances in a virtual or "micro" (i.e., smartphone-size) format, even the historical objections to hardware appliance-based monitoring technology are beginning to erode.
Nonetheless, the past 12 months represent the crest of this technology's popularity for three reasons. First, the appliances used to monitor network performance are almost identical in structure and function to the HTTP-based end-user experience-monitoring appliances, except that the results they report, while drawing on the same packets, are based on different protocol content (e.g., IP and MPLS). Hence, a growing number of enterprises are beginning to consider the deployment of a converged monitoring framework — i.e., a single box (or set of boxes) that will read the packet stream, but perform different analyses depending on whether network administration or applications support staff are making use of the technology. Indeed, during the past year, a number of vendors historically positioned exclusively in the network performance monitoring market have entered the APM market, primarily by extending their appliances' analytic capabilities to HTTP content.
Secondly, since 2005, most APM projects have been targeted at multitiered, Web-based applications. Although such applications remain a central object of concern, there is, for many Global 2000 enterprises, a sense that at least the basics of the multitiered Web-based APM have been solved, and attention is now turning to the still-numerous thick-client architectures associated with off-the-shelf packages, such as SAP, and the growing number of Citrix XenApp-based, thin-client architectures. Of course, HTTP does not fit into either scenario; hence, neither do HTTP-based appliances. Here too, a number of vendors are responding to the situation by offering coverage of a broader array of protocols; however, the challenges are considerable, because these protocols are often not very informative about end-user experience (e.g., Citrix's ICA) or are coded and must be reverse-engineered (e.g., SAP's dcodes).
Finally, packet capture and analysis appliances are, in whatever number of protocols they may have fluency, data-center-bound. Hence, they're blind to the increasing number of sources of latency and other performance problems that originate in cloud services that lie along an application's execution path between the firewalls of the data center and the point at which the application's services are consumed. Even more immediately, they're blind to the sources that lie within the increasingly complex and obscure Internet edge.
Endpoint Instrumentation: Originally emerging in response to the challenge of monitoring thick-client applications, an approach to end-user experience monitoring that relies on instrumenting the device through which the user accesses the application has, during the past 12 months, been deployed by approximately 5% of the Global 2000, but Gartner expects this percentage to grow significantly during the next 12 months. The device-situated agent not only collects data effectively for thick-client applications, but its location also means that the technology can provide insight into how the user interacts with the entire portfolio of applications accessed through that device. In other words, the technology opens up the possibility of moving from a focus on end-user experience monitoring to end-user behavior monitoring.
Finally, although this possibility is contingent on the emergence of more lightweight form factors than have as yet been achieved, endpoint instrumentation technology could be a way to capture the end-user experience of application consumption on mobile devices (such as smartphones and tablets) and, thereby, provide some insight into performance problems originating at the Internet edge.
Web Page Script Injection: In 2008, Steve Souders of Google published an influential white paper entitled "Episodes," which single-handedly revived interest in a late-1990s-vintage approach to monitoring the end-user experience of Web-based applications through the injection of monitoring functionality, written in JavaScript, directly into the code of a Web page. As the Web page is rendered, the monitoring capability "wakes up" and starts to collect real user-experience-related performance information precisely where the application is being accessed. Although this technology is, like the packet capture and HTTP analysis approach, limited to Web-based application architectures, the location of the JavaScript "mini-agent" within the Internet edge enables it to capture data about the impact of Ajax constructs and other Web 2.0 objects on application performance and even to take into account the state of the device through which the application is being accessed.
Although commercialized versions of this technology have been on the market for more than two years, the more recent introduction of automated script injection capabilities has underscored its attraction. Gartner expects that, during the next 12 months, many vendors and service providers will add this technology to their end-user experience-monitoring offerings.
Application Runtime Architecture Discovery and Display
With the adoption of application development (AD) practices based on Agile-inspired methodologies and a general business-driven effort to increase application developer productivity, the rate at which new code changes are being introduced into the production environment has increased by an order of magnitude. If an IT organization wants to manage the impact of nonstop change on application performance, a good understanding of the interdependencies among hardware and software components presupposed or created by an application as it executes is critical.
Although the focus of APM has been on the three dimensions of end-user experience monitoring, user-defined transaction profiling and application component deep-dive monitoring, during the past year, increased attention has been paid to the second dimension: runtime architecture discovery and modeling. This heightening of interest can be attributed to five factors:
- The increasingly widespread mainstream deployment of Web-oriented architecture (WOA) styles, such as service-oriented architecture (SOA), representational state transfer (REST) and computational REST (CREST)
- The growing complexity and architectural modularization of off-the-shelf application packages
- The acceleration of the AD process under the impact of methodologies such as agile
- The growing recognition that the effective deployment of cloud-based services will require particularly careful understanding of the slots in the application architecture into which those services can fit
- The growing recognition that the models delivered by more-traditional service-dependency mapping are, in many cases, oriented more toward understanding the affinities and linkages among physical and virtual configurations caused by their common support of an application than they are oriented toward laying out the relationships among the hardware and software components that support an application at runtime
The net result of these five trends is that teams charged with monitoring application performance are now being asked to observe and analyze the performance of large numbers of applications, about which no one — including the code authors — can definitively say how they're executing in the production environment. This is due partly to the complexity and the sheer number of independent modules out of which these applications are built; partly to the fact that many variables in these applications bind late during the course of an application's execution, increasingly taking into account local environmental conditions that prevail at the time of binding; and partly to the high levels of abstraction at which a modern application developer designs and creates code, which ensures that the developer understands only the new functionality that he or she adds, while the bulk of the functionality actually invoked by an application in execution remains hidden behind exposed interfaces.
Service-dependency mapping tools, such as BMC's ADDM or HP's DDMa, have been typical of the products that APM vendors have brought into play in response to requests for application component discovery and modeling capabilities. Vendors provide out-of-the-box "blueprints" of systems, components, databases, application servers and some application constructs as well (e.g., SAP). Using these blueprints, they observe and analyze the patterns and headers of the network traffic among infrastructure elements, allow these patterns to form the basis of a topological diagram, and then associate the diagram's elements, either manually or via IP address or port analysis, to create an IT service model or application construct.
These constructs are suitable for various scenarios (change impact assessment, incident correlation to an IT service, gap analysis to architecture tools), but ultimately they tell the APM team little about how the logical software elements of an application interact to support an execution path or how those logical elements interact individually with components at the infrastructure layer. This lack of detail limits the usefulness of models generated by the IT service-dependency mapping system, particularly as the structure of application software becomes ever more modular, intricate and abstract.
As a consequence, APM vendors that have both service-dependency mapping capabilities and user-defined transaction-profiling capabilities in their portfolios have begun to supplement their dependency-mapping system models with snapshots of the transaction paths generated by their user-defined transaction-profiling functionality. The idea behind this hybrid approach is straightforward. The transaction path will reveal many aspects of the logical structure of an application as it manifests itself at runtime, while providing only a vague picture of the lower layers of infrastructure, physical or virtual, which the dependency-mapping model supplies in detail. Together, the two technologies, it is argued, can provide the user with a genuinely comprehensive model, capturing the bottom-up, infrastructure-centric aspects of an application alongside the top-down, transaction-path-centric aspects.
The hybrid approach has much to recommend it. There are, however, two issues that users need to keep in mind. First, no major vendor provides any kind of deep integration between the two technologies. This means that the resulting comprehensive models resemble two independent topology maps sitting side-by-side with a few threads connecting them, rather than a single, coherent portrait of an application's runtime architecture. Second, the fact that a modern application's execution path can follow a different course each time that application is invoked means that the transaction-path snapshots will have limited validity, unless multiple snapshots are in some way overlaid on top of one another, and an aggregate trace is generated. Even then, it's possible for transaction paths to gradually change, meaning that the snapshots need to be recreated on a regular basis.
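The overlaying of multiple snapshots into an aggregate trace can be sketched simply. In the toy representation below, each snapshot is assumed to be just an ordered list of component names; adjacent pairs become directed edges whose weights count how often a transaction actually took that hop, so rarely used and stale paths are visible by their low counts.

```python
from collections import Counter


def aggregate_paths(snapshots):
    """Overlay many transaction-path snapshots into one weighted graph.

    Each snapshot is an ordered list of component names; each adjacent
    pair becomes a directed edge whose weight records how many of the
    observed transactions traversed that hop."""
    edges = Counter()
    for path in snapshots:
        for src, dst in zip(path, path[1:]):
            edges[(src, dst)] += 1
    return edges
```

Because execution paths drift over time, such an aggregate would need to be rebuilt on a rolling window rather than computed once, which is exactly the recreation burden noted above.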
In a few instances, vendors have begun to add a third layer to the application model, namely one taken from some kind of business process modeling and monitoring capability. The third layer is usually drawn from an analysis of the payload of the same transactions used in the second layer, or it is constructed independently, typically using some kind of BPEL-based notation, the result of which is then mapped onto the transaction-path snapshot. The potential to create an application model with direct links to a business process map is great, although interest is limited, because the enterprise communities charged with modeling and monitoring applications are primarily associated with IT, while those focused on business process modeling are on the business side.
Besides service-dependency models, transaction-profile snapshots and BPEL diagrams, two further layers occasionally contribute to the construction of a comprehensive picture of an application's runtime architecture. First, there are network topology diagrams, which are intended primarily to display physical devices and the physical links that allow these devices to communicate with one another. Understanding how network topologies interact with other aspects of an application's architecture will become increasingly important in light of the emergence of converged monitoring frameworks, discussed in the subsection on end-user experience monitoring. Second, there are Bayesian networks, which display random variables describing statistical properties of a system's behavior, with the links among the variables representing the conditional dependence of one variable on another.
Although typically not as detailed as a model based on service dependency or transaction profiling, Bayesian network graphs must, at a minimum, be consistent with the generated application runtime architecture models if such technology is to be applied for the purposes of discovering large-scale normal patterns in application performance. Bayesian network technology is an increasingly important element of application performance analytics, and Gartner expects that, during the next 12 months, the coordination of runtime architecture models and Bayesian network graphs will become a key process associated with the second dimension of APM.
User-Defined Transaction Profiling
Between 2005 and 2010, user-defined transaction profiling had been, after end-user experience monitoring, the most-asked-about dimension of APM by Gartner clients. However, during the past 12 months, there has been a lessening of interest on the part of Global 2000 enterprises.
The term "transaction" has many meanings. Historically, it meant a logical unit of work — i.e., the net result of a sequence of interactions with a database that took that database from one consistent state to another. Consistency was defined in terms of preset business or semantic rules. In fact, all transaction processing management (TPM) systems, from IBM's CICS to Amazon's Dynamo, presuppose this interpretation, as their common goal is to ensure (at least, eventually) database consistency, even in the face of millions of concurrent interaction attempts and an unending round of local and global system failures.
In the context of APM, however, the term "transaction" means something else. When they engage the services of a modern, composite application, users or customers will typically perform a number of distinct operations that — however unconnected they may be from the point of view of the systems being accessed and exercised — form an integral action from the perspective of that user or customer. This "integral action" is what most APM vendors mean by the term "transaction." The user-defined transaction-profiling technologies will attempt to trace the effects of this transaction across an array of components and network paths.
As a first approximation, there is only a loose relationship between the concept of a logical unit of work and the concept of an integral action. Although most users will see a logical unit of work as one possible (albeit limited) type of integral action, an integral action can involve implicit and explicit interactions with multiple databases or with none at all. Furthermore, a transaction in the sense of a logical unit of work pertains, in most cases, to local events that take place deep within a system stack, while an integral action corresponds closely with a user's surface experience of the system and the system's global behavior. Since understanding user experience and end-to-end application performance are the key goals of most APM technology suites, it is only natural that the APM vendor community would concentrate its attentions on the integral action interpretation of the transaction term.
However, four problems stem from an exclusive focus on the user's integral perspective:
- In the end, the event stream initiated by an integral action does, in almost every instance, affect one or more databases somewhere along the way. It is precisely via those database interactions that the integral action causes a permanent change in the state of the application that the user is engaging. Failure to focus on database changes risks missing some of the permanent damage possibly caused by a poorly performing application.
- In many ways, it's accurate to view the entire contents of a Web-based domain as an extended database equipped with a hierarchical data model. With such a concept in mind, any integral action taken on a Web-based application can be viewed as a kind of logical unit of work applied to a database governed by weak consistency rules. At a minimum, such a view would provide better performance comparisons between Web-based and traditional architectural approaches to online transaction processing (OLTP).
- Most importantly, the integral action focus has led to a systematic neglect by APM vendors of DBMS internals and, even more problematically, of the increasingly significant performance bottlenecks attributable to storage system input/output (I/O).
- As the environments facing users at the Internet's edge become richer and support a hybrid of business and consumer application services, it is becoming increasingly difficult to describe user behavior as integral; indeed, users are increasingly prone to "meander and graze" on application services, making it difficult to point to anything that resembles a logical unit of work, even when that concept is pushed to a metaphorical extreme.
The latency experienced by a user or customer can be broken down into a number of segments: edge time (the latency attributable to the device, browser and experience-enriching code, such as Ajax or Flash); network transport time; concurrency and recovery manager time; database compute time; and storage I/O time. Historically, database compute time and storage I/O time have been lumped together for the purposes of performance analysis (i.e., database time), as have edge time and network time.
From client inquiries, Gartner has ascertained that, in the early part of the decade, approximately 60% of the latency experienced by the end user was attributable to network time, 30% to database time, and the rest to concurrency and recovery manager time and edge time. During the past two years, however, there has been a significant shift in the latency weights away from the center and toward the periphery. The current average breakdown stands closer to 50% network time and 40% database time; however, anecdotal evidence suggests that half the network time latency is attributable to edge sources, while half or more of the database time is attributable to storage I/O. This "latency shift toward the periphery" has, during the past 12 months, pushed the sources of performance problems into regions of the application execution path where the construct around which most vendors have built their user-defined transaction-profiling systems is less relevant.
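The percentage breakdown described above can be made concrete with a simple decomposition, sketched below in Python. The percentage weights are the approximate figures quoted in the text; the segment names, the 1,000ms total and the even halving of the network and database shares are illustrative assumptions.

```python
# Illustrative decomposition of end-user latency using the approximate
# percentages cited above. The figures are rough estimates, not
# measured data, and the 1,000ms total is an arbitrary example.

def decompose(total_ms, weights):
    """Split a total end-to-end latency into named segments."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return {segment: total_ms * share for segment, share in weights.items()}

# Early-decade average: ~60% network, ~30% database, remainder split
# between concurrency/recovery management and the edge.
early = decompose(1000.0, {
    "network": 0.60,
    "database": 0.30,
    "concurrency_and_edge": 0.10,
})

# Current average: ~50% network, ~40% database. Anecdotally, about half
# of "network" time is really edge time, and half or more of "database"
# time is really storage I/O -- the "latency shift toward the periphery".
current = decompose(1000.0, {
    "edge": 0.25,             # half of the 50% network share
    "network_transport": 0.25,
    "storage_io": 0.20,       # half of the 40% database share
    "database_compute": 0.20,
    "other": 0.10,
})

# Share of latency now sitting at the periphery (edge plus storage I/O):
# 25% + 20% = 45% of the total, versus roughly 5% to 10% a decade earlier.
periphery_ms = current["edge"] + current["storage_io"]
```

Under these assumed splits, nearly half the end-to-end latency sits outside the regions that center-focused transaction-profiling constructs were designed to observe.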
A key consequence of the decline in relevance has been a shift in buyers' preferences among the types of technology deployed to support user-defined transaction profiling. Almost all such systems function on two levels. First, there is a data collection mechanism, which gathers information about the user-defined transaction from the various sources where it needs to be captured. Second, there is the stitching engine, which is necessary because, in the end, the execution of a user-defined transaction is a disconnected partial ordering of events that take place at various points within the hardware and software components marshaled by the application. No matter how much preparation is done in advance, it is impossible to ensure that all of the events bear some marker that positively associates them with the consequences of some integrated set of actions undertaken by a user. Hence, there is always the need to supplement the data collection with some kind of engine that explicitly reviews the data collected and determines when different datasets do or do not represent different stages in the flow of a user-defined transaction across the infrastructure and the application stack.
There have been three basic approaches to data collection and stitching.
Dope and Trace: This approach requires the user to dope the application code with a tag or an "isotope" that gets inserted in packet payloads as they make their way from hop to hop and also gets passed from computational step to computational step within a node. Strategically placed sensors then read the isotope's traces, which usually take the form of partial paths. The stitching engine then gathers a collection of such partial paths, filters them and guesses at the gaps among the partial paths surviving the filter. Finally, it constructs an end-to-end profile from the filtered paths and guesses.
Recognize and Trace: This approach eschews the initial doping process. Instead, it uses smart sensors that are capable of recognizing packet protocol or payload features that remain constant or evolve in a predictable way as packets associated with a given user-defined transaction disperse across the infrastructure. The stitching engine then works similarly.
HTTP Time Stamp Analysis: This approach uses two or more of the same kind of packet-capture and HTTP content analysis appliances used for end-user experience monitoring. They're placed at various packet collection points around the infrastructure. The stitching engines then rely almost exclusively on time stamping to build up a user-defined transaction profile.
Although the need for code decoration has always made users slow to adopt the first approach, it is generally (and correctly) believed to be the most accurate of the three, and, for many enterprises, that higher accuracy was sufficient to outweigh the problems associated with the upfront doping process. The latency shift to the periphery, however, has reduced the accuracy of all three approaches and, in many cases, turned the balance against dope and trace. (The inaccuracy pertains to the ability of the dope-and-trace technology to provide an end-to-end profile across the entire infrastructure. This critique does not affect the technology's internode-tracing capabilities, and there are situations in which such a technology has been adopted primarily for its ability to understand paths within one or more node types.)
In some cases, this has meant a greater acceptance of one or the other of the latter two approaches. However, more commonly across the market, there is a demand for hybrid approaches that combine dope and trace, recognize and trace and HTTP time-stamp analysis. There is broad acknowledgment that most user-defined transaction-profiling technologies will, at best, provide a heuristic for root-cause fault location, instead of a completely reliable real-time portrayal of what is happening across the infrastructure as an application executes.
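As a rough illustration of why time-stamp-based stitching is only a heuristic, the Python sketch below links events captured at different collection points into candidate transaction profiles purely by payload signature and time stamp proximity. All names, fields and thresholds are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of the time-stamp stitching heuristic: events captured
# at different collection points carry no shared transaction marker, so
# the engine orders them by time stamp and appends an event to an open
# profile when it falls within a plausible window of the previous hop.

from dataclasses import dataclass

@dataclass
class Event:
    collection_point: str   # where the packet was captured, e.g. "web"
    timestamp: float        # seconds; assumes reasonably synced clocks
    signature: str          # recognizable protocol/payload feature

def stitch(events, max_gap=0.5):
    """Group time-ordered events into candidate transaction profiles.

    Events with the same signature occurring within `max_gap` seconds
    of the previous hop are treated as stages of one user-defined
    transaction. Clock skew or a slow hop can split one transaction
    into two profiles, or merge unrelated ones -- hence "heuristic".
    """
    profiles = []
    open_profiles = {}  # signature -> currently open profile
    for ev in sorted(events, key=lambda e: e.timestamp):
        current = open_profiles.get(ev.signature)
        if current and ev.timestamp - current[-1].timestamp <= max_gap:
            current.append(ev)
        else:
            current = [ev]
            open_profiles[ev.signature] = current
            profiles.append(current)
    return profiles
```

The fragility of the time window is exactly why hybrid approaches that add payload markers or protocol recognition improve on pure time stamping.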
Component Deep-Dive Monitoring in Application Context
The fourth dimension refers to a broad collection of technologies and products designed to monitor the performance of the various hardware and software components that support the execution of an application. What distinguishes component deep-dive monitoring in application context from performance-monitoring technology in general is the ability to associate the latency or resource consumption being measured with the application causing the latency or resource consumption.
In practice, such monitoring is largely confined to off-the-shelf application stacks, middleware (e.g., application servers, message queuing systems and service buses), database management systems (DBMSs) and aspects of network packet flow. To achieve the appropriate level of detail, some kind of instrumentation is inevitably required, but the nature of that instrumentation varies widely.
Although the primary audience for deep-dive monitoring technology is application support and IT operations staff, the increased rate of application code change and the consequent need for greater communication and interaction between AD and production in general has meant that, particularly in the middleware space, AD teams have become increasingly important influencers (and, in some cases, direct buyers) of such technology.
Application Performance Analytics
All technologies in the first four dimensions of APM tend to generate large volumes of data. Given the highly modular, distributed and dynamic nature of many applications, these datasets have a low signal-to-noise ratio. These factors have made the ability to correlate within and across the various datasets prepared by APM systems an essential prerequisite for deriving value from an APM investment. In fact, the subject of analytics has risen from the fifth most-asked-about dimension of APM to the second (after end-user experience monitoring) during the past 12 months.
Different types of data demand different types of analytic technology:
- Discrete, but complex event data — a stream of signals arriving in some kind of temporal sequence notifying the user that events describable in terms of multiple, possibly hierarchical, arrangements have occurred (e.g., a sequence of notifications about the location of a user-defined transaction as it hops across the various components of the infrastructure); this kind of data requires some kind of CEP capability if the information obtained is to be made intelligible in anything like real time
- System trace data — usually collected in the log files of the various components that, working together, support the execution of an application; these file updates record the changes of application component state brought about by the stream of events described by the previous type of data (e.g., a log file search, the results of which are used to help construct aspects of an application runtime architecture); this kind of data requires some kind of unstructured text searching/log file analysis mechanism that can rapidly pick out textual correlations and meanings from a volume of alphanumeric strings
- Data describing continuous changes of latency or resource consumption — response time data captured by end-user experience-monitoring technologies; this kind of data requires some kind of behavior learning engine capability that can discover the large-scale statistical patterns that govern the relationships among variables
- Multidimensional structured historical data — this involves events and time-stamped correlations among continuous variables — usually stored by either packet capture or deep-dive monitoring technology in some kind of high-density data storage capability; this kind of data requires multidimensional OLAP capabilities to adequately slice and dice the hypercubes generated by the multidimensional data points
Most APM systems offer some subset of these four styles of data analytics; however, Gartner anticipates that, during the next 12 months, systematic integrations of these four styles will become commonplace among vendors of comprehensive APM solutions.
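As a minimal illustration of the behavior-learning style applied to continuous response-time data (the third type above), the sketch below learns a statistical baseline from history and flags deviations from it. Production engines use far richer multivariate models; all names and thresholds here are illustrative assumptions.

```python
# Toy behavior-learning example: learn a (mean, stdev) baseline from
# historical response times, then flag samples that fall outside the
# learned band. Real engines model many correlated variables at once.

from statistics import mean, stdev

def learn_baseline(samples):
    """Learn a simple (mean, stdev) baseline from historical latencies."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, n_sigma=3.0):
    """Flag a response time more than n_sigma deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > n_sigma * sigma

# Historical response times (ms) representing "normal" behavior.
history = [100, 102, 98, 101, 99, 100, 103, 97]
baseline = learn_baseline(history)
```

With this history, a 250ms response would be flagged while a 101ms response would not; the point is that "normal" is discovered from the data rather than set as a static threshold.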
APM: Future Directions
The increased use of cloud-based infrastructure, the growing complexity and obscurity of the Internet edge, and the continued acceleration of the rate at which new functionality is moving from application development into production have the potential to drive further evolution in the five-dimensional model during the next 12 to 18 months.
First, end-user experience monitoring will become more important, because the point at which the end user accesses the hybrid application will often be the only point where end-to-end behavior can be observed. Furthermore, the character of end-user experience monitoring will need to evolve. Instead of focusing solely on metrics captured from the performance of individual applications, it will additionally have to concern itself with metrics describing the joint performance of the entire portfolio of applications available to the end user at any point in time, as well as the user's behavior with regard to that portfolio. A premium will also be placed on methods that allow the user behavior-monitoring system to infer how the user models the services made available by the application portfolio.
However imprecise and volatile those models may be, they will be the primary entities that the user regards as performing well or poorly. The presence of rich services lying at various points along an application's execution path between the data center and the user means that many aspects of end-user experience will not be inferable from data collection points within the data center itself. Some kind of data collection and analysis must take place from the edge in to supplement (or in some cases replace) monitoring from the inside-out style of more traditional approaches.
Second, runtime application architecture discovery and display will need to integrate an atlas of public cloud service providers, their interdependencies and, most importantly, a detailed picture of how the applications available to any given user are linked to public cloud service offerings. Furthermore, it will be critical to add a temporal dimension to the architecture display, where possible, showing typical deviations among the time infrastructures governing the services that the application is consuming. This latter requirement is driven by a hybrid application's typical lack of a single universal time infrastructure.
Third, user-defined transaction profiling will require an almost comprehensive overhaul. Not only does the presence of public cloud-based services within an application's runtime architecture limit the ability to trace a user-defined transaction as it hops from end to end across application tiers, but the user's increasingly unstructured interaction with a shifting collection of services, instead of a single application, also makes it difficult to maintain the idea of an application-specific, user-defined transaction. Instead, this functional dimension is likely to evolve into what is fundamentally a dynamic complement to the runtime architecture functionality discussed above. In other words, future APM platforms will be equipped to present a continually evolving image of an application's runtime architecture, which can be frozen and taken offline for analysis, rather than a separate module to trace the flow of transactions.
Fourth, application component deep-dive monitoring will remain much as it is now. However, it will expand its scope of coverage to include individual cloud services and hybrid application enablement technologies.
Fifth, analytics will acquire a priority second only to end-user experience monitoring. Precisely because of the complexities and the many (sometimes conflicting) layers of abstraction involved in grafting public cloud services onto on-premises-derived components, we do not, at present, have good models of the behaviors in one component that are related to behaviors in other components. Not only do we lack causal models, we lack observationally based correlational rules of thumb. Hence, behavior learning engines, as well as the other analytical disciplines such as CEP, log file analysis, and multidimensional modeling and analysis, will be critical in building up what constitutes a hybrid application's health or lack of health.
Many vendors discussed in this document are already taking steps toward this stage in the evolution of APM, and many aspects of their technologies and strategies become clear only in light of this change. Nonetheless, for all the real depth of change involved and the undoubted impact that the cloud and rich edge technologies will have on application architectures and IT in general, the five-dimensional model for APM, although incrementally transformed, will endure through the strategic-planning horizon.
Adjacent Markets; Overlapping Definitions
APM as a technology category is closely related to, and frequently confused with, four other technology categories. Unfortunately, vendors have often exploited and compounded the market confusion. The four related categories are described as follows.
Application Management (AM): APM technologies are a proper subset of application management technologies, which also include AD, testing, quality and release management, and application project portfolio management technologies.
Business Service Management (BSM): Business services are collections of IT functionality defined and presented in terms intelligible to business users or customers, and governed by service-level agreements (SLAs) stated and measured in terms that are relevant to business or customer concerns. BSM tools aid IT shops in dynamically linking the availability and performance status of IT infrastructure and application components to business services that enable business processes. BSM is compatible with APM, although given that, for many large enterprises, the business services of interest to users and customers are applications, we find an increasing number looking at APM as an initial step on a longer road to BSM. In such cases, a better way of relating BSM to APM would be to say that business services decompose into user-defined transaction types, the execution of instances of which are monitored by APM technologies.
Business Process Monitoring (BPM): BPM technologies monitor the execution of a modeled business process flow. Most business processes in large enterprises contain some paths that are executed by means of applications, and it would be conceivable for a BPM platform to hand off the monitoring of those application-enabled paths to an APM system, particularly those dimensions associated with end-user experience monitoring and user-defined transaction profiling. In practice, few BPM platform implementations capitalize on that possibility, although a number of APM vendors (e.g., HP, Oracle and Progress Software) have added some BPM functionality to their APM portfolios.
Business Transaction Management (BTM): Thanks to some effective marketing on the part of some APM vendors (e.g., OpTier, Correlsense and dynaTrace software), BTM has become a popular term to describe APM offerings that are restricted to user-defined transaction profiling and packet-based, real user-experience monitoring. Many users (e.g., those that are focused on operations management, rather than application support) find that conjunction of dimensions to be an attractive starting point for APM. Hence, the term denotes a useful concept. The qualifier "business" is misleading, however, in that it assumes that such technologies are concerned out-of-the-box with business-meaningful transactions, such as bank account updating or ticket buying. In fact, they are simply concerned with any named set of user activities at an application interface. The qualifier "management" is misleading, because it presupposes that the technologies do something more than simply exhibit how events kicked off by some user activity at an application interface make their way across an application stack and supporting infrastructure.
Vendors eligible for inclusion in this research meet the following criteria:
- Gartner client inquiry data confirms that the product is of interest to Gartner clients in enterprise environments, as evidenced by its appearance on product selection shortlists.
- The vendor should have at least 50 customers that use its APM products actively in a production environment for at least two out of the four following functionalities: end-user experience monitoring, application component discovery and modeling, deep monitoring of application components and transaction flow monitoring.
- The vendor's APM product portfolio should be capable of at least two of the following functions: end-user experience monitoring; application component discovery modeling; in-depth monitoring of key application components (e.g., Java EE, Oracle DBMS); and transaction-flow monitoring.
- The vendor must have a global sales presence.
Arcturus Technologies, ASG, Aternity, BMC Software, Coradiant (BMC), Correlsense, dynaTrace software (Compuware), eG Innovations, Idera, InfoVista, Knoa Software, ManageEngine, Microsoft, NetScout Systems, Network Instruments
- AviCode was acquired by Microsoft in the latter part of 2010.
- MQSoftware was acquired by BMC in August 2009.
- NetIQ did not meet all the inclusion criteria for this Magic Quadrant; however, it remains active in this market.
- Nimsoft was acquired by CA Technologies in 2010.
Gartner analysts evaluate technology providers on the quality and efficacy of the processes, systems, methods or procedures that enable IT provider performance to be competitive, efficient and effective, and to positively affect revenue, retention and reputation. Ultimately, technology providers are judged on their ability and success in capitalizing on their vision.
Core goods and services offered by the technology provider that compete in/serve the defined market include product/service capabilities, quality, feature sets and skills, whether offered natively or through OEM agreements/partnerships.
Overall Viability (Business Unit, Financial, Strategy, Organization)
Viability includes an assessment of the overall organization's financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue investing in the product, offering the product and advancing the state of the art within the organization's portfolio of products.
The technology providers' capabilities in all presales activities and the structure that supports them include deal management, pricing and negotiation, presales support and the overall effectiveness of the sales channel.
Market Responsiveness and Track Record
This category involves the ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the provider's history of responsiveness.
This criterion includes the clarity, quality, creativity and efficacy of programs designed to deliver the organization's message to influence the market, promote the brand and business, increase awareness of the products and establish a positive identification with the product/brand and organization in the minds of buyers. This "mind share" can be driven by a combination of publicity, promotional, thought leadership, word-of-mouth and sales activities.
This involves relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, it includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups and SLAs.
The ability of the organization to meet its goals and commitments includes the quality of the organizational structure. This involves skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.
Source: Gartner (September 2011)
This criterion involves the ability of the technology provider to understand buyers' needs and translate these needs into products and services. Vendors that show the highest degree of vision listen and understand buyers' wants and needs, and can shape or enhance them with their added vision.
This criterion involves a clear, differentiated set of messages consistently communicated throughout the organization and externalized through the website, advertising, customer programs and positioning statements.
This is the strategy for selling product that uses an appropriate network of direct and indirect sales, marketing, service and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.
Offering (Product) Strategy
This involves a technology provider's approach to product development and delivery that emphasizes differentiation, functionality, methodology and feature set as they map to current and future requirements.
This criterion includes the soundness and logic of a technology provider's underlying business proposition.
This involves the technology provider's strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including verticals.
This criterion comprises the direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes.
This category involves the technology provider's strategy to direct resources, skills and offerings to meet the specific needs of locations outside the home or native geography, either directly or through partners, channels and subsidiaries, as appropriate for that region and market.
Source: Gartner (September 2011)
Six aspects characterize Leaders: (1) competitive offerings related to all five dimensions of APM and best-of-breed functionality in three or more of the dimensions; (2) credibility in the monitoring of application domains assembled from heterogeneous sources; (3) effective integration across four or more of the dimensions; (4) the ability to deliver and support APM on a global basis; (5) a consistent track record of innovation; and (6) a vision that places APM at the heart of operations and application management (see Note 1).
Challengers have typically delivered highly competitive offerings in three or more dimensions of APM. In some of the remaining dimensions, however, offerings are restricted in terms of functional depth or in terms of the environments to which their technologies are applied, and these restrictions exclude them from consideration by some well-defined subsegments of the APM user community. Challengers will tend to have a strong global support and services infrastructure and a well-regarded brand (although that regard may not be generated by APM). They recognize the importance of APM, if not its centrality to their overall software product portfolio. These firms often evolve into Leaders; however, in some cases, due to considerations of holistic corporate strategy, they opt to retain their status as Challengers.
Visionaries, although frequently competitive across three or more of the five functional dimensions of APM, have, in at least one functional dimension, demonstrated (1) a consistent track record of innovation (i.e., they regularly introduced functional capabilities that no other vendor had previously introduced), and (2) innovations that have eventually been widely adopted by their competitors and have shaped the way in which users define APM. Visionaries typically become Leaders, or their technologies are acquired and become part of the Leaders' APM product portfolios.
Niche Players are typically newcomers to the APM market, as Gartner defines it, and their strengths are concentrated in one or two of the five functional dimensions of APM, or they focus their short-term research and development, marketing and sales strategies on meeting the requirements of well-defined subsegments of the overall APM user community. Many Niche Players extend their depth evenly across all the functional dimensions and broaden their strategic focus, thereby opening the door to becoming Challengers, Visionaries and, in some cases, Leaders.
Arcturus supports all five dimensions of APM functionality through an integrated platform, Applicare, constructed from multiple modules (Applicare Server with Applicare Web Console, Advisor & Detector, User Experience Manager, IntelliCheck, IntelliTrace, IntelliSense, Platform Agents for WebLogic, WebSphere, Tomcat, JBoss, and Generic Java, Blackbox, Knowledge Base Subscription, Customizable Dashboards, Knoms Framework, Auto Tune Wizard, Reporting Module, SNMP Monitoring, Tuxedo Monitoring, Oracle Monitoring). For the purposes of this research, we examined Applicare Enterprise, Version 6.5.
- The Knoms Framework makes effective use of social networking technologies and metaphors to gather, share and refine knowledge about problem resolution, particularly with regard to Java EE environments.
- Easy to deploy and maintain, Applicare is well-suited to the Java-based application requirements of small or midsize businesses (SMBs).
- Restriction to Java EE technology for application servers limits Applicare's appeal to an increasingly .NET-focused SMB buying community.
- Arcturus has a low level of brand recognition, particularly outside the highly competitive Oracle environment market, which is problematic in an era where many larger and more well-known firms are explicitly targeting SMB buyers.
ASG supports all five dimensions of APM functionality. For the purposes of this research, we examined ASG-TeVista Performance Manager with the TeVista E2E component for end-user experience monitoring; ASG Discovery and Dependency Mapping (DDM) for runtime architecture discovery, modeling, and display; ASG-TMON for Web Based Applications, Version 5.1 for user defined transaction profiling; ASG-TMON for Web Based Applications, Version 5.1, ASG-TMON for CICS TS for z/OS Version 3.2, ASG-TMON for IMS Version 3.0, ASG-TMON for DB2 Version 5.0, ASG-TMON for TCP/IP 2.3, ASG-TMON for VTAM Version 2.3, ASG-TMON for WebSphere MQ Version 2.3, ASG-TMON for Web Services Version 3.2, ASG-NaviPlex Version 3.2, ASG-NaviGraph Version 2.2 and ASG-PathPoint Version 3.0.3 for component deep-dive monitoring in application context, while application performance analytics is supported by functional elements of each of the previously listed products.
- Through integration with a broad and deep range of mainframe performance-monitoring capability, ASG is able to provide genuinely end-to-end, user-defined transaction profiles.
- ASG's historically application-centric DDM technology provides a platform that adequately models application and infrastructure components and, hence, exhibits relationships among them.
- ASG's APM product portfolio is not fully integrated, making it difficult to deploy and maintain the portfolio holistically.
- Gartner clients continue to regard ASG as a specialist in IBM environments, with a fundamental focus on mainframe management.
Aternity supports all five dimensions of APM functionality through an integrated platform, the Frontline Performance Intelligence Platform. For the purposes of this research, we examined Version 5.0.
- The endpoint instrumentation approach to end-user experience monitoring adopted in the Frontline Intelligence Platform enables the technology to effectively monitor the real user experience of thick-client, Web 2.0 and server-based applications, architectures that are, in general, opaque to packet capture and HTTP analysis-based technologies.
- A powerful, embedded Behavior Learning Engine (BLE) allows the technology to discover complex, normal patterns of actions taken against the entire portfolio of applications that might be available to a user, laying the foundations for genuine user behavior monitoring.
- Technologies such as the Frontline Intelligence Platform, by using endpoint instrumentation and BLE correlation technologies, are able to gain visibility into the increasingly complex and opaque Internet edge and into the correlation between edge events and host processes and, hence, can supplement and, in some cases, replace APM technologies reliant on data-center-bound instrumentation points.
- Although Version 5.0 of the Frontline Intelligence Platform has an architecture that enables the capture of data from sources not within the end user's line of sight, the technology's coverage of components deep within the application stack or infrastructure is limited; hence, for large enterprises, it will not be seen as a comprehensive APM solution.
- New approaches to packet capture and analysis and to Web page script injection have the potential to eclipse Aternity's approach to endpoint instrumentation for many application types.
- Although concerns in some particularly demanding markets (e.g., Germany) have been alleviated by Version 5.0's Privacy Audit Control feature, employee privacy issues can limit the technology's appeal.
BMC supports all five dimensions of APM functionality. For the purposes of this research, we examined BMC ProactiveNet Performance Management, Version 8.5 for end-user experience monitoring; BMC Atrium Discovery and Dependency Mapping (ADDM), Version 8.2 and BMC Atrium CMDB, Version 7.6.03 for application runtime architecture discovery, modeling and display; BMC Middleware Management — Transaction Monitoring (BMM), version 5.0 and MainView Transaction Analyzer (MVTA) Version 3.1 for user-defined transaction profiling; BMC ProactiveNet Performance Management (BPPM) Version 8.5, BMC AppSight Application Problem Resolution, Version 7.6, BMC MainView (version numbers vary by individual product) and BMC Middleware Management — Performance and Availability (BMMPA), Version 5 for component deep-dive monitoring in application context. Application performance analytics is an integral feature of BPPM.
- After allowing an early leadership in APM (based on the Patrol technologies) to lapse, BMC has, through a sequence of acquisitions during the past 18 months — e.g., MQ Software and, most recently, Coradiant — demonstrated a deep understanding of market requirements and trends.
- In possession of one of the leading BLE platforms available, BMC has effectively integrated application performance analytics with key elements of its APM portfolio.
- AppSight's development environment focus, coupled with the BladeLogic platform's recently added application provisioning capabilities, allows BMC to position its APM offerings as potential elements of an overall life cycle approach to application management.
- The deployment of ADDM for runtime architecture discovery, modeling, and display allows BMC to graft its APM capabilities directly onto a market-leading configuration management database (CMDB) and service desk offering.
- Through integration with a broad and deep range of mainframe performance-monitoring capabilities, BMC is able to provide an end-to-end view of application behavior.
- BMC's seeming lack of enthusiasm for APM between 2005 and 2009, particularly with regard to the end-user experience-monitoring dimension of functionality, rendered buyers skeptical of BMC's long-term and broad commitment to this market.
- BMC's large portfolio of APM-relevant technologies remains fundamentally a collection of disjointed technologies.
- AppSight's development environment focus undermines its appeal for production environments.
- The almost exclusive dependency of BMC's user-defined transaction-profiling capability on payload-content-based stitching limits its applicability to relatively simple application architectures.
- Data about network performance is increasingly critical for understanding the root causes of application performance problems; to date, BMC has opted not to make significant investments in a network performance-monitoring capability.
CA Technologies supports all five dimensions of APM functionality. For the purposes of this research, we examined CA Application Performance Management, Version 9.0.6 and CA NetQoS SuperAgent, Version 9.0 for end-user experience monitoring; CA Application Performance Management, Version 9.0.6 and CA NetQoS SuperAgent, Version 9.0 for application runtime architecture discovery, modeling and display; CA Application Performance Management, Version 9.0.6, CA NetQoS SuperAgent, Version 9.0 with Multi-Port Collector, Version 2.1, and CA SYSVIEW Performance Management for CA APM, Version 12.7 for user-defined transaction profiling; and CA Application Performance Management, Version 9.0.6, CA SYSVIEW Performance Management for CA APM, Version 12.7, CA Database Performance, Version 11.4, and CA NetQoS SuperAgent, Version 9.0 for component deep-dive monitoring in application context. Application performance analytics is an integral feature of CA APM and CA NetQoS SuperAgent.
- CA Technologies' component deep-dive monitoring in application context technology continues to be considered, 12 years after its first appearance on the market as Wily Introscope, one of the two or three most effective performance-monitoring technologies targeted at Java EE, .NET and MQ. Although intended primarily for use in production contexts, its appeal to Global 2000 application development teams has grown considerably.
- CA NetQoS SuperAgent is widely regarded as one of the two or three leading packet-capture and analysis-based network performance-monitoring systems. By merging this technology with the Application Performance Management suite's HTTP-based real user experience-monitoring capability, CA Technologies has laid the groundwork for a converged application network performance-monitoring infrastructure and for a deeper consideration of network behavior's impact on application performance.
- CA Technologies has embedded its Application Performance Management suite within a broader Service Assurance portfolio that links infrastructure, application, and, ultimately, business service monitoring. The centrality that CA Technologies accords APM within that portfolio's architecture resonates well with the growing number of Global 2000 enterprises that desire an application-centric approach to IT operations.
- CA Technologies has articulated a clear business transaction conceptual model that has enabled the company to relate the transaction construct to the service construct in a precise manner.
- CA Technologies dominates a number of key market niches in the use of application component deep-dive monitoring, most notably SAP NetWeaver monitoring and the monitoring of Java-based billing applications for IP telecommunications services.
- CA Technologies' recent marketing focus on broader initiatives, such as service assurance and cloud enablement and management, has created concern with some IT organizations that CA investments in APM may diminish.
- CA Technologies' APM suite is, for the most part, data center bound. As a result, the current suite's applicability to application architectures containing public cloud service components or exploiting user-experience-enhancing technologies (e.g., Ajax and Flash) placed close to the edge of the Internet is limited.
- Large IT organizations have reported confusion about the relationship between CA's APM end-user experience-monitoring module and CA NetQoS SuperAgent.
- To date, CA Technologies' approach to analytics has been fragmented, and its long-term strategy for this APM functional dimension is unclear.
Compuware supports all five dimensions of APM functionality. For the purposes of this research, we examined Gomez Real User Monitoring Data Center (Vantage Real User Monitoring), Version 11.5, Gomez Synthetic Monitoring Private Enterprise (Vantage Active User Monitoring), Version 11.5, and Gomez Service Manager (Vantage Service Manager), Version 11.5 for end-user experience monitoring; Gomez Business Service Manager (Vantage Service Manager), Version 11.5, Gomez Real User Monitoring Data Center (Vantage Real User Monitoring), Version 11.5, Gomez Java/.NET Monitoring (Vantage for Java/.NET Monitoring), Version 11.5, Gomez Network Performance Monitoring (Vantage for Network Monitoring), Version 11.5, Gomez Server Monitoring (Vantage for Server Monitoring), Version 11.5, Gomez Transaction Trace Analysis (ApplicationVantage), Version 11.5, and Compuware Strobe, Version 4.2 for application runtime architecture discovery, modeling and display, user-defined transaction profiling and component deep-dive monitoring in application context; and the Gomez Performance Management Database (PMDB) module for application performance analytics.
- Compuware provides an effective, IT operations and business-oriented approach to packet capture and HTTP analysis-based real end-user experience monitoring.
- Following the acquisition of Gomez, Compuware has articulated a well-reasoned conceptual architecture that integrates an outside-in, service-based approach to monitoring with inside-out, data-center-bound capabilities to capture the behavior of Web 2.0 and cloud-dependent applications.
- Compuware has pushed the boundaries of packet capture and analysis-based end-user experience-monitoring technologies to support thick-client (e.g., SAP, Cerner Millennium) and server-based computing (Citrix XenApp) scenarios.
- Its acquisition and rapid integration of BEZ for BLE functionality demonstrates Compuware's acknowledgment of the centrality of analytics to APM during the next few years.
- The Compuware sales force continues to effectively champion the growing Compuware product portfolio and the concept of APM itself.
- Compuware's service-based strategy runs the risk of being overshadowed by evolving technologies for endpoint instrumentation and Web page script injection.
- Compuware's APM portfolio has been historically criticized for a lack of depth in its ability to monitor fine-grained application server and database events. Although the recent acquisition of dynaTrace software will address the former issue (assuming effective execution on integration), the latter issue remains unresolved.
- As a result of its recent string of acquisitions, Compuware owns multiple, sometimes conflicting, packet-capture and synthetic transaction-based technologies; further rationalization is required.
Coradiant supports two of the five dimensions of APM functionality. For the purposes of this research, we examined TrueSight End User Monitor (appliance form factor) and TrueSight Enterprise Edition (virtual appliance form factor), along with the TrueSight Application Visibility Connector, TrueSight RIA Visibility and TrueSight Cloud Visibility for Akamai modules, for end-user experience monitoring, and TrueSight Reporting Services for application performance analytics.
- Coradiant's TrueSight is easy to deploy and maintain, particularly in its virtual appliance form factor. It is capable of delivering fine-grained real end-user experience data based on packet capture and HTTP analysis.
- Its relationship and consequent collaboration with Akamai has enabled the Coradiant technology to penetrate the complex Internet edge in a common and important use case, paving the way for similar deployments with other cloud service providers.
- TrueSight's HTTP-centricity makes the technology a less-than-ideal candidate for a converged application network performance monitoring infrastructure.
- The Coradiant service provider alliance strategy for gaining access to the internals of the Internet edge will find itself increasingly in competition with endpoint instrumentation and Web page script injection technologies that are service-provider-independent.
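Several vendors in this research, Coradiant among them, base end-user experience monitoring on packet capture and HTTP analysis: response time is inferred from time stamps observed on the wire, with no browser or server instrumentation. A minimal sketch of the general idea, with invented record shapes and no real packet parsing, might look like:

```python
# Toy illustration of packet capture and HTTP analysis-based end-user
# experience monitoring. Record shapes and field names are hypothetical;
# this sketches the general technique, not any vendor's product.
def response_times(http_observations):
    """Pair each request with its response and average per-URL latency."""
    pending, times = {}, {}
    for obs in sorted(http_observations, key=lambda o: o["ts"]):
        key = (obs["client"], obs["url"])
        if obs["kind"] == "request":
            pending[key] = obs["ts"]          # remember when the request left
        elif key in pending:                  # matching response observed
            times.setdefault(obs["url"], []).append(obs["ts"] - pending.pop(key))
    return {url: sum(v) / len(v) for url, v in times.items()}

observed = [
    {"kind": "request",  "client": "10.0.0.7", "url": "/login", "ts": 0.00},
    {"kind": "response", "client": "10.0.0.7", "url": "/login", "ts": 0.35},
    {"kind": "request",  "client": "10.0.0.8", "url": "/login", "ts": 1.00},
    {"kind": "response", "client": "10.0.0.8", "url": "/login", "ts": 1.25},
]
print(response_times(observed))  # average /login latency across real users
```

The appeal of this approach, as the vendor profiles note, is that it sees every real user; its limitation, also noted above, is that it sees only what crosses the wire, which is why Ajax- and Flash-heavy edge technologies challenge it.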
Correlsense supports three of the five dimensions of APM functionality. For the purposes of this research, we examined SharePath RUM, Version 2.0 (Express, Standard and Cloud editions) for end-user experience monitoring, and SharePath Data Center, Version 2.0 for application runtime architecture discovery, modeling and display, and user-defined transaction profiling.
- Based on the recognize-and-trace paradigm, Correlsense provides quick and easy-to-generate application runtime architecture models and user-defined transaction profiles that are truly end-to-end, particularly with the addition of the SharePath RUM functionality.
- Its noninvasive approach to "broad, rather than deep" user-defined transaction profiling is in accordance with the current balance of market requirements.
- Correlsense's end-user experience-monitoring capability is not sufficiently powerful to satisfy an enterprise's overall end-user experience-monitoring requirements.
- Because buyers often purchase end-user experience-monitoring and user-defined transaction-profiling functionalities at the same time, SharePath may lose ground in head-to-head competitions.
dynaTrace supports all five dimensions of APM functionality through a single, integrated, eponymous platform. For the purposes of this research, we examined dynaTrace Production Edition, Version 3.5.
- Although its capabilities are not restricted to Java EE and .NET, the dynaTrace platform is a particularly effective performance-monitoring technology for classical application server environments.
- The PurePath technology on which the platform is based creates a transaction-like data structure through which fine-grained, application-server-level events may be observed and contextualized at multiple levels of abstraction.
- Originally designed to appeal to the special requirements of application developers, the technology has evolved to meet the needs of users focused on production.
- By positioning itself as a BTM vendor, dynaTrace has understated the degree to which its core capabilities are specialized in the component deep-dive monitoring dimension.
- Getting full value out of the dynaTrace platform requires an understanding of application internals that is typically beyond the scope of IT operations professionals.
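The PurePath-style structure described above, a transaction trace in which fine-grained application-server events nest under the triggering transaction, can be pictured as a tree of timed spans. The following is a hypothetical sketch; class, field and transaction names are invented for illustration and are not dynaTrace's data model or API:

```python
# Hypothetical sketch of a transaction-like trace structure: a tree of timed
# spans that can be flattened at different levels of abstraction, from the
# business transaction down to component deep-dive detail.
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str           # e.g., "checkout", "OrderService.place", "JDBC INSERT"
    duration_ms: float  # wall-clock time in this span, children included
    children: list = field(default_factory=list)

    def self_time_ms(self) -> float:
        """Time spent in this span excluding its children."""
        return self.duration_ms - sum(c.duration_ms for c in self.children)

    def rollup(self, depth: int):
        """Flatten the trace to a chosen level of abstraction."""
        out = [(self.name, self.duration_ms)]
        if depth > 0:
            for c in self.children:
                out.extend(c.rollup(depth - 1))
        return out

# One user transaction with nested component-level events:
trace = Span("checkout", 420.0, [
    Span("OrderService.place", 310.0, [Span("JDBC INSERT orders", 190.0)]),
    Span("InventoryService.reserve", 80.0),
])
print(trace.rollup(0))  # transaction-level view only
print(trace.rollup(2))  # full component deep-dive view
```

The point of the structure is the one made above: the same captured events serve both an operations-level view (depth 0) and the developer-level view (full depth) that requires knowledge of application internals.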
eG Innovations supports four out of the five dimensions of APM functionality through a single integrated platform, eG Enterprise. For the purposes of this research, we examined eG Enterprise Version 5.2 for end-user experience monitoring, application runtime architecture discovery, modeling and display, component deep-dive monitoring in application context, and application performance analytics.
- Easy to deploy and scale, eG Enterprise is able to deliver a comprehensive, high-level picture of how applications interact with the underlying infrastructure.
- The technology has particular fluency in the XenApp (Citrix) environment, which is typically a blind spot for APM suites.
- When focused on the application domain in isolation, the technology's level of detail is appropriate for SMBs or for comparatively simple application stacks. More-complex scenarios will require users to supplement eG Enterprise with technologies from other vendors, particularly in the component deep-dive monitoring dimension.
- Marketing and positioning remain technology-feature-focused in a market where business stakeholders are important influencers.
HP supports all five dimensions of APM functionality. For the purposes of this research, we examined HP Business Process Monitor (BPM), Version 9.01 and HP Real User Monitor (RUM), Version 9.01 for end-user experience monitoring; HP Configuration Management System (CMS), Version 9.01, HP Runtime Service Model, Version 9.01, and HP Discovery and Dependency Mapping Automation (DDMA), Version 9.01, collecting data from HP Business Process Insight, Version 9.01, HP Business Process Monitor, Version 9.01, HP Diagnostics (Diag), Version 9.01, HP TransactionVision (TV), Version 9.01, HP SiteScope (SiS), Version 9.01, and various HP Operations Manager Smart Plug-ins (OM SPIs) for application runtime architecture discovery, modeling and display; HP TransactionVision (TV), Version 9.01, HP Business Process Insight, Version 9.01, HP Diagnostics (Diag), Version 9.01, and HP Real User Monitor (RUM), Version 9.01 for user-defined transaction profiling; HP Diagnostics (Diag), Version 9.01 and various HP Operations Manager Smart Plug-ins (OM SPIs) for component deep-dive monitoring in application context; and the Service Health Reporter module, HP Problem Isolation, and the automated baselining and threshold-setting capabilities associated with BPM and Diag for application performance analytics.
- HP's BPM remains the market-leading on-premises technology for synthetic transaction-based, end-user experience monitoring, an approach whose popularity has revived significantly as the Internet edge has grown more complex and opaque.
- HP's marketing efforts and sales motions consistently demonstrate an understanding of how APM issues are framed by CIOs and business executives, enabling the business unit to position APM in strategic, rather than tactical or technological, terms.
- HP's testing, quality management, and release management technologies enable it to embed its APM offerings within a broader life cycle approach to application management.
- A broad portfolio of infrastructure monitoring capabilities gives HP the credibility to exploit the growing interest in deploying APM as a "manager of managers" layer.
- Particularly by virtue of its Business Process Insight technology, HP provides an effective means of dynamically mapping the unfolding of a business process to the execution of applications supporting that process.
- The Dynamic Service Model construct provides a foundation around which HP can begin to integrate the diverse models and technologies that support its APM portfolio.
- HP's APM portfolio remains complex, and effective integration among its elements requires intricate choreography. This has given the HP technology a reputation for difficulty of implementation and maintenance, even when compared with portfolios of equal breadth and depth.
- HP has allowed its position of thought leadership in APM to slip; once an initiator of new approaches and technologies (e.g., BPM/APM convergence), HP now tends to only belatedly adopt APM innovations that have been successfully commercialized elsewhere.
- Although HP was among the first broad portfolio operations management vendors to add user-defined transaction profiling to its portfolio with its acquisition of Bristol Technology in February 2007, the messaging-middleware-centricity of its TransactionVision (TV) offering in this functional dimension has limited its uptake, even among HP's APM customers.
IBM supports all five dimensions of APM functionality. For the purposes of this research, we examined IBM Tivoli Composite Application Manager (ITCAM) for Transactions, Version 7.2 for end-user experience monitoring; IBM Tivoli Application Dependency Discovery Manager (TADDM), Version 7.2 for application runtime architecture discovery, modeling and display; IBM Tivoli Composite Application Manager (ITCAM) for Transactions, Version 7.2 and IBM Tivoli Composite Application Manager (ITCAM) for SOA, Version 6.2.2 for user-defined transaction profiling; IBM Tivoli Composite Application Manager (ITCAM) for Applications, Version 6.2.4, IBM Tivoli Composite Application Manager (ITCAM) for Application Diagnostics, Version 7.2, IBM Tivoli Composite Application Manager (ITCAM) for Microsoft Applications, Version 6.2, IBM Tivoli Monitoring (ITM), Version 6.2.2, and IBM Tivoli OMEGAMON XE for z/OS, Version 4.2.0, for CICS on z/OS, Version 4.2.0, for CICS Transaction Gateway on z/OS, Version 4.1.0, for DB2 Performance Expert on z/OS, Version 5.1.0, for IMS on z/OS, Version 4.2.0, for Mainframe Networks, Version 4.2.0, for Messaging on z/OS, Version 7.0.1, for z/VM and Linux, Version 4.2.0, and for Messaging, Version 7.0 for component deep-dive monitoring in application context; and the trending, dynamic thresholding and visual baselining capabilities that are integral features of ITCAM and ITM for application performance analytics.
- IBM has significantly improved integration among the elements that make up its ITCAM portfolio, making implementation and maintenance relatively straightforward and pain-free processes.
- Pricing structure innovations make the purchase and cost management of IBM's APM suite simpler than for other broad portfolio APM vendors. Its significant market share and installed base ensures business stability in this area.
- IBM's sales force and service capability are both well-informed regarding the potential value that IBM's APM products can add to individual transactions or engagements.
- IBM's decision to combine the dope-and-trace and HTTP time-stamp analysis approaches to user-defined transaction profiling, allowing a single stitching algorithm to accept feeds from both approaches' respective modes of data collection, responds effectively and innovatively to the market's increasingly eclectic and nondogmatic requirements for that APM functional dimension.
- IBM has a comprehensive vision of the requirements for an approach to analytics that combines the four elements of CEP, BLE, unstructured text search, and multidimensional OLAP, and has access to the technologies required for realization of that vision in the short term.
- Based on Gartner's client interactions, large enterprises tend to consider solutions from the IBM APM portfolio only when they are making decisions to invest in a broad array of IBM management or middleware technology.
- IBM's end-user experience-monitoring functionality, although broad, lacks depth. Specifically, its synthetic transaction technology is difficult to implement and maintain, its packet-capture technology is limited to basic HTTP and HTTPS collection, and its client instrumentation technology is confined to basic response-time data collection and simple correlation. Production deployments of the latter two approaches are sparse.
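The hybrid stitching approach noted among IBM's strengths, in which a single algorithm accepts both dope-and-trace instrumentation events and HTTP time-stamp observations, can be sketched in outline. This is an illustrative reconstruction under assumed record shapes and a shared correlation key, not IBM's actual ITCAM algorithm:

```python
# Illustrative sketch of hybrid transaction stitching: events from two
# collection modes (code instrumentation agents and HTTP packet time stamps)
# are normalized into one record shape and grouped by a correlation key, so
# one stitching step assembles the end-to-end transaction profile.
# Field names and the "txn_id" key are hypothetical assumptions.
from collections import defaultdict

def stitch(instrumented_events, http_events):
    """Merge both feeds into per-transaction, time-ordered hop lists."""
    transactions = defaultdict(list)
    for ev in instrumented_events:            # dope-and-trace feed
        transactions[ev["txn_id"]].append(
            {"ts": ev["ts"], "hop": ev["component"], "source": "agent"})
    for ev in http_events:                    # packet time-stamp feed
        transactions[ev["txn_id"]].append(
            {"ts": ev["ts"], "hop": ev["url"], "source": "wire"})
    return {txn: sorted(hops, key=lambda h: h["ts"])
            for txn, hops in transactions.items()}

profile = stitch(
    [{"txn_id": "t1", "ts": 2, "component": "OrderEJB"}],
    [{"txn_id": "t1", "ts": 1, "url": "/checkout"}],
)
print([h["hop"] for h in profile["t1"]])  # wire hop first, then agent hop
```

The design point the bullet above makes is visible in the sketch: once both feeds share a record shape, the stitching logic is indifferent to how the data was collected, which is what lets buyers mix approaches pragmatically.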
Idera supports four of the five dimensions of APM functionality. For the purposes of this research, we examined SharePoint Diagnostic Manager, Version 2.0 for end-user experience monitoring and application runtime architecture discovery, modeling and display, and SharePoint Diagnostic Manager, Version 2.0 and SQL Diagnostic Manager, Version 2.0 for component deep-dive monitoring in application context and application performance analytics.
- Idera has established itself as a market-leading performance-monitoring solution for SQL Server for pure Microsoft environments.
- Although the SharePoint management market is fragmented and highly competitive, Idera's solution is well-regarded, particularly for enterprises that are attempting an integrated approach to SharePoint-centric Microsoft stacks.
- Idera's exclusive focus on Windows-based application components makes it highly vulnerable to Microsoft's own designs on the Windows-based application management market.
- Idera does not support some critical Microsoft technologies, such as .NET.
Inetco Systems supports three of the five dimensions of APM with a single integrated codebase. For the purposes of this research, we examined Insight, Version 4.7 for end-user experience monitoring, user-defined transaction profiling and application performance analytics.
- Insight constructs a seamless picture of end-user experience and transaction flow by capturing, analyzing, and correlating packets from strategically located span ports across the infrastructure.
- Insight's approach, while easy to deploy and maintain, can also handle extremely high transaction volumes.
- At the core of Insight lies a sophisticated, multilayered transaction model that makes it easy to infer runtime architecture structures from span port data.
- Inetco has particular expertise in high-volume, financial application transactions; this expertise has led the company to develop the first robust solution for the Advanced Message Queuing Protocol (AMQP) message-processing standard.
- Inetco is perceived as a niche financial applications vendor. It will have to continue to accumulate cases of deployment in other industries to establish itself as a general-purpose APM player.
- Exclusive reliance on packet capture and analysis limits Insight's applicability to applications that exploit user-experience-enhancing technologies, such as Ajax and Flash.
- Inetco's marketing and sales efforts remain overly technology-focused, given the growing influence of business executives on APM purchase decisions.
InfoVista supports three of the five dimensions of APM functionality. For the purposes of this research, we examined 5View Service Data Manager, Version 3.2 and the following 5View appliances: Application, Version 5.0, Netflow, Version 6.0, Mobile IP, Version 4.0, voice over IP (VoIP), Version 5.0, and Web, Version 1.0 for end-user experience monitoring; 5View Service Data Manager, Version 3.2 with the Application and Netflow appliances for application runtime architecture discovery, modeling and display; and 5View Mediation, Version 5.0 for application performance analytics.
- InfoVista has combined network performance monitoring, complex IP-based service monitoring with a recent emphasis on unified collaboration and communications (UCC) contexts, and some aspects of APM in a single, extensible and service-provider-friendly platform.
- The company has established a particularly strong following among communications service providers that are in the early stages of creating a portfolio of higher-layer application-like services.
- In all dimensions besides end-user experience monitoring, InfoVista's technology remains heavily weighted toward network performance monitoring, rather than pure APM.
- Its end-user experience monitoring capabilities rely exclusively on data-center-bound technology, limiting its usefulness when it comes to applications that exploit Web 2.0 and other user experience-enhancing Internet-edge technologies.
Knoa supports two of the five dimensions of APM functionality. For the purposes of this research, we examined Knoa Experience and Performance Manager (EPM), Version 6.0, Knoa Global End-User Monitor (GEM), Version 2.0, and Knoa Virtual/Cloud End-user Monitor (VCEM), Version 1.0 for end-user experience monitoring and application performance analytics.
- The endpoint instrumentation approach to end-user experience-monitoring adopted in EPM and GEM enables the technology to effectively monitor the real user experience of thick-client and server-based applications.
- Deep knowledge of usage patterns associated with SAP and other off-the-shelf applications is rendered in the form of templates that organize the data collected.
- New approaches to packet capture and analysis and to Web page script injection will be seen as increasingly attractive alternatives to Knoa's endpoint instrumentation approach.
- Employee privacy concerns can limit the technology's appeal.
ManageEngine (Zoho) supports all five dimensions of APM functionality through a single integrated platform, Applications Manager. For the purposes of this research, we examined Applications Manager, Version 9.4.
- Applications Manager provides a low-cost implementation of all five dimensions of APM functionality.
- The platform is easy to deploy and maintain.
- Despite being targeted at the SMB market, the platform can scale to encompass many thousands of nodes and continue to deliver on the promised capabilities.
- As lower-cost vendors begin to offer more-sophisticated functionality at prices comparable to ManageEngine, the company will need to push its offerings toward best-of-breed status more aggressively.
- The company's marketing puts too much emphasis on the low-price, rather than the high-value, dimension of the offering; based on Gartner client inquiries, the low-cost image drives away potential customers that would otherwise be satisfied with the functionality that ManageEngine brings to the table.
Microsoft supports all five dimensions of APM functionality. For the purposes of this research, we examined System Center Operations Manager R2 for end-user experience monitoring, application runtime architecture discovery, modeling and display, and application performance analytics, and AVIcode, Version 5.7 for component deep-dive monitoring in application context.
- Microsoft's acquisition and rapid integration of AVIcode into its management portfolio has quickly established the company as a viable alternative for the management of application stacks in Windows-based environments.
- Microsoft's short-term plans to integrate the AVIcode .NET component deep-dive monitoring capability with System Center Operations Manager (SCOM) should allow the resultant platform to monitor the application stack and infrastructure through a single, application-centric lens.
- Microsoft is in a position to revolutionize APM technology pricing, especially for Windows environments.
- The integrated AVIcode/SCOM platform will become part of a comprehensive life cycle of process and product for APM.
- Microsoft has relied primarily on a synthetic, transaction-based approach to end-user experience monitoring, which is limited, compared with analogous offerings in the market.
- Microsoft's APM strategy is highly insular; it presupposes that most of the application components to be monitored will be based on the Microsoft platforms, and .NET in particular, whereas most enterprises assume that applications will frequently act as threads tying together many infrastructures and platforms.
Nastel supports all five dimensions of APM. For the purposes of this research, we considered AutoPilot M6, Version 6.2 and AutoPilot TransactionWorks, Version 6.1 for end-user experience monitoring; AutoPilot TransactionWorks, Version 6.1 for user-defined transaction profiling; AutoPilot M6, Version 6.2 and AutoPilot TransactionWorks, Version 6.1 for component deep-dive monitoring in application context; AutoPilot M6, Version 6.2 and AutoPilot TransactionWorks, Version 6.1 for application runtime architecture discovery, modeling and display; and aspects of AutoPilot M6, Version 6.2 for application performance analytics.
- Nastel Technologies has a reputation for technical sophistication and code excellence, particularly in the areas of message-based middleware and Java EE.
- Its smart correlation approach to user-defined transaction profiling, coupled with a robust CEP capability, enables the technology to build particularly accurate and detailed transaction profiles and integrate those profiles with pure business process data.
- Efforts begun in early 2009 to transform Nastel's position from that of message-queuing middleware management specialist to that of a general player in the APM space have been largely successful, particularly for large enterprises that see themselves as requiring an integrated mix of WebSphere Java and WebSphere MQ management functionality.
- Nastel's market positioning has yet to resonate with business-oriented decision makers. This situation is reinforced by a sales force that continues to call at relatively low technical levels of central IT or line of business application support teams.
- Nastel relies solely on dope-and-trace technology for user-defined transaction profiling at a time when Gartner clients indicate that they prefer hybrid approaches that take advantage of packet-capture technology.
- Given the centrality of end-user experience monitoring to most enterprise APM implementations, Nastel's reliance on partners for the provision of packet capture and analysis-based end-user experience monitoring undermines the attractiveness of the overall portfolio, particularly in a period marked by rapid market consolidation.
NetScout supports three of the five dimensions of APM through an integrated appliance-based platform, the nGenius Service Assurance Solution. For the purposes of this research, we examined nGenius Service Assurance Solution, Version 4.9 for end-user experience monitoring, user-defined transaction profiling and application performance analytics.
- Recent versions of the nGenius packet capture and analysis appliance have begun to extract meaning from Layer 7 protocols, giving the technology a good understanding of business application behavior.
- nGenius scales well, enabling it to handle large and complex infrastructures for service providers, as well as Global 2000 enterprises.
- The company is likely to benefit greatly from the trend toward converged application network performance-monitoring infrastructures, thanks to its focus on managing application and network service delivery in an integrated manner.
- The recent acquisition of Psytechnics gives NetScout best-of-breed monitoring capabilities for VoIP and IPTV, positioning NetScout well for a likely future in which complex IP-based services and business applications are jointly managed through a common platform.
- In some particular domains, most notably low-latency trading applications, NetScout is able to capture and analyze fine-grained business application events.
- nGenius' ability to monitor events internal to databases, application servers and other middleware layers is limited, underscoring the impression that nGenius remains overly focused on network and network service performance, as opposed to business application performance.
- nGenius is comparatively complex from an implementation and maintenance point of view.
Network Instruments supports all five dimensions of APM through a single integrated suite of appliance and native agent-based technologies. For the purposes of this research, we considered Observer Infrastructure, Version 2.1, Observer Reporting Server, Version 14.1, GigaStor, Version 14.1, and Observer, Version 14.1.
- Network Instruments' GigaStor technology and its associated analytics engine give the user the ability to slice and dice high-volume cubes of application performance data and to discover complex patterns within them.
- With a particularly deep understanding of how network traffic flows in application context over a virtual fabric, Network Instruments is well-positioned to support a converged application network performance-monitoring infrastructure in highly virtualized environments.
- Gartner clients find Network Instruments' APM offerings better-suited to the needs of the network administrator, rather than general IT operations or application support.
- Sales interactions focus on technical features and functions in a period where the market pays increasing attention to the requirements of and language used by business executives.
Opnet supports all five dimensions of APM. For the purposes of this research, we considered AppResponse Xpert Version 8.0, AppTransaction Xpert Version 16.0, AppInternals Xpert Version 7.1, AppForensics Xpert Version 4.0 and AppSQL Xpert Version 4.7 for end-user experience monitoring; AppMapper Xpert Version 1.5, AppResponse Xpert Version 8.0, AppInternals Xpert Version 7.1, AppForensics Xpert Version 4.0, AppSQL Xpert Version 4.7 and AppTelemetry Xpert Version 1.0 for application runtime architecture discovery, modeling and display; and AppInternals Xpert Version 7.1, AppResponse Xpert Version 8.0, AppSQL Xpert Version 4.7, AppTransaction Xpert Version 16.0 and AppTelemetry Xpert Version 1.0 for both user-defined transaction profiling and component deep-dive monitoring in application context. Application performance analytics is handled by functionality spread across the Xpert product line.
- Opnet's Xpert portfolio integrates all five dimensions of APM at considerable depth through its use of common data models and visualization.
- Its AppResponse Xpert module switches context easily between the visualization and reporting requirements of the application support and network administration staff.
- The Xpert suite is well-positioned to support a converged application network performance-monitoring infrastructure.
- Its AppTransaction Xpert creates a transaction-like data structure through which fine-grained, application-server-level events may be observed and contextualized; this enables the system user to analyze the data at multiple levels of abstraction.
- The Bayesian network-based analytics functionality is well-architected, pervasive and will come to play an increasingly important correlating role among the datasets gathered across the suite during the next year.
- Opnet's marketing, positioning, and sales motions are still technology-centric in a market where business executives are becoming increasingly influential in technology purchase decisions.
- The radical naming convention change for its product portfolio that took place late in 2010 has caused some confusion in the market.
- Opnet's minimal attention to infrastructure elements between the network and the application layer will limit the Xpert suite's appeal to organizations seeking a "manager of managers" solution.
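The Bayesian network-based analytics noted among Opnet's strengths can be illustrated with a minimal sketch of the general technique (this is an illustration of Bayesian root-cause scoring, not Opnet's actual engine; the tier names and probabilities are invented for the example): given priors over candidate faulty components and per-component likelihoods of an observed symptom, Bayes' rule ranks the likeliest culprit.

```python
# Minimal sketch of Bayesian root-cause scoring across monitored tiers.
# All names and numbers here are hypothetical, chosen for illustration.

def posterior(priors, likelihoods):
    """priors: {component: P(fault)}; likelihoods: {component: P(symptom | fault)}.
    Returns the normalized posteriors P(fault | symptom) via Bayes' rule."""
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(joint.values())
    return {c: p / total for c, p in joint.items()}

# Usage: slow page loads are observed; which tier is most likely at fault?
priors = {"web": 0.2, "app": 0.3, "db": 0.5}          # base fault rates
likelihoods = {"web": 0.1, "app": 0.4, "db": 0.8}     # P(slow | tier faulty)
post = posterior(priors, likelihoods)
print(max(post, key=post.get))  # → db
```

A full Bayesian network would also encode dependencies among components, but the normalization step is the same correlating operation applied across the gathered datasets.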
OpTier supports all five dimensions of APM functionality. For the purposes of this research, we considered the End User Experience Monitor module of CoreFirst, Version 4.0 for end-user experience monitoring; CoreFirst, Version 4.0 for application runtime architecture discovery, modeling and display, user-defined transaction profiling and component deep-dive monitoring in application context; and the CoreFirst Repository for application performance analytics.
- OpTier's CoreFirst technology has been enriched to the point where it can provide component deep-dive monitoring functionality, as well as user-defined transaction profiling.
- The company's end-user experience-monitoring module, based on packet capture and HTTP analysis, is highly scalable and well integrated with the central CoreFirst model, giving the system user the ability to obtain a genuinely end-to-end view.
- OpTier's application performance analytics functionality integrates CEP with multidimensional OLAP, providing a balance between real-time and offline analytical exercises.
- The company was first to market with tag-based user-defined transaction profiling, with an early version of CoreFirst appearing in 2004, and has maintained a reputation for consistent innovation and code-quality excellence, particularly in the investment banking sector.
- Although it has successfully expanded to all five functional dimensions of APM, OpTier now faces the challenge of embedding its APM functionality, first, into a more general monitoring capability that embraces infrastructure as well and, second, into an overall life cycle approach to APM.
- The data-center-bound nature of OpTier's end-user experience-monitoring technology limits its applicability to scenarios where the obscurity of the Internet edge becomes an issue.
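The tag-based user-defined transaction profiling for which OpTier was first to market can be sketched in general terms (this is a hypothetical illustration of the technique, not CoreFirst's implementation; the tier names and helper functions are invented): a unique tag is minted at the entry tier, carried with the request across tiers, and used to stitch per-tier latencies into one end-to-end transaction view.

```python
import time
import uuid
from collections import defaultdict

# Hypothetical sketch of tag-based transaction profiling. A correlation
# tag attached at the entry tier lets per-tier timings be stitched into
# a single end-to-end transaction profile.

TRACE_STORE = defaultdict(list)  # tag -> list of (tier, elapsed_seconds)

def start_transaction():
    """Mint a unique tag at the entry tier (e.g. the web front end)."""
    return str(uuid.uuid4())

def record_tier(tag, tier, work):
    """Run one tier's work under the tag and record its latency."""
    start = time.perf_counter()
    result = work()
    TRACE_STORE[tag].append((tier, time.perf_counter() - start))
    return result

def end_to_end_profile(tag):
    """Stitch per-tier records into a single transaction profile."""
    hops = TRACE_STORE[tag]
    return {"tag": tag, "hops": hops, "total": sum(t for _, t in hops)}

# Usage: one request that touches three tiers.
tag = start_transaction()
record_tier(tag, "web", lambda: time.sleep(0.01))
record_tier(tag, "app", lambda: time.sleep(0.02))
record_tier(tag, "db", lambda: time.sleep(0.005))
profile = end_to_end_profile(tag)
print([h[0] for h in profile["hops"]])  # → ['web', 'app', 'db']
```

In a real multi-process deployment the tag would travel in a message or HTTP header rather than a shared dictionary, which is what makes the genuinely end-to-end view possible.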
Oracle supports all five dimensions of APM functionality. For the purposes of this research, we examined Oracle Real User Experience Insight 11g, Release 1 and Oracle Enterprise Manager Grid Control 11g, Release 1 for end-user experience monitoring; the Business Transaction Management module of Oracle Enterprise Manager Grid Control 11g, Release 1 for application runtime architecture discovery, modeling and display, and for user-defined transaction profiling; and Oracle Enterprise Manager Grid Control 11g, Release 1 for component deep-dive monitoring in application context; while application performance analytics is handled by functionality integral to both Oracle Real User Experience Insight 11g, Release 1 and Oracle Enterprise Manager Grid Control 11g, Release 1.
- For enterprises whose core ERP, middleware, and database requirements are met by Oracle software, Oracle's APM portfolio has, during the past 18 months, emerged as the default solution within the Gartner client base; this is a function both of the attractiveness of a "one-stop shop" approach and of Oracle's ability to successfully exploit the knowledge of its products to enhance functionality.
- Oracle has directly grafted BPM functionality onto APM functionality. Not only is this a valuable step in and of itself, but it helps reinforce the positioning of integrated software stacks that combine the delivery of basic functionality with monitoring and management.
- Oracle's integration of its own services capability as a direct feed into the Enterprise Manager framework improves the velocity of problem resolution and tightens the ties between Oracle and Enterprise Manager customers.
- As evidenced by Gartner client inquiries, the Oracle APM portfolio is of limited interest to organizations with higher levels of heterogeneity within the database and application domains.
- Enterprise Manager has no network performance-monitoring capability, and Oracle has yet to even signal a direction in this area.
Precise supports all five dimensions of APM functionality with a single, integrated and eponymous platform. For the purposes of this research, we examined Precise 9.0.
- By defining transactions in terms of a consistent change in state, rather than an integral set of actions, the Precise technology has been particularly effective in capturing the contributions of database and storage to end-to-end application latency.
- Precise's combination of a state-change transaction model with an understanding of how transactions flow and branch in dynamic virtual environments makes the technology particularly suitable for user-defined transaction profiling in emerging cloud contexts.
- Historically a differentiator, the Precise technology retains its status as best-of-breed for monitoring the internals of key off-the-shelf application stacks, such as Oracle's PeopleSoft and SAP.
- A partnership with EMC has opened up a new buying community for APM technologies (SAN administrators) almost exclusively to Precise.
- Precise's brand promotion has been low-key, so general market awareness of its technology's capabilities is limited and out-of-date.
- Precise's historically secure market-leading positions in the areas of database and off-the-shelf application stacks are threatened by the stack/DBMS vendors.
Progress supports all five dimensions of APM functionality through its Responsive Process Management Suite, Version 2.0. The four products that compose the suite are Progress Actional Version 8.2.3, Progress Apama Version 4.3.2, Progress Savvion Version 7.6.2 and Progress Control Tower Version 2.0, configured differently to support the different APM functional dimensions. For the purposes of this research, we examined all four products for end-user experience monitoring; application runtime architecture discovery, modeling and display; user-defined transaction profiling; and application performance analytics; while Progress Actional Version 8.2.3 alone was considered for component deep-dive monitoring in application context.
- Progress has chosen to focus its integration efforts almost exclusively on the correlation between application execution and business process flow, through linking user-defined transaction-profiling capabilities with a business-oriented CEP engine and a market-leading BPM platform.
- The IP underlying Actional — one of the first offerings to deploy a dope-and-trace mechanism for profiling transactions — is heavily protected by a range of patents, which continues to circumscribe the efforts of some of its closer competitors.
- Progress is a well-established vendor and its de facto ubiquity across the Global 2000 gives its APM platform a significant beachhead.
- Progress' focus on integrating APM upward into a BPM proposition has diminished its market presence as an APM vendor; the buying communities for its respective supporting technologies remain fundamentally distinct.
- The Actional technology has acquired a reputation for being difficult to implement, based on feedback from Gartner clients. This critique is common to many dope-and-trace platforms; however, given Progress' attention to enabling and promoting the BPM link, insufficient attention has been paid to Actional's simplification.
Quest supports all five dimensions of APM functionality. For the purposes of this research, we considered Foglight, Version 5.5.8 for end-user experience monitoring, component deep-dive monitoring in application context, and application performance analytics and a combination of Foglight, Version 5.5.8 and Foglight Network Management System, Version 5.5 for application runtime architecture discovery, modeling and display.
- Foglight's distinctive model-centric architecture provides an integrated approach to the five dimensions of APM; in fact, it can be extended further to capture aspects of the physical and virtual infrastructures on which the applications run.
- Quest is the market leader in multivendor performance monitoring for DBMSs, based on Gartner market data, giving Foglight a credibility in this area unique among the technologies considered in this research.
- Quest's historical strength in the Microsoft application management market, coupled with the ubiquity of its Toad database utilities package, gives Quest a beachhead in most large enterprises and many SMBs.
- Quest's packet capture and analysis technology offers strong user-behavior replay and error-capture functionality.
- The popularity of vFoglight and its integration with Foglight proper enables Quest to directly address the interplay between application performance and the performance of any virtual fabric over which applications run.
- The sales and marketing efforts supporting Foglight tend to be technically oriented and feature/function driven; hence, it is frequently acquired to supplement management solutions, rather than serve as an enterprise's APM center of gravity. This leaves Quest vulnerable to containment and even displacement strategies, as competitors improve the functional reach of their portfolios.
- Foglight still depends on AVIcode technology for the data collection aspects of its .NET deep-dive monitoring capability; however, Microsoft now owns AVIcode and clearly intends to compete directly against Quest.
SL supports four out of the five dimensions of APM functionality. For the purposes of this research, we considered RTView UX, Version 1.0 for end-user experience monitoring and RTView, Version 5.8 for user-defined transaction profiling, component deep-dive monitoring in application context and application performance analytics.
- SL's model-based approach to APM makes RTView a particularly effective solution for monitoring SOA-based application stacks.
- The technology has become a de facto standard for applications built using the Tibco Software framework.
- SL's low-key, technology-focused marketing and positioning means that the company frequently fails to be considered in situations for which RTView would be an on-target solution.
Visual Network Systems supports all five dimensions of APM functionality via a single, integrated product family. For the purposes of this research, we considered the Visual Performance Manager (VPM), with the Application Performance Appliance (APA) Version 6.9.
- The system is easy to implement and maintain. Because it's fundamentally based on the capture and analysis of TCP and User Datagram Protocol (UDP) packet flows, configurations are robust in the face of application changes.
- VPM/APA has added extensive Citrix XenApp environment-monitoring capabilities, which address an increasingly important problem area for Global 2000 enterprises.
- Through its focus on business-user-oriented visualization on the one hand and database-related metrics on the other, Visual Network Systems has taken key steps toward the integration of the application support and network administration perspectives.
- VPM/APA functionality retains a strong bias toward the needs of the network administrator.
- Visual Network Systems sells VPM/APA primarily through the channel, but its value-added resellers and system integrators, in general, pitch the product as a technology solution to technical problems, failing to exploit the business significance of APM.
Gartner does not recommend that clients restrict product evaluations to vendors in the Leaders quadrant. Vendors in the other quadrants may, in fact, offer the most appropriate solution for a client based on the size of the deployment, the organization's IT maturity level and the organization's industry alignment. Thus, the Gartner Magic Quadrant for Application Performance Monitoring should be viewed as one data point in the product evaluation process; it should not be viewed as the sole element driving a decision.
We review and adjust our inclusion criteria for Magic Quadrants and MarketScopes as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant or MarketScope may change over time. A vendor appearing in a Magic Quadrant or MarketScope one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. This may be a reflection of a change in the market and, therefore, changed evaluation criteria, or a change of focus by a vendor.
Ability to Execute
Product/Service: Core goods and services offered by the vendor that compete in/serve the defined market. This includes current product/service capabilities, quality, feature sets, skills and so on, whether offered natively or through OEM agreements/partnerships as defined in the market definition and detailed in the subcriteria.
Overall Viability (Business Unit, Financial, Strategy, Organization): Viability includes an assessment of the overall organization's financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue investing in the product, will continue offering the product and will advance the state of the art within the organization's portfolio of products.
Sales Execution/Pricing: The vendor's capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support, and the overall effectiveness of the sales channel.
Market Responsiveness and Track Record: Ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the vendor's history of responsiveness.
Marketing Execution: The clarity, quality, creativity and efficacy of programs designed to deliver the organization's message to influence the market, promote the brand and business, increase awareness of the products, and establish a positive identification with the product/brand and organization in the minds of buyers. This "mind share" can be driven by a combination of publicity, promotional initiatives, thought leadership, word-of-mouth and sales activities.
Customer Experience: Relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, this includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups, service-level agreements and so on.
Operations: The ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.
Completeness of Vision
Market Understanding: Ability of the vendor to understand buyers' wants and needs and to translate those into products and services. Vendors that show the highest degree of vision listen and understand buyers' wants and needs, and can shape or enhance those with their added vision.
Marketing Strategy: A clear, differentiated set of messages consistently communicated throughout the organization and externalized through the website, advertising, customer programs and positioning statements.
Sales Strategy: The strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service, and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.
Offering (Product) Strategy: The vendor's approach to product development and delivery that emphasizes differentiation, functionality, methodology and feature sets as they map to current and future requirements.
Business Model: The soundness and logic of the vendor's underlying business proposition.
Vertical/Industry Strategy: The vendor's strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.
Innovation: Direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes.
Geographic Strategy: The vendor's strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the "home" or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.