Digital Transformation, DevOps, and the Future of Testing

Transforming Testing for DevOps

Research from Gartner

IT Market Clock for Application Development, 2017

The need to deliver value continuously has led application organizations to take agile and DevOps to enterprise scale. Application leaders must select the right combination of tools, technologies and practices to enable the ongoing digital business transformation of their organizations.

Key Findings

  • Tools and practices commonly associated with Mode 2, such as agile and DevOps, are evolving to enterprise scale as organizations mature their bimodal capabilities.
  • As cloud-based services make advanced technologies such as machine learning more accessible, application architectures and tools that leverage such services take prominence while service-oriented approaches fade.
  • Because application development technology assets are interdependent with other assets, organizational collaboration is needed to change them effectively.


Application leaders who are modernizing application development should:

  • Support the enterprise-class agile and DevOps practices needed for a mature Mode 2 capability by employing the appropriate tools and frameworks.
  • Utilize cloud-based service offerings by establishing architecture principles that favor patterns such as mesh app and service architecture (MASA).
  • Coordinate changes to application development technology with changes to other technologies, to culture, and to the organization itself.


What You Need to Know

Digital business requires application organizations that can deliver value continuously, respond quickly to changing conditions, and innovate, all while preserving quality. As organizations continue to mature their bimodal IT capabilities in response (see Note 1), tools and practices that enable Mode 2 will gain market parity with those that evolved to develop and support what are now legacy systems.

Application leaders must take advantage of what the market offers, but they must balance technology change with corresponding changes to the organization, its people (and their skills) and its culture.

This research analyzes key technology assets in a number of application development (AD) market segments. It maps each technology asset class in terms of two parameters:

  • Commoditization
  • Progress through its own market life cycle

Organizations must understand both parameters when setting strategies for deploying, sourcing and retiring key technology assets in support of their portfolios of software services and applications. These assets include development tools and disciplines as well as a growing range of architectural patterns that can be used to build software services and applications.

Only with this understanding will organizations be able to:

  • Determine the right time to deploy emerging or adolescent AD technology assets.
  • Establish roadmaps for converting or replacing applications on a more aggressive basis, as dictated by technology risks, because application solutions can outlive the technologies from which they were built. Use this research in conjunction with Gartner's application fitness and value review processes to maintain an appropriate application strategy and roadmap.
  • Assess the cost and risk implications of a shrinking pool of people skilled in a given tool: increasingly scarce skills often command premium prices, and some technologies can persist for decades.
  • Have different generations of AD disciplines and technologies coexist in support of new types of solutions and in coordination with legacy applications.
  • Utilize this document together with the associated Hype Cycle to make informed decisions regarding the frequency of technology asset review cycles.

The IT Market Clock

The 2017 IT Market Clock for Application Development depicts the relative market maturity and commoditization levels for the major AD technology asset classes.

During the early stages of a specific technology asset's market life, it is likely to be used primarily by "visionaries" and "early adopters" because:

  • The pace of innovation is likely to be high, which usually raises the level of skill needed to fully exploit the technology.
  • The technology may be incomplete and – in the case of open source – initially lack commercial support, tooling or documentation.
  • Consulting resources and trained developers may be scarce in the market.

As a result, the value delivered by the technology will usually be highly differentiating, but it may be available to only a few.

If demand and supply grow, the technology becomes more standardized and the skills required to exploit it become more readily available. This generally results in declining implementation costs. The technology enters an early mainstream phase with increased commercial support and, in most cases, increasing levels of vendor competition. During this period of market evolution, strategic advantage can usually be found through the choice of the supplier and/or the delivery model.

As the technology matures, products and technologies from competing suppliers may become more functionally equivalent, making it easier to switch among them. The asset class will be at its most commoditized (and price competition at its highest) during this period of market evolution. In the commoditized phase, switching costs, prices and margins for suppliers reach their minimum levels. However, anomalies always exist, and the AD market is no exception: even as competitive pressure increases, switching costs can remain high.

Without competition – through a lack of competitors or a market transitioning away from the technology – the availability of appropriate skills and support from adjacent products will decline as technology assets approach the logical end of their support lives. The result is the final phase of market development, during which the level of commoditization for the asset class decreases. Prices rise because of reduced supplier choice and/or declining availability of the skills needed to maintain and run the products.

However, unlike the life cycles of many other technology assets, those in AD can exist in the final (Replacement) phase of the market for decades, with extended maintenance to support those solutions that use outdated or obsolete technologies but are still critical for day-to-day operations. In addition, many of these assets are supported by open-source communities well before and after their commercial viability.

Two important characteristics shape the adoption and retirement of technologies in AD:

  • Some older technologies and disciplines may persist in the installed base for as long as the applications they were used to deliver provide value to the business, and vendors stay up to date on the maintenance of the product. Java EE application servers are examples.
  • Older products may receive infusions of support for newer disciplines, architectures or technologies, thus retaining their product identity despite being dramatically different from their original form. An example is the resurgence of service-oriented architecture (SOA) using RESTful and event-driven styles.

Because of this, organizations frequently need legacy AD technologies to coexist with those of the next generation, with applications deployed to different generations of platforms. In many cases, however, it will be necessary to have a transition plan for divesting an older technology and to have the necessary financial resources to buy a replacement technology. Often, the critical driver will be the ability to obtain or retain the skills and knowledge base that will drive the move to new technologies.

The 2017 IT Market Clock for Application Development is shown in Figure 1. It positions 22 AD technology asset classes according to where they are in their market lives and their relative commoditization levels.

Figure 1. IT Market Clock for Application Development, 2017


Source: Gartner (October 2017)

Useful Market Life

For each AD technology asset class, market life is a relative measure of where the asset class resides in its life cycle. Measures are stated using the metaphor of a 12-hour clock face, and the full market lifetime of delivery comprises one complete 12-hour cycle, from 12:00 until 12:00.

The market life is composed of four phases:

  • Advantage: From 12:00 to 3:00, the market typically moves from an emerging status to adolescent status. Levels of demand and competition are typically low, so the technology is procured for what it delivers, not for its placement in its own market.
  • Choice: From 3:00 to 6:00, the market typically moves from an adolescent status to early mainstream status. This is the phase of highest demand growth, during which supply options should grow, and prices fall at their fastest rate.
  • Cost: From 6:00 to 9:00, the market moves from early mainstream to mature mainstream status. During this phase, commoditization is at its highest level, and costs will be the strongest motivator in most procurement decisions.
  • Replacement: From 9:00 to 12:00, the market moves from mature mainstream status, through legacy and obsolescence to "market end" (after which, the technology is no longer viable for procurement or use). Procurement and operating costs will steadily rise, and enterprises should seek alternative approaches to fulfilling the business requirement. Although some AD market asset classes fall into this category, most address stable market niches, and the pace of decline is slow.
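The four phases above map directly onto positions of the 12-hour clock face. As a hedged illustration (the function name and hour-based interface are assumptions for this sketch, not part of the research), the mapping can be expressed as:

```python
# Map a Market Clock position, given as clock hours in [0, 12), to its
# market-life phase, using the phase boundaries described in the text.

def market_phase(hour: float) -> str:
    """Return the market-life phase for a clock position in [0, 12)."""
    if not 0 <= hour < 12:
        raise ValueError("clock position must be in [0, 12)")
    if hour < 3:
        return "Advantage"
    elif hour < 6:
        return "Choice"
    elif hour < 9:
        return "Cost"
    return "Replacement"

# Example: an asset class positioned at 5:00 sits in the Choice phase.
print(market_phase(5.0))   # Choice
print(market_phase(10.5))  # Replacement
```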

The market life positions of technology asset classes are based on a consensus assessment of technology and market maturity. Some asset classes also appear in Gartner Hype Cycles, the span of which covers adoption up to 20% to 50% market penetration – equating to approximately 5:00 on the IT Market Clock.

Asset classes mature at different speeds; the color coding for each entry denotes how quickly the asset class will get to the next market phase. With the obvious exception of asset classes that are already in the final market phase, this is not an indication of how close to end of life an asset class may be. Once an asset class moves to the next market phase, the timing is reclassified.


Commoditization

Commoditization is shown on the IT Market Clock as the radial distance from the center of the clock: the further toward the outside an asset class is, the more commoditized it is. Commoditization is evaluated on a scale of 4 to 20, with 20 being the maximum level of commoditization. It is the sum of three measures:

  • The level of standardization: Determines the potential ease with which the product or technology can be interchanged and, hence, the buyer's potential capability to exercise choice.
  • The number of suppliers: Defines the range of choices available to buyers and, hence, their potential ability to take advantage of the interchangeability/interoperability yielded by standardization.
  • Access to appropriate skills: Every AD platform, toolset, discipline or architectural style/approach requires some level of internal capability to use it. The ease with which these capabilities can be obtained and augmented directly affects the internal cost of switching suppliers.
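The scoring above can be sketched as a small helper. This is a hedged illustration: the text gives only the overall 4-to-20 scale and the radial placement rule, so the per-measure ranges, function names and the normalized radius are assumptions:

```python
# Commoditization is the sum of three measures and falls on a 4-to-20 scale;
# the more commoditized an asset class, the farther it sits from the center.

def commoditization(standardization: int, suppliers: int, skills: int) -> int:
    """Sum the three measures; the total must land on the 4-to-20 scale."""
    total = standardization + suppliers + skills
    if not 4 <= total <= 20:
        raise ValueError("commoditization must fall on the 4-to-20 scale")
    return total

def radial_position(total: int, max_radius: float = 1.0) -> float:
    """Normalized radial distance from the clock center (0 = center, 1 = rim)."""
    return max_radius * (total - 4) / (20 - 4)

score = commoditization(5, 6, 7)   # 18
print(radial_position(score))      # 0.875, i.e. well toward the rim
```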

Levels of Standardization

Table 1 summarizes the scores corresponding to the different levels of standardization.

Table 1. Summary Measures for Standardization


Each row pairs equivalent levels of standardization across hardware, software and services, ordered from most to least standardized:

  • IT Hardware: Open standards, broadly based and enforced. IT Software: Highly componentized; most interfaces conform to codified or open-standard definitions; covered by free/low-cost licensing or Open Source Initiative (OSI)-recognized, open-source license agreements. IT Services: Significant cross-supplier adoption of a common technology and processes; many codified or open standards.
  • IT Hardware: Open standards embraced in core areas. IT Software: Componentized; interfaces for core functionality conform to codified definitions; covered by free/low-cost licensing or OSI-recognized, open-source license agreements. IT Services: Limited cross-supplier adoption of common technology and processes; some codified or open standards.
  • IT Hardware: Commercial standards embraced in core areas. IT Software: Partly componentized; a mix of open and proprietary formats and interfaces to functionality. IT Services: Cross-supplier adoption of common technology and processes; some codified commercial standards.
  • IT Hardware: Limited commercial standards. IT Software: Not componentized; proprietary file formats; interfaces through published proprietary APIs. IT Services: Limited cross-supplier adoption of common technology and processes.
  • IT Hardware: Proprietary technology employed by each leading vendor. IT Software: Limited interoperability with competing products; proprietary file formats and no published APIs. IT Services: Proprietary technology and/or processes employed by each leading supplier.

Source: Gartner (October 2017)

Level of Supplier Choice

Table 2 summarizes the scores corresponding to the levels of supplier choice.

Table 2. Scores for the Number of Available Suppliers


From greatest to least supplier choice:

  • Five or more suppliers
  • At least three geographically overlapping suppliers; consistent level of choice in all geographies
  • Three major suppliers, nonoverlapping
  • Two suppliers
  • Single supplier

Source: Gartner (October 2017)

Ease of Access to Appropriate Skills

Table 3 summarizes the scores corresponding to the levels of skills availability.

Table 3. Evaluating Access to Appropriate Skills


From greatest to least ease of access:

  • Skills levels reduced and becoming part of the general skill set
  • Skills readily available, costs falling
  • Supply and demand for skills balanced, stable costs
  • Skills in short supply, situation improving (demand falling and/or supply increasing)
  • Skills in short supply, shortage set to stay the same or worsen

Source: Gartner (October 2017)

Market Life and Commoditization Measures

Figure 2 summarizes the market life position, as well as the commoditization scores for each asset class.

Figure 2. Market Life and Commoditization Measures


Source: Gartner (October 2017)

IT Market Clock Changes for 2017

New for 2017

Two new asset class profiles are introduced in this IT Market Clock. This reflects a refinement in the market's approach to application development life cycle management (ADLM), which has been split into the following two asset class profiles:

  • Enterprise agile planning tools
  • DevOps toolchain

Off the IT Market Clock

Because this IT Market Clock pulls from such a broad spectrum of topics, many technologies are featured in a specific year because of their relative visibility, but are not tracked over a longer period of time. This is not intended to imply that they are unimportant — quite the opposite. In some cases, these technologies are no longer the strategic focus of the majority of organizations because of the emergence of newer technologies.

As such, we have removed the following asset profiles from this year's IT Market Clock:

  • Application development life cycle management
  • Client/server architecture

IT Market Clock Recommendation Summary

The recommendation summary (see Figures 3, 4 and 5) is a companion to the IT Market Clock for Application Development, 2017. It maps each AD asset class by current market life status and expected changes in an easy-to-read grid format.

Each element is color-coded by priority of the actions required:

  • Red denotes a recommendation that should be acted on within the next 12 months due to immediate potential opportunities or impending threats/risks.
  • Orange denotes a recommendation that should be acted on within 24 months.
  • Light green denotes a recommendation that is less urgent.
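The color coding above amounts to a simple lookup from color to action window. As a hedged sketch (the dictionary name and the use of `None` for the open-ended "light green" window are assumptions, not Gartner-defined values):

```python
# Action windows implied by the recommendation summary's color coding.
# None means no fixed window is stated, only "less urgent".
ACTION_WINDOW_MONTHS = {
    "red": 12,             # act within the next 12 months
    "orange": 24,          # act within 24 months
    "light green": None,   # less urgent; review on the normal cycle
}

def months_to_act(color: str):
    """Look up the recommended action window for a color-coded entry."""
    return ACTION_WINDOW_MONTHS[color]
```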

The degree of required action will vary by asset class. For some asset classes, the only requirement will be to establish that the technology remains a safe investment. Once an asset class moves to a new market phase, the timing indicator is reset.

Figure 3. The Market Clock Recommendation Summary (1/3)


Source: Gartner (October 2017)

Figure 4. The Market Clock Recommendation Summary (2/3)


Source: Gartner (October 2017)

Figure 5. The Market Clock Recommendation Summary (3/3)


Source: Gartner (October 2017)

Market Background

Because most enterprises build, integrate and maintain at least some of their own applications, they represent a large part of the market for AD disciplines and technologies. Within enterprises, these disciplines and tools are used by business and IT architects as well as analysts, project managers, life cycle managers, designers, developers, quality assurance (QA) testers, implementers, production turnover personnel and maintenance staff, among others.

Digital business will continue to change the way people interact with the virtual world and the things around them. Along with this, dramatic changes in technologies and platforms, together with shifts toward more responsive processes, are creating great disruption in the AD market.

Digital business is now expected to be a significant aspect of achieving competitive advantage and differentiation using information and technology. While digital innovation is key, there are functioning organizations that need stability and predictability in business operations. Gartner considers bimodal IT essential for enterprises to be able to respond to the varying needs of the business. Enterprises are increasingly adopting new technologies and agile approaches while figuring out how to manage large legacy platforms. Additionally, technology and practice decisions must be made jointly across the AD and operations teams to ensure smooth delivery, which is one of the reasons for the increasing adoption of DevOps practices.

Not all of AD can be covered on one IT Market Clock. For example, in the "Hype Cycle for Application Development, 2017," we covered 37 emerging technologies and disciplines – and even more have matured "beyond the Hype Cycle" and are in use in large organizations. Hence, in this research, we summarize the AD disciplines and technologies that are most important to our clients and position them on the "cradle to grave" technology life cycle.

To retain – or reassert – their role in facilitating transformation and growth, AD departments must continue their rebirth as more agile, responsive and innovative organizations.

We see four major areas in which AD leaders should prudently invest in support of that aim:

  • New processes: Integrate the application life cycle and connect it to operations, creating an automated delivery pipeline. This will include increased automation, as well as tools and practices that enhance collaboration. IT teams should mature beyond project-level agile practices into end-to-end enterprise agile delivery, and should consider scaled agile approaches such as the Scaled Agile Framework (SAFe) or Disciplined Agile (DA).
  • New architectures: To fully enable DevOps and continuous delivery practices, organizations must embrace service-oriented development and understand emerging microservice architectures (MSAs) that enable application release flexibility. These also are required to connect business processes to devices, enabling business innovation.
  • New approaches to user experience (UX): Apply user-centric design practices to ensure that the business value planned is the business value delivered by a given project. New modes of customer interaction delivered by devices and sensors will require new skills and context-aware frameworks.
  • New software models: Leverage cloud AD tools and application stores to unleash application innovation from software vendors, consultants and internal developers. Adopt citizen developer strategies to enable business users to create some of their own solutions in partnership with IT.

Unlike technologies covered in other IT Market Clocks, AD technologies tend to exist for extended periods, even after reaching the Replacement phase. This is because AD technologies are used to deliver the core applications of the business, many of which exist for decades, and it is not easy to replace an AD tool without jeopardizing the applications built with it.

Advances in AD technology do not, by themselves, mean these applications need to be modernized. Unless business needs have changed significantly, modernization is usually not worth the cost or the risk. However, the longer software stays in place, the more risk accumulates through loss of knowledge, loss of support and incompatibility with new technologies. The tools used to maintain and run an organization's core applications tend to survive until the vendor ceases to support them under its maintenance contracts. At that point, modernization becomes imperative.

As a best practice, there should be a transition plan for the entire solution architecture – addressing application, data, technology and business interdependencies – to avoid having to rush into costly and risky modernization projects. The related research on application portfolio management can help application leaders assess the business value, technology risks and effectiveness of applications using the categories of tolerate, invest, migrate and eliminate (TIME).

Organizations need to continuously update their AD portfolios, and the pressure to rebuild AD has rarely been greater. However, change requires planning, investment and time. Legacy AD disciplines and technologies must be retired before they become liabilities. New ones must mature enough to have standard features, adequate suppliers and good access to skilled developers. New AD must coexist with legacy AD as the portfolio evolves. This research helps organizations make timely and orderly transition decisions.

Supplier Landscape

There are thousands of application development technology suppliers, which vary in size, reach and focus, as well as a rapidly growing supply of open-source AD technologies. Nearly every major hardware, database, OS, middleware, cloud platform and application package vendor offers AD tools – or has partners that do. Many of these suppliers are megavendors with multiple product lines (including consulting services and solutions) and a global market presence for selling and supporting their AD tools. Even smaller, niche vendors tend to have worldwide reach – if not directly, then through alliance partners and distribution channels. However, some smaller vendors have a more-confined reach, with a focus on specific geographic niches or submarkets within those geographies. Similarly, some offerings focus on vertical industry niches (such as banking, insurance or healthcare) and horizontal technology niches (such as mobile AD and web application testing). In many cases, the offerings from smaller or niche vendors can be the best choice for an organization.

Because AD tools are interrelated – that is, modeling tools are generally integrated with programming tools, and requirements management tools can be used to drive automated test planning tools – many vendors with multiple types of AD tools sell them as part of a suite. However, we have found that most users have tools from multiple vendors and, in many cases, have similar functionality in multiple tools. Some have worked to have a single primary vendor that offers an "integrated" solution, but that often still requires significant effort to implement and integrate. These factors are leading to a growing market for life cycle integration and reporting.

A number of factors are changing the economics of the AD market. The increasing popularity of open-source software (OSS) has challenged large AD vendors and opened the door for smaller, more-aggressive suppliers. Often, open source software is combined with cloud computing, shifting the value proposition and enabling new vendors to disrupt traditional solutions. Increasing numbers of suppliers offer application platform as a service (aPaaS) solutions and development tools in the cloud. Some even offer components that execute inside the firewall, but are managed outside the organization. For the purposes of this research, we look at all suppliers of AD tools for professional developers, regardless of their fee structure or computing model.

Asset Class Profiles


Container Management

Analysis by: Dennis Smith

Definition: Container management software provides management and orchestration of OS containers. This category of software includes container runtimes, container orchestration, job scheduling, resource management and other container management capabilities. Container management software is typically DevOps-oriented and depends on the use of a particular OS container technology or specific container runtime.

Trend Analysis: Interest in OS containers is rising sharply as a result of the introduction of container runtimes, which have made containers more easily consumable by, and useful to, application developers and those with a DevOps approach to operations. Container management software vendors have increased the utility of OS containers by providing capabilities such as packaging, placement and deployment, and fault tolerance. The most notable container runtime is part of the Docker framework, which has a core value proposition that allows easy and efficient packaging of applications into OS containers. Together with APIs that allow easy integration and extension of the entire Docker framework, its runtime has become the nexus of the container-related ecosystem. Its main rival is rkt runtime (provided by CoreOS) and its associated app container specification.

Most use of container management software is focused specifically on Linux environments, where the OS container technology has rapidly improved. Windows-native containers, introduced with the release of Windows Server 2016, lag Linux in technology maturity. As use of OS containers – especially in conjunction with container runtimes – has grown, there has been strong growth of the associated ecosystem. That ecosystem includes container management systems that bundle multiple container capabilities (such as Apcera Platform, CoreOS Tectonic, Docker Community and Enterprise Editions, Mesosphere DC/OS, and Rancher Labs Rancher), PaaS frameworks that have incorporated container functionality (such as Apprenda, Pivotal Cloud Foundry, and Red Hat OpenShift), and public cloud infrastructure as a service (IaaS) offerings specifically designed to run containers (such as Amazon EC2 Container Service, Google Container Engine, Microsoft Azure Container Service and Joyent Triton Elastic Container Infrastructure).

An increasing number of today's organizations are pursuing the use of container runtimes in production environments. These organizations are also exploring how container management software could alter processes and tools in the future. There is a high degree of interest in, and awareness of, container runtimes in early-adopter organizations, and significant grassroots adoption from individual developers. Consequently, container runtimes and their associated software may be used with increasing frequency in development and testing. Container management software is likely to remain an early-adopter technology for the next 12 to 24 months.

Time to Next Market Phase: 0 to 2 years

Business Impact: Container management enables the use of container runtimes, which make it easier to take advantage of OS container functionality, including providing integration into DevOps tooling and workflows. Container management software provides additional capabilities (including orchestration, scheduling, security and logging) that allow containers to be run at scale in production environments. Containers typically take an application-centric view — the OS container is simply a convenient vehicle into which an application can be deployed, improving management efficiency by providing applications with an apparently homogeneous OS environment. Container management software should help improve both the productivity of DevOps engineers and quality via standardization and automation.

Because OS containers can be rapidly provisioned and scaled, and the scaling units can be much smaller than a typical virtual machine (VM), container frameworks with autoscaling capabilities can further improve utilization by dynamically allocating small increments of compute resources. This resource efficiency potentially leads to lower costs, especially when deploying applications into IaaS and PaaS offerings.

User Advice: Early-adopter organizations should begin exploring Docker or rkt as alternatives for packaging and deploying applications and their runtime environments. Container management tools should be viewed as a supplement to configuration management, not a replacement for it. As container integration is added to existing DevOps tools and to the service offerings of cloud IaaS and PaaS providers, DevOps-oriented organizations should experiment with altering their processes and workflows to incorporate containers. Organizations should also look at the emerging ecosystem around the container runtimes.

An organization may be a good candidate to explore a native container management tool in conjunction with OS containers, as an alternative to VM-based cloud management platforms, if it meets the following criteria:

  • DevOps-oriented.
  • High-volume, scale-out applications; a microservice architecture; or large-scale batch workloads.
  • Willing to place these workloads in OS containers.
  • Assumes trust between containers.
  • Intends to use an API to automate deployment, rather than obtaining infrastructure through a self-service portal.
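The five criteria above can be expressed as a simple screening function. This is an illustrative sketch only: the dataclass fields paraphrase the bullets, and the all-or-nothing test is an assumption rather than a Gartner-defined scoring model.

```python
from dataclasses import dataclass

@dataclass
class OrgProfile:
    """One flag per criterion from the candidate checklist."""
    devops_oriented: bool
    scale_out_or_batch_workloads: bool
    willing_to_containerize: bool
    assumes_trust_between_containers: bool
    api_driven_deployment: bool

def good_container_candidate(org: OrgProfile) -> bool:
    """An organization qualifies only when it meets all five criteria."""
    return all(vars(org).values())

# Example: an API-averse organization fails the screen.
org = OrgProfile(True, True, True, True, api_driven_deployment=False)
print(good_container_candidate(org))  # False
```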

Sample Vendors: Apcera; Apprenda; CoreOS; Docker; Mesosphere; Pivotal; Red Hat; Rancher Labs

Machine Learning

Analysis by: Magnus Revang

Definition: Machine learning (ML) is a technical discipline that aims to extract certain kinds of knowledge/patterns from a series of observations. Depending on the type of observations provided, it splits into three major subdisciplines:

  • Supervised learning: Where observations contain input/output pairs (aka labeled data).
  • Unsupervised learning: Where those labels are omitted.
  • Reinforcement learning: Where a feedback loop gives an evaluation that reinforces or weakens a particular outcome.
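Of the three subdisciplines, the supervised case is the easiest to show concretely: labeled input/output pairs are used to predict the label of a new observation. The following is a minimal 1-nearest-neighbor sketch in pure Python; the data and names are invented for illustration, and this is a teaching toy rather than a production ML approach.

```python
# Supervised learning in miniature: classify a new observation by the label
# of its nearest labeled neighbor (squared Euclidean distance).

def nearest_neighbor(labeled_data, query):
    """Predict the label of `query` from (input, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labeled_data, key=lambda pair: dist(pair[0], query))
    return label

# Labeled observations: feature vector -> class label.
training = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"),
            ((5.0, 5.0), "high"), ((4.8, 5.2), "high")]
print(nearest_neighbor(training, (4.5, 4.9)))  # high
```

Dropping the labels from `training` would turn this into the unsupervised setting, where an algorithm such as clustering must find the two groups on its own.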

In many cases, machine learning is treated as synonymous with artificial intelligence (AI) – although "AI" is a broader term and can also refer to larger systems in which machine learning forms a small component.

Trend Analysis: Machine learning, either as an extension of or as part of AI, is one of the most-hyped concepts in IT at the moment. A sub-branch of machine learning called deep learning, or deep neural nets, gets even more attention, because it seemingly conquers cognitive fields that were previously the exclusive domain of humans: image recognition, text understanding and speech recognition.

Currently, machine learning supersedes older terms such as "data mining," "predictive analytics" and, to some extent, even "advanced analytics." The drivers for continued massive growth and adoption are the growing surges in data volume and complexities that conventional engineering approaches are increasingly unable to handle. We are seeing almost weekly reports of machine learning's impact (and potential impact) on transportation, energy, medicine and manufacturing.

Most application developers will be exposed to machine learning through cloud-based APIs that aim to be developer friendly and hide the implementation details. Only a minority will have to go deeper, looking at the particular machine learning algorithm. Although open-source machine learning libraries from Google, Facebook, IBM, Yahoo, Microsoft and Baidu have been released, adoption remains largely outside of the enterprise in research and software companies.

Time to Next Market Phase: 2 to 5 years

Business Impact: The more complex the problem, the more likely that monitoring and control of it cannot be effectively mastered by even the smartest engineers. Machine learning drives improvements and new solutions to business problems across a vast array of business and social scenarios, including:

  • Automation
  • Drug research
  • Customer relationship management
  • Supply chain optimization
  • Predictive maintenance
  • Operational effectiveness
  • Workforce effectiveness
  • Fraud detection
  • Automated vehicles
  • Resource optimization

User Advice:

  • Evaluate platforms that exist as SaaS and can be used through APIs – deploying and training algorithms directly should only be done in cases where there is significant competitive advantage in doing so.
  • Establish a center of excellence for machine learning comprising data scientists, developers and representatives of the business.
  • Nurture the required talent for machine learning, and partner with universities and thought leaders to keep up to date with the rapidly changing pace of data science.
  • Understand the capabilities of machine learning and its potential business impact across a wide range of use cases — from process improvements to new services and products.
  • Track what initiatives you already have underway that have a strong machine-learning component (for example, customer scoring, database marketing, churn management, quality control and predictive maintenance).
  • Monitor what other machine learning initiatives you could be a part of and what your peers are doing.
  • Assemble a (virtual) team that prioritizes those machine learning use cases and establish a governance process to progress the most interesting use cases to production.
  • Involve people outside of IT who are already working with advanced analytics, such as business intelligence analysts and data scientists, in any machine learning project.

Sample Vendors: IBM; Microsoft; Amazon; Google; Salesforce; Veritone


Microservices

Analysis by: Anne Thomas

Definition: A microservice is a tightly scoped, strongly encapsulated, loosely coupled, independently deployable and independently scalable application component. Microservice architecture (MSA) applies service-oriented architecture (SOA) and domain-driven design (DDD) principles to deliver distributed applications composed of microservices. MSA has three core objectives: development agility, deployment flexibility and precise scalability.
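To make the definition concrete, the sketch below stands up a single, tightly scoped service exposing one HTTP endpoint. The "orders" domain, URL and payload are hypothetical; a real microservice would add its own data store, interface contract, packaging and deployment pipeline:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderHandler(BaseHTTPRequestHandler):
    """A tightly scoped service: it answers one question about orders
    and nothing else. Domain and data are illustrative."""

    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# Run the service on an ephemeral port and call it once, as a client would.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/orders/42") as resp:
    order = json.loads(resp.read())
server.shutdown()
```

The point of MSA is not the HTTP handler itself but the independence around it: this component could be redeployed, rescaled or rewritten in another language without touching any other part of the application.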

Trend Analysis: Leading digital business organizations, such as Netflix, Amazon and eBay, started building systems using MSA a few years ago, but the term "microservice" didn't start to gain traction until early 2014. Now, it is one of the hottest buzzwords in application architecture circles, and it is becoming the preferred way to build cloud-native applications. And yet, it is the subject of much debate: What is a microservice? Is it defined by its size, or something else? Do you need to adopt new patterns and infrastructure to use them? (Yes!) What are the differences between MSA, APIs and ordinary SOA?

Some vendors are blurring the debate by microservice-washing their products. Many people have started calling any shared service or anything with a RESTful API a microservice. This confusion muddies the water because APIs and ordinary SOA don't deliver the same agility and scalability benefits as MSA.

To achieve the MSA benefits of continuous delivery and web-scale scalability, development teams must adopt significantly different design patterns and application infrastructure, they must manage and coordinate many independent components and interdependencies, and they will need to make significant changes to organization structures, roles and responsibilities, and governance practices. These changes will be too disruptive for many organizations and, as a result, many organizations are likely to abandon efforts to adopt MSA. More likely, they will adopt a less-disruptive and less-beneficial approach to dismantling their monolithic applications: an approach that Gartner calls "miniservices," and yet they will still call it "microservices," compounding the confusion. For now, adoption of true MSA is very limited, but adoption of pseudo-MSA (APIs, miniservices and SOA in disguise) is rampant.

Time to Next Market Phase: 2 to 5 years

Business Impact: MSA is a key enabling innovation for digital business pursuits. MSA enables business agility. Developers can rapidly implement and continuously deploy new features according to business demands, rather than application development schedules. MSA also enables scalability. Most organizations that have adopted MSA have done so to accommodate massive growth in their digital business initiatives. As platform vendors build out microservice infrastructure, this architectural model and its agility and scalability benefits will become more accessible to a broader audience.

User Advice:

For CIOs, CTOs and application leaders:

MSA imposes a significant learning curve, and it requires discipline and commitment:

  • Agile development methodologies, DevOps and continuous delivery practices are MSA prerequisites. Cloud computing, particularly application platform as a service (aPaaS), function platform as a service (fPaaS) or container infrastructure as a service (cIaaS), goes hand in hand with microservices.
  • MSA purists build microservices by using the DDD philosophy and design patterns, including bounded context, event sourcing and command query responsibility segregation (CQRS). Your architects and developers may not be familiar with this paradigm, and they will need training.
  • For the moment, MSA is done mostly by hand. Developers can use almost any service development framework to build microservices (each feature team can use a different language and framework, if they like). The challenging part is managing and coordinating microservice interdependencies. MSA simplifies the implementation of individual services, but the entire distributed application creates a more complex operational environment. Dependencies must be explicit, and interface contracts must be solid.
  • MSA requires a new infrastructure. Currently, most teams working with MSA rely on an assortment of open-source technologies to coordinate development, deployment and operation of microservices. Be prepared to handcraft a microservice infrastructure from piece parts with no instruction manual.
  • MSA also impacts organizational responsibilities. Microservice feature teams should "own" their microservices — from design to support. DevOps operators must learn new software packaging and life cycle management techniques.

Microservices are not for everyone:

  • Many organizations don't have extreme agility and scalability requirements and, therefore, don't need to adopt this type of disruptive architecture. These organizations can derive adequate benefits using the less-disruptive miniservices model.
  • Organizations with extreme agility and scalability requirements should adopt MSA in its purest form.
  • Organizations that are implementing a continuous delivery practice should investigate using MSA to facilitate the practice.
  • Organizations that are not using agile development methodologies and DevOps should invest in improving their development practices before adopting microservices.

Sample Vendors: Amazon Web Services; Confluent; Docker; Google; HashiCorp; IBM; Microsoft; Netflix; Pivotal; Red Hat

Enterprise-Class Agile Development Methodologies

Analysis by: Mike West, David Norton

Definition: Enterprise-class agile development (EAD) is the use of business-outcome-driven, customer-centric, collaborative practices with continuous stakeholder feedback in order to support delivery of composite solutions across multiple agile development teams.

Trend Analysis: EAD adoption has traditionally been driven primarily bottom-up and as a natural evolution of agile development in single-product teams. However, top-down strategic adoption is growing, driven by information and communication technology (ICT) transformation initiatives, and business demands for faster time to market. Top-down adoption has been accelerated by growing awareness of frameworks such as SAFe, LeSS, DA and others.

From 2017 through 2020, 40% of organizations will actively adopt EAD and related enterprise agile frameworks to gain business differentiation. Many people in architecture groups, the project management office (PMO), and infrastructure and operations (I&O) organizations are resistant to agile practices, which could impede EAD adoption, but 2015 saw a significant "thawing" of attitudes as EAD became more mainstream. Both boutique and global system integrators (GSIs) are actively promoting agile commercial offerings. EAD (and the frameworks that support it) are growing in mind share and adoption, especially in large organizations with complex agile implementations to manage.

Time to Next Market Phase: 0 to 2 years

Business Impact: EAD is about business benefits and business outcomes; it is not just a technology and IT issue. Application and business leaders must be clear about the commitment required to make EAD successful. Hence, EAD benefits can only truly be realized if lines of business and product owners structure their business cases and roadmaps with agile delivery in mind. This includes pooled funding, targeted revenue gains, and agile portfolio and product management processes.

A holistic approach to EAD across the entire development process makes EAD beneficial to a wide range of projects and products. EAD is, however, best-suited to large and complex initiatives. Both systems of differentiation and systems of record can benefit from EAD.

Business domains with a degree of uncertainty – or where the level, scale and pace of business change are issues – will be good fits for EAD. Also, programs that require a more proactive approach to fiscal governance can benefit from EAD.

User Advice:

Application leaders should:

  • Develop a long-term picture of EAD that ensures alignment of teams and business outcomes, taking into account tactical and strategic needs across the enterprise.
  • Get the basics right with agile product/feature teams trained in extreme programming practices, DevOps practices that are supported by lean IT principles, and communities of practice to support both agile and DevOps.
  • Move away from a functional organization structure to one that is based around value streams serving customers with cross-functional teams for end-to-end product delivery — that is, from idea to release.
  • Define agile roles and touchpoints with key stakeholders, including application developers, information and solution architects, and those in portfolio and product management, quality assurance (QA) and I&O areas.
  • Create key performance indicators (KPIs) that include technical attributes (defect density, technical debt, refactoring rate and QA) and business attributes (backlog value, cycle and lead time, responsiveness and flexibility), and make the KPI dashboard available to all stakeholders.
  • Adopt a flat management structure, moving away from classic command and control to one where decision rights are pushed closer to teams and related business units, guided by lightweight program and portfolio governance.
  • Consider enterprise-scale agile approaches, such as SAFe, LeSS and DA, but do not assume that these frameworks will lead to the changes you want without the required culture change.
  • Expect that the most effective enterprise agile approach may require some adaptation and specialization of enterprise agile frameworks, rather than assuming that they can be used "as is," off the shelf.
  • Recognize that suppliers of tools for application development life cycle management and project and portfolio management are moving aggressively to provide EAD support.

Sample Vendors: Agile Alliance; Disciplined Agile; Scaled Agile; Scrum Alliance; The LeSS Co.


DevOps

Analysis by: George Spafford, Thomas Murphy

Definition: DevOps is a perspective that requires cultural change and focuses on rapid IT service delivery through the adoption of agile, lean practices in the context of an integrated approach. DevOps emphasizes people and culture to improve collaboration between development and operations groups as well as other IT stakeholders, such as architecture and information security. DevOps implementations utilize technology (especially automation tools) that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.

Trend Analysis: DevOps doesn't have a concrete set of mandates or standards, or a known framework – such as Information Technology Infrastructure Library (ITIL) or Capability Maturity Model Integration (CMMI) – making it subject to a more liberal interpretation. For many, it is elusive enough to make it difficult to know where to begin and how to measure success. This can accelerate (or potentially inhibit) adoption. The key is to define what it means to your organization. DevOps is primarily associated with continuous integration and continuous delivery of IT services as a means of providing linkages across the application life cycle, from development to production. DevOps concepts are becoming more widespread across cloud projects and in more-traditional enterprise environments, yet every implementation is unique. The creation of DevOps teams brings development and operations staff together to more consistently manage an end-to-end view of an application or IT service, but this requires major shifts in culture and the ways in which success is measured.

Time to Next Market Phase: 2 to 5 years

Business Impact: DevOps is focused on accelerating the delivery of business value via the adoption of continuous improvement and incremental release principles adopted from agile methodologies. Smaller and more frequent releases to production can improve overall quality, resulting in improved stability and risk mitigation. While not explicitly a focus for most DevOps projects, once initial projects are successful, an adjacent but critical outcome is that clients of IT (both internal and external) will have a better experience when using the application.

Many new and transformational initiatives are not sufficiently focused on reducing risk, but, through iterative use of DevOps and architectural adoption, value can be enhanced while risks and costs can be managed.

User Advice: DevOps projects are most successful when there is a focus on business value. There must also be executive sponsorship, with the understanding that the new team will have to make an often-difficult shift in organizational philosophy away from today's traditional development and operations projects. Focus DevOps projects on developing Mode 2 capabilities that support systems of innovation utilizing agile development.

Application leaders should recognize that DevOps hype has peaked among tool and service vendors, with the term applied aggressively and claims outrunning demonstrated capabilities. Many tool vendors are adapting their existing portfolios and branding them DevOps to gain attention. Some vendors are acquiring smaller point solutions specifically developed for DevOps to boost their portfolios. IT organizations must establish key criteria that will differentiate DevOps traits (strong toolchain integration, workflow and automation, for example) from traditional management tools. Both development and operations should look for tools that replace custom scripting with improved deployment success through more predictable configurations.

Because DevOps is not prescriptive, it will result in a variety of manifestations, making it more difficult to know whether what you are doing is actually DevOps. However, the lack of a formal process framework should not prevent IT organizations from developing their own repeatable processes for agility and control.

IT organizations should approach DevOps as a set of guiding principles, not as a process dogma. Select a project with both acceptable value and risk involving development and operations teams to determine how to approach DevOps in your enterprise. Start small and deploy DevOps iteratively, taking into account lessons learned along the way. As a minimum, examine activities along the existing developer-to-operations continuum, where the adoption of more-agile communication can improve production outcomes. As development efforts leverage enterprise agile frameworks to scale, DevOps must be addressed as well.

Sample Vendors: Not applicable

Multivariate and Split A/B Testing

Analysis by: Magnus Revang

Definition: A/B and multivariate testing is a systematic, statistical and empirical process designed to facilitate continuous improvement of the user experience. E-commerce and marketing sites use the technique extensively to increase measurable outcomes, but it has gained popularity in other settings, including media, financial services and travel.

Trend Analysis: Although split A/B testing and multivariate testing are well-known and well-established in high-end e-commerce sites, other smaller websites are only now starting to adopt these techniques, which places this category of technologies in the early mainstream stage. Within the enterprise, business users – primarily in marketing – have incorporated this practice into their repertoires. This is applied primarily to customer-facing sites and only occasionally to internal-facing and other sites.

Even with multiple high-profile cases of tremendous value generated by A/B testing, the reality is that adoption has always been far slower than expected. One root cause of this enterprise inability to adopt is that A/B testing requires a dedicated team that works incrementally over time. In project-based IT organizations, this runs contrary to how many enterprises are organized – meaning that such efforts often fail.

The move through agile to DevOps and hypothesis-driven development creates an organizational structure that is compatible with how A/B testing needs to be performed to gain value over time. However, it also represents a shift from A/B testing tools focused on front-end content through JavaScript tags to deeper feature testing through an API.

There are numerous A/B testing tools – both stand-alone and integrated with broader-scope platforms – offering varying degrees of functionality. They are often relatively easy to instrument, so enterprises should run proofs of concept.

Time to Next Market Phase: 0 to 2 years

Business Impact: Multivariate and A/B testing can increase revenue by improving the conversion rate, improving user satisfaction and retention by enhancing user effectiveness, and reducing the cost of self-service scenarios by improving usability. It can enable more-rapid development by testing features on subsets of users before final release, acting like an immune system for usability problems and freeing up developer time in a DevOps environment.

User Advice:

Application leaders should:

  • Establish measurable success criteria for the website to be improved with A/B and multivariate testing.
  • Follow a proper A/B testing process based on the scientific method of hypothesis testing.
  • Combine A/B and multivariate testing for different purposes by testing the hypothesis with A/B and refining the change needed with multivariate testing.
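The hypothesis-testing step above can be sketched with a two-proportion z-test, a common way to decide whether a variant's conversion rate differs significantly from the control's. The visitor and conversion counts below are hypothetical; in practice, teams may rely on a statistics library or the analysis built into their testing tool:

```python
import math

# Hypothetical split A/B test results: (conversions, visitors) per arm.
control_conversions, control_visitors = 200, 1000
variant_conversions, variant_visitors = 260, 1000

p_control = control_conversions / control_visitors
p_variant = variant_conversions / variant_visitors

# Pooled conversion rate under the null hypothesis (no difference).
pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
standard_error = math.sqrt(
    pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors)
)

# Test statistic and two-sided p-value from the normal CDF.
z = (p_variant - p_control) / standard_error
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

significant = p_value < 0.05  # conventional threshold; choose per your risk appetite
```

The statistical machinery is the easy part; the discipline lies in forming the hypothesis before the test, fixing the sample size in advance, and not stopping the experiment the moment the numbers look favorable.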

Sample Vendors: Adobe; Dynamic Yield; Monetate; Optimizely; Qubit; SiteSpect; Visual Website Optimizer


DevOps Toolchain

Analysis by: Thomas Murphy and Colin Fletcher

Definition: DevOps toolchains are compositions of synergistic tools enabling creation, delivery and management throughout the software product life cycle. These tools are designed to support automation and collaboration of key tasks and processes and generally support the practices of continuous integration (CI) and continuous delivery (CD). DevOps implementations utilize technology (especially automation tools) that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective. Many of these tools are outgrowths of prior technology components such as ADLM and configuration and release management.

Trend Analysis: DevOps toolchains are composed of several components and many vendors have been scrambling to position themselves as providing a complete solution. There are also many vendors that are expanding their products from their natural markets into neighboring markets (with continuous integration tools trying to participate as release automation tools, for example). The market is also evolving as technology platforms shift to support container-based cloud delivery, and tools themselves are rapidly shifting from primarily being on-premises to cloud-based SaaS. All of the major cloud platform providers are also getting involved in some fashion by creating code pipelines, which is generally the CI/CD section of the overall toolchain.

The pace of market change is creating challenges for users and there are still many gaps for most users. Tools must also fit to where an organization currently sits in the overall adoption of DevOps practices, and we often see companies a bit out of sync, either trying to drive tool consolidation too early, or expecting the tools to solve what are fundamentally cultural and practice problems. On the development side of the toolchain in particular, the market has been heavily influenced by open source (including git, Jenkins, Selenium, Docker and Postman) and we expect that this will continue as vendors make increasing use of the open-source products that underpin their offerings.
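A toolchain is typically wired together through pipeline-as-code. The fragment below is a hypothetical sketch in a GitLab CI-style syntax; the stage names and `make` targets are illustrative assumptions, not any vendor's actual configuration:

```yaml
# Hypothetical pipeline-as-code sketch (GitLab CI-style syntax).
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - make build          # compile and package the service

test-job:
  stage: test
  script:
    - make test           # unit and integration tests (e.g., Selenium for UI)

deploy-job:
  stage: deploy
  script:
    - make deploy         # release automation into the target environment
  only:
    - main                # deploy only from the mainline branch
```

Each job maps to a tool in the chain (build system, test framework, release automation), which is why gaps or poor integration in any one link slow the whole pipeline.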

Time to Next Market Phase: 2 to 5 years

Business Impact: DevOps is focused on accelerating the delivery of business value, and toolchains are designed to help automate and manage these processes. Increasing the pace of delivery cannot happen without broad adoption of automation techniques. It is impossible for an organization to practice DevOps without at least a core set of tools assembled into a toolchain.

On their own, many of the components of a toolchain can provide organizations with productivity improvements while also increasing quality through a reduction of human error in repetitive tasks. We expect that, as toolchains are assembled, additional gains will occur through the use of machine learning and AI technologies. Tools, however, will only be effective if the organization first has a firm grasp on what it expects to achieve, not from a technology perspective, but from a business perspective. We expect that cultural challenges will lead to several failures.

User Advice: It is critical that businesses first understand business objectives, activities and skills, and then select tools that will support them. Many of the tool decisions currently being made will be tactical rather than strategic, and the market will continue to evolve through merger and acquisition activity as well as other competitive behavior. It should also be noted that DevOps is at its best in use cases that match digital business. It is harder pressed when it comes to legacy technology, Mode 1 development and off-the-shelf software. In many cases, these may not make use of the full toolchain, but they can still benefit from tools such as test and release automation.

In the long term, custom development will consolidate around the cloud platform providers as they provide customizable toolchains optimized to deliver and manage components built and delivered to their own platforms. We expect that independents will also continue to exist to support cross-platform use cases, as well as to support specific niche functionality (such as regulatory compliance in specific industries). In many cases, these will be plug-in services, and it is important to understand how vendors deliver and support programmatic APIs to support integration.

Full Life Cycle API Management

Analysis by: Paolo Malinverno, Mark O'Neill

Definition: Full life cycle API management involves the planning, design, implementation, publication, operation, consumption, maintenance and retirement of application programming interfaces (APIs). It includes a developer's portal to target, assist and govern communities of developers who use APIs, as well as runtime management through API gateways, providing security and gathering analytics.

Trend Analysis: Focus has shifted toward the role of API programs as fundamental enablers of digital strategies, resulting in projects becoming more business oriented, with buying centers shifting from IT departments to business units. At the same time, support for service delivery through microservices, as well as the changing nature of API consumers beyond mobile, is driving innovation in API management. Security has a renewed focus, as API breaches and malicious bot activities grow.

Time to Next Market Phase: 0 to 2 years

Business Impact: APIs have grown strongly in previous hold-outs (such as financial services, government and healthcare), joining the early adopters in media, travel and retail. CIOs now recognize the importance of an API program as part of a digital business technology platform. Full life cycle API management is key to managing these new API programs. A critical mass of companies and government institutions is now publishing APIs in developer portals to fuel B2C innovation, enable the use of mobile apps and take advantage of more-direct B2B interactions with business partners.

Although full life cycle API management offerings typically provide API monetization capabilities, direct revenue from APIs remains unusual. Currently, APIs are typically deployed in the service of a larger business outcome.

User Advice: For CIOs, architects and application leaders in charge of API programs:

  • Full life cycle API management is the functionality organizations need in order to run successful API programs, execute digital strategies, and thrive in the API economy. Starting with the planning of an API program, the design of APIs, through to delivery of APIs and ongoing versioning and retirement, full life cycle API management addresses all stages of an API program. Make an early decision on a full life cycle API management platform to apply planning and control to your API program, rather than leaving it to when APIs have already been built and are about to be deployed.
  • Treat APIs as products, which may include creating the role of "API product manager." Full life cycle API management empowers API product managers by providing visibility of API usage, allowing decisions to be made on API roadmaps and versioning, and providing business metrics to communicate to stakeholders.
  • Understand that API management should not limit clients to only one way to consume APIs, such as via mobile apps. Full life cycle API management solutions have evolved to include support for IoT scenarios and protocols, as well as for B2B APIs and APIs consumed by rich web applications.
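The design stage of this life cycle is often captured in a machine-readable API description that the management platform, portal and gateway can all consume. The fragment below is a hypothetical OpenAPI 3.0-style sketch; the "Orders API" name, path and responses are illustrative:

```yaml
# Hypothetical OpenAPI 3.0 fragment for the design stage of an API program.
openapi: 3.0.0
info:
  title: Orders API        # illustrative, product-style API name
  version: 1.0.0           # explicit versioning supports roadmap and retirement decisions
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested order
        "404":
          description: Order not found
```

Treating a description like this as the API's contract lets the "API product manager" role evolve it deliberately, rather than discovering breaking changes after consumers have already built against it.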

Sample Vendors: Amazon Web Services, Axway; CA Technologies; Google (Apigee); IBM; Microsoft; MuleSoft; SAP; Software AG; TIBCO Software

Enterprise Agile Planning Tools

Analysis by: Keith James Mann

Definition: Enterprise agile planning (EAP) tools enable organizations to make use of agile practices at scale to achieve enterprise-class agile development. This is done with practices that are business-outcome-driven, customer-centric, collaborative and cooperative, as well as with continual stakeholder feedback. These tools represent an evolution from project-centric agile tools and ADLM tools. The majority of tools in this space play into the overall ADLM product set, acting as a hub for the definition and management of work item tracking.

Trend Analysis: Agile tools initially focused on tactical, team-level activities, generally not scaling beyond the project level. Strategic, enterprise-level ADLM tools, meanwhile, have focused on traditional waterfall and iterative methodologies. The rise of agile to the enterprise scale has challenged ADLM vendors to add agile support, and pressured agile tool vendors to provide portfolio-level capabilities. Some vendors have responded through product evolution, others through partnerships, and still others through acquisitions. The market remains dynamic, with these activities continuing today.

For application leaders, the selection of an EAP tool is complicated not only by the rapidly changing market, but by the relationship between EAP tools, enterprise-class agile development frameworks, and DevOps tools. A cohesive strategy that takes into account the cultural and organizational changes inherent in DevOps and enterprise-scale agile is needed.

Time to Next Market Phase: 2 to 5 years

Business Impact: Digital business demands enterprise-scale agile, but management and governance bodies require insight into the development of the agile product portfolio. Providing this insight through traditional means, such as project plans and milestone-based reporting, can be difficult or impossible because of the short iterations and changing backlogs inherent in agile. Many application leaders find that they need dedicated planning tools to sustain enterprise-scale agile development.

The exact processes that these tools support are heavily influenced by enterprise agile frameworks such as SAFe, LeSS and DA. Many vendors specifically support certain frameworks. We expect to see a co-evolution of the tools and frameworks, which will affect the processes used in many application organizations. This may lead to change fatigue. Conversely, some organizations may begin to define their own methodologies (perhaps drawing on various frameworks), and may look for vendors that offer flexibility and ease of configuration over adherence to a formal framework.

User Advice: Organizations will need to satisfy multiple stakeholders when choosing an EAP tool. In many cases, these stakeholders will already have tools in place that, from their perspective, meet their application development planning needs. Application leaders may have to balance competing demands. Organizations already using, or planning to use, an enterprise agile framework should select the framework first, then select a tool to support it; not all tools are flexible enough to support all frameworks.

Many agile teams thrive on self-organization and autonomy. Introducing process-specific tools to such teams will probably be counterproductive. Lightweight, flexible tools may be appropriate. An alternative is to allow teams to use their own tools and processes, and integrate these with an EAP tool. Such integration may be imperfect and require manual workarounds.

The components of a DevOps toolchain can provide valuable information about the status and quality of application development work. The ability of an EAP tool to integrate with certain DevOps tools but not others could be a factor, either in the choice of the EAP tool or in the choice of the DevOps tool.

Few organizations have truly aligned their application development and operations practices with each other, let alone with their governance and planning practices. For most, it is a work in progress whose path is hard to foresee. The organizational and cultural changes of such alignment will likely outweigh the impact of any tools, but application leaders should beware of tools that will constrain the change.

Sample Vendors: Atlassian; CA Technologies; CollabNet; Hewlett Packard Enterprise; IBM; Inflectra; LeanKit; Microsoft; Targetprocess; VersionOne


Miniservices

Analysis by: Anne Thomas

Definition: A miniservice is a coarse-grained, loosely coupled, independently deployable and independently scalable application component. It is similar to a microservice, but it is coarse-grained and has more relaxed independence constraints than a microservice. It is finer-grained than a traditional service-oriented architecture (SOA) service and is independently deployable, whereas most SOA services are deployed in monolithic packages.

Trend Analysis: The term "miniservice" is not yet commonly used, but most people who claim to be building microservices are, in fact, building miniservices. Most people looking to improve agility by refactoring their monolithic applications start with coarse-grained miniservices. If they were building microservices, they would adopt different design patterns that ensure the rigorous independence of each component. These microservice design patterns are disruptive from many perspectives, which makes microservices difficult to adopt. Miniservices are built using more-traditional design patterns and technologies. They use request/response interfaces, they support multiple inquiry and update functions within an individual service, and multiple miniservices can share a database. These traditional patterns make miniservices less disruptive than microservices. Quite a few vendors are "microservice washing" their "marketectures" (marketing architectures), and are, in reality, supplying miniservice tools, frameworks and runtimes. Miniservices are likely to gain rapid adoption and move quickly through the next market phase.
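
The miniservice patterns just described — request/response operations, several inquiry and update functions inside one service, and a database shared across services — can be illustrated with a deliberately simplified sketch. Plain Python classes stand in for HTTP services here, and all names are hypothetical; a real miniservice would sit behind a framework such as Spring Boot or Dropwizard:

```python
# A shared data store — permitted between miniservices, but disallowed
# between strict microservices, where each service owns its own data.
SHARED_DB = {"customers": {}, "orders": {}}

class CustomerService:
    """Coarse-grained miniservice: multiple inquiry and update
    functions live in one independently deployable unit."""
    def __init__(self, db):
        self.db = db

    def create_customer(self, cid, name):       # update function
        self.db["customers"][cid] = {"name": name}

    def get_customer(self, cid):                # inquiry function
        return self.db["customers"].get(cid)

class OrderService:
    """A second miniservice sharing the same database."""
    def __init__(self, db):
        self.db = db

    def place_order(self, oid, cid, amount):    # update function
        if cid not in self.db["customers"]:
            raise ValueError("unknown customer")
        self.db["orders"][oid] = {"customer": cid, "amount": amount}

    def get_order(self, oid):                   # inquiry function
        return self.db["orders"].get(oid)
```

The shared `SHARED_DB` is exactly the relaxation of independence that makes miniservices easier — and less disruptive — to adopt than microservices.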

Time to Next Market Phase: 2 to 5 years

Business Impact:

  • Miniservice architecture is a key enabler of business agility and digital transformation. Breaking up a monolithic application into coarse-grained miniservices makes it easier to maintain the application and to deliver new features that leverage business moments.
  • Miniservice architecture makes it easier to support multiple channels and to assemble existing capabilities to support new business workflows.
  • Miniservice architecture can improve application scalability, enabling you to keep up with demand in your successful digital business initiatives.
  • Miniservices are well-suited to customer-facing applications where agility really counts, and are strongly recommended for API economy scenarios.

User Advice: For CTOs, CIOs and application architecture leaders:

  • Adopt miniservices to gain moderate agility and scalability advantages — especially in scenarios where the organization is not prepared to adopt microservices. Refactor your monolithic applications and traditional SOA services into miniservices where you have agility and scalability issues.
  • Miniservices are much easier to adopt than microservices. You won't gain the same level of agility and scalability as you would if adopting microservices, but your developers won't face the same learning curve, and you will cause a lot less disruption among your enterprise architecture and data management teams (which are likely to resist the notion that every microservice owns its own data).
  • Miniservices require new frameworks (such as Dropwizard or Spring Boot) and runtime infrastructure components, including API management, dynamic registries and runtime monitoring services. Unlike traditional SOA services, miniservices are independently deployable, so you would not deploy them into an application server or enterprise service bus (ESB). Instead, you would use lightweight application infrastructure, such as embeddable web servers (like Jetty or Undertow), and you typically deploy the services directly to a platform as a service (PaaS) offering, containers or small virtual machines. You may need to invest in a new toolchain that supports Docker or a similar container management system.
  • Miniservice architecture moves application complexity from the inner workings of a monolithic application to the space between independent application components. Invest in governance practices and tools (such as API management) to track miniservice interdependencies.
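
The interdependency tracking recommended above can be sketched as a tiny registry — a stand-in for what an API management layer would record automatically. The service names are hypothetical:

```python
from collections import defaultdict

class DependencyRegistry:
    """Records which miniservice calls which, so the blast radius of
    an API change can be inspected before making it."""
    def __init__(self):
        self._callers = defaultdict(set)  # callee -> set of callers

    def record_call(self, caller: str, callee: str) -> None:
        self._callers[callee].add(caller)

    def impacted_by_change(self, service: str) -> set:
        """Services that call `service` directly, and so must be
        retested when its API changes."""
        return set(self._callers[service])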

Sample Vendors: Dropwizard; Eclipse Foundation; Microsoft Azure App Service; MuleSoft; Pivotal; Red Hat; WSO2

Mesh App and Service Architecture

Analysis by: Anne Thomas

Definition: The mesh app and service architecture (MASA) is the preferred SOA application structure for the digital age. A MASA application is implemented as a set of distributed, loosely coupled, autonomous components: multiple fit-for-purpose apps, each providing an optimized user experience for specific personas and interface channels, and multiple composable back-end services that support the workflows of each app. Apps and services communicate via mediated APIs, which can cross application boundaries, enabling a flexible mesh of capabilities.
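
The mediated-API idea in this definition can be sketched as a small routing layer: fit-for-purpose apps reach back-end services only through a mediator, which can rename, version and reroute APIs without either side changing. This is an illustrative sketch; the API names and services are hypothetical:

```python
class ApiMediator:
    """Minimal mediation layer between apps and back-end services."""
    def __init__(self):
        self._routes = {}  # public API name -> backing handler

    def expose(self, api_name, handler):
        self._routes[api_name] = handler

    def call(self, api_name, **params):
        if api_name not in self._routes:
            raise KeyError(f"no such API: {api_name}")
        return self._routes[api_name](**params)

# A back-end service composed into the mesh behind the mediator.
def order_status_service(order_id):
    # Canned data standing in for a real service implementation.
    return {"order_id": order_id, "status": "shipped"}

mediator = ApiMediator()
mediator.expose("orders.status.v1", order_status_service)
```

A web app and a mobile app would both call `orders.status.v1`; swapping the backing service, or exposing a `v2` alongside it, touches only the mediator's routing table.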

Trend Analysis: The traditional three-tier application architecture structure that has sustained most organizations for the past 20 years is now obsolete. Mobile, social, cloud, big data, the Internet of Things and other components of digital business have fundamentally changed application requirements and forced a change in application architecture. MASA is a service-oriented application structure that has evolved in response to these requirements. MASA applications support optimal, multichannel user experiences, diverse data sources, and IoT integration. Their agile architecture enables continuous delivery of new capabilities. MASA is a prerequisite for cloud-native architecture, and provides a foundation for the five digital business platform capabilities: customers, ecosystems, things, IT systems and intelligence.

Although the term "MASA" is new, the architecture is not. It has been evolving for the past decade as organizations have applied SOA to their application designs. Many organizations still refer to their applications as "three tier," but their applications are structured more like MASA. Three-tier structures support relatively simple, single-channel applications, and this architecture is just not sufficient in the digital age. Many organizations have been shifting to MASA – and a more-service-oriented model – without realizing it in order to address multichannel requirements. If an application supports multiple clients from a single back end, it's on a path to MASA.

As organizations get more comfortable with the model, they typically start decomposing the back end into coarse-grained services (macroservices and miniservices). More-advanced practitioners are starting to increase service granularity and adopt microservices.

MASA is following the trajectory of its constituent parts: SOA, public and private APIs, apps, miniservices, full life cycle API management, and mediated APIs. MASA is perhaps more complex than each individual part, but it also ties these individual trends together to enable powerful business outcomes. MASA is also the fundamental architecture structure that enables many emerging architecture models, including cloud-native architecture, microservices and continuous experience. Although these emerging technologies are not essential for successful implementation of MASA today, they will eventually blossom into core features of the architecture. Given the imperative to build applications that support digital business expectations, MASA will rapidly gain prominence as the de facto application architecture structure for the digital age. Architecture teams may stumble a bit as they climb the learning curve, but MASA should reach maturity within three to five years.

Time to Next Market Phase: 0 to 2 years

Business Impact: MASA supports fundamental digital business requirements, such as rapid delivery of new features and capabilities; multichannel interfaces and optimized continuous experiences for better user engagement; development of ecosystems; IoT integration and improved automated decisions by leveraging pertinent context information.

User Advice:

Application leaders should:

  • Set up an innovation program for experimenting and building expertise with MASA, and for learning how to handle the delivery and management of distributed, loosely coupled, autonomous application components.
  • Ensure that development teams have competent skills in user experience design, SOA and domain-driven design.
  • Execute pilots for both new application development and monolithic application refactoring.
  • Task architects with defining technical architectures, standards, governance mechanisms and success metrics for MASA.
  • Build new applications using MASA.
  • Develop a roadmap for a digital transformation journey.
  • Assess existing application portfolios and identify applications that are critical to that journey.
  • Build a business case for rearchitecting these applications to MASA.

Sample Vendors: Amazon Web Services; Google; IBM; Microsoft; Pivotal Software; Red Hat

Cloud Testing Tools and Services

Analysis by: Joachim Herschmann, Thomas Murphy

Definition: Cloud testing tools and services involve the use of cloud technology to support testing from or in the cloud. This includes cloud-based lab management, service virtualization, on-demand-delivered testing tools, and device clouds. This term also covers support for large-scale load and performance tests, strong technology coverage (middleware, message formats and security protocols, for example) and the ability to work across applications using a mixture of technologies.

Trend Analysis: Cloud testing solutions have become commonplace in performance and load testing, including performance monitoring. In addition, cloud-based solutions for functional, usability and user experience testing, as well as cross-browser, cross-platform and mobile testing, have gained strong traction in the market. The demand created by mobile-first and bring-your-own-device programs, as well as omnichannel delivery initiatives, has created a strong demand for web, mobile web and mobile device testing options in the cloud. This includes visual testing, such as comparisons of how applications render on different end-user devices and browsers.

In many cases, cloud testing tools supplement existing on-premises testing solutions, but the balance continues to shift as larger parts of DevOps toolchains are delivered from the cloud. The cloud model has become widely accepted, and cloud-based testing solutions are both accelerating the adoption of automated testing, and becoming integral parts of testing tool portfolios.
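
The load-testing pattern that cloud services scale up can be shown in miniature: fire many concurrent requests and summarize latency. The target here is a stand-in function with simulated latency; a cloud tool would drive real HTTP traffic from distributed agents instead:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def target_request():
    """Stand-in for an HTTP call to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service latency
    return time.perf_counter() - start

def run_load(concurrency: int, total_requests: int) -> dict:
    """Run `total_requests` calls with bounded concurrency and
    report simple latency percentiles."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: target_request(),
                                  range(total_requests)))
    latencies.sort()
    return {
        "requests": len(latencies),
        "p50": latencies[len(latencies) // 2],
        "max": latencies[-1],
    }
```

The appeal of the cloud model is exactly that `concurrency` can be scaled far beyond what on-premises lab hardware allows, and billed only for the duration of the run.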

Time to Next Market Phase: 2 to 5 years

Business Impact: Moving test labs to virtualized infrastructure in private and public clouds can reduce the cost of management, hardware, software and power. At the same time, it can be the crucial element needed to reach the goal of continuous delivery. Hosted tools increase the ability to run more tests more frequently, which reduces production errors and system failures. Cloud-based software provides more-flexible billing and capacity, which must be balanced against usage profiles. This flexibility is valuable for all organizations, regardless of the development methods they use.

User Advice: There are a number of use cases for these products. However, primary consideration should be based on lab scalability and the ability to match production use scenarios in a realistic way. For companies looking to control the costs of lab setup and maintenance of tool licenses – especially where the use of testing tools is seasonal – cloud testing services and tools provide good choices. This is also a good option for companies that lack tools and rely on manual testing. It can enable them to move to automation and best-practice behavior.

Application leaders should integrate cloud-based testing tools into an agile delivery pipeline to further accelerate development and testing of applications, and consider cloud-delivered, scalable and automated on-demand test labs as part of a DevOps strategy. Full success requires that testing organizations also develop mature change management and testing practices.

For performance testing, consider whether there are readily available machines that have already been purchased. These may be easier and less expensive for smaller-scale internal testing that doesn't require a heavy load. Here, it's important to understand the costs and benefits of in-house provisioning versus the cloud, and to clearly define the objectives of moving a particular testing project to the cloud.

Sample Vendors: CA Technologies; Hewlett Packard Enterprise; IBM; Micro Focus; Microsoft; Neotys; Perfecto; Sauce Labs; Skytap; Soasta

SOA Testing

Analysis by: Thomas Murphy, Joachim Herschmann

Definition: Service-oriented architecture (SOA) testing tools are designed for application interface testing without a graphical user interface (GUI). As such, they are mainly aimed at testing "headless" components such as web services, enterprise service buses (ESBs) and process models, and they include solutions for automating the testing of the functionality and/or performance of APIs or web services, as well as for simulating or "virtualizing" interdependent components. SOA testing can enable testing below the user interface, resulting in more-robust tests.

Trend Analysis: The rise of distributed SOAs drives the need for distributed, headless testing. The rapid adoption of agile and DevOps practices requires teams to test earlier and more frequently, making this technology a prime candidate for use in organizations using agile. Service virtualization enables development and QA teams to simulate and model the behavior of complex, interdependent services that are unavailable or of limited availability, thus removing the constraint of needing access to components, databases, mainframes and so on. As companies both produce and consume more and more public and private APIs, there will be a greater need to provide facilities for users to test their implementations without hitting the actual transaction system.
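
Service virtualization can be shown in miniature: a virtual service returns canned responses for a dependency that is unavailable or charges per transaction, so tests can run against it instead of the real system. The payment-service interface here is hypothetical:

```python
class VirtualPaymentService:
    """Simulates an external payment API: canned responses instead of
    real transactions, plus a call log for later verification."""
    def __init__(self, canned_responses):
        self.canned = canned_responses
        self.calls = []  # every (card, amount) charged, in order

    def charge(self, card, amount):
        self.calls.append((card, amount))
        return self.canned.get(card, {"status": "declined"})

def checkout(payment_service, card, amount):
    """Code under test: depends only on the service's interface, so it
    cannot tell the virtual service from the real one."""
    result = payment_service.charge(card, amount)
    return result["status"] == "approved"
```

Because `checkout` is written against the interface, the same test suite can later be pointed at the real payment gateway without changes — the core promise of service virtualization.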

Time to Next Market Phase: 2 to 5 years

Business Impact: Because services play a central role in transforming the business, building stable and reliable services will be critical to the strategic success of businesses. SOA testing tools are a crucial element in driving reuse with SOA and establishing a stable architecture. As part of a DevOps toolchain, SOA testing tools become an essential success factor of DevOps initiatives. An enterprise's application portfolio increasingly needs access to both Mode 1 and Mode 2 systems, as well as deep interaction between them. An API interface is what makes this interaction possible, and SOA testing tools have a strong use case in bimodal organizations.

User Advice: Organizations that are adopting web services beyond basic scenarios – for example, creating SOAs using ESBs – or that are moving toward cloud infrastructures, will benefit from the use of targeted tools that can support more-complex testing scenarios. Such organizations should look beyond their traditional web testing tools to support these scenarios. In addition, companies testing against services that are not always available, or that may have fees associated with transactions, should explore the service virtualization functionality of SOA testing.

Sample Vendors: CA Technologies; Crosscheck Networks; Hewlett Packard Enterprise; IBM; Parasoft; Postman; SmartBear; Tricentis

Agile Software Development Methodologies

Analysis by: Nathan Wilson, Mike West

Definition: Agile software development is a highly accelerated, iterative development process with monthly, weekly and even daily deliverables of priority requirements, as captured in user stories and often documented in test cases prior to coding. The principles of collaboration, continuous integration, refactoring and promoting ownership are key differentiators. Agile methods tend to be defined in terms of "values, principles and best practices" rather than as "processes and procedures."

Agile methods require a high level of collaboration among developers and business users. They also tend to "flatten" the project and organizational structure, often through self-organizing teams. Agile methods embrace the notion that requirements change, unexpected requirements appear and priorities shift, and that development practices must enable quick, accurate adaptation to these changes. Agile approaches are more amenable to projects of short duration with incremental deliverables and implementations (with associated testing and reintegration with related applications as needed). As a result, organizations have been more successful in using agile methods when focused on individual siloed projects than as part of more-enterprise-class development, where there is a need to spend analytical and integration testing time addressing interdependencies with other projects, applications and data. Hence, in this IT Market Clock, we distinguish between these two uses of agile methods with different dots and positioning for project-oriented and enterprise-class agile development methodologies.

Trend Analysis: While some resistance to agile methods still exists in the development management departments of conservative organizations, we have been seeing increased support from development management during the past couple of years. Project management and quality functions are frequently starting to redefine their functions in an agile context. Although significant barriers to agile development remain when outsourcing, outsourced agile projects continue to proliferate.

The popularity of other iterative approaches, including more-traditional iterative and incremental development (IID) and rapid application development (RAD), also influences the expected pace of adoption in organizations that are attempting agile approaches. Increasingly, agile initiatives are becoming more strategic at the enterprise level. Many organizations doubt the effectiveness of agile methods because of the perception that they have no structure. Unlike "cowboy coding" (any development undertaken in an undisciplined fashion) and undisciplined RAD (bad RAD), agile approaches have more-disciplined, closer (daily) control of development activities and offer clear practices, while shunning some traditional formalisms (for example, big documents) that have provided a level of comfort but not improved execution.

Agile techniques are gradually transforming more-traditional iterative approaches. Many practices, such as more-consistent unit testing, daily builds and Scrum meetings – all of which are story-driven – have been put in place in successful AD organizations. Too often, however, less-disciplined developers have honored such practices only in isolated instances. To achieve broader and more-successful use of agile methodologies, IT organizations must promote their enablement of application agility and consistently use these agile practices across projects.

Time to Next Market Phase: 2 to 5 years

Business Impact: In many development organizations, agile approaches on selected Mode 2 projects are providing the benefits of fast and accurate delivery of priority application requirements. Although agility is rarely the dominant approach in large organizations, even the more conservative organizations have learned the benefits of agile development. Tight business collaboration (on-site customers) is a key success factor with agility, but it's also the most broken principle ("You can have my domain expert for three weeks, but no more"). The successful adoption of agility requires more business involvement in the development process than most organizations have committed or experienced with older methods.

User Advice: Application leaders should adopt agile approaches judiciously and on projects with disciplined teams that accept process discipline. "Coding cowboys" are not the developers who will capture lessons learned from agile development projects or help build agile capability in the organization. Agile projects need smart, disciplined developers who understand the patterns, the business owners and the power of collaboration. Agile proponents say that they value working software over process and documentation, whereas less-sophisticated developers value agility to the exclusion of (or "instead of") process.

A key enabler of agility is the increasing number of tools that enable agile practices and provide the comfort that a heavier process once provided. That comfort now comes from real information and data, not just a big, static document. Examples are code review tools, unit testing tools, continuous integration tools, metrics tools and project management tools that help automate and drive information into repositories to support management views of project status.

Sample Vendors: CA Technologies (Rally Software); CollabNet; LeanKit; Microsoft; Pivotal Labs; Targetprocess; ThoughtWorks; Valtech; VersionOne; Xebia


HTML5

Analysis by: David Mitchell Smith

Definition: "HTML5" is a term that has multiple meanings. It is the name of an actual standard run by the World Wide Web Consortium (W3C) that was "finalized" as of year-end 2014. It is also synonymous with "the modern web" — a collection of technologies commonly used to build web applications. HTML5 brings many of the rich capabilities that previously required additional software (including plug-ins such as Flash or Silverlight).

Trend Analysis: In addition to its formal standard status, HTML5 encompasses a collection of more than 100 specifications managed by five standards organizations (if you include related modern web technologies, such as Cascading Style Sheets 3 [CSS3] and JavaScript). HTML5, the standard, was ratified by the W3C in 2014. However, only those components of the loose collection of features previously lumped together under the name "HTML5" that have been finalized have been incorporated. The W3C basically drew a line around what was done (including Canvas, WebSocket and video). Before its finalization, HTML5 included other components (such as WebRTC, offline and payments) that are now separate entities.

HTML5 (and modern web) usage and stability are being driven by both desktop and mobile use scenarios, with different driving factors for each environment. There are different use cases for HTML5 in mobile, where a pure web approach includes web apps that are accessed by a mobile browser. There is much wider adoption of hybrid architectures, where HTML and other web technologies are used in conjunction with native-code wrappers. This contributes to the continued confusion around HTML5 discussions even today, after the specification and definition have been finalized.

In desktop use cases, web technologies are the primary way new functionality is delivered. More modern capabilities (such as those specified in HTML5) are gradually gaining usage, which is contributing to the increased popularity and usage of modern browsers (such as Google Chrome).

Time to Next Market Phase: 0 to 2 years

Business Impact: While usage of HTML5 as part of the web continues to increase, some of that use is not in pure web scenarios (such as in mobile hybrid). Therefore, business strategies need to account for app store monetization and distribution strategy issues. As with many technologies, especially on the web, interest is occurring primarily outside the enterprise sector – such as among progressive web designers and mobile application developers. Many web developers and designers are becoming familiar with the core features of HTML5 that have been finalized, such as Canvas and video. Developers of sites that rely on Adobe Flash and Microsoft Silverlight need to pursue a migration and exit strategy that will be implemented as their sites are refreshed or rewritten. Mobile developers are increasingly adopting HTML5 as part of multiple cross-platform strategies, including pure web and hybrid approaches, especially for internal-facing projects.

The business impact is in the benefits of portability. Although 100% portability is still not attainable, there are benefits to increased portability, which HTML5 helps provide.

User Advice:

For developers:

  • Become familiar with the constituent subsystems of HTML5 and other modern web technologies (and their relative maturity) and align these to the requirements of your web projects.
  • Implement your application or website using proven practices, such as feature detection (instead of browser detection) and progressive enhancement.
  • Adjust your strategies for HTML5 to take into account its multiple use cases, including mobile hybrid architecture.

Sample Vendors: Adobe; Apple; Google; Microsoft


Cloud-Native Application Design

Analysis by: Mark Driver

Definition: Cloud-native solutions are designed to take full advantage of the defining characteristics of cloud computing platforms. These include elastic scalability, metered usage, shared infrastructure and automated self-service access. Cloud-native application design embodies the practices and patterns required to support these principles during solution delivery.

Trend Analysis: Cloud-native application design is applicable to new cloud-based solutions delivered via IaaS, as well as aPaaS, which offers some of the cloud characteristics (including elasticity, use tracking and self-service) and reduces the demands on cloud-native application design. Cloud-native application design intended for deployment on IaaS requires that application designers implement some of the cloud fundamentals in the design of the application. To maximize the potential of the cloud services used in solution delivery, developers and architects should become familiar with event-driven and parallel programming, architectural principles of separation of concerns, and adding patterns such as the actor model to the well-worn model-view-controller pattern used in many web applications. These design elements work together to optimize an application's performance on underlying cloud runtimes.
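
The actor model mentioned above can be sketched minimally: state is owned by a single thread and mutated only via messages on a queue, which suits the event-driven, share-nothing style that cloud runtimes reward. This is a teaching sketch, not a production actor framework:

```python
import threading
import queue

class CounterActor:
    """A minimal actor: all state changes happen on one internal
    thread, driven by messages — no shared-state locking needed."""
    def __init__(self):
        self._inbox = queue.Queue()
        self.count = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._inbox.get()
            if msg == "stop":
                break
            if msg == "increment":
                self.count += 1  # only this thread touches state

    def send(self, msg):
        """Asynchronous, non-blocking message delivery."""
        self._inbox.put(msg)

    def join(self):
        self.send("stop")
        self._thread.join()
```

Because senders never touch `count` directly, many concurrent producers can `send` without races — the property that lets actor-based components scale out elastically.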

Although adoption (or, at least, attempted adoption) of cloud-native application design practices and patterns in IT organizations remains primarily in the realm of cutting-edge web startups, Gartner clients report increasing interest in constructing their own cloud solutions to capture the reduced operational complexity and time-to-market benefits of cloud services. Increased adoption of private cloud computing, which tends to manifest itself primarily in coarse-grained compute and storage virtualization (for example, IaaS), will provide AD departments with self-service resources that can be maximized only with cloud-native application design. The emerging notion of private PaaS is a part of this trend. Popular programming frameworks such as Spring, Rails and Node.js are evolving to encompass and simplify the use of the principles of cloud-native application design.

Time to Next Market Phase: 0 to 2 years

Business Impact: Failure to address the principles of cloud-native application design will burden enterprises with unknown risks that are likely to be realized at inconvenient times. At the same time, applying the full gamut of those principles to every custom cloud solution will burden the enterprise with unnecessary costs, intensive skills requirements and loss of agility. Application leaders should establish guidelines that define when and where the various practices of cloud-native application design should be applied, based on the cost, risk and time factors applicable to a given cloud solution delivery initiative.

User Advice: Application leaders considering custom cloud solution delivery should apply the principles of cloud-native application design when the cloud characteristics of elasticity, fault tolerance and metered usage are of paramount importance. Cloud AD efforts must also consider that cloud platforms are emerging and immature, and that there are substantial architectural differences between cloud infrastructure providers that will inevitably affect development efforts for the foreseeable future.

Sample Vendors: Amazon; Engine Yard; Google; Microsoft; OpenStack; Oracle; Rackspace; Red Hat; Salesforce (Heroku); VMware

Iterative and Incremental Software Development Methods

Analysis by: Nathan Wilson, Mike West

Definition: Iterative software development methods are fixed-time, fixed-resource, fixed-cost waterfalls that are repeated. The variable is functionality: how much can be delivered within each time box. Incremental development implies well-known, fixed requirements, while the iterative approach assumes that requirements cannot be frozen before construction begins. Known requirements are realized and implemented using short cycles of analysis, design and implementation, allowing the system to evolve. Testing is performed in each iteration, instead of being pushed to the end, thereby reducing risk and improving quality. True iterative development is done on a regular cadence – every two, three or x months, for example.

The short releases of an iterative project minimize the amount of requirement change that will occur during the project. They also provide timely feedback on how accurately the software is solving the business problem.

Trend Analysis: One point often lost in the agile-versus-waterfall debate is that iterative development methods allow for quicker projects than waterfall without the significant cultural changes that agile requires. After holding steady at around 20% for several years, iterative development has started to decrease as agile development practices become more mainstream. Despite this, iterative development remains a stable and effective software development pattern that produces better software at a lower risk than waterfall approaches.

Time to Next Market Phase: 2 to 5 years

Business Impact: Organizations that are not ready (or able) to transition their Mode 1 projects to agile should start the transition to incremental and then iterative development. Incremental delivery has been shown to improve the tracking and on-time performance of projects, and iterative methods can provide organizations with more agility without the cost of a conversion to full agile.

User Advice: Look to move waterfall projects to incremental and iterative approaches to reduce risk and improve agility.

Sample Vendors: Hewlett Packard Enterprise; IBM; Micro Focus; Serena Software

Service-Oriented Architecture

Analysis by: Anne Thomas

Definition: Service-oriented architecture (SOA) is a modular design paradigm that helps reduce redundancy and improve application quality. In service-oriented applications, shared functionality is encapsulated and made available as a service that can be called from multiple applications via APIs. SOA is based on three design principles: separation of concerns, encapsulation and loose coupling. These principles ensure services are modular, distributable, shareable, swappable and discoverable.
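
The three principles in this definition can be shown in miniature: a shared capability is encapsulated behind a service contract (encapsulation, separation of concerns), and callers depend only on that contract, so the implementation can be swapped without touching them (loose coupling). The tax-service example is illustrative, not from the source:

```python
class TaxService:
    """Service contract shared by all calling applications."""
    def tax_for(self, amount: float) -> float:
        raise NotImplementedError

class FlatTaxService(TaxService):
    """One swappable implementation of the shared service."""
    def tax_for(self, amount: float) -> float:
        return round(amount * 0.10, 2)

def invoice_total(tax_service: TaxService, amount: float) -> float:
    """One of several applications reusing the shared service via its
    API — it never sees the implementation behind the contract."""
    return round(amount + tax_service.tax_for(amount), 2)
```

Replacing `FlatTaxService` with, say, a jurisdiction-aware implementation changes no calling application — which is exactly the reduced cost of change claimed for SOA below.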

Trend Analysis: SOA has matured to become a common practice in nearly all areas of software design. It is a fundamental enabler for the MASA application structure, and it supports multichannel applications (such as web, mobile and social). It is a required model for cloud applications. It enables easy interoperability. It is the foundation of the API economy and algorithmic business.

As the practice of SOA has matured, patterns of use or styles of SOA have emerged, including:

  • Remote procedure call (RPC)-based SOA (using WS-*)
  • RESTful SOA (using web APIs)
  • Event-driven SOA (using various eventing protocols)

RESTful SOA has gained significant mind share during the past five years and has mostly supplanted RPC-based SOA. Event-driven SOA is gaining modest traction with innovations such as microservices, although for the time being, event-driven SOA remains a niche practice.

The long-term trend for SOA is positive, because it is now standard practice for integration as well as most software designs. Although there is less discussion of SOA as a separate concept, many organizations are seeking to revitalize earlier SOA initiatives now that there is a greater industry understanding of the approach. Agile methods are being applied and used in conjunction with SOA design approaches to deliver software that is more flexible and maintainable.

Time to Next Market Phase: 5 to 10 years

Business Impact: Like the relational data model and the graphical user interface, SOA represents a durable change in application architecture. SOA's main benefit is that it reduces the time and effort required to change application systems to support changes in the business. Business functions are represented in the design of SOA software services, which helps align business and technology models. The implementation of the first SOA application in a business domain will generally be as difficult as, or more difficult than, building the same application using non-SOA designs. Subsequent updates will be easier, faster and less expensive, because the improved application structure is easier to change, and new applications can leverage previously built services.

SOA is an essential ingredient in strategies that look to enhance a company's agility. SOA also reduces the cost of application integration, especially after enough applications have been converted or modernized to support an SOA model. SOA is also the basis for integrating cloud services into the portfolio, and is the enabler of complex mobile interactions with back-end systems. The transition to SOA is a long-term, gradual trend, and it will not lead to a strategic realignment in vendor ranks or an immediate reduction in the IT outlays of user companies.

User Advice: Use SOA to design large new business applications, particularly those incorporating cloud services, delivering mobile content, or with life spans projected to be more than three years, as well as those that will undergo continuous refinement, maintenance or enlargement. SOA is especially well-suited for composite applications in which components are built or managed by separate teams in disparate locations. These components can also leverage pre-SOA applications by wrapping function and data with service interfaces. When buying packaged applications, rate those that implement SOA more highly than those that don't.
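Wrapping a pre-SOA application can be as simple as an adapter that hides the legacy calling conventions behind a business-aligned interface. A hypothetical Python sketch (the legacy routine, its flag and the data are invented):

```python
# Hypothetical pre-SOA routine with an awkward, positional interface.
def legacy_lookup(cust_no, fmt_flag):
    record = {"1001": ("ACME", "NL")}[cust_no]
    return "|".join(record) if fmt_flag else record

# Service facade: a stable, business-aligned interface that new SOA
# consumers can call without knowing the legacy details.
class CustomerService:
    def get_customer(self, customer_id: str) -> dict:
        name, country = legacy_lookup(customer_id, fmt_flag=False)
        return {"id": customer_id, "name": name, "country": country}

if __name__ == "__main__":
    svc = CustomerService()
    print(svc.get_customer("1001"))
```

The legacy code keeps running unchanged; only the facade is exposed as a service, which is what makes gradual modernization practical.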

Use SOA in application integration scenarios that involve composite applications that tie new logic to purchased packages, legacy applications, or services offered by other business units (such as those found in SaaS and other types of cloud computing). However, do not discard non-SOA applications in favor of SOA applications solely on the basis of architecture. Do that only if there are compelling business reasons why they have become unsatisfactory.

Sample Vendors: IBM; Microsoft; Oracle; SAP; Software AG; TIBCO Software

Software Quality and Testing

Analysis by: Joachim Herschmann; Thomas E. Murphy

Definition: Software quality and testing encompass the tools covering the core software testing activities of test planning and execution, functional automation, API testing and performance testing. These tools have evolved as application technologies have changed, as new development methods have emerged, and as new delivery models (such as cloud) have appeared.

Trend Analysis: Traditionally, this market has been dominated by vendors that offer solutions for test management, functional test automation and performance testing. As applications shifted toward consumer-facing web and mobile applications, the market has opened to new providers and is beginning to fragment, driven by the shift to agile and DevOps and the growing popularity of open-source tools. However, the inability to migrate skills and assets from one product to another means that entrenched tools tend to stay in place for a long time. This has left an increasing number of organizations with multiple "similar" tools to address different technology stacks and delivery processes. The market is also split based on who uses the tool (business analyst, test engineer or developer) and on the technology under test (mobile, API, packaged software, web or client/server). Many organizations find themselves short of skills and in need of ways to manage more-complex test labs. This is driving growth in testing services and cloud-delivered solutions.

The shift to agile and DevOps, with its focus on testing early and often, is being accompanied by growth in the use of open-source testing frameworks. We expect that open-source test tools and frameworks (for example, Selenium, SoapUI, Appium and JMeter) will become the standard over the next five years, with commercial vendors providing value via frameworks, analytics and cloud delivery.
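The "test early and often" discipline these frameworks support boils down to automated checks that run on every change. A minimal illustration using only Python's stdlib unittest (the function under test is invented; real suites would additionally drive UIs, APIs and load scenarios with tools such as Selenium, SoapUI or JMeter):

```python
import unittest

# Invented unit under test: a simple pricing rule.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be within 0..100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_invalid_percent_rejected(self):
        # Negative cases belong in the suite too, so regressions in
        # input validation are caught on every commit.
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because such a suite runs in seconds, it can gate every commit in a continuous integration pipeline, which is the practice that agile and DevOps teams depend on.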

The traditional, established testing tool vendors are often not the first to support new technologies, but they offer improvements in overall productivity and deliver more-robust testing suites. We predict that disruptive vendors will continue to exhibit strong growth, but we also expect more acquisitions as technologies and methods evolve.

Time to Next Market Phase: 5 to 10 years

Business Impact: Adequate software testing is required to reduce the risk of defects reaching production, where they can mean lost orders, incorrect actions and unavailability of service. Good tools boost testing productivity and provide the solid reporting and planning that enable efficient software delivery.

User Advice: Software quality and testing practices in most organizations are stable; however, new technologies such as mobile and the cloud create a need to seek additional tools and new skills. Users should expect mainstream testing tool providers to acquire additional assets and/or deliver new product lines to support additional functionality. As agile and DevOps practices continue to mature and expand beyond core AD to encompass the entire product life cycle, testing practices will be affected and will need to evolve further.

Sample Vendors: Hewlett Packard Enterprise; IBM; Micro Focus; Microsoft; Perfecto; QASymphony; QMetry; Ranorex; SmartBear; TestPlant; Tricentis


Responsive Design

Analysis by: Magnus Revang

Definition: Responsive design (formerly responsive web design) is a client-side technique for supporting multiple layouts in a single web channel. Responsive design uses declarative rules in Cascading Style Sheets 3 (CSS3), which are applied to HTML on the client side and supplemented with JavaScript. Responsive design has now become ubiquitous, and "unresponsive" websites are the exception.

Trend Analysis: Responsive design first appeared in 2010, although it builds on the broader concept of "adaptive design" that dates back more than a dozen years. In the original, strict sense of the term, responsive design uses the CSS3 media query function to render different designs for different devices. The term is now sometimes used loosely to mean any website or application that adapts itself to the device on which it appears, regardless of how the adaptation is accomplished. Gartner will continue to use responsive design in the original, specific sense of the term and use the term "adaptive design" to refer to the broader set of techniques (which can be client-side, server-side or a combination of the two).
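In the strict sense, the adaptation is driven entirely by declarative CSS3 media queries, for example (selectors and breakpoint widths are illustrative, not a standard):

```css
/* Default (wide-viewport) layout. */
.sidebar { float: right; width: 30%; }

/* Narrow viewports: the same HTML, restyled by a media query. */
@media (max-width: 768px) {
  .sidebar { float: none; width: 100%; }
}
```

Because the rules are evaluated in the browser, one set of markup serves every device; that is the appeal, and also the source of the client-side performance limitations discussed below.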

Responsive design is attractive because a single website can support access from smartphones, tablets and desktop machines. The performance limitations of a strictly client-side mechanism are now well-known but, in general, the benefits of responsive design outweigh the limitations, especially for content-centric sites.

Initially, most implementations were either hand-coded or built on one of many small, open-source libraries or frameworks. This has changed as support for responsive design has become pervasive across the industry. Open-source content management systems (such as WordPress and Drupal) that have large ecosystems (including marketplaces for skins and themes) were the earliest systems to incorporate responsive design, and this is now well-established; in online marketplaces for WordPress themes, for example, the majority of themes are now responsive. Major enterprise vendors (such as IBM, Microsoft and Adobe) have modified their tools and platforms to support responsive design. Many mobile application development platform (MADP) vendors either conform to responsive-design concepts or have released adaptive products to address the dynamic pace of mobile change. There are multiple techniques to mitigate the performance disadvantages of the client-side mechanism, but no single approach is dominant.

Responsive design is now so ubiquitous in the market that it's more of a surprise when a new website is not responsive. For the vast majority of Gartner clients, responsive design would be the right choice – and the very few clients where this is not the case will more than likely be in e-commerce or other sectors that operate transactional sites that need heavy adaptability to devices.

Time to Next Market Phase: 5 years

Business Impact: What drives rapid adoption of responsive design is the increasing cost and complexity of supporting a variety of devices with multiple form factors. Organizations need to project their web presence across these channels and are struggling with redundancy, overlaps and cost inefficiencies.

Responsive design addresses this increased need for a unified web presence. The performance limitations of the client-side approach are not always perceived amid the initial enthusiasm. A key limitation of responsive design is that, in the native implementation, images are transferred at their full resolution from servers to mobile devices. The images are then scaled on the client-side devices after data transfer (meaning that mobile bandwidth is wasted). A better approach, in most cases, is to do some amount of server-side detection and conditional processing, sending already-scaled images from servers to clients. Although responsive design is not a cure for all ills, it can deliver, and has delivered, significant value in many content-centric scenarios – mostly for low- to midrange requirements.
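The server-side mitigation described above amounts to a variant-selection step before the asset is sent. A hypothetical Python sketch (asset names, widths and the device hint are invented):

```python
# Pre-scaled image variants, keyed by width in pixels (invented names).
VARIANTS = {320: "hero-320.jpg", 768: "hero-768.jpg", 1920: "hero-1920.jpg"}

def pick_variant(viewport_width: int) -> str:
    """Return the smallest variant at least as wide as the viewport,
    so the client never downloads more pixels than it can display."""
    for width in sorted(VARIANTS):
        if width >= viewport_width:
            return VARIANTS[width]
    return VARIANTS[max(VARIANTS)]  # fall back to the largest asset

if __name__ == "__main__":
    # The viewport hint would come from a client header or cookie.
    print(pick_variant(360))   # a 360px phone gets the 768px asset
    print(pick_variant(2560))  # a large display gets the largest asset
```

In production the viewport hint typically comes from user-agent detection or a client-set cookie; the point is that scaling happens before transfer, not after.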

As tools mature, expect a broader range of sites and applications to deliver a responsive design experience.

User Advice:

  • Responsive design should be the default starting point for a decision on how to implement content-centric mobile websites across a wide range of devices – a decision not to go down this route should be very well documented.
  • Mitigate the limitations of responsive design, including performance limitations, by combining it with server-side adaptive web techniques.
  • Before hand-coding responsive design into a website, survey the landscape of web publishing tools, content management systems and portal packages, because these all provide support for responsive design.
  • Sites with complex requirements beyond content consumption (for example, transactional sites with multiple pages and complex forms) should examine the alignment between their needs and the current capabilities of responsive design tools and platforms.
  • Although responsive design can provide a common user experience across a broad range of browsers/devices, organizations should not dismiss the potential of mobile-centric, targeted apps that deliver higher performance and make the most of the unique capabilities of each device (for example, offline mode, cameras and location sensing).
  • A long-term vision for design should include a full range of adaptive and contextual processing in order to deliver the optimal user experience.

Sample Vendors: Adobe; Automattic (WordPress); Filament Group; Gridless; Moovweb; retina.js; Responsify; Twitter; Usablenet; Zurb

Java EE Application Servers

Analysis by: Mark Driver

Definition: Java EE has historically been an architecture and programming model for multiplatform Java business applications (managed by the Java Community Process). It is implemented as Java EE application servers by many commercial and some open-source, community-style vendors. The Java Community Process provides compliance test cases and reference implementations for Java EE components. Multiple vendors certify that their Java EE-compliant application servers pass all the test cases, which are available to licensees, typically for a substantial fee. The rigor of the test scenarios imposed by the process has delivered an unprecedented degree of portability among implementations of Java EE from different vendors, reducing user risks, facilitating open competition, and lowering costs. Some smaller vendors avoid the cost of Java EE licensing and claim compatibility without having passed these tests.

Trend Analysis: In a blog entry posted on 17 August 2017, Oracle announced that version 8 of the Java EE specification will be the last. It intends to hand off the specifications, reference implementations and test kits to an (as-yet-unnamed) open-source foundation in the near future. This will effectively spell the end of Java EE as we know it today. The project may continue beyond the Java Community Process within an open-source community, or it may wither and die as developers and architects look beyond Java EE to lighter-weight Java application architectures for future projects. In any event, the commercial availability of Java EE application servers will undoubtedly continue for several years in order to support existing deployments and "legacy" code bases.

Time to Next Market Phase: 2 to 5 years

Business Impact: The significant degree of portability (as well as the well-established interoperability of the licensed Java EE implementations) reduces the cost of software engineering by making vendor implementations and skilled engineering resources readily available in-house, through system integrators or offshore outsourcing. Continuing strategic investment by independent software vendors – including large vendors such as SAP and Oracle – further solidifies the long-term staying power of Java and the Java EE standard programming model. However, Java EE is challenged by a variety of lighter-weight microservice containers (Docker, for example) and architectures. The emergence of cloud computing and its requirement for multitenancy may, in the next three years, be a market-altering change.

User Advice: Many mainstream business applications will not need the full power of a high-end Java EE platform. Considering the high degree of compatibility among implementations of Java EE, it is good practice to use the proven, low-cost offerings for less-demanding parts of the application, and to invest in the high-end alternative platforms only for select, high-demand parts of the application environment. Consider the open-source option as a viable alternative to the more-established, closed-source implementations for most mainstream application projects.

Sample Vendors: Apache Software Foundation; Fujitsu; Hitachi; IBM; Kingdee International Software Group; NEC; Oracle; Red Hat JBoss; SAP; TmaxSoft

Waterfall Software Development

Analysis by: Nathan Wilson, Mike West

Definition: The emergence of waterfall methods was an attempt to apply defined engineering principles to software development. Traditional waterfall is based on three seemingly simple premises, all of which must be present for delivery success:

  • The various phases of the process can be independently verified to take up well-known and predictable portions of the development cycle.
  • A single set of requirements, analysis, and design documents can define what the business wants the software to do and how the software will be delivered.
  • The specifications do not change, or change only minimally (and perhaps predictably) during the construction and test phases of the project.

The first premise has, in many cases, proved to be false, with each phase being rushed to meet its individual date. This has resulted in many projects stalling in the final phase when these early deficiencies become obvious.

The second premise is false in Mode 2 projects and is highly debatable in Mode 1 projects. Requirements are wrongly specified (defects of commission), and some are not apparent or are forgotten (defects of omission). There are multiple options for the actual delivery, especially around usability and look and feel, that are not apparent until the software is developed.

The third premise is a source of both amusement and dismay to those who develop and/or deploy software solutions. Even in Mode 1 projects, there are always changes. The traditional approach to a reliable waterfall delivery method lies in minimizing the number of changes during the time allocated for construction and testing of functions.

Traditional waterfall simply breaks down under the fallacy of these assumptions. Certain techniques can lessen the impact of failure but, unless the duration of the project's construction and test phases is constrained, there will still be adverse impacts on schedules and budgets. Hence, traditional waterfall methods without duration constraints should be avoided in favor of newer development methodologies that can better deal with uncertainty.

Trend Analysis: Waterfall projects are known to fail more often and more spectacularly than iterative or agile projects. Despite its long tradition as a development methodology, waterfall's use is declining.

Time to Next Market Phase: More than 10 years

Business Impact: Although waterfall continues to be the most popular software development methodology, more than half of all software projects in the most recent Gartner CIO survey were iterative or agile. Despite decades of attempted improvements, the track record of waterfall, when compared with iterative and agile methodologies, does not justify its continued use.

User Advice: Look to move waterfall projects to an iterative approach to reduce risk and improve agility. If waterfall must be used, limit project duration to less than 120 days, and take advantage of development practices coming from the agile software world, such as automated testing, code complexity reduction, continuous integration and refactoring.

Sample Vendors: Not applicable

Note 1
Definition of Bimodal IT

Bimodal IT refers to having two modes of IT, each designed to develop and deliver information- and technology-intensive services in its own way.

  • Mode 1 is traditional, emphasizing safety and accuracy.
  • Mode 2 is nonsequential, emphasizing agility and speed.

Each mode has all the people, resources, partners, structure, culture, methodologies, governance, metrics and attitudes toward value and risk that its operation requires. New investments are deployed through one of the two modes, depending on the balance of needs. When the balance changes, existing investments and operations move between the two. The most mature version of Mode 2, enterprise bimodal, is not just about the IT organization — it also encompasses a fast, agile mode of doing business.

Source: Gartner Research Note G00325676, Keith James Mann, Mike West, Anne Thomas, David Norton, David Mitchell Smith, Dennis Smith, George Spafford, Joachim Herschmann, Magnus Revang, Mark Driver, Nathan Wilson, Mark O'Neill, Thomas Murphy, Colin Fletcher, 4 October 2017