Are you building new applications or refactoring existing ones, only to find that your pipeline is sluggish and error-prone? If so, you may be wondering whether you should start deploying containers to make your infrastructure more agile. But what are containers, and how can they simplify the application life cycle?
Containers enable software to run reliably when moved from one computing environment to another. “They package an application and all of its dependencies (libraries, configuration files) into one portable image,” says Anna Belak, principal research analyst at Gartner. “Containerization decouples the application and its dependencies from the underlying infrastructure. As a result, issues caused by differences in operating system (OS) distributions and core infrastructure are removed.”
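As an illustration, a container image definition such as a Dockerfile captures an application and its dependencies in one portable artifact. This is a minimal sketch, not a production recipe; the base image, file names and start command are hypothetical assumptions:

```dockerfile
# Hypothetical example: package a small Python app and its
# dependencies into one portable image, independent of the host OS.
FROM python:3.12-slim

WORKDIR /app

# Dependency libraries are pinned in requirements.txt and baked into
# the image, so every environment runs the same versions.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and configuration files travel with the image.
COPY . .

CMD ["python", "app.py"]
```

Because everything the application needs is inside the image, the same artifact runs identically on a developer laptop, an on-premises server or a cloud host.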
“To avoid wasted resources and Docker container abandonment, there are four steps to complete before initiating any containerization project”
Containers are appropriate in several real-world use cases. For example, containers allow for relatively easy migration of workloads from on-premises infrastructure to the cloud or between two different cloud providers. This type of migration can be permanent as part of a cloud strategy or temporary, such as in the case of cloud bursting.
Because containers provide workload isolation while sharing the OS, they enable system administrators to run several versions of an application on the same server without interference. This aspect of containerization also supports server consolidation.
Additionally, the lightweight profile of containers and their ability to spin up and down quickly make them compatible with automated continuous integration and automated testing environments for rapid software development and release.
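To make the CI use case concrete, here is a minimal sketch of a pipeline that builds and tests a fresh container image on every commit. The GitHub Actions syntax is an assumption for illustration, as are the `myapp` image name and the `pytest` test command:

```yaml
# Hypothetical CI pipeline: build the image and run the test
# suite inside a short-lived container on every push.
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build a fresh container image tagged with this commit.
      - run: docker build -t myapp:${{ github.sha }} .
      # Run the tests inside the container, then discard it.
      - run: docker run --rm myapp:${{ github.sha }} pytest
```

The container spins up for the duration of the test run and is thrown away afterward, which is exactly the rapid start-and-stop profile the text describes.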
To help technical professionals deploy a successful containerization initiative, Gartner identifies four steps to complete before initiating any containerization project.
Step No. 1: Prepare for cultural adjustments and fill skills gaps
“Container adoption in the enterprise cannot be owned and administered by a single team,” explains Belak. “As containerization projects are time-consuming and can be costly, successful completion requires buy-in from all relevant teams, such as security, infrastructure and operations, networking and application development.”
New technology implementations also entail a lot of work, so it is important for technical professionals to foster active collaboration between the teams involved in the project — especially those from application development and infrastructure and operations. “We recognize that DevOps-minded organizations are better equipped to handle the challenges of containerization as containers provide a technology framework that is consistent with the DevOps methodology,” says Belak.
Many solutions required for deploying containers in production are offered as commercially supported, open-source-based products with enterprise licenses. Whether an organization decides to deploy open-source components or commercial products only, its teams will need to adopt new operational models to manage containers successfully. Technical professionals must provide appropriate training for their personnel to ensure a quick onramp and long-term consistency in managing these new and complex solutions.
Step No. 2: Develop your infrastructure automation proficiency
Container deployments require automation and management via command line interfaces (CLIs) or application programming interfaces (APIs). Because containers churn through their life cycle rapidly, starting up and shutting down in seconds, they are too difficult to manage manually at scale. Although container management solutions offer dashboards that provide visibility into the deployment, most operational tasks must still be performed through a CLI or API. “It is important for technical professionals to select container management tools that offer visibility through dashboards, but at the same time they need to expect to interact with their deployments exclusively through the CLI or API, not a graphical user interface (GUI),” explains Belak.
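As a sketch of what CLI-driven operation looks like in practice, the commands below manage a container's full life cycle without touching a GUI. They assume a running Docker daemon, and the `myapp` image and container names are hypothetical:

```shell
# Hypothetical automation sketch: everything a dashboard shows can
# also be scripted through the CLI. Requires a local Docker daemon.

# Build and start a container non-interactively.
docker build -t myapp:1.0 .
docker run -d --name myapp -p 8080:8080 myapp:1.0

# Query state in machine-readable form for scripts and monitoring.
docker inspect --format '{{.State.Status}}' myapp

# Tear down and replace — the routine life cycle churn at scale.
docker rm -f myapp
```

Because each step is a plain command, the whole sequence can be wrapped in scripts or pipelines, which is what makes fleets of short-lived containers manageable.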
Step No. 3: Solidify primary and secondary initiative objectives
To successfully extract business value from containers, which are a means rather than an end, organizations must set realistic goals. They should develop both primary and secondary goals and avoid building projects around low-impact initiatives.
“Primary goals are those you would ideally like to achieve as a direct result of your containerization initiative,” says Belak. They are high-impact and:
- address major existing pain points
- create quantifiable cost savings opportunities
- enable teams to tangibly improve the delivery of products or services to internal or external customers
Secondary objectives are medium-impact goals. They can still be attained even if the primary objective fails, or they are peripheral objectives achieved automatically, or with minimal additional effort, as a byproduct of pursuing the primary objective.
Commonly discussed low-impact goals include saving money on licensing fees or avoiding the ‘virtualization tax.’ Although a containerization initiative can, in some cases, yield this result, it alone rarely justifies the investment. By contrast, an application refactoring initiative that increases agility, operational efficiency and customer satisfaction is more likely to be worth the investment and may have additional positive effects, such as reduced infrastructure costs.
Step No. 4: Select candidate applications
This final step consists of carefully selecting which applications are appropriate candidates for refactoring, or deciding whether a net new application should be deployed in containers. Once candidates are determined, build your teams' skill sets on a simple containerization project. Equipping developers and operators with new knowledge and tools enables them to anticipate potential pitfalls in future projects and to feel more comfortable selecting good candidate applications for subsequent initiatives.