Infrastructure and operations (I&O) leaders are typically focused on new ideas, technologies and ways to deliver business value. Yet reinvesting in legacy technologies like data center infrastructure can often pay dividends in the form of increased capacity and reduced operating costs.
“Most I&O leaders are dedicating their attention to cloud migrations, edge strategies and getting workloads closer to the customer, but it’s important to remember that a core set of workloads may remain on-premises,” says David Cappuccio, Distinguished VP Analyst, Gartner. “Although continued investment in an older, more traditional data center may seem contradictory, it can yield significant benefits to short- and long-term planning.”
Here are three ways that I&O leaders can optimize existing data centers to support new and emerging business services.
Data centers that are nearing operational capacity are typically limited by a lack of physical space, insufficient power for additional equipment or inadequate cooling infrastructure. As a result, companies either build a new, next-generation data center or turn to colocation, cloud or hosting services.
Although these are viable options, each of these solutions requires moving workloads away from the traditional on-premises operation. This introduces risk and adds complexity. An alternative solution for long-term upgrades of existing data centers is to use self-contained rack solutions.
Self-contained racks are manufactured enclosures that contain a group of racks designed to support medium to high compute densities. They often integrate their own cooling mechanisms, as many of these solutions were designed for high-density computing environments. These retrofit options can be a simple and effective way to improve data center space utilization.
The least-intrusive retrofit technique involves clearing out a small section of floor space for one of these self-contained units. Depending on the vendor, the self-contained rack unit will require power from an existing power distribution unit, or in some cases, it may require a refrigerant or cooling distribution unit. Assume an increase in per-rack space of approximately 20% to take into account additional supporting equipment.
Because in-rack cooling solutions are self-contained, they do not require a hot-aisle/cold-aisle configuration or containment. This will enable more flexibility in the placement of the new racks on the data center floor.
Once the unit is installed, begin a phased migration of workloads from other sections of the floor. This is not a one-for-one migration, because these rack units can support higher cooling densities. An existing data center often utilizes only 50% to 60% of rack capacity on average, because higher-density racks cause hot spots on the floor. With the new contained racks, the amount of workload migrated is often 40% to 50% greater. For example, a new, self-contained four-rack unit might absorb the workloads from six to eight racks on the existing floor.
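The sizing arithmetic above can be sketched as a simple planning model. This is a minimal illustration, not a vendor tool: the utilization and overhead figures are the assumptions cited above (50% to 60% legacy rack utilization, roughly 20% extra floor space for supporting equipment), and any real plan should substitute numbers from a facility assessment.

```python
# Rough consolidation model for a self-contained rack retrofit.
# All figures are illustrative planning assumptions, not vendor specifications.

def legacy_racks_absorbed(new_racks: int, legacy_utilization: float) -> float:
    """Estimate how many partially filled legacy racks a fully utilized
    self-contained unit can absorb."""
    return new_racks / legacy_utilization

def floor_space_needed(new_racks: int, rack_footprint_sqft: float,
                       support_overhead: float = 0.20) -> float:
    """Floor space for the enclosure, with ~20% extra for supporting
    equipment such as power or cooling distribution units."""
    return new_racks * rack_footprint_sqft * (1 + support_overhead)

# A four-rack unit against legacy racks at 50% and 60% average utilization:
print(legacy_racks_absorbed(4, 0.50))  # 8.0 legacy racks absorbed
print(legacy_racks_absorbed(4, 0.60))  # ~6.7 legacy racks absorbed
print(floor_space_needed(4, 10.0))     # ~48 sq ft for a 10 sq ft rack footprint
```

The two utilization inputs reproduce the six-to-eight-rack range in the example above, which is why phased defragmentation of the remaining floor becomes the natural next step.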
As a result, the workloads moved to the new enclosure will rarely come from a single contiguous set of racks, leaving the older section of the server area heavily fragmented. The next phase in the project entails defragmenting the environment and moving workloads out of underutilized racks to free up additional floor space.
Once these workloads are moved, begin physically relocating equipment and clearing out the next section of floor space to make room for the next self-contained rack installation. As each subsequent unit is installed, the overall density of computing per rack increases, resulting in a significantly higher compute-per-square-foot ratio and a smaller overall data center footprint.
Depending on where existing servers are in their economic life cycles, this migration phase might also be an excellent time to consider a server refresh. Implementing smaller server form factors can increase rack density, while reducing overall power and cooling requirements.
The key to all of this remains the input power to the data center — it must be adequate for the higher-density racks. One offsetting benefit is that the overall cooling load can actually decrease as more workloads move to high-density racks, because much of the cooling air flow is handled inside the rack, reducing the amount of air flow needed across the entire data center space.
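A quick power-budget check makes this constraint concrete. This is a hypothetical sketch: the feed capacity, per-rack draws and the 30% cooling-overhead fraction are placeholder assumptions, and the cooling fraction in particular would shrink as workloads move into self-contained racks, as noted above.

```python
# Hypothetical input-power check for a higher-density layout.
# Feed capacity, rack draws and cooling overhead are placeholder assumptions.

def power_headroom_kw(feed_capacity_kw, rack_draws_kw, cooling_overhead=0.30):
    """Remaining input-power headroom after IT load plus cooling overhead.

    rack_draws_kw:    per-rack IT draw in kW (one entry per rack)
    cooling_overhead: cooling power as a fraction of IT load (assumed 30%)
    """
    it_load = sum(rack_draws_kw)
    return feed_capacity_kw - it_load * (1 + cooling_overhead)

# Four contained racks at 20 kW each against a 150 kW feed:
headroom = power_headroom_kw(150.0, [20.0] * 4)
print(f"{headroom:.1f} kW headroom")  # ~46 kW remaining
```

If the headroom goes negative at the target density, the retrofit plan must either upgrade the feed or spread the high-density units across more phases.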
Although new chip designs attempt to lower the heat footprint of processors, increased computing power requirements lead to higher equipment densities, in turn increasing cooling requirements. As the number of high-density servers grows, I&O leaders must provide adequate cooling levels for computer rooms.
For those looking to retrofit data centers for extreme densities in a small footprint, perhaps for quantum computing or artificial intelligence (AI) applications, consider liquid or immersion cooling systems as viable options. Gartner predicts that by 2025, data centers deploying specialty cooling and density techniques will see 20% to 40% reductions in operating costs.
Every environment is different, so it’s critical that I&O leaders use detailed metrics such as power usage effectiveness (PUE) or data center space efficiency (DCSE) to estimate the benefits and unique cost savings from such investments.
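PUE is the standard efficiency ratio here: total facility power divided by the power delivered to IT equipment, where 1.0 is the theoretical ideal. The before-and-after figures below are illustrative, showing how trimmed cooling load would register in the metric.

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The kW figures below are illustrative, not measured values.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness of a facility."""
    return total_facility_kw / it_equipment_kw

# Before the retrofit: 1,200 kW facility draw for 600 kW of IT load.
print(pue(1200.0, 600.0))  # 2.0
# After in-rack cooling reduces facility overhead for the same IT load:
print(pue(1000.0, 600.0))  # ~1.67
```

Tracking PUE before and after each retrofit phase gives a defensible way to quantify the cooling savings the project claims.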
By implementing a phased data center retrofit, I&O leaders can achieve significant capacity growth in their existing facilities, while reducing cooling requirements and freeing up power for additional IT workloads. This approach is not without risk: any physical equipment move within a live production data center carries the potential for disruption. However, if executed as a long-term project and broken into small, manageable steps, the benefits will be far-reaching.