Field Research Results: Desktop Virtualization in the Healthcare Industry
Healthcare organizations are racing to address numerous challenges in their environments: demand for mobility, fear of security breaches, and the growing complexity and cost of desktop management. Gartner reviews how desktop virtualization can help solve these problems.
Table of Contents
- Business Context
- The Participants Speak
- Gartner Perspective
- What We Discovered
- Budget Constraints
- Application Concerns
- Safety and Security
- User Experience
- IT Governance
Healthcare organizations are considering a complete transformation of their desktop environments, and many are finding server-hosted virtual desktop (SHVD) technology to be the best fit for their business problems. Healthcare organizations see how this technology empowers their users, simplifies overall management and increases their ability to secure their environments. The transition from physical desktops to virtual desktops was not a simple one, though. These organizations ran into many obstacles along the way. This field research study shares valuable insight from early SHVD adopters in the healthcare industry.
In early 2012, Gartner carried out in-depth interviews with early adopters of desktop virtualization; 19 organizations were interviewed, many of which were in the healthcare industry. These discussions were free-form, but many healthcare organizations told similar stories of the successes and failures they experienced during their desktop virtualization projects. Patient safety concerns, Health Insurance Portability and Accountability Act (HIPAA) compliance issues and protected health information (PHI) were among the top reasons these organizations chose to deploy these solutions. Along the way, however, these healthcare organizations learned many valuable lessons.
The study's healthcare participants made numerous important observations, which are included in the remainder of this section.
Patient safety concerns drove the decision to deploy virtual desktops:
- "Patient safety is the first driver for moving to virtual desktop infrastructure (VDI). The causal reason for this was speed of logon. When you're dealing with things like drug reactions, time is of the essence."
- "[VDI business case] was going to happen no matter what because of patient safety — [it] takes too long to get into a PC. Put kids at risk. Had to be fast and ubiquitous — children's lives were at stake."
- "We launched an effort to build technology for our new hospital. It gave us an opportunity to go into every clinical area, let the clinicians dream a bit and say what the future hospital should look like. We decided to offer a 'follow me' desktop that would offer a safer and more convenient solution for clinicians."
Clients had different opinions on how desktop virtualization affected cost:
- "We laid out the challenge to vendors — you won't have high adoption without cost parity. Vendors argued you'd get labor savings, but we said 'That doesn't count. That's ours!' If you want massive adoption, it's got to be less expensive."
- "We feel we are very close to cost parity. We have to bargain hard with the vendors. I've built a model that gives us cost parity over an eight-year cycle (two PC replacements)."
- "We did an ROI: Five-year 20% ROI. PC replacement, license savings, support staff, user issues versus PCs, not paying two instances cost versus four licensing cost. Server for XenApp, VDI license (users no longer needed a separate Windows Remote Desktop Services Client Access License [RDS CAL] for remote access)."
Desktop virtualization caused businesses to restructure the IT department:
- "We brought infrastructure and operations under one leader. We formed a cloud and virtualization team out of storage, virtualization and client access technology practices. Central IT has grown to support innovations at greater economies of scale."
- "From a support perspective, our team failed to realize that we need to staff differently because we now have end users as our customers. Previously, our virtualization customers were the Windows and Linux server operating system (OS) teams. We now have thousands of people that can call in with issues. We turned into a customer-facing team overnight. That continues to haunt us."
- "Originally when we promoted virtualized desktops, we had our server team leading the charge. We quickly learned that server teams are good at virtualizing the data center, but not the right group to ask to virtualize the desktop. The desktop needs are much different than the server and database needs. The desktop engineering team needed to own that work."
On discovering major storage issues during the deployment:
- "The thing that surprised everyone is the need for disk I/O [input/output]. It was startling. We didn't believe it at first, [and] thought we had made a mistake. This changed how we architected and engineered everything. We stopped designing for capacity and started designing for I/O."
- "Once we understood the process [of linked clones], we knew how to design it. And that went back to storage. It always goes back to storage for VDI."
- "Performance is a problem that has turned into a risk. If you grow too fast, the storage back end can't keep up. Our risk of massive adoption of VDI is that our infrastructure couldn't keep up. We were shocked that desktop I/O was significantly higher than server virtualization I/O. Our busiest servers don't produce the same I/O that our desktops demand."
Study participants offered countless more insights, which are highlighted throughout the remainder of this document.
Traditionally, healthcare has been viewed as a vertical that is slow to adopt new technologies. However, the reverse is true for server-hosted virtual desktop (SHVD) technology. According to Gartner's hosted virtual desktop (HVD) worldwide forecast, healthcare is one of the fastest-growing market segments for virtual desktops. In fact, 70% of total HVD usage is covered by healthcare, financial services, government and education collectively, with healthcare being the top user of those four.
In a recent contextual research study, Gartner set out to discover the lessons learned by early adopters of SHVD technologies. It is easy to point out the failures of companies that seek to embrace new and untested technology. After all, the first to use a technology will also be the first to find its flaws. It is important to keep in mind that, when these organizations started these projects, most of the technologies that exist today did not yet exist. Storage caching solutions that solve many of the I/O issues in desktop virtualization were created to solve issues found by these early adopters. Desktop analysis tools that compare the application usage and performance of a physical desktop to those of a virtual desktop came about due to the need to understand these differences. In one sense, gratitude should be given to these early adopters because they cut the path for other organizations to follow. In the words of Warren Buffett, "Someone's sitting in the shade today because someone planted a tree a long time ago." These early adopters planted many trees that allow businesses today to avoid many of the pitfalls they worked through.
In the Gartner Perspective section, we look at the following:
To deploy a new desktop virtualization environment, companies spend between 40% and 60% of their budget on storage alone.1 Not only was storage the highest capital cost involved in the desktop virtualization project, but it was often also the bottleneck that affected virtual machine (VM) performance. This focus on storage put storage administrators front and center in many of these deployments:
Every Friday we have a standing meeting with our storage and virtualization teams to discuss our storage latency problems. We call this meeting "storage wars." We state that there is a problem each week, and they say, "No, there isn't." We do that every week. This has been going on for three months.
Storage gets a massive amount of attention because it tends to be the first bottleneck that these deployments run up against. One participant shared a nightmare scenario that they experienced:
Two weeks out from August, we had our VDI environment spun up, [with] 700 to 800 desktops active at the time on an EVA [Enterprise Virtual Array] (midrange HP storage array) with 256 MB in write cache. [An] admin applied a Microsoft update live on 700 to 800 sessions. It brought everything to its knees. It took us eight to nine hours to recover from that. We realized that the bottleneck was the write cache. There's no way the EVA was sized to handle what happens when you expose that number of writes in a VDI session.
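Some quick arithmetic shows why this failure was so abrupt. The sketch below uses illustrative figures; the per-session write volume is an assumption, not a number from the participant:

```python
# Back-of-the-envelope estimate of how quickly a patch storm overwhelms
# a small array write cache. All figures are illustrative assumptions,
# not measurements from the participant's environment.

sessions = 750            # concurrent VDI sessions (midpoint of 700 to 800)
patch_write_mb = 300      # assumed data written per session by the update
write_cache_mb = 256      # EVA write cache from the anecdote

total_writes_mb = sessions * patch_write_mb
print(f"Total write burst: {total_writes_mb / 1024:.0f} GB "
      f"against a {write_cache_mb} MB cache")
# Roughly 220 GB of writes funneled through a 256 MB cache: once the
# cache fills, every write is gated by backing-disk speed, so latency
# spikes for all 750 sessions at once.
```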
Not understanding how to size a storage area network (SAN) is a common problem, especially for these early adopters. Traditionally, storage sizing has focused on capacity. However, desktop virtualization completely changes the focus. In desktop virtualization, storage administrators have to understand the total number of input/output operations per second (IOPS) that their SANs can handle:
The biggest tech issue, by far, has been storage IOPS (storage performance). We understand it a lot now.
IOPS is often overlooked, since desktop virtualization projects initially start with a few VMs that the SAN can easily handle. However, as the environment scales, IOPS becomes a greater challenge:
Around storage specifically, in 2009, we determined that the system management approach for linked clones was not mature enough to allow us to scale. Compromises have perpetuated high storage costs. IOPS can bring our NetApp storage to its knees.
The use of persistent desktops rather than nonpersistent desktops can cause major increases in storage needs. Persistent desktops are built for a single user, and retain all data and settings when the desktop is powered down. Nonpersistent desktops are built from a master image that IT maintains; every time a user logs off, all data and settings are discarded and the desktop reverts to the original image. Some organizations incorrectly planned their persistent/nonpersistent mix, believing they could rely mostly on nonpersistent desktops. This caused major storage capacity issues as the system scaled:
We thought we'd be 70% nonpersistent, but we wound up going the other way — to 70% persistent. That changed how we had to look at the storage.
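A simple capacity model, using assumed per-desktop sizes, shows how sharply the persistent/nonpersistent split moves the storage requirement:

```python
# Illustrative capacity model showing how flipping the persistent/
# nonpersistent ratio changes storage needs. All sizes are assumptions.

desktops = 2000
persistent_gb = 40        # full clone: OS + apps + user data per desktop
nonpersistent_gb = 3      # linked-clone delta that is discarded at logoff
base_image_gb = 40        # shared master image for nonpersistent pools

def capacity_tb(persistent_share: float) -> float:
    """Raw capacity in TB for a given persistent share of the estate."""
    p = desktops * persistent_share * persistent_gb
    np = desktops * (1 - persistent_share) * nonpersistent_gb
    return (p + np + base_image_gb) / 1024

for share in (0.30, 0.70):   # the planned 30% vs. the actual 70% persistent
    print(f"{share:.0%} persistent -> {capacity_tb(share):.1f} TB")
# 30% persistent -> ~27.6 TB; 70% persistent -> ~56.5 TB: roughly double.
```

Under these placeholder assumptions, reversing the planned ratio roughly doubles the raw capacity requirement, which matches the participant's experience of having to rethink the storage design.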
One participant, who is well into a desktop virtualization project, is still battling storage issues:
We're still looking at the storage piece. We're planning to do a "bake-off" between VCE, HP and NetApp to look at the solutions from a storage perspective.
Much of the blame for poor performance falls on storage administrators. One customer lamented the difficulty of getting storage administrators to understand the needs of a virtualized desktop:
Storage is the worst silo in the IT organization. They're good people, but getting them to change is hard. You have to change language and interactions, and people have to change in what they are delivering to the customer. [The] storage person needs to own it if an IOPS problem is bad for someone in the psychiatric unit right now. If we can drive home that it's an integrated service, you win half the battle.
Many storage administrators have not had to deal with I/O issues, and those that have experience with I/O are used to database applications that require a high level of read I/O. In desktop virtualization, roughly 80% of all I/O is write, while only 20% is read. This meant that not only were storage administrators unprepared for this, but storage vendors were unprepared as well:
Our journey has a lot to do with storage — but it's their journey, too!
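A back-of-the-envelope IOPS model shows why this journey is so hard on storage. In the sketch below, the per-desktop IOPS, spindle speed and RAID write penalties are standard planning assumptions, not figures from the study:

```python
# Sketch of IOPS-first SAN sizing for VDI. Per-desktop steady-state IOPS
# and RAID penalties are typical planning assumptions, not study data.

desktops = 1000
iops_per_desktop = 25         # assumed steady-state average
write_share = 0.80            # VDI I/O skews ~80% write / 20% read
raid_write_penalty = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

front_end = desktops * iops_per_desktop
reads = front_end * (1 - write_share)
writes = front_end * write_share

for raid, penalty in raid_write_penalty.items():
    back_end = reads + writes * penalty
    # At ~180 IOPS per 15k rpm spindle, disks needed just to serve I/O:
    spindles = back_end / 180
    print(f"{raid}: {back_end:,.0f} back-end IOPS (~{spindles:.0f} spindles)")
# The same workload on RAID5 needs roughly twice the disks of RAID10,
# purely because of the write-heavy profile.
```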
Storage remains a top issue for all desktop virtualization projects. Storage and software vendors are rapidly building new product portfolios to resolve these issues.
Many of the early adopters entered their desktop virtualization projects naively. The IT departments wanted to test the technology to see whether it would be a fit in their environments. To do this, they would test virtual desktops in miscellaneous locations; as these locations appeared to work, they would grow the deployment. This approach gave IT a lot of flexibility in its rollouts, but it also allowed important details to go unnoticed until the deployment had grown. One important detail is something known as "application sprawl":
Apps started growing out from the initial list, and that was one of the adjustments we had to make.
These early participants did not have the same set of tools that are available today. The tools now available can easily show every application running on the network, how often an application is used, the response time of the application, and whether that application is suitable to run in a virtualized environment. An example of these modern tools is shown in Figure 1.
Source: Lakeside Software (October 2012)
Figure 1 shows a real-world example of application sprawl on a network of roughly 7,000 virtual desktops. The image shows multiple versions of the same application spread throughout the network. It also shows that, although a group of applications is in use by most users, the number of users running an application drops off sharply as the list of applications goes on. One notable detail in Figure 1 is the scroll bar under the bar graphs, which signifies how much more there is to show: If the picture could be made large enough, thousands of applications would be shown.
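For organizations without such tools, even a simple script over a software inventory export can surface sprawl. This is a minimal sketch; the CSV layout (hostname, app_name, app_version) and the file name are hypothetical, and real tools such as the one in Figure 1 provide far richer data:

```python
# Minimal sketch of spotting application sprawl from a software
# inventory export. The CSV schema and file name are hypothetical.

import csv
from collections import defaultdict

versions = defaultdict(set)   # app name -> set of versions seen
hosts = defaultdict(set)      # app name -> set of desktops running it

with open("inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        versions[row["app_name"]].add(row["app_version"])
        hosts[row["app_name"]].add(row["hostname"])

# Flag apps with several coexisting versions: prime consolidation targets.
for app, vers in sorted(versions.items(), key=lambda kv: -len(kv[1])):
    if len(vers) > 1:
        print(f"{app}: {len(vers)} versions across {len(hosts[app])} desktops")
```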
Lacking an understanding of the applications running in the environment, and of how they operate, can cause major problems down the road. One participant had originally planned to run desktops in a nonpersistent manner (that is, the desktop goes back to a pristine state after each logoff), but after attempting to do this, the participant found that certain applications would not work in this fashion. Thus, the participant had to change the architecture to support persistent desktops.
We knew early on that we'd only have so much storage. Apps required us to go more persistent, and that changed the storage requirements greatly.
Having an understanding of the applications in use in the environment is but one application issue observed by these early participants. There were other issues with applications that are addressed in the Application Concerns section. Understanding the application needs of an environment can be the difference between a successful desktop virtualization project and a failure.
During the contextual research study, Gartner recognized the following patterns among participants:
Many early adopters didn't start by looking at the ROI of the desktop virtualization project. They had pressing needs that demanded a new desktop management model, and virtual desktops seemed like the right choice. Of those that did look at the ROI, many were pleased:
We did an ROI: five-year 20% ROI. PC replacement, license savings, support staff, user issues versus PCs.
When organizations compare the cost of physical desktops with that of virtual desktops, it is rare for virtual desktops to come out cheaper. However, one organization fought to get these numbers as close as it could:
We feel we are very close to cost parity. We have to bargain hard with the vendors. I've built a model that gives us cost parity over an eight-year cycle.
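To illustrate what such a cost-parity model might look like, the sketch below compares eight-year totals under placeholder assumptions; none of the figures come from the participant:

```python
# Illustrative eight-year cost-parity model (two 4-year PC refresh
# cycles versus one VDI build-out). Every figure is a placeholder.

years = 8
desktops = 1000

# Physical: two PC purchases plus annual support per seat.
pc_unit, pc_support = 700, 250
physical = desktops * (pc_unit * 2 + pc_support * years)

# Virtual: thin client lasts the full cycle; capital for servers,
# storage and licenses up front; lower annual support per seat.
thin_client, vdi_capital_per_seat, vdi_support = 400, 1500, 190
virtual = desktops * (thin_client + vdi_capital_per_seat + vdi_support * years)

print(f"Physical 8-yr TCO: ${physical:,}")
print(f"Virtual  8-yr TCO: ${virtual:,}")
# With these assumptions the two totals land within about 1% of each
# other, the kind of parity the participant negotiated toward.
```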
The selling point from most vendors is that, although the capital costs may be high for virtual desktops, the return comes from reduced operating costs. However, some organizations don't feel this is something vendors get to claim:
We laid out the challenge to vendors — you won't have high adoption without cost parity. Vendors argued you'd get labor savings, but we said, "That doesn't count. That's ours!" If you want massive adoption, it's got to be less expensive.
Cost is a top concern for most organizations, and many technologies are being created to address the cost issue. In numerous cases, Gartner has documented that storage tends to be the No. 1 driver of cost, demanding as much as 60% of the total project budget. Due to this, we see many new storage technologies seeking to address this issue. The early adopters interviewed during this study were not using the more modern approaches to storage, such as an appliance, Fusion-io, Atlantis ILIO or direct-attached storage (DAS). However, we expect that, as SHVD adoption continues, more organizations will look at these alternatives to bring down the cost of these projects.
Participants of the study had many things to share about getting applications to run on a virtualized desktop:
- Epic electronic medical record (EMR)
- U.S. Food and Drug Administration (FDA) regulations
- Radiology applications on virtual desktops
Epic EMR is a very popular EMR product used by many participants in the study. However, not all participants were using the same SHVD or server-based computing (SBC) platform. Participants were split down the middle as to which vendor platform they were running: Roughly one-half of the participants interviewed were running Citrix XenDesktop, while the other half were running VMware View.
Bundled with these products are application delivery solutions that are significantly different:
- Citrix XenDesktop Enterprise and Platinum include licensing for XenApp, which is SBC technology that runs on a Windows server and remotely delivers a Windows-based application to any number of destinations (such as tablets, physical desktops, virtual desktops and so on).
- VMware View Premier includes licensing for ThinApp, which is an application virtualization solution that streams an entire application package to a Windows OS, be it virtual or physical.
Streaming an application (ThinApp) and remotely delivering an application (XenApp) are very different methods of application delivery. As such, EMR vendor support for these methods varies greatly. In most cases, application streaming is not supported for extremely large and complicated applications such as an EMR, as one participant reflects:
Epic is just too big to stream, so we bake it into the image.
"Baking" an application into the image is a reference to installing the application on the master desktop image. This process removes the need to remotely deliver or stream the application to the desktop, since the application is already baked into the OS image. This is a common practice on many applications that require too much work to function properly via remote application delivery or application streaming.
When an application is installed directly on the OS, there is little difference in application functionality between this and how most applications run in a physical environment. However, even this practice finds a lack of support from application vendors. Gartner reached out to a group of popular EMR vendors to see how they compared on platform support. Each vendor was asked if it supported one of the three platforms used by participants of this study. Table 1 shows how each vendor responded.
Source: Gartner (October 2012)
It comes as no surprise that Citrix XenApp was supported by all EMR vendors, since the XenApp platform has been dominant in the healthcare industry for more than a decade. However, Citrix XenDesktop and VMware View are lacking in support. Technically speaking, there are very few reasons that an EMR application would not work in an SHVD solution. The lack of full support for the SHVD platforms, though, underscores the continued dominance of Citrix XenApp in healthcare.
One organization opted to run VMware View over the better-supported Citrix platform, feeling as though VMware View better met its strategic needs. Since Epic EMR was not fully supported on its platform choice, the organization had to work with Epic to get its product to function properly. The participant pointed out that, although Epic did initially work, roaming became a major issue when the organization moved from Microsoft's Remote Desktop Protocol (RDP) to Teradici's PC over Internet Protocol (PCoIP):
Originally, we were running on RDP, and RDP had a method of sending the client workstation identifier to the hosted VM. When we moved to PCoIP, this was no longer the case. We had to work with Teradici to locate the identifier, which was in a different registry location. Once we identified the location of the identifier, we worked with Epic, and, internally, we wrote a registry hack to pass this information into Epic.
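As an illustration of this kind of shim, the sketch below reads the client endpoint name that VMware's agent publishes to the session's volatile environment (ViewClient_Machine_Name, per VMware documentation) and republishes it under a hypothetical key. The actual registry locations and mechanism the participant used were not disclosed, so this is an assumption-laden sketch, not their implementation:

```python
# Minimal sketch of the kind of registry shim the participant describes.
# Assumptions: the View agent publishes the client endpoint name as
# ViewClient_Machine_Name under HKCU\Volatile Environment (documented
# VMware behavior); the destination key the EMR reads is hypothetical.

import winreg

SRC = (winreg.HKEY_CURRENT_USER, r"Volatile Environment")
DST = (winreg.HKEY_CURRENT_USER, r"Software\HypotheticalEMR\Session")

with winreg.OpenKey(*SRC) as src:
    client_name, _ = winreg.QueryValueEx(src, "ViewClient_Machine_Name")

# Republish the identifier where the EMR expects a workstation ID.
with winreg.CreateKey(*DST) as dst:
    winreg.SetValueEx(dst, "WorkstationID", 0, winreg.REG_SZ, client_name)
print(f"Published workstation identifier: {client_name}")
```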
This organization had a lot of help from Epic during this process, and it felt there was a reason for that:
[Epic] didn't want to be married to Independent Computing Architecture (ICA) and Citrix. [It] wants to show that there are other solutions out there.
According to Epic, support will be provided on any platform. However, XenApp is the only platform on which it has expertise. Epic draws the line if the problem is in the platform, since it does not have the expertise to handle platform issues.
The organization has been successful in getting Epic fully operational in its VMware View environment, and points out how this has benefited its bring your own device (BYOD) policy:
The physician that loves a Mac can get Epic through his virtual desktop. Physician complaints about desktops have really fallen off.
Many healthcare providers are challenged by FDA regulations that don't allow these institutions to virtualize some systems. Speaking about FDA regulations, one organization stated:
[The FDA] shouldn't let the regulation get in the way of progress. [It] won't approve VDI. We can prove it works in the VDI environment, but [the FDA] just doesn't understand it.
One participant would not accept a solution that did not work on a desktop virtualization platform. This organization decided that getting its blood bank software to work on a desktop virtualization platform was worth the effort to go through the process for FDA approval. This participant shared how this process works, and how it was accomplished:
The manufacturer has to list with the FDA what it certifies. Working with the vendor, we were able to certify that our blood bank program does run in a VMware View environment successfully. It runs parallel ports. We did tests on our PC, and we did the same tests on the View environment. We actually had to be the testing arm for them. And, just recently, we received the approval from the blood bank program that we can, in fact, use our View environment to run the WebEx software. That took probably just under a year to get done.
This participant noted that, during the testing process, they had to have two test environments running: One had to be fully physical, and the other ran the intended virtual environment. Anytime an issue occurred on the virtual environment, this organization had to prove that the issue also occurred on the physical environment. Working through all these tests proved very time-consuming. However, in the end, the organization felt it benefited from the time spent.
Some healthcare providers believe that desktop virtualization is not ready for every application:
We had some challenges with radiology apps, and that was our litmus test — basically, video choppiness with original RDP and a little bit in PCoIP.
Radiology applications tend to be either two-dimensional (2D) or three-dimensional (3D). While it is technically possible to use a virtual desktop to run 2D radiology software, hospitals are reluctant to do this for diagnoses. This stance makes a lot of sense, since radiologists use extremely high-end monitors that can display a wide range of grayscales. These monitors help radiologists spot the few pixels of information that can be the difference between catching a condition such as cancer and missing it. In these scenarios, it is much too risky to depend on a virtual desktop to deliver every pixel. However, using radiology software to review an X-ray with a patient is considered acceptable, and virtual desktops are fully capable of handling this less critical use case.
No hospitals we interviewed were using virtual desktops for 3D radiology, which often requires a graphical processing unit (GPU) to render the image. Although Citrix (via Microsoft RemoteFX) and VMware have made many leaps in software-based GPUs, these x86-based GPUs are still lacking in capability. Software-based GPUs often have limited image-rendering capabilities, and they come with the side effect of reducing the number of VMs that can be hosted per server. New technologies, such as Nvidia's virtual GPU experience (VGX) card, replace the x86-based GPUs with a full GPU that is shared among multiple VMs. This means that, in the future, 3D radiology may be possible on virtual desktops. However, as is the case with 2D radiology, the use of virtual desktops for diagnostic imaging will be highly scrutinized.
Healthcare providers have high security concerns that are dictated by HIPAA. These concerns have caused organizations to look into desktop virtualization for a variety of reasons:
- Protecting health information
- Desktop performance in life or death situations
- Electronic health records (EHRs) increase the need for highly available desktops
Desktop virtualization is often touted as a security benefit. Although it makes no changes to the underlying OS's security, it does provide endpoint security, since the endpoint devices do not store data:
If you don't have data or apps on an endpoint, I don't have to be as concerned with the management and security of that endpoint.
A constant fear of healthcare providers is keeping tabs on PHI. In a physical world, it is very easy for data to reside on the local hard drive, which is problematic for IT departments that allow medical professionals remote access or access with personal devices. Hospitals are required to report any data leaks that occur, yet allowing access to PHI opens a floodgate of potentially sensitive information that can be stored anywhere. Even technologies that reduce bandwidth utilization by caching common images can be at risk of containing PHI. Utilizing desktop virtualization allows these organizations to keep data inside the data center. For this reason, they no longer have to be concerned that PHI is on a device outside the data center:
We've become more secure by being more open. There's no data or app on a device, so we don't have to worry about it anymore.
One provider was very succinct about IT's role in delivering information in a timely manner:
A [VDI business case] was going to happen no matter what because of patient safety. [It] takes too long to get into a PC, [which] put kids at risk. [It] had to be fast and ubiquitous, since children's lives were at stake.
Depending on how virtual desktops are architected, it may be possible to achieve major improvements in response time compared with a physical desktop. Having the desktop sit in the same data center as the EMR reduces latency and increases bandwidth between the desktop and the hosting server. In many institutions, this means faster response times from their EMRs. Boot and login times improve as well:
Patient safety is the first driver for moving to VDI. The causal reason for this was speed of logon. When you're dealing with things like drug reactions, time is of the essence.
When considering time as a life-saving issue, various products can greatly improve the response time of virtual desktops. Fusion-io cards enable a massive number of IOPS and place the entire desktop on a Peripheral Component Interconnect Express (PCIe) flash card, which means very fast boot, login and response times. Other vendors' products, such as Atlantis ILIO, enable a "diskless" architecture by placing the entire desktop in RAM. In the context of an emergency situation, these technologies could very well be the difference between life and death.
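A rough calculation shows why the storage tier dominates perceived speed. All I/O counts and per-tier IOPS figures below are illustrative assumptions:

```python
# Rough illustration of why the storage tier dominates boot/login time.
# I/O counts and device IOPS are illustrative planning figures only.

boot_ios = 25_000          # assumed I/O operations for one desktop's boot
tiers = {
    "Shared SAN slice": 500,               # IOPS available under load
    "PCIe flash (e.g., Fusion-io)": 20_000,
    "RAM-backed (e.g., Atlantis ILIO)": 100_000,
}
for tier, iops in tiers.items():
    print(f"{tier}: ~{boot_ios / iops:.1f} s of pure I/O wait")
# 50 s vs. about 1 s vs. a fraction of a second: the gap a clinician
# feels at logon when dealing with something like a drug reaction.
```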
As hospitals move to a mandated EHR, getting patient data becomes an issue of having a desktop that is up, running and able to access the EHR. This makes desktops a critical component of patient safety, which means that healthcare organizations need to think of new ways to manage desktops. Many organizations interviewed understood this, using the desktop virtualization project as a means to enable higher service-level agreements (SLAs) on their desktop environments. However, in one interview, it was noted that the organization failed to recognize this reality:
I like to think of high availability (HA) as buying insurance. You're paying for what you're not using, and it's hard to justify.
This comment seems out of place. Hospitals pay for generators in case of power loss. They pay for fire systems in case of fire, and they pay for defibrillators in case someone's heart stops. The reason they pay for this equipment isn't for insurance reasons — they pay for it to save lives. All healthcare providers should consider how the desktop has changed due to the emergence of the EHR. Gartner recommends choosing a system that will best enable healthcare providers to access the EHR in emergency situations. Here are some questions to consider:
- What is the risk in not being able to access the EHR during an emergency? Some organizations fall back on pen and paper when the EHR is not available. This approach only offers documentation of what happened during the emergency. It does not offer patient history, which holds the patient's allergies and other critical information. As more of the patient record becomes digital, the chances of finding patient history on paper will be greatly reduced.
- What are the power requirements of our current desktops? Using alternative desktop technologies may offer considerable power-saving benefits. Some thin and zero clients can run on less than 10 watts of power. In times of elongated power outages, this may enable the system to stay up longer by drawing less power from the generator or battery backup (a rough comparison follows this list).
- Is the emergency power capable of powering the data center and the endpoint (including the monitor)? Most organizations have sufficient power to keep the data center up and running in an emergency, but the desktop is often not included. New Power Over Ethernet (PoE) system on a chip (SoC) devices, such as the HP Smart Zero Client, extend PoE capabilities to a monitor. This would allow the desktops to leverage the power redundancies seen in most data centers.
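As the rough comparison of endpoint draw on emergency power, the sketch below uses typical vendor wattages and an assumed battery reserve, not data from the study:

```python
# Rough comparison of endpoint draw on emergency power. Wattages are
# typical vendor figures, not measurements; battery reserve is assumed.

endpoints = 300
battery_kwh = 50                 # assumed reserve available to endpoints

profiles = {"Desktop PC + monitor": 170, "Zero client + PoE monitor": 25}
for name, watts in profiles.items():
    load_kw = endpoints * watts / 1000
    print(f"{name}: {load_kw:.1f} kW -> ~{battery_kwh / load_kw:.1f} h runtime")
# 300 PCs draw ~51 kW (under an hour on this reserve); the thin-client
# estate draws ~7.5 kW and runs for most of a shift.
```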
Some organizations see IT as an overhead expense that needs to be reduced, while other organizations see IT as a department that enables business goals. Healthcare organizations tend to fit the latter category, since their priorities tend to be patient safety, data security and improved user experience. This last priority may seem out of place, but many healthcare IT departments are starting to realize that user acceptance is a key factor in the success of a project:
The whole reason for VDI was user experience. This was not about IT control. We struggled with consultants coming in and wanting us to turn everything off to work better. We didn't do that. We're not about IT control — we're here to serve our customers. We fought hard to make sure users are attracted to the technology, and that has propelled the whole thing.
The idea of IT control is seen in a common suggestion to disable the indexing service in Windows. This suggestion reduces CPU cycles and disk I/O, which supports the IT mission of reducing cost, but it ignores the user experience. Following this suggestion would mean that users could no longer run quick searches on their desktops. This would cause pushback against the new technology, since users would see the technology as restricting rather than empowering. A better user experience is one of the benefits of desktop virtualization that is often overlooked. User experience is not easily tracked and doesn't show up on a balance sheet, but an improved user experience has many unintended benefits.
This section covers the following:
At some point, many organizations experienced a major shift in their desktop virtualization projects. These projects all had similar beginnings: IT wanted to test a few locations to see how the desktop solution would work, then determine whether the technology should be deployed further. Although IT was taking a slow and steady approach to the project, many users and departments saw the new system and the benefits it brought them:
- Improved login speeds
- The ability to access the same desktop from any device
- Improved application performance
As users saw other departments getting these benefits, they began calling IT and demanding that they get the "new computers." Commenting on this shift in the project, one participant said:
We knew we were on the right track when clinicians who saw VDI in place asked to be next. We switched from a "push" to a "pull" model.
"Push to pull" refers to how IT originally "pushed" the technology out to users, and then the users started "pulling" on IT to get the new system. One organization had its typical critics completely change their opinions after seeing the new desktop virtualization project:
Physicians are pretty risk-averse. We had one department that didn't want VDI. Physicians came into new hospitals, and then saw the new ways of working. They said to take the PCs out.
For this organization, it took only one visit before its physicians demanded that all PCs be replaced with the new desktop solution. The benefits of a desktop virtualization project are not just for the IT department — users will also immediately notice them. Gartner recommends that IT departments prepare for this shift, since it can sometimes cause IT to move too quickly and exceed capacity. Expect the project to start slowly and then move rapidly, and plan for this eventuality.
The ability to log in from any device and get the same desktop is often referred to as a "follow-me desktop." This is a completely new feature that many organizations are finding as a big win for their desktop virtualization projects:
We launched an effort to build technology for our new hospital. It gave us an opportunity to go into every clinical area, let the clinicians dream a bit and say what the future hospital should look like. We decided to offer a follow-me desktop that would offer a safer and more convenient solution for clinicians.
The follow-me desktop technology isn't often highlighted in the beginning of a desktop virtualization project, but is often one of the first things a user will notice and embrace:
The biggest thing for us from a clinical perspective is the follow-me desktop. Any physician can go up to any device — a bedside device, clinician workstation, shared office, personal clinic, home or anywhere. The ability for the physician to get to his or her desktop from anywhere is huge.
A follow-me desktop is often popular with doctors who get calls after hours and need to make a decision on patient care. The ability to remotely log in to their desktops from whatever device is at hand means they can make an informed decision without a major interruption to their personal lives.
HIPAA compliance requires that users sign in and out every time they use a workstation to access PHI. A typical healthcare worker will use numerous workstations on any given day. The tediousness of having to log on and off every workstation frustrates users to the point where they look for ways around this policy. The policy isn't going to change, and this is the main reason why single sign-on (SSO) solutions, strong authentication and proximity card readers have been very popular in healthcare organizations. These technologies simplify the login process while maintaining HIPAA compliance.
Failure to maintain compliance is no small matter. One organization described the ramifications for staff if they fail to follow HIPAA compliance:
You have to log out. If there is a violation or fraud, it's your login. We've had to adapt policies to make sure Nurse B doesn't use Nurse A's login. The smart ones don't complain because they know what they're doing. The dumb ones complain. Some nurses knew what they were doing, took the risk and got fired for it. That's one thing that we are very strict on. When you offer a virtualized environment, that's a risk you run. Our goal is for Imprivata (an SSO provider) to solve that.
Most hospitals require the client and host to support SSO solutions before they will even consider a desktop virtualization solution. Imprivata was a popular vendor choice among the participants in the study, due to the company's success in getting its solution to work on a virtual desktop and its support for many different client choices, including zero clients.
Virtual desktops integrate easily into environments that are accustomed to physical desktops. In many cases, the IT department actively works to stop using terms that refer to the technology as anything other than a desktop. To these organizations, a user doesn't need to know that they are using a virtual desktop:
We have a saying: "Don't train users about VDI. Just train them how to do their jobs in the real world."
The reasoning behind this approach is twofold. First, IT departments are training their support staff to understand the differences in how to troubleshoot a virtual desktop. Second, the user desktop experience should be identical or better than the previous physical desktop. As such, users shouldn't be concerned with how they have a desktop, only that they have a desktop with improved or added capabilities.
Other organizations also struggled to get storage and other data center team members fully on board with the desktop virtualization project. To solve this problem, some organizations restructured their IT departments. For example, one participant noted that it brought infrastructure and operations under one leader. The organization formed a cloud and virtualization team out of its storage, virtualization and client access technology practices. This restructuring enabled the organization to better implement and support the desktop virtualization project.
Although restructuring an IT department may seem extreme, it is important to understand that a desktop virtualization project is much more complicated than a server virtualization project. Many IT departments are not used to having direct interaction with end users. As such, they are not used to the demands these consumers will put upon them. In large IT departments, different teams handle different technologies. For a successful desktop virtualization project, Gartner recommends a team consisting of the following:
- Storage administrators
- Virtualization administrators
- Network administrators
- Security administrators
- Application administrators
- Desktop administrators
Having at least one member from each team as a part of this process will help when issues occur. It is important to understand that troubleshooting a virtual desktop may require the expertise of all teams involved. Many participants noted that, in restructuring their IT to support the desktop virtualization project, they were able to go from three levels of support down to two.
Desktop virtualization is rapidly maturing, but it is far from simple to implement. Organizations should take care to properly assess user experience, application and storage requirements, and should spend considerable time on architecture and testing to avoid the mistakes that were common among early adopters.
Gartner has found that healthcare organizations are willing to work hard to overcome desktop virtualization pitfalls. As a result, we expect the healthcare industry to lead in overall virtual desktop adoption for the next several years.
| Acronym | Definition |
|---------|------------|
| BYOD | bring your own device |
| EHR | electronic health record |
| EMR | electronic medical record |
| EVA | Enterprise Virtual Array |
| FDA | U.S. Food and Drug Administration |
| GPU | graphical processing unit |
| HIPAA | Health Insurance Portability and Accountability Act |
| HVD | hosted virtual desktop |
| ICA | Independent Computing Architecture |
| IOPS | input/output operations per second |
| PCIe | Peripheral Component Interconnect Express |
| PCoIP | PC over Internet Protocol |
| PHI | protected health information |
| PoE | Power Over Ethernet |
| RDP | Remote Desktop Protocol |
| RDS CAL | Remote Desktop Services Client Access License |
| SAN | storage area network |
| SHVD | server-hosted virtual desktop |
| SoC | system on a chip |
| VDI | virtual desktop infrastructure |
The weightings and platform alternatives have been derived from Gartner's 2012 contextual research study, which involved site visits with 19 early virtual desktop adopters at scales beyond 2,000 seats. Contributions were also provided by several dozen end-user organizations that contacted Gartner via the standard inquiry process.
1 Teradici is a software and hardware company that owns the PCoIP protocol.