Gartner

Our Mission

To foster information sharing and best practices among AR professionals for more effective and efficient interactions with Gartner and measurable business value to their companies.

May 2014 Analyst Relations Newsletter


The Gartner Vendor Rating: Consolidating Positions on Complex Vendors

Andy Butler
VP and Distinguished Analyst, Gartner Research


As discussed in the accompanying article on the Gartner SWOT, we often get questions from Analyst Relations professionals about Gartner research methodologies and how they should prepare to work with analysts during the research process. Others ask whether there are ways to influence analysts to cover them through a specific methodology. Here we discuss the latter question as it applies to the Gartner Vendor Rating by interviewing Andy Butler, Gartner Research Vice President and Distinguished Analyst. Andy’s research agenda encompasses market and technology trends for most server technologies and includes operating system evolution, architectures, platforms and vendor strategies. Andy was recently the lead analyst for Cisco Systems and is now the lead analyst for Red Hat Software. He’s uniquely qualified to help AR professionals understand more about the Vendor Rating methodology.

Andy, share with us an overview of the Vendor Rating, its use and how you see Gartner end-user clients using the Vendor Rating document in their decision making.

Quite often an end-user client is looking for guidance on the overall health of a vendor, particularly the handful of vendors that are strategic to them. End users are often keen to understand whether these vendors are rising or declining in the marketplace. Where are they facing challenges? A particularly diverse vendor, like Cisco or HP, can at any given time have a range of products, portfolios and technology strategies, some in ascendance and some in decline. The Vendor Rating gives our clients the opportunity to see where parts of a vendor’s portfolio are failing to resonate with the market, where the vendor is putting its focus and its investments, and to gauge the overall direction of the covered vendor. I don’t see the Vendor Rating as replacing the deep research we do, and clearly the lead analyst is rarely the expert on more than a fairly small portion of what the Vendor Rating covers. To me, this is the 5,000-foot view of the whole company, giving people a feel for whether the company is executing its overall corporate strategy well, as opposed to just individual strategies.

How do we recommend our end-user clients leverage the Vendor Rating alongside other methodologies like Magic Quadrants?

I think it all comes down to the ratings we provide. Forcing ourselves to work with only five possible categories within the Vendor Rating methodology makes analysts brutally honest about where we assess a given technology or product within that band of five. Challenges often arise with just these five categories; for example, let’s say an analyst gives a vendor a “Caution” rating as it enters a market for the first time. We advise clients to interpret that differently from a “Caution” rating on a proven legacy technology that is no longer winning new business but continues to address the needs of a certain community; that would merit a caution precisely because there isn’t any new business. So we are careful to advise clients that there can be two different ways of interpreting a rating, depending on whether the technology being offered is new growth technology or older, declining technology.

As a companion methodology, the Vendor Rating works well with the IT Market Clock. Where a Market Clock exists, it does a very good job of helping clients look at a specific technology area a vendor is investing in, such as Cisco UCS blade servers. By looking at the assessment we make of UCS and the blade products that Cisco offers, clients can then compare that to the relative evolution of blade servers in the server technology Market Clock. This gives people a very good comparative picture.

What about SWOTs? How would you compare and contrast those with the Vendor Rating?

Clearly the Vendor Rating cannot go into the depth of the SWOT; we keep the Vendor Rating at a high level and then use the SWOT to dive more deeply into a specific aspect of a complex vendor. While the SWOT is written for the vendor, given the high readership of SWOTs among end users, we should consider producing an end-user version in the future.

You mentioned Vendor Ratings written about complex vendors. What are the other criteria that analysts use to decide on initiating coverage through the Vendor Rating document?

Our strategy is that we write Vendor Ratings only about complex vendors. If you are designated the lead analyst for a complex vendor, you are expected to write the Vendor Rating; it’s pretty straightforward.

We have about 30 vendors with complex vendor status, and only those vendors will have a Vendor Rating. One of the obvious criteria is that the vendor is typically large and does business in multiple technology segments, which results in many analysts writing about aspects of that vendor. The lead analyst’s primary role is to synthesize the positions of the various Gartner analysts covering a complex vendor, working collaboratively with them, and to hold the consolidated view. The Vendor Rating is the document that captures that view.

Are there conditions under which we would no longer cover a vendor via the Vendor Rating?

Yes! A great example is Microsoft’s acquisition of Nokia’s devices business; Nokia was a complex vendor with a Vendor Rating. The Nokia Vendor Rating will likely be retired, and ratings of the Nokia portion of Microsoft’s business will be folded into the Microsoft Vendor Rating.

If we think about the vendor’s Analyst Relations team, what do they have to do differently when they reach this complex vendor stage? Do you think working with the analysts on a Vendor Rating is more work, less work or about the same as on a Magic Quadrant?

They are more or less comparable. Clearly the Vendor Rating document will go to the AR team, and because it covers the entire company, there will probably be more work for them in that they will have to pass the document to multiple product teams and groups of people for review. However, each of those teams will only have to review a paragraph or two. So I would say there is a broader distribution of the document that AR needs to manage across the vendor’s internal community, but the amount of content being reviewed at each stage will be a lot less.

A Magic Quadrant, in comparison, is very specific. Using Cisco again as the example, Magic Quadrants covering Cisco include a networking MQ, a blade server MQ and a unified communications MQ, so the AR people working on any one of those MQs have to go to just one internal team versus many, as is the case with a Vendor Rating. However, that one internal team will have more work because of the deeper information gathering required; they’ve got to evaluate the strengths and the challenges that the Magic Quadrant will expose. I think the Vendor Rating is a more predictable document for the AR people to manage internally, whereas Magic Quadrants, though more localized in terms of how much of the vendor they cover, tend to get people’s hackles up pretty quickly, and the AR people can quite often become stuck in the middle between passionate marketing people and analysts looking for the balanced view. I think a good AR person should never get too close to the analysts, and never get too close to marketing, either. They have to be that neutral bridge between the two organizations, and it’s an incredibly difficult role to fulfill!