How are you identifying and measuring critical skills gaps in AI and data across your IT organization? What approaches are proving most effective?
We came up with a minimum digital standard because when you have integrated systems, the glue is the digital element. We started by measuring utilization rate, but that’s not enough. You need to tackle how well people are using the tools, not just whether they’re checking the box. That goes to culture and the digital mindset. You have to change the mindset so people adopt these things as a first principle, not just because someone is forcing them.
I definitely have a carrot, but instead of a stick, I want stickiness. Stickiness is what’s sustainable. The carrot creates momentum, but if people aren’t adaptive, it won’t last. If you use a stick, people will comply, but there’s no real participation and it won’t sustain. I use competition and bold goals: how many more projects can we do for the same cost? That gives me two opportunities: I can deliver more business projects for the same dollar, and people shift from just keeping the lights on to delivering more business capabilities. My expense line goes down due to automation, and people level up. It makes the whole organization shift up, and it gives people a reason to stay on course because they see the value of what they deliver. So I want a carrot and stickiness, not a stick.
What we’ve learned over the past three years is that the skills typical organizations have on their development teams are good, but not necessarily the best. What’s really missing is the curiosity factor. You want people to be curious, not just follow a template; the most curious people are the most innovative. Technology doesn’t have many limitations, but you need people who can mold the problem. We rely heavily on small teams with very good discipline knowledge, whether from the business or IT, and we want at least one or two people to tackle a problem and show a new idea. We went through hundreds of ideas with a governance body, but the driving force is ROI: are we improving things? Are we making a difference? In our experimentation, a very small, motivated team with well-rounded skill sets can deliver in a matter of weeks.
We leverage our AI governance committee to help us keep our finger on the pulse of where the skills gaps are in the organization. It’s a multidisciplinary set of folks from many parts of the org, whether data, BI and analytics, or IT. I consult with them, and because they work directly with our users, they know best where we’re lacking.
A lot of what I’ve been doing is having discussions with individuals who come to me and ask about AI. One big gap on the team is the data and AI literacy to recognize that they may have a great idea but not know what to do with the outcome or how to validate it. I’m speaking mainly about generative AI, since we don’t have a lot of legacy or traditional AI. Everybody’s hearing about it, so it’s important that they really understand what they’re trying to achieve. From that, we’re building a data and AI literacy program so everyone knows what to expect and doesn’t expect miracles. We need to level-set expectations.

We’re thinking about raising the ceiling from a capabilities perspective, but also raising the floor for adoption. We’re considering minimum requirements for tool utilization, with a goal that people hit a certain level of usage, which is easy to track for developers and testers. I’m curious if anyone has tried minimum standards and whether it worked or backfired.