Do you have an established GenAI strategy or AI policy? If so, what outcomes are you working toward? If not, have you allowed the use of GenAI organically? What outcomes have you seen from this approach?
We have both an AI policy and a strategy. We initially developed a standalone AI strategy, but it quickly became evident that it should be tied to our strategic objectives rather than to the technology itself. Our approach is more of a framework, with mandatory AI training for everyone to avoid issues like hallucinations or uploading confidential information. We haven't blocked anything except DeepSeek, and we ensure data governance and privacy reviews. We're letting people experiment with free tools but encourage them to use our existing resources.
We definitely have an AI strategy. As a financial services organization, we carry a lot of risk, so we started with our own LLM and blocked external ChatGPT. Our strategy includes governance and policy, and it aims to make it easier for customers to work with us while improving our contact centers and IT operations. We're leveraging AI to eliminate bureaucracy and improve efficiency, with a clear direction and a focus on making business easier.
We don't specifically talk about a GenAI strategy; instead, we focus on policies. Our primary goal is solving business problems, and GenAI serves as a tool to help us achieve that. Over the past six months, we've been concentrating on this area, using AI as an assistant across various functions, such as processing unstructured documents and extracting information from SharePoint. We have around 100 engineers working to enable these solutions, and we're collaborating with providers to develop specific use cases.
We're focused on addressing significant problems, particularly clinical and operational ones. Our strategy aligns with our organization's objectives, using digital tools, including GenAI, to support these goals.