Is your organization pursuing large-scale AI enablement, such as enterprise-wide access to tools like Microsoft Copilot, or are you focusing on high-value use cases within specific functions? Why have you chosen that approach?
We have chosen a multi-layered approach, all leveraging the Microsoft Azure AI stack. At the simplest layer, we provide employees with access to Copilot in the browser for general chat/inquiry capability. The next layer enables other out-of-the-box Copilot capabilities like GitHub Copilot. The third layer focuses on agents and low-code development, and the fourth leverages MS Foundry tools for custom RAG copilot development. Our journey actually started at the fourth layer with a custom RAG solution, which proved to be a high-value use case. We created an enterprise GenAI council to ensure that we promote reusability rather than reinvention and prioritize the highest-value use cases across all functions.
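For anyone curious what that fourth layer looks like in practice, here is a minimal sketch of a custom RAG flow on the Azure stack, assuming an Azure AI Search index and an Azure OpenAI chat deployment. All endpoints, the index name, the "content" field, and the deployment name are hypothetical placeholders, not the poster's actual setup:

```python
# Minimal RAG sketch on the Azure stack: retrieve passages from Azure AI Search,
# then ground an Azure OpenAI chat completion in the retrieved context.
# Endpoints, keys, index name, and deployment name below are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://<your-search>.search.windows.net",
    index_name="policies",  # hypothetical index
    credential=AzureKeyCredential("<search-key>"),
)
llm = AzureOpenAI(
    azure_endpoint="https://<your-aoai>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)

def answer(question: str) -> str:
    # 1) Retrieve the top-scoring passages for the user's question.
    hits = search.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in hits)  # assumes a "content" field
    # 2) Ask the model to answer using only the retrieved context.
    resp = llm.chat.completions.create(
        model="gpt-4o",  # your Azure OpenAI deployment name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is our travel reimbursement limit?"))
```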
Hi. We are evaluating Microsoft Copilot and planning to deploy it in phases.
We have deployed ChatGPT Enterprise, but regarding focused, high-value use cases, we recently implemented a tool for company sourcing to support investment opportunities. In this specific case, users input a few metrics and prompts specifying what they’re looking for and what to exclude. The tool generates an initial list, conducts deep research on each entry, and returns a refined list of potential investment opportunities.
This product has successfully passed all proofs of concept and trials and has seen significant use. Beyond general GPT applications, this is a notable example of a specific, high-value use case that has been widely adopted in our organization.
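As a rough illustration of the pattern described above (generate a candidate list, research each entry, then filter), here is a hedged sketch using the OpenAI API. The prompts, model name, and KEEP/DROP convention are my own assumptions for illustration, not the actual product:

```python
# Two-stage sourcing sketch: (1) generate candidate companies from user criteria,
# (2) run a deeper research pass per candidate and keep only confirmed fits.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def source_companies(criteria: str, exclusions: str) -> list[str]:
    # Stage 1: initial long-list from the user's metrics and exclusions.
    longlist = ask(
        f"List 10 companies matching: {criteria}. Exclude: {exclusions}. "
        "Return one company name per line."
    ).splitlines()

    # Stage 2: research each candidate against the criteria; keep confirmed fits.
    shortlist = []
    for name in filter(None, (n.strip() for n in longlist)):
        verdict = ask(
            f"Research {name} against these criteria: {criteria}. "
            "Answer KEEP or DROP on the first line, then one sentence of rationale."
        )
        if verdict.upper().startswith("KEEP"):
            shortlist.append(name)
    return shortlist

print(source_companies("profitable B2B SaaS, ARR $10-50M", "crypto, gambling"))
```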
The company we’re exploring a partnership with has a solution that combines several large language models and machine learning capabilities—the popular ones—into a single offering. It’s an interesting proposition, and we’re still evaluating whether it will work for us or cause confusion. Access will be strictly controlled, limited to specific service groups or group IDs, and contained within a dedicated environment for testing and validation.
We’re just getting started and are taking a cautious approach. What we’re seeing is that many emerging solutions aren’t tied to a single provider. For example, Microsoft’s tools have evolved significantly, and now platforms like GCP offer Gemini AI as part of their ecosystem.
Personally, I prefer to remain agnostic, as being locked into one provider can limit flexibility. There may be a need for more generic solutions in the future. Sharing experiences and learning from each other will be valuable as we move forward.
While building generic, model-agnostic solutions (AI chat and RAG capabilities similar to ChatGPT Enterprise, but hosted on-premises), I hear the same views as yours from many IT leaders.

I believe LLMs have different capabilities for different jobs, so the platform must be flexible enough to stay current with the latest updates across OpenAI, Gemini, and Anthropic, while still tying into a single solution that can be rolled out org-wide with consistent AI adoption, governance, and security.

What are you prioritising?
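To make the "model-agnostic" idea concrete, here is a minimal sketch of a thin provider abstraction. The Provider interface and adapter class names are my own assumptions; only the underlying vendor SDK calls are the real OpenAI and Anthropic client APIs:

```python
# Minimal provider-agnostic chat layer: one interface, one adapter per vendor.
# The Provider protocol and adapter names are illustrative assumptions;
# the SDK calls inside each adapter are the real vendor client APIs.
from typing import Protocol

class Provider(Protocol):
    def chat(self, prompt: str) -> str: ...

class OpenAIProvider:
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI
        self.client, self.model = OpenAI(), model

    def chat(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class AnthropicProvider:
    def __init__(self, model: str = "claude-3-5-sonnet-20240620"):
        import anthropic
        self.client, self.model = anthropic.Anthropic(), model

    def chat(self, prompt: str) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

# Swapping vendors becomes a one-line change, and governance concerns
# (logging, redaction, rate limits) can wrap chat() in a single place.
providers: dict[str, Provider] = {
    "openai": OpenAIProvider(),
    "anthropic": AnthropicProvider(),
}
print(providers["openai"].chat("Summarize our AI usage policy in one sentence."))
```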

I'd tackle both. Go wide and deep. Wide: democratising AI capability across the org (the right tools for the right people, though), particularly for subject matter experts, to enable innovation and value creation at scale in the lines of business. Deep: focusing on the ROI/value use cases that have the most tangible impact. Going wide will create an innovation cycle that continually looks for ways to improve the business; going deep should help unlock the funding to invest in going wide.