How are large professional services organisations deploying GPT-4 (or similar LLMs) for internal use, particularly when handling sensitive/confidential/PII data? For example, we're a UK-based accountancy firm, so we have created our own internal app based on Microsoft's Azure OpenAI GPT-4 model deployed to our private Azure hosting. We have built a custom app designed to "preprogramme" personas, i.e. pre-created prompts, for different areas of the profession, so internal users can have GPT-4 act in different ways without prior knowledge of prompt engineering. I would be interested to see how others are building or buying solutions, particularly those based in the EU/UK who can't use OpenAI/ChatGPT with customer/confidential information.
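As a minimal sketch of the "persona" pattern described here, using the Python `openai` SDK against an Azure OpenAI deployment — the endpoint, deployment name, and persona texts below are illustrative placeholders, not any firm's actual configuration:

```python
# Pre-created system prompts ("personas") keyed by practice area.
# These texts are invented examples, not real firm prompts.
PERSONAS = {
    "audit": "You are an assistant for UK audit professionals. "
             "Answer with reference to ISA (UK) standards where relevant.",
    "tax": "You are an assistant for UK tax advisers. "
           "Note where an answer may depend on the current Finance Act.",
}

def build_messages(persona: str, question: str) -> list:
    """Prepend the chosen persona's system prompt to the user's question."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": question},
    ]

def ask(persona: str, question: str) -> str:
    """Send the persona-framed question to a private Azure OpenAI deployment."""
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://example-firm.openai.azure.com",  # placeholder
        api_key="...",  # load from a secrets store, never hard-code
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the Azure *deployment* name chosen at deploy time
        messages=build_messages(persona, question),
    )
    return response.choices[0].message.content
```

The point of the pattern is that internal users only pick a persona; the system prompt is applied for them, so no prompt-engineering knowledge is needed.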

4.7k views · 4 Upvotes · 4 Comments
Chief Information Officer · 3 days ago

We enable our customers to connect their own @Azure and @AWS APIs to leverage different models, or to connect custom or open-source models, so they can roll out AI securely to their employees on-prem.

Global Head of AI, Data & Analytics in Software · 2 years ago

I assume this question is a bit old, but you did outline the standard approach: host or access a model as a service internally through Azure OpenAI, Azure Machine Learning / VMs, Amazon SageMaker, or Amazon Bedrock.
Though now ChatGPT Enterprise will make things easier if you don't care about federating models.

Director of Other in Software · 2 years ago

We are building solutions that enable our customers to bring their own LLMs or integrate with any open- or closed-source LLM. In cases like yours, we encourage our customers to integrate with Azure OpenAI to take advantage of Azure's enterprise security.

CEO in Services (non-Government) · 2 years ago

One answer is to use GPT-4 through Microsoft Azure if you want GPT-4-level performance. Another option is to host open-source models such as Llama 2 in a cloud or on-premise environment and serve them for inference. We see such requirements coming in from multiple clients. For an Indian government client, we hosted a code LLM for developer experience and built a Visual Studio plugin for developers to consume it; we are about to do the same for an Italian financial client. An on-premise solution involves a one-time hardware cost, while a cloud-based one incurs recurring GPU usage costs based on the number of requests that need to be served.
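To sketch the self-hosted option: open-source models such as Llama 2 are commonly served behind an OpenAI-compatible HTTP endpoint (servers like vLLM expose one). The host, port, and model name below are illustrative placeholders for an on-prem or private-cloud deployment; only the Python standard library is used.

```python
import json
import urllib.request

def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Against a real deployment, send the request and read the reply:
# req = build_request("http://llm.internal.example:8000",   # placeholder host
#                     "meta-llama/Llama-2-13b-chat-hf",
#                     "Summarise IFRS 16 in two sentences.")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint speaks the same chat-completions dialect as OpenAI's API, internal apps written against Azure OpenAI can often be pointed at the self-hosted model with little more than a base-URL change.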
