How are you enabling both technical and non-technical teams to securely experiment with AI? Do you provide test environments and/or test data?
At Price Smart, our current approach is to limit AI experimentation primarily to ChatGPT and Copilot, with the broadest access reserved for technical teams. Non-technical employees receive training and access to Copilot, while technical staff can also use ChatGPT under strict guidelines. We prohibit entering confidential information, personally identifiable information (PII), or any other sensitive data into these platforms.
We are exploring software solutions that monitor prompts submitted to Copilot and ChatGPT, allowing us to implement rules that mask or block sensitive data before it reaches the language model. For example, if a Social Security number is entered, the system will mask it, alert our team, and continue processing the query. If certain keywords are detected, the input is blocked entirely, preventing it from reaching the model.
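For illustration only, here is a minimal sketch of how that kind of mask-or-block rule logic could work; the regex, keyword list, and function names are hypothetical placeholders, not the vendor tooling we are evaluating.

```python
import re

# Illustrative screening rules: mask Social Security numbers, block prompts
# that contain restricted keywords. All values below are placeholders.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
BLOCKED_KEYWORDS = {"project codename", "customer list export"}  # hypothetical examples

def screen_prompt(prompt: str) -> tuple[str | None, list[str]]:
    """Return (sanitized_prompt, alerts); sanitized_prompt is None if the prompt is blocked."""
    alerts: list[str] = []

    # Block the prompt entirely if any restricted keyword is present.
    lowered = prompt.lower()
    for keyword in BLOCKED_KEYWORDS:
        if keyword in lowered:
            alerts.append(f"blocked: restricted keyword '{keyword}' detected")
            return None, alerts

    # Mask SSNs, raise an alert, and let the query continue to the model.
    if SSN_PATTERN.search(prompt):
        alerts.append("masked: SSN detected")
        prompt = SSN_PATTERN.sub("[REDACTED-SSN]", prompt)

    return prompt, alerts


# Example: the SSN is masked, an alert is recorded, and the query still proceeds.
sanitized, alerts = screen_prompt("Summarize the claim for SSN 123-45-6789.")
```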
Our non-technical teams are currently limited to Copilot, with some access to Claude and Read AI. We keep our Read AI deployment local, which minimizes concerns about data security. We are in the early stages of rolling out these tools, proceeding cautiously due to the complexity of our organization. With approximately 500 employees in the US and 14,000 in Latin America, we must navigate varying data laws and regulations across countries. Ensuring that all relevant laws are reflected in our policies is a significant challenge.
Our environment is somewhat different, but the central theme remains: fostering innovation while implementing governance controls. At Coleman Incorporated, we have adopted a "crawl, walk, run" approach to AI adoption. Currently, we are moving quickly but deliberately, following a two-year cycle of education and stakeholder alignment.
A significant part of our process has involved educating non-technical staff about AI capabilities and clearing up misconceptions, such as the difference between the personal and enterprise versions of tools like ChatGPT. We have invested in on-demand training for all employees and established clear guidelines for AI usage. While we do not have a dedicated test environment, there is visible awareness and guidance on how to use AI tools responsibly. Additionally, our security operations team actively monitors AI-related activities to ensure compliance and safety.

At Salona, enabling technical teams to experiment with AI is less about permission and more about setting appropriate guardrails, as there is no stopping them from trying new things. To address this, we have established an AI governance program that works with various teams to review their requests and intended use cases. Our operations are governed by the EU AI Act, which requires us to maintain a clear understanding of how AI is used within the company.
We funnel different teams into our governance program by controlling access to certain resources, such as API keys for large language models. Access is only granted after a formal approval process, which ensures compliance with regulatory requirements and prevents circumvention of necessary procedures.
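As a rough illustration of how approval-gated key access could be enforced, here is a hypothetical sketch; the registry format, function names, and vault interface are assumptions for the example, not our actual implementation.

```python
# Hypothetical approval registry maintained by the AI governance program.
APPROVED_USE_CASES = {
    "uc-0042": {"team": "claims-analytics", "provider": "openai"},
}

class NotApprovedError(Exception):
    """Raised when a key is requested for a use case that has not been reviewed."""

def issue_llm_api_key(use_case_id: str, vault) -> str:
    """Release a provider API key from the secrets vault only for approved use cases."""
    if use_case_id not in APPROVED_USE_CASES:
        raise NotApprovedError(
            f"Use case {use_case_id} has not completed AI governance review."
        )
    # 'vault' is an assumed secrets-manager abstraction; in practice this would
    # be a call to whatever secrets store the organization already runs.
    return vault.get_secret(f"llm/{use_case_id}/api-key")
```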
Regarding test environments and test data, our approach depends on the context. For AI built into our products, customers interact with those features directly. For internal business needs, we strongly prefer employees to test new software in sandbox environments, never connecting to production systems or using production data. If production data or systems must be used, the proof of concept or test must be onboarded as a fully recognized use case or supplier, following our compliance protocols.