Are short-term bans on the use of GenAI applications and tools (such as ChatGPT) a good idea for end users in most organizations? For context, see this article on the State of Maine Government's directive before you vote: https://www.govtech.com/artificial-intelligence/chatgpt-generative-ai-gets-6-month-ban-in-maine-government

Yes - Maine did the right thing. There are too many security risks with the free versions of these tools, and not enough copyright or privacy protections for data. (29%)

No, but... - You must have good security and privacy policies in place for ChatGPT (and other GenAI apps). My organization has policies and meaningful ways to enforce those policies and procedures for staff. (46%)

No - Bans simply don't work. Even without policies, this action hurts innovation and sends the wrong message to staff and the world about our organization. (19%)

I'm not sure. This action by Maine makes me think. Let me get back to you in a few weeks (or months). (4%)

735 PARTICIPANTS
33.1k views · 8 Upvotes · 8 Comments
Employee in Government · 19 days ago

I remember the days when Internet access for users was limited. Personally, I believe we need to encourage usage, but it requires clear boundaries and good training.
If you ban it, users will look for a way around it; by adopting it securely you will achieve more and remain safe.

Enterprise Systems Architect in Government · 7 months ago

Short-term bans are OK to allow some breathing room to get at least basic policy, governance, and systems in place. If the ban is allowed to stretch much beyond the six months, it will produce accelerating risks of non-compliance over time. While I'm disappointed by the amount of over-the-top hype about AI and the tendency to engage in magical thinking about what it can solve, the fact remains that it's a very useful, game-changing tool when properly applied. People are going to use AI because they see its utility. The more they see other people using AI to successfully reduce workload, the greater the temptation will be to engage with it regardless of a ban. Risks that have not materialized into direct, consequential problems are not a deterrent to the average user; otherwise things like this, https://www.lawnext.com/2025/05/ai-hallucinations-strike-again-two-more-cases-where-lawyers-face-judicial-wrath-for-fake-citations.html, would not keep happening.

CIO in Services (non-Government) · 7 months ago

Simply put, no one is stopping younger-generation employees from using AI in some way, whether on work assets or their own personal assets. It is a way of life. I recently had the opportunity to mentor a group of high school students about tech, AI, the future of jobs, etc. With these young people, it isn't even a question of "do you use AI", but rather a conversation about responsible and effective use. Organizations need to embrace the future, albeit with employee guidance and training, and even restrictions on corporate IP/data and, in some cases, no use on corporate assets. However, as with many things in life, people will find a way anyway. Prepare for the future; it is tomorrow.

IT Analyst · 7 months ago

Yes - because you need time to get policy and governance published, training organised, data tagged for DLP, security updated, etc. Staff need to know they shouldn't be using it without clear guidelines, especially free tools. Nothing is secret, so don't put anything sensitive/confidential in. But we know people will use these tools anyway, and we know that we need to embrace and extend them, so it can only be a temporary block.

Information Security Analyst · a year ago

Yes - in our organization we blocked ChatGPT and other public/open AI models until we had training and monitoring resources configured. We then incorporated responsible AI use into our annual security awareness training and set up DLP policies to monitor usage. We want employees to experiment and innovate, but we also want to ensure our IP and PII are not at risk. We have since created our own generative AI service using Azure OpenAI and encourage employees to take advantage of that, Copilot, Power BI and PowerApps within our secured network rather than externally hosted tools. Finally, we have embedded AI functionality review in our vendor risk management process and are putting together an inventory of AI use cases in our organization so we can risk-rank them and incorporate periodic assessment into our existing processes.
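To make the "internally hosted rather than externally hosted" point concrete, below is a minimal sketch of how an employee-facing tool might call a company's own Azure OpenAI deployment instead of the public ChatGPT service, using the standard openai Python SDK. The endpoint, deployment name, and API version are illustrative assumptions, not details taken from the comment above.

# Minimal sketch: route a chat completion through an internally hosted
# Azure OpenAI deployment instead of a public GenAI tool.
# The endpoint, deployment name, and API version are hypothetical placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],       # stored in the org's secret manager
    api_version="2024-02-01",                          # assumed API version
    azure_endpoint="https://contoso-internal.openai.azure.com",  # hypothetical internal endpoint
)

response = client.chat.completions.create(
    model="gpt-4o-internal",  # name of the company's Azure OpenAI deployment (hypothetical)
    messages=[
        {"role": "system", "content": "You are an internal assistant. Do not request or expose PII."},
        {"role": "user", "content": "Summarize our GenAI acceptable-use policy in three bullet points."},
    ],
)

print(response.choices[0].message.content)

Keeping the endpoint inside the organization's tenant is what lets the DLP and monitoring controls mentioned above apply to GenAI traffic in the same way they apply to other sanctioned services.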

