What type of resistance are you seeing (if any) to your efforts to roll out agentic AI? Is this causing you to second-guess any rollouts?

VP of Marketing in Software, 4 days ago

I work in Marketing for a cloud collaboration and governance tool provider. In my experience, setting aside rogue shadow AI, most resistance comes from IT departments. The reasons they give:

- Agentic AI tools rarely offer enterprise-grade management capabilities, so even keeping an overview of all the agents deployed across an organization is tricky.
- Most agentic AI services (and GenAI services, for that matter) lack basic reporting and governance functionality, making it almost impossible to control security-related aspects of AI, such as access to information.
- Last but certainly not least, cost is a huge concern. If properly embedded in IT, most AI services have to be run as corporate accounts, and every user comes with their own license. Pay-per-use models add the risk of runaway costs (think of high-workload AI agents being shared with the entire organization). Once rolled out, adoption spikes but then declines again, and unused licenses can quickly become a financial burden; a rough cost sketch follows this list.
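
To make that last point concrete, here is a minimal back-of-the-envelope sketch (all numbers are hypothetical assumptions, not figures from the comment) of how unused seats keep costing money after the post-rollout adoption spike fades:

```python
# Hypothetical per-seat licensing scenario: seats are bought up front,
# but active usage declines after the initial adoption spike.

SEAT_PRICE = 30   # assumed $ per user per month for a corporate AI license
SEATS = 500       # assumed number of seats purchased at rollout

# Assumed active-user curve: a spike, then a steady decline
active_users_by_month = [450, 380, 260, 180, 150, 140]

for month, active in enumerate(active_users_by_month, start=1):
    total_cost = SEAT_PRICE * SEATS              # paid regardless of usage
    wasted = SEAT_PRICE * (SEATS - active)       # spend covering unused seats
    print(f"Month {month}: paid ${total_cost:,}, "
          f"~${wasted:,} of that covers unused seats")
```

Under these assumptions, roughly two-thirds of the monthly spend ends up covering idle seats by month six, which is the kind of runaway-cost pattern IT tends to push back on.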

Director, Analyst Relations in Software, 4 months ago

I think most vendors are seeing some resistance, but it's not surprising, and in many ways, healthy. The hesitation is less about the concept of agentic AI itself and more about concerns around security, data governance, and long-term sustainability. IT and security leaders are asking tougher, smarter questions, such as:

Can we trust this AI to act autonomously? How is it secured? What happens if it makes a mistake or a decision that violates compliance frameworks?

This level of scrutiny is entirely warranted, especially in highly regulated or risk-averse environments. It is actually prompting most organizations to double down on due diligence, ecosystem fit, and vendor transparency rather than second-guess their strategy. Focus on AI partners that go beyond core capabilities: ones that offer robust support, a clear roadmap, ethical AI commitments, and strong integration with your existing software stack.

Rather than scaling back, be more strategic and deliberate, ensuring you make safe, sustainable AI investments that align with both short-term goals and long-term risk posture. This approach helps teams build trust across the organization while rolling out agentic AI in a way that’s responsible and resilient.

Some very good webinars on this topic have been shared across LinkedIn recently.

Director of Supply Chain, 4 months ago

We're focusing our AI workflow development on existing data entry-type tasks, which our team appreciates, because those are not usually the most enjoyable parts of their job. In addition, we use AI-infused workflows to tackle things we haven't previously had the resources to accomplish.

I have a team of marketing agents that I've built as a side project to assist with campaign development and content generation. Just this week, one of my team members requested that we engage the AI team to see what they come up with for an upcoming campaign. I was impressed that a team member actively sought out this potential insight and showed a willingness to experiment.

In my experience, when you present these tools from an empowerment standpoint, they are more readily embraced, even though the unknown can still be intimidating. It also helps to work at an organization that values human enterprise and is not looking to replace people with AI, but rather to augment them so they can create the most value.
