If you’re using AI chatbots in a regulated industry (like healthcare or banking), have your end-users shared any discomfort with using them or distrust of their output?

Chief Data Officer in Media, a year ago

I have heard both concerns from multiple clients. Building small (100M–1B parameter) language models that run on low-cost hardware works very well. Developing a single platform where all ML and AI tools are available helps keep shadow-tool usage to a minimum.
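For illustration, here is a minimal sketch of what "a small model on low-cost hardware" can look like in practice, assuming a roughly 1B-parameter open model such as TinyLlama/TinyLlama-1.1B-Chat-v1.0 served locally through the Hugging Face transformers library; the original comment does not name a specific model or stack, so these are stand-in choices:

```python
# Minimal local-inference sketch: a ~1B-parameter model running on CPU or a
# modest GPU, so prompts and responses stay on the organization's own hardware.
# Assumes: pip install transformers torch
from transformers import pipeline

# TinyLlama is used purely as an example of a ~1B-parameter open model;
# any similarly sized model could be substituted.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

prompt = "Summarize our data-handling policy for a customer in two sentences:"
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```

Keeping inference on-premises like this is one way to address the data-residency worries that feed end-user distrust in regulated settings.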

Global Chief Cybersecurity Strategist & CISO in Healthcare and Biotech, a year ago

Yes, end-users have expressed discomfort and distrust of AI chatbots across industries, and with good reason! Concerns often stem from data breaches and inaccurate responses. It’s crucial to address these issues by implementing strong data security measures, clearly communicating them to users, ensuring response accuracy, seeking feedback, and being transparent about data handling practices.

Senior Director of Technology in Software, a year ago

We are using an AI chatbot for our feedback messages. The bot understands the customer's response and, based on its tonality, starts the conversation.

We don't recommend any medicines or address health-related issues on chat yet, but we plan to venture into that soon.
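As a rough illustration of tonality-driven conversation starts, here is a minimal sketch that uses an off-the-shelf sentiment classifier to choose the bot's opening message; the commenter's actual implementation is not described, so the model, labels, and threshold below are assumptions:

```python
# Sketch: pick the opening message of a feedback conversation based on sentiment.
# Assumes: pip install transformers torch; the default sentiment-analysis
# pipeline (DistilBERT fine-tuned on SST-2) returns POSITIVE/NEGATIVE labels.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def opening_message(customer_feedback: str) -> str:
    """Choose how the bot starts the conversation based on feedback tonality."""
    result = classifier(customer_feedback)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.8:
        return "Sorry to hear that. Could you tell us more about what went wrong?"
    return "Thanks for the feedback! Is there anything we could do even better?"

print(opening_message("The delivery was late and nobody responded to my emails."))
```

Branching on a confidence threshold rather than the raw label alone is one simple way to avoid over-reacting to ambiguous feedback.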
