In light of possible security and accuracy issues with Amazon’s Q, how do you feel about it as an enterprise-level tool? Is it still something you’d consider adding to your stack?

Chief Data Officer · 2 years ago

I would approach Q (and any other GenAI tool) as a potential component in a larger application. For instance, internal enterprise search built on a RAG architecture involves multiple tools that have to be tied together. If done well, the accuracy issues should be mitigated and the security issues better controlled.
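
As an illustration of that framing, here is a schematic sketch of a GenAI model treated as one replaceable component in a RAG-style search application. The interfaces are hypothetical, not Amazon Q's actual API:

```python
# Schematic sketch (hypothetical interfaces, not a real API) of treating a
# GenAI service such as Q as one swappable component in a larger application.
from dataclasses import dataclass
from typing import Protocol


class Retriever(Protocol):
    """Fetches approved internal content relevant to a query."""
    def search(self, query: str) -> list[str]: ...


class GenerativeModel(Protocol):
    """Could be backed by Amazon Q, Bedrock, or another provider."""
    def answer(self, question: str, context: list[str]) -> str: ...


@dataclass
class EnterpriseSearch:
    retriever: Retriever       # grounds answers in vetted internal documents
    model: GenerativeModel     # the swappable GenAI piece

    def ask(self, question: str) -> str:
        # Retrieval first, so generation is constrained to retrieved context,
        # which is what mitigates the accuracy concerns mentioned above.
        context = self.retriever.search(question)
        return self.model.answer(question, context)
```

With this shape, swapping the model is a configuration decision rather than a rearchitecture, which is also where a governance review can evaluate the AI-specific risks.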

So yes, I would consider it as part of the stack, but I would set expectations that the technology is not intended to be rolled out on its own as a general-purpose tool.

I would also not do so without an AI governance process in place that evaluates the AI-specific risks during the architecture process.

1 Reply
2 years ago

We have successfully implemented RAG on AWS using an LLM, LangChain, AWS Kendra, and Streamlit. It is used by internal teams.
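
For context, a minimal sketch of how such a pipeline can be wired with those libraries. The Kendra index ID, AWS region, and the choice of a Bedrock-hosted model are placeholders, not details from the reply above:

```python
# Sketch of a Kendra-backed RAG app: LangChain ties the pieces together,
# Streamlit provides the internal UI. IDs and region are placeholders.
import streamlit as st
from langchain.chains import RetrievalQA
from langchain_community.retrievers import AmazonKendraRetriever
from langchain_community.llms import Bedrock

# Retriever: pulls the top-k passages from the enterprise Kendra index.
retriever = AmazonKendraRetriever(
    index_id="YOUR-KENDRA-INDEX-ID",  # placeholder
    region_name="us-east-1",          # placeholder
    top_k=3,
)

# LLM: any Bedrock-hosted model could sit here; this model ID is an assumption.
llm = Bedrock(model_id="anthropic.claude-v2", region_name="us-east-1")

# RAG chain: retrieved passages are stuffed into the prompt before generation.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm, retriever=retriever, chain_type="stuff"
)

# Streamlit front end for internal teams.
st.title("Internal Enterprise Search")
question = st.text_input("Ask a question about internal documents")
if question:
    result = qa_chain.invoke({"query": question})
    st.write(result["result"])
```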
