In light of possible security and accuracy issues with Amazon’s Q, how do you feel about it as an enterprise-level tool? Is it still something you’d consider adding to your stack?

7.8k views · 2 Comments
Chief Data Officer · 2 years ago

I would approach Q (and any other GenAI tool) as a potential component in a larger application. For instance, internal enterprise search built on a RAG architecture involves multiple tools that have to be tied together. If that is done well, the accuracy issues should be mitigated and the security issues more controlled.
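As a rough, hypothetical sketch of what "tied together" means here, the flow below uses a toy in-memory retriever and a stubbed generation call; the real retriever (an enterprise search index) and the LLM (Amazon Q, Bedrock, or otherwise) would be swapped in behind the same interfaces:

```python
# Minimal, self-contained RAG sketch: retrieve -> build grounded prompt -> generate.
# The retriever and the generate() call are placeholders, not Amazon Q's actual APIs.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], top_k: int = 3) -> list[Document]:
    """Naive keyword-overlap scoring, standing in for a real search/retrieval service."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(doc.text.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_prompt(query: str, context_docs: list[Document]) -> str:
    """Ground the answer in retrieved context to limit hallucination."""
    context = "\n\n".join(f"[{d.doc_id}] {d.text}" for d in context_docs)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def generate(prompt: str) -> str:
    """Placeholder for the actual LLM call."""
    return f"<model response to a {len(prompt)}-character grounded prompt>"


if __name__ == "__main__":
    corpus = [
        Document("hr-001", "Employees accrue 20 vacation days per year."),
        Document("it-014", "VPN access requires MFA enrollment."),
    ]
    question = "How many vacation days do employees get?"
    print(generate(build_prompt(question, retrieve(question, corpus))))
```

The point is architectural: the model only answers over retrieved, access-controlled context, which is where most of the accuracy and security mitigation happens.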

So yes, I would consider it as part of the stack, but I would set expectations that the technology is not intended to be rolled out on its own as a general-purpose tool.

I would also not do so without an AI governance process in place that evaluates the AI-specific risks during the architecture phase.

1 Reply
No title · 2 years ago

We have successfully implemented RAG on AWS using an LLM, LangChain, Amazon Kendra, and Streamlit. It is used by internal teams.
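For reference, a minimal sketch of what that kind of stack can look like, assuming LangChain's AmazonKendraRetriever, a Bedrock-hosted model, and a Streamlit front end; the index ID, model ID, and region below are placeholders:

```python
# Hypothetical Kendra-backed RAG app behind a Streamlit UI.
# IDs and region are placeholders; exact imports vary by LangChain version.

import streamlit as st
from langchain.chains import RetrievalQA
from langchain_community.llms import Bedrock
from langchain_community.retrievers import AmazonKendraRetriever

# Pull candidate passages from the enterprise Kendra index (placeholder ID).
retriever = AmazonKendraRetriever(index_id="YOUR-KENDRA-INDEX-ID", top_k=5)

# Any Bedrock-hosted model could sit here; the model_id is illustrative.
llm = Bedrock(model_id="anthropic.claude-v2", region_name="us-east-1")

# "Stuff" the retrieved passages into the prompt and generate a grounded answer.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)

st.title("Internal knowledge search (RAG)")
question = st.text_input("Ask a question about internal documentation")

if question:
    with st.spinner("Searching and generating..."):
        result = qa_chain.invoke({"query": question})
    st.write(result["result"])
    with st.expander("Sources"):
        for doc in result["source_documents"]:
            st.write(doc.metadata.get("source", "unknown"), ":", doc.page_content[:200])
```

Run locally with streamlit run app.py.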
