In light of possible security and accuracy issues with Amazon’s Q, how do you feel about it as an enterprise-level tool? Is it still something you’d consider adding to your stack?

Chief Data Officer, 2 years ago

I would approach Q (and any other GenAI tool) as a potential component in a larger application. For instance, internal enterprise search built on a RAG architecture involves multiple tools that have to be tied together. If done well, the accuracy issues should be mitigated and the security issues more tightly controlled.
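To make the "multiple tools tied together" point concrete, here is a minimal, illustrative-only sketch of that RAG pattern: retrieval, prompt assembly, and generation are separate pieces you wire together. The two helper functions are stand-ins I've made up for this example, not real APIs.

```python
# Illustrative RAG flow: retrieve internal passages, ground the prompt in them,
# then call a hosted model. Both helpers below are hypothetical stand-ins.

def search_internal_index(query: str) -> list[str]:
    # Stand-in for an enterprise search or vector-index lookup.
    return ["Doc snippet A about the topic.", "Doc snippet B about the topic."]

def call_llm(prompt: str) -> str:
    # Stand-in for a call to whichever hosted model the stack uses.
    return "Answer grounded in the retrieved snippets."

def answer(query: str) -> str:
    # Grounding the model in retrieved passages is what mitigates accuracy issues.
    passages = search_internal_index(query)
    prompt = (
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        "Context:\n" + "\n".join(passages) + "\n\nQuestion: " + query
    )
    return call_llm(prompt)

print(answer("What is our travel expense policy?"))
```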

So yes, I would consider it as part of the stack, but I would set the expectation that the technology is not intended to be rolled out on its own as a general-purpose tool.

I would also not do so without an AI governance process in place to evaluate the AI-specific risks during the architecture process.

1 Reply
2 years ago

We have successfully implemented RAG in AWS using an LLM, LangChain, AWS Kendra, and Streamlit. It is used by internal teams.
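For anyone wanting to reproduce that stack, a minimal sketch is below. It assumes a LangChain version that ships the langchain_community integrations and access to Amazon Bedrock; the Kendra index ID, region, and model ID are placeholders, not values from this post.

```python
# Sketch of the stack described above: LangChain + AWS Kendra retriever +
# a Bedrock-hosted LLM, wrapped in a small Streamlit app for internal users.
import streamlit as st
from langchain_community.retrievers import AmazonKendraRetriever
from langchain_community.llms import Bedrock
from langchain.chains import RetrievalQA

# Retrieve passages from an existing Kendra index (placeholder ID and region).
retriever = AmazonKendraRetriever(
    index_id="00000000-0000-0000-0000-000000000000",
    region_name="us-east-1",
    top_k=3,
)

# Any LangChain-compatible LLM works here; Bedrock is one option (placeholder model ID).
llm = Bedrock(model_id="anthropic.claude-v2", region_name="us-east-1")

# "Stuff" the retrieved passages into the prompt and answer only from them.
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=retriever, chain_type="stuff")

# Minimal Streamlit front end.
st.title("Internal enterprise search")
question = st.text_input("Ask a question about internal docs")
if question:
    result = qa_chain.invoke({"query": question})
    st.write(result["result"])
```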
