In light of possible security and accuracy issues with Amazon’s Q, how do you feel about it as an enterprise-level tool? Is it still something you’d consider adding to your stack?
Anonymous, 2 years ago
We have successfully implemented RAG in AWS using LLM, langchain, AWS Kendra and streamlit. This is used by internal teams.
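The flow of a setup like that can be sketched as a retrieve-then-generate loop. This is an illustrative stand-in, not our production code: the Kendra query and the LLM call are stubbed with plain Python so the shape of the pattern is visible; in the real stack those two functions would be an `AmazonKendraRetriever` and an LLM invocation wired together with langchain, with streamlit on top.

```python
# Minimal RAG sketch: retrieve relevant documents, then augment the
# prompt with them before calling the LLM. Both the retriever and the
# LLM call are stubbed here for illustration.

def retrieve(query, documents, k=2):
    """Stand-in for a Kendra query: rank docs by keyword overlap."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, context_docs):
    """Augment the user question with retrieved context (the 'A' in RAG)."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer using only this context:\n"
        f"{context}\n\nQuestion: {query}"
    )

# Toy internal-docs corpus; a real deployment would index these in Kendra.
docs = [
    "Expense reports are due by the 5th of each month.",
    "VPN access requires an approved hardware token.",
    "The cafeteria is open from 8am to 3pm.",
]

query = "When are expense reports due?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # this prompt would then be sent to the LLM
```

Grounding the model in retrieved internal documents like this is exactly what keeps the accuracy issues in check: the LLM summarizes what the retriever found rather than answering from its own weights.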

I would approach Q (and any other GenAI tool) as a potential component in a larger application. For instance, internal enterprise search built on a RAG architecture involves multiple tools that have to be tied together. Done well, that mitigates the accuracy issues and keeps the security issues more controlled.
So yes, I would consider it as part of the stack, but I would set expectations that the technology is not intended to be rolled out on its own as a general-purpose tool.
I would also not do so without an AI governance process in place that evaluates the AI-specific risks during architecture and design reviews.