Can anyone recommend application security best practices for generative AI tools?
VP Information Security Assurance, 10,001+ employees
There are the following dimensions in my mind (several are illustrated with sketches after this list):
1) Controls, such as:
a) Input validation (while maintaining the spirit of natural language). You don't want your LLMs to crash or to elevate privileges.
b) Ensure relevant privacy conditions are built in, especially when the model stores questions as input for its future learning. While the user will appreciate the results the LLM returns, he or she may need his or her own data anonymized in those results.
c) Boundary conditions, so that queries or their results don't overwhelm the environment and make the service unavailable.
2) "Intelligence in response," so that the LLM is not fooled into providing responses that work against its own protection. For example, "how to hack LLMs" may get no result, but a question like "is there a current weakness that the LLM is self-healing?" might, giving away important reconnaissance.
3) The LLM itself may need to be protected from tampering, for example through immutable logs.
4) Protect the knowledge set from which it currently responds based on incremental context learning, so that it doesn't get poisoned.
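To make 1a and 1c concrete, here is a minimal Python sketch of a validation-and-throttling wrapper placed in front of a model call. The length cap, the per-minute budget, and the call_llm client are hypothetical placeholders, not recommended values or a real API.

```python
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 4000          # hypothetical cap: one oversized query should not overwhelm the service
MAX_REQUESTS_PER_MINUTE = 20     # hypothetical per-user budget to keep the service available

_request_log = defaultdict(deque)  # user_id -> timestamps of recent requests

def call_llm(prompt: str) -> str:
    # Stand-in for your actual model client (e.g. an HTTP API call)
    return f"[model response to {len(prompt)} chars of input]"

def validate_prompt(prompt: str) -> str:
    """Basic input validation that keeps the natural-language spirit of the prompt intact."""
    if not prompt.strip():
        raise ValueError("Empty prompt")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("Prompt exceeds maximum length")
    # Drop control characters, which have no place in natural-language input
    return "".join(ch for ch in prompt if ch.isprintable() or ch in "\n\t")

def check_rate_limit(user_id: str) -> None:
    """Reject the request if this user has exhausted the per-minute budget."""
    now = time.monotonic()
    window = _request_log[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("Rate limit exceeded; try again later")
    window.append(now)

def safe_query(user_id: str, prompt: str) -> str:
    check_rate_limit(user_id)
    return call_llm(validate_prompt(prompt))
```

The point of the design is that both checks fail closed: a request that is too large or too frequent never reaches the model.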
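For 1b, one common pattern is to redact obvious personal data before a prompt is persisted or reused for learning. This is a rough sketch using regular expressions; the patterns are illustrative only, and a real deployment would use a dedicated PII-detection service.

```python
import re

# Illustrative patterns only; production systems should use a proper PII-detection tool
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace recognizable personal data with placeholder tokens before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

# What gets persisted for future learning never contains the raw identifiers
stored = anonymize("Contact me at jane.doe@example.com or +1 (555) 123-4567")
print(stored)  # Contact me at <EMAIL> or <PHONE>
```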
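Point 3 can be approximated with hash-chained, tamper-evident logging: each entry commits to the hash of the previous one, so editing any earlier entry breaks the chain on verification. A toy sketch, not a substitute for genuine write-once storage:

```python
import hashlib
import json
import time

def append_entry(log: list, event: dict) -> None:
    """Append an entry that commits to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify_chain(log: list) -> bool:
    """Recompute every hash; tampering with any earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"action": "query", "user": "alice"})
append_entry(log, {"action": "response", "tokens": 120})
assert verify_chain(log)
```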
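And for point 4, a simple first defense is to gate what enters the knowledge set at all. A minimal sketch with a hypothetical source allow-list; real ingestion pipelines would add human review, provenance metadata, and anomaly detection on top of this:

```python
import hashlib

TRUSTED_SOURCES = {"internal-wiki", "policy-repo"}  # hypothetical allow-list of vetted origins

def ingest_document(corpus: dict, source: str, doc_id: str, text: str) -> bool:
    """Admit context only from vetted sources, recording a checksum for later audit."""
    if source not in TRUSTED_SOURCES:
        return False  # untrusted input never reaches the knowledge set
    corpus[doc_id] = {
        "source": source,
        "text": text,
        "sha256": hashlib.sha256(text.encode()).hexdigest(),  # detect silent modification later
    }
    return True
```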
OWASP has a reference for LLMs: the OWASP Top 10 for Large Language Model Applications (OWASP Foundation). Please check it out.
Keen to learn your perspectives when possible, please.
CTO in Consumer Goods, 11 - 50 employees
Human in the loop. Exercise a lot of caution around agent applications that have integrations beyond information retrieval, e.g. database updates, scoring, API updates, etc.
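That caution can be enforced structurally: let read-only tools run freely, and route anything with side effects through an explicit approval step. A minimal Python sketch; the tool names and the dispatcher are hypothetical:

```python
# Deny-by-default gate: read-only tools run freely, side-effecting tools wait for a human.
READ_ONLY_TOOLS = {"search_docs", "fetch_record"}      # hypothetical tool names
SIDE_EFFECT_TOOLS = {"update_database", "call_api"}    # hypothetical tool names

def run_tool(name: str, args: dict) -> str:
    # Stand-in for the real tool dispatcher
    return f"executed {name} with {args}"

def execute_tool(name: str, args: dict, approver=input) -> str:
    if name in READ_ONLY_TOOLS:
        return run_tool(name, args)
    if name in SIDE_EFFECT_TOOLS:
        answer = approver(f"Agent wants to run {name} with {args}. Approve? [y/N] ")
        if answer.strip().lower() == "y":
            return run_tool(name, args)
        return "Action rejected by human reviewer"
    return "Unknown tool refused by default"
```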
I agree that the rules of engagement do not change with AI in the picture; you still have to follow the same DevSecOps practices and apply the same level of due diligence to ensure your code does what it is designed to do, including performing the needed threat modeling and design reviews, and keeping transparency and accountability in mind with Privacy by Design and Security by Design.