Any tips on safely using AI for test generation? How do you avoid potential security or quality risks?
Sort by:
As I say, I use my I (eyes) and my I (Intelligence) before and after the AI!
The Risk Management will need:
1. Do not fully rely on AI - thoroughly review and refine AI's output. In this case, I will have to define full scenario (starting from Business Requirements to Design to the Final Product to the AI agent) - basically, my full knowledge dump to the AI agent.
2. Use multiple AI agents independently and make the best out of them as input to your thorough reviews.
3. Best is, to share 1 with the AI Agent along with your Test Plan, Test Cases and seek its views/ inputs/ refinements.
Hope this helps.
As a data scientist, I'd say, trying to address (statistical) design of experiment approaches firstly (before AI based test generation) and then comparing the variety of testcases of both attempts including the probabilities of detecting failures to have at least one benchmark by the DoE cases.
Whether its Code Generation or test generation, gates need to be built in for overview. For AI generated test , the tester should be reviewing the test and maybe even editing it for perfect use.
Also we have a mandatory code review for all code or tests so we now have 2 pairs of eyes.
Follow are a some initial thoughts:
-Operationalize governance mechanisms to address assurance concerns, including quality and security.
-Ensure that all initial AI lifecycle phases (inception, elaboration, construction, etc.) involve human oversight and accountability for outcomes, regardless of the tools employed.
AI is used for test case generation. Prompts written are thoroughly reviewed for "prompt injection" or "jailbreak" intention. After that test case review is done by humans to ensure we avoid potential risks.