How prescriptive should guidance on AI coding assistants be? Should developers have more flexibility, or should software leaders set firm guidelines?
The responsibility lies with both developers and the organization. Companies need to set guidelines on coding standards and testing for AI-generated code, and developers need to follow them. AI is here to assist developers, not replace them; ensuring it is used properly, within those guidelines, is everyone's responsibility.
AI coding assistants are here to assist, not to take over our jobs. The responsibility lies with developers, who must balance freedom with clear rules. In the financial industry, we don't use publicly available AI tools; we rely on in-house models built on our own datasets. We encourage peer reviews to ensure code quality, but bugs can still slip through, so manual reviews and security checks remain essential to allow innovation within safe boundaries.
It depends on the organization and its needs. In a smaller startup like mine, where we handle legal information but not highly regulated data, we allow more flexibility. My guidance is that developers must understand every bit of code they check in, whether it was suggested by AI or written by hand; they are responsible for their software. In MedTech or FinTech companies, however, I would be much more prescriptive. Developers should also be cautious about sharing potentially sensitive information with AI tools. They need to understand their responsibility for the code, and organizations need to provide the guidelines that allow AI tools to be used effectively and securely.
Accountability is crucial, and developers and leaders share it. Developers should follow the established guidelines when using AI assistants so that productivity gains don't compromise security. Leaders need to regulate the use of these tools to maintain the balance between flexibility and compliance.