How often do your engineering sprint cycles hit expected release timelines?
Less than 50% of the time: 24%
50–74% of the time: 57%
75–90% of the time: 14%
Over 90% of the time: 3%
722 PARTICIPANTS
Content you might like
AI Coding Assistants: Velocity or Vulnerability?
Latest research reveals a hidden feedback loop threatening software security:
The Evidence:
• ≈33% of Copilot suggestions echo CWE Top-25 flaws (Asare et al., 2024)
• ~50% of LLM code snippets contain exploitable bugs (CSET, 2024)
• Developers with AI help feel more confident while shipping less secure code (Stanford, 2023)
The Systemic Risk: Flawed AI output is pushed to public repositories, polluting the next model's training data—risk compounding with every release cycle.
What leading teams are doing this quarter:
1. Audit AI-tool usage and establish approved lists
2. Integrate AI-aware SAST (e.g., Snyk Code, Semgrep Assistant) into IDEs and CI/CD pipelines
3. Adopt OWASP LLM Top 10 + MAESTRO for threat modeling
4. Track the percentage of AI-generated code and its defect rates (a starter sketch follows this list)
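For item 4, here is a minimal sketch of the bookkeeping, assuming a hypothetical labeling convention (e.g., an "AI-Assisted" commit trailer or PR tag) and that bug tickets can be traced back to the commits that introduced them; neither convention is a standard, so adapt to your own tooling:

```python
"""Sketch: estimate the share of AI-generated code and its defect rate
from commit metadata. The labeling scheme and field names are
hypothetical placeholders, not an established standard."""

from dataclasses import dataclass

@dataclass
class Commit:
    lines_changed: int
    ai_assisted: bool      # set by your labeling convention (assumed to exist)
    defects_linked: int    # bug tickets later traced back to this commit

def ai_code_metrics(commits: list[Commit]) -> dict[str, float]:
    ai = [c for c in commits if c.ai_assisted]
    human = [c for c in commits if not c.ai_assisted]
    total_lines = sum(c.lines_changed for c in commits) or 1

    def defects_per_kloc(group: list[Commit]) -> float:
        lines = sum(c.lines_changed for c in group)
        defects = sum(c.defects_linked for c in group)
        return defects / lines * 1000 if lines else 0.0

    return {
        "ai_share_of_changes": sum(c.lines_changed for c in ai) / total_lines,
        "ai_defects_per_kloc": defects_per_kloc(ai),
        "human_defects_per_kloc": defects_per_kloc(human),
    }

if __name__ == "__main__":
    history = [
        Commit(lines_changed=400, ai_assisted=True, defects_linked=3),
        Commit(lines_changed=600, ai_assisted=False, defects_linked=2),
    ]
    print(ai_code_metrics(history))
```

Comparing defects per KLOC changed between the two groups gives a first-order read on whether AI-assisted changes are accumulating debt faster than human-written ones.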
Question: How are you measuring AI-generated technical debt today, and what is your plan to stop it from becoming tomorrow's supply-chain crisis?
Share metrics or tools that worked for you.
Has your software team achieved a positive ROI with AI coding tools yet? (Bonus: share how you’re measuring this in the comments!)
Yes
No
Not sure, we’re not measuring this: 100%
N/A, we’re not using AI coding tools
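For teams in the "not measuring" bucket, even a crude model beats nothing. Below is a minimal sketch that weighs license cost against the value of developer time saved; all four inputs are placeholder assumptions to be replaced with your own telemetry, not measured results:

```python
def ai_tool_roi(seats: int, license_cost_per_seat: float,
                hours_saved_per_dev_per_month: float,
                loaded_hourly_rate: float) -> float:
    """Monthly ROI of an AI coding tool: (value of time saved - cost) / cost.
    Every input here is an assumption to be replaced with measured data."""
    cost = seats * license_cost_per_seat
    value = seats * hours_saved_per_dev_per_month * loaded_hourly_rate
    return (value - cost) / cost

# Example with placeholder numbers: 50 seats at $19/month,
# 2 hours saved per developer per month, $100/hour loaded rate.
print(f"ROI: {ai_tool_roi(50, 19.0, 2.0, 100.0):.0%}")
```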
This is a sad state of affairs, in my opinion, though I think the responses do reflect the current state. I still see most organizations doing agile rather than being agile.
I consider a good number for my teams to be 105%. I have had teams that have hit this rate for years on end. Where this isn't happening, the focus should almost always be on management rather than on the teams.
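For concreteness, one plausible reading of a figure like 105% is average delivered-versus-committed scope per sprint; that interpretation is an assumption, since the commenter doesn't define the metric. A minimal sketch of the arithmetic:

```python
def commitment_hit_rate(committed: list[float], delivered: list[float]) -> float:
    """Average delivered-vs-committed ratio across sprints.
    A rate of 1.05 means the team delivered 5% more than it committed."""
    ratios = [d / c for c, d in zip(committed, delivered) if c > 0]
    return sum(ratios) / len(ratios)

# Example: three sprints of committed vs. delivered story points,
# averaging out to roughly 105%.
print(f"{commitment_hit_rate([40, 38, 42], [42, 40, 44]):.0%}")
```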