I am interested in hearing from people who have tried generative AI coding projects. In your experience, are there project attributes that make generative AI coding a "good fit"? Are there attributes that are likely indicators generative AI will not produce something useful? I would think complexity of requirements is a big factor: if the complexity of the project or its requirements is low, I assume genAI coding can get you in the ballpark. For instance, where business logic lives: if your project's critical business logic is spread all over the place rather than centralized, I suspect generative AI would not be able to make sufficient sense of it.
Hi Dan, I don't consider myself a super expert as I do not have developers directly under me, but I am one of the many leaders trying to figure out the cost of these tools versus the productivity they deliver. Here is a summary of my notes from conversations with a few very knowledgeable professors in the field:
When generative AI coding is a good fit:
- Well-known patterns and boilerplate: Copilots are "extremely good and reliable" at widely taught, standard code structures (canonical algorithms, idiomatic scaffolding, routine functions), likely because the models were trained on abundant, high-quality examples; see the sketch after this list.
- Education, explanation, and tutoring: They excel at correcting code, generating examples, and explaining structures, making them strong for onboarding, learning, and clarifying conventional patterns in existing codebases.
- Accelerating capable developers and enabling non-programmers: They lower the barrier for non-CS professionals to produce small pieces of working code from high-level instructions, and they give pros a productivity boost on routine tasks.
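As a concrete illustration of the "well-known patterns" point above, here is the kind of canonical, widely taught function copilots tend to get right on the first try (a hypothetical example of mine, not one from the conversations), precisely because training data contains countless near-identical copies:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    A textbook pattern: copilots reliably reproduce this kind of
    canonical algorithm because training data is full of it.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```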
When generative AI coding is a poor fit (or needs strong guardrails):
- Integrations and orchestration: Today's copilots are still unreliable at configuring multiple services, coordinating large workloads, and wiring complex systems together; these remain areas where advanced developer expertise is essential.
- Cross-module/system reasoning: Many defects are not obvious from simply running the code; generated code still needs human reading and checking to catch subtle bugs or mismatched assumptions between modules.
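To make that cross-module point concrete, here is a toy sketch of mine (the module names, field names, and amounts are all invented) of the kind of defect that runs cleanly but is still wrong:

```python
# billing.py (hypothetical): stores money as integer cents.
def invoice_total_cents(line_items):
    """Sum line-item prices, where each item carries 'price_cents'."""
    return sum(item["price_cents"] for item in line_items)

# report.py (hypothetical): a generated helper that silently assumes the
# total is already in dollars. It runs without error and prints a number,
# but the figure is off by a factor of 100; only a human reading both
# modules (or a unit test) catches the mismatched assumption.
def format_revenue(total):
    return f"${total:,.2f}"

items = [{"price_cents": 1999}, {"price_cents": 500}]
print(format_revenue(invoice_total_cents(items)))  # "$2,499.00", not the intended "$24.99"
```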
Expectation management on output share: Headlines claiming that ~40% of code is now written by AI are overstated; a more realistic figure today might be closer to 10-15%, because much modern software work is system integration rather than writing fresh low-level functions.
Talent substitution: Organizations should not let go of their skilled software developers—you still need experts to make software actually work in production and to own system‑level decisions.
Productivity: eventually, especially for a large team, even a 10-15% productivity boost for skilled developers could translate into either a 10-15% boost in business output or a 10-15% reduction in workforce. It will be interesting to see how expensive genAI tools become if they really deliver those savings for organizations.
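To put rough numbers on that productivity point, here is a back-of-envelope sketch; the team size, salary, and boost range are assumptions for illustration, not figures from the conversations:

```python
# Break-even tool price, with made-up inputs: 100 developers at a
# fully loaded cost of $150k/year each, and a 10-15% productivity gain.
developers = 100
cost_per_dev = 150_000  # USD per year, fully loaded (assumed)
for boost in (0.10, 0.15):
    value = developers * cost_per_dev * boost  # labor value recovered per year
    per_seat = value / developers              # break-even tool price per seat
    print(f"{boost:.0%} boost -> ${value:,.0f}/year recovered, "
          f"break-even tool price about ${per_seat:,.0f}/seat/year")
```

On those assumed numbers, a 10% boost is worth about $1.5M/year to the organization, so vendors could in principle charge up to ~$1,500 per seat per year before the tooling costs more than it saves.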
I am always happy to learn more and to get valuable insights from experts in the field, so please feel free to challenge my assumptions.

You're spot-on about complexity being the decisive factor. GenAI coding tools excel at problems with clear patterns in their training data. They struggle when critical business logic is scattered across a codebase, exactly as you suspected :), because context window limitations mean the model can't track architectural decisions made elsewhere. The breaking point isn't just complexity; it's contextual dependencies.
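To illustrate, here is a toy contrast (all names and the discount rule are hypothetical) between scattered and centralized business logic, and why the former defeats a context-window-limited assistant:

```python
# Scattered (hypothetical): the same membership-discount rule is re-derived
# in two unrelated files. An assistant shown only checkout.py has no way to
# know emails.py exists, so a rule change it makes there silently diverges.

# checkout.py
def checkout_price(price, is_member):
    return price * 0.9 if is_member else price

# emails.py
def receipt_line(price, is_member):
    return f"Total: {price * 0.9 if is_member else price:.2f}"

# Centralized: one function owns the rule, giving both humans and the
# assistant a single place to read, test, and change it.
def member_discount(price, is_member):
    """Single source of truth for the membership discount rule."""
    return price * 0.9 if is_member else price
```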