I am interested in hearing from people who have tried generative AI coding projects. In your experience, are there project attributes that make generative AI coding capabilities a "good fit"? Are there project attributes that are likely indicators that generative AI will not produce something useful? I would think complexity of requirements is a big factor: if the complexity of the project or its requirements is low, I assume genAI coding can get you within the ballpark. For instance, where the logic lives: if your project has critical business logic spread out all over the place rather than centralized, I suspect generative AI would not be able to make sufficient sense of it.

2.5k views · 5 Comments
VP of AI Innovation in Software · 9 days ago

This is not about project specifics or complexity of requirements.

Essentially, a paradigm shift is required: from producing deliverables based on human-friendly descriptions (legacy artifacts such as diagrams, BRDs, etc.) to machine-optimized ones. The prompt is almighty; a complex, intricately defined structure of inferences rules it all. And not only is everything prompt-to-prompt, the flow itself is handled by MCP, going from system to system directly, with humans in a supervisory role and agents in full-on execution.

GenAI is inherently hampered by the fact that it is, for all intents and purposes, a glorified autocomplete. Yet when you ask it the right questions, you get the right answers. And when you set up and manage the context correctly, hallucinations (another inherent trait of models designed with assistance in mind) become a minor impact that can be buffered.

Last but not least: consider engineering commoditized. The codebase, the infrastructure (IaC, obviously), everything is disposable. The only things that persist are a product's UVP and/or its enabling capabilities. These evolve, but remain. Everything else is regenerated continuously, so you basically live in one massive repository for everything, from prompts to outcomes.

Senior Vice President, Engineering in Software · 24 days ago

We have tried vibe coding a lot, and from our experience I can tell you that spec-based coding with a tool/IDE like KIRO is a relief. You can do much of what you mention in your post efficiently and accurately if you write the right spec. Try it out once; happy to chat more about it.

Director of Marketing in IT Services · 24 days ago

We are implementing a multi-layered strategy for leveraging Generative AI and automation technologies to drive business outcomes. Our current stack includes:

- Code-Gen AI for Developer Velocity: We are deploying AI coding assistants, including Cursor and Copilot, across our primary development stacks (Python, .NET, Node.js, React). The objective is to accelerate feature delivery by reducing coding time and automating the generation of documentation and test cases. This initiative is complemented by enhanced code review processes to ensure quality and maintainability.

- No-Code Platforms for Business Agility: Our Marketing and Community teams are empowered with the Lovable platform to independently develop and deploy landing pages and user registration flows, increasing operational agility.

- Enterprise-Wide Productivity Suite: As a Google Workspace organization, we have standardized on Gemini Pro for company-wide use, democratizing AI-powered content creation, analysis, and communication.

- Proprietary Hyperautomation Platform: At the core of our strategy is Skyone Studio, one of our flagship products: an integrated platform that includes iPaaS, a lakehouse, and agent builder & orchestration. This low-code environment enables complex data integration from SAP and 17 other enterprise systems, and it supports building and orchestrating autonomous and conversational AI agents, which are being deployed across all business functions to drive efficiency and innovation.

Former Director / Sr Principal, Global Products and Technology in Healthcare and Biotech · 25 days ago

You're spot-on about complexity being the decisive factor. GenAI coding tools excel at problems with clear patterns in their training data. They struggle when critical business logic is scattered across a codebase, exactly as you suspected :) because context-window limitations mean the AI can't track architectural decisions made across many files. The breaking point isn't just complexity; it's contextual dependencies.
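To make that point concrete, here is a minimal sketch (all names illustrative, not from this thread) of what "centralized" business logic looks like: when every pricing rule lives in one function, the full policy fits in a single context window that an assistant can reason about, whereas the same rules scattered across many modules would not.

```python
from dataclasses import dataclass


@dataclass
class Order:
    subtotal: float
    customer_tier: str


def discount_rate(order: Order) -> float:
    """Centralized policy: every discount rule lives in this one function,
    so the complete business logic is visible in a single place."""
    if order.customer_tier == "gold":
        return 0.10
    if order.subtotal > 1000:
        return 0.05
    return 0.0


def total(order: Order) -> float:
    """Final price after applying the centralized discount policy."""
    return round(order.subtotal * (1 - discount_rate(order)), 2)
```

If instead the gold-tier rule lived in a CRM module and the volume rule in a checkout service, an assistant seeing only one file would be likely to miss or contradict the other rule.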

Head of US Corporate Research · 25 days ago

Hi Dan, I don't consider myself a super expert, as I have no developers directly under me, but I am one of many leaders trying to figure out the impact of tool cost vs. productivity. Here is a summary of my notes from conversations with a few very knowledgeable professors in the field:

When generative AI coding is a good fit:

- Well‑known patterns and boilerplate: Copilots are “extremely good and reliable” at widely taught, standard code structures—think canonical algorithms, idiomatic scaffolding, and routine functions—likely because models were trained on abundant, high‑quality examples.​

- Education, explanation, and tutoring: They excel at correcting code, generating examples, and explaining structures, making them strong for onboarding, learning, and clarifying conventional patterns in existing codebases.​

- Accelerating capable developers and enabling non‑programmers: They lower the barrier for non‑CS professionals to produce small pieces from high‑level instructions and give pros a productivity boost on routine tasks.​
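As an illustration of the "well-known patterns" category above, a canonical iterative binary search is exactly the kind of widely taught, idiomatic structure these tools reproduce reliably, since countless correct examples exist in their training data:

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # midpoint of the current window
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1
```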

When generative AI coding is a poor fit (or needs strong guardrails):

- Integrations and orchestration: Today’s copilots still struggle to be reliable at configuring multiple services, coordinating large workloads, and wiring complex systems together—areas where advanced developer expertise remains essential.​

- Cross‑module/system reasoning: Many issues are not obvious from simply running code; generated code still requires human reading and checking to catch subtle defects or mismatches in assumptions.​

- Expectation management on output share: headlines claiming ~40% of code is written by AI are overstated; a more realistic figure today is closer to 10-15%, because much modern software work is system integration rather than writing fresh low-level functions.

- Talent substitution: organizations should not let go of their skilled software developers; you still need experts to make software actually work in production and to own system-level decisions.

- Productivity: eventually, especially for a large team, even a 10-15% productivity boost for skilled developers could translate into either a 10-15% boost in business output or a 10-15% reduction in workforce. It will be interesting to see how expensive genAI tools become if they really deliver those cost savings.
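The back-of-envelope arithmetic behind that productivity point can be sketched as follows (the headcount and salary figures are purely illustrative):

```python
def annual_capacity_freed(num_devs: int, avg_cost: float, boost: float) -> float:
    """Capacity freed by a productivity boost, expressed in salary dollars.

    A `boost` of 0.10 means each developer delivers 10% more output, which
    the business can take as extra throughput or as reduced headcount cost.
    """
    return num_devs * avg_cost * boost


# e.g. 100 developers at $150k fully loaded cost, with a 10% boost:
freed = annual_capacity_freed(100, 150_000, 0.10)
```

This also shows why tool pricing matters: if a vendor charges anywhere near that freed capacity per seat, the net savings evaporate.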

I am always happy to learn more and to get valuable insight from experts in the field, so please feel free to challenge my assumptions.
