Have you ever seen a dev’s experiment with AI go off track? What happened, and how did your team handle it?

VP of IT · 9 months ago

As the saying goes, "garbage in, garbage out": the quality of the data and the input directly affects the output. Without a well-structured prompt or accurate data, results can drift into hallucinations or irrelevant outcomes. Once the prompt was refined and better data was provided, the results improved significantly, showing how much thoughtful input matters to effective outcomes. With agentic AI, prompts can now even be dynamic.

Director of Engineering · 10 months ago

Most often, I see machine learning experiments go off track when the dev forgets that good data beats good data science every time. This issue manifests when the dev focuses on tuning the model rather than on sourcing explanatory features, or on cleaning data rather than on sourcing quality data.
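The point above can be illustrated with a toy sketch (entirely hypothetical data, not from any real experiment): when the target depends on a feature the model never sees, no amount of tuning the model recovers that signal, while simply sourcing the feature collapses the error.

```python
import numpy as np

# Hypothetical data: y depends strongly on x2, a feature we could source.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)  # the explanatory feature
y = 2 * x1 + 5 * x2 + rng.normal(scale=0.1, size=n)

def fit_rmse(X, y):
    # Plain least-squares fit; return root-mean-squared error on the fit.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

# Model without the feature: tuning cannot remove the error x2 leaves behind.
rmse_without = fit_rmse(x1.reshape(-1, 1), y)
# Untuned model with the sourced feature.
rmse_with = fit_rmse(np.column_stack([x1, x2]), y)

print(rmse_without, rmse_with)  # the sourced feature dwarfs any tuning gain
```

Here the "better data" model wins by more than an order of magnitude, which is the shape of the trade-off the comment describes.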

Similarly, I see LLM experiments go off track when the dev focuses on the OpenAI API params rather than the prompt.
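To make that contrast concrete, here is a hedged sketch (no network calls; the model name and ticket example are illustrative, not from the source): two hypothetical chat-completion request payloads. The first fiddles with sampling parameters around a vague prompt; the second leaves the parameters at their defaults and spends the effort on the prompt itself.

```python
# Payload A: effort spent on API params, vague prompt.
vague_request = {
    "model": "gpt-4o",        # illustrative model name
    "temperature": 0.2,       # tuned down hoping for more accurate output
    "top_p": 0.9,
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
}

# Payload B: default sampling params, effort spent on the prompt.
refined_request = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a support-ticket triager. Reply only with JSON of "
                'the form {"summary": <string>, "severity": "low|med|high"}.'
            ),
        },
        {"role": "user", "content": "Summarize this ticket: <ticket text>"},
    ],
}

print(list(vague_request), list(refined_request))
```

In practice, the second payload tends to move output quality far more than any temperature or top_p setting around the first.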
