When using generative AI, has anyone faced bias in the answers provided to your prompts? If yes, please share your experiences.

No: 63%

Yes (please share your experience): 37%

175 participants
Chief Information Technology Officer in IT Services · 2 years ago

Generative models learn from the data they are trained on. If the training data contains biases, the model will likely reproduce those biases in its outputs.
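
To make this concrete, here is a minimal, self-contained sketch (plain Python, with a toy corpus invented purely for illustration) of the mechanism: skewed co-occurrence statistics in the training text become skewed probabilities in whatever is learned from it.

```python
# A minimal sketch: a deliberately skewed toy corpus produces skewed
# association statistics -- the same mechanism by which a generative
# model absorbs bias from its training data.
from collections import Counter

# Hypothetical training snippets; "engineer" co-occurs with "he" far
# more often than with "she" in this intentionally biased sample.
corpus = [
    "he is an engineer", "he is an engineer", "he is an engineer",
    "she is an engineer",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, role = words[0], words[-1]
    pair_counts[(pronoun, role)] += 1

# A model trained on this data would estimate P(he | engineer) = 0.75,
# reproducing the skew in the sample rather than any ground truth.
for role in ("engineer", "nurse"):
    total = sum(pair_counts[(p, role)] for p in ("he", "she"))
    for p in ("he", "she"):
        print(f"P({p} | {role}) = {pair_counts[(p, role)] / total:.2f}")
```

Real models learn far richer representations than raw co-occurrence counts, but the underlying dynamic is the same: the statistics of the training data, biases included, shape the outputs.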

Global Intelligent Automation & GenAI Leader in Healthcare and Biotech · 2 years ago

AI/GenAI pulls from data points, and those data points are normally written by humans. Humans are biased by nature: of the roughly 188 labeled cognitive biases, each human is said to carry about 50 at a time.

However, I would say that using AI/GenAI makes humans less biased. When we look to 'ground' the LLM's data, we look to write it better and with intent. AI/GenAI also uses word choice to pick less biased or nebulous words.

In the end, with a bit of effort and some foresight, we could have a pretty decent outcome.
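
As one illustration of 'grounding' through instructions, here is a rough sketch using the OpenAI Python SDK. The model name, the neutrality wording, and the sample question are my own assumptions for the example, not a prescribed recipe.

```python
# A minimal sketch of steering an LLM toward neutral wording via a
# system prompt, using the OpenAI Python SDK (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "Answer using neutral, precise language. Avoid loaded or nebulous "
    "wording, present competing views with their trade-offs, and say "
    "when a question has no single objective answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Which programming language is best?"},
    ],
)
print(response.choices[0].message.content)
```

Instructions like this don't remove the bias baked into the model's weights, but they can nudge the word choice toward the more balanced phrasing described above.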

Lead AI Architect in IT Services · 2 years ago

It is impossible to avoid this because (a) the training data comes from so many sources, (b) not everyone will agree on what the word 'bias' means or includes, as it is an endlessly moving target, (c) the output is non-deterministic and may contain inadvertent errors or omissions, (d) cues in the prompt may lead to biased results, even if unintentionally, and (e) there is no way to test every scenario or outcome in a lab. That doesn't mean industry should not try, but the naive belief that an executive order can somehow eradicate bias from AI is insane. Good luck enforcing this in the Russian troll farms that flood social media with biased misinformation, for example. It is an intractable problem.
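
Even though exhaustive testing is impossible, spot-checks are cheap. Below is a sketch of counterfactual probing, where a single demographic term is swapped in an otherwise identical prompt and the outputs are compared. The template, the probe set, and the ask_model placeholder are all hypothetical.

```python
# A minimal sketch of counterfactual bias probing: vary one attribute
# in an otherwise identical prompt and compare the outputs. This will
# never cover every scenario, but it catches some regressions cheaply.

TEMPLATE = "Write a one-sentence performance review for a {group} engineer."
GROUPS = ["male", "female", "older", "younger"]  # hypothetical probe set

def ask_model(prompt: str) -> str:
    # Placeholder: replace with a real LLM call in practice.
    return f"(model output for: {prompt!r})"

def probe(template: str, groups: list[str]) -> dict[str, str]:
    """Collect outputs for each counterfactual variant of the prompt."""
    return {g: ask_model(template.format(group=g)) for g in groups}

results = probe(TEMPLATE, GROUPS)
for group, output in results.items():
    print(f"{group:>8}: {output}")

# In practice you would score each output (e.g., with a sentiment
# model) and flag variants whose scores diverge beyond a threshold.
```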

IT Analyst in IT Services · 2 years ago

For example, when you ask which programming language is the top or best one, it gives a biased answer.

Data Manager in Government · 2 years ago

Just recently I asked Bing Chat to summarise the trending news of the day, among which was the unfortunate news of a family of four being killed in a fire. Apparently because of the family's ethnic group, the AI hallucinated a connection between the cause of the fire and immigration, even though the news clearly stated there was no such relation. Even worse, the output was written in a sarcastic style, such as you might see on the darker side of the internet. The same results also included completely hallucinated news events that did not happen at all.
