What are your biggest concerns as AI capabilities become a standard industry offering? 


Board Member, Advisor, Executive Coach in Software, Self-employed
If AI is done right, it's a huge opportunity for technology; if we get it wrong, it's a huge slippery slope. There's already been evidence, in some cases, of the consequences of messing up broad aspects of AI: a few years ago, Microsoft put out a chatbot that didn't have the right monitoring, and people seeded it with harassment and hostile language. That flood of negativity effectively trained the chatbot to say inappropriate things back to people.

There have already been instances of AI with severe consequences, like the criminal risk assessment algorithms some courts use to determine whether people will reoffend. A research study done by a group of PhD students found that one such system was substantially discriminatory against African-Americans because of the way the data was put into it: the machine learning model anchored on historically biased data and then learned to carry that bias forward. I worried a ton about that at Cylance, and even during my time at Intel.
VP of Product Management in Software, 11 - 50 employees

Whether or not we do it right is always the question in software development, and probably in any technology. You can't be afraid to fail, but you've got to fail as fast as you can. Because AI is not an "if"; it already is. We just need to keep working on it and keep learning.

Director of Security Operations in Finance (non-banking), 5,001 - 10,000 employees
Understanding that you can't put Pandora back in the box is a necessity for what we do. AI is not good, bad, right or wrong; it just is, and we have to figure out how to deal with it. All of us security and technology professionals need to understand that. But we also have to understand that AI has human consequences.
CEO and Co-Founder in Software, 51 - 200 employees
When you start looking at the commercial world, the models are very biased based on the data they're trained on. Enough studies have demonstrated that, and not only in cyber; the best example was the RELIEF study.

When it tried to build those kinds of administration models, Palantir completely goofed up. The result was absolutely racially biased, and that prompted MIT, together with IBM, to say, "We've got to be really careful. Do we have the right sample size?" And when you look at security, we all know that we don't have large enough sample sizes. We want to talk about AI, but we truly don't even have good ML models; we stretch statistical models into ML and say we have some sort of AI. I don't want to downplay the progress, because I've published tons of papers in this area. But the key is drawing that fine line, and people are not willing to tell you where the data is coming from.
Inventor, Wearables Pioneer, Product Designer and Manager, Thought Leadership in Software, 2 - 10 employees
Artificial intelligence is not the same kind of intelligence as human intelligence, and it never will be. We can create all kinds of algorithms and apply machine learning, allowing AI to make decisions much more quickly and speed up systems. However, computers can never develop intuition, which relies on gut feelings: a deep emotional response. We can model other kinds of emotions and interactions, but intuition, often referred to as the "sixth sense," cannot be programmed. Taking it out of the equation in decision making (or worse, attempting to reduce it to a mathematical model) is a path we should not attempt.
CIO / Managing Partner in Manufacturing, 2 - 10 employees
The volume and cleanliness of data required for these capabilities to succeed.

I normally estimate that 70% of any AI project is sorting out data rather than adding value.
CTO in Education, 11 - 50 employees
Is there a way to train models on ethics and morality?

Using this powerful technology in an ethically and morally correct way still depends on individual humans. Therein lies the danger.
