Should technology leaders trust AI, presuming the technology is fully realized?



Sr. Director of Enterprise Security in Software, 5,001 - 10,000 employees
What will be interesting is that we won't trust these things until we have the data to trust them. Suppose Tesla had the data to say that 87% of the time, when a driver was presented with a pedestrian to hit or a wall to run into, they chose to hit the pedestrian. Would you feel better about that or not? Because at some point Tesla is going to have that data, because this circumstance will happen enough times.

And it's not just Tesla. Across the industry, you'll have the data because you'll know how people have reacted. You'll understand what human behavior has been, but I don't know if that will make me feel better or not. I think that's going to be interesting. I'm definitely looking forward to the day that we can do that.
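
To illustrate, the aggregation imagined above is simple to compute once the incident data exists; a toy sketch in Python over a hypothetical log (the schema is invented for illustration) might look like this:

```python
from collections import Counter

# Hypothetical forced-choice incident log, aggregated across the industry.
# "choice" records what the human driver actually did; the schema is invented.
incidents = [
    {"choice": "pedestrian"},
    {"choice": "barrier"},
    {"choice": "pedestrian"},
    # ... in reality, thousands of records
]

counts = Counter(record["choice"] for record in incidents)
total = sum(counts.values())
for choice, n in counts.most_common():
    print(f"{choice}: {n / total:.0%} of observed incidents")
```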
Managing Director in Finance (non-banking), 1,001 - 5,000 employees
One of the exercises we went through when thinking about a fully autonomous vehicle was using pseudo edge cases: situations that happen in real life but don't involve the rational decisions you intend to make as a human. For instance, your car is barreling towards an intersection. The light has turned green for you, but there are pedestrians crossing the road and no way to swerve around them. Do you run over the pedestrians or slam your car into the concrete barrier? You don't know the outcome of either choice. The pedestrians could survive. You might survive.

It's an ethical question as much as a logic question, but the expectation is that the AI will make the most logical decision, while you might make a more emotional, human decision. For this scenario, more than 95% of people said they would go into the concrete barrier. The responses changed once that action had more deadly consequences for you and the passengers in the vehicle, or if you changed the pedestrian to an animal.

The point is that it's a hard problem to solve. Sometimes there isn't a logical solution, because you have to bring ethics into it and everybody's ethics can be different. In different jurisdictions, for instance, you might make different choices.
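
To see how slippery that is, here is a purely hypothetical sketch (not how any real vehicle works) of a decision policy whose "most logical" answer flips when you swap in a different set of ethical weights; all names and numbers are invented:

```python
# Purely hypothetical: outcome weights standing in for the different ethical
# frameworks a jurisdiction might mandate. Nothing here reflects a real system.
HARM_WEIGHTS = {
    "utilitarian":    {"pedestrian": 1.0, "occupant": 1.0},
    "occupant_first": {"pedestrian": 0.5, "occupant": 2.0},
}

def choose_action(expected_harm, policy):
    """Pick the action minimizing weighted expected harm.
    expected_harm maps action -> {party: probability of serious harm}."""
    weights = HARM_WEIGHTS[policy]
    return min(expected_harm, key=lambda action: sum(
        weights[party] * p for party, p in expected_harm[action].items()))

# The intersection scenario: neither outcome is certain for anyone involved.
scenario = {
    "continue":    {"pedestrian": 0.9, "occupant": 0.1},
    "hit_barrier": {"pedestrian": 0.0, "occupant": 0.6},
}

print(choose_action(scenario, "utilitarian"))     # hit_barrier
print(choose_action(scenario, "occupant_first"))  # continue
```

The code is the easy part; choosing the weights is the ethical part, and that is exactly what differs between people and jurisdictions.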
Member Board of Directors in Finance (non-banking), 201 - 500 employees
I think we're still very far away from AI being able to make ethical decisions in addition to decisions that simply follow an algorithm. Humans still have their job to do for the foreseeable future. There are so many permutations, and when you start using examples with children, animals, etc., it becomes very clear that AI still has a long way to go.

CEO in Manufacturing, 11 - 50 employees
When you look at the reality of having a person drive the car versus the car driving itself, 99.9% of the time the reaction times and so on will be better with the machine driving. When you look at the statistics, a significant number of deaths would be avoided with autonomous vehicles. There's a ton of data around all that, so it's inevitable. These things are going to happen.

Uber Air is an autonomous passenger drone that's expected to go live in the next 3-4 years. Why? Because that's the simplest autonomy problem: in the air, you don't have to figure out everything on the ground in a city like San Francisco. Would I take that Uber Air drone for a 15-minute ride from Saratoga to San Francisco? Absolutely, because it's a multi-pronged machine; they've engineered a lot into it.
VP of IT in Software, 10,001+ employees
I don't know that this will prove to be a meaningful question. In a post a few minutes ago I wrote that we quit calling it AI once it works.

When AI is fully realized (whatever that means), it will disappear. Things are going to make decisions for you and talk to other things making decisions for you. You won't be aware of it most of the time.

Between now and the time it is fully realized, we certainly shouldn't trust it. That doesn't mean we shouldn't use it. So many things can mess up our models. We should be paranoid about what we feed our models, and paranoid that they are still valid. And just like other things we don't trust, we should monitor them, limit their reach to only what they need, secure them, and protect how we train, test, and deploy them.
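
As one concrete example of that monitoring, here is a minimal sketch that checks whether live inputs still resemble what a model was trained on; the threshold and data are illustrative, not from any real deployment:

```python
import numpy as np

def check_feature_drift(train_col: np.ndarray, live_col: np.ndarray,
                        max_shift: float = 3.0) -> bool:
    """Flag drift if the live mean has moved more than max_shift
    training standard deviations away from the training mean."""
    mu, sigma = train_col.mean(), train_col.std() + 1e-12
    return abs(live_col.mean() - mu) / sigma > max_shift

# Illustrative data: the world the model was trained on vs. today's inputs.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(4.0, 1.0, 1_000)

if check_feature_drift(train, live):
    print("Input drift detected: stop trusting predictions and revalidate.")
```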

Just like children, AIs are still babies inching their way towards adolescence. When they become adults they will be on their own, and it won't matter whether we trust them.