Let's have a discussion on the ethical challenges of AI.
Claude Shannon once said that whatever line is set for us, as soon as we achieve it, the rest of the world will say that's not really intelligence. You can't keep shifting the line. Will AI continue to get more and more intelligent? It certainly will, but it doesn't have to be a contest or a competition. The work I do focuses on what I refer to as symbiotic tech solutions: how to have humans and machines work together integrally. We are the species that augments, from eyeglasses to iPads, from shoes to cell phones. We are born naked, alone, and afraid. It's only through technology that we are able to achieve what we do as humanity. So this is nothing new for us; it's just the next logical step.
AI is a very powerful tool and process that must be used in conjunction with human intelligence to be effective in obtaining the desired results. The ethical component applies to the human who uses AI to achieve an objective.
I would agree with Timothy Campos' assessment. The AI we have now comes down to a series of algorithms, and the ethics are tied to how we interpret and use the data. The biggest challenge, from my perspective, is using data in a way that benefits the greatest number of people, even if that means it may negatively impact our ideas, sense of morals, or profit.
There's a typo in my response that I can't edit. It should read: "Most technology follows an S-curve, not a J-curve, yet it is easy to mistake the two."
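To make the S-curve/J-curve confusion concrete, here is a minimal sketch comparing logistic (S-curve) and exponential (J-curve) growth. The carrying capacity and growth rate are illustrative assumptions, not values from this discussion; the point is only that early in the curve the two are nearly indistinguishable, which is exactly why they are easy to mistake.

```python
# Logistic (S-curve) vs. exponential (J-curve) growth.
# K and r are assumed values chosen purely for illustration.
import math

K, r = 1000.0, 0.5  # assumed carrying capacity and growth rate

def logistic(t):
    """S-curve: growth that saturates at the carrying capacity K."""
    return K / (1 + (K - 1) * math.exp(-r * t))

def exponential(t):
    """J-curve: unbounded growth at the same initial rate."""
    return math.exp(r * t)

for t in range(0, 25, 4):
    print(f"t={t:2d}  S-curve={logistic(t):8.1f}  J-curve={exponential(t):10.1f}")
```

Both curves start at 1 and track each other closely for small t, then diverge sharply as the logistic curve saturates while the exponential keeps climbing.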
Another way to look at this is via the ways your AI solution can be attacked to produce incorrect results. We've all seen the demos of image classification algorithms that mistake turtles for guns, or the t-shirts a Tesla thinks are stop signs. So if those types of "AI" solutions can be attacked, what hidden biases exist in your solution? I think the work being done on "explainable AI" is pretty important. If we're going to rely on systems built with these technologies, it's important that we (or at least the designers) understand what is actually occurring. Otherwise, we're just building systems that look like that old whiteboard drawing with the box that says "Magic Happens."
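As a concrete illustration of the kind of attack mentioned above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based adversarial attacks. The model, label, and epsilon value are assumptions for illustration, not details from any of the demos cited.

```python
# Minimal FGSM sketch (PyTorch). Assumes `model` is any differentiable
# image classifier; epsilon controls how visible the perturbation is.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to raise the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A perturbation like this can be imperceptible to a human yet flip the model's prediction, which is why hidden biases and failure modes deserve the same scrutiny as deliberate attacks.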