Will generative AI be a tool of misinformation & risk to civilization & humanity?


673 views · 31 Upvotes · 12 Comments

CTO in Software, 2 - 10 employees
Yes, generative AI has the potential to be misused for misinformation, identity theft, fraud, and propaganda, which could pose risks to society. However, these risks can be mitigated through responsible use, development of AI safeguards, and robust policies and regulations. Efforts are also underway to detect and counteract AI-generated misinformation. The tool itself isn't inherently a risk; it's the misuse that presents potential threats.
VP of Engineering in Banking, 201 - 500 employees
There's certainly an element of that. However, misinformation already happens all over the news and the internet without generative AI. Generative AI may simply amplify it to a much larger scale.
VP of Marketing & Solutions — Artificial Intelligence in Software, 10,001+ employees
Generative AI, as a foundational technology, will be a tool for many things, good and bad. One of the latter is the potential for people to use it to create misinformation. Three factors contribute to this: (1) Access, (2) Quality, and (3) Scale.

(1) Access: Anyone with an internet connection and a browser can create generative AI content (text, image, audio). The cost is effectively zero thanks to free trials and free tiers.

(2) Quality: It is already hard to distinguish AI-generated from human-created results, and it will only get harder as the technology advances.

(3) Scale: Anyone who’s created it can share it with anyone in the world — digitally and instantly.

Ultimately, it comes down to the people using the technology and their intentions for doing so. I’ve recently explored the following question as part of a personal creative project: “What’s real anymore? And how could you tell?”
Legal Operations Counsel & Innovation Strategist in Services (non-Government), 10,001+ employees
Like all technology (even the Internet), there is a possibility of misuse and risk of involvement by bad actors. But these risks don't outweigh the potential, nor should they justify placing limits on technological innovation and progress. We should anticipate and mitigate these fears and dangers appropriately, with clear and actionable guidelines and guardrails.
Editor-in-Chief in Media, 201 - 500 employees
No. If you know what you are looking for, not even a generative AI can mislead you. However, if you don't know what you are looking for, you can be misled even when the answer is right in front of you. Generative AI should therefore be seen as a blessing for easily sourcing information and answers to any valid inquiry, rather than as a threat to humanity or its civilization.

If the internet itself cannot mislead or harm humanity, then why should AI, which is meant to make day-to-day work more efficient, be seen as a tool of misinformation and a risk to civilization and humanity?
CDO in Software, 10,001+ employees
Any technology badly utilized poses risks and even unintended consequences. Social media already went down that route, before generative AI. So the question should be: Will humanity be mature and responsible enough to harvest the positive benefits of generative AI?
Community User in Software, 10,001+ employees
I think that, just like other technologies, there are risks of these tools being used for a variety of evils. We just saw an AI-generated image of a supposed Pentagon attack convince enough investors that the stock market dipped for a day. We've also seen the rise of deepfakes masquerading as politicians, and even being used to blackmail people. Obviously most users will use the technology innocently, for things like productivity gains, but there are nefarious people who will look to use it in malicious ways.
Director of IT in Manufacturing, 5,001 - 10,000 employees
I think AI will continue to improve itself.
Chief Technology Officer in Media, 2 - 10 employees
Generative AI, particularly in the form of deep learning models like GPT-3, has the potential to be used as a tool for generating misinformation and posing risks to civilization and humanity.
Executive, Self-employed
Yes. Like any other technology, if people misuse it, it will produce bad information. Considering that a very high percentage of the people feeding information online just don't care about sources and simply share whatever they find interesting, it will create even more misinformation.
