We don’t know how we feel about AI.
Since ChatGPT was released in 2022, the generative AI frenzy has stoked simultaneous fear and hype, leaving the public even more unsure of what to believe.
According to Edelman’s annual trust barometer report, Americans have become less trusting of tech year over year. A large majority of Americans want transparency and guardrails around the use of AI — but not everyone has even used the tools. People under 40 and college-educated Americans are more aware of generative AI and more likely to use it, according to a June national poll from BlueLabs reported by Axios. Of course, optimism also falls along political lines: The BlueLabs poll found one in three Republicans believe AI is negatively impacting daily life, compared to one in five Democrats. An Ipsos poll from April came to similar conclusions.
Whether you trust it or not, there is little debate that AI has the potential to be a powerful tool. President Vladimir Putin told Russian students on their first day of school in 2017 that whoever leads the AI race would become the “ruler of the world.” Elon Musk quote-tweeted a Verge article that included Putin’s quote and added that “competition for AI superiority at national level most likely cause of WW3 imo.” That was six years ago.
These discussions all drive one imperative question: Is AI good or bad?
It’s an important question, but the answer is more complicated than a simple “good” or “bad.” Generative AI can be used in ways that are promising: it could increase efficiency and help solve some of society’s woes. But it can also be used in ways that are dark, even sinister, with the potential to widen the wealth gap, destroy jobs, and spread misinformation.
Ultimately, whether AI is good or bad depends on how it’s used and by whom.
Positive uses of generative AI
The big positive Big Tech promises from AI is efficiency. AI can automate repetitive tasks in areas like data entry and processing, customer service, inventory management, data analysis, social media management, financial analysis, language translation, content generation, personal assistance, virtual learning, email sorting and filtering, and supply chain optimization, making tedious tasks a bit easier for workers.
You can use AI to make a workout plan or help create a travel itinerary. Some professors use it to clean up their work. For instance, Gloria Washington, an assistant professor at Howard University and a member of the Institute of Electrical and Electronics Engineers, uses ChatGPT as a tool to make her life easier where she can. She told Mashable that she uses it for two main reasons: to find information quickly and to work differently as an educator.
“If I am writing an email and I want to appear as if I really know what I’m talking about… I’ll run it through ChatGPT to give me some quick little hints and tips on how to improve the way that I say the information in the email or the communication in general,” Washington said. “Or if I’m giving a speech, [I’ll ask ChatGPT for help with] something really quick that I can easily incorporate into my talking points.”
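For readers curious what that workflow looks like in script form, here is a minimal sketch using OpenAI’s official Python client. The model name, prompts, and draft email are illustrative assumptions, not anything Washington described.

```python
# A minimal sketch of the "polish my email" workflow, using the official
# openai Python package (v1+). Model choice and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

draft = "Hi all, just circling back on the budget thing we talked about."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works here
    messages=[
        {
            "role": "system",
            "content": "You are an editor. Give quick hints and tips to "
                       "make this email clearer and more confident.",
        },
        {"role": "user", "content": draft},
    ],
)

# Print the model's suggested improvements to the draft
print(response.choices[0].message.content)
```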
As an educator, she finds it is revolutionizing how she approaches homework assignments. She also encourages students to use ChatGPT to help with emails and coding languages. But it’s still a relatively new technology, and you can tell. While 80 percent of teachers said they received “formal training about generative AI use policies and procedures,” only 28 percent of teachers said “that they have received guidance about how to respond if they suspect a student has used generative AI in ways that are not allowed, such as plagiarism,” according to research from the Center for Democracy & Technology.
“In our research last school year, we saw schools struggling to adopt policies surrounding the use of generative AI, and are heartened to see big gains since then,” the President and CEO of the Center for Democracy & Technology, Alexandra Reeve Givens, said in a press release. “But the biggest risks of this technology being used in schools are going unaddressed, due to gaps in training and guidance to educators on the responsible use of generative AI and related detection tools. As a result, teachers remain distrustful of students, and more students are getting in trouble.”
AI can improve efficiency and reduce human error in manufacturing, logistics, and customer service industries. It can accelerate scientific research by analyzing large datasets, simulating complex systems, and aiding in data-driven discoveries. It can be used to optimize resource consumption, monitor pollution, and develop sustainable solutions to environmental challenges. AI-powered tools can enhance personalized learning experiences and make education more accessible to a broader range of individuals. AI has the potential to revolutionize medical diagnoses, drug discovery, and personalized treatment plans.
The positives are undeniable, but that doesn’t mean the negatives are worth ignoring, Camille Carlton, a senior policy manager at the Center for Humane Technology, told Mashable.
“I don’t think that these potential future benefits should be driving our decisions to not pay attention and put up guardrails around these technologies today,” she said. “Because the potential for these technologies to increase inequality, to increase polarization, to continue to [affect the deterioration of our] mental health, [and] increase systemic bias, are all very real and they’re all happening right now.”
Negative aspects of generative AI
You might consider anyone who fears the negative aspects of generative AI a Luddite, and maybe they are — but in a more literal sense than the word carries today. The Luddites were English workers in the early 1800s who destroyed automated textile manufacturing machines — not because they feared the technology itself, but because nothing was in place to ensure their jobs were safe from replacement by it. They weren’t just economically precarious; they were starving at the hands of the machines. Now, of course, the word is used derogatorily to describe a person who fears or avoids new technology simply because it is new.
In reality, there are loads of questionable use cases for generative AI. Consider healthcare: there are too many variables to worry about before we can trust AI with our physical and mental well-being. AI can automate tasks like diagnostics by analyzing medical images such as X-rays and MRIs to help detect disease and identify abnormalities — which can be good. But a majority of Americans are concerned about the increased use of AI in healthcare, according to a survey from Morning Consult, and their fear is reasonable: Training data in medicine is often incomplete, biased, or inaccurate, and the technology is only as good as the data it has, which can lead to incorrect diagnoses, treatment recommendations, or research conclusions. Moreover, medical training data often isn’t representative of diverse populations, which could result in unequal access to accurate diagnoses and treatments — particularly for patients of color.
Generative AI models don’t understand medical nuance, can’t provide any kind of solid bedside manner, lack accountability, and can be misinterpreted by medical professionals. And once patient data is passed through AI, ensuring privacy becomes far more difficult; obtaining informed consent and preventing the misuse of generated content become critical issues, too.
“The public views it as something that whatever it spits out is like God,” Washington said. “And unfortunately it is not true.” Washington points out that most generative AI models are created by collecting information from the internet — and not everything on the internet is accurate or free from bias.
The automation potential of AI could also lead to unemployment and economic inequality. In March, Goldman Sachs predicted that AI could eventually replace the equivalent of 300 million full-time jobs globally, affecting nearly one-fifth of employment. AI was blamed for nearly 4,000 U.S. job cuts in May 2023, and more than one-third of business leaders say AI replaced workers last year, according to CNBC. This has led unions in creative industries, like SAG-AFTRA, to fight for more comprehensive protections against AI. OpenAI’s new AI video generator, Sora, makes the threat of job replacement even more real for creative industries with its ability to generate photorealistic videos from a simple prompt.
“If we do get to a place where we can find a cure for cancer with AI, does that happen before inequality is so terrible that we have complete social unrest?” Carlton questioned. “Does it happen after polarization continues to increase? Does it happen after we see more democratic decline?”
We don’t know. The fear with AI isn’t necessarily that the sci-fi movie I, Robot will become some kind of documentary, but that the people who choose to use it might not have the best intentions — or might not even know the repercussions of their own work.
“This idea that artificial intelligence is going to progress to a point where humans don’t have any work to do or don’t have any purpose has never resonated with me,” Sam Altman, the CEO of OpenAI, which launched ChatGPT, said last year. “There will be some people who choose not to work, and I think that’s great. I think that should be a valid choice, and there are a lot of other ways to find meaning in life. But I’ve never seen convincing evidence that what we do with better tools is to work less.”
A few more questionable use cases for AI include the following: It can power invasive surveillance, data mining, and profiling, posing risks to individual privacy and civil liberties. If not carefully developed, AI systems can inherit biases from their training data, leading to discriminatory outcomes in areas such as hiring, lending, and criminal justice. AI raises ethical questions of its own, such as the potential for autonomous weapons, decision-making in critical situations, and the rights of AI entities. And over-reliance on AI systems could lead to a loss of human control and decision-making, potentially impacting society’s ability to understand and address complex issues.
And then there’s the disinformation. Don’t take my word for it — Altman fears that, too.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.” For instance, consider the AI voice-generated robocalls created to sound like President Joe Biden.
Generative AI is great at creating misinformation, University of Washington professor Kate Starbird told Axios. MIT Technology Review even reported that people are more likely to believe disinformation generated by AI than falsehoods written by other humans.
“Generative AI creates content that sounds reasonable and plausible, but has little regard for accuracy,” Starbird said. “In other words, it functions as a [bullshit] generator.”
What does this mean?
“Instead of asking this question about net good or net bad…what is more beneficial for all of us to be asking is, good how?” Carlton said. “What are the costs of these systems to get us to the better place we’re trying to get to? And good for who, who is going to experience this better place? How are the benefits going to be distributed to [those] left behind? When do these benefits show up? Do they show up after [the] harms have already happened — a society with worse mental health, worse polarization? And does the direction that we’re going in reflect our values? Are we creating the world that we want to live in?”
Governments have caught on to AI’s risks and begun crafting regulations to mitigate its harms. The European Parliament passed a sweeping “AI Act” to rein in high-risk AI applications, and President Joe Biden signed an executive order addressing AI concerns in cybersecurity and biometrics.
Generative AI is part of our innate interest in growth and progress, moving ahead as fast as possible in a race to be bigger, better, and more technologically advanced than our neighbors. As Donella Meadows, the environmental scientist and educator who co-authored The Limits to Growth and wrote Thinking in Systems: A Primer, asked: Why?
“Growth is one of the stupidest purposes ever invented by any culture; we’ve got to have an ‘enough,'” Meadows said. “We should always ask ‘growth of what, and why, and for whom, and who pays the cost, and how long can it last, and what’s the cost to the planet, and how much is enough?'”
The entire point of generative AI is to recreate human intelligence. But who is deciding that standard? Usually, the answer is wealthy, white elites. And who decided that a lack of human intelligence was a problem in the first place? Perhaps we need more empathy — something AI can’t compute.