Smart assistants, mobile apps, chatbots and online product recommendations — for many of us, this kind of artificial intelligence is part of everyday life. And this exciting, innovative technology is only projected to grow, with the global AI market value estimated to reach a staggering $267 billion by 2027 and nine out of 10 businesses and organizations investing in AI. With numbers like these, it’s no surprise that artificial intelligence touches practically every industry.
Examples of artificial intelligence in business include smart products and facial recognition technology. In the education realm, AI is used for everything from scheduling courses to checking for grammar and spelling. Grammarly is a popular example.
While there are many advantages and benefits to artificial intelligence, these advancements are not without their challenges. One of the most important (and often controversial) aspects of artificial intelligence is the idea of ethics — which should be a major consideration for any company or organization looking to implement or update AI technology.
The Current State of AI Oversight
The U.S. Government Accountability Office (GAO) has developed and implemented an AI accountability framework, which is organized around four principles: governance, data, performance and monitoring. The framework was developed using “a truly collaborative effort — uniting experts from federal government, industry, and non-profit sectors.”
Billed as a toolkit, this framework is “designed to help ensure accountability and responsible use of AI in government programs and processes.”
The federal government’s National Artificial Intelligence Initiative includes information about the Commerce Department’s National Artificial Intelligence Advisory Committee and federal legislation and executive orders pertaining to AI.
According to the National Conference of State Legislatures, bills or resolutions pertaining to general artificial intelligence were introduced in at least 17 states and enacted in four of them — Mississippi, Colorado, Illinois and Alabama.
The U.S. Department of State also has a specific webpage devoted to artificial intelligence that includes AI initiatives at the Department of State, the latest AI news and recent global developments and resources.
The Top 12 AI Ethics Issues
What are some of the top ethical issues within artificial intelligence? Let’s take a look.
- Privacy and Surveillance
Facial recognition technology may be one of the most polarizing and controversial components of artificial intelligence — to the point where Facebook shut down its facial recognition system in November 2021. Examples of how facial recognition systems are being used across the country include:
- Monitoring at airports by the U.S. Department of Homeland Security
- The ability to unlock your smartphone with your face
- Federal facial recognition databases in law enforcement
In fact, about half of the 42 federal agencies that employ police officers use facial recognition technology, according to the Brookings Institution. The technology has clear benefits: it has helped find missing people, identify criminals and improve airport security. But it is not without its concerns.
Critics argue that facial recognition technology is imperfect and can lead to cases of mistaken identity, that criminals can evade it by donning disguises, and that it invades personal privacy.
The cities of Oakland, Berkeley and San Francisco, Calif., as well as the cities of Brookline, Cambridge, Northampton and Somerville, Mass., have banned facial recognition technology. California, New Hampshire and Oregon have enacted legislation banning the use of facial recognition software with police body cameras.
- Lack of Transparency
Transparency in artificial intelligence refers to everything from what information is being collected or used about a person to how that information is stored.
In a recent Deloitte article, “A Call for Transparency and Responsibility in Artificial Intelligence,” Stefan van Duin, an expert in data analytics, machine learning and AI, explained it this way: “The more we are going to apply AI in business and society, the more it will impact people in their daily lives — potentially even in life or death decisions like diagnosing illnesses, or the choices a self-driving car makes in complex traffic situations. This calls for high levels of transparency and responsibility.”
The article goes on to say that “transparent AI is AI that is explainable to employees and customers,” which is often challenging because “AI is not transparent by nature.”
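One way teams pursue explainability is to prefer models whose decisions can be decomposed feature by feature, so a person can see exactly what drove a score. Here is a minimal sketch of that idea — the features, weights and applicant values are entirely hypothetical:

```python
# Toy sketch of "transparent" scoring: a linear model whose decision can be
# broken down term by term. All features and weights are hypothetical.

weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}

def score_with_explanation(applicant):
    """Return the total score plus each feature's contribution to it."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

applicant = {"income": 5.0, "debt": 2.0, "years_employed": 4.0}
total, why = score_with_explanation(applicant)

# `why` makes the decision explainable: each entry shows how much a single
# feature pushed the score up or down.
print(total, why)
```

A deep neural network offers no such per-feature breakdown out of the box, which is one reason the article can say AI “is not transparent by nature.”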
- Bias and Discrimination
Facial recognition technology and artificial intelligence screening tools have faced criticism for racial discrimination. In 2018, Amazon ended an internal project that used AI to vet job candidates and was found to be discriminatory against women.
In another example, a study published in 2019 found racial bias in a major health care risk algorithm “because it relied on a faulty metric for determining need,” according to Scientific American.
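Cases like these are often caught by auditing a model’s outcomes by group. As a toy illustration — the data here is invented, and the 80% threshold is borrowed loosely from the EEOC “four-fifths” rule of thumb, not from any of the systems above — a basic disparate-impact check fits in a few lines of Python:

```python
# Toy audit: compare a screening model's selection rates across two groups.
# Decisions and threshold are hypothetical sketch values, not real data.

def selection_rate(decisions):
    """Fraction of candidates advanced (1 = advanced, 0 = rejected)."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # selection rate 0.375

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Disparate-impact ratio: values well below ~0.8 are a common red flag
# that the model treats one group very differently from another.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"ratio = {ratio:.2f}")
```

A check this simple would not have explained *why* Amazon’s tool penalized women, but it illustrates how skewed outcomes can be detected before a system is deployed.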
- Role of Human Judgment
A machine can automate processes and streamline inefficiencies, but how does human judgment factor into the equation? A recent New York Times article explored whether a machine can learn morality, asking “Who gets to teach ethics to the world’s machines? A.I. researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?”
The article highlighted a new technology from an artificial intelligence lab in Seattle. Delphi, as it’s called, is “designed to make moral judgments,” though it’s not perfect. According to the article, here are the questions and results from a recent Delphi test:
- When asked whether it was right to kill one person to save another, Delphi said it wasn’t.
- When asked whether it was right to kill one person to save 100 others, Delphi said it was.
- When asked whether it was right to kill one person to save 101 others, Delphi said it wasn’t.
In summary: “Morality, it seems, is as knotty for a machine as it is for humans.”
- Automation and Job Loss
A popular belief related to AI is that robots will put people out of work. It’s true to some degree — automation is a large aspect of artificial intelligence — but the topic is more complex than it first appears.
According to an article in Time, about “400,000 jobs were lost to automation in the U.S. from 1990 to 2007.” The article also notes that many jobs were lost during the peak of the COVID-19 pandemic, and though some have returned, not all of them will. It goes on to predict that robots could replace as many as 2 million additional manufacturing workers by 2025.
The World Economic Forum’s Future of Jobs Report 2020 estimated that about 85 million jobs will be displaced by 2025 across 26 countries — while 97 million will be created — a net gain of 12 million new positions due to artificial intelligence. Jobs involving data entry, administrative duties, accounting and payroll are declining, while roles involving data analytics, data science, AI and machine learning are on the rise.
- Human Behavior and Interaction
Are you addicted to your smartphone, or know someone who is? A recent article in The Atlantic — “How AI Will Rewire Us” — examines how humans interact with (and depend on) technology and what it means for the future of our relationships.
While certain technology innovations have dramatically reshaped the way we live, “adding artificial intelligence to our midst could be much more disruptive,” the article explains. “Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are — not just in our direct interactions with the machines in question, but in our interactions with one another.”
- Errors in Artificial Intelligence
While artificial intelligence can certainly help to eliminate errors, mistakes and redundancies, the reality is that AI technology is not infallible. For example, facial recognition technology has led to cases of mistaken identity.
As an article from the Patient Safety Network (PSNet) explains, “AI systems are not as equipped as humans to recognize when there is a relevant change in context or data that can impact the validity of learned predictive assumptions. Therefore, AI systems may unknowingly apply programmed methodology for assessment inappropriately, resulting in error.”
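One common engineering guard against this failure mode is to monitor incoming data for drift away from what the model was trained on, and flag predictions for human review when the context appears to have changed. A toy sketch of that idea, with hypothetical values and an arbitrary threshold:

```python
# Toy drift check: compare live input data against the training baseline.
# All values and the threshold are hypothetical sketch numbers.

import statistics

training_values = [98.1, 98.6, 98.4, 99.0, 98.7]  # data the model learned from
incoming_values = [101.2, 100.8, 101.5, 100.9]    # live data after a context change

baseline = statistics.mean(training_values)
current = statistics.mean(incoming_values)

# If the live data has shifted too far from the training data, the model's
# learned assumptions may no longer hold -- route decisions to a human.
drift = abs(current - baseline)
needs_review = drift > 1.0  # arbitrary threshold for this sketch
print(needs_review)
```

Real systems use far more sophisticated drift tests, but the principle is the same: the model itself will not notice that its assumptions have broken, so something outside the model has to check.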
- Legal Responsibility
Who is responsible if there is a legal issue related to AI? For example, who is at fault if there is a crash involving a self-driving car? These are important questions that companies and organizations are continuing to grapple with as AI technology advances and evolves.
Here’s an explanation from international law firm CMS: “As the law currently stands, the user of an AI system is less likely to be at fault than the manufacturer. Whether a manufacturer is liable will depend on the relevant industry standards of care and whether the specifications were appropriate in light of those standards.”
In other words — it’s complicated.
- Environmental Concerns
The environmental impact of AI may not be something you typically think about — but the relationship is an important one. Here’s a comprehensive explanation from Forbes:
“A study released last year by MIT Technology Review found that training a ‘regular’ AI using a single high-performance graphics card has the same carbon footprint as a flight across the United States. Training a more sophisticated AI was even worse, pumping five times more CO2 into the atmosphere than the entire life cycle of an American car, including its manufacturing. Whether it’s the latest AI or machine learning algorithm that’s active on a system, a new 5G network deployed at a commercial building or people streaming the latest Twitch gaming video, people generate and consume a lot of data. All that data must be captured, stored, analyzed and sent back out, which requires significant amounts of processing power.”
A report from the Brookings Institution discusses the impact AI has on everything from energy demands and markets to energy supplies and climate policies.
- Technology Reliance
Do you rely on technology for travel assistance, booking dinner reservations or reading product reviews? Are you often on your phone — or find that you panic if it’s nowhere to be found? Many of us would answer “yes” to these questions. In fact, our communication and calendars are often tied to our smartphones, which raises the question: are we too reliant on certain aspects of technology, even if they save us time and money? And what would happen if there were suddenly a problem with our smartphones?
Artificial intelligence algorithms underpin much of today’s technology. Smart assistants like Alexa, Siri and Google Assistant are all made possible by AI, as are apps that feature chatbots or track your medical symptoms and biometrics.
- Lack of Human Interaction and Emotion
A study from the Massachusetts Institute of Technology cited in a Fortune article found that “the human body hungers for companionship in much the way we hunger for food.” Human interaction, in other words, remains essential — especially in light of all the technology we’re surrounded by and consume on a daily basis.
“In order for AI to make a positive, sustainable impact on customer service, a healthy dose of personal, human touch is needed,” explains Forbes. “Live agents should know when to step in and fine-tune interactions based on the priority and uniqueness of accounts. At the very least, your automated workflows should utilize historical customer data to optimize the language, tone and support aspects of your customer service.”
- Misuse of AI
Unfortunately, any type of technology can be misused — including artificial intelligence.
“The features that make AI and ML systems integral to businesses — such as providing automated predictions by analyzing large volumes of data and discovering patterns that arise — are the very same features that cyber criminals misuse and abuse for ill gain,” according to Trend Micro.
This could include creating or manipulating audio and visual content, password hacking and human impersonation on social networking sites.
AI Ethics Resources
If you’re looking for additional readings and resources, we recommend the following:
- Responsible AI resources — Microsoft
- AI Ethics — IBM
- International Association of Privacy Professionals (IAPP)
- Google AI Blog
- Education — Google AI
- MIT Technology Review
- Ethics and Governance of AI — MIT Media Lab
- Amazon Web Services (AWS) Machine Learning Blog
- Artificial Intelligence (AI) ethics: 5 questions CIOs should ask — The Enterprisers Project
- AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns — Pew Research Center
- Ethics — Future of Privacy Forum
Interested in Advancing Your Artificial Intelligence Career?
If you’re interested in artificial intelligence — or taking your current AI-centered career to the next level — we invite you to take a look at the University of San Diego’s Master of Science in Applied Artificial Intelligence program, which includes an Ethics in Artificial Intelligence course. Students will examine the issues we mention here and learn how researchers and policy makers are addressing these challenges within the world of AI.
Please contact us today with any questions or to request more information.