https://research.aimultiple.com/ai-ethics/
Top 9 Ethical Dilemmas of AI and How to Navigate Them in 2023
Though artificial intelligence is changing how businesses work, there are concerns about how it may influence our lives. This is not just an academic or societal concern but a reputational risk for companies: no company wants to be marred by the kind of data or AI ethics scandals that affected Amazon. For example, there was significant backlash over the sale of Rekognition to law enforcement, which was followed by Amazon's decision to stop providing the technology to law enforcement for a year, in anticipation that a proper legal framework would be in place by then.
This article provides insights on the ethical issues that arise with the use of AI, examples of AI misuse, and best practices for building responsible AI:
Contents:
- Automated decisions / AI bias
- Self-driving cars
- Lethal Autonomous Weapons (LAWs)
- Unemployment due to automation
- Surveillance practices limiting privacy
- Manipulation of human judgment
- Proliferation of deepfakes
- Artificial general intelligence (AGI) / Singularity
- Robot ethics
What are the ethical dilemmas of artificial intelligence?
Automated decisions / AI bias
AI algorithms and their training data may contain biases, much as humans do, since both are generated by humans. These biases prevent AI systems from making fair decisions. Biases appear in AI systems for two reasons:
- Developers may program biased AI systems without even noticing
- The historical data used to train AI algorithms may not represent the whole population fairly
Biased AI algorithms can lead to discrimination against minority groups. For instance, Amazon shut down its AI recruiting tool after a year of use when its developers found it was penalizing women: about 60% of the candidates the tool selected were male, a pattern traced to Amazon's historical recruitment data.
Getting rid of biases in AI systems is necessary for building ethical and responsible AI. Yet only 47% of organizations test for bias in their data, models, and human use of algorithms.
Though eliminating all bias from AI systems is nearly impossible, given the many known human biases and the ongoing discovery of new ones, minimizing bias is a goal businesses can pursue.
If you want to learn more, feel free to check our comprehensive guide on AI biases and how to minimize them using best practices and tools. Also, a data-centric approach to AI development can help address bias in AI systems.
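Since bias testing is recommended above but not illustrated, below is a minimal sketch of one common check: comparing selection rates across groups and computing a disparate-impact ratio. The DataFrame, column names, and numbers are hypothetical, invented for illustration; this is not Amazon's tool or the article's method.

```python
# Minimal sketch of a disparate-impact check on model outputs.
# All data and column names here are hypothetical, for illustration only.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g., 'hired') within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest.
    Values below ~0.8 are a common rough red flag (the 'four-fifths rule')."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Toy data loosely echoing the skewed-selection pattern described above.
    df = pd.DataFrame({
        "gender":   ["male"] * 100 + ["female"] * 100,
        "selected": [1] * 30 + [0] * 70 + [1] * 20 + [0] * 80,
    })
    rates = selection_rates(df, "gender", "selected")
    print(rates)                          # male 0.30, female 0.20
    print(disparate_impact_ratio(rates))  # ~0.67, below the 0.8 rule of thumb
```

Such a check only flags disparities in outcomes; deciding whether a disparity reflects unfair bias still requires domain knowledge and human judgment.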
--> Because AI algorithms and data are made by humans, they can carry biases; this may happen because developers are unaware of them during development, or because of a lack of data. Such biases can stand in the way of building an ethical AI system.
Autonomous things
Autonomous Things (AuT) are devices and machines that perform specific tasks autonomously, without human interaction. They include self-driving cars, drones, and robots. Since robot ethics is a broad topic, we focus here on the ethical issues raised by the use of self-driving vehicles and drones.
Self-driving cars
The autonomous vehicles market was valued at $54 billion in 2019 and is projected to reach $557 billion by 2026. However, autonomous vehicles pose various challenges to AI ethics guidelines, and people and governments still question the liability and accountability of autonomous vehicles.
For example, in 2018 an Uber self-driving car hit a pedestrian, who later died at a hospital. The accident was recorded as the first death involving a self-driving car. After investigations by Arizona police and the US National Transportation Safety Board (NTSB), prosecutors decided that the company was not criminally liable for the pedestrian's death, because the safety driver had been distracted by her cell phone; police reports labeled the accident "completely avoidable."
--> The article says autonomous vehicles pose risks to AI ethics guidelines without going into much detail, so to add some context: a self-driving car is one kind of Autonomous Thing (AuT), a device or machine that performs specific tasks autonomously without human interaction; besides self-driving cars, these include drones and robots. The representative ethical problem for self-driving cars is whether responsibility for an accident lies with the passenger or with the company that built the car, and there are also problems caused by hacking.
Lethal Autonomous Weapons (LAWs)
LAWs are among the weapons of the artificial intelligence arms race. They independently identify and engage targets based on programmed constraints and descriptions. There have been debates on the ethics of using weaponized AI in the military; in 2018, for example, the United Nations convened to discuss the issue, and countries that favor LAWs, including South Korea, Russia, and the United States, were vocal in the debate.
Counterarguments against the use of LAWs are widely shared by non-governmental communities. For instance, the Campaign to Stop Killer Robots wrote a letter warning about the threat of an artificial intelligence arms race; renowned figures such as Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Jaan Tallinn, and Demis Hassabis signed it.
Unemployment and income inequality due to automation
This is currently the most widespread fear about AI. According to a CNBC survey, 27% of US citizens believe that AI will eliminate their jobs within five years; among citizens aged 18 to 24, the figure rises to 37%.
Though these numbers may not look huge for "the greatest AI fear," remember that they are predictions for the next five years only.
According to McKinsey estimates, intelligent agents and robots could replace as much as 30% of the world's current human labor by 2030. Depending on the adoption scenario, automation will displace between 400 and 800 million jobs, requiring as many as 375 million people to switch job categories entirely.
Comparing society's five-year expectations with McKinsey's roughly ten-year forecast shows that people expect unemployment to arrive faster than industry experts estimate. Both, however, point to a significant share of the population losing their jobs to advances in AI.
Another concern about the impacts of AI-driven automation is rising income inequality. A study found that automation has reduced or depressed the wages of US workers specialized in routine tasks by 50% to 70% since 1980.
--> I think unemployment is the biggest problem related to AI; this article, too, estimates that up to 30% of jobs could be replaced by 2030.
Misuses of AI
Surveillance practices limiting privacy
"Big Brother is watching you." The quote is from George Orwell's dystopian novel 1984. Though written as fiction, it may have become reality as governments deploy AI for mass surveillance. The integration of facial recognition technology into surveillance systems raises concerns about privacy rights.
According to the AI Global Surveillance (AIGS) Index, which covers 176 countries, liberal democracies are major users of AI surveillance: the study found that 51% of advanced democracies deploy AI surveillance systems, compared to 37% of closed autocratic states. However, this is likely due to the wealth gap between the two groups of countries.
From an ethical perspective, the important question is whether governments are abusing the technology or using it lawfully. “Orwellian” surveillance methods are against human rights.
Some tech giants have also raised ethical concerns about AI-powered surveillance. For example, Microsoft President Brad Smith published a blog post calling for government regulation of facial recognition, and IBM stopped offering the technology for mass surveillance because of its potential for misuse, such as racial profiling, which violates fundamental human rights.
Manipulation of human judgment
AI-powered analytics can provide actionable insights into human behavior, yet abusing analytics to manipulate human decisions is ethically wrong. The best-known example of such misuse is the Facebook and Cambridge Analytica data scandal.
Cambridge Analytica sold American voters' data harvested from Facebook to political campaigns and provided assistance and analytics to the 2016 presidential campaigns of Ted Cruz and Donald Trump. The data breach was disclosed in 2018, and the Federal Trade Commission fined Facebook $5 billion for its privacy violations.
Proliferation of deepfakes
Deepfakes are synthetically generated images or videos in which a person in a piece of media is replaced with someone else's likeness.
Though about 96% of deepfakes are pornographic videos, drawing over 134 million views on the top four deepfake pornography websites, society's real ethical concern is that deepfakes can be used to misrepresent political leaders' speeches.
Creating a false narrative with deepfakes can damage people's trust in the media, which is already at an all-time low. This mistrust is dangerous because mass media is still governments' primary channel for informing people about emergencies such as a pandemic.
Artificial general intelligence (AGI) / Singularity
A machine capable of human-level understanding could pose a threat to humanity, and such research may need to be regulated. Although most AI experts do not expect a singularity (AGI) any time soon (not before 2060), the topic is ethically important as AI capabilities increase.
When people talk about AI, they mostly mean narrow AI, also referred to as weak AI, which is built to handle a single or limited task. AGI, by contrast, is the form of artificial intelligence we see in science fiction books and movies: machines that can understand or learn any intellectual task a human being can.
Robot ethics
Robot ethics, also referred to as roboethics, covers how humans design, build, use, and treat robots. Debates on roboethics date back to the early 1940s, and the arguments mostly stem from the question of whether robots have rights as humans and animals do. These questions have gained importance as AI capabilities have grown, and institutes like AI Now focus on exploring them with academic rigor.
Author Isaac Asimov was the first to write about laws for robots, introducing the Three Laws of Robotics in his short story "Runaround":
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
- A robot must protect its existence as long as such protection does not conflict with the First or Second Law.
How to navigate these dilemmas?
These are hard questions, and innovative, sometimes controversial solutions such as universal basic income may be necessary to solve them. Numerous initiatives and organizations aim to minimize the potential negative impact of AI. For instance, the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich conducts AI ethics research across domains such as mobility, employment, healthcare, and sustainability.
Some best practices to navigate these ethical dilemmas are:
Transparency
AI developers have an ethical obligation to be transparent in a structured, accessible way, since AI technology has the potential to break laws and negatively impact the human experience. Knowledge sharing can help make AI accessible and transparent. Some initiatives:
- AI research, even when it takes place in private, for-profit companies, tends to be shared publicly.
- OpenAI was created by Elon Musk, Sam Altman, and others as a non-profit AI research company to develop open-source AI beneficial to humanity. However, by selling one of its models exclusively to Microsoft rather than releasing the source code, OpenAI has reduced its level of transparency.
- Google developed TensorFlow, a widely used open-source machine learning library, to facilitate the adoption of AI.
- AI researchers Ben Goertzel and David Hart created OpenCog, an open-source framework for AI development.
- Google and other tech giants run AI-specific blogs that spread their AI knowledge to the world.
Explainability
AI developers and businesses need to explain how their algorithms arrive at their predictions in order to address the ethical issues that arise from inaccurate predictions. Various technical approaches can show how these algorithms reach their conclusions and which factors influenced a decision. We've covered explainable AI before; feel free to check it out.
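As a concrete illustration of one such technique (not necessarily the approach the article has in mind), here is a minimal sketch of permutation feature importance using scikit-learn: each feature is shuffled in turn, and the resulting drop in model accuracy indicates how much the predictions relied on that feature. The data is synthetic.

```python
# Minimal sketch: explaining a model's predictions via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a real decision problem.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and measure how much test accuracy drops;
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Methods such as SHAP or LIME give finer-grained, per-prediction explanations, but the idea is the same: make the factors behind a decision inspectable.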
Inclusiveness
AI research tends to be done by male researchers in wealthy countries, which contributes to biases in AI models. Increasing the diversity of the AI community is key to improving model quality and reducing bias. There are numerous initiatives, like this one supported by Harvard, to increase diversity within the community, but their impact has so far been limited.
Greater diversity can help solve problems such as the unemployment and discrimination that automated decision-making systems can cause.
Alignment
Numerous countries, companies, and universities are building AI systems, yet in most areas there is no legal framework adapted to recent developments in AI. Modernizing legal frameworks at both the national level and higher levels (e.g., the UN) will clarify the path to ethical AI development. Pioneering companies should spearhead these efforts to create clarity for their industry.
--> As solutions, the article discusses transparency, explainability, inclusiveness, and alignment. Transparency means sharing knowledge, for example through open source; explainability means analyzing algorithms and explaining how they work; inclusiveness means recruiting researchers without discrimination; and alignment means measures such as international agreements on AI ethics. As an additional solution, I think we need to adapt quickly to the era of the Fourth Industrial Revolution. Even beyond the problems mentioned above, countless AI-related problems will keep emerging, and we need to get used to that environment quickly. The same goes for the unemployment problem mentioned earlier: lifetime employment will become hard to come by, so we need to embrace new skills.