Ethical Considerations of Artificial Intelligence: Examples of Moral Dilemmas

The rapid development of artificial intelligence (AI) has introduced numerous opportunities across the globe, from enhancing medical diagnoses to fostering human connections through social media and improving work efficiency via automation. However, these swift changes have also raised significant ethical concerns, centered on the potential for AI systems to embed biases, worsen climate impacts, and threaten human rights. Left unchecked, such risks could compound existing inequalities and further harm already marginalized groups.

Bias in Artificial Intelligence

When searching for "the greatest leaders in history" on a popular search engine, you might find a list predominantly featuring notable male figures. Have you ever counted how many women are included? If you search for "female students," you may encounter a page full of images depicting women and girls in provocative outfits. Conversely, when you type "male students," the results primarily show ordinary young male students, with hardly any appearing in revealing attire. This gender bias stems from ingrained stereotypes in society.

Search engine results are not neutral. They process large datasets and prioritize results based on user behaviors and geographical location. Consequently, these search tools can become echo chambers that support and reinforce real-world biases and stereotypes.
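To make this mechanism concrete, here is a minimal Python sketch of an engagement-driven ranker; the starting counts and the click probability are invented for illustration and do not describe any real search engine:

```python
import random

# Illustrative only: two result types competing for the same query.
# The "stereotyped" result starts with slightly more historical clicks,
# e.g. because it matches an existing societal stereotype.
clicks = {"stereotyped": 105, "neutral": 100}

random.seed(0)

for _ in range(1000):
    # Rank purely by accumulated clicks (a crude engagement signal).
    ranking = sorted(clicks, key=clicks.get, reverse=True)
    # Users click the top result most of the time (position bias).
    chosen = ranking[0] if random.random() < 0.8 else ranking[1]
    clicks[chosen] += 1

total = sum(clicks.values())
for kind, n in clicks.items():
    print(f"{kind}: {n} clicks ({n / total:.0%})")
# A ~2% initial gap grows into a dominant share: the ranker has
# amplified a small pre-existing bias into an echo chamber.
```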

How can we ensure more balanced and accurate outcomes? Measures to avoid, or at least reduce, gender bias should be taken during algorithm development, during the assembly of the large datasets used for machine learning, and in AI decision-making processes.
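One practical measure is auditing the training data itself before any model sees it. The sketch below is a minimal, assumed example; the record schema, attribute name, and 1.5 skew threshold are hypothetical choices, not a standard:

```python
from collections import Counter

def audit_representation(records, attribute="gender", max_ratio=1.5):
    """Flag labels whose examples are skewed along a protected attribute.

    `records` is a list of dicts such as {"label": "student",
    "gender": "female"} -- a hypothetical schema for illustration.
    """
    by_label = {}
    for r in records:
        by_label.setdefault(r["label"], Counter())[r[attribute]] += 1

    skewed = {}
    for label, counts in by_label.items():
        most, least = max(counts.values()), min(counts.values())
        if least == 0 or most / least > max_ratio:
            skewed[label] = dict(counts)
    return skewed

data = [
    {"label": "student", "gender": "female"},
    {"label": "student", "gender": "female"},
    {"label": "student", "gender": "female"},
    {"label": "student", "gender": "male"},
    {"label": "leader", "gender": "male"},
    {"label": "leader", "gender": "female"},
]

print(audit_representation(data))
# {'student': {'female': 3, 'male': 1}} -- the 3:1 skew exceeds the
# 1.5 cap, so this slice should be rebalanced before training.
```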

Artificial Intelligence in the Courtroom

The application of AI in judicial systems worldwide is becoming increasingly common, raising numerous ethical questions that need exploration. AI could potentially evaluate cases and deliver justice more swiftly and efficiently than human judges, and it could significantly affect the legal professions, judicial departments, and legislative and governmental decision-making bodies. For example, AI can improve the efficiency and accuracy of lawyers during consultations and litigation, benefiting them, their clients, and society as a whole. The software systems judges already use can be supplemented and enhanced by AI tools that support the drafting of new judgments. This growing reliance on autonomous systems is referred to as judicial automation.

Some argue that artificial intelligence contributes to a fairer criminal justice system: machines can use their speed and capacity for large-scale data analysis to assess and weigh relevant factors better than humans can, and on this view AI would therefore reach informed decisions free of bias or subjectivity.

Yet, there are numerous ethical challenges:

  • Lack of transparency: AI decisions are not always intelligible to humans.
  • Non-neutrality: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, and ingrained biases.
  • Surveillance: collecting data on court users raises privacy-protection concerns.
  • Fundamental values: new concerns arise regarding fairness, human rights, and other basic values.

Would you be willing to accept a trial by a robot in court, even if we could not understand how it reached its conclusion?

AI and Artistic Creation

The application of AI in the cultural sector raises fascinating ethical considerations. In 2016, a painting titled "The Next Rembrandt" was designed by a computer and printed with a 3D printer, nearly 350 years after the painter's death. To achieve this technological feat, researchers analyzed more than 300 Rembrandt paintings pixel by pixel and used deep-learning algorithms to build a unique database from them. Every detail of Rembrandt's artistic identity was captured there, forming the foundation for an algorithm capable of producing a new work in his style.

To bring the painting to life, a 3D printer recreated the texture and layering of Rembrandt's brushstrokes on the canvas, with results convincing enough to fool even art experts.

But who should be designated as the author? The company that curated the project, the engineers, the algorithm, or… Rembrandt himself?

In 2019, Chinese technology company Huawei announced that its AI algorithm had completed the final two movements of Schubert's Eighth Symphony, which the composer had left unfinished 197 years earlier. So what happens when machines can create art themselves? If human authors are replaced by machines and algorithms, how should copyright be attributed? Can and should algorithms be recognized as authors, enjoying the same rights as artists?

AI-created art necessitates a redefinition of "authorship" to fairly consider both the original creators' artistic work and the algorithms and technologies responsible for creating art. Creativity is the ability to conceive and produce new and original content through imagination or invention, playing a critical role in open, inclusive, and diverse societies. Thus, the impact of artificial intelligence on human creativity warrants serious contemplation.

While AI is a powerful tool for creation, it also raises important questions about the future of art, artists' rights and remuneration, and the integrity of the creative value chain. New frameworks are needed to distinguish piracy and plagiarism from original creation, and to recognize the value of human creative labor in our interactions with AI. Such frameworks are crucial for avoiding the exploitation of human labor and creativity, ensuring that artists receive adequate compensation and recognition, and preserving the integrity of the cultural value chain and the cultural sector's ability to provide decent work.

Autonomous Vehicles

Autonomous vehicles are capable of perceiving their surroundings and operating with minimal or no human intervention. To ensure these vehicles navigate safely and understand their driving environment, a multitude of sensors on the vehicle continuously capture vast amounts of data. These data are then processed by the vehicle's self-driving computer system.
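In rough outline, the stack described above is a continuous loop of sensing, interpretation, and action. The following sketch shows one assumed, heavily simplified shape of that loop; the class and function names are illustrative, not any vendor's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lidar_points: int    # stand-ins for the raw streams a real vehicle
    camera_frames: int   # would fuse (point clouds, images, radar
    radar_tracks: int    # returns), simplified here to plain counts

def perceive(frame: SensorFrame) -> dict:
    # A real system runs detection and tracking models here; we simply
    # pretend an obstacle is present whenever radar reports a track.
    return {"obstacle_ahead": frame.radar_tracks > 0}

def plan(world: dict) -> str:
    # The decision step: the ethically loaded part of the stack.
    return "brake" if world["obstacle_ahead"] else "cruise"

# One tick of the continuous sense -> perceive -> plan loop.
frame = SensorFrame(lidar_points=90_000, camera_frames=6, radar_tracks=1)
print(plan(perceive(frame)))  # -> "brake"
```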

Autonomous cars must also undergo extensive training to make sense of the data they collect and to make correct decisions in any conceivable traffic situation. People make moral decisions every day: when a driver slams on the brakes to avoid hitting a pedestrian crossing the street, they are making an ethical choice, shifting risk from the pedestrian to the people inside the vehicle. Now imagine an autonomous vehicle whose brakes have failed as it speeds toward an elderly woman and a child, where veering slightly would save one of them. This time the decision is made not by a human driver but by the car's algorithm.

Who would you choose, the grandmother or the child? Do you believe there is only one correct answer? This is a classic ethical dilemma, highlighting the importance of ethics in technological development.
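The dilemma can be made concrete in code. In the hypothetical planner sketched below, the numbers in HARM_WEIGHTS are the moral decision: whoever fills them in has answered the question before the car ever drives. All names and values are invented for illustration:

```python
# Illustrative only: a planner that scores swerve options by weighted
# expected harm. There is no neutral way to fill in the weights.
HARM_WEIGHTS = {
    "elderly_pedestrian": 1.0,  # who chose these values,
    "child_pedestrian": 1.0,    # and on what grounds?
    "occupant": 1.0,
}

def cost(option):
    # Expected harm: weight of each potential victim times the
    # probability that this maneuver injures them.
    return sum(HARM_WEIGHTS[v] * p for v, p in option["risks"].items())

options = [
    {"name": "veer_left",  "risks": {"elderly_pedestrian": 0.9, "occupant": 0.1}},
    {"name": "veer_right", "risks": {"child_pedestrian": 0.9, "occupant": 0.1}},
    {"name": "straight",   "risks": {"elderly_pedestrian": 0.5,
                                     "child_pedestrian": 0.5}},
]

best = min(options, key=cost)
print(best["name"], cost(best))
# With equal weights every option scores 1.0 and the tie-break is
# arbitrary; any tweak to the weights silently picks who bears the risk.
```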

Recommendations on AI Ethics

UNESCO's Release of the First Global Standard on AI Ethics

In November 2021, UNESCO released the first global standard on the ethics of artificial intelligence, the "Recommendation on the Ethics of Artificial Intelligence," which applies to all UNESCO member states. The recommendation is built on the protection of human rights and dignity, with fundamental principles such as transparency and fairness at its core, and it emphasizes the importance of human oversight of AI systems. What makes it particularly actionable is its broad scope: it covers a range of policy action areas that allow policymakers to translate its core values and principles into practice in domains such as data governance, environment and ecosystems, gender, education and research, health, and social well-being.

The recommendation broadly interprets AI as systems capable of processing data in a manner similar to intelligent behavior. Due to the rapid pace of technological change, any fixed and narrow definition could soon become outdated and render forward-looking policy efforts ineffective.

OpenAI-Funded Research on "AI Morality" Delves into a Complex Domain

On November 23, 2024, reports revealed that OpenAI is funding research in the complex domain of "AI morality." According to TechCrunch, a filing with the U.S. Internal Revenue Service disclosed that OpenAI's nonprofit arm had awarded a grant to researchers at Duke University for a project titled "Research AI Morality." The three-year program is backed by $1 million to study how AI can be endowed with moral awareness.

The project is led by Walter Sinnott-Armstrong and Jana Schaich Borg, professors who specialize in practical ethics. Sinnott-Armstrong is a recognized figure in philosophy whose work spans applied ethics, moral psychology, and neuroscience; his team at Duke University focuses on real-world challenges, such as devising algorithms to determine organ transplant recipients and balancing public and expert opinion to make such systems fairer. The funded project aims to create algorithms that predict human moral judgments in domains like medicine, law, and business. The challenge lies in how AI operates: machine learning models predict outcomes from their training data, which often reflects mainstream cultural opinion and can harbor biases.
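The bias concern is easy to demonstrate even without a real model: a system trained to predict human moral judgments tends to converge on whatever its annotators' majority said. The toy sketch below, with invented scenarios and labels, uses plain majority voting as a stand-in for that learned behavior:

```python
from collections import Counter

# Invented annotation data: each scenario labeled by five annotators.
annotations = {
    "lie to protect a friend":    ["wrong", "wrong", "wrong", "ok", "ok"],
    "break a law to save a life": ["ok", "ok", "ok", "ok", "wrong"],
}

def model_prediction(labels):
    # Stand-in for a trained classifier: with enough training signal it
    # reproduces the annotators' majority label, and dissent disappears.
    return Counter(labels).most_common(1)[0][0]

for scenario, labels in annotations.items():
    pred = model_prediction(labels)
    dissent = sum(l != pred for l in labels)
    print(f"{scenario}: predicted {pred!r}, "
          f"dissenting annotators {dissent}/{len(labels)}")
```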

AI from a Human Rights Perspective

Ten Core Principles Centered on Human Rights in AI Ethics

  1. Proportionality and Do No Harm

The use of AI systems should not exceed what is necessary to achieve legitimate purposes. Risk assessments should be conducted to prevent potential harm from such use.

  2. Safety and Security

AI actors should avoid and address unwanted harms (safety risks) and vulnerabilities to attack (security risks).

  3. Privacy and Data Protection

Throughout the life cycle of AI, privacy must be protected and promoted. Appropriate data protection frameworks should be established.

  4. Multi-Stakeholder and Adaptive Governance and Collaboration

When using data, international law and national sovereignty must be respected. Involving diverse stakeholders is essential to realizing inclusive AI governance approaches.

  5. Accountability and Responsibility

AI systems should be auditable and traceable. Mechanisms for oversight, impact assessments, audits, and due diligence should be established to avoid conflicts with human rights norms and threats to environmental well-being.

  6. Transparency and Explainability

The ethical deployment of AI systems is contingent on their transparency and explainability (T&E). The level of T&E should suit specific contexts, as T&E may conflict with other principles like privacy, safety, and security.

  7. Human Oversight and Determination

Member states should ensure that AI systems do not displace ultimate human responsibility and accountability.

  8. Sustainability

The impact of AI technology on "sustainability" should be evaluated, understood as an evolving set of objectives, including those outlined in the UN Sustainable Development Goals.

  9. Awareness and Literacy

Public understanding of AI and data should be enhanced through open and accessible education, civic engagement, digital skills, AI ethics training, and media and information literacy.

  10. Fairness and Non-Discrimination

AI actors should promote social justice, fairness, and non-discrimination, while ensuring that the benefits of AI are accessible to all in an inclusive manner.