The goal of this tutorial is to introduce learners to the challenges posed by the increasing development and use of artificial intelligence. It does so through an introduction to the potential risks of continued development, an interdisciplinary look at the current state of Artificial Intelligence’s progress, and case studies of real-world examples detailing the legal and ethical complexities of Artificial Intelligence that have already confronted society.
After completing this training resource, learners should be able to:
- think critically about both the benefits and drawbacks of the implementation of Artificial Intelligence in society
- separate the current state of the complexities of Artificial Intelligence from the potential future risks
- explore and gain familiarity with real-world examples of issues surrounding Artificial Intelligence
Artificial Intelligence, known as AI, has long presented a theoretical contradiction: some say it will have the power to propel us into a world of greater accuracy, efficiency, and ease, and provide us with capabilities and outputs beyond our wildest imagination. Others speak of the fears instilled by the power and potential held by AI. Will it replace us? Will it turn on us? Will it destroy society? Both its champions and its detractors can agree on one thing: the development and implementation of AI will change our lives and our world. It is up to us to determine how that change will present itself.
As we move from simple AI assistants like Siri and Alexa, which we rely upon to wake us up and tell us the weather in the morning, towards more capable and controversial forms of AI like ChatGPT, which can compose full film scripts and pen essays for university courses at the touch of a button, the public is in the throes of determining the terms of its relationship with a formidable and enduring companion.
In such a pivotal moment, it is crucial for the public to become and remain informed, not only on how AI impacts our own field, our research topic, or our family and friends, but also on a broader philosophical scale and in fields we may never consider, which may end up touching us all.
In order to make more informed and unified decisions that guide the direction of AI as a part of our future, we must be aware of where development stands and where it may progress in fields including healthcare, criminal justice, economics, surveillance, governance, education, warfare, intellectual property, logistics, and more.
This tutorial provides an organized overview and examples of the most thought-provoking and pressing challenges facing us today as we learn, debate, and help define what a future with AI looks like.
AI introduces us to a litany of debates about what is right and wrong in its use. It makes us question our humanity, weigh progress against protections, and predetermine the outcomes of a number of autonomous quandaries that force us to confront our own morality and values.
The great AI debate is a prism for larger discussions of the society we would like to build. When it comes to AI in warfare, medicine, or self-driving vehicles, we are plainly confronted with the infamous “trolley problem” of Philosophy 101 in real life. If an AI tool is going to make decisions that could impact us physically, we must program the desired outcome, forcing us to grapple with predetermining impossible choices, such as choosing between the lives of the passengers and a pedestrian in a crosswalk.
It is imperative to open our minds to the ethical issues we face with the development of AI so that we may debate, explore, and formulate our own informed opinions on the society we would like to create with the choices being made during this time. As such, the following section provides an introduction to both the potential risks and the current state of AI and ethics. Learners should challenge themselves to draw connections between the current state and the potential risks presented in this tutorial. What may be done to avoid the risks noted here? What positive outcomes may we help guide AI towards with proper regulation of its development?
Bias and Fairness: Artificial Intelligence algorithms are only as good as the data they are trained on. If the training data is biased, the Artificial Intelligence system may perpetuate or even amplify existing social inequalities. For example, biased facial recognition systems might misidentify certain racial or ethnic groups more frequently.
Privacy and Surveillance: Artificial Intelligence systems can process vast amounts of personal data, potentially leading to privacy infringements. As Artificial Intelligence becomes more sophisticated, there’s an increased risk of surveillance, unauthorized access, and misuse of personal information.
Autonomy and Accountability: As Artificial Intelligence systems become more advanced, they may operate autonomously, making decisions without human intervention. This raises questions about who should be held accountable for AI-generated decisions and actions when something goes wrong.
Lack of Transparency: Many Artificial Intelligence algorithms, such as deep learning neural networks, can be complex “black boxes” that are difficult to interpret. The lack of transparency and explainability can make it challenging for users to understand how AI arrives at specific conclusions or recommendations.
Job Displacement and Economic Inequality: Artificial Intelligence and automation could lead to job displacement and create economic inequality if certain sectors or regions are more affected than others. The distribution of AI’s benefits and its impact on the workforce remains a concern.
Safety and Security: Artificial Intelligence applications have the potential to be weaponized or manipulated maliciously. Ensuring that Artificial Intelligence systems are secure and cannot be easily exploited for harmful purposes is a significant ethical challenge.
Moral Decision Making: Teaching Artificial Intelligence systems to make moral decisions poses ethical dilemmas. Deciding what moral principles to embed in Artificial Intelligence and how to handle situations where moral choices conflict raises complex philosophical questions.
Impact on Human Agency: Artificial Intelligence systems that predict and influence human behavior might undermine individual autonomy and manipulate people’s choices and actions without their awareness or consent.
Artificial Intelligence in Warfare: The use of Artificial Intelligence in military applications, such as autonomous weapons, raises ethical concerns about the potential for indiscriminate harm and the erosion of human responsibility in warfare. If an AI is responsible for entering a conflict space and tasked with achieving a certain goal, it may not have the same respect for the lives of bystanders that a person would.
Long-term Consequences and Existential Risk: There are concerns about the long-term consequences of Artificial Intelligence development, including the potential for Artificial Intelligence systems to surpass human intelligence and the risks associated with creating superintelligent entities.
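The bias-amplification mechanism described under “Bias and Fairness” above can be made concrete with a toy sketch. The historical data, the groups, and the naive base-rate “model” below are all hypothetical, invented purely for illustration; real systems are far more complex, but the failure mode is the same: a model fit to a skewed history reproduces the skew.

```python
# Toy illustration (hypothetical data): a naive model that learns each
# group's historical approval rate will faithfully reproduce past bias.

# Historical decisions: group "a" was approved 80% of the time,
# group "b" only 30% of the time, regardless of qualification.
history = (
    [("a", 1)] * 80 + [("a", 0)] * 20 +
    [("b", 1)] * 30 + [("b", 0)] * 70
)

def train(data):
    """Learn each group's historical approval rate (a stand-in 'model')."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [y for g, y in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)
print(model["a"])  # 0.8: the past disparity...
print(model["b"])  # 0.3: ...becomes the future prediction
```

New applicants from group “b” inherit the low approval rate of the past even if the historical disparity was unjustified, which is exactly how skewed training data perpetuates or amplifies existing inequality.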
Bias and Fairness: The issue of bias in Artificial Intelligence systems is a major concern. Many Artificial Intelligence applications, including facial recognition, hiring algorithms, and credit scoring, have been found to exhibit biases against certain groups, perpetuating discrimination and social inequalities.
Privacy and Surveillance: Artificial Intelligence systems are increasingly used to process personal data, raising concerns about privacy violations and the potential for mass surveillance.
Autonomous Weapons and Military Artificial Intelligence: The development and deployment of autonomous weapons and Artificial Intelligence in military contexts are significant ethical issues, with concerns about the lack of human oversight and the potential for increased harm.
Accountability and Transparency: The lack of transparency and explainability in Artificial Intelligence decision-making processes remains a challenge, making it difficult to understand how Artificial Intelligence arrives at particular outcomes and who should be held responsible for Artificial Intelligence-generated decisions.
Job Displacement and Economic Impact: As Artificial Intelligence and automation advance, concerns about job displacement and economic inequality are prevalent, particularly in industries where Artificial Intelligence technologies are replacing human labor.
Deepfakes and Misinformation: The rise of Artificial Intelligence-generated deepfake videos and misinformation poses ethical challenges in terms of media manipulation, fake news, and potential harm to individuals and societies.
Artificial Intelligence in Healthcare: Ethical considerations have been raised regarding the use of Artificial Intelligence in healthcare for tasks such as diagnosis and treatment recommendations, with concerns about data privacy, informed consent, and potential biases in medical decision-making.
Manipulation and Persuasion: The use of Artificial Intelligence-driven algorithms in social media platforms and advertising raises concerns about targeted manipulation and persuasion, potentially influencing people’s beliefs and behaviors.
Data Governance and Ownership: The increasing reliance on data for training Artificial Intelligence models raises questions about data ownership, access, and governance, particularly in cases involving sensitive or personal data.
In order to see how some of these concepts have manifested around the world, please feel free to explore some of the case studies provided below.
Autonomous Vehicles and Pedestrian Safety: In March 2018, a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona. The incident raised concerns about the safety of autonomous vehicles and the need for human oversight during testing and deployment.
Facial Recognition Bias: In 2018, Joy Buolamwini, an MIT researcher, found that some facial recognition systems from major tech companies were less accurate in identifying darker-skinned and female faces compared to lighter-skinned and male faces.
Automated Hiring Bias: In 2018, it was reported that Amazon developed an AI-powered recruiting tool that showed bias against female candidates, downgrading resumes that included terms like “women’s” and penalizing applicants from women’s colleges.
AI in Criminal Justice: In 2016, ProPublica investigated the use of the COMPAS algorithm in the U.S. criminal justice system. The algorithm, used to assess the likelihood of a defendant reoffending, was found to have significant racial bias, leading to biased decisions in sentencing.
AI in Healthcare: In 2017, Google’s DeepMind was involved in a controversy when it partnered with the UK’s National Health Service (NHS) to process patient data without explicit consent. The partnership raised concerns about data privacy and informed consent.
Having explored these ethical dilemmas and AI-driven scenarios, one can see the nuances and complexities of right and wrong in AI applications. From fairness in algorithmic decisions to the preservation of individual privacy, the ethical dimensions of AI touch every aspect of our lives. It’s vital to apply this understanding as AI continues to shape our ethical landscape, fostering an environment of responsibility and transparency.
AI has the power to reshape how we interact and live. It can both support and challenge social norms, but it can also perpetuate the very problems many of us are trying to minimize. If an AI-powered tool is brought in to choose the best job candidate but is trained on data reflecting hiring practices from the last few decades, it may select the candidates most similar to past successful hires, overlooking a candidate whose soft skills or different life path could make them a great fit for the job, even if they don’t fit the profile of their prospective colleagues.
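As a rough sketch of the hiring scenario just described, consider a toy screener that scores a resume by its word overlap with past successful hires. Everything here, the resumes, the vocabulary, and the scoring rule, is invented for illustration and far simpler than any real system, but it shows how terms absent from a homogeneous history get implicitly penalized.

```python
# Hypothetical past hires (a deliberately homogeneous pool).
past_hires = [
    "software engineer chess club varsity football",
    "software engineer robotics varsity baseball",
    "software engineer chess club",
]

# Vocabulary of "what a successful hire looks like", learned from history.
vocab = set()
for resume in past_hires:
    vocab.update(resume.split())

def score(resume):
    """Fraction of the candidate's words seen among past successful hires."""
    words = resume.split()
    return sum(w in vocab for w in words) / len(words)

# Two comparably qualified candidates; the one who mentions activities
# the historical pool never contained is scored lower for it.
print(score("software engineer chess club"))             # 1.0
print(score("software engineer women's chess society"))  # 0.6
```

Nothing in this scorer mentions gender, yet it downgrades any candidate whose background differs from the historical pool, the same pattern reported in the Amazon recruiting-tool case above.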
The more widespread our knowledge of AI, how it works, and how to use it effectively, the more empowered we will become as a society. As AI becomes more common, it is important for society to understand how it works and manage its impacts, mitigating the negative effects of its training data so that its implementation aligns with where we want to be as a society in the future.
The following examples of potential risks should also inspire inverse thinking: ideas about the opportunities and benefits that could move our society from its current state towards a more unified, equitable, and progressive future.
Job Displacement and Economic Inequality: Artificial Intelligence and automation could lead to job displacement across various industries, potentially exacerbating economic inequality as certain sectors are more affected than others. Ensuring a just transition for workers and addressing the impact on income distribution is critical.
Skills Gap and the Digital Divide: The adoption of Artificial Intelligence may create a skills gap, where the demand for Artificial Intelligence-related expertise outstrips the supply. Unequal access to Artificial Intelligence technologies could widen the digital divide between regions or socio-economic groups, further marginalizing disadvantaged communities.
Ethical Decision Making and Agency: As Artificial Intelligence systems take on more decision-making roles, questions arise about how to imbue Artificial Intelligence with ethical principles and ensure that Artificial Intelligence respects human agency and values. Artificial Intelligence algorithms can perpetuate and amplify existing biases, leading to discriminatory outcomes in areas like criminal justice, hiring, and lending.
Surveillance and Privacy Concerns: The extensive use of Artificial Intelligence for surveillance purposes, both by governments and private entities, may raise serious privacy concerns and potential abuse of personal data.
Autonomous Weapons and Warfare: The development of Artificial Intelligence-driven autonomous weapons could raise ethical concerns about the potential for uncontrolled use and accountability in warfare.
Artificial Intelligence and Addiction: Artificial Intelligence-driven algorithms in entertainment and social media platforms may contribute to addiction-like behaviors and excessive screen time, particularly among vulnerable populations.
Unemployment and Social Safety Nets: Widespread job displacement due to Artificial Intelligence adoption may necessitate the restructuring of social safety nets to support those affected.
Human-Machine Interaction and Social Isolation: As Artificial Intelligence becomes more integrated into daily life, establishing appropriate and ethical human-machine interaction norms and guidelines becomes crucial. Increasing reliance on Artificial Intelligence and automation may lead to reduced human interactions and social isolation, potentially impacting mental health and well-being.
Existential Risk: Some researchers and thinkers express concern about the potential for Artificial Intelligence to surpass human intelligence, leading to existential risks that may require careful management and oversight.
Deepfakes and Misinformation: The rise of Artificial Intelligence-generated deepfake videos and misinformation can have significant societal consequences, leading to confusion, distrust, and the erosion of truth.
Artificial Intelligence and Politics: Concerns about the influence of Artificial Intelligence-driven misinformation on political discourse and election processes are prominent.
Artificial Intelligence and Education: The role of Artificial Intelligence in education and its potential impact on the workforce and learning outcomes are subjects of debate. Students may develop a shallower understanding of topics alongside a greater familiarity with using Artificial Intelligence to achieve the output they need. Will that change the level of preparation one needs to enter the workforce, or has the workforce changed enough that the skill of engineering the correct prompt is sufficient?
Artificial Intelligence and Climate Change: The potential for Artificial Intelligence to contribute positively to addressing climate change and sustainability is being explored, along with concerns about its environmental impact due to its energy-intensive processes and large carbon footprint.
Data Privacy and Security: The use of Artificial Intelligence involves significant data collection and processing, raising concerns about data privacy, cybersecurity, and potential misuse of personal information.
Artificial Intelligence Bias and Discrimination: Bias in Artificial Intelligence systems leads to discriminatory outcomes in areas such as hiring, lending, and criminal justice. These biases are the result of the datasets the AI is trained on and reflect the inequities of the past.
Job Displacement and Economic Impact: The potential for Artificial Intelligence and automation to displace jobs and impact certain industries is a significant societal concern. Re-skilling and workforce transition are important to ensuring that the gender, racial, and socioeconomic groups impacted first are not left behind as the world adapts to an AI-led workforce.
Artificial Intelligence and Surveillance: The increasing use of Artificial Intelligence for surveillance purposes, both by governments and private entities, raises ethical and privacy concerns.
Artificial Intelligence Content Moderation and Misinformation: The use of Artificial Intelligence for content moderation on social media platforms, and its role in combating misinformation and hate speech, raises challenges related to freedom of expression and algorithmic biases.
Artificial Intelligence in Criminal Justice: The use of Artificial Intelligence algorithms in predictive policing and sentencing, and the potential for bias and fairness issues, raises concerns about race-based discrimination.
In order to see how some of these concepts have manifested around the world, please feel free to explore some of the case studies provided below.
AI and Surveillance: China’s use of AI-driven surveillance and its social credit system, which scores citizens based on their behavior, raised significant ethical and privacy concerns.
AI and Consumer Manipulation: Exploring the ethical implications of AI-powered behavioral advertising and its potential to manipulate consumer behavior and choices.
AI and Social Media: Examining how AI algorithms can amplify online harassment and hate speech, leading to toxic online environments.
AI and Disinformation: Research on how AI can be used to create and propagate disinformation, leading to misinformation campaigns and their potential impact on society.
AI and Job Displacement: Examining the potential impact of AI and automation on low-wage workers and vulnerable populations, and the need for policies to address economic inequality.
As shown in these powerful examples, AI’s influence on social structures is profound. Understanding its impact on communication, relationships, and societal norms is essential. Being aware of and debating these changes will empower us to harness AI’s potential while mitigating its issues.
AI creates new challenges for our legal systems with questions arising that have never been considered before. From data privacy to responsibility for AI decisions, we need understanding and discussion in order to craft responsible laws.
The debates surrounding AI and law span the establishment of regulations to guide the development of AI in the first place, the protection of individual legal rights to safeguard against misuse, and the necessity of legal rulings when unforeseen infractions occur and accountability must be assigned. Keeping our laws current with the potential of the technology will help ensure AI is used safely and responsibly.
With a field as rapidly developing as AI, there are bound to be legal issues which arise from improper development or deployment, but with a greater awareness of the risks and regulatory measures put in place, we can begin to create precedent and avoid conflict in the future.
Liability and Accountability: Determining liability when Artificial Intelligence systems cause harm or make erroneous decisions without human intervention is a complex legal issue. Questions arise about who is responsible: the Artificial Intelligence developer, the user, or the Artificial Intelligence system itself.
Data Privacy and Ownership: The extensive use of Artificial Intelligence relies on vast amounts of data. Legal issues related to data privacy, consent, and ownership might become more complex as Artificial Intelligence systems collect and process personal information.
Intellectual Property: Legal challenges could emerge concerning Artificial Intelligence-generated works and inventions. Determining the ownership and copyright of Artificial Intelligence-generated content or patents could be contentious.
Regulation of Artificial Intelligence Technology: Developing a regulatory framework for Artificial Intelligence poses significant challenges. Striking a balance between fostering innovation and ensuring safety and ethical use is a complex legal endeavor.
Artificial Intelligence in Healthcare and Medicine: The use of Artificial Intelligence in medical diagnosis and treatment may lead to questions about medical malpractice, privacy, and the appropriate level of human oversight.
Artificial Intelligence in Criminal Justice: Artificial Intelligence systems might be used in predictive policing or risk assessment. Legal challenges could arise regarding the fairness, transparency, and potential bias of such systems.
Security and Hacking: The use of Artificial Intelligence in cybersecurity may lead to legal issues concerning the accountability of Artificial Intelligence-powered security measures and the potential misuse of Artificial Intelligence for hacking and cyberattacks.
Autonomous Weapons and Warfare: Concerns about Artificial Intelligence-driven weapons and their regulation could lead to international legal debates regarding the use of lethal autonomous systems in warfare.
Consumer Protection: Legal issues may arise related to the transparency and disclosure of Artificial Intelligence systems in consumer products and services, particularly concerning potential biases or misleading functionalities.
Robot Rights: As Artificial Intelligence and robotics become more advanced, there could be discussions about the legal status of Artificial Intelligence entities, their rights, and their responsibilities.
Data Privacy and Security: Data privacy laws, such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), are already in effect to regulate the collection, storage, and processing of personal data by Artificial Intelligence systems.
Algorithmic Bias and Discrimination: The potential for Artificial Intelligence algorithms to exhibit bias and discriminate against certain groups is a significant concern. Some jurisdictions are exploring ways to address algorithmic bias and ensure fairness in Artificial Intelligence decision-making.
Intellectual Property and Copyright: Questions about copyright and intellectual property rights arise as Artificial Intelligence-generated content, such as art, music, and writing, becomes more prevalent. Determining the ownership and protection of Artificial Intelligence-generated works is a complex legal issue.
Liability and Responsibility: The allocation of liability and responsibility for Artificial Intelligence actions and decisions is a challenging legal question, particularly in cases where Artificial Intelligence systems operate autonomously without direct human oversight.
Regulatory Frameworks: Countries and regions are in the process of developing regulatory frameworks to address the unique challenges posed by Artificial Intelligence. Some focus on safety regulations for Artificial Intelligence in specific domains, such as autonomous vehicles.
Autonomous Vehicles: Legal issues concerning the deployment of autonomous vehicles are under scrutiny, including liability in accidents and the responsibilities of manufacturers and developers.
Artificial Intelligence in Healthcare and Medicine: Legal considerations in the use of Artificial Intelligence for medical diagnosis, treatment recommendations, and patient care include issues related to liability, informed consent, and medical malpractice.
Intellectual Property Infringement: Legal disputes have arisen over the use of Artificial Intelligence technology for intellectual property infringement, such as using Artificial Intelligence to create counterfeit products or infringe on copyrighted content.
Artificial Intelligence in Criminal Justice: The use of Artificial Intelligence in predictive policing and risk assessment faces legal challenges related to transparency, fairness, and potential bias.
Autonomous Weapons: Discussions about the legal regulation and potential ban of lethal autonomous weapons are ongoing in international forums and organizations.
In order to see how some of these concepts have manifested around the world, please feel free to explore some of the case studies provided below.
Facial Recognition and Privacy: In 2020, Clearview AI, a facial recognition company, faced multiple lawsuits and legal challenges over privacy concerns. The company scraped billions of images from social media platforms without users’ consent, raising questions about data privacy and the use of facial recognition technology.
AI and Copyright Infringement: In a long-standing legal battle, Google and Oracle clashed over the use of Java APIs in Android’s operating system. The case raised important questions about the fair use of APIs and the intersection of copyright law and AI-driven software development.
AI and Election Interference: The Cambridge Analytica scandal in 2018 revealed how AI-driven data analytics were used to collect and manipulate user data for targeted political advertising, raising concerns about election interference and data privacy.
AI and Intellectual Property: In 2020, IBM filed a lawsuit against Airbnb, accusing the company of infringing on IBM’s patents with its AI-powered technologies. The case highlighted the legal complexities in AI patent disputes.
AI in Education: Proctorio, an AI-powered remote proctoring service, faced legal challenges related to data privacy and surveillance concerns. The software was used to monitor students during online exams, raising questions about privacy and data security.
Given the rapid development of AI and the array of cases currently open, the need for ongoing legal adaptation is clear. Laws that govern AI must be in a constant state of flux, adapting quickly and responsibly to new advancements and challenges. Staying informed will be crucial as AI tests legal boundaries in the future, ensuring that we maintain a balance between innovation and regulation.
While AI’s growth holds transformative potential, if we do not treat it with caution, it also has the potential to amplify biases, jeopardize privacy, and redefine human-machine interactions. As we grow along with AI, it is necessary to establish ethical frameworks and legal safeguards to ensure its safety and usefulness. Engaging in debates and educating ourselves will aid our ability to stay informed and advocate for development that guides AI towards a positive future.
Here are 10 key takeaways from this tutorial to keep in mind moving forward:
- AI’s Societal Impact: AI has a profound impact on our societal structures, personal interactions, and the role of bias, which makes debate and education critical.
- Workforce Evolution: AI’s potential role in hiring necessitates close attention to ensure a diverse and efficient workforce. Retraining is a key concern moving forward to prevent further inequity due to AI deployment.
- Legal Challenges: AI introduces new, complex legal considerations, from data privacy to legal accountability. Existing regulations, like GDPR, already target AI’s data handling, but evolving challenges demand constant legal adaptation.
- Regulation Debate: Balancing the establishment of AI regulations with safeguarding individual rights and addressing harmful violations is necessary.
- Liability & Accountability: Determining who is responsible when AI causes harm, whether the developer, the user, or the AI itself, is a pressing concern.
- Data Concerns: With AI’s heavy reliance on data, issues of privacy, ownership, and consent will arise with greater frequency as we integrate AI further into our daily lives.
- Intellectual Property: AI’s ability to create works of art and writing poses challenges in copyright, ownership, and intellectual property rights.
- Safety & Ethics in AI: There’s a critical need to strike a balance between AI innovation and its safe, ethical deployment.
- Algorithmic Bias: Addressing and preventing biases in AI decision-making is essential to avoid discriminatory outcomes.
- Future Outlook: As AI’s integration deepens, proactive efforts in awareness, regulation, and ethical considerations will determine its societal impact.
Interested in learning more?
For a broad look at the ethics surrounding Artificial Intelligence, look into “AI Ethics” by Mark Coeckelbergh. In this foundational work, Mark Coeckelbergh discusses topics such as machine autonomy, moral machine behavior, robot rights, and data ethics.
Coeckelbergh, Mark. AI Ethics. The MIT Press, 2020.
For those interested in the latest debates, discussions and developments related to AI, be sure to look into publications from institutions like the Stanford Institute for Human-Centered Artificial Intelligence (HAI) or the MIT Technology Review.
For those who prefer to listen to podcasts, In Machines We Trust thoughtfully examines the far-reaching impact of artificial intelligence on our daily lives.