Introduction
In the last few years, we have seen widespread adoption of artificial intelligence (AI) technologies by both private and public agencies. These technologies increasingly mediate our interactions with organizations, for example through newsfeeds, recommendations, diagnostics, and analytics. AI systems are also at the heart of many governments’ efforts to reduce crime through automated policing, improve public health through precision medicine, and distribute welfare benefits more effectively, among others. With this increased adoption come risks associated with lack of transparency, discrimination, manipulation, and dangers to democratic processes. Thus, companies and public agencies must increasingly navigate evolving accountability demands and stay aware of developing regulation surrounding the use of AI and related technologies.
In this course, participants will explore the ethical, normative, and societal implications of AI, along with mechanisms to ensure that AI systems remain accountable and advance the social good. Combining theoretical foundations from data ethics, law, and governance with real-world inquiry, participants will build their ethical imagination and the skills needed for responsible use of AI. The course will place as much emphasis on the benefits of AI technologies (AI for good) as on the challenges.
To this end, the course will examine legal, policy, and ethical issues that arise throughout the full lifecycle of data science, digital platforms, and autonomous artificial intelligence systems, from data collection to storage, processing, analysis, and use. Topics include privacy, surveillance, security, classification, discrimination, decisional autonomy, and duties to warn or act.
Practically, using case studies, participants will explore current applications of quantitative reasoning in organizations, algorithmic transparency, and the unintended automation of discrimination through data that contains biases rooted in race, gender, class, and other characteristics. The cases will be considered in light of existing and proposed regulations in the European Union (EU), such as the General Data Protection Regulation (GDPR), the proposed AI Act, and the proposed Data Governance Regulation. The course will also introduce students to the global regulatory landscape and to ongoing efforts to make AI more accountable and to work towards sustainable implementations of it.
Classes will be conducted in English (the term paper may be written in Norwegian).