Excerpt from course description

Responsible AI Leadership

Introduction

In the last few years, we have seen widespread adoption of artificial intelligence (AI) technologies by both private and public organizations. These technologies increasingly mediate our interactions with organizations, for example through newsfeeds, recommendations, diagnostics, and analytics. AI systems are also at the heart of many governments' efforts to reduce crime through automated policing, improve public health through precision medicine, and distribute welfare benefits more effectively, among others. With this increased adoption come risks associated with lack of transparency, discrimination, manipulation, and threats to democratic processes. Companies and public agencies must therefore navigate evolving accountability demands and stay aware of developing regulation surrounding the use of AI and related technologies.

In this course, participants will explore the ethical, normative, and societal implications of AI, along with mechanisms to ensure that AI systems remain accountable and advance the social good. Combining theoretical foundations from data ethics, law and governance, and real-world inquiry, participants will build their ethical imagination and skills for responsible use of AI. The course places as much emphasis on the potential benefits of AI technologies (AI for good) as on the challenges.

To this end, the course will examine legal, policy, and ethical issues that arise throughout the full lifecycle of data science, digital platforms, and autonomous AI systems, from data collection to storage, processing, analysis, and use. These issues include privacy, surveillance, security, classification, discrimination, decisional autonomy, and duties to warn or act.

Practically, using case studies, participants will explore current applications of quantitative reasoning in organizations, algorithmic transparency, and the unintended automation of discrimination via data that contains biases rooted in race, gender, class, and other characteristics. The cases will be considered in light of existing and proposed regulations in the European Union (EU), such as the General Data Protection Regulation (GDPR), the proposed AI Act, and the proposed Data Governance Regulation. The course will also introduce students to the global regulatory landscape and to ongoing efforts to make AI more accountable and to work towards sustainable implementations of it.

Classes will be conducted in English (the term paper may be written in Norwegian).

Course content

1. Foundations of AI and Data Governance

  • The Nature of Intelligence
  • The History and Core Concepts of Artificial Intelligence
  • Recent Developments and Impact of Artificial Intelligence
  • The AI Life Cycle
  • The Nature and Pitfalls of Data
  • Navigating the Realities and Tradeoffs in Data Science


2. Practices of Responsible AI Leadership

  • AI Accountability, Transparency and Explainability
  • AI Risk Management
  • AI and Global Perspectives
  • AI and Public Service Perspectives
  • Governance frameworks for AI Implementations


3. AI Regulation and Governance

  • Risks and Benefits of Regulation
  • Regulations governing AI
  • Regulation through AI
  • Upcoming regulatory developments
  • Participation in Regulation – AI Sandboxes


4. AI for Good

  • Open-Sourcing and Communities of Practice
  • AI Impact Assessment
  • Co-Design and Stakeholder Engagement with and around AI
  • Tackling Grand Challenges with AI

Disclaimer

This is an excerpt from the complete course description for the course. If you are an active student at BI, you can find the complete course description, with information on e.g. learning goals, learning process, curriculum and exam, at portal.bi.no. We reserve the right to make changes to this description.