Blog Post

Ethics and artificial intelligence

Machine learning and artificial intelligence (AI) systems are rapidly being adopted across the economy and society. Early excitement about the benefits of these systems has begun to be tempered by concerns about the risks that they introduce.

Date: December 21, 2018 Topic: Innovation & Competition Policy

1. Introduction

Machine learning and artificial intelligence (AI) systems are rapidly being adopted across the economy and society. These AI algorithms, many of which process fast-growing datasets, are increasingly used to deliver personalised, interactive, ‘smart’ goods and services that affect everything from how banks provide advice to how chairs and buildings are designed.

There is no doubt that AI has huge potential to facilitate and enhance a large number of human activities, and that it will provide new and exciting insights into human behaviour and cognition. The further development of AI will boost the rise of new and innovative enterprises and result in promising new services and products in – for instance – transportation, health care, education and the home environment. These technologies may transform, and even disrupt, the way public and private organisations currently work and the way our everyday social interactions take place.

Early excitement about the benefits of these systems has begun to be tempered by concerns about the risks that they introduce. Concerns that have been raised include possible lack of algorithmic fairness (leading to discriminatory decisions), potential manipulation of users, the creation of “filter bubbles”, potential lack of inclusiveness, infringement of consumer privacy, and related safety and cybersecurity risks. There are also concerns over possible abuse of dominant market position,[1] for instance if big data assets and high-performing algorithms are used to raise barriers to entry in digital markets.

It has been shown that the public – in the widest sense, including producers and consumers, politicians, and professionals of various stripes – does not understand how these algorithms work. For example, Facebook users have been shown to hold quite mistaken ideas about how algorithms shape their newsfeeds (Eslami et al.).[2] At the same time, the public is broadly aware that algorithms shape how messages are tailored to and targeted at them – for example, in the case of news or political information, and of online shopping. Algorithms also shape the logistics of vehicles, trades in financial markets, and assessments of insurance risks.

To date, however, by far the most common and visible use of algorithms has been in messages that target people directly. To build awareness among a broad public, therefore, the discussion cannot avoid the platforms that affect everyone. The two domains, shopping and news (or political information: whether some non-news dissemination can be counted as ‘news’ is precisely what is at issue in algorithmically disseminated ‘fake news’), are also relatively long-established.

But it is not only the public that does not understand how algorithms work. Many AI experts themselves are painfully aware that they cannot explain how algorithms based on deep learning and neural networks reach their decisions. Hence there is also considerable concern among AI experts about the unknown implications of these technologies. They call for opening up this black box: from this perspective, the explainability of algorithms is one of the key priorities in this field.[3]
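
As a concrete illustration of what opening up the black box can mean in practice, the short sketch below applies permutation feature importance – one simple, model-agnostic explainability technique – to an opaque classifier. The dataset, model and library choices are illustrative assumptions, not something described in this post.

```python
# A minimal sketch of one model-agnostic explainability technique:
# permutation feature importance. The dataset and model are illustrative
# assumptions; any opaque model could be substituted.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an ensemble model that acts as our "black box".
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f}")
```

Techniques like this do not fully explain a deep network, but they give users and regulators a first, inspectable account of which inputs drive a model's decisions.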

Furthermore, the application of AI in robotics has created numerous new opportunities, but also challenges. The extensive use of industrial robots in production has already been raising productivity for decades. The introduction of smart robots will only accelerate this trend and transform employment conditions in unpredictable ways.

The introduction of autonomous vehicles promises smart and efficient (urban) transportation systems. However, autonomous vehicles also raise ethical issues related to the decision-making processes built into their hardware and software. A widely used example is the case of an unavoidable accident, where the autonomous car must choose in an instant whether or not to sacrifice its occupants to protect pedestrians.

An area of immediate concern is the possible use of AI technology to develop lethal autonomous weapons. As illustrated very graphically by the video “Slaughterbots” (see autonomousweapons.org), it is conceivable today that drones equipped with AI software for navigation and face recognition could be turned into cheap lethal weapons capable of acting completely autonomously. Allowing such weapons to become reality would likely have catastrophic consequences on a global scale.

In terms of ethical challenges, AI and robotics raise unprecedented questions. Given the increasing autonomy and intelligence of these systems, we are not just talking about societal implications that merely call for new ethical and legal frameworks. As the boundaries between human subjects and technological objects virtually disappear in AI, these technologies affect our fundamental understanding of human agency and moral responsibility. Who bears responsibility for AI behaviour is a complex ethical issue. What is needed is a shared or distributed responsibility between developers, engineers, industry, policymakers and users. And last but not least, we will also need to take into account the moral responsibility of the technology itself, as it develops towards increasingly autonomous behaviour and decision-making.

2. Policy response

The breakneck pace of development and diffusion of AI technologies urgently requires suitable policies and regulatory infrastructures to monitor and address the associated risks, including the concern that vast swaths of the economy and society might end up locked in to sub-optimal digital infrastructures, standards and business models. Addressing these challenges requires access to better data and evidence on the range of potential impacts, sound assessment of how serious these problems might be, and innovative thinking about the most suitable policy interventions to address them, including anticipatory and algorithmic regulation strategies that turn big data and algorithms themselves into tools for regulation. We need to adopt a more balanced approach that also considers ‘the human factor’ and the proper place of AI in our democratic society. And for this we need a trans-disciplinary research agenda that enables the building of knowledge on which a responsible approach to AI can flourish.

However, the research community concerned with algorithms is diffuse. Different academic disciplines study these issues from a variety of perspectives: technical, social, ethical, legal, economic, and philosophical. This work is incredibly important, but the lack of a shared language and common methods makes discourse, synthesis, and coordination difficult. As a result, it has become nearly impossible for policymakers to process and understand this avalanche of research and thinking, to determine which algorithmic risks are already being tackled through technical measures or better business practices, and which remain relatively underserved.

‘Formal’ policy interventions and regulatory frameworks are unlikely to be enough to steer an increasingly algorithmic society in desirable directions. Corresponding changes are likely also needed in the behaviour of day-to-day users of algorithmic services and platforms, whose choices ultimately determine the success or failure of online platforms, products and services. A better understanding of the risks and hidden costs of AI decision-making could inform those choices. This could in turn foster social norms that uphold regulation and make it more effective. Europe should take the lead in developing the codes of conduct and the regulatory and ethical frameworks that guide the AI community in developing ‘responsible AI’.[4]

3. Recommendations

  1. Adopt transparency-by-design principles for how the input data of AI algorithms is collected and used. Algorithmic bias is often inherited from input data that does not represent the target population well and thus skews outcomes against specific categories of people. Transparency over how data is collected in algorithmic decision-making systems is necessary to ensure fairness (a minimal sketch of such an audit follows this list).
  2. Invest in research on explainable AI, to increase the transparency of algorithmic systems. Many AI systems are based on deep-learning techniques in which the intermediate layers between the input data and the algorithmic output are treated as a “black box”. Explainable AI can contribute substantially to understanding how these automated systems work; the permutation-importance sketch in the introduction is one simple example of this line of research.
  3. Integrate technology assessment (TA) into AI research. Prospective policy research such as TA creates awareness of the potential societal and ethical impacts of AI at an early stage of development rather than after the fact, building both awareness of unintended consequences within the AI community and agility among policymakers.
  4. Increase public awareness. As AI algorithms penetrate more and more of our lives, people should be well informed about their usefulness and potential risks. Educational and training programmes can be designed for this purpose. In this way, individuals will not only be aware of the dangers but will also be able to maximise the value of using such systems. In addition, public discussions at the local level on the implications of AI systems should be organised.
  5. Develop regulatory and ethical frameworks for distributed responsibility. These frameworks should include clear standards and recommendations on liability rules that protect both users and manufacturers through efficient and fair risk-sharing mechanisms.
  6. Develop a consistent code of ethics, in the EU and at the international level, based on shared European values, that can guide AI developers, companies, public authorities, NGOs and users. Authorities, large professional organisations (e.g. the Partnership on AI) and NGOs should work together closely and systematically to develop a harmonised code of ethical rules that AI systems satisfy by design.
  7. Experimentation. As with clinical trials of new medicines, AI systems should be repeatedly tested and evaluated in well-monitored settings before their introduction to the market. Such experiments should demonstrate clearly that the interaction between individuals and AI systems (e.g. robots) satisfies standards of human safety and privacy, and should indicate how the design of AI systems needs to be modified in order to satisfy these principles.
  8. Ban lethal autonomous weapons. Europe should be at the forefront of efforts to ban the development of lethal autonomous weapons, including by supporting the relevant United Nations initiatives.
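
To make recommendation 1 more concrete, the sketch below shows one minimal form such a data audit could take: comparing each group's share of a training dataset against a reference population and flagging under-representation. The column names, reference shares and tolerance threshold are hypothetical choices for illustration, not prescriptions from this post.

```python
# A minimal sketch of a pre-training data audit in the spirit of
# recommendation 1. Column names, reference shares and the tolerance
# threshold are hypothetical choices for illustration.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          reference_shares: dict,
                          tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share of the dataset against a reference
    population and flag groups that fall short by more than `tolerance`."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "dataset_share": round(share, 3),
            "reference_share": expected,
            "under_represented": share < expected - tolerance,
        })
    return pd.DataFrame(rows)

# Toy training set that over-samples one group.
data = pd.DataFrame({"gender": ["f"] * 200 + ["m"] * 800})
print(representation_report(data, "gender", {"f": 0.5, "m": 0.5}))
```

A real audit would extend this with group-wise label rates and outcome comparisons, but even a check this simple makes the representativeness of the input data transparent before a model is trained.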

Note: the authors have participated in the CAF / DG Connect Advisory Forum.

Footnotes:

[1] See Ariel Ezrachi and Maurice E. Stucke (2016), Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, Harvard University Press.

[2] Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K. and Kirlik, A. (2016), ‘First I “like” it, then I hide it: Folk theories of social feeds’, Human Factors in Computing Systems Conference (CHI). See also Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K. and Sandvig, C. (2015), ‘“I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed’, Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems, Association for Computing Machinery (ACM): 153-162.

[3] See for instance: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

[4] See for instance V. Dignum (2017), ‘Responsible Artificial Intelligence: Designing AI for Human Values’, ITU Journal: ICT Discoveries, Special Issue No. 1, September 2017.


