1.1. Overarching AI policy initiatives

In the last few years, Artificial Intelligence (AI) has gained prominence among individuals, businesses, academia and governments at both international and national levels. Think-tanks, companies and civil society have developed numerous toolkits, position papers and initiatives that focus on ethical principles for AI.
A key player in the AI debate is the European Union (EU). It is important to note that existing EU legislation (on safety and liability, fundamental rights, consumer protection and data protection) continues to apply in relation to AI, although certain updates to those frameworks may be necessary to reflect the digital transformation and the use of AI.

A Plethora of Policy Initiatives on AI

Given their transversal impact, AI systems have drawn scrutiny from a broad range of actors: international institutions, governments and stakeholders such as civil society, academia and the private sector. As a result, numerous and diverse initiatives in the field of AI exist, and it is not always easy to find one's way in this tangled landscape. Below we provide an overview of prominent actors active in this field.

1.1.1. International Institutions initiatives

There are multiple international policy initiatives that influence the European Union or have an impact on its Member States. Below is a brief overview of some of the initiatives working on AI in general and on AI and the media.

The Organisation for Security and Co-operation in Europe – OSCE
The OSCE Representative on Freedom of the Media (RFoM) has set a specific focus on AI and freedom of expression and developed projects around it, such as #SAIFE, its spotlight initiative on AI and freedom of expression. In 2022, the SAIFE Policy Manual was launched; it provides comprehensive policy guidance for States to ensure that online information spaces are in line with international human rights standards and realise the key principles of transparency, accountability and public oversight. Earlier, in December 2020, the RFoM released a policy paper on freedom of the media and AI, and in April 2021 it published a policy paper on AI and freedom of expression in political competition and elections.
The Organisation for Economic Co-operation and Development – OECD
The OECD Artificial Intelligence Policy Observatory gathers information on AI from various sources and facilitates dialogue between stakeholders, while providing multidisciplinary, evidence-based policy analysis in the areas where AI has the biggest impact. Its website contains extensive information on research, collaboration and policy initiatives on AI, including sections on AI initiatives in different countries, statistics and trends about AI, and specific policy focus areas and how AI affects them. It also hosts the OECD AI Principles, which focus on how governments and other actors can shape a human-centric approach to trustworthy AI. They were adopted in May 2019 as an OECD legal instrument via the Recommendation of the Council on Artificial Intelligence, to which 46 countries have adhered. In February 2022, the OECD Framework for the Classification of AI Systems was adopted. It is a user-friendly framework, in line with the OECD AI Principles, to guide policymakers, regulators, legislators and others as they characterise AI systems for specific projects and contexts, in order to support risk assessment and facilitate policy deliberations.
The United Nations – UN
The United Nations opened a centre on Artificial Intelligence and Robotics, which serves as a hub of expertise on AI. The International Telecommunication Union (ITU), the UN's specialised agency for information and communication technologies, has also become a key actor in assessing AI's impact. For several years, ITU has been organising the AI for Good summits, which focus on how AI can accelerate the achievement of the UN Sustainable Development Goals. ITU also publishes a journal that regularly covers AI issues and manages an AI repository identifying AI-related projects, research initiatives, think-tanks and organisations. In April 2021, the United Nations released a Resource Guide on Artificial Intelligence Strategies, laying out existing resources on AI ethics, policies and strategies at the national, regional and international levels.
Council of Europe – CoE
In September 2019, the Committee of Ministers of the Council of Europe created an Ad Hoc Committee on Artificial Intelligence (CAHAI), which was succeeded in 2022 by the Committee on Artificial Intelligence (CAI). The CAHAI webpage contains a collection of relevant material from the Council of Europe, a collection of publications by scholars on AI, and a data visualisation of AI initiatives categorised by the subject of, or the entity responsible for, the initiative. The CAI now works to elaborate an appropriate legal framework on the development, design and application of artificial intelligence, based on the Council of Europe's standards on human rights, democracy and the rule of law, and conducive to innovation. This framework can be composed of a binding legal instrument as well as additional binding or non-binding instruments to address challenges relating to the application of artificial intelligence in specific sectors. Other relevant Council of Europe work includes a recommendation on the human rights impacts of algorithmic systems, a declaration on the manipulative capabilities of algorithmic processes, guidelines on facial recognition, and a conference on AI and the challenges and opportunities for media and democracy. One of the conference topics was the impact of AI-powered technologies on freedom of expression, reflecting on a background paper written in 2020; a declaration and resolutions were adopted to conclude the conference. A guidance note on content moderation also deals with AI. Finally, in April 2022, a recommendation on the impacts of digital technologies on freedom of expression was adopted.

1.1.2. National Initiatives

At the national level, policy initiatives have flourished all over Europe, and it is difficult, if not impossible, to keep track of all of them. As outlined above, however, some organisations, such as the OECD AI Policy Observatory and the Council of Europe, provide mappings of existing AI initiatives, including national ones.
So far, we can underline the following policy-making trends:
– The rapid emergence of regulatory instruments, including self-regulation, sandboxes, national AI strategies and plans, and studies assessing how AI systems can comply with, and coexist alongside, existing legislation.
– Legislative proposals that have so far focused primarily on regulating the use of automated vehicles.

1.1.3. Stakeholder Initiatives

Stakeholder initiatives are also countless, and the various sectors are extremely active in producing policy documents, reports, studies and surveys, from the private sector, think-tanks, academia and civil society (including NGOs) to professional associations, the technical community and trade unions.

Existing AI Policy Initiatives at the European level

In the last four years, there have been a variety of new publications, guidelines and political declarations on AI from various EU bodies which apply horizontally, namely:

1.1.4. Communication on Artificial Intelligence for Europe

On 25 April 2018, the European Commission (EC) issued a Communication on Artificial Intelligence for Europe. The aim of the Communication is to embrace the idea that AI is transforming the world, society and European industry.
The Communication focuses on the following elements. Firstly, it addresses the EU's position in the fierce global competition in the AI landscape, especially in light of US and Chinese investment in the field. The Communication then sets out a European initiative on AI, which aims to boost the EU's technological and industrial capacity and AI uptake across the economy; prepare for the economic changes brought about by AI; and ensure an appropriate ethical and legal framework, based on the Union's values and in line with the Charter of Fundamental Rights of the EU. Finally, the EC encourages Member States to engage in the coordinated plan on AI to share best practices, identify synergies, align actions and fuel the emergence of AI start-ups, while avoiding fragmentation of the single market.

1.1.5. Coordinated Plan on Artificial Intelligence

Following up on the April 2018 Communication, the Commission presented in December 2018 a coordinated plan, prepared with Member States, to foster the development and use of AI in Europe. This plan builds on the idea that coordination at the European level is essential to ensure the successful uptake of AI. It proposes joint actions for closer and more efficient cooperation between Member States, Norway, Switzerland and the Commission in four key areas:
1. Maximise investments
2. Make more data available
3. Nurture talent, skills and life-long learning
4. Develop ethical and trustworthy AI

1.1.6. White Paper on Artificial Intelligence

On 19 February 2020, the EC released three key documents which set out its vision for the digital economy and its recommendations for digital policymaking over the following five years: i) the European data strategy; ii) the Report on safety and liability implications of AI, the Internet of Things and Robotics; and iii) the White Paper on fostering trust and excellence in Artificial Intelligence. The purpose of the White Paper on AI is to outline a strategy for developing a common European approach to trustworthy AI. The document analyses the strengths and weaknesses of the EU on AI, but also the opportunities that AI can bring to the EU in the global market. The plan is based on European values and fundamental rights, including human dignity and the right to privacy, but also on the sustainability dimension. The White Paper points out that AI will be key to meeting the European Green Deal goals and underlines that the environmental impact of AI systems needs to be duly considered throughout their lifecycle, covering not only design but also data storage, resource use and the waste management of AI system components.
To reach these goals, the AI strategy outlined in the White Paper details policy actions to be undertaken to support the development and uptake of AI, including increased investment, improved access to data, and the creation of a future regulatory framework addressing the risks associated with AI technology. This last action point materialised with the release of the AI Act proposal in April 2021. The White Paper sets out different policy and regulatory options for achieving these objectives:

A. Create an ecosystem of excellence
– Show leadership in AI at the European, national and regional levels
– Increase investment in AI, attract and retain talent
– Align with the data strategy, as data are key for AI system design, development and deployment

B. Create an ecosystem of trust
– Acknowledge and consider the risks associated with AI systems
– Establish a flexible and future-proof definition of AI
– Design a new legal framework following a risk-based approach
– Create mandatory legal requirements building on the High-Level Expert Group's Guidelines on Trustworthy AI
– Ensure compliance and enforcement at the European and national level
– Set up a European governance structure on AI based on cooperation with national authorities

Authors:
Lidia Dutkiewicz, Emine Ozge Yildirim, Noémie Krack from KU Leuven
Lucile Sassatelli from Université Côte d’Azur