How the AI Act Will Shape the Future of Emergency Services

The new EU Artificial Intelligence Act (under its official, less catchy name: “Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence”) was published this July and entered into force on 1 August 2024 (but don’t worry, most of the rules only start applying from 2026). Rather than setting rigid rules on the use of artificial intelligence (AI), the legislation aims to establish a framework to maintain control over AI and to prompt reflection on how it is used, particularly in high-risk areas like emergency services. As the use of AI grows rapidly in Public Safety Answering Points (PSAPs), as in the rest of society, emergency services organisations and other public safety actors will need to review some of their practices to ensure compliance with the new rules. Let’s take a brief look at them.

As listed in Annex III, AI systems used to “evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems” are considered “high-risk AI systems” under the new legislation. These systems are not banned, but a specific framework will need to be put in place to ensure they remain under control. The obligations apply to all actors involved with these AI systems, including users (“deployers” in the Act’s terminology), developers (“providers”), distributors and importers.

If any of these actors are reading this blog post: do not worry! The obligations are relatively light and largely formalise processes that are often already in place or simply make sense. However, significant work may still be needed to monitor, regularly test and document those processes. The core obligations include:

- establishing a risk management system to anticipate and mitigate the risks that an AI system may pose;
- using high-quality data (relevant, representative and free of biases) to ensure that the AI does not lead to discrimination in the handling of a call;
- maintaining human oversight over the AI via human-machine interfaces;
- carrying out post-market monitoring to keep track of any adverse effects or unexpected behaviour;
- ensuring the cybersecurity of the AI system.

Other rules imposed on high-risk AI systems include the availability of technical documentation, record-keeping, the provision of instructions for use to the users of AI systems, and cooperation with competent authorities. In addition, users of high-risk AI systems that are public bodies, or private entities providing public services, will need to perform a fundamental rights impact assessment to evaluate and mitigate the risks that the AI system may pose to people’s fundamental rights, such as privacy, non-discrimination or freedom of expression.
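To make two of these obligations, human oversight and record-keeping, a little more concrete, here is a minimal sketch in Python of how a PSAP could keep a dispatcher in the loop over an AI triage suggestion while logging every decision for later review. Everything in it (the `suggest_priority` stub, the `DispatchRecord` fields, the JSONL audit file) is a hypothetical illustration of the pattern, not a format mandated by the Act.

```python
import json
import time
from dataclasses import dataclass, asdict

def suggest_priority(call_transcript: str) -> dict:
    """Stand-in for a deployed triage model; a real PSAP would call its AI system here."""
    score = 0.9 if "not breathing" in call_transcript.lower() else 0.3
    return {"priority": "P1" if score > 0.5 else "P3", "confidence": score}

@dataclass
class DispatchRecord:
    """One audit entry: what the AI suggested, what the human decided, and when."""
    timestamp: float
    transcript: str
    ai_suggestion: dict
    human_decision: str
    overridden: bool

def handle_call(transcript: str, log_path: str = "dispatch_audit.jsonl") -> str:
    suggestion = suggest_priority(transcript)

    # Human oversight: the dispatcher confirms or overrides the AI suggestion.
    # (In production this would be a human-machine interface, not a console prompt.)
    answer = input(
        f"AI suggests {suggestion['priority']} "
        f"(confidence {suggestion['confidence']:.0%}). "
        "Press Enter to accept or type a different priority: "
    ).strip()
    decision = answer or suggestion["priority"]

    # Record-keeping: append an auditable trace of the suggestion and the decision.
    record = DispatchRecord(time.time(), transcript, suggestion,
                            decision, decision != suggestion["priority"])
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return decision

if __name__ == "__main__":
    handle_call("Caller reports a person who is not breathing")
```

The point of the design is that the AI never dispatches on its own: the human decision is the output, every override is recorded, and the resulting log is exactly the kind of trace that post-market monitoring can later draw on.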

Elsewhere in the legislation, a few practices are banned outright. These include the real-time use of facial recognition in publicly accessible spaces, except in a few specific cases (such as searching for a missing child or preventing a terrorist attack), and practices reminiscent of a Black Mirror episode, such as government social scoring of individuals or manipulating people’s behaviour to cause harm. Finally, the regulation imposes several transparency obligations to ensure that anyone interacting with an AI system is aware of the nature of the interaction. This applies, for instance, to AI-based chatbots.
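As a purely illustrative sketch of that transparency obligation, the hypothetical chatbot below announces that it is an AI system before answering anything; the wording and structure are assumptions of this example, not language prescribed by the Act.

```python
AI_DISCLOSURE = ("Hello! You are chatting with an automated AI assistant, "
                 "not a human operator.")

def chatbot_session(messages: list[str]) -> None:
    """Tiny chatbot loop that leads with an AI disclosure before any reply."""
    print(AI_DISCLOSURE)  # make the nature of the interaction clear up front
    for user_message in messages:
        # Placeholder response logic; a real system would call a model here.
        print(f"Bot: I received your message: {user_message!r}")

chatbot_session(["Where can I find information about European emergency numbers?"])
```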

Rather than completely changing how we use artificial intelligence, the AI Act will encourage a new mindset, prompting us to consider carefully how we use AI and how we can maintain control over it. This does not mean, however, that concerned organisations have no work ahead of them: many operations and processes will have to be monitored, tested and documented. The regulation may not be sufficient to address all the questions AI will pose in the future, but it deserves some credit for existing. Just as the General Data Protection Regulation (GDPR) forced us to reflect on how we process people’s data, the AI Act will prompt us to reflect on how we use AI.

Benoit Vivier
Public Affairs Manager at EENA
