Demystify AI regulations and explainability

19 March 2024 - ICT

To demystify the future of AI regulation, we talked to Samuel Renault, Head of AI & Data Analytics Lab Pole at the Luxembourg Institute of Science and Technology (LIST).

Can you briefly explain how the topics of AI explainability and AI regulation are relevant to your work at LIST? How are the two topics related?

The future EU regulation on AI (the AI Act) will require that AI systems presenting risks to EU citizens (including risks related to their human rights) provide transparency about their behaviour and the results they produce. This will require AI model developers to build more explainability into their design and implementation choices. Ultimately, it opens a field for the research and development of technologies that explain AI models, a field in which we are active at LIST.

How do you propose striking a balance between innovation in AI and implementing effective regulations to address ethical concerns and potential biases the technology brings?

First, even though it regulates the deployment and use of AI systems in risky contexts, the future AI Act explicitly leaves room for innovation through a concept called the regulatory sandbox. This should be seen as an environment where new AI applications can be observed and analysed by regulators and technical experts to determine how regulations have to adapt to new technology. At LIST we have the technology infrastructure to set up this environment and the technical expertise to perform this kind of analysis. Concerning bias and ethical concerns, we have started to address the issue with a first prototype that analyses and reduces the biases of AI models used to support recruitment processes. We are also about to release a public leaderboard that assesses the social biases of the most important Large Language Models on the market today.
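
To make this kind of bias analysis a little more concrete, here is a minimal, purely illustrative sketch (not the LIST prototype or leaderboard) of how a simple fairness metric, the demographic parity difference, could be computed from the decisions of a hypothetical recruitment-support classifier.

```python
# Illustrative sketch only: a simple fairness metric (demographic parity
# difference) for a hypothetical recruitment-support classifier.
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: 0/1 model outputs (1 = candidate shortlisted)
    groups: group label for each candidate (exactly two distinct labels)
    """
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b), rates

# Made-up decisions for two candidate groups 'A' and 'B'
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(decisions, groups)
print(rates)  # shortlisting rate per group
print(gap)    # 0.0 means equal rates; larger values indicate disparity
```

A bias-mitigation step would then try to drive such gaps towards zero, for example by reweighting training data or adjusting decision thresholds, while keeping an eye on the model's overall accuracy.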

In terms of AI explainability, various approaches, such as interpretable machine learning models and model-agnostic techniques, are being explored. Can you discuss the pros and cons of such methods and share insights into their practical applications in real-world scenarios?

For sure, there is no single technique that can cover the whole spectrum of AI models and use cases. The choice depends, for example, on intrinsic characteristics of the model, e.g. whether it is accessible (white-box) or not (black-box), and on the scope of the explanation we are looking for, e.g. understanding the behaviour of the model in all situations (global explainability) or in very specific cases (local explanations). At LIST we actively follow the development of all these techniques (more than 30 are tooled and available today), and we use them in our own developments, such as the improvement of smart telescopes with AI models. We also plan to showcase these techniques on the testing facilities we operate at LIST for the DIH of Luxembourg and the EU AI testing facility for smart cities. This allows us to support requests concerning the choice of the best-suited explainability technique for a given context of AI model development.
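
As a purely illustrative example of a model-agnostic, global explainability technique (not necessarily one of the specific tools referred to above), the sketch below applies permutation feature importance to a black-box classifier using scikit-learn: the model is queried only through its predictions, and the importance of each feature is estimated from how much shuffling it degrades the model's accuracy.

```python
# Sketch of a model-agnostic, global explainability technique:
# permutation feature importance. The model is treated as a black box;
# only its predictions on held-out data are needed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any fitted estimator could stand in here; its internals are never inspected.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: mean drop in test accuracy when each feature is permuted.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in zip(X.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Local, black-box counterparts such as LIME or SHAP follow the same spirit but explain individual predictions rather than the model's overall behaviour, which corresponds to the distinction between global and local explainability made above.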

A huge thanks to Samuel for this insight. And if you are curious about the potential impact of AI on our jobs or lives in the coming years, gain free access to weekly chapters of empowering knowledge with “Elements of AI”, a free online course developed by MinnaLearn and the University of Helsinki, now available in Luxembourg: click here.


This article is also available at: https://delano.lu/article/demystify-ai-regulations-and-e 

