AI Oversight Lab: Developing trustworthy AI algorithms for public authorities

Developing and applying trustworthy artificial intelligence (AI) is essential for increasing people's confidence in AI. The challenge is to make existing algorithms more reliable and ethically responsible, among other things by using data well, increasing transparency, reducing bias, and creating a clear picture of how effective an algorithm is and how it is embedded in the broader application process.

This should include translating the existing ethical guidelines for trustworthy AI into best practices and specific tools. Local authorities can then put these guidelines into practice when using AI methods, and software companies can also use them when developing AI methods further.
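By way of illustration, the minimal Python sketch below shows one form such a tool could take: a simple checklist that maps the seven requirements from the EU's Ethics Guidelines for Trustworthy AI to concrete yes/no questions. The class names and example questions are illustrative assumptions, not the project's actual deliverables.

```python
# Minimal sketch of a "Trustworthy AI" checklist tool.
# The seven requirement areas follow the EU High-Level Expert Group's
# Ethics Guidelines for Trustworthy AI; the example questions are
# illustrative placeholders, not the project's actual assessment items.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class ChecklistItem:
    requirement: str           # one of REQUIREMENTS
    question: str              # concrete yes/no question for practitioners
    answered_yes: bool = False
    notes: str = ""

@dataclass
class Assessment:
    algorithm_name: str
    items: list = field(default_factory=list)

    def add(self, requirement: str, question: str) -> None:
        self.items.append(ChecklistItem(requirement, question))

    def open_items(self) -> list:
        """Items that still need attention before deployment."""
        return [i for i in self.items if not i.answered_yes]

# Example usage with hypothetical questions:
assessment = Assessment("benefit-fraud risk model")
assessment.add("Transparency",
               "Can each risk score be explained to the client?")
assessment.add("Diversity, non-discrimination and fairness",
               "Have selection rates been compared across demographic groups?")
print(f"{len(assessment.open_items())} open items remain")
```

A tool along these lines makes the abstract guidelines concrete enough for a municipality to track, per algorithm, which requirements have been addressed and which still need work.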

Use of artificial intelligence

The work on innovation in this project has largely been about ‘Trustworthy AI’, paying attention to both the theory (in the form of guidelines and criteria) and the practice (concrete cases from public authorities and the development of practical tools).

What challenge does it solve?

The ultimate aim is to improve confidence in AI algorithms by presenting specific, working examples of trustworthy AI, focusing among other things on raising awareness, limiting risks, and being able to explain and audit AI algorithms. The current guidelines for developing and applying trustworthy AI are abstract and often awkward to apply in the specific context of, for instance, municipalities, inspectorates and other public authority bodies. The project therefore takes a two-pronged approach: investigating specific cases from practice (such as the AI methods used by municipalities) and making recommendations based on them, and translating the theory and ethical guidelines into best practices and concrete tools.

What will the use case teach us?

The results envisaged include formulating and developing:

• Policy for the use of AI algorithms.
• Criteria for reliable AI tools, along with a collection of the best practices associated with such tools.
• Prototypes of these AI tools that comply with the ‘Trustworthy AI’ principles and guidelines.

Work is also being done on a ‘community of practice’ in which authorities and other partners participate so that they can learn from one another about reliable applications of AI.

First result

The Municipality of Nissewaard deploys an algorithm for assessing the risk of abuse or improper use of social assistance by recipients. The specific aim of this algorithm is to replace the previously used general periodic check-up with supervision that is focused as much as possible on clients who warrant attention (also referred to as risk-driven enforcement). The Municipality of Nissewaard considers it important that the algorithm is used correctly and also wants to take a critical look at its own working methods. It has therefore asked TNO to identify possible risks regarding the algorithm and its use, and to perform a technical-substantive evaluation of the AI algorithm. A final report on this evaluation has been prepared by TNO.
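By way of illustration only, and not as a description of the Nissewaard algorithm or TNO's evaluation, the Python sketch below shows one kind of check such a technical evaluation could include: training a simple risk model on synthetic data, flagging the highest-risk cases for manual inspection (risk-driven enforcement), and comparing how often two demographic groups are flagged.

```python
# Illustrative sketch only: NOT the Nissewaard algorithm or TNO's evaluation.
# It shows one example check an evaluation could include: comparing the
# flagging rates of a risk model across a sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic case data: two features plus a sensitive attribute (e.g. age group).
n = 5000
features = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)                      # 0/1 sensitive attribute
labels = (features[:, 0] + rng.normal(size=n) > 1).astype(int)

# Train a simple risk model (the sensitive attribute is not used as input).
model = LogisticRegression().fit(features, labels)
risk_scores = model.predict_proba(features)[:, 1]

# Risk-driven enforcement: flag only the highest-risk cases for a manual check.
threshold = np.quantile(risk_scores, 0.90)              # flag top 10%
flagged = risk_scores >= threshold

# Bias check: are the two groups flagged at comparable rates?
for g in (0, 1):
    rate = flagged[group == g].mean()
    print(f"group {g}: {rate:.1%} of cases flagged")
```

In such a check, a large gap between the flagging rates of the two groups would be one signal that the algorithm and its use need closer scrutiny.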

Parties involved:

This project is part of TNO’s Appl.AI programme and is partly financed from the kickstart fund that the NL AIC received from the government for research and development of AI applications. The project, which is led by TNO, is a cooperative effort involving several municipalities and knowledge partners such as Statistics Netherlands (CBS). Governmental and public authority bodies that would like to take part in the project are very welcome.



Building blocks

The NL AIC collaborates on the common knowledge and expertise that are needed, organised into five themes, also called building blocks. These are important for achieving a robust impact in economic and social sectors.

Sectors

AI is a generic technology that is ultimately applicable in all sectors. To develop knowledge and experience in the use of AI in the Netherlands, it is essential to focus on specific industries that are relevant to our country. In these industries, excellent results can be achieved, along with knowledge and experience that can be leveraged for application in other sectors.

Become a participant

The Netherlands AI Coalition is convinced that active collaboration with a wide range of stakeholders is essential to stimulate and connect initiatives in Artificial Intelligence, both within fields of expertise and with other stakeholders in the ecosystem, in order to achieve the greatest possible result in the development and application of AI in the Netherlands. Representatives from the business community (large companies, SMEs, start-ups), government, research and educational institutions and civil society organisations can participate.
Interested? For more information, see the page about participation.