ELSA Lab Meaningful Human Control over Public AI Systems

Using AI in policy implementation promises better ways of handling social challenges, but it also carries costs for democratic and public values. At present, these costs are usually described anecdotally or captured only in generic normative frameworks.

That approach fails to address the integrative nature of the issue and disregards both the local complexity of implementation and the effects on the public, on whom AI intervenes sometimes radically and sometimes subtly. This ELSA Lab therefore focuses on AI systems in policy implementation.

What social challenges in AI are being worked on?

Artificial intelligence (AI) is used for various social challenges, such as promoting good health and wellbeing, reducing inequality, and advancing peace, justice and security. At the same time, we are working on developing responsible AI and innovations in public administration, benefiting innovation and infrastructure and building strong institutions.

What types of solutions are offered to the end user?

This ELSA Lab helps identify realistic frameworks for human-centric AI with which applications in a governmental context must comply. It does so from an integrated perspective: it supports both the normative framework (including the safeguards built around it) and the parties involved in implementation. This applies to the parties that have a role in the use of AI in policy implementation as well as to those developing and deploying responsible AI applications. Importantly, this combination may well lead to a decision not to use AI at all for certain domains, issues or contexts.

What AI methods or techniques are used in the research?

The ELSA Lab focuses on algorithmic decision support systems. These include machine learning and natural language processing (ML/NLP) to support operational processes and decision making, decision-making models, computer vision for safety on the streets, and self-learning decision support. The concrete methods and techniques are determined by the action cases; the criterion is that an AI application is in use or under development. These applications are data-intensive, which creates dependencies when it comes to involving the right stakeholders. Synthetic data is also used in some cases. Finally, we are working with one of the partners on AI that learns from the assessments and context of professionals and experts, so that it can offer decision support without needing Big Data; a sketch of this idea follows below.
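To make that last idea concrete, the sketch below shows one possible human-in-the-loop setup: the system proposes a decision, the professional's assessment is authoritative, and the model updates incrementally from each expert judgement rather than from a large dataset. The scikit-learn classifier, the labels and the feature encoding are illustrative assumptions; the Lab's actual implementations are not described here.

```python
# Illustrative sketch only; not the ELSA Lab's actual system.
# Decision support that learns incrementally from expert assessments
# (one labelled case at a time) instead of requiring Big Data.
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])  # hypothetical labels: 0 = no action, 1 = intervene

model = SGDClassifier(loss="log_loss", random_state=0)  # supports partial_fit

def support_decision(features, expert_label):
    """Propose a decision for one case, then learn from the expert's ruling."""
    x = np.asarray(features, dtype=float).reshape(1, -1)
    try:
        suggestion = int(model.predict(x)[0])  # the system's proposal
    except NotFittedError:
        suggestion = None  # no model yet: the professional decides unaided
    # The professional's assessment is authoritative; update on it immediately.
    model.partial_fit(x, [expert_label], classes=CLASSES)
    return suggestion

# Hypothetical usage: cases described by three numeric features.
print(support_decision([0.2, 1.0, 0.5], expert_label=0))  # None (cold start)
print(support_decision([0.9, 0.1, 0.7], expert_label=1))
```

Because each update is driven by a single expert-labelled case, the professional stays in control of every decision while the system gradually adapts to the local context.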

Are we collaborating with other sectors?

This ELSA Lab operates as a consortium of scientists from various disciplines (including public administration, law, IT and complex systems), parties from civil society (including interest groups and parties that carry out activist research), technology parties (including start-ups), governmental organisations from across the justice and security chain, and governmental organisations from other domains and from all layers of government. Each of these parties is in turn embedded in its own network.

All these sectors work closely together; the stakeholders are involved both in the action research and in building the ecosystem. We take a chain perspective, asking which parties play a role in the data, the technology, the policy, the implementation at different levels (management, street level), the public or other clients, the advocates of their interests, and the supervisory organisations. The work is organised around a concrete case every time, as inclusively as possible. The composition of the lab’s partners makes this both realistic and feasible.

What is the ultimate success this ELSA Lab can achieve?

This is a field with so much complexity, so many interdependencies, differing interests and varied perspectives that defining an ultimate goal can itself be a never-ending process. In addition to our substantive goals, we are aiming for a cross-domain collaborative platform in which the quadruple helix evaluates the implications and scalability of AI-based systems in this sector. This will create a broad ecosystem that can support the application of meaningful human control.

Awarded the NL AIC Label

The Netherlands AI Coalition has developed the NL AIC Label to underline its vision for the development and application of AI in the Netherlands. An NL AIC Label formally recognises that an activity is in line with the aims and strategic goals of the NL AIC and/or acknowledges the quality of that activity. The NL AIC congratulates the ELSA Lab for Meaningful Human Control of Public AI Systems.

More information?

If you’re interested in this ELSA Lab, please contact one of the people below. If you would like more information about human-centric AI and the ELSA concept, please go to this page.

 



Building blocks

The NL AIC collaborates on the common knowledge and expertise that is needed, organised into five themes, also called building blocks. These are important for achieving a robust impact in economic and social sectors.

Sectors

AI is a generic technology that is ultimately applicable in all sectors. To develop knowledge and experience in the use of AI in the Netherlands, it is essential to focus on specific industries that are relevant to our country. These industries can achieve excellent results and generate knowledge and experience that can be leveraged in other sectors.

Become a participant

The Netherlands AI Coalition is convinced that active collaboration with a wide range of stakeholders is essential to stimulate and connect initiatives in Artificial Intelligence, both within fields of expertise and with other stakeholders in the ecosystem, in order to achieve the greatest possible result in the development and application of AI in the Netherlands. Representatives from the business community (large companies, SMEs and start-ups), government, research and educational institutions, and civil society organisations can participate.
Interested? For more information, see the page about participation.