What social challenges in AI are being worked on?
When analysing cultural expressions, AI systems are currently not capable of detecting potentially offensive terms. Human culture is diverse and subjective, context can make a big difference, and new value systems emerge over time. For instance, it matters considerably whether heritage institutions speak of the seventeenth century or of the Golden Age. It would therefore be useful if AI technology could recognise cultural contexts and potentially sensitive terms or expressions.
What types of solutions are offered to the end user?
The common goal is to create AI tools that can detect possible sensitivities and that cultural institutions can use to search their collection databases. Given the enormous amount of information, doing all of this manually is not feasible. At the same time, it is important that heritage institutions present information in a way that is appropriate today, paying attention to diversity and inclusiveness and leaving nobody feeling excluded. AI tools could play a role in several ways: for example, by automatically adding a warning or explanation to potentially controversial artwork captions.
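As a minimal illustration of the kind of tool described above, a collection caption could be scanned against a curated lexicon of potentially sensitive terms, with an explanatory note attached to each match. This is a sketch under stated assumptions: the term list, note texts, and function names are invented placeholders, not the lab's actual vocabulary or software.

```python
import re

# Hypothetical lexicon: term -> explanatory note a curator might attach.
# These entries are illustrative placeholders, not the lab's actual vocabulary.
SENSITIVE_TERMS = {
    "golden age": "Term contested for obscuring colonial history; "
                  "consider 'seventeenth century' instead.",
}

def annotate_caption(caption: str) -> tuple[str, list[str]]:
    """Return the caption together with notes for any flagged terms it contains."""
    notes = []
    for term, note in SENSITIVE_TERMS.items():
        # Whole-word, case-insensitive match so 'Golden Age' is also caught.
        if re.search(r"\b" + re.escape(term) + r"\b", caption, re.IGNORECASE):
            notes.append(f"Note on '{term}': {note}")
    return caption, notes

caption, notes = annotate_caption("A still life from the Golden Age.")
```

In practice such a lexicon lookup would only be a first pass; as the next answer explains, a plain term match cannot tell a sensitive use from a neutral one.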
What AI methods or techniques are used in the research?
No new AI systems are being developed for the time being, but the current algorithms will have to be significantly modified. Simply labelling possibly sensitive terms is not enough: that is too black-and-white and leaves unclear to what extent a term is sensitive in a given context. Humans and machines will have to work closely together here, and on the human side that means not just data specialists but also experts with deep heritage knowledge. Cultural AI requires a more advanced form of machine learning, one that also opens up other uses for subjects where it is equally important to place concepts in their correct contexts and to distinguish nuances.
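The two points above, that binary labels are too coarse and that machines should defer to heritage experts, can be sketched as a simple triage: each detection carries a graded, context-dependent score rather than a yes/no flag, and uncertain cases are queued for human review. The class names, scores, and thresholds below are assumptions made for illustration, not part of the lab's actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    term: str
    context: str   # e.g. "historical quotation" vs "museum-authored caption"
    score: float   # graded sensitivity in [0, 1], not a binary label

@dataclass
class Triage:
    """Route detections: auto-annotate, send to a heritage expert, or ignore."""
    annotate_above: float = 0.8   # confident enough to attach a note automatically
    review_above: float = 0.4     # uncertain: a heritage expert decides
    auto: list = field(default_factory=list)
    expert_queue: list = field(default_factory=list)

    def route(self, d: Detection) -> str:
        if d.score >= self.annotate_above:
            self.auto.append(d)
            return "auto-annotate"
        if d.score >= self.review_above:
            self.expert_queue.append(d)
            return "expert review"
        return "no action"

triage = Triage()
# The same term can score differently depending on its context.
triage.route(Detection("golden age", "museum-authored caption", 0.9))
triage.route(Detection("golden age", "historical quotation", 0.5))
```

The design choice here mirrors the text: the machine handles the clear-cut bulk of a large collection, while the ambiguous middle band, where context and nuance matter most, stays with human experts.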
Are we collaborating with other sectors?
This ELSA Lab is very much open to that. Notably, the parties involved already started collaborating on their own initiative in 2021, even before they became an ELSA Lab, which shows that this is a topic that is relevant for society. It is a joint initiative by the Centrum Wiskunde & Informatica, the Humanities Cluster of KNAW (the Royal Netherlands Academy of Arts and Sciences), KB (the national library), the Netherlands Institute for Sound and Vision, the National Museum of World Cultures, the University of Amsterdam, VU University Amsterdam and the Rijksmuseum. Knowledge is also regularly exchanged with other ELSA Labs, in particular with the ELSA Lab for AI, Media & Democracy.
What is the ultimate success this ELSA Lab can achieve?
That there will soon be AI solutions that help heritage institutions share their collections with the public at large, with warnings and explanations that ensure people do not feel unnecessarily offended or get the uneasy feeling that they are being excluded. This is already possible in small-scale projects with the help of human curators, but it is very time-consuming when making large databases accessible. Help from a specially trained AI system would therefore be very welcome.
Awarded the NL AIC Label
The Netherlands AI Coalition has developed the NL AIC Label to underline its vision for the development and application of AI in the Netherlands. An NL AIC Label formally recognises an activity that is in line with the aims and strategic goals of the NL AIC and/or the quality of that activity. The NL AIC would like to congratulate the ELSA Cultural AI Lab.
More information?
If you’re interested in this ELSA lab, take a look here for more information or contact Marieke van Erp of the KNAW. If you would like more information about human centric AI and the ELSA concept, please visit this page.