German start-up Zander Laboratories signed a €30m funding deal with the German government’s cybersecurity innovation agency on Friday (15 December), making it the largest single investment by an EU government in AI research.
Read the original German article here.
Zander Laboratories and the government agency sealed the deal at the Brandenburg University of Technology Cottbus-Senftenberg (BTU), after the start-up beat four other competitors in the race for the government’s “Safe Neural Human-Machine Interaction” tender published in October 2022.
Zander Laboratories came out on top with its “Neuroadaptivity for Autonomous Systems” (NAFAS) proposal, an AI project designed to capture real-time brain data and train on it.
“This approach in the field of brain-computer interfaces (BCI) clearly shows the differences in approach between the USA and Europe,” said Thorsten Zander.
“While the US favours invasive methods and focuses mainly on medical applications, we focus on non-invasive technologies and aim to serve users without limitations. This will revolutionise human-machine interaction,” he added.
For the government’s Cyber Agency, it is in this area of research that Europe’s results speak for themselves, with the EU ranking at the top when compared internationally.
The funding was also welcomed by the city of Cottbus, with Head of Financial Management, Economic Development and Social Affairs Markus Niggemann saying – on behalf of the Mayor of Cottbus – that the startup “will drive forward the transfer of our urban society into a region of the future.”
AI revolution
Under the project, the researchers will categorise human mental reactions based on brain signals so that machines can better interpret human responses in the future.
“The revolution will enable machines to record and interpret brain data in real-time,” said Zander, adding that it will give the machines “insight into the user’s current, individual perception and interpretation.”
“This will allow us to transfer the user’s knowledge, values, and goals to the machine, enabling intuitive interaction,” said Zander.
Most of the government funding has been earmarked for developing neurotechnological prototypes over the next four years, in the hope that these prototypes can read information directly from the brain.
The idea is that a person exchanges information with an external system via their thoughts, with the transmission of thoughts guiding the machine to complete tasks or learn new skills.
International standards
The announcement comes a few days after the EU Council, the European Parliament, and the European Commission reached a political agreement on the AI Act on 8 December, the world’s first comprehensive law on Artificial Intelligence.
Germany reacted to the development, with German Digital Minister Volker Wissing saying it could set a standard for AI regulation worldwide.
“Artificial intelligence is the big game changer,” Wissing told the digital committee in the German parliament. While Wissing agreed that a solution at the EU level was necessary, he also stressed that the approach must deliver a good outcome.
“We also don’t want to become the most strictly regulated market. In this respect, we are looking closely at the written result and will support it constructively, but not uncritically,” he added – echoing somewhat the position of France and Italy on the subject.
In front of parliament’s digital committee, Wissing also praised the G7 countries’ pledge in October to create an AI Code of Conduct built on 11 principles, including risk prevention during AI development, the labelling of AI-generated content, and the protection of data and copyright.
“I believe that this ‘Code of Conduct’ is the right approach to act internationally uniformly. On the one hand, because it creates a standardised ‘level playing field’ and, on the other, because we can only really tackle the security risks associated with AI globally, not only at the European or national level,” Wissing added.
[Edited by Kjeld Neubert, Daniel Eck]