In the EU’s AI regulation debate, model cards – short summaries of a machine learning model – shift risk from developers to users and may prove illusory in achieving their intended impact, writes Cristina Vanberghen.
Cristina Vanberghen is based at the European University Institute in Florence and specialises in European Union governance, the Common Foreign and Security Policy, and cybersecurity at the Université Libre de Bruxelles and the European Institute of Public Administration in Luxembourg. She is also a member of the WICCI India-EU Business Council and serves on the Advisory Council of the Indian Society of Artificial Intelligence and Law.
The EU is currently in the process of regulating AI, yet a dilemma remains over how to regulate general-purpose AI systems. Some propose regulating specific applications rather than foundation models themselves, which aligns more closely with the risk-based approach. Under this proposal, foundation model developers would instead have to define model cards: technical documentation that presents information about trained models in an accessible way, following best practices within the developer community.
While making model cards a mandatory element of self-regulation aligns with the principle of “transparent AI”, it essentially amounts to a formal transparency similar to the labelling on food products. Model cards would have to include information such as intended uses, potential limitations, biases, and security assessments. However, this information primarily enables users to make one decision – “to buy or not to buy”.
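To make the food-label analogy concrete, the sketch below shows what such a card might contain, expressed as a simple Python data structure. It is only an illustration: the field names, values, and format are hypothetical assumptions, not a prescribed or standardised schema.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Illustrative model card covering the fields discussed above.

    All names and values here are hypothetical examples, not a
    regulatory or industry standard.
    """
    model_name: str
    version: str
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    bias_considerations: list[str] = field(default_factory=list)
    security_assessment: str = "not evaluated"


card = ModelCard(
    model_name="example-foundation-model",
    version="1.0",
    intended_uses=["text summarisation", "drafting assistance"],
    known_limitations=["may produce factually incorrect output"],
    bias_considerations=["training data skews towards English-language sources"],
    security_assessment="red-teamed for prompt-injection attacks",
)

# Publish the card alongside the model, much like a label on a food product.
print(json.dumps(asdict(card), indent=2))
```

Even in this toy form, the sketch makes the underlying problem visible: every field is written by the developer, and it is left entirely to the user to read, interpret, and act on it.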
While the detailed information in model cards empowers users to make more informed decisions, it is crucial to recognise that transparency, accountability, and responsible AI criteria are difficult to implement given the highly complex nature of AI systems. Users with varying levels of technical expertise may find it challenging to interpret the information accurately; model cards are most accessible to developers and researchers with advanced AI training. This creates an imbalance of power between developers and users over the purpose and limitations of models, which is why the standardisation of information remains a necessary part of AI governance.
Providing too much information in model cards may also overwhelm users, so striking the right balance between transparency and simplicity is crucial. Users may not be aware that model cards exist, or may not take the time to review them, especially where AI systems are deeply integrated into critical processes.
Keeping pace with innovation is also an issue for model cards: models evolve over time, and the information in their cards may become outdated. How are model cards to be kept accurate and up to date? There is also the question of basing model cards on best practices within the AI developer community, so that they contribute to a more coherent understanding of AI models across different domains and applications.
Addressing biases in model cards is essential for responsible AI development, as it promotes awareness and encourages efforts to mitigate those biases. Including model cards in AI governance adds a further layer of accountability and oversight over how developers implement self-regulatory standards. This model demands greater attention to safety and ethical considerations while allowing developers more flexibility.
Automatically embracing self-regulatory model cards as the default choice, solely because regulatory frameworks struggle to keep up with rapid change, may not be the optimal solution. Foundation models span a broad spectrum of AI applications across industries, making it difficult to design regulations that are both applicable and effective in such diverse contexts. The question arises: how can we establish a single regulatory approach that fits them all?
The primary challenge lies in our inability to manage the interdisciplinarity and coordination required for the responsible regulation of AI. Bringing together experts in computer science, ethics, law, and social sciences at the same table becomes a daunting task. How can we address the issue of fair AI practices when the concept of fairness varies depending on the perspective of developers or users?
Effectively managing these risks requires continuous monitoring, assessment, and updates – a resource-intensive and time-consuming process, but one that aims to safeguard users and maintain a human-centric perspective. Ethical perspectives themselves evolve over time, encompassing concerns about privacy, autonomy, accountability, and the societal impact of AI applications.
A potential conflict between commercial and ethical considerations may lead to unfair AI practices. The challenge lies in crafting regulations that address ethical concerns without hindering innovation. We find ourselves at a pivotal juncture: AI has become a geopolitical issue, and different countries may adopt diverse regulatory approaches based on their cultural, legal, and ethical perspectives. The crux is aligning regulation with societal values while grappling with questions of legitimacy and acceptance.
Consumers expect a robust legal framework to shield them from AI-related harms. However, seeking legal remedies and holding developers accountable for issues such as privacy violations or biased decision-making is often impractical for consumers who lack the time to pursue them. Effective regulation is crucial for instilling confidence that model cards remain attuned to broader societal evolution, uphold principles of fairness and equity, and deter developers from unfair AI practices that cause reputational damage.
Moreover, the efficacy of self-regulation hinges on an organisation’s commitment to ethical practices, transparency, and responsible AI development. So, are model cards a realistic way to “regulate” AI? The trustworthiness of self-regulatory models in AI depends not only on an organisation’s track record but also on its internal governance mechanisms. Self-regulation relies on trusting business leadership to implement ethical principles, whereas prescriptive regulation demands far broader involvement of the public, policymakers, and the AI community.
In conclusion, self-regulatory model cards entail a potential absence of external monitoring, thereby shifting the balance of risk directly from developers to users. How can we enhance oversight of developers to curb unfair practices, when this challenge persists even in the offline world and AI development may veer in unintended directions?
The adoption of self-regulatory model cards may foster a wide range of practices, leading to disparities across AI systems. It is essential to recognise that most model cards focus on training data derived from open-source, publicly available information. Relying on self-regulatory model cards does not guarantee sufficient input from the broader public and may inadvertently exclude diverse perspectives. Developers, in their efforts to adapt swiftly to evolving risks, might inadvertently leave gaps in addressing emerging ethical concerns.
Remember the fairy tale ‘The Emperor’s New Clothes’? Model cards can be likened to a magical suit of clothes that remains invisible to users. While they appear to offer protection, the reality is that companies engaged in lobbying create an illusion, parading the notion of safeguarding against unfair AI in these imaginary clothes. Yet, much like the child in the story, one might point out that model cards lack true substance or effectiveness in their purported role.