Jan & Feb 2026

BRIDGING THE ACCOUNTABILITY GAP


The Democracy Forum brought together leading voices to examine how diverging regulatory pathways are influencing innovation, accountability and the future alignment of AI with human and democratic values.

As artificial intelligence progresses apace, The Democracy Forum hosted a January 28th webinar titled ‘Global implications of AI deregulation’, at which a panel of experts explored the economic, ethical, social and security dimensions of AI deregulation, and the consequent response of governments and regulators.

In referencing UNESCO’s 2024 guidance to regulators, TDF president Lord Bruce highlighted that while more than 30 governments worldwide have enacted AI regulations since 2016, many of these initiatives fail to address the evolving complexity of the issue. According to an FT article, AI has boosted productivity in sectors including software and administration, but its widespread use raises significant concerns. Lord Bruce mentioned US Vice President JD Vance’s speech at the AI Action Summit in Paris, where Vance argued that over-regulation could stifle innovation and hinder a new industrial revolution. Vance was critical of the 2024 EU AI Act, and Lord Bruce said it may have over-reached itself, potentially jeopardising the competitiveness of EU companies and start-ups due to its stringent rules.

AI DEREGULATION: Panellists at TDF’s January 28th virtual seminar

Indeed, there is real divergence in regulatory approaches, added Lord Bruce, with Washington at risk of under-regulating AI while the EU might over-regulate. Recently, President Trump issued an executive order to establish a federal framework for AI regulation, aiming to replace the inconsistent state-by-state initiatives. This order emphasises the need for US AI companies to innovate without cumbersome regulations, arguing that the country is in a race for technological supremacy. However, some commentators noted that the administration’s approach is not entirely deregulatory but rather concentrates regulatory power at the federal level, making it less transparent and democratically accountable.

We are at a pivotal moment in the evolution of AI, said J.B. Branch, Big Tech Accountability Advocate at Congress Watch, Public Citizen. For much of the past decade, discussions on AI have centred around innovation, efficiency, and economic growth. However, the conversation has now shifted towards ensuring AI is safe, accountable, and aligned with democratic and human values globally. Branch highlighted the dual challenge in AI governance: the technical aspect, involving more autonomous and opaque systems, and the governance aspect, where regulatory frameworks are fragmented and reactive, struggling to keep pace with AI advancements. Traditional regulatory models, designed for static technologies with clear responsibilities, are ill-equipped for the dynamic nature of AI.

This gap between technological capability and governance is not just a technical issue but a democratic one too

Branch argued that this gap between technological capability and governance is not just a technical issue but a democratic one. AI reshapes how information is produced and assessed, transitioning from an internet-based distributed ecosystem of knowledge to one dominated by large language models. These models are becoming the primary source and endpoint of research, raising concerns about manipulation, distortion, and the concentration of information power. Thus, the critical question now is not just who controls the data, but who controls meaning and information. Nowhere is this more visible, Branch added, than at the intersection of AI and civil rights, as AI systems are increasingly being deployed in immigration enforcement, welfare eligibility, and screening for employment and housing, and these are not neutral spaces. Nor are these abstract concerns. So what we need, he concluded, is a rights-centred approach to AI that treats safety, transparency and civil liberty not as competing interests, but as mutually reinforcing obligations to uphold democratic principles.

The focus should be on how we regulate to ensure that it aligns with our values and priorities

Dr Giulia Gentile, a Lecturer in Politics and Law at Essex Law School, University of Essex, emphasised that the debate around AI regulation strikes at the core of democracy. She argued that society is somewhat confused and misled in its perception of progress and technology. While progress is often defined as technology improving society on a large scale, such as in medicine and science, AI is currently controlled by a few powerful companies and state departments. This concentration of power allows these entities to exercise their advantage over those without access to technology, or those impacted by it.

Gentile highlighted the need to distinguish between the technology itself and the actors driving it when discussing AI regulation. Regulation should aim to foster technology while ensuring equal access and sufficient safeguards to prevent harm. She stated that regulation is a tool for establishing a hierarchy of values and legal mechanisms to control and shape behaviours towards specific goals. On the other hand, deregulation creates uncertainty about the rules, leading to a future governed more by technology and its controllers, rather than by principles, values and the rule of law.

Although progress is often defined as technology benefitting society on a large scale, such as in medicine and science, AI is currently controlled by a few powerful companies and state departments.

At the start of 2026, AI regulation appears to be divided into two distinct approaches, noted Colin Gavaghan, Professor of Digital Futures at the Bristol Digital Futures Institute/University of Bristol Law School. On one side, the EU is implementing its AI Act, prioritising safety and human rights. Conversely, the Trump administration in the US is following an aggressive deregulatory policy towards AI and Big Tech.

However, Gavaghan argued that this binary view is oversimplified. Even in the US, various states like California and New York are leading efforts to regulate AI, particularly chatbots and AI companions. The debate isn’t about whether we should regulate AI – we must and we will. Instead, the focus should be on how we regulate to ensure that it aligns with our values and priorities.

Rounding up the discussion was TDF chair Barry Gardiner MP, who noted that perception and knowledge are always bound to the interpretive perspective of the person who is regarding whatever the situation is. This, he added, is absolutely fundamental to what we have been talking about today vis-à-vis AI and deregulation.

Key takeaways from the panel discussion were that AI regulation is not about slowing innovation but about aligning the technology with values, and that history shows global cooperation is possible; and that regulation is about people, not things: it is not AI we must target but the people behind it.


MJ Akbar is the author of several books, including Doolally Sahib and the Black Zamindar: Racism and Revenge in the British Raj, and Gandhi: A Life in Three Campaigns

To watch the full discussion, tune in to