Categories
Blog

AI integration in lawmaking requires a parliamentary change process, not just a tech project

By Franklin De Vrieze.

Across democratic governance systems, parliaments are facing a paradox. Never before have there been so many technological tools promising to improve legislative scrutiny, evidence use, and public engagement. Yet never has the gap between the pace of technological change and the capacity of democratic institutions to respond felt so wide. This is not simply a story about adopting new software or experimenting with artificial intelligence (AI); it is about institutional transformation. The application of technology and AI in legislative scrutiny requires a fundamental change process in parliaments: one that reshapes data practices, organisational culture, and even how we conceptualise lawmaking itself.

The pacing problem: when governance moves slower than technology

The “pacing problem” captures the growing mismatch between rapid technological innovation and slower institutional adaptation and governance.

Westminster Foundation for Democracy (WFD) and POPVOX Foundation distinguish three interlocking layers of this challenge with regard to parliaments. First, the external pacing problem: parliaments struggle to keep up with technologies already embedded in society. Second, the intra-government pacing problem: executives are often early adopters of AI for service delivery, policy design, or enforcement, while legislatures lag behind in their ability to scrutinise these tools effectively, reinforcing existing patterns of executive dominance. Third, the internal pacing problem: parliamentary ICT systems, data structures, and workflows frequently remain fragmented, paper-based, or incompatible with modern analytical tools.

The risks of inaction are not abstract. According to the IPU World e-Parliament Report 2024, over 70% of parliaments globally now publish legislative information online, but fewer than half report having interoperable legislative data systems capable of supporting advanced analysis. As AI adoption accelerates in the executive and private sectors, these internal weaknesses can translate into real power asymmetries, further weakening legislative oversight and checks and balances.

How parliaments are already using AI

Despite these challenges, parliaments are not starting from zero. A growing number are experimenting with AI across administrative, legislative, and participatory functions.

On the administrative side, AI is increasingly used to automate transcription, translation, document classification, and summarisation. The European Parliament, for example, deploys eTranslation, speech-to-text systems, and automated indexing to manage multilingual debates and documents at scale. Similar AI functions have been reported in the Canadian House of Commons.

Legislatively, AI tools are being applied to amendment analysis, legal consistency checks, and information retrieval. Italy’s Parliament has piloted AI-supported compliance checking of amendments against constitutional and legal constraints. The National Assembly of France has created tools using open data to model fiscal and social impacts and to compare draft bills with existing legislation.

Perhaps the most comprehensive example is Brazil’s Ulysses Suite, an integrated parliamentary AI ecosystem supporting everything from bill analysis to citizen interaction. These cases reflect a broader trend: many parliaments are piloting AI for legislative research, amendment tracking, and committee support.

Yet experimentation alone does not equal transformation. Many pilots remain isolated, dependent on individual champions, or constrained by legacy systems and procurement rules.

Technology across the legislative cycle

The potential of AI becomes clearer when viewed across the full legislative cycle. As highlighted in the Course Manual for the Certified Course on Legislative Scrutiny and Technology, technology is already reshaping each phase of lawmaking.

  • During legislative drafting, AI-assisted tools can analyse vast corpora of statutes and case law, suggesting language aligned with existing norms and reducing ambiguity. This supports consistency without replacing human judgment.
  • In ex-ante impact assessment, predictive modelling and data analytics allow parliaments to interrogate the likely economic, social, or environmental consequences of proposed legislation. Such tools can surface unintended effects earlier in the process, strengthening evidence-based scrutiny.
  • Citizen engagement is also being transformed. Digital platforms enable large-scale consultations, while AI can help analyse submissions, identify patterns, and surface underrepresented perspectives. This is particularly valuable as consultation volumes grow beyond what manual analysis can reasonably handle. Some argue for a path of “augmented deliberation” for citizen-initiated mechanisms of democratic participation, supported by a governance roadmap of digital guardrails such as AI watermarking, public-interest AI platforms, and independent algorithmic audits.
  • Finally, in post-enactment analysis, data analytics can track implementation and outcomes, support post-legislative scrutiny, and close feedback loops between lawmaking and lived experience.
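The consultation-analysis point above can be made concrete. Below is a minimal, purely illustrative sketch (the submissions, stopword list, and `theme_counts` function are hypothetical, not any parliament's actual tooling) of surfacing recurring themes across citizen submissions by counting terms; real systems would use NLP models rather than raw word counts, but the principle of aggregating many voices into visible patterns is the same.

```python
from collections import Counter
import re

# Hypothetical consultation submissions (illustrative only)
submissions = [
    "The bill should strengthen data protection for citizens.",
    "Data protection rules must cover small businesses too.",
    "Please consider the environmental impact of the proposal.",
    "Environmental impact assessments should be mandatory.",
]

# Common words to ignore when looking for themes
STOPWORDS = {"the", "should", "for", "must", "too", "of", "be",
             "please", "consider", "and"}

def theme_counts(texts):
    """Count recurring non-stopword terms across all submissions."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words)

counts = theme_counts(submissions)
# Terms mentioned by several submitters rise to the top,
# e.g. "data", "protection", "environmental", "impact"
print(counts.most_common(4))
```

Even this toy version shows why scale matters: the same loop runs unchanged over four submissions or forty thousand, which is precisely the point at which manual reading breaks down.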

These applications demonstrate that AI does not sit at the margins of parliamentary work. It intersects with core legislative functions.

The centrality of explainability

The opportunities are significant, but so are the risks. AI evolves faster than parliamentary cycles, creating persistent regulatory lag. Existing standing orders and parliamentary procedures rarely anticipate automated analysis or algorithmic support. Capacity gaps mean expertise is often concentrated in a handful of staff or external vendors, raising dependency risks.

There are also well-documented technical and ethical challenges: bias in training data, errors and “hallucinations” in generative AI, data protection concerns when handling sensitive parliamentary or constituency information, and the growing threat of AI-enabled disinformation through synthetic submissions, as analysed by OECD, among others.

Across all these risks, one principle stands out: explainability. As Bruce Schneier and Nathan Sanders have argued in their book “Rewiring Democracy”, democratic institutions cannot rely on systems they do not understand. If MPs and staff cannot interrogate how an AI tool reached a conclusion, its outputs cannot legitimately inform legislative scrutiny. Explainability is therefore not a technical luxury but a democratic requirement.
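The explainability requirement can also be sketched in code. The example below is entirely hypothetical (the rules and amendment fields are invented for illustration, not drawn from any real parliamentary system): a checker that returns the rules that fired alongside its verdict, so staff can interrogate how a conclusion was reached instead of receiving an opaque flag.

```python
# Hypothetical rules an amendment checker might apply (illustrative only)
RULES = [
    ("exceeds_scope", lambda a: a["topic"] not in a["bill_topics"]),
    ("missing_cost_note", lambda a: a["raises_spending"] and not a["cost_note"]),
]

def check_amendment(amendment):
    """Return (verdict, list of rule names that fired).

    Returning the fired rules, not just the verdict, is what makes
    the outcome explainable and contestable.
    """
    fired = [name for name, rule in RULES if rule(amendment)]
    return ("flagged" if fired else "clear"), fired

verdict, reasons = check_amendment({
    "topic": "health",
    "bill_topics": ["transport"],
    "raises_spending": True,
    "cost_note": "",
})
print(verdict, reasons)  # flagged ['exceeds_scope', 'missing_cost_note']
```

A black-box model that emitted only "flagged" would give MPs and staff nothing to challenge; exposing the reasoning is what allows the output to legitimately inform scrutiny.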

AI adoption is a change process, not a tech project

Too often, AI adoption is framed as a procurement or IT challenge. In reality, it is an institutional change process. Deploying tools without addressing underlying data quality, governance, skills, and culture will at best produce marginal gains, and at worst undermine trust.

Effective change requires attention to at least six interlinked elements: strategy, prioritisation, implementation, governance, training, and coordination. It involves iterative piloting rather than “big bang” rollouts; cross-parliamentary governance bodies rather than siloed initiatives; and continuous learning rather than one-off training.

Data governance is foundational. Without legislature-wide data maps, data management plans, and interoperable systems, AI outputs will be unreliable or biased. Treating data as a strategic asset is a precondition for any meaningful AI readiness.

Crucially, this change process also invites deeper reflection. Rethinking lawmaking in terms of “law as code”, exploring how digital tools reshape legislative design, and reimagining human oversight so that humans remain firmly “in the loop” are all parts of the transformation.

Sociologist Everett Rogers’s theory of organisational change, the Diffusion of Innovations, identifies categories of how people respond to proposed change. In any organisation, including parliaments, some are innovators, others early adopters, early majority, late majority, or laggards. The innovators are usually a minority, but their approach often determines whether the majority accepts the proposed changes. The same categories apply in parliaments when AI is introduced into the parliamentary workplace.

Frameworks to guide the journey

Parliaments do not have to navigate this alone. The Guidelines for AI in Parliament, published by the WFD, provide a practical framework covering ethics, governance, capacity, and implementation. Complementing this, the IPU’s Maturity Framework for AI in Parliaments offers a self-assessment tool across six levels, from “initial” awareness to “leadership”, where parliaments act as global benchmarks. The UK Parliament has issued guidance to its Members on the use of generative AI tools.

Together, these frameworks suggest a pragmatic path forward: start with pilots, invest in data foundations, prioritise explainable systems, learn from peers, and embed AI within transparent, ethics-driven governance structures.

Bridging expertise: lawmaking, technology, and parliamentary strengthening

Finally, successful transformation depends on people as much as systems. One of the clearest lessons from comparative practice is the need to connect three communities that too often operate separately: lawmaking experts, technology specialists, and parliamentary strengthening practitioners.

This was the guiding logic behind the January 2026 Certified Course on Legislative Scrutiny and Technology, which deliberately brought these three perspectives together. It was accordingly co-organised by leaders in each of these three fields: the Institute of Advanced Legal Studies (IALS) of the University of London, the POPVOX Foundation, and WFD. Legislative quality scholars, parliamentary officials, technology specialists, and democracy practitioners all contributed to a shared understanding: AI in parliament is not just about efficiency, but about safeguarding democratic legitimacy in an age of acceleration.

The rise of AI and legislative technology is not merely an administrative upgrade. It is a fundamental institutional challenge. If parliaments fail to adapt, the pacing problem will deepen, oversight will weaken, and democratic accountability will erode. If they succeed, AI can become a powerful ally in strengthening scrutiny, transparency, and public trust.

About the author

Franklin De Vrieze is the Head of Practice Accountability at the Westminster Foundation for Democracy.



Democratising Hansard: continuing to improve the accessibility of parliamentary records

The official, substantially verbatim report of what is said in both houses of Parliament is an essential tool for ensuring democratic accountability. This record, Hansard, contains a wealth of data, but it is not always fully accessible and easy to search. Lesley Jeffries and Fransina de Jager explain how a new project, Hansard at Huddersfield, aims to improve access to the Hansard records and contribute new ways of searching the data.


One Small Step for Technology, One Giant Leap for the Commons

By Louise Thompson

House of Commons Speaker John Bercow suggested in a speech last week that it “wouldn’t be so heretical” to consider whether Commons votes might in the future be taken with the help of modern technology. Housed in the nineteenth century building is an increasingly techy Parliament and a digitally aware cohort of MPs. In the last few years alone we have seen MPs tweeting directly from the chamber, parliamentary papers delivered to Members’ iPads and speeches given from tablets rather than handwritten notes.  Electronic voting then seems quite a natural progression.