The case for a human rights based approach to AI governance
Guest blog from Jen Ang, Director of JRS KnowHow and Director of Development and Policy of JustRight Scotland
Why should we consider a human rights based approach to regulating AI?
AI governance frameworks globally are emerging and untested. The focus of AI governance in the UK and Scotland is on supporting innovation and encouraging development that is trustworthy, ethical and inclusive. AI is already being used in many different contexts in Scotland – the question is: are current frameworks sufficient to meet new technologies and new applications of existing technologies? Are they robust and forward-looking enough to also uphold new data rights and protect us against new types of harm? Can a human rights based approach to AI governance make a difference – and if so, how?
Global Regulation of AI: Emerging Models
National, regional and global frameworks for the regulation of AI are emerging and, as yet, untested. None are in operation yet, although the Global North is leading the charge. The European Union’s Artificial Intelligence Act (EUAIA) was passed by an overwhelming majority of the European Parliament in June 2023, with provisions expected to come into effect later in 2024. The Council of Europe’s Committee on Artificial Intelligence is working on the first global treaty on the development, design and application of artificial intelligence.
The United Nations has appointed a multistakeholder advisory body on AI, while the US has developed a Blueprint for an AI Bill of Rights (which is non-binding) and it looks likely that individual states will develop a state-by-state approach to AI regulation over the coming years.
Regulation of AI in the EU, the UK and Scotland: Rules Based vs Principles Based
The EUAIA takes a rules-based approach to AI – dividing new technologies into different categories according to the risk they pose, and imposing different rules depending on the category a technology and its application fall into. So, for example, the EUAIA has classified the use of facial recognition technology for mass surveillance in public places as high-risk, and banned the use of this technology altogether.
In contrast, the UK Government has recently published a white paper: “A pro-innovation approach to AI regulation” which sets out the following core principles:
Safety, security and robustness
Transparency and explainability
Fairness
Accountability and governance
Contestability and redress
The UK Government proposes that these principles are to be acted on by existing regulators to “guide and inform” development and use of AI – and has set aside funding to create a central support function to help existing regulators tackle this new challenge, and to foster cross-sector coordination of these efforts. They have also put forward funding for a “Frontier AI Taskforce” to lead research to evaluate risks at the frontiers of AI, and committed to hosting an AI Safety Summit, which has just concluded on 1-2 November 2023.
Scotland’s AI Strategy, in similar terms, takes a principles-based approach to AI regulation, focusing on a commitment to being: trustworthy, ethical and inclusive.
Why take a human rights based approach?
As human rights lawyers, at JustRight Scotland, we work with people, every day, who suffer breaches of their human rights. We might be helping a disabled person who is also racialised, for example, challenge discrimination in accessing education or work. Or we could be working with a migrant woman who is also a survivor of domestic abuse, to find safe housing for her and her children, to report their abuse to the police and to challenge the hostile environment policies which would otherwise keep the family destitute.
In each of these cases, we rely on human rights and equality frameworks – in the UK, the Human Rights Act 1998 and the Equality Act 2010 – in order to assert people’s rights to fair, equal and dignified treatment. The core principle of a human rights based approach is that all people have rights, simply because they are human – and that no one’s individual enjoyment of rights – or access to justice to protect those rights – should be reduced because of who they are, or where they come from.
As human rights lawyers, we are also acutely aware that our existing human rights legal frameworks fall short of what is necessary, in many places – and further, that there are real and concerning gaps between our laws and the experience of people in Scotland when they try to exercise those rights.
For example, it is much more difficult to protect people against breaches of their rights arising from the activities of private companies, as compared to rights breaches committed by public authorities (like the NHS, Police Scotland or local authorities). This has become more problematic as public authorities increasingly outsource public functions (such as enforcement services, security at prisons, or the delivery of care or health services) to private companies. When those private companies breach people’s human rights, our existing human rights legal framework creates considerable barriers for people to seek justice and remedy.
This is an example of how, when the environment around us changes – in this case, as states increasingly move to privatise the delivery of public functions – a legal framework for the protection of our rights can increasingly fail to do so effectively. More and more activities (and rights breaches) move out of scope, or at least out of easy reach, of the laws as they were intended to operate – and a right without an accessible remedy is no right at all.
Rights breaches – like people being shut out of health care and social services – or people suffering abuse and assault by poorly regulated enforcement staff in prisons and detention centres – lead to increased injury, abuse and exploitation.
The consequences of failing to maintain and update a legal framework for the protection of human rights so that it is fit-for-purpose for modern times are felt gradually, but the shift and the reduction in protection is continuous and only ever tends in one direction: towards fewer rights for us all.
What difference would a human rights based approach to AI make?
Legal frameworks face a similar challenge when confronted with new technologies and new applications of existing technologies – they were inevitably designed for a different time, and a different legal context – and can be expected to produce gaps when the limits are tested by technological advances.
At present, there are a number of concerns about how effective the UK and Scottish principles-based approach to AI will be in adequately protecting people’s rights in relation to emerging AI technologies:
The UK White Paper offers no prospect of new legislation and no additional powers for existing regulators or new regulatory bodies
Crucially, existing regulators have no legal obligations to take these principles into account in carrying out their duties
There is no real explanation for how we would regulate the development and application of AI in areas that are not currently covered by existing regulators
It is unclear how we would agree (define) the meaning of these principles and what would happen if regulators or others failed to uphold them.
In these gaps, the final safety net for individuals against rights breaches is human rights law, or tort (delict) law (an area of private law for seeking compensation for injury) – and, as explained above, it is already difficult to seek justice against private companies using existing legal frameworks.
Relying on individuals to raise legal claims is a reactive approach to regulation, and in our current justice system, adjudication tends to favour the better funded litigant.
In summary, the concern is that the current proposals to proceed with a principles-based approach to AI governance – particularly following the UK’s exit from the EU – will create regulatory gaps, leaving people in Scotland more vulnerable to an ever-wider range of breaches of their rights – with no legal recourse to seek justice or remedy.
The benefits of a human rights based approach include:
Use and strengthening of an existing legal framework, with established definitions and processes, and crucially, established mechanisms of accountability.
If embedded in a single piece of legislation or our existing legal frameworks – a unified and consistent approach to upholding people’s rights, across sectors and activities – rather than a variable approach enforced by more than 90 individual regulatory bodies.
A person-centred approach to regulating the impact of AI technology on people, insofar as a human rights-based approach requires inquiry into the individual needs, experience and identity of a person engaged by the AI technology rather than a one-size-fits-all set of rules.
What could a human rights based approach look like?
A non-discrimination clause at the outset – making it clear that everyone should have equal enjoyment of rights and protections
A risk-based approach, including bans on the use of AI considered to be a threat to people and regulation of AI seen to be a threat to rights
A right to know if AI was used to generate content (like ChatGPT)
A right to understand how AI works and to give informed consent to the use or sharing of personal data
A right to know who is responsible for the design and application of AI, and how to hold organisations accountable for breaches of rights
A clear remedy for breach of rights
Is a human rights based approach enough to meet the challenge of regulating AI?
A human rights based approach may be an improvement on current proposals, but it may also not be enough to meet the challenges of emerging technologies which are outstripping our ability to foresee or manage their consequences.
Returning to the example of the use of facial recognition technology for purposes of surveillance – the EU has proposed a complete ban on the application of this technology, until such time as it can be determined to be less high risk to people. In the UK, in contrast, this technology is widely used today by our police forces – and the outcomes of its use, as expected, are concerning. According to Big Brother Watch, use of facial recognition technology by the Met and South Wales Police resulted in over 89% inaccurate matches between 2016 and 2023 – this equates to over 3,000 people wrongly identified by facial recognition technology used by these forces alone.
In Scotland, the Ferret has reported that Police Scotland’s use of retrospective facial recognition has tripled over the last five years and is on course to rise even further in 2023, making it the fourth most prolific force in the UK for using this form of facial recognition last year.
When the EUAIA’s provisions come into effect, we will see a divergence in our data and privacy rights with respect to mass surveillance by the state. In short, people in the European Union will enjoy protection of those rights – and we in the UK and Scotland will not.
Our human rights legal frameworks in the UK have so far proven not up to the task of protecting us from these potentially harmful applications of AI technology. And our plans for future regulation are likely to lead to a reduction, rather than an increase, in rights and protections for all of us.
The Council of Europe’s decision to draft a new treaty for AI regulation is a strong steer towards recognising the need for new legislation to ensure all our rights are maintained and not diminished, as AI technology plays an ever-increasing role in our lives.
That’s a steer I hope we heed in the UK and Scotland, and a conversation that I hope we can start to have together.