Demystifying Artificial Intelligence: Separating Fact from Fiction 

 
 

In recent months, few topics have captured public attention and imagination quite like artificial intelligence (AI). AI seems to be permeating every corner of society, generating a steady stream of news stories, debates and government discussions. However, as with any transformative technology, it is important that these conversations remain accurate and informed.

Unfortunately, the media's tendency to misrepresent AI often leads to misconceptions and fear. In this blog post, the Scottish AI Alliance team aims to shed light on some of the prevalent myths and misconceptions about AI, while acknowledging the genuine risks and concerns it poses and emphasising the importance of trustworthy, ethical, and inclusive AI in Scotland and beyond. 

AGI 

One common misconception concerns artificial general intelligence (AGI), and with it a fear of super-intelligence: an AI that is more intelligent than humans. It is important to recognise that AGI does not currently exist, and it is debated whether it ever will. As Bill Gates put it in his recent newsletter: “AGI doesn’t exist yet—there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all.”

All current AI can be described as narrow AI, defined by DeepMind as “artificial intelligence systems that are specified to handle a singular or limited task,” as opposed to general AI, which would be able to handle more generic tasks. Despite remarkable advances in narrow AI, ChatGPT being one of the most prominent, we are still a long way from creating AGI, and, without significant and currently unforeseeable advances in research and development across a variety of areas, it remains the realm of science fiction. Generative AI systems like ChatGPT have become more general; however, they make frequent mistakes while appearing confident, and (misleadingly) appear to have agency and motivations, which can confuse people.

It is important to point out that generative AI systems are not connected to the real world in terms of sensing or acting, and they have no notion of right or wrong. They could in principle be connected to devices and cause harm, but that’s not because they’re “intelligent” – you can also hook up a thermostat to a nuclear weapons silo and annihilate humanity. 

Being able to separate fact from fiction allows us to have more productive conversations about the current state of AI, giving our full attention to real and present risks.

Risk of extinction?  

With recent advancements in AI come many legitimate risks worth exploring. However, recent examples of sensationalist media coverage risk creating misplaced fears of AI. One such example is Matt Clifford, chair of the Advanced Research and Invention Agency, being quoted on the front cover of The Times newspaper as saying, “Two years to save the world, says AI adviser.” In a series of tweets, Clifford showed how this headline did not reflect what he said at all, concluding: “short and long-term risks of AI are real and it’s right to think hard an[d] urgently about mitigating them, but there’s a wide range of views and a lot of nuance here, which it’s important to be able to communicate.”

Much of the current hyperbole around AI stems from Hollywood-style scenarios in which AI spontaneously gains consciousness or begins to act independently and wreak havoc.

Accountability  

Many other sectors, including some that deal with high-risk technology (aviation, biotech, nuclear), have well-established safety and ethics regimes in place, together with the regulatory and legal scaffolding that makes people “trust” them. Traditionally, even before AI, the tech industry has resisted calls to develop these cultures, processes and rules, and this is something that needs to change. For example, it is irresponsible to simply throw new systems like ChatGPT “over the fence” without any mechanism to avert damage or manage risks. It is a bit like creating a machine that can construct any chemical compound and putting it on supermarket shelves the next day, where anyone could use it either to cure cancer or to deploy new lethal chemical weapons.

It is important to remember that behind every AI system there are individuals and organisations responsible for its development and use. Like many tools, AI can be used for good or for nefarious purposes, and undesired outcomes and unintended consequences can occur. When they do, it is the people responsible for the design, development and deployment of the AI system who should be held accountable, rather than the technology itself. When we blame AI, we are in danger of anthropomorphising the technology and misrepresenting its capabilities.

Job loss 

Another fear is that the ongoing implementation of AI will lead to massive job losses across the economy. Goldman Sachs has predicted that 300 million jobs will be lost or degraded by AI, and the World Economic Forum has suggested that “AI will drive 83 million ‘structural’ job cuts in 5 years.” Headlines like these understandably make many of us anxious, driving fear in the workforce. However, the World Economic Forum has also predicted that AI will cause long-term job growth, creating more jobs than it replaces, and the American Economic Association has likewise predicted growth in new jobs due to AI. In short, it is hard to predict exactly what will happen. There are bound to be job losses as AI becomes very efficient at automating certain tasks, but it is unlikely ever to completely replace humans.

The future of work is likely to involve human-AI teams. AI works best as a tool that humans use to assist and augment their own abilities, freeing up time for more complex or creative tasks that AI cannot handle, or taking on dangerous tasks.

AI can create a plethora of new jobs and opportunities across a variety of industries. We are likely to see a shift in the job market rather than the eradication of work. These shifts will have to be navigated carefully, and politicians will have to make important decisions that may affect people's livelihoods. For example, if more and more people are needed to provide oversight of AI systems, how do we give them the skills to do that, and how do we make sure these jobs are interesting and allow people to flourish? And how do people develop more advanced skills if they never first practise the simpler skills now performed by AI? It is key to educate, retrain, upskill and support people, to make sure that everyone can thrive in a new AI economy.

However, the media tends to focus on the negative side of job loss rather than on AI's potential to create new opportunities and a more skilled workforce. With these shifts in mind, we need to start thinking about how to ensure that the benefits of AI are shared ethically among workers, that there is a just transition to AI, and that workers' rights are upheld throughout.

Trustworthy, Ethical, Inclusive  

While it is important to tackle misconceptions around AI, it is equally important to acknowledge that the technology poses real risks today, and that regulations and standards need to be in place, and enforced, to mitigate them.

Some of the key risks include:  

  • Perpetuating bias and discrimination 

  • Harming individuals through incorrect decisions  

  • Privacy and security concerns  

  • Spread of misinformation 

  • Malicious use and autonomous weapons  

  • Increasing inequality 

Recognising the potential impact of AI on society, Scotland’s AI Strategy calls for the development and use of trustworthy, ethical and inclusive AI. We could write an entire paper on each of these topics, but below we briefly touch on some of the most important aspects of each.

Trustworthy: When we say Trustworthy AI, we mean that we need to be able to trust how, when and why AI is used. Trustworthy AI must be transparent, so that we can observe how these systems help organisations make decisions, and its use must be disclosed, so that people are aware of the role AI plays in their lives.

As AI tools are increasingly introduced into our everyday lives, in banking, healthcare, transport and many other areas, it is vital that we can trust these systems not to jeopardise our safety and well-being. If AI developers focus on creating trustworthy systems, they can increase user confidence and foster wider adoption. Untrustworthy AI systems can make errors, provide inaccurate information, or deliver biased outcomes, eroding confidence in AI and posing risks to safety, privacy and security. When AI is Trustworthy, it provides a robust foundation from which to deliver the benefits of AI as widely as possible.

Ethical: When we say Ethical AI, we mean that we want AI that respects and supports the values of a progressive, fair and equal society.

As AI becomes more widespread, it raises a huge number of ethical questions. For example, a TIME investigation found that OpenAI “used Kenyan workers on less than $2 per hour to make ChatGPT less toxic,” exposing them to “harmful text and images” and raising human rights concerns. Furthermore, unethical AI can present dilemmas in which decisions made by AI systems contradict widely accepted ethical principles. If ethical principles are adhered to in its development and deployment, AI can contribute positively to society, prioritise human values, and avoid negative impacts on individuals and communities. Ethical AI is accountable to the people who are affected by it and respects the laws and the rights of people internationally.

Inclusive: When we say Inclusive AI, we mean that we want AI that includes everyone from across the diverse cultures and communities that make up society.

AI systems can often reflect and amplify societal biases if they are not carefully designed and monitored. Efforts must be made to address bias, ensure transparency, and promote fairness in AI algorithms and decision-making processes. Inclusive AI must take care not to exclude any group, particularly our children and youth, and under-represented or disenfranchised people. Inclusive AI must be shaped by a diverse range of voices in all areas, from strategy development and data gathering to algorithmic design, implementation and user impact. Inclusive AI respects our human right to live free from discrimination.

Empowering people through AI knowledge  

Overall, we view AI as a tool to be used by humans, and as with any tool, it can be used for good or for bad. To navigate the complex landscape of AI, it is important to give people a foundational understanding of the capabilities and limitations of the technology. That is one of the key aims of the Scottish AI Alliance: by providing people with knowledge about AI, we aim to enable them to be active participants, able to contribute productively, and if necessary critically, to the discourse around AI and how it is used in our lives and society.

This September we will be releasing Living with AI, a free online course, open and available to everyone, designed to introduce people to the world of AI. Over five weeks, participants will learn about the impact AI is having on our society and work, explore the challenges we face with AI, and discover what AI holds for us in the future. Sign up for Living with AI

Artificial intelligence is already transforming our lives in so many ways, and future developments have the potential to change so much more. However, to fully embrace the benefits of AI, we must confront the misinformation surrounding it. While acknowledging the legitimate risks and challenges, it is essential to encourage a balanced understanding of AI's capabilities and limitations.  

Responsible development, ethical guidelines, and ongoing research are pivotal to harnessing the potential of AI while safeguarding against its risks. The Scottish AI Alliance stands committed to promoting public awareness, facilitating dialogue, and fostering collaboration to ensure the responsible and beneficial deployment of AI in Scotland and beyond. 

Steven Scott
