Podcast: The Public and AI

Welcome to Turing’s Triple Helix, the Scottish AI Alliance’s podcast.

In this episode, Steph Wright, head of the Scottish AI Alliance, speaks with two amazing guests who have been closely involved in recent studies about public perceptions of AI in the UK.

We discuss the growing public awareness of AI across society, changing attitudes towards the use of AI in various sectors, and areas of public concern.

Our guests

  • Anna Colom, Public Participation and Research Lead, Ada Lovelace Institute

  • Sylvie Hobden, Head of Public Attitudes, Centre for Data Ethics and Innovation

Links

How do the public feel about AI? (Ada Lovelace Institute)

Public attitudes to data and AI: Tracker survey (Wave 3) (CDEI)

Transcript

Steph Wright:

0:02 Hello and welcome to the latest episode of Turing's Triple Helix, the podcast channel for the Scottish AI Alliance.

0:08 I'm Steph Wright and I'm head of the Scottish AI Alliance.

0:11 Today I'm joined by two fantastic guests to discuss their work on what the public think about AI.

0:16 First, I have Anna Colom.

0:17 Anna is the public participation and research lead at the Ada Lovelace Institute and joining Anna is Sylvie Hobden.

0:24 Sylvie is the head of Public Attitudes at the Centre for Data Ethics and Innovation.

0:28 Welcome both.

Sylvie Hobden:

0:29 Hi there.

Anna Colom:

0:30 Hello

Steph Wright:

0:31 Great to have you both here today.

0:33 So, let's dive straight in.

0:35 So let's start with Sylvie.

0:37 Let's start with the Centre for Data Ethics and Innovation or as most of us call it CDEI.

0:44 Can you please tell us a bit about it and what it is you do?

Sylvie Hobden:

0:48 Sure.

0:48 So CDEI leads on the government's work to enable trustworthy innovation using data and AI from within the Department for Science Innovation and Technology.

0:58 So the team at CDEI develops tools, guidance and standards to help organizations use AI and data in a way that builds public trust.

1:06 So we work really closely with policy teams across the department to ensure that consideration of issues like bias, privacy and transparency is really embedded into policymaking and legislation.

1:17 So within CDEI I lead the public attitudes team, and we conduct regular engagement, both qualitative and quantitative, with the public and affected communities about data and AI, in order to ensure that the work of both CDEI and wider government is reflective of those public values relating to those issues.

Steph Wright:

1:34 Fantastic, that sounds great.

1:36 And then over to you, Anna, can you please tell us a bit about the Ada Lovelace Institute and what it is you do?

Anna Colom:

1:42 Yes, of course.

1:43 So the Ada Lovelace Institute is an independent research institute and our mission is to make sure that data and AI work for people in society.

1:51 So our work covers a wide range of topics, including regulation, how technologies can be developed and deployed in ways that support and protect people, and the use of those technologies in public services.

2:03 So we try and work towards that mission in three different ways.

2:07 One is building evidence on AI and data in society.

2:10 The other one is discovering and amplifying the voices of those most affected.

2:15 And then we also bring that evidence to governments, industry, public institutions and civil society to make sure that this evidence informs the debate and the thinking around data and AI.

2:24 And as we mentioned in the latest report, you know, the Ada Lovelace Institute was founded on the principle that discussions and decisions about AI cannot be made legitimately without the views and experiences of those most impacted.

2:36 So that's basically the core of our work and our mission.

Steph Wright:

2:41 And that very much aligns with what we're trying to do at the Scottish AI Alliance.

2:45 So it's great to hear that.

2:47 OK, let's talk a bit about the work that you guys do.

2:51 So Sylvie, back in September, the CDEI released a report about public perceptions towards the use of foundation models in the public sector.

2:59 Can you please tell us a bit about this and how it came about, especially the term 'foundation models', as maybe not everyone knows what that means.

3:08 And can you tell us a bit about the key findings?

Sylvie Hobden:

3:11 Sure.

3:11 So foundation models are a type of AI that's been trained on vast quantities of data and they have a relatively broad capability which can be applied across a range of different tasks.

3:23 And there's been huge advances in the capabilities of these foundation models over the last few years.

3:28 People will be familiar with ChatGPT and other large language models which are a form of foundation model. That prompted growing interest across government in how foundation models could be used within the public sector.

3:39 And as I said, there are loads of different applications, everything from marking homework to reviewing X-ray images to assisting call centre staff.

3:46 But there was very little understanding of how members of the public might feel about the use of foundation models in those different ways.

3:53 And what they might see as the benefits and the risks.

3:56 So within CDEI, we conducted a series of online focus groups with participants across the UK to explore these issues.

4:03 So, a very whistle-stop tour of the key findings: we found that overall, participants were fairly open to the use of foundation models in the public sector.

4:12 This was particularly the case where it was felt that the use would have a tangible positive impact on the general public.

4:20 So one application that was viewed particularly positively was the use of foundation models in healthcare research and development.

4:26 And here it was felt that there was this really concrete benefit, the faster development of medical treatments, which would have a really meaningful impact on individual members of the general public.

4:36 So the benefits were recognized, but of course, participants also identified a range of risks.

4:41 So accuracy was by far the most prominent concern that we came across.

4:45 Participants really only supported the use of foundation models

4:48 if they reliably produced accurate outputs. And linked to this, participants wanted to see human accountability over any decisions or outputs that were taken from foundation models.

5:00 They felt most comfortable where foundation models were used to augment and complement human capabilities rather than to replace them entirely.

5:08 There's loads more detail in the report which is published on gov.uk.

5:12 But I really think that the key takeaway is that attitudes towards the use

5:16 of foundation models are really context dependent, and they depend on those kinds of factors: the accuracy, the human in the loop, the impact on individuals and how tangible that impact is.

5:28 So each application really does need to be assessed on its own merit.

Steph Wright:

5:31 That's really interesting.

5:32 I am curious, when you discussed the accuracy element, how did you define that?

5:38 Because obviously no AI tool is 100% accurate.

5:43 So is it accuracy relative to human accuracy, or… it would just be great to expand on that?

Sylvie Hobden:

5:49 Yeah, so I should say that the people we were speaking to were a combination of kind of lay members of the public who had no particular interest in AI.

5:58 But also we had some early adopters in there, people who kind of had used AI and felt they knew more about it than the average person.

6:05 So there will have been a range of understandings of accuracy within that, and we didn't define it for participants.

6:11 We were very much working from what people mentioned.

6:14 There was definitely a sense that there was the potential for the use of foundation models to create more problems than they solved, to create kind of more work. People felt that if there needed to be really close oversight of the outputs of AI, then possibly it wasn't worth using them.

6:31 There really needed to be an improvement in terms of accuracy over and above what a human would produce.

6:38 One example of this came from one of our participants who worked in the education sector, and they said: if a teacher is going to have to re-mark homework after AI has looked at it in order to make sure that it's accurate, it's not worth it.

6:49 It's better for me just to do it in the first place.

6:52 So people were thinking about that.

6:54 I think bias also fed into people's kind of conception of accuracy.

6:57 People wanted to know that the findings were fair as well.

7:01 And I do think that that was kind of conflated with accuracy as well in the conversations.

Steph Wright:

7:05 Yeah, I think it's really interesting because there are conversations about, you know, what degree of accuracy and efficiency gain is like, you know, cancelled out by checking on that accuracy.

7:16 So I think that's a really interesting point and that's interesting, that's come out of a public engagement workshop.

7:22 So, yeah.

7:22 No, that's fascinating.

7:23 Thank you very much.

7:25 Over to Anna. So the Ada Lovelace Institute released a report at the end of October, or an evidence review I believe it was called, with the title

7:35 What do the public think about AI?

7:38 Can you tell us a bit about this and how that came about and what are the key findings in the report?

Anna Colom:

7:44 Yeah, definitely.

7:45 So, at that time, that was during the build-up to the UK AI Safety Summit, and we saw a gap in discourses on AI safety and regulation.

7:55 You know, we thought the voices of the people and civil society were really largely missing from all those discussions.

8:02 So we were thinking, what can we do

8:04 that's most helpful at this point in time, considering also the time we had, to do something helpful in the context of the summit?

8:11 So that's where we thought, let's bring together the evidence that's already there about what the public think.

8:16 And that's why we did this desk research, this rapid review of evidence.

8:20 It included a wide range of methods and publications, including CDEI's work on foundation models, which is one of the few studies there are on these new applications of the technology.

8:31 So that was included as well.

8:33 It was really, really helpful.

8:35 So, yeah, that's what the review does.

8:37 It summarises what we already know. And in terms of the findings,

8:42 what's really interesting is that there's a lot of consistency really in what's already there.

8:47 And of course, different methods will give you different types of outcomes.

8:51 It's not the same to do a survey as to do a qualitative study or a deliberative process like a citizens' jury.

8:58 But it really, it's quite consistent and there's also a lot of overlap with what Sylvie was just saying about foundation models.

9:05 So for example, one thing is the nuance and the importance of understanding the type of AI used, the context in which it's applied and who is affected.

9:15 Whereas it's true that the public is quite positive on some AI uses.

9:19 For example, those applied to science advancement and health. But people are also quite concerned, and often there are both benefits and concerns for the same uses.

9:32 So, you know, people might say, well, I can see the benefit of using AI in early diagnosis of cancer, that's actually a really positively viewed application.

9:41 But there's also concerns about who's making the decision.

9:44 Is there an option to identify where the mistake was?

9:48 If there's a mistake, how do I find redress?

9:52 So there's a lot of nuance in the public's views. The applications that people are more concerned about are those related to eligibility, like job recruitment or access to financial support, whether it's loans or welfare.

10:06 So any application that has an impact on decisions with high stakes on people's lives.

10:11 And there's also strong support in the evidence for protection of fundamental rights, for example, privacy.

10:16 There are concerns about AI use if it can amplify discrimination or create two-tiered societies.

10:23 And then one that's also quite consistent in relation to regulation is that there's a demand for regulation that's independent and that has teeth.

10:31 And again, there's a bit more detail on that in terms of the need for explainability and being able to ask for redress when things go wrong.

10:40 But yeah, those are overall the kind of top-line findings.

Steph Wright:

10:45 That's great.

10:46 And before I ask some, you know, sideways questions, we're going to go back to Sylvie, because you guys also undertake an annual tracker survey on public attitudes towards data and AI.

10:57 So the results of the third survey are about to be released very soon.

11:02 Can you give us a heads up on the key findings?

11:05 And do you think attitudes have changed since last year? Especially since, when you guys did it last year,

11:10 ChatGPT wasn't, like, on everyone's mind.

11:13 And now, you know, AI is definitely much more in the public's consciousness than a year ago.

11:20 So over to you, any sneak peeks you can offer us?

Sylvie Hobden:

11:24 Yeah, sure.

11:25 So, we'll publish the third wave of our tracker survey in early December.

11:28 So we conducted online interviews with 4,000 people across the UK, and also 200 telephone interviews with people who are digitally excluded.

11:38 And as you say, as you might expect over the last year, awareness and understanding of AI has increased.

11:44 And we see this shift across society, even among those who have a lower baseline level of understanding such as older people and people with low digital familiarity, everyone's awareness and understanding is moving up.

11:55 However, alongside this increased understanding, we're seeing ongoing anxieties linked to AI.

12:01 So a growing proportion of the public now think that AI will have a net negative impact on society overall.

12:08 And that pessimism is heightened among those who are digitally excluded, which potentially highlights some emerging concerns around inequality.

12:17 And really underscoring those concerns, when we ask people which word they would use to describe their feelings about AI, words like 'scary' and 'worry' really dominate the responses.

12:28 It's quite striking.

12:29 If we dig a bit deeper into what underlies those anxieties, we see that the most common concern is that AI will take people's jobs, which is really widespread across the population.

12:38 But we do see it's particularly high among those who have lower levels of education.

12:43 We also see concern around AI leading to the loss of creativity and problem solving skills among humans.

12:49 And actually, the Scottish public are particularly concerned about deskilling compared with the rest of the UK, which is quite interesting.

12:55 We generally didn't see too many differences between the Scottish public and the rest of the UK.

12:58 But that was one.

13:01 And both those concerns also came through quite strongly in the focus group research that I discussed earlier.

13:06 So it's always nice when there's consistency across different pieces of research.

13:09 Despite the very real concerns that we see in the data, people do recognize the potential benefits of AI.

13:16 So participants were most optimistic about the potential for AI to streamline everyday tasks, improve convenience,

13:24 but also to improve key public services like healthcare, policing and education.

13:28 Something that we see when we ask more generally about data is that the areas where people see the most opportunity for data use to have an impact, which were the cost of living, health and the economy,

13:39 mirror what the public think are the greatest issues facing the UK at the moment.

13:43 So I think that really highlights that people do recognise the potential for data driven technologies like AI to have a really transformative power in the areas that matter most to them,

13:53 it's just that there are those kind of concerns at the moment which are kind of cutting through really strongly.

Steph Wright:

13:58 And why do you think those concerns are cutting through strongly?

14:02 I mean, from a personal perspective, I think a lot of the narratives you see in, you know, mainstream media don't help because they're presented in a very binary fashion, you know, AI will, you know, take our jobs.

14:16 AI will be the end of humanity versus AI is magic, it will do everything, it will solve all our problems.

14:22 Whereas the truth is AI is something very much in between, and that nuance isn't, you know, kind of pushed out as much. What do you guys think about that?

Sylvie Hobden:

14:32 So in the survey, we actually ask a question around any media stories that people recall around data, not specifically AI, but we do see that a relatively high proportion of people recall a story.

14:44 But they do skew towards more negative stories.

14:47 And I think that's a reflection both of media representation of data, but also of what people naturally recall. If people are feeling risk averse, then the media narrative feeds into it and people are more likely to recall those negative stories.

15:00 And I think that's always been the case across each wave of the survey, but it is possibly becoming a little bit stronger in recent years because there has been that extra level of conversation around it in the last year.

Steph Wright:

15:13 How about you Anna?

15:14 What do you think might be behind some of this?

15:18 I remember last year your previous colleague, Sylvie, presented word clouds on the positives and negatives of AI, and robots were in both of them, which we thought was quite funny.

15:31 I'm interested to see what the word clouds are this year.

15:35 But it'd be interesting to know what other factors, if we haven't mentioned them already, have contributed to this increased fear, you know, and anxiety around AI?

Anna Colom:

15:47 I think that's possibly it. Definitely the discourses have been... there's been so much around the existential risks, and the different, you know, views on whether we focus on existential risk or the actual risks that are already happening

15:58 now, so the media discourse is playing a big role,

16:03 but also the fact that the models and some AI uses are becoming more tangible now for people.

16:11 So I think there's maybe this close interaction with what these applications might mean or not.

16:18 But even within that context, it's interesting that, yeah, the concerns, as Sylvie was saying, are still around jobs or health and healthcare.

16:27 What's interesting as well, which we found in the evidence review, is that whereas people value the potential benefits in efficiency and productivity, they also really care and are worried about losing empathy and human communication.

16:40 So, whereas people see the uses of AI in health positively, they are less positive about its use in healthcare delivery and, you know, AI replacing care and the communication that plays into taking care of someone. So yeah, I think there's been all this hype and discourse, but I think people now are getting closer to different uses and what they might mean.

17:02 And I think that's where the nuance and, and possibly concern, is playing a role.

17:07 There's another finding as well that relates to people's levels of knowledge and degree of education, and how informed they feel about AI. Unlike some views that argue that people are worried because they don't understand the technology,

17:21 actually, these findings show the opposite: those with higher levels of education feel more informed and are more concerned about some uses.

17:29 So, yeah, we should be careful with these views that, oh, well, people don't know, that's why they are worried.

17:34 It's not really like that.

Steph Wright:

17:36 That's a really, really good insight actually.

17:39 And did you find, in both your work, that people kind of think of AI technologies as a single thing?

17:47 Do they think of AI as a single entity, as opposed to what it ultimately is, you know, a suite of technologies with different uses, different applications, et cetera?

17:58 Just wondering what kind of insights you had from that?

18:01 Do people treat AI as a single thing?

18:04 As one technology?

18:07 Anyone?

Anna Colom:

18:08 I am happy to go.

18:10 I was waiting for Sylvie.

18:11 But well, again, one of the findings in this latest evidence review is that, according to the evidence, for the public there isn't one AI. But of course, this also depends a lot on how you ask about AI.

18:24 So some surveys have asked, how worried or positive do you feel about AI?

18:30 So then there's only an option to respond 'very worried' or 'a little worried', or 'quite positive' or 'very positive'.

18:36 But we think it's very important to ask about the range of applications and uses because they are so different.

18:42 And that's where the evidence and the nuance comes in.

18:46 The report we released before the evidence review was a survey as well, representative of the British public.

18:52 And we asked about 17 different types of AI uses because we wanted to avoid this kind of one AI which then really doesn't give very helpful answers.

Steph Wright:

19:04 No, that's really good to hear. And Sylvie, over to you.

Sylvie Hobden:

19:07 Yeah, I'm not sure that we have any evidence around people's understanding of different types of AI.

19:12 But certainly as Anna kind of said, the fact that people really are able to have quite nuanced opinions around different applications of AI does show that there is an understanding that AI has strengths and weaknesses which are kind of going to manifest differently across different situations.

19:29 It feels like the kind of narrative over the last year has been really focused on foundation models.

19:34 So I think naturally there probably is, you know, a lot of people are kind of thinking of foundation models when they're thinking of AI.

19:41 But I think the public has got a good nuanced understanding of how different applications might be beneficial or introduce risk.

Steph Wright:

19:52 That's great to hear.

19:53 So, obviously, you've both given us some insights into the work that you do.

19:57 I guess we've touched on this a bit already about the overlaps in the work.

20:02 Obviously, I assume you guys have different methodologies and things and had different focuses.

20:07 But what general overlaps do you see?

20:11 And here's the million-dollar question, why do you think it matters?

20:15 You know, I know why it matters, but it would be great to hear from you guys.

20:18 Why do you think the work you've been doing in this matters over to Anna?

Anna Colom:

20:23 OK.

20:24 Well, we think there have been so many different AI developments and uses; AI is already used in many ways, including in the public sector.

20:35 It's affecting people's access to housing, welfare and jobs; it's already impacting how people feel at work or the type of work that they are doing.

20:45 So we do think that this research is really important, and the more that different institutions, academics and civil society can engage in this discussion the better, because people are ultimately the most impacted; societies are being the most impacted.

20:59 So, yeah, we think it really matters, and we've been collaborating as well with Sylvie's team to make sure that we understand the findings from the different studies.

21:10 And that we are building a picture of evidence that's as helpful as possible for policymakers and society.

Steph Wright:

21:17 That's great.

21:17 Thank you, over to you Sylvie.

Sylvie Hobden:

21:19 Yeah.

21:19 So in terms of the overlaps between the research, we're getting to a point where unlike a few years ago, perhaps, there is now a growing body of evidence about public attitudes towards AI.

21:29 So the work of Anna's team drawing together the data and rationalising the key findings is really valuable to ensure that we're not duplicating other work, and to try and identify gaps in the research that need to be filled.

21:40 And the survey that Anna mentioned that was conducted by the Ada Lovelace Institute with the Alan Turing Institute was also really complementary to our survey. Our survey is quite general,

21:51 it gives that kind of overarching understanding of public opinions over time.

21:56 But as I said, while we identify that there is this nuance across different applications of AI, we didn't really go into the detail.

22:04 So that previous survey really unpicks the nuance of the attitudes towards different types of AI which is really positive and really kind of added to the understanding of that. In terms of why the research matters,

22:18 I think AI has the potential to have really material consequences in the real world that are going to affect real people, in ways some of which are predictable and some of which may be unexpected.

22:29 So we really need to engage with those people who are going to be affected by this, to understand how they want AI built and implemented, and how they want to see it governed.

22:37 It's really only through that engagement, that we can understand the conditions that AI needs to meet in order to secure public trust and then to go ahead and kind of ensure that those conditions are then reflected in policy and legislation.

22:51 So the data in the tracker survey is really interesting in its own right.

22:54 But it also helps us to start understanding kind of the shape of that conversation that we need to be having with the public on these topics and the areas that we need to explore in more depth.

Steph Wright:

23:03 Thank you very much.

23:04 That leads in very nicely to my next question.

23:06 Really.

23:08 I mean, the vision outlined in Scotland's AI strategy is around trustworthy, ethical and inclusive AI.

23:14 You know, we're very much about positive AI futures for everyone, that it should benefit everyone and not the few, and therefore having as many people as possible involved in this journey is really, really important.

23:27 What insights from your work do you think can help us achieve this vision?

Anna Colom:

23:32 So, yeah, one of the findings in this evidence review is that people want to have a meaningful say over decisions that affect their everyday lives.

23:40 They want their views and their experiences to be included in decision making.

23:44 They also expect a diversity of views to be included and heard, and they are also concerned about inequities and the creation of two-tiered societies.

23:52 So I would say that your vision is really aligned with what the public expects.

23:56 So I guess that's already, you know, a very important place to be. And trustworthy practices can include, of course, better consultation with, and listening to, the people, as suggested by interviewees in some of the research that we've done and that we've included in this evidence review.

24:13 For example, there were some participants in a study on the deployment of pandemic contact tracing apps who felt that mistrust of central government was in part related to the feeling that the views of the citizens and the experts had been ignored in the process.

24:29 So involvement is really important for trustworthiness and inclusion.

24:32 And yeah, there's just a range of evidence in the review that points towards that. So I would say, yeah, continue including the public meaningfully, also in legislative monitoring and oversight processes, not only in, you know, one-off consultations.

Steph Wright:

24:48 Thank you. Over to you, Sylvie.

Sylvie Hobden:

24:51 Yeah.

24:51 So our research really sheds light on several characteristics of AI systems that the public feel are really needed in order for them to be trustworthy.

25:00 So it comes through quite strongly that it's important for the use of AI to be transparent.

25:05 People want to know when AI is being used and why it's being used.

25:09 I think there's a bit of a debate about whether or not people want to know the exact kind of ins and outs of how AI is working.

25:18 And I know there were some really interesting findings in the survey that Anna's team conducted around the relative importance of explainability compared with accuracy, which is really interesting.

25:30 But it's really, really important to be transparent about the fact that AI is being used, so that people can make judgments

25:37 based on that. Second, as I mentioned earlier, the importance of accuracy was really consistently highlighted in our work; people wanted systems that were robust and reliable, and of course they wanted them to be free from unfair biases.

25:49 And then the other thing that comes through across our work is that the public want a human or an organisation to be accountable for the outputs of AI.

25:57 And research also shows that there's a demand for kind of clear avenues through which people can appeal against decisions made by AI in the case of any doubt, which is something that Anna has also highlighted today.

26:09 Yeah, so while all these factors and others need to be in place, it comes back again to the fact that the concerns that the public have vary substantially depending on the specific application.

26:18 So although there are lots of tools and frameworks that can be used to meet these ideals, some of which have been developed by the CDEI, there isn't necessarily a one size fits all approach.

26:28 As Anna mentioned, public engagement is so important to really understand the nuance across different uses.

Steph Wright:

26:34 It's really great to hear those insights because it's very much aligned with our thinking. You know, something we do speak quite often about is that there is no one size fits all.

26:45 It's, you know, understanding that there are different ways of regulating, governing and applying different AI technologies, and the safeguards, et cetera, that need to be put in place to ensure that they don't negatively impact people.

26:59 And it can't be seen as a panacea, you know, you have to have specific solutions for specific use cases.

27:07 I think it's great you guys are doing this work, and we'd obviously love to work with you and try to see how we can use the results in what we're trying to achieve.

27:16 So I think bringing us to a close, what is next for the outputs from your work, and how do you envisage them being used and by whom? I know we've kind of touched on this, but it'd be great to kind of specifically address those questions.

27:31 So who wants to go first? Anna.

Anna Colom:

27:34 Happy to go.

27:35 Yeah, so we've of course been sharing this evidence as much as we can, we organised a panel during the AI UK Fringe Summit about Public Voice and AI.

27:48 There's a part of the report which we haven't really covered, but that tries to take the evidence to the next step, which is OK,

27:55 we know one of the key findings was that the public want to be meaningfully involved,

27:59 but we know that there's questions we get sometimes from policymakers:

28:04 How do we do that?

28:05 How do we meaningfully involve the public?

28:06 So we tried, and we curated this panel bringing different experiences and evidence on ways to involve the public and civil society, including deliberative processes.

28:17 The evidence also shows that complex issues need in-depth involvement.

28:23 So, not only surveys and polls.

28:26 So, yeah, sharing the evidence of course, continues to be a key priority.

28:29 And we are also getting started with the next research studies that we think we need to do to fill some of the gaps in our previous work or in the evidence in general.

28:41 And our priorities are those uses of AI where people are most impacted but least likely to be represented.

28:50 And public services and jobs are key priorities.

28:54 We also want to follow up with more qualitative and deliberative research, since we did the survey earlier this year with the Alan Turing Institute.

29:01 So, yeah, these are the plans, and we'll continue engaging in the conversation.

29:05 So it was great to also be part of this conversation.

29:09 So thank you for that.

Steph Wright:

29:11 Thank you, over to you, Sylvie.

Sylvie Hobden:

29:12 Yeah, so we really hope that as in previous years, the tracker survey findings will be used widely across government as well as by civil society and academia.

29:21 We'll be doing lots of work to share the insight with our colleagues across government and to help them work through what the findings mean for their policy work.

29:29 The fact that we've now got three waves of data means that the findings are quite rich, and they really tell an interesting story about how public attitudes are evolving over time.

29:38 But as with lots of survey work, it raises as many questions as it answers.

29:42 So we'll also likely be conducting more qualitative work in the coming months to explore some of these topics in more depth.

Steph Wright:

29:51 That's great.

29:53 No, thank you.

29:53 Thank you very much both.

29:54 I think, you know, if I had it my way, we could just be talking about this all day, because it's something we're very passionate about at the AI Alliance, and we love to, you know, dive deep into the findings and your learnings from them, because it's something we're trying to, you know, do here as well: meaningfully engage, and that's the key word I think, meaningfully engage with the public around the issues on AI. So thank you so much.

30:19 And, yeah, any final words? If not, thank you so much for your time and keep up the great work; we can't wait to find out more about what you do next.

30:30 So, thank you very much, both.

Anna Colom:

30:33 Thank you.

Sylvie Hobden:

30:35 Thank you.
