Think Digital

Government in the Era of ChatGPT

Let's Think Digital Podcast

New episodes every two weeks. Watch on YouTube and listen wherever you get your podcasts!

This episode, we go deep and talk about everything you need to know about artificial intelligence, machine learning, large language models like ChatGPT, and big data in the government context.

Our first guests are Jen Schellinck, Associate at Think Digital and CEO of Sysabee, and John Stroud, who runs an initiative with Jen called AI Guides. Jen and John will introduce you to AI basics and buzzwords.

Next we look to the future with Cecilia Tham, CEO and Founder of Futurity Systems, to talk about where we could be heading as a society with AI in ways we can only start to imagine.

Third, Shingai Manjengwa, Founder of Fireside Analytics, will join us to chat AI risks, mitigations, and frameworks to use AI in responsible and ethical ways.

And finally, we have a preview of a research report that we are publishing next month that explores how governments around the world are approaching the governance of artificial intelligence. Jacob Danto-Clancy and Bryce Edwards from the Think Digital team join us to share some insights from their work on this project.

Related Links

  • Building an interspecies economy via the Plantiverse (from Futurity Systems)
  • Intense – a quarterly lifestyle magazine from 2030. All the images were made with Midjourney and the text co-written with GPT. (from Futurity Systems)
  • Futurity Science Tools, a data-driven platform for futures intelligence (from Futurity Systems)
  • If ChatGPT was a colleague… – blog post by Shingai Manjengwa

Watch the Episode on YouTube

Transcript
Ryan:

I'm Ryan Androsoff. Welcome to Let's Think Digital. Unless you've been living under a rock, you've probably heard something about artificial intelligence as of late. Maybe it's about how it's going to revolutionize how we work, or that we're moving too fast, and we don't understand what we're getting into. These days, it seems like every second article I see online in the news is about ChatGPT, or some type of new AI bot that's come onto the market. But what is AI anyways? And what are the possibilities, and yes, the very real risks of artificial intelligence? And are we ready for what's going to come next? Well, on today's episode, we're gonna get into everything you ever wanted to know about AI, and in particular, what it might mean for the future of government. We've got a jam-packed podcast today with an amazing lineup of guests to help guide us through this topic. First, we have Jen Schellinck, our resident AI expert here at Think Digital, and John Stroud, who runs an initiative with Jen called AI Guides. Jen and John will give us an introduction to artificial intelligence, and talk to us about what some of the buzzwords you've been hearing actually mean. Then we're gonna look to the future with Cecilia Tham, CEO and founder of Futurity Systems, to talk about where we could be heading as a society with AI in ways that we can only start to imagine today. Of course, we also have to talk about the very real risks of AI, and how we might think about mitigating those risks. So to talk about that, joining us will be Shingai Manjengwa, founder of Fireside Analytics, and somebody who has been thinking about frameworks to use AI in a responsible and ethical way. And keep listening to the end, because we're going to have a preview of a research report that Think Digital will be publishing next month, that explores how governments around the world are approaching the governance of AI. So let's get into our deep dive on AI, starting with some definitions from Jen and John.

Ryan:

You know, this term of big data, and then these kind of bundled terms of artificial intelligence and machine learning, what do those actually mean? And I'm gonna ask you to try, you know, explain it to me, like I'm five years old, what does big data mean? What does AI and machine learning mean?

John S:

Well, why don't I try that first? Because I'm not the data scientist. So how can I explain it? You know,

Ryan:

Perfect, you can try John, and then Jen can correct you on any natural errors.

John S:

Exactly. This is the way most things work. So almost like explaining it to an ADM, right? One way to think of artificial intelligence, if you were talking to a five year old, is to think of it like a robot with a brain. And this robot is able to carry out many tasks for you. In order to get better at those tasks, it has to learn, and the machine has a special way of learning. And the more data that you give to it, the smarter and more helpful the machine will tend to become over time. But what's interesting is that some of the tasks that a robot would be good at are very hard for humans, like math. But some things that a child would be able to do, like distinguishing between a dog and a cat, can actually be quite complicated for a machine when it's starting out. So think of it as a helper.

Ryan:

Right? Okay, so robot with a brain. And data is the food for the brain is what I'm taking away from that metaphor. Jen, what do you want to add to that?

Jen:

Well, I can talk about big data, because that's a term that people often hear. And I'm going to use an analogy that I sometimes talk about in the course that we do together, which is to think about data like water. Think of a cup of water, so that's like small data, you've got a little bit of water in there. And then you think about the ocean. That's obviously big, right? And when you think of an ocean, it has a lot of different properties than a glass of water: it has waves, it has different depths, and different behaviors and pressure and all kinds of things. And that's really what big data is, effectively. You have so much data that you have to treat it quite differently than you would if you just had a small amount of data; you have to use different types of computing, processing power, different types of memory, and all of these things. So big data is data that's large enough that you have to treat it differently. And that's important with AI because, as John was saying, you really need a lot of data for AI to learn. And so you sometimes have to deal with big data in order to get good results out of AI.

Ryan:

Right? So big data is not, if I'm understanding it correctly with your definition, it's not just the quantity of data, that there's some magic threshold where small data becomes big data, or regular data becomes big data. It's more that as the data gets large enough, it actually takes on new properties because of that. Is that kind of fair to say?

Jen:

That's right. That's right. And it takes on different properties because of some underlying physical limitations we have that we're always sort of moving past as well. So in a really concrete sense, sometimes I will say big data is any data that is actually impossible to store on a single computer. And so when you start to have data, like a data set that's so large, that you have to break it across computers, then you actually have to start doing things differently with the data because you have to look at it differently and work with it differently.
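To make Jen's working definition a little more concrete, here is a minimal single-machine sketch of the pattern she describes: when a dataset is too large to handle in one piece, you process it in partitions and combine the partial results, which is the same idea distributed frameworks apply across many computers. The file name and column below are hypothetical.

```python
# A minimal single-machine stand-in for the "too big for one computer" idea:
# instead of loading everything into memory at once, process the data in
# partitions and combine the partial results -- the same pattern distributed
# frameworks apply across many machines. File name and column are hypothetical.
import pandas as pd

total_rows = 0
total_wait = 0.0
for chunk in pd.read_csv("service_requests.csv", chunksize=1_000_000):
    total_rows += len(chunk)
    total_wait += chunk["wait_time_minutes"].sum()

print("average wait (minutes):", total_wait / total_rows)
```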

Ryan:

Would it be fair to say that without big data, AI wouldn't be able to exist?

Jen:

Oh, Ryan, that's such a good question. I mean, my short answer would be yes.

Ryan:

Yeah. Like, I'm kind of wondering, is big data a necessary precondition for AI to exist? And just so our listeners know, I did not prep Jen for this question. I'm literally thinking of it as we're talking, because it's an interesting kind of conception around it.

Jen:

I would say that the way that AI works right now, you do need big data. Absolutely. Because right now, a lot of the major advances we're seeing in AI are using something called deep learning. And the way that deep learning works, you do need very large amounts of data for it to work. If I talk about the type of research that's happening in the lab at Carleton that I'm connected with, they're trying to do a different type of AI that is more focused on representing knowledge, such that you can do things with smaller amounts of data. So it could change in the future. But right now, yes, you need very large amounts of data to do AI successfully.

Ryan:

Okay. And so this kind of brings me to my last definitional question, which is, you know, the term that we hear in the news everywhere these last few months is ChatGPT, which is, you know, kind of the brand name for a type of LLM or large language model. So how would you describe a large language model? And are LLMs, large language models, ChatGPT, just another way of saying AI? Or is there something that's different about that, versus talking about AI or machine learning?

John S:

So, a large language model is not just a synonym for AI, right? There can be other types. And in terms of how ChatGPT works, OpenAI, the company that created it, trained the AI by reading massive amounts of material, so essentially reading everything that's on the internet. And the large language model is essentially a prediction machine. You ask it a question, and its responses come back as the words that make the most sense in that context. So it's a way to have a conversation with the AI itself by tapping into the large language model.
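Here is a toy illustration of the "prediction machine" idea John describes, nothing like ChatGPT's actual architecture: count which word tends to follow which in a tiny corpus, then predict the most likely next word for a given context word. Real large language models do this with neural networks over enormous corpora, but the basic prediction loop is the same in spirit.

```python
# A toy "prediction machine": count which word follows which in a tiny
# corpus, then predict the most likely next word for a given context word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat"
```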

Ryan:

Right. So then, you know, Jen, from kind of a data scientist perspective, is it fair then to say, you know, ChatGPT is an example of essentially feeding big data into some kind of AI algorithm to get something useful out of it?

Jen:

Yeah, it's a perfect example. And it's by far the most successful example of that, I would say, that we are seeing. It's funny, already people have sort of forgotten that not long before we saw the large language model explosion, we were also seeing this generative AI explosion in image generation. So that was another really, really good example of this type of technology. But yeah, I think it is important, and I'm glad you flagged this, that people are now starting to equate AI with these large language models. But that's actually a very small corner of the types of techniques that are available and that people are researching in AI as a research discipline. So, you know, it's clearly very important right now, but there's so much more to explore in this space.

Ryan:

Yeah, no, that's really helpful. And thanks to both of you for kind of walking through those, because again, I think these are terms that get thrown around a lot, but sometimes people are afraid to ask the question as to what they actually mean. So that gives us, I think, a helpful mental model. Okay, so I want to then shift to kind of the so-what question, right? You know, why should we care about this? And certainly, we've heard a lot of discussions about some of the society-wide implications that we kind of hear in the press and that people are talking about. Obviously, here on Let's Think Digital, you know, we're really focused on the public sector, on governments, on what technology trends mean for the public sector. So why should governments care about this, or should they? You know, is this something that both of you think that governments need to have on their radar? And is there something kind of unique about what the impact of this might be for, you know, government organizations versus a private sector company?

Jen:

So in terms of whether the government should care, I mean, as soon as you said that, my first thought was, well, everybody needs to care. The government doubly so. Because I do think that for a while now, these technologies have been sort of slowly improving in the background. But really, with the advent of these large language models, we're seeing the power that they have, and I'm using that word advisedly, power, right? And so I think the government really does have to be aware of that power. I've often used the analogy, even before this happened, of talking about data science and AI in connection with nuclear power. To me, there are a lot of very strong analogies between these two types of technology. And I think that it is very much the role of the government to be aware of these types of technology that have a lot of power, and to think about both how they want to use them themselves for the benefit of their citizens, and also how they need to think about regulating them to make sure that they're used in good ways.

Ryan:

Right. Yeah, and you see a lot of commentary from the data science community, you know, people who are deeply immersed in the research around this, who are concerned about it. I was reading the other day a survey, I think it was from last year, 2022, of data scientists around the world. And it was something like 50% of data scientists were concerned that there was a 10% or greater chance that AI was going to essentially, you know, end the world, right? You know, about half of the data science community was kind of worried about catastrophic impacts, or, you know, the kind of Terminator scenario of self-aware AI that goes rogue. Jen, as a data scientist, is this something that you're worried about?

Jen:

It sounds a little out there to say, but it is something I think about. It is something where I have made some conscious choices about what types of research to do versus what types of research I don't want to do, because I feel that some of it is more or less ethical, or more or less safe, than other types of research. And that's why I'm often trying to get people to realize the power of these technologies. So it's definitely something that I'm aware of and thinking about when I'm making choices about even what types of research to do.

Ryan:

Am I fair in saying, from both of your perspectives, you think the hype is real? Or at least that there's something real behind the hype around where AI is going?

Jen:

I do. And I mean, you know me, Ryan, I'm normally like, no, and John, you just said it as well, no, people are getting too excited. Like, that's very much where I come from, but.

Ryan:

You are, you are a very pragmatic person, Jen, in your approach, I think, to these issues.

Jen:

Thank you, Ryan, I appreciate that. So when I say this feels like a turning point to me, I mean, this is probably the first time I've said that, and I've been doing AI research for arguably 20 years at this point. It's to the point where I don't usually tell a lot of people about my work and my research, but I'm starting to go out of my way to talk to my friends and my family, to say you need to be aware of this, that this is happening now. So that's where I'm coming from right now.

Ryan:

And John as, as kind of a non-technical person who's now working in this space, you know, do you feel the same way? Does this feel like a turning point to you?

John S:

Yeah, I feel like we are post-awareness of what the problems are, but pre-deployment on what this is going to mean. So I think we're at an incredibly exciting time. And the change is only going to get faster.

Ryan:

So now that we've got a better understanding of what AI is, let's start thinking about what's possible in the future. We've got the perfect guest to talk about that: Cecilia Tham, CEO and founder of Futurity Systems. Cecilia does something called future forecasting, or foresight, a practice of predicting what will happen 10, 20, 30 years from now, to help us determine what we should be doing today to prepare for those possible futures. Have a listen.

Ryan:

I know you've got a couple of really interesting projects you've been doing around, you know, everything from thinking about a Metaverse for plants, to a magazine looking at, you know, what life is going to be like in 2030, to building a data-driven platform for kind of futures intelligence. I'm wondering if you could talk through some of those projects that you and your team are doing, because I think they give an interesting insight into how we might, you know, shape our thinking to look at the future to be able to look back towards the present.

Cecilia:

I will start with my favorite. I don't have a favorite, all my projects are my babies, but if I have to pick a favorite. So I have a pin here that says crazy plant lady, which I am, and I'm also very proud of being a crazy plant lady. We started this project, the Plantiverse, literally thinking beyond ourselves. In the exercise of futuring, a lot of times we incorporate a perspective, and more often than not, we see it through our own lens. So in this exercise we actually flipped it and said, well, what if we see it through the lens of a plant? Which sounds really ridiculous. But through that exercise, because every single futuring exercise for us is a rehearsal of what could be in the future, and there's no over-rehearsal, right? So the Plantiverse was a project where we looked at giving plants autonomy, and the word autonomy here for us is understood as giving capability in decision making. The possibilities for something like a plant, right? So we built off of that idea: okay, if we treat a plant the same way we treat an autonomous vehicle, what would that be like? So we literally put a plant on wheels with sensors, so that the plant can decide what to do. So Herbie, we have a name, Herbie will navigate to the water when they need water, Herbie will go to the light when Herbie needs light. So we pushed it a little bit more and we said, well, what if Herbie has a crypto wallet? What if Herbie can earn money by selling its own plants? Or, you know, becoming a plant influencer? And we really pushed that thinking and said, well, what if Herbie can be an active participant in our human-centric economy? So we went down that rabbit hole and said, what if we built an interspecies economy where plants could make decisions about their future, because now they have economic empowerment, to spend money on themselves in ways that prioritize themselves. So we built this whole project of the NFTrees, we sold those NFTrees, we put the money back into Herbie's crypto wallet, and we are trying to build a DAO that is governed by the plants themselves, so that they can decide how the money is spent in ways that prioritize themselves. So that's the gist of the Plantiverse project.

Ryan:

That's fascinating. Yeah, fascinating. And, you know, this notion of kind of greater autonomy, where you're linking things like the Internet of Things and artificial intelligence, do you think this is a realistic future that we may go into, where we see more autonomy for the non-human agents that are in our world?

Cecilia:

Definitely. We're already seeing an evolution of ChatGPT, this version called AutoGPT, which also stands for autonomous, right, and it makes its own decisions so that you don't have to continuously prompt it. We're seeing this evolution of AI agents, and how they will either become extensions of ourselves or become their own autonomous agents that can navigate. And so one of the other things that I wanted to share with you, and we started this research a couple of years back, is this evolution from eCommerce to what's called autonomous commerce, aCommerce, and the transaction participants within this autonomous commerce possible future. So right now in our economy, it's people to people, or companies to companies, or people to companies, right? But in this new future, there could be this new economy called M to M. So machines to machines, agents to agents, and they'll be able to buy and sell the same way that Herbie, which is essentially a bot, could buy and sell to us or to another bot, right? So that for me is really interesting. And, you know, looking back at how we operate as humans, we have been evolving very much driven by transactions, by, you know, the economy. And, you know, having a non-human participant, having these AI agents as participants, is going to be a very, very interesting shift.

Ryan:

Yeah, I mean, I can't help but think about the philosophical ramifications of this in some ways too, right? Because I think, you know, historically, a lot of our concepts around how society and the world work are based upon the premise that humans are the only autonomous actors in society, right? And we tend to view nature and the rest of the world as being something that we influence or control to varying degrees. Now, whether that's true or not, I think there's a big open debate, and, you know, the ecological movement has very much been trying to bring light to the idea that we are part of an interconnected ecosystem. But still, I think a lot of, as you're highlighting, our economic systems, our societal systems are based upon humans being the only, you know, rational agents out there, and this starts to flip that a little bit on its head. And, you know, I wonder if some of the backlash we're seeing against AI, or some of the worry or concern about it, is just the sense of alienness to us, that we're just not used to having, you know, another type of intelligence out there.

Cecilia:

Absolutely. And I think it's this loss of control that you're mentioning, because in a human-centric society, we are the ones who are making the decisions. In this AI world, the autonomy, coming back to decision making, rides on the intelligence itself. And so we kind of built it so that it can ease our, you know, jobs or optimize our time, whatnot. But at the end of the day, what we're training them to do is make the decisions for us, right? And so, you know, there's the good side and the bad side to it. But you're absolutely right.

Ryan:

Yeah. So when you think about, you know, what that potential future looks like, I'm curious about a couple of the other projects you and I were discussing. One was this, you know, lifestyle magazine from 2030, where you're looking at how people are going to be living, what's going to be in fashion, you know, seven or eight years from now. You've also got a platform for kind of data-driven futures exploration. When you think about some of the other work that you're doing around trying to imagine those possible futures, what kind of hints are those experiments giving you about where you think AI is going to go and what it's going to be as part of our life, for society, you know, 10 years from now, let's say?

Cecilia:

So, okay, mentioning these two projects, I'm going to weave them together. The data platform gives us the rigor of the science, the technologies, understanding the state of the art, but also understanding where they're heading. The magazine serves as something more conceptual, more design-oriented, more imaginative. These are very opposite worlds, but we need both of them in order for us to come back and forth, to see what is possible in the future. And through these exercises, you know, we see possibilities like AI agents having their own autonomy, maybe even going onto social websites and connecting with each other. Maybe in the future they might even have empathy, and what does that look like? We also look into the possibility of what we're calling digital souls. So kind of the evolution of how these AI agents, with a level of humanness to them, can then be embedded into our cars, into our refrigerators, into our homes. And then we tend to personify these agents, right? So what would that look like? And with these imaginations, like I said earlier, using this as a source of rehearsal, then we can start thinking about the possible impacts and possible, even unintended, consequences that we haven't thought of before. So, you know, currently we do a lot of these exercises internally, but what we really need to do is educate the greater public, so that everyone can learn these methodologies, envision these futures, be able to practice, but most of all build intentional futures. And I'm going to come back to the point about government. I do think that the role of government is crucial here, because it's part education, but it's also part influence. When we look at companies, companies tend to look at futures much more short term, right, because there are short-term gains there, return on investment, or three years, and the runway. Whereas governments, because of their social duties or public duties, have to look further. And so we can't just rely on companies to develop these technologies and really be intentional about building better futures, right, because their agendas are very different from the government, which needs to be much more hands-on in handling how these technologies are being developed.

Ryan:

Yeah, it's really interesting, because I think in the AI space, we often talk about, you know, the need for government employees to have a better understanding of the technology, which I think is true. I think there's a certain level of, you know, digital or data literacy that public servants today probably need if they're going to navigate the space. But I think what you're pointing to is that there's also a certain need for imagination, right, to understand where these technologies might go, particularly because, as you mentioned, these are exponential technologies, right? The pace of change is so much faster than what we're used to. And, you know, traditionally, I feel almost like science fiction plays a lot of that role in our society, right? We tend to use science fiction as a proxy to think about possible futures. But it's limited sometimes. And this notion that government could be leading, you know, some of that thinking around possible futures, I think, is a fascinating idea, as it links to what their responsibility might be to help guide the evolution of this. I'm wondering, you know, you mentioned this notion of AI agents, and this is something that I have heard talked about in particular in the context of government service delivery, right? For people who have been working on digital transformation projects, a big part of that was always trying to help citizens untangle the messy web of, you know, how different levels of government work. Do you think it's a realistic, or a probable, future that we may see people essentially have an AI agent that does that kind of service delivery interface for them when they're interacting with government?

Cecilia:

100%. 100%. I see these agents as extensions of us. And it's already happening, right? So in my ChatGPT, I have a particular chat that I have already trained, and it's called CEO Cici. There I have, you know, put in some of my words, my terminologies, my tone of voice, so that it can answer emails for me, and it understands what services we provide in the company and can answer those emails for me in that particular voice. But I have another one called Professor Cici, for when I teach, and it has a different tone of voice and different content, right? So carrying that through, these can then become agents that perform different tasks for me as extensions of Cici. Now thinking from the government perspective, they can equally do the same for different branches and different, you know, content and different data that could be very specific to their functionality. And so when you merge this, then, you know, maybe citizen Cici can talk to, you know, the tax agents in Spain, and they can converse and understand each other, so that they can, you know, reach certain conclusions together, and then come back to me and give me the aligned answers. And I definitely think that this is going to be the future.
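For readers curious what a persona like "CEO Cici" might look like in practice, here is a rough sketch of one common way to set it up with today's tooling: a fixed system prompt that carries the tone and context, reused for every request. The persona wording, model name, and sample email are illustrative assumptions, not Cecilia's actual configuration.

```python
# A rough sketch of a persona agent: a fixed system prompt carries the tone
# and context, and every incoming email is answered in that voice.
# Persona text, model name, and sample email are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CEO_CICI = (
    "You are CEO Cici. Answer client emails about the company's services "
    "in a concise, warm, professional voice."
)

def draft_reply(incoming_email: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": CEO_CICI},
            {"role": "user", "content": incoming_email},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("Hi, can you tell me more about your foresight platform?"))
```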

Ryan:

Yeah, and there's an interesting implication behind that. Because if that is the future, where, let's say by, you know, 2030, maybe even sooner than that, we all have our AI agent that we're unleashing to access services and do things for us, it occurs to me then that people inside government have to start thinking about how we produce content and services that are accessible to robots, not just to human beings, right? We have this whole discipline around user experience and user interface design, which is very much about how we structure websites so that humans can use them easily. But it strikes me that, you know, if that's the world we're going into, we're going to need to update that discipline to ask how we structure information and produce, you know, online content in a way that autonomous robots of one sort or another can access.

Cecilia:

Absolutely, absolutely. I recently gave a similar talk, but the target audience were marketing people, and I said exactly the same thing: what would be the ads of the future, or the publicity of the future? You're no longer going to be creating ads for people. Instead, we'll be creating ads for bots, right? And so it's a very, very different approach and connection. And on that front, you know, we were talking about AI agents or these digital souls being embedded. One of the future possibilities that we thought of is, if I have an autonomous vehicle with a digital soul, Cici car, embedded in it, and there is an AI agent from the government side that is, you know, directing traffic, so mobility, and somehow there is an accident, and accidents always occur even at the level of, you know, autonomous vehicles, right, as reduced as they might be. Maybe Cici the person will never be involved in the, in the jurisdiction, because the laws will already say, oh, you know, five points less for the Cici car agent, because of x, y, and z. And that's an underlying kind of layer that citizen Cici, that I, will never see. And it's all bot to bot, M to M kind of dialogue. And I think that's definitely going to be.

Ryan:

Right. I have this vision of the Cici bot, you know, going to AI jail for traffic violations in the future. So, no, it's really, I mean, you know, I think sometimes this can kind of seem super science-fictiony and beyond anything that's realistic. But, you know, we're moving so quickly on this, and there's just this notion that our societal laws and systems and the way we structure things are designed for people, and suddenly people are not going to be, as you say, the only autonomous agents that are operating.

Cecilia:

Yeah, absolutely. And there are a few things that I feel like we don't really have a good handle on. One of them is the line between what is real and what is synthetically created. You know, a lot of these Midjourney, DALL-E images, sometimes I can't even tell if they're real or not, right? And when we add on augmented reality, mixed reality, even VR, we might not know whether what we see is real or not. And I think there might be some problems there, you know, we sometimes call it digital schizophrenia, when we don't know. And a lot of the abuse and misuse could be applied on that layer. Because we're already seeing what's happening with Facebook, TikTok, a lot of these social media platforms, they control what we see, right? And so in this new world of generative realities, who's going to control what we see? Our parents, the government, you know, you? Yeah, and how does that work? The companies? That line is very iffy right now.

Ryan:

Do you have a sense of what the solution might be to this? Because I think you're right, this is a real problem. You know, all the concern we have about misinformation and disinformation today in the social media context is just going to be an order of magnitude more potent in this world of, as you said, artificially created images, which might not just be on a computer screen, but in front of your actual eyes. Do you think there is a need for almost regulating transparency around that, that there needs to be some kind of watermark if it's a fake image versus a real image? How do you think we even come at trying to approach this at scale?

Cecilia:

I do think that we need to regulate it. Because if you leave it in the hands of the companies, historically speaking, they have not been responsive on that front, right? So it's more of a lower priority for them. And so I do think that the government definitely needs to step in. I also think that government needs to upskill their, you know, their teams in order to have that kind of conversation. Right now, I feel like there's a paralysis on the government's front. They don't want to move forward, and the answer very often is, let's block it, right? Italy did that when ChatGPT came out, and they just immediately blocked it. And two days later, you know, another version of GPT, which is called PizzaGPT, which I find hilarious, came out. So how do you not play catch-up? How can the government be, you know, in front of the curve? How can they actively shape what that terrain looks like in the future? I think that is the number one concern that I have, the government's role.

Ryan:

This has been a fascinating conversation, and as we come to close it, I want to ask you one last question. You know, there's a lot of optimism and excitement about some of the possibilities that AI can unleash, but also a lot of worry about some of the downsides that we've talked about, and even some of those unknown potential downsides that we maybe can't even imagine right now. When we're thinking about this space and kind of the power of these emerging AI tools, I'm curious, you know, what has you most excited when you think about these possible futures where we're living with AI? And what has you most concerned?

Cecilia:

Then let's start with the good, and then I will share my concern, and then end with another good. My most exciting kind of understanding of how AI could be developed, and it can be developed in many, many ways, is that, you know, we say AI, and I sometimes think of the I as imagination, artificial imagination, because all of a sudden it opened up a whole massive capability. To be honest, it was very questionable whether computers and AI could go down the creative path, and we have proved them wrong, right? And so from that area, we see this shift from, you know, creation, so creating content from scratch, to, right now, almost curation, where I can just, you know, use Midjourney or DALL-E and have 50,000 images. And from those, I can say, I want to do this, this, this, this, this, right? Same with video, same with 3D, same with ChatGPT: give me 10, you know, ways to market myself, and then I can pick from there, which, you know, saves me 80% of my workload, which is amazing. So that is one thing that I'm really, really interested in and excited about. The scary part: I think it has already been proved that, you know, people will abuse it, and they will try to work around it. You know, the other day I saw a scammer trying to learn the voice of a child, and use that to tell a parent that your child has been kidnapped. And it was all AI, synthetic voice, AI, you know, content. And as a parent, I can't imagine the kind of fear that this misuse and abuse of these technologies will create if used against me, or worse yet, against a generation of elderly people who are very vulnerable, who might not even know, you know, if it is real or not. And that for me is frightening. And I think we're just starting to see, you know, how these will be misused. But coming back to, you know, another example of how these things could be applied: we have projects, working with the government actually, to apply AI for elders. And so I wanted to end on a good note and a good example, that, you know, creating these digital twins, these AI agents, these extensions, could be an immense, immense value added for the growing elderly population. Because, you know, coming back to autonomy, giving them autonomy, giving them the ability to learn new things, giving them an extension of themselves, and even kind of a digital twin of how they can share their knowledge, their skill set, could be an immense value to our society. And so we're really excited to use ChatGPT, AI, digital twins, Web3 tools to give the elder population a second life.

Ryan:

As we talked about with Cecilia, our potential futures around AI can absolutely go both ways: toward great promise or great peril. And it's ultimately going to be up to us today to determine how to shape that future. So how do we shape our future in responsible, ethical ways when it comes to the use of artificial intelligence? To talk about that, I spoke with Shingai Manjengwa, founder of Fireside Analytics and an expert on the topic of the ethical use of AI.

Ryan:

You know, one of the things that we tend to think about, or certainly I'd like to think about, when we think about the use of AI in a societal context is some of the potential challenges around what the risks might be around the use of AI, right? And we see a lot of hype around, you know, the benefits and some of the economic disruption this might bring. But particularly when we're thinking about, you know, government institutions and the public sector, which tends to be our focus here on the podcast, there are some specific considerations around this. I know this is an area you do a lot of work in, so I'm curious to get you to maybe give us a bit of a primer on what the potential risks of AI are, you know, particularly thinking about marginalized communities in society and our public sector institutions.

Shingai:

Right, so there are the obvious ones, Terminator 2. So let's park that for now and maybe come down to earth a little bit more, so we can talk about things that will directly affect us probably sooner rather than later. My view of the risks of artificial intelligence starts with the technology itself. We might say that it is a context, a world, an environment built by narrative, because we're mining data, and we have 3 billion people who have not been very much part of the conversation and the narrative that's been used to train this artificial intelligence. So if you see it as a world, as the matrix, right, imagine that the authors of The Matrix excluded certain populations. We have, you know, just over a billion people on the African continent, and we have China, who participate differently on the internet. So if you just use that as a benchmark for how exclusion can take place in the framing of this context that we've built the artificial intelligence engines on, then you can see quite quickly how certain populations can be excluded. So I'll give you an example, and I mentioned it to you earlier, Ryan: there's this blog I wrote about if ChatGPT was a colleague of ours. I put my own name into ChatGPT to see what would come out. And it hallucinated many variations of, you know, what I studied, where I studied, where I grew up, etcetera. It was thematically correct, mostly correct, but something that I noticed was that the undergraduate university it always gave me was never an African university. Right, it was never an African university that it hallucinated. And I ran multiple, multiple iterations to see what would come up. So, you know, it's been very difficult, and I tried to really pinpoint exactly where the bias might be and how it might manifest. But that's an example. There will be subtleties. Very, very subtle ways in which our reality will be framed, and how we behave and respond to the information we get. All of that will be framed by this context and this narrative that's been generated by a subset of the global population.

Ryan:

Well, and, you know, it makes me think that, in some ways, those kinds of subtle errors that work their way in, in part, as you're saying, given the fact that it doesn't have a complete training data set that represents everybody, those can sometimes, I would think, be the most difficult, because they can be persistent in a way that's tough to surface unless you really dig into the code, and, you know, get a little bit into the black box, as they say, behind the algorithm.

Shingai:

And I'll add too that, you know, because I spend a lot of time in the area, keeping myself honest, I also want to say, so what? Because we raise the alarm when, you know, we got search engine results that pointed to executives, and gave us images of white men. We raised the alarm when we put into search engines, beautiful people, and we got images of white women, right? And there are multiple examples like that. We raise the alarm over the years. And what concerns me is that potentially, you know, the way we are immersed in these technologies, perhaps they're changing our minds and our worldviews, and maybe we are resilient enough to see through it. But what about our children who are so dependent on these technologies?

Ryan:

Right.

Shingai:

Be it the search engines, and now more so with the use of these large language models in a much more profound and ubiquitous way.

Ryan:

Yeah, I mean, that generational aspect is actually, I think, a really fascinating comment, because you're right, you know, we're in this transition generation of people who've lived in the world pre-AI, and are now kind of transitioning into it impacting those algorithms that are behind our lives in so many ways. But that's absolutely right, that for people who are now coming up, this next generation, you know, this may just be the reality going forward. And that certainly shapes their perception of their world more broadly. You know, and so it kind of begs this question of what we can do about this, right? In what ways can we actually mitigate some of these challenges? And is it actually possible to train AIs in an ethical way, just given the realities of the biases that do exist in our society?

Shingai:

Well, before I get to that, and I'll just warn you, I don't have the answer. I don't have the 100% solution, but maybe some directional thoughts. Um, there are two other risks I just want to mention. I hinted at the first one, which is hallucination. Right now, the large language models that we are using, and there are many, many products and pieces of software being generated with these models, the models themselves hallucinate information. So just very quickly addressing that one, having a fact-checking layer would be one way that you could mitigate that as a challenge. So you might be using a large language model for a chatbot, for whatever purpose, it could be a private or public sector application; having a fact-checking layer between that and what the public uses would be one way that you could mitigate the hallucination, which is still a feature, a bug/feature, in the large language models that we're using. And then the last risk that I'll just point to before we start getting into mitigation is the impact on society. So the job losses are coming. I myself have benefited from a role that didn't exist 20 years ago. Artificial intelligence education, or data science education, is a role that I created for myself when I started my company, Fireside Analytics. But I now see these roles, I now get calls from people saying, Shingai, we know you sort of do this, you know, can you speak to us about getting that done. So that gives me hope, makes me optimistic, because I'm a beneficiary of, you know, a role that's been created as a result of the technologies. But we are going to see massive job losses in the near and further future. And I think we need to really be sober and prepared about that.
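For a concrete picture of the fact-checking layer Shingai describes, here is a minimal sketch of the pattern: the chatbot's draft answer is screened against a trusted reference before anything reaches the public. The stand-in model call, knowledge base, and example question are placeholders, not a description of any real department's system.

```python
# A minimal sketch of a fact-checking layer between a language model and
# the public: the model's draft answer is compared against a verified
# reference, and the reference wins if they disagree. All content is a
# placeholder for illustration.
VERIFIED_FACTS = {
    "passport_processing": "Standard passport processing takes 20 business days.",
}

def generate_draft(question: str) -> str:
    # Placeholder for a large language model call; it may hallucinate.
    return "Standard passport processing takes 10 business days."

def fact_check(draft: str) -> tuple[bool, str]:
    """Compare the draft against the verified reference text."""
    reference = VERIFIED_FACTS["passport_processing"]
    return draft == reference, reference

def answer(question: str) -> str:
    draft = generate_draft(question)
    ok, reference = fact_check(draft)
    # If the draft disagrees with the verified source, fall back to it.
    return draft if ok else reference

print(answer("How long does a passport take?"))
```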

Ryan:

You know, just on this point, because I think sometimes with this notion of job losses, you know, we've heard people kind of come down on different sides of that issue, whether it's inevitable or not. So if I'm taking you right, you're saying, one, you think it's inevitable? And I'm also kind of curious about the other side: do you think that those job losses are just going to be a net loss in general? Or do you think there are enough new types of work, new types of jobs, that are going to replace or build upon that?

Shingai:

All right, so on the subject of mitigation, I don't think that the upside will happen organically. I think it has to be something that we're investing in. So in 2019, I was doing a talk in Paris at the World Innovation Summit on Education, and there I was talking about automation. So this was before the pandemic, this was before these layoffs, and at the time, the literature pointed to, okay, you know, universal basic income, some sort of taxation when tech companies, and it's many types of companies now, not just tech ones, lay off people, because of the loss of income from income taxes, etc. So massive, far-reaching implications of job losses due to automation. But that conversation went quiet, and then we had the pandemic, and now we're seeing, you know, quite significant layoffs, and that conversation hasn't come back. So it must be deliberate. And I think it's something that we need to be putting on our roadmaps today. Professional development, you know, should be on every organization's strategic plans going forward, more so in government, because, you know, the impact on government, as a recipient and as an employer, is also quite significant. So it doesn't happen on its own; we have to be investing in what those plans look like when the automation happens. And that way, we can create the new roles and be deliberate about it. And we can also mitigate, you know, some of the roles that are just gonna go away and not come back.

Ryan:

Right. No, that's a really interesting point, and I think an important point, around this notion that the mitigation of the economic impacts of AI is going to have to be deliberate.

Shingai:

Can I tell you a story, actually?

Ryan:

Please.

Shingai:

So, my first job in Canada was at TV Ontario. And I remember one of the first projects that I got was: Shingai, can you do an analysis for us? You know, there's a documentary, we want it to be really successful, we want it to have high ratings. So can you tell us when to put it on TV to ensure that it does well? So that's a classic data science problem. I chose to use decision trees; I could have used a number of techniques, but that was the one. You know, I looked at some historical data, used some proxy data, pulled out a model, typed a beautiful email, lovely charts, CC'd the right people, and I hit send. And I got an email back in all caps that said, WHO IS YOUR BOSS? And so obviously, somebody didn't like that email. So, you know, I went to the person I was reporting to at the time and I said, hey, what is this? And after some investigation, it turned out that scheduling is a business function, and it was someone's job. And that person had multiple Excel spreadsheets that had formulae that referenced other formulae in different sheets that took a while to refresh, and it was a unionized role and function within the organization. And I had shown up, and I had done this in a couple of hours. That to me was a good case study in automation. Because it's not an army of robots. It's not, you know, there's a scene in The Matrix where the agents are at the door, that's not what it looks like. It's going to be someone with a name that you can pronounce, perhaps, who sends an email that does your job. And I think it highlights the risk too, for policymakers, that if we fast forward 10, 15, 20 years down the line, and we have this type of automation taking place, the kind of civil unrest that can come from that is actually potentially a bigger risk than some of the challenges we face directly from the technology. So if you have massive unemployment, and that is my prediction, that it is the case, and we don't account for that, and we are not making plans for, you know, what are the alternatives, or how can we redirect those efforts, those energies, then you might find yourself in a situation like I did: in trouble for what I considered progress, what the organization considered progress, but we just hadn't thought about how our progress affects the livelihoods of other people. And that creates feelings, and oftentimes negative feelings, too.
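As a rough illustration of the kind of analysis Shingai describes, here is a toy decision-tree sketch of the scheduling problem: learn from historical broadcast data which time slots tend to rate well, then predict a rating for a candidate slot. The data and feature choices are invented purely to show the shape of the approach, not the actual TV Ontario analysis.

```python
# A toy version of the scheduling analysis: fit a decision tree on
# (invented) historical broadcast data and predict a rating for a new slot.
from sklearn.tree import DecisionTreeRegressor

# Features: [day_of_week (0 = Monday), start_hour]; target: audience rating
X = [[0, 19], [0, 21], [2, 20], [4, 22], [5, 20], [6, 21]]
y = [3.1, 4.8, 3.9, 2.7, 5.2, 5.6]

model = DecisionTreeRegressor(max_depth=2, random_state=0)
model.fit(X, y)

# Expected rating for a Saturday 9 pm slot.
print(model.predict([[5, 21]]))
```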

Ryan:

Yeah. I mean, there's absolutely a human dimension to this. No, it's a fascinating story, and I think it illustrates this really well. And it makes me think a little bit about that article you wrote about having, you know, ChatGPT as a colleague, and what that might look like. And I wonder about this tension between, you know, regulating and controlling new technologies like ChatGPT, versus enabling innovation. And, you know, my sense is that the private sector often tends to index more towards enabling innovation, because if you can improve your bottom line, there's a pretty clear case to be made to reduce your headcount and go in that direction. I mean, as you're pointing out through the story you just shared, in the public sector it's a more complicated dynamic; as you said, you've often got kind of a different workforce dynamic around it. And I'm just wondering how you imagine this future looking, or, again, back to this notion around mitigation: how do we prepare ourselves, particularly for those who are working in government institutions, for a future where they may be colleagues with an AI? Right, their colleagues may not just be humans, but they may have these artificial agents of one sort or another working alongside them. Do you think we can do that in a harmonious way?

Shingai:

Absolutely. And we are, right? We open our phones with our faces. That is the use of artificial intelligence. I think, you know, we have to separate some issues; there are concerns around government use of artificial intelligence. So if we now have an automated decision system that does taxes, for example, we have to be fully aware that a system like that is an input to other systems, such as immigration, and where there is potentially bias in a system like that, the impact is far reaching. One mitigation I suggest is recourse. The technology is moving faster than even the technologists can keep up with, right? So policymakers are a couple of steps behind even the people that are working on the tech. So, you know, we're doing our best to keep up, and I applaud the efforts of, you know, the ministries of innovation, some interesting legislation coming out. And I appreciate the difficulty, it's not a criticism at all. I think at some point, we start somewhere, and then we iterate, perhaps. But I think one thing that we can do is make sure that we have really robust areas of recourse. And, you know, perhaps it's a government department, perhaps it's, you know, the Privacy Commissioner, I'm not exactly sure what it looks like yet. But if we now have AI in the wild, which we do, private sector, public sector, I should be able to pick up the phone and reach a government department to say, you know, this technology was used on me, or has affected me in some way, can we investigate it? And that team has to be the most responsive team in government, right? They should be the most technologically advanced, they should have access to the best tools and the best researchers and the best technical people, so that they can actually get to the bottom of that. And they can also collect data, because remember, what we're calling models is being built by individuals with laptops, right? Some of them are grouped in big companies like OpenAI and Google, etc. However, for the most part, I can do this with my laptop here at home. So what you might say is, okay, it's going to be difficult to police pedestrians, right? So why don't we have, just like we do in the transport example, guardrails and institutions that deal with the cyclists, that deal with driver's licenses, that deal with drivers and requiring them to have driver's licenses, really move up that chain, and then make it so that if there is a traffic issue, there is a responsive department that will answer specifically to that category of issue. And, you know, it's pie in the sky at the moment. But I think if we start talking about it in those ways, it stops being abstract and stops being overwhelming. Because we can govern AI; we govern very complex things in the world at the moment. This is just something that we need to figure out and frame in ways where, you know, we have examples to draw from.

Ryan:

Yeah, it's interesting, because I was reading something quite recently about exactly this notion that, you know, we can build regulations around this, but it actually requires a new kind of governance infrastructure to do it, as you're saying, you know, institutions that may be very much focused on this question of governing AI. But it makes me think about an earlier comment you made; inherent to that, we need people in the public sector who understand it, who have the skill sets behind it. I mean, I do worry sometimes about this asymmetry, where there are a lot of people who have the technical expertise, particularly around data science, which is such an in-demand, you know, skill set right now, and government seems to be struggling at the best of times to get, you know, people with good data backgrounds and data science professionals in to deal with its own data analytic needs, let alone being able to enforce regulations on data science. I mean, this is the area that you focus on, you know, with your company and the work that you do. What do you think government needs to do to be able to increase the amount of skills and knowledge that it has about how these technologies work?

Shingai:

Well, I'm glad you brought it up, because if I had led with that, it would seem oddly self-serving. But it is true, education has to happen, and at multiple layers. So look, we have a sense of urgency now. So, you know, the calls for "I need an executive briefing", those calls I'm taking, and it makes sense. Let's do that, right? So if there's a department, and, you know, folks even at quite senior levels are feeling that they're not on top of this, they don't have a handle on it, the rate of change is vast, it's too quick, you know, hold the phone, book the time, and have those executive briefings with, you know, the multiple organizations that do this type of work. And in that way, just as a level setting, Artificial Intelligence 101, here we are in 2023, or whenever you're watching this, that's going to be a good, you know, point of departure, rather than letting this continue. Because with the rate of evolution and change, you know, at some point, I do feel some people will be left so far behind that it will compromise their ability to do their jobs as policymakers. And I say that with all due respect. I just think, you know, we're at the point where I ask the question: what qualification will our prime minister need in 20 years' time? You know, what will that person look like? What education will they have? What experience will they have? And I think part of making sure that that person is ready requires policymakers right now, today, from across the ministries, from education to agriculture to everything, folks need to really be educated in terms of what these technologies do, what the potential is, and what some of the risks and downsides could be.

Ryan:

Yeah, it's a fascinating thought. Traditionally, politicians tended to be lawyers, if you look demographically at their professions, and I think you're absolutely right that maybe to be an effective legislator or an effective Prime Minister 10 or 20 years from now, you're going to have to be a data scientist to do that effectively, given how government is going to be processing and dealing with issues.

Shingai:

Or, to be fair, just make sure that those skills are acquired along the way, right? We may do away with the titles, and remember, the whole education sector is being upended right now, so who knows what that will look like then as well. But really focusing on the skills, and being able to understand what the technologies do and the impact that they will have on society, that's going to be critical.

Ryan:

Yeah, absolutely. So it's been a really interesting conversation around what some of the potential concerns around the introduction of AI into our world more broadly have been, and I appreciate your perspective on some of the potential mitigations. As you said, we may not be moving directly into the world of Skynet and Terminator. But I'm curious, as we wrap the conversation up: are you worried, ultimately? Do you think we're going in the right direction? And for people who are listening, particularly those who are working in government, what should they take away as the most important thing for them, on a practical level, to be keeping an eye on or watching for in the months and years to come on this issue?

Shingai:

So firstly, I'm optimistic about artificial intelligence and the impact that it can have on us. I'm also optimistic that as a society we will be able to absorb that change, and potentially make it work for the better. We are smarter and wiser than we were in the past. However, the thing that I would focus on, or maybe singularly the thing that I would consider, is that ultimately these models require what we call an objective function. That's mathematically setting the objective for what we want the model to achieve. And what I'm noticing now is that artificial intelligence is bumping up against some really core foundational ideas that we have as a society, such as maximizing shareholder value. Right? That's core to capitalism and what we believe. But if we now set that as an objective for artificial intelligence, what is the impact that it would have on people? Or is it that we set objective functions, but we also have to measure those against some of the societal objectives that we also want to prioritize and value? So anytime you're considering these large language models, it's really important to ask ourselves: what is the objective function? What was this model developed to do? Because the model will do what it was developed to do; it's mathematics. Mathematics, at least, is reliable and consistent in many ways, certainly at the level we're operating at when it comes to artificial intelligence. So that, to me, is the key focus: what is the objective function? And in our use in the public sector, think of an objective function like, and I'm just going to be a bit controversial here, "expand the British Empire". What would we get from that, set as an objective function for artificial intelligence? What does that mean for colonization and slavery? Those are big philosophical questions. But I think at the heart of it, that objective function is central to how this is going to pan out, how these models are being used, and the impact that they're going to have. So fully understanding the ripple effect that comes from some of these foundational ideas that we have, now that we have this technology that fire-hoses them into the world, is an important perspective.
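To make the idea of an objective function a bit more concrete, here is a minimal, hypothetical Python sketch (not something discussed on the show) contrasting a single-metric objective with one that also weights a societal constraint. Every name, weight, and number in it is invented for illustration:

```python
# Hypothetical illustration only: the names, weights, and data below are invented
# for this sketch and do not come from the conversation.
import numpy as np

def profit_objective(predictions, revenue_per_unit):
    """Single-metric objective: maximize expected revenue and nothing else."""
    return np.sum(predictions * revenue_per_unit)

def blended_objective(predictions, revenue_per_unit, group_ids, fairness_weight=0.5):
    """Blended objective: revenue minus a penalty for unequal outcomes across groups."""
    revenue = np.sum(predictions * revenue_per_unit)
    group_means = [predictions[group_ids == g].mean() for g in np.unique(group_ids)]
    disparity = max(group_means) - min(group_means)  # gap between best- and worst-served groups
    return revenue - fairness_weight * disparity

# The same approval scores, evaluated under each objective:
predictions = np.array([0.9, 0.2, 0.8, 0.1])        # model's approval scores
revenue_per_unit = np.array([100, 100, 100, 100])   # value assigned to each approval
group_ids = np.array([0, 1, 0, 1])                  # which group each person belongs to

print(profit_objective(predictions, revenue_per_unit))              # 200.0
print(blended_objective(predictions, revenue_per_unit, group_ids))  # 199.65
```

A model optimized against the first function and one optimized against the second can rank the exact same people differently, which is the point Shingai is making about deciding what we ask these systems to achieve.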

Ryan:

Yeah, and just the scale of the impact of these technologies. As you rightly point out, setting that objective and defining the problem that these new technologies are trying to help solve, when that goes awry with an individual policymaker, an individual person, there's a limit to the scale of impact they have. What I always worry about is that with some of these new technologies and the algorithms we're unleashing, that scale of impact could touch millions, billions of people almost in an instant, and it becomes a very different world in that way as well.

Shingai:

Right. In politics, you can set an objective function to create a policy that benefits your constituents; you can also set one that allows for your re-election. Those two objectives are very likely going to result in two different paths. And we really need to appreciate what objectives we're setting and what we want this artificial intelligence to do for us.

Ryan:

There's a lot to think about when it comes to AI and using AI responsibly. But I really appreciate Shingai's insights and the frameworks that are being developed to help guide this work right now. So the question is: is government ready to do this work and guide us through an AI-enabled future? To answer this question, let's return to Jen and John, who have been working with Think Digital to help prepare government leaders for the AI revolution. The last question I want to throw to both of you before we close out this segment is this. I think it's fair to say that all three of us agree that this is probably something that those in government need to have on their radar, probably higher on their priority list, and certainly need to understand, and that it's more than just a technical challenge for folks in the IT shop. This is something that any digital innovator, and I would probably argue any policy practitioner in government, should be thinking about. What are your tips for how people can learn more about this, and how they can start thinking about how to explore the potential of the tech, in a situation where sometimes they're not going to have clear guidance or frameworks, because things are moving so fast?

Jen:

I'm reluctant to say this, but I'm going to say it anyway, and that is that I don't think there's any way around improving your digital literacy, to some extent. People really need to get their heads into the digital space and think about what that is. I'm not saying that you have to become an expert, but I think if you're in the government, you can no longer be a completely non-technical person. So I would say, start thinking about your digital literacy. And then I would say, learn on what I would call safe projects. This also comes back to those risk elements that we were talking about. I think it's really important for the government to move forward and start doing these types of projects. So if you're improving your digital literacy and thinking about the types of projects you can do, that's great: pick projects that are useful but lower on the risk scale, and then you can learn. I'm not saying you should never do high-risk projects, I'm just saying learn on the lower-risk projects.

Ryan:

Right. John, from your perspective, in terms of helping people think about how to prepare themselves for what is coming, and in some ways what is already here, what are your tips and advice?

John S:

Sure. So I'm not the technical person, right? But if you're a policy person, if you're a leader in the public sector, and this is the most important development since fire or electricity, then you need to know about this. You don't have to have all the technical answers yourself, but you should be thinking about how to build a diverse team of people that's going to have all of the skills you need: the policy people, the AI people, and your IT team. A good way to get started is thinking about your use case, and you want it to be something that's going to be important to you and that you can describe in simple terms to somebody else. So one tip is to think about the press release that would be associated with your project. Set out in advance what it is that you're trying to do, why people would be motivated, and why citizens would care. If you can set that out in advance, it can be a North Star for all of the hard work that's going to follow to make it come true. And that doesn't require technical expertise. Think about how we could be helping Canadians and what big lever we can pull, and let's get a diverse group of people together to try and tackle that problem.

Ryan:

I mentioned at the top of the show that Think Digital has been hard at work on a research report looking at how public sector organizations are approaching the governance of AI, and what some of the risks are. We think this is important work to help folks benchmark where they are on their AI journey, and to further some of the important discussions that are happening in society right now on this topic. We're going to be publishing this work next month with our partner organization, the Institute on Governance. But I want to end today's show with a bit of a preview. Joining us are Jacob and Bryce, two of our analysts at Think Digital who've been hard at work on this research report. Have a listen. Well, I'd love to get both of you to share a little bit about yourselves and what you've been working on with us.

Jacob DC:

We're happy to be here. My name is Jacob. Bryce and I met in the first cohort of the MPPDS, the Master of Public Policy in Digital Society at McMaster University. We graduated in 2022. We were friends then, and we joked about working together once we graduated, and now the joke has become all too real. We are working as the co-leads on a report for Think Digital on, as you mentioned, AI governance, but also on some of the ways that AI has been used by the public sector internationally, with a particular emphasis on cases where governments and public sector agencies have encountered failures when it comes to AI. What we noticed in our initial research was that a lot of governments have been eager to make a sort of first move on AI in terms of an AI strategy, essentially trying to broadcast to the public that they are aware of AI and what it means for the market, the economy, and private citizens. But we had to dig pretty deeply to find governments commenting on their own use, and that's what we were most interested in. So maybe the first thing we noticed, surprisingly, was that there's actually a dearth of material when it comes to governments commenting on and thinking about what it means for them to use AI. That was probably the first thing we registered as intriguing, and what it meant was that we had to dig a lot deeper; it was not necessarily easy to find open source material on the ways governments have already been using AI.

Ryan:

Yeah, it's an interesting point that for a lot of governments, their entry point into this is almost as an economic development strategy, where they're looking at how they can spark their own AI industries. Even here in Canada, we have the supercluster approach to really drive AI research and development. And governments tend to sometimes be laggards in adopting some of these new technologies into their own organizations. Bryce, from your end, anything that really stood out for you from the research you were doing, looking at how governments have been using AI for their own purposes?

Bryce E:

Yeah, a lot of what we were looking at, at least, is focused on the more automated decision-making stuff, or process automation, or decision support. And I think now, with all the interest in generative AI and large language models, I don't know that these existing governance frameworks do a perfect job of incorporating some of the risks associated with that. So I think we'll probably see a large wave of publications of people talking about how their department is going to be governing, basically, ChatGPT and those sorts of things.

Ryan:

Right. Yeah, it's an interesting dynamic that we've been exploring a bit on today's episode. In the past, if you even go back three, four, or five years, AI seemed almost like this distant thing for most governments, and if they did get into it, it was a big investment; they were having to bring in vendors to build custom models of some sort. But the rise of ChatGPT, which is obviously the thing that's dominating the headlines as of late, is bringing a lot of light onto this, and the friction for governments to start getting involved is becoming much lower. So I think, to your point, Bryce, we're going to be seeing some of those gaps in how the public sector thinks about using these tools responsibly coming up a little bit more.

Bryce E:

What was really interesting was that so many of the governance models are basically the same.

Jacob DC:

Yes

Bryce E:

So I think that's interesting. And a lot of the principles associated with them are the same as well. I do think those principles obviously have a very enduring quality about them; we probably could use a lot of the same principles when governing large language model powered applications.

Jacob DC:

Absolutely. Yeah. I think there's a kind of commonplace language when talking about AI that probably comes from, and this is a bit of a broad stroke, maybe from industrial policy. It seems like the way that people want to talk about AI is similar to the way they want to talk about, as you said, sort of an economic strategy. And the language guiding the idea of risk mitigation when it comes to AI is often ethics based, which is good. As Bryce said, it's enduring. It's also dynamic, it can adapt to different contexts, and ethics seem to be something that is normative and universal. But really, when it comes to enforceability, in terms of how AI might actually pose a nuanced risk, I'm not sure that ethics-based frameworks are the way to go. But we'll see; that seems to be what countries have generated thus far.

Ryan:

Yeah, and it's interesting. As you said, there is this kind of commonality between the different countries around the world that we've been looking at in this project, which is interesting, because a lot of the concerns from governments can be quite local in nature, and AI tends to transcend borders. So maybe that's a good thing, that we're seeing some commonality, if we're going to have to have an approach to this that goes beyond traditional political borders. But Jacob, I'm interested to pick up on that comment a little bit, that an ethical framework by itself might not be the only way, or the best way, to capture all the concerns. I'm wondering if you can expand on that a bit, and on what you think those kinds of cases might be that aren't going to get captured and could turn out to be problematic.

Jacob DC:

Well, our scan happened in two parts. The first was looking at real cases in the world of governments using AI, and the second was a scan of governance practices, models, and frameworks. What we noticed in that first scan, when we were looking at actual case studies of public sector agencies employing AI, was that regardless of whether or not a country had actual governance in ink, policies or even legislation in place to guide implementation, what it often came down to was context; context really was an important factor. You could have these policies and legislation as guardrails, and you can have smart design teams, but if a system is implemented into a sensitive context, there's usually going to be a kind of friction or tension that's created. And at the end of that is usually public scrutiny and reputational risk that governments obviously want to avoid, especially when it comes to implementing new technologies, things that probably take governments a lot longer to implement in the first place. So ultimately, what I'm trying to say is that when it comes to responsibility or ethics-based, trustworthy systems that are non-enforceable, you can essentially have all the boxes ticked and still end up with not just reputational risk, but sometimes real harms, if the context is sensitive enough. The efficacy of your technology doesn't really matter: it can be doing its job really well, and you can have ticked all the boxes in terms of the responsible or trustworthy way of implementing a system, and there is still a certain amount of real harm that can be caused, basically because a system has been deployed in a situation that's inherently too sensitive, where there's going to be some friction.

Ryan:

Yeah, it's just the nature of the work that government does in some cases, right? Inherently, sometimes it is literally life and death decisions that government has to make. And as that work gets automated, some of those decisions could get automated along with it.

Jacob DC:

That's right.

Ryan:

Yeah. And so it begs the question, then, after you've spent a few months doing a deep dive into this: I'm curious what both of you think in terms of whether we're prepared for what's coming down the line. Because it seems like there's a real acceleration happening around these AI and machine learning technologies. Clearly, it's going to be impacting government in a pretty powerful way, and by implication impacting citizens. Bryce, maybe I'll start with you: what's your gut sense, having now looked at approaches in Canada and internationally? Do you think governments are ready for what's coming?

Bryce E:

As far as their own internal use of AI goes, from what I understand, government has this tendency to put anything it doesn't really understand on the other side of its internal firewalls. So if we haven't already seen that with ChatGPT, or basically any of the big APIs right now, that's probably coming. It only takes one embarrassing event of somebody automating emails using ChatGPT, or some strange, niche, off-the-wall language model, to generate a little bit of public distrust in government communications. So I could see that being something that triggers a real backlash. But I think if governments were smart, they would start figuring out ways to work large language models into their overall workflows in ways that are easy to monitor and not super complex. So maybe it's a small bit of data transformation that they can show is consistently in line with whatever their human baseline for that transformation is, or whatever their existing system produced, as a kind of performance benchmark. I think when they start making moves like that, then everyone can be a lot more confident that they probably won't have any really massive public failures. But we'll see.
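As a rough sketch of the monitoring pattern Bryce describes (a small, well-defined transformation checked against a human baseline before the model's role expands), here is a hypothetical Python example; the function names, the sample data, and the 0.9 threshold are assumptions for illustration, not anything prescribed in the report:

```python
# Hypothetical sketch: compare LLM outputs for a small data transformation against
# a human baseline and flag records that drift too far. Names, data, and the 0.9
# threshold are invented for illustration.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity between a model output and the human-produced version."""
    return SequenceMatcher(None, a, b).ratio()

def flag_divergent_records(model_outputs, human_outputs, threshold=0.9):
    """Return the indices of records where the model diverges from the human baseline."""
    flagged = []
    for i, (model_out, human_out) in enumerate(zip(model_outputs, human_outputs)):
        if similarity(model_out, human_out) < threshold:
            flagged.append(i)
    return flagged

# Example: run this on a weekly sample of records; only expand the model's role
# if the flagged list stays consistently small.
human = ["Ottawa, ON", "Toronto, ON"]
model = ["Ottawa, Ontario", "Toronto, ON"]
print(flag_divergent_records(model, human))  # flags index 0 if it falls below the threshold
```

The design choice here matches Bryce's point: the task is narrow, the benchmark is explicit, and the monitoring is simple enough for a non-specialist team to run routinely.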

Ryan:

So when we said at the beginning of the show that we were going to go deep on AI today, we really meant it. I hope that you found these conversations as insightful as I did. Yet I feel like we've really only just scratched the surface on this incredibly important topic. No doubt this won't be the last time that we talk about AI on Let's Think Digital. If this is as big and as revolutionary a change as many people are saying, there's going to be a lot to unpack on this topic in the months and years to come. So what do you think? Are you optimistic or worried about what AI means for our future? Let us know. Email us at podcast@thinkdigital.ca or use the hashtag #letsthinkdigital on social media. If you're watching this on YouTube, make sure to click the like and subscribe buttons. And if you're listening to us on your favorite podcast app, be sure to give us a five star review afterwards. And no matter where you're listening, be sure to tell others about the podcast and share it with your networks. Today's episode of Let's Think Digital was produced by myself, Wayne Chu, Mel Han and Aislinn Bornais. Thanks so much for listening, and let's keep thinking digitally.
