Everything You Should Know About AI (but were afraid to ask)

Let's Think Digital Podcast

New episodes every two weeks. Watch on YouTube and listen wherever you get your podcasts!

It’s only been a year since our last episode on artificial intelligence, but already a lot has changed. It seems like generative AI is everywhere and everyone, including governments, is struggling to keep up. So on this episode Ryan is joined by a special co-host, Jen Schellinck, Think Digital Associate and our resident expert on AI and cognitive science, to talk about what you need to know when it comes to AI. We’re also joined by Paul Craig, the creator of the TaxGPT AI bot, and Shan Gu, Founder and CEO of Foci Solutions. Both Paul and Shan share their experience working on AI projects in and around the public sector and discuss what they have learned.

In our conversation we talk about the current state of AI technology, the questions governments should be asking when thinking about using AI, and, most importantly, the question on everyone’s mind: who is more intelligent, ChatGPT or Ryan’s cat?

(Note: At 3:26, Jen refers to a steady state model. She meant to say state space model.)

Related Links

Watch on YouTube

Chapters

00:00 Introduction and Welcome

01:27 The Current State of the Art for Generative AI

06:15 AI’s Expansion: Beyond Text to Visuals and More

10:27 Generative AI in Government: Policies and Adaptation

18:04 Paul Craig and TaxGPT

24:44 Learnings from Running TaxGPT

38:04 Shan Gu and Adopting AI tech in government

45:42 The Future of AI in Government: Opportunities and Challenges

52:21 Is ChatGPT more intelligent than Ryan’s cat?

01:08:02 Conclusion

Transcript

Ryan 0:05

I'm Ryan Androsoff, welcome to Let's Think Digital. Believe it or not, ChatGPT was only released about 16 months ago. Now it seems like generative AI is everywhere. And in the short time since our last episode here at Let's Think Digital on AI in government was released, almost a year ago in May 2023, a lot's changed. Advances in AI are coming fast and furious. So today, we're going to get up to speed on the latest in artificial intelligence and what it means for government. Later in today's podcast, we're going to talk to Paul Craig, an Ottawa-based developer who created TaxGPT, an AI system designed to help Canadians file their taxes a lot more easily. He's going to share his insights on what it's like to work with these AI models under the hood. And then we're gonna have Shan Gu, CEO of Foci Solutions, who's going to talk about the challenges of executing on AI in government, and what it might take for governments to meaningfully make it work. But first, we're joined by Think Digital associate and our resident data science, AI and cognitive science expert, Jen Schellinck, who's going to be my co-host for today's episode. Hi, Jen. Thanks for helping me co-host today.

Jen 1:22

Hi, it's my pleasure. It's always nice to be here on your podcast, Ryan, thanks for having me.

Ryan 1:26

Well and, likewise. And so you know, Jen, we had an episode almost a year ago last May, where we were talking about government in the era of ChatGPT. You know, ChatGPT being kind of the brand name of generative AI that a lot of people got familiar with, it was still very, very new. But a lot's happened in the last year, and you know, we thought in today's podcast we could have a chance to kind of dive into that a little bit. You know, I'm curious your perspective, as we kick this off: if we kind of look back at the last year or so of the evolution of AI in general, but perhaps generative AI in particular, what's changed? What are the big things from your perspective that are really different than they were, you know, back in May of 2023?

Jen 2:10

So I will say that, yeah, first of all, it's mind-boggling how much has moved how quickly. And I would say that one of the things that's changed is that this new technology arrived on the scene and was surprisingly successful, even to the people who put it out there. And since then, we've all had a little bit of a chance to get more comfortable with it. But even so, the pace of change has been rapid. In terms of what's been happening behind the curtain, I would say that the technology that appeared at that point, which was this transformer technology, this transformer architecture, that's a type of deep learning, is still state of the art. Luckily, things haven't changed so drastically in the last year that that is not the case. And so what we're seeing is these engines, these large language models, are still being powered by transformer architectures. That said, I often say to people that they shouldn't get too comfortable, because we are seeing some developments. One, I've been hearing some buzz about state models, which are a type of classic machine learning, but they've also moved into the deep learning space. And these state models, steady state models, are more computationally efficient than transformers, and just recently, like literally as of, you know, December of 2023, they are starting to have similar functional performance. So that's a piece. The thing that I'm really excited about, though, is something called neuro-symbolic architectures. And the reason I'm excited about that is because it's actually a strategy for combining what we would call good old fashioned AI techniques, plus sort of classic machine learning, plus deep learning architectures. And so it's kind of like bringing all the worlds together. And I've been hearing the term neuro-symbolic architecture kind of floating around. And then in January, Google came out with a paper talking about how they created an architecture that could do geometry problems. And this was quite a big deal, because as a lot of people know, these transformer architectures are not that good at reasoning and math. And so when you looked at the paper that they published, it turns out that they're using a neuro-symbolic approach to do this, which suggests that, again, this is a technology that seems to be maturing and could become available to us soon. So that's some of the stuff that's happening. Yeah.
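
(Note: For readers who want a concrete picture of the efficiency difference Jen describes, here is a minimal, illustrative Python sketch. It is not how production models are implemented, and the matrices are random placeholders; it only shows why attention cost grows quadratically with sequence length while a state space layer grows linearly.)

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    """Toy self-attention: every token attends to every other token,
    so compute and memory grow with the square of sequence length n."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n): the O(n^2) cost
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v

def state_space_layer(x, A, B, C):
    """Toy linear state space layer: a fixed-size hidden state is updated
    once per token, so cost grows linearly with sequence length."""
    h = np.zeros(A.shape[0])
    outputs = []
    for x_t in x:                                    # one O(1) update per token
        h = A @ h + B @ x_t
        outputs.append(C @ h)
    return np.stack(outputs)

n, d, s = 8, 4, 6                                    # tokens, width, state size
x = np.random.randn(n, d)
print(attention(x, *(np.random.randn(d, d) for _ in range(3))).shape)  # (8, 4)
print(state_space_layer(x, 0.1 * np.random.randn(s, s),
                        np.random.randn(s, d), np.random.randn(d, s)).shape)
```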

Ryan 4:49

Well, and I'm curious, just, you know, from kind of a lay person's perspective, this kind of neuro-symbolic architecture, does this kind of more closely mimic how a human brain would interpret and work with information?

Jen 5:03

That's a good point. I think that's one of the reasons I, as a cognitive scientist, am kind of excited by it, because some of the phrases that people have been using, they'll say things like deep learning for the perception, and then, you know, reasoning, good old fashioned AI, for the logic. And so it does have this sense where we are starting to kind of, piece by piece, build up what we might think of as a more kind of, almost like, biological or organic architecture. Yeah.

Ryan 5:35

Using kind of a mix of different learning models and different AI models that have, I guess, specialization in different areas where they're particularly good,

Jen 5:43

Yeah, yeah,

Ryan 5:44

to sort through different types of information.

Jen 5:46

Exactly. And to get really geeky for a moment, for those who want to join me in that, people sometimes talk about it in terms of sub-symbolic processing versus symbolic processing. So the deep learning, those are the sub-symbolic. The reasoning engines are the symbolic. And we often think about our minds as probably doing a combination of those as well, although that's contentious.
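
(Note: The sub-symbolic/symbolic split Jen mentions can be made concrete. Below is a deliberately tiny Python sketch of a neuro-symbolic pipeline: a stand-in "neural" perception step turns raw input into symbols, and a forward-chaining rule engine reasons over them. Systems like the geometry solver she refers to are vastly more sophisticated; only the two-stage structure is the point here.)

```python
def neural_perception(pixels):
    """Stand-in for a deep net (the sub-symbolic part): raw input -> symbols."""
    brightness = sum(pixels) / len(pixels)
    return {"bright"} if brightness > 0.5 else {"dark"}

# Hand-written rules (the symbolic part): premises on the left, conclusion right.
RULES = [
    ({"bright"}, "daytime"),
    ({"daytime"}, "lights_off"),   # chained: bright -> daytime -> lights_off
]

def symbolic_reasoner(facts):
    """Forward chaining: keep applying rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(symbolic_reasoner(neural_perception([0.9, 0.8, 0.7])))
# {'bright', 'daytime', 'lights_off'}
```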

Ryan 6:07

Yes, yeah. And we're going to talk about intelligence later and the nature of it, because I think that is one of the interesting philosophical debates about this. And Jen, one of the other things that really strikes me in the last year is, you know, if we kind of rewind the clock back to spring of 2023, a lot of the interesting AI models and kind of services that were out there, with ChatGPT kind of being, you know, chief among them, I mean, they really were chatbots, right. I mean, it was textual discussion, I mean, kind of an amazing increase in that ability to have outputs that kind of seemed almost human-like in terms of how they were written and their understanding. But it tended to be, in those, you know, I'll call it the early days of generative AI, at least from kind of a public accessibility standpoint, very much kind of text-based chatbots. And to me, one of the interesting things that has popped up in the last year is kind of, you know, how these models have kind of grown up, that they're now able to do visual information. So not only can they generate images, but they can actually look at images and be able to interpret them, you know, speech, you know, audio outputs and inputs in some cases. We're seeing now in the last couple of months video outputs, very incredible video outputs, from some of their kind of test models. Obviously the kind of GPT model underneath OpenAI's architecture has evolved, we've got, you know, Gemini came out from Google, we've got a variety of other types of AI models coming out. And even just what I would kind of almost call the democratization of generative AI with things like Microsoft Copilot, where we're now seeing these AI tools embedded in people's day to day work. I mean, has it surprised you? As, you know, somebody who's a data scientist and a cognitive scientist, you've followed this academically for, you know, well over a decade, pushing two decades now. Like, has the speed of change over the last year surprised you? Or did you always think it was going to accelerate this fast?

Jen 8:08

Oh, it's definitely surprised me. And I know I'm not alone in this. And I think one of the reasons it's been so surprising is that classic techniques were always very problem-specific; they weren't universal in the sense that you could use a single architecture, or variations on a single architecture, to do wildly different things. And what we're seeing now, and it's interesting, because certainly the GPT large language models are what people tend to think of as having kicked this off, but just a little bit before that we were seeing some text-to-image work, like that was kind of the first. And sort of, from a research point of view, people have been doing this in parallel, working with images with these transformer architectures and working with text. But as you say, now we're just seeing this amazing merging of them. And it really makes us realize that the term transformer is very apt, because what we're seeing is that, you know, they have to be trained on different datasets, but they're really, really good at just transforming thing A into thing B. So it's not just, oh, we're good at transforming language into more language, or we're good at transforming, you know, text to images. It's basically like, if we can think about what we have and what we want, this architecture seems to be able, with the right kind of data and enough data, to handle it. I think that's really intriguing.

Ryan 9:36

Yep. Well, and what's also, in my mind, interesting about this is, I think, in some of the past iterations of, you know, classical AI, these were kind of big enterprise systems, right. You know, they required big investments, you spent millions of dollars on them, you required huge amounts of server infrastructure and data, etc, etc. I think the big thing that became really interesting, at least from my perspective as, you know, somebody who helps advise governments, is that suddenly anybody on their iPhone or on their, you know, on their smartphone had access to, you know, fairly powerful generative AI models. And what's interesting is, you know, the speed at which these changes are happening. I think governments, who aren't known for being fast in terms of catching up to new technology, are really trying to kind of figure out what it means for them. And Jen, you and I have done in the last year a number of kind of presentations for management teams in government who are trying to kind of better understand what generative AI means for their business, you know, how they're going to manage it, what some of the risks are. You know, we've seen governments around the world putting in place new policies and new governance measures to deal with this, including the Government of Canada, which in the fall came out with guidelines around the use of generative AI and updated the directive on automated decision making. Just a few weeks ago, the European Union passed their artificial intelligence law. So we're seeing this kind of proliferation of, I think, governments trying to turn their attention to understanding what the impacts are of AI both at a society level, but also, I think, from the work that we do, interestingly, just in terms of their internal operations, right, how do they kind of manage this? I mean, has anything stood out for you from that that's kind of interesting or surprising, in terms of how organizations, including governments, are trying to kind of adapt to these rapid high speed changes?

Jen:

Well, I will say that they have, as you were saying, you know, government doesn't always move fast. And so I've actually been surprised, pleasantly surprised, at their willingness to try and move quickly. As you say, they've recognized that this is something that is not going away, that is potentially powerful. So they're thinking about it in those terms. And also, just as you say, from an internal perspective, the level of enthusiasm is interesting. Like, with a lot of new technologies, when we're talking about change management, the problem is more, you know, how do we get people to adopt this technology? Whereas with this technology, we are actually seeing kind of the opposite, where it's like, suddenly, within the last week, everybody's using this technology, and how do we grapple with that? So it's almost the inverse of what we would usually see.

Ryan:

And maybe sometimes not using it in ways that are completely appropriate, or, you know, that may kind of open up some interesting challenges down the road. Yeah, because I think back, like, you know, the discussions we've been having around generative AI in the last year, to me, remind me a little bit of the discussions 15-plus years ago around social media, when social media was first coming into government. And I do think one of the differences is back then there was definitely this, like, strong undercurrent I found from a lot of senior leadership of, like, this is a fad, it's gonna go away, we can kind of just, like, ignore it a little bit. Whereas now with AI and these new generative AI tools, I see less of that, right. There seems to be actually this very pervasive sense, I don't know if this has been your sense as well, that senior management is kind of like, okay, this is a real thing. And you know, they're still maybe struggling to get their heads around it in kind of a nuanced way. But there seems to be less of that sense that this is a fad that's just gonna go away at some point.

Jen:

Well, and I think because, first of all, there's this immediate recognition that it has the potential to be useful, which is, again, rarely the case with technology; this is really unusual in that respect. But also the recognition that it can be useful, like, immediately on a personal level, and not just an organizational level. And I think that's even different from social media. With social media, you know, it's about communicating, and you can say, oh, are people going to listen to me? Whereas as soon as you see these language models, you go, oh my gosh, I can use it to write an email. That's gonna make a huge difference. And so, yeah, it's impacting people in this very personal and immediate fashion. And I think that that's really pushing the rate of change.

Ryan:

Yep. Yeah, I think that's very true. And it has been interesting that I think, you know, there are some of these bigger implications for public sector entities in particular that people are talking about and, I think, trying to wrestle with. I mean, one interesting one that actually isn't talked about that much, but is coming more and more on the radar, is the environmental impact of these AI models. And maybe we can get into this a little bit later, but just the fact that, at least right now, they are very power hungry, very computationally intensive, and just require a lot of electricity and water for cooling to be able to manage that. And I think there's been some media in the last couple of months about, you know, if we're seeing AI become a pervasive part of our world, how do we actually manage some of the environmental footprint around that? Which is going to be kind of a big-P policy question, I think, for governments in the years going forward.

Jen:

And I would add to that, I mean, I think it's great that people are thinking about it. And you were saying earlier that, you know, it's interesting that whereas before these technologies were kind of very restricted to people who had a lot of resources to create them, now they're more accessible. But we have to remember that behind the scenes, the models themselves are still massive, they're still computationally expensive, they're still, you know, expensive on an energy front. And in some ways that's being hidden from us. And it's being hidden from us because the fact that these are such multipurpose models means that, you know, we can just use the model that somebody else created, but that doesn't mean that that model isn't sort of massive and complex, like, behind the scenes.

Ryan:

Yep. Yeah, that's a great point. And then I think the other side of this is, you know, governments are historically, traditionally viewed as being very risk averse. And so on the one hand, you know, we've got this new technology, as you said, that even on the individual level is becoming very applicable very quickly. But I think governments as institutions, we're seeing, are trying to grapple with, you know, what are some of the potential concerns about this? How do we use it responsibly? And I want to give a bit of a shout out: Jen, you and I co-authored, with some folks from our team at Think Digital last fall, a paper that we published around the use of AI technologies in government. We'll put a link to it in the show notes for today's session, for anybody who wants to look at it a little bit deeper. You know, and I think one of the things we looked at in that paper was some case studies from around the world where kind of AI has gone wrong in public sector, you know, implementations of it, where there were some unintended consequences that happened. And then, you know, we proposed, I think, a little bit of a unique set of kind of risk factors that we were, you know, encouraging governments to think about, namely scalability, boundability, reversibility, and visibility, right. These kind of four factors can be somewhat unique in the use of AI, in ways that may not apply to some more, if I can call it, traditional technologies that government might be employing. And I think it's been interesting to kind of see how that conversation around this has been evolving, in some different ways, in the last couple of months.

Jen:

Yeah, yeah, and I think one of the challenges is that, I don't know, it's my speculation that some of the impacts that this technology is going to have are relatively subtle, and so we won't necessarily see them coming right away. And so when you think about that reversibility piece, and when you think about that boundability piece, I think that that's where it's important to keep those elements in mind with this type of technology in particular, because it's new, it's pervasive, and we don't yet know all of the impacts that it's going to have. So, yeah.

Ryan:

Yep. Yeah. And I think, listen, that's a great segue into where we want to go next, which is to actually kind of talk a little bit about, you know, what is this looking like in practice, in terms of AI products inside of government and the public sector more broadly, and how they're able to use it? And to do that, we want to bring in a couple of guests into this conversation to be able to answer that exact question. And first up, I'm going to invite Paul Craig to come join the conversation. Paul's an Ottawa-based developer, formerly with the Canadian Digital Service here in Ottawa, and before that was with the UK government's Government Digital Service. And Paul made a little bit of a splash last year when he created what he called TaxGPT, a friendly AI chatbot that was there to help Canadians with tax filing questions. And we're right in the midst of tax season right now, and I know version 2.0 of the AI bot has recently launched just in time to help folks out with the latest tax season. So Paul, welcome. Eager to learn a little bit more about the work you've been doing and your experience kind of getting under the hood with AI.

Paul Craig:

Thanks for bringing me on the show, Ryan, I'm excited to talk about my private IP in the podcast.

Ryan:

Well, we appreciate you being willing to share it. I mean, Paul, maybe just to start with, be interested to either just, you know, give a bit of an explanation around what TaxGPT is, and why you decided to build it.

Paul Craig:

Yeah, it's funny. You mentioned ChatGPT in your intro, and you'll notice the three letters are the same there. So I'm pretty sure TaxGPT, I named it first and they took it. But the idea behind TaxGPT is that it's a chatbot about taxes, right? So it uses AI technology to answer questions specific to Canadian tax filing, and more specifically, it references content from the CRA website. So it's all public information. This came out of work that I had done with the CRA, actually, as I used to work for the Canadian Digital Service, and we would work with departments to, like, modernize or update their approach to building technology, but also to how you create it. So we would take this kind of user-centric approach: we would try to find the users, try to talk to them. So back in 2019, years ago, we were trying to build almost like an expedited tax filing product. There's a population of people in Canada who don't file taxes, and if they filed, it sort of automatically unlocks benefits for them; so if they don't file, they don't receive their benefits. And the idea was, like, how do we make it easier for those folks? We learned that, from the perspective of the filer, filing taxes is seen as a very technical, super complex task, where if you do something wrong, you're in trouble. So we would go to see these community volunteer tax filing clinics, which you can go to if you have a modest income and a simple tax scenario. You can look those up if you want. But a volunteer, so somebody who is, you know, sometimes a student or an accountant, or just sort of, you know, somebody who's donating their time, basically will sit down with you, and they'll go through the process of doing your tax filing for the year. So we went to these clinics to see who is coming in, you know, what do they kind of look like, what are their reasons for coming in, and then how does the service work for them. And in a typical case, we would see people come into the clinics, and they bring everything that they have, like literally everything: ID, T4, sometimes their bank statements, any letter they've ever gotten from the CRA, like any mail. So just a big stack of papers. They sit across from somebody, at, like, you know, a beer pong table, almost, like a collapsible table. And somebody on the other side has a laptop, and they ask them some questions, you know: where do you live, do you have a job, do you have dependents? And they kind of walk them through the process step by step. And it takes 10-15 minutes, and then they're filed. And so I think observing this process, you see that, like, the person comes in, they have everything they need with them. And then you ask them, why are you here? And they're like, well, I can't do this at all. And then 15 minutes later, it's done. And it's like, well, what did the other person really add to this transaction, right? Like, the volunteer. And we talk about APIs sometimes in technology, but essentially, like, the person typing stuff in for you gives you the confidence and knows how to guide you through the process. And that was sort of a kernel of insight: we see a lot of these generative AI interfaces, and they really feel like there's someone talking to you on the other side. And maybe there's, like, a conversational approach that will guide people through the process of tax filing, or at least, you know, help them learn about taxes, which is ultimately where it is today.
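
(Note: Paul doesn't describe TaxGPT's internals here, so the Python sketch below is an assumption, not his code. A common pattern for a bot that "references content from the CRA website" is retrieval-augmented generation, where relevant passages are retrieved from a fixed corpus and prepended to the prompt. The corpus snippets and the call_llm stub are placeholders.)

```python
from collections import Counter

# Stand-in corpus: in a real system these would be indexed CRA pages.
CORPUS = {
    "cra/trillium": "The Ontario Trillium Benefit combines three credits for Ontario residents.",
    "cra/dental": "The Canadian Dental Care Plan helps cover dental costs for eligible residents.",
}

def retrieve(question, k=1):
    """Toy retriever: rank pages by word overlap with the question."""
    q_words = Counter(question.lower().split())
    return sorted(CORPUS.items(),
                  key=lambda kv: -sum(q_words[w] for w in kv[1].lower().split()))[:k]

def call_llm(prompt):
    """Placeholder for whatever chat-completion API the bot actually uses."""
    return f"(model answer grounded in a {len(prompt)}-character prompt)"

def answer(question):
    context = "\n".join(f"[{page}] {text}" for page, text in retrieve(question))
    return call_llm(f"Answer using only these sources:\n{context}\n\nQ: {question}")

print(answer("What is the Trillium benefit?"))
```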

Ryan:

Yeah, and if those human volunteers are acting as kind of a human API, can we not kind of essentially have an AI-driven API to help people kind of bridge that gap, to be able to get through the process for simple cases in particular?

Paul Craig:

Yeah, and I think, like, if you ask the volunteers, for them, it's, like, process, right? They're used to filling out a lot of applications in a shift, and they take 10-15 minutes, so maybe in a four hour shift they can do... maybe like five an hour, you know, maybe they're doing 20. So they're just, like, running through them. They're kind of used to: here are the benefits, here are the situations that are relevant. And they can pull that information back really quickly. And I think, just looking at the traffic, what people are asking TaxGPT today, sort of common questions are like, you know, I'm a single person in Ontario, what are the benefits that are applicable to me? And, like, the volunteer could just run those off really quickly. You know, and it's like, you don't need to be an expert to know that, or, like, have a deep understanding of somebody's financial situation, to just be like, okay, well, you know, there's the Trillium benefit, and there's, like, the northern energy tax credit and blah, blah, blah, stuff like that.
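
(Note: The quick benefit triage Paul describes volunteers doing from memory is easy to picture as a lookup over simple eligibility rules, as in this sketch. The rules below are invented placeholders, not real program criteria.)

```python
# Each entry pairs a program name with a predicate over a simple profile dict.
BENEFITS = [
    ("Ontario Trillium Benefit",  lambda p: p["province"] == "ON"),
    ("GST/HST credit",            lambda p: p["income"] < 55_000),
    ("Canadian Dental Care Plan", lambda p: p["age"] >= 65),
]

def applicable(profile):
    return [name for name, rule in BENEFITS if rule(profile)]

# "I'm a single person in Ontario" -- the kind of question TaxGPT sees.
print(applicable({"province": "ON", "income": 40_000, "age": 30}))
# ['Ontario Trillium Benefit', 'GST/HST credit']
```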

Ryan:

Yeah. So I was curious to ask you, Paul, because, you know, last year you launched TaxGPT for the first time, kind of a beta version of it out there. I think you had tens of thousands of people who ended up, you know, using it in one way or the other. And I know you've recently launched kind of a version 2.0 with some updates. You know, I'm curious kind of what you learned from this process of running it last year, what you're seeing this year, what changed, right? Like, if there was anything kind of surprising for you, you know, once kind of the rubber hit the road on actually putting this out in the real world. Did it work as expected? Or was there anything that kind of surprised you along the way?

Paul Craig:

Yeah, good question. So, Jen, something you mentioned was that the traditional, you know, the sort of LLMs that we see are not that good at reasoning and math. So that's why I think tax filing is a really good opportunity, because you don't need it. I'm just kidding. No, but that's a really good point about the subject of tax specifically, because what people are looking for from the bot, you could say the program, varies widely. But I think, at a high level, it's not going to be a replacement for a spreadsheet, right? Like, if you come in and say, here's my income, here's what I'm paying for this, here's what I'm paying for this, it's really bad at that. If you're looking for almost, like, explainers, or how does this concept work? You know, there are new programs being released; there are two of them this year that sound very similar, the Canada dental benefit and the Canada dental care plan. One is for kids and one's for seniors. And, you know, you can imagine: it's new, they sound like something else, like, what is it? For stuff like that, it's really good. I think the almost deceptive part of trying to tune a system like this is, and I sent you an article about this once, Ryan, but, like, the out-of-the-box, I-haven't-done-anything-with-this-except-put-it-on-a-screen functionality is just very impressive. Normally, when you build a prototype, you start off with, like, black and white text, and it's all at the top of the screen, and it doesn't really do a lot. With an AI bot like this, you can pretty much on day one have something that seems to work really, really well, but the...

Ryan:

But perhaps deceptively impressive.

Paul Craig:

Yeah, but then the other side is, you know, maybe it's good at 80% of what you want and not so good at the 20. And then trying to control when it is in that 20%, of, like, don't do this, don't say this, avoid this, that's the wrong answer, sort of trying to accommodate for that, that's kind of where you spend your energy. So yeah, you start off with something that looks very impressive, and then you have to spend a lot of time trying to, like, tune it and almost, like, limit the things that it'll do. Because it's kind of like the pro and the con, right? It's like they give these amazing answers that they generate almost out of thin air, and then they're just not really that traceable. So when you're saying, like, where did that come from, or, don't do that again, it's not exactly so easy to control that behavior in the future. Like, even if you see it a few times, you can try to intuit what it's doing, try to accommodate for that. So yeah, I would say at the beginning, with very little effort you get a pretty good service. And then the hard work is almost, like, trying to constrain those outputs, you know, stop people who are trying to hack the bot. And also, you know, if there's a category of answers, one that I know is when people ask about specific income levels, it gives bad answers for those. Like, how do I get it to not do what it likes doing, just saying, here's your answer, when it's like, I don't have high confidence that that will be a good answer? So how do we almost, like, curb that, head that off, before it offers something quote-unquote helpful, you know?
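
(Note: Here is a sketch of the kind of guardrail Paul is describing, where question types the bot is known to answer badly, here anything asking it to compute from a specific dollar figure, are headed off before the model runs. The regex, keywords, and refusal text are illustrative assumptions; production systems typically combine pattern checks like this with model-based classifiers.)

```python
import re

DOLLAR_AMOUNT = re.compile(r"\$\s?\d[\d,]*(\.\d+)?")
MONEY_TOPICS = ("income", "salary", "owe", "refund")

def model_answer(question):
    """Stub for the normal LLM pipeline."""
    return f"(model answer to: {question})"

def guarded_answer(question):
    q = question.lower()
    if DOLLAR_AMOUNT.search(question) and any(w in q for w in MONEY_TOPICS):
        return ("I can explain how programs work, but I can't reliably "
                "calculate amounts for a specific income. A CRA calculator "
                "or a tax clinic is a better bet for that.")
    return model_answer(question)

print(guarded_answer("I earn $42,000, what refund will I get?"))      # refused
print(guarded_answer("How does the Canadian Dental Care Plan work?"))  # answered
```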

Jen:

And I was gonna jump in and say, I often try to explain to people, and you know, this is a bit of anthropomorphizing, but I think it's useful: I want to say, like, it wants to be helpful, but sometimes it doesn't know when it doesn't know enough that it shouldn't be helpful, right? So what you're describing, you know, your TaxGPT wants to be helpful, but sometimes you'd be like, maybe not. Would you agree with that, or?

Paul Craig:

Yeah, absolutely. I think if you use these bots for anything in your day to day life, you sometimes ask it something. I think, like, if you know what the answer is, they're a really good tool. Because, you know, like, draft me this content: it can return some content, and if you're somebody who can evaluate the correctness of that, you're like, this is good, right? Or, do it again. And whenever you tell it it's wrong and to do it again, it always says, you're right, I'm wrong. You're right, I'm totally wrong. And so when you're like, no, that's wrong, it'll just be like, no, you're totally right, and it gives you a new answer. It doesn't mean that that new answer is right, or that even its original one was wrong. It just always agrees with the premise. It's like, you are right, the thing you just asked me to do, I have to do that, and the thing you just told me, you're correct. So it gives this very, it's almost like the appearance of somebody very eager to help you, but who will sort of never challenge your assumptions, really. So you want it to be helpful, but not to sort of lead people down the wrong path.

Ryan:

Right. Yeah, I think, you know, Jen and I use this line sometimes in some of the presentations we've done talking about generative AI, where we're like, you should be thinking about these tools as like an eager intern, not as an all-knowing oracle. And I think that that mindset shift is kind of important for being able to use it properly. Paul, I want to ask you one last question on this, and we'll bring you back into the conversation a little bit later on as well. But, um, you built TaxGPT kind of as an individual Canadian; this was not a work project, you were kind of outside of government when you were doing this. I'm curious your view on why government didn't build this, right? And, you know, in your view, are these the kinds of projects that government should be doing? Or is this something where it actually makes more sense for, you know, individuals or companies or civic tech groups, or whatever it may be, from outside of government to be kind of building these types of AI assistants, from outside rather than from inside?

Paul Craig:

Yeah, it's true. Like, I also have a blog, Federal Field Notes, which I worked on while I was inside government, but as an outside-of-government project. And I had talked to one of my managers who tried to impress upon me the importance of that distinction between when you are inside or outside of government, and what kinds of things are appropriate when, right? So I think there's sort of two ways to look at that. I think, you know, AI is a very new technology, and the tech stack I use is very modern: it's, like, a cloud-hosted React app using an AI back end, and, like, blah, blah, blah. But trying to build an app that looks like that in government, in my position as a developer who knows how to build apps like that, is very difficult, because government in general is sort of slow to adapt to these things. So I would say, on the general point about government adopting new technology, this definitely is something that's very challenging inside of government: to use something that you haven't seen before, generally. Government's very comfortable with what it already does. And, you know, on the general point of, should we look at new technologies, how do we integrate them into our workflow, and, from my perspective, how do we get them into production? That's the only way you really learn; otherwise you spend a lot of time doing options analysis and you never ship anything. With TaxGPT specifically, right, we're talking about a particular technology: should government be releasing, like, consumer-facing AI applications that try to guide people through a process? I think the right answer there is probably not, in that configuration, right? I would say it's a fairly high risk activity. Obviously, you know, if you use ChatGPT today, they do the, like, one-point text; if you zoom in, it says ChatGPT can make mistakes. So you could totally put that disclaimer on top of it, and probably you're totally free and clear, right? Like, you know, Canada, a separate legal entity, we have no control over this. But no, I think there are different ways to do this. This thing I mentioned before about the expert: if you are an expert in the subject getting the answers from a generative AI, you are somebody who can evaluate the credibility of those answers. So, you know, one thing you might do is say, hey, we have help desk staff, they respond to people's questions, we're using the bot to formulate answers, and then, for me as the person who would respond, it's saving me a bunch of time, and I can look at that question and say, like, yeah, that's probably, you know, that's the answer. So I think there are different ways to deploy it that are essentially not just, like, generate an answer that's immediately seen by an end user. I think you would put some folks in the way, but essentially say, like, hey, we're making this process more efficient.
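
(Note: The lower-risk configuration Paul sketches, where the model drafts and a human signs off, can be expressed as a simple review queue, as below. This is a generic human-in-the-loop pattern, not anything Paul built; the draft function is a stub.)

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    question: str
    draft: str = ""
    approved: bool = False

def draft_reply(ticket):
    """Model proposes an answer; nothing is sent at this stage."""
    ticket.draft = f"(model-drafted reply to: {ticket.question})"

def human_review(ticket, edited_text=None):
    """Help-desk staff accept the draft as-is or replace it with an edit."""
    if edited_text is not None:
        ticket.draft = edited_text
    ticket.approved = True            # the only path to a sent reply

t = Ticket("Am I eligible for the dental benefit?")
draft_reply(t)
human_review(t, edited_text="You may be; eligibility depends on age and income.")
print(t.approved, "->", t.draft)
```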

Ryan:

Yeah. Yeah, that kind of back-end efficiency use case rather than the public-facing one.

Paul Craig:

Yeah. And I think if we sort of zoom out, you know, abstractly: in one of my blog posts, I talked about the government design principles. So I used to work with the UK government's design principles, and the top one is user needs: like, who are the users? What do they need? And I think from that broad angle, government has a lot of information, offers a lot of services, I say we, but I don't work for government anymore, and it's complex to lay all those out and to know all of that. I think there are many users, and there are many user needs here. And the strength of AI, for example the strength of TaxGPT, is it's very easily able to find and summarize tax benefit programs. Especially if you know the name of the program, it's basically, like, 100% accurate to just say, here's a paragraph about how it works. I think that's, like, an obvious need that could be served with a high degree of confidence in some form or another. So I think, like, you know, these AI models present a tool to address some of the user needs. And if we think about, like, what are people struggling with? What's coming in over your phone line? What are people emailing you about? You could probably run some kind of analysis and be like, what's tough. And, you know, even for me as TaxGPT's kind of host: when people ask questions, I don't know anything about them, I don't have any personal information, but I can see the questions that come in. And I've kind of got a sense of, like, the categories of questions that have come up over and over, right. So I think, like, for my product development roadmap, I can say, well, it should be better at these things, and for this category of things, it's not as much of an issue, right. So yeah, I think it's important for government to kind of, like, use new technology, take advantage of what's there, and ultimately be grounded in sort of, like, the user analysis rather than the technology, or the particular technology.

Ryan:

Right. Yeah, as I think we would always advocate for, right: you know, not just chasing the shiny object for the sake of it, but making sure it's used in kind of a useful way. Paul, thanks for this. This has been really helpful, I think, to kind of conceptualize a real world case study around it. For those who are interested, the actual app is available at taxgpt.ca. We'll make sure we put a link in the show notes.

Paul Craig:

Important that it is actually taxgpt.ca; there is a taxgpt.com. That's a US one, that's another guy. He's building a US version. So taxgpt.ca is the Canadian one.

Ryan:

Perfect, you'll see a happy smiling maple leaf on the page if you've gotten to the right place. So taxgpt.ca, important clarification. But stick around, Paul, we're going to bring you back at the end. Before that, we wanted to talk a little bit about kind of this bigger picture, as we've started discussing with Paul: you know, is government ready to execute on AI? I mean, our focus here on this podcast is really around that intersection between technology and the public sector, and we're really happy to have our next guest, Shan Gu, joining us. Shan's the founder and CEO of Foci Solutions, and does a lot of work with clients in government to help them build better technology solutions. So welcome, Shan, great to have you on the podcast.

Shan Gu:

Great to be here. Thank you for having me.

Ryan:

So I thought, you know, maybe to kind of start this off: I know you've got a long history working with the public sector and helping them to build kind of well functioning digital teams, if I can put it that way. I'm curious, you know, what you think government has kind of going for it right now when it comes to using AI, and where there might be some areas where you think it's falling behind or having some struggles around it.

Shan Gu:

Yeah, I think the number one advantage in government, and I'm very optimistic about this, is that it is target rich, right? So if you look at the business of government, it's documentation, right? The language of government is documentation. That's how the public interfaces with government, that's how government goes about its work. And if you take that, it's documentation, but with a particular syntax. If you think about it in the context of language, government has its own dialect, right? And that's not very accessible to the public. And you know, what I find really interesting, having done digital government for a long time, is that the digital promise of public service was never delivered on, because all we did was take government language syntax and put it into web form. We haven't changed the syntax. We haven't changed the complexity of the dialect. Everybody's like, well, take that paper form and just fill out the web version, right? It's still just as difficult. The terminology is just as difficult. The reason why this moment is so significant is we can finally take AI and say, we now have the technology to interact with people in conversation, which is actually our natural language of speech, and be able to translate that to government syntax. And that's kind of the powerful thing, right? So, you know, when I look at government, if there was one organization that can really benefit from generative AI from an efficiency perspective, it's this, right? So, you know, I'm very optimistic that there's this massive potential to adopt the technology in, you know, the low hanging fruit stuff, right? Not the big reach, oh, we're going to use this to automate some entire approval process, or suddenly be able to identify financial fraud with crazy amounts of accuracy or whatever. It's going to be the small efficiency plays, right? So you look at the total cost of government between personnel and contracting, it's what, $80 billion a year. Save half an hour every day of someone's time reading a document, and we're talking about potentially five to ten billion dollars of savings annually, right? So, like, the opportunity to increase the throughput of how we help Canadians and how we drive savings is massive, right? So I'm very optimistic. And because of that, we have scale. On the private sector side, you know, when we're talking to clients, it's like, well, why are we going to invest in something on our website, because it doesn't generate millions of dollars of revenue, right? It might make someone's life a little bit easier, it might make documents a little bit more searchable, but whatever, right? Co-pilots, unless you're working in a product that has specific syntax or proprietary language or something like that, those are also high investment, low reward. But in government, because you're dealing with this kind of critical mass of value, there's actually just an infinite number of use cases, right? So I think the advantage is, if you're working in AI today, you want to make a dent, and you're looking for big, juicy problems that are not technically very difficult, that's kind of the place to be, right?
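
(Note: Shan's "five to ten billion" is a back-of-envelope figure, and it checks out at the low end under plausible assumptions. The workday length below is our assumption, not his.)

```python
total_cost = 80e9        # Shan's figure: ~$80B/year in personnel and contracting
workday_hours = 7.5      # assumed standard federal workday
saved_hours = 0.5        # half an hour saved per person per day

savings = total_cost * (saved_hours / workday_hours)
print(f"~${savings / 1e9:.1f}B per year")   # ~$5.3B, the low end of his range
```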

Ryan:

Yeah. Because it's interesting, you know, I mean, you're kind of getting at this back office efficiency, where AI might be able to be most productive right now. And I actually kind of think, you know, and Jen, you and I talk about this a lot, a lot of times I think people get almost kind of seduced by the flashy use cases, as you said, you know: it's going to be that public facing bot that's out there, it's going to do, you know, this kind of algorithmic magic to kind of come up with, you know, whether it's financial fraud detection or whatever it might be. And those seem to be actually the areas where things can go wrong most easily. Whereas actually, I think the back office stuff gets ignored a lot. I mean, it impacts the quality of public servants' jobs. And as you're saying, Shan, maybe that is an area that we need a little bit more attention on in the public sector.

Shan Gu:

Absolutely. As well as the front, citizen-facing stuff, right. So looking at the great work of TaxGPT, let's extrapolate that out to all of government. Right, so, what was the core problem? Government content and documentation on the website is really hard to search and navigate, to make heads or tails of, right? So we did this experiment last year, when, you know, I think Google Vertex was first generally available, and their search functionality was really cool. So we decided to play with it. What we did was we indexed a subsection of Canada.ca, just, I think, CRA, Health Canada, and maybe one other website or a segment, and we just asked it a question: I want to start a CBD hand cream business, what do I do? And it was able to index across boundaries and say, it's a regulated product, you're going to need a business number, you're going to need a tax number. And by the way, you're gonna have to go to Health Canada, you have to apply for these forms. Everything was referenced and cited. And it created a nice two paragraph response. Right, but that's because, you know, the technology had advanced by that point: citations were an out-of-box feature, you know, very bounded indexing was an out-of-box feature. So we were able to do that really, really quickly. I think it was like a one-two day job. But just making government information more searchable in the way that Canadians actually care about, that's, you know, incredibly powerful for incredibly low input, right? Canadians don't care that, you know, CRA and other departments are separate; they want to do a thing.

Ryan:

Yep.

Shan Gu:

So, you know, let's kind of cross that boundary and translate the thing that they want to do. You know, Paul, to your point of meeting Canadians where they are, and really focusing on the user needs, which is, I need to do something that involves government, and translating that into government syntax: I'm like, okay, well, that means, you know, A, B, and C in government speak.

Jen:

I just want to jump in and say, I'm loving your term of government syntax. It's kind of like a programming language, and, you know, we need to translate this into plain language, because it really is easy to forget, when you're inside the government, that this vocabulary and the syntax that you're using is not at all familiar to people outside the government. So, again, transform that syntax into something that works for people. I love it.

Ryan:

Yeah, yeah. I mean, we spend a lot of, you know, effort trying to encourage people to look at plain language writing, and, you know, to kind of get away from that government speak. It doesn't work in a lot of cases; we haven't gotten there in a lot of cases. And yeah, it is making me think whether or not AI can be that kind of translation function for government speak into, you know, human being speak. And, you know, we're talking about between federal departments, but even across different jurisdictions. I mean, if I'm setting up a business in Ontario, for example, you know, I've got to think about potentially municipal and provincial and federal regulations and others that might exist. And I kind of imagine this future where we have AI agents of one sort or another, whether they're being provided by government, or maybe more likely, like in Paul's example, by somebody from outside of government, that are able to kind of help bridge those jurisdictional silos that governments themselves just, frankly, seem unable to bridge.

Shan Gu:

Yeah, I think, you know, in my kind of mind's eye, I have this vision of the future of how government packages knowledge and expertise, right? It feels like we're starting to get to a point where models are effectively how you package knowledge, whereas before, documents and databases were how you packaged data and information, right? So we're getting to that point. And then you talk about agents, right? There are frameworks starting to be developed by, you know, Microsoft and Google and AWS and Anthropic around having very contained agents that will return responses against datasets, that can now be published and that I can orchestrate across, right? So we're doing some experimentation with, like, Google Playbooks, or Vertex AI playbooks. You can see a world where, you know, every department publishes a catalogue of agents that allows their very specific data to be answered from, and users orchestrate across them using natural language. And then you'd be able to pull back the relevant information from, you know, the government of Ontario or the government of Quebec, depending on what your location is, and they'll provide this, you know, wonderfully customized or tailored response that kind of takes all that stuff into account, right? It's almost like some of these agents and models are becoming the new versions of APIs, if you will.
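
(Note: Here is a toy Python sketch of the "catalogue of agents" future Shan describes: each department publishes a narrow agent that answers only from its own data, and an orchestrator routes a natural-language question across them and merges the replies. The catalogue, topic matching, and answers are all invented; real frameworks such as the Vertex AI playbooks he mentions handle routing and grounding very differently.)

```python
# Each department registers a narrow agent with the topics it can answer.
AGENT_CATALOGUE = {
    "cra": {
        "topics": {"tax", "business number"},
        "answer": lambda q: "CRA: you'll need a business number and a tax account.",
    },
    "health_canada": {
        "topics": {"cbd", "regulated product"},
        "answer": lambda q: "Health Canada: CBD products require regulatory approval.",
    },
}

def orchestrate(question):
    """Naive router: invoke every agent whose topics appear in the question."""
    q = question.lower()
    replies = [spec["answer"](question)
               for spec in AGENT_CATALOGUE.values()
               if any(topic in q for topic in spec["topics"])]
    return "\n".join(replies) or "No departmental agent matched this question."

print(orchestrate("I want to start a CBD hand cream business; what tax setup do I need?"))
```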

Ryan:

Right. Yeah, that's really interesting. Shan, one more question I want to ask you before I bring Paul back into the conversation: you do a lot of work on helping government to create what I'll call high functioning teams in kind of the tech and digital space. And I'm wondering, number one, what kind of teams do you need to make AI work in the public sector? You know, what kind of skill sets are you looking for? Is it different from kind of general digital product management, or is there anything specific in kind of an AI context? And then secondly, from a technology standpoint, you know, is government set up to be able to actually implement AI? Or are there some housekeeping pieces or plumbing pieces it needs to get done before it can take advantage of it?

Shan Gu:

Yeah, that's a question we can spend a lot of time on. So I think, you know, with AI, I'll talk specifically about generative as opposed to the classic flavour of AI. What we're seeing is really, you're talking about user experience, you're talking about making something translatable to the linguistics of the user. So, you know, successful teams are going to have to take on people with behaviour science, you know, UX, psychology, linguistics, English, right? You're gonna see communications and English majors in AI teams, because you need to be able to anticipate how a user is going to interface with AI, and then be able to validate that against the response and go, is that response what the user expects? Because the danger here is the AI responds in a way that a user misinterprets, because now we're dealing with language, which is terribly imprecise at best, right? So I think, you know, having these multifunction or multidisciplinary teams where you're actually incorporating people with not just tech and data expertise, but also human behaviour and linguistic expertise, and also business expertise, right, understanding the problem space. If you're going to build something around policy, somebody had better be there who can really understand the deep questions and be able to answer how policy is supposed to be interpreted, right? So I think, you know, we have to move towards this multidisciplinary team that has, you know, the traditional tech skills plus everything else. Now, in terms of does government have what it takes, and what are the barriers? I think the whole idea of a Chief Digital Officer is really interesting. You know, I've seen government really slow to formalize this role, and where it does formalize it, it's usually a spin-off of the IT side of things under the CIO. I think that needs to change. Right, you know, to my previous comment about how AI is really providing that digital promise that was never formally realized: it's really shining a light on the fact that digital is a business function. As consumers, right, we expect that interaction to happen digitally, which means that Chief Digital Officer needs to be a business function and not be buried under IT. So structurally, that needs to change, and that needs to get changed pretty quickly, and get mandated with its own budget and authorities and all that kind of stuff, because it also allows you to put together these multidisciplinary teams. What I'm seeing as a limitation in government is that AI is very tech driven, very IT driven, right? Like, the large majority of investment is. Which means you get into these loops of, like, well, we've got to go through EARB, we've got to validate that we're picking one tool for the enterprise, right? Because, you know, somebody wants to use one cloud, but that's not the cloud we've operationalized. So we kind of get in our own way of saying, well, this is an IT function, we have rules in IT, therefore we can't innovate, because that's moving too fast. Right. So I think that's kind of what's holding things back. In terms of plumbing, yeah, I mean, there's a lot of plumbing that needs to happen, right? The majority of innovation in the space is happening in the cloud. The rate at which government is standing up these kind of common cloud environments is not nearly fast enough. So that has to get better, right? And not just treating every environment as an enterprise environment; there needs to be room for experimentation.
There need to be people who can come in and build, you know, a Vertex pipeline or something on Anthropic, without it being like, wow, I've got to go through PBMM and nine months of, you know, SA&A for an enterprise-level AWS or Google Cloud environment. So I think, you know, there's some plumbing and just infrastructure readiness that has to happen, and then there's that massive amount of kind of organizational will that needs to happen.

Ryan:

Yeah, yeah. And that experimentation piece, I think, is really critical, particularly with how fast the technology is moving right now. I mean, Jen, you and I often encourage, you know, folks we're working with to say, hey, build some space for experimentation, try this out, see what's in the realm of the possible. But I think, Shan, to your point, if everything is treated as a big enterprise deployment, you just can't move fast enough to experiment. And instead of being user centered, as I think Paul was calling out, you're instead kind of caught in this web of IT rules, rather than actually saying, hey, here's a real use case that we want to at least try, get some data on, to see if we can actually do something productive there.

Shan Gu:

Yeah, exactly. Yeah.

Ryan:

Thanks, I really appreciate the perspective. And, Paul, I want to bring you kind of back into the conversation, you know, as we kind of come to our last segment of the episode, and maybe kind of pick up on some of these themes in general that we've been discussing over the last number of minutes. And I wanted to maybe just kind of get a little philosophical as we dive into this. You know, I'm always interested in kind of thought experiments around this. And one of the things we've kind of touched on a little bit is this idea of intelligence. And it's, you know, it's implicit in the name, right? We talk about AI, artificial intelligence. But I think there actually is this question around how intelligent, you know, these AI models are right now. Because I actually think the level of intelligence behind it, you know, dictates a little bit how we treat some of these tools, and how they're going to get integrated into our workplaces and into our institutions in the years to come. So Jen and I have a little thought experiment that we've used at some recent workshops, and I want to do a quick lightning round to get all of you to respond to this. So I have this lovely 18-month-old cat named Pumpkin, a little calico cat. And I often think about, you know, ChatGPT, which is probably the AI model most people are most familiar with: is ChatGPT more intelligent than my cat? Which kind of, like, on the surface seems like an absurd question. But it's interesting, right? Like, when you kind of think about the different markers of what we kind of associate with intelligence. You know, like, what kind of information can it take in and evaluate, right? ChatGPT can take in text and visual information; my cat can't take in text information, but can take in visual, but also auditory and touch and smell. You know, obviously ChatGPT communicates using language at a fairly high level of sophistication, whereas the cat only has body language and, you know, the occasional meow that it might be able to bring forward. That ability to kind of create novel images or responses is obviously very different, where ChatGPT is maybe, you know, a little bit better at that. You know, I think one of the big ones is self awareness, right? You know, I think most people would argue, probably, Jen, that a cat has some degree of self awareness, can set its own goals, whereas the AI models are more stimulus and response. And then even, you know, the emotional side of it, right? You know, I can kind of trick ChatGPT into saying that it loves me, but it probably doesn't really mean that, you know. And the cat, I think she does, but sometimes she's just hungry and wants to be affectionate because she wants me to feed her, you know. But there's this notion of kind of emotional capacity on that. And I think, I mean, Jen, you and I talk a lot about kind of, what's the nature of intelligence and how do we express it? And as you and I have said, humans tend to view language as a real indicator of intelligence, where it may or may not be all the time.

Jen:

Yeah, absolutely. And that's where it gets really interesting. I feel like I'm going to hold my answer back until I've heard what other people have to say. But I do think that language, as you say, is such an indicator that it can sometimes pull people in directions that are misleading, for sure.

Ryan:

Yeah. Okay. So Shan, Paul, you've never met my cat, but just take your mental model of an 18-month-old cat. What's your answer? Is ChatGPT more intelligent than my cat? Shan, I'll go to you first.

Shan Gu:

Me? Um, yeah, I think I define intelligence maybe a little bit more around the emotional and creativity side, and context. So problem solving, all that kind of stuff, right? If I gave ChatGPT a map of where the food is and told it it was hungry, it probably couldn't find its way to the food. I think a cat would figure it out.

Ryan:

Okay, so you're giving a vote to Pumpkin.

Shan Gu:

To Pumpkin. Basic survival skills. Pumpkin.

Ryan:

Paul, where would you put your vote, Pumpkin or ChatGPT?

Paul Craig:

So I've been thinking about this. If you've seen Blade Runner, that famous utopia about how cool everything is in the future, they do the test, right? Essentially a Turing test, which is: if I can't see the person or entity giving the answers, and I ask questions and receive answers, I have to guess, is this an AI or is this a human? That's the idea of the Turing test: without knowing anything else, what would you assume? And I think that ChatGPT clearly passes that sort of bar. So if I was just writing letters, sending them in the mail, and getting replies back, I would assume I was corresponding with a reasonably intelligent person. Which, I don't know, I guess we're measuring different things, right? But I wouldn't expect your cat to be the best pen pal.

Ryan:

Probably not. No. Okay, so Paul's giving his vote, I think, to ChatGPT. Jen, where do you come down on this?

Jen:

Well, first of all, I am biased, because I know Pumpkin. So of course I want to say Pumpkin is intelligent. And Paul, I think you picked up on something really important about the Turing test, because it is true that the Turing test was language based. There was this idea that if you can speak and have a conversation, then you're intelligent. Now a lot of people are starting to push back against the Turing test, and other people are saying, you're just moving the goalposts. That's not fair. We decided that if you could talk, that meant you were intelligent; that's all there is to it, and now you're just moving the goalposts. So I think it's really forcing us to define something that we maybe didn't have to define so precisely before. I think that ChatGPT, and other large language models, are very good at mimicking a certain type of intelligence. And then you can play that game of, well, if it's so good at mimicking it, is it actually doing it? But I'm going to fall back on this idea that intelligence needs to be embodied. This is controversial in cognitive science circles, so some people who hear me saying this are going to say, no, I don't agree. But I'm going to say that embodiment is, in some ways, important for intelligence. And so when we get robots, then I'm going to have to maybe reconsider. But for now, I'm going with Pumpkin.

Ryan:

Yeah, okay. Interesting. And I have to say, for my end, I think initially I was sympathetic towards Pumpkin and that broader sense of intelligence. But even just with the advancements of some of these generative AI models in the last year, I would argue they're rapidly catching up in some ways. Certainly from a language perspective, they trump the cat by far. But as you're pointing out, Jen, as we see more capabilities come in there, and particularly if these models are able to be embodied, where they can take in sensory information from their environment and respond to that environment, it starts tipping the balance in really interesting ways. I will just say, when we run this in workshops, I think it's fair to say the majority of people pick ChatGPT as their answer, as thinking that ChatGPT is more intelligent than Pumpkin the cat. And I think that's really interesting. As a few of you mentioned, it's because language tends to be our bar for intelligence. We now have something that can talk to us, and even if it might not really know what it's saying, it seems so convincing that we view that as the test of intelligence in a lot of ways. And I'm curious, building on this, because particularly in this last year and a half, with the rise of these generative AI models, we see this phenomenon of AI that we can actually talk to, and it's changed some people's risk calculus. In particular, in the AI community there's a bit of worry about Terminator-style general artificial intelligence down the road that could pose huge risks to human society, and we've seen an interesting dialogue around this. But we also see governments reacting by trying to put regulations and new laws in place, and certainly making policy decisions about how they're going to use it. And there's this interesting debate. On the one hand, some would argue we're not moving fast enough, that technological developments are outpacing our ability to respond, and that we're not actually containing the risk. Others worry that we're over-regulating, and that we're stopping government and society from taking advantage of some of the potential benefits. I'm curious where folks land on this. Do you think we're not regulating fast enough, or do you think we're risking over-regulating and cutting off our ability to take advantage of the benefits of these new technologies?

Shan Gu:

Um, you know, I don't know if I fall on either side of that fence; I have maybe a slightly more nuanced answer, which is that I think the general approach of broadly regulating something doesn't work, because the challenges are so context specific. So I think we have to do a whole bunch of things. We have to, A, create safe experimental spaces to make sure innovation still happens. We have to say, these things are okay; these things we think are actually relatively low risk, and we might be proven wrong, but these are the things we're willing to take a chance on: document synthesis, natural language search, some of those things. Go do that, right? Then the other thing is this ability for AI to reason, where it actually looks at a document or a body of knowledge and starts making decisions of its own accord, without guardrails. Yeah, we should probably have some guardrails around how that gets reported, how that gets verified, things like that. And things that create social discord and social risk, like deepfakes and photorealistic images? Yeah, we should probably regulate that stuff and ask, what is the social risk of not allowing it? We might lose some art, we might lose some creativity. But what is the social risk of allowing it? So I think we have to be a lot more targeted about what we allow and what the safe experimental spaces are, versus the places where there's just too much risk and we should slow down a little bit.

Ryan:

Jen, what's your - Paul, go ahead. Yeah, what's your thought on this, Paul?

Paul Craig:

I mean, I really like Shan's answer; I was going to say something similar, not to hand-wave it away. If I think about the places I've worked in government, the project teams I've worked on, the problem isn't that you can't get a meeting with 100 people together to constantly flag risks about the project, right? So on the how-are-we-managing-the-risk side, government has that in hand; I think we're good at that. What I would see is that a lot of those conversations happen without the context of somebody who has actually used the tools, understands the tools, and has a familiarity with them. So it's often hard to get to the point of running an experiment and putting it out. And I would draw the line between internal experiments and external ones. Internal experiments happen pretty often, but getting anything over the line... if it's never an external release, it doesn't count as precedent for somebody to use it or for future teams to build on, right? So I think having those spaces for experimentation is important, and actually, as we talk about the risks, trying in some ways to operationalize these technologies. As a small anecdote, when I worked at CDS, there was a team working on a cybercrime tracker. The idea being that people are getting these messages, sometimes a romance scam, sometimes like, hey, I'm the CRA, you have to pay me, blah blah. And the team was taking months and months to build a form for reporting, a "hey, this happened to me" kind of thing. And that would just be for recording; it wasn't really a full service where someone's going to help you get your money back. At the time, my girlfriend was working at an app company for second phone numbers, where people are constantly signing up from other countries, renting an Ottawa number for like $5 a month, and going on Tinder and doing romance scams. From her perspective, there are like 10,000 of these messages getting sent out a day. And from what I'm seeing, we're trying to build a form, and it's taking us five months to let people report it. So I think there's a risk of going too slowly within government and almost leaving an open field for various bad actors to flood us with these fake phone calls and things like this. So yeah, I think the US put out a pretty reasonable "here's how we should regulate AI" approach, generally, about a year ago, or maybe in the last six months, and I think we'd do well to look at it. And then try to gain the experience ourselves of what these tools are good at, what they're bad at, what we should be worried about, and what we shouldn't be worried about.

Ryan:

Yeah, no, that's very interesting, Paul. And I think there is a bit of a sense of an arms race, right, particularly with nefarious actors using some of these tools. Government has this natural caution, but the status quo has some risks as well, particularly when others have access to this technology. As we bring the conversation to a close, Jen, maybe some final comments from you on where you're seeing things going with the type of guidance and regulation out there, and whether you think we need more of it, or whether you think we're getting to the point of risking over-regulation?

Jen:

I feel really conflicted about this, and I really do not envy the regulators. I think it's a really difficult position to be in right now, because my perspective is that it's really, really hard for us right now to predict the nature and the level of impact of this technology, and regulating in that environment is really hard. Personally, I can't decide if this is going to be technology like the telephone, or technology like nuclear power. So I'm an expert in this field, and I'm not sure, right? If it's like the telephone: the telephone had a massive impact on our society, globally, but if we look back, we wouldn't say, oh my gosh, we should have waited 10 years before everybody was allowed to start using the telephone. So if it's technology like that, then yes, we should promote it; there will be substantive impacts, but it's manageable. If it's like nuclear technology: there was a period of time when we were extremely cavalier about using nuclear technologies, and many problematic things occurred. Now those technologies are heavily regulated, and we would never even imagine not heavily regulating them. So that's a bit of a non-answer, except to say that I'm very empathetic to the job the regulators have at this time.

Ryan:

Yep. Yeah. I think it's a great metaphor, Jen, and I think that's the dichotomy we're facing: we don't know for sure yet. Maybe we're going to wind up with more of a nuclear-powered telephone than anything else, a little bit from both columns. I just want to say thank you to all of you. This has been a really interesting conversation. We could talk about this for hours, obviously, and no doubt we'll come back to it on the podcast. Of all the emerging technologies facing us in the world today, AI seems to be one that is not only maintaining its space on the hype cycle, but where I think we're seeing more and more real-world impacts. So I'm really thankful to all three of you for taking the time to share your perspective and your expertise with the people who are listening and trying to sort through all this in their work and in their world. So thanks. Thanks so much, Jen, for co-hosting with me. And Shan and Paul, great to have you both with us today.

Shan Gu:

Thank you so much.

Paul Craig:

Good to be here. And hope everyone gets their taxes filed by April 30th, 2024.

Ryan:

Exactly, exactly. So thanks so much, everybody. And that's the show for this week. Again, a big thank you to Jen for co-hosting with me, and to Paul and Shan for talking about their work in the AI space. So what do you think? Are we ready for AI in government yet? Do we have too much regulation around AI, or not enough? And perhaps most importantly, is ChatGPT more intelligent than my cat? If you're watching on YouTube, let us know in the comments below. Or you can email us at podcast@thinkdigital.ca, or use the hashtag #letsthinkdigital on social media. And remember to like and subscribe. If you're listening on your favorite podcast app and you enjoyed this episode, be sure to give us a five-star review. We're also on the web at letsthinkdigital.ca, where you can sign up for our newsletter and catch up on past episodes of the podcast. Today's episode of Let's Think Digital was produced by myself, Wayne Chu, and Aislinn Bornais. Thanks so much for listening, and let's keep thinking digital.