Disturbing Revelations: Top AI Director Exposes Terrifying AI Concerns

Kate Puech is currently Director of ML Engineering at Axon. Her team focuses on building tools for developing responsible AI solutions that accelerate justice, protect truth, transform law enforcement in the field, and ultimately, save lives.

Prior to joining Axon, Kate worked as a Manager in Office 365 at Microsoft, where she led a team focusing on creating privacy-preserving datasets for machine learning. Before becoming a manager, Kate worked in several departments at Microsoft, including Microsoft Speech and Language (now Cognitive Services), Exchange, and Outlook online. She earned a Master's in AI from Polytechnique Montréal in 2013.

When she is not working, Kate can be found hiking in the Cascade Mountains, paddling on Puget Sound, reading in bed, or flying an old Cessna.

Host:
Hello, how are you?

Kate Puech:
I'm good, done with things.

Host:
Good, good. I'm going to introduce you a bit. So, this is Kate Puech. Apart from being very experienced in the industry, she is a Director of AI Engineering at Axon. She was at Microsoft for many years as an engineering manager, guiding and leading other engineers toward their goals, and as an architect, hands-on building systems. She's going to talk to us a little bit about AI, her understanding of it, and how she thinks it's going to affect humanity, the market, and how people work. It will be great to learn from her today. So, thank you for taking the time and joining us.

Kate Puech:
Sure, I'm glad to be here.

Host:
Fantastic! So, first thing I want to ask is, what's your day-to-day like? So, what does a director of AI engineering do?

Kate Puech:
Okay, um, what do I do every day? Well, I do something I really like, actually. I solve problems, and I remove barriers for people. I try to empower people, to give them the tools they need to be successful and to build good things. So, that's a lot of problem solving in all kinds of domains: it can be a technical problem, a legal problem, an ethical problem. I help people find solutions, and I give them the right contacts, the right materials, and the funds so that they can build good things for the company.

Host:
Do you have a way of, sort of, when a person comes to you for a problem, regardless of the subject matter, do you have a way or a step-by-step process of processing what they need and what kinds of things you might be able to provide them, or if you want to push them in one direction or another? So, what's your ethos on that?

Kate Puech:
Yeah, great question. You know, I really like the Socratic approach. I like to ask questions to help people find solutions rather than prescribing. I think that's not really useful when you just give a solution to someone. Actually, they don't get much value, and they're going to come back with another question. So, right there, I try to ask good questions, point them in the right direction, but I like to let people find solutions themselves.

Host:
Do you consider yourself more of a people person or more of a— I wouldn't say a tech person, but a person who is more interested in things and how those things work? Because it'd be interesting to see how you came to where you were.

Kate Puech:
Mm-hmm. Uh, that's interesting. I don't consider myself like a big extrovert, I'm fairly introverted and shy. But, um, at the same time, I have a lot of empathy, and I think I can understand a lot of different people. What I really enjoy in my work right now is I'm working with a lot of different cultures, and I work with people in different countries—in Asia, in Europe, and in the US and Canada. And, um, they all think very differently, and that's great. I love to be exposed to that, all these different ways of seeing problems. I think that makes me much smarter. I feel I'm pretty lucky.

Host:
Oh, that's fantastic. So, do you feel like people from different places approach problems differently? And do they approach tech problems differently from different places? I don't know how that would be possible, but is that a thing?

Kate Puech:
I mean, every person is different, but there are cultural trends, just in the way people communicate or in what they care about first when they look at a problem. Take a French person, for example: French people are much more direct than Americans. When I arrived here, I had to learn to say that everything is awesome, that this coffee is wonderful. For a French person, if I said to my mom in French, "Oh, Mom, your coffee is wonderful," she would be like, "What? What's going on? It's just coffee, Kate." Anyway.

Kate Puech:
Yeah. I think for American people, you need to show them the North Star and tell them, "That's what we are building, that's where we are going, and why." For French people, you need to talk from the get-go about the risks and how you are going to handle them; they have a different approach. So yes, I think there are still cultural differences.

Host:
Even so, every person is unique. Do you think there's a leadership style that's better, or do you feel like it's very much about the culture? Do you think one produces more results than the other?

Kate Puech:
No, I don't think so. I think a good leader is able to see the value of diversity and of combining different ways of thinking and approaching problems. So I actually think what works best is putting people from each culture together; they challenge each other, and they find the right path that way.

Host:
If you think about both of the approaches you've identified: one is really putting forward something big and positive and pushing people toward it, and I think that can create a lot of momentum. But with the example you just gave on the other side, where it's very realistic, like "here are the seven things that might go wrong, let's attack them individually," that's more realistic, but it could also make you less willing to act, because you think, "Oh, I have all these challenges ahead." So I would almost assume that for one side you have to bring them down to Earth, and for the other side you have to show them the sky, in a way. Does that resonate with you?

Kate Puech:
Yeah. Okay. Yeah. Yeah.

Host:
So let's talk a little bit about how you got to this position. As a person, you identified as someone who's a little bit introverted, now having to talk to people all day and give them your energy. I know that you started off as sort of an intern. But even before that, what made you interested in technology?

Kate Puech:
Okay, great question. What made me interested in technology? I think it's that nobody in my family was in tech; they were all artists, and I think it's natural to want to go beyond what your parents are already giving you. I wanted to explore something new. So I think that's how I got interested when I was young. I also realized that there was huge potential. I was amazed when I built my first website; I think I was 13 or 14.

I was amazed at how much you could build with nothing. You did not need a lab or anything like that to run experiments; you can do everything from one computer. That's an endless world of possibilities. So I loved that, and I got into technology.

Host: 

But I’m sure you’re quite good at other things, so why did you commit to this as your, uh, you know, how you were going to contribute to society?

Kate Puech: 

Yeah, great question. I used to love to write; I still do, but I don't have enough time right now, and that's a shame. I should go back to it. Anyway, when I was young I was really debating: should I go study literature, or go into the tech world? I decided to go into tech because I thought that for anything artistic, if you are constrained by having to earn your paycheck from it, it constrains your art too much, in what you write, what you compose, what you paint. If I had to live off of it, I would have to think, "What will people like? What do I have to put in the book so that people buy it?" And I thought, "That's not going to work for me. I want to write whatever I want and say whatever I want, and I will contribute to the world and earn a paycheck a different way." I was also really, really curious about science anyway. And there was another aspect: I thought I could learn to write and study literature by myself, but for science I needed a teacher, so that was another reason.

Host: 

Okay, that's pretty interesting. So let's talk about what you ended up doing when you eventually got to Microsoft. Microsoft is a huge company, and there are opportunities to do a lot of different things within the technical world, but you decided to commit yourself to cloud architecture and to speech and language before it was cool. Now it's all over the place, but you did that at Microsoft and decided to go in that direction. In a way it seems like a derivative of being someone who's interested in writing: being interested in language and then working on speech and language. But even though it sounds similar, it's very different from a practical point of view.

So even committing yourself in that direction is a whole thing. What made you decide to go into that space at Microsoft?

Kate Puech:
Yeah. Okay, that's an interesting question. I think it would be too easy if I could just steer my whole life, you know, saying, "Oh, I want to go to Microsoft. Oh, I want to go work in speech and language." The way the world works is that you get opportunities, and you take them or not. So I did not decide after I graduated with my master's, "I'm going to go to Microsoft." Actually, I applied to several companies, and two of my friends were at Microsoft; there is a little bit of chance and fate in that. That's what I want to say: I did not plan the whole way. I just tried to make the right decisions based on my principles.

And so I followed my two friends to Microsoft and met my first manager, who was a great, great person. Because that guy was so great, I decided, "Yeah, I want to be on that team." That's how I started, and that's how I learned to program for real.

I was working on the back end of Outlook. There was no AI at all there, but I think it was good that I did a few years of hardcore programming before jumping into science, because it was a very good team with a very high technical bar, and I liked that. So I stayed a while, because I thought, "Well, I'm learning something here." When I was a researcher in Montreal, I had never programmed outside of the academic context, and that was definitely a blocker. When I worked for the first time on a professional team with a high technical bar, I felt I was learning something valuable for my ultimate goal, which was to go back to AI and work on state-of-the-art AI projects.

So I learned for a while. Then I felt, "Okay, I'm good enough now, I need to move. I've learned enough from my mentors."

I contacted a manager in Cognitive Services, which is an applied science department for speech, and I said, "Do you think there's value in having top-notch engineers on your science team?" And he said, "Yeah, absolutely." So I came into this speech and language team, and I was the first engineer on the first engineering team there. That was a great time; I had a lot of fun.

I built a large-scale platform for experimentation there, and I worked weekends and long hours, but I did not care. It was wonderful. I was building something new from scratch, and I could see how it transformed the way research was conducted, so it was really motivating.

After a while, I started mentoring a few people on the platform. It was stable and maintained by junior engineers, so I felt it was a good time for me to move on and let the platform live without me. It did not need me anymore, so I moved to another team.

Host: 

But before you go on to the other team, could we talk a little bit about the technical details of sort of what you built? Is that something we can talk about?

Kate Puech: 

I can talk a little bit about it. The idea was to scale up our data prep pipeline; that was the main idea. We wanted to work with much, much more data, so I had to build a large-scale compute cluster to process a much, much higher volume of data.
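As a rough illustration of the kind of data-prep scaling she goes on to describe (running shards in parallel, surviving partial failures, and retrying only what failed), here is a minimal sketch in Python. It is not the pipeline her team built; the shard processor and all parameters are hypothetical.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def process_shard(shard_id: int) -> str:
    """Hypothetical shard processor: in a real pipeline this would read a shard
    from storage, transform it, and write the result back out."""
    if shard_id % 13 == 0:  # simulate an occasional partial failure
        raise RuntimeError(f"shard {shard_id} failed")
    return f"shard {shard_id} ok"

def run_pipeline(shard_ids, max_workers=8, max_retries=2):
    """Process shards in parallel, retrying only the failed shards instead of
    restarting the whole job."""
    pending, results, failed = list(shard_ids), {}, []
    for _ in range(max_retries + 1):
        failed = []
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            futures = {pool.submit(process_shard, s): s for s in pending}
            for fut in as_completed(futures):
                shard = futures[fut]
                try:
                    results[shard] = fut.result()
                except Exception:
                    failed.append(shard)
        if not failed:
            break
        pending = failed  # only re-run what failed
    return results, failed

if __name__ == "__main__":
    done, still_failing = run_pipeline(range(50))
    print(f"{len(done)} shards processed, {len(still_failing)} still failing")
```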

Host:
So the ultimate goal was to multiply the amount of data you were able to process by 10x?

Kate Puech:
Yes, and the challenges there were around multi-threading: how you parallelize work, how you handle partial failures, how you do restarts, how you auto-scale your cluster, how you optimize the costs, all those things.

Host:
Were you working on every single one of those? That sounds like a lot. And how long did you work on that project?

Kate Puech:
Oh gosh, maybe two or three years.

Host:
Two or three years just to get it started?

Kate Puech:
Well, no, it was started after, I would say, six to nine months, and then maintained: you maintain it, you add new features, and so on.

Host:
Right, okay. And so from there you were able to move out of the project, have more junior engineers maintain this monolith of an endeavor, and then you moved into...

Kate Puech:
Yeah. I moved back into Office and managed my first team. It was a team of talented engineers building privacy-preserving pipelines for AI in Office 365. There I learned a lot about privacy and privacy-preserving techniques, like how you can build datasets and manipulate data with care. For Microsoft, privacy is very, very important; being compliant is a big selling point for them. They care a lot about it, which is a good thing. So I learned a lot about privacy-preserving techniques and things like that, plus I learned to be a manager; I learned to run a team.

Host:
How long did you run that team?

Kate Puech:
A bit more than a year.
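To make "building datasets and manipulating data with care" a bit more concrete, here is a minimal, illustrative sketch of privacy-preserving preprocessing: user identifiers are pseudonymized with a keyed hash and obvious PII is masked before records reach a training set. The field names, regexes, and key handling are assumptions, not Microsoft's actual pipeline.

```python
import hashlib
import hmac
import re

# Hypothetical secret used to pseudonymize user IDs; in practice this would
# come from a managed key store, never from source code.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def pseudonymize_user_id(user_id: str) -> str:
    """Replace a raw user ID with a keyed hash so records can still be joined."""
    return hmac.new(PSEUDONYMIZATION_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def scrub_text(text: str) -> str:
    """Mask obvious PII (emails, phone numbers) in free-form text."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    return PHONE_RE.sub("<PHONE>", text)

def build_training_record(raw: dict) -> dict:
    """Turn a raw event into a record that is safer to use for ML training."""
    return {
        "user": pseudonymize_user_id(raw["user_id"]),
        "text": scrub_text(raw["text"]),
        # Keep only coarse, non-identifying metadata.
        "locale": raw.get("locale", "unknown"),
    }

if __name__ == "__main__":
    event = {"user_id": "u-12345", "text": "Mail me at jane@example.com", "locale": "en-US"}
    print(build_training_record(event))
```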

Host:
So I think now we can come to the meat of it, which is AI. What are your thoughts on it? Can we start off by talking a little bit about what AI is, and then talk about how you think it's going to change things, the risks, and also the advantages you think the market will see with these new ways of interacting with machines?

Kate Puech:
Right, yeah. I think what you just said is pretty important: AI is going to really change how we interact with software and machines in general. In the upcoming ten years, it's going to be a total transformation, just like the smartphone changed our world, or the internet. I think the transformation we are going to observe over the next 10 to 15 years is going to be that big. And I'm at the same time looking forward to it and scared of it. It's just like nuclear technology: AI can take us to Mars, it can save civilization, but it can also destroy us all.


The main reason why I wanted to talk with you, and with other people, and get my voice heard, is that I think it's important that we put the right guardrails in place and that we are all part of this. If we just expect that someone in Washington is going to come up with a law that will protect us all from AI, that's not going to work. I think everybody should feel accountable for what's happening, for how they use the technology and how they develop the technology. I think it's really important that everybody understands the risks associated with it, and that's why I wanted to have this chat.

Kate Puech:
I think a lot of people have been talking about the longer-term risks. Elon Musk and others have been talking about the risk of AI just taking over the world, which is a fair risk, I think. But there are shorter-term risks, things that are already happening. Like the loss of authenticity; you put your finger on it last time we talked. There is this thing that's really worrying me, not about the future but about now, and after our conversation I thought, yes, that's the word that characterizes it: authenticity. The loss of authenticity of our world, and how the reality we see on our screens is disconnected from the actual world.

Host:
Well, let's talk a little bit about what you consider AI to be, for people who don't know much about it. It's actually quite hard to define, and a lot of people think of AI as a sort of pseudo-amorphous thing: they know it can change the copy you ask it to change, or something like that, and that's how they see it. But people in the tech world, because they work with these systems so much, understand that something that can do that implies an amount of power that is hard to translate. So I'd love for you to give it a try and present what it really is. It's hard even for me to ask the question, but hopefully you'll do better than I can.

Kate Puech:
Okay, I'll try. Let me see, where do I start? I think an AI system is a digital brain that can learn from data and from its exposure to the world, just like a human baby does. When you have a human baby, you give them toys so that they learn to handle things in their hands, and then they learn from their mistakes, right? They start to walk, they try and fall, and they try again. By trying and trying and trying, they learn. An AI system is pretty much like that. It's a digital brain: a set of connections between digital neurons that are shaped by the data you feed it. At first, a neural network is blank; the connections are basically random. You give it a stimulus, and the output is going to be random. But then you teach it, like a baby, to react properly to each input. Whenever the AI system is fed information, it outputs something. If it's correct, you say, "Yeah, great, that's correct." If it's off, you say, "Well, you need to adjust," and you tell the system how to adjust. The digital brain learns from that correction, and over time it becomes able to output the right thing. Concerns arise as these systems become more complex and autonomous, but the basic principle stays the same: the system learns from experience. Does that make any sense?
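To make the "tell the system how to adjust" loop concrete, here is a minimal sketch of that feedback process: a tiny two-layer network trained by gradient descent on a toy task (XOR). The task, network size, and learning rate are purely illustrative and are not drawn from the conversation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR. Inputs and the "right answers" we correct the network toward.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny "digital brain": two layers of initially random connections.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: the network reacts to the stimulus.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # "You need to adjust": measure how far off the output is...
    error = out - y

    # ...and nudge every connection a little in the direction that reduces the error.
    grad_out = error * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # after training, the outputs should be close to [0, 1, 1, 0]
```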

Kate Puech:
That was hard.

Host:
No, I know. It's tough. So we have that sort of definition: it's basically a digital brain, and I think that's a very accurate description. Now we want to take that and move forward. We have these digital brains; what is it that makes having them so risky? I think some of the risks are self-evident, in that you have the non-uniqueness of the ability to learn in the market, which you now weigh against cost and against scalability, and that improves over time as more capital is poured into it. But as that happens, why is it risky, and what makes you concerned about it?

Kate Puech:
Yeah, that's a great question. That's the crux of the problem. One thing that has always fascinated me about AIs is that they are not only spitting out what they have seen before; they are also extrapolating, just like we do. It's not that they have seen an example before and just repeat it. No, they are able to extrapolate. When there were only narrow AI systems on the market (what we call narrow AI is an AI with a small input space and a small output space, like an AI that learned to play chess and can only do that; it cannot drive your Tesla, and your Tesla's autopilot cannot play chess), that was one thing. Now we have much broader AI systems. They can do much more, because they are just much, much bigger, and we have the compute power for that now. What's concerning about them is that, if you compare them to the brain, they are able to process information much, much faster. If you think about it, ChatGPT has read all of Wikipedia and has consumed all the written content of the internet. If you and I tried to do that, well, good luck. Reading all of Wikipedia would take me longer than an evening, that's for sure.

Yeah, exactly. So even though these AIs learned similarly to humans, by trial and error, internally they are very different.

They might seem human because we have tailored their inputs and outputs to be very similar to what we are familiar with; that's how we interact with them. But internally, they work very differently. They have far fewer connections.

Host:
That's interesting.

Kate Puech:
In our brain, there are many more connections between neurons than there are even in ChatGPT. But these systems consume and process information much, much faster.

Host:
So they have far fewer connections than there are in the human brain; is that what you said?

Kate Puech:
Yeah.

Host:
Okay. But they just process information faster.

Kate Puech:
And that's where the power comes from.

Host:
Okay, got it. Got it.

Kate Puech:
Yeah, and so, well, that's a new form of intelligence, which might just be smarter than us.

Host:
There's more power per unit neuron, essentially. An improvement in the quality of the digital neurons.

Kate Puech:
An improvement in the quality of the neuron, yes. Our brain does not use a lot of power, while these big neural networks use a lot of power when they learn, but they learn very, very fast. And maybe their way of building connections is faster than ours.

Host: 

So, that being said... that being said, um...

Kate Puech: 

I still think so. I think there is danger because they are very smart. But, um, I also think that there is a thing very specific to human intelligence, which is the ability to proactively do something and to decide to do something. There's a power of will. I think AI does not yet have that. So, that's why I don't think ChatGPT is going to take over the world tomorrow or even next year because it still needs to be prompted. Right? Um, when you woke up this morning, or when you sent me an email saying, "Okay, do you want to do this podcast?" Nobody prompted you to do that. You just decided to do it.

So the AI we have today, I think, has no power of will. But still, these systems are impressive, and they are definitely able to process information faster than we can.

Host:
Okay, so do you feel like will is the only issue? Is that the line that needs to be drawn, or is that the only thing you're concerned about?

Kate Puech:
There are a million things.

Host:
Okay, go ahead. You talked earlier about the smaller things that are happening currently, but please, go ahead.

Kate Puech:
Yeah, absolutely. Thanks for the question. There is this long-term risk that a lot of people are debating: is AI a danger for humanity? Is it going to take over the world? But, as I mentioned earlier, AI is just like nuclear power. Nuclear power in itself is not a bad thing; it does not have any will, right? But used by the wrong people, it can be very destructive. I think AI is the same way. It's a very powerful tool.

I don't know if you remember the Cambridge Analytica scandal. Do you?

Host:
I remember, yes, I do. It was in 2018.

Kate Puech:
So that was before generative AI and ChatGPT were broadly available, right? We have elections coming up, and let's see what happens. There are tools online for anyone to create deepfake videos; it's a very creative business. There are tools being built everywhere and sold for creating fake content and fake videos. And when we are just months away from an election, it's kind of scary. So that's one thing that's really worrying me: the fact that it has become really easy to manipulate information. We cannot trust multimedia content anymore, pictures and videos; you cannot know whether they are authentic or not. So I think we urgently need tools and legislation to preserve the authenticity of our information. I think that's the alarm to raise.

Host:
And I think that's very fair to say; it's a very fair concern. One of the things you said that was interesting is that it's almost like nuclear energy. The thing about nuclear energy is that, thinking of humanity as a collective unit, you use it once and you get the point, and you decide, okay, maybe not again. But with something like AI, if it gets to the point where it has a nuclear-scale effect, because of the arena it plays in, you can't really roll it back; it's not a rollbackable thing. So that's one issue you brought up. The second issue is the concept of authenticity, with the manipulation of human perception using digital media.

And what that also does is obstruct collective decision-making on anything we need consensus on, anything with effects that touch all of us: things we could reach consensus on but don't, because we're misinformed. So it can attack in multiple directions, as long as we have this global nervous system that is the internet. So there's a short-term and a long-term thing. There are so many problems you've brought up that I'm trying to build a narrative around them, but I guess the first question is: how do we create something more authentic? And that's not even the whole problem, because I think that's something we will do, or should do. How do you get people to understand that this is something that needs to be done quickly, before they're misinformed into believing it isn't needed? Because we will be misinformed before we can inform ourselves about how important authenticity is. Does that make sense?

Kate Puech

Yeah, and that's—that's a tough challenge. That's why I'm, um, trying to raise a red flag here saying, well, we need to do something about, um, about authenticity of content. So the solution is going to be multifaceted, as always. So it's going to be a combination of legislation, thoughtful leadership, uh, and just accountability. Like, I think, you know, um, well, if you just, as I mentioned, if you just wait for a law to be passed before, um, acting in an ethical way, then, um, yeah, we are doomed, right? Yeah, I think we are.

One thing I really appreciate about the current company I'm working for is that they have this principle called ethics by design: whenever we build an AI system, we try to consider several ethical dimensions before we start building. You have to think about bias, about privacy, about the risk of causing harm, whether physical or psychological. And I see that becoming a trend in the industry. There are thoughtful leaders out there, good people pushing AI in the right direction; that's why we're not doomed. There are a lot of good, thoughtful leaders pushing their teams to ask themselves the right questions before building something. That's a powerful force as well. And all of this is being done ahead of any legislation. Today, there is no law telling you that you have to build a responsible AI system, that you have to test your system for bias, or that you have to think about your system talking people into doing bad things. But people do it because they are educated and principled, and so they do the right thing. The more we can spread the word and get the developer and scientific community on board, the better; I think that's the real solution, more than legislation.

Host:
So how do we, and I think you mentioned this before, and I think it's great because it happens faster than legislation, it's sort of a grassroots perspective, how do you empower and scale that mentality? How do you encourage this idea of ethics in engineering?

It's actually really interesting because they have it in law, and to have that in engineering, it's almost something that should come into classes, right? People who build systems now have so much power that you do need to say, "Hey, could you do something that is net positive, always? Could you take that into consideration all the time?" I think it's going to change the way we do education for the engineering sector. Because the way things used to be, first you had to build something, and then someone else gave people access to it. Now both of those things are machines in and of themselves, both the distribution and the product; or rather, the product sometimes is a distribution network. That's what Facebook is, and that's literally what Google is: the product is a distribution network. So there's an assumption of that whenever you build things moving forward, and you have to have some idea of what you are distributing and make sure that it's net positive, considering all the angles. And AI is going to make it even more high-powered. So what are the things to consider when you are building these systems? Or, to put it another way: what do you not do when you're building AI systems?

Kate Puech:
Well, you don't test in production; so, you know, that's fair. These systems are so powerful now that you need to think about the ethical risks from the get-go, and about the impact you will have on society from the get-go.

Host:
So you think you should do a full ethical study before deploying your product?

Kate Puech:
I think so. You need to test your product for bias, and that requires effort: you need to collect a dataset that is representative of the diversity of the users you are going to provide the product to. You also need to test your product for just basic safety. That's a new thing that's emerging, and I'm really happy about it.
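As a sketch of what "test your product for bias" can look like in practice, here is a minimal, illustrative example: evaluate the same model on each demographic slice of a representative test set and compare error rates. The group labels, threshold, and stand-in model are assumptions, not any company's actual process.

```python
from collections import defaultdict

def error_rate_by_group(examples, predict):
    """examples: iterable of (features, label, group); predict: features -> label.
    Returns per-group error rates so large gaps between groups can be flagged."""
    errors, counts = defaultdict(int), defaultdict(int)
    for features, label, group in examples:
        counts[group] += 1
        if predict(features) != label:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

if __name__ == "__main__":
    # Hypothetical evaluation set with a group attribute attached to each example.
    data = [
        (("clip_a",), 1, "group_1"), (("clip_b",), 0, "group_1"),
        (("clip_c",), 1, "group_2"), (("clip_d",), 1, "group_2"),
    ]
    def dummy_model(features):
        return 1  # stand-in for a real model's prediction

    rates = error_rate_by_group(data, dummy_model)
    print(rates)
    worst, best = max(rates.values()), min(rates.values())
    if worst - best > 0.1:  # illustrative fairness threshold
        print("warning: error-rate gap between groups exceeds threshold")
```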

Host:
How do you define safety within this context? What is safety here?

Kate Puech:
Imagine you have an AI that is driving your car for you. It's been tested in most scenarios, and it's 99% accurate. But imagine the engineering team has not done their due diligence, and at some point the AI receives an image that triggers a numerical anomaly, and your car accidentally turns into a wall: boom, end of you. That's a risk we cannot afford, so we need to develop safer neural networks, and there are ways to do that. I talked last week with a company called Femi's AI. They have developed a way to wrap your AI model inside another model that handles Black Swan events like this. When your model is internally going through a Black Swan event and is about to spit out something completely absurd that puts people at risk, you can detect it and prevent it: stop the car, start a recovery process, instead of the car just crashing.
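Kate does not describe that company's implementation, but the general shape of such a safety wrapper might look like the sketch below: an outer check scores how unusual an input is relative to the training data and falls back to a recovery action instead of trusting the model on out-of-distribution cases. The anomaly score and threshold are illustrative assumptions.

```python
import numpy as np

class GuardedModel:
    """Wrap a model with an out-of-distribution check and a recovery action."""

    def __init__(self, model, train_features, threshold=4.0):
        self.model = model
        # Fit a very simple "normality" profile from training-data statistics.
        self.mean = train_features.mean(axis=0)
        self.std = train_features.std(axis=0) + 1e-8
        self.threshold = threshold

    def anomaly_score(self, x):
        # Max z-score across feature dimensions: crude, but it shows the idea.
        return float(np.max(np.abs((x - self.mean) / self.std)))

    def predict(self, x, recover):
        if self.anomaly_score(x) > self.threshold:
            return recover(x)  # e.g. slow down or hand control back to the driver
        return self.model(x)

if __name__ == "__main__":
    train = np.random.default_rng(0).normal(size=(1000, 4))
    guarded = GuardedModel(model=lambda x: "steer as planned", train_features=train)
    normal_input = np.zeros(4)
    weird_input = np.array([0.0, 0.0, 0.0, 25.0])  # a "Black Swan" input
    print(guarded.predict(normal_input, recover=lambda x: "fallback: safe stop"))
    print(guarded.predict(weird_input, recover=lambda x: "fallback: safe stop"))
```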

Host

No, fair enough. Which, which in itself sounds almost like, um, we talked initially about AI's having... being brains. It's almost... it sounds like a different cortex, right, in this sort of network of neurons that you're talking about. One for safety, one for productivity, and one for, um, is there, um... how do I... is there a way that these, these protectives... you know, I like what you said about it being a Black Swan event, right? Because it's a... basically, you're saying it's a statistical anomaly that emerges out of the system that is net negative, right? So is that, is this a number filter where you say these are the outcomes of all these weights, then the value that is being produced is so outside the normal distribution that we want to cut it off? So we only want to have normal distribution outcomes. If it's outside that, then we at least want to hit a recovery process or...?

And then our level of risk is just a matter of widening or narrowing that window, right? Does that make sense? Is that how these systems do it? Or is there a list of outcomes we don't want regardless of any mathematical model, outcomes we explicitly specify, so that if the model spits out one of them, we trigger the recovery process?

Kate Puech:
That's the general idea, but it's more complicated than a fixed set of rules. If you think about a self-driving car, the output space is huge, humongous, and it's hard to analyze it in real time and detect that an output is wrong. Similarly, if you take the example of ChatGPT, how do you know whether an answer is right or wrong? It's such an open space; the AI can tell you an infinity of things, so you cannot really hard-code rules saying "that's not appropriate" or "we don't want to output that." So it's complicated, and you actually need to use AI to catch AI errors.

Host: 

So interestingly, we are using AI to correct AI errors?

Kate Puech:
Yeah.

Host:
So what's the idea behind that, if I can ask? What is this parent AI doing? Does that make sense, what is it doing?

Kate Puech: 

I don't know for that specific company. I cannot talk for them, and I don't know about their IP, but I can tell you a little bit about how... and it's an open area of research right now, honestly.

Of course, I don't know how OpenAI is doing it, and I don't know how Google or the others are doing it; they keep that recipe very secret internally. But the idea is that you train models against each other: you have one AI trying to trick another one, and the other learns to handle its output. It's that kind of mechanism being used here. It's sort of like an evolutionary mechanism; they evolve together until one becomes very good at defeating the other.
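The train-them-against-each-other idea she sketches is, in broad strokes, what adversarial setups such as GANs do. Here is a heavily simplified, illustrative sketch on one-dimensional toy data: one tiny model learns to generate samples while another learns to flag them as fake, and the two improve by competing. All hyperparameters and the data distribution are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Real" data the generator tries to imitate: samples centred around 4.0.
real_data = lambda n: rng.normal(4.0, 1.0, n)

# Generator: turns noise z into a sample a*z + b. Discriminator: sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for step in range(5000):
    # --- train the discriminator to tell real from generated ---
    x_real = real_data(32)
    z = rng.normal(size=32)
    x_fake = a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    g_real, g_fake = d_real - 1.0, d_fake  # gradients of the BCE loss w.r.t. the logits
    w -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- train the generator to fool the discriminator ---
    z = rng.normal(size=32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_logit = d_fake - 1.0  # gradient of -log D(G(z)) w.r.t. the logit
    a -= lr * np.mean(g_logit * w * z)
    b -= lr * np.mean(g_logit * w)

print(f"generator offset b = {b:.2f} (should have drifted toward the real mean of 4.0)")
```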

Host: 

Right, okay. So, we've talked a little bit about AI safety, and you know, some of the things that we want to look out for when we don’t want AI to move in one direction, right? Specifically, we talked about... I guess your example was, sort of, cars and making sure that it doesn't produce negative things.

Kate Puech: 

Mm-hmm.

Host:
But one thing you talked about earlier was AI that just generates false realities. There is no fixed set of negative outcomes we want to prevent; anything it produces has the potential to be an issue, because the issue is in the perception of the person. So what do you think about that, and about the issues around it? What are the things we can do to manage it? I mean, there's an obvious answer, like a "this is fake for sure" sticker on every piece of content generated by AI, but as one might assume, that sticker can be removed, right? So that's a sort of bad solution. As someone who's big in this space, where do you think good solutions can at least exist?

Kate Puech:
You know, I don't think we should remove content unless it's really content that's dangerous for some population, like child pornography, stuff like that. There is extreme content that you don't want. I am very much for free speech. I think that's the strength of this country, and I think that what Musk has been doing with Twitter, bringing some of these people back, is actually a good thing. Even if you might not like these people, I think everybody should have the right to say what they want to say and to be heard.

If it's not offensive, I think removing content is not a good idea, but transparency is important. The big danger with generated content and bots and things like that is that you can reinforce people in their opinions and expose them to content that is not realistic but that they will watch anyway. You're going to reinforce division between people and isolate each person in their own little bubble of content they like, which confirms their own biases and hypotheses.

Host:
Do you think they would still watch it if they knew it was fake? Do you think people will go in that direction?

Kate Puech:
I don't know. I think at least we should be transparent. I think we should at least have a rule that forces content producers to mention whether a video was AI-generated, whether it was enhanced, or whether it's the plain truth. I think that would really, really help.

Now, are you saying that people are still going to watch this content even though it's generated?

Host:
Yeah, maybe. Or maybe... I don't know. I think it depends on the content, but I think they will, right? I think they will watch it. I had never thought about that, but now that you raise it, I think they would. And I think what you said about forcing people to identify that something is fake would be super important, right?

Because it can be very insidious, the insertion of artificial behavior. It doesn't have to be a whole video that's fake. It could be that, as I'm talking to you now, an AI changes just the last three seconds, making it seem like you're responding to something you're not actually responding to. An AI could do that; it doesn't have to be ostentatious. So whether they would want to watch something fake... and then, in the bigger context where everything is fake, whether people will be happy with that or not, I think there are some things where they absolutely don't care if it's fake.

Yeah, there are some things. The thing you were pointing out initially, whether or not they'd be willing to believe something that they know is fake, I don't know. But I think somebody will figure out a way to make that somewhat true; I just don't know how. There are many people who believe things like the Earth being flat, right?

Kate Puech:
Yeah, although to believe that, you really have to go out of your way. It takes quite a lot of work to believe that the Earth is flat. In fact, if you did that much work, you might as well learn the basics of astronomy. So it's an interesting point: human behavior is very variable. It's a very interesting question, I think.

There is the question of the truth of facts and information, right? There may be people who will at least be willing to see the problem and have a discussion about whether our president said something or did not say it. If there is a video out there of our president saying he will take all the guns from all the people in America, you might want to debate whether it's false or not. But there are other types of generated content that have a different kind of impact, much more subtle, and dangerous too. So there is the question of the truth of facts; that's one. I think a lot of people are aware of that and see the risk I mentioned of something really bad happening during the next election if we don't do something. But beyond the risk of a civil war, and I hope not, there is something happening right now, which is that the reality we see through our screens is oftentimes very far from the actual world. There is a link between social media and depression in young kids and teenagers, and there is a correlation between the number of hours you spend every day on social media and your level of happiness. That issue is growing, and I think it's concerning. There is a huge issue beyond the political one: we live in a world where there is a huge incentive to gain access to people's brain time. A lot of tech companies' business models today are about how to get people hooked so that they spend more time looking at ads. In that context, there is a huge incentive to create catchy, fishy content, the kind that's going to get more views, and with generative AI the situation may worsen. A few psychologists have been trying to raise a flag since around 2010, noting a huge increase in depression among young teenagers, especially women, because they spend too much time on social media looking at unrealistic things. It's dangerous, and there is a big lobby, and nobody's talking about it. I think it's posing a threat to our society, and it's not helping people have a realistic view of what the world is. It's like everybody's trapped in a digital prison, a digital bubble, where you can feed them the content they want and get them to watch even more ads by getting them addicted to things that are not good for them. How good is it for you to scroll all day looking at pictures of who knows what?

Host:
So there's a lot to consider. The first thing is, I think this idea of things being artificially constructed to push you in one direction or another has been happening for quite a while. People have been editing, whether in political campaigns or speeches, to give you one side of the story or the other. The media magnates have played into that, almost with a consensus approach, where one side will edit one way and the other side the opposite, but no one will tell you the other side was edited that way. A cascade is used to rile you up and capture your attention. Social media companies, if I may say so, have almost done it more authentically, giving you what you want, which gets your attention. The problem is that it works so well. It's similar to the way sugar works: the problem with sugar isn't that it's addictive per se, but that it gives you all the energy you want, and with that kind of absolute provisioning it can be net negative. When we think about these two things together, that social media companies want your attention and that the content they produce doesn't have to be real, it becomes a cycle where the social media companies generate a reality for you. And it's kind of happening already, where if I talk to my friend on the phone, the conversations are tailored to what we want.

Kate Puech:
What you said about that is really interesting, because I got married recently, and I was talking about it with friends, and suddenly I started receiving all these ads about wedding dresses and rings and things like that. And now that I'm getting older, it's "do you want a baby?" and all these baby ads, from all kinds of different media. Content is already definitely tailored for each person, right? Based on which ad you click or look at longer, you will get tailored content. So I think that's really dangerous. Personally, I would rather not have tailored ads at all; I think that's not good.
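As a toy illustration of the feedback loop being described, and not any platform's actual algorithm, a recommender that ranks content purely by past engagement will keep narrowing what a user sees. The categories and weights below are invented for the example.

```python
from collections import Counter
import random

random.seed(0)

CATEGORIES = ["weddings", "babies", "hiking", "politics", "cooking"]

def recommend(click_history, k=3):
    """Rank categories purely by how often the user engaged with them before."""
    counts = Counter(click_history)
    return sorted(CATEGORIES, key=lambda c: counts[c], reverse=True)[:k]

def simulate(steps=50):
    history = ["weddings"]  # one early click...
    for _ in range(steps):
        shown = recommend(history)
        # The user is a bit more likely to click what they already engaged with.
        weights = [3 if c in history else 1 for c in shown]
        history.append(random.choices(shown, weights=weights)[0])
    return Counter(history)

if __name__ == "__main__":
    # Engagement concentrates on a few categories: the "bubble" effect.
    print(simulate())
```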

Host: 

Would you, would you do... do you think it would be wrong for them? Let's say, because I think, I think, yeah, the tailored ad thing is so... that's one side of it, right? But would you think it would be like... let's say they decided that, you know, you're watching some sort of series of things because now the way that they can do this would be crazy, right? Because they could just say, you know, you're watching a bunch of things, right? So for example, you just said baby ads, you said, you know, sort of things, but that has to do with like being married or like wedding stuff, right? Right, but they could just generate a... they could generate something that's just a combination of all three because they won't have to actually make an ad. They could just generate something that is a combination of all three. And so they could even have it so that the service of the advert itself would be a subsidy. It would be an augmented service, right? Where... and that is, you know, that could... Do you consider that insidious, or do you consider that an improvement in marketing technology?

Kate Puech:
No, I think our marketing technologies are already far too good. I don't think we should make them any better.

Host:
So it's almost the same thing as with sugar, where it's not that sugar is bad; it's that having that level of accuracy for what you want is net negative simply because of its quality. It's like everything is nuclear-powered, basically. Nuclear isn't bad; it's just that it's all at once, so it blows everything up. AI isn't bad; it's just all at once. It makes human understanding obsolete; everything is overpowered, or has the potential to be overpowered, in a way that our physiology can't cope with, it seems.

Kate Puech:
Yeah, we are playing around with very powerful tools.

Kate Puech:
I think it's really exhilarating. But you know, when I grew up, I don't think I thought much about how technology was governing my world. I felt really, really free. I had no cell phone growing up. If I wanted to see my friend, I had to walk a mile, and we would take two horses and go ride in the countryside or whatever. That was fine. That was nice.

Host:
You lived in the 1800s.

Kate Puech:
Whatever, whatever, that's cool. But the point here is that we have developed so many technologies to sell stuff better and to get people to buy things they don't need that we are all being manipulated. We have to be careful about how we are manipulated. I think we need to give people back their freedom of decision and of thinking, and the way we are more and more hooking people to an online world and trapping them in a bubble where they interact with friends and people through digital media all the time is concerning. A video conference like the one we're doing is totally fine, I think. But when all the content being presented to you has been tailored for you, and all the contacts you meet are people who think like you, I don't know if that's really healthy.

Host:
So, you know, I want to reel it in a bit, because there are so many problems here, and you do a great job of bringing up all these areas; even some of the questions I'm asking are just derivatives of what you're saying. But going back to what you talked about, the foundation of all of this in terms of solutions is ethics, right? We talked a little bit about ethics and some of the things we need to think about. One of them is that, like lawyers, engineers need to have ethics built into the profession. Another problem, though, is that the pipeline for knowledge no longer runs only through formal education systems, so it isn't distributed as evenly, and there is that issue. But let's assume we have the ability to pair software skills with some sort of ethical understanding, whether it's mandated or not, in a new world where universities are not the only pipeline into technology. How would you go about that? What do you think is the foundational thing? I think you insinuated it a little bit, which is establishing authenticity, but what's your idea of how to make that happen? How do we incentivize ethical thinking about AI within the system we currently have?

Because with legislation, if we pass it, it would be there because we forced it, but as we talked about earlier, it will be too late by the time we need it, right? Legislation reacts to problems that have already happened, and this is one of those problems where, when it happens, it's going to be too bad. It might destroy society before society can react, right?

Kate Puech:
So I think it's again a multifaceted solution. Yes, there is a legal aspect, but that's not sufficient. I think education is super important: there should be an ethics class in every single computer science program. And then there is just holding people accountable, asking them questions, making them feel that they are responsible for what they are doing. I do not think the majority of developers at Meta are evil. I don't think so. I think they are great, smart people; well, I should not single out Meta, but at a company publishing content and pushing ads on people, maybe they do not realize. In my interviews I always ask this question: "Tell me about a time when you declined an opportunity for ethical reasons." And sadly, 90% of people answer, "I've never thought about it." That seems impossible: you're 30 or 40, you must have had at least one moment in your career when you had to say, "No, I don't want to do that job," or, "No, I'm not going to take the higher-paying offer, I'm going to take this one because it's a way to use my skills for a greater good." But oftentimes it's, "Oh, I'm just developing a database," or, "It's not my fault, I'm just a database engineer," or, "I'm just working on an AI to enhance the quality of ads; it's not my problem, I'm just doing my thing." That's a very dangerous way of thinking. It's an extreme comparison, but the guy who drove the train that brought people to a concentration camp was just driving the train, right? It's the same thing when you say, "I'm just a database engineer." No, you're accountable. When you realize that you are accountable, even if you're just developing the database for that big thing, I think that's a powerful start: people becoming aware of what they are doing and feeling accountable for it. We see it with global warming; I think it's kind of working after years and years of information. It took a long time, people have been raising it since the '70s, but finally we have products that take it into account.

For global warming, it's kind of working. We have seen that when people are really educated about a topic, they start saying no and feeling responsible. They sort their trash, and they buy an electric car rather than another car. Not everybody does it, but most people now are trying to reduce their environmental footprint. Similarly, for AI and ethics, I think that if we promote and invest in ethical, responsible AI practices, and make it transparent that there are practices that should not be tolerated or that are dangerous, the field is going to change. Well, at least I'm hoping so.

Host:
Well, you know, it was great talking to you today. Honestly, it was very informative. It brought up a lot of questions, even more than when we talked before, and I thought that was a great conversation. This one was just as thought-provoking and just as difficult to meander through.

Thank you so much for coming on, teaching us a few things, and bringing out these ideas. I really do hope people listen to this, take heed of it, and really pay attention to the points you're making. They are hard to grasp but very powerful. If you're not in tech and you're listening, just really think about what's being said and about the power of the tools being brought forward. But anyway, how do people find you if they want to reach out and ask more questions or get more clarity?

Kate Puech: 

Good question. Well, I think they can find me on LinkedIn. I'm not really on social media, as you could have guessed. They can find me by my name on LinkedIn and drop me a message. I always appreciate when people want to chat about AI and its possibilities and the risks associated.

I would like to end on a positive note. I'm still working in AI, and I have a lot of conviction about how it can transform society for the better. That's the reason I'm fighting so hard against the risks associated with AI, so that we can keep using the tool to build a better world.

Kate Puech: 

All right, well, thank you very much for having me. It was a wonderful conversation, you asked really interesting questions as always, and I'm left with a lot of new ideas. Thank you very much.
