Intel Will Soon Change Everything! - Director of AI at Intel
Janet George is a seasoned technology leader with over 20 years of experience at renowned tech giants like Oracle, Yahoo, Western Digital, Apple, eBay, and Accenture Tech Labs. Her expertise lies in driving digital transformations, with a specialization in Cloud Native, Edge, Autonomous AI, and ML technologies.
Throughout her career, she has contributed to mega deals totaling $900 million, including a $350 million investment in an Advanced Analytics Center of Excellence (AACOE). She has played a pivotal role in achieving $1 billion in Annual Recurring Revenue (ARR) from cloud consumption across various sectors.
Janet’s core competencies include building Cloud Platform technologies from the ground up, covering Edge and Cognitive platforms, data science, Machine Learning, and Artificial Intelligence. Her focus on execution and delivering industrial-scale solutions underscores her commitment to innovation.
Host:
Hey Janet, how are you doing?
Janet George:
I'm doing great, thank you.
Host:
I'm going to introduce you here. So, this is Janet George. She's done many things in her career, but currently she's CVP of Data Center and AI at Intel, and GM of Cloud Enterprise, which is attached to the AI and Security Solutions Group. If you know busy people, she's busier than all of them combined. Thank you for giving us your time, and thanks for coming on to talk to us.
Janet George:
Thank you.
Host:
Very quickly, just to familiarize some people with you, what would you say is your day-to-day? What do you feel like you do on a daily basis?
Janet George:
Oh, yeah. On a daily basis, I'm in a very operational, implementation-focused role: the managerial aspects of day-to-day delivery, customer engagements, and customer products. AI is my strength; it's my background. I've spent two decades in AI. But I think we sit at the intersection of many things, right? We sit at the intersection of cloud, AI, data, and silicon. All of these have to come together for us to get the true outcomes we want. I feel like infrastructure is a big leg of the stool, but there are two other legs: the outcomes with AI and all of the tooling, and the outcomes with data. The combination really makes it very interesting.
Host:
I think this might be an obvious question to you, but for people who aren't sure: how exactly does AI interact with data? Where do you see that relationship, especially for someone working so low in the stack? There's hardware, there's software. How do you see data from a hardware point of view, in terms of the challenges there?
Janet George:
Yeah, so, you know, without going too deep into AI, let's talk about how autoencoders or neural networks work. These are things that can take input data in whatever form or shape it's given to them and reconstruct it, and then they can train themselves. As they train, using a backpropagation algorithm or some other algorithm, they can start making predictions. That's traditional AI, which does all kinds of predictions. We have things like Naive Bayes, random forests, decision trees, different kinds of AI algorithms. And then we have the newer ones, which are really what the hype is all about now: large language models and generative AI. Underpinning all of this is really the size of the data. If you have really small data, AI is questionable. I mean, do you really want to use AI to get the outcomes you're after? You can write a fairly good-sized query and build a robust search engine to get what you're searching for. But if you really want to train, scale, and learn on new data on a continuous basis, then you need AI-optimized hardware incorporated into the backbone of everything you're doing.
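To make the mechanics Janet describes concrete, here is a minimal sketch of an autoencoder trained by backpropagation to reconstruct its input, in plain numpy. The network sizes and data are invented for illustration; this is a toy, not any production system.

```python
# A tiny autoencoder: compress 8 features to a 3-unit bottleneck, then
# reconstruct, training both layers with backpropagation on the
# reconstruction error. All sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))            # 256 samples, 8 features

W1 = rng.normal(scale=0.1, size=(8, 3))  # encoder: 8 -> 3 bottleneck
W2 = rng.normal(scale=0.1, size=(3, 8))  # decoder: 3 -> 8

lr = 0.01
for step in range(2000):
    H = np.tanh(X @ W1)                  # encode
    X_hat = H @ W2                       # decode (reconstruction)
    err = X_hat - X                      # reconstruction error
    loss = (err ** 2).mean()

    # Backpropagation: push the error signal back through each layer.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)       # tanh derivative
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final reconstruction loss: {loss:.4f}")
```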
Host:
So, data is important, and AI essentially allows you to massage the data and get the outcomes you want very quickly, because there's a lot going on on the hardware side. But a lot of people talk about software, as they should, because it really is quite a revolution in software. Essentially, it's these models and algorithms bunched together: k-means, backpropagation, the kinds of things you talked about. But at the end of the day, it all runs on a physical system. So, is there something people should be aware of when dealing with the hardware? Something that's been in the news a lot, obviously, is NVIDIA and the challenges they're facing from a supply-chain point of view, which is a good problem for any company to have, but it's still a problem when we're talking about these technologies. What's your perspective on that?
Janet George:
From a hardware perspective, if you think about it from a layperson's view, or for anybody building AI on hardware systems or silicon, ultimately what matters is the workload and the performance of that workload. Let's say you're working with AI workloads, search workloads, edge workloads, and so on. You want to see the performance, and the performance has to translate from the silicon all the way through the software stack up to the application layer, where you're actually seeing that performance in real time. If you can't unlock the performance from the silicon, you will notice your training taking a long time, and you will notice your datasets just spinning for a long time. Early in my career, I spent a lot of time looking at these issues. I was doing compilers, and back then compiling took so much time. You would kick off the compiler, get two cups of coffee, and sit around waiting for it to finish.
In the early stages of AI training, it was similar. People would train with large datasets, and my team and I would be waiting for these training runs to finish. If the training was done on the cloud, you could be looking at 12 hours of training, depending on the size of the dataset. You would set off the training modules, go away, and come back the next day. Now, we've come a long way from that. We can train much faster, in a couple of hours, depending on the dataset. It's important to have datasets that are highly cleansed, with less noise and annotated with metadata, so you don't have autoencoders spending their time denoising the data. When autoencoders are trying to denoise corrupted data, you see long training times even on fairly performant silicon.
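The denoising work she mentions is the standard denoising-autoencoder setup: corrupt the input, train against the clean target. A rough PyTorch sketch; the shapes and noise level are assumptions chosen for illustration.

```python
# Denoising autoencoder sketch: the model is fed corrupted inputs but is
# trained to reproduce the clean originals. Illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 8), nn.ReLU(),   # encoder down to a small bottleneck
    nn.Linear(8, 32),              # decoder back to the input size
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.randn(512, 32)                        # stand-in "clean" dataset
for epoch in range(200):
    noisy = clean + 0.3 * torch.randn_like(clean)   # corrupt the input
    recon = model(noisy)
    loss = nn.functional.mse_loss(recon, clean)     # target is the CLEAN data
    opt.zero_grad()
    loss.backward()
    opt.step()
```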
Companies like NVIDIA are working to translate that performance from the chip through the software stack. We have to touch the Linux kernel and go all the way up from the Linux kernel to the algorithms and neural networks doing the work. That’s the connection between silicon, software, and AI, which sits on top, doing all the training up and down the stack.
Host:
What is the fundamental algorithm or optimization that the Linux kernel now has to cope with? I know there are a lot of linear algebra calculations, and that’s why NVIDIA became significant in this space. But it’s not instinctual to me, or to others, what the kernel needs to understand now.
Janet George:
You hit it. That's exactly it. You have to ensure that if you have powerful hardware, like NVIDIA's or Intel's, you can unlock that value. Think of Red Hat or any Linux distribution sitting on top of the hardware: you need performance benefits, achieved through mathematical calculation and optimization up the stack. Another perspective is security; security is very prominent for us.
Say at the hardware level, you have secure enclaves where data can stay secure while being transported to a data lake or across applications. Much of AI involves looking at data between applications. In traditional enterprise data manipulation, data is often siloed within platforms like Salesforce or HCM tools. When working with enterprise data, you're often trying to find insights across different applications. For example, if you're doing vendor performance management predictions, you might look at data from both CRM and HCM to predict vendor performance.
Host:
Just to clarify, what are HCMs?
Janet George:
HR systems. If you have recruitment systems or HR systems, or interactions logged with customers, you might want to look at these data points to predict a vendor's performance or manage a vendor's ranking, drill into the details of a particular vendor's performance, and so on. So now you're looking at data being transported from multiple legacy systems sitting in an enterprise, and as you transport this data, this is where data hijacking happens, and this is where data can leak out. These are all what I call vulnerable spots for the data. So, at these vulnerable spots, you want to make sure there's some kind of security, and these protections are built into the silicon. You have what I call a secure enclave in the silicon, so when you transport the data, the data is actually sitting inside a secure enclave, and the place it's going to is operating in another secure enclave. In transit, the protocols between the two secure enclaves are validated in an attested way, right? So it's qualified, and you can't have hijacking happen. But if you have all these capabilities at the hardware level and you're not seeing them penetrate or propagate through the software layer, then you're not going to actually use any of it. So, the whole goal is to make sure that what needs to be there is present, and that we practicalize security at the higher levels of the stack.
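A conceptual sketch of the attested handoff she describes. Real enclaves (for example, Intel SGX) enforce this in hardware with CPU-held keys and remote attestation; here a plain HMAC stands in so the flow is visible. The class names and key are illustrative inventions, not a real enclave API.

```python
# Conceptual only: HMAC tags stand in for hardware attestation so the
# enclave-to-enclave flow is visible. Not a real SGX interface.
import hmac
import hashlib
import os

SHARED_ATTESTATION_KEY = os.urandom(32)  # in real hardware, never leaves the CPU

class Enclave:
    def __init__(self, name: str):
        self.name = name

    def send(self, payload: bytes) -> tuple[bytes, bytes]:
        # Attach an attestation tag so the receiver can verify the sender.
        tag = hmac.new(SHARED_ATTESTATION_KEY, payload, hashlib.sha256).digest()
        return payload, tag

    def receive(self, payload: bytes, tag: bytes) -> bytes:
        expected = hmac.new(SHARED_ATTESTATION_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError(f"{self.name}: attestation failed, rejecting data")
        return payload  # only now is the data trusted inside this enclave

source, sink = Enclave("app-enclave"), Enclave("datalake-enclave")
data, tag = source.send(b"vendor-performance-records")
print(sink.receive(data, tag))        # verified transfer succeeds
# sink.receive(data + b"x", tag)      # data tampered in transit -> rejected
```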
You can do security at the software layers of the stack, but it doesn't compensate for what you can do at the silicon layer. One of the advantages with CUDA or any other such software system is really this optimization all the way from the silicon to the top. That involves not just the Linux kernel but also other things: optimization of Kafka, optimization of PyTorch, optimization of libraries. So optimization doesn't stop at one level; it's very layered, and it goes all the way up the stack. And as you know, you're looking at Kubernetes orchestration and all these different layers of orchestration because of the layers we have: business logic versus middleware versus the application layer and algorithms. Ultimately, AI is very much sitting at the top, right? AI is not a middle layer of the stack; these frameworks, TensorFlow, PyTorch, they're all sitting on top.
And so, unless you see that value come up the stack, you're going to have to make adjustments at the algorithm level, right? But it's more difficult to protect at the algorithm level. It's easier to take foundational capabilities and protect yourself at that level. If you want to do it at the algorithm level, you have to create these pockets, and because things are stacked by their very nature, the higher you go up the stack, the less speed you have and the more complication there is, because it's not optimized at the chip level, or what you might think of as a pseudo-chip level.
And, you know, one example from my tenure at Western Digital, when I served as Chief AI Officer: we couldn't work with open-source TensorFlow because it left our critical data so exposed to being stolen. There were so many gaps. So we had to take the TensorFlow technology we saw in open source and augment it with our own technology; we had to write our own TensorFlow. People would ask, "Why? Why do you want to write your own TensorFlow when there's already TensorFlow out there in open source?" And the answer is that you can use many of the techniques that are in open source as building blocks, but you also need to augment what you have.
Because when you think about silicon data or design concepts, these are highly proprietary. You don’t want that data to be exposed either to your competition or to anybody looking to attack that data. So, those are the kinds of things that you then have to go out and do on your own. Now, if it was already built into the silicon, you could use it, but in Western Digital’s case, they were memory providers, so their entire footprint was about memory and the data related to that.
Host:
I think, just to get people more familiar with you, because one thing is for sure—you’ve really been able to explain the complexities that you deal with on a day-to-day basis. But, as with all people who deal with a lot of problems, they always ask themselves how they got here. And so, if I ask you the same question, I’d love to know: How did you get into tech and into these different pieces of technology that honestly were speculative earlier, as recently as five or six years ago?
I think people in the industry saw it come up slowly. I remember seeing early image-generation demos going around, which was fascinating; it was at a nascent stage, and people were making pseudo-websites with it. And now it's become this phenomenon. So, getting into AI early was... well, I'm not sure how popular it might have been, and I'd love to hear how you got into it and then got into the scaling of these problems and of AI solutions, the power of these problems at almost the most granular level, trying to solve the whole stack, or solve for basically the entire stack.
Janet George:
Early on, in my childhood even, I was always interested in complexity; I naturally navigated toward complexity and how to solve complex problems. I was also very, very good at math. So there was the combination of being mathematical and really looking at complexity and problems. And then my training was in computer science, with artificial intelligence as my thesis topic.
So, I picked pretty early on that I wanted to go into artificial intelligence because I was very interested in inferencing. At the time, we were just inferencing; data sets were so small. We're talking about 20 years ago, right? So, data sets were very small, and you couldn't do very large scale. But if you look at every single algorithm, they've been around for 20 years. Look at random forests, Naive Bayes, any of these algorithms; they've been around. Markov models, or the J48 model, have been around for a long time. So, it's not like machine learning wasn't around; it was around, and people were studying it as part of their training and the basis of their education.
It’s just that they couldn’t make sense of it and use it. When I was working at Apple, I spent a lot of time in inferencing, and I realized very quickly that the data sets were so small that you couldn’t do much with these data sets. This is why I went into compilers and compiler programming, and I really started going down that path within computing. But very shortly after that, at Yahoo, actually, eBay was where I first encountered a proliferation of data. We had so much data around, and now we had to do searching and crawling on these massive data sizes. Search algorithms were not built to do that, you know.
I call it the retrieval problem versus the extraction problem, right? When you retrieve data—especially large amounts of data as part of a search problem domain—you’re not actually looking at what was retrieved; you’re just looking at information retrieval. But now, the extraction problem, that’s an AI problem, because now you’re looking at the contents of what is retrieved, not just the retrieval and indexing of very large data sets. The World Wide Web created massive amounts of data, and much of this data was just noisy. So, search results weren’t always contextually relevant.
PageRank, for instance, was inherently about who viewed the content, right? It was about the crowd of users viewing the content. The problem was that if someone liked something for humorous reasons or other reasons, that link would surface to the top. We had to lean into extraction: tokenizing the contents and learning what was in the data.
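The retrieval-versus-extraction distinction she draws can be shown in a few lines of Python: retrieval finds documents that match, extraction looks inside what came back and judges what it says. The documents and word lists below are crude stand-ins invented for illustration; a real system would use trained models.

```python
# Toy contrast: retrieval matches documents; extraction interprets them.
docs = {
    1: "great product five stars would buy again",
    2: "great joke about this product, so funny",
}

# Retrieval: a simple lookup -- both documents "match" the query.
def retrieve(query: str):
    return [doc_id for doc_id, text in docs.items() if query in text]

# Extraction: tokenize the retrieved content and decide what it means.
def extract_sentiment(doc_id: int) -> str:
    tokens = docs[doc_id].split()
    positive = {"great", "five", "buy"}
    joke_markers = {"joke", "funny"}
    if any(t in joke_markers for t in tokens):
        return "humor, not a real endorsement"
    return "positive" if any(t in positive for t in tokens) else "neutral"

for doc_id in retrieve("product"):
    print(doc_id, extract_sentiment(doc_id))
```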
So, I think I got into AI from two angles: one was my own passion, and the other was my career track record. I was always attracted to harder problems and the more novel, cutting-edge things. I was sitting at the precipice of these bigger problems that the world hadn't solved yet. eBay was trying to figure out how to do valuable search for the sellers and buyers on its marketplace; the volume of data was unlike any other. Yahoo, for example, had an entire library of content, you know what I mean? On the website there were just amazing amounts of data that we had to figure out what to do with, to make it valuable for our customers.
We wanted to bring value—like if you were on the Yahoo website, we wanted to make sure you actually saw content that was relevant to what you were browsing. We didn’t want to show you jiggly ads or things like this, you know? So, it made sense to really mine the data for our customers and then start to do early AI and data science, and that’s where my career trajectory in AI really took off—at Yahoo. From there, it became more industrial because the world was catching up, and people were trying to figure out AI tooling. Now you can see, I feel like with these newer models, that younger generations are embracing it much faster.
If you see, they’re writing their school papers and using five different large language models to create their school papers, and then they make a subset of all of these. So, the baseline has been established and is here now. There’s ChatGPT and other technologies out there that can very robustly inform you about a particular topic. But it’s not without human reinforcement learning, right? Humans will always have to look at these and say, "Does this make sense?"
Host:
You can't rely on the machine completely, right? There's still very much human feedback in the loop, and it's augmented by human intelligence. You talked a little about the complexity of this, the role of math, and your interest in solving hard problems. It reminds me of Turing's early work. One of the first things he talked about was how you would need a lot of statisticians, people to solve problems. His original problem was the idea of search, of sorting, and how these fundamental abstractions relate to intelligence itself. It's fascinating that, although initially it didn't seem to require statisticians, now it's fundamentally all about that. It's interesting how, from the beginning of Turing's idea of machines, these concepts have been there.
You also brought up a few intriguing points. One was about security and optimization, which you almost used interchangeably. But to me, they can be at odds. The more barriers you set up, the slower things become, which can lessen optimization, often measured by speed, especially in business. Could you talk about some of the challenges in balancing security with optimization?
Janet George:
Yes, it would be amazing if we could start with brand new data. Our problems would be much easier if we didn’t have to deal with bias or noise. But the reality is the data we have is imperfect, and we must start from there. AI, in my view, isn’t revolutionary but evolutionary. When working with existing datasets, every enterprise considers both internal and external data — data from customers, websites, partners. It all starts there.
When dealing with data, especially from various sources, you face vulnerabilities. We saw this early on at Yahoo, where bringing together diverse data — the concept of a data lake — had both advantages and trade-offs. Every technology has its pros and cons, and data lakes are no exception. But the conversation shifts when AI can begin to differentiate between quality data and noise.
The human brain doesn’t filter out all the noisy data we encounter. Instead, we can intuitively discern between useful and irrelevant data, focusing on the valuable. Google Brain, for instance, did this with images of cats. Training on vast amounts of authoritative, high-quality data made it unnecessary to filter out every irrelevant image. In enterprises, the neural networks need to distinguish between real and fake data. This is where security becomes crucial. The role of a discriminator algorithm is essential, but these algorithms are still evolving. Even with generative AI, we see hallucinations, and these hallucinations coming from it trying to create its own data, right, without any human intervention. And we know—all of us know—like, if you were to go into a factory, manufacturing especially, there is so much organizational knowledge, there is so much domain knowledge. It almost takes you two years to really capture that domain knowledge and get ahead of it, right? So that sort of trying to figure out how this organizational knowledge within an enterprise can translate, and how do you train networks to understand...
I remember spending hours in the factories as I was building practical AI, really training the neural networks with very small sets of data. We were doing generative AI at the time, creating capacitor images with fake and true polarities. The fake polarity was the one that caused the visual inspections to fail; the true polarity was what passed. We were spending a lot of time training these neural networks, and there were so many permutations and combinations of how the polarity was put on the physical silicon. Even the experts would have to sit down and debate whether these polarities made sense or should be...
I'll take another example: the memory channel. When you send current from the upper layer to the bottom layer of a NAND, the current creates these memory holes. The memory holes can look oval or ellipse-shaped, or somewhat heart-shaped. We had to figure out what the right memory hole should look like: what the right shape is, what the right radius is, how much deviation it can have. All of this required people who are experts in device physics to come and sit down and decide whether it was right or wrong.
Host:
And you were embedding this AI straight into the chip? This—like, that was the goal, was to embed a bit of the decision-making abstraction straight into the chips?
Janet George:
That's right. That's right. So now we're sitting here saying, "Hey, when we do this, do we have the domain knowledge and the expertise of the domain built into the training sets, and then built into the discriminators and encoders and things like that?" We don't want to sit on the outside and keep monitoring what AI did. We'd rather have AI do the right things from the get-go, and then we score it, rate it, and adjust it to keep doing the right thing as we learn.
So this discrepancy is going to consistently exist, right? Even with web reviews: if somebody rated a review five stars, and then within the text of the review they say, "I'm not going to buy the product," we humans are very quick to home in on that. We say, "Okay, they're not going to buy the product, so obviously they were dissatisfied with it. They gave us a great review because they were kind." And so I think this knowledge is very contextual, right? And it has so many different permutations.
You can see things like a house burned up and a house burned down—it means the same thing. It’s just a different way of expressing the same sort of knowledge or information. So I think those are areas where we're going to see AI, as it advances, trip up quite a bit.
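Her five-star-review example above amounts to a consistency check between the numeric score and the text. A toy version in Python; the phrase list is a deliberate simplification of what a trained sentiment model would do, and the function name is invented for illustration.

```python
# Flag reviews where a high star rating contradicts negative text.
NEGATIVE_PHRASES = ["not going to buy", "never again", "do not recommend"]

def review_is_consistent(stars: int, text: str) -> bool:
    text_negative = any(p in text.lower() for p in NEGATIVE_PHRASES)
    return not (stars >= 4 and text_negative)   # high score + negative text

print(review_is_consistent(5, "Great service, but I'm not going to buy the product."))
# False -- the rating contradicts the content, so flag it for human review.
```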
Host:
Right, right. So again, I think even the terminology is almost deceiving, because this idea of "good" insinuates virtue, while in a quantitative science it could also mean some measure being hit precisely, to some rounding value. So I think there's a lot of space for AI to trip up in that. And you alluded to the PageRank algorithm, which is basically what Google initially did; it doesn't do that anymore. Whoever was on the internet early on saw how useful it was. It really did give you what you asked for. You had to ask in certain ways, maybe, but you were getting the information you wanted. Now, it's something else. It might be more marketing, I think.
The ranking system has been gamed, which to some degree could be good or bad. Obviously, it gives a lot of people an opportunity to get their products out there, so it's not necessarily a bad thing, but it's not exactly what people always want from this technology. So this brings me to gaming algorithms and how people feel about AI. There's a lot of trepidation, to say the least, about what it can do, what it can't do, what it will do, what it won't do, and what it should or shouldn't do.
Could you talk a little bit about—because if there’s one person who understands the limits of this technology, it’s you, because you work at the very physical, at the electron-shifting level of it—so could you talk about its limits and sort of what you see it can and can’t do? Maybe it’s a bit professorial, this request, but it’d be interesting to get any of your points of view on it.
Janet George:
Yeah, I mean, I'm more scared of the people using and abusing AI than of the AI itself, right? Because I feel like human beings are so capable of distorting things. If we are the responsible owners of technology, and we should be, then it is our responsibility to make sure it's not misused, that the right regulations and the right policies are put in place. Because it's very easy to misuse technology, right? For example, we can abdicate our own responsibility. We might say, "Oh, there's an AI system over there. The AI system said so, so let's all follow the rules and just enforce the rules." But what if that doesn't make any sense? What if AI made a mistake? What if AI predicted something wrong? Who is going to arbitrate? Do we need an authoritative person to arbitrate, or can a regular person arbitrate?
So I'm worried that the rights of regular people, who don't have power, are more at risk, because they're going to be less able to voice themselves or push back against policies, systems, and processes. And while these things have to be put in place, we have to pay attention to who is putting them in place, how they are put in place, and what the exceptions are. I think in AI we're going to need a lot of exception handling, just like when we started programming in C and C++ and had a great deal of exception handling, right? And I think we need that before we can say these systems are really "good," because the data we're dealing with is incredibly biased. People operate within constraints and within the knowledge they were taught and built upon, so it's very difficult for somebody to break out of that. We are a system of rules, right? We want to follow rules, which is a good thing. But what if you have the wrong rules in place? That's why the world is freaking out. People are concerned because there's a genuine concern that AI can be used for harm. But the technology itself is somewhat neutral. It's the implementers of the technology, the usage patterns, and the power behind the technology that could make it really bad.
So I think we start small and say, "Let's use AI for efficiency reasons," and build upon that, right? Let's just get efficiencies out of it, rather than having large systems mandate and hold a very important place in our lives. I think it's important to start small and to start looking at the value and the efficiencies that we get from these technologies.
Host:
So what do you mean by efficiencies? Is it the efficiency with which the technology is able to work, or the efficiency with which it’s able to produce results that are—
Janet George:
I want to say, virtuous, right? Because once you make decisions, there's a concept of morality that gets added at this level of pseudo-consciousness. I mean, you can say an "if-else" statement can have morality, but not intrinsically. But the way AI is designed, interacting with a person and making a decision for them, it comes down to how it's being used.
Host:
Right, so how do you make that happen, I suppose? I’m not sure that’s a very well-structured question, but it’s a very tough thing, right? Because the English language itself is, just by its very nature, abstract. Can you even embed that level of abstraction into a chip?
Janet George:
Yeah, it's a very, very difficult question, and it's a very difficult problem. I would move toward enterprises first, right? Let's start with the efficiencies there before we try to make every consumer efficient. These are businesses that are spending so much money on operational costs. When I talk to my customers, all of them, as enterprises, are still spending huge amounts of money.
Sometimes it's things like the tickets they receive: 400,000 tickets in the IT department, and no knowledge base for employees to go search and get answers. There are no clusters and classifications for their customers. They don't even know which customers are really good and which are not so good, and so on. So there are efficiency layers just at the enterprise level. That's why the world is now basically asking, "Hey, are you an AI company doing business, or are you a company doing business that just uses AI as a tool?"
And I think eventually every company will become an AI company. That means it will have to embrace AI for its foundational back-office operations and operational efficiencies, just like every company needed maps. I remember working for a mapping company, and I was telling people, "Why do you have static maps on your website? You should have dynamic maps, so people don't just see a static picture of a location; they can click on the map and actually navigate, get directions, all of those things."
So, AI is going to go to that level, right? You don't just leave inefficiencies in your organization; you up-level your organization to get efficiencies from an operational standpoint. Then you go to the next level and start to provide value to your customers, building out new products.
I've been at the other end of the spectrum, so I've actually created revenue through AI. You can have net-new revenue streams and net-new businesses created out of being an AI company. But starting off, I think the efficiencies can simply be at the enterprise level: making sure your operations are running, your processes are optimized through AI, and your performance reviews are done via some sort of prediction model. You keep track of all the data and you use it correctly.
I even remember, with HCM systems, HR systems: they're so manual today. If you have two people with the same name, you can't differentiate them, right? You need a country manager to say, "Okay, which record belongs to whom?" and then figure out how to pull the two records apart. The older systems are still struggling with that. So there's a lot of opportunity, I think, for AI around efficiency and cost in business.
I think that's where we have to start, and then the consumer end will continue to grow faster, with ChatGPT and the like, where people use some sort of ChatGPT-type model for searching. They want to know things, not just run a query and put it all together. I was recently sitting with one company that said, "Let's take ChatGPT and ask it how we should compete better," and ChatGPT did a very good job telling them what their weaknesses were in that industry, how they could compete better, and what the competition was doing. As a baseline, I was super impressed.
I was sitting with the CEO and all of their people, and they said, "We didn't know this; there was so much information here that we didn't know." So I think, as humans, we are reaching the limits of the amount of information we can pull via search. There's so much information out there; as all of us are experiencing, we are under information overload. With information overload, how do you even pull the base information that is relevant to you, that is context-driven, that actually provides value? We start there, and then we have to put a lot of regulations in place as we go higher and higher with AI. Like the recent case of the lawyers having ChatGPT do their legal work: that's just over the top. We should not be going there. That is on the other side.
Janet George:
I think we have a lot of work to do and a lot of efficiencies to get, but on the enterprise side and the government side, we are seeing a lot of adoption, and that's primarily because people saw how texting changed our lives, right? Nowadays, you don't write a very long email if you just want to say two sentences. Emails are actually becoming texts; if you write an email the way you used to, you can be seen as someone who wastes other people's time.
Host:
That's right. So just imagine the next level, where you have assistant-based technology, right? It's not even text; you just say, "Hey, I'm thinking of David and I want to send him this," and it automatically says, "David, Janet's trying to text you and get hold of you." One less bit of typing, one less bit of your time, because it's all measured in time.
Janet George:
So I think that's where the technologies will go: you'll get assistance. "Pull out my favorite whatever." "Look at the email I wrote two months ago." "Pull this out, pull that out." Search will really go to the next level, in my opinion.
Host:
Right, and this is why I think Microsoft Copilot has really made a big statement. We can see it even in programming. When you see these newer AI technologies, they're getting through computer science courses very well, right? And you say, "Wow, did it study the whole coursework in such a short period of time and get a 4.0 GPA, or 3.9?" Because that's very good, right?
Janet George:
Yeah, it's very, very good. So that's what we are seeing: the power of these technologies. And the power of the technology is frightening us. Instead, I think we should really look at that power and say, "We have such a powerful technology; how could we use it to be more efficient? What is open to being made less manual?" Then you're not wasting time, which is what happens under information overload: you end up burning your own time. These efficiencies matter because they give you relief. And just to build on what you said, I think the best example is searching in a company for someone's name. Something as simple as that is a complex issue: if two people have the same name, who are you really?
Host:
Yeah.
And then, you know, getting the identity of a person in a company of four or five thousand people, where there are quite a few John Smiths, and having an AI get that information for you: it could save you a lot of time, because it's too simple a question to have to call five or six people in an organization about. So, I think that's a very powerful idea.
Janet George:
Yeah, and in the database world we had this whole problem of deduplication, right? The same name shows up over and over again, and you're asking: is it the same J Lee I'm talking to? Is it a different J Lee? Is there an initial in the middle, you know? What's going on? And then, in context: which department are they in? What are they working on? Is it the same person I've interacted with, or a different person?
Host:
Exactly.
Janet George:
Yeah. So, it's interesting: there's a company that came out of the Israeli security forces, and essentially their way of identifying people is by how many dimensions of information exist about them; that sort of defines authenticity. A person who doesn't have a nefarious intention has quite a few breadcrumbs of their identity existing all over the place, and a person who does have a nefarious intention has maybe one or two threads.
Host:
And if you look at it as a database table, it's just: how many columns does this ID have? If too few columns are filled in, you might need someone to go look at that record. That's what's popping up in my head when you talk about identifying individuals. So, have something scrape the record, run through all the columns in a row, and find out who it is; how many dimensions of a person can an AI identify? That tells you who they are, and then you can really ask that question.
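The "how many columns does this ID have" idea sketches out naturally as code: score each identity by how many independent dimensions are populated, and flag thin ones for human review. The field names and threshold below are invented for illustration.

```python
# Score identities by how many attributes are populated; thin identities
# (few "columns" filled in) get flagged for a human to look at.
records = [
    {"name": "J Lee", "dept": "Finance", "email": "j.lee@example.com",
     "phone": "555-0101", "manager": "A. Kim"},
    {"name": "J Lee", "dept": None, "email": None,
     "phone": None, "manager": None},
]

def identity_richness(record: dict) -> int:
    return sum(1 for value in record.values() if value)  # populated columns

for r in records:
    score = identity_richness(r)
    status = "looks established" if score >= 3 else "thin -- needs review"
    print(r["name"], score, status)
```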
Host:
And, you know, when we talk about AI in this way, helping us make these kinds of decisions, this is a very simple example. It's great, but as you scale up that decision matrix, there are two things, and I think this exemplifies a lot of people's concerns; it'll be interesting to hear your point of view on it. As we offload these decisions and codify them (you gave the example of hardware, where a lot of experts have to decide what you encode as the actual decision that's going to be made), and as we stack these decisions: first of all, if we get one wrong, what happens? And second, how does a company react when the environment changes and the company becomes obsolete? How does a company deal with that reality?
Especially with something like hardware. This also depends on the company, but hardware is quite concrete in its outcomes and exists in places for long periods of time, whereas software you can change, right? In the software engineering space, software is defined by its velocity of change.
So, can you talk a little bit about how organizations now have to see themselves? I'm sure there's going to be an emergent organizational behavior in terms of how they see how things happen. I'm trying to construct this question as well as I can.
Janet George:
Yeah, it's like a whole new AI-centered operating model. It's really about managing the AI so that it reacts to the market, and knowing how many decision layers deep you have to go to strike something out when you want to change it.
Yeah, I mean, you said it right, Chris. AI is a massive disruptor, right? It's going to change a lot of things, especially when it crosses the tipping point. The silicon chips that you develop for AI have to look very different from the silicon chips that you develop for CPUs and for x86. It cannot be the same chip, just because the artifacts required for compilation versus the artifacts required for training are completely different. How you compile a set of code or programs is very different from training, where there's no compilation required; it's a learning system. And so the chip that does well with programming is not the same chip that does well with training and learning. So you're looking at fundamental changes, and on top of that, you have very strong abstraction layers that you have to work through, because the silicon is at the very bottom, and then you have all these different layers that you have to orchestrate through. The abstraction is going to be another factor. Then there is the whole cloud. The cloud is similar to the internet: you're using it, yet you don't know that you are. You're so far away from your physical machine; you're really in a data center somewhere, using all of the ecosystems that come with that cloud. At that level of abstraction, everything is pooled: memory is pooled, storage is pooled, compute is pooled. You're working with fleets, and load balancers can move workloads across the fleet to any configuration they choose. They can even predict what will work on which hardware, and so on. All of these things become very big challenges for hardware and for silicon, because the control is lost. We used to have very tight coupling between what ran on the silicon and the whole software stack; with x86, it was so tightly coupled that you almost knew what outcome you were getting and what you needed to build. Now it's so loosely coupled, so mix-and-match, with so many abstraction layers. So you're going to rely very heavily on the APIs that you have, or on the standards that you put in place, and so on. So, with AI now, most of the CSPs, and Intel as well, are all building AI chips that look different from your regular chips, just because the game is different.
Janet George:
You look at different types of clusters, and people often talk about the parameter space, right? These parameter counts have been published: we've gone from millions of parameters to billions of parameters, and now we're dealing with trillions of parameters. Where are we going to end up with this parameter space, just the scale of it? How are we going to tackle that? Traditional silicon, traditional chips, cannot help us with this parameter space. And eventually we have to keep up with the knowledge of the world; we can't keep cutting the knowledge of the world down into small little snippets. So, I see that the conversation changes when AI can deal with the noise and the signal on its own. If AI can sit at the edge of the data and start to assess it, and it already does that discernment very well on world data, because world data is authoritative: the encyclopedias are in it, and you can learn from it. But when we get to language, which carries all of these tokens and connotations, we still have to understand accents and cultures and the inner meaning of language across different languages, which adds more complexity. Piping in how language changes, what's jargon and slang, and the different kinds of world data and physical-world data, sits as yet another layer on top, which is human-generated information. Anyway, but please go ahead.
Host:
Think also about natural-language intelligence, right? There's so much intelligence built into the way we speak, the tone and the context of it, even the expressions that come through. We're not all the way there on that, but we are getting there, and the internet made language change faster, because slang propagates throughout the world faster than it used to. These iteration cycles, these change cycles, are happening more quickly. A person two years younger than you can speak completely differently within a week because something proliferated that you don't recognize. So now, turn it around to the enterprises: enterprises are the slowest to change. They have so much history and so much tradition. It's difficult for them to pivot and change, but they have to; they have to keep up with the change occurring in the consumer world. And as enterprises look to follow the consumer world and cater to it, think back: we never used to allow texting and social media in enterprises. Now social media and texting are part of the enterprise, because the enterprise figured out that they're part of the world; we cannot keep them out, we have to bring them in.
Janet George:
Similarly, we will see that these intelligence models, the knowledge models, will have to get better. This is where I think the conversation will change: when we can take all of the enterprise data and not have to spend so much time on data quality and noise, on encoders and decoders, really trying to build without these discriminators, and focus more on things like RPA. How do I look across data sets, data sets that have completely different formats and schemas? Can these schemas and formats evolve over time? What is the bias in the data set, and how do we even tag those biases, how do we surface an understanding of that bias? Where do we build the semantics, the way we represent these data sets? What features do we extract or not extract? Today, we are all extracting a lot of features, but can we live without extracting the features and just focus on the signals? Is it enough, like the human brain, to just focus on what matters, focus on the signals, and build from there?
But I think this whole generative AI space has so much promise, and the promise is that it can create new data sets. If AI can create powerful data sets that are, again, augmented and checked by humans, then I think we are on to something very big. Right now, we look at factories, at enormous production sites, at operational things like GE equipment and manufacturing, different industries, even life-sciences industries where you're doing cancer or drug discovery or looking at structural biology. These are areas where generative AI can take the knowledge of a human and replicate it; that was the power I saw in generative AI in the factories. Generative AI quickly took what belonged to a small, elite set of domain experts, device physicists, scientists who studied this day in and day out. It took that base data set and replicated massive amounts of failure data, massive amounts of disk-capacitor data, that was then used to train.
So now you're creating the data that you need to train on, the right data, and then you're using that same kind of data to do the predictions. That shifts things. When that happens with generative AI, we are on the precipice of something very huge, something massive, because now we can go into our factories, look at all of the archaic ways of doing things, and really add what I call super productivity, the super-productive enterprise.
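The pattern she describes, multiplying a small expert-labeled failure set into enough data to train on, can be sketched as follows. Here simple jitter-and-flip augmentation stands in for a true generative model, and all shapes and counts are illustrative assumptions.

```python
# Turn a handful of expert-labeled failure examples into a much larger
# synthetic training set. Augmentation stands in for a generative model.
import numpy as np

rng = np.random.default_rng(42)
failure_images = rng.random((10, 32, 32))   # 10 scarce expert-labeled examples

def synthesize(image: np.ndarray, n: int) -> np.ndarray:
    out = []
    for _ in range(n):
        variant = image + rng.normal(scale=0.05, size=image.shape)  # jitter
        if rng.random() < 0.5:
            variant = np.fliplr(variant)                            # mirror
        out.append(np.clip(variant, 0.0, 1.0))
    return np.stack(out)

# 10 real examples -> 1,000 synthetic training samples.
train_set = np.concatenate([synthesize(img, 100) for img in failure_images])
print(train_set.shape)   # (1000, 32, 32)
```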
Host:
One thing I'd love to ask is, as technical and granular as you have been, how do you operate at a different tier of the organization, as a CVP? You have to take all this and also be a leader, translating it into more languages than most people are used to, and not necessarily non-English languages, but different ways of understanding. And you lead a team, probably multiple teams. So it'd be great to get your understanding of leadership, especially in technology: how you've seen it evolve, what it is now, and what it's going to need to be. How are you presenting ideas to people and framing things so that they're in an environment that prepares them for what's coming?
Janet George:
I distinguish a lot between managers and leaders, right? A lot of us were managers first. We managed large teams and large organizations, and we came into operational and other leadership roles through the management track. And if you look at how you became a leader through the management track, you were never really trained to be a leader. But to me, leaders are people who are visionaries, leaders are people who inspire, leaders are people who have deep subject-matter expertise and knowledge, because that is how they gain the respect of their team and stay transparent as they lead organizations.
And I think that when you are a leader, knowing your craft, what you're really good at, becomes very important, because people read that, people see that. They look to you to set the direction, to be proficient in your area of expertise, and to have credibility when you talk to your customers, because leadership is a public stage. I always say you can't fake it when you are a leader. You really have to be out there, and people are going to either respect you as a leader or not.
And people will always respect you as a leader when they see that you are able to lead, not just manage. Management is about keeping together the operational aspects, dealing with your roadmaps, and making sure your deliveries are happening. We do need the management aspect, but leadership is about setting direction: going into new territories, opening new ground, and leading people through the adoption of new technologies, new markets, and new businesses.
So I do think that, despite my leadership roles and where I am leading these organizations and folks, I spend a lot of time making sure I'm relevant in my industry, because I was always a practitioner and will remain a practitioner. It's about participation, how you participate with your teams. I am not afraid to discuss hard problems. I'm not afraid to get into the code if that's what it takes. I'm not afraid to have deeper conversations and really look at trade-offs, pros and cons. It's not a narrative we're trying to fit into; it's about leading the team.
And I think future leaders will have to hone in on that. You can't be just a manager and lead a company; you have to step into leadership in a different way than in the past. In the past, you could rise because you were very well liked, or very renowned, or you had spent a lot of time in the organization; there were many reasons why you stepped from a manager role into a leader role. But I think true leaders will eventually be the ones leading organizations, and people will want to work for them, because a title does not make a leader.
Janet George:
I think that to really be a leader, beyond title, it is about your ability to influence people and to secure the hearts of the people working for you. When you secure the hearts of the people working for you, they work for you at all hours of the day, because they're so passionate about the mission, so passionate about what you do together, and that translates through your leadership all the way to the team members.
Janet George:
So that's what I believe in. I totally believe that a title does not a leader make; at the end of the day, the people determine whether you are a good leader or not.
Host:
Well, thank you for being a leader on this public stage. And if someone wanted to find you, where would they be able to do so?
Janet George:
Well, they can look me up on LinkedIn. I have my email right at the top: Georgetown@gmail.com and also Intel, so you can email me at Intel as well.
Host:
Thank you very much; it was very informative, and I do appreciate the time.
Janet George:
Thank you. It was a pleasure talking to you. That was a great time. Thank you.