How Does AI Work in Factories?

Carl Neely is a passionate and visionary leader with 12 years of experience in analytics, data science, artificial intelligence (AI), and machine learning (ML). Carl has led and managed multi-million-dollar strategic IT programs that drive company transformation. His diverse professional background includes working at large global companies like Ford, John Deere, Toyota, and Blue Cross Blue Shield/HCSC in various leadership and managerial roles.

Carl’s current career focus is helping company leaders define and execute AI/ML strategies to reimagine business value, create intelligent customer solutions, and drive product innovation. He is the Senior Manager of AI/ML at TAMKO Building Materials, responsible for delivering AI/ML business solutions and technologies like data integration, digital twins, data transformation pipelines, and AutoML applications. Carl also runs the AI/ML community of practice at TAMKO, geared toward upskilling the company’s talent.

Carl is active in the AI professional community, where he has given presentations on “Automated Medical Record Reviews using NLP” (Data Bridge conference, 2021) and “Assessing Value in Digital Twins & AI/ML Proof of Concepts” (The AI Summit & IoT World Conference & Expo, Austin, 2022). He is also a board member of the Dallas AI meetup group and volunteers to teach AI/ML technologies to minorities and underrepresented youth.

Host: Hey Carl, how are you doing?

Carl Neely:
I'm doing well, good morning! How are you?

Host:
I'm okay, how are you? Do I get this right?

Carl Neely:
Fantastic!

Host:
Thank you. Fantastic. I'm going to introduce you here a little bit, so... and correct me if I say your name incorrectly, but this is Carl Neely.

Carl Neely:
That's correct, yeah.

Host:
Absolutely correct, yeah. Thank you. And he is, among many things, a senior manager of AI and machine learning at TAMKO. Is that correct?

Carl Neely:
That is correct.

Host:
And he's actually been in the industry for a very long time. He worked at different companies like Blue Cross. He's been a project manager and a senior specialist at Toyota and John Deere. So, you know, from what I'm seeing from that resume, he's been in the manufacturing industry for quite a few years.

Carl Neely:
That is correct.

Host:
And he's also been in healthcare as well, and he's here to basically talk a little bit about both those things. So, thank you so much for joining us.

Carl Neely:
Well, thank you for having me. I'm super excited to be here and to share my experience with you and your audience.

Host:
Thank you so much. I'm going to start off by asking what your day-to-day is like as a senior manager at TAMKO. What does it mean to have that tier of responsibility, and how do you provide value on a day-to-day basis?

Carl Neely:
Yeah, I enjoy it at TAMKO. One of the opportunities there was really to lay out the foundation of what the AI and ML strategy should look like from a long-term perspective. A lot of it first started with laying out the strategy: how do you integrate that into our production process? Because there we make building materials. So really, my day-to-day looks like, "Hey, here are all the critical use cases we have to execute against that strategy," whether it's predictive maintenance, vision systems identifying defects, or those sorts of things. So it's really about the execution side, now that we've laid out what that strategy is. And just in general, we're looking toward autonomous operations here, and I'll get into specifics; it's analogous to what you see in vehicles, for example.

Host:
Right.

Carl Neely:
How do we get these processes to intelligently run with little or no human interaction? And that's really the aspirational goal here.

Host:
Okay. From my understanding of vision systems, I'm assuming you're also using a bit of robotics to facilitate all this, in order to eliminate the need for humans to do things that are, in a lot of cases, kind of dangerous? We had someone on who was working in AI and woodwork, which is quite a unique thing, but it's also manufacturing. So identifying defects and things like that: is that similar to the strategy in your organization?

Carl Neely:
Yeah, so one of the things we're focused on in particular is improving product quality. Typically we use vision systems to identify defects as they arise on products coming down the line. You don't want those to get shipped to customers, of course, so you can either scrap the product or, when there's the opportunity, fix the material before it goes out the door.

Host:
Okay, so some of the general use cases around that: defect detection is one, and then, in another sense, safety. But you use it mainly for defect detection?

Carl Neely:
That's right.

Host:
Right, okay. What I wanted to ask you personally is: what made you get into machine learning and AI? It's only become popular now that people can interact with products in the market built primarily on these abstractions. What was it like to get into this early on in your career?

Carl Neely:
You know, I've always been in the technology space, for over 25 years, but it started about 12 years ago when I was working in the healthcare vertical for Blue Cross Blue Shield, and I got an assignment working with some statistical actuaries to build predictive models for readmission. In healthcare, you can think about readmission as a defect: you discharge the patient and they come back as a 15-day readmission or a 30-day readmission, things like that. Working with them really got me excited, because I was in management at the time, a little removed from the hands-on, technical aspect of the work, and that project gave me an opportunity to get hands-on with the actuarial team again on designing the model. It really sparked my interest. I started taking training, classes, and certifications, learning more about machine learning, and here I am today.

That project was quite fascinating because the actuaries are really statistically based: it was basically a logistic regression, with all the statistical terms, p-values, confidence intervals, things like that. And it was actually pretty successful. We applied it to a population, and the accuracy was about 70 percent or so, which was good enough for us to predict whether a patient with a given profile had a high probability of readmission. That was pretty good, because we were able to identify the people we call high risk for readmission, and the way we addressed that was having our clinicians reach out to those people as they were being discharged from the hospital, to make sure they had proper care outside the hospital and discharge planning, like a caregiver, a loved one, or a family member who could help them, so they wouldn't go back into the hospital prematurely on a 15-day or 30-day readmission, as an example.
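As a rough illustration of the kind of model Carl describes, here is a minimal sketch of a logistic-regression readmission classifier. The file and column names are hypothetical stand-ins, not the actual Blue Cross Blue Shield data or features:

```python
# Minimal sketch of a readmission classifier like the one described above.
# The CSV file and column names are hypothetical, not the actual data set.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("encounters.csv")  # hypothetical extract of discharge records
X = pd.get_dummies(df[["age", "gender", "race", "facility_id", "prior_admissions"]])
y = df["readmitted_30d"]  # 1 if readmitted within 30 days, else 0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))  # ~0.70 in the project described

# Route high-probability patients to clinicians for discharge planning.
risk = model.predict_proba(X_test)[:, 1]
print("high-risk patients:", (risk > 0.7).sum())
```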

Host

What were the properties of the people who had a tendency to be readmitted? What characteristics was the model able to identify about them? Because, not to be cliché, it's such a personal thing: how do you identify whether it's a person's own tendencies, or an issue with the way the care was delivered? That also seems difficult to do.

Carl Neely

Yeah, that's a great question. In the analysis, you can think of the data set as different classes of variables: we looked at the demographics of the individual, the historical medical records, the procedures and services at the hospitals, and the hospitals themselves, the facilities they were in. So we looked at a combination of those variables, and we found that some of the stronger predictors of readmission were race and gender, and the hospital facility was another strong predictor. What we didn't find predictive were the treatment plans and procedure codes: even though they're standardized, like you pointed out, it depends on the individual and how they respond to the treatment. So really, the demographics, again gender and race, and then the hospital facility tended to be the stronger predictors of readmission. We did not look at mental health records because, under HIPAA, those are usually kept separate unless you have permission: your physical health and practitioner information is separate from your mental health information unless you get permission to combine them. And that's actually one of the challenges with some of our HIPAA laws too.

Host

Wow. I would assume you would have preferred to have that data. Do you think it would have given you a higher level of accuracy? Because 70 is honestly quite high, but do you think the mental health data would have gotten you to maybe 80?

Carl Neely

I think so. I think the medical professionals, really the doctors and the nurses too, would prefer that, but again, you have to get permission to apply it to your population. If you don't have that permission, you just can't. But where I worked we had a principle called medical necessity: you can always grab as much data as needed, for medical necessity, for the application or use case you want to pursue.

Host:

Oh, okay, fair enough. Another thing I wanted to ask you, switching back to more of a leadership perspective... well, even before I do that: if I had to guess which gender was more likely to be readmitted, I would guess male.

Carl Neely:

It was actually women.

Host

No way.

Host:

I thought it would be men, really. Is there a reason for that, or is it just the outcome of the data?

Carl Neely

It was hard to get down to the root cause of why it would be female. In healthcare you have different types of what we call business population segments, and we were looking at Medicaid and Medicare populations. It turned out to lean one way: there were more women in the data set as well.

Host

Right, right.

Carl Neely

So you have that sampling bias, but gender was still a strong predictor in the data as well.
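A quick way to check for the sampling bias Carl mentions is to compare base rates: how each group is represented in the data set versus its readmission rate. A sketch with the same hypothetical columns as above:

```python
# Compare base rates: is a "strong predictor" partly an artifact of who is
# in the sample? Column names are the same hypothetical ones as above.
import pandas as pd

df = pd.read_csv("encounters.csv")

# Share of each gender in the sample vs. readmission rate within each gender.
print(df["gender"].value_counts(normalize=True))      # exposes over-representation
print(df.groupby("gender")["readmitted_30d"].mean())  # outcome rate per group
```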

Host

Yeah, you still need to—you know, pay attention to that because you can't—yeah, you just can't. It's—sorry, it is what it is.

Carl Neely

Yeah, fair enough.

Host

Okay, so stepping out of that and moving into your current experience, back to the idea of being a senior manager: do you have people under you? And can you talk about leadership? Even if you don't, there's engaging with people in other departments. A big part of this is how you explain what you do to other people, and how they understand the value of it when they're obviously not trained in it. How do you communicate your ideas?

Carl Neely

What's fascinating about being in the senior manager role here at TAMKO is that it's really a combination of setting the vision and the strategy, and the hands-on aspect, so you get a little bit of both. I have a team of about three people that reports to me, and really what that looks like is: when we lay out the strategy, what are the tool sets and technologies we need to pursue this type of strategy? So it's about working with our senior leaders; in this case I work with executives almost on a daily basis, laying out, okay, what is the business case to pursue this, and do we have the right use cases? And then, not only that, the equally important piece: do we have the right skill sets to pursue the strategy?

So a lot of it is also upskilling. I run the community of practice, for example, to upskill my broader community of people that use our AI/ML tools and platforms. We use a tool called RapidMiner, for example, an AutoML tool, and we train our community-of-practice folks, engineers as well as data analysts, on how to apply it day-to-day to solve real business problems. So it's a combination of the hands-on work and laying out that strategy: what do we need to do from an investment perspective, and are we focused on the right use cases?

And you probably understand this: depending on the research you look at, 50 to 60 percent of AI/ML projects and solutions just never make it to production, for whatever reason. That's a high failure rate if you think about it like that, but it's the experimental nature that you have to deal with, and you have to understand that this is a learning process. So you have an element of cultural change there too. Some people may be used to saying, well, our reports are successful. Well, we're not dealing with reports here; we're dealing with highly experimental AI/ML solutions. Can you find the right signal in the data? Do you have the right type of data to pursue the solution you want? So it's a lot of that, and it's fun and it's challenging.

And really, like I said, our aspiration is autonomous operations: how do you build those continuous processes that continuously feed data from an environment, analyze it, make a decision based on that, move equipment independent of people, and then take that information and self-assess and reflect, just like you and I would, to make sure that was the right decision?

Host

So what would you consider the cycle to be for this? Nine months, a year, to move the organization (and obviously it's different for different organizations) toward a place where it's prepared to make the implementations real? Because getting the data is one thing, and it's a huge part of it: getting the data, building the models, testing them, building the infrastructure to test in a way that doesn't affect business processes. That's what I'm understanding from what you're explaining. And then doing that over and over again until you get an objective result, and then cascading that result down. You're almost building a pipeline of outcomes throughout the organization, and a lot of those pieces you don't necessarily control: the front end and the back end of it, essentially. Or you can flip it the other way, the back end and the front end, because the front end is the outcome. On both of those sides you need a lot of support, and that takes time; then building it takes time, and testing it takes time. So for people who are trying to do this, at what point should you have an expectation that something should have an outcome, and not get frustrated? Because that does happen a lot in large organizations; people do get frustrated about not having support.

Carl Neely: 

Right, excellent question. And really, having worked in various organizations and verticals, healthcare as well as manufacturing, the answer is: it depends. So let me give you an example in healthcare, and then I'll give you an example in manufacturing. In our journey at Blue Cross Blue Shield, we're talking about how you basically institutionalize these solutions and help the organization understand how to apply them. In healthcare, the standards are very high, so that iteration cycle and process has to be very good; otherwise the doctors, the clinicians, the nurses are not going to trust it and accept it, and it's not going to be something we can apply to help inform a person's healthcare choices and decisions.

What I'm happy to say, though, is that Blue Cross Blue Shield has been on that journey for quite some time, so they understand risk, they understand statistical analysis; it was just moving them in the direction of applying more powerful tools like machine learning. When I worked there, we worked on a natural language processing system to automate the reviews of medical records, and we had the foundational pieces in place: we had our data, and, if you think of the agile model, we had a product owner who reported up to the executive team.


Yeah, exactly. And so in that case we had the full foundation: we had the data and the data engineering, we had the IT application team, we actually had a startup vendor too that was responsible for building the solution for us, and then we had a data science team. So we had all the pieces in place, and there wasn't a really heavy cultural lift we had to put in place for that. We were able to iterate on that solution and actually get a working prototype in place within literally four months, and that is lightning speed.

Host:

Incredible. Yeah, lightning speed.

Carl Neely:

Lightning speed, whereas at TAMKO it's a greenfield; we're just starting out. Some of the foundational things we had to put in place: we had to identify what kinds of tool sets we need, and we went through a proof of concept looking at different AI/ML platforms and engines and cloud-based platforms. We considered Google, Microsoft, SAP, and a few others. So you have the time there, and really, to get the foundation in place, you're looking at your data foundation and your technology foundation, and also building the business cases to get these tools, because in order for us to execute the strategy, we need these types of tools. That in itself took well over a year. And once you implement at least the foundational tools, now you've got the upskilling piece that you have to put in place.

So folks have a new tool: you have to upskill people on the IT team, upskill the data analysts and the engineers we work with, those types of things, and that takes time too. And then the iteration cycle in that space is really about: now that you have it, do we have the right data, and do we have the right use case to focus on? It's so easy to say, hey, let's pursue predictive maintenance, but if you don't have run-to-failure data, you're not going to get good outcomes, so you've got to do those types of assessments up front. The data in manufacturing is a little bit different, but the iteration cycle is really about whether you can find the signal at that point. Those cycle times for us look more like anywhere from three to five months, and then we have criteria we call kill criteria: if we can't find the signal, if we identify that we don't have the right data, or if the solution is not better than what we have today, then it makes no sense from a business perspective to pursue it.


Host: 

So, signal. Correct me if I'm wrong: I'm understanding signal as the identification of a net positive outcome. Or is it something else?

Carl Neely

Signal is specifically the relationship in the data that you're looking for. You're trying to find a signal so you can predict something, or classify something, in the data. In the industrial space, we're dealing with all these different sensor devices, heat sensors, vibration sensors, those types of things, and you're bringing them together and trying to find the relationships between them. That's what we mean by signal.

Host

Got it. Okay, so are you basically looking for correlation? Which makes sense because it is an ML model, right?

Carl Neely:

Yeah, right.

Host:

I almost wanted to go down the road of: how do you gather all this? What are the tools for gathering the information from all these IoT devices? But I'm assuming that's all there, and you just tie into an existing data pipeline system that people built before you arrived. Or did they find the need to bring in AI expertise first?

Carl Neely:

Yeah, so we do have our manufacturing execution systems that collect this data from the different sensors and devices, and that tells you, in general, how a piece of equipment is performing relative to the production process. So we do have that in place. However, it's not contextualized in a meaningful way; it's just data that's collected. Like I said, you don't understand the relationship of this sensor to that sensor. We have very smart process people who understand the relationships in general, because they're responsible for running the process, but depending on the use case you want to pursue, you really don't know if you've got the right relationships, or if you can model those relationships. And you talk about correlation: historically you look at linear correlations, but a lot of relationships are not linear, especially when you're dealing with different... almost unknown factors, yeah.
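Carl's point that linear correlation can miss real relationships is easy to demonstrate. A small sketch with synthetic data standing in for two plant sensors; mutual information picks up a dependency that Pearson and Spearman scores miss:

```python
# Linear correlation can miss nonlinear sensor relationships.
# Synthetic data stands in for two plant sensors.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
temperature = rng.uniform(-3, 3, 1000)                 # hypothetical sensor A
vibration = temperature**2 + rng.normal(0, 0.1, 1000)  # nonlinear function of A

print("Pearson: ", pearsonr(temperature, vibration)[0])   # near 0 for a U-shape
print("Spearman:", spearmanr(temperature, vibration)[0])  # also near 0 here

mi = mutual_info_regression(temperature.reshape(-1, 1), vibration)
print("mutual information:", mi[0])                       # clearly above 0
```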

Host:

Right.

Carl Neely:

You're dealing with different equipment manufacturers from different decades, really, because some of this capital equipment is 50 or 60 years old, and you have these external sensor devices. So how do you bring it all together in a contextualized, meaningful way to find the signal, where you can say: yes, these are great predictors, I can predict the temperature 10 minutes into the future using a time series model, so that if the temperature is going to be off, I can make an adjustment to a machine and avoid it? It's like you and I driving down the street: we see the pothole coming up, and if I see it, I can avoid it. And that's one of the strategies that we employ as well.
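The "see the pothole ahead" idea amounts to short-horizon time-series forecasting. A minimal sketch, with a synthetic series standing in for a temperature sensor sampled once per minute; the spec limit and model choice are illustrative, not TAMKO's actual setup:

```python
# Short-horizon forecasting sketch: predict temperature 10 minutes ahead from
# the last 30 minutes of readings. Series, spec limit, and model are stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
temps = 200 + np.cumsum(rng.normal(0, 0.5, 5000))  # synthetic 1-sample-per-minute feed

LAGS, HORIZON = 30, 10
X = np.array([temps[i - LAGS:i] for i in range(LAGS, len(temps) - HORIZON)])
y = temps[LAGS + HORIZON:]

model = GradientBoostingRegressor().fit(X[:-500], y[:-500])
pred = model.predict(X[-500:])

SPEC_HIGH = 260.0  # hypothetical upper spec limit
print(f"{(pred > SPEC_HIGH).sum()} of the last 500 forecasts breach the upper spec")
```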

Host

That's quite a granular endeavor. There's a growth of complexity with every single sensor you add, and even under the assumption that not all sensors need to integrate with each other, that work can grow exponentially, especially if each sensor is doing a different thing.

There's a lot to talk about there, but I do want to move on to data privacy, because you're in a unique position to talk about it. It's a concern for a lot of people, especially with some of the new developments in AI. And because you worked in healthcare, where data privacy is absolutely essential to making people comfortable, it's a big part of the organization, both for customers and for the people working there. So could you talk a bit about that, your perspective and your experience, and explain to the audience what that was like?

Carl Neely

Yeah. Data privacy in the healthcare space is really about complying with HIPAA laws when it comes to your healthcare and medical records and history. So when we pursued our AI/ML solutions, like the natural language processing system we built to review records, we had to be HIPAA compliant, and that's an additional constraint. In some cases, when we build the models, we anonymize the data: we've got to strip out the names, those types of things. And depending on the use case, we have to treat what we call protected attributes differently. Protected attributes are the ones I mentioned, like race, age, and gender. Those are allowed in particular use cases you're trying to solve for, because they are important, but you can't use them in a way that's biased or discriminatory in your data set. So you have to deal with those factors, especially when it comes to healthcare data.

When you're talking about data privacy in the manufacturing space, it's more about proprietary secrets, processes, and recipes. Coca-Cola, for example, has a proprietary recipe; it's a secret, it's protected, and you always have to have security around that as well. There are security protocols we have in place to protect that data. On the HIPAA side, there are protocols like FHIR (pronounced "fire") and HL7, integration protocols that also add a layer of protection on top.


Yeah, these are interface protocols with levels of encryption on top of them. And then internally, from a process perspective, in the healthcare space you have what they call medical necessity: you can only use the data that's medically necessary for the goal you're pursuing, nothing more, nothing less. So in the case of the automated medical record review, you can't go back and look at a person's total history; you only look at the history relevant to the procedure or treatment in order to make a medical decision. And we had to build that into the system.
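The two moves Carl describes, stripping direct identifiers and scoping a record pull to medical necessity, might look roughly like this in code. The field names are hypothetical, not an actual EHR schema:

```python
# Sketch of the two privacy moves described: de-identify before modeling,
# and scope a record pull to what's medically necessary for the case.
# All field names here are hypothetical.
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "ssn", "address", "phone", "email"]
PROTECTED_ATTRIBUTES = ["race", "gender", "age"]  # allowed, but audited for bias

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers; protected attributes stay but are flagged for review."""
    return df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])

def minimum_necessary(df: pd.DataFrame, procedure_code: str, years: int = 2) -> pd.DataFrame:
    """Return only records relevant to the procedure under review,
    not the patient's full history (the 'knee surgery' example)."""
    cutoff = pd.Timestamp.now() - pd.DateOffset(years=years)
    related = df["procedure_code"] == procedure_code
    recent = pd.to_datetime(df["service_date"]) >= cutoff
    return df[related & recent]
```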

Host: 

Yeah, exactly, so it's not a broad swath of information, just what’s relevant to the treatment.

Carl Neely: 

Right, right. And sometimes a person needs more information to make a medical decision using that system. They have the option to request that information and they can get it, but again, it only has to be medically necessary. You just can't open up, you know, if I'm going in for knee surgery, you can't open up my 50-year history of medical records and look at all that. All you have to look at is what is relevant to making the decision for my knee case.

Host: 

That's a very tight scope, which makes sense given the constraints in healthcare. Now, I want to get into more of the technical aspects of what you do on a daily basis, the ones you can speak about. In my mind it's the architecture and things like that, but even something like the goals you're trying to achieve with the models, the same way you spoke about your project in healthcare, and then how that's feasible. You talked about sensors as an introduction: you have these sensors and you have to make sense of them. But why do you have them, and what are you trying to sense? Can you build that picture, and then we can talk about how you're making all this possible?

Carl Neely: 

Yeah. So really, looking at the goal of implementing any type of autonomous agent, what you want to do is continuously sense your environment. Think of the example of an autonomous vehicle: you have vision sensors, heat sensors, touch and capacitive sensors, and you want to poll all of those continuously to understand what's going on in your environment in real time, dynamically. It's those dynamic processes, and that's the same way we look at a production environment.

Host: 

Yeah, absolutely.

Carl Neely: 

So, we have all these different sensors, and we want to understand based on that, hey, is the healthiness of our process optimal to actually build a good product? And so, that's what you want to look at from that perspective, and that's what we focus on.

Host: I see.

Carl Neely: 

So when you have all that information, you're able to detect: okay, we know when the process is operating this way, it's healthy, it's at optimal function, versus not. And so now you have applications of anomaly detection within it. And then, based on that anomaly, or based on any variations we see in the sensors or processes we're looking at, we can take action to address it in real time.

Host: 

Got it. So you're dealing with data at a very granular level, milliseconds, if not faster?

Carl Neely: 

Exactly. We're talking seconds and sub-seconds, milliseconds, which is very fast, because that's what a lot of production environments require. So you're bringing all that information in, and you've got to process it through data pipelines: taking the data, feeding it in, cleaning it, stripping out the noise, if you will, the erroneous data. Especially in an industrial environment, sensors can produce erroneous data; they can overheat and capture bad readings, so you've got to strip all that out and get to the core of the signal rather than the noise. So we're building data pipelines, and these pipelines have to run very fast. Some of the technologies are fascinating; I'm still amazed at some of the solutions out there. We're sending this information to secure cloud environments and processing it in seconds or sub-seconds. The pipelines clean the data, and after you clean the data, you send it to a model, so the model can look at that information and make a decision. Once the model makes its decision, you're sending commands and signals back down to the production environment to move pieces of equipment.

What's fascinating is that those pieces of equipment may be decades old; in some cases they're not fully digitized. So in some cases you're also, from a foundational perspective, having to go through a digitization effort with your older capital equipment, because it wasn't designed to be digital. So that's really the cycle: taking data in real time, continuously, in seconds and sub-seconds; feeding it into the data pipeline you built, which could be on-prem or in the cloud (in our architecture we send it to the cloud); processing that information; sending it to a model for quick, real-time inference; and sending signals or commands back to the pieces of equipment to control them.
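The loop Carl walks through, sense, clean, infer, actuate, has a simple general shape. A minimal sketch with placeholder transport and a trivial stand-in model; none of this is TAMKO's actual stack:

```python
# Shape of the loop described above: ingest, clean, infer, actuate.
# Transport and model are placeholders, not an actual production stack.
import random
import time
from typing import Optional

SPEC_HIGH = 260.0

def read_sensors() -> dict:
    # Placeholder: in production this would consume from the plant's
    # pipeline (e.g., an MQTT or Kafka feed off the MES).
    return {"temperature": random.gauss(220, 30)}

def clean(reading: dict) -> Optional[dict]:
    # Drop physically impossible values from faulty or overheating sensors.
    t = reading.get("temperature")
    return reading if t is not None and -50 < t < 1000 else None

def infer(reading: dict) -> str:
    # Stand-in for the cloud model endpoint; returns an action.
    return "reduce_heat" if reading["temperature"] > SPEC_HIGH else "hold"

def actuate(action: str) -> None:
    # In practice the command is translated down to the PLC via middleware.
    print("command:", action)

for _ in range(50):  # bounded loop for the sketch; real systems run continuously
    reading = clean(read_sensors())
    if reading is not None:
        actuate(infer(reading))
    time.sleep(0.1)  # sub-second cycle time
```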


Host:

There are lots of questions in that pipeline, but let's start from the beginning, with this idea of erroneous information, especially when you're coming from an analog data set. How are you able to determine what is a real spike in the information versus an error? What is the boundary of error, the range the data should be within? Not the specifics necessarily, but what makes you decide the information is incorrect?

Carl Neely:

Excellent question. Typically, what we do is look at the distribution of the data: we know what a healthy sensor or machine looks like from the distribution of its data. And typically in the industrial space you have specifications, upper and lower specs, so you know the boundaries where that information needs to lie. When readings get out of tolerance with those specifications, you know you've got anomaly problems. So we can basically model the distribution and behavior of the data, the profile of the data, and make the determination based on that.
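Both checks Carl describes, hard spec limits plus a statistical profile of healthy behavior, fit in a few lines. A sketch with synthetic "healthy" readings and illustrative spec limits:

```python
# Two checks: hard spec limits plus a learned "healthy" profile of the data.
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(220, 5, 10_000)  # hypothetical readings from a healthy machine
mu, sigma = healthy.mean(), healthy.std()

SPEC_LOW, SPEC_HIGH = 200.0, 240.0    # illustrative upper/lower specs

def is_anomalous(x: float, k: float = 4.0) -> bool:
    out_of_spec = not (SPEC_LOW <= x <= SPEC_HIGH)
    out_of_profile = abs(x - mu) > k * sigma  # far outside the learned distribution
    return out_of_spec or out_of_profile

print(is_anomalous(221.0))  # False: in spec and in profile
print(is_anomalous(275.0))  # True: out of tolerance
```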

Host

Okay, and that is quite helpful, honestly. So, moving on to where you're putting the data: you just said two things, on-prem and in the cloud. Is there a reason why you prefer or don't prefer the cloud? Because I could also see a privacy issue with the cloud: you're outsourcing your privacy in that context. If organizations are comfortable doing that, and they have some sort of promise of a positive privacy outcome, then fine. But there are also situations, especially in manufacturing, where it may just be faster to do it on-prem. Whether those are the right questions, I don't know, so correct me if I'm coming from a different angle than what's realistic.

Carl Neely

No, I think you hit on something very important around these architectural decisions, like you said, and privacy as well. In the case of privacy, it really depends on what type of data you're sending to the cloud. You're not sending your proprietary secret processes or your product design specifications; those are the types of things manufacturers typically will not send to the cloud. And even when you send, let's say, your sensor production data to a cloud environment, it's typically a private tenant in that architecture: a private tenant with a secure tunnel back to your company. So you have those pieces in there, and typically most companies won't put a lot of the secret, proprietary information in those cloud environments. But privacy, to your point, is a real issue, because security is a function of time. Let me just say that: if you've got hackers out there, they hack, and it's just a matter of time; you've just got to keep up. And then you can use AI in the factories to detect fraud and hackers and attacks. We have applications, and a lot of other big organizations have applications, especially in the financial industry, where you can detect that fraud and detect those attacks.

Now, coming back to your question on whether it makes sense to be on-prem versus the cloud: it really depends on the use case. You can push compute to the edge, and the edge is typically on-prem, because you really need that faster response time. And depending on how you architect it, you can do what they call federated learning. With federated learning, you have your main model, let's say in a cloud environment, doing the main inference, and then you have copies of your model on edge devices, let's say on-prem. The ones on-prem can run offline, but as they make inferences and go through the retraining process and learn, they connect back to that main cloud version of the model, and they share each other's learning. That's a very complex architecture and design, and I haven't seen a lot of companies really implement that type of design yet, but it's one way, if you need that faster sub-second on-prem response time you're speaking of, to have it right there at the edge, local.

Although in my experience, with the solutions we ran in the cloud, even the healthcare solution, we're talking seconds for response time.
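The federated learning design Carl outlines can be sketched in miniature: edge copies train locally, then a sync step averages their weights back into the main cloud model. Real deployments would use a framework built for this; the toy "training" step below is just a stand-in:

```python
# Miniature federated averaging: edge copies train locally, then sync
# their weights back into the main cloud model. Toy numbers throughout.
import numpy as np

rng = np.random.default_rng(2)
cloud_weights = np.zeros(8)  # the main model living in the cloud tenant

def local_update(weights: np.ndarray) -> np.ndarray:
    # Stand-in for an edge device training offline on its own sensor data.
    return weights - 0.1 * rng.normal(size=weights.shape)

for _ in range(5):  # each sync round
    edge_models = [local_update(cloud_weights) for _ in range(3)]  # 3 lines/plants
    cloud_weights = np.mean(edge_models, axis=0)  # average the shared learning

print(cloud_weights)
```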

Host

So, how do you make that delineation?

There's a bunch of problems with processing power and all that. Maybe manufacturing has less of a problem because you quite literally have the space: if something needs to be big, it's big, and it's on-prem, and that's what we need, right? But even with that, because there's so much data, there are limitations too, because you also need space for the actual machines. So there's a bunch of complications there. How do you, as the person in charge, draw the line on how much compute is local and how much is in the cloud? Because cloud is pseudo-infinite and local isn't, right?

Carl Neely:

There's a bunch of questions there, and I like that; I'll try to answer them succinctly.

Right, so it really depends on a couple of factors. Number one: how much existing compute power you have on-site. We look at that and determine whether it makes sense from a capital investment perspective (and this is obviously the management side of working in the AI space) to bring more compute on-prem to support the application, versus doing it in the cloud. Cloud is a utility, usage-based model, which, depending on the cloud vendor you use, can be complex in its own right, and there can be a lot of hidden costs there too. So those decisions factor in: does it make sense from a cost perspective? Do we have the skill sets to run it on-prem? A lot of these newer servers require different kinds of skill sets, whether you're running on a Linux operating system or something else; and running in the cloud is a different skill set again, depending on whether you buy a hosted service model or build and run your own tenant in the cloud, whether it's Azure or AWS, what have you. So you've got to factor in those skill sets, the costs, and what you have on-prem from a compute perspective. Sometimes it just may not make sense to add more compute on-prem, because it's cheaper to do it in the cloud, and you can buy a hosted service model, or you have the skill set in-house to build your own tenant and run it in the cloud. The other complexity, when you're looking at the feasibility of these things, is integration.

Integration has always been a challenge, and it's a challenge in the AI/ML space too. You're dealing with hybrid architecture models: if you have a hybrid public cloud architecture, you've got to look at the integration costs and the integration technologies. Some of them are mature, and others are not mature at all, and do you have the skill set to support them? So those are some of the factors when you talk about the architecture and the compute. The good thing with the cloud, like I said, is you can pay as you go, but there can also be hidden costs there that you have to be aware of.
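The on-prem versus cloud decision often comes down to break-even arithmetic of the kind Carl describes. A toy sketch; every number here is hypothetical, and a real comparison would also price the integration, skills, and hidden egress costs he warns about:

```python
# Toy break-even arithmetic for on-prem vs. cloud. All numbers hypothetical.
ONPREM_CAPEX = 250_000       # servers + installation
ONPREM_OPEX_MONTH = 4_000    # power, admin, support
CLOUD_COST_MONTH = 12_000    # usage-based compute + storage + egress

for months in (12, 24, 36, 48):
    onprem = ONPREM_CAPEX + ONPREM_OPEX_MONTH * months
    cloud = CLOUD_COST_MONTH * months
    winner = "on-prem" if onprem < cloud else "cloud"
    print(f"{months:>2} months: on-prem ${onprem:,} vs cloud ${cloud:,} -> {winner}")
```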

Carl Neely:

Now, on Nvidia, I just want to mention this too: Nvidia has a computing model where they actually do the compute on-prem for you, and they take care of it all. And they have studies showing that this on-prem compute that Nvidia hosts for you is actually more cost-effective than running in the cloud. So I thought those were some interesting studies and models they have out there.

Host: 

Right, right, especially with the amount of data you would have to send off-prem for it to work, in principle, in something like manufacturing.

Exactly. And with everything it seems like you're trying to do, the bar for real-world simulation is very high, because you're literally sensing reality itself. A lot of people doing ML models are basically playing with user inputs, but you're really playing with reality, and there's a lot of data there to send over the network; I would assume it could be terabytes of data periodically, and that's quite high. So for Nvidia to bring it on-premise... I didn't know they had a service that's local for larger organizations.

Carl Neely:

Yeah, I thought it was interesting too. And they say, hey, it's cheaper than your cloud; they did some studies.

Host: 

Wow. Yeah, makes sense. Makes sense.

Host:

You know, what was great is you talked about integration and the concerns around integration, and you alluded to that a little earlier with the older machines: what it would take for those machines to be changed over to, or at least integrate with, a digital interface, and how challenging that can be. Sixty-, seventy-year-old machines, where in a way the people who know the most about them are no longer here, or possibly aren't. Of course there are manuals, but honestly, people don't really like manuals, and if they do use them, it takes a while. So there's lots of complexity there.

There's actually a story about a guy (he's very successful; I forget his name) where the way he got into the market and was able to elevate his career is that he worked at a store that managed and took care of HP printing machines. People asked him how he was able to accelerate his business so quickly, and he said, "I read the HP manual. When someone asked a question, I would go read the manual at night and get the answer, because nobody wants to read manuals." Anyway, with that anecdote as a baseline: how do you actually make this possible with these older machines? Is it really challenging to go into something that was made with or without a manual and understand how to get it to give you the information you need? There's an expertise issue there, just in the basics you need to understand what's possible, and then there's the actual reverse-engineering of the problem to get to a solution. If you could speak to that, that'd be great.

Carl Neely:

Yeah, that's a great question. Typically, when you're talking about the industrial space, a lot of the older capital equipment has been controlled through things like PID loops and PLCs (programmable logic controllers), and those are not devices that have open integration protocols. So you typically have translators in between to help you. For example, we couldn't send a REST API call directly to a PLC; it just wouldn't work, because the integration is not there. So you've got to have these middle-layer translators, and some of the IoT (Internet of Things) vendors out there provide the technologies that integrate these newer API calls and interfaces back to the older equipment. And what you're hitting at, Chris, is what we call...

...the process of digitization, right? How do you digitize some of your older capital equipment, things like that? That's a huge concern for AI in the factories, and so we employ solutions like digital twins to help in that process.
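A middle-layer translator of the kind Carl describes might be a small HTTP service that turns a REST call into a register write on a legacy PLC. A sketch assuming the Flask and pymodbus libraries; the PLC address, route, and register map are hypothetical:

```python
# Sketch of a REST-to-PLC translator service. Assumes the Flask and pymodbus
# libraries; the PLC address, route, and register map are hypothetical.
from flask import Flask, jsonify, request
from pymodbus.client import ModbusTcpClient

app = Flask(__name__)
plc = ModbusTcpClient("192.168.0.50")  # hypothetical legacy PLC
REGISTERS = {"setpoint_temp": 100}     # hypothetical register layout

@app.post("/plc/setpoint")
def set_setpoint():
    value = int(request.json["value"])
    plc.connect()
    plc.write_register(REGISTERS["setpoint_temp"], value)  # Modbus write
    plc.close()
    return jsonify(status="ok", written=value)

if __name__ == "__main__":
    app.run(port=8080)
```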

Host

Individual twins? Sorry, could you say that again?

Carl Neely

Digital twins. Digital twins are basically virtual replicas of your production environment. A twin takes the data and contextualizes it in a very meaningful way, so you can have a replica of your product, your process, and different pieces of equipment, and understand their behavior from a data perspective. Those platforms and tools also give you the integration technology, so now you're interfacing directly with the twin versus interfacing directly with a device or some translation layer. These are pretty powerful platforms: Siemens, for example, has digital twins in the industrial space, and there are other products like Braincube and another one called TwinThread. Anyway, that gives you the digitalization necessary so that when we're talking about these AI/ML solutions, you can interact with the data without having to build a lot of proprietary interfaces down to these different machines. So you're digitizing your operations, in effect.

Host

Right, and that saves you a lot of time?

Carl Neely:

Exactly, exactly.

Host:

Yeah, because in the web world it's called... sorry, go ahead.

Carl Neely:

No, I was just saying: like I said, a lot of the old capital equipment is still useful and you don't want to replace it, but now you've got these translation layers. There's another challenge with that, though; we'll talk about that in a second...

Host:

I was gonna say, in the web world it's called a dev or staging environment, and it's the same sort of thing. With a digital twin you have a fake environment where you can play with the information, and that way you don't have to shut down the company for 30 minutes because you want to run a test. There are parallels all over the place: within engineering, you have to test against fake environments.

Carl Neely

Yeah, though a digital twin is a little more powerful, in the sense that it brings the digital aspects together in a contextualized way. For example, you have different sensors taking different data at different frequencies, and the twin integrates that in a contextualized, meaningful way, so you don't have to deal with the lags across the different sensors. Not only that, it gives you the ability to simulate: you can simulate your process, make process changes, run what-if scenarios, and see how that impacts your product quality or the efficiency and optimization of your production process. And it comes with a whole suite of analytics. On top of that, the data is integrated in a contextualized, clean, denoised way, so the information can be fed into an AI/ML model quickly for inference. So you can reduce the data cleansing and data prep phase of the AI/ML development process, versus having to clean it and build a separate data pipeline for it. There are some clear advantages to using those platforms.
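The what-if capability Carl describes can be pictured with a toy process model you can perturb without touching the real line. The relationship below (defect rate as a function of line speed and oven temperature) is invented purely for illustration:

```python
# Toy what-if simulation of the kind a digital twin enables. The process
# model here is invented for illustration, not a real production line.
def simulate(line_speed: float, oven_temp: float) -> dict:
    defect_rate = 0.02 + 0.001 * max(0, line_speed - 50) + 0.002 * abs(oven_temp - 220)
    good_units_per_hr = line_speed * 60 * (1 - defect_rate)
    return {"defect_rate": round(defect_rate, 4),
            "good_units_per_hr": round(good_units_per_hr)}

baseline = simulate(line_speed=50, oven_temp=220)
what_if = simulate(line_speed=60, oven_temp=225)  # push speed: worth the defects?
print("baseline:", baseline)
print("what-if :", what_if)
```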

Host

Yeah, absolutely, that's a lot of time saved, because the data cleansing is its own set of issues, right? Cleaning up that pipeline, making sure you have the right information, making sure, like you alluded to, that there are no errors. And errors come spontaneously in these environments: it's not a result of code issues, it's that the sensor itself has an issue, and that can mess up your data. Because getting these models to actually give you results takes time (you have to train them over a period of time), and any error is hard to chase down to find what caused the issue. It must be quite the challenge. But you did allude to the idea that there are other challenges that come with this, so if you'd like to speak about that, I'd like to hear it.

Carl Neely

Yeah. Looking at it from a foundation perspective, the AI/ML kind of sits on top of the digital foundation. One of the challenges in the industrial space, when you have capital equipment from 50, 60 years ago, is that you can put translation layers in there to get the digital information, but you've got to keep in mind that the equipment can only do so much. At that point you have the actual limitations of the equipment itself: a good translation layer doesn't change the capabilities of the older equipment.

Host: 

So if we want the equipment to do certain things, it may not have, let's say, the memory to do it, right?

Carl Neely: 

Right. If we're sending data back to it to send a command, it just may not have the memory. For example, a PLC may only have, let's say, 16K to work with, so you have those types of limitations to deal with. And sometimes that necessitates, in this role, saying: hey, do we have a business case for new capital equipment? Because now we have a barrier where we really can't get to that aspirational aim of autonomous operations; we have, quite literally, physical equipment limitations.

Host: 

It's hard to say, but, and this is more on the managerial side, since you spoke a little to the justification of capital costs: is there a line where you say, okay, we absolutely must do this? Because this is not a small thing to replace; it's a multi-million-dollar piece of equipment that's been working for 60 years. How do you say, hey, you need to spend X amount, this is something you absolutely must do, when they're thinking, "I really don't want to pull 10 million dollars from the account"? That's a very hard conversation. It's not "I just want to hire another person." That's a tough thing to sell. So how do you even conceptualize and justify that? I would assume with lots of reports, first and foremost.

Carl Neely: 

Two pages? More like three, four... 15, 18 pages of reports, which by itself takes quite a few weeks and months. And actually, when you're dealing with that level of capital investment, it should, right? Because we're talking about a major investment. So one of the things I try to do is work with the executives to show them: okay, this is the goal, this is what we want to get to, and in order to get there, these are the types of tools and capabilities we need in place. You can build a business case purely on the new capabilities that get you to your strategic aim, and I have done that a lot of times throughout my career. A lot of the time I've worked with newer technologies, and so the return is not that clear.


So it's high risk, high reward, and those are the really hard business cases, because if you're dealing with finance and accounting, they want to know: hey, what's the cost savings? And that may not be clear. We all understand that, but at the same time we also know that in order to stay viable in your market or industry, you have to place strategic bets, and we know some of them are not going to pan out. Most executives understand that. And then you have to deal with the culture itself.

Host: 

Do they have a tolerance for high-risk, high-reward?

Carl Neely: 

They're more conservative when it comes to these types of investments, so you have to manage the dynamics of all that in order to be successful. And I'll tell you, I can probably point to one time in my career, once in 20-plus years, that I took a business case through and the actuarial team audited it and said, "This is solid, go." They gave the thumbs-up approval, the chief medical officer was on board, and we took off. That was the case of the automated medical record review process. We actually reviewed those numbers, and when you scale it out, it came to basically a 46% return on investment over time. We were able to bend the cost curve.

Host: 

What does that mean, bending the cost curve?

Carl Neely: 

Typically, when you do more reviews, utilization reviews as they're called, for hospitals and employer groups, you need more people to do them, because clinicians do the reviews. It's a linear relationship: the more reviews, the more people you need. We were able to do it with less. We took that line and bent the curve, and over three years I think it was projected to be 3.8 million dollars in savings.
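
To make "bending the curve" concrete, here is a back-of-the-envelope sketch in Python. Every number in it is invented for illustration; these are not the actual figures from the project described above.

```python
import math

REVIEWS_PER_CLINICIAN = 2_000   # hypothetical manual reviews per clinician per year
COST_PER_CLINICIAN = 90_000     # hypothetical fully loaded annual cost
PLATFORM_COST = 250_000         # hypothetical annual cost of the automation platform

def manual_cost(reviews: int) -> int:
    """Linear relationship: more reviews means proportionally more reviewers."""
    return math.ceil(reviews / REVIEWS_PER_CLINICIAN) * COST_PER_CLINICIAN

def automated_cost(reviews: int, automation_rate: float = 0.7) -> int:
    """Automation handles a share of the reviews; humans cover the remainder."""
    remaining = int(reviews * (1 - automation_rate))
    return manual_cost(remaining) + PLATFORM_COST

# Savings grow with volume because one cost curve is linear and the other is flatter.
for volume in (10_000, 50_000, 100_000):
    savings = manual_cost(volume) - automated_cost(volume)
    print(f"{volume:>7,} reviews/yr: savings ~ ${savings:,}")
```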

Host: 

Yeah, nice. So that was a clear-cut case.

Carl Neely: 

Yeah, that was the clear-cut case. Sometimes, when you're dealing with new capabilities, you're arguing from a different place. Think about roads and highway systems, right? How do you do a return on investment on that? You need it, you understand it. Just think: if you don't invest in those bridges, those roads, that infrastructure, it's going to come back to haunt you anyway. So sometimes you've got to justify those foundational pieces just like that. And it takes some time. You're right, it's not an easy sale. People look at different things, and you have to consider the dynamics of the environment and find a sponsor who really believes and sees the vision, someone credible and powerful who can help you move things forward.

Host: 

Yeah, I mean, even the example you gave was still sort of a cost case, because cost-cutting... it's not that everyone's on board, but it is something people are more likely to move forward with. The really powerful stuff is when you're doing something experimental, where you're trying to capture market share, and that's almost impossible to sell because it's so speculative, right? So it takes quite a lot of nuance to push this kind of stuff forward. But you've been doing it for quite a long time, 25 years, so somebody can do it, right?

So we'll move a little bit away from what I'd call the more fun topics, the speculative bets, the technical know-how, deciding where the compute lives, selling it to higher-level stakeholders. That's also risk-reward when you're selling to those folks, but it can be fun. Let's talk about something that might be, and you tell me, a little more frustrating, especially in the manufacturing industry, and I'm not sure if you've had to do this: upskilling traditional manufacturing workers. You have to communicate what a machine learning model even is, convince them that the outcome is positive for them and their work, and then get them to actually contribute and not sabotage the endeavor, because if people don't want to do something, they won't do it. That's quite the sell. So what has your experience been with that?


Carl Neely: 

Yeah, that's a great question. That gets to organizational change, right? Manufacturing in general, the industrial space, they're laggards when it comes to technology. They're not typically on the leading edge the way you see in tech firms or maybe financial services. And one of the things you find, because I actually run the community of practice, is that people who've been in the environment for many, many years... manufacturing typically, not always, but in general, has longer-term employees who've been there for decades, and change can be very scary for them. That's probably rooted in past experience with different initiatives at the company. Being new, I don't have that baggage, if you will, but I have to understand that history.

Typically, and this is also true with doctors and nurses, you're dealing with a lack of understanding. How do you get somebody to understand that these new capabilities and tools are not here to replace you, but to actually help you be a better engineer or operator on the floor? Make your life easier. There are some pain points, some things that you hate to do; how can we make this technology eliminate that from your daily workflow, your daily responsibilities?

And then getting them to trust it. That's the other piece. Trust comes with understanding how it works and why it's doing what it's doing. So you get them involved up front, as early and often as you can, so they understand the development process too. And once they understand and are involved, they start taking ownership of it. You start seeing ideas flow from them: "Hey, what if we could do this?"

In my experience working with technology and people, I think it's naive for any IT department, data science team, or AI/ML team like mine to come into the picture thinking we know all the answers. We don't. We have to work with the experts, the people who've been doing it for many years, to understand their workflow. You can build the best solution in the world, but if it doesn't integrate into their daily workflow processes, they're probably not going to use it. If they don't trust it, they're probably not going to use it. If they don't understand it, they're probably not going to use it. So that's part of the challenge: getting them engaged, helping them understand, and explaining it in a way they can receive.

We're talking about people who don't have master's degrees in data science, right? How do you explain things in a way that's meaningful and impactful to them? For example, one of the things we do through our community of practice is hands-on sessions that just explain: what is a predictive model? What is it, what can it do for you? Explaining terms like "signal" in the data. I'm happy to say our engineers out there understand signal now. They understand that once I iterate and build a model, I have to find signal, signal that's good enough to predict what you want it to do. If you can't find the signal, the relationships in the data, then you might as well stop the project and move on, as the sketch below illustrates. Those types of things.
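
To make that "find the signal" step concrete, here is a minimal sketch comparing a real model against a naive baseline with scikit-learn. The synthetic data, model choice, and thresholds are illustrative, not the team's actual tooling.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for your own features and target.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Naive baseline: always predict the most common class.
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
# A simple real model, cross-validated the same way.
model = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

print(f"baseline accuracy: {baseline:.2f}  model accuracy: {model:.2f}")
# If the model can't meaningfully beat the baseline, the data may hold no
# usable signal for this target: a reasonable point to stop and move on.
```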

And so we try to teach those concepts, and we also teach: what is data engineering? A lot of people don't understand that it's fundamental to machine learning in particular. What is it? Simply that you transform the data, you clean it up. We operate with the assumption that all data is dirty until proven clean; a small example of that follows below. We teach them that concept: once the data is clean, it's ready for modeling, those types of things, and you've got to go through an iteration cycle. So we involve them in some of those details. The people who want to jump in and embrace it will. You're always going to have the naysayers on the fence; over time they're going to wash out anyway, or retire, whatever the case may be. But it is a cultural change, especially in the industrial space, because they tend to be laggards when it comes to this more advanced knowledge.
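
And for the "dirty until proven clean" idea, a tiny pandas sketch of a cleaning pass. The column names, error codes, and rules are invented for the example.

```python
import pandas as pd

raw = pd.DataFrame({
    "line_speed":  [120.0, 118.0, None, 119.0, 9999.0],  # None = dropout, 9999 = error code
    "temperature": [210.0, 212.0, 211.0, None, 208.0],
})

clean = (
    raw.replace(9999.0, float("nan"))   # known error codes become missing values
       .dropna()                        # or impute, depending on the use case
)

print(clean)   # only rows that pass the cleaning rules reach the modeling step
```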

But that's part of it. You have the systems aspect, but introducing value-added change is just as much about the people: getting them to trust it, augmenting their workflows, and reducing that fear of "Hey, is it going to take my job?" No, we're here to augment it, and to upskill, so you also know how to apply these tools to your daily job.

Host:

Well, thank you so much. This has been very informative, and I really do appreciate it. I do hope you can come back again. But before that, if people want to reach out to you, find out what you're up to, maybe learn some more, how can they do that? How can they reach out to you?

Carl Neely: 

Yeah, they can reach out to me via LinkedIn. I'm pretty active on LinkedIn; that's my main form of social media. I don't mess with Facebook and things like that, but you can definitely reach out to me on LinkedIn. I typically do a lot of virtual conferences, so you'll probably see me in the near future giving some of those talks about my experience and collaborating with other AI and ML professionals, because that's how I learn too, Chris. I'm super excited to have had this conversation with you and to be in your collaborative network of professionals. Thank you so much, I really appreciate it.

Host: 

Thank you for giving us your time. It was amazing. Thank you so much.

Carl Neely: 

Thank you. Have a great rest of the day. Take care.

Recursive House

Recursive House provides consulting and development services to companies looking to integrate AI technology deeply into their company operations. Using our expertise, we teach and build tools for companies to outcompete in marketing, sales, and operations.
