The Role of LangChain in Modern Organizations: Insights by Ashish Syal

Ashish Syal is an accomplished professional in the field of IoT, applied AI, and ML. With a strong background in these transformative technologies, Ashish has made significant contributions to the industry. As the Founder and Chief Architect of the Open Source Hardware platform mangOH, Ashish has played a key role in building and deploying IoT solutions.

In his current role, Ashish focuses on developing solutions for industries that require video surveillance. He specializes in helping organizations build innovative solutions that leverage video management systems and integrate seamlessly with their existing infrastructure. Ashish's expertise lies in understanding the unique requirements of different industries and creating user-friendly experiences that address their specific needs.

Additionally, Ashish is actively involved in integrating advanced Large Language Models like GPT and Llama to enhance developer efficiency and support the development of cutting-edge solutions in the realm of IoT, AI, and ML.

With over a decade of experience in senior leadership roles, including Chief Engineer/Sr. Director at Sierra Wireless, Ashish has consistently delivered successful cellular IoT products. Ashish's visionary leadership also led to the creation of the Open Source Hardware platform mangOH, which has gained recognition in the industry.

Host:
Hello, how are you doing?

Ashish Syal:
Good, thank you.

Host:
This is Ashish Syal. He is currently the Director of AI and ML at a company called Lantronix, and he's based in Vancouver, British Columbia, Canada. He was at Sierra Wireless for 13 years in various positions: Chief Engineer, Principal Engineer, Senior Staff Engineer. And he was at VTEC for a few years as a team lead. So he's been in the industry for quite some time. He's quite experienced, and we're going to talk to him primarily about AI in the various areas of the organization and how it's applying itself. So thank you so much for joining us.

Ashish Syal:
Thank you for having me. It's a pleasure to be on your show.

Host:
First of all, I'd like to know what you do today. What is it like to be the Director of AI, especially as things are changing? How does it differ from what it was to what it is now?

Ashish Syal:
A lot of the work I do today is about understanding both the business as well as the technology side of AI. So part of my job is understanding how we can bring various functions of AI into the organization, improving the efficiency of the organization, number one, but also how we can apply AI to our products.

Ashish Syal:
So we're looking at AI both as a way to improve operational efficiency and as a way to improve the products we take to market. Ultimately, our goal is to bring to market solutions that help solve customer problems. To do that, we make the organization more efficient by applying AI, and the products we bring to market that apply AI help our customers as well. So it's efficiency improvement across the organization and in our products.

Host:
So it's basically permeating throughout the organization, from how the products actually get distributed or how they actually work, all the way to how you guys actually do things. So what has that process been like, and where are the efficiencies that you feel AI has been best at, now that you're trying it and testing it out?

Ashish Syal:
I joined this company around... I joined Lantronix, by the way. It's a U.S. company, based in Irvine. We do a variety of products across the board: gateways, cellular gateways, routers, switches; we have our own NB-IoT products; and we also build compute modules that do AI functionality. So that's an overall description of Lantronix today.

When I joined Lantronix, ChatGPT was not out yet. GPT-3 was around, and by December or so, when ChatGPT came out and people started using GPT-3.5 a lot more, we said, "Okay, can we apply that internally to our organization?" We started building, as is very common right now, conversational bots that we could apply to all the documentation within the company: the PDFs, the websites.

We said, "Okay, let's use all of that to answer customer questions." So rather than a customer waiting X amount of time, can we drop the response time by multiple folds? So that was one way we found we could improve the performance or efficiency within the organization and responding to customers.

Number two, we are also using it to educate people within the organization. If they have a question, they can go to the bot and ask things like, "What does this product do? How many SIM cards does it have inside? How many Ethernet ports does it have?" So rather than searching through documents, even the 20 minutes you might spend looking through the documentation, now you just go to the bot and ask it a question.

So that kind of efficiency improvement is what we're beginning to see within the organization itself: in response times internally, for our own engineers asking questions, and externally, for customers. Thirdly, I would say what was surprising to us as we built the system is its ability to write code based on our own standards. For example, Lantronix has a standard called p-file, which is something we had built. The adoption rate of that standard is something you want to increase over time, and to do that you would normally need a lot of GitHub source code, which in some ways we didn't have. So can we start writing that code through the chatbot we built? We're seeing that the performance of the system is extremely good in terms of the code output it gives. So if a customer wants to build with our products, they can just type in a request, and our goal is to allow them to start using this interface to start building as well.

Host

That's extremely powerful. People talk a lot about AI's ability to write code, but you're able to output code that you feel is pretty good?

Ashish Syal

It's pretty good. It helps. In my view, there's a chasm between reading a document and applying the code, because as a developer, and I do a lot of coding, I feel the first code you build is the toughest part of the journey to the final output. But if someone gives you a template, you can always build on it better and faster. So that is something we're seeing. The first template that comes out of this bot is, I would say, very accurate in some ways for small pieces of code. And even when it's not fully accurate, it's close enough for you to then start making changes to the code structure itself.

Host

And the code that's generated, is it based on... it's not just a generic answer coming from the web; is it colored by the internal documentation, so it's specific to your organization?

Ashish Syal

Yeah, so the way the bots work is, I'll explain the flow itself for the LLMs, right? The first thing we do is take complete data sets. We might take them from the website; we might scan through the websites; we might take PDF documents. Once we have the documentation, we create what's known as an embedded vector database. I'm using Pinecone right now. So I take the data, I create an embedded vector database, and then I'm using what's known as LangChain. LangChain is an orchestration framework, and with it I'm writing some Python code. With that, what happens is when someone asks a query, that query is converted into a prompt. Someone asks a question, we convert it to a prompt, and that allows me to search through the vector database. Then I'm using OpenAI: I take the prompt, I take some of the most relevant pieces of the embedded vector database that I've created, send it to OpenAI, and it responds back with the most appropriate answer for us.
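To make that flow concrete, here is a minimal sketch of the pipeline Ashish describes, written against the 2023-era LangChain, pinecone-client, and openai Python APIs. The index name, document path, model settings, and example question are illustrative placeholders, not Lantronix's actual setup.

```python
import pinecone
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Connect to Pinecone (key and environment come from your account).
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east1-gcp")

# 1. Ingest: load PDFs (or scraped web pages) and split them into chunks.
docs = PyPDFLoader("product_manual.pdf").load()  # placeholder document
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 2. Embed each chunk and store the vectors in a Pinecone index.
index = Pinecone.from_documents(
    chunks, OpenAIEmbeddings(), index_name="product-docs"
)

# 3. At query time: embed the question, pull the most relevant chunks,
#    and send them to OpenAI along with the question.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=index.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("How many Ethernet ports does the gateway have?"))
```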

Host

So as an example, if I enter as a question: would you write for me a piece of Lua code, Lua as a language, using the p-file standard from Lantronix, and generate that code for me?

Ashish Syal

What the system then does in the background is search through the Pinecone database that includes the documentation, pick up the relevant pieces, combine them with the prompt, and send that to OpenAI. Using the OpenAI foundation models, it generates a piece of code in Lua, and that is code in Lua using the p-file standard that Lantronix has. So you can see there's a complete flow here from end to end: ask a question; the question converts into a prompt; the prompt is combined with the relevant pieces from the embedded vector database you have created; the combined data goes to OpenAI; OpenAI then helps create a complete piece of code; and the user sees it on their front end.
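Continuing the sketch above (and reusing its qa chain), the code-generation case is the same retrieval flow with a different request; the wording here is illustrative:

```python
request = (
    "Write a small piece of Lua code that follows the Lantronix p-file "
    "standard described in the documentation."
)
print(qa.run(request))  # returns Lua code grounded in the retrieved docs
```

Because the retriever injects the relevant p-file documentation into the prompt, the model's Lua output is grounded in the company's own standard rather than in generic web knowledge.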

Host

One question I'd love to ask is a little bit about security. But first, on Pinecone: there's Redis, there's Chroma for embedded vector databases. Why Pinecone as opposed to any other sort of solution? Was it convenience or familiarity?

Ashish Syal

It's a very good question, right? I was looking at Chroma as well, and I looked at Faiss too. Pinecone was the first one I came across when I started building, and I just found it so easy to work with, for me. Maybe others have a different opinion, but for me, it just flowed. I paid for a monthly subscription for a few months, and I was able to create my database and use it, so I have no problems with it. I mean, of course, I've looked at Chroma, I've looked at Faiss; I just found this to be easier to use from my own perspective. It was ease of use, I would say, and I know it's going to work, because in some ways when you pay for something, maybe it's just a mental model, but you have a bit of a guarantee. Although initially, I would say, there were times when the US East server would come down quite a bit. I don't know why; maybe they're running on GCP or something. The system would come down, and I went to Twitter complaining to them, "It's not working, why is it not working?" and they would respond. That was a couple of months ago, and I think they've improved since. As for Redis, it can scale on multiple platforms, and the embedding itself is generated by the OpenAI abstraction, so if you're just storing embeddings, you store them wherever makes sense. And if you're really trying to optimize for O(1) access, Redis can do it. I initially used Redis for caching, because it gives faster responses, but not from a complete storage perspective. Redis was more of a caching solution for me.
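As a rough illustration of that division of labor (Pinecone for vector storage, Redis as a fast cache), here is a minimal sketch of caching model answers in Redis, assuming a local Redis instance and the pre-1.0 openai client; it is a pattern sketch, not Lantronix's actual code:

```python
import hashlib

import openai
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def cached_answer(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    # O(1) lookup: the cache key is a hash of the model name plus prompt.
    key = "llm:" + hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return hit.decode()
    resp = openai.ChatCompletion.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    answer = resp["choices"][0]["message"]["content"]
    r.set(key, answer, ex=86400)  # cache for a day; vectors stay in Pinecone
    return answer
```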

Host: 

Yeah, yeah. It's great that you gave us that story and narrative, the structure and architecture, because now people understand, based on what you're saying, "Hey, this is how it's normally structured." Then it's very interesting to think about how that's going to apply to the IoT space, because a bunch of the examples you gave us for code were actually code for IoT devices. You're talking about routers and things like that. So when you dump this sort of problem into that space, where connectivity... I mean, with the router it's kind of obvious that the connectivity is there, but the security issues... there are a bunch of complexities there. And for IoT devices that aren't really meant to be connected to the internet, how exactly are they able to leverage this technology? Are they also talking to OpenAI? And what are the concerns there? I think that's something no one else has really even tried to talk about, right? I think people are just enamored with the technology right now.

Ashish Syal: 

Right. That's a very good question, actually. That's a very good way to express it: enamored with technology. From my perspective, I look at AI and all the other cool things we work on as just tools. Tools to solve problems. Because at the end of the day, what we want to do is solve customer problems and how can we do it faster in a more efficient way so they can go ahead and deploy their systems. So from that perspective, because of my experience in IoT as well, I look at everything in AI as just, how can I use it in the IoT domain? So this is one way in which you can use it. There are many other ways as well which I think will become prominent. But it's... it is more about applied AI. It is not about building convolutional neural networks. It's more about how do you use these models, these large language models, or even in some ways the narrow models as well. How do you use both of them within the IoT system?

Host: 

And this goes back to the security thing. Would you go and push this data that you have to an OpenAI system which has access to your data, to the IoT data itself, which your customers may not like? Or do you want to use the more open-source models like Falcon or Llama within your device, right? Allow it to continuously train itself and over time, build much better prompts to be able to interact directly with the data?

Ashish Syal: 

I mean, I'm sure you might have heard about the Voyager project, which is working with Minecraft; they're trying to build this extremely sophisticated Minecraft-playing system, an ecosystem, right? It plays for 12 hours, and I was listening to a podcast the other day where they talked about it. They said, "Look, we have this Voyager system working just using OpenAI, with just prompts, and it's learning so many things about Minecraft itself." Right? I feel that's what's going to happen in the IoT domain as well. You might have these open-source models working within the device, interacting with the data, and in some ways, over time, getting better at interacting with it.

Host: 

The idea of having these models sitting on IoT devices: there are size constraints and performance constraints. Is that going to change how IoT devices are made and optimized? Or do we have things that are already able to cope with that?

You know, as it is with most things in software, you just shift the balance of what you need relative to the reality of your hardware. And really, it's about speed, processing, and space, right? You rotate it based on what the situation needs most. So for IoT, what is that balance, and is that balance going to change? The easy thing to say is just "Nvidia chips on everything" or something, right? But from someone who really knows, what do you think that is?

Ashish Syal:

That's a very good question as well. I think you have to look at IoT as a huge spectrum, right? You have the smallest constrained devices, where you could just run a TensorFlow Lite model: that's a narrow model you're running, doing specific work. But at bigger devices, not the Nvidia A100s, but I would say more computer-vision kinds of chipsets, you would say, okay, the cost of computer vision is so much just in sending data. Can I use something on the device itself to reduce that cost?

I think the use of these models, the foundation models, will be more restricted right now, but over time you might find that even some of these narrow models become more text-based and do much more narrow work that's relevant to the device in which they're used. I see that happening over time. Right now, as you said, there's a lot of noise and excitement about the big models on big devices, but I think over time, because of cost and latency issues, you might be running not foundation models but smaller models on the edge devices.
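To illustrate the constrained end of that spectrum, here is a minimal TensorFlow Lite inference loop of the kind a narrow model on a small device would run; the model file name is a placeholder:

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # use tensorflow.lite on larger hosts

# Load a small quantized model, e.g. a person detector (placeholder file name).
interpreter = tflite.Interpreter(model_path="person_detect.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> np.ndarray:
    # frame must already match the model's expected input shape and dtype.
    interpreter.set_tensor(input_info["index"], frame)
    interpreter.invoke()
    return interpreter.get_tensor(output_info["index"])

# Example: run on a dummy frame of the right shape.
dummy = np.zeros(input_info["shape"], dtype=input_info["dtype"])
print(classify(dummy))
```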

Host: 

How do you feel about the cost in IoT? Is this going to blow up the cost of IoT devices on average, or is it cheap to build in and then sell as an increase in value-add?

Ashish Syal: 

So, if you look at, say, computer vision systems, there's a huge amount of data. If you take a camera and you want a fast response to something, say in a retail shop, you see something happening, and suppose it's a lone-worker safety problem and you say, "I want to protect this worker," what do you do? You need continuous transmission of data back to the cloud, so there's a huge amount of data going to the cloud, and the cloud companies charge you for the amount of data coming in and then for the processing you do. And sometimes, if you go offline, there's no way to respond either. But for those kinds of systems, if you build foundation models into the device, you might be able to do things that protect you, or prevent the data from having to go to the cloud all the time, so you're not paying that extra cost, and latency can also be lower. So I feel there's a balance there. In some ways you would like the data to go all the time, but it's extremely expensive; on the other hand, there's an added cost to adding foundation models: more memory, a higher-end CPU. So you'll have to balance it out, and those business decisions, I think, will definitely be asked over time, and very soon.

Host:

I would say another complexity, and I think a good example of this, is driving. Funny enough, a little bit of that is IoT. We were promised self-driving for many years, and I think the effort was genuinely made, but the reality is that it's that one percent that makes the difference. There was a movie a few years ago that Will Smith was in, where the robot saved the person probabilistically most likely to survive, as opposed to what a human would value, which is the person who has the most life left to live. That kind of decision-making exemplifies an extreme, but to bring it down a little: what is the perception of error, and what is an error in that case? Is that technically an error? No. So how does that affect the way we deploy models in real life, where they make autonomous decisions? Human life is the extreme case, but there are also categorizations of individuals based on one thing or another. There's a square of error that most people understand, and where software lands on that square can determine a lot of things.

So how do we deal with that? Is there a way you're thinking about that? Especially with IoT devices, where you can't just say, "It's Google's fault," where it's a little easier to see where the blame lies. Here it's harder: do you point at the cameras and say it's the camera's fault?

Ashish Syal: 

It's an interesting thing.

Ashish Syal:

Yes, yes. I think it's very interesting. A couple of weeks ago I was working on training a system for lane detection on the road, using PyTorch and the TuSimple dataset. I found that when my data was really small, my model behaved as though everything it was doing was right. But as I increased the data, the model got more accurate. The model itself wasn't wrong; it was the way I trained it, with different amounts of data, that determined its performance. So, the AI technology itself is okay, but how it's trained, the biases created through the data, ultimately determines the decisions the model makes. So, back to your Will Smith example: it's the data you train with and the biases you build into that data that will determine the actions of the AI model itself. This is something most AI people working in the field are aware of. You have to make sure the data is diverse enough, wide enough, and broad enough. Foundation models are much broader, while narrow models have narrower datasets, and the application of the models at the end will be determined by the amount of data you give them and the biases built into it.
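The data-size effect described here is easy to reproduce on a toy problem. The sketch below is a synthetic stand-in, not the actual TuSimple lane-detection pipeline: it trains the same small PyTorch model on growing subsets of the data and reports held-out accuracy, which climbs as the training set grows:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

torch.manual_seed(0)

# Synthetic stand-in for a labeled vision dataset: 2,000 samples.
X = torch.randn(2000, 20)
y = (X.sum(dim=1) > 0).long()
train_set = TensorDataset(X[:1600], y[:1600])
val_X, val_y = X[1600:], y[1600:]

def train_and_eval(n_samples: int) -> float:
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(Subset(train_set, range(n_samples)),
                        batch_size=32, shuffle=True)
    for _ in range(20):  # a few epochs on the chosen subset
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    with torch.no_grad():  # held-out accuracy on the untouched split
        return (model(val_X).argmax(dim=1) == val_y).float().mean().item()

for n in (50, 400, 1600):
    print(f"train size {n:4d}: val accuracy {train_and_eval(n):.2f}")
```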

Ashish Syal

I feel that question changes with time. If you look 10 years ago, Nvidia's GPUs were not where they are now, and AlexNet, which came out in 2012, proved that performance improves significantly when you train on much bigger datasets. That's what AlexNet proved: the classification error dropped from about 26% to 15% because it was trained on a much wider dataset. AlexNet was actually trained with two GPUs working together, which was very difficult at the time, and certainly the performance improved. So what's happening right now is, with convolutional neural networks, plus the vast amounts of data available and the GPUs that are now accessible, the models are getting better. And as more GPUs become available, the training will improve and the performance of models will improve, because we now have much larger datasets and it's cheaper to train in some ways. Five years ago, the performance of models was limited by what they had; today it's better because we can train more. Although GPUs are still something we don't get access to easily, that will improve over time.

Host

Let's zoom out a little bit. Thank you so much for all you’ve shared in the speculative space, seeing how things are going to evolve. But the reality is, you're having an experience right now with how you are able to be productive. You mentioned a little bit earlier how people use the product, how they generate code, and that sort of customer-facing aspect. But then, internally, how are you guys building code, and how are engineers being productive? As someone who sees the consequences of the whole team's productivity, what are you seeing there? Is there a best way to use these technologies?

Ashish Syal

From my personal experience, using GPT has definitely improved productivity to a massive extent. It's taken operational efficiency to a much better level. When you look at it from an organizational perspective, it's going to improve efficiency across the organization as it becomes more adopted. It takes time for everyone to adopt it, I would say, but once adoption happens and it becomes an integrated process in the organization, code development becomes much faster. Once that happens, you're able to achieve your targets in a much more efficient and faster way, and delivery of products will improve. But I think it will take time for organizations; depending on the size of the organization, bigger organizations will take longer, and smaller ones will be a bit faster to adopt this into their daily workflow, if you want to call it that, right?

Host

If someone at your tier is listening to this and you were to give them advice on how to make it possible for their team to sort of bring this in the best way possible and reduce the amount of pushback, because people have their way of doing things, how would you suggest they get this to cascade through the organization successfully with the least amount of questions or pushback? How do you make that happen?

Ashish Syal

Introducing something like this is quite a personal journey. Throughout my career, I've found it sometimes becomes like a religious war; even asking people to change from SVN to GitHub or GitLab takes a lot. That's why you have to give it space and know that these things take time to bring into an organization. Personally, I found the best way to do it is to discuss it with your team and encourage them: here are the possibilities we have. But also do it yourself. I feel the best way to get people to follow is to go through the process flow once yourself, and also to talk to the senior people on the team and say, "Let's start looking at this, and tell me what you think." Then incorporate their input into how you want to push it through the organization. I feel it's more of a broad-based thing. You could force it down, but there will be opposition, because people are used to what they're doing, and that's fair enough.

Host

Have you seen someone... Have you been impressed by how certain people have been able to really take advantage of it? Is there an example you thought was really interesting?

Ashish Syal

That's a bit difficult, because right now we are adopting it within the organization, so it's a bit of a slow process for us to see. It's one thing to build a chatbot; it's another thing to push it through the organization. Ask me in a few months and I'll give you a broader example, but right now I don't know if everyone's adopting it across the board. I've talked to software engineers, and I don't see that adoption being that accelerated. Personally, I feel this is so good, but when I talk to people, I think there's hesitation, because a lot of good software programmers feel they're already at maximum efficiency in some ways. They look at it and say, "Why do we want to change?" But the people who want to adopt it, the more junior people coming in, will adopt it much faster. It helps narrow the gap in some ways between the highest-performing people and those who want to get there.

Host:

I see. In my experience, the code it can give you is so simple that it almost becomes a kind of autocomplete at this point. Maybe that's just my experience so far, but there's a tier of code where you're going to have to write it yourself, and it can't do that for you. But it can do some very impressive autocompletes. For example, Python doesn't have auto-imports the way JavaScript does in VS Code, so it can help you with that a little, and other things like that. Languages that are a bit more esoteric... not that Python is esoteric, but it can infer things that you would otherwise need other tools for, like auto-imports. It's like an all-in-one dev extension for your productivity.

Ashish Syal

Having it write something for you that's truly complex... I think for people who are really good, their code is actually quite small and terse, and it does a lot at the same time. It can't do that just yet. It can infer a really simple function: it was simple to write, but I don't have to write it, and if it can do that over and over again, that's nice. You don't write that much architecture anyway; these are small pieces I need, so can you quickly generate them and integrate them back into the bigger code? I don't think it can go and write a complete end-to-end system for you, but smaller pieces of code can be integrated back into your system. Which, again, is what I was saying: it might be more useful for the junior person who's joining, because for them it's accelerated much faster; they get to see the whole picture way faster than a more senior person who can just code it away. So I think it depends on where you are in your journey as well as how you adopt it.

Host

What about unit testing? Because, you know, I know unit testing is very useful, but I don't like writing it.

Ashish Syal

Yes, unit testing for sure. We feel it's something we are going to build into our system and our culture: using this to start building small amounts of unit tests back into the system. I remember a couple of months ago someone had a project on GitHub where they talked about a complete end-to-end system that automated unit tests for the code that was written. Code was put in, and Auto-GPT, which was very popular a couple of months ago, created unit tests for the system and then passed them through the whole test infrastructure. I don't know where the status is today, but that was something I was following very closely, along with other things.
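The underlying pattern, feeding source code to a model and getting draft tests back, can be sketched in a few lines. This is an illustration of the idea rather than the Auto-GPT project itself, and it assumes the pre-1.0 openai client:

```python
import openai

def draft_unit_tests(source: str) -> str:
    """Ask the model for draft pytest tests covering the given source code."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write concise pytest unit tests."},
            {"role": "user", "content": f"Write pytest tests for:\n\n{source}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]

snippet = """
def clamp(value, low, high):
    return max(low, min(value, high))
"""
print(draft_unit_tests(snippet))  # review before committing to the test suite
```

As with any generated tests, the output is a draft to review before it goes into the test infrastructure.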

Host

Fair enough. So a big thing, I think, when you've been somewhere for a long time, is being able to communicate these reasonably esoteric and abstract ideas to people of other skill sets and expertises, with other concerns, and justifying the value to some degree. Here the value is a bit easier to justify, because the interface is human, human expression. But being able to integrate it, and justify a style of integration, is a more nuanced conversation. So how has that experience been with sales, marketing, finance, HR: trying to have a conversation about the value of this sort of technology? I mean, people who write copy understand that it has value, and the integration cost is zero for them, but for you the integration is a lot more intricate. So how has that conversation gone? Was there pushback, and how were you able to massage that? Because that is its own skill.

Ashish Syal

So the way I approached it was, I talked to the executive team first, and the feedback was really good. Once you get that buy-in, you know it makes sense. Otherwise, when you bring in something like that, you have to ask: is it just, as you said, esoteric, or will people understand it? But the feedback was extremely good, and then they help you push it through the different parts of the organization, and I think that's what helped in some ways to push the idea. It's more top-down in this case, because once you're able to convince the executive team, it's easier to push it down. People adopt it much faster because it is genuinely useful, but when people use it, they want to make sure it's the right thing to do. So I found that approach made sense: a top-down approach in this case. On a lot of other projects I've worked on, I found bottom-up makes sense, but in this particular case, it was top-down.

Host

The LLMs: have you been leaning towards the ChatGPTs, or more towards the open-source models, running them locally?

Ashish Syal

I did use Llama initially on a local system, right? I was using Llama for a pet project, but in order to build a GPT chatbot, I decided to use OpenAI. It's not that expensive (GPT-3.5 Turbo, it's called), it's fast enough, it's cheap enough, it does the job.

Host

Is Llama not there yet, or do you have to tweak so much to get it to work locally that it's not worth the time?

Ashish Syal

It wasn't good at that time; it wasn't worth my time. But I have a lot of hope that over time, as people want to keep their documents in-house and as more people get exposure to these tools, they'll start using more of these models. Right now, because the database I'm using is all publicly available, I don't have a problem just going to GPT.

Host

Fair enough.

Ashish Syal

Yeah, I think those open-source models are going to be essential for the growth of this, because obviously, feeding ChatGPT all your information over time... it's already clear that we've given these large organizations so much value, and GPT is a personification of that value, no pun intended. So it's interesting to see how people are going to treat their information moving forward. What's also interesting is GitHub: all the open-source projects were actually very good projects, very good engineering. Honestly, there are very few internal projects that are as good as open-source ones. As wonky as you might feel they are, and they are wonky, the code quality is quite high. So being able to read all of that and produce value from it gives the model an advantage most junior engineers don't have: it got to play with very high-quality code.

I mean, before I joined Lantronix, I was leading an open-source project for seven years. I was the founder and architect for a platform called mangOH, where everything was open source, the hardware and the software, and we were building everything on top. I found that concept made a lot of sense. And I feel that open source applies a lot in the case of large language models; it makes even more sense than open-source IoT in some ways, because IoT is more commercial. These models will become the foundation that you go and apply to technology, to these IoT projects. So I would definitely hope that with Falcon and Llama... Falcon is, in some ways, ahead in performance right now, but I'm sure Llama will catch up. And I was also hearing that Facebook is thinking of going completely open source with Llama as well, because today Llama isn't released for commercial use, right? It's released with the weights and so on, but you can't use it for a commercial project. I was reading a couple of days ago, because things are changing so fast, that they are thinking of making it open source. It'd be good if they do, right? That's going to encourage more innovation and more people to come on board and start building with it. The last thing you want to see is three or four organizations controlling the whole industry, a monopoly over this thing.

Host

Yeah, yeah, I'm completely with you. That's not the right way for it to go. What do you think the local maximum for the acceleration of this technology might be? It's accelerating, but the acceleration isn't a symptom of human genius per se; it's a symptom of the potential of the hardware, software, and economic ecosystem not yet having been reached. So, and this is the hard question, I actually have no idea what the answer could be, but what do you think the local maximum for this is?

Ashish Syal

Yeah, that's a very interesting question. I grew up in India, and I go there every year or two. What is very interesting for me is that over the last seven years or so, when I go to a shop, everyone's using their mobile phone to tap for payments. It's everywhere, and it's even more advanced than what I find here. And I was thinking about it recently: all these people have mobile phones. Think about using those as compute systems. In my view, AI's biggest potential will be narrowing the gap between people who may not have had the chance to get a formalized education and people who have gone through it, because these language models give people access to information across the board. It's even more powerful than the internet itself, because it's a faster way to learn, right? So if you think about it, there are a lot of smart kids across Africa, Asia, and other places who, if they have access to hardware, can run these models and build things that humanity did not have a chance to build before. So I think the local maximum is limited by the number of people on Earth rather than anything else. That's the broader change, as long as you don't regulate it down to four or five companies. The open-source aspect of it will allow this to flourish a lot more. If you believe this is going to get to the individual, that's what's going to spur the next wave of innovation. But it has to get to that level first. I feel large organizations will only take it so far; it has to be individuals. It's the community that eventually adopts it and starts building more. If you look at, say, the story of mobile: the hardware was done, but the real growth happened because of the app developers on top of it. I feel in this case as well, it will be the applications, not the core AI, that determine the growth. The growth is driven by people, people who build using the AI models. And that, I think, will be the local maximum in AI.

Host

Fantastic. Well, thank you so much for joining us. I really appreciate you answering all the questions and giving us some perspective. I think you're the first person who's talked about IoT and AI at the same time, and I don't think anyone else is really having to deal with the reality of that. I think that is essentially where things are really going to go, because at the end of the day, you're able to speak to this technology and, like you just said, have something in your hand that you're speaking to through an interface that is human expression. I think that's really, really powerful. So, thank you for sharing that with us.

Where can people find you, by the way?

Ashish Syal

So, I'm on Twitter, under Ashish underscore Syal underscore, as well as on LinkedIn.

Okay, so yeah, you can find me there. I will post once in a while, not that often, but I'll definitely be more involved. It seems like a lot more people go on social media right now, so I'll definitely post more as well.

Host

Yes, they do. Yes, they do. Well, thank you so much, I really appreciate it.

Ashish Syal

Thank you.

