Keeping up with generative AI can feel like aiming at a constantly moving target. Existing models are continually enhanced while brand-new models, technologies, and capabilities arrive alongside them, and a GitHub repository can gain immense popularity almost overnight. The speed of these developments is genuinely remarkable.
When the focus shifts from theoretical possibilities to production environments, however, the picture changes. Most generative AI applications today are best suited to experimentation: testing, playful exploration, and intriguing demos. Only a handful of use cases have seen broader real-world adoption, and those stand out for their practical utility and effectiveness across a variety of contexts.
Some of the popular enterprise production use cases include:
- Customer Support Chatbots
- Document Search & Document Processing
- Knowledge Engines for Unstructured Data
- Data Analysis & Data Science Automation
- Text Generators in Marketing & Sales
- Content Summarization Workloads
- AI Agents and Chains (early stages)
Building a ChatBot with just one line of code
In this demo blog, we will explore how to create a chatbot in minutes using Lyzr SDKs. Chatbots are one of the most popular use cases for Lyzr SDKs. These SDKs are powered by LyzrCore (our central research & development stack), along with the best of LlamaIndex RAG modules and Playwright.
The best part is that all it takes is just one line of code to build your chatbot. The Lyzr Chatbot SDK handles chunking, embedding, and retrieval, super-abstracting the entire process. Lyzr does the heavy lifting so that you can focus on integration, application logic, or just more sleep.
But how does a chatbot work?
Before we dive into the experiment, let's first understand how chatbots work. A few years back, around 2016-2017, when I was building chatbots, we used Python NLTK, an open-source NLP toolkit, to build conversational chatbots. Back then, we didn't have generative text models, so we had to rely on pre-written answers. These bots were simple and rule-based, with NLTK handling the language processing. Now, with generative AI, things are different.
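To make the contrast concrete, here is a minimal sketch of that pre-generative, rule-based style: a dictionary of keywords mapped to canned answers. This is illustrative only (NLTK added tokenization and pattern matching on top of this basic idea); the keywords and responses are invented for the example.

```python
# Hypothetical keyword-to-answer rules, in the pre-generative style
RULES = {
    "hello": "Hi there! How can I help?",
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "bye": "Goodbye!",
}

def respond(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand."  # fixed fallback -- no generation

print(respond("Hello bot"))  # matches the "hello" rule
```

Everything such a bot can say must be written in advance, which is exactly the limitation that generative models remove.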
The most crucial thing in building a chatbot is implementing RAG, which stands for Retrieval Augmented Generation.
Retrieval Augmented Generation (RAG) is a technique used in natural language processing, particularly in developing advanced chatbots and AI models. It combines two key components: information retrieval and generative models. Here's a closer look at how it works:
- Retrieval of Information: The RAG system first retrieves relevant documents or information from a large dataset or database. This is done in response to a query or input. The retrieval mechanism is typically powered by an index of pre-processed data, where the system can efficiently search for and find content related to the query.
- Generative Models: Once the relevant information is retrieved, a generative model, such as GPT-4, takes it as context and generates a response. The generative model can produce coherent, contextually relevant text that directly answers the query or synthesizes the retrieved information.
The key advantage of RAG is its ability to provide more informed, accurate, and contextually relevant responses. By leveraging both retrieval and generative capabilities, it can surpass the limitations of models that rely solely on pre-trained knowledge or purely generative approaches. This makes RAG particularly effective in scenarios where up-to-date or specific information is required, such as answering factual questions, providing recommendations, or engaging in complex dialogues.
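The two steps above can be sketched in a few lines of plain Python. This is a toy illustration, not Lyzr's implementation: real systems score documents with vector embeddings rather than word overlap, and the assembled prompt would be sent to an LLM for the generation step. The corpus and query here are invented for the example.

```python
def retrieve(query: str, corpus: list[str], top_k: int = 1) -> list[str]:
    """Step 1 (retrieval): rank documents by shared words with the query.
    Real systems use embedding similarity; word overlap keeps the sketch simple."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Step 2 (generation): hand the retrieved context plus the query to a
    generative model. Here we only assemble the prompt it would receive."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Lyzr provides low-code SDKs for building generative AI apps.",
    "Paris is the capital of France.",
    "RAG combines retrieval with generative models.",
]
docs = retrieve("What does RAG combine?", corpus)
prompt = build_prompt("What does RAG combine?", docs)
```

The model answers from the retrieved context rather than from its pre-trained knowledge alone, which is what makes RAG suitable for up-to-date or domain-specific questions.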
But RAG isn’t the only key aspect of a chatbot
Memory handling is as essential as RAG for a chatbot: without memory, a chatbot is just a question-answering engine. This is where LyzrCore comes into the picture. We built memory handling capabilities into LyzrCore, which the Lyzr Chatbot SDK leverages to provide a contextual chat experience.
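The idea behind chat memory can be sketched as a running list of turns that gets prepended to each new question, so the model can resolve references like "it" or "that". This is a minimal illustration under that assumption; LyzrCore's actual memory handling is more sophisticated, and the class and messages below are invented for the example.

```python
class ChatMemory:
    """Toy conversation memory: a plain list of (role, text) turns."""

    def __init__(self):
        self.history: list[tuple[str, str]] = []

    def add(self, role: str, text: str) -> None:
        self.history.append((role, text))

    def as_prompt(self, new_question: str) -> str:
        """Prepend prior turns so the model sees the whole conversation."""
        lines = [f"{role}: {text}" for role, text in self.history]
        lines.append(f"user: {new_question}")
        return "\n".join(lines)

memory = ChatMemory()
memory.add("user", "What is Lyzr?")
memory.add("assistant", "Lyzr is a low-code generative AI SDK.")

# The follow-up "Who builds it?" only makes sense with the earlier turns attached.
prompt = memory.as_prompt("Who builds it?")
```

Without the history, "Who builds it?" is unanswerable; with it, the generator has the context to know what "it" refers to.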
Now, let’s dive into the demo.
Start by installing the lyzr package and the Playwright package (used for scraping website data):
```shell
!pip install lyzr
!pip install playwright && playwright install
```
Import the ChatBot module from the lyzr package and set your OpenAI API key:
```python
import os

import nest_asyncio
import openai
from lyzr import ChatBot
from pprint import pprint

nest_asyncio.apply()  # allow nested event loops (needed inside notebooks)

openai.api_key = "Your OpenAI Key"
os.environ["OPENAI_API_KEY"] = openai.api_key
```
This is the magic Lyzr function: one line is all you need, passing just the website URL. The website_chat function takes care of:
- scraping the data
- creating vector embeddings
- storing them in the default vector database (LanceDB)
```python
chatbot = ChatBot.website_chat(url="https://www.lyzr.ai/")
```
Now start chatting with the bot:
```python
response = chatbot.chat("What is Lyzr?")
```
Voila! You now have a website chatbot built with just one line of code. And that's our focus: concentrating on use cases and building low-code, super-abstracted building blocks that help builders launch generative AI apps in minutes.
You can also build a chatbot instantly for PDF, YouTube, webpage, Docx, and TXT data. Try them out.