Making a Chatbot with FalconAI, LangChain, and Chainlit


Introduction

Generative AI, particularly Generative Large Language Models, has taken over the world since its inception. This was only possible because these models could integrate with different applications, from generating working programmable code to creating fully GenerativeAI-managed chat support systems. But most of the Large Language Models in the Generative AI space have been closed to the public; most were not open-sourced. While a few open-source models do exist, they are nowhere near the closed-source Large Language Models. Recently, however, FalconAI, an LLM that topped the OpenLLM leaderboard, was released and open-sourced. With this model, in this guide, we will create a chat application with Falcon AI, LangChain, and Chainlit.

Learning Objectives

  • To leverage the Falcon model in Generative AI applications
  • To build a UI for Large Language Models with Chainlit
  • To work with the Inference API to access pre-trained models in Hugging Face
  • To chain Large Language Models and Prompt Templates with LangChain
  • To integrate LangChain chains with Chainlit for building UI applications

This article was published as a part of the Data Science Blogathon.

What is Falcon AI?

In the Generative AI field, Falcon AI is one of the recently launched Large Language Models, known for taking first place on the OpenLLM Leaderboard. Falcon AI was introduced by the UAE's Technology Innovation Institute (TII). Its architecture is designed in a way that is optimized for inference. When it was first released, Falcon AI topped the OpenLLM Leaderboard, moving ahead of state-of-the-art models like Llama, Anthropic, DeepMind, etc. The model was trained on the AWS Cloud with 384 GPUs attached continuously for two months.

At present, Falcon AI comprises two models: Falcon 40B (40 billion parameters) and Falcon 7B (7 billion parameters). Crucially, the makers of Falcon AI have stated that the model is open-sourced, allowing developers to work with it for commercial use without restrictions. Falcon AI even provides instruct models, Falcon-7B-Instruct and Falcon-40B-Instruct, with which we can quickly get started building chat applications. In this guide, we will work with the Falcon-7B-Instruct model.

What is Chainlit?

The Chainlit library is similar to Python's Streamlit library, but its intended purpose is to build chat applications with Large Language Models quickly, i.e., to create a UI similar to ChatGPT. Creating conversational chat applications within minutes is possible with the Chainlit package. The library integrates seamlessly with LangFlow and LangChain (the library for building applications with Large Language Models), which we will do later in this guide.

Chainlit even allows visualizing multi-step reasoning; it lets us see the intermediate results and understand how the Large Language Model reached its output for a question. So you can clearly see the model's chain of thought through the UI itself. Chainlit is not restricted to text conversation; it also allows sending and receiving images to and from the respective Generative AI models. It even lets us update the Prompt Template in the UI instead of going back to the code and changing it.

Generating a HuggingFace Inference API Token

There are two ways to work with the Falcon-7B-Instruct model. One is the traditional way, where we download the model to the local machine and then use it directly. But because this is a Large Language Model, it needs a lot of GPU memory to work. Hence we go with the other option: calling the model directly through the Inference API. The Inference API is accessed with a HuggingFace API token, with which we can reach all the transformer models hosted on HuggingFace.

To get this token, we need to create an account on HuggingFace, which we can do by going to the official HuggingFace website. After logging in/signing up with your details, go to your profile and click on the Settings section. The process from there is shown in the screenshots below.

[Screenshots: navigating from the HuggingFace profile to the Settings section]

So in Settings, go to Access Tokens. You will create a new token, which we need to work with the Falcon-7B-Instruct model. Click on New Token to create the new token, enter a name for it, and set the Role option to Write. Now click on Generate to generate our new token. With this token, we can access the Falcon-7B-Instruct model and build applications.
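
As a quick sanity check (an optional sketch; the endpoint URL and response shape follow HuggingFace's hosted Inference API conventions and may vary by API version), we can call the model directly over HTTP with this token:

import requests

# A minimal token check against HuggingFace's hosted Inference API.
# The URL pattern and payload are assumptions based on HuggingFace's
# documented conventions; adjust if the API version differs.
API_URL = "https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"
headers = {"Authorization": "Bearer <Your API Key>"}

response = requests.post(API_URL, headers=headers, json={"inputs": "Hello, Falcon!"})
print(response.json())  # typically a list like [{"generated_text": "..."}]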

Preparing the Environment

Before we dive into our application, we will create an environment for the code to work in. For this, we need to install the required Python libraries. First, we will install the libraries that support the model, with a pip install of the libraries below.

$ pip install huggingface_hub 
$ pip install transformers

These commands install the HuggingFace Hub and Transformers libraries, which are used to call the Falcon-7B-Instruct model hosted on HuggingFace. Next, we will install the LangChain library for Python.

$ pip install langchain

This installs the LangChain package for Python, which we will use to create our chat application with the Falcon Large Language Model. Finally, the conversational application is not complete without a UI, so we will download the Chainlit library.

$ pip install chainlit

This installs the Chainlit library for Python. With the help of this library, we will build the UI for our conversational chat application. After installing Chainlit, we need to test the package. For this, use the command below in the terminal.

chainlit hello

After entering this command, a new window will appear at the address localhost on port 8000, and the UI will be visible. This tells us that the Chainlit library is installed correctly and ready to work with the other Python libraries.

Creating the Chat Application

In this section, we will start building our application. We have all the necessary libraries to build our very own conversational chat application. The first thing we will do is import the libraries and store the HuggingFace Inference API token in an environment variable.

import os
import chainlit as cl
from langchain import HuggingFaceHub, PromptTemplate, LLMChain

os.environ['API_KEY'] = 'Your API Key'
  • So we start by importing the os, chainlit, and langchain libraries.
  • From langchain, we import HuggingFaceHub. This will let us call the Falcon-7B-Instruct model through the Inference API and receive the responses generated by the model.
  • PromptTemplate is one of the components of LangChain, necessary for building applications based on a Large Language Model. It defines how the model should interpret the user's questions and in what context it should answer them.
  • Finally, we import LLMChain from LangChain. LLMChain is the module that chains different LangChain components together. Here we will chain our Falcon-7B-Instruct Large Language Model with the PromptTemplate.
  • Then we store our HuggingFace Inference API token in an environment variable, namely os.environ['API_KEY'].

Instruct the Falcon Model

Now we will infer the Falcon Instruct model through the HuggingFaceHub module. For this, first, we must provide the path to the model on Hugging Face. The code for this will be:

model_id = 'tiiuae/falcon-7b-instruct'

falcon_llm = HuggingFaceHub(huggingfacehub_api_token=os.environ['API_KEY'],
                            repo_id=model_id,
                            model_kwargs={"temperature":0.8,"max_new_tokens":2000})
  • First, we must give the id of the model we will work with. For us, it is the Falcon-7B-Instruct model. The id of this model can be found directly on the HuggingFace website: 'tiiuae/falcon-7b-instruct'.
  • Then we call the HuggingFaceHub module, where we pass the API token (taken from the environment variable) and the repo_id, i.e., the id of the model we will be working with.
  • We also provide model parameters like the temperature and the maximum number of new tokens. Temperature controls how creative the model should be: 1 means more creativity, and 0 means none.

Now we have clearly defined what model we will be working with, and the HuggingFace API will let us connect to this model and run our queries to start building our application.
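
As a quick test (a minimal sketch, assuming the token above is valid), the HuggingFaceHub wrapper can be called directly with a plain string before we wire up any chains:

# Optional sanity check: in this version of LangChain, LLM wrappers
# are directly callable with a prompt string and return the generated text.
answer = falcon_llm("What is the capital of France?")
print(answer)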

Prompt Template

After the model selection, the next step is defining the Prompt Template. The Prompt Template tells the model how it should behave: how it should interpret the question provided by the user, and how it should conclude in order to produce the output for the user's query. The code for defining our Prompt Template would be:

template = """

You're an AI assistant that gives useful solutions to person queries.

{query}

"""
immediate = PromptTemplate(template=template, input_variables=['question'])

The above template variable defines and sets the context of the Prompt Template for the Falcon model. The context here is simple: the AI needs to provide helpful answers to user queries, followed by the input variable {question}. This template, along with the variables defined in it, is passed to the PromptTemplate function, which is assigned to a variable. This variable is now the Prompt Template, which will later be chained together with the model.
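
To inspect exactly what the model will receive (a small illustrative check, not required for the application), we can format the template with a sample question:

# Fill the {question} placeholder to see the final prompt string
# that will be sent to the Falcon model.
print(prompt.format(question="What are the colours in the Rainbow?"))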

Chain Both Models

Now we have both the Falcon LLM and the Prompt Template ready. The final part is chaining them together. We will work with the LLMChain object from the LangChain library for this. The code for this will be:

falcon_chain = LLMChain(llm=falcon_llm,
                        prompt=prompt,
                        verbose=True)

With the help of LLMChain, we have chained the Falcon-7B-Instruct model with our own PromptTemplate. We have even set verbose=True, which is helpful for understanding what happens when the code runs. Now let's test the model by giving it a query:

print(falcon_chain.run("What are the colours in the Rainbow?"))

Here, we asked the model what the rainbow colours are. A rainbow contains the VIBGYOR colours (Violet, Indigo, Blue, Green, Yellow, Orange, and Red), and the output generated by the Falcon-7B-Instruct model is spot on. Setting the verbose option lets us see the prompt after formatting and tells us where the chain starts and ends. Finally, we are ready to create a UI for our conversational chat application.

Chainlit – UI for Large Language Models

In this section, we will work with the Chainlit package to create the UI for our application. Chainlit is a Python library that lets us build chat interfaces for Large Language Models in minutes. It integrates with LangFlow and with LangChain, the library we worked with previously. Creating the chat interface with Chainlit is simple; we have to write the following code:

@cl.langchain_factory(use_async=False)
def factory():

    prompt = PromptTemplate(template=template, input_variables=['question'])
    falcon_chain = LLMChain(llm=falcon_llm,
                            prompt=prompt,
                            verbose=True)

    return falcon_chain

Steps

  • First, we start with Chainlit's decorator for LangChain, @cl.langchain_factory.
  • Then we define a factory function that contains the LangChain code. The code we need here is the Prompt Template and the LLMChain module of LangChain, which build and chain our Falcon LLM.
  • Finally, the return value must be a LangChain instance. Here, we return the final chain created, i.e., the LLMChain instance falcon_chain.
  • use_async=False tells the code not to use the async implementation for the LangChain agent. The complete script is assembled in the sketch below.
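
Putting the pieces together, the complete app.py looks roughly like this (a minimal sketch assembled from the snippets above; 'Your API Key' remains a placeholder for your actual token):

# app.py -- assembled from the snippets in this guide
import os
import chainlit as cl
from langchain import HuggingFaceHub, PromptTemplate, LLMChain

os.environ['API_KEY'] = 'Your API Key'  # placeholder: your HuggingFace token

model_id = 'tiiuae/falcon-7b-instruct'
falcon_llm = HuggingFaceHub(huggingfacehub_api_token=os.environ['API_KEY'],
                            repo_id=model_id,
                            model_kwargs={"temperature": 0.8, "max_new_tokens": 2000})

template = """

You are an AI assistant that provides helpful answers to user queries.

{question}

"""

@cl.langchain_factory(use_async=False)
def factory():
    prompt = PromptTemplate(template=template, input_variables=['question'])
    falcon_chain = LLMChain(llm=falcon_llm, prompt=prompt, verbose=True)
    return falcon_chain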

Let’s Run the Code!

That's it. Now when we run the code, a chat interface will appear. But how is this possible? Chainlit takes care of everything: behind the scenes, it manages the websocket connections and is responsible for creating a separate LangChain instance (Chain, Agent, etc.) for every user that visits the site. To run our application, we type the following in the terminal:

$ chainlit run app.py -w

The -w flag enables auto-reload whenever we make live changes to our application code. After entering this, a new tab opens at localhost:8000.

[Screenshot: the Chainlit welcome screen]

This is the opening page, i.e., the welcome screen of Chainlit. We see that Chainlit builds an entire chat interface for us with just a single decorator. Let's try interacting with the Falcon model through this UI.

[Screenshot: chatting with the Falcon model through the Chainlit UI]

We see that the UI and the Falcon Instruct model are working perfectly fine. The model provides swift answers to the questions asked, and it even tried to explain the second question based on the user's context (explain to a 5-year-old). This is just the start of what we can achieve with these open-sourced Generative AI models. With a few modifications, we can create far more problem-oriented, real scenario-based applications.

Since the chat interface is a website, it is entirely possible to host it on any cloud platform. We can containerize the application and then deploy it to any container-based service on Google Cloud, AWS, Azure, or other cloud providers. With that, we can share our application with the outside world.

Conclusion

In this walkthrough, we have seen how to build a simple chat application with the new open-source Falcon Large Language Model, LangChain, and Chainlit. We leveraged these three packages and interconnected them to create a full-fledged solution, from code to working application. We also saw how to obtain the HuggingFace Inference API key to access thousands of pre-trained models in the HuggingFace library. With the help of LangChain, we chained the LLM with custom Prompt Templates. Finally, with Chainlit, we could create a chat application interface around our LangChain Falcon model within minutes.

Some of the key takeaways from this guide include:

  • Falcon is an open-source model and one of the most powerful LLMs, currently at the top of the OpenLLM Leaderboard
  • With Chainlit, it is possible to create a UI for an LLM within minutes
  • The Inference API lets us connect to many different models hosted on HuggingFace
  • LangChain helps in building custom Prompt Templates for Large Language Models
  • Chainlit's seamless integration with LangChain lets us build LLM applications quickly and with fewer errors

Frequently Asked Questions

Q1. What is the HuggingFace Inference API?

A. The Inference API is created by HuggingFace, allowing you to access thousands of pre-trained models in the HuggingFace library. With this API, you can access a variety of models, including Generative AI models, Natural Language Processing models, Audio Classification models, and Computer Vision models.

Q2. Are Falcon models powerful?

A. They are, especially the Falcon 40B (40 billion parameters) model. This model has surpassed other state-of-the-art models like Llama and DeepMind and took the top place on the OpenLLM Leaderboard.

Q3. What is the Chainlit package?

A. Chainlit is a Python library developed for creating UIs. With Chainlit, creating ready-to-work chat interfaces for Large Language Models within minutes is possible. The Chainlit package seamlessly integrates with LangFlow and LangChain, other packages used to create applications with Large Language Models.

Q4. Are Falcon models open-sourced?

A. Yes. The Falcon 40B (40 billion parameters) and Falcon 7B (7 billion parameters) models are open-sourced, which means anyone can work with them to create commercial applications without restrictions.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.
