FlowiseAI

Embark on an exhilarating journey into the AI realm where customer support meets charm and charisma, all powered by FlowiseAI! Let’s roll up our digital sleeves and whip up an AI companion that not only solves problems but does so with a sprinkle of humor and a dash of panache. Buckle up, it’s going to be a witty ride!

Welcome to FlowiseAI

Meet FlowiseAI: not just your average UI platform, but a veritable wizard in the world of customized LLM applications and AI agents. Imagine a world where AI does more than just work—it wows. Built on the robust shoulders of LangChain.js, FlowiseAI is your gateway to crafting LLM orchestration flows and autonomous agents that are as smart as they are smooth. As you can see in the diagram below, FlowiseAI is the central hub that orchestrates various components like ChatOpenAI, Vector Storage, Text Tinkerer, and Memory Magic to create a seamless and delightful customer support experience.

FlowiseAI Architecture
Figure 1: A Template from the FlowiseAI Marketplace

Why FlowiseAI Rocks Your Socks Off

  • LLM Orchestration: Like a symphony conductor wielding a baton, it harmonizes various components into a seamless performance.
  • Autonomous Agents: These aren’t just bots; they’re your digital minions ready to tackle any task with a flair.
  • Integration Galore: With APIs, SDKs, and embedded chat, it’s like having the Swiss Army knife of AI toolkits (see the quick API sketch right after this list).
  • Open Source Freedom: Tinker, tailor, and transform at will—FlowiseAI is the playground for your creative spirit.
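
About that “Integration Galore” bullet: once you have a chatflow saved, Flowise exposes it over a simple prediction REST endpoint. Here is a minimal TypeScript sketch of calling it; the chatflow ID is a placeholder you copy from your own flow, and the exact response shape can vary between Flowise versions, so treat this as a starting point rather than gospel.

    // Ask a running Flowise chatflow a question via its prediction API.
    // Assumes Flowise is listening on http://localhost:3000 and
    // "YOUR_CHATFLOW_ID" is the ID copied from your own chatflow.
    async function askFlowise(question: string): Promise<unknown> {
      const response = await fetch(
        "http://localhost:3000/api/v1/prediction/YOUR_CHATFLOW_ID",
        {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ question }),
        }
      );
      return response.json();
    }

    askFlowise("How do I reset my password?").then(console.log);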

Crafting Your AI Sidekick: A Step-by-Step Guide

Step 1: Setup Shenanigans

  • Install Node.js: Grab a recent release (Flowise wants Node.js 18.15 or newer) because, well, you deserve the best.
  • Summon FlowiseAI:
    npm install -g flowise
    
  • Awaken the Server Beast:
    npx flowise start
    
  • Enter the Portal: Point your browser to http://localhost:3000 and step into the command center, where the FlowiseAI dashboard will be waiting for you.
    FlowiseAI Dashboard
    Figure 2: FlowiseAI Dashboard

Step 2: Browse around the Marketplace

  • Explore the Marketplace: Like a digital bazaar, it’s brimming with possibilities. This is the natural first stop on your journey, because the ready-made templates are the fastest way to see how a working flow is wired together.
  • Try the Templates: Pick one that speaks to you and hit the “Use Template” button, or roll your own later if you’re feeling adventurous. The image below gives you a taste of what’s on offer.
    FlowiseAI Marketplace
    Figure 3: FlowiseAI Marketplace
  • Customize the Template: Make it your own by tweaking the settings and adding your personal touch. Before you start customizing, though, let’s configure our credentials so we don’t waste time hopping over to OpenAI, Pinecone, and Upstash for API keys mid-build. In the image below, you can see that I have already configured credentials for OpenAI and Hugging Face.
    Credential Configuration
    Figure 4: Credential Configuration

Step 3: Let’s Make Our Own ChatFlow

  • Create a new ChatFlow: Click the “Add New” button and you will land in the ChatFlow editor. In the image below, you can see that I have already created a ChatFlow called “PaperlessChain”. Ignore the name; it’s just what came to mind, because I am not a poet.

    ChatFlow Tab
    Figure 5: ChatFlow Tab

  • Add a new Node: Click the “Add Node” button to bring up the node menu. In the image below, you can see the different types of nodes you can drop into your ChatFlow: nodes that send or receive messages, call APIs, store data, or make decisions.

    Node Tab
    Figure 6: Node Tab

  • An Example: In the image below, you can see the handful of nodes I have added to my ChatFlow. Let’s decompose this chatflow and see what each node does; a rough code sketch of the whole flow follows after the list.

    PaperlessChain Example
    Figure 7: A Self-Created ChatFlow Example

    • Recursive Character Text Splitter Node: This node breaks text down into smaller, more manageable pieces. ‘Chunk Size’ sets the maximum number of characters in each piece, and ‘Chunk Overlap’ lets neighbouring pieces share some characters so context isn’t lost at the boundaries.
    • API Loader Node: This node retrieves data from an external API. The ‘GET’ method means it is fetching data, and the URL is the endpoint it pulls from. Since I am using Paperless-ngx, there is no need to download a document and upload it into the chatflow; we simply use the REST API that Paperless-ngx provides. The output of this node feeds into the document input of the next node for further processing.
    • OpenAI Embeddings Node: This node is used to generate vector representations of text using OpenAI’s embeddings model (specified by the ‘Model Name’ field). These vectors capture semantic meaning and can be used for tasks like semantic search or clustering.
    • In-Memory Vector Store Node: This is a storage node for vectors. It takes document vectors and embeddings as input and lets you retrieve similar documents based on vector proximity. You could absolutely use Pinecone as the vector store here, if you have money to burn. (I got a huge bill from Pinecone, so I switched to the In-Memory Vector Store node instead. I am not a millionaire, you know.)
    • ChatOpenAI Node: A node that interfaces with OpenAI’s conversational AI model (indicated by ‘gpt-3.5-turbo’ in ‘Model Name’). It generates responses to user inputs, optionally reusing the conversational context stored in its cache.
    • Conversational Retrieval QA Chain Node: This node uses the chat model for understanding and generating language and the vector store retriever to find the best answers from a database of documents. It may also utilize a memory system to keep track of the conversation’s context and apply input moderation to filter out undesirable inputs.
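
To make the node tour above concrete, here is a rough LangChain.js sketch (in TypeScript) of what the “PaperlessChain” flow wires together behind the scenes. Flowise handles all of this for you through the node UI, so treat this as a mental model rather than Flowise’s actual internals: the Paperless-ngx URL and token are placeholders, and the imports assume the classic langchain package layout that Flowise builds on.

    import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
    import { OpenAIEmbeddings } from "langchain/embeddings/openai";
    import { MemoryVectorStore } from "langchain/vectorstores/memory";
    import { ChatOpenAI } from "langchain/chat_models/openai";
    import { ConversationalRetrievalQAChain } from "langchain/chains";

    async function askPaperlessChain(question: string) {
      // API Loader node: GET document text from the Paperless-ngx REST API
      // (URL and token below are placeholders for your own instance).
      const res = await fetch("http://localhost:8000/api/documents/", {
        headers: { Authorization: "Token YOUR_PAPERLESS_TOKEN" },
      });
      const data: any = await res.json();
      const rawText = (data.results ?? [])
        .map((doc: any) => doc.content ?? "")
        .join("\n\n");

      // Recursive Character Text Splitter node: chop text into overlapping chunks.
      const splitter = new RecursiveCharacterTextSplitter({
        chunkSize: 1000,   // max characters per chunk
        chunkOverlap: 200, // characters shared between neighbouring chunks
      });
      const docs = await splitter.createDocuments([rawText]);

      // OpenAI Embeddings + In-Memory Vector Store nodes: embed the chunks and
      // keep the vectors in memory (no Pinecone bill). Both OpenAI classes read
      // OPENAI_API_KEY from the environment.
      const store = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());

      // ChatOpenAI + Conversational Retrieval QA Chain nodes: answer questions
      // with the retriever supplying document context.
      const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });
      const chain = ConversationalRetrievalQAChain.fromLLM(model, store.asRetriever());
      return chain.call({ question, chat_history: [] });
    }

    askPaperlessChain("What does my latest invoice say?").then(console.log);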

Intermission: Aight, folks, I also deserve a break, you know. I have been working on a lot of projects for a while now. I need to take a break and have a cup of coffee. I will be back soon.

Ting Xu
Innovative Electrical Engineer

Transforming Technology with Creative Vision