Rocket.Chat AI App

    The Rocket.Chat AI app is currently in beta. For feedback, reach out to us on the AI app channel. For setup details, refer to the AI app setup repository.

    The Rocket.Chat AI assistant app is powered by the Retrieval-Augmented Generation (RAG) technique. This technique allows large language models (LLMs) to connect with external resources, giving users access to the latest, most accurate information. To use the Rocket.Chat AI app, you need a self-hosted LLM of your choice, which improves answer accuracy while keeping your data secure on your own infrastructure.

    Key features of the Rocket.Chat AI app

    • Get answers to your general or business-specific queries.

    • Get a summary of thread messages to stay up-to-date on important conversations quickly.

    • Get a summary of livechat messages. This helps agents understand the context of an ongoing customer conversation.

    In this document, you will learn about the app installation, configuration, and usage details.

    Install the app

    Follow these steps in your workspace:

    1. Go to Administration > Apps > Marketplace and search for Rocket.Chat AI.

    2. Click Install and accept the required permissions.

    Configure the app

    Ensure that your workspace admin has set up the LLM. Follow the Rocket.Chat AI App Setup Guide for details and further examples for the settings.

    In the app Settings tab, update the following:

    • Model Selection: Select the model to be used for answering questions.

    • Model URL: Enter the model URL, for example, http://localhost:8020/v1. Don't include a trailing slash (/) at the end of the URL. If no value is provided, the default is http://llama3-8b for Llama 3 8B and http://llama3-70b for Llama 3 70B.
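    Because a trailing slash breaks the request path, anything that sets this value programmatically should strip it first. A minimal sketch (the helper name is illustrative, not part of the app):

    ```python
    def normalize_model_url(url: str) -> str:
        """Drop any trailing slashes so path concatenation stays valid."""
        return url.rstrip("/")

    print(normalize_model_url("http://localhost:8020/v1/"))  # http://localhost:8020/v1
    print(normalize_model_url("http://llama3-8b"))           # already clean, unchanged
    ```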

    • Assistant Name: Provide a name for the AI assistant. By default, the name is Rocket.Chat Assistant. The assistant uses this name to greet users when asked for its name, for example, "Hi, I am Rocket.Chat Assistant. How can I help you?"

    • Vector database selection: Select the vector database to be used for RAG. The vector database retrieves the relevant documents from the knowledge base. Currently, only Milvus is supported.

    • Vector database URL: Enter the URL of the vector database, for example, http://localhost:19530.

    • Vector database collection: Enter the name of the collection in the vector database, for example, milvus_collection. If you are using the Rubra AI Assistant, you can enter the URL as http:{{RUBRA_API_SERVER_URL}}?name={{ASSISTANT_NAME}}. The {{RUBRA_API_SERVER_URL}} and {{ASSISTANT_NAME}} placeholders are replaced with the actual values when you save the configuration. If there are any errors, the values are not replaced and revert to the original values.

    • Vector database API key: Enter the API key for the vector database; this is required for authentication. For Milvus, use a colon (:) to concatenate the username and password that you use to access your Milvus instance, for example, root:Milvus.

    • Vector database text field: Enter the name of the field in the vector database that contains the text data, for example, text.

    • Embedding model selection: Select the embedding model to be used for RAG. The embedding model converts the text data into vectors. Currently, only a locally deployed model is supported.

    • Embedding model URL: Enter the URL of the embedding model, for example, http://localhost:8020/embed_multiple.

    Note that the embedding model inference server must accept a fixed payload format as input and return a fixed response format as output:

    // Input: a JSON array of strings
    [
        "text1", "text2", ...
    ]

    // Output: one embedding vector per input text
    {
        "embeddings": [
            [0.1, 0.2, 0.3, ...],
            [0.4, 0.5, 0.6, ...]
        ]
    }
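    The contract above can be exercised without a live server. The sketch below builds the documented input payload and parses a sample response of the documented shape; the helper names are illustrative, and a real deployment would POST the payload to the /embed_multiple endpoint:

    ```python
    import json

    def build_embed_payload(texts):
        # The endpoint expects a JSON array of strings as input.
        return json.dumps(texts)

    def parse_embed_response(body: str):
        # The response carries one vector per input text under "embeddings".
        return json.loads(body)["embeddings"]

    payload = build_embed_payload(["text1", "text2"])
    sample_response = '{"embeddings": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]}'
    vectors = parse_embed_response(sample_response)
    print(len(vectors))  # 2 — one embedding per input text
    ```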

    Use the Rocket.Chat AI app

    You can use the app in any of the following ways:

    Create a DM with the AI app

    From your workspace menu, click Create New > Direct messages. In the DM, send a prompt with your query, and the bot responds in the thread.

    Get a thread summary

    In any channel, hover over the message in the thread you want to summarize and open the kebab menu. Click Summarize until here. The AI Assistant will provide a summary of the thread in the DM.

    Note that thread summary is not supported in DMs.

    Get a livechat conversation summary

    On the Room Actions menu, click the kebab menu, and click Summarize chat. The AI Assistant summarizes the livechat conversation in the same room. Based on the conversation, the bot provides information about the users, the issue, the agent's response, and any follow-up to the conversation.

    Troubleshooting

    Whenever an error occurs, the app shows a default error message: An unexpected error occurred, please try again. If you are an admin, go to the App Info page and open the Logs tab to view the logs by section. The logs will help you debug the issue.

    If the interaction was a UI interaction, the logs will be listed under the jobProcessor section. The following screenshot shows an example:

    For any other type of interaction, the logs will be listed under the executePostMessageSent section. The following screenshot shows an example:

