ChatOllama

Prerequisite

  1. Download Ollama or run it on Docker.

  2. For example, you can use the following command to spin up a Docker instance with llama3

    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run llama3
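Before wiring up the node, you can confirm the container is reachable. As a sketch (assuming Ollama's default port 11434, per the `docker run` mapping above), the snippet below builds a request against Ollama's `/api/tags` endpoint, which lists locally pulled models:

```python
from urllib import request

# Ollama's local HTTP API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434"

def tags_request(base_url: str) -> request.Request:
    """Build a GET request for /api/tags, which lists the models
    Ollama has pulled locally (llama3 should appear after the
    `ollama exec ... run llama3` step above)."""
    return request.Request(f"{base_url}/api/tags")

req = tags_request(OLLAMA_URL)
print(req.full_url)
# To actually query the running container, uncomment:
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```

If the call returns a JSON body listing `llama3`, the server is ready for ChatOllama.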

Setup

  1. Chat Models > drag ChatOllama node

  2. Fill in the model that is running on Ollama. For example: llama3. You can also use additional parameters:

  3. Voila 🎉, you can now use the ChatOllama node in CiniterFlow
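Under the hood, the node talks to Ollama's `/api/chat` endpoint, and the additional parameters (temperature, top_k, top_p, ...) map onto the request's `options` field. A minimal sketch of that request body, assuming the documented Ollama chat API shape (the exact fields CiniterFlow sends are an assumption):

```python
import json

def chat_payload(model: str, prompt: str, **options) -> str:
    """Serialize a request body for Ollama's /api/chat endpoint.
    Keyword arguments become Ollama `options`, which is where the
    node's additional parameters end up, e.g. temperature=0.7."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,       # one complete response instead of a token stream
        "options": options,    # e.g. {"temperature": 0.7}
    }
    return json.dumps(body)

payload = chat_payload("llama3", "Hello!", temperature=0.7)
print(payload)
```

POSTing this body to `http://localhost:11434/api/chat` returns the model's reply as JSON.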

Running on Docker

If you are running both CiniterFlow and Ollama on Docker, you'll have to change the Base URL for ChatOllama.

For Windows and macOS, specify http://host.docker.internal:11434. For Linux-based systems, use the default Docker bridge gateway instead, since host.docker.internal is not available: http://172.17.0.1:11434 (the port matches the 11434:11434 mapping in the docker run command above).
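The platform-dependent choice of Base URL can be sketched as a small helper (illustrative only; it assumes Ollama's default port 11434 from the docker run mapping above, and 172.17.0.1 as Docker's default bridge gateway):

```python
def ollama_base_url(system: str) -> str:
    """Return the ChatOllama Base URL for a container-to-container
    setup. `system` is the host OS name as platform.system() would
    report it ("Windows", "Darwin", or "Linux")."""
    if system in ("Windows", "Darwin"):
        # Docker Desktop resolves host.docker.internal to the host machine.
        return "http://host.docker.internal:11434"
    # Plain Docker Engine on Linux: use the default bridge gateway.
    return "http://172.17.0.1:11434"

print(ollama_base_url("Darwin"))
print(ollama_base_url("Linux"))
```

If you run both containers on a user-defined Docker network instead, the container name (e.g. `http://ollama:11434`) also works as a hostname.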

Ollama Cloud

  1. Create an API key on ollama.com.
  2. In CiniterFlow, click Create Credential, select Ollama API, and enter your API Key.
  3. Set the Base URL to https://ollama.com
  4. Enter the models that are available on Ollama Cloud.
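The credential you create is what authenticates requests against the hosted endpoint. As a hedged sketch (the Bearer-token header is an assumption based on Ollama's hosted API; CiniterFlow supplies it for you once the credential is attached):

```python
CLOUD_BASE_URL = "https://ollama.com"

def cloud_headers(api_key: str) -> dict:
    """Headers for a request to Ollama Cloud. The API key from
    ollama.com is passed as a Bearer token (assumed scheme)."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = cloud_headers("your-api-key")  # hypothetical placeholder key
print(headers["Authorization"])
```

With the Base URL set to https://ollama.com and the credential attached, the ChatOllama node sends these authenticated requests for you.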
