create
POST /create

By default, we use the open-source GPT4All model to get you started. You can also specify your own configuration by uploading a config YAML file.

For example, create a config.yaml file (adjust according to your requirements):

app:
  config:
    id: "default-app"

llm:
  provider: openai
  config:
    model: "gpt-3.5-turbo"
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
    prompt: |
      Use the following pieces of context to answer the query at the end.
      If you don't know the answer, just say that you don't know, don't try to make up an answer.

      $context

      Query: $query

      Helpful Answer:

vectordb:
  provider: chroma
  config:
    collection_name: "rest-api-app"
    dir: db
    allow_reset: true

embedder:
  provider: openai
  config:
    model: "text-embedding-ada-002"

To learn more about custom configurations, check out the custom configurations docs. To explore more example config YAML files for embedchain, visit embedchain/configs.

Now, you can upload this config file in the request body.

For example,

Request
curl --request POST \
  --url 'http://localhost:8080/create?app_id=my-app' \
  -F "config=@/path/to/config.yaml"
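The same upload can also be made from Python using only the standard library. The host, port, and app_id below mirror the curl example and are assumptions about your deployment; in practice, read the payload bytes from your config.yaml instead of the inline sample.

```python
import io
import urllib.request
import uuid


def build_multipart(field_name: str, filename: str, payload: bytes):
    """Build a multipart/form-data body and its Content-Type header value."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(b"Content-Type: application/x-yaml\r\n\r\n")
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"


# Inline sample config bytes; replace with the contents of your config.yaml.
data, content_type = build_multipart(
    "config", "config.yaml", b"app:\n  config:\n    id: default-app\n"
)
req = urllib.request.Request(
    "http://localhost:8080/create?app_id=my-app",
    data=data,
    headers={"Content-Type": content_type},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with the REST API server running
```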

Note: To use custom models, an API key might be required. Refer to the table below to determine the necessary API key for your provider.

Keys                        Providers
OPENAI_API_KEY              OpenAI, Azure OpenAI, Jina, etc.
OPENAI_API_TYPE             Azure OpenAI
OPENAI_API_BASE             Azure OpenAI
OPENAI_API_VERSION          Azure OpenAI
COHERE_API_KEY              Cohere
TOGETHER_API_KEY            Together
ANTHROPIC_API_KEY           Anthropic
JINACHAT_API_KEY            Jina
HUGGINGFACE_ACCESS_TOKEN    Huggingface
REPLICATE_API_TOKEN         LLAMA2
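Following the table above, a small pre-flight check can catch a missing key before the app starts. This is a sketch, not part of the REST API itself; the lowercase provider identifiers in the mapping are assumptions for illustration.

```python
import os
from typing import Optional

# Provider -> required environment variable, per the table above.
# The lowercase provider names here are illustrative identifiers.
REQUIRED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "cohere": "COHERE_API_KEY",
    "together": "TOGETHER_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "jinachat": "JINACHAT_API_KEY",
    "huggingface": "HUGGINGFACE_ACCESS_TOKEN",
    "llama2": "REPLICATE_API_TOKEN",
}


def missing_key(provider: str) -> Optional[str]:
    """Return the name of the unset env var for `provider`, or None if set."""
    key = REQUIRED_KEYS.get(provider.lower())
    if key and not os.environ.get(key):
        return key
    return None
```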

To set environment variables, run the Docker command with the -e flag.

For example,

docker run --name embedchain -p 8080:8080 -e OPENAI_API_KEY=<YOUR_OPENAI_API_KEY> embedchain/rest-api:latest

Query Parameters

app_id (string, required)

Body (multipart/form-data)

config (file)

Response (200 - application/json)

response (string, required)
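On success, the endpoint returns a JSON object with a single response string. A minimal sketch of handling it follows; the message text shown is a hypothetical placeholder, not the server's exact wording.

```python
import json

# Hypothetical 200 response body; the actual message text may differ.
raw = '{"response": "App created successfully"}'

payload = json.loads(raw)
assert isinstance(payload["response"], str)
print(payload["response"])
```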