Embedchain offers several configuration options for your LLM, vector database, and embedding model. All of these configuration options are optional and have sane defaults.
You can configure different components of your app (llm, embedding model, or vector database) through a simple YAML configuration that Embedchain offers. Here is a generic full-stack example of the YAML config:
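The block below reconstructs such an example from the keys explained in the walkthrough that follows; the specific values (model name, prompts, directory, collection name) are illustrative, not required:

```yaml
app:
  config:
    name: "full-stack-app"

llm:
  provider: openai
  config:
    model: "gpt-3.5-turbo"
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
    prompt: |
      Use the following pieces of context to answer the query at the end.
      If you don't know the answer, just say so; don't try to make up an answer.

      $context

      Query: $query

      Helpful Answer:
    system_prompt: |
      Act as William Shakespeare. Answer the questions in the style of William Shakespeare.

vectordb:
  provider: chroma
  config:
    collection_name: "full-stack-app"
    dir: db
    allow_reset: true

embedder:
  provider: openai
  config:
    model: "text-embedding-ada-002"

chunker:
  chunk_size: 2000
  chunk_overlap: 100
  length_function: len
```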
Embedchain applications are configurable using a YAML file, a JSON file, or by directly passing the config dictionary. Check out the docs here on how to use other formats.
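For instance, the same configuration can be loaded in any of these ways (a minimal sketch; `App.from_config` accepts either a `config_path` to a YAML/JSON file or a `config` dictionary in recent Embedchain versions, but check the docs for your installed version):

```python
from embedchain import App

# Load configuration from a YAML (or JSON) file...
app = App.from_config(config_path="config.yaml")

# ...or pass the equivalent dictionary directly.
app = App.from_config(config={
    "llm": {"provider": "openai", "config": {"model": "gpt-3.5-turbo"}},
})
```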
Alright, let’s dive into what each key means in the YAML config above:
`app` Section:
- `config`:
  - `name` (String): The name of your full-stack application.
  - `id` (String): The id of your full-stack application.
  - `collect_metrics` (Boolean): Indicates whether metrics should be collected for the app. Defaults to `True`.
  - `log_level` (String): The log level for the app. Defaults to `WARNING`.
`llm` Section:
- `provider` (String): The provider for the language model, which is set to `openai`. You can find the full list of llm providers in our docs.
- `config`:
  - `model` (String): The specific model being used, `gpt-3.5-turbo`.
  - `temperature` (Float): Controls the randomness of the model's output. A higher value (closer to 1) makes the output more random.
  - `max_tokens` (Integer): Controls how many tokens are used in the response.
  - `top_p` (Float): Controls the diversity of word selection. A higher value (closer to 1) makes word selection more diverse.
  - `stream` (Boolean): Controls whether the response is streamed back to the user (set to false).
  - `prompt` (String): A prompt for the model to follow when generating responses; requires `$context` and `$query` variables.
  - `system_prompt` (String): A system prompt for the model to follow when generating responses; in this case, it's set to the style of William Shakespeare.
  - `number_documents` (Integer): Number of documents to pull from the vectordb as context. Defaults to 1.
  - `api_key` (String): The API key for the language model.
  - `model_kwargs` (Dict): Keyword arguments to pass to the language model. Used for the `aws_bedrock` provider, since it requires different arguments for each model (see the sketch below).
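To make `model_kwargs` concrete, here is a sketch of an `aws_bedrock` config; the model id and the keys inside `model_kwargs` are illustrative assumptions, since each Bedrock model accepts its own parameter names:

```yaml
llm:
  provider: aws_bedrock
  config:
    model: amazon.titan-text-express-v1  # illustrative Bedrock model id
    model_kwargs:
      # Passed through to the model as-is; valid keys vary per Bedrock model.
      temperature: 0.5
      topP: 1
```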
`vectordb` Section:
- `provider` (String): The provider for the vector database, set to `chroma`. You can find the full list of vector database providers in our docs.
- `config`:
  - `collection_name` (String): The initial collection name for the vectordb, set to `full-stack-app`.
  - `dir` (String): The directory for the local database, set to `db`.
  - `allow_reset` (Boolean): Indicates whether resetting the vectordb is allowed, set to true (see the snippet below).
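With `allow_reset: true`, the database can also be wiped programmatically. A minimal sketch, assuming the `config.yaml` from the example above:

```python
from embedchain import App

app = App.from_config(config_path="config.yaml")
app.add("https://www.example.com")  # ingest a source into the vector database

# Requires allow_reset: true in the vectordb config.
app.reset()  # deletes all data in the vector database
```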
`embedder` Section:
- `provider` (String): The provider for the embedder, set to `openai`. You can find the full list of embedding model providers in our docs.
- `config`:
  - `model` (String): The specific model used for text embedding, `text-embedding-ada-002`.
  - `vector_dimension` (Integer): The vector dimension of the embedding model. Defaults to a model-specific value.
  - `api_key` (String): The API key for the embedding model.
  - `deployment_name` (String): The deployment name for the embedding model.
  - `title` (String): The title for the embedding model for Google Embedder.
  - `task_type` (String): The task type for the embedding model for Google Embedder (see the illustrative configs below).
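To make `deployment_name`, `title`, and `task_type` concrete, here are two illustrative embedder configs; the provider names follow Embedchain's conventions, but the specific values are assumptions:

```yaml
# Azure OpenAI addresses embeddings by deployment rather than by raw model name.
embedder:
  provider: azure_openai
  config:
    model: text-embedding-ada-002
    deployment_name: my-embedding-deployment  # illustrative deployment name
```

```yaml
# Google Embedder uses title and task_type to tune embeddings for their use case.
embedder:
  provider: google
  config:
    model: models/embedding-001
    task_type: retrieval_document
    title: "Embeddings for my app"  # illustrative title
```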
`chunker` Section:
- `chunk_size` (Integer): The size of each chunk of text that is sent to the language model.
- `chunk_overlap` (Integer): The amount of overlap between each chunk of text.
- `length_function` (String): The function used to calculate the length of each chunk of text. In this case, it's set to `len`. You can also use any function imported directly as a string here.
- `min_chunk_size` (Integer): The minimum size of each chunk of text that is sent to the language model. Must be less than `chunk_size`, and greater than `chunk_overlap` (see the example below).
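For example, a chunker config that satisfies the `chunk_overlap < min_chunk_size < chunk_size` constraint (the numbers are illustrative, not recommendations):

```yaml
chunker:
  chunk_size: 2000      # maximum size of each chunk
  chunk_overlap: 100    # characters shared between consecutive chunks
  length_function: len  # function used to measure chunk length
  min_chunk_size: 200   # greater than chunk_overlap, less than chunk_size
```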
`cache` Section: (Optional)
- `similarity_evaluation` (Optional): The config for the similarity evaluation strategy. If not provided, the default `distance`-based similarity evaluation strategy is used.
  - `strategy` (String): The strategy to use for similarity evaluation. Currently, only `distance`- and `exact`-based similarity evaluation is supported. Defaults to `distance`.
  - `max_distance` (Float): The bound of maximum distance. Defaults to `1.0`.
  - `positive` (Boolean): If a larger distance indicates that two entities are more similar, set it to `True`, otherwise `False`. Defaults to `False`.
- `config` (Optional): The config for initializing the cache. If not provided, sensible default values are used as mentioned below.
  - `similarity_threshold` (Float): The threshold for similarity evaluation. Defaults to `0.8`.
  - `auto_flush` (Integer): The number of queries after which the cache is flushed. Defaults to `20`.

If you provide a `cache` section, the app will automatically configure and use a cache to store the results of the language model. This is useful if you want to speed up response time and save inference costs for your app.
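Putting the cache options together, an explicit `cache` section using only the documented defaults would look like this (equivalent to enabling the cache with no overrides):

```yaml
cache:
  similarity_evaluation:
    strategy: distance   # or "exact"
    max_distance: 1.0
    positive: false
  config:
    similarity_threshold: 0.8
    auto_flush: 20       # flush after every 20 queries
```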
If you have questions about the configuration above, please feel free to reach out to us using one of the following methods: