# Configuring AAQ
There are several aspects of AAQ you can configure:
- **Application configs** through environment variables

  All required and optional environment variables are defined in
  `deployment/docker-compose/template.*.env` files. You will need to copy the
  templates into `.*.env` files:

  ```shell
  cp template.base.env .base.env
  cp template.core_backend.env .core_backend.env
  cp template.litellm_proxy.env .litellm_proxy.env
  ```

  To get a local setup running with docker compose, you won't need to change
  any values except for the LLM credentials in `.litellm_proxy.env`. See the
  rest of this page for more information on the environment variables.
- **LLM models** in `litellm_proxy_config.yaml`

  This file defines which LLM to use for which task. You may want to change the
  LLMs and specific calling parameters based on your needs.
- **LLM prompts** in `llm_prompts.py`

  While all prompts have been carefully selected to perform each task well, you
  can customize them to your needs here.
## Understanding the template environment files (`template.*.env`)
For local testing and development, the values should work as is, except for the
LLM API credentials in `.litellm_proxy.env`.
For production, make sure you confirm or update at least the values marked
"change for production".
- Secrets have been marked with 🔒.
- All optional values have been commented out. Uncomment to customize for your own case.
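
For example, to customize an optional value, uncomment the line and set your
own value (the variable below is just an illustration; the full listings appear
later on this page):

```shell
# Before (commented out, so the application default applies):
# ADMIN_CONTENT_QUOTA=1000

# After (uncommented and customized):
ADMIN_CONTENT_QUOTA=2000
```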
## AAQ-wide configurations
The base environment variables are shared by `caddy` (the reverse proxy),
`core_backend`, and `admin_app` at runtime.
If not done already, copy the template environment file to `.base.env`.
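From the `deployment/docker-compose/` directory:

```shell
cp template.base.env .base.env
```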
Then, edit the environment variables according to your needs (guide on updating the template):
`deployment/docker-compose/template.base.env`:

```shell
#### AAQ domain -- change for production ######################################
DOMAIN="localhost"
# Example value: `example.domain.com`
# This is the domain that admin_app will be hosted on. core_backend will be
# hosted on ${DOMAIN}/${BACKEND_ROOT_PATH}.
BACKEND_ROOT_PATH="/api"
# This is the path that core_backend will be hosted on.
# Only change if you want to use a different backend root path.
#### Google OAuth Client ID ###################################################
# NEXT_PUBLIC_GOOGLE_LOGIN_CLIENT_ID="update-me"
# If you want to use Google OAuth, set the correct value for your production.
# This value is used by core_backend and admin_app.
#### Backend URL ##############################################################
NEXT_PUBLIC_BACKEND_URL="https://${DOMAIN}${BACKEND_ROOT_PATH}"
# Do not change this value. This value is used by admin_app.
# If not set, it will default to "http://localhost:8000" in the admin_app.
```
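
As an illustration, with a hypothetical production domain the variables above
resolve as follows:

```shell
# Hypothetical production values:
DOMAIN="example.domain.com"
BACKEND_ROOT_PATH="/api"

# The admin app is then served at https://example.domain.com, core_backend at
# https://example.domain.com/api, and NEXT_PUBLIC_BACKEND_URL expands to
# "https://example.domain.com/api".
```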
## Configuring the backend (`core_backend`)
### Environment variables for the backend
If not done already, copy the template environment file to `.core_backend.env`
(guide on updating the template).
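From the `deployment/docker-compose/` directory:

```shell
cp template.core_backend.env .core_backend.env
```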
The `core_backend` service uses the following required and optional (commented
out) environment variables.
`deployment/docker-compose/template.core_backend.env`:

```shell
# If not set, default values are loaded from core_backend/app/**/config.py files
#### 🔒 Postgres variables -- change for production ###########################
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres #pragma: allowlist secret
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=postgres
#### 🔒 Admin user -- change for production ###################################
ADMIN_USERNAME="admin"
ADMIN_PASSWORD="fullaccess" #pragma: allowlist secret
ADMIN_API_KEY="admin-key" #pragma: allowlist secret
#### Admin user rate limits ###################################################
# ADMIN_CONTENT_QUOTA=1000
# ADMIN_API_DAILY_QUOTA=100
#### 🔒 JWT -- change for production ###########################################
JWT_SECRET="jwt-secret" #pragma: allowlist secret
#### Redis -- change for production ##########################################
REDIS_HOST="redis://localhost:6379"
# For docker compose, use "redis://redis:6379"
#### LiteLLM Proxy Server -- change for production ############################
LITELLM_ENDPOINT="http://localhost:4000"
# For docker compose, use "http://litellm_proxy:4000"
#### Variables for Huggingface embeddings container ###########################
# If on ARM, you need to build the embeddings image manually using
# `make build-embeddings-arm` from repository root and set the following variables
#EMBEDDINGS_IMAGE_NAME=text-embeddings-inference-arm
#PGVECTOR_VECTOR_SIZE=1024
#### Speech APIs ###############################################################
# CUSTOM_SPEECH_ENDPOINT=http://speech_service:8001/transcribe
#### Temporary folder for prometheus gunicorn multiprocess ####################
PROMETHEUS_MULTIPROC_DIR="/tmp"
#### Application-wide content limits ##########################################
# CHECK_CONTENT_LIMIT=True
# DEFAULT_CONTENT_QUOTA=50
#### Number of top content to return for /search. #############################
# N_TOP_CONTENT=5
#### Urgency detection variables ##############################################
# URGENCY_CLASSIFIER="cosine_distance_classifier"
# Choose between `cosine_distance_classifier` and `llm_entailment_classifier`
# URGENCY_DETECTION_MAX_DISTANCE=0.5
# Only used if URGENCY_CLASSIFIER=cosine_distance_classifier
# URGENCY_DETECTION_MIN_PROBABILITY=0.5
# Only used if URGENCY_CLASSIFIER=llm_entailment_classifier
#### LLM response alignment scoring ###########################################
# ALIGN_SCORE_THRESHOLD=0.7
#### LiteLLM tracing ##########################################################
LANGFUSE=False
# 🔒 Keys
# LANGFUSE_PUBLIC_KEY="pk-..."
# LANGFUSE_SECRET_KEY="sk-..." #pragma: allowlist secret
# Set LANGFUSE=True to enable Langfuse logging, and set the keys.
# See https://docs.litellm.ai/docs/observability/langfuse_integration for more
# information.
# Optional based on your Langfuse host:
# LANGFUSE_HOST="https://cloud.langfuse.com"
#### Google Cloud Storage variables ###########################################
# GCS_SPEECH_BUCKET="aaq-speech-test"
# Set this to the GCS bucket used for storage and retrieval in the speech
# workflow.
```
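
For production, replace the defaults for the variables marked 🔒 above with
strong random secrets. A minimal sketch using `openssl` (any cryptographically
secure generator works just as well):

```shell
# Generate a random secret, e.g. for JWT_SECRET, POSTGRES_PASSWORD, or
# ADMIN_API_KEY:
openssl rand -hex 32
```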
### Other configurations for the backend
You can view all configurations that `core_backend` uses in the
`core_backend/app/*/config.py` files -- for example,
`core_backend/app/config.py`.
Environment variables take precedence over the config files. You'll see in the
config files that we read parameters from the environment and, if they are not
found, fall back on the defaults provided. So any environment variable you set
will override the default set in the config file.
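
For example, `N_TOP_CONTENT` defaults to 5 (per the template above). Setting it
in `.core_backend.env` (or exporting it in the shell) overrides that default:

```shell
N_TOP_CONTENT=10
```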
## Configuring LiteLLM Proxy Server (`litellm_proxy`)
### LiteLLM Proxy Server configurations
You can edit the default LiteLLM Proxy Server settings by updating
`litellm_proxy_config.yaml`.
Learn more about the server configuration in the LiteLLM Proxy Server documentation.
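
For orientation, here is a minimal sketch of the general shape of a model entry
in a LiteLLM Proxy config (the alias and model below are placeholders, not
AAQ's actual configuration; see the repository's `litellm_proxy_config.yaml`
for the real entries):

```yaml
model_list:
  - model_name: chat                 # alias the application refers to
    litellm_params:
      model: gpt-4o-mini             # provider model passed through LiteLLM
      api_key: os.environ/OPENAI_API_KEY
```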
### Authenticating LiteLLM Proxy Server to LLMs
The `litellm_proxy` server uses the following required and optional (commented
out) environment variables for authenticating to external LLM APIs (guide on
updating the template). You will need to set up the correct credentials (API
keys, etc.) for all LLM APIs declared in `litellm_proxy_config.yaml`. See
LiteLLM's documentation for more information about authentication for
different LLMs.
`deployment/docker-compose/template.litellm_proxy.env`:

```shell
# For every LLM API you decide to use, defined in litellm_proxy_config.yaml,
# ensure you set up the correct authentication(s) here.
#### 🔒 Vertex AI auth -- change for production ###############################
# Must be set if using VertexAI models
GOOGLE_APPLICATION_CREDENTIALS="/app/credentials.json"
# Path to the GCP credentials file *within* the litellm_proxy container.
# This default value should work with docker compose.
VERTEXAI_PROJECT="gcp-project-id-12345"
VERTEXAI_LOCATION="us-central1"
VERTEXAI_ENDPOINT="https://us-central1-aiplatform.googleapis.com"
# Vertex AI endpoint. Note that you may want to increase the request quota from
# GCP's APIs console.
#### 🔒 OpenAI auth -- change for production ##################################
# Must be set if using OpenAI APIs.
OPENAI_API_KEY="sk-..."
#### 🔒 Huggingface embeddings -- change for production #######################
# HUGGINGFACE_MODEL="Alibaba-NLP/gte-large-en-v1.5"
# HUGGINGFACE_EMBEDDINGS_API_KEY="embeddings" #pragma: allowlist secret
# HUGGINGFACE_EMBEDDINGS_ENDPOINT="http://huggingface-embeddings"
# This default endpoint value should work with docker compose.
#### 🔒 LiteLLM Proxy UI -- change for production #############################
# UI_USERNAME="admin"
# UI_PASSWORD="admin"
```
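
Once the proxy is running with valid credentials, you can sanity-check that it
is up. For example, assuming the docker compose setup exposing the proxy on
port 4000 (see LiteLLM's documentation for its health endpoints):

```shell
# Liveness probe -- returns a simple response if the proxy process is up.
curl http://localhost:4000/health/liveliness
```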
## Configuring optional components
For setup instructions, see the documentation for each specific component under
Optional components.