Turning memories into mastery
- Python 3.10 (client) and 3.12 (server)
- Docker
- Node v22
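You can check that the prerequisites are installed with the usual version flags (note that two different Python interpreters are needed, as explained next):

python --version
docker --version
node --version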
Due to different dependencies (and their interrelations), the client (only the CLI application) and the server need different Python versions. Both, however, share a pre-commit setup to keep the code style consistent across the repository. The configuration is stored in .pre-commit-config.yaml. Install the recommended extensions from .vscode/extensions.json.
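Once the hooks have been installed with pre-commit install (part of the setup steps below), they run automatically on every commit. You can also run them manually over the whole repository; this is standard pre-commit usage, not anything project-specific:

pre-commit run --all-files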
Use Python 3.12
python -m venv .venv.server
# Activate environment based on your system (Mac: source .venv.server/bin/activate)
pip install -r requirements-server.txt
pip install -r requirements-local.txt
cp .env.db.example .env.db
cp .env.example .env
pre-commit install

Finally, set the defined variables in .env and .env.db accordingly. For an example of the .env, see here.
To run LlamaIndex components, start the respective services from the compose.yml file:
docker compose up -d vector-db pgadmin
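To confirm that both services came up and to watch their startup output, the plain Docker Compose commands are enough (nothing here is specific to this project):

docker compose ps
docker compose logs -f vector-db pgadmin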
Use Python 3.10

python -m venv .venv
# Activate environment based on your system (Mac: source .venv/bin/activate)
pip install -r requirements-client.txt
pip install -r requirements-local.txt
cp .env.local.example .env.local
pre-commit install

Finally, set the defined variable in .env.local accordingly. If you don't know the value of AUDIO_DEVICE, leave it empty; the CLI prints a list of available devices on startup.
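For example, you can discover the right device value by starting the CLI once with AUDIO_DEVICE left empty and reading the printed list; run-cli.py is covered in the run instructions below:

python run-cli.py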
Use Node v22
cd src/frontend
npm install
cp .env.local.example .env.local

Set the defined variable in src/frontend/.env.local accordingly. For an example, see here.
Note
Due to loading the LlamaIndex setup, the startup can take some time. See the logs and wait until the setup is completed.
Important
Requires the development setup described here.
Use the startup files directly:
- run.py starts the Flask server
- run-cli.py starts the CLI application (use the defined flags for startup)
For the web client, use npm run dev in the src/frontend directory.
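Put together, a typical development session uses one terminal per component. The exact CLI flags are project-specific, so --help below is only the usual argument-parser assumption, not something this README documents:

# Terminal 1: Flask server (server virtual environment, Python 3.12)
python run.py

# Terminal 2: CLI client (client virtual environment, Python 3.10)
python run-cli.py --help

# Terminal 3: web client
cd src/frontend
npm run dev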
Important
Requires the development setup described here.
Run the server and the client application via the Run and Debug tab in VS Code.
Alternatively, you can build and run the server and the web client in Docker containers. Use compose.yml for the setup. For an example of the .env and src/frontend/.env.local files, see here.
docker compose up -d --build

- Development setup: http://127.0.0.1:3050
- Docker: http://127.0.0.1:7070
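Because the LlamaIndex setup delays startup (see the note above), it helps to follow the container logs until initialization is finished; a plain Docker Compose command suffices, since this README does not name the individual services here:

docker compose logs -f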
When using the LMStudioLLM, you need to set up 'Meta-Llama-3.1-8B-Instruct' in your local LM Studio installation.
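This README does not say how the LMStudioLLM connects to LM Studio. If you serve the model through LM Studio's built-in local server on its default port 1234 (an assumption, not something stated here), you can check that the model is loaded and being served:

# assumes LM Studio's local server is running on its default port
curl http://localhost:1234/v1/models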
.env (development setup):

LECTURE_TRANSLATOR_TOKEN="<TOKEN>"
LECTURE_TRANSLATOR_URL="<URL>"
LLM_URL="<URL>"
ANKI_COLLECTION_PATH="./data/anki_env"
HUGGING_FACE_TOKEN="<TOKEN>"
FRONTEND_URL="http://127.0.0.1:3050"
ANKI_RELATIVE_DATA_DIR="./data/anki_files"
LLM_TO_USE="hosted" # use local when LM Studio is running, "hosted" will use the defined "LLM_URL" alongside a HuggingFace InferenceClientNEXT_PUBLIC_BACKEND_URL="http://127.0.0.1:5000"LECTURE_TRANSLATOR_TOKEN="<TOKEN>"
LECTURE_TRANSLATOR_URL="<URL>"
LLM_URL="<URL>"
ANKI_COLLECTION_PATH="/flask-app/data/anki_env"
HUGGING_FACE_TOKEN="<TOKEN>"
FRONTEND_URL="http://127.0.0.1:7070"
ANKI_RELATIVE_DATA_DIR="/flask-app/data/anki_files"
LLM_TO_USE="hosted"NEXT_PUBLIC_BACKEND_URL="http://127.0.0.1:7071"