ZihanWangKi/llm

LLM Training & Inference

Training with FastChat; inference with vLLM.

setup

Create a new Python environment. Everything is installed against CUDA 11.8; other CUDA versions and library versions likely won't work for now.

conda create -n llm python=3.10 -y
conda deactivate && conda activate llm
bash setup.sh

inference

Play with vllm_test.py
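For orientation, a minimal vLLM call looks roughly like the sketch below. The prompt template and model name are placeholders for illustration, not taken from vllm_test.py:

```python
def build_prompts(questions):
    """Wrap raw questions in a simple chat-style template.

    The USER/ASSISTANT template here is a hypothetical example;
    vllm_test.py may use a different one.
    """
    return [f"USER: {q}\nASSISTANT:" for q in questions]


def run_inference(prompts, model_name="lmsys/vicuna-7b-v1.5"):
    """Generate completions with vLLM (requires a CUDA GPU).

    The model name is a placeholder, not something this repo pins.
    """
    from vllm import LLM, SamplingParams

    llm = LLM(model=model_name)
    params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
    return [out.outputs[0].text for out in llm.generate(prompts, params)]
```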

training

Modify data_module.py to support your data format, then run finetune.sh or finetune_lora.sh.
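For reference, FastChat's stock training scripts read a JSON file containing a list of conversation records; data_module.py would need to produce records of roughly this shape. The field names below follow FastChat's conversation format, but the content is made up:

```python
import json

# One illustrative training example in FastChat's conversation format:
# alternating "human" and "gpt" turns under a "conversations" key.
example = {
    "id": "sample-0",
    "conversations": [
        {"from": "human", "value": "What does this repo do?"},
        {"from": "gpt", "value": "It wraps FastChat training and vLLM inference."},
    ],
}

# The training data file is a JSON list of such records.
print(json.dumps([example], indent=2))
```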
