LLaMA-Factory fine-tuning 4 (Mixtral fine-tuning)

Published 2023-12-19 09:27:42 · Author: Daze_Lu

introduction

Part 4 of the LLaMA-Factory fine-tuning series covers Mixtral-8x7B. The recipe below is QLoRA: the base model is loaded in 4-bit, and LoRA adapters are trained on the q_proj/v_proj attention projections using the alpaca_en dataset with the mistral chat template.
fine-tuning

command

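# QLoRA SFT on a single GPU: Mixtral-8x7B loaded in 4-bit, LoRA on q_proj/v_proj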
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --finetuning_type lora \
    --model_name_or_path ../Mixtral-8x7B-v0.1/ \
    --dataset alpaca_en \
    --template mistral \
    --quantization_bit 4 \
    --lora_target q_proj,v_proj \
    --output_dir ../FINE/mixtral-alpaca_data_en_52k \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
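
After training, the LoRA adapter is written to the --output_dir shown above. Below is a minimal inference sketch, not from the original post: it loads the base model in 4-bit with transformers to mirror the training quantization, attaches the adapter with peft, and generates from a hypothetical alpaca_en-style instruction wrapped in the mistral template's [INST] tags.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "../Mixtral-8x7B-v0.1/"                  # base model used for training
ADAPTER = "../FINE/mixtral-alpaca_data_en_52k"  # --output_dir from the command above

# Load the base model in 4-bit, mirroring --quantization_bit 4 from training.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)

# Attach the trained LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

# Illustrative prompt; the mistral template wraps instructions in [INST] ... [/INST].
prompt = "[INST] Give three tips for staying healthy. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Alternatively, LLaMA-Factory ships an export script that merges the adapter into the base weights, producing a standalone checkpoint for deployment.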