
Brought this from https://github.com/nomic-ai/gpt4all; includes the models!

Initial copy, including basic model.

- includes gpt4all-lora-quantized.bin
main
Rajat Arya 1 year ago
parent 419b40eb54
commit 5d445cdb5c
32 changed files (0 B → 3.9 GiB)
  1. README.md, 193 lines (0 B → 7.7 KiB)
  2. TRAINING_LOG.md, 237 lines (0 B → 9.8 KiB)
  3. chat/gpt4all-lora-quantized-OSX-intel, 3 lines (0 B → 401 KiB)
  4. chat/gpt4all-lora-quantized-OSX-m1, 3 lines (0 B → 335 KiB)
  5. chat/gpt4all-lora-quantized-linux-x86, 3 lines (0 B → 401 KiB)
  6. chat/gpt4all-lora-quantized-win64.exe, 3 lines (0 B → 182 KiB)
  7. chat/gpt4all-lora-quantized.bin, 3 lines (0 B → 3.9 GiB)
  8. clean.py, 73 lines (0 B → 2.1 KiB)
  9. configs/deepspeed/ds_config.json, 48 lines (0 B → 851 B)
  10. configs/eval/generate.yaml, 15 lines (0 B → 336 B)
  11. configs/eval/generate_baseline.yaml, 17 lines (0 B → 352 B)
  12. configs/eval/generate_full.yaml, 14 lines (0 B → 282 B)
  13. configs/eval/generate_large_2.yaml, 15 lines (0 B → 292 B)
  14. configs/eval/generate_large_3.yaml, 15 lines (0 B → 292 B)
  15. configs/generate/generate.yaml, 9 lines (0 B → 223 B)
  16. configs/generate/generate_llama.yaml, 14 lines (0 B → 308 B)
  17. configs/train/finetune.yaml, 30 lines (0 B → 513 B)
  18. configs/train/finetune_lora.yaml, 29 lines (0 B → 505 B)
  19. data.py, 113 lines (0 B → 4.0 KiB)
  20. env.yaml, 20 lines (0 B → 285 B)
  21. eval_data/user_oriented_instructions.jsonl, 252 lines (0 B → 166 KiB)
  22. eval_figures.py, 26 lines (0 B → 791 B)
  23. eval_self_instruct.py, 137 lines (0 B → 5.1 KiB)
  24. figs/duplicate_loss.png, 3 lines (0 B → 362 KiB)
  25. figs/first_lora.png, 3 lines (0 B → 308 KiB)
  26. figs/perplexity_hist.png, 3 lines (0 B → 15 KiB)
  27. figs/single_epoch.png, 3 lines (0 B → 353 KiB)
  28. generate.py, 58 lines (0 B → 1.9 KiB)
  29. gpt4all-lora-demo.gif, 3 lines (0 B → 2.6 MiB)
  30. read.py, 10 lines (0 B → 221 B)
  31. requirements.txt, 12 lines (0 B → 121 B)
  32. train.py, 207 lines (0 B → 7.4 KiB)
