For some reason, the fine-tuning seems to get worse with more examples. Not sure what the reason is, but the unmodified model seems better at chess than the fine-tuned one!
main
parent 04bdf68fb5
commit 32ab1c0e01
3 changed files (6.6 KiB → 10 KiB)
08_BackToGPT2.ipynb (6.6 KiB → 10 KiB)
zach_model/pytorch_model.bin
zach_model/training_args.bin