NanoGPT toy version
NanoGPT is the simplest, fastest repository for training/finetuning medium-sized GPTs.
This experiment reproduces the results of training a character-level GPT on the works of Shakespeare. I have also changed the hyperparameters to significantly improve the loss of the CPU version of the experiment, from 1.88 (in the original experiment) to 1.54.
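The exact hyperparameter changes behind the 1.54 loss are not listed here, but nanoGPT configures CPU runs through Python config files (e.g. `config/train_shakespeare_char.py`). A minimal sketch of what such a config looks like follows; the specific values are illustrative assumptions modeled on the repository's small CPU example, not the tuned settings of this artifact.

```python
# Sketch of a nanoGPT-style CPU config for the character-level Shakespeare
# experiment. Values are assumptions for illustration, not the tuned ones.
out_dir = 'out-shakespeare-char'
dataset = 'shakespeare_char'
device = 'cpu'          # run entirely on CPU
compile = False         # skip torch.compile for a plain CPU run

# a deliberately small model so training stays feasible on CPU
n_layer = 4
n_head = 4
n_embd = 128
dropout = 0.0

batch_size = 12
block_size = 64         # context length in characters
max_iters = 2000
lr_decay_iters = 2000
```

Raising the model size or iteration count in a config like this is the usual lever for trading longer CPU training time against a lower final loss.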
In addition, I have fine-tuned gpt2-xl on the works of Shakespeare and sampled the outputs, just as in the original experiment.
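For the finetuning step, nanoGPT again uses a Python config file (in the style of `config/finetune_shakespeare.py`) that selects a pretrained checkpoint via `init_from`. A hedged sketch, with all values as assumptions:

```python
# Sketch of a nanoGPT-style finetuning config; values are illustrative
# assumptions, not the exact settings used in this artifact.
init_from = 'gpt2-xl'   # start from the pretrained GPT-2 XL weights
out_dir = 'out-shakespeare'
dataset = 'shakespeare'

# small batches with gradient accumulation, since gpt2-xl is large
batch_size = 1
gradient_accumulation_steps = 32

# finetuning uses few iterations and a small constant learning rate
max_iters = 20
learning_rate = 3e-5
decay_lr = False
</imports>
```

With such a config, training and sampling would look roughly like `python train.py config/finetune_shakespeare.py` followed by `python sample.py --out_dir=out-shakespeare`.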
Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.
Download an archive containing the files of this artifact.