In this sequence of experiments, you will reproduce a machine learning result from 1989 - a paper that is possibly the earliest real-world application of a neural network trained end-to-end with backpropagation. (In earlier neural networks, some weights were actually hand-tuned!) You'll go a few steps further, though - after reproducing the results of the original paper, you'll get to use some modern 'tricks' to try to improve the performance of the model without changing its underlying architecture (i.e. no change in inference time!)
You can find these materials at: https://github.com/teaching-on-testbeds/deep-nets-reproducing
Attribution: This sequence of notebooks is adapted from the ICLR 2022 blog post "Deep Neural Nets: 33 years ago and 33 years from now" (https://iclr-blog-track.github.io/2022/03/26/lecun1989/) by Andrej Karpathy, and the associated Github repository (https://github.com/karpathy/lecun1989-repro).
Launching this artifact will open it within Chameleon's shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.

Download Archive
Download an archive containing the files of this artifact.
Download with git
Clone the git repository for this artifact, and check out this version's commit:

git clone https://github.com/teaching-on-testbeds/deep-nets-reproducing
cd deep-nets-reproducing
git checkout 50e4deb6b5c566c434da84a0dd7d7d87e4ee0b56
Submit feedback through GitHub issues