Reproducing the Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (ICLR 2019)

This artifact reproduces the core findings of "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (ICLR 2019) by Jonathan Frankle and Michael Carbin. The experiment trains a LeNet architecture on the MNIST dataset, applies iterative magnitude-based pruning, resets the surviving weights to their original initialization, and retrains with early stopping. Each pruning iteration removes the 20% of weights with the lowest magnitudes, and the process is repeated for 5 iterations over 5 separate trials. The results confirm that sparse subnetworks ("winning tickets") can be trained to accuracy comparable to that of the original dense model. A minimal sketch of the pruning loop is shown below.
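The sketch below illustrates the iterative magnitude pruning procedure described above, assuming a PyTorch LeNet-300-100 on MNIST. The model definition, helper names, and hyperparameters are illustrative assumptions, not the artifact's exact code; the training step is left as a placeholder comment.

# A minimal sketch of iterative magnitude pruning with weight rewinding.
# Assumptions: PyTorch, a LeNet-300-100 fully connected network, per-layer
# pruning of 20% of the remaining weights per iteration. Not the artifact's
# exact implementation.
import copy
import torch
import torch.nn as nn

class LeNet(nn.Module):
    """LeNet-300-100 fully connected network for 28x28 MNIST digits."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 300), nn.ReLU(),
            nn.Linear(300, 100), nn.ReLU(),
            nn.Linear(100, 10),
        )
    def forward(self, x):
        return self.fc(x)

def prune_by_magnitude(model, masks, rate=0.2):
    """Zero the mask for the lowest-magnitude 20% of still-unpruned weights, per layer."""
    for name, param in model.named_parameters():
        if "weight" not in name:
            continue
        alive = param.data[masks[name].bool()].abs()
        if alive.numel() == 0:
            continue
        threshold = alive.kthvalue(max(1, int(rate * alive.numel()))).values
        masks[name] = torch.where(param.data.abs() > threshold,
                                  masks[name], torch.zeros_like(masks[name]))
    return masks

def reset_to_init(model, init_state, masks):
    """Rewind surviving weights to their original initialization (the 'winning ticket')."""
    model.load_state_dict(copy.deepcopy(init_state))
    for name, param in model.named_parameters():
        if name in masks:
            param.data *= masks[name]

model = LeNet()
init_state = copy.deepcopy(model.state_dict())        # keep the original initialization
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if "weight" in n}

for it in range(5):                                    # 5 pruning iterations, as above
    # train(model, masks) would go here: standard MNIST training with early
    # stopping, keeping pruned weights at zero (e.g. by masking gradients).
    masks = prune_by_magnitude(model, masks, rate=0.2) # remove 20% of remaining weights
    reset_to_init(model, init_state, masks)            # reset survivors, then retrain

After the final iteration, the masked, rewound network is trained one last time and its early-stopping iteration and test accuracy are compared against the dense baseline, averaged over the 5 trials.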


Launch on Chameleon

Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.

Download Archive

Download an archive containing the files of this artifact.

Download with git

Clone the git repository for this artifact, and check out the version's commit:

git clone https://github.com/codenameyizzz/Reproduce-The-Lottery-Ticket-Hypothesis--ICLR-2019-.git
cd Reproduce-The-Lottery-Ticket-Hypothesis--ICLR-2019-
git checkout e4b9ffd71dfca27cd0d551243626a7ef15333a7a
Feedback

Submit feedback through GitHub issues
