Artifact description: Baleen is a flash cache that uses coordinated ML admission and prefetching to reduce peak backend load in bulk storage systems. This artifact contains Python code to reproduce the simulator results and key figures in the Baleen paper.
START HERE (README): https://github.com/wonglkd/Baleen-FAST24
First-timers: you will need an active allocation to launch this artifact on Chameleon. If you are unsure how to get one, please contact the Chameleon Helpdesk.
Baleen: ML Admission & Prefetching for Flash Caches
Daniel Lin-Kit Wong, Hao Wu, Carson Molder, Sathya Gunasekar, Jimmy Lu, Snehal Khandkar, Abhinav Sharma, Daniel S. Berger, Nathan Beckmann, Gregory R. Ganger
USENIX FAST 2024
Link to paper site
Estimated time to reproduce: 3 hours (setup, a small-scale experiment, and plotting figures from intermediate results). Re-running all experiments from scratch would take >600 machine-days.
Reproducibility status: awarded Results Reproduced, Artifacts Functional, Artifacts Available badges during FAST 2024 Artifact Evaluation.
Experiment Pattern: This artifact provisions a single node to run the simulator and the Jupyter notebooks. (See the notebook for details.)
Support: create a GitHub issue (preferred) or email email@example.com
Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.
Download Archive
Download an archive containing the files of this artifact.
Download with git
Clone the git repository for this artifact, and check out the version's commit:
git clone https://github.com/wonglkd/Baleen-FAST24.git
cd Baleen-FAST24
git checkout 72c3853d4a5753af11b7dfc9f221c5e202675325
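Since the artifact was evaluated at this exact commit, it can be worth confirming the working tree before running experiments. A minimal sketch (run inside the cloned Baleen-FAST24 directory; the check itself is not part of the artifact's instructions):

```shell
# Hedged sketch: warn if HEAD differs from the evaluated commit.
EXPECTED=72c3853d4a5753af11b7dfc9f221c5e202675325
if [ "$(git rev-parse HEAD 2>/dev/null)" = "$EXPECTED" ]; then
  echo "OK: working tree is at the evaluated commit"
else
  echo "WARNING: HEAD differs from the evaluated commit" >&2
fi
```

If the warning fires, re-run the `git checkout` command above before reproducing results.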
Submit feedback through GitHub issues