Artifact for Baleen: ML Admission & Prefetching for Flash Caches (FAST 2024)

Artifact description: Baleen is a flash cache that uses coordinated ML admission and prefetching to reduce peak backend load in bulk storage systems. This artifact contains Python code to reproduce the simulator results and key figures in the Baleen paper.
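For readers unfamiliar with ML-driven cache admission, the sketch below illustrates the general idea: a trained classifier scores each item on a flash-cache miss and the item is written to flash only if its predicted benefit clears a threshold. This is a minimal, hypothetical illustration, not Baleen's actual policy; the feature names and the predict_proba-style model interface are assumptions made for this example.

```python
# Illustrative sketch only -- NOT the Baleen implementation.
# Shows the shape of an ML admission policy for a flash cache: a classifier
# gates which missed items are admitted (written) to flash.
from dataclasses import dataclass


@dataclass
class AccessFeatures:
    # Hypothetical features; real systems typically use signals such as
    # recent access counts, object size, and recency.
    recent_accesses: int
    object_size_bytes: int
    time_since_last_access_s: float


class MLAdmissionPolicy:
    def __init__(self, model, threshold: float = 0.5):
        self.model = model          # any classifier exposing predict_proba()
        self.threshold = threshold  # admit only if the score clears this

    def should_admit(self, feats: AccessFeatures) -> bool:
        x = [[feats.recent_accesses,
              feats.object_size_bytes,
              feats.time_since_last_access_s]]
        score = self.model.predict_proba(x)[0][1]  # P(item is worth caching)
        return score >= self.threshold
```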

START HERE (README): https://github.com/wonglkd/Baleen-FAST24

First-timers: you will need an active allocation to launch this artifact on Chameleon. If you are unsure how to get one, please contact the Chameleon Helpdesk.

Baleen: ML Admission & Prefetching for Flash Caches
Daniel Lin-Kit Wong, Hao Wu, Carson Molder, Sathya Gunasekar, Jimmy Lu, Snehal Khandkar, Abhinav Sharma, Daniel S. Berger, Nathan Beckmann, Gregory R. Ganger
USENIX FAST 2024
Link to paper site

Estimated time to reproduce: 3 hours (setup, a small-scale experiment, and plotting figures from intermediate results). Re-running all experiments from scratch would take over 600 machine-days.

Reproducibility status: awarded the Results Reproduced, Artifacts Functional, and Artifacts Available badges during the FAST 2024 Artifact Evaluation.

Experiment Pattern: This artifact provisions a single node on which to run the simulator and the Jupyter notebooks. (See the notebook for details.)

Support: create a GitHub issue (preferred) or email wonglkd@cmu.edu



Launch on Chameleon: Launching this artifact will open it within Chameleon's shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.

Download Archive: Download an archive containing the files of this artifact.
