Serving machine learning models on edge devices
In a previous experiment, we explored model-level optimizations for ML model serving.
Now, we will evaluate the models we created in that experiment on a low-resource edge device.
Follow along at Serving machine learning models on edge devices.
Note: this tutorial requires advance reservation of specific hardware! You will reserve a 2-hour block on a Raspberry Pi 5 on CHI@Edge.
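As a rough illustration of what that reservation step involves, the sketch below uses the python-chi client to request a two-hour device lease on CHI@Edge. This is a minimal sketch under assumptions, not the tutorial's own reservation code: the project name, lease name, and the machine_name value passed to add_device_reservation are placeholders that may need to be adjusted for your allocation and the device names currently offered on CHI@Edge.

# Minimal sketch: reserve one edge device for 2 hours on CHI@Edge
# (project name, lease name, and machine_name below are assumptions)
import chi
from chi import lease

chi.use_site("CHI@Edge")                 # target the CHI@Edge site
chi.set("project_name", "CHI-XXXXXX")    # placeholder: your Chameleon project

# build a reservation list with one Raspberry Pi 5 device
reservations = []
lease.add_device_reservation(reservations, count=1, machine_name="raspberrypi5")  # machine_name is an assumption

# request a lease starting now and lasting 2 hours
start_date, end_date = lease.lease_duration(days=0, hours=2)
my_lease = lease.create_lease("serve-edge-lease", reservations,
                              start_date=start_date, end_date=end_date)
lease.wait_for_active(my_lease["id"])    # block until the lease becomes active

In the actual tutorial, this reservation is made through the Chameleon interface or the provided notebooks, so treat the above only as a sketch of what happens behind the scenes.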
This material is based upon work supported by the National Science Foundation under Grant No. 2230079.
Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.
Download archive
Download an archive containing the files of this artifact.
Download with git
Clone the git repository for this artifact, and check out this version's commit:
git clone https://github.com/teaching-on-testbeds/serve-edge-chi
cd serve-edge-chi  # enter the created directory
git checkout fea0eb5b3969ba31dd1b5790a5ab6886e6fdd28f
Submit feedback through GitHub issues