Chameleon Changelog for December 2024

Dear Chameleon users,

Happy New Year! 

2024 was a busy and momentous year on Chameleon. First, we’d like to thank the National Science Foundation (NSF) for its continued support of Chameleon – in addition to serving our community for another four years, Chameleon 4 will bring many innovations to the system that will allow us to support more experiments for more users (one small step in that direction is described below).

On the hardware front, we added GigaIO nodes with A100 GPUs to both CHI@UC and CHI@TACC; those nodes support composable hardware experiments but can also be used in a static configuration that leverages the powerful GPUs they bring. We also welcomed a new volunteer site, CHI@NRP, which will allow us to support more distributed experiments, and made experimenting across combined Chameleon and FABRIC resources easier by introducing FABRIC Layer 3 connections. On the system front, we upgraded the testbed to OpenStack Antelope and revamped our Chameleon images, in particular adding a new Ubuntu 24.04 appliance. It was also an exciting year for our edge testbed, CHI@Edge, which came out of preview this year, with new support for edge in CHI-in-a-box and for Raspberry Pi 5 devices.

Chameleon projects and community have grown larger than ever before – over 1,200 unique projects and 11,000 users as of this writing – so we improved user management for project PIs and managers, which now supports per-user SU budgets. To help welcome new users, we overhauled our Getting Started guide. Looking ahead to improving reproducibility in Chameleon 4, we also released a new dashboard preview for Trovi and improved the python-chi library (python-chi 1.0) to better support expressing experiments programmatically – and, last but not least, we hosted a Chameleon User Meeting as a workshop on practical reproducibility.
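To give a concrete sense of what expressing an experiment programmatically looks like, here is a minimal sketch using python-chi. The site, project ID, node type, and image name below are placeholders you would replace with your own values, and the function names shown come from the classic python-chi interface, so they may look slightly different in the python-chi 1.0 release.

```python
import chi
from chi import lease, server

# Placeholders: substitute your own site, project, node type, and image.
chi.use_site("CHI@UC")
chi.set("project_name", "CHI-XXXXXX")

# Reserve one bare metal node...
reservations = []
lease.add_node_reservation(reservations, count=1, node_type="gpu_a100")
my_lease = lease.create_lease("my-experiment", reservations)
lease.wait_for_active(my_lease["id"])

# ...then launch an instance on the reserved node.
my_server = server.create_server(
    "my-experiment-node",
    reservation_id=lease.get_node_reservation(my_lease["id"]),
    image_name="CC-Ubuntu24.04",
)
server.wait_for_active(my_server.id)
```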

Exciting developments for our VM offering are coming soon! Since more and more of our users rely on GPU instances in their research, we need to share them more efficiently – for this reason, for the first time ever we will offer GPU instances via Chameleon’s virtualized offering. In fact, our new hardware, consisting of nodes with H100 GPUs that we will deploy in the first quarter of 2025, will initially be available exclusively via virtualized instances (later in the year the same nodes will also be available via bare metal reconfiguration). To make sure that they are shared efficiently, we are making changes to the virtualized offering: gone are the “endless” leases (since we suspect nobody would give up a lease on a GPU instance ;-)) and in are advance reservations (so that you can make sure the GPU instance will be there for you when you need it); see the sketch below for how advance reservations work today. You will also notice that those virtualized instances will now count against your allocation balance. Will those changes to the virtualized offering limit your ability to experiment? Do you need to use the new hardware via bare metal? If any of these plans have the potential to create difficulties for you, please let us know via the comments section of this blog, through our mailing list, or via the help desk.
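For readers new to advance reservations, here is a minimal sketch of creating a future-dated lease with python-chi, as it works on the bare metal sites today – the same model the virtualized GPU instances will follow. The site, project ID, and resource request are placeholders; the exact way GPU flavors will be reserved on the VM offering has not been finalized, and parameter formats may differ slightly between python-chi versions.

```python
import datetime
import chi
from chi import lease

chi.use_site("CHI@UC")               # placeholder site
chi.set("project_name", "CHI-XXXXXX")  # placeholder project ID

# Reserve capacity for a specific future window rather than "right now".
start = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=7)
end = start + datetime.timedelta(days=1)

reservations = []
lease.add_node_reservation(reservations, count=1)  # placeholder resource request

gpu_lease = lease.create_lease(
    "gpu-experiment",
    reservations,
    start_date=start.strftime("%Y-%m-%d %H:%M"),
    end_date=end.strftime("%Y-%m-%d %H:%M"),
)
```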

Floating IP improvements at UC. In December, some of you noted that after associating a floating IP at CHI@UC, it could take as long as several minutes before the IP became usable. After successful negotiations with the system gremlins, the same operation now takes a few seconds at all Chameleon sites.
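As a reminder, associating a floating IP can also be done programmatically with python-chi; a small sketch follows, where the instance name is a placeholder and the functions shown come from the classic interface.

```python
from chi import server

# Look up an existing instance by name and attach a floating IP to it.
server_id = server.get_server_id("my-experiment-node")  # placeholder name
ip = server.associate_floating_ip(server_id)
print(f"Instance reachable at {ip}")
```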

Publications management improvements. Reporting publications is exceptionally important, as it helps NSF understand the extent to which Chameleon is useful and supports research. However, we noticed last year that reporting publications was not as easy as it should be; in particular, the process was confusing when a project had multiple managers submitting publication information, or when a PI had multiple projects, as it was not clear who had reported which publications where. We have improved the publication list page so that you can now see all submitted publications across all of your projects at once.

Happy New Year once again! We look forward to working with you in 2025 – and as always, happy experimenting!

