
From GitHub to Publication: Using Trovi Effectively

How to organize, share, and publish your Chameleon experiments

Trovi helps you package and share computational artifacts that run on Chameleon, from Jupyter notebooks to complete experimental workflows. Whether you're importing code from GitHub, organizing artifacts into Collections, collaborating with co-authors, or publishing work to get a citable DOI, these tips will help you make the most of Trovi for your research and teaching.

Save the Date! Sixth Chameleon User Meeting - April 15-16, 2026 - Boulder, CO

Mark your calendars for our upcoming User Meeting co-hosted with NCAR; watch for updates!

The Chameleon User Meeting is a two-day, in-person forum where users discuss their research and education projects, share experiences of working with the Chameleon testbed, solve challenges together, and propose new features that will make their work easier. The meeting offers something for Chameleon newcomers and veterans alike, for end-users and operators, and for researchers and educators. For our sixth meeting, we are pleased to co-host with the National Center for Atmospheric Research (NCAR) at its main campus in Boulder, CO.

Chameleon at SC25

We're heading to SC25 in St. Louis. Read what we'll be up to!

SC25 is taking place in St. Louis, Missouri, from November 16 to 21, 2025. Kate Keahey, Chameleon PI, will be attending the conference and hopes to see some of you there. In this post, we list some of the events where Chameleon is getting let out of the box!

Chameleon Newsletter & Changelog - October 2025

Testbed updates, new features, webinars, and other exciting news from our user community

October was Performance Month for Chameleon Cloud. We're excited to share performance upgrades across testbed services, a new Trovi feature, new webinars, user resources, and awesome Trovi artifacts developed by our users!

Introducing MINCER’s Performance Measurement and Reproducibility Appliance

Building a Standardized Framework for Reproducible Performance Measurements Across Diverse HPC Architectures

Reproducibility is a cornerstone of scientific computing, yet achieving consistent results across different hardware environments remains a significant challenge in HPC research. The MINCER project tackles this problem head-on by providing researchers with an automated performance measurement appliance on Chameleon Cloud. Using Docker containers and the PAPI framework, MINCER enables standardized collection of performance metrics across CPUs, NVIDIA GPUs, and AMD GPUs, making it easier to compare results and understand how system-level factors influence computational performance. This post explores how MINCER is helping make HPC experiments more reproducible and accessible to the research community.
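For readers unfamiliar with PAPI, the sketch below shows the basic counter-based measurement pattern that a framework like MINCER builds on: initialize the library, register hardware events, and read the counters around a workload. It is a minimal, CPU-only illustration rather than MINCER's actual appliance code; the event choices and the toy loop are assumptions for demonstration, and in MINCER's case GPU metrics would instead flow through PAPI's vendor components inside its Docker containers.

#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void) {
    int event_set = PAPI_NULL;
    long long counts[2];

    /* Initialize PAPI and verify that the header and library versions match. */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialization failed\n");
        return EXIT_FAILURE;
    }

    /* Build an event set with two widely supported preset events. */
    PAPI_create_eventset(&event_set);
    PAPI_add_event(event_set, PAPI_TOT_INS);  /* retired instructions */
    PAPI_add_event(event_set, PAPI_TOT_CYC);  /* total cycles */

    PAPI_start(event_set);

    /* Hypothetical workload: a simple loop stands in for the kernel under test. */
    volatile double sum = 0.0;
    for (long i = 0; i < 10000000; i++) {
        sum += (double)i * 0.5;
    }

    /* Stop counting and read back the accumulated counter values. */
    PAPI_stop(event_set, counts);

    printf("instructions = %lld\n", counts[0]);
    printf("cycles       = %lld\n", counts[1]);
    printf("IPC          = %.2f\n", (double)counts[0] / (double)counts[1]);
    return EXIT_SUCCESS;
}

On a Linux image with the PAPI development package installed, this would typically compile with something like gcc measure_ipc.c -o measure_ipc -lpapi and print instruction, cycle, and instructions-per-cycle figures for the loop.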