Exploring Statistical Multiplexed Computing for Unlimited Infrastructure Scaling
June 25, 2025
by Justin Shi
IT infrastructure forms the backbone of modern society, but traditional scaling approaches face critical limitations that expose services to security and reliability shortcomings. This research investigates Statistical Multiplexed Computing (SMC) principles to build infrastructures without scaling limits, similar to how TCP/IP protocols enabled indefinite network scaling.
Tips and tricks for making the most of Chameleon's new GPU resources and reservation-based workflow
June 20, 2025
by Cody Hammock
NVIDIA H100 GPUs are now available on KVM@TACC through a new reservation-based system. Learn how to leverage cutting-edge GPU acceleration, persistent storage, and flexible networking to maximize your research productivity within time-limited virtual machines.
This month, we have new H100 GPU nodes on KVM@TACC! Today, you can launch VM instances with one full H100 GPU each. This hardware comes with a brand-new workflow for reserving VMs; note that this reservation workflow will roll out to the rest of KVM@TACC later this summer. Additionally, we have refreshed our documentation. Lastly, CHI-in-a-box now includes a new image deployment tool for associate sites.
How Chameleon Cloud Transforms Computer Science Education Across Europe
May 27, 2025
by Massimo Canonico
Teaching cloud computing effectively requires hands-on experience, but establishing local datacenters or using commercial cloud providers presents significant barriers for students. Chameleon Cloud provides the perfect solution, offering real cloud infrastructure experience without access limitations or costs, enabling comprehensive cloud computing education across European universities.
Less Setup, More Science: Streamlined Images with Built-in Tools and Drivers
May 19, 2025
by Paul Marshall
What's the secret ingredient that makes our new Chameleon images so much better? From automatic SSH configuration to built-in rclone support, these aren't your ordinary cloud images. Find out what makes them special.
This month, we have new OS images with AMD ROCm and Ubuntu 24 on ARM. Additionally, we have improvements to mounting object store buckets using rclone, a new message-of-the-day, and we’ve fixed the firewall confusion on KVM@TACC.
Findings from the November 2024 Community Workshop on Practical Reproducibility in HPC
May 1, 2025
by Marc Richardson
View or contribute to the experiment packaging and style checklists (Appendices A and B) on our GitHub repository here.
Download the report here.
We’re excited to announce the publication of the NSF-sponsored REPETO Report on Challenges of Practical Reproducibility for Systems and HPC Computer Science, a culmination of our Community Workshop on Practical Reproducibility in HPC, held in November 2024 in Atlanta, GA (reproduciblehpc.org).
Understanding and accurately distributing responsibility for carbon emissions in cloud computing
April 29, 2025
by Leo Han
Leo Han, a second-year Ph.D. student at Cornell Tech, conducted pioneering research on the fair attribution of cloud carbon emissions, resulting in the development of Fair-CO2. Enabled by the unique bare-metal capabilities and flexible environment of Chameleon Cloud, this work tackles the critical issue of accurately distributing responsibility for carbon emissions in cloud computing. This research underscores the potential of adaptable testbeds like Chameleon in advancing sustainability in technology.
HiRED: Cutting Inference Costs for Vision-Language Models Through Intelligent Token Selection
High-resolution Vision-Language Models (VLMs) offer impressive accuracy but come with significant computational costs—processing thousands of tokens per image can consume 5GB of GPU memory and add 15 seconds of latency. The HiRED (High-Resolution Early Dropping) framework addresses this challenge by intelligently selecting only the most informative visual tokens based on attention patterns. By keeping just 20% of tokens, researchers achieved a 4.7× throughput increase and 78% latency reduction while maintaining accuracy across vision tasks. This research, conducted on Chameleon's infrastructure using RTX 6000 and A100 GPUs, demonstrates how thoughtful optimization can make advanced AI more accessible and affordable.
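HiRED's core idea, dropping low-importance visual tokens before they reach the language model, can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function name, the NumPy representation, and the way scores are supplied are all assumptions, and in the real framework the per-token importance comes from the vision encoder's attention maps.

```python
import numpy as np

def select_tokens(features, attn_scores, keep_ratio=0.2):
    """Keep only the highest-scoring visual tokens.

    features:    (num_tokens, dim) array of visual token embeddings
    attn_scores: (num_tokens,) importance score per token, e.g. derived
                 from encoder attention (hypothetical input here)
    keep_ratio:  fraction of tokens to retain (the post cites ~20%)
    """
    k = max(1, int(len(attn_scores) * keep_ratio))
    # Indices of the k highest-scoring tokens, kept in original order
    # so the spatial layout of the image patches is preserved.
    top = np.sort(np.argsort(attn_scores)[-k:])
    return features[top], top

# Toy example: 10 tokens with 4-dim features; token 9 scores highest.
feats = np.random.rand(10, 4)
scores = np.arange(10, dtype=float)
kept, idx = select_tokens(feats, scores, keep_ratio=0.2)
```

With 20% of 10 tokens kept, the downstream model processes only 2 token embeddings instead of 10, which is where the throughput and latency gains come from.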
Streamline Your Research Workflow with Trovi's New GitHub Integration
April 21, 2025
by Mark Powers
Learn how to leverage Trovi's new GitHub integration to easily create and update reproducible research artifacts. This step-by-step guide shows you how to configure your GitHub repository with RO-crate metadata and import it directly into Trovi, enabling better collaboration and adherence to FAIR principles for your experiments.
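The RO-crate step the guide describes amounts to adding an `ro-crate-metadata.json` file at the root of your GitHub repository. A minimal skeleton following the RO-Crate 1.1 specification might look like the following; the `name` and `description` values are placeholders, and Trovi may expect additional fields beyond this baseline, so consult the guide for the exact requirements.

```json
{
  "@context": "https://w3id.org/ro/crate/1.1/context",
  "@graph": [
    {
      "@id": "ro-crate-metadata.json",
      "@type": "CreativeWork",
      "conformsTo": { "@id": "https://w3id.org/ro/crate/1.1" },
      "about": { "@id": "./" }
    },
    {
      "@id": "./",
      "@type": "Dataset",
      "name": "My experiment",
      "description": "A short description of the artifact."
    }
  ]
}
```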