About

Chameleon is a large-scale, deeply reconfigurable experimental platform built to support computer science systems research, education, and emerging applications. Community projects range from new operating systems, virtualization methods, performance variability studies, and power management to software-defined networking, artificial intelligence, and resource management. As of May 2025, Chameleon has supported 12,300+ users working on 1,200+ projects, resulting in 800+ research publications since July 2015.

To support experiments of such variety, Chameleon provides a bare metal reconfiguration system that gives users full control of the software stack, including root privileges, kernel customization, and console access. While most Chameleon resources are configured in this way, a small portion is configured as a virtualized KVM cloud to balance the finer-grained resource sharing sufficient for some projects against the coarser-grained but stronger isolation of bare metal.
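
For example, once a node has been reserved, a bare metal instance can be provisioned programmatically through standard OpenStack tooling. The sketch below uses the openstacksdk Python library; the cloud name, image, flavor, network, and reservation ID are illustrative placeholders rather than actual Chameleon values, and the scheduler-hint plumbing is a best-effort assumption.

    import openstack

    # Connect using a "chameleon" entry in clouds.yaml (placeholder name);
    # credentials can also be supplied via OS_* environment variables.
    conn = openstack.connect(cloud="chameleon")

    # Launch a bare metal instance: the user-supplied image defines the
    # entire software stack, and the user gets root access once it boots.
    server = conn.create_server(
        name="my-experiment-node",
        image="CC-Ubuntu22.04",   # hypothetical disk image name
        flavor="baremetal",       # hypothetical bare metal flavor name
        key_name="my-keypair",    # SSH key registered with the site
        network="sharednet1",     # hypothetical network name
        # Ties the instance to an existing advance reservation (assumption:
        # passed through to Nova as os:scheduler_hints).
        scheduler_hints={"reservation": "RESERVATION_ID"},
        wait=True,                # block until the node is ACTIVE
        timeout=1800,             # bare metal provisioning can take many minutes
    )
    print(server.status, server.id)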

Chameleon hardware (see our discovery system for an up-to-date, detailed description) balances the need to support experiments at scale against the need for diversity. The need for scale is satisfied by a large-scale homogeneous partition of nearly 15,000 cores and 5 PB of total disk space, hosted across three main sites, the University of Chicago, the Texas Advanced Computing Center (TACC), and the National Center for Atmospheric Research (NCAR), connected by a 100 Gbps network. The diversity of hardware configurations and architectures is reflected in support for innovative networking solutions, including reconfigurable Corsa switches and InfiniBand; accelerators such as FPGAs and a range of different GPU technologies; storage hierarchies with a mix of HDDs, SSDs, and VRAM; x86 technologies; and non-x86 architectures such as ARM.

Unlike traditional computer science experimental systems, which have overwhelmingly been built on in-house infrastructure, Chameleon adapted OpenStack, a mainstream open source cloud technology, to provide its capabilities. This has a range of practical benefits, including familiar interfaces for users and operators, workforce development potential, leverage of contributions from a community 2,000 developers strong, and the potential to contribute to infrastructure used by millions of users (in particular, the Chameleon team's contributions to OpenStack include the Blazar component). In addition, configuring the infrastructure as a cloud provides a direct answer in the debate over whether computer science systems research can be supported on clouds, as well as the means to influence that answer through direct mainstream contributions.
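
As an illustration of the reservation model that Blazar provides, the sketch below creates an advance reservation (lease) on one physical host using the python-blazarclient library. The authentication values and dates are placeholders, and the exact reservation fields follow Blazar's physical-host plugin as an assumption.

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from blazarclient import client as blazar_client

    # Authenticate against Keystone; all values here are placeholders.
    auth = v3.Password(
        auth_url="https://keystone.example.org:5000/v3",
        username="demo",
        password="secret",
        project_name="my-project",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    # Blazar is the OpenStack reservation service; it manages leases
    # that set aside physical hosts for a time window.
    blazar = blazar_client.Client(session=sess)

    # Create a two-hour lease on a single bare metal host; Blazar expects
    # 'YYYY-MM-DD HH:MM' date strings (UTC).
    lease = blazar.lease.create(
        name="my-experiment-lease",
        start="2025-06-01 12:00",
        end="2025-06-01 14:00",
        reservations=[{
            "resource_type": "physical:host",
            "min": 1,
            "max": 1,
            "hypervisor_properties": "",
            "resource_properties": "",  # a node-type filter could go here
        }],
        events=[],
    )
    print(lease["id"])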

The hardware and features have been added gradually, as reflected in our forum and blog posts (see the changelog). More information about the system can be found in our papers.

Contact Us

We welcome your questions, feedback, and collaboration opportunities. There are several ways to get in touch with the Chameleon team, depending on your needs:

  • General Inquiries: Email us at contact@chameleoncloud.org for general questions about the project, collaboration opportunities, or press inquiries.
  • Community Support: Visit our Forums to connect with other users, share experiences, and get help from the community.
  • Technical Support: For urgent technical issues, account problems, or resource-specific questions, please open a ticket through our Help Desk.

Not sure which channel to use? Check our FAQ section for detailed guidance on when to use each contact method and answers to commonly asked questions.

We strive to respond to all inquiries within 2 business days. For immediate assistance with critical issues, the Help Desk is your best option.

Team

Kate Keahey

University of Chicago
Computation Institute
Principal Investigator
Chameleon Science Director

Haryadi S. Gunawi

University of Chicago
Department of Computer Science
Co-Principal Investigator
Operating/Distributed Systems

Joe Mambretti

Northwestern University
International Center for Advanced Internet Research
Co-Principal Investigator
Large-Scale Networking

Paul Ruth

University of North Carolina at Chapel Hill
RENCI
Co-Principal Investigator
Programmable Networking

Dan Stanzione

The University of Texas at Austin
Texas Advanced Computing Center
Co-Principal Investigator
Chameleon Facilities Director