A configurable experimental environment for large-scale cloud research

Recent news

  • Chameleon Changelog for May 2021

    by Jason Anderson

This month, we announce the early user period for our "edgy" users out there, remind you to please update your list of publications that use Chameleon(!), and let you know about a few important and cool things going on. Stay tuned next month for announcements about new hardware installations, if all goes well!

  • 2021 Chameleon User Survey

    by Isabel Brunkan

As part of Phase 3, we're looking to improve the Chameleon experience over the next four years. This is your chance to guide hardware purchases, influence Chameleon capabilities, and change priorities in our roadmap.

To help us in this effort, we would like to invite you to take this 10-15 minute survey and share your experiences, desires, pain points, and anything else that will help us build a better Chameleon for you! The survey will be open until the end of June.

    Thank you in advance for your participation!

    The survey is available here: 2021 Chameleon User Survey


  • Make Your Summer School a Success with Jupyter and Chameleon

    by Jason Anderson

    Organizing a summer school, bootcamp, or workshop this summer? Having trouble finding a consistent and predictable computing environment? Learn how to use the Chameleon JupyterHub artifact to configure resources (including GPUs!) and create a shared Jupyter notebook environment that any event attendee can use. You can manage users, deploy Chameleon resources, and rely on the Help Desk throughout your event!

  • High-Performance Federated Learning Systems

    by Zheng Chai

    This work is part of George Mason University PhD student Zheng Chai and Prof. Yue Cheng's research on solving federated learning (FL) bottlenecks for edge devices. Learn more about the authors, their research, and their novel FL training system, FedAT, which already has impressive results: improving prediction performance by up to 21.09% and reducing communication cost by up to 8.5 times compared to state-of-the-art FL systems.