
NSF CAREER award supports research designed to reduce carbon footprint of machine learning

Computer Science and Engineering Assistant Professor Feng Yan recently received a National Science Foundation CAREER Award to support research aimed at improving the efficiency of machine learning.

Feng Yan

Director of the Intelligent Data and Systems Lab (IDS Lab), Feng Yan conducts research on big data that has led to collaborations with industry partners such as Amazon, Microsoft Research, and Google. A recent National Science Foundation CAREER award will allow Yan to tackle one of the most important aspects of machine learning: its environmental impact. As computing power increases, so does its energy draw, to the point that training a single machine learning model can rival the emissions of an international flight or, in some cases, exceed the lifetime emissions of an automobile.

Yan recently discussed this aspect of his CAREER award, along with his desire to pursue research that can improve society.

What is the goal of the CAREER project?

Machine-Learning-as-a-Service (MLaaS) is an emerging computing paradigm that provides optimized execution of machine learning tasks, such as model design, model training, and model serving, on cloud infrastructure. The goal of this CAREER project is to gain a fundamental understanding of MLaaS and use its unique features to design efficient, automated, and user-centric MLaaS systems. This approach will significantly reduce resource waste and shorten model design cycles through a variety of novel optimizations and by eliminating candidate models that cannot meet serving-latency and accuracy targets. To support the complete MLaaS workflow, the project will also develop model serving methodologies that meet service-level latency requirements with minimal resource consumption through intelligent autoscaling. The project will produce insights and technologies for resource management and energy saving in next-generation machine learning systems and cloud infrastructure, and its findings will also contribute to the related fields of parallel and distributed systems, performance evaluation and optimization, and green computing.
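
To make the candidate-elimination idea concrete, below is a minimal Python sketch of pruning model candidates that cannot meet a serving-latency target or an accuracy target before expensive training is spent on them. The names (Candidate, measure_latency_ms, quick_validate) and the overall structure are hypothetical illustrations of the general idea, not the project's actual system or API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    name: str
    build: Callable[[], object]  # constructs the (untrained or lightly trained) model


def prune_candidates(
    candidates: List[Candidate],
    measure_latency_ms: Callable[[object], float],  # hypothetical latency probe
    quick_validate: Callable[[object], float],      # hypothetical cheap accuracy proxy
    latency_slo_ms: float,
    target_accuracy: float,
) -> List[Candidate]:
    """Keep only candidates that satisfy both the serving-latency SLO and the
    accuracy target, so costly full training is never spent on models that
    would be rejected anyway."""
    survivors = []
    for cand in candidates:
        model = cand.build()
        if measure_latency_ms(model) > latency_slo_ms:
            continue  # too slow to serve within the SLO; skip before any costly training
        if quick_validate(model) < target_accuracy:
            continue  # cheap proxy evaluation suggests the accuracy goal will be missed
        survivors.append(cand)
    return survivors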

What inspired you to pursue this research?

Explosive growth in model complexity and data size, along with surging demand for MLaaS, is already resulting in substantial increases in computational resource and energy requirements. Unfortunately, existing MLaaS systems have poor resource management and limited support for user-specified performance and cost requirements, which exacerbates the waste of computing resources and energy.

For example, training a typical ML model on the cloud may take tens to hundreds of hours and cost hundreds to thousands of dollars. According to a recent study by Emma Strubell et al., designing and training a large-scale model with an automated approach, e.g., Automated Machine Learning (AutoML), can take 274,120 hours at a cost of millions of dollars. What's more concerning is the carbon footprint. The same study found that designing and training a large-scale NLP model consumes 656,347 kWh of energy (adjusted for data center power usage effectiveness, or PUE) and emits 626,155 lbs of CO2e, about five times the lifetime carbon footprint of an average car, including its fuel. Even training a common BERT model on GPUs emits roughly as much as a trans-American flight.
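
As a rough check on how an energy figure like the one above translates into emissions, here is a small Python calculation. It assumes the conversion used in the Strubell et al. study, roughly 0.954 lbs of CO2e per kWh (an average U.S. grid emission factor) and about 126,000 lbs of CO2e for an average car's lifetime including fuel; both figures are assumptions restated for illustration, not new measurements.

# Rough reproduction of the cited emission numbers, assuming the U.S. average
# grid emission factor and the car-lifetime figure used in the Strubell et al. study.
LBS_CO2E_PER_KWH = 0.954          # assumed average U.S. grid emission factor
CAR_LIFETIME_LBS_CO2E = 126_000   # assumed avg. car lifetime emissions, incl. fuel

energy_kwh_pue = 656_347          # PUE-adjusted energy reported for the large-scale model
emissions_lbs = energy_kwh_pue * LBS_CO2E_PER_KWH
print(f"{emissions_lbs:,.0f} lbs CO2e")                               # ~626,155 lbs
print(f"{emissions_lbs / CAR_LIFETIME_LBS_CO2E:.1f}x car lifetime")   # ~5.0x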

Even though dozens of new MLaaS features and services roll out each year, resource management for MLaaS has often been overlooked. The state of the practice does not consider the unique features of ML workloads and thus suffers from poor efficiency. In addition, existing management tools require manual specification and tuning of many unintuitive system parameters, which makes them difficult to use correctly and effectively.
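
To illustrate what replacing hand-tuned knobs with a user-facing target could look like, here is a simplified, hypothetical sketch of SLO-driven autoscaling for model serving: the operator supplies a latency-protecting utilization cap, and the controller derives the replica count from observed load and measured per-replica capacity. This is only a sketch of the general idea under stated assumptions, not the project's actual autoscaling algorithm.

import math


def replicas_needed(
    request_rate_rps: float,          # observed incoming request rate
    per_replica_capacity_rps: float,  # measured throughput one replica can sustain
    latency_headroom: float = 0.8,    # assumed utilization cap that protects the latency SLO
    min_replicas: int = 1,
) -> int:
    """Return the smallest replica count expected to keep serving latency within the SLO."""
    usable_capacity = per_replica_capacity_rps * latency_headroom
    return max(min_replicas, math.ceil(request_rate_rps / usable_capacity))


# Example: 450 requests/s against replicas that each sustain 120 requests/s
print(replicas_needed(450, 120))  # -> 5 replicas (ceil(450 / 96))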

This motivated me to develop new resource management methodologies that support automated and efficient MLaaS, and to integrate the lessons learned into education so that students and a broader audience understand the importance and urgency of efficient MLaaS.

What does this award mean to you and for your work?

I am very honored to receive this prestigious award. It provides the support necessary to carry out the proposed research and education agenda, and it allows me to expand my lab and recruit more talented students to work on automated and efficient Machine-Learning-as-a-Service. These integrated research and education activities will help me grow from an early-career researcher into a more experienced one. I am encouraged by this award and will keep doing my best to conduct high-impact, productive research in big data, machine learning, computer systems, and cross-disciplinary topics.

What impact on society do you hope to achieve through your work?

This project has the potential to tremendously reduce the resource and energy consumption, as well as the carbon footprint, associated with the fast-growing societal demand for machine learning and cloud computing. It will also provide unique opportunities for undergraduate and graduate students, training them in the art of systems optimization combined with the latest machine learning domain knowledge and preparing them to be the next generation of computer scientists and engineers.

Anything else you’d like to add?

I would like to thank all my students for their excellent research work and contributions to this project. I want to thank our Department of Computer Science and Engineering Chair Dr. Eelke Folmer for his consistent support and help, my mentor Dr. Sushil Louis for his great mentoring, and other colleagues in our department for their support, help, and collaboration. I want to thank Craig Holloman and the Engineering Research Office for their great support and help in preparing and submitting the proposal, and Dr. Mridul Gautam, Jennifer A. Bonk, Kate Dunkelberger, and others in Research & Innovation for their great assistance. I want to thank our Dean Manos Maragakis and Associate Dean Indira Chatterjee for their support at the college level, and President Brian Sandoval, Vice President & Provost Jeffrey Thompson, and Vice Provost David Zeh for their support at the University level. Finally, I want to thank my family, friends, and collaborators.
