Energy use by modern computer circuitry has become a major issue. Energy requirements per transistor have been dropping, but not as quickly as circuit density has increased. Energy used per unit time is power, so often you will see the term "power management" instead of "energy management."
In the 1960s, I remember the "Easy-Bake Oven" toy cooking brownies with the heat from a 40-Watt lightbulb. A typical modern processor uses about 80 Watts, and none of that energy comes out as light or cooked brownies. Essentially all of the energy going in becomes heat, and heat that is not removed can quickly push temperature beyond safe operating levels. Further, temperature can continue to rise for a short while after energy consumption has been cut, so "thermal management" needs to be somewhat predictive -- purely reactive management could cause temperature to overshoot the allowable range.
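The lag between cutting power and seeing temperature fall can be illustrated with a toy two-node lumped thermal model: a die with small heat capacity feeding a heatsink with large heat capacity. Every parameter value below is invented for illustration, not taken from any real hardware; the point is only that the heatsink-side temperature keeps climbing after power is cut, which is why purely reactive control can overshoot.

```python
# Illustrative two-node lumped thermal model (all parameters assumed,
# not measured from any real processor).
# Node 1: die (small heat capacity, heats quickly).
# Node 2: heatsink (large heat capacity, responds slowly).
def simulate(p_watts=20.0, t_cut=5.0, t_end=60.0, dt=0.01):
    c_die, r_die_sink = 1.0, 2.0    # die capacity, die->sink resistance
    c_sink, r_sink_amb = 20.0, 5.0  # sink capacity, sink->ambient resistance
    t_die = t_sink = 0.0            # temperatures above ambient
    history = []
    for i in range(int(t_end / dt)):
        t = i * dt
        p = p_watts if t < t_cut else 0.0   # power is cut at t_cut
        flow = (t_die - t_sink) / r_die_sink
        t_die += dt * (p - flow) / c_die
        t_sink += dt * (flow - t_sink / r_sink_amb) / c_sink
        history.append((t, t_sink))
    return history

history = simulate()
t_peak, peak = max(history, key=lambda x: x[1])
# The heatsink-side temperature peaks AFTER the power cut at t=5,
# because stored heat is still draining out of the hotter die.
print(f"power cut at t=5.0, temperature peaks at t={t_peak:.2f}")
```

In this sketch the die dumps its stored heat into the slower heatsink even after power drops to zero, so a controller that only reacts to the current sensed temperature would act too late.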
Energy also costs money. Electricity can easily account for a significant fraction of a system's total cost of ownership. For example, despite Kentucky's exceptionally low electric rates, keeping KASY0 powered for one year cost more than building its interconnection network.
Thus, the goal of the work described here is to manage energy use to achieve the desired balance between performance and cost -- while ensuring that the system's thermal design parameters are not violated. Technologies range from new compiler analyses for predicting power consumption to new ways to implement runtime control, including DVS (dynamic voltage scaling) algorithms, online thermal models, and even environmental models covering electric rates, weather-dependent changes in air conditioning efficiency, etc.
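To make the runtime-control idea concrete, here is a hypothetical sketch of one simple DVS policy: pick the highest available frequency whose predicted steady-state temperature stays under a thermal cap. The frequency list, the thermal cap, the roughly cubic power-versus-frequency model, and all constants are assumptions for illustration only, not the project's actual algorithm.

```python
# Hypothetical predictive DVS policy sketch (all numbers assumed).
FREQS_GHZ = [0.8, 1.2, 1.6, 2.0]  # available frequency steps (assumed)
T_CAP = 85.0   # allowed die temperature, Celsius (assumed)
T_AMB = 40.0   # ambient temperature inside the case (assumed)
R_TH = 0.5     # thermal resistance, Celsius per Watt (assumed)

def power_at(freq_ghz):
    # P ~ C * V^2 * f; scaling voltage roughly linearly with frequency
    # gives roughly cubic growth in f (a standard simplification).
    return 10.0 * freq_ghz ** 3

def predict_steady_temp(freq_ghz):
    # Predicted long-run temperature at this frequency.
    return T_AMB + R_TH * power_at(freq_ghz)

def pick_frequency():
    # Highest frequency predicted to respect the thermal cap;
    # fall back to the lowest step if none qualifies.
    ok = [f for f in FREQS_GHZ if predict_steady_temp(f) <= T_CAP]
    return max(ok) if ok else min(FREQS_GHZ)

print(pick_frequency())
```

The design point this illustrates is that the controller decides using a model-based prediction of future temperature rather than waiting for a sensor to report that the cap has already been exceeded.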
The only thing set in stone is our name.