New method significantly reduces power consumption of AI

Image: Energy saving (Pixabay.com/geralt)

Training neural networks for artificial intelligence (AI) requires enormous computing resources and therefore a lot of electricity. Researchers at the Technical University of Munich (TUM) have developed a method that works a hundred times faster and is therefore far more energy-efficient. Instead of proceeding iteratively, step by step, the method calculates the parameters directly from the data on the basis of their probability. The quality of the results is comparable to that of the usual iterative methods.

AI applications such as large language models (LLMs) have become an integral part of our everyday lives. The required computing, storage and transmission capacities are provided by data centers, whose energy consumption is enormous: in 2020 it amounted to about 16 billion kilowatt hours in Germany, roughly one percent of the country's total electricity demand. For 2025, an increase to 22 billion kilowatt hours is forecast.

In addition, more complex AI applications will significantly increase the demands on data centers in the coming years. The new method, which calculates the parameters directly from the data based on their probability instead of proceeding step by step, significantly reduces the amount of electricity required for training.
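The article does not spell out the algorithm, so the following is only a hedged sketch of the general idea: replacing many iterative gradient steps with a direct, closed-form computation of parameters from the data. Here, hidden-layer weights of a small network are derived from sampled pairs of data points, and the output layer is obtained with a single least-squares solve. The pair-sampling scheme, function names and constants are illustrative assumptions, not the published TUM method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-3, 3].
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0])

def sample_hidden_layer(X, n_hidden, rng):
    """Derive hidden weights from sampled data-point pairs (hypothetical scheme)."""
    i = rng.integers(0, len(X), size=n_hidden)
    j = rng.integers(0, len(X), size=n_hidden)
    d = X[j] - X[i]
    norms = np.linalg.norm(d, axis=1, keepdims=True)
    norms[norms < 1e-8] = 1.0              # guard against degenerate pairs
    W = d / norms**2                       # weight points from x_i toward x_j
    b = -np.sum(W * X[i], axis=1)          # bias centres each unit on its x_i
    return W, b

# "Training" without a training loop: hidden weights come from the data,
# the output layer comes from one closed-form least-squares solve.
W, b = sample_hidden_layer(X, 50, rng)
H = np.tanh(X @ W.T + b)                   # hidden activations, shape (200, 50)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

mse = np.mean((H @ beta - y) ** 2)
```

In contrast to gradient descent, which revisits all parameters over many epochs, every parameter here is set exactly once, which is where the claimed speed and energy savings would come from.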

  • Issue: January
  • Year: 2020

Eugen G. Leuze Verlag GmbH & Co. KG
Karlstraße 4
88348 Bad Saulgau

Tel.: 07581 4801-0
Fax: 07581 4801-10
E-Mail: info@leuze-verlag.de

 
