Artificial intelligence: friend or foe?


M.A. Martin Leon

According to Google Cloud, artificial intelligence (AI) is a “set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyse data, make recommendations, and more.”

Pretty fuzzy, right? That’s because AI is a difficult term to pin down: it covers a very wide set of techniques and use cases.

One way to understand AI is as a process in which we expose a computer system to vast amounts of data and, through trial and error, help it learn what is “right” or “wrong” for a specific use case. The system adjusts its internal parameters so that it is “right” more often, gradually recognising patterns and producing correct answers to the question defined by the use case.

For picture recognition, for example, this may mean exposing a neural network (layers of interconnected software or hardware nodes) to millions of pictures and having it guess, in each case, whether the picture shows a parrot. After each try we tell it whether it was right, and with this feedback the network readjusts its parameters to try to improve next time.
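The guess-and-adjust loop above can be sketched in a few lines of code. This is a toy single-node “parrot detector” (a perceptron), not a real neural network: the two feature scores per picture and all constants are invented for illustration.

```python
# Toy "parrot detector": each picture is reduced to two made-up
# feature scores (say, colourfulness and beak curvature); label 1 = parrot.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),
    ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),
]

weights = [0.0, 0.0]  # the "internal parameters" the machine adjusts
bias = 0.0
lr = 0.1              # learning rate: how much to adjust after each mistake

for epoch in range(20):                # repeated passes over the data
    for (x1, x2), label in data:
        guess = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        error = label - guess          # the "right or wrong" feedback
        # readjust parameters in the direction that reduces the error
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

# after training, the model classifies every example correctly
print(all((1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0) == y
          for (x1, x2), y in data))  # → True
```

A real image model repeats the same idea across millions of parameters and millions of pictures, which is where the enormous computational cost comes from.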

This is of course a gross simplification. AI systems are extremely complicated and require immense amounts of processing power, first to learn from huge amounts of data (the training phase) and then to obtain new answers for users (the inference phase).

The exciting thing about modern AI is that it has become the clear winner in accuracy benchmarks for a wide range of use cases: gaming, image recognition, language generation, and more.

As a planet-minded individual, why should I care about AI?

AI is an extremely powerful tool. It is fantastic at analysing data, finding patterns, and building predictions as well as at performing repetitive tasks extremely quickly. These attributes, amongst other things, can help solve the challenges of climate change:

  • Deep learning AI is already helping with very complex climate models to accelerate predicting how planetary systems will behave under certain circumstances. In one example, scientists are using it to predict which icebergs are melting and how fast.
  • AI algorithms can aid the energy transition by, for example, improving the efficiency of wind turbines.
  • Electricity companies are already using AI to monitor, predict, and give on-the-spot information about energy demand and how clean energy production is.
  • UNEP is currently using AI to monitor methane leaks from the oil and gas industry.

So what’s the problem with AI?

The problem is that producing this amount of intelligence is an extremely energy-hungry process, in both the training and inference stages, especially given the “AI scaling law,” an unexpected quirk of artificial intelligence. This law refers to the observation that the bigger you make AI models (in terms of neural network nodes, data used in training, and number of passes through the network for that data) the better they get. There is no obvious ceiling on performance, and so there is no obvious ceiling on the energy and hardware these models consume either.
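The “bigger is better” relationship is usually summarised as a power law: error falls off as a power of model size. The sketch below uses entirely made-up constants (the real exponents come from empirical studies) just to show the shape of the curve.

```python
# Illustrative only: scaling laws say loss ≈ a * N**(-b) for model size N.
# The constants a and b below are invented, not measured values.
a, b = 10.0, 0.07

for n_params in (1e8, 1e9, 1e10, 1e11):
    loss = a * n_params ** (-b)  # smaller loss = better model
    print(f"{n_params:.0e} parameters -> estimated loss {loss:.2f}")
```

Each tenfold increase in size buys a smaller but real improvement, which is why labs keep scaling up, and why energy and hardware demands scale up with them.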

The companies in charge of these developments, usually the biggest hyperscalers on the globe (Amazon Web Services, Microsoft, Google), have made strong renewable energy commitments, with many already running on 100% clean energy. However, in order to win the AI arms race, that is, to create the meanest, most intelligent model yet, they have been using a brute-force approach: deploying ever more energy and ever more server racks of GPUs (graphics processing units), the chips AI models require. This exponential growth is happening with little regard for the impact of the additional energy use and hardware manufacturing.

There was some light earlier this year with newer models, such as China’s DeepSeek, which use a number of simplifications to produce “good enough intelligence” with surprising energy efficiency. However, these models pose a new challenge, one normally referred to as the Jevons paradox: smaller models can be deployed on simpler hardware, opening AI up to many more use cases (imagine a headset with an AI model inside its tiny chip) and producing a “rebound effect” in the total amount of energy required overall.

How bad is AI for the future of our planet?

Let’s start with the fact that the ICT sector already has a sizeable impact on our planet, representing an estimated 4% of global greenhouse gas emissions.

Data centres, the backbone of any AI model, already consumed, according to the Organisation for Economic Co-operation and Development (OECD), 460 terawatt hours (TWh) of electricity globally in 2022, similar to the annual consumption of the whole of France (463 TWh). Their consumption is predicted to approach 1,050 TWh by 2026.

It is clear that AI is particularly energy-hungry, but measurement of its carbon emissions is still at an early stage, with most leading companies providing limited or no data on what their models and tools consume.

In a 2021 research paper, scientists from Google and the University of California, Berkeley estimated that training OpenAI’s GPT-3 consumed 1,287 megawatt hours (MWh) of electricity, enough to power 120 homes for a year.
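That “120 homes” figure is easy to sanity-check. The back-of-envelope calculation below assumes an average US household consumption of roughly 10,700 kWh per year (an assumed figure, not one from the article).

```python
training_mwh = 1_287                    # estimated energy to train GPT-3 (MWh)
training_kwh = training_mwh * 1_000     # convert to kilowatt hours

# Assumption: an average US household uses roughly 10,700 kWh per year.
household_kwh_per_year = 10_700

homes_powered_for_a_year = training_kwh / household_kwh_per_year
print(round(homes_powered_for_a_year))  # → 120
```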

The measurement of AI in everyday use is still unclear, with estimates including:

  • an AI-powered internet search consuming between five and ten times more energy than a standard internet search, or
  • an AI-powered tool causing 33 times as much CO2 as a specialised non-AI tool performing the same task

Additionally, server racks need cooling, normally achieved with chilled water. It is estimated that for each kilowatt hour of energy consumed at a data centre, two litres of water are needed to cool it.
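Combining that cooling estimate with the GPT-3 training figure mentioned earlier is only a back-of-envelope exercise (real facilities differ widely), but it gives a sense of scale:

```python
litres_per_kwh = 2                  # rough cooling-water estimate cited above
gpt3_training_kwh = 1_287 * 1_000   # GPT-3 training estimate, in kWh

# Back-of-envelope only: applying a general data-centre figure
# to a single training run glosses over per-facility differences.
cooling_litres = litres_per_kwh * gpt3_training_kwh
print(f"{cooling_litres / 1e6:.1f} million litres")  # → 2.6 million litres
```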

Some providers are starting to publish numbers: Google estimates that its Gemini apps consume 0.24 watt-hours of electricity per average text prompt, emitting 0.03 grams of CO2 and using 0.26 millilitres of water. However, without knowing how many queries are handled every day, it is impossible to gauge the overall size of the problem.
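Those per-prompt figures only become meaningful at scale. The sketch below multiplies them by a purely hypothetical volume of one billion prompts per day (an illustrative assumption; Google publishes no real query counts).

```python
# Google's published per-prompt estimates for Gemini text prompts
wh_per_prompt = 0.24        # watt-hours of electricity
g_co2_per_prompt = 0.03     # grams of CO2
ml_water_per_prompt = 0.26  # millilitres of water

# Assumption: one billion prompts per day, chosen only to
# illustrate scaling; the real volume is not public.
prompts_per_day = 1_000_000_000

daily_mwh = wh_per_prompt * prompts_per_day / 1e6             # Wh → MWh
daily_tonnes_co2 = g_co2_per_prompt * prompts_per_day / 1e6   # g → tonnes
daily_m3_water = ml_water_per_prompt * prompts_per_day / 1e6  # mL → m³

print(f"{daily_mwh:.0f} MWh, {daily_tonnes_co2:.0f} t CO2, "
      f"{daily_m3_water:.0f} m³ of water per day")
```

Tiny per-prompt costs multiplied by huge (if unknown) query volumes are exactly why the lack of disclosure matters.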

What can we do as users?

A company that uses AI as a tool for its staff has only limited leverage over the impact its AI use creates. It is important not to get too bogged down in measurements and details; instead, work on the low-hanging fruit that can make a real difference. Here are some ideas on how to go about it.

1. Ask yourself, do I really need AI? What do I need it for?

Avoid using AI by default, and don’t push your staff to use AI as a blanket option. AI can be useful in certain cases: identify them and use AI only when it offers a significant advantage. Not all use cases are equal in terms of their impact on our future; image creation, for example, is far more computationally intensive than text creation.

Try to turn off the default use of AI in tools, such as Google or Microsoft products, that may show you AI-generated answers without you even requesting them. An interesting trick is to include -ai in any Google search to stop AI appearing as its first suggestion.

2. Choose better AI

Companies can use their purchasing power by selecting providers that are truly committed to lowering their environmental impact. Research the different options for the specific tool you need: the UN’s International Telecommunication Union (ITU) publishes a useful report on how various companies are doing against their commitments.

Do contact providers to request further information and to choose the least environmentally costly option. With hyperscalers, this may mean locating your instance in a specific country or using a specific configuration.

There are also leaderboards showing how environmentally friendly different AI models are. As explained above, this field is just getting started, so it can be hard to figure out which model is best; Hugging Face hosts one such leaderboard you can use to start familiarising yourself with them.

Try to use the smallest model that gives you the accuracy you need. Most models come in smaller versions (e.g. GPT-4o mini) designed to be more energy efficient.

3. Put a policy in place

Make conscious decisions about how you want your company and your staff to use AI, and put policies and training in place so employees are clear on what they should be doing at each point. These should cover not only environmental priorities but also:

  • Privacy: make sure no private information about staff or clients is entered into a public AI tool, or you may be infringing the GDPR.
  • Intellectual property (IP): be careful when using anything AI produces and make sure you are not breaching anyone’s IP. Conversely, if you feed your own IP into one of these tools, you could make future IP claims more difficult for your enterprise.
  • Confidentiality: do not include confidential information in your prompts, or it may be used to provide answers to other users of the tool.
  • Accuracy: always check AI’s results against the sources provided to make sure they are correct.
  • Bias: be alert to any bias AI may embed in its decision-making.

One way to find out more about the environmental and social impacts of the digital world is to experience a 3-hour Digital Collage. Get in touch to find out more.
