AI x Sustainability (5): There are always two sides to the coin

So far we have looked at the (positive) potential of AI in concrete applications that are relevant and important to sustainability strategies and their implementation in the corporate world, across business processes and supply chains.

The question that remains, though: is AI on balance a positive, or rather not? Do we – at this relatively early stage – know anything about the trade-offs and issues?

As with all things tech, there are many myths associated with AI, as well as some very important pros and cons. Thankfully, both have been summarised well by others already. For those interested in the myths-vs-reality aspect, please visit the link above.

With that said, what we've looked at in the past blog posts were, by and large, concrete cases of where AI can benefit sustainability. In the following illustration, taken from an article by Nixus, all of these would fall under one of the five benefits of AI outlined on the left.

Illustration 1: Pros and Cons of Artificial Intelligence (image (c): nixustechnologies)

Not all of those are exclusively AI issues: even the sustainability efforts currently underway in the auto industry, for example, can have some of those very same negative side effects (as applies, e.g., to job losses in the course of an industry's transformation; please listen to the very insightful episode of MSCI's "ESG Now" podcast on the topic).

In this blog post I'd like to dig a bit deeper into a couple of 'overall' areas that often surface when discussing AI:

  • Where do we stand in terms of CO2 footprint at present, and what are the expectations going forward?
  • Does AI, as far as the Sustainable Development Goals are concerned, have the potential to do more good or more harm?
  • And: what is the key to ensuring that AI does not have a net-negative ethical impact?

The points above are some, though certainly not all, of the main concerns that exist at this very moment at the intersection of sustainability and AI. Others will no doubt arise in the future.

Question 1: Where do we stand in terms of CO2 footprint at present, and what are the expectations going forward?

A Columbia University article from June 2023 summarised the state of play as follows:

  • There is no good data on AI server farms per se.
  • We do have some (generic) data on data centres however: the world’s data centres account for 2.5 to 3.7 percent of global greenhouse gas emissions, exceeding even those of the aviation industry.
  • Estimates suggest that the energy consumption of data centres in Europe will grow by 28 percent by 2030.
  • Initial data (from Google) suggests that a significant minority (40%) of AI's specific footprint is driven by its learning phase (called 'training'), with the majority (60%) occurring during its 'production' phase (called 'inference').
  • With that, two things are already clear: a) data centres' – and with that AI's – footprint is primarily driven by the carbon intensity of the local grid at the data centre's location; and b) the overall footprint will therefore have a geographical bias: the footprint of giants like Google is driven by where their data centres are predominantly located, with Asia being very carbon-intensive and other geographies being 'carbon free' (or as good as).
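The grid-location point above can be made concrete with a small back-of-the-envelope sketch. All numbers below are hypothetical, chosen only to illustrate the mechanism: the same AI workload, split 40% training / 60% inference as per the Google estimate, produces wildly different CO2e totals depending on the carbon intensity of the local grid.

```python
# Illustrative sketch only: the grid intensities below are assumed
# round numbers, not measured data for any real region or provider.

# Hypothetical grid carbon intensity in kg CO2e per kWh
GRID_INTENSITY = {
    "coal_heavy_grid": 0.70,   # e.g. a coal-dominated grid
    "eu_average_grid": 0.25,
    "hydro_grid": 0.02,        # near 'carbon free'
}

TRAINING_SHARE = 0.40   # share of lifetime energy spent on training
INFERENCE_SHARE = 0.60  # share spent on inference ('production')

def footprint_kg(total_energy_kwh: float, region: str) -> dict:
    """Split a workload's energy into training/inference and convert to kg CO2e."""
    intensity = GRID_INTENSITY[region]
    return {
        "training_kg": total_energy_kwh * TRAINING_SHARE * intensity,
        "inference_kg": total_energy_kwh * INFERENCE_SHARE * intensity,
        "total_kg": total_energy_kwh * intensity,
    }

# The same 1 GWh (1,000,000 kWh) workload in three locations:
for region in GRID_INTENSITY:
    print(region, footprint_kg(1_000_000, region))
```

Under these assumed numbers, the identical workload emits 35 times more CO2e on the coal-heavy grid than on the hydro grid, which is exactly why the geographical placement of data centres, rather than the AI workload itself, dominates the footprint.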

Question 2: Does AI, as far as the Sustainable Development Goals are concerned, have the potential to do more good or more harm?

An important side note here: we really are talking about the potential to do either good or harm.
It is realistically too early to assess how the use of AI will, or indeed did, shake out in reality. It is therefore important to understand that much of that 'potential' depends on how we humans, and our society, choose to use AI … with all the related REALLY BIG caveats.

Under ideal conditions, first research results (published in Nature) indicate that AI could have a net-positive impact on the Sustainable Development Goals, as can be seen in the graphs taken from said article:

Illustration 2: Summary of positive and negative impact of AI on the various SDGs (Source)
Illustration 3: Detailed assessment of the impact of AI on the SDGs within the Society group (Source)
Illustration 4: Detailed assessment of the impact of AI on the SDGs within the Economy group. (Source)
Illustration 5: Detailed assessment of the impact of AI on the SDGs within the Environment group. (Source)

But even this specific Nature article outlines explicitly:

“Current research foci [eds: on AI for good] overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.”

Vinuesa, R., Azizpour, H., Leite, I. et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun 11, 233 (2020).

Which brings us swiftly to the third and last question:

Question 3: What is the key to ensuring that AI does not have a net-negative ethical impact?

The above quote could be the short answer to that question: regulation and legislation are generally assumed to be the last resort when it comes to preventing ethical aberrations in our society.

This, however, is only half the answer from my point of view.

The fundamental key: as individuals, as much as as a society, we must not only be aware of, but also willing to accept and be accountable for, the biases and unethical components we have built into our functioning as a society – and which we choose to live by in our everyday lives (hello Mr. Musk …) as individuals.

It is this understanding that allows us to question AI's ability to give not just 'correct' but thoroughly researched, scientifically grounded, and possibly (hopefully, one day) unbiased answers to our questions of curiosity.
For the time being, though, this means that it is ONLY regulation – which in turn relies on stochastic data – that can take that challenge on.

The drawback?

The EU's AI Act, possibly one of the best pieces of legislation in this area, has triggered the giants to withhold innovative uses of AI from the European market. Why? Because they are neither able nor willing to be accountable for the collateral damage they are (potentially) creating and are not (yet) aware of. They know most of the possible benefits, but have not cared to look into the damage that comes with them.
Because there are ALWAYS two sides to the coin.

Conclusion

As always: every innovation is a coin with two sides. This is no different for AI. And without a shadow of a doubt, that dichotomy also applies to sustainability-related applications.

AI can be hugely beneficial for gaining efficiencies, and for using data that until now was simply not usable at all. That alone can open doors to positive impact so far unfathomable.

BUT: the flip side is that we also take on the potential negatives, and (possibly) give them room to exist and flourish if not checked carefully. Outsized increases in data centre energy consumption are one (simple?) component of that bigger picture. More fundamentally, AI can exaggerate existing flaws in our societal, political and legal systems as we've never seen before.

An early taste of this is what is happening as a result of the social media algorithms we're all subject to: instead of opening up the diverse world of nuances and differences to us, they end up closing us thoroughly into echo chambers of dangerously like-minded people, exacerbating the flaws we ourselves and our 'bubble' exhibit.

Much of the projected AI benefit is still based on assumed and, in most cases, ideal (pre-)conditions of the technology's use. But as said in a previous post: 'Crap in = crap out'.

In other words: the quality of the input is decisive for the quality of the output.
This applies, unsurprisingly, in the small – to individual data points – as much as to the overall societal framework in which AI is applied and embedded.