On sustenance and the sea of computation

by Yoana Pavlova

When we tackle the subject of Artificial Intelligence and ethics, the topics that immediately come to mind are the various types of bias embedded in the datasets used for training AI models, the copyright status of works that end up in those datasets, the growing uncertainty in labor markets likely to be affected by AI, the corporate concentration of AI innovation and capitalisation, and so on. Amid the ongoing climate crisis, with AI novelty stories popping up in the news every couple of days, many are also raising the question of the environmental costs of AI.

More than 50 years ago, Intel’s co-founder Gordon Moore formulated what is now known as Moore’s law: the number of transistors in an integrated circuit doubles about every two years. Within a broader interpretation of this trajectory, today we have domain-specific processors and AI chips that seemingly maintain the exponential growth in computing laid at the very foundation of this industry. There is also Koomey’s law, which states that the energy efficiency of computers doubles roughly every 18 months. This is why hyperscale cloud infrastructure has become the standard over the past two decades, gradually replacing the traditional data center and adopting renewable energy sources – a financially motivated development that allows tech giants to regularly publish jaunty reports on their carbon footprint.
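As a back-of-the-envelope illustration (not from the essay), the two doubling periods can be compounded as simple exponentials. The rates below are the idealised textbook figures cited above, not measured data:

```python
# Hedged sketch: growth factor implied by a fixed doubling period.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times a quantity multiplies in `years` if it doubles
    every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

decade = 10
moore = growth_factor(decade, 2.0)    # transistor count: doubling every 2 years
koomey = growth_factor(decade, 1.5)   # computations per joule: every 18 months

print(f"Transistors per chip over a decade: ~{moore:.0f}x")     # ~32x
print(f"Computations per joule over a decade: ~{koomey:.0f}x")  # ~102x
```

Under these idealised rates, efficiency per joule compounds faster than transistor density, which is the arithmetic behind the industry’s optimistic sustainability reports.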

Still, those reports tend to omit the full ecological impact of computational expansion, including the widespread water-cooling systems required for present-day cloud computing. One paper from 2023, quoted by a number of outlets, warns: “ChatGPT needs to ‘drink’ a 500ml bottle of water for a simple conversation of roughly 20-50 questions and answers, depending on when and where ChatGPT is deployed.” This anthropomorphism vis-à-vis AI is new, but AI as a technological phenomenon is not new at all. In fact, the push for the implementation of AI has followed the economic logic of computational optimisation and improvement for many years. The fact that AI has become publicly visible, however, to the point of provoking both panic and hype, is a sociocultural phenomenon that needs more attention.

If you own a smartphone, the truth is that it is packed with AI technology – from image enhancement and music recognition to product identification and predictive text. If you have a profile on any social media platform, you may ask yourself about the carbon footprint of its algorithms. Idem for the AI tools coalesced in those modern video-conferencing services that are meant to encourage you to travel less. Up until now, this “pragmatic” side of AI has been seen as trivial, harmless, merely a necessary evil for the sake of progress. Writing in the nineteenth century on the economy of fuel, William Stanley Jevons postulated that an increase in efficiency leads to an increase in consumption. It is thus probably no coincidence that the pandemic brought about what can be described as computational excess, making it possible for non-specialists to toy with generative adversarial networks and diffusion models. In line with the Jevons paradox, this AI surplus was rapidly transformed into a separate sector, with its application binarised as art and function.
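Jevons’s postulate can be made concrete with a toy model (all numbers are assumptions for illustration, not drawn from the essay): if demand for a service follows a constant-elasticity curve, doubling efficiency halves the effective price of the service, and whenever the elasticity exceeds 1, total fuel use rises rather than falls:

```python
# Toy Jevons-paradox model with constant-elasticity demand (assumed numbers).
def fuel_use(efficiency: float, elasticity: float, k: float = 100.0) -> float:
    """Fuel consumed when the service's effective price is 1/efficiency,
    demand is Q = k * price**(-elasticity), and fuel = Q / efficiency."""
    price = 1.0 / efficiency
    demand = k * price ** (-elasticity)
    return demand / efficiency

baseline = fuel_use(efficiency=1.0, elasticity=1.5)   # 100.0 units of fuel
doubled = fuel_use(efficiency=2.0, elasticity=1.5)    # ~141.4: consumption rises
inelastic = fuel_use(efficiency=2.0, elasticity=0.5)  # ~70.7: savings survive
```

The sketch shows only that the direction of the effect hinges on the assumed elasticity; whether AI workloads are demand-elastic in this sense is exactly the empirical question the efficiency reports leave open.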

What the majority of artists and thinkers in the audiovisual field realised almost instantly is that the output of AI is not really art, yet the discussion has not moved forward much. When Eryk Salvaggio suggested that AI artifacts should be called ontolographs, he was referring to the technical process by which they are generated into existence – but the term is also relevant to the assemblage of domain-specific knowledge itself. In this sense, the system of categories and their representation from which AI models scoop up perceivable results operates a lot like Jungian archetypes. Translated into audiovisual language, what the stochastic parrot of deep learning extracts is closer to psychotherapeutic production. Rendering the collective unconscious conscious can be uncomfortable, even ugly. Furthermore, looking down on this manifestation under the pretext of a rational stance is a form of denial and repression that confuses the AI debate and shifts it into territory where the current power dynamics can be maintained.

If the crash of the NFT market can teach us anything, it is that images aim to eschew narratology brokers and their market-savvy adjutants. While art connoisseurs have been loudly negating the right of “AI art” to exist, not least on the basis of sustainability concerns, few have questioned its purpose. To no one’s surprise, the “you know, this isn’t truly art, so why bother” logic has not fixed the quantity-over-quality reality of AI overstock, because what AI supplies under the label “art” may not be its actual feature but merely a byproduct. The constant AI influx is more than an amateur exercise in kitsch – for now many see it as a commercial incentive, and some perceive it as yet another symptom of TESCREALism – but what if it signals a need for collective coping mechanisms in the face of advanced capitalism? Such a mechanism empowers people (if not with creativity, at least with the fantasy of creativity), giving them hope that oppressive hierarchies, including those in the area of arts and culture, can be evaded. Irrationally rejecting this opportunity, along with the bitter truths that may come out of it, stifles proper dialogue and leads to the compulsive repetition of the same resource-intensive yet oddly addictive pattern.

As the climate emergency escalates, so does the “proof-of-stake” narrative around it. At this point, we no longer have the neat distinction between art and function that made it so easy to substack AI updates at the beginning. The latest models are becoming increasingly complex, more “organic” and practical, and some of them no longer need large datasets for training, focusing instead on (whose?) “curation” and “documenting” (how?). Whether this will nurture or suppress the healing process that we, as a society, have been deprived of remains to be seen.

Yoana Pavlova is a Bulgarian critic, curator, educator, independent researcher, and occasionally an artist. Founder of a platform for experimental media criticism, she explores digital arts and culture in the form of text and visuals, as well as through analogue materials.