Longtermism: How techbros justify real suffering for abstract gains
Faced with hypothetical threats to humanity's far future, a growing elite of technocrats and philosophers embrace longtermism — a belief system that justifies present-day inaction, suffering, and even violence, all in the name of a distant, abstract posterity

"It's not a new weapon. It's a new world"
- Neils Bohr, Oppenheimer
Imagine this: the intelligence unit of a Western country receives a tip-off that a madman with extremist ideologies is hiding in Bangladesh, working on a doomsday weapon he intends to use to wipe out the world, even though his chances of succeeding are next to impossible. The unit has no further information on his identity or whereabouts. Now, if that country's head of state decides to launch a full-scale nuclear assault on our country (killing the entire nation) to save the greater humanity, what would be your moral stand in this scenario?
'Apocalypticism', the belief that the end of the world is imminent, perhaps even within one's own lifetime, is of course nothing new: it is rooted in religious scriptures and has been warned of by many philosophical and scientific scholars.
But the alarming situation the human race faces today rests not on the mere revelations of religious prophets or on secular metanarratives of human history, but on robust scientific conclusions defended by leading experts in fields such as climatology, ecology and epidemiology. For example, we know for a fact that climate change poses a dire threat to civilisation. Add machine learning, Artificial Intelligence (AI), superintelligence, geoengineering and the like, and the whole narrative becomes more ominous still.
Though the destruction and extinction of humanity looms over all these philosophical, religious and scientific debates, the actual impact (good or bad) of such extinction has seldom been discussed until recently. Over the past two decades, a small group of theorists, mostly based in Oxford, have been busy working out the details of a new moral worldview called longtermism, which emphasises how our actions affect the very long-term future of the universe – thousands, millions, billions, and even trillions of years from now. This ideology has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, an institute that, by the way, has been not only cited and endorsed but also generously funded by Elon Musk.
We often hear that tech-utopians (or, as the title puts it, techbros) like Elon Musk and multimillionaire tech entrepreneur Jaan Tallinn, a circle that also includes the likes of Mark Zuckerberg and Jeff Bezos, do not believe that climate change poses an 'existential risk' to humanity, precisely because of their adherence to longtermist ideology.
The point is that longtermism may be one of the most influential ideologies that few people outside elite institutions and Silicon Valley have ever heard of, despite the immense traction it has gained. In the era of technology and AI, I believe this needs to change.
The first thing to notice is that longtermism, as proposed by Bostrom and Nick Beckstead (a research associate at FHI), is not equivalent to 'caring about the long term' or 'valuing the wellbeing of future generations'.
That would make it a simple utilitarian ideology; in reality, it goes far beyond this. Longtermism calls on us to safeguard humanity's future in a manner that both diverts attention from current misery and leaves harmful socioeconomic structures critically unexamined. And as a movement, it has evidently enjoyed stunning financial success and clout.
This is why we see these techbros advocating for, and investing billions in, futuristic tech while completely ignoring the devastating economic and environmental crises that have both immediate and far-reaching impacts on humanity. Why do they have such a nonchalant attitude towards, for example, world poverty and climate change?
Because even if such social and political crises cause death, disappearance and enormous suffering to all living beings, they probably will not compromise our long-term potential over the coming trillions of years. So, according to them, even a climate catastrophe that cuts the human population by 75 per cent for the next two millennia will, in the grand scheme of things, be nothing more than a small blip (Émile P Torres, 2021).
This ideology creates a smoke-and-mirrors effect: it persuades people with money and power that, instead of investing in poor people or poor countries, they should invest in people and projects that already have the money and the potential to shape the future.
So, to them, it is for this goal that we should consider implementing a global surveillance system, keep pre-emptive war on the table, and focus more on superintelligent machines than on saving people in the Global South from the devastating effects of climate change and economic crises (caused mostly by the Global North).
As Bostrom argued, we mustn't 'fritter … away' our finite resources on 'feel-good projects of suboptimal efficacy' such as alleviating global poverty and reducing animal suffering, since neither threatens our long-term potential, and our long-term potential is what really matters.
What do they really mean by such long-term potential or value? If one is patient enough to scratch the surface, one finds within it imprints of eugenic ideology (scientific racism) and echoes of Hitler's project of creating a 'master race', in which poor and underprivileged people and nations are deemed 'unworthy' of investment or attention.
This unwarranted techno-optimism looks like a fool-proof recipe for disaster, because it dismisses present-day crises as mere ripple effects and stands willing and ready to sacrifice the present for the distant future. But the menacing fact that longtermists fail to see or realise is that technology is far more likely to cause our extinction long before any such distant-future event.
If you, like me, value the continued survival and flourishing of humanity, you should care about the long term but question, if not outright reject, the ideology of longtermism, which is not only dangerous and flawed but may be contributing to, and reinforcing, the very risks that now threaten every person on the planet.
S Arzooman Chowdhury is an alumna of the University of Cambridge and a Human Rights and Research Specialist.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard.