Great power war
By Stephen Clare · 80,000 Hours
https://80000hours.org/problem-profiles/great-power-conflict/
Why might preventing great power war be an especially pressing problem?

A modern great power war — an all-out conflict between the world’s most powerful countries — could be the worst thing to ever happen to humanity.

Historically, such wars have been exceptionally destructive. Sixty-six million people died in World War II, likely the deadliest catastrophe humanity has experienced so far.

Since World War II, the global population and world economy have continued to grow, nuclear weapons have proliferated, and military technology has continued to advance. This means the next world war could be even worse, just as World War II was much deadlier than World War I.

It’s not guaranteed that such a war will break out. And if it does, it may not escalate to such a terrible extent. But the chance can’t be ignored. In fact, there are reasons to think that the odds of World War III breaking out this century are worryingly high.

A modern great power war would be devastating for people alive today. But its effects could also persist long into the future. That’s because there is a substantial chance that this century proves to be particularly important. Technologies with the potential to cause a global catastrophe or radically reshape society are likely to be invented. How we choose to develop and deploy them could impact huge numbers of our descendants. And these choices would be affected by the outcomes of a major war.

To be more specific, there are three main ways great power conflict could affect the long-term future:

  1. High international tension could increase other dangers, such as catastrophic risk from AI. Great power tensions could make the world more dangerous even if they don’t lead to war. During the Cold War, for example, the United States and the USSR never came into direct conflict but invested in bioweapons research and built up nuclear arsenals. This dynamic could return, with tension between great powers fueling races to develop and build new weapons, raising the risk of a disaster even before shots are fired.
  2. War could cause an existential catastrophe. If war does break out, it could escalate dramatically, with modern weapons (nuclear weapons, bioweapons, autonomous weapons, or other future technologies) deployed at unprecedented scale. The resulting destruction could irreparably damage humanity’s prospects.
  3. War could reshape international institutions and power balances. While such a catastrophic war is possible, it seems extremely unlikely. But even a less deadly war, such as another conflict on the scale of World War II, could have very long-lasting effects. For example, it could reshape international institutions and the global balance of power. In a pivotal century, different institutional arrangements and geopolitical balances could cause humanity to follow different long-term trajectories.

The rest of this profile explores exactly how pressing a problem great power conflict is. In summary:

  • Great power relations have become more tense. (More.)
  • Partly as a result, a war is more likely than you might think. It’s reasonable to put the probability of such a conflict in the coming decades somewhere between 10% and 50%. (More.)
  • If war breaks out, it would probably be hard to control escalation. The chance that it would become large enough to be an existential risk cannot be dismissed. (More.)
  • Competition over AI may increase the chance of conflict, and advanced AI systems may make any resulting war more likely to be devastating. (More.)
  • This makes great power war one of the biggest threats our species currently faces. (More.)
  • It seems hard to make progress on solving such a difficult problem (more) — but there are many things you can try if you want to help (more).

International tension has risen and makes other problems worse

Imagine we had a thermometer-like device which, instead of measuring temperature, measured the level of international tension.1 This ‘tension meter’ would max out during periods of all-out global war, like World War II. And it would be relatively low when the great powers2 were peaceful and cooperative. For much of the post-Napoleonic 1800s, for example, the powerful European nations instituted the Concert of Europe and mostly upheld a continental peace. The years following the fall of the USSR also seem like a time of relative calm, when the tension meter would have been quite low.3

How much more worried would you be about the coming decades if you knew the tension meter would be very high than if you knew it would be low? Probably quite a lot. In the worst case, of course, the great powers could come into direct conflict. But even if it doesn’t lead to war, a high level of tension between great powers could accelerate the development of new strategic technologies, make it harder to solve global problems like climate change, and undermine international institutions.

During the Cold War, for instance, the United States and USSR avoided coming into direct conflict. But the tension meter would still have been pretty high. This led to some dangerous events:

  • A nuclear arms race. The number of nuclear warheads in the world grew from just 300 in 1950 to over 64,000 in 1986.
  • The development of new bioweapons. Despite signing the Biological Weapons Convention in 1972, the search for military advantages motivated Soviet decision makers to continue investing in bioweapon development for decades. Although never used in combat, biological agents were accidentally released from research facilities, resulting in dozens of deaths and threatening to cause a pandemic.4
  • Nuclear close calls. Military accidents and false alarms happened regularly, and top decision makers were more likely to interpret these events as hostile when tensions were high. On several occasions, the decision about whether or not to start a nuclear war seems to have come down to individuals acting under stress and with limited time.

This makes international tension an existential risk factor. It’s connected to a number of other problems, which means reducing the level of international tension would lower the total amount of existential risk we face.

The level of tension today

Recently, international tension seems to have once again been rising.

These dynamics raise an important question: how much more dangerous is the world given this higher tension than it would be in a world of low tension?

I think the answer is quite a bit more dangerous — for several reasons.

First, international tension seems likely to make technological progress more dangerous. There’s a good chance that, in the coming decades, humanity will make some major technological breakthroughs. We’ve discussed, for example, why one might worry about the effects of advanced artificial intelligence systems or biotechnology. The level of tension could strongly affect how these technologies are developed and governed. Tense relations could, for example, cause countries to neglect safety concerns in order to develop technology faster.5

Second, great power relations will strongly influence how nations do, or do not, cooperate to solve other global collective action problems. For example, in 2022, China withdrew from bilateral negotiations with the United States over climate action in protest of what it perceived as American diplomatic aggression in Taiwan. That same year, efforts to strengthen the Biological Weapons Convention were reportedly hampered by the Russian delegation after their country’s invasion of Ukraine raised tensions with the United States and other western countries.

Third, the rapid development of AI raises a range of challenges. It’s already a potential aggravator of existing tensions and competitive dynamics, such as those between the US and China, since countries may believe that obtaining powerful AI technology will give them advantages over their rivals. This can create the apparent incentive to race toward the technology, which can worsen tensions. And the competition likely makes it harder to cooperatively reduce the potentially catastrophic risks raised by advanced AI.

Finally, if relations deteriorate severely, the great powers could fight a war.

How likely is a war?

Wars are destructive and risky for all countries involved. Modern weapons, especially nuclear warheads, make starting a great power war today seem like a suicidal undertaking.

But factors like the prevalence of war throughout history, the chance that leaders make mistakes, conflicting ideologies, and commitment problems make me think that conflict could break out anyway.

On balance, I think such an event is somewhat unlikely but hardly unthinkable. To quantify this: I put the chance we experience some kind of war between great powers before 2050 at about one-in-three.6

War has occurred regularly in the past

One reason to think a war is quite likely is that such conflicts have been so common in the past. Over the past 500 years, about two great power wars have occurred per century.7

Naively, this would mean that every year there’s a 2% chance such a war occurs, implying the chance of experiencing at least one great power war over the next 80 years — roughly until the end of the century — is about 80%.8

This is a very simple model. In reality, the risk is not constant over time and independent across years. But it shows that if past trends simply continue, the outcome is likely to be very bad.
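Here’s a minimal sketch of that naive calculation (assuming, as above, a constant and independent 2% annual probability of war); it also yields the roughly 20% chance of 80 years of peace discussed below:

```python
# Naive base-rate model: two great power wars per century implies a
# constant, independent 2% chance of war each year (assumption from above).
p_annual = 0.02
years = 80  # roughly from now until the end of the century

p_at_least_one_war = 1 - (1 - p_annual) ** years
p_unbroken_peace = (1 - p_annual) ** years

print(f"P(at least one war in 80 years): {p_at_least_one_war:.0%}")  # ~80%
print(f"P(80 years of unbroken peace):   {p_unbroken_peace:.0%}")    # ~20%
```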

Has great power war become less likely?

One of the most important criticisms of this model is that it assumes the risk is constant over time. Some researchers have argued instead that, especially since the end of World War II, major conflicts have become much less likely due to:

  • Nuclear deterrence: Nuclear weapons are so powerful and destructive that it’s just too costly for nuclear-armed countries to start wars against each other.9
  • Democratisation: Democracies have almost never gone to war against each other, perhaps because democracies are more interconnected and their leaders are under more public pressure to peacefully resolve disputes with each other.10 The proportion of countries that are democratic has increased from under 10% in 1945 to about 50% today.
  • Strong economic growth and global trade: Global economic growth accelerated following World War II and the value of global exports grew by a factor of almost 30 between 1950 and 2014. Since war disrupts economies and international trade, strong growth raises the costs of fighting.11
  • The spread of international institutions: Multilateral bodies like the United Nations General Assembly and Security Council promote diplomatic dialogue and facilitate coordination to punish transgressors.12

It is true that we are living through an unusually long period of great power peace. It’s been about 80 years since World War II. We just saw that a simple model using the historical frequency of great power wars suggests there was only a 20% chance of going that long without at least one more war breaking out. This is some evidence in favour of the idea that wars have become significantly less common.

At the same time, we shouldn’t feel too optimistic.

The numerous close calls during the Cold War suggest we were somewhat lucky to avoid a major war in that time. And a 20% chance of observing 80 years of peace is not that low.13 Structural changes might have dramatically reduced the likelihood of war. Or perhaps we’ve just been lucky. It could even be that technological advances have made war less likely to break out, but more deadly when it occurs, leaving the overall effect on the level of risk ambiguous. It just hasn’t been long enough to support a decisive view.14

So while the recent historical trend is somewhat encouraging, we don’t have nearly enough data to be confident that great power war is a thing of the past. To better predict the likelihood of future conflict, we should also consider distinctive features of our modern world.15

One might think that a modern great power war would simply be so destructive that no state leader would ever choose to start one. And some researchers do think that the destruction such a war would wreak globally makes it less likely to occur. But it would be hard to find anyone who claims this dynamic has driven the risk to zero.

First, a war could be started by accident.

Second, sometimes even prudent leaders may struggle to avoid a slide towards war.

We could blunder into war

An accidental war can occur if one side mistakes some event as an aggressive action by an adversary.

This happened several times during the Cold War. Routine military exercises, such as reconnaissance flights that stray off course, carry some escalation risk. Similarly, throughout history, nervous pilots and captains have caused serious incidents by attacking civilian planes and ships.16 Nuclear weapons allow for massive retaliatory strikes to be launched quickly — potentially too quickly for such situations to be explained and de-escalated.

It is perhaps more likely, though, that an accidental war could be triggered by a technological malfunction. Faulty computers and satellites have previously triggered nuclear close calls. As monitoring systems have become more reliable, the rate at which such accidents have occurred has been going down. But it would be overconfident to think that technological malfunctions have become impossible.

Future technological changes will likely raise new challenges for nuclear weapon control. There may be pressure to integrate artificial intelligence systems into nuclear command and control to allow for faster data processing and decision making. And AI systems are known to behave unexpectedly when deployed in new environments.17

New technologies will also create new accident risks of their own, even if they’re not connected to nuclear weapon systems. Although these risks are hard to predict, they seem significant. I’ll say more about how such technologies — including AI, nuclear, biological, and autonomous weapons — are likely to increase war risks later.

Leaders could choose war

All that said, most wars have not started by accident. If another great power war does break out in the coming decades, it is more likely to be an intentional decision made by a national leader.

Explaining why someone might make such a costly, destructive, unpredictable, and risky decision has been called “the central puzzle about war.” It has motivated researchers to search for “rationalist” explanations for war. In his 2022 book Why We Fight, for example, economist Chris Blattman proposes five basic explanations: unchecked interests, intangible incentives, uncertainty, commitment problems, and misperceptions.18

Unchecked interests: Sometimes leaders who can decide to go to war stand to personally gain. Meanwhile, the costs are borne by citizens and soldiers who may not be able to hold their leaders to account.

Intangible incentives: War can sometimes provide some abstract value, like revenge, honour, glory, or status. This can help offset its costs.

Uncertainty: States will sometimes try to hide their strength or bluff to win concessions. Under this uncertainty, it can sometimes be in their rivals’ interests to call the bluff and fight.

Commitment problems: Bargaining is based on relative strength. If one state is growing in power more quickly than its rival, it may be hard to find a compromise solution that will continue to be acceptable in the future.

Misperceptions: Leaders may just misjudge the strength, beliefs, or resolve of their rivals and push for untenable bargains. Faced with what seem to be unfair terms, the rival state may decide to go to war.

This section discusses how great power tensions may escalate to war in the next few decades. It focuses on three potential conflicts in particular: war between the US and China, between the US and Russia, and between China and India. These are discussed because each of these countries is among the world’s largest economies and military spenders, and because these pairs seem particularly likely to fight. At the end, I briefly touch on other potential large conflicts.

Projected real GDP of the US, China, India, and Russia, according to a 2022 Goldman Sachs analysis. Source: Author’s figure using data from: Kevin Daly and Tadas Gedminas, “The Path to 2075 — Slower Global Growth, But Convergence Remains Intact,” Global Economics Paper (Goldman Sachs, December 6, 2022), https://www.goldmansachs.com/intelligence/pages/gs-research/the-path-to-2075-slower-global-growth-but-convergence-remains-intact/report.pdf.

United States-China

The most worrying possibility is war between the United States and China. They are easily the world’s largest economies. They spend by far the most on their militaries. Their diplomatic relations are tense and have recently worsened. And their relationship has several of the characteristics that Blattman identifies as causes of war.

At the core of the United States-China relationship is a commitment problem.

China’s economy is growing faster than the United States’. By some metrics, it is already larger.19 If this differential growth continues, the gap between the two countries will continue to widen. While economic power is not the sole determinant of military power, it is a key factor.20

The United States and China may be able to strike a fair deal today. But as China continues to grow faster, that deal may come to seem unbalanced. Historically, such commitment problems seem to have made these kinds of transition periods particularly dangerous.21

In practice, the United States and China may find it hard to agree on rules to guide their interactions, such as how to run international institutions or govern areas of the world where their interests overlap.

The most obvious issue which could tip the United States-China relationship from tension into war is a conflict over Taiwan. Taiwan’s location and technology industries are valuable for both great powers.

This issue is further complicated by intangible incentives.

For the United States, it is also a conflict over democratic ideals and the United States’ reputation for defending its allies.

For China, it is also a conflict about territorial integrity and addressing what are seen as past injustices.

Still, forecasts suggest that while a conflict is certainly possible, it is far from inevitable. As of 8 June 2023, one aggregated forecast22 gives a 17% chance of a United States-China war breaking out before 2035.23

A related aggregated forecast of the chance that at least 100 deaths occur in conflict between China and Taiwan by 2050 gives it, as of 8 June 2023, a much higher 68% chance of occurring.24

United States-Russia

Russia is the United States’ other major geopolitical rival.

Unlike China, Russia is not a rival in economic terms: even after adjusting for purchasing power, its economy is only about one-fifth the size of the United States’.

However, Russia devotes a substantial fraction of its economy to its military. Crucially, it has the world’s largest nuclear arsenal. And Russian leadership has shown a willingness to project power beyond their country’s borders.

| Country | Military spending in 2021 (2020 USD, PPP adjusted) |
| --- | --- |
| United States | 801 billion |
| China | 293 billion |
| India | 76.6 billion |
| United Kingdom | 68.4 billion |
| Russia | 65.9 billion |

Top five countries by estimated military spending, 2021. Source: SIPRI

Russia’s 2022 invasion of Ukraine demonstrated the dangers of renewed rivalry between Russia and the United States-led West. The war has already been hugely destructive: the largest war in Europe since World War II, with hundreds of thousands of casualties already and no end to the conflict in sight. And it could get much worse. Most notably, Russian officials have repeatedly refused to rule out the use of nuclear weapons.

Unchecked interests and intangible incentives are again at play here. Vladimir Putin leads a highly centralised government. He has spoken about how his desire to rebuild Russia’s reputation played a role in his decision to invade Ukraine.

Given their ideological differences and history of rivalry, it is reasonable to expect that the United States and Russia will continue to experience dangerous disagreements in the future. As of 8 June 2023, an aggregated forecast gives a 20% chance that the United States and Russia will fight a war involving at least 1,000 battle deaths before 2050.

China-India

India is already the world’s third-largest economy. If national growth rates remain roughly constant, the size of the Indian economy will surpass that of the United States sometime this century. India also has nuclear weapons and is already the world’s third-largest military spender (albeit at a much lower level than China or the United States).

One reason to worry that China and India could fight a war is that they already dispute territory along their border. Countries that share a border, especially when it is disputed, are more likely to go to war than countries that do not. By one count, 88% of the wars that occurred between 1816 and 1980 began as wars between neighbours.25

In fact, China and India already fought a brief but violent border war in 1962. Deadly skirmishes have continued since, resulting in deaths as recently as 2020.

Forecasters agree that a China-India conflict seems relatively (though not absolutely) likely. An aggregated forecast gives a 19% chance of war before 2035.

Other dangerous conflicts

These three conflicts — United States-China, United States-Russia, and China-India — are not the only possible great power wars that could occur. Other potential conflicts could also pose existential risk, either because they drive dangerous arms races or see widespread deployment of dangerous weapons.

We should keep in mind India-Pakistan as a particularly likely conflict between nuclear-armed states and China-Russia as a potential, though unlikely, conflict between great powers with a disputed border and history of war. Plus, new great powers may emerge or current great powers may fade in the years to come.

While I think we should prioritise the three potential conflicts I’ve highlighted above, the future is highly uncertain. We should monitor geopolitical changes and be open to changing our priorities in the future.

A race for AI might increase the risk of conflict

One major risk factor for great power conflict is the accelerating development of AI.

As companies push to create AI systems that can augment and replace much of human labour, the great powers have not been oblivious. Political leaders in the United States, China, and elsewhere recognise that the technology could reshape social, economic, and military systems.26

Facing the prospect of such a transformative technology, the leaders of great powers may feel they have more to fear from one another. They may come to believe that another great power leading in AI poses an unacceptable threat — one that justifies drastic action to defang it.

There are at least three reasons falling behind a geopolitical rival in AI could seem so concerning:

  • AI could enhance economic strength — with AI advanced enough to perform most occupations currently done by humans, a country could accelerate economic growth at an unprecedented rate. Since economic strength provides a major advantage in conflict, this could upset the existing balance of power.
  • AI could enhance military might — advanced AI may also greatly strengthen a country’s ability to fight wars by, for example, allowing it to develop new weapons and supporting technologies like targeting systems, improve its intelligence-gathering and logistics systems, or even significantly automate its armed forces. Even more specific capabilities, such as the ability to track or disable a rival’s nuclear arsenal, would be highly destabilising.
  • AI could otherwise enhance geopolitical power — AI might be used to advance a country’s interests in other ways, such as by strengthening its cultural dominance or helping it gain allies through strategic sharing of the technology’s benefits.

A country that fears its rival might be on the cusp of achieving such major advances may feel incentivised to strike before they materialise. Or it may feel the need to race forward in its own pursuit of the technology, despite the inherent risks. The country’s leaders may feel that if they wait, they’ll end up completely unable to compete with an opponent and subject to its domination.

There’s already some evidence of tension between the United States and China over AI technology. Most notably, the United States has enacted a series of export controls to reduce China’s access to advanced AI chips.

Despite the risk, it’s far from certain that these dynamics must escalate. There are genuine opportunities for international cooperation on AI, which could help reduce the risk of conflict as well as the other serious risks posed by the technology. Finding shared interests and positive-sum agreements may allow the world to mitigate the dangers while increasing prosperity for all.

Overall predictions

Below is a table listing relevant predictions from the forecasting platform Metaculus, including the number of predictions made, as of 10 March 2023. Note the different timescales and resolution criteria for each question; they may not be intuitively comparable.

| Prediction | Resolution criteria | Number of predictions | Metaculus prediction |
| --- | --- | --- | --- |
| World war by 2151 | Either a war killing >0.5% of the global population and involving >50% of countries, totalling >50% of the global population, from at least 4 continents; or a war killing >1% of the global population and involving >10% of countries, totalling >25% of the global population | 561 | 52% |
| World War III before 2050 | Involving countries making up >30% of world GDP or >50% of world population, and causing >10M deaths | 1,640 | 20% |
| Global thermonuclear war by 2070 | Either 3 countries each detonate at least 10 nuclear warheads of at least 10 kt yield outside their territory, or 2 countries each detonate at least 50 such warheads outside their territory | 337 | 11% |
| When will be the next great power war? | Any two of the top 10 nations by military spending are at war, defined as a formal declaration, or occupied territory plus at least 250 casualties, or media sources describing them as “at war” | — | 25th percentile: 2031; median: 2048; 75th percentile: 2088; never (not before 2200): 8% |
| No non-test nuclear detonations before 2035 | No nuclear detonation other than a controlled test (note the negation: the question resolves negatively if a warhead is detonated) | 321 | 69% |
| At least 1 nuclear detonation in war by 2050 | Resolves according to credible media reports | 476 | 31% |

I have previously independently estimated the likelihood of seeing a World War III-like conflict this century. My calculation first adjusts historical base rates to allow for the possibility that major wars have become somewhat less likely, and uses the adjusted base rate to calculate the probability of seeing a war between now and 2100.

This method gives a 45% chance of seeing a major great power war in the next 77 years. If the probability is constant over time then the cumulative probability between now and 2050 would be 22%. This is aligned with the Metaculus predictions above.
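As a rough check, here’s one way to reproduce that conversion. It assumes a constant annual risk backed out of the 77-year figure and a 27-year window from 2023 to 2050, which lands slightly below the 22% above (the original calculation presumably uses slightly different inputs):

```python
# Back out a constant annual war probability implied by the 77-year estimate,
# then accumulate it over 2023-2050. A sketch under stated assumptions, not
# necessarily the exact method behind the 22% figure.
p_by_2100 = 0.45
p_annual = 1 - (1 - p_by_2100) ** (1 / 77)  # implied constant annual risk
p_by_2050 = 1 - (1 - p_annual) ** 27

print(f"Implied annual risk: {p_annual:.2%}")   # ~0.8% per year
print(f"Cumulative by 2050:  {p_by_2050:.0%}")  # ~19%
```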

We can also ask experts what they think. Unfortunately, there are surprisingly few expert predictions about the likelihood of major conflict. One survey was conducted by the Project for the Study of the 21st Century. The numbers were relatively aligned with the Metaculus forecasts, though slightly more pessimistic. However, it seems a mistake to put too much stock in this survey (see footnote).27

We now have at least a rough sense of a great power war’s probability. But how bad could it get if it occurred?

A new great power war could be devastating

At the time, the mechanised slaughter of World War I was a shocking step-change in the potential severity of warfare. But its severity was surpassed just 20 years later by the outbreak of World War II, which killed more than twice as many people.

A modern great power war could be even worse.

How bad have wars been in the past?

The graph below shows how common wars of various sizes are, according to the Correlates of War’s Interstate War dataset.28

The x-axis here represents war size in terms of the logarithm of the number of battle deaths. The y-axis represents the logarithm of the proportion of wars in the dataset that are at least that large.

Using logarithms means that each step to the right in the graph represents a war not one unit larger, but 10 times larger. And each step up represents a war that is not one unit more likely, but 10 times more likely.

Cumulative frequency distribution of the severity of interstate wars, 1816–2007. Source: Author’s figure. See the data here. Data source: Correlates of War Inter-State War dataset, v4.029

What the graph shows is that wars have a heavy tail. Most wars remain relatively small. But a few escalate greatly and become much worse than average.

Of the 95 wars in the latest version of the database, the median battle death count is 8,000. But the heavy tail means the average is 334,000 battle deaths. And the worst war, World War II, had almost 17 million battle deaths.30
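To see how a heavy tail pulls the mean so far above the median, here’s an illustration using synthetic data (not the Correlates of War figures) drawn from a Pareto distribution, with a tail exponent chosen purely for illustration:

```python
import numpy as np

# Draw 95 hypothetical "war sizes" from a Pareto (power-law) distribution.
# The tail exponent alpha and minimum size x_min are illustrative
# assumptions only, not fitted to the real dataset.
rng = np.random.default_rng(0)
alpha = 0.6
x_min = 1_000
sizes = x_min * (1 + rng.pareto(alpha, 95))

print(f"median: {np.median(sizes):,.0f}")  # a few thousand
print(f"mean:   {np.mean(sizes):,.0f}")    # far larger, dragged up by the tail
print(f"max:    {np.max(sizes):,.0f}")     # one enormous outlier dominates
```

The qualitative pattern, a median dwarfed by the mean and a maximum dwarfing both, mirrors the real figures above.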

The number of battle deaths is only one way to measure the badness of wars. We could also consider the proportion of the population of the countries involved who were killed in battle. By this measure, the worst war since 1816 was not World War II but the Paraguayan War of 1864–70. In that war, 30 soldiers died for every 1,000 citizens of the countries involved. It’s even worse if we also consider civilian deaths: while estimates are very uncertain, it’s plausible that about half of the men in Paraguay, or around a quarter of the entire population, were killed.31

What if instead we compared wars by the proportion of the global population killed? World War II is again the worst conflict since 1816 on this measure, having killed about 3% of the global population. Going further back in time, though, we can find worse wars. Genghis Khan’s conquests likely killed about 9.5% of the people in the world at the time.

The heavy tail means that some wars will be shockingly large.32 The scale of World War I and World War II took people by surprise, including the leaders who initiated them.

It’s also hard to know exactly how big wars could get. We haven’t seen many really large wars. So while we know there’s a heavy tail of potential outcomes, we don’t know what that tail looks like.

That said, there are a few reasons to think that wars much worse than World War II are possible:

  • We’re statistically unlikely to have brushed up against the end of the tail, even if the tail has an upper bound.
  • Other wars have been deadlier on a per-capita basis. So unless wars involving countries with larger populations are systematically less intense, wars involving as many people as World War II could prove even deadlier.
  • Economic growth and technological progress are continually increasing humanity’s war-making capacity. This means that, once a war has started, we’re at greater risk of extremely bad outcomes than we were in the past.

So how bad could it get?

How bad could a modern great power war be?

Over time, two related factors have greatly increased humanity’s capacity to make war.33

First, scientific progress has led to the invention of more powerful weapons and improved military efficiency.

Second, economic growth has allowed states to build larger armies and arsenals.

Since World War II, the world economy has grown by a factor of more than 10 in real terms; the number of nuclear weapons in the world has grown from basically none to more than 9,000, and we’ve invented drones, missiles, satellites, and advanced planes, ships, and submarines.

Genghis Khan’s conquests killed about 10% of the world, but this took place over the course of two decades. Today that proportion may be killed in a matter of hours.

First, nuclear weapons could be used.

Today there are around 10,000 nuclear warheads globally.34 At the peak of nuclear competition between the United States and the USSR, though, there were 64,000. If arms control agreements break down and competition resurges among two or even three great powers, nuclear arsenals could expand. In fact, China’s arsenal is very likely to grow — though by how much remains uncertain.

Many of the nuclear weapons in the arsenals of the great powers today are at least 10 times more powerful than the atomic bombs used in World War II.35 Should these weapons be used, the consequences would be catastrophic.

Graph showing that early nuclear weapons were thousands of times more explosive than previous conventional explosives. Source: AI Impacts, Effect of nuclear weapons on historic trends in explosives

By any measure, such a war would be by far the most destructive, dangerous event in human history, with the potential to cause billions of deaths.

The probability that it would, on its own, lead to humanity’s extinction or unrecoverable collapse is contested. But there seems to be some possibility — whether through a famine caused by nuclear winter, or by reducing humanity’s resilience enough that something else, like a catastrophic pandemic, would be far more likely to reach extinction levels (read more in our problem profile on nuclear war).

Nuclear weapons are complemented and amplified by a variety of other modern military technologies, including improved missiles, planes, submarines, and satellites. They are also not the only military technology with the potential to cause a global catastrophe — bioweapons, too, have the potential to cause massive harm through accidents or unexpected effects.

What’s more, humanity’s war-making capacity seems poised to further increase in the coming years due to technological advances and economic growth. Technological progress could make it cheaper and easier for more states to develop weapons of mass destruction.

In some cases, political and economic barriers will remain significant. Nuclear weapons are very expensive to develop and there exists a strong international taboo against their proliferation.

In other cases, though, the hurdles to developing extremely powerful weapons may prove lower.

Improvements in biotechnology will probably make it cheaper to develop bioweapons. Such weapons may provide the deterrent effect of nuclear weapons at a much lower price. They also seem harder to monitor from abroad, making it more difficult to limit their proliferation. And they could spark a global biological catastrophe, like a major — possibly existentially catastrophic — pandemic.

Artificial intelligence systems are also likely to become cheaper as well as more powerful, as discussed above. It is not hard to imagine important military implications of this technology. For example, AI systems could control large groups of lethal autonomous weapons (though the timeline on which such applications will be developed is unclear). They may increase the pace at which war is waged, enabling rapid escalation outside human control. And AI systems could speed up the development of other dangerous new technologies.

Finally, we may have to deal with the invention of other weapons which we can’t currently predict. The feasibility and danger of nuclear weapons was unclear to many military strategists and scientists until they were first tested. We could similarly experience the invention of destabilising new weapons in our lifetime.

What these technologies have in common is the potential to quickly kill huge numbers of people:

  • A nuclear war could kill tens of millions within hours, and many more in the following days and months.
  • A runaway bioweapon could prove very difficult to stop.
  • Future autonomous systems could act with lightning speed, even taking humans out of the decision-making loop entirely.

Faster wars leave less time for humans to intervene, negotiate, and find a resolution that limits the damage.

How likely is war to damage the long-run future?

When a war begins, leaders often promise a quick, limited conflict. But escalation proves hard to predict ahead of time (perhaps because people are scope-insensitive, or because escalation depends on idiosyncratic decisions).

This raises the possibility of enormous wars that threaten all of humanity.

The risk of extinction

It is extremely difficult to estimate the chance that a war escalates to the point of causing human extinction.

One possible starting point is to extrapolate from past wars. Political scientist Bear Braumoeller fit a statistical model to the Correlates of War data I discussed above.36 His model suggests that any given war has at least37 a one in 3,300 chance of causing human extinction.

If we experience 15 wars in the next 30 years,38 then the implied chance of an extinction war is about 0.5%. Assuming 50 wars over the next 100 years, that rises to a disturbing 1.5% chance of extinction.
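That extrapolation is easy to reproduce. Here is a minimal sketch, assuming each war is an independent draw with the per-war probability from Braumoeller’s model:

```python
# Per-war chance of escalating to human extinction, per Braumoeller's model.
p_per_war = 1 / 3300

for n_wars, period in [(15, "30 years"), (50, "100 years")]:
    # Chance that at least one of n_wars independent wars causes extinction
    p_extinction = 1 - (1 - p_per_war) ** n_wars
    print(f"{n_wars} wars over {period}: about {p_extinction:.2%}")
# 15 wars: about 0.45%; 50 wars: about 1.50%
```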

But this estimate must be interpreted cautiously. First, it infers probabilities of different outcomes today using data from the past. Yet the chances of different war outcomes have very likely changed over time. The Correlates of War data goes back to 1816; it seems reasonable to think that 19th-century wars, fought with cannons and horses, tell us little about modern wars. This means the model probably underestimates the chance of huge wars in the 21st century.

The Correlates of War data also only includes battle deaths. But large wars also kill lots of civilians. So considering only battle deaths will underestimate the chance of an extinction-level war by a considerable margin (for example, if one civilian is killed for every soldier, then a smaller, more probable war of just over four billion battle deaths would cause human extinction).

On the other hand, to infer probabilities about extinction-level events, Braumoeller extrapolates far beyond the data we’ve observed so far. An extinction-level war would be more than 100 times larger than World War II. It is hard to imagine a conventional war,39 at least, escalating to this extent. The logistics would be enormously complex. And barring omnicidal maniacs, world leaders would be hugely incentivised to bring the fighting to an end before killing literally everyone. This makes the model look too pessimistic.

On the whole, a 1.5% chance of an extinction-level war this century seems too high to me.

But while Braumoeller’s model seems too pessimistic on net, his work makes it hard to rule out a war that causes human extinction. We’re just left pretty uncertain about how likely it might be.

Another approach is to estimate the specific risks posed by different weapons of mass destruction.

We’ve estimated that the direct risk of an existential catastrophe caused by nuclear weapons in the next 100 years is around 0.01%. Maybe half of that risk (0.005%) comes from escalation through a major conflict.

I’d guess that the risks posed by bioweapons are similar (and possibly higher). We should also consider the interaction between great power conflict and risks from AI, as well as other future weapons of mass destruction whose development we can’t predict.

We could assume that these risks, plus the risk of conventional wars, are approximately mutually exclusive, and that each contributes about 0.005%. That would give a total risk of around 0.025% — or around one in 4,000 this century.
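Spelled out, that back-of-the-envelope sum looks like this (every number is the rough guess above, not a measured quantity):

```python
# Five roughly mutually exclusive pathways, each guessed to contribute
# about 0.005% of existential risk this century.
pathways = {
    "nuclear escalation": 0.00005,
    "bioweapons": 0.00005,
    "military AI": 0.00005,
    "future weapons": 0.00005,
    "conventional war": 0.00005,
}

total = sum(pathways.values())
print(f"Total: about {total:.3%}, or roughly 1 in {round(1 / total):,}")
# Total: about 0.025%, or roughly 1 in 4,000
```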

The risk of collapse

A more likely scenario is a war which doesn’t cause extinction, but is much larger than World War II.40 Such an event would still be easily the most destructive and deadly in human history. Beyond the enormous suffering it would cause, it would inflict major damage on the world’s infrastructure, trade links, social networks, and perhaps international institutions. The effects could be very long lasting.

One possibility is that civilisation could be damaged to the point of collapse. While some people would survive, they would lack the physical and social infrastructure to maintain all the processes we need to sustain modern life.

Rebuilding in these conditions would be a formidable challenge. Adjusted for inflation, under the Marshall Plan the United States spent $150 billion helping nations in western Europe recover from World War II. Accounting for investments from other Allies and the affected countries, as well as damage in eastern Europe, Asia, and Africa, rebuilding after World War II cost trillions of dollars.

So rebuilding after a war much larger than World War II could cost tens of trillions of dollars. And such a war would leave fewer nations with intact economies to fund the recovery. Survivors could also face additional challenges like widespread nuclear fallout and uncontrolled spread of weaponised pathogens.

Given enough time, I’d guess that humanity would eventually recover and rebuild industrial civilisation.41 However, we don’t know this for sure. And the recovery could take a very long time. Meanwhile, society would be vulnerable to a range of natural and anthropogenic hazards which could drive the survivors to extinction.

Even if it doesn’t cause extinction or civilisational collapse, a major war could affect our long-term trajectory

Finally, a large war could alter our future even if it doesn’t cause human extinction or a global societal collapse.

Consider how different the world looked before and after World War II. Before the war, most of the world was autocratic. Fascists controlled several of the world’s most powerful countries.

This changed after the war. The Allied victory preceded a global wave of democratisation. Though fascist regimes continued in some countries, far-right ideology clearly posed less of a threat after the war. Instead, as a direct result of the war, the international institutions that emerged in the years after were shaped by liberal values like human rights and international cooperation.42

World War II is a particularly dramatic example, but it’s not the only time that war has caused major geopolitical realignments and affected which values are influential globally. Major conflicts reshape the global balance of power. In their aftermath, leaders of the victorious nations often use their new influence to change various institutions in their favour.

They redraw borders and cause civilisations to rise and fall. They invest in military research and influence how technological change happens. And their diplomatic strategies shape the norms and institutions that structure the international system.

Such changes can have very long-lasting effects.43 In extreme cases, they can even change our civilisational trajectory: the average value of the world over time. This might sound abstract, but just think about how much more pessimistic you’d feel about the future if more of the world’s most powerful countries were still ruled by fascist dictators.

A new great power war has the potential to cause similarly important changes to global institutions. For example, if an authoritarian state or alliance emerged from the war victorious, it may be able to use its influence and modern digital surveillance tools to entrench its power at a global scale.44

This century may be especially important because we are at risk of value lock-in due to transformative AI. We’re also probably in a high-risk period due to technological progress generally. So, it could be hugely important which countries, values, and institutions become more globally influential following a war.

That said, World War II also shows that the effect of war on our civilisational trajectory is not always unambiguously negative. That’s why I have focused on the effects of war that do seem unambiguously bad: dangerous technological development, near-term death and destruction, and heightened risk of a major global catastrophe.

Overall view

Overall, the short answer to the question of how likely a war is to affect our long-term future is that we really don’t know. Not much research has addressed this question, and each of the estimates I’ve ventured above has some serious weaknesses.

But considering the track record of past wars and the chance that weapons of mass destruction are used, I’d put the chance of an extinction-level war at between 0.025% and 1%.

We’re much more likely to experience a somewhat smaller war (a war killing 800 million people is probably around three times more likely than a war killing eight billion). But its long-term effects are far more ambiguous than extinction. So perhaps the risk to the long-term future from trajectory changes is roughly equal — though it’s really hard to say.

On the whole, my best guess is that the chance a war seriously damages the long-term future this century is between 0.05% and 2%. But I expect, and hope, this estimate will change in the coming years as more researchers work on these questions.

Note, though, that this estimate only reflects the risk that the war itself directly leads to a catastrophe that affects the long-term future. Great power conflict could contribute to other catastrophic risks by breaking down vital international coordination, which might, for example, increase the risk that someone unintentionally deploys a catastrophically dangerous AI system.

What are the major arguments against this problem being especially pressing?

So far, I’ve talked a lot about reasons why you might want to work on this problem: the chance that a new great power war breaks out is low but far from zero, such a war could escalate to unprecedented size, and the effects could reach far into the future.

In other words, the importance of the problem is clear. But we need to consider other factors as well. In particular, we need to ask if there’s anything we can realistically do to help solve it. And there are several reasons to think that improving great power relations is among the more difficult major problems to make progress on:

  1. There are many people working on avoiding war, with strong incentives to do so. (More.)
  2. Even if you can influence policy, it’s often not clear what the best thing is to do. (More.)
  3. Maybe it’s better to focus instead on specific risks. (More.)

It’s less neglected than some other top problems

Most people want to avoid war

The most obvious reason you might not choose to work on this problem is that it’s less neglected than some of our other top problems.

War hurts almost everyone. Some (though not all) wars start with public support. But they are costly in human lives and economic disruption. Negotiated solutions are almost always preferable. In reality, most wars that could happen don’t because people work to avoid fighting them.45

That said, it’s important not to take this argument too far. It is not the case that everyone is harmed by war or high international tension. The most obvious example is defence companies, which benefit when governments buy more and more expensive weapons. In certain circumstances, unchecked leaders can also gain status and enhance their reputation through war without personally incurring many costs. And some foreign policy professionals benefit from increasing demand for their work.

I mention these factors not to criticise these actors in particular. Rather, it’s to point out that we can’t assume war will be avoided because it’s so costly for large swathes of society.

We know conflict has been historically common. And we know that negative outcomes can occur when their costs are distributed across society but their benefits are concentrated among influential actors.

More people work on this problem than some other top problems

Still, the obvious costs of war mean that there are already thousands of people working in relevant diplomacy, research, and policy roles. For example, there are about 13,000 Foreign Service members in the US State Department alone.

Thousands more people work on these issues in think tanks and universities. The Council on Foreign Relations, a prestigious membership organisation which publishes Foreign Affairs and hosts events about foreign policy, has over 5,000 members. The International Studies Association, which focuses more on academics, has over 7,000 members. Many thousands more people work on this problem in the intelligence and defence communities.

Of course, these organisations cover a huge range of issues, with only a fraction of their employees focused on great power war in particular. And of this fraction, only a small number are probably focused on preventing or mitigating worst-case outcomes like extinction.

To get a rough estimate of the number of people working on this problem, let’s assume that the US government employs about 250,000 people who work on issues broadly related to great power war. Perhaps 5% of this effort focuses on the specific issues we’ve discussed throughout this profile. That would leave about 12,500 people in the US government working on the most important foreign policy issues today.

Assume further that another 10,000 people work on international relations in think tanks and universities and, again, 5% focus on the issues in this profile. That would bring our total to about 13,000 people.

Of course, this is a very rough estimate. Accounting for the civil servants, diplomats, analysts, researchers, professors, and advocates in other parts of the world could double or triple it.

(In comparison, we’ve previously estimated that only about 400 people work on existential risks from advanced artificial intelligence.)

Because so many people are already working in this field, you will probably find it harder to identify important issues where a lot of progress can be made and that other people haven’t already found.

There aren’t many possible actions which are clearly positive

Suppose, though, that you managed to work your way up to a role which allows you to influence foreign policy in some way. What advice would you give?

This is a hard question to answer for a few different reasons.

First, international relations researchers disagree on even the field’s most basic questions, like when deterrence policies are effective and whether diplomatic, cultural, and economic engagement has pacifying effects. So there are few — though not zero! — ‘consensus’ actions to pursue.

Second, predicting the effects of important foreign policy decisions is difficult. We just don’t know much about how accurate long-run forecasts are, even when they’re made by superforecasters with strong track records.

Third, our advice could not just be ineffectual; it could also be harmful. Not only are the long-run effects of foreign policy decisions hard to predict, they often involve difficult tradeoffs.

For example, some researchers argue that building up the world’s nuclear arsenals has made major great power wars less likely (because of mutually-assured destruction) but smaller conflicts more likely (because they are less likely to escalate and thus ‘safer’ to fight).

Under this model, the total effect of nuclear deterrence doctrines on existential risk is ambiguous. It raises the upper bound of how bad a conflict could get. But it makes such conflicts somewhat less likely. And it’s hard to say which effect dominates.46

For these reasons, the impact one can have by working in this area is probably best thought of as improving the quality of decision making on a case-by-case basis rather than advocating generally for specific policies. You’ll probably still have some doubt about which direction to push.

Of course, everyone faces the same issues. You could still have a big impact by giving better advice or making better decisions, given all these constraints, than whoever you’re replacing would have. But acting under so much uncertainty could be a strong limitation on the expected impact you can have.

Maybe it’s better to focus instead on more specific risks

These concerns may lead you to think that you can have a bigger impact by working on a more direct existential risk, like nuclear security, biosecurity, or risks from AI.

To think through this decision, let us return for a moment to our tension meter metaphor. The goal of someone working in great power relations could be seen as lowering the reading on the meter. I’ve discussed how that might make a diplomatic breakdown or the outbreak of a war less likely, lowering total existential risk.

But it may seem too hard to affect the tension meter. Or the connection between the tension meter and any specific risk (like a deadly pandemic) may be too tenuous. In that case, you’d probably have a bigger impact by taking the current level of international tension as given, and working directly on one of our other top problems in whatever geopolitical context we may find ourselves in.

For example, one way in which great power war could lead to catastrophe is by causing the release of an extremely contagious and deadly biological agent. Perhaps high tensions and fear of war increase investment in biological weapons, increasing the risk of an accidental release. Or perhaps one of the great powers, faced with the prospect of a catastrophic lost war, chooses to release such a weapon in a desperate bid for victory, and it goes horribly wrong.47

Flowchart showing how reduced international cooperation and great power war are factors in the development and deployment of dangerous new technologies, which could cause existential risks. Source: “Modelling Great Power conflict as an existential risk factor”, Effective Altruism Forum

You could choose to reduce the likelihood of this outcome by reducing the chance we end up in a high-tension or outright-conflict scenario in the first place. Or you could reduce the likelihood of this outcome by focusing specifically on how biological agents are governed and controlled. Although the latter approach doesn’t reduce the other risks conflict poses, there are more concrete proposals you could work on implementing.

Whether it’s better to focus on overall tension or specific risks depends on the relative tractability of proposals in both areas and how many other risks are affected by changes in international tension. You’re more likely to think that trying to reduce conflict is more impactful if:

  • You think conventional wars pose a lot of risk on their own, either because they can escalate massively or cause trajectory changes.
  • You think that great power war drives a large fraction of the risks posed by nuclear weapons, biological weapons, military AI, and other emerging technologies. This would make reducing tensions between great powers a powerful leverage point for lowering total overall risk.
  • You think that there are good approaches to reducing great power war risk — perhaps ones that aren’t mentioned in this article.

If, however, you think most of the overall existential risk we face comes from a specific risk (such as AI or climate change) or great power war is just not that solvable, then you might want to focus on a different area.

Earlier, we identified five specific pathways through which great power conflict could cause an existential catastrophe (conventional war, nuclear war, bioweapons, AI, and future technologies). So by working to reduce great power tensions, you can reduce five risks at once.

But my current best guess is that it’s at least 10 times harder to reduce the chance of conflict by a given amount than it is to reduce a specific risk like a biotech catastrophe. So unless you feel that, for personal fit reasons, you would be at least two or three times more effective working on great power war broadly, it likely makes sense to focus on one of the most pressing specific risks.
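Here’s that comparison made explicit in a short sketch (every input is just my rough guess from above, not a measured quantity):

```python
# Working on great power war is guessed to touch ~5 risk pathways at once,
# but to be ~10x less tractable than working directly on a single risk.
n_pathways = 5
relative_tractability = 1 / 10

impact_broad = n_pathways * relative_tractability  # = 0.5 per unit of effort
impact_specific = 1.0                              # baseline: one specific risk

print(f"Fit multiplier needed to break even: {impact_specific / impact_broad}x")
# => 2.0x, which is why the bar above is two to three times better personal fit
```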

(That said, this is a very rough calculation — I could easily be wrong here!)

What can you do to help?

After reading the previous section, you might feel pessimistic about your chances of making progress on this problem.

It’s true that this problem seems generally less neglected than some of the world’s other top problems, and I’m not really sure what’s most helpful for it. But great power war encompasses many different issues. I think that some of these specific sub-problems are more neglected and tractable than great power diplomacy generally. You could have a big impact by focusing on them.

Here are a few issues that the experts I've spoken to consider particularly promising for people to work on if they want to have an impact in this space.

One promising path to impact looks like gaining a deep understanding of the foreign policy landscape, building a strong network, and practising good judgement. Later in your career, you could use your skills and expertise to support policies that seem good and resist policies that seem harmful. But exactly which policies those will be currently seems hard to predict, as it will likely depend on highly contextual factors like who's leading the countries involved.

Another thing to keep in mind is that to reduce great power war, you’ll probably need to combine foreign policy expertise with expertise in another important area.48

For example, US foreign policy experts who also know a lot about China or speak Mandarin are really valuable. Similarly, people who understand international relations and biosecurity, risks from advanced artificial intelligence, or nuclear security are sorely needed.

If you want to go into this field, you’ll probably need to be flexible and open to taking unexpected opportunities when they arise.

Finally, you’ll want to think carefully about personal fit. There are a lot of different jobs you could do in this area. Some are very research-focused, like working in a think tank. Others would be much more people-oriented, like working for a policymaker or going into politics yourself. Although you might work on the same issues, your daily routine would look totally different.

The rest of this section gives some preliminary ideas about where you might want to work in this area. It’s separated into two questions: where can you work and what issues should you try to focus on?

Where can you work?

Government

I’ll start with roles specific to the US government because it’s especially large and influential in many of our top problem areas.

The main US federal policy institutions are Congress49 and the executive branch (including both federal agencies and the White House).50

After my conversations with experts, I’ve divided the potential government roles in this space into four broad categories.

First, there are research-like roles in intelligence and analysis. Researchers can affect policy by ensuring it addresses the right problems and focuses on the best solutions. For example, at the beginning of the Cold War, analysts suggested that the USSR's nuclear arsenal was larger and more effective than the United States', and that the gap was growing. This idea was wrong, but it helped drive the early nuclear arms race; better analysis might have prevented it.

Second, there are decision-making roles in which research is turned into policy. These include political appointees selected by the executive, and career civil servants who work their way up the bureaucracy. Decision makers influence which strategies to pursue and which policies to implement.

Third, there are programme management roles. Programme managers prioritise how government budgets are spent. Since these budgets can be quite large, even small improvements in how they’re spent could have a big impact.

I've distinguished programme managers from decision makers because they work 'deeper' in the bureaucracy, with less public visibility. The State Department's Office of Cooperative Threat Reduction, for example, currently spends about $90 million a year on its Global Threat Reduction programme, which focuses on preventing the development and proliferation of weapons of mass destruction and "advanced conventional weapons."51

Fourth, there are diplomatic roles that involve working with people from other countries to implement policies.

To enter a career in US foreign or security policy, the best paths include completing a relevant graduate degree (ideally based in Washington, DC), particularly a policy master's or law degree, and participating in a policy fellowship, which can provide job placements, funding, training, mentoring, networking opportunities, application support, and more.

Working for the US government, especially in national security, is often impossible for non-citizens.

However, if you’re in a position to work on foreign policy issues in other influential countries like India or a major NATO member, you could still have a big impact.

Unfortunately, I’m much more uncertain about how to reduce risks and improve policy in Russia or China.

Think tanks

Especially in the United States, think tanks are also an important part of the foreign policy ecosystem.

Framing your career as a choice between working at a think tank and working in government is a bit misleading. In reality, many people move back and forth between the think tank world and government over the course of their careers.

We’ve previously written about think tanks in this article. Working at a think tank allows you to spend more time investigating issues deeply, developing new policy ideas, and building your network and professional reputation. It can be a particularly good way to break into this field early in your career.

For example, you could work at prestigious foreign policy think tanks with broad focus areas like the Council on Foreign Relations, the Carnegie Endowment for International Peace, or the Center for Strategic and International Studies (CSIS). Alternatively, you could work at think tanks focusing on specific relevant issues like international AI policy or biosecurity policy.

Organisations that work on AI governance and military risks include the Center for Security and Emerging Technology (CSET), Brookings, the Center for a New American Security (CNAS), and the Federation of American Scientists (FAS). (CSIS and Carnegie also have relevant programmes.)

For biosecurity, the most relevant organisations include the Johns Hopkins University Center for Health Security (CHS), the Nuclear Threat Initiative (NTI), the Bipartisan Commission on Biodefense (BCB), and the Council on Strategic Risks (CSR).

Universities

You can also do research in universities.

My sense is that policy implementation, not research, is more of a bottleneck in the foreign policy and great power war space.52 This limits the value of studying and working in universities.

However, the foreign policy space is pretty crowded and competitive, so earning a master's or PhD can still be very useful, or even necessary, for advancing your career.

If you’re pretty sure you want to work in policy, you can do one of the US policy-focused master’s degrees discussed here. If you want to do academic research, or move up to a high-level position in a prestigious think tank, it’s worth giving a PhD programme strong consideration. And if you’re going to do a PhD for career reasons, you could think about how to focus your research on important, policy-relevant issues.

Academics can focus on questions for extended periods of time. They can also think deeply about issues which don’t yet seem to have direct policy relevance. This could help them answer particularly complex questions or help reduce risks that are not yet salient but could be in the next few years or decades. I discuss some potential research topics in the next section.

What issues should you focus on?

The riskiest bilateral relationships

Wars can begin when leaders of one state misperceive the strength or intentions of a rival.53

This makes it very important to have experts who can help policymakers accurately interpret the actions of rival states. Combining an understanding of foreign policymaking processes, say in the United States, with an understanding of the historical, social, economic, and cultural context in another great power like China or Russia could be a highly valuable set of skills.

It might be particularly valuable to focus on AI governance and policy as it relates to the United States-China relationship. Strengthening efforts at coordination around this technology and reducing the incentives for an AI arms race could be highly beneficial.

Similarly, you could work to become an expert in an emerging or future great power like India.

One concrete example of this kind of work is facilitating Track II diplomacy programmes. This can include hosting summits and meetings between non-official (non-governmental) representatives from different countries to share information and build trust. People with expertise in two nations, such as both China and the United States, can play an important role in facilitating such dialogues.

Track II diplomacy can be useful, for example, when official diplomatic channels have been closed down due to high tension. There are some historical cases where they have even contributed to concrete policy change, such as the United States and the USSR signing the Anti-Ballistic Missile Treaty in 1972.

Language skills can also be very useful in this area. See, for example, the work of the Center for Strategic Translation, which works to translate, annotate, and explain influential Chinese texts for English speakers.

If you decide to go down this path, you should probably try to focus on the riskiest relationships, which I discussed here.

Crisis management

Some wars are sparked when small disputes escalate. And escalation is unpredictable and difficult to control.54 One way to lower the total risk of war is to prevent escalatory spirals before they begin.

It may seem difficult to imagine how one could do this. But there are a number of important crisis management systems one could work to improve or support.

You could research, advocate for, and work to implement information-provision systems like hotlines to reduce uncertainty during crises. Or you could research how new weapons and communications technologies might affect escalation dynamics and propose policies to pre-empt unexpected effects.

Thomas Schelling, for example, did influential research on crisis management and communication hotlines and helped motivate the establishment of the Moscow-Washington hotline following the Cuban Missile Crisis.

Analysing the effects of important foreign policy decisions

Another approach one could take is to become an expert in a particularly important foreign policy issue.

For example, great powers often use sanctions to punish aggressive actions by rivals. They may also try to slow a rival's progress in important sectors (such as by putting export controls on semiconductors). You could study such policies closely to better predict their effects. By working in government, you could improve their effectiveness and minimise major downside risks (like increasing the chance of conflict). Or you could work outside government, for example at a think tank or as a journalist who scrutinises policy choices and provides public accountability.

Other areas of foreign policy in which you might consider developing expertise include the following:

International governance of weapons of mass destruction and emerging technologies

You could also help reduce total war risk by working to make extremely severe outcomes less likely. The most obvious way to do this is to study proposals for international governance agreements on the development, proliferation, and use of weapons of mass destruction. This would include both existing weapons, like nuclear weapons and bioweapons, and emerging weapons technologies like advanced military artificial intelligence systems.

Improving how WMDs and emerging technologies are controlled at the national level

Individual states can also reduce war risks by unilaterally improving their management policies for weapons of mass destruction. Some of these policies are discussed in the profiles on nuclear and biological risks.

On several occasions, malfunctioning systems have created false alarms that could plausibly have led to retaliation and escalation to war. If there is a low but constant chance of something like this going wrong, then over a long enough time horizon, disaster becomes almost inevitable.
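
As a minimal illustration of that arithmetic (the 1% annual figure below is hypothetical, not an estimate from this profile):

```python
# A small but constant annual risk of catastrophic failure compounds
# toward near-certainty over long time horizons.
annual_risk = 0.01  # hypothetical 1% chance per year

for years in (10, 100, 500):
    p_at_least_one = 1 - (1 - annual_risk) ** years
    print(years, round(p_at_least_one, 2))

# Prints roughly: 10 years -> 0.1, 100 years -> 0.63, 500 years -> 0.99
```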

Research on how current policies could fail, or how new technologies (like advanced AI or improved satellite imaging) may raise or lower the chance of accidents, could be useful.

Other domestic interventions

There are several other potential interventions one could work on domestically. For example, one could try to affect the politics of war by influencing public discourse to reduce tension and working to get less warlike politicians elected. Or one could try to strengthen democratic institutions to ensure that leaders remain 'checked' and accountable to the people who would bear the costs of war.

I’m more uncertain about how important and how feasible these interventions are, though. Given my current views, I’d instead encourage people to focus on the first five issues I listed in this profile.

Want one-on-one advice on preventing great power war? We want to help.

We’ve helped people formulate plans, find resources, and put them in touch with mentors. If you want to work in this area, apply for our free one-on-one advising service.


Find vacancies on our job board

Our job board features opportunities to work in government and policy on our top problems.


    Risks of stable totalitarianism
    https://80000hours.org/problem-profiles/risks-of-stable-totalitarianism/ (19 June 2024)
    Why might the risk of stable totalitarianism be an especially pressing problem?

    Totalitarian regimes killed over 100 million people in the 20th century alone. The pursuit of national goals with little regard for the wellbeing or rights of individuals makes these states wantonly cruel. The longer they last, the more harm they could do.

    Could totalitarianism be an existential risk?

    Totalitarianism is a particular kind of autocracy, a form of government in which power is highly concentrated. What makes totalitarian regimes distinct is the complete, enforced subservience of the entire populace to the state.

    Most people do not welcome such subservience. So totalitarian states are also characterised by mass violence, surveillance, intrusive policing, and a lack of human rights protections, as well as a state-imposed ideology to maintain control.

    So far, most totalitarian regimes have only survived for a few decades.

    If one of these regimes were to maintain its grip on power for centuries or millennia, we could call it stable totalitarianism. All totalitarian regimes threaten their citizens and the rest of the world with violence, oppression, and suffering. But a stable totalitarian regime would also end any hope of the situation improving in the future. Millions or billions of people would be stuck in a terrible situation with very little hope of recovery: a fate as bad as (or even worse than) extinction.

    Is any of this remotely plausible?

    For stable totalitarianism to ruin our entire future, three things have to happen:

    1. A totalitarian regime has to emerge.
    2. It has to dominate all, or at least a substantial part, of the world.
    3. It has to entrench itself indefinitely.

    No state has even come close to achieving that kind of domination before. It’s been too difficult for them to overcome the challenges of war, revolution, and internal political changes. Step three, in particular, might seem especially far-fetched.

    New technologies, though, may make a totalitarian takeover far more plausible.

    For example:

    • Physical and digital surveillance may make it nearly impossible to build resistance movements.
    • Autonomous weapons may concentrate military power, making it harder to resist a totalitarian leader.
    • Advanced lie detection may make it easier to identify dissidents and conspirators.
    • Social manipulation technologies may be used to control the information available to people.

    Many of these technologies are closely related to developments in the field of AI. AI systems are rapidly developing new capabilities. It’s difficult to predict how this will continue in the future, but we think there’s a meaningful chance that AI systems come to be truly transformative in the coming decades. In particular, AI systems that can make researchers more productive, or even replace them entirely, could lead to rapid technological progress and much faster economic growth.

    A totalitarian dictator could potentially use transformative AI to overcome each of the three forces that have impeded them in the past.

    • AI could eliminate external competition: If one state controls significantly more advanced AI systems than its rivals, then it may have a decisive technological edge that allows it to dominate the world through conquest or compellence (i.e. forcing other states to do something by threatening them with violence if they refuse).
    • AI could crush internal resistance: AI could accelerate the development of multiple technologies dictators would find useful, including the surveillance, lie detection, and weaponry mentioned above. These could be used to detect and strangle resistance movements before they become a threat.
    • AI could solve the succession problem: AI systems can last much longer than dictators and don’t have to change over time. An AI system directed to maintain control of a society could keep pursuing that goal long after a dictator’s death.

    Stable totalitarianism doesn’t seem like an inevitable, or even particularly probable, result of technological developments. Bids for domination from dictators would still face serious opposition. Plus, new technologies could also make it harder for a totalitarian state to entrench itself. For example, they could make it easier for people to share information to support resistance movements.

    But the historical threat of totalitarianism combined with some features of modern technology make stable totalitarianism seem plausible.

    Below, we discuss in more depth each of the three prerequisites: emergence, domination, and entrenchment.

    Will totalitarian regimes arise in future?

    Totalitarianism will probably persist in the future. Such regimes have existed throughout history and still exist today. About half the countries in the world are classified as “autocratic” by V-Dem, a research institute that studies democracy. Twenty percent are closed autocracies where citizens don’t get to vote for party leaders or legislative representatives.

    Democracy has seen a remarkable rise worldwide since the 1800s. Before 1849, every country in the world was classified as autocratic due to limited voting rights. Today, 91 countries, just over half of V-Dem's dataset, are democratic.

    But progress has recently slowed and even reversed. The world is slightly less democratic today than it was 20 years ago. That means we should probably expect the world to contain authoritarian regimes, including totalitarian ones, for at least decades to come.

    Could a totalitarian regime dominate the world?

    Broadly there seem to be two main ways a totalitarian regime could come to dominate a large fraction of the world. First, it could use force or the threat of force to assert control. Second, it could take control of a large country or even a future world government.

    Domination by force

    Many totalitarian regimes have been expansionist.

    Hitler, for example, sought to conquer "heartland" Europe to gain the resources and territory he thought he needed to achieve global domination.1 While he didn't get far, others have had more success:

    • 20th century communist rulers wanted to create a global communist state. In the mid-1980s, about 33% of the world’s people lived under communist regimes.2
    • At its peak, the British Empire comprised about 25% of the world’s land area and population.
    • The Mongols controlled about 20% of the world’s land and 30% of its people.

    In recent decades, ambitious territorial conquest has become much less common. In fact, there have been almost no explicit attempts to take over large expanses of territory for almost 50 years.3 But, as Russia's invasion of Ukraine shows, we shouldn't take too much comfort from this trend. Fifty years just isn't that long in the grand sweep of history.

    Technological change could make it easier for one state to control much of the world. Historically, a technological edge has often given states huge military advantages. During the Gulf War, for example, American superiority in precision-guided munitions and computing power proved overwhelming.4

    Some researchers think that the first actor to obtain future superintelligent AI systems could use them to achieve world domination.5 Such systems could dramatically augment a state’s power. They could be used to coordinate and control armies and monitor external threats. They could also increase the rate of technological innovation, giving the state that first controls them a significant edge over the rest of the world in the key technologies we discussed previously, like weaponry, targeting, surveillance, and cyber warfare.

    AI could provide a decisive advantage just by being integrated into military strategies and tactics. Cyberattack capabilities, for example, could disrupt enemy equipment and systems. AI systems could also help militaries process large amounts of data, react faster to enemy actions, coordinate large numbers of soldiers or autonomous weapons, and more accurately strike key targets.6

    There’s even the possibility that military decision making could be turned over in part or in whole to AI systems. This idea currently faces strong resistance, but if AI systems prove far faster and more efficient than humans, competitive dynamics could push strongly in favour of more delegation.

    But a state with such an advantage over the rest of the world might not even have to use deadly force. Simply threatening rivals may be enough to force them to adopt certain policies or to turn control of critical systems over to the more powerful state.

    In sum, AI-powered armies, or just the threat of being attacked by one, could make the country that controls advanced AI more powerful than the rest of the world combined. If it so desired, that country could well use that advantage to achieve the global domination that past totalitarian leaders have only been able to dream of.

    Controlling a powerful government

    A totalitarian state could also gain global supremacy by taking control of a powerful government, such as one of the great powers or a hypothetical future world government.

    Totalitarian parties like the Nazis tried to gain influence by controlling large fractions of the world. But the Nazis gained a great deal of power simply by taking control of Germany itself.

    If a totalitarian actor gained control of one of the world’s most powerful countries today, it could potentially control a significant fraction of humanity’s future (in expectation) by simply entrenching itself in that country and using its influence to oppress many people indefinitely and shape important issues like how space is governed. In fact, considering the prevalence of authoritarianism, this may be the most likely way totalitarianism could shape the long-term future.

    There’s also the possibility that such an actor could gain even more influence by taking over a global institution.

    Currently, countries coordinate many policies through international institutions like the United Nations. However, the enforcement mechanisms available to these institutions are imperfect: applied slowly and unevenly.

    We don’t know for sure how international cooperation will evolve in the future. However, international institutions could have more power than they currently do. Such institutions facilitate global trade and economic growth, for example. They may also help states solve disagreements and avoid conflict. They’re often proposed as a way to manage global catastrophic risks too. States could choose to empower global institutions to realise these benefits.

    If such an international framework were to form, a totalitarian actor could potentially leverage it to gain global control without using force (just as totalitarian actors have seized control of democratic countries in the past). This would be deeply worrying because a global totalitarian government would not face pressure from other states, which is one of the main ways totalitarianism has been defeated in the past.

    Economist Bryan Caplan is particularly concerned that fear of catastrophic threats to humanity like climate change, pandemics, and risks from advanced AI could motivate governments to implement policies that are particularly vulnerable to totalitarian takeover, such as widespread surveillance.7

    We think there are difficult tradeoffs to consider here. International institutions with strong enforcement powers might be needed to address global coordination problems and catastrophic risks. Nevertheless, we agree that there are serious risks as well, including the possibility that they could be captured by totalitarian actors. We aren’t sure how exactly to trade these things off (hence this article)!

    Could a totalitarian regime last forever?

    Some totalitarian leaders have attempted to stay in power indefinitely. In What We Owe the Future, William MacAskill discusses several occasions when authoritarian leaders have sought to extend their lives:8

    • Multiple Chinese emperors experimented with immortality elixirs. (Some of these potions probably contained toxins like lead, making them more likely to hasten death than defeat it.)
    • Kim Il-Sung, the founder of North Korea, tried to extend his life by pouring public funds into longevity research and receiving blood transfusions from young Koreans.
    • Nursultan Nazarbayev, who ruled Kazakhstan for nearly three decades, also spent millions of state dollars on life extension, though these efforts reportedly only produced a "liquid yogurt drink" called Nar.

    But of course, none have come close to entrenching themselves permanently. The Nazis ruled Germany for just 12 years. The Soviets controlled Russia for 74. North Korea's Kim dynasty has survived 76 years and counting.

    Totalitarian regimes have fallen due to some combination of three forces:

    1. External competition: Totalitarian regimes pose a risk to the rest of the world and face violent opposition. The Nazis, Mussolini’s Italy, the Empire of Japan, and Cambodia’s Khmer Rouge were all defeated militarily.
    2. Internal resistance: Competing political groups or popular resistance can undermine the leaders.
    3. The “succession problem”: These regimes sometimes liberalise or collapse entirely after particularly oppressive leaders die or step down. For example, the USSR collapsed a few years after Mikhail Gorbachev came to power.

    To date, these forces have made it impossible to entrench an oppressive regime in unchanging form for more than a century or so.

    But once again, technology could change this picture. Advanced AI — and the military, surveillance, and cyberweapon technologies it could accelerate — may be used to counteract each of the three forces.

    For external competition, we’ve already discussed how AI might allow leading states to build a substantial military advantage over the rest of the world.

    After using that advantage to achieve dominance over the rest of the world, a totalitarian state could use surveillance technologies to monitor the technological progress of any actors — external or internal — that could threaten its dominance. With a sufficient technological edge, it could then use kinetic and cyber weapons to crush anyone who showed signs of building power.

    After eliminating internal and external competition, a totalitarian actor would just have to overcome the succession problem to make long-term entrenchment a realistic possibility. This is a considerable challenge. Any kind of change in institutions or values over time would allow for the possibility of escape from totalitarian control.

    But advanced AI could also help dictators solve the succession problem.

    Perhaps advanced AI will help dictators invent more effective, dairy-free life extension technologies. However, totalitarian actors could also direct an advanced AI system to continue pursuing certain goals after their death. An AI could be given full control of the state’s military, surveillance, and cybersecurity resources. Meanwhile, a variety of techniques, such as digital error correction, could be used to keep the AI’s goals and methods constant over time.9

    This paints a picture of truly stable totalitarianism. Long after the dictator’s death, the AI could live on, executing the same goals, with complete control in its area of influence.

    The chance of stable totalitarianism

    So, how likely is stable totalitarianism?

    This is clearly a difficult question. One complication is that there are multiple ways stable totalitarianism could come to pass, including:

    • Global domination: A totalitarian government could become so powerful that it has a decisive advantage over the rest of the world. For example, it could develop an AI system so powerful it can prevent anyone else from obtaining a similar system. It could then use this system to dominate any rivals and oppress any opposition, achieving global supremacy.
    • Centralised power drifts toward totalitarianism: International institutions could become more robust and powerful, perhaps as a result of efforts to increase coordination, reduce conflict, and mitigate global risks. National governments may even peacefully and democratically cede more control to the international institutions. But efforts to support cooperation and prevent new technologies from being misused to cause massive harm could, slowly or suddenly, empower totalitarian actors. They may use these very tools to centralise and cement their power.
    • Collapse of democracy: An advanced AI system could centralise power such that someone in a currently non-totalitarian state, or perhaps in a global institution, could use it to undermine democratic institutions, disempower rivals, and cement themselves as a newly minted totalitarian leader.
    • One country is lost: A totalitarian government in one large country could use surveillance tools, AI, and other technologies to entrench its rule over its population indefinitely. They wouldn’t even have to be the first to invent the technology: they could re-invent, buy, copy, or steal it after it’s been invented elsewhere in the world. Although all the value of our future might not be lost, a substantial fraction of humanity could be condemned to indefinite oppression.

    The key takeaway from the preceding sections is that there does seem to be a significant chance powerful AI systems will give someone the technical capacity to entrench their rule in this way. The key question is whether someone will try to do so — and whether they’ll succeed.

    Here’s a rough back-of-the-envelope calculation, estimating the risk over roughly the next century:

    • Chance that future technologies, particularly AI, make entrenchment technically possible: 25%
    • Chance that a leader or group tries to use the technology to entrench their rule: 25%
    • Chance that they achieve a decisive advantage over their rivals and successfully entrench their rule: 5%
    • Overall risk: 0.3%, or about 1 in 330

    We’re pretty uncertain about all of these numbers. Some of them might seem low or high. If you plug in numbers to make your own estimate, you can see how much the risk changes.

    Some experts have given other estimates of the risk. Caplan, in particular, has estimated that there’s a 5% chance that “a world totalitarian government will emerge during the next one thousand years and last for a thousand years or more.”10

    But another key takeaway from the preceding sections is that, while stable totalitarianism seems possible, it also seems difficult to realise — especially in a truly long-term sense. A wannabe eternal dictator would have to solve technical challenges, overcome fierce resistance, and preempt a myriad of future social and technical changes that could threaten their rule.

    That’s why we think the chance of a dictator succeeding, assuming it’s possible and they try, is probably low. We’ve put it at 5%. However, it could be much higher or lower. There’s currently a lot of scope for disagreement, and we’d love to see more research into this question. The most extensive discussion we’ve seen of how feasible it would be for a ruler to entrench long-term control with AI is in a report on Artificial General Intelligence and Lock-In by Lukas Finnveden, C. Jess Riedel, and Carl Shulman.

    It’s also worth noting that it’s low in part because we expect the rest of the world to resist attempts at entrenchment. You might choose to work on this problem partly to ensure that resistance materialises.

    Bottom line: we think that stable totalitarianism is far from the most likely future outcome. But we're very unsure about this, the risk doesn't seem negligible, and it seems low partly because stable totalitarianism would clearly be so awful that we expect people would make a big effort to stop it.

    Preventing long-term totalitarianism in particular seems pretty neglected

    The core of the argument sketched above is that the future will likely contain totalitarian states, one of which could obtain very powerful AI systems which give them the power to eliminate competition and extend their rule long into the future.

    Even the impermanent totalitarianism humanity has experienced so far has been horrendous. So the prospect that our descendants could find themselves living under such regimes for millennia to come is distressing.

    Yet we don’t know of anyone working directly on the problem of stable totalitarianism.

    If we count indirect efforts, the field starts to seem more crowded. As we recount below, there are many think tanks and research institutes working to protect democratic institutions, which implicitly work against stable totalitarianism by trying to reduce the number of countries that become totalitarian in the first place. Their combined budgets for this kind of work are probably on the order of $10M to $100M annually.

    There’s also the fact that the rise of a stable totalitarian superpower would be bad for everyone else in the world. That means that most other countries are strongly incentivized to work against this problem. From this perspective, perhaps we should count some large fraction of the military spending of NATO countries (almost $1.2 trillion in 2023) as part of the anti-totalitarian effort. Some portion of the diplomatic and foreign aid budgets of democratic countries is also devoted to supporting democratic institutions around the world (e.g. the US State Department employs 13,000 Foreign Service members).

    One could argue that many of these resources are allocated inefficiently. Or, as we discussed above, some of that spending could raise other risks if it drives arms races and stokes international tension. But if even a small fraction of that money is spent on effective interventions, marginal efforts in this area start to seem a lot less impactful.

    In addition to questions of efficiency, the relevance of this spending to the problem of stable totalitarianism specifically is still debatable. Our view is that the particular pathways which could lead to the worst outcomes — a technological breakthrough that brings about the return of large-scale conquest and potentially long-term lock-in — are not on the radar of basically any of the institutions mentioned.

    Why might you choose not to work on this problem?

    All that said, maybe nobody’s working on this problem for a reason.

    First, it may not seem that likely, depending on your views (and if we’re wrong about the long-term possibilities of advanced AI systems, then it might even be impossible for a dictator to take and entrench their control over the world).

    Second, it might not be very solvable. Influencing world-historical events like the rise and fall of totalitarian regimes seems extremely difficult!

    For example, we mentioned above that the three ways totalitarian regimes have been brought down in the past are through war, resistance movements, and the deaths of dictators. Most of the people reading this article probably aren’t in a position to influence any of those forces (and even if they could, it would be seriously risky to do so, to say the least!).

    What can you do to help?

    To make progress on this problem, we may need to aim a little bit lower than winning wars or fomenting revolutions.

    But we do think there are some things you can do to help solve this problem. These include:

    • Working on AI governance
    • Researching downside risks of global coordination
    • Helping develop defensive technologies
    • Protecting democratic institutions

    AI governance

    First, it’s notable that most — possibly all — plausible routes to stable totalitarianism leverage advanced AI. You could go into AI governance to help establish laws and norms that make it less likely AI systems are used for these purposes.

    You could help build international frameworks that broadly shape how AI systems are developed and deployed. It’s possible that the potentially transformative benefits and global risks AI could bring will create great opportunities for international cooperation.

    Eventually the world might establish shared institutions to monitor where advanced AI systems are being developed and what they may be used for. This information could be paired with remote shutdown technologies to prevent malicious actors, including rogue states and dictators, from obtaining or deploying AI systems that threaten the rest of the world. For example, there may be ways to legally or technically direct how autonomous weapons are developed to prevent one person from being able to control large armies.

    It’s in everyone’s interest to ensure that no one country uses AI to dominate the future of humanity. If you want to help make this vision a reality, you could work at organisations like the Centre for the Governance of AI, the Oxford Martin AI Governance Initiative, the Institute for AI Policy and Strategy, the Institute for Law and AI, RAND’s Technology and Security Policy Center, the Simon Institute, or even large multilateral policy organisations and related think tanks.

    If this path seems exciting, you might want to read our career review of AI governance and policy.

    Researching risks of global coordination

    Of course, concerns about the development of oppressive world governments are motivated by exactly this vision for global governance, which includes quite radical proposals such as monitoring all advanced AI development.

    If such institutions are needed to tackle global catastrophic risks, we may have to accept some risk of them enabling overly intrusive governance. Still, we think we should do everything we can to mitigate this cost where possible and continue researching all kinds of global risks to ensure we're making good tradeoffs here.

    For example, you could work to design effective policies and institutions that are minimally invasive and protect human rights and freedoms. Or, you could analyse which policies to reduce existential risk need to be addressed at the global level and which can be addressed at the state level. Allowing individual states to tackle risks also seems more feasible than coordinating at the global level.

    We haven’t done a deep dive in this space, but you might be able to work on this issue in academia (like at the Mercatus Center, where Bryan Caplan works), at some think tanks that work on freedom and human rights issues (like Chatham House), or in multilateral governance organisations themselves.

    You can also listen to our podcast with Bryan Caplan for more discussion.

    Working on defensive technologies

    Another approach would be to work on technologies that protect individual freedoms without empowering bad actors. Many technologies, like global institutions, have benefits and risks: they can be used by both individuals to protect themselves and malicious actors to cause harm or seize power. If you can speed up the development of technologies that help individuals more than bad actors, then you might make the world as a whole safer and reduce the risk of totalitarian takeover.

    Technologist Vitalik Buterin calls this defensive accelerationism. There’s a broad range of such technologies, but some that may be particularly relevant for resisting totalitarianism could include:

    • Tools for identifying misinformation and manipulative content
    • Cybersecurity tools
    • Some privacy-enhancing technologies like encryption protocols
    • Biosecurity policies and tools, like advanced PPE, that make it harder for malicious actors to get their way by threatening other states with biological weapons

    The short length of that list reflects our uncertainty about this approach. Beyond Buterin's essay, there's not much existing work in this area to guide additional efforts.

    It’s also very hard to predict the implications of new technologies. Some of the examples Buterin gives seem like they could also empower totalitarian states or other malicious actors. Cryptographic techniques can be used by both individuals (to protect themselves against surveillance) and criminals (to conceal their activities from law enforcement). Similarly, cybersecurity tools meant to help individuals could also be used by a totalitarian actor to thwart multilateral attempts to disrupt dangerous AI development within its borders.

    That said, we think cautious, well-intentioned research efforts to identify technologies that empower defenders over attackers could be valuable.

    Another related option is to research potential downsides from other technologies discussed in this article. Some researchers dedicate their time to understanding issues like risks to political freedom from advanced surveillance and the dangers of autonomous weapons.

    Protecting democratic institutions

    A final approach to consider is supporting democratic institutions to prevent more countries from sliding towards authoritarianism and, potentially, totalitarianism.

    We mentioned that, after over a century of progress, global democratisation has recently stalled. Some researchers have claimed that we are experiencing "democratic backsliding" globally, with populists and partisans subverting democratic institutions. Although this claim is controversial, because it's highly politicised and "democraticness" is hard to measure, backsliding does seem to be real.

    Given what we know, it at least seems like a trend worth monitoring. If democratic institutions are under threat globally, protecting them is important: making it harder for countries to become totalitarian reduces the chance that a totalitarian state gains a decisive advantage through AI development. It also raises the chance that democratic values, such as freedom of expression and tolerance, shape humanity's long-term future.11

    There is a large ecosystem of research and policy institutes working on this problem in particular. These include think tanks like V-Dem, Freedom House, the Carnegie Endowment for International Peace, and the Center for Strategic and International Studies. There are also academic research centres like Stanford’s Center on Democracy, Development and the Rule of Law and Notre Dame’s Democracy Initiative.

    (Note: These are just examples of programs in this area. We haven’t looked deeply at their work.)


    Why Orwell would hate AI
    https://80000hours.org/2024/08/why-orwell-would-hate-ai/ (6 August 2024)
    The idea this week: totalitarian regimes killed over 100 million people in less than 100 years — and in the future they could be far worse.

    That’s because advanced artificial intelligence may prove very useful for dictators. They could use it to surveil their population, secure their grip on power, and entrench their rule, perhaps indefinitely.

    I explore this possibility in my new article for 80,000 Hours on the risk of stable totalitarianism.

    This is a serious risk. Many of the worst crimes in history, from the Holocaust to the Cambodian Genocide, have been perpetrated by totalitarian regimes. When megalomaniacal dictators decide massive sacrifices are justified to pursue national or personal glory, the results are often catastrophic.

    However, even the most successful totalitarian regimes rarely survive more than a few decades. They tend to be brought down by internal resistance, war, or the succession problem — the possibility for sociopolitical change, including liberalisation, after a dictator’s death.

    But that could all be upended if technological advancements help dictators overcome these challenges.

    In the new article, I address whether totalitarian regimes will continue to arise, whether one could come to dominate the world, whether it could entrench itself indefinitely, and what we can do to reduce the risk.

    To be sure, stable totalitarianism doesn’t seem to be the most likely course for the future. An aspiring permanent dictator would face formidable barriers, and — while AI could help them — other actors may use AI to the world’s benefit. I think that the chance of stable totalitarianism really coming to pass is much less than 1%. (Others have estimated as high as 5%.)

    But I think it’s less far-fetched than it might seem at first. Totalitarian regimes have been common throughout history, they often expand and entrench their influence, and new technology may well centralise power in ways that help dictators.

    Despite the risk being low, it seems worrying enough that we include stable totalitarianism in our list of the world's most pressing problems.

    The good news, I think, is that there are things we can do to help make this outcome even less likely — and these things also have the benefit of helping with other issues. At the end of the article, I discuss four ways you might work on this problem:

    • Working on AI governance to ensure no country can use advanced AI systems to dominate the rest of the world
    • Researching how states can coordinate to address global risks without centralising power in ways that are easy for dictators to subvert
    • Working on defensive technologies that could prevent dictators’ attempts to control people
    • Working to protect democratic institutions to reduce the chance that more democracies become autocracies in coming decades

    There’s much more to be found in the full article, including the bizarre story behind why the leader of Kazakhstan invested millions of his nation’s tax dollars to develop a probiotic yogurt drink.

    And if any of the options I mentioned for working on this problem sound interesting to you, check out our articles on political skills, research skills, and AI governance career paths.

    This blog post was first released to our newsletter subscribers.


    What the war in Ukraine shows us about catastrophic risks
    https://80000hours.org/2023/06/what-the-war-in-ukraine-shows-us-about-catastrophic-risks/ (30 June 2023)
    A new great power war could be catastrophic for humanity — but there are meaningful ways to reduce the risk.

    We’re now in the 17th month of the war in Ukraine. But at the start, it was hard to foresee it would last this long. Many expected Russian troops to take Ukraine’s capital, Kyiv, in weeks. Already, more than 100,000 people, including civilians, have been killed and over 300,000 more injured. Many more will die before the war ends.

    The sad and surprising escalation of the war shows why international conflict remains a major global risk. I explain why working to lower the danger is a potentially high-impact career choice in a new problem profile on great power war.

    As Russia’s disastrous invasion demonstrates, it’s hard to predict how much a conflict will escalate. Most wars remain relatively small, but a few will become terrifyingly large. US officials estimate about 70,000 Russian and Ukrainian soldiers have died in battle so far. That means this war is already worse than 80% of all the wars humanity has experienced in the last 200 years.

    But the worst wars humanity has fought are hundreds of times larger than the war in Ukraine currently is. World War II killed 66 million people, for example — perhaps the single deadliest event in human history.


    Author’s figure. See the data here. Data source: Sarkees, Meredith Reid and Frank Wayman (2010). Resort to War: 1816 – 2007. Washington DC: CQ Press

    Barring the use of nuclear weapons, it doesn’t seem likely the war in Ukraine will escalate that much.

    But I still think a war worse than World War II — though unlikely — could erupt in our lifetimes. Reducing that risk could be one of the most pressing problems of our time.

    The prospect of a modern great power conflict is particularly worrying because humanity’s capacity to make war increases over time. Since 1945 nuclear weapons have proliferated. And billions of dollars spent on military R&D have led to the invention of:

    • Advanced jets, ships, missiles, and guns
    • Technical augmentations like remote sensors and satellites
    • Emerging technologies like autonomous weapons and drones
    • New nuclear weapon technologies
    • Techniques that could be used to develop advanced bioweapons

    At the time, the mechanised slaughter of World War I was a shocking step-change in the severity of warfare. But it was surpassed just 20 years later by the outbreak of World War II, which killed more than twice as many people.

    A modern great power war could be even worse. It would reshape our world with very long-term effects and could even threaten us with extinction or the end of civilization as we know it.

    At the end of the new profile I suggest a few specific paths forward if you want to use your career to reduce the risk. These include:

    • Working in various roles in the US government
    • Starting a high-impact research project
    • Developing expertise in important fields, such as US-China relations

    I see several worrying trends in relations between the world's most powerful countries. Beyond the invasion of Ukraine, relations between the US and China are poor, and India and China have recently fought deadly battles along their border.

    Right now it’s hard to imagine these countries cooperating meaningfully on major issues like preventing pandemics and governing advanced artificial intelligence systems. They could even blunder into an all-out war.

    But as political scientist Chris Blattman reminds us, most potential wars are not fought. We can yet avoid World War III this century. And reducing great power tensions might also help us work together as a species to mitigate the major threats we face.
