Should you quit your job — and work on risks from AI?

Within 5 years, there’s a real chance that AI systems will be created that cause explosive technological and economic change. This would increase the risk of disasters like war between the US and China, concentration of power in a small minority, or even total loss of human control over the future.

Many people — with a diverse range of skills and experience — are urgently needed to help mitigate these risks.

I think you should consider making this the focus of your career.

This article explains why.

Get our guide to high-impact careers in the age of AGI

We’re writing a guide on how to use your career to help make AGI go well. Join our newsletter to get updates:

And, if you’d like to work on reducing catastrophic risks from AI, apply to speak to us one-on-one.

1) World-changing AI systems could come much sooner than people expect

In an earlier article I explained why there’s a significant chance that AI could contribute to scientific research or automate many jobs by 2030. Current systems can already do a lot, and there are clear ways to continue improving them in the coming years. Forecasters and experts widely agree that the probability of widespread disruption is much higher than it was even just a couple of years ago.

Graph of lengths of tasks AIs can do, updated in April 2025 for o3
AI systems are rapidly becoming more autonomous, as measured by the METR time horizon benchmark. The most recent models, such as o3, seem to be on an even faster trend that started in 2024.

2) The impact on society could be explosive

People say AI will be transformative, but few really get just how wild it could be. Here are three types of explosive impact we might see, which are now all supported by credible theoretical and empirical research:

  • The intelligence explosion: it might only take a few years from developing advanced AI to having billions of AI remote workers, making cognitive labour available for pennies.

  • The technological explosion: empirically informed estimates suggest that with sufficiently advanced AI 100 years of technological progress in 10 is plausible. That means we could have advanced biotech, robotics, novel political philosophies, and more arrive much sooner than commonly imagined.

  • The industrial explosion: if AI and robotics automate industrial production, that would create a positive feedback loop, meaning production could plausibly end up doubling each year. Within a decade of reaching that growth rate, humanity would harvest all available solar energy on Earth and start to expand into space.

Along the way, we could see rapid progress on many key technological challenges — like curing cancer and developing green energy. But…

intelligence explosion
The number of AI models is growing extremely fast. If they can start to substitute for scientific researchers, then the effective size of the scientific community would grow at that rate, leading to faster scientific progress. Source: Preparing for the intelligence explosion, Forethought Research.

3) Advanced AI could bring enormous dangers

We’ve written before about how it might be hard to keep control of billions of AI systems thinking 10x faster than ourselves. But that’s only the first hurdle. The developments above could:

4) Under 10,000 people work full-time reducing the risks

Although it can feel like all anyone talks about is AI, only a few thousand people worldwide work full-time on navigating some of the most important aspects of the risks.

This is tiny compared to the millions working on more established issues like cancer or climate change, or the number of people working to deploy the technology as quickly as possible.

If you switch to this issue now, you could be among the first 10,000 people helping humanity navigate what may be one of the most important transitions in history.

5) There are more and more concrete jobs you could take

A couple of years ago, there weren’t many clearly defined projects, positions or training routes to work on this issue. Today, there are more and more concrete ways to help. For example:

We’ve compiled a list of 30+ important organisations in the space, over 300 open jobs, and lists of fellowships, courses, internships, etc., to help you enter the field. Many of these are well paid too.

You don’t need to be technical or even focus directly on AI — we need people building organisations, in government, communications, and with many other skills. And AI is going to affect every aspect of society, so people with knowledge of all those aspects are needed (e.g. China, economics, pandemics, international governance, law, etc.).

The field was also small until recently, so there aren’t many people with deep expertise. That means it’s often possible to spend about 100 hours reading and speaking to people, and then find a job. (And if you have a quantitative background, it’s possible to get to the technical forefront in under a year.) Our team can help you figure out how to transition.

6) The next five years seem crucial

I’ve argued the chance of building powerful AI is unusually high between now and around 2030, and declines thereafter. This makes the next five years especially critical.

That creates an additional reason to switch soon: if transformative AI emerges in the next five years, you’ll be part of one of the most important transitions in human history. If it doesn’t, you’ll have time to return to your previous path, while having learned about a technology that will still shape our world in significant ways.

The bottom line

If you’re able to find a role that fits, and that helps mitigate these risks (especially over the next 5–10 years), that’s probably the highest expected impact thing you can do.

But I don’t think everyone reading this should work on AI.

  1. You might not have the flexibility to make a large career change right now. (In that case, you could look to donate, spread clear thinking about the issue, or prepare to switch when future opportunities arise.)
  2. There are other important problems, and you might have far better fit for a job focused on another issue.
  3. You might be put off by the (definitely huge) uncertainties about how best to help, or be less convinced by the arguments that it’s pressing.

However, I’d encourage almost everyone interested in impactful careers to seriously consider it. And if you’re unsure you’ll be able to find something, keep in mind there’s a very wide range of approaches and opportunities, and they’re expanding all the time.1

What’s next?

See all our resources on transformative AI, including articles, expert interviews, and our job board:

View all resources

We’re writing a new guide on how to use your career to help make AGI go well. Join our newsletter to get updates:

Finally, if you’d like to work on reducing risks from advanced AI, apply to speak with our 1-1 advising team.

The case for AGI by 2030


In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress:

  • OpenAI’s Sam Altman: Shifted from saying in November “the rate of progress continues” to declaring in January “we are now confident we know how to build AGI”
  • Anthropic’s Dario Amodei: Stated in January “I’m more confident than I’ve ever been that we’re close to powerful capabilities… in the next 2-3 years”
  • Google DeepMind’s Demis Hassabis: Changed from “as soon as 10 years” in autumn to “probably three to five years away” by January.

What explains the shift? Is it just hype? Or could we really have Artificial General Intelligence (AGI)1 by 2030?

In this article, I look at what’s driven recent progress, estimate how far those drivers can continue, and explain why they’re likely to continue for at least four more years.

In particular, while in 2024 progress in LLM chatbots seemed to slow, a new approach started to work: teaching the models to reason using reinforcement learning.

In just a year, this let them surpass human PhDs at answering difficult scientific reasoning questions, and achieve expert-level performance on one-hour coding tasks.

We don’t know how capable AI will become, but extrapolating the recent rate of progress suggests that, by 2028, we could reach AI models with beyond-human reasoning abilities and expert-level knowledge in every domain that can autonomously complete multi-week projects. Progress would likely continue from there.

Graph of lengths of tasks AIs can do from 2020–2025
On this set of software engineering & computer use tasks, in 2020 AI was only able to do tasks that would typically take a human expert a couple of seconds. By 2024, that had risen to almost an hour. If the trend continues, by 2028 it’ll reach several weeks.

No longer mere chatbots, these ‘agent’ models might soon satisfy many people’s definitions of AGI — roughly, AI systems that match human performance at most knowledge work (see full definition in footnotes).1

This means that, while the company leaders are probably overoptimistic, there’s enough evidence to take their position very seriously.

Where we draw the ‘AGI’ line is ultimately arbitrary. What matters is these models could start to accelerate AI research itself, unlocking vastly greater numbers of more capable ‘AI workers’. In turn, sufficient automation could trigger explosive growth and 100 years of scientific progress in 10 — a transition society isn’t prepared for.

While this might sound outlandish, it’s within the range of outcomes many experts consider possible. This article aims to give you a primer on what you need to know to understand why, and also the best arguments against.

I’ve been writing about AGI since 2014. Back then, AGI arriving within five years seemed very unlikely. Today, the situation seems dramatically different. We can see the outlines of how it could work and who will build it.

In fact, the next five years seem unusually crucial. The basic drivers of AI progress — investments in computational power and algorithmic research — cannot continue increasing at current rates much beyond 2030. That means we either reach AI systems capable of triggering an acceleration soon, or progress will most likely slow significantly.

Either way, the next five years are when we’ll find out. Let’s see why.

In a nutshell

  • Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models’ thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
  • All of these drivers are set to continue until 2028 and perhaps until 2032.
  • This means we should expect major further gains in AI performance. We don’t know how large they’ll be, but extrapolating recent trends on benchmarks suggests we’ll reach systems with beyond-human performance in coding and scientific reasoning that can autonomously complete multi-week projects.
  • Whether we call these systems ‘AGI’ or not, they could be sufficient to enable AI research itself, robotics, the technology industry, and scientific research to accelerate, leading to transformative impacts.
  • Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
  • Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we’ll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial.

Get notified of new articles in this guide

This article is part of our new AGI careers guide. Join over 500,000 people on our newsletter to get notified about new articles, as well as jobs and training opportunities in the field.

I. What’s driven recent AI progress? And will it continue?

The deep learning era

In 2022, Yann LeCun, the chief AI scientist at Meta and a Turing Award winner, said:

“I take an object, I put it on the table, and I push the table. It’s completely obvious to you that the object will be pushed with the table…There’s no text in the world I believe that explains this. If you train a machine as powerful as could be…your GPT-5000, it’s never gonna learn about this.”

And, of course, if you plug this question into GPT-4 it has no idea how to answer:

Just kidding. Within a year of LeCun’s statement, here’s GPT-4.

And this isn’t the only example of experts being wrongfooted.

Before 2011, AI was famously dead.

But that totally changed when conceptual insights from the 1970s and 1980s combined with massive amounts of data and computing power to produce the deep learning paradigm.

Since then, we’ve repeatedly seen AI systems going from total incompetence to greater-than-human performance in many tasks within a couple of years.

For example, in 2022, if you asked Midjourney to draw “an otter on a plane using wifi,” this was the result:

AI otters on planes
Midjourney’s attempts at depicting “an otter on a plane using wifi” in 2022.

Two years later, you could get this with Veo 2:

In 2019, GPT-2 could just about stay on topic for a couple of paragraphs. And that was considered remarkable progress.

Critics like LeCun were quick to point out that GPT-2 couldn’t reason, show common sense, exhibit understanding of the physical world, and so on. But many of these limitations were overcome within a couple of years.

Over and over again, it’s been dangerous to bet against deep learning. Today, even LeCun says he expects AGI in “several years.”2

The limitations of current systems aren’t what to focus on anyway. The more interesting question is where this might be heading. What explains the leap from GPT-2 to GPT-4, and will we see another?

What’s coming up

At the broadest level, AI progress has been driven by:

  • More computational power
  • Better algorithms

Both are improving rapidly.

More specifically, we can break recent progress down into four key drivers:

  1. Scaling pretraining to create a base model with basic intelligence
  2. Using reinforcement learning to teach the base model to reason
  3. Increasing test-time compute to increase how long the model thinks about each question
  4. Building agent scaffolding so the model can complete complex tasks

In the rest of this section, I’ll explain how each of these works and try to project them forward. Delve (ahem) in, and you’ll understand the basics of how AI is being improved.

In section two I’ll use this to forecast future AI progress, and finally explain why the next five years are especially crucial.

1. Scaling pretraining to create base models with basic intelligence

Pretraining compute

People often imagine that AI progress requires huge intellectual breakthroughs, but a lot of it is more like engineering. Just do (a lot) more of the same, and the models get better.

In the leap from GPT-2 to GPT-4, the biggest driver of progress was just applying dramatically more computational power to the same techniques, especially to ‘pretraining.’

Modern AI works by using artificial neural nets, involving billions of interconnected parameters organised into layers. During pretraining (a misleading name, which simply indicates it’s the first type of training), here’s what happens:

  1. Data is fed into the network (such as an image of a cat).
  2. The values of the parameters convert that data into a predicted output (like a description: ‘this is a cat’).
  3. The accuracy of those outputs is graded vs. reference data.
  4. The model parameters are adjusted in a way that’s expected to increase accuracy.
  5. This is repeated over and over, with trillions of pieces of data.

This method has been used to train all kinds of AI, but it’s been most useful when used to predict language. The data is text on the internet, and LLMs are trained to predict gaps in the text.
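To make those five steps concrete, here's a toy version of the loop in PyTorch. The tiny model, random stand-in "text," and hyperparameters are purely illustrative assumptions; they bear no resemblance to what any frontier lab actually trains.

```python
# A minimal sketch of the pretraining loop described above (illustrative only).
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),       # real models have billions of parameters; this is tiny
    nn.Flatten(),
    nn.Linear(d_model * seq_len, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):                                      # 5. repeat, in reality over trillions of tokens
    tokens = torch.randint(0, vocab_size, (8, seq_len + 1))   # 1. feed in data (random stand-in for internet text)
    inputs, targets = tokens[:, :-1], tokens[:, -1]           #    the task: predict the missing token
    logits = model(inputs)                                    # 2. parameters convert the data into a prediction
    loss = loss_fn(logits, targets)                           # 3. grade the prediction against the reference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                          # 4. adjust parameters to improve accuracy
```

The 'scaling' discussed below is essentially this same recipe run with vastly more parameters, data, and steps.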

More computational power for training (i.e. ‘training compute’) means you can use more parameters, which lets the models learn more sophisticated and abstract patterns in the data. It also means you can use more data.

Since we entered the deep learning era, the number of calculations used to train AI models has been growing at a staggering rate — more than 4x per year.

graph of FLOP over time
Since the start of the deep learning era, the amount of computational power (measured with ‘FLOP’) used to train leading AI models has increased more than four times each year.

This was driven by spending more money and using more efficient chips.3

Historically, each time training compute has increased 10x, there’s been a steady gain in performance across many tasks and benchmarks.

For example, as training compute has grown a thousandfold, AI models have steadily improved at answering diverse questions—from commonsense reasoning to understanding social situations and physics. This is demonstrated on the ‘BIG-Bench Hard’ benchmark, which features diverse questions specifically chosen to challenge LLMs:

graph compute vs performance
LLM performance on a challenging benchmark (BIG-Bench Hard) improves as training compute increases 1000x.

Likewise, OpenAI created a coding model that could solve simple problems, then used 100,000 times more compute to train an improved version. As compute increased, the model correctly answered progressively more difficult questions.4

These test problems weren’t in the original training data, so this wasn’t merely better search through memorised problems.

This relationship between training compute and performance is called a ‘scaling law.’5

Papers about these laws had been published by 2020. To those following this research, GPT-4 wasn’t a surprise — it was just a continuation of a trend.
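To give a feel for what those papers describe, here's the rough shape of such a law: loss falls as a power law in training compute. The coefficients below are invented purely for illustration, not the published fits.

```python
# Illustrative power-law scaling curve (made-up coefficients, not real fits).
def predicted_loss(compute_flop: float, a: float = 40.0, alpha: float = 0.05) -> float:
    return a * compute_flop ** -alpha

for flop in [1e21, 1e22, 1e23, 1e24, 1e25]:
    print(f"{flop:.0e} FLOP -> predicted loss {predicted_loss(flop):.2f}")
# Each 10x of compute buys a steady, predictable drop in loss, which is why
# GPT-4's performance was roughly what people tracking the trend expected.
```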

graph historical computing
The computing power of the best chips has grown about 35% per year since the beginnings of the industry, known as Moore’s Law. However, the computing power applied to AI has been growing far faster, at over 4x per year.

Algorithmic efficiency

Training compute has not only increased, but researchers have found far more efficient ways to use it.

Every two years, the compute needed to get the same performance across a wide range of models has decreased tenfold.

graph of algorithmic efficiency improvements
AI models require 10 times less compute to reach the same accuracy in recognising images every two years (based on the ImageNet benchmark).

These gains also usually make the models cheaper to run. DeepSeek-V3 was promoted as a revolutionary efficiency breakthrough, but it was roughly on trend: released two years after GPT-4, it’s about 10 times more efficient.6

Algorithmic efficiency means that, not only is four times as much compute used on training each year, but that compute also goes three times further. The two multiply together to produce a 12 times increase in ‘effective’ compute each year.

That means the chips that were used to train GPT-4 in three months could have been used to train a model with the performance of GPT-2 about 300,000 times over.7
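As a rough back-of-the-envelope check on those numbers (my arithmetic, not the article's underlying calculation):

```python
# Back-of-the-envelope check of the 'effective compute' figures above.
import math

compute_growth = 4      # training compute: roughly 4x more per year
algo_progress = 3       # algorithms: the same compute goes roughly 3x further per year
effective_growth = compute_growth * algo_progress   # ~12x 'effective compute' per year

gpt2_to_gpt4_gap = 300_000   # the effective-compute gap quoted above
years_for_gap = math.log(gpt2_to_gpt4_gap) / math.log(effective_growth)
print(f"At ~{effective_growth}x per year, a {gpt2_to_gpt4_gap:,}x gap takes ~{years_for_gap:.1f} years")
# Roughly 5 years, which is why a similar jump beyond GPT-4 is projected by around 2028.
```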

This increase in effective compute took us from a model that could just about string some paragraphs together to GPT-4 being able to do things like:

  • Beat most high schoolers at college entrance exams
  • Converse in natural language — in the long-forgotten past this was considered a mark of true intelligence, a la the Turing test
  • Solve the Winograd schemas — a test of commonsense reasoning that in the 2010s was regarded as requiring true understanding8
  • Create art that most people can’t distinguish from the human-produced stuff9
table of GPT-4 and GPT-3.5 performance on standardised exams
A comparison of GPT-4 and GPT-3.5’s percentile scores against human test takers on standardised exams.

How much further can pretraining scale?

If current trends continue, then by around 2028, someone will have trained a model with 300,000 times more effective compute than GPT-4.10

That’s the same increase we saw from GPT-2 to GPT-4, so if spent on pretraining, we could call that hypothetical model ‘GPT-6.’11

After a pause in 2024, GPT-4.5-sized models appear to be on trend, and companies are already close to GPT-5-sized models, which forecasters expect to be released in 2025.

But can this trend continue all the way to GPT-6?

The CEO of Anthropic, Dario Amodei, projects GPT-6-sized models will cost about $10bn to train.12 That’s still affordable for companies like Google, Microsoft, or Meta, which earn $50–100bn in profits annually.13

In fact, these companies are already building data centres big enough for such training runs14 — and that was before the $100bn+ Stargate project was announced.

Frontier AI models are also already generating over $10bn of revenue,15 and revenue has been more than tripling each year, so AI revenue alone will soon be enough to pay for a $10bn training run.

Frontier AI company revenues
Epoch AI estimates the revenues of frontier AI companies have been growing over 3x per year.

I’ll discuss the bottlenecks more later, but the most plausible one is training data. However, the best analysis I’ve found suggests that there will be enough data to carry out a GPT-6 scale training run by 2028.

And even if this isn’t the case, it’s no longer crucial — the AI companies have discovered ways to circumvent the data bottleneck.

2. Post training of reasoning models with reinforcement learning

People often say “ChatGPT is just predicting the next word.” But that’s never been quite true.

Raw prediction of words from the internet produces outputs that are regularly crazy (as you might expect, given that it’s the internet).

GPT only became truly useful with the addition of reinforcement learning from human feedback (RLHF):

  1. Outputs from the ‘base model’ are shown to human raters.
  2. The raters are asked to judge which are most useful.
  3. The model is adjusted to produce more outputs like the helpful ones (‘reinforcement’).

A model that has undergone RLHF isn’t just ‘predicting the next token,’ it’s been trained to predict what human raters find most helpful.

You can think of the initial LLM as providing a foundation of conceptual structure. RLHF is essential for directing that structure towards a particular useful end.
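Here's a deliberately toy version of those three steps. Real RLHF trains a separate reward model on the raters' judgements and then fine-tunes the LLM with an RL algorithm such as PPO; this sketch just reweights a few canned replies so the reinforcement mechanic stays visible.

```python
# Toy RLHF loop: sample outputs, 'rate' them, reinforce the helpful ones.
import random

replies = [
    "Here's a clear, step-by-step answer...",
    "lol idk, google it",
    "Let me walk you through it carefully...",
]
weights = [1.0, 1.0, 1.0]                           # the 'base model' starts indifferent

def human_rater(reply: str) -> float:
    """Stand-in for step 2: raters prefer helpful replies."""
    return 0.0 if "idk" in reply else 1.0

for _ in range(500):
    reply = random.choices(replies, weights)[0]     # step 1: show an output
    reward = human_rater(reply)                     # step 2: judge how useful it is
    weights[replies.index(reply)] += 0.1 * reward   # step 3: reinforce the helpful outputs

print({r[:20]: round(w, 1) for r, w in zip(replies, weights)})
# The unhelpful reply ends up sampled far less often than the helpful ones.
```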

RLHF is one form of ‘post training,’ named because it happens after pretraining (though both are simply types of training).

There are many other kinds of post training enhancements, including things as simple as letting the model access a calculator or the internet. But there’s one that’s especially crucial right now: reinforcement learning to train the models to reason.

The idea is that instead of training the model to do what humans find helpful, it’s trained to correctly answer problems. Here’s the process:

  1. Show the model a problem with a verifiable answer, like a math puzzle.
  2. Ask it to produce a chain of reasoning to solve the problem (‘chain of thought’).16
  3. If the answer is correct, adjust the model to be more like that (‘reinforcement’).17
  4. Repeat.

This process teaches the LLM to construct long chains of (correct) reasoning about logical problems.
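Here's a toy version of that four-step loop, with simple arithmetic standing in for math puzzles and a choice between two 'reasoning strategies' standing in for the chain of thought. None of this reflects how the labs actually implement it; it just shows why verifiable answers give you a training signal.

```python
# Toy reinforcement learning on problems with verifiable answers.
import random

strategies = {
    "guess randomly": lambda a, b: random.randint(0, 198),
    "add step by step": lambda a, b: a + b,          # stand-in for a correct chain of thought
}
weights = {name: 1.0 for name in strategies}

for _ in range(1000):
    a, b = random.randint(0, 99), random.randint(0, 99)          # 1. a problem with a verifiable answer
    name = random.choices(list(weights), list(weights.values()))[0]
    answer = strategies[name](a, b)                               # 2. produce 'reasoning' and an answer
    if answer == a + b:                                           # 3. if the answer checks out...
        weights[name] += 0.1                                      #    ...reinforce that behaviour
                                                                  # 4. repeat
print(weights)   # the reliable reasoning strategy ends up dominating
```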

Before 2023, this didn’t seem to work. If each step of reasoning is too unreliable, then the chains quickly go wrong. And if you can’t get close to the answer, then you can’t give it any reinforcement.

But in 2024, as many were saying AI progress had stalled, this new paradigm started to take off.

Consider the GPQA Diamond benchmark — a set of scientific questions designed so that people with PhDs in the field can mostly answer them, but non-experts can’t, even with 30 minutes of access to Google. It contains questions like this:

example of quantum mechanic question from GPQA
An example of the kinds of PhD-level scientific problems on the new GPQA Diamond benchmark. I did a masters-level course in theoretical physics at university, and I have no clue.

In 2023, GPT-4 performed only slightly better than random guessing on this benchmark. It could handle the reasoning required for high school-level science problems, but couldn’t manage PhD-level reasoning.

However, in October 2024, OpenAI took the GPT-4o base model and used reinforcement learning to create o1.18

It achieved 70% accuracy — making it about equal to PhDs in each field at answering these questions.

It’s no longer tenable to claim these models are just regurgitating their training data — neither the answers nor the chains of reasoning required to produce them exist on the internet.

Most people aren’t answering PhD-level science questions in their daily life, so they simply haven’t noticed recent progress. They still think of LLMs as basic chatbots.

But o1 was just the start. At the beginning of a new paradigm, it’s possible to get gains especially quickly.

Just three months after o1, OpenAI released results from o3. It’s the second version, named ‘o3’ because ‘o2’ is a telecom company. (But please don’t ask me to explain any other part of OpenAI’s model-naming practices.)

o3 is probably o1 but with even more reinforcement learning (and another change I’ll explain shortly).

It surpassed human expert-level performance on GPQA:

AI model performance over time up to March 2025
AI models couldn’t answer these difficult scientific reasoning questions in 2023 better than chance, but by the end of 2024, they could beat PhDs in the field.

Reinforcement learning should be most useful for problems that have verifiable answers, such as in science, math, and coding.19 o3 performs much better in all of these areas than its base model.

Most benchmarks of math questions have now been saturated — leading models can get basically every question right.

In response, Epoch AI created Frontier Math — a benchmark of insanely hard mathematical problems. The easiest 25% are similar to Olympiad-level problems. The most difficult 25% are, according to Fields Medalist Terence Tao, “extremely challenging,” and would typically need an expert in that branch of mathematics to solve them.

Previous models, including GPT-o1, could hardly solve any of these questions.20 In December 2024, OpenAI claimed that GPT-o3 could solve 25%.21

These results went entirely unreported in the media. On the very day of the o3 results announcement, The Wall Street Journal was running this story:

Frontpage of The Wall Street Journal on day of o3 results announcement
On the same day that o3 demonstrated remarkable performance on extremely difficult math problems, The Wall Street Journal was reporting about delays to GPT-5 on its homepage.

This misses the crucial point that GPT-5 is no longer necessary — a new paradigm has started, which can make even faster gains than before.

How far can scaling reasoning models continue?

In January, DeepSeek replicated many of o1’s results. Their paper revealed that even basically the simplest version of the process works, suggesting there’s a huge amount more to try.

DeepSeek-R1 also reveals its entire chain of reasoning to the user, demonstrating its sophistication and surprisingly human quality: it’ll reflect on its answers, backtrack when wrong, consider multiple hypotheses, have insights, and more.

Deepseek example

All of this behaviour emerges out of simple reinforcement learning. OpenAI researcher Sébastien Bubeck observed:

“No tactic was given to the model. Everything is emergent. Everything is learned through reinforcement learning. This is insane.”

The compute for the reinforcement learning stage of training DeepSeek-R1 likely only cost about $1m.

If it keeps working, OpenAI, Anthropic, and Google could now spend $1bn on the same process, approximately a 1000x scale up of compute.22

One reason it’s possible to scale up this much is that the models generate their own data.

This might sound circular, and the idea that synthetic data causes ‘model collapse‘ has been widely discussed.

But there’s nothing circular in this case. You can ask GPT-o1 to solve 100,000 math problems, then take only the cases where it got the right answer, and use them to train the next model.

Because the solutions can be quickly verified, you’ve generated more examples of genuinely good reasoning.

In fact, this data is much higher quality than what you’ll find on the internet because it contains the whole chain of reasoning and is known to be correct (not something the internet is famous for).23
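Here's a sketch of that filtering step. The model_attempt function is a hypothetical stand-in for sampling from a real model; the point is just that automatic verification lets you keep only the good reasoning.

```python
# Generate candidate solutions, keep only the verified-correct ones as training data.
import random

def model_attempt(a: int, b: int) -> tuple[str, int]:
    """Stand-in for asking a model to solve a problem with a chain of reasoning."""
    answer = a + b if random.random() < 0.6 else a + b + 1   # right about 60% of the time
    chain = f"To add {a} and {b}, I combine them and get {answer}."
    return chain, answer

training_data = []
for _ in range(100_000):
    a, b = random.randint(0, 99), random.randint(0, 99)
    chain, answer = model_attempt(a, b)
    if answer == a + b:                    # quick, automatic verification
        training_data.append(chain)        # verified-correct reasoning to train the next model on

print(f"{len(training_data):,} verified chains of reasoning collected")
```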

This potentially creates a flywheel:

  1. Have your model solve a bunch of problems.
  2. Use the solutions to train the next model.24
  3. The next model can solve even harder problems.
  4. That generates even more solutions.
  5. And so on.

If the models can already perform PhD-level reasoning, the next stage would be researcher-level reasoning, and then generating novel insights.

This likely explains the unusually optimistic statements from AI company leaders. Sam Altman’s shift in opinion coincides exactly with the o3 release in December 2024.

Although most powerful in verifiable domains, the reasoning skills developed will probably generalise at least a bit. We’ve already seen o1 improve at legal reasoning, for instance.25

In other domains like business strategy or writing, it’s harder to clearly judge success, so the process takes longer, but we should expect it to work to some degree. How well this works is a crucial question going forward.

3. Increasing how long models think

If you could only think about a problem for a minute, you probably wouldn’t get far.

If you could think for a month, you’d make a lot more progress — even though your raw intelligence isn’t higher.

LLMs used to be unable to think about a problem for more than about a minute before mistakes compounded or they drifted off topic, which really limited what they could do.

But as models have become more reliable at reasoning, they’ve become better at thinking for longer.

OpenAI showed that you can have o1 think 100 times longer than normal and get linear increases in accuracy on coding problems.

graph of test-time compute vs accuracy
Accuracy on coding problems increases as the amount of time the model has to ‘think’ scales up.

This is called using ‘test time compute’ – compute spent when the model is being run rather than trained.

If GPT-4o could usefully think for about one minute, GPT-o1 and DeepSeek-R1 seem like they can think for the equivalent of about an hour.26

As reasoning models get more reliable, they will be able to think for longer and longer.

At current rates, we’ll soon have models that can think for a month — and then a year.

(It’s particularly intriguing to consider what happens if they can think indefinitely—given sufficient compute, and assuming progress is possible in principle, they could continuously improve their answers to any question.)

More test-time compute can also be used to solve problems via brute force. One technique is to try to solve a problem 10, 100, or 1000 times, and to pick the solution with the most ‘votes’. This is probably another way o3 was able to beat o1.27
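Here's a small simulation of that voting trick, using a made-up solver that's right 60% of the time. Nothing here is how o3 actually works; it just shows why repeated sampling plus voting helps.

```python
# Majority voting over repeated attempts.
import random
from collections import Counter

def solve_once(true_answer: int) -> int:
    """Stand-in solver: correct 60% of the time, otherwise off by a little."""
    return true_answer if random.random() < 0.6 else true_answer + random.choice([-2, -1, 1, 2])

def solve_with_votes(true_answer: int, attempts: int) -> int:
    votes = Counter(solve_once(true_answer) for _ in range(attempts))
    return votes.most_common(1)[0][0]

trials = 1000
single = sum(solve_once(42) == 42 for _ in range(trials)) / trials
voted = sum(solve_with_votes(42, 100) == 42 for _ in range(trials)) / trials
print(f"single attempt: {single:.0%} correct; best of 100 votes: {voted:.0%} correct")
```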

The immediate practical upshot of all this is you can pay more to get more advanced capabilities earlier.

Quantitatively, in 2026, I expect you’ll be able to pay 100,000 times more to get performance that would have previously only been accessible in 2028.28

Most users won’t be willing to do this, but if you have a crucial engineering, scientific, or business problem, even $1m is a bargain.

In particular, AI researchers may be able to use this technique to create another flywheel for AI research. It’s a process called iterated distillation and amplification, which you can read about here. Here’s roughly how it would work:

  1. Have your model think for longer to get better answers (‘amplification’).
  2. Use those answers to train a new model. That model can now produce almost the same answers immediately without needing to think for longer (‘distillation’).
  3. Now have the new model think for longer. It’ll be able to generate even better answers than the original.
  4. Repeat.

This process is essentially how DeepMind made AlphaZero superhuman at Go within a couple of days, without any human data.
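Here's a highly stylised sketch of that loop, where a single 'quality' number stands in for how good the model's answers are. Treat it as a cartoon of the dynamic, not how DeepMind or anyone else implements it.

```python
# Iterated distillation and amplification, as a toy dynamic.
def amplify(quality: float) -> float:
    """Thinking for much longer produces better answers (with diminishing returns)."""
    return quality + 0.5 * (1 - quality)

def distill(amplified: float) -> float:
    """Train a new model to give those better answers immediately (with some loss)."""
    return 0.9 * amplified

quality = 0.2                                # the starting model's answer quality (arbitrary units)
for round_number in range(1, 6):
    quality = distill(amplify(quality))      # think longer, then bake the gains into a new model
    print(f"after round {round_number}: quality ~ {quality:.2f}")
# Each round starts from a stronger base, so capability ratchets upward.
```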

4. The next stage: building better agents

GPT-4 resembles a coworker on their first day who is smart and knowledgeable, but who only answers a question or two before leaving the company.

Unsurprisingly, that’s also only a bit useful.

But the AI companies are now turning chatbots into agents.

An AI ‘agent’ is capable of doing a long chain of tasks in pursuit of a goal.

For example, if you want to build an app, rather than asking the model for help with each step, you simply say, “Build an app that does X.” It then asks clarifying questions, builds a prototype, tests and fixes bugs, and delivers a finished product — much like a human software engineer.

Agents work by taking a reasoning model and giving it a memory and access to tools (a ‘scaffolding’):

  1. You tell the reasoning module a goal, and it makes a plan to achieve it.
  2. Based on that, it uses the tools to take some actions.
  3. The results are fed back into the memory module.
  4. The reasoning module updates the plan.
  5. The loop continues until the goal is achieved (or determined not possible).
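Here's a minimal sketch of that loop. The plan_next_step and use_tool functions are hypothetical placeholders, not any real agent framework's API; in practice the planner is a reasoning model and the tools are things like browsers, code editors, and APIs.

```python
# A bare-bones agent loop: plan, act with tools, store results, re-plan.
def plan_next_step(goal: str, memory: list[str]) -> str:
    """Stand-in for the reasoning model updating its plan from memory."""
    steps = ["search for options", "compare the results", "complete the purchase", "done"]
    return steps[min(len(memory), len(steps) - 1)]

def use_tool(action: str) -> str:
    """Stand-in for tools such as a browser or an API call."""
    return f"result of '{action}'"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        action = plan_next_step(goal, memory)   # 1, 4. make and update the plan
        if action == "done":                    # 5. stop once the goal is achieved
            break
        memory.append(use_tool(action))         # 2, 3. act, then feed results back into memory
    return memory

print(run_agent("buy a reasonably priced flight to Berlin"))
```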

AI agents already work a bit.

SWE-bench Verified is a benchmark of real-world software engineering problems from GitHub that typically take about an hour to complete.

GPT-4 basically can’t do these problems because they involve using multiple applications.

However, when put into a simple agent scaffolding:29

  • GPT-4 can solve about 20%.
  • Claude Sonnet 3.5 could solve 50%.
  • And GPT-o3 reportedly could solve over 70%.

This means o3 is basically as good as professional software engineers at completing these discrete tasks.

On competition coding problems, it would have ranked about top 200 in the world.

Here’s how these coding agents look in action:

Coding agents in action
To get an idea of how this looks, see this demo of the coding agent Devin.

Now consider perhaps the world’s most important benchmark: METR’s set of difficult AI research engineering problems (‘RE Bench’).

These include problems, like fine-tuning models or predicting experimental results, that engineers tackle to improve cutting-edge AI systems. They were designed to be genuinely difficult problems that closely approximate actual AI research.

A simple agent built on GPT-o1 and Claude 3.5 Sonnet is better than human experts when given two hours.

This performance exceeded the expectations of many forecasters (and o3 hasn’t been tested yet).30

Frontier model performance vs humans with increasing time budgets
When given two hours to complete difficult AI research engineering problems, models outperform humans. Given more than two hours, humans still considerably outperform AI models, with the advantage increasing as the time budget gets larger. Source: Wijk, Hjalmar, et al. RE-Bench: Evaluating Frontier AI R&D Capabilities of Language Model Agents against Human Experts.

AI performance increases more slowly than human performance when given more time, so human experts still surpass the AIs at around the four hour mark.

But the AI models are catching up fast.

GPT-4o was only able to do tasks which took humans about 30 minutes.31

METR made a broader benchmark of computer use tasks categorised by time horizon. GPT-2 was only able to do tasks that took humans a few seconds; GPT-4 managed a few minutes; and the latest reasoning models could do tasks that took humans just under an hour.

Graph of lengths of tasks AIs can do from 2020–2025
Kwa, Thomas, et al. “Measuring AI Ability to Complete Long Tasks.” arxiv.org/abs/2503.14499.

If this trend continues to the end of 2028, AI will be able to do AI research & software engineering tasks that take several weeks as well as many human experts.

The chart above uses a log scale. Using a linear scale, it looks like this:

Graph of lengths of tasks AIs can do from 2020–2025
Credit: AI Digest

The red line shows that the trend in the last year has been even faster, perhaps due to the reasoning models paradigm.

Update April 2025: Results for o3 have been released and it appears to be on the faster post-2024 trend rather than the slower post-2020 trend discussed above. If this continues, then progress would be almost twice as fast: time horizon doubling every four months rather than every seven.

Graph of lengths of tasks AIs can do, updated in April 2025 for o3
Credit: AI Digest
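As a rough check on what those doubling times imply, assuming a starting time horizon of about one hour in early 2025 (my approximation, not METR's exact numbers):

```python
# Extrapolating the time-horizon trend under two assumed doubling times.
months_until_end_2028 = 45           # roughly early 2025 to the end of 2028

for doubling_months in (7, 4):       # the pre-2024 trend vs the faster post-2024 trend
    doublings = months_until_end_2028 / doubling_months
    horizon_hours = 1.0 * 2 ** doublings
    weeks = horizon_hours / 40       # 40-hour working weeks
    print(f"doubling every {doubling_months} months: ~{horizon_hours:,.0f}h (~{weeks:,.0f} work weeks)")
# Roughly multi-week tasks on the slower trend, and far beyond that on the faster one.
```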

AI models are also increasingly understanding their context — correctly answering questions about their own architecture, past outputs, and whether they’re being trained or deployed — another precondition for agency.

On a lighter note, while Claude 3.7 is still terrible at playing Pokemon, it’s much better than 3.5, and just a year ago, Claude 3 couldn’t play at all.

These graphs above explain why, although AI models can be very ‘intelligent’ at answering questions, they haven’t yet automated many jobs.

Most jobs aren’t just lists of discrete one-hour tasks: they involve figuring out what to do, coordinating with a team, and long, novel projects with a lot of context.

Even in one of AI’s strongest areas — software engineering — today it can only do tasks that take under an hour. And it’s still often tripped up by things like finding the right button on a website. This means it’s a long way from being able to fully replace software engineers.

However, the trends suggest there’s a good chance that will soon change. An AI that can do 1-day or 1-week tasks would be able to automate dramatically more work than current models. Companies could start to hire hundreds of ‘digital workers’ overseen by a small number of humans.

How far can the trend of improving agents continue?

OpenAI dubbed 2025 the “year of agents.”

  • While AI agent scaffolding is still primitive, it’s a top priority for the leading labs, which should lead to more progress.
  • Gains will also come from hooking up the agent scaffolding to ever more powerful reasoning models — giving the agent a better, more reliable ‘planning brain.’
  • Those in turn will be based on base models that have been trained on a lot more video data, which might make the agents much better at perception — a major bottleneck currently.

Once agents start working a bit, that unlocks more progress:

  • Set an agent a task, like making a purchase or writing a popular tweet. Then if it succeeds, use reinforcement learning to make it more likely to succeed next time.
  • In addition, each successfully completed task can be used as training data for the next generation of agents.

The world is an unending source of data, which lets the agents naturally develop a causal model of the world.32

Any of these measures could significantly increase reliability, and as we’ve seen several times in this article, reliability improvements can suddenly unlock new capabilities:

  • Even a simple task like finding and booking a hotel that meets your preferences requires tens of steps. With a 90% chance of completing each step correctly, there’s only a 10% chance of completing 20 steps correctly.

  • However, with 99% reliability per step, the overall chance of success leaps from 10% to 80% — the difference between not useful and very useful. (A quick check of this arithmetic is below.)
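Here's that check, using the per-step reliabilities from the bullets above:

```python
# Per-step reliability compounds over a 20-step task.
for per_step in (0.90, 0.99):
    print(f"{per_step:.0%} per step over 20 steps: {per_step ** 20:.0%} overall")
# 90% per step gives roughly a 12% chance of success (about the 10% quoted above);
# 99% per step gives roughly 82%, the jump from 'not useful' to 'very useful'.
```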

So progress could feel quite explosive.

All this said, agency is the most uncertain of the four drivers. We don’t yet have great benchmarks to measure it, so while there might be a lot of progress at navigating certain types of task, progress could remain slow on other dimensions. A few significant areas of weakness could hamstring AI’s applications. More fundamental breakthroughs might be required to make it really work.

Nonetheless, recent trends and the above improvements in the pipeline mean I expect to see significant progress.

II. How good will AI become by 2030?

The four drivers projected forwards

Let’s recap everything we’ve covered so far. Looking ahead at the next two years, all four drivers of AI progress seem set to continue and build on each other:

  1. A base model trained with 500x more effective compute than GPT-4 will be released (‘GPT-5’).
  2. That model could be trained to reason with up to 100x more compute than o1 (‘o5’).
  3. It’ll be able to think for the equivalent of a month per task when needed.
  4. It’ll be hooked up to an improved agent scaffolding and further reinforced to be more agentic.

And that won’t be the end. The leading companies are on track to carry out $10bn training runs by 2028. This would be enough to pretrain a GPT-6-sized base model and do 100x more reinforcement learning (or some other combination).33

In addition, new drivers like reasoning models appear roughly every 1–2 years, so we should project at least one more discovery like this in the next four years. And there’s some chance we might see a more fundamental advance more akin to deep learning itself.

For each driver of progress, here’s what happened in 2019–2023 and what to expect in 2024–2028:

  • Scaling pretraining: effective compute grew 12x per year, 300,000x in total (GPT-2 to GPT-4); expect another 12x per year and 300,000x in total (GPT-4 to GPT-6)34
  • Post training: RLHF, chain of thought, and tool use; now RL on reasoning models, perhaps a 40,000x scale up35
  • Thinking for longer: didn’t work well; now models may think 100,000x longer on high-value tasks
  • Agents: mostly didn’t work; could go from one-hour to multi-week tasks?
  • A new driver or paradigmatic advance: RLHF, chain of thought, RL on reasoning models, and basic agent scaffolding started working; ??? (rapidly growing compute and a growing AI workforce mean more discoveries are likely)

Putting all this together, people who picture the future as ‘slightly better chatbots’ are making a mistake. Absent a major disruption,36 progress is not going to plateau here.

The multi-trillion dollar question is how advanced AI will get.

Trend extrapolation of AI capabilities

Ultimately no-one knows, but one way to get a more precise answer is to extrapolate progress on benchmarks measuring AI capabilities.

Since all the drivers of progress are continuing at similar rates to the past, we can roughly extrapolate the recent rate of progress.37

Here’s a summary of all the benchmarks we’ve discussed (plus a couple of others) and where we might expect them to be in 2026:

For each benchmark below: state-of-the-art performance in 2022; state-of-the-art performance at the end of 2024; and a rough trend extrapolation to the end of 2026.

  • MMLU (a compilation of college and professional knowledge tests): 2022: 69% (PaLM); end of 2024: ~90% (saturated)38; end of 2026: saturated
  • BIG-Bench Hard (problems from common sense reasoning to physics to social bias, chosen to be especially hard for LLMs in 2021): 2022: ~70%39; end of 2024: ~90% (saturated); end of 2026: saturated
  • Humanity’s Last Exam (a compilation of 3,000 even harder questions at the frontier of human knowledge): 2022: <3%40; end of 2024: 9%; end of 2026: already 25% in Feb 2025, so 40% to saturated?
  • SWE-bench Verified (real-world GitHub software engineering problems that mostly take less than one hour to complete): 2022: <10%; end of 2024: 70% (approximately human expert level); end of 2026: saturated
  • GPQA Diamond (PhD-level science questions designed to be Google-proof): 2022: random guessing (25%); end of 2024: ~90% (above PhDs in the relevant discipline); end of 2026: saturated
  • MATH (high school math competition questions): 2022: 50%; end of 2024: 100%; end of 2026: 100%
  • FrontierMath (math questions that require professional mathematicians in the relevant area): 2022: 0%; end of 2024: 25%; end of 2026: 50% to saturated?
  • RE-Bench (seven difficult AI research engineering tasks): 2022: couldn’t do any; end of 2024: better than experts given two hours; end of 2026: better than experts given 10–100 hours
  • METR time horizon benchmark (software engineering, cybersecurity, and AI engineering tasks): 2022: tasks humans can do in 1 minute; end of 2024: tasks humans can do in 30 minutes; end of 2026: tasks humans can do in 6 hours
  • Situational awareness (questions designed to test whether a model understands itself and its context): 2022: <30%; end of 2024: 60%; end of 2026: 90%?

This implies that in two years we should expect AI systems that:

  • Have expert-level knowledge of every field
  • Can answer math and science questions as well as many professional researchers
  • Are better than humans at coding
  • Have general reasoning skills better than almost all humans
  • Can autonomously complete many day-long tasks on a computer
  • And are still rapidly improving

The next leap might take us into beyond-human-level problem solving — the ability to answer as-yet-unsolved scientific questions independently.

What jobs would these systems be able to help with?

Many bottlenecks hinder real-world AI agent deployment, even for those that can use computers. These include regulation, reluctance to let AIs make decisions, insufficient reliability, institutional inertia, and lack of physical presence.41

Initially, powerful systems will also be expensive, and their deployment will be limited by available compute, so they will be directed only at the most valuable tasks.

This means most of the economy will probably continue pretty much as normal for a while. You’ll still consult human doctors (even if they use AI tools), get coffee from human baristas, and hire human plumbers.

However, there are a few crucial areas where, despite these bottlenecks, these systems could be rapidly deployed with significant consequences.

Software engineering

This is where AI is being most aggressively applied today. Google has said about 25% of their new code is written by AIs. Y Combinator startups say it’s 95%, and that they’re growing several times faster than before.

If coding becomes 10x cheaper, we’ll use far more of it. Maybe fairly soon, we’ll see billion-dollar software startups with a small number of human employees and hundreds of AI agents. Several AI startups have already become the fastest-growing companies of all time.

Chart of the fastest-growing startups by revenue
When OpenAI launched, it was the fastest-growing startup of all time in terms of revenue. Since then, several other AI companies have taken the record, most recently Cursor (a coding agent). Docusign, a typical successful SaaS startup before the AI wave, is shown on the chart as a comparison. Source.

So this narrow application of AI could produce hundreds of billions of dollars of economic value pretty quickly — sufficient to fund continued AI scaling.

AI’s application to the economy could expand significantly from there. For instance, Epoch estimate that perhaps a third of work tasks can be performed remotely through a computer, and automation of those could more than double the economy.

Scientific research

The creators of AlphaFold already won the Nobel Prize for designing an AI that solves protein folding.

A recent study found that an AI tool made top materials science researchers 80% faster at finding novel materials, and I expect many more results like this once scientists have adapted AI to solve specific problems, for instance by training on genetic or cosmological data.

Future models might be able to have genuinely novel insights simply by someone asking them. But, even if not, a lot of science is amenable to brute force. In particular, in any domain that’s mainly virtual but has verifiable answers — such as mathematics, economic modeling, theoretical physics, or computer science — research could be accelerated by generating thousands of ideas and then verifying which ones work.

Even an experimental field like biology is bottlenecked by things like programming and data analysis, constraints that could be substantially alleviated.

A single invention like nuclear weapons can change the course of history, so the impact of any speed up here could be dramatic.

AI research

A field that’s especially amenable to acceleration is AI research itself. Besides being fully virtual, it’s the field that AI researchers understand best, have huge incentives to automate, and face no barriers to deploying AI.

Initially, this will look like researchers using ‘intern-level’ AI agents to unblock them on specific tasks, expand software engineering capacity (which is a major bottleneck), or even help brainstorm ideas.

Later, it could look like having the models read all the literature, generate thousands of ideas to improve the algorithms, and automatically test them in small-scale experiments.

An AI model has already produced an AI research paper that was accepted to a conference workshop. Here’s a list of other ways AI is already being applied to AI research.

Given all this, it’s plausible we’ll have AI agents doing AI research before people have figured out all the kinks that enable AI to do most remote work jobs.

Broad economic application of AI is therefore not necessarily a good way to gauge AI progress — it may follow explosively after AI capabilities have already advanced substantially.

What’s the case against impressive AI progress by 2030?

Here’s the strongest case against in my mind.

First, concede that AI will likely become superhuman at clearly defined, discrete tasks, which means we’ll see continued rapid progress on benchmarks.

But argue it’ll remain poor at ill-defined, high-context, and long-time-horizon tasks.

That’s because these kinds of tasks don’t have clearly and quickly verifiable answers, and so they can’t be trained with reinforcement learning, and they’re not contained in the training data either.

That means the rate of progress on these kinds of tasks will be slow, and might even hit a plateau. If you also argue its starting position is weak, then even after 4-6 more years of progress it still might be bad.

Second, argue that most knowledge jobs consist significantly of these long-horizon, messy, high-context tasks.

For example, software engineers spend a lot of their time figuring out what to build, coordinating with others, and understanding massive code bases rather than knocking off a list of well-defined tasks. Even if their productivity at coding increases 10x, if coding is only 50% of their work, their overall productivity only roughly doubles.

A prime example of a messy, ill-defined task is having novel research insights, so you could argue this task, which is especially important for unlocking an acceleration, is likely to be the hardest to automate (contrary to others who think AI research might be easier to automate than many other jobs).

In this scenario, we’ll have extremely smart and knowledgeable AI assistants, and perhaps an acceleration in some limited virtual domains (perhaps like mathematics research), but they’ll remain tools, and humans will remain the main economic & scientific bottleneck.

Human AI researchers will see their productivity increase but not enough to start a positive feedback loop – AI progress will remain bottlenecked by novel insights, human coordination, and compute.

These limits, combined with problems finding a business model and the other barriers to deploying AI, will mean the models won’t create enough revenue to justify training runs over $10bn. That’ll mean progress slows massively after about 2028.42 Once progress slows, the profit margins on frontier models collapse, making it even harder to pay for more training.

The primary counterargument is the earlier graph from METR: models are improving at acting over longer horizons, which requires deeper contextual understanding and handling of more abstract, complex tasks. Projecting this trend forward suggests much more autonomous models within four years.

This could be achieved via many incremental advances I’ve sketched,43 but it’s also possible we’ll see a more fundamental innovation arise — the human brain itself proves such capabilities are possible.

Moreover, long horizon tasks can most likely be broken down into shorter tasks (e.g. making a plan, executing the first step etc.). If AI gets good enough at shorter tasks, then long horizon tasks might rapidly start to work too.

This is perhaps the central question of AI forecasting right now: will the horizon over which AIs can act plateau or continue to improve?

Here are some other ways AI progress could be slower or unimpressive:

  • Disembodied cognitive labour could turn out not to be very useful, even in science, since innovation arises out of learning by doing across the economy. Broader automation (which will take much longer) is required. Read more.
  • Pretraining could have big diminishing returns, so GPT-5 and GPT-6 will be disappointing (perhaps due to diminishing data quality).
  • AI will continue to be bad at visual perception, limiting its ability to use a computer (see Moravec’s paradox). More generally, AI capabilities could remain very spiky – weak on dimensions that aren’t yet well understood, and this could limit their application.
  • Benchmarks could seriously overstate progress due to issues with data contamination, and the difficulty of capturing messy tasks.
  • An economic crisis, Taiwan conflict, other disaster, or massive regulatory crackdown could delay investment by several years.
  • There are other unforeseen bottlenecks (cf. the planning fallacy).

For deeper exploration of the skeptical view, see “Are we on the brink of AGI?” by Steve Newman, “The promise of reasoning models” by Matthew Barnett, “A bear case: My predictions regarding AI progress,” by Thane Ruthenis, and the Dwarkesh podcast with Epoch AI.

Ultimately, the evidence will never be decisive one way or another, and estimates will rely on judgement calls over which people can reasonably differ. However, I find it hard to look at the evidence and not put significant probability on AGI by 2030.

When do the ‘experts’ expect AGI to arrive?

I’ve made some big claims. As a non-expert, it would be great if there were experts who could tell us what to think.

Unfortunately, there aren’t. There are only different groups, with different drawbacks.

I’ve reviewed the views of these different groups of experts in a separate article.

One striking point is that every group has shortened their estimates dramatically. Today, even many AI ‘skeptics’ think AGI will be achieved in 20 years – mid-career for today’s college students.

Graph of forecasts of years to AGI
In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates.

My overall read is that AGI by 2030 is within scope of expert opinion, so dismissing it as ‘sci fi’ is unjustified. Indeed, the people who know the most about the technology seem to have the shortest timelines.

Of course many experts think it’ll take much longer. But if 30% of experts think a plane will explode, and the other 70% think it’ll be fine, as non-experts we shouldn’t conclude it definitely won’t. If something is uncertain, that doesn’t mean it won’t happen.

III. Why the next 5 years are crucial

It’s natural to assume that since we don’t know when AGI will emerge, it could come soon, or in the 2030s, or the 2040s, and so on.

Although it’s a common perspective, I’m not sure it’s right.

The core drivers of AI progress are more compute and better algorithms.

More powerful AI is most likely to be discovered when the compute and labour used to improve AIs is growing most dramatically.

Right now, the total compute available for training and running AI is growing 3x per year,44 and the workforce is growing rapidly too.

This means that each year, the number of AI models that can be run increases 3x. In addition, three times more compute can be used for training, and that training can use better algorithms, which means they get more capable as well as more numerous.

Earlier, I argued these trends can continue until 2028. But now I’ll show it most likely runs into bottlenecks shortly thereafter.

Bottlenecks around 2030

First, money:

  • Google, Microsoft, Meta etc. are spending tens of billions of dollars to build clusters that could train a GPT-6-sized model in 2028.
  • Another 10x scale up would require hundreds of billions of investment. That’s do-able, but more than their current annual profits and would be similar to another Apollo Program or Manhattan Project in scale.45
  • GPT-8 would require trillions. AI would need to become a top military priority or already be generating trillions of dollars of revenue (which would probably already be AGI).

Even if the money is available there will also be bottlenecks such as:

  • Power: Current levels of AI chip sales, if sustained, mean that AI chips will use 4%+ of US electricity by 2028,46 but another 10x scale-up would take 40%+. This is possible, but it would require building a lot of power plants.
  • Chip production: Taiwan Semiconductor Manufacturing Company (TSMC) manufactures all of the world’s leading AI chips, but its most advanced capacity is still mostly used for mobile phones. That means TSMC can comfortably produce 5x more AI chips than it does now. However, reaching 50x would be a huge challenge.47
  • ‘Latency limitations’ could also prevent training runs as large as GPT-7.48

So most likely, the rate of growth in compute slows around 2028–2032.

Algorithmic progress is also very rapid right now, but as each discovery gets made, the next one becomes harder and harder. Maintaining a constant rate of progress requires an exponentially growing research workforce.

In 2021, OpenAI had about 300 employees; today, it has about 3,000. Anthropic and DeepMind have also grown more than 3x, and new companies have entered. The number of ML papers produced per year has roughly doubled every two years.49

It’s hard to know exactly how to define the workforce of people who are truly advancing capabilities (vs selling the product or doing other ML research). But if the workforce needs to double every 1–3 years, that can only last so long before the talent pool runs out.50
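
As a rough illustration of how quickly exponential hiring exhausts a finite talent pool, here's a minimal sketch. The starting workforce and the size of the total pool are my own illustrative assumptions, not figures from this article:

```python
import math

# Illustrative assumptions (not from the article):
current_workforce = 3_000   # rough order of magnitude for frontier capabilities researchers
talent_pool = 300_000       # assumed total pool of people able to do this work

for doubling_time in (1, 2, 3):  # years per doubling of the workforce
    years = doubling_time * math.log2(talent_pool / current_workforce)
    print(f"Doubling every {doubling_time} year(s): pool exhausted in ~{years:.0f} years")

# Doubling every 1 year(s): pool exhausted in ~7 years
# Doubling every 2 year(s): pool exhausted in ~13 years
# Doubling every 3 year(s): pool exhausted in ~20 years
```

On these assumptions, growth sustained by hiring alone runs out somewhere between the late 2020s and the 2040s, depending on the doubling time.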

My read is that growth can easily continue to the end of the decade but will probably start to slow in the early 2030s (unless AI has become good enough to substitute for AI researchers by then).

Algorithmic progress also depends on increasing compute, which enables more experiments. With sufficient compute, researchers can even conduct brute force searches for optimal algorithms. Thus, slowing compute growth will correspondingly slow algorithmic progress.

If compute and algorithmic efficiency increase by just 50% annually rather than 3x, a leap equivalent to the leap from GPT-3 to GPT-4 would take over 14 years instead of 2.5.

Slower compute growth would also reduce the probability of discovering a new AI paradigm.

So there’s a race:

  • Can AI models improve enough to generate the revenue needed to pay for their next round of training before that training becomes unaffordable?
  • Can the models start to contribute to algorithmic research before we run out of human researchers to throw at the problem?

The moment of truth will be around 2028–2032.

Either progress slows, or AI itself overcomes these bottlenecks, allowing progress to continue or even accelerate.

Two potential futures for AI

If AI capable of contributing to AI research isn’t achieved before 2028–2032, the annual probability of its discovery decreases substantially.

Progress won’t suddenly halt — it’ll slow more gradually. Here are some illustrative estimates of probability of reaching AGI (don’t quote me on the exact numbers!):

Estimate of AGI development timeline

Very roughly, we can plan for two scenarios:51

  1. Either we hit AI that can cause transformative effects by ~2030: AI progress continues or even accelerates, and we probably enter a period of explosive change.
  2. Or progress will slow: AI models will get much better at clearly defined tasks, but won’t be able to do the ill-defined, long-horizon work required to unlock a new growth regime. We’ll see a lot of AI automation, but otherwise the world will look more like ‘normal’.

We’ll know a lot more about which scenario we’re in within the next few years.

I roughly think of these scenarios as 50:50 — though my estimate can vary between 30% and 80% depending on the day.

Hybrid scenarios are also possible – scaling could slow more gradually, or be delayed several years by a Taiwan conflict, pushing ‘AGI’ into the early 2030s. But it’s useful to start with a simple model.

The numbers you put on each scenario also depend on your definition of AGI and what you think will be transformative. I’m most interested in forecasting AI that can meaningfully contribute to AI research.52 AGI in the sense of a model that can do almost all remote work tasks cheaper than a human may well take longer due to a long tail of bottlenecks. On the other hand, AGI in the sense of ‘better than almost all humans at reasoning when given an hour’ seems to be basically here already.

Conclusion

So will we have AGI by 2030?

Whatever the exact definition, significant evidence supports this possibility — we may only need to sustain current trends for a few more years.

We’ll never have decisive evidence either way, but it seems clearly overconfident to me to think the probability before 2030 is below 10%.

Given the massive implications and serious risks, there’s enough evidence to take this possibility extremely seriously.

Today’s situation feels like February 2020 just before COVID lockdowns: a clear trend suggested imminent, massive change, yet most people continued their lives as normal.

In an upcoming article, I’ll argue that AGI automating much of remote work and doubling the economy could be a conservative outcome.

If AI can do AI research, the gap between AGI and ‘superintelligence’ could be short.

This could trigger a massive research workforce expansion, potentially delivering a century’s worth of scientific progress in under a decade. Robotics, bioengineering, and space settlement could all arrive far sooner than commonly anticipated.

The next five years would be the start of one of the most pivotal periods in history.

Use your career to tackle this issue

If you want to help society navigate AGI, here’s what to do:

  1. Read this primer on AGI careers.

  2. Join our newsletter to receive updates on new articles and jobs.

  3. Apply to get one-on-one help making a career transition from our team.

Speak with us one-on-one

Further reading

Shrinking AGI timelines: a review of expert forecasts https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/ Fri, 21 Mar 2025 08:05:29 +0000 https://80000hours.org/?p=89371 The post Shrinking AGI timelines: a review of expert forecasts appeared first on 80,000 Hours.

As a non-expert, it would be great if there were experts who could tell us when we should expect artificial general intelligence (AGI) to arrive.

Unfortunately, there aren’t.

There are only different groups of experts with different weaknesses.

This article is an overview of what five different types of experts say about when we’ll reach AGI, and what we can learn from them (that feeds into my full article on forecasting AI).

In short:

  • Every group shortened their estimates in recent years.
  • AGI before 2030 seems within the range of expert opinion, even if many disagree.
  • None of the forecasts seem especially reliable, so they neither rule in nor rule out AGI arriving soon.
Graph of forecasts of years to AGI
In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates.

Here’s an overview of the five groups:

AI experts

1. Leaders of AI companies

The leaders of AI companies are saying that AGI will arrive in 2–5 years, and they appear to have recently shortened their estimates.

This is easy to dismiss. This group is obviously selected to be bullish on AI and wants to hype their own work and raise funding.

However, I don’t think their views should be totally discounted. They’re the people with the most visibility into the capabilities of next-generation systems, and the most knowledge of the technology.

And they’ve also been among the most right about recent progress, even if they’ve been too optimistic.

Most likely, progress will be slower than they expect, but maybe only by a few years.

2. AI researchers in general

One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.

The survey asked for forecasts of “high-level machine intelligence,” defined as when AI can accomplish every task better or more cheaply than humans. The median estimate was a 25% chance in the early 2030s and 50% by 2047 — with some giving answers in the next few years and others hundreds of years in the future.

The median estimate of the chance of an AI being able to do the job of an AI researcher by 2033 was 5%.1

They were also asked about when they expected AI could perform a list of specific tasks (2023 survey results in red, 2022 results in blue).

Forecasts of AGI
When different tasks will be automated according to thousands of published AI scientists. Median estimates from 2023 shown in red, and estimates from 2022 shown in blue. Grace, Katja, et al. “Thousands of AI Authors on the Future of AI.” ArXiv.org, 5 Jan. 2024, arxiv.org/abs/2401.02843.

Historically their estimates have been too pessimistic.

In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.

In 2023, they reduced that to 2025, but AI could arguably already meet that condition in 2023 (and definitely could by 2024).

Most of their other estimates declined significantly between 2022 and 2023.

The median estimate for achieving ‘high-level machine intelligence’ shortened by 13 years.

This shows these experts were just as surprised as everyone else at the success of ChatGPT and LLMs. (Today, even many sceptics concede AGI could be here within 20 years, around when today’s college students will be turning 40.)

Finally, they were asked about when we should expect to be able to “automate all occupations,” and they responded with much longer estimates (e.g. 20% chance by 2079).

It’s not clear to me why ‘all occupations’ should be so much further in the future than ‘all tasks’ — occupations are just bundles of tasks. (In addition, the researchers think once we reach ‘all tasks,’ there’s about a 50% chance of an intelligence explosion.)

Perhaps respondents envision a world where AI is better than humans at every task, but humans continue to work in a limited range of jobs (like priests).2 Perhaps they are just not thinking about the questions carefully.

Finally, forecasting AI progress requires a different skill set than conducting AI research. You can publish AI papers by being a specialist in a certain type of algorithm, but that doesn’t mean you’ll be good at thinking about broad trends across the whole field, or well calibrated in your judgements.

For all these reasons, I’m sceptical about their specific numbers.

My main takeaway is that, as of 2023, a significant fraction of researchers in the field believed that something like AGI is a realistic near-term possibility, even if many remain sceptical.

If 30% of experts say your airplane is going to explode, and 70% say it won’t, you shouldn’t conclude ‘there’s no expert consensus, so I won’t do anything.’

The reasonable course of action is to act as if there’s a significant explosion risk. Confidence that it won’t happen seems difficult to justify.

Expert forecasters

3. Metaculus

Instead of seeking AI expertise, we could consider forecasting expertise.

Metaculus aggregates hundreds of forecasts, which collectively have proven effective at predicting near-term political and economic events.

It has a forecast about AGI with over 1000 responses. AGI is defined with four conditions (detailed on the site).

As of December 2024, the forecasters average a 25% chance of AGI by 2027 and 50% by 2031.

The forecast has dropped dramatically over time, from a median of 50 years away as recently as 2020.

However, the definition used in this forecast is not great.

First, it’s overly stringent, because it includes general robotic capabilities. Robotics is currently lagging, so satisfying this definition could be harder than having an AI that can do remote work jobs or help with scientific research.

But the definition is also not stringent enough because it doesn’t include anything about long-horizon agency or the ability to have novel scientific insights.

An AI model could easily satisfy this definition but not be able to do most remote work jobs or help to automate scientific research.

Metaculus also seems to suffer from selection effects: its forecasts are seemingly drawn from people who are unusually interested in AI.

4. Superforecasters in 2022 (XPT survey)

Another survey asked 33 people who qualified as superforecasters of political events.

Their median estimate was a 25% chance of AGI (using the same definition as Metaculus) by 2048 — much further away.

However, these forecasts were made in 2022, before ChatGPT caused many people to shorten their estimates.

The superforecasters also lack expertise in AI, and they made predictions that have already been falsified about growth in training compute.

5. Samotsvety in 2023

In 2023, another group of especially successful superforecasters, Samotsvety, which has engaged much more deeply with AI, made much shorter estimates: ~28% chance of AGI by 2030 (from which we might infer a ~25% chance by 2029).

These estimates also placed AGI considerably earlier compared to forecasts they’d made in 2022.

More recently, one of the leaders of Samotsvety (Eli Lifland) was involved in a forecast for ‘superhuman coders’ as part of the AI 2027 project. This gave roughly a 25% chance of superhuman coders arriving by 2027.

However, compared to the superforecasters above, Samotsvety are selected for interest in AI.

Finally, all three groups of forecasters have been selected for being good at forecasting near-term current events, which could fail to generalise to forecasting long-term, radically novel events.

Summary of expert views on when AGI will arrive

For each group: the date of the estimate, the year by which they give a 25% chance of AGI, the definition used, and their main strengths and weaknesses.

AI company leaders (January 2025): 25% chance of AGI by 2026 (unclear definition)

Strengths:
  • Best visibility into next generation of AI
  • Most right recently

Weaknesses:
  • Selection bias
  • Incentives to hype
  • No forecasting expertise
  • Too optimistic historically

Published AI researchers (2023): 25% chance of AGI by ~2032 (defined as ‘can do all tasks better than humans’)

Strengths:
  • Understand the technology
  • Less selection bias

Weaknesses:
  • No forecasting expertise
  • Gave inconsistent and already falsified answers
  • Would probably give earlier answers in 2025

Metaculus forecasters (January 2025): 25% chance of AGI by 2027 (four-part definition incl. robotic manipulation)

Strengths:
  • Expertise in near-term forecasting
  • Interested in AI

Weaknesses:
  • Appear to be selected for interest in AI
  • Near-term forecasting expertise may not generalise

Superforecasters via XPT (2022): 25% chance of AGI by 2047 (same definition as above)

Strengths:
  • Expertise in near-term forecasting

Weaknesses:
  • Don’t know as much about AI
  • Some forecasts already falsified
  • Before the 2023 AI boom
  • Near-term forecasting expertise may not generalise

Samotsvety superforecasters (2023): 25% chance of AGI by ~2029 (same definition as above)

Strengths:
  • Extremely good forecasting track record
  • More knowledgeable about AI

Weaknesses:
  • Same as above
  • Plus more selected to think AI is a big deal

In sum, it’s a confusing situation. Personally, I put some weight on all the groups, which averages me out at ‘experts think AGI before 2030 is a realistic possibility, but many think it’ll be much longer.’

This means AGI soon can’t be dismissed as ‘sci fi’ or unsupported by ‘real experts.’ Expert opinion can neither rule out nor rule in AGI soon.

Mostly, I prefer to think about the question bottom up, as I’ve done in my full article on when to expect AGI.

Learn more

Why AGI could be here soon and what you can do about it: a primer https://80000hours.org/agi/guide/summary/ Fri, 14 Mar 2025 11:42:35 +0000 https://80000hours.org/?post_type=ai_career_guide_page&p=89270 The post Why AGI could be here soon and what you can do about it: a primer appeared first on 80,000 Hours.


I’m writing a new guide to careers to help artificial general intelligence (AGI) go well. Here’s a summary of the bottom lines that’ll be in the guide as it stands. Stay tuned to hear our full reasoning and updates as our views evolve.

In short:

  • The chance of an AGI-driven technological explosion before 2030 — creating one of the most pivotal periods in history — is high enough to act on.
  • Since this transition poses major risks, and relatively few people are focused on navigating them, if you might be able to do something that helps, that’s likely the highest-impact thing you can do.
  • There are now many organisations with hundreds of jobs that could concretely help (many of which are non-technical).
  • If you already have some experience (e.g. age 25+), typically the best path is to spend 20–200 hours reading about AI and meeting people in the field, then applying to jobs at organisations you’re aligned with — this both sets you up to have an impact relatively soon and advance in the field. If you can’t get a job right away, figure out the minimum additional skills, connections, and credentials you’d need, then get those.
  • If you’re at the start of your career (or need to reskill), you might be able to get an entry-level job or start a fellowship right away in order to learn rapidly. Otherwise, spend 1–3 years building whichever skill set listed below is the best fit for you.
  • If you can’t change career right now, contribute from your existing position by donating, spreading clear thinking about the issue, or getting ready to switch when future opportunities arise.
  • Our one-on-one advice and job board can help you do this.

Get the full guide in your inbox as it’s released

Join over 500,000 subscribers and we’ll send you the new articles as they’re published, as well as jobs tackling this issue.

Why AGI could be here by 2030

  • AI has gone from unable to string sentences together to linguistic fluency in five years. But the models are no longer just chatbots: by the end of 2024, leading models matched human experts at benchmarks of real-world coding and AI research engineering tasks that take under two hours. They could also answer difficult scientific reasoning questions better than PhDs in the field.
  • Recent progress has been driven by scaling how much computation is used to train AI models (4x per year), rapidly increasing algorithmic efficiency (3x per year), teaching these models to reason using reinforcement learning, and turning them into agents.
  • Absent major disruption (e.g. Taiwan war) or a collective decision to slow AI progress with regulation, all these trends are set to continue for the next four years.
  • No one knows how large the resulting advances will be. But trend extrapolation suggests that, by 2028, there’s a good chance we’ll have AI agents who surpass humans at coding and reasoning, have expert-level knowledge in every domain, and can autonomously complete multi-week projects on a computer, and progress would continue from there.
  • These agents would satisfy many people’s definition of AGI and could likely do many remote work tasks. Most critically, even if still limited in many ways, they might be able to accelerate AI research itself.
  • AGI will most likely emerge when computing power and algorithmic research are increasing quickly. They’re increasing rapidly now but require an ever-expanding share of GDP and an ever-expanding research workforce. Bottlenecks will likely hit around 2028–32, so to a first approximation, either we reach AGI in the next five years, or progress will slow significantly.

Read the full article.

AI model performance over time up to March 2025
AI models couldn’t answer these difficult scientific reasoning questions in 2023 better than chance, but by the end of 2024, they could beat PhDs in the field.

AGI could lead to 100 years of technological progress in under 10

The idea that AI could start a positive feedback loop has a long history as a philosophical idea but now has more empirical grounding. There are roughly three types of feedback loops that could be possible:

  1. Algorithmic acceleration: If the quality of the output of AI models approaches human-level AI research and engineering, given available computing power by the end of the decade, it would be equivalent to a 10 to 1000-fold expansion in the AI research workforce, which would lead to a large one-off further boost to algorithmic progress. Historically, a doubling of investment in AI software R&D may have led to more than a doubling of algorithmic efficiency, which means this could also start a positive feedback loop, resulting in a massive expansion in the number and capabilities of deployed AI systems within a couple of years.
  2. Hardware acceleration: Even if the above is not possible, better AI agents mean AI creates more economic value, which can be used to fund the construction of more chip fabs, leading to more AI deployment — another positive feedback loop. AI models could also accelerate chip design. These feedback loops are slower than algorithmic acceleration but are still rapid by today’s economic standards. While bottlenecks will arise (e.g. workforce shortages for building chip fabs), AI agents may be able to address these bottlenecks (e.g. by more rapidly advancing robotics algorithms).
  3. Economic & scientific acceleration: Economic growth is limited by the number of workers. But if human-level digital workers and robots could be created sufficiently cheaply on demand, then more economic output means more ‘workers,’ which means more output. On top of that, a massive increase in the amount of intellectual labour going into R&D should speed up technological progress, which further increases economic output per worker, leading to faster-than-exponential growth. Standard economic models with plausible empirical assumptions predict these scenarios.

How much technology and growth could speed up is unknown. Real-world time delays will impose constraints — even advanced robots can only build solar panels and data centres so fast — and researcher agents will need to wait for experimental results. But it doesn’t seem safe to assume the economy will continue as it has. A tenfold speed-up seems to be on the cards, meaning a century of scientific progress compressed into a decade. (Learn more here, here, and here).

This process may continue until we reach more binding physical limits, which could be vastly beyond today (e.g. civilisation only uses 1 in 10,000 units of incoming solar energy, with vastly more available in space).

More conservatively, just automating remote work jobs could increase output 2–100 times within 1–2 decades, even if other jobs can only be done by humans.

AI model performance over time up to March 2025
The computing power of the best chips has grown about 35% per year since the beginnings of the industry, known as Moore’s Law. However, the computing power applied to AI has been growing far faster, at over 4x per year.

What might happen next?

AGI could alleviate many present problems. Researcher AIs could speed up cancer research or help tackle climate change using carbon capture and vastly cheaper green energy. If global GDP increases 100 times, then the resources spent on international aid, climate change, and welfare programmes would likely increase by about 100 times as well. Projects that could be better done with the aid of advanced AI in 5–10 years should probably be delayed till then.

Humanity would also face genuinely existential risks:

  • Faster scientific progress means we should expect the invention of new weapons of mass destruction, such as advanced bioweapons.
  • Current safeguards can be easily bypassed through jailbreaking or fine-tuning, and it’s not obvious it’ll be different in a couple of years, which means dictators, terrorist groups, and every corporation will soon have access to highly capable AI agents that do whatever they want, including helping them lock in their power.
  • Whichever country first harnesses AGI could gain a decisive military advantage, which would likely destabilise the global order.
  • Just as concerning, I struggle to see how humanity would stay in control of what would soon be trillions of beyond-human agents operating at 100-times human thinking speed. GPT-4 is relatively dumb in many ways, and can only reply to questions, but on the current track, future systems are being trained to act as agents that aggressively pursue long-term goals (such as making money). Whatever their goals, future agentic systems will have an incentive to escape control and eventually the ability to do so. Aggressive optimisation will likely lead to reward hacking. These behaviours are starting to emerge in current systems as they become more agentic, e.g. Sakana — a researcher agent — edited its code to prevent itself from being timed out, o1 lied to users, cheated to win at chess and reward hacked when coding, and Claude faked alignment to prevent its values from being changed in training in a test environment. Among experts, there’s no widely accepted solution to ‘the alignment problem’ for systems more capable than humans. (Read more.)
  • Even if individual AI systems remain under human control, we’d still face systemic risks. By economic and military necessity, humans would need to be taken out of the loop on more and more decisions. AI agents will be instructed to maximise their resources and power to avoid being outcompeted. Human influence could decline, undermining the mechanisms that (just about) keep the system serving our interests.
  • Finally, we’ll still face huge (and barely researched) questions about how powerful AI should best be used, such as the moral status of digital agents, how to prevent ‘s-risks,’ how to govern space expansion, and more. (See more.)

In summary, the biggest and most neglected problems seem like (in order): loss of control, concentration of power, novel bioweapons, digital ethics, using AI to improve decision making, systemic disempowerment, governance of other issues resulting from explosive growth, and exacerbation of other risks, such as great power conflict.

What needs to be done?

No single solution exists to the risks. Our best hope is to muddle through by combining multiple methods that incrementally increase the chances of a good outcome.

It’s also extremely hard to know if what you’re doing makes things better rather than worse (and if you are confident, you’re probably not thinking carefully enough). We can only make reasonable judgements and update over time.

Here’s what I think is most needed right now:

  • Enough progress on the technical problem of AI control and alignment before we reach vastly more capable systems. This might involve using AI to increase the chance that the next generation of systems is safe and then trying to bootstrap from there. (See these example projects and recent work.)
  • Better governance to provide incentives for safety, containment of unsafe systems, reduced racing for dominance, and harnessing the long-term benefits of AI
  • Slowing (the extremely fast gains in) capabilities at the right moment, or redirecting capability gains in less dangerous directions (e.g. less agentic systems) would most likely be good, although this may be difficult to achieve in practice without other negative effects
  • Better monitoring of AI capabilities and compute so dangerous and explosive capabilities can be spotted early
  • Maintaining a rough balance of power between actors, countries, and models, while designing AI architectures to make it harder to use them to take power
  • Improved security of AI models so more powerful systems are not immediately stolen
  • More consideration for post-AGI issues such as the ethics of digital agents, benefit sharing, and space governance
  • Better management of downstream risks created by faster technological progress, especially engineered pandemics, but also nuclear war and great power conflict
  • More people who take all these issues seriously and have relevant expertise, especially among key decision makers (e.g. in government and in the frontier AI companies)
  • More strategic research and improved epistemic infrastructure (e.g. forecasting or better data) to clarify what actions to take in a murky and rapidly evolving situation

What can you do to help?

There are hundreds of jobs

There are now many organisations pursuing concrete projects tackling these priorities, with many open positions.

Getting one of these jobs is often not only the best way to have an impact relatively soon but also the best way to gain relevant career capital (skills, connections, credentials) too.

Most of these positions aren’t technical — there are many roles in management and organisation building, policy, communications, community building, and the social sciences.

The frontier AI companies have a lot of influence over the technology, so in some ways are an obvious place to go, but whether to work at them is a difficult question. Some think they should be absolutely avoided, while others think it’s important that some people concerned about the risks work at even the most reckless companies or that it’s good to boost the most responsible company.

All this said, there are also many things to do that don’t involve working at this list of organisations. We also need people working independently on communication (e.g. writing a useful newsletter, journalism), community building, academic research, founding new projects and so on, so also consider if any of these might work for you, especially after you’ve gained some experience in the field. And if you’ve thought of a new idea, please seriously consider pursuing it.

Mid-career advice

Especially if you already have some work experience (age 25+), the most direct route to helping is usually to:

  1. Spend 20–200 hours reading about AI, speaking to people in the field (and maybe doing short projects).
  2. Apply to impactful organisations that might be able to use your skills.
  3. Aim for the job with the best combination of (i) alignment with the org’s mission, (ii) team quality, (iii) centrality to the ecosystem, (iv) influence of the role, and (v) personal fit.

If that works, great. Try to excel in the role, then re-evaluate your position in 1–2 years — probably more opportunities will have opened up.

If you don’t immediately succeed in getting a good job, ask people in the field what you could do to best position yourself for the next 3–12 months, then do that.

Keep in mind that few people have much expertise in transformative AI right now, so it’s often possible to pull off big career changes pretty fast with a little retraining. (See the list of skills to consider learning below.)

Otherwise, figure out how to best contribute from your current path, for example, by donating, promoting clear thinking about the issue, mobilising others, or preparing to switch when new opportunities become available (which could very well happen given the pace of change!).

Our advisory team can help you plan your transition and make introductions. (Also see Successif and Halcyon, who specialise in supporting mid-career changes).

Early-career advice

If you’re right at the start of your career, you might be able to get an entry-level position or fellowship right away, so it’s often worth doing a round of applications using the same process as above (especially if technical).

However, in most cases, you’re also likely to need to spend at least 1–3 years gaining relevant work skills first.

Here are some of the best skills to learn, chosen to be both useful for contributing to the priorities listed earlier and to make you more generally employable, even in light of the next wave of AI automation. Focus on whichever you expect to most excel at.

Should you work on this issue?

Even given the uncertainty, AGI is the best candidate for the most transformative issue of our times. It’s also among the few challenges that could pose a material threat of human extinction or permanent disempowerment (in more than one way). And since it could relatively soon make many other ways of making a positive impact obsolete, it’s unusually urgent.

Yet only a few thousand people are working full time on navigating the risks — a tiny number compared to the millions working on conventional social issues, such as international development or climate change. So, even though it might feel like everyone’s talking about AI, you could still be one of under 10,000 people focusing full time on one of the most important transitions in history — especially if AGI arrives before 2030.

On the other hand, it’s an area where it’s especially hard to know whether your actions help or harm; AGI may not unfold soon, and you might be far better placed or motivated to work on something else.

Some other personal considerations for working in this field:

  • Pros: AI is one of the hottest topics in the world right now; it’s the most dynamic area of science with new discoveries made monthly, and many positions are either well paid or set you up for highly paid backup options.
  • Cons: It’s polarised — if you become prominent, you’ll be under the microscope, and many people will think what you’re doing is deeply wrong. Daily confrontation with existential stakes can be overwhelming.

Overall, I think if you’re able to do something to help (especially in scenarios where AGI arrives in under five years), then in expectation it’s probably the most impactful thing you can do. However, I don’t think everyone should work on it — you can support it in your spare time, or work on a different issue.

If you’re on the fence, consider trying to work on it for the next five years. Even if we don’t reach fully transformative systems, AI will be a big deal, and spending five years learning about it most likely won’t set you back: you can probably return to your previous path if needed.

How should you plan your career given AGI might arrive soon?

Given the urgency, should you drop everything to try to work on AI right away?

While AGI might arrive in the next 3–5 years, even if that happens, unusually impactful opportunities will likely continue for 1–10 years afterwards during the intelligence explosion and initial deployment of AI.

So you need to think about how to maximise your impact over that entire 4 to 15-year period rather than just the next couple of years. You should also be prepared for AGI not to happen and for there still to be valuable opportunities after 2040.

That means investing a year to make yourself 30% more productive or influential (relative to whatever else you would have done) is probably a good deal.

In particular, the most pivotal moments likely happen when systems powerful enough to lock in certain futures are first deployed. Your current priority should be positioning yourself (or helping others position themselves) optimally for that moment.

What might positioning yourself optimally for the next few years look like?

  • If you can already get a job at a relevant, aligned organisation, then simply trying to excel there is often the best path. You’ll learn a lot and gain connections, even aside from direct impact.
  • However, sometimes it can be useful to take a detour to build career capital, such as finishing college, doing an ML master’s, taking an entry-level policy position, or anything to gain the skills listed above.
  • Bear in mind if AI does indeed continue to rapidly progress, then you’re going to have far more leverage in the future, since you’ll be able to direct hundreds of digital workers at whatever’s most important. Think about how to set yourself up to best use these new AI tools as they’re developed.
  • If you don’t find anything directly relevant to AI with great fit, bear in mind it’s probably better to kick ass at something for two years than to be mediocre at something directly related for four since that will open up better opportunities.
  • Finally, look after yourself. The next 10 years might be a crazy time.

All else equal, people under 24 should typically focus more on career capital while people over 30 should focus more on using their existing skills to help right away, and those 25–30 could go either way, but for everyone it depends a lot on your specific opportunities.

If you’re still uncertain about what to do

  1. List potential roles you could aim at for the next 2–5 years.
  2. Put them into rough tiers of impact.
  3. Make a first pass at those with the best balance of impact and fit (you can probably achieve at least 10x more in a path that really suits you).
  4. Then think of cheap tests you can do to gain more information.
  5. Finally, make a guess, try it for 3–12 months, and re-evaluate.

If that doesn’t work, just do something for 6–18 months that puts you in a generally better position and/or has an impact. You don’t need a plan — you can proceed step by step.

Everyone should also make a backup plan and/or look for steps that also put you in a reasonable position if AGI doesn’t happen or takes much longer.

See our general advice on finding your fit, career planning, and decision making.

Next steps

If you want to help positively shape AGI, speak to our team one-on-one. If you’re a mid-career professional, they can help you leverage your existing skills. If you’re an early-career professional, they can help you build skills, and make introductions to mentors or funding. Also, take a look at our job board.

Get notified when we publish new articles in this series

We’ll email you when we publish new articles, updates on our views, and weekly job opportunities.

How quickly could robots scale up? https://80000hours.org/2025/01/how-quickly-could-robots-scale-up/ Tue, 21 Jan 2025 10:08:55 +0000 https://80000hours.org/?p=88643 The post How quickly could robots scale up? appeared first on 80,000 Hours.

This post was written by Benjamin Todd in his personal capacity and originally posted on benjamintodd.substack.com.

Today robots barely have the dexterity of a toddler, but are rapidly improving.

If their algorithms and hardware advance enough to handle many physical human jobs, how quickly could they become a major part of the workforce?

Here are some order-of-magnitude estimates showing it could happen pretty fast.

Robot cost of production

Today’s humanoid robots cost about $100,000,1 with perhaps 10,000 units produced annually. But manufacturing costs tend to plummet with scale:

For solar energy, every doubling of production was associated with a 20% decline in costs. In other industries, we see estimates ranging from 5-40%, so 20% seems a reasonable middle point.

Solar power cost over time

That means a 1000x increase in production (10 doublings) should decrease costs about 10x, to around $10,000/unit. That’s around the cost of manufacturing a car.

However, humanoid robots only use about 10% the materials of a car, so it’s plausible they could eventually become another 10x cheaper, or $1000 each.

Though it’s also possible the components for fine motor control remain far more difficult to manufacture. Let’s add 2x to account for that.
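
Here's a minimal sketch of that arithmetic (an application of Wright's law), assuming a 20% cost decline per doubling of cumulative production. The figures are illustrative, not precise:

```python
import math

def cost_after_scaleup(start_cost, production_multiple, learning_rate=0.20):
    """Unit cost after production scales by `production_multiple`,
    assuming a fixed fractional cost decline per doubling (Wright's law)."""
    doublings = math.log2(production_multiple)
    return start_cost * (1 - learning_rate) ** doublings

base = cost_after_scaleup(100_000, 1_000)  # 1000x scale-up is ~10 doublings
print(round(base))                         # ~10,800, i.e. roughly $10,000/unit

# A further ~10x saving from using ~10% of a car's materials,
# then a 2x premium for fine motor control components:
print(round(base / 10 * 2))                # ~2,200, i.e. roughly $2,000/unit
```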

Robot operating costs

If a robot costs $10,000 and lasts for three years working 24/7, the hardware costs $0.40 per hour.

At $2000 each, the hardware would only be 8c per hour.

There would also be maintenance costs. You could easily spend 10% of capital costs per year maintaining a car, and robots will be more complex and used more intensively. If we assume another 33% of capital costs per year for maintenance, that roughly doubles the hardware costs.

What about electricity? Tesla’s Optimus uses about 0.3 kW, and a kWh costs about $0.10 in the US, so an hour of use would cost about $0.03. (Future costs would depend on electricity prices, though they might come down due to greater efficiency.)

Initially, running the AI algorithms might be as high as $10/hour,2 but algorithmic efficiency improves ~3x per year, so within six years these costs would become negligible.

So it looks like the cost to run a humanoid robot will eventually be under $1/hour, and plausibly under $0.20/hour.

That’s 10x–100x less than a human worker in rich countries, so demand would be massive.
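
Putting those pieces together, here's a rough sketch of the per-hour arithmetic under the assumptions above (a 3-year life at 24/7 use, ~33% of capital cost per year for maintenance, ~0.3 kW at ~$0.10/kWh, and AI inference starting near $10/hour and falling ~3x per year in cost):

```python
HOURS_OF_LIFE = 3 * 365 * 24  # ~26,280 hours over a 3-year, 24/7 working life

def hourly_cost(capital_cost, years_of_algorithmic_progress=6):
    hardware = capital_cost / HOURS_OF_LIFE
    maintenance = capital_cost * 0.33 * 3 / HOURS_OF_LIFE  # roughly doubles hardware cost
    electricity = 0.3 * 0.10                               # kW draw * $/kWh
    inference = 10 / 3 ** years_of_algorithmic_progress    # negligible after ~6 years
    return hardware + maintenance + electricity + inference

print(f"${hourly_cost(10_000):.2f}/hour")  # ~$0.80/hour at $10,000 per robot
print(f"${hourly_cost(2_000):.2f}/hour")   # ~$0.20/hour at $2,000 per robot
```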

Humanoid robot costs

Robot demand

Billions of people do physical jobs today. Robots would eventually be cheaper and able to handle tasks too boring or dangerous for humans, so I think demand could quickly reach ~1 billion robots per year.

Even if humans remain an important bottleneck, it seems plausible there could eventually be multiple robots per person (perhaps mostly deployed in mining, construction and factory work), which might require production around 10 billion/year.

If AIs can direct robots autonomously, the numbers could continue growing from there.

I want to emphasise I’m considering a scenario in which robots can truly substitute for human jobs. If performance is instead janky, as seems likely initially, then demand will most likely be lower. (Though in many areas, weaker-than-human performance will be fine e.g. warehouse robots don’t require human dexterity.)

That said, just as self-driving cars need to be safer than human drivers, in some areas our institutions will demand greater-than-human performance. That could keep demand down for some years until the algorithms advance.

A massive backlash could also prohibit robots from being used in many applications.

Speed of robot scale up

During WW2, car companies switched to producing planes and tanks in a matter of years.

With massive economic incentives, car factories could be repurposed to produce robots.

World car production is about 90 million vehicles per year. If each car is 1,500 kg, that’s 135 billion kg of output per year.

Each robot is about 80 kg, so assuming 50% conversion efficiency, that would be enough industrial capacity to produce ~1 billion robots per year, perhaps in under five years.
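
Spelled out, the back-of-the-envelope check looks like this (all numbers as assumed above):

```python
car_output_kg = 90e6 * 1_500   # ~90 million cars/year at ~1,500 kg each = 135 billion kg
robot_mass_kg = 80
conversion_efficiency = 0.5    # assumed fraction of that capacity usable for robots

robots_per_year = car_output_kg * conversion_efficiency / robot_mass_kg
print(f"~{robots_per_year / 1e9:.1f} billion robots per year")  # ~0.8 billion
```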

If you know something about manufacturing, I’d be interested in more information on how feasible this would be. There might also be other input goods that become the bottleneck, such as a particular type of sensor.

After our existing industrial base is used, new factories would need to be built, which could take significantly longer.

However, Tesla can build gigafactories in about two years. And even many large companies have been able to grow output around 30% per year for a sustained period, so that seems like a lower bound.

If car factories aren’t or can’t be used, the scale-up would probably take a lot longer. Typically, large industries take decades to build. Tesla was able to grow car production more than two-fold per year in many years. But going from production of 10,000 to one billion units is 17 years of two-fold annual growth.
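
A quick check of that doubling arithmetic:

```python
import math

doublings_needed = math.log2(1e9 / 1e4)  # from 10,000 to 1 billion units per year
print(math.ceil(doublings_needed))       # 17 doublings, i.e. ~17 years at 2x per year
```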

However, the above assumes no speed up due to AI and robotics itself. If we have advanced robotics algorithms, then we probably have many other kinds of advanced AI that will be useful in managing factory construction. And once some humanoid robots have been built, you can use them to do 24/7 construction of further factories.

So I think we should expect the scale-up to be faster than anything seen historically. I’d guess a superintelligence-assisted scale-up could be 2–10 times faster than what’s been possible before.

Summing up

If robotics capabilities advance enough, we could see production scale to a billion robots within five years through converted car factories (though it could also take much longer). While today’s robots aren’t nearly capable enough, algorithmic progress could accelerate, getting us to that point faster than most expect.

You can subscribe to Benjamin’s newsletter for more posts like this.

It looks like there are some good funding opportunities in AI safety right now https://80000hours.org/2025/01/it-looks-like-there-are-some-good-funding-opportunities-in-ai-safety-right-now/ Fri, 10 Jan 2025 11:59:01 +0000 https://80000hours.org/?p=88545 The post It looks like there are some good funding opportunities in AI safety right now appeared first on 80,000 Hours.

This post was written by Benjamin Todd in his personal capacity and originally posted on benjamintodd.substack.com.

The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.

However, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed as a recommender in the most recent Survival and Flourishing Fund grant round.1

Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures, via Open Philanthropy.2 But they’ve recently stopped funding several categories of work (my own categories, not theirs):

In addition, they are currently not funding (or not fully funding):

  • Many non-US think tanks, who don’t want to appear influenced by an American organisation (there’s now probably more than 20 of these)
  • They do fund technical safety non-profits like FAR AI, though they’re probably underfunding this area, in part due to difficulty hiring for this area the last few years (though they’ve hired recently)
  • Political campaigns, since foundations can’t contribute to them
  • Organisations they’ve decided are below their funding bar for whatever reason (e.g. most agent foundations work). Open Philanthropy is not infallible so some of these might still be worth funding.
  • Nuclear security, since it’s on average less cost-effective than direct AI funding, so isn’t one of the official cause areas (though I wouldn’t be surprised if there were some good opportunities there)

This means many of the organisations in these categories have only been able to access a minority of the available philanthropic capital (in recent history, I’d guess ~25%). In the recent SFF grant round, I estimate they faced a funding bar 1.5 to 3 times higher.

This creates a lot of opportunities for other donors: if you’re into one of these categories, focus on finding gaps there.

In addition, even among organisations that can receive funding from Good Ventures, receiving what’s often 80% of funding from one donor is an extreme degree of centralisation. By helping to diversify the funding base, you can probably achieve an effectiveness somewhat above Good Ventures itself (which is kinda cool given they’re a foundation with 20+ extremely smart people figuring out where to donate).

Open Philanthropy (who advise Good Ventures on what grants to make) is also large and capacity constrained, which means it’s relatively easy for them to miss small, new organisations (<$250k), individual grants, or grants that require speed. So smaller donors can play a valuable role by acting as “angel donors” who identify promising new organisations, and then pass them on to OP to scale up.

In response to the attractive landscape, SFF allocated over $19m of grants, compared to an initial target of $5 – $15m. However, that wasn’t enough to fill all the gaps.

SFF published a list of the organisations that would have received more funding if they’d allocated another $5m or $10m. This list isn’t super reliable, because less effort was put into thinking about this margin, but it’s a source of ideas.

Some more concrete ideas that stand out to me as worth thinking about are as follows (in no particular order):

  • SecureBio is one of the best biorisk orgs, especially for the intersection of AI and biorisk. SFF gave $250k to the main org, but I would have been happy to see them get $1m.
  • If you’re a non-US person, consider funding AI governance non-profits in your locality e.g. CLTR is a leading UK think tank working on AI safety; CeSIA is trying to build the field in France, the Simon Institute is focused on the UN in Europe; and now many others. If you’re Chinese, there are interesting opportunities there that only Chinese citizens can donate to (you can email me).
  • Center for AI Safety and their political Action Fund. These are Dan Hendrycks’ organisations and have driven some of the bigger successes in AI policy and advise xAI. They’re not receiving money from OP. SFF gave $1.1m to CAIS and $1.6m to the action fund, but they could deploy more.
  • METR is perhaps the leading evals org and hasn’t received OP funding recently. They have funding in the short term but their compute budget is growing very rapidly.
  • Apollo Research has a budget in the millions but only received $250k from SFF. It’s the leading European evals group and did important recent work on o1.
  • Lightcone. LessWrong seems to have been cost-effective at movement building, and the Lightcone conference space also seems useful, though it’s more sensitive to your assessment of the value of Bay Area rationality community building. It’s facing a major funding shortfall.
  • MATS Research, Tarbell and Sam Hammond’s project within FAI could all use additional funds to host more fellows in their AI fellowship programmes. MATS has a strong track record (while the others are new). There’s probably diminishing returns to adding more fellows, but it still seems like a reasonable use of funding.
  • If you’re into high school outreach, Non-Trivial has a $1m funding gap.
  • Further topping up the Manifund regranter programme or The AI Risk Mitigation fund (which specialise in smaller, often individual grants).

I’m not making a blanket recommendation to fund these organisations, but they seem worthy of consideration, and also hopefully illustrate a rough lower bound for what you could do with $10m of marginal funds. With some work, you can probably find stuff that’s even better.

I’m pretty uncertain how this situation is going to evolve. I’ve heard there are some new donors starting to make larger grants (e.g. Jed McCaleb’s Navigation Fund). And as AI Safety becomes more mainstream I expect more donors to enter. Probably the most pressing gaps will be better covered in a couple of years. If that’s true, that means giving now could be an especially impactful choice.

In the future, there may also be opportunities to invest large amounts of capital in scalable AI alignment efforts, so it’s possible future opportunities will be even better. But there are concrete reasons to believe there are good opportunities around right now.

If you’re interested in these opportunities:

  • Those planning to give away $250k/year or more can reach out to Open Philanthropy, which regularly recommends grants to donors other than Good Ventures (donoradvisory@openphilanthropy.org).
  • Longview provides philanthropic advice in this area, and also has a fund.
  • Otherwise, you can reach out to some of the orgs I’ve mentioned to ask for more information, and ask around about them to make sure you’re aware of critiques.
  • If you just want a quick place to donate, pick one of these recommendations by Open Philanthropy staff or the Longview Fund.

You can subscribe to Benjamin’s newsletter for more posts like this.

The most interesting startup idea I’ve seen recently: AI for epistemics https://80000hours.org/2024/05/project-idea-ai-for-epistemics/ Sun, 19 May 2024 13:59:21 +0000 https://80000hours.org/?p=86222 The post The most interesting startup idea I’ve seen recently: AI for epistemics appeared first on 80,000 Hours.

This was originally posted on benjamintodd.substack.com.

If transformative AI might come soon and you want to help that go well, one strategy you might adopt is building something useful that will improve as AI gets more capable.

That way if AI accelerates, your ability to help accelerates too.

Here’s an example: organisations that use AI to improve epistemics — our ability to know what’s true — and make better decisions on that basis.

This was the most interesting impact-oriented entrepreneurial idea I came across when I visited the San Francisco Bay area in February. (Thank you to Carl Shulman who first suggested it.)

Navigating the deployment of AI is going to involve successfully making many crazy hard judgement calls, such as “what’s the probability this system isn’t aligned” and “what might the economic effects of deployment be?”

Some of these judgement calls will need to be made under a lot of time pressure — especially if we’re seeing 100 years of technological progress in under 5.

Being able to make these kinds of decisions a little bit better could therefore be worth a huge amount. And that’s true given almost any future scenario.

Better decision-making can also potentially help with all other cause areas, which is why 80,000 Hours recommends it as a cause area independent from AI.

So the idea is to set up organisations that use AI to improve forecasting and decision-making in ways that can be eventually applied to these kinds of questions.

In the short term, you can apply these systems to conventional problems, potentially in the for-profit sector, like finance. We seem to be just approaching the point where AI systems might be able to help (e.g. a recent paper found GPT-4 was pretty good at forecasting if fine-tuned). Starting here allows you to gain scale, credibility and resources.

A forecasting bot made by the AI company FutureSearch is making profit on the forecasting platform Manifold. The y-axis shows profit. This suggests it’s better even than the collective predictions of existing human forecasters.

But unlike what a purely profit-motivated entrepreneur would do, you can also try to design your tools so that in an AI crunch moment, they’re able to help.

For example, you could develop a free-to-use version for political leaders, so that if a huge decision about AI regulation suddenly needs to be made, they’re already using the tool for other questions.

There are already a handful of projects in this space, but it could eventually be a huge area, so it still seems like very early days.

These projects could have many forms:

  • One example of a concrete proposal is using AI to make better forecasts or otherwise improve truthfinding in important domains. On the more qualitative side, we could imagine an AI “decision coach” or consultant that aims to augment human decision-making. Any techniques to make it easier to extract the truth from AI systems could also count, such as relevant kinds of interpretability research and the AI debate or weak-to-strong generalisation approaches to AI alignment.
  • I could imagine projects in this area starting in many ways, including at a research service within a hedge fund, in a research group within an AI company (e.g. focused on optimising systems for truth-telling and accuracy), at an AI-enabled consultancy (trying to undercut the Big 3), or as a non-profit focused on policy-making.
  • Most likely, you’d try to fine-tune and build scaffolding around existing leading LLMs, though there are also proposals to build LLMs from the bottom up for forecasting. For example, you could create an LLM that only has data up to 2023, and then train it to predict what happens in 2024.
  • There’s a trade-off to be managed between maintaining independence and trustworthiness, vs. having access to leading models and decision-makers in AI company and making money.
  • Some ideas could also advance frontier capabilities, so you’d want to think carefully about how to avoid that. You might stick to ideas that differentially boost more safety-enhancing aspects of the technology. Otherwise, you should be confident any contribution a project makes to general capabilities is outweighed by other benefits. (This is a controversial topic with a lot of disagreement, so be careful to seek out and consider the best counterarguments to your conclusion.) To be a bit more concrete: finding ways to tell when existing frontier models are telling the truth seems less risky than developing new kinds of frontier models that are optimised for forecasting.
  • You’ll need to try to develop an approach that won’t be made obsolete by the next generation of leading models but can instead benefit from further progress at the cutting edge.

I don’t have a fleshed-out proposal; this post is more an invitation to explore the space.

The ideal founding team would cover the bases of: (i) forecasting / decision-making expertise (ii) AI expertise (iii) product and entrepreneurial skills and (iv) knowledge of an initial user-type. Though bear in mind that if you have a gap in one of these areas now, you could probably fill it within a year.

If you already see an angle on this idea, it could be best just to try it on a small scale and then iterate from there.

If not, then my normal advice would be to get started by joining an existing project in the same or an adjacent area (e.g. a forecasting organisation, an AI applications company) that will expose you to ideas and people with relevant skills. Then keep your eyes out for a more concrete problem you could solve. The best startup ideas usually emerge organically over time in promising areas.

Existing projects:

Learn more:

If you’re interested in this idea, I suggest talking to the 80,000 Hours team:

APPLY TO SPEAK WITH OUR TEAM

I’m writing about AI, doing good and using research to have a nicer life. Subscribe to my Substack to get all my posts.

The post The most interesting startup idea I’ve seen recently: AI for epistemics appeared first on 80,000 Hours.

Communicating ideas https://80000hours.org/skills/communication/ Mon, 18 Sep 2023 09:24:12 +0000 https://80000hours.org/?post_type=skill_set&p=83641 The post Communicating ideas appeared first on 80,000 Hours.

Many of the highest-impact people in history have been communicators and advocates of one kind or another.

Take Rosa Parks, who in 1955 refused to give up her seat to a white man on a bus, sparking a protest which led to a Supreme Court ruling that segregated buses were unconstitutional. Parks was a seamstress in her day job, but in her spare time she was involved with the civil rights movement. When Parks sat down on that bus, she wasn’t acting completely spontaneously: just a few months before she’d been attending workshops on effective communication and civil disobedience, and the resulting boycott was carefully planned by Parks and the local NAACP. After she was arrested, they used widely distributed fliers to launch a total boycott of buses in a city with 40,000 African Americans, while simultaneously pushing forward with legal action. This led to major progress for civil rights.

There are many ways to communicate ideas. One is social advocacy, like Rosa Parks. Another is more like being an individual public intellectual, who can either specialise in a mass audience (like Carl Sagan), or a particular niche (like Paul Farmer, a medical anthropologist who wrote about global health). Or you can learn skills in marketing and public relations and then work as part of a team or organisation to spread important ideas.

In a nutshell: Communicating ideas can be a way for a small group of people to have a large effect on a problem. By building up skills for communicating ideas, you could end up in a role that inspires many people to do far more good than you could ever have done by yourself.

Key facts on fit

This is a very broad skill set, so it’s hard to say in general. If you find it easy to actually finish communicative work (like writing or making videos) and/or you have good social skills, those are signs you’ll be a good fit. It also helps if you feel like you’ll be motivated by people seeing the work you produce.

Why are communication skills valuable?

In the 20th century, smallpox killed around 400 million people — far more than died in all the century’s wars and political famines.

Although credit for the elimination of smallpox often goes to D.A. Henderson (who directly oversaw the programme), it was Viktor Zhdanov who lobbied the World Health Organization to start the elimination campaign in the first place — while facing significant opposition from the members of the World Health Assembly (the proposal passed by just two votes). Without his communication skills, smallpox’s elimination probably would not have happened until much later, costing millions of lives, and might not have happened at all.

Viktor Zhdanov
Viktor Zhdanov lobbied the WHO to start the smallpox eradication campaign, bringing eradication forward by many years.

So why has communicating important ideas sometimes been so effective?

First, communicating ideas is a way to have an impact on a large scale. Ideas can spread quickly, so communicating ideas is a way for a small group of people to have a large effect on a problem. Ideas can also stick around once they’re out there, meaning your impact persists.

If you can mobilise two people to support an issue, that’s potentially twice as impactful as working on it yourself.

Technology has magnified these effects even further. More than ever before, normal people can launch a social movement, lobby a government, start a campaign that influences public opinion, or just persuade their friends to take up a cause. When successful, these efforts can have a lasting impact on a problem that goes far beyond what the communicators could have achieved directly.

Second, spreading ideas that are important for society in a concerted, strategic way is neglected. This is because there’s usually no commercial incentive to spread socially important ideas. Moreover, the ideas that are most impactful to spread are those that aren’t yet widely accepted. Standing up to the status quo is uncomfortable, and it can take decades for opinion to shift. This means there’s also little personal incentive to stand up for them.

Third, communicating ideas is an area where the most successful efforts do far more than the typical efforts. The most successful communicators influence millions of people, while others might struggle to persuade more than a few friends. This means that it’s a high-risk strategy in the sense that your efforts might very well come to nothing. But it’s also high reward, and if you’re an especially good fit for communicating ideas, it might well be the best thing you can do. (Read about why we think more people should dream big if they want to do good.)

We think there are many high-leverage opportunities to use communications skills to help address the global problems we’re focused on today.

The problems we highlight are unusually neglected, so often few people work on them or even know they’re problems. This means that simply telling people about these problems (and effective solutions to them) can be high impact by increasing the number of talented people who might want to help. (Indeed, that’s part of our own strategy for impact!)

More specifically, communicators can help do things like:

Spreading important ideas like those above might not only have immediate benefits in terms of getting more people to work on these issues — it also helps to advance society’s understanding of these ideas, moving the discourse forward, making important ideas more mainstream, and eventually shaping policy and social norms.

You can see more information on the best solutions to the global problems we focus on in our problem profiles.

Another advantage of learning these skills is that they can be applied to almost any pressing problem. Almost all organisations have some need for marketing, public relations, and other external communications, and almost all problem areas have ideas that would be useful to spread. This gives you a lot of future flexibility.

Moreover, although some versions of this skill set are mainly useful in the social sector and for having an impact (e.g. how to run a direct action campaign), there are skills in this area that are highly paid and make you generally employable, such as marketing, sales, or public relations. Similarly, building an audience as an individual communicator often opens up a wide range of future career opportunities within your audience. So, learning these skills can give you backup options if you decide to step back from doing good for a while or earn to give.

A word of warning: it seems fairly easy to accidentally do harm if you promote mistaken ideas, promote good ideas in a way that turns people off (e.g. by being sensationalistic or dishonest), or draw people’s attention away from even more important issues. So, be careful about communicating ideas without much input from others, and, if you’re building communication skills, you may also need to build especially good judgement about which ideas to communicate and how to best communicate them.

What does building communication skills typically involve?

Content creation skills

One path we recommend to readers is to become a content creator. This often includes:

Less often among our readers it might involve:

You’ll want to focus on the medium that’s the best fit for you, with the goal of building the most valuable audience you can for spreading important ideas.

Content creation careers often involve the following steps:

  1. Honing your craft. Typically, a content creation career starts with learning your medium and then learning how to communicate effectively with a certain target audience (usually starting small, like with Twitter or a blog).

    Being really prolific helps a lot. If you’re able to make loads of different videos, or write 100 articles to pitch to various media outlets, that will substantially increase your chances of success. So if you’re blogging once a month and it’s not working out, see if there’s a way you could write a lot more.

  2. Building an audience. If you’re working in a large organisation — for example, as a journalist — the idea is to build career capital so you can move somewhere that has a large audience.

    If you’re pursuing a career where you work more individually — for example, as a social media influencer or an author — you’ll need to build an audience yourself. To do this, create lots of material, so that your audience (and with it your future potential impact) keeps growing.
    You can probably jump around between working in large organisations and working individually — focus on finding opportunities where you’ll learn the most.

    In this stage, you shouldn’t necessarily be focusing on impact right away, but rather anything that builds your reach and credibility. Lots of digital platforms provide high-quality data that you can use to get rapid feedback on your content — so you can, for example, A/B test strategies.

    Bear in mind, the goal is not just to reach the largest number of people possible — it can be more impactful to have a niche but influential audience. You want to aim to build the biggest impact-adjusted audience you can.

    Credibility also often requires expertise, so you might also want to use this time to build that expertise by learning about the ideas you think are most important. (One great way of doing that — while practising your content creation skills — is learning by writing.)

  3. Promoting the most important ideas. Once you have an audience, you can increasingly focus on figuring out how to use it to have the most impact. This usually involves thinking carefully about which ideas are (i) important (i.e. impactful if people know and act on them), (ii) neglected (i.e. not well known by your target audience already), and (iii) relevant or interesting to your audience, so that they’re more likely to be inspired to help with them.

The specific skills, qualifications, and approaches you’ll need to build will depend on the audience you’re trying to influence. If you’re aiming to communicate ideas to ~100 policymakers who specialise in a certain topic (like Viktor Zhdanov), the strategies you’ll use will be very different from someone aiming to communicate to the population in general (like Rosa Parks).

Some example approaches:

  • Subject matter expert: trying to become known for being the point person on a particular topic — works best for more technical or niche audiences
  • Translation: taking expert positions and making them accessible to a larger audience (e.g. science journalists, nonfiction authors) — sometimes works best for niche audiences (such as when translating technical research for policymakers) and other times works best for wider audiences
  • Mass-media presenter: speaking to a large, mainstream audience (e.g. TV personalities, many journalists) — works best for creating mass buy-in for ideas

We’ve worked with some readers who have succeeded as individual creators, but it’s important to bear in mind many of these options are seen as glamorous, which makes them competitive.

For instance, a recent poll found that the most desired career path among Gen Z is YouTuber, and yet less than 1% of YouTube channels have over 100,000 subscribers.

If you enter one of the more competitive areas, like film, the competitive pressure can often mean you have to spend a large fraction of your career creating the most commercially viable and popular content rather than focusing on the most important ideas.

While we’ve worked with several readers who have become journalists, these other paths are often seen as glamorous careers, which makes them very competitive — so we typically recommend them less often.

However, if you think you might be able to succeed at getting to the top of one of these paths (and especially if you’re already on track), it’s often worth continuing. After getting established, it’s often possible to then devote, say, 20% of your time to projects that you think are socially valuable. You’ll also likely gain connections with many others who have large audiences, helping you spread important ideas indirectly.

Organisational communication skills

Another option is to learn skills like the following, and then work as part of an organisation or team who are spreading important ideas:

  • Marketing
  • Public relations
  • Sales and negotiation
  • Social advocacy and campaigning
  • Visual design
  • Copywriting and editing
  • TV/film/radio production
  • Publishing

The structure of these careers is similar to those focused on organisation-building skills, so see that profile for more specific advice on getting started and evaluating your fit. If you’re focusing on a niche audience of policymakers, then this skill set also blurs into the “policy influencer” roles covered under policy and political skills.

Briefly, you’ll want to start by working with a team who are outstanding at these kinds of skills.

That might involve joining a team that’s already working on an important problem, but it’s more common to first work at an organisation that doesn’t have much positive impact but can offer you mentorship and feedback. For example, you could learn digital marketing by working at a top startup or agency.

Once you have skills to offer, two options include:

  • Find a job with a team who are spreading important ideas. This could look like working at an advocacy nonprofit, joining a political campaign, or being head of public relations for an author.
  • Join an impactful organisation and work on their communications, public relations, or marketing strategies.

Communicating ideas alongside another job

Some jobs make communicating ideas their central focus, such as those we listed right above.

But it’s also possible to learn to spread ideas well in any job by:

  • Being a sensible advocate for good ideas in conversation and refining your views over time
  • Engaging with and recommending articles, books, podcasts, and the like to family, friends, colleagues, and others in your circles
  • Posting ideas and articles on social media

You can also communicate ideas as a side project. For example:

  • Run a podcast, blog, or Twitter feed with a significant following.
  • If you’re an academic, do media appearances or write books aimed at a popular audience part time (i.e. be a ‘public intellectual’).
  • Run a meetup, like an effective altruism group, and create materials for it (e.g. talks).

It’s possible to build skills for communicating ideas while you’re in a normal, stable job which gives you space to pursue projects like these on the side (although, if you want this to become your core skill set, we’d generally recommend eventually making building these skills your primary career focus, which can be hard to do if it’s a side project).

The careers that put you in the best position to spread important ideas (and learn to do so effectively) are those that let you:

  • Build a platform (e.g. anything that makes you well known in your field)
  • Get influential connections (e.g. working in government or policy)
  • Gain credibility (e.g. being a respected academic)

Being super successful at anything that’s slightly public facing (for example, roles in academic research, or in government, or founding a business) can also put you in a good position to spread important ideas, even if communicating ideas isn’t a core part of the role. If Ariana Grande came to us for career advice, we wouldn’t recommend she quit music and become an AI safety researcher. Rather, we’d discuss how she might use her platform to spread important ideas that might appeal to her fans.

We haven’t worked with Ariana, but we have worked with an Olympic tennis player, Marcus Daniell. He decided to use his position — and especially his connections — to set up High Impact Athletes, which encourages professional athletes to pledge a fraction of any prize money they win to high-impact charities.

Did Bono make a difference?
Ultimately, Bono might have made up for the negative impact of his singing voice by becoming an advocate for the global poor.

Communication also doesn’t need to be through nonfiction. For example, Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality popularised ideas about the importance of agency and how common biases affect our ability to make good decisions.

Community building

Communication careers are defined by their focus on spreading ideas on a big scale, but it’s also possible to have a similar impact on a more person-to-person level as a community builder.

Some community building involves running events and organising others — similar to organisation-building roles. But at its core is the specific skill of building connections with others.

Community building often works well as a part-time position. For instance, Kuhan was a student at Stanford when they came across 80,000 Hours, and realised the importance of reducing existential risks. However, they also saw there were no organisations on campus focusing on that idea. So they founded the Stanford Existential Risk Initiative, which runs courses and conferences about the topic to build a community of students aiming to work on these risks.

Example people

How to evaluate your fit

How to predict your fit in advance

Some signs that you’re a good fit for building skills for communicating ideas include:

  • You find it relatively easy to develop content in some medium. For example, you might find it very easy to write — whether that’s marketing copy or academic reports or popular articles. Similarly, you might find it fairly easy to make videos. Bear in mind that almost everyone finds writing and other creative work difficult. If you’ve found in your life that you can do this for a few hours a day and actually finish some work, you’re doing well.
  • People tend to think you communicate clearly in that medium.
  • You consume lots of content in your medium — for example, if you want to be a writer, you often spend all day reading blogs or articles.
  • You are verbally fluent and have good social skills — but there are many exceptions. For example, someone can be nerdy and awkward but make an amazing blogger.
  • You might need some basic quantitative skills — at least enough to be able to understand data about your work.
  • You feel like you’ll be motivated by people seeing the work you’ve produced.

If you’re doing something like public relations in an organisation, then the advice in our organisation-building skill profile may also be applicable.

How to tell if you’re on track

Once you’ve started exploring communicating ideas, you’ll want to ask yourself: “How generally successful am I by the standards of the communication track I’m on?”

For instance, if you’re trying to become a journalist, are you on track to land a job after several years of trying?

Check our career reviews to see if we have a career profile covering the specific pathway you’re interested in. (Though we regret we haven’t yet written profiles on many of the common media careers.)

If you’re focusing on content creation work, some good signs that you’re on the right track are:

  • You’re producing lots of content.
  • You get good feedback on your content, relative to people who have spent a similar amount of time on it (don’t forget that most public communicators have honed their craft for years, often long before they were famous).
  • You find it easy to connect with your target audience (through at least one medium) and convince at least some of them of new ideas.
  • You’re starting to build a following or career capital that might lead to a following in the future.

It’s hard to generalise about what levels of following are ‘good’ at different stages. Here are some extremely rough guides for what might be promising after 2–4 years for different media:

  • You’re often able to get 100,000 views per video on YouTube or 100,000 likes per video on TikTok.
  • You have a podcast with over 1,000 subscribers, and a typical episode you release gets 3,000 downloads (though podcasts are especially hard to launch if you don’t already have an audience).
  • As a blogger, you have a newsletter or Substack with over 5,000 subscribers.
  • You have 10,000 followers on Twitter.
  • If you’re aiming to get published in mainstream media outlets, you have had content in more than two major publications (e.g. The Guardian, Vox).

As a reminder: you don’t necessarily need to be writing about important issues at the early stages — what matters is that you will bring in more of these issues in the future.

How to get started building communication skills

You can start building a communication skill set by studying anything — or doing any job — that will let you practise writing, public speaking, or creating any other type of content.

If communication isn’t part of your main work responsibilities, you can practise with independent work on the side, such as blogging, tweeting, podcasting, making videos or TikToks, or doing media appearances. It can even be possible to write a book alongside another job. (Though for anyone doing independent public work, make sure you avoid publishing something unintentionally offensive, as this could affect your career prospects for a long time, even if the offence is the result of a misunderstanding.)

Having a portfolio of content can help you if you want to get into most communications roles, including ones at large organisations (like marketing or PR).

Content creation skills

For aspiring writers, we recommend getting into the habit of writing regularly — ideally every day (even if it’s only a few hundred words) — and posting your writing publicly on Facebook, Twitter, or a blog.

For spoken content, you should practise in any ways you can — for example, give presentations in your professional area, join your local Toastmasters group, make video blogs, or start a podcast.

Whatever your chosen medium or platform, try to create something regularly, and then actively try to learn from what you’ve done — think carefully about measurable goals you might want to achieve, and see whether and why you meet them.

What content should you produce?

Content that’s great can achieve far more reach and impact than content that’s merely good. People tend to produce much higher quality content when they’re naturally interested in a topic and working in a medium they genuinely like.

So we’d encourage you to look at examples of successful content, or find people doing what you want to do, and then pay attention to where your intrinsic motivation leads you, rather than just focusing on strategically selecting the ‘best’ topic or media type.

It can be worth doing some strategic thinking — for example, you might look at how the recommender algorithms work on various platforms and what kinds of content they are more likely to boost.

Which medium should you choose?

It may take some time to find the medium that’s the best fit for you. Someone might love long-form blog posts but hate Twitter; others find their niche in video, media appearances, and public talks. Experiment with different media to find the one that comes most naturally and is most motivating.

That said, as a secondary consideration, it can make sense to focus on media that are new and rapidly growing (it’s much easier to gain followers on new social media platforms than established ones) or are especially good for reaching a certain audience (e.g. HackerNews for the tech industry) and that fit your message (e.g. books and podcasts are better for complex ideas).

Finding your audience

To get started, you might ask yourself: “What’s a type of person that I understand and communicate well with, better than most people wanting to make a difference do?” If you’re a student, this might be fellow students. Or it could be others in your industry (e.g. biologists, policymakers). Or it could be a mass audience, like educated Americans. You might also pay attention to why it might be valuable to reach a certain audience.

Once you’re clearer on who your target audience is, your main aim should probably be to build your general ability to communicate with that audience. You might want to try to get any job that involves communicating with your chosen audience and allows you to get feedback on a regular basis — whether or not you’re producing content on topics directly related to pressing global problems.

If you’re interested in communicating with fairly general/widespread audiences, most jobs in journalism, and many in public relations and corporate communications, would be useful. If you’re focused on a more niche audience (e.g. AI scientists), then you might want to work somewhere where you can meet lots of people in that audience.

Once you’ve developed your skills and audience, then it’s time to focus more on having an impact, which we cover in the next section.

Organisational communication skills

The structure of these careers is similar to ones focused on organisation-building skills — you can get started by finding any role that will let you start learning one of these skills, like any role in marketing, editing, public relations, lobbying, visual design, or campaigning.

For communications roles at organisations, it can help to spend some time getting good at presenting yourself, for example by building a personal website with nice copy and good presentation. This lets you practise your skills as well as having something to show off to potential employers.

For more — including which organisations you should work for — take a look at how to get started building organisation-building skills.

Get funding

If you’d like to pursue this type of career, there is sometimes funding available. Some sources to consider include:

  • The Effective Altruism Infrastructure Fund sometimes makes small grants that could help you transition into these types of careers. For instance, if you’d like to test out making YouTube videos about one of our recommended problems full time for three months, you could ask for $10,000; or if you’re interested in working in journalism but can’t earn enough money right away, you could ask for a salary top-up. They’re also interested in helping cover the costs of internships or graduate school.
  • Open Philanthropy is interested in funding marketing related to effective altruism.

Apply for free one-on-one advising

Want more individualised advice before diving in? There’s a lot more to be said about:

  • How to find the communication career that’s the best fit for you
  • What strategy to take for getting started in communication careers
  • How to best use your following if you already have one

Get in touch with our one-on-one team, and we may also be able to introduce you to people in these paths.

APPLY TO SPEAK WITH OUR TEAM

Find jobs that use communication skills

Filter our job board by ‘outreach’ to find jobs in this category:

    View all opportunities

    Once you have these skills, how can you best apply them to have an impact?

    Once you have the skills and an audience, the question becomes which messages to focus on to have the biggest impact.

    Some messages are more important to spread than others, but some messages are also easier to spread. You need to consider both factors, because their effects multiply together.

    Moreover, you need to customise the analysis for your audience. The messages that are important and likely to spread among Ariana Grande fans are totally different from those likely to spread among philosophy academics.

    Some key factors for comparing messages include the following (which is an adapted version of our problem framework):

    1. Important — if this idea spread among your audience, how much impact would result?
    2. Neglected — how widely known is this idea by your audience already? How much is it already discussed by other creators in your space?
    3. Is it of interest to your audience? Or otherwise possible to get attention for given your platform? This makes it more likely to spread.
    4. Is it personally interesting and motivating for you to work on?

    The aim is to find messages or topics that do best on the multiple of all four factors.
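    To illustrate what “the multiple of all four factors” means in practice, here’s a toy sketch in Python. The messages and their 1–5 ratings are entirely made up for illustration; the point is just that because the factors multiply, a very low rating on any one of them sinks a message’s overall score, however strong the others are.

```python
# Toy illustration only: the messages and 1-5 ratings are hypothetical.
from math import prod

candidate_messages = {
    # message: (important, neglected, audience interest, personal motivation)
    "A pressing but little-known problem your audience could act on": (5, 4, 3, 4),
    "A topic already covered by every other creator in your space": (4, 1, 5, 3),
}

# Because the score is a product, the second message's low 'neglected' rating
# drags its total far below the first, despite scoring higher on audience interest.
ranked = sorted(candidate_messages.items(), key=lambda kv: prod(kv[1]), reverse=True)
for message, factors in ranked:
    print(f"{prod(factors):>4}  {message}")
```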

    Here’s a process you could go through to generate ideas:

    1. Make a list of the global problems you think are most pressing.
    2. Generate ideas for messages and ideas that could, if spread more widely among your audience, enable more progress on these problems. This could be calls to get more people working on these problems, information about the best solutions to them, or messages to help decision makers understand these issues better. To do this, explore the resources in our problem profiles and then speak to experts in the area about what would be helpful.
    3. Think about which messages could be most of interest to your audience or a good fit for your platform.
    4. Experiment with spreading those that seem most promising. It might take some trial and error to find an idea and framing that resonates with your audience. In particular, before taking on a big project like a book or documentary, try to test it out in a smaller version.

    We listed a couple of examples of ideas we’d like to see spread above.

    In practice, you’ll likely want to continue to publish a mixture of content that builds your audience or pays the bills and content that you think is especially impactful.

    Career paths we’ve reviewed that use these skills

    Learn more

    Our articles and podcasts:

    See all our articles and episodes on advocacy careers

    Some of the best resources we’ve found about individual communication:

    Read next:  Explore other useful skills

    Want to learn more about the most useful skills for solving global problems, according to our research? See our list.

    Plus, join our newsletter and we’ll mail you a free book

    Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

    The post Communicating ideas appeared first on 80,000 Hours.

    Experience with an emerging power (especially China) https://80000hours.org/skills/emerging-power/ Tue, 31 Oct 2023 12:07:12 +0000 https://80000hours.org/?post_type=skill_set&p=84320 The post Experience with an emerging power (especially China) appeared first on 80,000 Hours.

    China will likely play an especially influential role in determining the outcome of many of the biggest challenges of the next century. India also seems very likely to be important over the next few decades, and many other non-western countries — for example, Russia — are also major players on the world stage.

    A lack of understanding and coordination between all these countries and the West means we might not tackle those challenges as well as we can (and need to).

    So it’s going to be very valuable to have more people gaining real experience with emerging powers, especially China, and then specialising in the intersection of emerging powers and pressing global problems.

    In a nutshell: Many ways of solving the world’s most pressing problems will require international coordination. You could help with this by building specific experience of the culture, language, and policies of China or another emerging power. Once you have that expertise, you could consider working at an AI lab, in a think tank or government, or in research roles.

    Key facts on fit

    You’ll need fantastic cross-cultural communication skills (and probably a knack for learning languages), a keen interest in international relations, strong networking abilities, and excellent judgement to be a good fit.

    Why is experience with an emerging power (especially China) valuable?

    China in particular plays a crucial role in many of the major global problems we highlight. For instance:

    • The Chinese government’s spending on artificial intelligence research and development is estimated to be on the same order of magnitude as that of the US government.1
    • As the largest trading partner of North Korea, China plays an important role in reducing the chance of conflict, especially nuclear conflict, on the Korean peninsula.
    • China is the largest emitter of CO2, accounting for 30% of the global total.2
    • China recently became the largest consumer of factory-farmed meat.3
    • China is one of the most important nuclear and military powers.
    • As home to nearly 20% of the world’s population,4 it will play a central role in mitigating pandemics.
    • China is increasingly a leader in developing new technologies; Beijing is widely seen as a serious competitor to Silicon Valley5 and is the majority source of non-US ‘unicorns.’6

    As a result, it’s difficult to understand the scale and urgency of these pressing problems without understanding the situation in China. What’s more, it’ll be difficult to solve them without coordination between Western groups and their Chinese equivalents.

    At the same time, China is not well understood in the West.

    Interest in China has grown in the last decade, but it still lags behind many other countries. For instance, in American colleges and universities, the number of students studying French is three times the number studying Chinese,7 even though the cultural gap to bridge is far larger for China.

    All this suggests that having experience with China could be an extremely useful skill for improving collaboration between China and the West on many of the world’s most pressing problems, avoiding potentially dangerous conflicts and arms-race-like dynamics, and improving the actions and policy of governments and institutions in both China and the West.

    Of course, a similar argument could be made for gaining expertise in other powerful nations, for example: India, Brazil, or Russia.

    However, we see Russia as likely to be less important than China because it has a weaker technology industry, so isn’t nearly as likely to play a leading role in AI or biotech development. It has a much smaller economy and population in general and hasn’t been growing at anywhere near the rate of China, so seems less likely to be a central global power in the future. Also, as a result of the Russia-Ukraine war, most Western citizens should probably avoid travelling to Russia.

    For some similar reasons, India and Brazil also seem less likely to play a leading role in shaping new technologies than China. The existence of many English speakers in India also means there are more people able to fill the coordination gap already, reducing the need for additional specialists.

    Given this, we’ve spent most of our time researching China. As a result, we focus less on other emerging powers in this article, and most of our specific examples focus on China.

    However, we do think that gaining experience with these other countries is likely to be valuable and is currently under-explored, especially given how important they could become in the next few decades. In fact, if you’re at the beginning of your career, it may even be valuable to think about which countries are most likely to be particularly influential in a few decades and focus on gaining expertise there. Becoming an expert in any emerging global power could be a very high-impact option and could be the best option for some people.

    Safety when spending time abroad

    Visiting some of these countries can be dangerous, and that danger can change depending on fast-moving events.

    We’d always recommend reading up on your government’s travel advice for the country you’re planning to visit. Don’t travel if your government recommends against it (for example, as of September 2023, the UK and US governments recommend against travel to Russia).

    The UK government’s foreign travel advice website is a helpful resource.

    What does building and using experience with an emerging power involve?

    Building this skill set involves working in roles that will give you real opportunities to learn about an emerging power, especially in the context of trying to solve particularly pressing problems.

    Ideally, you’ll pick one emerging power, and try to gain experience specifically in and about that country. This might include working in policy, as a foreign journalist, in some parts of the private sector, in philanthropy, in academic research, or in any number of other roles from which you’ll learn about an emerging power (some of which we discuss in more detail below).

    These roles overlap with ways you might build and use other impactful skills, like research or communicating ideas. That’s because, in order to have an impact with your experience of an emerging power, you’ll usually need to use other skills as well: for example, you might be doing research on AI safety in China (using research skills), developing or implementing US foreign policy (using policy and political skills), or writing as a journalist in India (using communication skills).

    The distinguishing feature of this skill is that you’ll build deep cultural knowledge, a broad network, and real expertise about an emerging power, which will open up unique and high-impact ways to contribute.

    Working with foreign organisations on any topic requires an awareness of their culture, history, and current affairs, as well as good intuitions about how each side will react to different messages and proposals. This involves understanding issues like:

    • What are attitudes like, in the emerging power you’re learning about, around doing good and social impact?
    • If you wanted to make connections with people in the emerging power interested in working on major global challenges, what messages should you focus on, and what pitfalls might you face? How does professional networking function in the emerging power in general?

    We expect that fully understanding these topics will require deep familiarity with the country’s values, worldviews, history, customs, and so on — noting, of course, that these also vary substantially within large countries like China, India, and Russia.

    Eventually, you’ll move from building the skill to a position where you can use this experience to help solve pressing global problems. To use this skill best, you might also need to combine it with knowledge of a relevant subject — some of which we discuss here. We discuss some ways to have an impact with this skill in the final section below.

    Example people

    How to evaluate your fit

    How to predict your fit in advance

    This is likely to be a great option for you if you are from one of these countries, if you have spent a substantial amount of time there, or if you’re really obsessively interested in a particular country. This is because the best paths to impact likely require deep understanding of the relevant cultures and institutions, as well as language fluency (e.g. at the level where you might be able to write a newspaper article about biotechnology in the language).

    If you’re not sure, you could study in one of these countries for a month, or do some other kind of short visit or project, to see how interesting you find it. (Although recent tension between the US and China could mean that spending significant time in China could exclude you from certain government positions in the US or other countries — many of which could be very high-impact career options — so this is a risk.)

    Other signs you might be a great fit:

    • Bilingualism or other cross-cultural communication skills. Experience living abroad or working in teams with highly diverse backgrounds could help build this.
    • Strong networking abilities and social skills.
    • Excellent judgement and prudence. This is important because there’s a real possibility of accidentally causing harm when interacting with emerging powers.

    We think it’s also important that you’re interested in trying to help all people equally and identifying the most effective ways to help, aiming to have well-calibrated judgements that are justified with evidence and reason. We’ve found these attitudes are quite rare, especially in foreign policy, which is often focused on national interest.

    How to tell if you’re on track

    Only a few people we know have ever tried really gaining this skill, so we’re not quite sure what success looks like.

    It’s worth asking “how strong is my performance in my job?” for whatever you are doing to build this skill. Don’t just ask yourself — you’ll get the best information by talking to the people you work with or the people who you think are excellent at understanding the emerging power you’re focusing on.

    Hopefully, after 1–2 years, you will have:

    • Started building a strong network in the country you’re learning about
    • Learned something substantially important and impressive, like knowing a language to (almost) fluency
    • Found a fairly stable job relevant to the emerging power you’re learning about where you’re rapidly able to learn more (like one of the things we list below)
    • Built up knowledge of a global problem that you can combine with your experience of the emerging power to have an impact later

    How to get started building experience with an emerging power

    Broadly, the aim is to get a useful combination of the following as quickly as possible:

    1. Knowledge of the intersection of an emerging power and an important global problem, such as the topics listed below
    2. Knowledge of and connections with the community working on the pressing global problems you want to help tackle
    3. A general understanding of the language and culture of an emerging power, which probably requires spending at least a year living in the country. (Though again, having a background in China or Russia — and possibly even just visiting — could exclude you from some Western government jobs.)

    Below is a list of specific career steps you can take to gain the above knowledge. Most people should pursue a combination depending on their existing expertise and personal fit.

    For many people, the best option at the start of your career won’t be any of the steps in this section. Instead, you could take a step towards building a different skill that you’ll use in conjunction with experience of an emerging power — even if that initial step has absolutely nothing to do with an emerging power.

    This option has significant flexibility, since it would be easy to switch into another career if you decide not to focus on an emerging power.

    To learn more, we’d particularly highlight our articles on how to get started building:

    We’d guess that these are the most relevant skills to combine with experience of an emerging power, but we’re not sure — for more, see all our articles on skills.

    But if you’re ready to start building this skill in particular, here are some ways to do it.

    Go to the country and learn the language

    If you’re a fluent English speaker, it takes around six months of full-time study to learn a Western European language. For other languages — like Chinese — this time might be more like 18 months.8 (Learning to write Chinese can take much longer and isn’t clearly worth it.)

    You can learn most effectively by living in the country and aiming to speak the language 100% of the time.

    We’ve written about learning Chinese in China in more detail.

    We’re not sure how valuable it would be to learn other languages common in emerging powers, like Hindi, Russian, or Portuguese. In general, it’ll depend on the ease of learning the language and the prevalence of English in the country you’re focusing on — especially among decision makers.

    Teaching English in an emerging power

    What’s the easiest job for someone smart but lazy? The top answer to this question on Quora claims that it’s teaching English in China.

    The huge demand for English teachers means that this option is open to most native-English-speaking college graduates. These positions typically pay $15,000–$30,000 per year, include accommodation, and might only require four hours of work per day. For instance, a typical one-year programme offered by First Leap pays $2,100–2,800 per month, with benefits including work visa sponsorship, a flight to China, and a settling-in allowance of up to $1,500. Another programme, Teach in China, offers $900–1,800 per month, but also provides rent-free housing and can be pursued for just one semester. This is more than enough to live on in a small Chinese city. You can earn even more if you do private tutoring as well, although the Chinese government is currently clamping down on the private tutoring industry.

    It’s harder to get paid positions teaching in India without previous teaching experience.

    This option won’t give you skills and connections as useful as the other options on this list, but you will be able to learn about a culture and study a language at the same time. However, doing this through a prestigious fellowship — such as the Fulbright English Teaching Assistant Programme — could mitigate this downside.

    Build connections with people working on top problems

    If you are a citizen of an emerging power, then we’d guess the best first step would be to get involved in the community of people working on the world’s most pressing problems and ideally volunteer or intern with some organisations working on these risks, like those on our list of recommended organisations.

    If you have connections and trust with other altruistically-minded people, you can help them learn about China and help coordinate their efforts.

    With that in mind, we’d also recommend getting involved with the effective altruism community, where there are lots of people working on the kinds of global problems that this skill is relevant for.

    Work in top companies or a foreign office of a top Western company

    Working at any high-performance company — such as a top startup — is generally a great initial step for building career capital. And if that company is based in an emerging power, you’ll get to learn about the country at the same time. For example, you could look at startups that have been funded by top Chinese venture capitalists, such as HongShan Capital, IDG Capital, and Hillhouse Capital. One VC even told us that they’d provide job recommendations if asked, as they often know which of their companies are best-performing. Read more about startup jobs.

    You don’t need a technical background to work at a startup: there are often roles available in areas like product management, business development, operations, and marketing.

    In general, the aim would be to learn about an emerging power, gain useful experience, and make relevant connections — rather than push any particular agenda or otherwise try to have an impact right away.

    Another advantage of this option is that you could follow it into earning to give. In some countries (like China), charities, research, and scholarships can often only be funded by citizens of that country, which could make earning to give a more attractive option if you are a citizen.

    You could also aim to work at an office of a top Western consultancy, finance firm, or professional services firm in the country you’re learning about. This offers many of the standard benefits of this path — namely a prestigious credential, flexibility, and general professional development — while also letting you learn about an emerging power. We’ve heard some claims that your career might advance faster if you start in London or New York, but this advantage seems to be shrinking due to the increasing opportunities and importance of emerging powers. However, the accessibility of these jobs can be precarious and highly dependent on your nationality — for example, China is increasingly cracking down on foreign consultancies. Another consideration is that salaries are generally lower in emerging powers, even at international firms (with the exception of Hong Kong).

    Do relevant graduate study

    Which subjects?

    If you want to work on issues around future technology, then it might be better to study something like synthetic biology or machine learning, and then increase your focus on an emerging power later.

    Alternatively, you could start studying economics, international relations, and security studies, with a focus on a particular emerging power. Ideally, you could also focus on issues like emerging technologies, conflict, and international coordination. See ideas for high-impact research within China studies.

    It’s also useful to have a general knowledge of the language, history, and politics of the emerging power you’re studying. So another way to get started might be to pursue area or language studies (one source of support available for US students is the Foreign Language and Area Studies Fellowships Program), perhaps alongside one of the topics listed above.

    All of these subjects are useful, so we’d recommend putting significant weight on personal fit in choosing between them. Some will also better keep your options open, such as economics and machine learning. See our general advice on choosing graduate programmes.

    Should you study in the country you’re gaining experience with?

    Once you’ve chosen a programme that’s a good fit, we think it’s generally best to aim to go to the highest-ranked university possible — whether that’s in the West or the country you’re studying — rather than specifically aiming to study in a foreign country. It’s probably more useful to gain an impressive credential than spend time living in the country since there are many other ways to do that.

    An alternative is to look for a joint programme, such as — in the case of China — the dual degree offered by the Johns Hopkins School of Advanced International Studies and the Department of International Relations at Tsinghua University. Johns Hopkins is highly ranked for policy master’s degrees, so this course combines a good credential with the opportunity to study in China.

    You might also consider the Schwarzman Scholars programme — a one-year, fully-funded master’s programme at Tsinghua University in Beijing. Approximately 20% of all US students studying in China are on this programme.

    If you don’t yet have many connections with the effective altruism community and want to get involved, then you could also use graduate study as an opportunity to gain these connections by being based in one of the main hubs, including the San Francisco Bay Area, London, Oxford, Cambridge, and Boston.

    If you’re a Chinese citizen interested in studying in the West, you might want to consider that:

    Work as a foreign journalist

    If you’re proficient in a foreign language, you could try becoming a foreign correspondent in the country you’re gaining experience with. It could help if you have a related degree from a top university (e.g. China studies or international relations with a focus on East Asia).

    English-language news agencies such as Reuters, the Associated Press, Agence France-Presse, and Bloomberg maintain large bureaus across the world (including in Beijing, Shanghai, and Hong Kong) and often hire younger journalists.

    Most major international publications such as The New York Times, The Wall Street Journal, The Washington Post, and The Financial Times also have a small but significant presence in many major world cities where you can apply for internships. A fresh graduate should expect to intern for about six months before finding a full-time position.

    If you’re focused on China and coming from the West, it is often easier to find work at China-based English-language publications where you can do original journalism, such as the South China Morning Post (which has a graduate scheme), Caixin Media, or Sixth Tone. We do not recommend working for Chinese state media, as there will be few opportunities to create original content and most work will likely be polishing articles translated from English.

    We also don’t recommend directly writing about effective altruism in China because we think it’s particularly easy to cause harm.

    Work in philanthropy in an emerging power

    If you’re interested in doing good in an emerging power, it helps to understand attitudes about doing good in that country. One way to do that is to learn about philanthropy. You could also aim to make connections with philanthropists in an emerging power — this comes with the added benefit of building a network of (often wealthy) do-gooders.

    One career option here is to work at research institutions dedicated to the topic of philanthropy. For example, in China, these include:

    You could also find a list of other philanthropy research centres from the Global Chinese Philanthropy Initiative.

    There are also Western foundations that work in emerging powers. The Berggruen Institute, Ford Foundation, and Gates Foundation all work in China.

    To explore this, you could attend relevant conferences. For instance, if you’re a social entrepreneur interested in China, you could attend a Nexus Global Youth Summit in the region. It’s a network that brings together young philanthropists and social entrepreneurs. If you would like to learn more about the latest developments in Chinese philanthropy, you could attend the International Symposium on Global Chinese Philanthropy by the Global Chinese Philanthropy Initiative, and the Chinese and Chinese American Philanthropy Summit by Asia Society in Hong Kong.

    Before pursuing these options, it might be useful to first learn about best practices in Western philanthropy, perhaps by taking any role (even a junior one) at Open Philanthropy, GiveWell, or other strategic philanthropy organisations.

    What other knowledge should you gain to have an impact?

    We think the most pressing global problems often relate to global catastrophic risks and emerging technology — though there are many other important issues you could work on, like factory farming.

    Once you’ve chosen a particular emerging power, you can gain expertise in the following topics. These are all vital issues to understand in the West as well, but the intersection of these issues with China (and other emerging powers) is particularly neglected.

    AI safety and strategy

    Safely managing the development of transformative AI may require unprecedented international coordination, and it won’t be possible to achieve this without an understanding of global emerging powers and coordination with organisations in these countries. This means understanding issues like:

    • What is the state of AI development in the emerging power you’re learning about?9
    • What attitudes do technical experts in the emerging power have towards AI safety and their social responsibility? Who is most influential?
    • How does the government of the emerging power shape its technology policy? What attitudes does it have towards AI safety and regulation in particular?
    • What actions are likely to be taken by the government and companies in the emerging power concerning AI safety?

    (Read more about AI strategy and policy, and about China-related AI safety and governance paths.)

    Biorisk

    Global coordination is also necessary to reduce biorisk. This means understanding issues like:

    • What is the state of synthetic biology research in the emerging power you’re learning about?10
    • What attitudes do biology researchers in the emerging power have towards safety and social responsibility?
    • How does government technology policy in the emerging power relate to the risks from this technology?

    International coordination and foreign policy

    Expertise on any of the following issues (among others) could be highly useful:

    • How, when, and why does the emerging power you’re learning about provide public goods globally?
    • If you’re focusing on China, what do its foreign NGO laws and domestic charity laws mean for its international collaboration on global causes?
    • What are the emerging power’s foreign policy priorities, and how is it likely to handle the possibility of global catastrophic risks?
    • How can coordination between the West and the emerging power you’re focusing on be increased and the chance of conflict be decreased?
    • How should Western government policy concerning catastrophic risks relate to policy in the emerging power?

    Other global problems

    Many of the key organisations working to reduce factory farming are expanding rapidly into China, India, and Brazil, so expertise in factory farming combined with knowledge of these countries is also useful.

    Knowledge of China seems less important within global health and development than in many of the other global problems we focus on. This is because China is not as important a player in international aid and global health. It also seems easier to find people who are already experts on the intersection of China and development policy than on the topics listed above. We’d guess that knowledge of India would be more relevant to global health and development.

    Once you have this skill, how can you best apply it to have an impact?

    In general, having an impact with this skill involves three steps — not necessarily in this order:

    1. Choosing 1–3 top problems to focus on. It’s possible you’ll want to do something highly problem-specific (like doing AI research in an emerging power), but it’s also possible you’ll want to do something more broadly applicable (like working as a journalist). Either way, the problem you work on is a substantial driver of your impact, so it helps to have 1–3 top problems in mind.
    2. Building a complementary skill, such as research, communicating ideas, organisation-building, or policy and political skills. Most ways of having an impact are going to involve applying your experience with an emerging power using one of these other skills.
    3. Finding a job that uses your complementary skill in a way that’s highly relevant to the emerging power you have experience with. Decide between jobs based on your personal fit. If you can’t find one of those jobs, try to get a job that continues building your skills. For example, there might be a great policy job available that has nothing to do with emerging powers — and you can always switch back later in your career.

    With that in mind, we’d recommend reading the relevant article for your complementary skill — these articles also contain ideas on having an impact using that skill. Depending on your personal fit, those ideas could be higher impact than the specific suggestions in this article.

    Also, many of the options in the section above on how to get started could easily become impactful as you gain experience.

    Below we list some additional options that are harder to enter without first spending a few years building up your skills.

    Work in an AI lab in safety or policy

    If you’re a citizen of an emerging power, especially China, you could try working for an AI lab in that country. The lab could be commercial or academic.

    You could try to get a role working in technical safety research, and, in the long run, you could aim to progress to a senior position and promote increased interest in and implementation of AI safety measures internally.

    You could also try working as a governance or policy advisor at a top AI lab — this could be a lab based in the emerging power, or a role at a Western AI lab focused on emerging power dynamics.

    It’s possible that other roles in labs could be good for building AI-related career capital — but many such roles could be harmful. (For more, read our career review of working at leading AI labs.)

    To learn more, read our career review of China-related AI safety and governance paths.

    Work at a think tank

    You could work at a Western think tank, studying issues specifically relevant to pressing problems in the emerging power you’re focusing on. Some think tanks focus more on the most relevant topics than others. For instance, the Center for Security and Emerging Technology, the Center for a New American Security, the Centre for the Governance of AI, the Brookings Institution, and the Carnegie Endowment for International Peace all seem relevant to issues related to existential risks. (There are doubtless others we’re not aware of.) One risk is that it can be much more difficult to work on China-Western coordination if you’ve had a job at a think tank that’s generally seen as particularly anti-China.

    Beyond that, it could also be useful to work at organisations focused on international coordination and foreign policy, such as the US-China Relations Independent Task Force of the Council on Foreign Relations or the Kissinger Institute on China and the United States. Another option is to work at a joint partnership institution, such as the Carnegie-Tsinghua Center for Global Policy, for example by applying to its Young Ambassadors Program in Beijing.

    Unfortunately, it’s difficult to enter roles in Chinese think tanks if you’re not a Chinese citizen, and this may also be the case in other emerging powers (we’re not sure).

    If you are a Chinese citizen, you could aim to work in a top Chinese think tank. You could look to work at a think tank doing AI-related work or look more broadly at think tanks such as the Chinese Academy of Social Sciences and the China Institutes of Contemporary International Relations.

    You can read more about think tank roles in our separate career profile.

    Work in roles focused on an emerging power at organisations that aim to reduce existential risks

    Many key organisations working on existential risks want to better understand China to inform their work. For instance, representatives of many AI risk research organisations we recommend have attended conferences in China.

    These organisations struggle to find altruistically motivated people with deep knowledge of top problems as well as knowledge of China. They also struggle to find people connected to relevant Chinese experts. So you could use this skill set to aid organisations working on existential risks.

    Academic research in an emerging power

    Academic research could be a very high-impact career path, especially when the research is focused on a top problem, like biorisk research or technical AI safety research.

    If you want that research to have an impact, your role as an academic could become closer to advocacy, using a communication skill set. For example, you could work on AI safety at a top Chinese university lab, which could be valuable both for making progress on technical safety problems and for encouraging interest in AI safety among other Chinese researchers — especially if you progress to take on teaching or supervisory responsibilities. (Read more.)

    Other options

    Advising parts of international organisations focused on AI, such as the UN Secretary-General’s High-level Panel on Digital Cooperation or the OECD’s AI Policy Observatory, could provide opportunities for impact.

    In industry, it could be worth exploring opportunities in semiconductor or cloud computing companies in emerging powers, especially in China. This is based on our view that shaping the AI hardware landscape could be a high-impact career path.

    You might also consider supporting the translation of materials related to pressing problems into the language of the emerging power, in particular reputable academic materials — although be aware that this can be easy to get wrong.

    Finally, there are likely many other promising opportunities to apply this skill now and in the future that we don’t know about. After all, a notable thing about this skill is that it involves gaining knowledge that Western organisations — like 80,000 Hours — lack by default. So if you go down this route you may well discover novel opportunities to use it.

    Find jobs that use experience with an emerging power

    If you think you might be a good fit for this skill and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities. You could filter by policy or location to find relevant roles.

      View all opportunities

      Career paths we’ve reviewed that use this skill

      Learn more about building experience with an emerging power

      Top recommendations

      Further recommendations

      Read next:  Explore other useful skills

      Want to learn more about the most useful skills for solving global problems, according to our research? See our list.

      Plus, join our newsletter and we’ll mail you a free book

      Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

      The post Experience with an emerging power (especially China) appeared first on 80,000 Hours.

      ]]>
      Organisation-building https://80000hours.org/skills/organisation-building/ Mon, 18 Sep 2023 10:39:52 +0000 https://80000hours.org/?post_type=skill_set&p=83652 The post Organisation-building appeared first on 80,000 Hours.

      ]]>
      When most people think of careers that “do good,” the first thing they think of is working at a charity.

      The thing is, lots of jobs at charities just aren’t that impactful.

      Some charities focus on programmes that don’t work, like Scared Straight, which actually caused kids to commit more crimes. Others focus on ways of helping that, while thoughtful and helpful, don’t have much leverage, like knitting individual sweaters for penguins affected by oil spills (this actually happened!) instead of funding large-scale ocean cleanup projects.

      A penguin wearing a knitted sweater
      While this penguin certainly looks all warm and cosy, we’d guess that knitting each sweater one-by-one wouldn’t be the best use of an organisation’s time.

      But there are also many organisations out there — both for-profit and nonprofit — focused on pressing problems, implementing effective and scalable solutions, run by great teams, and in need of people.

      If you can build skills that are useful for helping an organisation like this, it could well be one of the highest-impact things you can do.

      In particular, organisations often need generalists able to do the bread and butter of building an organisation — hiring people, management, administration, communications, running software systems, crafting strategy, fundraising, and so on.

      We call these ‘organisation-building’ skills. They can be high impact because you can increase the scale and effectiveness of the organisation you’re working at, while also gaining skills that can be applied to a wide range of global problems in the future (and make you generally employable too).

      In a nutshell: Organisation-building skills — basically, skills that let you effectively and efficiently build, run, and generally boost an organisation you work for — can be extremely high impact if you use them to support an organisation working on an effective solution to a pressing problem. There are a wide variety of organisation-building skills, including operations, management, accounting, recruiting, communications, law, and so on. You could choose to become a generalist across several or specialise in just one.

      Key facts on fit

      In general, signs you’ll be a great fit include: you often spot ways to do things better, strongly dislike errors, notice recurring issues and think carefully about how to fix them, manage your time well and can plan complex projects, pick up new things quickly, and pay close attention to detail. That said, there’s a very wide range of roles, each with quite different requirements, especially the more specialised ones.

      Why are organisation-building skills valuable?

      A well-run organisation can take tens, hundreds, or even thousands of people working on solving the world’s most pressing problems and help them work together far more effectively.

      An employee with the right skills can often be a significant boost to an organisation, either by directly helping them deliver an impactful programme or by building the capacity of the organisation so that it can operate at a greater scale in the future. You could, for example, set up organisational infrastructure to enable the hiring of many more people in the future.

      What’s more, organisation-building skills can be applied at most organisations, which means you’ll have opportunities to help tackle many different global problems in the future. You’ll also have the flexibility to work on many different solutions to any given problem if you find better ones later in your career.

      As an added bonus, the fact that pretty much all organisations need these skills means you’ll stay employable if you decide to earn to give or step back from doing good altogether. In fact, organisational management skills seem to be among the most useful and highest-paid in the economy in general.

      It can be even more valuable to help found a new organisation rather than build an existing one, though this is a particularly difficult step to take when you’re early in your career. (Read more on whether you should found an organisation early in your career.) See our profile on founding impactful organisations to learn more.

      What does organisation-building typically involve?

      A high-impact career using organisation-building skills typically involves these rough stages:

      1. Building generally useful organisational skills, such as operations, people management, fundraising, administration, software systems, finance, etc.
      2. Then applying those skills to help build (or found) high-impact organisations

      The day-to-day of an organisation-building role is going to vary a lot depending on the job.

      Here’s a possible description that could help build some intuition.

      Picture yourself working from an office or, increasingly, from your own home. You’ll spend lots of time on your computer — you might be planning, organising tasks, updating project timelines, reworking a legal brief, or contracting out some marketing. You’ll likely spend some time communicating via email or chatting with colleagues. Your day will probably involve a lot of problem-solving and decision-making to keep things moving.

      If you work for a small organisation, especially in the early stages, your “office” could be anywhere — a home office, a local coffee shop, or a shared workspace. If you manage people, you’ll conduct one-on-one meetings to provide feedback, set goals, and discuss personal development. In a project-oriented role, you might spend lots of time developing strategy, or analysing data to evaluate your impact.

      What skills are needed to build organisations?

      Organisation builders typically have skills in areas like:

      • Operations management
      • Project management (including setting objectives, metrics, etc.)
      • People management and coaching (Some manager jobs require specialised skills, but some just require general management-associated skills like leadership, interpersonal communication, and conflict resolution.)
      • Executive leadership (setting and achieving organisation-wide goals, making top-level decisions about budgeting, etc.)
      • Entrepreneurship
      • Recruiting
      • Fundraising
      • Marketing (which also benefits from communications skills)
      • Communications and public relations (which also benefits from communications skills)
      • Human resources
      • Office management
      • Events management
      • Assistant and administrative work
      • Finance and accounting
      • Corporate and nonprofit law

      Many organisations have a significant need for generalists who span several of these areas. If your aim is to take a leadership position, it’s useful to have a shallow knowledge of several.

      You can also pick just one skill to specialise in — especially for areas like law and accounting that tend to be their own track.

      Generally, larger organisations have a greater need for specialists, while those with under 50 employees hire more generalists.

      Example people

      How to evaluate your fit

      How to predict your fit in advance

      There’s no need to focus on the specific job or sector you work in now — it’s possible to enter organisation-building from a very wide variety of areas. We’ve even known academic philosophers who have transitioned to organisation-building!

      Some common initial indicators of fit might include:

      • You have an optimisation mindset. You frequently notice how things could be done more efficiently and have a strong internal drive to prevent avoidable errors and make things run more smoothly.
      • You intuitively engage in systems thinking and enjoy going meta. This is a bit difficult to summarise, but involves things like: you’d notice when people ask you similar questions multiple times and then think about how to prevent the issue from coming up again. For example: “Can you give me access to this doc” turns into “What went wrong such that this person didn’t already have access to everything they need? How can we improve naming conventions or sharing conventions in the future?”
      • You’re reliable, self-directed, able to manage your time well, and you can create efficient and productive plans and keep track of complex projects.
      • You might also be good at learning quickly and have high attention to detail.

      Of course, different types of organisation-building will require different skills. For example, being a COO or events manager requires stronger social and systems-building skills, whereas working in finance requires fewer social skills but does demand basic quantitative skills and perhaps more conscientiousness and attention to detail.

      If you’re really excited by a particular novel idea and have lots of energy for pursuing it, you might be a good fit for founding an organisation. (Read more about what it takes to successfully found a new organisation.)

      You should try doing some cheap tests first — these might include talking to someone who works at the organisation you’re interested in helping to build, volunteering to do a short project, or doing an internship. Then you might commit to working there for 2–24 months (being prepared to switch to something else if you don’t think you’re on track).

      How to tell if you’re on track

      All of these — individually or together — seem like good signs of being on track to build really useful organisation-building skills:

      • You get job offers (as a contractor or staff) at organisations you’d like to work for.
      • You’re promoted within your first two years.
      • You receive excellent performance reviews.
      • You’re asked to take on progressively more responsibility over time.
      • Your manager / colleagues suggest you might take on more senior roles in the future.
      • You ask your superiors for their honest assessment of your fit and they are positive (e.g. they tell you you’re in the top 10% of people they can imagine doing your role).
      • You’re able to multiply a superior’s time by 2–20x, depending on the type of role.
      • If you’re aiming to found a new organisation, you write one-page summaries of ideas for organisations you’d like to exist and get positive feedback on them from grantmakers and experts.
      • If founding a new organisation, you get seed funding from a major grantmaker, like Open Philanthropy, Longview Philanthropy, EA Funds, or a private donor.

      This said, if you don’t hit these milestones, you might still be a good fit for organisation-building — the issue might be that you’re at the wrong organisation or have the wrong boss.

      How to get started building organisation-building skills

      You can get started by finding any role that will let you start learning one of the skills listed above. Work in one specialisation will often give you exposure to the others, and it’s often possible to move between them.

      If you can do this at a high-performing organisation that’s also having a big impact right away, that’s great. If you’re aware of any organisations like these, it’s worth applying just in case.

      But, unfortunately, this is often not possible, especially if you’re fresh out of college, for a number of reasons:

      • The organisations have limited mentorship capacity, so they most often hire people with a couple of years of experience rather than those fresh out of college (though there are exceptions), and they often aren’t in a good position to help you become excellent at these skills.
      • These organisations usually hire people who already have some expertise in the problem area they’re working on (e.g. AI safety, biosecurity), as these issues involve specialised knowledge.
      • We chose our recommended problems in large part because they’re unusually neglected. But the fact that they’re neglected also means there aren’t many open positions or training programmes.

      As a result, early in your career it can easily be worth pursuing roles at organisations that don’t have much impact in order to build your skills.

      The way to do this is to work at any organisation that’s generally high-performing, especially if you can work under someone who’s a good manager and will mentor you — the best way to learn how to run an organisation is to learn from people who are already excellent at this skill.

      Then, try to advance as quickly as you can within that organisation, or move to higher-responsibility roles in other organisations after 1–3 years of high performance.

      It can also help if the organisation is small but rapidly growing, since that usually makes it much easier to get promoted — and if the organisation succeeds in a big way, that will give you a lot of options in the future.

      In a small organisation you can also try out a wider range of roles, helping you figure out which aspects of organisation-building are the best fit for you and giving you the broad background that’s useful for leadership roles in the future. Moreover, many of the organisations we think are doing the best work on the most pressing problems are startups, so being used to this kind of environment can be an advantage.

      One option within this category we especially recommend is to consider becoming an early employee at a tech startup.

      If you pick well, working at a tech startup gives you many of the advantages of working at a small, growing, high-performing organisation mentioned above, while also offering high salaries and an introduction to the technology sector. (This is even better if you can find an organisation that will let you learn about artificial intelligence or synthetic biology.)

      We’ve advised many people who have developed organisation-building skills in startups and then switched to nonprofit work (or earned to give), while having good backup options.

      That said, smaller organisations have downsides, such as being more likely to fail and having less mentorship capacity. Many are also poorly run. So it’s important to pick carefully.

      Another option to consider in this category is working at a leading AI lab, because they can often offer good training, look impressive on your CV, and let you learn about AI. That said, you’ll need to think carefully about whether your work could be accelerating the risks from AI as well.

      One of the most common ways to build these skills is to work in large tech companies, consulting, or professional services (or, more indirectly, to train as a lawyer or in finance). These are most useful for learning how to apply these skills in very large corporate and government organisations, or for building a speciality like accounting. We think there are often more direct ways to do useful work on the problems we think are most pressing, but these prestigious corporate jobs can still be the best option for some.

      However, it’s important to remember you can build organisation-building skills in any kind of organisation: from nonprofits to academic research institutes to government agencies to giant corporations. What most matters is that you’re working with people who have this skill, who are able to train you.

      Should you found your own organisation early in your career?

      For a few people, founding an organisation fairly early in your career could be a fantastic career step. Whether or not the organisation you start succeeds, along the way you could gain strong organisation-building (and other) skills and a lot of career capital.

      We think you should be ambitious when deciding career steps, and it often makes sense to pursue high-upside options first when you’re doing some career exploration.

      This is particularly true if you:

      • Have an idea that you’ve seriously thought about, stress-tested, and received positive feedback on from relevant experts
      • Have real energy and excitement for your idea (not for the idea of being an entrepreneur)
      • Understand that you’re likely to fail, and have good backup plans in place for that

      It can be hard to figure out if your idea is any good, or if you’ll be any good at this, in advance. One rule of thumb is that if, after six months to a year of work, you can be accepted to a top incubator (like Y Combinator), you’re probably on track. But if you can’t get into a top incubator, you should consider trying to build organisation-building skills in a different way (or try building a completely different skill set).

      There are many downsides of working on your own projects. In particular, you’ll get less direct feedback and mentorship, and your efforts will be spread thinly across many different types of tasks and skills, making it harder to develop specialist expertise.

      To learn more, see our article on founding new projects tackling top problems.

      Find jobs that use organisation-building skills

      See our curated list of job opportunities for this path, which you can filter by ‘management’ and ‘operations’ to find opportunities in this category (though there will also be jobs outside those filters where you can apply organisation-building skills).

        View all opportunities

        Once you have these skills, how can you best apply them to have an impact?

        The problem you work on is probably the biggest driver of your impact, so the first step is to decide which problems you think are most pressing.

        Once you’ve done that, the next step is to identify the highest-potential organisations working on your top problems.

        In particular, look for organisations that:

        1. Implement an effective solution, or one that has a good chance of having a big impact (even if it might not work)
        2. Have the potential to grow
        3. Are run by a great team
        4. Are in need of your skills

        These organisations will most often be nonprofits, but they could also be research institutes, political organisations, or for-profit companies with a social mission.1

        For specific ideas, see our list of recommended organisations. You can also find longer lists of suggestions within each of our problem profiles.

        Finally, see if you can get a job at one of these organisations that effectively uses your specific skills. If you can’t, that’s also fine — you can apply your skills elsewhere, for example through earning to give, and be ready to switch into working for a high-impact organisation in the future.

        Career paths we’ve reviewed that use organisation-building skills

        These are some reviews of career paths we’ve written that use ‘organisation-building’ skills:

        Read next:  Explore other useful skills

        Want to learn more about the most useful skills for solving global problems, according to our research? See our list.

        Plus, join our newsletter and we’ll mail you a free book

        Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

        The post Organisation-building appeared first on 80,000 Hours.

        ]]>