Reading list for understanding AI and how it could be dangerous
By the 80,000 Hours team | Thu, 08 May 2025
https://80000hours.org/2025/05/reading-list-for-understanding-ai-and-how-it-could-be-dangerous/

Want to get up to speed on the state of AI development and the risks it poses? Our site provides an overview of key topics in this area, but obviously there’s a lot more to learn.

We recommend starting with the following blog posts and research papers. (Note: we don’t necessarily agree with all the claims the authors make, but still think they’re great resources.)

Key blog posts

Scaling up: how increasing inputs has made artificial intelligence more capable by Veronika Samborska at Our World in Data

The article concisely explains how AI has gotten better in recent years primarily by scaling up existing systems rather than by making more fundamental scientific advances.

How we could stumble into AI catastrophe by Holden Karnofsky on Cold Takes

Holden Karnofsky makes the case that if transformative AI is developed relatively soon, it could result in global catastrophe.

AI could defeat all of us combined by Holden Karnofsky on Cold Takes

Read this to understand why it’s plausible that AI systems could pose a threat to humanity if they were powerful enough and doing so would further their goals.

Machines of loving grace — How AI could transform the world for the better by Anthropic CEO Dario Amodei

It’s important to understand why there’s enthusiasm for building powerful AI systems, despite the risks. This post from an AI company CEO paints a positive vision for powerful AI.

Computing power and the governance of AI by Lennart Heim et al. at the Centre for the Governance of AI

Experts in AI policy argue that governing computational power could be a key intervention for reducing risks, though it also raises risks of its own.

Why AI alignment could be hard with modern deep learning by Ajeya Cotra on Cold Takes

This piece explains why existing AI techniques may make it hard to create powerful AI systems that remain under human control over the long term.

The most important graph in AI right now: time horizon by Benjamin Todd

How would we know if AI is really on track to make big changes in society? Benjamin Todd argues that the length of tasks AI can do is the most important metric to look at.

Key research papers

Preparing for the intelligence explosion by William MacAskill and Fin Moorhouse at Forethought Research

These authors argue that an intelligence explosion will compress a century of technological progress into a decade, creating numerous grand challenges beyond just AI alignment that humanity must prepare for now.

Can scaling continue to 2030? by Jaime Sevilla et al. at Epoch AI

Available data suggests AI companies can continue scaling their systems through 2030, primarily facing constraints in power availability and chip manufacturing capacity.

Is power-seeking AI an existential risk? by Joe Carlsmith

This is one of the central papers putting together the argument that extremely powerful AI systems could pose an existential threat to humanity.

Scheming AIs: Will AIs fake alignment during training in order to get power? by Joe Carlsmith

Here’s an in-depth argument that it may be hard to create AI systems without incentivising them to deceive us.

AI 2027 by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean

This speculative scenario explains how superhuman AI might be developed and deployed in the near future.

Gradual disempowerment by Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud

Even if we avoid the risks of power-seeking and scheming AIs, there may be other ways AI systems could disempower humanity.

AI tools for existential security by Lizka Vaintrob and Owen Cotton-Barratt

While AI systems may pose existential risks, these authors argue that we may be able to develop some AI technology that will reduce existential risk.

Taking AI welfare seriously by Robert Long, Jeff Sebo, et al.

This paper makes a thorough case that we shouldn’t only worry about the risks AI poses to humanity — we may also need to consider the interests of future AI systems themselves.


Updates to our problem rankings of factory farming, climate change, and more
By the 80,000 Hours team | Tue, 08 Oct 2024
https://80000hours.org/2024/10/updates-to-our-problem-rankings-on-factory-farming-climate-change-and-more/

At 80,000 Hours, we are interested in the question: “if you want to find the best way to have a positive impact with your career, what should you do on the margin?” The ‘on the margin’ qualifier is crucial. We are asking how you can have a bigger impact, given how the rest of society spends its resources.

To help our readers think this through, we publish a list of what we see as the world’s most pressing problems. We rank the top issues by our assessment of where additional work and resources will have the greatest positive impact, considered impartially and in expectation.

Every problem on our list is there because we think it’s very important and a big opportunity for doing good. We’re excited for our readers to make progress on all of them, and think all of them would ideally get more resources and attention than they currently do from society at large.

The most pressing problems are those that have the greatest combination of being:

  • Large in scale: solving the issue improves more lives to a larger extent over the long run.
  • Neglected by others: the best interventions aren’t already being done.
  • Tractable: we can make progress if we try.

We’ve recently updated our list. Here are the biggest changes:

  • We now rank factory farming among the top problems in the world. (See why.)
  • We’ve simplified the list into three categories: the most pressing problems, a new category of ‘emerging challenges,’ and other pressing problems (issues we think are underrated by society as a whole but aren’t quite as pressing as our top issues, given the work already happening). (See more.)
  • We now rank climate change in the category of other pressing problems alongside global health, rather than among the most pressing problems in the world. (See why.)

New articles and updates:

We are also working on or thinking about publishing new articles on:

  • Global health
  • Building capacity for top world problems
  • Wild animal suffering
  • Invertebrate welfare
  • Global priorities research
  • Other sub-problems relating to transformative AI

We expect to continue updating our list as we learn more and our views evolve. We’re not confident that our ranking of global problems is right or that we’re including everything we should. In fact, we’re confident that we don’t have it right — comparing complex global issues is such a difficult research question that it’d be shocking if we did!

But when we make decisions about how to focus our resources — our time, our money, our careers — we can’t avoid prioritisation. We think there are lots of benefits to being explicit about our choices. And we hope this list gives our audience a jumping off point for deciding which problems they should focus on. Read more in our FAQ on ranking global issues.

You can see our full ranking of pressing global problems here. Click through to the articles to see the arguments for and against each problem being particularly pressing and the best ways we know for you to help.

We explain the biggest changes to the rankings below.

Factory farming

We have written a new, in-depth problem profile on factory farming. We now rank it among the top problems in the world to work on.

In our problem profile, Benjamin Hilton reports that there are around 1.6-4.5 trillion farmed animals killed every year. The vast majority of these are raised in factory farms. This causes a huge amount of suffering, and we expect these numbers to continue to grow in the coming decades.

As Benjamin explained in his article:

  • Around 24 billion chickens are alive in farms at any time. We slaughter around 75 billion each year.
  • Around one billion pigs are alive in farms at any one time. We slaughter around 1.5 billion each year.
  • Around 1.5 billion cattle are alive in farms at any one time. We slaughter around 300 million per year.
  • Around 2.2 billion sheep and goats are alive in farms at any one time. We slaughter around 1.05 billion each year.
  • Around 100-180 billion fish are alive in farms at any one time. We kill around 100 billion farmed fish each year. (Many more fish are wild-caught.)
  • We kill around 255-605 billion farmed decapod crustaceans for food each year. That includes:
    • Crabs (5-16 billion slaughtered each year)
    • Crayfish and lobsters (37-60 billion slaughtered each year)
    • Shrimp (213-530 billion slaughtered each year)

Many of these animals suffer intensely for much of their lives on farms.

New research from Rethink Priorities suggests that while there’s a lot of uncertainty about the intensity of farmed animals’ experiences, it’s difficult to justify extremely low estimates of their capacities to suffer. Even significantly discounting the moral importance of animals compared to humans, which we think is reasonable, the scale of this suffering and death is still extreme. And we think most plausible moral views would put significant weight on mitigating such outcomes.

We also think it’s moderately tractable to make progress on this problem and that it’s highly neglected compared to many other issues, with only about $410 million a year currently being spent on it.

It’s hard to know how to compare this kind of problem to other global problems. We try to approach these questions from a standpoint of moral uncertainty, impartiality, and concern for future generations.

In general, because we try to take into account how issues will affect all future individuals, we focus a lot on reducing existential risks. We think society as a whole underrates these risks and that they are hugely important.

But one might also reasonably think that there are very few effective interventions with lasting impacts that we can pursue now, which will predictably influence the long-term future. Additionally, if you have doubts about whether the future will be net positive, this boosts the priority of working on issues that improve the quality of the future or relieve suffering in the present.

Given how tricky the empirical and philosophical questions involved here are, we think these considerations place mitigating factory farming roughly on par with some smaller existential risks, like the risks posed by nuclear weapons. That said, we still consider it less pressing than existential risks where there are plausibly single or even double-digit chances of an existential catastrophe this century — like from AI.

For much more detail, read our new problem profile on factory farming.

Emerging challenges

We’ve added a new category for problems called ‘emerging challenges.’ We think of this as a flexible category that allows us to include content about problems we still have a lot of remaining uncertainty about, but which could be extremely pressing and competitive with our top problems. It contains issues like the moral status of digital minds, space governance, and stable totalitarianism.

These issues don’t yet have well-developed fields built around them like biosecurity and AI safety do. The career paths within them may be less clearly defined, and overall, pursuing work on these problems should be thought of as high-variance.

Most of these issues are incredibly neglected. Some of them, like understanding the moral status of digital minds or invertebrate welfare, have only dozens of people working on them full time and only a few small funding sources. Meanwhile, hundreds, thousands, or even tens of thousands of people work on some of the issues we list on our page, with millions, billions, or even (in the case of climate change) over a trillion dollars in annual funding dedicated to solving them. And several issues we don’t list, like education in wealthy countries, have even more resources devoted to them.

We think this extreme neglectedness makes it particularly high impact to make progress on these emerging challenges — if you can.

This is balanced in part by the fact that it might be very hard to make progress on these issues (how does one increase society’s understanding of the moral status of digital minds?), or they may turn out to be much less pressing after further investigation. Since so little work has been done on them, our understanding of these issues is limited, and it will be harder to find collaborators or other support to work on them.

But even so, and partly because these fields aren’t well defined, we think people who are well-suited to working on them can have a really big impact by getting in on the ground floor. We recommend learning everything there is to learn about the topic (which often isn’t very much due to limited research), and then helping to shape the field and assess the pressingness of the issue with an insider’s perspective.

For example, we think of AI safety as having been in this category about a decade ago — and the people who pioneered the field have been disproportionately impactful in part because they were the only ones working on it.

Tackling these issues is not an easy path. There usually won’t be many organisations or jobs available, meaning you’ll probably have to chart your own course. This can be confusing and challenging, and there is often a much higher chance that it won’t work out, or that you might even do harm by shaping a burgeoning field in a negative way. This means it’s worth being extra careful, and it’s certainly not everyone’s best option to work on an emerging challenge.

But for those who have the right background and aptitudes to thrive in this kind of work, it can be extremely promising.

Climate change

Climate change is a very important issue, and we think, in general, more resources should be going toward addressing it. Humanity’s greenhouse gas emissions have triggered rising global temperatures, which are already impacting people’s lives. Projections suggest this will result in many millions of avoidable deaths and widespread disruption and harm in the coming decades.1 The harms of climate change are arguably particularly objectionable because they will generally most burden the populations who have contributed least to the problem.

We think, however, that given the work that is already happening to mitigate climate change, and considering the scale of other, more neglected issues, many people can do even more good tackling issues like nuclear weapons, catastrophic pandemics, factory farming, and risks from artificial intelligence.

The topline reasons for listing climate change near the top of our ‘other pressing problems’ section, rather than in our ‘most pressing problems’ section, are:

  • Climate change is significantly less neglected than other problems we focus on, and we expect that to continue.
  • Substantial progress has already been made in addressing climate change, which makes the most extreme global outcomes less likely than they might have been otherwise. We think it’s likely this progress will continue.
  • The most recent projections indicate that while the world is likely to miss the goal of keeping global warming below 2°C, it’s less likely than previously thought to exceed 4°C.
  • While lower levels of warming can still do a lot of damage, they are much less likely to pose a risk of human extinction than some other threats, like AI, pandemics, and nuclear war.

Climate change is projected to have serious consequences for many of the most vulnerable populations, such as people in India who already face the challenges of extreme heat. But climate change is not a unique threat in this regard. Preventable diseases and premature deaths also disproportionately burden people in low-income countries, and we believe much more should be done to address this problem. We think climate change is roughly comparable to the general challenge of improving global health and wellbeing.

We will continue to list climate change roles on our job board, just as we do for roles focused on improving global health.

Below we’ll give more detail on recent research and our thinking about how the neglectedness, scale, and tractability of climate change compares to other problems on our list.

Neglectedness

Climate change is significantly less neglected than it was in the recent past and much less neglected than most of the other issues on our list.

When we published an updated article on climate change in 2022, we cited an estimate from the Climate Policy Initiative that global climate finance was around $640 billion annually in 2019/20. The most recent version of that estimate has nearly doubled to $1.265 trillion.

This is about 10 times the amount of global funding for biosecurity, according to a recent estimate. It’s more than 3,000 times the amount of funding going toward mitigating factory farming.2
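As a rough sanity check on these ratios, here is a back-of-the-envelope calculation using only the figures cited in this post ($1.265 trillion in annual climate finance and roughly $410 million spent on mitigating factory farming). The biosecurity funding level isn’t stated directly, so the sketch only derives what the “about 10 times” claim would imply:

```python
# Sanity-checking the funding comparisons cited in this post.
# Figures come from the post itself; they are rough estimates, not precise values.
climate_finance = 1.265e12   # annual global climate finance (Climate Policy Initiative)
factory_farming = 410e6      # annual spending on mitigating factory farming

ratio = climate_finance / factory_farming
print(f"Climate finance is ~{ratio:,.0f}x factory farming funding")  # ~3,085x

# The "about 10 times biosecurity" claim implies biosecurity funding of roughly:
implied_biosecurity = climate_finance / 10
print(f"Implied global biosecurity funding: ~${implied_biosecurity / 1e9:.1f} billion/year")
```

The factory farming ratio comes out at roughly 3,085, consistent with the post’s “more than 3,000 times” figure.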

Though we haven’t done nearly enough — and we should have done more much sooner — humanity has broadly recognised climate change as a major world problem and devoted significant resources to addressing it. For this reason, warming looks like it will be significantly less severe than it might have otherwise been.

Again, more climate change funding is still needed — the IPCC has called for 3-6x the current funding level — but the recent increase is a positive development.

And there’s widespread support for continuing action on climate change:

  • In opinion polls, climate change is consistently ranked as one of the most important global problems. A 2021 poll found that a plurality of respondents in the EU believed climate change was the most important single problem facing the world, ranking above poverty, infectious disease, and terrorism.
  • A 2022 poll of 21,000 people living in 22 countries found that 36% considered climate change and environmental protection to be among the top three problems facing the world.
  • Public support for climate policies is high worldwide. A 2024 survey published in Nature found that 89% of respondents wanted their governments to do more to tackle climate change.

The 2024 study found that people tend to underestimate how supportive others are of tackling climate change. Other studies have shown similar findings. As Hannah Ritchie notes:

“A study published in Nature Communications found that 80% to 90% of Americans underestimated public support for climate policies. And not by a small amount: they thought that just 37% to 43% were supporters, despite the actual number being 66% to 80%. In other words, they thought people in favour of climate policies were in the minority. In reality, the opposite was true: more than two-thirds of the country wants to see more action.”

Most discussions on climate change are now about the merits of various solutions and the scale of the problem, not whether it exists or is a problem at all.

The IPCC reported with medium confidence that, “By 2020, laws primarily focussed on reducing GHG emissions existed in 56 countries covering 53% of global emissions.” And: “More than 100 countries have either adopted, announced or are discussing net zero GHG or net zero CO2 emissions commitments, covering more than two-thirds of global GHG emissions.”

The future is uncertain, and it’s possible that investment in fighting climate change could stall or even reverse course. This could be due to a backlash against climate action or other shifts in the global political landscape. For example, if Donald Trump were elected president in November, the US federal government would be less likely to invest in ambitious climate change mitigation, and some progress on this issue may stall.

But even a significant slowdown in funding could leave climate change much better funded than our top-ranked problems, which could also be affected by shifting political winds. And it’s notable that climate funding has consistently increased since 2013/2014 despite many tumultuous political events. We also expect significant private investment in climate solutions to continue.

All this matters because we think neglectedness is a key factor in determining how pressing a problem is — in the sense of how much good you can do by working on it. The more work goes into a problem, the more likely it is to hit diminishing returns because the low-hanging fruit has been taken. In other words, if you’re the 100,000th person working on an issue, you’re likely to have a smaller impact, all else being equal, than if you’re the 100th person. (See more on this above.)

We think there is still a lot of good work to be done on climate change, and we hope to see much more investment in the most impactful solutions. That’s why we list it as a pressing problem. But there are other issues that are also large or even larger in scale, that have insufficient resources going into solving them, and which are not as widely recognised.

Scale

Recent climate change developments

Despite what some sceptics have tried to say over the years, climate change is real, it’s already happening, and it has serious impacts on the planet.

But the resources humanity has invested into counteracting climate change are starting to bear fruit. Due to progress in low-carbon technology and increasingly ambitious climate policy, overall warming is likely to be much lower than feared a decade ago.

From 2000 to 2010, global emissions increased by around 3% per year, and the world was tracking slightly above the highest emissions scenario considered by the IPCC, which implied warming of around 5°C by 2100. However, since then, global emissions have slowed considerably and appear to be reaching more of a plateau, making a 5°C warmer world look increasingly unlikely.

Progress has been driven by strengthening climate policy and the falling costs of low-carbon technologies. Over the last 30 years, the price of lithium-ion batteries has declined by 97%.

This means that batteries will play an increasing role in energy storage, as well as in transport. Some family electric cars now sell for $10,000, and electric cars cost less to maintain and run than petrol cars. In 2020, 4% of new cars sold were electric. That figure increased to 18% only three years later.

A similar trend is happening with solar panel prices, which have declined by more than 500x over the last 50 years.

As a result, the share of global electricity production from solar has increased dramatically in recent years. While still only at 5%, it’s rising fast.3
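To give a sense of what these cumulative declines mean per year, here is a small illustrative calculation based on the figures above (a 97% battery price decline over 30 years and a more-than-500x solar panel price decline over 50 years); it assumes a constant compound rate, which real price curves don’t follow exactly:

```python
def implied_annual_decline(total_factor: float, years: int) -> float:
    """Annualised rate of decline implied by a total price drop over a period,
    assuming a constant compound rate."""
    return 1 - total_factor ** (1 / years)

# Batteries: a 97% decline over 30 years means prices fell to 3% of their start.
battery = implied_annual_decline(0.03, 30)
# Solar panels: a >500x decline over 50 years means prices fell below 1/500 of their start.
solar = implied_annual_decline(1 / 500, 50)

print(f"Batteries: ~{battery:.0%} per year")  # ~11% per year
print(f"Solar:     ~{solar:.0%} per year")    # ~12% per year
```

In other words, both technologies have become roughly 11-12% cheaper per year, sustained over decades.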

Even without stringent climate policy, we expect low-carbon technology will play an increasing role in the global energy system.

In addition, climate policies across the world have strengthened in recent years following the 2015 Paris Agreement, which aimed to limit warming to 2°C. As a result, global temperature projections have moderated substantially over the last decade. In 2014, current policies suggested a pathway to around 3.9°C of warming in 2100. However, more ambitious climate policy and improved low-carbon technology now place the world on a pathway to warming of around 2.7°C by the end of the century.

If climate policy continues to strengthen in the future, as we hope, warming could be reduced even further. Indeed, if governments stick to their pledges and targets, the most likely level of warming is around 2.1°C.

This is still far more warming than the world should want — 2.1°C would be majorly disruptive, and many of the harms will fall on the world’s most vulnerable people, who contributed least to the problem. But it’s far less warming and less harmful than it might have been had we not begun mitigating it.

Source: Climate Action Tracker. Values shown for the global temperature increase projections are medians, with ranges on either side.

This is just experts’ best guess of likely warming based on some key parameters, and there is uncertainty about how emissions will progress over the century and how sensitive the climate is to emissions. But there is broad consensus in the literature that current policies will likely result in warming of between 2°C and 3.5°C by 2100, with the likelihood of the higher temperatures decreasing over time as policy strengthens. Our children aren’t likely to face the world of 5°C of warming we once feared.

The IPCC has found that while our uncertainty about the range of future warming has decreased — making higher temperatures less likely — lower temperatures now look somewhat riskier for a range of impacts than previously believed. This somewhat diminishes the positive update from reduced uncertainty about the level of warming, but it appears unlikely to substantially change the general ballpark of the direct harms of climate change, which we turn to next.

Projected deaths

Despite progress, climate change is still on track to cause a huge amount of suffering and millions of deaths over the next century. That’s why we continue to think it makes sense to encourage more people to work towards making further progress.

Several projections have attempted to quantify the potential loss of life due to climate change:

  • The World Health Organization has projected 250,000 excess deaths annually between 2030 and 2050.
  • A Nature Communications study by R. Daniel Bressler estimated 83 million cumulative excess deaths by 2100 with 4.1°C of warming.4
    • In the most extreme scenario under this model, cumulative excess deaths reach nearly 300 million by 2100.
    • If warming is limited to 2.4°C by the end of the century, the model projects 9 million excess deaths.
  • The Climate Vulnerable Forum has projected 3.4 million deaths per year by 2100 from “unabated climate change”.5
  • While not giving precise projections of annual deaths, a 2023 IPCC report found that, “Depending on the level of global warming, the assessed long-term impacts will be up to multiple times higher than currently observed (high confidence) for 127 identified key risks, e.g., in terms of the number of affected people and species.”6

These projections are extremely difficult, so we shouldn’t place too much confidence in any particular estimate. But the scale of these projected effects is consistently disturbing and roughly on a par with the broad range of major world health challenges (largely reflecting global inequalities) like tuberculosis, diarrheal diseases, malaria, outdoor air quality, and HIV/AIDS, which cause many millions of deaths each year.7

While climate change is expected to push global deaths above what they would otherwise have been, the Institute for Health Metrics and Evaluation recently forecast in its Global Burden of Disease study that between 2022 and 2050, global life expectancy will increase by 4.6 years. This projection factors in the impact of rising temperatures and notes:

Our findings indicate that increases in life expectancy will be largest in countries where it is currently lower, and inequalities between countries will shrink.

Addressing climate change may synergise with other global health initiatives. For example, transitioning to green energy can mitigate climate change while also reducing air pollution, and eradicating diseases like malaria could make societies more resilient to climate-related challenges. Indeed, the Gates Foundation has said, “Malaria eradication may be one of the most cost-effective climate adaptations we can make.”

All things considered, while climate change is a large-scale problem, it seems less severe than threats that we think pose much more significant risks of human extinction, like nuclear war and engineered pandemics.

See more details in a footnote.8 For a related perspective, see David Wallace-Wells’ New York Times op-ed, “Just How Many People Will Die From Climate Change?”

Economic models

While ‘projected deaths’ are a useful proxy for understanding the impact of climate change and comparing its scale to other problems, the metric is highly imperfect and doesn’t capture the full effects of warming. It’s helpful to look at other methods of assessing climate change’s impact to see if they suggest its scale may be more comparable to potentially extinction-level events, like the worst catastrophic pandemics.

In some ways, climate-economy models provide a broader picture of the impact, because they incorporate negative effects on people’s lives beyond disease and death. Experts in this area project the social impact of warming based on two different kinds of methods:

  • Bottom-up models estimate that 2°C to 3.5°C warming by 2100 could reduce global GDP by 1-10% compared to a world without climate change.
  • Top-down models suggest more pessimistic outcomes, with potential GDP reductions of around 20%.

Making these projections is exceedingly difficult, so again, we shouldn’t be overly confident in any particular model. It’s also important to note that the costs in these models are relative to a counterfactual future world without climate change, not today’s economy. (See more detail in this footnote.9) Even with climate change, average living standards are expected to rise significantly in the future due to ongoing economic growth.10

For an alternative take arguing that the worst-case effects of climate change remain underrated, even by the IPCC, we recommend “Climate endgame: Exploring catastrophic climate change scenarios” by Kemp et al.

Contributions to existential risk

We’ve argued that even if climate change turns out to be significantly worse than existing projections suggest, it is very unlikely to directly cause human extinction:

  • The IPCC and climate models account for various feedback loops and tipping points. The chance of runaway warming to uninhabitable global conditions is considered extremely low.
  • Humanity has shown the ability to adapt to climate changes in the past. With decades or centuries of warming, further adaptation is possible, even in extreme scenarios.

We think the direct extinction risk from climate change is less than 1 in 1,000,000. This is comparable to Toby Ord’s estimate of the risk of an existential catastrophe from an asteroid collision in the next century.

We also discuss the possibility of climate change indirectly increasing other catastrophic risks in our problem profile on climate change. Climate change-induced disasters or crises could, for example, fuel international conflict, perhaps increasing the risk of a great power war.11

While we think these indirect risks are real, we don’t think they significantly strengthen the case for working on climate change over other pressing global issues, all things considered.

One reason is that instead of working on climate change, you might instead work on reducing the risk of a great power war directly — for example, by working to reduce the risk of an accidental nuclear launch or fostering cooperation between great powers. It’s possible that working to reduce climate change is in fact more effective on the current margin at reducing the risk of great power war than either of those methods or any others. But our view is that tackling threats as directly as possible is usually a good heuristic, especially when the indirect method already receives a fair amount of resources, as is the case with climate change, as discussed above.

There are also indirect benefits from working on many problems, not just climate change. For example, reducing the risk of great power conflict could plausibly increase the chance that we effectively tackle climate change, because avoiding conflict makes it easier for countries to coordinate with one another to bring down carbon emissions. Similarly, mitigating factory farming could reduce the risk of pandemics, because factory farms increase the risk that a potential pandemic pathogen crosses over from animals to humans.

Tractability

While the neglectedness and scale of climate change tend to count in favour of prioritising it less than our top problems, it is plausibly more tractable than at least some of the other problems. This is a reason to prioritise it more on the margin, since it means that working on it in your career can make a bigger positive difference to solving it.

There are two main reasons to think that climate change is more tractable than other global catastrophic risks. These arguments are discussed in this article by Giving What We Can:

First, there is a clear success metric for climate change: we know we are winning if we reduce carbon emissions. Compared to other problems like AI safety and nuclear security, it is much clearer whether we are making progress on climate change.

Second, because success is relatively easy to measure, it is easier to identify the most promising ways forward. There are now several climate success stories which suggest that progress on climate change is possible if efforts are carefully designed. For example:

Because climate change has such a clear success metric and different solutions are now so well-tested, it is one of the more tractable major global risks.

Although climate change seems a more tractable problem, we don’t think this outweighs the differences in neglectedness and scale between climate change and the problems at the top of our list.

There’s still much more to do

There has been substantial progress on climate change, and the risks are now lower than they once were. We could instead have learned that things were getting worse and that the most extreme possibilities were looking more likely, but that hasn’t happened. That doesn’t mean that climate change is no longer a big problem. Under current policies, there is a non-negligible chance of 4°C of warming, which would clearly be damaging to the world, and work still needs to be done to reduce emissions further. Even if climate change is less severe than that, many people will likely suffer as a result.

As we discussed at the start of this post and on our problem profiles page, we try to think about the difference we and our readers can make on the margin. We think about what they can do to help as much as they can, given how the rest of society spends resources. We are not saying that all resources directed to climate should instead go to AI and pandemics. In fact, we think that climate change should receive more resources than it does today, just as we think global health should. Our point is that, especially for many people starting their careers, you can probably do even more good by working on other problem areas.

We’re also not telling people currently working on climate change that they should change careers or suggesting their work isn’t valuable. It’s often very valuable, and personal career decisions must weigh many different factors. While the pressingness of the problem you work on is an important and underrated factor in our opinion, considerations like personal fit — and what you enjoy — are also relevant.

Some people argue that climate change should be prioritised in part because the harms it causes are particularly unjust. Many countries that will be most harmed have historically contributed least to greenhouse gas emissions. We don’t explicitly include these kinds of considerations in our rankings, instead focusing on total welfare impacts, but they may motivate many people in their work. Though note that considerations of justice may be relevant to other problems as well — e.g. factory farming or the threat of nuclear war.

Johannes Ackva, a grantmaker who works on climate change, told 80,000 Hours in an interview that early-career people might be advised not to work on the issue because much of the policy, technology, and emissions trajectories could be essentially “locked in” within the next 10-15 years or so. Since you’re most likely to be impactful after at least a decade in your field, a young person pursuing this path may find the most valuable years of their career don’t coincide with the best opportunities to mitigate the harms of climate change.

If you want to work on mitigating climate change, we list climate change roles on our job board and have guidance for what seem to be the best ways to help in our article on the problem. And if you want to donate to organisations that work on this topic, we’d recommend the Founders Pledge Climate Change Fund.

What if I disagree with 80,000 Hours about all of this?

We expect a lot of disagreement with these decisions. One reason you might disagree with our ranking of climate change is if you think it’s more likely to cause human extinction than we’ve argued is plausible, or if you think the risks from advanced AI, catastrophic pandemics, and nuclear weapons are significantly lower than we do.

Figuring out how to compare the impact of working on different problem areas is hard, and there will always be reasonable disagreement about how to do it. Members of our team disagree with one another on these topics all the time.

We also acknowledge that we may well be wrong about these new changes, but that’s also true for every other choice we make as an organisation.

We’re excited for people to engage with our ideas, propose counterarguments, and develop their own views. We have an article that can help you compare problems for yourself if you’re interested in exploring this further. Many people in our audience and in the effective altruism community hold differing views on which issues are most pressing — you can see some of the arguments about these topics on the Effective Altruism Forum.

Much of our other content, such as our career guide, is also designed to be helpful regardless of your problem prioritisation. We think we can still be useful to people even if they totally disagree with us on what issues are most pressing.

Learn more

Factory farming

Problem choice

Climate change

The post Updates to our problem rankings of factory farming, climate change, and more appeared first on 80,000 Hours.

An apology for our mistake with the book giveaway https://80000hours.org/2024/01/an-apology-for-our-mistake-with-the-book-giveaway/ Fri, 05 Jan 2024 14:15:43 +0000

80,000 Hours runs a programme where subscribers to our newsletter can order a free paperback copy of a book to be sent to them in the mail. Readers choose between getting a copy of our career guide, Toby Ord’s The Precipice, and Will MacAskill’s Doing Good Better.

This giveaway has been open to all newsletter subscribers since early 2022. The number of orders we get depends on the number of new subscribers that day, but in general, we get around 150 orders a day.

Over the past week, however, we received an overwhelming number of orders. The offer of the free book appears to have been promoted by some very popular posts on Instagram, which generated an unprecedented amount of interest for us.

While we’re really grateful that these people were interested in what we have to offer, we couldn’t handle the massive uptick in demand. We’re a nonprofit funded by donations, and everything we provide is free. We had budgeted to run the book giveaway projecting the demand would be in line with what it’s been for the past two years. Instead, we had more than 20,000 orders in just a few days — which we anticipated would run through around six months of the book giveaway’s budget.

We’ve now paused taking new orders, and we’re unsure when we’ll be able to re-open them.

Also, because of this large spike in demand, we had to tell many people who subscribed to our newsletter hoping to get a physical book that we’re not able to complete their order.

We deeply regret this mistake. We should have had a better process in place to pause the book giveaway much sooner, so that no orders were placed that we couldn’t fulfil, and so no one signed up to the newsletter thinking they would get a physical copy of a book when they wouldn’t.

Our readers’ trust in our services is extremely important to us, and we’re very sorry to let down the people who won’t get the books they signed up for.

We understand that this might make some readers trust us less. All we can say is that we commit to doing better in the future. We’re reviewing our book giveaway processes so that going forward, we will be able to consistently fulfil all orders as expected.

If you’re reading this and you were one of the users affected:

  • Please accept our sincerest apologies for not being able to deliver on our promise to you.
  • You can still get access to the 80,000 Hours career guide in these ways:

We’d also like to address any concerns readers may have concerning the processing of user data that we obtained during this period:

If you’d like to unsubscribe from our newsletter, because of this or any other reason, you can do so at any time by clicking the ‘unsubscribe’ link in the footer of any email from us. If you unsubscribe, we won’t email you again.

User data collected by us will be processed in accordance with our privacy policy, which you can read on Effective Ventures’ website here.

We will never sell any user data, for any reason.

Users who ordered a book will also have provided some of their personal data to our distribution partner, Impact Books, such as the delivery address for their book and their email. You can read their privacy policy here. Like us, they will never sell your data, for any reason.

We’ve asked Impact Books to delete all the personal data they had gathered from any user whose order we did not fulfil, and they will do so. So you can be confident that we will not benefit in any way from your provision of this data.

We hope this clears up some potential concerns in this area.

We apologise once again for not sending out all the requested books, and we’re really sorry that we let people down.

We think our book giveaway is a valuable service, so we’re motivated to get it restarted in a sustainable way — and we will strive to make sure we avoid a mistake like this in the future. We also hope that some of those who are disappointed to not receive a paperback book can make use of other versions of our advice, which are (and will remain) available for free online.

Update — Book giveaway re-opened on January 26, 2024:

We have re-opened our book giveaway for free paperback orders! If you have already signed up to our newsletter, you can order a paperback book by emailing book.giveaway@80000hours.org. Otherwise, you can get your book by subscribing to our newsletter as normal.

We greatly appreciate the patience of our new subscribers while we prepared to re-open the giveaway.

While we may have to close orders if we get overwhelmed again in the future, we have made several changes to improve the process of the book giveaway to address this problem.

  1. We added new terms and conditions to the giveaway so new subscribers are better informed about the availability of books in certain formats, our data privacy policy, and the circumstances in which we may be unable to fulfil paperback orders.
  2. We improved the system that alerts us to unexpectedly high volumes of paperback book orders so that more of us are aware sooner.
  3. We developed clearer internal recommendations and procedures for when and how to pause the giveaway.

These changes will help us respond more quickly to these situations in the future, which we hope will limit the number of orders placed that we cannot fulfil.

Announcing our plan to become an independent organisation https://80000hours.org/2023/12/announcing-plan/ Fri, 29 Dec 2023 14:39:54 +0000

We are excited to share that 80,000 Hours has officially decided to spin out as a project from our parent organisations and establish an independent legal structure.

80,000 Hours is a project of the Effective Ventures group — the umbrella term for Effective Ventures Foundation and Effective Ventures Foundation USA, Inc., which are two separate legal entities that work together. It also includes the projects Giving What We Can, the Centre for Effective Altruism, and others.

We’re incredibly grateful to the Effective Ventures leadership and team and the other orgs for all their support, particularly in the last year. They devoted countless hours and enormous effort to helping ensure that we and the other orgs could pursue our missions.

And we deeply appreciate Effective Ventures’ support in our spin-out. They recently announced that all of the other organisations under their umbrella will likewise become their own legal entities; we’re excited to continue to work alongside them to improve the world.

Back in May, we investigated whether it was the right time to spin out of our parent organisations. We’ve considered this option at various points in the last three years.

There have been many benefits to being part of a larger entity since our founding. But as 80,000 Hours and the other projects within Effective Ventures have grown, we concluded we can now best pursue our mission and goals independently. Effective Ventures leadership approved the plan.

Becoming our own legal entity will allow us to:

  • Match our governing structure to our function and purpose
  • Design operations systems that best meet our staff’s needs
  • Reduce interdependence with other entities that raises financial, legal, and reputational risks

There’s a lot for us to do to make this happen. We’re currently in the process of finding a new CEO to lead us in our next chapter. We’ll also need a new board to oversee our work, and new staff for our internal systems team and other growing programmes.

We’re excited to begin this next chapter and to continue providing research and support to help people have high-impact careers!

How 80,000 Hours has changed some of our advice after the collapse of FTX https://80000hours.org/2023/05/how-80000-hours-has-changed-some-of-our-advice-after-the-collapse-of-ftx/ Fri, 12 May 2023 06:06:36 +0000

Following the bankruptcy of FTX and the federal indictment of Sam Bankman-Fried, many members of the team at 80,000 Hours were deeply shaken. As we have said, we had previously featured Sam on our site as a positive example of earning to give, a mistake we now regret. We felt appalled by his conduct and at the harm done to the people who had relied on FTX.

These events were emotionally difficult for many of us on the team, and we were troubled by the implications they might have for our attempts to do good in the world. We had linked our reputation with his, and his conduct left us with serious questions about effective altruism and our approach to impactful careers.

We reflected a lot, had many difficult conversations, and worked through a lot of complicated questions. There’s still a lot we don’t know about what happened, there’s a diversity of views within the 80,000 Hours team, and we expect the learning process to be ongoing.

Ultimately, we still believe strongly in the principles that drive our work, and we stand by the vast majority of our advice. But we did make some significant updates in our thinking, and we’ve changed many parts of the site to reflect them. We wrote this post to summarise the site updates we’ve made and to explain the motivations behind them, for transparency purposes and to further highlight the themes that unify the changes.

We also support many efforts to push for broader changes in the effective altruism community, like improved governance.1 But 80,000 Hours’ written advice is primarily aimed at personal career choices, so we focused on the actions and attitudes of individuals in these updates to the site’s content.

The changes we made

While we think ambition in doing good is still underrated by many, we now believe it’s more important to emphasise the downsides of ambition. Our articles on being more ambitious and the potential for accidental harm had both mentioned the potential risks, but we’ve expanded on these discussions and made the warnings more salient for the reader.

We expanded our discussion of the reasons against pursuing a harmful career. And we’ve added more discussion in many places, most notably our article on the definition of “social impact” and in a new blog post from Benjamin Todd on moderation, about why we don’t encourage people to focus solely, to the exclusion of all other values, on aiming at what they think is impartially good.

We also used this round of updates to correct some other issues that came up during the reflections on our advice after the collapse of FTX.

The project to make these website changes was implemented by Benjamin Todd, Cody Fenwick and Arden Koehler, with some input from the rest of the team.

Here is a summary of all the changes we made:

  • We updated our advice on earning to give to include Sam as a negative example, and we discussed at more length the risks of harm or corruption. We express more scepticism about highly ambitious earning to give (though we don’t rule it out, and we think it can still be used for good with the right safeguards).
  • In our article on leverage, we added discussion of the downsides and responsibility that comes with having a lot of leverage, such as the importance of governance and accountability for influential people.
  • We clarified our views on risk and put more emphasis on how you should generally only seek upsides after limiting downsides, for both yourself and the world.
  • We put greater emphasis on respecting a range of values and cultivating character in addition to caring about impact, as well as not doing things that seem very wrong from a commonsense perspective for what one perceives as the “greater good.”
  • We added a lot more advice on how to avoid accidentally doing harm.
  • We took easy opportunities to tone down language around maximisation and optimisation. For instance, we talk about doing more good, or doing good as one important goal among several, rather than the most good. There’s a lot of room for debate about these issues, and we’re not in total agreement on the team about the precise details, but we generally think it’s plausible that Sam’s unusual willingness to fully embrace naive maximising contributed to the decision making behind FTX’s collapse.
  • We slightly reduced how much we emphasise the importance of getting involved with the effective altruism community, which now has a murkier historical impact compared to what we thought before the collapse. (To be clear, we still think there are tons of great things about the EA community, continue to encourage people to get involved in it, and continue to count ourselves as part of it!)
  • We released a newsletter about character virtue and a blog post about moderation.
  • We’ve started doing more vetting of the case studies we feature on the site.
  • We have moved the “Founder of new project tackling top problems” out of our priority paths and into the “high-impact but especially competitive” section on the career reviews page. This move was in part driven by the change in the funding landscape after the collapse of FTX — but also because the recent proliferation of new such projects likely reduces the marginal value of the typical additional project.

We’re still considering some other changes, such as to our ranking of effective altruism community building and certain other careers, as well as doing even more to emphasise character, governance, oversight, and related issues. But we didn’t want to wait to be ‘done’ with these edits, to the degree we ever will be ‘done’ learning lessons from this episode, before sharing this interim update with readers.

Some of the articles that saw the most changes were:

We’ve also updated some of our marketing materials, mostly by toning down calls to “maximise impact.” We still think it’s really important to be scope sensitive, and helping more individuals is better than helping fewer — some of the core ideas of effective altruism. But handling these ideas in a naive way, as the maximising language may incline some toward, can be counterproductive and miss out on important considerations.

We think there’s a lot more we can learn from what happened. Here are some of the reflections members of the 80k team have had:

We think the edits we’ve made are only a small part of the response that’s needed, but hopefully they move things in the right direction.

Improving decision making (especially in important institutions) https://80000hours.org/problem-profiles/improving-institutional-decision-making/ Mon, 25 Sep 2017 17:25:45 +0000

What is this issue?

Our ability to solve problems in the world relies heavily on our ability to understand them and make high-quality decisions. We need to be able to identify what problems to work on, to understand what factors contribute towards these problems, to predict which of our actions will have the desired outcomes, and to respond to feedback and change our minds.

Many of the most important problems in the world are incredibly complicated, and require an understanding of complex interrelated systems, the ability to make reasonable predictions about the outcome of different actions, and the ability to balance competing considerations and bring different parties together to solve them. That means there are a lot of opportunities for errors in judgement to slip in.

Moreover, in many areas there can be substantial uncertainty even about whether something would be good or bad — for example, does working on large, cutting-edge models in order to better understand them and align their goals with human values increase or decrease overall risk from AI? Better ways of answering these questions would be extremely valuable.

Informed decision making is hard. Even experts in politics sometimes do worse than simple actuarial predictions when estimating the probabilities of events up to five years in the future.1 And the organisations best placed to solve the world’s most important problems — such as governments — are often highly bureaucratic, meaning that decision makers face many constraints and competing incentives, which are not always aligned with better decision making.

There are also large epistemic challenges — challenges having to do with how to interpret information and reasoning — involved in understanding and tackling world problems. For example, how do you put a probability on an unprecedented event? How do you change your beliefs over time — e.g. about the level of risk in a field like AI — when informed people disagree about the implications of developments in the field? How do you decide between, or balance, the views of different people with different information (or partially overlapping information)? How much trust should we have in different kinds of arguments?

And what decision-making tools, rules, structures, and heuristics can best set up organisations to systematically increase the chances of good outcomes for the long-run future — which we think is extremely important? How can we make sensible decisions, given how hard it is to predict events even 5 or 10 years in the future?2

And are there better processes for implementing those decisions, where the same conclusions can lead to better results?

We think that improving the internal reasoning and decision making competence of key institutions — some government agencies, powerful companies, and actors working with risky technologies — is particularly crucial. If we’re right that the risks we face as a society are substantial, and these institutions can have an outsized role in managing them, the large scale of their impact makes even small improvements high-leverage.

So the challenge of improving decision making spans multiple problem areas. Better epistemics and decision making seems important in reducing the chance of great power conflict, in understanding risks from AI and figuring out what to do about them, in designing policy to help prevent catastrophic pandemics, in coordinating to fight climate change, and basically every other difficult, high-stakes area where there are multiple actors with incomplete information trying to coordinate to make progress — that is, most of the issues we think are most pressing.

What parts of the problem seem most promising to work on?

‘Improving decision making’ is a broad umbrella term. Unfortunately, people we’ve spoken to disagree a lot on what more specifically is best to focus on. Because of this lack of clarity and consensus, we have some scepticism that this issue as we’ve defined it is as pressing as better-defined problems like AI safety and nuclear risk.

However, some of the ideas in this space are exciting enough that we think they hold significant promise, even if we’re not as confident in recommending that people work on them as we are for other interventions.

Here are some (partially overlapping) areas that could be particularly worth working on:

Forecasting

A new science of forecasting is developing, aimed at improving people’s ability to make better all-things-considered predictions about future events — by developing tools and techniques, as well as ways of combining individual predictions into aggregate ones. This field tries to answer questions like: what’s the chance that the war in Ukraine ends in 2023? (It’s still ongoing at the time of writing.) Or: what’s the chance that a company will be able to build an AI system that can act as a commercially viable ‘AI executive assistant’ by 2030?

Definitions are slippery here, and a lot of things are called ‘forecasting.’ For example, businesses forecast supply and demand, spending, and other factors important for their bottom lines all the time. What seems special about some kinds of forecasting is a focus on finding the best ways to get informed, all-things-considered predictions for events that are really important but complex, unprecedented, or otherwise difficult to forecast with established methods.

One approach in this category is Dr Phil Tetlock’s search for ‘superforecasters’ — people who have a demonstrated track record of being unusually good at predicting events. Tetlock and his collaborators have looked at what these superforecasters are actually doing in practice to make such good predictions, and experimented with ways of improving on their performance. Two of their findings: the best forecasters tended to take in many different kinds of information and combine them into an overall guess, rather than focusing on e.g. a few studies or extrapolating one trend; and an aggregation of superforecasters’ predictions tended to be even more accurate than the best single forecasters on their own.
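To make the aggregation finding concrete, here is a toy sketch (our own illustration with made-up numbers, not data from Tetlock’s studies). It uses the Brier score, a standard accuracy measure for probability forecasts: the mean squared error between forecasts and what actually happened, where lower is better.

```python
# Toy illustration of forecast aggregation, scored with the Brier score.
# All numbers below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical data: three forecasters' probabilities for four events,
# and whether each event actually happened (1) or not (0).
forecasters = [
    [0.9, 0.3, 0.7, 0.2],
    [0.6, 0.1, 0.9, 0.4],
    [0.8, 0.4, 0.6, 0.1],
]
outcomes = [1, 0, 1, 0]

# Simple aggregate: the mean of the individual forecasts for each event.
aggregate = [sum(col) / len(col) for col in zip(*forecasters)]

for i, f in enumerate(forecasters):
    print(f"forecaster {i}: Brier = {brier_score(f, outcomes):.3f}")
print(f"aggregate:    Brier = {brier_score(aggregate, outcomes):.3f}")
```

In this particular toy run the averaged forecast beats two of the three individuals; the qualitative point is that averaging tends to wash out individual forecasters’ idiosyncratic errors.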

Developing and implementing better forecasting of important events and milestones in technological progress would be extremely helpful for making more informed high-stakes decisions, and this new science is starting to gain traction. For example, the US’s Intelligence Advanced Research Projects Activity (IARPA) took an interest in the field in 2011, when it ran a forecasting tournament. The winning team was Tetlock’s Good Judgement Project, which has since turned into a commercial forecasting outfit, Good Judgement Inc.

Here’s a different sort of example of using forecasting from our own experience: one of the biggest issues we aim to help people work on is reducing existential threats from transformative AI. But most work in this area is likely to be more useful before AI systems reach transformative levels of power and intelligence. That makes when this transition happens (which may be gradual) an important strategic consideration. If we thought transformative AI was just around the corner, it’d make more sense for us to try to work with mid-career professionals to pivot to working on the issue, since they already have lots of experience. But if we have a few decades, there’s time to help a new generation of interested people to enter the field and skill up.

Given this situation, we keep an eye on forecasts from researchers in AI safety and ML: for example, Ajeya Cotra’s forecasts based on empirical scaling laws and biological anchors (like the computational requirements of the human brain), expert surveys, and forecasts by a ‘superforecasting’ group called Samotsvety. Each of these forecasts uses a different methodology and is highly uncertain, and it’s unclear which is best; but they’re the best resource we have for this decision. Taken together, these forecasts so far suggest there’s a decent probability that there is enough time for us to help younger people enter careers in the field and start contributing. But more and better forecasting in this area, as well as more guidance on best practices for combining different forecasts, would be highly valuable and decision-relevant.

Policy is another area where forecasting seems important. The forecasted effects of different proposed policies are the biggest ingredient determining their value — but these predictions are often very flawed. Of course, this isn’t the only thing that stands in the way of better policymaking — for example, sometimes good policies hit political roadblocks, or just need more research. But predicting the complex effects of different policies is one of the fundamental challenges of governance and civil society alike.

So, improving our ability to forecast important and complex events seems robustly useful. However, the most progress has so far been made on relatively well-defined predictions about the near-term future. It is as yet unproven that we can reliably make substantially better-than-chance predictions about more complex or amorphous issues, or about events more than a few years in the future — like, for example, “when, if ever, will AI transform the economy?”

And we also don’t know much about the extent to which better forecasting methods might improve decision making in practice, especially in large, institutional settings. What are the best and most realistic ways the most important agencies, nonprofits, and companies can systematically use forecasting to improve their decision making? What proposals will be most appealing to leadership, and easiest to implement?

It also seems possible this new science will hit a wall. Maybe we’ve learned most of what we’re going to learn, or maybe getting techniques adopted is too intractable to be worth trying. We just don’t know yet.

So: we’re not sure to what extent work on improving forecasting will be able to help solve big global problems — but the area is small and new enough that it seems possible to make a substantial difference on the margin, and the development of this field could turn out to be very valuable.

Prediction aggregators and prediction markets

Prediction markets use market mechanisms to aggregate many people's views on whether particular events will come about. For example, the prediction markets Manifold Markets and Polymarket let users bet on questions like "who will win the US presidential election in 2024?". Users who correctly predict events can earn money, with bigger returns the more unexpected the outcome they predicted was.

The idea is that prediction markets provide a strong incentive for individuals to make accurate predictions about the future. In the aggregate, the trends in these predictions should provide useful information about the likelihood of the future outcomes in question — assuming the market is well designed and functioning properly.
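To make that incentive concrete, here is a minimal sketch of the payoff structure in a simple binary market where a contract pays $1 if the event occurs. The numbers and function names are ours, for illustration only:

```python
def payout_multiple(market_price: float) -> float:
    """Gross return per dollar staked if the event occurs.

    Buying at a low price (an 'unexpected' outcome) yields a larger
    multiple -- the bigger-returns-for-surprises incentive above.
    """
    return 1.0 / market_price

def expected_profit_per_dollar(true_prob: float, market_price: float) -> float:
    """Positive whenever a trader's probability exceeds the market's,
    which is what rewards accurate beliefs in expectation."""
    return true_prob * payout_multiple(market_price) - 1.0

# A contract priced at $0.20 pays 5x if the event happens; a trader
# who thinks the true probability is 30% expects a 50% return:
print(payout_multiple(0.20))                   # ~5.0
print(expected_profit_per_dollar(0.30, 0.20))  # ~0.5
```

This is why, in aggregate, traders are pushed to move prices toward their honest probability estimates.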

There are also other prediction aggregators, like Metaculus, that use reputation as an incentive rather than money (though Metaculus also hosts forecasting tournaments with cash prizes), and which specialise in aggregating and weighting predictions by track records to improve collective forecasts.
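The track-record weighting idea can be sketched as a simple weighted pool. Note this is an illustration of the general approach, not Metaculus's actual algorithm, and the scores are made up:

```python
def weighted_aggregate(forecasts, track_records):
    """Pool probability forecasts, giving forecasters with better
    past accuracy scores more influence over the aggregate."""
    total = sum(track_records)
    return sum(p * w for p, w in zip(forecasts, track_records)) / total

# Three forecasters give probabilities for the same event; the one
# with the strongest track record pulls the pooled estimate furthest:
pooled = weighted_aggregate([0.6, 0.8, 0.3], [5.0, 3.0, 2.0])
print(pooled)  # ~0.6
```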

In theory, prediction markets with a high volume of users are very accurate, in the same way the stock market is very good at valuing shares in companies. In the ideal scenario, they might be able to provide an authoritative, shared picture of the world's aggregate knowledge on many topics. Imagine never disagreeing about what "experts think" at the Thanksgiving table again — then imagine those disagreements evaporating in the White House situation room.

But prediction markets and other kinds of aggregators don't yet exist at a large scale for many topics, so the promise is still largely theoretical. They also wouldn't be a magic bullet: like stock markets, we might see them move but not know why, and they wouldn't tell us much about topics where people don't have much reason to think one thing or another. Likewise, they add another set of complex incentives to political decision making — a prediction market can create incentives to bring about the very things it predicts. Moreover, even if large and accurate prediction markets were created, would the most important institutional actors take them into account?

These are big challenges, but the theoretical arguments in favour of prediction markets and other aggregators are compelling, and we’d be excited to see markets gain enough traction to have a full test of their potential benefits.

Decision-making science

Another category here is developing tools and techniques for making better decisions — for example, 'structured analytic techniques' (SATs) like the Delphi method, which builds consensus in groups through multiple rounds of questions put to different group members, and crowdsourcing techniques like SWARM ('Smartly-assembled Wiki-style Argument Marshalling'). (You can browse uses of the Delphi method on the website of the RAND Corporation, which first developed the method in the 1950s.)

These tools might help people avoid cognitive failings like confirmation bias, help aggregate views or reach consensus, or help stop ‘information cascades’.

Many of these techniques are already in use. But it seems likely that better tools could be developed, and it’d be very surprising if there weren’t more institutional settings where the right techniques could be introduced.
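As an illustration of the Delphi method's iterative structure, here is a toy simulation. The numeric update rule and the 0.5 'pull' factor are our own simplifying assumptions — real Delphi panels exchange written reasoning, not just numbers:

```python
import statistics

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Each round: share the anonymised group median with everyone,
    then each panellist revises partway toward it."""
    for _ in range(rounds):
        group_view = statistics.median(estimates)  # anonymised feedback
        estimates = [e + pull * (group_view - e) for e in estimates]
    return estimates

# Three panellists start far apart and converge over three rounds:
final = delphi_rounds([10.0, 30.0, 50.0])
print(final)  # spread shrinks toward the initial median, 30.0
```

The anonymity is the key design choice: it aims to capture the benefits of group deliberation while avoiding conformity pressure and 'information cascades' from vocal or senior members.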

Improving corporate governance at key private institutions

Corporate governance might be especially important as we develop transformative and potentially dangerous technologies in the fields of AI and bioengineering.

Decision making responds to incentives, which can often be affected by a company’s structure and governance. For example, for-profit companies are ultimately expected to maximise shareholder profit — even if that has large negative effects on others (‘externalities’) — whereas nonprofits are not.

As an example, the AI lab OpenAI set up its governance structure with the explicit goal that profits not take precedence over the mission codified in its charter (that general AI systems benefit all of humanity), by putting its commercial entity under the control of its nonprofit’s board of directors.

We're not sure this does enough to reduce the chance of negative externalities from OpenAI's operations, and we take seriously concerns raised by others that the company's actions could fuel arms race dynamics. But this structure seems better than a traditional commercial setup for shaping corporate incentives in a good direction, and we'd be excited to see more innovations, as well as more interest from companies, in this space. (Though we're also not sure improving governance mechanisms is more promising than advocacy and education about potential risks — both seem helpful.)

Improving how governments work

Governments control vast resources and are sometimes the only actors that can work on issues of global scale. But their decisions seem substantially shaped by the structures and processes that make them up. If we could improve these processes, especially in high-stakes or fast-moving situations, that might be very valuable.

For example, during the COVID-19 pandemic, reliable rapid tests were not available for a very long time in the US compared to other similar countries, arguably slowing the US’s ability to manage the pandemic. Could better institutional decision-making procedures — especially better procedures for emergency circumstances — have improved the situation?

Addressing this issue would require a detailed knowledge of how complex, bureaucratic institutions like the US government and its regulatory frameworks work. Where exactly was the bottleneck in the case of COVID rapid tests? What are the most important bottlenecks to address before the next pandemic?

A second area here is voting reform. We often elect our leaders with ‘first-past-the-post’-style voting, but many people argue that this leads to perverse outcomes. Different voting methods could lead to more functional and representative governments in general, which could be useful for a range of issues. See our podcast with Aaron Hamlin to learn more.

There is also the importance of voting security to prevent contested elections, discussed in our interview with Bruce Schneier.

Finally there are interventions aimed at getting governance mechanisms to systematically account for the interests of future generations. Right now future people have no commercial or formal political power, so we should expect their interests to be underserved. Some efforts to combat this phenomenon include the Future Generations Commissioner for Wales and the Wellbeing of Future Generations bill in the UK.

Ultimately, we’re unsure what the best levers to push on are here. It seems clear that there are problems with society’s ability to reason about large and complex problems and act appropriately — especially when decisions have to be made quickly, when there are competing motives, when cooperation is needed, and when the issues primarily affect the future. And we’re intrigued by some of the progress we’ve seen.

But some of the best interventions may also be specific to certain problem areas — e.g. working on better forecasting of AI development.

And tractability remains a concern even when better methods are developed. Though there are various examples of institutions adopting techniques to improve decision-making quality, it seems difficult to get others to adopt best practices, especially if the benefits are hard to demonstrate, if you don't understand the organisations well enough, or if they lack adequate motivation to adopt better techniques.

What are the major arguments against improving decision making being a pressing issue?

It’s unclear what the best interventions are — and some are not very neglected.

As we covered above, we’re not sure about what’s best in this area.

One reason for doubt about some of the kinds of work that would fall into this bucket is that they seem like they should be taken care of by market mechanisms.

A heuristic we often use for prioritising global problems is: will our economic system incentivise people to solve this issue on its own? If so, it’s less obvious that people should work on it in order to try to improve the world. This is one way of estimating the future neglectedness of an issue. If there’s money to be made by solving a problem, and it’s possible to solve it, it stands a good chance of getting solved.

So one might ask: shouldn't many businesses have an interest in better decision making, including about important and complex events in the medium- and long-run future? Predicting future world developments, aggregating expert views, and making better high-stakes decisions all seem incredibly useful in general, not just for altruistic purposes.

This is a good question, but it applies more to some interventions than others. For example, working to shape the incentives of governments so they better take into account the interests of future generations seems very unlikely to be addressed by the market. Ditto better corporate governance.

Better forecasting methods and prediction markets seem more commercially valuable. Indeed, the fact that they haven't been developed more under the competitive pressure of the market could be taken as evidence against their effectiveness. Prediction markets, in particular, were first theorised decades ago — maybe if they were going to be useful they'd be mainstream already? But their slow adoption may be partly explained by the fact that they're often considered a form of illegal gambling in the US, and our guess is that they're still valuable and underrated.

Decision science tools are probably the least neglected of the sub-areas discussed above — and, in practice, many industries do use these methods, and think tanks and consultancies develop and train people in them.

All that said, the overall problem here still seems unsolved, and even methods that seem like they should receive commercial backing haven’t yet caught on. Moreover, our ability to make decisions seems most inadequate in the most high-stakes situations, which often involve thinking about unknown unknowns, small probabilities of catastrophic outcomes, and deep complexity — suggesting more innovation is needed. Plus, though use of different decision-making techniques is scattered throughout society, we’re a long way from all the most important institutions using the best tools available to them.

So, though there is broad interest in improving decision making, it'd be surprising to us if there weren't still a lot of room for improvement on the status quo. Ultimately, despite these doubts, we're excited to see more work here.

You might want to just work on a pressing issue more directly.

Suppose you think that climate change is the most important problem in the world today. You might believe that a huge part of why we’re failing to tackle climate change effectively is that people have a bias towards working on concrete, near-term problems over those that are more likely to affect future generations — and that this is a systematic problem with our decision making as a society.

And so you might consider doing research on how to overcome this bias, with the hope that you could make important institutions more likely to tackle climate change.

However, if you think the threat of climate change is especially pressing compared to other problems, this might not be the best way for you to make a difference. Even if you discover a useful technique to reduce the bias to work on immediate problems, it might be very hard to implement it in a way that directly reduces the impacts of climate change.

In this case, it’s likely better to focus your efforts on climate change more directly — for example by working for a think tank doing research into the most effective ways to cut carbon emissions, or developing green tech. In general, more direct interventions seem more likely to move the needle on particular problems because they are more focused on the most pressing bottlenecks in that area.

That said, if you can’t implement solutions to a problem without improving the reasoning and decision-making processes involved, it may be a necessary step.

The chief advantage of broad interventions like improving decision making is that they can be applied to a wide range of problems. The corresponding disadvantage is that it might be harder to target your efforts towards a specific problem. So if you think one specific problem is significantly more urgent than others, and you have an opportunity to work on that problem more directly, then it is likely more effective to do the direct work.

It's worth noting that in some cases this is not a particularly meaningful distinction. Developing good corporate governance mechanisms that allow companies developing high-stakes technology to better cooperate with each other is a way of improving decision making. But it may also be one of the best ways to directly reduce catastrophic risks from AI.

It’s often difficult to get change in practice

Perhaps the main concern with this area is that it’s not clear how easy it is to actually get better decision-making strategies implemented in practice — especially in bureaucratic organisations, and where incentives are not geared towards accuracy.

It's often hard to get groups to implement practices that are costly in the short term — requiring effort and resources, and sometimes challenging current stakeholders — while promising only abstract or long-term benefits. There can also be other constraints: as mentioned above, running prediction markets with real money is usually considered illegal in the US.

However, implementation problems may be surmountable if you can show decision-makers that the techniques will help them achieve the objectives they care about.

If you work at or are otherwise able to influence an institution’s setup, you might also be able to help shift decision-making practices by setting up institutional incentives that favour deliberation and truth-seeking.

But recognising the difficulties of getting change in practice also means it seems especially valuable for people thinking about this issue to develop an in-depth understanding of how important groups and institutions operate, and the kinds of incentives and barriers they face. It seems plausible that overcoming bureaucratic barriers to better decision making may be even more important than developing better techniques.

There might be better ways to improve our ability to solve the world’s problems

One of the main arguments for working in this area is that if you can improve the decision-making abilities of people working on important problems, then this increases the effectiveness of everything they do to solve those problems.

But you might think there are better ways to increase the speed or effectiveness of work on the world’s most important problems.

For example, perhaps the biggest bottleneck on solving the world’s problems isn’t poor decision making, but simply lack of information: people may not be working on the world’s biggest problems because they lack crucial information about what those problems are. In this case the more important area might be global priorities research — research to identify the issues where people can make the biggest positive difference.

Also, a lot of work on building effective altruism might fall in this category: giving people better information about the effectiveness of different causes, interventions, and careers, as well as spreading positive values like caring about future generations.

What can you do in this area?

We can think of work in this area as falling into several broad categories:

  1. More rigorously testing existing techniques that seem promising.
  2. Doing more fundamental research to identify new techniques.
  3. Fostering adoption of the best proven techniques in high-impact areas.
  4. Directing more funding towards the best ways of doing all of the above.

All of these strategies seem promising and seem to have room to make immediate progress (we already know enough to start trying to implement better techniques, but stronger evidence will make adoption easier, for example). This means that which area to focus on will depend quite a lot on your personal fit and the particular opportunities available to you — we discuss each in more detail below.

1. More rigorously testing existing techniques that seem promising

Above, we described several areas that admit quite different sorts of interventions. We think more rigorously testing tentative findings in all those areas — especially the least developed ones — could be quite valuable.

Here we’re just going to talk about a few examples of techniques on the more formal end of the spectrum, because it’s what we know more about:

Prof Tetlock’s work has identified thinking styles that lead to significantly more accurate predictions, as detailed in his book, Superforecasting.

The idea here would be to take techniques that seem promising, but haven’t been rigorously tested yet, and try to get stronger evidence of where and whether they are effective. Some techniques or areas of research that fall into this category:

  • Calibration training — one way of improving probability judgements — has a reasonable amount of evidence suggesting it is effective. However, most calibration training focuses on trivia questions — testing whether this training actually improves judgement in real-world scenarios could be promising, and could help to get these techniques applied more widely.
  • ‘Pastcasting’ is a method of achieving more realistic forecasting practice by using real forecasting questions – but ones that have been resolved already.
  • Structured Analytic Techniques (SATs) — e.g. checking key assumptions and challenging consensus views. These seem to be grounded in an understanding of the psychological literature, but few have been tested rigorously (i.e. with a control group and looking at the impact on accuracy). It could be useful to select the techniques that look most promising and test which are actually effective at improving real-world judgements. It might be particularly interesting and useful to pitch some of these techniques directly against each other and compare their levels of success.
  • Methods of aggregating expert judgements, including Roger Cooke's classical model for structured expert judgement (which scores different judgements according to their accuracy and informativeness and then uses these scores to combine them), prediction markets, or the aforementioned Delphi method.
  • Better expert surveying in important areas like AI and biorisk — for example, on the model of the IGM Economic Experts Panel.
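Several of the techniques above — calibration training, forecasting tournaments, and Cooke-style scoring of experts — rest on measuring forecast accuracy, most commonly with the Brier score. A minimal sketch, with a made-up track record:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and outcomes
    (1 = event happened, 0 = it didn't). Lower is better; always
    guessing 0.5 scores 0.25, so skill means beating that."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Four hypothetical predictions and what actually happened:
probs   = [0.9, 0.8, 0.7, 0.2]
results = [1,   1,   0,   0]
print(brier_score(probs, results))  # ~0.145 -- better than chance
```

Because the score rewards both confidence and correctness, it's what lets researchers compare techniques, weight experts by track record, and check whether calibration training transfers to real-world questions.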

A few academic researchers and other groups working in this area:

There may also be some non-academic organisations with funding for, and interest in running, more rigorous tests of known decision-making techniques:

  • Intelligence Advanced Research Projects Activity (IARPA) is probably the biggest funder of research in this area right now, especially with a focus on improving high-level decisions.
  • Open Philanthropy, which also funds 80,000 Hours, sometimes makes grants aimed at testing and developing better decision-making techniques — especially ones aimed at helping people make better decisions about the future.
  • Consultancies with a behavioural science focus, such as the Behavioural Insights Team, may also have funding and interest in doing this kind of research. These organisations generally focus on improving lots of small decisions, rather than on improving the quality of a few very important decisions, but they may do some work on the latter.

2. Doing more fundamental research to identify new techniques

You could also try to do more fundamental research: developing new approaches to improved epistemics and decision making, and then testing them. This is more pressing if you don’t think existing interventions like those listed above are very good.

One example of an open question in this area is: how do we judge 'good reasoning' when we don't have objective answers to a question — that is, when we can't just judge answers or contributions by whether they lead to accurate predictions or answers we know to be true? Two examples of current research programmes related to this question are IARPA's Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) programme and Philip Tetlock's Making Conversations Smarter, Faster (MCSF) project.

The academics and institutions listed above might also be promising places to work or apply for funding if you’re interested in developing new decision-making techniques.

3. Fostering adoption of the best proven techniques in high-impact institutions

The website FiveThirtyEight.com has popularised data-driven forecasting methods.

Alternatively, you could focus on implementing the techniques you currently think are most likely to improve collective decision making (such as Tetlock's research on forecasting, prediction markets, or SATs). If you think one specific problem is particularly important, you might prefer to focus on implementing techniques rather than developing new ones, as implementation is easier to target towards specific areas.

As mentioned above, a large part of ‘fostering adoption’ might first require better understanding the practical constraints and incentives of different groups working on important problems, in order to understand what changes are likely to be feasible. For this reason, working in any of the organisations or groups listed below — with the aim of better understanding the barriers they face and building connections — might be valuable, even if you don’t expect to be in a position to change decision-making practices immediately.

These efforts might be particularly impactful if focused on organisations that control a lot of resources, or organisations working on important problems. Here are some examples of specific places where it might be good to work if you want to do this:

You could also try to test and implement improved decision-making techniques in a range of organisations as a consultant. Some specific organisations where you might be able to do this, or at least build up relevant experience, include:

  • Good Judgment, which runs training for individuals and organisations to apply findings from forecasting science to improve predictions.
  • Working at a specialist 'behavioural science' consultancy, such as the Behavioural Insights Team or ideas42. Some successful academics in this field have also set up smaller consultancies — such as Hubbard Decision Research (which has worked extensively on calibration training).
  • HyperMind, an organisation focused on wider adoption of prediction markets.
  • Going into more general consultancy, with the aim of trying to specialise in helping organisations with better decision making — see our profile on management consulting for more details.

Another approach would be to try to get a job at an organisation you think is doing really important work, and eventually aim to improve their decision making. Hear one story of someone improving a company's decision making from the inside in our podcast with Danny Hernandez.

Finally, you could also try to advocate for the adoption of better practices across government and highly important organisations like AI or bio labs, or for improved decision making more generally — if you think you can get a good platform for doing so — working as a journalist, speaker, or perhaps an academic in this area.

Julia Galef is an example of someone who has followed this kind of path. Julia worked as a freelance journalist before cofounding the Center for Applied Rationality. She co-hosts the podcast Rationally Speaking, and has a YouTube channel with hundreds of thousands of followers. She's also written a book we liked, The Scout Mindset, aimed at helping readers maintain their curiosity and avoid defensively entrenching their beliefs. You can learn more about Julia's career path by checking out our interview with her.

4. Directing more funding towards research in this area

Another approach would be to move a step backwards in the chain and try to direct more funding towards work in all of the aforementioned areas: developing, testing, and implementing better decision-making strategies.

The main place we know of that seems particularly interested in directing more funding towards improving decision-making research is IARPA in the US. Becoming a programme manager at IARPA — if you're a good fit and have ideas about areas of research that could do with more funding — is therefore a very promising opportunity.

There’s also some chance Open Philanthropy will invest more time or funds in exploring this area (they have previously funded some of Tetlock’s work on forecasting).

Otherwise you could try to work at any other large foundation with an interest in funding scientific research where there might be room to direct funds towards this area.

Special thanks to Jess Whittlestone who wrote the original version of this problem profile, and from whose work much of the above text is adapted.

Learn more

Top recommendations

Further recommendations

The post Improving decision making (especially in important institutions) appeared first on 80,000 Hours.

80,000 Hours two-year review: 2021 and 2022 https://80000hours.org/2023/03/80000-hours-two-year-review-2021-and-2022/ Wed, 08 Mar 2023 11:29:04 +0000 https://80000hours.org/?p=80949 The post 80,000 Hours two-year review: 2021 and 2022 appeared first on 80,000 Hours.

We’ve released our review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.

You can find our previous evaluations here. We have also updated our mistakes page.


80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.

Over the past two years, three of four programmes grew their engagement 2-3x:

  • Podcast listening time in 2022 was 2x higher than in 2020
  • Job board vacancy clicks in 2022 were 3x higher than in 2020
  • The number of one-on-one team calls in 2022 was 3x higher than in 2020

Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.

From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.

Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.

The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.

We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.

In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.

We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by roughly 50% in 2023, adding an additional 12 people.

Our baseline non-marketing budget is $8.8m for 2023 and $13.7m for 2024. We’re keen to fundraise above our baseline budget and also interested in expanding our runway – though expect that the amount we raise in practice will be heavily affected by the funding landscape.

We would like to increase the number of people and organisations donating to 80,000 Hours, so if you would consider donating, please contact michelle.hutchinson@80000hours.org.


How to choose where to donate https://80000hours.org/articles/best-charity/ Wed, 09 Nov 2016 18:13:57 +0000 https://80000hours.org/?post_type=article&p=36393 The post How to choose where to donate appeared first on 80,000 Hours.

If you want to make a difference, and are happy to give toward wherever you think you can do the most good (regardless of cause area), how do you choose where to donate? This is a brief summary of the most useful tips we have.

How to choose an effective charity

First, plan your research

One big decision to make is whether to do your own research or delegate your decision to someone else. Below are some considerations.

If you trust someone else’s recommendations, you can defer to them.

If you know someone who shares your values and has already put a lot of thought into where to give, then consider simply going with their recommendations.

But it can be better to do your own research if any of these apply to you:

  • You think you might find something higher impact according to your values than even your best advisor would find (because you have unique values, good research skills, or access to special information — e.g. knowing about a small project a large donor might not have looked into).
  • You think you might be able to productively contribute to the broader debate about which charities should be funded (producing research is a public good for other donors).
  • You want to improve your knowledge of effective altruism and charity evaluation.

Consider entering a donor lottery.

A donor lottery allows you to donate into a fund with other small donors, in exchange for a proportional chance to be able to choose where the whole fund gets donated. For example, you might put $20,000 into a fund in exchange for a 20% chance of being able to choose where $100,000 from that fund gets donated.

Why might you want to do this? If you win the lottery, it’s worthwhile doing a great deal of research into where it’s best to give, to allocate that $100,000 as well as possible. If you don’t win, you don’t have to do any research, and whoever wins the lottery does it instead. In short, it’s probably more efficient for small donors to pool their funds, and for one of them to do in-depth research, rather than for each of them to do a small amount of research. This is because there are some fixed costs of understanding the landscape — it doesn’t generally become 100 times harder to figure out where to donate 100 times the funds.
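Written out as a quick sanity check (using the numbers from the example above), pooling doesn't change how much you allocate in expectation — it only changes who ends up doing the research:

```python
stake, pot = 20_000, 100_000

# Your chance of winning is your stake's share of the pot:
win_prob = stake / pot            # 0.2, i.e. a 20% chance

# Expected amount you direct = P(win) * pot = your original stake,
# so in expectation you give exactly what you put in:
expected_allocation = win_prob * pot
print(win_prob, expected_allocation)
```

The gain comes entirely from the fixed costs of research being paid once, by the winner, rather than by every small donor separately.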

Giving What We Can organises donor lotteries once a year.

If you’re going to do your own research, decide how much you should do.

The more you're giving as a percentage of your annual income, the more time it's worth spending on research. Roughly speaking, a 1% donation might be worth a few hours of work, while a 50% donation could be worth a month of research. On the other hand, the more you earn per hour, the less time it may make sense to spend on independent research, since that time might be better spent simply earning and giving more.

Another factor is how much you expect the research to affect your decisions. For example, if you haven’t thought about this much before, it’s worth doing more research. But even if you have thought about it a lot, bear in mind you could be overconfident in your current views (or things might have changed since you last looked into it), so a bit of research might be a good idea to ensure your donations are doing the most good.

Finally, younger people should sometimes do more research, since it will help them learn about charity evaluation, which will inform their giving in future years (and perhaps their career decisions as well). As a young person, giving 1% per year and spending a weekend thinking about it is a great way to learn about effective giving. If you’re a bit older, giving 10%, and don’t expect your views to change, then perhaps one or two days of research is worth it. If you’re giving more than 10%, more time is probably justified.

Second, choose an effective charity

If you’re doing your own research, we recommend working through these steps:

1. Decide which global problems you think are most pressing right now.

You want to find charities that are working on big but neglected problems, and where there’s a clear route to progress — this is where it’s easiest to have a big impact. If you’re new to 80,000 Hours, learn about how we approach figuring out which global problems are most pressing, or see a list of problems we think especially need attention.

2. Find the best organisations within your top 2–3 problem areas.

Look for charities that are well-run, have a great team and potential to grow, and are working on a justified programme.

Many charitable programmes don’t work, so focus on organisations that do at least one of the following:

  • Implement programmes that have been rigorously tested (most haven’t).
  • Are running pilot programmes that will be tested in the future.
  • Would be so valuable if they worked that it’s worth taking a chance on them — even if the likelihood of success is low. Organisations in this category have a ‘high-risk, high-reward’ profile: for example, those doing scientific research or policy advocacy, or those with the potential to grow very rapidly.

If you’re doing your own intensive research, then at this stage you typically need to talk to people in the area to figure out which organisations are doing good work. One starting point might be our lists of top-recommended organisations.

3. If you have to break a tie, choose the one that’s furthest from meeting its funding needs.

Some organisations already have a lot of funding, and may not have the capacity to effectively use additional funds. For instance, GiveWell has tried to find a good organisation that provides individuals with vaccines to fund, but funders like the Gates Foundation take most of the promising opportunities. You can assess an organisation’s room for more funding by looking at where they intend to spend additional donations, either by reading their plans or talking to them.

This consideration is a bit less important than others: if you support a great organisation working on a neglected problem, then they’ll probably figure out a good way to use the money, even if they get a lot.

Learn more about how to find effective charities

  • When can small donors make donations that are even more effective than large donors? This article lists situations when small donors have an advantage over large donors — ideally you’d choose one of these situations to focus on. It also includes more thoughts on whether to delegate your decision or do your own research.

  • Tips on how to evaluate charities from GiveWell. Bear in mind that the process for evaluating a large organisation is different from evaluating a startup. With large, stable organisations, you can extrapolate forward from their recent performance. With new and rapidly growing organisations, what matters is the long-term potential upside (and their chances of getting there), more than what they’ve accomplished in the past.

We are not experts in charity evaluation — but there are people who are! Not every cause area has dedicated charity evaluators, but recommendations in global health and animal welfare are especially well developed.

Good places to start are the following lists, which are updated annually.

Donating to expert-led funds rather than directly to charities

The best charity to give to is both hard to determine and constantly changing. So, we think a reasonable option for people who don’t have much time for their own research is to give to expert-managed funds that are aligned with your principles. (Our principles are broadly in line with effective altruism, which is why we highlight effective altruism funds below.)

When donating to a fund, you choose how to split your giving across different focus areas — global health, animal welfare, community infrastructure, and the long-term future — and an expert committee in each area makes grants, with the aim of selecting the most effective charities. This is a great way to delegate your decision to people who might have a better view of the options, provided you feel reasonably aligned with the committees.

EA Funds options:

Founders Pledge also has an expert-led fund for climate change.

The Giving What We Can donation platform lists more recommended effective altruism funds:

Donate now
(Note that EA Funds is a project of the Effective Ventures Foundation, our parent charity, and due to our similar views on how to do the most good, we have received grants from both funds in the past.)

You can also see some notes from our president, Benjamin Todd, on how he would decide where to donate.

Topping up grants from other donors you broadly agree with

If you prefer to have more control over where your money is going, you could also directly ‘top up’ a particular past grant made by one of the funds you think is effective, or another large donor, such as Open Philanthropy — read more about this option here:

We think the leading foundation that takes an effective altruism approach to giving is Open Philanthropy.1 (Disclosure: it is our largest funder.) You can learn more about Open Philanthropy’s mindset and research in our interviews with current and former research staff.

Open Philanthropy has far more research capacity than any individual donor, but you can roughly match the cost-effectiveness of its grants without needing to invest much effort at all. One way to do this is by co-funding the same projects, or giving based on what its analysts have learned.

Open Philanthropy often doesn’t want to provide 100% of an organisation’s funding, so that organisations don’t become too dependent on it alone. This creates a need for smaller donors to ‘top up’ its funding.

In light of the above, Open Philanthropy maintains a database of all its grants, which you can filter by year and focus area.

Also, some grantmakers at Open Philanthropy offer annual giving suggestions for individual donors that you can follow.

For instance, if you’re interested in giving to support pandemic preparedness, you can get a list of all its grants in that area, read through some recent ones, and donate to an organisation you find attractive and which still has room to absorb more funding.

Below is a list of Open Philanthropy’s focus areas and associated grants.

Our top-priority areas

Other focus areas we’ve investigated

Focus areas we know less about

Reading the research conducted by other informed donors

Here are some other resources you could draw on:

  • Technical AI safety research: A contributor at the Effective Altruism Forum publishes a review of organisations most years — here’s their December 2021 update.
  • Global health and development: GiveWell identifies and recommends charities that are evidence-based, thoroughly vetted, and underfunded. Many of the staff at GiveWell also write about where they are giving personally, and make suggestions for the public. Here’s their post from 2022.
  • Farmed animal welfare: Animal Charity Evaluators uses four criteria to recommend charities they believe most effectively help animals.
  • ‘S-risks’: The German Effective Altruism Foundation has launched its own expert-advised fund focused on the possibility that future technologies could lead to large amounts of suffering.
  • See all posts about where to donate on 80,000 Hours and on the EA Forum.

Should you give now or later?

It might be more effective to invest your money, grow it, and donate a larger sum later. We have an article on this, or you can read this more recent and technical exploration of the considerations. Here are all our resources on the ‘now vs later’ question.

How should you handle taxes and giving?

If you’re in the US, here’s an introductory guide to giving, taxes, and personal finance, and a more advanced one. You may also be interested in this guide to choosing a donor-advised fund.

If you’re in the UK, here’s a guide to income tax and donations.

You can also see Giving What We Can’s article on tax deductibility of donations by country.

Next steps

The post How to choose where to donate appeared first on 80,000 Hours.

]]>
What 80,000 Hours learned by anonymously interviewing people we respect https://80000hours.org/2020/06/lessons-from-anonymous-interviews/ Thu, 18 Jun 2020 14:48:27 +0000 https://80000hours.org/?p=69994 The post What 80,000 Hours learned by anonymously interviewing people we respect appeared first on 80,000 Hours.

]]>
We recently released the fifteenth and final installment in our series of posts with anonymous answers.

These are from interviews with people whose work we respect and whose answers we offered to publish without attribution.

It features answers to 23 different questions, including ‘How have you seen talented people fail in their work?’ and ‘What’s one way to be successful you don’t think people talk about enough?’.

We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they spanned a wide range of opinions.

For example, one person had seen talented people fail by being too jumpy:

“It seems particularly common in effective altruism for people to be happy to jump ship onto some new project that seems higher impact at the time. And I think that this tendency systematically underestimates the costs of switching, and systematically overestimates the benefits — so you get kind of a ‘grass is greener’ effect.

In general, I think, if you’re taking a job, you should be imagining that you’re going to do that job for several years. If you’re in a job, and you’re not hating it, it’s going pretty well — and some new opportunity presents itself, I think you should be extremely reticent to jump ship.

I think there are also a lot of gains from focusing on one activity or a particular set of activities; you get increasing returns for quite a while. And if you’re switching between things often, you lose that benefit.”

But another thought that you should actually be pretty open to leaving a job after ~6 months:

“Critically, once you do take a new job — immediately start thinking “is there something else that’s a better fit?” There’s still a taboo around people changing jobs quickly. I think you should maybe stay 6 months in a role just so they’re not totally wasting their time in training you — but the expectation should be that if someone finds out a year in that they’re not enjoying the work, or they’re not particularly suited to it, it’s better for everyone involved if they move on. Everyone should be actively helping them to find something else.

Doing something you don’t enjoy or aren’t particularly good at for 1 or 2 years isn’t a tragedy — but doing it for 20 or 30 years is.”

More broadly, the project emphasised the need for us to be careful when giving advice as 80,000 Hours.

In the words of one guest:

“trying to give any sort of general career advice — it’s a fucking nightmare. All of this stuff, you just kind of need to figure it out for yourself. Is this actually applying to me? Am I the sort of person who’s too eager to change jobs, or too hesitant? Am I the sort of person who works themselves too hard, or doesn’t work hard enough?”

This theme was echoed in a bunch of responses (1, 2, 3, 4, 5, 6).

And this wasn’t the only recurring theme — here are another 12:

You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

These quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own.

All entries in this series

  1. What’s good career advice you wouldn’t want to have your name on?
  2. How have you seen talented people fail in their work?
  3. What’s the thing people most overrate in their career?
  4. If you were at the start of your career again, what would you do differently this time?
  5. If you’re a talented young person how risk averse should you be?
  6. Among people trying to improve the world, what are the bad habits you see most often?
  7. What mistakes do people most often make when deciding what work to do?
  8. What’s one way to be successful you don’t think people talk about enough?
  9. How honest & candid should high-profile people really be?
  10. What’s some underrated general life advice?
  11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower?
  12. What are the biggest flaws of 80,000 Hours?
  13. What are the biggest flaws of the effective altruism community?
  14. How should the effective altruism community think about diversity?
  15. Are there any myths that you feel obligated to support publicly? And five other questions.

The post What 80,000 Hours learned by anonymously interviewing people we respect appeared first on 80,000 Hours.

]]>
Policy and research ideas to reduce existential risk https://80000hours.org/2020/04/longtermist-policy-ideas/ Mon, 27 Apr 2020 22:46:38 +0000 https://80000hours.org/?p=69591 The post Policy and research ideas to reduce existential risk appeared first on 80,000 Hours.

]]>
In his book The Precipice: Existential Risk and the Future of Humanity, 80,000 Hours trustee Dr Toby Ord suggests a range of research and practical projects that governments could fund to reduce the risk of a global catastrophe that could permanently limit humanity’s prospects.

He compiles over 50 of these in an appendix, which we’ve reproduced below. You may not be convinced by all of these ideas, but they help to give a sense of the breadth of plausible longtermist projects available in policy, science, universities and business.

There are many existential risks and they can be tackled in different ways, which makes it likely that great opportunities are out there waiting to be identified.

Many of these proposals are discussed in the body of The Precipice. We’ve got a 3-hour interview with Toby you could listen to, or you can get a copy of the book mailed to you for free by joining our newsletter:

Policy and research recommendations

Engineered Pandemics

  • Bring the Biological Weapons Convention into line with the Chemical Weapons Convention: taking its budget from $1.4 million up to $80 million, increasing its staff commensurately, and granting the power to investigate suspected breaches.
  • Strengthen the WHO’s ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.
  • Ensure that all DNA synthesis is screened for dangerous pathogens. If full coverage can’t be achieved through self-regulation by synthesis companies, then some form of international regulation will be needed.
  • Increase transparency around accidents in BSL-3 and BSL-4 laboratories.
  • Develop standards for dealing with information hazards, and incorporate these into existing review processes.
  • Run scenario-planning exercises for severe engineered pandemics.

Unaligned Artificial Intelligence

  • Foster international collaboration on safety and risk management.
  • Explore options for the governance of advanced AI.
  • Perform technical research on aligning advanced artificial intelligence with human values.
  • Perform technical research on other aspects of AGI safety, such as secure containment and tripwires.

Asteroids & Comets

  • Research the deflection of 1 km+ asteroids and comets, perhaps restricted to methods that couldn’t be weaponised, such as those that don’t lead to accurate changes in trajectory.
  • Bring short-period comets into the same risk framework as near-Earth asteroids.
  • Improve our understanding of the risks from long-period comets.
  • Improve our modelling of impact winter scenarios, especially for 1–10 km asteroids. Work with experts in climate modelling and nuclear winter modelling to see what modern models say.

Supervolcanic Eruptions

  • Find all the places where supervolcanic eruptions have occurred in the past.
  • Improve the very rough estimates on how frequent these eruptions are, especially for the largest eruptions.
  • Improve our modelling of volcanic winter scenarios to see what sizes of eruption could pose a plausible threat to humanity.
  • Liaise with leading figures in the asteroid community to learn lessons from them in their modelling and management.

Stellar Explosions

  • Build a better model for the threat including known distributions of parameters instead of relying on representative examples. Then perform sensitivity analysis on that model — are there any plausible parameters that could make this as great a threat as asteroids?
  • Employ blue-sky thinking about any ways current estimates could be underrepresenting the risk by a factor of a hundred or more.

Nuclear Weapons

  • Restart the Intermediate-Range Nuclear Forces Treaty (INF).
  • Renew the New START arms control treaty, due to expire in February 2026.
  • Take US ICBMs off hair-trigger alert (officially called Launch on Warning).
  • Increase the capacity of the International Atomic Energy Agency (IAEA) to verify nations are complying with safeguards agreements.
  • Work on resolving the key uncertainties in nuclear winter modelling.
  • Characterise the remaining uncertainties, then use Monte Carlo techniques to show the distribution of outcome possibilities, with a special focus on the worst-case possibilities compatible with our current understanding.
  • Investigate which parts of the world appear most robust to the effects of nuclear winter and how likely civilisation is to continue there.
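The Monte Carlo suggestion in the list above can be illustrated with a toy sketch. The model and every number below are hypothetical placeholders, not real nuclear winter figures — the point is the method: sample the uncertain inputs from their assumed distributions, propagate them through the model, and inspect the bad tail of the resulting outcome distribution rather than a single point estimate.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def toy_outcome(soot_tg, cooling_per_tg):
    """A stand-in model: global cooling (deg C) from soot injected (Tg).
    Purely illustrative -- real models are far more complex."""
    return soot_tg * cooling_per_tg

samples = []
for _ in range(100_000):
    soot = random.uniform(5, 150)           # uncertain input 1 (hypothetical range)
    sensitivity = random.gauss(0.05, 0.02)  # uncertain input 2 (hypothetical)
    samples.append(toy_outcome(soot, max(sensitivity, 0.0)))

samples.sort()
median = samples[len(samples) // 2]
worst_5pct = samples[int(len(samples) * 0.95)]  # focus on the bad tail
print(f"median cooling: {median:.1f} C; 95th percentile: {worst_5pct:.1f} C")
```

The useful output here is the whole distribution — especially its worst-case percentiles — which is exactly what the bullet above asks Monte Carlo methods to surface.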

Climate

  • Fund research and development of innovative approaches to clean energy.
  • Fund research into safe geoengineering technologies and geoengineering governance.
  • The US should re-join the Paris Agreement.
  • Perform more research on the possibilities of a runaway greenhouse effect or moist greenhouse effect. Are there any ways these could be more likely than is currently believed? Are there any ways we could decisively rule them out?
  • Improve our understanding of the permafrost and methane clathrate feedbacks.
  • Improve our understanding of cloud feedbacks.
  • Better characterise our uncertainty about the climate sensitivity: what can and can’t we say about the right-hand tail of the distribution?
  • Improve our understanding of extreme warming (e.g. 5–20 °C), including searching for concrete mechanisms through which it could pose a plausible threat of human extinction or the global collapse of civilisation.

Environmental Damage

  • Improve our understanding of whether any kind of resource depletion currently poses an existential risk.
  • Improve our understanding of current biodiversity loss (both regional and global) and how it compares to that of past extinction events.
  • Create a database of existing biological diversity to preserve the genetic material of threatened species.

General

  • Explore options for new international institutions aimed at reducing existential risk, both incremental and revolutionary.
  • Investigate possibilities for making the deliberate or reckless imposition of human extinction risk an international crime.
  • Investigate possibilities for bringing the representation of future generations into national and international democratic institutions.
  • Each major world power should have an appointed senior government position responsible for registering and responding to existential risks that can be realistically foreseen in the next 20 years.
  • Find the major existential risk factors and security factors — both in terms of absolute size and in the cost-effectiveness of marginal changes.
    • (Editor’s note: existential risk factors are problems, like a shortage of natural resources, that don’t directly risk extinction, but could nonetheless indirectly raise the risk of a disaster. Security factors are the reverse, and might include better mechanisms for resolving disputes between major military powers.)
  • Target efforts at reducing the likelihood of military conflicts between the US, Russia and China.
  • Improve horizon-scanning for unforeseen and emerging risks.
  • Investigate food substitutes in case of extreme and lasting reduction in the world’s ability to supply food.
  • Develop better theoretical and practical tools for assessing risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
  • Improve our understanding of the chance civilisation will recover after a global collapse, what might prevent this, and how to improve the odds.
  • Develop our thinking about grand strategy for humanity.
  • Develop our understanding of the ethics of existential risk and valuing the long-term future.

Learn more

The post Policy and research ideas to reduce existential risk appeared first on 80,000 Hours.

]]>