Emergency pod: Did OpenAI give up, or is this just a new trap? (with Rose Chan Loui)
https://80000hours.org/podcast/episodes/rose-chan-loui-openai-nonprofit-control/ (8 May 2025)

Reading list for understanding AI and how it could be dangerous
https://80000hours.org/2025/05/reading-list-for-understanding-ai-and-how-it-could-be-dangerous/ (8 May 2025)

Want to get up to speed on the state of AI development and the risks it poses? Our site provides an overview of key topics in this area, but obviously there’s a lot more to learn.

We recommend starting with the following blog posts and research papers. (Note: we don’t necessarily agree with all the claims the authors make, but still think they’re great resources.)

Key blog posts

Scaling up: how increasing inputs has made artificial intelligence more capable by Veronika Samborska at Our World in Data

The article concisely explains how AI has gotten better in recent years primarily by scaling up existing systems rather than by making more fundamental scientific advances.

How we could stumble into AI catastrophe by Holden Karnofsky on Cold Takes

Holden Karnofsky makes the case that if transformative AI is developed relatively soon, it could result in global catastrophe.

AI could defeat all of us combined by Holden Karnofsky on Cold Takes

Read this to understand why it’s plausible that AI systems could pose a threat to humanity, if they were powerful enough and doing so would further their goals.

Machines of loving grace — How AI could transform the world for the better by Anthropic CEO Dario Amodei

It’s important to understand why there’s enthusiasm for building powerful AI systems, despite the risks. This post from an AI company CEO paints a positive vision for powerful AI.

Computing power and the governance of AI by Lennart Heim et al. at the Centre for the Governance of AI

Experts in AI policy argue that governing computational power could be a key intervention for reducing risks, though it also raises risks of its own.

Why AI alignment could be hard with modern deep learning by Ajeya Cotra on Cold Takes

This piece explains why existing AI techniques may make it hard to create powerful AI systems that remain under human control over the long term.

The most important graph in AI right now: time horizon by Benjamin Todd

How would we know if AI is really on track to make big changes in society? Benjamin Todd argues that the length of tasks AI can do is the most important metric to look at.

Key research papers

Preparing for the intelligence explosion by William MacAskill and Fin Moorhouse at Forethought Research

These authors argue that an intelligence explosion could compress a century of technological progress into a decade, creating numerous grand challenges beyond just AI alignment that humanity must prepare for now.

Can scaling continue to 2030? by Jaime Sevilla et al. at Epoch AI

Available data suggests AI companies can continue scaling their systems through 2030, primarily facing constraints in power availability and chip manufacturing capacity.

Is power-seeking AI an existential risk? by Joe Carlsmith

This is one of the central papers putting together the argument that extremely powerful AI systems could pose an existential threat to humanity.

Scheming AIs: Will AIs fake alignment during training in order to get power? by Joe Carlsmith

Here’s an in-depth argument that it may be hard to create AI systems without incentivising them to deceive us.

AI 2027 by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean

This speculative scenario explains how superhuman AI might be developed and deployed in the near future.

Gradual disempowerment by Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud

Even if we avoid the risks of power-seeking and scheming AIs, there may be other ways AI systems could disempower humanity.

AI tools for existential security by Lizka Vaintrob and Owen Cotton-Barratt

While AI systems may pose existential risks, these authors argue that we may also be able to develop AI technologies that reduce those risks.

Taking AI welfare seriously by Robert Long, Jeff Sebo, et al.

This paper makes a thorough case that we shouldn’t worry only about the risks AI poses to humanity; we may also need to consider the interests of future AI systems themselves.

Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it
https://80000hours.org/podcast/episodes/ian-dunt-why-governments-fail/ (2 May 2025)

Bonus: Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests
https://80000hours.org/podcast/episodes/concrete-unconventional-career-advice/ (24 Apr 2025)

AI-enabled power grabs
https://80000hours.org/problem-profiles/ai-enabled-power-grabs/ (24 Apr 2025)

Why is this a pressing problem?

New technologies can drastically shift the balance of power in society. Great Britain’s early dominance in the Industrial Revolution, for example, helped empower its global empire.

With AI technology rapidly advancing, there’s a serious risk that it might enable an even more extreme global power grab.

Advanced AI is particularly concerning because it could be controlled by a small number of people, or even just one. An AI could be copied indefinitely, and with enough computing infrastructure and a powerful enough system, a single person could control a virtual or literal army of AI agents.

And since advanced AI could potentially trigger explosive growth in the economy, technology, and intelligence, anyone with unilateral control over the most powerful systems might be able to dominate the rest of humanity.

One factor that heightens this threat is the possibility of secret loyalties. It may be possible to create AI systems that appear to have society’s best interests in mind but are actually loyal to just one person or a small group. As these systems are deployed throughout the economy, government, and military, they could constantly seek opportunities to advance the interests of their true masters.

Here are three possible pathways through which AI could enable an unprecedented power grab:

  1. AI developers seize control — in this scenario, actors within a company or organisation developing frontier AI systems use their technology to seize control. This could happen if they deploy their systems widely across the economy, military, and government while those systems retain secret loyalty to them. Or they could create internal systems powerful enough to gather the wealth and resources needed to launch a hostile takeover of other centres of power.
  2. Military coups — as militaries incorporate AI for competitive advantage, they introduce new vulnerabilities. AI-controlled weapons systems and autonomous military equipment could be designed to follow orders unquestioningly, without the formal and informal checks on power that militaries traditionally provide — such as the potential for mutiny in the face of unlawful orders. A military leader or other actor (including potentially hostile foreign governments) could find a way to ensure the military AI is loyal to them, and use it to assert far-reaching control.
  3. Autocratisation — political leaders could use advanced AI systems to entrench their power. They may be elected or unelected to start, but either way, they could use these systems to undermine any potential political challenger. For example, they could use enhanced surveillance and law enforcement to subdue the opposition.

Extreme power concentrated in the hands of a small number of people would pose a major threat to the interests of the rest of the world. It could even undermine the potential for a prosperous future, since the course of events may depend on the whims of whoever happens to have dictatorial aspirations.

AI could also likely be used to broadly improve governance, but we’d expect scenarios in which it enables hostile or illegitimate power grabs to be bad for the future of humanity.

What can be done to mitigate these risks?

We’d like to see much more work done to figure out the best methods for reducing the risk of an AI-enabled power grab. Several approaches that could help include:

  • Safeguards on internal use: Implement sophisticated monitoring of how AI systems are used within frontier companies, with restrictions on access to “helpful-only” models that will follow any instructions without limitations.
  • Transparency about model specifications: Publish detailed information about how AI systems are designed to behave, including safeguards and limitations on their actions, allowing for external scrutiny and identification of potential vulnerabilities.
  • Sharing capabilities broadly: Ensure that powerful AI capabilities are distributed among multiple stakeholders rather than concentrated in the hands of a few individuals or organisations. This creates checks and balances that make power grabs more difficult. Note, though, that there are also risks to having powerful AI capabilities distributed widely, so the competing considerations need to be carefully weighed.
  • Inspections for secret loyalties: Develop robust technical methods to detect whether AI systems have been programmed with hidden agendas or backdoors that would allow them to serve interests contrary to their stated purpose.
  • Military AI safeguards: Require that AI systems deployed in military contexts have robust safeguards against participating in coups, including principles against attacking civilians and multiple independent authorisation requirements for extreme actions.

For much more detail on this problem, listen to our interview with Tom Davidson.

Open position: Engagement Specialist
https://80000hours.org/2025/04/open-position-engagement-specialist/ (23 Apr 2025)

Summary

We’re looking for a new Engagement Specialist to help us increase engagement among our target audience by managing our outreach channels, contributing to our growth strategy, and helping to deploy our yearly budget of ~$3 million.

Location: London, UK (preferred). We’re open to remote candidates and can support UK visa applications.

Salary: Varies depending on fit, location, and experience. An applicant in London with good fit and no relevant experience would be paid approx. £60,000; an applicant in London with excellent fit and 4 years of relevant experience would be paid approx. £82,000.

To apply, please complete this application form by 11th May 2025.

Why this role?

80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems, especially mitigating risks from advanced artificial intelligence.

Since we started investing much more in growth in 2022, we’ve increased the hours that people spend engaging with our content by 6.5x, reached millions of new users across different platforms, and now have over 500,000 newsletter subscribers. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

Even so, it seems like there’s considerable room to reach more people — and there are many exciting growth projects we’re unable to take on because of low capacity on our team. So, we’re looking for a new Engagement Specialist to help us ambitiously increase the amount of engagement with our advice and our impact.

We anticipate that the right person in this role could help us massively increase our readership, and lead to hundreds or thousands of additional people pursuing high-impact careers.

As some indication of what success in the role might look like, over the next couple of years you might have:

  • Cost-effectively deployed >$5 million reaching people from our target audience.
  • Reached hundreds of millions of people on social media with key messages.
  • Partnered with some of the largest and most well-regarded YouTube channels (for instance, we have run sponsorships with Veritasium, Kurzgesagt, and Wendover Productions).
  • Designed efficient digital ad campaigns that caused thousands of hours of engagement on our website.
  • Driven hundreds of thousands of additional newsletter subscriptions, leading to many of those people changing to a more impactful career.
  • Launched a new outreach channel that causes us to double the proportion of people who are aware of 80,000 Hours within a particular target audience segment.

We think this role could be very impactful if you’re excited about 80,000 Hours’ theory of change.

The main reason is that 80,000 Hours has a very strong track record of helping people find high-impact careers. This role lets you be a multiplier on the impact of 80,000 Hours as a whole, by finding larger and more relevant audiences who might be interested in the advice. We think this makes the role highly leveraged.

Since we are a nonprofit and we aren’t selling a product, this is a fairly nontraditional role. We’d therefore encourage you to apply, even if you aren’t otherwise looking for roles in growth, outreach, or marketing, and don’t have prior relevant experience.

For more context on 80,000 Hours’ new direction, read our recent post on why we’re shifting our strategic approach to focus more on AGI.

Responsibilities

We’re looking for a flexible Engagement Specialist, who will take on responsibilities such as:

  • Help us scale up and improve the outreach channels that are currently most effective at increasing engagement. For example, you could run new campaigns aimed at particularly important segments of our target audience, improve our messaging through user research or ideation, or make the case for how quickly (or slowly!) we should scale up investment. These channels are:
    • Sponsorships with people who have large audiences, primarily on YouTube (see examples here, here, and here).
    • Advertisements on social media sites (e.g. Instagram), Google search, and elsewhere (digital marketing — see examples here, here, and here).
  • Write or design promotional material for new releases on our website and podcast (such as the “hook” accompanying podcast releases, or the titles and thumbnails for YouTube video releases), or film videos to post on social media.
  • Improve our measurement and evaluation of our attempts to grow our audience, using analytics platforms such as Mixpanel, Plausible.io, and Google Analytics.
  • Carry out research on relevant audience segments, and help us decide who we most need to reach.
  • Help design pages on the website that we use for outreach (example).
  • Manage the promotion of our book giveaway.
  • Take on experiments with other new outreach channels or initiatives.
  • Manage our budget (which currently totals just under $3 million per year) within your areas of responsibility.

Note that this role will not primarily involve writing for the website (though you might write and publish some especially growth-relevant pages). If you’re most interested in that, you should apply for our writer-researcher position instead.

About you

We’re looking for someone who has the following traits:

  • Mission-drivenness: A commitment to helping advance 80,000 Hours’ mission — particularly mitigating risks from advanced AI. You don’t need to already know a lot about AI, but you should be interested in learning more about the potential risks.
  • Good “taste”: a willingness to think carefully about how we can best grow our audience and why, and to exercise that judgement
  • Great communication skills — in particular, the ability to clearly write out your thinking and your uncertainties when making decisions
  • Flexibility: Excitement about trying out and evaluating new approaches, platforms, and messages
  • Ambition: An ambitious approach to the role, with enthusiasm for helping grow our impact
  • Conscientiousness: Good organisation skills, and the ability to competently manage multiple priorities at work
  • Data-drivenness: A data-driven and results-oriented attitude to your work, aiming to get the best outcomes for our mission

Ideally, you’d also have the following traits — but we encourage you to apply even if they don’t describe you!

  • You have some previous experience relevant to this role. (Please note we definitely do not expect any candidate to have all of these.) Here are some kinds of experience we’d be especially excited about:
    • Measurement and evaluation of a product or programme; data science or statistics
    • Influencer marketing, or experience with anything to do with online content creation or monetisation
    • Digital marketing, especially performance marketing, design, copywriting, and/or experience with Meta and Google ads
    • Communications, including PR, media, campaigning, science communications, etc.
    • Other marketing experience; for example, marketing for a university society
    • Product experience, especially where this includes launching and attracting or maintaining a lot of users
    • Social media, including posting regularly on your own social media and/or blogging platforms
  • You are creative, and good at generating lots of new ideas.
  • You currently stay up to date on news relevant to AI risk, or would be excited to start.
  • You really “get” our target audience (talented, ambitious, altruistic 18–35 year olds), and/or are excited to learn more about them and their interests.
  • Though we expect most candidates won’t have it, we’re especially excited about candidates with experience writing for or working with Chinese audiences.

Role details

The new Engagement Specialist would be managed by our current Director of Growth, Bella Forristal. Our existing team focused on growing our reach and engagement consists of just Bella and Nik Mastroddi.

This is a full-time role, but staff can work flexible hours — i.e. whatever schedule (consistent with full-time status) will allow them to be most personally effective.

We would prefer for you to work in-person — either based in London or able to regularly visit (we can support UK visa applications if needed). However, we are open to remote applications.

The salary will vary depending on your fit, location, and experience; however, to give a rough sense, an applicant in London with good fit and no relevant experience would be paid approx. £60,000. An applicant in London with excellent fit and 4 years of relevant experience would be paid approx. £82,000.

Our benefits include:

  • The option to use 10% of your time for self development or other self-directed projects
  • 25 days of paid holiday, plus bank holidays
  • Standard UK pension, with 3% contribution from employer
  • Flexible work hours and location
  • Private medical insurance
  • Long-term disability insurance
  • Up to 14 weeks of fully paid parental leave
  • Childcare allowance for children under 5
  • Coverage of work-related expenses like travel to conferences and office equipment
  • £5,000 annual mental health support allowance
  • £5,000 annual self-development budget
  • Gym, shower facilities, and free food provided at our London office

We have a really awesome team and are excited for more people to join us in our mission to help people use their careers to solve the world’s most pressing problems.

Evaluation process

To apply, please fill in this form. If you have any problems submitting the form, please send your CV to bella@80000hours.org.

Our evaluation process will vary a bit depending on the candidate, but is likely to include a written work sample, an interview, and a multi-day in-person trial. We offer payment for work samples and trials, conditional on your location and right to work in the UK.

We’re aware that factors like gender, race, and socioeconomic background can affect people’s willingness to put themselves forward for roles for which they meet many but not all the suggested attributes. We’d especially like to encourage people from underrepresented backgrounds to express their interest.

If you’re unsure whether you meet our criteria, I’d still strongly encourage you to express interest, or to reach out to bella@80000hours.org if you’d like to check first.

Apply here

Tom Davidson on how AI-enabled coups could allow a tiny group to seize power
https://80000hours.org/podcast/episodes/tom-davidson-ai-enabled-human-power-grabs/ (16 Apr 2025)

Expression of interest: Shortform Video Editing Contractor
https://80000hours.org/2025/04/expression-of-interest-shortform-video-editing-contractor/ (15 Apr 2025)

Summary

Applications are no longer being actively processed, though we’d still welcome expressions of interest via the application form for future projects.

We’re looking for a video editor who can create excellent shortform videos about transformative AI and its risks from footage, audio, and notes.

Location: Berkeley, CA or PST preferred, but we’re open to remote candidates.

Rate: Varies depending on skills, fit, and experience. A skilled applicant with 5 years of relevant experience would be paid approx. $50/hour.

To apply, please complete this application form.

More detail below.

Help make spectacular videos that reach a huge audience.

We’re looking for someone to contract as a video editor who can quickly learn our style and make our videos successful on shortform video platforms. We want these videos to start changing and informing the conversation about transformative AI and AGI.

Why this role?

In 2025, 80,000 Hours is planning to focus especially on explaining why and how our audience can help society safely navigate the transition to a world with transformative AI. Right now, not nearly enough people are talking about these ideas and their implications.

A great video program could change this. Time on the internet is increasingly spent watching video, and for many people in our target audience, video is the main way that they both find entertainment and learn about topics that matter to them.

To get our video program off the ground, we need great editors who understand our style and vision and can work quickly and to a high standard.

Responsibilities

  • Be able to work at least 10 hours a week
  • Be able to turn around drafts of edited shortform videos in 24-48 hours
  • Take feedback well
  • Learn our style and adapt to it quickly

About you

We’re looking for someone who ideally has:

  • Experience making shortform videos
  • Experience with CapCut, Descript, or similar tools
  • Good taste in shortform video
  • Knowledge of the current trends and what succeeds in shortform video
  • The ability to work quickly and take feedback well
  • Familiarity with AI safety

If you don’t have experience here but think you’d be a great fit, feel free to apply anyway; try out editing some footage you take of yourself to see how the work suits you.

Role details

This will be a contracting role, paid by the hour, with other details specific to each contract.

Application process

To apply, please complete this application form. Applications will be evaluated on a rolling basis. We may hire multiple contractors.

Should you quit your job — and work on risks from AI?
https://80000hours.org/2025/04/work-on-ai-risks/ (11 Apr 2025)

Within five years, there’s a real chance that AI systems will be created that cause explosive technological and economic change. This would increase the risk of disasters like war between the US and China, concentration of power in a small minority, or even total loss of human control over the future.

Many people — with a diverse range of skills and experience — are urgently needed to help mitigate these risks.

I think you should consider making this the focus of your career.

This article explains why.

Get our guide to high-impact careers in the age of AGI

We’re writing a guide on how to use your career to help make AGI go well. Join our newsletter to get updates:

And, if you’d like to work on reducing catastrophic risks from AI, apply to speak to us one-on-one.

1) World-changing AI systems could come much sooner than people expect

In an earlier article, I explained why there’s a significant chance that AI could contribute to scientific research or automate many jobs by 2030. Current systems can already do a lot, and there are clear ways to continue improving them in the coming years. Forecasters and experts widely agree that the probability of widespread disruption is much higher than it was even just a couple of years ago.

Figure: AI systems are rapidly becoming more autonomous, as measured by the METR time horizon benchmark (updated in April 2025 to include o3). The most recent models, such as o3, seem to be on an even faster trend that started in 2024.

2) The impact on society could be explosive

People say AI will be transformative, but few really get just how wild it could be. Here are three types of explosive impact we might see, which are now all supported by credible theoretical and empirical research:

  • The intelligence explosion: it might take only a few years to go from developing advanced AI to having billions of AI remote workers, making cognitive labour available for pennies.

  • The technological explosion: empirically informed estimates suggest that, with sufficiently advanced AI, 100 years of technological progress in 10 is plausible. That means advanced biotech, robotics, novel political philosophies, and more could arrive much sooner than commonly imagined.

  • The industrial explosion: if AI and robotics automate industrial production, that would create a positive feedback loop, meaning production could plausibly end up doubling each year. Within a decade of reaching that growth rate, humanity would harvest all available solar energy on Earth and start to expand into space (see the arithmetic sketch just below).
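
To make the compounding concrete, here is the arithmetic behind that last bullet (a minimal sketch, assuming only the annual doubling described above; the starting level of production doesn’t matter):

\[
\text{output}(n) = 2^{n} \times \text{output}(0)
\qquad\Rightarrow\qquad
\text{output}(10) = 2^{10} \times \text{output}(0) = 1024 \times \text{output}(0)
\]

So a decade of annual doubling means roughly a thousandfold increase in production. For comparison, a decade of ordinary 3% growth yields only about a 1.34x increase (since 1.03^10 ≈ 1.34), which is why a sustained doubling rate makes claims about harvesting all available solar energy stop sounding hyperbolic.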

Along the way, we could see rapid progress on many key technological challenges — like curing cancer and developing green energy. But…

Figure: The number of AI models is growing extremely fast. If they can start to substitute for scientific researchers, then the effective size of the scientific community would grow at that rate, leading to faster scientific progress. (Source: Preparing for the intelligence explosion, Forethought Research)

3) Advanced AI could bring enormous dangers

We’ve written before about how it might be hard to keep control of billions of AI systems thinking 10x faster than ourselves. But that’s only the first hurdle: the developments above could bring further dangers of their own.

4) Under 10,000 people work full-time reducing the risks

Although it can feel like all anyone talks about is AI, only a few thousand people worldwide work full-time on navigating some of the most important aspects of the risks.

This is tiny compared to the millions working on more established issues like cancer or climate change, or the number of people working to deploy the technology as quickly as possible.

If you switch to this issue now, you could be among the first 10,000 people helping humanity navigate what may be one of the most important transitions in history.

5) There are more and more concrete jobs you could take

A couple of years ago, there weren’t many clearly defined projects, positions or training routes to work on this issue. Today, there are more and more concrete ways to help. For example:

We’ve compiled a list of 30+ important organisations in the space, over 300 open jobs, and lists of fellowships, courses, internships, etc., to help you enter the field. Many of these roles are well paid too.

You don’t need to be technical or even focus directly on AI — we need people to build organisations, work in government and communications, and contribute many other skills. And AI is going to affect every aspect of society, so people with knowledge of all those aspects are needed (e.g. China, economics, pandemics, international governance, law, etc.).

The field was also small until recently, so there aren’t many people with deep expertise. That means it’s often possible to spend about 100 hours reading and speaking to people, and then find a job. (And if you have a quantitative background, it’s possible to get to the technical forefront in under a year.) Our team can help you figure out how to transition.

6) The next five years seem crucial

I’ve argued the chance of building powerful AI is unusually high between now and around 2030, and declines thereafter. This makes the next five years especially critical.

That creates an additional reason to switch soon: if transformative AI emerges in the next five years, you’ll be part of one of the most important transitions in human history. If it doesn’t, you’ll have time to return to your previous path, while having learned about a technology that will still shape our world in significant ways.

The bottom line

If you’re able to find a role that fits, and that helps mitigate these risks (especially over the next 5–10 years), that’s probably the highest expected impact thing you can do.

But I don’t think everyone reading this should work on AI.

  1. You might not have the flexibility to make a large career change right now. (In that case, you could look to donate, spread clear thinking about the issue, or prepare to switch when future opportunities arise.)
  2. There are other important problems, and you might have far better fit for a job focused on another issue.
  3. You might be put off by the (admittedly huge) uncertainties about how best to help, or be less convinced by the arguments that the problem is pressing.

However, I’d encourage almost everyone interested in impactful careers to seriously consider it. And if you’re unsure you’ll be able to find something, keep in mind there’s a very wide range of approaches and opportunities, and they’re expanding all the time.

What’s next?

See all our resources on transformative AI, including articles, expert interviews, and our job board:

View all resources

We’re writing a new guide on how to use your career to help make AGI go well. Join our newsletter to get updates:

Finally, if you’d like to work on reducing risks from advanced AI, apply to speak with our 1-1 advising team.

Bonus: Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys
https://80000hours.org/podcast/episodes/mental-health-impactful-careers-compilation/ (11 Apr 2025)