
Page Inspect

https://futureoflife.org/
Internal Links: 69
External Links: 49
Images: 73
Headings: 61

Page Content

Title: Home - Future of Life Institute
Description: FLI works on reducing extreme risks from transformative technologies. We are best known for developing the Asilomar AI governance principles.
HTML Size: 422 KB
Markdown Size: 16 KB
Fetched At: November 18, 2025

Page Structure

h4 Featured projects
h1 Fighting for a human future.
h2 Our Mission
h2 Focus Areas
h3 Artificial Intelligence
h3 Biotechnology
h3 Nuclear Weapons
h2 Featured videos
h2 Featured projects
h3 Control Inversion
h4 FLI AI Safety Index: Summer 2025 Edition
h4 Recommendations for the U.S. AI Action Plan
h4 Educating about Autonomous Weapons
h4 AI’s Role in Reshaping Power Distribution
h4 Envisioning Positive Futures with Technology
h4 Perspectives of Traditional Religions on Positive AI Futures
h4 Control Inversion
h4 Digital Media Accelerator
h4 Keep The Future Human
h4 AI Existential Safety Community
h4 Fellowships
h4 RFPs, Contests, and Collaborations
h2 Newsletter
h2 Recent editions
h2 Latest content
h2 Featured content
h3 Posts
h4 The U.S. Public Wants Regulation (or Prohibition) of Expert‑Level and Superhuman AI
h4 Michael Kleinman reacts to breakthrough AI safety legislation
h4 Are we close to an intelligence explosion?
h4 The Impact of AI in Education: Navigating the Imminent Future
h3 Podcasts
h4 Can Defense in Depth Work for AI? (with Adam Gleave)
h4 How We Keep Humans in Control of AI (with Beatrice Erkers)
h4 Why Building Superintelligence Means Human Extinction (with Nate Soares)
h4 Breaking the Intelligence Curse (with Luke Drago)
h4 What Markets Tell Us About AI Timelines (with Basil Halperin)
h3 Papers

Markdown Content

Home - Future of Life Institute




# Fighting for a human future.

AI is poised to remake the world.
Help us ensure it benefits all of us.

Learn more

Take action

Policy & Research ↗

We engage in policy advocacy and research across the United States, the European Union and around the world.

Image: FLI’s Emilia Javorsky at the Vienna Autonomous Weapons Conference 2025

Futures ↗

The Futures program aims to guide humanity towards the beneficial outcomes made possible by transformative technologies.

Image: Our latest Futures project—a series of interactive, research-backed scenarios of how AI could transform the world.

Communications ↗

We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.

Image: Max Tegmark takes the stage on opening night at Web Summit 2024 in Lisbon.

Grantmaking ↗

We provide grants to individuals and organisations working on projects that further our mission.

Image: Mark Brakel attends a dinner hosted by grantees at the Foundation of American Scientists.


Hear from us every month

Join 40,000+ other newsletter subscribers for monthly updates on the work we’re doing to safeguard our shared futures.

## Our Mission

Steering transformative technology towards benefiting life and away from extreme large-scale risks.

We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ensure that technology continues to improve those prospects.

Learn more

## Focus Areas

### Artificial Intelligence

AI can be an incredible tool that solves real problems and accelerates human flourishing, or a runaway, uncontrollable force that destabilizes society, disempowers most people, enables terrorism, and replaces us.

### Biotechnology

Advances in biotechnology can revolutionize medicine, manufacturing, and agriculture, but without proper safeguards, they also raise the risk of engineered pandemics and novel biological weapons.

### Nuclear Weapons

Peaceful use of nuclear technology can help power a sustainable future, but nuclear weapons risk mass catastrophe: escalation of conflict, nuclear winter, global famine, and state collapse.

## Featured videos

The best recent content from us and our partners:

More videos

## Featured projects

Read about some of our current featured projects:

Recently announced

### Control Inversion

Why the superintelligent AI agents we are racing to create would absorb power, not grant it | The latest study from Anthony Aguirre.

Communications

Policy & Research

View all

#### FLI AI Safety Index: Summer 2025 Edition

Seven AI and governance experts evaluate the safety practices of six leading general-purpose AI companies.

#### Recommendations for the U.S. AI Action Plan

The Future of Life Institute proposal for President Trump’s AI Action Plan. Our recommendations aim to protect the presidency from AI loss-of-control, promote the development of AI systems free from ideological or social agendas, protect American workers from job loss and replacement, and more.

#### Educating about Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.


Futures

View all

#### AI’s Role in Reshaping Power Distribution

Advanced AI systems are set to reshape the economy and power structures in society. They offer enormous potential for progress and innovation, but also pose risks of concentrated control, unprecedented inequality, and disempowerment. To ensure AI serves the public good, we must build resilient institutions, competitive markets, and systems that widely share the benefits.

#### Envisioning Positive Futures with Technology

Storytelling plays a significant role in shaping people's beliefs and ideas about humanity's potential future with technology. While there are many narratives warning of dystopia, positive visions of the future are in short supply. We seek to incentivize the creation of plausible, aspirational, hopeful visions of a future we want to steer towards.

#### Perspectives of Traditional Religions on Positive AI Futures

Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups to voice their faith-specific concerns and hopes for a world with AI, and work with them to resist the harms and realise the benefits.


Communications

View all

#### Control Inversion

Why the superintelligent AI agents we are racing to create would absorb power, not grant it | The latest study from Anthony Aguirre.

#### Digital Media Accelerator

The Digital Media Accelerator supports digital content from creators raising awareness and understanding of ongoing AI developments and issues.

#### Keep The Future Human

Why and how we should close the gates to AGI and superintelligence, and what we should build instead | A new essay by Anthony Aguirre, Executive Director of FLI.


Grantmaking

View all

#### AI Existential Safety Community

A community of faculty and AI researchers dedicated to ensuring AI is developed safely. Members are invited to attend meetings, participate in an online community, and apply for travel support.

#### Fellowships

Since 2021 we have offered PhD and Postdoctoral fellowships in Technical AI Existential Safety. In 2024, we launched a PhD fellowship in US-China AI Governance.

#### RFPs, Contests, and Collaborations

Requests for Proposals (RFPs), public contests, and collaborative grants in direct support of FLI internal projects and initiatives.


## Newsletter

Regular updates about the technologies shaping our world

Every month, we bring 40,000+ subscribers the latest news on how emerging technologies are transforming our world. It includes a summary of major developments in our focus areas, and key updates on the work we do.

Subscribe to our newsletter to receive these highlights at the end of each month.

## Recent editions

Over 65,000 Sign to Ban the Development of Superintelligence

Plus: Final call for PhD fellowships and Creative Contest; new California AI laws; FLI is hiring; can AI truly be creative?; and more.

1 November, 2025

AI at the Vatican

Plus: Fellowship applications open; global call for AI red lines; new polling finds 90% support for AI rules; register for our $100K creative contest; and more.

1 October, 2025

RAISE-ing the Bar for AI Companies

Plus: Facing public scrutiny, AI billionaires back new super PAC; our new $100K Keep the Future Human creative contest; Tomorrow's AI; and more.

4 September, 2025

AI safety report cards are out. How did the major companies do?

Plus: Update on EU guidelines; the recent AI Security Forum; how AI increases nuclear risk; and more.

1 August, 2025

View all

## Latest content

The most recent content we have published:

## Featured content

We must not build AI to replace humans.

*A new essay by Anthony Aguirre, Executive Director of the Future of Life Institute*

Humanity is on the brink of developing artificial general intelligence that exceeds our own. It's time to close the gates on AGI and superintelligence... before we lose control of our future.

Read the essay ->

### Posts

#### The U.S. Public Wants Regulation (or Prohibition) of Expert‑Level and Superhuman AI

Three‑quarters of U.S. adults want strong regulations on AI development, preferring oversight akin to that of pharmaceuticals rather than industry "self‑regulation."

19 October, 2025

Policy, Recent News

#### Michael Kleinman reacts to breakthrough AI safety legislation

FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum.

3 October, 2025

AI Policy, Statement

#### Are we close to an intelligence explosion?

AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.

21 March, 2025

AI, Existential Risk

#### The Impact of AI in Education: Navigating the Imminent Future

What must be considered to build a safe but effective future for AI in education, and for children to be safe online?

13 February, 2025

AI, Ethics, Guest post

View all

### Podcasts

Available on all podcast platforms:

Spotify, Apple Music, Pocket Casts, Podcast Addict, and more.

Latest

#### Can Defense in Depth Work for AI? (with Adam Gleave)

3 October, 2025

#### How We Keep Humans in Control of AI (with Beatrice Erkers)

26 September, 2025

#### Why Building Superintelligence Means Human Extinction (with Nate Soares)

18 September, 2025

#### Breaking the Intelligence Curse (with Luke Drago)

10 September, 2025

#### What Markets Tell Us About AI Timelines (with Basil Halperin)

1 September, 2025

View all

### Papers

#### AI Safety Index: Summer 2025 (2-Page Summary)

July 2025

Open file

#### Staffer’s Guide to AI Policy: Congressional Committees and Relevant Legislation

March 2025

Open file

#### Recommendations for the U.S. AI Action Plan

March 2025

Open file

View all

## Use your voice

Protect what's human.

Big Tech is racing to build increasingly powerful and uncontrollable AI systems designed to replace humans. You have the power to do something about it.

Take action today to protect our future:

Take Action ->

## Our people

A team committed to the future of life.

Our staff represents a diverse range of expertise, having worked in academia, government, and industry. Their backgrounds range from machine learning to medicine and everything in between.

Meet our team

Open Roles

Careers

## Our History

We’ve been working to safeguard humanity’s future since 2014.

Learn about FLI’s work and achievements since its founding, including historic conferences, grant programs, and open letters that have shaped the course of technology.

Explore our history ->

# Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and focus areas.

View previous editions


© 2025 Future of Life Institute. All rights reserved.
