Keeping track of my donations
My giving philosophy: “My case for donating to small, new efforts”
I think the average donor has very little marginal impact when donating to big, established efforts, whether in traditional philanthropy (e.g. Greenpeace) or in effective altruism (e.g. the Against Malaria Foundation). The biggest impact comes from the philanthropic equivalent of angel investing: funding novel initiatives that could be extremely impactful in relevant cause areas but are still underexplored and underfunded.
Reflecting on my own giving, donating to projects such as Ocean Cleanup, NewScience, or Taimaka within the first few months of their existence was probably much more impactful than donating to big, established efforts. I would also recommend that other donors and foundations fund novel, potentially impactful initiatives. Once a billionaire or a big foundation is funding a project, it probably doesn’t need your donations anymore.
In the book Effective Altruism: Philosophical Issues (Engaging Philosophy), Mark Budolfson and Dean Spears make this case eloquently in their paper “The Hidden Zero Problem: Effective Altruism and Barriers to Marginal Impact”. I highly recommend reading the book.
I do think that efforts such as EA Funds or ACX Grants are a decent passive way to have a similar impact, since they support these small, novel projects.
Chronology of what I thought was worth supporting (with a range of small amounts)
- to keep the habit of giving, learn about different efforts, and calibrate my giving
2023
- Empirical research into AI consciousness and moral patienthood
- Avoiding Incentives for Performative Prediction in AI
- Long term future fund
- Holly Elmore organizing people for a frontier AI moratorium
- Activation vector steering with BCI
- Empowering AI Governance - Grad School Costs Support for Technical AIS Research
- Build an AI Safety Lab at Oxford University
- AI Alignment Research Lab for Africa
- Introductory resources for Singular Learning Theory
- WhiteBox Research: Training Exclusively for Mechanistic Interpretability
- Compute and other expenses for LLM alignment research
- The Rethink Priorities Existential Security team: Research Fellow hire
- Optimizing clinical Metagenomics and Far-UVC implementation
- Run five international hackathons on AI safety research
- Apollo Research: Scale up interpretability & behavioral model evals research
- Automated Interpretability and Memory Management in Transformers
- Agency and (Dis)Empowerment by Damiano Fornasiere
- Discovering latent goals by Lucy Farnik
- Scoping Developmental Interpretability by Jesse Hoogland
- Targeted Interpretability Work
- Joseph Bloom - Independent AI Safety Research on offline-RL agents using mechanistic interpretability in order to understand goals and agency.
- Lightcone Infrastructure/LessWrong
- EA long-term future fund
- The Inside View Podcast
- Metacrisis quadratic donation round
- EA community infrastructure fund
- EA long-term future fund
- EA global health and development fund
- EA animal welfare fund
- ARC Evaluations Project
- FAR AI
- EA infrastructure fund
- EA long term future fund
- European Network for AI Safety (ENAIS)
- Alignment Research Center
- Rethink priorities
- The Center for AI Safety (CAIS)
- Center on Long-Term Risk
- Turkey and Syria Earthquake Relief Fund
- Berkeley Existential Risk Initiative
- Taimaka
- Nuclear Threat Initiative
- Institute for Meaning Alignment
- Qualia research institute
- Global poverty fund
- Helen Keller International
- GiveWell recommendation
- EA infrastructure fund
- LEVF: Mouse rejuvenation
- EA long term future fund
- EA Germany + Effektiv Spenden
- Noora Health
- Patreon - The Inside View, AXRP Podcast, Rob Miles/AI safety, The Sheekey Science Show/longevity, The Roots of Progress, Andy Matuschak/Creating tools for thought, Rational Animations, Isaac Arthur/SciFi YouTube
2022
- Long-Term Future Fund by Giving What We Can
- Effective Altruism Infrastructure Fund by Giving What We Can
- GiveWell
- Berkeley Existential Risk Initiative
- Material Innovation Initiative
- Spark Climate - Ryan’s top recommendation
- Malengo: facilitates international educational migration (starting with Uganda<>Germany and Ukraine<>Germany; cause exploration)
- 100+ Gitcoin grants I’ve supported via quadratically matched donations, spanning open and decentralized science, climate, and open source, matched with approx. $20k+ (see the quadratic-matching sketch after this year’s list)
- Long-Term Future Fund: Donate to people or projects that aim to improve the long-term future, such as by reducing risks from artificial intelligence and engineered pandemics.
- Nuclear Threat Initiative
- Taimaka
- The Effective Altruism Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.
- protect reproductive rights
- Clean air task force
- Founders Pledge (Climate Change Fund)
- ACX Grants
- ~100+ projects I supported through Gitcoin (from open source to longevity), matched with approx. ~$25k+
- Kickstart
- MIRI
- Clean Air Task Force
- Patreon - Rob Miles/AI safety, The Sheekey Science Show/longevity, The Roots of Progress, Andy Matuschak/Creating tools for thought, Rational Animations, Isaac Arthur/SciFi YouTube
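Several entries above and below mention quadratically matched Gitcoin donations. Here is a minimal sketch of how quadratic funding matching works, assuming the standard Buterin/Hitzig/Weyl formula; the project names and amounts are hypothetical, purely for illustration.

```python
from math import sqrt

def quadratic_match(contributions, matching_pool):
    """contributions: {project: [individual donation amounts]}.
    A project's raw match is (sum of sqrt(d))^2 minus the sum of d;
    the matching pool is then split in proportion to those raw matches."""
    raw = {
        project: sum(sqrt(d) for d in donations) ** 2 - sum(donations)
        for project, donations in contributions.items()
    }
    total = sum(raw.values()) or 1  # avoid division by zero
    return {project: matching_pool * r / total for project, r in raw.items()}

# Many small donors attract far more matching than one large donor
# giving the same total amount (hypothetical projects):
example = {
    "open_source_tool": [1] * 100,  # 100 donors giving $1 each
    "single_backer": [100],         # 1 donor giving $100
}
print(quadratic_match(example, matching_pool=1000))
# {'open_source_tool': 1000.0, 'single_backer': 0.0}
```

This is why small individual donations go further in these rounds: the mechanism rewards broad community support rather than total dollars raised.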
2021: see all and donate easily through every.org, or Endaoment for direct crypto donations
- The Against Malaria Foundation
- The Knowledge Society
- Evidence Action
- Carbon180
- Khan Academy
- Black Girls CODE
- Cool Earth
- Science and Tech Future
- Sightsavers
- Taimaka Project
- 80,000 Hours
- Founders Pledge
- Animal Charity Evaluators
- MAPS (mental health)
- Wikipedia
- Generation Pledge
- TerraPraxis
- SilverLining (climate)
- StrongMinds
- Rethink Charity
- Mars Society
- Malaria Consortium
- Clean Air Task Force
- Centre for Effective Altruism
- Founders Pledge Science & Tech
- Legal Priorities Project
- Nuclear Threat Initiative
- StrongMinds
- Center for Health Security
- Effective Altruism Foundation
- Our World in Data
- Future of Life Institute
- Founders Pledge Patient Philanthropy
- Center for Human-Compatible AI
- Rethink Priorities
- Machine Intelligence Research Institute
- Global Health and Development Fund
- Climate Change Fund
- Berkeley Existential Risk Initiative
- Qualia Research Institute
- FDP
- newscience.org
- ~70 projects I supported through Gitcoin (from open source to longevity), matched with approx. ~$12k+
2020
- 80,000 Hours
- CEA
- Partei für Gesundheitsforschung
- EA Fund Global Poverty
- EA Fund Long-term future
- EA Fund EA Infrastructure
- SENS
- Our World in Data
- MAPS
- StrongMinds
- Berkeley X Risk, SENS, CEA, …
- ~10 projects I supported through Gitcoin (from open source to longevity), matched with approx. ~$1k+
2018-2019
- EA Fund Global Poverty
- EA Fund Long-term future
- EA Fund EA Infrastructure
- EA Fund Animal Suffering
- SENS
2017 and before
- Ocean Cleanup
- NABU