Nearly every major “victory” that humanity has experienced over the past ten thousand years—from agriculture to the eradication of smallpox—has come from human intelligence.
Nearly every major problem that humanity will face over the next century—from bioengineered weapons to AI to climate change—will also be the result of human intelligence.
As our species builds up a greater foundation of knowledge and develops ever-more-powerful tools, the stakes are only going up. Compare the best and the worst that a single human could accomplish with the resources available in the year 1018 to the best and the worst that a single human could accomplish in 1918, or to the best and the worst that could be accomplished in 2008. Over the coming decades, our ability to make the world a better place is going to rise meteorically—along with our ability to make disastrous mistakes.
And yet, human intelligence itself remains demonstrably imperfect and largely mysterious. We suffer from biases that still influence us even after we know they’re there. We make mistakes that we’ve made a dozen times before. We jump to conclusions, make overconfident predictions, develop giant blind spots around ego and identity and social pressure, fail to follow through on our goals, turn opportunities for collaboration into antagonistic zero-sum games—and those are just the mistakes we notice.
Sometimes we manage to catch these mistakes before they happen—how? Some people reliably avoid some of these failure modes—how? Where does good thinking come from? Good research? Good debate? Innovation? Attention to detail? Motivation? How does one strike the appropriate balance between skepticism and credulity, or deliberation and execution, or self-discipline and self-sympathy? How does one balance happiness against productivity, or the exploitation of known good strategies against the need to explore and find the next big breakthrough? What are the blind spots that cause humans—even extremely moral and capable ones—to overlook giant, glaring problems, or to respond inappropriately or ineffectively to those problems, once recognized?
CFAR exists to try to make headway in this domain—the domain of understanding how human cognition actually works in practice, so that we can start making useful changes and be better positioned to solve the problems that really matter. We are neither about pure research nor pure execution, but about applied rationality—the middle ground where the rubber meets the road, where one’s models meet reality, and where one’s ideas and plans either pay off or they don’t. By looking in depth at individual case studies, advances in cognitive science research, and the data and insights from our thousand-plus workshop alumni, we’re slowly building a robust set of tools for truth-seeking, introspection, self-improvement, and navigating intellectual disagreement—and we’re turning that toolkit on itself with each iteration, to try to catch our own flawed assumptions and uncover our own blind spots and mistakes.
In short: actually trying to figure things out, so that we can achieve the good and avoid the bad—especially in arenas where we have to get it right on the first try.
Nuts & Bolts
CFAR is a 501(c)(3) non-profit operating out of Berkeley, California, founded in 2012 by Anna Salamon, Julia Galef, Valentine Smith, and Andrew Critch. As of January 2018, we have roughly a dozen full-time-equivalent staff and a large network of alumni, volunteers, collaborators, and donors. Most of our day-to-day work consists of:
- Finding or making cognitive tools
- Finding or developing promising individuals
- Delivering those tools to those individuals
- Connecting those individuals with one another and with important projects
We handle research and outreach in our Berkeley office, where we also put on regular small events for the rationality and effective altruism communities. We also run about half a dozen offsite retreat workshops each year to teach our basic applied rationality curriculum (“mainline workshops”), giving people from all backgrounds a thorough grounding in CFAR-style thinking and execution.
Donations from individual and institutional donors also make possible a number of smaller, more targeted programs for things like:
- Developing the skill of doing pre-paradigmatic research
- Wrestling with grief and other deep, powerful emotions
- Understanding the strategic landscape surrounding the development of artificial intelligence
- Solving coordination problems within companies and social groups
- Bringing together both emotional and practical tools to tackle large, daunting, open-ended problems
When we’re not doing curricular research or running workshops, our instructors might be found giving applied rationality lectures to student groups at universities such as Stanford, Harvard, Yale, MIT, Cambridge, and U.C. Berkeley, or running training sessions for other organizations and for gatherings such as the Effective Altruism Global conferences in San Francisco and Oxford. Finally, we run coaching and mentorship programs for our alumni network, helping individuals improve themselves and find their way toward fulfilling, high-impact work.
CFAR and AI Safety
Perhaps the most productive way to understand and improve human thinking is to treat it as an end in itself; for this reason, that attitude shapes our research and sets the tone of our introductory workshops. Our deeper interest, however, stems from the belief that improving human thinking is one of the most important and underserved intervention points for humanity’s future.
Many of our alumni and staff have come to believe that addressing existential risks (x-risks), such as those posed by advanced artificial intelligence, is one of the greatest opportunities for clearer thinking to make an important difference to humanity’s future. As such, x-risk has become a priority for the organization: many of our coaching hours are now geared toward finding promising mathematicians and programmers and integrating them into existing organizations working to address x-risk, and much of our spare effort goes into developing longer, targeted workshops like the AI Summer Fellows Program, a two-week immersive retreat focused on identifying and developing AI safety researchers.
That said, our tools, techniques, and mainline workshops are open to everyone—our alumni include students, academics, professionals, and industry experts; Americans and people from abroad; liberals and conservatives; men and women; “arts” people and “STEM” people; and adults of all ages. Our introductory workshops continue to serve a broad population, and to receive high approval ratings largely independent of demographics.
Does this mean that AI safety skeptics, or people who don’t have opinions about x-risk, should avoid CFAR? No! For starters, skeptics and people focused on other problems have useful models and information about the world, too—keeping our pool of participants diverse helps us make sure that we’re not deceiving ourselves or ending up in an echo chamber. And as noted above, the mainline workshops remain cause-agnostic and open to all.