June 29, 2020
Even armed with the best intentions and evidence, donors who consider themselves members of the effective altruism (EA) community must make a moral choice about what “doing the most good” actually means — for example, whether it’s better to save one life or lift many people out of poverty. Alice Redfern, a manager at IDinsight, discusses the organization’s “Beneficiary Preferences Project,” which involved gathering data on the moral preferences of individuals who receive aid funding. The project could influence how governments and NGOs allocate hundreds of billions of dollars.
June 29, 2020
Impact investing is an increasingly popular approach to doing good. In this talk from EA Global: London 2019, John Halstead, head of applied research at Founders Pledge, discusses whether, and under what conditions, impact investing is likely to succeed. John argues that impact investing is likely to have limited impact in large, highly efficient markets such as public stock markets. However, it stands a better chance of making a difference if it involves VC or angel investing, includes companies that serve poor consumers or produce positive externalities, and/or exploits an investor’s informational advantage.
June 25, 2020
Academic studies don’t always estimate the parameters that will be most useful to us as we try to understand the cost-effectiveness of charities’ interventions. Even when they do, it may be difficult to figure out how those estimates apply to a specific charity’s program. Dan Brown, a senior fellow at GiveWell, uses examples from his own projects to demonstrate the depth of this challenge — and suggests ways to get practical value from academic insights.
June 25, 2020
In 2018, Vox launched Future Perfect with the goal of covering the most critical issues of the day through the lens of effective altruism. In this talk, Kelsey Piper discusses how the project has fared, her experience as a Vox staff writer, and her thoughts on the key challenges of EA-focused journalism.
April 3, 2020
Paul Christiano, a researcher at OpenAI, discusses the current state of research on aligning AI with human values: what’s happening now, what needs to happen, and how people can help. This talk covers a broad set of subgoals within alignment research, from inferring human preferences to verifying the properties of advanced systems.
March 1, 2020
What have we found so far? And what does that mean for you?
February 21, 2020
If you take actions that affect the future, you don’t just change the eventual welfare of people who have yet to exist — you actually influence which people exist in the first place. Typical moral principles, when applied to such actions, yield paradoxical results.
February 20, 2020
The potential upsides of advanced AI are enormous, but there’s no guarantee they’ll be distributed optimally. In this talk, Cullen O’Keefe, a researcher at the Centre for the Governance of AI, discusses one way we could work toward equitable distribution of AI’s benefits — the Windfall Clause, a commitment by AI firms to share a significant portion of their future profits — as well as the legal validity of such a policy and some of the challenges to implementing it.
February 19, 2020
Eva Vivalt, a researcher at the Australian National University, believes “there's a whole wealth of [hidden] information that people use to come to certain decisions or beliefs.” With this in mind, she is helping to build a platform to collect that information — specifically, people’s predictions about the results of social science experiments. In this talk, she discusses what social science stands to gain from forecasting, from better research design to less biased decisions about which studies to publish.
February 18, 2020
Many people in the effective altruism (EA) community hope to make the world better by influencing events in the far future. But if the future can potentially contain an infinite number of lives, and infinite total moral value, we face major challenges in understanding the impact of our actions and justifying any particular strategy. In this academic session, Hayden Wilkinson discusses how we might be able to face these challenges — even if it means abandoning some of our moral principles in the process.