April 3, 2020
Paul Christiano, a researcher at OpenAI, discusses the current state of research on aligning AI with human values: what’s happening now, what needs to happen, and how people can help. This talk covers a broad set of subgoals within alignment research, from inferring human preferences to verifying the properties of advanced systems.
February 21, 2020
If you take actions that affect the future, you don’t just change the eventual welfare of people who have yet to exist — you actually influence which people exist in the first place. Typical moral principles, when applied to such actions, yield paradoxical results.
February 20, 2020
The potential upsides of advanced AI are enormous, but there’s no guarantee they’ll be distributed optimally. In this talk, Cullen O’Keefe, a researcher at the Centre for the Governance of AI, discusses one way we could work toward equitable distribution of AI’s benefits: the Windfall Clause, a commitment by AI firms to share a significant portion of their future profits. He also covers the legal validity of such a policy and some of the challenges to implementing it.
February 19, 2020
Eva Vivalt, a researcher at the Australian National University, believes “there's a whole wealth of [hidden] information that people use to come to certain decisions or beliefs.” With this in mind, she is helping to build a platform to collect that information — specifically, people’s predictions about the results of social science experiments. In this talk, she discusses what social science stands to gain through the use of forecasting, from better research design to less biased decisions about which studies to publish.
February 18, 2020
Many people in the effective altruism (EA) community hope to make the world better by influencing events in the far future. But if the future can potentially contain an infinite number of lives and infinite total moral value, we face major challenges in understanding the impact of our actions and justifying any particular strategy. In this academic session, Hayden Wilkinson discusses how we might be able to face these challenges — even if it means abandoning some of our moral principles in the process.
February 18, 2020
In this academic session, Zach Groff, a PhD student at Stanford University whose research areas include welfare economics, discusses a 1995 paper whose authors argued that nature contains more suffering than enjoyment. After analyzing the model used in that paper, Groff found that an error negated its original conclusion, and that evolutionary dynamics imply that enjoyment may predominate for some species. In addition to this result, Groff discusses suggestions for the empirical study of wild animal welfare.
February 14, 2020
We must take evolution into account when we consider animal welfare — whether we’re thinking about which animals are sentient or how animals might respond to a given intervention. In this talk, Wild Animal Initiative’s Michelle Graham presents a brief introduction to the theory of evolution (she also recommends this video for more background), explains how understanding evolution can help us conduct better research, and discusses the ways misconceptions about evolution lead us astray.
February 13, 2020
When competition intensifies between powerful countries, peace and security are threatened in many ways. Proxy wars break out and global cooperation breaks down — including agreements on nuclear weapons. In this talk, Dani Nedal, who teaches global nuclear politics at Carnegie Mellon University, offers thoughts on these risks, and how countries and individuals can work to reduce them.
February 12, 2020
Open Philanthropy has recommended over $90 million in grants for farm animal welfare work around the world. What have they learned? In this talk, Lewis Bollard, who heads Open Phil’s work on farm animal welfare, shares lessons on corporate reforms, plant-based meat, and the global scope of available funding.
February 11, 2020
If we want to donate money, should we give it away now or invest it to give away later? The answer depends on many considerations, including our expected rate of return, the chance that our personal values will change, and whether we live at the “Hinge of History” — a time with high-impact opportunities that will soon vanish.