Why Companies Should be Leading on AI Governance

April 08, 2019

Are companies better-suited than governments to solve collective action problems around artificial intelligence? Do they have the right incentives to do so in a prosocial way? In this talk, Jade Leung argues that the answer to both questions is "yes".

Below is a transcript of Jade's talk, which we have lightly edited for clarity. You can discuss this talk on the EA Forum.

The Talk

In the year 2018, you can walk into a conference room and see something like this: There's a group of people milling around. They all look kind of frantic, a little bit lost, a little bit stressed. Over here, you've got some people talking about GDPR and data. Over there, you've got people with their heads in their hands being like, "What do we do about China?" Over there, you've got people throwing shade at Zuckerberg. And that's when you know you're in an AI governance conference.

The thing with governance is that it's the kind of word that you throw around, and it feels kind of warm and fuzzy because it's the thing that will help us navigate through all of these big and confusing questions. There's just a little bit of a problem with the word "governance," in that a lot of us don't really know what we mean when we say we want to be governing artificial intelligence.

So what do we actually mean when we say "governance?" I asked Google. Google gave me a lot of aesthetically pleasing, symmetrical, meaningless management consulting infographics, which wasn't very helpful.

[Slide 1]

And then I asked it, "What is AI governance?" and then all the humans became bright blue glowing humans, which didn't help. And then the cats started to appear. This is actually what comes up when you search for AI governance. And that's when I just gave up, and I was like, "I'm done. I need a career change. This is not good."

[Slide 2]

So it seems like no one really knows what we mean when we say "AI governance." So I'm going to spend a really quick minute laying some groundwork for what we mean, and then I'll move on to the main substantive argument, which is: who should actually be leading on this thing called governance?

[Slide 3]

So governance, global governance, is a set of norms, processes, and institutions that channel the behavior of a set of actors towards solving a collective action problem at a global or transnational scale. And you normally want your governance regime to steer you towards a set of outcomes. When we talk about AI governance, our outcome is something like the robust, safe, and beneficial development and deployment of advanced artificial intelligence systems.

[Slide 4]

Now that outcome needs a lot of work, because we don't actually really know what that means either. We don't really know how safe is safe. We don't really know what benefits we're talking about, and how they should be distributed. And us answering those questions and adding granularity to that outcome is going to take us a while.

So you can also put in something like a placeholder governance outcome, which is the intermediate outcome that you want. The process of getting to the point of answering these questions can include things like avoiding premature lock-in, so that we don't end up with a rigid governance regime that can't adapt to new information. It could also include things like ensuring that there are enough stakeholder voices around the table, so that all the relevant perspectives are represented. Those are examples of intermediate governance outcomes that your regime can lead you towards.

[Slide 5]

And then in governance you also have a set of functions. These are the things that you want your regime to do so that you get to the set of outcomes that you want. Common functions would be things like setting rules: what do we do, and how do we operate in this governance regime? There's setting context: creating common information and knowledge, and establishing common benchmarks and measurements. You also have implementation, which is both issuing and soliciting commitments from actors to do certain things, and allocating resources so that people can actually do those things. And then finally, you've got enforcement and compliance, which is something like making sure that people are actually doing what they said they would do.

So these are examples of functions. And the governance regime is something like these norms, processes, and institutions that get you towards that outcome by doing some of these functions.

So the critical question today is something like, how do we think about who should be taking the lead on doing this thing called AI governance?

I have three propositions for you.

[Slide 6]

One: states are ill-equipped to lead in the formative stages of developing an AI governance regime. Two: private AI labs are better placed, if not best placed, to lead on AI governance. And three: private AI labs can be, and already to some extent are, incentivized to do this AI governance thing in a prosocial way.

I'll spend some time making a case for each one of these propositions.

States are Ill-Equipped

When we normally think about governance, you consider states as the main actors sitting around the table. You think about something like the UN: everyone sitting under a flag, and there are state heads who are doing this governance thing.

[Slide 7]

You normally think that for three different reasons. One is the conceptual argument: states are the only legitimate political authorities that we have in this world, so they're the only ones who should be doing this governance thing. Two is the functional argument: states are the only ones who can pass legislation and design regulation, so if you're going to think about governance as regulation and legislation, then states have to be the ones doing that function. And three, you've got something like the incentives argument: states are set up to deliver on the public goods that no one else is going to care about. They are the only ones with the explicit mandate and the explicit incentive structure to deliver on these collective action problems. Otherwise, none of this mess would get cleaned up.

Now all of those things are true. But there are trends, and certain characteristics of a technology governance problem, which mean that states are increasingly undermined in their ability to do governance effectively, despite all of those things being true.

[Slide 8]

Now the first is that states are no longer the sole source of governance capacity. And this is a general statement that isn't specifically about technology governance. You've got elements like globalization, for example, creating the situation where these collective action problems are at a scale which states have no mandate or control over. And so states are increasingly unable to do this governance thing effectively within the scope of the jurisdiction that they have.

You also have non-state actors emerging on the scene: most notably, civil society and multinational corporations operating at a scale that supersedes states. And they are increasingly demonstrating that they have some authority, some control, and some capacity to exercise governance functions. Now, their authority doesn't come from being voted in. The authority of a company, for example, plausibly comes from something like its market power and its influence on public opinion. And you can argue about how legitimate that authority is, but it is exercised and it does actually influence action. So states are no longer the sole source of where this governance stuff can come from.

Specifically for technology problems, you have the problem that technology moves really fast, and states don't. States use regulatory and legislative frameworks that hold technology static as a concept. And technology is anything but static: it progresses rapidly and often discontinuously, which means that your regulatory and legislative frameworks get out of date very quickly. And so if states are going to use those as the main mechanisms for governance, then very often they are relying on mechanisms that have become irrelevant.

Now the third is that emerging technologies specifically are a challenge. Emerging technologies come with huge uncertainty about the way they're going to go. And to be able to effectively govern things that are uncertain, you need to understand the nature of that uncertainty. In the case of AI, for example, you need deep in-house expertise to understand the nature of these technology trajectories. And I don't know how to say this kindly, but governments are not the most technology-literate institutions around, which means that they don't have the ability to grapple with that uncertainty in a nuanced way. So you see one of two things: you either see preemptive clampdown out of fear, or you see too little, too late.

So states are no longer the sole source of governance capacity. And for technology problems that move fast and are uncertain, states are particularly ill-equipped.

Private Labs are Better Placed

Which leads me to proposition two, which is that instead of states, private AI labs are far better-placed, if not the best-placed, actors to do this governance thing, or at least form the initial stages of a governance regime.

[Slide 9]

Now this proposition is premised on an understanding that private AI labs are the ones at the forefront of developing this technology. Major AI breakthroughs have come from private companies, privately funded nonprofits, or even academic AI labs that have very strong industrial links.

Why does that make them well-equipped to do this governance thing? Very simply, it means that they don't face the same problems that states do. They don't face this pacing problem. They have in-house expertise and access to information in real time, which means that they have the ability to garner unique insights very quickly about the way that this technology is going to go.

[Slide 10]

So of all the actors, they are most likely to be able to see, at least slightly preemptively, the trajectories that are most plausible, and to design governance mechanisms that are nuanced and adaptive to those trajectories. No other actor in this space has the ability to do that except those at the forefront of this technology's development.

[Slide 11]

Now secondly, they also don't face the scale mismatch problem. This is where you've got a massive global collective action problem, and you have states which are very nationally scaled. What we see is multinational corporations which, from the get-go, are forced to be designed globally, because they have global supply chains, global talent pools, and global markets. The technology they are developing proliferates globally. And so, necessarily, they have to operate at the scale of global markets, and they have experience with, and devote resources to, navigating at multiple scales in order to make their operations work. So you see a lot of companies operating at local, national, regional, and transnational levels, and they navigate those scales somewhat effortlessly, and certainly effortlessly compared to a lot of other actors in this space. And so, for that reason, they don't face the same scale mismatch problem that a lot of states have.

So you've got private companies that both have the expertise and also the scale to be able to do this governance thing.

Now you're probably sitting there thinking, "This chick has drunk some private sector Kool-Aid if she thinks that just because the private sector has the capacity, they're going to do this governance thing, both in terms of wanting to do it and being able to do it well, in a way that we would actually want to see it pan out."

Private Labs are Incentivized to Lead

Which leads me to proposition three, which is that private labs are already and can be more incentivized to lead on AI governance in a way that is prosocial. And when I say "prosocial" I mean good: the way that we want it to go, generally, as an altruistic community.

Now I'm not going to stand up here and make a case for why companies are actually a lot kinder than you think they are. I don't think that. I think companies are what companies are: they're structured to be incentivized by the bottom line, and they're structured to care about profit.

[Slide 12]

All that you need to believe in order for my third proposition to fly is that companies optimize for their bottom line. And what I'm going to claim is that that can be synonymous with them driving towards prosocial outcomes.

Why do I think that? Firstly, it's quite evidently in a firm's self-interest to lead on shaping the governance regime that is going to govern the way that their products and their services are going to be developed and deployed, because it costs a lot if they don't.

[Slide 13]

How does that cost them? Poor regulation, and when I say "poor" I mean costly for firms to engage with, has imposed a lot of costs on firms across a lot of technology domains. The history of technology policy showcases a lot of examples where firms haven't been successful in preemptively engaging with regulation and with governance, and so they end up facing a lot of costs. Take the U.S., and I'll point to the U.S. because it's not the worst example of this: it has a lot of poor regulation in place, particularly when it comes to things like biotechnology. In biotechnology, you've got blanket bans on certain types of products, and you also have things like export controls, which have caused a lot of lost profit for these firms. You also have a lot of examples of litigation across a number of different technology domains, where firms have had to battle with regulation that has been put in place.

Now, it wasn't in the firms' interests to incur those costs. The most cost-effective approach, in hindsight, would have been for these firms to engage preemptively, shaping the regulation and the governance as it was being formed.

Now, just because poor regulation is costly doesn't mean that firms' engagement will go in a good direction. What are the reasons to think that preemptive engagement is likely to lead to prosocial regulation? There are two. One: the rationale for a firm would be something like, "We should be doing the thing that government will want us to do, so that they don't then go in and put in regulation that is not good for us." And if you assume that governments have that incentive structure to deliver on public goods, then firms, at the very least, will converge on the idea that they should be mitigating their externalities and delivering on prosocial outcomes, in the same way that state regulation probably would.

The more salient reason in the case of AI is that public opinion actually plays a fairly large role in dictating what firms treat as prosocial. You've seen a lot of examples of this in recent months, where Google, Amazon, and Microsoft have faced backlash from the public and from their employees after developing and deploying AI technologies that grate against public values. And you've seen these firms respond to that backlash. It's concrete because it actually affects their bottom line: they lose consumers, users, and employees. And that, again, ties back to their incentive structure. And so if we can shore up the power of public opinion so that it translates into their incentive structures, then there are reasons to believe that firms will engage preemptively in shaping things more in line with what public opinion would be on these issues.

The second reason is that firms already do a lot of governance. We just don't really see it, or we don't really think of it as governance. So I'm not making a wacky case here: business as usual is already that firms do some governance activity.

Now I'll give you a couple of examples, because when we think about governance, we tend to home in on the idea that it means regulation. And there are a lot of other forms of governance that are private sector-led, which perform governance functions but aren't usually called "governance" in the traditional sense.

So here are some examples. When you think about the governance function of implementing commitments, there are two different ways of thinking about the private sector leading on governance. One is establishing practices along the technology supply chain that govern for outcomes like safety.

Again, in biotechnology, you've got an example of this: DNA synthesis companies voluntarily self-initiated schemes for screening customer orders, checking whether customers were ordering DNA for malicious purposes. The state eventually caught up, and a couple of years after most DNA synthesis companies in the U.S. had been doing this, it became official U.S. policy. But that was a private sector-led initiative.

Product standards are another really good example, where private firms have consistently led early on in figuring out what a good product looks like when it's on the market.

The first wave of cryptographic products is a really good example of this. You had firms like IBM, and a firm called RSA Security Inc. in particular, do a lot of early-stage R&D to ensure that strong encryption protocols made it onto the market and took up a fair amount of global market share. In large part, those ended up becoming the American standards for cryptographic products, which then scaled across global markets.

So those are two examples of many examples of ways in which private firms can lead on the implementation of governance mechanisms.

[Slide 14]

The second really salient function they play is in compliance: making sure that companies are doing what they said they would do. There are a lot of examples of this in the space of climate change in particular, where firms have either sponsored or directly started initiatives around disclosing what they are doing, to show that they are in line with commitments made at the international scale. Whether that's divestment, disclosing climate risk or carbon footprints, or various rating and standards agencies, there is a long list of ways in which the private sector is delivering on this compliance function voluntarily, without necessarily needing regulation or legislation.

So firms already do this governance thing. All we have to think about is how they can lead on it and shape it in a more preemptive way.

And the third reason to think that firms could do this voluntarily is that, at the end of the day, particularly in transformative artificial intelligence scenarios, firms rely on the world existing. They rely on markets functioning. They rely on stable sociopolitical systems. And if those aren't what we end up with, because we didn't put robust governance mechanisms in place, then firms have all the more reason to want to avoid those futures. And so, for an aspirationally long-term-thinking firm, this is the kind of incentive that would lead them to want to lead preemptively on some of these things.

So these are all reasons to be hopeful, or at least to think, that firms can lead on AI governance and can be incentivized to do so.

[Slide 6]

So here are the three propositions again. You've got states who are ill-equipped to lead on AI governance. You've got private AI labs who have the capacity to lead. And finally, you've got reasons to believe that private AI labs can lead in a way that is prosocial.

Now, am I saying that private actors are all that is necessary and sufficient? It wouldn't be an academic talk if I didn't give you a caveat, and the caveat is that I'm not saying that. I'm only saying that they need to lead. There are very many reasons why the private sector is not sufficient, and ways in which their incentive structures can diverge from prosocial outcomes.

More than that, there are some governance functions which you actually need non-private sector actors to play. Firms can't pass legislation, and you often need a third-party civil society organization to do things like monitoring compliance well. And the list goes on of things the private sector can't do on its own.

So they are insufficient, but they don't need to be sufficient. The clarion call here is for the private sector to recognize that they are in a position to lead on demonstrating what governing artificial intelligence can look like: governance that tracks technological progress in a nuanced, adaptive, flexible way; that happens at a global scale and scales across jurisdictions easily; and that avoids costly conflict between states and firms, which tends to precede a lot of costly, ineffective governance mechanisms being put in place.

So firms and private AI labs can demonstrate how you can lead on artificial intelligence governance in a way that achieves these kinds of outcomes. The argument is that others will follow. And what we can look forward to is shaping the formative stages of an AI governance regime that is private sector-led, but publicly engaged and publicly accountable.

[Slide 15]

Thank you.

Questions

Question: Last time you spoke at EA Global, which was just a few months ago, it was just after the Google engineers' open letter came out saying, "We don't want to sell AI to the government," or something along those lines. Since then, Google has said they won't do it, and Microsoft has said they will. It's a little weird that rank-and-file engineers are setting so much of this policy, and also that two of the Big Five tech companies have gone in such different directions so quickly. So how do you think about that?

Jade: Yeah. It's so unclear to me how optimistic to be about these very few data points that we have. I think also last time when we discussed it, I was pretty skeptical about how effective research communities can be and technical researchers within companies can be in terms of affecting company strategy.

I think it's not surprising that different companies are making different decisions with respect to how to engage with the government. You've historically seen this a lot where you've got some technology companies that are slightly more sensitive to the way that the public thinks about them, and so they make certain decisions. You've got other companies that go entirely under the radar, and they engage with things like defense and security contracts all the time, and it's part of their business model, and they operate in the same sector.

So I think the idea that you can have the private sector operate in one fashion, with respect to how they engage with some of these more difficult questions around safety and ethics, isn't the way it pans out. And I think the case here is that you have some companies that can plausibly care a lot about this stuff, and some companies that really just don't. And they can get away with it, is the point.

And so I think, assuming that there are going to be some leading companies and some that just kind of ride the wave if it becomes necessary is probably the way to think about it, or how I would interpret some of these events.

Question: So that relates directly, I think, to a question about the role of small companies. Facebook, obviously, is under a microscope, and has a pretty bright spotlight on it all the time, and they've made plenty of missteps. But they generally have a lot of the incentives that you're talking about. In contrast, Cambridge Analytica just folded when their activity came to light. How do you think about small companies in this framework?

Jade: Yeah. That's a really, really good point.

I think small companies are in a difficult but plausibly really influential position. As you said, I think they don't have the same lobbying power, basically. And if you characterize a firm as having power as a result of their size, and their influence on the public, and their influence on the government, then small companies, by definition, just have far less of that power.

There's this dynamic where you can point to a subset of really promising startups, for example, or up-and-coming small companies, that can form some kind of critical mass and influence larger actors, namely the ones that, in a functional, transactional sense, would be acquiring them. DeepMind, for instance, had a pretty significant influence on the way that safety was perceived within Google as a result of being a very lucrative acquisition opportunity, in a very cynical framing.

And so I think there are ways in which you can get really important smaller companies using their bargaining chips with larger firms to exercise influence. I would be far more skeptical of small companies being influential on government and policymakers. I think historically it's always been large industry alliances or big companies that get summoned to congressional hearings and get the kind of voice that they want. But certainly, within the remit of the private sector, I think small companies, or at least medium-sized companies, can be pretty important, particularly in verticals where you don't have such dominant actors.

Question: There have been a lot of pretty well-publicized cases of various biases that are creeping into algorithmic systems that sort of can create essentially racist or otherwise discriminatory algorithms based on data sets that nobody really fully understood as they were feeding it into a system. That problem seems to be far from solved, far from corrected. Given that, how much confidence should we have that these companies are going to get these even more challenging macro questions right?

Jade: Yeah. Whoever you are in the audience, I'm not sure if you meant that these questions are not naturally incentivized to be solved within firms. Hence, why can we hope that they're going to get solved at the macro level? I'm going to assume that's what the question was.

Yeah, that's a very good observation: unless you have the right configuration of pressure points on a company, there are some problems which maybe haven't had the right configuration and so aren't currently being solved. So put aside the fact that this may be a technically challenging problem to solve, and that you may not have the data sets available, etc. If you assume that they have the capacity to solve that problem internally but they're not solving it, why is that the case? And then why does that mean that they would solve bigger problems?

The model of private sector-led governance requires, as I alluded to, public-facing pressure points on the company. With the right exertion of those pressure points, and with enough of them translating into effects on the bottom line, that would hopefully incentivize problems like this one, and larger problems, to get solved.

In this particular case, in terms of why algorithmic bias hasn't faced enough pressure points, I'm not certain what the answer is. Although I think you do see a fair amount more civil society action and the like popping up around that, and a lot more explicit critique.

I think one comment I'll make is that it's pretty hard to define and measure when it's gone wrong. There's a lot of debate in the academic community, for example, and the ProPublica debate comes to mind too, where you've got debates literally about what it means for this thing to have been fair or not. And so that points to the importance of a thing like governance where you've got common context, common knowledge, and common information about your definitions, your benchmarks, and your metrics for what it means for a thing to be prosocial, in order for you to then converge on making sure that these pressure points are exercised well.

And so I think a lot of work ahead of us is going to be something like getting more granularity around what prosocial behavior looks like, for firms to take action on that. And then if you know basically what you're aiming for, then you can start to actually converge more on the kind of pressure points that you want to exercise.

Question: I think that connects very directly to another question from somebody who said, basically, they agree with everything that you said, but still have a very deep concern that AI labs are not democratic institutions, they're not representative institutions. And so will their sense of what is right and wrong match the broader public's or society's?

Jade: I don't know, people. I don't know. It's a hard one.

There are different ways of answering this question. One is that it's consistently a trade-off game in figuring out how governance is going to pan out or get started in the right way. So one version of how you can interpret my argument is something like: look, companies aren't democratic and you can't vote on the decisions that they make, but there are many other reasons why they are better placed. If you were to trade off the set of characteristics that you would want in an ideal leading governance institution, then you could plausibly choose, as I have made the case for, to accept that they are just going to move faster and design better mechanisms, and so trade off some of the democratic elements of what you would want in an institution. That's one way of answering that question.

In terms of ways of aligning some of these companies or AI labs: set aside the external pressure point argument. If I were critiquing myself on that argument, there are many ways in which pressure points sometimes don't work; the whole argument relies on firms caring enough about them, and on those pressure points actually concretizing into bottom-line effects.

But particularly in the case of AI, there are a handful of AI labs that I think are very, very important, and then there are many, many more companies that I think are not critically important. And the fact that you can identify a small group of AI labs makes it an easier task to identify, almost at the level of individual founders, where some of these common views about what good decisions look like can be lobbied for.

And I think it's also the case that for a number of AI labs, we're not entirely sure how the founders or certain decision makers think. But there are a couple who have been very public and have gone on record, and have actually been pretty consistent, in articulating the way that they think about some of these issues. And I think there is some hope that at least some of the most important labs are thinking in quite aligned ways.

Doesn't quite answer the question about how do you design some way of recourse if they don't go the way that you want. And that's a problem that I haven't figured out how to solve. And if you've got a solution, please come tell me.

Yeah, I think as a starting point, there's a small set of actors that you need to be able to pin down and get to articulate what their mindset is around this. And there is an identifiable set of people that really need to buy in, particularly to get transformative AI scenarios right.