
September 28, 2018

Who would you rather have access to human-level artificial intelligence: the US government, Google, the Chinese government or Baidu? The biggest governments and tech firms are the most likely to develop advanced AI, so understanding their goals, abilities and constraints is a vital part of predicting AI’s trajectory. In this talk from EA Global 2018: San Francisco, Jade Leung explores how we can think about major players in AI, including an informative case study. Below is a transcript of Jade's talk, including questions from the audience answered both by herself and Carrick Flynn, which we have lightly edited for readability.

The Talk

Here's a question for you: Imagine we've built AGI. You wake up one morning, you hop on Twitter, and word is floating around that X has built AGI. Who would you want X to be? You've got four options.

  1. Alphabet, e.g. Google.
  2. The US government.
  3. Baidu, which is one of the leading AI companies in China.
  4. The Chinese government.

Now bear in mind this is a question about who you would want to be in control of the technology, not who you think is most likely to get there. And no, you are not allowed to say, "I don't want any of these actors to develop AGI."

The point here isn't that there is a correct answer, and I'm not going to tell you what the answer should be. The point is that I think this is one of the most important questions that we need to be able to answer: who do we want to be in control of powerful technology like advanced AI? But also: who is likely to be in control of it? These kinds of questions are critical and really, really difficult to answer. So what I'm going to do for you today is not answer the question. Instead, I'm going to try to equip you with a framework, or a methodology, for thinking about how you can go about answering these questions sensibly, or at least generating hypotheses that make some sense.


My proposition is this: that you can frame AI governance as a set of strategic interactions between a set of actors, such that each actor has a unique and really large stake in the development and deployment of advanced AI. There are two kinds of actors that I think are most important:

  1. Large multi-national technology firms, who are at the forefront of developing this technology.
  2. States, or specifically, national security and defense components of the state apparatus.

As a meta-point, because we love meta-points, this is going to be a talk that demonstrates how we can do tractable research in AI governance and AI strategy, given information that we have today, to figure out what futures could look like, or should look like, that are more likely to be safe and beneficial than not.

Hopefully, by the end of this, you can feel like there are some things that we can figure out in this large landscape of questions, which currently all seem really large and uncertain.

So I'll take you through three things. First, I'm going to expand on the case for why looking at actors and strategic interactions is one of the most fruitful ways of looking at this problem. Then I'm going to take you through a toy model for how you can think about strategic interactions between firms and governments in this space. And finally, I'm going to apply that model to a case study, which puts some meat on the bones of what I'm talking about. We'll end with a few thoughts on how you can take this forward, if you're interested.

Why look at actors?


I think there are three key reasons, and quite obvious ones, for why focusing on actors is a good idea. Number one, actors are part of the problem, and a big part of it at that. Specifically, misaligned actors who have various goals that could lead to suboptimal outcomes. The second is that actors are exactly the people who are shaping the solutions that we talk about. So at any point at which we talk about what solutions to AI governance look like, those are products of actor decisions that are being made. Number three, I think we are less uncertain about the nature of actors in this space than we are about a bunch of other things. Gravitating towards the things that we are more certain about makes a bunch of sense.

So I'm going to run through these in turn. Number one, ask yourself this question: Why do we not assume that the safe development and deployment of transformative AI is a given? You would tend to come across two types of answer to this question. The first bucket of answers tends to be that it's just a really, really hard technical problem. It's not easy to guarantee safety in the design and deployment of your system.


Putting that bucket aside, the second bucket tends to rely on you believing three statements:

  1. There are a number of actors out there who prioritize capabilities above safety.
  2. These actors aren't incompetent. If they were incompetent, we wouldn't have to worry about them, so you have to be convinced that there's at least a subset of them that have the ability to pursue capabilities above safety.
  3. Plausibly, these actors could get there first.

If you believe these three things, then you believe that misaligned actors are going to be at least part of this safe development and deployment problem that we need to solve.


Reason number two why focusing on actors makes sense: We often talk about solutions, and if you read a bunch of the research in this space you'll have propositions floating around of things like multilateral agreements, joint projects, coordination mechanisms, etc. The quite obvious thing to state here is that all of these are products of actor choices, capabilities, and incentives. Upstream of these solutions is a set of actors who are haggling and tussling over what these solutions should look like. And so, analysis-wise, we should be focusing upstream to try to figure out what solutions are likely versus unlikely, what solutions are desirable versus undesirable. And then, critically, how do you make the thing that is likely also the thing that is desirable, the thing you actually want?

Reason number three is because we are less uncertain about actors than other inputs. Here are a couple of photos of my colleagues who work in AI strategy:


There's a ton of uncertainty in this space. And it's a bit of an occupational hazard just to be comfortable with the fact that you have to make some assumptions that you can't really validate at this point in time, given the information that we have. The point here isn't that uncertainty is a bad thing; it's just a thing that we have to deal with.


But I think, among a number of things, we are less uncertain about the nature of actors compared to a lot of the other parameters that we care about. The reasons being that A: you can observe their behavior today, more or less; B: you can look at the way that these very same actors have behaved in the past in analogous situations of governing emerging and dual-use technologies; and C: we've spent a lot of time across a number of academic disciplines trying to understand the environments that constrain these actors, whether those are economic, policy, political, or legal.

And so we have a fair number of models that have been developed through other intellectual domains that give us a good sense of what constrains these actors and what supports these actors' behaviors. So, to recap, three reasons why actors are a good thing to focus on:

  1. They're part of the problem.
  2. They're part of the solution/they design the solutions.
  3. We have less uncertainty, although still a fair amount of uncertainty, about what these actors do and think, and how they behave. So gravitating towards the interactions between them makes a bunch of sense, as plausibly an area that can tell us some stuff about AI strategy.

A toy model for considering actors

I'm going to assume that you buy that case for why focusing on actors is a good idea. So now, we're going to segue into actually talking about the actors that we care about.


So here is a subset of the actors that I think are most important to think about in the space of AI strategy. You've got the US government.


Now the US government, in 2016, really first came out and said AI is a thing that it cares about. It kicked off with the Obama administration establishing an NSTC subcommittee on ML and AI. Subsequently, across 2016, it hosted five public workshops and issued requests for information on AI, and that culminated in a set of reports at the end of 2016 that collectively made the case that AI is a big deal for the US economically, politically, and socially.

Since the change in administration, there's been a bit of other stuff going on that's distracted the US government. But what we shouldn't forget is that the Department of Defense sits alongside/within the US government, and they haven't lost focus at all. So turning a little bit of focus to the DOD specifically: in 2016 as well, they commissioned a bunch of reports that explored the applications of AI to DOD's missions and capabilities. That set of reports made a case for why the DOD, specifically, should be focusing on AI to pursue military strategic advantage. AI was also placed at the center of the Third Offset Strategy, which was the latest piece of military doctrine that the US put forward.

The last little data point is that in 2017 Robert Work established the Algorithmic Warfare Cross-Functional Team. The remit of that team is explicitly, to quote, to "accelerate the integration of big data and machine learning into DOD's missions." And that's a subset of the data points that we have about how much DOD cares about this.

So that's the US. Now we're going to turn to the Chinese government, who, in quite a different fashion but with similar priority, has placed AI at the center of its national strategy.


Among many data points that we have, I'll point out a couple. We had the State Council's New Generation AI Development Plan, published in 2017. In that there was a very explicit statement that China wanted to be the world's leading AI innovation center by 2030. In his report to the 19th Party Congress, President Xi Jinping also reiterated the goal for China to become a science and tech superpower, and AI was dead center of that speech.

Turning again to the military side of China, the People's Liberation Army has also not been shy about saying that AI is a thing it really cares about and really wants to pursue. There are a number of surveys from the Center for a New American Security that do a good job of summarizing a lot of what the PLA is pursuing. As of February 2017, there were a number of data points telling us that the Chinese government was pursuing what they call an "intelligentization" strategy, which basically looks like unmanned, automated warfare. And as you can imagine, AI plays a very central technical role in helping them achieve that.

Last but not least, there was the establishment of the Civil-Military Integration Development Commission, which is headed up by President Xi Jinping, signaling how important it is to China. What that commission does, among a number of other things, is make it incredibly seamless for civil AI technologies to be translated through to military applications as a state mandate.

So that's China. And the last subset of actors I'll point to are multinational technology firms. These are the folks who are conclusively leading the way in terms of developing the actual technology. I'll point out a couple of the leading ones in the US and China, since these are the leading ones worldwide; there's also something interesting about them being US versus Chinese companies, and I'll say a little bit more about that in a sec.


In the US, you've got the likes of Alphabet, DeepMind specifically, Microsoft, etc. In China, you've got Baidu, Alibaba, and Tencent. And these guys are all competing internationally to be leading the way. And they also have some interesting relationships with their governments and their defense components as well.

So these are the actors that we're talking about. What do we do with information about them? How do we look at what they do, what they think, how they act? And how do we interpret that in a way that's useful for us to understand the space of AI strategy? What I'm going to do is give you a toy model for how you can think about doing that, which is one of many ways you can model this space. First you can break down each of the actors into three things: their incentives, their resources, their constraints.

Their incentives are the things that they're rewarded for. What behaviors are they naturally, structurally incentivized to pursue? And what behaviors are consistently rewarded such that they keep pursuing them?

Their resources are what a particular actor has access to that other actors don't, whether that's money, talent, or hardware.

And constraints, finally, are the things that constrain the behavior of an actor. What do they care about that stops them from doing the thing that's optimal for their goal? That can be a lack of resources; it can also be things like public image, and a number of other things that any given actor could care about.

So, each individual actor can be analyzed as such, and then you can start looking at how they interact with each other in bilateral relationships. And the caricatured, simplified dichotomy here is that you can have two types of relationships: synergistic ones or conflictual ones. Synergistic ones are the ones where you have actors pursuing similar goals, or at least not mutually exclusive goals, and there are complementary resources at play, and/or one actor has the ability to ease a constraint for the other actor. And so naturally you fall into a synergy of wanting to support each other and cooperate on various things. On the other hand, you can have conflicts. Conflicts are areas where you've got different goals, or at least divergent goals, but that's not sufficient. You also have to have interdependency between these actors. You need one to depend on the other for resources, or one to be able to exercise constraints on the other, such that you can't ignore the fact that the other actor is trying to pursue something that's different from what you want.


It's key to flag that synergy sounds nice and conflict sounds bad, but you can get good synergies and bad synergies, good conflicts and bad conflicts. An example of a good synergy is one where you incentivize cooperation pretty naturally between two actors that you want to cooperate on something like safety. An example of a bad synergy, which we'll talk about in a second, is one where you incentivize the pursuit of, say, a somewhat unsafe technology, and the pursuit of that technology is rewarded by the other actor. An example of a good conflict could be one where you introduce friction, such that you slow down the pace of development or incentivize safety or standardization because of that friction. An example of a bad conflict is one where you get race dynamics emerging between, for example, two adversarial military forces. So don't fall into the trap of thinking that synergies are always good and conflicts are always bad.

And last but not least, if you really want to go wild, you can look at a set of bilateral relationships in a given context. That's what I do. I look at a set of bilateral relationships in the US and a set of bilateral relationships in China, and try to figure out how this mess can be structured and make sense and tell you something about what's likely to occur in that given, say, domestic political context that I care about.
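To make the toy model concrete, here is a minimal sketch of how one might encode it. This is not from the talk, which presents the framework qualitatively: the actor names, the attribute sets, the string-matching heuristic, and the classify function are all hypothetical simplifications, and goal compatibility is left as an analyst's judgment rather than something the code infers.

```python
# A minimal, illustrative encoding of the toy model: actors described by
# incentives, resources, and constraints, with bilateral relationships
# classified as synergistic or conflictual. Everything here is a hypothetical
# simplification for illustration, not a method prescribed in the talk.
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    incentives: set = field(default_factory=set)  # goals the actor is structurally rewarded for pursuing
    resources: set = field(default_factory=set)   # assets this actor has that others may lack
    needs: set = field(default_factory=set)       # constraints, phrased as the things whose absence binds the actor

def classify(a: Actor, b: Actor, goals_compatible: bool) -> str:
    """Label a bilateral relationship.

    Synergy: goals are compatible (or at least not mutually exclusive) and one
    actor's resources can ease the other's constraints.
    Conflict: goals diverge but the actors are interdependent, so neither can
    ignore the other. Otherwise they can mostly ignore each other.
    """
    interdependent = bool((a.resources & b.needs) or (b.resources & a.needs))
    if not interdependent:
        return "independent"
    return "synergistic" if goals_compatible else "conflictual"

# Hypothetical encoding of the actors in the Project Maven case discussed below.
dod = Actor("DOD", {"military strategic advantage"}, {"large federal contracts"}, {"leading AI tech"})
mgmt = Actor("Google management", {"profit"}, {"leading AI tech"}, {"large federal contracts", "scarce AI talent"})
engineers = Actor("Google engineers", {"values-aligned employment"}, {"scarce AI talent"}, {"seat at the decision-making table"})

print(classify(dod, mgmt, goals_compatible=True))        # synergistic: each has what the other needs
print(classify(mgmt, engineers, goals_compatible=False)) # conflictual: divergent goals, mutual dependence
```

The only point of writing it down this way is to make explicit which judgments carry the analysis: what counts as a resource or a constraint, and whether two actors' goals are compatible.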

Case study

This is all a little bit abstract, so we're going to concretize it by looking at a recent case study: the Google Project Maven case. For those who aren't familiar with what happened here: in March 2018, it was announced against Google's will that Google had become a commercial partner for the DOD's Project Maven program. Project Maven is a DOD program that's explicitly about accelerating the integration of AI technologies, specifically deep learning and neural networks, to bring them into action in the active combat theater.

Now, when we look at this case study, we can try to put it into this framework and understand a) which actors matter, b) what matters to these actors, and c) how that's likely to pan out, and then we can compare and contrast to what actually panned out, and that can tell us something about how these strategic interactions end up mattering.

I'll also take a bit of a step back and say this is an interesting case study for a number of reasons, not least because it's a microcosm of this bigger question of what happens when a government defense force wants to access leading AI technology from a firm. That, in general, is a question that we care a lot about, and we specifically care about how it lands, what happens, and who ends up getting control of that tech. So, when we're walking through this case study, think about it as an example of this larger question that is generally very decision-relevant for the work that we do.

The first actor we can think about is DOD. Their incentives are quite clear: they want military strategic advantage, in this particular case by pursuing advanced AI tech. The resource that they have is a lot of money. The constraint that they have is that they basically don't have the in-house R&D capabilities, so they don't develop leading AI tech within DOD. That means that they have to go to a third party.

Enter Google management, who make decisions on behalf of Alphabet. Their incentives, again caricatured but plausibly somewhat accurate, are that they are pursuing profit, or at least a competitive advantage that will secure them profit in the long run. The resource that they mostly have is the technology that they're developing in-house. The constraint that ended up mattering here, among many others, was surprisingly a public image constraint. Google has a thing about doing no evil, or at least not doing enough evil to get attention, and that ended up being the thing that mattered a lot in this case.

Last but not least, you've got Google engineers. These are the employees of the company. Their incentive, again as a simplified caricature, is that they want rewarding employment: employment that is not just financially rewarding, but that somewhat aligns with their values as individuals, and with the reputation that they want to have as people. Their resource is themselves. AI talent is one of the hottest commodities around, and people will pay a stupid amount of money to get a good AI engineer these days, so by being an engineer you are that really good resource. A constraint that they face is that they don't have access to decision-making tables. As an employee, you are fundamentally, structurally limited in what you can do or say, in terms of it affecting what the company does or doesn't do.


Take a second to think about these actors and how they're likely to interact with each other. This is an exercise in trying to figure out: given what you can observe about the behavior of key strategic actors in the AI space, what should we assume is going to happen, and is that a good or bad thing? What's the end outcome in terms of things that we care about, like control of this technology?

Because I'm running out of time, I'm going to give you a spoiler alert: these are the two main bilateral relationships that ended up really mattering in this case. You had a synergistic one between Google management and DOD. This is quite an obvious one: DOD had a bunch of money to give and wanted to get tech; Google had the tech and wanted the money. So that's a kind of contractual relationship that fell out pretty naturally.

What was particularly interesting, and House of Cards-y, about this one is that the contract itself wasn't that large. It was $15 million, which is pretty dang large for most people, but for Google that's not much. What was key, though, is that taking part in Project Maven helped Google accelerate the authorization they needed to access larger federal contracts. Specifically, there's one on the horizon called the JEDI program (JEDI actually stands for something quite sensible; it just happens that the acronym worked out for them), and that contract is about providing cloud services to the Pentagon.

That contract is worth $10 billion, which even an actor like Google doesn't turn its nose up at. So engaging in Project Maven, by all accounts, helped accelerate the authorization for them to be an eligible candidate to vie for that particular contract. We'll revisit that in a second, but that one is still live, and it's a space to watch if you're interested in this set of relationships that we're talking about. In any case, that's a synergistic one.

Then you've got the conflictual one, which emerged between Google management and Google engineers. Basically, Google engineers kicked up a fuss and were really upset when they found out that Google had engaged in Project Maven. One thing they did was start an employee letter that was signed by thousands of employees, notably by Jeff Dean, who was head of Google AI research, as well as a number of other senior researchers who really matter. The letter basically asked Google to stop engaging in Project Maven. Also, reportedly, dozens of employees resigned over Project Maven, particularly when Google Cloud wasn't budging and was still engaging in it.

Google management actually knew this was going to be a problem. There were leaked emails in which the head of Google Cloud was very explicitly concerned about the public backlash that would occur, and throughout the whole thing there were a number of attempts by Google management to host town halls and meetings to assuage the engineers, which didn't do enough, or at least didn't do much.

These two relationships pull in opposite directions: one pushes Google to pursue the contract, one pushes it not to. The finale of this whole case study was that in June they announced that they would not renew their Project Maven contract. So Google was going to continue until 2019, and then they weren't going to take up the next round that they were originally slated for. In some ways this was surprising for a number of people, and you can get all psychoanalytic on this and say there are a number of things here that tell you something about where power sits within a company like Google.


The cliffhanger, though, which is a space that we need to continue to watch, is that as a result of this whole shenanigan, Google recently announced their AI principles. In those they made statements like, "We're not going to engage in warfare technologies or whatnot," but there was also an out in there that basically allows Google to continue to engage in Pentagon contracts, e.g., this JEDI program that they really want.

The cynic in you can think about this as a case where Google basically just won: they assuaged the concerns of their employees and looked like they responded, but in practice they can still pursue a number of military contracts, or at least government contracts, that they originally wanted to pursue.

In any case, the meta point here is that you can look at a case study like this, think about the actors, think about what strategic interactions they have with each other, and it can plausibly tell you something about how things are likely to pan out in terms of control and influence of a technology.

Conclusions

There's a case for looking at strategic interactions as a domain by which you can get a lot of information about AI strategy. Particularly you can look at what's likely to occur in terms of synergies and conflicts, and what bottlenecks are likely to kick in when you think about cooperation as a mechanism you want to move forward with.


And not just descriptively: you can also think about strategic interactions as a way of telling you what you should be doing now to avoid outcomes that you don't want. If you can see that there's a conflict coming up that you want to avoid, or a bad synergy coming up that will translate into unsafe technologies being developed, then you can look upstream and say, "What can we tweak about these interactions, or about the incentive structures of these actors, to avoid those outcomes?"


Finally, a meta point: I've got a ton more questions and hypotheses than I have answers, and that's the case for every researcher in this space. So, as Carrick mentioned, there's a bunch of reasons to think this is a really good area to dive into, and if you have any interest in doing analysis like what I've described, or in addressing any of the questions that were in Carrick's presentation, please come talk to us. We'd love to hear about your ideas, and we'd love to hear about ways of getting you involved and getting you tackling some of these questions as well.


Questions

Question: A lot of the AI talk at an event like this tends to focus on AGI, that is, general intelligence, but I wonder if you think that this kind of governance, and the dynamics that you're talking about, become important only as we approach general intelligence, or if they might become important much sooner than that?

Jade: I think there's a set of things which I hope are robust things to look at regardless of what capabilities of AI we're talking about, and I think the mindset that at least I approach it with - and I'd say this is pretty general across the Governance of AI Program that Carrick and I work at - is that it's important to focus on the high stakes scenarios, but there is at least a subset of the same questions that translates into actions that are relevant for nearer-term applications of AI. I do think, though, that there are some strategic parameters that significantly change if you assume scenarios of AGI, and those are absolutely worth looking at and will to some extent change the way that we analyze some of those questions.

Carrick: I would also add that it depends a little bit on what part of the question you're looking at. When we think in terms of geopolitical stability, balance of power, and offense/defense dynamics, near-term applications matter a lot. Trying to keep that stable and tranquil as you potentially move from there up to something like AGI, so that you're not already adversarial or locked into these dynamics, is quite important.

Question: It seems that the cooperation between enterprises and the government is much tighter and more collaborative in China. First of all, is that a fair assumption? And how do you think that affects where this is likely to go?

Jade: Yeah, I think that's absolutely a fair assumption. One of the key differences, among many, but one of the most notable ones in China, is that the relationship between their firms, their government, and their military is, I don't want to say monolithic necessarily, but at least there's a lot more coherence in the way those actors interact, and the alignment in their goals is a lot closer than you get in the US. In the US it's pretty fair to assume that those are three pretty independent actors, whereas in China that assumption is closer to not being true, I think.

In terms of the implications of that, there are a number of them. The most obviously robust one is that the pace at which China can move with respect to pursuing certain elements of its AI strategy is a lot quicker, and a lot more coherent.

I would also plausibly say that the Chinese government has more capacity and more tools available to it to exercise control and influence over its firms than the US government has over US firms. That has a number of implications, which I don't want to go on record putting on paper. But you can use your imagination and figure out what that will tell you about certain scenarios of AI.

Question: Is there any sort of… we clearly see some power exerted by Google engineers in this case. It may be unclear exactly where things shake out, but it's a force. Right? I mean, people can leave Google. They're eminently employable in lots of other places. Are there any examples or signs of that same consciousness among Chinese engineers?

Jade: That's an excellent question. I don't have a very clear answer to that. There are a number of researchers who are working on getting answers to that, and they will have better answers than I do. I'll particularly flag Jeff Ding, who's a researcher at the Governance of AI Program, who does excellent work on trying to understand what the analogous situation looks like in China. There's Brian Z, who is also working on this and trying to understand it better, as well. So, yeah, I can't comment on that necessarily, but there have been fewer data points, is the one thing that I conclusively can say.

Carrick: What I will say is, there might be something like a third option, where Chinese AI researchers, again, who are quite employable, could go to DeepMind or somewhere that maybe seems a little more neutral, if this is something where they don't like the dynamic. But I'm not sure this is something that has actually taken off.

Or again, this might be a case where having something like an intergovernmental research body, one that pursues science in a pure sense and has international credibility, might be quite useful. It can provide an exit for people who are not quite sure, if there is a race dynamic, that they want to be engaged in it.

Question: One question on the possible role of patents and intellectual property in this. Do those rules have force, or not really?

Carrick: In the United States, the Department of Defense has the right to use any patent and pay just compensation, so you can't actually use a patent to block DOD. There's a special exemption for it. As for other IP between firms, it's not uncommon for Chinese firms to steal a lot of American intellectual property.

I don't know how this would work, for example, with the Department of Defense interacting with a Chinese patent. I haven't looked into that.

Question: How much do you think individuals matter to this analysis? For example, in this Google case, at least one questioner says, “Eric Schmidt, personally, is a big part of this story”. So, if you zero in on that one person, you get these idiosyncratic possibilities, where maybe if it was just one individual swapped out, things could be quite a bit different. What do you think about that?

Jade: Yeah. That's an excellent question. Schmidt definitely is a particular individual who had a lot of influence on this particular case. I suppose there are two ways to answer this question.

One is that, yes, individuals matter, but I lean towards thinking that people assume individuals matter more than they actually do. I think fundamentally, a person like Eric Schmidt is still constrained by the structural roles that he has been given, in relation to the institutions that matter. So that's one answer.

Two, even if individuals do have a fair amount of influence in this case, if we're talking about trying to do robust analysis, it's a better analytical strategy to focus on an aggregate set of preferences that's housed in an entity that's likely to exist for a while, or at least one that you can assume will likely have a role to play in AI strategy and governance. Individuals tend to turn over a lot quicker than is reasonable for placing a lot of analytical weight on them.

Schmidt again is a bit of an exception in this case, because he's been around and dabbling in both of these scenes, in terms of defense and Google, for a fair amount of time.

But I think there are also fewer individuals who you could point to, and a larger number of actors whose behavior you can consistently observe historically as well, which is really where the power of the analysis comes from.

Carrick: What I would also add is that, I think sometimes it matters a little bit who the individual is and what they're motivated by.

I think for most people, maybe their values and what motivates them are a little underspecified. As a result, they can be pushed around by the dynamics around them. I think that one of the reasons why it makes sense to have people who are motivated by EA considerations and altruistic considerations get involved in government and in these firms is because they can potentially be steadier and less subject to the currents, and keep their eye on what part of this is actually important to them. Whereas I'm not sure most people have that bedrock.

Question: Who has how much power here, as you guys see it? There's not that much talent in this space. The best talent is so scarce that maybe the most power really is there, which would suggest an evangelism opportunity, or a very specific target for who you'd want to reach out to with a particular message. Do you think that's right? How do you see the balance of power as it exists today?

Carrick: I don't know the overall balance of power. But it does seem to be the case that researchers have a lot more power than they would in normal industries, which is why I think the Department of Defense actually needs to cater to AI researchers in a way they haven't really ever catered to cryptographers or other groups, where they've gone in with their money and papered over things.

With that being the case, I think if the AI research community treats itself as a political bloc with values it wants to advance, then it will have to be taken seriously.

The AI research community generally has very good cosmopolitan values; they do want to benefit the world, and they don't have very narrow, parochial interests. I think having them treat themselves as a political bloc, and maybe evangelizing to them to do so, could be a fantastic lever in this space.

Jade: One damper to put on that. I promise I'm not a skeptic by nature, but historically, I think research communities haven't mattered as much as one would hope.

Carrick: That's also true.

Jade: Look at cases like biotechnology, nanotechnology, and whatnot, where you had somewhat analogous concerns pop up. You also had this transnational research community vibe exist. Not even just a vibe, actually, but institutions, professional networks, and whatnot, that constitute the existence of that epistemic community.

That has had limited influence on decisions made by key actors in this space. It hasn't had no influence, that's absolutely not true; there are some really good examples of this transnational research community mattering a lot. But I think those have been fewer and further between than one would hope.

Carrick: I'd like to say something on both sides of this, because I think you're right. This is a difficult line. There was an idea with the International Air Force. When people were proposing this they were saying, “Aviators are the natural ambassadors. They're in the sky. They fly between countries. They're so international, that of course they would never bomb one another. This wouldn't make sense.” They were saying this immediately before WWI. Then just like that, it wasn't even a question. So there's a thing where you can be captured, again like cryptography and other areas.

And to some extent physics, during the Manhattan Project. Most of the physicists were not American; they came over and still engaged in it. But with physicists, afterwards, a lot of them pushed towards disarmament, towards safety protocols, towards taking this quite seriously. They still are actually a really important part of that, so it's a little unclear.

I think with AI researchers, given that they do seem to have a somewhat coherent set of values, and they are a small group, they might be more on one side of this than some of the others. But yeah, I agree. It's not a guarantee and it's not easy.

Jade: One can hope.

Question: What rules or governance regimes should we be trying to put in place? We've got certain bodies of researchers putting forward statements. We've got Google now putting forward some pretty cosmopolitan values and principles. But what is the framework that everybody might be able to sign onto? Do we have a vision for that yet?

Carrick: I think it probably doesn't make sense to try and have too substantive a vision at this point. I like the idea - I'm sorry, I'm being a lawyer here for a second - of a procedural vision. This is the idea where you say, "What we agree to is that everyone will have a say. That we won't move until we've put this procedure in place, and that we've taken into consideration not just the actors who are relevant in the sense of having control of this, but the people who don't have much say in this, or who don't have access to the levers." And to some extent, other moral considerations like animal welfare, the benefit of the earth, and these things.

I also think that, mostly in terms of substantive research, we're trying to push towards something like this procedure and coordination, with the hope that the substance naturally falls out, more than putting forward too much of a substantive suggestion.

The exception to this would be something like a commitment to a common good principle, which I think is almost the same thing as a procedural thing, because it's underspecified in some ways.

Jade: I agree entirely with that. I would hesitate, for folks thinking about working in this space, to try to drive towards articulating specifically what the end goal is. As I alluded to, I think there's a lot of uncertainty, so those things are more likely than not to not be robust at this stage.

That being said, I think there are a number of robust things, like the common good principle and a common commitment to that. For as much as I sounded like a wet blanket on that, I am hopeful that research communities are really important. I think anything that can boost researchers' power and make that community stronger and more coherent, in terms of encapsulating a set of values that we want, is good. The other robustly good thing, I think, is to acknowledge that states aren't the only ones that matter in this case, which is a pitfall that we tend to fall into when we're talking about international governance.

In this case, firms matter a lot, like a lot, a lot. A robustly good thing is to focus on them and place them somewhat center stage, at least alongside states, and to understand how we can involve them in whatever the solution looks like.