This is the full transcript, lightly edited for grammar and clarity, of an interview with Jessica Batke, ChinaFile’s senior editor for investigations, and Laura Edelson, assistant professor of computer science at Northeastern University, on their report “The Locknet: How China Controls Its Internet and Why It Matters.” An abridged version of the interview, with an introduction to the report, was published previously.
CDT: The report is brilliant. Jessica, you probably started learning Chinese before Pleco, with paper dictionaries …? The report felt a bit like the equivalent of Pleco for learning about online censorship … in a way, it’s almost annoying that people can get this much information in one place, and don’t have to do it the hard way. It’s so good.
Jessica Batke: That’s the best compliment I could have ever gotten. Oh, my God, I’m going to write this down.
Laura Edelson: Can I put in a small plug for our methodology?
CDT: I would love to hear about the methodology!
LE: I think about this a lot, because almost all my research is at the border of computer science and something else. I was really interested in this problem, because I think China as a state actor is just so important to understanding internet threats, and I think the way we think about China as a threat actor has changed over the last 10 to 15 years. We haven’t all caught up, but I think it is surprisingly rare to have two people from different disciplines participating as equal partners in a piece of work. And what that meant is that we didn’t have to subsume one to the other, and we really were able to generate, I think, novel insights. I think this is one area where there really has been a fundamental change in how we should think about the Chinese system of censorship and propaganda. But it really needed someone to come in and look at it with a fresh sheet of paper, and not just try to take the old model and extend it forward.
And I think especially in those kinds of problems where we know things are different, we know our picture is wrong, coming at this from a couple of different perspectives at the same time, and trying to knit those two perspectives together, the whole is greater than the sum of its parts. If I just went off by myself and wrote the computer science version of this, and Jessica wrote the China expert version … if you put them together, they wouldn’t have been as useful as what we came up with together, and that’s why I was so proud of this work.
JB: The thing that was so awesome for me was that nothing was taken for granted. Laura came in and was, like, "If I was building this system from first principles, what are my first principles, and then what does that mean for the technical design of this system?" And so you’re engineering it mentally from the ground up, rather than working backwards from a place where I would have started, which is having all this knowledge, and then inferring things. We were going from the ground up that way, and I think that made all the difference.
LE: The flip side of that, too: one of the things that is so difficult as a technical person, trying to explain technical systems to lay people, is that so many of the reference materials out there assume a fairly high level of knowledge. They assume that you are in a bachelor’s computer science program. And so sometimes Jessica would ask me questions, and I would say, "Oh, go look at the OSI model and it’ll explain this." And then she’d come back and ask … "but what is this thing?" It doesn’t explain it at all, does it? And then I’d take out a sheet of paper, and we’d go through it, and I think again, we’re coming from two different disciplines that are little clusters of experts. We don’t always have to explain the inner workings of things to a wider audience, and when we had to explain things to each other, it really helped us look at things with fresh eyes.
CDT: Yeah, that really comes through. I think the combination, especially the fact that you were new to China, and Jessica, you weren’t new to the technical stuff, but relatively …
JB: I was pretty new. I would say I wasn’t scared of it, right? But I didn’t know what the stack was. I didn’t know how packets worked, or how data was sent. That was all new.
CDT: But now that you put it in those terms, I think that a lot of what’s so great about the report really comes from that combination. You began the report with a passage about the RedNote episode earlier this year, and that really made me realize that I have no idea what normal people know or think, or think they know about China. It made me realize how much I take for granted … there’s that XKCD cartoon ….
LE: When that RedNote incident happened, Jessica said to me, basically right away, "oh, this has got to be the opening!"
JB: I was really relieved. I’d been thinking for a year, what is our lede going to be? Because part of my job was to figure out how we package this in a format that would work for ChinaFile. And the lede is always the hardest part for me. And when that happened, I was like, there it is!
LE: I think what I appreciated so much about the journalistic convenience of it was that it was such a perfect little demonstration of the Locknet and how it works. It encompasses, obviously, the service-level censorship … but also some of the meatspace stuff was on display. In the end we did wind up back where we were, with segmented networks that sometimes spill over, but for the most part, there is a real wall: even today, the foreign users who remain on RedNote are just not in the same ecosystem as Chinese users.
CDT: It’s rare for a combination of authors like this to gel. Often China people and tech people talk past each other, but you really complemented each other.
JB: I think we just got really lucky. There’s no reason that this had to have worked. We just happened to be very complementary personalities who like talking.
LE: A tremendous amount of time on this project, and I think some of the most useful time, was spent first just talking through the basic ideas of what we thought the system was. I remember a few of these "aha!" moments, like when we realized, "Oh, if we think about the control of the flow of information as a design requirement of the internet, and you just bake that design requirement into every layer of the stack …." Because that’s what you do with design requirements. And we started thinking about this as a fundamental requirement of the technology, one that we don’t have in the global internet, but there’s no reason that you couldn’t build a network with it as a design requirement. It’s just a choice. It’s just a value you either have or you don’t.
And you know, for the Chinese Party-state, that’s a core value—a core design requirement for anything that is going to be social infrastructure in the way that the Locknet is. And as a design requirement, it’s certainly a lot more robust when you control the police and judiciary, as they do, so they can lean on those systems out in meatspace. This is what makes the system so effective, that it’s so complete when you look at the entire stack, which means it doesn’t need to be complete at any one particular layer. And when I came to see that the way I should be thinking about this was as a system, as opposed to individual components … that’s when I really started to think about the fact that actually this is how I would build it. You wouldn’t want to overengineer any one component, any one system, by making it completely perfect, because it’s just so expensive. How do you efficiently deploy your resources?
This is where I’ll say that I’m not a traditional academic. I had a long career in industry, building software, which is probably why I think about it this way, at least in part. But I really came to think, actually, you probably want most of these systems to work 80-85% of the time. That’s how often it is efficient for them to work. And you’re going to rely on the fact that you do have redundancy up and down the stack. And then additionally, you have redundancy over time. If one message comes in and one message comes out, you don’t really care. You care about what happens over time.
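The back-of-the-envelope math behind that point is worth making explicit. A minimal sketch (my own illustration, not from the report): if each layer of a filtering stack independently catches only 80-85% of targeted content, stacking a few such imperfect layers yields a combined system far more effective than any single layer needs to be.

```python
def combined_catch_rate(layer_rates):
    """Probability that at least one layer catches an item,
    assuming the layers act independently of one another."""
    miss = 1.0
    for r in layer_rates:
        miss *= (1.0 - r)  # item survives only if it slips past every layer
    return 1.0 - miss

# Three layers that each work only ~80-85% of the time together
# catch over 99% of targeted content.
print(round(combined_catch_rate([0.80, 0.85, 0.80]), 4))  # → 0.994
```

Independence between layers is itself an assumption, of course; the point of the toy model is just that "good enough" components compound into a near-complete whole.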
CDT: How did the project come about?
JB: Actually, ChinaFile got approached. Essentially, the Open Technology Fund was aware that current understandings of how the censorship system in China worked needed a little bit of a refresh. They were looking for someone to do that, and somehow, they got pushed in our direction. I was super excited to take this on as a project.
The touchstone that I kept going back to—which now, looking back, is hilarious, because it’s only about four pages long—was this 2008 article in The Atlantic by James Fallows called "The Connection Has Been Reset." I remember where I was when I read it. I remember what house I was living in, and the table and, like, all the stuff. For me, it was a really seminal work, and it really is a master class in explanatory journalism for lay people—I recommend reading it. There’s an amazing Soviet joke, it’s wonderful, just wonderful. But that was the only thing I knew of that tried to explain the technical workings of the censorship system to a lay audience. And that was part of the brief: this needed to be a technical review of how the system worked, but it had to be understandable to regular people; and that was in 2008, and it was four pages long—it’s like we’re in a different universe.
So I was very excited to take this on, and my first thought was, "I can’t do this by myself, because I don’t know anything about computer science." And number two, I really wanted to find someone that had no China background at all. That was really important to me, and for all the reasons that we have already been through, that got played out over and over again and validated. If I only had one good idea in my entire life, it was that one.
CDT: It was a very, very good idea.
LE: Just to say why I think that was so important: I now want to find all the people who made these explanations of how networks work for non-computer scientists and just have a long conversation with them. Because so many of these explainer materials—and this is also true in the China space, it just happened over and over again—I would think that something was really clear and really obvious: "Of course, they’re gonna know fiber optic has glass in the middle. What do you think it is made of?" And the answer is that, outside your own field, you don’t think it. You don’t think about it at all. It’s a black box, and the explicit point of this project was to make the functioning of this system understandable to someone who was neither a China expert nor a computer scientist.
It’s an unusual problem because, you know, very often there’s just one discipline you need to translate to a general audience, and this one, I think, genuinely requires two. I don’t know if you read my "Notes on China from a Computer Scientist." I think what was so difficult about that section was that for China experts, many of those items just seem obvious. It’s almost offensive to mention them. But I guarantee you that for my technical community, certainly, they are not obvious—I think we’re even further off in the wilderness than, frankly, a general lay audience, because we spent all of our higher education taking technical classes. I have taken many, many, many math classes … in my undergraduate college career, I probably had to take two English classes total, and one history class total. I did go to an undergrad that had an arts requirement, so I did take pottery. Those are all the non-math or -science classes I ever took. That’s it.
It was important to me that this work was something that would inform the computer science community and the cybersecurity community, who know a lot about the technical mechanisms that we’re talking about, but don’t necessarily know how they’re all knit together, because we are often looking at and trying to measure one system at a time, and trying to understand that one system with a high degree of accuracy. But something that came out was that this background knowledge that China scholars take for granted just isn’t something that many people who work in cybersecurity generally have. And I think this gets back to the fact that this knowledge is important not just to people who study circumvention, many of whom do have this context, but to anyone who wants to think about the internet as a whole.
JB: I’ll just add that not only do China experts take all of this knowledge for granted, I think even more so they take for granted that if you don’t know a fact, you can just look it up on the internet. If you don’t know about Deng Xiaoping, or the Hundred Flowers Campaign, you just look it up, and then you have the answer. And over and over again, I thought, “But there are so many interpretations of things ….” You don’t want to send someone to the internet to look up, “Is China good?” “Is Chinese censorship good?” That’s not helpful. They’re going to get some weird answer, especially now with all the AI slop everywhere.
I actually felt like Laura had a much harder job, because with the things that she was learning, a lot of times there isn’t one right answer, right? The things that I was learning were things like, "This is what a data packet is, and these are the two mechanisms by which they are sent." There is an answer. Sometimes it was poorly written or hard to find, but there is an answer. And that just isn’t the case for a lot of stuff to do with China.
LE: When this project was getting toward the end—we were probably a few months away from publication—I went to a lab lecture. A very common thing in academia: we’ll have a guest speaker, they come in, they give a little talk to our lab group. We take the speaker out to lunch. Anyone who’s written an interesting recent paper, and is in town, will come over. And we had someone come in who had done some interesting work on measurement of censorship systems. We go out to lunch, and we’re talking about his paper. We’re talking about some of the background on China that is relevant to his paper, and a Ph.D. student asked, "So, is there voting in China?"
And that’s actually a very complicated question to answer. I think that’s the kind of thing that’s not, like, "What is the structure of data packets?" I would have to look up a few details, but there is one answer to that question, and I can draw you a picture. It’s very clear.
CDT: I did read that section [Notes on China from a Computer Scientist] … I loved it. It’s great for people who don’t have that background, but also really valuable for “China people,” I think, because it’s really valuable to see what was notable to you, as a relative newcomer to the subject. I’ll make a point in the write-up of nudging people not to just skip past that, because it was really interesting to see.
LE: Can I maybe talk a little bit about what I think are opportunities for the future?
CDT: Please do.
LE: The reason we really wanted people to have a mental model of how the internet worked, and also how China breaks the internet, is that we need to be able to talk between communities about where opportunities lie, and what might be fruitful lines of research—productive things that we should be building—given knowledge of both how the internet works technically and also the broader China context. We need to be able to have that conversation back and forth.
As a matter of strategy, if we think about the weaknesses of a state actor, the primary one that I see is just that they’re slow. They will get there in the end. They really will. But they are big slow bureaucracies. And when I think about the strategy that we should be employing when we are, as a community, building circumvention tools that might actually continue to allow the free flow of information to people inside China who want it, I think we need a bit of a strategic shift. Instead of making monolithic technologies that are really robust and really technically sound, but are single points of failure that a state actor can devote a lot of resources to taking down and then actually blocking, we should be investing in smaller, frankly shorter-lived bits of technology that might only stay up for 18 months. That’s okay, if we make enough of them, if we change how we invest in new projects so that we’re making a lot of small bets, rather than one big bet. I think that, as a strategy, that will be more robust against a state-actor adversary.
That’s something that came out of thinking about China, and then getting into how the system works on a technical level. What does that really buy us? I think the other thing it buys us is understanding that the adversary’s systems are porous at every stage. When we do that diversification, we should be thinking about diversifying methods and strategies. It’s not just about funding five different VPNs that rely on the same technology stack. We need to be investing in novel ideas for protocol generation. There’s a range of those kinds of things where you need to be thinking about how someone is going to navigate that gauntlet of censorship, and providing them with a range of tools.
And the last thing that is really an emergent property of the circumvention ecosystem is the rise of, effectively, circumvention-as-a-service providers ["airports"], where there’s some guy and you talk to him over WeChat or something, and you pay him some amount of money per month, and he gives you a box or a hotspot. What it’s using, the specific technology stack it’s using, might be changing every month, but that doesn’t matter to you, because you’ve paid some guy, and that guy is dealing with it. And what’s really going on here is that we’re shifting from a B2C model of providing censorship circumvention to a B2B model. What we should be doing is building a range of tools that give those circumvention-provider middlemen a range of options and let them be nimble; they provide the last-mile customer support and do the technical heavy lifting of switching whatever needs to be switched that month to adapt. If you can give them that, if you can relieve the technical burden of doing that, then you have a feedback loop where one person is paying the other for exactly this kind of technical support. Getting a feedback mechanism like that is a problem we’re always trying to solve.
JB: The other thing that I’ll say about opportunities for the future: some of them have to do with the fact that the Locknet is not staying just within China; it’s affecting the global internet and us too. And so one of the other things that we talked about was increasing transparency for users outside of China. For example, we talked about RedNote. It might be helpful for people who download RedNote here in the U.S., say a mom in Iowa, if she opened up the app and it said, "This app is subject to censorship according to Beijing’s rules." No matter where in the world you are, and no matter where you’re logging on from, just that basic level of transparency might be helpful, because I don’t know that everybody fully understands that.
Another thing we talked about, and I don’t know how much in the weeds we want to get, is making sure that people are aware of internet standards meetings, and sending and funding people to go to them. If you’re me and you’re not technical, "internet standard setting" … those words immediately put you to sleep. If you’ve ever seen “In The Loop,” it’s like the Future Planning Committee … almost designed to bounce you out just from the words. But it’s super important. It’s going to have a huge impact on how the internet systems of tomorrow function at a technical level, and on what it will allow in terms of governments’ ability to surveil or censor if they so choose. A lot of the ways in which bad things could happen come down to the fact that folks who have an interest in keeping privacy built into those systems are not organizing and making sure that there are people at every single meeting, every single time. There are a lot of little things like that, quite important ones, that folks could be doing to help ensure that people are at least aware of what’s happening, or to try to hold the line on some technologies internationally.
CDT: The way you highlighted standards in the report is really valuable, because it tends to fly under the radar … I guess because most of us don’t have the right level of understanding. Talking about changing IP protocols feels like suggesting that China is going to start messing with the periodic table. We can’t even get our heads around it.
JB: It is actually very analogous to spectrum or broadband: what frequencies people are allowed to broadcast on, and that actually really matters for a bunch of stuff. But that just all happens behind the scenes, and then you end up with whatever radio or TV broadcast you have, and your life is just fine, and you don’t have to think about it. It’s very similar. In that way, you will still have the internet. It just may not be the internet that you’re used to.
LE: I think this really gets to the point that people like to think about their technical systems as just technical systems that don’t have human values baked in, but they absolutely do. You know, the reason that the internet works for you is because you share the values of the people who made the internet. The internet didn’t work for China’s Party-state. It had big problems with things like the free flow of information, the way that the internet enables people to, at least digitally, have freedom of assembly. These are not actually the values of the Chinese Party-state. And so when it was building the Chinese internet, very early in the process, it wanted to make sure it had a pathway to building the Locknet. You see this in some of the early documents: they wanted to make sure, are we going to be able to get to where we want to go, where we can control the flow of information? Because that is, at least to them, a core value. This is why the internet works for you: because it has embedded within it the values of your society. And so to you, those are invisible. But as soon as you are faced with a technical system that does not embed your values, they’re very visible.
CDT: I think this is a good point to segue to the inevitable AI question, which in 2025 is legally mandated. How is AI being used so far in the information control system, and what are the prospects for it in the future? Regarding your earlier point about China’s bureaucracy being big and slow, is AI going to help make it more flexible and able to keep up?
Another point I’ve been wondering about is that, a lot of the time, AI is substituting for human labor, which is not something that’s in short supply in China. So is it possible that, in the Chinese context, it’s actually not going to be such a big game-changer, because they have millions of people they need to find jobs for, and that’s just not the bottleneck?
JB: It’s important to understand that there are two sides to this coin, one of which is the actual censorship itself. How is AI being used to implement censorship? There are two ways: one is to augment the human labor that you’re talking about, the labor of the censors, to identify content that users have produced, flag it in the system as problematic, and get it taken down. The other is to produce content that is already censored [in the process of being generated]. Those are two different functions that the AI systems we’re probably all thinking of right now are being used for, and it’s important to disaggregate them.
I think in the first case, where they’re augmenting people, we’re already seeing that, and Laura, I think, is better positioned to talk about the efficiencies that brings. But I don’t think there’s any point at which the human gets removed from the loop entirely. You need humans always, at some stage, to be able to say, for example, "Hey, the meaning of this word has changed." But that said, AI is getting more able to do that. And then on the other side, the question of how AI is going to be used to produce novel content is something that Laura and I are really interested in working on in the future.
LE: About the first point, about the way AI can improve the Chinese Party-state’s ability to execute censorship, along with the many other parts of society that execute censorship on behalf of the Party-state, like platforms and so on: everything Jessica said is 100% right. I think AI is potentially an accelerant in the sense that it can make human content moderators more efficient in all the ways that automation does, but it’s not going to replace them, because fundamentally, the way humans use language evolves over time. That’s the first reason. But secondly, what gets censored isn’t fixed. It’s not like there’s a list of the thousand things that are censored that will never change. New things are censored every day, and that means there’s always going to be an important role for humans to play in that system. But I do think that AI is going to make those humans more productive and … I don’t think it’s necessarily going to lower costs so much as it’s going to make it easier to keep up with the growth in content that gets generated.
JB: This brings up a really important point that we talk a little bit about in the piece, but not as much as either of us would have liked, because it was a big "aha!" moment for both of us. Chinese companies have this extremely awkward mission in terms of their content moderators, which is to get rid of things that the Chinese Party-state wants memory-holed. Therefore they have to teach people this information that should not exist, so that those people can then attempt to memory-hole it, and that is just an extremely awkward position for the companies to be in. AI doesn’t completely obviate that, but to the extent that you can program in some of that stuff—and that’s the stuff that’s most likely to be permanent, right? Like the Tiananmen Square Massacre—that does help you, because if you can get rid of 99.999% in your first-line AI review, and very little gets to a human reviewer, you don’t have to educate actual humans on these things that you want memory-holed, which is a net positive from the perspective of the state.
LE: Super important point. I’m really glad you made that, because it does ease this difficulty with covert censorship. But getting to the other question: in addition to using AI to execute censorship, AI results and responses are also going to be censored. I think the reason I find this so interesting is that it gets to this larger point that not all technologies are equally easy to censor.
For example, we were just talking about our interest in the little technical problem of covert censorship, how there are certain things that, if you want to memory-hole them, if you want to erase them from existence, not only must you censor mentions of that content, but you can’t tell people. You can’t say publicly … you can’t have a platform rule that says it violates our terms of service to write about the Tiananmen Square Massacre. You can’t do that because that violates the memory-holing. So there are lots of situations where the censor would really like to be able not just to censor content, but to do it in a way that is not visible to the people who are being censored or the people whose information is being censored. Now, there are certain structures of delivering information where this is easier or harder.
Let’s take search results. There are times of the year where particular politicians’ names are censored—you won’t see any results for them. At other times of year, you can search for those people and see results, so you can see that the censorship is taking place. That works for search results because I type in a search term, and I get back some results, and I can see that in this time period, I don’t get back any results, or I get back three, versus the rest of the year, when I get back 10 pages of results.
However, what if, instead of consuming my news by going to a search engine and searching for terms, I’m consuming information in a content feed on a social media service? I don’t know what isn’t being upranked in my feed. I don’t know what isn’t being inserted there, because the particular way the feed is assembled is hidden—not for any censorship reasons, that’s just the way that the technology is built. And this means that content feeds are easier to censor, and also that that censorship is always covert.
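The contrast can be made concrete with a toy sketch (my own illustration; the posts, blocklist, and scoring here are invented and reflect no real platform). With search, the user issues a query and can notice when results vanish; with a ranked feed, the user only ever sees what was selected, so suppression leaves no visible trace.

```python
# A single blocklist applied two ways: to a search engine and to a ranked feed.
POSTS = ["harvest festival photos", "local politician scandal", "cat videos"]
BLOCKED_TERMS = {"politician"}

def search(query):
    """Search censorship is observable: a user who queries a blocked
    term and gets zero results back can see that something is missing."""
    hits = [p for p in POSTS if query in p]
    return [p for p in hits if not any(t in p for t in BLOCKED_TERMS)]

def build_feed():
    """Feed censorship is covert: downranked or dropped items simply
    never appear, and the user has no baseline to compare against."""
    scored = [(0.0 if any(t in p for t in BLOCKED_TERMS) else 1.0, p)
              for p in POSTS]
    return [p for score, p in sorted(scored, reverse=True) if score > 0]

print(search("politician"))  # [] -- the absence itself is a signal
print(build_feed())          # looks like a complete, normal feed
```

The same suppression happens in both functions; only the search version produces an observable gap.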
Censorship of AI systems is also covert in this way. There’s all sorts of reasons that people like going to a chatbot and asking a question and getting an answer, as opposed to going to a search engine and giving a search term that’s relevant to their question, and then going and reading the answer … it’s just easier, right?
[Doing it the old-fashioned way is harder.] There’s a thing you want to know. You are engaged in knowledge formation. So you type in a search term, you’ll go to the first link, or maybe you’ll go to the first couple. You’ll read what’s there, and then you will form the new idea, you’ll get an answer to your question. Maybe you’ll read a couple sources to be sure.
That’s not how it works with a chatbot, and that’s why people like it, because the other process is hard, and people don’t want to do it. They just want to be told an answer. They do not want to have to think of the answer themselves after reading several pages of reference material. And I’m sure it’s obvious why this is an easier surface to censor, and to censor covertly, because the model controls the fact selection. Let’s say you ask a question like … I’m trying to think of a really innocuous question … "How did we get house cats?" And I’ll say I know zero about this topic. There’s probably a long and complicated history of cats coming to live with humans. They killed mice, and we liked that. There’s a lot of facts, and you could probably construct a different narrative of how it happened based on how you selected from those facts. And you could probably valence that story to tell you different things about humans, about grain storage and agriculture, or different things about cats … and if I am in control of the knowledge formation—from what sources am I drawing, from which facts in those sources will I emphasize, and then how will I group that information—I have a lot of control over final knowledge formation.
CDT: Were there particular misconceptions that you were aiming to correct with the report?
JB: We didn’t come into it, I think, with a ton of things we were trying to correct, other than that the existing model was outdated and oversimplified—I credit Laura with this—because we were coming to it from first principles: what is this model? I actually think that was a much more useful approach than what I would probably have done on my own, coming to it with knowledge and thinking, "Well, this is wrong, and we need to blow it up." Because we came from the ground up, only when we built up to a certain level did we then think, "Aha! Now I see why this is incorrect, and we need to update understandings." But that happened more organically, if that makes sense.
LE: I think what was clear is that the old model was no longer working. I think that a lot of people knew that their model for how the system worked and what its goals were was just not predicting outcomes anymore. And so we needed a fresh view of it. But I don’t think we came into this knowing what was wrong. If we had known what those misconceptions were, we would have just fixed them. But there were some particular things on which we came to conclusions that differ from the previous dominant narrative. One of the most serious, I think, is that inside the computer science community in particular, we looked at the individual components and saw that they were imperfect, that they were at least somewhat porous. And when you look at any of those individual components, you might think, "if you want to get around this, you can … if you put in the effort, you can get around this" … and maybe you think that’s intentional, maybe you don’t. But I have come to think that this system is much, much more effective than the previous dominant narrative would suggest. And I think that change of perspective came because we looked at the system as a whole.
CDT: And in terms of that effectiveness, the limitations and porosity of the system are a feature, not a bug, right? They’re what makes the system as a whole practical.
LE: Yeah, they’re what makes it efficient. If you think: "I have x yuan to spend on controlling the flow of information over the internet, how am I going to do it?" What is the efficient way to spend those yuan? It’s to have systems that are each fairly … not minimal, but not overengineered … but to have a lot of them. That’s a really efficient, adaptable way of building a system, and it’s how we build other kinds of systems—that’s a very normal engineering approach to solving a problem.
JB: Another piece of this is understanding that just because there are gaps, that doesn’t mean the system as a whole is not effective. The human psychological component to this is that for most people, most of the time, as long as it’s inconvenient, that’s enough. If you really want to get around it, you can—and we’ll leave aside the fact that they’re trying to make that harder and harder for people to do—but to get to the threshold of "I really want to get around it" takes more than most people give it credit for. There are a lot of assumptions baked in, especially among people who’ve maybe been looking at this for a while.
Back in the day, people really did want to circumvent, because the outside internet was so much better, there was more stuff. That’s not the case anymore. There’s a whole domestic ecosystem that’s really great for all sorts of things. So again, the friction keeps increasing, both in terms of what you have to do to get out, and in terms of what you would even want to access. It’s really important to keep this human component in mind. I speak for myself: humans are lazy. I don’t want to do more work than I have to do, and that’s all the censors have to do: just make it good enough for enough of the time and enough of the people.
CDT: I often think about—nothing to do with censorship—studies about how every extra second that it takes for a page to load will deter x percent of users. The tiniest speed bump ….
JB: It can put people off. A huge part of this, actually, is literally just that the Chinese government has under-built the actual physical infrastructure that connects the Chinese internet with the outside internet. And so it is slower. There’s way more stuff going over those pipes than there should be, so domestic content is going to load a trillion times faster, not just because it’s so much closer, but because to go abroad, traffic is running over this overused, creaky infrastructure. A lot of times people will think, "Oh, it’s because it’s being censored. That’s why it’s slow." There’s this idea that the page is loading really slowly because the censors have to read it and make sure it’s OK, and that’s not at all what’s happening. It’s literally just that the infrastructure sucks. And that is another one of these ways of introducing friction that has the effect of censorship without having to actually implement censorship. It’s very clever.
LE: I do think it’s worth saying, though, that the outside global internet remains very appealing to people inside China. And also worth remembering that the thing that is appealing is probably Netflix and gambling and pornography, not news in foreign languages. I mean, how much news in foreign languages do you read? Probably a lot more than most people, but probably still an overall small piece of your media diet.
Again, I think this is getting back to where the opportunities are: I think we should be building circumvention tools that work for that commercial market. Because when people go out to get their Netflix and their Marvel movies, that keeps the door open for information and knowledge about what is going on outside China, and that is still a good and important thing. That little moment in internet history where Chinese and American and European users were on RedNote … that was an interesting, cool moment of human-to-human connection. And that’s actually a hard thing. One of the purposes of the Locknet is to maintain at least some friction between their own population and the larger global internet population, and so to the degree that we can enable that boring, but very normal commercial activity, we should be doing it.
CDT: You talk in the report about a shift towards covert censorship. Traditionally, it’s been quite “in your face”: you know, the notices that “in accordance with relevant rules and regulations, this content has been removed” or “can’t be viewed.” And the overtness of that has been part of the system, in that it lets people know that things are off-limits. It helps them keep that in mind. But from what you write, the system seems to be moving away from that.
JB: Yeah, some of this is what Laura already talked about, which is the nature of newer technologies that are inherently more covert—the way feed algorithms work in social media, that’s happening already.
But also, once Google was out of the picture, and more and more global companies were leaving China, there just wasn’t the same pressure to say that "according to relevant regulations, this content can’t be shown." There was no one to push back. No one had any leverage to do so, and so they stopped doing it. And there are pluses and minuses to that. I think there are more pluses, from the censors’ perspective, than minuses, which is why they’re doing it. Obviously, people will get less angry if they don’t know that they’re being censored. But, as you said, there is a didactic function to overt censorship: you are teaching people what they should and shouldn’t do, and you are, by extension, encouraging self-censorship, and that is the cheapest and most effective method. If you can get people to just stop typing what they would be typing, that’s amazing. You do lose that when you shift towards more covert methods. But overall, if you’re trying to shift the mental landscape of internet users, from a very long-term perspective, it’s helpful to you if they think that that’s organically happening rather than that they’re constantly chafing against restrictions.
LE: The only thing I would add is that I have no idea what the full motivations were behind the real-name ID system. But I would point out that certainly one of the benefits is that it creates a bit more of this overtness, the signaling to users of the fact that you’re being watched and that you should not violate the rules, because we will track you down. And I think this at least compensates for that loss of overt censorship, because that is the primary gain from overt censorship: signaling to users that they are doing something that you view as wrong. If all of your censorship becomes covert, you lose that signaling mechanism, that push toward self-censorship. So the fact that they have these other systems that are doing some of that in other ways, that’s certainly helpful to them.
CDT: You highlight the role of WeChat as a way of extending the censorship bubble to the diaspora. Do you have any policy prescriptions for ways to address that?
JB: I think it’s the same thing we were saying earlier, which is transparency. If you are using an app in the United States, it should have to say to you that this is being censored. I’m personally not in the business of telling people that they can’t use a platform like WeChat. That’s the main way that people have to communicate with their families back home—there aren’t a lot of other options for them. So if you’re cutting them off from that, what does that do? That’s horrible. So this is a tough answer. And this goes along with all the other things, there are so many things in U.S.-China relations, or China relations globally, where the authoritarian state, by its closed nature, by its willingness to run roughshod over people’s rights …
LE: This is the trouble with having an authoritarian state that we attempted for many years to integrate into the global system. They don’t play by the same rules that we do. The root problem here is you have a mode of communication where one person on one side is subject to the jurisdiction of an authoritarian state that reserves the right to arrest them for things that they type into a WeChat window, and—respecting all the nuance that there is about law in China—depending on what that person typed into that WeChat window, you might just not hear from them for months on end. You might just not know what happened to them, and no one else will either. That is a real thing that can happen to you, if you are inside mainland China, and will not happen to you if you are outside China. That’s not a function of WeChat, that’s a function of cross-border internet communication in and out of an authoritarian state. We’re just not playing by the same rules here, but we’ve been trying to, and now the strains in that system are starting to really show.
JB: That’s a really good point. It’s really not the technology, it’s the political systems.
LE: The best idea that I know of is Jessica’s, which is transparency. I do think that it is a reasonable expectation, if you are a user inside the United States, that you know … if you think that the primary jurisdiction of the technology product you’re interfacing with is that of the United States, that it is subject to the laws of the United States, you simply may not be aware that is, in fact, subject to a much more restrictive regime, which is China. And just telling people that seems fair.
CDT: Are there things we can learn from the Chinese case that might offer guidance for the challenges we’re facing in our own countries in terms of online freedom?
LE: One thing that has repeatedly come up in our conversations with computer scientists, some of our collaborators on future work, is that we ignore meatspace to our peril. If you want to understand how effective a censorship or circumvention mechanism can be, if you’re not thinking about the fact that people who are violating China’s censorship rules or using circumvention tools are breaking the law, and they live in a country where, again, they can be taken off to jail and no one will hear from them for many months … that’s just the meatspace reality on the ground, inside China. And if you aren’t thinking about that context, you are not going to build performant solutions.
What does that mean for our internet? Well, we are seeing a change in governance in the ways that the executive system functions, that is becoming much more personality driven, and, frankly, much less rule-of-law driven. I think that is something that the Chinese Party-state has used very effectively to really cement its control over what Chinese people know and are able to say, and sometimes even to think, because you can’t think things you don’t know.
Please do not construe this as saying that I think we’re going there. Because I don’t, but I do think that the tactics of, "Hey, if we can’t get what we want through the normal course of business, we will run lawfare against you. We’re going to find a way to file criminal charges against you. We’re going to put your physical person in jeopardy until you do what we want" … I think that is a classic tactic of authoritarian regimes, and it is certainly one that we are starting to see.
CDT: Another of the strengths of the report is that it’s an amazing collection of links to source materials—I have about 9,000 tabs open from it that I’m steadily working through. Are there any of those that you found particularly valuable, or that you’d particularly like to highlight? If readers only click one link in the whole report (which is a very bad idea), which should it be?
JB: If you want to see how the system has changed since 2008, I recommend that everyone read that James Fallows Atlantic article. And anyone who is interested in how to write, because, God, that guy can write. That’s the first thing that comes to mind. But there’s so much more.
LE: If I were to pick a paper—this would only be relevant to my technical community—there is an original paper by the father of the Chinese internet that proposes a way to filter out information. I found that paper so illuminating, because it proposed that blocking be proportionate to the likelihood of harm. It proposed, "we’re going to make a probability estimate of how likely this thing is to be bad, and if it’s, say, 80% likely to be bad, then we’ll be 80% likely to block it."
And the reason this is such an "aha!" moment to me is that, on a certain level, that is actually how this whole system functions. It’s not how any one component functions that we know of, but it’s very much how the system as a whole functions. It’s very efficient. It’s a little bit like a softmax function—that’s a slightly newer way of thinking about things, to do things in that probabilistic way, and it’s super effective. When I sat back and I read that original paper, that’s when I really thought, "Oh, this is on purpose." People really thought about what the design goals of this system were, and we see that played out. So if someone were to read one technical paper, that would be the one I would want them to read. But I understand very few people would want to go back and read a computer science paper.
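The probabilistic design Edelson describes can be sketched in a few lines of Python. This is only an illustration of the general idea, blocking in proportion to an estimated probability of harm; the function name and the 0.8 harm score are invented for the example, not taken from the paper being discussed:

```python
import random

def probabilistic_block(harm_probability: float, rng: random.Random) -> bool:
    """Block a request with probability equal to its estimated harm.

    A page judged 80% likely to be prohibited gets blocked on roughly
    80% of requests. Illustrative sketch, not the paper's actual design.
    """
    return rng.random() < harm_probability

# Over many requests, the observed block rate converges on the harm estimate.
rng = random.Random(42)
trials = 10_000
blocked = sum(probabilistic_block(0.8, rng) for _ in range(trials))
rate = blocked / trials  # close to 0.8
```

One appeal of such a scheme, as the interview suggests, is efficiency: no component ever needs a definitive verdict on any single page, only a cheap probability estimate, yet in aggregate the system behaves as if it had one.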
CDT: Laura, is there anything that you read about China that you found particularly illuminating?
LE: Oh, there’s so many good ones. I think that, to give a really boring answer, "The Search for Modern China" by Jonathan Spence was a very good ground-laying.
I think something that is kind of dated, but that really helped me with thinking about a different framework of values and morals, was [Jiwei Ci’s] "Moral China in the Age of Reform."
JB: When that book came out, I bought it immediately, and I ended up reading it on a trip to Hong Kong, where I was trapped on a plane for 24 hours. I read the entire thing on the plane, and took all these notes. I was still working at the State Department at that point, and I typed up my notes and sent it around to everyone I knew at the State Department, and I’m sure everyone thought I was completely nuts. So when Laura asked, "What should I read?" I said, "THIS!" and I’m so glad that she read it.
LE: It’s really hard to think about the box that you’re in. I definitely don’t make any claim to understand anybody else’s box, but reading that book made me more aware of my own, and that is extremely helpful.
"China’s Thought Management" [edited by Anne-Marie Brady] was also pretty helpful. That was a good one.
CDT: The foreign media presence in China has been decimated in the last few years. On every front, there’s less information getting out of China: fewer official statistics, less independent reporting inside China, less room for academic fieldwork, and so on. A related phenomenon that you write about is the growing difficulty of probing the censorship system from the outside, in terms of "bidirectionality." Can you talk about that now?
JB: You really did read this! Bidirectionality is absolutely crucial for us to understand how the system works. It’s extremely hard to set up a testing infrastructure inside China that accurately reflects what it’s like for a normal person to access the internet. The way we know what’s censored is by throwing packets, essentially, from outside, into China, and seeing what happens, and knowing that you’re getting the same results as if you were inside China. And if that goes away, if bidirectionality goes away, we will lose our technical capacity to monitor at a granular level how exactly these protocols are working, how they’re censoring things, and what is happening. You can lose bidirectionality on some protocols and not others—it’s not a fully binary thing—but it is a really scary thought and a really scary time.
This is another one of these things that has been a bedrock, foundational assumption for decades, for people that have been studying the Chinese internet. We found out about this because we talked to computer scientists working on this stuff. I was asking one of them about something completely different, and he just happened to mention, "By the way, while we were doing this, we discovered that for this protocol, bidirectionality doesn’t seem to be holding anymore." They weren’t writing a whole paper about it. They weren’t raising the alarm. I feel like that’s one of these really super nerdy things that is so technical, but speaks to what a moment of potential crisis we could be in very soon, which I don’t think we’re prepared for.
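Because, as Batke notes, bidirectionality can fail on some protocols and not others, the check researchers care about amounts to a per-protocol comparison like the sketch below. The protocol names and probe results are entirely hypothetical:

```python
def bidirectionality_holds(outside_in: dict, inside_out: dict) -> dict:
    """For each protocol, bidirectionality holds when probes sent from
    outside China observe the same censorship behavior as traffic
    originating inside China. Values: True = censored, False = not."""
    return {proto: outside_in[proto] == inside_out.get(proto)
            for proto in outside_in}

# Hypothetical measurements for three protocols:
holds = bidirectionality_holds(
    {"http": True, "tls": True, "newproto": True},   # seen from outside
    {"http": True, "tls": True, "newproto": False},  # seen from inside
)
# holds["newproto"] is False: outside-in probes of that protocol no
# longer reflect what users inside China actually experience.
```

When an entry flips to False, outside-in measurement for that protocol stops telling researchers anything reliable, which is exactly the warning sign described above.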
LE: I think this actually highlights one of the recommendations that’s normally in our recommendations pitch, but that we didn’t mention here, and that is: we need more consistent monitoring of how this system works. Right now, there isn’t a continuous monitoring program of how censorship is operating, and what that means is that the way we get information is that one person will run a study; they’ll collect data for nine months to answer some specific question; and then when that’s over, the data collection is over, and there’ll be large gaps. And sometimes someone will discover something new, and we won’t know when it started. That is actually very important for us to understand how the system is evolving on a technical level, and even just things like what is being censored. It would be really, really useful to have a better view of that.
So I want to, if I may, make a plea for a book to exist. How about that? When we were originally thinking through this system, one of my big questions was just, "Why do they do this?" Clearly the Chinese Party-state thinks that information control is vital to regime survival. But why do they think that? Why do they think they need such a tight lid on what people can say, can know, can think? And I’m not saying I have an answer, but I have come to think a major contributing factor is what they saw as the reasons that the Soviet Union fell. I am so interested in what lessons the Chinese government learned from the fall of the U.S.S.R., and what lessons they have taken for their own regime survival. A book I am reading on that topic that is so excellent is "To the Success of Our Hopeless Cause: The Many Lives of the Soviet Dissident Movement." I never recommend books before I finish them, but this one is fabulous and I’m really enjoying it, so that is one I would recommend. And I really hope a book exists out there about what China’s leaders learned from the fall of the U.S.S.R.
JB: There is! I sent you the table! I think it’s Ken Lieberthal’s book. Somebody made an actual table of the lessons that they learned from …
LE: I just want a whole book about that.
JB: I thought you were going to make a pitch for someone to fund a book about internet monitoring. And I was going to say I hope they fund us to write the book about internet monitoring.
LE: I should have been pitching more selfishly. Yeah, okay, put in a pitch for someone to give us money to write a book.