“Who do we want making decisions about who should be allowed to speak?”
Part Two of a conversation with Joel Bakan and Sujit Choudhry about “new” corporations, the power of Big Tech, and suing Twitter and the Canadian government
This is the second of a two-part interview with Sujit Choudhry1 and Joel Bakan.2 Read the first part here. To listen to the complete audio of our conversation, head here, or search for “Reframe Your Inbox” wherever you get your podcasts. The excerpts below have been condensed significantly and edited for clarity.
ADAM: One of the under-discussed impacts of using and relying on these tech platforms is how dehumanizing the experience can be. It’s not just because everybody is lobbing attacks and vitriol at each other, but also because the decisions that companies make—and the algorithms that often make [them]—are so powerful, so opaque, and so seemingly arbitrary. They seem to think that they don’t need to hire people to manage these processes because they can replace people with A.I., but when the A.I. fails they say, We can’t help you because we don’t have enough people working here.
JOEL: They actually said to us, We don’t have the resources to assess the credibility of your film. At the end of the whole thing, we’re saying, We’re a credible film! We’re reviewed by mainstream media. We’re getting good reviews. And their response was, We can’t assess the credibility of everything that comes across the platform. So they can’t assess [its] credibility, but they can stop it and they can ban it.
I’ll give you another example. My father is ninety-two years old. He’s a retired professor. The day after we dropped the writ in this case in the Ontario Superior Court, his Twitter account was permanently suspended. He was banned from Twitter. He has only sent out two tweets in his entire life. He just uses Twitter to scroll through and see what’s happening in the world. And he wrote to them and said, Why? I haven’t even sent a tweet in the last three years!
This was three months ago. He’s gone through their internal process, and he’s gotten nowhere. They’ve given him no response. They’ve given him no explanation. I don’t know if that has anything to do with the fact that he and I share a name, and it was the day after we dropped the writ. I’m not a conspiracy theorist generally. But the timing seems awfully weird.
SUJIT: The whole issue of how the legal system is going to deal with algorithmic decision-making is one of the great legal frontiers of the next decade. It’s going to come up in the context of governments relying on algorithmic decision-making, but it’s going to come up in the private law context, too.
In our case, it could be that the question is: If a contractual party has a duty of good faith towards another party, is it consistent with that duty of good faith to leave decisions in relation to core social and political speech to a computer? Or do you have a right to a reasoned decision by a human being? And if not right away, then at what stage?
Part of what we saw was this bot saying No, and then the human being—under-resourced and under time pressure—in a sense repeating what the bot determined. You almost think that if the bot had decided differently, then things would have gone differently.
ADAM: Sujit, if you could unilaterally implement a global policy and a legal framework for Big Tech platforms, what would that framework look like? What kind of oversight and accountability, in your ideal world, would there be for these companies?
SUJIT: I think it probably needs to be a multinational or a coordinated solution. The reason is that these companies use their user agreements to evade the boundaries and constraints of any national legal system.
Because they say, Your agreement is governed by foreign law and any dispute has to be heard in a foreign court, that immediately gets them out of national jurisdiction. Either there needs to be cooperation across countries or [there needs to be] some type of multilateral mechanism to coordinate how these companies are governed, because they’re evading national governance. That’s the first thing I would say.
The second thing I would say is, rather than thinking about what the framework should be, I think we should begin to think about what these platforms are. They’ve become public forums. In the context of the pandemic—just to take the most recent example—this is where a lot of people got their information, including from health care providers and governments. It’s where democratic politics is taking place in countries around the world.
So if they’re public forums, the question is: Should they be bound by certain basic norms regarding equality of access, or restrictions on misinformation or hate speech? This idea—that you can restrict speech on public platforms for limited but important purposes to the extent necessary—that’s very foreign to the American way of thinking about free speech law. The First Amendment’s absolute. But the Americans are the outliers. The rest of the world doesn’t think that way.
JOEL: We are trying to come up with a theory or a model about how these platforms should be governed. What we’re saying, in effect, is that they need to be governed to protect people from the exercise of free speech that is incendiary, that is promoting and provoking violence, that is promoting insurrection, that is hateful, that is racist. But speech that is fundamental to democratic debate, whether of a political party or of a filmmaker or of a journalist, that has to be protected.
So far, our governments have shied away from having that conversation and from talking about creating a regulatory framework for those platforms. Litigation can play a really important role in pushing an issue into the political arena when the political arena has been resistant to embracing it.
ADAM: This gets at the ongoing debate about “censorship” and “cancel culture.” Some of this debate is in good faith; a lot of it obviously is not. So much of the discussion seems to be about whether the left or the right is being targeted or censored or canceled. But from the perspective of the entities actually doing this so-called censoring or canceling, the tech companies, the calculus probably has very little to do with politics. For them it is about profits. It’s about maintaining their power. And both of those things come from engagement.
[The decision regarding] whether or not they remove somebody or something from the platform seems like it’s often made with a simple cost-benefit calculation: when the public outcry and opposition to a person or a post outweigh the benefits that the company earns from that person’s or that post’s engagement, they pull the plug. That’s why Trump was allowed to stay on Twitter until after the January 6th insurrection, at which point his continued presence on the platform seemed like more of a liability for the company than an income generator.
But with your ads for the film, who knows what they saw. Obviously it started with the bot, but once the human beings got involved, maybe they saw the ads as an indirect threat to their power. Maybe they saw it as an opportunity to demonstrate that they weren’t only going after people on the right. Maybe it was an algorithm’s decision, and, as you mentioned, Sujit, maybe they just decided, This is what the bot said, so we’re just going to double down and not question it.
Whatever the rationale, your ads probably were not driving a huge amount of engagement for Twitter, at least compared to Trump or a troll. So they said, Ok, they’ve got to go. I wonder what you make of all this. Am I being too cynical about their thinking here?
JOEL: It’s really, really simple at one level. The question is: Who do we want making these decisions about who should be canceled, who should be amplified, who should be allowed to speak, who shouldn’t be, what kinds of things can be said? Who do we want making these decisions?
We’re going to argue forever because that is the human condition. We’re also going to find some lines of consensus, I think. Maybe not everybody, but a kind of broad consensus. If you’re promoting insurrection, or you’re amplifying clearly false material that is harmful, whether about vaccines or anything else, there are certain things where we’ll think, It’s reasonable to regulate those.
We’ve done this all through our history. We’ve never taken the position that anything goes on television at any time of the day and on any channel, whether children are watching or not. We’ve never said that. We’ve always said, There’s a line. We’ve debated about where the line is.
Who do we want making those decisions? We really have three contenders here. We have governments. We have courts. And we have for-profit companies. Each of those contenders has a different incentive system. Companies like Twitter are driven by engagement because engagement leads to more eyeballs and more data, and that, under their business model, leads to more money.
I think our argument here is that some combination [of] democratically accountable governments and legally accountable courts is going to be better for all of us, and for all of these decisions, than leaving it to for-profit platforms. [That] is probably the worst outcome. Some combination of checks and balances between government and courts is probably going to get us to a better place.
Visit thenewcorporation.movie to learn more about the film and the lawsuits against Twitter and the Canadian government.
1. https://www.sujitchoudhry.com; @sujit_choudhry
2. https://www.joelbakan.com; @joelbakan