Signal's Meredith Whittaker on the Telegram security clash and the 'edgelords' at OpenAI

Image Credits: PATRICIA DE MELO MOREIRA/AFP / Getty Images

Meredith Whittaker has had it with the “frat house” contingent of the tech industry. I sat down with the president of Signal at VivaTech in Paris to go over the wide range of serious, grown-up issues society is facing, from disinformation, to who controls AI, to the encroaching surveillance state. In the course of our conversation, we delved into Signal’s interactions with Elon Musk and Telegram’s Pavel Durov, and, given OpenAI’s controversial clash with Scarlett Johansson, we heard Whittaker’s candid thoughts about the company’s leadership, which she likened to “dorm room high jinks.”

Among other things, Whittaker is concerned about the concentration of power in the five main social media platforms, especially in a year when so many countries hold general elections (not least the U.S.), and about Europe's reliance on external, U.S.-based tech giants. She argued that loosening EU regulations won't actually help Europe compete with those giants, nor would it be good for society. She also criticized the media’s obsession with AI-driven deepfakes while it largely ignores how social media platforms prioritize hyperbolic engagement over facts.

We also discussed surveillance advertising, the implications of the U.K.’s Online Safety Act, the EU’s CSAM proposals ("absolutely dangerous"), and whether Telegram’s Pavel Durov should spend less time being followed by a photographer for his Instagram account and more time making his platform secure (“he's full of s---”).

And toward the end, she revealed why she’s spending the next six months in Europe.


You’ve lately been talking about the concentration of power in AI and why this is important in the European context. Would you like to expand on that?

The very short answer is that it's important in the European context because that power is not concentrated in Europe. Yes, that power is concentrated in the hands of a handful of companies that reside in the U.S., and then some more in China. But when we're talking about this context, we're talking about the U.S. The reliance of Europe, European startups, European governments, European institutions on AI is ultimately a reliance on infrastructures and systems that are created and controlled by a handful of companies, and that redound back to their profits and growth.

Now, the context we're speaking in is May 2024. I don't know how many months we have till the election, and I'm refusing to remember that right now. But we're looking at the very real possibility of a Trump regime and of a more authoritarian-style U.S. government, and part of the [Republican] party has had its eye on controlling tech, and particularly social media, for a very long time. So those are considerations that should all be taken together in an analysis of what AI is, who AI serves, and why, again, Europe should be concerned about concentrated power in the AI industry.

There's a debate in Europe around accelerationism and accelerating technologies, and some European entrepreneurs are frustrated by European regulation. Do you think their concern that EU regulation might slow the pace of technological progress is justified?

Pardon me, I come from The Academy. So I'm a stickler for definitions. I want to unpack that a little. Is the premise here that without such shackles, Europe would be free to build competitors equal to the U.S. tech giants? If that's the presumption, that's not true. They know this is not true. Anyone who understands the history, the business models, the deep entrenchment of these companies also knows that's not true.

There may be frustration with regulation "slowing down your Series B." But I think we need to look at a definition of "progress" that relies on casting off all guardrails that would govern the use and abuse of technologies that are currently being tasked with making incredibly sensitive determinations; currently being linked with mass surveillance infrastructures that are accelerating new forms of social control; that are being used to degrade and diminish labor. Is that what we want? Is that progress? Because if we don't define our terms, I think we can get caught in these fairy tales.

Sure, some guys are going to be solidly middle class after they cash out, and that is good for them. But let's not conflate that with progress toward a livable future. Progress toward a socially beneficial governance structure, progress toward technology that actually serves human needs, that is actually accountable to citizens.

You’ve raised the example of AI-generated disinformation about Volodymyr Zelensky and his wife, such as deepfaked video and AI-generated websites.

The focus on deepfakes in a vacuum is actually missing the forest for the trees, with the "forest" being the fact that we now rely on five massive social media platforms [TikTok, Facebook, Instagram, Twitter/X and YouTube] as the arbiters.

These massive homogeneous social media platforms are incentivized to calibrate their algorithms for engagement, because they want more clicks and more ad views, and so they are incentivized to elevate s--- content, bombastic content, hyperbolic content, completely false content, right? And that's where we're seeing, in my view, AI used for disinformation in a much more powerful way. That's where you would find a deepfake. No one goes to a website anymore. You go to Twitter, YouTube, you search around, you see what's on there.

You see a headline and click on it, or you click on someone posting from that website. I don't think we can have a conversation about disinformation without having a conversation about the role of massive homogeneous platforms that have cannibalized our media ecosystem and our information ecosystem in service of profit and growth for a handful of companies.

In the U.K., we have the Advertising Standards Authority. In Germany, you can't advertise Nazi memorabilia, for instance, on eBay. Would there be ways of policing the advertising industry and therefore, downstream, creating better rules and better outcomes from the platforms that rely on advertising as a business model?

I think banning surveillance advertising would be a very good first step. We would be really cutting at the root of the pathologies that we are dealing with from the tech industry, which is this mass surveillance in the name of influence, influence to sell something, influence to convince someone to vote for something, influence to misinform someone. Ultimately, that's the game.

The training data for that mass surveillance, as you put it, was thrown into sharp relief by the story around OpenAI’s use of the "Sky" AI voice, which sounded quite similar to Scarlett Johansson's. She later revealed she had been contacted by Sam Altman about using her voice. Do you have a view on who won that exchange?

I posted this on Twitter, but it's just like … "edgelord" bulls---. It's so disrespectful. It's so unnecessary. And it really tears the veil on this mythology that you're all serious people at the apex of science building the next godhead, when it's very clear that the culture is dorm room high jinks egged on by a bunch of yes men who think every joke you say is funny, because they're paid to do that, and no one around there is taking this leadership by the shoulders and saying, "What the f--- are you doing?!"

Last year at TechCrunch Disrupt, you discussed the suggestion that the U.K.’s Online Safety Bill (now Act) could be used to ask tech companies to build backdoors into their end-to-end encryption. What's your position now that the bill has passed?

We'd never do it. We’re never gonna do it. What we said was that if they moved to enforce that part of the bill, [which] could be used by Ofcom to tell Signal "they have to build a backdoor; they have to implement client-side scanning" — which is a backdoor — we would leave [the U.K.]. Because we're not going to do that. We're never going to sell out the people who rely on Signal, particularly given that so many of them rely on it in contexts where digital security is a life-or-death matter.

What appears clear is that Ofcom got handed a giant bag of wild nonsense, some of which is interesting, some of which isn't, that built up like a Christmas tree where everyone had tacked on their favorite ornament. It got passed due to political inertia, not [through] any real support. Every MP I talked to in the lead-up to the bill was like, "Yeah, we know that s---, but no one's gonna do anything about it." And now Ofcom has to deal with enforcing it. And so … every couple of months another 1,700 pages drops that you need to pay someone to read.

So you haven't had any pressure from Ofcom yet?

No, and my experience with the Ofcom leadership has been that they're fairly reasonable. They understand these issues. But again, they got handed this bill and are now trying to grapple with what to do there.

There was a recent development where they're consulting on AI for online safety. Do you have any comment on that?

I am very concerned about age-gating. And this idea that we need a database, [for instance] run by Yoti, a [U.K.]-based company that's lobbying hard for these infrastructures, that would do biometric identification or some inaccurate machine-learning magic, or hold a database of IDs, or what have you, which means you effectively have to log in with your real identity, your age, and any other information they want in order to visit a website.

You're talking about an incredible mass surveillance regime. In the U.S. for a long time librarians held the line on not disclosing what people checked out because that information was so sensitive. You can look at the Robert Bork case and his video rentals and purchases and how sensitive that information was. What you see here with these provisions is just an ushering in of something that completely ignores an understanding of simply how sensitive that data is and creates a [situation] where you have to check in with the authorities before you can use a website.

The European Commission has proposed a new directive to recast the criminal law rules around child sexual abuse material (CSAM). What’s your view on this proposal?

Honestly, it doesn't look like there's the political will [for it]. But it is notable that there seems to be this rabid contingent that persists in spite of damning investigative reporting showing just what a heavy hand lobbyists from the scanning and biometrics industry played in drafting this legislation. This, in spite of the entire expert community — anyone of note who does research on security or cryptography and understands these systems and their limits — coming out and saying this is absolutely unworkable. What you're talking about is a backdoor in the core infrastructures we rely on for government, for commerce, for communication.

It’s absolutely dangerous, and, oh wait, there's no data that shows this is actually going to help children. There’s a massive shortfall in funding for social services and education; there are real problems facing children, and those are not being focused on. Instead, there is this fixation on a backdoor into encryption, on breaking the only technology we have that can ensure confidentiality, authenticity and privacy. So the arguments are in. It's very clear that they're wrong. It’s very clear that this process has been corrupt, to say the least. And yet there seems to be this faction that just cannot let that bone go.

You’re clearly concerned about the power of centralized AI platforms. What do you make of the so-called decentralized AI being talked about by Emad Mostaque, for instance?

I hear a slogan. Give me an argument. Give me an architecture. Tell me what that actually means. What specifically is being decentralized? What are the affordances that attend your special version of decentralization?

Obviously there was the recent clash with Elon Musk about Telegram versus Signal. Zooming out and coming out of that experience, did you see any activists come off Signal? What are your views of what Pavel Durov said?

It seems like Pavel might be too busy being followed by a professional photographer to get his facts right. I don't know why he amplified that. I know he's full of s--- when it comes to his views or his claims about Signal. And we have all the receipts on our side. So the jury is in. The verdict is clear.

What's unfortunate about this is that, unlike other instances of tech executives’ s--- talk — which I'm fine engaging in and I don't particularly care — this one actually harms real people and is incredibly reckless. Alongside a number of folks we work with in coalition, we have had to be in touch with human rights defenders and activist communities who were legitimately frightened by these claims, because we're in an industry, in an ecosystem, where there are maybe 5,000 people in the world with the skills to actually sit down and validate what we do. We make it as easy as possible for the people who have that narrow expertise to validate what Signal is doing.

Our protocol is open source. Our code is open source. It's well documented. Our implementations are open source. Our protocol is formally verified. We're doing everything we can. But there are many people who have different skills and different expertise, who have to take the experts’ word for it. We're lucky because we have worked in the open for a decade. We have created the gold-standard encryption technology, we have the trust of the security, hacker, InfoSec, cryptography community, and those folks come out as kind of an immune system. But that doesn't mean we don't have to do real damage control and care work with the people who rely on Signal.

A lot of times we see these disinformation campaigns targeted at vulnerable communities in order to force them onto a less secure option and then subject them to surveillance and social control and other forms of harm that come from that type of weaponized information asymmetry. So I was furious, I am furious, and I think it's just incredibly reckless. Play your games, but don't take them into my court.

I've done a lot of reporting about technology in Ukraine and some of the asymmetric warfare going on. At the same time, it’s clear that Ukrainians are still using Telegram to a very large extent, as are Russians. Do you have a view on its role in the war?

Telegram is a social media platform with DMs. Signal is a private communication service. We do interpersonal communications, and we do it at the highest level of privacy. So a lot of people in Ukraine, a lot of other places, use Telegram channels for social media broadcasts, use groups and the other social media features that Telegram has. They also use Signal for actual serious communications. So Telegram is a social media platform; it’s not encrypted. It's the least secure of messaging and social media services out there.

You said that you're going to be spending a lot of time in the EU. Why is that?

I’ll be in Paris for the next six months. We’re focusing on our European market and our European connections. For a privacy-preserving app that will never back down from its principles, it's a good time to be very flexible, given the political situation in the U.S., and to understand our options. I'm also writing a book about all the work I've been doing for the last 20 years.