The Zuckerberg Deepfake Creators Wanted to Give Facebook a Taste of Its Own Medicine

An art project video highlights the problems facing the world’s biggest social media platform.



At the end of May, Facebook was dealing with yet another crisis. The manager of a highly followed, hyper-partisan, right-wing political page had posted a video of Nancy Pelosi altered to make her look like she was drunkenly slurring her words. The video was fake, but it went viral anyway, racking up over a million views. Facing pressure to reduce the spread of misinformation, Facebook opted to leave the video up but display links to fact-checking sites debunking it, and to “downrank” the video in its algorithm, showing it to fewer users. Many of the company’s critics blasted it for not deleting the video outright, as its competitor YouTube did.

Around two weeks later, Facebook was facing another difficult situation. A group of artists had made a deepfake video of Facebook CEO Mark Zuckerberg and posted it on Instagram, a Facebook-owned platform. In the video, the artists manipulated footage of Zuckerberg to make him appear as though he were bragging about tricking users into sharing their data.

While at first blush the Zuckerberg video may have seemed like a timely test of the hands-off policy the company adopted for the manipulated Pelosi video, according to one of the artists responsible for creating it, it had actually been in the works for months. While Bill Posters—an artist’s pseudonym playing on the U.K.’s “post no bills” signage—admits the video was a test, he says its creation was unrelated to the Pelosi incident. Instead, it was aimed at forcing Facebook to grapple with how it moderates art, and with larger questions about privacy and Facebook’s power over people’s lives.

In Posters’ view, Facebook, Google, and other technology companies have figured out how to shred everyone’s veil of privacy and are now collecting and sharing the thoughts, preferences, and ideas that everyone quietly or secretly harbors. In doing so, they’re not just making money; they’re accruing power over people who can be coerced into buying things and voting certain ways. Posters thinks all of this might kill democracy—and that Zuckerberg either doesn’t realize or doesn’t care.

The manipulated Zuckerberg video was part of “Spectre,” an installation project he made with fellow artist and technologist Daniel Howe featuring deepfakes of celebrities, including Kim Kardashian, Morgan Freeman, and the artist Marina Abramović, speaking about data collection and the installation. The videos were first displayed on large smart screens arranged in a circle resembling Stonehenge in a gallery in Sheffield, U.K. Using a quiz, the interactive screens solicit information from participants in a process modeled on Cambridge Analytica’s methods, which Posters said were a major inspiration for the art. The installation, in Posters’ words, lets participants “pray at the altar of dataism.”

Posters has a long track record of creating art critical of technology and consumerism. He has explored internet pop-up ads and Black Friday, and created “Brandalism,” a project that attempted to “challenge corporate power in public & digital space” by placing parody advertisements critical of the ad industry and corporatism in cities across Europe. Mother Jones spoke with Posters about his attention-grabbing project.

Mother Jones: What were your motivations for creating the Zuckerberg deepfake?

Bill Posters: The artwork was part of a series of videos that form quite a large part of the wider Spectre installation. Our rationale for creating this kind of AI-generated artwork was really to explore the power of online influence. Part of the reason for our venture into the social media space was to influence the influencers by creating pieces of computational propaganda that we could deploy into social media networks.

MJ: I saw your Kim Kardashian one. That didn’t pick up as much steam.

BP: Yeah, it was really the Zuckerberg one that took the bulk of the attention. There’s a range of social media influencers that we wanted to use computational propaganda technologies to influence. So we tried to combine technology, art, and politics. We wanted to cut across a range of avatars in a literal sense—but also a spiritual sense—to reincarnate the influencers using AI technology and then insert them into social media networks. So you’ll see people like Kim Kardashian, who’s really the most well-known online influencer. You’ve got politicians, like Donald Trump. You’ve also got tech people like Mark Zuckerberg, the founder of Facebook. We’ve also got artists like Marina Abramović. We brought Freddie Mercury back to life.

MJ: Zuckerberg seems different from these people in a number of ways, especially in that he’s the only one who also controls the platforms you’re putting these videos on. Were the motivations behind his deepfake different from the others? Did you want it to be a test for him in the way a lot of the media interpreted it to be?

BP: Yeah, he’s a critical piece of the art. Zuckerberg is at the heart of this story when you tear the lid off the black box of these surveillance capitalist technologies and all those behavioral profiling methods. They’re all centered on the way in which our data is surveilled and extracted, and Facebook is at the heart of that story. They came after Google, which created the cultural norms and logic of surveillance capitalism. Mark Zuckerberg is a really difficult figure because he literally is the person who has total control of a digital society of over 2 billion people and their intimate data. He’s at the heart of the debate about privacy and democracy. Not only is privacy an issue with Facebook, but its platforms have also exacerbated a lot of issues with populist movements, fake news, and other forms of propaganda spread on social media networks. They’re really a huge part of the conversation, which is why we chose to focus on them. So it was really important for us to bring Mark Zuckerberg into that conversation directly. We wanted to put him in a situation in which he feels uncomfortable about the way our personal data is being used and manipulated by harmful technologies. We wanted to level that playing field a little bit.

MJ: Correct me if I’m wrong, but the Kim Kardashian video is saying important things but it’s not interrogating her directly and forcing her to engage with the implications of these things. Whereas Zuckerberg’s deepfake is a direct interrogation of the platform and how he moderates it. Were you anticipating that he would end up seeing the video and have to grapple with these issues?

BP: Yeah, they were made with a mind to a different context. Zuckerberg had already taken a position on the Nancy Pelosi video. These are corporate spaces that present themselves as public spaces, so it’s always interesting when companies intervene and try to apply the principles of speech and free expression while claiming they’re not censoring the content. It was important to explore what they would do beyond the Nancy Pelosi video. The controversy over that video happened only a few weeks before the installation premiered at the site in Sheffield. So we knew it would be topical. We knew it was going to be provocative. We were really pleased with how things played out and the organic virality of the video. It’s worth noting, too, that our deepfake videos were in the works for months before the Pelosi video.

MJ: Were you disappointed with Facebook’s decision to downrank the Zuckerberg video?

BP: We were told initially by Facebook that they had flagged the content as fake, marked it as disinformation, and downranked it. After a call with a communications/PR spokesperson, I was personally assured—and I have a transcript of the conversation—that it hadn’t been marked as misinformation or false. It had in fact been marked as art/satire by a third-party fact-checker, and it shouldn’t have been downgraded in any way, shape, or form. So there’s a bit of confusion there. In reality, it was not visible or searchable on Facebook, contrary to what they told us the original measures were. It didn’t make sense, because we had clearly labeled it as art from the very beginning.

[Facebook did not respond to a Mother Jones request for comment about what it told Posters.]

MJ: Not to force you to go beyond your pay grade and come up with solutions for these companies, but what do you—

BP: It shouldn’t be our responsibility to come up with solutions. It should be elected representatives who are talking about transparency and accountability, and not only about protecting user privacy. We’re at a moment where it’s coming down to basic human rights. It’s about agency and free will. These companies enable profiling and behavioral predictions that are rolled out en masse. They influence our decisions, not only in what we do online and what we buy but also in how we vote. It’s really a case of creating more transparency and building the legal and judicial structures that actually hold really powerful corporate actors to account.

MJ: Is there a resolution to this or at least a style of resolution that you find attractive?

BP: I would certainly think about breaking up these companies. You can’t have that level of monopoly, that level of concentration in huge industries and markets, without huge problems. There are many examples in history where large firms, even when they had a large percentage of market share, were forced to break up or to alter their corporate structure. Huge companies now like Google, Amazon, and Facebook have unprecedented levels of domination, and that is a very dangerous concentration of corporate power. Just look at the way these relationships are playing out in our democratic processes and the impact these platforms are having on them.

MJ: You guys were very transparent about your videos being manipulated deepfakes. You wanted everyone to know who was behind them and not have them be mistaken for something else. Are you afraid of more nefarious uses of the technology, and are you afraid of how technology platforms will handle it in the future?

BP: The whole purpose of the installation was to interrogate how technology companies’ practices are leaving their platforms open to abuse. And that covers a whole range of computational propaganda. The installation itself is also a kind of behavioral profile of how generative images can appear in tech. It reveals the kinds of behavioral design and psychology tactics that are used in the way apps are designed and interfaces are set up. It’s more of a critique of the broader industry, which is something we’re really interested in.

And we’re concerned with the wider industry as a whole. Deepfakes, AI, and machine learning technology are a key part of the argument—but it really is the broader industry that operates in an unregulated area. It’s a new form of neocolonialism, really. It is literally an enclosure over our bodies and minds. We saw how colonialism worked in the past with physical enclosure and slavery. Now we see it in financial markets, with the financialization of health insurance. Almost 20 years later, you see the new logic of capitalism, which is basically about collecting human experiences and converting them into behavioral data. It’s the new raw material, as Shoshana Zuboff talks about. These companies want to know more and more about our individual minds and bodies now. That is a huge, huge concern—it’s really troubling, and it’s also fascinating. They can get incredibly intimate data that we give away for free without even realizing it. These platforms rely on obfuscation of what they collect and how. It’s really a question about true, informed consent. People don’t realize how much of their data is being collected.

With Spectre, we have a very detailed psychometric profile of participants within the first minute of their stepping into the installation. So people are genuinely amazed and shocked at how much of their data can be harvested so quickly, really without their knowledge or consent. It’s a real disaster, which is part of what makes it such a powerful installation. It’s so hard to peer inside these systems and see how they’re working. This industry is opaque by design. How are we ever going to know how much we’re exchanging for these free apps and services without truly knowing what’s in the black box?

MJ: That makes a lot of sense. In some ways, a lot of the data coming out is like neoliberalism metastasizing in this aggressive way that no one anticipated because it used to be impossible to collect and sell personal data on this scale.

BP: It’s really interesting, because if you look at Google, they used to call it data exhaust, because they didn’t know what to do with all the data generated by their search platform. And then, post-9/11, they made a wholesale shift to utilizing that behavioral data, analyzing reams of people’s search data. They were one of the first companies to monetize this sort of personal data, and Facebook wasn’t far behind.

MJ: In some ways, Facebook and Google take the most criticism for this type of thing because they’re doing it on the largest scale, but at the same time, this is the logical conclusion of what happens when you take the highest and most efficient level of technology and combine it with capitalism. They’re just doing what Exxon, Goldman Sachs, or any other company probably would if they had access to this kind of information.

BP: Exactly. It’s another extractive practice. It’s extracting what’s in the mind, the individual mind. The intrusiveness of these platforms is what creates what some data scientists refer to as a data epigenome—like an extension of our DNA. Having that type of data can become very dangerous. They create industry-scale data sets on the cultural extensions of billions of people’s lives. And then they draw out the correlations between nodes in the datasets and the relationships between each one. It’s an incredibly vast store of knowledge spread among very few hands.

MJ: One big potential of deepfake technology is for authoritarian governments to use it to manipulate and oppress populations. Are you thinking about that at all?

BP: History has shown that you can’t have democracy without privacy. And we’re now in a very real situation where privacy doesn’t exist. So we’re in a very difficult position there. With deepfakes, there is a lot of overlap between some AI tech startups and industrial projects. Take Cambridge Analytica—they built their Ripon software as part of a U.K. Ministry of Defence contract in Afghanistan, during the war there. Before they went by “Cambridge Analytica,” they ran psyops in the country, and that work was spun out into what became Cambridge Analytica. In regard to deepfake technology, there is no doubt that it will be used by powerful nation-state actors to influence events or individuals. That kind of technology has been available for a few years. There’s no doubt it’s going to be used for repression.
