Introduction

The headline of Simon Parkin's Guardian article (Parkin 2019), "The Rise of the Deepfake and the Threat to Democracy," encapsulates much of the popular discourse surrounding deepfakes. While the article is a nuanced and intriguing retracing of some of the most impactful instances of deepfakes to date, the alarm that its headline raises remains central to the fearful question surrounding the technology: what happens when a "real" event becomes indistinguishable from a "fake" event, in particular events involving public, political figures? Much of the threat that deepfakes are seen to pose stems from their nature as synthetic media, which Aldana Vales explains as "a new category of images, text, audio, videos, and data generated by algorithms" (Vales 2022) that are extremely realistic. In late 2022, this category includes deepfakes, but also the burgeoning field of audiovisual and visual media produced by Generative Adversarial Networks (GANs) and by AI systems like DALL-E and Stable Diffusion, which use neural networks to generate images from text prompts. Image-based synthetic media are representational media technologies that short-circuit traditional relationships between the technological and "objective" representation of a "real" event and the "real" event itself (as might previously have been found in photography or film); within synthetic media, whether audio, still image, or moving image, events can be produced seemingly out of thin air and without any relationship to the events of the "real" world.

In the case of deepfakes, as Cade Metz in The New York Times (Metz 2019), Mika Westerlund (Westerlund 2019), and William A. Galston at The Brookings Institution (Galston 2020) have argued, the democratizing of the abilities to manipulate digital bodies via deepfakes has greatly disrupted an audience member's ability to trust what they are seeing and hearing. This is perhaps best captured in one of the most well-known examples of a deepfake, Jordan Peele's impersonation of Barack Obama (BuzzFeedVideo 2018): in the video, Peele's expert mimicking of the former President is combined with deepfake technology to produce a startlingly realistic video that puppets a digital Obama. The danger arising from this, fake-Obama warns at the beginning of the video, is that "We're entering an era in which our enemies can make anyone say anything at any point in time." Such fears become all the more alarming as deepfakes become increasingly easy to create: 2022 presents a moment where the computing resources and specialized knowledge needed to produce deepfakes are becoming ever more accessible.

Importantly, however, the message of the Peele-Obama video is not to malign deepfakes per se, but rather to advocate for better information literacy and for the value of trusted and vetted news sources. The concerns voiced within the video doubtless need to be urgently addressed in current disinformation economies and environments. Further, the alarming use of the technology to produce revenge porn and strike at non-public figures shows how deepfakes can dangerously leach into citizens' everyday lives in extremely harmful fashions, in particular in ways that target women (Gosse and Burkell 2020; Burgess 2021). As Suzie Dunn (Dunn 2021) argues in her article "Women, Not Politicians, Are Targeted Most Often by Deepfake Videos," the disproportionate focus on the potential political manipulation of deepfakes too often leaves out how the technology is actually being used misogynistically to generate nude images of women. While such concerns are more than valid and deserve a wealth of attention, they are outside the scope of this paper.

Instead, this paper broadens the discussion of deepfakes to first consider other synthetic media, specifically images produced by GANs, and then uses that discussion to bridge into the potentially positive forms of deepfakes that can arise from foregrounding the technology's production of digitally manipulated bodies and events. Doing so means beginning with my own research creation project, a collaboration first with Kieran Ramnarine and then with Jae Seo titled "This Criminal Does Not Exist," which trained a GAN on the MEDS I and II mugshot databases. The failure of this project illustrates some of the ethical limits of synthetic media, in particular when considering the vectors of power that produce the capture and circulation of problematic datasets, such as those composed of mugshots. As this paper will expand upon, the systemic power that produces images such as mugshots is neither subsumed nor erased when those images are algorithmically processed or re-processed as synthetic media.

Out of this failure I propose ways in which deepfakes should be seen as potential tactical media. I take up Rita Raley's definition: "in its most expansive articulation, tactical media signifies the intervention and disruption of a dominant semiotic regime, the temporary creation of a situation in which signs, messages, and narratives are set into play and critical thinking becomes possible" (Raley 2009, 6). Any definitions or examples of tactical media, Raley argues, are in flux, as new forms of technology and political activism instigate adjustments and new deployments of media, but, overall, "tactical media operates in the field of the symbolic, the site of power in the post-industrial society" (Raley 2009, 6). As a representational technology, deepfakes are especially useful within the spheres of tactical media: the messy interplay between signs, the symbolic, and narratives within the technology's moving images invites synthetic media that operate as critiques and disruptions of dominant regimes and powers.

From this, deepfakes operating as tactical media hold a potentially powerful place within the pluriverse, a term from contemporary critical design. Borrowing the term from the Zapatistas of Chiapas, Escobar states that the pluriverse is "a world where many worlds fit … is about an ethical and political practice of alterity that involves a deep concern for social justice, the radical equality of all beings, and nonhierarchy" (Escobar 2018, xvi, author's italics). Powerfully, Escobar argues that the pluriverse fosters "manifestations of multiple collective wills [which] evince the unwavering conviction that another world is indeed possible" (Escobar 2018, 16). The pluriverse is the acknowledgment of multiple knowledge systems, cosmologies, and ontologies that exist outside of Western truth regimes, such that many worlds, modes of living, futures, and imaginaries can exist and interact simultaneously at every moment. Renata M. Leitão then uses the pluriverse to advocate for design that is able to "create a world of many worlds," which can produce new possibilities of life outside hegemonic norms and power (Leitão 2022, 256). Moving beyond questions of diversity and multiculturalism, the pluriverse is an argument for the theoretical and practical design of materials that advocate for locality, equity, and complex relationships between humans, other species, and the land.

Because deepfakes disrupt traditional links between representation and the "real," they can be used to imagine and represent knowledge, histories, and future events in the pluriverse in direct opposition to colonial and patriarchal logics. Historical events, and the indexical relationship between the image of a person and their identity within representations of such events, can be decoupled and reformed into imaginaries and alternate histories. This paper proposes three instances in which deepfakes can be repurposed into tactical media invested in the pluriverse's alterity: in the anonymizing of footage and/or witness testimony, as in the film Welcome to Chechnya (France 2020); in the generation of documentary re-enactment, in particular for events where there is little to no actual footage; and in the creation of alternate histories and counterfactuals that reveal the narratives and power dynamics within accepted "history." This paper presents my own prototypes of deepfakes as documentary re-enactment and as alternate histories but, to be clear, the prototypes offered here are not especially powerful examples of works that aid the pluriverse; on the contrary, they are obviously limited and imperfect. The hope is not to promote these prototypes, but rather to provide groundwork from which other scholars, in particular those from what Patricia Hill Collins and Valerie Chepp term intersectionally disadvantaged populations (Collins and Chepp 2013, 57–58), can use deepfakes in ways that generate, encourage, and support social justice, equality, and nonhierarchy.

The failure of “This Criminal Does Not Exist”

My own research and research creation primarily focus on the ways in which facial recognition technologies (FRTs) have long served as effective tactics within strategies used to gatekeep citizenship. Central to these arguments are the histories of facial images that have been circulated as bureaucratic data materials within the biopolitical and necropolitical management of individuals and populations: the ability, through identity documents such as driver's licences and passports, to link a face to an identity is crucial to the management of citizenship. The indexicality seemingly offered by images and moving images has long been the most common way to deterministically link the body to identity, with the face being one of the bodily sites where this dynamic most commonly takes place.

It was from this thinking that the project "This Criminal Does Not Exist" emerged. The project arose from my interactions with problematic datasets such as the Multiple Encounters Datasets (MEDS) I and II (National Institute of Standards and Technology 2020). These two datasets, still in use in 2022, are composed of the mugshots of deceased people who had had multiple encounters with law enforcement. When I downloaded the images, I was alarmed by what I saw and struck by the affect present within the supposedly neutral and bureaucratic materials. I wanted to find some way to educate the public about these images and their integration into AI systems. While I did not feel comfortable showing the images as they were, I wanted some way to show public audiences what types of faces were most present in such databases without using "real" people's faces. This led to the construction of a GAN trained on the MEDS databases, wherein the GAN acted as a data visualization technique that produced images of the most common faces within the datasets. Working first with computer scientist Kieran Ramnarine and then with Jae Seo, we were able to generate realistic "fake" faces that were representative of the populations in the datasets. Perhaps unsurprisingly, the vast majority of the faces were Black men, which I took as proof that the project was a success: as the original databases were disproportionately populated by Black men, the fact that the GAN produced synthetic "non-real" versions of those faces gave me materials that I thought were separated from MEDS' indexicality and the original mugshots' links to real people, their lived histories, and their identities.
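To make the technique concrete for readers outside computer science: a GAN pairs a generator, which produces images from random noise, with a discriminator, which learns to tell generated images from real ones; trained adversarially, the generator gradually produces faces statistically representative of the training set. The sketch below is a minimal DCGAN-style training loop in PyTorch, offered only as an illustration of the general technique and not as the project's actual code; the folder path, network sizes, and hyperparameters are all assumptions.

```python
# Illustrative DCGAN-style training loop (PyTorch). A sketch of the
# general technique only -- NOT the project's actual code. The folder
# path, architecture sizes, and hyperparameters are all assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
z_dim = 100  # size of the random latent vector fed to the generator

# 64x64 face crops, scaled to [-1, 1] to match the generator's tanh output.
tfm = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])
loader = DataLoader(datasets.ImageFolder("meds_faces/", tfm),  # assumed layout
                    batch_size=128, shuffle=True)

# Generator: latent vector -> 3x64x64 image.
G = nn.Sequential(
    nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
).to(device)

# Discriminator: 3x64x64 image -> single real/fake logit.
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 8, 1, 0), nn.Flatten(),
).to(device)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(50):
    for real, _ in loader:
        real = real.to(device)
        ones = torch.ones(real.size(0), 1, device=device)
        zeros = torch.zeros(real.size(0), 1, device=device)
        fake = G(torch.randn(real.size(0), z_dim, 1, 1, device=device))
        # Discriminator step: push real images toward 1, generated toward 0.
        d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: fool the discriminator into predicting 1.
        g_loss = bce(D(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Sampling from the trained generator, for example with `G(torch.randn(n, z_dim, 1, 1, device=device))`, then yields the kind of synthetic, "non-real" composite faces described above.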

With this separation in mind, I began showing these synthetic images in research talks, where I repeatedly received feedback that, despite the faces being "fake," their appearance as mugshots and their photorealistic presentation did not do enough to disrupt the right to look within the original images of the dataset. Following Nicholas Mirzoeff in The Right to Look: A Counterhistory of Visuality (Mirzoeff 2011), the reproduction of the images from the datasets within my own research subjected those images, despite my best intentions, to the same circulations of power that ensured the initial capture of that facial data, in particular from racialized and gendered populations. Mirzoeff describes the right to look as a gaze that produces a visuality that can be broken down into three actions: first, it defines and categorizes; second, it separates individuals and populations based on those definitions and categories, preventing the separated populations from "cohering as political subjects" (Mirzoeff 2011, 3); and third, it fixes those separations as "truth" and aesthetic. These three actions then generate what he calls "a complex of visuality," defined further as "both the production of a set of social organizations and process that form a given complex" (Mirzoeff 2011, 5). Because they still looked like mugshots of Black men, the images still carried the right to look and the harmful, inscribed racialized politics and vectors of power present in the MEDS mugshots; the images failed as tactical media because they did not challenge ingrained power dynamics in ways that offered the sort of alterity that the pluriverse entails. I am grateful for such feedback and have since taken the project offline in order to rethink how the data might be further abstracted in ways that allow greater public knowledge of the contents of such problematic databases without replicating the very power dynamics that produced those datasets.

One approach is an in-progress methodology that Jae Seo and I have been developing at the Toronto Metropolitan University Library Collaboratory, wherein photos are abstracted from the right to look by running images through facial recognition technologies and colour-coding the results onto new images. In this methodology, the facial landmarks are mapped and coloured according to emotion; the background colour is coded to the results for race; and the bounding box is coded to the results for gender. Our thinking is that these images maintain aspects of the facial affect in the images without erasing the face, while still providing valuable information to a viewing audience that makes visible the contents of problematic facial databases such as MEDS.
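A minimal sketch of this colour-coding abstraction appears below, in Python with OpenCV. It assumes the FRT has already produced landmarks, a bounding box, and emotion, race, and gender labels for a face (by whatever recognition system is in use), and the palettes are hypothetical choices for illustration, not the Collaboratory's actual mappings.

```python
# Illustrative sketch of the colour-coding abstraction (Python + OpenCV).
# Assumes the FRT has already produced landmarks, a bounding box, and
# emotion/race/gender labels; the palettes below are hypothetical.
from dataclasses import dataclass

import cv2
import numpy as np


@dataclass
class FaceResult:
    landmarks: list  # [(x, y), ...] facial landmark points
    box: tuple       # (x, y, w, h) face bounding box
    emotion: str
    race: str
    gender: str


# Hypothetical palettes (BGR triples, as OpenCV expects).
EMOTION_COLOURS = {"anger": (0, 0, 255), "sadness": (255, 0, 0), "neutral": (128, 128, 128)}
RACE_COLOURS = {"label_a": (40, 40, 40), "label_b": (220, 220, 220)}
GENDER_COLOURS = {"male": (255, 200, 0), "female": (0, 200, 255)}


def abstract_face(result: FaceResult, size=(512, 512)) -> np.ndarray:
    """Render an abstracted image: no photograph, only coded colour."""
    h, w = size
    # Background colour encodes the FRT's race result.
    canvas = np.full((h, w, 3), RACE_COLOURS.get(result.race, (90, 90, 90)), dtype=np.uint8)
    # Bounding box colour encodes the FRT's gender result.
    x, y, bw, bh = result.box
    cv2.rectangle(canvas, (x, y), (x + bw, y + bh),
                  GENDER_COLOURS.get(result.gender, (255, 255, 255)), 3)
    # Landmark points, coloured by detected emotion, retain facial affect.
    point_colour = EMOTION_COLOURS.get(result.emotion, (0, 255, 0))
    for (px, py) in result.landmarks:
        cv2.circle(canvas, (int(px), int(py)), 4, point_colour, -1)
    return canvas
```

The design intent is that nothing photographic survives the abstraction: only the coded colours and the constellation of landmark points remain, carrying the affect of the face without reproducing the mugshot itself.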

Returning to the failed experiment using GANs: the failure had positive outcomes. GAN-based synthetic facial data could still be useful for less problematic datasets, such as celebrity datasets, in order to visualize the most common types of faces; such visualizations can show the ways in which AI-enabled systems, like FRTs, are shaped by the data used in their training, often skewing toward certain populations, such as the lighter-skinned faces that dominate celebrity facial datasets. However, as tactical media, such a data visualization methodology does not adequately disrupt the dominant semiotic regime when applied to more volatile facial images such as mugshots.

Importantly, thinking through the image-making that produced the MEDS mugshots illuminated a positive path forward for deploying deepfakes: considering the technology's relationship to the events it portrays. Deepfakes, operating under principles of pluriverse design, can produce alternate forms of representational materials that push against the long history of images and moving images rooted in colonial, gendered, and racialized logics.

Capture and Event

The right to look within the mugshots found in MEDS is formed in large part by the power dynamics that led to the capture of said images. Explaining the overrepresentation of Black male faces in such facial databases points back to political protocols, such as policing practices, the management of neighbourhood wealth, and the availability and accessibility of education (to name only three factors), that have potentially contributed to an individual having their mugshot taken. The capture of a mugshot is thus a key surfacing of these power dynamics, and the image's production, tethered to societal desires for identification and securitization, is dependent on that capture.

Deepfakes are, obviously, not without their harmful political protocols, as this paper has already discussed; many dangerous deepfakes traffic in racist and misogynist representational protocols that carry rights to look similar to those within the MEDS mugshots. However, deepfakes do not carry the same sense of capture as a traditional image or moving image: there is no device pointed at an event and recording footage. Instead, the synthetic moving images are computationally produced and manipulated. While this dynamic does not release deepfakes from political protocols, it does shift the technology away from logics of capture to logics of representation. If a deepfake is not intended to capture a "real" event, then it is liberated, as a form of tactical media, to represent imagined, alternate, and potential events.

As the introduction to this article established, the great trepidation around this idea is decades old. One only needs to return to the alarmist fears around the production of digitally manipulated bodies voiced two decades earlier, with the popular integration of Photoshop and cinematic special effects into image and moving image making. There was the same hand-wringing over the expectation of a video or photo's indexical relationship to a person's body: while Hollywood examples like the insertion of Forrest Gump (Zemeckis 1994) into historical events were lauded as technological and cultural achievements, the growing popularity of the Internet in the mid-1990s, the fears arising from increasingly virtual bodies and interactions, and computers' increasingly realistic abilities to produce and manipulate digital bodies combined to provide a deep well of suspicion surrounding digital technologies' effects on larger understandings of "reality," "truth," and "human."

Yet, as Vivian Sobchack (Sobchack 1996) argues, "the 'events' of the twentieth century are less inherently novel than the novel technologies of representations that have transformed 'events'" (Sobchack 1996, 4); in the twenty-first century, deepfakes are a representational technology that ruptures traditional understandings of historical events. From this perspective, deepfakes are one of many technologies that contribute to "the loss of a determinate historical document" and, in turn, surface the contemporary instability inherent to representing historical events (Sobchack 1996, 6).

This contemporary instability is, twenty-five years after Sobchack's essay, taking place at a time when the extreme proliferation of social media filters, Photoshop, cinematic effects, increasingly sophisticated video games, and other such technological developments have made digitally altered and synthetic media extremely common. If this ubiquity is focused into instances of tactical media, technologies like deepfakes are able to offer a world where parallel and alternate historical and future events can provoke the values inherent to the pluriverse. As the first example, the documentary Welcome to Chechnya utilizes deepfakes to anonymize powerful witness testimony without losing the affective qualities of the expressive face and vocal performances.

Anonymizing deepfakes

When director David France began work on Welcome to Chechnya, he was prepared to try any number of solutions to anonymize those participants who needed further protection. The film details the extreme struggles of LGBTQ populations within Chechnya as the government worked to harshly crack down on all "deviant" sexual behaviour, focusing primarily on homosexuality. The challenge France confronted was how to film footage that could capture the emotions and affect of those being persecuted while obscuring their identities so that they, and their families, could avoid future violence.

Initially, he and his team tried rotoscoping, then Snapchat-like facial filters, before finally settling on utilizing AI and deepfake technologies to map the faces of volunteers onto those in the original footage (Ifeanyi 2020). To do so, France gathered 22 volunteers, mostly queer activists from New York and Instagram, and filmed each in different light and from different angles, mapping their facial landmarks in order to capture full, complex renderings of their faces (Rothkopf 2020). Then, utilizing AI-trained frameworks, the volunteers' faces were overlaid on top of those whose identities needed protecting within the film itself, with an added "underblur" signalling which faces had been transformed. Ryan Laney, a Hollywood special effects expert, led the technical aspects of the transformations, which generated the halo-like blurring around the faces to signal the terror and trauma undergone by those whose faces are being changed, while still offering the audience an affective connection to each person that would have been lost by obscuring the faces entirely (Rothkopf 2020, para. 13).

The value of this technique can be seen throughout the whole of the film, as it allows for nimble anonymizing without overwhelming the affect of the face with spectacle, in particular in moments that utilize raw, unstructured footage. One powerful example takes place near the beginning of the film, when a group of vigilantes attack two gay men whom they caught together in public, recording on what appear to be their phones. The scene takes place at night, and the vigilantes are aggressive and loud; the danger is obvious and affecting. For the duration of this cellphone footage, both men's faces can be seen, the digital transformation doing nothing to hide their fear and confusion. That the deepfake technology can anonymize faces within this type of grainy and amateur footage, in addition to traditional straight-to-camera testimony, showcases the flexible and affective potential within the technology.

Instances of deepfakes like those deployed in Welcome to Chechnya target the mediation of events, effectively decoupling identity from the faces initially captured in ways that leave the original representation of the event intact. As tactical media, this use of deepfakes short-circuits the indexical relationship between face and identity that is so essential to biopolitical tactics and strategies while still maintaining the power present within the witnessing of an event in a documentary fashion, "as it happened." Echoing the earlier discussion of the pluriverse, the paralleling of the original speakers with the volunteers who lent their faces to the anonymizing deepfakes showcases how multiple histories and bodies can collide in the representation of an event in ways that provide witness testimony and affective audio-visual materials.

Documentary re-enactment

As Escobar explains, the pluriverse draws attention to active historical realities, such as colonial and patriarchal power: “recognizing those historical aspects of our historicity that seem buried in a long-gone past … is part and parcel of design’s coming to terms with the very historicity of the worlds and things of human creation in the current tumultuous age” (Escobar 2018, 15). Challenging, reimagining, and reshaping historicity, deepfakes can be deployed as documentary re-enactment, where the technology can work to represent past events that were not captured on camera.

Deepfakes present an interesting opportunity for partially captured historical events, in particular ones where there is existing audio but video footage is lacking or absent entirely. In such cases, the affect of an expressive voice is present, but without an embodied performance of that voice, the impact of that vocalization can be less than if there were a full, multisensorial version of the event. Importantly, if an audience's understanding of deepfakes and digital literacies is accompanied by an acknowledgement that the moving image being seen is not "real," then the technology offers a version of documentary re-enactment that marries vocalization and bodily affect, in particular at the site of the face. Unlike documentary re-enactment done by actors or by digital animations, there is an opportunity to connect the identity of the speaker to the vocalization, keeping the impact of the vocalization by making it appear more "real." In this way, the effects are somewhat contrary to the use in Welcome to Chechnya. In the instance of deepfakes-as-documentary re-enactment, the indexicality of the moving image connects representations of the face to identity in ways that restage the historical event with the affect of the presence of the body re-inserted.

Jae Seo and I built a proof-of-concept titled He Said, which sets former U.S. President Donald Trump in various official locations and animates his mouth so that he re-speaks previous statements he has made (Tucker 2023). In our prototype, President Trump stands in front of St. John's Church while he appears to speak the words from the audio of the Access Hollywood tapes. As there was no video footage of the Access Hollywood incident, the audio provides proof that the event took place, but without the performance that additional sensory information, such as moving images, would offer. He Said operates as documentary re-enactment, wherein the deepfake aids in the performance of the archival audio so that the audience is able to unite the vocalization with the speaker's identity in a more visceral fashion, thereby deepening the impact of that audio. In this case, the misogynistic statements of Donald Trump are placed back in a bodily context, thereby re-surfacing and, hopefully, strongly critiquing his comments.
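For practitioners curious about how such a prototype can be assembled, audio-driven lip-sync models make this kind of re-enactment relatively accessible. The sketch below wraps the open-source Wav2Lip inference script, which animates the mouth in source footage to match a supplied audio track; it assumes a local clone of the Wav2Lip repository and its pretrained checkpoint, and the file names are hypothetical stand-ins, not He Said's actual assets or pipeline.

```python
# A hedged sketch of the lip-sync step, wrapping the open-source Wav2Lip
# inference script (https://github.com/Rudrabha/Wav2Lip). Assumes a local
# clone of the repository and its pretrained checkpoint; the file names
# are hypothetical stand-ins, not He Said's actual assets or pipeline.
import subprocess
from pathlib import Path

WAV2LIP_DIR = Path("Wav2Lip")  # assumed local clone of the repository
CHECKPOINT = WAV2LIP_DIR / "checkpoints" / "wav2lip_gan.pth"


def lip_sync(face_video: Path, audio: Path, outfile: Path) -> None:
    """Re-animate the speaker's mouth in face_video to match audio."""
    subprocess.run(
        ["python", str(WAV2LIP_DIR / "inference.py"),
         "--checkpoint_path", str(CHECKPOINT),
         "--face", str(face_video),  # e.g., footage of the speaker at a podium
         "--audio", str(audio),      # e.g., the archival audio recording
         "--outfile", str(outfile)],
        check=True,
    )


# Hypothetical invocation pairing location footage with archival audio.
lip_sync(Path("speaker_footage.mp4"), Path("archival_audio.wav"), Path("reenactment.mp4"))
```

Notably, lip-sync of this kind only animates the mouth region, which is part of why, as the conclusion discusses, the results read as uncanny without a skilled performer driving the footage.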

Taking from the arguments of cinema scholar Bill Nichols, He Said is an example of Brechtian distantiation, which retrieves the lost object of the historical event while simultaneously generating a new historical event built from temporal foldings and overt pleasure in a knowing, fantasmatic recreation (Nichols 2008, 85). This example demonstrates that deepfakes can be used as powerful documentary re-enactments that are "neither an indexical record of [an] event nor merely a later act of representation, but rather some uncanny combination of the two" (Kahana 2009, 52).

These new, uncanny versions of historical events are fantastical without being a fantasy, leveraging the archival and documentary aspects via the audio. When deployed as tactical media in this way, deepfakes are able to produce alternate and potentially rich materials within the pluriverse that challenge the audience to imagine a world in which the event was completely captured and in which the preservation of that event could offer multiple versions of accepted historicity.

Such fantasies and imaginaries can be explored further, as outlined in the following section, by looking at how the technology can be used to create versions of historical events that never actually happened, making transparent the makings of history and the power dynamics and discourses that have produced that history.

Alternate histories and imaginaries

If a deepfake’s artificial nature is foregrounded, then the technology offers spaces for fantasy and imagination where portions of historical events can be refashioned and re-presented in ways that are parallel and/or alternate to accepted history. The historical event being produced in these cases is not intended to be factual, but the indexicality established by the moving images provides the materials for imaginaries that are still tethered to real people and past framings of history. Specifically, such use of deepfakes activates alternate histories of events while also pointing towards the futurities that may have resulted from alternative events. Doing so enlivens the potentialities of counterfactual historical possibilities, showcasing that deepfakes are also an effective tool for generating what Kathleen Singles (Singles 2013) calls “future narratives.” From this, deepfakes hold the potential to activate a nodal, multi-linear treatment of historical events, which allows for disruptions of sequential historical discourses by way of divergences, intervening into how a future might unfold and how the past has been constructed. Future narratives are essential to the pluriverse, as the imagining and reimagining new potentialities of life cement Escobar’s insistence that another world, freed from previous harmful circulations of powers and logics, is possible.

Working again with Jae Seo, the prototype Other Histories showcases former U.S. President John F. Kennedy delivering the speech he was scheduled to give on the day of his assassination. The audio for this proof-of-concept was also AI-generated, by the company CereProc, which used archival audio of J.F.K. to produce a model of his speech that could then be fed any text and have it "read" in the voice of the former president (BBC 2018). Even in this simple form, watching this alternate version of history leaves the audience to question what may have happened, in the U.S. and globally, had the assassination not taken place. While our use of deepfakes in this instance is admittedly limited, other applications of the technology in this manner would challenge the audience not only to reconsider particular historical events, but also to re-think the causal relationships of events; the mediation of those real and imagined events, via deepfakes and archival materials, would form the sort of multi-nodal and multi-temporal network that Singles advocates for.
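On the audio side, the voice-cloning step CereProc performed for the BBC is proprietary, but open-source text-to-speech libraries now offer comparable workflows. The sketch below uses the Coqui TTS library's XTTS voice-cloning model as a hedged stand-in: given a short archival sample and any text, it synthesizes that text in an approximation of the sampled voice. The file names are hypothetical, and the quoted line is from the speech Kennedy was scheduled to deliver at the Dallas Trade Mart.

```python
# A hedged sketch of the voice-cloning step, using the open-source Coqui
# TTS library's XTTS model as a stand-in for CereProc's proprietary
# pipeline. File names are hypothetical; the quoted text is from the
# speech Kennedy was scheduled to deliver at the Dallas Trade Mart.
from TTS.api import TTS

# Load a multilingual voice-cloning model (downloads weights on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize new text in a voice cloned from a short archival sample.
tts.tts_to_file(
    text=("We in this country, in this generation, are -- by destiny rather "
          "than by choice -- the watchmen on the walls of world freedom."),
    speaker_wav="jfk_archival_sample.wav",  # hypothetical archival clip
    language="en",
    file_path="other_histories_audio.wav",
)
```

The resulting audio track could then be paired with a lip-sync step like the one sketched in the previous section to produce the full audiovisual counterfactual.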

Further, such an approach could generate tactical media that produce footage of future events, enlivening imaginaries that potentially break from colonial and racist futurities. As this paper has repeated in relation to the pluriverse, it is important to imagine futures that break from the seemingly inevitable replication of past power dynamics fueled by racist and colonial practices. In this way, deepfakes can draw from other artists and writers aligned with Afrofuturist and Indigenous futurities. Artist Skawennati's Time Traveller series (Skawennati 2007–2014), as well as her work repurposing the video game Second Life, is particularly inspiring. Skawennati's creation of AbTeC, a space in Second Life, is especially striking in its construction of such futurities but, more specific to this paper, her deployment of machinima in Time Traveller provides a roadmap for how others could utilize deepfakes in similar fashions. The series imagines the ability to utilize a pair of augmented reality-like goggles to travel to any place and any time; in the nine-part series, the characters jump forward and backward, mixing historical retellings of events with Indigenous knowledge-making by leveraging the generative image-making of machinima. Time Traveller, in its retelling of history and telling of future events, while also imagining and projecting futurities, is a powerful example for potential uses of deepfakes, as artists and scholars could use the technology similarly to replay and create historical and future events with a high degree of realism. Like Skawennati's machinima, deepfakes grant the ability to produce representations of future events that enliven the imagination and alterity central to the pluriverse and its rejection of colonial and patriarchal logics.

Conclusion

As stated in the introduction to this paper, I wanted to make proofs of concept utilizing deepfakes in hopes that other practitioners would find inspiration in them and extend these techniques in more interesting and powerful ways. I learned some hard-fought lessons and gained critical insights. For example, the reason the Peele-Obama video, and other instances like the Tom Cruise impersonator, work so effectively is that the "sources," such as Jordan Peele, are incredibly skilled mimics. Because the prototypes relied on machine systems to map spoken words to mouth movements, as opposed to motion capture and digital projection from a skilled mimic, they were distractingly uncanny: without a skilled performer at the centre, deepfakes become far less convincing, and their obvious false nature becomes too much of a distraction to function in the positive ways proposed throughout this paper. Further, the choice of Donald Trump and J.F.K. as the central figures of the prototypes was born of necessity: the project needed figures with a wealth of both video and audio footage, including video in which the subject speaks clearly and directly to camera. Presidential speeches, as well as performances by popular actors and actresses, offer a wealth of such materials; as such, the prototypes are fairly unimaginative, and the hope is that other scholars may take the principles outlined in this paper and expand them in ways that better serve the pluriverse. Overall, while the initial prototypes are shaped by the biases of the datasets (white male faces and voices), future deepfakes operating as tactical media hold the potential for marginalized populations to produce re-enactments and alternate and future narratives that speak more directly to their communities.

In closing, Andrew Dewdney in Forgetting Photography (Dewdney 2021) encourages readers to embrace what he sees as the afterlife of analogue photography in the contemporary digital production and circulation of images, what he explains as networked images. He argues that photography’s past histories, often functioning under power dynamics damaging to intersectionally disadvantaged populations, need to be unthought: rather than images being used to replicate reality as the default form of representation, digital images, in particular those generated with algorithmic tools, challenge the contemporary understandings of representation. This challenge is intended to be a productive exercise, where the plea to forget photography is not meant to be taken literally. Rather, imagining photography as a past form of knowledge-making opens contemporary thinking to the future possibilities of what digital image-making can be, and what knowledge can be produced and encouraged by such image-making.

In this context, deepfakes are able to leverage many of the powers of representation within photography and cinema and flexibly manipulate them in ways that produce complex and multiple understandings of the production of history, historical events, and knowledge. Deepfakes can maintain strong indexical bonds or can sever identity from the image; likewise, deepfakes can produce events that utilize the affect of the face and images of the face, while undermining traditional humanist logics attached to such understandings of the face and the body. When deployed in a way that foregrounds their artificial nature, deepfakes can be tactical media that provide potentially powerful materials for the pluriverse and the challenging of ingrained past, present, and future power structures.

Competing interests

The author has no competing interests to declare.

Contributions

Editorial contributions

Section Editor and Lead Copy Editor

AKM Iftekhar Khalid, The Journal Incubator, University of Lethbridge, Canada

Layout Editor

Virgil Grandfield, The Journal Incubator, University of Lethbridge, Canada

References

BBC. 2018. “John F. Kennedy’s Lost Speech Brought to Life.” March 16. Accessed February 7, 2022. https://www.bbc.com/news/uk-scotland-edinburgh-east-fife-43429554.

Burgess, Matt. 2021. “The Biggest Deepfake Abuse Site Is Growing in Disturbing Ways.” Wired. December 15. Accessed February 7, 2022. https://www.wired.com/story/deepfake-nude-abuse/.

BuzzFeedVideo. 2018. “You Won’t Believe What Obama Says in This Video!” April 17. Accessed February 7, 2022. https://www.youtube.com/watch?v=cQ54GDm1eL0&feature=emb_logo.

Collins, Patricia Hill, and Valerie Chepp. 2013. "Intersectionality." In The Oxford Handbook of Gender and Politics, edited by Johanna Kantola, Georgina Waylen, Karen Celis, and S. Laurel Weldon, 57–87. New York: Oxford University Press.

Dewdney, Andrew. 2021. Forgetting Photography. Cambridge: MIT Press.

Dunn, Suzie. 2021. "Women, Not Politicians, Are Targeted Most Often by Deepfake Videos." Centre for International Governance Innovation. March 3. Accessed February 7, 2022. https://www.cigionline.org/articles/women-not-politicians-are-targeted-most-often-deepfake-videos/.

Escobar, Arturo. 2018. Design for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Durham: Duke University Press.

France, David, dir. 2020. Welcome to Chechnya. HBO Films.

Galston, William A. 2020. "Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics." Brookings. The Brookings Institution. January 8. Accessed February 7, 2022. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/.

Gosse, Chandell, and Jacquelyn Burkell. 2020. "Politics and Porn: How News Media Characterizes Problems Presented by Deepfakes." Critical Studies in Media Communication 37 (5): 497–511.

Ifeanyi, K.C. 2020. “How AI Came to Protect the LGBTQ Subjects in HBO’s ‘Welcome to Chechnya’.” Fast Company. June 30. Accessed February 7, 2022. https://www.fastcompany.com/90522330/how-ai-came-to-protect-the-lgbtq-subjects-in-hbos-welcome-to-chechnya.

Kahana, Jonathan. 2009. “Introduction: What Now? Presenting Reenactment.” Framework: The Journal of Cinema and Media 50 (1 and 2): 46–60.

Leitão, Renata M. 2022. “From Needs to Desire: Pluriverse Design as a Desire-Based Design.” Design and Culture 14 (3): 255–276.

Metz, Cade. 2019. “Spot the Deepfake. (It’s Getting Harder).” The New York Times. November 25. Accessed February 7, 2022. https://www.nytimes.com/2019/11/24/technology/tech-companies-deepfakes.html.

Mirzoeff, Nicholas. 2011. The Right to Look: A Counterhistory of Visuality. Durham: Duke University Press.

National Institute of Standards and Technology (NIST). 2020. “Special Database 32 - Multiple Encounter Dataset (MEDS).” Multiple Encounters Datasets (MEDS) I and II. November 19. Accessed February 7, 2022. https://www.nist.gov/itl/iad/image-group/special-database-32-multiple-encounter-dataset-meds.

Nichols, Bill. 2008. “Documentary Reenactment and the Fantasmatic Subject.” Critical Inquiry 35 (1): 72–89.

Parkin, Simon. 2019. “The Rise of the Deepfake and the Threat to Democracy.” The Guardian. June 19. Accessed February 7, 2022. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy.

Raley, Rita. 2009. Tactical Media. Minneapolis: University of Minnesota Press.

Rothkopf, Joshua. 2020. “Deepfake Technology Enters the Documentary World.” The New York Times. July 29. Accessed February 7, 2022. https://www.nytimes.com/2020/07/01/movies/deepfakes-documentary-welcome-to-chechnya.html.

Singles, Kathleen. 2013. Alternate History: Playing with Contingency and Necessity. Berlin: De Gruyter.

Skawennati. 2007–2014. Time Traveller. Accessed February 7, 2023. https://www.timetravellertm.com/episodes/.

Sobchack, Vivian. 1996. "Introduction: History Happens." In The Persistence of History: Cinema, Television and the Modern Event, edited by Vivian Sobchack, 1–15. New York: Routledge.

Tucker, Aaron. 2023. He Said (with Jae Seo). Accessed May 26, 2023. https://web.archive.org/web/20230602164019/http://aarontucker.ca/digital-art/he-said/.

Vales, Aldana. 2022. "An Introduction to Synthetic Media and Journalism." Medium. Accessed November 29, 2022. https://medium.com/the-wall-street-journal/an-introduction-to-synthetic-media-and-journalism-cbbd70d915cd.

Westerlund, Mika. 2019. "The Emergence of Deepfake Technology: A Review." Technology Innovation Management Review 9 (11): 39–53.

Zemeckis, Robert, dir. 1994. Forrest Gump. Paramount Pictures.