Introduction

Is it possible that examining the rules which govern our use of an everyday appliance can assist data practitioners in collecting and using data in ethical and responsible ways? This inquiry is based on a theory that current and emerging frameworks for data ethics share a palpable synergy with social rules that govern our use of everyday technologies, like the household refrigerator. This paper positions the refrigerator as a metaphorical model by defining some of the well-established and commonplace rules that govern its everyday use in our homes. Entrenched in the social norms of the domestic space, I refer to these rules as refrigerator wisdom: a tacit knowledge of boundaries, rights, accessibility, fairness, and reciprocity that has developed over time through our daily interactions with this ubiquitous technology.

When we examine current trends in data ethics and governance around the collection, use, reuse, and disposal of data, it becomes evident that ethical concerns in the digital world and the material one share considerable similarities. This opens the possibility that the knowledge gained through our engagement with the materiality of shared social spaces can be mobilized to make the complexities and challenges of data ethics more legible and accessible for data practitioners. In structuring a theoretical framework for understanding digital ethics, drawing on material concerns is useful for making complex ethical concepts more tangible. It is also an invitation to critically view how we navigate the digital/material divide and to consider how the opacity of digital spaces can lead to questionable ethical decisions, vis-à-vis those made in physical environments, where our actions unfold in clearer view of others and can be more readily scrutinized.

There are several major themes in digital ethics that are not exclusive to the discipline but are basic tenets of broader ethical discussions: privacy, security, power through access to information or oppression from a lack of access, treating others the way we would like to be treated. While digital technologies “mirror similar ethical concerns as pre-digital technologies they replaced” (Müller 2024, 3), the digital revolution has also brought about new and unprecedented ethical questions, for example, whether artificial intelligence (AI) poses an existential threat to human autonomy or is perhaps sentient and entitled to its own set of rights (Zerilli et al. 2021, 117). David J. Hand has stated that the unique challenges of data technologies do not allow for a straightforward comparison of ethical issues with “other technical and scientific areas” (Hand 2018, 177). This may be true of a blow-by-blow comparative analysis, but I am more interested in the larger ethical themes that emerge when confronting technologies and practices. That being said, the analogy is not perfect or seamless, nor is it directly applicable in every situation. I am not suggesting, for example, that the stakes for unethical use of data—which can be, without exaggeration, a matter of life and death, especially for those situated at the lower end of the scales of power—are the same as those for ignoring the social norms of refrigerator sharing. I am comfortable with the tension between the principles’ somewhat (over)simplified messages and the utility of that simplicity. The refrigerator/digital metaphor invites a long view, a means of recognizing some general behaviours and their influence on bigger, overarching systems.

Nor is the refrigerator the only possible example. As a researcher of material culture in domestic spaces and an emerging digital humanist, I came to see the similarities between refrigerator wisdom and guidelines for ethical digital practices through an investigation of the social and technological history of the refrigerator. There are likely congruences to be found in other objects: smartphones, automobiles, or televisions, for instance. What follows the emergence of each new digital tool, method, and project are questions about who benefits, who is left vulnerable, and whether those who hold power have considered everyone who may be impacted in their decision-making. Both digital and mechanical technologies share a significant common idea: technologies (and data) are not inherently good or bad, and they are definitely not neutral (D’Ignazio and Klein 2020, 149). I agree with James Smithies that digital tools and technologies are associated with “contemporary issues of substantial importance to everyday life,” namely, the “cultural, ethical, and moral implications” of their use (Smithies 2022, 111). This is not unique to digital technologies; in our encounters with material objects, including refrigerators, we are not static observers. They actively shape our behaviour, become enmeshed in the processes of quotidian life, assume social and cultural properties, and influence our understanding of the world and how we interact with it (Csikszentmihalyi 1993, 23; Brown 2001, 4). Our relationships to technologies change depending on the type of technology, who is using it, and the multiple and rapidly changing contexts in which we use that technology. For these reasons, technological ethics can be tricky to codify and apply—particularly amid the rapid changes of the past thirty years or so of the internet “revolution”—and has at times been reduced to performative proclamations that do little to balance systems of power (Hand 2018, 177; Sutherland et al. 2023, 121).

It is also difficult to extricate the ways we conduct ourselves from the places, or things, where those behaviours are formed. Looking to the web as an example, Berners-Lee’s somewhat utopian conceptualization was not of a tool, but of a community, a “social creation” where we clump into families, associations, and companies (Berners-Lee and Fischetti 1999, 123). This communion shares, if not a purely physical analogue, then a conceptual one, with digital spaces and infrastructures. There appears to be a pronounced demarcation between online behaviours and real-life engagement with others (as exemplified by a proliferation of successively angrier online discourses), yet this line becomes increasingly blurry, most notably with the rapid development of increasingly sophisticated AI systems that make it difficult to discern human from computer (Brown et al. 2023, 75). The at-times slippery nuances of computer-human relations need not complicate a look to the physical world for answers about the digital one.

Despite the aforementioned complexities of applying ethics to technologies, examining one technology, like the refrigerator, reveals that well-established practices have resulted in a tacit moral code we draw upon daily to guide us through moments of uncertainty. To date there have not been substantial studies into how social behaviours enacted through our use of older, non-digital technologies can provide a means of understanding the ethical use of newer ones. I argue that the seemingly straightforward technology of the refrigerator provides recognizable examples of how it shapes and directs our behaviours in profound ways. We learn respectful use not from any formalized instruction, but simply through daily use in tandem with others around us, and the ethics of refrigerator use have become well-understood social norms. It is worth investigating whether this phenomenon is, or could be, replicated in digital environments, and whether a similar “code” can serve to guide digital humanities (DH) practitioners. It invites the possibility that in digital spaces, ethical considerations can also become normative social behaviours that, in the simplest sense, invite us to engage more deeply with ethical questions in our own projects and, thinking bigger, lead to a future where data ethics become innate, tacit understandings upon which data governance policies take shape and guide how we conduct digital work.

It is hoped that those embarking on DH projects or working within institutions that collect and store data will experience some resonance with the recognizable social rules governing refrigerator technology. Those who work in pedagogy may find that the connection between everyday objects and our engagement with digital applications is a straightforward way to communicate to students how we can use data and digital tools fairly and justly. It is also imperative to acknowledge that the collection and use of data impacts every person who uses the internet or owns a smartphone, engages with social media or clicks the “I agree” button at the bottom of a website. Regardless of whether or not we engage in digital scholarship, most of us are entities in someone’s dataset, and are entitled to agency in making decisions about how our personal data is collected and used by companies and institutions. The refrigerator analogy is an accessible and recognizable apparatus to convey the complexities of technological ethics to scholars and laypersons alike.

This proposed framework draws from material culture, food culture, and social anthropology to understand the role of ethics in our use of technology. However, the hard labour of identifying and articulating ethical pain-points in digital environments—without which the leap to this idea would not have been possible—has not been mine. What follows is not a comprehensive review of the vast work that has been done to articulate digital ethics, but it is alongside the work of these scholars that the idea of a refrigerator manifesto finds its closest theoretical kinship. It is a new approach, but not a new idea, and builds on the robust body of scholarship and initiatives addressing the inherent tension between access to data for research and the rights of those whose data we require for DH research and projects (Wilkinson et al. 2016; Carroll et al. 2020; Sutherland et al. 2023). The rapid onset of AI technology has spurred a flurry of work in developing manifestos and policy recommendations pertaining to its use, including those of the Organisation for Economic Co-operation and Development (OECD), the Going Digital Toolkit, the Montreal Declaration, the Toronto Declaration, and the EU Declaration of Cooperation on Artificial Intelligence (Dutton 2018). A manifesto for digital ethics has been proposed in The Feminist Data Manifest-NO, a “declaration of refusal that dismantles harmful data structures and practices,” and many of the authors’ humanistically grounded ethical parameters align with and support the object lesson I propose here (Sutherland et al. 2023). I look to Catherine D’Ignazio and Lauren Klein’s critical analysis of gendered systemic bias (D’Ignazio and Klein 2020) and their call to reconcile the positive aspects of data projects with their ethical concerns. I draw upon efforts to define digital ethics (Floridi and Taddeo 2016), examinations of the changing ethical landscape (Kitchin 2014; Hand 2018; Müller 2024), and the specific ethics of artificial intelligence (Zerilli et al. 2021). Jer Thorp’s case studies (Thorp 2021) reveal his own reckoning with data responsibility in the projects he has undertaken, and this paper looks to his deft hand in employing anecdotes to exemplify the constant, important reminders to think before doing, to be constantly aware of how we are using data, and to repeatedly and actively consider who we are including or excluding in our work.

The history of the refrigerator and its use has been a fruitful topic for historians of material culture, who have focused on the development of refrigerator technology and the conditions leading to its widespread adoption into household kitchens (Rees 2013; Peavitt 2017; Gantz 2015). My research builds upon a precedent of scholarship on technologies as mediators in human relationships and as stages on which we enact codes of behaviour, and extends previous scholarship on the fridge as a storage infrastructure (Pilcher 2016; Marshall 2022). Cultural and social anthropologists have examined the behaviours arising from human/technology interaction (Maschio 2002; Blackman 2005). Blackman in particular has built out the concept of refrigerator rights, that is, rules surrounding whom we allow into our personal refrigerators and how this signals kinship ties between household members (Blackman 2005). This anthropological scholarship helped me to ideate refrigerator wisdom as a theoretical approach.

Refrigerator wisdom

As a vital node in what food historian Jeffrey Pilcher has referred to as culinary infrastructure, the refrigerator is a useful point of analysis in the broader systems that convey knowledge about food (Pilcher 2016). I argue it is also a site where, from an early age, its users learn and display tacit knowledge through specific behaviours and practices that have led to implicit rules. First, by its very utility as a place where we safeguard our valuable food resources, it requires rules governing who can access those resources. Second, it is a location where the same types of behaviours are repeatedly enacted and become encoded into everyday practices. We open and close it, interact with its contents, use and remove them, label and discard, add and subtract multiple times per day, because we (hopefully) eat multiple times per day. Third, its inner and outer spaces are shared with and by others in our social groups and have thus developed as locales of direct and indirect interaction with those groups. In this way, it is not only a technical infrastructure, but also a social one. The behaviours tied to our repeated interactions with the refrigerator and with others using it become established social norms. Finally, the refrigerator is a site of power and control over what goes in, what comes out, who has access, and under what conditions they take or are given access.

It will be helpful to foreground a material/digital ethics comparison with a brief explanation of refrigerator rules and how they function in this context of technological and social infrastructure. It is in the very concrete and practical ways we engage with the refrigerator, and the lessons learned, that the most useful themes begin to emerge and can be realized in digital applications.

  1. We open our refrigerator to those we trust, and the solid, material barrier of the door keeps out everyone except those in our immediate families or very close non-kin relations. There is also a conceptual barrier, in the form of the intuitive discomfort we feel when considering going into the fridge of someone outside our kinship group, or having someone enter our shared and protected domestic spaces. This concept is referred to by Margaret Blackman as refrigerator rights, the tacit knowledge of the implied permission or prohibition to enter the private space of the household refrigerator (Blackman 2005, 36). It is closely related to permissions, though it is more instinctive: an unmistakable sense that we are welcome to help ourselves, or not. These rights are closely tied to familial and social groups and can change as relationships begin, shift, or end.

  2. The dictum “you are what you eat” in one variation or another is an idea that goes back as far as the early nineteenth century (Shapin 2014, 377). Yet the refrigerator, and the systems that determine the quality of the foods we are able to fill it with, are not neutral. What goes in and comes out is outside the agency and control of many, yet the results of poor-quality food quickly transform into systemic barriers that are extremely difficult to overcome. Despite its appearance as a universally available and accessible technology for storing food, what is behind the door of one’s refrigerator reveals differences in class and privilege.

  3. A best-before date stamped on a food item suggests that while it can still be used, it may not be at its best. Ideally, things are placed in refrigeration according to the first-in/first-out system, and products with the oldest date are used first. Doing this ensures access to the best and freshest ingredients. Older products get used before they are no longer fresh, or discarded before food-borne organisms make us very sick. This best practice mitigates the harms of consuming what may be outdated, stale, or downright rotten. To remain safe and palatable, food items must be periodically examined, evaluated, repackaged, edited, or purged.

  4. Sharing a refrigerator space requires different levels of permission, and taking what is not ours is simply unethical. Plundering food stores has been understood as an act of war since humans moved from nomadic to static societies and began stockpiling food resources (Messer and Cohen 2024, 1). Most people would consider it a violation if their clearly labelled lunch were pilfered by a colleague from the shared office refrigerator. Even something as simple as having ingredients put away for a recipe becomes problematic if someone in the most closely-knit environment of the household uses them without asking. Permissions can also change over time: an ingredient required to make a dish at one point in time may at a different point in time be permissible to take and eat at will. Like refrigerator rights, the rules which govern what we can and cannot take change according to our relationship to the refrigerator itself and who we share it with.

  5. In shared spaces, like a communal refrigerator, assignment of a shelf and labelling your food has become a familiar way of signifying ownership. Labelling also helps determine where to place foods needing optimal storage. For example, an opaque container with no label may contain raw meat, which should not be stored with fresh vegetables and requires its own space to prevent harmful contamination. The clarity of a storage system prevents us from storing or using food in a way that is potentially harmful. However, a label is only as useful as the information it includes. If someone is allergic to dairy and finds a container in the fridge labelled “mac and cheese,” it is essentially unusable. The exact same dish labelled “vegan mac and cheese” is accurate, and what is in that container immediately becomes more usable. Mislabelling, or leaving out vital information that makes an item usable, is no better than having no label at all. Labelling minimizes the risk that what is opened and consumed could cause harm. An ideal label is informative, accurate, readable, and as complete as possible. Labels should use recognizable and accessible language, make use of universal symbols as needed, and be legible. The person creating the label decides what information to include, or exclude, and may have specific motives for doing so. Food should be relabelled if the information changes, or if the label becomes damaged or illegible.

  6. Refrigerator wisdom teaches that those with rights to what is inside the refrigerator should not be confronted with structural barriers that ignore difference, limit access, and promote inequity. A stepstool placed next to the fridge so children can reach it gives them agency over their own hunger signals and enables them to be self-sufficient. An elderly person may need to remove food from the fridge with one hand while balancing a cane with the other. In this case, food in small, easily carried containers, placed front and centre (not buried in the back), provides fair and equitable access and personal control.

  7. Refrigerator rules dictate that all users should practice reciprocity. To share a refrigerator involves binaries: mine/yours, full/empty, short/tall, chaos/control. When we share fridge space, we trust that the people we share with will ask permission if they are not sure if someone has plans for the food already, or that others will not consume something clearly labelled with our name. Refrigerator reciprocity means when we take something out, we return something that is of mutual benefit to the community of people using the fridge. Leaving behind an empty carton of milk is frustrating and, in the context of refrigerator sharing, considered a selfish act. Replacing the carton or letting someone know it has been finished are ways to practice reciprocity. You may go even further by using the milk to prepare something everyone can share, like a batch of homemade gelato. You could replace the finished milk with a better quality brand, or consider that someone in the household may be lactose intolerant and put a policy in place that only lactose-free milk be stored in the future.

  8. The refrigerator as a technological infrastructure is crucial to our modern lives, but it tends to become invisible, given little consideration until something goes wrong. Additional invisible labour in the context of the refrigerator happens at the familial level. Women are largely responsible for food shopping, storing, cooking, and management of the household refrigerator, including the cognitive labour of anticipating the needs of the family (Daminger 2019, 609–610). This labour only becomes visible when a hungry member of the household is standing in front of the refrigerator lamenting there is nothing to eat. Refrigerator wisdom teaches us that even if it appears magical, there is always someone’s labour behind the shopping, organizing, storing, labelling, and cleaning out of the refrigerator that leads to a meal on the table, and that work is usually underpaid or unpaid.

  9. When we open the fridge door and look inside, how many times are we sure about exactly what it is we are looking for? The action of standing and staring in front of an open refrigerator can mean many things besides physical hunger: curiosity, boredom, aimlessness, depression, or a desire for comfort; a search for something other than food. We know the results of eating when there is no need to do so, which may lead us to ask, do I really need this? The development of refrigerator technology itself can also be a lesson on the consequences of not critically assessing our motives before taking action. When refrigerators replaced ice boxes, the change immediately reduced the social interaction women had in their communities during frequent trips to the market, and eliminated local producers (Green 2014, 187–188). Refrigeration contributes to significant environmental impacts, from transporting food in refrigerated trucks across the cold chain, to the waste and byproducts generated by refrigerators themselves. Early refrigerators leaked deadly chemicals that asphyxiated people in their sleep (The New York Times 1929; The New York Times 1960). In the mid-twentieth century, a push to replace older, leak-prone models with newer ones left owners with large and cumbersome refrigerators to dispose of, and many households simply abandoned them in backyards or on porches. This widespread practice led to utter catastrophe: newspapers of the period reported an epidemic of curious children suffocating when self-locking doors trapped them inside, often while playing hide and seek in their own homes (Hollars 2015; The New York Times 1964; The New York Times 1961). Refrigeration means we eat better, and more safely, but the quick development of the technology also led to unnecessary deaths.

The principles

There is synergy between digital methods, tools, interfaces, and infrastructures and the simpler, widely adopted technology of the ordinary refrigerator. Like the refrigerator, digital infrastructures store, catalogue, and safeguard what we entrust to them. They have become central to daily life: at the time of this writing, the internet and social media are used by more than 67% of the global population (Statista 2025). They are sites of social interaction with others, and places where power is used to control, collect, and provide or limit access. They demonstrate and perpetuate systemic bias and inequality.

The following nine principles illustrate how the intuitive, common-sense rules for refrigerator use and sharing, what I have called refrigerator wisdom, are compatible with ethical digital practices and can be considered when encountering some of the ethical pain-points, gaps in policy, or overlapping concerns discussed in current scholarly discourses and debates in DH practice. It is also helpful to keep in mind that the majority of us are “digital citizens”: in addition to shaping our work as practitioners, data collection and use pervade nearly every aspect of our personal lives. As we think and move through the realities of this statement, it can be useful to consider the benefit of letting our intuition and tacit knowledge of ethics and relationships, the basis of refrigerator wisdom, shape our intentions and guide our actions.

Principle 1: Consider who has “refrigerator rights” to data

This principle invites an interrogation of how data boundaries are set and enforced and, crucially, of how intangible boundaries can be stabilized. Unlike refrigerators, datasets do not come with solid doors to clearly demarcate what is public and what is private; therefore, when collecting and using the data of those who have historically been marginalized, or whose data is culturally sensitive, it is vital to protect and signal off-limits data in other ways. This understanding has led to several initiatives for safeguarding data, ones that acknowledge the complexities and tensions between supporting open data and scholarship and protecting the rights of individuals and groups. It is especially useful to consider the kinship rights to refrigerator space articulated by Blackman in the context of Indigenous data sovereignty and the ways in which traditional knowledges are mediated by technologies that are often misaligned with Indigenous approaches to how information is shared. The CARE Principles for Indigenous Data Governance (Collective Benefit, Authority to Control, Responsibility, and Ethics) were developed in response to the need for Indigenous groups to set out terms for how, and for what purpose, their data is collected and used (Carroll et al. 2020, 3). In essence, these principles create ways to determine “refrigerator rights” to data, elucidating how Indigenous communities share and protect their knowledges and intangible cultural materials.

We can similarly interrogate the boundaries we set, and those we respect, when it comes to our own data. It is strange to imagine the marketing manager of a company walking into your house and going into your fridge, yet people willingly “confess” their data and give companies refrigerator rights to their personal information (Kitchin 2014, 180). The massive amount of web data, and especially the use of social media to mobilize movements such as the Arab Spring and #MeToo, make these platforms a desirable source of data for researchers (Wilson 2022, 5; Jacobson and Gorea 2022, 703). While most social media platforms are free to join, they are data collection systems with varying levels of opacity. In 2018 the Cambridge Analytica scandal revealed that Facebook had handed over the personally identifiable information (PII) of 87 million users to the data firm Cambridge Analytica, which then leveraged this data to profile users and send targeted information to voters during the Trump campaign (Isaak and Hanna 2018, 55–56). This revelation highlighted the risks associated with user privacy on social media platforms and resulted in Meta implementing more robust privacy protections for user data. However, the truth remains that when we encounter services that require no money to join, it is likely that we ourselves, and the data we willingly offer up for collection, are the product. Online surveys, loyalty programs, memberships, and newsletter signups all require a base level of personal data to participate and become rich sources of data for companies to use, exploit, or market further products to us (Kitchin 2014, 165). It is imperative that we advocate for ourselves and critically assess these value-adds against the risks of sharing private information.

There is a reason we feel uncomfortable going into someone’s fridge. Should we not experience the same discomfort when accessing someone’s data, or in giving access to others? To whom are we giving our precious refrigerator rights?

Principle 2: In algorithms (and refrigerators), garbage in equals garbage out

This principle reflects on the ways in which the quality and accuracy of machine-learning outputs are dependent on the quality of the data chosen to feed them. Data and the systems we employ to process them are not neutral and become embedded with the biases of the humans who create them (Buolamwini 2023, 49). The embedded bias in datasets does not just appear: it begins at the outset of collection, when decisions are made about what to collect or omit, where to collect data, from whom, and for what purpose. The mechanisms to collect data from individuals are not consistently designed to allow users to adequately choose the gender, ethnic group, race, or sexual orientation that best describes their identity. Text boxes often do not accommodate non-Western or Indigenous names or allow for single names or multiple names. These decisions become “amplified as data are computed upon, inflated by algorithms, and distilled by visualization” (Thorp 2021, 60–61). Omissions or deliberate over-representations thus ensure that certain outcomes from the data are pre-determined long before we begin the process of analysis. In automated decision-making technologies, such as those used in law enforcement or in banks’ use of “big data” to construct risk models for processing loan applications, inequality perpetuated by embedded biases threatens an individual’s chances for equitable treatment. For example, where a person may benefit from a positive result, such as a job interview or loan approval, algorithms fed data modelled on so-called desirable persons end up filtering out women and people of colour (Dastin 2018). Conversely, where a positive match results in a negative outcome, algorithms used in law enforcement are fed data that skews results to include more non-white individuals, and AI used in predictive policing directs attention to neighbourhoods where crime is deemed most likely to occur (Demiaux 2017, 25).

There have also been instances of facial recognition being based on incomplete or flawed data. In her 2019 report “Garbage In, Garbage Out: Face Recognition on Flawed Data,” Clare Garvie of the Georgetown Center on Privacy and Technology reviews the work done in several studies, demonstrating how garbage data can violate basic civil rights, for example, when a system returns false positives, or when suspects are not informed of how their image has been used to train AI models (Garvie 2019). When confronted with identification tasks, facial recognition software displays both gender and age bias, and fails to accurately recognize gender-neutral, Asian, and Black faces (Buolamwini 2023, 242).

Machine-learning systems based on flawed data reproduce racial and socioeconomic biases, exacerbate inequality, and limit the ways in which individuals are able to succeed in the world (Datta et al. 2017, 1). In Canada, government and public sector institutions have adopted policies for data collection and use, such as the 2023–2026 Data Strategy for the Federal Public Service, to improve collection and address data gaps (Government of Canada 2023a). The strategy includes amendments to Statistics Canada collection procedures to allow Indigenous Canadians to submit information under traditional names. The most current draft of updates to the World Wide Web Consortium’s (W3C) Accessibility Guidelines (WCAG) 3.0 includes two recommendations pertaining directly to algorithmic bias: that content authors train AI models to be unbiased towards persons with disabilities, and that they conduct usability testing and ethics reviews to determine whether persons with disabilities are being disadvantaged by algorithmic systems (W3C 2025).

It is understandable that intellectually curious DH practitioners, presented with vast amounts of data and imagining how it can be analyzed in creative new ways, can lose sight of how the abstractions of datafication obfuscate the individuals behind the data points. To develop a more humanistic approach to data, there are some specific ways to mitigate data biases and remind ourselves of this fact. We can interrogate available data for signs of bias before we use them in our projects. We can seek out missing or incomplete datasets, or build our own to address gaps, and include robust discussions of these gaps in our work. When we design surveys or otherwise collect data from human participants, we can follow ethical standards for data collection, such as the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2) (Government of Canada 2023b). Finally, because AI systems “create the illusion of both a fully autonomous process and impartial output” (Viola 2023, 10), we should abandon any notion that computers equal objectivity or that numbers equal neutrality.
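To make the first of these suggestions concrete, here is a minimal sketch, in Python, of what interrogating a dataset for signs of bias might look like before analysis begins. The records and field names are hypothetical, and a real audit would go much further, but even a simple count of who is and is not represented can surface gaps worth discussing in our work.

```python
from collections import Counter

# Hypothetical records; in practice these would be loaded from a real dataset.
records = [
    {"name": "A", "gender": "woman"},
    {"name": "B", "gender": "man"},
    {"name": "C", "gender": "man"},
    {"name": "D", "gender": "non-binary"},
    {"name": "E", "gender": "man"},
]

def representation_report(rows, field, min_share=0.25):
    """Report each category's share of the data and flag under-represented groups."""
    counts = Counter(row.get(field, "unrecorded") for row in rows)
    total = sum(counts.values())
    for category, n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented; investigate before use" if share < min_share else ""
        print(f"{field}={category}: {n}/{total} ({share:.0%}){flag}")

representation_report(records, "gender")
```

The chosen threshold (`min_share`) is itself a judgment call, which is the point: the audit does not remove the need for humanistic interpretation, it creates an occasion for it.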

Principle 3: Use best-before dates and periodically purge outdated, stale, or rotten data

This principle of data ethics pertains to the use of the most accurate and current data and considers how data collected today will be used in the future. As of July 2025, the internet is home to 1.2 billion websites, which include innumerable databases, blogs, social media platforms, file sharing sites, message boards, gaming sites, periodicals, e-commerce sites, and search engines (Haan 2025). If we look at a data source like Statistics Canada, which manages census data, it is fairly easy to see how the data is managed. Every five years there is a new census with fresh data; it is posted on the website, and older statistics are archived. Users have access to the most current data, and it is very clear which information is old.

With other sites and databases, it can be trickier. First of all, what constitutes “fresh” information? A date is the beginning, but sometimes content cannot be judged current by the date alone. Historic information, such as that in the Slave Voyages Dataset or old census records, is likely as accurate as it will ever be (SlaveVoyages 2025). New information may be discovered and added later, but the data on a primary source like a ship’s manifest is much less susceptible to change with the passage of time. A feed on X, begun when the platform debuted as Twitter in 2006, is a dataset of the person or company who used it. The data on X is more personal and fluid, an online diary of events over time, registering shifts in our responses to the social climate and culture. A series of chronological tweets becomes a history of what we were thinking and saying at a specific point in time, and often these viewpoints change. Individuals whose lives have been derailed by the online chronicling of past indiscretions, or of opinions they once held but no longer do, exemplify the need for stronger policies pertaining to one’s right to be digitally “forgotten” by compelling service providers and search engines to scrub the offending material (Jones 2016, 3). In January 2022, Meta changed its data policy to allow users to review a chronological list of all their posts, likes, and comments and delete them individually or bulk-delete all at once (Instagram Help Center 2025). The intricacies of current debates surrounding digital “redemption”—from whether it curtails the free movement of ideas in the information age, to whether it is logistically possible to remove digital evidence entirely, to the difficulty of even defining what constitutes “privacy”—are more complex than the space here allows (Jones 2016, 81). One key point is that once data is scraped, it has already begun to age, and it is nearly impossible to retract information that has already been republished or used to train machine-learning systems. Buolamwini asserts that the practice of deep data deletion should involve not only removing “dirty” data (that which has been debunked, is inaccurate, or exhibits damaging bias), but also removing the AI models that have been trained on it (Buolamwini 2023, 101–102).

A historical dataset and the real-time dynamics of social media are two extremes. Many datasets accumulate information more slowly, changing and evolving over longer periods of time. In the middle are other types of stale or downright rotten information whose effects can range from being a nuisance for researchers to causing measurable harm. These can include dead websites, link rot, or news stories containing now-debunked myths, which, if not followed by further research, can still be read as statements of fact. The New York Times digital archive turns up zero search results for articles about the doctor whose conflicts of interest with pharmaceutical companies led him to falsify data linking MMR vaccines to autism. However, the article still lives in the archives of The Lancet, where it was first published, and while the document has been overlaid with the word “retracted” in giant red lettering, its continued existence online means it can still be read, cited, and linked to (which I will not be doing here) by those with an interest in keeping its falsehoods alive. Sites may not meet the latest minimum standards for accessibility, such as those developed by the World Wide Web Consortium (W3C 2024), or may be holding and posting culturally sensitive material that should be revisited, reframed, or repatriated to its rightful owners. Data is also financially and environmentally costly to store.

These harms are mitigated by a combination of policy and best practices for managing the lifecycle of data, including its creation or acquisition, storage and maintenance, usage and distribution, archiving and retention, and destruction (Ige et al. 2024). A formalized Data Management Plan (DMP) is frequently required by funding organizations that support DH projects, including the Social Sciences and Humanities Research Council of Canada (SSHRC 2025). While diligent DH practitioners make DMPs, outdated material thrives on the internet. A Pew Research study from 2024 estimated that 38% of websites operational in 2013 were no longer being updated, and that 21% of U.S. government web pages contained at least one broken link (Chapekis et al. 2024). That anyone can publish on the web is both a strength and a weakness, and as the Pew report found, best practices are not consistently employed, even on government websites. Those who own websites and datasets should take the time to look critically at their own content (Kuenn 2014). The unbridled freedom to publish online does not neatly align with legal or moral demands that information be kept up to date. Should users be required to review their information for accuracy, broken links, or outdated content, perhaps before they can renew their domain?
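As a small illustration of such lifecycle maintenance, the sketch below, assuming a simple hypothetical inventory of URLs and last-reviewed dates, uses only the Python standard library to flag entries that are past a chosen “best-before” review interval or whose links no longer resolve.

```python
from datetime import date, timedelta
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

# Hypothetical inventory: each entry pairs a URL with the date it was last reviewed.
inventory = [
    {"url": "https://example.com/dataset", "last_reviewed": date(2023, 5, 1)},
    {"url": "https://example.com/old-page", "last_reviewed": date(2025, 1, 15)},
]

REVIEW_INTERVAL = timedelta(days=365)  # a chosen "best-before" period

def is_stale(entry, today=None):
    """An entry is past its best-before date once its last review exceeds the interval."""
    today = today or date.today()
    return today - entry["last_reviewed"] > REVIEW_INTERVAL

def link_resolves(url, timeout=10):
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as response:
            return response.status < 400
    except (HTTPError, URLError, OSError):
        return False

for entry in inventory:
    if is_stale(entry):
        print(f"REVIEW: {entry['url']} (last reviewed {entry['last_reviewed']})")
    if not link_resolves(entry["url"]):
        print(f"LINK ROT: {entry['url']} no longer resolves")
```

A script like this catches only the mechanical symptoms of decay; deciding whether flagged content should be updated, reframed, repatriated, or purged remains a human judgment.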

Principle 4: Obtain permission to use others’ data, as often as necessary

Asking permission before taking what does not belong to us is arguably the simplest principle to grasp, but one requiring multiple approaches to ensure we are using data permissions ethically and correctly.

In 2011, the company Reveal Digital, which works with university libraries to digitize cultural materials, secured the rights to On Our Backs, a lesbian pornography magazine published from 1984 to 2004. Five years later, the material was removed. Among the reasons for the removal was a quiet protest in the form of blog posts by archivist and librarian Tara Robertson, who pointed out that the historical and cultural value of the material must be weighed against the privacy rights of those who posed in the magazine, people who were members of a sensitive community. After all, she wrote, “consenting to a porn shoot that would be in a queer magazine is a different thing to consenting to have your porn shoot be available online” (Robertson 2016). This example illustrates one of the many facets of permission that digital practitioners must consider when embarking on projects that involve the collection and dissemination of sensitive data: permission is solicited and granted in many forms, and consent cannot be reused or assumed to be granted in perpetuity (Sutherland et al. 2023, 124). In Canada, the Tri-Council Policy Statement on Research Ethics requires specific standards to be met in the collection of data from individuals for research, but only for projects “conducted under the auspices of an institution eligible for funding by the federal Agencies” (CIHR, NSERC, SSHRC) (Government of Canada 2023b), and of course, not all DH projects are. Permission and consent, however, are not just matters of legality, but of a community-based understanding of digital responsibilities.

This is the same tacit knowledge of right and wrong that governs our refrigerator conduct, and, as with refrigerator permissions, no single set of rules for ethical data collection and use applies to every situation. We can, however, follow some general and intuitive principles of permission and consent. Ask the right people, and consider the ways in which the use of data will impact all those involved. There must be clarity and transparency about what data we are asking for and how it will be used. If we ask permission and the answer is no, then we cannot use it. If, because of the passage of time or the reluctance of those whose data we are requesting, we cannot get an answer, we cannot use it. If the answer is yes, we must proceed thoughtfully and according to established ethical parameters, and then take only what we have been authorized to use. As in the On Our Backs example, if the answer is yes today, then the answer is yes today, not for another, unrelated project later; we need to ask again and make sure the answer is still yes. Consent must be ongoing, and participants must be given the option to withdraw their consent from projects at any time. We can also use tools like the Digital Research Ethics Collaboratory (DREC) to collaborate with colleagues in our field and adopt strategies that give people more control and agency over their information (DREC 2025). If we are not sure, or we find ourselves talking ourselves into it being okay, it is not okay.
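The logic of project-scoped, expiring, revocable consent can also be encoded directly into how we store permissions. The sketch below is a hypothetical illustration, not a standard or an existing library, of a consent record whose checks force a fresh request whenever consent has expired, been withdrawn, or is being stretched to cover a new project.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    """A hypothetical record of one participant's consent for one project."""
    participant: str
    project: str             # consent is scoped to a single, named project
    granted_on: date
    expires_on: date         # consent is not granted in perpetuity
    withdrawn: bool = False  # participants may withdraw at any time

    def permits(self, project, on=None):
        """Consent holds only for the named project, unexpired and not withdrawn."""
        on = on or date.today()
        return (
            self.project == project
            and not self.withdrawn
            and self.granted_on <= on <= self.expires_on
        )

record = ConsentRecord("participant-042", "oral-history-2025",
                       granted_on=date(2025, 1, 10), expires_on=date(2026, 1, 10))
print(record.permits("oral-history-2025"))  # True only while the consent window is open
print(record.permits("follow-up-study"))    # False: a new project needs a new "yes"
```

The design choice here mirrors the principle: reuse is impossible by default, and the burden falls on the researcher to go back and ask again.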

If it feels uncomfortable to ask for permission, we need to question whether we are asking for too much.

Principle 5: Label and contextualize data

When used properly, labelling makes it clear what something is, what it is for, and who it belongs to. In digital contexts this can include the use of metadata, tagging, the practice of decolonizing archival catalogues, or applying labels.

The FAIR principles for scientific data management and stewardship state that data should be findable, accessible, interoperable, and reusable. At the crux of the FAIR principles is the addition of metadata—data about data—that can be read by humans and machines, provides context about an entity, uses readable language, uses persistent identifiers to make information about entities findable and recognizable, and allows for data to be reused by researchers (Wilkinson et al. 2016). As D’Ignazio and Klein have articulated, a major problem with data downloaded from the internet is that a lack of metadata fails to provide the context required to analyze it (D’Ignazio and Klein 2020, 153). It is also not unusual for certain datasets to be inaccessible by design, concealing rather than elucidating information. In 2014, the Boston Police Department was forced by a lawsuit to disclose the datasets they kept to record stop-and-frisk incidents, but the cryptically numbered labels did not describe the contents of the files and were not contextualized in a way the public could use (D’Ignazio 2019). It makes sense to legislate that open datasets carry plain-language labels or annotations so they can be more easily deciphered.
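As one illustration of what such human- and machine-readable context can look like, the sketch below assembles a minimal, hypothetical dataset description using the schema.org Dataset vocabulary, with a persistent identifier and an explicit licence, and serializes it as JSON-LD. It is a gesture at the FAIR principles rather than a complete implementation of them, and every detail of the record is invented for illustration.

```python
import json

# A hypothetical dataset description using the schema.org Dataset vocabulary.
metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Community Oral Histories, 1950-1980",
    "description": "Transcribed interviews, collected with ongoing participant consent.",
    "identifier": "https://doi.org/10.0000/example",  # placeholder persistent identifier
    "license": "https://creativecommons.org/licenses/by-nc/4.0/",
    "creator": {"@type": "Organization", "name": "Example DH Lab"},
    "dateModified": "2025-07-01",
}

# Serialize as JSON-LD so both humans and machines can read the same context.
print(json.dumps(metadata, indent=2))
```

Like the label on a container of vegan mac and cheese, the record is only as good as what its creator chooses to include, which is why the surrounding questions of positionality and motive still apply.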

In GLAMS (galleries, libraries, archives, museums, and special collections), where we work or conduct research, materials have been catalogued according to normative standards rooted in colonialism and white, male, Western, heteronormative supremacy, and are therefore embedded with the inherent bias of this positionality (Werling and Radio 2025, 84). Policies and best practices for critical cataloguing reject and disrupt labelling systems that flatten rather than specify and that reproduce harmful categorizations, and they work toward further decolonizing archives and how information is organized within them (Sutherland et al. 2023, 124).

Working with Indigenous communities, the Traditional Knowledge Labels (TKLs) initiative provides a system of pictographic labels that can be used to identify Indigenous ownership of cultural materials (provenance), the conditions of their use (protocols), and who has access to them (permissions), to mark cultural boundaries, and to create local and specific conditions for sharing Indigenous cultural knowledge, thereby protecting sensitive and sacred Indigenous knowledges (Local Contexts 2025).

Principle 6: Ensure barrier-free access to sharable data

If we assume that some people should have refrigerator rights to data, then how can we ensure that people do not face barriers to accessing information? Data fairness involves a fine balance between safeguarding private information and providing accessibility to data that people are entitled to, and different groups of people will require different tools to ensure they do not face physical, institutional, or socio-economic barriers to that access.

This could mean prioritizing open scholarship or finding ways to circumvent academic paywalls to provide more open access to books and scholarly journals. To keep pace with rapidly changing technologies, the W3C has drafted an updated version of its Accessibility Guidelines to ensure web tools and content are fully accessible to persons with disabilities (W3C 2025). In the contexts of Western poverty, rural communities, and the digital access gap between the global north and south, scholars have argued that unfettered and unmonitored access to the internet, provided free of charge, qualifies as a basic human right (Reglitz 2024; Cruft 2023).

Too often, the paternalistic controls under which so much information is tightly kept are justified for the purposes of safety and security, as with video surveillance footage and passenger information collected by border agencies and airlines in the post-9/11 era, or the CCTV recordings of Londoners going about their day (Kitchin 2014, 178). In the earlier example of garbage in, garbage out facial recognition software, one recommendation proposed by Garvie was that defendants brought in for questioning based on a computer-generated composite should have the right to view the data inputs that created it (Garvie 2019).

That data is kept under lock and key as a means of maintaining control over marginalized populations is precisely why we need policies ensuring that people do not have to stand on rickety chairs to access what they need, or worse, are left with no support at all.

Principle 7: Prioritize and practice reciprocity

This principle is very tightly bound to the idea of permissions. When we ask to use data, there should be more at stake than our personal gain, and there should be a mechanism for returning the benefits of research to the community.

For example, in 2009, the DiverseCity Project collected data on those in senior leadership roles across multiple public sectors in diverse communities in Toronto and issued a clear mandate to use the results to provide recommendations that would help ensure minorities were being served in their respective communities (Cukier and Yap 2009). Other projects collect data in ways that are less clear, and this raises questions about what they will do with it. A 2021 New York Times article profiled Jimmy Chen, founder of the company Propel, whose app was designed to assist those using the Supplemental Nutrition Assistance Program (SNAP), or food stamps, in planning and budgeting. Chen was quoted as saying that ad revenues for the app came from grocery stores advertising sales or companies offering employment: “the goal [is] to align the company’s financial interests with those of its users” (DeParle 2021). As the article pointed out, commenters on Twitter (now X) questioned Chen’s sincerity and wanted to know exactly how this app was supposed to aid the poor. In other words, Chen’s app, developed to fill a niche, was collecting ad revenues yet was not directly addressing the material conditions of those whose data it relied on to function and, ultimately, to fund his personal venture. While social media critiques are arguably not the only or best way to gauge community benefit, this example shows how the power differential between food stamp users and tech CEOs (funded by JPMorgan Chase, among others) exemplifies the inequalities that are reinforced when financial need is a factor in the decision-making of the end user of the app. Chen is, presumably, not a person receiving food stamps, and while the app may very well be helpful, even more helpful would be to consider how collected user data could be returned to the community and leveraged to support food security initiatives.

Reciprocity is not limited to giving back after the fact. One of the most important ways to practice reciprocity is to engage in community outreach prior to the onset of a project, in ongoing consultation during its design, and in continual evaluation throughout its lifecycle. The results of such considerations can move a project beyond the goals of its initial concept and create new opportunities for empowering and emancipating marginalized groups.

Principle 8: Acknowledge and challenge invisible labour

In digital contexts, realized work, that which we see as an outcome, is built on a vast infrastructure of invisible, unseen, and undervalued labour (Justesen and Plesner 2024, 2) that lies at various points along the ethical spectrum. Among the assertions emerging from critical DH discourses is that the very foundation of DH is highly skilled yet invisible labour (Smith and Whearty 2023, 28). In academic institutions, the paid work needed to further DH projects and pedagogy is often supplemented by collegial support in classroom and workshop environments, especially given the employment of specialized technologies and methods (Croxall and Jakacki 2022). Many projects are supported by tasks undertaken by student research assistants who perform hours of work “under the hood”: building databases, cleaning and entering data and metadata, and checking sources. In corporate environments, what Mary L. Gray and Siddharth Suri refer to as ghost work is the array of on-demand gig labour—proofreading, transcribing, captioning, inputting metadata, confirming identities—that drives the machine-learning operations of large tech companies and creates an underclass of workers lacking the employment rights enjoyed by full-time employees (Gray and Suri 2019, ix–xxiii). The work of social media content editing is largely done offshore in the Philippines and China, where “digital janitors” are (under)paid to supplement the work of algorithms in removing offensive or violent content (Cherry et al. 2016, 71).

After acknowledging the interdependencies of digital projects and the labour needed to realize them, challenging invisible labour means no longer accepting the “myth of seamlessness” (Justesen and Plesner 2024, 9). Pushing back against invisible systems, what D’Ignazio and Klein have referred to as “showing our work,” can be accomplished by interrogating the systems of power that undergird data collection, by being more inclusive of marginalized voices at every stage of project development, and by contextualizing data and its labels (D’Ignazio and Klein 2020, 184). The principles of both permissions (Principle 4) and reciprocity (Principle 7) contribute to this inclusion by revealing the human voices behind the data we collect and increasing visibility by focusing less on taking data and giving more attention to how data can be used to address power differentials. Even if we cannot remove the element of invisible labour entirely, for example, due to rigid capitalist structures, taking these additional steps and performing these different kinds of labour in our own projects can help make these structures, and the labour that supports them, more visible and valued.

Principle 9: Question our motives, not once, but throughout the lifecycle of digital projects

Getting to the root of our intentions requires reflection, and this principle can be applied to both the development of new technologies and the ways in which we collect and interpret data for digital humanities projects. The same way we can ask ourselves, “Am I truly hungry?” we can ask ourselves, “Are the people or groups represented by the data being empowered by this work? Is it furthering knowledge in a way that is respectful? Are we justifying shaky motives to meet a personal end?”

In our race to get technologies to market or to start a project that is exciting to us, we can fall into the trap of failing to critically examine our motivations and ask important questions about who our work will ultimately serve. This can lead to wilfully or unintentionally overlooking the downsides, or misreading cues about our own intentions, in much the same way we mistake feelings of emptiness or boredom for hunger. The excitement to digitize On Our Backs was rooted in the desire to make queer histories more accessible and correct the absence of lesbian stories in the archive. What was missing was the consideration that the right to be seen and heard is sometimes secondary to the right to be silent or forgotten. The original participants in On Our Backs were denied agency over their own images, how they were shared, and any involvement in the decision to digitize the archive.

The rapid development and adoption of generative AI has led many to question the safety and ethics of expediting extremely powerful technologies, often without public consultation or adequate testing. In 2023, the Future of Life Institute issued an open letter (signed by more than 33,000 people to date) urging developers, including those at OpenAI, to pause the training of powerful models due to inadequate planning for the consequences (Future of Life Institute 2023). The letter’s assertion that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable” exemplifies what a questioning of motives could look like, yet by the time the letter made the rounds, OpenAI’s GPT-4 genie was already out of the bottle. Experts feared that by prioritizing speedy innovation over diligence in questioning the implications of AI systems, ones that were edging ever closer to artificial general intelligence, humanity was “sleepwalking into catastrophe” (Perrigo 2023).

What the On Our Backs example and Future of Life’s open letter demonstrate is that once projects are well underway, especially those with economic potential, activism after the fact has varying degrees of success. It is not a replacement for mindful and critical assessment of projects at the point of conception. If we consider that, as Johanna Drucker has articulated, knowledge production is “situated, partial, and constitutive,” humanistic/digital inquiry as a discipline is predicated on the acknowledgement that data is a malleable material in the construction of knowledge (Drucker 2011, 3). In other words, if data is neither inert nor indifferent, the principle of questioning our motives necessitates moving beyond obtaining permission. It requires asking what story we are seeking to tell, and if the data we take, Drucker’s “capta,” is being used in a way that helps us look at a situation differently, consider more people in more specific ways, move beyond our biases and assumptions, or ultimately become more generous digital practitioners.

Questioning our motives involves inserting human voices into digital projects and applying humanistic ethical questions to our technologies. The CARE principles emphasize that data and associated research must be employed for the collective benefit of Indigenous communities. For data practices to be considered ethical, Indigenous people must be the ones to determine collective benefit (Carroll et al. 2020, 6). If we question our motives and find them selfish or based on ill-conceived ideas of saving the heritage of peoples or cultures other than our own, sometimes the answer is to walk away. If our projects involve precarious permissions, then we should walk away. If the results of our research and the data we have gathered to support it do not result in some benefit to the community, if they lack reciprocity, then maybe we should walk away. If we look at our project and it does not include who it should include, in the form of missing data or a lack of representation in the project team, if the data is flawed, incomplete, or perpetuates bias, these should be remedied, or else we should not proceed. At all stages of a project, from its conceptualization to proposal to data collection to its end stages, there should be repeated consideration of whether the project or product creates harm, further marginalizes those already on the margins, creates barriers, or neglects to break those barriers down. We should be constantly questioning our motives and asking ourselves: “Am I truly hungry? Do I need this? Does the world need this?”

Conclusion

The refrigerator is not a neutral object but an infrastructure embedded with the means to establish boundaries and signal kinship rights. It is an interface through which we enact habits and rituals of adding things and taking them out, of dating and labelling. It can provide lessons about sharing and about finding peaceful ways of establishing territorial rights. It teaches us to store things better, to place things where they can be found and where they are accessible to more people. In this shared space, we learn to recognize what does not belong to us, and that using it requires one of many types of permissions. Stocked with fresh food and regularly replenished, it provides the raw materials for both creativity and the necessities of life. It is also a repository for old, outdated, forgotten, and decaying things, shoved far into the back, that we will ultimately need to deal with. And it allows some things to be kept purposefully hidden.

We experience a similar ambivalence about living in a datafied world. The seemingly infinite amount of information at our disposal is a gateway to knowledge, provides opportunities to engage with colleagues, and leads to collaboration on projects despite physical distance. We can connect with strangers across the globe, attend a class anywhere over Zoom, check in with friends on social media, and share parts of our lives with people we have never met. What cannot be ignored is that the digital world is not a positive experience for everyone. Data technologies can be frightening, threatening, and sometimes a matter of life and death. Truly understanding a datafied world, regardless of the positive outcomes of our individual experiences, requires that we maintain a constant and critical eye on the impacts our connected world has on everyone who resides in it.

There is certainly much more that could be said, and other ways to approach ethical questions through the lens of material culture and technology: the implications of storage, how technologies contribute to waste, their effects on the environment and climate change, and the question of who is most vulnerable to those effects. There is the matter of how we handle difficult ethical situations or conflicts. How can we reassure those who trust us with their sensitive data that we will handle it with care? How do we adapt and modify rules as technologies change? Perhaps most significantly, what are the best ways to model these behaviours and teach them to the next generations of scholars?

If we are ethical digital practitioners, it is instinctive, I think, to be both hopeful and despondent about what can realistically be done to mitigate harm. When it comes to creating a digital world based on careful consideration of everyone it could affect, we must turn our attention to what is possible: subjecting our assumptions and projects to ongoing, critical engagement with evolving ethical approaches; questioning who we are including or excluding in our work; and considering whom the data we collect benefits and whom it harms. If we can imagine how our conduct in our material social environment translates to the digital one, it may bring us slightly closer to true digital humanism.

Competing interests

The author has no competing interests to declare.

Contributions

Editorial

Section and Translation Editor

Davide Pafumi, The Journal Incubator, University of Lethbridge, Canada

Copy and Production Editor

Christa Avram, The Journal Incubator, University of Lethbridge, Canada

Section, Copy, and Layout Editor

A K M Iftekhar Khalid, The Journal Incubator, University of Lethbridge, Canada

References

Berners-Lee, Tim, and Mark Fischetti. 1999. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor. HarperSanFrancisco.

Blackman, Margaret B. 2005. “Focus on the Fridge.” Gastronomica 5 (4): 32–37. Accessed November 3, 2025. http://doi.org/10.1525/gfc.2005.5.4.32.

Brown, Bill. 2001. “Thing Theory.” Critical Inquiry 28 (1): 1–22. Accessed November 16, 2025. https://www.jstor.org/stable/1344258.

Brown, Michele Lee, Hēmi Whaanga, and Jason Edward Lewis. 2023. “Relation-Oriented AI: Why Indigenous Protocols Matter for the Digital Humanities.” In Debates in the Digital Humanities 2023, edited by Matthew K. Gold and Lauren F. Klein, 74–83. University of Minnesota Press.

Buolamwini, Joy. 2023. Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Random House Publishing Group.

Carroll, Stephanie Russo, Ibrahim Garba, Oscar L. Figueroa-Rodríguez, et al. 2020. “The CARE Principles for Indigenous Data Governance.” Data Science Journal 19 (1). Accessed November 3, 2025. http://doi.org/10.5334/dsj-2020-043.

Chapekis, Athena, Samuel Bestvater, Emma Remy, and Gonzalo Rivero. 2024. “When Online Content Disappears.” Pew Research Center, May 17. Accessed November 3, 2025. https://www.pewresearch.org/data-labs/2024/05/17/when-online-content-disappears/.

Cherry, Miriam G., Marion A. Crain, and Winifred R. Poster. 2016. Invisible Labor: Hidden Work in the Contemporary World. University of California Press.

Croxall, Brian, and Diane K. Jakacki. 2022. “The Invisible Labor of DH Pedagogy.” In The Bloomsbury Handbook to the Digital Humanities, edited by James O’Sullivan, 295–304. Bloomsbury Academic. Accessed November 3, 2025. http://doi.org/10.5040/9781350232143.ch-28.

Cruft, Rowan. 2023. “Is There a Right to Internet Access?” In The Oxford Handbook of Digital Ethics, edited by Carissa Véliz, 52–66. Oxford University Press. Accessed November 16, 2025. http://doi.org/10.1093/oxfordhb/9780198857815.013.4.

Csikszentmihalyi, Mihaly. 1993. “Why We Need Things.” In History from Things: Essays on Material Culture, edited by Steven Lubar and W. David Kingery, 20–39. Smithsonian Institution Press.

Cukier, Wendy, and Margaret Yap. 2009. DiverseCity Counts: A Snapshot of Diversity in the Greater Toronto Area. The Diversity Institute, Toronto Metropolitan University. Accessed November 3, 2025. https://www.torontomu.ca/content/dam/diversity/reports/diversecitycounts_asnapshotofdiversityintheGTA_2009.pdf.

Daminger, Allison. 2019. “The Cognitive Dimension of Household Labor.” American Sociological Review 84 (4): 609–633. Accessed November 3, 2025. http://doi.org/10.1177/0003122419859007.

Dastin, Jeffrey. 2018. “Insight – Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, October 10. Accessed November 3, 2025. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/.

Datta, Anupam, Matt Fredrikson, Gihyuk Ko, Piotr Mardziel, and Shayak Sen. 2017. “Proxy Non-Discrimination in Data-Driven Systems.” arXiv. Accessed November 16, 2025. http://doi.org/10.48550/arXiv.1707.08120.

Demiaux, Victor. 2017. How Can Humans Keep the Upper Hand? The Ethical Matters Raised by Algorithms and Artificial Intelligence. CNIL (French Data Protection Authority). Accessed November 3, 2025. https://www.cnil.fr/sites/cnil/files/atoms/files/cnil_rapport_ai_gb_web.pdf.

DeParle, Jason. 2021. “How Tech Is Helping Poor People Get Government Aid.” The New York Times, December 8. Accessed November 3, 2025. https://www.nytimes.com/2021/12/08/us/politics/safety-net-apps-tech.html?referringSource=articleShare.

D’Ignazio, Catherine. 2019. “Putting Data Back into Context: Why Context Is Hard.” DataJournalism.com, April 4. Accessed November 3, 2025. https://datajournalism.com/read/longreads/putting-data-back-into-context.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. MIT Press. Accessed November 3, 2025. https://data-feminism.mitpress.mit.edu/.

DREC (Digital Research Ethics Collaboratory). 2025. “About the Digital Research Ethics Collaboratory.” Accessed November 16, 2025. https://www.drecollab.org/about/.

Drucker, Johanna. 2011. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 5 (1). Accessed November 16, 2025. https://dhq.digitalhumanities.org/vol/5/1/000091/000091.html.

Dutton, Tim. 2018. “An Overview of National AI Strategies.” Medium, June 28. Accessed November 3, 2025. https://medium.com/@tim.a.dutton/an-overview-of-national-ai-strategies-2a70ec6edfd.

Floridi, Luciano, and Mariarosaria Taddeo. 2016. “What Is Data Ethics?” Philosophical Transactions of the Royal Society A 374 (2083). Accessed November 3, 2025. http://doi.org/10.1098/rsta.2016.0360.

Future of Life Institute. 2023. “Pause Giant AI Experiments: An Open Letter.” March 22. Accessed November 3, 2025. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

Gantz, Carroll. 2015. Refrigeration: A History. McFarland & Company, Inc., Publishers.

Garvie, Clare. 2019. “Garbage In. Garbage Out. Face Recognition on Flawed Data.” Georgetown Law. Accessed November 3, 2025. https://www.flawedfacedata.com.

Government of Canada. 2023a. 2023–2026 Data Strategy for the Federal Public Service. Accessed November 3, 2025. https://www.canada.ca/en/treasury-board-secretariat/corporate/reports/2023-2026-data-strategy.html.

Government of Canada. 2023b. “TCPS 2: CORE-2022 (Course on Research Ethics).” Panel on Research Ethics. Accessed November 16, 2025. https://www.tcps2core.ca/welcome.

Gray, Mary L., and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt.

Green, David G. 2014. Of Ants and Men: The Unexpected Side Effects of Complexity in Society. Springer.

Haan, Katherine. 2025. “Top Website Statistics For 2025.” Forbes Advisor, October 24. Accessed November 3, 2025. https://www.forbes.com/advisor/business/software/website-statistics/.

Hand, David J. 2018. “Aspects of Data Ethics in a Changing World: Where Are We Now?” Big Data 6 (3): 176–190. Accessed November 3, 2025. http://doi.org/10.1089/big.2018.0083.

Hollars, B. J. 2015. This Is Only a Test. Indiana University Press.

Ige, Adebimpe Bolatito, Naomi Chukwurah, Courage Idemudia, and Victor Ibukun Adebayo. 2024. “Managing Data Lifecycle Effectively: Best Practices for Data Retention and Archival Processes.” International Journal of Engineering Research and Development 20 (7): 453–461. Accessed November 16, 2025. https://www.researchgate.net/publication/388387603_Managing_Data_Lifecycle_Effectively_Best_Practices_for_Data_Retention_and_Archival_Processes.

Instagram Help Center. 2025. “Delete Comments You’ve Made on Instagram Posts.” Accessed November 3, 2025. https://help.instagram.com/631948184906507?helpref=faq_content.

Isaak, Jim, and Mina J. Hanna. 2018. “User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection.” Computer 51 (8): 56–59. Accessed November 3, 2025. http://doi.org/10.1109/MC.2018.3191268.

Jacobson, Jenna, and Irina Gorea. 2022. “Ethics of Using Social Media Data in Research: Users’ Views.” In The SAGE Handbook of Social Media Research Methods, edited by Anabel Quan-Haase and Luke Sloan, 415–433. SAGE Publications Ltd.

Jones, Meg Leta. 2016. Ctrl + Z: The Right to Be Forgotten. New York University Press.

Justesen, Lise, and Ursula Plesner. 2024. “Invisible Digi-Work: Compensating, Connecting, and Cleaning in Digitalized Organizations.” Organization Theory 5 (1): 26317877241235938. Accessed November 3, 2025. http://doi.org/10.1177/26317877241235938.

Kitchin, Rob. 2014. The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. SAGE Publications Ltd. Accessed November 3, 2025. http://doi.org/10.4135/9781473909472.n10.

Kuenn, Arnie. 2014. “Pros & Cons of Date Stamping Your Content.” MarTech, March 10. Accessed November 3, 2025. https://martech.org/pros-cons-date-stamping-content/.

Local Contexts. 2025. “TK Labels.” Accessed August 15, 2024. https://localcontexts.org/labels/traditional-knowledge-labels/.

Marshall, Matilda. 2022. “The Refrigerator as a Problem and Solution: Food Storage Practices as Part of Sustainable Food Culture.” Food and Foodways 30 (4): 261–286. Accessed November 3, 2025. http://doi.org/10.1080/07409710.2022.2124726.

Maschio, Thomas. 2002. “The Refrigerator and American Ideas of ‘Home.’” Anthropology News 43 (5): 8. Accessed November 3, 2025. http://doi.org/10.1111/an.2002.43.5.8.

Messer, Ellen, and Marc J. Cohen. 2024. “Food as a Weapon.” In Oxford Research Encyclopedia of Food Studies, edited by Darra Goldstein, Alex Blanchette, Paul Freedman, et al. Accessed November 3, 2025. http://doi.org/10.1093/acrefore/9780197762530.013.17.

Müller, Vincent C. 2024. “The History of Digital Ethics.” In The Oxford Handbook of Digital Ethics, edited by Carissa Véliz, 3–19. Oxford University Press.

Peavitt, Helen. 2017. Refrigerator: The Story of Cool in the Kitchen. Reaktion Books.

Perrigo, Billy. 2023. “Elon Musk Signs Open Letter Urging AI Labs to Pump the Brakes.” TIME, March 29. Accessed November 3, 2025. https://time.com/6266679/musk-ai-open-letter/.

Pilcher, Jeffrey M. 2016. “Culinary Infrastructure: How Facilities and Technologies Create Value and Meaning Around Food.” Global Food History 2 (2): 105–131. Accessed November 3, 2025. http://doi.org/10.1080/20549547.2016.1214896.

Rees, Jonathan. 2013. Refrigeration Nation: A History of Ice, Appliances, and Enterprise in America. Studies in Industry and Society. Johns Hopkins University Press.

Reglitz, Merten. 2024. Free Internet Access as a Human Right. Cambridge University Press.

Robertson, Tara. 2016. “Digitization: Just Because You Can, Doesn’t Mean You Should.” Tara Robertson Consulting (blog), March 21. Accessed November 4, 2025. https://tararobertson.ca/2016/oob/.

Shapin, Steven. 2014. “‘You Are What You Eat’: Historical Changes in Ideas About Food and Identity.” Historical Research 87 (237): 377–392. Accessed November 18, 2025. http://doi.org/10.1111/1468-2281.12059.

SlaveVoyages. 2025. “Trans-Atlantic Slave Trade Database.” Accessed December 2, 2024. https://www.slavevoyages.org/voyage/trans-atlantic#voyages.

Smith, Astrid J., and Bridget Whearty. 2023. “All the Work You Do Not See: Labor, Digitizers, and the Foundations of Digital Humanities.” In Debates in the Digital Humanities 2023, edited by Matthew K. Gold and Lauren F. Klein, 315–334. University of Minnesota Press.

Smithies, James. 2022. “The Dark Side of DH.” In The Bloomsbury Handbook to the Digital Humanities, edited by James O’Sullivan, 109–119. Bloomsbury Publishing.

SSHRC. 2025. “Guide to Preparing a Data Management Plan.” June 6. Accessed November 3, 2025. https://sshrc-crsh.canada.ca/en/funding/opportunities/resources/guide-preparing-data-management-plan.aspx.

Statista. 2025. “Number of Internet and Social Media Users Worldwide as of October 2025.” Accessed November 3, 2025. https://www.statista.com/statistics/617136/digital-population-worldwide/.

Sutherland, Tonia, Marika Cifor, T. L. Cowan, and Patricia Garcia. 2023. “The Feminist Data Manifest-NO: An Introduction and Four Reflections.” In Debates in the Digital Humanities 2023, edited by Matthew K. Gold and Lauren F. Klein, 119–132. University of Minnesota Press.

The New York Times. 1929. “Ice Box Gas Killed Family of 3.” July 17.

The New York Times. 1960. “Gas Kills 4 in Room: Parents and 2 Children May Be Refrigerator Victims.” October 8. Accessed November 16, 2025. https://www.nytimes.com/1960/10/08/archives/gas-kills-4-in-room-parents-and-2-children-may-be-refrigerator.html.

The New York Times. 1961. “Boy’s Death Spurs Hunt for Old Refrigerators.” June 7. Accessed November 16, 2025. https://timesmachine.nytimes.com/timesmachine/1961/06/07/97240716.html.

The New York Times. 1964. “3 Children Die in Refrigerator.” August 10. Accessed November 16, 2025. https://www.nytimes.com/1964/08/10/archives/3-children-die-in-refrigerator.html.

Thorp, Jer. 2021. Living in Data: A Citizen’s Guide to a Better Information Future. MCD Books.

Viola, Lorella. 2023. The Humanities in the Digital: Beyond Critical Digital Humanities. Springer Nature.

W3C (World Wide Web Consortium). 2024. “W3C Accessibility Standards Overview.” Web Accessibility Initiative (WAI). Last modified February 29. https://www.w3.org/WAI/standards-guidelines/.

W3C (World Wide Web Consortium). 2025. “W3C Accessibility Guidelines (WCAG) 3.0.” W3C Working Draft, September. https://www.w3.org/TR/wcag-3.0/.

Werling, Sarah, and Erik Radio. 2025. “Comparing Critical Cataloging Procedures and Developing Local Policies in the Context of Digital Library Metadata.” Journal of Library Metadata 25 (1): 33–55. Accessed November 4, 2025. http://doi.org/10.1080/19386389.2024.2407710.

Wilkinson, Mark D., Michel Dumontier, IJsbrand Jan Aalbersberg, et al. 2016. “The FAIR Guiding Principles for Scientific Data Management and Stewardship.” Scientific Data 3 (1): 160018. Accessed November 4, 2025. http://doi.org/10.1038/sdata.2016.18.

Wilson, Steven Lloyd. 2022. Social Media as Social Science Data. Methods for Social Inquiry. Cambridge University Press.

Zerilli, John, John Danaher, James Maclaurin, et al. 2021. A Citizen’s Guide to Artificial Intelligence. MIT Press. Accessed November 16, 2025. http://doi.org/10.7551/mitpress/12518.001.0001.