Artificial intelligence has made significant advancements in image restoration and enhancement, with tools like DeOldify offering a seamless way to colorize black-and-white photographs. This week’s exercise demonstrated just how easy it is to use AI for image manipulation, raising important ethical questions about the role of technology in reshaping historical narratives. While these tools can help bring the past to life, they also come with the risk of distorting historical accuracy and altering our perception of history.
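To give a sense of just how little effort colorization takes, the sketch below shows roughly how a scanned black-and-white photograph can be colorized with DeOldify. It assumes the interface used in DeOldify's public demo notebooks and a pretrained "artistic" model already downloaded; the input file name and render_factor value are illustrative placeholders, not the exact settings used in this exercise.

```python
# Minimal sketch of DeOldify colorization, assuming the library is installed
# from its GitHub repository and its pretrained artistic weights are in ./models.
from deoldify import device
from deoldify.device_id import DeviceId

# Select CPU here; switch to DeviceId.GPU0 if a GPU is available.
device.set(device=DeviceId.CPU)

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=True)

# render_factor trades speed for color detail. Note that every setting yields a
# plausible-looking but speculative palette; the colors are inferred, not documented.
# 'source/sayles_bw.jpg' is a hypothetical path standing in for the scanned photo.
result_path = colorizer.plot_transformed_image(
    'source/sayles_bw.jpg', render_factor=35, compare=True
)
print(result_path)
```

A handful of lines is enough to produce a convincing "color photograph," which is precisely why the questions of authenticity and labeling discussed below matter.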
The Illusion of Authenticity
One of the key ethical concerns surrounding AI-driven colorization is the illusion of authenticity it creates. AI-generated colors are not derived from historical evidence but rather from algorithms trained on contemporary datasets. This means that while the colorized images may look realistic, they are, in fact, speculative interpretations of the past. As Ted Chiang argues in "ChatGPT Is a Blurry JPEG of the Web," AI does not create knowledge but distills and reinterprets existing data in ways that may be imprecise or misleading. In the context of historical image colorization, this raises questions about how much creative freedom AI should have in altering historical records and whether these altered images should be presented as accurate representations of the past.
AI and the Risk of Bias
Another significant issue is the potential for bias in AI-generated images. AI models are trained on existing datasets, which often reflect the biases of their creators. This can result in colorized images that reinforce stereotypes or present an inaccurate view of historical subjects. As Sonja Drimmer warns in "How AI Is Hijacking Art History," “AI-generated outputs often privilege certain artistic traditions while marginalizing others, shaping cultural memory in ways that reinforce existing hierarchies.” This means that AI colorization can unintentionally misrepresent aspects such as skin tones, clothing colors, and environmental details, thereby shaping a version of history that aligns more with the biases of modern observers than with historical accuracy.
The Role of Transparency and Ethical Use
Given these ethical concerns, it is crucial for users of AI-generated content to remain transparent about the limitations of these tools. Clearly labeling AI-altered images and providing context about their speculative nature can help prevent the spread of misinformation. Additionally, historians and researchers should critically evaluate AI-generated colorizations, using them as interpretive tools rather than definitive historical representations. As Lauren Tilton notes in "Relating to Historical Sources," “AI should be seen not as an infallible source of truth, but as a tool that requires careful contextualization and critical engagement.” This perspective underscores the need for ethical use and transparency when applying AI to historical research.
Image Comparison

Link to Omeka Item
https://dgah.sites.carleton.edu/digtialobjects/items/show/156
I totally agree that AI distorts authenticity. In a way, AI and historical fiction writers have something in common—they don’t just recreate the past, they shape it based on what they think is real. This goes against the principles of historical research, where accuracy and evidence matter. And even in literature, AI can never match the creativity of a true historical novelist. So, just like you said, we need to think critically about how we use AI and the results it produces.
You bring up important points about the risks of AI colorizing old photos. These images may look real but aren’t always accurate. I like how you highlight the need for transparency and careful use. Clearly labeling AI images can help people see them as guesses rather than true history.
Hi, thank you for your interesting blog post! I completely agree with your point about authenticity. It seems that users often have little understanding of the accuracy of these tools. Many take their outputs for granted without critically examining how they might reinforce existing distortions in society. As scholars and students, we need to be especially cautious when using these tools in humanities research. Personally, I feel they are more suited for entertainment—such as colorizing old family photos for nostalgic enjoyment—rather than for serious academic analysis.
I agree with you that AI images often look realistic and convincing, yet they can be significantly skewed. I also agree that authors who choose to use AI images should label them so that readers and viewers can take that into account in their conclusions. This not only helps reduce false claims, it also reduces biases, especially ones that could be harmful.
Hi Ngelek, I completely agree with you that AI just does not have the capability to capture or remake human culture. Human culture is simply unpredictable, and no dataset can fully model it. I also wanted to note that your original image looks really cool. I never knew Sayles used to look like that, so it is fun to see how Carleton has changed over time.