After completing the Omeka project, I was both amazed by AI’s capabilities and increasingly wary of such software. The main issue, in my opinion, is authenticity. We often assume that these programs are simply “restoring” an original image or artwork. In reality, they generate something new based on the model’s interpretation rather than staying faithful to the historical record. One of our readings articulates this concern:
Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.
Sonja Drimmer, How AI is Hijacking Art History, The Conversation
Colorizing images should be treated as part of artistic creation or historical research, not as simple restoration. Take art history as an example: many ancient artworks, particularly sculptures and architectural elements, have lost their original colors over time, and recoloring them is a complex challenge. AI will never be able to recover the actual historical colors, and this follows from how AI works: it is trained on existing datasets and then predicts based on the patterns it finds there. In other words, AI infers the colors of a specific artwork from other cultures and even modern artistic styles, while historical artists often held aesthetic values very different from what we might assume today.
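The limitation above can be made concrete with a toy sketch. The "model" below is a deliberately simplified stand-in, not any real colorization system, and the training data is entirely hypothetical: it predicts the most common color it has seen for a given object type, so it can only ever output colors that already appear in its training set.

```python
from collections import Counter

# Hypothetical training set of (object type, color) pairs, drawn from
# modern photographs and other cultures -- not from the original artwork.
training_data = [
    ("marble statue", "white"),
    ("marble statue", "gray"),
    ("marble statue", "white"),
    ("bronze statue", "green"),
]

def predict_color(obj_type):
    """Predict the most frequent color for obj_type in the training data."""
    colors = [color for kind, color in training_data if kind == obj_type]
    if not colors:
        return None  # the model has nothing to say about unseen object types
    return Counter(colors).most_common(1)[0][0]

# The model "restores" a marble statue to white, because that is what its
# data contains -- even if the historical original was bright red and blue.
print(predict_color("marble statue"))  # prints "white"
```

Real colorization models are vastly more sophisticated, but the underlying constraint is the same: the output reflects the distribution of the training data, not the lost historical fact.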
This misinterpretation can cause problems for art historians. For instance, last semester I learned about a roughly 2,000-year-old statue of Augustus that had lost its colors over time. Based on modern expectations, we might assume the statue was plain or had a dark, muted tone. However, X-ray analysis revealed that it was originally painted in bright red and blue. This is something AI would never be able to predict.
One of our readings describes AI’s limitations in this context:
These recreations don’t teach us anything we didn’t know about the artists and their methods.
Sonja Drimmer, How AI is Hijacking Art History, The Conversation
So, do we really need AI technology to assist humanities research? And if so, how should it be integrated into our work? This is an important discussion to have. The strength of AI lies in making predictions from existing knowledge, but unlike humans, it cannot engage in deep reasoning or supply meaningful references for its claims. Because of this, I argue that AI should not be the primary tool for drawing conclusions in humanities research. Instead, these technologies should function as helpers, while the actual interpretation and decision-making remain in the hands of human scholars.


Omeka Link: https://dgah.sites.carleton.edu/digtialobjects/admin/items/show/147