For this lab assignment, I colorized an image of a Carleton dorm room. Below are the original and the colorized copies. The images can also be accessed through these links: Original Image & Colorized Image.


At first glance, the image on the right looks like it was colorized well and the model worked perfectly. However, a closer look reveals some noticeable errors. For one thing, the poster on the wall has a red-purple gradient that does not fit the style of the period when this photo was taken. Furthermore, the apples on the plate are black, which is not realistic. This leads me to my main ethical issue with the combination of artificial intelligence and image manipulation.
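To give a sense of where errors like these can come from, below is a minimal sketch of one common way automatic colorization is implemented: an OpenCV pipeline using the pretrained Zhang et al. Caffe colorization model. This is an assumption for illustration; the tool I actually used for the lab may work differently. The key point is that the model predicts the two color (ab) channels of the Lab color space from the lightness channel alone, so when an object's true color is ambiguous, the prediction can settle on implausible values, much like the black apples.

```python
import cv2
import numpy as np

# Pretrained Zhang et al. colorization model files (must be downloaded separately).
PROTO = "colorization_deploy_v2.prototxt"
MODEL = "colorization_release_v2.caffemodel"
POINTS = "pts_in_hull.npy"

# "dorm_room.jpg" is a hypothetical filename standing in for the lab image.
net = cv2.dnn.readNetFromCaffe(PROTO, MODEL)
pts = np.load(POINTS)

# Load the 313 ab-channel cluster centers into the network.
class8 = net.getLayerId("class8_ab")
conv8 = net.getLayerId("conv8_313_rh")
pts = pts.transpose().reshape(2, 313, 1, 1)
net.getLayer(class8).blobs = [pts.astype(np.float32)]
net.getLayer(conv8).blobs = [np.full([1, 313], 2.606, dtype=np.float32)]

# Read the photo and convert it to the Lab color space.
image = cv2.imread("dorm_room.jpg")
scaled = image.astype(np.float32) / 255.0
lab = cv2.cvtColor(scaled, cv2.COLOR_BGR2LAB)

# The network expects a 224x224, mean-centered L (lightness) channel as input.
resized = cv2.resize(lab, (224, 224))
L = cv2.split(resized)[0] - 50

# Predict the ab (color) channels and resize them back to the original size.
net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0, :, :, :].transpose((1, 2, 0))
ab = cv2.resize(ab, (image.shape[1], image.shape[0]))

# Recombine the original L channel with the predicted ab channels and save.
L_full = cv2.split(lab)[0]
colorized = np.concatenate((L_full[:, :, np.newaxis], ab), axis=2)
colorized = cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR)
colorized = np.clip(colorized, 0, 1)
cv2.imwrite("dorm_room_colorized.jpg", (255 * colorized).astype(np.uint8))
```

Because color is predicted only from lightness and learned statistics, nothing forces the output to be historically or contextually accurate, which connects directly to the ethical concerns below.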
First, as we have seen above, AI-driven image manipulation is not completely accurate. This level of inaccuracy can lead viewers to misrepresent the subject or make false claims about it. The technology can also be used maliciously, with a user deliberately employing it for misrepresentation and defamation. Last year, I took a machine learning course at Carleton, and the most important lesson I learned was to ensure that an algorithm has no biases and does not misrepresent its data. This ensures that conclusions drawn from artificial intelligence and machine learning models are accurate and do not misrepresent communities. Today, there are many examples of machine learning and AI models misrepresenting minority groups, leading people to draw conclusions that are not factual. Thus, using artificial intelligence for image manipulation can have some benefits; however, it raises ethical concerns because the tool can misrepresent the original image and lead to inaccurate conclusions.
Below are two quotes that resonate with the ethical concerns of image manipulation with AI.
When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines for creating deep fakes that falsify politicians’ speech or for using facial recognition for authoritarian surveillance.
Sonja Drimmer, How AI is Hijacking Art History, The Conversation
But my lingering suspicion emerges from an awareness of how public support for the sciences and disparagement of the humanities means that, in the endeavor to gain funding and acceptance, the humanities will lose what makes them vital. The field’s sensitivity to historical particularity and cultural difference makes the application of the same code to widely diverse artifacts utterly illogical.
Sonja Drimmer, How AI is Hijacking Art History, The Conversation
These two quotes capture the concerns raised by combining AI with image manipulation. The first shows that AI can be used maliciously to misrepresent others. The second demonstrates that using AI to colorize images can misrepresent the image and is ill-suited to the humanities, which depend on sensitivity to historical particularity and cultural difference.
Hi, I agree with your point that AI colorization can introduce inaccuracies that misrepresent historical images. The black apples and the unrealistic poster gradient really show how AI struggles with context and period accuracy. I also like your point that misrepresentation in AI isn’t just a technical issue but an ethical one.
Since you have a background in machine learning, do you have any ideas about why the apples were colored black? Is this mistake too specific to explain, or is there something about machine learning models that makes them gravitate toward mistakes like this one?