Lab 4 – AI Colorization

For my lab project this week, I used the DeOldify AI to colorize a picture of three girls sitting together in a dorm room, eating apples. I thought this image had a lot of personality, even in black and white, and I was thrilled to see what it might look like in color. Here’s the link to the colorized photo in Omeka.

However, after going through the process, I’m more attuned to the ethical questions surrounding AI colorization, and AI involvement in photo manipulation in general. The first is accuracy. Of course, adding color to a photo that never had it in the first place naturally introduces room for error, even when a human is doing the colorizing. A problem arises, though, when there’s no indication that AI may have colorized the photo, which can mislead viewers. (We tried to address this in our project by keeping the watermark in.) In my photo, for example, I know the girls are sitting with apples, which should probably be red; the AI, however, made them purple.

It may seem minor when the miscolored object is an apple, but if it’s someone’s skin, or a country’s flag, or a historical event, like the Golden Gate Bridge example we looked at in class, the stakes are higher. There are certainly questions to ask about how this tool can be implemented thoughtfully, and what safeguards or warnings we should put on it.

How absurd to think that black-and-white photographs from 100 years ago would produce colors in the same way that digital photographs do now. And yet, this is exactly what AI-assisted colorization does. This effort to “bring events back to life” routinely mistakes representations for reality. Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.

Sonja Drimmer, How AI is Hijacking Art History, The Conversation

Sonja Drimmer’s quote above offers another angle on AI colorization, specifically the argument that colorizing photos helps people see the past “as it really was,” or otherwise more fully immerses them in history. In reality, no one can know for sure what the colors looked like. Colorizing photos is a present-centric way of looking at the past. It may be more productive, or at least more accurate, to look at the original photo and examine your reactions to that instead. I’ll end with another quote from Drimmer, directed at people reporting AI’s results in the popular press, which I think also applies to anyone considering AI colorization:

[T]hey should ask whether any of this is for the good of anyone or anything but AI, its most zealous proponents and those who profit from it.

Sonja Drimmer, How AI is Hijacking Art History, The Conversation

Here’s the side-by-side comparison of the two photos:

3 thoughts on “Lab 4 – AI Colorization”

  1. I completely agree with your argument, Eliza! In your colorized photograph, they turned the apples purple instead of red. This could lead the viewer to believe that these fruits are plums or passionfruit instead of apples. In this context, misinterpretation of the fruit type isn’t a huge deal. However, with the examples you listed, such as skin color or a country’s flag, this could have much more serious ramifications for the conclusions a viewer draws from the image.

  2. Oh wow, I wouldn’t have noticed that they were apples in that picture if you hadn’t said anything! I do think that while making apples purple seems harmless, the overall inaccuracy and guesswork required by these generative AIs could have much larger implications with other subjects. I also hadn’t thought about what happens if flags get turned different colors, as that could lead someone who has no context for the image to falsely assume a place or nationality.

  3. Hi Eliza, I completely agree that AI-generated images can mislead people due to their inaccuracies. I wonder how the program managed to make the apples purple, considering AI-generated images generally do well with ordinary, common objects like apples. The misinformation that could arise from the use of AI is definitely concerning and problematic.
