
Here is a link to my Omeka item.
One ethical issue that arises from combining artificial intelligence and image manipulation is that AI isn’t completely accurate, which can lead to false interpretations and misrepresentations of images. For example, in this lab, where we added color to black-and-white images, it’s important to remember that the colorization isn’t 100% accurate. However, many people could see these images and confidently believe they are a faithful portrayal of the original scene. The following quote from Sonja Drimmer summarizes my thoughts well.
“Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.”
Sonja Drimmer, “How AI is Hijacking Art History,” The Conversation
Additionally, we don’t always know when an image is AI-generated or edited. Although the images we produced carried a watermark to let people know they were AI-created, there was also the option to remove it. In recent years, especially on social media, countless AI-generated images have been posted online. Someone knowledgeable about AI image generation can often tell whether a photograph is real or not, but that describes only a small percentage of people. Others who are less familiar with the technology could easily believe these images are real, which can lead to misinformation and false interpretations. This becomes even more extreme in the case of deepfakes, which Drimmer discusses in the following quote.
“When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines for creating deep fakes that falsify politicians’ speech or for using facial recognition for authoritarian surveillance.”
Sonja Drimmer, “How AI is Hijacking Art History,” The Conversation
AI is a tool that can be very beneficial to many people, but there are also drawbacks and potential risks that we must be aware of, both as producers and as consumers.
Hi Reed! I agree with your point that we need to be aware that image manipulation done by AI is not always completely accurate. Thinking about the example from class where someone’s skin tone was lightened during colorization, there are plenty of ethical concerns regarding artificial intelligence. I can imagine viewers of AI-created images believing they are real and spreading misinformation.
I totally agree with the transparency aspect of AI restoration and editing. Given that people can remove the watermark from their AI-colorized images and any other generated images, we could be headed toward a future where you can trust what you see online even less than you can today.
I really agree with you, Reed. I wonder how we can trust the process of recoloring an image when we can easily claim we did it ourselves, when all we actually did was run a program. How does that respect the time and effort professionals have invested in their work when a program does the same thing, but worse? How can we find a balance between AI and professionals so that the two can work in synchrony?
Hi Reed, what you said about AI “fraud” really resonated with me. I remember my grandmother once sent me a photo, amazed that cats could be smart enough to cook. But as a college student, I could immediately tell it was AI-generated because the texture just didn’t look real. From this incident, I learned that as AI becomes more advanced, we need to be cautious about how people with bad intentions might use it to create fake images for profit. In this case, those who aren’t familiar with AI-generated images, especially older people, are particularly vulnerable. It’s never a bad thing to develop new technologies, but it is our duty to prevent them from being used in harmful ways.
I completely agree with your point about it being nearly impossible to know whether AI has been used to create or edit an image. I think transparency about whether AI has been used on an image should be required in order to prevent or minimize the ethical issues that come with its increasing use. For example, in a stats class I took last term, we had to create a model card alongside our final project that described all the steps we took in creating the model, its intended uses, and its ethical considerations. We did this so that the uses, processes, and biases/limitations were all transparent. I believe something similar could be important for images where AI is used, so that the audience is aware of all the information surrounding the image.
I really liked your post, especially the ethical considerations you raised around AI in image manipulation. You bring up a valid point about how AI-generated colorization can lead people to falsely interpret images as historically accurate, which is a huge concern. Your use of Sonja Drimmer’s quote really drives home the idea that adding color doesn’t necessarily recreate reality—it just reimagines it through a modern lens, which could be misleading. I also think you did a great job highlighting the potential for misinformation, especially with the ease of removing watermarks or using deepfakes. It’s definitely something to keep in mind as we become more dependent on AI for content creation.
I agree with your point about how AI can lead to misunderstandings, especially among individuals who may not have as much knowledge about AI. I like the second quote you brought up because it shows how people may underestimate AI’s potential to create believable scenes and images. Of course, AI has many benefits, but sometimes things can go too far, which raises the question of whether there should be more limits placed on AI in general.