AI Image Generator: The Problematic Truth Behind the Trend




In December, our feeds started getting flooded with AI-generated images of our friends, thanks to the Lensa AI app. It quickly became the top photo app in the Apple App Store: one analysis found that more than 20 million people have already downloaded it. But some people feel like the photos aren't portraying them accurately. In a problematic way. Specifically, the AI image generator seems to be relying on racist and sexist stereotypes.
All of this brings up a dark side of AI technology. To get to the bottom of the controversy, our “Skimm This” podcast team sat down with Ina Fried, chief technology correspondent for Axios, to talk about the AI images taking social media by storm.
We’ll use the Lensa AI app as our primary example here. After you upload selfies and pay a $4 service fee, the app generates dozens of artistic images in different styles and settings, with you as the star. Glamorous, right?
But there have been questions about the pictures users are getting back. In addition to concerns that the AI-generated images use material stolen from artists, people have shared that their avatars came back with bigger breasts and slimmer waistlines, according to The New York Times. Some even said the app generated nude photos of them. And some Black users found that the app lightened their skin.
Fried says we shouldn't be surprised that the app generates these kinds of images. “It starts from the moment you click that button that says woman, man, or other,” she said. “It's now making a bunch of gendered assumptions.”
Fried also points out that these assumptions come from human biases. “We have bias in our society, so the data that trains these systems has bias,” she said. “We have to correct for both of those things.”
But now, with this latest trend, that conversation has reached a wider audience. Which Fried says is a good thing. “It's really important now…when we're still talking [about] relatively trivial things like a photo editor, that we understand bias,” she said. “That we really learn to understand why systems are making these decisions and correct for it.”
Call out the bias, said Fried. Because AI technology has so many different applications, from photo apps like Lensa to the bail system. “I think it's really important that we uplift the folks that are saying, ‘Hey, this lightened my skin. This put me in a stereotypical depiction.’ We are at this infancy of AI where we can say, this is okay and this isn't okay,” said Fried. 
If you're curious and want to see your avatar, Fried says you can still use AI technology in a smarter, safer way. “Do some research about the privacy policy” to find out what a company may do with your photos, she said. In some instances, they may be used to train a facial recognition algorithm. “So sometimes it's not doing anything individually bad to you, but it might be doing something collectively that you're not okay with,” she said. 
AI tech is growing rapidly, reaching more areas of our lives than we might realize. And like society, it’s unfortunately flooded with all kinds of biases, which can perpetuate racist and sexist stereotypes. No shade to those who have tried the app already, or who still plan to. But maybe take a closer look at the images the app generates before you share them.
