Starting next week, Meta will no longer place an easy-to-see label on Facebook images that were edited using AI tools, making it much harder to tell whether an image appears in its original state or has been doctored. To be clear, the company will still add a note to AI-edited images, but you'll have to tap the three-dot menu at the upper right corner of a Facebook post and then scroll down to find "AI Info" among the many other options. Only then will you see the note saying that the content in the post may have been modified with AI.
Images generated using AI tools, however, will still carry an "AI Info" label visible right on the post. Clicking on it will show a note explaining whether the content was labeled because of industry-shared signals or because someone self-disclosed that it was AI-generated. Meta started applying AI-generated content labels to a broader range of videos, audio and images earlier this year. But after widespread complaints from photographers that the company was mistakenly flagging even non-AI-generated content, Meta changed the "Made with AI" label wording to "AI Info" by July.
The social network said it worked with companies across the industry to improve its labeling process and that it's making these changes to "better reflect the extent of AI used in content." Still, doctored images are widely used these days to spread misinformation, and this development could make it trickier to identify false news, which typically proliferates during election season.