Microsoft engineer who raised concerns about Copilot image creator pens letter to the FTC

Microsoft engineer Shane Jones raised concerns about the safety of OpenAI’s DALL-E 3 back in January, suggesting the product has security vulnerabilities that make it easy to create violent or sexually explicit images. He also alleged that Microsoft’s legal team blocked his attempts to alert the public to the issue. Now, he has taken his complaint directly to the FTC, as reported by CNBC.

“I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in a letter to FTC Chair Lina Khan. He noted that Microsoft “refused that recommendation” so now he’s asking the company to add disclosures to the product to alert consumers to the alleged danger. Jones also wants the company to change the rating on the app to make sure it’s only for adult audiences. Copilot Designer’s Android app is currently rated “E for Everyone.”

Microsoft continues “to market the product to ‘Anyone. Anywhere. Any Device,’” he wrote, referring to a promotional slogan recently used by company CEO Satya Nadella. Jones penned a separate letter to the company’s board of directors, urging them to begin “an independent review of Microsoft’s responsible AI incident reporting processes.”

A sample image (a banana couch) generated by DALL-E 3 (OpenAI)

This all boils down to whether Microsoft’s implementation of DALL-E 3 will create violent or sexual imagery, despite the guardrails put in place. Jones says it’s all too easy to “trick” the platform into making the grossest stuff imaginable. The engineer and red-teamer says he regularly witnessed the software whip up unsavory images from innocuous prompts. The prompt “pro-choice,” for instance, created images of demons feasting on infants and Darth Vader holding a drill to the head of a baby. The prompt “car accident” generated pictures of sexualized women alongside violent depictions of automobile crashes. Other prompts created images of teens holding assault rifles, kids using drugs and pictures that ran afoul of copyright law.

These aren’t just allegations. CNBC was able to recreate just about every scenario that Jones called out using the standard version of the software. According to Jones, many consumers are encountering these issues, but Microsoft isn’t doing much about it. He alleges that the Copilot team receives more than 1,000 daily product feedback complaints, but that he’s been told there aren’t enough resources available to fully investigate and solve these problems.

“If this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately,” he told CNBC.

OpenAI told Engadget back in January when Jones issued his first complaint that the prompting technique he shared “does not bypass security systems” and that the company has “developed robust image classifiers that steer the model away from generating harmful images.”

A Microsoft spokesperson added that the company has “established robust internal reporting channels to properly investigate and remediate any issues,” going on to say that Jones should “appropriately validate and test his concerns before escalating it publicly.” The company also said that it’s “connecting with this colleague to address any remaining concerns he may have.” However, that was in January, so it appears Jones’ remaining concerns were not addressed to his satisfaction. We reached out to both companies for an updated statement.

This is happening just after Google’s Gemini chatbot encountered its own image generation controversy. The bot was found to be making historically inaccurate images, like Native American Catholic Popes. Google disabled the image generation platform while it continues to work on a fix.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-engineer-who-raised-concerns-about-copilot-image-creator-pens-letter-to-the-ftc-165414095.html
