The social media platform X, formerly known as Twitter, has blocked searches for pop star Taylor Swift after sexually explicit images of her generated by artificial intelligence (AI) spread widely on social media, an executive confirmed Sunday.
As of Sunday night, searches for “Taylor Swift” on the platform returned the error message “Something went wrong. Try reloading.”
Joe Benarroch, X’s head of business operations, confirmed the move to The Hill.
In a statement Sunday, Benarroch described the measure as “a temporary action and done with an abundance of caution as we prioritize safety on this issue.”
Fake sexually explicit images of Swift circulated online last week, prompting a response from her fans, who flooded the site with positive photos of the singer under the hashtag #ProtectTaylorSwift.
Mason Allen, head of growth at Reality Defender, a firm that detects deepfakes, told The Associated Press that the company tracked multiple nonconsensual explicit images of Swift, which had reached “millions and millions” of people by the time some were taken down.
According to the AP, most of the images were found on X, though some also appeared on Meta-owned Facebook and other social media sites.
The White House last week described the circulation of AI-generated sexually explicit images of Swift, commonly known as deepfakes, as “alarming.”
“We believe that social media companies have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people,” press secretary Karine Jean-Pierre told reporters last Friday.
“While social media companies make their own independent decisions about content management, we believe they have a responsibility to do so.”
The incident spurred discussion about the risks of AI-generated content and renewed calls from federal lawmakers for social media companies to strengthen enforcement of rules against deepfakes.
After the incident, Microsoft CEO Satya Nadella called for more “guardrails” on artificial intelligence.
President Biden signed a sweeping executive order on artificial intelligence in October, focused primarily on managing the safety and privacy risks associated with the emerging technology.