Ad-Sharing Revenue on X Fuels Misleading Israel-Hamas War Narratives

On X, formerly Twitter, programmatic advertisements for dozens of major brands, governments, educational institutions and nonprofits are being displayed in the feeds directly below viral posts advancing false or egregiously misleading claims about the Israel-Hamas war, a NewsGuard analysis has found. Under the terms of a new advertising revenue-sharing program that X introduced for its "creators," a portion of the advertising income generated by these organizations would apparently be shared with these superspreaders of misinformation.

From November 13 to November 22, 2023, NewsGuard analysts reviewed programmatic ads that appeared in the feeds below 30 viral tweets containing false or egregiously misleading information about the war. Programmatic ads are placed by algorithms that target digital ads to online readers. Brands typically do not select where programmatic ads run and indeed are often unaware of where their ads appear.

These 30 viral tweets were posted by 10 of X's worst purveyors of Israel-Hamas war-related misinformation and cumulatively reached an audience of more than 92 million viewers, according to X data. On average, each tweet was seen by 3 million people. The accounts NewsGuard analyzed had previously been identified as repeat spreaders of misinformation about the conflict.

The 30 tweets advanced some of the most egregious false or misleading claims about the war. These include that the October 7 Hamas attack against Israel was a "false flag" and that CNN staged footage of an October 2023 rocket attack on a news crew in Israel. Half of the tweets (15) were flagged with a fact-check by Community Notes, X's crowdsourced fact-checking feature, which under X policy would have made them ineligible for advertising revenue. However, the other half did not feature a Community Note. Ads for major brands such as Pizza Hut, Airbnb, Microsoft, Paramount and Oracle were found by NewsGuard on posts both with and without a Community Note, as explained in more detail below.

[Image: Moor Studio/Getty]

In total, NewsGuard analysts cumulatively identified 200 ads from 86 major brands, nonprofits, educational institutions and governments that appeared in the feeds below 24 of the 30 tweets containing false or egregiously misleading claims about the Israel-Hamas war. The other six tweets did not feature advertisements. (On X, ads appear as "tweets" that are shown to users in feeds.) The ads NewsGuard found were served to analysts browsing the internet using their own X accounts in five countries: the U.S., U.K., Germany, France and Italy.

NewsGuard's report comes after Apple, Disney and IBM pulled their ads off of X after owner Elon Musk spoke approvingly of an antisemitic post on the platform. In response to NewsGuard's emailed questions about NewsGuard's findings and the ads appearing in the feeds below tweets advancing misinformation, X's press office sent an automated response: "Busy now, please check back later."

On November 21, after NewsGuard reached out to X about this report, Musk tweeted: "X Corp will be donating all revenue from advertising & subscriptions associated with the war in Gaza to hospitals in Israel and the Red Cross/Crescent in Gaza." It is not clear what Musk meant by "revenue from advertising & subscriptions associated with the war in Gaza," nor did he address whether many or all of these account holders would continue to share in X's revenues for spreading misinformation. In response to NewsGuard's inquiry on November 22 asking for clarification about Musk's announcement, X's press office again replied with an automated response.

Monetizing 'Burnt Baby' False-AI Claims

On October 29, X owner Elon Musk said that commentators whose posts had been flagged by Community Notes would not be eligible to make money from ad dollars on that particular post. Nonetheless, NewsGuard found ads for 70 unique major organizations on 14 of the 15 tweets advancing war-related misinformation that did not feature a Community Note fact-check. This means that some of X's worst purveyors of war-related misinformation would likely have been entitled to ad dollars from major organizations.

For example, NewsGuard found 22 ads for major organizations on three tweets posted by Jackson Hinkle advancing false claims about the war that did not feature a Community Note. Hinkle is a commentator who describes himself as an "American Conservative Marxist-Leninist" and has spread dozens of false and misleading claims about the war, NewsGuard has determined.

Ads for Oracle, Pizza Hut and Anker, among others, were shown to NewsGuard analysts below a tweet posted by Hinkle in October. His tweet advanced the false claim that Daily Wire podcast host Ben Shapiro used artificial intelligence to generate an image of a child killed by Hamas. The tweet had received 22 million views as of November 20. Again, it is in the nature of programmatic advertising that brands are unaware of where their ads are appearing and whom their ads are supporting.

"Holy sh*t," Hinkle said. "The image that Ben Shapiro tried to pass off as a 'burnt baby corpse' was an AI-generated fake image!"

In fact, there is no evidence that the photo—which was first shared by the Israeli government—was generated using AI. Hany Farid, a professor at the University of California, Berkeley's School of Information, told technology news site 404 Media that the image of the baby "does not show any signs it was created by AI." He said, "The structural consistencies, the accurate shadows, the lack of artifacts we tend to see in AI—that leads me to believe it's not even partially AI generated."

Similarly, ads for Airbnb, the Virgin Group, Taiwanese technology company Asus, conservative media company The Dispatch and multibillion-dollar Swedish hygiene company Essity appeared alongside a post by conservative commentator Matt Wallace that baselessly asserted Hamas' attack on Israel was an "inside job." The post did not feature a Community Note and had received 1.2 million views as of November 21.

Governments and Nonprofits Aren't Exempt

Of the 200 ads NewsGuard identified on superspreader posts featuring war-related misinformation, 26 were for government-affiliated organizations, including government agencies, state-owned enterprises and state-run foreign media outlets.

For example, NewsGuard found an ad for the FBI on a November 9 post from Jackson Hinkle that claimed a video showed an Israeli military helicopter firing on its own citizens. The post did not contain a Community Note and had been viewed more than 1.7 million times as of November 20.

[Image: Jim Watson/AFP/Getty]

"ISRAEL ADMITS they fired on their OWN CIVILIANS with APACHE ATTACK HELICOPTERS!" Hinkle wrote in a post linking to the video. In fact, the video showed Israeli Air Force planes carrying out attacks against Hamas over the Gaza Strip, according to GeoConfirmed, a group of open-source intelligence investigators.

The ad for the FBI that appeared below Hinkle's tweet promoted the FBI's work to stop hate crimes. "Hate crimes not only harm victims but also strike fear into their communities," the ad said. "The #FBI is committed to combating hate crimes and seeking justice for victims." NewsGuard sent two email messages and one contact form message to the FBI on November 20 inquiring about whether the ad was placed on purpose. In response, an FBI spokesperson declined to comment and referred NewsGuard to X for questions about the placement of ads on the platform.

Other ads from government entities included an ad for the state-owned Abu Dhabi National Oil Company under a tweet baselessly claiming to show a Palestinian blogger faking injuries from the war; an ad from Taiwan's Ministry of Culture under a post falsely claiming that a video showed Israel firing at its own civilians; and an ad for China Global Television Network (CGTN), a Chinese government-owned broadcaster, under a post from Matt Wallace suggesting that 9/11 was a "false flag attack from [the] Israeli government." None of the posts featured Community Notes as of November 21.

Jack Brewster is NewsGuard's enterprise editor and Coalter Palmer and Nikita Vashisth are staff analysts. Contributing reporting by Natalie Adams, John Gregory, Sam Howard, McKenzie Sadeghi and Roberta Schmid.

Note: For a detailed description of the methodology NewsGuard used for this report, please visit: https://www.newsguardtech.com/november-2023-misinformation-monitor-methodology/
