Meta exempted some of its top advertisers from its usual content moderation process, shielding its multibillion-dollar business amid internal concerns that the company’s systems mistakenly penalised top brands.
According to internal documents from 2023 seen by the Financial Times, the Facebook and Instagram owner introduced a series of “guardrails” that “protect high spenders”.
The previously unreported memos said that Meta would “suppress detections” based on how much an advertiser spent on the platform, and that some top advertisers would instead be reviewed by humans.
One document suggested that a group called “P95 spenders” — those spending more than $1,500 per day — were “exempt from advertising restrictions” but would still “eventually be sent to manual human review”.
The memos predate this week’s announcement by chief executive Mark Zuckerberg that Meta was ending its third-party fact-checking programme and dialling down its automated content moderation, as it prepares for Donald Trump’s return as president.
The 2023 documents show Meta had found that its automated systems had incorrectly flagged some top-spending accounts for breaches of the company’s rules.
The company told the FT that higher-spending accounts were disproportionately subject to erroneous notifications of possible breaches. It did not respond to questions about whether any of the measures in the documents were temporary or ongoing.
Ryan Daniels, a Meta spokesperson, said the FT’s reporting was “simply inaccurate” and “based on a cherry-picked reading of documents that clearly state this effort was intended to address something we’ve been very public about: preventing mistakes in enforcement”.
Advertising makes up the majority of Meta’s annual revenues, which were nearly $135bn in 2023.
The tech giant typically screens adverts using a combination of artificial intelligence and human moderators to stop violations of its standards, in an effort to remove material such as scams or harmful content.
In a document titled “high spender mistake prevention”, Meta said it had seven guardrails protecting business accounts that bring in more than $1,200 in revenue over a 56-day period, as well as individual users who spend more than $960 on advertising over the same period.
It wrote that the guardrails help the company “decide if a detection should proceed to an enforcement” and were designed to “suppress detections . . . based on characteristics, such as level of advertising spend”.
It gave as an example a business that “is in the top 5 per cent of revenue”.
Meta told the FT it uses “higher spend” as a guardrail because this often means the company’s adverts will have greater reach, and so the consequences could be graver if a company or its adverts are mistakenly removed.
The company also acknowledged that, when it was concerned about the accuracy of its automated systems, it had prevented those systems from disabling some high-spending accounts and instead sent them for human review.
However, it said that all businesses were still subject to the same advertising standards and no advertiser was exempt from its rules.
In the “high spender mistake prevention” memo, the company rated different categories of guardrails as “low”, “medium” or “high” in terms of whether they were “defensible”.
Meta staff designated the practice of having spend-related guardrails as having “low” defensibility.
Other guardrails, such as using knowledge of the trustworthiness of the business to help it decide whether a detection of a policy violation should be automatically acted on, were labelled as “high” defensibility.
Meta said that the term “defensible” referred to the difficulty of explaining the notion of guardrails to stakeholders, should the guardrails be misinterpreted.
The 2023 documents do not name the high spenders that fell within the company’s guardrails, but the spending thresholds suggest thousands of advertisers may have been considered exempt from the typical moderation process.
Estimates from market intelligence firm Sensor Tower suggest that the top 10 US spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.
Meta has achieved record revenues over recent quarters and its stock is trading at an all-time high, following the company’s recovery from a post-pandemic slump in the global advertising market.
But Zuckerberg has warned of threats to its business, from the rise of AI to ByteDance-owned rival TikTok, which has grown in popularity among younger users.
A person familiar with the documents argued the company was “prioritising revenue and profits over user integrity and health”, adding that concerns had been raised internally about circumventing the standard moderation process.
Zuckerberg said on Tuesday that the complexity of Meta’s content moderation system had introduced “too many mistakes and too much censorship”.
His comments came after Trump last year accused Meta of censoring conservative speech and suggested that if the company interfered in the 2024 election, Zuckerberg would “spend the rest of his life in prison”.
The internal documents also show that Meta considered pursuing other exemptions for certain top-spending advertisers.
In one memo, Meta staffers proposed “more aggressively offering protections” from over-moderation to what they dubbed “platinum and gold spenders”, which together bring in more than half of the company’s advertising revenue.
“False positive integrity enforcement against High Value Advertisers costs Meta revenue [and] erodes our credibility,” read the memo.
It suggested an option of a blanket exemption for these advertisers from certain enforcements, except in “very rare cases”.
The memo shows that staff concluded that platinum and gold advertisers were “not a suitable segment” for a broad exemption because, according to the company’s tests, an estimated 73 per cent of enforcements against them were justified.
The internal documents also show that Meta had uncovered multiple AI-generated accounts within its big-spender categories.
Meta has previously fallen under scrutiny for carving out exemptions for important users. In 2021, Facebook whistleblower Frances Haugen leaked documents showing that the company had an internal system called “cross-check”, designed to review content from politicians, celebrities and journalists to ensure posts were not mistakenly removed.
According to the Haugen documents, this was sometimes used to shield some users from enforcement, even if they broke Facebook’s rules, a practice known as “whitelisting”.
Meta’s oversight board — an independent “Supreme Court”-style body funded by the company to oversee its most difficult moderation decisions — found that the cross-check system had left dangerous content online. It demanded an overhaul of the system, which Meta has since undertaken.