FTC Reports on Social Media Bots in Online Ads


The Federal Trade Commission has released a report on the use of social media bots--algorithmically driven computer software--in online advertising and finds them thriving out in the open, and cheap, easy and effective to buy, sell and use. 

The report came at the direction of the Senate Appropriations Committee, which wanted input on deceptive advertising. While the majority report passes no judgment, Democratic commissioner Rohit Chopra, in a separate statement, does, concluding there are major issues that Big Tech won't solve and that the FTC needs to tackle. 

Related: Twitter's Dorsey Says They Will Try to Label Bot-Driven Communications

The report, which without footnotes is less than four pages, is essentially a brief survey of the landscape rather than a critique of how bots are painting it. It points out the good and bad uses they can be put to, how they range from simple to sophisticated, and that they can be tough to detect despite efforts to combat them. The commission also points to past enforcement in the area, including bots that mimic real people. 

Some of the bad ad-related uses include artificially boosting traffic to ads, delivering spam or spreading fake online product reviews. The FTC used more than a page of the report to detail an October 2019 enforcement action against a company, Devumi, that sold "fake followers, subscribers, views, and likes." 

The takeaway from the report is that social media bots are, well, "simply bots that run on social media platforms, where they are common and have a wide variety of uses, just as with bots operating elsewhere," and that if a bot is being used unfairly or deceptively, the FTC can act, and has acted, using its unfair or deceptive practices authority. It did concede Devumi was the first such action in the social media bot space. 

It also points out that "Major social media companies have made commitments – codified in the EU Code of Practice on Disinformation – to better protect their platforms and networks from manipulation, including the misuse of automated bots." 

While voting to approve the report--the vote was 4-0-1, the 1 being Democrat Rebecca Kelly Slaughter, who did not participate--Democrat Rohit Chopra issued his own report of sorts that did critique social media bots and made that clear from the first sentence: "The viral dissemination of disinformation on social media platforms poses serious harms to society." 

While the report essentially made no value judgments beyond the enforcement action against a bad actor, Chopra said Big Tech platforms can't be trusted to police themselves. "While the Commission’s report cites platforms’ efforts to remove bots and fake accounts, it is crucial to recognize that the platforms’ core incentives do not align with this goal," he wrote. 

He also talked about bots inflating the price of advertising, citing Association of National Advertisers data showing that 35% of online ad impressions are fraudulent, fraud that could have cost advertisers almost $6 billion in 2019 alone. 

While the report confines itself to detailing that single enforcement action, Chopra said he thinks the commission can and should be challenging other practices, including fraudulent ad metrics. "Major advertisers routinely raise these concerns, and the Media Rating Council is reportedly reviewing Facebook’s certification, including its practices with respect to fake accounts," he wrote. "The FTC’s authority is limited to 'commerce' and generally does not encompass political speech," he also conceded. "However, individuals, firms, and corporations operating for profit are covered by the FTC Act’s prohibition on deception. In other words, if a for-profit enterprise offers surreptitious manipulation services to denigrate a commercial competitor or political opponent, it may be subject to the FTC’s jurisdiction." 

Related: House Takes Deep Dive into Online Fakes

The majority report lacks a conclusion, ending with the statement: "The Commission’s staff will continue its monitoring of enforcement opportunities in matters involving advertising on social media as well as the commercial activity of bots on those platforms." 

Chopra's conclusion: "Congress is right to be alarmed by the explosion of disinformation online driven by bots and fake accounts.... The FTC’s authority to prohibit deceptive acts and practices is one way to tackle the harms posed to our economy, democracy, and national security. But, of course, policymakers around the world must do more." 

John Eggerton

Contributing editor John Eggerton has been an editor and/or writer on media regulation, legislation and policy for over four decades, including covering the FCC, FTC, Congress, the major media trade associations, and the federal courts. In addition to Multichannel News and Broadcasting + Cable, his work has appeared in Radio World, TV Technology, TV Fax, This Week in Consumer Electronics, Variety and the Encyclopedia Britannica.