By Dr. Michael LaBossiere
One of the many fears about AI is that it will be weaponized by political candidates. In a proactive move, some states have already enacted laws regulating its use. Michigan has a law aimed at the deceptive use of AI that requires a disclaimer when a political ad is “manipulated by technical means and depicts speech or conduct that did not occur.” My adopted state of Florida has a similar law requiring a disclaimer on political ads that use generative AI. While the effect of disclaimers on elections remains to be seen, a study by New York University’s Center on Technology Policy found that research subjects saw candidates who used such disclaimers as “less trustworthy and less appealing.”
The subjects watched fictional political ads, some of which had AI disclaimers, and then rated the fictional candidates on trustworthiness, truthfulness, and how likely they were to vote for them. The study found that the disclaimers had a small but statistically significant negative impact on the perception of these fictional candidates, whether the AI use was deceptive or relatively harmless. The subjects also expressed a preference for disclaimers whenever AI was used in an ad, even when the use was harmless, and this preference held across party lines. As attack ads are a common strategy, it is interesting that the study found such ads backfired when they carried an AI disclaimer: subjects rated the target of the attack as more trustworthy and appealing than the attacker.
If the study results hold for real ads, these findings might deter the use of AI in political ads, especially attack ads. But it is worth noting that the study did not involve ads featuring actual candidates. Out in the wild, voters tend to tolerate lies, or even like them, when the lies support their political beliefs. If a disclaimer is seen as stating or implying that an ad contains untruths, its negative impact would likely be smaller or even nonexistent for certain candidates or messages. This is something that will need to be assessed in the wild.
The findings also suggest a diabolical strategy: create an attack ad, complete with AI disclaimer, that targets the very candidate the creators support. The supporters would need to conceal their connection to the candidate, but this is easy in the current dark-money reality of American politics. They would, of course, need to weigh the risk that the ad might work better as an attack than as a backfire.

Speaking of diabolical, it might be wondered why there are disclaimer laws rather than bans. The Florida law requires a disclaimer when AI is used to “depict a real person performing an action that did not actually occur, and was created with the intent to injure a candidate or to deceive regarding a ballot issue.” A possible example occurred in a 2023 ad by the DeSantis campaign falsely depicting Trump embracing Fauci. It is noteworthy that the wording of the law entails that the intentional use of AI to harm and deceive in political advertising is permitted; it merely requires a disclaimer. That is, an ad is allowed to lie, provided it carries a disclaimer. This might strike many as odd, but it follows established law. As Tom Wheeler, who headed the FCC under Obama, notes, lies are allowed in political ads on federally regulated broadcast channels. As one would suspect, the arguments used to defend allowing lies in political ads rest on the First Amendment. This “right to lie” helps explain why these laws do not ban the use of AI.

It might also be wondered why there is not a more general law requiring a disclaimer for all intentional deception in political ads. A practical reason is that it is currently much easier to prove the use of AI than to prove intentional deception in general. That said, the Florida law specifies both intent and the use of AI to depict something that did not occur, and proving both presents a challenge, especially since people can legally lie in their ads and insist the depiction is of something real.

Cable TV channels, such as CNN, can reject ads. In some cases, stations can reject ads from non-candidate outside groups, such as super PACs. Social media companies, such as X and Facebook, have considerable freedom in what they can reject. Those defending this right of rejection point out the oft-forgotten fact that the First Amendment constrains the actions of the government, not private businesses such as CNN and Facebook. Broadcast TV, as noted above, is an exception. Companies that run political ads will need to develop their own AI policies while also following the relevant laws.
While some might think a complete ban on AI would be best, the AI hype has made this a bad idea. Companies have rushed to include AI in as many products as possible and to rebrand existing technologies as AI. For example, the text of an ad might be written in Microsoft Word with Grammarly installed, and Grammarly pitches itself as providing AI writing assistance. Programs like Adobe Illustrator and Photoshop also have AI features with innocuous uses, such as automatically improving the quality of a real image or creating a background pattern for a print ad. It would obviously be absurd to require a disclaimer for such uses of AI.