
Artificial Intelligence and the 2024 US Election: Myths and Realities

Wednesday 25 December 2024 - 14:32
As the 2024 US presidential election unfolded, one question loomed large: Would artificial intelligence (AI) shape the outcome? This was the first election held in the era of widely available AI tools capable of generating synthetic media, including images, audio, and video, that could be used for manipulation. Early in the election year, a robocall circulated in New Hampshire featuring an AI-generated voice resembling President Joe Biden's, prompting the Federal Communications Commission to act swiftly and ban the use of AI-generated voices in robocalls.

This event served as a flashpoint in the ongoing debate over AI's potential to impact elections. Sixteen states passed legislation regulating AI’s use in political campaigns, often requiring clear disclaimers on AI-generated content close to Election Day. The Election Assistance Commission released a comprehensive “AI toolkit” for election officials, offering guidance on how to handle the challenges posed by AI-driven misinformation. Additionally, states set up resources to help voters distinguish between authentic and AI-generated content.

Experts had raised alarms about AI's potential to produce deepfakes: fabricated video and audio that could mislead voters by making political figures appear to say or do things they never did. The concerns extended beyond domestic actors, with experts warning that foreign adversaries could exploit AI to influence public opinion. Despite these fears, the anticipated flood of AI-driven misinformation largely failed to materialize.

When Election Day came and went, misinformation remained a dominant issue, but it was largely based on old tactics. Claims about vote counting, mail-in ballots, and voting machines circulated widely, but the content was mostly created through traditional methods such as text-based posts and images taken out of context. “This was not ‘the AI election,’” said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. “Generative AI turned out not to be necessary to mislead voters.”

Professor Daniel Schiff of Purdue University echoed this sentiment, stating there was no "massive eleventh-hour campaign" that misled voters or influenced polling places. He noted that while misinformation existed, it was unlikely to have been a decisive factor in the presidential race.

AI-generated misinformation that did gain traction often supported existing narratives rather than creating entirely new falsehoods. For example, after false claims were made by former President Donald Trump and his running mate about Haitians allegedly eating pets in Springfield, Ohio, AI-generated images and memes spread across the internet, reinforcing the narrative without necessarily fabricating new information.

At the same time, efforts to curb the negative impact of AI on elections gained momentum. AI-driven risks prompted a collective response from governments, public advocates, and researchers. Schiff observed that the attention given to potential AI harms resulted in effective safeguards, helping to minimize the risks.

Social media platforms took action, too. Meta, which owns Facebook, Instagram, and Threads, required advertisers to disclose the use of AI in political advertisements, while TikTok introduced mechanisms to label AI-generated content. OpenAI, the company behind ChatGPT and DALL-E, banned the use of its tools in political campaigns, further limiting AI’s potential to influence the election.

Despite these safeguards, traditional techniques of misinformation still reigned supreme. Siwei Lyu, a professor of computer science and digital media forensics, explained that traditional methods of spreading falsehoods continued to be more effective than AI-generated media. Research also showed that AI-generated images didn’t achieve the same virality as traditional memes, even though both types could still gain traction.

In the end, prominent figures with large followings, such as Trump, spread misinformation without relying on AI-generated content. His false claims about illegal immigrants voting were amplified through speeches, media interviews, and social media posts, shaping public opinion through entirely conventional channels.

While the role of AI in the 2024 election may not have been as significant as some predicted, the ongoing battle against misinformation remains central. The election highlighted the complex interplay between technology, policy, and public perception, underscoring the need for vigilance in managing the risks posed by new technologies in the political landscape.

