X Combats Misinformation with AI that Generates Automated Community Notes

  • Writer: Juan Allan
  • 6 days ago
  • 2 min read

Elon Musk's X (formerly Twitter) is experimenting with AI to generate “Community Notes”: contextual clarifications added to posts without human intervention



X (formerly Twitter) is quietly testing a system in which AI chatbots automatically generate “Community Notes” (contextual clarifications that combat misinformation in viral posts) without human intervention. Built on Grok-2, xAI's model, the system scans trending content, cross-referencing data from 87 verified sources (from fact-checkers to scientific articles).


According to a report leaked to TechCrunch, the pilot analyzed 18,000 posts during June. In 73% of cases (especially scientific topics and sporting events), the automated notes matched human contributions. But in sensitive areas such as politics or health, the system showed worrying flaws: in a tweet about immigration reforms in Germany, the AI added a note containing outdated data that misrepresented the context.


Elon Musk defended the project: “The scale of misinformation demands lightning-fast solutions. Grok can process 100,000 notes daily vs. 15,000 human ones.” However, anonymous employees reveal that the Trust & Safety team requested to postpone the launch: “Without oversight, an algorithmic error on sensitive topics could cause real harm.”


AI as a digital shield in finance


“Artificial intelligence is creating a turning point in the cryptocurrency industry,” emphasizes Rodrigo Durán Guzmán, Director of Communications at CryptoMKT.


“At CryptoMKT, we are already applying AI models to detect unusual behavior, strengthen our fraud prevention systems, and anticipate risks in real time. All of this translates into greater confidence, speed, and protection for those who do business with us,” explains Durán Guzmán.


The communications director elaborates:


“From virtual assistants to intelligent recommendation engines, today we have tools that make the experience much more personalized, accessible, and educational.” However, Durán Guzmán believes that “this progress also challenges us to establish clear ethical frameworks and develop technologies that are centered on people.”


X's system operates under three rules:


  1. Self-correction: If users vote negatively on an AI note, it is deactivated within 60 seconds.

  2. Transparency: AI-generated notes will carry a distinctive label.

  3. Exclusion: It will not intervene in medical issues or emergencies until 2026.
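The self-correction rule above can be illustrated with a minimal sketch. Note that X has only disclosed the 60-second deactivation window; the vote threshold, the minimum vote count, and all names below (`AINote`, `NEGATIVE_VOTE_THRESHOLD`, etc.) are hypothetical assumptions for illustration, not X's actual implementation.

```python
import time

# Hypothetical parameters -- X has not published the real values.
NEGATIVE_VOTE_THRESHOLD = 0.7  # assumed fraction of "not helpful" votes
MIN_VOTES = 3                  # assumed minimum sample before deactivating


class AINote:
    """Sketch of an AI-generated note that deactivates itself on negative votes."""

    def __init__(self, note_id: str, text: str):
        self.note_id = note_id
        self.text = text
        self.helpful = 0
        self.not_helpful = 0
        self.active = True
        self.deactivated_at = None

    def vote(self, helpful: bool) -> None:
        # Record a user vote, then re-check the deactivation condition.
        if helpful:
            self.helpful += 1
        else:
            self.not_helpful += 1
        self._check_deactivation()

    def _check_deactivation(self) -> None:
        total = self.helpful + self.not_helpful
        if not self.active or total < MIN_VOTES:
            return
        if self.not_helpful / total >= NEGATIVE_VOTE_THRESHOLD:
            self.active = False
            self.deactivated_at = time.time()  # article: happens within 60s


note = AINote("n1", "Context: the cited figures are from an outdated report.")
note.vote(helpful=False)
note.vote(helpful=False)
note.vote(helpful=True)   # 2/3 negative, below the assumed threshold
note.vote(helpful=False)  # 3/4 negative, note is deactivated
print(note.active)        # False
```

In a real system the check would run server-side on an aggregated vote stream rather than per object, but the core idea is the same: the note carries its own kill switch driven by reader feedback.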


“This approach reflects how blockchain and AI converge to multiply opportunities,” concludes Durán Guzmán.


Experts insist that technology must focus on people. Only then can we reduce barriers and build an inclusive AI ecosystem based on trust.
