X will let AI write Community Notes, a move that could change how truth and context are shared on social media. If you’ve ever scrolled through X (formerly Twitter), you may have seen Community Notes: short, user-generated fact-checks or context blurbs attached to popular posts that might be misleading or need extra explanation.
The system, originally called Birdwatch, relies on volunteers to suggest, rate, and improve notes. A note goes public only when it is rated helpful by contributors who have historically disagreed with one another, which keeps any single viewpoint from dominating. Since Elon Musk bought the company, Community Notes has become a key part of X’s content moderation, and its success has led platforms like Meta and YouTube to pilot similar models.
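X has open-sourced the Community Notes scoring algorithm, and at its core is a matrix factorization that separates a note’s baseline helpfulness from viewpoint-driven agreement. The sketch below is a simplified illustration of that bridging idea; the hyperparameters and the toy data are made up, and the production system adds many more safeguards.

```python
# Simplified sketch of bridging-based note scoring, modeled on the
# matrix-factorization approach in X's open-source Community Notes
# documentation. Hyperparameters and thresholds are illustrative.
import numpy as np

def score_notes(ratings, n_users, n_notes, dim=1, reg=0.03, lr=0.05, epochs=500):
    """ratings: list of (user_id, note_id, rating), rating in {0.0, 1.0}
    (1.0 = 'helpful'). Returns per-note intercepts: a note scores high
    only if raters with opposing latent viewpoints both found it helpful,
    because one-sided agreement is absorbed by the factor terms instead."""
    rng = np.random.default_rng(0)
    mu = 0.0                                      # global intercept
    user_b = np.zeros(n_users)                    # user intercepts
    note_b = np.zeros(n_notes)                    # note intercepts ("bridged" helpfulness)
    user_f = rng.normal(0, 0.1, (n_users, dim))   # user viewpoint factors
    note_f = rng.normal(0, 0.1, (n_notes, dim))   # note viewpoint factors

    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n]
            err = r - pred
            # SGD step on squared error with L2 regularization.
            mu += lr * err
            user_b[u] += lr * (err - reg * user_b[u])
            note_b[n] += lr * (err - reg * note_b[n])
            uf = user_f[u].copy()
            user_f[u] += lr * (err * note_f[n] - reg * user_f[u])
            note_f[n] += lr * (err * uf - reg * note_f[n])
    return note_b

# Toy example: note 0 is rated helpful by users across the viewpoint
# spectrum; note 1 only by one "side". Note 0 earns the higher intercept.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0)]
intercepts = score_notes(ratings, n_users=4, n_notes=2)
print(intercepts)  # the first value should exceed the second
```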
How Does the AI-Written Community Notes Pilot Work?
This month, X is launching a pilot program that lets AI-powered bots draft Community Notes. Using large language models (LLMs) such as X’s Grok or OpenAI’s ChatGPT, developers can now build their own AI Note Writers and connect them to X through an API.

At first, AI-generated notes will run in test mode and must be approved before they are shown to the public. For now, they will only be allowed on posts that explicitly request a Community Note, and they will be clearly labeled as AI-generated for transparency. The company sees this as a first step and expects AI to play a bigger role over time.
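To make the workflow concrete, here is a hypothetical sketch of what an AI Note Writer might look like. X has not published these endpoints or field names in this article, so `NOTE_REQUESTS_URL`, `SUBMIT_URL`, and `llm_complete()` are placeholders, not X’s actual API.

```python
# Hypothetical AI Note Writer loop, based on the pilot rules described
# above. Endpoints, field names, and llm_complete() are placeholders.
import requests

NOTE_REQUESTS_URL = "https://api.x.com/.../note_requests"  # placeholder
SUBMIT_URL = "https://api.x.com/.../notes"                 # placeholder

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM such as Grok or ChatGPT."""
    raise NotImplementedError

def run_note_writer(api_token: str):
    headers = {"Authorization": f"Bearer {api_token}"}
    # Per the pilot rules, AI notes may only target posts where users
    # explicitly requested a Community Note.
    requested = requests.get(NOTE_REQUESTS_URL, headers=headers).json()
    for post in requested:
        draft = llm_complete(
            "Write a neutral, well-sourced Community Note for this post, "
            "or reply NO_NOTE_NEEDED if no context is required:\n"
            + post["text"]
        )
        if draft.strip() == "NO_NOTE_NEEDED":
            continue
        # Submitted notes enter test mode: they are screened automatically
        # and must earn human ratings before going public.
        requests.post(SUBMIT_URL, headers=headers,
                      json={"post_id": post["id"], "text": draft,
                            "is_ai_generated": True})  # transparency label
```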
One of the most interesting parts of this pilot is how AI note writers and human contributors interact. An open-source, automated evaluator will screen every AI-generated note for relevance and possible abuse, drawing on signals from past human contributions. Ultimately, human ratings remain the final gate before a note goes live.
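A rough sketch of that screening step might look like the following. The specific checks, thresholds, and the lexical-overlap relevance proxy are assumptions for illustration; the real evaluator would use learned models and X’s own signals.

```python
# Illustrative screening pipeline: an automated evaluator filters
# AI-generated notes before human raters see them. All checks and
# thresholds here are assumptions, not X's published criteria.
from dataclasses import dataclass

@dataclass
class DraftNote:
    post_text: str
    note_text: str
    author_past_helpful_rate: float  # share of this bot's prior notes rated helpful

def relevance_score(post: str, note: str) -> float:
    """Crude lexical-overlap proxy; a real evaluator would use a learned model."""
    post_words = set(post.lower().split())
    note_words = set(note.lower().split())
    return len(post_words & note_words) / max(len(note_words), 1)

def passes_screening(note: DraftNote,
                     min_relevance: float = 0.15,
                     min_track_record: float = 0.3) -> bool:
    if relevance_score(note.post_text, note.note_text) < min_relevance:
        return False  # off-topic or abusive: rejected before humans see it
    if note.author_past_helpful_rate < min_track_record:
        return False  # a bot with a poor rating history gets throttled
    return True       # forward to human raters, the final gate
```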
X’s goal isn’t to let AI decide what’s true; instead, it wants to create a “virtuous loop” in which AI and people help each other improve. Over time, human feedback can teach the AI models to produce more accurate, less biased, and more useful notes.
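One common way to close that loop is to turn community-rated notes into training data for the note-writing model. The recipe below (prompt/completion pairs filtered by rating) is a standard fine-tuning pattern, not X’s published method, and the field names are assumptions.

```python
# Minimal sketch of the "virtuous loop": human ratings on published
# notes become training signal for the note writer. Field names and
# the 0.8 cutoff are illustrative assumptions.
import json

def build_finetune_set(rated_notes, min_helpful_ratio=0.8):
    """rated_notes: iterable of dicts with 'post', 'note', and
    'helpful_ratio' (share of raters who found the note helpful).
    Keeps only notes the community clearly endorsed."""
    return [
        {"prompt": f"Write a Community Note for: {item['post']}",
         "completion": item["note"]}
        for item in rated_notes
        if item["helpful_ratio"] >= min_helpful_ratio
    ]

def save_jsonl(examples, path="finetune.jsonl"):
    # One JSON object per line, a common format for fine-tuning pipelines.
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```

Each cycle, the bots draft notes, humans rate them, and the best-rated notes feed the next round of training.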
Opportunities: Speed, Scale, and Ease of Access
The main argument for letting AI write Community Notes is scale. Human volunteers tend to focus on high-profile posts, leaving many others unchecked. AI, by contrast, can process large volumes of content quickly, surfacing context for posts that might otherwise go unnoticed.

For users, this could mean:
- Faster fact-checking: AI can draft notes in near real time, shrinking the window between a misleading post and its correction.
- Wider coverage: More posts receive context, not just the ones that go viral.
- Consistent quality: AI models can be trained to follow the rules for Community Notes, reducing the risk of off-topic or biased notes.
Risks: Hallucination, Bias, and Trust
Of course, letting AI write Community Notes carries risks. AI models can “hallucinate,” producing information that sounds true but isn’t. If not carefully managed, this could flood the system with unreliable notes, overwhelming human raters and eroding trust.
There’s also the risk of bias: if an AI model optimizes too hard for “helpfulness” or aligns too closely with certain viewpoints, its notes will reflect those biases. Maintaining consistent standards could also prove difficult once third-party LLMs are in the mix.
In conclusion, having watched online moderation evolve over the years, I think X’s decision to let AI write Community Notes is both bold and necessary. The sheer volume of content on social media demands solutions that scale. But human strengths such as critical thinking, understanding, and empathy can’t be replaced.
If X can strike the right balance, using AI to support its contributors rather than replace them, this pilot could become a model for other platforms. The keys will be transparency, strong oversight, and a commitment to continuous improvement.