Wikipedia has officially barred its 260,000 volunteer editors from using large language models like ChatGPT to generate encyclopedic content, citing the technology’s tendency to hallucinate — inventing facts, broken links, and citations — in ways that violate the site’s strict standards on verifiability and neutrality.

The policy, passed in a landslide 40-to-2 vote after months of debate, still permits limited AI use for tasks like translating articles or suggesting minor copy edits, provided a human reviews every change and no new information is introduced.

The crackdown follows growing alarm among editors who noticed a surge of AI-written articles with telltale signs like overused phrases, wordy explanations, and sudden style shifts — prompting the formation of a dedicated WikiProject AI Cleanup squad to hunt down and remove bot-generated content.

The move comes at a painful moment for the 25-year-old platform, as recent data shows ChatGPT has already surpassed Wikipedia in monthly visits, with human page views dropping 8% in late 2025 compared to the previous year — a bitter irony given that Wikipedia’s own content likely helped train the AI models now threatening to replace it.

Volunteer editor Ilyas Lebleu, who helped draft the new policy, warned the implications go far beyond Wikipedia, predicting a “domino effect” that could push other online communities to draw their own lines in the sand against AI-generated content.
