In a groundbreaking study published in Nature Machine Intelligence, the potential of artificial intelligence (AI) to improve the accuracy and dependability of citations on Wikipedia has come to light. Wikipedia, the renowned online encyclopedia, has long been a hub of information, with thousands of dedicated volunteers working tirelessly to maintain its credibility. However, flawed references, inaccuracies, and links to unreliable sources remain a persistent challenge.
Addressing this issue, researchers from Samaya AI, a London-based company, have introduced SIDE (Source Inference for Document Expansion), an AI-based neural network designed to automate the scrutiny of sources cited on Wikipedia pages, checking that they substantiate the claims within an article and recommending alternative references when necessary.
SIDE relies on a comprehensive database of reputable references commonly promoted and acknowledged by Wikipedia’s editors and moderators. The study revealed that SIDE aligned with the references cited in articles in nearly 50 percent of cases. In other instances, it proposed superior alternatives.
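The article does not describe SIDE's internals, but the core idea, scoring candidate references by how well they support a claim and surfacing the best match, can be sketched with a toy similarity function. Everything below, from the Jaccard word-overlap scorer to the example data, is illustrative and far simpler than the paper's neural retriever:

```python
def similarity(claim, passage):
    """Jaccard overlap between word sets: a crude stand-in for SIDE's neural scorer."""
    a = set(claim.lower().split())
    b = set(passage.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_references(claim, candidates):
    """Sort candidate references by how well their text supports the claim."""
    return sorted(candidates, key=lambda c: similarity(claim, c["text"]), reverse=True)

# Hypothetical example data for demonstration only.
claim = "The Eiffel Tower was completed in 1889."
candidates = [
    {"url": "https://example.org/a", "text": "A history of Parisian cafes."},
    {"url": "https://example.org/b",
     "text": "The Eiffel Tower was completed in 1889 for the World's Fair."},
]

best = rank_references(claim, candidates)[0]
print(best["url"])  # the reference that best supports the claim
```

A real verifier would use learned embeddings rather than word overlap, but the ranking-and-replacement loop is the same shape: score every candidate against the claim, then propose the top result when it beats the existing citation.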
When the AI results were presented to a sample of Wikipedia users, 21 percent expressed satisfaction with the references provided by SIDE, while 10 percent favored the existing sources. A significant portion, 39 percent, had no specific preference.
This development takes on added significance in light of recent cautionary words from India's Supreme Court. The court has warned against relying solely on platforms like Wikipedia for accurate information, noting that while Wikipedia follows a user-generated editing model, it cannot guarantee academic veracity and can, at times, propagate misleading information.
“The principal advantage is removing human bias. An AI could look at multiple interpretations, verify facts and constantly monitor the research or reporting around an entry,” says Alexander, one of the experts in the field. “Another advantage is that the AI works 24 hours a day, seven days a week and will not tire. Humans simply cannot keep up with that sort of productivity.”
But there are challenges associated with AI implementation, including proprietary algorithms that can be complex for users to understand.
“The second disadvantage is found in the current state of AI platforms,” Alexander notes. “Presently, the AI wants, more than anything else, to make you happy. Accuracy is secondary to being useful. This can lead to AIs providing inaccurate information.”
In essence, the emergence of SIDE, an AI-powered neural network, marks a significant stride towards bolstering the accuracy and reliability of citations on Wikipedia. By automating source analysis and proposing alternative references, SIDE offers a way to address inaccuracies and incomplete citations. While AI can enhance efficiency, human editors' oversight remains essential to uphold the credibility and reliability of information on platforms like Wikipedia.
SIDE is designed to scrutinize Wikipedia references, identifying missing or broken links and verifying whether the references support the claims in the articles. Where existing references fall short, it can recommend better alternatives, improving the quality of citations.
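As a rough illustration of the "missing or broken links" part of that workflow, here is a small sketch that scans wiki markup for <ref> tags containing no URL at all and flags them as candidates for replacement. The regex and the sample text are assumptions made for demonstration; they are not SIDE's actual method:

```python
import re

def find_unsourced_refs(wikitext):
    """Return the contents of <ref>...</ref> tags that contain no URL,
    a crude stand-in for detecting missing or unverifiable citations."""
    refs = re.findall(r"<ref[^>/]*>(.*?)</ref>", wikitext, flags=re.S)
    return [r.strip() for r in refs if not re.search(r"https?://", r)]

# Hypothetical article snippet for demonstration only.
text = (
    "Paris is the capital of France."
    "<ref>{{cite web|url=https://example.org/paris}}</ref> "
    "It hosted the 1900 Olympics.<ref>Smith, J. Personal notes.</ref>"
)

print(find_unsourced_refs(text))  # prints ['Smith, J. Personal notes.']
```

A production checker would also fetch each URL to detect dead links and pass the surviving sources to a verifier model; this sketch covers only the first triage step.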
AI can impartially assess multiple interpretations, verify facts, and continuously monitor research and reporting around a topic. Additionally, AI operates 24/7 without fatigue, ensuring a consistent level of productivity.
The concept of an AI-run Wikipedia also raises intriguing possibilities. It could lead to the creation of a "better" version of Wikipedia, potentially built on large language models like the one behind ChatGPT. Such an AI Wikipedia would have the capacity to process massive datasets, draw on every available relevant source, and conduct rigorous fact-checking, reducing human error and bias.
Wikipedia heavily relies on its references to validate the information it provides. Unfortunately, many references are flawed, leading to inaccuracies and incomplete citations. The study published in Nature Machine Intelligence on October 19th proposes that AI can play a crucial role in rectifying this issue and improving the quality of Wikipedia entries.