Facebook may be infamous for helping to usher in the era of "fake news", but it has also tried to find a place for itself in the follow-up: the never-ending battle to combat it. In the latest development on that front, Facebook parent Meta today announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems to work from. Sphere's first application, Meta says, is Wikipedia, where it's being used in a production phase (not on live entries) to automatically scan entries and identify when their citations are strongly or weakly supported.
The research team has open sourced Sphere -- which is currently based on 134 million public web pages.
The idea behind using Sphere for Wikipedia is straightforward: The online encyclopedia has 6.5 million entries and sees, on average, some 17,000 articles added each month. The wiki concept means that adding and editing content is effectively crowdsourced, and while a team of editors is tasked with overseeing it all, that task is daunting and grows by the day -- not just because of Wikipedia's size, but because of its mandate, given how many people, and increasingly educators and other institutions, rely on it as a repository of record.
At the same time, the Wikimedia Foundation, which oversees Wikipedia, has been weighing up new ways of leveraging all that data. Last month, it announced an Enterprise tier and its first two commercial customers, Google and the Internet Archive, which use Wikipedia-based data for their own business-generating interests and will now have wider and more formal service agreements wrapped around that.
To be clear, today's announcement about Meta working with Wikipedia does not reference Wikimedia Enterprise. But more tools that help ensure Wikipedia's content is verified and accurate are exactly what potential customers of the Enterprise service will want to see when considering whether to pay for it.
Meta has confirmed to me that there is no financial arrangement in this deal: Wikipedia is not becoming a paying customer of Meta's, nor vice versa. But Meta notes that to train the Sphere model, it created "a new data set (WAFER) of 4 million Wikipedia citations, significantly more intricate than ever used for this sort of research." And just five days ago, Meta announced that Wikipedia editors were also using a new AI-based language translation tool it had built, so clearly there is a deeper relationship here.
For Meta's part, the company continues to be weighed down by a bad public perception, stemming in part from accusations that it enables misinformation and toxic ideas to gain ground freely -- or, if you're someone who has ended up in "Facebook jail", from the sense that you shared something you believed was fine yet still fell afoul of over-zealous social police. It's a mess for sure, and in that regard launching something like Sphere feels a little like a PR exercise for Meta as much as a potentially useful tool: If it works, it shows there are people in the organization trying to work in good faith.
A few more details on the news today, and what might be coming next:
-- Meta believes that the "white box" knowledge base Sphere represents has significantly more data (and, by implication, more sources to match for verification) than typical "black box" knowledge sources based on findings from, for example, proprietary search engines. "Because Sphere can access far more public information than today’s standard models, it could provide useful information that they cannot," it noted in a blog post. The 134 million documents Meta used to assemble and train Sphere were split into 906 million passages of 100 tokens each.
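That splitting step can be sketched simply. The function below is a hypothetical illustration of chunking a document into fixed-size 100-token passages, not Meta's actual preprocessing code; it uses plain whitespace tokens where Sphere's pipeline presumably uses a real tokenizer:

```python
def split_into_passages(text, passage_len=100):
    """Split a document into passages of at most `passage_len` tokens.

    Illustrative only: whitespace splitting stands in for whatever
    tokenizer the Sphere pipeline actually uses.
    """
    tokens = text.split()
    return [
        " ".join(tokens[i:i + passage_len])
        for i in range(0, len(tokens), passage_len)
    ]

# A 250-token document yields three passages of 100, 100 and 50 tokens.
doc = " ".join(f"tok{i}" for i in range(250))
passages = split_into_passages(doc)
```

At the numbers Meta cites, 906 million passages from 134 million documents works out to roughly 6.8 passages, i.e. several hundred tokens, per web page on average.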
-- By open sourcing this tool, Meta's argument is that it's a more solid foundation for AI training models and other work than any proprietary base. All the same, it concedes the very foundations of the knowledge are potentially shaky, especially in these early days. What if a "truth" is simply not being reported as widely as misinformation is? That's where Meta wants to focus its future efforts in Sphere. "Our next step is to train models to assess the quality of retrieved documents, detect potential contradictions, prioritize more trustworthy sources — and, if no convincing evidence exists, concede that they, like us, can still be stumped," it noted.
-- Along those lines, this raises some interesting questions about what Sphere's hierarchy of truth will be based on compared to those of other knowledge bases. Because it's open sourced, users may be able to tweak those algorithms in ways better suited to their own needs. (For example, a user implementing Sphere to check legal references might assign more credibility to court filings and case-law databases than a user verifying fashion or sports references, who would put a higher emphasis on other sources.)
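One way such per-user tweaking could look in practice is a credibility weight table applied on top of a base retrieval score. This is purely hypothetical -- the domains, weights and function here are invented for illustration and are not part of Sphere's actual interface:

```python
# Hypothetical per-domain credibility weights for a legal-references user;
# the domains and numbers are invented, not taken from Sphere.
LEGAL_WEIGHTS = {
    "courts.example.gov": 1.5,     # court filings weighted up
    "caselaw.example.org": 1.4,    # case-law database weighted up
    "celebrity-gossip.example": 0.5,
}

def weighted_score(base_score, domain, weights, default=1.0):
    """Scale a retrieval score by the user's credibility weight for its source."""
    return base_score * weights.get(domain, default)
```

A fashion or sports deployment would simply swap in a different weight table, leaving the underlying retrieval untouched.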
-- Meta has confirmed that it is not using Sphere, or a version of it, on its own platforms like Facebook, Instagram and Messenger, which have themselves long grappled with misinformation and toxicity from bad actors; it has separate tools for managing and moderating its own content. (We have also asked whether there are other customers in line for Sphere.)
-- Most of all, something like this seems designed for mega scale. Wikipedia has arguably grown past what any team of human editors alone could check for accuracy, so the idea is that Sphere can automatically scan hundreds of thousands of citations simultaneously to spot when a citation doesn't have much support across the wider web: "If a citation seems irrelevant, our model will suggest a more applicable source, even pointing to the specific passage that supports the claim," it noted.
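The shape of that task can be shown with a toy version of the check. This sketch uses simple bag-of-words cosine similarity, nothing like the neural verification model Sphere actually trains, just to illustrate the idea of scoring a claim against candidate passages and flagging weak support:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_supporting_passage(claim, passages, threshold=0.2):
    """Return the passage best supporting `claim`, or None when every
    candidate scores below `threshold` (a weakly supported citation)."""
    best = max(passages, key=lambda p: cosine_sim(claim, p))
    return best if cosine_sim(claim, best) >= threshold else None

claim = "the eiffel tower was completed in 1889"
candidates = [
    "construction of the eiffel tower finished in 1889 in paris",
    "the louvre is a famous museum",
]
supported = best_supporting_passage(claim, candidates)
```

In this toy run the first passage is returned as support; a claim with no lexical overlap against any candidate would come back as None, the analogue of a citation the model can't back up.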
While this is in a production phase at the moment, it also sounds as though editors are, for now, selecting which passages need verifying. "Eventually, our goal is to build a platform to help Wikipedia editors systematically spot citation issues and quickly fix the citation or correct the content of the corresponding article at scale."
Updated with further comment from Meta.