The other day I posted about Wikipedia and the fact that it has realised that a degree of control over authorship is necessary to ensure things don’t spiral out of control. Hot on the heels of that post I read over on RWW that a new strategy to tackle editorialised content has come from WikiTrust. WikiTrust comes from a research project at UCSC, but the fact that someone realised there was a need for such a product is indicative.
Basically WikiTrust trawls Wikipedia entries, looks at every word, and determines the trustworthiness of that word based on how frequently its contributor has had other material reverted. WikiTrust then highlights any dubious contributor’s entries, word by word.
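To make the idea concrete, here is a minimal sketch of that kind of scoring – not WikiTrust’s actual algorithm, just an illustration where each word inherits a trust score from its contributor, and a contributor’s score falls with the fraction of their edits that were reverted (the names and the neutral prior for new contributors are my assumptions):

```python
def contributor_trust(edits: int, reverted: int) -> float:
    """Trust in [0, 1]: 1.0 means never reverted."""
    if edits == 0:
        return 0.5  # no edit history: neutral prior (an assumption)
    return 1.0 - reverted / edits

def annotate(words, authors, history):
    """Pair each word with its author's trust score."""
    return [(word, contributor_trust(*history[author]))
            for word, author in zip(words, authors)]

# history maps contributor -> (total edits, edits reverted)
history = {"alice": (200, 4), "bob": (50, 30)}
words = ["Software", "as", "a", "service"]
authors = ["alice", "alice", "bob", "bob"]

# bob's words score far lower, so they would be highlighted
print(annotate(words, authors, history))
```

An interface on top of this would then flag every word whose score falls below some threshold – which is essentially what the highlighted excerpt below shows.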
Currently WikiTrust only works on a cached copy of Wikipedia, but it’s an interesting experiment. It is instructive to see an excerpt of WikiTrust’s analysis of the Wikipedia entry on SaaS – no real surprises where the questionable parts (highlighted) lie:
I have to say, in light of my comments yesterday about the need for control in an open community, that WikiTrust is a great solution. We’ve seen rating systems for years now in other offerings – from e-commerce to social media – and this is a way to get rating into Wikipedia. It is, however, currently too gross a measurement. I would envisage that in the future there will be a colour code – rock-solid, incontestable words (if there is such a thing!) would appear in black, with a spectrum of possibilities going down in trustworthiness from there.
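That colour-code idea is easy to sketch: map a trust score in [0, 1] to a colour, black for fully trusted words shading toward red as trust falls. This is a toy of my own, not anything WikiTrust does, and the red channel is an arbitrary choice:

```python
def trust_colour(score: float) -> str:
    """Map a trust score in [0, 1] to a hex colour.

    1.0 -> black (#000000), 0.0 -> pure red (#ff0000),
    with a linear shade between the two.
    """
    score = max(0.0, min(1.0, score))   # clamp out-of-range scores
    red = int((1.0 - score) * 255)      # more red as trust falls
    return f"#{red:02x}0000"

print(trust_colour(1.0))  # "#000000" - rock solid, black
print(trust_colour(0.0))  # "#ff0000" - least trusted
```

A renderer would then wrap each word in its colour, giving readers a continuous visual spectrum rather than a binary highlighted/not-highlighted flag.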
It would also be good to see WikiTrust, or something like it, use other streams of data in its determinations. At the moment it is a measure built entirely on reversion counts; imagine if it could also take data from other sources – say a title count from scientific paper libraries, or from a catalogue of scholarly works…
It’s an area to watch with interest going into the future.