Sina Weibo, a major microblogging service in China, said it is developing a credibility rating system that will simplify content management and reduce the potential threat to social stability posed by false online information.
“We have been doing research on how to establish a credibility system over the past few months… we may cite the way that vendors on e-commerce websites get rated by their clients as a potential model,” CEO Charles Cao said at the China Digital Media Summit 2011 held in Beijing. (Global Times)
It’s no secret that social media in China is under fire. The government still hasn’t figured out what the proper regulatory equilibrium is for this industry. Ironically, we know about all of this via government leaks (also known as rumors).
The latest rumor, whose circulation ended up contributing to a mauling of Sina’s share price in the U.S. yesterday, is that weibo (short messaging/Twitter-style) services will require special licensing. More on the specific rumor, even more VIE chatter, and what happened to China’s tech stocks yesterday in a separate post (hopefully later today).
In other words, the government is not happy with social media and is looking for ways to regulate it further. The classic response to such a challenge from the private sector is to get out ahead of the problem. Seems to me that Charles Cao was trying to do just that with his self-regulation idea.
The only problem is that a credibility rating system makes no sense and would be impossible to manage. It looks fine superficially: one could set something up resembling the ratings systems used by e-commerce companies (Taobao, eBay, Amazon). The difficulty lies in the subjectivity of comments.
When you make a purchase through Taobao, you then have the opportunity to say whether you are happy/unhappy with the product and service, and can include details about what might have gone wrong. While there are subjective elements to this, a lot of the reaction is based on objective criteria (price, overall quality, logistics). In fact, I’d say that the remaining problems, or weaknesses, with such evaluation systems have to do with subjective elements.
With online commentary, it’s pretty much all subjective. Yes, if someone says that an earthquake will occur in a certain place at a certain time, and they turn out to be wrong, it would be easy to give that person some sort of demerit. But the overwhelming majority of messages/posts are not like that. Commentary is fundamentally about sharing opinion, and attempting to discern “truth” is going to be an exercise in failure. (By the way, as one comment on the Global Times article said, rumors grounded in truth probably threaten social stability even more than false ones.)
I don’t think this is going to work. Other measures, like real-name ID systems, have their own challenges, but at least they are based on objective criteria.