How Web 2.0 sites deal with ‘bad’ behavior

In our review of 10 leading Web 2.0 sites (Craigslist, Digg, Facebook, LinkedIn, PlentyOfFish, Prosper, TripAdvisor, Wikipedia, WordPress, and Yelp), we found that their most commonly reported challenge was coping with deceptive and destructive user behavior.

How do Web 2.0 sites deal with ‘bad’ behavior from the very users who make their sites possible? We divided their strategies into two buckets: content moderation and alternative strategies. Content moderation comes in different flavors, ranging from site-driven, where sites perform their own moderation and policy enforcement (think Yelp or Facebook), to community-driven (with Wikipedia as the classic example). In between is a community-assisted model, where community members help flag inappropriate content (as seen on Craigslist and PlentyOfFish).
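The community-assisted model can be sketched in a few lines of code. This is purely illustrative: the `Post` class, the threshold of three flags, and the hide-on-threshold rule are my assumptions, not any site's actual policy.

```python
# Hypothetical community-assisted moderation sketch: a post is hidden
# once enough DISTINCT users flag it. Threshold is an assumed value.
HIDE_THRESHOLD = 3

class Post:
    def __init__(self, post_id):
        self.post_id = post_id
        self.flaggers = set()   # distinct users who flagged this post
        self.hidden = False

    def flag(self, user_id):
        # Using a set means repeat flags from one user don't stack.
        self.flaggers.add(user_id)
        if len(self.flaggers) >= HIDE_THRESHOLD:
            # In a real system this would queue the post for staff review
            # rather than hide it outright.
            self.hidden = True

post = Post("listing-42")
for user in ("u1", "u2", "u1", "u3"):  # u1 flags twice, counts once
    post.flag(user)
```

After these four flag events, only three distinct users have flagged the post, which is exactly enough to hide it under the assumed threshold.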

What are the alternatives to content moderation? One of the most fascinating is the secret algorithm strategy, where an automatic but secretive method is used to promote the most suitable content. Google PageRank is the granddaddy of secret algorithms, but the secret sauce at the heart of sites like Digg, Yelp, and TripAdvisor has attracted juicy controversy. The flip side of dark secrets at the heart of Web 2.0 is a total transparency strategy, as used by the open source WordPress to deal with security threats. Prosper has taken yet another tack, supplementing its user-generated content with outside data to help lenders make better loan decisions. Strategies can be combined, too.
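To make the PageRank example concrete, here is a minimal power-iteration sketch of the published PageRank idea (the damping factor of 0.85 comes from the original paper; the three-page graph is my own toy example, and real ranking systems layer many undisclosed signals on top of anything this simple):

```python
# Minimal PageRank power iteration. links maps each page to the pages
# it links to; the result is a probability distribution over pages.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a baseline share from the "random jump" term.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page splits its rank evenly among its outlinks.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# C collects links from both A and B, so it ends up ranked highest.
```

The appeal of keeping such an algorithm secret is clear from even this toy version: once the scoring rule is public, anyone can manufacture links (or votes, or reviews) to game it.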

I’m so intrigued by the secret algorithm strategy that I’m considering making it the topic of my next Web 2.0 paper. In the meantime, this study is under review at IEEE Technology & Society; details and the paper will be posted later.