Monday, September 27, 2010

Crowdsourcing and Sentiment Analysis

Quick Summary: I am taking a class called "Extracting Social Meaning and Sentiment" at Stanford this quarter. For the final project, I will explore how to best combine crowdsourcing and automated sentiment analysis. This should yield higher accuracy, but it will come at a monetary cost. If you would like us to optimize the sentiment analysis results for a particular keyword, please donate money (~$20) by clicking on the following button:

We'll then contact you to ask what keyword you're interested in optimizing. The money will be used to pay the crowdsourced workers. This offer stands until December 2010.

Background: There are two schools of thought in the sentiment analysis community:
1. Crowdsourcing. The basic idea behind crowdsourcing is that you can pay a group of people to classify a tweet, through a service like Amazon Mechanical Turk or CrowdFlower. There are many workers on these services, so it can take just a few hours to classify several thousand tweets. I've found that spending about 2 cents per tweet, with the correct settings and tuning, can yield higher accuracy than any automated solution (described next).
2. Automated sentiment analysis. Many computer scientists and linguists are developing algorithms to automatically detect sentiment, without human intervention. We have a whole list of these solutions in our Resources spreadsheet. This is also how Twitter Sentiment works today. The problem with automated sentiment analysis is that it's far from perfect. My gut feeling tells me that some human intervention is required to get accuracy to the next level. (A toy sketch of an automated approach follows this list.)
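To make the contrast concrete, here is a toy sketch of what a fully automated approach can look like. This is only a word-counting illustration with made-up word lists; it is not how Twitter Sentiment actually works, and real systems are far more sophisticated.

```python
# Toy illustration only: a minimal lexicon-based sentiment classifier.
# The word lists below are invented for this example.

POSITIVE = {"love", "great", "awesome", "happy", "good"}
NEGATIVE = {"hate", "terrible", "awful", "sad", "bad"}

def automated_sentiment(tweet):
    """Return 'positive', 'negative', or 'neutral' by counting lexicon hits."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(automated_sentiment("I love the new phone, it's great"))    # positive
print(automated_sentiment("This airline is awful, I hate delays")) # negative
```

Even this crude example hints at why accuracy plateaus: sarcasm, negation, and slang all slip past simple rules, which is where human judgment helps.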

Motivation. We want to combine the best of both worlds. By combining crowdsourcing with automated sentiment analysis, we think that we can significantly increase accuracy while being cheaper than a fully crowdsourced system.

Possible solutions. Tweets have unique properties that could be exploited to make crowdsourcing more efficient. For example, an easy way to boost accuracy would be to make sure that very highly retweeted items are classified correctly. If a status is retweeted 500 times, it would be worth sending one example retweet to a classification system with higher accuracy (i.e., our crowdsourced system) in order to get all 500 correct. We have other interesting ideas like this. A rough sketch of this routing idea follows.
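Here is a rough sketch of that routing idea, assuming a per-status retweet count is available. The threshold, the tweet fields, and the classify_crowd / classify_auto helpers are all hypothetical stand-ins, not a real implementation.

```python
# Hypothetical sketch: send one copy of each highly retweeted status to the
# (more accurate, paid) crowdsourced system, and let the automated classifier
# handle everything else. Threshold, fields, and helpers are assumptions.

RETWEET_THRESHOLD = 100   # assumed cutoff for "highly retweeted"
COST_PER_TWEET = 0.02     # ~2 cents per crowdsourced judgment

def route_tweets(tweets, classify_crowd, classify_auto):
    """tweets: list of dicts like {'id': ..., 'text': ..., 'retweets': ...}."""
    labels = {}
    crowd_cost = 0.0
    for t in tweets:
        if t["retweets"] >= RETWEET_THRESHOLD:
            # One paid judgment covers the status and all of its retweets.
            labels[t["id"]] = classify_crowd(t["text"])
            crowd_cost += COST_PER_TWEET
        else:
            labels[t["id"]] = classify_auto(t["text"])
    return labels, crowd_cost

# Example usage (with the toy automated_sentiment above standing in for
# classify_auto, and the crowdsourcing call stubbed out):
# labels, cost = route_tweets(batch,
#                             classify_crowd=lambda text: "positive",
#                             classify_auto=automated_sentiment)
```

The payoff is that one 2-cent judgment can correct the label for hundreds of near-duplicate retweets, while the cheap automated classifier absorbs the long tail.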

Why we need your help. Answer: money. Crowdsourcing can be expensive, especially when you start classifying thousands of tweets. Unfortunately, we don't have a budget to run these crowdsourcing tests. Also, it would be good to have some practical scenarios to work with, rather than contrived trials that don't represent the needs of the real world.

Cost. The money will be used towards paying the workers to classify tweets. At about $0.02 per tweet, you could classify 1000 tweets for $20.
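For anyone budgeting a batch, the arithmetic is easy to run yourself. The per-tweet price comes from above; the idea of paying for multiple judgments per tweet is an assumption, since crowdsourced labeling often uses several workers per item for quality control.

```python
# Back-of-the-envelope cost estimate. The $0.02 figure comes from the post;
# multiple judgments per tweet is an assumption, not a stated requirement.

def crowdsourcing_cost(num_tweets, price_per_judgment=0.02, judgments_per_tweet=1):
    return num_tweets * price_per_judgment * judgments_per_tweet

print(f"${crowdsourcing_cost(1000):.2f}")                         # $20.00, one judgment per tweet
print(f"${crowdsourcing_cost(1000, judgments_per_tweet=3):.2f}")  # $60.00, three judgments per tweet
```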

What do you get? We'll work with you on classifying a large set of tweets. We can gather the tweets for you, or you can send us a batch. We think that we can classify tweets at a higher accuracy level than that offered by automated solutions, while costing less than a fully crowdsourced system.

Interested? Simply fill out our feedback form or contact me at alecmgo at stanford dot edu with the query you would like to track and the time frame. We will then respond with an estimated cost.

Thanks,
Alec
