Frequently Asked Questions

Alright, so what is this all about?

Crowdsourced Scores (CSS) is based on the theory that the average prediction across a group of individuals will, over time, outperform any single individual, known as the Wisdom of the Crowds (WotC). Our goal is to collect predictions from you all about the final scores of games and compare those predictions to the actual results.

We’ll also be presenting the results against the spread and totals as defined by the lines at vegasinsider.com.

To make it appealing to return every week, we’ll be keeping track of how accurate you all are, showing who the leaders are for each game, each week, and each season.

Why should I participate?

There are a couple of reasons we think you’ll like it. First and foremost, we think you’re kind of like us: you’ll be interested to see whether the hypothesis actually works. You like football, and you like to demonstrate your command of all of the information at your fingertips.

Second, if you’re involved in a weekly pick’em pool and are anything like Chris, you can use all the help you can get just to stay in the mix past the opening weekend. If the wisdom of crowds proves correct, it will give you an edge over your competitors.

How is this different from the Who Picked Whom (WPW) graphs on other pick’em sites?

CSS is more detailed than the WPW graphs. A WPW graph is binary: it shows which team the majority picked, but it hides critical detail. For example, take the opening game of Carolina (-2.5) at Denver and imagine that 60 of 100 users picked Carolina. The WPW graph would suggest the crowd expects Carolina to cover. But suppose the Carolina pickers expected Carolina to win by an average of 4 points, while the Denver pickers expected Denver to win by an average of 7. The crowd’s average margin then actually favors Denver by 0.4 points (60 * 4 = 240 toward CAR; 40 * 7 = 280 toward DEN; (280 - 240) / 100 = 0.4 toward DEN).

So ultimately, the WotC approach would have added real value here: the extra detail of the actual score predictions reveals that, while more people picked Carolina, those who picked Denver expected a bigger margin of victory, tipping the crowd’s average to Denver.
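
To make the arithmetic concrete, here’s a minimal sketch in Python of the averaging described above. The function name and data layout are ours for illustration, not the site’s actual code:

```python
def crowd_margin(predictions):
    """Average predicted margin across all users (home team positive)."""
    margins = [home - away for home, away in predictions]
    return sum(margins) / len(margins)

# Carolina (-2.5) at Denver: 60 users expect Carolina by 4 on average,
# 40 users expect Denver by 7 on average. The scores below are stand-ins;
# only the margins matter for this calculation.
predictions = [(21, 25)] * 60 + [(28, 21)] * 40  # (home=DEN, away=CAR)

avg = crowd_margin(predictions)
print(f"Crowd average: {'DEN' if avg > 0 else 'CAR'} by {abs(avg):.1f}")
# -> Crowd average: DEN by 0.4
```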

We want to see if this will play out over the course of the season.

What is a good forecasting record?

We’ll consider the wisdom of the crowds hypothesis to be viable if the crowd can achieve 60% accuracy or better. For comparison, AccuScore boasts accuracy anywhere between 50% and 75%, depending on how the results are sliced. The overall success or failure of the hypothesis will be determined at the end of the season, though we will certainly provide week-by-week updates.
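
For the curious, here’s a rough sketch of how picks against the spread could be tallied to produce that accuracy number. The Game record, its field names, and the sample final score are all hypothetical; the line is expressed as the home team’s expected margin:

```python
from dataclasses import dataclass

@dataclass
class Game:
    crowd_margin: float   # crowd's average margin, home team positive
    actual_margin: float  # final margin, home team positive
    line: float           # Vegas line expressed as a home-team margin

def ats_accuracy(games):
    """Fraction of non-push games where the crowd's side covered."""
    wins = losses = 0
    for g in games:
        if g.actual_margin == g.line:
            continue  # push: no winner against the spread
        crowd_picked_home = g.crowd_margin > g.line
        home_covered = g.actual_margin > g.line
        if crowd_picked_home == home_covered:
            wins += 1
        else:
            losses += 1
    return wins / (wins + losses) if (wins + losses) else 0.0

# Carolina (-2.5) at Denver as a home-team line is -2.5 (Denver is a
# 2.5-point underdog). The crowd average of Denver by 0.4 means the
# crowd's side is Denver; a hypothetical 21-20 Denver win would cover.
games = [Game(crowd_margin=0.4, actual_margin=1.0, line=-2.5)]
print(f"Crowd ATS accuracy: {ats_accuracy(games):.0%}")  # -> 100%
```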

Why should I use your site over a site like AccuScore or Sportsline?

First, CSS is free this season; many other sites charge. So there’s that. In the long term, to be sure, AccuScore looks like a very promising option, and it appears to have a strong success rate. The main differences between us and AccuScore are two-fold:

  1. Rather than spending money on building an algorithm and passing the cost through to customers, we simply ask you to contribute your own knowledge and trust the wisdom of the crowds to provide the accuracy. As such, we will never charge as much as AccuScore does, because we only need to cover the cost of keeping the site going.
  2. Our site will have a strong community focus. AccuScore is one-way: they tell you their predictions and keep the model secret. On crowdsourcedscores.com, we’re going to provide predictions and eventually enable community discussion and feedback. That is actually a critical component of good forecast accuracy: talking through the games with other people and revising your predictions. A great forecaster doesn’t pick a result and hold onto it; they continue to re-evaluate their prediction as new information comes in.