The silly bias of the Razorfish Feed report

Did you know that people who stand in line for ice cream also like to eat ice cream? The latest Razorfish report on social media trends makes just as much sense. Their “Feed” report, now being retweeted everywhere online, has rosy stats such as “65% of consumers have had online experiences that influenced their perception of a brand.” Yes! Marketers, unleash those social media budgets!

Alas, dig into page 14 of the report and you find the survey sample was 1,000 consumers who:

a. have broadband access
b. spent at least $150 online in the past 6 months
c. visited a community site such as Facebook or Yelp
d. consume or create digital media.

That’s right. Razorfish asked heavy users of the web and social media who are comfortable spending money online if they are influenced by the web and social media. We hate to rain on Razorfish, usually a sharp group, but this report feels like self-justifying fuzziness designed to provide bar charts for business development. The technical term for this is “coverage bias,” in which the sample being studied does not represent the population as a whole.
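Coverage bias is easy to demonstrate with a toy simulation. The rates and proportions below are invented for illustration, not taken from the report: the point is simply that if “connected consumers” are far more likely to report online influence than everyone else, screening for them guarantees an inflated headline number.

```python
import random

random.seed(0)

# Made-up numbers for illustration only: suppose heavy web users are
# 20% of the population and 65% of them report online influence,
# versus 15% of everyone else.
N = 100_000
population = []
for _ in range(N):
    heavy = random.random() < 0.20
    rate = 0.65 if heavy else 0.15
    influenced = random.random() < rate
    population.append((heavy, influenced))

# True rate across the whole population.
overall = sum(i for _, i in population) / N

# Rate measured on the screened "connected consumer" sample only.
heavy_only = [i for h, i in population if h]
screened = sum(heavy_only) / len(heavy_only)

print(f"true population rate: {overall:.0%}")
print(f"rate among screened sample: {screened:.0%}")
```

The whole-population rate works out to roughly 25% (0.20 × 65% + 0.80 × 15%), while the screened sample reports roughly 65% — same world, very different headline, purely from who was allowed into the sample.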

So enjoy those Razorfish quotes being tossed around — 97% of consumers search for brands online! An “overwhelming majority” welcome advertising on social networks! We also hear that 100% of heavy internet users are people who use the internet.

8 thoughts on “The silly bias of the Razorfish Feed report”

  1. Hi Ben —

    Thank you for your thoughtful comments and critique of this year’s FEED report.

    To address your comments about our sample, we specifically chose to survey what we call “connected consumers” who are the people that the vast majority of our clients are trying to reach. Screening criteria include:

    *Broadband access
    *Spent $150 online in the past six months (travel, Netflix, tickets, Amazon, etc.)
    *Visited a community site (MySpace, YouTube, Facebook, Yelp, etc.)
    *Consumed or created some form of digital media such as photos, videos, music or news

    These criteria have remained relatively stable over the three years we have run the study, and we’ve historically found that “connected consumers” are a strong leading indicator of trends, popularity, etc.

    In addition, we were also very pleased to find that our data trends nicely to the work of Forrester, Pew, eMarketer and a number of others — though our sample size is not nearly as extensive (nor broad).

    Some good research that points to the growing technical fluency of U.S. audiences includes “The Broad Reach Of Social Technology” from Forrester’s Sean Corcoran, as well as Forrester’s “The State Of Consumers And Technology: Benchmark 2009, US.” Other recent work from Pew and eMarketer shows similar advances.

    I should also add that we have always been completely transparent about the characteristics of “connected consumers” — not only citing them on pg. 14 (as you point out), but releasing the entire survey data beginning on pg. 41 (including age, income, geographic location, etc.)

    Thanks again for the thoughtful critique — good to keep us on our toes!


  2. Garrick, thanks for this response.

    I still think the report has two major flaws: coverage bias (from the tiny pool of early social media adopters surveyed) and the presentation of the stats as representative of real trends across all consumers. You do spell out the sample on page 14 — but leading up to that are 13 pages of stats that are unrepresentative of the population as a whole. Those early stats are what readers latch on to.

    So I admire the spirit of your report, but not the weight with which the data is presented. Even with the disclosure inside it, the report is framed in a rather misleading light.

    Here’s a sample quote from early in the report: “And consumers want to interact, regardless of whether brands are willing participants: 73% have posted a product or brand review on a web site like Amazon, Yelp, Facebook, or Twitter.”

    Um, you mean 73% of the small sample of heavy web users who already use social media, detailed on page 14, right?

    BTW – yes, Razorfish is a leader in this space and you do fine work. No slam meant on your capabilities, just a bit of push to make your reports cleaner in the future.

    Thanks again for your response.

  3. Ben – It never ceases to amaze how much we blindly rely on metrics and ignore whether the metrics are built on a solid foundation. It would surprise most people that many measurements – even those considered industry standards – are grossly unreliable. In fact, I’m currently writing a post about one such popular measurement – for measuring web traffic.

    And I suspect that social media amplifies this problem – people just retweet whatever seems interesting (some probably don’t even bother to open the links to see whether the thing they’re retweeting adds value). It’s a serious problem – and I think it’s getting worse as people become more apathetic about the underlying credibility of sources.

  4. I generally take Razorfish reports with a grain of salt. They rock as a digital and social agency, but they publish reports that aren’t applicable to a majority of circumstances. In addition to your points, many of the questions imply a bias and steer respondents toward a ‘preferred’ answer. I’m not saying Razorfish deliberately directs users to respond a certain way, but the questions often lead them to one.

    Was equally disappointed with the SIM report. I tried using that information in a practical way once and was burned. (Big time, with a fire meme.)

    Anyway, the report was 30-40% graphics, which were pretty nice, so I’m all for reading their reports — just weighting them accordingly.

  5. A great post. Those of us trying to promote this area really get discredited when we come out with stats that are laughably skewed in one direction.

    FEED reports seem to run like this with regularity: a year ago they claimed that 52% used RSS feeds with some regularity, at the same time as a Forrester report claimed that only 12% did so.

  6. Ben, if you take a look at the FEED report’s generous Data tables, you’ll see that FEED’s screeners and weightings are actually more restrictive and more biasing than those Razorfish listed in the summary page you mentioned.

    The actual screener wasn’t “broadband access”; it was “broadband access at home.”

    And the screener wasn’t [ever] “consumed or created some form of digital media…”; it was “Does 1+ of the following regularly” [!]: “I regularly listen to music online. I regularly watch video online. I regularly use photo-sharing sites. I rely on the Web to get current news or information more than I do the television. I regularly visit social networking sites.” (See the ‘100% incidence’ on Question 7 to confirm this.)

    And, if you look at FEED’s “What is your age?” chart, you’ll see that respondents over age 55 were excluded, even though this isn’t mentioned in the text.

    Also, if you look at the frequencies of respondents by “Metro” in the Data tables, you’ll see that half of the sample was from large metros, giving this population segment twice the weight it would have had in a nationally representative sample. Razorfish claims the 10 metros chosen were the “10 largest,” but if you compare the list to 2008 Census stats, you’ll see that the top metros of Philadelphia, Detroit, and Houston were dropped and replaced with the hipper SF/San Jose (combining the two metros does put it in the top 10) and the smaller Seattle and Boston metros.

    Last but not least, the source of the sample is only identified as coming from “a consumer panel” that “we” (rather than an external third party?) screened. No mention of random sampling or of any effort to make the sample representative within the parameters screened upon.

    In short, Ben, I think your criticisms of the sample were considerably gentler than they could have been. Many of the generated biases appear in the high incidences reported in Question 7, which, as of August, were significantly higher than others’ results for the “200 million broadband users” that Razorfish claims the results “roughly” project to. (Never mind that, based on the whole-U.S.-population base Razorfish cited, this 200 million would include the kids, infants, and people 56+ that the survey excluded. Someone apparently didn’t notice that Pew’s data projects to adult Americans, not “Americans.”)
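    The projection-base point above is simple arithmetic. A sketch with rough, round, made-up figures (not the report’s or the Census Bureau’s actual numbers) shows how much the base matters:

```python
# Illustrative round numbers only, NOT the report's actual data:
# an incidence measured on adults aged 18-55 cannot be projected
# onto a base that also includes children and people 56+.
us_population = 304_000_000   # rough 2008 U.S. total (assumption)
adults_18_55 = 160_000_000    # rough share eligible for the survey (assumption)
incidence = 0.73              # example stat from the screened sample

naive_projection = incidence * us_population    # wrong base: everyone
bounded_projection = incidence * adults_18_55   # base the survey can speak to

print(f"naive projection: {naive_projection / 1e6:.0f}M people")
print(f"bounded projection: {bounded_projection / 1e6:.0f}M people")
```

    Even before accounting for the screeners, using the whole-population base nearly doubles the headcount the stat appears to describe.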

    That said, I completely agree with Garrick’s comment that the results may still “trend” well with the third-party research. Even very inaccurate data can trend well, if the biases are held constant. And I also think Garrick and Razorfish deserve tremendous praise for the insightfulness of the survey questions and the presentation. A lot of surveys that sample better end up producing far less for others to build on. The survey shows thought leadership and remains a major service to the industry.

  7. Chris,

    Thanks for stopping by and for your detailed additions. Razorfish does much good work and the trends are moving in their direction. I still think clarifying the report’s stats would go further to making this research credible in front of clients.

    Getting 1,000 people who fit narrow criteria of early-adopter technology usage to opt in to a web-based survey is certain to skew the results, even if it trends with the general populace.

    Presenting this without specifying the sample until deep in the report is misleading and silly.

    Thanks also for the alert on the blog spam — I love those SEO link spammers SO much!

  8. The technical term for it is “logical fallacy”, in particular the logical fallacy of the biased sample.

    Chris, you can twist the actual numbers however you want

    “Even very inaccurate data can trend well, if the biases are held constant.”

    but the end result is that the survey is as the OP stated — bizdev.
