Wikipedia’s Crowdsourced A/B Ad Testing

The Wikipedia community continues to amaze me. Wikipedia decided to crowdsource its fundraising campaign this year, asking members to get down and dirty with nitty-gritty tasks like analyzing which ads perform better, combing through large Excel sheets, and customizing ad copy. And they succeeded in crowdsourcing the fundraising campaign! Look at the Excel sheet they've posted, which community members helped analyze to optimize the annual fundraising campaign:

How can I get a Wikipedia community so I can crowdsource filing my taxes? But seriously, you can join the meetings where the ads are discussed, read the meeting logs, test ads, and contribute both copy and image ideas to the campaign. All of the statistics are freely shared. It sounds crazy only until you remember that the entire project is built by the community itself, so why shouldn't the fundraising campaign be as well? In Wikimedia's words, "We're asking the community to get deeply involved with the messaging, planning, and execution of this year's fundraiser."

We've examined their A/B Testing page for the banner ads they ran on October 26th to drive English-speaking traffic to their donation page. For those of you who are unfamiliar with the term, A/B testing is where you show two variations of an ad to a sample audience and see which one performs better, so you can optimize your results. True A/B testing holds everything constant except for one element, so you're truly comparing apples to apples. A/B testing is a pretty standard technique among marketers (at least it should be!), but it's great to see such drastic results shared publicly because it educates everyone on the extreme value of A/B testing your marketing materials. Here, we should really call what they did A/B/C/D testing, since there are four different versions of the ad:

Here are images of three of the four ads (the fourth link didn't take me to the ad; apparently that's one of the downsides of crowdsourcing your campaign):

As you can see, the Jimmy Wales personal appeal did more than three times better than the next-best ad, and nearly ten times better than the worst. If these results hold throughout the full-scale campaign, the Wikipedia community has just increased its own fundraising returns by a factor of three to ten compared with running the weaker banners. Data like this shows that thorough ad optimization is a must for any organization that wants to stick around.
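If you want to sanity-check results like these yourself, here's a minimal sketch in Python of how you might compare one banner's donation rate against the alternatives with a simple two-proportion z-test. The variant names and counts below are made-up placeholders for illustration, not Wikimedia's actual figures.

```python
import math

# Hypothetical banner results: (impressions, donations).
# These numbers are illustrative only, not Wikimedia's data.
banners = {
    "Personal appeal": (100_000, 520),
    "Variant B":       (100_000, 160),
    "Variant C":       (100_000, 110),
    "Variant D":       (100_000, 55),
}

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test comparing two conversion rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Compare the best-performing banner against each of the others.
best_name, (best_n, best_x) = max(
    banners.items(), key=lambda kv: kv[1][1] / kv[1][0]
)
for name, (n, x) in banners.items():
    if name == best_name:
        continue
    z, p = two_proportion_z_test(best_x, best_n, x, n)
    lift = (best_x / best_n) / (x / n)
    print(f"{best_name} vs {name}: {lift:.1f}x lift, z = {z:.2f}, p = {p:.4f}")
```

With sample sizes in the tens of thousands of impressions, differences this large come out overwhelmingly significant; the point of the check is to avoid declaring a winner from a handful of donations.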

What is also amazing about this campaign is that it's generating a data set ripe for in-depth research on worldwide similarities in ad response (research that will hopefully be posted on Wikipedia). It could reveal global ad-preference trends and best practices for everyone to use. I can't help but wonder what else would be an effective use of this community. If the Wikipedia community would follow your direction, what would you ask it to test and optimize? Why?

Written by Nathan Maton | Project Manager, JESS3



