Online advertising works! (At least for those over 40.)
Reflections on Duncan Watts’s Everything is Obvious – Once You Know The Answer.
I thoroughly enjoyed this book, although bits of it are perhaps so accessible as to be almost bland; since it is largely aimed at the layman, there is quite a lot of material in it that I was already aware of.
But that’s the beauty of popular science: it explains things you already knew or suspected in ways that are lucid and crystal clear. It is a great companion for anyone interested in media policy, advertising, planning, and scientific approaches to human behavior and reasoning.
I particularly enjoyed his chapters on prediction and planning. He starts by noting the illusion of a predictable universe, a fallacy that harks back to Newtonian physics and positivism. Laplace’s demon, Watts reminds us, only works for simple systems. For complex systems (which covers virtually all of social science), prediction can never be about actualities, only probabilities – and this is one of the many places where common sense misleads us, Watts notes, as we tend to think of prediction as the former rather than the latter.
Further, in a complex system – characterized by events that are essentially random, but create vast repercussions – we cannot even know what to predict. What is relevant in a potentially endless array of options cannot be known until later. Making the right prediction is just as important as getting the prediction right. This seemingly simple, sobering insight allows Watts to criticize other popular social-science writers such as Nassim Taleb (“black swans”) and Malcolm Gladwell (“the tipping point”). We cannot know in advance what will be a black swan, because defining what counts as a black swan can only be done in retrospect.
However, we can predict events that conform to some kind of historical pattern, especially when the events are the aggregated results of masses of discrete entities. It is hard to predict whether you will catch influenza in the coming year, but we can predict that a certain percentage of the population will.
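This contrast between unpredictable individuals and predictable aggregates can be shown with a minimal simulation (the 10 % infection rate and population size are arbitrary assumptions for illustration, not figures from the book):

```python
import random

random.seed(0)

P_FLU = 0.10        # assumed individual probability of catching influenza
POPULATION = 100_000

# Each individual outcome is essentially a coin flip -- unpredictable on its own.
outcomes = [random.random() < P_FLU for _ in range(POPULATION)]

# But the aggregate is tightly predictable: the observed fraction
# concentrates around P_FLU as the population grows (law of large numbers).
fraction = sum(outcomes) / POPULATION
print(f"Predicted: {P_FLU:.3f}, observed: {fraction:.3f}")
```

No single entry of `outcomes` can be forecast, yet `fraction` lands within a fraction of a percentage point of the assumed rate.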
Inductive methods for prediction
Arguably, the label inductive methods should refer to methods that serve to construct probability models “from the data itself” – in other words, by probing or mining the actual entity that is itself to be predicted (a population, or a corpus of text or statistical data) in order to say something about this very entity. The inductive aspect is that knowledge is created in as uninfluenced, nondiscriminatory a way as possible, “from the data” so to speak. By the same token, the label deductive methods should refer to methods that rely on prior laws or patterns, which exist as prior knowledge on the part of the analyst. However – as any good social scientist will tell you – the notions of deduction and induction are ideal constructs; in reality, all reasoning is a mixture of both, albeit with emphasis on one or the other.
No purely inductive data can be said to exist, as all “pure” data is subject to some form of interpretation on the part of the analyst at some stage in the process of analysis. However, the analyst can strive to maximize his/her neutrality and disinterestedness and “let the data speak for itself” – this is also what Watts generally recommends.
Similarly, the laws and regularities that underpin deductive reasoning always come from empirical observations, especially so when we are dealing with non-axiomatic, non-trivial rules and laws.
Watts lists several forms of inductive prediction; data mining is one example, where large collections of data are crunched by using algorithms that detect statistical patterns.
Another inductive method is to make use of prediction markets. Here, the errors in lots of discrete estimates made by market actors are thought to cancel each other out, so that prices converge on probability estimates for two or more predefined outcomes.
In theory, in fact, no one should be able to consistently outperform a properly designed prediction market. The reason is that if someone could outperform the market, they would have an incentive to make money in it. But the very act of making money in the market would immediately shift the prices to incorporate the new information. (166)
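The mechanism behind that quote – each trade immediately shifting the price – can be sketched with a standard market-maker design, Hanson’s logarithmic market scoring rule. This is a generic illustration of how such markets work, not the specific design Watts discusses; the liquidity parameter and trade sizes are arbitrary:

```python
import math

class PredictionMarket:
    """Two-outcome market using a logarithmic market scoring rule."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity          # higher b = prices move less per trade
        self.q = [0.0, 0.0]         # outstanding shares for outcomes 0 and 1

    def price(self, outcome: int) -> float:
        """Current price, interpretable as the market's probability estimate."""
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def cost(self) -> float:
        # Market maker's cost function; trades are priced off its differences.
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q))

    def buy(self, outcome: int, shares: float) -> float:
        """Buy shares in an outcome; returns what the trader pays."""
        before = self.cost()
        self.q[outcome] += shares
        return self.cost() - before

market = PredictionMarket()
print(market.price(1))   # 0.5 -- no information in the market yet
market.buy(1, 50)        # a trader bets on outcome 1...
print(market.price(1))   # ...and the quoted price rises to incorporate it
```

The point of the quote falls out of the mechanics: any profitable trade moves the price, so the information that made it profitable is absorbed at once.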
This notion of “crowdsourcing” data is inspired by Friedrich von Hayek’s notion of catallaxy. Watts refers to the political scientist James Scott, who has criticized the philosophy of “high modernism” that is central to so much planning and strategic thinking. This philosophy – underpinning so many catastrophic policy interventions in the last century! – consistently underemphasizes “local, context-dependent knowledge in favor of rigid mental models of cause and effect” (204). Watts notes, however, that Scott’s argument was preceded by Hayek’s: precisely this local, context-bound knowledge is what gets aggregated in a market situation, without any oversight or direction. Watts nevertheless doesn’t fall into the trap of market idealism, but approaches the idea of catallaxy with caution, noting that market-based mechanisms are not the only way to exploit local knowledge. (I personally criticized the notion of catallaxy, or informational idealism, in my previous blog post.) If cap-and-trade policies are abused, for example, or start generating unforeseen effects such as derivatives markets, centralized regulation such as tax measures might be more efficient.
Further, markets can be gamed – especially by actors with deep pockets, such as US election campaigners. And no single method should be expected to be superior to others. When Watts and a team of co-researchers compared different ways of predicting sports results, all of the tested methods – probability models based on historical records, as well as decentralized betting markets – performed about the same.
Deductive methods for prediction
The strategy of looking into historical data and devising statistical models from it could be labelled a more deductive method of prediction. Here, generalities and rules are applied to the phenomenon that is to be predicted, in order to assess probabilities. These generalities and rules are defined beforehand, from corpora of data other than the one being predicted; they pre-exist the current data, and hence we could call them deductive models.
Plans fail, Watts notes, “not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them” (212). One simple rule follows: avoid relying on any single “expert” opinion or prediction about the future. The key, he argues, is to poll many individual opinions and take the average.
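That averaging advice can be illustrated with a minimal simulation. The true value, the number of experts, and the noise level below are all arbitrary assumptions; the only point is that independent, unbiased errors cancel out in the mean:

```python
import random

random.seed(1)

TRUE_VALUE = 500                     # hypothetical quantity being forecast

def expert_guess() -> float:
    # Each individual estimate is noisy but unbiased around the truth.
    return TRUE_VALUE + random.gauss(0, 100)

guesses = [expert_guess() for _ in range(1000)]
crowd_average = sum(guesses) / len(guesses)
single_expert = guesses[0]

print(abs(single_expert - TRUE_VALUE))   # typically off by tens of units
print(abs(crowd_average - TRUE_VALUE))   # typically off by only a few units
```

The caveat from the prediction-market discussion applies here too: the cancellation only works if the individual errors are roughly independent, not if everyone polls shares the same blind spot.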
One problem with alleged “general” laws or regularities is that they can quickly change if the overall complex reality changes considerably. Watts takes the post-2008 financial market as an example, where conditions that had once been taken for granted suddenly became open to question.
A second problem with deductive reasoning is that big, strategic decisions are not made frequently enough to benefit from a statistical approach. Hence, strategic decisions are often ill-suited to both statistical models of expected behavior (deductive modeling) and crowd wisdom (inductive modeling).
The paradox of planning
Watts points to what he calls the paradox of planning: Superior planning might come to nothing if events occur that happen to be game-changing for the entire market. He takes Sony as an example; both the Betamax and the Minidisc were sensible ideas, given a certain scenario. But the scenario suddenly changed, more rapidly than anyone could ever have expected.
One of the solutions is to implement much more flexible planning, such as scenario planning, where the future comes in a range of probable versions, based on what are identified as core elements (common to all scenarios) and contingent elements (specific only to particular ones). Another improvement in flexibility is a measure-and-react strategy, where organizations are designed to respond rapidly to changing conditions. The risk with this approach, however, is that it once again reifies the positivist fallacy I began with, namely the belief that multivariate testing, bucket testing, mechanical turks, and the like will lead to optimal versions or decisions.
A less rigid approach to versioning, Watts argues, is to conduct experiments that test, for example, whether a customer’s purchase can indeed be attributed to his/her exposure to an ad, or whether the customer would simply have bought the product anyway. The Web constitutes a rather controlled environment for such tests. When Yahoo conducted a controlled experiment, comparing exposure to an ad against a “control” group who weren’t exposed, the result (published in 2009) was that advertising should indeed be expected to work(!) – the additional revenue generated by the campaign, in the short run, was estimated at four times the actual cost of the campaign itself. However, almost all of this effect was attributable to older consumers, those over 40. Whether this age factor was due to the style of the ads, or to different browsing habits (younger people simply skipping ads much more intuitively), the study couldn’t show. But experiments such as this at least give a partial answer to the question of how advertising works.
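A toy version of the lift calculation behind such an experiment: compare revenue per user in the exposed group against a matched control group, and attribute only the difference to the campaign. All figures below are made up for illustration; they are not the Yahoo study’s numbers:

```python
def campaign_roi(exposed_revenue: float, control_revenue: float,
                 exposed_users: int, control_users: int,
                 campaign_cost: float) -> tuple[float, float]:
    """Incremental revenue attributable to the ads, and return on spend."""
    # Per-user difference between exposed and unexposed groups is the lift;
    # revenue the control group also generated would have come in anyway.
    lift_per_user = (exposed_revenue / exposed_users
                     - control_revenue / control_users)
    incremental = lift_per_user * exposed_users
    return incremental, incremental / campaign_cost

incremental, roi = campaign_roi(
    exposed_revenue=150_000, control_revenue=50_000,
    exposed_users=100_000, control_users=50_000,
    campaign_cost=12_500,
)
print(incremental, roi)   # 50000.0 extra revenue, 4.0x the cost
```

Without the control group, the naive calculation would credit the ads with all 150,000 in revenue – exactly the “they would have bought it anyway” error the experiment is designed to remove.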
Some other interesting alternatives that Watts lists, which can add to the wealth of inductive knowledge, are prize competitions for open innovation: initiatives that set an objective or target and let people freely experiment in order to reach it. The well-known open source model is of course another great tool for crowdsourced innovation, although many open source projects tend to exhibit a lot of path dependency (projects risk becoming too big and unwieldy for one person to change their core elements, and groupthink can trump individual originality – while software often benefits from crowd production, novels or artworks rarely do – but that’s an altogether different story…), although Watts doesn’t go into detail on that topic. Nor does he expand on gamification, although that can be said to underpin much of what he recommends by way of prize competitions:
Gamification is the technical term for applying game-design characteristics to content and applications that aren’t games. Typical gamification elements include such things as achievement badges, achievement levels, leader boards, virtual currency, points that can be traded or cashed in, and progress bars or other visual meters to encourage people to complete a task.
Altogether, Everything is Obvious is a good read, covering topics like marketing, dissemination of information, prediction, and social networks, while prioritizing pedagogic lucidity over theoretical novelty. It does not provide earth-shattering new theses, but synthesizes existing knowledge from a rich body of studies, conducted by Watts and others.