There has been a certain amount of fuss made over Rolling Stone’s new list of the Top 500 Albums. Well, I say “new list”, but that’s where the fuss comes in: the first 21 positions on the list are, in fact, occupied by the same records as the previous Top 500 list, published several years ago. In a passionate Wall Street Journal article, Jim Fusilli accuses Rolling Stone of bias towards the 1960s and 1970s.
This is, of course, the era of the magazine’s founding and heyday. Is Rolling Stone – like, dare I say it, a few of its readers – getting on a bit and happiest remembering its youth?
As a researcher, I wasn’t reminded of a nostalgic former teen. Instead, the list made me think of tracking studies. So perhaps I might offer some tongue-in-cheek advice.
Tracking studies are built on benchmarks and are, in general, Goldilocks research. Like the bowl of porridge in the fairy tale, they should not be too hot or too cold, but “just right”. If nothing ever changes from wave to wave – if the porridge is too cold – you’re measuring the wrong things. If your indicators swing about wildly – too hot – you’re measuring in the wrong way. Or that’s the theory – of course both those outcomes can be real results, but the general way of the tracker is gradual change – not too hot, not too cold.
Blood on the Track(er)s
Considered as a market tracker – the market being “rock music” – you can understand why the Rolling Stone list is so slow to move. A dramatic shift in results between waves on a tracker leaves a researcher’s arse seriously uncovered: how on earth did this happen? Can it possibly be real? Doesn’t it mean that all your other results were wrong?
And the same goes for Rolling Stone. If The Beatles, Stones, Dylan et al. fall sharply from grace, what does that say about all the previous lists on which they performed so marvellously? So instead Rolling Stone have gone in the other direction and ended up with an album list that has all the appeal of, well, cold porridge.
What could they have done? To some extent the problem is a function of age. Every tracker needs refreshing occasionally to reflect underlying changes in the market, culture, or audience. But there’s also the question of how frequently the tracker waves are commissioned.
If Rolling Stone had produced a list in 2002 and a list in 2012 with vast differences, they might make their readers (the customers of the brand tracker!) angry. But if they had produced 30 other quarterly lists in between, with the changes happening gradually over time, the customers would be far less upset. As Facebook, Twitter, and other web brands have proved, constant minor changes are less jarring than single large ones.
So my advice to Rolling Stone is – more lists! More waves of the tracker! That way the long-term shifts won’t look so radical, and you might eventually get to a list that doesn’t look like a museum piece. Everyone on the Internet might end up hating you but at least you’d have accurate data.