There are numerous ways of evaluating the performance of individual scientists, their departments and the institutions they are part of. Most are based on the volume and quality of the research they produce.
Today, Lutz Bornmann at the Max Planck Society in Munich and Loet Leydesdorff at the University of Amsterdam put forward another method, this time for evaluating the scientific performance of cities.
Their approach is straightforward. They take the total number of papers published by researchers in a particular city and then count how many of these appear in the top ten per cent of most highly cited papers. If citation performance were spread evenly, you'd expect ten per cent of a city's papers to fall into this top group.
"For example, if authors located in one city have published 10,000 papers, one would expect for statistical reasons that a thousand (that is, 10%) belong to the top 10% most highly cited papers," say Bornmann and Leydesdorff.
They then compare the expected number of top papers from a city with the actual number.
Finally, they plot the results on a map, showing cities with more highly cited papers than expected in dark green and those with fewer than expected in red. The bigger the dot, the more papers involved.
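The comparison itself is simple arithmetic, and can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the per-city totals are inferred from the expected values quoted in the article (an expected 14.3 for London implies roughly 143 papers, and 78.7 for Moscow implies roughly 787).

```python
# Sketch of the expected-vs-observed comparison behind the maps.
# City figures are taken from the article's examples; totals are
# back-calculated from the quoted expected values, so treat them
# as illustrative rather than definitive.

cities = {
    # city: (total papers, papers in the global top 10% by citations)
    "London": (143, 46),
    "Moscow": (787, 21),
}

def expected_top_papers(total_papers, top_share=0.10):
    """Number of top-cited papers a city should have by chance alone."""
    return top_share * total_papers

for city, (total, observed) in cities.items():
    expected = expected_top_papers(total)
    ratio = observed / expected  # >1 means over-performing, <1 under
    print(f"{city}: observed {observed}, expected {expected:.1f}, "
          f"ratio {ratio:.2f}")
```

A ratio above 1 would colour a city green on the map; below 1, red. The dot size would scale with the total paper count.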
Bornmann and Leydesdorff have done this for physics papers indexed in Scopus in 2008, counting citations up to February 2011. The screen shot above shows the resulting map.
The results for physics indicate that the best performers are London, Paris, Karlsruhe, Munich (and Garching), Pisa, and Rome. The top result comes from London, which has more than three times as many highly cited papers as expected (46 versus 14.3).
The worst performer is Moscow, which has only 21 highly cited papers compared to an expected 78.7. Bornmann and Leydesdorff also highlight the performance of Cambridge in the UK, which merely matched expectations, producing 21 highly cited papers against an expected 21.7.
Bornmann and Leydesdorff's maps raise a number of questions. Not least of these is the performance of Cambridge, MA, home to two of the world's top institutions, MIT and Harvard, which could reasonably be expected to feature strongly in the data. Yet Cambridge, MA, does not appear at all.
They also discuss a number of other limitations, such as the role of multiple authors, who may come from different disciplines and may have contributed vastly different amounts of work.
My guess is that this kind of mashup will be of much greater significance in Europe, particularly Germany, than in the US because there is considerable focus from funding agencies on geographical centres of excellence.
Whatever the merits of this approach, performance measures are part of the landscape for working scientists. And visualisation techniques like these mashups can help to present the data in easily digestible ways. Expect to see more of them.