altmetric.com is so called because I was working on updating an old pet project tracking how research articles are discussed online and it needed a new name. I was following a lot of science 2.0 folks on Twitter, saw the manifesto when it first came out and figured it was a great pitch but that nobody would ever actually do anything serious about it.
I was wrong – it turned out lots of people were interested in the idea and had started working on new tools: at PLoS, at Impact Story (which Jason Priem, who coined the term ‘altmetrics’, founded with Heather Piwowar), at figshare and in many other places. It was a nice thing to be wrong about.
The name Altmetric suited the old Postgenomic project because that was all about collecting attention from scholarly blogs and social media. If I think about the difference between where altmetrics started, where they are now and where we think they should go, that’s what immediately comes to mind.
Altmetrics tools and users started out looking at attention and being more about counts and (for us) a score. The manifesto talks about using the data for discovery, about looking at attention, about creating new filters. I think it’s fair to say it was driven by publishers and groups interested in the communication of science. It worked very well for that. A large proportion – perhaps even the majority – of journals now track article level metrics that incorporate altmetrics in some way.
Over the next few years the framework for using and understanding altmetrics developed. As a community we started talking to the original users of altmetrics data and they started helping us to develop the tools. They started talking about things like impact and quality as well as attention. Those are very different use cases and require different ways of thinking, different source data too. The list of interested stakeholders grew from publishers and librarians to encompass funders and research officers. I think where we are now, coming up to five years after the manifesto was published, is standing just across the threshold of looking at impact as well as attention.
There’s lots still to do at this point. At altmetric.com specifically we’re focusing on things like looking for citations in policy documents, on broadening the kinds of items that can be tracked and on new sources that measure more than just attention. At PLoS they’re continuing to develop cool tools like Parascope. Groups like CrossRef continue to experiment with linking patents to DOIs. ORCID and Impact Story are helping researchers actually use the data.
In the future I think another aspect of research that altmetrics could conceivably help with is quality. Not in the sense that there might be a metric or some number you can use to gauge quality, but if we’re bringing together what people are saying about work online then, perhaps with the right reputation systems, it’s a way to give post-publication review a bit of a kickstart. I’d be interested in people’s opinions on that…
… which brings me to 2:AM. The cool thing about 1:AM – and 2:AM too, hopefully – was that it brought together a lot of stakeholders from publishing, funders and institutions and gave them a chance to talk about both the good and bad things about altmetrics. It helped us understand what people believed was important, and what kind of messaging was misleading (last year made me start talking about indicators rather than metrics in some contexts, and realize the difference between impact and routes to impact, for a start). At least for us it helped shape our thinking and roadmap for the future.
So I’m looking forward to this year. See you there!