It’s conference week!

After months of preparation, we’re finally here: over the next few days the 2:AM hack day, conference, and altmetrics15 workshop will take place in Amsterdam. We’ve got some great presentations and workshops lined up – take a look at the full schedule to see what’s happening when.

There’ll be lots happening over the week – stay tuned to Twitter and follow #2amconf for all the latest updates.

We’ll also be live streaming via our YouTube channel (so you don’t need to miss out even if you couldn’t make it in person!) and our guest bloggers will be sharing their take on each session here.

Consistency challenges across altmetrics data providers/aggregators

This is a guest post from Zohreh Zahedi, PhD candidate at the Centre for Science and Technology Studies (CWTS) of Leiden University in the Netherlands.

At the 1:AM conference in London last year, a proposal put forward by myself, Martin Fenner and Rodrigo Costas on “studying consistency across altmetrics providers” received a 1:AM project grant, provided by Thomson Reuters. The main focus of the project is to explore consistency across altmetrics providers and aggregators for the same set of publications.

Altmetric.com, the open source solution Lagotto and Mendeley.com participated in the study, while other altmetrics aggregators (Plum Analytics and Impact Story) did not, owing to practical difficulties such as agreeing on a random sample and its size and extracting the metrics at exactly the same date and time.

By consistency we mean having reasonably similar scores for the same DOI per source across different altmetrics providers/aggregators. For example, if Altmetric.com and Lagotto report the same number of readers from the same source (Mendeley) for the same DOI, they are considered to be consistent. Understanding this is critical for interpreting any potential similarities or differences in metrics across different altmetrics aggregators. This work extends a 2014 study that used a smaller sample of 1,000 DOIs, all coming from one publisher (PLOS). In that study we showed that altmetrics providers are inconsistent, in particular regarding Facebook counts and numbers of tweets (http://dx.doi.org/10.6084/m9.figshare.1041821).

Data & method:

For this purpose, we collected a random sample of 30,000 DOIs obtained from Crossref (15,000) and WoS (15,000), all with a 2013 publication date. We controlled for time by extracting the metrics for the data set at the same date and time, on 23 July 2015 starting at 2 PM, using the Mendeley REST API, the Altmetric.com dump file and the Lagotto open source application used by PLOS. Common sources (Facebook, Twitter, Mendeley, CiteULike and Reddit) across the different providers/aggregators were analyzed and compared for the overlapping DOIs.
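
As a rough illustration of this kind of per-source comparison (a minimal sketch in Python, not the project’s actual code), the snippet below merges the Mendeley reader counts reported by two providers on DOI and reports how many overlapping DOIs have identical counts. The file and column names are hypothetical.

```python
# Hypothetical exports: one CSV per provider with columns "doi" and "readers".
import pandas as pd

altmetric = pd.read_csv("altmetric_mendeley_readers.csv")  # doi, readers
lagotto = pd.read_csv("lagotto_mendeley_readers.csv")      # doi, readers

# Restrict the comparison to DOIs covered by both providers.
merged = altmetric.merge(lagotto, on="doi", suffixes=("_altmetric", "_lagotto"))

# A DOI is "consistent" for this source if both providers report the same count.
merged["consistent"] = merged["readers_altmetric"] == merged["readers_lagotto"]

print("overlapping DOIs:", len(merged))
print("share consistent: {:.1%}".format(merged["consistent"].mean()))
print("mean absolute difference:",
      (merged["readers_altmetric"] - merged["readers_lagotto"]).abs().mean())
```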

Preliminary results:

Several discrepancies/inconsistencies among these altmetrics data providers in reporting metrics for the same data sets have been found. In contrast to our previous study in 2014, Mendeley readership counts were very similar between the two aggregators, and to the data coming directly from Mendeley. One important reason is a major update of the Mendeley API between the two studies. On the other hand, as in the previous study, we found large differences between Altmetric.com and Lagotto in how Facebook counts and tweets are collected and reported.

Possible reasons for inconsistency:

We have summarized below some of the possible reasons we identified for inconsistencies across the different providers:

  • Differences in reporting metrics (aggregated vs. raw scores; public vs. private posts)
  • Different methodologies in collecting and processing metrics (e.g. the Twitter API)
  • Different update schedules: possible time lags in data collection or updating issues
  • Using different identifiers (DOI, PMID, arXiv id) for tracking metrics
  • Difficulties in specifying the publication date (for example, different publication dates in WoS and Crossref), which influence data collection
  • Accessibility issues (problems resolving DOIs to URLs, cookie problems, access denied errors), which differ across publisher platforms

All in all, these problems emphasize the need to adhere to best practices in altmetric data collection, both by altmetric providers/aggregators and by publishers. For this we need to develop standards, guidelines and recommendations that introduce transparency and consistency across providers/aggregators.

Fortunately, in early 2015 the National Information Standards Organization (NISO) initiated a working group on altmetrics data quality, which aims to develop clear guidelines for the collection, processing, dissemination and reuse of altmetric data and which can benefit from a general discussion of the results of this project. Much work needs to be done!

2:AM Amsterdam: Setting the standard

This is a guest post from Adam Dinsmore, a member of the Wellcome Trust’s Strategy Division. Adam describes the importance of rigorous data standards and infrastructure to funders who wish to use altmetrics to better understand their portfolios, and looks ahead to the standards session at next month’s event.

It was a moment of some personal and professional pride last September when the Wellcome Trust played host to the inaugural altmetrics meeting (1:AM London). As a large funder of biomedical research the Trust is always keen to better understand the attention received by the outputs of the work that it supports, and over the two days delegates were given much cause to consider the potential of altmetrics to help us gather intelligence on the dissemination of scholarly works.

Among the biggest developments in the UK’s metrics debate since 1:AM was the publication of The Metric Tide [1], a three-volume report detailing the findings of the Higher Education Funding Council for England’s (HEFCE) Independent Review of the Role of Metrics in Research Assessment and Management (or IROTROMIRAAM for short). The review, commissioned by then Minister of State for Universities and Science David Willetts in spring 2014, sought to bring together thinking on the use of metrics in higher education from across the UK’s researchscape. A call for evidence launched in June 2014 attracted 153 responses from funders, HEIs, metric providers, publishers, librarians, and individual academics.

Attendees at last year’s 1:AM event heard an update on the review’s progress from the report’s eventual lead author James Wilsdon (viewable on our YouTube channel), who described the group’s aims to consider whether metrics might support a research environment which encourages excellence and, crucially, how their improper use might promote inefficient research practices and hierarchies.

The full report expounds further, crystallising more than a year of thoughtful consultation into an evidence base from which several important recommendations proceed. Among them is a call for greater interoperability between the systems used to document the progression of research – from funding application to scholarly inquiry to publication and re-use – and the development of appropriate identifiers, standards, and semantics to minimise any resulting friction. Fortunately for those with a vested interest in an efficient research ecosystem (i.e. everyone) some very clever people are working to make these systems a reality.

It’s important that the systems used to track the proliferation of scholarly work are able to interconnect and speak a common language. Image: 200 pair telephone cable model of corpus callosum by Brewbrooks (CC-BY-2.0).

In two weeks the second annual altmetrics meeting (2:AM Amsterdam) – which this year is being hosted at the Amsterdam Science Park – will open with a session on Standards in Altmetrics, featuring a presentation from Geoff Bilder on a newly announced CrossRef service potentially able to track activity surrounding research works from any web source. First piloted in spring 2014, the DOI Event Tracker will capture online interactions with any scholarly work for which a DOI can be generated (articles, datasets, code), such as bookmarks, comments, social shares, and citations, and store these data in a centralised clearing house accessible to anyone. Critically, CrossRef have stated that all of the resultant data will be transparent and auditable, and made openly available for free via a CC-0 “no rights reserved” license. The service is currently slated for launch in 2016.

The session will also feature an update on the National Information Standards Organization’s (NISO) Alternative Assessment Metrics (Altmetrics) Initiative. Since 2013 NISO has been exploring ways to build trust in metrics by establishing precise, universal vocabularies around altmetrics to ensure that the data produced by them mean the same things to all who use them. In 2015 NISO has convened three working groups tasked with the development of specific definitions of altmetrics, calculation methodologies for specific output types, and strategies to improve the quality of the data made available by altmetric providers.

The continuing work of these groups speaks to the challenges inherent in establishing consistent, transparent data provision across the altmetric landscape. Zohreh Zahedi of CWTS-Leiden University will present the findings of a study of data collection consistency among three altmetrics providers, namely Altmetric.com, Mendeley, and Lagotto. The study examined data provided by these vendors for a random sample of 30,000 academic articles, finding several discrepancies both in terms of coverage of sources like Twitter, CiteULike, and Reddit and the scores derived from them. These findings provide an important indication that the use of altmetric data remains laden with caveats regarding the context in which they were derived and exported.

It is heartening that real attention is being paid to the issues of interoperability and consistency often raised by funders, publishers, and HEIs, and drawn together by the Metric Tide report. The presentations from CrossRef, NISO, and the CWTS-Leiden group are bound to stimulate much thought and discussion, which will then be built upon in a standards-themed workshop session later in the day. These discussions portend a time when a rigorous data infrastructure allows altmetrics to approach their hitherto unrealised potential. I look forward to hearing about it in Amsterdam!

[1] Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363

Announcing the 2:AM Hackday

With the 2:AM conference only three weeks away and the schedule finalized, most of you have bought tickets and planned your trip. Some of you will also attend the altmetrics15 workshop on the Friday right after 2:AM. If three days of altmetrics discussions is not enough for you, or if you want to build something in addition to talking, please join us for the 2:AM hackday on October 6th, the day before 2:AM starts, taking place in the same venue.

In contrast to the 2:AM conference and altmetrics15 workshop, the 2:AM hackday will not have a fixed schedule, but rather will have an agenda determined in the morning, depending on the skills and interests of those attending. The idea is to work on a prototype or idea in a small group for the day; collaboration and communication are more important than the end result.

Even though no programming skills are necessary to have fun at a hackday, the idea of building something in a single day might be scary for some of you. We are therefore also offering two hands-on workshops for those more interested in learning some basics. One workshop is organized by Stacy Konkiel from Altmetric.com, and she will give an introduction to the tools and services that Altmetric.com offers. The other workshop is led by Najko Jahn from Bielefeld University and me, and we will give an introduction to using the R statistical computing and graphics language to analyze bibliometric data. There is still room for one or two more hands-on workshops if you are interested.

Registration for the hackday is free and more information and the registration link can be found here.

Altmetrics & data visualization: answering new questions about impact

This is a guest post contributed by Stina Johansson, Bibliometrician at Chalmers University of Technology Library.

As a librarian and bibliometric analyst at Chalmers University of Technology, I am surrounded by data describing our university’s research in collaboration with other universities, industry, and other parts of society. More than once I have called our local repository (Chalmers Publications Library) a gold mine. In these data-intensive times, not only are we, as analysts, able to track impact through citations, but we are also able to look at and understand the nature of our research and its outreach from a broader perspective. We can analyze, for example:

  • the spread of research results in social media through alternative metrics,
  • the nature of collaboration through co-authorship, and
  • as a complement to publications’ metadata, metadata covering projects going on at the university.

Through local and global systems, these types of metadata are becoming more and more useful and available to us. All these data sources combined give us the possibility to look at research from a broader perspective, and to understand impact beyond what classic bibliometrics alone can show. Using the data that are available, we are able to look for patterns and trends in the university’s role in society, especially its outreach to society, to academia, and to the industry sector through various forms of collaboration.

How do we choose to work with this data? Are there as many alternative methods as there are data sources? We, a group of librarians at Chalmers Library with a passion for metadata, have found that different methods of data visualization have helped us both in handling our data and in analyzing and presenting it to the university. In our experience, one of the most interesting aspects of visualizations is that, when successful, they encourage discussions and work as stepping stones to further, more in-depth questions about the nature of our research.

The first visualizations we presented, using data from our local repository, were geospatial maps plotting Chalmers’ national and international research collaboration, measured through co-authorship. These were interesting because they showed a geospatial pattern, but they also encouraged a conversation about research collaboration. Now we have an image of the spread of collaboration, but can we also answer questions about how our collaboration patterns have evolved over time? What impact does one form of collaboration have compared to another? In this respect, the images we have created showing geospatial patterns have helped us move forward in our work.

One more recent experiment is a network analysis I have underway in collaboration with a Chalmers PhD student who studies the sociology of science. In this visualization project, we use publication metadata from our local repository and focus on the author field of publications. Not only do we want to make a good visual representation of the social network existing within our repository, we also want to explore patterns and trends in the network – both simply by looking at it and through social network analysis measures (describing the density of the network, detecting subgroups within the network, and identifying important roles such as stars, bridges and gatekeepers among individual researchers). We have already seen that co-authorship patterns, and perhaps social practices concerning authorship, vary between different departments at our university. This of course makes us want to dig deeper into our network.
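
For readers less familiar with these measures, here is a toy sketch (in Python with the networkx library, not our actual workflow) of the quantities involved: the density of a co-authorship graph, community detection to find subgroups, and centrality measures that can point to stars, bridges and gatekeepers. The author names and edge weights are invented.

```python
import networkx as nx

# Invented co-authorship edges: (author_a, author_b, number of joint papers).
edges = [("A", "B", 5), ("B", "C", 2), ("C", "D", 1), ("A", "C", 3), ("D", "E", 4)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Density: how close the network is to a fully connected graph.
print("density:", nx.density(G))

# Subgroups: community detection via modularity maximization.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print("communities:", [sorted(c) for c in communities])

# Roles: high betweenness suggests bridges/gatekeepers, high degree suggests stars.
print("betweenness:", nx.betweenness_centrality(G))
print("degree:", nx.degree_centrality(G))
```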

Questions we have posed when looking at the network visualization are:

  • What types of practices can we detect?
  • What different types of roles are visible in the community?
  • How can we use social network analysis as an alternative method to understand our research community better?
  • How would a set of ‘social network analysis metrics’ complement our classic bibliometric metrics?
  • The university is concerned with equality between male and female researchers; how can we, through local data and social network analysis, apply a gender perspective to the author based networks images we have created?

Figure 1. The network above was a starting point in our network visualization project. It shows co-authorship between Chalmers departments. The data we used came from our local repository. A thicker line indicates a larger number of co-authored publications. From this analysis, we have moved on to the author fields, looking at individual researchers within the departments.

At this point we pose more questions than we answer, and is that not a sign of development? And a sign that this method is interesting in terms of what it does to stimulate discussion on what we can do with our metadata?

As I’ve expressed, there are many pros to using data visualization techniques, but of course there are also limitations and responsibilities. They say that you should be careful when communicating through images, because images are both effective and powerful, and can manipulate the mind.


Figure 2. An example of a data visualization that is manipulated through formatting and inversion of scale, courtesy of Michael Sandberg’s datavizblog.com

Think of a line graph, a classic visualization – is there a more effective way of showing a trend? And how easily can it be manipulated? Fairly easily, as it turns out: by changing the scale of the image or manipulating the data used to make it (as we see in Figure 2).
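
As a small illustration, the Python/matplotlib sketch below (with invented numbers) plots the same series twice: once with a conventional y-axis and once with the axis inverted, the same device used in the Florida chart in Figure 2. The trend appears to reverse even though the data do not change.

```python
import matplotlib.pyplot as plt

years = [2005, 2007, 2009, 2011, 2013]
counts = [520, 480, 450, 700, 720]  # invented values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.plot(years, counts)
ax1.set_title("Standard axis: a rise")

ax2.plot(years, counts)
ax2.invert_yaxis()  # flipping the axis makes the same rise look like a fall
ax2.set_title("Inverted axis: an apparent fall")

plt.tight_layout()
plt.show()
```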

Through our experiences experimenting with data visualization methods at Chalmers Library, we have grown to appreciate and respect data visualization as a powerful tool: above all as a way to better understand our data, to present it to the university in an easily readable format, and to use the images created as stepping stones for further analysis. Yet we have only just started this quest, and we are eager to see where our visualization projects lead us.

Analyzing altmetrics at OHSU Library

This is a guest post contributed by Robin Champieux, Scholarly Communication Librarian at Oregon Health & Science University in Portland, Oregon. She leads efforts that contribute to the pace and impact of scholarly communication by partnering with OHSU research, teaching, and student communities on issues relating to publication, public access, data sharing, and scientific contribution. Robin is the co-founder of Advancing Research Communication & Scholarship, a multi-disciplinary conference focused on new modes and models of scholarly communication. She is a passionate advocate for open science and the success of early career researchers.

I have a confession to make. The Oregon Health & Science University (OHSU) Library is entering its third year of using altmetrics to answer impact questions, but some of the time I still feel like I’m winging it, I often have more questions than answers, and, if local trends continue, I may need to clone myself… but I love it! The uncertainty and interesting challenges motivate me to learn more, work closely and thoughtfully with researchers and my colleagues, and ultimately to do the experimentation and planning needed to build a successful service.

Like many of our peers, the OHSU Library began using altmetrics to gain a fuller understanding of the impact of our institution’s research. We wanted to provide faculty, students, labs, and administrators with data they could use to track and tell their impact stories. Initially, we focused on the creation of personal impact profiles, which users could update and pull from as needed. The service has evolved into something more consultative. I work with individuals and organizations to formulate impact questions, gather and analyze data, and translate this information into compelling stories.

I spend a lot of time thinking about this work, and transitioning trial and error efforts to replicable, trusted methodologies. Below I describe the themes that run through the questions, challenges, and progress I’ve encountered or made over the last two years. I think my local experience is echoed by many of the conversations and developments happening in the broader altmetrics, research, and library communities.

Context
My experience using altmetric and bibliometric data for impact assessment and storytelling has taught me that the circumstances of attention matter, really matter. Understanding this for some data is easier than for others, but it all requires work. Take a set of tweets, for example. To effectively and convincingly use Twitter attention to understand and communicate impact, I need to know (and show) who was talking about the research and the nature of their networks. If I’m working with a lab group interested in telling a story about public engagement, tweets between 100 specialized neuroscientists aren’t the right evidence. Similarly, the originality and depth of attention are important. The scientists and administrators I’m advising may appreciate data showing how a journal’s press release was picked up and republished, but they are not going to tell a story with it. Rather, they want to know about and promote original news coverage and social media discussions of their work.

I’ve also learned that some kinds of altmetric data are better for storytelling than others. High download counts help me identify engagement, especially for recently disseminated outcomes, but I would not use them to communicate impact in a P&T dossier or NIH biosketch. What I would incorporate are the impacts the downloads helped me uncover, such as inclusion in curricula or the use of a measure in clinical trials.

Authority
One way or another, all of the individuals and organizations I’ve worked with have asked two questions upon reviewing their altmetric data: what does that mean and why is it important? These are good questions, and ones we should be addressing for all of the data we use to uncover and tell stories of impact. It’s not enough for the scientists, students, and institutes I work with to reference general statements about the growing importance of the web to scientific communication. Honestly, most of them don’t care. Rather, they want to know, have confidence in, and communicate the relationships between online attention and specific kinds of impact. We should demand the same for bibliometric data, by the way.

I believe this necessitates both the use of, and new research on, how scholarship and scientific information are sought, endorsed, and applied by different communities. For example, my colleague Tracy Dana and I will be trying to suss out the potential relationships between the medical literature, research on how physicians seek and use information, and bibliometric and altmetric data, in order to better understand patterns and evidence of clinical impact. Along with existing resources like the Becker Medical Library Model for Assessment of Research Impact and the NISO Alternative Assessment Metrics (Altmetrics) Initiative, I believe this kind of work will inform the creation of more trustworthy and compelling methodologies for answering impact questions.

Sustainability
When I present on the impact assessment work I’m doing, the question of sustainability is inevitably raised. It’s true, I can lose days analyzing tweets, slogging through Google Books or Patents, and playing in The Lens. That said, I’m optimistic about the viability of impact assessment services in all kinds and sizes of libraries. For one, this work is being incorporated into local and global discussions about the roles of libraries and librarians in scholarly production and communication, with several institutions creating full-time impact assessment positions. For example, look at the work Karen Gutzman is doing at Northwestern University. Additionally, I’m convinced the most successful service models will be team-based efforts, which leverage the disciplinary knowledge of liaison librarians.

Finally, I’m confident that we’ll begin to see more of the context and authority issues described above addressed through code, and reflected in the metrics and benchmarking from data providers. Already, tools like PlumX and the Altmetric Bookmarklet are essential to the work I do. As the intelligence of the data improves, I can focus more of my efforts and hours on the human-centered work of analyzing and translating the numbers into meaningful and actionable stories of impact.

 

Metrics for ALL

This is a guest post from Andrea Michalek, the Co-Founder and President of Plum Analytics. Andrea shares her thoughts about the current state of altmetrics and opportunities for the future.

The landscape around alternative metrics has been evolving rapidly. Having gone from a Twitter hashtag in 2011 to a key component of research evaluation at many institutions, metrics that move beyond citation counts and the Journal Impact Factor are here to stay.

When I was asked to write a blog post for the 2:AM conference blog, my first thought was, “Let’s stop calling it altmetrics!” Mike Buschman and I co-founded Plum Analytics in January 2012, and joined the bustling altmetrics community to share best practices and discuss different approaches in the field.

In April 2013, we wrote the article titled “Are Altmetrics Still Alternative?” We made the claim that “it is our position that all these metrics are anything but alternative. They are readily available, abundant and essential.”

The main points in that article still hold true today:

  • As the pace of scholarly communication and science advancement has increased, citation analysis is a lagging indicator of prestige. Citations can take 3-5 years to accrue the critical mass necessary for meaningful analysis.
  • Not all influences are cited in an article, thus leaving the whole measure incomplete. Research outputs other than a journal article are typically not cited.
  • Securing research funding is getting more competitive. When applying for grants, researchers’ most highly cited work will typically be several years old, and not necessarily most relevant to the grant application at hand. If researchers can show that their recent research is generating a lot of interaction in the scholarly community, that information can provide an advantage in this tight funding environment.

Two and a half years later, with over 250 universities, research institutes, funders, and corporations around the world using metrics from Plum Analytics to answer key questions about their research, we have learned even more from how real people, solving real problems, use these metrics on a daily basis. Highlighting a few key areas of learning:

Be Comprehensive
At Plum Analytics, we are not scholarly publishers. We did not start by looking at a journal article and trying to build better metrics around it. Instead, we began with a very different end in mind: using data about how people interact with research to tell the full story behind their work.

When you start with the question, “What do you consider to be your research output?” and you ask that across many different disciplines, you start to build a base that tells a more complete story of the outcomes of research. Working with librarians and others who support research, we now track over 40 separate types of research artifacts, from articles, to books, clinical trials, conference proceedings, datasets, figures, presentations, videos, and more. For example, when looking at digital traces of how a book has been interacted with, you can find indicators of how many times the ebook has been viewed or downloaded online, how many libraries hold the book in their collections, which Wikipedia articles reference the book, how many online reviews have been written, and what they say.

Moreover, to get a full picture of research, you need to look beyond a single type of engagement around it. Beyond cited-by counts, there are four categories of metrics to consider:

  • Usage – The raw engagement around research by clicking on a link, viewing the article, downloading the data, playing a video, etc.
  • Captures – A user has indicated that they plan to return to this artifact by favoriting, bookmarking, marking it as a reader, or otherwise digitally indicating their intent.
  • Mentions – blog posts, book reviews, comments, and Wikipedia mentions
  • Social Media – Tweets, likes, +1s, shares

Metrics can now be harvested and applied to research around each of these, in addition to citations, giving a much more comprehensive and holistic view of impact. These new metrics are also much more timely than citation metrics and can keep pace with new formats much faster than the entrenched, legacy practices.
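
As a rough sketch of what this grouping looks like in practice (the field names and counts below are hypothetical, not actual PlumX data), metrics for a single artifact can be organized by category and summarized per category rather than collapsed into one score:

```python
# Hypothetical metrics for one research artifact, grouped by category.
artifact_metrics = {
    "usage":        {"html_views": 1200, "pdf_downloads": 340, "video_plays": 15},
    "captures":     {"mendeley_readers": 85, "bookmarks": 12},
    "mentions":     {"blog_posts": 3, "wikipedia_references": 1, "reviews": 2},
    "social_media": {"tweets": 47, "facebook_likes": 20, "shares": 9},
    "citations":    {"scopus": 6, "crossref": 5},
}

# Summarize engagement per category instead of collapsing to a single score.
summary = {category: sum(counts.values())
           for category, counts in artifact_metrics.items()}
print(summary)
# {'usage': 1555, 'captures': 97, 'mentions': 6, 'social_media': 76, 'citations': 11}
```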

Measure at the Artifact Level – Not the Journal
There are bad articles in high impact factor journals, and great articles in low impact factor journals.  Even if Journal Impact Factor (JIF) were a perfect measure of the quality of a journal, it would still be an inappropriate measure of the quality of a particular article in that journal.

Many studies have been performed looking at the serious issues that are caused when only looking at JIF. For a regional example of the harm these practices can cause, see: The hidden factors in impact factors: a perspective from Brazilian science.

In the paper The Skewed Few: Does “Skew” Signal Quality Among Journals, Articles, and Academics?, Joel Baum makes the statement that “The idea that a few “top” authors from a few “top” institutions publish a few “top” articles in a few “top” journals has a certain, orderly appeal to it. But this order is not without consequences.” His paper goes on to point out facts like:

  • 20% of the papers examined had half of the citations
  • Fewer than 20 schools account for over half of all citations

He describes how the skew contributes to the Matthew Effect where the rich get richer and the poor get poorer. (For those without access to the toll access paper, you can view the preprint or a presentation related to this work.)

The Journal Impact Factor has come under considerable scrutiny and criticism, notably from initiatives like the San Francisco Declaration on Research Assessment, where over 150 scientists and 75 research organizations stood against having a single score for a journal represent the quality of the articles it contains. Although Eugene Garfield never intended JIF to be used to assess quality, this was inevitably what happened.

Better Visualizations Lead to Better Understanding
As we look towards the future of more modern metrics, we believe that any single score (even at the article level) is overly simplistic in what it represents, and cannot be used as a responsible indicator, especially when comparing across disciplines. It is therefore essential to deliver these metric data in ways that make them understandable, without resorting to a single score per document. The key to quickly navigating complex data and gaining insight from it is to use elegant and simple visualizations to do the hard work for you.

Article Level Metrics are just a Building Block
Calculating article-level metrics, even when done comprehensively across all five categories of metrics, is just a building block. The power and the insight come from being able to pull them together to tell the stories of the people, the groups they are affiliated with, and the topics they care about.

Metrics that Keep Pace with Online Scholarly Communication
As we look at how scholars and others interact with research outputs online, it is clear that the pace of communication, the amount of data produced, and the variety of mechanisms to consume it will all continue to grow. When designing measurement instrumentation, it needs to function in near real time and at web scale. The technology infrastructure needs to be robust and enterprise-grade. And the data that we collect, and the way we represent them, need to capture today’s interactions and yet be flexible for the future.