This guest post was contributed by Jean Liu, Product Development Manager at Altmetric. Jean is a neuroscientist by training, and was previously the data curator and blog editor for Altmetric.
On the opening day of the 1:AM London conference, the afternoon crowd was treated to Dan O’Connor’s engaging and thought-provoking talk on the ethical implications of research into online social content. O’Connor, who is the Head of Humanities and Social Science at the Wellcome Trust, began by explaining how his PhD studies in history, experience at a social media consultancy, and subsequent postdoctoral studies in bioethics brought him into a rather new field of study that examines the ethical implications of research involving social media.
O’Connor outlined how research involving social media tends to fall into two categories: 1) research using social platforms (e.g., patient-led research, which might involve sites such as PatientsLikeMe or Ginger.io), and 2) research “into” social content (e.g., online communities, such as a “Cancer Survivors” Facebook page). Both categories of research carry their own ethical implications, but it was the latter topic that O’Connor chose to focus on for the rest of his talk.
The case of the tweeting patient
O’Connor used several vivid examples from healthcare research to illustrate the ethical issues that have arisen when attempts are made to study the behaviour of online communities. The first case was that of a tweeting kidney patient, “Jane”, who had used her personal Twitter account to post various updates and comments about her condition. However, without her knowledge, her kidney specialist was monitoring her tweets. Jane later tweeted that her specialist had spoken to her and exposed gaps in her knowledge about her condition. “How embarrassing is that!” she tweeted, and although her tone was lighthearted, there was a hint that she had held some expectation of privacy.
Did Jane’s specialist violate her trust? While Jane probably understood that she had been speaking in an online public space, she wouldn’t necessarily have expected to be monitored by her healthcare practitioner. O’Connor argued that this kind of monitoring and use of Jane’s online comments could have terrible consequences. What if Jane had had a more stigmatising disease, like HIV? What if Jane had posted about behaviours that she wouldn’t have wanted her specialist to know about? And, more worryingly still, what if insurance companies began monitoring what people like Jane say or post online?
“People are not fields.”
Through online platforms like Twitter and Facebook, like-minded people can come together and form their own communities to discuss their shared interests, experiences, problems, and more. But is it appropriate to monitor these communities?
O’Connor discussed how mental health practitioners and researchers are interested in monitoring the social content of Twitter in order to discover what mental health patients are talking about. The idea is that it may then be possible to uncover insights for improving mental health services and care, or even to identify “influential users” who could share their views with practitioners. However, many such communities don’t wish to be monitored, and they retain an expectation of privacy in spite of the fact that they are speaking in a public online space.
O’Connor described another online community (fans of “slash fiction”) that was approached to take part in research but formally declined to be interviewed in 2009, writing, “we decline to be the objects of your fascination”.
Expectations of privacy
O’Connor spent some time describing “Solove’s Taxonomy”, stating that privacy is invaded when information is collected, processed, and disseminated without a person’s consent.
But what about “cyber-patients”? These people still have some expectations of privacy despite the fact that they are knowingly posting on an open platform. This led O’Connor to ask: do social media users have a responsibility to protect their own privacy? And what responsibilities do researchers have to respect social media users’ privacy?
O’Connor closed his talk with some suggested approaches and solutions for avoiding ethical missteps. He first highlighted the importance of transparency, asking researchers to be open and honest when approaching online communities. He also stated the importance of community involvement, and that engaging with the community and being social was a useful way to start on good terms. His final word of advice was Wheaton’s Law (which I’ll censor a wee bit for this blog): don’t be a jerk.
Implications for the use of social media data in altmetrics
O’Connor’s talk was thoroughly enjoyable, intriguing, and refreshingly different. The audience heard some cautionary tales about the ethics of online healthcare research, but because this was an altmetrics-related conference, I would also have been interested to hear O’Connor’s thoughts on how social media data are being used in altmetrics tools.
Every single day, conversations from Twitter, Facebook, Google+, and various other sources are mined, collated, and displayed by the various altmetrics tools. Nowadays, it’s easy to find out who on social media is talking about specific papers, datasets, and more. Do social media users who post about scholarly content have the expectation of privacy when they post certain kinds of comments? Or do they have the expectation of publicity, as they may be interested in promoting content?
By contrast, news, blogs, peer review platforms, and the like are usually produced with the expectation of publicity, so perhaps there is less of an ethical quandary to navigate when mining those types of sources.
In conclusion, O’Connor’s talk was great food for thought for both altmetrics toolmakers and the research community at large. To me, it served as a strong reminder for everyone who researches or uses social media data to always be respectful and mindful of privacy – whether it’s the privacy of an individual or of an entire community.