By Gina Vale
Social media has become an indispensable aspect of everyday life. Platforms such as Twitter, Facebook, Telegram, Tumblr, Kik and Ask.fm have also become hosts to a virtual minefield of extremist and violent content and political misinformation. Countless academic articles and reports have drawn upon user-generated content alongside official group publications circulated on social media platforms as primary data for research. Such material includes hate-filled speeches by extremist ideologues, logistical communications for group recruitment and photographic and audio-visual evidence of violence and abuses committed by official members and inspired adherents. This has enabled greater understanding of the recruitment strategies, propagandistic narratives and influencers of extremist groups, and has opened a field of work focused on counter-narratives to prevent and deter individuals from succumbing to the influence of online extremists.
This all sounds very positive, but what are the costs and risks of this research? Here, I argue that the welfare of the researcher frequently slips through the net of the ethical principle to ‘do no harm’.
For researchers affiliated with an academic institution, ethical considerations form a necessary part of the planning process for any study involving human participants or human data. To receive approval from the independent Institutional Review Board (IRB), a proposal in either the social or medical sciences must demonstrate planned measures to mitigate harm and safeguard research subjects, particularly those considered especially ‘vulnerable’. However, an in-depth study of national and international research ethics policies, carried out in the United States in 2017, revealed a consistent lack of any explicit definition of what ‘vulnerability’ actually means.
The primary concern and mandate of the IRB is to safeguard the researched rather than the researcher. In cases of research conducted in unstable or insecure contexts, approval may be granted on condition that the researcher completes a risk assessment form demonstrating protection against potential physical harm to themselves, their colleagues and their equipment. However, there is usually little or no mention of safeguards against psychological harm to the researcher. Furthermore, if the research is based solely on documents such as social media posts or audio-visual recordings already in the public domain, generally no ethical approval is needed at all.
The vulnerability of researchers therefore largely falls through the cracks of the ethical review process. Moreover, in the context of gruelling publication pressures and the emphasis on terrorism as a fluid and fast-paced phenomenon, self-care is frequently overlooked or seen as a sign of one’s inability to ‘handle’ the demands of the field. Seamus Hughes, Deputy Director of the Program on Extremism at George Washington University, summed up the issue when he explained that the mental toll of research is ‘the type of thing that nobody really talks about’. Fortunately, the tide is changing.
Recently, attention has turned to the unseen, psychological toll of viewing extremist or violent content, particularly in light of brutal videos of amputations, killings and torture conducted by Islamic State (IS) and circulated internationally on social media platforms. Cottee and Cunliffe’s study of audience responses to watching official English-language IS propaganda videos is widely read and outlines the authors’ steps to mitigate harm to their participants, including removing scenes of explicit ‘ultraviolence’, in which the victim is killed. However, Carol Winkler, Professor of Communication Studies at Georgia State University, has directly challenged this approach. She suggests that simply omitting the scene of ‘ultraviolence’ does not remove the ‘promised violence’ that elicits the audience’s emotional reaction. This, then, prompts consideration of the psychological risks even to researchers whose work does not include or explicitly focus on the ‘ultraviolent’ visual climax of extremist imagery.
My own empirical research has thus far adopted a different approach to studying the policies and activities of IS. While documentary evidence provides a baseline for comparison, my focus has been the first-hand testimonies of (female) civilians who lived under the group’s rule. Because it involves engaging with survivors of conflict, my research required high-risk ethical approval. The scope of experiences of life under terrorist rule is broad. Some stories of defiance are uplifting, empowering and inspiring. Other narratives of captivity, genocide and sexual and gender-based violence are harrowing.
Extensive measures were put in place to protect my research subjects from the negative effects of narrating and reliving traumatic events. However, I participated in that process with each of them through almost seventy hours of one-on-one interviews. Just as violent imagery takes an emotional toll on an archival researcher, so too does the experience of listening to detailed recollections of traumatic experiences. This is known as ‘vicarious trauma’, a self-transformation in the researcher as a result of empathic engagement with reports and stories from trauma survivors. It is important to note that vicarious trauma extends beyond the allotted interview time. Long-term traumatisation resulting from research participation is acknowledged in the ethical approval process for the interviewee, but again is often overlooked for the interviewer. For weeks on end, I worked on transcription, transporting myself back to each interview. Where I had gained consent for audio recordings, I could also replay the women’s voices, rewinding and repeating second by second in my headphones. Even in a busy office environment, such a task can feel isolating and emotionally draining.
The issue of vicarious trauma is well researched within psychology, with a particular focus on the experiences of therapists and counsellors. However, research by Coles et al. reveals that academic researchers and analysts are at higher risk of vicarious trauma because we (largely) do not adopt an assistance role. Indeed, in a recent study of vicarious trauma in criminological research, Moran and Asquith found that the costs of engagement may be offset by the productive outputs and the impacts on policy and practice that the research may elicit. For those who are not in a position to offer practical benefit to their research subjects, what can be done?
Jane Palmer, Professorial Lecturer in the Department of Justice, Law, and Criminology at the American University, has written a helpful blog post on recognising and dealing with vicarious trauma among researchers. Among her key areas of concern and consideration is ‘the researcher’s current support systems’, and she encourages the development of policies and protocols to support colleagues experiencing the negative impacts of their research. With public debates concerning researcher (vicarious) trauma gaining momentum among analysts of terrorism and extremist violence, this is an important juncture at which to bring these conversations inside each research team and to ensure that preventative measures and support networks are in place.
The benefits of researching extremist violence are clear; collection of primary data can provide vital evidence of criminal activity and abuses by underground and terrorist organisations. However, this research comes at a cost. In order to sustain this work, greater measures are needed to recognise researcher vulnerability without stigma. The academic community is now starting to think about the right approach; I hope that the perspectives of early career researchers will be included. From my own experience, I would like to see greater support for and acceptance of the vulnerabilities that researchers face. This could come in the form of conference panels or support guides for scholars and analysts, as well as integration of researcher care within project design. By opening and normalising these conversations, we can progressively reduce the hidden costs of research and prioritise care for ourselves and colleagues.
This publication was produced as part of the XCEPT programme, a programme funded by UK Aid from the UK government. The views expressed do not necessarily reflect the UK government’s official policies.