Bethany Robertson The journey of thematic analysis: On acknowledging the self and making a research diary visible
The aim of this paper is to address the benefits of keeping a research diary that covers stages beyond data collection, such as transcription and analysis, as well as the tensions in determining the value of personal reflections if they are kept private. I will draw upon my diary, compiled during research into student experiences of separation from their pets, which includes reflections on challenges such as researching a population I could identify with, as a student who faced the same issues. According to Russell and Kelly (2002), it can be difficult to articulate how reflexivity is done in practice, but I sought to address this by producing a journal that demonstrates reflexivity throughout the research, rather than only retrospectively at the end. I used extracts from my diary in a chapter of postgraduate work to illustrate the subjectivities embedded in research practice and to guide readers on the journey that had led me to the creation of knowledge. Consistent with the anecdote outlined at the beginning of Charles’ (2014) paper, this research arose from a personal history of pet-keeping, so a claim to objectivity would be counterproductive. This poses the question of whether, akin to interview data generated by participants, diary entries may be considered data worthy of inclusion in an article to help others make sense of the findings. For example, I noted the monotony of transcription yet its importance for familiarising oneself with the data. But does transparency about such tensions compromise or enhance credibility?
References
Charles, N. (2014). ‘Animals just love you as you are’: Experiencing kinship across the species barrier. Sociology, 48(4), 715–730.
Russell, G. M. and Kelly, N. H. (2002). Research as interacting dialogic processes: Implications for reflexivity. Forum: Qualitative Social Research, 3(3).
Christina Straub Operationalisation of abstract concepts in qualitative research for data collection and data representation
My presentation will deal with some of the challenges that I have come up against as a PhD student whilst planning and designing my qualitative research project. It will cover the preliminary operationalisation of theoretical and/or abstract concepts. In my case it was the abstract concept of love that had to be operationalised and made “palpable” in an interview guideline. I would like to talk about the process of “borrowing” an operationalisation strategy from a different discipline, i.e. using concept analysis, a method employed predominantly (but not solely) in the nursing sciences. The concept analysis itself was carried out in an interdisciplinary fashion, drawing on four different disciplines: sociology, moral philosophy, psychology and the neurosciences.
In my presentation I would like to address the difficulty of locating the point of saturation during this concept analysis. I would like to discuss with the audience how and when we know that we have gained enough reliable and valid theoretical data to be able to draw conclusions about the most general features of a concept (here, love), so that it can be translated into an interview guideline.
Since this process is still ongoing in my current research stage, it would be very valuable for me to think together with other postgraduates about how they determine the point in their theoretical groundwork (be it literature review or operationalisation strategies) when enough valid data has been gathered about an abstract concept or theory to then expand into actual research methods.
Peter Traynor Hard to reach groups and non-probability sampling: how to ensure quality of data, and how to measure quality of data
This paper explores issues of “quality” in qualitative research in the context of a qualitative PhD. From this, the paper considers two issues. First: how to ensure that data are of a sufficient quality. Second: how to assess or measure this quality objectively.
For my PhD thesis I conducted qualitative research among two cohorts of young people – those who had carried, or were known to carry, a knife, and those who lived in areas of high knife crime but had not carried a knife. Contacting, recruiting and engaging with those who took part in the study presented significant challenges. Conducting research with young people can be challenging generally, and especially so if they are also offenders. Young offenders represent an especially “hard to reach” (Taylor and Kearney, 2005) group, and those who carry or have carried knives are a specific sub-set that is even harder to reach. A non-probability sample was therefore selected, and within this a purposeful sampling strategy was adopted.
A significant concern, given the chosen methodology, was that the research would not meet accepted criteria of truth, reliability and generalisability – that is, it would not be of sufficient quality or rigour. This paper argues for alternative criteria of quality based on accepted notions of plausibility and authenticity. These are not new concepts; rather, they have emerged over a long period of debate within qualitative research, taking in epistemological, ontological and methodological imperatives.
A second issue has been less discussed. If we accept that it is possible to produce data of quality, how do we measure the level of quality? It usually seems acceptable to argue that qualitative data are useful without discussing how useful. But is it possible to start to build a set of standards by which we can judge qualitative research? This paper will make some proposals as to how this might be approached, reflecting on my own research and on broader changes in the research environment, especially around the impact agenda.
Jaroslava Tomanova Understanding international public policy transfer: sampling strategy and language dilemmas
In my PhD research I focus on cultural policy (public policy in the area of culture and the arts) and the international transfer of policy ideas in this field. My research aims to explain the emergence and current existence of the concept of the creative economy in the Czech Republic. Rather than evaluating the impacts and changes that have occurred since the creative economy appeared in cultural policy discourse, I am exploring the subjective experiences of people who are involved in or affected by the transfer process. I aim to understand and explain the procedures and rationales leading to the adoption of creative economy rhetoric, which is in every sense a study of power and influence. In the presentation I will introduce two issues which in my view affect the quality of my research and provide fruitful ground for reflexive questioning. The first is my purposeful sampling strategy, shaped by various factors such as theoretical assumptions as well as former personal and professional relations within the field. The second is that designing research and collecting data across two different cultural and language conventions brings significant challenges and therefore has to be examined further.
Sophie Rutter “If only you’d got 15 interviews”
There are no fixed rules on how many interviews are needed for a study (Baker & Edwards, 2012, p. 6). Some argue that a single case can provide a “rich and deep understanding of the subject and breakthrough insights” (Patton, 2015, p. 266). Nonetheless, there are still expectations about how many interviews are necessary to get published in top journals and how many should be conducted for a PhD thesis. For my PhD I carefully selected 10 participants using maximal variation sampling, but encountered the feedback “if only you’d got 15 interviews”. Using Bryman’s (2012, pp. 19–20) suggestion that there are five factors to consider when deciding the size of a sample, I review whether having more interviews would have increased the quality of my research. I conclude that the quality of research should be judged not by the quantity of interviews but by the quality of the interviews and the quality of the analysis.
References
Baker, S., & Edwards, R. (2012). Introduction. In S. Baker & R. Edwards (Eds.), How many qualitative interviews is enough? Expert voices and early reflections on sampling and cases in qualitative research. National Centre for Research Methods. Retrieved from http://eprints.ncrm.ac.uk/2273/
Bryman, A. (2012). Expert voices: Alan Bryman, University of Leicester. In S. E. Baker & R. Edwards (Eds.), How many qualitative interviews is enough? Expert voices and early reflections on sampling and cases in qualitative research. National Centre for Research Methods. Retrieved from http://eprints.ncrm.ac.uk/2273/
Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). London: Sage.
Jess Elmore Researching ESOL (English for speakers of other languages) learners: A reflexive account
In this presentation I will discuss the challenges of reflexivity in relation to my doctoral research. I will identify how the interpretive nature of my research, and my positionality in relation to my participants, means that reflexivity is fundamental to my research’s quality. I will then consider how far and in what way I have been successful in my reflexivity.
My research is a case study of information sharing in two ESOL classes. ESOL learners are people who have come to the UK to settle and are learning English as part of adult basic skills. I used a range of methods in my research, making multiple visits to the two classes in my study with observation as the primary method.
I am very conscious of my difference from the learners who participated in my study. They were all women and all migrants with limited linguistic capital; many were people of colour, some had very little formal education and many were Muslim. As a highly educated, white, atheist woman I am mindful of hooks’ (1989) challenge that researching a group different from oneself risks perpetuating domination. In attempting to address this challenge through reflexivity I am equally mindful of Skeggs’ (2002: 361) warning that “the ability to be reflexive via the experience of others is a privilege, a position of mobility and power, a mobilization of cultural resources”.
I will discuss how I had hoped that collaborative reflexivity (Finlay, 2002) would help me address these challenges. This includes relative success in reflexivity as part of data collection: I found that keeping a reflective journal meant I could interrogate my own practices and multiple selves, and that prolonged engagement with participants meant their voices could be heard alongside my own. I will then contrast this with the far greater challenges I have experienced in the more solitary labour of data analysis. Finally, I will consider how far I overcame these challenges and what this means for the success of my doctoral research.
References hooks, b. 1989. Talking back: Thinking feminist, thinking black. New York, New York: South End Press.
Finlay, L. 2002. Negotiating the swamp: the opportunity and challenge of reflexivity in research practice. Qualitative Research, 2(2), 209–230. https://doi.org/10.1177/146879410200200205
Skeggs, B. 2002. Techniques for telling the reflexive self. In T. May (ed.), Qualitative research in action, pp. 350–74. London, United Kingdom: Sage.
Andy Bradshaw Judging the quality of qualitative research: A relativist perspective
Since the seminal work of Lincoln and Guba (1985; 1989), it has been well established that the criteria used to judge qualitative inquiry should be different from those used to judge quantitative inquiry. Since then, various conceptions of how we may judge qualitative research have emerged (e.g. trustworthiness, plausibility, authenticity, credibility). Whilst most can agree that ensuring quality is important, how these criteria should be applied has been the subject of considerable debate.
Most research in the field of sports psychology has adopted what has been termed the ‘parallel perspective’: applying a preordained list of quality criteria as a universal standard through which to judge the quality of all genres of qualitative research. However, this approach often results in studies being unfairly dismissed as poor simply because they are not judged on their own terms. Put differently, the criteria are not relevant to the context, purposes and values of some research traditions, resulting in a scenario akin to judging a good apple by the criteria of what makes a good orange. This polices what research can and cannot be conducted, and therefore risks unfairly discarding novel ways of doing research – ways that can provide us with enlightening perspectives and knowledge of phenomena.
In this presentation, I aim to critique the parallel perspective by applying it to my own research project. I will then propose an alternative way of evaluating the quality of qualitative research by presenting what is called the ‘relativist perspective’.