While the grounded theory method has been in use for many years in the social sciences, it still has a minority status in IS research (Lehmann, 2001b). Thus, some critical and perhaps more obscure methodological aspects need to be discussed if one wants to dispel misconceptions. These characteristics are discussed next.
As has already been mentioned, in grounded theory methodology the bulk of the literature review is conducted after the emergence of substantive theory. It is then, and not before, that data from the extant literature contributes to the study (Eisenhardt, 1989, p. 278; Urquhart, 2001, p. 366). The approach of reading the literature first with the objective of identifying gaps and relevant theories is the opposite of the role the literature plays in grounded theory. Glaser (1998, p. 67) could not be more specific in this regard:
Grounded theory’s very strong dicta are a) do not do a literature review in the substantive area and related areas where the research is done, and b) when the grounded theory is nearly completed during sorting and writing up, then the literature search in the substantive area can be accomplished and woven into the theory as more data for constant comparison (Glaser, 1998, p. 67).[7]
While uninformed observers of the grounded theory method may construe these dicta as a neglect of the literature (Glaser, 1998, p. 360), nothing could be further from the truth. The purpose of the dicta above is to keep the researcher as free as possible of influences that could restrict the freedom required for theoretical discovery, not to ignore extant and relevant knowledge (Glaser, 1998). Adopting a grounded theory method commits the researcher to a rigorous and constant literature review process that occurs at two levels:
the researcher must be constantly reading in other substantive areas to increase their theoretical sensitivity, and
conceptual emergence forces the researcher to review convergent and divergent literature in the field related to the developing concept.
Because emerging theoretical construction drives the literature review, the extant literature is incorporated into the study as data. Therefore, most of the relevant reviewed literature will be presented, as it finds its way into, and becomes integrated with, the substantive theory. This closely reflects the nature of the method and the role and place of the literature within it. Forcing a typical PhD dissertation’s ‘Chapter 2: Literature Review’ would be incongruent with grounded theory and methodologically unsound, detracting from the true role of the literature in this type of research.
The qualitative datum is defined as a string of words capturing information about an incident; this incident (or unit of analysis) represents an instance of a concept coded and classified during the coding process (Van de Ven and Poole, 1989). The source of the datum may be a person, a group, a document, an observation, or extant literature.
Incidents are indicators of a concept. Figure 5.4, ‘The concept indicator model’ (Glaser, 1978, p. 62), shows a model based on the constant comparison of indicators. In this model, the comparison of indicator to indicator generates a conceptual code first, and then indicators are compared to the newly emerged concept, further defining it. The constant comparison of indicators confronts the analyst with similarities, differences, and consistency of meaning, which result in the construction of a concept (or category) and its dimensions (Glaser, 1978).
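The mechanics of this model can be caricatured in a deliberately simplified sketch. In the real method the comparisons rest on the analyst’s judgement, not on any mechanical measure; here a crude keyword-overlap test, invented incident strings, and an arbitrary threshold stand in for that judgement purely to show the indicator-to-indicator and indicator-to-concept comparison structure:

```python
# Illustrative sketch only: real constant comparison relies on the analyst's
# interpretation, not keyword overlap. All names, data, and thresholds here
# are invented for illustration.

def keywords(indicator: str) -> set:
    """Crude stand-in for the analyst's reading of an incident."""
    return set(indicator.lower().split())

def constant_comparison(indicators):
    """Group indicators into emergent 'concepts' by similarity."""
    concepts = []  # each concept: {'label': ..., 'indicators': [...]}
    for ind in indicators:
        for concept in concepts:
            # Indicator-to-concept comparison: does the new indicator share
            # enough meaning with those already classified under the concept?
            shared = keywords(ind) & keywords(" ".join(concept["indicators"]))
            if len(shared) >= 2:
                concept["indicators"].append(ind)  # further defines the concept
                break
        else:
            # No fit: the indicator seeds a new conceptual code.
            concepts.append({"label": ind, "indicators": [ind]})
    return concepts

incidents = [
    "vendor delayed the interface delivery",
    "vendor delayed the test environment",
    "team leader escalated the staffing issue",
]
result = constant_comparison(incidents)
# The two 'vendor delayed' incidents converge into one concept;
# the escalation incident seeds a second.
```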
Incidents had many sources, from actors’ accounts to field observations. However, interviews provided the study’s most significant building block. These interviews focused on the client’s core project team, comprising the project manager and the associated team leaders, with multiple interviews over a period of time. The decision to include team members was based on the need to obtain a wide range of views from the people confronting the day-to-day issues and holding similar (but not equal) levels of responsibility in the IT project. This was important because:
[g]rounded theory accounts for the action in a substantive area. In order to accomplish this goal grounded theory tries to understand the action in a substantive area from the point of view of the actors involved. This understanding revolves around the main concern of the participants whose behaviour continually resolves their concern. Their continual resolving is the core variable. It is the prime mover of most of the behaviour seen and talked about in a substantive area. It is what is going on! It emerges as the overriding pattern. (Glaser, 1998, p. 115)
Therefore, this study’s focus on actions and accounts of actions is congruent with the assumptions of grounded theory. Furthermore, action occurs in a context and within a process enacted and constructed by the actors. Consequently, the study does not focus on properties of an actor or unit but on properties of a process. This is discussed next.
My study centred on properties of a process, not on properties of a unit (such as a person, group, or organisation). Properties of a unit are more relevant to descriptive qualitative studies, while properties of a process are more relevant to studies aiming at theoretical conceptualisation (Glaser, 1978; Glaser, 2001; Glaser, 2002).
More specifically, the study’s aim was to provide a theoretical conceptualisation of a basic social process (BSP). Basic social processes can be of two types: basic social psychological process (BSPP) and basic social structural process (BSSP). BSPPs refer to processes such as becoming (e.g. a nurse, a leader, a system) or inspiring (e.g. followers, peers) and are useful in understanding behaviours. BSSPs are concerned with social structures in a process such as centralisation, organisational growth, outsourcing, or recruiting procedures (Glaser, 1978).
BSPs are a type of core category (though not all core categories are BSPs) exhibiting the following characteristics (Glaser, 1978):
BSPs ‘process out’ at least two emergent stages that ‘differentiate and account for variations in the problematic pattern of behaviour.’
BSPs may not be present in a grounded theory study (i.e. researchers may not have two or more stages in the central concept).
BSPs are ideally suited to qualitative studies where the analyst observes the evolution of a process over time (e.g. influencing outcomes in a project).
BSPs are labelled by a gerund that reflects their evolving nature and a sense of motion (e.g. resolving, influencing, communicating, becoming).
As the second point above indicates, BSPs may or may not be present in a grounded theory study; their presence (or lack thereof) further guides the research design and execution. Therefore, understanding the distinction between unit-based and process-based sociological analysis is critical to the research design, given the particular demands each places on sampling, analysing and theorising (see Glaser, 1978, pp. 109-13, for a comprehensive listing of these differences).
A basic question in case study research is concerned with the single-case versus multiple-case design of the study. In case study research, researchers determine a priori if the study is going to be single-case or multiple-case based, depending on the nature of the inquiry (Yin, 1994). Yet, under a grounded theory approach, that assumption could not have been made at the start of the research simply because at that stage it was unknown if the case would allow pattern detection and saturation. In grounded theory, sampling is driven by conceptual emergence and limited by theoretical saturation, not by design. As Glaser and Strauss (1967, p. 45) explain:
Theoretical Sampling is the process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges. This process of data collection is controlled by the emerging theory, whether substantive or formal.
Consequently, the selection of data sources was neither random nor an entirely a priori determination. For example, I decided a priori that a combination of data sources was most appropriate for this study. However, the specific details of what data was available and which datum was relevant depended on the emerging data.
Another critical a priori sampling decision was to control variation by organisational delimitation while allowing for within-case diversity of access to multiple data sources. The sample was under the unifying influence of the cultural and organisational environment, which made it possible to control environmental variation while clarifying the domain of the research, as suggested by Pettigrew (1988).
One of the dangers in any type of research is to sample too superficially. To counteract this risk, the foundation case was selected because it provided the ‘meatiest, most study-relevant sources’ (a strategy recommended by Miles and Huberman [1994]). There were also opportunistic reasons to select the case. The selected project provided the best accessibility, as most people in the core project team were (usually) based in the same city in which I was located. This practical consideration was later proven critical as in situ observations gave me a better appreciation of what was going on and of what was important to the actors.[8]
As it happened, the single case was sufficient to provide enough data for the exploratory study, as Yin (1994) would perhaps have suggested. However, this was because the initial project resulted in a much richer source of data than first expected, with the project taking six times longer than planned to complete and presenting a substantial number of incidents for comparison and theory construction. While the argument presented by Yin (1994) for revelatory single case studies was ex post valid for my research, the validity of the single case study was based on the richness of the case. This richness allowed reaching conceptual saturation and thus permitted the closure of the grounded theory study, something I did not know a priori.
The objective of the research is to generate theory ‘that accounts for the patterns of behaviour which is relevant and problematic for those involved’ (Glaser, 1978, p. 93). To achieve this goal the analyst must discover the core category and delimit the investigation around it. The core category is the pivotal point for the theory; most other categories relate to it, and it accounts for most of the variation in pattern and behaviour. The core category ‘has the prime function of integrating the theory and rendering the theory dense and saturated as the relationships increase’ (Glaser, 1978, p. 93).
In my study, the core pattern was ‘resolving conflicts’, a basic process that engaged actors (people and organisations) in a series (pattern) of activities aimed at resolving incongruence and misunderstandings. Resolving conflicts is how managers of meta-teams (and the component teams) achieve project delivery. The core category in the resolving conflicts pattern was ‘trust’, around which a number of key interrelated categories explained the core pattern.
According to Glaser (1998), the notion of induction versus deduction is often an oversimplification of the complex patterns of thought present in grounded theory development. While grounded theory is classified as an inductive method (e.g. Glaser, 1978; Glaser and Strauss, 1967; Martin and Turner, 1986; Strauss and Corbin, 1998), theoretical sampling is a deductive activity grounded in induced categories or hypotheses. This acts as a virtuous circle where ‘[d]eductions for theoretical sampling fosters better sources of data, therefore better grounded inductions’ (Glaser, 1998, p. 43). The difference between an inductive and a deductive method relates to ‘pacing’: whether the researcher looks at data first and then forms hypotheses (inductive), or forms hypotheses first by conjecture and then seeks research data to verify the deduction (deductive) (Glaser, 1998). This cycle of induction and deduction is represented in Figure 5.5, ‘The inductive-deductive cycle of the grounded theory method’.
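The pacing of that cycle can be sketched as a loop: induce categories from the incidents collected so far, deduce which sources should bear on them, collect, and repeat until no theoretically relevant source remains. The sketch below is a hedged caricature; the sources, category-extraction rule, and stopping condition are all invented for illustration:

```python
# Sketch of the inductive-deductive cycle. Everything here (data,
# helper names, the simplistic category rule) is invented to illustrate
# the loop, not to model real theoretical sampling.

def induce_categories(incidents):
    """Induction: derive category labels from collected incidents."""
    return {incident.split()[0] for incident in incidents}

def deduce_next_sources(categories, all_sources):
    """Deduction: pick sources expected to bear on the emerged categories."""
    return [s for s in all_sources if any(c in s for c in categories)]

sources = {
    "manager on delays": ["delays in vendor hand-over"],
    "leader on delays": ["delays in testing sign-off"],
    "manager on budget": ["budget revision meeting"],
}

collected, seen = [], set()
frontier = ["manager on delays"]          # initial, a priori choice
while frontier:
    src = frontier.pop()
    seen.add(src)
    collected += sources[src]             # collect and code
    cats = induce_categories(collected)   # induction
    # deduction controls what to collect next
    frontier = [s for s in deduce_next_sources(cats, sources) if s not in seen]
# Loop ends when the emerged categories point to no unseen source:
# a toy analogue of saturation. 'manager on budget' is never sampled
# because no emerged category made it theoretically relevant.
```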
Two practical aspects of the research facilitated both induction and deduction activities, namely: (a) recording and transcribing interviews, and (b) using a qualitative data coding and analysis tool. These activities are discussed in the next two sections.
All interviews were recorded using analogue and digital technology. The analogue tape was then professionally transcribed and turned into analysable text. I used ATLAS.ti as the main tool to code and analyse the data and to collect memos. For example, while proceeding through open coding of a particular interview for the first time, I would load the primary document into ATLAS.ti and simultaneously play the MP3 version of the interview on my computer. This had two effects: first, it improved recollection and mental activity (the interview was recreated with sound, not just words), which increased the production of memos. Second, it allowed the correction of transcription errors, which were frequent due to the jargon used by the actors.
Glaser does not encourage the use of tape recording (Glaser, 1998). He argues that recording is unnecessary because the researcher is after important concepts and patterns, not precise accounts as in other, more descriptive methods. Therefore, for conceptualisation purposes the actual words are not as significant, as they belong to one of many possible units in a process. Another perceived problem with recording is that it becomes time consuming and inefficient for this type of research: recorded interviews must be transcribed and then corrected, drawing the analyst into many unimportant passages. Glaser is very conscious of wasting time on what he considers superfluous activities.
However, I was convinced at the beginning of the study that recording the interviews was appropriate and necessary. Consequently, I decided to: (a) take a few notes during the interview; (b) do post-interview notes when required; and (c) record the interviews. This extra effort was justified as a risk mitigation strategy. By taking notes, I could then use these notes to record memos or to guide my next interview while the previous one was being transcribed. Furthermore, waiting for transcriptions was seldom necessary as I was able to control my pacing thanks to having open access to actors and data.
As Glaser predicted, the extra time involved in open coding full interviews, rather than coding just the important concepts, was substantial (ranging from 40 to 60 hours each for the first few one-hour interviews to 8 to 20 hours each for the last few). However, this also allowed me to relive the interviews, and the detailed analysis helped me to acquire a deeper understanding of the issues. This understanding facilitated the emergence by discovery of the core concept and made me, the researcher, more comfortable with the coding activity.
It is probable that without recording and coding literal transcriptions I could have saved some time; however, listening to the actors often triggered theoretical memos and facilitated the finding of relations – therefore, it was a productive activity, not a wasteful one. Moreover, listening and reading the interviews matched my cognitive style and therefore facilitated emergence.
While I found re-listening to the interviews and analysing the full text very rewarding and interesting, it must be recognised that Glaser is correct in his assertions – neither recording nor taking extensive notes are necessary activities for conceptualisation.
Nevertheless, not recording is too risky a strategy for a PhD student to follow. Above and beyond fulfilling a PhD study’s need for evidence, recording and transcribing interviews lets researchers revisit and re-code text as more evidence emerges and patterns are detected. The ability to access the full transcription and to replay the interview at any time is a distinct advantage, especially in studies of organisational cases that are conducted over a long period of time, at different points in the life cycle of the analysed phenomena. In any case, the iterative nature of grounded theory demands the constant comparison of incidents with already collected data. In doing this, previously undetected incidents are likely to emerge. These new incidents benefit the study and therefore justify the extra effort required to record, transcribe, and code potentially irrelevant data.
Glaser (1998, pp. 185-6) also warns against the ‘technological traps’ of data analysis tools such as NVivo or ATLAS.ti because they create unnecessary restrictions, inhibit the researcher’s development of skills and impose time-consuming learning curves. Glaser perceives computing technology as an easy way out and as a hindrance rather than an aid to creativity. This is not my experience. Yet computing tools can be used in many ways, and some of those uses will indeed have the negative consequences Glaser mentions.
Using ATLAS.ti in my study for open coding and memoing was a substantial advantage. It provided a fast way of checking and comparing incidents and the flexibility of exporting data to other tools as I perceived appropriate. The software’s ability to collect memos allowed the efficient writing, analysis, and retrieval of memos at any time in the process. It is also true that ATLAS.ti was not everything I needed. I used additional techniques and tools: butcher’s paper and a whiteboard to draw box diagrams representing the interrelation of emerging concepts; notepads and flowcharting software to draw many diagrams; a word processor to combine and analyse sets of incidents and memos; and mind-mapping software (MindManager) to organise ideas and themes.
Therefore, Glaser is correct in asserting that this is creative work, yet the generalisation that technology restricts creativity was falsified by this study’s experience, as people familiar with computers do creative work with them and around them.[9] ATLAS.ti did not impose a significant learning curve; the software was found to be intuitive, the tutorials took a day to do – and after that I did not need to refer to the software manuals. Working with ATLAS.ti was not different from working on paper, yet retrieving and connecting concepts was extremely easy and efficient.
Finally, while ATLAS.ti has some automated coding facilities (e.g. coding all occurrences of a word or phrase), coding was done entirely manually, reading the text line by line while endeavouring to explain the incidents. Automatic coding is a disadvantage for the grounded theorist as it obscures the discovery of what is going on in the text; in this regard, Glaser’s reservations are fully justified.
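To make the contrast concrete, automatic coding of the kind set aside here amounts to little more than pattern matching. The sketch below (with an invented transcript fragment and code label, not ATLAS.ti’s actual facility) shows why it finds strings rather than meaning:

```python
import re

# Naive automatic coding: attach a code to every occurrence of a phrase.
# Transcript and code label are invented. The point: this matches strings,
# not meaning; it cannot distinguish a genuine incident of 'trust' from an
# incidental mention, which is why coding was done manually.
def auto_code(text: str, phrase: str, code: str):
    """Return (position, code) pairs for every occurrence of the phrase."""
    return [(m.start(), code)
            for m in re.finditer(re.escape(phrase), text.lower())]

transcript = "We trust the vendor, but trust had to be earned after the delays."
hits = auto_code(transcript, "trust", "CODE_TRUST")
# Both occurrences are coded identically, regardless of what each one means.
```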