Data analysis and reduction

M. M. Khan (MK), B. Manduchi (BM), V. Rodriguez (VR), M. I. Fitch (MF), C. E. A. Barbon (CB), H. McMillan (HM), K. A. Hutcheson (KH), R. Martino (RM)

All interviews were transcribed verbatim using automated transcription software (Otter.ai, Otter Voice Meeting Notes), then cross-checked manually for accuracy and de-identified. The final transcripts were analyzed using a standard content/theme analysis [31]. The team of researchers, with collective expertise in the clinical care of head and neck cancer patients, SLP practice, and/or qualitative analysis, planned the multi-step analysis process:

1. 20% of the transcripts from the PM/UHN site were randomly selected for independent review by two raters (MK, MF), with the aim of generating coding categories.
2. These two raters met to discuss their observations and reach consensus on the content identified and how that content 'fit' together to build a coding framework.
3. The remaining researchers (BM, VR, RM) independently reviewed the same transcripts and applied the coding framework derived in step 2.
4. All researchers met to finalize the coding framework through consensus discussion.
5. The final coding framework was applied to all interviews at PM/UHN, with each transcript independently reviewed by two of the trained raters.
6. The paired raters met to discuss their coded transcripts, reconcile any discrepancies through consensus, and identify any new codes not previously captured.
7. All raters met to review and discuss the data, with the aim of reaching consensus on the final analysis and major content categories.
8. Each rater was assigned one or more major categories and independently generated a brief summary and key messages for each assigned category by reviewing all transcripts and borrowing from the participants' voices.
9. To ensure accountability, all raters met to discuss and agree on the key messages for each coding category.

The same process was repeated with the interview transcripts from the MDACC site when they became available. Interviewers at MDACC conducted the interviews, which were locally transcribed, cleaned, and de-identified before being exported for analysis by the raters at PM/UHN. Steps 1 through 8 were enacted as above, and step 9 additionally included the MDACC facilitators in discussing and agreeing on key messages. Part of this discussion focused on identifying any observed differences in meaningful content between the PM/UHN and MDACC transcripts.
