Professional development outcomes and teaching practices

Tim Archie, Charles N. Hayward, Stan Yoshinobu, Sandra L. Laursen

Four measures were hypothesized to be related to workshop participation outcomes. They reflected TPB constructs and were used to answer all three research questions. First, “behavior” as identified in the TPB was derived from self-reported teaching practices in a specific course that the workshop participant chose as their target course and worked on during the IBL workshop. On both the pre-workshop survey and the follow-up survey, we asked participants to indicate their frequency of use of 11 teaching practices on the following scale: 0 = never, 1 = once or twice during the term, 2 = about once a month, 3 = about twice a month, 4 = weekly, 5 = more than once a week, or 6 = every class.

As described in Hayward et al. [51], five of the 11 teaching practices are classified as ‘core IBL’ practices because they characterize all variations of IBL emphasized in the workshops: decreased use of instructor activities, including lecture and instructor problem-solving on the board, and increased use of student activities, especially student presentations of their own work and student discussion in small groups or as a whole class. Using these five practices, we created a composite variable, the IBL frequency score, as an overall measure of instructors’ use of IBL methods (their IBL intensity). IBL frequency scores were calculated at the pre-workshop and follow-up time points from the frequency values of the five core teaching practices, as sketched below.
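One plausible construction, assuming an unweighted signed sum in which the frequencies of the three student-centered practices are added and the frequencies of the two instructor-centered practices are subtracted, is:

\[
\text{IBL frequency score} = \left(f_{\text{student presentations}} + f_{\text{small-group discussion}} + f_{\text{whole-class discussion}}\right) - \left(f_{\text{lecture}} + f_{\text{instructor problem-solving}}\right),
\]

where each \(f\) is the participant’s 0–6 frequency rating for that practice. Under this assumption, the theoretical range is −12 to 18, which is consistent with the observed range of −12 to 17 reported below.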

Thus, the IBL frequency score is used to answer RQ2 and, for RQ3, serves as the measure of “behavior” in the TPB. Scores ranged from −12 to 17; higher scores indicate more frequent use of IBL teaching practices and less frequent use of instructor lecture and instructor problem-solving. Both pre-workshop and follow-up IBL frequency scores showed acceptable but marginal internal consistency for the subset of core practices that were expected to increase after workshop participation (α = 0.56 and α = 0.58, respectively). The relatively low internal consistency of these scores can be attributed to the variability in IBL teaching, as the workshops purposefully espoused a “big tent” approach to accommodate instructors’ preferences, student audiences, and course and institutional contexts [51]. Consistency was higher for the practices that were expected to decrease after workshop participation; both pre-workshop and follow-up scores for this subset were acceptable (α = 0.81 and α = 0.78, respectively).
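For illustration, a minimal sketch of how such scores could be computed from tabulated survey responses, assuming the signed-sum construction above; the column names and data are hypothetical and not taken from the study instrument.

```python
import pandas as pd

# Hypothetical column names for the five core IBL practices (0-6 frequency ratings).
STUDENT_CENTERED = ["student_presentations", "small_group_discussion", "whole_class_discussion"]
INSTRUCTOR_CENTERED = ["lecture", "instructor_problem_solving"]

def ibl_frequency_score(responses: pd.DataFrame) -> pd.Series:
    """Signed sum: student-centered frequencies added, instructor-centered frequencies subtracted."""
    return responses[STUDENT_CENTERED].sum(axis=1) - responses[INSTRUCTOR_CENTERED].sum(axis=1)

# Two hypothetical respondents at the pre-workshop time point.
pre = pd.DataFrame({
    "student_presentations": [1, 4],
    "small_group_discussion": [2, 5],
    "whole_class_discussion": [3, 6],
    "lecture": [6, 2],
    "instructor_problem_solving": [5, 1],
})
print(ibl_frequency_score(pre).tolist())  # [-5, 12]
```

The same function would be applied to the pre-workshop and follow-up responses separately to obtain scores at each time point.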

Three items on the post-workshop survey represented the latent variable “intent” in the TPB (RQ3) and showed acceptable internal consistency (α = 0.70). We asked participants, “How likely are you to implement IBL in the coming academic year?” and “If not this year, how likely are you to implement IBL in the following academic year?” Both items used the same five-point scale (1 = Not at all likely, 2 = Somewhat unlikely, 3 = Somewhat likely, 4 = Rather likely, 5 = Definitely). A third item asked, “How motivated do you feel to incorporate inquiry into your teaching methods?” and used a four-point scale (1 = None, 2 = A little, 3 = Some, and 4 = A lot).
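The reported internal consistency values can be checked with Cronbach’s alpha. A minimal sketch follows, not the authors’ analysis code, using hypothetical responses for the three intent items; because the items use different response scales, the sketch standardizes them before computing alpha, which is one common choice but is not stated in the original.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame, standardize: bool = False) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    if standardize:
        items = (items - items.mean()) / items.std(ddof=1)
    k = items.shape[1]
    item_variances = items.var(ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: two 5-point "likelihood" items and one 4-point "motivation" item.
intent = pd.DataFrame({
    "implement_this_year": [5, 4, 3, 5, 2, 4],
    "implement_next_year": [4, 4, 3, 5, 2, 5],
    "motivation_for_inquiry": [4, 3, 2, 4, 1, 3],
})
print(round(cronbach_alpha(intent, standardize=True), 2))
```

The same calculation applies to the other multi-item composites reported in this section, such as the two perceived behavioral control items described below.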

Instructors’ beliefs about the effectiveness of IBL, knowledge of IBL methods, and skill in using IBL methods were measured three times: before the workshop, immediately after the workshop, and 18 months after the workshop. These items were used to answer RQ1 and also to measure “attitude” and “perceived behavioral control” in the TPB model (RQ3), as follows.

For RQ3, “attitude” was defined as participants’ belief in the effectiveness of IBL specifically at the follow-up time point, reflecting their view at the completion of professional development. Participants were asked “To what extent do you believe inquiry-based strategies are an effective learning method?” and rated this item on a four-point scale (1 = Don’t know, 2 = Not very effective, 3 = Somewhat effective, 4 = Highly effective).

The latent variable “perceived behavioral control” in the TPB (RQ3) used participants’ ratings of both their knowledge of IBL methods and their skill in using IBL methods, again only at the follow-up time point. The “knowledge” item asked respondents to rate their current level of knowledge of inquiry-based learning in math education, and the “skill” item asked them to rate their current level of skill in inquiry-based teaching. Both items used the same four-point scale (1 = None, 2 = A little, 3 = Some, and 4 = A lot) and showed acceptable internal consistency (α = 0.66).
