Maximum Likelihood Conjoint Measurement

Clement Abbatecola, Peggy Gerardin, Kim Beneyton, Henry Kennedy, Kenneth Knoblauch

Maximum likelihood conjoint measurement aims to model the decision process of observers comparing multidimensional stimuli in order to determine how the observer integrates information across dimensions to render a judgment. Because the decision process is noisy, a signal detection framework is used (Ho et al., 2008) and the resulting model formalized as a binomial Generalized Linear Model (GLM) (Knoblauch and Maloney, 2012). Several nested models, corresponding to increasingly complex decision rules for combining information across modalities, are fitted to the data using maximum likelihood so as to maximize the correspondence between model predictions and observer decisions. These models are compared using nested likelihood ratio tests to determine the degree of complexity required to describe the observer’s decisions.

For example, consider two face-voice stimuli defined by their physical morphing levels of visual and auditory gender, (ϕiV, ϕiA) for stimulus i = 1 or 2, that is, S1 = (ϕ1V, ϕ1A) and S2 = (ϕ2V, ϕ2A), and the task of deciding whether the first or second stimulus has the more masculine face, i.e., the visual task. The noisy decision process is modeled as:
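A reconstruction of the decision rule from the definitions that follow (the original display equation was not preserved; Δ, ψ1, ψ2, and ϵ are as defined in the text):

```latex
\Delta = \psi_1 - \psi_2 + \epsilon
```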

where ψ1 and ψ2 are the internal representations of the gender of the first and second face, respectively, determined by the psychophysical function ψ; ϵ is a Gaussian random variable with mean μ = 0 and variance σ²; and Δ is the decision variable. We assume that the observer chooses the first stimulus when Δ > 0, and the second otherwise. The log-likelihood of the model over all trials, given the observer's responses, is:
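A reconstruction of the log-likelihood from the definitions in the surrounding text (the original display equation was not preserved; here Ri = 0 codes a first-stimulus choice and Ri = 1 a second-stimulus choice, so that P(Ri = 0) = Φ(Δi/σ)):

```latex
\ell(\boldsymbol{\psi}, \sigma) = \sum_{i=1}^{n} \left[ (1 - R_i)\,\log \Phi\!\left(\frac{\Delta_i}{\sigma}\right) + R_i \,\log\!\left(1 - \Phi\!\left(\frac{\Delta_i}{\sigma}\right)\right) \right]
```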

where Ri is the response on the ith trial, taking the value 0 or 1 according to whether the subject chooses the first or the second stimulus, and Φ is the cumulative distribution function of a standard normal variable. For each model described below, the perceptual scale values ψ were estimated so as to maximize the likelihood of the observer's responses across all trials, with constraints imposed to render the model identifiable (Knoblauch and Maloney, 2012).
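To illustrate the fitting procedure, the following sketch simulates an observer under the independence model for the visual task and recovers the perceptual scale by direct maximization of the likelihood. This is a hypothetical Python example rather than the authors' R pipeline; the scale values, trial count, and the identifiability constraints (first scale value fixed at 0, σ = 1) are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical perceptual scale for 5 visual gender levels;
# psi[0] = 0 and sigma = 1 serve as the identifiability constraints.
true_psi = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
n_levels = len(true_psi)

# Simulate paired-comparison trials: on each trial the observer sees
# stimuli i and j and reports which face appears more masculine.
pairs = [(i, j) for i in range(n_levels) for j in range(n_levels) if i != j]
idx = rng.integers(len(pairs), size=2000)
resp = np.empty(len(idx), dtype=int)
for t, k in enumerate(idx):
    i, j = pairs[k]
    delta = true_psi[i] - true_psi[j] + rng.normal()  # Delta = psi_1 - psi_2 + eps
    resp[t] = 0 if delta > 0 else 1  # 0 = chose first, 1 = chose second

def negloglik(psi_free):
    """Negative log-likelihood of the independence model."""
    psi = np.concatenate(([0.0], psi_free))  # fix the first scale value at 0
    d = np.array([psi[pairs[k][0]] - psi[pairs[k][1]] for k in idx])
    p_first = np.clip(norm.cdf(d), 1e-10, 1 - 1e-10)  # P(choose first)
    return -np.sum((1 - resp) * np.log(p_first) + resp * np.log(1 - p_first))

fit = minimize(negloglik, x0=np.zeros(n_levels - 1), method="BFGS")
psi_hat = np.concatenate(([0.0], fit.x))
print(np.round(psi_hat, 2))  # estimated scale values, ordered like true_psi
```

The same scheme extends to the additive and interaction models by adding terms to the decision variable; in practice, this maximization is equivalent to fitting a binomial GLM with a probit link, as described in the text.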

Under the independence model, the observer relies exclusively on visual information, and the decision variable is defined as:
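A reconstruction of the independence decision variable from the definitions in the text (the original display equation was not preserved):

```latex
\Delta = \psi_{1V} - \psi_{2V} + \epsilon
```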

where ψ1V and ψ2V are the internal representations of gender evoked by the visual cues of stimuli 1 and 2, respectively. An analogous model describes independent responses for the auditory task, with the V's replaced by A's.

In the additive model, the decision variable is defined as the sum of the visual and auditory gender signals:
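A reconstruction of the additive decision variable (the original display equation was not preserved), with the terms regrouped by modality:

```latex
\begin{aligned}
\Delta &= (\psi_{1V} + \psi_{1A}) - (\psi_{2V} + \psi_{2A}) + \epsilon \\
       &= (\psi_{1V} - \psi_{2V}) + (\psi_{1A} - \psi_{2A}) + \epsilon
\end{aligned}
```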

where the visual and auditory terms of the equation have been regrouped to demonstrate that the observer is effectively comparing perceptual intervals along one dimension with perceptual intervals along the other (Knoblauch and Maloney, 2012).

Under the interaction model, non-additive combination terms are introduced. The decision variable can be written as
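A reconstruction of the non-specific interaction decision variable (the original display equation was not preserved), with ψiVA denoting an interaction term estimated separately for each face-voice combination:

```latex
\Delta = (\psi_{1V} - \psi_{2V}) + (\psi_{1A} - \psi_{2A}) + (\psi_{1VA} - \psi_{2VA}) + \epsilon
```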

where the third term on the right-hand side corresponds to an interaction that depends on each face-voice combination. This interaction model is non-specific, as one term is estimated independently for each combination of visual and auditory gender levels. Here, our results, described below, indicated that the additive terms could be characterized as parametric functions of the gender levels, fV(ϕ) and fA(ϕ), for the visual and auditory modalities, respectively. This allowed us to test two specific types of interaction.

The Congruence Interaction Model introduces an internal congruence effect between face and voice gender within stimulus to yield the decision variable:
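One plausible reconstruction of this decision variable, consistent with the description below (the original display equation was not preserved; the weight ω is an estimated coefficient whose name here is an assumption), adds a term proportional to the within-stimulus absolute difference between the visual and auditory gender scales:

```latex
\Delta = (\psi_{1V} - \psi_{2V}) + (\psi_{1A} - \psi_{2A}) + \omega \left( \left| \psi_{1V} - \psi_{1A} \right| - \left| \psi_{2V} - \psi_{2A} \right| \right) + \epsilon
```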

This interaction depends on the absolute gender difference between the visual and auditory signals of each stimulus. Its effect on judgments is minimal when, for each stimulus, the gender scale values are the same in both modalities, and maximal when the between-modality gender difference is greatest.

The Magnitude Interaction Model introduces a multiplicative effect of gender information across stimuli for the following decision variable:
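One plausible reconstruction of this decision variable, consistent with the description below (the original display equation was not preserved; again, ω is an assumed name for the estimated weight), adds a term proportional to the product of the between-stimulus differences in the two modalities:

```latex
\Delta = (\psi_{1V} - \psi_{2V}) + (\psi_{1A} - \psi_{2A}) + \omega \,(\psi_{1V} - \psi_{2V})(\psi_{1A} - \psi_{2A}) + \epsilon
```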

This interaction is minimal when the between-stimulus gender difference within either or both modalities is small, and maximal when the within-modality difference is large in both modalities. Over many trials, these differences cancel out more for stimuli that are closer to gender-neutral, so such stimuli will be associated with smaller effects of this interaction.

In other words, under the congruence interaction model, non-additive effects are assumed to be proportional to the absolute difference between face and voice gender. Under the magnitude interaction model, non-additive effects are assumed to be proportional to the amount of masculinity/femininity in the face and voice as compared to gender-neutral.

All psychophysical data were analyzed using R (R Core Team, 2019) and the package lme4 (Bates et al., 2015) to take inter-individual variability into account with generalized linear mixed-effects models (GLMM) (Pinheiro and Bates, 2000).
