I currently maintain three major research lines, each quantitative in either method or content: auditory perception, statistical procedures, and models of decision-making under uncertainty. The approach I take in each of these areas reflects my quantitative background. My auditory research focuses on the computational modeling of perceptual abilities, including sound localization and auditory masking. My work in statistics uses analytical and computational modeling techniques to evaluate current inferential statistical procedures and to suggest replacements. Finally, my work in decision-making focuses on testing accepted models of decision-making and deriving alternatives that better account for empirical data.
Auditory Perceptual Phenomena
Much of my work in auditory perception focuses on applications of spatial (3D) audio to interface design. The technology produces three-dimensional auditory percepts over headphones, and I demonstrated the benefits of implementing it in an air-traffic control interface. During the course of this research I became fascinated by the ability of human listeners to extract source direction and distance from the ambiguous information that arrives at the ears. Sound localization is a difficult task that clearly requires enormous computational resources, yet human listeners localize sounds with a speed and accuracy that computer-based localization systems have been unable to match. Because the mechanisms behind human sound localization are poorly understood, I began by developing a computational localization algorithm that could at least match the accuracy of a human listener. My innovative solution to this problem resulted in a paper in the flagship journal for auditory perception research. The algorithm is exceptionally accurate, exhibiting less than one degree of localization error even in very noisy environments. I am currently working to extend this work to multi-source localization.
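To give a flavor of the computational problem, one classical ingredient of binaural localization is the interaural time difference (ITD): the lag between the two ears' signals, recoverable by cross-correlation. The sketch below is a generic textbook illustration, not my published algorithm; the signal, delay, and search window are invented for the example.

```python
import math

def best_lag(left, right, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation
    between the two ear signals. A positive lag means `right` lags `left`."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Toy sinusoidal signal; the right ear hears it 3 samples later.
sig = [math.sin(0.3 * n) for n in range(200)]
left = sig
right = [0.0] * 3 + sig[:-3]

itd = best_lag(left, right, 10)  # recovers the 3-sample delay
```

A real system would of course face reverberation, multiple sources, and frequency-dependent cues, which is precisely where the computational difficulty lies.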
I have also applied quantitative methods to explore the mechanisms behind more basic auditory perceptual phenomena. For example, I recently published a paper that introduced a quantitative modeling technique for determining the role of brain structures in producing perceptual phenomena (for example, depth perception, sound localization, or visual illusions), and then demonstrated the method in the auditory domain. Auditory masking occurs when the presentation of one sound interferes with the perception of another; this interference can occur even when the sounds do not overlap in time. I used the modeling technique presented in the paper to demonstrate that the auditory periphery is responsible for some types of masking but cannot be responsible for others. The work is an important contribution because it supports firm conclusions about the causes of perceptual phenomena, and because it is general enough to apply to any class of stimuli that can be digitized (images or sounds). I am excited to apply the method to the root causes of multimodal (visual-auditory) perceptual phenomena in the near future.
Descriptive and Inferential Statistical Procedures
My interest in descriptive and inferential statistics has expanded greatly over the past five years, due in part to the graduate-level statistics courses I frequently teach. I feel strongly that the rate of scientific progress in psychology is slowed by the statistical procedures considered standard in the field. Given the widespread use of these procedures, efforts to demonstrate their inadequacies and promote alternatives have the potential to make a tremendous impact on psychology as a whole. For this reason I have enthusiastically developed a line of research toward that end. I began by coauthoring a paper demonstrating the inadequacy and inaccuracy of standardized path analysis, a statistical procedure that is a mainstay of social psychological and business management research. In the paper we used computer simulations to demonstrate that the path coefficients from which conclusions are drawn are rarely accurate, and that the validity of those conclusions is therefore suspect. Concurrently, I was heavily involved in an effort to evaluate a recently proposed measure meant to replace the standard statistical significance measure (the p-value) commonly reported in psychological papers. Using both derivation and simulation, we demonstrated that the measure was conceptually flawed as well as inaccurate. Most recently, I derived a statistical measure of within-participant response consistency that allows for a richer description of response behavior than the typical measures of response accuracy (sensitivity) and bias. Imagine that a human observer completes a typical perceptual experiment in which the task is to discriminate between different stimuli. The consistency measure predicts how strongly the responses would correlate if the participant completed the same experiment again. The measure is the first of its kind and will allow researchers to test theories that make predictions about response consistency.
Each of these research projects serves my long-term goal of encouraging researchers to transition to new statistical procedures that increase the rate of scientific progress within psychology. Alongside my research papers, I am convinced that the most direct way to accomplish this transition is to teach our graduate students to use these new methods.
Models of Decision-Making
The most extensive and long-standing of my research lines is my work evaluating and developing models of decision-making under uncertainty. These models describe how the human observer attaches labels to stimuli collected by the sensory systems, and as such address one of the fundamental topics of cognitive and quantitative psychology. Jerry Balakrishnan and I have worked over the past ten years to demonstrate that Signal Detection Theory (SDT), by far the most dominant and widely used decision-making model, is an inaccurate and incomplete depiction of the human decision-making process. The model assumes that the human observer has no control over the data-gathering process when a stimulus is examined. For example, if you are looking at a visual stimulus in order to identify it, SDT assumes that you have no systematic control over which parts of the stimulus you look at. Instead, the model assumes that the entire decision process occurs after the observer is given information about the stimulus under examination. The main goal of our work in this area is to demonstrate that SDT is a gross oversimplification, and that ignoring the data-gathering part of the decision process leads to inaccurate conclusions whenever SDT-based performance measures are used.
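For readers unfamiliar with the SDT performance measures at issue, the standard equal-variance calculations of sensitivity (d′) and decision criterion (c) are simple to state. The hit and false-alarm rates below are illustrative values only, not data from our experiments.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-score)

def sdt_measures(hit_rate, fa_rate):
    """Classical equal-variance SDT: sensitivity d' and criterion c."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Symmetric hit/false-alarm rates give d' near 2 and an unbiased criterion.
d_prime, criterion = sdt_measures(0.84, 0.16)
```

It is precisely these measures, computed in hundreds of published papers, whose validity our work calls into question.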
Our earlier work in this area focused on demonstrating that SDT makes inaccurate empirical predictions about the effects of introducing bias into the observer's decision-making strategy. Specifically, we developed an alternative measure of bias that does not require the host of assumptions underlying SDT's bias measures. We applied this measure to empirical data to demonstrate that bias does not manifest as SDT claims it does, and that the entire mechanism SDT uses to explain decision-making behavior is therefore suspect. These results called into question the conclusions of the many papers (more than 1,400 in the last decade) that make use of SDT's performance measures.
Not surprisingly, this challenge to the standard theory motivated several efforts by others to salvage SDT. The most frequent response was to challenge the validity or sensitivity of our bias measure. In response to this criticism, we recently demonstrated that the measure is quite capable of detecting response bias when it is present. We did so using a visual experiment in which the task was to discriminate between two lines of different lengths. We used a visual illusion to make some of the lines appear longer, thereby introducing a bias toward the longer-line response. Our bias measure readily detected this perceptual bias but found no evidence of the types of bias predicted by SDT. The other major challenge to our work was an attempt to alter SDT to account for our results. In an extensive rebuttal, we demonstrated that the altered version of SDT still cannot account for all of the empirical data we have published.
The main conclusion from these papers is that SDT does not model all parts of the decision process, and that a more complete representation including the data-gathering phase is preferable. This conclusion was bolstered by a recent paper in which we demonstrated that SDT is missing one or more systematic sources of variability. We proved mathematically that an observer making decisions according to the SDT model should exhibit perfect performance when all randomness is removed from the decision process. Human observers do not achieve perfection under these circumstances, indicating that one or more systematic, performance-limiting factors are missing from the model. Most recently, we developed an alternative decision-making model that not only accounts for all of the empirical data, but also provides performance measures that arise from a complete model of the decision-making process. This work has the potential to make a substantial impact on the many researchers who require such measures.
My current and near-future work in this area focuses on demonstrating that the data-gathering process is not only important to include in any model of decision-making, but is in fact where the majority of biases reported in the decision-making literature arise. We are working to demonstrate this result by using an eye-tracking device to show not only that stimulus sampling is strategic and under the observer's control, but also that the observer's responses can be predicted from that sampling behavior.