Causal Learning & Decision Making Lab

University of Pittsburgh

Lab Director


Ben Rottman

rottman@pitt.edu
412-624-7493

University of Pittsburgh
LRDC 726
3939 O'Hara St
Pittsburgh, PA 15260

Research Focus

Causal Learning

The primary research focus of the lab is causal learning: how people learn cause-effect relationships from their experiences (e.g., "this new medicine I have been trying seems to work well"). We are especially interested in how people learn causal relations over time, such as noticing that a cause has decreasing effectiveness (e.g., caffeine); how people account for changes in the environment when learning about a causal relationship; and how people figure out which of two variables (e.g., being depressed and being anxious) is the cause and which is the effect.

Decision Making

The secondary research focus is on decision making, such as when and whether human decision makers approximate normative judgment.

Medical Reasoning

We are especially interested in understanding the role of causal learning and decision making in medical contexts, for example, how doctors and patients learn from their own experiences and apply that knowledge to future cases.


Graduate Students



Kevin Soo

Kevin is a 5th year graduate student and has a Master's from University College London. He is especially interested in the interplay between time and causality, causal directionality, tipping points, and the philosophy of causality.


Cory Derringer

Cory is a 4th year graduate student and has a Master's from Missouri State University. He is especially interested in errors of human judgment, memory, and applications to causal reasoning.


Ciara Willett

Ciara (pronounced like Keira Knightley) is an incoming graduate student and has a Master's from Seton Hall. She is especially interested in causal learning in cognitively demanding situations.


Zac Caddick

Zac is an incoming graduate student and has a Master's from San Jose State. Zac has been working on human factors and sleep research at NASA Ames and is especially interested in motivated reasoning and scientific beliefs.


Publications

Electronic versions are provided as a professional courtesy to ensure timely dissemination of academic work for individual, noncommercial purposes. Copyright (and all rights therein) resides with the respective copyright holders, as stated within each paper. These files may not be reposted without permission.

2017

Rottman, B. M. (2017). Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio. Memory & Cognition. doi:10.3758/s13421-016-0658-z Abstract PDF
Whether humans can accurately make decisions in line with Bayes’ rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own pre-existing beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians’ posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians’ beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other non-normative aspects to the updating such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter provided cues. It suggests both that there is reason to be optimistic about experts’ abilities, but that there is still considerable need for improvement.
Rottman, B. M., Marcum, Z. A., Thorpe, C. T., & Gellad, W. F. (2017). Medication adherence as a learning process: Insights from cognitive psychology. Health Psychology Review. doi:10.1080/17437199.2016.1240624 Abstract PDF
Non-adherence to medications is one of the largest contributors to sub-optimal health outcomes. Many theories of adherence include a ‘value-expectancy’ component in which a patient decides to take a medication partly based on expectations about whether it is effective, necessary, and tolerable. We propose reconceptualizing this common theme as a kind of ‘causal learning’ – the patient learns whether a medication is effective, necessary, and tolerable, from experience with the medication. We apply cognitive psychology theories of how people learn cause-effect relations to elaborate this causal learning challenge. First, expectations and impressions about a medication and beliefs about how a medication works, such as delay of onset, can shape a patient’s perceived experience with the medication. Second, beliefs about medications propagate both ‘top-down’ and ‘bottom-up,’ from experiences with specific medications to general beliefs about medications and vice versa. Third, non-adherence can interfere with learning about a medication, because beliefs, adherence, and experience with a medication are connected in a cyclic learning problem. We propose that by conceptualizing non-adherence as a causal learning process, clinicians can more effectively address a patient’s misconceptions and biases, helping the patient develop more accurate impressions of the medication.
Rottman, B. M. (2017). The Acquisition and Use of Causal Structure Knowledge. In M. R. Waldmann (Ed.), Oxford Handbook of Causal Reasoning (pp. 85-114). Oxford: Oxford U.P. Abstract PDF
This chapter provides an introduction to how humans learn and reason about multiple causal relations connected together in a causal structure. The first half of the chapter focuses on how people learn causal structures. The main topics involve learning from observations vs. interventions, learning temporal vs. atemporal causal structures, and learning the parameters of a causal structure, including individual cause-effect strengths and how multiple causes combine to produce an effect. The second half of the chapter focuses on how individuals reason about a causal structure once it has been learned, such as making predictions about one variable given knowledge about other variables. Some of the most important topics involve reasoning about observations vs. interventions, how well people reason compared to normative models, and whether causal structure beliefs bias reasoning. In both sections I highlight open empirical and theoretical questions.

2016

Rottman, B. M., Prochaska, M. T., & Deaño, R. C. (2016). Bayesian reasoning in residents' preliminary diagnoses. Cognitive Research: Principles and Implications. doi:10.1186/s41235-016-0005-8 Abstract PDF Supplement Blog
Whether and when humans in general, and physicians in particular, use their beliefs about base rates in Bayesian reasoning tasks is a long-standing question. Unfortunately, previous research on whether doctors use their beliefs about the prevalence of diseases in diagnostic judgments has critical limitations. In this study, we assessed whether residents’ beliefs about the prevalence of a disease are associated with their judgments of the likelihood of the disease in diagnosis, and whether residents’ beliefs about the prevalence of diseases change across the 3 years of residency. Residents were presented with five ambiguous vignettes typical of patients presenting on the inpatient general medicine services. For each vignette, the residents judged the likelihood of five or six possible diagnoses. Afterward, they judged the prevalence within the general medicine services of all the diseases in the vignettes. Most importantly, residents who believed a disease to be more prevalent tended to rate the disease as more likely in the vignette cases, suggesting a rational tendency to incorporate their beliefs about disease prevalence into their diagnostic likelihood judgments. In addition, the residents’ prevalence judgments for each disease were assessed over the 3 years of residency. The precision of the prevalence estimates increased across the 3 years of residency, though the accuracy of the prevalence estimates did not. These results imply that residents do have a rational tendency to use prevalence beliefs for diagnosis, and this finding also contributes to a larger question of whether humans intuitively use base rates for making judgments.
Rottman, B. M., & Hastie, R. (2016). Do people reason rationally about causally related events? Markov violations, weak inferences, and failures of explaining away. Cognitive Psychology, 87, 88-134. doi:10.1016/j.cogpsych.2016.05.002 Abstract PDF
Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to the standard Bayesian networks calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X→Y→Z, common cause structures X←Y→Z, and common effect structures X→Y←Z, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (Markov Assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative “explaining away” pattern). Compared to the normative account, in general, when the judgments should change, they change on average in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations.
Rottman, B. M. (2016). Searching for the best cause: Roles of mechanism beliefs, autocorrelation, and exploitation. Journal of Experimental Psychology: Learning, Memory, & Cognition. doi:10.1037/xlm0000244 Abstract PDF
When testing which of multiple causes (e.g., medicines) works best, the testing sequence has important implications for the validity of the final judgment. Trying each cause for a period of time before switching to the other is important if the causes have tolerance, sensitization, delay, or carryover (TSDC) effects. In contrast, if the outcome variable is autocorrelated and gradually fluctuates over time rather than being random across time, it can be useful to quickly alternate between the 2 causes, otherwise the causes could be confounded with a secular trend in the outcome. Five experiments tested whether individuals modify their causal testing strategies based on beliefs about TSDC effects and autocorrelation in the outcome. Participants adaptively tested each cause for longer periods of time before switching when testing causal interventions for which TSDC effects were plausible relative to cases when TSDC effects were not plausible. When the autocorrelation in the baseline trend was manipulated, participants exhibited only a small (if any) tendency toward increasing the amount of alternation; however, they adapted to the autocorrelation by focusing on changes in outcomes rather than raw outcome scores, both when making choices about which cause to test as well as when making the final judgment of which cause worked best. Understanding how people test causal relations in diverse environments is an important first step for being able to predict when individuals will successfully choose effective causes in real-world settings.
Soo, K., & Rottman, B. M. (2016). Causal learning with continuous variables over time. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
When estimating the strength of the relation between a cause (X) and effect (Y), there are two main statistical approaches that can be used. The first is using a simple correlation. The second approach, appropriate for situations in which the variables are observed unfolding over time, is to take a correlation of the change scores – whether the variables reliably change in the same or opposite direction. The main question of this manuscript is whether lay people use change scores for assessing causal strength in time series contexts. We found that subjects’ causal strength judgments were better predicted by change scores than the simple correlation, and that use of change scores was facilitated by naturalistic stimuli. Further, people use a heuristic of simplifying the magnitudes of change scores into a binary code (increase vs. decrease). These findings help explain how people uncover true causal relations in complex time series contexts.
Derringer, C., & Rottman, B. M. (2016). Temporal causal strength learning with multiple causes. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
When learning the relation between a cause and effect, how do people control for all the other factors that influence the same effect? Two experiments tested a hypothesis that people focus on events in which the target cause changes and all other factors remain stable. In both four-cause (Experiment 1) and eight-cause (Experiment 2) scenarios, participants learned causal relations more accurately when they viewed datasets in which only one cause changed at a time. However, participants in the comparison condition, in which multiple causes changed simultaneously, performed fairly well; in addition to focusing on events when a single cause changed, they also used events in which multiple causes changed for updating their beliefs about causal strength. These findings help explain how people are able to learn causal relations in situations when there are many alternative factors.

2015

Soo, K., & Rottman, B. M. (2015). Elemental Causal Learning from Transitions. In R. Dale, C. Jennings, P. Maglio, T. Matlock, D. Noelle, A. Warlaumont, & J. Yoshimi (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
Much research on elemental causal learning has focused on how causal strength is learned from the states of variables. In longitudinal contexts, the way a cause and effect change over time can be informative of the underlying causal relationship. We propose a framework for inferring the causal strength from different observed transitions, and compare the predictions to existing models of causal induction. Subjects observe a cause and effect over time, updating their judgments of causal strength after observing different transitions. The results show that some transitions have an effect on causal strength judgments over and above states.
Rottman, B. M. (2015). How Causal Mechanism and Autocorrelation Beliefs Inform Information Search. In R. Dale, C. Jennings, P. Maglio, T. Matlock, D. Noelle, A. Warlaumont, & J. Yoshimi (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
When testing which of multiple causes (e.g., medicines) works the best, the testing sequence has important implications for the validity of the final judgment. Trying one cause for a period of time is important if the cause has tolerance, sensitization, delay, or carryover effects (TSDC). Alternating between the causes is important in autocorrelated environments – when the outcome naturally comes and goes in waves. Across two studies, participants’ beliefs about TSDC influenced the amount of alternating; however, their beliefs about autocorrelation had a very modest effect on the testing strategy. This research helps chart how well people adapt to various environments in order to optimize learning, and it suggests that in situations with no TSDC effects and high autocorrelation, people may not alternate enough.
Rottman, B. M., & Hastie, R. (2015). Do Markov Violations and Failures of Explaining Away Persist with Experience? In R. Dale, C. Jennings, P. Maglio, T. Matlock, D. Noelle, A. Warlaumont, & J. Yoshimi (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
Making judgments by relying on beliefs about causal relations is a fundamental aspect of everyday cognition. Recent research has identified two ways that human reasoning seems to diverge from optimal standards: people appear to violate the Markov Assumption, and they do not “explain away” adequately. However, these habits have rarely been tested in the situation that presumably would promote accurate reasoning – after experiencing the multivariate distribution of the variables through trial-by-trial learning – even though this is a standard paradigm. Two studies test whether these habits persist (1) despite adequate learning experience and (2) despite incentives, and (3) whether they also extend to situations with continuous variables.

2014

Rottman, B. M. (2014). Information Search in an Autocorrelated Causal Learning Environment. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
When trying to determine which of two causes produces a more desirable outcome, if the outcome is autocorrelated (goes through higher and lower periods) it is critical to switch back and forth between the causes. If one first tries Cause 1, and then tries Cause 2, it is likely that an autocorrelated outcome would appear to change with the second cause even though it is merely undergoing normal change over time. Experiment 1 found that people tend to perseverate rather than alternate when testing the effectiveness of causes, and perseveration is associated with substantial errors in judgment. Experiment 2 found that forcing people to alternate improves judgment. This research suggests that a debiasing approach to teach people when to alternate may be warranted to improve causal learning.
Soo, K., & Rottman, B. M. (2014). Learning Causal Direction from Transitions with Continuous and Noisy Variables. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Abstract PDF
Previous work has found that one way people infer the direction of causal relationships involves identifying an asymmetry in how causes and effects change over time. In the current research we test the generalizability of this reasoning strategy in more complex environments involving ordinal and continuous variables and with noise. Participants were still able to use the strategy with ordinal and continuous variables. However, when noise made it difficult to identify the asymmetry participants were no longer able to infer the causal direction.
Edwards, B. J., Rottman, B. M., Shankar, M., Betzler, R., Chituc, V., Rodriguez, R., ... Santos, L. R. (2014). Do Capuchin Monkeys (Cebus apella) Diagnose Causal Relations in the Absence of a Direct Reward? (E. Flynn, Ed.) PLoS ONE, 9(2), e88595. doi:10.1371/journal.pone.0088595 Abstract PDF
We adapted a method from developmental psychology [1] to explore whether capuchin monkeys (Cebus apella) would place objects on a "blicket detector" machine to diagnose causal relations in the absence of a direct reward. Across five experiments, monkeys could place different objects on the machine and obtain evidence about the objects' causal properties based on whether each object "activated" the machine. In Experiments 1-3, monkeys received both audiovisual cues and a food reward whenever the machine activated. In these experiments, monkeys spontaneously placed objects on the machine and succeeded at discriminating various patterns of statistical evidence. In Experiments 4 and 5, we modified the procedure so that in the learning trials, monkeys received the audiovisual cues when the machine activated, but did not receive a food reward. In these experiments, monkeys failed to test novel objects in the absence of an immediate food reward, even when doing so could provide critical information about how to obtain a reward in future test trials in which the food reward delivery device was reattached. The present studies suggest that the gap between human and animal causal cognition may be in part a gap of motivation. Specifically, we propose that monkey causal learning is motivated by the desire to obtain a direct reward, and that unlike humans, monkeys do not engage in learning for learning's sake.
Rottman, B. M., & Hastie, R. (2014). Reasoning About Causal Relationships: Inferences on Causal Networks. Psychological Bulletin, 140(1), 109-139. doi:10.1037/a0031903 Abstract PDF
Over the last decade, a normative framework for making causal inferences, Bayesian Probabilistic Causal Networks, has come to dominate psychological studies of inference based on causal relationships. The following causal networks — [X→Y→Z, X←Y→Z, X→Y←Z] — supply answers for questions like, "Suppose both X and Y occur, what is the probability Z occurs?" or "Suppose you intervene and make Y occur, what is the probability Z occurs?" In this review, we provide a tutorial for how normatively to calculate these inferences. Then, we systematically detail the results of behavioral studies comparing human qualitative and quantitative judgments to the normative calculations for many network structures and for several types of inferences on those networks. Overall, when the normative calculations imply that an inference should increase, judgments usually go up; when calculations imply a decrease, judgments usually go down. However, two systematic deviations appear. First, people's inferences violate the Markov assumption. For example, when inferring Z from the structure X→Y→Z, people think that X is relevant even when Y completely mediates the relationship between X and Z. Second, even when people's inferences are directionally consistent with the normative calculations, they are often not as sensitive to the parameters and the structure of the network as they should be. We conclude with a discussion of productive directions for future research.
Rottman, B. M., Kominsky, J. F., & Keil, F. C. (2014). Children Use Temporal Cues to Learn Causal Directionality. Cognitive Science. Abstract PDF
The ability to learn the direction of causal relations is critical for understanding and acting in the world. We investigated how children learn causal directionality in situations in which the states of variables are temporally dependent (i.e. autocorrelated). In Experiment 1, children learned about causal direction by comparing the states of one variable before vs. after an intervention on another variable. In Experiment 2, children reliably inferred causal directionality merely from observing how two variables change over time; they interpreted Y changing without a change in X as evidence that Y does not influence X. Both of these strategies make sense if one believes the variables to be temporally dependent. We discuss the implications of these results for interpreting previous findings. More broadly, given that many real-world environments are characterized by temporal dependency, these results suggest strategies that children may use to learn the causal structure of their environments.

2012

Rottman, B.M., & Keil, F.C. (2012). Causal Structure Learning over Time: Observations and Interventions. Cognitive Psychology, 64, 93-125. doi:10.1016/j.cogpsych.2011.10.003. Abstract PDF
Seven studies examined how people learn causal relationships in scenarios when the variables are temporally dependent — the states of variables are stable over time. When people intervene on X, and Y subsequently changes state compared to before the intervention, people infer that X influences Y. This strategy allows people to learn causal structures quickly and reliably when variables are temporally stable (Experiments 1 and 2). People use this strategy even when the cover story suggests that the trials are independent (Experiment 3). When observing variables over time, people believe that when a cause changes state, its effects likely change state, but an effect may change state due to an exogenous influence in which case its observed cause may not change state at the same time. People used this strategy to learn the direction of causal relations and a wide variety of causal structures (Experiments 4-6). Finally, considering exogenous influences responsible for the observed changes facilitates learning causal directionality (Experiment 7). Temporal reasoning may be the norm rather than the exception for causal learning and may reflect the way most events are experienced naturalistically.
Rottman, B.M., Gentner, D., & Goldwater, M. B. (2012). Causal Systems Categories: Differences in Novice and Expert Categorization of Causal Phenomena. Cognitive Science, 36, 919-932. doi: 10.1111/j.1551-6709.2012.01253.x Abstract PDF Supplement
We investigated the understanding of causal systems categories — categories defined by common causal structure rather than by common domain content — among college students. We asked students who were either novices or experts in the physical sciences to sort descriptions of real-world phenomena that varied in their causal structure (e.g., negative feedback vs. causal chain) and in their content domain (e.g., economics vs. biology). Our hypothesis was that there would be a shift from domain-based sorting to causal sorting with increasing expertise in the relevant domains. This prediction was borne out: The novice groups sorted primarily by domain and the expert group sorted by causal category. These results suggest that science training facilitates insight about causal structures.

2011

Rottman, B.M., & Ahn, W. (2011). Effect of grouping of evidence types on learning about interactions between observed and unobserved causes. Journal of Experimental Psychology: Learning, Memory, & Cognition, 37(6), 1432-1448. doi:10.1037/a0024829 Abstract PDF
When a cause interacts with unobserved factors to produce an effect, the contingency between the observed cause and effect cannot be taken at face value to infer causality. Yet, it would be computationally intractable to consider all possible unobserved, interacting factors. Nonetheless, six experiments found that people can learn about an unobserved cause participating in an interaction with an observed cause when the unobserved cause is stable over time. Participants observed periods in which a cause and effect were associated followed by periods of the opposite association ("grouped condition"). Rather than concluding a complete lack of causality, participants inferred that the observed cause does influence the effect (Experiment 1) and they gave higher causal strength estimates when there were longer periods during which the observed cause appeared to influence the effect (Experiment 2). Consistent with these results, when the trials were grouped, participants inferred that the observed cause interacted with an unobserved cause (Experiments 3 and 4). Indeed, participants could even make precise predictions about the pattern of interaction (Experiments 5 and 6). Implications for theories of causal reasoning are discussed.
Rottman, B.M., & Keil, F.C. (2011). What matters in scientific explanations: Effects of elaboration and content. Cognition, 121, 324-337. doi:10.1016/j.cognition.2011.08.009. Abstract PDF
Given the breadth and depth of available information, determining which components of an explanation are most important is a crucial process for simplifying learning. Three experiments tested whether people believe that components of an explanation with more elaboration are more important. In Experiment 1, participants read separate and unstructured components that comprised explanations of real-world scientific phenomena, rated the components on their importance for understanding the explanations, and drew graphs depicting which components elaborated on which other components. Participants gave higher importance scores for components that they judged to be elaborated upon by other components. Experiment 2 demonstrated that experimentally increasing the amount of elaboration of a component increased the perceived importance of the elaborated component. Furthermore, Experiment 3 demonstrated that elaboration increases the importance of the elaborated information by providing insight into understanding the elaborated information; information that was too technical to provide insight into the elaborated component did not increase the importance of the elaborated component. While learning an explanation, people piece together the structure of elaboration relationships between components and use the insight provided by elaboration to identify important components.
Rottman, B. M., Kim, N. S. Ahn, W., & Sanislow, C. A. (2011). Can personality disorder experts recognize DSM-IV personality disorders from Five-Factor Model descriptions of patient cases? The Journal of Clinical Psychiatry, 72, 630-635. doi:10.4088/JCP.09m05534gre Abstract PDF
Background: Dimensional models of personality are under consideration for integration into the next Diagnostic and Statistical Manual of Mental Disorders (DSM-5), but the clinical utility of such models is unclear. Objective: To test the ability of clinical researchers who specialize in personality disorders to diagnose personality disorders using dimensional assessments and to compare those researchers’ ratings of clinical utility for a dimensional system versus for the DSM-IV. Method: A sample of 73 researchers who had each published at least 3 (median = 15) articles on personality disorders participated between December 2008 and January 2009. The Five-Factor Model (FFM), one of the most-studied dimensional models to date, was compared to the DSM-IV. Participants provided diagnoses for case profiles in DSM-IV and FFM formats and then rated the DSM-IV and FFM on 6 aspects of clinical utility. Results: Overall, participants had difficulty identifying correct diagnoses from FFM profiles (t(72) = 12.36, P < .01), and the same held true for a subset reporting equal familiarity with the DSM-IV and FFM (t(23) = 6.96, P < .01). Participants rated the FFM as less clinically useful than the DSM for making prognoses, devising treatment plans, and communicating with professionals (all t(69) > 2.19, all P < .05), but more useful for communicating with patients (t(69) = 3.03, P < .01). Conclusions: The results suggest that personality disorder expertise and familiarity with the FFM are insufficient to correctly diagnose personality disorders using FFM profiles. Because of ambiguity inherent in FFM profile descriptors, this insufficiency may prove unlikely to be attenuated with increased clinical familiarity with the FFM.
Rottman, B. M., & Keil, F. C. (2011). Learning causal direction from repeated observations over time. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1847-1852). Austin, TX: Cognitive Science Society. Abstract PDF
Inferring the direction of causal relationships is notoriously difficult. We propose a new strategy for learning causal direction when observing states of variables over time. When a cause changes state, its effects will likely change, but if an effect changes state due to an exogenous factor, its observed cause will likely stay the same. In two experiments, we found that people use this strategy to infer whether X→Y vs. X←Y, and X→Y→Z vs. X←Y→Z produced a set of data. We explore a rational Bayesian and a heuristic model to explain these results and discuss implications for causal learning.
Rottman, B. M., & Keil, F. C. (2011). Which parts of scientific explanations are most important? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 378-383). Austin, TX: Cognitive Science Society. Abstract PDF
Given the depth and breadth of available information, determining which components of an explanation are most important is a crucial process for simplifying learning. Two experiments tested whether people believe that components of an explanation with more elaboration are more important. In Experiment 1, participants gave higher importance scores for components that they judged to be elaborated upon by many other components. In Experiment 2, the amount and type of elaboration was experimentally manipulated. Experiment 2 demonstrated that elaboration increases the importance of the elaborated information by providing insight into understanding the elaborated information; information that was too technical to provide insight into the elaborated component did not increase the importance of the elaborated component. While learning an explanation, people piece together the structure of elaboration relationships between components and use the insight provided by elaboration to identify important components.
Rottman, B. M., Ahn, W., & Luhmann, C. C. (2011). When and how do people reason about unobserved causes? In P. Illari, F. Russo, & J. Williamson (Eds.), Causality in the Sciences (pp. 150-183). Oxford: Oxford U.P. Abstract
Assumptions and beliefs about unobserved causes are critical for inferring causal relationships from observed correlations. For example, an unobserved factor can influence two observed variables, creating a spurious relationship. Or an observed cause may interact with unobserved factors to produce an effect, in which case the contingency between the observed cause and effect cannot be taken at face value to infer causality. We review evidence that three types of situations lead people to infer unobserved causes: after observing single events that occur in the absence of any precipitating causal event, after observing a systematic pattern among events that cannot be explained by observed causes, and after observing a previously stable causal relationship change. In all three scenarios people make sophisticated inferences about unobserved causes to explain the observed data. We also discuss working memory as a requirement for reasoning about unobserved causes and briefly discuss implications for models of human causal reasoning.
Edwards, B. J., Rottman, B. M., & Santos, L. R. (2011). Causal reasoning in children and animals. In T. McCormack, C. Hoerl, & S. Butterfill (Eds.), Tool Use and Causal Cognition. Oxford: Oxford U.P.

2010

Rottman, B. M., & Keil, F. C. (2010). Connecting causal events: Learning causal structures through repeated interventions over time. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp.907-912). Austin, TX: Cognitive Science Society. Abstract PDF
How do we learn causal structures? All current approaches use scenarios in which trials are temporally independent; however, people often learn about scenarios unfolding over time. In such cases, people may assume that other variables don't change at the same instant as an intervention. In Experiment 1, participants were much more successful at learning causal structures when this assumption was upheld than violated. In Experiment 2, participants were less influenced by such temporal information when they believed the trials to be temporally independent, but still used the temporal strategy to some extent. People seem to be inclined to learn causal structures by connecting events over time.
Chang, A., Sandhofer, C.M., Adelchanow, L., & Rottman, B. (2010). Parental numeric language input to Mandarin Chinese and English speaking preschool children. Journal of Child Language, 38, 341-355. doi:10.1017/S0305000909990390 Abstract PDF
The present study examined the number-specific parental language input to Mandarin- and English-speaking preschool-aged children. Mandarin and English transcripts from the CHILDES database were examined for amount of numeric speech, specific types of numeric speech and syntactic frames in which numeric speech appeared. The results showed that Mandarin-speaking parents talked about number more frequently than English-speaking parents. Further, the ways in which parents talked about number terms in the two languages was more supportive of a cardinal interpretation in Mandarin than in English. We discuss these results in terms of their implications for numerical understanding and later mathematical performance.

2009

Rottman, B. M., Ahn, W., Sanislow, C. A., & Kim, N. S. (2009). Can clinicians recognize DSM-IV personality disorders from Five-Factor Model descriptions of patient cases? The American Journal of Psychiatry, 166, 427-433. Abstract PDF Supplement Discussion
Features are inherently ambiguous in that their meanings depend on the categories they describe (e.g., small for planets vs. molecules; Murphy, 1988). However, a new proposal for the next version of the DSM (DSM-IV-TR, Diagnostic and Statistical Manual of Mental Disorders, 4th Ed., text revision; American Psychiatric Association, 2000) advocates eliminating personality disorder categories, instead describing patients using only dimensions with the well-known Five-Factor Model. We investigated whether experts in personality pathology are able to translate dimensional patient descriptions into their corresponding diagnostic categories in the current version of the DSM. The results showed that even experts had considerable difficulty disambiguating the meaning of the dimensions to determine correct diagnoses and found the utility of the dimensional system to be lacking. Implications for categorization research are discussed.
Rottman, B. M., & Ahn, W. (2009). Causal learning about tolerance and sensitization. Psychonomic Bulletin and Review, 16(6), 1043-1049. doi:10.3758/PBR.16.6.1043 Abstract PDF
We introduce two new, abstract causal schemata used during causal learning: (i) tolerance, when an effect diminishes over time as an entity is repeatedly exposed to the cause (e.g., a person becoming tolerant to caffeine), and (ii) sensitization, when an effect intensifies over time as an entity is repeatedly exposed to the cause (e.g., an antidepressant becoming more effective through repeated use). In Experiment 1, participants observed cause/effect data patterns unfolding over time that exhibited either the tolerance or the sensitization schema. Compared to a condition with the same data appearing in a random order over time, participants inferred stronger causal efficacy and made more confident and more extreme predictions about novel cases. In Experiment 2, the same tolerance/sensitization scenarios occurred either within one entity or across many entities. In the many-entity conditions, when the schemata were violated, participants made much weaker inferences. Implications for causal learning are discussed.
Rottman, B. M. & Ahn, W. (2009). Causal inference when observed and unobserved causes interact. In N.A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp.1477-1482). Austin, TX: Cognitive Science Society. Abstract PDF
When a cause interacts with unobserved factors to produce an effect, the contingency between the observed cause and effect cannot be taken at face value to infer causality. Yet, it would be computationally intractable to consider all possible unobserved, interacting factors. Nonetheless, two experiments found that when an unobserved cause is assumed to be fairly stable over time, people can learn about such interactions and adjust their inferences about the causal efficacy of the observed cause. When they observed a period in which a cause and effect were associated followed by a period of the opposite association, rather than concluding a complete lack of causality, subjects inferred an unobserved, interacting cause. The interaction explains why the overall contingency between the cause and effect is low and allows people to still conclude that the cause is efficacious.
Rottman, B. M., Kim, N. S., Ahn, W., & Sanislow, C. A. (2009). The cognitive consequences of using categorical versus dimensional classification systems: The case of personality disorder experts. In N.A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2825-2830). Austin, TX: Cognitive Science Society. Abstract PDF
Features are inherently ambiguous in that their meanings depend on the categories they describe (e.g., small for planets vs. molecules; Murphy, 1988). However, a new proposal for the next version of the DSM (DSM-IV-TR, Diagnostic and Statistical Manual of Mental Disorders, 4th Ed., text revision; American Psychiatric Association, 2000) advocates eliminating personality disorder categories, instead describing patients using only dimensions with the well-known Five-Factor Model. We investigated whether experts in personality pathology are able to translate dimensional patient descriptions into their corresponding diagnostic categories in the current version of the DSM. The results showed that even experts had considerable difficulty disambiguating the meaning of the dimensions to determine correct diagnoses and found the utility of the dimensional system to be lacking. Implications for categorization research are discussed.

Resources and Tutorials

Research Methods Dojo: Tutorials for research methods students on within- vs. between-subjects designs and on carryover, practice, and fatigue effects.
Causal Strength Calculators: Code for models of causal strength, including Rescorla-Wagner (Rescorla & Wagner, 1972), ∆P (Jenkins & Ward, 1965), Power PC (Cheng, 1997), and temporal-difference learning (Sutton & Barto, 1987). A minimal sketch of three of these models appears below.