Phoenix
  The Phoenix Lander. We study the spatial reasoning of scientists working on the Phoenix mission.

Overview

Cognitive psychology has devoted a great deal of attention to the study of vision, but surprisingly little to visual problem solving. The dearth of visual cognition research is especially pronounced for visual spaces larger than a 17-inch monitor. And yet people engage in a broad range of complex visual problem solving tasks that involve multiple monitors, wall displays, and information displayed in more than simple 2D form. At the same time, the bulk of our visual information comes in through a very small foveal window and is stored in a very small visual working memory. We aim to forge new understandings of how rich visual information is represented mentally and how it guides complex problem solving.

Eyetracking

Key Results
  • When information across pages is superimposed (like pages in a book or slides in a PowerPoint deck), problem solving is much slower than when the same information is displayed in a distributed fashion (at least for tasks that require some integration of information).
  • People adaptively store global and local information chunks in their small visual working memory, using eye movements to the external display to recover local and global information as needed.
  • Mental representations are limited by neurocomputational capabilities.
  • People tend to move from internal mental representations that match the external structure of information in the environment to mental representations that best match the neurocomputational demands of the task at hand.
two screens

The Team

Schunn Lab: Allison Liu

Collaborators: Jooyoung Jang (UCLA), Xiaohui Kong (UTHMC), Susan Kirschenbaum (NUWC), Michael McCurdy (NASA), Greg Trafton (NRL), Susan Trickett (NRL)

 

TSP

The Traveling Salesman Problem. People can solve it accurately and quickly, yet they do so with a very small visual working memory. How?
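A minimal sketch of the kind of purely local strategy that needs almost no memory: a greedy nearest-neighbor heuristic that only ever holds the current city and the set of unvisited cities. The function and example points below are our own illustration, not the lab's model of how people construct tours.

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbor tour: at each step, hop to the closest
    unvisited city. Only the current city and the unvisited set are
    'in memory' at any moment -- a deliberately local strategy."""
    unvisited = set(range(len(cities)))
    unvisited.remove(start)
    tour = [start]
    current = start
    while unvisited:
        # Pick the closest remaining city (a local decision, no global plan).
        current = min(unvisited, key=lambda c: math.dist(cities[current], cities[c]))
        unvisited.remove(current)
        tour.append(current)
    return tour

# Example: five points in the plane.
points = [(0, 0), (2, 1), (5, 0), (6, 3), (1, 5)]
print(nearest_neighbor_tour(points))  # [0, 1, 2, 3, 4]
```

Human solvers are often closer to optimal than a purely local heuristic like this, apparently by bringing global structure into their local moves, which is exactly the puzzle given such a small working memory.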
 

 

Current Projects

Physical vs. Virtual supports for learning. Is it better to learn with physical models or with virtual models? We are exploring this question in the context of educational robotics. In some cases, virtual models can simplify and speed up the learning task, but in other cases students miss fundamental elements of the learning situation without physical models, even when compared with highly detailed 3D virtual simulations.

 


   
 

Past Projects

When bigger is better. Why are two screens better than one for solving problems? At any moment you can see only a small part of even one screen, so the extra screen space should not matter. Yet the effect on problem solving is actually quite large (for tasks that require integrating information across screens). We found that the size of the effect varies dramatically across situations and is driven overall by three factors: the extra time taken to memorize content, the time to find content located externally, and the time to actively place copies or summaries of content into additional display areas.
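As a rough illustration of that three-factor account, the toy additive model below simply sums the three time costs; the function name and the numbers plugged in are made-up placeholders, not measured values.

```python
def total_solution_time(memorize_time, search_time, placement_time):
    """Toy additive model: total cost is the sum of the three factors
    identified above. Which configuration wins depends on how a given
    display layout trades the three terms off against one another."""
    return memorize_time + search_time + placement_time

# Illustrative (made-up) values, in seconds:
one_screen  = total_solution_time(memorize_time=40, search_time=5, placement_time=0)
two_screens = total_solution_time(memorize_time=12, search_time=15, placement_time=8)
print(one_screen, two_screens)  # 45 35 -- here the second display pays off
```

The point of the sketch is only that the sign of the effect can flip when, for instance, the time spent searching the external display grows faster than the memorization time shrinks.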

Solving big problems with a little brain. Estimates of visual working memory size put it around 2 to 5 chunks of information, depending on the person. That is a tiny amount of information. Yet, all humans can solving fairly complex visual problems, some in ways that are much more efficent than computers, integrating global information into local activities. How can that be done using such a small visual working memory? We collected and analyzed eye-tracking data and bulding computational models of visual problem solving to uncover the answer.
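A rough sketch of that idea, assuming a least-recently-used buffer of 2 to 5 chunks: when a needed chunk is not in the buffer, the model "looks back" at the external display, which is the kind of behavior eye tracking lets us count. The class and its parameters are our illustration, not the published model.

```python
from collections import OrderedDict

class TinyVWM:
    """Capacity-limited visual working memory: holds at most `capacity`
    chunks (2-5, per the estimates above) and evicts the least recently
    used chunk when full. A miss stands in for an eye movement back to
    the external display to re-acquire the information."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.chunks = OrderedDict()
        self.refixations = 0

    def recall(self, key, external_display):
        if key in self.chunks:
            self.chunks.move_to_end(key)     # refresh recency on a hit
            return self.chunks[key]
        self.refixations += 1                # simulated eye movement on a miss
        value = external_display[key]
        self.chunks[key] = value
        if len(self.chunks) > self.capacity:
            self.chunks.popitem(last=False)  # evict the oldest chunk
        return value

# Illustrative use: a display holding more items than the buffer can.
display = {f"item{i}": i for i in range(8)}
vwm = TinyVWM(capacity=3)
for name in ["item0", "item1", "item2", "item0", "item5", "item1"]:
    vwm.recall(name, display)
print(vwm.refixations)  # 5 -- most look-ups require going back to the display
```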

   
 
Publications

  • Jang, J., Trickett, S. B., Schunn, C. D., & Trafton, J. G. (2012). Unpacking the temporal advantage of distributing complex visual displays. International Journal of Human-Computer Studies, 70, 812-827. pdf
  • Jang, J., & Schunn, C. D. (2012). Performance benefits of spatially distributed vs. stacked information on integration tasks. Applied Cognitive Psychology, 26(2), 207-214. pdf
  • Jang, J., & Schunn, C. D. (2012). Physical design tools support and hinder innovative engineering design. Journal of Mechanical Design, 134(4), 041001. pdf
  • Jang, J., Schunn, C.D., & Nokes, T. J. (2011). Spatially distributed instructions improve learning outcomes and efficiency. Journal of Educational Psychology, 103(1), 60-72. pdf
  • Kong, X., Schunn, C. D., & Wallstrom, G. L. (2010). High regularities in eye-movement patterns reveal the dynamics of visual working memory allocation mechanism. Cognitive Science, 34(2), 322-337. pdf
  • Trickett, S. B., Trafton, J. G., & Schunn, C. D. (2009). How do scientists respond to anomalies? Different strategies used in basic and applied science. Topics in Cognitive Science, 1(4), 711-729. pdf
  • Kong, X., & Schunn, C. D. (2007). Global vs. local information processing in visual/spatial problem solving: The case of traveling salesman problem. Cognitive Systems Research, 8(3), 192-207. pdf
  • Schunn, C. D., Saner, L. D., Kirschenbaum, S. K., Trafton, J. G., & Littleton, E. B. (2007). Complex visual data analysis, uncertainty, and representation. In M. C. Lovett & P. Shah (Eds.), Thinking with Data. Mahwah, NJ: Erlbaum. pdf
  • Trickett, S. B., Trafton, J. G., Saner, L. D., & Schunn, C. D. (2007). "I don't know what is going on there": The use of spatial transformations to deal with and resolve uncertainty in complex visualizations. In M. C. Lovett & P. Shah (Eds.), Thinking with Data. Mahwah, NJ: Erlbaum. pdf
  • Trafton, J. G., Trickett, S. B., Stitzlein, C. A., Saner, L. D., Schunn, C. D., & Kirschenbaum, S. S. (2006). The relationship between spatial transformations and iconic gestures. Spatial Cognition & Computation, 6(1), 1-29. pdf
  • Trickett, S. B., Trafton, J. G., & Schunn, C. D. (2005). Puzzles and peculiarities: How scientists attend to and process anomalies during data analysis. In M. E. Gorman, R. D. Tweney, D. Gooding, & A. Kincannon (Eds.), Scientific and technological thinking (pp. 97-118). Mahwah, NJ: LEA. pdf
  • Duric, Z., Gray, W., Heishman, R., Li, F., Rosenfeld, A., Schoelles, M. J., Schunn, C., & Wechsler, H. (2002). Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction. Proceedings of the IEEE, 90(7), 1272-1289. pdf