Tversky et al. discuss how people construct a mental map of the space around the body, organized along three essential bodily axes of reference. According to their Spatial Framework Theory, an individual identifies objects fastest along the head/feet axis, then the front/back axis, and finally the left/right axis; this theory best explained the response times observed in the study.
What I would like to know is how this theory holds up when the reference frame is built not by reading a description of one's surroundings and then answering questions, but from images. I am thinking of Google Street View, where you can look in every direction and build a mental map of what surrounds your virtual location. If a researcher performed the same experiment in this setting, having you look in one direction first and anchor your frame of reference there, how easily could you then turn to face a different direction and answer questions about your surroundings in directional terms? Since the head/feet axis is not salient here, would the left/right axis now show faster response times than the front/back axis, or would the ordering remain as in the original study?
Additionally, if you dropped into a random urban location with Google Street View, how would you mentally situate your understanding of that location in a larger context? How do you carry the image you formed of the area at street level into the zoomed-out view, where you no longer see the street's visual features, just the street name? Part of this may be a Google visualization issue: you lose the sense of which direction you were facing when you zoom out of Street View. Personally, I find it disorienting to make sense of what I was looking at in a particular direction and then lose that directionality on zooming out. In terms of spatial cognition, I wonder how my brain (maybe just mine, if no one else has noticed this) loses the mental map it had of the streetscape when the viewpoint shifts from within the scene to a top-down view of straight lines and street names. Shouldn't it be easier to visualize the street as straight lines and names if the brain already schematizes the street view into exactly these elements ("nodes and links, landmarks and the paths among them, elements and their spatial relations" (517))? And if the problem is the switch between perspectives, how could Google use these spatial cognition findings to reduce the confusion?
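As a loose illustration of the schematization Tversky describes, the "nodes and links" mental map can be modeled as a simple graph. This is a hypothetical sketch, not anything from the reading or from Google's actual system: the names (`MentalMap`, `StreetView`, `heading_deg`) are my own. The point it tries to capture is that the street-level and top-down views can share the same underlying graph, while the street-level view additionally carries a heading that the zoomed-out view discards, which may be exactly the directionality that gets lost.

```python
# Hypothetical sketch of a mental map as "nodes and links" (Tversky's schematization).
from dataclasses import dataclass, field


@dataclass
class MentalMap:
    # Adjacency list: each node (landmark/intersection) -> set of linked nodes (paths).
    links: dict = field(default_factory=dict)

    def add_path(self, a: str, b: str) -> None:
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)


@dataclass
class StreetView:
    map: MentalMap
    position: str        # the node where the viewer stands
    heading_deg: float   # gaze direction -- present at street level only

    def zoom_out(self) -> MentalMap:
        # Zooming out keeps the nodes and links but drops the heading,
        # which may be why the sense of direction disappears.
        return self.map


m = MentalMap()
m.add_path("cafe", "corner of Main & 1st")
m.add_path("corner of Main & 1st", "library")

view = StreetView(m, position="corner of Main & 1st", heading_deg=270.0)
survey = view.zoom_out()
print(sorted(survey.links["corner of Main & 1st"]))  # → ['cafe', 'library']
```

If the confusion really does come from the dropped heading, one design implication would be to keep an orientation cue (an arrow or view cone at the viewer's last position) visible in the top-down view.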