Posts Tagged ‘506’

Cognitive Research in GIS – Montello

Sunday, November 8th, 2015

The article by Montello introduces six areas of recent cognitive GIS research and raises many questions about cognitive research in GIScience. I found it interesting that Montello included questions about the discipline of GIS itself. For example, the author questions whether GIS is coherent as a discipline and can be referred to as a single entity, and whether it is possible for any individual to know about and integrate all the different fields that contribute to GIScience.

One point I found especially interesting was that “GIS is not exclusively spatial.” In our discussions, we have not talked much about the distinctions between geographical and spatial. This statement, I imagine, would be rather mind-blowing for someone just starting in GIScience. It is true that GIS incorporates many other aspects: temporal, logical and informational. As we have discussed in class, the end product of GIS work doesn’t have to be a map (contrary to popular belief). Montello’s argument that much of the spatial cognition of using GIS “really just involves perceiving patterns on a computer screen” is a strong statement. It has implications for the usefulness of incorporating GIS in K-12 education. It also has implications for what GIScientists are really contributing if using a GIS “does not involve much spatial memory, inference or reasoning.” It’s certainly not an inspiring thing to read about a discipline that I am becoming increasingly interested in. But it certainly does provide a challenge: to use GIS in a way that DOES involve more cognitive heavy-lifting.

 

-denasaur

The UCDP dataset: now with geography!

Monday, November 2nd, 2015

The article by Sundberg and Melander was interesting, and for me it brought up questions about spatial scales and about situating the data within geography and GIS. One thing I noticed immediately about the article was the map, as a map is often the first thing non-geographers associate with GIS and geography. I was disappointed that the authors didn’t map the trends of organized violence (i.e., state-based, non-state and one-sided), because it would have been a very interesting visualization to see, for example, where state-based violence is occurring the most. They have represented it temporally in a line graph, but it would have added to the analysis to represent it geographically. Perhaps they didn’t include it because it would have just reiterated already known information? (For example, it’s perhaps already well-known which countries or cities in Africa experience the most state-based violence.)

For me, the article raised as many questions about spatial scales as it did about open data. The authors write that previous research has been focused mainly on violence at the country/year level, but they argue for more sub-national studies, saying that they might help shed more light on the underlying mechanisms of violence. I agree, and think that mapping examples of violence at the sub-national level would allow for more thorough examination of all the variables that contribute to violence, because these variables would certainly change from country to country.

Overall, I found the article very interesting, but a bit difficult to situate in GIScience or even in geography. It seemed like the authors were incorporating the spatial data as simply another facet of their data, along with other factors like time and type of violence, rather than framing it as an investigation fundamentally based in geography. For the authors, GIS is a tool they use to georeference their data and make a nice-looking map. This is a fine approach – but it leaves me wondering how the article would be different if the approach was embedded in geography, rather than incorporating geography as one aspect of the data.

~denasaur

Smart cities: who do they benefit?

Thursday, October 29th, 2015

Roche’s article about smart cities is an organized and interesting read which situates smart cities in GIScience and offers ways for GIScience to make cities smarter.

As I read this article, I wondered if and how smart cities might reinforce existing power structures and further marginalize some groups in urban landscapes. “Rethinking urbanization” with an approach that is more focused on individuals sounds great – but it raises the question: which individuals are we focusing on? For example, it was troubling to me that neither this article nor the Sagl et al article mentions how smart cities could also be accessible cities, in ways that current cities are not. Would the smart cities the author envisions make public transit wheelchair accessible or help people with social anxiety avoid crowds? Where are the homeless in the author’s smart city vision, and how can they contribute geospatial information? Another issue is that proposing technological solutions and enhancing the “digital city” dimension of smart cities raises the problem of access to and exclusion from these technologies. The author does address this critique, however, saying that if initiatives are driven by technologies, they can be reductive and one-size-fits-all.

Overall it seems to me that smart cities have an enormous amount of potential to improve the lives of many people, but we must be sure that all people are included. Hopefully, this is where the concept of the “intelligent city” comes into play, using VGI and participatory GIS to connect citizens; and where the “open city” increases cooperation and transparency.

~denasaur

Climate change: the ultimate complexity

Monday, October 26th, 2015

Manson and O’Sullivan’s article raises some very interesting points about geospatial complexity, the difficulty of navigating between the very general and the specific, complexity in ontologies and epistemologies, and complexity in computer modeling. One of the first things that caught my eye was that the authors mentioned that space-and-place based research recognizes the importance of both qualitative and quantitative approaches. Disregarding qualitative data is a critique I have read often in the critical GIS literature, and I was glad to see that the authors not only addressed this, but made space for qualitative approaches in their vision for complexity studies going forward.

The article actually made me reflect on my studies in environment. Geospatial complexity as it is explained in this article is closely connected to the environment, and I immediately thought of climate change. Environmental systems are complex systems that are often not fully understood – for example, it’s difficult to know tipping points. Climate change is also a problem for which experts struggle to navigate between making generalizations and losing sight of the particular, a tension the authors address in this article. Yes, it will make wide, sweeping changes to the planet which can be generalized as warming – but different places at a smaller scale will experience unique, unpredictable changes. Manson and O’Sullivan state that space, place and time are all part of complex systems – and of course, they are part of the complex system of climate change.

The authors conclude that it is an exciting time to be part of the research of complexity and space-and-place, and that complexity studies is moving beyond the phase of “starry-eyed exuberance.” From my perspective of the complexity of climate change, I’d say that there is no better time than now, because complexity seems to be an essential part of trying to understand what is happening on the planet.

-denasaur

GCI: Shaped By and Shaping Society

Monday, October 12th, 2015

Yang et al.’s article about the history, frameworks, supporting technologies, functions, domains and users, and future directions of GCI (Geospatial Cyberinfrastructure) is a dense read which attempts to cover all the bases of GCI. The article made me think about some of the Critical GIS articles I have been reading for my literature review. For example, Sheppard’s 1995 article “GIS and Society: Toward a Research Agenda” addresses the way that society influences technology as much as technology influences society: the GIS we know has been shaped by a post-war society focused on maximizing efficiency (Sheppard 8). Yang focuses on the possible impacts of GCI in different domains and in society, but doesn’t directly discuss how GCI is shaped by society. However, this does come through in the article: for example, Yang writes about how climate change poses a problem for humanity and will require high-quality geospatial data in vast quantities in order to capture and interpret knowledge. In the same way that GIScience was shaped by the needs of both wartime and post-war societies, perhaps GCI will be shaped by the needs of a society facing a global climate problem. Yang also describes a need for a new sociology of knowledge, based on how science has been transformed and shifted to online media.

Yang lists several areas for future strategies in GCI; the one which stands out to me is social heterogeneity and complexity. This complements Yang’s discussion of a diverse community and end-users in fields ranging from education to environmental sciences. There is a possibility for the field of GCI to develop more organically, to be shaped and improved in response to the diverse needs of the end users in the community.

~denasaur

GIS: Just another means of colonization?

Monday, October 5th, 2015

Rundstrom’s 1995 article “GIS, Indigenous Peoples and Epistemological Diversity” is an insightful critique of how geospatial technologies and Western science are fundamentally incompatible with, exclusionary of, and oppressive to indigenous epistemologies. For me, this has been the most thought-provoking topic yet. It made me reflect on just how pervasive and deeply-rooted colonialism is, how indigenous epistemologies have survived, and how that implicates me as a student of GIScience.

Rundstrom states that he understands GIS as a “technoscience,” which “modify and transform the worlds which are revealed through them” (46). Rundstrom actually highlights the division between GIS as a science and a tool. As a science, GIScience is fundamentally incompatible with indigenous worldviews. For centuries, Western science has actively invalidated indigenous ways of knowing. The legacy of colonization lives on through our settler society, which continues to inhabit stolen indigenous land. Western science’s desire to know more, to represent more, to describe more of our world is the means to exploit more, expand more and take more. As a tool, GIS is a technology, and technologies have historically been used for assimilation and to continue colonization. The technical capability, language (jargon) and education required to participate in the use of these technologies also exclude indigenous people and their ways of knowing. Undeniably, our tools hold power over other people.

Where does this leave GIS, and indigenous ways of knowing and describing geography? I think Rundstrom would argue that indigenous knowledge should not be incorporated into GIS for the sake of taking what is “useful” to us and leaving the rest – which is historically what has been done, again and again, to indigenous groups through colonialism. Instead, indigenous groups could use GIS for their own aims, because GIS is likely to be believed by empirically-minded policymakers. For example, Operation Thunderbird uses crowdsourced mapping to display information on missing and murdered indigenous women: http://www.giswatch.org/en/country-report/womens-rights-gender/canada. Although GIS still has a long way to go before it can be at all compatible with indigenous epistemologies, it has potential to be an advantageous political tool.

-denasaur

Spatializing Social Networks

Monday, September 21st, 2015

The subject of social network analysis is fascinating; however, I found the article by Radil, Flint and Tita (2010) to be somewhat difficult reading. The article was full of jargon, such as “spatializing,” “spatialities,” “betweenness” and “positional analysis,” and the authors often needed to translate themselves by writing “in other words…” Nevertheless, the topic and its application to rival gangs in Hollenbeck were very interesting. The authors discuss the idea of embeddedness – how social behavior is produced by and inextricably connected to space – and use spatial statistics such as Moran’s I to examine the social networks and splits between gangs. The example of gang territory is an excellent one, because turf and territory have a significant geographical element that manifests itself in gang rivalries and behavior.
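Moran’s I is less forbidding than the jargon suggests: given one value per areal unit (say, violent incidents per gang turf) and a spatial weights matrix recording which turfs neighbour each other, it tests whether similar values cluster in space. The sketch below is a minimal illustration in numpy; the incident counts and adjacency matrix are hypothetical, not taken from Radil, Flint and Tita’s Hollenbeck data.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for areal values and a spatial weights matrix.

    values  : 1-D array, one observation per unit (e.g., incidents per turf)
    weights : 2-D array, weights[i, j] > 0 where units i and j are neighbours
    """
    values = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = values.size
    z = values - values.mean()            # deviations from the mean
    num = n * np.sum(w * np.outer(z, z))  # cross-products of neighbouring deviations
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Hypothetical example: five turfs, binary adjacency (1 = shared border).
incidents = [12, 15, 3, 2, 14]
adjacency = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
])

print(morans_i(incidents, adjacency))  # values well above 0 suggest spatial clustering
```

A result near +1 would mean high-incident turfs tend to border other high-incident turfs – the kind of spatial structure the authors look for before layering the social network on top.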

While reading the article, I became interested in other applications of social network analysis. I found myself thinking, “How could GIS be used to consider the spatial networks of things more positive than gang territory?” For example, one could explore the spatiality of activist social networks or run a network analysis of the use of health centers. Social media use is also a relevant example because it is, of course, social, but it also has an important spatial element. One could use a spatial network analysis to learn how information is distributed through social media across space and time.

This article explores some of the issues and recurring questions of GIScience. For example, the authors struggle to incorporate both space and time in their analysis, acknowledging that their static model doesn’t capture the dynamism of constantly-changing social networks. The authors also address the multi-disciplinary aspect of GIScience by suggesting that their results be strengthened by other ways of knowing from other disciplines.

 

-denasaur

Goodchild 1992

Monday, September 14th, 2015

This article is a snapshot of scholarly attitudes towards GIS in 1992, and of how the field needed to move from system to science. It is interesting to look at this article from a historical perspective, to see what the ancient GIS masters thought of their discipline. Goodchild expresses some frustration that his discipline is criticized as being too technology driven. Yet he himself says that we tend to treat GIS displays as flat, instead of exploiting their potential to display curved surfaces. He says that we need new technologies that can better display curved surfaces and 3D modelling. Today we have Google Earth Pro, which is now free to use for all, and many other paid 3D modelling GIS packages. Yet for the most part GIS continues to be worked on in either raster or vector on a virtual flat surface. Why? Because it works: not everything has to be modelled in 3D, just like directions to the grocery store don’t have to be a shortest path overlaid on 5x5m resolution satellite imagery. Goodchild states that the greatest advances in GIS research have been where technology itself stood in the way of solutions. He proposes turning the focus away from the tech towards the science, yet is himself clearly interested in advancing the technology. Perhaps, then, GIS was slow to adopt 3D modelling and curved projections because they didn’t actually help solve GIScience problems.
-anontarian

Goodchild 2010

Monday, September 14th, 2015

In his 2010 update, Goodchild explains the developments in GIS over the past 20 years and where he expects the field to go in the next decade. His areas of further research really reveal how far the discipline has come technologically. For example, in the 1992 article he discusses how the ability to show colour gradations needs to be improved, and speaks of being able to scan maps and accurately recreate readable maps on screen. In 2010 he discusses the best ways of 3D/4D modelling and even adding a fifth dimension of attributes that exist in space-time. His interest in new forms of GIS modelling shows how the field has tried to move away from maps as the end product. It is interesting to see how the field has diversified and to consider the author’s perspective on GIS education. While some aspects of GIS have become increasingly complex (i.e., our modelling abilities), many basic parts of GIS have become accessible to the general public. Whether education should focus on expanding the science or teaching the basic tools is an interesting debate. It seems that researchers would like to see it as a science, whereas firms that still use GIS for basic applications would probably see it as a tool.
-anontarian

Twenty Years of Progress

Monday, September 14th, 2015

I found the article by Goodchild to be engaging and easy to read. The article reads more like a reflection than an academic paper, as Goodchild explores the accomplishments, prominent literature, and advancements of the past 20 years of GIS. After reading the Wright 1997 article, this article is especially interesting to reflect on. It seems to treat the “tool versus science” debate as closed, naming GIS academic journals with “science” in the title and naming GIScientists who appear in academic circles. Goodchild names what he sees as three subdomains of GIScience: the computer, the user and society. Perhaps it’s the computer that’s seen as the tool, not GIS.

A key difference between this article and the Wright 1997 article which was particularly striking to me was the difference in citizen participation in GIS. The Wright article discusses the viewpoints of a few privileged academics on GIS; however, as the Goodchild article clearly shows, GIS has become much less exclusive in the past two decades. In 1997, the prevalence of OpenStreetMap and Humanitarian OSM could not have been imagined. The “GIS community,” as Wright refers to it, has therefore expanded enormously in the past two decades, beyond just academics and high-level technicians. For some users, it may never be more than a tool – but for many others, it’s become a legitimate academic discipline and research focus.

Denasaur

G.I.S: A Tool or Science?

Monday, September 8th, 2014

The question of whether G.I.S. is a science or a tool is brought up in Wright, Goodchild, and Proctor’s paper. Through the examination of an online discussion board, they come to the conclusion that G.I.S. can be placed on a continuum ranging from G.I.S. as a tool, through G.I.S. as toolmaking, to G.I.S. as a science.

The question of G.I.S. as a tool or science is an important one that should be addressed. While many years have passed since the writing of this paper, I feel it is necessary that the discussion be continued since, as the authors argue, “science” is often synonymous with academic legitimacy. Looking at the number of G.I.S. journals and institutions with G.I.S. programs, it is evident that G.I.S. is increasingly being viewed as a science. The proliferation of G.I.S. technologies (such as Google Maps) that are used by the public (most of whom don’t have a strong grasp of the underlying concepts) is a good reason to continue the debate over describing G.I.S. as a tool, a science, or something in between. Perhaps depending on how, and for what purpose, the G.I.S. is being used, people might have different perceptions of its role as either a tool or a science. For a driver using it to get from point A to B it might just be a tool, while for an academic researcher it could be a science. I would tend to agree that it is closer to the science end of the spectrum.

-Benny

Visualizing uncertainty

Thursday, April 4th, 2013

MacEachren et al.’s article provides a thorough overview of the current status of uncertainty visualization along with its future and its challenges. It seems to be established that uncertainty visualization is more useful at the planning stage than at the user stage of an application. This makes me think back to an earlier discussion on temporal GIS. We talked about how the important aspect of temporal GIS was its analytical capabilities, rather than its representational capabilities. While I do not deny the positive effect on analysis that visualization might have, I question whether it should be the aspect of uncertainty that is given the most attention.

Two of the challenges proposed by the article are developing tools to interact with depictions of uncertainty and handling multiple kinds of coexisting uncertainty. Might representation in some instances prove more trouble than it’s worth? Might representational practices at times obfuscate data that might be understood as just data? I want to note that I am asking these questions in earnest, not rhetorically. This, I guess, boils down to a question I have probably asked all semester: how do we evaluate what is important enough or useful enough to invest time in?

Wyatt

GIS&RS

Thursday, April 4th, 2013

Brivio et al.’s paper presents a case study integrating remote sensing and GIS to produce a flood map. After explaining the methodology and the results of other methods, the paper finds the integrative method to be 96% accurate.

This speaks to the value of interdisciplinary work. While RS applications on their own proved inadequate, a mixing of disciplines gave a fairly trustworthy result. While I understand the value of highly specialized knowledge, having a baseline of capability outside of one’s specific field is useful. I remember in 407 Korbin explaining that knowing even a bit of programming can help you in working with programmers, as understanding the way that one builds statements, as well as the general limits of a given programming language, will give you an idea of what you can ask for. The same is true for GIS/RS. Knowing how GIS works and what it might be able to do is useful for RS scholars in seeking help and collaboration, and vice versa. I think McGill’s GIS program is good in this respect. I got to dip my toes into a lot of different aspects of GIS (including COMP) and figure out what I like about it. If I end up working with GIS after I graduate, I know that the interdisciplinary nature of the program will prove useful.
Wyatt

Time or Space

Thursday, April 4th, 2013

Geospatial analysis can be no better than the original inputs, much like a computer is only as smart as its user. In the field of remote sensing, this adage may be on its way to becoming obsolete. Brivio et al. show, through a case study of catastrophic inundation in Italy, that they can compensate for the temporal gap between the capture of the remotely sensed data and the peak of the flood, which occurred a few days earlier.

The analysis, however, was not completed with the sole use of synthetic aperture radar images. Had it not been for the integration of topographic data, it is unlikely that similarly successful results could have been obtained.

With any data input, temporal and spatial resolution are limiting factors. Brivio et al. highlight this by acknowledging the use of NOAA thermal infrared sensors, which have a finer temporal resolution while lacking in spatial resolution. Conversely, the SAR images used in the case study analysis have a relatively higher spatial resolution, but come with longer temporal intervals between acquisitions.

Given Brivio et al.’s successful reconstruction of the flooding extent, it may be advantageous, if need be, to choose an input with a finer spatial resolution in exchange for a coarser temporal resolution, and to compensate for the temporal delay with additional inputs.
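To make that compensation concrete, the sketch below shows, in rough numpy terms, the kind of integration being described: a water mask derived from low SAR backscatter is grown onto neighbouring low-lying cells using elevation data, standing in for the flood peak the sensor missed. The thresholds, the growing rule and the toy arrays are all assumptions for illustration, not Brivio et al.’s actual procedure.

```python
import numpy as np

def estimate_flood_extent(backscatter, elevation, water_db=-15.0, rise_m=0.5):
    """Toy flood-extent estimate: a SAR-derived water mask extended onto
    neighbouring cells whose elevation is close to already-flooded cells.

    backscatter : 2-D array of SAR backscatter (dB); low values ~ open water
    elevation   : 2-D array of terrain heights (m), same shape
    """
    flooded = backscatter < water_db              # initial water mask from SAR
    for _ in range(10):                           # iteratively grow the mask
        max_wet = elevation[flooded].max()        # highest cell already flooded
        neighbours = np.zeros_like(flooded)       # cells adjacent to the mask
        neighbours[1:, :] |= flooded[:-1, :]
        neighbours[:-1, :] |= flooded[1:, :]
        neighbours[:, 1:] |= flooded[:, :-1]
        neighbours[:, :-1] |= flooded[:, 1:]
        grow = neighbours & ~flooded & (elevation <= max_wet + rise_m)
        if not grow.any():
            break
        flooded |= grow
    return flooded

# Hypothetical 4x4 scene: a river valley (low elevation) only partly seen as water by SAR.
sigma0 = np.array([[-20.0, -18.0, -5.0, -4.0],
                   [-19.0, -17.0, -6.0, -5.0],
                   [ -8.0,  -7.0, -6.0, -5.0],
                   [ -6.0,  -6.0, -5.0, -4.0]])
dem = np.array([[1.0, 1.2, 5.0, 6.0],
                [1.1, 1.3, 5.5, 6.5],
                [1.4, 1.5, 6.0, 7.0],
                [2.0, 2.5, 6.5, 7.5]])
print(estimate_flood_extent(sigma0, dem))  # the low-lying western cells come out flooded
```

The point of the toy is simply that an observation that is coarse in time but fine in space can be pushed toward the missed flood peak by leaning on an ancillary layer, which is the trade-off discussed above.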

Break remote sensing down into its two main functions: collection and output. One will inevitably lag behind the other, but eventually the leader will be surpassed by the follower, only for it to happen again some time down the road – much like two racers attached by a rubber band.

What all of this means for GIS is that eventually the output from remote sensing applications will surpass the computing power of geographic information systems. At that point, the third racer, processing, will become relevant, if it isn’t already.

Temporal Topology

Thursday, March 14th, 2013

Location, size, and proximity are just three of the many characteristics that can be attributed to a feature. As complex as they are, the topology and relationships are absolute. Before reading this article I thought it was just a matter of applying the concept of a temporal relationship in a similar manner, and I still believe that this is possible. For instance, the questions that the authors answer in Figure 5 could be answered similarly using the equivalent of “Clip” or the Raster Calculator. It would be laborious, time consuming, and confined to a rigid framework, but one could still answer the question, “Which areas were fallow land during the last 20 years?”
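As a minimal sketch of that brute-force, raster-calculator style of answer: stack one land-use raster per year and test each cell across the time axis. The class code and arrays below are hypothetical, and this is the laborious alternative that Marceau et al.’s framework is meant to replace, not their method.

```python
import numpy as np

FALLOW = 3  # hypothetical class code for fallow land in the yearly land-use rasters

def always_fallow(landuse_stack):
    """Cells classified as fallow in every year of the stack.

    landuse_stack : 3-D array (years, rows, cols) of land-use class codes
    """
    return np.all(landuse_stack == FALLOW, axis=0)

# Hypothetical 20-year stack of 3x3 land-use rasters with random classes 1-4.
rng = np.random.default_rng(0)
stack = rng.integers(1, 5, size=(20, 3, 3))
stack[:, 0, 0] = FALLOW           # force one cell to stay fallow for the whole period
print(always_fallow(stack))       # True only where the cell was fallow in all 20 years
```

Every new question of this kind means another pass over the whole stack, which is exactly the rigidity a dedicated temporal topology is designed to avoid.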

The framework that Marceau et al. develop is much more dynamic, and thus all calculations can be completed before asking any questions, as opposed to asking a specific question and then answering it after numerous clips and overlays. Generating a user-friendly spatio-temporal model would be a big step forward in answering questions in the fourth dimension, especially now, considering the ever increasing rate at which data is collected.

As with many problems in GIS, if the data were water and the processing the pipe through which the water must pass, there will always be a limiting factor. The authors are of the opinion that spatio-temporal data set availability is lacking, but they make progress in further widening the pipe. In the coming years I believe that the limiting factor will again become predominantly the processing of the data, as spatial data is collected at an ever increasing rate.

In other news, did anyone else have trouble with the document, where every instance of the text “fi” was missing?

AMac

visualizing time

Thursday, March 14th, 2013

Marceau et al.’s article looks at the use of temporal GIS in a study of land use in St. Eustache. While the paper shows one way that we may incorporate time into GIS, it is only one fairly limited use. The paper’s twelve-year-old date is important to consider in a fair critique, and I commend the researchers’ use of available software and interfaces in order to move forward on temporal projects. Further, their goal appeared to be focused on the ability to conduct spatiotemporal queries, rather than representation. While the former is probably the more essentially important part of temporal GIS, I’d like to talk about the latter.

The question of how to represent a temporal dimension in GIS is one that seemingly continues to stump geographers, and there doesn’t appear to be strong consensus on best practices. Dipto talked a bit about this below, and I agree with him that a useful area of thought in GIS should be how we might rethink the way we do temporal GIS. How then might we move forward? Can a static image accurately represent time? And what of the data in between recordings? How can we utilize interpolation that is accountable to the purpose of our GIS?
My main question is: is it important that there be a consensus on representation? And further, what does a consensus mean to us in terms of epistemological and ontological concerns?

Wyatt

LBS, consent

Thursday, March 14th, 2013

Steinfield’s article on location-based services gives a useful overview of prominent technologies, applications and issues related to the domain. Questions of privacy and ethics are raised in the article, but given the date of the article, the most pressing aspects of LBS and privacy had yet to arrive. Indeed, it seems that Steinfield did not forecast the ubiquity of smartphones that we’re experiencing in the present. With current context in mind, I want to briefly revisit some of the questions of privacy raised by the author.

Steinfield cites a set of principles regarding privacy: Notice, Choice, Consent, Anonymity, Access and Security. With these in mind, I started thinking about what kinds of options we have in terms of communication today. While it is certainly possible to live without a cell phone, it is pretty rare and largely inconvenient, especially amongst my generation. It is expected that we be reachable at all times, and I have heard employment counsellors telling clients that a cell phone is pretty much necessary to get a job. I don’t have a smartphone myself, but most of my friends do, and they’ve become a less expensive option than many more basic models of late. But when we opt in to a smartphone, does that mean we have to (to borrow lazily from Gramsci) consent to our own domination? Is it just that in order to be successful, to be able to communicate, we have to give up a large part of our privacy? Does this model of consent really respect the needs of all parties involved? Does it matter?

Wyatt

Spatial cognition, ontology, epistemology

Friday, March 1st, 2013

Tversky, in her paper, divides spatial cognition into three “spaces”: navigation, surrounding the body, and the body. Reading this article brought to mind previous discussions in our classes with respect to ontology and epistemology. While the article gave a series of examples of each type of spatial cognition, they were mostly rooted within a Western academic framework. It would be interesting to extend this discussion of spatial cognition to the ways in which it is variable.
I think that the way we think about space is highly structured by our environment and culture. That is to say that the way we order the environment is culturally located. The space of navigation is an easy place to see this difference. I remember in an earlier GIS class talking about the house numbering system in Japan. Wikipedia explains:

“In Japan and South Korea, a city is divided into small numbered zones. The houses within each zone are then labelled in the order in which they were constructed, or clockwise around the block. This system is comparable to the system of sestieri (sixths) used in Venice.”

Even this small detail will have bearing on the space of navigation. While I feel confident navigating the Canadian street system, I would be lost in this different system. I think that it would require that I think about space and spatial relationships in a new way. My spatial cognition is rooted in local understandings. Thinking of this in terms of GIS work, I think it is important to keep in mind the ways that we think about space in our work and how that accords with the people we are producing GIS with and for.

Wyatt

On Academia, Industry and Assumed Value Neutrality

Thursday, February 28th, 2013

Reading Coleman et al.’s paper, a useful piece examining VGI participants and their motivations, brought forth, for me, one of my bigger pet peeves: the idea of value-neutrality (and proficiency) within academia. Let me explain. In the list of motivations to contribute, the authors identify three negative motivations: mischief, agenda, and malice and/or criminal intent. While the article by no means classifies these motivations as specific to VGI, their placement sets them symbolically apart from the knowledge produced by experts. By positing these negative uses as illustrative of VGI being non-neutral, I read an assertion of value neutrality into the domain of experts.
I recognize that the rigorous demands of a publishing process cannot be ignored, and unquestionably account for a higher quality of data production within academic and professional realms. This does not mean that they are perfect, nor does it mean they are without agenda. Agenda is not always explicit, and I would argue it is not always even conscious. However, the lay reader of an academic paper believes it to be value-neutral, all the while VGI is seen as never trustworthy. Let us bring this to the domain of GIS.
We trust the professionals at Google Maps and the peer-reviewed GIS paper, but not OpenStreetMap. Both producers and produsers have to make decisions when they input data. We know that in spatial representations, it is easy to lie and it is easy to produce hierarchies. In fact, it is difficult not to. The difference between VGI and professional GIS is that people expect the former to do so and the latter to abstain. However, Google has to make money, and the academic has to be published, and they can mold their data to this end, as can their editors and publishers. I guess what I’m asking, in the end, is: where can we make a useful critique of VGI that takes into account the unreliability of all data? How do we introduce accountability into academia, industry or VGI?

On the question of mischief, well, that one happens too. See here

Wyatt

Explorations in the Use of Augmented Reality for Geographic Visualization

Thursday, February 21st, 2013

There is a small but significant issue that could make augmented reality boom or bust when it comes to GIS. It is the same problem that architects and engineers once faced as well. Only with the advent of computers and monitors were they able to rest their necks and sit down in a chair instead of hunching over a drafting board all day. GIS, for the most part, wasn’t subjected to such a fate.

Augmented reality could change that. Even now, similar displays are available to the public in shopping malls and showrooms, using the same table top, infrared projector method outlined in the article. What sets the visitors apart from GIS users is that they only use it for a couple of minutes at a time. As any GIS user knows, geospatial analysis rarely takes a short amount of time.

In light of that, augmented reality will need to make the jump from top-down to heads-up display before it makes significant inroads into the industry.

One part of the methodology that left something to be desired was the need for the user to place a flash card down on each section of the table where they wanted to view supplementary information. Why not just display all the data at once? If it’s a matter of computing power, that is a simple fix. If, however, it is intrinsic to the software framework, it would greatly benefit the project if, instead of viewing a small section of a large map, the exocentric viewpoint was zoomed in to a smaller…bigger(?) scale so the data took up the extent of the display. After all, when’s the last time you squinted at a map of the island of Montreal while trying to figure out how far your house is from the nearest depanneur?

AMac