Archive for January, 2013

Race to the Bottom

Thursday, January 31st, 2013

GIS strives for something that seems near impossible: a blanket solution to a problem with more than one solution. Humans are fickle, subjective, and by and large ignorant in comparison to the communal wealth of knowledge at our fingertips. Thus people, or in the world of geovisualization, users, are never going to be able to rely on just one form of representation. A variable, subject-based method is what everyone aims for, but unless the user is allowed to actively input parameters, be it consciously or unconsciously, we will end up with the same stale result every time.
Malleable representations can only come from organic production methods, which up to now, at least in the world of computer science and GIS, do not exist. Still, MacEachren and Kraak have a positive outlook on the field, either because they believe it is possible or because it must be. In the very beginning they claim that an estimated 80% of all digital data include geospatial referencing, only to follow on later with the assertion that everything must exist in space; whether that is the case is still up for debate by string theorists. However, there must be a point of diminishing returns. How far must one go before the field of GIS is satisfied? At the rate we're going, it won't be until virtual environments reach the uncanny valley, or are able to surpass it. At that point, it won't matter where things are located in space, as you'll have a hard time stripping physical reality from data-driven fantasy.

AMac

Making Use of Heterogeneous, Qualitative Data on the Neogeoweb

Thursday, January 31st, 2013

Source: xkcd

Elwood reviews areas of GIScience that are crying out for advancements in the wake of new Web 2.0 technologies and the flood of big data coming with them. More and more user-generated content is created every day, and a great deal of it is embedded with explicit or implicit geospatial information. Two key stumbling blocks to harnessing these new sources of data are the heterogeneous, unstandardized nature of Web 2.0 content, and the qualitative nature of the spatial knowledge within that content.

Heterogeneity of geospatial data on the web is an acute issue: because content is user-generated, the range of standards and conventions used to communicate information is huge. When you factor in the equally diverse, and growing, number of Web 2.0 platforms and services, things are even more daunting. If there's a certain class of content you're interested in analyzing, how do you impose structure on what is mostly unstructured text? How do you standardize when the list of existing standards is longer than your arm? As the XKCD comic suggests, creating yet another standard is typically an exercise in futility.

The qualitative nature of spatial information and meaning is another stumbling block in the use of data from the Geoweb. As an example of this, suppose you want to geocode a (location-disabled! Ha!) tweet of mine that says “Home, finally :)”. Even if you have detailed biographical and address information about me, you will still encounter ambiguities. By “Home”, do I mean my house? My neighbourhood? My hometown? My residence while I’m at school? My parents’ region of origin where my extended family lives? How do you even begin to automate the processing of such context-dependent information? And is there any existing spatial data model that can effectively represent the concept of home?
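
To make the ambiguity concrete, here is a minimal, purely illustrative sketch of what a naive gazetteer lookup runs into; the place labels and coordinates are invented:

```python
# Toy illustration of the ambiguity problem: a naive gazetteer lookup
# has no principled way to choose among the many places "home" can
# denote for one user. All labels and coordinates are invented.

gazetteer = {
    "home": [
        ("current residence", (45.505, -73.577)),
        ("neighbourhood", (45.510, -73.580)),
        ("hometown", (43.651, -79.383)),
        ("extended family's region", (14.058, 108.277)),
    ],
}

def geocode(term):
    """Return every candidate interpretation of a place term."""
    return gazetteer.get(term.strip().lower(), [])

candidates = geocode("Home")
if len(candidates) != 1:
    # Without extra context (the user's other posts, time of year,
    # travel history), the system cannot pick one interpretation.
    print(f"'Home' is ambiguous: {len(candidates)} candidates")
    for label, (lat, lon) in candidates:
        print(f"  {label}: ({lat}, {lon})")
```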

It is often said that while Geoweb 2.0 technologies are accessible, engaging and intuitive for a much greater number of people than traditional GIS has ever been, analysis tools for this exciting new medium lag behind GIS.  This is in no small part due to the issues associated with organizing the web of heterogeneous, qualitative data that these new tools are producing.  Though this challenge looms large, Elwood is optimistic that GIScience can rise to the occasion if we take a multidisciplinary approach.

-FischbobGeo

Human Versus Machine: The User-Systems Relationship

Thursday, January 31st, 2013

Lanter and Essinger write about the cutting edge of graphical user interface (GUI) design for GIS in 1991. This was an era when the command prompt was still in vogue as a user interface and computer use was still unwieldy for many people. It was also a time of change, as GUI-based software such as Microsoft Windows was gaining popularity.

Though the technology Lanter and Essinger profile has aged considerably since publication of this piece, the fundamental principles underpinning systems design and the system-user interface are still as relevant as ever. The authors discuss the need to move from a systems-centric design paradigm where operation of the software simply reflects the underlying algorithms and processes as directly as possible, to a user-centered one that gels with the user’s own mental models of the tasks they are doing.  When the latter succeeds, controls and functions are more intuitive for the user, who then does not need to be bogged down by excessive documentation or cryptic commands that only make sense to the developer.

An important point concerning the user-systems interface relationship is that it is a two-way relationship. Not only should system interface design be influenced by the user’s mental models, but those same mental models can be changed by interaction with the software, especially when the UI is set up in such a way to allow learning through exploration.  Most of my computer savviness stems from just being able to explore and mess around with things on Windows 95 and Macintosh machines starting at a young age.  My expertise with touchscreen devices, on the other hand, is far less developed. I am terrified of something happening to my Android phone, because I don’t know enough about the underlying system to be able to diagnose and solve problems as I can do with a more traditional PC environment—at 21 years old, I’m already set in my ways!

This has important implications for software developers who wish to advance human-computer interaction but find themselves faced with a generation of users most comfortable with the keyboard/mouse.  UI advances must be implemented and deployed incrementally so this cohort’s mental models have the opportunity to adjust. Unless a very ‘intuitive’ design is found, too-radical changes are bound to fail and be looked back on as being ahead of their time.

-FischbobGeo

User Centered GUIs

Thursday, January 31st, 2013

My initial reaction was to question whether GUIs make people even more distant from computer technology. If GUIs are made so polished that the underlying technology is completely hidden from users, then you can run into problems where users click randomly without really understanding what the tool is doing. But reading further, it gets more complicated than just button mashing, hiding algorithms, and concealing all the techy things the user never signed up for (obviously). At the core of the topic is creating a successful user-centred interface: a marriage between the user's knowledge, the process maps/models in their mind, and the tool's ability to adapt to and further influence that model. It is a great concept, one that Lanter argues will reduce the documentation, software support, and unnecessary brain space traditional GUIs demand. However, which user should the program be created for? How can a program serve both the beginner who is just learning the tools and concepts needed to navigate a problem, and the expert user who would like the ability to create custom functions? And what about people who visualize and conceptualize information differently? Like Professor Sengupta's analogy about road directions (Western countries may be more familiar with "next left, next right, continue easterly" directions, while Asian countries are more familiar with landmark-based directions), this also applies to developing GUIs tailored to the dominant norms of a particular society. For instance, some Asian societies traditionally read from right to left, so menu bars and left-justified text are less intuitive for them and force them to adapt.

A prime example of user-centred GUI design within the geography realm is the upgrade from ENVI Classic to the new ENVI 5. The designers of ENVI 5 evidently made the connection that those using ENVI are very likely to be using ArcGIS as well, and gave the new interface a very familiar ArcGIS feel. For the users this applies to, I think it genuinely reduces the tool's steep learning curve, while perhaps reinforcing the sense that the two tools work together.

-tranv

The user-system interaction issue: has it been turned upside-down?

Thursday, January 31st, 2013

After reading Elwood's and Lanter's articles I had the image of a complete shift. With his user-centered interface, Lanter was concerned with the interaction between the user and the system. User-centered interfaces are so developed now; someone mentioned the perfect example in this blog: two-year-old babies are able to use iPads! Can it be more user-friendly than that? Interfaces are so user-centered that people generate an enormous amount of heterogeneous data, and are geotagging and geoblogging. I think that users are now directly interacting with the system, not only with the interface. Furthermore, users are interacting with each other. On the other end, 'system designers' (in Lanter's words, or 'GIScientists' in Elwood's perspective) have to figure out ways to manage this phenomenon. The problem has turned back to the 'system'. It is interesting how the 'system' from Lanter's point of view refers to software system design, while what I mean by 'system' is broader: social, political, and technological systems (oh no, not the GIS tool/science debate again!). Geovisualization technologies have an impact on all spheres of society. I'll give a few examples (from Elwood's paper). Political: renaming places and the negotiation of colonial and postcolonial histories, or the promotion of activist activities. Social: posting information on bad neighbors! Technological: interoperability of heterogeneous data, and the transformation of meaning when different people work with the data.

S_Ram

How should we use Eye Tracking?

Thursday, January 31st, 2013

This paper by Poole and Ball presents some very cool and interesting technologies that allow us to follow eye movements and relate them to visual interpretation. While this technology seems very "out there" to me at times, there are definitely some practical applications I can think of.

With the increased amount of information and data available to us, we must filter out what we don't want to see. With digital earths and virtual environments becoming the medium through which we view spatial information (business locations, road networks, etc.), there may come a point where the amount of information is just too much for one person to handle. Using eye-tracking technology, experts could figure out how best to present certain types of spatial data (e.g., houses vs. businesses). Then, as users, we could pick and choose which data we want to see.

For example, say I want to look at all the businesses in an area, filtering out residential and other land uses. Knowing how our eyes react to different representations, the interface could "highlight" the business locations. Theoretically, this would save the user from having to scan through all the information they don't want to see.
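
A minimal sketch of the filtering step itself (all feature data invented; how best to render the "highlight" style is exactly the kind of question eye-tracking studies could answer):

```python
# Hypothetical sketch: attach a display style to each mapped feature so
# one land-use class is emphasized and the rest are dimmed. Which visual
# treatment best attracts the eye would be informed by eye-tracking
# research. All feature data here are invented.

features = [
    {"name": "Cafe A",         "land_use": "business"},
    {"name": "12 Elm St",      "land_use": "residential"},
    {"name": "Depanneur B",    "land_use": "business"},
    {"name": "Riverside Park", "land_use": "park"},
]

def style_features(features, emphasize):
    """Return features with a style: highlight one class, dim the rest."""
    return [{**f, "style": "highlight" if f["land_use"] == emphasize else "dim"}
            for f in features]

for f in style_features(features, emphasize="business"):
    print(f"{f['name']:<15} {f['style']}")
```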

Through applications like these, eye tracking could greatly improve the way virtual worlds are presented. Oddly, it may even allow us to view parts of the real world in a more "efficient" manner than reality itself.

-Geogman15

Geovisualization: We’ve come far but there is still work to be done

Thursday, January 31st, 2013

The challenges posed by MacEachren and Kraak are manifold, but they effectively outline the path that geovisualization has taken since the paper was written in 2000. Increased use of the World Wide Web, advances in hardware and software, and a focus on relevant theory have solidified geovisualization as an extremely useful and widespread field. The huge increase in multi-dimensional, dynamic, and interactive visual maps has allowed for broader visualization and analysis of geospatial data.

One aspect that I think still presents a challenge today is the incorporation of group work and the extension of tools to all the people who need them. This brings us back to the notion of the digital divide that exists in our world. As Western-based academics, pushing to the frontier of geovisualization is in line with what we already know and understand. However, in parts of the world where the internet may not be as accessible, some of these newer features may not be as well understood. As a result, a lot of new tools and programs may need to be adapted to certain people's cognitive abilities. Additionally, everyone interprets the space around them in a unique manner, calling for an in-depth analysis of how geovisualization could help them specifically.

Despite this, I believe we have made huge leaps in geovisualization since this paper was written. Citizens can use many different applications that were only ideas to experts a few years ago. Still, one of the main issues today is extending the progress that has been made to all corners of the globe.

-Geogman15

Realized geovisualization goals

Thursday, January 31st, 2013

MacEachren and Kraak authored this article in 2000, a year before the release of Keyhole EarthViewer and five years before Google Earth. In the piece, the authors present the results of collaboration among teams of cartographers and their decisions on the next steps in geovisualization. They mention broad challenges pertaining to data storage, group-enabled technology, and human-based geovisualization. The aims are fairly clear, but there are very few, if any, actual solutions proposed by the authors.

While reading the article, I had to repeatedly remind myself that it was written a dozen years ago, when technologies were a bit more limited. Most notably, there appears to be a very clear top-down approach in the thinking here, very reminiscent of Web 1.0, where information was created by a specialized provider and consumed by the user. In the years since this piece was written, Web 2.0—stressing a sharing, collaborative, dynamic, and much more user-friendly paradigm—has largely eclipsed the Web as we understood it at the turn of the millennium. In turn, many of the challenges noted by MacEachren and Kraak have been addressed in various ways. For one, cloud storage and cheaper physical consumer storage have in large part solved the data storage issue. Additionally, Google has taken the driver's seat in developing an integrated system of database creation and dynamic mapping, with Fusion Tables and KML, both of which are extremely user-friendly. And applications and programs that enable group mapping and decision support are constantly being created and launched. MacEachren and Kraak did not offer concrete solutions, but the information technology community certainly has.
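
Part of what makes KML user-friendly is that it is plain XML, simple enough to generate with a few lines of standard-library Python. A minimal placemark might look like this (coordinates arbitrary):

```python
# Minimal sketch of generating a KML placemark, the format consumed by
# Google Earth and (at the time) Fusion Tables. Coordinates are arbitrary.
import xml.etree.ElementTree as ET

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
pm = ET.SubElement(doc, "Placemark")
ET.SubElement(pm, "name").text = "Example point"
point = ET.SubElement(pm, "Point")
# KML orders coordinates as longitude,latitude[,altitude]
ET.SubElement(point, "coordinates").text = "-73.57,45.50,0"

print(ET.tostring(kml, encoding="unicode"))
```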

– JMonterey

Eye-tracking: the Good, the Bad, and the Uncertain

Thursday, January 31st, 2013

In a well-written and fascinating article, Poole and Ball summarize how eye-tracking technology works and how it is/can be applied in human-computer interaction. They broadly outline the technology behind eye-tracking devices, as well as the psychological interpretation of various eye movements.

Reading this piece, two key thoughts occurred to me. First, the psychology of eye-movement ventures eerily close to mindreading in the loosest sense. Or at least scientists and psychologists are attempting to interpret users’ thoughts on a minute and precise level. The accuracy of interpretation is currently debatable, but this appears to be a field of science that would open an enormous landscape of technological applications pertaining to how we see the world. Of course this is both positive and negative. On the positive side, the authors here mention the use of eye tracking as a way to train autistic children to maintain eye contact during communication. However, on a more cynical level, once distributed commercially, how will people use the technology as a way to exploit us?

My second thought relates to this last point. Reading this article in the context of understanding GIS, I wonder how eye tracking might be applied geographically. The simplest argument, as I see it, would be in decision support in planning, helping planners and designers situate objects in space to best capture the attention of their target. However, I believe a much more likely and, perhaps controversial, application would be in advertising. Tracking a user’s eye movements on a computer screen, for instance, could be a gigantic boon to advertisers looking to attract users’ attention.

– JMonterey

GUIs, GIS, Geoweb

Thursday, January 31st, 2013

Lanter and Essinger’s paper, “User-Centred Graphical User Interface Design for GIS,” outlines the development from a typical user interface, to a graphical user interface, and finally to a user-centred graphical user interface. The biggest take-home point I gathered from the article was that the user interface must meet the user’s conceptual model of how the system is supposed to work. If the UI and the user’s model match up, then using the software becomes much more intuitive to the user, resulting in a minimal need for instruction manuals and other forms of support. It got me thinking about how quickly a preconceived conceptual model can be erased and/or replaced. Take switching operating systems for example—going from PC to Mac we already have a mental map of how to do basic computer tasks (how to find and transfer files, format the screen, etc.), but these things are done differently on each system. Somehow we grow accustomed to the new operating system’s UI, and it will eventually replace our previous conceptual framework.

Following Elwood’s article and its call for a new framework for geovisualisation, it may be interesting to think about how our GIS conceptual frameworks will hold up in the new paradigm. The GUIs for geovisualisation are arguably easier to use than a traditional GIS (the idea being to make it a public technology rather than an expert technology), so it follows that these GUIs will fit into GIS users’ existing conceptual frameworks. Going the other way—starting with geovisualisation technologies and branching into traditional GIS—or even going back to GIS after extensive geowebbing, may be harder.

-sidewalkballet

What does it all mean?

Thursday, January 31st, 2013

Part of Elwood’s paper considers the implications of using data provided by different users. Data providers from different backgrounds and cultures approach information, its synthesis, and its portrayal in varying ways. This heterogeneous data is further transformed through the manipulations required to make any sense of it. Elwood notes, “data are dynamic, modified through individual and institutional interactions and practices” (259). How can we ensure that the meaning instilled by the original user is carried through all kinds of manipulations and transformations, especially when merely deciphering the original meaning already proves to be laden with complexities?

Elwood provides an overview of many solutions for grappling with a wide array of geovisualisation challenges, but I think we might be getting a little ahead of ourselves. Surely there is a vast number of challenges to be addressed, but can we do it all at the same time? Making sense of original user data seems to be of primary importance before we can assess how it changes through practice and collaboration. While initially seeming counterintuitive to user-friendliness, approaches like “standardiz[ing] terms across multiple sources” (258) and using formal ontologies may prove necessary for ironing out semantic differences in user-provided data.
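
As a minimal sketch of what "standardizing terms across multiple sources" could look like in practice, consider mapping raw user terms onto a small controlled vocabulary; the synonym table here is invented, and a real system might draw its concepts and synonym sets from a formal ontology:

```python
# Invented sketch of term standardization: map user-supplied place-type
# terms onto one controlled vocabulary. Terms with no match expose the
# semantic gaps that make this problem hard.

CANONICAL = {
    "park":      {"park", "green space", "playground", "commons"},
    "waterbody": {"lake", "pond", "reservoir", "lagoon"},
    "school":    {"school", "academy", "college"},
}

def standardize(term):
    """Map a raw term to a canonical concept, or None if no match."""
    t = term.strip().lower()
    for concept, synonyms in CANONICAL.items():
        if t in synonyms:
            return concept
    return None  # semantic gap: flag for human review

for raw in ["Green Space", "Lagoon", "skating rink"]:
    print(f"{raw!r} -> {standardize(raw)}")
```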

How can we work collaboratively if we’re talking about different things? We can trace the “modification of concepts in a spatial database as they are used in the process of collaboration” (260), but what do these concepts mean? Can we actually standardize open, user-generated geospatial data in order for it to be interoperable? With the increasing amounts of data sources and data heterogeneity, it looks like there is a long, winding road ahead of us.

Elwood, S. 2009: Geographic Information Science: new geovisualization technologies — emerging questions and linkages with GIScience research. Progress in Human Geography 33(2), 256-263.

-sidewalkballet

Graphical user interfaces

Thursday, January 31st, 2013

Lanter’s assessment of user-centered graphical user interfaces, and of their applicability to GIS, is quite accurate in that visualization makes GIS easier to understand, learn, and use. I believe this relates to how the human brain evolved to adapt to man’s ever-changing environment: it responded by creating a set of built-in steps for learning, understanding, and using tools, through touch and sight. To elaborate, the user-centered graphical interface is the connection between the GIS “tool” and the user. As a person’s brain is designed to see and expect a result in response to an action, the interface plays a major role in understanding; humans learn by observing the results of their actions. In essence, humans create logical connections through pathways, from which they can deduce the outcome of other, similar actions.

The concept of interlinking the user and the system, through the system interface and the user model, as Lanter writes, seems to be the best way of linking man’s natural interaction tendencies with the computer’s unnatural approach. Even so, interface design may still cause problems, as a user’s “instinctive” approach to an interface may limit how it can function. Therefore, I agree with Lanter that input from users to interface designers is essential to reconciling the complexity of results with simplicity of use. One thing that may resolve this tension would be to design an interface that allows the use of both traditional and graphical modes (i.e., graphical to start and learn with, and traditional for the advanced user once they have mastered the basics).

C_N_Cycles

Poole & Ball stuck in one place?

Thursday, January 31st, 2013

Poole and Ball’s “Eye Tracking in Human-Computer Interaction and Usability Research: Current Status and Future Prospects” gives an introduction to eye tracking technology with a brief history of its uses and designs. For our purposes as geographers, it is useful to think about to what ends this technology may be used, and how we can incorporate eye tracking into applications that are spatial in nature.
While the uses noted (user interaction with a website, text, or tool) mostly focus on a stationary user looking at something fixed in space, incorporating motion into eye-tracking analyses could be very illuminating. I think specifically of urban planning analysis that might incorporate universal design to make cities easier to navigate, more physically accessible, and more aesthetically appealing. By tracking where users look when moving through a set urban landscape, we could infer improvements such as the need for curb cuts or better street-sign placement, and, for more commercial interests, billboard and advertisement placement. Eye tracking might help planners make cities more easily navigable. One could also use this technology in augmented-reality applications such as virtual tours of a given place, or in identifying points of interest.

One thing I hoped the article would explore further was research methodology. It would be interesting to know how studies using eye-tracking technology attempt to account for the inherent bias of a subject who knows they are being observed, or who knows the aims of a given project.

Wyatt

Geovisualization and GIScience

Thursday, January 31st, 2013

Sarah Elwood’s discussion of the emerging questions in geovisualization and their linkages to GIScience research does highlight the issues of qualitative and quantitative data overload and the dissemination of that data. However, I believe that dynamic change and addition to data, be it quantitative or qualitative, is needed in both standard and non-standard forms. Through my own research I have found that dynamic data in a non-standard form often tells more about a situation than the standardized data. That said, standardized data is still needed in order to “create order” in our understanding and transmission of data to other people.

The article makes me think about how we as humans want everything in order so as to make sense of what we see, and how GIScience strives to create order in data for it to be useful. Nevertheless, is the universe not chaotic, and the basis of all data fundamentally chaotic? Maybe chaos and non-standard data tell us something more important about who we are as people, and about how the tools and the ways we look at the world change from person to person and culture to culture. Perhaps the heterogeneity of data, and of the types of software and hardware we use, is the norm, and GIScience is trying to place artificial boundaries on how we see data and use tools.

Besides the attempt to fit data to standardized forms, the idea of “public” and “expert” technologies just does not make sense. Today technologies are so integrated into how youth (0-30 years old) see the world that it is not the technology that should be classified as “expert” or “public” but the person who manipulates it. Growing up during the advent of mass-produced home computers, and watching video game purchases and internet use drive the development of processing power and performance past what our parents had ever imagined, has shown me that it is the person, not the machine. I have learned that one must often use a multitude of platform resources to achieve a result, as each platform, like Google Earth or ArcGIS, has its strengths (one cannot create a single platform to satisfy all needs or wants).

C_N_Cycles


Making GIS UI friendly

Thursday, January 31st, 2013

Although unrelated to analysis, the user interface (UI) is an incredibly important aspect of any GIS. When using applications such as ArcGIS, the graphical user interface (GUI) is what a person sees when they interact with the software on their screen. Thus, the simpler and easier to use the interface is, the faster the end-user will be able to learn the system and use it efficiently.

One of the best ways of organizing the UI seems to be the use of natural or interface mappings. These methods play on users’ intuitive and logical reactions to occurrences. For example, Lanter uses the analogy of the steering wheel: if a person turns the steering wheel right, the car moves to the right, and vice versa. Similarly, when a user moves the mouse to the right or left, they logically assume the cursor on the screen will do the same. This seems to be the best way to teach users a particular system, as they are more likely to remember instinctive actions.

Lanter identifies two key concepts that should be taken into account during user-centered interface design: how to map the system interface onto the user’s existing model, and how to shape and influence the user’s model as they interact with the system. The first part, as previously mentioned, has to do with designing the interface to take advantage of an individual’s intuitions and natural mappings. The second part, arguably the biggest challenge going forward in UI design, concerns how easily the user can learn the system, based on the way it is organized and fulfills functions. Overall, further development in UI, primarily in ease of use and intuitiveness, will open GIS up to a wider variety of individuals, especially those relatively unfamiliar with GIS applications.

-Victor Manuel

GIScience, Geovisualization, shifts in how we view data

Thursday, January 31st, 2013

The article by Sarah Elwood touches on how the evolution of spatial data has spurred new questions (or continues to fuel existing ones) about how we can begin to handle these datasets in such a way that we can analyze them and draw meaning from them. The emergence of new geovisualization technologies, combined with people freely posting geospatial information that can be collected, poses a double-edged sword. Information about people’s livelihoods at a micro level has never been so accessible, yet the challenge we face with new geovisualization technologies is what Elwood calls a conundrum of “unprecedented volumes of data and unprecedented levels of heterogeneity” (Elwood, 2009). Applying GIScience theory and research (assessing the ontology of the data, mathematical algorithms, visual modeling techniques, etc.) has contributed to work on data integration and heterogeneous qualitative data, issues that extend beyond new geovisualization technology.

Still, even with bigger, better, newer algorithms that can automate data integration, we must recognize that the categorizations in a particular dataset are context-dependent. Labeling something can therefore carry a lot of weight. What people define as a “bad neighborhood” can have a multitude of meanings (bad as in high crime, noise, or a particular demographic you do not mix well with?), which can have serious social and political implications. If datasets are to be combined, then finding a proper categorization scheme must also be thrown into the mix of data-integration challenges. Perhaps this is where metadata can really shine, if it can provide the context of how the dataset was derived and define the categories it has chosen. I don’t know about you, but my appreciation for “data about data” has definitely grown since GEOG 201.
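
As an invented example of the kind of record that could do this work, a metadata file might document the category definitions alongside the data itself:

```python
# Invented example of "data about data": a metadata record that travels
# with a dataset and documents how its categories were defined, so a
# later integration step knows what "bad" meant to the original provider.
import json

metadata = {
    "dataset": "neighborhood_ratings_2013",
    "collected_by": "volunteer contributors",
    "category_definitions": {
        "bad": "3+ property-crime reports per block per month",
        "noisy": "self-reported; no instrument measurement",
    },
    "caveats": ["ratings are subjective", "coverage is uneven"],
}

print(json.dumps(metadata, indent=2))
```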

-tranv

Geovisualization: What we have achieved

Thursday, January 31st, 2013

Many of the pressing problems of today have a geospatial component. The paper by MacEachren and Kraak rightly points out the challenges of efficiently representing geospatial data. In the 11 years since the paper was written, radical changes have taken place in the domain of virtual mapping. Not only has GIS software like ArcGIS and QGIS developed rapidly, other mapping and virtual-earth services like Google Maps and Google Earth have also become popular. The authors rightly pointed out the changes that were taking place as the internet became the prominent medium for disseminating geospatial data.

With an estimated 80% of all user-generated data on the web containing geo-location information, storing and leveraging this data generates a lot of interest. Some of the problems discussed in the paper have been dealt with efficiently in recent years. For example, multi-scale representation of objects has been handled with scale-dependent renderers, used extensively in GIS packages as well as in Google Maps and Google Earth; however, the decision of what to show at each scale is still subjective. When geographic objects are stored in the database as vectors, attribute information can be added to each object to further describe it in a non-spatial manner. The abstraction of layers provides the flexibility of modularising the map-building and analysis approach, enabling reuse of layers to create different themes. Crowdsourcing and mobile mapping applications have redefined the way group mapping tasks are performed.
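
A minimal sketch of the scale-dependent rendering idea, with invented layer names and zoom thresholds:

```python
# Hypothetical sketch of a scale-dependent renderer: each layer declares
# the zoom range at which it draws, much as GIS packages and web maps do.
# Layer names and thresholds are invented; the choice of what to show at
# each scale remains a subjective, cartographic decision.

layers = [
    {"name": "country_borders",     "min_zoom": 0,  "max_zoom": 6},
    {"name": "road_network",        "min_zoom": 7,  "max_zoom": 18},
    {"name": "building_footprints", "min_zoom": 14, "max_zoom": 18},
]

def visible_layers(zoom):
    """Return the names of layers that should draw at this zoom level."""
    return [l["name"] for l in layers
            if l["min_zoom"] <= zoom <= l["max_zoom"]]

for z in (3, 10, 16):
    print(f"zoom {z:>2}: {visible_layers(z)}")
```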

The paper also emphasises several times the need for cross-domain research to address the problems of geovisualization and spatial analysis. In terms of geovisualization, research results from the fields of computer graphics, geosciences, cartography, human-computer interaction, and information visualization need to be integrated in order to find new and innovative ways of creating maps. Multi-disciplinary, crosscutting research is the way forward to make further advances in how geographic information is presented.

-Dipto Sarkar

The Future of Geovisualization technologies…

Thursday, January 31st, 2013

The emergence of new geovisualization technologies such as Google Maps and Google Earth is revolutionizing the way people interact with GIS. In contrast to desktop software, these web-based applications allow unprecedented access, at no cost, to powerful visualization technologies. In addition, previously text-based web apps such as Facebook and Twitter are now incorporating spatial components. For example, a person is now able to tag their exact location, down to a particular building, when they update their status on Facebook. Web-based geovisualization technologies are growing in popularity because most of them are free, very easy to access (usually only an internet connection is required), and they allow for standardization and greater sharing of spatial data. This last point is extremely important because it has opened up a wealth of research applications. For example, a researcher in Greenland might be tracking ice flows, while a researcher in northern Canada may be tracing the migration patterns of polar bears. Web-based geovisualization technologies such as Google Earth now allow both researchers to overlay their data on the same map, opening avenues for further research, such as how seasonal ice flows affect polar bear migration patterns.

One other point that must be brought up, and that was addressed well by Elwood, is the heterogeneity of the huge amounts of data being generated by these new geovisualization technologies. Thanks to these technologies, large and diverse datasets covering a wide variety of user-inputted geospatial data are now available. However, an important challenge for the future will be how to standardize these datasets, as much of the data is based on opinions rather than standard data points. Most people’s perceptions of their local geographical points carry shifting meanings. Standardization will also become important as datasets grow larger and larger, requiring more automation of analysis.

-Victor Manuel

Grappling with visualization and spatial data expectations

Wednesday, January 30th, 2013

I often find papers like this can be very theoretical and I can struggle to grasp the real crux of the research, but I found that Elwood presented some fairly theoretical topics clearly by providing examples. [something about the other paper]

As spatial components of new tools and technology become increasingly ubiquitous, I find that there is now an expectation for data to be presented spatially, and some disappointment when it’s not available. These papers made me think about how I expect spatial information to be easily available and consumable for me. For example, just discussing where to go skating with some classmates, we were irritated with the idea of having to use a website that just listed the rinks with their general location (http://ville.montreal.qc.ca/portal/page?_pageid=5977,94954214&_dad=portal&_schema=PORTAL) and were thrilled to find the alternative which provided the skating rinks mapped across the island (http://www.patinermontreal.ca) – to be honest, I was fully expecting a mapped version to be available and would have been shocked if there wasn’t.  Apartment-hunting without Padmapper or similar is pretty miserable, since “where” is usually the most important variable in any potential home. Clearly non-GIS users have grown to have a higher level of spatial literacy with products from Google and spatial functions in technology, but they are also coming to expect consumable spatial information to be available.

-Kathryn

Programming and visualization

Wednesday, January 30th, 2013

This paper raises some interesting points about what interfaces are trying to do – it might seem like they are just trying to create an attractive user experience, but in an ideal world they should allow a user to learn the software through “trial and error” and ultimately come to the correct conclusions about how the software functionality works. Seems like high expectations, but clearly useful to design software for the people who will be using it!

This paper was written a fair time ago. I wonder how many GIScientists and GIS users have continued to find the developments in GUIs for GIS software fairly unsatisfactory, and turned (or returned) to spatial analysis through programming? I almost never touch an ESRI product anymore; I’d usually rather work through a problem in a combination of PostGIS and R. Yes, partly it’s for automation, but to be honest I do find the software can be unintuitive sometimes. However, teaching PostGIS to a computer programmer with no background in GIS (with any “S”) showed me that while his grasp of databases and analysis was clearly strong, the lack of visualization really did hinder his understanding of the analysis tools and the concept of projections, even in someone with a highly technical background. While programming is incredibly useful in GIS and spatial analysis, the visualization aspect is still crucially important too, even for learning. And (maybe because I was trained visually) I never fully trust my results until I “sanity check” them in a visual way.
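
In the same spirit, here is a small sketch of that habit (Python and matplotlib standing in for a PostGIS-plus-R workflow; the data are synthetic): after computing which points fall inside a radius, a quick plot makes gross errors obvious in a way raw row counts never do.

```python
# Synthetic, purely illustrative "sanity check": select points within a
# radius, then plot the selection. If the blue points don't form a disc,
# something in the analysis is wrong.
import math
import random
import matplotlib.pyplot as plt

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
center, radius = (0.0, 0.0), 0.5

inside = [p for p in pts if math.dist(p, center) <= radius]
outside = [p for p in pts if math.dist(p, center) > radius]

plt.scatter(*zip(*outside), s=10, color="lightgray", label="outside")
plt.scatter(*zip(*inside), s=10, color="tab:blue", label="inside")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```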

A random thought: the “mapping” problem is very interesting and of course relevant to GIS, and it seems like the touchscreen has changed this a lot, since it’s so intuitive. I’ve noticed several times anecdotally that very young children can work iPads as early as age 2, while they lack the ability to work a mouse on a regular computer for years. I’ve always assumed it’s an issue of dexterity, but maybe it has more to do with this “mapping.”

-Kathryn