Archive for the ‘geographic information systems’ Category

Critical GIS

Wednesday, March 28th, 2012

The article on critical GIS strongly emphasizes the subjectivity and human fallibility in GIS. The authors call for a more communicative, open, and inclusive approach. Their concerns appear to lie in the construction of a truly participatory GIS that is open to all for exploration and examination at every step.

I agree with ClimateNYC that this reading seems to have circled back to our original debate of GI science vs. GI systems. Through all of our class lectures, a perspective new to me is viewing the problem as an issue of public perception. The authors emphasize a strong divide between GIS experts and novices, and I agree, based on the learning curve of writing custom scripts/functions and on the variable recognition of the data’s limitations. Aitken and Michel’s call for a more “communicative rationality” seems to question the necessity of such a debate and pushes for a merging of the two. A quick note: at the beginning of the course, I stood on the GI science side of the fence.

The general trend of development in GIS functionality has been to make processes more accessible and user-intuitive. This is evident in the cutting-edge human-computer interface examples presented by Peck. Simultaneous and interactive multi-user visualizations work towards this goal at the software level. These design challenges necessitate frequent debate and exchanges with users to optimize the systems for as wide an audience as possible.

The life-path of computers is a helpful comparison. They started out as expensive, feared, research-oriented machines found only in the cloisters of MIT and government agencies. Curious techies got involved with programming and hardware development and became the first “hackers”. The development of games and minicomputers was a truly pivotal moment in popularizing computers. The quick adoption of geospatial tools such as Google Maps and the GeoWeb is the popularization parallel for GIS. Application and theory will become increasingly intertwined as more intuitive tools become available, and this may lead to a redefinition of the importance of place (in a digitalized world where the friction of distance has been lessened).

At the end of the day, the increasing accessibility of data may make GIS an example of a science submerged in the social realm. Computer science still remains, and it is likely that GI science will as well. Concerns about underlying politicization and deceptive “objectivity” will fade as GIS methods become more and more ubiquitous. This reading has further shifted my views on GI science vs. GI systems towards a middle ground, and has made me question whether it is realistic to view the science and the system as separate components.

– Madskiier_JWong

Is GIS a Science or a Tool in Planning-Information-Critical Theory

Tuesday, March 27th, 2012

So, I had a few problems with the article by Stuart Aitken and Suzanne Michel. First, I felt like the authors danced around the delicate topic of whether GIS operates as a tool or as a science in a way that was detrimental to understanding their article. Second, I wondered about applying Habermas’s theories to the idea of “planning” by making it a consensus built on mutual understanding and arrived at through respectful communication.

But let me back up, first, and give a brief summary of the article. The authors frame their writing as a response to the troubling idea that GIS is defined solely outside of social constructions, an idea that “bolsters a rational-instrument discourse in planning” (17). In contrast, they believe GIS to be a “socially constructed” technology (27) that, when used in planning, should not impose one person’s agenda on others (24). As such, they worry that some GIS lord sits on high, owns the process of planning, and only allows others to engage with GIS as participants rather than having any ownership of the planning process. Such a process risks defining GIS theoretically in a manner that makes it an exclusive field of scientific research or practice.

How the authors define GIS, as a science or a tool, could be very important in the discussion I describe above, because they seem wishy-washy about their view of it. On the one hand, they talk about GIS in terms of the planning process and how administrators and others use it to aid in planning development or other projects. In this sense, it appears to be a tool. However, when the authors get into discussing Habermas, they start to deal with GIS as a field of research that has underlying theories, and argue for a more inclusive field that includes disparate voices. In this sense, they argue for merging the academic and professional worlds into the world of everyday experience, which I agree with, in order to give average folks ownership over the field of GIS and how it operates.

So, this brings us back to the question, which could easily be answered if they defined GIS as a tool or a science: how does planning become an open, inclusive process? If we’re thinking about GIS as a field of research, it’s got unique potential to include a variety of user inputs or applied insights. In many cases, planners and those responsible for making decisions about urban plans do utilize GIS in this manner to gain insight into how to better make their decisions. I mean, just look at this video where GIS applications are used in urban planning decisions across Addis Ababa. Plus, it’s got some good music.

Yet, I can’t help remembering the days I spent as a political reporter and the dread I felt covering county votes on comprehensive land-use plans, or even planning commission meetings. These meetings were almost always exclusive to those in charge (OK, I guess the elected officials did answer in some manner to those who elected them) and subject to the prevailing views of whoever was in power. Sometimes, unfortunate homeowners who wanted to build something not accounted for in county plans might be subjected to some type of harassment by the planning commission or, otherwise, be included if they could justify the new add-on to their jumbo mansion. On really good days, the planning decision might be incredibly divisive (since I worked in Northern Virginia, this mostly only occurred when slow-growth advocates were pitted against pro-growth folks) and the decision-makers had to come down with some type of politically defensible decision.

But the point is clear. While GIS as a science might have the potential to be democratic (and in many cases already is), the planning process in many urban localities is far from it, at least not beyond the sense of being representationally democratic. So, can GIS bridge this gap? Maybe. But I guess it depends on whether you view it as a science or just a tool for some government planning board.

Ushahidi: I couldn’t help it

Friday, March 23rd, 2012

I really enjoyed reading Haklay and Tobon’s (2003) article on PPGIS because it examines concepts that I can relate to my term project. The authors believe in information contributed by non-expert users in a constraint-free environment: away from the office, possibly away from work, in your own personal space, or on the go. A decade after this article was written, mobile phones, and especially smartphone apps, allow a user to both contribute and interact with non-expert-generated information. I believe an ultimate PPGIS synergy has been created by linking FOSS together, in particular Ushahidi and OpenStreetMap, to represent geographic data contributed by non-expert users: an online platform where you can text, email, or tweet information that you can then view interactively on an OpenStreetMap interface.

User-centered design, development and deployment, and geovisualization are all critical components of a successful, efficient, and usable platform. From the end-user perspective, these are all achieved. However, feelings may be mixed for developers. It is one thing to be able to send a text, tweet, or email to a platform and interact with it, and another to use it as a template, activate it, and maintain that platform. As user-friendly as these platforms are, when will they become developer-friendly? By developer I don’t mean a computer programmer, or a developer who is comfortable with coding, but someone who is new to it all and wants to learn: the non-expert among developers. Given all of this, I wonder what the authors would say of Ushahidi now. I believe in a constant need for improvement of open-source platforms, to strengthen the world of PPGIS. As difficult as the building process of the Ushahidi template can be for a newbie developer, I am astounded by the impact it has had, and continues to have, on the world of non-expert users.

-henry miller


Geospatial Cyberinfrastructure and User-centric HCI

Friday, March 23rd, 2012

Usability evaluation of GIS is delineated by Haklay et al. in their 2010 publication. The connection between human-computer interaction (HCI) and public participation geographic information science (PPGIS) is delineated in their paper, but the relationship with geospatial cyberinfrastructure is not explored enough. I think the idea of user-centric design can also be applied to geospatial cyberinfrastructure, which has attracted increasing research interest.

Geospatial cyberinfrastructure, which provides the functionalities of geospatial data collection, management, analysis, and visualization, adopted a purely system-centric design in previous studies. Because most users of geospatial cyberinfrastructure are research scientists or domain experts, it has been criticized for poor usability. With the development of PPGIS, more users have become geospatial data producers in GIS. Since these data are valuable in GIS study, geospatial cyberinfrastructure should be adapted to provide user-centric services.

Here, I name a few challenges for utilizing user-centric design in geospatial cyberinfrastructure, especially when we consider better HCI. Firstly, data search within geospatial cyberinfrastructure should be equipped with fuzzy reasoning functionalities to help non-professional GIS users fetch the data they need. Secondly, at the visualization layer, the display should be easy for users to understand (I think Google Maps has provided a good example) and manipulate; multimedia input/output for HCI should also be developed. Thirdly, at the infrastructure level, we face a dilemma between controllability and learnability. To be specific, if we give users more control of the geospatial cyberinfrastructure, the corresponding training work also increases; if we want to keep it easy to learn, we must hide most of its details. How to balance controllability and learnability is a great challenge in the user-centric design of geospatial cyberinfrastructure.


Temporal geographic information: a work in progress

Friday, March 23rd, 2012

The importance of time in Geography has become more relevant for me as I have begun working on my research project, so it was helpful to read Langran and Chrisman’s (1988) article. In a way I was comforted to relate to some of the issues with regard to dealing with time, but at the same time felt discomfort that these issues are still around. We live in a digital geographic world, as sah mentioned in their post. Andrew stated that PPGIS and HCI display the issues that arise when using Google applications. In the LBS and open-source world, where everything is rapidly changing and new versions of software quickly replace the old, past problems quickly become obsolete. However, I have learned from this article that time, like other fundamental concepts in Geography, is different. It is still a timely (no pun intended) issue. So how do we go about mapping time, along with theme and location (1)? Although still in its early stages, the Ushahidi platform may fulfill the requirement of being “a temporal database that makes the time dimension accessible to users” in the example given by the Ghana Waters initiative (2).

The space-time composite section reminded me of the problem of overlaying polygon layers created in ArcMap. For example, suppose a geographer decides to represent the suburbs of a city by digitally drawing polygons. If they want to display this as a choropleth map of crimes throughout the city, they can do so. However, over time the boundaries of suburbs may change, so a new layer must be created to ensure the timely accuracy of the theme and space being represented. I believe the advantage of having Google Earth now, as opposed to 1988, is that we can integrate conventional software databases like ArcGIS with user-friendly, interactive virtual globes to try to solve time-related problems. Alternating between suburb choropleth overlays from one time period to another can be done by checking a box. Creating a time-lapse animation could be a possible solution to static images that “do not represent the events that change one state to the next” (8). It’s still a work in progress; however, the fewer constraints we have when dealing with more philosophical and abstract concepts such as time (and of course, ontologies), the better.
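Toggling between time periods like this can be sketched as a versioned layer, where each polygon version carries the years for which it is valid. This is only an illustrative sketch; the suburb IDs, year ranges, and crime counts are all made up.

```python
from dataclasses import dataclass

@dataclass
class SuburbVersion:
    suburb_id: str
    valid_from: int  # first year this boundary version applies
    valid_to: int    # last year (inclusive); 9999 as a sentinel for "current"
    crime_count: int

def snapshot(versions, year):
    """Return the polygon versions valid in a given year --
    effectively the 'checking a box' toggle between time periods."""
    return [v for v in versions if v.valid_from <= year <= v.valid_to]

versions = [
    SuburbVersion("A", 1988, 1999, 120),
    SuburbVersion("A", 2000, 9999, 95),   # boundary redrawn in 2000
    SuburbVersion("B", 1988, 9999, 40),
]
print([v.suburb_id for v in snapshot(versions, 1990)])  # ['A', 'B']
```

Each query year pulls out exactly one version of each suburb, which is the kind of access a space-time composite is meant to make cheap.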

-henry miller

Temporal GIS, Real-Time, and Database Systems

Friday, March 23rd, 2012

For cartographic study, time has two meanings: one is the time that the event happened or lasted in the real world, while the other is the time when the changes are recorded in the database system. Generally, data collection should start at the very moment the event begins and finish when the event ends. The temporal boundary, which is used to describe the temporal structure and separation of object versions, is also demonstrated in Langran and Chrisman’s (1988) paper.

For some real-time applications, such as fire monitoring systems, the time difference between when an event happens and when the corresponding data is collected can cause problems. If a fire is detected but recorded several hours later, the loss will be unpredictable. Moreover, since database systems are not designed for real-time operation, event updates cannot be reflected in our GIS. Here are two challenges: first, how to record events in real time with guaranteed accuracy, and second, how to update events in real time with clear temporal boundaries. Real-time technologies are integrated into temporal GIS as one solution to these two challenges.
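The two meanings of time can be captured by giving every record two timestamps: when the event happened in the world, and when it entered the database. A minimal sketch, with a made-up fire event and an arbitrary one-hour alert threshold:

```python
from datetime import datetime, timedelta

# Each record carries two timestamps, matching the distinction above:
# 'world time' (when the event happened) and 'database time' (when it
# was recorded). The gap between them is the reporting lag that makes
# late-recorded fires so costly.
fire_events = [
    {"where": "sector-3",
     "world_time": datetime(2012, 3, 23, 14, 0),
     "db_time":    datetime(2012, 3, 23, 16, 30)},
]

for e in fire_events:
    lag = e["db_time"] - e["world_time"]
    if lag > timedelta(hours=1):  # hypothetical acceptable lag
        print(f"{e['where']}: recorded {lag} after detection")
```

Keeping both timestamps also gives each record a clear temporal boundary: you can ask what was true in the world at a moment, or what the database believed at a moment.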

In 2010, Mike Dana gave a very good presentation about real-time GIS databases.

In that presentation, he demonstrates a real-time ArcMap that can update and visualize changes in geospatial information. By utilizing this real-time design, ArcMap can become a good platform for crisis command and mobile resource deployment.


Time after time

Thursday, March 22nd, 2012

In their post on temporality in GIS, Outdoor Addict brings up the date of the Langran and Chrisman paper: 1988. That was a long TIME ago! So this brought to mind for me, as it appears to have for many others, that there must have been many innovations since 1988 allowing us to better represent time in GIS datasets. Sliders on internet graphics; agent-based modelling, where moving the slider changes the time step and the data displayed; Google Earth, where you can witness both historical changes and the visualization as day turns to night: all are examples of how temporality is displayed today, digitally.

But these are ways of displaying data, and as someone noted, not necessarily the best way of analyzing data.  This made me think further, though.  At each time step, the “event” is static in that time.  The process is fluid across time, but the events are solidly placed within time.  So my question is: why must we do time differently?  Couldn’t we have one map, where red dots are 1998, blue dots are 2000, and yellow dots 2012?  We could see where time factors in, and the data could be in one attribute table.

I am analyzing landscapes for another project, and we are comparing 1998 to 2004. We have two maps with essentially the same parcels, and are trying to compare the land use. We have one attribute table with all the parcels, and then have the time steps as individual fields, listing the land use at each time. It can be displayed at whatever time step we like. I can see, however, where the authors suggest this loads in a lot of additional data. If you have upwards of 20 000 unique attributes, say, but 75% of them weren’t changing, you would still have to store each time step of land use where nothing was changing. But as the authors note, it seems the space-time composite is the best way to go forward, as combining all the temporal events in one space/data set minimizes the chances for error.
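The one-table, time-steps-as-fields layout can be sketched with a few rows (the parcel IDs and land-use codes are invented). It also shows the storage redundancy noted above: unchanged parcels still carry a value for every time step.

```python
# One row per parcel, with each time step as its own field.
parcels = [
    {"id": 1, "lu_1998": "forest",      "lu_2004": "forest"},
    {"id": 2, "lu_1998": "agriculture", "lu_2004": "residential"},
    {"id": 3, "lu_1998": "forest",      "lu_2004": "agriculture"},
]

# Comparing two time steps is just comparing two fields per row.
changed = [p["id"] for p in parcels if p["lu_1998"] != p["lu_2004"]]
unchanged_share = 1 - len(changed) / len(parcels)
print(changed)           # parcels whose land use changed: [2, 3]
print(unchanged_share)   # fraction of rows stored redundantly across both fields
```

A space-time composite would instead store one record per change, so parcel 1 here would cost only a single entry.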

So at this point I’m not sure… where do we go from here?


Can’t we all just get along?

Thursday, March 22nd, 2012

A user-centred design of human-computer interfaces, what a thought!  As someone who has gotten to grow up with the best of computers (so far), but still remembers the clunky old Macintosh that was considered ahead of the rest, I definitely see the value in a smooth, practical, and functional design.  So after reading this article by Haklay and Tobon, I was left with two thoughts.

One, to what extent should design conform to the needs of the people, and to what lengths should people go to meet the design? The idea of incorporating usability and HCI techniques into public participatory GIS (PPGIS) is, in my opinion, a good one, and can create this middle ground. People can learn new skills, allowing them to become more familiar with potentially less-than-intuitive software (ArcGIS, anyone?), while simultaneous research can restructure software to be as functional and usable as possible.

Additionally, it made me think of the students in this class who are going through ethics approval and trying to get people to participate in GIS-related studies. This article mentions three workshops, which were integrated into a context larger than just furthering GIS as a field, and which seemingly drew more participants. But for people like the students in this class, who require volunteers simply to further their own (and eventually our) understanding of GIS techniques, participants are less than willing. So while researching the usability of PPGIS is an honourable pursuit, I wonder how realistic it would be if the user is not someone involved in a particular group, like the involved citizens in Wandsworth, but rather an everyday user of a website or phone app.

I enjoy their statement at the end, though, that points out that “ease of use and user friendliness are characteristics of software that are more elusive than they first seem to be”.  Isn’t that the truth!


Teaching People or Teaching Technology

Thursday, March 22nd, 2012

Often in class, there has been discussion of how much a user of GIS needs to know about the GIScience behind a particular concept or application of GIS. The HCI article by Haklay and Tobon brings this issue to the forefront once more in trying to understand how people interact with computers, and with GIS in particular.

My impression of this paper, and of the topic of HCI in general, is that the experiments, workshops, discussions, etc. focused primarily on understanding the difficulties users encountered in working with a GIS, and then on adapting the software and interface to better cater to the needs of the people. Although this can certainly be helpful for users if the required improvements can be incorporated into a GIS, it does not teach users the GIScience behind the GIS, leaving them vulnerable to making assumptions or drawing conclusions without considering such issues as the uncertainty that could be present in their final product or query response. Teaching users about these GIScience issues falls under HCI, as the method of teaching during an experiment or workshop would likely be very influential on how well participants navigated and used the GIS. As such, teaching could be an important area for additional research in HCI.

Catering to different learning styles was not greatly mentioned in the paper, although I think this would be another way to improve HCI. GUIs are very helpful in this regard, as they cater to visual learners and, to some extent, to those who learn best by doing and trying things, since the steps or uses of a tool can be visualized. However, those who require explanation or a demonstration to learn may be disadvantaged by GUIs. Tutorials and video demonstrations could be incorporated into GISs to explain how to accomplish particular tasks. Incorporating various learning styles into the GIS would assist users with self-help and reduce, but likely not eliminate, the need for face-to-face explanation.

-Outdoor Addict

HCI, Urban Planning, and Participation

Thursday, March 22nd, 2012

I particularly enjoyed reading Haklay and Tobón’s article, as public participation in urban planning is a topic that I take great interest in. The article notes that, especially in an age of increased personal computer usage, empowering users of GIS is crucial not only to improve individual and social development in communities, but also to gain a greater understanding of the design and capabilities of GIS systems.

In urban planning, encouraging public participation is often a tricky endeavor. While reasons for being unable to participate vary greatly, one cause includes the inability of individuals to travel to meetings due to, for instance, mobility issues or scheduling conflicts. As a result, it is possible that those who are able to participate represent a relatively small portion of a community. Perhaps improving GIS to be more accommodating to all types of users, from the novice to the expert, will enable participants to move up Arnstein’s “ladder of participation” to a level of greater citizen involvement and power.

The article provides two examples of citizen workshops, which both provide insight into how a GIS can be better designed to facilitate usage. From the first example, I think that one of the main points is the importance of a system’s learnability and flexibility. In this case, learnability refers to “the time it takes a person to reach a specified level of proficiency.” This process will vary for every user, and as such, a GIS tailored to improve urban planning participation may have to include many different features to involve, for example, those who have limited vision. Moreover, flexibility is defined as “the extent to which [a system] can accommodate tasks or environments it was not originally planned for.” This point very much relates to another issue Haklay and Tobón bring up, which is that creators of a GIS must be constantly involved in the design process, which may arguably be a never-ending activity.

From the second workshop example, I think that one of the key issues is the accessibility of a GIS. The development of a web-based urban planning or e-government platform meant that individuals could remotely access information or be included in decision-making. I think that this process has profound implications for those with mobility issues, as already mentioned, in that one no longer has to travel a great distance to participate.

Lastly, while increasing citizen engagement may appear at first to be entirely positive, one has to wonder to what extent should this be encouraged. On the one hand, it may not be possible to completely cater a system to the needs of every individual. In other words, at some point generalizations have to be made, which may hinder how those with varying levels of expertise or capability interact with a system. On the other hand, citizen engagement is often unquestionably considered a positive aspect that should always be fostered. Perhaps this notion itself should be questioned, because, for example, it is possible that too much public engagement can lead to the reduced capacity of organizations to operate efficiently.

– jeremy

Communication in PPGIS

Thursday, March 22nd, 2012

Public participation in GIS is a tricky thing. How do we find the balance between user friendliness and functionality? Today I participated in Peck’s HCI survey and discovered a few things about Google Maps. When I first started using Google Maps, many of the functions appeared on the map itself: things like measuring tools and options for different types of labels. Since then, it appears as though many of these options have been hidden away, only accessible after you enable them. On one side of the coin, I appreciate what Google is trying to do. They’re trying to streamline the system in order to target it towards the general public. In doing this, however, they may lose the clients looking for a more personalized experience. I will argue, however, that for those looking for a more specialized tool, there are better options such as ArcMap. So much of the programming is now built into Google.

A new type of PPGIS has emerged in recent years. Oddran Uran (2003) writes that it brings together users, smart boards, and GIS. Instead of a mouse-and-keyboard interface, the new PPGIS uses a smart board to help the community in public consultations better communicate their ideas with planners. One of the goals of decision support systems is to increase the quality of communication between the community and planners. This innovative interface seems to have really helped communication between specialized users and amateur GIS users.

New technology seems to be appearing everyday to aid in the communication between specialists and casual users. This is just one example of how the gap is being bridged.


Can and should it be on the same map?

Thursday, March 22nd, 2012

Based solely upon the reading, I am not quite sure yet how exactly we need to include temporal analysis in maps. As the article says, the digital map itself is strongly tied to its analogue roots. Do we really need to be able to analyse temporally through maps? Aren’t the current discrete time-step methods enough? What about the geovisualisation we saw where Hans Rosling visualises three socioeconomic indicators over time on a 2D graph? Non-map-based methods could be simpler to interpret than the solutions proposed. The basemap-with-overlays solution proposed in the article is not a good one. It is, first of all, seemingly restricted to vector data. Secondly, I think this approach can make things difficult to comprehend (visually), especially with many time slices. The third solution, the space-time composite, could also be tricky. Accessing the data at a certain point in time may be simple enough, but if you’re comparing two or three time slices which all have a very large number of polygons, does it still make sense to visualise it? Don’t people tend to view discrete polygons as different objects? Here, different polygons can actually represent the same thing, just iterations of it over time. Won’t it therefore be confusing to view two time slices on the same basemap?
Somehow, I feel that looking at a time series in a graph is more intuitive than on a map. We’ve already had the map metaphor ingrained in ourselves; I doubt it would be easy to teach a ‘temporal map metaphor’ to people.

I was also thinking about how this sort of thing would work technically. When data isn’t accessed a lot, it tends to get compressed and archived to save storage space. If we’re doing temporal analysis and grabbing slices from here and there, would we have to keep all our data uncompressed? While technology has advanced very rapidly since the article was written, we still face the problem of large datasets, and performing temporal analysis with discrete time slices is probably still a chore these days.



Possibility of a Time Toolbox?

Thursday, March 22nd, 2012

Temporal GIS is a topic that encompasses almost all studies. Last year I did a study on pesticide use in California. Unfortunately, the data was only available on a year-to-year basis, so in order to graphically show the drastic increase in pesticide-related injuries I had to use multiple maps. On each map, cases were visualized with a red dot. At the end of the project, I resorted to using three maps from three different years to show the growth in pesticide-related injuries. This, however, was somewhat taxing: I had to create and prepare three different sets of data to visualize. At the time I accepted this as the way temporal GIS could be dealt with, but now I ask whether there are better ways to visualize, rather than overlaying or using side-by-side map comparison.

I wonder if a simple sliding time bar could be incorporated into ArcMap (or something similar) as a toolbox. Existing objects would simply need an additional time attribute that the slider would select. As a user slides the bar, a different series of shapes and polygons would appear or disappear. This could also offer analysis tools: if the program is aware that two polygons are the same but change in size and shape over time, it could possibly calculate this change.
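The imagined slider toolbox can be sketched as a filter over features that carry a start/end time attribute, plus a small change-analysis function for same-ID features. The feature names, years, and areas here are hypothetical, and geometry is reduced to a single area value for the sketch.

```python
# Each feature carries a (start, end) time attribute; moving the slider
# just re-filters the feature list at the chosen time.
features = [
    {"id": "lake", "start": 1990, "end": 2000, "area": 12.0},
    {"id": "lake", "start": 2001, "end": 2012, "area": 9.5},
    {"id": "park", "start": 1995, "end": 2012, "area": 3.0},
]

def at_time(feats, t):
    """Features visible at slider position t, keyed by ID."""
    return {f["id"]: f for f in feats if f["start"] <= t <= f["end"]}

def area_change(feats, feat_id, t1, t2):
    """Change in a same-ID feature between two slider positions."""
    before, after = at_time(feats, t1), at_time(feats, t2)
    return after[feat_id]["area"] - before[feat_id]["area"]

print(area_change(features, "lake", 1995, 2005))  # -2.5
```

Because the slider only filters and never copies data, the year-to-year case stays cheap; the data volume problem appears when the (start, end) intervals become second-to-second.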

I realize though, that this would be data intensive, especially when dealing with time scales that are very small. A year to year basis could be feasible, but on a smaller scale, such as second to second, dealing with hours of data could become unrealistic.

Google Earth has several features that allow the user to play sequences through time, but as PPGIS and HCI have shown, sometimes Google apps are not the best for data analysis.


Thoughts on Temporal GIS

Thursday, March 22nd, 2012

Time has been and continues to be a mysterious entity for philosophers and physicists. It doesn’t make any sense to talk about things in space without time, since no entity can exist without also having duration as one of its attributes. We also know from special relativity that space is intrinsically tied to time through the Lorentz transformation, but this really only comes into play when traveling at extremely high velocities. However, there are many ways in which time is considered different from space: one can move in all directions in space but only forward in time. I really liked Table 2, where Langran and Chrisman (1988) present the analogies between GIS concepts in space and in time. It helped me clarify my thinking about temporality and how these two concepts can be combined.


While reading the article, I found myself asking similar questions to Outdoor Addict. What constitutes a change? And at what temporal scale should these changes be observed? For me, the first question is closely related to cartographic scale and must be considered in tandem with the specific research question at hand. If the research is concerned with the location of maple trees, then perhaps a new maple tree will constitute a change worth recording. Conversely, if the research question is concerned with the expansion or reduction in the size of a maple forest, then the growth of one additional maple tree may not be enough to count as a mutation. An “event” in the latter example would be related to a percentage change in the forest boundaries. The cartographic scale one chooses will have a direct influence on the frequency of mutations and thus on the appropriate temporal scale. This also leads to questions about precision for temporal data. When an event occurs, how precise should the temporal records be? To the minute? To the second? To the hundredth of a millisecond? It is likely that different kinds of events will have different temporal requirements.


Although I liked the map/overlay model the authors proposed, I imagine it is not the best way to visualize the data, especially when many polygons are undergoing mutations and data is collected at very fine scales (i.e., every 5 minutes). For me, spatial-temporal visualization must involve some sort of animation or video that enables the user to select the speed at which it is played. This reminds me of the agent-based models we saw in class.


Constant Monitoring and Temporal GIS

Thursday, March 22nd, 2012

Prior to reading Langran and Chrisman’s article, my understanding of temporal geographic information was limited to time lapse videos of static snapshots, which visually display change. After reading the article, however, I was able to better comprehend the importance of temporal models that enable direct comparisons of how objects are changing. Models such as these allow for a greater understanding of how change is occurring at specific times, whereas snapshots seem to merely illustrate the general notion of change.

Outdoor Addict questions how decisions are made as to what constitutes an event, and I think this is a valid concern. To abide by data storage limitations, for example, we may deem a change to be irrelevant and discard it. However, what if this change is considered to be important at a future date? Perhaps the idea of examining snapshots is still holding me back—certain technologies that allow changes to be constantly tracked might need to be considered to a greater degree. In thinking about this, a parallel may lie in a Geography 407 class discussion, where dialogue revolved around sensors designed to continually track animals in forests. In this case, every motion that is detected is recorded, while durations of no motion are not. Can anyone else think of similar examples?
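The sensor example can be made concrete: only detected motions are stored, and quiet periods exist merely as the gaps between timestamps. The sketch below is a made-up illustration of that storage strategy (integer timestamps stand in for real datetimes).

```python
# Event-only recording, as in the motion-sensor example: the log holds
# only detection times; "no motion" is never stored explicitly but can
# be reconstructed as the gaps between consecutive events.
def quiet_gaps(event_times: list[int]) -> list[tuple[int, int]]:
    """Return the (start, end) intervals between consecutive events."""
    ordered = sorted(event_times)
    return [(a, b) for a, b in zip(ordered, ordered[1:]) if b > a]

log = [5, 60, 61, 300]  # four motion detections (seconds)
gaps = quiet_gaps(log)
print(max(gaps, key=lambda g: g[1] - g[0]))  # longest unrecorded quiet spell
```

The appeal is obvious: storage grows with the number of events, not with elapsed time. The cost is equally clear, and it is the concern raised above: anything the sensor fails to detect leaves no trace at all.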

From the motion detection example, yet another concern arises—there will inevitably be information that sensors cannot detect. In addition, relating to the lecture on scale, the issue is not merely about deciding what objects to include, as previously mentioned, but it is also about determining what level of detail is appropriate. In other words, there is such a thing as too much information. The easy way out would be to yet again rely on the “future technological advances will render this concern irrelevant,” argument, but due to the inescapability of uncertainty, I posit that context and judgement are two of the most important considerations.

Lastly, in answering ClimateNYC’s question about the distinction between real time and database time with regard to streaming data, Madskiier_JWong states that information may be incorrect or incomplete and in need of updating at a future date. I would like to add that technical issues often arise when dealing with streaming data. For example, glitches in communication systems or backlogs of data can result in differences between real time and database time. This type of information is valuable, however, as it enables insight into how systems can be better designed.
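The distinction can be sketched with the common bitemporal convention of storing both times on each record. The field names and the hydrant example are illustrative assumptions, not from the reading.

```python
from datetime import datetime

# A minimal bitemporal record: "valid time" is when the event happened
# in the real world; "transaction time" is when the database learned of
# it. A communication backlog drives the two apart.
observation = {
    "feature_id": "hydrant-17",
    "valid_time": datetime(2012, 3, 20, 14, 5),        # when it happened
    "transaction_time": datetime(2012, 3, 21, 9, 30),  # when it was stored
}

lag = observation["transaction_time"] - observation["valid_time"]
print(lag)  # the gap itself is diagnostic of glitches and backlogs
```

Keeping both timestamps, rather than collapsing them into one, is precisely what makes the lag measurable and therefore useful for diagnosing system design problems.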

– jeremy

HCI, GIS and the Community

Thursday, March 22nd, 2012

The user-centered design proposed by Haklay and Tobon (2003) is very close to my own topic of critical GIS in that both topics recognize humans are an important factor in how GIS will be used and valued. Thus, historical, cultural and social aspects play a crucial part in the adoption and success of GIS. I especially appreciate how the article highlighted the fact that the improvement of GIS requires an “iterative process” between the tool and users (society). To maximize the practical usefulness of GIS, researchers must keep in mind that a “good” computer program cannot be judged solely on the number or complexity of its features but rather on how “usable, safe, and functional” (579) the application is to users.


It makes logical sense that HCI research picked up momentum in the 1980s when personal computers became more affordable. However, I wonder what the differences are between humans–computer interaction and human–computer interaction—in other words, between how groups interact with technology and how individuals do. Perhaps in decisions that involve a group of people (as often in PPGIS), users tend to listen to the one person with the most “expertise” and disregard their own knowledge of the application. The workshops described in the paper involved a user, a facilitator, and a “chauffeur”. I wonder if people would have interacted differently with the application if they were allowed “free play” on their own after a short demo of the basic tasks.


Furthermore, I think we should carefully consider what tradeoffs are involved between usability and functionality. By making an application more intuitive and easier to use, are we losing important functions that should be included despite their complexity? Ultimately, this judgment depends on the set of tasks intended to be carried out by the application. However, they are not always easy to predict. For the purpose of planning, shortest path analysis may be extremely insightful, although the results may be difficult to interpret given all the assumptions that go into the analysis. Moreover, uncertainty will definitely be another tricky area to convey. Therefore, one challenge is to figure out which types of tools should be included in a GIS for naïve users so that the system is both not limiting and not overwhelming.


Finally, the article made me think about the potential backlash of some HCI research. For example, I can imagine that disadvantaged communities may not want the results from the workshops to be published due to, perhaps, the misguided belief that making the system easier to use is equivalent to “dumbing it down” and the negative social stigma that follows. Therefore, the decision to include an opt-in or an opt-out option in HCI research is a sensitive one, since this option will have dramatic effects on the number of participants. Personally, I favor default settings that automatically include users in the study because, although most people probably want to help improve a system that they use, the hassle of opting in is enough to deter most people from becoming participants. However, due to the privacy issues mentioned by the author, the application should explicitly warn users of this option before they can start using the application.


“Take another picture! They added a fire hydrant!” and The Need to Go Digital

Thursday, March 22nd, 2012

Temporal GIS absolutely fascinated me once I found out what it was through this paper. The idea that spatial principles can be applied to time interests me as it signals to me that my spatial information knowledge has an additional use. The descriptions of each “image of cartographic time” were extremely helpful in visualizing precisely what the authors were trying to explain.

However, for each method of thinking about geographic temporality, events or mutations are needed. Langran and Chrisman describe a mutation as “an event that causes a new map state” and “a point that terminates [a] condition and begins the next”. In theory this makes sense. In the real world, what qualifies as a mutation or event? Take for instance a map of a suburb’s development. The first version may only have a few houses. The next might have new houses, new streets and a new school, and the following one might show a new fire hydrant as the only change. At what point in time does the map need to be updated? What event is considered significant enough to warrant making an update to the database? Additionally, who decides this? Perhaps it might be similar to the argument on ontologies, as it could be a subject-specific database where particular changes are more closely followed than others. A fire department may be far more interested in updates concerning each fire hydrant than a family, which may be more concerned about where the nearest park is located. Furthermore, is technology sufficiently advanced to be able to determine this on its own once parameters are set, or is this a manual job? (For example, could a satellite constantly taking pictures of the suburb be programmed to recognize when 5 new houses are completed and automatically update the database to which it is connected?)
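The parenthetical satellite question can at least be sketched in software terms: a monitoring process accumulates detected changes and commits a new map state only once a parameterized rule fires. Everything here—the 5-house rule, the class, the function names—is a hypothetical illustration, not a real system.

```python
# Hypothetical sketch of the satellite example: changes accumulate until
# an event rule fires, at which point a new map state is committed.
def detect_new_houses(previous_count: int, current_count: int) -> int:
    """Naive change detection: the increase in house count, if any."""
    return max(current_count - previous_count, 0)

class SuburbMonitor:
    def __init__(self, trigger: int = 5):
        self.trigger = trigger   # the parameter someone must still decide on
        self.pending = 0         # detected but uncommitted changes
        self.versions = 0        # number of committed map states

    def observe(self, previous_count: int, current_count: int) -> None:
        self.pending += detect_new_houses(previous_count, current_count)
        if self.pending >= self.trigger:
            self.versions += 1   # commit a new map state (a "mutation")
            self.pending = 0

monitor = SuburbMonitor()
for before, after in [(40, 42), (42, 44), (44, 46)]:  # 2 new houses per pass
    monitor.observe(before, after)
print(monitor.versions)  # 1: the rule fired once six houses had accumulated
```

Note that automating the mechanics does not answer the question above of who decides: the `trigger` parameter still encodes a human judgment about what counts as significant.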

On a slightly different note, I would like to emphasize the importance of going digital for temporal GIS. The authors only point out that their work focuses on “digital methods of storing and manipulating sequent states of geographic information” but neglect to explain why this is so important. Much like geolibraries, the concepts and theory to operate and organize them may have been present many years ago (this paper dates to 1988 while geolibraries date to 1998), but the technology did not exist to bring them to the digital world and make them practical, useful tools. For the many reasons discussed for promoting digital libraries, in addition to the nature of spatiotemporal information, digital is the only way to move forward.

-Outdoor Addict

Time and GIS

Wednesday, March 21st, 2012

We’ve heard how the cyberinfrastructure handles temporal and spatial data separately, but must be developed in such a manner that allows users/researchers to utilize both sets of variables when interacting with a GISystem. Now Gail Langran and Nicholas Chrisman provide an interesting overview of the topological similarities between time and space, and how best to design a GIS that can accurately display temporal elements.

I find the authors’ notions of time and its important elements to be simple, yet in a way that helps to lend credence to their subject. In particular, they characterize cartographic time as “punctuated by ‘events,’ or changes” (4). Furthermore, they do a nice job contrasting GIS algorithms based on questions concerning space (what are its neighbors? what are its boundaries? what encloses it? what does it enclose?) with the similar questions one might ask for time (what was the previous state or version? what has changed? what is the periodicity of change? what trends are evident?) (7). Such examples help to define this paper not just as a discussion of temporal data, but of temporal data tied closely to its application in geographic space. Such an added dimension can be incredibly important when we begin to think about all of the geographic phenomena that occur over differing timelines. It’s also an element we should try to remember more in our own research efforts.
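Two of the temporal questions the authors list can be phrased directly against a simple version history, here just a time-ordered list of (time, state) pairs. The parcel history and function names below are illustrative assumptions of mine, not anything from the paper.

```python
# A toy version history for one map feature, ordered by time.
history = [(1984, "forest"), (1990, "cleared"), (2001, "suburb")]

def previous_state(history, t):
    """What was the state or version in effect at time t?"""
    candidates = [state for (year, state) in history if year <= t]
    return candidates[-1] if candidates else None

def what_changed(history):
    """What has changed between successive versions?"""
    return [(b[0], a[1], b[1]) for a, b in zip(history, history[1:])]

print(previous_state(history, 1995))  # cleared
print(what_changed(history))
# [(1990, 'forest', 'cleared'), (2001, 'cleared', 'suburb')]
```

The parallel with the spatial questions is what makes the analogy work: "what was the previous state?" is answered by walking neighbors in time much as "what are its neighbors?" is answered by walking adjacency in space.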

I do wonder about the distinction the authors draw between real world time and database time. Since many GIS databases are headed toward real time, streaming data – as was pointed out in previous lectures – why make this distinction? Perhaps I’m not technically inclined enough to understand the importance of the difference in programming or maybe it’s just a matter of how the system might store information. Anyone have thoughts on why real time data can’t be used in a manner that equates it to database time?

Human-Computer Interfaces and Identifying User Groups

Wednesday, March 21st, 2012

Haklay and Tobon stress the need to design both software and hardware that is most convenient for an identified user group and their goals. Cases such as Braille displays for computers are clear examples of a positive, improved human-computer interface. However, when applied to a complex analytical field such as GIS, HCI studies run into the issue of defining the user group. There is no immediate common attribute shared amongst all GIS users, unlike the condition of blindness shared by Braille users.

The authors are instructive when they indicate that identified difficulties with GIS are more human-based than technology-based. The challenge with users of GIS is that experience with the software varies wildly, and some may be unaware that they are part of a GIS analysis (as a source of information or doing it themselves). It is tempting to simplify displays to extend the potential audience of GIS, and we have seen this in many GeoWeb 2.0 apps and platforms such as Geocommons. Geocommons’ site boasts that users can “Easily create rich interactive visualizations to solve problems without any experience using traditional mapping tools”. Web 3.0 continues in this direction by offering semantic searches and increasingly mobile applications and devices. In the drive to eliminate the distance between computers and humans, however, it becomes easier for others to manipulate and use naïve users as data sources. The increased digitization of our world has sparked debates about privacy and perceived privacy issues.

A question HCI studies could ask, then, is how best to segment GIS users into groups. Can the semantic intelligence of Web 3.0 be used to map common thought processes/links to better identify common goals? This understanding could be used to direct intermediate GIS users to resources that explain the basics of analytic functions, while underweighting papers that describe advanced processes (I frequently ran into advanced “Petri Net” algorithms when searching for the general history of temporal GIS). Are there alternatives to segmenting users by “skill”, which is a vague measure and largely prescriptive?


HCI, Cognition, Systems and Designing Better GIS

Wednesday, March 21st, 2012

Mordechai Haklay and Carolina Tobon provide an interesting overview of the use of GIS by non-experts, with a good focus on how public participation in GIS continues to shape actual GIS systems in a manner that makes them more accessible and easy to use. In particular, I find their section on the workshops they conducted (582-588) to evaluate the usability of a system pretty interesting, especially the authors’ work testing the London Borough of Wandsworth’s new platform. In particular, the findings on the need to integrate aerial photos for less sophisticated map users and the need for the system to give feedback to users to confirm they had completed a task struck me as simple, intuitive adjustments many systems leave out. Of course, something as simple as feedback to confirm a task may seem like an obvious part of any system, but I can think of a great many online programs and forms which fail to do this and often leave me wondering if my work/response has been saved.

One of the more interesting aspects of the topic of human-computer interaction, for me, when thinking about it in terms of GIS, is the way it sits at the intersection of geospatial cognition and geospatial cyberinfrastructure. Perhaps I am biased by my own interests, but this topic pulls these two previous ideas from our class together nicely, as it relies on both to make many of its most salient points. However, one question I had after reading this paper and discussing cognition in class remains: how do we test geospatial cognition in such a manner that we can apply our findings to better systems design? Often, the field of geospatial cognition seems more obsessed with exploring the ways in which humans understand space and engage in way-finding behavior. I’d be interested in seeing articles/research that really dig into applying psychological findings to systems design in a manner that goes beyond the testing these authors have done. I should say they do a nice job, though, of summarizing the theory of how cognitive processes like “issues such as perception, attention, memory, learning and problem solving and [] can influence computer interface and design” (569). Yet I don’t see these concepts applied directly in their testing—perhaps it’s just not covered extensively.

I think it’s only in this way that we can truly bridge the gap between humans and computers. Or is it humans and networks of computers? Or humans and the cloud? Or humans and the manner in which computers visualize data, represent scale and provide information about levels of uncertainty? As one might conjecture, the topic of human-computer interaction may be limitless depending on what angle we approach it from.