Archive for the ‘506’ Category

How much can eye movements tell us?

Friday, February 1st, 2013

Using eye movements as a human-computer interface (HCI), as Poole and Ball point out in their article, has many advantages. Perhaps the most useful is its application for disabled people who would not otherwise be able to use a keyboard or a mouse. The technology for tracking eye movements has made great strides; what used to require very invasive methods can now be accomplished with an infrared camera.

 

Detecting eye movements is one thing, but interpreting these metrics in order to infer some form of thought is something entirely different and more complex. This process is built on the eye-mind hypothesis, which is exactly what it sounds like: if your eyes are drawn to or fixated on something, this provides some insight into the thought process behind those actions. These conclusions can then be used to analyze and improve the design of interfaces. The applications of this process are endless, ranging from a better cockpit interface that reduces pilot error to helping doctors perform medical procedures.

 

The main difficulty in this technology lies in interpreting the various eye movement metrics. If someone blinks a lot, is this indicative of a low workload, or do they simply have dry eyes? As the authors point out, we are limited by the technology available to process the enormous amount of data generated, and it must be done at a reasonable cost. With current technology, algorithms need constant recalibration. Another drawback is that for every disabled person this technology may help, it may also exclude people with lazy eyes or those who need hard contact lenses.
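To make the gap between raw data and "metrics" concrete, here is a minimal sketch of dispersion-threshold fixation detection, one common way of turning a stream of (x, y) gaze samples into fixations whose count and duration can then be interpreted. The thresholds and the 60 Hz sampling rate are my own illustrative assumptions, not values taken from Poole and Ball.

```python
# Minimal sketch of dispersion-threshold fixation detection (I-DT style).
# Thresholds and the 60 Hz sampling rate are illustrative assumptions.

def dispersion(points):
    """Spread of a cluster of gaze points: (max x - min x) + (max y - min y)."""
    xs, ys = zip(*points)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1, hz=60):
    """samples: list of (x, y) gaze points in screen pixels.
    Returns a list of (start_index, end_index, centroid) fixations."""
    min_samples = int(min_duration * hz)
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            xs, ys = zip(*samples[start:end])
            fixations.append((start, end, (sum(xs) / len(xs), sum(ys) / len(ys))))
            start = end
        else:
            start += 1
    return fixations
```

Even in this toy form, the interpretation problem is visible: the choice of dispersion and duration thresholds decides what counts as a "fixation" at all, before anyone gets to argue about what the fixation means.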

 

 

Pointy McPolygon

 

The user-system interaction issue: has it been turned upside-down?

Thursday, January 31st, 2013

After reading Elwood's and Lanter's articles I had the image of a complete shift. With his user-centered interface, Lanter was concerned with the interaction between the user and the system. User-centered interfaces are now highly developed. Someone mentioned the perfect example in this blog: two-year-old babies are able to use iPads! Can it be more user-friendly than that? Interfaces are so user-centered that people generate an enormous amount of heterogeneous data, and are using geotagging and geoblogging. I think that users are now interacting directly with the system, not only with the interface. Furthermore, users are interacting with each other.

On the other hand, 'system designers' (in Lanter's words, or 'GIScientists' in Elwood's perspective) have to figure out ways to manage this phenomenon. The problem has turned back to the 'system'. It is interesting how the 'system' from Lanter's point of view refers to the software system design, whereas what I mean by 'system' is broader: social, political, and technological systems (oh no, not the GIS tool/science debate again!!!). Geovisualization technologies have an impact on all spheres of society. I'll give a few examples (from Elwood's paper). Political: renaming places and the negotiation of colonial and postcolonial histories, or the promotion of activist activities; social: posting information on bad neighbors!; technological: interoperability of heterogeneous data, and the transformation of meaning when different people work with the data…

S_Ram

Realized geovisualization goals

Thursday, January 31st, 2013

MacEachren and Kraak authored this article in 2000, a year before the release of Keyhole's EarthViewer and five years before Google Earth. In the piece, the authors present the results of collaboration among teams of cartographers and their decisions on the next steps in geovisualization. They mention broad challenges pertaining to data storage, group-enabled technology, and human-based geovisualization. The aims are fairly clear, but very few, if any, actual solutions are proposed by the authors.

While reading the article, I had to repeatedly remind myself that it was written a dozen years ago, when technologies were a bit more limited. Most notably, there appears to be a very clear top-down approach in the thinking here, very reminiscent of Web 1.0, where information was created by a specialized provider and consumed by the user. In the years since this piece was written, Web 2.0—stressing a sharing, collaborative, dynamic, and much more user-friendly paradigm—has largely eclipsed the Web as we understood it at the turn of the millennium. In turn, many of the challenges noted by MacEachren and Kraak have been addressed in various ways. For one, cloud storage and cheaper physical consumer storage have in large part solved the data storage issue. Additionally, Google has taken the driver's seat in developing an integrated system of database creation and dynamic mapping, with Fusion Tables and KML, both of which are extremely user-friendly. And applications and programs that enable group mapping and decision support are constantly being created and launched. MacEachren and Kraak did not offer concrete solutions, but the information technology community certainly has.

– JMonterey

Eye-tracking: the Good, the Bad, and the Uncertain

Thursday, January 31st, 2013

In a well-written and fascinating article, Poole and Ball summarize how eye-tracking technology works and how it is/can be applied in human-computer interaction. They broadly outline the technology behind eye-tracking devices, as well as the psychological interpretation of various eye movements.

Reading this piece, two key thoughts occurred to me. First, the psychology of eye movement ventures eerily close to mind-reading in the loosest sense; or, at the least, scientists and psychologists are attempting to interpret users' thoughts on a minute and precise level. The accuracy of interpretation is currently debatable, but this appears to be a field of science that would open an enormous landscape of technological applications pertaining to how we see the world. Of course, this is both positive and negative. On the positive side, the authors mention the use of eye tracking as a way to train autistic children to maintain eye contact during communication. However, on a more cynical level, once distributed commercially, how will people use the technology as a way to exploit us?

My second thought relates to this last point. Reading this article in the context of understanding GIS, I wonder how eye tracking might be applied geographically. The simplest application, as I see it, would be in decision support in planning, helping planners and designers situate objects in space to best capture the attention of their target. However, I believe a much more likely, and perhaps controversial, application would be in advertising. Tracking a user's eye movements on a computer screen, for instance, could be a gigantic boon to advertisers looking to attract users' attention.

– JMonterey

Graphical user interfaces

Thursday, January 31st, 2013

Lanter's assessment of user-centered graphical user interfaces and the applicability of those graphical systems to GIS is quite accurate in that visualization makes GIS easier to understand, learn, and use. I believe this relates to how the human brain evolved to adapt to man's ever-changing environment: it responded by creating a set of built-in steps for learning, understanding, and using tools through touch and sight. To elaborate, the user-centered graphical interface is the connection between the GIS "tool" and the user. As a person's brain is designed to see and expect a result in response to an action, the interface plays a major role in understanding; humans learn through observing the results of their actions. In essence, humans create logical connections through such pathways, which they can then use to deduce the outcomes of other, similar actions.

Interlinking the user and the system, through the system interface and the user model, as Lanter writes, seems to be the best way of linking man's natural interaction tendencies with the computer's unnatural approach. Even so, the design of the interface may still cause problems, as man's "instinctive" approach to using the interface may limit its function. Therefore, I agree with Lanter that input from users to interface designers is essential to resolving the tension between the complexity of results and simplicity for the user. One thing that may resolve this problem of complexity versus ease of use would be to design an interface that allows the use of both traditional and graphical modes (i.e. graphical to start with and learn from, and traditional for the advanced user once they have mastered the basics).

 

C_N_Cycles

 

Geovisualization and GIScience

Thursday, January 31st, 2013

Sarah Elwood's discussion of the emerging questions in geovisualization and their linkages to GIScience research does highlight the issues of qualitative and quantitative data overload and the dissemination of that data. However, I believe that dynamic change and addition to data, be it quantitative or qualitative, is needed in both standard and non-standard forms. Through my own research I have found that dynamic data in a non-standard form often tells more about a situation than standardized data. That said, standardized data is still needed in order to "create order" in our understanding and transmission of data to other people.

The article makes me think about how, as humans, we want everything in order so as to make sense of what we see, and how GIScience strives to create order in data for it to be useful. Nevertheless, is the universe not chaotic, and the basis of all data fundamentally chaotic? Maybe chaos and non-standard data tell us something more important about who we are as a people, and how the tools and the ways we look at the world change from person to person and culture to culture. Perhaps the heterogeneity of the data and of the types of software and hardware we use is the norm, and GIScience is trying to place artificial boundaries on how we see data and use tools.

Besides trying to fit data to standardized forms, the idea of "public" and "expert" technologies just does not make sense. Today, technologies are so integrated into how youth (0-30 years old) see the world that it is not the technology that should be classified as "expert" or "public" but the person who manipulates it. Growing up during the advent of mass-produced home computers, and driving the development of better processing power and performance, past what our parents had ever imagined, through the purchase of video games and internet use, has shown me that it is the person, not the machine. I have learned that one must often use a plenitude of platforms to achieve a result, as each platform, like Google Earth or ArcGIS, has its strengths (one cannot create a single platform to satisfy all needs or wants).

 

C_N_Cycles

 

Making GIS UI friendly

Thursday, January 31st, 2013

Although unrelated to analysis, the user interface (UI) is an incredibly important aspect of any GIS. When using applications such as ArcGIS, the graphical user interface (GUI) is what the person sees when they interact with the software on their screen. Thus, the simpler and easier to use the interface is, the faster the end-user will be able to learn the system and use it efficiently.

One of the best ways of organizing the UI seems to be the use of natural or interface mappings. These methods play on the user's intuitive and logical reactions to occurrences. For example, Lanter uses the analogy of the steering wheel: if a person turns the steering wheel right, the car will move to the right, and vice versa. Similarly, when a user moves the mouse to the right or left, they would logically assume the cursor on the screen would do the same. This seems to be the best way to teach users how to use a particular system, as they are more likely to remember instinctive actions.

Lanter identifies two key concepts that should be taken into account during user-centered interface design: how to map the system interface to the user's existing model, and how to shape and influence the user's model while they interact with the system. The first part, as previously mentioned, has to do with designing the interface to take advantage of an individual's intuitions and natural mappings. The second part, arguably the biggest challenge going forward in UI design, regards how easily the user is able to learn the system, based on the way it is organized and fulfills functions. Overall, further development in UI, primarily in ease of use and intuitiveness, will open GIS up to a larger variety of individuals, especially those relatively unfamiliar with GIS applications.

-Victor Manuel

GIScience, Geovisualization, shifts in how we view data

Thursday, January 31st, 2013

The article by Sarah Elwood touches on how the evolution of spatial data has spurred new questions (or continues to fuel existing ones) regarding how we can begin to handle these datasets in such a way that we can analyze them and make some meaning from them. As new geovisualization technologies emerge, and as long as people continue to freely post geospatial information that can be collected, we face a double-edged sword. Information about the livelihoods of people at a micro level has never been so accessible, yet the challenge posed by new, innovative geovisualization technologies is what Elwood calls a conundrum of "unprecedented volumes of data and unprecedented levels of heterogeneity" (Elwood, 2009). Applying GIScience theory and research, such as assessing the ontology of the data, mathematical algorithms, and visual modeling techniques, has contributed to research on data integration and heterogeneous qualitative data, issues that extend beyond new geovisualization technology.

Still, even with bigger, better, newer algorithms that can automate data integration, we must recognize that the categorizations in a particular dataset are context-dependent. Therefore, labeling something can carry a lot of weight. What people define as a "bad neighborhood" can have a multitude of meanings (bad as in high crime, noise, or a particular demographic that you do not mix well with?), which can have significant social and political implications. If datasets are to be combined, then finding a proper categorization scheme must also be thrown into the mix of data integration challenges. Perhaps this is where metadata can really shine, if it can provide the context of how the dataset was derived and define the categories it has chosen. I don't know about you, but my appreciation for "data about data" has definitely grown since GEOG 201.

-tranv

Geovisualization: What we have achieved

Thursday, January 31st, 2013

Many of the pressing problems of today have a geospatial component. The paper by MacEachren rightly points out the challenges involved in the efficient representation of geospatial data. In the 11 years since the paper was written, radical changes have taken place in the domain of virtual mapping. Not only have GIS software packages like ArcGIS and QGIS developed rapidly, but other mapping and virtual-earth services like Google Maps and Google Earth have also become popular. The authors rightly pointed out the changes that were taking place as the internet became the prominent medium for disseminating geospatial data.

With 80% of all user-generated data on the web containing geo-location information, storing and leveraging this data generates a lot of interest. Some of the problems discussed in the paper have been dealt with efficiently in recent years. For example, multi-scale representation of objects has been handled with the concept of scale-dependent renderers, used extensively in GIS packages as well as in Google Maps and Google Earth. However, the decision of what to show at each scale is still subjective. When geographic objects are stored in the database as vectors, attribute information can be added to each object to further describe it in a non-spatial manner. The abstraction of layers provides the flexibility to modularise map building and analysis, enabling reuse of layers to create different themes. Crowdsourcing and mobile mapping applications have redefined the way group mapping tasks are performed.
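To make the idea of scale-dependent rendering concrete, here is a small sketch of how a renderer might decide which layers to draw at a given zoom level. The layer names and zoom ranges are hypothetical, invented for illustration; this is not how Google Maps or any particular GIS package actually configures its renderers.

```python
# Hypothetical sketch of scale-dependent rendering: each layer declares the
# zoom range at which it should be drawn, and the renderer filters on that.

LAYERS = [
    {"name": "country_borders", "min_zoom": 0,  "max_zoom": 6},
    {"name": "major_roads",     "min_zoom": 6,  "max_zoom": 12},
    {"name": "buildings",       "min_zoom": 14, "max_zoom": 20},
    {"name": "building_labels", "min_zoom": 17, "max_zoom": 20},
]

def layers_to_draw(zoom_level):
    """Return the layers visible at this zoom level (larger = more detail)."""
    return [layer["name"] for layer in LAYERS
            if layer["min_zoom"] <= zoom_level <= layer["max_zoom"]]

print(layers_to_draw(5))   # ['country_borders']
print(layers_to_draw(15))  # ['buildings']
```

The subjectivity mentioned above lives entirely in those zoom thresholds: someone still has to decide that buildings appear at one scale and their labels only at another.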

The paper also emphasises several times the need for cross-domain research to address the problems of geovisualization and spatial analysis. In terms of geovisualization, research results from the fields of computer graphics, geosciences, cartography, human-computer interaction, and information visualization need to be integrated in order to find new and innovative ways of creating maps. Multi-disciplinary, crosscutting research is the way forward to make further advances in how geographic information is presented.

-Dipto Sarkar

 

The Future of Geovisualization technologies…

Thursday, January 31st, 2013

The emergence of new geovisualization technologies such as Google Maps and Google Earth is revolutionizing the way people interact with GIS. In contrast to desktop software, these web-based applications allow unprecedented access, at no cost, to powerful visualization technologies. In addition, previously text-based web apps such as Facebook and Twitter are now incorporating spatial components. For example, a person is now able to tag their exact location, down to a particular building, when they update their status on Facebook. Web-based geovisualization technologies are growing in popularity because most of them are free, very easy to access (usually only an internet connection is required), and they allow for the standardization and greater sharing of spatial data. This last point is extremely important because it has opened up a wealth of research applications. For example, a researcher in Greenland might be tracking ice floes, while a researcher in northern Canada may be tracing the migration patterns of polar bears. Web-based geovisualization technologies such as Google Earth now allow both researchers to overlay their data on the same map, opening avenues for further research, such as how seasonal ice floes affect polar bear migration patterns.

One other point that must be brought up, and that was addressed well by Elwood, is the heterogeneity of the huge amounts of data being generated by these new geovisualization technologies. Thanks to these technologies, large and diverse datasets are now available covering a wide variety of user-inputted geospatial data. However, an important challenge for the future will be how to standardize these datasets, as much of the data is based on opinions rather than standard data points. Most people have shifting meanings for how they perceive their local geography. Standardization will also become more important as datasets grow larger and larger, requiring more automation of analysis.

 

-Victor Manuel

Grappling with visualization and spatial data expectations

Wednesday, January 30th, 2013

I often find papers like this can be very theoretical, and I can struggle to grasp the real crux of the research, but I found that Elwood presented some fairly theoretical topics clearly by providing examples. [something about the other paper]

As spatial components of new tools and technology become increasingly ubiquitous, I find that there is now an expectation for data to be presented spatially, and some disappointment when it's not available. These papers made me think about how I expect spatial information to be easily available and consumable. For example, while discussing where to go skating with some classmates, we were irritated by the idea of having to use a website that just listed the rinks with their general locations (http://ville.montreal.qc.ca/portal/page?_pageid=5977,94954214&_dad=portal&_schema=PORTAL) and were thrilled to find the alternative, which provided the skating rinks mapped across the island (http://www.patinermontreal.ca) – to be honest, I was fully expecting a mapped version to be available and would have been shocked if there wasn't. Apartment-hunting without Padmapper or similar is pretty miserable, since "where" is usually the most important variable in any potential home. Clearly non-GIS users have developed a higher level of spatial literacy through products from Google and spatial functions in technology, but they are also coming to expect consumable spatial information to be available.

-Kathryn

Programming and visualization

Wednesday, January 30th, 2013

This paper raises some interesting points about what interfaces are trying to do – it might seem like they are just trying to create an attractive user experience, but in an ideal world they should allow a user to learn the software through "trial and error" and ultimately come to the correct conclusions about how the software's functionality works. These seem like high expectations, but it is clearly useful to design software for the people who will be using it!

This paper was written a fair time ago. I wonder how many GIScientists and users of GIS have continued to find the developments and improvements in GUIs for GIS software fairly unsatisfactory, and have turned or returned to spatial analysis through programming? I almost never touch an ESRI product anymore – I'd usually rather work through a problem in a combination of PostGIS and R. Yes, partly it's for automation, but to be honest I do find the software can be unintuitive sometimes. However, teaching PostGIS to a computer programmer with no background in GIS (with any "S") showed me that while his grasp of databases and analysis was clearly strong, the lack of visualization really did hinder his understanding of the analysis tools and the concept of projections – even in someone with a highly technical background. While programming is incredibly useful in GIS and spatial analysis, the visualization aspect is still crucially important too, even for learning. And (maybe because I was trained visually) I never fully trust my results until I "sanity check" them in a visual way.
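In that spirit, a typical session for me looks something like the sketch below: run the analysis as a spatial query, then plot the result before trusting any numbers derived from it. The table and column names are made up for illustration, and the sketch assumes the psycopg2, shapely, and matplotlib packages are available.

```python
# Sketch of a programmatic GIS workflow with a visual sanity check.
# Table/column names are hypothetical; assumes psycopg2, shapely, matplotlib.
import psycopg2
from shapely import wkb
import matplotlib.pyplot as plt

conn = psycopg2.connect("dbname=gisdata")
cur = conn.cursor()

# The "analysis": buffer hypothetical bike paths by 250 m and return geometries.
cur.execute("""
    SELECT ST_AsBinary(ST_Buffer(geom::geography, 250)::geometry)
    FROM bike_paths;
""")
buffers = [wkb.loads(bytes(row[0])) for row in cur.fetchall()]

# The sanity check: draw the buffers before trusting any statistics built on them.
fig, ax = plt.subplots()
for geom in buffers:
    xs, ys = geom.exterior.xy
    ax.plot(xs, ys)
ax.set_aspect("equal")
plt.show()
```

The plotting step is exactly the visual check described above: if a buffer lands in the wrong hemisphere, a projection or units mistake shows up immediately in a way a summary table never would.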

Random thought – the "mapping" problem is very interesting and of course relevant to GIS, and it seems like the touchscreen has changed this a lot, since it's so intuitive. I've noticed several times, anecdotally, that very young children can work iPads as early as age 2, while they lack the ability to work a mouse on a regular computer for years. I've always assumed it's an issue of dexterity, but maybe it's more to do with this "mapping."

-Kathryn

Eye-tracking in Augmented Reality

Monday, January 28th, 2013

The paper by Poole and Ball discusses in detail the metrics used in eye-tracking research and some of their applications. However, the paper fails to mention one of the most successful commercial uses of the technology: Canon introduced SLR cameras employing eye-controlled autofocus as early as 1992. The system worked very well and has led to a lot of discussion amongst photographers as to why Canon does not include this technology in their recent cameras.

Now, with the coming of augmented reality systems, eye-tracking technology has the potential to revolutionize how users interact with their surroundings. Ubiquity is the most important requirement for any augmented reality system. Eye-tracking technology can be used to detect when the user seems to be confused and accordingly provide them with contextual information. Such an application of augmented reality would be less intrusive and more usable in day-to-day life. Eye-tracking can be further coupled with other technologies such as GPS to make augmented reality systems more usable by increasing the speed at which they detect objects. The location information provided by the GPS can be used to narrow down the search space for the object. For example, if a tourist is staring at the Eiffel Tower, then the system knows that they are located near the Eiffel Tower in Paris, and the space in which the system needs to search for similar-looking objects is greatly reduced.
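A hedged sketch of that idea: use the GPS fix to discard candidate landmarks that are too far away before running any expensive image matching. The landmark list and the 500 m cut-off are invented for illustration.

```python
# Sketch: use a GPS fix to prune the candidate landmarks an AR system has to
# match against. Landmarks and the 500 m cut-off are illustrative assumptions.
import math

LANDMARKS = {
    "Eiffel Tower":    (48.8584, 2.2945),
    "Arc de Triomphe": (48.8738, 2.2950),
    "Notre-Dame":      (48.8530, 2.3499),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_landmarks(gps_lat, gps_lon, radius_m=500):
    """Only these candidates need to be passed on to the image matcher."""
    return [name for name, (lat, lon) in LANDMARKS.items()
            if haversine_m(gps_lat, gps_lon, lat, lon) <= radius_m]

# A tourist standing on the Champ de Mars:
print(candidate_landmarks(48.8556, 2.2986))  # ['Eiffel Tower']
```

The expensive part of the pipeline (image recognition) then only ever sees the handful of objects that are physically nearby, which is where the speed gain comes from.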

The whole domain of augmented reality is still in its infancy and it is up to the imagination of the engineers to find supplementary technologies that might be used to enhance the system.

– Dipto Sarkar

 

Understanding SDSS in the age of Web 2.0

Friday, January 25th, 2013

P.J. Densham's discussion of the possibility of effective spatial decision support systems gives a useful overview of the concepts in question. The article, however, is rooted in the time it was written, and in an age when GIS (at least as a tool) is moving from the domain of professional geographers to anyone with an internet connection, Densham's arguments may have to be re-evaluated.
It is conceivable that in our current context (although I wish not to be too presumptuous, given my lack of knowledge on the subject) GIS and SDSS aren't really such separate entities as they once were. Applications which incorporate the principles of GIS (as science, tool, and toolmaking) can be used to support spatial decision making. The growth of user-generated content on the internet means that a new SDSS may be able to use this data (which will often have a spatial element) to produce decisions that are more, if I will, democratic. This is in fact exactly what is done in the Rinner article. The distinctions between GIS and (S)DSS noted by Densham are not so clear-cut as they may have been at the time of writing.
As such, while Densham provides a useful background to the concepts that structure SDSS, his article must be read descriptively. It provides a springboard to things that are to come, and to things that are already happening, but it is dated and must be considered in our current context to be useful.

Wyatt

A clever Argooment

Friday, January 25th, 2013

Rinner et al. explore the capabilities of participatory GIS in a case study involving an application that uses geographic arguments in collaborative decision-making processes. The application, called ArgooMap, combines time-stamped threaded conversation "mashed up" with a map API (in this case the Google Maps API), and appears to present significant benefits over decision-making without a GIS. The article is written clearly and effectively: it first outlines the theory and technology behind the process and then uses the Ryerson University case study to showcase the capabilities of the application.
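The core of such a mashup is fairly simple to picture: each contribution is a time-stamped message tied to a thread and to a point on the map, which the map API then renders as markers. A minimal sketch of that record structure is below; the field names and the sample coordinates (roughly the Ryerson campus) are my own guesses for illustration, not ArgooMap's actual schema.

```python
# Sketch of the kind of record an argumentation-mapping mashup needs:
# a threaded, time-stamped comment anchored to a map location.
# Field names are hypothetical, not ArgooMap's actual data model.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class GeoComment:
    author: str
    text: str
    lat: float
    lon: float
    timestamp: datetime = field(default_factory=datetime.utcnow)
    reply_to: Optional[int] = None   # id of the parent comment, if any
    comment_id: int = 0

def to_map_markers(comments: List[GeoComment]) -> list:
    """Shape the comments the way a web map API expects its markers."""
    return [{"position": {"lat": c.lat, "lng": c.lon},
             "title": f"{c.author} ({c.timestamp:%Y-%m-%d %H:%M})",
             "info": c.text}
            for c in comments]

thread = [
    GeoComment("studentA", "More bike parking needed here.", 43.6577, -79.3788, comment_id=1),
    GeoComment("studentB", "Agreed, and better lighting too.", 43.6577, -79.3788, comment_id=2, reply_to=1),
]
print(to_map_markers(thread)[0]["title"])
```

Everything interesting about ArgooMap (the threading, the shared map view, the discussion analysis) sits on top of records shaped roughly like this.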

Using the Google Maps API in conjunction with user-generated content (whether volunteered or not) poses nearly infinite possibilities in myriad fields. ArgooMap is particularly interesting in its ability to add an entire dimension to normal conversation. So much of what we say, especially when we are making decisions, has geographic ramifications. Many marketers and advertisers are trying, and in many ways succeeding, to parse our monitored conversations to extract geographic content and better target products. That is largely out of our hands, but normal conversation and decision-making is not. ArgooMap seems to implement the concept of cognitive maps, which drives the conversation in alternative directions. This rings especially true in the observation that participants mention geographic content at varying scales depending on the presence of the visible map. If all interlocutors are seeing the same map simultaneously, they can refer to specific places or directions that previously existed only in the mind of the speaker.

As an aside, it would be incredibly interesting to see Twitter, where users are constantly tweeting back and forth, implement a map similar to ArgooMap. Perhaps when programmers solve the geotagging puzzle…

– JMonterey

Is SDSS Geoweb’s ancestor?

Friday, January 25th, 2013

In an article from the 1980s, P.J. Densham outlines the concept of a Decision Support System (DSS), which aids the user in a decision-making process involving a number of complex parameters stored in a database. He posits that in many cases a Spatial Decision Support System (SDSS), which uses the basic framework of a DSS but adds a spatial component, would be quite helpful. He notes that an ideal SDSS would a) allow for spatial input, b) represent spatial relationships and structures, c) include geographical analysis, and d) provide spatial visualizations. This differs from GIS in that an SDSS is dynamic, while GIS is more rigid.

The need for a dynamic geographic decision-making process is clear, and in that, Densham is completely correct. However, the problem with reading this article today is that GIS has in large part transformed away from its infant stage and towards Densham's SDSS. More specifically, the Geoweb, rather than the more orthodox desktop client, incorporates many of the outlined SDSS properties. User-generated content allows for near-real-time data, and modern technology allows for rapid regeneration of content on a web page. In fact, it is interesting to read this article in conjunction with the Rinner et al. article, written roughly two decades later, about the use of user-generated content to structure a GIS. Another application is Google Maps' traffic feature, showing roads as red (heavy traffic), yellow (moderate traffic), or green (little or no traffic). As users see this data, they decide, for instance, to choose the "greenest" path, but if enough people do so, the green path becomes the red path, and the red path eases. The data is thus dynamic, and the map adjusts accordingly.
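That feedback loop is easy to sketch: if drivers keep re-routing toward whichever road currently looks greenest, the congestion shifts back and forth between roads. The toy simulation below is only meant to illustrate the dynamic; the thresholds and numbers are invented, and this is not how Google Maps actually models traffic.

```python
# Toy simulation of the traffic feedback loop: drivers re-route toward the
# less-congested road, which shifts the congestion onto that road.
# Purely illustrative; not how Google Maps models traffic.

def colour(load, capacity):
    ratio = load / capacity
    if ratio < 0.4:
        return "green"
    if ratio >= 0.75:
        return "red"
    return "yellow"

road_a, road_b = 900, 200        # cars currently on each road
capacity = 1000
for step in range(5):
    print(f"step {step}: A={colour(road_a, capacity)} ({road_a}), "
          f"B={colour(road_b, capacity)} ({road_b})")
    movers = 300                  # drivers who re-route toward the greener road
    if road_a > road_b:
        road_a, road_b = road_a - movers, road_b + movers
    else:
        road_a, road_b = road_a + movers, road_b - movers
```

Run it and the "green" road turns "red" within a couple of steps, which is exactly the dynamism Densham's SDSS framework anticipates and static GIS layers do not capture.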

-JMonterey

Crowd Sourced Master Plan, One Step Removed

Thursday, January 24th, 2013

Rinner, from the beginning, claims that user-generated information is not considered a "serious" pursuit. To test this, he scrapes user-generated text for geographic references, only to retroactively apply a geo-reference to each post. The researchers, who are responsible for geographically referencing the posts, strip away the user's involvement in the geographic context. This can introduce a large source of error into the data. Consider that, on page 14, a marker's placement corresponds to the "label in Google's 'map' view." In the event that a user refers to a general area, choosing to place a point label greatly reduces the dimensionality of the user's perspective, much like a home is not a coordinate but a structure, along with the surrounding property, land, or neighborhood. Had the researchers allowed the users to actively tag locations to their posts, they would have gained far greater insight into the users' intentions.

The creation of a master plan cannot be accomplished using the scientific method. Studies can find correlations in certain areas, but many fall apart when the study area is changed. This may be due to social and cultural differences. In that light, and working on the basis that people are subjective creatures, a crowd-sourced master plan can gain from further empowering the constituents. In the case of the Ryerson Master Plan, providing the user with the ability to choose the location of their post's geo-tag might have added another dimension to the study.

As a side note on crowd-sourced urban planning, I do not think that even large-scale maps are sufficient. The city is rarely experienced from the air. With that in mind, I am in favor of developing a forum in which people can experience the city from the ground. To my knowledge, no such technology yet exists.

AMac

Web 2.0 and its application in DSS

Thursday, January 24th, 2013

In their paper on the uses of Web 2.0 to support spatial decision making, Rinner et al. address one of the problems that M. C. Er identified in DSS 20 years earlier: group decision making. Using PGIS as a source of data makes decisions for a group of people easier. To test this, the authors designed ArgooMap, where users could make geographically referenced comments about campus/city life or the Ryerson identity. Though some limitations were present, the power of PGIS is clear in its application to online mapping.

 

Whether this data is useful in DSS is a whole other question. The authors argue that this case study shows it could be a useful tool by having users vote on preferred, geographically referenced locations. For example, instead of posting comments about their favorite restaurants, users could cast votes on restaurants over a whole array of criteria, thus helping other users pick which restaurants they would want to go to.

 

Presently, five years after this article was written, Google implements these services on its maps. Great strides have been made in mapping as a support to spatial decision making; however, much ambiguity still exists over the exact definition of DSS.

 

Pointy McPolygon

 

Spatial Decision Support from the Crowd

Thursday, January 24th, 2013

The Rinner et al. article explores the intersection of spatial decision support systems (SDSS) and volunteered geographic information (VGI) through the development and piloting of an argumentation-mapping Web 2.0 application intended to solicit and map spatially grounded citizen input and discussion.

Rinner et al. single out planning processes as areas where such applications have potential. By tying user discussions and feedback to explicit locations, the responses can serve as a qualitative gauge for decision-makers of the relative importance of criteria within an SDSS model. There is even the opportunity to add a quantitative element to such conversations by integrating a positive/negative rating system into the threaded messages (like a spatial Reddit!). Overall, Rinner et al.'s project aims to find effective ways to crowdsource planning decisions and amplify the ideas of citizens using emerging Web 2.0 technology.

The applicability of argumentation mapping and other geospatial mashups to decision support is not without its concerns. As we learned the hard way in GEOG 407, a #neogeo Web 2.0 application is only as good as its programmer, and can only ever be as good as the underpinning API. Four years after this paper's publication, many of Rinner et al.'s suggestions for improvements to the API are still unaddressed by Google. Formal integration of the data collected in ArgooMap with SDSS models is still a long way off: for now, it is limited to qualitative uses. In addition, these digital consultative avenues, even if they are improving in terms of end-user functionality, may still be exclusionary: from the digital divide, to gender dynamics on the internet, to the less affluent simply having less free time to invest in online civic discussion, there is a role for critical geographers and GIScientists to suggest ways for SDSS and its related tools to be more inclusive and thus more likely to truly democratize the decision-making process.

-FischbobGeo

Thursday, January 24th, 2013

In reading M. C. Er's 1988 article "Decision Support Systems: A Summary, Problems, and Future Trends," I am left with the question of how the concept of DSS can produce technologies that are at once broad and specific, such that they can account for both individual and group needs. I wonder, too, how much further we can take this idea (and we undoubtedly have taken it further in the 25 years since the article's publication) before it extends beyond support and into a more active tool.
An interesting aspect of this paper, to me, was Er's mention of a DSS that might be tailored to one's decision-making style (as determined by a Myers-Briggs test). While the idea seems somewhat absurd or flaky, it does point towards the concept of technology designed around the needs of the user as opposed to some abstract population. However, in making decisions that are influential to people other than the user, would such specification truly prove helpful, or the contrary? Further, Er notes the need for the development of group DSS. How do we design a DSS that can account for the diverse styles and needs of a group coming to consensus? What does support mean in this context? Does it merely mean an interface for the organization of ideas, or one that may evaluate figures?
There are, in any problem, many factors that must be considered, some of which may not always be quantified or objectively assessed against one another. How can we produce a DSS that helps us weigh the options that can be actively analysed while not losing sight of those that cannot?

Wyatt