Posts Tagged ‘506’

Hedley’s AR

Thursday, February 21st, 2013

**a quick post because WordPress ate my last one**

Hedley’s piece on AR provides a clear and pretty interesting, if dated, look at augmented reality, evaluating the merits of different interface designs. Eleven years on, it is striking to see how far AR has come.

A quick look at Wikipedia shows a lot of different applications. While most of them are emblematic of everything that is weird about the economy these days, some piqued my interest as actually pretty valuable. One such thing was workplace apps. Wikipedia explains: “AR can help facilitate collaboration among distributed team members in a work force via conferences with real and virtual participants. AR tasks can include brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.”

While I could certainly put on my Critical GIS hat and problematize this on a number of grounds, I find it pretty exciting. I think that especially in a field like geography, the use of AR could make collaboration over space a lot more effective. Maybe I am drawn to it because it brings to mind my favorite geography term “reducing the friction of distance”; and that it does!

Wyatt

Spatial Scale Problems and Geostatistical Solutions

Thursday, February 14th, 2013

Atkinson and Tate make a good point. I only wish I could find it. Their extensive use of mathematics is daunting, but a necessary evil for understanding what goes on under the hood of ArcGIS. With no personal experience in the matter, a quick Google search suggested that variograms are closely tied to kriging (the variogram models the spatial autocorrelation that kriging then uses to interpolate), and that both require significant input from the user. Correct me if I’m wrong.

GIScience has managed to produce a slew of tools that produce right answers; that is to say, tools for which there is only one possible answer. The more complex processes, like the interpolation methods outlined by Atkinson and Tate, reveal that sometimes there can only be a best answer. At that point it becomes the responsibility of the user to justify their reasoning for choosing 10 lags instead of 5, and the analysis becomes case-specific.
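To make that judgment call concrete, here is a rough sketch of my own (not Atkinson and Tate’s; the data and the binning are invented) showing how the same synthetic values binned into 5 versus 10 lags yield different-looking empirical semivariograms:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 100, size=200)                     # 1-D sample locations
z = np.sin(x / 15.0) + rng.normal(0, 0.2, size=200)   # spatially correlated values

def empirical_semivariogram(coords, values, n_lags, max_dist):
    """Bin half the squared pairwise differences by separation distance."""
    d = np.abs(coords[:, None] - coords[None, :])        # pairwise distances
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2  # semivariance terms
    iu = np.triu_indices_from(d, k=1)                    # unique pairs only
    d, sq = d[iu], sq[iu]
    edges = np.linspace(0, max_dist, n_lags + 1)
    gamma = [sq[(d >= lo) & (d < hi)].mean()
             for lo, hi in zip(edges[:-1], edges[1:])]
    return edges[:-1] + np.diff(edges) / 2, np.array(gamma)

# Two binnings of identical data suggest different spatial structure,
# and it falls to the user to defend one choice over the other.
for n_lags in (5, 10):
    lags, gamma = empirical_semivariogram(x, z, n_lags, max_dist=50)
    print(n_lags, "lags:", np.round(gamma, 3))
```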

What makes me curious is: is there a right answer? Is it possible to create a set of parameters, possibly for an arbitrary set of scales, that would optimize the up-scaling and kriging process in all fields of use?
The paper was written in 2000, so there has been more than a decade for someone to answer the question and implement it in GIScience. As of 2013, there is still no single right answer, though there is a significant amount of mathematics to support whichever one you choose.

In an ideal world, if the research field dedicated to data mining and geographic knowledge discovery is successful, there may eventually be no need for interpolation, as it is replaced by an overwhelming wave of high-resolution, universal data sets.

AMac

Mining for spatial gold

Thursday, February 14th, 2013

Shekhar et al. describe spatial data mining—the process of finding notable patterns in spatial data—and outline models for doing so: detecting spatial outliers, deriving spatial co-location rules, and locating spatial clusters. The article is mostly informative, and the topic is so central to spatial analysis that it is difficult to separate spatial mining from the rest of GIS.

I find the notion of clustering particularly interesting, since it is perhaps the most visually oriented aspect of spatial mining, yet it is largely up for interpretation and/or dependent on the variability of clustering models. For instance, when we see a distribution of points on a map, subconsciously, we begin to see clusters, even if the data is “random.” This type of cognitive clustering is difficult, or even impossible, to model, and it might vary from person to person. The authors of this article list four categories of clustering algorithms (hierarchical, partitional, density-based, and grid-based), depending on the order and method of dividing the data. However, the authors fail to note the applications for the various algorithms. If we naively take these to be interchangeable, the results could differ tremendously, as sketched below. Moreover, if there are indeed patterns, then there is most likely a driving force behind those patterns. That force, not the clusters themselves, is the most important discovery in spatial mining, and so the modeling must be more stringent in its pursuit of accuracy.
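To illustrate, here is a hedged sketch of my own (invented points; scikit-learn assumed available) in which a partitional algorithm and a density-based one disagree sharply on the same data:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus scattered noise points
blobs = np.vstack([rng.normal((0, 0), 0.3, (50, 2)),
                   rng.normal((5, 5), 0.3, (50, 2))])
noise = rng.uniform(-2, 7, (30, 2))
points = np.vstack([blobs, noise])

kmeans_labels = KMeans(n_clusters=2, n_init=10).fit_predict(points)
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)

# k-means forces every noise point into a cluster; DBSCAN labels them -1
print("k-means clusters:", np.unique(kmeans_labels))
print("DBSCAN clusters :", np.unique(dbscan_labels))
```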

– JMonterey

Tipping the scale toward “science”

Thursday, February 14th, 2013

Marceau sums up issues pertaining to variability in scale, including scale dependence, scale domains, and scale thresholds. At the crux of the article is an illustration of “a shift in paradigm where entities, patterns and processes are considered as intrinsically linked to the particular scale at which they can be distinguished and defined” (Marceau 1999). The need in any science to be wary of the scale at which the given work is conducted or phenomenon observed is absolutely (and relatively) critical. Different phenomena occur at different scales, and significant inaccuracies arise if this is not accounted for.
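As a toy demonstration of scale dependence (my own, not Marceau’s; all numbers are invented), aggregating the same hypothetical point attributes to coarser and coarser grid cells changes the statistical answer:

```python
import numpy as np

rng = np.random.default_rng(7)
xy = rng.uniform(0, 100, (1000, 2))           # point locations
a = xy[:, 0] + rng.normal(0, 30, 1000)        # attribute A: shared trend + noise
b = xy[:, 0] + rng.normal(0, 30, 1000)        # attribute B: same trend, new noise

for cell in (5, 25, 50):                      # aggregation cell sizes
    idx = (xy[:, 0] // cell) * (100 // cell) + xy[:, 1] // cell
    means_a = [a[idx == i].mean() for i in np.unique(idx)]
    means_b = [b[idx == i].mean() for i in np.unique(idx)]
    r = np.corrcoef(means_a, means_b)[0, 1]
    print(f"cell size {cell:>2}: r = {r:.2f}")  # r strengthens as cells coarsen
```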

I have no qualms with most of Marceau’s article. However, I would like to address another little assertion the author makes in her conclusion: the shift in paradigm once more toward a “science of scale.” After our discussion a few weeks ago regarding rethinking GIS as a science, in addition to a tool, this struck me as particularly interesting. In its broadest sense, science is a body of rationally explained and testable knowledge. Understanding scale as a scientific field in this regard is difficult. I have no problem with comprehending and accepting scale as a basic property of science, but separating out scale as its own entity?

That said, accounting for all of the work involved in understanding thresholds and dependence and the role that a varying scale can play on the world is not trivial. I simply feel that whereas there are laws of physics, for instance, there is no singular body of accepted knowledge, as far as I know, surrounding scale, with the exception that scale is a property of a phenomenon that must be noted and maintained as much as possible.

– JMonterey

Wednesday, February 13th, 2013

In working on my final project, I picked up a copy of “How to Lie With Maps” by Mark Monmonier at the library. I haven’t gotten too far into the book, but its central idea, that maps are always more complex than they look on the outside, provides a useful starting point for the discussion of scale. The article by Atkinson and Tate provides an overview of some of the problems that scale brings up in our work, and proposes some ways that we may work with or around them. The question I would like to pose (as it seems that on the technical/data-collection side no large changes will help us solve the issue of variable scale any time soon) is how we may be accountable in our GIS work, specifically at a representational level, to problems of scaling.

To someone untrained in GIS, or unaccustomed to critical reading, a map is just a map, an abstraction of reality. For this type of viewer (and not only of maps, but I use this example because it is the simplest), how can we be transparent about what the image lacks or what data the image obscures? It is easy to lie with maps, and it is easy to choose an aggregation or classification that is advantageous to those invested in the project, but it is not so easy to make this clear to the uninformed viewer. So I ask, as I always do: Is being accountable to issues of scale in GIS possible? Is it desirable (and if so, when)?
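As a small illustration of that point (mine, not Monmonier’s; the values are invented), the same skewed data classed by equal intervals versus quantiles tells two very different stories on a choropleth:

```python
import numpy as np

rng = np.random.default_rng(1)
values = rng.lognormal(mean=2.0, sigma=0.8, size=100)  # skewed, like most rates

equal_breaks = np.linspace(values.min(), values.max(), 5)      # 4 classes
quantile_breaks = np.quantile(values, [0, 0.25, 0.5, 0.75, 1])

for name, breaks in [("equal interval", equal_breaks),
                     ("quantile", quantile_breaks)]:
    classes = np.digitize(values, breaks[1:-1])   # assign classes 0..3
    counts = np.bincount(classes, minlength=4)
    # Equal intervals put almost nobody in the alarming top class;
    # quantiles put a quarter of all units there. Same data, two maps.
    print(f"{name:>14}: units per class = {counts}")
```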

Wyatt

How to Transfer the Only Answer

Thursday, February 7th, 2013

“The primary purpose…is to define a common vocabulary that will allow inter-operability and minimize any problems with data integration.” Maybe I am misinterpreting the statement, in which case it would be beneficial to have an ontology for papers on ontology. From what I gather, ontology strives to describe data in a standardized, easily translatable manner. Would that not require culling the outlying definitions, or creating an entirely new definition to categorize them under? In that case, do we not lose the small nuances and differences? Why are those not as valuable as the opportunity to integrate?
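Here is a toy sketch of that trade-off (the terms and the mapping are entirely hypothetical): integration succeeds, but only by discarding the distinctions the source data encoded.

```python
# Map three local terms onto one shared concept: interoperable, but lossy.
shared_ontology = {
    "creek": "watercourse",
    "stream": "watercourse",
    "burn": "watercourse",    # Scottish usage; the nuance is now gone
}

local_features = [("Miller Creek", "creek"), ("Allt Mor", "burn")]
integrated = [(name, shared_ontology[term]) for name, term in local_features]
print(integrated)  # both features are just "watercourse" after integration
```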

This runs headlong, as a counter-argument, into the pro-integration sentiment of Academic Autocorrelation. It is the differences that GIS benefits from. Given our current methods of capturing data, and the sheer scale on which projects are now attempted, it is unlikely that one will ever capture the truth. Rather, data is a representation of the truth from the instant we perceive it. Our interface with our environment consists of no more than five senses, which, compared with those of other species, are rudimentary at best. Furthermore, it is surprisingly easy to replace reality with something that is not, though in that case it still is, according to the viewer, their reality. Thus, the broad range of subjectivity in interpretation is a beneficial burden.

If an ontology were to be imposed on our knowledge set, it would constrain our perception, limited as it is, and yet facilitate transfer across parties. If truth is sacrificed in favor of knowledge transfer, it is the responsibility of the individual to balance the two accordingly. Unless I am lost myself, in which case I look forward to further clarification.

AMac


Academic Autocorrelation

Thursday, February 7th, 2013

Nelson talks of the future challenges the incoming generation of spatial statisticians and analysts will face. One in particular is the dilution of geography’s influence over the trajectory of the field of spatial analysis. According to a survey of some 24 respondents, there is a risk of “training issues” if “spatial sciences are adopted by many groups and lack a core rooted in geography.” This is a very isolationist way of thinking. If a field is dominated entirely by one group of like-minded individuals, it is bound to hit a dead end.

A nondescript, ramshackle structure, military in mind, was constructed in Cambridge, Massachusetts during World War II. Its purpose was to develop and perfect radar, an instrument vital to the war effort. Once the war was over, the building had served its purpose and was slated for demolition. Tight for space, the Massachusetts Institute of Technology instead crammed a hodgepodge of disciplines into the structure. Before its demolition some 50 years later, it had come to be known as the “Magic Incubator.” Numerous technological advances stemmed from the building, many of which could not have been accomplished without work across multiple, previously unrelated disciplines.

Spatial analysis can gain from the weakening of geography’s grip on the subject, allowing different minds with different problems to use and adapt the tool as needed. Until then, spatial analysis will be on the path to innovation, with little invention branching off.

AMac

Spatial Ontologies

Wednesday, February 6th, 2013

Agarwal’s “Ontological considerations in GIScience” left me with a lot of questions. The article attempts to outline different conceptions of ontology (both strongly theoretical and technical). Ontology is most simply defined in the final paragraph of the paper as “a systematic study of what a conceptual or formalized model should encapsulate to represent reality”. However, how do we translate personal ontologies into more global technologies? The paper briefly questions what it means to produce an ontology including concepts with variable semantics, which may be vague or differently understood by geographers and those outside the domain. The fractures between disciplines point to the inefficacy of a top-down approach to producing ontologies. Agarwal is correct to question this paradigm, noting the benefits and disadvantages of its counterpart.
Agarwal’s discourse, however, seems still firmly couched in the academic context. What would it mean to create a bottom-up ontology from more participatory platforms? How might we make semantics less fuzzy in the case of non-professional conceptual knowledge? Is it possible, and more importantly, is it even desirable? At the risk of sounding like a broken record, I want again to interrogate the power dynamics that inform what becomes part of how we choose to represent reality. There are inherent cultural biases in what we choose to represent, and by maintaining a basis of reality defined by academics, we ignore ontologies that fall outside of dominant strains of thought.

Wyatt

Spatial Statistics: Producing a canon

Tuesday, February 5th, 2013

Nelson’s summary paper on spatial stats provided a solid framework for dominant strains of thought, both past and looking forward. One portion of the paper provided a list of important works on the subject with brief descriptions. While I found this to be something of a bizarre format for this sort of paper, I appreciate the question it raises of what might be considered canonical in technical literature. Unsurprisingly, there is discrepancy over which works different spatial statisticians deem most important as guides for newcomers. Nelson adds books that were, in the author’s view, overlooked (or not yet published at the time of the survey), revealing the author’s and reviewers’ own biases.
What I am circling around comes down to a critical question: how do we decide what is important, and who gets to decide? This is really what we were asking when trying to peg GIS as a tool or a science. Which aspect of GIS is most important (and critically, why)? While in spatial stats a basis of formulae and conceptual tools is necessary, where do we go from there? Once we are past the most essential technical aspects of a discipline, defining what is important becomes more subjective. In looking at this particular literature list (which is doubtless helpful to newcomers), I think it is important to question what it means to define what is to be remembered and what is to be forgotten.

Wyatt

Questioning the Possibility of Interoperability in Geovisualization

Friday, February 1st, 2013

MacEachren and Kraak’s paper on geovisualization provides a concise and critical look at the challenges facing geovisualization’s advancement, and how they might be overcome. One thing that stuck out to me in the article was the issue of interoperability and how its absence may hamper collaboration. This is briefly mentioned at a point discussing the challenges and potential of multidisciplinary research.
The question of interoperability is certainly not simple, and it is based in spatial and temporal contexts; however, it is important to interrogate how a lack of interoperability works in the interests of competition, both economic and academic, and how, in doing so, it may in fact impede progress in geovisualization toolmaking. By producing separate technologies with different access levels, interfaces, and availabilities, interested parties may be able to develop a competitive research edge or to gain funding. Given that university funding today is often highly competitive, the logic behind exclusivity (at least initially) is understandable. However, by keeping the cutting edge exclusive, you leave out many potential collaborators who might contribute to the geovisualization tool itself, or to its applications and theoretical development.
A question then becomes: how do we reconcile the inherently competitive nature of academia with the goals its projects purport to serve?

Wyatt

Race to the Bottom

Thursday, January 31st, 2013

GIS strives for something that seems near impossible: a blanket solution for a problem with more than one solution. Humans are fickle, subjective, and by and large ignorant in comparison to the communal wealth of knowledge at our fingertips. Thus people, in the world of geovisualization “the user,” are never going to be able to use just one form of representation. The variable, subject-based method is what everyone aims for, but unless the user is allowed to actively input parameters, be it consciously or unconsciously, we will end up with the same stale result every time.
Malleable representations can only come from organic production methods, which up to now, at least in the world of computer science and GIS, do not exist. Still, MacEachren and Kraak have a positive outlook on the field, either because they believe it is possible, or because it must be. At the very beginning they claim that an estimated 80% of all digital data include geospatial referencing, only to follow on later with the assertion that everything must exist in space; whether that is the case is still up for debate by string theorists. However, there must be a point of diminishing returns. How far must one go before the field of GIS is satisfied? At the rate we’re going, it won’t be until virtual environments reach the uncanny valley, or are able to surpass it. At that point, it won’t matter where things are located in space, as you’ll have a hard time stripping physical reality from data-driven fantasy.

AMac

Realized geovisualization goals

Thursday, January 31st, 2013

MacEachren and Kraak authored this article in 2000, a year before the release of Keyhole’s EarthViewer and five years before Google Earth. In the piece, the authors present the results of collaborations among teams of cartographers and their decisions on the next steps in geovisualization. They mention broad challenges pertaining to data storage, group-enabled technology, and human-based geovisualization. The aims are fairly clear, but there are very few, if any, actual solutions proposed by the authors.

While reading the article, I had to repeatedly remind myself that it was written a dozen years ago, when technologies were a bit more limited. Most notably, there appears to be a very clear top-down approach in the thinking here, very reminiscent of Web 1.0, where information was created by a specialized provider and consumed by the user. In the years since this piece was written, Web 2.0—stressing a sharing, collaborative, dynamic, and much more user-friendly paradigm—has largely eclipsed the Web as we understood it at the turn of the millennium. In turn, many of the challenges noted by MacEachren and Kraak have been addressed in various ways. For one, cloud storage and cheaper physical consumer storage have in large part solved the data storage issue. Additionally, Google has taken the driver’s seat in developing an integrated system of database creation and dynamic mapping, with Fusion Tables and KML, both extremely user-friendly. And applications and programs that enable group mapping and decision support are constantly being created and launched. MacEachren and Kraak did not offer concrete solutions, but the information technology community certainly has.
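As a rough illustration of that user-friendliness (the placemark name and coordinates are my own invention), a few lines of code are enough to emit a KML file that Google Earth can display:

```python
# A minimal, hand-rolled KML placemark; coordinates are lon,lat[,alt].
placemark = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Burnside Hall</name>
    <Point><coordinates>-73.5753,45.5048,0</coordinates></Point>
  </Placemark>
</kml>"""

with open("placemark.kml", "w") as f:
    f.write(placemark)  # drag the file into Google Earth to view it
```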

– JMonterey

Eye-tracking: the Good, the Bad, and the Uncertain

Thursday, January 31st, 2013

In a well-written and fascinating article, Poole and Ball summarize how eye-tracking technology works and how it is/can be applied in human-computer interaction. They broadly outline the technology behind eye-tracking devices, as well as the psychological interpretation of various eye movements.

Reading this piece, two key thoughts occurred to me. First, the psychology of eye movement ventures eerily close to mind reading, in the loosest sense. Or at least scientists and psychologists are attempting to interpret users’ thoughts on a minute and precise level. The accuracy of interpretation is currently debatable, but this appears to be a field of science that would open an enormous landscape of technological applications pertaining to how we see the world. Of course, this is both positive and negative. On the positive side, the authors here mention the use of eye tracking as a way to train autistic children to maintain eye contact during communication. However, on a more cynical level, once the technology is distributed commercially, how will people use it to exploit us?

My second thought relates to this last point. Reading this article in the context of understanding GIS, I wonder how eye tracking might be applied geographically. The simplest argument, as I see it, would be in decision support in planning, helping planners and designers situate objects in space to best capture the attention of their target. However, I believe a much more likely and, perhaps controversial, application would be in advertising. Tracking a user’s eye movements on a computer screen, for instance, could be a gigantic boon to advertisers looking to attract users’ attention.

– JMonterey

Poole & Ball stuck in one place?

Thursday, January 31st, 2013

Poole and Ball’s “Eye Tracking in Human-Computer Interaction and Usability Research: Current Status and Future Prospects” gives an introduction to eye tracking technology with a brief history of its uses and designs. For our purposes as geographers, it is useful to think about the ends to which this technology may be put, and how we can incorporate eye tracking into applications that are spatial in nature.
While the uses noted (user interaction with a website, text, or tool) mostly focus on a stationary user looking at something that is fixed in space, incorporating motion into eye tracking analyses may be very illuminating. I think specifically of analysis for urban planning that might incorporate universal design to make cities easier to navigate, more physically accessible, and more aesthetically appealing. By tracking where users look when moving through a set urban landscape, we could infer improvements such as the need for curb cuts or better street sign placement and, in more commercial interests, billboard and advertisement placement. The use of eye tracking might help planners make cities more easily navigable. One could also use this technology in augmented reality applications such as virtual tours of a given place, or in identifying points of interest.

One thing that I hoped the article would explore further was research methodology. It might be interesting to know how studies using eye tracking technology attempt to account for the inherent bias of a subject who knows they are being observed, or who knows the aims of a given project.

Wyatt

You Can Learn A Lot from a Pupil

Monday, January 28th, 2013

Poole and Ball provide readers with little to no knowledge of eye-movement tracking a brief overview of the techniques, equipment, and applications of the research. They, unfortunately, do not include a section devoted to geovisualization, which would make this an exercise in read-and-repeat. Or fortunately, in that it provides us with a broad spectrum for interpretation.
While eye-movement tracking has made major leaps from its original design, which included a metal coil affixed to the cornea, it still falls short. According to Poole and Ball, researchers still have not developed a standard interpretation of results. One example is the duration and frequency of fixations on a target. Depending on the situation, multiple longer durations are considered positive, in that subjects are more interested in the target, or negative, in that subjects take more time to encode the visual information. This does not mean the field lacks applications in GIS and geovisualization.
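As a sketch of how raw gaze samples become fixation metrics in the first place, here is a simple dispersion-threshold approach in the spirit of the I-DT family of algorithms; the thresholds and the sample stream are invented for illustration:

```python
import numpy as np

def detect_fixations(gaze, max_dispersion=50.0, min_samples=6):
    """Group consecutive gaze samples whose spatial spread stays small."""
    fixations, start = [], 0
    for end in range(1, len(gaze) + 1):
        window = gaze[start:end]
        spread = (window.max(0) - window.min(0)).sum()  # x-range + y-range
        if spread > max_dispersion:        # newest sample broke the window
            if end - 1 - start >= min_samples:
                fixations.append((start, end - 1, gaze[start:end - 1].mean(0)))
            start = end - 1
    if len(gaze) - start >= min_samples:   # flush the final window
        fixations.append((start, len(gaze), gaze[start:].mean(0)))
    return fixations  # (first sample, last sample, centroid) per fixation

# Fake 60 Hz samples: two ~0.5 s dwells at different screen spots (pixels)
rng = np.random.default_rng(3)
gaze = np.vstack([rng.normal((400, 300), 3, (30, 2)),
                  rng.normal((800, 500), 3, (30, 2))])
for s, e, c in detect_fixations(gaze):
    print(f"fixation of {(e - s) / 60:.2f} s near {c.round()}")
```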
As Poole points out, eye-movement tracking techniques can be used to substantiate claims of what may be visually appealing on a case-by-case basis. GIS serves as a way of conveying spatial data in the form of maps. If maps are responsible for the quick and easy conveyance of information, visually optimal maps may be developed with the help of eye-movement tracking. Whether or not the participant is interested in the topic is up to the researcher.

AMac

understanding SDSS in the age of Web 2.0

Friday, January 25th, 2013

P.J. Densham’s discussion of the possibility of effective spatial decision support systems gives a useful overview of the concepts in question. The article, however, is located in the time it was written, and in an age where GIS (at least as a tool) is moving from the domain of professional geographers to anyone with an internet connection, Densham’s arguments may have to be re-evaluated.
It is conceivable that in our current context (although I wish not to be too presumptuous, given my lack of knowledge on the subject) GIS and SDSS aren’t really such separate entities as they once were. Those applications which incorporate the principles of GIS (as science, tool, and toolmaking) can be used to support spatial decision making. The growth of user-generated content on the internet means that a new SDSS may be able to use this data (which will often have a spatial element) to produce decisions that are more, if you will, democratic. This is in fact exactly what is done in the Rinner article. The distinctions between GIS and (S)DSS noted by Densham are not so clear cut as they may have been at the time of writing.
As such, while Densham provides a useful background to the concepts that structure SDSS, his article must be read descriptively. It offers a springboard to things that are to come, and to things that are already happening, but it is dated and must be considered in our current context to be useful.

Wyatt

A clever Argooment

Friday, January 25th, 2013

Rinner et al. explore the capabilities of participatory GIS in a case study involving an application that uses geographic arguments in collaborative decision-making processes. The application, called ArgooMap, uses a combination of time-stamped thread conversation “mashed-up” with a map API (in this case Google Maps API), and appears to present significant benefits over decision-making without a GIS. The article is written clearly and effectively outlines first the theory/technology behind the process and then uses the Ryerson University case study to showcase the capabilities of the application.

Using the Google Maps API in conjunction with user-generated content (whether volunteered or not) poses nearly infinite possibilities in myriad fields. ArgooMap is particularly interesting in its ability to add an entire dimension to normal conversation. So much of what we say, especially when we are making decisions, has geographic ramifications. Many marketers and advertisers are trying, and in many ways succeeding, to parse our monitored conversations and extract geographic content to better target products. That is largely out of our hands, but normal conversation and decision-making is not. ArgooMap seems to implement the concept of cognitive maps, which drives the conversation in alternative directions. This rings especially true in the observation that participants referred to geographic content at varying scales depending on the presence of the visible map. If all interlocutors are seeing the same map simultaneously, they can refer to specific places or directions that previously existed only in the mind of the speaker.
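None of this is ArgooMap’s actual code, but here is a minimal sketch of the data structure the paper suggests to me: contributions anchored to coordinates and timestamps, so an argument can be replayed in place. All names, places, and values are invented.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class GeoComment:
    author: str
    text: str
    lat: float
    lon: float
    posted: datetime
    reply_to: Optional[int] = None   # index of the parent comment (threading)

thread = [
    GeoComment("alice", "This corner floods every spring.",
               43.658, -79.379, datetime(2013, 1, 25, 10, 2)),
    GeoComment("bob", "Agreed, and the storm drain sits uphill of it.",
               43.658, -79.379, datetime(2013, 1, 25, 10, 6), reply_to=0),
]

# A map client could drop one marker per distinct (lat, lon) and open the
# time-ordered sub-thread when the marker is clicked.
for c in sorted(thread, key=lambda c: c.posted):
    print(c.posted, f"({c.lat}, {c.lon})", c.author + ":", c.text)
```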

As an aside, it would be incredibly interesting to see Twitter, where users are constantly tweeting back and forth, implement a map similar to ArgooMap. Perhaps when programmers solve the geotagging puzzle…

– JMonterey

Is SDSS Geoweb’s ancestor?

Friday, January 25th, 2013

In an article from the 1980s, P.J. Densham outlines the concept of a Decision Support System (DSS), which aids the user in a decision-making process involving a number of complex parameters stored in a database. He posits that in many cases a Spatial Decision Support System (SDSS), which uses the basic framework of a DSS but adds a spatial component, would be quite helpful. He notes that an ideal SDSS would a) allow for spatial input, b) represent spatial relationships and structures, c) include geographical analysis, and d) provide spatial visualizations. This is different from GIS in that SDSS is dynamic, while GIS is more rigid.

The need for a dynamic geographic decision-making process is clear, and in that, Densham is completely correct. However, the problem with reading this article today is that GIS has in large part transformed away from its infant stage and toward Densham’s SDSS. More specifically, the Geoweb, rather than the more orthodox desktop client, incorporates many of the outlined SDSS properties. User-generated content allows for near-real-time data, and modern technology allows for rapid regeneration of content on a web page. In fact, it is interesting to read this article in conjunction with the Rinner et al. article, written roughly two decades later, about the use of user-generated content to structure a GIS. Another application is Google Maps’ traffic feature, showing roads as red (heavy traffic), yellow (moderate traffic), or green (little or no traffic). As users see this data, they decide, for instance, to choose the “greenest” path, but if enough people do so, the green path becomes the red path, and the red path eases. The data is thus dynamic, and the map adjusts accordingly.
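A toy feedback loop (not Google’s algorithm; the numbers are invented) makes that dynamic visible: if every driver reroutes, the loads simply swap, while partial rerouting converges toward balance.

```python
def simulate(shift_share, steps=6):
    """Two-route congestion: a share of the imbalance reroutes each step."""
    a, b = 900.0, 100.0                  # drivers on routes A and B
    for _ in range(steps):
        moved = shift_share * (a - b)    # drivers leaving the redder route
        a, b = a - moved, b + moved
        print(f"A: {a:6.1f}  B: {b:6.1f}")

simulate(shift_share=1.0)    # everyone switches: the loads flip-flop
print("---")
simulate(shift_share=0.25)   # partial rerouting settles toward balance
```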

-JMonterey

Thursday, January 24th, 2013

In reading M.C. Er’s 1988 article “Decision Support Systems: A Summary, Problems, and Future Trends,” I am left with the question of how the concept of DSS can produce technologies that are at once broad and specific, such that they can account for both individual and group needs. I wonder, too, how much further we can take this idea (and undoubtedly have taken it in the 25 years since the article’s publication) before it extends beyond support and into a more active tool.
An interesting aspect of this paper to me was Er’s mention of a DSS that might be tailored to one’s decision-making style (as determined by a Myers-Briggs test). While the idea seems somewhat absurd or flaky, it does point toward the concept of technology designed around the needs of the user, as opposed to some abstract population. However, in making decisions that affect people other than the user, would such specification truly prove helpful, or the contrary? Further, Er notes the need for the development of group DSS. How do we design a DSS that can account for the diverse styles and needs of a group coming to consensus? What does support mean in this context? Does it merely mean an interface for the organization of ideas, or one that may evaluate figures?
There are, in any problem, many factors that must be considered, some of which may not always be quantified or objectively assessed against one another. How can we produce a DSS that helps us weigh the options that can be actively analyzed while not losing sight of those that cannot?
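One common way a DSS handles the quantifiable side of that weighing is a simple weighted sum over criteria. Here is a minimal sketch (the sites, criteria, and weights are all invented), which deliberately leaves the unquantifiable factors outside the score:

```python
import numpy as np

criteria = ["cost", "accessibility", "flood risk"]
weights = np.array([0.5, 0.3, 0.2])   # sum to 1; chosen by the group

# Rows are candidate sites; columns are criteria scored 0-10 (higher is better)
scores = np.array([[7, 4, 9],   # site A
                   [5, 8, 6],   # site B
                   [9, 3, 2]])  # site C

ranking = scores @ weights
for site, total in zip("ABC", ranking):
    print(f"site {site}: {total:.1f}")
# The DSS ranks the quantifiable; the group still argues about the weights,
# and about everything the score leaves out.
```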

Wyatt

DSS to DMS

Thursday, January 24th, 2013

M.C. Er attempts to untangle data management from the impact of data use. In doing so, though, he attributes far more value to Decision Support Systems than they merit. The age of the paper (1988) may have something to do with this, but the idea that DSS will be able to support all levels of decision-making is excessive. Furthermore, the description of DSS makes it seem that it will eventually become autonomous. At least, that seems to be the goal. By that point, DSS will have to be relabeled DMS, for Decision Making System, since Er describes systems that can act on their own decisions. He even furthers the narrative by mentioning artificial intelligence in the concluding statements.
As for GIS, system or science, it still has not quite reached the point of making decisions in place of top management, and it has a fairly narrow spectrum of applications. If we were to use GIS as a case study of the success or failure of DSS, it would fall short of making decisions for management, but it is definitely useful in supplementing the knowledge set of the decision maker.
Then again, that is not to say that it has failed or succeeded as a DSS, in that the definition of a DSS is, according to Er, fluid and open to interpretation, considering the numerous attempts at classifying the field.
One assertion of the paper caught me off guard: “It is important to know that human decision makers generally do not make decisions based on the probability of success, because the penalty for a vital decision that turns out to be wrong is normally substantial.” If this were the case, how else would people make decisions? Gut feeling? If gut feelings do not take probabilistic guesses of success into account, then they are no better than random guesses. In that case, creating a DSS is easy. Unfortunately, I do not believe this is the case. The user is a vital source of information and decision-making along the way, and is unlikely to be stripped from the process.

AMac