Archive for the ‘506’ Category

How Critical is Critical GIS?

Thursday, February 21st, 2013

Critical GIS attempts to combine various types of critical human geography with methods and techniques based on geographic information systems. However, the field remains somewhat of a minority pursuit, since there is little evidence of critical geographers fully embracing GIS as a tool of their trade. O’Sullivan identifies seven major themes that have made critical GIS what it is today. One theme that was not discussed in great detail was “GIS and the human dimensions of global change”. I believe this area in particular has evolved tremendously since the article was written. Developments in communications technologies, especially in mobile communications, web applications, and digital media, have completely transformed the way humans access information and communicate. They have also been crucial in facilitating the availability of data, providing users with information that allows them to educate themselves in certain fields. One only has to look at the monumental rise in average web users who now use GIS tools in their everyday lives. Spatial applications such as Google Earth have expanded the field from specialists to the everyday person, who may use the application for any sort of spatial task. Thus, even though critical GIS is a relatively new field (1997), the evolution of complex technological resources has opened up many new opportunities for research within it.

-Victor Manuel

A Critique of the Critics

Thursday, February 21st, 2013

O’Sullivan’s critique of the critics of GIS is a good summary of the position that many people hold on the role of GIS in social sciences. The author touches on three items from a research agenda on “GIS and society,” namely the relevance of GIS in grassroots movements, GIS from a feminist perspective, and privacy issues inherent in data collection. Although there is justification for omitting discussions on the remaining four themes from Initiative 19, it would have been interesting to learn about other ways in which people are criticizing—oftentimes constructively—GIS’s role in society.

I am particularly interested in the theme of PGIS (participatory GIS), in part because I am researching VGI (volunteered geographic information) for this course. Beyond the ethical and accuracy concerns—of which, I do not deny, there are many—I fail to see how PGIS might be critiqued in a social context. In fact, if the primary concern about the use of GIS in social contexts is the assertion of power through methods of visualization, then surely a way to collect and visualize information generated by the public stands in complete contrast to this fear of authorial bias. Furthermore, if PGIS is largely volunteered (VGI), then ethical concerns are diminished, and if the data is verified via an objective algorithm, then the accuracy concerns are also moot. PGIS is, perhaps, the most useful method of real-time data collection possible, and it should be utilized as much as possible. As O’Sullivan notes, it is a way to empower citizens, to give them an equal voice, and I agree completely.

– JMonterey

Augment your Reality

Thursday, February 21st, 2013

Azuma provides a great synopsis of some of the advances in augmented reality (AR) as of 2001, and raises some important points about future work that will be needed in the field. Augmented reality often brings to mind science-fiction-style technology. The reality, however, is that advances in AR mean widespread consumer products are not far away. Indeed, many of the devices we now take for granted, especially smartphones, make use of AR technologies. Certain apps, such as Google Sky Map, use the phone’s magnetometer to project a view of the stars and planets in the sky based on where the phone is pointing. Other apps that come to mind include Wikitude, which can overlay relevant geographic information based on where the phone’s camera is facing.

One exciting portion of the field concerns the use of head-worn displays (HWDs). Azuma provides a great overview of the state of the field in 2001. However, as technology has evolved, some of the concerns and limitations have been resolved. Exciting technology, such as the new Google Glass, holds the potential to take AR to the next level. Problems such as size and weight have been addressed: these new “glasses” are extremely light, and really not that bad looking. In addition, with a projected cost of less than $1500, they are not that far off from becoming mainstream. One thing to keep in mind, however, is that these new technologies are far from perfect. Google Glass depends on wifi or a Bluetooth connection to a mobile phone, which is not exactly the epitome of mobility. Still, it is a product that is one step closer to making augmented reality technology…well, a reality for the everyday person.

-Victor Manuel

 

Augmented Questions

Thursday, February 21st, 2013

Azuma et al. “update” (in 2001) the reader on advances, problems, and applications of augmented reality. Their intended audience appears to already be aware of the basics of registering objects and placing people in visible artificial environments. In contrast, the article we read a few weeks ago on eye-tracking technology explained seemingly advanced technological notions to the layperson much more nicely. Still, if the article’s purpose is to discuss AR from a multi-faceted perspective, discussing issues pertaining to the user, the augmented objects, and the environment, then the authors accomplished this well enough.

As someone with little to no experience with, or background knowledge of, augmented reality, I am concerned more with possible applications of the technology than with the technical side of things. Still, as someone approaching this article from a GIS-based perspective, I am intrigued by notions like georegistering and dynamic augmented reality. I’m sure the technology has advanced leaps and bounds in the past 12 years, including AR applications on smart phones that solve many of the weight and cost issues. I’m curious how AR is able to take an unprogrammed environment and situate its device so accurately within that space. Surely GPS is involved, as are internal sensors that collect aspect information, but beyond that, I am more intrigued and curious than critical.

– JMonterey

Augmented Reality…?

Thursday, February 21st, 2013

It seems like augmented reality was less of a reality and more of a dream for science-fiction enthusiasts in decades past, and now we are living in an era where this could become a reality. What maybe wasn’t anticipated was the ubiquity of different types of hardware, such as smartphones. The authors mention that social acceptance issues play a role, and I think the major issue here could be privacy. Since smartphones are such an available platform, but also could contain a lot of tracking information, it will take time for the ethical privacy concerns to evolve to mesh with the technology.

-Kathryn

Critical GIS

Thursday, February 21st, 2013

I found the paper by O’Sullivan very intriguing. I was completely unaware that GIS research is going on in some of the directions mentioned in the paper. I found the section ‘Gendering of GIS’ particularly interesting. In India, there is a lot of work going on in women’s empowerment, and it will be very interesting to see whether someone can use similar systems there given the limited penetration of the internet.

Privacy and ethics are another part of GIS that requires a lot of research. As more and more applications take the user’s location as a principal component, it is becoming very important to come up with standards for privacy protection. With the number of PPGIS applications increasing, a great number of people from society are contributing to the task of collecting geographic data. Though this means that GIS is gaining wider acceptance in society, it remains a challenge to release this data while striking a balance between accountability and privacy.

-Dipto Sarkar

 

“The World is Not So Easily Mapped Anymore”

Wednesday, February 20th, 2013

Lake’s article discusses the gap between the ascendancy of GIS within geography and planning and the critique of positivism within geography. Though GIS practitioners have not ignored issues related to ethics, privacy, accuracy of data, and using appropriate methods to avoid “evil outcomes”, Lake argues that this serves only as “internal correctiveness” that does not remedy the positivist assumptions underlying GIS (the S as system). While I agree that GIS has the ability to support the status quo by burying decisions and reinforcing power relations behind a technological wall of algorithms, I think it is equally important to be able to build these models so we can talk about the data and the choices made to produce the map. At the very least, we have a tangible artifact that can elicit a conversation between GIS practitioners and social theorists to challenge the interwoven dominant norms.

In traditional GI systems, the practitioner can easily become disconnected from the individuals who make up the data points. This separation implies an external vantage point, a god’s-eye view that feminist geographers have critiqued because we cannot disassociate ourselves from the world we live in. Perhaps one alternative for the practitioner to reconnect with the analysis, the people, and the social relations involved is through technologies such as AR, where the user is physically situated at the location of study. This can provide an egocentric view that was not previously available (though there are critiques of the virtual information superimposed in AR systems as well…). What methods are available to better integrate qualitative data within GIS? If this can be done, then observations can keep their subjectivity to a certain extent within the model, compared to a purely quantitative categorization of objectified “others”.

-tranv

Better Algorithms or Better Computers?

Wednesday, February 20th, 2013

Anyone facing a dilemma over the above question should see the following video from 44:20 onward:

Data Structures and Algorithms – Richard Buckland on YouTube

Actually, the entire video is very useful for anyone interested in understanding the basics of algorithmic complexity.

– Dipto Sarkar

The Devil’s Software

Tuesday, February 19th, 2013

Robert Lake levels a scathing postpositivist criticism of GIS, as he sees it in the early 1990s, as being fundamentally ethically flawed.  His major ethical sticking point is that the underlying positivist data and analysis models of GIS by necessity objectify the subjects of research and are unable “to comprehend and respect the subjective differences among the individuals who constitute the irreducible data points at the base of the GIS edifice.”  Lake makes a note of prioritizing these deontological concerns over the consequentialist ethical considerations of the field of GIS and planning/applied geography more broadly.  Thus, the “internal correctives” to adopt ethical codes among GIS practitioners is meaningless to Lake, who calls it “insufficient on ethical grounds if it focuses exclusively on the ends to the exclusion of the means”.

Lake’s argument is grounded in the more general postmodern imperative, which holds that positivist (or even quantitative) tools such as GIS must be eliminated, or at the very least, reconstituted from the ground up.  Not even the most benevolent use of GIS can be tolerated, because of the deontological abhorrence—the mortal sin—of representing people as undifferentiated digital objects without any consideration of positionality or subjectivity.  Lake hammers at this singular point repeatedly throughout the article.

I concede that it is vitally important to identify and account for the assumptions and values implicit in any analytical tool, including GIS.  That being said, I find Lake’s argument overly alarmist, generally unconvincing, and even potentially harmful for several reasons.  First, the argument’s unyielding deontological edicts ignore potential applications of GIS in purely physical domains of geography; and even in human geography, there is nothing that precludes researchers from tying back the results of GIS analysis to more qualitative and critical considerations.  Lake dismisses the potential of participatory GIS—an incredibly pertinent and empowering field today—as capable of nothing greater than token consultation.  Second, taken as a wider postmodern assault on abstracted, quantitative record-keeping, Lake’s argument quickly becomes dubious and unwieldy: should we also tear down the subject-object dualistic institutions of paper filing systems and library catalogues?  Finally, I believe that Lake’s appeal for geographers to refrain from adopting GIS in their research is a serious mistake.  Though harping on the deontological ethics may convince academics to avoid GIS applications, it certainly won’t dissuade the militaries, governments and corporations who are waiting in the wings to use GIS for unequivocally evil ends, to say nothing of GIS’s problematic means.  Without a critical and constructive GIScience approach that actually engages with GIS instead of talking past it, there will be nothing standing in the way of these other potentially oppressive actors—now there’s a consequentialist argument worth heeding!

-FischbobGeo

Augmenting the Potential of Participatory GIS

Tuesday, February 19th, 2013

Hedley et al.’s 2002 article highlights the state of the art in augmented reality (AR) applications in geovisualization and multi-party collaboration.  Emphasizing a multidisciplinary approach encompassing computer science, human-computer interaction and geovisualization, the authors describe their 3D AR PRISM interface and its successor, the GI2VIZ interface.  Important features of the interfaces they design are representation of multi-attribute data, “support for multiple views” (i.e. ego- and exocentric), “the use of real tools and natural interaction metaphors”, and “support for a shared workspace”.  The interfaces take advantage of three different levels of geovisualization: the physical object, the AR object, and even an immersive virtual reality environment complete with avatars.   This allows an arbitrary number of people to interact with an incredibly rich 3D map environment to analyze information and make decisions.  Sort of reminds me of that 3D GIS in James Cameron’s Avatar!

While the antagonist-allegorical colonists of Avatar were certainly not using it as such, this type of technology holds great promise for participatory GIS applications, particularly in urban planning.  Most design charettes today consist of a group of citizens at a table with a paper map, using markers to draw abstractions of desired features and conditions.  With some additions and modifications to the GI2VIZ feature set, it isn’t difficult to imagine citizens being able to collaboratively place markers to represent buildings, roads, landmarks, paths, public spaces and natural areas over a 3D terrain model during the participatory development of a site or neighbourhood plan.  Then, with the help of an engine capable of procedurally generating architectural features and other details, citizens could take a virtual walk through the environment they’ve just designed.  This is just one example of how this technology could produce benefits in planning and participatory GIS applications.

Seeing what the leading edge of technology was in 2002 makes me very excited about the prospects of AR for collaborative geovisualization a decade later.  With Microsoft Surface and Google Glass hardware coming down the pipeline, it is certainly conceivable that we may soon be seeing group AR workspaces become an integral part of GIS practice.  The key challenge in widespread adoption will be in crafting a user interface that rivals the power, comprehensiveness and simplicity of the good old keyboard, mouse and context menus we’re all used to.  Surmount this, and the possibilities are endless.

-FischbobGeo

The near future of Augmented Reality

Monday, February 18th, 2013

After reading the paper by Azuma et al., I am convinced that augmented reality systems of the kind shown in science fiction movies are not far off. However, I think the first commercial applications of augmented reality will use mobile phones as the primary device. Mobile phones are already equipped with a range of sensors like GPS, electronic compass, accelerometer, and camera, which can be used to take measurements of the environment. This is already leveraged by applications such as Google Goggles, and only slight improvements would make the system real-time, thus qualifying it as an augmented reality system under the definition given by Azuma et al. I also feel that acceptance of these applications will be higher because they do not require clunky wearable computers.

Another thought that came to my mind is the use of ubiquitous computing for augmented-reality applications. Instead of putting all the responsibility for sensing the environment, doing calculations, and displaying results on a single device, it might be useful to distribute some of these tasks to smaller specialized units present (or planted) in the physical environment of the user. When a user comes into proximity of these computers, the device they are carrying may simply fetch the data and display it after doing some minimal calculations.

-Dipto Sarkar

 

Scale: Youtube videos – National Council for Geographic Education

Saturday, February 16th, 2013

Here is a video link explaining scale from Youtube:

Hope you all enjoy the awkward scale guy!

C_N_Cycles

Yes, mining for spatial gold!

Friday, February 15th, 2013

I appreciate the title of JMonterey’s blog! Spatial data mining, as described in the article by Shekhar et al., seems exactly like extracting precious resources out of the ground, or out of a seemingly ‘homogeneous’ set of data. The article gives great examples of the type of ‘gold’ we can get from data mining processes and its applications (e.g., crime, safety, floods…). The article reminds me that the techniques and methods of data mining depend on the application and on the type of information the researcher is looking for. Recall that the data mining example in the article starts from a question about bird-nesting habits. This implies choosing the right set of data for our question and establishing where we are going to find the information we are looking for.

I’m left with the question of time… The ‘gold’, or outliers, are spatial objects with non-spatial characteristics that differ from their neighbors’. But what is a neighbor? I wonder where the notion of time fits into data mining models, because two spatial neighbors could have a very distant relationship if we consider time, processes, change, and interactions (recall the notion of absolute versus relative space in Marceau’s article on the problem of scale).
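The neighbour-comparison idea behind spatial outliers can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the Shekhar et al. article: the transect values, the neighbour lists, and the two-standard-deviation threshold are all invented for the example.

```python
# Neighbourhood-based spatial outlier detection: a point is flagged when
# its attribute value differs from the mean of its spatial neighbours'
# values by much more than is typical across the whole dataset.

def spatial_outliers(values, neighbours, z_thresh=2.0):
    """values: dict point -> attribute; neighbours: dict point -> list of points."""
    diffs = {}
    for p, v in values.items():
        nbr_vals = [values[n] for n in neighbours[p]]
        diffs[p] = v - sum(nbr_vals) / len(nbr_vals)
    mean = sum(diffs.values()) / len(diffs)
    std = (sum((d - mean) ** 2 for d in diffs.values()) / len(diffs)) ** 0.5
    return [p for p, d in diffs.items() if abs(d - mean) > z_thresh * std]

# Nine points along a transect; "c" is anomalous relative to its neighbours.
values = {"a": 10, "b": 11, "c": 50, "d": 12, "e": 10,
          "f": 11, "g": 9, "h": 10, "i": 12}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"],
              "e": ["d", "f"], "f": ["e", "g"], "g": ["f", "h"],
              "h": ["g", "i"], "i": ["h"]}
print(spatial_outliers(values, neighbours))  # → ['c']
```

A temporal extension, in the spirit of the question above, might define `neighbours` not only by distance but by co-occurrence in time, so that two spatially adjacent points observed decades apart are no longer compared.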

S_Ram

Scale

Friday, February 15th, 2013

I found myself more interested in what you guys had to say about scale than in the texts! Or at least more inspired…
One really interesting point is that we have fancy techniques for choosing the right scale to study a problem, but in the end the scale problem persists. Here again, I think this is a good scientific problem, because it forces the researcher to question and analyze the tools being used in defining the research questions and methods. This is why I disagree with Point McPolygon’s point at the end of his blog that we should develop the technology further rather than attempting to define the right scale of study. I think that revealing scale issues is part of the science, and that it is important to study them instead of merely relying on the technology, which in any case is subject to human development and thus to humans’ understanding of the scale issue…
Wyatt asks if we can be accountable for issues of scale. Maybe questioning the appropriate scale to use, and the appropriate way to transfer information across scales, reveals a problem that we might not completely solve; but being transparent about the process is a step towards accountability and not lying with the map.
The example of adapting to climate change brought up by Victor Manuel is really interesting from my point of view. Different strategies are implemented at different scales of governance (country, region, municipality, community), using different types of information, or different ‘granularities’ of information. On top of this, dynamics occur between the scales of governance and thus between the types of information. Defining the appropriate scale of information would depend on the scale of governance you are studying, although the most important issue in this case is probably transferring information from one scale to another. This is a very difficult task because of issues such as the MAUP mentioned in the texts, but also because of the meanings that the different scales of governance give to geospatial areas, entities, concepts, and processes.

S_Ram

Lost in the Data

Friday, February 15th, 2013

Guo and Mennis outline the emerging field of spatial data mining for the introduction to a special journal issue.  Work in this field has been prompted by the ever-increasing availability of finer and finer grained data, from an exponentially increasing number of sensors ranging from satellites to cell phones to surveillance cameras.  There has been a great interest in tapping into and making sense of these streams of “big data”, but in order to do so we must develop new ways of exploring, processing and analyzing them.  This is essentially what was alluded to in relation to geostatistics last week: our data and technology have surged way ahead of our available methodological toolbox.

One of the biggest issues with these new data sources is that they are largely unstructured: for example, they may just be a string of text such as a tweet, with some locational metadata.  In order to analyze a large number of unstructured data points, it is necessary to impose a structure via classification.  This is no easy task!  Although computer programs such as qualitative coding software packages exist and can group phrases by theme, most classification algorithms that exist necessitate a training process where the user tweaks the parameters of the classifier manually on a subset of the data.  The development of foolproof unsupervised classifiers that can not only sort unstructured data effectively, but also do so in a way that the output is of use to researchers, is a major challenge in this domain.
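As a concrete contrast to the supervised training described above, an unsupervised method such as k-means needs no labelled subset at all: it imposes structure by alternating assignment and averaging. This is a deliberately minimal sketch; the point coordinates are invented, and a real application would use, say, the locations attached to tweets.

```python
# Minimal k-means: an unsupervised classifier that groups 2-D locations
# into k clusters with no training labels, and no manual tuning beyond
# the choice of k itself.

def kmeans(points, k, iters=50):
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x, y in points:
            # assign each point to its nearest centroid (squared distance)
            j = min(range(k),
                    key=lambda j: (x - centroids[j][0]) ** 2 + (y - centroids[j][1]) ** 2)
            groups[j].append((x, y))
        # recompute each centroid as the mean of its assigned points
        centroids = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                     if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids, groups

# Two obvious spatial clusters of geotagged observations
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, groups = kmeans(pts, 2)
print([len(g) for g in groups])  # → [3, 3]
```

Real unsupervised classification of unstructured text is much harder than this, of course: clustering word counts still requires choosing k and a representation, which is exactly where the “foolproof” part breaks down.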

A key related idea to the advent of big data is the long-standing trade-offs between resolution and extent in spatial scale.  Though big data presents us with both extent and resolution of unprecedented magnitude, there still remain the limits imposed by humans’ own cognitive abilities.  Computer programs developed to make sense of big data must classify and generalize the raw input data in a way that allows geographers to effectively navigate this sea of data, rather than simply leaving us lost.

-FischbobGeo

Mining for spatial ingenuity

Friday, February 15th, 2013

The article “Spatial Data Mining” by Shashi Shekhar explains what data mining is and how it has made great strides across various categories such as location prediction, spatial outlier detection, co-location mining, and clustering.  Data mining is the process of finding meaningful patterns or information in a large data set that would otherwise have been imperceptible. This can be done in many ways, using statistical tools, modeling, or a combination of both. The modeling usually builds the model on a training set of data and then applies it to a testing set. One of the classic challenges of spatial data mining is to take into account spatial autocorrelation and spatial heterogeneity during this process.
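The training/testing procedure described above can be made concrete with a toy location-prediction model. This sketch is not from the Shekhar article; the 1-nearest-neighbour rule, the coordinates, and the “wetland”/“upland” labels are invented for illustration.

```python
# Build a model from a training set (here, memorised labelled locations
# with a 1-nearest-neighbour rule), then evaluate it on a held-out
# testing set. Note that this simple model ignores spatial
# autocorrelation structure, one of the challenges the article raises.

def predict(train, x, y):
    """train: list of ((x, y), label); return the label of the nearest training point."""
    _, label = min(train, key=lambda item: (item[0][0] - x) ** 2 + (item[0][1] - y) ** 2)
    return label

train = [((0, 0), "wetland"), ((1, 1), "wetland"),
         ((9, 9), "upland"), ((10, 10), "upland")]
test = [((0.5, 0.5), "wetland"), ((9.5, 9.5), "upland"), ((2, 2), "wetland")]

correct = sum(predict(train, x, y) == label for (x, y), label in test)
print(correct / len(test))  # → 1.0
```

On real data the held-out accuracy would of course be below 1.0; the point is only the separation between the data used to build the model and the data used to judge it.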

 

The main unsolved problems of spatial data mining lie in the intersection of geospatial data and information technology. As GIScientists, this relates directly to many of the other subjects that we study. As is often the case in more modern applications of this science, we are limited more in methodology than by the technology itself. The advantage to a concept such as this is that patterns may emerge that nobody had previously considered, as opposed to doing statistical tests on hypothesized meanings of datasets. This technology opens a whole new world of possible advances in knowledge, both relating to GIScience and otherwise.

 

Pointy McPolygon

 

The article every undergraduate geographer needs to read

Friday, February 15th, 2013

As a geographer, I find Danielle Marceau’s article “The scale issue in social and natural sciences” easily digestible. Familiar concepts such as the modifiable areal unit problem (MAUP) are presented in a very clear manner. The article focuses mainly on the effects of scale and aggregation on spatial inferences, and on linking spatial patterns and processes that occur across different scales.

 

Predicting and controlling for the MAUP can be very difficult, as the author points out. New technologies may help us address this problem through advanced data acquisition and analysis; however, even though these technologies exist, conducting such a study would be nearly impossible. So many processes are connected across varying scales that when you make statistical inferences about specific phenomena, those inferences surely cannot account for all of them. We may use GIS to create maps at multiple scales, run statistical tests, and analyze the appropriate scales; however, even in the creation of these ‘test scales’ there is inherent bias, in that we assume we know the limits of the scales of these processes.
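The zoning and scale effects of the MAUP can be seen even in a one-dimensional toy example. The numbers here are invented: eight observations along a transect, aggregated into zones of different sizes and offsets.

```python
# MAUP in miniature: the same eight observations, aggregated under
# different zonings, yield very different zone means, so any inference
# drawn from the aggregates depends on the arbitrary choice of units.

values = [1, 1, 9, 9, 1, 1, 9, 9]  # raw observations along a transect

def zone_means(vals, size, offset=0):
    """Mean of each complete zone of `size` observations, starting at `offset`."""
    vals = vals[offset:]
    return [sum(vals[i:i + size]) / size for i in range(0, len(vals) - size + 1, size)]

print(zone_means(values, 2))     # → [1.0, 9.0, 1.0, 9.0]  strong contrast
print(zone_means(values, 2, 1))  # → [5.0, 5.0, 5.0]  same scale, shifted zones: flat
print(zone_means(values, 4))     # → [5.0, 5.0]  coarser scale: contrast gone
```

Shifting the zone boundaries by one observation (the zoning effect) or doubling the zone size (the scale effect) each erases the pattern entirely, which is precisely the inherent bias in choosing ‘test scales’ described above.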

 

Though technology has advanced, I believe this comes down to a philosophical debate about science and about space: can we attempt to identify every exchange between processes across scales, or do we simply attempt to understand using what seems to be the most intuitive and apparent scale? We may be able to use technology to improve the accuracy of our models, but only to a certain point. At this point, perhaps efforts would be better spent improving the processing capacity of the technology itself, rather than attempting to find the appropriate scale for phenomena that, in the end, we don’t even know is correct.

 

Point McPolygon

 

Scale

Thursday, February 14th, 2013

There is no doubt that scale plays a large role in the way data is interpreted. The article by Atkinson and Tate provides a good overview of scales of measurement, scales of spatial variation, and the issues inherent to spatial data. However, if we draw from Kathryn’s seminar on spatial statistics and recognize that large-scale spatial processes impact smaller-scale processes and their patterns, I am still unclear how rescaling data top-down or bottom-up, or applying geostatistical techniques, can quantify the effects of large-scale processes on smaller-scale ones.

An interesting suggestion the authors cite from Milne (1991) is that, to understand heterogeneity, one should conduct analysis across a wide range of measurement scales and extract the parameters that remain consistent across changes in scale. Though an expensive and time-consuming task, if this could be done it seems like a great way to isolate those features, focus on the parameters that are scale-sensitive, and determine the appropriate scale for analysis. Also, can the parameters that are robust to change tell the researcher something about the study at hand? With the intensification of geospatial data and ever-larger datasets, developing the tools to better integrate multi-scale datasets for a more comprehensive evaluation is a mountainous task. This is a tough GIScience topic with no easy answer, but it is crucial to recognize that the scale of measurement we choose, and the changes to data variability once we rescale, can greatly affect the final results of an analysis.

-tranv

Spatial Data Mining

Thursday, February 14th, 2013

Spatial data mining relies on the geographic attributes of the data to uncover spatial relationships within a dataset, leading to knowledge discovery.  No doubt, when implemented successfully it contributes to developing spatial theories and to geographic research. It’s a science!

The authors contend that the increasing contribution of volunteered geographic information, along with GPS and LBS technology, provides new research directions for the field. Though there is an abundance of spatial data, and borrowing from Beth’s theme of critical GIS, we have to be mindful that those who can and do contribute are not representative of the entire population. Bias is inherent in obtaining data from these sources: certain groups of people are known to contribute more, certain locales are headlined more often, and a large group of individuals are disenfranchised because they do not have access to such technologies and are completely sidelined. The authors concern themselves with developing the right questions and methodologies to solicit the answers, but is the data appropriate for answering the questions being asked?

Though there are several techniques for beginning the spatial data mining process, how does the user decide which technique is appropriate for their analysis? Given that the end user may not be well versed in spatial data mining and the nuances of its different techniques, different rule sets and classifications will generate different results. Since spatial data mining is a multidisciplinary field, who should ultimately be responsible for teaching its theories and methodologies?

-tranv

Mining for spatial gold

Thursday, February 14th, 2013

Shekhar et al. describe spatial data mining—the process of finding notable patterns in spatial data—and outline models for doing so, including detecting spatial outliers, deriving spatial co-location rules, and locating spatial clusters. The article is mostly informative, and the topic is so central to spatial analysis that it is difficult to separate spatial mining from the rest of GIS.

I find the notion of clustering particularly interesting, since it is perhaps the most visually oriented aspect of spatial mining, yet it is largely open to interpretation and/or dependent on the variability of clustering models. For instance, when we see a distribution of points on a map, we subconsciously begin to see clusters, even if the data is “random.” This type of cognitive clustering is difficult, or even impossible, to model, and it may vary from person to person. The authors list four categories of clustering algorithms, including hierarchical, partitional, density-based, and grid-based models, depending on the order and method of dividing the data. However, they fail to note the applications for the various algorithms. If we are thus to naively understand these to be interchangeable, the results could differ tremendously. Moreover, if there are indeed patterns, then there is most likely a driving force behind those patterns. That force, not the clusters themselves, is the most important discovery in spatial mining, so the modeling must be more stringent in its pursuit of accuracy.
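The worry about interchangeability is easy to demonstrate. Below, a grid-based and a density-based method (both deliberately naive sketches invented for this example, not the article’s algorithms) cluster the same four points differently, because a tight group happens to straddle a grid-cell boundary.

```python
from collections import defaultdict

# A tight group of three points straddles the grid line at x = 1,
# plus one distant point.
points = [(0.9, 0.5), (1.1, 0.5), (1.0, 0.6), (5.0, 5.0)]

def grid_clusters(pts, cell=1.0):
    """Grid-based: bucket points by the fixed cell they fall in."""
    cells = defaultdict(list)
    for x, y in pts:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    return list(cells.values())

def density_clusters(pts, eps=0.5):
    """Density-based: merge a point into every existing cluster within eps."""
    clusters = []
    for p in pts:
        near = [c for c in clusters
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2 for q in c)]
        merged = [p] + [q for c in near for q in c]
        clusters = [c for c in clusters if c not in near] + [merged]
    return clusters

print(len(grid_clusters(points)))     # → 3: the tight group is split at x = 1
print(len(density_clusters(points)))  # → 2: the tight group stays together
```

Treating the two as interchangeable would thus change the cluster count on even this four-point dataset, which is the “results could differ tremendously” concern in miniature.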

– JMonterey