Archive for the ‘General’ Category

GIS: Tool or Science? Why not both?

Tuesday, September 15th, 2015

Is GIS a tool or a science? Wright (1997) and many other academics seem to be of the opinion that GIS must be considered either as just the computer software we use to analyse and produce spatial data, or as the analysis of fundamental issues raised by its use. But why does it need to be one or the other? While it is true that anyone who wants to properly collect, analyse, or create spatial data should be aware of the uncertainty and error inherent in their results, I don’t believe an understanding of the fundamental issues surrounding GIS is required to produce meaningful results with a high degree of certainty in more basic instances of spatial analysis. Since its use has become so widespread, much of the GIS done today is rudimentary and can be done without an academic background in the field. Scientists make up a small fraction of the users of GIS software, and while their work with GIS can definitely be called GIScience, I’m reluctant to call something as simple and infallible as running a buffer function on some points “GIScience”, because it’s not science. Using GIS for greater purposes such as large-scale planning, research and development, on the other hand, does require a good awareness of the fundamental issues surrounding whatever it is you are doing.
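To illustrate how mechanical the "buffer function on some points" operation really is, here is a minimal pure-Python sketch (no GIS package at all; the point coordinates, the `wells` name, and the 100-unit radius are all invented for illustration, and planar coordinates are assumed):

```python
import math

def within_buffer(points, location, radius):
    """Return True if `location` falls inside a buffer of `radius`
    drawn around any of the input `points` (planar coordinates)."""
    return any(math.hypot(location[0] - x, location[1] - y) <= radius
               for (x, y) in points)

# Hypothetical sample points and a 100-unit buffer
wells = [(0.0, 0.0), (250.0, 40.0)]
print(within_buffer(wells, (60.0, 80.0), 100.0))    # → True
print(within_buffer(wells, (200.0, 200.0), 100.0))  # → False
```

The whole operation reduces to a distance comparison, which is perhaps why it is hard to call it "science" on its own.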

-yee

Concerns for GIScience Brought up in Goodchild (1992) Still Relevant

Tuesday, September 15th, 2015

Goodchild wrote his 1992 article Geographical Information Science at a time when GIS was still relatively new and undeveloped as an academic field. Despite this, he manages to pinpoint several problems in GIScience which have remained unsolved or unaddressed over the decades. Of course, many of the issues that make up the topics of discourse among GIScientists are inherent in spatial data collection and analysis and simply cannot be resolved, due to a process Goodchild refers to as discretization. Discretization is the generalization of data such that it can be recorded and reproduced. Considering that most spatial data is approximated and cannot be recorded with 100% accuracy and precision, it is good practice to always consider the factors affecting spatial data uncertainty.

Goodchild mentions issues that can be and have been resolved, such as the need for better frameworks for geographical data modelling, better integration of GIS and spatial analysis, a taxonomy of spatial analysis, and easier means of passing data between GIS and spatial analysis modules. I found it amusing that he comments on the obscurity of spatial analysis compared to other forms of GIS, given that spatial analysis is a core part of GIS today. Goodchild also expresses a desire for GIS meetings to become more science-focused rather than based around novelty and innovation, a problem that still seems prevalent given that many major GIS events focus on showing off new tools and applications. In that sense he seems to be wrong when he says that GIS will be a short-lived practice if it is used primarily as application software. The use of GIS has become essential in any pursuit that takes spatial data into consideration, and I believe this phenomenon has actually benefited GIScience by giving it exposure.

-yee

Twenty Years of Progress – GI Science in 2010

Monday, September 14th, 2015

Goodchild begins by introducing the tool vs. science debate, but moves in a different direction than Wright, examining professional opinions: the results of conferences and symposiums among the top players in GI Science. I find this less accessible than Wright’s synopsis of the GIS-L listserver, but Goodchild’s examination of the progress of GI Science at a professional level is perhaps a better indicator of the direction of the field. Goodchild focuses on the evolution and definition of the term GI Science, describing it as “the basic research field that seeks to redefine geographic concepts and their use in the context of Geographic Information Systems” (2010: 6). I find this definition supports the idea that GIS is more than a tool without overstepping the scope of GIS. The proposed conceptual framework for GIScience further cements the field as a legitimate science with its own boundaries and methodologies.

Smitty_1

 

Twenty Years of Progress: GIScience in 2010

Monday, September 14th, 2015

Michael Goodchild’s 2010 Invited Keynote Article published in the Journal of Spatial Information Science recognizes the accomplishments and evolution of geographic information science over the last 20 years. Goodchild states early on that “this paper is intended more as a stimulus to others to reflect, and does not pretend to be entirely objective” (2010:03).

GIScience is an emerging field that is still ‘finding its legs’, so to speak, as techniques and concepts developed for application mainly in geography-related endeavors are used more and more by researchers interested in data error and uncertainty, local spatial analysis and statistics, and modelling natural and human phenomena (2010:08).

Goodchild’s use of diagrams to illustrate to readers the results of a citation analysis effectively shows the relation between GIScience researchers and the development of three main sub-domains within GIScience: Spatial analysis and decision-making, environmental modeling and topography, and lastly data modeling and representation (2010:09).

This keynote article accomplished its goal of stimulating reflection as to the origins of GIScience, and where it is going. Goodchild effectively demonstrated that GIScience is growing, with more research being published every year, and enriching the field even more. Goodchild includes a list of 19 peer-reviewed “classic” papers that illustrate through their title alone the multidisciplinary and evolving field that is GIScience (2010:10). He then lists several topics that have yet to be researched within GIScience, from “neogeography” to the “third, fourth, and fifth dimensions” (2010:14).

I agree with Goodchild that reflection on the past is crucial to better understand what is left to be done and discovered. GIScience may have a relatively short history, but it is gaining some serious momentum. Now we have to ask ourselves: what will GIScience look like in another 20 years? And how will it have changed our understanding of the space that surrounds us?

-ClaireM

Goodchild 1992

Monday, September 14th, 2015

This article is a snapshot of scholarly attitudes towards GIS in 1992, and of how the field needed to move from system to science. It is interesting to look at this article from a historical perspective, to see what the ancient GIS masters thought of their discipline. Goodchild expresses some frustration that his discipline is criticized as being too technology-driven. Yet he himself says that we tend to treat GIS displays as flat, instead of exploiting their potential to display curved surfaces. He says that we need new technologies that can better display curved surfaces and 3D modelling. Today we have Google Earth Pro, which is now free for all to use, and many other paid 3D-modelling GIS packages. Yet for the most part GIS continues to be worked in either raster or vector on a virtual flat surface. Why? Because it works; not everything has to be modelled in 3D, just as directions to the grocery store don’t have to be a shortest path overlaid on 5x5m-resolution satellite imagery. Goodchild states that the greatest advances in GIS research have come where technology itself stood in the way of solutions. He proposes turning the focus away from the tech towards the science, but is coincidentally interested in advancing the technology. Well then, perhaps GIS was slow to adopt 3D modelling and curved projections because they didn’t actually help solve GIScience problems.
-anontarian

GIS: Tool or Science at McGill?

Monday, September 14th, 2015

The ongoing debate over whether GIS is a science or a tool is an interesting one that plays out both in the academic world and in our own university. Students have been pushing for a GIS diploma, something similar to what many colleges offer and one recognized by employers. Offering a diploma suggests that it is a tool; something to put on a CV: proficiency in STATA, HTML and ArcGIS. That the university currently lacks such a diploma, and instead groups GIS within the Geography degree, suggests that the administration is more supportive of the science side of the debate. GIS as a tool can be learned in lower-level courses, but courses like this one, which promote GIS as a science, are the capstone of our degree.
-anontarian

Goodchild 2010

Monday, September 14th, 2015

In his 2010 update, Goodchild explains the developments in GIS over the past 20 years and where he expects the field to go in the next decade. His areas of further research really reveal how far the discipline has come technologically. For example, in the 1992 article he discusses how the ability to show colour gradations needs to be improved, and speaks of being able to scan maps and accurately recreate readable maps on screen. In 2010 he discusses the best ways of doing 3D/4D modelling, and even of adding a fifth dimension of attributes that exist in space-time. His interest in new forms of GIS modelling shows how the field has tried to move away from maps as the end product. It is interesting to see how the field has diversified, as well as the author’s perspective on GIS education. While some aspects of GIS have become increasingly complex (e.g. our modelling abilities), many basic parts of GIS have become accessible to the general public. Whether education should focus on expanding the science or on teaching the basic tools is an interesting debate. It seems that researchers would like to see GIS as a science, whereas firms that still use GIS for basic applications would probably see it as a tool.
-anontarian

GIS: Tool or Science?

Monday, September 14th, 2015

Wright (1997) summarizes the debate over GIS as a tool or a science, saying that rather than there being two sides to a debate, there are three positions GIS can take along a continuum. I found the idea of a continuum between tool, toolmaking and science to be an interesting visual. A continuum implies that there exists an infinite number of spaces between those points where an individual can find their relationship with GIS.

Wright discusses students learning GIS and the implications for these students of labeling GIS as a tool or a science. I am one of those students; however, as it’s been almost two decades since the article was published, it’s necessary to point out how far GIS has come since then. In my classes, it seems that learning GIS is a progression along the spectrum. In the introductory classes, we learn about GIS the tool, how to use the software, and how it is used in business, environment or urban planning. Next, we incorporate our own interests into GIS and find where our tool is lacking and start looking for ways to fill those gaps with toolmaking. Finally, in the most advanced classes, we can critique GIS itself and start asking the important questions of GIS such as, how do we incorporate space and time in a GIS visual? Who does GIS empower?

At the end of the article, this sentence in particular resonated with me: “GIS may represent a new kind of science, one that emphasizes visual expression, collaboration, exploration, and intuition, and the uniqueness of place” (Wright 358). This is certainly what I’ve seen of GIS and what I see in its future: a collaborative science based in the intuition of geography which explores technology.

-Denasaur

Twenty Years of Progress

Monday, September 14th, 2015

I found the article by Goodchild to be engaging and easy to read. The article reads more like a reflection than an academic paper, as Goodchild explores the accomplishments, prominent literature, and advancements in the past 20 years of GIS. After reading the Wright 1997 article, this article is especially interesting to reflect on. It seems to treat the “tool versus science” debate as closed, naming GIS academic journals with “science” in the title and naming GIScientists who appear in academic circles. Goodchild names what he sees as three subdomains of GIScience: the computer, the user and society. Perhaps it’s the computer that’s seen as the tool, not GIS.

A key difference between this article and the Wright 1997 article which was particularly striking to me was the difference in citizen participation in GIS. The Wright article discusses the viewpoints of a few privileged academics on GIS; however, as the Goodchild article clearly shows, GIS has become much less exclusive in the past two decades. In 1997, the prevalence of Open Street Map and Humanitarian OSM could not have been imagined. The “GIS community,” as Wright refers to it, has therefore expanded enormously in the past two decades, beyond just simply academics and high-level technicians. For some users, it may never be more than a tool – but for many others, it’s become a legitimate academic discipline and research focus.

Denasaur

Exploring GIS as a Tool and Science

Monday, September 14th, 2015

Wright’s exploration of GIS as a tool and a science still serves as a satisfactory synopsis of a debate that began more than twenty years ago. In 1993, the internet was merely an ember of what would become a large inferno of constant discussion and diffusion of knowledge. This particular scholarly article explores an informal conversation on the GIS-L electronic listserver, which took place in 1993 between GIS enthusiasts, academics, and frankly whoever felt the inclination to participate. This informality fueled Wright’s desire to write a formal scholarly article on the topic.

Wright et al. delve into the main body of the discussion: can we classify GIS as a science, or as an analytical tool? The latter implies that GIS is not legitimized as its own field and is merely adopted by other fields and disciplines to “advance the investigation of a problem,” while the former implies GIS is a synthesis of many geographic disciplines possessing its own potential for theoretical advances and progression (1997: 355). Proponents of the scientific viewpoint denounce the tool-centric view as limiting, asserting that GIS has its own issues independent of the applications it is used for.

 

I was unaware of the debate surrounding the topic of GIS, and also unaware of the limitations of my tool-centric view. The view that Wright et al. arrive at in which they describe GIS and its users as a spectrum of tool users, tool makers, and scientists (I struggle to define/understand this last group) lends credibility to all aspects of GI science without highlighting the need for definition or denouncing any of the three viewpoints.

 

Smitty_1

GIS: Tool or Science?

Monday, September 14th, 2015

I’ve always perceived GIS as a means to an end: a tool that automates the analysis and organization of spatial data, so as to gain meaningful insights into our quantified Earth. I’m an undergraduate geography student seeking primarily to develop marketable skills, as most are these days, and I tend to brush off most notions of the “philosophy of science” as a discussion meant only for grey-haired academics.

However, Wright et al.’s 1997 “GIS: Tool or Science” piece published in the Annals of the Association of American Geographers fundamentally challenged my view of GIS, and the importance of having the conversation in the first place.

Wright et al. analyzed and categorized the responses from the GIS-L electronic listserver, an online platform that allowed many discussants to participate in the conversation. The responses to the question “GIS: Tool or Science?” were fit into three main topics: GIS as Tool, GIS as Science, and General Comments about Science (1997: 350). Many discussants argued that the answer to the question depends on the user and the nature of the task at hand (scientific research, technological development of end-user GIS products and suites, or use of GIS products for academic, commercial, or research purposes), and that perhaps engineering or applied science would be a more appropriate field for GIS to be a part of (1997: 351). Wright et al. then went a step further and argued both sides of the debate in a clear and concise manner.

How do I now perceive GIS? Tool or Science? Having read the article, definitely both, but I also do not think that the discussion is over. Just as computer science evolved from mathematics, perhaps geographic information science will become its own field apart from the geography department, and no longer be delegitimized and perceived as nothing more than a means to an end.

-ClaireM

TGIS back then…

Tuesday, December 2nd, 2014

In his article, Yuan criticizes temporal GIS for not properly supporting spatio-temporal modeling. He argues that temporal database systems incorporate time merely by applying time-stamping techniques to tables, attributes or values, and therefore fall short in their application. I find this article interesting yet inappropriate.
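The time-stamping approach Yuan criticizes can be sketched as a hypothetical attribute table in which each value carries a validity interval, so that querying a moment in time means filtering stamps rather than modeling the process of change itself (the field names, the land-use values, and the dates below are all invented for illustration):

```python
from datetime import date

# Attribute-level time-stamping: each land-use value is valid over an interval.
parcel_history = [
    {"value": "forest",   "valid_from": date(1990, 1, 1), "valid_to": date(1999, 12, 31)},
    {"value": "farmland", "valid_from": date(2000, 1, 1), "valid_to": date(2009, 12, 31)},
    {"value": "urban",    "valid_from": date(2010, 1, 1), "valid_to": date(9999, 12, 31)},
]

def value_at(history, when):
    """Return the attribute value whose time stamp covers `when`, or None."""
    for row in history:
        if row["valid_from"] <= when <= row["valid_to"]:
            return row["value"]
    return None

print(value_at(parcel_history, date(2005, 6, 1)))  # → farmland
```

Note that nothing in this structure represents the process of change, only a sequence of stamped states, which is essentially the shortcoming Yuan points out.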

This article is interesting in that it was written in the late 90s (judging from its cited articles, which range from the late 80s to the mid 90s), when the technology that could have exposed such problems in temporal GIS did not yet exist; Yuan was nevertheless able to point them out, and one can only imagine the frustration when methods were not appropriately applied and the technology was not yet able to incorporate the temporal attribute. This leads to why I call the article inappropriate: it could mislead some people into thinking that temporal GIS remains undeveloped even today, whereas, at least from a technological perspective, it is more than ready to be applied and is being applied currently, thanks to advances in technology and crowd-sourcing. As Yuan mentioned, “GIS will be more effective and precise by representing a fire, fire location, and how the fire spreads rather than burned areas at each point in time.” To relate to that example, fire departments and natural scientists nowadays closely monitor and track forest fires using temporal GIS. On the other hand, I find that burned areas at each point in time eventually lead to evidence of how a fire spreads; today’s technologies make it possible to draw such connections, whereas that was impossible about 20 years ago. Hence, this article may easily mislead or confuse readers who are accustomed to today’s GIScience technologies, especially without further background knowledge on TGIS.

-ESRI

Temporal GIScience

Tuesday, December 2nd, 2014

 

It is often said that space, place, and time are three fundamental foci of geographical inquiry. GIScience shares the spatial outlook, the significance of place, and the temporal way of thinking; however, the last of these is more often than not relegated to the sidelines. Yuan (****) notes that many geographic information systems lack the tools needed to conduct spatio-temporal analyses. This presents a significant research gap in the realm of GIScience.

Yuan highlights that temporal datasets are subject to unique problems intrinsic to the factor of time. Modeling and visualization are particularly challenging. Yuan points out that time is not a singular structure – there are linear, cyclic, parallel, and branching structures of time. Additionally, time occupies multiple dimensions, corresponding to validity, transaction, user-definition, and institutions. The latter set of examples was an entirely new set of concepts for me, and I would have appreciated an unpacking of them. Yuan’s acknowledgement of temporal complexity was certainly required; however, the diverse features of time come across as an afterthought in this paper.

Temporal GIS is certainly an intriguing subfield of GIScience, however, there seems to be a steep theoretical learning curve. Unfortunately, this article did not make my introduction to temporal GIS any easier.

-BCBD

Temporal GIS

Monday, December 1st, 2014

In “Temporal GIS and Spatio-Temporal Modeling”, Yuan describes the different issues of temporal GIS.  The main problem is that geographic information systems were built off of the tenets of cartography, and still follow the logic of static maps. Many different data models are presented in this paper, and whilst all of them have benefits, none of them can be applied universally. Researchers have been building models for their specific needs, and therefore they don’t fit most datasets. The article, as well as the field of TGIS, is convoluted and there is no clear answer. Yuan shows that none of the models answer the needs of spatiotemporal analysis completely.

Time seems like such an obvious factor to consider in any kind of analysis, yet it is still hard to incorporate into classic data models for GIS and computer science. Creating a working temporal model for geographic information systems should be a top priority for everyone involved in the field.

Surely the best method for spatiotemporal analysis cannot be manually comparing different layers? Temporal GIS will revolutionize the study of processes, changes, trends, flows, and more.

-IMC

 

Temporal G.I.S.

Monday, December 1st, 2014

The article, Temporal G.I.S. and Spatio-Temporal Modeling by May Yuan, was a challenging introduction to the field of temporal G.I.S. It was one of the readings I have enjoyed the least so far, since I felt that the author didn’t do a great job of clearly explaining the concepts or the diagrams provided. Perhaps it will become clearer after tomorrow’s presentation. Through reading the article, it became clear that successfully implementing temporal aspects into a G.I.S. is a major challenge. This was interesting because it reminded me of the article on geovisualization which discussed Minard’s map of Napoleon’s campaign into Russia. There was a strong temporal characteristic to that map, as well as to subsequent versions. Clearly, geographers have been occupied with combining temporal and spatial data for a long time. It is also clear that the problems and issues associated with doing so have not been solved, which is fascinating (although the technical challenges associated with G.I.S. are presumably more complex). Time is a very important variable to take into account in any geographic process, and being able to successfully incorporate it into a G.I.S. would provide a boost to all the fields of G.I.Science that rely on temporal analyses (which is a lot of them).

-Benny

(Not-so-big) BIG DATA

Monday, December 1st, 2014

Crampton et al. (2013) Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb.

 

Crampton et al. (2013) present an analytical study of the intersection of big data and GIScience via the medium of geolocated tweets. The authors recognize that atheoretical analyses of the geoweb are limiting the expansion of GIScience research and identify five ways through which to remedy this shortcoming.

Using the tweets accumulated through the #lexingtonpolicescanner (also #LPS), Crampton et al. assessed the spatio-temporal phenomenon of data production. They observed spatial and temporal spikes corresponding to how the event evolved over time and through the channels of social media.

Although the bridge to geography and GIScience is established, the connection to big data is much more tenuous. Crampton et al. remark that their analysis “emphasize[d] that the informational richness of these data is often lacking” in typical big data analyses – those which focus on issues of volume, velocity, and statistical complexity (2013: 137). Perhaps this is my own naïveté when it comes to the subject of big data; however, from my understanding, the lack of informational richness (due to noise), unfathomable data quantities, and overwhelming speed of dataset creation are the definitive features of big data. Big data is supposed to challenge the conventional methods of statistical analysis due to these unprecedented conditions. I doubt 12 590 tweets would constitute a ‘big’ dataset. Indeed, Crampton et al. repeatedly state that their analysis may not qualify as ‘big data’. I suspect the authors were caught up in the hype of big data. If true, this very article would exemplify the phenomenon of ‘trend spam’, through which unrelated, or loosely related, subjects ride along in the wake of a popular label.

One of the more intriguing findings of this article is that certain agents, particularly popular websites, may act as catalysts in the (re)production of data. Crampton et al. note that publicity on Mashable.com sparked a noticeable spike in the use of the #LPS hashtag. It would seem that Tobler’s first law of geography is applicable to both physical and virtual space.

 

-BCBD

 

Big Data

Monday, December 1st, 2014

“Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb” describes the possibilities of advancement in the geoweb with big data analysis, by using the example of the #LexingtonPoliceScanner trending topic on Twitter.

One of the most important things I found in this article is how inaccurate the spatial data can be. As the author points out, the location of the user is usually self-reported and can be a fictional place. Moreover, even if they correctly identify their home location, it does not necessarily reflect the location of the user at the time the information was sent. However, we know that most apps record the user’s location, whether they choose to post it or not; therefore it should be possible to have more precise information.

This paper reflects the need for GIScience to move away from static points on maps. The three maps representing tweets at different times in the night could be helped by a functioning temporal GIS and geovisualization.

It is very relevant that this is one of the last papers of the semester, as it ties together many of the fields studied in GEOG 506.

-IMC

 

#BIGDATA

Monday, December 1st, 2014

Using a case study that examines a stream of tweets related to late-night celebration events following the University of Kentucky Wildcats’ victory in the NCAA championship game, Crampton et al. challenge us to look beyond the obvious when dealing with geotagged big data. Although the #LexingtonPoliceScanner hashtag identified the tweets related to the incidents that took place that night, a simple map showing the hotspots of Twitter traffic over space and time merely scratches the surface of what is possible – leaving many unanswered questions and multiple unexplored avenues.

It is important to note that this example featured user-generated content, introducing the risk of false information and of repeated information facilitated through retweets: in essence, noise. The location of a tweet is unclear, as less than 1% of tweets came with GPS coordinates from where the tweet was posted, exposing a gaping hole of potentially misleading information; location, however, could be derived from the user-defined location information. While the advent of big data is a sea of opportunity, I would repeat the authors’ caution regarding the confidence with which the information can be used. Before any data is used, patterns and trends need to be extracted through data mining, a process the authors argue can be enhanced by adding ancillary data to inform the data drawn from the big data source.
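The two-tier locating described above – preferring precise coordinates when present and falling back to the user-defined profile location – can be sketched as follows (the tweet records, the `GAZETTEER` lookup table, and all place names are invented for illustration):

```python
# Hypothetical gazetteer mapping user-defined profile locations to coordinates.
GAZETTEER = {"lexington, ky": (38.05, -84.50), "new york": (40.71, -74.01)}

def locate(tweet):
    """Prefer device coordinates; fall back to geocoding the free-text
    profile location; report the tweet as unlocatable otherwise."""
    if tweet.get("coords"):
        return tweet["coords"], "geotag"
    profile = (tweet.get("profile_location") or "").strip().lower()
    if profile in GAZETTEER:
        return GAZETTEER[profile], "profile"
    return None, "unlocatable"

tweets = [
    {"text": "#LPS downtown!", "coords": (38.04, -84.49)},
    {"text": "#LPS", "profile_location": "Lexington, KY"},
    {"text": "#LPS", "profile_location": "Narnia"},  # fictional self-reported place
]
for t in tweets:
    print(locate(t)[1])  # → geotag, then profile, then unlocatable
```

The third record shows the noise problem directly: a self-reported fictional place simply drops out of the spatial analysis, biasing whatever map is produced.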

While the article clearly demonstrates the ways in which big data requires a broader look in order to tackle the research questions it can answer, I question the choice of the case study presented as an example to illustrate this. I find that other examples would be far more appropriate in representing what big data really entails.

The example, however, provided the opportunity to present the concept of social networks and the idea of measuring distance not only over physical space but also through a social network structure. Social networks such as Twitter offer easy access to UGC data that moves through the social network structure. I do wonder what potential would lie in another dataset, one removed from the category of geotagging.

– Othello

 

Big Data & Spatial Relationships

Monday, December 1st, 2014

I found this article, ‘Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb’ by Crampton et al., to be an interesting introduction to the topic of big data. I thought it was fascinating how they attempted to progress past the simpler research methods previously used to analyse big data and to build more robust and detailed frameworks. They made it clear that, while studying big data is useful, there is a need for more nuanced avenues of research which could help advance the field of big data and the geoweb. One idea I found interesting was the authors’ claim that there is often a paradoxical lack of data, or at least of value in the data, being analyzed. For example, only 34% of the tweets studied were geocodable by user-defined location information (pg. 134). They also note that people can use different location data or tweet about events that are not happening in their stated location. It would be interesting to examine how much of an impact this has on the accuracy of big data studies.

On the topic of locational relationships, the maps provided on page 135 got me thinking about the potential for people to tweet/create content on the Internet about issues that happen a large physical distance away from them. Does this data still follow Tobler’s First Law of Geography? If a larger number of people in New York tweet about events in Ferguson than people in Ferguson, what sorts of implications does this have in terms of analyzing big data’s spatial patterns? With the power to consume information about distant events, does proximity even matter anymore? When studying big data, I would imagine that this would be an important question to answer.

 

-Benny

 

Big Data

Thursday, November 27th, 2014

I really enjoyed “Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb” by Crampton et al (2013). It explores the possibilities and limitations associated with the analysis of Geo-web generated Big Data. They advocate for a more comprehensive ontology of space, one that goes beyond xy coordinates to include a temporal and relational dimension, as well as a more complete understanding of geo-web generated data to include ancillary non-human produced data, as well as non user-generated data.
This article debunks the promise of the explanatory potential of Big Data analytics and stresses the importance of the choice of data and analytical techniques in providing meaningful findings. Moreover, the authors emphasize that Big Data created from the Geoweb represents information from a small subset of the population, undermining the external validity of findings and potentially marginalizing ‘unplugged’ individuals.
The implications for GIScience are significant: How do we realize the potential of geospatial Big Data? What are the societal implications of Big Data analytics? How do we create metadata for Big Data efficiently and comprehensively? How do we effectively geovisualize Big Data to both explore and represent it?
Geospatial Big Data gives rise to questions that build upon those generated by ‘small data’ analytics. Issues of quality are still relevant and are not supplanted by large volumes of data. GIScience will evolve rapidly and will likely intersect with other disciplines to fully realize the potential of Big Data.

Fan_G