Archive for December, 2014

TGIS back then…

Tuesday, December 2nd, 2014

In his article, Yuan criticizes temporal GIS for not properly supporting spatio-temporal modeling. He argues that temporal database systems incorporate time merely by applying time-stamping techniques to tables, attributes, or values, and therefore fall short of a proper treatment of time. I find this article interesting yet inappropriate.
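For readers unfamiliar with the time-stamping approach Yuan critiques, a minimal sketch may help (the parcel data and field names here are hypothetical, purely for illustration): each record carries a validity interval, so the landscape is stored as a stack of stamped states rather than as a continuous process.

```python
from datetime import date

# Hypothetical land-use records, each time-stamped with a valid interval --
# the tuple-level time-stamping technique Yuan critiques. Change is only
# recoverable by comparing stamped snapshots, not modeled as a process.
parcels = [
    {"id": 1, "land_use": "forest",   "valid_from": date(1990, 1, 1), "valid_to": date(1995, 6, 30)},
    {"id": 1, "land_use": "burned",   "valid_from": date(1995, 7, 1), "valid_to": date(1996, 3, 31)},
    {"id": 1, "land_use": "regrowth", "valid_from": date(1996, 4, 1), "valid_to": date.max},
]

def state_as_of(records, when):
    """Return the attribute values valid at a given instant."""
    return [r for r in records if r["valid_from"] <= when <= r["valid_to"]]

print(state_as_of(parcels, date(1995, 12, 1)))  # the 'burned' snapshot
```

Querying any single instant returns a static snapshot, which is exactly the limitation the article points to: the fire event itself, and how it spread, is nowhere in the data model.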

This article is interesting in that, back in the late 90s (judging from the cited articles, which range from the late 80s to the mid 90s), Yuan was able to point out such problems in temporal GIS despite the lack of technologies that could have exposed them; one can only imagine the frustration when methods were inappropriately applied and the technology of the time was not apt to incorporate the temporal attribute well. This leads to the reason I call the article inappropriate: it could mislead some people into thinking that temporal GIS remains undeveloped even today, whereas, at least from a technological perspective, it is more than ready to be applied and is being applied currently, thanks to advances in technology and crowd-sourcing. As Yuan mentions, “GIS will be more effective and precise by representing a fire, fire location, and how the fire spreads rather than burned areas at each point in time.” Relating to that example, fire departments and natural scientists nowadays closely monitor and track forest fires using temporal GIS. On the other hand, I find that the burned areas at each point in time eventually lead to evidence of how the fire spread; it is today’s technologies that make it possible to draw such connections, whereas it was impossible to do so about 20 years ago. Hence, this article may easily mislead or even confuse people who are learning and accustomed to today’s GIScience technologies, especially without further background knowledge of TGIS.

-ESRI

Temporal GIScience

Tuesday, December 2nd, 2014

 

It is often said that space, place, and time are three fundamental foci of geographical inquiry. GIScience shares the spatial outlook, the significance of place, and the temporal way of thinking; however, the latter is more often than not relegated. Yuan (****) notes that many geographic information systems lack the tools needed to conduct spatio-temporal analyses. This presents a significant research gap in the realm of GIScience.

Yuan highlights that temporal datasets are subject to a unique set of problems intrinsic to the factor of time. Modeling and visualization are particularly challenging. Yuan points out that time is not a singular structure – there are linear, cyclic, parallel, and branching structures of time. Additionally, time occupies multiple dimensions, corresponding to validity, transaction, user-definition, and institutions. The latter set of examples was entirely new to me, and I would have appreciated an unpacking of these concepts. Yuan’s acknowledgement of temporal complexity was certainly required; however, the diverse features of time come across as an afterthought in this paper.
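Attempting my own unpacking of two of those dimensions (this is my interpretation, not the paper’s, and the fire dates below are invented): valid time is when a fact holds in the real world, while transaction time is when the database recorded it. A minimal bitemporal sketch:

```python
from datetime import date

# Hypothetical bitemporal record: a fire that started on June 1 might
# only be entered into the database on June 3. The two time dimensions
# answer different questions about the same fact.
fire_record = {
    "event": "forest_fire_started",
    "valid_time": date(2014, 6, 1),        # when it actually happened
    "transaction_time": date(2014, 6, 3),  # when the database learned of it
}

def true_by(record, when):
    """Had this event occurred in reality by a given date? (valid time)"""
    return record["valid_time"] <= when

def known_by(record, when):
    """Was this fact in the database on a given date? (transaction time)"""
    return record["transaction_time"] <= when

# On June 2 the fire was already burning but not yet recorded.
print(true_by(fire_record, date(2014, 6, 2)), known_by(fire_record, date(2014, 6, 2)))
```

Keeping the two dimensions separate is what allows a temporal database to answer both “what was the world like on date X?” and “what did we believe on date X?”, which are easily conflated in a single time-stamp.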

Temporal GIS is certainly an intriguing subfield of GIScience, however, there seems to be a steep theoretical learning curve. Unfortunately, this article did not make my introduction to temporal GIS any easier.

-BCBD

Temporal GIS

Monday, December 1st, 2014

In “Temporal GIS and Spatio-Temporal Modeling”, Yuan describes the different issues of temporal GIS. The main problem is that geographic information systems were built on the tenets of cartography and still follow the logic of static maps. Many different data models are presented in this paper, and while all of them have benefits, none can be applied universally. Researchers have been building models for their specific needs, and therefore the models don’t fit most datasets. The article, like the field of TGIS itself, is convoluted, and there is no clear answer. Yuan shows that none of the models completely answers the needs of spatiotemporal analysis.

Time seems like such an obvious factor to consider in any kind of analysis, yet it is still hard to incorporate into classic data models for GIS and computer science. Creating a working temporal model for geographic information systems should be a top priority for everyone involved in the field.

It seems unlikely that the best method for spatiotemporal analysis is comparing different layers manually. Temporal GIS will revolutionize the study of processes, changes, trends, flows, etc.

-IMC

 

Temporal G.I.S.

Monday, December 1st, 2014

The article, Temporal G.I.S. and Spatio-Temporal Modeling by May Yuan, was a challenging introduction to the field of temporal G.I.S. It was one of the readings I have enjoyed the least so far, since I felt that the author didn’t do a great job of clearly explaining the concepts or the diagrams provided. Perhaps it will become clearer after tomorrow’s presentation. Through reading the article, it became clear that it is a major challenge to successfully implement temporal aspects into a G.I.S. This was interesting because it reminded me of the article on geovisualization, which discussed Minard’s map of Napoleon’s campaign into Russia. There was a strong temporal characteristic to that map, as well as to subsequent versions. Clearly, geographers have been occupied with combining temporal and spatial data for a long time. It is also clear that the problems and issues associated with doing so have not been solved, which is fascinating (although the technical challenges associated with G.I.S. are presumably more complex). Time is a very important variable to take into account for any geographic process, and being able to successfully incorporate it into a G.I.S. would provide a boost to all the fields of G.I.Science which rely on temporal analyses (which is a lot of them).

-Benny

(Not-so-big) BIG DATA

Monday, December 1st, 2014

Crampton et al. (2013) Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb.

 

Crampton et al. (2013) present an analytical study of the intersection of big data and GIScience via the medium of geolocated tweets. The authors recognize that atheoretical analyses of the geoweb are limiting the expansion of GIScience research and identify five ways through which to remedy this shortcoming.

Using the tweets accumulated through the #lexingtonpolicescanner (also #LPS), Crampton et al. assessed the spatio-temporal phenomenon of data production. They observed spatial and temporal spikes corresponding to how the event evolved over time and through the channels of social media.

Although the bridge to geography and GIScience is established, the connection to big data is much more tenuous. Crampton et al. remark that their analysis “emphasize[d] that the informational richness of these data is often lacking” in typical big data analyses – those which focus on issues of volume, velocity, and statistical complexity (2013: 137). Perhaps this is my own naïveté when it comes to the subject of big data; however, from my understanding, the lack of informational richness (due to noise), unfathomable data quantities, and the overwhelming speed of dataset creation are the definitive features of big data. Big data is supposed to challenge the conventional methods of statistical analysis due to these unprecedented conditions. I doubt 12 590 tweets would constitute a ‘big’ dataset. Indeed, Crampton et al. repeatedly state that their analysis may not qualify as ‘big data’. I suspect the authors were caught up in the hype of big data. If true, this very article would exemplify the phenomenon of ‘trend spam’, through which unrelated—or loosely related—subjects ride along in the wake of a popular label.

One of the more intriguing findings of this article is that certain agents, particularly popular websites, may act as catalysts in the (re)production of data. Crampton et al. note that publicity on Mashable.com created a noticeable spike in the use of the #LPS hashtag. It would seem that Tobler’s first law of geography is applicable to both physical and virtual space.

 

-BCBD

 

Big Data

Monday, December 1st, 2014

“Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb” describes the possibilities of advancement in the geoweb with big data analysis, by using the example of the #LexingtonPoliceScanner trending topic on Twitter.

One of the most important things I found in this article is how inaccurate the spatial data can be. As the authors point out, the location of the user is usually self-reported and can be a fictional place. Moreover, even if users correctly identify their home location, it does not necessarily reflect their location at the time the information was sent. However, we know that most apps record the user’s location whether they choose to post it or not; therefore, it should be possible to have more precise information.

This paper reflects the need for GIScience to move away from static points on maps. The three maps representing tweets at different times in the night could be helped by a functioning temporal GIS and geovisualization.

It is very relevant that this is one of the last papers of the semester, as it ties together many of the fields studied in GEOG 506.

-IMC

 

#BIGDATA

Monday, December 1st, 2014

Using a case study that examines a stream of tweets related to the late-night celebrations following the University of Kentucky Wildcats’ NCAA championship victory, Crampton et al. challenge us to look beyond the obvious when dealing with geotagged big data. Although the #LexingtonPoliceScanner hashtag identified the tweets related to the incidents that took place that night, a simple map showing the hotspots of Twitter traffic over space and time merely scratches the surface of what is possible – leaving many unanswered questions and multiple unexplored avenues.

It is important to note that this example featured user-generated content, introducing the risk of false information and of repeated information propagated through retweets: in essence, noise. The location of each tweet is also unclear, as less than 1% of tweets carried GPS coordinates from where they were posted, exposing a gaping hole of potentially misleading information; location, however, could be derived from user-defined location information. While the advent of big data is a sea of opportunity, I would echo the authors’ caution with regard to the confidence with which the information can be used. Before any data are used, patterns and trends need to be extracted through data mining, a process the authors argue can be enhanced by adding ancillary data to inform the data drawn from the big data source.
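The distinction between device-geotagged tweets and tweets whose location must be inferred can be sketched as follows (the tweet contents, coordinates, and profile locations here are invented for illustration):

```python
# Hypothetical tweet records: only a small fraction carry device GPS
# coordinates; the rest may offer a free-text, self-reported profile
# location, which can be real, stale, or entirely fictional (noise).
tweets = [
    {"text": "#LPS downtown is wild",    "coords": (38.05, -84.50), "user_location": "Lexington, KY"},
    {"text": "#LPS hearing the scanner", "coords": None,            "user_location": "Lexington, KY"},
    {"text": "#LPS crazy night",         "coords": None,            "user_location": "Hogwarts"},
    {"text": "#LPS go cats",             "coords": None,            "user_location": None},
]

# Precisely located at posting time (the rare case).
geotagged = [t for t in tweets if t["coords"] is not None]

# Location must be derived from the self-reported profile field.
geocodable = [t for t in tweets if t["coords"] is None and t["user_location"]]

print(f"{len(geotagged)}/{len(tweets)} tweets carry GPS coordinates")
```

Even this toy split shows the problem the authors raise: the derived locations include “Hogwarts”, so geocoding the profile field recovers coverage at the cost of confidence.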

While the article clearly demonstrates the ways in which big data requires a broader look in order to tackle the research questions it can answer, I question the choice of the case study presented as an example to illustrate this. I find that other examples would be far more appropriate in representing what big data really entails.

The example did, however, provide the opportunity to present the concept of social networks and the idea of measuring distance not only over physical space but also through a social network structure. Social networks such as Twitter offer easy access to UGC data that moves through the social network structure. I do wonder what potential would lie in another example dataset removed from the category of geotagging.

– Othello

 

Time After Time

Monday, December 1st, 2014

Upon reading Yuan’s work on temporal GIS and spatio-temporal modelling, I have come to the realization that our current GIS systems are very much ill-equipped to handle the storage and processing of all kinds of spatio-temporal models. Spatial is special, yes, but all spaces are placed in time. We cannot ignore the fact that temporal is special too; everything happens in space and time. Most GIS data represent spatial information that is fixed in time, a slice of the full reality. Name me one process or object that hasn’t changed over time and I’ll bake you a cake of your choice.

The complexity of the six major types of spatio-temporal changes in geographic information spoke to my unfamiliarity with such concepts. After six years of experience with GIS, I find that the subject of temporal GIS has not been addressed adequately, which reflects the need for deeper development in this field. The need for further research is all too clear. Current GIS systems are incapable of modeling the temporal changes that most accurately describe phenomena such as forest fires or the development of storm systems, in which objects change and transform over space and time. The lack of support for such basic concepts echoes the incompatibility of GIS with other ways of knowing, and with the epistemology of the indigenous populations of northern Canada in particular.

The question remains: how will traditional GIS tools as we know them adapt and change to better address the potential that resides in these spheres, and, more importantly still, how will implementation take place in the field? The introduction of a new data model that allows for the efficient and effective temporal analysis of phenomena would cause an overhaul of data structures as we know them; such changes would be well worth the effort. To continue as we currently do would be unfortunate.

With the advances in computational power that have led to rapid advances in geovisualization, the same resources should be directed towards developing the capability to handle all forms of temporal GIS.

– Othello

Big Data & Spatial Relationships

Monday, December 1st, 2014

I found this article, ‘Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb’ by Crampton et al., to be an interesting introduction to the topic of big data. I thought it was fascinating how they attempted to progress past the simpler research methods previously used to analyse big data and to build more robust and detailed frameworks. They made it clear that, while studying big data is useful, there is a need for more nuanced avenues of research which could help advance the field of big data and the geoweb. One idea I found interesting was the authors’ claim that there is often a paradoxical lack of data, or at least of value in the data, being analyzed. For example, only 34% of the tweets studied were geocodable from user-defined location information (Pg. 134). They also note that people can use different location data or tweet about events that are not happening in their stated location. It would be interesting to examine how much of an impact this has on the accuracy of big data studies.

On the topic of locational relationships, the maps provided on page 135 got me thinking about the potential for people to tweet/create content on the Internet about issues that happen a large physical distance away from them. Does this data still follow Tobler’s First Law of Geography? If a larger number of people in New York tweet about events in Ferguson than people in Ferguson, what sorts of implications does this have in terms of analyzing big data’s spatial patterns? With the power to consume information about distant events, does proximity even matter anymore? When studying big data, I would imagine that this would be an important question to answer.

 

-Benny