Archive for the ‘506’ Category

Thoughts on “Empirical Models of Privacy in Location Sharing”

Thursday, November 30th, 2017

I am really interested in ubiquitous computing and location-based technologies, so I was looking forward to this paper. In describing their methodology, and specifically the concept of “location entropy”, I would have liked a more operational definition of the “diversity” of people visiting a space: whether they took into consideration economic, social, ethnic, or gender differences, and how they qualified those variables. There is an interesting link to spatio-temporal GIS in the observation that more complex privacy preferences are usually tied to a specific time window at a given premises (e.g. 9-5 on weekdays on company premises) (p. 130).
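For what it’s worth, my understanding is that location entropy is usually computed as the Shannon entropy of the distribution of visits across the distinct users observed at a place, which says nothing about who those users are; hence my wish for a more social definition of “diversity”. A minimal sketch (variable names are mine):

    import math
    from collections import Counter

    def location_entropy(visits):
        """Shannon entropy of the distribution of visits across distinct users.

        `visits` is a list of user IDs, one entry per check-in at the location.
        Higher entropy = visits spread across many different users ("diverse");
        lower entropy = a place dominated by a few regulars.
        """
        counts = Counter(visits)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # A location visited evenly by many users scores higher than one dominated
    # by a single user, regardless of who those users actually are.
    print(location_entropy(["alice", "bob", "carol", "dave"]))  # 2.0 bits
    print(location_entropy(["alice"] * 9 + ["bob"]))            # ~0.47 bits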

I thought it was a novel approach to focus on the attributes of the locations at which people were sharing their locations, rather than on the personal characteristics of the individuals that might influence their decision to share at one point or another. This inverted approach lends itself to generalization across subjects and to the formation of universal principles about which kinds of places most inspire location-sharing.

There is an emphasis in the paper on “requests” and the explicit invitation to share one’s location in a social network, but the majority of users supply their location unwittingly or without a formal request. Although this is an important difference, it stands to reason that the nature of the request (i.e. which app is using the information) or the context (who the information is broadcast to, whether a network of acquaintances or anonymous gamers) influences an individual’s decision to share their location even in the absence of a formal request.

The Locaccino interface (brilliant branding there) looks very much like Find Friends, an app that I know some of my friends use regularly. It’s great in some ways that we are able to empirically test hypotheses about the kinds of environments and behavioural conditions which promote or discourage location sharing using these real-world datasets.

-FutureSpock

A Framework for Temporal Geographic Information, Langran and Chrisman (1988)

Monday, November 27th, 2017

Langran and Chrisman (1988) discuss the antecedents of temporal GIS, its core concepts, and a number of ways in which temporal geographic information is conceptualized. The map/state analogy was helpful for my understanding of the spatial and temporal parallels. I suppose the stage concept of time is fairly intuitive, but I appreciated having its connection to maps explained explicitly. The authors seem comfortable with the convention of representing spatial boundaries as distinct lines, but I can imagine how similar concerns about vagueness and ambiguity might arise in temporal data as well.

The authors did a good job of presenting the advantages and limitations of geographic temporality concepts. At the beginning they mentioned how the “strong allegiance of digital maps to their analog roots” was inadequate for spatiotemporal analysis, but I’ll admit that I didn’t think the two concepts they presented really subverted this allegiance very much. Still, maybe I’m spoiled by the ways people are re-imagining maps on the geoweb, which is an unfair comparison for a 1988 paper.

It was interesting to get a glimpse of historical temporal GIS research. It’s clear that one of the biggest concerns in the implementation of a temporal GIS framework is temporal resolution. If I could hazard a guess, I would think that such concerns might evolve from interpolating between temporally distant observations toward the question of handling large amounts of data collected in rapid succession. With the advent of big data, namely by way of social media, I can imagine how the application of temporal GIS has proliferated, and will continue to, since the time the article was published.
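To make the interpolation concern concrete, here is a toy illustration (mine, not the authors’) of the assumption a temporal GIS makes when estimating a state between two temporally distant snapshots:

    from datetime import datetime

    def interpolate_position(t, fix_a, fix_b):
        """Linearly interpolate an (x, y) position at time t between two
        timestamped fixes. The coarser the temporal resolution, the more this
        straight-line assumption glosses over what actually happened."""
        (ta, xa, ya), (tb, xb, yb) = fix_a, fix_b
        w = (t - ta).total_seconds() / (tb - ta).total_seconds()
        return (xa + w * (xb - xa), ya + w * (yb - ya))

    a = (datetime(2017, 11, 27, 9, 0), 0.0, 0.0)    # position at 09:00
    b = (datetime(2017, 11, 27, 17, 0), 8.0, 4.0)   # position at 17:00
    print(interpolate_position(datetime(2017, 11, 27, 13, 0), a, b))  # (4.0, 2.0)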

Thoughts on Geovisualization of Human Activity… (Kwan 2004)

Sunday, November 26th, 2017

The immediate discussion of the historical antecedents of temporal GIS developed by Swedish geographers uses the 24-hour day as a “sequence of temporal events”, but I wonder why this unit of measurement was chosen as opposed to 48 hours or a week, since the periodicity of temporal events may not be captured at the daily scale. It is interesting to note the gendered differences that are made visible by studies of women’s and men’s spatio-temporal activities. As the authors note, “This perspective has been particularly fruitful for understanding women’s everyday lives because it helps to identify the restrictive effect of space-time constraints on their activity choice….” I am curious about how much additional data researchers must collect to formulate hypotheses about why women follow certain paths to work or are typically present at certain locations at certain times. I am also curious about how this process differs when trying to explain the spatio-temporal patterns observed in men’s travel behaviour.
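One way to probe whether a weekly rather than daily window matters would be to bin activity by day of week and hour and look for structure beyond the 24-hour cycle; a rough sketch with made-up timestamps:

    from collections import Counter
    from datetime import datetime

    def weekly_profile(timestamps):
        """Count activity episodes in (day-of-week, hour) bins. If weekday and
        weekend rows look alike, a 24-hour window captures the periodicity;
        if they differ, a weekly window is needed."""
        return Counter((t.strftime("%a"), t.hour) for t in timestamps)

    sample = [datetime(2017, 11, 20, 8),   # Monday morning
              datetime(2017, 11, 21, 8),   # Tuesday morning
              datetime(2017, 11, 25, 11)]  # Saturday, late morning
    print(weekly_profile(sample))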

One of the primary challenges identified by the authors is the lack of fine-grained individual data relating to people’s mobility in urban environments, such as in transportation systems or their daily commutes. This paper was written in 2004 and now, with the rapid increase in streaming data, GPS traces from mobile devices, and open big data sets for most large cities, this is less of a concern. The big challenge these days is probably parsing the sheer quantity of data with appropriate tools and hypotheses to identify key trends and gain usable insights about residents’ travel behaviour.

The methodology used by the researchers for their study of Portland relied on self-reported behaviour in the form of a two-day travel survey. There are many reasons why the reported data might be unreliable or unusable, especially given the fallibility of time estimation and the tendency to under- or over-report travel times based on mode of transport, mood, memory of the event, etc. That being said, this is probably the most ethical mode of data collection, since it asks for explicit consent. I would be interested to know how the researchers cross-referenced the survey data with their information about the Portland Metropolitan Region, as well as how the survey was structured.

-FutureSpock

Goodchild and Proctor (1997) – Scale in digital geography

Sunday, November 26th, 2017

As might be expected, Goodchild and Proctor provide an insightful and lucid evaluation of how conceptions of scale should translate from paper to digital maps, and their analysis remains pertinent in the face of two decades of rapid digital cartographic development. They argue that the representative fraction, as traditionally used by cartographers to represent scale, is outdated for use in digital platforms.

Firstly, I think the representative fraction struggles on a simpler level. In absolute terms, we’d probably find it hard to distinguish 250,000 from 2,500,000 at a glance, so the large numbers involved in representative fractions may make them less preferable than alternatives such as graphical scales, which visually show the relationship between distances on the map and in the real world (as used in Google Maps).

It is interesting to revisit the problems outlined in the paper that have since been faced by web map makers. A significant advance in the navigation of scale in digital environments has been the development of tiled web maps. By replacing a single map image with a set of constituent raster or vector ‘tiles’ loaded by zooming and panning through a user interface, this method facilitates levels of detail that vary with zoom level and position in the map. The appearance and disappearance of certain features (e.g. country names vs. town names) forms another metaphor for scale recognition.
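The tiling scheme itself is simple to sketch: in the common ‘slippy map’ convention, zoom level z divides the world into a 2^z × 2^z grid, and a longitude/latitude pair maps to a tile index roughly as follows (a sketch assuming Web Mercator, not any particular provider’s implementation):

    import math

    def latlon_to_tile(lat, lon, zoom):
        """Convert a WGS84 lat/lon to slippy-map (x, y) tile indices at a zoom level.
        Each +1 in zoom quadruples the number of tiles, which is what lets the
        level of detail vary with scale."""
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        return x, y

    print(latlon_to_tile(45.5017, -73.5673, 12))  # the tile covering downtown Montreal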

I’m still finding it hard to reconcile the idea of scale as used in everyday language (to represent the range of spatial extents within which a phenomenon operates) with its scientific/GISc definition (a broader metric for the level of geographic detail, as well as extent). Positional accuracy, resolution, granularity, etc. are fundamentally important across disciplines, but do they correlate with what people think of when they talk about scale? (sorry Jin)
-slumley

Scale Issues in Social and Natural Sciences, Marceau (1999)

Thursday, November 23rd, 2017

Marceau (1999) describes the significance of and solutions to the issue of scale as it relates to social and natural sciences. The articulation of fundamental principles was helpful in demonstrating the importance of scale as a central question in GIS. It’s clear that the question is particularly important now as we continue to develop a more nuanced appreciation for how observed trends might vary across different scales of analysis.

The discussion of domains of scale and scale thresholds stood out to me. I can imagine how differences in the patterns observed between scales would be helpful for organization and analysis. I’m curious about how these observed thresholds would manifest in reality. Are they distinct? Vagueness in our conceptualization of geographic features and phenomena seems to be prevalent throughout the built and natural environment, so I would think these concepts would shape our analysis of scale in a way that favours vagueness along the spatial scale continuum. Still, it’s conceivable that sharp transitions could be revealed through the process of scaling, unrelated to any vague spatial concepts. An example might’ve made the existence of scale thresholds more obvious to me.

It was an interesting point that an understanding of the implications of the Modifiable Areal Unit Problem took notably longer to develop in the natural science community, perhaps because GIScience as we know it now was only in its infancy? In any case, it’s another reminder of how significantly spatial concepts can differ between geographies.

On Marceau (1999) and “The Scale Issue”

Thursday, November 23rd, 2017

I really liked how in depth this article went, reviewing the development of studies on scale outside the author’s own department/field of study. It really emphasizes that this is an issue that applies to both physical & human geography (and to others who study geographic space), so it’s cool to see interdisciplinary efforts towards this. I think this article could have benefited from a visual flowchart or something, just sketching out how these operations would actually play out on a raster grid or with polygons, since it would take me some time to think that through on my own. Also, I think this article provided some framework for how to consider scale in a research project, like by performing sensitivity analysis (p. 7).

In 1999, when this was published, we didn’t have the geoweb, and I think it would be super interesting to learn how scale issues have been solved or exacerbated by these new developments. Are there issues in this work that have actually been “solved” by the geoweb, or is there just an onslaught of newly created issues (as well as the holdovers, like the ubiquitous MAUP)? Writing this blog post, I realize my work has been constantly plagued by issues of scale, and yet I’ve never been required to acknowledge them when handing in an assignment (and therefore I have never really considered them in this depth/variety before). This is something I have to consider in my analysis of methods for my research project, so thank you (and I’m interested in learning more on Monday)!

On Kwan & Lee (2004) and the 3D visualization of space-time activity

Wednesday, November 22nd, 2017

This article was super interesting, as I find the topic of temporal GIS increasingly pressing in this day and age (and still as challenging as it was in the early 2000s).

The visualizations were really interesting, and it seems like they conveyed far more information, faster, than analyzing the 2D movement alone (with no time dimension) would. Also, I thought it was incredible that the space-time aquarium (discussed as a prism based on the paths identified by Swedish geographers) was only conceptualized (or written down, I guess) in 1970 and then realized in the late 1990s with GIS (and with better computer graphics).
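Conceptually, the aquarium is just a 3D plot with time on the vertical axis; a minimal sketch of a single space-time path with made-up coordinates, assuming matplotlib is available:

    import matplotlib.pyplot as plt

    # One person's day as a space-time path: (x, y, hour of day).
    # Vertical segments = staying put; sloped segments = travel.
    path = [(0, 0, 0), (0, 0, 8), (5, 2, 9), (5, 2, 17), (0, 0, 18), (0, 0, 24)]
    xs, ys, ts = zip(*path)

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(xs, ys, ts)
    ax.set_xlabel("x (km)")
    ax.set_ylabel("y (km)")
    ax.set_zlabel("hour of day")
    plt.show()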

I thought it was interesting that Kwan & Lee mentioned that this approach was specifically used for vector data, so it would be interesting to find out more about the limitations of raster data (or perhaps advances in temporal raster data analysis since 2004?) and the interoperability of raster and vector data. Further, the acknowledgement of the lack of qualitative data was appreciated as well, as it provides a bit of a benchmark in critical GIS history for the issues of bringing qualitative data into something so quantitative. It seems like this could have changed (or become easier to visualize) in the last 13 years, so I’m looking forward to learning more about it. It would be cool to use this “aquarium” idea to click on individual lines and read a story/oral map of a person’s day, although that raises serious privacy concerns, as the information (likely) describes day-to-day activities even if the person’s name is not included publicly. Further, does the introduction of VR change this temporal GIS model? It would be super bizarre and super creepy (albeit more humanizing, maybe?) to do a VR walkthrough of somebody’s everyday life (although we could probably get there with all the geo-info collected on us all the time via social media/smartphones!).

Schuurman (2006) – Critical GIS

Monday, November 20th, 2017

Schuurman discusses the shifting presence of Critical GIS in Geographic Information Science (GISc) and its evolving role in the development of the field. Among other obstacles, Schuurman identifies formalisation—the process by which concepts are translated into forms that are readable in a digital environment—as a key challenge to critical theoretical work gaining further traction in GISc.  

Critical GIS challenges the idea that information about a spatial object, system or process can be made ‘knowable’ in an objective sense; our epistemological lens always filters our view, and there is not necessarily a singular objective truth to be uncovered. Schuurman argues that this type of analysis, applied to GIS, has been provided to some extent by ontological GISc research. In contrast, this body of research presumes a limit to our understanding of a system, emphasising plurality and individuality of experience (e.g. the multiple perspectives represented in PPGIS research).

That said, previous analyses have fallen short in adequately acknowledging and addressing power relations, demographic inequalities, social control and marginalisation as part of the general design process in GIS. In particular, the translation between cognitive and database representations of reality requires explicit treatment in subsequent research. These observations become increasingly relevant in the context of the rising integration of digital technologies into everyday life.

The paper raises the question of how Critical GIS can effect change in the discipline and its practice. Going beyond external criticism, critiques must reason from within the discipline itself. I would ask how Critical GIS might also gain greater traction outside of academic settings (e.g. in influencing the industrial practice of GISc).
-slumley

MacEachren et al (2005) – Visualising uncertainty

Monday, November 20th, 2017

MacEachren et al evaluate a broad set of efforts made to conceptualise and convey uncertainty in geospatial information. Many real world decisions are made on the basis of information which contains some degree of uncertainty, and to compound the matter, there are often multiple aspects of uncertainty that need to be factored into analysis. The balance between effectively conveying this complexity and overloading analysts with visual stimuli can support or detract from decision making, and constitutes a key persisting challenge explored in this paper.

A central discussion that I found interesting was that surrounding visual representations of uncertainty. Early researchers in the field strove to develop or unearth intuitive metaphors for visualisation. Aids such as ‘fuzziness’ and colour intensity could act to convey varying degrees of uncertainty present in a dataset, almost as an additional variable. In the context of our other topic this week, we could ask who these metaphors are designed to assist, and how the choice of metaphor could influence potential interpretations (e.g. for visual constructs like fuzziness and transparency, do different individuals perceive the same gradient scale?).
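As a toy illustration of the ‘additional variable’ idea (my own, not from the paper), an uncertainty estimate could be mapped directly onto a visual variable such as opacity:

    def uncertainty_to_alpha(uncertainty, min_alpha=0.2):
        """Map an uncertainty value in [0, 1] to an opacity in [min_alpha, 1].
        Certain features render solid; uncertain ones fade out. Whether two
        viewers read the same fade the same way is exactly the open question."""
        u = max(0.0, min(1.0, uncertainty))
        return 1.0 - u * (1.0 - min_alpha)

    print(uncertainty_to_alpha(0.0))  # 1.0  (fully opaque)
    print(uncertainty_to_alpha(1.0))  # 0.2  (barely visible)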

The authors draw on the judgement and decision-making literature to distinguish expert decision makers, who adjust their beliefs according to statistical analyses of mathematically (or otherwise) defined uncertainties, from non-experts, who often misinterpret probabilities and rely on heuristics to make judgements. It might have been worth clarifying what was meant by experts in this instance (individuals knowledgeable about a field, or about probability and decision making?). The Tversky and Kahneman (1974) paper cited actually found that experts (by their own definition) are often similarly susceptible to probabilistic reasoning errors, so this polarity may be less distinct than suggested. As with some of the other papers in the geovisualisation literature, I found there was a degree of vagueness about who the visualisation is for (is it the ‘analysts’ mentioned in the introduction, or the lay-people cited in examples?).
-slumley

Critical GIS – Schuurman 2006

Sunday, November 19th, 2017

Critical GIS allows us as users of GIS to better understand how it works and relates to the world around us: how theories are manifested in space, how knowledge is coded, how easy it is to skim over things, and, what I think is most important, the validity and praise that we give to our glorious GIS. This is something I wish I had better understood when I started using GIS programs. We are taught in class and labs how we can use the software to perform all sorts of tasks for us, but we don’t comment much on the actual foundations upon which the software is built.

The article highlights how, by critically evaluating the foundations of the concepts and techniques we apply, we can better assess the value of the results that arise from our projects. It reminds me of the expression ‘garbage in, garbage out’: it doesn’t matter how well you perform a project within the software if the software itself is flawed and you fail to realize that.

I think that one of the issues that’s hard to touch on is how we can improve many of these foundational concepts and apply them in a useful way to formalized knowledge. Improving on core concepts is only the first hurdle; the second involves finding new ways to encode this information in a manner that better expresses what may be minuscule changes to the definitions and ideas. This may be especially challenging since writing code tends toward generalized functions and abstraction.

Critical GIS and Ontology Research, Schuurman (2006)

Saturday, November 18th, 2017

The Schuurman (2006) article presents the emergence of critical GIS, the criticisms of early GIS research that necessitated its conception, and its importance to the discipline of GIScience more broadly. It was interesting to get a glimpse of how critical GIS relates to a number of GIScience topics we’ve already begun to cover, and I think the summary of emergent themes in critical GIS provided an excellent primer for next week’s lecture.

There’s a parallel to be drawn between the synthesis of human geography and geographic techniques to form critical GIScience and the emergence of environmental studies as an integration of environmental and social science principles. For instance, the domain of ecology alone is ill-equipped to handle conservation issues related to resource management. It’s the introduction of sociological principles that enables critique of an antiquated form of environmentalism that might value biodiversity over livelihoods. I’m convinced of the importance of critical theoretical work in supplementing a mechanistic approach to geography.

I was glad to see the topic of vagueness make an appearance! I think the author’s discussion of uncertain conceptual spaces does well to demonstrate the importance of human geography concepts to what Sparke (2000) might refer to as “real-worlders.” It’s sometimes easy to forget how poorly defined some physical geographic concepts can be: at what point does a pond become a lake, or what temporal constraints exist regarding lake-hood? Ontological and epistemological research is clearly a necessary step in addressing uncertainty in GIS applications.

Thoughts on “Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know” (MacEachren et al.)

Saturday, November 18th, 2017

The authors offer a clarification early on in the paper which I found useful: “When inaccuracy is known objectively, it can be expressed as error; when it is not known, the term uncertainty applies”. This definition sounds like it pertains to measurement, but I don’t know how one would distinguish between error and uncertainty when it comes to visualization, another focus of this paper. I also believe it is important to further classify, within “error”, the various sources of error, whether they be human, machine, statistical, etc., to give a holistic impression of the (in)accuracy of the attained results.

I would have liked to see a discussion of accuracy versus precision and how the concept of uncertainty would apply to the precision of points in a dataset, i.e. the degree to which the points relate to each other regardless of how well they capture an absolute (ideal) value.

I liked how the authors drew on multiple disciplines to illustrate how the concept of uncertainty is pertinent to many fields, drawing on Tversky and economic/psychological theory to illustrate that “humans are typically not adept at using statistical information in the process of making decisions” (p. 141). The arguments put forth about how to depict uncertainty visually were very nuanced, from whether this would change individuals’ decision-making when consulting a map to whether it would lead to better decisions or just reduce the perceived reliability of the data presented.

Furthermore, it makes sense that the theories and frameworks for mapping uncertainty are better developed for traditional GIS mapping and less so in the domain of geographic data visualizations. I found Figure 2 useful in teasing out how the concept of uncertainty applies to different facets of a given project.

The challenge of representing uncertainty for dynamic information (which I think is becoming more and more crucial with streaming and big data) is definitely a big one, and I’m interested to see how this field develops.

-FutureSpock

Thoughts on “Geographical information science: critical GIS” (O’Sullivan 2006)

Saturday, November 18th, 2017

We have discussed the importance of terminology in previous weeks, and O’Sullivan hints at the elusive nature of capturing a phenomenon when he states the topic of his paper as the “curious beast known at least for now as ‘critical GIS’” (p. 782). He further states that there is little sign of a groundswell of critical human geographers wholeheartedly embracing GIS as a tool of their trade. I think this has changed.

In comparing different critiques of GIS, he states that the more successful examples of critically informed GIS are those where researchers informed by social theory have been willing to engage with the technology, rather than criticize from the outside. I agree with this, and think it makes sense that some knowledge of the procedures of GIS and how they work is required to illustrate how they can be manipulated to produce subjective results.

On page 784, O’Sullivan states that “Criticism of the technology is superficial”, but neglects to mention what would constitute more profound and constructive criticism. He does not explicate, but refers to Ground Truth and the important contributions made in that book pertaining to ethical dilemmas and ambiguities within GIS. It is interesting to note that much of the “brokering” that went on in the early days, which allowed for reconciliation between social theorists and the GIS community, came from institutions and “top-down” organizing as opposed to more grassroots discussion, say on discussion boards or in online communities/groups.

O’Sullivan notes that “PPGIS is not a panacea, and must not undermine the robust debate on the political economy of GIS, its epistemology, and the philosophy and practice of GIScience”, and I very much agree with this statement. Although the increased use of PPGIS addresses one of the foremost critiques of the applicability of GIS to grassroots communities and movements, it is not a simple goal which can be achieved and considered “solved.” Rather, the increased involvement of novices in GIS and spatial decision-making processes raises a host of new issues for the field of Critical GIS.

-FutureSpock

On Roth (2009) and Uncertainty

Friday, November 17th, 2017

It was super interesting to learn about the differences between certain words that, outside of GISci/geography/mathematics, are often treated as equivalent, like vagueness, ambiguity, etc. Knowing more about MacEachren’s methodology would definitely have been helpful, but I’m looking forward to hearing more in Cameron’s talk!

I thought it was weird that their central argument about uncertainty rested on a focus group of six floodplain mappers. I would think that a focus group would be an interesting setting, since people can change their minds or omit what they were thinking due to contributions from others, or impostor syndrome, or both. Also, six people is not really enough to test a theory. Roth argues that this limitation was negligible since the six were experts, but I would have liked to hear more about what actually made them experts, whether they actually used any of the steps outlined by MacEachren to reduce uncertainty, and why Roth considers the bias of this small subset of GIScientists negligible.

Also, does anyone actually use this methodology for determining/reducing uncertainty? I have never heard of it before, which is worrisome considering many people do not take GIS courses beyond the intro classes. I thought it was interesting that one of the respondents said that representing uncertainty on their final products brought skepticism about their actual skills from their clients. That is a real issue, but people also need to know that these maps aren’t always truthful (however hard the map producer has tried) because of the fundamental multitude of issues in representing all the data accurately, from data collection to 2D/3D representation. So, though these experts lamented the prospect of explaining this to laymen, would it really be that difficult to explain, especially considering the long-lasting benefits of the educational experience?

Optimal routes in GIS and Emergency Planning, Dunn & Newton (1992)

Sunday, November 12th, 2017

Dunn and Newton (1992) examine the performance of two popular approaches to network analysis, Dijkstra’s and out-of-kilter algorithms, in the context of population evacuation. At the time of publication, it’s clear that the majority of network analysis research had been conducted by computer scientists and mathematicians. It’s interesting how historical conceptualizations of networks, which appear explicitly non-spatial in the way distortion and transformation are handled and in their lack of integrated geospatial information, are nonetheless transferable to GIS applications. What the authors describe as an “unnecessarily flexible” definition of a network for geographical purposes appears to be an insurmountable limitation of previous network conceptualizations for GIScience. However, I’ll admit that the ubiquity of Dijkstra’s algorithm in GIS software is a convincing argument, against my limited knowledge of network analysis, for the usefulness of previous network concepts in GIS.
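For reference, a compact sketch of Dijkstra’s algorithm on a weighted graph, the core that GIS implementations wrap with geographic attributes such as travel time or congestion (the graph below is made up):

    import heapq

    def dijkstra(graph, source):
        """Shortest-path costs from `source` over a dict-of-dicts weighted graph."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for neighbour, weight in graph[node].items():
                nd = d + weight
                if nd < dist.get(neighbour, float("inf")):
                    dist[neighbour] = nd
                    heapq.heappush(heap, (nd, neighbour))
        return dist

    # Edge weights could be metres, minutes, or congestion-adjusted costs.
    roads = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
    print(dijkstra(roads, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}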

The out-of-kilter algorithm provides a means to address the lack of integrated geospatial information in other network analysis methods. The authors demonstrate how one might incorporate geospatial concepts such as traffic congestion, one-way streets, and obstructions to enable geographic application more broadly. It’s striking that the processing time associated with network analysis is ultimately dependent on the complexity of the network. In the context of pathfinding, increased urban development and data availability will necessarily increase network complexity, and the paper demonstrates how incorporating geographic information into a network can increase processing time. While it was unsurprisingly left out of a paper published in 1992, I would be curious to learn more about how heuristics might be applied to address computational concerns in the geoweb.

VGI and Crowdsourcing Disaster Relief, Zook et al. (2010)

Sunday, November 12th, 2017

Zook et al. (2010) describe the ways in which crowdsourced VGI was operationalized during the 2010 earthquake in Haiti, with emphasis on the response organized by CrisisCamp Haiti, OpenStreetMap, Ushahidi, and GeoCommons. The authors refer to the principle that “given enough eyeballs, all bugs are shallow” in defence of the suitability of crowdsourced VGI. It’s an interesting thought that the source of concerns about uncertainty, namely the contribution of non-experts, might also be the means to address uncertainty. The principle appears to rely on the ability of the crowd to converge upon some truth, but over the course of the semester I’ve become less and less confident in the existence of such truth. It’s conceivable that what appears to be objective to some might ultimately be sensitive to vagueness or ambiguity. The argument that VGI need only be “good enough” to assist recovery workers is a reminder that this discussion is perhaps less pertinent to disaster response.

Still, I wonder if the principle holds if there is some minimum technical barrier to contribution. Differential data availability based on development is often realized in the differential technical ability of professionals and amateurs. It’s easy to imagine how remote mapping might renew concerns for local autonomy and self-determination. I thought the Ushahidi example provided an interesting answer to such concerns, making use of more widely available technologies than those ubiquitous within Web 2.0. GeoCommons is another reminder that crowdsourcing challenges are not limited to the expert/non-expert divide; there are necessarily implications for interoperability, congruence, and collaboration.

Thoughts on “Network Analysis in Geographic Information Science…” (Curtin 2007)

Sunday, November 12th, 2017

I came into this paper not knowing too much about network analysis, but having some general notion of it through its ubiquity in the geographic and neuroscience literature (network distance, social networks, neural networks). I thought the paper did a good job of outlining the fundamentals of the field before progressing into geographic specificities and future challenges. I learned that the basis of describing networks is their topological qualities, namely connectivity, adjacency, and incidence, which is what makes network analysis applicable to such a diverse range of phenomena.
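To make the topology point concrete, connectivity, adjacency and incidence can be captured in a structure that ignores geometry entirely, which is presumably why the same machinery travels so well between fields; a small sketch with a made-up network:

    # Topology only: which edges join which nodes. No coordinates involved, so the
    # same structure could describe streets, social ties, or neural connections.
    edges = {"e1": ("A", "B"), "e2": ("B", "C"), "e3": ("A", "C")}

    def adjacency(edges):
        """Node -> set of neighbouring nodes, derived from the edge list."""
        adj = {}
        for a, b in edges.values():
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        return adj

    def incident_edges(edges, node):
        """Edges incident to a given node."""
        return [e for e, (a, b) in edges.items() if node in (a, b)]

    print(adjacency(edges))            # each node is adjacent to the other two
    print(incident_edges(edges, "B"))  # ['e1', 'e2']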

Curtin states that “In some cases these network structures can be classified into idealized network types (e.g., tree networks, hub-and-spoke networks, Manhattan networks)”. Are idealized network types simplifications of the input data, performed to fit a certain standardized model?

On page 104, Curtin mentions that “The choice of network data structure chosen can profoundly impact the analysis performed”, just as scale can influence whether or not clusters are observed at a certain resolution, and just as the choice of some variables over others can influence classification algorithms in SDM. Again, we see that the products of any geographic modeling/network analysis are not objective, but depend on subjective choices that require justification.

I assume that the “rapid rendering” discussed in reference to non-topological data structures is a function of quicker run time. Why are the data in non-topological networks processed more quickly than in topological ones? Is it because, without having to assess relationships between points, each point only has to be accounted for once, without regard for its connectivity with other points?

It was interesting to note that one of the biggest challenges or paths forward for geographical network analysis was in applying existing algorithms from different fields to geographic data. Usually the challenges are in adapting current methods for new data types or resolving some gaps in domain knowledge, but this is a different kind of challenge probably born out of the substantial developments made in network analysis in different fields.

-FutureSpock

Thoughts on “Assuring the quality of…” (Goodchild 2012)

Sunday, November 12th, 2017

In discussing methods to assure the quality of VGI, Goodchild states that “The degree to which such triage can be automated varies; in some cases it might be fully automatic, but in other cases it might require significant human intervention.” In VGI, the source of the data is human (as opposed to a scraping algorithm in SDM, for example), but the verification of data quality would definitely benefit from automation to deal with the large volume of geographic data that is produced every day. He goes on to say that “Some degree of generalization is inevitable, of course, since it is impractical to check every item of data”, but by using the data analysis tools that have been developed to deal with large datasets, researchers can strive for a more complete assessment of accuracy.
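A rule-based pass is roughly what I imagine by automated triage; a toy sketch in which the field names and thresholds are entirely hypothetical:

    def triage(feature, trusted_users, bbox):
        """Sort a contributed feature into 'accept', 'review', or 'reject'.
        Fully automatic for clear cases; ambiguous ones go to a human."""
        lon, lat = feature["lon"], feature["lat"]
        if not (bbox["west"] <= lon <= bbox["east"] and bbox["south"] <= lat <= bbox["north"]):
            return "reject"    # outside the area it claims to describe
        if not feature.get("name"):
            return "review"    # incomplete attributes need a human look
        if feature["user"] in trusted_users:
            return "accept"    # reputation as a proxy for quality
        return "review"

    montreal = {"west": -74.0, "east": -73.4, "south": 45.3, "north": 45.8}
    print(triage({"lon": -73.57, "lat": 45.50, "name": "Mont Royal", "user": "u42"},
                 {"u42"}, montreal))  # accept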

To reintroduce the concept of positivism in GIS, Goodchild states that “Our use of the terms truth and fact suggest an orientation towards VGI that is objective and replicable, and for which quality can be addressed using the language of accuracy. Thus our approach is less likely to be applicable for VGI that consists of opinion….or properties that are vaguely defined”. This position seems to indicate that only quantitative or objectively measured geographic phenomena are capable of being tested for accuracy/uncertainty. I find this a flawed position because of the strong explanatory power of qualitative GIS and alternate ways of measuring attribute data. In suggesting it is not possible to apply the same rigorous standards of accuracy to these methods, the implication is that they are less scientific and less worthy of merit. Even if this is not the intention, I would have appreciated some suggestions or potential methods by which to ascertain the accuracy of VGI when applied to qualitative GIS data.

The three definitions of crowd-sourcing provided by Goodchild describe its different applications, from “solving a problem”, to “catching errors made by an individual”, to “approaching a truth”. This progression traces the familiar framing of GIS as tool, tool-making, or science. It is interesting to note that under the third definition the crowd does not converge onto a truth as observations approach infinity; rather, after 13 contributors, there is no observable increase in accuracy for a position contributed to OpenStreetMap. This suggests that, unlike a mathematical proof or principle which will always hold given the correct assumptions, the VGI phenomenon is messier and has to account for human factors like “tagging wars” born of disagreement about geographic principles, or the level of “trust” which may discourage someone from correcting a contribution from a reputed contributor.

The social approach tries to minimize the human errors mentioned above by quantifying variables like “commitment” and “reliability” and by allowing social relations amongst contributors to act as correction mechanisms.

-FutureSpock

Curtin (2013) – Networks in GIScience

Sunday, November 12th, 2017

Curtin (2013) calls on the Geographic Information Science (GISc) community to seize the opportunities surrounding network analysis in geographic information systems (GIS). If GISc researchers and GIS developers can sufficiently integrate networks into existing theoretical frameworks, construct robust methods and design compatible software, they could exert a strong geographically minded influence on the expansion of network analysis in a wide variety of other disciplines.

Networks define fundamental and distinct data structures in GISc that have not always been well served by past GIS implementations. Historically, both non-topological and topological data models in GIS have been inefficient for performing network analyses, with constraining factors leading to repetitions and inconsistencies within the structure. Consequently, data models are required that explicitly treat the description, measurement and analysis of topologically invariant properties of networks (i.e. properties that are not deformed by cartographic transformations), such as connections between transport hubs or links in a social network.

The paper demonstrates that networks are pervasive in their everyday use for navigation of physical and social space. Linear referencing is applied as an underlying location datum, as opposed to a geographic or relative coordinate system, to signify distance along a path. Common metrics for distance between two geographic locations are often calculated by optimally traversing a network.
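Linear referencing reduces to ‘distance along a known line’ rather than a coordinate pair; a minimal sketch (the polyline coordinates are invented):

    import math

    def locate_along(polyline, measure):
        """Return the point at `measure` distance along a polyline of (x, y) vertices,
        i.e. a linear reference such as "5.2 km along the route" resolved to a position."""
        travelled = 0.0
        for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
            seg = math.hypot(x2 - x1, y2 - y1)
            if travelled + seg >= measure:
                f = (measure - travelled) / seg
                return (x1 + f * (x2 - x1), y1 + f * (y2 - y1))
            travelled += seg
        return polyline[-1]  # measure runs past the end of the line

    route = [(0, 0), (3, 0), (3, 4)]  # total length 7
    print(locate_along(route, 5.0))   # (3.0, 2.0): two units up the second segment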

I think that in order for GIScientists to exert the kind of influence envisioned by Curtin over future GIS network analysis research and its applications, they will need to embrace and address the computational challenges associated with current geographic data models. While they are well positioned to do so, the ambiguity in ownership implied by the existence of this paper suggests that concurrently evolving fields should not be discounted.
-slumley

Goodchild and Li (2012) – Quality VGI

Saturday, November 11th, 2017

Goodchild and Li (2012) outline crowd-sourcing, social and geographic approaches to quality assurance for volunteered geographic information (VGI). As VGI represents an increasingly important resource for data acquisition, there is a need to create and interrogate the frameworks used to accept, query or reject instances of VGI on the basis of their accuracy, consistency and completeness.

The authors argue that VGI presents a distinct set of challenges and considerations from other types of volunteered information. For example, Linus’s Law (that in software development, “given enough eyeballs, all bugs are shallow”) may not apply as readily to geographic facts as it does to other types of information. Evaluators’ “eyes” scan geographic content highly selectively, with the exposure of geographic facts varying from the very prominent to the very obscure.

To me, it is unclear why this disparity is unique to geographic information. The direct comparison between Wikimapia and Wikipedia may be inappropriate for contrasting geographic and non-geographic volunteered information, since their user/contributor bases differ so markedly. I might actually advance the opposite case: that because geographic information is all connected by location on the surface of the earth, it is more ‘visible’ than, for instance, an obscure Wikipedia page on an isolated topic.

The authors call upon further research to be directed towards formalising and expanding geographic approaches to quality assurance. These approaches seek to verify VGI using external information about location and by applying geographic ‘laws’. In my opinion, this provides an interesting strategy that is relatively unique to geographic information. Through geolocation, any instance of VGI could be linked to other geospatial databases, and could potentially be accepted or flagged on the basis of their relationships to other nearby features or variables. Elements of this process could be automated through formalisation. This approach will of course come with its own set of challenges, such as potential feedbacks generated by multiple incorrect sources reaffirming inaccurate information.
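The kind of automated geographic check this implies might look something like the following sketch, which flags a contributed point that falls implausibly far from everything in a trusted reference layer (the distances, threshold and layer are invented):

    import math

    def flag_implausible(vgi_point, reference_points, max_km=1.0):
        """Flag a contributed (lat, lon) point if it lies farther than `max_km` from
        every feature in a trusted reference layer; a crude stand-in for a
        geographic 'law' such as "bus stops sit on or near roads"."""
        def haversine_km(a, b):
            lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
            h = (math.sin((lat2 - lat1) / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371.0 * math.asin(math.sqrt(h))
        return all(haversine_km(vgi_point, r) > max_km for r in reference_points)

    roads = [(45.5048, -73.5772), (45.5122, -73.5544)]   # known road vertices
    print(flag_implausible((45.5050, -73.5770), roads))  # False: right beside a road
    print(flag_implausible((46.2000, -74.9000), roads))  # True: nowhere near the layer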
-slumley