Thoughts on Roth (2009)

November 18th, 2017

The concept of uncertainty rarely occurs to me when looking at a map. In Roth’s article, he frequently refers to the visual representation of geographic information uncertainty, but doesn’t explain it in detail or give examples. He describes the different typologies of uncertainty categories from the literature. Roth makes a case for MacEachren’s typology of uncertainty categories. His argument is based on the inclusion of all uncertainties “influential [to] decision-making,” interoperability, and quantifiability. Roth fails to explain why previous typologies were lacking in any of these respects, and it seems that MacEachren’s list is simply broader.

Roth doesn’t give any methods for representing these uncertainties visually. The results from the focus group seemed to conclude that the largest gap in the reality-to-decision flow is representation. I found interesting the distinction participants made between a textual disclaimer for uncertainty and a cartographic representation of uncertainty. I agree that a disclaimer allows the viewer to take the information presented with a grain of salt. I think that most users can understand the concept of uncertainty (even in a geographic context), but representation is the more apparent barrier.

Participants in the focus group also seemed to dismiss geographic uncertainty as something that should be disregarded. If this attitude is as common among decision-makers as the article supposes it to be, therein lies the problem. If it can be proven that decisions made acknowledging geographic uncertainty versus disregarding it are “better,” then decision-makers must be made aware of the discrepancy. Although Leitner and Buttenfield (2000) seem to prove that uncertainty representations expedited the decision-making process, the decision-makers involved in Roth’s focus group were not of the same mind, claiming that knowledge of uncertainty decreased their confidence in their decisions. I think more research is needed on evaluating the validity of informed versus uninformed decisions, along with more education among decision-makers.

Visualizing Thoughts on Geospatial Information Uncertainty: What We Know and What We Need to Know (MacEachren et al.)

November 18th, 2017

The authors offer a clarification early in the paper which I found useful: “When inaccuracy is known objectively, it can be expressed as error; when it is not known, the term uncertainty applies.” This definition sounds like it pertains to measurement, but I don’t know how one would distinguish between error and uncertainty when it comes to visualization, another focus of this paper. I also believe it is important to further classify, within “error,” the various sources of error, whether human, machine, statistical, etc., to give a holistic impression of the (in)accuracy of attained results.

I would have liked to see a discussion of accuracy versus precision and how the concept of uncertainty would apply to the precision of points in a dataset, i.e., the degree to which the points relate to each other regardless of how well they capture an absolute (ideal) value.

I liked how the authors drew on multiple disciplines to illustrate how the concept of uncertainty is pertinent to many fields, drawing on Tversky and economic/psychological theory to illustrate that “humans are typically not adept at using statistical information in the process of making decisions” (141). The arguments put forth about how to depict uncertainty visually were very nuanced, from whether doing so would change individuals’ decision-making when consulting a map to whether it would lead to better decisions or just reduce the perceived reliability of the data presented.

Furthermore, it makes sense that the theories and frameworks of mapping uncertainty are better developed for traditional GIS mapping and less so in the domain of geographic data visualizations. I found Figure 2 useful in teasing out how the concept of uncertainty applies to different facets of a given project.

The challenge of representing uncertainty for dynamic information (which I think is becoming more and more crucial for streaming and big data) is definitely a big one, and I’m interested to see how this field develops.

-FutureSpock

 

Thoughts on “Geographical information science: critical GIS” (O’Sullivan 2006)

November 18th, 2017

We have discussed the importance of terminology in previous weeks, and O’Sullivan hints at the elusive nature of capturing a phenomenon when he states the topic of his paper as the “curious beast known at least for now as ‘critical GIS’” (page 782). He further states that there is little sign of a groundswell of critical human geographers wholeheartedly embracing GIS as a tool of their trade. I think this has changed.

In comparing different critiques of GIS, he states that the more successful examples of critically informed GIS are those where researchers informed by social theory have been willing to engage with the technology, rather than criticize from the outside. I agree with this and think it makes sense that some knowledge of the procedures of GIS and how they work is required to illustrate how they can be manipulated to produce subjective results.

On page 784, O’Sullivan states that “Criticism of the technology is superficial,” but neglects to mention what would constitute more profound and constructive criticism. O’Sullivan does not explicate, but refers to Ground Truth and the important contributions made in that book pertaining to ethical dilemmas and ambiguities within GIS. It is interesting to note that much of the “brokering” that went on in the early days, which allowed for reconciliation between social theorists and the GIS community, came from institutions and “top-down” organizing as opposed to more grassroots discussion, say on discussion boards or in online communities/groups.

O’Sullivan notes that “PPGIS is not a panacea, and must not undermine the robust debate on the political economy of GIS, its epistemology, and the philosophy and practice of GIScience,” and I very much agree with this statement. Although the increased use of PGIS addresses one of the foremost critiques of the applicability of GIS to grassroots communities and movements, it is not a simple goal which can be achieved and considered “solved.” Rather, the increased involvement of novices in GIS and spatial decision-making processes raises a host of new issues for the field of critical GIS.

-FutureSpock

 

Uncertainty in floodplain mapping – Roth 2009

November 18th, 2017

Roth (2009) presents the results of a focus group that was conducted to learn about the role of uncertainty in decision-making processes relating to floodplain mapping. Due to this focus on floodplain mapping, uncertainty was largely discussed in the context of knowledge communication. Such a cartographic focus often conflated abstraction with uncertainty and discussed ways that representations of reality can impact the knowledge that is being communicated. I am left wondering how uncertainty can be introduced into data beyond abstraction and choices of representation. For example, how is uncertainty introduced by processes of data collection?

Furthermore, I am unsatisfied with the author’s attempts to characterize uncertainty and find that this article presupposes knowledge of this subdomain that I do not have. Roth overviews previous typologies of uncertainty (including concepts of accuracy, precision, resolution, consistency, etc.), but puts little effort into describing the theoretical underpinnings of what uncertainty actually is. Roth may have acknowledged that a philosophical discussion of uncertainty is beyond the scope of this paper, but my comprehension would nevertheless have greatly benefited from a more in-depth overview of the concept.

In describing the results from focus group participants, the “FEMA uncertainty criteria” are briefly mentioned. I am curious what these criteria for uncertainty are, and how widespread the concept of uncertainty criteria is. Is the idea of “uncertainty criteria” linked to the concept of data standards? Both speak to the overall quality of data and address potential errors. While I am sure that uncertainty criteria would be very domain specific and difficult to generalize, such standards would be a good way to ensure that data is not misused.

Thoughts on Roth (2009)

November 17th, 2017

I found this paper by Roth (2009) fascinating for multiple reasons. Firstly, my undergraduate thesis research involved making a map of sediment distribution in waterways, using data I collected with a sonar and dGPS. There were multiple layers of uncertainty, mainly relating to error potential in the data collection (from the sonar and the dGPS) and then to interpolation. Like the focus group participants, I found it difficult to communicate the error potential to the stakeholders, and at times counterproductive for policy change to stress it. That being said, I reported the uncertainty as best I could. Reading the results of this paper, and learning that downplaying uncertainty was commonplace in watershed management (at least among the limited number of participants, though I suspect it extends far beyond them), was deeply troubling. It is important for the uncertainty to be known; otherwise, in my mind, it reduces the credibility of the study (if only to those in the know).

Secondly, I found this paper interesting because of its methods. I mostly read about uncertainty (particularly error, accuracy, and precision) treated quantitatively, so it was a useful change in perspective to read about it in a qualitative way (and therefore to read about focus group methods at length).

Finally, this paper was interesting to me because of the uncertainty involved with UAVs, which ranges from the relatively innocuous error in digital terrain model creation to the far more serious killing of civilians in military drone strikes (never mind the overall ethics). To what extent are the precision and accuracy of a drone strike location known before strikes are called, and how accurate is the actual missile? Just in the last few days, the New York Times has published articles highlighting some of the discrepancies between what the American-led coalition fighting ISIS says about “precision air strikes” and the reality, which is not always so precise or accurate. In some cases these are airplane strikes and not drone strikes, but the fact remains that uncertainty can be deadly and must be acknowledged.

O’Sullivan 2006: Critical GIS

November 17th, 2017

I found this article on critical GIS quite interesting and very relevant to our topics in GIS, ranging from VGI to PPGIS. The paper acknowledges the large gap in the GIScience literature pertaining to social theory, which I find a very important idea to keep in mind, especially when assessing GIS papers involving human participation and, in a more veiled sense, GIS projects that may represent only certain groups and impose geographies on those not involved in the GIS process. This parallels an idea raised in the paper that we geographers take for granted at this point: that projections can be used to disproportionately inflate the West or under-represent countries in the Global South. In this sense, I agree with the author that there should be more papers, or even a book, on the social history of GIS, to ground the science in the ethical issues we often overlook when interacting with a GISystem. This is especially important as technologies enabling GIS at the individual level, such as the increasingly prevalent lifestyle databases seen in the GIS literature, continue to grow.

Beyond critical GIS as an ethical consideration, I found the added benefit of feminist/critical GIS in Kwan’s work really quite interesting and revolutionary for qualitative GIS, and as a tool of empowerment for under-represented groups who could benefit from GIS to tell their stories and perspectives.

O’Sullivan (2006) and Critical GIS

November 17th, 2017

O’Sullivan (2006) begins by highlighting the divide between social theory and GIS that critical GIS attempts to bridge. This article provides a brief overview of the field of critical GIS with respect to topics such as feminist GIS, PPGIS, privacy, and ethics. In my opinion, this article did an excellent job of exemplifying the many ways that one can still “do” GIS while being socially aware and critical of the ways that this technology is used. This article, and I suppose the entire subdomain of critical GIS, makes it clear that GIS is not neutral and objective, but rather has many important implications for the individuals and communities that it impacts.

I was most fascinated by O’Sullivan’s overview of the “gendering of GIS” and how GIS has been adopted by feminist geographers to resist the “antagonistic dualisms” that are present in many GIScientific debates. In my personal experiences with GIS, I very easily find myself subscribing to a masculinist and positivistic view of geographic entities. I am often guilty of restricting my analysis to objective and knowable spatial characteristics devoid of more nuanced considerations of localized differentiation. I think that this top-down approach is how many students are introduced to GIS, which may be troubling for future developments in critical GIS. While this approach may fit well within existing scientific frameworks and allow for replicable research, it risks losing touch with reality as we experience it and may exclude certain other knowledge frameworks. In this sense, I believe that many of the issues raised by critical GIS can be applied to all of science and technology.

On Roth (2009) and Uncertainty

November 17th, 2017

It was super interesting to learn about the differences between certain words that, outside of GISci/geography/mathematics, are often treated as equivalent, like vagueness, ambiguity, etc. Knowing more about MacEachren’s methodology would have definitely been helpful, but I’m looking forward to hearing more in Cameron’s talk!

I thought it was weird that the central argument about uncertainty rested on a focus group of just six floodplain mappers. I would think that a focus group would be an interesting setting, since people can change their minds or omit what they were thinking due to contributions from others, or from impostor syndrome, or both. Also, six people is not really enough to test a theory. Roth argues that the small sample was negligible since the six were experts, but I would have liked to hear more about what actually made them experts, whether they actually used any of the steps outlined by MacEachren to reduce uncertainty, and why Roth considers the bias from this small subset of GIScientists negligible.

Also, does anyone actually use this methodology in determining/reducing uncertainty? I have never heard of it before, which is worrisome considering many do not take further GIS courses beyond the intro classes. I thought it was interesting that one of the respondents said that representing uncertainty on their final products brought skepticism about their actual skills from their clients. That is a real issue, but people also need to know that these maps aren’t always truthful, however hard the map producer has tried, because of the multitude of fundamental issues in representing all the data accurately, from data collection to 2D/3D representation. So, though these experts lamented the prospect of explaining this to laymen, would it really be that difficult, especially considering the long-lasting benefits of the educational experience?

Some thoughts on Schuurman (2006)

November 16th, 2017

I bring this up in many of the blog posts that I write, but it truly amazes me how much I learn about the inner workings of GIScience every week in this class. During his GIS Day talk on critical GIS, Dr. Wilson mentioned how too often critical GIS is a lecture tacked on to the end of a GIS course. That truly was my experience taking GIS courses at another university, where limited background was provided and methods and applications were favoured. This phenomenon is reflected in the content analysis of GIScience journals provided in Schuurman (2006), which explains that only 49 of 762 articles published between 1995 and 2004 fell under the category ‘GIS and society’. Firstly, I find the shift/difference in nomenclature from ‘GIS and society’ to critical GIS interesting, because critical GIS has negative connotations to me, implying a necessarily flawed use or understanding of GIS which needs to be critiqued, whereas ‘GIS and society’ is a neutral description of the scope and intention of the study. Secondly, I don’t find this discrepancy in numbers surprising, as ‘GIS and society’ isn’t the main focus of GIS research by any means. But I wonder how we, as GIS users/researchers, can make the distinction, when GIS studies necessarily implicate society, either through their subject matter or through their implications. To me, GIS is most often just as social as it is spatial (my own project in the high Arctic is as close to a counter-example as I can think of, and only because it takes place in one of the most remote places on this planet). I think it is highly problematic to ignore these important discussions and focus on the things that get the big funding and flashy publications.

Optimal routes in GIS and emergency planning application (Dunn and Newton, 1992)

November 12th, 2017

This is an old paper presenting two algorithms for shortest-path calculation: Dijkstra’s algorithm and the out-of-kilter algorithm. The difference is that the out-of-kilter algorithm can tackle problems with flow control, which is useful in many situations such as transportation. As the authors note, however, it is still far from adequate to support real-world decisions. The paper seems limited to the algorithms and their applications, even though it reveals the necessity of engaging researchers from other disciplines (e.g., operational science). Given more recent developments, we may need to realize that the limitation on developing better algorithms for shortest paths or other optimal solutions is not the algorithms themselves. It is possibly how we construct the network, in other words, how we create an appropriate representation of the real world using arcs and nodes. That is a more fundamental level at which to discuss the limits of applying network analysis techniques.
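To make the contrast concrete, the shortest-path half of the paper boils down to a routine like the following. This is a minimal Dijkstra sketch in Python (the junction labels and weights are invented for illustration), not the authors’ implementation:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted digraph.

    graph: dict mapping node -> list of (neighbour, non-negative weight) arcs.
    Returns a dict of minimal distances to every reachable node.
    """
    dist = {source: 0}
    pq = [(0, source)]  # (distance-so-far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy road network: nodes are junctions, weights are travel costs.
roads = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Everything geographic about the problem is hidden inside the `roads` dict, which is exactly the point above: the hard part is deciding how the real world becomes those arcs and weights.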

Tackling questions in geography usually requires multidisciplinary knowledge, especially in the current age, where massive high-dimensional geo-tagged data (i.e., big data with both spatial and non-spatial attributes) are generated through information and communication technologies. The complexity of real-world problems increases dramatically when such massive data are involved. Therefore, one challenge is to reduce that complexity and transform these data into networks. The design of networks should balance efficiency (i.e., reducing complexity) against accuracy (i.e., not losing information). Another challenge is conducting analysis on large networks that cannot be handled by older algorithms, such as the traditional Dijkstra’s algorithm. The guidance for addressing these two challenges will not come only from geography or a few particular disciplines; I think it can come from any discipline, depending on the context of the actual problem. I doubt that, in the near future, any method will be able to identify a “real” optimal solution.

VGI and Crowdsourcing Disaster Relief (Zook et al., 2010)

November 12th, 2017

This paper mainly reviews the applications of four online mapping platforms in the 2010 Haiti earthquake. It cannot be denied that the four platforms (i.e., CrisisCamp Haiti, OpenStreetMap, Ushahidi, GeoCommons) contributed to disaster relief after the earthquake. However, these technologies also raise problems that remain to be discussed and solved.

In the opening parts of this paper, the authors emphasize the importance of information technologies (ITs) in disaster response and then note how volunteered mapping helps. However, they focus on Haiti, where IT infrastructure is quite limited and geo-referenced data are lacking. I agree that volunteered mapping can efficiently and effectively provide these data for disaster rescue and tremendously facilitate it. However, this may not hold in countries that are well mapped and have good infrastructure. In that case, I wonder what the strength of volunteered mapping is compared with traditional mapping databases, and whether we need it at all.

Besides, since the platforms use volunteered geographic information (VGI), the fundamental problem is how to ensure the quality of these data. In terms of disaster response, I think we should consider the two general types of errors proposed by Goodchild (2007): a false positive (i.e., a false rumor of an incident) and a false negative (i.e., the absence of information about an incident that exists). The former leads to inefficiency in disaster rescue, and the latter can result in low effectiveness. Both could cost human lives, even if only one individual’s. Moreover, I doubt that a place with denser information is necessarily a place more in need. Information density can result from many causes, but human lives are not worth more in some areas than in others. According to the authors, only 11% of people could access the Internet and one third had mobile phones. This means roughly two thirds of people could not send out distress calls on Ushahidi; resources are first claimed by people who have access. The authors argue that we should blame the originally insufficient infrastructure; in other words, they think the discrimination would happen even without VGI. This is a tricky argument that defends nothing. Of course, I agree that social inequality always exists. However, VGI is not value-neutral, and it may worsen existing inequality. Critics do not blame VGI for creating the inequality but for worsening it, and currently there is no efficient way to solve these issues.

In conclusion, this paper provides a comprehensive review of the benefits brought by volunteered mapping in disaster response in Haiti. But it is not critical enough when discussing the defects of volunteered mapping. Through reading this paper, we can identify many questions remaining to be answered, including questions about the inherent characteristics of VGI and its applications.

Zook et al. Haiti Relief & VGI

November 12th, 2017

Volunteered geographic information (VGI) is a tool used to consolidate knowledge where it is needed, by those willing to offer their data and expertise. I feel comfortable arguing that it is strictly a tool; it creates no new process or analysis of information but simply refers to the consolidation of knowledge to complete various projects. The project in which the information is being used could potentially be considered science depending on its nature, but VGI itself is a tool. In relation to the topic of privacy, VGI can be either enhanced or impeded depending on levels of privacy. In other words, if personal data are openly available for collection and use, certain tasks may be easier to complete as a result of readily available pools of knowledge. In contrast, if information is kept private, then certain tasks may lack critical knowledge, resulting in inaccuracy or bias in final products.

I think the article does a good job of framing VGI as a tool that facilitates transactions of knowledge and data to complete projects more efficiently than individuals could. I was skeptical of the utility/quality of the work completed, but the article makes a good point that more users means more people to catch errors and mistakes throughout the process.

One particular concern I have regards potential failures to provide an amount of information comparable to what could be collected through local knowledge and expertise: is everything doable through VGI, or are there certain limitations to projects that need to be completed outside of it?

Dunn and Newton (1992)

November 12th, 2017

This paper discusses two prominent forms of network analysis: Dijkstra’s shortest-path analysis and out-of-kilter network analysis.
Dijkstra’s algorithm presents a very simple form of network analysis, regarding the path from point A to point B as a series of nodes and arcs that are weighted simply by length. Indeed, the authors make the point that early network analyses were shaped by computer scientists and did not account for the inherently geographical nature of transportation and movement; namely, they do not account for directionality or geographical coordinates. Out-of-kilter analysis addresses these issues by accounting for external factors and partitioning flows of movement depending on the maximum allocated capacity of certain roads. This need for speed and efficiency is illustrated by the authors through disaster scenarios, where precarious roads and masses of traffic must be accounted for quickly and dealt with efficiently.
This was written in 1992, at the cusp of a widespread informatics revolution in the home market. As it stands right now, Dijkstra’s analysis is still highly relevant (I believe it is used in Google Maps for directions), though I have the sense that out-of-kilter analysis has become a viable option for many people. With traffic-data services such as Waze collecting cell-phone information, paired with basic information about current infrastructure, it has become possible for GPS services to account for dynamic changes in traffic for users of infrastructure.
I can’t help but feel that this is nonetheless a still rudimentary network analysis; there are many more factors that could potentially be quantified and added into the algorithm. What about greenhouse gas emissions, or the level of scenery? I wonder how easily those things could be accounted for. I am still wondering about how qualitative data could play a part in network analysis. Perhaps in the future our GPS devices could account for personal preferences and tailor their network analysis to the individual? This would raise questions over privacy, perhaps, though with growing levels of information being tracked anyway, it’s almost something to be expected. I would be interested in knowing more about the evolution of network analysis, and I am looking forward to the presentation on Monday.
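One conceivable way to fold in factors like emissions or scenery is a composite edge weight, where per-user preference weights scale each factor. The sketch below uses entirely hypothetical factor names, units, and weights; note that a scenery “discount” can push a weight negative, which plain Dijkstra’s algorithm cannot handle, so the cost is clamped at zero here:

```python
def edge_cost(length_km, co2_g_per_km, scenery_score,
              w_dist=1.0, w_co2=0.0, w_scenery=0.0):
    """Collapse several route criteria into one scalar arc weight.

    scenery_score is in [0, 1]; higher is nicer, so it discounts the cost.
    The w_* preference weights are hypothetical per-user tuning knobs.
    """
    cost = (w_dist * length_km
            + w_co2 * co2_g_per_km * length_km
            - w_scenery * scenery_score * length_km)
    return max(cost, 0.0)  # keep arc weights non-negative for Dijkstra

# The same 10 km arc costed for a commuter (cares about CO2)
# versus a tourist (cares about scenery):
commuter = edge_cost(10, 120, 0.8, w_dist=1.0, w_co2=0.01)   # ~22.0
tourist = edge_cost(10, 120, 0.8, w_dist=1.0, w_scenery=0.5)  # ~6.0
```

Since each preference profile yields different arc weights, the “optimal” route becomes personal, which is exactly where the privacy question comes in.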

Network Analysis (Curtin 2007)

November 12th, 2017

I found this article quite interesting, in both its recap of traditional network analyses (i.e., Dijkstra’s algorithm) and its account of how the network features of GIS are among GIScience’s earliest and most popular uses. I find compelling the point that graph theory is ultimately what holds this immense functionality together. On this train of thought, I was very surprised to hear that ‘non-topological’ networks exist and are still used to some degree. How a network can be formed without information linking each node to the others makes no sense to me, and seems to defeat the point of creating a network.

I like how the author states that network GIS is a sub-discipline of GIScience, and goes so far as to claim it is the only one with linear referencing. Since many GIS functions rely on network analysis, I assume that ultimately anything using a network incorporates this, which makes it seem not that out of the ordinary.

Lastly, I found the use of network analysis in multi-disciplinary fields like microbiology and neurology very interesting, and I would definitely use this as an argument that network analysis is purely a tool. As a tool it is extremely powerful: a data structure that is simple to use and understand, and to which many algorithms can be applied for interesting analyses.

-MercatorGator

Optimal routes in GIS and Emergency Planning, Dunn & Newton (1992)

November 12th, 2017

Dunn and Newton (1992) examine the performance of two popular approaches to network analysis, Dijkstra’s and out-of-kilter algorithms, in the context of population evacuation. At the time of publication, it was clear that the majority of network analysis research had been conducted by computer scientists and mathematicians. It’s interesting how historical conceptualizations of networks, which appear to be explicitly non-spatial in the way that distortion or transformation are handled and in their lack of integrated geospatial information, are transferable to GIS applications. What the authors describe as an “unnecessarily flexible” definition of a network for geographical purposes appears to be an insurmountable limitation of previous network conceptualizations for GIScience. However, I’ll admit that the ubiquity of Dijkstra’s algorithm in GIS software is a convincing argument for the usefulness of previous network concepts in GIS, against my limited knowledge of network analysis.

The out-of-kilter algorithm provides a means to address the lack of integrated geospatial information in other network analysis methods. The authors demonstrate how one might incorporate geospatial concepts such as traffic congestion, one-way streets, and obstructions to enable geographic application more broadly. It’s striking that the processing time associated with network analysis is ultimately dependent on the complexity of the network. In the context of pathfinding, increased urban development and data availability will necessarily increase network complexity, and the paper demonstrates how incorporating geographic information into a network can increase processing time. While it was unsurprisingly left out of a paper published in 1992, I would be curious to learn more about how heuristics might be applied to address computational concerns in the geoweb.

VGI and Crowdsourcing Disaster Relief, Zook et al. (2010)

November 12th, 2017

Zook et al. (2010) describe the ways in which crowdsourced VGI was operationalized during the 2010 earthquake in Haiti, with emphasis on the response organized by CrisisCamp Haiti, OpenStreetMap, Ushahidi, and GeoCommons. The authors refer to the principle that “given enough eyeballs, all bugs are shallow” in defense of the suitability of crowdsourced VGI. It’s an interesting thought that the source of concerns about uncertainty, namely the contribution of non-experts, might also be the means to address uncertainty. The principle appears to rely on the ability of the crowd to converge upon some truth, but over the course of the semester I’ve become less and less confident in the existence of such truth. It’s conceivable that what appears objective to some might ultimately be sensitive to vagueness or ambiguity. The argument that VGI need only be “good enough” to assist recovery workers is a reminder that this discussion is perhaps less pertinent to disaster response.

Still, I wonder if the principle holds when there is some minimum technical barrier to contribution. Differential data availability based on development is often realized in the differential technical ability of professionals and amateurs. It’s easy to imagine how remote mapping might renew concerns for local autonomy and self-determination. I thought the Ushahidi example provided an interesting answer to such concerns, making use of more widely available technologies than those ubiquitous within Web 2.0. GeoCommons is another reminder that crowdsourcing challenges are not limited to the expert/non-expert divide; there are necessarily implications for interoperability, congruence, and collaboration.

Thoughts on “Network Analysis in Geographic Information Science…” Curtin 2007

November 12th, 2017

I came into this paper not knowing much about network analysis, but having some general notion of it through its ubiquity in the geographic and neuroscience literatures (network distance, social networks, neural networks). I thought the paper did a good job of outlining the fundamentals of the field before progressing to geographic specificities and future challenges. I learned that the basis of describing networks is their topological qualities, namely connectivity, adjacency, and incidence, which is what makes network analysis applicable to such a diverse range of phenomena.
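Those three qualities translate directly into data structures. A small sketch (with hypothetical node labels) deriving adjacency and incidence from a bare edge list:

```python
# Undirected edge list: the connectivity is the raw data.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]

# Adjacency: which nodes share an edge with a given node.
adjacency = {}
for u, v in edges:
    adjacency.setdefault(u, set()).add(v)
    adjacency.setdefault(v, set()).add(u)

# Incidence: which edges touch a given node.
incidence = {n: [e for e in edges if n in e] for n in adjacency}

print(sorted(adjacency["c"]))  # ['a', 'b', 'd']
print(len(incidence["c"]))     # 3 edges are incident to node 'c'
```

A non-topological structure, by contrast, would store each arc independently of the others, which helps explain why it can render quickly yet cannot answer connectivity questions like these.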

Curtin states that “In some cases these network structures can be classified into idealized network types (e.g., tree networks, hub-and-spoke networks, Manhattan networks).” Are idealized network types simplifications of the input data, performed to fit a certain standardized model?

On page 104, Curtin mentions that “The choice of network data structure chosen can profoundly impact the analysis performed”, just as scale can influence whether or not clusters are observed at a certain resolution, and the choice of some variables over others can influence classification algorithms in SDM. Again, we see that the products of any geographic modeling/network analysis are not objective, but dependent on subjective choices that require justification.

I assume that the “rapid rendering” discussed in reference to non-topological data structures is a function of quicker run time. Why are the data in non-topological networks processed more quickly than in topological ones? Is it because, without having to assess relationships between points, each point only has to be accounted for once, without regard for its connectivity with other points?
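One way to picture the difference, under my own simplifying assumptions: a non-topological (“spaghetti”) structure stores each segment’s coordinates inline, so drawing is one flat pass, while a topological structure stores each node once and must dereference node ids per edge:

```python
# Non-topological ("spaghetti") storage: independent segments, coordinates inline.
spaghetti = [((0, 0), (1, 0)), ((1, 0), (1, 1))]  # shared endpoint stored twice

def draw_spaghetti(segments):
    # One cheap pass; every segment already carries its own coordinates.
    return [(a, b) for a, b in segments]

# Topological storage: nodes stored once, edges reference node ids.
nodes = {"n1": (0, 0), "n2": (1, 0), "n3": (1, 1)}
links = [("n1", "n2"), ("n2", "n3")]

def draw_topological(nodes, links):
    # Each edge needs two lookups before it can be drawn; this indirection
    # is the cost of making connectivity explicit.
    return [(nodes[a], nodes[b]) for a, b in links]

assert draw_spaghetti(spaghetti) == draw_topological(nodes, links)
```

The rendered output is identical; the spaghetti version pays in redundancy and inconsistency risk, the topological version in lookup overhead.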

It was interesting to note that one of the biggest challenges or paths forward for geographical network analysis was in applying existing algorithms from different fields to geographic data. Usually the challenges are in adapting current methods for new data types or resolving some gaps in domain knowledge, but this is a different kind of challenge probably born out of the substantial developments made in network analysis in different fields.

-FutureSpock


Thoughts on “Assuring the quality of…” Goodchild 2012

November 12th, 2017

In discussing methods to assure the quality of VGI, Goodchild states that “The degree to which such triage can be automated varies; in some cases it might be fully automatic, but in other cases it might require significant human intervention.” In VGI, the source of the data is human (as opposed to a scraping algorithm in SDM, for example), but the verification of data quality would definitely benefit from automation to deal with the large scale of geographic data produced every day. He goes on to say that “Some degree of generalization is inevitable, of course, since it is impractical to check every item of data”, but by using the data analysis tools that have been developed to deal with large datasets, researchers can strive for a more complete assessment of accuracy.
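Such triage might be sketched as a simple rule cascade. The field names and thresholds below are placeholders of my own, not anything specified by Goodchild:

```python
def triage(contribution, reputation_threshold=0.8, agreement_threshold=3):
    """Route a VGI contribution: auto-accept, auto-reject, or human review.

    `contribution` is a dict with illustrative fields; the thresholds are
    arbitrary placeholders, not values from the paper.
    """
    if contribution["independent_confirmations"] >= agreement_threshold:
        return "accept"        # enough crowd agreement: fully automatic
    if contribution["contributor_reputation"] >= reputation_threshold:
        return "accept"        # trusted contributor: fully automatic
    if contribution["conflicts_with_authoritative_source"]:
        return "reject"        # contradicts reference data: automatic
    return "human review"      # everything else needs intervention

print(triage({"independent_confirmations": 1,
              "contributor_reputation": 0.4,
              "conflicts_with_authoritative_source": False}))
# Falls through every automatic rule, so it lands in human review.
```

The point of the sketch is the shape of the pipeline: the automatic rules shrink the pile that humans must inspect, which is what makes checking large volumes of VGI tractable at all.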

To reintroduce the concept of positivism in GIS, Goodchild states that “Our use of the terms truth and fact suggest an orientation towards VGI that is objective and replicable, and for which quality can be addressed using the language of accuracy. Thus our approach is less likely to be applicable for VGI that consists of opinion…or properties that are vaguely defined.” This position seems to indicate that only quantitative or objectively measured geographic phenomena are capable of being tested for accuracy/uncertainty. I find this a flawed position because of the strong explanatory power of qualitative GIS and of alternative ways of measuring attribute data. In suggesting it is not possible to apply the same rigorous standards of accuracy to these methods, the implication is that they are less scientific and less worthy of merit. Even if this is not the intention, I would have appreciated some suggestions or potential methods by which to ascertain the accuracy of VGI when applied to qualitative GIS data.

The three definitions of crowd-sourcing provided by Goodchild describe its different applications, from “solving a problem”, to “catching errors made by an individual”, to “approaching a truth”. This progression traces the familiar role of GIS as a tool, tool-making, or science. It is interesting to note that the third definition does not converge onto a truth as observations approach infinity, but rather that after 13 contributors, there is no observable increase in accuracy for a position contributed to OpenStreetMap. This suggests that unlike a mathematical proof or principle, which will always hold given the correct assumptions, the VGI phenomenon is messier and has to account for human factors like “tagging wars” born out of disagreement about geographic principles, or the level of “trust” which may discourage someone from correcting a contribution from a reputed contributor.
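A plateau like this is at least consistent with simple averaging statistics: if contributed positions were nothing but independent noisy reports, the error of their mean would shrink roughly as 1/sqrt(n), so most of the gain is already realized by the first dozen contributors. A toy Monte Carlo sketch (the noise level and trial count are arbitrary choices of mine; this does not reproduce the OSM study):

```python
import random
import statistics

random.seed(42)

def mean_position_error(n_contributors, noise_m=10.0, trials=500):
    """Average error (m) of the mean of n noisy reports of a true point at 0."""
    errors = []
    for _ in range(trials):
        reports = [random.gauss(0.0, noise_m) for _ in range(n_contributors)]
        errors.append(abs(statistics.mean(reports)))
    return statistics.mean(errors)

for n in (1, 5, 13, 50):
    print(n, round(mean_position_error(n), 2))
```

Running this shows the error dropping steeply up to roughly a dozen contributors and only marginally afterwards, which is the purely statistical part of the story; the human factors above are what the model leaves out.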

The social approach tries to minimize the human errors mentioned above by quantifying variables like “commitment” and “reliability” and by allowing social relations amongst contributors to act as correction mechanisms.

-FutureSpock


Curtin (2013) – Networks in GIScience

November 12th, 2017

Curtin (2013) calls on the Geographic Information Science (GISc) community to seize the opportunities surrounding network analysis in geographic information systems (GIS). If GISc researchers and GIS developers can sufficiently integrate networks into existing theoretical frameworks, construct robust methods and design compatible software, they could exert a strong geographically-minded influence on the expansion of network analyses in a wide variety of other disciplines.

Networks define fundamental and distinct data structures in GISc that have not always been well served by past GIS implementations. Historically, both non-topological and topological data models in GIS have been inefficient for performing network analyses, with constraining factors leading to repetitions and inconsistencies within the structure. Consequently, data models are required that explicitly treat the description, measurement and analysis of topologically invariant properties of networks (i.e. properties that are not deformed by cartographic transformations), such as connections between transport hubs or links in a social network.

The paper demonstrates that networks are pervasive in their everyday use for navigation of physical and social space. Linear referencing is applied as an underlying location datum, as opposed to a geographic or relative coordinate system, to signify distance along a path. Common metrics for distance between two geographic locations are often calculated by optimally traversing a network.
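That last point, network distance as an optimal traversal, can be sketched with Dijkstra's algorithm on a made-up road graph (the nodes and edge lengths are my own illustration):

```python
import heapq

def network_distance(graph, source, target):
    """Shortest network distance via Dijkstra; graph maps node -> {neighbor: length}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")  # target unreachable

# Toy road network (edge lengths in km). The straight-line A-D distance
# would understate the travel cost that the network actually imposes.
roads = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.0, "D": 7.0},
    "C": {"A": 5.0, "B": 1.0, "D": 3.0},
    "D": {"B": 7.0, "C": 3.0},
}
print(network_distance(roads, "A", "D"))  # 6.0, via A -> B -> C -> D
```

Linear referencing then sits naturally on top of such a traversal: a position can be stated as a distance along the optimal path rather than as a coordinate pair.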

I think that in order for GIScientists to exert the kind of influence envisioned by Curtin over future GIS network analysis research and its applications, they will need to embrace and address the computational challenges associated with current geographic data models. While they are well positioned to do so, the ambiguity of ownership implied by the existence of this paper suggests that concurrently evolving fields should not be discounted.
-slumley

Goodchild and Li (2012) – Quality VGI

November 11th, 2017

Goodchild and Li (2012) outline crowd-sourcing, social and geographic approaches to quality assurance for volunteered geographic information (VGI). Representing an increasingly important resource for data acquisition, there is a need to create and interrogate the frameworks used to accept, query or reject instances of VGI on the basis of its accuracy, consistency and completeness.

The authors argue that VGI presents a distinct set of challenges and considerations from other types of volunteered information. For example, Linus’s Law—that in software development, “given enough eyeballs, all bugs are shallow”—may not apply as readily to geographic facts as it does to other types of information. Evaluators’ “eyes” scan geographic content highly selectively, with exposure of geographic facts varying from the very prominent to the very obscure.

To me, it is unclear why this disparity is unique to geographic information. The direct comparison between Wikimapia and Wikipedia may be inappropriate for contrasting geographic/non-geographic volunteered information, since their user/contributor bases differ so markedly. I might actually advance the opposite case: the fact that geographic information is all connected by location on the surface of the earth makes it more ‘visible’ than, for instance, an obscure Wikipedia page on an isolated topic.

The authors call for further research directed towards formalising and expanding geographic approaches to quality assurance. These approaches seek to verify VGI using external information about location and by applying geographic ‘laws’. In my opinion, this provides an interesting strategy that is relatively unique to geographic information. Through geolocation, any instance of VGI could be linked to other geospatial databases, and could potentially be accepted or flagged on the basis of its relationships to other nearby features or variables. Elements of this process could be automated through formalisation. This approach will of course come with its own set of challenges, such as potential feedbacks generated by multiple incorrect sources reaffirming inaccurate information.
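A minimal sketch of such a geographic check, with an invented plausibility “law” (the rule, threshold, and planar distance are all my own simplifications; a real system would use projected coordinates and spatial indexes):

```python
import math

def distance_m(p, q):
    # Planar approximation; adequate for a toy example at small extents.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def check_contribution(point, road_points, max_road_distance_m=250.0):
    """Illustrative geographic QA rule: a contributed amenity far from any
    mapped road is flagged for review rather than rejected outright."""
    nearest = min(distance_m(point, r) for r in road_points)
    return "flag" if nearest > max_road_distance_m else "accept"

road_vertices = [(0.0, 0.0), (100.0, 0.0), (200.0, 0.0)]
print(check_contribution((150.0, 50.0), road_vertices))   # near the road: accept
print(check_contribution((150.0, 900.0), road_vertices))  # far from roads: flag
```

The feedback danger noted above is visible even here: if `road_vertices` itself came from inaccurate contributions, the rule would happily reaffirm them.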
-slumley