Archive for November, 2017

On Roth (2009) and Uncertainty

Friday, November 17th, 2017

It was super interesting to learn about the differences between certain words that, outside of GISci/geography/mathematics, are often treated as equivalent, like vagueness, ambiguity, etc. Knowing more about MacEachren's methodology would definitely have been helpful, but I'm looking forward to hearing more in Cameron's talk!

I thought it was odd that the central argument about uncertainty rested on a focus group of six floodplain mappers. A focus group is an interesting setting, since people can change their minds or omit what they were thinking because of another participant's contributions, or because of impostor syndrome, or both. Also, six people is not really enough to test a theory. Roth argues that the small sample size is a negligible concern since the six were experts, but I would have liked to hear more about what actually made them experts, whether they actually used any of the steps outlined by MacEachren to reduce uncertainty, and why Roth considers this small (and thus biased) subset of GIScientists negligible.

Also, does anyone actually use this methodology for determining/reducing uncertainty? I had never heard of it before, which is worrisome considering that many people do not take GIS courses beyond the intro classes. I thought it was interesting that one of the respondents said that representing uncertainty on their final products led their clients to question their actual skills. That is a real issue, but people also need to know that these maps aren't always truthful, however hard the map producer has tried, because of the many issues involved in representing data accurately, from data collection to 2D/3D representation. So, though these experts lamented having to explain this to laypeople, would it really be that difficult to explain, especially considering the long-lasting benefits of the educational experience?

Some thoughts on Schuurman (2006)

Thursday, November 16th, 2017

I bring this up in many of the blog posts that I write, but it truly amazes me how much I learn about the inner workings of GIScience every week in this class. During his talk on critical GIS for GIS Day, Dr. Wilson mentioned how too often critical GIS is a lecture tacked on to the end of a GIS course, which truly was my experience taking GIS courses at another university, where limited background was provided and methods and applications were favoured. This phenomenon is reflected in the content analysis of GIScience journals provided in Schuurman (2006), which finds that only 49 of 762 articles published between 1995 and 2004 fell under the category 'GIS and society'.

Firstly, I find the shift/difference in nomenclature from 'GIS and society' to critical GIS interesting, because critical GIS has negative connotations to me, implying a necessarily flawed use or understanding of GIS that needs to be critiqued, whereas 'GIS and society' is a neutral description of the scope and intention of the study. Secondly, I don't find the discrepancy in numbers surprising, as 'GIS and society' isn't the main focus of GIS research by any means. But I wonder how we, as GIS users/researchers, can draw a line around 'GIS and society' when GIS studies necessarily implicate society, whether through their subject matter or through their implications. To me, GIS is most often just as social as it is spatial (my own project in the high Arctic is the closest counterexample I can think of, and only because it takes place in one of the most remote places on the planet), and I think it is highly problematic to ignore these important discussions and focus on the things that get the big funding and flashy publications.

Optimal routes in GIS and emergency planning applications (Dunn and Newton, 1992)

Sunday, November 12th, 2017

This is an old paper presenting two algorithms for shortest-path calculation: Dijkstra's algorithm and the out-of-kilter algorithm. The difference is that the out-of-kilter algorithm can tackle problems with flow control, which is useful in many situations such as transportation. And, as the authors note, it is still far from adequate if we apply it to support real-world decisions. However, the paper seems limited to the algorithms and their applications, even though it reveals the need to engage researchers from other disciplines (e.g., operational research). Given recent developments, we may need to recognize that the limitation on developing better algorithms for shortest paths or other optimal solutions is not the algorithms themselves. It is possibly how we construct the network, in other words, how we create an appropriate representation of the real world using arcs and nodes. That is a more fundamental level at which to discuss the limits of applying network analysis techniques.
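To make the arcs-and-nodes framing concrete, here is a minimal sketch of Dijkstra's algorithm on a toy network; the node names and arc lengths are invented for illustration and are not taken from the paper.

```python
import heapq

# Hypothetical road network: each arc is (neighbour, length in km).
network = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.5), ("D", 2.5)],
    "D": [("B", 4.0), ("C", 2.5)],
}

def dijkstra(graph, source, target):
    """Return (total length, node sequence) of a shortest path."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == target:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, length in graph[node]:
            if neighbour not in visited:
                heapq.heappush(queue, (dist + length, neighbour, path + [neighbour]))
    return float("inf"), []

print(dijkstra(network, "A", "D"))  # (6.0, ['A', 'B', 'C', 'D']): 6 km via B and C
```

The algorithm itself knows nothing about geography; everything depends on how well the arcs, nodes, and lengths represent the real world, which is exactly the point above.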

Tackling questions in geography usually requires multidisciplinary knowledge, especially in the current age, when massive, high-dimensional geo-tagged data (i.e., big data with both spatial and non-spatial attributes) are generated through information and communication technologies. The complexity of real-world problems increases dramatically when such massive data are involved. The challenge, therefore, is to reduce that complexity and transform these data into networks. The design of networks should balance efficiency (i.e., reducing complexity) against accuracy (i.e., not losing information). Another challenge is conducting analysis on large networks that cannot be handled by older algorithms, such as the traditional Dijkstra's algorithm. Guidance for addressing these two challenges will not come only from geography or any particular discipline; I think it can come from any discipline, depending on the context of the actual problem. I also believe that, in the near future, no method will be able to identify a "real" optimal solution.

VGI and Crowdsourcing Disaster Relief (Zook et al., 2010)

Sunday, November 12th, 2017

This paper mainly reviews the applications of four online mapping platforms during the 2010 Haiti earthquake. It cannot be denied that the four platforms (i.e., CrisisCamp Haiti, OpenStreetMap, Ushahidi, GeoCommons) contributed to disaster relief after the earthquake. However, these technologies also bring problems that remain to be discussed and solved.

In the opening sections of the paper, the authors emphasize the importance of information technologies (IT) in disaster response and then note how volunteered mapping helps. However, they focus on Haiti, where IT infrastructure is quite limited and geo-referenced data are lacking. I agree that volunteered mapping can efficiently and effectively provide these data for disaster rescue and tremendously facilitate it. However, this may not hold in well-mapped countries with good infrastructure. In that case, I wonder what the strength of volunteered mapping is compared with traditional mapping databases, and whether we need it at all.

Moreover, since the platforms use volunteered geographic information (VGI), the fundamental problem is how to ensure the quality of these data. In terms of disaster response, I think we should consider the two general types of errors proposed by Goodchild (2007): a false positive (i.e., a false rumor of an incident) or a false negative (i.e., the absence of information about an incident that did occur). The former leads to inefficiency in disaster rescue, and the latter can result in low effectiveness; both could cost human lives, even if only a single individual's. I also doubt that a place with denser information is necessarily a place more in need. Information density can arise for many reasons, but human lives do not differ in value across areas. According to the authors, only 11% of people could access the Internet and one third had mobile phones, which means roughly two thirds of people could not send out distress calls through Ushahidi; resources are claimed first by those who have access. The authors argue that the blame lies with the originally insufficient infrastructure, in other words, that the discrimination would happen even without VGI. This is a tricky argument that defends nothing. Of course, I agree that social inequality always exists. However, VGI is not value-neutral, and it may worsen existing inequality. Critics do not blame VGI for creating the inequality but for worsening it, and there is currently no effective way to resolve the issue.

In conclusion, this paper provides a comprehensive review of the benefits brought by volunteered mapping to disaster response in Haiti, but it is not critical enough when discussing the defects of volunteered mapping. Reading it, we can identify many questions that remain to be answered about the inherent characteristics of VGI and its applications.

Zook et al. Haiti Relief & VGI

Sunday, November 12th, 2017

Volunteered geographic information is a tool used to consolidate knowledge where it is needed, offered by those willing to contribute their data and expertise. I feel comfortable arguing that it is strictly a tool; it creates no new process or analysis of information but simply refers to the consolidation of knowledge to complete various projects. The project in which the information is used could potentially be considered science depending on its nature, but VGI itself is a tool. On the topic of privacy, VGI can be either enhanced or impeded depending on the level of privacy: if personal data are openly available for collection and use, certain tasks may be easier to complete as a result of readily available pools of knowledge; in contrast, if information is kept private, certain tasks may lack critical knowledge, resulting in inaccuracy or bias in the final products.

I think the article does a good job of framing VGI as a tool that facilitates transactions of knowledge and data to complete projects more efficiently than individuals could. I was skeptical of the utility/quality of the work completed, but the article makes a good point that more users means more people to catch errors and mistakes throughout the process.

One particular concern I have is the potential failure to provide information as comprehensive as what could be collected through local knowledge and expertise: is everything doable through VGI, or are there certain parts of projects that need to be completed outside of it?

Dunn and Newton (1992)

Sunday, November 12th, 2017

This paper discusses two prominent forms of network analysis: Dijkstra's shortest-path algorithm and the out-of-kilter algorithm.
Dijkstra's algorithm presents a very simple form of network analysis, treating the path from point A to point B as a series of nodes and arcs that are weighted simply by length. Indeed, the authors make the point that early network analyses were shaped by computer scientists and did not account for the inherently geographical nature of transportation and movement; namely, they do not account for directionality or geographic coordinates. The out-of-kilter algorithm recognizes these issues by accounting for external factors and partitioning flows of movement according to the maximum capacity allocated to particular roads. The authors illustrate this need for speed and efficiency with disaster scenarios, where precarious roads and masses of traffic need to be accounted for quickly and dealt with efficiently.
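To illustrate what capacity-constrained partitioning adds over plain shortest paths, here is a toy sketch (not the actual out-of-kilter algorithm, which solves a minimum-cost flow problem) in which evacuation demand is split across routes once the shortest one saturates; the road names, lengths, and capacities are invented.

```python
# Toy illustration: each route carries both a length and a capacity (vehicles/hour).
# Candidate evacuation routes from a neighbourhood to a shelter, pre-computed
# for simplicity; a real solver (out-of-kilter / min-cost flow) would derive
# them from the network itself.
routes = [
    {"name": "Main St",   "length_km": 3.0, "capacity": 800},
    {"name": "River Rd",  "length_km": 4.5, "capacity": 500},
    {"name": "Hill Pass", "length_km": 7.0, "capacity": 300},
]

def assign_flow(routes, demand):
    """Greedily fill the shortest routes first until demand is met."""
    assignment = []
    remaining = demand
    for route in sorted(routes, key=lambda r: r["length_km"]):
        if remaining <= 0:
            break
        flow = min(route["capacity"], remaining)
        assignment.append((route["name"], flow))
        remaining -= flow
    return assignment, remaining

plan, unmet = assign_flow(routes, demand=1200)
print(plan)   # [('Main St', 800), ('River Rd', 400)]
print(unmet)  # 0 -- any positive value would signal insufficient capacity
```

This greedy split only gestures at what a proper flow formulation does, but it captures the basic point that pure shortest-path routing ignores: roads have finite capacity.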
This was written in 1992, on the cusp of a widespread informatics revolution in the home market. As it stands right now, Dijkstra's algorithm is still highly relevant (I believe it is used in Google Maps for directions), and I have the sense that capacity-aware analysis in the spirit of the out-of-kilter algorithm has also become viable for many people. With traffic data services such as Waze collecting cell-phone information, paired with basic information about current infrastructure, it has become possible for GPS services to account for dynamic changes in traffic for users of infrastructure.
I can't help but feel that this is nonetheless still a rudimentary network analysis; there are many more factors that could potentially be quantified and added to the algorithm. What about greenhouse gas emissions, or the level of scenery? I wonder how easily those could be accounted for, and I am still wondering how qualitative data might play a part in network analysis. Perhaps in the future our GPS devices could account for personal preferences and tailor their network analysis to the individual? This would raise questions over privacy, perhaps, though with growing levels of information being tracked anyway, it is almost to be expected. I would be interested in knowing more about the evolution of network analysis, and I am looking forward to the presentation on Monday.

Network Analysis (Curtin 2007)

Sunday, November 12th, 2017

I found this article quite interesting, both in its recap of traditional network analyses (i.e., Dijkstra's algorithm) and in its account of how the network features of GIS are among GIScience's earliest and most popular uses. I find the point that graph theory is ultimately what holds this immense functionality together compelling. On this train of thought, I was very surprised to hear that 'non-topological' networks exist and are still used to some degree. How a network can be formed without information linking nodes to one another makes no sense to me, and seems to defeat the point of creating a network.

I like how the author states that network GIS is a sub-discipline of GIScience, and goes so far as to claim it is the only one with linear referencing. Since many GIS functions rely on network analysis, I assume that ultimately anything that uses a network incorporates this (making it seem not that out of the ordinary).

Lastly, I found the use of network analysis in fields like microbiology and neurology very interesting, and would definitely use this as an argument that network analysis is purely a tool. As a tool it is extremely powerful: a data structure that is simple to use and understand, and one on which many algorithms can be run for interesting analyses.

-MercatorGator

Optimal routes in GIS and Emergency Planning, Dunn & Newton (1992)

Sunday, November 12th, 2017

Dunn and Newton (1992) examine the performance of two popular approaches to network analysis, Dijkstra's and out-of-kilter algorithms, in the context of population evacuation. At the time of publication, it is clear that the majority of network analysis research had been conducted by computer scientists and mathematicians. It's interesting that historical conceptualizations of networks, which appear explicitly non-spatial in how distortion and transformation are handled and in the lack of integrated geospatial information, are nonetheless transferable to GIS applications. What the authors describe as an "unnecessarily flexible" definition of a network for geographical purposes appears to be an insurmountable limitation of previous network conceptualizations for GIScience. However, I'll admit that, given my limited knowledge of network analysis, the ubiquity of Dijkstra's algorithm in GIS software is a convincing argument for the usefulness of previous network concepts in GIS.

The out-of-kilter algorithm provides a means to address the lack of integrated geospatial information in other network analysis methods. The authors demonstrate how one might incorporate geospatial concepts such as traffic congestion, one-way streets, and obstructions to enable geographic applications more broadly. It's striking that the processing time associated with network analysis is ultimately dependent on the complexity of the network. In the context of pathfinding, increased urban development and data availability will necessarily increase network complexity, and the paper demonstrates how incorporating geographic information into a network can increase processing time. While it was unsurprisingly left out of a paper published in 1992, I would be curious to learn more about how heuristics might be applied to address computational concerns in the geoweb.
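One family of heuristics that exploits exactly the geographic information discussed here is A* search, where the straight-line distance to the destination guides the search and prunes much of the network. The sketch below is a generic illustration with made-up coordinates and lengths, not a description of how any particular geoweb service works.

```python
import heapq
import math

# Hypothetical nodes with planar coordinates (x, y) in km.
coords = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (4, 2)}
graph = {
    "A": [("B", 2.0), ("C", 3.0)],
    "B": [("A", 2.0), ("C", 2.0), ("D", 3.0)],
    "C": [("A", 3.0), ("B", 2.0), ("D", 2.0)],
    "D": [("B", 3.0), ("C", 2.0)],
}

def euclid(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(graph, source, target):
    """A* search: straight-line distance to the target guides expansion."""
    queue = [(euclid(source, target), 0.0, source, [source])]
    best = {source: 0.0}
    while queue:
        _, g, node, path = heapq.heappop(queue)
        if node == target:
            return g, path
        for neighbour, length in graph[node]:
            new_g = g + length
            if new_g < best.get(neighbour, float("inf")):
                best[neighbour] = new_g
                f = new_g + euclid(neighbour, target)  # never overestimates the rest
                heapq.heappush(queue, (f, new_g, neighbour, path + [neighbour]))
    return float("inf"), []

print(a_star(graph, "A", "D"))  # (5.0, ['A', 'B', 'D'])
```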

VGI and Crowdsourcing Disaster Relief, Zook et al. (2010)

Sunday, November 12th, 2017

Zook et al. (2010) describe the ways in which crowdsourced VGI was operationalized during the 2010 earthquake in Haiti, with emphasis on the responses organized by CrisisCamp Haiti, OpenStreetMap, Ushahidi, and GeoCommons. The authors invoke the principle that "given enough eyeballs, all bugs are shallow" in defense of the suitability of crowdsourced VGI. It's an interesting thought that the source of concerns about uncertainty, namely the contribution of non-experts, might also be the means to address uncertainty. The principle appears to rely on the ability of the crowd to converge upon some truth, but over the course of the semester I've become less and less confident in the existence of such truth. It's conceivable that what appears objective to some might ultimately be sensitive to vagueness or ambiguity. The argument that VGI need only be "good enough" to assist recovery workers is a reminder that this discussion is perhaps less pertinent to disaster response.

Still, I wonder whether the principle holds if there is some minimum technical barrier to contribution. Differential data availability based on development is often realized in the differential technical ability of professionals and amateurs. It's easy to imagine how remote mapping might renew concerns about local autonomy and self-determination. I thought the Ushahidi example provided an interesting answer to such concerns, making use of more widely available technologies than those ubiquitous within Web 2.0. GeoCommons is another reminder that crowdsourcing challenges are not limited to the expert/non-expert divide; there are necessarily implications for interoperability, congruence, and collaboration as well.

Thoughts on "Network Analysis in Geographic Information Science…", Curtin 2007

Sunday, November 12th, 2017

I came into this paper not knowing too much about network analysis, but having some general notion of it through its ubiquity in the geographic and neuroscience literature (network distance, social networks, neural networks). I thought the paper did a good job of outlining the fundamentals of the field before progressing to geographic specificities and future challenges. I learned that the basis of describing networks lies in their topological qualities, namely connectivity, adjacency, and incidence, which is what makes them applicable to such a diverse range of phenomena.
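As a small illustration of those three qualities, the sketch below builds adjacency and incidence matrices for an invented four-node network and then checks connectivity by testing whether every node is reachable from one of them.

```python
# Hypothetical undirected network: 4 nodes, 4 edges.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")]

idx = {n: i for i, n in enumerate(nodes)}

# Adjacency matrix: adjacency[i][j] = 1 if nodes i and j share an edge.
adjacency = [[0] * len(nodes) for _ in nodes]
for u, v in edges:
    adjacency[idx[u]][idx[v]] = adjacency[idx[v]][idx[u]] = 1

# Incidence matrix: rows are nodes, columns are edges; 1 marks incidence.
incidence = [[0] * len(edges) for _ in nodes]
for e, (u, v) in enumerate(edges):
    incidence[idx[u]][e] = incidence[idx[v]][e] = 1

# Connectivity: is every node reachable from "A"?
frontier, seen = ["A"], {"A"}
while frontier:
    current = frontier.pop()
    for other in nodes:
        if adjacency[idx[current]][idx[other]] and other not in seen:
            seen.add(other)
            frontier.append(other)

print(adjacency)           # who is adjacent to whom
print(incidence)           # which node is incident to which edge
print(seen == set(nodes))  # True: this little network is connected
```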

Curtin states that "In some cases these network structures can be classified into idealized network types (e.g., tree networks, hub-and-spoke networks, Manhattan networks)." Are idealized network types simplifications of the input data, performed to fit a certain standardized model?

On page 104, Curtin mentions that "The choice of network data structure chosen can profoundly impact the analysis performed", just as scale can influence whether or not clusters are observed at a certain resolution, and just as the choice of some variables over others can influence classification algorithms in SDM. Again, we see that the products of any geographic modeling/network analysis are not objective, but dependent on subjective choices that require justification.

I assume that the "rapid rendering" discussed in reference to non-topological data structures is a function of quicker run time. Why are data in non-topological networks processed more quickly than in topological ones? Is it because, without having to assess relationships between points, each point only has to be accounted for once, without regard for its connectivity with other points?

It was interesting to note that one of the biggest challenges or paths forward for geographical network analysis was in applying existing algorithms from different fields to geographic data. Usually the challenges are in adapting current methods for new data types or resolving some gaps in domain knowledge, but this is a different kind of challenge probably born out of the substantial developments made in network analysis in different fields.

-FutureSpock

Thoughts on "Assuring the quality of…", Goodchild 2012

Sunday, November 12th, 2017

In discussing methods to assure the quality of VGI, Goodchild states that "The degree to which such triage can be automated varies; in some cases it might be fully automatic, but in other cases it might require significant human intervention." In VGI, the source of the data is human (as opposed to a scraping algorithm in SDM, for example), but the verification of data quality would definitely benefit from automation to deal with the large scale of geographic data that is produced every day. He goes on to say that "Some degree of generalization is inevitable, of course, since it is impractical to check every item of data", but by using the data analysis tools that have been developed to deal with large datasets, researchers can strive for a more complete assessment of accuracy.

To reintroduce the concept of positivism in GIS, Goodchild states that "Our use of the terms truth and fact suggest an orientation towards VGI that is objective and replicable, and for which quality can be addressed using the language of accuracy. Thus our approach is less likely to be applicable for VGI that consists of opinion… or properties that are vaguely defined." This position seems to indicate that only quantitative or objectively measured geographic phenomena are capable of being tested for accuracy/uncertainty. I find this a flawed position because of the strong explanatory power of qualitative GIS and alternate ways of measuring attribute data. In suggesting it is not possible to apply the same rigorous standards of accuracy to these methods, the implication is that they are less scientific and less worthy of merit. Even if this is not the intention, I would have appreciated some suggestions or potential methods by which to ascertain the accuracy of VGI when applied to qualitative GIS data.

The three definitions of crowd-sourcing provided by Goodchild describe its different applications, from "solving a problem", to "catching errors made by an individual", to "approaching a truth". This progression appears to trace the familiar framing of GIS as a tool, tool-making, or science. It is interesting to note that the third definition does not converge on a truth as observations approach infinity; rather, after 13 contributors, there is no observable increase in accuracy for a position contributed to OpenStreetMap. This suggests that, unlike a mathematical proof or principle that will always hold given the correct assumptions, the VGI phenomenon is messier and has to account for human factors like "tagging wars" born out of disagreement about geographic principles, or the level of "trust" which may discourage someone from correcting a contribution from a reputed contributor.

The social approach tries to minimize the human errors mentioned above by quantifying variables like "commitment" and "reliability" and by allowing social relations amongst contributors to act as correction mechanisms.

-FutureSpock

Curtin (2013) – Networks in GIScience

Sunday, November 12th, 2017

Curtin (2013) calls on the Geographic Information Science (GISc) community to seize the opportunities surrounding network analysis in geographic information systems (GIS). If GISc researchers and GIS developers can sufficiently integrate networks into existing theoretical frameworks, construct robust methods and design compatible software, they could exert a strong geographically minded influence on the expansion of network analyses in a wide variety of other disciplines.

Networks define fundamental and distinct data structures in GISc that have not always been well served by past GIS implementations. Historically, both non-topological and topological data models in GIS have been inefficient for performing network analyses, with constraining factors leading to repetition and inconsistencies within the structure. Consequently, data models are required that explicitly treat the description, measurement and analysis of topologically invariant properties of networks (i.e., properties that are not deformed by cartographic transformations), such as connections between transport hubs or links in a social network.

The paper demonstrates that networks are pervasive in their everyday use for navigation of physical and social space. Linear referencing is applied as an underlying location datum, as opposed to a geographic or relative coordinate system, to signify distance along a path. Common metrics for distance between two geographic locations are often calculated by optimally traversing a network.
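A minimal sketch of what linear referencing might look like in practice, with an invented route geometry: a location is stored as a route plus a measure along it, and converted back to coordinates by interpolating along the route's vertices.

```python
import math

# Hypothetical route geometry: an ordered list of (x, y) vertices in metres.
route_17 = [(0, 0), (300, 0), (300, 400), (800, 400)]

def locate(route, measure):
    """Convert a linear reference (distance along the route) to coordinates."""
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if travelled + seg >= measure:
            t = (measure - travelled) / seg     # fraction along this segment
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += seg
    return route[-1]  # measure beyond the route's end: clamp to the terminus

# "Route 17, 550 m from its start" instead of a coordinate pair:
print(locate(route_17, 550.0))  # (300.0, 250.0)
```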

I think that in order for GIScientists to exert the kind of influence Curtin envisions over future GIS network analysis research and its applications, they will need to embrace and address the computational challenges associated with current geographic data models. While they are well positioned to do so, the ambiguity of ownership implied by the very existence of this paper suggests that concurrently evolving fields should not be discounted.
-slumley

Goodchild and Li (2012) – Quality VGI

Saturday, November 11th, 2017

Goodchild and Li (2012) outline crowd-sourcing, social and geographic approaches to quality assurance for volunteered geographic information (VGI). As VGI becomes an increasingly important resource for data acquisition, there is a need to create and interrogate the frameworks used to accept, query or reject instances of VGI on the basis of its accuracy, consistency and completeness.

The authors argue that VGI presents a distinct set of challenges and considerations from other types of volunteered information. For example, Linus's Law—that in software development, "given enough eyeballs, all bugs are shallow"—may not apply as readily to geographic facts as it does to other types of information. Evaluators' "eyes" scan geographic content highly selectively, with the exposure of geographic facts varying from the very prominent to the very obscure.

To me, it is unclear why this disparity is unique to geographic information. The direct comparison between Wikimapia and Wikipedia may be inappropriate for contrasting geographic and non-geographic volunteered information, since their user/contributor bases differ so markedly. I might actually advance the opposite case: the fact that geographic information is all connected by location on the surface of the earth makes it more 'visible' than, for instance, an obscure Wikipedia page on an isolated topic.

The authors call upon further research to be directed towards formalising and expanding geographic approaches to quality assurance. These approaches seek to verify VGI using external information about location and by applying geographic ‘laws’. In my opinion, this provides an interesting strategy that is relatively unique to geographic information. Through geolocation, any instance of VGI could be linked to other geospatial databases, and could potentially be accepted or flagged on the basis of their relationships to other nearby features or variables. Elements of this process could be automated through formalisation. This approach will of course come with its own set of challenges, such as potential feedbacks generated by multiple incorrect sources reaffirming inaccurate information.
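As a rough sketch of that kind of automated, geography-based acceptance, the snippet below compares a contributed coordinate against two hypothetical external reference layers and flags it when the disagreement exceeds a tolerance; the datasets, coordinates, and threshold are all invented, and, as noted above, agreement among sources is no guarantee that they are independent.

```python
import math

def metres_apart(p, q):
    """Rough planar distance; fine for a toy example at city scale."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A volunteered feature and the same feature as recorded in two
# hypothetical external geospatial databases (projected coordinates, m).
vgi_point = {"name": "Central Fire Station", "xy": (1000.0, 2000.0)}
reference_layers = {
    "municipal_open_data": (1008.0, 1995.0),
    "national_gazetteer": (1750.0, 2600.0),   # disagrees strongly
}

TOLERANCE_M = 50.0

for source, xy in reference_layers.items():
    d = metres_apart(vgi_point["xy"], xy)
    status = "consistent" if d <= TOLERANCE_M else "flag for review"
    print(f"{vgi_point['name']} vs {source}: {d:.0f} m -> {status}")
```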
-slumley

Curtin 2007: Network Analysis in GIS

Saturday, November 11th, 2017

Network analysis is very useful for showing relationships between objects/agents/people and does not require some of the more formal geographic foundations. The result is the formation and growth of informal and natural linkages, creating complex systems that can model how things are connected to each other. It essentially provides an alternative to a geographic datum for locating points in space, through their relationships to other points. A good example is social media networks: the connections that individuals make online form a global network of information about people and their relationships with each other.

An interesting topic highlighted in this article is the contrast between topological and non-topological data models. This distinction is interesting to me as a geography student, since it seems ridiculous to exclude topology when thinking about networks. The paper makes a similar statement, explaining that these models were effectively useless, as they are simply points and lines with no substantial information available for analysis. I would have appreciated a bit more explanation of non-topological data models, such as an example of how they might be used and why they might be advantageous over topological models in some cases.
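My own rough sketch of the contrast, with invented street segments: a non-topological ("spaghetti") model stores independent polylines, which is enough to draw them quickly, while a topological model also records which arcs meet at which nodes, which is what routing and most other analyses need.

```python
# Non-topological ("spaghetti") model: independent polylines, nothing more.
# Good enough for fast drawing, but it cannot answer "what connects to what?"
spaghetti = [
    [(0, 0), (100, 0)],        # Main St
    [(100, 0), (100, 80)],     # Oak Ave
    [(100, 80), (0, 80)],      # Elm St
]

# Topological model of the same streets: shared nodes and explicit arcs.
nodes = {1: (0, 0), 2: (100, 0), 3: (100, 80), 4: (0, 80)}
arcs = [
    {"name": "Main St", "from": 1, "to": 2},
    {"name": "Oak Ave", "from": 2, "to": 3},
    {"name": "Elm St",  "from": 3, "to": 4},
]

def arcs_at(node_id):
    """Only the topological model can answer this kind of question."""
    return [a["name"] for a in arcs if node_id in (a["from"], a["to"])]

print(arcs_at(2))      # ['Main St', 'Oak Ave'] -- the streets meeting at node 2
print(len(spaghetti))  # the spaghetti model can only be drawn or counted
```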

The article makes one particularly large claim: network GIS is the only sub-discipline to have redefined the spatial reference system on which locations are specified. I'm not going to agree or disagree with this statement, but I think the paper could have done a better job of supporting this argument and contrasting network GIS against other potential sub-disciplines.

Thoughts on Goodchild (2012)

Saturday, November 11th, 2017

Goodchild does a thorough job assessing the benefits and hindrances of his three methods for quality assurance of VGI. His first two, the crowd-sourcing approach and the social approach, he evaluates in comparison to Wikipedia contribution. Goodchild failed to specify a few important details of the social approach. Ideally Wikipedia contributions are made by users who have specific knowledge of a subject. User profiles on Wikipedia list a user’s contributions/edits, as well as an optional description of the user’s background and interests (and accolades if they are a frequent or well-regarded contributor). An OSM user profile could similarly denote their [physical] area of expertise, and also register regions where the user has made the most contributions/edits, giving them more “credibility” for other related contributions.

An important aspect that Goodchild failed to mention regarding the crowd-sourcing approach is the barrier to editing OSM features. While Linus' Law can certainly apply to geographic data, someone who sees an error in OSM would need to be a registered and knowledgeable user to fix the error. In Wikipedia, an "Edit" button is constantly visible and one need not register to make an edit. Legitimate Wikipedia contributions must also be accompanied by a citation of an outside source, an important facet that geographic information often lacks.

The geographic approach to VGI quality assurance requires a set of "rules." Goodchild is concerned with the ability of these rules to distinguish between a real and an imagined landscape, giving an example based on the characteristics of physical features such as coastlines, river systems, and settlement locations. Satellite imagery has provided the basis of many of OSM's physical geographic features; quality assurance is more often concerned with the names and locations of man-made features. A set of rules for man-made features could be more easily determined through a large-scale analysis of similarly tagged features and their relationships to their surroundings. For example, a restaurant located in a park away from a street might be flagged as "suspicious" since its surroundings do not match the surroundings of other "restaurant" features.
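Sticking with that restaurant example, here is a minimal sketch of such a rule, with invented features, streets reduced to single points to keep the sketch short, and an arbitrary 30 m threshold.

```python
import math

# Hypothetical OSM-style features (projected coordinates in metres).
streets = [
    {"name": "Rue Sainte-Catherine", "xy": (0.0, 0.0)},
    {"name": "Boulevard Saint-Laurent", "xy": (400.0, 0.0)},
]
candidates = [
    {"name": "Chez Nous", "tag": "restaurant", "xy": (10.0, 12.0)},
    {"name": "Mystery Diner", "tag": "restaurant", "xy": (200.0, 900.0)},  # mid-park
]

MAX_STREET_DISTANCE_M = 30.0   # arbitrary rule threshold

def nearest_street_distance(xy):
    return min(math.hypot(xy[0] - s["xy"][0], xy[1] - s["xy"][1]) for s in streets)

for feature in candidates:
    d = nearest_street_distance(feature["xy"])
    if feature["tag"] == "restaurant" and d > MAX_STREET_DISTANCE_M:
        print(f"{feature['name']}: {d:.0f} m from nearest street -> suspicious")
    else:
        print(f"{feature['name']}: {d:.0f} m from nearest street -> plausible")
```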

Volunteered Geographic Information and Crowdsourcing Disaster Relief: A Case Study of the Haitian Earthquake, Zook et al. (2010)

Saturday, November 11th, 2017

This article by Zook et al. (2010) discusses VGI specifically in the context of the 2010 earthquake in Haiti, but more broadly addresses many of the issues presented in Goodchild and Li (2012) regarding the accuracy and validity of VGI. I think Zook et al. (2010) do a good job of considering many aspects of VGI, including issues of data licensing and compatibility, the exclusive nature of VGI, which in many cases is restricted to people with the technical skills to participate, and the fact that "there will always be people and communities that are left off the map" (29). While reading that line I wondered: even though VGI is not necessarily accurate, and even though some people will be completely excluded from it for a myriad of reasons (no access to internet or mobile platforms, illiteracy, distance from centres of help, etc.), is it not worth trying? There is a level of error and inaccuracy in any projected geographic information, but that does not stop us from using GISystems.

Moreover, while reading this I thought back to the Johnson, Ricker and Harrison (2017) article I shared with the class, where many of the same issues of accuracy, licensing and intention are presented. I wondered whether, despite these unresolved issues, UAVs present an opportunity to collect objective, real-time data for disaster mitigation and relief. Because UAVs have been used in recent disaster-relief efforts, I wonder how the discussion has shifted to include some of the particular issues that arise from their use.

Network analysis in GIS (Curtin, 2007)

Friday, November 10th, 2017

I found it very interesting that Curtin (2007) points out that network analysis is the only subfield of GIScience that has redefined a spatial reference system. Linear referencing, or using the network itself as a reference, is so intuitive that I had never thought of it as an alternative method of spatial referencing. I realize that standardized spatial referencing is something I take for granted, and alternative methods may be an interesting direction for future research.

This statement can be readily debated, but in my mind, network analysis is perhaps the field within GIScience with the most tangible impact on our daily lives, and it can be applied to the most diverse types of phenomena. The author highlights routing as one of the most fundamental operations in network analysis, and I couldn't imagine our society functioning without it. Routing is particularly relevant in urban areas, where efficient movement from point A to point B across complex road systems is essential for the transportation of people and goods.

Shortest-path routing may be the most basic implementation, but I am curious to understand how other factors can be incorporated into routing algorithms to enhance efficiency. The author indicates that "many parameters can be set in order to define more complex versions of shortest path problems". In urban areas, for example, how are factors such as traffic, road speed limits, and road condition integrated to provide better routing options?
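As I understand it, one common way to fold such factors in is to keep the shortest-path machinery but change what an arc costs: instead of length alone, each arc gets a travel time derived from its length, speed limit, and congestion or condition penalties. The segment names and numbers below are invented.

```python
# Hypothetical road segments with attributes beyond plain length.
segments = [
    {"name": "Autoroute 40", "length_km": 12.0, "speed_kmh": 100, "congestion": 1.6, "condition": 1.0},
    {"name": "Rue Sherbrooke", "length_km": 9.0, "speed_kmh": 50, "congestion": 1.2, "condition": 1.1},
]

def travel_time_min(seg):
    """Arc cost in minutes: free-flow time scaled by congestion and road condition."""
    free_flow_h = seg["length_km"] / seg["speed_kmh"]
    return free_flow_h * 60 * seg["congestion"] * seg["condition"]

for seg in segments:
    print(f"{seg['name']}: {travel_time_min(seg):.1f} min")
# Autoroute 40: 11.5 min, Rue Sherbrooke: 14.3 min -- the longer road "costs" less,
# and a shortest-path algorithm can run unchanged on these costs instead of raw lengths.
```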

In reading this article, I was reminded of a previous article that we read on spatial social networks (Radil et al., 2009). Both of these articles highlight the interesting role of space in network analysis. Networks are fundamentally spatial due to their graphical basis, but they can also be used to represent explicitly spatial geographic networks.

Curtin (2007)

Friday, November 10th, 2017

As with many of the topics covered in class, though I have used network analysis, I never read much background on the subject, because I mostly used it as a tool in various GISystems applications. For instance, I had never thought about the origin of the shapefile, or about its positive/negative attributes beyond the fact that I use shapefiles for some things and not for others. Once again, this shows the shortcomings of using GIS strictly as a tool, and some of the important background and concepts that are lost when it is used in this way.

One thing that particularly stood out in this article by Curtin (2007) was the discussion of the Travelling Salesman Problem (TSP): how its solutions are heuristic, and how the abstraction from "true" solutions is not properly or completely understood. To me, this links back to what I feel I am getting out of this course, which is a deeper understanding of the background, importance, and shortcomings of various GIScience concepts that is truly lacking in other GIS courses I have taken. As Curtin (2007) mentions, network analysis is now mostly used in route mapping like MapQuest (once upon a time) and Google Maps, without most people having any background knowledge of how those routes are computed or the algorithms used. This is something the author touches on briefly but doesn't explore fully, and something I feel is very important to the broadening use of GIScience in everyday life.
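To make the point about heuristics concrete, here is the classic nearest-neighbour heuristic for the TSP on a handful of invented stops: it produces a reasonable tour quickly, but with no guarantee of matching the true optimum found by brute force.

```python
import math
from itertools import permutations

# Hypothetical delivery stops (planar coordinates in km).
stops = {"Depot": (0, 0), "A": (1, 0.9), "B": (2, 0), "C": (3, 0.9), "D": (4, 0)}

def dist(p, q):
    return math.hypot(stops[p][0] - stops[q][0], stops[p][1] - stops[q][1])

def tour_length(order):
    """Length of the closed tour visiting `order` and returning to the start."""
    return sum(dist(a, b) for a, b in zip(order, order[1:] + order[:1]))

def nearest_neighbour(start="Depot"):
    """Heuristic: always visit the closest unvisited stop next."""
    unvisited = set(stops) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda s: dist(tour[-1], s))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

heuristic = nearest_neighbour()
optimal = min((["Depot"] + list(p) for p in permutations([s for s in stops if s != "Depot"])),
              key=tour_length)
print(round(tour_length(heuristic), 1), heuristic)  # ~9.4 km: fast, but not optimal here
print(round(tour_length(optimal), 1), optimal)      # ~8.7 km: brute force, tiny cases only
```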

On Dunn & Newton (1992) and early 1990s Network Analysis

Thursday, November 9th, 2017

Dunn & Newton's article "Optimal Routes in GIS and Emergency Planning Applications" (1992) heavily discusses the mathematics behind the Dijkstra algorithm and its spin-off, the "out-of-kilter" algorithm, as well as the latter's use in early-'90s GISoftware and on early-'90s computers.

The "out-of-kilter" algorithm diverts flow along multiple paths when demand from one node to another increases, as with a surge in traffic during an emergency evacuation. I would have liked some more information from this article on the possible uses of network analysis for everyday people, but I concede this could have been difficult, as personal GISystem use did not really exist then the way it does today. The network analysis that Dunn & Newton discuss uses set points on available road networks for its running example, but they could have considered a world in which network analysis relies on (unscrambled, post-2000s) GPS and constant refreshing. They briefly mention that some emergency vehicles have on-board navigation systems, which implies that they had the capability to discuss GPS and network analysis further, but did the inaccuracy of GPS at the time affect the emergency vehicles? Also, without these systems, a user would have to start and end at set points and be limited to analyzing a specific area that 1) their computer could hold and 2) their data covered, and on-the-fly adjustments (commonplace now) could not occur without extensive coordination.

I am looking forward to learning more about current uses & future advancements, especially now that GISoftware isn’t just reserved for highly specialized people as it was in 1992, and that computers are faster (and that cloud computing, (more) accurate GPS, and mobile devices exist)!

On Goodchild & Li (2012) and Validation in VGI

Thursday, November 9th, 2017

I thought that this article, "Assuring the quality of volunteered geographic information", was super interesting. The overview of the evolution of "uncertainty" in GIScience was a welcome addition and a good segue into the three approaches to quality assurance (crowd-sourcing, social, and geographic).

Exploring the social approach further, it stipulates that there will always be a hierarchy, even within a seemingly broad/open structure. Goodchild & Li briefly discuss how there is often a small number of users who input information and an even smaller number who verify that information, alongside the large number of regular users.

For future additions to OSM or other crowd-sourced sites, it would be super interesting to show who's actually editing/adding, and to make that info easily available and present on the screen. Currently in OSM, one can see the usernames of the most recent editors of an area; with some more digging, one can find all the users that have edited in an area; and with even more digging, one can look at these editors' bios or frequently mapped places and try to piece together info about them that way. I guess it is more a question of privacy (especially in areas where open data isn't really encouraged, or where there aren't a lot of editors other than bots, or both), but hopefully this sort of post-positivist change comes. I recently learned that most of OSM's most active users and validators (worldwide) are white North American males between the ages of 18 and 40, which unfortunately is not unbelievable, and raises further questions about what information is being mapped and what's being left out. Some info isn't mapped because the mappers are not interested in it (for example, what a 25-year-old guy would want to see on a map may not even overlap with what a 65-year-old woman would want to see on a map, and this gets even more tangled when also considering gender, geographic, or ethnic/"race" dimensions). Showing this information, or at least making it less difficult to find or access without lots of time and adequate sleuthing skills, might compel lay users to be more interested in where exactly their information is coming from.