Asia on the Move: Research Challenges for Population Geography

October 26th, 2015

Graeme Hugo here elaborates a wide-ranging argument for the relevance of population geography to the question of international migration among the increasingly migratory populations of East Asia.
The article speaks to the increasingly complex global system of flows, encompassing goods, capital, and ideas as well as humans. Amidst what I’d call a general weakening of the sovereignty of states and a concomitant increase in their interdependence, the world of humans and their stuff resembles one unified system more obviously than ever before. At the same time, this world is massively chaotic: while it was once relatively simple to analyze European immigration to America as a function of birthrates, the automation of rural farm labour, and the growth of the American economy, the cyclical, multi-directional flows of human beings in and out of Asia in the 1990s rightly demand (as Hugo demonstrates) a different analytical approach.

And what of GIScience and big data? Hugo doesn’t delve as deeply into the complex methods he outlines as I would have liked, but with the multiplying ways that human beings can leave a recognisable trace today, I would argue that it has become generally easier to track even undocumented migrants. As evidence I’d present the exhibition “Forensis,” presented at Berlin’s Haus der Kulturen der Welt in 2014, which documents a multidisciplinary evidence-gathering effort undertaken to prove that NATO warships intentionally ignored a sinking ship full of African migrants. The researchers used advanced statistical methods, remote sensing data, modelling and visualization techniques, as well as human rights law, to successfully mount a case against NATO in international courts. As we hone our techniques for detecting human beings, questions of our responsibility for them naturally arise. http://www.hkw.de/en/programm/projekte/2014/forensis/start_forensis.php

Big Data: A Bubble in the Making? (geocomplexity)

October 26th, 2015

Coming off the heels of last week’s seminar discussion, I can’t help (for better or for worse) but read the articles about geocomplexity through the lens of uncertainty. In particular, I am reminded of when Professor Sieber challenged me to make an argument for why uncertainty could be good, and I proposed that some level of geographic uncertainty is likely to mitigate the worst effects of spatially occurring trends of discrimination (e.g. red-lining, gerrymandering), while also accommodating a diversity of geographic experiences and ontologies. In his article “Asia on the Move: Research Challenges for Population Geography”, Graeme Hugo discusses geocomplexity as it pertains to conceptualizing and analyzing human migration in Asia. I wonder, somewhat contrary to conventional wisdom, whether we are headed toward a world of more geographic uncertainty, in spite of the emergence of big data and the call for a “major and focused multidisciplinary research effort” to circumvent the “huge gaps in our knowledge of the patterns, causes and consequences of international migration in Asia” (Hugo 95).

Hugo points out that census data are predicated on the assumption that “individuals and families have a single place and country of residence”, and are therefore increasingly difficult to use for studying migration patterns. As discussed in the paper, ease of travel has accommodated several migration patterns that involve living part-time in both the nation of origin and the nation of destination. Although Hugo presents several secondary sources for understanding migration trends, he notes nonetheless that understanding migration patterns is complicated by the increasing volume of migration, as well as the “increasing heterogeneity of the international labour flows and the people involved in them” (Hugo 103). It is that remark about the “heterogeneity” of labour flows that intrigues me.

If the motives behind labour migration are increasingly divergent, what implications does that have for studying migration patterns at all, even if we develop techniques to use secondary/alternative sources to mitigate the issue of geocomplexity? In my opinion, certain assumptions held by human geographers will become invalid; in the case of migration and geocomplexity, we can no longer assume that migration is necessarily driven by economic necessity. Increasingly, rich, middle-class, and poor people are drawn to migrate for a variety of reasons, and even if we grasp exactly how many people move around, we will not be able to make assumptions as to why, or even about the nature or duration of their migration.

To frame this another way, even if the quantity of data we are collecting is increasing, I believe its certainty, validity, and utility are often decreasing. In the same way that we’ve discussed the limitations of making sweeping demographic assumptions from VGI (e.g. people post information on social media selectively and aspirationally), so too are there limitations to capturing migration patterns in any region of the world. The reasons for migration are increasingly heterogeneous, and simply having the numbers tells us nothing. In my opinion, this is bad news for Uber, Facebook, or any other company whose stock market value is intimately tied to the anticipated value of their amassed datasets. But it’s good news for anyone who’s worried about their privacy and their ability to be profiled by their data footprint. It’s certainly contrary to the general thrust of this course, but I think our ability to be profiled based on data footprints is overstated.

~CRAZY15

Climate change: the ultimate complexity

October 26th, 2015

Manson and O’Sullivan’s article raises some very interesting points about geospatial complexity, the difficulty of navigating between the very general and the specific, complexity in ontologies and epistemologies, and complexity in computer modeling. One of the first things that caught my eye was the authors’ point that space-and-place-based research recognizes the importance of both qualitative and quantitative approaches. Disregarding qualitative data is a critique I have read often in the critical GIS literature, and I was glad to see that the authors not only addressed this, but made space for qualitative approaches in their vision for complexity studies going forward.

The article actually made me reflect on my studies in environment. Geospatial complexity as it is explained in this article is closely connected to the environment, and I immediately thought of climate change. Environmental systems are complex systems that are often not fully understood; for example, it’s difficult to know where tipping points lie. Climate change is also a problem for which experts struggle to navigate between making generalizations and losing sight of the particular, a tension the authors address in this article. Yes, it will bring wide, sweeping changes to the planet which can be generalized as warming, but different places at a smaller scale will experience unique, unpredictable changes. Manson and O’Sullivan state that space, place and time are all part of complex systems, and of course, they are part of the complex system of climate change.

The authors conclude that it is an exciting time to be part of the research on complexity and space-and-place, and that complexity studies is moving beyond the phase of “starry-eyed exuberance.” From my perspective on the complexity of climate change, I’d say that there is no better time than now, because complexity seems to be an essential part of trying to understand what is happening on the planet.

-denasaur

Complexity theory in the study of space and place

October 26th, 2015

This article by Manson and O’Sullivan (2006) addresses some of the controversy, implications, and challenges around complexity theory. Complexity theory is true to its name: indeed complex. The fact that it is so interdisciplinary, or “supradisciplinary” as the authors note, means it has implications across many fields and should thus be pursued with caution. After I was introduced to the idea of a “supradisciplinary” theory, I decided to type ‘complexity theory in’ into my Google search bar and let auto-fill do the rest. I was surprised to find the top hits in education, nursing, data structures, business, and leadership. Data structures and business made sense, but the others I had to follow up on. As the paper suggests, complexity theory truly is applicable across disciplines, from educational reform to nursing triage frameworks. For those of you who can read something once and understand it, good for you. I’m not one of those people. So, slightly perplexed, I set out to reread this article to answer my questions: why and how was this possible?


The answer: relationships. Why, of course! (Upon reading this I promised myself I would try to write on a topic other than ontologies, but it now seems unavoidable.) It all comes down to ontological relationships. Complexity theory relies on an ontology that “makes few restrictive assumptions about how the world is”, enabling the most holistic assessment currently available to the scientific community, perhaps outside of narratives or other ‘non-scientific’ sciences. For this very reason, it is applicable to many spheres and also faces challenges with generalizations, as the authors explain. Generalizing relationships is something I have become increasingly concerned with while researching and building my own ontology. Essentially, anything you wish to include in or exclude from an ontology can be considered a ‘design decision’, but where do we draw the line between a ‘design decision’ and a serious omission of information (potentially an over-generalization) with ethical implications? How can this be addressed?


-BannerGrey


Complexity theory in the study of space and place

October 26th, 2015

What struck me about the article, Complexity theory in the study of space and place, was how complexity theory transcends a variety of disciplines and schools of thought. It brings to mind the ultimate quest for a theory of everything. In addition, it tries to address the question of whether we may devise models and theories, based on empirical observations, that have the capacity to explain the world as we know it. Geocomplexity is highly related to the topic of uncertainty in spatial data, because it revisits the problem of the extent to which truth plays a role in modeling spatial observations. A key insight, although it does not directly answer questions concerning approaches to validating complexity-based models, is that “evaluation and validation of complexity-based models are as likely to be narrative and political in nature as they are to be technical and quantitative” (Manson and O’Sullivan, 2006). Narratives and political ideology highlight the importance that epistemology plays in complexity-based modeling of space and place. It seems that a big challenge in complexity science will be uncovering a better understanding of approaches that are at once complementary and at odds with one another. Examples of forces that are at odds within complexity theory include generalization and specificity, qualitative and quantitative reasoning, ontology and epistemology, pattern and process, holism and reductionism, and abstract theory and empirical evidence.

I found the discussion about an overemphasis on pattern over process within complexity-based modeling to be a very interesting argument. I would agree that my experiences with GIS have tended to conflate spatial patterns with spatial processes. The static interface of ArcMap tends to highlight the spatial patterns within my analysis, and I tend not to even entertain the possibility that the spatial patterns I see could be produced by two processes that conflict with one another.

I enjoyed how the article was organized. I thought it helpful that it laid out a series of central questions and then set out to answer them. Following a series of central questions is important because complexity theory has such wide-ranging applications. Complexity theory is particularly difficult to write about within the context of geography because it is plagued by conflicting definitions and tends to be over-hyped in certain academic circles.

-geobloggerRB

Complexity Theory – Manson and O’Sullivan

October 25th, 2015

Manson and O’Sullivan’s article Complexity Theory in the Study of Space and Place (2006) discusses “whether complexity theory is too specific or too general, through some ontological and epistemological implications, and on to the relationships of complexity theory with computational modeling” (688). The authors consistently refer to “space-and-place-based studies” rather than introducing the discipline of GIScience and its involvement in space and place research (687). I believe they did this deliberately because their article highlights that complexity theory is inter-disciplinary, as well as “supradisciplinary,” meaning that multiple topics and disciplines interested in space and place (e.g. anthropology and geography) are intertwined to conceptualize the complexity of a certain phenomenon (680). The authors also mention that “this breadth can be seen as a weakness with respect to disciplinary coherence and depth of analysis”; however, I believe GIScience is the discipline that aims to develop “disciplinary coherence” and new analyses in “space-and-place research” (ibid.).

Additionally, I wonder how complexity theory will be considered within anthropology and volunteered geographic information (VGI), especially since complexity theory is still trying to conceptualize “‘other ways of knowing’” as well as generalizations/specifics (687). As we discussed in class with Indigenous GIS and mapping, ‘others’ conceptualize space and place differently from Western ethnocentric standards. Consequently, improvements in modeling the ‘other’s’ social/cultural complex systems have been neglected, because it is difficult to program different conceptualizations of space and place unless the ‘other’ is the one doing the modeling. In another case, Schlesinger (2015) created an “Urban-Rural Index (URI)” through “crowd-sourced data” that represents the “spatial complexity” of “urban development patterns” in rural developing regions (295). Although Schlesinger’s URI may be useful for city planning, this specific example shows how complexity theory can be “too general” (especially since crowd-sourced data can lack specificity) and may treat the social spatial complexities of rural-to-urban migrations as “facile algorithmic expression[s]” (679). With technology improving and cyberGIS becoming more established, I hope complexity theory can help conceptualize these social complex processes/relations/patterns/movements that are usually considered “too specific or too general” (688).

-MTM

Schlesinger, J. (2015). Using crowd-sourced data to quantify the complex urban fabric. In J. J. Arsanjani, A. Zipf, P. Mooney, & M. Helbich (Eds.), OpenStreetMap in GIScience: Experiences, research, and applications (pp. 295-315).


A complex view of place, space and scale

October 25th, 2015

Even from the first page, I received the impression that “Complexity theory in the study of space and place” by Manson and O’Sullivan (2006) was a well-written paper. It tells a story and proceeds at a smooth pace that is easy to read, while still providing substantial information on the topic. I did find the constant references to various philosophical theories, such as reductionism and holism, difficult to assimilate into my understanding of complexity as I do not have a background in such theories. I felt like I was receiving an introduction to philosophy and complexity at the same time – a bit overwhelming! However, it did make me realize that an understanding of basic philosophical theories would probably help my conceptualization of GIScience as a whole – which was not a connection I thought to make in this class. To give credit where it is due, the authors did help comprehension by providing short definitions or context for obscure words within the text.

In asking its three main questions, “(1) Does complexity theory operate at too general a level to enhance understanding? (2) What are the ontological and epistemological implications of complexity? And (3) What are the challenges in modeling complexity?” (678), the paper highlights the tension inherent in the field of complexity. One problem that seemed especially prominent was the conflict between understanding emergent behaviour and the desire to simplify models. Computational modelling was presented both as a solution for accommodating large numbers of heterogeneous variables and as an easy avenue towards simplification (683).

The authors also made some references to spatial scale that I found particularly intriguing – namely how emergence and scaling up from local to more global phenomena can conflict with modeling assumptions of uniform patterns over different scales. I am finding more and more that all of our individual research topics are converging on each other. Complexity relates to spatial scale, which relates to ontologies, which relates to uncertainty, and so forth. I have not yet fully decided what that means for the broader context of understanding GIScience in my own head but I think it is important to acknowledge the increasingly common ground. I feel as if, through this class, I am step by step building my own conceptual network model of GIScience. It is not a linear path by any means – rather circular and backtracking in fact – but slowly, slowly, slowly the connections form.

-Vdev

Modelling Vague Places – the meaning in a name

October 19th, 2015

Excuse me for my invocation of a bit of prose, but this was the first thing to spring to mind upon completing the article by Jones et al. (2008):

 

What’s in a name? that which we call a rose
By any other name would smell as sweet;
So Romeo would, were he not Romeo call’d,
Retain that dear perfection which he owes
Without that title. Romeo, doff thy name;
And for that name, which is no part of thee,
Take all myself. (2.2.47-53)

– Juliet, Romeo and Juliet

 

Personally, I usually can’t stand the insipid characters in the aforementioned play, but in this case they do provide interesting context. While Juliet is happy to ignore all the meaning in a name, I would argue that Jones et al. do the opposite; in fact, they assume that all names have enough meaning that even vague geographic descriptors should be subject to quantitative analysis. I do wonder how big data, with its sheer quantity of observations, could help the analysis of vague spaces by allowing for better modelling.


After our discussion in class about Indigenous use of GPS and GIS, I also want to know how the modelling of vague places could be specific to an ontology that prioritizes precision. Does everything need to be quantified? Should it be quantified, and where exactly does ambiguity fit into more cultural understandings of places?


Furthermore, I really liked the description of how a place can be described by what it is not; for example, an area can be clearly defined by determining all the boundaries around it. A fun fact: the Students’ Society of McGill University was actually called the “Students’ Society of the Educational Institute Roughly Bounded by Peel, Penfield, University, Sherbrooke and Mac Campus” as a protest against not being able to use the word ‘McGill’ in student club names. Overall, names are incredibly important and can be described in many ways, and methods of quantifying vague names could give rise to new understandings of how space is conceptualized.


-Vdev

Jones, et al.: Modelling vague places with knowledge from the web

October 19th, 2015

In “Modelling vague places,” Jones, et al. introduce a novel method of natural language processing for vague toponymic data. They use open-source Named-Entity Recognition methods to extract associative place-names from the results of Google searches of vague toponymic terms such as “the Cotswolds,” an area straddling 6 different counties in Southern England. Then, using a gazetteer, they assign coordinates to the data extracted to transform the text into geolocated points. These are interpolated using density estimation techniques to draw the boundaries of vaguely-defined regions.
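To make the pipeline concrete, a minimal Python sketch of the same idea (my own illustration, not the authors’ code) might look like the following. The gazetteer entries, the sample mentions, and the 50%-of-maximum density cutoff are all invented, and scipy’s generic gaussian_kde stands in for the paper’s density estimation technique.

```python
# Sketch of: harvested place names -> gazetteer lookup -> density surface -> thresholded extent.
# All names, coordinates, and thresholds below are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

# Toy gazetteer: place name -> (longitude, latitude). Real work would use a full gazetteer.
gazetteer = {
    "Bourton-on-the-Water": (-1.846, 51.885),
    "Stow-on-the-Wold": (-1.724, 51.930),
    "Cirencester": (-1.968, 51.719),
    "Chipping Campden": (-1.780, 52.050),
}

# Place names extracted (e.g. by named-entity recognition) from pages mentioning "the Cotswolds".
harvested = ["Stow-on-the-Wold", "Cirencester", "Bourton-on-the-Water",
             "Cirencester", "Chipping Campden", "Stow-on-the-Wold"]

# Geolocate the mentions; names missing from the gazetteer are simply dropped.
points = np.array([gazetteer[name] for name in harvested if name in gazetteer]).T

# Fit a kernel density estimate over the geolocated mentions.
kde = gaussian_kde(points)

# Evaluate the density on a grid; the region above a chosen threshold approximates
# the "vague" extent. The 50%-of-max cutoff is an arbitrary illustration.
xs, ys = np.meshgrid(np.linspace(-2.2, -1.5, 100), np.linspace(51.6, 52.2, 100))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
inside = density >= 0.5 * density.max()
print(f"{inside.sum()} of {inside.size} grid cells fall inside the estimated extent")
```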

The process is representative of the general move toward big-data research: in the past, researchers on the topic would conduct interviews with a necessarily limited number of human beings, who would sketch out their notions of the boundaries or centres of vague areas. Meanwhile, GIS platforms employ administrative definitions which are clearly not always suited to the needs of, say, a Google Maps end-user who wants to know the boundaries of a neighbourhood such as Mile End, which has no official representation on a map or spatial data layer. Ask 10 different Montrealers where the southern boundary of the neighbourhood lies, and you will probably get several different answers. If an ontologically precise boundary definition were the goal, we might prefer the huge n-value of this sort of textual analysis to the anecdotal reports of several different people.

While the researchers employ a gazetteer to assign geographic coordinates to place-names, we can imagine that geolocative metadata extracted from Facebook posts or tweets could offer a potential alternative, especially when dealing with small, densely-populated areas of cities rather than large regions like the Cotswolds or Scottish Highlands.

I imagine that big-data approaches offer a lot to the development of natural language processing–the ability of machines to process language as humans do. In some areas of NLP, such as named-entity recognition, machines can almost match humans’ ability to determine which words signify a person, an organization, or a place. As computers become better at thinking like us, they may begin to teach us the “truest” meaning of our own concepts.
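For a concrete sense of what named-entity recognition looks like in practice, here is a minimal sketch using spaCy, a library that postdates the paper (Jones et al. used their own open-source NER tools, not this one). The sentence is invented, and the small English model has to be installed separately.

```python
# Illustrative only: not the pipeline used by Jones et al.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("The Cotswolds stretch from Chipping Campden in the north "
        "to Bath in the south, taking in Cirencester and Stroud.")

doc = nlp(text)
# GPE (geopolitical entity) and LOC (location) labels mark candidate place names.
places = [ent.text for ent in doc.ents if ent.label_ in {"GPE", "LOC"}]
print(places)
```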

-grandblvd

Modeling Vague Places

October 19th, 2015

The article, Modelling Vague Places with Knowledge from the Web, acknowledges the fact that delimiting places is embedded in human processes. The paper’s discussion of people’s perception of the extent of places reminds me of my own topic, spatial cognition within the field of GIScience (Jones et al., 2008). For example, the authors assert that one way to map the extent of a “vague” place is to ask human subjects to draw its boundary. Acquired spatial knowledge of landmarks, road signs, nodes, and intersections informs how we define and draw these boundaries. In addition, the important role of language in forming Web queries reminds me of literature I have read about the relationship between spatial cognition and language; specifically, the article reminds us that spatial relationships between objects are encoded both linguistically and visually. This topic also relates closely to Olivia’s topic of geospatial ontologies. It reminds me of the example Prof. Sieber gave us in class about trying to define what constitutes a mountain. Where do we draw the line? Who gets to agree on what makes a mountain? What empirical approaches exist, or can be applied to human interviews, to determine what defines a geospatial entity such as a mountain?

In addition, I liked this article because it reveals the science behind GIS applications. More specifically, the article examines the science behind density surface modeling, alternative web harvesting techniques, and new methods applied to geographical Web search engines. I found the discussion about web harvesting relevant to my experience of applying webscraping tools to geospatial data in GEOG 407. Learning how these tools can be applied to observe and represent vague places is a very interesting concept and a dimension of web harvesting I had never considered before reading this article.

In addition, this paper reveals to me the part that the geospatial Web plays in increasing the magnitude and extent of geospatial data collection. I suspect that in the future, the geospatial Web will play an important part in conducting data-driven studies about problems and uncertainties within the field of GIScience.

-geobloggerRB

Modelling vague places

October 19th, 2015

After reading the paper on approaches to uncertainty, it was interesting to see a case study of how these concepts are put into practice. In Approaches to Uncertainty, the authors outline the nature of uncertainty in spatial data, distinguishing two strains: one in which the object is well defined and the errors are therefore probabilistic in nature, and another in which the object is poorly defined, which results in more vague and ambiguous forms of uncertainty. In Modelling Vague Places, the authors present density modelling as an effective way of representing the uncertainty of a place name’s extent.

In their article, Jones et al. discuss the difficulty of storing spatial information for vague place names, like the Rockies or a downtown, that are not strictly defined. The authors mention that interviews are a powerful tool for determining how subjects conceptualize vague places. They then conclude that automatic web-harvesting is a better way to go because it is clearly more time-efficient. However, there is still room for interviews in smaller-scale studies.

For example, I found this article to be a good addition to a discussion on qualitative GIS that I had in GEOG 494. In the presented reading, a researcher had collected information through interviews about how safe a Muslim woman felt going about her daily activities through space post-9/11. Through her interviews, the researcher found that her subject’s definition of friendly space had shrunk in post-9/11 society. I bring up this example to show that not all uncertainty due to vague definitions of space in a GIS can be modelled using web-based knowledge or automated processes.

-anontarian


Modelling Vague Places – Jones et al.

October 19th, 2015

Through “Web-harvesting,” Jones et al.’s Modelling Vague Places (2008) introduces techniques to improve the modeling of vague places (1048). I was interested in how Jones et al. utilized “place names” from the Web to create their models, because I am following a similar methodology in my own research. While researching my own project on volunteered geographic information (VGI) and Twitter harvesting, I read an article by Elwood et al. (2013) called Prospects for VGI Research and the Emerging Fourth Paradigm that explains how people have a tendency to use place over space when they contribute geographic information through a public platform (i.e. social media or blogs). For example, a Twitter user may post a street name without geotagging the post; the only geographic information provided is then a place attribute, not coordinate information. This makes it more difficult to gather specific/precise spatial information when crowd-sourcing data from the Web. What my project’s methodology and Jones et al.’s article have in common is that both look at “semantic components” (Bordogna et al. 2014, p. 315), meaning both identify textual Web information to gather information on the “precise places that lie within the extent of the vague places” (1046). Additionally, Jones et al. “decrease[d] the level of noise” through filters, something I will also be doing while harvesting Tweets (1051). With comparable methodological approaches, I will certainly consider some of Jones et al.’s techniques while completing my own project.
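As a rough sketch of that kind of filtering (keeping posts that mention a known place name but carry no coordinates), something like the snippet below could work. The tweet dictionaries, field names, and place-name list are invented for illustration; they are not Twitter’s actual data structures, nor my final pipeline.

```python
# A minimal sketch of place-attribute filtering over already-harvested posts.
# All field names and example data are hypothetical.
import re

place_names = ["Mile End", "Plateau", "Saint-Laurent"]          # toy list of place attributes
pattern = re.compile("|".join(re.escape(p) for p in place_names), re.IGNORECASE)

tweets = [
    {"text": "Great bagels in Mile End this morning", "coordinates": None},
    {"text": "Stuck in traffic on Saint-Laurent again", "coordinates": None},
    {"text": "Beautiful sunset tonight", "coordinates": (-73.58, 45.52)},
]

def extract_place_mentions(posts):
    """Keep posts that mention a known place name but have no geotag,
    i.e. the cases where place (not coordinate) information must be inferred."""
    matched = []
    for post in posts:
        m = pattern.search(post["text"])
        if m and post["coordinates"] is None:
            matched.append({"text": post["text"], "place": m.group(0)})
    return matched

print(extract_place_mentions(tweets))
```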

Similarly to what we discussed last class, this article also highlights issues with ‘big data’: specifically, how can we sift through so much heterogeneous data and pull out the most relevant information in an efficient and time-saving way? Jones et al. introduce strategies to sift through the Web’s big data, but it would be interesting to see how these techniques have changed in the seven years since the article was published. CyberGIS could certainly improve the validity of gathering “published texts” off the Web by solving technological issues, such as improving the automated algorithms that affected the results of Jones et al.’s research (1048).

One final point: the digital divide is not mentioned in this article. Although Jones et al. focused their research within the U.K., where a wealthier population has the means to access the Web, it is important to consider that people in poorer localities may not be contributing any information to the Web. This overlooks local people’s interpretations of their landscape/place, which would be considered “rich in geographical content” if they could contribute information to the Web (1051).

-MTM

Bordogna, G., Carrara, P., Criscuolo, L., Pepe, M., & Rampini, A. (2014). A linguistic decision making approach to assess the quality of volunteer geographic information for citizen science. Information Sciences, 258, 312-327.

Elwood, S., Goodchild, M., & Sui, D. (2013). Prospects for VGI research and the emerging fourth paradigm. In D. Sui, S. Elwood, & M. Goodchild (Eds.), Crowdsourcing Geographic Knowledge: Volunteered Geographic Information (VGI) in Theory and Practice (pp. 361-376). Dordrecht: Springer.


Approaches to Uncertainty in Spatial Data

October 18th, 2015


This text outlined many facets of uncertainty, and I found it to be very informative. There seemed to be an abundance of information packed into a very short chapter; I suppose this speaks to the depth of uncertainty inherent in spatial data. What I enjoyed most about this read was its connection to my research topic: ontologies and semantics.


One of the key sources of uncertainty is how an object is defined; this is often a subjective matter and may be very hard to quantify. The focus of ontologies in general is to define a vocabulary in such a way that it is explicitly understood by both humans and computers. Prior to reading this chapter, had you asked me whether a well-constructed ontology would help combat data uncertainty, I would have been quick to respond: absolutely, yes. However, my position has changed. Of course, enough people should agree upon a well-constructed ontology that subjectivity is no longer problematic, but when dealing with a domain ontology (like a geospatial one), the community that gives the “ok” is in agreement on certain things; say, they share a similar epistemology. The purpose of ontologies is to facilitate interoperability between domains and world-wide data exchange, so these domain-specific definitions may not translate well into other areas of research. For example, using a land-use ontology to find data and then translating this into a study of land cover, or vice versa, may be problematic and cause a significant level of uncertainty. This leaves me questioning where adjustments are to be made. On one hand, there could be full disclosure of problems with uncertainty, with anything contentious addressed in the near-‘final product’. Or do we adjust fundamentals, like ontologies, to attempt to account for such uncertainty (though this may inhibit an ontology’s effectiveness at doing its job)? So David, maybe your seminar will clear this up for me, but how on earth do we begin to address uncertainty in all its forms?!
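To make the land-use versus land-cover worry concrete, here is a toy illustration of my own (not from the chapter): when several use classes map onto one cover class, or one use class maps onto several, translating data between the two ontologies introduces exactly the kind of uncertainty described above. All categories below are invented.

```python
# Toy mapping between a hypothetical land-use ontology and a land-cover ontology.
# Ambiguous (one-to-many) mappings are flagged as sources of uncertainty.
land_use_to_cover = {
    "residential":   ["built-up"],
    "commercial":    ["built-up"],
    "pasture":       ["grassland"],
    "recreation":    ["grassland", "forest", "built-up"],   # ambiguous: parks vary widely
    "orchard":       ["forest", "cropland"],                # ambiguous: tree crops
}

def translate(use_class):
    """Return the candidate cover classes and whether the translation is unambiguous."""
    covers = land_use_to_cover.get(use_class, [])
    return covers, len(covers) == 1

for use in land_use_to_cover:
    covers, certain = translate(use)
    flag = "unambiguous" if certain else "UNCERTAIN"
    print(f"{use:12s} -> {covers} ({flag})")
```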


-BannerGrey


Embracing Uncertainty?

October 18th, 2015

I found the chapter “Approaches to Uncertainty” to be an interesting read, although it definitely comes from an empirical, quantitative perspective. In particular, the discussion of ambiguity was interesting and somewhat confusing to me. I think that even the existence of discord depends on the user, the individual defining the object: in a territorial dispute, one individual may not even recognize that a dispute exists, while another might argue over it. Something I found difficult about the authors’ discussion of discord was that in the flow chart (figure 3.1), “expert opinion” follows from discord. This seems troublesome to me, and does not seem to fit with the example the authors use for discord. In a land dispute, where two groups have laid claim to an area and have deep roots there, it would not be appropriate to defer to an expert’s opinion. For one thing, who would be the expert? There are power dynamics inherent in who resolves spatial uncertainty and, in doing so, legitimizes one thing or another.

The article also made me reflect on our discussion about indigenous epistemologies. Rundstrom (1995) describes how indigenous people exhibit a “trust in ambiguity” and embrace the nuances of geographic spaces and living beings. In the chapter, ambiguity is defined as confusion over how a phenomenon should be classified because of differing perceptions of it. I think indigenous people, as Rundstrom understands them, would take issue with “how” a phenomenon should be classified, and ask whether it should even be classified at all. Can GIScience embrace ambiguity in some ways? There is certainly a need for a way to incorporate more ambiguity into GIS if we are to try to represent indigenous geographies.

-denasaur

(As a side note if anyone is interested: I thought that the article at the following link brings up some interesting questions about spatial uncertainty – it incorporates many of the definitions this article does, as well as some discussion of indigenous conceptions of space. The figure 1 diagram is a good visual. http://pubs.iied.org/pdfs/G02958.pdf.)

“Sure, Everything Looks Fine on the Map, But …”: Communicating Spatial Data Uncertainty to End-Users

October 17th, 2015

In Chapter 3 of Fundamentals of Spatial Data Quality, the authors review approaches to uncertainty in spatial data, with a focus on the subject as it pertains mainly to geographic information systems (GIS) and, more broadly, GIScience (Fisher et al., 2006). The main themes of uncertainty (ambiguity, vagueness, and error) are reviewed, and each theme’s challenges are listed.

While even introductory-level GIS users can come to understand the importance of uncertainty relatively quickly, end-users of GIS products (maps, spatial analysis results, 3D visualizations of phenomena) may take the data at face value, as they typically care only about the final results and conclusions, whether for research, policy-making, or a navigational product to be sold to the general public. How do we ensure that uncertainty is captured not only within the quantitative analysis on the GIS-user side, but also in the end-user’s visual interpretation?

This relates straight back to the conversation last class about the ethical implications of GIScience, and how to reconcile differences in cultural and historical epistemologies and ontologies. Producing a companion map with the same spatial extent as the original may allow users who are not familiar with GIS, or more broadly GIScience, to understand the probability that a given region of the original map contains errors. As for ambiguity, perhaps multiple maps could be produced, although this would only be realistic with Web GIS, where users can select layers to visualize and perhaps even change the underlying assumptions of the GIS to account for the personal aspirations of the intermediate or end-user (e.g. geopolitical conflicts).

That being said, I look forward to next week’s class on Uncertainty to discuss this topic further, as well as the class where we will discuss Visualization in its various forms.

-ClaireM

Wang – CyberGIS

October 12th, 2015

Wang’s article is on cyberGIS: software that operates on parallel and distributed cloud computing rather than the typical single-computer, sequential GIS. This software, particularly the CyberGIS Gateway, reminded me of our class discussion on how GIS is taught in a research-oriented university. Initially I was frustrated that research-oriented schools are reluctant to teach a step-by-step class on using ArcGIS. However, from a practical perspective it doesn’t make sense to teach one software package when there exist so many open-source alternatives that catch up more quickly to the demands of researchers and businesses. Our computing capability, especially through cloud computing, is increasing so rapidly, as are our datasets, that CyberGIS looks set to become the future.

As we move towards CyberGIS, the hope is that the traditional cost and skill barriers to GIS will fall, opening up the toolset to a wider range of disciplines. Should the challenge of CyberGIS then be to make the toolsets easier for this wide range of people to use? The development of easy-to-use tools for hydrology and emergency management allows decision makers to access powerful networks of computers to manage complex data. In this sense, I think CyberGIS can empower smaller organizations, rather than only corporations or governments, to make good decisions. Big-data applications are often criticized as being the tools of large corporations like Google that possess expensive infrastructure. CyberGIS allows smaller organizations to compete with large firms, and possibly to break down the hegemony of these massive companies, by performing complex analytics without the expensive infrastructure.
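As a minimal sketch of the sequential-versus-parallel contrast (my own illustration, not code from Wang or the CyberGIS Gateway), the snippet below decomposes a job into tiles and runs them across local cores with Python’s multiprocessing; a cyberGIS environment would distribute the same decomposition across remote, cloud-hosted resources instead. The per-tile “analysis” is a placeholder.

```python
# Sequential vs. parallel execution of a tiled spatial job (illustrative only).
from multiprocessing import Pool

def analyze_tile(tile_id):
    """Placeholder for an expensive per-tile operation (e.g. interpolation or a viewshed)."""
    return tile_id, sum(i * i for i in range(100_000))

tiles = list(range(16))

if __name__ == "__main__":
    # Sequential ("single-computer") execution: one tile after another.
    sequential = [analyze_tile(t) for t in tiles]

    # Parallel execution across local cores; a distributed cyberGIS setup would
    # spread the same tiles across many machines instead of one.
    with Pool(processes=4) as pool:
        parallel = pool.map(analyze_tile, tiles)

    print(sequential == parallel)  # same results, produced concurrently
```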

-AnOntarian

GCI and the future of GIScience (GCI past to future)

October 12th, 2015

Yang et al.’s paper is an exhaustive review of the advancement of cyberinfrastructure since researchers first sought to define the term. Yang et al. discuss both the utilized and untapped potential of the interconnectedness of the world in the 21st century, first from a general perspective and then from a spatial/geographic perspective.

As the authors discuss the existence of this network, I found that the desire to define the term came after the inception of the network itself, and from the desire for connectivity among the vast amount of information available. The main theme of the paper seems to be intelligent integration and cooperation between various computing platforms and scientific organizations, such as NASA, NEPTUNE, and ESRI.

I found Yang et al.’s review of GCIs’ untapped environmental and geographic potential to be the most accessible and obvious component. Ideals such as heterogeneous integration between various institutions and data collectors multiply the analytical possibilities of scientific research. From a geographic perspective, I believe a healthy GCI is the next logical step in the evolution of GIScience, following the inception of the geographic information system. With the introduction of big data and open-source information, individual users and consumers can become more involved in a field that would otherwise be inaccessible if not for the existence of GCIs to simplify data-intensive endeavours, such as those discussed in section 5.9, Education.

Smitty_1


Geospatial Cyberinfrastructure

October 12th, 2015

The title of the article by Yang et al. (2010), “Geospatial Cyberinfrastructure: Past, present and future” should have given me a clue as to the extensive scope of the paper. By trying to cover almost every single aspect of GCIs, the authors provided an impressive review. Yet it was somewhat overwhelming as an introduction to the subject. I see the value in this paper as a reference text for more knowledgeable users. However, I would have liked to see more concrete, in-depth explanations of GCIs. I think my understanding of a GCI would also have been aided by an in-depth description of an unrelated CI and how exactly it was different from a GCI. Essentially, more tangible references would have helped my comprehension. The authors themselves link their work to GIScience by stating that this is a review of recent developments and that “similar to how GIS transformed the procedures for geospatial sciences, GCI provides significant improvements to how the sciences that need geospatial information will advance (265).”

While the article was very clear about the direction that GCI advancement should take, the authors skimmed over barriers that might impede progress towards those end goals. The desire in particular for “a semantic (ontology) based framework that is sensitive to the scale, richness, character, and heterogeneity within and across disciplines (272)” is almost a chimera. I would argue that the ‘grand challenges’ briefly identified should be expanded into full papers themselves. How to integrate cyberinfrastructures across disciplines and shift them towards human-centered paradigms are challenges that, once solved, could provide substantial improvements to the field. Geospatial cyberinfrastructure development seems to be at a crucial turning point. If all contributors could individually maintain as thoughtful a vision of the GCI framework for the future as Yang et al., while resolving current discrepancies, then these far-reaching goals might become attainable.

-Vdev

Putting the ‘soul’ in GIS (geospatial cyberinfrastructures)

October 12th, 2015

Throughout the course, we have discussed various ways in which GIS can be manipulated for unethical ends. For instance, we have asked: to what extent does online advertising which discriminates based on assumed demographic characteristics exclude marginalized populations? To what extent can business practices (such as Uber’s “surge pricing”) contingent on GIS data be considered appropriate? And how does military involvement in the development of GIS operations implicate the field?

In their article “Geospatial Cyberinfrastructure: Past, present and future”, Yang et al. expound various goals for making GCI more inclusive, democratic, and multi-disciplinary. For instance, we are promised that GCI will help “to advance citizen-based sciences to reflect the fact that cyberspace is open to the public and citizen participation will be essential” (264) and provide a standardized way for a multitude of actors, including “government agencies, non-government organizations, industries, academia, and the public” (264), to manipulate geospatial data. Yang et al. argue persuasively that the complexity and interdisciplinary scope of contemporary problems, such as developing “strategies to reduce energy consumption and stabilize atmospheric emissions so that global temperature will not increase… [and choosing] a housing site that minimizes the risks of forest fire, flooding and other… hazards” (267), demand a coordinated approach, and that GCI, with its enabling technologies such as web computing, open-source software, and interoperable platforms, is able to provide a coordinated platform for this problem-solving.

But in this push to make GIS simultaneously more democratic and legible to a variety of actors, how will GIS remain an ethical science? Already in the “closed” world of GIS, where meaningful operations require access to knowledge and resources, energy companies have assembled legions of capable GIS technicians to explore for extractable resources, and companies have established marketing departments engaged in ethically dubious GIS practices. So what does the world look like once the barriers of cost and knowledge to GIS use are removed or, at least, lowered? Do the ‘good guys’ win their case more often because they now have access to a multitude of data once available only behind a walled fortress of GIS elites? Or does the ease of access allow data use to go completely unmonitored? In other words, if we continue to hold that GIS is a science, how can the field maintain a ‘soul’ (its own Hippocratic Oath, if you will) and a reasonable set of ethics best practices? And how does it remain a science when its increasing scope and level of interoperability will have many academics and non-academics using it primarily as a tool?

-CRAZY15

Wang (2015) CyberGIS: Initially Skeptical, Now Converted

October 12th, 2015

In his 2015 paper, Shaowen Wang outlines the current state of CyberGIS as a growing ‘interdisciplinary field’ that hopes to enable widespread cooperation on geospatial research by creating a framework which integrates all sorts of data-processing techniques from a number of ‘research domains’ and allows for real-time, high-volume, multi-user work by taking advantage of modern networking technologies such as cloud computing, multi-core processing on an unprecedented scale, and remote work.

At first, CyberGIS felt to me like a catch-all umbrella term with little purpose, a buzzword that packaged old concepts in a new way. Following Wang’s article, however, I am convinced of the relevance of CyberGIS and the exciting possibilities it offers, particularly with regard to scalability and the democratization of computer processing power. Our in-class discussion of the paradigms which narrow our understanding of GIScience (e.g. attention to ‘maps’ over all other forms of data presentation) informed my reflection on Wang’s review of the existing status quo in research: ‘sequential computing’, ‘monolithic architecture’ and other concepts which we take for granted when we engage with online work.

CyberGIS has the potential to become an unprecedented force for radical change in academic research. The notion that a researcher defaults to working alone from a modest workstation, carefully guarding the fruits of their research and selectively collaborating through individualized in-person or online communication, could soon be overhauled. The barriers to research inherent in limited access to powerful computers could potentially be broken down by a combination of cloud computing, easier collaboration and delegation of tasks, and real-time remote access to more computers and better facilities. This has enormous implications for the democratization of GIScience, which go hand in hand with the tenets of the open-source movement. Together, CyberGIS and open-source approaches to software development could certainly change the nature of resource accessibility in academia, providing opportunities for better collaboration and less elitism in GIScience. In my opinion, GIScience continues to suffer from high barriers to entry on both a physical and intellectual front, and it is high time for a fresh approach. CyberGIS might just be it.

-KH