Archive for October, 2015

Ominous Omission of Ethics in Smart Cities

Friday, October 30th, 2015

The article Contextual Sensing: Integrating Contextual Information with Human and Technical Geo-Sensor Information for Smart Cities by Sagl, Resch, and Blaschke (2015) was certainly an interesting read. They begin by addressing the idea of context as both a means of analyzing data and a consideration for data collection. They then look at the human-environment-technology relationship that is essential to the development of smart citizens, and ultimately smart cities. They also address the geospatial aspects through context-aware analysis approaches and finish with the future of smart city development.

Though Sagl et al. do mention many challenges associated with building smart cities, I was surprised at the ominous omission of ethics from the entire discussion. The closest they come to the concept of ethics is when mentioning non-nadir remote sensing technologies (basically drones), which are not allowed in urban environments “for good reasons” (17023). I find the idea of employing smart citizens or people-as-sensors as the main means of data collection very interesting but ethically questionable, especially when the information being recorded is not voluntarily disclosed. I recognize this is already happening on a large scale in the private sector, particularly with regards to social media and advertising. The fact of the matter is that the majority of people involved in these exchanges are largely unaware of their participation. For this to be developed in a more ethical way, information collected should remain non-disclosed to any third parties and be used solely to increase the QoL of the citizens. This may seem obvious and easy to enforce; however, I fear the grey area is easy to manipulate. For example, should a third party studying movement in a rainstorm be granted access to mobile tracking by all local phone companies if they are working to increase urban mobility? The argument could go both ways. I guess the question becomes: how willing is the public to disclose private information in the hopes of building better, healthier living environments?

-BannerGrey

 

Why does a smart city need to be spatially enabled? – Roche

Thursday, October 29th, 2015

After reading Stéphane Roche’s article (2014) on smart cities and GIScience’s role in their development, I am not sure I am entirely convinced that such a grand idea can be achieved. GIScience’s role in the development of smart cities seems to be more on the technical and computational side. Roche repeatedly mentions how GIScience will contribute efficient spatial information to cities; however, these efficiencies seem to be directed mostly toward technical solutions. For example: making “mobile positioning technologies… [that are] more user-friendly interfaces,” or developing “information technologies, networks and sensors so as to optimize its ‘routine’ operations” (4-5).

Even if cyberGIS and its corresponding infrastructure can develop efficient algorithms and “user-friendly interfaces” that allow citizens to contribute “meaningful” geospatial information, this article dismisses how difficult it is to change people’s values and behaviors. Roche mentions that there are “three conditions required” to establish “spatially enabled” smart cities (6). Nevertheless, for this to happen, people’s behaviors and values will need to shift; most people today tend to opt out of geotagging their social media posts, so I question how smart city supporters can convince citizens to change their behaviors and not be so concerned about their personal information becoming more open. Additionally, security issues and whether people will be willing to give out their spatial whereabouts via sensors need to be considered (5).

Maybe it is hard for me to visualize a smart city’s success because this is the first piece of literature I have read on this topic, but I honestly think there are too many little things that need to be achieved before this grand narrative of smart cities can be addressed. Technological improvements in VGI methods are expanding, but there have been no ultimate solutions yet. Within cities, research is still developing on how VGI strategies can be useful, such as collecting citizens’ locational information via social media for disaster management purposes. Furthermore, standardized procedures in VGI alone are still being debated, so I wonder how “globally unified geospatial standards” will be agreed upon (6).

-MTM

 

Smart cities: who do they benefit?

Thursday, October 29th, 2015

Roche’s article about smart cities is an organized and interesting read which situates smart cities in GIScience and offers ways for GIScience to make cities smarter.

As I read this article, I wondered if and how smart cities might reinforce existing power structures and further marginalize some groups in urban landscapes. “Rethinking urbanization” with an approach that is more focused on individuals sounds great – but it raises the question: which individuals are we focusing on? For example, it was troubling to me that neither this article nor the Sagl et al. article mentions how smart cities could also be accessible cities, in ways that current cities are not. Would the smart cities the author envisions make public transit wheelchair accessible or help people with social anxiety avoid crowds? Where are the homeless in the author’s smart city vision, and how can they contribute geospatial information? Another problem is that proposing technological solutions and enhancing the “digital city” dimension of smart cities comes with the problem of access to and exclusion from these technologies. The author does address this critique, however, saying that if initiatives are driven by technologies, they can be reductive and one-size-fits-all.

Overall it seems to me that smart cities have an enormous amount of potential to improve the lives of many people, but we must be sure that all people are included. Hopefully, this is where the concept of the “intelligent city” comes into play, using VGI and participatory GIS to connect citizens; and where the “open city” increases cooperation and transparency.

~denasaur

Migration in Asia

Monday, October 26th, 2015

The sense I get from this reading is that while in the past immigrants could generally be described as “defecting” from one country-system to another, countries are now more integrated into a single global migrant system. As such, migrants are following movement patterns for which conceptual frameworks and national data systems are ill-equipped, at least at the time the article was written. Zelinsky’s mobility transition model is useful for understanding the common migration patterns that countries experience as they undergo a specific type of economic restructuring. However, as 20th-century growth models become less dependable going forward, we may witness the emergence of more complex migration patterns. Furthermore, since the world is not becoming more politically unified even though migration systems are becoming more integrated, the migration data systems of the world’s countries will probably continue to be fractured in a way that becomes increasingly inadequate, over time, for developing conceptual frameworks.

 

-Yojo

 

Geocomplexity Explored Through Human Migration

Monday, October 26th, 2015

In his 1996 paper “Asia on the Move: Research Challenges for Population Geography”, Graeme Hugo explores the dynamics of a newly emerging network of economic migration, characteristic of the fluidity of the developed and developing world in the late 20th and early 21st century. I must say I am surprised at the date of this paper’s publication, mostly due to the author’s mention of social networks and the relevance this paper has 20 years later. I now believe he used the term “social network” differently than we do today (a social network being a network of people socially connected, not necessarily through media such as Facebook or the internet).

Geocomplexity is a self-defining term, and as a concept it is very applicable to what the author calls “Population Geography.” In striving to chart the dimensions of complexity in international population flows, he reveals why this increased level of population mobility is not simply a labor-related phenomenon. Although these economic migrants are motivated to move by the prospect of work, there are many other factors to consider.

Private and government institutions operate within and outside the law to aid immigration and emigration based on their own country’s needs and the needs of an entire region. Asian countries are disjointed: they exist at different stages of the international migration transition, which adds a political dimension to the migration and commoditizes labor. These economic migrants are not all hopeless, poor laborers as the term might suggest; wealthy individuals have the means to lead double lives in the business sector, participating in the workings of both Asian and Euro-American economies. Due to the inherent spatial dimension of this phenomenon, Hugo asserts many reasons why this complex issue is one of geographic relevance, and why it is the responsibility of geographers to keep pace with the growth of data through the formulation of spatial analytical methods.

 

Smitty_1

(RIP Graeme Hugo)

Hugo – Challenges for Population Geography

Monday, October 26th, 2015

The article by Hugo outlines the research challenges for population geographers in a world of accelerating migratory flows. Reasons for this acceleration include migration from Asia to the West, skilled worker migration to Asia, contract labor migration, student and business short-term migration, illegal migration, and refugee flows. The author writes that the major challenge of studying the complexity of population geography is obtaining good data on the informal flows of migrants. Traditional census data is just too slow to keep up with all the short-term migration.

This was written in 1996. Since then we have added more than a billion people, and migration flows have only grown increasingly complex. Massive outmigration from Syria has strained sluggish refugee systems, creating a renewed interest in population geography. Unfortunately, much of the motivation to collect data has focused on determining where flows of foreign fighters in Syria are coming from, rather than on where to efficiently resettle millions of refugees. Since Hugo’s article was written, governments have actually increased their abilities to track and monitor movement across borders. Especially in the example of foreign fighters going to Syria, governments have been gathering data on anyone who visits countries deemed suspicious. The rise of political movements like ISIS is clear proof that migration models have to move beyond simple economic push-pull factors. This increasing vigilance has probably expanded the ability to model complex migratory flows for population geography, if researchers can get access to the data.

In the case of studying the push-pull factors of migrants flowing across the Mediterranean, the political context is actually a strong desire to stop these flows. This is not to say we should just stop researching because governments misuse information. But it seems increasingly unlikely, in the current political climate*, that modelling this migration will result in a social benefit.

*European political climate, I have high hopes for JT’s promise to resettle 25k refugees.

 

-A proud anOntarian

 

Asia on the Move: Research Challenges for Population Geography

Monday, October 26th, 2015

Graeme Hugo here elaborates a wide-ranging argument for the relevance of population geography to the question of international migration among the increasingly migratory populations of East Asia.

I found the article speaks to the increasingly complex global system of flows, encompassing goods, capital, and ideas as well as humans. Amidst what I’d call a general weakening of the sovereignty of states and a concomitant increase in their interdependence, the world of humans and their stuff resembles one unified system more obviously than ever before. At the same time, this world is massively chaotic, and while it was at some point relatively simple to analyze European immigration to America as a function of birthrates, automation of rural farm labour, and the growth of the American economy, the cyclical, multi-directional flows of human beings in and out of Asia in the 1990s rightly (as Hugo demonstrates) demand a different approach to understand.

And what of GIScience and big data? Hugo doesn’t delve as deeply into the complex methods he outlines as I would have liked, but with the multiplying ways that human beings can leave a recognisable trace today, I would argue that it has become generally easier to track even undocumented migrants. As evidence I’d present the exhibition “Forensis,” presented at Berlin’s Haus der Kulturen der Welt in 2014, which documents a multidisciplinary evidence-gathering effort undertaken to prove that NATO warships intentionally ignored a sinking ship full of African migrants. The researchers used advanced statistical methods, remote sensing data, modelling and visualization techniques, as well as human rights law to successfully mount a case against NATO in international courts. As we hone our techniques for detecting human beings, questions of our responsibility for them are naturally raised. http://www.hkw.de/en/programm/projekte/2014/forensis/start_forensis.php

Big Data: A Bubble in the Making? (geocomplexity)

Monday, October 26th, 2015

Coming off the heels of last week’s seminar discussion, I can’t help (for better or for worse) but read the articles about geocomplexity through the lens of uncertainty. In particular, I am reminded of when Professor Sieber challenged me to make an argument for why uncertainty could be good, and I proposed that some level of geographic uncertainty is likely to mitigate the worst effects of spatially occurring trends of discrimination (e.g., redlining, gerrymandering), while also accommodating a diversity of geographic experiences and ontologies. In his article “Asia on the Move: Research Challenges for Population Geography”, Graeme Hugo discusses geocomplexity as it pertains to conceptualizing and analyzing human migration in Asia. I wonder, somewhat contrary to conventional wisdom, if we are headed to a world of more geographic uncertainty, in spite of the emergence of big data and the discussion of a “major and focused multidisciplinary research effort” to close the “huge gaps in our knowledge of the patterns, causes and consequences of international migration in Asia” (Hugo 95).

Hugo points out that census data is predicated on the assumption that “individuals and families have a single place and country of residence”, and is therefore increasingly difficult to use for studying migration patterns. As discussed in the paper, ease of travel has accommodated several migration patterns which involve living part-time in both the nation of origin and the nation of destination. Although Hugo presents several secondary sources for understanding migration trends, he notes nonetheless that understanding migration patterns is complicated by the increasing volume of migration, as well as the “increasing heterogeneity of the international labour flows and the people involved in them” (Hugo 103). It is that remark about the “heterogeneity” of labour flows that intrigues me.

If the motives behind labour migration are increasingly divergent, what implications does that have for studying migration patterns at all, even if we develop techniques to use secondary/alternative sources to mitigate the issue of geocomplexity? In my opinion, this will mean that certain assumptions held by human geographers will become invalid; in the case of migration and geocomplexity, it will mean that we cannot assume migration is necessarily driven by economic necessity. Increasingly, rich, middle-class, and poor people are drawn to migrate for a variety of reasons, and even if we grasp exactly how many people move around, we will not be able to make assumptions as to why, or even as to the nature or duration of their migration.

To frame this another way, even if the quantity of data we are collecting is increasing, I believe its certainty, validity, and utility are often decreasing. In the same way we’ve discussed the limitations of making sweeping demographic assumptions about VGI (e.g., people post information on social media selectively and aspirationally), so too are there limitations to capturing migration patterns in any region of the world. The reasons for migration are increasingly heterogeneous, and simply having numbers tells us nothing. In my opinion, this is bad news for Uber, Facebook, or any other company whose stock market value is intimately tied to the anticipated value of their amassed datasets. But it’s good news for anyone who’s worried about their privacy and their ability to be profiled by their data footprint. It’s certainly contrary to the general thrust of this course, but I think our ability to be profiled based on data footprints is overstated.

~CRAZY15

Climate change: the ultimate complexity

Monday, October 26th, 2015

Manson and O’Sullivan’s article raises some very interesting points about geospatial complexity, the difficulty of navigating between the very general and the specific, complexity in ontologies and epistemologies, and complexity in computer modeling. One of the first things that caught my eye was that the authors mentioned that space-and-place-based research recognizes the importance of qualitative and quantitative approaches. Disregarding qualitative data is a critique I have read often in the critical GIS literature, and I was glad to see that the authors not only addressed this, but made space for qualitative approaches in their vision for complexity studies going forward.

The article actually made me reflect on my studies in environment. Geospatial complexity as it is explained in this article is actually quite connected to the environment, and I immediately thought of climate change. Environmental systems are complex systems that are often not fully understood – for example, it’s difficult to know tipping points. Climate change is also a problem where experts struggle to navigate the space between making generalizations and losing sight of the particular, a problem the authors address in this article. Yes, it will make wide, sweeping changes to the planet which can be generalized as warming – but different places at a smaller scale will experience unique, unpredictable changes. Manson and O’Sullivan state that space, place and time are all part of complex systems – and of course, they are part of the complex system of climate change.

The authors conclude that it is an exciting time to be part of the research of complexity and space-and-place, and that complexity studies is moving beyond the phase of “starry-eyed exuberance.” From my perspective of the complexity of climate change, I’d say that there is no better time than now, because complexity seems to be an essential part of trying to understand what is happening on the planet.

-denasaur

Complexity theory in the study of space and place

Monday, October 26th, 2015

This article by Manson and O’Sullivan (2006) addresses some of the controversy, implications, and challenges around complexity theory. Complexity theory is true to its name: indeed complex. The fact that it is so interdisciplinary, or “supradisciplinary” as the authors note, means it has implications across many fields and should thus be pursued with caution. After I was introduced to the idea of a “supradisciplinary” theory, I decided to type ‘complexity theory in’ into my Google search bar and let auto-fill do the rest. I was surprised to find the top hits in education, nursing, data structure, business, and leadership. I mean, data structure and business made sense, but the others I had to follow up on. As the paper suggests, complexity theory really truly is applicable across all disciplines, from educational reform to the nursing triage framework. For those of you who can read something once and understand it, good for you. I’m not one of those people. So, slightly perplexed, I set out to reread this article to answer my questions—why and how was this possible?

 

The answer? Relationships. Why of course! (Upon reading this I promised myself I would try to write on a topic other than ontologies, but this now seems unavoidable.) It all comes down to ontological relationships. Complexity theory relies on an ontology that “makes few restrictive assumptions about how the world is”. This enables the most holistic assessment currently available to the scientific community, perhaps outside of narratives or other ‘non-scientific’ sciences. For this very reason, it is applicable to many spheres and also faces challenges with generalizations, as the authors explain. Generalizing relationships is something I have become increasingly concerned with while researching and building my own ontology. Essentially, anything you wish to include or not include in an ontology can be considered a ‘design decision’, but where do we draw the line between a ‘design decision’ and a serious omission of information (potentially an overgeneralization) with potential ethical implications? How can this be addressed?

 

-BannerGrey

 

Complexity theory in the study of space and place

Monday, October 26th, 2015

What struck me about the article, Complexity theory in the study of space and place, was how complexity theory transcends a variety of disciplines and schools of thought. It brings to mind the ultimate quest for the theory of everything. In addition, it tries to address the question of whether we may devise models and theories based on empirical observations that have the capacity to explain the world as we know it. Geocomplexity is highly related to the topic of uncertainty in spatial data, because it revisits the problem surrounding the extent that truth plays in modeling spatial observations. A key insight, although it does not directly answer questions concerning approaches to validating complexity-based models, is that “evaluation and validation of complexity-based models are as likely to be narrative and political in nature as they are to be technical and quantitative” (Manson and O’Sullivan, 2006). Narratives and political ideology highlight the importance that epistemology plays in complexity-based modeling of space and place. It seems that a big challenge in complexity science will be uncovering a better understanding of approaches that exist complementary to, and at odds with, one another. Examples of forces that are at odds within complexity theory include generalization and specificity, qualitative and quantitative reasoning, ontology and epistemology, pattern and process, holism and reductionism, and abstract theory and empirical evidence.

I found the discussion about an overemphasis on pattern over process within complexity-based modeling to be a very interesting argument. I would agree that my experiences with GIS have tended to conflate spatial patterns with spatial processes. The static interface of ArcMap tends to highlight the spatial patterns within my analysis, and I tend not to even entertain the possibility that the spatial patterns I see could be produced by two processes that conflict with one another.

I enjoyed how the article was outlined. I thought it helpful that the article laid out a series of questions for it to answer. Following a series of central questions is important because complexity theory has such wide-ranging applications. Complexity theory is particularly difficult to write about within the context of geography because it is plagued by conflicting definitions and tends to be overly hyped by certain academic circles.

-geobloggerRB

Complexity Theory – Manson and O’Sullivan

Sunday, October 25th, 2015

Manson and O’Sullivan’s article Complexity Theory in the Study of Space and Place (2006) discusses “whether complexity theory is too specific or too general, through some ontological and epistemological implications, and on to the relationships of complexity theory with computational modeling” (688). The authors constantly mention “space-and-place-based studies” rather than introducing the discipline of GIScience and its involvement in space and place research (687). I believe they did this deliberately because their article highlights that complexity theory is inter-disciplinary, as well as “supradisciplinary,” meaning multiple topics and disciplines that are interested in space and place (e.g. anthropology and geography) are intertwined to conceptualize the complexity of a certain phenomenon (680). Manson and O’Sullivan also mention that “this breadth can be seen as a weakness with respect to disciplinary coherence and depth of analysis”; however, I believe GIScience is the discipline that aims to develop “disciplinary coherence” and new analyses in “space-and-place research” (ibid.).

Additionally, I wonder how complexity theory will be considered within anthropology and volunteered geographic information (VGI), especially since complexity theory is still trying to conceptualize “‘other ways of knowing’” as well as generalizations/specifics (687). As we discussed in class with Indigenous GIS and mapping, ‘others’ conceptualize space and place differently from Western ethnocentric standards. Consequently, improvements in modeling ‘others’’ social/cultural complex systems have been neglected, because it is difficult to program different conceptualizations of space and place unless the ‘other’ is the one doing the modeling. In another case, Schlesinger (2015) created an “Urban-Rural Index (URI)” through “crowd-sourced data” that represents the “spatial complexity” of “urban development patterns” in rural developing regions (295). Although Schlesinger’s URI may be useful for city planning, this specific example shows how complexity theory can be “too general” (especially since crowd-sourced data can lack specificity) and may treat the social spatial complexities of rural-to-urban migrations as “facile algorithmic expression[s]” (679). With technology improving and cyberGIS becoming more established, I hope complexity theory can help conceptualize these complex social processes/relations/patterns/movements that are usually considered “too specific or too general” (688).

-MTM

Schlesinger, J. (2015). Using Crowd-Sourced Data to Quantify the Complex Urban Fabric. In Arsanjani, J. J., Zipf, A., Mooney, P., & Helbich, M. (Eds.), OpenStreetMap in GIScience: Experiences, research, and applications (pp. 295-315).

 

A complex view of place, space and scale

Sunday, October 25th, 2015

Even from the first page, I received the impression that “Complexity theory in the study of space and place” by Manson and O’Sullivan (2006) was a well-written paper. It tells a story and proceeds at a smooth pace that is easy to read, while still providing substantial information on the topic. I did find the constant references to various philosophical theories, such as reductionism and holism, difficult to assimilate into my understanding of complexity as I do not have a background in such theories. I felt like I was receiving an introduction to philosophy and complexity at the same time – a bit overwhelming! However, it did make me realize that an understanding of basic philosophical theories would probably help my conceptualization of GIScience as a whole – which was not a connection I thought to make in this class. To give credit where it is due, the authors did help comprehension by providing short definitions or context for obscure words within the text.

When asking the three main questions of “(1) Does complexity theory operate at too general a level to enhance understanding? (2) What are the ontological and epistemological implications of complexity? And (3) What are the challenges in modeling complexity?” (678), the paper highlights the tension inherent in the field of complexity. One problem that seemed especially prominent was the conflict between understanding emergent behaviour and the desire to simplify models. Computational modelling was presented both as a solution for accommodating large numbers of heterogeneous variables and as an easy avenue toward simplification (683).

The authors also made some references to spatial scale that I found particularly intriguing – namely how emergence and scaling up from local to more global phenomena can conflict with modeling assumptions of uniform patterns over different scales. I am finding more and more that all of our individual research topics are converging on each other. Complexity relates to spatial scale, which relates to ontologies, which relates to uncertainty, and so forth. I have not yet fully decided what that means for the broader context of understanding GIScience in my own head but I think it is important to acknowledge the increasingly common ground. I feel as if, through this class, I am step by step building my own conceptual network model of GIScience. It is not a linear path by any means – rather circular and backtracking in fact – but slowly, slowly, slowly the connections form.

-Vdev

Modelling Vague Places – the meaning in a name

Monday, October 19th, 2015

Excuse me for my invocation of a bit of verse, but this was the first thing to spring to mind upon completing the article by Jones et al. (2008):

 

What’s in a name? that which we call a rose

By any other name would smell as sweet;

So Romeo would, were he not Romeo call’d,

Retain that dear perfection which he owes

Without that title. Romeo, doff thy name;

And for that name, which is no part of thee,

Take all myself. (2.2.47-53)

– Juliet, Romeo and Juliet

 

Personally, I usually can’t stand the insipid characters in the aforementioned play, but in this case they do provide interesting context. While Juliet is happy to ignore all the meaning in a name, I would argue that Jones et al. do the opposite – in fact, they assume that all names have meaning enough that even vague geographic descriptors should be subject to quantitative analysis. I do wonder how big data, with its sheer quantity of data points, could help the analysis of vague spaces by allowing for better modelling.

 

After our discussion in class about indigenous use of GPS and GIS, I also want to know how modelling of vague places could be specific to an ontology that prioritizes precision. Does everything need to be quantified? Should it be quantified, and where exactly does ambiguity play into a more cultural understanding of places?

 

Furthermore, I really liked the description of how a place can be described by what it is not; for example, an area can be clearly defined by determining all the boundaries around it. A fun fact: the Students’ Society of McGill University was actually called the “Students’ Society of the Educational Institute Roughly Bounded by Peel, Penfield, University, Sherbrooke and Mac Campus” as a protest against not being able to use the word ‘McGill’ in student clubs. Overall, names are incredibly important and can be described in many ways, and methods of quantifying vague names could give rise to new understandings of how space is conceptualized.

 

-Vdev

Jones et al.: Modelling vague places with knowledge from the web

Monday, October 19th, 2015

In “Modelling vague places,” Jones et al. introduce a novel method of natural language processing for vague toponymic data. They use open-source named-entity recognition (NER) methods to extract associated place names from the results of Google searches for vague toponymic terms such as “the Cotswolds,” an area straddling six counties in Southern England. Then, using a gazetteer, they assign coordinates to the extracted data, transforming the text into geolocated points. These are interpolated using density estimation techniques to draw the boundaries of vaguely defined regions.
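
To make that pipeline concrete, here is a minimal sketch of the same idea. It is not Jones et al.’s actual implementation: the snippets, place names, and coordinates are toy stand-ins, and naive string matching takes the place of a real NER system.

```python
# Sketch of the vague-place pipeline: harvest text, match place names
# against a gazetteer, then estimate a density surface whose contours
# would approximate the vague region's extent. Toy data throughout.
import numpy as np
from scipy.stats import gaussian_kde

snippets = [
    "Stow-on-the-Wold is a market town in the Cotswolds",
    "Visitors to the Cotswolds often stay in Cirencester",
    "Chipping Norton sits on the edge of the Cotswolds",
]
gazetteer = {  # place name -> (lon, lat), all approximate
    "Stow-on-the-Wold": (-1.723, 51.930),
    "Cirencester": (-1.968, 51.717),
    "Chipping Norton": (-1.545, 51.941),
}

# 1. Stand-in for NER: pull out gazetteer names appearing in each snippet.
points = [
    coords
    for text in snippets
    for name, coords in gazetteer.items()
    if name in text
]

# 2. Kernel density estimation over the geolocated points; contouring
#    this surface would delineate the region's approximate boundary.
xy = np.array(points).T  # shape (2, n)
kde = gaussian_kde(xy)
print(kde(xy))  # density evaluated at each harvested point
```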

The process is representative of the general move toward big-data research: in the past, researchers on the topic would conduct interviews with a necessarily limited number of human beings, who would sketch out their notions of the boundaries or centres of vague areas. Meanwhile, GIS systems employ administrative definitions which are clearly not always suited to the needs of, say, a Google Maps end user who wants to know the boundaries of a neighbourhood such as Mile End, which has no official representation on a map or spatial data layer. Ask 10 different Montrealers where the southern boundary of the neighbourhood lies, and you will probably get several different answers. If an ontologically precise boundary definition were the goal, we might prefer the huge n-value of this sort of textual analysis to the anecdotal reports of several different people.

While the researchers employ a gazetteer to assign geographic coordinates to place-names, we can imagine that geolocative metadata extracted from Facebook posts or tweets could offer a potential alternative, especially when dealing with small, densely-populated areas of cities rather than large regions like the Cotswolds or Scottish Highlands.

I imagine that big-data approaches offer a lot to the development of natural language processing: the ability of machines to process language as humans do. In some areas of NLP, such as named-entity recognition, machines can almost match humans’ ability to determine which words signify a person, an organization, or a place. As computers become better at thinking like us, they may begin to teach us the “truest” meaning of our own concepts.
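
For a taste of what NER looks like in practice, here is a sketch using the open-source spaCy library (one toolkit among many, and not necessarily what Jones et al. used; it assumes spaCy and its small English model are installed).

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Broadway lies in Worcestershire, on the edge of the Cotswolds.")
for ent in doc.ents:
    # Place names typically come back labelled GPE or LOC.
    print(ent.text, ent.label_)
```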

-grandblvd

Modeling Vague Places

Monday, October 19th, 2015

The article, Modelling Vague Places with Knowledge from the Web, acknowledges the fact that delimiting places is embedded in human processes. The paper’s discussion of people’s perception of the extent of places reminds me of my own topic related to spatial cognition within the field of GIScience (Jones et al., 2008). For example, the authors assert that one way to map the extent of a “vague” place is to ask human subjects to draw its boundary. Acquired spatial knowledge of landmarks, road signs, nodes, and intersections informs how we define and draw these boundaries. In addition, the important role of language in applying Web queries reminds me of literature I read about the relationship between spatial cognition and language. Specifically, the article reminds us of how spatial relationships between objects are encoded both linguistically and visually. This topic also relates closely to Olivia’s topic of geospatial ontologies. It reminds me of the example Prof. Sieber gave us in class about trying to define what constitutes a mountain. Where do we draw the line? Who gets to agree on what makes a mountain? What empirical approaches exist, or can be applied to human interviews, to know what defines a geospatial entity such as a mountain?

In addition, I liked this article because it reveals the science behind GIS applications. More specifically, the article examines the science behind density surface modeling, alternative web-harvesting techniques, and new methods applied to geographical Web search engines. I found the discussion about web harvesting relevant to my experience of applying web-scraping tools to geospatial data in GEOG 407. Learning how these tools can be applied to observe and represent vague places is a very interesting concept and a dimension of web harvesting I had never considered before reading this article.

In addition, this paper reveals to me the part that the geospatial Web plays in increasing the magnitude and extent of geospatial data collection. I suspect that in the future, the geospatial Web will play an important part in conducting data-driven studies about problems and uncertainties within the field of GIScience.

-geobloggerRB

Modelling vague places

Monday, October 19th, 2015

After reading the paper on approaches to uncertainty, it was interesting to see a case study of how these concepts are put into practice. In Approaches to Uncertainty, the authors outline the nature of uncertainty in spatial data. They describe two strains of uncertainty: one where the object is well defined, and the errors are therefore probabilistic in nature; the other where the object is poorly defined, which results in more vague and ambiguous forms of uncertainty. In Modelling vague places, the authors describe density modelling as an effective method of representing the uncertainty of a place name’s extent.

In Jones et al.’s article, the authors discuss the difficulty of storing spatial information for vague place names, like the Rockies or a downtown, that are not strictly defined. They mention that interviews are a powerful tool for determining how subjects conceptualize vague places. They then go on to conclude that automatic web harvesting is a better way to go because it is clearly more time-efficient. However, there is still room for interviews in smaller-scale studies.

For example, I found this article to be a good addition to a discussion on qualitative GIS that I had in GEOG 494. In the presented reading, a researcher had collected information through interviews about how safe a Muslim woman felt going about her daily activities through space post-9/11. Through her interviews, the researcher found that her subject’s definition of friendly space had shrunk in post-9/11 society. I just bring up this example to show that not all uncertainty due to vague definitions of space in a GIS can be modelled using web-based knowledge or automated processes.

-anontarian

 

Modelling Vague Places – Jones et al.

Monday, October 19th, 2015

Through “Web-harvesting,” Jones et al.’s Modelling Vague Places (2008) introduces techniques to improve the modeling of vague places (1048). I was interested in how Jones et al. utilized “place names” from the Web to create their models because I am following a similar methodology for my own research. While researching for my own project on volunteered geographic information (VGI) and Twitter harvesting, I read an article by Elwood et al. (2013) called Prospects for VGI Research and the Emerging Fourth Paradigm that explains how people have a tendency to use place over space when they contribute geographic information through a public platform (i.e. social media or blogs). For example, a Twitter user may post a street name without geotagging their post; thus the only geographic information they are providing is a place attribute, not any coordinate information. This makes it more difficult to gather specific/precise spatial information when crowd-sourcing data from the Web. What is similar between my project’s methodology and Jones et al.’s article is that we both look at “semantic components” (Bordogna et al. 2014, p. 315), meaning we both identify textual Web information to gather information on the “precise places that lie within the extent of the vague places” (1046). Additionally, Jones et al. “decrease[d] the level of noise” through filters, something I also will be doing while harvesting Tweets, as sketched below (1051). With comparable methodological approaches, I will certainly consider some of Jones et al.’s techniques while completing my own project.
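
As a rough illustration of that kind of filtering, here is a minimal sketch; the place names, tweet structure, and helper function are all hypothetical stand-ins rather than Jones et al.’s pipeline or my actual one.

```python
# Keep only tweets that carry coordinates or mention a known place name;
# everything else is treated as noise. All data here is made up.
place_names = {"Sherbrooke Street", "Mile End", "Plateau"}

def has_place_reference(tweet: dict) -> bool:
    """True if a tweet is geotagged or its text names a known place."""
    if tweet.get("coordinates"):
        return True
    text = tweet["text"].lower()
    return any(name.lower() in text for name in place_names)

tweets = [
    {"text": "Stuck in traffic on Sherbrooke Street", "coordinates": None},
    {"text": "Nice day out", "coordinates": None},
    {"text": "Lunch downtown", "coordinates": (-73.57, 45.50)},
]

usable = [t for t in tweets if has_place_reference(t)]
print(len(usable))  # 2: one geotagged tweet, one place-name mention
```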

Similarly to what we discussed last class, this article also highlights issues with ‘big data’: specifically, how can we sift through so much heterogeneous data and pull out the most relevant information in an efficient and time-saving way? Jones et al. introduce strategies to sift through the Web’s big data, but it would be interesting to see how these techniques have changed in the seven years since this article was published. CyberGIS could certainly improve the validity of gathering “published texts” off the Web by solving technological issues, such as improving the automated algorithms that affected the results of Jones et al.’s research (1048).

One final point: the digital divide was not mentioned in this article. Although Jones et al. focused their research only within the U.K., where a richer demographic has the means to access the Web, it is important to consider that local people from poorer localities may not be providing any information to the Web. This ignores local people’s interpretations of their landscape/place, which would be considered “rich in geographical content” if they could contribute information to the Web (1051).

-MTM

Bordogna, G., Carrara, P., Criscuolo, L., Pepe, M., & Rampini, A. (2014). A Linguistic Decision Making Approach to Assess the Quality of Volunteer Geographic Information for Citizen Science. Information Sciences, 258, 312-327.

Elwood, S., Goodchild, M., & Sui, D. (2013). Prospects for VGI Research and the Emerging Fourth Paradigm. In D. Sui, S. Elwood, & M. Goodchild (Eds.), Crowdsourcing Geographic Knowledge: Volunteered Geographic Information (VGI) in Theory and Practice (pp. 361-376). Dordrecht: Springer.

 

Approaches to Uncertainty in Spatial Data

Sunday, October 18th, 2015

 

This text outlined many facets of uncertainty, and I found it to be very informative. There seemed to be an abundance of information packed into a very short chapter; I suppose this speaks to the depth of uncertainty inherent in spatial data. What I enjoyed most about this read was its connection to my research topic: ontologies and semantics.

 

One of the key sources of uncertainty is how an object is defined; this is often a subjective matter and may be very hard to quantify. The focus of ontologies in general is to define a vocabulary in such a way that it is explicitly understood by both humans and computers. Prior to reading this chapter, had you asked me whether a well-constructed ontology would help combat data uncertainty, I would have been quick to respond: absolutely, yes. However, my position has changed. Of course, enough people should agree upon a well-constructed ontology that the subjectivity is no longer problematic, but when dealing with a domain ontology—like a geospatial one—the community that gives the “ok” is in agreement about certain things; say they have a similar epistemology. The purpose of ontologies is to facilitate interoperability between domains and worldwide data exchange, so these domain-specific definitions may not translate well into other areas of research. For example, using a land-use ontology to find data and then translating this into a study of land cover, or vice versa, may be problematic and cause a significant level of uncertainty. This leaves me questioning where adjustments are to be made. On one hand, there could be full disclosure of problems with uncertainty, and anything contentious could be addressed in the near-‘final product’. Or do we adjust fundamentals, like ontologies, to attempt to account for such uncertainty (though this may inhibit an ontology’s effectiveness at doing its job)? So David, maybe your seminar will clear this up for me, but how on earth do we begin to address uncertainty in all its forms?!

 

-BannerGrey

 

Embracing Uncertainty?

Sunday, October 18th, 2015

I found the chapter “Approaches to Uncertainty” to be an interesting read, although it is definitely one coming from an empirical, quantitative perspective. In particular, the discussion of ambiguity was interesting and somewhat confusing to me. I think that even the existence of discord depends on the user, the individual defining the object. In a territorial dispute, one individual may not even recognize that a dispute exists, while another might argue over it. Something that I found difficult about the author’s discussion of discord was that in the flow chart (figure 3.1), “expert opinion” follows from discord. This seems troublesome to me, and does not seem to fit with the example the author uses for discord. In a land dispute, where two groups have laid claim to an area and have deep roots there, it would not be appropriate to have an expert’s opinion. For one thing, who would be the expert? There are power dynamics inherent in who resolves spatial uncertainty, and in doing so, legitimizes one thing or another.

The article also made me reflect on our discussion about indigenous epistemologies. Rundstrom (1995) describes how indigenous people exhibit a “trust in ambiguity” and embrace the nuances of geographic spaces and living beings. In the article, ambiguity is defined as confusion over how a phenomenon should be classified because of differing perceptions of it. I think indigenous people as Rundstrom understands them would take issue with “how” a phenomenon should be classified, and question whether it should even be classified at all. Can GIScience embrace ambiguity in some ways? There is certainly a need for a way to incorporate more ambiguity into GIS if we are to try to represent indigenous geographies.

-denasaur

(As a side note if anyone is interested: I thought that the article at the following link brings up some interesting questions about spatial uncertainty – it incorporates many of the definitions this article does, as well as some discussion of indigenous conceptions of space. The figure 1 diagram is a good visual. http://pubs.iied.org/pdfs/G02958.pdf.)