Archive for the ‘General’ Category

Spatial Cognition in everyday life (Montello and Raubal)

Friday, November 6th, 2015

As technology changes, so do our applications of spatial cognition. When reading this article, I first thought it was dated, as technology has replaced the need to imagine many spaces. Perhaps as a geography student I am biased, but for almost every small task I turn to travel-time estimates, Google Maps transit directions for route planning, Street View, and long hours on Google Earth, which replaces the spatially iconic symbolic representation with a digital earth.

What I have learned from this article is that spatial cognition is not irrelevant in my technology-saturated life. A good example is how people perceive the distance of my apartment from campus. Technology tells us that I live 1.5 km west of campus, whereas most of my friends live 1.5 km east of campus. Despite this similarity, almost all of these eastern friends have concluded at some point that I live “very far” from campus and that getting there must be difficult. I propose there are several factors of spatial cognition, as described in the article, that contribute to this spatial understanding. Firstly, these friends are not familiar with the area west of campus, so their ability to build wayfinding knowledge through experience is limited. Navigating through the high-rises of the downtown core, you lose common landmarks like McGill campus or Mont-Royal. The high-rises block your view and therefore limit the spatial knowledge you learn directly, as well as inhibit your sense of orientation. I believe these factors explain why my friends have a limited ability to judge the distance of my apartment from campus.

In terms of using spatial language, when explaining directions to my apartment from campus I say, “it’s just down from the Bell Centre.” I can justify this spatially vague language because most people understand how to get to the Bell Centre, as it is a large landmark of the city. In addition, the fact that people can use smartphones if they get lost means that my spatial language does not have to navigate people directly, only give them an idea of distance based on their own acquired or imagined spatial knowledge.

-anontarian

 

Sagl: Contextual Sensing for Smarter Cities

Tuesday, November 3rd, 2015

This article examines incorporating spatiotemporal contextual information in the hope of creating smarter cities.

When trying to contextualize my topic of drones in GIS, I find myself wondering how it differs from being just a tool: a sensor on a new platform. One of the possible fields of research in drone GIScience is geofencing, whereby drones are programmed not to take off in certain areas or above certain altitudes. The article mentions how drones could be used to monitor urban areas, but are not because of (good) restrictions. To create a smart city, one needs both sophisticated monitoring systems and equally sophisticated systems to keep out unwanted sensors, like drones. One of the ways in which drones could be detected and regulated is through contextual sensing. For example, police use networks of microphones that collect noise data, which is then processed to listen for drones. However, there is no single sound that identifies a drone, and many other machines can sound similar, like a faraway leaf blower. Therefore, other sensors are needed to provide context to this noise. Another way drones are sensed is through optical sensors, which could identify a distant moving object and classify it based on its flight path. However, in order to distinguish a drone from, say, an eagle, you would need to contextualize the optical information with thermal sensor information.
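To make this idea of sensor fusion concrete, here is a minimal Python sketch of how acoustic, optical, and thermal evidence might be combined. The thresholds, categories, and decision rule below are my own assumptions for illustration; they are not taken from Sagl et al. or from any real detection system.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One co-located reading from three hypothetical sensor feeds."""
    acoustic_match: float   # 0-1 similarity of the noise signature to known rotor sounds
    airborne_object: bool   # True if the optical sensor tracks a distant moving object in the air
    thermal_signature: str  # "cold", "warm_body", or "motor" (assumed categories)

def classify(obs: Observation) -> str:
    """Toy fusion rule: no single sensor decides; the others supply the context."""
    if obs.acoustic_match > 0.7 and obs.airborne_object:
        # An eagle may move like a drone but shows up as a warm body, not a motor.
        if obs.thermal_signature == "motor":
            return "probable drone"
        if obs.thermal_signature == "warm_body":
            return "probable bird"
    if obs.acoustic_match > 0.7:
        # A faraway leaf blower can match acoustically but never appears on the optical feed.
        return "acoustic hit only - needs corroboration"
    return "no drone detected"

print(classify(Observation(0.8, True, "motor")))      # probable drone
print(classify(Observation(0.8, True, "warm_body")))  # probable bird
print(classify(Observation(0.8, False, "cold")))      # acoustic hit only - needs corroboration
```

The point of the sketch is simply that each sensor’s reading only becomes meaningful once it is placed in the context of the others, which is the core claim of contextual sensing.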

From this article I learned some terms that can be used to classify drone technology. An interesting aspect of military drones is that the US government uses “collective sensing” to establish the location of a target before using what the author terms “classic sensors” to command the drone. Collective sensing is sensor data that users do not necessarily intentionally share, like their location generated from a mobile phone call. The problem, though, is that this data is often not associated with any contextual information from other sensors, and so bad judgement calls are frequently made. I think that contextual information in this form of sensing is important, but it involves more of a political shift than a shift in GIScience.

__aNOntarian__

Worthy – Open Data in the UK

Tuesday, November 3rd, 2015

 

This article looks at the complex effects of open data in the case of the UK government. The rationale for Open Data in this context was to democratize government and devolve power to the people: with government spending open to the public, surely there would be more accountability, participation, and information transmission. As mentioned last week by CRAZY15, will the “certainty, validity and utility” of the data decrease as the quantity increases? I feel that, similar to the way people behave on social media, governments will become more performative as their actions become more shared and open. The author states that governments have redacted sensitive information, and that what is published lacks context. The result is that overall engagement by the public is low, and those who do engage have specific interests. One of the most successful examples of engagement was through a website created by mySociety.org. This organization claims to “make websites that empower citizens worldwide”. An example of one of their other projects is “FixMyStreet”, an open-source tool for reporting infrastructure problems to city council. I think that organizations like this are an example of progressive toolmaking in GIS, where VGI can be integrated with open data to effect change. Simply publishing data will not create armchair auditors; we need to create the tools to understand the data.

__AnonTarian__

A Different Context for Smart Cities?

Monday, November 2nd, 2015

While I understand the importance of creating healthier, happier and more sustainable cities, “Contextual Sensing: Integrating Contextual Information with Human and Technical Geo-Sensor Information for Smart Cities” by Sagl, Resch and Blaschke seems to leave a lot unsaid. The authors talk about ‘smart citizens’ becoming bigger contributors to city dynamics, but much of the technological advancement so far has gone into making technology ubiquitous and as seamlessly fitted into our lifestyles as possible, so that it is easy to ignore. It is very interesting to see the dissonance with the presentation of information in Worthy’s article, “The Impact of Open Data in the UK: Complex, Unpredictable, and Political”, where he describes how interaction with open data varies across heterogeneous groups. It brings to mind questions such as: What would the ‘average’ citizen’s awareness of the smart city actually be? Would smart city data be open data? Who would use it? Furthermore, the feedback loop between smart citizens and beneficiaries seems to imply that people will become more externally engaged with their surroundings (17024), yet over time people have been narrowing their scope of interaction, especially in so-called third spaces, to their personal devices. I think there is some sense of taking for granted that all these environmental interactions will continue to exist with the increased saturation of technology in our lives.

Sagl et al. also remind me of our previous class discussion on big data. The authors explicitly state that more data does not necessarily provide better results (17017). However, when they state that “…in contextual sensing a larger quantity of data may allow contexts that have not previously been thought of, or have not previously been considered relevant, to be better understood and taken into account” (17017), they also seem troublingly close to the trend of aimlessly analyzing masses of data that spit out patterns without scientific methods of inquiry. I think it would be very interesting indeed to have a skeptic of open and big data analyze smart city trends. I do have to say that some of my questions are outside the scope of this article, but the tangents to be explored are potentially more interesting.

-Vdev

Spatially (& Equitably) Enabled Smart Cities

Monday, November 2nd, 2015

Stephane Roche discusses, in his 2014 article in Progress in Human Geography, the concept of a “spatially enabled city” in the context of “smart” cities. While the terminology alone inspires ideas of utopian (or dystopian) futures, the conversation that Roche presents in this piece is very much grounded in reality.

I found the discussion of the conditions that cities must meet in order to be considered “spatially enabled” in Roche’s view – spatially literate citizens, open data, and unified data standards – very interesting. What makes a citizen spatially literate? Does it require digital literacy as well? And what of Open Data (as discussed in Sundberg & Melander’s and Worthy’s respective pieces): do global citizens, or only local citizens, truly have access to all this data? What are the repercussions?

I wonder as well how we will use the remote sensing data-gathering techniques discussed in Sagl and colleagues’ 2015 article in Sensors to “spatially enable” cities. The first thought that came to mind when reading these two articles on smart cities is: who do we consider to be citizens? Will smart cities devolve into having border controls to stop digitally illiterate folk from obtaining residence status? Will smart cities be used as a tool to further stratify society?

My hope, of course, is that geospatial information & GIScience can improve society and reduce as much harm as possible. With that in mind, I look forward to seeing how the scientists developing these remote sensing tools and “spatially enabled” cities use their knowledge and expertise to improve livelihoods at all levels, so that notions of equity and equality are not left behind in the dust, but rather woven into the fabric (or circuit board) of our evolving urban centers.

-ClaireM

 

The UCDP GED & the Power of GIScience

Monday, November 2nd, 2015

Sundberg & Melander (2013) introduce the Uppsala Conflict Data Program’s (UCDP) new Georeferenced Event Dataset (GED) in their piece published in the Journal of Peace Research. The details of the dataset are presented in a concise manner; however, I had to dig a bit deeper to find more information regarding the geocoding of lethal events. I found a very interesting article written by Kristine Eck (Department of Peace and Conflict Research, Uppsala University) that highlights the geocoding procedure of the UCDP’s GED (see excerpt below):

The creators of the dataset appear to have really thought about the importance of communicating uncertainty to end-users of the database. The three-step process includes manual input from “coders”, revision of entries by a supervisor, and a final verification of the entries with specific automated processes (scripts). I applaud the creators of the dataset for this rigorous verification of entries. Moreover, I am pleasantly surprised to read that the creators of the dataset have really thought about how to deal with uncertainty in the geospatial data (e.g. a fatal event that occurs “somewhere near place X”, or “in province Y”). The introduction of a system that assigns an integer value (1-7) to an event based on the precision of the geospatial information associated with it is not particularly new: the Armed Conflict Location and Event Dataset – ACLED – has a similar 1 to 3 scale. What is noteworthy is the use of centroid locations, rather than important cities, as pseudo-locations for events that have vague event areas (rather than precise locations).

While the sociopolitical ramifications of a database of this sort are important and should be debated, I really think that the authors and creators of the dataset have done a thorough job of thinking through the use of geospatial information within their data. They strive to minimize bias towards densely populated areas, and they strive to preserve the uncertainty in the spatial information rather than “improve” it or “make it more detailed” (and thereby introduce more error into the location information), by using an uncertainty scale and by defaulting to a location other than, for example, a country’s capital city for events that have vague locations or areas.
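To illustrate the kind of rule this implies, here is a hypothetical Python sketch of assigning coordinates and a geoprecision code to a reported event. The field names and exact code values are my own assumptions, not the UCDP’s; only the general logic – a higher code means a vaguer location, and a centroid is preferred over a capital city as the fallback – comes from the readings.

```python
def assign_location(event):
    """Return (lat, lon, geoprecision) for a reported lethal event (toy rule, not UCDP's)."""
    if event.get("exact_coords"):            # "in the town of X"
        return (*event["exact_coords"], 1)
    if event.get("near_place_coords"):       # "somewhere near place X"
        return (*event["near_place_coords"], 3)
    if event.get("admin_centroid"):          # "in province Y", centroid available
        return (*event["admin_centroid"], 4)
    if event.get("admin_capital"):           # centroid unavailable, fall back to the provincial capital
        return (*event["admin_capital"], 4)
    return (*event["country_centroid"], 6)   # only the country is known

event = {"admin_centroid": (31.6, 64.3)}     # e.g. a report that says only "Helmand province"
print(assign_location(event))                # -> (31.6, 64.3, 4)
```

Carrying the precision code alongside the coordinates is what lets an end-user filter or weight events by how much the location can actually be trusted.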

I believe that this dataset is a great step forward, and that GIScience has proven to be useful and arguably essential to the success of the UCDP’s GED. As for the Sundberg & Melander piece, I really wish they had gone into more detail about the decisions behind the georeferencing of these events. That’s probably just the (albeit reluctant) GIScience side of me starting to come out, though.

– ClaireM

Eck, K. (2012). In data we trust? A comparison of UCDP GED and ACLED conflict events datasets. Cooperation And Conflict, 47(1), 124-141. http://dx.doi.org/10.1177/0010836711434463

“UCDP GED avoids a great deal of these problems through a triple-checking process. The first manual check is done by the coder, and the second by the UCDP project leader, who manually checks the data and uses Spatial Key, a visualization software for geographic data, to map the data and locate possible miscoded coordinates. In the third stage, automated scripts in Python and PHP are run to check for internal consistency in dates, actors, dyads, conflicts, and fatality counts. The automated scripts pick up problems like the same city being given different coordinates. The scripts normally pick up dozens of errors per country, suggesting that they are invaluable in the data-cleaning process.

The second recurring geocoding problem in the ACLED data is the misuse of the geoprecision codes. In ACLED and UCDP GED, a geoprecision code of 1 indicates that the coordinates mark the exact location where the event took place, usually an inhabited area. When a specific location is not provided, i.e. “Helmand province,” ACLED and UCDP GED employ different strategies for managing this issue. ACLED selects the provincial capital while UCDP GED selects the centroid point when available and the provincial capital when a centroid point is not available. One can debate which is the best practice, but what is crucial is that the data provider convey uncertainty about the location to the user. This is done through geoprecision codes; higher numbers on the geoprecision code indicate broader geographic spans and thus greater uncertainty about where the event occurred (the range for ACLED is 1-3, for UCDP GED it is 1-7).”

 

The Nuances of Open Data

Monday, November 2nd, 2015

In the 2015 article “The Impact of Open Data in the UK: Complex, Unpredictable, and Political” Ben Worthy convincingly demonstrates how Open Data should be seen as a complex, unpredictable and political issue. Before this article, I honestly thought that access to government data could only lead to better things. In hindsight, this seems startlingly naïve – yet I think that even more knowledgeable proponents of open data can appreciate the nuances presented in the article regardless of whether they can provide a counter-argument.

The article’s format intuitively answered my questions as they came up: What are the downsides of Open Data? Who are the users? How does the media play a role? However, the point that hit me the most was that, despite the fact that open data is portrayed as neutral information, it can be used for very political purposes. Access to knowledge does confer power, especially the power to manipulate information as you see fit. What I really get a sense of about open data is how tensions can be created between different levels or sections of government. One of the most interesting ideas, which I would have liked to read more about, was the section discussing how the relationship between accountability and transparency is very complex (796). I think this relationship could easily be expanded into another paper.

Finally, Worthy’s insights into ‘armchair auditors’ were very relevant to other topics of discussion in the class – specifically the fact that these people are not “ordinary” citizens but rather have a specialized repertoire of time, interest and skills (796). Overall, I think it is very difficult to get people to care that much about very specific issues unless it is part of their job or affects their lives in a direct and personal way. This has wider implications for other aspects of GIScience, such as VGI.

-Vdev

Roche 2014: Issues with Democratization and Uncertainty

Monday, November 2nd, 2015

Smart cities must strike a balance between maximizing efficiency for current conditions and leaving room for uncertainty about how needs will change in the future. In a way, smart cities might be described as cities that rely more on digital infrastructure than physical infrastructure; the former would be easier and cheaper to modify as needs change. As stated in the article, innovation and technological literacy on the part of the city’s residents would be key factors. Relying instead on top-down design from a municipal government might impose too much uniformity, when the needs of the city’s residents are so diverse.

While I’m usually skeptical of positivist notions that better technology will lead to more democratization, in the case of smart cities I find this idea more compelling. Crowdsourcing and VGI do have an incredible potential to give city planners a comprehensive and dynamic view of the behavior and needs of urban residents. However, the threat again arises of the technology being diverted to serve the purposes of certain interests, bypassing the needs of the majority. Specifically, I think there is a danger of cities developing to suit the needs of companies like Uber and Google. This would be especially probable if governments, with the best of intentions, started subsidizing such companies in the belief that the private sector will be the most effective leader in developing smart cities.

Finally, I find that this topic relates very pertinently to my seminar topic of uncertainty. I imagine that the technological, economic and environmental uncertainties with which we cope will probably only get bigger as time goes on. Smart cities will be increasingly difficult to conceive of as time passes.

– Yojo

UCDP GED and Open Data

Monday, November 2nd, 2015

The benefit of datasets is that they are a great tool for cross-comparison of attributes and trends. Therefore, establishing a resource that compares and elucidates trends relating to organized violent conflict would be extremely beneficial for peace research and policy. However, the dataset will only be of significance as long as it applies specified standards for structuring the data that are both machine and human readable. In addition, datasets open to the public should be built on relational database models and devise a clear ontology for the data in order to optimize interoperability and information exchange. The UCDP GED is a good example of open data within the subfield of GIScience because it has had success cataloguing events that are difficult to observe and classify within the geospatial and temporal domains. Events of organized violence are difficult to observe due to their sporadic, socially complex, and seemingly irrational nature.
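As a rough illustration of what such a relational, machine-readable structure could look like, here is a minimal sketch in Python using SQLite. The table layout, column names, code ranges, and sample rows are my own assumptions for illustration; this is not the UCDP GED’s actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actor (
    actor_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL
);
CREATE TABLE violent_event (
    event_id      INTEGER PRIMARY KEY,
    event_date    TEXT NOT NULL,            -- ISO 8601 date, machine and human readable
    latitude      REAL,
    longitude     REAL,
    geoprecision  INTEGER CHECK (geoprecision BETWEEN 1 AND 7),
    violence_type TEXT CHECK (violence_type IN ('state-based','non-state','one-sided')),
    best_deaths   INTEGER,
    side_a        INTEGER REFERENCES actor(actor_id),
    side_b        INTEGER REFERENCES actor(actor_id)
);
""")
conn.execute("INSERT INTO actor VALUES (1, 'Government of X'), (2, 'Rebel group Y')")
conn.execute("INSERT INTO violent_event VALUES (1, '1997-05-14', 31.6, 64.3, 4, 'state-based', 12, 1, 2)")

# Cross-comparison of trends becomes a simple query once the structure is explicit.
print(conn.execute(
    "SELECT violence_type, SUM(best_deaths) FROM violent_event GROUP BY violence_type"
).fetchall())
```

Making the ontology explicit in the schema – actors, event types, precision codes – is what allows other datasets and tools to exchange and compare this information reliably.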

UCDP GED also highlights the importance of the subfield of geocoding within GIScience. Limitations and conflicts in geocoding events of organized violence for the UCDP GED are apparent in the divide between the ability to code rural locations of violence and the ability to code urban locations. We notice a digital and informational divide between places that are poorer and less populated and places with greater population densities and more wealth. Alternative geocoding resources and databases therefore become of utmost importance for mapping and observing organized violent conflict in rural areas. The limitations of geospatial frameworks for rural areas also point to approaches to uncertainty in spatial data. What methods, then, do we apply in order to compensate for marginalized places that lack geospatial frameworks and coding?

-geobloggerRB

The UCDP dataset: now with geography!

Monday, November 2nd, 2015

The article by Sundberg and Melander was an interesting read, which for me brought up questions about spatial scales and situating the data within geography and GIS. One thing I noticed immediately about the article was the map, as maps are often what non-geographers immediately associate with GIS and geography. I was disappointed that the authors didn’t map the trends of organized violence (i.e., state-based, non-state and one-sided), because it would have been a very interesting visualization to see, for example, where state-based violence is occurring the most. They have represented it temporally in a line graph, but it would have added to the analysis to represent it geographically. Perhaps they didn’t include such a map because it would have just reiterated already known information? (For example, it’s perhaps already well known which countries or cities in Africa experience the most state-based violence.)

For me, the article raised as many questions about spatial scales as it did about open data. The authors write that previous research has been focused mainly on violence at the country/year level, but they argue for more sub-national studies, saying that they might help shed more light on the underlying mechanisms of violence. I agree, and think that mapping examples of violence at the sub-national level would allow for more thorough examination of all the variables that contribute to violence, because these variables would certainly change from country to country.

Overall, I found the article very interesting, but a bit difficult to situate in GIScience or even in geography. It seemed like the authors were incorporating the spatial data as simply another facet of their data, along with other factors like time and type of violence, rather than framing the work as an investigation fundamentally based in geography. For the authors, GIS is a tool they use to georeference their data and make a nice-looking map. This is a fine approach – but it leaves me wondering how the article would be different if the approach were embedded in geography, rather than incorporating geography as one aspect of the data.

~denasaur

Contextual Sensing

Monday, November 2nd, 2015

The discussion of context-specific sensing, especially the reference by Sagl et al. to internal considerations, is relevant to my topic of spatial cognition. Geo-sensors for smart cities take into account the acquisition of spatial knowledge down to the individual level. Contextual reasoning within the geospatial domain is therefore a vital component of the development of geo-sensors for smart cities. Understanding public perception of urban areas and observing individual and societal behavioral responses suggests how greater research in spatial cognition could benefit the design of smart city concepts. In addition, the paper’s discussion of mobile-based sensors reminds me of papers I am reading for my topic, namely studies that compare spatial knowledge acquisition from paper maps to that from mobile maps. These studies share this article’s examination of the mingled forces that emerge from the interactions between humans, the environment, and technology. How, then, do geospatial technologies mimic and simultaneously affect how we move through the urban environment?

In addition, the discussion of involuntary geographic information brings to mind how smart cities are faced with ethical dilemmas regarding privacy and human tracking. Not only does involuntarily crowdsourced information reflect the pragmatic ethical issues in the development of geo-sensors for smart cities, but it also brings to light different interpretations and perceptions of the law and issues surrounding liability.

Finally, can we attribute an increase in democratization to the fact that geo-sensors for smart cities are becoming more dependent on smart-citizen contributions? Do “smarter” citizens really mean more empowered citizens? I’m slightly skeptical that this is the case, and I find myself agreeing with the authors that, at the moment, there is little indication that the technologies for smart cities have substantially improved the quality of life of their inhabitants. The development and increasing prevalence of geo-sensors in smart cities will not alone yield positive impacts. Instead, we must be critical and focus more on how the sensors are implemented and for what social/societal causes.

-geobloggerRB

The UCDP Dataset: Achieving Information Democracy or Turning Horror into Bland Data?

Sunday, November 1st, 2015

The database described in this paper had both important advantages and limitations. Its ability to spatially locate incidences of violence adds a decidedly geographic component that is missing from nation-level conflict databases. The higher incidence of violence in urban areas in most cases is a particularly interesting finding, though its immediate usefulness is unclear. However, the strict criteria for what constitutes an incidence of conflict meant that the numbers calculated in this study represented only a fraction of the scale of death in the relevant conflicts. While the dataset had a total death count of about 750,000, the civil wars in the Democratic Republic of Congo alone, for example, resulted in the deaths of approximately 4 million people when disease and malnutrition are taken into account. This discrepancy highlights the true cost of war, in that the scale of destruction is actually much greater than the scale of the violence.

With regard to open data, one must ask what the purpose is of making this dataset open to the public. If it stems from a desire for transparency and democracy, I worry that such an analysis is not particularly informative to the general public. Firstly, for those people for whom a sense of scale is necessary to comprehend human tragedy, the numbers represent only a fraction of the tragedy. Meanwhile, for the majority of people who require human stories to get a feeling for the horror of war, bland statistics do precious little, and may in fact do more harm than good by desensitizing the public.

– Yojo

 

Smart cities: who do they benefit?

Thursday, October 29th, 2015

Roche’s article about smart cities is an organized and interesting read which situates smart cities in GIScience and offers ways for GIScience to make cities smarter.

As I read this article, I wondered if and how smart cities might reinforce existing power structures and further marginalize some groups in urban landscapes. “Rethinking urbanization” with an approach that is more focused on individuals sounds great – but it raises the question: which individuals are we focusing on? For example, it was troubling to me that neither this article nor the Sagl et al. article mentions how smart cities could also be accessible cities, in ways that current cities are not. Would the smart cities the author envisions make public transit wheelchair accessible, or help people with social anxiety avoid crowds? Where are the homeless in the author’s smart city vision, and how can they contribute geospatial information? Another issue is that proposing technological solutions and enhancing the “digital city” dimension of smart cities comes with problems of access to and exclusion from these technologies. The author does address this critique, however, saying that if initiatives are driven by technologies, they can be reductive and one-size-fits-all.

Overall it seems to me that smart cities have an enormous amount of potential to improve the lives of many people, but we must be sure that all people are included. Hopefully, this is where the concept of the “intelligent city” comes into play, using VGI and participatory GIS to connect citizens; and where the “open city” increases cooperation and transparency.

~denasaur

Migration in Asia

Monday, October 26th, 2015

The sense I get from this reading is that while in the past immigrants could generally be described as “defecting” from one country-system to another, countries are now more integrated into a single global migrant system. As such, migrants are following movement patterns for which conceptual frameworks and national data systems are ill-equipped, at least at the time the article was written. Zelinsky’s mobility transition model is useful for understanding the common migration patterns that countries experience as they undergo a specific type of economic restructuring. However, as 20th-century growth models become less dependable going forward, we may witness the emergence of more complex migration patterns. Furthermore, since the world is not becoming more politically unified even though migration systems are becoming more integrated, the migration data systems of the world’s countries will probably remain fragmented in a way that becomes increasingly inadequate over time for developing conceptual frameworks.

 

-Yojo

 

Geocomplexity Explored Through Human Migration

Monday, October 26th, 2015

In his 1996 paper “Asia on the Move: Research Challenges for Population Geography”, Graeme Hugo explores the dynamics of a newly emerging network of economic migration, characteristic of the fluidity of the developed and developing world in the late 20th and early 21st century. I must say I am surprised at the date of this paper’s publication, mostly due to the author’s mention of social networks and the relevance this paper has 20 years later. I now believe he used the term “social network” differently than we do today (a social network being a network of people socially connected, not necessarily through media such as Facebook or the internet).

Geocomplexity is a self-defining term, and as a concept is very applicable to what the author calls “Population Geography.” In striving to chart the dimensions of assessing the complexity of international population flows, he reveals why this increased level of population mobility is not simply a labor-related phenomenon. Although these economic migrants are motivated to move by the prospect of work, there are many other factors to consider.

Private and government institutions operate within and outside the law to aid immigration and emigration based on their own country’s needs and the needs of an entire region. Asian countries are out of step with one another; they exist in different stages of the international migration transition, adding a political dimension to the migration and commoditizing labor. These economic migrants are not all hopeless, poor laborers, as the term might suggest. Wealthy individuals have the means to lead double lives in the business sector, participating in the workings of both Asian and Euro-American economies. Due to the inherent spatial dimension of this phenomenon, Hugo asserts many reasons why this complex issue is one of geographic relevance, and why it is the responsibility of geographers to keep pace with the growth of data through the formulation of spatial analytical methods.

 

Smitty_1

(RIP Graeme Hugo)

Hugo – Challenges for Population Geography

Monday, October 26th, 2015

The article by Hugo outlines the research challenges for population geographers in a world of accelerating migratory flows. This acceleration takes many forms: migration from Asia to the West, skilled-worker migration to Asia, contract labor migration, short-term student and business migration, illegal migration, and refugee flows. The author writes that the major challenge of studying the complexity of population geography is obtaining good data on the informal flows of migrants. Traditional census data is just too slow to keep up with all the short-term migration.

This was written in 1996. Since then, we have added more than a billion people, and migration flows have only grown more complex. Massive outmigration from Syria has strained sluggish refugee systems, creating a renewed interest in population geography. Unfortunately, a lot of the motivation to collect data has focused on determining where flows of foreign fighters into Syria are coming from, rather than on where to efficiently resettle millions of refugees. Since Hugo’s article was written, governments have actually increased their ability to track and monitor movement across borders. Especially in the example of foreign fighters going to Syria, governments have been gathering data on anyone who visits countries deemed suspicious. The rise of political movements like ISIS is clear proof that migration models have to move beyond simple economic push-pull factors. If researchers are given access to the data, this increased vigilance has probably expanded the ability to model complex migratory flows for population geography.

In the case of studying the push-pull factors of migrants flowing across the Mediterranean, the political context is actually a strong desire to stop these flows. This is not to say we should just stop researching because governments misuse information. But it seems increasingly unlikely, in the current political climate*, that modelling this migration will result in a social benefit.

*European political climate, I have high hopes for JT’s promise to resettle 25k refugees.

 

-A proud anOntarian

 

Asia on the Move: Research Challenges for Population Geography

Monday, October 26th, 2015

Graeme Hugo here elaborates a wide-ranging argument for the relevance of population geography to the question of international migration among the increasingly migratory populations of East Asia.

I found that the article speaks to the increasingly complex global system of flows, encompassing goods, capital, and ideas as well as humans. Amidst what I’d call a general weakening of the sovereignty of states and a concomitant increase in their interdependence, the world of humans and their stuff resembles one unified system more obviously than ever before. At the same time, this world is massively chaotic, and while it was at some point relatively simple to analyze European immigration to America as a function of birthrates, the automation of rural farm labour, and the growth of the American economy, the cyclical, multi-directional flows of human beings in and out of Asia in the 1990s rightly (as Hugo demonstrates) demand a different approach to understand.

And what of GIScience and big data? Hugo doesn’t delve as deeply into the complex methods he outlines as I would have liked, but with the multiplying ways that human beings can leave a recognisable trace today, I would argue that it has become generally easier to track even undocumented migrants. As evidence I’d point to the exhibition “Forensis,” shown at Berlin’s Haus der Kulturen der Welt in 2014, which documents a multidisciplinary evidence-gathering effort undertaken to prove that NATO warships intentionally ignored a sinking ship full of African migrants. The researchers used advanced statistical methods, remote sensing data, modelling and visualization techniques, as well as human rights law to successfully mount a case against NATO in international courts. As we hone our techniques for detecting human beings, questions of our responsibility for them are naturally raised. http://www.hkw.de/en/programm/projekte/2014/forensis/start_forensis.php

Big Data: A Bubble in the Making? (geocomplexity)

Monday, October 26th, 2015

Coming off the heels of last week’s seminar discussion, I can’t help (for better or for worse) but read the articles about geocomplexity through the lens of uncertainty. In particular, I am reminded of when Professor Sieber challenged me to make an argument for why uncertainty could be good, and I proposed that some level of geographic uncertainty is likely to mitigate the worst effects of spatially occurring trends of discrimination (e.g. red-lining, gerrymandering, etc.), while also accommodating a diversity of geographic experiences and ontologies. In his article “Asia on the Move: Research Challenges for Population Geography”, Graeme Hugo discusses geocomplexity as it pertains to conceptualizing and analyzing human migration in Asia. I wonder – somewhat contrary to conventional wisdom – whether we are headed to a world of more geographic uncertainty, in spite of the emergence of big data and the discussion of a “major and focused multidisciplinary research effort” in order to circumvent the “huge gaps in our knowledge of the patterns, causes and consequences of international migration in Asia” (Hugo 95).

Hugo points out that census data is predicated on the assumption that “individuals and families have a single place and country of residence”, and is therefore increasingly difficult to use for studying migration patterns. As discussed in the paper, ease of travel has accommodated several migration patterns that involve living part-time in both the nation of origin and the nation of destination. Although Hugo presents several secondary sources for understanding migration trends, he notes nonetheless that understanding migration patterns is complicated by the increasing volume of migration, as well as the “increasing heterogeneity of the international labour flows and the people involved in them” (Hugo 103). It is that remark about the “heterogeneity” of labour flows that intrigues me.

If the motives behind labour migration are increasingly divergent, what implications does that have for studying migration patterns at all, even if we develop techniques that use secondary/alternative sources to mitigate the issue of geocomplexity? In my opinion, this will mean that certain assumptions held by human geographers will become invalid; in the case of migration and geocomplexity, it will mean that we cannot assume migration is necessarily driven by economic necessity. Increasingly, rich, middle-class, and poor people are drawn to migrate for a variety of reasons, and even if we grasp exactly how many people move around, we will not be able to make assumptions as to why, or even about the nature or duration of their migration.

To frame this another way, even if the quantity of data we are collecting is increasing, I believe the certainty, validity, and utility of it is often decreasing. In the same way that we’ve discussed the limitations of making sweeping demographic assumptions from VGI (e.g. people post information on social media selectively and aspirationally), so too are there limitations to capturing migration patterns in any region of the world. The reasons for migration are increasingly heterogeneous, and simply having numbers tells us nothing. In my opinion, this is bad news for Uber, Facebook, or any other company whose stock market value is intimately tied to the anticipated value of their amassed datasets. But it’s good news for anyone who’s worried about their privacy and their ability to be profiled by their data footprint. It’s certainly contrary to the general thrust of this course, but I think our ability to be profiled based on data footprints is overstated.

~CRAZY15

Climate change: the ultimate complexity

Monday, October 26th, 2015

Manson and O’Sullivan’s article raises some very interesting points about geospatial complexity: the difficulty of navigating between the very general and the specific, complexity in ontologies and epistemologies, and complexity in computer modeling. One of the first things that caught my eye was the authors’ point that space-and-place-based research recognizes the importance of both qualitative and quantitative approaches. Disregarding qualitative data is a critique I have often read in the critical GIS literature, and I was glad to see that the authors not only addressed this, but made space for qualitative approaches in their vision for complexity studies going forward.

The article actually made me reflect on my studies in environment. Geospatial complexity as it is explained in this article is closely connected to the environment, and I immediately thought of climate change. Environmental systems are complex systems that are often not fully understood – for example, it’s difficult to know tipping points. Climate change is also a problem where experts struggle to navigate the space between making generalizations and losing sight of the particular, a tension the authors address in this article. Yes, it will make wide, sweeping changes to the planet that can be generalized as warming – but different places at a smaller scale will experience unique, unpredictable changes. Manson and O’Sullivan state that space, place and time are all part of complex systems – and of course, they are part of the complex system of climate change.

The authors conclude that it is an exciting time to be part of the research of complexity and space-and-place, and that complexity studies is moving beyond the phase of “starry-eyed exuberance.” From my perspective of the complexity of climate change, I’d say that there is no better time than now, because complexity seems to be an essential part of trying to understand what is happening on the planet.

-denasaur

A complex view of place, space and scale

Sunday, October 25th, 2015

Even from the first page, I received the impression that “Complexity theory in the study of space and place” by Manson and O’Sullivan (2006) was a well-written paper. It tells a story and proceeds at a smooth pace that is easy to read, while still providing substantial information on the topic. I did find the constant references to various philosophical theories, such as reductionism and holism, difficult to assimilate into my understanding of complexity as I do not have a background in such theories. I felt like I was receiving an introduction to philosophy and complexity at the same time – a bit overwhelming! However, it did make me realize that an understanding of basic philosophical theories would probably help my conceptualization of GIScience as a whole – which was not a connection I thought to make in this class. To give credit where it is due, the authors did help comprehension by providing short definitions or context for obscure words within the text.

When asking its three main questions – “(1) Does complexity theory operate at too general a level to enhance understanding? (2) What are the ontological and epistemological implications of complexity? And (3) what are the challenges in modeling complexity?” (678) – the paper highlights the tension inherent in the field of complexity. One problem that seemed especially prominent was the conflict between understanding emergent behaviour and the desire to simplify models. Computational modelling was presented both as a solution for accommodating large numbers of heterogeneous variables and as an easy avenue towards simplification (683).
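To make that tension concrete, here is a toy agent-based model in Python – my own illustration, not an example from Manson and O’Sullivan – in which a very simple local rule (an agent relocates when too few of its neighbours are like it) produces an emergent global pattern of clustering. The grid size, threshold, and neighbourhood are arbitrary assumptions.

```python
import random

random.seed(1)
N, THRESHOLD, STEPS = 60, 0.5, 2000
cells = [random.choice("AB.") for _ in range(N)]   # two agent types on a 1-D ring; '.' is empty

def unhappy(i):
    """An agent is unhappy if under half of its occupied neighbours share its type."""
    me = cells[i]
    if me == ".":
        return False
    neigh = [cells[(i + d) % N] for d in (-2, -1, 1, 2) if cells[(i + d) % N] != "."]
    return bool(neigh) and sum(n == me for n in neigh) / len(neigh) < THRESHOLD

for _ in range(STEPS):
    i = random.randrange(N)
    if unhappy(i):
        empties = [j for j, c in enumerate(cells) if c == "."]
        if empties:
            j = random.choice(empties)
            cells[j], cells[i] = cells[i], "."     # move to a random empty cell

print("".join(cells))   # runs of like agents tend to emerge from purely local decisions
```

The clustering is written nowhere in the rule itself; it emerges from many local interactions, which is precisely what makes such models powerful for accommodating heterogeneity and, at the same time, hard to simplify without losing the phenomenon of interest.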

The authors also made some references to spatial scale that I found particularly intriguing – namely how emergence and scaling up from local to more global phenomena can conflict with modeling assumptions of uniform patterns over different scales. I am finding more and more that all of our individual research topics are converging on each other. Complexity relates to spatial scale, which relates to ontologies, which relates to uncertainty, and so forth. I have not yet fully decided what that means for the broader context of understanding GIScience in my own head, but I think it is important to acknowledge the increasing common ground. I feel as if, through this class, I am step by step building my own conceptual network model of GIScience. It is not a linear path by any means – rather circular and backtracking in fact – but slowly, slowly, slowly the connections form.

-Vdev