Archive for November, 2019

Thoughts on Web Mapping 2.0: The Neogeography of the GeoWeb (Haklay et al. 2008)

Sunday, November 17th, 2019

This paper gives an overview of the development of geography in the Web 2.0 era, in which neogeography is founded on the blurring boundary between geographers, consumers, technicians, and contributors. The case studies of OSM and London Profiler show how technology allows geography and cartography to embrace new forms of data sources, new kinds of interaction with the general public, and wider access to geographical knowledge and information.

The most intriguing part of this paper for me is the debate over whether Web 2.0’s public participation in producing geographic information and research amounts to a “cult of the amateur” or to “mass collaboration”. From my point of view, this is exactly where professionals from the realm of geography are needed. Data is neutral, but its providers, contributors, and manufacturers are not. Still, no one is going to complain about an abundance of available data, especially when it is detailed user-generated data that complements what was missing before. It is up to the expert to decide what to do with the data and how to interpret it, rather than blaming its source.

My point, then, is that there is nothing wrong with the information or with its provider. What matters are the people interpreting it, profiting from it, and discovering knowledge in it. Whether Web 2.0 facilitates a “cult of the amateur” or “mass collaboration” depends solely on the professionals who use the data. Data that seems useless in one study can be extremely important in another field of research. For geography, an open-minded, multi-disciplinary science, drawing lines and rejecting what is novel, trendy, and probably the future runs counter to the ideology of the subject. Adaptation and evolution should be the key for any science.

Le GéoWeb

Sunday, November 17th, 2019

The advent of the Internet, followed by the arrival of Web 2.0, has no doubt changed the way geographic information is obtained and shared, a fact that is well described in this article by Haklay et al. Without saying the age of paper maps is behind us, the Internet has propelled us into an era where the web and geography are combined like never before.

Although the Internet has allowed many to navigate a digital Earth for the first time, several issues have emerged from the combination of the Internet and the discipline of geography. The field has been democratized by increased online access to geographic tools and mashups, which raises the visibility of geography but could also be seen as reductionist, reducing geography to non-experts having fun geotagging pictures. Speaking of geotagging, the social media boom has led people to flock en masse to previously unvisited areas, a phenomenon that has produced an explosion of visitors to Horseshoe Bend in Arizona, for example, and drastically affected the local environment.

Researching Volunteered Geographic Information: Spatial Data, Geographic Research, and New Social Practice

Sunday, November 17th, 2019

Sarah Elwood et al. wrote a foundational paper about Volunteered Geographic Information (VGI), covering its definition(s), the research domains where VGI is used, frameworks within and beyond VGI, the impact of VGI on spatial data infrastructures and geographic research, and the quality issues of VGI data along with the associated methodologies. VGI contributes to today’s data explosion and expands research into new fields, but plenty of issues still confront the creation and application of VGI data. I am most interested in the quality issues of VGI data, which I think are the most important concern when conducting research with VGI, even though there are many situations where VGI is beneficial even when its quality is hard to assess. The authors point out four insights into VGI quality that distinguish it from traditional notions of data quality; however, I am still curious how to deal with those differences and with the uncertainty and inaccuracy of VGI data. How could newly developed AI technologies help determine basic quality-assessment rules for filtering useful information? I also think education in geography would be very helpful for developing higher-quality VGI systems.

Moreover, it is argued that VGI represents a paradigm shift in data collection, creation, and sharing in GIS. Does that mean traditional data types like raster, vector, and object-based data will be forced to change by the development of VGI? The web also plays an important role in VGI: does that mean VGI cannot develop independently of WebGIS, and what, then, is the relationship between WebGIS and VGI (both will be presented next week)?

Who is the crowd?

Sunday, November 17th, 2019

This post is written in response to both the articles covering the geoweb and those covering VGI.

In reviewing these topics, I’m struck by something that seems unaddressed. Both literatures make extensive reference to the power of OpenStreetMap, citing OSM both as an example of VGI and as an important part of the GeoWeb. Recent discoveries of mass corporate edits to OpenStreetMap have upended the academic conceptualization of the product and throw much of the rhetoric used in both VGI and GeoWeb discussions into disarray. The main question raised by this development, as I see it, is who the “crowd” in “crowdsourcing” is.

The internet is an incredibly complex system, and much research on it focuses on the interactions between individual humans and the network. These interactions are the end point of the GeoWeb system and the input point of VGI systems, and they involve extensive flows of information back and forth (a defining aspect of Web 2.0).

We normally conceive of end consumers as individual human beings. When these consumers are instead entities made up of large groups of humans, such as governments or companies, the way the system works changes. The end user can no longer be assumed to have one set of ideologies or use cases, and the power of a single large multi-person entity may be exponentially greater than that of a single person. These entities have MUCH more complex physical bounds than a single person, so the VGI they offer may not fit well within our traditional concepts of maps. Similarly, the usage of the GeoWeb by these entities likely fits the definition of “geocomplexity,” in that it will certainly generate emergent spatial systems. The large-scale relationship between the internet and institutions deserves further research.

Extending this idea, the entities that the GeoWeb is interacting with, and the entities that may be generating VGI, might not be human at all. Animals carrying trackers whose data are uploaded to the internet, or entire ecosystems viewed from a satellite, play with our loose definitions of both concepts. Can the actions of a lumber company using online map data analysis to decide where to cut be considered a natural process?

At the extreme edges of this train of thought, the internet may be interacting with and receiving information from AIs, or from itself. These internal loops and systems happening inside a computer resemble those occurring outside, and when the internet of things is considered, the line between physical and digital becomes blurry. Where do we draw the lines?

Thoughts on Researching Volunteered Geographic Information

Sunday, November 17th, 2019

Elwood et al.’s article discusses the emergence of VGI as a new form of geographic information and how it can influence geographic research. The article does a good job analyzing the concerns and issues involved in using VGI in geographic research, which gave me a lot of new insights on this topic. I’m particularly drawn to two points discussed in the article.
The first point is the data quality of VGI. Researchers are often concerned about the data quality of VGI because it is non-authoritative and has not been validated in a formal way. In response to this, the authors argue that VGI can be regarded as authoritative on the basis that it originates from direct local knowledge and observations, and that its reliability can rest on the convergence of information generated by a number of contributors. However, this does not mean that expertise is no longer important. As argued earlier in the article, the “expertise, tools, and theoretical frameworks of professional geographers are essential to addressing many of the more profound questions associated with VGI”, including the issue of data quality. I’m wondering what role professional geographers could and should play in the data quality issues related to VGI, given that reliability is based on the “similarity of the submissions”.
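To make the convergence idea a bit more concrete, here is a minimal, hypothetical sketch (my own illustration, not something proposed by Elwood et al.): several volunteered positions for the same feature are compared, and submissions that sit far from the consensus position are flagged for review. The function name, tolerance, and coordinates are all made up for illustration.

```python
# Hypothetical illustration: treating agreement among volunteered positions
# for the same feature as a rough proxy for reliability.
from statistics import median

def flag_outliers(submissions, tolerance_m=50.0):
    """Flag volunteered (x, y) positions (in metres) that sit far from
    the consensus (median) position of all submissions for one feature."""
    cx = median(x for x, _ in submissions)
    cy = median(y for _, y in submissions)
    flagged = []
    for x, y in submissions:
        dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        flagged.append((x, y, dist > tolerance_m))
    return (cx, cy), flagged

# Five volunteers report the same bus stop; one report is clearly off.
reports = [(100.0, 200.0), (102.0, 198.0), (99.0, 201.0),
           (101.0, 199.0), (450.0, 620.0)]
consensus, checked = flag_outliers(reports)
print(consensus)   # consensus position near (100, 200)
print(checked)     # last submission flagged as an outlier
```

Even in this toy form, the sketch raises exactly the question above: agreement among contributors says nothing about whether they are all wrong together, which is where professional expertise would still matter.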
Second, the authors highlight the issue of the digital divide formed by VGI. Some groups and individuals are included in creating and using VGI while others are excluded. For researchers who use VGI as a research input, it is important to recognize that the data is biased towards the people who are “privileged” enough to contribute this information.

Thoughts on Neogeography

Sunday, November 17th, 2019

I have several concerns about neogeography as it’s defined and described in the “Web Mapping 2.0” article. The quote from Turner portrays neogeography as “fun” and “about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place.” However, I’d push back on both of these notions. First of all, why would geography have to be fun? Making an academic pursuit more inherently enjoyable could run the risk of eroding the rigor of the field. This could come off as me being “elitist,” and I don’t want geography to be inaccessible to anyone who’d like to use it. However, if anyone (academic or layperson) finds geography not “fun” enough to pursue, then they shouldn’t pursue it; creating a snazzy “neogeography” for them to utilize would almost necessarily make it easier and less rigorous, diluting and weakening their results. Furthermore, can’t it already be fun? I think it is! With regards to the applications of neogeography, can’t geography/GIS already be used for “sharing location information… helping shape context, and conveying understanding through knowledge of place?” For example, the paper “Extending the Qualitative Capabilities of GIS” by Jung and Elwood thoroughly discusses how GIS can be used to display meaning and context, and it was written in 2010. Why come up with a “neogeography” to complete these tasks, when existing GIS technologies can do the same thing as is or with slight modifications? Perhaps I’m too caught up in the current paradigm of what GIS is/should be; regardless, however, we should ask ourselves if going through the effort of creating, classifying, or distinguishing a new kind of geography from the status quo is necessary or appropriate.

Thoughts on “Citizens as Sensors”

Sunday, November 17th, 2019

I really liked this piece, and thought it was an easy/informative read (thanks Liz!). One place where I thought it was lacking, however, was in the “Concerns” section. Goodchild talks about how only the privileged may be able to contribute VGI, and as a result they may be overrepresented or may over-benefit from analyses/policies that come from VGI, like disaster relief plans. This is probably true, but Goodchild fails to consider what a double-edged sword VGI can be. He’s only looking at examples of VGI being used for “good”; however, that won’t always be the case. Those who can’t contribute VGI because of their social status and wealth (for example, lack of a phone) won’t benefit as much from well-meaning and helpful uses of VGI; however, it can also be argued that they won’t be hurt as much by improper uses of VGI. I’m probably looking at this through too much of a geodemographics/Big Data lens, but I can imagine VGI being used for nefarious purposes. In such cases, not being able to contribute to VGI (for example, by being “off the grid”) may be beneficial, as the powers that be (government, private sector, etc.) cannot use your data against you. Goodchild has made the assumption that VGI is used to help society and individuals; from this viewpoint, everyone would want to be able to contribute VGI. However, as data privacy and the like become bigger problems, will we? I think there will be a balance to strike between wanting to contribute VGI to reap the resulting policy benefits and holding back from contributing to avoid potential negative impacts.

Thoughts on Citizens as sensors: the world of volunteered geography (Goodchild, 2007)

Sunday, November 17th, 2019

This Goodchild piece serves as a brief introduction to the topic of VGI. Although it was written in 2007, when computational power and artificial intelligence were still in a start-up phase, we already see how VGI serves as a main data source in cartography and related geographic fields. However, while highlighting the contributions of VGI, he also pointed out the limitations of relying on VGI as a source of geographic data: the validity, accessibility, and authority of the data.

Nowadays, OSM and Google Maps are used as major sources for much spatial analytical research, especially at larger extents where primary data collection is time- and labour-intensive. Just as Goodchild argues, although researchers are promised an abundance of spatial data that can be extracted from VGI sources, questions need to be asked about synthesizing and validating VGI data to increase its accuracy.

Who contributes the data? This question remains unresolved even 12 years after he wrote this paper. It concerns the population that VGI data might represent, the area it covers, and the scope at which it is used. Why do people do this? That is another question, about the biases and incentives behind VGI data, which can potentially influence the results of research that uses it. Also, with various VGI data sources available, how we can combine them so they cross-validate and reference each other to generate better accuracy for our objectives is a question I would like to answer. How to cross-reference other (non-VGI) sources against VGI data to increase its validity, and perhaps give it authority, is another interesting topic I am eager to learn about.

Thoughts on “Goodchild – Citizens as sensors”

Sunday, November 17th, 2019

This article by Goodchild lays out the foundation of Volunteered Geographic Information (VGI) by explaining technological advances that helped it develop as well as how it is done.

The widespread availability of 5G cellular networks in the upcoming years will drastically improve our ability as humans to act as sensors through our internet-connected devices, given improved upload/download speeds as well as lower latency. These factors will greatly help the transfer of information, for example by allowing more frequent locational pings or by letting more devices be connected to the internet, since 5G supports more simultaneous connections than 4G.

Although VGI provides the means to obtain information that might otherwise be impossible to gather, the reliability of the data can be questioned. One example is OpenStreetMap, where anyone is free to add, change, move, or remove buildings, roads, or features as they please. Although most data providers do so with good intentions, inaccuracies and errors can slip in, affecting the product. As other websites and mobile applications use OSM data to provide their services, it becomes important for users and providers alike to have valid information. As pointed out in the article, the open nature of VGI also allows malevolent users to undermine others’ experience. One example of such an event is people recently taking advantage of the VGI nature of OSM to change the land coverage of certain areas in order to gain an advantage in the mobile application Pokémon GO.

Finally, there is also the issue of who owns the data. Is it the platform or the user that provided the data? Who would be responsible if an inaccurate data entry led to an accident or a disaster? As with any other growing field closely linked to technological advancement, governments will need to legislate further on VGI in order to allow for easier regulation.

neo is new in the way of empowering individuals

Sunday, November 17th, 2019

I have little to say about the Web Mapping 2.0 paper. We very clearly persist in a new geography as we interact via a space we didn’t always have access to – the internet. Some of us still don’t have this access. But I’m not convinced the paper actually did what it set out to do – specifically in the sense of discussing ramifications for society. Early discussion of terms is important, so for someone like me – new to thinking about neogeo – the paper is a helpful start. Wouldn’t end here now though. We get to decide what’s next for geo, and it seems like neogeo is in the driver’s seat.
Just want to point to the authors’ use of complexity in Web Mapping 2.0 and neogeography. It’s not the same as complexity theory; they must’ve meant complicated in each instance.
“Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset”
An encouraging quote; empowering people by giving them the agency to characterize their own human/biophysical environments is the part of neogeo that makes it neo: new, and not steeped in colonialism.
Excited to force conversations of either movement or complexity in class tomorrow.

sensors –> singularity

Sunday, November 17th, 2019

With humans as sensors, we move towards the singularity.

Woah, catchy subtitle, movement, and robo-human dystopia! Does the post need anything else?

I guess so… some hundred more words according to the syllabus :/.

Goodchild’s example of errors at UCSB and the City of Santa Barbara points to the danger of ascribing authority to mappers. With this authority, they also accept the power to erase people and place. The real question in any discussion of VGI ought to be about who gets this power. Whether it’s the USGS, NASA, or a network of individually empowered agents, someone wields this power. What infrastructure do we as GIScientists support?
I’m so conflicted: I like bottom-up everything, but maps are consumed by, represent, and interact with people. The question is, can they also be by the people? Who knows – I’ve just strung enough words together to make this work – see yas in class.

thoughts on geodemographics (Baiocchi et al., 2010)

Monday, November 11th, 2019

“The rationale behind geodemographics is that places and people are inextricably linked. Knowledge about the whereabouts of people reveals information about them. Such an approach has been shown to work well because people with similar lifestyles tend to cluster — a longstanding theoretical and empirical finding in the sociological literature.”
This paragraph summarizes the theoretical basis of the analysis conducted in this study and the basic idea of geodemographics. I think this shares the same idea as AI profiling with big geospatial data, or, put another way, AI profiling with respect to space is geodemographics. Some of the critical issues are similar. The first issue relates to the uncertainties of the knowledge it produces, which can lead to unjust action towards individuals. As Mittelstadt (2016) argues, even if strong correlations or causal knowledge are found, this knowledge may only concern populations while actions are directed towards individuals. This becomes more problematic when we conduct spatial clustering and assume that places can reflect every individual and that decisions can be made based on the analysis of an area. The second issue is once again related to scale, or the modifiable areal unit problem. The scale of analysis can significantly influence the results we obtain. At which scale can we argue that places and people are inextricably linked? At the neighborhood level, city level, or country level? I wonder whether those issues are considered or addressed in the field of geodemographics.
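To make the scale question concrete, here is a minimal hypothetical sketch (not from the paper; the points, attribute names, and grid sizes are all made up) of the modifiable areal unit problem: the same synthetic points, aggregated into coarser or finer grid cells, yield noticeably different correlations between two attributes.

```python
# Hypothetical MAUP illustration: the apparent relationship between two
# attributes changes when the same points are aggregated at different scales.
import random
import statistics

random.seed(42)

# Synthetic "households": a location plus two loosely related attributes.
points = []
for _ in range(2000):
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    income = 30 + 0.5 * x + random.gauss(0, 10)
    emissions = 5 + 0.05 * income + random.gauss(0, 3)
    points.append((x, y, income, emissions))

def pearson(xs, ys):
    """Plain Pearson correlation (avoids depending on Python 3.10+)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

def correlation_at_scale(cell_size):
    """Aggregate points into square cells of a given size and correlate cell means."""
    cells = {}
    for x, y, inc, emi in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((inc, emi))
    mean_inc = [statistics.mean(v[0] for v in vals) for vals in cells.values()]
    mean_emi = [statistics.mean(v[1] for v in vals) for vals in cells.values()]
    return pearson(mean_inc, mean_emi)

# Same underlying people, different areal units, different apparent relationship.
print("individual points:", round(pearson([p[2] for p in points], [p[3] for p in points]), 3))
print("5x5 cells        :", round(correlation_at_scale(5), 3))
print("25x25 cells      :", round(correlation_at_scale(25), 3))
```

The sketch is only a toy, but it shows why "at which scale are places and people linked?" is not an idle question: the answer the data gives depends on the areal units chosen.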

Reflection on Geodemographic

Monday, November 11th, 2019

As far as I understand, geodemographic data links the sciences of demography and geography to represent variation in human and physical phenomena locationally and spatially. The study presented in this article used a geodemographic dataset called ACORN. The authors mention in the limitations section that the uncertainties in the ACORN data are associated with the imputation of missing information, and that these uncertainties are difficult to quantify. Since geodemographic data are closely tied to human behavior, it is hard to assess their quality and accuracy. But I still wonder: are there ways to deal with such uncertainty? How can we manage geodemographic data so that it carries relatively less uncertainty?

Besides, the authors also assume that there is no regional or local variation in the expenditure profiles, meaning households belonging to the same type are presumed to have the same spending patterns no matter where they are located in the territory. Obviously this can be problematic, since an uncertain data source may strongly influence the final result. So, is there any way we can assess how this averaging process influences the result, and any way we can at least try to mitigate it?
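One crude way to probe that question, sketched below with made-up numbers (nothing here comes from the paper or from ACORN itself), is a simple Monte Carlo sensitivity check: let each segment’s assumed expenditure profile vary within a plausible band and observe how far the aggregate estimate can drift.

```python
# Hypothetical sensitivity check: how much does the "same profile everywhere"
# assumption matter? Perturb per-type expenditure profiles and watch the total.
import random

random.seed(1)

# Made-up ACORN-style segments: (households in an area, assumed annual spend per household)
segments = {
    "Wealthy achievers": (1200, 32000.0),
    "Comfortably off":   (2500, 24000.0),
    "Hard pressed":      (1800, 15000.0),
}

def total_expenditure(wiggle=0.0):
    """Sum expenditure across segments, with each national-average profile
    perturbed by up to +/- `wiggle` (fraction) to mimic local variation."""
    total = 0.0
    for households, profile in segments.values():
        local_profile = profile * (1 + random.uniform(-wiggle, wiggle))
        total += households * local_profile
    return total

baseline = total_expenditure(0.0)
trials = [total_expenditure(wiggle=0.15) for _ in range(5000)]  # allow 15% local variation
print(f"baseline estimate: {baseline:,.0f}")
print(f"range under 15% local variation: {min(trials):,.0f} .. {max(trials):,.0f}")
```

It would not remove the uncertainty, but a check like this at least puts rough bounds on how sensitive the final figures are to the averaging assumption.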

There is also one thing I’m very curious about as I try to understand the concept of geodemographic data. Is it the same as census data? If not, what is the difference? Are geocoded census data part of geodemographic data? Or are census data part of geodemographic data?

Thoughts on Simplifying complexity: a review of complexity theory (Manson 2000)

Sunday, November 10th, 2019

This paper thoroughly reviews and examines the field of complexity by dividing it into three different kinds of complexity theory: 1) algorithmic complexity; 2) deterministic complexity; and 3) aggregate complexity. The author systematically explains each complexity theory along with its implications and future research opportunities, opening a new door for me as an urban researcher.

I agree with the author that complexity needs more attention from geographers and planners; since my first class in urban geography, I have been taught, and agree, that cities are open systems that the public and academics have yet to find a way to understand. Thus, to better simplify cities and urban research areas, understanding their complexity is the first step. The majority of urban researchers seek to simplify urban environments to reach an empirical theory, statement, or piece of knowledge; however, simplification should come only after fully understanding the complexity of the objects under study. In urban geography and planning, I doubt anyone has ever thoroughly comprehended all the underlying components that make a city work. Thus, it is necessary for urban researchers and GIScientists to study algorithmic, deterministic, and aggregate complexity before proposing simplified models. In the realm of urban studies, the need for complexity research is urgent, lest the field become a palace built on clouds.

In addition, for GIScientists especially, understanding and studying algorithmic complexity might be a future trend, regardless of the field their study objects fall in. The discipline’s technological foundation makes it easier for GIScientists to be aware of such issues, and the capability to address algorithmic complexity is an advantage compared to researchers from other spatially related disciplines.

Reflection on Geocomplexity

Sunday, November 10th, 2019

After reading this article, I’m still not sure I fully understand the concept of geocomplexity, since I am still trying to understand how geocomplexity relates to spatial problems. The author categorizes complexity into three types: algorithmic complexity, deterministic complexity, and aggregate complexity, and each type deals with different theories. For example, algorithmic complexity deals with mathematical complexity theory and information theory, and deterministic complexity deals with chaos theory and catastrophe theory.

As far as I understand, algorithmic complexity measures the effort needed to solve a problem or achieve a result. It follows that topics which are themselves vague may be hard to evaluate in this way. Since my topic is spatial data uncertainty, I wonder how a researcher would apply algorithmic complexity to data uncertainty, given that the uncertainty itself can be vague and ambiguous.

As for deterministic complexity, the author notes that it would be too simplistic to characterize a human system by a few simple variables or deterministic equations, so few systems are actually deterministically chaotic. I was wondering whether there are any examples where human systems are in fact deterministically complex. If there are none, then what systems are usually regarded as deterministically complex?

Finally, aggregate complexity is used to assess the holism and synergy that come from the interaction of system components. Returning to my topic, the system components in the spatial data uncertainty field would be error, vagueness, and ambiguity. So how would these three components be defined in the case of aggregate complexity?

The Impact of Social Factors and Consumer Behavior on Carbon Dioxide Emissions (Baiocchi et al., 2010)

Sunday, November 10th, 2019

This paper applies geodemographic segmentation data to assess the direct and indirect carbon emissions associated with different lifestyles. As geodemographics are generally used to improve the targeting of advertising and marketing communications, I am curious about the use of geodemographics in GIScience.

In this paper, the authors argue that the top-down approach conventionally used to classify lifestyle groups fails to recognize the spatial aspects associated with lifestyles. This is why they choose geodemographic lifestyle data, which employs bottom-up techniques that draw spatial patterns out of the data, as opposed to fitting it to some a priori classification of neighborhood types. However, it is important to note that geodemographic classification systems are beset by the Modifiable Areal Unit Problem and by ecological fallacies in which the average characteristics of individuals within a neighborhood are assigned to specific individuals. For example, in ACORN groups labeled “Prudent pensioners”, many people will be neither elderly singles nor old; more importantly, many others who are both elderly singles and old live outside the “Prudent pensioners” groups. Also, as far as I know, the data used to build these classification systems mostly derive from the census, which becomes dated quickly and is not sufficient to capture the key dimensions that differentiate residential neighborhoods. Are there any alternative datasets for geodemographics?

Simplifying complexity (Manson, 2001)

Sunday, November 10th, 2019

In this paper, Manson (2001) presents a thorough review of complexity theory. I argue that Manson does not make several concepts in his paper clear, such as the difference between chaos and complexity. Manson states that “there is no one identifiable complexity theory” and “any definition of complexity is beholden to the perspective brought to bear upon it”. He parses complexity into three streams of research: algorithmic complexity, deterministic complexity, and aggregate complexity. However, I don’t quite agree with this schema. Algorithmic complexity describes systems that are so intricate that they are practically impossible to study. This problem cannot form part of the study of complex systems, because it arises from an insufficient understanding of the system being studied or from inadequate computational power to model and describe it. Therefore, algorithmic complexity may be a misleading movement away from complexity and its associated issues.

Even with many theoretical advancements and technical developments, complexity theory is still considered to be in its infancy, lacking a clear conceptual framework and unique techniques. Also, as Manson notes, it is important to explore “the ontological and epistemological corollaries of complexity”. Indeed, complexity has a relatively open ontology. It is necessary to consider the epistemology of complexity to understand the relationship between complexity ontology, emergence, and the balance between holism and reductionism.

Thoughts on Class Places and Place Classes – Geodemographics and the spatialization of class (Parker et al. 2007)

Sunday, November 10th, 2019

First of all, after reading the whole article, I am quite confused by its structure. It is not well structured, from my personal perspective, which left me with some confusion about the topic of geodemographics. It was also hard for me to get anything related to GIScience from the article: except for some hints about using information technology to classify populations, the whole article takes a sociological perspective on geodemographics rather than a GIScientist’s view of the topic.

Part of the reason this article is not very related to GIScience is probably when it was written. 2007 falls in the period when GIS was transforming into GIScience, and GIS itself was not as strongly bound to other disciplines as it is now in 2019. On the other hand, although the article focuses on classification methods, it engages less with existing debates around computational classification methods, instead discussing more supervised, heavily human-intensive classification work.

However, the article definitely gives a brief introduction to what geodemographics is and what the major debates in the field are, from a sociological perspective. I do wonder how GIScientists view their fellow sociologists’ classification methods, as well as the ontologies for geodemographics derived from those sociological classification methods.

Furthermore, I wonder how, in the age of big data analysis, the combination of big data and GeoAI contributes to the field of geodemographics from a GIScientist’s perspective, since the authors state that sociological classification still relies heavily on census and commercial data specially designed to study demographics. I would also like to learn more about the reactions of populations included in or excluded from geodemographic classification, as well as the ethical discussion around the process of classifying populations.

Thoughts on Turcotte (2006) “Modeling Geocomplexity: A New Kind of Science”

Sunday, November 10th, 2019

The article “Modeling Geocomplexity: A New Kind of Science” by Turcotte (2006) introduces the topic of geocomplexity. It highlights how our understanding of natural phenomena is enriched and made more complete when it incorporates the variety of methods, beyond standard statistics, that are emerging in the field for different situations.

As someone who has no prior knowledge of geocomplexity in GIScience, I did find this topic a little difficult to wrap my mind around. Despite this, I did find it very interesting to see the different models that have emerged to better understand geological processes. I find it interesting that self-organized complexity utilizes computer-based simulation that can be used in classrooms. I think it would be a more intuitive and visual way to learn about geological functions.

After reading this article I had more questions than I had before. I am not sure I completely understand the concept of geocomplexity… but look forward to learning more about it.

What is randomness?

Sunday, November 10th, 2019

Geocomplexity is, for lack of a better word, complex. After reading Turcotte’s “Modeling Geocomplexity”, I’m left with one main question – in this context, what is randomness?

Most of the models outlined in the paper are focused on demonstrating the chaotic, unpredictable nature of natural systems. The argument, as I understand it, centers on the idea that a sufficiently complex system will be profoundly unpredictable, and that minor changes can have massive consequences as those systems play out. What this implies to me is that there is some degree of truly “random” behavior at play, and that this randomness is what prevents these systems from being easily understood.

Having no background in this subject, I find that I still don’t understand what “randomness” is. How does it arise in these systems? If the location of every particle in a system were known, would we be able to model it in a way that did not include any randomness? Chaos theory is mostly preoccupied with the idea that minuscule variations in the initial conditions of a system can result in vastly different outcomes. Where within that concept does randomness lie? I suppose I don’t have the theory and statistics background to make sense of these arguments well, but this has inspired me to delve deeper into the subject in the future.
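One classic illustration, sketched below (my own example, not taken from Turcotte’s paper), may help: the logistic map x_{n+1} = r·x_n·(1 − x_n) is completely deterministic, with no random term anywhere, yet two trajectories that start almost identically diverge within a couple of dozen steps. The unpredictability comes from sensitivity to initial conditions rather than from randomness.

```python
# Deterministic chaos without randomness: the logistic map x_{n+1} = r*x_n*(1-x_n).
# Two nearly identical starting values diverge, even though nothing random happens.
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map from x0 (with r=4.0, a chaotic regime) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # differs by one part in a million

for n in (0, 5, 10, 20, 30):
    print(f"step {n:2d}: a={a[n]:.6f}  b={b[n]:.6f}  |a-b|={abs(a[n]-b[n]):.6f}")
# By roughly step 20 the two trajectories bear no resemblance to each other.
```

In this sense, knowing the location of every particle would not rescue predictability: the equations can be exact and still amplify any measurement error until forecasts become useless, which is the point chaos theory makes.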