Researching Volunteered Geographic Information (Elwood et al., 2012)

November 17th, 2019

In this paper, the authors classify sites related to the collection of VGI in order to study VGI quality and develop methods for analyzing VGI. VGI has altered how spatial data are created and the mechanisms for using and sharing these data. Because VGI is driven by contributors’ collective efforts, I am curious to know what motivates individuals to give freely of their time and expertise to develop VGI. What makes contributors stop contributing information to VGI projects? How do their motivations change as they engage in VGI activities? Will individuals map an area that has already been mapped in the last few years?

The authors point out concerns over the quality and trustworthiness of VGI. As we know, VGI has been used as an alternative to commercial or proprietary datasets. This makes me wonder how a VGI project, with no strict data specification or quality control, can establish some type of trust. How can we measure the reputation of a contributor to better understand the quality and trustworthiness of the data? How can we assess the quality of the contributions? Last, the authors mention that “VGI has the potential to address the constraints and omissions that plague SDIs”. Although VGI raises concerns about data quality and scale and will not completely replace SDIs, I believe that VGI will become a key spatial data producer within SDIs.

Towards the Geospatial Web (Scharl, 2007)

November 17th, 2019

This chapter identifies the possibilities of spatial knowledge extraction from unstructured text. Unstructured data do not necessarily require a more structured geography, but if these data are combined with other datasets that are geolocated, being able to geolocate them might be useful. Translating text into geographic information is difficult, and a much more difficult proposition than simply assigning coordinates to photographs. The author introduces geoparsing, a process used to extract spatial data from texts. In addition to photos and videos, we can now geotag text messages, tweets, and more. But what about the data generated before the emergence of the geoweb? Can we extract spatial information from old news articles? How can we add a spatial structure to data that do not already have it, in order to mesh them with the geoweb? Also, I am looking forward to learning about some useful tools for geoparsing.
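Out of curiosity about such tools, here is a minimal, hypothetical sketch (my own, not from the chapter) of the simplest form of geoparsing: matching text against a gazetteer of known place names. The gazetteer entries and coordinates below are illustrative; real geoparsers layer named-entity recognition and disambiguation on top of a large gazetteer such as GeoNames.

```python
import re

# Toy gazetteer: place name -> (lat, lon). Entries are illustrative only.
GAZETTEER = {
    "Montreal": (45.5019, -73.5674),
    "Santa Barbara": (34.4208, -119.6982),
    "Arizona": (34.0489, -111.0937),
}

def geoparse(text):
    """Return (place, coordinates) pairs for gazetteer names found in text."""
    matches = []
    for place, coords in GAZETTEER.items():
        # Word-boundary match so a name doesn't fire inside another token.
        if re.search(r"\b" + re.escape(place) + r"\b", text):
            matches.append((place, coords))
    return matches

print(geoparse("An explosion of visitors to Horseshoe Bend in Arizona."))
# -> [('Arizona', (34.0489, -111.0937))]
```

Even this toy version hints at why geoparsing is hard: many place names collide with ordinary words or with each other (Paris, France vs. Paris, Texas), so the real work lies in disambiguation, not in the string matching itself.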

Furthermore, this chapter doesn’t clarify what exactly the geoweb is. What are the boundaries between the web and the geoweb? Last, many of the platforms that we rely on for geographic information are for-profit entities that do not concern themselves with issues of justice and equity. However, it is important for us to note how the geoweb encodes, reifies, and (re)produces inequality.

Thoughts on Geospatial Web

November 17th, 2019

After reading this article, I found that the geospatial web is much more than what I expected. It turns out that the geospatial web can be used not only in geography studies but also in other disciplines.

The article mentions that “Once geospatial context information becomes widely available, any point in space will be linked to a universe of commentary on its environmental, historical and cultural context, to related community events and activities and to personal stories and preferences”. This statement gave me an interesting thought. Research on augmented reality (AR) has been very popular in recent years, and I would say each world built in AR should also sit in a certain geospatial context, so the things in that world also carry some kind of location information. If they had no location, the AR world would be a mess, since everything would be floating around.

Obviously, the location information in an AR world cannot be directly interpreted as coordinates that exist in the real world. But still, AR systems have some way of geolocating everything. And, as in the author’s statement, each place should be linked to a universe of commentary, so the AR world can have its own environmental, historical, and cultural context. As a result, the AR world is very similar to the real world. So, the question would be: can we have a geospatial web based on the AR world?

I would say yes, but I’m still curious about how this could work.

Thoughts on VGI

November 17th, 2019

With the development of the Internet, volunteered geographic information has played an increasingly important role not only in geographic information science but also in human geography and geographic education. I noticed that the author explicitly emphasizes the importance of the “volunteered” part: in order to be referred to as VGI, the people involved should know that they are contributing voluntarily rather than passively. This leaves me with a question: where, then, should we categorize data that are generated passively?

Besides, the author also mentioned that no one can guarantee the quality of VGI data. I think this becomes a big problem especially when researchers use the data to make critical decisions. Data uncertainty is always important, no matter the discipline. The quality of VGI data, however, is especially hard to evaluate because the data are volunteered, and they are collected and analyzed by different groups of people with different backgrounds. So, my question is: is there any way that researchers can at least take the data uncertainty problem into consideration when using these data? The characteristics of VGI make it even harder to apply conventional methods of evaluating data quality.

Another point: the author mentioned that there is some connection between the geospatial web and VGI, but he didn’t explain it. I’m very curious whether there are any examples or explanations of this connection.

Thoughts on Web Mapping 2.0: The Neogeography of the GeoWeb (Haklay et al. 2008)

November 17th, 2019

This paper overviews the development of geography in the Web 2.0 era, where neogeography is founded on the blurring boundaries between geographers, consumers, technicians, and contributors. The case studies of OSM and the London Profiler show how technology allows geography and cartography to embrace new forms of data sources, interaction with the general public, and access to geographical knowledge and information.

The most intriguing part of this paper for me is the debate over whether the public participation in geographic information production and research that Web 2.0 brings is a “cult of the amateur” or “mass collaboration”. From my point of view, this is exactly where professionals from the realm of geography are needed. Data are neutral, but their providers, contributors, and manufacturers are not. However, no one is going to complain about an abundance of available data, especially detailed user-generated data that complement what was missing before. It is up to the expert to decide what to do with the data and how to interpret them, instead of blaming their source.

My point, then, is that there is nothing wrong with the information itself, or with its provider. What matters is the people interpreting it, making profit from it, and discovering knowledge from it. Thus, whether Web 2.0 is facilitating a “cult of the amateur” or “mass collaboration” depends solely on the professionals who are trying to use the data. Even data that seem useless in one study can be extremely important for another field of research. For geography, an open-minded, multi-disciplinary science, drawing lines and rejecting what is novel, trendy, and probably the future is not supported by the ideology of the subject. Adaptation and evolution should be the key for any scientific discipline.

Le GéoWeb

November 17th, 2019

The advent of the Internet, followed by the arrival of Web 2.0, has no doubt changed the way geographic information is obtained and shared, a fact that is well described by this article by Haklay et al. Without claiming that the age of paper maps is behind us, the internet has propelled us into an era where the internet and geography are combined like never before.

Although the Internet has allowed many to navigate a digital Earth for the first time, several issues have appeared from the combination of the internet and the discipline of geography. The field has been democratized by the increased online accessibility of geographic tools and mashups, which increases the visibility of geography but could also be seen as reductionist, with geography essentially becoming non-experts having fun geotagging pictures. Speaking of geotagging, the social media boom has led to people going en masse to previously unvisited areas, a phenomenon that has caused an explosion of visitors to Horseshoe Bend in Arizona, for example, and drastically affected the local environment.

Researching Volunteered Geographic Information: Spatial Data, Geographic Research, and New Social Practice

November 17th, 2019

Sarah Elwood et al. wrote a fundamental paper about Volunteered Geographic Information (VGI), covering definitions of VGI, its research domains, frameworks within and beyond VGI, the impact of VGI on spatial data infrastructures and geographic research, and the quality issues of VGI data along with the associated methodologies. VGI certainly contributes to today’s data explosion and expands research into new fields, but plenty of issues still confront the creation and application of VGI data. I am really interested in the quality issues of VGI data, which I think are the most important when conducting research with VGI, even though there are a host of situations in which VGI is beneficial even when its quality is hard to assess. The authors do point out four insights into why quality in VGI is quite different from traditional data quality; however, I am still curious how to deal with those differences and with the difficulties of uncertainty and inaccuracy in VGI data. How could newly developed AI technologies help determine basic quality-assessment rules when filtering useful information? I also think that education in geography would be really helpful for further developing a high-quality VGI system.

Moreover, it is argued that VGI is a paradigmatic shift in data collection, creation, and sharing in GIS. Does that mean the traditional data types (raster, vector, object-based, etc.) will be forced to change by the development of VGI? Also, the Web plays an important role in VGI; does that mean VGI cannot develop independently of WebGIS? If so, what is the relationship between WebGIS and VGI (both will be presented next week)?

Who is the crowd?

November 17th, 2019

This post is written in response to both the articles covering the geoweb and those covering VGI.

In reviewing these topics, I’m struck by an interesting issue that seems unaddressed. Within both topics, there are extensive references to the power of OpenStreetMap, with numerous examples of OSM as both VGI and an important part of the GeoWeb. Recent discoveries of mass corporate edits to OpenStreetMap have upended the academic conceptualization of the product and throw much of the rhetoric used around both VGI and the GeoWeb into disarray. The main question raised by this development, as I see it, is: who is the “crowd” in “crowdsourcing”?

The internet is an incredibly complex system. In understanding the internet, much research is focused on the interactions between individual humans and the internet. These interactions are the end-point of the GeoWeb system and the input point of VGI systems. These points involve extensive flows of information back and forth (a defining aspect of Web 2.0).

We normally conceive of the end-consumers as individual human beings. When these consumers are instead entities that consist of large groups of humans, such as governments or companies, the way the system works changes. The end user can no longer be assumed to have one set of ideologies or use-cases, and the power of a single large multi-person entity may be exponentially greater than that of a single person. These entities have MUCH more complex physical bounds than a single person, so the VGI they offer may not fit well within our traditional concepts of maps. Similarly, the usage of the GeoWeb by these entities likely fits the definition of “geocomplexity,” in that it will certainly generate emergent spatial systems. The large-scale relationship between the internet and institutions is deserving of further research.

Extending this idea, the entities that the GeoWeb is interacting with, and the entities that may be generating VGI, might not be human at all. Animals whose tracker data are uploaded to the internet, or entire ecosystems viewed from a satellite, play with our loose definitions of both concepts. Can the actions of a lumber company using online map data analysis to decide where to cut be considered a natural process?

At the extreme edges of this train of thought, the internet may be interacting with and receiving information from AIs, or from itself. These internal loops and systems happening inside a computer resemble those occurring outside, and when the internet of things is considered, the line between physical and digital becomes blurry. Where do we draw the lines?

Thoughts on Researching Volunteered Geographic Information

November 17th, 2019

Elwood et al.’s article discusses the emergence of VGI as a new form of geographic information and how this can influence geographic research. This article did a good job analyzing the related concerns and issues in using VGI in geographic research, which provides me with a lot of new insights on this topic. I’m particularly drawn by two points discussed in the article.
The first point is the data quality of VGI. Researchers are often concerned about the data quality of VGI, as it is non-authoritative and has not been validated in a formal way. In response to this, the authors argue that VGI can be regarded as authoritative on the basis that it originates from direct local knowledge and observations, and that its reliability rests on the convergence of information generated by a number of contributors. However, this does not mean that expertise is no longer important. As argued earlier in the article, the “expertise, tools, and theoretical frameworks of professional geographers are essential to addressing many of the more profound questions associated with VGI”, including the issue of data quality. I’m wondering what role professional geographers could and should play in the data quality issue related to VGI, given that reliability is based on the “similarity of the submissions”.
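The idea that reliability rests on the convergence of many submissions can be sketched concretely. In this toy Python example (my own illustration, with made-up coordinates), several volunteered positions for the same feature are combined via the coordinate-wise median as a robust consensus, and the spread of the reports serves as a crude convergence score:

```python
import statistics

# Hypothetical volunteered positions (lat, lon) for one feature,
# reported by different contributors. All values are invented.
reports = [
    (45.5017, -73.5673),
    (45.5020, -73.5676),
    (45.5018, -73.5674),
    (45.9000, -73.9000),  # an outlier, perhaps a bad GPS fix
]

def consensus(reports):
    """Coordinate-wise median: a robust consensus position."""
    lats = [lat for lat, _ in reports]
    lons = [lon for _, lon in reports]
    return statistics.median(lats), statistics.median(lons)

def spread(reports):
    """Standard deviation per coordinate: a crude convergence measure."""
    lats = [lat for lat, _ in reports]
    lons = [lon for _, lon in reports]
    return statistics.stdev(lats), statistics.stdev(lons)

print(consensus(reports))
print(spread(reports))
```

The median shrugs off the single outlier, while the spread flags how tightly the submissions converge; a platform could down-weight contributors whose reports persistently fall far from the consensus, which is one simple way of operationalizing contributor reputation.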
Second, the authors highlight the issue of the digital divide formed by VGI: some groups and individuals are included in creating and using VGI while others are excluded. For researchers who use VGI as research input, it is important to realize that the data are biased towards the people who are “privileged” enough to contribute this information.

Thoughts on Neogeography

November 17th, 2019

I have several concerns about neogeography as it’s defined and described in the “Web Mapping 2.0” article. The quote from Turner portrays neogeography as “fun” and “about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place.” However, I’d push back on both of these notions. First of all, why would geography have to be fun? Making an academic pursuit more inherently enjoyable could run the risk of eroding the rigor of the field. This could come off as “elitist,” and I don’t want geography to be inaccessible to anyone who’d like to use it. However, if anyone (academic or layperson) finds geography not “fun” enough to pursue, then they shouldn’t pursue it; creating a snazzy “neogeography” for them to utilize would almost necessarily make it easier and less rigorous, diluting and weakening their results. Furthermore, can’t it already be fun? I think it is!

With regards to the applications of neogeography, can’t geography/GIS already be used for “sharing location information… helping shape context, and conveying understanding through knowledge of place”? For example, the paper “Extending the Qualitative Capabilities of GIS” by Jung and Elwood thoroughly discusses how GIS can be used to display meaning and context, and it was written in 2010. Why come up with a “neogeography” to complete these tasks, when existing GIS technologies can do the same thing as is or with slight modifications? Perhaps I’m too caught up in the current paradigm of what GIS is and should be; regardless, we should ask ourselves whether going through the effort of creating, classifying, or distinguishing a new kind of geography from the status quo is necessary or appropriate.

Thoughts on “Citizens as Sensors”

November 17th, 2019

I really liked this piece and thought it was an easy, informative read (thanks Liz!). One place where I thought it was lacking, however, was the “Concerns” section. Goodchild talks about how only the privileged may be able to contribute VGI, and as a result they may be overrepresented in, or over-benefit from, analyses and policies that come from VGI, like disaster relief plans. This is probably true, but Goodchild fails to consider what a double-edged sword VGI can be. He only looks at examples of VGI being used for “good”; however, that won’t always be the case. Those who can’t contribute VGI because of their social status and wealth (for example, lacking a phone) won’t benefit as much from well-meaning and helpful uses of VGI; however, it can also be argued that they won’t be hurt as much by improper uses of VGI. I’m probably looking at this through too much of a geodemographics/Big Data lens, but I can imagine VGI being used for nefarious purposes. In such cases, not being able to contribute VGI (for example, being “off the grid”) may be beneficial, as the powers that be (government, private sector, etc.) cannot use your data against you. Goodchild has made the assumption that VGI is used to help society and individuals; from this viewpoint, everyone would want to be able to contribute VGI. However, as data privacy and the like become bigger problems, will we? I think there will be a balance to strike between wanting to contribute VGI to reap the resulting policy benefits and holding back from contributing as much VGI to avoid potential negative impacts.

Thoughts on Citizens as sensors: the world of volunteered geography (Goodchild, 2007)

November 17th, 2019

This Goodchild piece serves as a brief introduction to the topic of VGI. Although it was written in 2007, when computational power and artificial intelligence were just in their start-up phase, we can already see how VGI serves as a main data source in cartography and related geographic fields. However, while highlighting the contributions of VGI, he also points out the limitations of relying on VGI as a source of geographic data: the validity, accessibility, and authority of the data.

Nowadays, we see OSM and Google Maps used as major sources for many spatial analytical studies, especially at larger extents where primary data collection has become time- and labor-consuming. Just as Goodchild argues from the perspective of researchers, while the availability of spatial data that can be extracted from VGI sources is promising, there are questions that need to be asked about synthesizing and validating VGI data to increase its accuracy.

Who contributes to the data? This question remains unsolved even 12 years after he wrote this paper. It asks about the coverage of the population that VGI data might represent, the areas it covers, and the scope it uses. Why do people do this? This is another question, relating to the bias and incentives behind VGI data, which potentially influence the results of research using it. Also, with various VGI data sources available, how we can combine them to cross-validate and reference one another to generate better accuracy is a question I would like to answer. How to cross-reference different (non-VGI) sources against VGI data to increase its validity, and somehow give it authority, is another interesting topic I am eager to learn about.

Thoughts on “Goodchild – Citizens as sensors”

November 17th, 2019

This article by Goodchild lays out the foundation of Volunteered Geographic Information (VGI) by explaining the technological advances that helped it develop, as well as how it is practiced.

The widespread availability of 5G cellular networks in the upcoming years will drastically improve our ability as humans to act as sensors with our internet-connected devices, given improved upload/download speeds as well as lower latency. These two factors will greatly help the transfer of information, for example by allowing more frequent locational pings or by letting more devices connect to the internet, since 5G supports more simultaneous connections than 4G.

Although VGI provides the means to obtain information that might otherwise be impossible to gather, the reliability of the data can be questioned. An example is OpenStreetMap, where anyone is free to add, change, move, or remove buildings, roads, or features as they please. Although most contributors do so with good intentions, inaccuracies and errors can slip in, affecting the product. As other websites and mobile applications use OSM data to provide their services, it becomes important for users and providers to have valid information. As pointed out in the article, the open nature of VGI allows malevolent users to undermine others’ experience. An example of such an event would be people recently taking advantage of the VGI nature of OSM, changing the land coverage of certain areas in order to gain an advantage in the mobile game Pokémon GO.

Finally, there is also the issue of who owns the data. Is it the platform or the user that provided the data? Who would be responsible if an inaccurate data entry led to an accident or a disaster? As with any other growing field closely linked to technological advancement, governments will need to further legislate on VGI in order to allow for easier regulation.

neo is new in the way of empowering individuals

November 17th, 2019

I have little to say about the Web Mapping 2.0 paper. We very clearly persist in a new geography as we interact via a space we didn’t always have access to – the internet. Some of us still don’t have this access. But I’m not convinced the paper actually did what it set out to do – specifically in the sense of discussing ramifications for society. Early discussion of terms is important, so for someone like me – new to thinking about neogeo – the paper is a helpful start. I wouldn’t end here now, though. We get to decide what’s next for geo, and it seems like neogeo is in the driver’s seat.
Just want to point to the authors’ use of “complexity” in Web Mapping 2.0 and neogeography. It’s not the same as complexity theory; they must have meant “complicated” in each instance.
“Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset.”
An encouraging quote; empowering people by assigning them the agency to characterize their own human/biophysical environments is the part of neogeo that makes it neo – new, and not steeped in colonialism.
Excited to force conversations about either movement or complexity in class tomorrow.

sensors –> singularity

November 17th, 2019

With humans as sensors, we move towards the singularity.

Woah, catchy subtitle, movement, and robo-human dystopia! Does the post need anything else?

I guess so… some hundred more words according to the syllabus :/.

Goodchild’s examples of errors at UCSB and the City of Santa Barbara point to the danger of ascribing authority to mappers. With this authority, they also accept the power to erase people and place. The real question in any discussion of VGI ought to be about who gets this power. Whether it’s the USGS, NASA, or a network of individually empowered agents, someone wields this power. What infrastructure do we as GIScientists support?
I’m so conflicted: I like bottom-up everything, but maps are consumed by, represent, and interact with people. The question is, can they also be by the people? Who knows – I’ve just strung enough words together to make this work – see yas in class.

thoughts on geodemographics (Baiocchi et al., 2010)

November 11th, 2019

“The rationale behind geodemographics is that places and people are inextricably linked. Knowledge about the whereabouts of people reveals information about them. Such an approach has been shown to work well because people with similar lifestyles tend to cluster — a longstanding theoretical and empirical finding in the sociological literature.”
This paragraph summarizes the theoretical basis of the analysis conducted by this study and the basic idea of geodemographics. I think this shares the same idea as AI profiling using big geospatial data; or, put another way, AI profiling with regard to space is geodemographics. Some of the critical issues are similar. The first issue relates to the uncertainties of the knowledge it produces, which can cause unjust actions towards individuals. As Mittelstadt (2016) argues, even if strong correlations or causal knowledge are found, this knowledge may only concern populations while actions are directed towards individuals. This becomes more problematic when we conduct spatial clustering and assume that places can reflect every individual, and that decisions can be made based on the analysis of an area. The second issue is once again related to scale, or the modifiable areal unit problem: the scale of analysis can significantly influence the results we obtain. At which scale can we argue that places and people are inextricably linked? At the neighborhood level, city level, or country level? I wonder whether those issues are considered or addressed in the field of geodemographics.
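The population-versus-individual problem Mittelstadt raises is easy to demonstrate numerically. In this toy sketch (all numbers invented), a single area-level "profile" value is assigned to every resident, geodemographics-style, and the error made for an atypical individual turns out to be large:

```python
# Toy illustration of the ecological fallacy in geodemographic profiling:
# an area-level average stands in for every resident, even though
# individuals within the area vary widely. All numbers are made up.
neighbourhood_incomes = [18_000, 22_000, 21_000, 95_000]  # one wealthy outlier

# The "profile" assigned to the whole area (and hence to each resident).
area_profile = sum(neighbourhood_incomes) / len(neighbourhood_incomes)
print(area_profile)  # 39000.0 -> the "typical resident" matches nobody

# Error made when the area profile is applied to each individual.
errors = [abs(x - area_profile) for x in neighbourhood_incomes]
print(max(errors))  # 56000.0 -> the worst-case individual error
```

The area profile of 39,000 describes none of the four households, and acting on it treats the outlier household as if it earned less than half its actual income; this is exactly the gap between population-level knowledge and individual-directed action.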

Reflection on Geodemographic

November 11th, 2019

As far as I understand, geodemographic data link the sciences of demography and geography to represent variation in human and physical phenomena locationally and spatially. The study presented in this article used a geodemographic dataset called ACORN. The authors mention in the limitations section that the uncertainties in the ACORN data are associated with the imputation of missing information, and that these uncertainties are difficult to quantify. Since geodemographic data are closely linked with human behavior, it is hard to identify their quality and accuracy. But I still wonder: are there possible ways to deal with such uncertainty? Or how can we manage geodemographic data so that they carry relatively less uncertainty?

Besides, the authors also assume that there is no regional or local variation in the expenditure profiles, which means households belonging to the same type are presumed to have the same spending patterns no matter where they are located in the territory. But obviously this can be problematic, since an uncertain data source may strongly influence the final result. So, is there any way we can assess how this averaging process influences the result, and is there any way we can at least try to eliminate its effect?

There is also one thing I’m very curious about as I try to understand the concept of geodemographic data. Are they the same as census data? If not, what is the difference? Are geocoded census data part of geodemographic data, or are census data part of geodemographic data?

Thoughts on Simplifying complexity: a review of complexity theory (Manson 2000)

November 10th, 2019

This paper thoroughly reviews and examines the field of complexity. By dividing the field into three kinds of complexity theory – 1) algorithmic complexity, 2) deterministic complexity, and 3) aggregate complexity – the author systematically explains each theory with its implications and future research opportunities, opening a new door for me as an urban researcher.

I do agree with the author that complexity needs more attention from geographers and planners: since my first class in urban geography, I have been taught, and agree, that cities are open systems that the public and academics have yet to find a way to understand. Thus, to better simplify cities and urban research areas, understanding their complexity is the first step. The majority of urban researchers seek to simplify urban environments to reach an empirical theory, statement, or piece of knowledge; however, simplification should come only after fully understanding the complexity of the objects under study. In urban geography and planning, I doubt anyone has ever thoroughly comprehended all the underlying components that make a city work. Thus, there is a need for urban researchers and GIScientists to study the algorithmic, deterministic, and aggregate complexity before proposing simplified models. In urban-related study, the need for complexity research is urgent, lest the study area become a palace built on clouds.

In addition, for GIScientists especially, understanding and studying algorithmic complexity might be the future trend of study, regardless of the field in which their study objects land. The discipline’s technological foundation makes GIScientists more likely to be aware of such issues, and the capability to address algorithmic complexity is an advantage compared to researchers from other spatially related disciplines.

Reflection on Geocomplexity

November 10th, 2019

After reading this article, I’m still not sure if I fully understand the concept of geocomplexity, since I am still trying to understand how geocomplexity relates to spatial problems. The author categorizes complexity into three types: algorithmic complexity, deterministic complexity, and aggregate complexity. Each type of complexity deals with different theories. For example, algorithmic complexity deals with mathematical complexity theory and information theory, while deterministic complexity deals with chaos theory and catastrophe theory.

As far as I understand, algorithmic complexity measures the effort needed to solve a problem or achieve a result. It follows that topics that are themselves vague may be hard to evaluate this way. Since my topic is spatial data uncertainty, I was wondering how a researcher would apply algorithmic complexity to data uncertainty, given that uncertainty itself can be vague and ambiguous.
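To make "effort needed to solve a problem" concrete, here is a toy Python comparison (my own example, not from the paper) that counts the comparisons two search algorithms make on the same sorted data; algorithmic complexity is precisely the study of how such counts grow with input size:

```python
def linear_search(items, target):
    """Scan front to back, counting comparisons until target is found."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            break
    return comparisons

def binary_search(items, target):
    """Halve the sorted search range each step, counting comparisons."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

data = list(range(1024))
print(linear_search(data, 1000))  # 1001: effort grows linearly with n
print(binary_search(data, 1000))  # 10: effort grows with log2(n)
```

Whether spatial data uncertainty can be framed this way depends on stating the problem precisely enough that "steps to solve it" can be counted at all, which is exactly the difficulty with vague and ambiguous phenomena.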

As for deterministic complexity, the author mentions that it would be too simplistic to characterize a human system by a few simple variables or deterministic equations, so few systems are actually deterministically chaotic. I was wondering whether there are any examples where human systems are in fact deterministically complex. If there are none, then what systems are usually regarded as deterministically complex?

And finally, aggregate complexity is used to assess the holism and synergy that come from the interaction of system components. Back to my topic: the system components in the spatial data uncertainty field would be error, vagueness, and ambiguity. How would these three components be defined in the case of aggregate complexity?

The Impact of Social Factors and Consumer Behavior on Carbon Dioxide Emissions (Baiocchi et al., 2010)

November 10th, 2019

This paper applies geodemographic segmentation data to assess the direct and indirect carbon emissions associated with different lifestyles. As geodemographics is generally used to improve the targeting of advertising and marketing communications, I am curious about its use in GIScience.

In this paper, the authors argue that the top-down approach conventionally used to classify lifestyle groups fails to recognize the spatial aspects associated with lifestyles. This is why they choose to use geodemographic lifestyle data, which employ bottom-up techniques that draw spatial patterns out of the data, as opposed to fitting them to some a priori classification of neighborhood types. However, it is important to note that geodemographic classification systems are beset by the Modifiable Areal Unit Problem and by ecological fallacies in which the average characteristics of individuals within a neighborhood are assigned to specific individuals. For example, in the ACORN group labeled “Prudent pensioners”, many people will be neither elderly singles nor old; more importantly, many others who are both elderly singles and old are located outside “Prudent pensioners” areas. Also, as far as I know, the data used to build these classification systems mostly derive from the census, which becomes dated quickly and is not sufficient to capture the key dimensions that differentiate residential neighborhoods. Are there any alternative datasets for geodemographics?