Archive for the ‘General’ Category

Researching Volunteered Geographic Information (Elwood et al., 2012)

Sunday, November 17th, 2019

In this paper, the authors classify sites related to the collection of VGI in order to study VGI quality and develop methods for analyzing VGI. VGI has altered how spatial data are created and the mechanisms for using and sharing these data. Because VGI is driven by contributors’ collective efforts, I am curious to know what motivates individuals to give freely of their time and expertise to develop VGI. What makes contributors stop contributing information to VGI projects? How do their motivations change as they engage in VGI activities? Will individuals map an area that has already been mapped in the last few years?

The authors point out concerns over the quality and trustworthiness of VGI. As we know, VGI has been used as an alternative to commercial or proprietary datasets. This makes me wonder how a VGI project, with no strict data specification or quality control, can establish some form of trust. How can we measure the reputation of a contributor to better understand the quality and trustworthiness of the data? How can we assess the quality of the contributions? Last, the authors mention that “VGI has the potential to address the constraints and omissions that plague SDIs”. Although VGI raises concerns about data quality and scale and will not completely replace SDIs, I believe that VGI will become a key source of spatial data within SDIs.
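One crude way to approach contributor reputation is to score each volunteer by how often their past edits survived community review. This is only a minimal sketch with a made-up edit-history format and fabricated data, not a method from the paper:

```python
# Minimal sketch: scoring contributor reputation from edit survival.
# The edit-history format and all data here are hypothetical.

def reputation(edits):
    """Share of a contributor's edits that survived peer review.

    Uses a Laplace-style prior (add-one smoothing) so a contributor
    with a single surviving edit is not instantly fully trusted.
    """
    survived = sum(1 for e in edits if not e["reverted"])
    return (survived + 1) / (len(edits) + 2)

contributors = {
    "alice": [{"reverted": False}] * 40 + [{"reverted": True}] * 2,
    "bob":   [{"reverted": False}],   # one edit: too little evidence
}
for name, edits in contributors.items():
    print(f"{name}: {reputation(edits):.2f}")
```

Data weighted by such a score could then feed quality estimates, though a real system would also have to ask who reviews the reviewers.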

Towards the Geospatial Web (Scharl, 2007)

Sunday, November 17th, 2019

This chapter identifies the possibilities of extracting spatial knowledge from unstructured text. Unstructured data do not necessarily require a more structured geography, but if these data are combined with other geolocated datasets, being able to geolocate them becomes useful. Translating text into geographic information is difficult, and a much harder proposition than simply assigning coordinates to photographs. The author introduces geoparsing, a process for extracting spatial data from text. In addition to photos and videos, we can now geotag text messages, tweets, and more. But what about the data generated before the emergence of the geoweb? Can we extract spatial information from old news articles? How can we add a spatial structure to data that do not already have one, in order to mesh them with the geoweb? I am also looking forward to learning about some useful tools for geoparsing.
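As a taste of what geoparsing involves, here is a minimal gazetteer-lookup sketch; real geoparsers (the Edinburgh Geoparser or CLAVIN, for example) add named-entity recognition and disambiguation between places that share a name. The tiny gazetteer here is fabricated:

```python
# Minimal sketch of gazetteer-based geoparsing: find known place names
# in free text and attach coordinates. Real tools must also handle
# multi-word names and ambiguity (which "Springfield"?).

import re

GAZETTEER = {
    "montreal": (45.50, -73.57),   # (lat, lon); entries are illustrative
    "arizona": (34.05, -111.09),
}

def geoparse(text):
    """Return (place, (lat, lon)) pairs for gazetteer names found in text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [(t, GAZETTEER[t]) for t in tokens if t in GAZETTEER]

print(geoparse("Flooding reported near Montreal; photos shared from Arizona."))
```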

Furthermore, this chapter doesn’t clarify what exactly the geoweb is. What are the boundaries between the web and the geoweb? Last, many of the platforms that we rely on for geographic information are for-profit entities that do not concern themselves with issues of justice and equity. It is therefore important for us to note how the geoweb encodes, reifies, and (re)produces inequality.

Thoughts on Geospatial Web

Sunday, November 17th, 2019

After reading this article, I found that the geospatial web is much more than I had expected: it can be used not only in geographic studies but also in other disciplines.

The article mentions that “Once geospatial context information becomes widely available, any point in space will be linked to a universe of commentary on its environmental, historical and cultural context, to related community events and activities and to personal stories and preferences”. This statement prompts an interesting thought. Research on augmented reality (AR) has been very popular in recent years, and I would say each world built in AR should also exist in a certain geospatial context, so the things in that world carry some kind of location information. If they had no locations, the AR world would be a mess, since everything would be floating around.

Obviously, the location information in an AR world cannot be directly interpreted as coordinates that exist in the real world. But still, AR must have some way to geolocate all of its objects. And, as the author’s statement suggests, each point should be linked to a universe of commentary through which the AR world can acquire environmental, historical, and cultural context. As a result, the AR world is very similar to the real world. So, the question is: can we have a geospatial web based on an AR world?

I would say yes, but I’m still curious about how this could work.

Thoughts on VGI

Sunday, November 17th, 2019

With the development of the Internet, volunteered geographic information plays an increasingly important role not only in geographic information science, but also in human geography and geographic education. I noticed that the author explicitly emphasizes the importance of the “volunteered” part: for data to be called VGI, the people involved should know that they are contributing voluntarily rather than passively. This leaves me with a question: where should we categorize data that are generated passively?

Besides, the author also mentions that no one can guarantee the quality of VGI data. I think this becomes a big problem when researchers use the data to make critical decisions. Data uncertainty matters in every discipline, but the quality of VGI data is especially hard to evaluate because the data are volunteered, and they are collected and analyzed by different groups of people with different backgrounds. So my question is whether there is any way researchers can at least take data uncertainty into consideration when using VGI, since its characteristics make the usual ways of evaluating data quality even harder to apply. One crude possibility is sketched below.
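When several volunteers independently map the same feature, the spread of their submissions is one rough proxy for positional uncertainty. A minimal sketch of that idea, using fabricated coordinates:

```python
# Treat disagreement among contributors as measurable uncertainty:
# the standard deviation of independently reported coordinates for
# one feature. All coordinates below are made up.

from statistics import mean, stdev

reports = [(45.5017, -73.5673), (45.5020, -73.5671), (45.5049, -73.5700)]

lats = [lat for lat, lon in reports]
lons = [lon for lat, lon in reports]

print(f"consensus position: ({mean(lats):.4f}, {mean(lons):.4f})")
print(f"spread (degrees):   ({stdev(lats):.4f}, {stdev(lons):.4f})")
# A larger spread = less convergence = treat the feature more cautiously.
```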

Another point: the author mentions that there is some connection between the geospatial web and VGI, but he doesn’t explain it. I would be very curious to see examples or an explanation of this connection.

Le GéoWeb

Sunday, November 17th, 2019

The advent of the Internet followed by the arrival of Web 2.0 has no doubt changed the way geographic information is obtained and shared, a fact that is well described in this article by Haklay et al. Without saying the age of paper maps is behind us, the internet has propelled us into an era where the internet and geography are combined like never before.

Although the Internet has allowed many to navigate a digital Earth for the first time, several issues have appeared from the combination of the internet and the discipline of geography. The field has been democratized by the increased online accessibility of geographic tools and mashups, which increases the visibility of geography but could also be seen as reductionist, reducing geography to non-experts having fun geotagging pictures. Speaking of geotagging, the social media boom has led to people going en masse to previously unvisited areas, a phenomenon that has caused an explosion of visitors to Horseshoe Bend in Arizona, for example, drastically affecting the local environment.

Who is the crowd?

Sunday, November 17th, 2019

This post is written in response to both the articles covering the geoweb and those covering VGI.

In reviewing these topics, I’m struck by an interesting thing that seems unaddressed. Within both topics, there are extensive references to the power of OpenStreetMap, with OSM held up both as an example of VGI and as an important part of the GeoWeb. Recent discoveries of mass corporate edits of OpenStreetMap have upended the academic conceptualization of the product, and throw much of the rhetoric used around both VGI and the GeoWeb into disarray. The main question raised by this development, as I see it, is who is the “crowd” in “crowdsourcing.”

The internet is an incredibly complex system. In understanding the internet, much research is focused on the interactions between individual humans and the internet. These interactions are the end-point of the GeoWeb system and the input point of VGI systems. These points involve extensive flows of information back and forth (a defining aspect of Web 2.0).

We normally conceive of the end-consumers as individual human beings. When these consumers are instead entities made up of large groups of humans, such as governments or companies, the way the system works changes. The end user can no longer be assumed to have one set of ideologies or use-cases, and the power of a single large multi-person entity may be exponentially greater than that of a single person. These entities have MUCH more complex physical bounds than a single person, so the VGI they offer may not fit well within our traditional concepts of maps. Similarly, the usage of the GeoWeb by these entities likely fits the definition of “geocomplexity,” in that it will certainly generate emergent spatial systems. The large-scale relationship between the internet and institutions deserves further research.

Extending this idea, the entities that the GeoWeb interacts with, and the entities that may be generating VGI, might not be human at all. Animals carrying trackers whose data are uploaded to the internet, or entire ecosystems viewed from a satellite, play with our loose definitions of both concepts. Can the actions of a lumber company using online map-data analysis to decide where to cut be considered a natural process?

At the extreme edges of this train of thought, the internet may be interacting with and receiving information from AIs, or from itself. These internal loops and systems happening inside a computer resemble those occurring outside, and when the internet of things is considered, the line between physical and digital becomes blurry. Where do we draw the lines?

Thoughts on Researching Volunteered Geographic Information

Sunday, November 17th, 2019

Elwood et al.’s article discusses the emergence of VGI as a new form of geographic information and how this can influence geographic research. The article does a good job analyzing the concerns and issues involved in using VGI in geographic research, which gives me a lot of new insights on this topic. I’m particularly drawn to two points discussed in the article.

The first point is the data quality of VGI. Researchers are often concerned about the quality of VGI because it is non-authoritative and has not been validated in a formal way. In response, the authors argue that VGI can be regarded as authoritative on the basis that it originates from direct local knowledge and observations, and that its reliability can rest on the convergence of information generated by a number of contributors. However, this does not mean that expertise no longer matters. As argued earlier in the article, “expertise, tools, and theoretical frameworks of professional geographers are essential to addressing many of the more profound questions associated with VGI”, including the issue of data quality. I’m wondering what role professional geographers could and should play in the data quality issue related to VGI, given that reliability is based on the “similarity of the submissions”.

Second, the authors highlight the issue of the digital divide formed by VGI. Some groups and individuals are included, while others are excluded from creating and using VGI. For researchers who use VGI as a research input, it is important to realize that the data are biased towards the people who are “privileged” enough to contribute this information.

Thoughts on Neogeography

Sunday, November 17th, 2019

I have several concerns about neogeography as it’s defined and described in the “Web Mapping 2.0” article. The quote from Turner portrays neogeography as “fun” and “about sharing location information with friends and visitors, helping shape context, and conveying understanding through knowledge of place.” However, I’d push back on both of these notions. First of all, why would geography have to be fun? Making an academic pursuit more inherently enjoyable could run the risk of eroding the rigor of the field. This could come off as me being “elitist,” and I don’t want geography to be inaccessible to anyone who’d like to use it. However, if anyone (academic or layperson) finds geography not “fun” enough to pursue, then they shouldn’t pursue it; creating a snazzy “neogeography” for them to utilize would almost necessarily make it easier and less rigorous, diluting and weakening their results. Furthermore, can’t it already be fun? I think it is!

With regards to the applications of neogeography, can’t geography/GIS already be used for “sharing location information… helping shape context, and conveying understanding through knowledge of place?” For example, the paper “Extending the Qualitative Capabilities of GIS” by Jung and Elwood thoroughly discusses how GIS can be used to display meaning and context, and it was written in 2010. Why come up with a “neogeography” to complete these tasks, when existing GIS technologies can do the same thing as is or with slight modifications? Perhaps I’m too caught up in the current paradigm of what GIS is/should be; regardless, however, we should ask ourselves if going through the effort of creating, classifying, or distinguishing a new kind of geography from the status quo is necessary or appropriate.

Thoughts on “Citizens as Sensors”

Sunday, November 17th, 2019

I really liked this piece, and thought it was an easy/informative read (thanks Liz!). One place where I thought it was lacking, however, was in the “Concerns” section. Goodchild talks about how only the privileged may be able to contribute VGI, and as a result they may be overrepresented or may over-benefit from analyses/policies that come from VGI, like disaster relief plans. This is probably true, but Goodchild fails to consider what a double-edged sword VGI can be. He’s only looking at examples of VGI being used for “good;” however, that won’t always be the case. Those who can’t contribute VGI because of their social status and wealth (for example, lack of a phone) won’t benefit as much from well-meaning and helpful uses of VGI; however, it can also be argued that they won’t be hurt as much by improper uses of VGI. I’m probably looking at this through too much of a geodemographics/Big Data lens, but I can imagine VGI being used for nefarious purposes. In such cases, not being able to contribute to VGI (for example, being “off the grid”) may be beneficial, as the powers that be (government, private sector, etc.) cannot use your data against you. Goodchild has made the assumption that VGI is used to help society and individuals; from this viewpoint, everyone would want to be able to contribute VGI. However, as data privacy and the like become bigger problems, will we? I think there will be a balance to strike between wanting to contribute VGI to reap the resulting policy benefits and holding back from contributing as much VGI to avoid potential negative impacts.

Thoughts on “Goodchild – Citizens as sensors”

Sunday, November 17th, 2019

This article by Goodchild lays out the foundation of Volunteered Geographic Information (VGI) by explaining technological advances that helped it develop as well as how it is done.

The widespread availability of 5G cellular networks in the upcoming years will drastically improve our ability as humans to act as sensors with our internet-connected devices, given improved upload/download speeds as well as lower latency. These two factors will greatly help the transfer of information, for example by allowing more frequent locational pings or by allowing more devices to be connected to the internet, as 5G supports more simultaneous connections than 4G.

Although VGI provides the means to obtain information that might otherwise be impossible to gather, the reliability of the data can be questioned. An example is OpenStreetMap, where anyone is free to add, change, move, or remove buildings, roads, or features as they please. Although most contributors do so with good intentions, inaccuracies and errors can slip in, affecting the product. As other websites and mobile applications use OSM data to provide their services, it becomes important for users and providers to have valid information. As pointed out in the article, the open nature of VGI allows malevolent users to undermine others’ experience. An example of such an event would be people recently taking advantage of the VGI nature of OSM, changing the land coverage of certain areas in order to gain an advantage in the mobile application Pokemon GO.

Finally, there is also the issue of who owns the data. Is it the platform or the user that provided it? Who would be responsible if an inaccurate data entry leads to an accident or a disaster? As with any other growing field closely linked to technological advancements, governments will need to legislate further on VGI to allow for easier regulation.

neo is new in the way of empowering individuals

Sunday, November 17th, 2019

I have little to say about the Web Mapping 2.0 paper. We very clearly persist in a new geography as we interact via a space we didn’t always have access to – the internet. Some of us still don’t have this access. But I’m not convinced the paper actually did what it set out to do – specifically in the sense of discussing ramifications for society. Early discussion of terms is important, so for someone like me – new to thinking about neogeo – the paper is a helpful start. I wouldn’t end here now though. We get to decide what’s next for geo, and it seems like neogeo is in the driver’s seat.
I just want to point to the authors’ use of “complexity” in Web Mapping 2.0 and neogeography. It’s not the same as complexity theory – they must’ve meant “complicated” in each instance.
“Essentially, Neogeography is about people using and creating their own maps, on their own terms and by combining elements of an existing toolset”
An encouraging quote; empowering people by assigning them agency in characterizing their human/biophysical environments is the part of neogeo that makes it neo – new, and not steeped in colonialism.
Excited to force conversations of either movement or complexity in class tomorrow.

sensors → singularity

Sunday, November 17th, 2019

With humans as sensors, we move towards the singularity.

Woah, catchy subtitle, movement, and robo-human dystopia! Does the post need anything else?

I guess so… some hundred more words according to the syllabus :/.

Goodchild’s examples of errors at UCSB and the City of Santa Barbara point to the danger of ascribing authority to mappers. With this authority, they also accept the power to erase people and place. The real question in any discussion of VGI ought to be about who gets this power. Whether it’s the USGS, NASA, or a network of individually empowered agents, someone wields it. What infrastructure do we as GIScientists support?

I’m so conflicted: I like bottom-up everything, but maps are consumed by, represent, and interact with people. The question is, can they also be by the people? Who knows – I’ve just strung enough words together to make this work – see yas in class.

thoughts on geodemographics (Baiocchi et al., 2010)

Monday, November 11th, 2019

“The rationale behind geodemographics is that places and people are inextricably linked. Knowledge about the whereabouts of people reveals information about them. Such an approach has been shown to work well because people with similar lifestyles tend to cluster — a longstanding theoretical and empirical finding in the sociological literature.”
This paragraph summarizes the theoretical basis of the analysis conducted by this study and the basic idea of geodemographics. I think this shares the same idea as AI profiling using big geospatial data; put another way, AI profiling with respect to space is geodemographics. Some of the critical issues are similar. The first issue is related to the uncertainties of the knowledge it produces, which can cause unjust action towards individuals. As Mittelstadt (2016) argues, even if strong correlations or causal knowledge are found, this knowledge may only concern populations while actions are directed towards individuals. This becomes more problematic when we conduct spatial clustering and assume that places can reflect every individual and that decisions can be made based on the analysis of an area. The second issue is once again related to scale, or the modifiable areal unit problem: the scale of analysis can significantly influence the results we obtain. At which scale can we argue that places and people are inextricably linked? At the neighborhood level, city level, or country level? I wonder if those issues are considered or addressed in the field of geodemographics.
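To make the “similar people cluster” idea concrete, here is a minimal sketch of the classification step behind geodemographic systems, assuming scikit-learn is available. The five areas and two variables are fabricated; real systems such as ACORN use dozens of census and lifestyle variables:

```python
# Minimal geodemographic classification: cluster areas by demographic
# profiles so that similar places share a label. Data are fabricated.

import numpy as np
from sklearn.cluster import KMeans

# rows = areas; columns = [share of renters, median age]
profiles = np.array([
    [0.80, 28.0], [0.75, 30.0],   # student-heavy inner city
    [0.20, 64.0], [0.15, 67.0],   # retiree suburbs
    [0.40, 41.0],                 # mixed area
])

# In practice the variables would be standardized first so that
# median age does not dominate the distance metric.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(labels)  # areas with similar profiles receive the same cluster label
```

Every individual inside a cluster then inherits the cluster’s profile, which is exactly where the population-versus-individual problem above enters.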

Reflection on Geodemographic

Monday, November 11th, 2019

As far as I understand, geodemographic data link the sciences of demography and geography together to represent variation in human and physical phenomena locationally and spatially. The study presented in this article used a geodemographic dataset called ACORN. The author mentions in the limitations section that the uncertainties in the ACORN data are associated with the imputation of missing information, and that these uncertainties are difficult to quantify. Since geodemographic data are closely linked with human behavior, it is hard to assess their quality and accuracy. But I still wonder: are there possible ways to deal with such uncertainty? Or how can we manage geodemographic data so that they carry relatively less uncertainty?

Besides, the author also assumes that there is no regional or local variation in the expenditure profiles, which means households belonging to the same type are presumed to have the same spending patterns no matter where they are located in the territory. But obviously this can be problematic, since an uncertain data source may strongly influence the final result. So, is there any way we can assess how this averaging process influences the result, and is there any way we can at least try to mitigate it? One generic approach is sketched below.
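One generic way to probe that assumption is a Monte Carlo sensitivity analysis: perturb the average profiles with plausible regional noise and watch how much the final total moves. The household counts, profiles, and 10% noise level below are all made up for illustration, not taken from the paper:

```python
# Minimal sensitivity sketch: how fragile is a total built from
# national-average expenditure profiles if regions actually vary?

import random

random.seed(0)
households_by_type = {"A": 1000, "B": 500}   # hypothetical household counts
avg_spend = {"A": 320.0, "B": 540.0}         # hypothetical average profiles

baseline = sum(n * avg_spend[t] for t, n in households_by_type.items())

totals = []
for _ in range(1000):
    total = sum(
        n * avg_spend[t] * random.gauss(1.0, 0.10)  # +/-10% regional variation
        for t, n in households_by_type.items()
    )
    totals.append(total)
totals.sort()

print(f"baseline total: {baseline:,.0f}")
print(f"90% interval:   {totals[50]:,.0f} .. {totals[949]:,.0f}")
```

A wide interval would signal that the no-variation assumption matters for the conclusions; a narrow one would suggest the averaging is relatively harmless.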

There is also one thing I’m curious about as I try to understand the concept of geodemographic data: are they the same as census data? If not, what is the difference? Are geocoded census data part of geodemographic data, or are census data part of geodemographic data?

Reflection on Geocomplexity

Sunday, November 10th, 2019

After reading this article, I’m still not sure I fully understand the concept of geocomplexity, since I am still trying to work out how geocomplexity relates to spatial problems. The author categorizes complexity into three types: algorithmic complexity, deterministic complexity, and aggregate complexity, and each type draws on different theories. For example, algorithmic complexity deals with mathematical complexity theory and information theory, while deterministic complexity deals with chaos theory and catastrophe theory.

As far as I understand, algorithmic complexity measures the effort needed to solve a problem or achieve a result. It follows that topics that are themselves vague may be hard to evaluate this way. Since my topic is spatial data uncertainty, I was wondering how researchers would apply algorithmic complexity to data uncertainty, given that uncertainty itself can be vague and ambiguous.

As for deterministic complexity, the author notes that it would be too simplistic to characterize a human system by a few simple variables or deterministic equations, so few systems are actually deterministically chaotic. I was wondering, then, whether there are any examples where human systems are in fact deterministically complex. If there are none, what systems are usually regarded as deterministically complex?

Finally, aggregate complexity is used to assess the holism and synergy that come from the interaction of system components. Back to my topic: the system components in the spatial data uncertainty field would be error, vagueness, and ambiguity. So how would these three components be defined in the case of aggregate complexity?

The Impact of Social Factors and Consumer Behavior on Carbon Dioxide Emissions (Baiocchi et al., 2010)

Sunday, November 10th, 2019

This paper applies geodemographic segmentation data to assess the direct and indirect carbon emissions associated with different lifestyles. As geodemographics are generally used to improve the targeting of advertising and marketing communications, I am curious about the use of geodemographics in GIScience.

In this paper, the authors argue that the top-down approach conventionally used to classify lifestyle groups fails to recognize the spatial aspects associated with lifestyles. This is why they choose geodemographic lifestyle data, which employ bottom-up techniques that draw spatial patterns out of the data itself, as opposed to fitting it to some a priori classification of neighborhood types. However, it is important to note that geodemographic classification systems are beset by the Modifiable Areal Unit Problem (MAUP) and by ecological fallacies, in which the average characteristics of individuals within a neighborhood are assigned to specific individuals. For example, in ACORN areas labeled “Prudent pensioners”, many residents will be neither elderly singles nor old; more importantly, many others who are both will be located outside of “Prudent pensioners” areas. Also, as far as I know, the data used to build these classification systems mostly derive from the census, which becomes dated quickly and is not sufficient to capture the key dimensions that differentiate residential neighborhoods. Are there any alternative datasets for geodemographics? The ecological inflation is easy to demonstrate, as the sketch below suggests.
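This sketch is hypothetical: individuals are sorted so that similar people sit in the same zone (mimicking residential clustering), and the correlation between two weakly related variables inflates as the zones get coarser, the classic MAUP/ecological effect:

```python
# Minimal MAUP / ecological-fallacy demonstration: the same individuals,
# aggregated into coarser zones, show a much stronger correlation than
# they do individually. Requires Python 3.10+ for statistics.correlation.

import random
from statistics import correlation, mean

random.seed(1)
# Sorting by x mimics the finding that similar people cluster spatially,
# so the contiguous blocks below act like homogeneous zones.
x = sorted(random.gauss(0, 1) for _ in range(400))
y = [0.3 * xi + random.gauss(0, 1) for xi in x]   # weak individual-level link

def aggregate(values, zone_size):
    """Mean of each contiguous block of `zone_size` individuals."""
    return [mean(values[i:i + zone_size]) for i in range(0, len(values), zone_size)]

for zone_size in (1, 10, 50):
    r = correlation(aggregate(x, zone_size), aggregate(y, zone_size))
    print(f"zone size {zone_size:>2}: r = {r:.2f}")
```

Inferences made at the zone level can therefore look far stronger than anything true of the individuals inside the zones.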

Simplifying complexity (Manson, 2001)

Sunday, November 10th, 2019

In this paper, Manson (2001) presents a thorough review of complexity theory. I would argue that Manson does not make several concepts clear, such as the differences between chaos and complexity. Manson states that “there is no one identifiable complexity theory” and that “any definition of complexity is beholden to the perspective brought to bear upon it”. He parses complexity into three streams of research: algorithmic complexity, deterministic complexity, and aggregate complexity. However, I don’t quite agree with this schema. Algorithmic complexity describes systems that are so intricate that they are practically impossible to study. This problem cannot form part of the study of complex systems, because it arises from an insufficient understanding of the system being studied or from inadequate computational power to model and describe it. Therefore, algorithmic complexity may be a misleading movement away from complexity and its associated issues.

Even with many theoretical advancements and technical developments, complexity theory is still considered to be in its infancy, lacking a clear conceptual framework and unique techniques. Also, as Manson notes, it is important to explore “the ontological and epistemological corollaries of complexity”. Indeed, complexity has a relatively open ontology. It is necessary to consider the epistemology of complexity to understand the relationship between complexity ontology, emergence, and the balance between holism and reductionism.

Thoughts on Turcotte (2006) “Modeling Geocomplexity: A New Kind of Science”

Sunday, November 10th, 2019

The article “Modeling Geocomplexity: A New Kind of Science” by Turcotte (2006) introduces the topic of geocomplexity. It highlights how our understanding of natural phenomena becomes richer and more complete when it incorporates the variety of methods now emerging in the field, beyond standard statistical methods.

As someone with no prior knowledge of geocomplexity in GIScience, I did find this topic a little difficult to wrap my mind around. Despite this, I found it very interesting to see the different models that have emerged to better understand geological processes. In particular, self-organized complexity lends itself to computer-based simulations that can be used in classrooms; I think that would be a more intuitive and visual way to learn about geological processes. A sketch of one such classroom-scale simulation follows.
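For instance, the sandpile cellular automaton often used to illustrate self-organized criticality fits in a few lines. This is only a classroom-style sketch, with the grid size and thresholds chosen arbitrarily:

```python
# Minimal Bak-Tang-Wiesenfeld "sandpile": grains are dropped one at a
# time, and any cell holding 4+ grains topples, sending one grain to
# each neighbor (grains falling off the edge vanish). Avalanche sizes
# end up spanning many scales, the hallmark of self-organized criticality.

import random

N = 11
grid = [[0] * N for _ in range(N)]

def drop(i, j):
    """Add one grain at (i, j); topple until stable; return avalanche size."""
    grid[i][j] += 1
    toppled = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        a, b = unstable.pop()
        if grid[a][b] < 4:
            continue
        grid[a][b] -= 4
        toppled += 1
        if grid[a][b] >= 4:          # may still be unstable after one topple
            unstable.append((a, b))
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            na, nb = a + da, b + db
            if 0 <= na < N and 0 <= nb < N:
                grid[na][nb] += 1
                if grid[na][nb] >= 4:
                    unstable.append((na, nb))
    return toppled

random.seed(0)
sizes = [drop(random.randrange(N), random.randrange(N)) for _ in range(20000)]
print(f"largest avalanche: {max(sizes)} topplings")
print(f"avalanches larger than 30 topplings: {sum(s > 30 for s in sizes)}")
```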

After reading this article, I have more questions than I did before. I am not sure I completely understand the concept of geocomplexity… but I look forward to learning more about it.

What is randomness?

Sunday, November 10th, 2019

Geocomplexity is, for lack of a better word, complex. After reading Turcotte’s “Modeling Geocomplexity”, I’m left with one main question – in this context, what is randomness?

Most of the models outlined in the paper are focused on demonstrating the chaotic, unpredictable nature of natural systems. The argument, as I understand it, is that a sufficiently complex system will be unbelievably unpredictable, and that minor changes can have massive consequences as those systems play out. What this implies to me is that there is some degree of truly “random” behavior at play, and that this randomness is what prevents these systems from being easily understood.

Having no background in this subject, I find that I still don’t understand what “randomness” is. How does it arise in these systems? If the location of every particle in a system were known, would we be able to model it in a way that did not include any randomness? Chaos theory is mostly preoccupied with the idea that minuscule variations in the initial conditions of a system can result in vastly different outcomes. Where within that concept does randomness lie? I suppose I don’t yet have the theory and statistics background to make sense of these arguments, but this has inspired me to delve deeper into the subject in the future.
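One thing that helped me separate the two ideas: chaos needs no randomness at all. The logistic map below is a standard textbook example (not from Turcotte’s paper); it is fully deterministic, yet two trajectories that start one millionth apart become completely different within a few dozen steps:

```python
# Deterministic chaos without randomness: the logistic map at r = 4.
# No random numbers are used anywhere, yet nearby trajectories diverge.

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.200000, 0.200001   # initial conditions differ by one millionth
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step + 1:>2}: |a - b| = {abs(a - b):.6f}")
```

The unpredictability here comes purely from sensitive dependence on initial conditions, not from any “random” ingredient, which is one answer to where randomness does and does not lie in chaotic models.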

Thoughts on complexity

Sunday, November 10th, 2019

Manson’s article gives an overview of complexity theory. The author argues that there is no single complexity theory, because there are different kinds of complexity with different or even conflicting assumptions and conclusions. Three types of complexity are discussed: algorithmic complexity, deterministic complexity, and aggregate complexity.

I am not sure I fully understood the concept of complexity, even though the title of the article is “Simplifying complexity”, and tons of questions remain after reading it. Before getting to my questions, there are certain points that interest me. First, the author states that complexity theory and general systems theory both embrace anti-reductionism and the interconnectedness of systems, whereas one of the differences is that complexity research uses techniques such as artificial intelligence to examine quantitative characteristics, while general systems theory concerns only qualities. I had never thought about it this way, as I believe AI is a quantitative method that can make inferences about qualitative attributes; in this sense, the qualitative/quantitative split does not differentiate the two, because general systems theory also has the ability to support qualitative inferences. Second, the author describes deterministic complexity, in which a few key variables related through a set of known equations can describe the behavior of a complex system. I wonder whether deterministic complexity is also a kind of reductionism, because it tries to describe a complex system with equations and variables, which goes against the anti-reductionist notion of complexity. Third, the author mentions that a complex system is not beholden to its environment – it actively shapes, reacts, and anticipates. This reminds me of machine learning algorithms that actively adapt to the data they see; it seems that this is one way of approaching complexity.

My main questions are:
1. If there are different kinds of complexity that sometimes conflict with each other, what actually is complexity?
2. Is every generalization we make a reductionism in some way? If so, isn’t all research, even complexity research, anti-complexity?
3. What can complexity theory offer us? Does it complicate the analysis, or does it offer us a more sophisticated way of approaching a problem?