Archive for the ‘506’ Category

Thoughts on Web Mapping 2.0: The Neogeography of the GeoWeb (Haklay et al. 2008)

Sunday, November 17th, 2019

This paper gives an overview of the development of geography in the Web 2.0 era, in which neogeography is founded on the blurring boundaries between geographers, consumers, technicians, and contributors. The case studies of OSM and the London Profiler show how technology allows geography and cartography to embrace new forms of data sources, new interactions with the general public, and new means of access to geographical knowledge and information.

The most intriguing part of this paper for me is the debate over whether the public participation in producing geographic information and research that Web 2.0 brings amounts to a “cult of the amateur” or to “mass collaboration”. From my point of view, this is exactly where professionals from the realm of geography are needed. Data are neutral, but their providers, contributors, and manufacturers are not. Still, no one is going to complain about an abundance of available data, especially detailed user-generated data that complements what was missing before. It is up to the expert to decide what to do with the data and how to interpret it, instead of blaming its source.

My point is that there is nothing inherently wrong with the information or with its provider. What matters is the people interpreting it, profiting from it, and discovering knowledge from it. Thus whether Web 2.0 facilitates a “cult of the amateur” or “mass collaboration” depends solely on the professionals who try to use the data. Even data that seem useless in one study can be extremely important for another field of research. For geography, an open-minded, multidisciplinary science, drawing lines and rejecting what is novel, trendy, and probably the future is not supported by the ideology of the subject. Adaptation and evolution should be the key for any scientific discipline.

The GeoWeb

Sunday, November 17th, 2019

The advent of the Internet, followed by the arrival of Web 2.0, has no doubt changed the way geographic information is obtained and shared, a fact that is well described in this article by Haklay et al. Without saying the age of paper maps is behind us, the Internet has propelled us into an era where the web and geography are combined like never before.

Although the Internet has allowed many to navigate a digital Earth for the first time, several issues have emerged from the combination of the Internet and the discipline of geography. The field has been democratized by the increased online accessibility of geographic tools and mashups, which increases the visibility of geography but could also be seen as reductionist, reducing geography to non-experts having fun geotagging pictures. Speaking of geotagging, the social media boom has led people to flock en masse to previously unvisited areas, a phenomenon illustrated by the explosion of visitors to Horseshoe Bend in Arizona, which has drastically affected the local environment.

Researching Volunteered Geographic Information: Spatial Data, Geographic Research, and New Social Practice

Sunday, November 17th, 2019

Sarah Elwood et al. wrote a foundational paper on Volunteered Geographic Information (VGI), covering the definition(s) of VGI, its research domains, frameworks within and beyond VGI, the impact of VGI on spatial data infrastructures and geographic research, and the quality issues of VGI data along with the methodologies that address them. VGI contributes to today's data explosion and expands research into new fields, but plenty of issues still confront the creation and application of VGI data. I am really interested in the quality issues, which I think are the most important concern when conducting research with VGI data, even though there are a host of situations where VGI is beneficial despite its quality being hard to assess. The authors point out four insights on how quality in VGI differs from traditional data quality; however, I am still curious how to deal with those differences and with the uncertainty and inaccuracy of VGI data. How could newly developed AI technologies help determine basic quality-assessment rules for filtering useful information? I also think education in geography would be really helpful for further developing high-quality VGI systems.

Moreover, it is argued that VGI represents a paradigmatic shift in how data are collected, created, and shared in GIS. Does that mean traditional data types such as raster, vector, and object-based models will be forced to change by the development of VGI? The web also plays an important role in VGI: does that mean VGI cannot develop independently of WebGIS, and what, then, is the relationship between WebGIS and VGI (both will be presented next week)?

Thoughts on Citizens as sensors: the world of volunteered geography (Goodchild, 2007)

Sunday, November 17th, 2019

This Goodchild piece serves as a brief introduction to the topic of VGI. Although it was written in 2007, when computational power and artificial intelligence were still in a start-up phase, we already see how VGI serves as a main data source in cartography and some related geographic fields. While highlighting the contributions of VGI, Goodchild also points out the limitations of relying on VGI as a source of geographic data: the validity, accessibility, and authority of the data.

Nowadays, OSM and Google Maps are used as major sources for much spatial-analytical research, especially at larger extents where primary data collection becomes time- and labour-intensive. Just as Goodchild argues, from the researcher's perspective the availability of spatial data that can be extracted from VGI sources is promising, but questions need to be asked about synthesizing and validating VGI data to increase its accuracy.

Who contributes to the data? This question remains unsolved even twelve years after he wrote this paper. It asks which populations VGI data might represent, what areas it covers, and at what scope it applies. Why do people contribute? This second question concerns the biases and incentives behind VGI data, which potentially influence the results of research that uses it. Also, with various VGI sources available, how we can combine them to cross-validate and reference one another, generating better accuracy for our objectives, is a question I would like to answer. How to cross-reference other sources (beyond VGI) against VGI data to increase its validity, and perhaps grant it authority, is another topic I am eager to learn about.

Thoughts on “Goodchild – Citizens as sensors”

Sunday, November 17th, 2019

This article by Goodchild lays out the foundations of Volunteered Geographic Information (VGI) by explaining the technological advances that helped it develop, as well as how it is produced.

The widespread availability of 5G cellular networks in the coming years will drastically improve our ability as humans to act as sensors with our internet-connected devices, given improved upload and download speeds as well as lower latency. These two factors will greatly help the transfer of information, for example by allowing more frequent locational pings, and 5G will also allow more devices to be connected to the internet than 4G does.

Although VGI provides the means to obtain information that might otherwise be impossible to gather, the reliability of the data can be questioned. OpenStreetMap is one example, where anyone is free to add, change, move, or remove buildings, roads, or features as they please. Although most contributors do so with good intentions, inaccuracies and errors can slip in and affect the product. As other websites or mobile applications use OSM data to provide their services, it becomes important for users and providers to have valid information. As pointed out in the article, the open nature of VGI allows malevolent users to undermine others' experience. One recent example is people taking advantage of OSM's volunteered nature to change the land cover of certain areas in order to gain an advantage in the mobile game Pokémon GO.

Finally, there is also the issue of who owns the data. Is it the platform or the user who provided it? Who would be responsible if an inaccurate data entry led to an accident or a disaster? As with any other growing field closely linked to technological advancement, governments will need to legislate further on VGI to allow for easier regulation.

Thoughts on Simplifying complexity: a review of complexity theory (Manson 2000)

Sunday, November 10th, 2019

This paper thoroughly reviews and examines the field of complexity. By distinguishing three different kinds of complexity theory, 1) algorithmic complexity, 2) deterministic complexity, and 3) aggregate complexity, the author systematically explains each one along with its implications and future research opportunities, which opens a new door for me as an urban researcher.

I do agree with the author that complexity needs more attention from geographers and planners. Since my first class in urban geography, I have been taught, and have agreed, that cities are open systems that the public and academics have yet to find a way to fully understand. Thus, to better simplify cities and urban research areas, understanding their complexity is the first step. The majority of urban researchers seek to simplify urban environments to reach an empirical theory, statement, or piece of knowledge; however, simplification should come only after fully understanding the complexity of the objects under study. In urban geography and planning, I doubt anyone has ever thoroughly comprehended all the underlying components that make a city work. There is therefore a need for urban researchers and GIScientists to study algorithmic, deterministic, and aggregate complexity before proposing simplified models. In urban studies the need for complexity research is urgent, before the field becomes a palace built on clouds.

In addition, for GIScientists especially, understanding and studying algorithmic complexity may be a future trend regardless of the field in which their research objects land. The discipline's technological foundation makes GIScientists more likely to be aware of such issues, and the capability to address algorithmic complexity is an advantage compared with researchers from other spatially oriented disciplines.

Thoughts on Class Places and Place Classes – Geodemographics and the spatialization of class (Parker et al. 2007)

Sunday, November 10th, 2019

First of all, after reading the whole article, I am quite confused by its structure. It is not well structured, from my personal perspective, which led to some confusion on my part about the topic of geodemographics. Also, throughout the article it is hard for me to find much related to GIScience: aside from some hints about using information technology to classify populations, the article focuses on a sociological perspective on geodemographics rather than a GIScientist's view of the topic.

Part of the reason this article is not that related to GIScience is probably the time it was written. 2007 falls within the period when GIS was transforming into GIScience, and GIS itself was not as strongly bonded to other disciplines as it is now in 2019. On the other hand, although the article focuses on classification methods, it is less concerned with existing debates around computational classification, discussing instead heavily supervised, human-intensive classification work.

However, the article definitely gives a brief introduction to what geodemographics is and what the major debates in the field are, from a sociological perspective. I do wonder how GIScientists see their fellow sociologists' classification methods, as well as the ontologies for geodemographics derived from those sociological classifications.

Furthermore, I wonder how, in the age of big data analysis, the combination of big data and GeoAI contributes to geodemographics from a GIScientist's perspective, since the authors state that sociological classification still relies heavily on census and commercial data specifically designed to study demographics. I would also like to learn more about the reactions of the populations included in or excluded from geodemographic classifications, as well as the ethical discussion around the process of classifying populations.

Thoughts on Parker et al., (2007) “Class Places and Place Classes: Geodemographics and the Spatialization of Class”

Sunday, November 10th, 2019

The article “Class Places and Place Classes: Geodemographics and the Spatialization of Class” by Parker et al. (2007) introduces the concept of geodemographics and reports on a small research study that focused on geodemographic classification, the relationship between ‘class places’ and ‘place classes’, and their co-construction.

I found this article quite interesting, particularly the parts that touched on merging this type of data with web technologies, which would allow the data to be interacted with by a larger portion of the population with greater ease.

After reading this article I was left with a few questions, one of them being what the effects of this type of data are. Taking the case of the research study in this article, this sort of generalization of residents in urban neighborhoods to create a classification seems problematic. As the information gets used, people's perceptions of places become based on census data. I feel this highlights socioeconomic differences in urban settings and further divides populations based on those differences.

Thoughts on “Turcotte – Modeling geocomplexity: ‘A new kind of science’”

Saturday, November 9th, 2019

This article by Turcotte emphasized the importance of fractals in the understanding of geological processes as opposed to statistical equations, which cannot always explain geological patterns.

Although this reading provided insight into how various situations are modeled and how statistical modelling plays an important role in understanding the geophysics of our planet, geocomplexity as a whole still remains a rather abstract concept to me. The article provided some illustrations that greatly helped my comprehension, but more would be necessary to fully grasp some concepts. Illustrating complexity may be complex in itself, but it would have made the material more accessible.

Will we find new statistical formulas to model problems we couldn't model in the past? How we understand and conceptualize the Earth plays a vital role in how GIScientists are able to push for further knowledge. Recent technological advances in quantum computing, artificial intelligence, and supercomputing capabilities open the door to further innovation in the field. Geological instability, for example, could be better understood. In those scenarios, could weather or earthquakes become more predictable? Further advances in related fields such as geophysics and geology will also contribute greatly to GIScience.

The concept of chaos theory is also very intriguing to me, a theory I had never heard of before. A quote from Lorenz greatly helped me understand it: “When the present determines the future, but the approximate present does not approximately determine the future”, meaning that small changes in the initial state affect the final state of a particular event.
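To make that idea concrete, here is a minimal sketch (my own illustration, using the standard logistic map rather than Lorenz's equations) of how two nearly identical starting values quickly diverge:

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map x_{n+1} = r * x_n * (1 - x_n) at r = 4, a standard chaotic regime.

def logistic_trajectory(x0, r=4.0, steps=40):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # "the present"
b = logistic_trajectory(0.200001)   # "the approximate present"

for n in (0, 10, 20, 30):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
# The trajectories start out almost identical but soon differ completely:
# the approximate present does not approximately determine the future.
```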

Reflection on “The Impact of Social Factors….On Carbon Dioxide Emissions” (Baiocchi et al., 2010)

Saturday, November 9th, 2019

In Baiocchi et al.'s piece, the authors analyze geodemographic data to better understand the direct and indirect CO2 emissions associated with different lifestyles in the UK. They open the piece by listing criticisms of the field of environmental input-output modelling, namely that too much of the literature depends on top-down classification, places too much emphasis on consumer responsibility, offers entirely descriptive analyses, and defines 'lifestyle' by expenditures, which ignores human activity patterns. Using geodemographic data as the basis for their study mitigates the potential harm from these criticisms.

One thing I noticed about this paper was how it used geodemographic data to create a bottom-up procedure for the research. Historically, the fields of geography and cartography have been very top-down in nature, with little, if any, input from “non-experts”. One of the ways GIS has been so revolutionary and popular is that it is redefining how and what people can contribute, and today there is ample opportunity for “non-experts” to be involved. As geodemographic data was around long before GIS existed, I did not initially realize how it could contribute to more bottom-up approaches. Now I know that, among other reasons, there is open data almost everywhere, and GIS technology in general is easier to access and understand than ever before.

I’ll end my reflection with a few general questions about geodemographics. Specifically, what is the difference between demographics and geodemographics? Doesn’t all demographic data have some sort of location/geographical component? 

 

The Impact of Social Factors and Consumer Behavior on CO2 Emissions in the UK

Saturday, November 9th, 2019

This is an interesting case study using geodemographic data to analyze the impact of socioeconomic factors on carbon dioxide emissions. Regardless of how carbon dioxide emissions are affected by different socioeconomic determinants, I am curious about the original geodemographic data used for the analysis. The study uses geodemographic data from the ACORN database and conducts its research based on a lifestyle classification; my question is what 'lifestyle' exactly means and what rules the ACORN classification is based on. Also, the socioeconomic variables used in the regression analysis come from wider categories of housing, families, education, work, finance, and so on. Are these the typical variables or objects that geodemographic theory usually deals with, and are these the research contents that demographic researchers focus on? Moreover, the geodemographic data are coded at the postal-code level, which can be read as the scale the data are built on. Is there any possibility that the regression results for what affects CO2 emissions would change if the scale changed? Do policy districts or postal-code allocation rules act as noise in the analysis?
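To make the scale question concrete, here is a hypothetical sketch with purely synthetic data (not the ACORN variables or the paper's method) showing how the apparent strength of a relationship can change with the zoning scheme:

```python
# Synthetic illustration of the scale effect: a weak individual-level
# relationship can look very strong once both variables are averaged over
# zones, so the choice of postal-code-like units is not neutral.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
location = rng.uniform(0, 100, n)                      # pseudo postal-code position
income = 0.5 * location + rng.normal(0, 20, n)         # spatially structured predictor
emissions = 0.3 * location + rng.normal(0, 15, n)      # outcome driven by location, not income

def correlation_at_zone_width(width):
    """Average both variables within zones of the given width, then correlate."""
    zone = (location // width).astype(int)
    zones = np.unique(zone)
    mean_income = np.array([income[zone == z].mean() for z in zones])
    mean_emissions = np.array([emissions[zone == z].mean() for z in zones])
    return np.corrcoef(mean_income, mean_emissions)[0, 1]

print(f"individual level: r = {np.corrcoef(income, emissions)[0, 1]:.2f}")
for width in (5, 20):
    print(f"zone width {width:2d}:    r = {correlation_at_zone_width(width):.2f}")
```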

Another thing I want to point out is that we learned about human mobility in last week's seminar. Could movement theory be applied to studying how geodemographics change over time, or does that not matter for developing geodemographic theory?

Thoughts on “Parker et al. – Class Places and Place Classes: Geodemographics and the spatialization of class”

Friday, November 8th, 2019

As with a wide variety of other research fields within GIScience, it will be interesting to see how geodemographics may change with technological advances in machine learning. One example is the delineation of boundaries between clusters, which could be split or merged based on reasoning that is quite difficult for humans to interpret. These geodemographic generalizations of space could also be computed continuously in the not-so-distant future, leading to an ever-changing assessment of neighborhoods on a very short temporal scale. Micro-level analysis could also allow a better representation of a neighborhood based on recent population inflow and outflow data, data that become increasingly accessible in the era of the Internet of Things (IoT).
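As a rough illustration of what such computerized classification might look like, here is a minimal sketch, assuming scikit-learn and invented census-style variables (not Parker et al.'s actual method), that clusters synthetic neighborhood profiles into geodemographic classes:

```python
# Toy geodemographic classification: standardize a few area-level variables
# and group areas into classes with k-means. Re-running this as new data
# arrive is what a continuously updated classification would amount to.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic neighborhood profiles: median income, % renters, median age, % with a degree.
profiles = np.column_stack([
    rng.normal(55_000, 15_000, 300),
    rng.uniform(0.1, 0.9, 300),
    rng.normal(38, 8, 300),
    rng.uniform(0.1, 0.7, 300),
])

# Standardize so no single variable dominates the distance metric,
# then assign each area to one of k geodemographic classes.
scaled = StandardScaler().fit_transform(profiles)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)

print("class sizes:", np.bincount(kmeans.labels_))
```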

The thresholds used to assess whether a neighborhood is more closely related to one class rather than another need to be defined quantitatively, which forces a certain cutoff and introduces some subjectivity. An example could be the occurrence of a natural disaster in a hypothetical neighborhood, which could devalue houses enough to warrant changing how the neighborhood is characterized. In that case, a population once seen as energetic and lively (or, as Parker et al. define it, a live/tame zone) could be reclassified as a dead/wild zone from one day to the next. Although neighborhoods would eventually be reassessed by corporations or governments, technological advances grant the ability to reassess them much more rapidly.

As someone not well versed in the conceptualization of geodemographics, it is apparent to me that a balance must be struck between the number of classes and the level of representativeness desired; after all, every household could be considered unique enough to warrant its own class. Future advances in the field might incorporate a three-dimensional analysis of neighborhoods in densely populated urban centers, as residential skyscrapers present vertical spatial clustering.

Simplifying complexity: A review of complexity theory

Friday, November 8th, 2019

This is a really good paper that reviews the principal research fields in complexity theory with a clear structure and simplified explanations, uncovering the nature of complexity. Generally, complexity theory describes nonlinear relationships between changing entities, examines their qualitative characteristics, considers how interactions change over time, and so on. The author breaks complexity theory into three major parts: algorithmic complexity, deterministic complexity, and aggregate complexity. However, I am still a little confused about why it should be divided into those three parts. Would it be possible to think about and explain complexity theory in terms of time complexity and spatial complexity instead?

Should most research questions take complexity theory into account, given that most objects in the natural environment and in human society have the general characteristics that complexity theory deals with? It is really interesting to read about self-organized systems that strike a balance between randomness and stasis, like peatland ecosystems. But how complexity theory, which helps explore self-organization in physical geoscience, can be applied in socioeconomic studies is really appealing. There is still unclear space in complexity theory research. How could newly developing techniques such as GeoAI and spatial data mining, which extract more hidden knowledge, help complexity research take a step further? These are all interesting and exciting questions to be answered in the future.

Thoughts on “Simplifying Complexity” (Manson, 2001)

Thursday, November 7th, 2019

As someone with very little knowledge on complexity theory before reading this article, I think Manson’s piece offers a solid introduction to the concept. I can see how complexity theory directly relates to geographic and GIScience problems. It all comes back to Tobler’s First Law of Geography, as geography creates complexity not replicated in other disciplines. 

“The past is not the present” and “complicated does not equal complex” are two concepts we have discussed at length in class. Regarding the first statement, complexity theory treats entities as being in a constant state of flux, and could thus reduce the problems associated with the “past = present” assumption; for instance, a common issue is assuming that the locations of past actions will be the same as present ones. Regarding the second statement, this article was written in 2001, before big, complex data was around like it is today, especially concerning its variety, veracity, value, volume, and velocity. Big data is complex, but not complicated. There are methods and technologies to analyze this data more easily; however, technology and complexity theory must keep up for researchers to continue to analyze it adequately.

the unified theory of movement is here

Sunday, November 3rd, 2019

This is the only blog post I’ve actually wanted to write.

All things are dynamic. In our last class, Corey showed that even though we were equipped to portray a river as a dynamic feature, we did so statically. I bet we did this because of the numerous ways we are told to think of our world as static. Relationships are inherently dynamic, but we use static statuses to represent sometimes extensive periods. We take repeated single-point observations to measure natural phenomena, then interpolate to fill in the blanks. But what are these blanks? Evidence of dynamism. All phenomena are actually dynamic, sliding along some temporal gradient, not dissimilar to Miller's space-time cube concept.

Miller brings up scale issues in movement. Traditional movement scientists such as kinesiologists, physiotherapists, and occupational therapists think of movement on completely different scales than movement ecologists do. In fact, they have a different semantic representation of movement as well, often tied to the individual irrespective of the environment. Geographers and human mobility researchers have their own ideas about drivers and detractors of movement that run contrary to ecologists' conceptualizations. So how do we move toward an integrated science of movement? The best option is to start thinking about movement as fractal patterns. There is a primatologist at Kyoto studying just that in penguins (which are not primates) to understand the interactions of complexity, scale, movement, and penguin deep-diving behaviour. Think about it: this researcher is interested in how movement is invariant across scales and can explain behaviour as a complex phenomenon. There is already a unified theory of movement, and it is called fractal analysis of movement.
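To give a sense of what fractal analysis of movement can mean in practice, here is a minimal sketch (my own illustration, using a synthetic random walk rather than real tracking data) that estimates a box-counting dimension for a trajectory, the kind of scale-invariance measure this argument leans on:

```python
# Estimate a box-counting dimension for a 2-D path: count how many grid
# cells the path passes through at several cell sizes, then take the slope
# of log(count) against log(1 / cell size).
import numpy as np

rng = np.random.default_rng(2)
steps = rng.normal(0, 1, size=(5000, 2))   # synthetic displacements
path = np.cumsum(steps, axis=0)            # stand-in for a tracked trajectory

def box_count(points, box_size):
    """Number of grid cells of side `box_size` that the path visits."""
    cells = np.floor(points / box_size).astype(int)
    return len({tuple(c) for c in cells})

sizes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
counts = np.array([box_count(path, s) for s in sizes])

# A dimension that stays stable across scales is the scale-invariant signature.
dimension = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print(f"estimated box-counting dimension: {dimension:.2f}")
```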

I am optimistic about the potential of merging scale-invariant disciplines: if physicists could accept Newton's law of universal gravitation even though it offers no exact solutions for systems of more than two bodies, why can we not accept that movement unifies us even if it cannot predict each time step for each species taking whatever mode of transport? It is a narrow-minded perspective to say that we cannot have a unified movement theory because some people take bicycles while others prefer the Metro. Algorithms cooked up in Silicon Valley are already capable of differentiating this movement; doesn't that mean these movements are already unified in a neural network's internal representations? Train a neural network to detect the directionality of some moving object. Assuming you did the requisite pre-processing, chances are that algorithm will tell you the direction of any moving object. That's unified movement theory. Not convinced? Take the first network and perform transfer learning for another object. The transferred network will outperform a network that never 'saw' the first object's movement and directionality. This is unified movement theory. There is also a team of researchers studying locomotion in ants who strapped stilts onto the ants' legs and found that the ants on stilts walked past their intended destinations. Doesn't this indicate that, regardless of the interaction between ant and environment (the ecology), movement can be characterized using common conceptualizations, be they step length, velocity, or the ant's internal step count?

This paper came about from discussions Miller had with various mobility and movement researchers; what's clear is that people don't have the answers. It's not as simple as ecologists neglecting scale or geographers neglecting behaviour: our siloed approach to science is undermining our ability to comprehend unifying phenomena. And I bet movement is that unifying phenomenon. Can you think of anything that's truly static?

Scaling Behavior of Human Mobility

Sunday, November 3rd, 2019

This conference paper discusses the spatio-temporal scaling behaviour of human mobility through an experimental study using five datasets from different areas and generations. The results are consistent with the literature: human mobility shows characteristic power-law distributions, not all datasets are equal, and so on. However, the basic principles of human mobility are not well explained. The analysis (case studies), carried out on the large amounts of data generated by new measurement techniques, examines the impact of the spatial and temporal sampling period on aggregate metrics. The analysis and discussion of results show how scaling behaviour varies and how massive datasets can be interpreted, with the general conclusion that spatio-temporal resolution matters a great deal when describing human mobility. That issue is not unique to human mobility analysis; it matters in plenty of fields in GIScience.

What in particular is different and influential about human mobility? Is a discussion of spatial data quality or uncertainty necessary before or after analyzing movement datasets? Is there any debate about the definition of human mobility and the metrics used to measure it? I expected more about the fundamental issues of human movement analysis, which are still vague to me, rather than case studies showing basic rules of human mobility and their relationships with scaling issues.

SRS for Uncertainty — some brief thoughts

Sunday, November 3rd, 2019

Quale — a new word I will almost certainly never use.

It does, however, represent a concept we all have to wrangle with. Forget statistical models; literally no representation is complete. I tell you to imagine a red house, and you imagine it. But was it red? Or maroon, burgundy, pink, or orangish? This is not just a matter of precision: what we are communicating depends on what we both think of as 'red', or 'maroon', or 'burgundy', or whatever else. We might also have ideas about what sorts of red are 'house' appropriate. An upper-level ontology might suggest a red-ness that is universal. But no houses in my neighbourhood are bright Lego red. Why not?

Some of what Schade writes reminds me of introductory statistics: error is present in every single observation. This sort of error can be thought of as explained and unexplained variance. Variance is present in all data; the unexplained variety arises not only from apparatus error but also from what we describe as uncertainty in the data.

Schade's temperature example is handy: the thermometer doesn't read 14 degrees; it reads 13.5 to 14.5 with 68% probability. The stories we tell aren't about what we say, but what we mean. This sort of anti-reductionism is also at the root of complexity theory: acknowledging that we cannot characterize systems as static, linear components leaves room for the emergence that explains why complex things are greater than the sum of their parts. Applied machine learning research also appreciates this anti-reductionism. The link Schade makes to AI is, I THINK, that applied machine learning researchers aren't really interested in determining the underlying relationships of phenomena, only the observed patterns of association; methods that neglect the former and embrace the latter explicitly consider their data to be incomplete and uncertain to some degree. To be honest, this connection seems forced in the paper, but I'm happy to help force it along. :)
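A tiny sketch of the thermometer example, assuming the reading is modelled as a normal distribution with a 0.5-degree standard deviation, shows where the 68% comes from and how the uncertainty can be carried downstream:

```python
# Treat the observation as a distribution rather than a single number.
from scipy.stats import norm

reading = norm(loc=14.0, scale=0.5)   # assumed mean and standard deviation

# Probability the true temperature lies within +/- one standard deviation.
p_within = reading.cdf(14.5) - reading.cdf(13.5)
print(f"P(13.5 <= true temperature <= 14.5) = {p_within:.2f}")   # ~0.68

# Downstream questions can then carry the uncertainty along, e.g. the
# probability that the true value exceeds a 15-degree threshold.
print(f"P(true temperature > 15.0) = {reading.sf(15.0):.3f}")
```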

Spatial data quality: Concepts

Sunday, November 3rd, 2019

This chapter in the book “Fundamentals of Spatial Data Quality” takes a shot at the basic concepts of spatial data quality, pointing out that the divergences between reality and its representation are what spatial data quality issues generally deal with. Errors can arise at several points in the data production process, such as during data manipulation and human-involved data creation. Moreover, spatial data quality can be assessed from internal and external perspectives. The chapter explains well what data quality is and what errors can be, and it is very easy to understand.

It is interesting that the introduction starts with the quote, “All models are wrong, but some are useful”. Does that mean all spatial data, or any data we create, can be interpreted as the product of a model or filter? The authors argue that a representation of reality may not be fully detailed and accurate but can still be partially useful. How to determine whether data with such uncertainty or errors should be accepted is a much more urgent problem. Also, since the topic is “spatial data uncertainty” and the chapter discusses spatial data quality, does uncertainty refer exactly to the different sources of error assessed in spatial data quality?

The chapter defines internal quality as the level of similarity between the data produced and perfect data, while external quality is the level of concordance between the data product and user needs. My thought is: if users participate in the data production process (which concerns internal quality), will external quality be improved efficiently and effectively? Could we simply replace “as requested by the manager” with “what the user wanted” in Figure 2.4, leaving no external quality worries?
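As a small, made-up illustration of how internal quality might be quantified (using positional error against reference measurements as the similarity measure, one of several possible metrics):

```python
# Compare produced coordinates against (nominally perfect) reference
# coordinates for the same features, in metres; RMSE summarizes the
# internal (producer-side) positional quality.
import numpy as np

reference = np.array([[100.0, 200.0], [150.0, 240.0], [300.0, 120.0], [410.0, 330.0]])
produced  = np.array([[101.2, 198.9], [149.1, 241.7], [302.4, 118.8], [409.3, 331.1]])

errors = np.linalg.norm(produced - reference, axis=1)   # per-feature positional error
rmse = np.sqrt(np.mean(errors ** 2))

print("per-feature error (m):", np.round(errors, 2))
print(f"RMSE (m): {rmse:.2f}")
# External quality would instead ask whether this level of error is
# acceptable for what the user actually wants to do with the data.
```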

Thoughts on “Miller et al. – Towards an integrated science of movement”

Sunday, November 3rd, 2019

“Towards an integrated science of movement” by Miller et al. lays out the advances made in understanding mobility and movement as a whole, given the growth of location-aware technologies that have made data acquisition much more accessible. The authors are interested in synergizing the components of animal movement ecology and human mobility science to promote a science of movement.

With regard to mobile entities, defined as “individually identifiable things that can change their location frequently with respect to time”, are there specific definitions that clearly establish what “frequently in time” means? Examples are given with birds or humans, but would trees or continental plates be considered mobile entities as well?

It would be interesting to assess the impact of location tracking on the observations themselves, in other words whether tracking can affect the decisions made by whoever or whatever is being tracked. For example, a human who knows they are being tracked might change their trajectory solely because they do not want to reveal sensitive areas or locations they visit, while an animal could behave differently if the technology used to track its movement makes it more visible to predators. There is an ethical dilemma in tracking a human being without their consent, but it must also be acknowledged that tracking comes with consequences: the results may differ from reality.

Reflecting on “Scaling Behavior of Human Mobility Distributions”

Sunday, November 3rd, 2019

Analyzing big data is an obstacle across GIS, and movement is no exception. Cutting potentially unnecessary components out of the data in order to reduce the dataset is one way of addressing this challenge. In Paul et al.'s piece, they look at how much cutting down on a dataset's time window may affect the resulting distribution.

Specifically, they examine the effects of changing the spatio-temporal scale of five different movement datasets, revealing which metrics are best for comparing human relationships to movement across datasets. The findings of the study, which examines GPS data from undergraduate students, graduate students, schoolchildren, and working people, reveal that changing the temporal sampling period does affect the distributions across datasets, but the extent of this change depends on the dataset.
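As a rough illustration of that kind of experiment, here is a hypothetical sketch with a synthetic GPS trace (not the paper's datasets or metrics) that resamples the trace at coarser time windows and compares the resulting step-length statistics:

```python
# Resample a (synthetic) GPS trace at coarser temporal windows and see how
# the distribution of displacements between fixes changes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
times = pd.date_range("2019-11-03", periods=24 * 60, freq="1min")   # one day of 1-minute fixes
track = pd.DataFrame({
    "x": np.cumsum(rng.normal(0, 20, len(times))),   # metres east
    "y": np.cumsum(rng.normal(0, 20, len(times))),   # metres north
}, index=times)

def displacement_stats(df, window):
    """Keep one fix per `window`, then summarize step lengths between fixes."""
    sampled = df.resample(window).first()
    steps = np.hypot(sampled["x"].diff(), sampled["y"].diff()).dropna()
    return steps.mean(), steps.max()

for window in ("1min", "15min", "60min"):
    mean_step, max_step = displacement_stats(track, window)
    print(f"sampling every {window:>5}: mean step {mean_step:7.1f} m, max {max_step:7.1f} m")
# Coarser sampling stretches step lengths and hides short excursions, which is
# why aggregate metrics depend on the spatio-temporal resolution chosen.
```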

After reading this piece, I would like to understand more about how researchers studying movement address privacy. I'm sure having enormous datasets of anonymized data addresses part of this issue; however, the different government agencies, organizations, corporations, etc. collecting these data surely have different standards regarding the importance of privacy. How strictly are data privacy laws enforced, looking at movement data specifically?