Archive for the ‘506’ Category

The pleasure(s) of GOD – Geospatial Open Data

Monday, October 7th, 2019

I know the acronym is OGD in the paper… but I wanted this to be my title, so please deal –

The ‘costs’ paper goes over what may be some underrated or undiscovered costs associated with open geospatial data. The four ‘big’ ones the authors point out are: 1) citizen participation smoke and mirrors; 2) uneven geographies; 3) open data being a public subsidy for private companies; and 4) corporate influence. In my opinion: if the government wants to get into the business of opening data, whether because it’s fashionable now or because we’ve decided it’s necessary for a civic society that can participate well, it must do so with equal weighting on social good and long-term viability. We should solve this problem, as a nation, the same way we’ve done whenever some social output was necessary but not necessarily financially feasible: Crown-Corps. I’m sure we’re all fans.

Johnson and colleagues describe how open governmental data would enable faux-participation, which is what I think is meant by smoke and mirrors; I will hopefully be able to follow up with the cited Robinson paper. The note on North America’s public libraries reminds me of an excellent 99% Invisible episode on how these critical pieces of socio-cultural infrastructure needed imaginative re-building. And they obviously do. We need places for people to think about how this world intersects with the digital one. One argument made, “that placing data in an open catalogue [was] no guarantee of its use”, felt odd to me. Of course, I could guarantee that not placing data in an open catalogue would guarantee no use whatsoever. I’m not sure I understand how people not using open data when it’s made available is a cost associated with opening data.

Uneven geographies, I felt, were self-explanatory: based on scale, access, and representation in data, various places may be left out while others are emphasized.

I lean on my Crown-Corp idea for dealing with issues 3 and 4: open data ending up as a pseudo-subsidy, and open data as an entry point for corporate influence in government. I don’t think this is inherently a bad or necessary thing. The authors suggest that there is an indirect cost when opening data, as companies take the data to build products that they then sell back to the consumer. If some company follows these steps and provides their product for free, then there is no indirect cost – it’s purely built into the downstream direct costs to the consumer. My one-stop solution, the mighty Crown-Corp, could simply regulate data as a special commodity. If you are sufficiently likely to have formed part of some product’s training data, you are exempt from paying the product-making company. If a private tech giant is equipped to influence and standardize data formats, we can offer direct subsidies for creating platforms that are socially inclusive. Since datasets of benefit to civic society are likely to differ from those of corporate interest, again offer subsidies for helping open civic-priority data. All of this starts with the establishment of a data-oriented CBC: a data-journalism-focused, open governmental geospatial data behemoth tasked with coalescing data for safe use by Canadians. Should entrepreneurial Canadians be interested in this data, simply charge them for it – this century’s oil, right?

I’ve written too much already. Sorry. Last thing: Bates’ comments are 100% spot-on. Any open data will be used to reinforce trust in our government. If we’ve learned anything from 2016, it’s how quickly faith in once-esteemed institutions can be lost. How can we ensure data is opened in a transparent way, without having to rely on self-certified transparency?

I think a repeating pattern I’m struggling with in GIScience is this belief that we as GIScientists are optimally placed to consider how to deal with data and real-world representations of data, likely informed by our history of modelling geographic realities. Sometimes it feels like a stretch to me – many fields have spatial scientists, some of whom are talking about the same topics we pride ourselves on. And when the others aren’t speaking the same language – why not? We are either critical to almost everything that is going to happen this century, or we are in an echo chamber.

Geospatial Open Data Cost

Sunday, October 6th, 2019

The paper discusses the direct and indirect costs of geospatial open data. As defined in the article, open data are government data typically provided for free with minimal reuse restrictions. Open data are often referred to as open government data because government plays an important role in regulating the open data system: collecting, processing, managing, and sharing free data that carry both value and cost for the public. The authors point out that open data do carry high costs, from the data collection and maintenance process and from other anticipated challenges, because the data involved are free for customers to use, meaning little return on the investment in developing open data.

From my perspective, the first question is why open data are mostly managed and released by governments, given the view that government data have already been funded by the taxpayers. Is there any possibility for more companies or institutions to run open data systems with more advanced geospatial data processing technologies, funding them through advertising on the website? The second question is to what extent open data should be open. Is it necessary for open data to be accessible to everyone? What is the definition of the “general public”? Should government make the data understandable and easy enough for public customers to use, or share partially open data with those who have been educated with some professional GIS knowledge? People with professional skills (academic users) could process much rawer open data on their own for their unique purposes, which would reduce the cost of the government’s original data processing. Moreover, if open data are put to commercial use, a cost should be charged in those cases.

The last point is that open data give rise to new sources of data, like self-published data, and that the differences and variance in data forms force the development of GIScience. Ontologies in GIScience should first be more widely developed, and new technologies for filtering valuable data and formatting data are indispensable. This paper helps me a lot in understanding the open data concept, and it emphasizes the indirect costs (challenges) that policy makers must consider in deciding how an open data system should be better developed, with a clearly stated structure.

Thoughts on “Government Data and the Invisible Hand” (Robinson et al. 2009)

Sunday, October 6th, 2019

This article’s main argument is that government public data should be made available to the general public and third-party organizations in an organized manner that is easy to find and use. I agree with the article’s general argument; the government should have the appropriate infrastructure to provide public data in a reusable and universal format.

The article points out that the government oftentimes does not keep up with the fast evolution of technology and web capabilities. The article was published in 2009; now, in 2019, similar issues are still at play. In my own personal experience, this is still the case in the Canadian Government, although big steps have been taken within it to modernize and make use of the wide variety of tools available for data analysis and visualization, for both internal and external use.

An important point to highlight is that despite data being accessible, third-party organizations and citizens interested in creating web products to analyze and better understand the data used to inform policy and regulatory decisions do not have all of the data required to see the full picture. In the case of the Government of Canada, data is split into three categories: public and open data; protected data (Protected A, Protected B, and Protected C); and classified data (Confidential, Secret, and Top Secret). All of this data is used at different levels of government to make decisions – data that, due to security and privacy, is not accessible to the public.

I believe that along with making data easily accessible to the public, it is also the responsibility of the government to deliver a quality web product for the public to view the data in the way the government used it. This still allows for third-party organizations to create different ways to view the same data.

Thoughts on “Government Data and the Invisible Hand”

Sunday, October 6th, 2019

In “Government Data and the Invisible Hand”, Robinson et al. outline the process and advantages of the American government granting open online access to its data, which would give third-party organizations the ability to broaden data accessibility and to contribute themselves by making use of it. Furthermore, it is argued that the private sector is “better suited to deliver government information to citizens” if the data is easy to parse, given its ability to quickly change its tools based on public needs as well as its position as an outsider.

If we’re thinking about geospatial data in this context, an important question remains after reading this article, which specified that public data should be provided by the government “in the highest detail available”: wouldn’t there be privacy concerns in that regard? There could be occurrences where the highest detail available for a dataset compromises the identity of individuals or groups if the spatial scale is fine enough. There would still be a privacy concern with non-geospatial data, as some sensitive information about individuals would have to be withheld from the public, meaning that some censorship would have to be applied in order to preserve every citizen’s privacy. Alternatively, different political administrations could differ in what they deem acceptable and unacceptable for public access based on their own political positions. Finding a perfect balance between data accessibility and minimizing security concerns for the population is an extremely difficult challenge, as each and every person could have a different view. These differing subjective views could drastically affect the ability of private actors to make use of the data, especially if the administration has a veto over what should or should not be publicly accessible.

All in all, I personally think that it is the government’s responsibility to first determine what constitutes sensitive data, as preserving privacy is of utmost importance. Following that, making all of its non-sensitive data easily available online and promoting its use would go a long way toward furthering our understanding of the phenomena studied using the data, while also improving society’s trust in government through a higher level of transparency.

Open data and bureaucratic thrift

Sunday, October 6th, 2019

After reading through both of the articles this week, I’m reflecting on previous conversations and experiences I have had with open data and government access. I was especially impressed by the thoroughness of the “5 Myths” section of Janssen et al., which did an excellent job of unpacking some of the rhetoric and misinformation surrounding the current trend of open government data.

In reading both, I did feel that one aspect of open data was especially under-addressed and could be explored further: the cost-saving factor motivating governments’ decisions to release open data to the public. As the size of the data sets that local and national government actors manage has grown, the burden of managing them has increased. Keeping this data private and making careful decisions about who has access, which requests to authorize, and how to manage it quickly becomes a bureaucratic leviathan as the data sets increase exponentially. By making these data sets public, the labor and infrastructural costs of managing information access requests are massively reduced, making the government’s work far easier. Many governments have adopted a policy that data is “open” by default: unless policy makers and data managers specifically believe a certain data set should be private, any new information generated is immediately available for public dispersal.

This dynamic has been explained to me multiple times by policy-makers at the city level, and I have personally seen its efficiency. In many ways this cost-saving motivation provides more support for the argument at the center of Robinson et al., which is that data is better left in the hands of outside actors, whereas it is the government’s responsibility to ensure that whatever data is accessible is easily managed. A previous comment stated that “Public officials tend to focus on the number of datasets they release rather than on the effect of releasing high-quality sets of data.” I believe that the best explanation for this decision is the cost-saving factor I outlined above.

Reflections on Government Data and the Invisible Hands

Sunday, October 6th, 2019

The core proposal of Robinson et al.’s work is to promote operational change in how government shares its public data. They point out that the reason U.S. government agencies tend to have out-of-date websites and unusable data is regulation and too much effort spent on improving each agency’s own website. Thus, they propose handing the interaction layer of public data over to third-party innovators, who have far superior technology and experience in creating better user interfaces, innovative reusable data, and collection of users’ feedback.

Although, given the current trend of U.S. regulation and laws on sharing public data, it may be true that the distribution of public data is better operated by third-party innovators, for better distribution and surplus value creation, I would argue that their work misses some perspectives on the current state of U.S. public data.

The first is standardization: it is more urgent for a public data standard to come from the government, to ensure data quality and usability, than to improve distribution. The top complaint about public data is that even data from the same realm (e.g., economic data) can end up looking very different depending on the agency that published it. This creates a more severe issue for the usability and accountability of the data than distribution does. So, in order for government agencies to become good public data “publishers” in Robinson et al.’s proposal, all agencies have to come up with a universally understandable and usable data standard, rather than each agency using its own standard or leaving the most basic part of data handling to the private sector.

The second issue with their proposal is the credibility of the data. If all public data is handed to the public by third-party innovators, then, to increase their own competitiveness, those innovators will modify the original data to match what the public wants instead of providing the original, unmodified data. This creates a credibility issue, since there is far less legislation and regulation governing what third-party distributors can and cannot do to originally published government data. And some modification is inevitable for third-party distributors, since at a minimum they need to reshape the original public data to fit their databases.

In the end, I do think commercializing public data distribution can promote effective use and reuse of public data, while also creating the problems found in all business: privacy issues, a “rat race”, deliberate steering of exposure toward products in the distributor’s interest, and so on. It will have its pros and cons, but until government agencies solve their data standardization issue and regulations are built to supervise third-party distribution of public data, whether Robinson et al.’s proposal will have more pros than cons remains questionable.

Reflecting on The Cost(s) of Geospatial Open Data (Johnson et al, 2017)

Saturday, October 5th, 2019

This paper examines the rise of geospatial open data, particularly at the federal level. It looks at very concrete, monetary costs, such as resource and staff time costs; it also looks at the less concrete and perhaps less obvious indirect costs of open data, such as unmet expectations and the potential for more corporate influence in government.


In an economics class that I am currently taking, we discussed the seven methodological sins of economic research, and I believe some of these points can transcend disciplines. For instance, one of the sins is reliance on a single metric, such as a price or index. I think it’s important to note that when the authors of this paper were discussing costs, they did not just include monetary costs in their analysis. I believe the addition of the indirect costs is an important component to their argument and that these indirect costs present even more pressing issues than the direct costs do. I think it is very important to acknowledge the far-reaching and even harder-to-solve problems of the effects and influences of citizen engagement, the uneven access to information across regions, the influence of the private sector on government open data services, and the risks of public-private collusion through software and service licensing. 


A critique I have of the paper is that I believe the title to be a bit misleading in its simplicity. The title implies that the paper addresses geospatial open data cost across disciplines, whereas the paper addresses the costs only at the government level, and not any other level (for instance, perhaps looking at OSM or Zooniverse, if crowdsourcing/VGI falls under the same category as open data). The abstract, however, makes it very clear that the paper is only addressing issues caused by government-provided open data.

sliding into scales; i didn’t like the Atkinson & Tate paper very much, but still love scale!

Monday, September 30th, 2019

Scale is something people from lots of backgrounds pay attention to. Ecologists, biologists, geographers, and physicists all wrap their data into ‘packages’ of what they perceive to be appropriate – in terms of what their data represent, the framework by which the data were collected and managed, and the question they seek to answer. By packages, I mean a sort of conceptualization of the data and the underlying model in which it is embedded. While many fields are entangled with scale, GIScientists have determined this is their time to shine – to explain core realities common to all spatial problems.

Some quick things that caught my attention:

  • the importance of scale in geography is based on the link between scale and modelling;
    1. makes me think about how geocomplexity is defined by geo-simulation
  • the notion of a spatial framework that determines what data is and what it represents by being the ‘strategy’ by which all information is ‘filtered’;
    1. the authors talk about how we see with our brains, not our eyes: aren’t we all just socially-informed machines?
    2. sort of fits into the narrative (geo)complexity scientists push: that we are embedded in a complex system worth modelling
  • the idea that something can be homogeneous at one scale, and heterogeneous at another (see the sketch just below this list);
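
That last point stuck with me enough that I tried to convince myself numerically. Below is a tiny, purely illustrative sketch (synthetic data, nothing from Atkinson and Tate): a checkerboard surface that looks heterogeneous when observed cell by cell but becomes perfectly homogeneous once the support of the observations outgrows the pattern.

```python
# A tiny sketch of the "homogeneous at one scale, heterogeneous at another"
# idea: a synthetic checkerboard surface whose variability disappears once
# observations are aggregated to blocks larger than the pattern.
# Purely illustrative; nothing here comes from Atkinson and Tate.
import numpy as np

n, period = 64, 8
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
checker = ((rows // period + cols // period) % 2).astype(float)  # 0/1 patches


def block_mean(grid: np.ndarray, size: int) -> np.ndarray:
    """Aggregate the surface into `size` x `size` observation units."""
    return grid.reshape(n // size, size, n // size, size).mean(axis=(1, 3))


for size in (1, 4, 8, 16):
    print(f"support {size:>2}: std dev across units = {block_mean(checker, size).std():.3f}")
# At supports of 1 to 8 cells the surface looks heterogeneous (std dev ~0.5);
# at a support of 16 cells every unit averages to 0.5 and it looks homogeneous.
```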

The five categories presented in Atkinson and Tate to break down and understand what exactly spatial scale is are helpful, although I felt lost in the discussion of spatial variation and stationarity. That is something I thought I would be able to grasp quickly, since my true geographical love is movement. But no. Still lost.

In the work I get to do with Prof. Sengupta, we explore how landscape heterogeneity affects animal behaviour selection and therefore determines movement. It’s neat that we get to wrangle with many of the concepts Atkinson and Tate discuss: collecting data at some frequency; rescaling data to represent what we want it to represent; considering spatial extent; and dealing with spatial dependence (by mostly not having dealt with it). Right now, we are exploring movements of a troop of olive baboons in Kenya. I wonder how our representative trajectories would look at half the sampling frequency – would we still be able to employ ‘behaviour-selection’ in the way we are trying? I don’t know – much to learn from the ‘is-it-a-science-or-isn’t-it’ science.
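
Out of curiosity, here is a rough sketch of that half-the-sampling-frequency question, on a made-up trajectory rather than our actual baboon data, so treat the numbers as illustration only.

```python
# A quick, hypothetical sketch: resample a synthetic baboon-like trajectory at
# half the original frequency and compare the step speeds we would feed into
# behaviour classification. Nothing here is real troop data.
import math

# (minute, x metres, y metres): a foraging-style wiggle on top of steady travel
fixes = [(m, 4.0 * m + 6.0 * math.sin(m), 3.0 * m + 6.0 * math.cos(1.3 * m))
         for m in range(0, 61)]


def step_speeds(track):
    """Speed (m/min) of each step between consecutive fixes."""
    return [math.hypot(x2 - x1, y2 - y1) / (t2 - t1)
            for (t1, x1, y1), (t2, x2, y2) in zip(track, track[1:])]


full = step_speeds(fixes)          # 1-minute sampling
half = step_speeds(fixes[::2])     # every second fix: 2-minute sampling

print(f"1-min sampling: mean {sum(full)/len(full):.1f} m/min, max {max(full):.1f}")
print(f"2-min sampling: mean {sum(half)/len(half):.1f} m/min, max {max(half):.1f}")
# Halving the frequency smooths out the fine wiggles, so the short, fast bursts
# that a behaviour-selection step might key on partly disappear from the data.
```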

I perceive Professional Geographer to be a journal of decent quality in the field – for not many reasons beyond that copies are found in our lab space, scattered amongst old machines, super-powered servers, and sensors not yet sensing. Going into the reading with this perception, I was left disappointed. For a paper taking issue with how data are represented, communicated, and handled, there is a failure to really understand what the authors are doing – maybe mostly on my part, but there is certainly some shared culpability. While scale is indeed complicated, and discussing it can be hard and technical, I think the authors have failed to simplify and communicate their field in what is meant to be a paper for all geographers to engage with. Obfuscation and lack of transparency will kill this field.

Thoughts on The Scale Issue in the Social and Natural Sciences (Marceau, 1999)

Sunday, September 29th, 2019

Marceau thoroughly introduces the scale issue in geographic research, or in any research touching on spatial and temporal scale. It refers to the difficulty of understanding and using the “correct” scales, as the phenomenon of interest may only occur at certain scales, and study results may vary with the use of alternative combinations of areal units at similar scales.

To explain scale issues in social science and natural science, Marceau focuses on the MAUP (Modifiable Areal Unit Problem) in the realm of social science, and on scale and aggregation problems in natural science. Although progress has been made over the last few decades, the MAUP remains unsolved (according to Marceau, as of 1999), but studies on how to control and predict its effects have been developed to move closer to a solution. In natural science, the hierarchical patch dynamics paradigm (HPDP) was proposed to address the scale and aggregation problem, as a framework combining both the vertical and horizontal hierarchy problems.

At the end of his introduction to the scale issue, Marceau sets out three main steps for solving it: understanding the scale dependence effect; developing quantitative methods to predict and control MAUP effects, as well as to explain how entities, patterns, and processes are linked across scales (the aggregation problem); and building a solid, unified theoretical framework from which hypotheses can be derived and tested and generalizations achieved (Marceau, 1999).

The most intriguing part of this article to me is the MAUP. Although progress has been made in controlling and predicting its effects on study results, judging from some of the recent urban geographic studies I have read, the MAUP is still unsolved, and sometimes ignored when researchers describe their sampling process. From Marceau’s explanation, I realize that it is important to address scale issues such as the MAUP in both social and natural science studies, in order to determine whether study results are solid and spatially valid and to avoid unexpected spatial bias.
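
To see the MAUP’s scale effect concretely, here is a minimal, hypothetical demonstration with synthetic data (assuming numpy is available; none of this comes from Marceau’s paper): the same point-level relationship, aggregated into ever-larger areal units, produces a noticeably different correlation coefficient.

```python
# A minimal, hypothetical sketch of the MAUP "scale effect": the same
# point-level data, aggregated into progressively larger areal units,
# yields a noticeably different (here, stronger) correlation.
import numpy as np

rng = np.random.default_rng(42)
n = 100  # points lie on a 100 x 100 grid of locations


def smooth(grid: np.ndarray, passes: int = 10) -> np.ndarray:
    """Crude spatial autocorrelation: repeated neighbour averaging."""
    for _ in range(passes):
        grid = (grid + np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
                + np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 5.0
    return (grid - grid.mean()) / grid.std()


# A spatially smooth explanatory variable and a noisy response.
x = smooth(rng.normal(size=(n, n)))
y = 0.5 * x + rng.normal(size=(n, n))


def aggregate(grid: np.ndarray, block: int) -> np.ndarray:
    """Average values within square areal units of side `block`."""
    return grid.reshape(n // block, block, n // block, block).mean(axis=(1, 3))


for block in (1, 5, 10, 20):
    r = np.corrcoef(aggregate(x, block).ravel(),
                    aggregate(y, block).ravel())[0, 1]
    print(f"areal unit {block:>2} x {block:<2}: Pearson r = {r:+.2f}")
# The point-level relationship never changes, yet the measured correlation
# does: exactly the sensitivity to the choice of areal units that Marceau describes.
```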


For “A Review on Spatial Scale Problems and Geostatistics Solutions”

Sunday, September 29th, 2019

This paper points out that nearly all environmental processes are scale-dependent, because spatial data are captured through observations that depend on a sampling framework with particular scales of measurement – a filtered version of reality. The authors review recent literature revealing scale problems in geography and discuss geostatistical techniques for re-scaling data and data models, introducing scales of measurement, scales of variation in spatial data, and the modelling of spatial variation. Some approaches to changing the scale of measurement are suggested at the end. Adopting a conceptual framework that matches the scales of spatial variation to the scales of spatial measurement, and learning more about the structure of the property of interest, matter a lot when dealing with a scale-related geographic issue.

I appreciate this paper a lot for helping me think more about scale issues in my thesis research. One of my research questions is whether regular patterns exist in peatlands at the large, landscape scale, explored through the large-scale pattern of peatlands in one of the archetypal peatland landscapes, the Hudson Bay Lowland. “Scale”, in the sense of both resolution and extent, plays an important role in framing my research project. The emergence of regular spatial patterns from the scale of several metres (hummocks and hollows) to tens of metres (pools, ridges, and lawns) has been confirmed, and these regular patterns together form stable individual peatland ecosystems (bogs, swamps, and fens). However, there has been a lack of studies at the larger scale of hundreds of kilometres of massive peatland complexes. We infer that the characteristics of regular patterns, which reveal the underlying negative feedbacks, are transferable across scales, giving rise to the hypothesis that regular patterns still exist at the large scale, making the peatland complex a self-regulated system that is adaptive to climate change. When it comes to implementing data collection and processing with remote sensing data, the first step is asking which scale is most suitable for detecting heterogeneity between grid cells at a limited cost of image requests. How large an area do I want to deal with (extent), and how much detail (resolution) do I need to distinguish heterogeneity? If interpolation is used to create higher-resolution images, how much information is lost or misleading?
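
As a way of thinking about that resolution question, the toy sketch below (a synthetic surface, no real peatland or sensor data) block-averages a regular ridge-and-hollow pattern to coarser and coarser pixels and tracks how much spatial variance survives at each support.

```python
# A minimal, hypothetical sketch of how "support" (pixel size) changes what
# you can see: block-averaging a synthetic surface and tracking how much
# spatial variance survives at each resolution. Purely illustrative; no real
# peatland data or specific sensor is assumed.
import numpy as np

rng = np.random.default_rng(7)
n = 120

# Synthetic surface: a regular ridge-and-hollow pattern (~6-cell wavelength)
# plus fine-grained noise.
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
surface = np.sin(2 * np.pi * rows / 6) * np.sin(2 * np.pi * cols / 6)
surface += 0.5 * rng.normal(size=(n, n))


def upscale(grid: np.ndarray, factor: int) -> np.ndarray:
    """Coarsen the raster by averaging `factor` x `factor` blocks of cells."""
    m = grid.shape[0] // factor * factor
    g = grid[:m, :m]
    return g.reshape(m // factor, factor, m // factor, factor).mean(axis=(1, 3))


for factor in (1, 2, 3, 6, 12):
    coarse = upscale(surface, factor)
    print(f"pixel = {factor:>2} cells: variance retained = {coarse.var():.3f}")
# The fine structure is progressively averaged away as the pixel grows; by the
# time the pixel spans the pattern's ~6-cell wavelength, almost no spatial
# variance is left to detect.
```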

“SCALE issues” matter a lot, help a lot, bother a lot…

Thoughts on “Marceau – The Scale Issue in the Social and Natural Sciences”

Sunday, September 29th, 2019

In “The Scale Issue in the Social and Natural Sciences”, D.J. Marceau explains the increase in the importance of the Spatial Scale concept and the evolution of its conceptualization over the last few decades by reviewing the main developments in both the social and natural sciences. This is illustrated in the article with many examples of contemporary environmental problems and how the observations will differ based on the selected scale of the analysis, as patterns may differ across scales.

In an era where geographic data is becoming more and more accessible online while also becoming more heterogeneous, scale becomes an increasingly important issue to consider when analyzing space, given the ever-growing reliance on Geographic Information Systems (GIS) today. Although science always aims to be as objective as possible, the lens through which a phenomenon is observed can vary widely based on our culture, our social environment, and the standards in use, among many other factors.

The conclusion of the article proposes the emergence of what is referred to as the science of scale. I would be curious to know if there have been recent developments since the article was published in 1999. Do we have a better understanding of ways to control the distortion created by the Modifiable Areal Unit Problem (MAUP)? Also, have there been contributions from other disciplines, not mentioned in the article, with regard to the scaling problem?

Review of Sinha et al. – “An Ontology Design Pattern for Surface Water Features”

Monday, September 23rd, 2019

In “An Ontology Design Pattern for Surface Water Features” (2014), Sinha et al. propose an ontology of surface water that generalizes its distinguishable characteristics, with the aim of making it interoperable between different cultures and languages as well as helping to build the Semantic Web. To achieve this, the authors distinguish the container from the water body, separating them into two distinct parts: the Dry module, referring to the terrain, and the Wet module, referring to the water body. They also emphasize that the Wet module depends on the Dry module to exist, meaning the two are superposed when the former is present.

The article provides a great approach to analyzing the ontology of surface water features, generalizing both the Dry and Wet modules into a limited number of classes while preserving a sufficient number of defining features. An interesting example is their characterization of a water body, which even encompasses endorheic basins (in other words, drainage basins with no outflow to another water body), since they did not specify the need for an outlet point. With that said, while the ontology states that water movement is dictated by gravity, there are some instances of water bodies flowing uphill, such as a river under the ice sheet of Antarctica or the flow reversal in a water body following a cataclysm. Such cases would challenge the assumption that water always flows from a high point to a low point.

Review of Sinha, Mark, et al.’s “An Ontology Design Pattern for Surface Water Features”

Monday, September 23rd, 2019

In Sinha, Mark, et al.’s paper, “An Ontology Design Pattern for Surface Water Features”, the authors work together to introduce the Surface Water pattern, in order to generalize and standardize the semantics of basic surface water features on Earth’s surface. Their motivation for creating this model is to resolve differences in how surface water features and terrain are semantically described around the world. Bringing convenient and precise description to surface water features is the essence of their work.

In the Surface Water pattern, they divide Earth’s surface water system into two parts: a Dry module and a Wet module. The Dry module, used to describe the landscape that can contain a water body or flow, contains Channel, Interface, and Depression. A Channel describes a landform that allows water to flow and tends to have two ends (a start and an end point), which they later describe as Interfaces. An Interface is where a channel starts or ends, and if the Interface involves interaction with another surface water landform (e.g., another Channel or a Depression), it is a Junction (a subclass). A Depression is described as a landform that can contain a water body without overflowing; it is usually surrounded and enclosed by a rim (typically the contour line representing the highest elevation of the depression).

The Wet module is about the actual water body or flow (or, in their further discussion, any liquid capable of flowing). It includes Stream Segment, Water Body, and Fluence. A Stream Segment represents water flowing in a Channel (from the Dry module) and has exactly one start and one end point (later explained as an Influence and an Exfluence), which are not necessarily the Interfaces of the Channel it flows within. A Water Body is water that sits relatively still inside a Depression (from the Dry module) and is likewise bounded by the Depression’s rim. A Fluence is the start or end point of a Stream Segment: if it is the start point it is called an Influence, otherwise an Exfluence, and if it is where one Stream Segment interacts with another Stream Segment or a Water Body, it is a Confluence.
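
To keep the two modules straight in my own head, I sketched the classes roughly as Python dataclasses. The real pattern is formalized in OWL, so this is only my reading of the summary above, not the authors’ implementation.

```python
# A rough, illustrative encoding of the Surface Water pattern as summarized
# above. The actual design pattern is an OWL ontology; these dataclasses are
# only my reading of it, not the authors' formalization.
from dataclasses import dataclass, field
from typing import List

# --- Dry module: landforms that can contain water ---

@dataclass
class Interface:
    """Where a Channel starts or ends."""
    is_junction: bool = False  # True if it meets another Channel or a Depression

@dataclass
class Channel:
    """Landform that allows water to flow between two Interfaces."""
    start: Interface
    end: Interface

@dataclass
class Depression:
    """Landform that can hold standing water, enclosed by a rim."""
    rim_elevation_m: float

# --- Wet module: the water itself, dependent on the Dry module ---

@dataclass
class Fluence:
    """Start (Influence) or end (Exfluence) point of a Stream Segment;
    a Confluence if it meets another Stream Segment or Water Body."""
    kind: str  # "influence", "exfluence", or "confluence"

@dataclass
class StreamSegment:
    """Flowing water occupying a Channel."""
    channel: Channel  # Wet depends on Dry: a segment needs a channel
    start: Fluence
    end: Fluence

@dataclass
class WaterBody:
    """Relatively still water contained by a Depression."""
    depression: Depression
    inflows: List[StreamSegment] = field(default_factory=list)

# e.g., an endorheic lake: a WaterBody with inflows but no outflowing segment.
```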

In the end, Sinha, Mark, et al. explain that the Surface Water pattern does not cover every feature that needs to be described as part of Earth’s surface water system, such as features related to glaciers and ice flows. Rather, it serves as a framework that can be extended and further developed into more specific ontologies. The Surface Water pattern should also describe the basic features of any flowing liquid, not just water, on any planet with gravity.

My major criticism of their Surface Water pattern is this: although they acknowledge in the discussion that wetlands (an important feature of the surface water system) may not be describable using their pattern, it is not proper to call the ontology design “Surface Water” when it clearly excludes wetlands, a necessary part of Earth’s surface water system. The reason I consider excluding wetlands a major flaw is that, within the Dry module, a wetland neither fits their definition of a Depression (a wetland does not necessarily have a rim) nor can be described as a series of Channels (it does not have to contain flowing water). Even though they state in the discussion that wetlands could be handled in a different ontology pattern or a future extension of this one, it still creates confusion to name the pattern “Surface Water” while not including all parts of surface water.

Thoughts on “Do geospatial ontologies perpetuate Indigenous assimilation?”

Sunday, September 22nd, 2019

The article by Reid and Sieber discusses ontology development as it reveals a central motivation in the academic fields of GIScience and computer science: making data interoperable across different sources of information. The authors then explore how ontology development could be made more inclusive of Indigenous knowledge. The title raises a question that the authors answer at the end of the paper: with the approaches they suggest, an Indigenous place-based approach and deep engagement with Indigenous methodologies for ontology co-creation (a participatory approach), Indigenous conceptualizations would be taken seriously rather than assimilated into Western concepts in ontology development.

I find this paper really interesting because it raises doubts about conventional geospatial ontology development and asserts the importance of Indigenous knowledge. I think not only Indigenous knowledge should be emphasized, but also that of many other unique cultures that are not consistent with the dominant Western regime. However, constructing a universal ontology is fundamental and a main focus in GIS and CS for data collection, management, control, sharing, and so on, and involving different cultures in ontology creation may make that universality much more complicated to understand or communicate. I think that sometimes we could instead create specific ontologies for special cases with localized problems.

Do geospatial ontologies perpetuate Indigenous assimilation? (Reid & Sieber, 2019)

Sunday, September 22nd, 2019

To answer the title of the paper succinctly: yes, geospatial ontologies do perpetuate Indigenous assimilation when no Indigenous perspectives are considered; however, if researchers do consider Indigenous perspectives, decolonization of geospatial research is possible. Indigenous people across the globe view their landscape much differently than western geographers do; for example, physical entities can have their own agency, there are less abrupt changes between different aspects of the landscape, and often there is no separation between cultural beliefs and the entity itself.  With more Indigenous experts participating in discussions of ethnophysiography and geospatial ontologies, it appears that there can be no universality of geospatial ontologies and that multiple worlds must exist.

Concerning GIScience, ontologies have an important place in discussing landscape perspectives, especially in an ever-connected and technologically driven world where standardization and simplicity reign. With the rise of neogeography and VGI, geospatial ontologies are more important than ever, as everyone who contributes geographic data should view the physical entities of their landscape in a similar manner in order for the data to be viewed as accurate and useful.

After reading this paper, I have some questions regarding Indigenous ontologies and geospatial ontologies generally: how much differentiation is there between different Indigenous groups’ perspectives of their landscape? How often was universality discussed before the creation of the internet? What is the research on ontology universality like today – is it still a popular field of research like it was in the 2000s?  How do Indigenous perspectives translate to modern technology – do computers have trouble understanding their view of the landscape? What else is being done to decolonize geographic research?

I realize some of these questions might have answers in other literature, especially since I do not have a lot of prior research in Indigenous studies nor ontologies, and I welcome any response educating me or pointing me in the direction of other papers that may answer my questions.

-Elizabeth


On Sarkar et al. (2014) and movement data

Sunday, December 3rd, 2017

I thought Sarkar et al.’s “Analyzing Animal Movement Characteristics From Location Data” (2014) was super interesting, as I don’t have a very strong background in environmental science and I didn’t know about all of the statistical methods involved in understanding migratory patterns via GPS tracking. The visualizations were super interesting, like the rose diagram to show directionality and the Periodica method to then determine hotspots. I also appreciated the macroscopic viewpoint of this article; though the inclusion of equations is important for replication and critical understanding, it is also important to discuss the outputs and limitations of the equations at a larger level, in order to better understand results. It is especially useful for those without deep math backgrounds, like myself, to understand the intention behind using certain equations without being able to visualize their output ourselves.
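
For anyone else without the math background, the rose diagram idea is simpler than it sounds. Here is a small sketch (hypothetical planar fixes, not the authors’ code) that derives a compass heading from each pair of consecutive GPS fixes and tallies them by sector.

```python
# A small sketch (not the authors' code) of the idea behind a rose diagram:
# derive a heading from each pair of consecutive GPS fixes and count headings
# by compass sector. Coordinates are hypothetical and treated as planar.
import math
from collections import Counter

# (x, y) fixes in metres, ordered in time
fixes = [(0, 0), (10, 2), (22, 5), (30, 18), (33, 40), (31, 65)]

SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]


def heading_deg(p, q):
    """Compass bearing (0 = north, clockwise) from fix p to fix q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.degrees(math.atan2(dx, dy)) % 360


def sector(bearing):
    """Assign a bearing to one of eight 45-degree compass sectors."""
    return SECTORS[int(((bearing + 22.5) % 360) // 45)]


counts = Counter(sector(heading_deg(p, q)) for p, q in zip(fixes, fixes[1:]))
for name in SECTORS:
    print(f"{name:>2}: {'#' * counts.get(name, 0)}")
# Each row's length is the sector's frequency: a text-mode rose diagram
# showing the trajectory's dominant direction of travel.
```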

As interesting as it was to learn about the incredible utility of understanding animals’ migratory patterns, I couldn’t help but think about applications to human geography. I wonder if these methods are already being used to extrapolate information about someone based on their location, as the Toch et al. (2010) paper on privacy for this week hinted at. If human and animal movement patterns are different, it would be interesting to evaluate how well the methods applied to animal migratory patterns work for studying human spatial data. Since Sarkar et al. used unsupervised classification to learn new patterns, it seems like the combination of methods used could apply (and probably does apply) to human spatial data mining. As terrifying as that is.

On Toch et al. (2010) and location-sharing

Sunday, December 3rd, 2017

This article was super interesting, as I didn’t know too much about the actual mechanics behind location sharing (i.e., “Creating systems that enable users to control their privacy in location sharing is challenging” (p. 129)). Their idea of identifying privacy preferences based on the locations that people go to was confusing (and was not really clarified by the end). Perhaps it’s because I don’t understand Locaccino, particularly because of technology constraints from 7 years ago (did they or could they collect data then? All the time, or just when you wanted to share your location, like “At the mall”?), underlined by the wonderful image of the “smartphone” (p. 131). Like some of my classmates noted, this seemed very similar to Find My Friends, and perhaps that’s why I didn’t understand how this worked, or what the line was between actively and passively volunteering location.
Further, I had some issues with the participant pool that they used. The researchers relied on a sample that was 22/28 male and 25/28 students, and then were surprised that “the study revealed distinct differences between the participants, even though the population was homogenous”. As is evident from spatiotemporal GIS and feminist GIS, women interact with spaces differently than men do. Further, the age of participants, as another classmate noted, is crucial: a 50-year-old staff member or student will go to different places than a 22-year-old student. Not to mention, analysis of age could determine why there was a big difference in sharing (or whether there was a difference at all). Also, I was interested in seeing the differences between mediums, as some people used phones and some used laptops, and phones are way easier to pull out and share info on than laptops, especially in social gatherings or public spaces. They acknowledged this difference by noting that 9 mobile and 5 laptop users were “highly visible” (p. 135), but I would be more interested in first seeing the differences between the two mediums and the activity levels for each as a whole, rather than continuing to equate the two, especially since laptops and phones were not distributed equally among participants. I think this study would be interesting to redo today, but with more information about participants and more controls throughout the study (or at least, correcting for differences among participants and modes of participation).

Location Privacy and Location-Aware Computing, Duckham and Kulik (2006)

Saturday, December 2nd, 2017

Duckham and Kulik (2006) introduce the importance of privacy in location-aware computing and present emergent themes in the solutions proposed for related concerns. In their section contextualizing privacy research, the authors present privacy and transparency as opposing virtues (p. 3). I’m curious about the distinction that would motivate valuing one over the other. For instance, many would feel uncomfortable with the details of their personal finances being public (myself included), but would advocate for the openness of business or government finances, or even those of the super-rich. Is power the distinguishing characteristic? Perhaps concerns for personal wellbeing or intrusive inferences are less applicable to large organizations, but then how do we explain the public response to the Panama or Paradise Papers?

Duckham and Kulik (2006) also posit that greater familiarity and ubiquity of cheap, reliable location-aware technologies will increase public concern for privacy (p. 4). I’m not so convinced; in fact, is it not the opposite? It would seem that at their inception, concern for privacy was much higher than it is now. I would argue the pervasiveness of location-aware technologies has generated a reasonable level of comfort with the idea that personal information is always being collected. I would imagine this is evident in the differential use of location-aware technologies among people who have grown up with them.

I appreciated the authors’ discussion of location privacy protection strategies. They provide an interesting critique of regulatory, privacy-policy, anonymity, and obfuscation approaches. I would add to the critique of regulatory or policy frameworks based on “consent” that participation in such technologies is becoming less and less optional. Even when participation is completely optional, consent is often ill-informed. It’s clear that the question of privacy in location-aware computing is one with no clear answer.

Animal Movement Characteristics from Location Data, Sarkar et al. (2014)

Saturday, December 2nd, 2017

Sarkar et al. (2014) present an analytical framework for making inferences about animal movement patterns from locational information. The article was an insightful show-don’t-tell introduction to how movement research could be applied beyond the domain of GIScience. Also, I think this may be one of the first articles we’ve looked at with an explicitly ecological application of GIScience research… A welcome addition!

It’s becoming increasingly evident how the GIScience topics we’ve discussed in class interact to provide a better understanding of how geospatial information is analyzed and represented. I appreciated the authors’ discussion of uncertainty in the Li et al. (2010) algorithm for detecting periodicity. I found myself tempted again to assume that increasing temporal resolution is the best way to minimize this sort of uncertainty; even setting aside concerns about feasibility, I’m ultimately not convinced that this does more than mask the problem. The detection of periodicity through cluster analysis resembles aggregation techniques for reducing the influence of outliers on uncertainty in the resulting periods, but I am still a little unclear on how the temporality of the location data was incorporated into the clusters. Does the Fourier analysis account for points near in space but distant in time? Perhaps the assumption of linearity is enough in the assessment of migration patterns.
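
My rough understanding of the periodicity part, sketched on synthetic data (this is only the core Fourier idea, not Li et al.’s actual Periodica algorithm):

```python
# A toy sketch of the core idea behind periodicity detection (not Li et al.'s
# Periodica algorithm itself): turn a location track into a binary "at the
# reference spot" time series and look for the dominant period in its
# Fourier spectrum. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_hours = 24 * 30                      # one month of hourly fixes
t = np.arange(n_hours)

# Synthetic behaviour: the animal visits a den roughly every 24 h,
# staying ~6 h per visit, with a little timing noise.
at_spot = ((t + rng.integers(-1, 2, n_hours)) % 24 < 6).astype(float)

# Fourier analysis of the (mean-removed) presence series.
spectrum = np.abs(np.fft.rfft(at_spot - at_spot.mean()))
freqs = np.fft.rfftfreq(n_hours, d=1.0)        # cycles per hour
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
print(f"dominant period = {1 / dominant:.1f} hours")  # close to 24
# Cluster analysis in Periodica plays a similar role to this hand-picked
# "reference spot": it decides which locations count as the same place
# before the temporal analysis is run.
```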

The distinction between directionality and periodicity as components of movement was insightful. Typically I would think about the significance of movement as it relates to the physical space, but Sarkar et al. demonstrate how inferences from orientation and temporality of movement can be insightful on their own.

Thoughts on “How fast is a cow? Cross-Scale Analysis of Movement Data”

Friday, December 1st, 2017

It was interesting to see a direct link to Scale (the presentation I gave this week) in the very first paragraph of this paper. It just reaffirmed how scale is a central tenet of many different sub-fields of GIS, from uncertainty to VGI to movement data in this particular case.

The authors enumerate the many different factors that influence the collection of movement data, from the sampling method to the measurement of distance (Euclidean vs. network) and the nature of the space being traversed. One of the concerns they highlight is “sinful simulation”, and this reminds me of our discussions of abstraction pertaining to algorithms, agent-based modelling, and spatial data mining. For all these methods, the information lost in order to model behaviour or trends is always a concern, and I wonder what steps are taken to address the loss of spatial or other crucial dimensions in movement data.

Another common theme discussed by the authors is the issue of relativity and absoluteness. In their decision to focus on temporal scale, they reiterate that, as with slope, “there is no true speed at a given timestamp” (403), because speed depends on adjacent points and is therefore relative. But they also say that speed is dependent on the scale at which it is measured, and this confused me: whether they measure it in cm/s or inches/minute, is the unit what they mean by granularity? If so, then regardless of the unit of measurement the speed should be the same. I wonder what they mean by there being no absolute speed at a given timestamp if they are referring to a scale issue and not a relative measurement/sampling issue.
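
After writing this I tried a quick sketch on a made-up track, and I think “scale” here has to mean the temporal granularity of the estimate rather than the unit: the unit stays m/s throughout, but the window over which displacement is taken changes the answer.

```python
# A quick sketch of why "scale" here means the temporal granularity of the
# estimate, not the measurement unit: the same (hypothetical) track yields
# different speeds depending on the window over which displacement is taken.
import math

# Hypothetical fixes: (seconds, x metres, y metres) for a cow zig-zagging
# while drifting slowly to the east.
track = [(t, 2.0 * t, 15.0 * math.sin(t / 3.0)) for t in range(0, 121, 5)]


def mean_speed(track, step):
    """Average speed when displacement is measured every `step` fixes."""
    pairs = list(zip(track[::step], track[step::step]))
    dists = [math.hypot(x2 - x1, y2 - y1) for (_, x1, y1), (_, x2, y2) in pairs]
    times = [t2 - t1 for (t1, _, _), (t2, _, _) in pairs]
    return sum(dists) / sum(times)


for step, seconds in [(1, 5), (3, 15), (12, 60)]:
    print(f"granularity {seconds:>2} s: mean speed = {mean_speed(track, step):.2f} m/s")
# The unit (m/s) never changes, but the coarser the temporal granularity,
# the more the zig-zags are smoothed away and the lower the apparent speed.
```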

The authors contend that the nascency of the field of movement data analysis means that researchers rarely question the choice of a particular temporal scale or parameter definition, and this is definitely an important issue as we have seen with the illustration of the MAUP and gerrymandering. The fact that all these subfields are subsumed within the umbrella of GIS, and that researchers tend to have some “horizontal” knowledge about how methods have been developed and critiqued in other fields, hopefully means that they can adopt the same critical attitude and lessons learned from the past towards this new domain of research.

-futureSpock