Core concepts, Kuhn (2012)

September 16th, 2017

Much like Mark (2003), Kuhn (2012) seeks to create a comprehensive list of core concepts in GIScience. Kuhn emphasizes the multi-disciplinarity of GIScience and its importance in the growth of the field. In general, I think that multi-disciplinarity is beneficial to any field, as different perspectives can provide fresh outlooks. Kuhn’s list of 10 core concepts is approachable for researchers in many disciplines, which can help promote cross-disciplinary GIScience research.

The core concepts are all relatively basic, but Kuhn’s more philosophical approach to them is really interesting. I found the discussion of location and accuracy particularly thought-provoking. Kuhn states that nothing has a true location, as location is based on relativity and context. While I immediately agreed that the understanding of a location is based on context, it took me a while to wrap my head around the fact that even a theoretically unmoving object’s location must be established relative to something else (i.e., I can’t be ‘here’ unless there is a ‘there’). I had never considered location as a dualism, but Kuhn has opened my mind to the notion.

In the discussion on accuracy, Kuhn suggests that one aspect of accuracy depends on regularity in repeated measurements, but goes on to say that measurement must be understood as a random process. I will readily agree that individual measurements can be noisy, but shouldn’t the outcome of a measurement be far from random? Either I haven’t thought enough about it, or a more in-depth explanation would be helpful.
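To convince myself of what “measurement as a random process” might mean, here is a minimal sketch, assuming a simple additive error model (the bias and noise values are made up for illustration; Kuhn does not specify any of this):

```python
import numpy as np

rng = np.random.default_rng(42)

true_length = 12.00        # the quantity being measured (metres), which never changes
instrument_bias = 0.05     # systematic error: affects accuracy
instrument_noise = 0.10    # random error: affects precision/repeatability

# Each measurement is the true value plus bias plus random noise, so the
# *outcome* of any single measurement is a random variable even though the
# thing being measured is fixed.
measurements = true_length + instrument_bias + rng.normal(0, instrument_noise, size=100)

print(f"mean of repeated measurements: {measurements.mean():.3f}")
print(f"spread (std dev) of measurements: {measurements.std():.3f}")
```

Under this reading, the repeated outcomes scatter around a stable mean; the regularity lives in the distribution of measurements, not in any single reading, which may be closer to what Kuhn means.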

Defining the field – Mark (2003)

September 16th, 2017

In Geographic Information Science: Defining the field, Mark (2003) presents the “intellectual scope” of GIScience and seeks to precisely define the science in a way that Goodchild (1992) did not. He does this by suggesting that GIScience is a multi-disciplinary branch of information science, comparable to computer science. Mark clearly lays out the basic tenets of GIScience and, in my opinion, successfully presents Geographic Information Science as a discipline that reaches far beyond GISystems and their applications. While reading the article, I often thought about the shortcomings in my GIS education during my undergraduate degree. GIS was presented almost exclusively as GISystems, and this article helped provide me with a base understanding of GIScience, its scope, and its importance.

One thing that stood out to me is that Mark suggests GIScience became a truly academic field when the National Science Foundation in the US began funding GIScience research. I find the use of agency funding as a source of scientific legitimacy incredibly interesting. On the one hand, funding is a crucial component of science and academia: without any funding, scientific research cannot be done. On the other hand, the fact that Mark’s understanding of the basic components of GIScience rests partially on successful funding proposals seems troubling. If government funding agencies have the power to define the scope and content of the science, is the science moving forward freely?

While this is a philosophical and ethical question that speaks more to our society than GIScience itself, given the personal privacy concerns tied to GIScience that we discussed in class, I feel somewhat perturbed by this. If government funding is pushing forward the scope and content of GIScience, how will citizen and consumer rights be protected?

Comments on Wright et al. (1997)

September 15th, 2017

Let me start off by saying that I’m happy that Wright et al. (1997) acknowledge that GIS can have many different identities. GIS can be different things to different people – it can be used to answer research questions or fuel new ones. I was introduced to GIS as a tool, and initially saw it as a mechanism which allowed me to investigate spatial data and produce maps to show my findings. As I am now delving further into GIS, I am realizing that it is important to interrogate GIS as a means of producing knowledge and look into any errors or biases that GIS itself introduces. I don’t really know if this qualifies GIS as a “science”, but to me, this type of investigation goes beyond considering GIS as merely a tool.

I appreciated that Wright et al. (1997) discussed the various philosophical approaches underpinning science. It’s important to talk about the pedestal that we often put science on and the types of knowledge that it values. In the context of GIS, I think that the label of “science” is problematic in that, while perhaps elevating the field, it inevitably becomes exclusionary to those outside of academia or other similar positions of privilege. I’m not convinced by the authors’ rationalization of why being able to call GIS a science is important. If we’re fighting to call GIS a science to give it academic legitimacy, isn’t this just giving in to the skewed status quo? Furthermore, I’m not clear on where this meta-discussion of what GIS “is” actually gets us. Nothing is stopping anyone from talking about or using GIS in different ways. Why do we need to put so much effort into rigorously defining its many identities? Aren’t there more pressing problems which need solving?

World peace, Vancouver’s astronomical housing prices, Donald Trump…???

  • janejacobs

Thoughts on “GIS: Tool or Science?” (Wright et al., 1997)

September 15th, 2017

Right off the bat, the antiquity of this article stood out. When the authors discuss how it has become necessary to “refer to information that may exist only in electronic form”, and how new methods of citation will need to be developed for websites, one realizes that the context in which this review was written is very different from the present day, where citing electronic sources in research is second nature. This point is relevant because we can treat the text almost as a historical document, an insight into how the question of GI-tool/GI-Science was being discussed at the conception of the “field” of GIS.

The initial description of the GIS-L presents an interesting case of how issues in GIS were first being discussed on an online platform by geographically distant scholars and interested individuals. It is difficult to imagine an academic paper devoting so much space to a conversation that took place on a discussion board. There is clearly some ambiguity about how to treat the discussion, with the authors positing that the bulletin board “falls into the realm of personal communication”. Surely no one would make the mistake of assuming that anything they post to the internet today is “personal” or protected by some common understanding of privacy and discretion.

I enjoyed their discussion of GIS belonging on a fuzzy continuum. Their justification for caring about GIS’s scientific identity seemed a little circular to me: “labeling a field as a science…may…secure it greater funding and prestige.” Seeing as the authors have a vested interest in securing funding for their area of research, it seemed like it would be in their interest to argue for GIS as a science, to expand their prospects in the academy.

The “GIS as a tool” section raised a few questions for me. The authors state that “The tool itself is inherently neutral…its development and availability driven by application.” I am not sure I agree with this statement, seeing as the prohibitive cost of many packages makes them decidedly un-available to the majority of people, and the development of the programs is always done with the products in mind, which have political and social implications (gerrymandering using GIS, for one example).

In their section on GIScience, the authors laid down four conditions for a discipline to be considered a “Science”, which begged the question: who decides whether these conditions have been met or not? The language was (deliberately?) vague, citing “sufficient significance”, “sufficiently challenging”, and “sufficient commonalities”. This language makes it nearly impossible to arrive at a definitive or quantifiable answer as to whether something is a science or not, and perhaps this is the point.

I thought it was interesting how the authors discussed the problems that arise from the subjugation of GIS at both a theoretical level (what is science?) and a fine-grained, administrative one, recognizing how hard it would be for academics in the field to secure jobs and train students while devoting time to research.

In their conclusion they discussed how GIS may be a “new kind of science”, and I think this was a prescient observation, as the diversity of fields within GIS today validates the point.

-FutureSpock

GIS: Tool, Science, or the new norm? (Goodchild 2010)

September 14th, 2017

I enjoyed Goodchild’s article, as it serves as an almost nostalgic reflection on GIS’s progress over the years since its beginning. Goodchild’s discussion of whether GIS is a tool or a science is quite interesting as well: he notes that, chronologically, the focus on measuring error in the 80s shifted to a focus on uncertainty in GIS applications in the 90s. To me this reflects how GIS, being so linked to computer software so early on, can easily be confused with BEING the computer/software, rather than the science and scientific reasoning that go into setting the parameters and finding appropriate uses for it. Personally, in working with GIS I find the reasoning behind which statistical technique, projection, etc. to use to be more and more a scientific judgement, one that cannot be made by a computer given problems of scale and the Modifiable Areal Unit Problem, to name a few. Goodchild hints at GIS being multidisciplinary in itself by contrasting geodetic science, cartography as an artistic science, and photogrammetry as largely engineering/problem solving. I feel this too leaves lots of room for people not familiar with GIS to focus on one while ignoring its other applications, and the fact that GIS today is often used as an umbrella term encompassing all of these very multidisciplinary areas in one subject.

As for Goodchild’s predictions for the GI-future, I agree with him that neogeography and VGI will be the ‘future’ of GIS, highlighting the increased use of user-generated information and greater openness to the general public (either passively or actively). I feel it’s this increased participation in GIS over the years, from a very non-user-friendly interface to being incorporated into so many mobile apps, that makes the future for GIS quite bright, even if it doesn’t particularly hold to its ‘hard science’ background/early aspirations. I find his comments on ‘knowing where everything is, all of the time’ eerily current with the privacy concerns we brought up in lecture regarding never being able to shut off our phones, and although we have not fully reached this state (though we are getting close), I could see it intensifying in the near future, possibly coupled with biometric data such as step count, heart rate, and even health status from the increased prevalence of wearable technology. Geography (defined by Goodchild as relating to the earth or close to it) is also challenged by new uses of GIS, such as its neuroscience applications (where the earth’s topology is replaced with that of a brain) or modern AR, where the space could simply mean a table or a sandbox. Whether or not these are things to look forward to will be interesting to debate, and I can only wonder what the next 20-year report on GIS will look like.

-MercatorGator

Thoughts on Goodchild (2010)

September 13th, 2017

This paper by Goodchild (2010) is an assessment of the (then) current state of the field of GIScience. Goodchild talks about the beginnings of the field, research accomplishments and current agendas, and future predictions. I often struggle to develop a conceptual framework for GIS, so I was happy to see that this paper did just that. By defining the field as the intersection between computers, society, and humans, I feel that I have a much clearer understanding of what GIScience actually is and the disciplines that it was born out of. That said, I feel it’s worth noting that this conceptual framework doesn’t explicitly mention anything spatial…

This paper left me thinking about where GIScience fits within existing bodies of geographic thought. Goodchild’s many references to Tobler reminded me of geography’s “quantitative revolution” in the mid 50s. It seems to me that GIScience as it is today is only possible because of previous efforts to develop the field of spatial science, which is based on rigorous statistical techniques and scientific ways of theorizing. I was then thinking more broadly about theoretical understandings of space, and discussions of absolute vs. relative space. Yes, dimensions of space can be measured, but space can also be experienced, tied to significant symbolic meaning, and transformed by the perspective of the individual. GIScience fits very well within absolute theories of space, but how can it be adapted to answer questions about relative space?

  • janejacobs

Thebault-Spieker et al.

November 30th, 2015

This article uses an interesting combination of quantitative and qualitative methods to shed light on decision-making among crowdworkers. The quantitative data demonstrated strong correlations between willingness to perform tasks and the socio-economic status of the destination, while the qualitative data provided rather direct responses that implied causality. As far as the position of crowdsourcing on the tool-science spectrum, I would place it firmly on the tool side, because its applications are so purely commercial, and the use of the technology doesn’t in itself contribute to the furthering of geographic knowledge.

This study’s focus on decision-making reminds me of my proposed master’s research, which involves a discrete choice experiment. Choice experiments identify several variables that are of importance to interviewees in making a certain decision. The variables are then combined at different values in order to make several scenarios to present to the interviewee, who is then asked for their preference. The interviewer can then infer which of the variables was most important to the decision. Applying such a method could be interesting for a study like this, because several attributes of the destination neighborhoods are distinct but interrelated, e.g. socio-economic status, crime, and race. The qualitative results implied that crime was an attribute that respondents were very open about citing as a decision driver. By contrast, eliciting the extent to which socio-economic status and race are decision drivers would be quite difficult, because many people would feel ashamed to say so openly. A choice experiment might not get around this problem either, though choosing neighborhoods solely on the basis of race and asking whether the person would be willing to serve that neighborhood could be a viable method. Answering these questions would have important implications for the ethical value of the sharing economy.
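As a rough illustration of how the scenarios in a choice experiment are built, here is a minimal sketch; the attributes and levels are hypothetical, chosen only to mirror the neighbourhood characteristics discussed above, and are not from Thebault-Spieker et al. or my own design.

```python
from itertools import product

# Hypothetical attributes and levels describing a destination neighbourhood.
attributes = {
    "socio_economic_status": ["low", "medium", "high"],
    "perceived_crime": ["low", "high"],
    "travel_time_minutes": [15, 45],
}

# A full factorial design: every combination of levels becomes one scenario.
scenarios = [dict(zip(attributes, levels)) for levels in product(*attributes.values())]

# In practice respondents are shown small sets of scenarios and asked to choose;
# modelling those choices (e.g. with a conditional logit) reveals the relative
# weight each attribute carries in the decision.
for i, scenario in enumerate(scenarios, 1):
    print(i, scenario)
```

In a real design the full factorial would usually be pruned to a smaller, balanced subset so that respondents are not overwhelmed.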

 

-Yojo

The Sharing Economy-Uber (Isaac 2015)

November 30th, 2015

There is also a notable difference in the relationship between today’s two topics and GIScience as a discipline. While issues of scale are more clearly within GIScience, the sharing economy is one of those topics–along with, say, drones–where what’s most pertinent to discuss is how GIScience technologies (GPS, in this case) are employed, and what their wide-ranging effects on society might be. In these cases, I think a valid question is, what can GIScientists contribute to a conversation in the social sciences and humanities to further our understanding of these new technologies?

There is evidence of a certain conceit around the “sharing economy.” As Isaac argues, Uber wouldn’t exist the same way in a better job market, and there appears to be a continual effort to reduce the proportion of profits going to labour–epitomised by the plan to eliminate the drivers. When we ponder these aspects of a GIScience-potentiated technology like Uber, are we still “doing” GIScience the same way as when we talk about issues of scale? I’d argue that even if we are not, in a strict sense, we should broaden our definition of what doing science is. Coming to the end of the semester, I’m increasingly convinced that scientists ought to be better versed in methods of critiquing and analyzing the influence of technologies on society, and that this sort of thinking should be incorporated into various scientific disciplines.

Atkinson and Tate: links between scale and uncertainty

November 30th, 2015

In this article, the authors discuss the problems associated with re-scaling data and possible tools for addressing these problems. Re-scaling is required in order to compare data sets that are collected at different scales. I found the article extremely dense and challenging, being very heavy on statistical theory, and the examples provided to give context are themselves quite hard to understand. The article did give attention to several topics that are also important in the study of uncertainty, namely the modifiable areal unit problem (MAUP) and spatial autocorrelation. It is important to understand heterogeneity at scales that are finer than the scale of the sampling. I wonder, however (and the authors may have answered this question in language that I could not understand), how one incorporates heterogeneity at larger scales when scaling up. While I came to understand the MAUP as a product of the process of aggregating small-scale data to a larger scale and masking heterogeneity in the process, I suppose that it could equally be described as a process of dividing large-scale data into a smaller scale, except that heterogeneity must be interpolated when going from a large to a small scale.

Furthermore, though interpolation, a crucial tool of re-scaling, was not prominent in my own review of the literature, it is relevant to the topic of uncertainty because it involves creating data where no actual measurements were taken, so that the uncertainty is basically absolute. I’m actually not sure if interpolation can be approached from a position of error, vagueness, or ambiguity. I suppose that error would be applicable because the interpolated value could be cross-referenced against samples from the field.
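To make the “aggregation masks heterogeneity” point concrete, here is a minimal sketch with synthetic data (nothing from Atkinson and Tate): a fine-scale surface is averaged into progressively coarser blocks, and its apparent variability shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic 64 x 64 "fine scale" surface with plenty of local heterogeneity.
fine = rng.normal(loc=50, scale=10, size=(64, 64))

def aggregate(surface, block):
    """Average non-overlapping block x block windows (scaling up)."""
    n = surface.shape[0] // block
    return surface[:n * block, :n * block].reshape(n, block, n, block).mean(axis=(1, 3))

for block in (1, 4, 16):
    coarse = aggregate(fine, block)
    print(f"{block:>2} x {block:<2} blocks -> std dev = {coarse.std():.2f}")

# The standard deviation falls as the support grows: the heterogeneity is still
# "there" in the landscape, but it is hidden inside the aggregated units.
```

Going the other way (disaggregating the coarse surface back to fine cells) is exactly where interpolation, and its uncertainty, would have to come in.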

  • Yojo

Problems of Scale in GIScience

November 30th, 2015

The topic of scale is a good example of GIS being synonymous with “doing science”. When I think about GIScience as opposed to GIS, I think about the problems that arise when trying to represent and communicate space using digital geographic information. Scale, as expressed in Spatial Scale Problems and Geostatistical Solutions: A Review by Atkinson and Tate, presents many problems for how to optimally relate and represent spatial features and properties. GIS is special because, unlike traditional graphical maps, it has the capacity to integrate multi-scale data. Therefore, when discussing spatial data, one must address issues of scale and the implications these new types of interfaces have for representing and analyzing spatial data.

Scale is very much a central topic of spatial cognition. I have seen many applications of scale for explaining how we conceptualize and categorize space. Atkinson and Tate assert in their paper that “one can never observe ‘reality’ independent of some sampling framework, so that what we observe is always a filtered version of reality” (Atkinson and Tate, 2000). This acknowledgement of the conceptual frameworks that contextualize scale is an essential part of cognitive processes that involve spatial properties as a core component.

In addition, scale is a fundamental component of spatial statistics and analysis. The MAUP and variations of sampling schemes are met with issues pertaining to scale. In our final project for Geog 308, my group members and I have to address issues of scale in our analysis. In order to observe urban sprawl over time for the city of Maceio, Brazil, we have to confront problems of spatial resolution and decide how to stratify and randomly choose our ground truth sample points. The scales of these samples affect the heterogeneity of land cover classes and affect the results of our analysis.
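For what it’s worth, here is a minimal sketch of the kind of stratified random sampling of ground truth points we are wrestling with; the class map, class labels, and sample size are made up purely for illustration, not our actual Maceio data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classified raster: 0 = water, 1 = urban, 2 = vegetation.
class_map = rng.integers(0, 3, size=(100, 100))

samples_per_class = 30
ground_truth_points = {}

for cls in np.unique(class_map):
    rows, cols = np.nonzero(class_map == cls)              # all pixels in this stratum
    pick = rng.choice(len(rows), size=samples_per_class, replace=False)
    ground_truth_points[cls] = list(zip(rows[pick], cols[pick]))

# Each class gets the same number of validation points regardless of how much
# area it covers (the point of stratifying); changing the pixel size of
# class_map would change which locations even exist to be sampled.
print({cls: pts[:3] for cls, pts in ground_truth_points.items()})
```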

In addition, I find that scale is relevant to the other topic being presented tomorrow on the sharing economy in GIScience. Scale is very important when discussing networks, accountability, and trust within the sharing economy. I hope to discuss this topic further during tomorrow’s discussion period.

-geobloggerRB

Site Vs Situation

November 30th, 2015

In Thebault-Spieker et al.’s (2015) article, they analyze the site and situation attributes of each census tract to get a better idea of the qualitative factors influencing crowdworkers’ decisions. They found that perceived safety and distance from starting location/accessibility were the representative site and situation attributes, respectively.

This got me thinking about the site and situation attributes we might find in other sharing economy developments that are not necessarily crowdsourcing; take Airbnb, for example. Some site attributes I can think of for Airbnb, off the top of my head, are cost, safety, and quality (whole house vs. room in an apartment). Situation attributes may be connectivity to tourist attractions (via streets and public transit) or to specific neighborhoods. It would be interesting to see which attribute was more important to people selecting houses to stay in. As a young female with little disposable income, I would rank location second to cost (unless it seemed really worth it).

Generally, I wonder what attributes are deemed most important by users across the various sharing-economy platforms. Thebault-Spieker et al. (2015) address some implications their findings may have for UberX drivers, mainly the idea of a service desert (comparable to a food desert, but for sharing economy services). Extrapolating this to the slightly different platform of Airbnb, I wonder if there is a service desert in lower-SES neighborhoods. I would predict that this is less the case than in the TaskRabbit study, simply on the assumption that lower income families also may wish to travel and Airbnb could aid in making this more affordable. And it seems there do exist a number of Airbnb listings in the ‘ghettos’ of Chicago. Lastly, I acknowledge that I am making a sweeping statement about the southwest region, as most people do; however, I do share some of the views of the female respondents in this study as a Northern Chicagoan.

The stereotypical danger zones are bounded more or less by the 294.

-BannerGrey

Thebault-Spieker: Whose Crowdsourced Market?

November 30th, 2015

The authors situate mobile crowdsourcing markets such as TaskRabbit within geography, arguing that the geographical perspective is fundamental to the functioning of these markets. I was surprised by how little distance seemed to affect willingness to do a task: the authors write that workers were 4.3% less likely to do a task an hour away than one in their immediate area. To me, an hour seems far, and I thought that this distance would have much more of an impact on willingness. I was also surprised by how much gender impacted the decision to complete a task: the mean of means for women’s willingness to do a task was 20% lower than the mean of means for men. The authors hint at it, but I am curious to know what the demographics are of the people asking for the job to be done.

Overall, I think that this article, and the crowdsourced market, is a good example of an application that needs geography. This is certainly a technology that is embedded in geography, and an analysis like this, I would argue, is really essential to understanding the demographics and the processes behind crowdsourcing applications like this one. Inevitably, some people will look at applications like this, and add them to lists such as “ways to make money in GIS” or “another new innovation that uses GIS!” (I’m looking at you, keynote speaker at GIS day.) However, we need to keep working on critical research, keep asking who these technologies empower, and keep examining the underlying inequalities and how they may be perpetuated by services like this.

 

-denasaur

Avoiding the South Side and the Suburbs: Thebault-Spieker et al., 2015

November 30th, 2015

Thebault-Spieker and colleagues (2015) discuss the geographic factors influencing mobile crowdsource market “workers” and how these factors may affect the willingness of a participant to accept a work task on the mobile crowdsourcing market application “TaskRabbit”.

I found the article to be an interesting read; however, I thought the authors could have made their geographic argument stronger. They could have gone more in depth with regards to how task duration in relation to distance traveled affected people’s willingness to travel to the task. As well, I thought the authors could have discussed the MAUP with regards to their argument that census tracts with low reported household income (derived from aggregated point data) are disadvantaged in this market.

The authors admit that the study is limited by the fact that it was only conducted in one county. I wonder what their findings would be if they looked at areas that are smaller, such as rural communities. Would they find that socioeconomic status is no longer the driving factor of prices within the crowdsourcing market? Would they find that perhaps individuals with lower socioeconomic status are more self-reliant? From a sociological and economic point of view, I find the study to be very interesting. From a GIScience perspective, I find it has many logical holes and could be more rigorous, but it has promise nonetheless.

 

-ClaireM

The Scale Issue in Social & Natural Sciences, Marceau 1999

November 30th, 2015

In Marceau’s piece, the issue of scale is discussed at length (no pun intended), and many good points are raised. Scale and complexity truly go hand in hand, as complex systems can be invariant to scale (fractal characteristics) – a strange but intriguing phenomenon.

While the two topics are inherently linked, the issue of scale comes up much more often, as it is very visible (scale bars at the bottom of maps) and important (“zooming” in and out on Google Maps, for example, to see the “bigger picture”). That being said, just because map users know what scale is does not mean that they understand how it changes the information represented on a static or dynamic interface.

Marceau stresses the importance of recognizing the Modifiable Areal Unit Problem (MAUP) – an important statistical error born from the aggregation of data over (typically) large swaths of area – and of correcting any spatial analysis that may be affected by it accordingly. I do not pretend to fully understand the geostatistical implications of the MAUP, but I do agree that it is indeed a problem, and am happy that someone who understands the problem mathematically is working hard to find statistical solutions for it.

It is interesting to think about how the increasing use of dynamic interfaces such as mobile applications is changing how we reconcile issues of scale. As we can “zoom” in and out so easily, developers of future maps will have to generate many tiles to accommodate users’ requests to display information at various scales. And to generate these tiles, we will have to really work through the MAUP, and by “we” I mean not just “map makers”, but map users and map builders too. Will we have to include warnings at the bottom of these dynamic maps that “objects on the map may not be as close as they appear”?
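To give a rough sense of just how many tiles “many tiles” is, here is a minimal sketch using the standard web map tiling scheme (2^z by 2^z tiles at zoom level z); the zoom range chosen is only an assumption for illustration.

```python
# Standard XYZ / slippy-map tiling: at zoom level z the world is divided into
# 2**z columns and 2**z rows of fixed-size tiles.
total = 0
for z in range(0, 21):              # zoom 0 (whole world) to zoom 20 (building level)
    tiles = (2 ** z) ** 2
    total += tiles
    print(f"zoom {z:>2}: {tiles:,} tiles")

print(f"total tiles for zooms 0-20: {total:,}")
```

Each of those zoom levels is, in effect, a different aggregation of the same underlying data, which is exactly where the MAUP worry comes in.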

-ClaireM

Can we relate qualitative GIS and spatial scale? (Marceau)

November 30th, 2015

I found Marceau’s article to be a clear and easy-to-understand explanation of spatial scale, different frameworks of space and scale, and problems to do with spatial scale. I realized that I had really only thought of space, and therefore spatial scales, in the absolute sense, and I am looking forward to understanding the relative sense more fully.

This article made me think of discussions of how to incorporate qualitative data and methods in critical GIS. How would one go about using qualitative data while being cognisant of the problems presented here with spatial scales? From what I could find, there has not been much explicit discussion of spatial scales in qualitative GIS. However, I did find an interesting piece by Knigge and Cope (2009) in Qualitative GIS that relates the two topics. They use interviews and conversations to explore residents’ ideas of the vacancies on a rundown commercial street in Buffalo, NY. They argue that the social production of scale is dependent on multiple processes (such as economic exchanges) and discursive practices, such as the imagining of “the city” or “the neighborhood.” They indicate that the scale at which data was collected revealed different interpretations of vacancy, which often conflict with one another. However, one question that this paper brought up for me was the fact that the authors were examining this issue “through the lens of scale” – so does this mean that scale is just another lens through which problems can be explored, and therefore a lens that can be disregarded when it isn’t relevant? To what extent is scale a fundamental geographical issue that is necessary to address – or is it only relevant when it is causing the problems that Marceau talks about?

I may be in a bit over my head in trying to relate the very complex and nuanced topics of qualitative GIS and spatial scales, but I think there is definitely room for more research on the intersection of these subjects.

~ denasaur

Knigge, L., & Cope, M. (2009). Grounded visualization and scale: A recursive analysis of community spaces. In Qualitative GIS: A Mixed Methods Approach (pp. 95-114).

Scale is an Issue!

November 30th, 2015

 

As a student of the MSE and a frequenter of geography courses, my understanding of scale is far more developed than the average person’s (I hope). Marceau’s (1999) article was an interesting read because it forced me to consider, in depth, the problems beyond just noting the MAUP as a point of contention in a final research project. I am very curious to see what the future holds in terms of solving the MAUP, particularly the sensitivity test, if we can find a way to perform it with less effort. Maybe this already exists, as it has been 15 years.
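In the spirit of that sensitivity test, here is a minimal sketch of what an automated version might look like; the data are synthetic and the point is only that re-running the same statistic across several aggregation levels is cheap to script.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two correlated synthetic variables on a fine 60 x 60 grid.
x = rng.normal(size=(60, 60))
y = 0.5 * x + rng.normal(scale=1.0, size=(60, 60))

def block_mean(a, b):
    """Aggregate a square grid into non-overlapping b x b block means."""
    n = a.shape[0] // b
    return a[:n * b, :n * b].reshape(n, b, n, b).mean(axis=(1, 3))

# Recompute the correlation at several aggregation levels: a crude sensitivity
# analysis for the scale effect of the MAUP.
for b in (1, 2, 5, 10, 20):
    r = np.corrcoef(block_mean(x, b).ravel(), block_mean(y, b).ravel())[0, 1]
    print(f"block size {b:>2}: correlation = {r:.2f}")
```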

 

On another note, applying this reading to my own project: scale is a somewhat challenging idea to take into account when building an ontology. Marceau is very clear about the problems of the spatial aggregation of data and cross-scale correlations. Scale is obviously a huge factor in farming: what one farmer produces and how they run the farm is directly dependent on the scale of the operation. I have had trouble trying to work a varying scale into the simple notion of a farm, since I was not planning to include geometry. I have come to realize the best way to address scale in my ontology is to specify a type of farm at a specific scale and work from there (intensive agriculture, for example). In fact, by trying to include multiple scales for a farm, I would be building an upper-level ontology (which is not my goal). Geospatial ontologies built at a single scale, however, may be a contributing factor to the MAUP, because the relationships they display won’t exist at another scale, or if they do, they may be altered. On the other hand, a good ontology should be ‘universal’, which to me means it would be applicable at many scales. So is the answer many single-scale ontologies, or one multi-scalar one (per research topic)?
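To make the “one type of farm at one fixed scale” decision concrete, here is a minimal sketch; the class names and the scale facet are entirely hypothetical, invented for illustration and not taken from any existing ontology or from my actual project.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FarmConcept:
    """A concept in a hypothetical single-scale agricultural ontology."""
    name: str
    parent: Optional[str]
    spatial_scale: str        # fixed for the whole ontology, e.g. "field level"

# Every concept carries the same scale, so any relationship asserted between
# concepts is only claimed to hold at that one scale.
farm = FarmConcept("Farm", parent=None, spatial_scale="field level")
intensive = FarmConcept("IntensiveFarm", parent="Farm", spatial_scale="field level")
dairy = FarmConcept("IntensiveDairyFarm", parent="IntensiveFarm", spatial_scale="field level")

print([concept.name for concept in (farm, intensive, dairy)])
```

A multi-scalar version would instead have to make spatial_scale vary per concept (or per relationship), which is exactly the upper-level-ontology territory I am trying to avoid.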

-BannerGrey

Marceau – Blurred lines

November 29th, 2015

This article emphasizes the importance of spatial scale in research and defines important concepts like space and scaling. Written in 1999, this article continues to be relevant to problems of scale presented by new technologies like drones. Marceau states that “nor is a single scale sufficient to investigate phenomena that are inherently hierarchical in space.” She explains that relying on a single scale can severely jeopardize your research by hiding the modifiable areal unit problem. One of the important contributions of remote sensing, and more recently programmable drones, is the ability to rapidly collect data on phenomena at multiple scales. In terms of mitigating the MAUP, the use of a drone to collect imagery could allow the researcher to perform a more robust sensitivity analysis.

I found the discussion on the difference between relative space and absolute space interesting. The author writes that scale is the window through which we view the world, and that scales within relative space are more difficult to define than scales in absolute space, for example in remote sensing. As we move towards more advanced remote sensing using autonomous drones, I wonder how these concepts of space are programmed into AI. For example, traditional remote sensing uses GPS-based imagery that is georeferenced in absolute space. But research is moving towards drones that can navigate absent of GPS coordinates, using computer vision to extract features from the landscape. This way, the drone can navigate around obstacles with only references to relative distance based on velocity and no computation of absolute space. Defining scale in such studies becomes difficult when the lines between absolute and relative space are blurred.
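A minimal sketch of what “references to relative distance based on velocity” could mean in practice: plain dead reckoning with made-up numbers, not how any particular drone actually navigates.

```python
# Dead reckoning: integrate velocity estimates over time to track position
# relative to the starting point, with no absolute (GPS) coordinates at all.
velocities = [            # (vx, vy) in metres per second, e.g. from optical flow
    (1.0, 0.0),
    (1.0, 0.5),
    (0.0, 1.0),
    (-0.5, 0.5),
]
dt = 1.0                  # seconds between velocity estimates

x, y = 0.0, 0.0           # position is only ever known relative to the start
for vx, vy in velocities:
    x += vx * dt
    y += vy * dt
    print(f"relative position: ({x:+.1f} m, {y:+.1f} m) from launch point")

# To place this track in absolute space you would still need at least one
# georeferenced fix (e.g. the launch coordinates) to anchor it.
```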

 

~anontarian~

Marceau’s Article

November 29th, 2015

Marceau’s (1999) article highlights what scale is and how it affects traditional (authoritative) geospatial datasets. This article reminded me of our discussion in Lesley’s geocomplexity seminar because Lesley addressed the concerns about being too specific or too generalizing, and whether or not we can have both.

Marceau states research should explicitly state the variables, specifically “the role of scale in the detection of patterns and processes, the scale impact on modelling, the identification of scale thresholds, and the derivation of scaling laws” (12). Although I agree with this, certain VGI datasets do not host these explicit details because VGI data lacks metadata that can provide information on scale. With this in mind, I wonder how a “solid unified theoretical framework” to understand scale issues will be approached now that new heterogeneous spatial datasets are produced and used, which can be seen within VGI datasets (ibid.).

Moreover, the connection between larger and smaller scales (e.g. global and local scales) can be made via VGI. Johnson and Sieber (2013) state that “VGI can cross spatial scales” (74). For example, citizens (the local level) can communicate with governments (the provincial or national level) by producing VGI that the government can use (75). Nevertheless, VGI introduces an unsolidified, non-unified framework, which is different from the existing expert (GIS) ways of seeing spatial scales that Marceau discusses in her article. As such, Marceau’s article does highlight scale issues that are worth considering; however, since it was written prior to the Web 2.0 boom, it does not consider how spatial extent and grain affect other (less authoritative) forms of spatial data. For instance, the word “near” may be conceptualized differently by different individuals; experts may consider “near” differently than non-experts. Since individuals have different conceptualizations of what “near” means, the VGI they contribute will carry those different, individualized standards and opinions.

-MTM

 

Isaac’s Uber Article

November 29th, 2015

Isaac’s (2014) article on Uber can certainly relate to our class discussions. As Goodchild (2007) stated, spatially aware technologies like new smartphones have driven the proliferation of location-based services such as Uber. Moreover, Uber’s user-friendly applications allow amateurs both to use Uber’s services and to contribute to them by signing on as contract workers. In a sense, Uber encourages ‘produsers.’ No longer does a taxi driver necessarily need to be trained to provide expert services, which is similar to how geospatial information does not necessarily need to be produced by experts. This highlights how the conceptualization of “expert” is being transformed through technological shifts. Now, whether this is a good or a bad situation is up for debate. Reflecting on last week’s discussions, is it OK for large private corporations to change labour structures in a way that allows certain classes to benefit while other classes perish, possibly from unemployment?

As GIScientists, maybe it is important to consider whether geospatial information should be dictated by large Western corporations and their competitive advantages, or whether it should be dictated by a more distributed population. As I discussed in my seminar, the divide exists; furthermore, Isaac questions whether or not Uber and other TNCs are really democratizing the hierarchy that differentiates experts and non-experts. Therefore, as GIScientists, should our focus simply be on the technological improvements to software and hardware that enable certain sharing economy applications to be prodused by a wider audience, or should our focus be on societal improvements to allow a wider audience to contribute to big data? Maybe both? It is important to be aware that the former reinforces power structures, because there is still a reliance on certain experts who isolate technological complexities from citizens, while the latter may be too difficult to accomplish.

-MTM

UBER: Sharing Economy or Stealing Economy?

November 28th, 2015

This article uses the example of Uber to explicate the downsides of the so-called sharing economy. The author argues that Uber is another step towards the new neoliberal economy where employees have no job security or benefits. A depressed job market creates a steady supply of drivers willing to work, and GIS technology enables the service to function. Uber’s website says “We’re bringing Uber to every major city in the world.” If you’re a taxi driver, the situation looks grim. However, if you happen to be an experienced GIS analyst, Uber will offer you a 401(k) plan, gym membership, full health benefits, and paid vacations. GIS-enabled sharing economy technologies are said to be disruptive in the name of efficiency and a better consumer experience; but from the comparison of benefits between the tech community and the average worker, it is clear who is really being disrupted. The genius of Uber framing itself as a technology company rather than as a taxi service is not just a loophole to avoid regulation. Uber really is a technology company, using its commission from drivers to create ever better geospatial infrastructure. When driverless cars put the Uber drivers out of work, Uber will still be well positioned to compete as a transportation and logistics firm.

Get educated folks, the end is near:
Uber Jobs: https://www.uber.com/jobs/57019

 

-ANontarian