Archive for the ‘General’ Category

Modelling Vague Places – the meaning in a name

Monday, October 19th, 2015

Excuse me for invoking a bit of verse, but this was the first thing to spring to mind upon finishing the article by Jones et al. (2008):

 

What’s in a name? that which we call a rose

By any other name would smell as sweet;

So Romeo would, were he not Romeo call’d,

Retain that dear perfection which he owes

Without that title. Romeo, doff thy name;

And for that name, which is no part of thee,

Take all myself. (2.2.47-53)

– Juliet, Romeo and Juliet

 

Personally, I usually can’t stand the insipid characters in the aforementioned play, but in this case they do provide interesting context. While Juliet is happy to ignore all the meaning in a name, I would argue that Jones et al. do the opposite – they assume that names carry enough meaning that even vague geographic descriptors can be subjected to quantitative analysis. I do wonder how big data can help the analysis of vague spaces, since the sheer quantity of data available could allow for better modelling.

 

After our discussion in class about indigenous use of GPS and GIS, I also want to know how the modelling of vague places could be specific to an ontology that prioritizes precision. Does everything need to be quantified? Should it be quantified, and where exactly does ambiguity fit into more cultural understandings of place?

 

Furthermore, I really liked the description of how a place can be described by what it is not: for example, an area can be clearly defined by determining all the boundaries around it. As a fun fact, the Students’ Society of McGill University was actually called the “Students’ Society of the Educational Institute Roughly Bounded by Peel, Penfield, University, Sherbrooke and Mac Campus” as a protest against not being able to use the word ‘McGill’ in student club names. Overall, names are incredibly important and can be described in many ways, and methods of quantifying vague names could give rise to new understandings of how space is conceptualized.

 

-Vdev

Jones et al.: Modelling vague places with knowledge from the web

Monday, October 19th, 2015

In “Modelling vague places,” Jones et al. introduce a novel method of natural language processing for vague toponymic data. They use open-source named-entity recognition methods to extract associated place-names from the results of Google searches for vague toponymic terms such as “the Cotswolds,” an area straddling six counties in Southern England. Then, using a gazetteer, they assign coordinates to the extracted data, transforming the text into geolocated points. These are interpolated using density estimation techniques to draw the boundaries of vaguely defined regions.
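The pipeline is easy to imagine in code. Here is a minimal sketch, assuming spaCy for the named-entity recognition step, a toy dictionary standing in for a real gazetteer, and SciPy’s Gaussian kernel density estimator for the surface; none of these are necessarily the tools Jones et al. actually used:

```python
# Sketch of the vague-places pipeline: NER -> gazetteer lookup -> density surface.
# The gazetteer below is a hypothetical stand-in; a real one would be far larger.
import numpy as np
import spacy
from scipy.stats import gaussian_kde

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

GAZETTEER = {  # place name -> (lon, lat), illustrative values only
    "Cirencester": (-1.97, 51.72),
    "Stroud": (-2.22, 51.75),
    "Chipping Norton": (-1.55, 51.94),
}

def extract_place_names(web_texts):
    """Pull candidate place names (GPE entities) out of harvested web pages."""
    names = []
    for text in web_texts:
        doc = nlp(text)
        names += [ent.text for ent in doc.ents if ent.label_ == "GPE"]
    return names

def density_surface(names, grid_x, grid_y):
    """Geocode names through the gazetteer, then estimate a density surface over a grid."""
    coords = np.array([GAZETTEER[n] for n in names if n in GAZETTEER]).T  # shape (2, n)
    kde = gaussian_kde(coords)
    xx, yy = np.meshgrid(grid_x, grid_y)
    return kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
```

A contour of that surface at some chosen threshold would then stand in for the “boundary” of the vague region.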

The process is representative of the general move toward big-data research: in the past, researchers on the topic would conduct interviews with a necessarily limited number of human beings, who would sketch out their notions of the boundaries or centres of vague areas. Meanwhile, GIS platforms employ administrative definitions that are clearly not always suited to the needs of, say, a Google Maps end-user who wants to know the boundaries of a neighbourhood such as Mile End, which has no official representation on a map or spatial data layer. Ask 10 different Montrealers where the southern boundary of the neighbourhood lies, and you will probably get several different answers. If an ontologically precise boundary definition were the goal, we might prefer the huge n-value of this sort of textual analysis to the anecdotal reports of several different people.

While the researchers employ a gazetteer to assign geographic coordinates to place-names, we can imagine that geolocative metadata extracted from Facebook posts or tweets could offer a potential alternative, especially when dealing with small, densely-populated areas of cities rather than large regions like the Cotswolds or Scottish Highlands.
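As a purely hypothetical sketch of that alternative (the post records and field names below are invented for illustration, not any platform’s real API), the gazetteer step could be skipped entirely, since each geotagged post already carries coordinates:

```python
# Hypothetical: approximate the extent of a vague neighbourhood name directly from
# geotagged social-media posts, skipping the gazetteer. Records are invented examples.
import numpy as np

def rough_extent(posts, term, lo=10, hi=90):
    """Keep posts that mention the vague term, then summarize where they cluster:
    a centroid plus a percentile bounding box that trims outlying mentions."""
    pts = np.array([(p["lon"], p["lat"]) for p in posts
                    if term.lower() in p["text"].lower()])
    centroid = pts.mean(axis=0)
    bbox = np.percentile(pts, [lo, hi], axis=0)  # [[lon_lo, lat_lo], [lon_hi, lat_hi]]
    return centroid, bbox

posts = [
    {"text": "Coffee in Mile End", "lon": -73.600, "lat": 45.524},
    {"text": "Mile End bagels!", "lon": -73.606, "lat": 45.522},
    {"text": "New mural in Mile End", "lon": -73.603, "lat": 45.530},
    {"text": "Old Port sunset", "lon": -73.552, "lat": 45.507},
]
centre, box = rough_extent(posts, "Mile End")
```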

I imagine that big-data approaches offer a lot to the development of natural language processing–the ability of machines to process language as humans do. In some areas of NLP, such as named-entity recognition, machines can almost match humans’ ability to determine which words signify a person, an organization, or a place. As computers become better at thinking like us, they may begin to teach us the “truest” meaning of our own concepts.

-grandblvd

Modeling Vague Places

Monday, October 19th, 2015

The article, Modelling Vague Places with Knowledge from the Web, acknowledges that delimiting places is embedded in human processes. The paper’s discussion of people’s perception of the extent of places reminds me of my own topic, spatial cognition within the field of GIScience (Jones et al., 2008). For example, the authors assert that one way to map the extent of a “vague” place is to ask human subjects to draw its boundary. Acquired spatial knowledge of landmarks, road signs, nodes, and intersections informs how we define and draw these boundaries. In addition, the important role of language in formulating Web queries reminds me of literature I have read about the relationship between spatial cognition and language; specifically, the article reminds us that spatial relationships between objects are encoded both linguistically and visually. This topic also relates closely to Olivia’s topic of geospatial ontologies. It reminds me of the example Prof. Sieber gave us in class about trying to define what constitutes a mountain. Where do we draw the line? Who gets to agree on what makes a mountain? What empirical approaches exist, or could be applied to human interviews, to determine what defines a geospatial entity such as a mountain?

I also liked this article because it reveals the science behind GIS applications. More specifically, the article examines the science behind density surface modeling, alternative web harvesting techniques, and new methods applied to geographical Web search engines. I found the discussion about web harvesting relevant to my experience applying web-scraping tools to geospatial data in GEOG 407. Learning how these tools can be applied to observe and represent vague places is a very interesting concept and a dimension of web harvesting I had never considered before reading this article.

Finally, this paper reveals to me the part that the geospatial Web plays in increasing the magnitude and extent of geospatial data collection. I suspect that in the future, the geospatial Web will play an important part in conducting data-driven studies about problems and uncertainties within the field of GIScience.

-geobloggerRB

Modelling vague places

Monday, October 19th, 2015

After reading the paper on approaches to uncertainty, it was interesting to see a case study of how these concepts are put into practice. In Approaches to Uncertainty, the authors outline the nature of uncertainty in spatial data, describing two strains: one in which the object is well defined, and therefore the errors are probabilistic in nature; the other in which the object is poorly defined, which results in vaguer and more ambiguous forms of uncertainty. In Modelling Vague Places, the authors present density modelling as an effective method of representing the uncertainty of a place name’s extent.

In Jones’ article, the authors discuss the difficulty of storing spatial information for vague place names, like the Rockies or a downtown, that are not strictly defined. They mention that interviews are a powerful tool when trying to determine how subjects conceptualize vague places, but they go on to conclude that automatic web harvesting is a better way to go because it is clearly more time-efficient. However, there is still room for interviews in smaller-scale studies.

I found this article to be a good addition to a discussion on qualitative GIS that I had in GEOG 494. In the presented reading, a researcher had collected information through interviews about how safe a Muslim woman felt going about her daily activities through space post-9/11. Through her interviews, the researcher found that her subject’s definition of friendly space had shrunk in post-9/11 society. I bring up this example to show that not all uncertainty due to vague definitions of space in a GIS can be modelled using web-based knowledge or automated processes.

-anontarian

 

Embracing Uncertainty?

Sunday, October 18th, 2015

I found the chapter “Approaches to Uncertainty” to be an interesting read, although it is definitely one coming from an empirical, quantitative perspective. In particular, the discussion of ambiguity was interesting and somewhat confusing to me. I think that even the existence of discord depends on the user, the individual defining the object: in a territorial dispute, one individual may not even recognize that a dispute exists, while another might argue over it. Something that I found difficult about the author’s discussion of discord was that in the flow chart (figure 3.1), “expert opinion” follows from discord. This seems troublesome to me, and does not seem to fit with the example the author uses for discord. In a land dispute, where two groups have laid claim to an area and have deep roots there, it would not be appropriate to defer to an expert’s opinion. For one thing, who would be the expert? There are power dynamics inherent in who gets to resolve spatial uncertainty and, in doing so, legitimize one claim or another.

The article also made me reflect on our discussion about indigenous epistemologies. Rundstrom (1995) describes how indigenous people exhibit a “trust in ambiguity” and embrace the nuances of geographic spaces and living beings. In the article, ambiguity is defined as confusion over how a phenomenon should be classified because of differing perceptions of it. I think indigenous people, as Rundstrom understands them, would take issue with “how” a phenomenon should be classified, and question whether it should be classified at all. Can GIScience embrace ambiguity in some ways? There is certainly a need for a way to incorporate more ambiguity into GIS if we are to try to represent indigenous geographies.

-denasaur

(As a side note if anyone is interested: I thought that the article at the following link brings up some interesting questions about spatial uncertainty – it incorporates many of the definitions this article does, as well as some discussion of indigenous conceptions of space. The figure 1 diagram is a good visual. http://pubs.iied.org/pdfs/G02958.pdf.)

“Sure, Everything Looks Fine on the Map, But …”: Communicating Spatial Data Uncertainty to End-Users

Saturday, October 17th, 2015

In Chapter 3 of Fundamentals of Spatial Data Quality, the authors review approaches to uncertainty in spatial data, with a focus on the subject as it pertains mainly to geographic information systems (GIS) and, more broadly, GIScience (Fisher et al., 2006). The main themes of uncertainty (ambiguity, vagueness, and error) are reviewed, and each theme’s challenges are listed.

While even introductory GIS users can begin to understand the importance of uncertainty relatively quickly, end-users of GIS products (maps, spatial analysis results, 3D visualizations of phenomena) may take the data at face value, as they typically care only about the final results and conclusions, whether for research, policy-making, or a navigational product sold to the general public. How do we ensure that uncertainty is captured not only in the quantitative analysis on the GIS-user side, but also in the visual interpretation of the end-user?

This relates straight back to the conversation last class about the ethical implications of GIScience, and how to reconcile differences in cultural and historical epistemologies and ontologies. Producing a companion map covering the same spatial extent as the original may allow map users who are not familiar with GIS, or GIScience more broadly, to understand the probability that a given region of the original map contains errors. As for ambiguity, perhaps multiple maps could be produced, although this would only be realistic with Web GIS, where users can select the layers they want to visualize and perhaps even change the underlying assumptions of the GIS to account for the personal aspirations of the intermediate or end-user (e.g., geopolitical conflicts).
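As a rough illustration of that companion-map idea (a sketch only; neither the chapter nor this post prescribes an implementation, and the rasters below are synthetic placeholders), the thematic layer and a per-cell error-probability layer could simply be rendered side by side:

```python
# Sketch: render a thematic map next to a companion "uncertainty map" so end-users
# can see where the data are least trustworthy. Both rasters are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
theme = rng.random((50, 50))        # stand-in for the mapped variable
error_prob = rng.random((50, 50))   # stand-in for the per-cell probability of error

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(theme, cmap="viridis")
ax1.set_title("Mapped variable")
im = ax2.imshow(error_prob, cmap="Reds", vmin=0, vmax=1)
ax2.set_title("Probability of error")
fig.colorbar(im, ax=ax2, shrink=0.8)
for ax in (ax1, ax2):
    ax.set_xticks([])
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```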

That being said, I look forward to next week’s class on Uncertainty to discuss this topic further, as well as the class where we will discuss Visualization in its various forms.

-ClaireM

Wang – CyberGIS

Monday, October 12th, 2015

Wang’s article is on cyberGIS: software that operates on parallel and distributed cloud computing rather than the typical single-computer, sequential GIS. This software, particularly the CyberGIS Gateway, reminded me of our class discussion on how GIS is taught in a research-oriented university. Initially I was frustrated that research-oriented schools are apprehensive about teaching a step-by-step class on using ArcGIS. From a practical perspective, however, it doesn’t make sense to teach a single piece of software when there exist so many open-source packages that catch up more quickly to the demands of researchers and businesses. It appears that our computing capability, especially through cloud computing, and our datasets are growing so rapidly that CyberGIS will become the future.

As we move towards CyberGIS, the hope is that the traditional cost and skill barriers to GIS will fall, opening up the toolset to a wider range of disciplines. Should the challenge of CyberGIS then be to make the toolsets easier for this wide range of people to use? The development of easy tools for hydrology and emergency management allows decision-makers to access powerful networks of computers to manage complex data. In this sense, I think CyberGIS can empower smaller organizations, rather than just corporations or governments, to make good decisions. Big-data applications are often criticized as being the tools of large corporations like Google that possess expensive infrastructure. CyberGIS allows smaller organizations to compete with large firms, and possibly break down the hegemony of these massive companies, by performing complex analytics without the expensive infrastructure.

-AnOntarian

GCI and the future of GIScience (GCI past to future)

Monday, October 12th, 2015

Yang’s paper is an exhaustive review of the advancement of cyberinfrastructure since individuals first sought to define the term. Yang et al. discuss both the utilized and untapped potential of the interconnectedness of the world in the 21st century, first from a general perspective and then from a spatial/geographic perspective.

As the authors discuss the existence of this network, I found that the desire to define the term came after the inception of the network itself, and from the desire for connectivity among the vast amount of information available. The main theme of this paper seems to be intelligent integration and cooperation between various computing platforms and scientific agencies, such as NASA, NEPTUNE, and ESRI.

I found Yang’s review of the GCI’s untapped environmental and geographic potential to be the most accessible and obvious component. Ideals such as heterogeneous integration between various institutions and data collectors multiply the analytical possibilities of scientific research. From a geographic perspective, I believe a healthy GCI is the next logical step in the evolution of GIScience, following the inception of the geographic information system. With the introduction of big data and open-source information, individual users and consumers can become more involved in a field that would otherwise be inaccessible if not for the existence of GCIs to simplify data-intensive endeavors, such as those discussed in section 5.9, Education.

Smitty_1


 

Geospatial Cyberinfrastructure

Monday, October 12th, 2015

The title of the article by Yang et al. (2010), “Geospatial Cyberinfrastructure: Past, present and future” should have given me a clue as to the extensive scope of the paper. By trying to cover almost every single aspect of GCIs, the authors provided an impressive review. Yet it was somewhat overwhelming as an introduction to the subject. I see the value in this paper as a reference text for more knowledgeable users. However, I would have liked to see more concrete, in-depth explanations of GCIs. I think my understanding of a GCI would also have been aided by an in-depth description of an unrelated CI and how exactly it was different from a GCI. Essentially, more tangible references would have helped my comprehension. The authors themselves link their work to GIScience by stating that this is a review of recent developments and that “similar to how GIS transformed the procedures for geospatial sciences, GCI provides significant improvements to how the sciences that need geospatial information will advance (265).”

While the article was very clear about the direction that GCI advancement should go in, the authors skimmed over barriers that might impede progress towards those end goals. The desire in particular for “a semantic (ontology) based framework that is sensitive to the scale, richness, character, and heterogeneity within and across disciplines (272)” is almost a chimera. I would argue that the ‘grand challenges’ briefly identified should be expanded into full papers themselves. How to integrate cyberinfrastructures across disciplines and shift them toward human-centered paradigms are challenges that, once solved, could provide substantial improvements to the field. Geospatial cyberinfrastructure development seems to be at a crucial turning point. If all contributors could individually maintain as thoughtful a vision of the GCI framework for the future as Yang et al. do, while resolving current discrepancies, then these far-reaching goals might become attainable.

-Vdev

Putting the ‘soul’ in GIS (geospatial cyberinfrastructures)

Monday, October 12th, 2015

Throughout the course, we have discussed various ways in which GIS can be manipulated for unethical ends. For instance, we have asked: to what extent does online advertising that discriminates based on assumed demographic characteristics exclude marginalized populations? To what extent can business practices contingent on GIS data (such as Uber’s “surge pricing”) be considered appropriate? And how does military involvement in the development of GIS operations implicate the field?

In their article “Geospatial Cyberinfrastructure: Past, present and future”, Yang et al. expound upon various goals for making GCI more inclusive, democratic, and multi-disciplinary. For instance, we are promised that GCI will help “to advance citizen-based sciences to reflect the fact that cyberspace is open to the public and citizen participation will be essential” (264) and will provide a standardized way for a multitude of actors, including “government agencies, non-government organizations, industries, academia, and the public” (264), to manipulate geospatial data. Yang et al. argue persuasively that the complexity and interdisciplinary scope of contemporary problems, such as developing “strategies to reduce energy consumption and stabilize atmospheric emissions so that global temperature will not increase… [and choosing] a housing site that minimizes the risks of forest fire, flooding and other… hazards” (267), demand a coordinated approach, and that GCI, with its enabling technologies such as web computing, open-source software, and interoperable platforms, is able to provide a coordinated platform for this problem-solving.

But in this push to make GIS simultaneously more democratic and legible to a variety of actors, how will GIS remain an ethical science? Already, in the “closed” world of GIS where meaningful operations require access to knowledge and resources, energy companies have assembled legions of capable GIS technicians to explore for extractable resources, and companies have established marketing departments engaged in ethically dubious GIS practices. So what does the world look like once the barriers of cost and knowledge to GIS use are removed or, at least, lowered? Do the ‘good guys’ win their case more often because they now have access to a multitude of data once available only behind a walled fortress of GIS elites? Or does the ease of access allow the data’s use to go completely unmonitored? In other words, if we continue to hold that GIS is a science, how can the field maintain a ‘soul’ (its own Hippocratic Oath, if you will) and a reasonable set of ethical best practices? And how does it remain a science when its increasing scope and level of interoperability will have many academics and non-academics using it primarily as a tool?

-CRAZY15

Wang (2015) CyberGIS: Initially Skeptical, Now Converted

Monday, October 12th, 2015

In his 2015 paper, Shaowen Wang outlines the current state of CyberGIS as a growing ‘interdisciplinary field’ that hopes to enable widespread cooperation on geospatial research by creating a framework which integrates all sorts of data-processing techniques from a number of ‘research domains’ and allows for real-time, high-volume, multi-user work by taking advantage of modern-day networking technologies such as cloud computing, multi-core processing on an unprecedented scale, and remote work.

At first, CyberGIS felt like a catch-all umbrella term with little purpose to me, a buzzword that packaged old concepts in a new way. Following Wang’s article, however, I am convinced of the relevance of CyberGIS and the exciting possibilities it offers, particularly with regard to scalability and the democratization of computer processing power. Our in-class discussion of the paradigms which narrow our understanding of GIScience (e.g. attention to ‘maps’ over all other forms of data presentation) informed my reflection on Wang’s review of the existing status quo in research: ‘sequential computing’, ‘monolithic architecture’, and other concepts which we take for granted when we engage with online work.

CyberGIS has the potential to become an unprecedented force for radical change in academic research. The notion that a researcher defaults to working alone from a modest workstation, carefully guarding the fruit of their research and selectively collaborating through individualized in-person or online communication, could soon be overhauled. The barriers to research inherent in limited access to powerful computers could potentially be broken down by a combination of cloud computing, easier collaboration and delegation of tasks, and real-time remote access to more computers and better facilities. This has enormous implications for the democratization of GIScience, which go hand in hand with the tenets of the open-source movement. Together, CyberGIS and open-source approaches to software development could certainly change the nature of resource accessibility in academia, providing opportunities for better collaboration and less elitism in GIScience. In my opinion, GIScience continues to suffer from high barriers to entry on both a physical and intellectual front, and it is high time for a fresh approach. CyberGIS might just be it.

-KH

GCI: Shaped By and Shaping Society

Monday, October 12th, 2015

Yang et al.’s article about the history, frameworks, supporting technologies, functions, domains and users, and future directions of GCI (geospatial cyberinfrastructure) is a dense read which attempts to cover all the bases of GCI. The article made me think about some of the Critical GIS articles I have been reading for my literature review. For example, Sheppard’s 1995 article “GIS and Society: Toward a Research Agenda” addresses the way that society influences technology as much as technology influences society; the GIS we know has been shaped by a post-war society focused on maximizing efficiency (Sheppard 8). Yang focuses on the possible impacts of GCI in different domains and in society, but doesn’t directly discuss how GCI is shaped by society. However, this does come through in the article: for example, Yang writes about how climate change poses a problem for humanity and will require high-quality geospatial data in vast quantities in order to capture and interpret knowledge. In the same way that GIScience was shaped by the needs of both wartime and post-war societies, perhaps GCI will be shaped by the needs of a society facing a global climate problem. Yang describes a need for a new sociology of knowledge, based on how science has been transformed and shifted to online media.

Yang lists several areas for future strategies in GCI; the one which stands out to me is social heterogeneity and complexity. This complements Yang’s discussion of a diverse community and end-users in fields ranging from education to environmental sciences. There is a possibility for the field of GCI to develop more organically, to be shaped and improved in response to the diverse needs of the end users in the community.

~denasaur

CyberGIS

Monday, October 12th, 2015

In the article, CyberGIS, Shaowen Wang discusses building the bridge between the digital and geospatial worlds. The path to making the instruments and formats to handle big and interactive data provides many challenges for cyberGIS. Cyberinfrastructure, in order to best handle high volumes of relevant geospatial data, ought to be the product of a collaborative and consensus-driven process. These kinds of processes benefit the cyberGIS movement best because they monitor the requirements of users and adapt to the changing nature of data formats and technologies. These systems ought to employ backward compatibility and maximize interoperability through standardization and increasingly sophisticated APIs and software applications.

It is also important to comment on whether these platforms and middleware will be open source and maximize the accessibility of the data for public use. Cyberinfrastructure that values openness may have the capacity to inform civil society and uphold the values of a participatory democracy. Accessible cyberinfrastructures are vital drivers of the knowledge economy. In addition, changes arising from the emergence of the Geospatial and Semantic Web will continue to highlight the value of advancing research in cyberGIS.

Hopefully, the future of cyberGIS will be able to enlighten us about the complexity and globally pervasive environment of the digital world. Advancements in cyberGIS will prove very beneficial to society, because we are becoming increasingly reliant on location-based services. Platforms such as Twitter and the emergence of the Internet of Things will only continue to grow and legitimize cyberGIS as a subfield within GIScience.

-GeobloggerRB

Geospatial Cyberinfrastructures: Past, present and future

Monday, October 12th, 2015

This past week, I’ve been reading many articles in preparation for the symposium lecture I will be giving to the class. Yang et al.’s Geospatial Cyberinfrastructure: Past, present and future proved to be an informative read, as my knowledge of geospatial cyberinfrastructures (GCIs) extended only as far as current end-user products, such as ArcGIS Online and Web browsers. It also complemented my readings on complexity, as GCIs are very complex systems.

The article was much more than just a literature review of past and present GCIs, however. It posed important questions regarding the ethical implications and the epistemological and ontological challenges of developing GCIs for governmental, non-governmental, academic, and public use (272).

Following the in-class discussion that we had last week about ethical and epistemological challenges of integrating indigenous groups’ traditional ecological knowledge into GIS (as a system of knowledge transfer and as a science), I found myself asking many of the same questions while reading this article.

While I applaud the authors for pushing for an inclusive and open CI, I wonder how they will reconcile conflicts over country boundaries, land-use classification, and the naming of cities and geographic features, for example. Who will have the final say as to what is fact? Perhaps we should create multiple datasets for the same region, so as not to promote reductionism? Who will ‘curate the data’, and will they be unbiased in their curation?

While many questions are left unanswered, I think that the article did a great job at presenting GCIs to academics and non-academics alike, which I found has been lacking in the articles that we’ve read thus far. It is important to remember that public and private funding is what will allow GCIs to be further developed and made widely accessible. Until proponents of GCIs fully grasp and account for the special interests of the GCI stakeholders and curate the code underlying the GCI functions (and not just the raw data itself), GCIs will have to be used with caution and with both eyes wide open.

-ClaireM

Wang 2015

Sunday, October 11th, 2015

“Scaling up” appears to be the key phrase in this article. The volume of data that we are amassing has gotten to the scale where we need infrastructure that streamlines massive processes and transfers of data between different fields. The tone of the article is extremely enthusiastic, and certainly the possibilities are compelling. Massive-scale agent-based modeling may get to the point where each of us has our “own” agent that is intimately programmed to us; disease control is a situation where this kind of thing could be very practical. However, the article offers no insight on the ethical level, and I think this is a serious omission. It is troubling that it is now possible to perform sophisticated analyses on tens of millions of tweets on Twitter in a few seconds. The application for emergency response is no doubt useful, but powerful forces in our society seem to be gaining more ability to sort through all the noise of everyone’s data, and I think this should be discussed at least briefly. Perhaps such a discussion would make the NSF less eager to grant millions of dollars to the author’s institution, as it has been doing. On a different note, I like the idea of these infrastructures as an ecosystem, as they are called in the paper. Perhaps the organisms that inhabit this ecosystem will be the agents that are intimately programmed to us, so that we’ll all have holograms in a parallel universe. It reminds me a bit of Ray Kurzweil’s idea of the Singularity, where we all upload our consciousness onto the cloud. The technology described in the article is still a long way from that, of course.

 

-Yojo

An Ecology of Technology

Monday, October 5th, 2015

Claudio Aporta and Eric Higgs (2005) present the integration of GPS into Inuit life as an example of a greater pattern of technological change reshaping human engagement. I was taught by George Wenzel about the introduction of new technologies into Inuit life and how it affected social and cultural practices. However, this article expands my understanding of technology as a factor that limits our grasp of the original methods behind the devices. While the Inuit example is useful because it concerns a more spatially and culturally isolated population, the ideas from this text can be extended to the wider world.

This piece talks about the loss of knowledge, social engagement, and connection with the local environment. We should also ask: what do we gain from making our lives easier and limiting interactions with the environment? Do we gain a greater understanding of different information? Do we gain other forms of meaningful interactions? Or is the loss irreparable?

Let’s look at the evolution of human beings: is this just a result of the process, whereby as humans evolve they evolve out of their environment? The evolution of technology is referenced many times, but I wish that this piece had gone in an even more philosophical direction and asked questions about human evolution as well. I think that now, 10 years after this article was written, we can assume that technology has started to merge with permanent human behavioural changes. The authors allude to this by stating: “it is not unrealistic to suppose that [GPS technology] will at some point become so integrated into a larger ecology of technologies that its presence will be hardly noticed” (748). Yet is this simply the price of evolution? Do we ever look back on a previous period of change and even recognize the loss, or, in our further evolved state, just conclude it was inevitable?

I think that in this case GIScience provides hope. I see it as a way to understand the methods behind the technology or to use technology to create new, and potentially meaningful, understandings of the world around us. As the authors concluded, finding meaning in a life full of technology might be more difficult, but it is still possible.

-Vdev

Aporta (2005) GIS, Wayfinding, and the Device Paradigm

Monday, October 5th, 2015

The article by Aporta and Higgs examines the shift in Inuit culture from traditional means of wayfinding to GPS-based navigation. In an article about the shift away from traditional means of wayfinding, I was worried that the authors would overlook the fact that Inuit have been open to many technological developments, such as the snowmobile or the rifle. It was therefore good that they qualified their argument by first giving a historical overview of Inuit adoption of technology and its incorporation into their culture. The article then looks at what the GPS provides, all the obvious advantages, including safety, efficiency, and simplicity, and its disadvantage, which is a disengagement from the environment. What the article fails to answer is how important this disengagement is to the Inuit experience of the environment. The article mentions that the allure of technology in reducing labour has usually resulted in more negatives. Even if I agree with the authors’ findings, I’m not sure what the point is other than lamenting a lost era. As they write earlier, Inuit have always been quick to adopt new technologies. Their economic structures have adapted to resettlement in towns, their hunting techniques have adapted to rifles and snowmobiles, their forms of protest have adapted to the internet, and now their wayfinding will change with the adoption of GPS.

-Anontarian

The Meaning of Life cannot be found by Global Positioning Systems: Aporta and Higgs’s Satellite Culture

Monday, October 5th, 2015

The authors sought to shed light on technology-induced societal changes taking place all over the world, focusing their attention on the Inuit hunters of Igloolik, Nunavut, to illustrate the challenges and successes that the introduction of GPS units within the community over the last decade has brought to traditional navigational practices. The authors ultimately attempt to extrapolate from the situation in Igloolik to society as a whole, with regard to our argued “disengagement with nature” as a direct result of the increased integration of technology (or, as the authors state, “machinery”) within the fabric of society.

Palmer and Rundstrom, geographers, dutifully responded to Aporta and Higgs’ article, reminding the authors that the study of technology, geography, society, and their interactions is not a new concept: GIScience has already been working on these issues for over a decade, and important nuances tie them all together, nuances that the authors fail to recognize.

What is evident from this piece is that the authors view GIS as a tool (not a science). They suggest that technology is contaminating “authentic” engagements with our surroundings, voicing “worry” and “concern about the effects of GPS technology”, as they claim it “takes the experience [of fully relating to the activity we perform] away” (745). This is a grand oversimplification, as there are many degrees to which society can and does interact with technology, either passively or actively.

In my experience, the use of a GPS has given me more confidence when hiking in unfamiliar territory, and allowed me to successfully navigate to otherwise hidden natural wonders, thus increasing my interaction with my surroundings in a positive way.

I posit that it is the lack of institutional programs teaching traditional Inuit navigation systems that is to blame for the increasing reliance on GPS devices by the younger generations. GPS units are not as easy to learn to use as the authors suggest: it can take months, even years, to understand all the underlying geospatial concepts and to work with the technology in harsh environments. It is easy to learn to push buttons in a few days, yes, but to master its use, to the level that you would have to master the concepts underlying traditional navigation systems for it to be a “completely reliable” tool, would require, I argue, just as long.

The last line of the article truly highlights its lack of scientific integrity:

“However, we believe that this fundamental premise is right: if life is lived through devices, finding meaning (personal, social, and environmental) becomes more difficult and engaging with our social and physical surroundings becomes less obvious and appropriate” (746).

Nowhere in the article do the hunters of Igloolik suggest a loss of fundamental identity; all they suggest is that their society is evolving, as do all societies; and that, yes, technology is fallible, but nonetheless important, and, dare I suggest, welcome.

-ClaireM

Satellite Culture (Indigenous GIS)

Monday, October 5th, 2015

Aporta and Higgs use the example of GPS integration into Inuit culture to explore the relationship between humans and modern technology. The introduction of GPS systems into a society that had previously depended on the persistence of traditional wayfinding knowledge (incorporating wind patterns, snowdrift patterns, astronomical observation, animal movement patterns, and other natural phenomena) presented the researchers with a case where a single technology promised to “deeply modify and cause disengagement from a well-established approach […] to the environment” on which the Inuit so closely depend.

The authors invoke Albert Borgmann’s theory of technology, in particular his “device paradigm,” which holds that contemporary technologies (‘devices’) mediate our engagement with our surroundings (and arguably reality itself) by reducing the amount of complex interaction required for their use. The GPS is therefore the “perfect Borgmannian device,” according to Aporta and Higgs, in that it removes the need to engage with local conditions, is easy to use, and provides instantaneous results.

The authors reach a reasonable conclusion: that the introduction of new technologies ought to be analysed within ecological, relational frameworks that take into account the effects they may wreak on society. My main concern is that the reasons arguing in favour of a cautious or even reactionary approach to the introduction of new technologies rest fundamentally on existential reasoning, while ‘enlightenment’ positivism ultimately argues a materialist case. The material and existential consequences of the enlightenment are innumerable, arguably ranging from brutal death machines and concentration camps to the significant extension of the human lifespan, reductions in physical pain, declines in infant mortality, etc. While the introduction of new technologies has helped to lead humanity down the darkest paths in history, I believe that reactions like Borgmann’s are indeed prelapsarian or quixotic, and tend to elevate the importance of abstract types of thought and engagement above the hard realities of material life: is there enough food? Do we have adequate leisure time? And so on.

To return to the question of GPS and the Inuit, there is a telling line where it is postulated that Borgmann ‘would counsel that GPS technology is well deployed as an adjunct to Inuit navigation instead of as the central or dominant device for wayfinding.’ Ultimately, such counsel would amount to nagging based on very abstract notions of value, and would have little place in a harsh arctic environment. While I feel that critical engagement with new technology is essential, romantic associations with the past have little to contribute to the project of liberating people from material hardship. Rather, we should be thinking about how to address the changes technology makes to the distribution of power in society, and how to maximize its numerous beneficial capacities while managing its tendency to concentrate expertise and power in the hands of the few.

 

Indigenous Epistemology and Forced Assimilation

Monday, October 5th, 2015

What Rundstrom has done in this paper is highlight the large rift between the Euro-North American and Indigenous American schools of geographic thought. I find it quite obvious that these differences exist, but never would I have thought to compare the methods of spatial cognition used by natives such as the Inuit and Hopi to those used by proponents of GIS and GIScience. I am a product of colonialism and the Euro-American school of thought, and more recently the Euro-Canadian school of GIS, so to me it seems obvious that the only way of analyzing spatial phenomena is to treat nature as non-human objects to be taken out of context and subdivided into layers and databases.

Rundstrom compares this spatial-analyst way of thinking with Native American practices of spirituality and the passage of knowledge. An example of these two practices would be how the Hopi treat water as a spiritual entity, and the selective process through which Native Americans pass down geographic knowledge. At first, I found this ridiculous. Why compare an advanced technological system with the teachings of my high school history classes? I was well aware of Natives’ oneness with nature and their uncanny abilities to communicate with their surroundings, but how can their primitive technology compare to the advanced methods we use today and used at the time of this paper’s publication?

 

Smitty_1