Archive for the ‘506’ Category

Modeling Vague Places

Monday, October 19th, 2015

The article Modelling Vague Places with Knowledge from the Web (Jones et al., 2008) acknowledges that delimiting places is embedded in human processes. The paper’s discussion of people’s perception of the extent of places reminds me of my own topic, spatial cognition within the field of GIScience. For example, the authors assert that one way to map the extent of a “vague” place is to ask human subjects to draw its boundary. Acquired spatial knowledge of landmarks, road signs, nodes, and intersections informs how we define and draw these boundaries. In addition, the important role of language in applying Web queries reminds me of literature I have read on the relationship between spatial cognition and language. Specifically, the article reminds us that spatial relationships between objects are encoded both linguistically and visually. This topic also relates closely to Olivia’s topic of geospatial ontologies, and to the example Prof. Sieber gave us in class about trying to define what constitutes a mountain. Where do we draw the line? Who gets to decide what makes a mountain? What empirical approaches, such as human interviews, can we apply to determine what defines a geospatial entity like a mountain?

In addition, I liked this article because it reveals the science behind GIS applications. More specifically, the article examines the science behind density surface modeling, alternative web harvesting techniques, and new methods applied to geographical Web search engines. I found the discussion about web harvesting relevant to my experience of applying web-scraping tools to geospatial data in GEOG 407. Learning how these tools can be applied to observe and represent vague places is fascinating, and a dimension of web harvesting I had never considered before reading this article.
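To make the density-surface idea concrete, here is a minimal sketch of how a vague place’s extent could be estimated from harvested point data. This is my own illustration in Python, not the authors’ implementation: the coordinates are invented placeholders, and scipy’s general-purpose kernel density estimator stands in for whatever surface model Jones et al. actually use.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical harvested coordinates (e.g. towns mentioned on the Web
# alongside a vague place name such as "the Midlands")
xs = np.array([-1.9, -1.5, -1.2, -1.8, -1.4])
ys = np.array([52.5, 52.6, 52.8, 52.4, 52.9])

# Fit a 2-D kernel density estimate over the harvested points
kde = gaussian_kde(np.vstack([xs, ys]))

# Evaluate the density on a regular grid to produce a continuous surface
gx, gy = np.mgrid[-2.2:-1.0:100j, 52.2:53.1:100j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Cells above a chosen density threshold approximate the place's extent
extent_mask = density > density.max() * 0.5
```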

In addition, this paper reveals the role the geospatial Web plays in increasing the magnitude and extent of geospatial data collection. I suspect that in the future, the geospatial Web will play an important part in conducting data-driven studies of problems and uncertainties within the field of GIScience.

-geobloggerRB

Modelling Vague Places – Jones et al.

Monday, October 19th, 2015

Through “Web-harvesting,” Jones et al.’s Modelling Vague Places (2008) introduces techniques to improve the modeling of vague places (1048). I was interested in how Jones et al. utilized “place names” from the Web to create their models because I am following a similar methodology for my own research. While researching for my own project on volunteered geographic information (VGI) and Twitter harvesting, I read an article by Elwood et al. (2013) called Prospects for VGI Research and the Emerging Fourth Paradigm that explains how people have a tendency to use place over space when they contribute geographic information through a public platform (i.e. social media or blogs). For example, a Twitter user may post a street name without geotagging the post; the only geographic information provided is thus a place attribute, not coordinates. This makes it more difficult to gather precise spatial information when crowd-sourcing data from the Web. My project’s methodology and Jones et al.’s article are similar in that both look at “semantic components” (Bordogna et al., 2014, p. 315), meaning both identify textual Web information to gather information on the “precise places that lie within the extent of the vague places” (1046). Additionally, Jones et al. “decrease[d] the level of noise” through filters, something I will also be doing while harvesting Tweets (1051). With comparable methodological approaches, I will certainly consider some of Jones et al.’s techniques while completing my own project.
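As a rough illustration of what such noise filtering might look like, here is a toy Python sketch of my own: it is neither Jones et al.’s pipeline nor my project’s actual code, and the field names, gazetteer, and noise rules are all assumptions made for the example.

```python
import re

# A toy gazetteer of known place names; a real project would use a much larger one
GAZETTEER = {"mile end", "plateau", "griffintown"}
# Patterns treated as noise: retweets and posts containing links
NOISE_PATTERNS = [re.compile(r"^rt\b", re.I), re.compile(r"http\S+", re.I)]

def is_noise(text):
    return any(p.search(text) for p in NOISE_PATTERNS)

def extract_places(tweets):
    """Yield (tweet, matched place names) for tweets that survive filtering."""
    for tweet in tweets:
        text = tweet["text"].lower()
        if is_noise(text):
            continue
        places = {name for name in GAZETTEER if name in text}
        if places:
            yield tweet, places

sample = [{"text": "Great coffee in Mile End today"},
          {"text": "RT check this out http://spam.example"}]
print([places for _, places in extract_places(sample)])  # -> [{'mile end'}]
```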

Similar to what we discussed last class, this article also highlights issues with ‘big data’: specifically, how can we sift through so much heterogeneous data and pull out the most relevant information in an efficient, time-saving way? Jones et al. introduce strategies to sift through the Web’s big data, but it would be interesting to see how these techniques have changed in the seven years since this article was published. CyberGIS could certainly improve the validity of gathering “published texts” off the Web by solving technological issues, such as improving the automated algorithms that affected the results of Jones et al.’s research (1048).

One final point: the digital divide was not mentioned in this article. Although Jones et al. focused their research only on the U.K., where a wealthier population has the means to access the Web, it is important to consider that people in poorer localities may not be contributing any information to the Web. This omits local people’s interpretations of their landscape/place, which would be considered “rich in geographical content” if they could contribute information to the Web (1051).

-MTM

Bordogna, G., Carrara, P., Criscuolo, L., Pepe, M., and Rampini, A. (2014). A Linguistic Decision Making Approach to Assess the Quality of Volunteer Geographic Information for Citizen Science. Information Sciences, 258, 312-327.

Elwood, S., Goodchild, M., and Sui, D. (2013). Prospects for VGI Research and the Emerging Fourth Paradigm. In D. Sui, S. Elwood, and M. Goodchild (Eds.), Crowdsourcing Geographic Knowledge: Volunteered Geographic Information (VGI) in Theory and Practice (pp. 361-376). Dordrecht: Springer.


Approaches to Uncertainty in Spatial Data

Sunday, October 18th, 2015

This text outlined many facets of uncertainty, and I found it very informative. An abundance of information is packed into a very short chapter, which I suppose speaks to the depth of uncertainty inherent in spatial data. What I enjoyed most about this read was its connection to my research topic: ontologies and semantics.


One of the key sources of uncertainty is how an object is defined; this is often a subjective matter and may be very hard to quantify. The focus of ontologies in general is to define a vocabulary in such a way that it is explicitly understood by both humans and computers. Prior to reading this chapter, had you asked me whether a well-constructed ontology would help combat data uncertainty, I would have been quick to respond: absolutely, yes. However, my position has changed. Of course, enough people should agree upon a well-constructed ontology that subjectivity is no longer problematic, but when dealing with a domain ontology, like a geospatial one, the community that gives the “ok” is in agreement on certain things; say, they share a similar epistemology. The purpose of ontologies is to facilitate interoperability between domains and world-wide data exchange, so these domain-specific definitions may not translate well into other areas of research. For example, using a land-use ontology to find data and then translating this into a study of land cover, or vice versa, may be problematic and cause a significant level of uncertainty. This leaves me questioning where adjustments are to be made. On one hand, there could be full disclosure of problems with uncertainty, and anything contentious could be addressed in the near-‘final product’. Or do we adjust fundamentals, like ontologies, to attempt to account for such uncertainty (though this may inhibit an ontology’s effectiveness at doing its job)? So David, maybe your seminar will clear this up for me, but how on earth do we begin to address uncertainty in all its forms?!
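As a toy illustration of the land-use/land-cover translation problem (my own sketch, nothing like it appears in the chapter), consider a crosswalk between two domain vocabularies in which some categories simply have no unambiguous equivalent; that gap is exactly where uncertainty creeps in.

```python
# Hypothetical land-use -> land-cover crosswalk; categories and mappings invented
LANDUSE_TO_LANDCOVER = {
    "residential": "built-up",
    "industrial": "built-up",   # two distinct uses collapse into one cover class
    "pasture": "grassland",
    "recreation": None,         # a park may be grass, trees, or paved: no clean match
}

def translate(landuse_class):
    cover = LANDUSE_TO_LANDCOVER.get(landuse_class)
    if cover is None:
        # The ambiguity has to be surfaced, not silently resolved
        raise ValueError(f"No unambiguous land-cover class for {landuse_class!r}")
    return cover

print(translate("pasture"))     # -> grassland
print(translate("recreation"))  # -> raises ValueError
```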


-BannerGrey


“Sure, Everything Looks Fine on the Map, But …”: Communicating Spatial Data Uncertainty to End-Users

Saturday, October 17th, 2015

In Chapter 3 of Fundamentals of Spatial Data Quality, the authors outline approaches to uncertainty in spatial data, with a focus on the subject as it pertains mainly to geographic information systems (GIS) and, more broadly, GIScience (Fisher et al., 2006). The main themes of uncertainty (ambiguity, vagueness, and error) are reviewed, and each theme’s challenges are listed.

While even introductory GIS users can begin to understand the importance of uncertainty relatively quickly, end-users of GIS products (maps, spatial analysis results, 3D visualizations of phenomena) may take the data at face value, as they typically only care about the final results and conclusions, whether for research, policy-making, or a navigational product to be sold to the general public. How do we ensure that uncertainty is captured not only within the quantitative analysis on the GIS-user side, but also in the visual interpretation of the end-user?

This relates straight back to last class’s conversation about the ethical implications of GIScience, and how to reconcile differences in cultural and historical epistemologies and ontologies. Creating an uncertainty map with the same spatial extent as the map produced may allow map users who are not familiar with GIS, and more broadly GIScience, to understand the probability that a certain region of the original map contains errors. As for ambiguity, perhaps multiple maps could be produced, although this would only be realistic with Web GIS, where users can select layers to visualize, and perhaps even change the underlying assumptions of the GIS to account for the personal aspirations of the intermediate or end-user (e.g. geo-political conflicts).

That being said, I look forward to next week’s class on Uncertainty to discuss this topic further, as well as the class where we will discuss Visualization in its various forms.

-ClaireM

GCI: Past, Present, Future

Monday, October 12th, 2015

In their article, Yang et al. (2010) explore the diverse and growing field of Geospatial Cyberinfrastructures (GCIs). Full disclosure: I had a very vague idea of what to expect when sitting down to read this paper. When I first saw the word “cyberinfrastructure” I envisioned the entire Internet in the form of a city, very reminiscent of the 90’s TV show ReBoot. I could not explicitly define what a CI was, let alone a GCI, and I’m still not sure I could give a meaningful definition, as this paper was a tad onerous. That said, I also did not expect to be able to connect a myriad of topics addressed in class (as well as my own project research) to such an unfamiliar word.


One recurring theme is that of ‘interoperability’, which I generally understand as the idea that whatever you are doing should be managed in a way that people from other research domains have the potential to use it (effectively expanding the amount of data available). This can be implemented in many ways, from data technologies to interfaces for exchange. In my own research I am reading up on semantic interoperability, mentioned briefly in this article, along with the geospatial semantic web. Yang et al. brought up a very important aspect of the geospatial semantic web that I have yet to give much thought: the problem of temporality. That is, semantics change with time, as do human understanding and knowledge, and GCIs must account for this. But how do they do it? I know that, from a geospatial ontological perspective, formal geospatial domain ontologies are only formed on an as-needed basis by a collection of specialists over a relatively short period of time. They are very meticulously constructed, and I can’t imagine an ‘update’ being applied to them semi-regularly. More general geospatial ontologies are constructed in such a way that they are interoperable across many domains; would these hold up to the test of time? What is the geospatial semantic web going to look like in 50 years? 100 years (assuming the internet still exists)? This has given me another subject to look into, out of curiosity as well as applicability, for when I begin to construct my own geospatial ontology.


-BannerGrey


CyberGIS – Wang

Monday, October 12th, 2015

With technological advancements expanding, more funding, and an upsurge in spatially aware data, Wang’s article CyberGIS (2015) discusses the emerging field of cyberGIS and its applications, which fall between computing and geographic information. After reading this article, it seems that cyberGIS is the toolmaker for GIScience, because the “interdisciplinary field” aims to develop improved “cyberinfrastructure” for GIScience applications (1). This parallels how GIS tools have toolmakers who improve their software. Like GIScience, cyberGIS encompasses multiple disciplines and gains its uniqueness through dealing with spatial data (i.e. geographic information). Because of this, I question its legitimacy: is it really an “interdisciplinary field,” or is it just a toolmaker that improves the efficiency of GIScience applications (maybe it is both)? Since cyberGIS enhances agent-based modeling and introduces efficient mechanisms for processing the vast amounts of spatial data produced through Web 2.0, is it really introducing new concepts or just improving existing ones?

For instance, Wang highlights how volunteered geographic information (VGI) is becoming a more important strategy for dealing with “emergency management,” because mobile technology such as smartphones provides locational information. CyberGIS has helped compose strategies, such as software, to collect large quantities of VGI in an “efficient and scalable manner” (6). Since there is much concern over VGI data accuracy and quality, cyberGIS could also produce software that efficiently filters through VGI and categorizes data as valid or invalid. This would allow researchers to avoid manually sifting through thousands or millions of VGI records (e.g. Twitter or Facebook posts). Because cyberGIS crosses many disciplines and works on improving/developing already existing fields such as VGI, I believe cyberGIS is more of a toolmaker that “plays important roles in seeking solutions to challenging and important geographic problems” (9).
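To give a flavour of the rule-based valid/invalid triage suggested above, here is a hypothetical Python sketch. The rules, field names, and thresholds are my own illustrative assumptions, not anything Wang describes.

```python
def classify_vgi(post):
    """Return 'valid', 'invalid', or 'review' for a crowd-sourced post."""
    if post.get("lat") is None or post.get("lon") is None:
        return "invalid"        # no usable location at all
    if not (-90 <= post["lat"] <= 90 and -180 <= post["lon"] <= 180):
        return "invalid"        # malformed coordinates
    if len(post.get("text", "")) < 10:
        return "review"         # too short to judge automatically
    return "valid"

posts = [{"lat": 45.5, "lon": -73.6, "text": "Flooding on Rue Sainte-Catherine"},
         {"lat": None, "lon": None, "text": "help"}]
print([classify_vgi(p) for p in posts])  # -> ['valid', 'invalid']
```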

-MTM


CyberGIS

Monday, October 12th, 2015

In the article CyberGIS, Shaowen Wang discusses building the bridge between the digital and geospatial worlds. The path to making the instruments and formats to handle big and interactive data presents many challenges for cyberGIS. Cyberinfrastructure, in order to best handle high volumes of relevant geospatial data, ought to be the product of a collaborative and consensus-driven process. These kinds of processes benefit the cyberGIS movement best because they monitor the requirements of users and adapt to the changing nature of data formats and technologies. These systems ought to employ backward compatibility and maximize interoperability through standardization and increasingly sophisticated APIs and software applications.

It is also important to comment on whether these platforms and middleware will be open source and will maximize the accessibility of the data for public use. Cyberinfrastructure that values openness may have the capacity to inform civil society and uphold the values of a participatory democracy. Accessible cyberinfrastructures are vital drivers of the knowledge economy. In addition, changes arising from the emergence of the Geospatial and Semantic Web will continue to highlight the value of advancing research in cyberGIS.

Hopefully, the future of cyberGIS will be able to enlighten us about the complexity and globally pervasive environment of the digital world. Advancements in cyberGIS will prove very beneficial to society because we are becoming increasingly reliant on location-based services. Platforms such as Twitter and the emergence of the Internet of Things will only continue to grow and legitimize cyberGIS as a subfield within GIScience.

-GeobloggerRB

Geospatial Cyberinfrastructures: Past, present and future

Monday, October 12th, 2015

This past week, I’ve been reading many articles in preparation for the symposium lecture I will be giving to the class. Yang et al.’s Geospatial Cyberinfrastructure: Past, present and future proved to be an informative read, as my knowledge of geospatial cyberinfrastructures (GCIs) extended only as far as current end-user products, such as ArcGIS Online and Web browsers. It also complemented my readings on complexity, as GCIs are very complex systems.

The article was much more than just a literature review of past and present GCIs, however. It posed important questions regarding the ethical implications and the epistemological and ontological challenges of developing GCIs for governmental, non-governmental, academic, and public use (272).

Following the in-class discussion that we had last week about ethical and epistemological challenges of integrating indigenous groups’ traditional ecological knowledge into GIS (as a system of knowledge transfer and as a science), I found myself asking many of the same questions while reading this article.

While I applaud the authors for pushing for an inclusive and open CI, I wonder how they will reconcile conflicts with regard to country boundaries, land use classification, the naming of cities and geographic features, for example? Who will have the final say as to what is fact? Perhaps we should create multiple datasets for the same region, so as to not promote reductionism? Who will ‘curate the data’, and will they be unbiased in their curation?

While many questions are left unanswered, I think that the article did a great job at presenting GCIs to academics and non-academics alike, which I found has been lacking in the articles that we’ve read thus far. It is important to remember that public and private funding is what will allow GCIs to be further developed and made widely accessible. Until proponents of GCIs fully grasp and account for the special interests of the GCI stakeholders and curate the code underlying the GCI functions (and not just the raw data itself), GCIs will have to be used with caution and with both eyes wide open.

-ClaireM

The Meaning of Life cannot be found by Global Positioning Systems: Aporta and Higgs’s Satellite Culture

Monday, October 5th, 2015

The authors sought to shed light on technology-induced societal changes taking place all over the world, focusing their attention on the Inuit hunters of Igloolik, Nunavut, to illustrate the challenges and successes that the introduction of GPS units within the community over the last decade has had on traditional navigational practices. The authors ultimately attempt to extrapolate the situation in Igloolik to society as a whole with regard to our argued “disengagement with nature” as a direct result of the increased integration of technology (or, as the authors state, “machinery”) within the fabric of society.

Palmer and Rundstrom, geographers, dutifully responded to Aporta and Higgs’ article, reminding the authors that the study of technology, geography, society, and their interactions is not a new concept: GIScience has been working on these issues for over a decade, and important nuances tie them all together, nuances that the authors fail to recognize.

What is evident from this piece is that the authors view GIS as a tool (not a science). They suggest that technology is contaminating “authentic” engagements with our surroundings, voicing “worry” and “concern about the effects of GPS technology”, as they claim it “takes the experience [of fully relating to the activity we perform] away” (745). This is a grand oversimplification, as there are many degrees to which society can and does interact with technology, either passively or actively.

In my experience, the use of a GPS has given me more confidence when hiking in unfamiliar territory, and allowed me to successfully navigate to otherwise hidden natural wonders, thus increasing my interaction with my surroundings in a positive way.

I posit that it is the lack of institutional programs teaching traditional Inuit navigation systems that is to blame for the increasing reliance on GPS devices by the younger generations. Contrary to what the authors suggest, GPS units are not easy to learn to use: it can take months, even years, to understand all the underlying geospatial concepts and how to work with the technology in harsh environments. It is easy to learn to push buttons in a few days, yes, but to master its use, to the level that you would have to master the concepts underlying traditional navigation systems for it to be a “completely reliable” tool, would require, I argue, just as long.

The last line of the article truly highlights its lack of scientific integrity:

“However, we believe that this fundamental premise is right: if life is lived through devices, finding meaning (personal, social, and environmental) becomes more difficult and engaging with our social and physical surroundings becomes less obvious and appropriate” (746).

Nowhere in the article do the hunters of Igloolik suggest a loss of fundamental identity; all they suggest is that their society is evolving, as do all societies; and that, yes, technology is fallible, but nonetheless important, and, dare I suggest, welcome.

-ClaireM

GIS, Wayfinding, and the Device Paradigm

Monday, October 5th, 2015

Aporta and Higgs (2003) present a case study of Inuit hunters of the Igloolik region, examining the effects of the introduction of GPS technology to their traditional understanding and navigation of the landscape, referred to as wayfinding. They introduce Albert Borgmann’s “Device Paradigm” in consideration of the effects new technology has on old practices. The paper’s purpose is to bring more attention to considering the implications technology has on cultural perceptions of geographic space. Since it’s undeniable that this is an issue relevant to GIScience, I would like to talk more about my own thoughts regarding the philosophy of technology, specifically the device paradigm.

The device paradigm refers to how technology is perceived and consumed. It suggests that as technology becomes more advanced, commodities and services become more available, while the processes and meanings behind the technology become less understood. In the case of GPS, as examined by the authors, the Inuit people’s tradition of wayfinding has become less necessary to learn and pass down because GPS simplifies navigation across the arctic landscape and increases its accuracy. Many anthropologists take this to be a bad thing: replacing traditional methods with new technology makes it harder to find meaning in what someone is actually doing, because they are interacting with the technology instead of the environment they are using it in.

This is obviously true, but I feel like for the sake of technological advancement and the progress of the human race, we have to be willing to forego the meaning behind certain aspects of life. This is because learning takes time, and it is only possible to learn so much within one’s own lifespan. Technology offers shortcuts that allow us to reach our destination faster than if we had to learn and memorize every single step along the way. These sorts of shortcuts are everywhere in GIS, from data management and spatial analysis tools to the computers we use to run the software that includes them. But then again, we need people around who know what to do when these devices fail us.

-yee

Only one more post on Rundstrom

Monday, October 5th, 2015

Just a reminder that only one more of you can post on Rundstrom before you switch over to the other article.

First to post gets Rundstrom.

Ethical Implications of GIScience

Monday, October 5th, 2015

In his article, GIS, Indigenous Peoples, and Epistemological Diversity, Rundstrom expounds upon the power hierarchies that contextualize doing GIS. He boldly asserts that doing GIS as a “technoscience” reinforces and perpetuates narratives of dominance that disenfranchise indigenous ways of thinking. Therefore, if GIS adheres to Western epistemology, is it really appropriate to apply these systematic ways of knowing to indigenous ways of being? Any process that involves structuring and classifying the world around us is inherently exclusive in nature. How, then, can we ethically claim that our way of mapping the world could encompass the entirety of non-Western ontologies? I’m not sure these questions will ever be answered fully, but as GIS users we must entertain the possibility that how we do GIS may have extreme ethical implications. For example, if certain geographic knowledge is privileged in indigenous societies, do we have the right to map it for the sake of the long-term preservation of knowledge and culture?

In addition, generalizing indigenous communities suggests that there is an inherent nature to indigenous epistemologies. However, indigenous communities in Northern Quebec have very different ways of perceiving and managing their environment compared to indigenous communities in Central America. Conflating indigenous epistemologies does a disservice to the complexity and diversity of how space and processes of thought engage with one another.

I hope that more community participation in GIS and the geospatial web may tackle some of these problems. Such work will be crucial for understanding the ethical and practical implementations of doing GIS in the future.

-GeoBloggerRB

TEK in GIS?

Monday, October 5th, 2015

One of the central themes in Rundstrom’s text on GIS, Indigenous Peoples, and Epistemological Diversity is the idea that indigenous epistemologies and current GIS technologies are inherently incompatible. He cites the fundamental difference between the Western world’s definition and understanding of energy and matter and that of indigenous peoples, as well as differences in conceptions of temporal change, as two of the reasons for this. I immediately connected this to my research topic for this course, geospatial ontologies. Epistemologies are concerned with how one procures knowledge, while ontologies are more concerned with defining the nature of being. Both work to inform us of how we’ve come to understand what we do. More specifically, geospatial ontologies aid us in defining and reasoning about real-world spatial phenomena.

Though I agree with Rundstrom’s point that indigenous people’s geographic knowledge should be separated from GIS for ethical purposes (and I am not advocating the disenfranchising of indigenous communities by any means), I disagree with the idea that they are fundamentally incompatible. By incorporating indigenous knowledge into geospatial ontologies (perhaps creating indigenous-specific geospatial ontologies), I think it is possible to combine the two. This will not be achieved without difficulty, since our current GIS framework is centered on the Western world view, as specified by Rundstrom. However, I think that by acknowledging this we have the potential to develop a new framework in which a new understanding of environment may be incorporated.

Rundstrom very well may argue that my position is part of the problem and that I am a symptom of the insensitivity of the Western world. I would argue that since 1995 we have made advances in GIS, GIScience, and the world’s valuation of Traditional Ecological Knowledge (TEK). For the sake of both parties, who ought to find common ground and work together to protect the environment, these two world views must be integrated, and I think GIS is the most feasible platform to achieve this.


-BannerGrey


GIS, Indigenous Peoples, and Epistemological Diversity

Monday, October 5th, 2015

Rundstrom’s article “GIS, Indigenous Peoples, and Epistemological Diversity” (1995) discusses how indigenous cultures perceive “geographical knowledge” differently compared to North American and European Westerners (45). Even though there are different cultural perspectives on spatial knowledge, there has been a tendency for GIS to be ethnocentric, focusing on Westernized epistemology and ignoring the cross-cultural variations in understanding landscapes. As someone who studies anthropology and geography, I agree with Rundstrom’s proposition that the “GIS research agenda [should] include cross-cultural studies of knowledge transformations and culture change;” however, since Rundstrom’s article, there have been technological advancements and offspring disciplines, such as Qualitative GIS and GIScience, that consider different perspectives (45). Before I discuss how GIScience has contributed, I do want to make the point that even though GIS is known for being “eurocentric,” GIS researchers wanted to develop a systematic procedure for data collection and modeling (55). Now, with improvements in technology, we can veer away from authoritative systematic analyses and allow everyday citizens, including indigenous people, to contribute their own geographic information. This is what volunteered geographic information (VGI) is, and it is what I am researching for my final project.

Within GIScience, VGI accepts amateur volunteers’ geographic information; this means indigenous peoples can use the internet to geotag a specific location that pertains to their culture and describe that location’s significance to them. This can be done in Google Maps or Yelp, where the geotagged area and a small description can provide a more enriched epistemology that can be collected and analyzed by an outside party. Nevertheless, it is not that simple: collecting data from amateur internet users raises questions of accuracy and of how to properly validate the information as correct; there are still debates on how to define which volunteered knowledge is valid. In some cases, websites have volunteer monitors who check the accuracy of what people write; thus, some reviewers may not objectively agree with an indigenous person’s subjective description of a certain place.

Similar to what we discussed last class, geospatial agent-based models may also be able to show variations in geographical knowledge as technology and technical methodologies improve; maybe an agent could receive multiple attributes that enhance how it perceives its landscape. This could allow for a more diverse epistemology. Therefore, since Rundstrom’s article, there have been improvements in GIS to account for “epistemological diversity,” but there is still room to grow (45).

-MTM


Simulated Movement, an Emerging Field?

Monday, September 28th, 2015

The article by Haklay et al. from 2001 is an interesting look into simulated pedestrian movement in a closed-system urban downtown setting. Named STREETS, this module-based model shows just how complex real human movement is by detailing the ways our unconscious decision-making must be broken down by a computer in order to simply approximate pedestrian paths.

After reading about the various modules, my thoughts were immediately distracted by trying to think up further additions to make the model as realistic as possible. A more complex model might include the presence of cars as another variable that would affect how pedestrians are able to cross roads; for example, their path might change if the time spent waiting for cars to go by allows them to focus on an alternate target destination that they originally ignored. In relation to my own project on hydrological models, the simplest Mover module could be applied to predicting overflow in river systems. If excess water flow units were given values like the individual agents in the article, and the water filled certain pixels like pedestrians fill sidewalk cells, then once a pixel was “full”, the excess water would have to move into an adjacent pixel, which could change overflow paths.
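Here is a minimal Python sketch of that overflow idea, under my own assumptions (it is the blogger’s analogy made literal, not any published hydrological model): water units sit on a raster grid, and once a cell exceeds its capacity the excess spills into a neighboring cell, much like STREETS agents filling sidewalk cells.

```python
import numpy as np

def spill_step(water, capacity):
    """One iteration: move each cell's excess water to its lowest 4-neighbor."""
    out = water.copy()
    rows, cols = water.shape
    for r in range(rows):
        for c in range(cols):
            excess = water[r, c] - capacity
            if excess <= 0:
                continue
            # 4-connected neighbors that fall inside the grid
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            target = min(nbrs, key=lambda rc: water[rc])  # least-full neighbor
            out[r, c] -= excess
            out[target] += excess
    return out

grid = np.array([[0.0, 0.0, 0.0],
                 [0.0, 2.5, 0.0],
                 [0.0, 0.0, 0.0]])
print(spill_step(grid, capacity=1.0))  # centre cell sheds 1.5 into a neighbor
```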

As the modules became more specific in their control of agent movement, the final module, Planner, almost seemed like artificial intelligence. It was not until the authors directly addressed the difference between deliberate simulation and emergent, ‘self-organizing’ movement that I realized model simulation can become so much closer to “real life” than currently exists. Overall, this piece was engaging, with easy-to-follow technical descriptions of the modules combined with just enough theory to relate the topic to GIScience and future implications.

– Vdev

Role of Geospatial Agents in GIScience

Monday, September 28th, 2015

In their article, Geospatial Agents, Agents Everywhere…, Sengupta and Sieber (2007) demonstrate how the paradigm of agents in AI both serves and benefits from research in GIScience. I found it interesting that Artificial Life Geospatial Agents (ALGAs) are relevant to our previous discussion about the importance of spatializing social networks. ALGAs are relevant to spatial social networks in that they model “rational-decision making behavior as impacted by a social network” (486-487). Therefore, applying our knowledge of spatial social networks (as opposed to just social networks) to ALGA development could perhaps help us better understand and model social interactions and information passing between individual agents.

In addition, the interoperability of Software Geospatial Agents (SGAs) across software and hardware platforms informs us about ontology, representation, and semantics in GIScience. SGAs might therefore unlock answers to key questions surrounding geospatial ontologies and semantics, because SGAs have the key responsibility of determining which standards to interpret semantically. These standards may help with the important GIScience tasks of expressing topology and geospatial data in GIS. Therefore, the fact that SGAs are “geospatial” in nature will shape how we “do GIS” as geographers.

I am interested to know the extent to which ALGAs are able to incorporate temporal dimensions within their frame of development. I suspect that adopting the added dimension of time into these platforms and models will be a crucial challenge for ALGA research in GIScience.

-GeoBloggerRB

Autonomy?

Monday, September 28th, 2015

Geospatial Agents, Agents Everywhere by Sengupta and Sieber (2007) qualifies the distinction of geospatial agents in Artificial Intelligence (AI) research, as well as distinguishing between Artificial Life Geospatial Agents (ALGAs) and Software Geospatial Agents (SGAs). Since I do not have much experience with ALGAs, I began thinking about SGAs, and as I was reading I kept going back to various instances during my time at McGill when I had any exposure to them; one time stands out in particular. In GEOG 307 we had a reading on location-allocation modeling and shortest path analysis called Flaming to the scene: Routing and locating to get there faster by Figueroa and Kartusch (2000), in which the Regina fire department conducted a Fire Station Location Study and built a program to identify the best routes to achieve the fastest response times. Sengupta and Sieber (2007) are concerned with highlighting the legitimacy of these two AI traditions and the importance of geospatial agents’ ability to work specifically with geospatial data, as well as its relevance to GIScience. They mention its applicability to social science problems, and I immediately thought of the Fire Station Location Study as an example of SGAs used to solve a real-world concern. However, my certainty of this as an SGA is not as strong once I consider the problem of autonomy. The researchers were capable of letting the simulation run to determine an output, but they had predetermined all of the necessary inputs from municipal data beforehand. The authors do address the problem of autonomy for ALGAs and SGAs in AI research, but they really only distinguish between strong and weak levels of autonomy. It seems to me that defining a level of autonomy is extremely subjective, and though it is a necessary qualifier for a program to be considered in the realm of AI, it may not be the best measure. Perhaps the field of AI research would benefit from further elaboration on what is truly autonomous?
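For readers unfamiliar with the routing side of such a study, here is a minimal sketch of the kind of shortest-path computation it rests on. It is my own toy example, not the Regina program: the street graph, node names, and travel times are invented, and the networkx library is assumed.

```python
import networkx as nx

G = nx.Graph()
# Edges: (intersection, intersection, travel time in minutes)
G.add_weighted_edges_from([
    ("station", "A", 2.0), ("A", "B", 1.5),
    ("station", "C", 4.0), ("C", "B", 0.5),
    ("B", "fire", 1.0),
])

# Fastest route and its total response time
route = nx.shortest_path(G, "station", "fire", weight="weight")
minutes = nx.shortest_path_length(G, "station", "fire", weight="weight")
print(route, minutes)  # -> ['station', 'A', 'B', 'fire'] 4.5
```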


-BannerGrey


“So go downtown”: simulating pedestrian movement in town centres by Haklay et al.

Monday, September 28th, 2015

Haklay et al.’s article exhibits how geospatial agents can replicate real-world environments; specifically, how pedestrians move through urban downtowns. Similar to what we discussed last class with social networks, the researchers utilized the concept of nodes (“waypoints”) in a street network to structure the individual agent’s “planned route” (12). Haklay et al.’s methodology for STREETS also considered impedance, meaning they accounted for obstacles (e.g. buildings or large clusters of people) that would slow a pedestrian’s movement from one “waypoint” to another.

After reading Sengupta and Sieber’s review article and comprehending the technical terms introduced, Haklay et al.’s STREETS methodology was easier to conceptualize. For instance, Haklay et al. described an agent-based model as one that is “autonomous and goal-directed,” which were two of the four factors described in Sengupta and Sieber’s article. Although Haklay et al. do not specifically describe STREETS as a geospatial agent that has all four properties described by Sengupta and Sieber, they state STREETS is unique because the agents understand where they are “spatially located” and are spatially “aware” (8).

What was interesting about this article, and what also parallels last week’s article, was that many parts of the methodology incorporated multiple attributes to determine how an agent/individual makes decisions. Just as Radil et al. considered both gang relations and territory in their spatial social network, Haklay et al. incorporated “behavior” and “socio-economic characteristics” into their street network (13-14). I think incorporating multiple variables is important because it replicates the real world more accurately; previous pedestrian movement models did not integrate the individual characteristics that would affect a person’s choices. For these reasons, I am interested to see how STREETS will improve in the future, and how it will incorporate even more modules/variables into the agent-based model.
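To make the waypoint-and-impedance idea concrete, here is a toy agent sketch of my own. It is not the STREETS implementation: the class, the walking speed, and the impedance values are invented for illustration.

```python
import math

class Pedestrian:
    def __init__(self, route, speed=1.4):   # base walking speed, m/s
        self.route = list(route)             # planned waypoints as (x, y)
        self.pos = self.route.pop(0)
        self.speed = speed

    def step(self, impedance, dt=1.0):
        """Move toward the next waypoint; impedance in (0, 1] slows the agent."""
        if not self.route:
            return
        tx, ty = self.route[0]
        dx, dy = tx - self.pos[0], ty - self.pos[1]
        dist = math.hypot(dx, dy)
        reach = self.speed * impedance * dt
        if dist <= reach:                    # waypoint reached this tick
            self.pos = self.route.pop(0)
        else:                                # move part-way along the segment
            self.pos = (self.pos[0] + dx / dist * reach,
                        self.pos[1] + dy / dist * reach)

agent = Pedestrian(route=[(0, 0), (10, 0), (10, 5)])
for _ in range(10):
    agent.step(impedance=0.5)                # heavy crowding halves progress
print(agent.pos)                             # -> (7.0, 0.0) after 10 ticks
```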

-MTM

Geospatial Agents

Monday, September 28th, 2015

Okay, so I think Sengupta and Sieber’s (2007) lit review and discussion of artificial intelligence research within GIScience has been the most thought-provoking article we’ve had to read so far, and I’m not just saying that to suck up to the profs. The subject material is current and very relevant to one of my fields of interest in GIS, which is programming geospatial applications.

Anyways, they mention the four properties necessary for software to be considered an intelligent agent:

(1) autonomous behavior; (2) the ability to sense its environment and other agents; (3) the ability to act upon its environment alone or in collaboration with others; and (4) possession of rational behavior

I’m pretty sceptical when it comes to artificial intelligence. Obviously a system that possesses these four qualities can be considered more “intelligent” than most software, but I think that whether or not a piece of software actually qualifies as an “intelligent agent” depends on one’s interpretation of what each of the four properties entails.

Similar to ClaireM, I question what “autonomy” actually entails, because it could mean the ability of software to run and maintain itself free of human prompts (that is, it recognizes on its own when it is supposed to run, instead of needing to be “started” to perform a task), or it could mean the much simpler concept of being able to be “started” and then left to run until completion. In my opinion the latter does not count as full autonomy and as such should be considered less “intelligent”. The types of programs referred to in this paper all seem to be of this kind.

While these systems may be able to sense their environment, they cannot do so without first being given an environment within which to operate. The paper also doesn’t really touch upon the notion of sensing and interacting with other agents, which most geospatial software systems would not do on their own, since they run separately from one another. Finally, all computer programs created as tools are designed to use algorithms to evaluate situations and make decisions, so I think any software system can be said to possess rational behaviour.

I feel like the four qualifications for software to be considered “intelligent” are not defined well enough in this article to establish a clear dividing line between intelligent and non-intelligent software. I don’t think this is really all that important, though, because it doesn’t affect the software’s usefulness, and it’s undeniable that geospatial software systems can be intelligent agents.

-yee

Geospatial Agents, Agents Everywhere…

Saturday, September 26th, 2015

Sengupta and Sieber’s review of artificial intelligence (AI) agent research history and its current landscape sought to define and ponder the legitimacy of ‘geospatial’ agents within GIScience.

The discussion of artificial life agents, often used for modeling human interactions and other dynamic populations, complemented my current research into complexity theory and agent-based modeling of chaotic systems that are sensitive to initial conditions, as it holistically related them back to GIScience.

However, ‘software’ agents, defined as agents that mediate human-computer interactions, were an unfamiliar notion to me. I found it easier to read about these types of agents by mentally replacing the term with ‘computer program’, ‘process’, or ‘application’.

As a student familiar with software development, the article made me question a lot of the computational theory I’ve learned thus far, and raised some big questions: What does it truly take for an agent or program to be characterized as autonomous? If an agent or program engages in recursive processes, does that count as being autonomous, as it essentially calls itself to action? And when is a software agent considered to be ‘rational’?

I wonder if rationality in decision making should even be included in the definition of an agent. Humans often make irrational decisions. Our decision-making processes and socialization patterns are highly complex and difficult to model, issues that quickly become apparent even when attempting to analyze static representations of spatial social networks.

I look forward to seeing how this conversation evolves.

-ClaireM