Archive for the ‘506’ Category

On Dunn & Newton (1992) and early 1990s Network Analysis

Thursday, November 9th, 2017

Dunn & Newton’s article “Optimal Routes in GIS and Emergency Planning Applications” (1992) focuses heavily on the mathematics behind the Dijkstra algorithm and its spin-off, the “out-of-kilter” algorithm, and on the latter’s use in early ‘90s GISoftware and on early ‘90s computers.

The “out-of-kilter” algorithm diverts flow along multiple paths to increase the total flow from one node to another, as with an increase in traffic during an emergency evacuation. I would have liked some more information from this article on the possible uses of network analysis for everyday people, though I concede this could have been difficult, as personal GISystem use did not really exist then like it does today. The network analysis that Dunn & Newton discuss uses set points on available road networks for its running example, but they could have considered a world in which network analysis relied on (unscrambled, post-2000s) GPS and constant refreshing. They briefly mention that some emergency vehicles have on-board navigation systems, which implies that they had the capability to discuss GPS and network analysis further; but did the inaccuracy of GPS at the time affect the emergency vehicles? Also, without these systems, a user would have to start and end at set points and be limited to analyzing within a specific area that 1) their computer could hold and 2) their data was collected on, and the on-the-fly adjustments that are commonplace now could not occur without extensive coordination.
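
For readers who want the routing side to be concrete, here is a minimal Python sketch of Dijkstra’s algorithm on a toy road network (the node names and travel times are invented for illustration; the out-of-kilter algorithm, which balances flow across multiple paths, is considerably more involved than this):

    import heapq

    def dijkstra(graph, source):
        """Shortest travel time from source to every reachable node.
        graph: {node: {neighbour: edge_weight, ...}, ...}"""
        dist = {source: 0}
        pq = [(0, source)]                # (distance so far, node)
        while pq:
            d, node = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue                  # stale queue entry
            for nbr, w in graph[node].items():
                nd = d + w
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(pq, (nd, nbr))
        return dist

    # Hypothetical road network: travel times in minutes
    roads = {
        "fire_station": {"main_st": 2, "river_rd": 5},
        "main_st": {"fire_station": 2, "hospital": 4},
        "river_rd": {"fire_station": 5, "hospital": 1},
        "hospital": {"main_st": 4, "river_rd": 1},
    }
    print(dijkstra(roads, "fire_station"))  # {'fire_station': 0, 'main_st': 2, ...}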

I am looking forward to learning more about current uses & future advancements, especially now that GISoftware isn’t just reserved for highly specialized people as it was in 1992, and that computers are faster (and that cloud computing, (more) accurate GPS, and mobile devices exist)!

On Goodchild & Li (2012) and Validation in VGI

Thursday, November 9th, 2017

I thought that this article, “Assuring the quality of volunteered geographic information”, was super interesting. The overview of the evolution of “uncertainty” in GIScience was a welcome addition and a good segue into the three approaches to quality assurance (crowd-sourcing, social, and geographic).

Exploring the social approach further: it stipulates that there will always be a hierarchy, even within a seemingly broad/open structure. Goodchild & Li briefly discuss how there is often a small number of users who input information and an even smaller number who verify that information, in addition to the large number of constant users.

For future additions to OSM or other crowd-sourced sites, it would be super interesting to show who’s actually editing/adding, and to make that info easily available and present on the screen. Currently in OSM, one can see the usernames of the most recent editors of an area; with some more digging, one can find all the users who have edited in an area; and with even more digging, one can look at these editors’ bios or frequently mapped places and try to piece together info about them that way. I guess it would be more a question of privacy (especially in areas where open data isn’t really encouraged, or where there aren’t a lot of editors other than bots, or both), but hopefully this sort of post-positivist change comes.

I recently learned that most of OSM’s most active users and validators (worldwide) are white North American males between the ages of 18 and 40, which unfortunately is not unbelievable, and which raises further questions about what information is being mapped and what is being left out. Some info isn’t mapped simply because the mappers are not interested in it (for example, what a 25-year-old guy would want to see on a map may not even overlap with what a 65-year-old woman would want to see; this gets even more tangled when also considering gender, geographic, or ethnic/“race” dimensions). Showing this information, or at least making it less difficult to find without lots of time and adequate sleuthing skills, might compel lay users to be more interested in where exactly their information is coming from.

Ester et al 1997 – Spatial data mining

Sunday, November 5th, 2017

The broad goal of knowledge discovery in databases (KDD) is, fittingly, to construct knowledge from large spatial database systems (SDBS). This goal is achieved via spatial data mining methods (algorithms), which are used to automate KDD tasks (e.g. detection of classes, dependencies, anomalies). Without a fuller understanding of the field at present, it is hard to judge how comprehensive an approach is outlined in Ester et al.’s (1997) paper.

The authors underline the distinguishing characteristic of spatial databases; namely, the assumption that an object’s attributes may be influenced by the attributes of its neighbours (Tobler). This assumption motivates the development of techniques and algorithms which automate the identification and extraction of spatial relationships. For instance, a simple classification task can be executed by algorithms that group objects based on the values of their attributes. The authors present a spatial extension of this approach by incorporating not only an object’s attributes but also those of its neighbours, allowing for greater insight into spatially classified sets of objects within an SDBS.
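
To make that concrete, here is a toy sketch (my own illustration, not the authors’ algorithm) of a classification rule that combines an object’s own attribute with an aggregate of its neighbours’ attributes; a simple distance threshold stands in for Ester et al.’s neighbourhood graphs, and all data, weights, and thresholds are hypothetical:

    import math

    # Toy objects: (x, y) location plus one non-spatial attribute
    objects = [
        {"id": 1, "x": 0.0, "y": 0.0, "value": 10.0},
        {"id": 2, "x": 1.0, "y": 0.0, "value": 12.0},
        {"id": 3, "x": 5.0, "y": 5.0, "value": 3.0},
        {"id": 4, "x": 5.5, "y": 5.0, "value": 2.0},
    ]

    def neighbours(obj, others, max_dist=2.0):
        """Objects within max_dist form the neighbourhood."""
        return [o for o in others
                if o is not obj
                and math.hypot(o["x"] - obj["x"], o["y"] - obj["y"]) <= max_dist]

    def classify(obj, others):
        """Label an object using its own value AND its neighbours' mean value."""
        nbrs = neighbours(obj, others)
        nbr_mean = (sum(o["value"] for o in nbrs) / len(nbrs)) if nbrs else obj["value"]
        combined = 0.5 * obj["value"] + 0.5 * nbr_mean   # arbitrary weighting
        return "high" if combined >= 6.0 else "low"       # arbitrary threshold

    for obj in objects:
        print(obj["id"], classify(obj, objects))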

Contrasting with last week’s topic, the approach to knowledge extraction here emphasises automation. The goal is to construct basic rules that can efficiently manipulate and evaluate large datasets to detect meaningful, previously unknown information. Certainly, these techniques have been invaluable for pre-processing, transforming, mining and analysing large databases. In light of recent advances, it would be interesting to revisit these techniques to assess whether new spatial data mining methods are more effective for guessing or learning patterns that may be interpreted as meaningful, and to consider the theoretical limits of these approaches (if they exist).
-slumley

Spatial Data Mining: A Database Approach, Ester et al. (1997)

Sunday, November 5th, 2017

Ester et al. (1997) propose basic operations used for knowledge discovery in databases (KDD) for spatial database systems. They do so with an emphasis on the utility of neighbourhood graphs and neighbourhood indices for KDD. When the programming language began to bleed into the article it was clear that maybe some of the finer points would be lost on me. I was reminded of the discussion of whether or not it’s critical that every concept in GIScience is accessible to every GIS user. I’m convinced that in order for GIS users to practice critical reflexivity in their use of queries within a database, they ultimately need to understand the fundamentals of the operations they utilize. After making it through the article, I can say that Ester et al. could explain these principles to a broader audience reasonably well. I’ll have to echo the sentiments of previous posts that it would have been interesting to see more discussion of this, but perhaps it’s beyond the scope of this article.

Maybe it’s because we’re now into our 9th week of GIScience discourse, but I felt that the authors did a particularly good job of situating spatial data mining–which, despite its name, might at a glance appear more closely related to the field of computer science–within the realm of GIScience. Tobler’s Law even makes an appearance on page 4! It’s an interesting thought that GIScientists might have more to contribute to computation beyond the handling of explicitly spatial data. For instance, Ester et al. point to spatial concept hierarchies that can be applied to both spatial and non-spatial attributes. You can imagine how spatial association rules conceived by spatial scientists might then lend themselves to the handling of non-spatial data as well.
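
As a hedged illustration of that last point (my own toy example, not drawn from the paper): a spatial association rule like “schools tend to be close to parks” can be scored with the same support/confidence measures used for non-spatial association rules, which is part of why the machinery transfers so readily:

    # Toy support/confidence check for a spatial association rule:
    #   is_school(x) -> close_to(x, park)
    # (hypothetical data; Ester et al. mine such rules over neighbourhood graphs)
    objects = [
        {"type": "school", "near_park": True},
        {"type": "school", "near_park": True},
        {"type": "school", "near_park": False},
        {"type": "house",  "near_park": False},
        {"type": "house",  "near_park": True},
    ]

    antecedent = [o for o in objects if o["type"] == "school"]
    both = [o for o in antecedent if o["near_park"]]

    support = len(both) / len(objects)          # rule holds for 2 of 5 objects
    confidence = len(both) / len(antecedent)    # holds for 2 of 3 schools
    print(f"support={support:.2f}, confidence={confidence:.2f}")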

On Ester et al (1997)’s Spatial Data Mining in Databases

Sunday, November 5th, 2017

In their article “Spatial Data Mining: A Database Approach” (1997), Ester et al outlined the possibility of knowledge discovery in databases (KDD) using spatial databases, utilizing four algorithms (spatial association, clustering, trends, and classification). Unfortunately, the algorithms are not entirely connected to how one mines spatial information from databases, and the algorithms introduced don’t seem incredibly groundbreaking 20 years later. This paper seemed very dated, particularly because I feel like most of these algorithms are now tools in ESRI’s ArcGIS and the frameworks behind GeoDa, and because the processing issues that seemed to plague the researchers in the late 1990s are not issues (on the same scale) today.

Also, I found it strange that the paper adopted an incredibly positivist approach and did not mention anything about how these tools could be applied in real life. The authors acknowledged this as a point of further research in the conclusion, but weighted it less heavily than the importance of speeding up processing times in ‘90s computing. In their introduction, they discuss their rationale for using nodes, edges, and quantified relationships via Central Place Theory (CPT). However, they do not mention that CPT, and theorizing the world as nodes and edges, is an incredibly detached idea that 1) cannot describe all places, 2) does not acknowledge that human behaviour is inherently messy and not easily predictable by mathematical models, and 3) only identifies trends and cannot be used to actually explain things, just to identify deviations from the mathematical model. Not everything can be identified by a relationship that a researcher specifies in order to scrape data using an inherently flawed model, so there will be inaccuracies. It will be interesting to learn if/how spatial data miners have adapted to this and (hopefully) humanized these processes since 1997.

Thoughts on Spatial Data Mining Chapter (Shekhar et al.)

Thursday, November 2nd, 2017

This chapter provided a review of several spatial data mining techniques, example datasets, and how equations can be adapted to deal specifically with spatial information. In the very beginning, the authors state that to address the uniqueness of spatial data, researchers would have to “create new algorithms or adapt existing ones.” Immediately, I thought about how these algorithms would be adapted: would the inputs be standardized to meet the pre-conditions of non-spatial statistics? Or would the equations themselves be adapted by adding new variables to account for differences in spatial data? The authors address these questions later in their explication of the different parts of the Logistic Spatial Autoregressive Model (SAR).
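
For reference, here is the classical spatial autoregressive model as I understand it (a sketch from memory; notation may differ slightly from Shekhar et al.’s chapter). It answers my second question above by adding a spatially lagged dependent variable to ordinary regression:

    y = \rho W y + X\beta + \epsilon

where y is the vector of observed values, W is the spatial neighbourhood (contiguity) matrix, \rho measures the strength of spatial dependence, X holds the attributes, and \epsilon is the error term. Setting \rho = 0 recovers classical regression; the logistic variant passes the right-hand side through a logit link so the model can predict binary outcomes (e.g. presence/absence in location prediction).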

When discussing location prediction, the authors state that “Crime analysis, cellular networks, and natural disasters such as fires, floods, droughts, vegetation diseases, and earthquakes are all examples of problems which require location prediction” (Shekhar et al. 5/23). Given the heterogeneity and diversity of these various data inputs, I was wondering how any level of standardization is achieved in SDM, and how interoperability is achieved when performing the same operations on such different data types.

What I gathered from this chapter was that there is considerable nuance and specificity within each SDM technique. Given the diversity of applications for each technique, from species growth analysis to land use change to urban transportation data, the choice of attribute included in the model greatly influences the subsequent precision of any observed correlation (see the example of choosing vegetation durability over vegetation species for location prediction).

There was a clear link between SDM and data visualization, as illustrated by the following statement about visualizing outliers: “there is a research need for effective presentations to facilitate the visualization of spatial relationships while highlighting spatial outliers.” Clearly, there is overlap between accurate spatial models and the effective presentation of that data for the intended audience.

-FutureSpock

Cognitive and Usability Issues in Geovisualization, Slocum et al. (2001)

Sunday, October 29th, 2017

Slocum et al. (2001) detailed emergent research themes in geovisualization circa 2001. The authors advocate for an interdisciplinary approach incorporating cognitive and usability engineering principles to address challenges concerning immersion and collaborative visualization. It was striking to realize how frequently I’ve brushed over the finer points made by the authors over the year and change I’ve spent submitting GIS assignments. I feel that many without technical GIS training are inclined to conceptualize the discipline as “mapmaking”; in contrast, it’s interesting how little time is spent on more nuanced cartographic considerations in introductory courses. The article made for a good introduction to engaging more meaningfully with what’s quite literally right under my nose.

Even though the article was presumably written before the release of Google Earth (B.G.E.?), it would appear that most of their discussion concerning emergent research themes remains relatively robust–even if some of the associated challenges have since been addressed. For instance, I am not sure what more could be said about maintaining orientation in explicitly geographic visual environments, but I would be interested to learn more about how one would handle orientation in alternative spatial environments, particularly ones immersive enough to enable the type of cognition that we use in handling the real world. Moreover, I wonder how the ubiquity of Google Earth alone has propelled the topic of cognition and usability in geovisualization.

On Slocum et al (2001) and Geovisualization Trends

Sunday, October 29th, 2017

In the article “Cognitive and Usability Issues in Geovisualization”, Slocum et al. discussed the need for maps and visualization tools to be conceptualized as composed of both theory-driven design and usability engineering. The theory describes how people think about maps, with preconceived ideas about symbology, colour, layout, and representation. I thought it was super interesting to find out that speakers of different languages conceptualize geographic features differently (as the authors note, English and French draw the lake/pond distinction differently), and that different cultures perceive colours differently. Along with the other, better-known differences between people, like sex, age, and sensory abilities, these can change the ways people view or read maps. “Masculinist” has long been a term used in critiquing mapmaking and geovisualization, as the representations often favor a “God’s-eye”, flaneur-ish approach rather than other views. Geovisualization, particularly 3D visualization, may have the ability to change this. I think it would be interesting to revisit the emerging trends and (formerly) current standards that the authors review, to see where this representation has changed and where they envision it going. I am not very caught up on the progress of AI in geoviz, but the world of GIS has certainly changed with handheld digital maps like Google Maps or OSM, and even “maps that change in real time” have changed drastically (manifested, for example, in Snapchat’s Snap Map).


It would also be interesting to learn who follows the methodologies laid out by Slocum et al. Though the authors do think more carefully about inclusivity, their approach doesn’t seem entirely all-encompassing (i.e. asking different groups of people what they like and don’t like about a geoviz and then working in the public’s comments). Further, do people actually use this advice? Video games employ many geoviz techniques to make the game world more realistic. Do game developers follow these trends? And, more importantly for research and academic purposes, have game developers shared their techniques for bettering geoviz (like reducing “cyber-sickness” (6), color choice, etc.) with other industry professionals?

Thoughts on ‘Cognitive and Usability Issues in Geovisualization’ (Slocum et al. 2001)

Friday, October 27th, 2017

This paper was a comprehensive review of the current issues in geovisualization, with a focus on legibility, user-centered cognition, and features of experiential VR. The authors provided a number of conclusions based on their review of the current literature and the state of existing technology; one recommendation was for more research in the cognitive sciences to identify the best methods for visualizing data to ensure maximum comfort and comprehension. I would have liked some more specific recommendations as to the areas within cognitive psychology, or especially pertinent methods, which they believe would be useful for the field of geographic visualization.

Early in the paper, the authors note that if we develop “theories of how humans create and utilize mental representations of the environment, then we can minimize the need for user testing of specific geovisualization methods.” But I think that even if we formulate theories about how humans internalize, process, and store geographic information, this does not preclude the necessity of user testing specific methods. As they discuss later in the paper, there are considerable individual differences and a high level of specificity to each kind of VR representation, so user testing in each instance seems like a crucial step.

The authors commented on the paucity of publications related to 3D mapping in comparison with the prominence of new software packages, and this seemed to be another manifestation of the GIS tool/science discussion we have been revisiting in class: should geo-VR and visualization be considered a tool, or a science in itself? This usually leads to a further discussion of what constitutes a science.

One of the types of collaborative geovisualization that the authors mention is the different-place, same-time scenario. I thought the most obvious instance of this is the multi-player video games that people play over the web in real time, often with players located in entirely different countries. These games can be quite immersive and require considerable synchronization of timing, representation, and events across multiple perspectives.

There is quite a positivist sentiment permeating this paper as to the power and potential of geovisualization. In the discussion of education, the authors state that “we know so little about the ways in which children’s developing spatial abilities can be enabled through visual representations,” but whether or not this technology “enables” improved spatial abilities has not been established; it could have neutral or even deleterious effects on the development of spatial visualization and navigation skills.

-FutureSpock

On Johnson et al (2017)

Friday, October 20th, 2017

I thought this article was really interesting, as I didn’t know too much about UAVs and their uses (or about access to their imagery). The article covered the uses of (and constraints on) imagery from UAVs, and offered a refined list of online or free sources of imagery, as well as editing software to process these images or videos.

This relates to a topic we often discuss in class: the availability of funding and funding’s ability to steer the direction of research. Open data sources, like Open Aerial Map and others, are incredibly useful for those researching often-imaged areas, and they help to reduce the cost of conducting research while still allowing knowledge production to occur. It seems possible that, with the costs of UAVs decreasing, this may help remote sensing knowledge production to continue without a lot of funding (provided the area is accessible, the researcher has access to a UAV and the knowledge to use it properly, and access to the sensor they need, the resolution works for their purposes, etc.).

Further, there is also software to process these images for free (of which I was previously not aware). Though some of the programs listed were pay-per-use, it was really interesting to learn that free programs exist for RS processing. I have yet to open these sources and investigate them myself, but they seem like a good step towards a lower threshold for learning about and conducting remote sensing, or even just aerial photography processing. Granted, there is always a worry that VGI/PGIS will be inaccurate due to the low threshold at which “non-experts” can contribute to these sites, but I think that for basic use, or for a project where higher inaccuracy/coarseness in the data can be afforded, it’s a good resource. Further, I think these programs should be used more in educational environments, to avoid reliance on a specific company and to give students greater breadth in learning about different software packages’ capabilities beyond the name-brand or industry standard (see: ESRI).

Volunteered Drone Imagery, Johnson et al. (2017)

Friday, October 20th, 2017

I think the article draws an interesting comparison between volunteered drone imagery (VDI) and other forms of volunteered geographic information (VGI). The “rise of the amateur” in most other VGI applications was really enabled by the spread of personal computers. It’s difficult for me to envision a world, at least in the near future, where personal UAVs become similarly prolific. The authors note that even among those who would purchase UAVs for enjoyment, there is still a technical barrier that would prevent less knowledgeable users from contributing. I imagine that it will be a while before VDI contributors begin to resemble “amateurs” in the way many VGI contributors do more broadly. There’s probably an interesting discussion to be had about how the different motivations behind contributing VDI and other types of VGI might affect concerns about data quality. I would be inclined to posit that VDI contributors have more professional expertise than the greater VGI community, perhaps making VDI less vulnerable to issues of credibility and vandalism. However, it’s conceivable that having fewer users with the appropriate technical expertise would mean less power of the crowd to catch and rectify errors.

I think another important distinction between VDI and other VGI projects like OSM is that many remote sensing contributions are likely less interpretive. For instance, an OSM contributor might delineate a boundary between wetland and forest from aerial imagery through tags. It would appear–based on my limited experience with remote sensing–that the collection and contribution of most VDI precedes these interpretive steps, so naturally there would be different ways of addressing accuracy and precision. Of course, if the definition of VDI were to include remote sensing derivatives like classifications and DEMs (per the “UAV Mapping Workflow”), challenges associated with interpretation are unavoidable.

Thoughts on ‘Volunteered Drone Imagery…’ (Johnson et al.)

Friday, October 20th, 2017

I thought this paper was a short and sweet summary of the current state of UAV/UAS acquisition tools and data processing software. The authors used OSM as a parallel, vector-based example of what a future platform for aerial data could be, and it was helpful to have some schema about this topic to build from.

One issue that they did not touch upon, and which immediately springs to mind when considering a database of “frequently updated, high resolution imagery” (pg. 1), is that of privacy. If they are referring to real-time information about inhabited environments, then having an exceedingly easy way to obtain high-resolution aerial imagery comes with all kinds of implications for protecting individuals’ privacy. Would they blur out humans and sensitive information like license plates? At which stage would this image manipulation occur, and who would be responsible for it? Even if the images are not granular enough to allow identification, there have been nefarious uses of geographic data before (like the people who used PokemonGo! data to target spaces known to contain other users and mug them). And since the ultimate aim seems to be for this data to be easily accessed and manipulated into third-party products/services, it would be difficult (or impossible) to “opt out”.

The authors discuss how the private sector is investing in this industry to “reduce even further the entrance costs” (pg. 1) to this field. I can see why companies would want to encourage recreational use of UAVs as a hobby, because the associated paraphernalia and updates present an opportunity for endless monetization. But as the authors note later in the paper, the specialized data processing software can be expensive and complicated. So it will be interesting to see how this balance between the democratization of UAV hardware and the high barrier of later-stage data manipulation changes with time, investment, and public interest.

The issue of interoperability was not discussed explicitly but touched on when mentioning how the large variety in quality of sensors means that it can be difficult to host imagery on a common site and stitch images from a given area together coherently. This reminded me of the interoperability issues mentioned in the article on cyberGIS and seems like a recurrent issue in discussions of GIScience and its applications.

The example of the Nature Conservancy’s Coastal Resilience Project as a hosting service with a concrete agenda made me think about the importance of objectivity when compiling imagery or creating a data hosting platform. I would say OSM tries to be pretty objective in its collection and representation of data (although of course complete objectivity is impossible). But I wonder if it is more valuable to explicitly state the objectives and goals of an aerial imagery project, in the hopes of solving a particular problem or addressing a particular gap in the data. That way, users who are interested in that particular issue are more likely to participate and provide better-quality data. The general public could contribute too, but their contributions might be stronger if made in pursuit of a particular feature of the landscape, or to capture specific environmental indicators. Instead of one platform of uniform data, a few platforms with specialized guidelines, centralized organization, and stated objectives for specific projects would be a meaningful and pragmatic first step. After assessing the success of these pilot projects, the UAS community could reflect on the necessity of a universal, high-quality aerial imagery platform.

-FutureSpock

Spatializing Social Networks (Radil et al. 2010)

Sunday, October 15th, 2017

I really enjoyed this paper and how it clearly elucidated the theories of social network analysis, the idea of embeddedness and the interplay between spatial and social attributes of geographical phenomena, and the need for a matrix-based model to describe and predict gang-based violence in Los Angeles.

The overarching and unspoken goal of the paper seemed to be to create a quantitative framework for visualizing social and spatial relationships. This distilling of observable social phenomena (gang rivalries and violence) into matrices and testable hypotheses gave the paper a very logical flow and purpose. On page 311, the authors discuss how “identifying social positions as collections of actors with similar measures of equivalence allows…outcomes for similar actors to be operationalized and tested”. This is the first explicit mention of the predictive power of the outcomes of network analysis when applied to this specific issue of gang rivalries in Hollenbeck. Again, on pg. 315, the authors present their analysis as a testable hypothesis: “the hypothesis here is whether or not similarly embedded and positioned territorial spaces experience similar amounts of violence.” Their declaration of the relevant variables is very clear, as is the directionality of their hypothesis. I appreciate how succinctly they state the purpose of their network analysis and the predictive power of their results.

That being said, their methodology definitely glosses over some of the nuances of rivalry, resulting in binary positions on each matrix criterion, a limitation they acknowledge on pg. 317. Several iterations of the calculations result in a grouping into either +1 or -1, allowing for a dichotomized categorization which is very convenient for the purposes of data analysis, but which is an oversimplification. They address this issue by expanding into 8 subcategories, but I would have liked an explanation as to why 8 was deemed sufficiently nuanced whereas 2 was too few and 30 too many.
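
The iterative procedure that converges to +1/-1 sounds like CONCOR (convergence of iterated correlations), a standard blockmodeling technique. Below is a rough Python sketch of that idea under that assumption; it is my own reconstruction with a hypothetical rivalry matrix, not Radil et al.’s data or code, and degenerate inputs may need extra care:

    import numpy as np

    def concor_split(m, iterations=50):
        """Iterate column-wise correlations until entries approach +/-1,
        then split actors into two (approximately) structurally
        equivalent blocks by the sign of their correlation with actor 0."""
        c = np.corrcoef(m.T)              # correlate actors' rivalry profiles
        for _ in range(iterations):
            c = np.corrcoef(c)            # correlations of correlations
        return np.where(c[0] > 0)[0], np.where(c[0] <= 0)[0]

    # Hypothetical 5-gang rivalry matrix (1 = rivalry, 0 = none)
    rivalries = np.array([
        [0, 1, 1, 0, 0],
        [1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [0, 0, 0, 1, 0],
    ], dtype=float)

    block_a, block_b = concor_split(rivalries)
    print("block A:", block_a, "block B:", block_b)

Repeating the split within each block is what would yield the 8 subcategories the authors settle on.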

I thought that the groundwork laid in the beginning of the paper (a map with clear geographic and social boundaries outlined) provided a strong foundation for the discussion of the case study. Rather than apply their methodology to the crime data in a vacuum, the authors gave the reader background on why the observed relations among rival gangs might occur, based on the geography of the space. I also appreciated the diagrams and thought they complemented the text nicely, although I did have a few lingering questions. Figure 3 represents each gang as a node, and the protruding lines connecting one node to another indicate whether or not a rivalry exists between the two groups. But there is no discussion of the placement of, and relative distances between, the nodes. Do the ones depicted closer to each other have contiguous turf areas? Are they organized according to a N/S/E/W axis that mirrors their real-world territoriality? These questions remained unanswered.

A further criticism would be the wording and use of the surveys of the LAPD and *some* former gang members to identify the rivalries between the gangs. The use of the word “enemy” in particular is vague, and seeing as perceptions of rivalry may arise from individual interactions and experiences, the accuracy of these designations is questionable: alliances and feuds are dynamic, mistrust of outsiders is endemic, and their model is static. However, they did mention that the reported rivalries showed perfect symmetry between the police and the ex-gang members.

But these are all minor criticisms of an otherwise excellent review and application of social network analysis to a contemporary geospatial issue. The authors suggest that these methods can be applied at other scales and to other kinds of research in geography, and I would like to see them applied to health epidemics, drug use, and homelessness, just to name a few.

-futureSpock


On Andris (2016), social networks, and geovisualization

Sunday, October 15th, 2017

I found this article super interesting, as it discussed the complexities of visualizing social ties over space. I had never thought about 1) the fact that most relationships are conceived of at least a little bit geographically, yet geographic visualization systems have not been able to visualize these dynamics (because they are, as Andris notes, “crude”), or 2) the fact that all this data from varying sources could (possibly) be used to piece together the relationships that one takes for granted.

It is also interesting that this is an up-and-coming concept. It did not seem as though Andris was expressly studying social media platforms so much as the relationships between people expressed through data (paper, telephone, email, social media platforms). It is incredibly surprising that this has not been picked up further. In the big data studies that I have read, it seems as though most focus on the raw “butts in seats” numbers and avoid connecting this information to the actual users themselves; Andris’s discussion of the nuances of geovisualizing social networks actually tries to tie people’s digital presences back down to their humanity, which I found refreshing.

Further discussion of these geoviz nuances, and making the discussion more “mainstream” in GIScience, would be incredibly useful in urban planning and other disciplines that try to bring people together and make life easier for the people they serve. It seems to me that these visualizations have the potential to help decision-makers do things better by actually seeing these numbers as real people’s real lives and livelihoods, though I share the worry of the classmate who posted earlier that this information could also be used nefariously.

Social Network Analysis of Gang Rivalry, Territoriality, and Violence, Radil et al. (2010)

Saturday, October 14th, 2017

Radil et al. (2010) synthesize sociological and geographic techniques to investigate gang-related activity in an eastern policing district of Los Angeles, California. The article challenged my conception of space and its role in GIScience. I can appreciate how social networks can be “spatialized” in relation to geographic information, but it was interesting how the language surrounding social networks themselves mirrors the language used more broadly in GIScience. The way entities can be associated with a “location” in social or network space made me consider how other concepts I wouldn’t consider inherently spatial might be framed in a spatial context.

Their discussion of “spatial fetishization” really resonated with me, particularly in my experiences outside the Department of Geography. Mentioning my minor program to a group project member might prompt enthusiasm about the idea of “doing GIS” and how we could incorporate it into our assignment. This could be especially true in the School of Environment, where GIS is touted as a uniquely hirable skill in a program that might otherwise emphasize theory over practice; but more generally, I think the proliferation of GIS tools beyond the field of geography has the potential to generate excitement about exploring the spatial dimensions of a topic in a way that lacks nuance. The cluster analysis exercise was a good example of how a purely spatial approach might oversimplify a multidimensional question.

The binary classification of gang relationships as rivalrous or non-rivalrous seemed a little reductive. I was hoping the authors would explore further how one could address the evolution of social networks over time, but I agree this might be beyond the scope of the paper.

Andris 2016 – Social networks in GIS

Saturday, October 14th, 2017
As outlined in the CyberGIS articles, rapidly increasing quantities of geo- and socially-referenced information are being generated. Andris (2016) argues that the existing theoretical and technical infrastructures of Social Networks (SN) and GISystems are insufficiently integrated for efficient geosocial analysis.

The author proposes a stronger embedding of SN systems in geographic space. In their framework, a node (geospatial agent) in an SN has its geolocation information formalised in the concept of an ‘anthropospace’. Unlike previous descriptions of human movement (e.g. life paths, anchor points), the anthropospace is a fluid concept which can refer to points, lines, areas, or probability clouds associated with a social agent’s ‘activities’. I think this terminology is compelling in its universality, but it may (as emphasised by the author) present challenges in a GIS setting. For instance, how should GIS deal with nodes that have different types or scales of anthropospace?

The ideas of non-Euclidean geometries and network analyses are not new in Geography. For instance, the time it takes for a human agent to traverse geographic space forms a highly variable non-Euclidean metric space over the Earth, which might be constrained by a transport network or the individual’s characteristics. The additional difficulty with SNs is dealing with the transient or ambiguously defined geolocations of nodes. To address this, the concept of ‘social flows’ is introduced to signify social connections in geographic space. The calculation of a ‘socially-bounded’ Scotland was a particularly amusing (/troubling) example. Of course, social flow can only be derived from proxies for social connection (like phone calls).

I’m not convinced Andris’s system represents a definitive framework for integrating SN and GIS, but it does offer significant insights and examples. Further work would be necessary to persuade readers that the suggested typologies are exhaustive, non-arbitrary, and widely useful. Would a technical fix (making GIS software more SN-compatible) solve this problem? I agree with the author that a conceptual understanding also needs to be advanced.
-slumley

A Theoretical Approach to Cyberinfrastructure, Wang and Armstrong (2009)

Thursday, October 12th, 2017

Maybe the most challenging aspect of the paper–aside from the technical jargon–was trying to connect cyberinfrastructure concepts with my own experiences using GIS. Wang and Armstrong describe an approach to GIScience topics that I have never properly confronted. For instance, if someone were to ask me about the “crux” of inverse distance weighting, I would probably mention the definition of the relationship between interpolated values and surrounding points, or the selection of an appropriate output resolution. Increasing the efficiency of the near-neighbour search is not immediately called to mind. If this paper had been my introduction to the study of GIScience, I would likely have begun with a different opinion about its place in the field of geography versus computer science. Anyway, I can appreciate the appeal to the authors’ target audience through the conceptualization of “spatial computation domains” as spectral bands.
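
For contrast, here is the “naive” view of IDW I describe above, as a minimal brute-force Python sketch (my own toy example with hypothetical data; the cyberinfrastructure contribution lies precisely in speeding up the neighbour search that this version does exhaustively):

    import math

    def idw(x, y, samples, power=2):
        """Inverse distance weighted interpolation at (x, y).
        samples: list of (sx, sy, value). Brute force: O(n) per query;
        a spatial index would restrict this loop to near neighbours."""
        num, den = 0.0, 0.0
        for sx, sy, value in samples:
            d = math.hypot(x - sx, y - sy)
            if d == 0:
                return value              # query point coincides with a sample
            w = 1.0 / d ** power
            num += w * value
            den += w
        return num / den

    # Hypothetical elevation samples: (x, y, elevation)
    pts = [(0, 0, 100.0), (10, 0, 120.0), (0, 10, 90.0), (10, 10, 110.0)]
    print(idw(5, 5, pts))                 # weighted average of the four samples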

It’s true that some of the finer points may have been beyond my understanding. Still, the question on my mind as I read was whether or not my understanding was really necessary. As an end user, is my comprehension of the underlying cyberinfrastructure critical for me to evaluate the suitability of an algorithm? If it adds no additional source of bias or uncertainty, maybe not. Of course this is a big “if” as I am not confident in my ability to identify any such sources should they exist. In any case, it’s conceivable that in the age of big data GIS practitioners will need increasingly sophisticated tools to accommodate unprecedented data volume and reduce processing time.

Thoughts on CyberGIS Software: Synthetic review and Integration Roadmap (Wang et al. 2013)

Monday, October 9th, 2017

I think the density of jargon employed in this paper makes for a very high barrier to comprehension. The authors constructed many diagrams, models, and frameworks to explain how spatial data is currently processed in software environments, but I found even these descriptive models (Figures 1, 4) to be highly abstract, and the labels were not explanatory. They seem at once highly specialized and lacking in any descriptive power: a true feat. I am clearly not the intended audience of the paper, with my limited knowledge of software architecture.

One of the major themes of the paper was the need for interoperability amongst different spatial data types. The authors put a lot of emphasis on the power of cyberGIS to analyze big data sets and solve complex problems, and the need for data to adhere to common guidelines seems paramount to that goal.

The goal of the paper seemed to be to review existing software packages that deal with geospatial analyses, and to critique the current methods and modularity through which they operate. But the authors themselves concede that we “do not yet have a well-defined set of fundamental tasks that can be distributed and shared.” Thus, there is necessarily a lot of overlap between the functions of the different CI/SAM tools in use. This redundancy and lack of efficiency seemed to be an issue for the authors.

It seemed contradictory to me that, of the six objectives outlined for the CyberGIS Software Integration for Sustained Geospatial Information, at least half had primarily social objectives, whereas human usability, barriers to learning, and the social implications of integrating these various methods were hardly mentioned. For example, the authors list the “engagement of communities through participative approaches, allowing for sharing, contribution and learning,” yet there is no discussion, aside from interoperability, of how the integration of the technologies discussed here would make those objectives more feasible. There was a reference to a “stress test” in which 50 users performed viewshed analysis to test the capability of the “Gateway stage” of software development, but this was a brief mention of humans in an otherwise immensely abstract and theoretical discussion of various issues in cyberinfrastructure.

I would really like to hear from an expert in this field to ascertain whether they felt this paper made a valuable contribution, or answered a pressing question, or even clarified the goals and future of their field.

-futureSpock


Wang et al 2013 – CyberGIS

Sunday, October 8th, 2017
This paper addresses growing demands for computing power, flexibility and data handling made by the (also growing) geospatial community. Wang et al examine the current (2013) array of open geospatial tools available to researchers, and present a CyberGIS framework to integrate them. This framework leverages existing software modules, APIs, web services, and cloud-based data storage systems to connect special-purpose services to high-performance computer resources.

It is not entirely apparent to me whether CyberGIS represented a particular project or a generalisable framework. The online resources for the NSF-funded project are now dated – possibly owing to the conclusion of the grant and Wang’s appointment as the President of UCGIS. Nonetheless, CyberGIS has been highly influential in shaping community-driven and participatory approaches to big data GIS, pointing towards cloud-based web GIS platforms such as GeoDa-web and Google Earth Engine.

Advancements in the accessibility and usability of geospatial services greatly increase the potential benefit to multidisciplinary research communities. Removing the need for highly technical skills in spatial data handling and the analysis of large datasets across multiple platforms allows researchers to allocate more of their time and resources to other elements of their work. This division of intellectual tasks marks the maturing of GIScience as a field.

CyberGIS also raises interesting questions about who constitutes a GIScientist. On one hand, it lowers the bar for carrying out analysis of big geospatial data, empowering researchers for whom spatial analysis is an important, but ancillary component of their work. On the other, it could also reduce the pool of academics who are familiar with the GISc and computational techniques that were previously required.
-slumley

Thoughts on ‘Geospatial Agents, Agents Everywhere…’ (Sengupta, Sieber 2007)

Sunday, October 1st, 2017

I liked how this review delineated very specifically the difference between Artificial Life Agents and Software Agents because I was quite ignorant about the latter before.

What struck me about the four conditions necessary for classification as an intelligent agent is that each criterion is the result of many different interacting subfields. For example, the possession of rational behaviour depends on decision-making theory from psychology, economics, reward-learning paradigms, and machine learning algorithms. These varied fields all have something to contribute to the “rational behaviour” required of an intelligent agent. This suggests that agents are the product of interdisciplinary sciences, and it supports the finding that they have very diverse applications.

The major distinction I drew between ALGAs and Software Agents was that ALGAs are concerned with intra-human relations: behaviour, interactions between social beings, and the learning that arises from these interactions; whereas Software Agents are concerned with making inter-human processes easier, namely the retrieval and manipulation of spatial data through a computer interface. I am not sure this distinction captures the true differences between the two types of agents. Are software agents really so removed from the inner workings of humans? Do they not need an awareness of the user and the user’s capabilities, limitations, and responses to the environment in order to facilitate easier task processing?

The authors discuss how initial forays into AI research were met with “disillusionment with their true potential in mirroring human intelligence”. This doesn’t surprise me, because I think as a society we tend to idealize new technologies when they arise and overestimate their explanatory, predictive, or transformative power. An example is fMRI and the huge spike in using neural data to explain every conceivable phenomenon. AI is also having a major moment, from its use in categorizing human faces for facial recognition to its use in perfecting self-driving cars. I wonder if agent-based modelling, promising though it is in terms of testing predictions and tweaking specific variables to see the outcomes, is also one of these overly hyped technologies. Is it intrinsically better than naturalistic observation in coming to conclusions about behaviour, or are they different tools for answering different kinds of questions?

The discussion about ALGAs immediately reminded me of agent-based modelling by Thomas Schultz in McGill’s psychology department, which modelled different cooperation strategies (ethnocentric, allocentric) to see which was most evolutionarily successful. This makes me think about how much we can extrapolate from geospatial agent-based modelling, especially when dealing with a very granular issue like the behaviour of individual organisms. Can we make widespread predictions about the real world based on these simulations?

The discussion about the various applications of ALGAs, from migratory behaviour to models of urban sprawl, raises questions about which kinds of problems ALGAs are best suited to, and what metric could be used to determine the appropriateness of agent-based modelling for a given issue. This might help curtail the tendency to use the technology for every kind of problem (even those which might benefit from another approach). Developing some way of testing the adequacy of the method for the issue would also help focus the predictive power and efficacy of the generated models. Such a framework could be built upon a comprehensive review of the different kinds of issues which geospatial agent-based modelling is being used to solve.

-futureSpock