Archive for September, 2015

Geospatial Agents, Agents Everywhere

Monday, September 28th, 2015

The first reading, "So go downtown," gave an introduction to agent-based modelling. As the article mentions, one of the model's major limitations is that pedestrians are generated according to a Poisson distribution, meaning arrivals are independent and random rather than synchronized. Similar to the train example, I would propose that this limits the model's usefulness on campuses, where large numbers of students are released at once at regular intervals. That being said, this article is more than 10 years old and I'm sure agent-based modelling has progressed rapidly since then. Advances in CPU capabilities likely allow researchers to simulate far more agents with a more complex set of behaviors and landscapes.
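To make that limitation concrete, here is a minimal Python sketch of my own (the arrival rate, class size, and release interval are invented for illustration) contrasting Poisson arrivals with the synchronized bursts a campus produces:

```python
import math
import random

def poisson_arrivals(rate_per_min, minutes):
    """Pedestrian counts per minute under a Poisson process: arrivals
    are independent, so large synchronized surges essentially never occur."""
    def sample(lam):  # Knuth's method for drawing one Poisson variate
        threshold, k, p = math.exp(-lam), 0, 1.0
        while p > threshold:
            k += 1
            p *= random.random()
        return k - 1
    return [sample(rate_per_min) for _ in range(minutes)]

def campus_arrivals(rate_per_min, minutes, class_size=200, interval=60):
    """Same background flow, plus a burst of `class_size` students
    released every `interval` minutes as classes let out."""
    counts = poisson_arrivals(rate_per_min, minutes)
    for t in range(interval - 1, minutes, interval):
        counts[t] += class_size
    return counts

# The Poisson stream rarely strays far from its mean of 5/minute,
# while the campus pattern spikes to roughly 205 on the hour.
print(max(poisson_arrivals(5, 120)), max(campus_arrivals(5, 120)))
```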
Reading Prof. Sengupta and Prof. Sieber's article Geospatial Agents, Agents Everywhere, I was excited to learn that the models have progressed and been applied to several scenarios, from movement in alpine environments to shopping behavior. One of the most interesting applications mentioned in the article was a system that could vary highway tolls based on traffic density. This immediately reminded me of the ride-hailing service Uber, which currently varies its fares based on demand. Uber would likely be interested in traffic-predicting geospatial agent models, so that its cars could both avoid traffic and be well positioned to pick up passengers before they even request a lift. For example, when a large event ends, traditional taxis may have exclusive rights to park right outside the venue, forcing Uber cars to linger a couple of blocks away. Using geospatial agent modelling, Uber could predict the crowd's behavior as it leaves the venue and distribute its cars to compete more effectively with traditional taxis.
Fares could even become geofenced, so that zones with a high predicted agent density receive a higher fare bracket than low-density zones. In this scenario, Uber could entice more cars into specific areas before they are needed, and influence crowd behavior by encouraging thrifty pedestrians to walk into zones of low predicted density.
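As a thought experiment, the pricing rule could be as simple as mapping each geofenced zone's predicted density to a fare multiplier. The sketch below is purely hypothetical: the zones, thresholds, and multipliers are invented, and this is not Uber's actual surge logic.

```python
def fare_multiplier(predicted_density):
    """Map a zone's predicted pedestrian density (say, agents per hectare
    from a geospatial agent model) to a surge-style fare multiplier."""
    if predicted_density >= 150:
        return 2.0   # venue emptying out: draw drivers in ahead of time
    if predicted_density >= 75:
        return 1.5
    return 1.0       # low-density zone: cheaper, nudging thrifty riders over

# Hypothetical predicted densities per geofenced zone, e.g. from an ABM run
zones = {"venue_block": 180, "two_blocks_out": 90, "quiet_side_street": 20}
for zone, density in zones.items():
    print(zone, fare_multiplier(density))
```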
-anontarian

Model Citizens: Haklay et al, “So Go Downtown”

Monday, September 28th, 2015

Haklay et al.'s article "So Go Downtown" describes STREETS, an intricate model of pedestrian movement. I began the article somewhat skeptical of the need for such a model (is it not good enough to simply collect enough data on pedestrians?) and of its capacity to think of everything: agents deviating from their agendas, socioeconomic status, and so on. Nearly every "but what about…" was answered in the article, and I was surprised by the complexity of the model and how much it takes into account. I also found the combination of raster, vector and network data fascinating: in our education, these data models are often taught as disparate, and we rarely use them in conjunction. This article started to give me an idea of the ways these data models can, in fact, be used together.

One problem the authors raise is that the town in the model is "spatially closed" – a bubble, with no competing towns or suburbs nearby. They recognize that opening up the closed model would add enormous complexity. It is difficult to place boundaries on what should and should not be included in a model – it requires making serious choices about what is significant enough to belong in the model's micro world.

Clearly, there is room for expansion and improvement in the modeling and simulation realm of GIScience, as existing models are modified and new ones are created.

– denasaur

Modular, spatial ABMs: Haklay et al., 2001

Monday, September 28th, 2015

In the fourteen years since "So go downtown" (Haklay et al., 2001) was published, agent-based modeling has, unsurprisingly, been harnessed for an ever-expanding number of applications. In the wake of the late-2000s recession, which appeared to discredit the economistic assumption of equilibrium, the influential science journal Nature published an editorial calling for the synthesis of existing ABM techniques into a modular representation of the economy. Spatial ABMs (such as the STREETS model) have surfaced in mainstream news as potential predictors of crowd behaviour. Needless to say, Haklay et al. were on to something very important with the development of their modular, multi-scalar representation of pedestrian behaviour. Avoidable catastrophes such as the 2010 Love Parade disaster, in which 21 people were killed by trampling due to a dynamic feedback phenomenon now known as "crowd turbulence," have provided fodder for the study of the effects of interrelated psychological and physical forces on large crowds.

In general, ABMs appear to be one of the most promising intersections of social science and computer science, due to their ability to model situations of staggering complexity, involving thousands or millions of agents whose dynamic interactions produce highly unpredictable results. Our last discussion about geolocated SNA produced some interesting conjecture about what could be done — for better or worse — with the datasets of Google or Facebook, which contain geolocated information on billions of real individuals. Haklay's observation that ABM research in the 1990s was hindered by the lack of "sufficiently powerful computers and suitably rich data sets" points to the potential this information has to expand human knowledge, as well as to enable much more effective control of human populations.

I would venture that current-day iterations of modular ABMs like STREETS, combined with these ever-growing, dynamic sources of socioeconomic data, hold the potential to create very well-informed models that capture the dynamism of emergence with the power of immense and ever-evolving observations of real people. With so much relevant research now being conducted behind closed doors at intelligence agencies and in corporations whose business is selling data, the current and future possibilities of spatial ABMs remain both fascinating and frightening.


-grandblvd

Simulated Movement, an Emerging Field?

Monday, September 28th, 2015

The article by Haklay et al. from 2001 is an interesting look into simulated pedestrian movement in a closed-system urban downtown setting. Named STREETS, this module-based model shows just how complex real human movement is by detailing the ways our unconscious decision-making must be broken down by a computer in order to simply approximate pedestrian paths.

After reading about the various modules, my thoughts immediately turned to further additions that would make the model as realistic as possible. A more complex model might include cars as another variable affecting how pedestrians cross roads – for example, how a pedestrian's path might change if the time spent waiting for cars to pass lets them focus on an alternate target destination they had originally ignored. In relation to my own project on hydrological models, the simplest Mover module could be applied to predicting overflow in river systems. If excess water flow units were given values like the individual agents in the article, and the water filled pixels the way pedestrians fill sidewalk cells, then once a pixel was "full" the excess water would have to move into an adjacent pixel, potentially changing overflow paths.
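A toy version of that overflow idea might look like the following sketch (my own illustration, not anything from the article): a one-dimensional row of pixels with invented capacities, where excess water spills downstream the way agents spill from a full sidewalk cell.

```python
def route_overflow(capacity, inflow):
    """Fill each cell up to its capacity and push any excess into the
    next (downstream) cell, the way STREETS-style agents would spill
    from a full sidewalk cell into an adjacent one. 1-D toy model."""
    stored = [0] * len(capacity)
    carry = 0
    for i, cell_cap in enumerate(capacity):
        total = inflow[i] + carry        # local inflow plus upstream excess
        stored[i] = min(total, cell_cap)
        carry = total - stored[i]        # excess moves to the adjacent cell
    return stored, carry                 # carry = water leaving the reach

capacity = [3, 2, 4, 1]     # how much water each pixel can hold
inflow   = [5, 0, 0, 0]     # a pulse of 5 units enters the first pixel
print(route_overflow(capacity, inflow))   # ([3, 2, 0, 0], 0)
```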

As the modules became more specific in their control of agent movement, the final module, Planner, almost seemed like artificial intelligence. It was not until the authors directly addressed the difference between deliberate simulation and emergent, 'self-organizing' movement that I realized model simulation can come much closer to "real life" than it currently does. Overall, this piece was engaging, with easy-to-follow technical descriptions of the modules combined with just enough theory to relate the topic to GIScience and future implications.

– Vdev

Haklay et al 2001

Monday, September 28th, 2015

I imagine that agent-based modeling is much more complex than most models in the natural sciences, such as climate models or forest growth models. While for now agent-based modeling is applicable to simpler aspects of human behavior such as commuting, further application in economics or sociology would probably require significant advances in fields such as artificial intelligence, which would improve our ability to simulate human decision-making. Since the writing of this article, however, I imagine vast advances have been made. Such advances would allow computer models to complement or perhaps replace some survey-based research. Choice experiments, for example, are a survey-based approach used to understand how subsistence farmers and herders use ecosystem services, based on environmental and socio-economic factors. I would be intrigued to see computer models simulate such scenarios.

I wish that the "planner" module had been functional and applicable at the time this article was written. Perhaps it would be able to represent people having multiple, completely different modes of behavior. For example, would a student or worker have a "weekday" plan and a "weekend" plan that the planner module would alternate between? Also, I was very intrigued by the term "cognitive map", but the paper did not expand on it. Furthermore, the discussion of emergence was difficult to grasp. I believe it was about whether we should try to look for clear behavior patterns and systems at aggregate scales or just accept ambiguity, or a lack of patterns, as it is.

-yojo

More than ‘plausible’: pedestrian simulations and the future.

Monday, September 28th, 2015

In their 2001 article, Haklay et al. present a model of impressive complexity – STREETS – to simulate pedestrian movements in central urban areas, relying on several different 'modules' to control individual agents as well as interactions with the environment and crowd dynamics. The authors outline a number of shortcomings of their methodology, notably the assumption that the town centre in the simulation is 'spatially closed' (p.10).

Initially skeptical of the model's use beyond simply confirming or denying existing ideas about pedestrian behaviour, I was reminded (as with previous articles) of our in-class discussion about the quantification bias that can legitimize numerical/computational work over more qualitative approaches, and that has indisputably helped maintain the relevance of GIS (and now modelling) in contemporary geography. This made me question the relevance of modelling human behaviour; I felt the assumptions in the STREETS model were too damning, and that the implied complexity could never be adequately abstracted. To my surprise, the authors boldly addressed this through a fascinating discussion of 'bottom up emergence' (p.25-26).

In discussing the role of agent-based modelling and its 'one-way notion of emergence' (p.26), the authors detach themselves from the notion that inductive research is possible in the STREETS model, and the jump from describing pedestrian movement as 'plausible' to 'self-organizing' (p.25) is significant. This discussion piqued my interest, for it suggests that there is more to modelling than increasing its complexity every time advances in computational power allow for it. Clearly, adding dozens of modules or parameters is not enough to allow reliable inductive research to be conducted. Nevertheless, the power of modelling hundreds of thousands of agents at a time far exceeds the current possibilities of qualitative research on pedestrian movement, suggesting that modelling will remain highly relevant to the study of pedestrian movement into the future.

At what point will models transition into ‘self-organizing emergent structures’ (p.26)? I honestly cannot say, and my level of understanding doesn’t even allow for an educated guess – all I know for sure is that it won’t be exclusively dependent on computational power. In any case, I look forward to seeing how the field develops.

-XYCoordinator


Are all Trip Generators Created Equal? (ABMs)

Monday, September 28th, 2015

In their article "So go downtown: simulating pedestrian movement in town centres", Mordechai Haklay et al. describe ways in which agent-based modelling has produced superior models of pedestrian behaviour by taking into account variability in the preferences and behaviour of pedestrians based on the purpose of their trip, their demographic characteristics, and a variety of other considerations. However, one aspect of earlier pedestrian traffic modelling – from which the assumptions of agent-based modelling are derived – underlines some of the limitations of the agent-based approach. Haklay et al. indicate that pedestrian models typically incorporate two elements of a place (typically a city block, tract, or some similar defined area) to predict the volume of pedestrian activity: the "population at [the] location" and the "measure of the attraction of facilities at [the] location" (Haklay et al. 7). This raises the question: are all attractors created equal?

In less abstract terms, can the number of trips generated by commercial and employment nodes be given equal weight as a trip generator to the relative permanence of a residential population at a particular location? In my opinion, they surely cannot. The variability of pedestrian trips – particularly to retail – cannot be overlooked. While the "attraction of facilities" (7) at a location can vary on an hourly basis, residential populations fluctuate significantly only over several years. Factors that affect pedestrian trips to facilities at a location – particularly retail facilities – include: variability in seasonal commerce (e.g., Christmas shopping, tourism season); person-to-person variation in preferences under different weather conditions (e.g., a shop may see fewer clients during inclement weather, while a movie theatre might benefit); personal preferences in walking speed and environment (e.g., some people may prefer quieter streets so they can walk faster, while others prefer busier, slower streets); and variable tolerance of environmental conditions, such as the urban heat island effect.

Although incorporating these elements into agent-based modelling would be arduous and expensive, the potential benefits to countless urban environments are unimaginable. For instance, pedestrian modelling that incorporates behavioural responses to changing weather conditions could be tied to public transit networks, deploying more or fewer vehicles during periods of weather-induced demand (e.g., many people seeking bus service during a rainstorm). Models that considered variability in tourist traffic could help business owners make educated decisions about their investments (e.g., where to locate, what hours to keep). But perhaps most intriguingly of all, pedestrian models could actually show which factors in the environment affect pedestrian behaviour adversely, allowing for targeted investments that enhance the walkability of an area and maintain the vitality of pedestrian-oriented neighbourhoods.

-CRAZY15

Role of Geospatial Agents in GIScience

Monday, September 28th, 2015

In their article, Geospatial Agents, Agents Everywhere…, Sengupta and Sieber (2007) demonstrate how the paradigm of agents in AI both serves and benefits from research in GIScience. I found it interesting that Artificial Life Geospatial Agents (ALGAs) are relevant to our previous discussion about the importance of spatializing social networks. ALGAs are relevant to spatial social networks in that they model "rational-decision making behavior as impacted by a social network" (486-487). Therefore, applying our knowledge about spatial social networks (as opposed to just social networks) to ALGA development could perhaps help us better understand and model social interactions and information passing between individual agents.

In addition, the interoperability of Software Geospatial Agents (SGAs) across software and hardware platforms informs us about ontology, representation, and semantics in GIScience. SGAs might therefore unlock answers to key questions surrounding geospatial ontologies and semantics, because SGAs carry the key responsibility of determining which standards to interpret semantically. These standards may help with the important GIScience tasks of expressing topology and geospatial data in GIS. The fact that SGAs are "geospatial" in nature will therefore shape how we "do GIS" as geographers.

I am interested to know the extent to which ALGAs are able to incorporate temporal dimensions within their frame of development. I suspect that adopting the added dimension of time in these platforms and models will be a crucial challenge for ALGA research in GIScience.

-GeoBloggerRB

On Geospatial Agents

Monday, September 28th, 2015

Firstly, I can see why ALGAs are dominating the GIScience literature on agents. Modeling complex social relationships and migration patterns, as well as predator-prey interactions (and more), has much more compelling and interesting implications for geography (at least on the surface) than does information mining. Even with my limited knowledge of AI agents, my mind is flooded with scenarios in which I could apply ALGAs; I easily grasp the concept of using a computer to model intelligent systems. With that said, I certainly do not wish to understate the potential of SGAs. The implications of the ability to work across multiple platforms are somewhat lost on me, and I will attempt to explore them in the upcoming lecture with the authors of this piece.

I find most of my difficulty in understanding SGAs and their potential applications lies in what is said by Sieber and Sengupta on page 492. The authors describe how SGAs are divided by tasks while ALGAs are divided by themes. What I gather from this is the following statement: ALGAs are defined by an application, while SGAs are defined within an application.

It seems that these agents certainly have a place in Geography. I have been in more than one situation in which I felt like a robot, data mining unsuccessfully and reiterating nearly identical interpolations. Another potential use for SGAs crossed my mind: identifying patterns between z-spectrum graphs in Remote Sensing. My experience with these graphs is that they are very data-intensive and difficult to interpret.

Smitty_1



Autonomy?

Monday, September 28th, 2015

Geospatial Agents, Agents Everywhere by Sengupta and Sieber (2007) qualifies the distinction of geospatial agents in Artificial Intelligence (AI) research and distinguishes between Artificial Life Geospatial Agents (ALGAs) and Software Geospatial Agents (SGAs). Since I do not have much experience with ALGAs, I began thinking about SGAs, and as I read I kept returning to the various instances during my time at McGill when I had some exposure to them. One stands out in particular: in GEOG 307 we had a reading on location-allocation modeling and shortest-path analysis called Flaming to the scene: Routing and locating to get there faster by Figueroa and Kartusch (2000), in which the Regina fire department carried out a Fire Station Location Study and built a program to identify the best routes to achieve the fastest response times. Sengupta and Sieber (2007) are concerned with highlighting the legitimacy of these two AI traditions, the importance of geospatial agents' ability to work with geospatial data specifically, and their relevance to GIScience. They mention the applicability to social science problems, and I immediately thought of the Fire Station Location Study as an example of an SGA used to solve a real-world concern.

However, my certainty that this is an SGA weakened once I considered the problem of autonomy. The researchers were able to let the simulation run to determine an output, but they had predetermined all of the necessary inputs from municipal data beforehand. The authors do address the problem of autonomy for ALGAs and SGAs in AI research, but they really only distinguish between strong and weak levels of autonomy. It seems to me that defining a level of autonomy is extremely subjective, and though it is a necessary qualifier for a program to be considered within the realm of AI, it may not be the best measure. Perhaps the field of AI research would benefit from further elaboration on what is truly autonomous.


-BannerGrey


"So go downtown": simulating pedestrian movement in town centres by Haklay et al.

Monday, September 28th, 2015

Haklay et al.'s article exhibits how geospatial agents can replicate real-world environments – specifically, how pedestrians move through urban downtowns. Similar to what we discussed last class with social networks, the researchers used the concept of nodes ("waypoints") in a street network to structure each agent's "planned route" (12). Haklay et al.'s methodology for STREETS also considered impedance: obstacles (e.g. buildings or large clusters of people) that slow a pedestrian's movement from one "waypoint" to another.
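For readers curious what impedance-aware routing between waypoints looks like computationally, here is a bare-bones Dijkstra sketch over a made-up waypoint graph. It is my own illustration of the general technique, not the STREETS implementation.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a waypoint network whose edge weights
    are impedances (travel cost inflated by obstacles or crowding)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, impedance in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + impedance, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy waypoint network: a crowded plaza inflates the impedance of B -> D,
# so the pedestrian is routed around it via C.
graph = {
    "A": [("B", 1), ("C", 2)],
    "B": [("D", 10)],   # crowded segment
    "C": [("D", 3)],
}
print(shortest_route(graph, "A", "D"))   # (5, ['A', 'C', 'D'])
```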

After reading Sengupta and Sieber's review article and comprehending the technical terms it introduced, Haklay et al.'s STREETS methodology was easier to conceptualize. For instance, Haklay et al. describe an agent-based model as one that is "autonomous and goal-directed," two of the four properties described in Sengupta and Sieber's article. Although Haklay et al. do not specifically describe STREETS as a geospatial agent with all four properties described by Sengupta and Sieber, they state STREETS is unique because its agents understand where they are "spatially located" and are spatially "aware" (8).

What was interesting about this article, and what also parallels last week's article, was that many parts of the methodology incorporated multiple attributes to determine how an agent/individual makes decisions. Just as Radil et al. considered both gang relations and territory in their spatial social network, Haklay et al. incorporated "behavior" and "socio-economic characteristics" into their street network (13-14). I think incorporating multiple variables is important because it replicates the real world more accurately. Previous pedestrian movement models did not integrate the individual characteristics that would affect people's choices. For these reasons, I am interested to see how STREETS will improve in the future, and how many more modules/variables it will be able to incorporate into the agent-based model.

-MTM

Geospatial Agents

Monday, September 28th, 2015

Okay so, I think Sengupta and Sieber's (2007) lit review and discussion of artificial intelligence research within GIScience has been the most thought-provoking article we've had to read so far, and I'm not just saying that to suck up to the profs. The subject material is current and very relevant to one of my fields of interest in GIS, which is programming geospatial applications.

Anyways, they mention the four properties necessary for software to be considered an intelligent agent (I've sketched one possible rendering in code just below):

(1) autonomous behavior; (2) the ability to sense its environment and other agents; (3) the ability to act upon its environment alone or in collaboration with others; and (4) possession of rational behavior
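Before I get into my scepticism, here is one way those four properties might be rendered as a skeletal Python agent. This is entirely my own hypothetical sketch, not code from the paper, and every name in it is invented:

```python
class GeospatialAgent:
    """Skeleton illustrating the four properties of an intelligent agent."""

    def __init__(self, position, goal):
        self.position = position   # cell index along a 1-D street
        self.goal = goal

    def sense(self, other_agents):
        """(2) Sense the environment and other agents: who shares my cell?"""
        return [a for a in other_agents
                if a is not self and a.position == self.position]

    def decide(self, neighbors):
        """(4) Rational behaviour: wait out a crowd, otherwise keep going."""
        return "wait" if len(neighbors) > 2 else "step"

    def act(self, action):
        """(3) Act upon the environment (here, moving along the street)."""
        if action == "step" and self.position != self.goal:
            self.position += 1 if self.goal > self.position else -1

    def run(self, other_agents):
        """(1) Autonomy: once started, the loop needs no human prompting."""
        while self.position != self.goal:
            self.act(self.decide(self.sense(other_agents)))

agent = GeospatialAgent(position=0, goal=5)
agent.run(other_agents=[])
print(agent.position)   # 5
```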

I'm pretty sceptical when it comes to artificial intelligence. Obviously a system that possesses these four qualities can be considered more "intelligent" than most software, but I think that whether a piece of software actually qualifies as an "intelligent agent" depends on one's interpretation of what each of the four properties entails.

Similar to ClaireM, I question what "autonomy" actually entails, because it could mean the ability of software to run and maintain itself free of human prompts (that is, it recognizes on its own when it is supposed to run, instead of needing to be "started" to perform a task), or it could mean the much simpler concept of being able to be "started" and then left to run until completion. In my opinion the latter does not count as full autonomy and as such should be considered less "intelligent". The types of programs referred to in this paper all seem to be of this kind.

While these systems may be able to sense their environment, they cannot do so without being first given an environment within which to operate. The paper also doesn’t really touch upon the notion of sensing and interacting with other agents, which most geospatial software systems would not do on their own since they run separate from one another. Finally, all computer programs created as tools are designed to use algorithms to evaluate situations and make decisions, so I think any software system can be said to possess rational behaviour.

I feel the four qualifications for software to be considered "intelligent" are not defined well enough in this article to establish a clear dividing line between intelligent and non-intelligent software. I don't think this is all that important, though, because it doesn't affect the software's usefulness, and it's undeniable that geospatial software systems can be intelligent agents.

-yee

Geospatial Agents, Agents Everywhere…

Saturday, September 26th, 2015

Sengupta and Sieber’s review of artificial intelligence (AI) agent research history and its current landscape sought to define and ponder the legitimacy of ‘geospatial’ agents within GIScience.

The discussion of artificial life agents, often used for modeling human interactions and other dynamic populations, complemented my current research into complexity theory and agent-based modeling of chaotic systems that are sensitive to initial conditions, as it holistically related them back to GIScience.

However, 'software' agents, defined as agents that mediate human-computer interactions, were an unfamiliar notion to me. I found it easier to read about these types of agents if I mentally replaced the term with 'computer program', 'process', or 'application'.

As a student familiar with software development, the article made me question a lot of the computational theory I’ve learned thus far, and raised some big questions: What does it truly take for an agent or program to be characterized as autonomous? If an agent or program engages in recursive processes, does that count as being autonomous, as it essentially calls itself to action? And when is a software agent considered to be ‘rational’?

I wonder if rationality in decision making should even be included in the definition of an agent. Humans often make irrational decisions. Our decision-making processes and socialization patterns are highly complex and difficult to model, issues that become apparent even when attempting to analyze static representations of spatial social networks.

I look forward to seeing how this conversation evolves.

-ClaireM

Spatializing Social Networks

Monday, September 21st, 2015

Radil, Flint, and Tita’s “Spatializing Social Networks: Using Social Network Analysis to Investigate Geographies of Rivalry, Territoriality, and Violence” (2010) demonstrates a promising integration of social network and spatial analysis in a study of gang violence occurrences and intergang rivalry in an LA neighbourhood with an above-average rate of violent crime.

The article highlights the fundamental interdependence of relationships and space, and thereby the fruitfulness of analyzing both “network space” and “geographic space” at the same time. While the article speculates as to why certain areas with particular network roles may experience higher rates of violence – interstitial or “brokerage” areas in particular – it stops short of musing on the potential predictive power that such analyses may one day hold.

Discussions of GIScience in 2015 inevitably seem to gravitate toward questions of power, which is fitting, as we are at a moment of paradigmatic change in the amount of geolocated information collected on all digitally-active individuals on a regular basis. On a fundamental level, the article's attempt at developing a synthesized geographic and social network analysis method points to a future where individuals' and groups' positions in network and geographic space can be studied simultaneously and automatically.

This in turn has considerable implications for questions of both collective and individual freedom. If Facebook, using data derived from the frequency of message exchanges, can predict a breakup between two romantically involved individuals, as CRAZY15 noted, what could it do with geolocational data synthesized with all of its existing (and future) network/relational indicators?

-grandblvd

Spatializing Social Networks

Monday, September 21st, 2015

In "Spatializing Social Networks: Using Social Network Analysis to Investigate Geographies of Gang Rivalry, Territoriality, and Violence in Los Angeles", Radil, Flint and Tita describe the current academic understanding of embeddedness and how it was integrated into their study of geographic gang violence in an LA neighbourhood. I liked this study because, while the idea sounds intuitive once explained, it represents a clear advancement in how space is conceptualized.

The exclusionary vernacular used in the theory section was something that could have been improved upon. However, the neighbourhood gang violence case study brought more clarity to the topic, and my understanding of the first section improved after a second full reading of the article. I like the concept of different types of embeddedness, and especially the reference to Massey's work and the idea that social networks are "stretched out over space" – a key finding in the subsequent gang violence study. The description of the CONCOR method was initially confusing but seemed like an innovative way to use quantitative methods to produce more qualitative results. A potential follow-up could see whether any new connections were teased out by investigating the neutral or positive relations between gangs. I would also like to see how the final figure (6C) matched up with various locals' perspectives on where gang territories are defined, versus the formal census blocks. Finally, the specific acknowledgement of the study as a static view piqued my interest as to how temporal scales could be included in the future. Overall, a thought-provoking read.

-VdeV

Radil et al 2010

Monday, September 21st, 2015

This study takes on quite a difficult task, in that it attempts to quantitatively analyze a social system while simultaneously using two distinct concepts of space. In this case, the gang rivalries correlated for the most part with the geographic proximity of the gangs. I think the utility of this approach would have been more obvious if the spatiality of rivalries and geographic proximity had been much more divergent. The fact that gangs were usually embedded with, and structurally equivalent to, neighboring gangs makes the results appear underwhelming. However, certain exceptions, such as the existence of a center-periphery geography in the northern part of Hollenbeck, as well as the condition of "betweenness" being associated with more violence, exemplify the exceptions to the norm that would be difficult to discern without this kind of analysis. I struggled to grasp at which stage social networks and geographic space were combined. When using CONCOR to make the dendrogram, did both location and gang rivalry influence which position each gang was placed in? An aim of this study was to quantify the interaction between the two spatialities, but it seems to me that the network positions are themselves quantified while their comparison to the geographic positions is only qualitative, i.e. "north-south" and "center-periphery", with these characteristics determined visually. Nevertheless, these types of qualitative characterizations should still be immensely useful in predicting gang behavior. This type of analysis could potentially handle any combination of spatialities, including ones in which neither of the two is geographic space.
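For anyone else who found CONCOR opaque, its core is just iterated column correlations: correlate the columns of the relation matrix, then correlate the columns of the result, and repeat until every entry converges to +1 or -1, at which point the signs define two blocks. Below is a bare-bones sketch of a single split, using an invented rivalry matrix rather than the Hollenbeck data:

```python
import numpy as np

def concor_split(matrix, iterations=50):
    """One CONCOR split: repeatedly take Pearson correlations of the
    columns until entries converge to +/-1, then group nodes by sign.
    A toy sketch of a single split, not the full dendrogram procedure."""
    m = np.corrcoef(matrix.T)          # correlations between columns
    for _ in range(iterations):
        m = np.corrcoef(m)             # correlate the correlations
    block_a = [i for i in range(len(m)) if m[0, i] > 0]
    block_b = [i for i in range(len(m)) if m[0, i] <= 0]
    return block_a, block_b

# Invented rivalry matrix for four gangs (1 = rivalry). Gangs 0-1 and
# 2-3 have matching rivalry profiles, so they land in the same block.
rivalries = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [1, 1, 0, 0],
                      [1, 1, 0, 0]], dtype=float)
print(concor_split(rivalries))   # ([0, 1], [2, 3])
```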

-yojo

Spatializing Social Networks

Monday, September 21st, 2015

In Radil et al.'s Spatializing Social Networks (2010), the authors introduced an innovative method, called 'structural equivalence' (2010:308), for integrating the social network concepts of closeness and space with those of proximity and location. A case study of rivalry and territoriality in the Hollenbeck Policing Area of Los Angeles demonstrates how social network analysis can go beyond mapping the spatial networks of gangs in this area (their 'relational embeddedness') to also capture the gangs' social positions (their 'structural position in network space') within the Hollenbeck gang network (2010:309).

Radil et al.'s publication achieves its goal of presenting readers with thoughtful (for 2010 at least) methods of incorporating the fundamental ideas behind sociological constructs of human interaction and social networks into spatial network analysis. I found the publication to have a thorough literature review of past forays into structural equivalence and the concepts of spatial and social embeddedness, albeit one difficult to follow at times for readers unfamiliar with this GIScience subdomain.

I found Radil et al.'s Spatializing Social Networks to be an intriguing exercise in harmonizing the social and geographical sciences. Most of all, I appreciated the authors' obvious endeavour to use as much scientific terminology as possible (and very little tool-talk), in an effort to elevate geographic information science away from the simplistic 'GIS is only a tool, not a science worthy of funding' label.

The authors addressed a question that arose in my mind as I read the article: that of the temporal dynamism of spatial and social networks. I would be very interested to see how the CONCOR (convergence of iterated correlations) positional analysis would fare if a third, temporal dimension were added. How would social constructs of space change over time? How would changes to the temporal resolution (i.e. scale) affect the magnitude of these changes? How could these results sway our understanding of Hollenbeck and the structural positions of gangs in network space?

-ClaireM

Spatializing Social Networks

Monday, September 21st, 2015

The article Spatializing Social Networks: Using Social Network Analysis to Investigate Geographies of Gang Rivalry, Territoriality, and Violence in Los Angeles by Radil et al. (2010) sheds light not only on the relevance of the GIScience lens (though the authors don't explicitly use the term GIScience) but also on its broad applicability for understanding social problems. The authors first examine the idea of embeddedness, an integral theme of geography, and tie it to the gangs' territoriality and associated violence. These two variables work very well for defining the social networks of gangs, as their relationships are based on continuing rivalries. I was particularly intrigued by how they used three splits of the correlation analysis to quantify the relationships between territories; this has potential to support increased surveillance of areas considered hot spots, as well as to outline areas for interventions. By interventions I mean targeted anti-gang education and off-the-street programs for schools in the region, especially since the school district was mentioned as one of the social factors separating the gangs in Hollenbeck from gangs in the rest of LA. Lastly, and on a larger scale, social networks have always been integral to humanity, as we are inherently social beings; but as the world becomes more interconnected through globalization, I am left contemplating the implications of such rapidly expanding social networks and wondering how spatial networks will continue to shape the modern world.

-BannerGrey



Spatializing Social Networks: Quantification Gone Too Far?

Monday, September 21st, 2015

In their 2010 article, Radil, Flint and Tita attempt to combine spatial analysis techniques with a social networks approach to tease out spatial patterns in gang activity across Hollenbeck, Los Angeles. While successful in presenting a three-tiered distribution of gang activity with interesting spatial phenomena, including the effect of 'relational betweenness' on violent crime (p.321) and elements of 'north-south' (p.317) and 'core-periphery' (p.320) territoriality, the authors highlight the potential of the technique to be used 'in concert with other ways of knowing' (p.322) and suggest that the static nature of their study is an important limitation considering the 'dynamism' (p.321) of gang activity across space and time.

While I do appreciate the authors' efforts to quantify a traditionally qualitative area of geography, their detachment from the subject of study leaves me very uneasy. Gang violence affects the day-to-day experience of hundreds of thousands of Angelenos, and to 'focus on methodology' (p.322) while engaging so little with its repercussions leaves the door wide open for criticism from a cultural geography standpoint.

Could these patterns have been identified through a qualitative approach? How do testimonials acquired 'in the field' stack up against the matrices, network diagrams and spatial analyses in this paper? As discussed in class, quantification (much like the term 'science' as a qualifier for the 'S' in GIS) has traditionally been associated with a higher level of recognition and funding in academia. While there are undoubtedly benefits to this, the value of this week's study is limited so long as we agree that it uncovers little new information and only bolsters (or at least claims to bolster) the legitimacy of known patterns and distributions.

The First Law of Geography was derived from a similarly complex attempt to model urban sprawl: a simple message with enormous repercussions, drawn from a paper riddled with number crunching and model making (see Tobler, 1970). The degree to which we can use quantitative methods for inductive reasoning in social geography is, in my opinion, an interesting debate that I would love to expand on in class. Let's remember that Los Angeles is more than a collection of statistics…

-XYCoordinator

Spatializing Social Networks

Monday, September 21st, 2015

In this week's article, Spatializing Social Networks, the researchers looked at gang violence in a section of Los Angeles. To understand the context of the violence they examined two spatialities of gangs: first their geographic position, and second their position within a network of rivalries. It was important for the researchers to start with a Moran's I test, which showed that there was no significant spatial autocorrelation of gang violence, meaning that a purely spatial analysis would be unhelpful for understanding it. This was a clear rationale for moving their study beyond location alone and including network analysis as well. Using GIS they mapped the positions of gang territories and overlaid the network of rivalries. In their analysis they found a first split dividing the region geographically between north and south of the freeway, and then a second split within each region between a core of dense, violence-producing rivalry linkages and a more peaceful periphery.
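As a point of reference, the Moran's I statistic they started with is simple to compute. Here is a minimal sketch with an invented adjacency matrix and violence counts (not the Hollenbeck data):

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I: near +1 means similar values cluster in space, near 0
    means no spatial pattern (roughly what Radil et al. found), and
    negative values mean neighbours tend to differ."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()                        # deviations from the mean
    numerator = (w * np.outer(z, z)).sum()  # spatially weighted cross-products
    return (len(x) / w.sum()) * numerator / (z ** 2).sum()

# Invented example: violence counts in four areas arranged on a ring,
# where each area borders the two areas adjacent to it.
violence = [10, 2, 9, 3]
adjacency = [[0, 1, 0, 1],
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [1, 0, 1, 0]]
print(morans_i(violence, adjacency))   # -0.98: neighbouring values differ
```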

This study was interesting and shows how, as discussed in class, GIS can be used to improve knowledge for law enforcement. In this specific case, nobody outside of the LA gangs would argue this knowledge is a bad thing: the data was collected by survey from willing law enforcement and gang informants/experts. However, it raises an interesting question for scientists developing methods of simultaneously analyzing social networks and geographic space. What are the implications now that so much data can be gleaned from Twitter or Facebook users? Could law enforcement be made more efficient by predicting spaces of rivalry, or could it be used as an authoritarian tool? In a Ukrainian or Syrian uprising scenario, to what extent could governments use these same techniques to quickly quell dissent?
-anontarian