Archive for the ‘506’ Category

Scaling Behavior of Human Mobility

Sunday, November 3rd, 2019

This conference paper discusses the spatio-temporal scaling behavior of human mobility through an experimental study of five datasets from different areas and generations. The results are consistent with the literature: human mobility shows characteristic power-law distributions, not all datasets behave alike, and so on. However, the basic principles of human mobility are not well explained. The analysis (case studies), carried out on large amounts of data generated by new measurement techniques, examines the impact of the spatial and temporal sampling period on aggregate metrics. This analysis and the discussion of results contribute real insight into how scaling behavior varies and how massive datasets can be interpreted, leading to the general conclusion that spatio-temporal resolution matters a great deal when describing human mobility. That issue is not only associated with human mobility analysis; it matters in plenty of fields across GIScience.
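
For reference (my own aside, not the paper's notation), the "characteristic form" usually meant here is a heavy-tailed, often truncated, power law for displacement lengths, with the exponent and cutoff varying from dataset to dataset:

```latex
% Typical displacement-length distribution reported in the human-mobility
% literature; \beta (exponent) and \kappa (cutoff) are dataset-dependent.
P(\Delta r) \;\propto\; (\Delta r + \Delta r_0)^{-\beta} \, \exp\!\left(-\Delta r / \kappa\right)
```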

What, in particular, makes human mobility distinctive, and what influences it? Is a discussion of spatial data quality or spatial data uncertainty necessary before or after analyzing movement datasets? Is there any debate about the definition of human mobility and the metrics used to measure it? I expected more on the fundamental issues of human movement analysis, which are still vague to me, rather than case studies showing basic rules of human mobility and their relationship to scaling issues.

SRS for Uncertainty — some brief thoughts

Sunday, November 3rd, 2019

Quale — a new word I will almost certainly never use.

It does however represent a concept we all have to wrangle with. Forget statistical models, literally no representation is complete. I tell you to imagine a red house – you imagine it. But was it red? Or maroon, burgundy, pink, or orangish? It’s not just a matter of precision: what we are communicating depends on what we both think of as ‘red’, or ‘maroon’, or ‘burgundy’, or whatever else. We might also have ideas about what sorts of red are ‘house’ appropriate. An upper-level ontology might suggest to us a red-ness that is universal. But no houses in my neighbourhood are that bright Lego red. Why not?

Some of what Schade writes reminds me of Intro Stats: error is present in every single observation. This sort of error can be thought of in terms of explained and unexplained variance. Variance is present in all data; the unexplained variety is what may have arisen not only from apparatus error, but also from what we describe as uncertainty in data.
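
Put in intro-stats terms (my own aside, not Schade's), this is the usual decomposition of total variation into a part the model explains and a residual part that carries the apparatus error and everything else we call uncertainty:

```latex
% Total variation = explained variation + residual (unexplained) variation;
% this identity holds exactly for a least-squares fit with an intercept.
\underbrace{\sum_i (y_i - \bar{y})^2}_{SS_{\mathrm{total}}}
  \;=\; \underbrace{\sum_i (\hat{y}_i - \bar{y})^2}_{SS_{\mathrm{explained}}}
  \;+\; \underbrace{\sum_i (y_i - \hat{y}_i)^2}_{SS_{\mathrm{residual}}}
```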

Schade’s temperature example is handy: the thermometer doesn’t read 14 degrees – it reads 13.5-14.5 with 68% probability. The stories we tell aren’t about what we say, but what we mean. This sort of anti-reductionism is also at the root of complexity theory. Characterizing systems as collections of static, linear components disregards emergence, which is what explains why complex things are greater than the sum of their parts. Applied machine learning research also appreciates this anti-reductionism: the link to AI Schade makes, I THINK, is about how applied machine learning researchers aren’t really interested in determining the underlying relationships of phenomena – only the observed patterns of association. Methods that neglect the former and embrace the latter perspective explicitly consider their data to be incomplete and uncertain to some degree. Though to be honest this connection seems forced in the paper, but I’m happy to help force it along. 🙂
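
Just to convince myself of the 68% figure, here's a quick sketch (mine, not Schade's) treating the reading as a Gaussian measurement centred on 14 with a standard deviation of 0.5; the ±0.5 band is one standard deviation, which is where the ~68% comes from:

```python
# Model the thermometer reading as a Gaussian measurement: mean 14.0, sd 0.5
# (assumed values chosen to match the 13.5-14.5 / 68% example).
from scipy.stats import norm

reading = norm(loc=14.0, scale=0.5)
p_within = reading.cdf(14.5) - reading.cdf(13.5)
print(f"P(13.5 <= T <= 14.5) = {p_within:.3f}")   # ~0.683, i.e. one standard deviation
```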

Spatial data quality: Concepts

Sunday, November 3rd, 2019

This chapter in the book “Fundamentals of Spatial Data Quality” takes a shot at the basic concepts of spatial data quality, pointing out that the divergences between reality and its representation are what spatial data quality issues usually deal with. It identifies several places where errors can arise during the data production process, such as data manipulation and the human side of data creation. Moreover, spatial data quality is summarized as being assessed from internal and external perspectives. This chapter explains well what data quality is and what the errors could be, and it is very easy to understand.

It is interesting that the introduction starts with a quote, “All models are wrong, but some are useful”. However, does that mean all spatial data, or any data created, can be interpreted as the product of a model or a filter? The authors argue that a representation of reality may not be fully detailed and accurate but can still be partially useful. But how to determine whether data with such uncertainty or errors should be accepted is a much more urgent problem. Also, since the topic is “spatial data uncertainty” and the chapter discusses spatial data quality issues, does uncertainty simply mean the different sources of error assessed in spatial data quality?

The chapter defines internal quality as the level of similarity between the data produced and perfect data, while external quality means the level of concordance between the data product and user needs. My thought is: if users participate in the data production process (which concerns internal quality), will the external quality be efficiently and effectively improved? Could we just replace “as requested by the manager” with “what the user wanted” in Figure 2.4 and have no external quality worries at all?

Thoughts on “Miller et al. – Towards an integrated science of movement”

Sunday, November 3rd, 2019

“Towards an integrated science of movement” by Miller et al. lays out the advances that have been made in the understanding of mobility and movement as a whole given the growth of location-aware technologies, which have made data acquisition much more accessible. They are interested in synergizing the components of animal movement ecology and human mobility science to promote a science of movement.

With regard to mobile entities, defined as “individually identifiable things that can change their location frequently with respect to time”, are there specific definitions that clearly spell out what “frequently in time” means? Examples are given with birds and humans, but would trees or continental masses be considered mobile entities as well?

It would be interesting to assess the impact of tracking location on the observations, in other words whether tracking can affect the decisions made by whoever or whatever is being tracked. For example, a human who knows they are being tracked might change their trajectory solely because they do not want to potentially compromise sensitive areas or locations they visit, while an animal could behave differently if the technology used to track its movement makes it more visible to predators. There is an ethical dilemma in tracking a human being without their consent, but it must be acknowledged that tracking does come with some consequences in terms of results differing from reality.

Reflecting on “Scaling Behavior of Human Mobility Distributions”

Sunday, November 3rd, 2019

Analyzing big data is a challenge across GIS, and movement is no exception. Cutting out potentially unnecessary components of the data in order to reduce the size of the dataset is one way of addressing this challenge. In Paul et al.’s piece, they look at how much cutting down on datasets’ time windows may affect the resulting distributions.

Specifically, they examine the effects of changing the spatio-temporal scale of five different movement datasets, revealing which metrics are most robust for comparing human movement across datasets. The findings of the study, which examines GPS data from undergraduate students, graduate students, schoolchildren, and working people, reveal that changing the temporal sampling period does affect the distributions across datasets, but the extent of this change depends on the dataset.
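
To get a feel for what changing the temporal sampling period actually does, here is a rough sketch of my own (not the authors' code): keep only every k-th fix of a toy trajectory and recompute the displacement distribution. Coarser sampling merges short hops into fewer, longer displacements, which is the kind of distributional shift the paper measures.

```python
import numpy as np

def displacements(xy):
    """Euclidean step lengths between consecutive fixes (xy is an n x 2 array)."""
    return np.hypot(*np.diff(xy, axis=0).T)

# Toy trajectory: a random walk standing in for one person's GPS fixes (in metres).
rng = np.random.default_rng(0)
xy = np.cumsum(rng.normal(scale=50.0, size=(1000, 2)), axis=0)

for k in (1, 5, 20):                    # coarsen the sampling period by a factor of k
    d = displacements(xy[::k])
    print(f"every {k:>2} fixes: median step {np.median(d):7.1f} m, "
          f"max step {d.max():8.1f} m, n steps {len(d)}")
```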

After reading this piece, I would like to understand more about how researchers studying movement address privacy. I’m sure having enormous datasets of anonymized data addresses part of this issue; however, I’m sure different government agencies, organizations, corporations, etc. collecting this data have different standards regarding the importance of privacy. How strictly enforced are data privacy laws (looking at movement data specifically)? 

Thoughts on “Fisher et al. – Approaches to Uncertainty in Spatial Data”

Sunday, November 3rd, 2019

This article by Fisher et al. clearly lays out the components and concepts that make up spatial data uncertainty and explains the solutions that have been proposed to counteract their potential consequences for data analysis and interpretation. A better understanding of what uncertainty really is helped me realize that an overwhelming majority of geographical concepts are poorly defined objects, being either vague or ambiguous.

One solution for reducing the effects of discord ambiguity, although perhaps not realistic however practical it would be, would be to create a global lexicon that stipulates how certain statistics need to be calculated and defines concepts on a global scale. This would allow for easier comparisons between regions currently using different approaches and would standardize the process. However, it is important to note that this could not be applied to every statistical measurement, definition or observation, given that there could be biases against certain regions. An example could be that a road is conceptualized differently in one part of the world when compared to another.

On the topic of data quality, the advent of geolocational technologies has propelled geospatial data to the forefront for organizations and businesses aiming to profit from its use. Without trying to be too cynical, wouldn’t private organizations have an incentive to manipulate data quality to the detriment of others in order to benefit themselves? This is where Volunteered Geographic Information (VGI), an example being OpenStreetMap, comes into play to level the playing field against proprietary offerings, in this case Google Maps.

Suggestions concerning development of AI in GIS

Monday, October 28th, 2019

This paper, written by Stan Openshaw in 1992, introduces the concept of applying artificial intelligence in GIS by explaining how AI emerged and came to be applied in geographic information system development, and why AI is inevitably needed and matters a great deal in spatial modelling and analysis. AI brings a lot to GIS development in terms of managing large spatial databases, recognizing patterns in spatial data, and improving spatial statistical analysis. Neurocomputing, as a revolution of its time, makes the analysis and modelling of large data sets possible; both supervised and unsupervised classification remove much of the difficulty and uncertainty of manual analysis and computation in pattern studies. And AI is unavoidable when applying spatial data mining to study large spatial data sets. The paper has a clear structure and explains the complicated concept of AI application in GIS well, with a strong background on how and why AI should be used in GIS, though I do not fully understand specific methods like expert systems and ANNs.

As we all know, spatial data in GIS is quite different from other types of data, with characteristics such as spatial dependence, space-time dependence, and non-stationarity, and AI technologies offer more ways to deal with those complex properties. However, I am wondering whether these characteristics of spatial data sets, and the special ways of treating them with AI, help develop AI technologies themselves (method structure, algorithm development). Will GIS bring opportunities and development to AI? What has GIS brought to AI?

Research Challenges in Geo-visualization

Monday, October 28th, 2019

This article gives us an overview of the importance of research on geovisualization, discusses some major themes and related issues, and raises the main challenges that have emerged in geovisualization. The authors then summarize the crosscutting research challenges and problems and end with recommended actions for addressing them. Generally speaking, the paper goes through most of the research challenges for geovisualization from various aspects and lists them one by one for each theme, but it does not quite work for me: although the problems are discussed theme by theme and from a crosscutting view in clear lists, the structure of the paper still confuses me a little. Many claims, such as that visualization methods should be developed for better data mining, or that new tools and methods should be improved alongside increasingly capable representation technologies, are not explained clearly, and some points overlap when the challenges are illustrated. Why representation, the integration of visual and computational methods, interface design, and usability are the four major themes, and what makes them distinct from and related to one another, is not well explained either.

The challenges discussed in this paper are not limited to visualization technologies; most of them are challenges faced by many areas of GIS, touching on data formats, data volume, AI applications, human-centered design, and so on, and the same issues should also be discussed for topics like the development of spatial statistical analysis methods. I am wondering how to balance information accuracy (value) against interface friendliness. Also, is geovisualization always the final step of data analysis, making results more understandable for further use? Will geovisualization technologies become more important for dealing with the data and information itself, or remain focused on representing results better?

Thoughts on “Koua et al. – Evaluating the usability of visualization methods in an exploratory geovisualization environment”

Sunday, October 27th, 2019

This article by Koua et al. articulates that the choices made and the techniques used when designing a geovisualization are crucial for conveying all the necessary information to the interpreter. Depending on the objective, certain visualizations were more effective at conveying the necessary information and were more usable than others, something that was tested with scientists in the field.

An interesting addition to the research would have been to test the geovisualizations with non-scientists, given that such visualizations are becoming increasingly present in interactive newspaper articles online and on websites in general: what is easily conveyed to scientists may not be as easily understood by the general public. This research reinforces the assumption that these visualizations are only used by professionals in the field, which is no longer the case. In an era where misinformation is rampant on social media and online, understanding how certain geovisualizations are interpreted by the general public could certainly help in designing more intuitive geovisualization techniques.

Technological advancements in the coming years will potentially open the door for new visualization techniques, which, for example, could make use of augmented reality and other emerging technologies. This could make it easier to visually represent certain situations and aid in the transfer of information.

Thoughts on VoPham et al. “Emerging trends in geospatial artificial intelligence (geoAI)”

Sunday, October 27th, 2019

The article by VoPham et al., “Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology”, provides us with a general understanding of what geoAI is and how it is utilized.

The interdisciplinary nature of geoAI is highlighted not only by the scientific fields that develop and utilize geoAI, but also by the wide spectrum of applications “to address real-world problems” it has. These vary from predictive modeling of traffic to environmental exposure modeling. Focus on machine learning, data mining, big data and volunteered geographic information has helped the expansion of geoAI. The main topic of this paper, however, is how this scientific discipline can be applied to the advancement of environmental epidemiology.

I find the future possibilities and applications of geoAI particularly exciting. As explained in the article, progress in geoAI has allowed for more accurate, high-resolution data, which has the potential to revolutionize the use of remote sensing. As with most evolving GIScience technologies, we have yet to uncover their full potential and applications.

Thoughts on Koua et al. “Evaluating the usability of visualization methods in an exploratory geovisualization environment”

Sunday, October 27th, 2019

The article “Evaluating the usability of visualization methods in an exploratory geovisualization environment” by Koua et al. reports on their findings regarding visualization methods and geovisualization. The study aimed to evaluate how the use of different visualization tools impacted the usability and understanding of geospatial data.

I found it quite interesting to see the results of the study: out of six different ways of visualizing the same data, the map was found to be the best tool for tasks such as locating, ranking and distinguishing attributes. On the other hand, the self-organizing map (SOM) component plane was better for the visual analysis of relationships and patterns in the data. This brings a question to mind about the type of users interacting with the product.

In the study, the participants were made up of 20 different individuals with a background in GIS and data analysis. This means that they had experience with GIS tools and their own preferences of tools for analysis – they knew what to expect and (generally) how to use the tools. I wonder how the results would change if the participants of the study varied more in their knowledge of GIS. How would someone with no particular experience with GIS tools interact with and understand that same data? I find this particularly interesting because when creating a Geoweb product for public use that supports analysis, user interaction with and understanding of the product is crucial.

Thoughts on “VoPham et al. – Emerging trends in geospatial artificial intelligence (geoAI)”

Sunday, October 27th, 2019

In “Emerging trends in geospatial artificial intelligence (geoAI)”, VoPham et al. explain the emergence of geoAI as a new research field combining concepts, methods and innovations from various fields, such as spatial science, artificial intelligence (AI), data mining and high performance computing, and give examples of recent applications in real-life situations. The fusion between AI and GIS helps us obtain more accurate representations compared to traditional methods, given the ability to make use of spatial big data.

As mentioned in the article, geoAI has the ability to revolutionize remote sensing, with the potential to recognize earth features more accurately. Slight differences in the spectral response of a pixel could be picked up by an algorithm trained to detect these ever so small differences, which could, for example, help detect and respond to forest fires more rapidly. A research project I worked on last year aimed at assessing the extent of the Fort McMurray forest fire of 2016, and although the results were extremely similar to what had been obtained by official government sources, the use of geoAI could have overcome the limitations of the NDVI and NBRI indices used.
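
As a side note of my own (not from the article), the two indices I relied on are simple band ratios, so they can only see what those bands see, whereas a trained classifier can in principle draw on the full spectral and spatial context of each pixel. A minimal sketch of the two indices, with made-up reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)      # epsilon guards against divide-by-zero

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR); burned areas score low."""
    return (nir - swir) / (nir + swir + 1e-9)

# Toy reflectance values for a healthy pixel and a burned pixel (illustrative only).
nir  = np.array([0.45, 0.20])
red  = np.array([0.05, 0.15])
swir = np.array([0.10, 0.35])
print("NDVI:", ndvi(nir, red))    # high for the healthy pixel, low for the burned one
print("NBR: ", nbr(nir, swir))    # positive for healthy, negative for burned
```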

As with any emerging scientific field, it will be interesting to see what geoAI will be applied to next. An example would be spatial agent-based modelling (ABM), which aims to simulate the actions of specifically defined agents in space and could benefit greatly from geoAI and the input of spatial big data. Geographical ontologies could also be redefined by deep learning, which might conceptualize things differently from the way we currently do.

Thoughts on Evaluating the usability of visualization methods in an exploratory geovisualization environment (Koua et al., 2006)

Sunday, October 27th, 2019

In this paper, Koua et al. developed a method for assessing geovisualization use and usability based on variables including user tasks, time to complete tasks, usefulness, user reactions, compatibility with the user’s expectations for the different tasks, flexibility, perceived user understanding of the representations used, user satisfaction, and user preference rating. Their results seem to be decently analyzed and are explained in an understandable way: different geovisualization methods have advantages for certain tasks and disadvantages for others. This study made me see the geovisualization process as a set of tools for interpreting data rather than as a mere representation of the data, and as something that can be systematically assessed to decide which geovisualization method is better suited to the tasks it is expected to support. This helps me make the decision when choosing a geovisualization method, which in turn reflects what kinds of tasks I intend to support for viewers, and what viewers will expect to gain from and do with the data.

However, I do find their assessment design not that convincing, given that the participants involved were all academics or researchers in science-related fields. Not only am I unconvinced that such assessments should be made only with professionals, since they exclude the general public, but they also exclude policy makers, urban planners, and social activists, who are also potential users of geovisualization products. And sometimes the general public and social-science professionals tend to use geovisualization products the most, due to a lack of the programming or statistical skills needed to process and analyze the data themselves. Thus, I would argue the flaw of this assessment process is that it fails to include all potential users of geovisualization products by choosing participants only from among natural science professionals.

Thoughts on Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology (VoPham et al., 2018)

Saturday, October 26th, 2019

VoPham et al. summarize the major trends in the practical application of geoAI (mostly machine learning, deep learning, and data mining), and its specific practice with regard to environmental epidemiology. Using the example of Lin et al.’s air pollution concentration study in Los Angeles, the authors illustrate how geoAI is used to process a combination of big data from different sources, as well as to run efficient computational processes for pattern detection and modelling.

However, a question nagged at me from their introduction of geoAI in practical use through to their vision of geoAI’s future: what is the exact difference between machine learning/data mining algorithms and geoAI? Is geoAI merely a combination of different machine learning or data mining algorithms? Or is it something more complicated than what is illustrated in the article? In their example of modelling air pollution, Lin et al. (the authors of the original study) say that specialized geospatial data experts are still needed to decide what kind and quality of data can go into the modelling, to avoid a “garbage in, garbage out” situation. To me, however, if a geoAI cannot reach the standard of identifying and evaluating what should be included in the computational process, it is just a combination of different computational algorithms. Self-evolution and a decision-making process might be the key to distinguishing geoAI from a combination of algorithms.

Some may argue that geoAI is only in its early stages, and that much more needs to be done for geoAI to self-evolve and make decisions. However, if geoAI cannot adapt to different spatial or temporal instances, what is the need for an AI instead of a team of machine learning programmers and data miners? I believe that reaching a proper self-evolutionary ability to adapt to different spatial and temporal instances, as well as deciding what goes into the modelling process and which parameters or logic need to change to fit different input variables, is essential before we can call it geoAI rather than a systematic geospatial data modelling algorithm.

Thoughts on Spatial data mining and geographic knowledge discovery – An introduction (Mennis & Guo, 2009)

Sunday, October 20th, 2019

Mennis and Guo’s work summarizes the trends, progress, and achievements in spatial data mining, processing, and interpretation up to 2009. It is a very helpful review for those who are not familiar with most of the techniques and approaches in the field of spatial data mining. Their work focuses especially on spatial classification and prediction, spatial association rule mining, spatial clustering, regionalization and point pattern analysis. Although this article makes everyone excited about the boom in geospatial data and mining techniques, which feed into research, the private sector, and sometimes government operations, the opportunities come with a cost.

I am not saying that more available geospatial data is bad; however, there are certain challenges the authors fail to discuss in detail. First is selection bias when mining spatial data. People who are aware of GPS tracking devices and do not want to share their geospatial data, and those who do not yet have access to GPS tracking devices, are excluded from some of the hottest geospatial data mining realms, such as social media spatial data mining. This creates a selection bias in the data, which may lead to the unintended exclusion of populations from the interpretation of the data. Although it can be mitigated if data from various sources are joined together in the processing and interpretation stages, it is definitely something spatial data miners should be aware of.

Second is the privacy issue: more geospatial data does not actually make everyone happier, and it has a cost. Although more and more geospatial data is masked to protect privacy, the huge flow of data inevitably exposes some or most of the population to a privacy crisis. Thus data miners have to be aware of the need to protect the privacy of study subjects and data contributors, and proper oversight of this field is needed to prevent the malicious mining of geospatial data.

Lastly, the availability of seemingly infinite geospatial data is certainly thrilling for people who work in this field. However, it also increases the difficulty of the work and the skills required of data miners. It is not only computational skills that allow data miners to mine the data. More important is the skill to discover, to observe, and to formulate the right question, which to this day still depends heavily on humans to make the call. The ability to look at geospatial data critically is also necessary: unless the data fit our questions perfectly, there are uncertainties that need to be addressed rather than trusting the data blindly because of its size.

On Optimal Routes in GIS and Emergency Planning Applications (Dunn & Newton, 1992)

Sunday, October 20th, 2019

Dunn & Newton offer a pretty decent summary of the most basic forms of optimal/shortest path algorithms, and what they say still holds true (at least to my understanding …) some 27 years later. They describe how Dijkstra’s and the “out-of-kilter” algorithms determine routes, the different uses for each of them in emergency planning, the computational requirements, and finally some potential developments in the topic. Although a verbal description of the contents of a matrix isn’t the most intuitive or interesting thing to me, I found the paper easy enough to follow and I’m glad they included the few visuals they did.

It is interesting to see what they list as future development opportunities, as most of these things are now incorporated into every current mapping site, public transit app, and dashboard GPS. Even their “most difficult challenge”, temporal change, has been met, and I can map myself to Saskatoon in a few seconds without having to worry about getting stuck in heavy traffic or road closures. It is honestly very difficult to think of something else that I would want to incorporate into shortest path algorithms or an application like Google Maps. My first two thoughts are pedestrian traffic and incorporating greater local knowledge for more complex decisions. I feel like pedestrian traffic could be done very easily through traffic projection just like street traffic, and I’m sure someone has at least tried to do it. On local knowledge, I guess I mean being able to ask something like “find me the most scenic drive”, “avoid routes where there are frequent accidents”, or “only take me down streets with ample night lighting”. Some of these could lead into problematic discussions surrounding the ideas of aesthetics (who decides?) and safety (for whom?).
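
As a reminder to myself of how little machinery the basic case actually needs, here's a minimal Dijkstra sketch (my own toy version, not the paper's matrix formulation) on a small weighted network:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path cost from source to every node; graph maps node -> {neighbour: weight}."""
    dist = {source: 0}
    pq = [(0, source)]                          # priority queue of (cost so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                            # stale entry; a better path was already found
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy road network; weights could be distances, travel times, or any other cost.
roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
print(dijkstra(roads, "A"))   # shortest costs from A: B=3, C=2, D=8 (via C then B)
```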

Thoughts on Spatial Data Mining Approaches for GIS – A Brief Review

Sunday, October 20th, 2019

This review article outlines the challenges in the use of geospatial data and in spatial data mining, and it summarizes the tasks and tools of spatial data mining. It proposes an architecture to address the challenge of huge data volumes.

This article did a good job of summarizing the tasks and tools in spatial data mining, giving me a basic understanding of the topic. However, I was a bit confused by it, mainly because it sometimes uses several terms interchangeably, such as GIS, GIS data, spatial data mining, and big data, and I had a hard time grasping the main idea the author wants to convey. It seems to argue that the challenge of spatial data mining can be viewed merely as the challenge of big data volume, and that this challenge can be solved by a “big data” approach of integrating the data. An important dimension is missing: big data is not just about volume, but also about velocity. Some spatial data, such as social media data with spatial attributes, is generated continuously and in a variety of forms. Given this, the proposed framework doesn’t seem useful to me because it doesn’t address the velocity challenge. Even apart from this, the proposed framework is not very well explained. In summary, I don’t really like this article.

Thoughts on Optimal routes in GIS and emergency planning applications (Dunn & Newton, 1992)

Sunday, October 20th, 2019

The authors of this article explain, and use experiments to illustrate, how the shortest path algorithm (Dijkstra’s algorithm) and the more complicated “out-of-kilter” algorithm fit into the field of network analysis. Although the article was written in 1992, it still provides the basics to those who are not familiar with network analysis, and it reflects the most basic level of algorithm underlying today’s more advanced network analysis methods.

One thing that caught my notice throughout the article is that, although the “out-of-kilter” algorithm the article mainly tested focuses on the shortest path, in realistic applications I would argue that time efficiency is far more important than the distance between two routes in the case of an emergency evacuation. As the authors discuss, however, more factors are indeed needed to perform a network analysis that considers time efficiency, such as peak hours, route capacity, means of transportation, slope, landscape, etc.

Another important issue in this 1992 article about network analysis is the limitation of computing power and of technological foresight. Due to the limits of computing power, only simple shortest paths between nodes could be computed, in Euclidean distance, which in most cases will run into obstacles in real-life practice due to multiple physical, social, and cultural barriers. The issue with technological foresight is that network analysis itself does not necessarily require geographical coordinates to perform; in other words, network analysis itself is not limited by a geographic coordinate system or projection. However, nowadays, when we apply network analysis to more advanced uses, like GPS tracking, navigation, or even real-time traffic monitoring and dispatch, the geographic side of network analysis cannot be totally ignored, and is sometimes quite necessary.

Thoughts on “Spatial data mining and geographic knowledge discovery – An introduction”

Saturday, October 19th, 2019

In “Spatial data mining and geographic knowledge discovery – An introduction”, Mennis and Guo articulate the four main methods used in spatial data mining, namely spatial classification, spatial association rule mining, spatial clustering and geovisualization, while also explaining the challenges linked with the spatial data mining process.

Although it is true that spatial data mining technologies have greatly evolved over the last few decades, the law always trails technological advances, which may allow unethical uses that could compromise the privacy of certain service users, especially in the private sector. While the methods presented in this article seem to be appropriate for many different cases, it could be pointed out that a partitioning spatial clustering method, which is non-overlapping, might assign a data item to cluster ‘x’ even though it could equally have been assigned to cluster ‘y’, something that could change from one run to another.
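
A small illustration of that last point, as a sketch of my own (assuming scikit-learn, which the article does not use): a point sitting halfway between two tight groups can end up grouped with either of them depending on the random initialization of a partitioning method like k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two tight groups plus one ambiguous point halfway between them.
pts = np.array([[0, 0], [0, 1], [1, 0],          # group near the origin
                [10, 10], [10, 11], [11, 10],    # group near (10, 10)
                [5, 5]])                         # the ambiguous point

for seed in range(6):
    labels = KMeans(n_clusters=2, init="random", n_init=1,
                    random_state=seed).fit_predict(pts)
    grouped_with_origin = labels[-1] == labels[0]
    print(f"seed {seed}: ambiguous point grouped with the origin cluster? {grouped_with_origin}")
# Depending on the seed, the ambiguous point may land in either cluster;
# both partitions are locally optimal, which is the arbitrariness noted above.
```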

Interestingly, the conclusion supposes that “the data cannot tell stories unless we formulate appropriate questions to ask and use appropriate methods to solicit the answers from the data”, a notion that could be challenged by the rapid growth of fields such as machine learning and artificial intelligence. Although it is hard to conceptualize right now, it wouldn’t be too far-fetched to imagine a near future where machines could essentially determine by themselves the best algorithms to use in order to classify spatial data from a vast database.

Thoughts on “Optimal routes in GIS and emergency planning applications”

Saturday, October 19th, 2019

In “Optimal routes in GIS and emergency planning applications”, Dunn and Newton present the importance of GIS in the context of optimizing flow in emergency management situations. Two algorithms, namely Dijkstra’s algorithm and the out-of-kilter algorithm, are presented as ways to determine the shortest path from a starting node to an end node. Where Dijkstra’s algorithm is optimized for path finding in simpler networks, the out-of-kilter algorithm is more efficient in complex networks with arcs having limited flow.
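
The out-of-kilter method itself is rarely coded by hand these days, but the problem it solves, minimum-cost flow on a network whose arcs have limited capacity, is available in standard libraries. Here is a hedged sketch using networkx (my choice of tool, not the paper's) to route evacuation flow under arc capacity limits:

```python
import networkx as nx

# Toy evacuation network: move 30 units from the neighbourhood (N) to the shelter (S).
G = nx.DiGraph()
G.add_node("N", demand=-30)                    # negative demand = source of 30 units
G.add_node("S", demand=30)                     # sink absorbing 30 units
G.add_edge("N", "A", capacity=20, weight=2)    # weight = travel cost per unit of flow
G.add_edge("A", "S", capacity=20, weight=2)
G.add_edge("N", "B", capacity=15, weight=3)
G.add_edge("B", "S", capacity=15, weight=3)

flow = nx.min_cost_flow(G)   # raises NetworkXUnfeasible if capacities cannot carry the demand
print(flow)                  # sends 20 units along the cheaper N-A-S path and 10 via N-B-S
```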

Network analysis is definitely key to better emergency response and evacuation in situations that require optimized knowledge of all evacuation path networks, which may include more than just roads. However, analyzing emergency evacuation through these two algorithms does not leave space for unaccounted-for human decisions. An example could be painted for Sainte-Marthe-sur-le-Lac, a northern suburb of Montreal that experienced a dike breach earlier this year, which forced the immediate evacuation of more than 8,000 residents. Following the evacuation order, the road network was completely overwhelmed, with people stuck in traffic for more than an hour, which prompted some to start driving across fields and private properties to flee the scene. This gives an example of how the magnitude of a catastrophe can force people to use paths outside the road network to get to their desired destination. Using the out-of-kilter algorithm to analyze networks in emergency situations is thus limited by its inability to account for out-of-network transit.

Another interesting point concerns the computational time necessary to update the preferred path during emergency evacuations. Since this article was published in 1992, have there been significant improvements in computational times? Has another algorithm emerged as better suited to determining the most efficient path? Emergency evacuations requiring frequent updates, such as during a flooding event or a hurricane, could be severely affected if computational time isn’t kept under certain thresholds.