Archive for October, 2019

Thoughts on Research Challenges in Geovisualization

Monday, October 28th, 2019

This article gives us a detailed introduction to geovisualization. The author starts by giving reasons why we should care about geovisualization. In short, I think the explanation is that people can gain knowledge from it: geospatially referenced data is transformed into information, and that information is turned into knowledge through analysis. One example would be the switch from data to paper maps, and then from paper maps to web-based maps. Visualization is advancing over time, so researchers can get more out of a geospatial dataset. How much we can get from a dataset largely depends on the visualization techniques.

Then, the author introduces some issues that still remain in the geovisualization field: representation, visualization-computation integration, interface design, and cognition-usability. One thing I noticed about representation is that in order to take full advantage of the information in a geospatial dataset, we want to personalize the representation as much as possible. One example I can think of is Google Maps. When I’m trying to find a certain store in a big shopping mall, I always find it hard to locate the right floor and direction, since sometimes no detailed information about this is represented on the map. However, some other map applications provide 3D navigation inside shopping malls, so users find it very easy to locate a certain store. Obviously, the latter application gets more out of the mall database and makes the navigation process more personalized.

Thoughts on GeoAI (Openshaw, 1992)

Monday, October 28th, 2019

This article essentially introduces the emergence of GeoAI. The author gives some detailed reasons why we should use GeoAI, and he also briefly reviews expert systems approaches, the use of heuristic search procedures, and the utility of neurocomputing-based tools. Finally, he predicts the future trends of GeoAI as an emerging new technology.

GeoAI is actually a new topic to me, since I have never done a project using this technique. I think it would be very useful and convenient when we are facing a huge dataset and trying to analyse or model it. As far as I understand it, people basically transfer their reasoning to the computer and let the computer decide and calculate the result. At some point, then, GeoAI will be connected to spatial data uncertainty. Is it possible to train on the data and let the model decide the level of uncertainty in a dataset? Or is there any way to eliminate or reduce some uncertainty in a dataset?

Another aspect of the uncertainty problem in GeoAI is human supervision. What I get from the article is that people can supervise the computer while it does analytic work using the algorithms researchers put in. Would this supervision process bring more uncertainty into the dataset, or would it help reduce the error? These are the thoughts that came to me while reading the article.

Suggestions concerning development of AI in GIS

Monday, October 28th, 2019

This paper, written by Stan Openshaw in 1992, introduces the application of artificial intelligence in GIS by explaining how AI emerged and came to be applied in geographic information system development, and why AI is inevitably needed and matters a great deal in spatial modelling and analysis. AI does bring a lot to GIS development in terms of large spatial database management, spatial data pattern recognition, and improved spatial statistical analysis. Neurocomputing, a revolution of its time, makes the analysis and modelling of large data sets possible; both supervised and unsupervised classification reduce the difficulty and uncertainty of manual analysis and computation in pattern studies. And AI is unavoidable when applying spatial data mining to large spatial data sets. The paper has a clear structure and explains the complicated concept of AI application in GIS well, with a strong background on how and why AI should be used in GIS, though I do not fully understand specific methods like expert systems and ANNs.

As we all know, spatial data in GIS is quite different from general types of data, with characteristics such as spatial dependence, space-time dependence, and non-stationarity, and AI technologies offer new ways to deal with these complex properties. However, I wonder whether these characteristics of spatial data sets, and the special ways of treating them with AI, help develop AI technologies themselves (method structure, algorithm development). Will GIS bring opportunities and development to AI? What has GIS brought to AI?

Research Challenges in Geo-visualization

Monday, October 28th, 2019

This article gives us an overview of the importance of research on geovisualization, discusses some major themes for geovisualization and related issues, and raises the main discussion about current challenges emerging in geovisualization. Moreover, the authors summarize the crosscutting research challenges and problems, and end with recommended actions for these emerging challenges. Generally speaking, the paper goes through most of the research challenges for geovisualization from various aspects and lists them one by one for each theme. It does not seem entirely sensible to me, though: while problems are discussed from different themes and a crosscutting view with clear lists, the paper’s structure still confuses me a little. Many claims, such as that visualization methods should be developed for better data mining technologies, or that new tools and methods should be improved with increasingly capable representation technologies, are not explained clearly. Some points of view overlap when illustrating those challenges. Why representation, integration of computation and interfaces, interface design, and usability are the four major themes, and what makes them distinct from and related to each other, is not well explained.

The challenges this paper raises about geovisualization are not limited to visualization technologies; most could be challenges faced by many concepts in GIS, touching on data format problems, data volume problems, AI application issues, human-centered design, and so on. Similar issues should also be discussed for topics like the development of spatial statistical analysis methods. I am wondering how to balance information accuracy (value) against interface friendliness. Also, is geovisualization always the final step of data analysis, making results more understandable for further use? Will geovisualization technologies matter more for dealing with the data and information itself, or just for representing results better?

Research Challenges in Geovisualization (MacEachren & Kraak, 2001)

Sunday, October 27th, 2019

In this paper, MacEachren and Kraak (2001) summarize the research challenges in geovisualization. The first thing that catches my eye is the cartography cube, which defines visualization in terms of map use. The authors argue that visualization is not the same thing as cartography. Visualization, like communication, is not just about making maps but also about using them.

While the authors highlight the importance of scale issues, integrating heterogeneous data also presents a challenge for geovisualization because of the different categorization schemes and complex semantics applied in data creation. Similar conditions or entities are often represented with different attributes or measured with varying measurement systems. Therefore, heterogeneity raises questions when we use data from different producers: How do we assess heterogeneity? How do we decide whether data may be combined? How do we integrate multiple data sets if the same terms are used with different semantics?

Further, the emergence of geospatial big data, such as millions of conversations via location-enabled social media, stretches the limits of what and how we map. Using geospatial big data as a source for geovisualization requires developing appropriate methodologies. While this paper mainly discusses geovisualization of quantitative data, I am also curious about how to visualize qualitative spatial data. (QZ)

Thoughts on “Koua et al. – Evaluating the usability of visualization methods in an exploratory geovisualization environment”

Sunday, October 27th, 2019

This article by Koua et al. argues that the choices made and the techniques used when designing a geovisualization are crucial to conveying all the necessary information to the interpreter. Depending on the objective, certain visualizations were more effective at conveying the necessary information and were more usable than others, something that was tested with scientists in the field.

An interesting addition to the research would have been to test the geovisualizations with non-scientists, given that such visualizations are increasingly present in interactive newspaper articles and on websites in general: what is easily conveyed to scientists may not be as easy for the general public. This research reinforces the notion that these visualizations are only used by professionals in the field, which is no longer the case. In an era where misinformation is rampant on social media and online, understanding how certain geovisualizations are interpreted by the general public could certainly help in designing more intuitive geovisualization techniques.

Technological advancements in the coming years will potentially open the door for new visualization techniques, which, for example, could make use of augmented reality and other emerging technologies. This could make it easier to visually represent certain situations and aid in the transfer of information.

Thoughts on VoPham “Emerging trends in geospatial artificial intelligence (geoAI)”

Sunday, October 27th, 2019

The article by VoPham et al., “Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology,” provides us with a general understanding of what geoAI is and how it is utilized.

The interdisciplinary nature of geoAI is highlighted not only by the scientific fields that develop and utilize geoAI, but also by the wide spectrum of applications “to address real-world problems” it has. These vary from predictive modeling of traffic to environmental exposure modeling. Focus on machine learning, data mining, big data and volunteered geographic information has helped the expansion of geoAI. The main topic of this paper, however, is how this scientific discipline can be applied to the advancement of environmental epidemiology.

I find the future possibilities and applications of geoAI particularly exciting. As explained in the article, progress in geoAI has allowed for more accurate, high-resolution data, which has the potential to revolutionize the use of remote sensing. As with most of the evolving GIScience technologies, we have yet to uncover their full potential and applications.

Thoughts on Koua “Evaluating the usability of visualization methods in an exploratory geovisualization environment”

Sunday, October 27th, 2019

The article “Evaluating the usability of visualization methods in an exploratory geovisualization environment” by Koua et al. reports their findings on visualization methods and geovisualization. The study aimed to evaluate how the use of different visualization tools impacted the usability and understanding of geospatial data.

I found the results of the study quite interesting: out of six different ways of visualizing the same data, the map was found to be the most effective tool for tasks such as locating, ranking, and distinguishing attributes. On the other hand, the self-organizing map (SOM) component plane was better for the visual analysis of relationships and patterns in the data. This brings to mind a question about the type of users interacting with the product.

In the study, the participants were 20 individuals with backgrounds in GIS and data analysis. This means that they had experience with GIS tools and their own preferred tools for analysis; they knew what to expect and (generally) how to use the tools. I wonder how the results would change if the participants varied more in their GIS background. How would someone with no particular GIS experience interact with and understand that same data? I find this particularly interesting because when creating a Geoweb product for public use that supports analysis, the user’s interaction with and understanding of the product is crucial.

Reflection on “Research Challenges in Geovisualization”

Sunday, October 27th, 2019

This piece gives a very thorough background on geovisualization and its problems, especially its problems across disciplines.

A part of the piece that caught my attention was when MacEachren and Kraak said that “Cartographers cannot address the problem alone.” Through all the papers we have read in this class, there is a recurring theme that there needs to be more cross-disciplinary communication in GIS to solve crosscutting problems. This article is better than other articles that just mention that more communication needs to happen; it actually lists ways to do cross-disciplinary research better, in addition to listing short-, medium-, and long-term goals.

However, I also feel that this article was written in a way that was very generalized and vague, which made it a bit difficult to follow. This also gave their arguments less weight, because it is always easier to propose vague solutions than specific ones. Some specific GIS examples would have been very helpful!

The potential of AI methods in GIS (Openshaw, 1992)

Sunday, October 27th, 2019

In this old paper, Openshaw (1992) calls attention to the potential of artificial intelligence (AI) methods for spatial modeling and analysis in GIS. He argues that a GIS with a low level of intelligence has little chance of providing efficient solutions to spatial decision-making problems. The application of AI principles and techniques may provide opportunities to meet the challenges encountered in developing intelligent GIS. One thing that draws my attention is that the author mentions it is important to “discover how best to model and analyse what are essentially non-ideal data”, but I didn’t see a definition or explanation of non-ideal data in this paper. Does non-ideal data refer to less structured data or to unreliable data? AI can use less structured data such as raster data, video, voice, and text to generate insights and predictions. However, every AI system needs reliable and diverse data to learn from. Very similar data can lead to overfitting the model, with no new insights.

Further, Openshaw demonstrates the usefulness of artificial neural networks (ANNs) in modeling spatial interaction and classifying spatial data. But he doesn’t mention how to transfer data from the GIS to the ANN and back. The most widely used ANNs require data in raster form, whereas the spatial data used to produce interpretive results in GIS is most efficiently managed in vector form. Therefore, I wonder whether there is an efficient methodology for transferring information between the GIS and the ANN.
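One simple bridge, at least for basic cases, is rasterization: sampling vector features onto a regular grid that a raster-oriented model can consume. This is my own minimal sketch with NumPy, not anything from Openshaw’s paper; the grid extent, size, and point values are invented for illustration.

```python
import numpy as np

def rasterize_points(points, values, extent, shape):
    """Bin vector point features onto a regular grid (mean value per cell).

    points: (n, 2) array of (x, y) coordinates
    values: (n,) attribute to aggregate per cell
    extent: (xmin, xmax, ymin, ymax) of the output grid
    shape:  (rows, cols) of the output grid
    """
    xmin, xmax, ymin, ymax = extent
    rows, cols = shape
    # Map each coordinate to a cell index (row 0 is the top of the grid).
    col = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * cols).astype(int), 0, cols - 1)
    row = np.clip(((ymax - points[:, 1]) / (ymax - ymin) * rows).astype(int), 0, rows - 1)
    total = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(total, (row, col), values)  # accumulate even with repeated cells
    np.add.at(count, (row, col), 1)
    # Mean per occupied cell; empty cells become NaN (no data).
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Three hypothetical point features on a 3x3 grid covering (0,3) x (0,3).
pts = np.array([[0.5, 0.5], [2.5, 2.5], [2.6, 2.4]])
grid = rasterize_points(pts, np.array([10.0, 20.0, 30.0]), (0, 3, 0, 3), (3, 3))
```

Going the other way (vectorizing the model’s raster output back into polygons or lines) is the harder half of the round trip, which is presumably why the question remains open in the paper.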

As of now, GIScience is not new to AI. For example, the best-known application of AI is probably image classification, as implemented in many commercial and open tools. Many classification algorithms, from clustering to neural networks, have been introduced. Also, recent increases in computing power have let AI systems deal efficiently with large amounts of input data. I am looking forward to learning more about the current uses of AI in GIS.

Thoughts on “VoPham et al. – Emerging trends in geospatial artificial intelligence (geoAI)”

Sunday, October 27th, 2019

In “Emerging trends in geospatial artificial intelligence (geoAI)”, VoPham et al. explain the emergence of geoAI as a new research field combining concepts, methods, and innovations from various fields, such as spatial science, artificial intelligence (AI), data mining, and high-performance computing, and give examples of recent applications in real-life situations. The fusion of AI and GIS helps us obtain more accurate representations than traditional methods, given the ability to make use of spatial big data.

As mentioned in the article, geoAI has the ability to revolutionize remote sensing, with the potential to recognize earth features more accurately. Slight differences in the spectral response of a pixel could be detected by an algorithm trained to pick up these ever-so-small differences, which could help detect and respond to forest fires more rapidly, for example. A research project I worked on last year aimed at assessing the extent of the Fort McMurray forest fire of 2016, and although the results were extremely similar to what had been obtained by official government sources, the use of geoAI could have overcome the limitations of the NDVI and NBRI indices we used.
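For context, the indices mentioned are simple per-pixel band ratios, which is exactly why a trained model could potentially outperform them: each one compresses the full spectral response into a single number. A rough sketch with NumPy (the band values below are made up for illustration, not real Landsat reflectances):

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b), in [-1, 1]."""
    a = a.astype(float)
    b = b.astype(float)
    return (a - b) / (a + b)

# NDVI contrasts near-infrared with red; the burn ratio contrasts
# near-infrared with shortwave infrared.
nir = np.array([[0.50, 0.40], [0.10, 0.05]])   # vegetation reflects NIR strongly
red = np.array([[0.10, 0.10], [0.08, 0.06]])
swir = np.array([[0.15, 0.20], [0.40, 0.50]])  # burned areas reflect SWIR strongly

ndvi = normalized_difference(nir, red)    # high values suggest vegetation
nbr = normalized_difference(nir, swir)    # low values suggest burned area
```

A fire-extent workflow typically thresholds the drop in the burn ratio between pre- and post-fire images; a learned classifier could instead use all bands (and spatial context) at once, which is where the potential gain lies.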

As with any newly emerging scientific field, it will be interesting to see how and to what geoAI will be applied next. An example would be spatial agent-based modelling (ABM), which aims to simulate the actions of specifically defined agents in space and which could benefit greatly from geoAI and the input of spatial big data. Geographical ontologies could also be redefined by deep learning, which might conceptualize things differently from the way we currently do.

Thoughts on “Evaluating the usability of visualization methods in an exploratory geovisualization environment”

Sunday, October 27th, 2019

There’s a very important component of geovisualization missing from this article: aesthetics. Koua et al. cover a great number of factors important to geovisualization, in particular test measures, effectiveness, and usability. However, they only briefly mention users’ “subjective views” towards a geovisualization and “compatibility (between the way the tool looks… compared with the user’s conventions and expectations).” This omission is noticeable, since geovisualization is, as its name implies, a very visual aspect of any cartographic scheme, and the aesthetics of any visualization are almost always inherently important. It may not be entirely surprising, though, since this paper focuses on the “usability and usefulness of the visual-computational analysis environment.”

The authors imply through this omission that aesthetics do not relate to the usability and usefulness of geovisualization; I would disagree with that assumption. Maps are, at their core, a visual way of displaying data; how they look, not just how they show data, matters. Therefore, however subjective aesthetics are (and they are quite subjective), they must relate to the effectiveness of a map. A map could have great data to show, but if the colors are oversaturated, or the water isn’t blue, this could distract the eye of whoever is looking at it and take away from the map’s findings. If the map user can’t pull the important information from it, then what’s the point of having a map at all? I understand why the authors may have decided to omit aesthetics, considering it’s such a subjective factor compared to everything else they discuss; however, including aesthetics would have made this discussion of visualization usability more robust and complete.

Thoughts on “Emerging Trends in geoAI” by VoPham et al

Sunday, October 27th, 2019

This article is extremely relevant to the independent study I’m conducting this year, and I think I’ll be able to use it for some of my methods. My study looks at different 911 calls in Baltimore over the last 7 years (assaults, overdoses, car accidents, person found not breathing, and shootings). I have both the address where the call was placed and the time, down to the minute. This could be considered a “health” study of sorts, since bodily harm is a health outcome, and I’d like to find factors that correlate with the calls’ times and locations. Therefore, the geoAI described in this article would be great for my study: it tackles big data, which I have (over 6.5 million 911 call times and locations), and it produces high-resolution exposure modeling, which is what I’m looking for. The example given about the study that developed a method to predict air pollution was particularly appealing to me. I’d like to do something similar with my own data, inputting as much spatial data as is available (demographic indicators, built environment factors, etc.) to see which variables predict calls in space and time. I’m glad that this article discusses “garbage in, garbage out” computer science, as that was an issue I was concerned about while reading this piece: treating geoAI like a black box, or ignoring data quality because advanced methods are applied, may produce flawed results. These are factors I’ll have to keep in mind in my own study, and I’ll have to research beyond this article for the proper data, methods, and contexts for conducting such an exposure analysis.

Thoughts on Evaluating the usability of visualization methods in an exploratory geovisualization environment (Koua et al., 2006)

Sunday, October 27th, 2019

In this paper, Koua et al. developed a geovisualization use and usability assessment method based on variables including user tasks, time to complete tasks, usefulness and user reactions, compatibility with the user’s expectations for the different tasks, flexibility, perceived user understanding of the representations used, user satisfaction, and user preference rating. Their results seem decently analyzed and are explained in an understandable way: different geovisualization methods have advantages on certain tasks and disadvantages on others. This study taught me to see geovisualization as a tool to interpret data, rather than merely a representation of the data, and to systematically assess which geovisualization method is better suited to the tasks it is expected to perform. This helps me make decisions when choosing a geovisualization method, which in effect means deciding what kinds of tasks I intend to support for viewers, and what viewers will expect to gain from and do with the data.

However, I find their assessment design not that convincing, in that the participants in the assessment were all academics or researchers in science-related fields. This assessment reads as if it were made only for professionals: the authors exclude not only the general public but also policy makers, urban planners, and social activists, who are also potential users of geovisualization products. And sometimes the general public and social science professionals tend to use geovisualization products even more, due to a lack of the programming or statistical skills needed to process and analyze the data themselves. Thus, I would argue the flaw of this assessment is that it fails to include all potential users of geovisualization products, choosing participants only from natural science professionals.

Thoughts on Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology (VoPham et al., 2018)

Saturday, October 26th, 2019

VoPham et al. summarize the major trends in practical applications of geoAI (mostly machine learning, deep learning, and data mining), and its specific practice with regard to environmental epidemiology. Using the example of Lin et al.’s air pollution concentration study in Los Angeles, the authors illustrate how geoAI is used to process combinations of big data from different sources, as well as to perform pattern detection and modelling efficiently.

However, a question puzzled me from their introduction of geoAI in practical use through to their vision of geoAI’s future: what is the exact difference between machine learning/data mining algorithms and geoAI? Is geoAI merely a combination of different machine learning or data mining algorithms? Or is it something more complicated than they illustrate in their article? In the air pollution modelling example, Lin et al. (the authors of the original study) say that specialized geospatial data experts are still needed to decide what kind and quality of data can go into the modelling, to avoid the “garbage in, garbage out” situation. To me, however, if a geoAI cannot itself identify and evaluate what should be included in the computational process, it is just a combination of computational algorithms. Self-evolution and a decision-making process might be the keys that distinguish geoAI from a combination of algorithms.

Some may argue that geoAI is only in its early stages, and that much more needs to be done for geoAI to self-evolve and make decisions. However, if geoAI cannot adapt to different spatial or temporal instances, what is the need for an AI instead of a team of machine learning programmers and data miners? I believe that a proper self-evolutionary ability to adapt to different spatial and temporal instances, as well as to decide what goes into the modelling process and which parameters or logic need to change to accommodate different input variables, is essential before we can call something geoAI rather than a systematic geospatial data modelling algorithm.

Thoughts on “Some Suggestions…” by Stan Openshaw (1992)

Saturday, October 26th, 2019

Openshaw’s paper, written in the early 1990s, gives us an interesting glimpse into the early days of GIS. It was very interesting to read about the problems of then as compared to the problems of today; for instance, he advocates that computers should be utilized more in analyzing GIS data, whereas today computers are ubiquitous in GIS analysis. The paper argues for greater AI involvement because he believes the combination of the non-ideal data that GIS provides and the increasing complexity of GIS research makes some analyses too complex for people to understand.

Openshaw believes that we must shift our mindset from conserving computing power to utilizing computers’ “endless” energy. He expands on this mindset by listing some examples of computational modeling, such as genetic optimization and neurocomputing, that he believes GIS should start to utilize more (and which are in popular use today).

A part of this piece that leaped out at me was how people’s sentiments towards computers have changed over time. At one point, computing was a resource to be conserved, and now computers are used everywhere for everything. Everyone assumes that you are connected to the internet all the time; however, this constant connection is also creating new problems concerning digital privacy. I wonder how this constant connection affects GIS data: is it still “non-ideal”, according to Openshaw? This is hard to answer, as he never exactly explains in the paper how GIS data is “non-ideal”.

Network Analysis and Topology

Monday, October 21st, 2019

The author introduces the network data structure and the theoretical basis of network analysis in graph theory and topology, and discusses how networks are an alternative way of representing location, as well as the difficulties in network location problems.

I am particularly interested in the discussion of the three types of data models in the evolution of network analysis. The author proposes that it is important to preserve the topological properties of the data, yet these properties also impose difficult constraints on network analysts. The author gives the scenario in which a vertex must exist at a crossing whether or not a true intersection exists there, which is problematic when modeling bridges or tunnels. This reminds me of the project I’m working on: I plan to use a supervised machine learning algorithm to extract roads from satellite images and turn them into a road network. However, preserving topological properties, such as identifying whether a crossing is a true intersection or a bridge, is extremely difficult. The author then discusses the pure network data structure that is now widely used in GIS. I think this is a useful topic for me to look into, although I am still not clear on the difference between these two structures, or on the planarity requirement the author mentions when discussing the two data models.
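My rough understanding of the difference, sketched as plain adjacency lists (the road layout and names here are invented for illustration, not from the article): a planarity-enforced model inserts a shared vertex wherever edges cross, so an overpass wrongly becomes a junction, while a nonplanar (pure network) model keeps the two roads topologically separate.

```python
# Two roads cross: A--B runs east-west; C--D runs north-south over a bridge.

# Planar model: the crossing X is forced to be a shared vertex, so a route
# can (incorrectly) turn from one road onto the other at the bridge.
planar = {
    "A": ["X"], "B": ["X"],
    "C": ["X"], "D": ["X"],
    "X": ["A", "B", "C", "D"],
}

# Nonplanar model: the edges cross geometrically but share no vertex,
# so the roads stay disconnected, matching the real topology.
nonplanar = {
    "A": ["B"], "B": ["A"],
    "C": ["D"], "D": ["C"],
}

def reachable(graph, start):
    """Return the set of vertices reachable from start (simple graph search)."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

In the planar model, `reachable(planar, "A")` includes D, meaning a routing algorithm would happily turn off the overpass mid-air; in the nonplanar model it does not, which is presumably why pure network structures won out for routing.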

Thoughts on “Spatial Data Mining Approaches for GIS – A Brief Review”

Monday, October 21st, 2019

This article gives an outline of data formats, data representation, data sources, data mining approaches, related tools, and issues and challenges. The author concludes that “spatial data mining is the analysis of geometric or statistical characteristics and relationships of spatial data”, and that it can be used in many fields with different applications.

In terms of spatial data mining tasks, I think that besides spatial classification, spatial association rules, spatial clustering, and trend detection, methods like local statistics, spatial autocorrelation, and point pattern analysis should also be mentioned or included.
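To make spatial autocorrelation concrete, here is my own minimal sketch (not from the article) of global Moran's I on a raster grid, using rook (shared-edge) adjacency with binary weights:

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I for a 2-D grid with rook adjacency and binary weights."""
    z = grid - grid.mean()          # deviations from the mean
    num = 0.0                       # sum of z_i * z_j over neighbour pairs
    w = 0                           # total weight (count of directed pairs)
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    num += z[r, c] * z[rr, cc]
                    w += 1
    n = grid.size
    return (n / w) * (num / (z ** 2).sum())

# A checkerboard is perfectly negatively autocorrelated: every neighbour
# pair disagrees, so I comes out to exactly -1 on an even-sized grid.
checker = np.indices((4, 4)).sum(axis=0) % 2
```

Values near +1 indicate clustering of similar values, near 0 spatial randomness, and negative values dispersion, which is why the statistic is a natural companion to the clustering and trend-detection tasks the article does cover.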

The author also mentions that the issues and challenges of spatial data mining are data integration and mining huge volumes of data, but does not explain why these two are challenges. This was the first thing that confused me after reading this article.

The author then provides an architecture as a solution to these two issues and challenges. With no further detailed explanation, I find it really hard to understand why this architecture can solve the above challenges.

On Spatial data mining and geographic knowledge discovery (Mennis & Guo, 2009)

Monday, October 21st, 2019

Mennis & Guo put together a brief, fairly easy-to-read literature review on spatial data mining, a good starting place for those with only nominal knowledge of the topic (like me). They go over spatial data mining methods and processes as an intro, follow up with recent (as of 2009) developments in the field, and connect the methods to the reviewed articles. They end with the growing area of spatial data mining and its expanding “frontier”.

With the exception of the first bit of Section 2.2, I found the paper pretty easy to follow. I was never good at statistics but I remember enough to get through it, and the examples included in each short paragraph were helpful. The most intuitive parts to me were the mentions of remote sensing, which is not something that I would have immediately associated with spatial data mining or Big Data, but once the connections were there it made more sense and helped clarify some other things mentioned.

In the conclusion, Mennis & Guo say “Data mining is data-driven but also, more importantly, human-centered”. This is basically the foundation of Critical GIS: that GIS isn’t a set of tools to be used in a vacuum but should incorporate human knowledge and social theory, and that the products of GIS and their interpretation are always influenced by some personal epistemology or ontology. It isn’t surprising that they reached this conclusion, but it is an interesting inclusion in an otherwise very technical, by-the-book article.

Thoughts on Spatial data mining and geographic knowledge discovery – An introduction (Mennis & Guo, 2009)

Sunday, October 20th, 2019

Mennis and Guo’s work summarizes the trends, progress, and achievements in spatial data mining, processing, and interpretation through 2009. It is a very helpful review for those who are not familiar with most of the techniques and approaches in the field of spatial data mining. Their work focuses especially on spatial classification and prediction, spatial association rule mining, spatial clustering, regionalization, and point pattern analysis. Although this article makes the boom in geospatial data and mining techniques, which feeds research, the private sector, and sometimes government operations, sound exciting, these opportunities come with costs.

I am not saying more available geospatial data is bad; however, there are certain challenges the authors fail to discuss in detail. First is selection bias when mining spatial data. People who are aware of GPS tracking and do not want to share their geospatial data, and those who do not yet have access to GPS-enabled devices, are excluded from some of the hottest geospatial data mining realms, such as social media spatial data mining. This creates a selection bias in the data, which may lead to the unintended exclusion of populations from its interpretation. Although this can be mitigated if data from various sources are joined together in the processing and interpreting stages, it is definitely something spatial data miners should be aware of.

Second is the privacy issue: more geospatial data does not actually make everyone happier; it has a cost. Although more and more geospatial data is masked to protect privacy, the huge flow of data inevitably puts some, or even most, of the population at risk of privacy breaches. Thus data miners must be conscious of protecting the privacy of study subjects and data contributors, and proper oversight in this field needs to be established to prevent malicious mining of geospatial data.

Finally, the availability of seemingly infinite geospatial data is certainly thrilling for people who work in this field. However, it also raises the difficulty of the work and the skills required of data miners. It is not only computational skills that allow data miners to mine the data; more important is the skill to discover, to observe, and to formulate the right question, which even today still depends heavily on humans to make the call. The ability to look at geospatial data critically is also necessary: unless the data fit our questions perfectly, there are uncertainties that need to be addressed, rather than blindly trusting the data because of its size.