Thoughts on “Emerging Trends in geoAI” by VoPham et al.

This article is extremely relevant to the independent study I’m conducting this year, and I think I’ll be able to draw on it for some of my methods. My study looks at different types of 911 calls in Baltimore over the last 7 years (assaults, overdoses, car accidents, person found not breathing, and shootings). I have both the address where each call was placed and the time, down to the minute. This could be considered a “health” study of sorts, since bodily harm is a health outcome, and I’d like to find factors that correlate with the calls’ times and locations. The geoAI described in this article is therefore well suited to my study: it tackles big data, which I have (over 6.5 million 911 call times and locations), and it produces high-resolution exposure modeling, which is what I’m looking for.

The example of the study that developed a method to predict air pollution was particularly appealing to me. I’d like to do something similar with my own data, inputting as much spatial data as is available (demographic indicators, built environment factors, etc.) to see which variables predict calls in space and time. I’m also glad that this article discusses “garbage in, garbage out,” as that was an issue I was concerned about while reading: treating geoAI like a black box, or ignoring data quality simply because advanced methods are applied, may produce flawed results. These are factors I’ll have to keep in mind in my own study, and I’ll have to research beyond this article to find the proper data, methods, and contexts for conducting such an exposure analysis.
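As a rough sketch of what “which variables predict calls in space” could look like, here is one common approach: aggregate calls to grid cells, attach spatial covariates, and fit a tree-based model to rank feature importance. Everything below is invented for illustration (the covariate names, the synthetic data, and the random-forest choice are my assumptions, not methods from the article or from my actual dataset):

```python
# Hypothetical sketch: ranking invented spatial covariates by how well they
# predict synthetic 911 call counts per grid cell, via a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_cells = 500  # imagine the city divided into 500 grid cells

# Invented per-cell covariates (demographic / built-environment proxies)
features = {
    "pct_vacant_housing": rng.uniform(0, 0.4, n_cells),
    "median_income": rng.normal(50_000, 15_000, n_cells),
    "liquor_outlets": rng.poisson(2, n_cells).astype(float),
    "pop_density": rng.uniform(100, 10_000, n_cells),
}
X = np.column_stack(list(features.values()))

# Synthetic outcome: call counts loosely driven by two of the covariates,
# so the model has a real signal to recover
calls = (
    200 * features["pct_vacant_housing"]
    + 10 * features["liquor_outlets"]
    + rng.normal(0, 5, n_cells)
)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, calls)

# Print covariates from most to least predictive
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name:22s} {imp:.3f}")
```

With real data, the outcome would be actual call counts (or a space-time raster of them), and the data-quality concerns above apply directly: the importance ranking is only as trustworthy as the covariates fed in.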
