Archive for October, 2017

Cognitive and Usability Issues in Geovisualization, Slocum et al. (2001)

Sunday, October 29th, 2017

Slocum et al. (2001) detailed emergent research themes in geovisualization circa 2001. The authors advocate for an interdisciplinary approach incorporating cognitive and usability engineering principles to address challenges concerning immersion and collaborative visualization. It was striking to realize how frequently I’ve brushed over the finer points made by the authors over the year and change I’ve spent submitting GIS assignments. I feel that so many without technical GIS training are inclined to conceptualize the discipline as “mapmaking.” In contrast, it’s interesting how little time is spent on more nuanced cartographic considerations in introductory courses. The article made for a good introduction to engaging more meaningfully with what’s quite literally right under my nose.

Even though the article was presumably written before the release of Google Earth (B.G.E.?), it would appear that most of their discussion concerning emergent research themes remains relatively robust, even if some of the associated challenges have since been addressed. For instance, I am not sure what more could be said about maintaining orientation in explicitly geographic visual environments, but I would be interested to learn more about how one would handle orientation in alternative spatial environments, particularly ones immersive enough to enable the type of cognition we use in handling the real world. Moreover, I wonder how much the ubiquity of Google Earth alone has propelled the topic of cognition and usability in geovisualization.

Cognitive and Usability Issues in Geovisualization (Slocum et al., 2001)  

Sunday, October 29th, 2017

This paper discusses the challenges of using novel geovisualization methods (methods based on advanced software and hardware) and emphasizes the importance of conducting cognitive research and usability evaluation to make these methods more effective. I agree that it is important to explore how to develop and apply geovisualization methods “correctly”. The main reasons are, first, that geovisualization can be widely applied in different fields with varied requirements, and second, that the old cognitive framework for geovisualization methods is not suitable for guiding new techs (i.e. novel methods). When new techs arrive, they bring both new demands and new issues. People may want geovisualization to achieve more; for example, we can achieve ubiquitous monitoring of the environment by geovisualizing data from the growing number of mobile devices and sensors. At the same time, people are also concerned about surveillance and privacy. Therefore, research is necessary to guide the development and application of geovisualization methods.

However, I am not quite convinced by the argument for using usability engineering methods to evaluate the effectiveness of geovisualization methods. First, I didn’t see a good definition or explanation of effectiveness in this paper. Effectiveness may vary when applying geovisualization methods in different cases, but I still believe the authors should give a general and clear definition of what effectiveness means with respect to geovisualization, or at least clearly state that the effectiveness of geovisualization methods is the same as that of other software. Second, I think the authors could be more straightforward about the essence of adopting concepts from usability engineering, which is that geovisualization methods should be highly user-centered. According to the authors, we should carefully consider user needs and iteratively improve the methods instead of developing them first and testing at the end. This clarification might make readers less confused about why we need usability engineering here.

Following this discussion, I believe further investigation is needed on how to practically adopt usability engineering methods in geovisualization. We may need to distinguish geovisualization tools from general software and customize a development life cycle for them. Besides, since this paper was published in 2001, sixteen years ago, it is worth asking whether the concepts it promotes are still valid in terms of today’s “novel methods”.

MacEachren 2001: Geovisualization

Sunday, October 29th, 2017

As explained in MacEachren’s paper, geovisualization is much more than just making maps; it tackles the issues of displaying spatial information in a more accurate and precise sense. As I’ve seen through my research on location privacy concerns, there is an incredible amount of data that has spatial information attached to it; >80% according to the paper. Displaying this information effectively can be challenging.

The article handles these difficult questions quite well by addressing the themes and conceptual questions to create an idea of the field of geovisualization without fixing on any limiting examples; examples that might devalue another aspect of the field. By maintaining a certain level of abstraction, the paper is able to address the concepts quite well. This does, however, make it a bit tricky to form a concrete idea or example of what is being discussed at times.

Personally, I find it slightly frustrating not to have something more concrete to tie these concepts and problems to. But I suppose it’s simply the nature of the field; if we had the answers, we wouldn’t need to tackle these abstract concepts. However, the article was written in 2001; perhaps some of these problems have been addressed since. I’m very curious about how we can move past classic conceptions of cartography: what will the nature of these different kinds of information be, and why will it be better to display them in certain ways?

On Slocum et al (2001) and Geovisualization Trends

Sunday, October 29th, 2017

In the article “Cognitive and Usability Issues in Geovisualization”, Slocum et al. discussed the need for maps or visualization tools to be conceptualized as composed of both theory-driven design and usability engineering. The theory describes how people think about maps, with preconceived ideas about symbology, colour, layout, and representation. I thought it was super interesting to find out that speakers of different languages perceive geographic features differently (as they noted, English and French draw the lake/pond distinction differently), and that different cultures perceive colours differently. Along with the other more well-known differences between people, like sex, age, and sensory abilities, these can change the ways that people view or look at maps. “Masculinist” has long been a term used in critiquing mapmaking and geovisualization, as the representations often favor a “God’s-eye”, flaneur-ish approach rather than other views. Geovisualization, particularly 3D visualization, may have the ability to change this. I think it would be interesting to revisit the emerging trends and (formerly) current standards that the authors review, to see where this representation has changed and where they envision it going. I am not very caught up on the progress of AI in geoviz, but the world of GIS has certainly changed with handheld digital maps like Google Maps or OSM, and even the “maps that change in real-time” have changed drastically (manifested, for example, in Snapchat’s Snap Map).

It would also be interesting to learn who follows the methodologies laid out by Slocum et al. Though they do think more rationally about inclusivity, their approach doesn’t seem to be entirely all-encompassing (i.e. asking different groups of people what they like and don’t like about a geoviz and then working in the public’s comments). Further, do people actually use this advice? Video games use many geoviz techniques to make the game world more realistic. Do game developers follow these trends? And more importantly for research and academic purposes, have game developers shared their techniques for bettering geoviz (like reducing “cyber-sickness” (6), or colour choice, etc.) with other industry professionals?

Thoughts on Slocum et al

Sunday, October 29th, 2017

Reading about “current” and anticipated issues in geovisualization from sixteen years ago is quite interesting. Perspectives on many of these issues would be quite different with the technologies that exist today. The replacement of CRT monitors with liquid crystal displays, the affordability of desktop and laptop computers with several times more RAM, and the proliferation of web-based slippy maps are all advancements that have improved the usability of and access to digital maps. The article dwells on 3D GeoVEs as a proper method for disseminating geospatial information. 3D rendering has improved since 2001, and Google Earth has become an accessible resource for viewing most of the world’s cities in 3D.
Slocum et al. see “VE to be a technology with considerable potential for extending the power of geovisualization”. Despite focusing on GeoVE technology for most of the article, the summary concedes that “research is still necessary in more traditional desktop environments”. The article seems to be partially aware of the trajectory of internet technologies in geospatial data management. The authors see potential in collaboration via the internet, which has proven a relevant reality.
Mobile computing is only mentioned once, but I would argue it has become the primary mode of everyday geovisualization. Map applications such as Google Maps and Apple Maps have become the standard way of viewing a flat world. 2.5D visualization is employed for features such as 3D buildings, and street-view imagery has made non-immersive 3D imagery accessible to desktop and mobile internet users. A street-level virtual tour of LA would certainly be more stimulating in a GeoVE with speech and body-movement interactivity, but is now as easily accomplished in a web browser.
I disagree with the authors’ view of GeoVEs as an accessible or useful resource for education and decision-making. Immersive visualization is now an emerging technology for VR video games, but has seen less use as an educational tool. AR has also become increasingly accessible with the ability to use smartphones as AR and VR devices. The relevant research from this article pertains to the usability issues experienced by those with cognitive impairments. These will continue to be an issue as geovisualization technologies evolve, and solutions will likely come from “real-world” applications of geovisualization technologies.

Research Challenges in Geovisualization (MacEachren 2013)

Sunday, October 29th, 2017

This paper delves into geovisualization, which at face value seems like a simple topic (in the sense that there is nothing quite as universally recognized as a world map), but which is actually very multifaceted, with many considerations to be taken into account.

My first thought on reading how geovisualization is a combination of virtual environments, ViSC, and other fields, each with its own governing scientific bodies, was how well this field fits into GIScience in its existential problems. Honestly though, I feel the author best captures the value of geovisualization in: a) its huge relevance (80% of all data being linked to geography/xy-coordinates/postal codes), and b) how visualization essentially is the transformation of data into knowledge. Thinking of big data, GeoVis seems to be needed as a tool now more than ever to actually make sense of this data, and most importantly to represent it beyond the purely data-driven science community and into the greater public.

In this sense of conveying knowledge to as many people as possible in the most appropriate way, I feel the author does a good job of listing the many considerations needed to avoid misrepresenting data and conveying false information. Linking GeoVis back to the essential issues in GIScience (i.e. issues of scale, spatial autocorrelation, etc.) really brought it into the GIScience realm for me. Furthermore, having been published in 2013 (a peak year for GeoVis in the web 2.0), it’s great to see recognition of how maps have moved from having to embody both the database and the medium presenting it, whereas today we are more often given a highly interactive GUI with which to interact with a whole dataset and explore/query it for our own purposes. In this train of thought, I wonder what new forms of geovisualization will come from technologies like VR, AR, and the classic hologram globe that has been hypothesized in the earliest of sci-fi and spy movies. What these technologies will be used for, and the considerations we need to take in using them to accurately represent geographic information, are obviously being thought of in papers like this, and hopefully we stay ahead of the curve.

-MercatorGator

Research Challenges in geovisualization – MacEachren & Kraak

Saturday, October 28th, 2017

This paper was published at the cusp of the digital age, and the authors acknowledge the deluge of information that will soon pose a variety of challenges to GI analysts, which are elaborated further throughout the article. Typical issues to be expected, such as how to represent information adequately, become pertinent questions, but questions of ease-of-use and usability also become a prime concern for the authors. I was happy to perceive the level of interest the authors had in the users of their information, harboring a concern for its accessibility. The authors showed a particular tact for acknowledging the different ways in which different individuals or groups may respond to geovisualized data, and how they may interpret the accessibility of this information.
I also found it quite interesting that throughout the article, the authors stressed the multi-disciplinary approach of their science. All the research challenges that were mentioned had elements of cross-disciplinary reach which cut between human concerns and the development of GI technology. While it is not specifically mentioned, this article reminds me of the debate as to whether GIS is a tool or a science. This article demonstrates a concern for both data handling and its outcomes, with a real sense of simultaneous ethics and progress, giving it the allure of a scientific discipline.
While reading this article, I wondered if the concern for the human element of GIS stems from the close proximity GIS has with geography in academia. As faculty members in geography intermingle with the social, the physical, and the applied dimensions of geography through the mere proximity of each other’s offices on a physical plane, there seem to be fertile grounds for a lot of academic cross-pollination.

Thoughts on “Cognitive and Usability Issues in Geovisualization”, Slocum et al. 2001

Friday, October 27th, 2017

This paper was a comprehensive review of the current issues in geovisualization, with a focus on legibility, user-centered cognition, and features of experiential VR. The authors provided a number of conclusions based on their review of the current literature and the state of existing technology; one of their recommendations was for more research in the cognitive sciences to identify the best methods for visualizing data to ensure maximum comfort and comprehension. I would have liked some more specific recommendations as to areas within cognitive psychology, or especially pertinent methods, which they believe would be useful for this field of geographic visualization.

Early in the paper, the authors note that if we develop “theories of how humans create and utilize mental representations of the environment, then we can minimize the need for user testing of specific geovisualization methods.” But I think that even if we formulate theories about how humans internalize, process, and store geographic information, this does not preclude the necessity of user testing specific methods. As they discuss later in their paper, there are considerable individual differences and a high level of specificity for each kind of VR representation, so user testing in each instance seems like a crucial step.

The authors commented on the paucity of publications related to 3D mapping in comparison with the prominence of new softwares, and this seemed to be another manifestation of the GIS tool/science discussion we have been revisiting in class, in the sense that: should geo-VR and visualization be considered a tool or a science in itself? This usually leads to a further discussion of what constitutes a science.

One of the types of collaborative geo-visualization that the authors mention is the different-place, same-time scenario. I thought the most obvious instance of this is the multi-player video games that people play over the web in real time, with players often located in entirely different countries. These games can be quite immersive and require considerable synchronization of timing, representation, and events, from multiple perspectives.

There is quite a positivist sentiment permeating this paper as to the power and potential of geo-visualization. In the discussion of education, the authors state that “we know so little about the ways in which children’s developing spatial abilities can be enabled through visual representations”, but whether or not this technology “enables” improved spatial abilities has not been established: these tools could have neutral or even deleterious effects on the development of spatial visualization and navigation skills.

-FutureSpock

Research Challenges in Geovisualization (MacEachren & Kraak, 2001)

Friday, October 27th, 2017

This article by MacEachren and Kraak (2001) presents the importance of geovisualization, and discusses research challenges in geovis based on a multi-component research agenda.

The authors discuss the process of transforming data into information and information into knowledge, an issue which I find really interesting, and which is increasingly common in the age of ‘big data’. Given that “80% of all digital data generated today include geospatial referencing (e.g., geographic coordinates, addresses, and postal codes)” (1), there is a clear link between big data management and geovisualization. As the authors trace a brief history of cartography and technological advances in GISystems, they explain the changing role of cartography in data acquisition/management/analysis. Geovis will likely play an important role in the management of big data, as new methods of visualization and analysis are required.

While reading the article, I wondered how geovis fits into GIScience, and whether it fits in differently than traditional cartography. The following questions came to mind: Is ‘traditional’ paper-map making a science, or an art? Is it the use of GISystem technology, and the knowledge required to use and maintain it, that drives the science, or the fact that geographic information is increasingly embedded in scientific data? I would argue that the inherent geographic concepts behind map-making (scale, projections, distortions, etc.) make it a science, and that technology is only changing the way in which it is practiced. The embeddedness of geographic information in data will likely (hopefully) lead to an increased understanding and use of GIScience, and of turning data into knowledge through geovisualization.

Thoughts on “Research Challenges in Geovisualization”

Friday, October 27th, 2017

MacEachren and Kraak (2001) compile the various research challenges and main themes underscoring the field of geovisualization. Geovisualization is discussed vis-a-vis representation, computer integration, interfaces, and cognitive/usability issues. From the number of issues presented, it is clear that geovisualization is a field with plenty of room for further research and development, particularly as geospatial data becomes increasingly complex and computational techniques become more powerful.

This article does an excellent job of highlighting the many different functions that geovisualization can serve and the dynamic role that it has in shaping knowledge production. While we may consider data visualization to primarily be for representing finished results, this article makes it clear that geovisualization can be an effective tool for visual data exploration, education, and knowledge discovery. It is less clear where the line is drawn between data visualization and data analysis. If geovisualization is increasingly being used in the early steps of the research process (i.e. before any conclusions about the data are formed), is there significant overlap between this exploration stage and the more rigorous analytical stage? Is data exploration a form of analysis? Both visualization and analysis have a critical role to play in the process of knowledge discovery and seem to be increasingly intertwined.

One of the concluding points from this article that really stuck with me was the call for geovisualization to focus on a human-centered approach. Such a recommendation rejects the view that one visualization strategy will be interpreted and used in the same way across populations. As technological developments allow geovisualizations to become increasingly diverse and complex, I believe that retaining a humanistic focus will be key. If geovisualizations are trending towards becoming more widely used as an interactive tool for knowledge discovery (rather than simply communicating overall findings), then it is critical for those developing the visualizations to have a clear understanding of how they will be used and interpreted. This call to develop humanistic approaches is something that should be applied across the whole field of GISciences. With increasing focus on new data types and innovative technologies, it is easy to forget that there are very real people behind this data and these technologies.

Structure from motion photogrammetry in physical geography (Smith, Carrivick, & Quincey, 2016)  

Monday, October 23rd, 2017

The paper presents an overall review of structure from motion photogrammetry with multi-view stereo (SfM-MVS). From the comparison figure shown in the paper, we can say SfM is an economical, efficient, and accurate method for surveying small areas at high resolution. The authors emphasize its significance in physical geography, while I think it may also be applied in human geography for contextualization. To my knowledge, some human geographers use mapping or drawing methods in the field, and they analyze these maps or drawings to study communities. SfM engagement in human geography could provide detailed context on a community, which may help human geographers gain insights into the case.

Besides, when reading this paper, I kept asking whether we really need data at such high resolutions. Although SfM-MVS enables us to capture many details about a small area, that detail may not always be necessary in our analysis. Therefore, I believe research on guiding the choice of SfM-MVS is also important for surveying communities. Moreover, I have doubts about the practicalities of integrating the high-resolution images SfM-MVS captures: when we analyze the data, the heavy geo-computing demands may force us to split the data for parallel computing.
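The splitting-for-parallel-computing idea above can be sketched very simply: break the raster into tiles, process each tile on a separate worker, and mosaic the results. This is a hypothetical illustration (not from the paper), ignoring the edge-overlap handling a real workflow would need:

```python
import numpy as np

def split_tiles(raster, tile_size):
    """Split a 2-D raster into square (or edge-clipped) tiles, each
    tagged with its top-left offset so results can be mosaicked back.
    Each tile can then be handed to an independent worker process."""
    h, w = raster.shape
    tiles = []
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            tiles.append(((r, c), raster[r:r + tile_size, c:c + tile_size]))
    return tiles

def mosaic(tiles, shape):
    """Reassemble processed tiles into a full raster."""
    out = np.zeros(shape)
    for (r, c), t in tiles:
        out[r:r + t.shape[0], c:c + t.shape[1]] = t
    return out
```

In practice each tile would also carry a buffer of overlapping pixels, so that neighbourhood operations (slope, filtering) remain correct at tile edges.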

Smith et al. – Structure from Motion Photogrammetry

Monday, October 23rd, 2017

Structure from motion photogrammetry in physical geography refers to the collection of topographic information for three-dimensional data sets of areas. From my perspective, I would lean towards categorizing this under the label of GIS tool rather than GIScience; the uses of the data sets produced could be categorized as science, but I wanted to establish the nature of the process. The article relates somewhat to my topic of location surveillance and privacy via the common application of using drones for surveillance. However, the use of motion photogrammetry for surveillance does not apply very well to monitoring the location of individuals.

The article has a heavy focus on the process of data collection while explaining less about how the data could be useful. The precision and detail of the data sets that can be created are very impressive. However, I would be interested to see more about how this data can be used differently compared to more basic data. Obviously, the improved accuracy of the information allows for more specific projects at larger scales, but aside from this, how will the technology change?

The article is still recent, and it states clearly that these technologies have not yet reached their full potential. It will be interesting to see what happens when they have, and more applications exist. These may become evident, but will require more time.

Smith et al – Structure from Motion Photogrammetry in Physical Geography

Monday, October 23rd, 2017

In this article, Smith et al. discuss a new advancement in the field of topographic data collection: the combined forces of Structure from Motion (SfM) and Multi-View Stereo (MVS) analysis. These work in tandem to acquire data from still images and then reconstitute those images into a 3D model using point clouds derived from the original images. These are georeferenced with control points for better accuracy.
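To make the pipeline described above a little more concrete, here is a minimal sketch (not from the article) of the linear triangulation step at the core of SfM: recovering one 3D point from a feature matched in two images, once the camera poses are known. A real SfM pipeline first estimates those camera matrices from the image matches themselves and refines everything with bundle adjustment; the function and values below are purely illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 (normalized image coordinates) in two views
    with known 3x4 camera projection matrices P1, P2."""
    # Each observation x = (u, v) gives two linear constraints on the
    # homogeneous 3D point X: u*(p3.X) - (p1.X) = 0, v*(p3.X) - (p2.X) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (right-singular vector with
    # the smallest singular value); dehomogenize to get XYZ.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

MVS then densifies the resulting sparse cloud by repeating this matching for (nearly) every pixel, which is where the fine topographic detail comes from.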

This proves to be an exciting change. While the need for topographic data remains an important factor for many organisations, governments, and people, there is often a barrier that prevents attaining this data due to the high cost of obtaining the software packages, tools, and skills necessary to do an effective topographic survey. SfM-MVS works with any sort of digital camera, with Smith musing about the potential use of smartphone-collected data for participatory surveying.

While reading this article, I wondered if this quick and easy method of collecting data would phase out the need for expertise and, in the process, lose an expert’s viewpoint on topographic data. Many of the other methods discussed in Smith’s article seemed to prioritize accuracy and scope over speed and cheapness, which in many cases may prove crucial. I can imagine companies, with cost-cutting intentions, switching from a system such as lidar or dGPS to SfM-MVS, and thus losing crucial aspects of their previous topographic surveys, which may then lead to negative externalities.

Smith discusses that there is a trend emerging in which SfM-MVS is being marketed in software packages and taking on a ‘black-box’ format. Smith spends a long time discussing the various steps taken in this method as a way to subvert that potentially damaging practice. The way in which specific technologies can be used most effectively needs to be considered.

While the inaccuracies inherent in SfM-MVS will make it difficult for it to find a place alongside more traditional methods of topographic surveying, it nonetheless holds an important role in certain scenarios. While it is inaccurate, it is cheap, quick, and easy to use. Smith brings up the example of SfM-MVS being used in post-flood events as a way to quickly survey the affected areas. In search-and-rescue emergency settings, small details can be omitted without much being lost.

The digital ecology in which all these different methods of topographic surveying take part must be understood as a whole, and ideally, the strengths and weaknesses of each method must be properly communicated to the surveyor.

-RTY

Thoughts on structure from motion – Smith et al. (2016)

Saturday, October 21st, 2017

Smith et al. (2016) discuss various dimensions of the structure from motion (SFM) photogrammetry technique, which is a recent development in topographic survey methods in physical geography. This article is one of the first times that our class has touched on a dimension of GISciences which is tightly linked to a physical system. As such, this article was particularly interesting for how it highlighted many considerations which are relevant when using a physical system that I hadn’t previously considered to be significant. As the SFM technique (and photogrammetry more broadly) collects data from a sensor mounted on a platform, it becomes important to think about factors such as portability, sensor type, and cost. As evidenced by this example, it is important for us to remember that technological developments and progress in building software systems must also be accompanied by practical implementation.

This article outlines the value of SFM for its relative affordability and accessibility (in terms of the level of expertise required). Such benefits make me think more broadly about the various barriers to access in the field of GISciences. As highlighted in this case, the often-high cost of software, data, and sensing platforms is a clear barrier which may restrict research to formal institutions (e.g. universities), businesses, and other individuals/organizations with ample financial resources. The level of expertise necessary for the advanced techniques that GISciences research often requires may also close the field to those without formal training. Techniques such as SFM which focus on democratizing the field of GISciences will hopefully grow in coming years.

On Johnson et al (2017)

Friday, October 20th, 2017

I thought this article was really interesting, as I didn’t know too much about UAVs and their uses (and access to their imagery). The article covered the uses (and constraints) of imagery from UAVs, and offered a refined list of online or free sources of imagery and editing software to process these images or videos.

This relates to a discussion which often comes up in class about the availability of funding and funding’s ability to steer the direction of research. Open data, like Open Aerial Map and others, is incredibly useful for those researching often-imaged areas, and it helps reduce the cost of conducting research while still allowing knowledge production to occur. It seems possible that with the costs of UAVs decreasing, remote sensing knowledge production may continue without a lot of funding (if the area is accessible, if the researcher has access to a UAV and the knowledge to use it properly, and access to the sensor they need, if the resolution works for their purposes, etc.).

Further, there is also software to process these images for free (of which I was previously not aware). Though some of the listed options were pay-per-use, it was really interesting to learn that free programs exist for RS processing. I have yet to open these sources and investigate them myself, but they seem like a good step towards a lower threshold for learning about and conducting remote sensing, or even just aerial photography processing. Granted, there is always a worry that VGI/PGIS will be inaccurate due to the low threshold at which “non-experts” can contribute to these sites, but I think that for basic use, or for a project where higher inaccuracy/coarseness in the data can be afforded, it’s a good resource. Further, I think these programs should be used more in educational environments to avoid reliance on a specific company and to give students greater breadth in learning about different software packages’ capabilities beyond the name-brand or industry standard (see: ESRI).

Volunteered Drone Imagery, Johnson et al. (2017)

Friday, October 20th, 2017

I think the article draws an interesting comparison between volunteered drone imagery (VDI) and other forms of volunteered geographic information (VGI). The “rise of the amateur” in most other VGI applications was really enabled by the spread of personal computers. It’s difficult for me to envision a world, at least in the near future, where personal UAVs become similarly prolific. The authors note that even for those who would purchase UAVs for enjoyment, there is still a technical barrier that would prevent less knowledgeable users from contributing. I imagine that it will be a while before VDI contributors begin to resemble “amateurs” in the way many VGI contributors do more broadly. There’s probably an interesting discussion to be had about how the different motivations behind contributing VDI and other types of VGI might affect concerns about data quality. I would be inclined to posit that VDI contributors have more professional expertise than the greater VGI community, perhaps making VDI less vulnerable to issues of credibility and vandalism. However, it’s conceivable that fewer users with the appropriate technical expertise would give the crowd less power to catch and rectify errors.

I think another important distinction between VDI and other VGI projects like OSM is that many remote sensing contributions are likely less interpretive. For instance, an OSM contributor might delineate a boundary between wetland and forest from aerial imagery through tags. It would appear, based on my limited experience with remote sensing, that the collection and contribution of most VDI precedes these interpretive steps, so naturally there would be different ways of addressing accuracy and precision. Of course, if the definition of VDI were extended to include remote sensing derivatives like classifications and DEMs (per the “UAV Mapping Workflow”), the challenges associated with interpretation become unavoidable.

Thoughts on “Volunteered Drone Imagery…” (Johnson et al.)

Friday, October 20th, 2017

I thought this paper was a short and sweet summary of the current state of UAV/UAS acquisition tools and data processing software. They used OSM as a parallel, vector-based example of what a future platform for aerial data could be, and it was helpful to have some schema about this topic to build from.

One issue that they did not touch upon, which immediately springs to mind when considering a database of “frequently updated, high resolution imagery” (pg. 1), is privacy. If they are referring to near-real-time information about inhabited environments, then having an exceedingly easy way to obtain high-resolution aerial imagery comes with all kinds of implications for protecting individuals’ privacy. Would they blur out humans and sensitive information like license plates? At which stage would this image manipulation occur, and who would be responsible for it? Even if the images are not granular enough to allow identification, there have been nefarious uses of geographic data before (like the people who used Pokémon GO data to target spaces known to contain other users and mug them). Especially since the ultimate aim seems to be for this data to be easily accessed and manipulated into third-party products and services, it would be difficult (or impossible) to “opt out”.

The authors discuss how the private sector is investing in this industry to “reduce even further the entrance costs” (pg. 1) to this field. I can see why companies would want to encourage recreational use of UAVs as a hobby, because the associated paraphernalia and updates present an opportunity for endless monetization. But as they note later in the paper, the specialized data processing software can be expensive and complicated. So it will be interesting to see how this balance between the democratization of UAV hardware and usability and the high barrier of later-stage data manipulation changes with time, investment, and public interest.

The issue of interoperability was not discussed explicitly, but it was touched on in the mention of how the large variety in sensor quality makes it difficult to host imagery on a common site and stitch images from a given area together coherently. This reminded me of the interoperability issues mentioned in the article on cyberGIS, and it seems like a recurrent issue in discussions of GIScience and its applications.

The example of the Nature Conservancy Coastal Resilience Project as a hosting service with a concrete agenda made me think about the importance of objectivity when compiling imagery or creating a data hosting platform. I would say OSM tries to be pretty objective in its collection and representation of data (although of course complete objectivity is impossible). But I wonder if it is more valuable to explicitly state the objectives and goals of an aerial imagery project in the hopes of solving a particular problem or addressing a particular gap in the data. That way, users who are interested in that particular issue are more likely to participate and provide better quality data. The general public could contribute too, but their contributions might be stronger if made in pursuit of a particular feature of the landscape, or to capture specific environmental indicators. Instead of having one platform of uniform data, a few platforms with specialized guidelines, centralized organization, and stated objectives for specific projects would be a meaningful and pragmatic first step. After assessing the success of these pilot projects, the UAS community could reflect on the necessity of a universal, high-quality aerial imagery platform.

-FutureSpock

Volunteered Drone Imagery (Johnson et al. 2017)

Friday, October 20th, 2017

I found this article very interesting, especially in the context of my project topic for this course, VGI. This goes a step beyond OSM and other vector-based VGI platforms, and attempts to use raster data and all the issues that come with it. With varying scale, resolution, flight height, and the plethora of other attributes embedded in drone imagery metadata, incorporating data from different instruments and temporal scales will be a headache to say the least (if it is even possible). In this sense, I feel cyberGIS could be a useful consideration for this topic, as there will no doubt be many different software and hardware attributes to standardize in such a dataset.
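To make the standardization problem concrete, here is a minimal sketch of what one vendor adapter might look like: mapping raw, vendor-specific metadata tags onto a shared schema before images from different drones can be compared or stitched. All of the field names, tag names, and units below are my own illustrative assumptions, not anything proposed in the paper; a real platform would need one such adapter per drone manufacturer.

```python
from dataclasses import dataclass

# A hypothetical common schema for volunteered drone imagery metadata.
@dataclass
class ImageRecord:
    lat: float          # image centre latitude, decimal degrees
    lon: float          # image centre longitude, decimal degrees
    gsd_cm: float       # ground sample distance, cm per pixel
    altitude_m: float   # flight altitude above ground, metres
    timestamp: str      # ISO 8601 capture time

def normalize(raw: dict) -> ImageRecord:
    """Map one (invented) vendor's raw tags onto the common schema.

    This adapter assumes the vendor reports GSD in metres/pixel and
    altitude in metres; another vendor might use feet, or omit GSD
    entirely and force us to derive it from altitude and focal length.
    """
    return ImageRecord(
        lat=float(raw["GPSLatitude"]),
        lon=float(raw["GPSLongitude"]),
        gsd_cm=float(raw["GSD_m"]) * 100.0,   # convert m/px -> cm/px
        altitude_m=float(raw["RelativeAltitude"]),
        timestamp=raw["DateTimeOriginal"],
    )

record = normalize({
    "GPSLatitude": "45.5048",
    "GPSLongitude": "-73.5772",
    "GSD_m": "0.03",
    "RelativeAltitude": "120",
    "DateTimeOriginal": "2017-10-20T14:05:00Z",
})
print(record)
```

Even in this toy version, the hard part is obvious: the adapter encodes per-vendor knowledge (units, tag names, missing fields), which is exactly the kind of interoperability work cyberGIS research concerns itself with.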

Furthermore, I’m still unsure whether this platform would reduce the digital and financial divide in GIS by providing more orthorectified aerial imagery (expensive to come by in many cases), or whether it would exacerbate it by assuming a wide contributor base even though drones are not very common items, and neither are their accompanying software or knowledge of complicated open-source SDKs. However, this was true of GPS units before the cell-phone era, and the near future could see drones become common household items for various tasks, in which case a drone OSM would be very feasible, provided that data collection and the stitching of images from different sources are resolved.

Another question that came to mind was “which photos get displayed / are treated as more reliable?”. The fundamental questions of VGI hold true in this case: which image would be selected if there were two near-identical images at the same scale and location provided by different users? Would user contribution history, drone model, temporal recency, and overall image quality (i.e. haze/smoke prevalence) be weighted equally in selecting and displaying the appropriate drone imagery on this open drone map?
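One way such a platform might answer that question is with a weighted scoring heuristic over competing contributions. The sketch below is purely hypothetical — the field names, the five-year recency decay, the contributor-history cap, and the weights are all my own assumptions for illustration, not anything from the paper — but it shows how the criteria above could be combined rather than treated equally.

```python
def score(image: dict, now_year: float = 2017.8) -> float:
    """Score one candidate image; higher scores win the display slot.

    Hypothetical criteria: recency decays linearly over 5 years,
    contributor history is capped so veterans can't dominate outright,
    and 'clarity' stands in for an automated haze/blur quality measure
    on a 0..1 scale. The weights would need tuning in practice.
    """
    recency = max(0.0, 1.0 - (now_year - image["year"]) / 5.0)
    reputation = min(image["user_edits"], 100) / 100.0
    quality = image["clarity"]
    return 0.4 * quality + 0.35 * recency + 0.25 * reputation

# Two overlapping contributions for the same location:
candidates = [
    {"id": "a", "year": 2017.5, "user_edits": 10, "clarity": 0.9},
    {"id": "b", "year": 2015.0, "user_edits": 250, "clarity": 0.7},
]
best = max(candidates, key=score)
print(best["id"])  # "a": the sharper, more recent image beats the veteran contributor
```

The interesting design question is exactly the one raised above: whichever weights the platform chooses encode an editorial judgment about whose imagery counts, just as OSM’s conflict-resolution conventions do for vector edits.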

I’m sure these issues will be answered on a trial-and-error basis at first, and hopefully we will be using an open drone map soon, as this topic is very exciting. I’d also like to add that the repurposing of military equipment (which I’m strongly in favour of, as I hope the military truly isn’t just a sink for money after wars) raises the question of whether contributions from a military drone would simply replace user contributions as an ‘authoritative’ government contribution on this platform, as sometimes occurs in OSM. In my opinion this goes against the purpose of an open map, though it could belong on a different platform.

-MercatorGator

Johnson et al (2017) – UAVs for VGI

Thursday, October 19th, 2017

This paper offered an interesting introduction to unmanned aerial vehicles (UAVs), following the process from data acquisition to map distribution. Johnson et al advocate for the development of an Open Street Map-style data repository, where users can volunteer imagery that they’ve collected themselves. This type of platform, they argue, could provide an invaluable resource for citizen science and grassroots initiatives, empowering communities through powerful technological and analytical tools.

The authors characterise a number of challenges to creating a user-contributed repository for UAV aerial imagery drawn from the VGI literature; namely data quality, licensing, and supporting broad user engagement. I would add four important considerations. First, data heterogeneity: it will be a challenge to ‘stitch’ separately collected imagery/DEM data of different formats, resolutions, elevations, aspects, colour balances, etc. Perhaps an overlay would be necessary. Second, data coverage: high spatiotemporally resolved imagery may exist for an area of interest to one community but not be useful to others, which could affect the extent to which people are willing to contribute information. Third, privacy: highly resolved mapping services like Google Street View have to take costly precautions to protect subject anonymity, and people may not be happy about having detailed images of themselves or their personal information made public. Fourth, making the data open: ‘enemies’ of grassroots organisations may have stronger analytical capabilities than their opposition, and be able to manipulate the data in their favour.

These issues apply to other citizen-based aerial mapping projects, but would become more pertinent in the case of a scalable sharing platform. That said, this kind of work outlines an extremely productive venture for citizen science and VGI, and opens a promising avenue for future research.
-slumley

Thoughts on Radil et al.

Monday, October 16th, 2017

After reading “Spatializing Social Networks: Using Social Network Analysis to Investigate Geographies of Gang Rivalry, Territoriality, and Violence in Los Angeles”, I am more able to understand the methodologies of quantitatively relating physical geography and social networks. The use of reported violence as the indicator for relatedness is interesting, especially in the field of social networks.

This article begins by introducing qualitative sociological and geographic concepts such as embeddedness, locale, and sense of place. The use of qualitative GIS to observe such phenomena is nothing new. The mixed-method approach used for this research is problematic, however; several reservations were explained in the conclusion, including the dynamic nature of gang rivalries.

The issues I found with the formation of the social network are the binary nature of rivalry and the source of the network links, the LAPD. Is the opposite of rivalry allyship, or indifference? Is the cause of “gang-related violence” solely a matter of “spatial transgressions”? If these transgressions occur between non-rivals, is there simply no violent response? Or do transgressions only occur between gangs that share a common border?

The statistical methods used in this article were difficult to grasp, even though the subject matter was interesting and the results could be stated simply (the location of gang violence is spatially correlated with turf boundaries between rival gangs). I would like to learn more about network positionality and the quantitative CONCOR procedure for evaluating qualitative links.