Archive for the ‘geographic information systems’ Category

Problems of classification

Thursday, April 4th, 2013

Since Wilkinson's 1996 paper, many satellites have been put into orbit and several million gigabytes of satellite imagery have been collected. But more importantly, with the coming of the digital camera there has been an explosion in the amount of digital images captured. Consequently, people were quick to spot the opportunity in leveraging the data in these images, and a lot of research has been conducted in the image processing domain (mainly in biometrics and security). That said, some of the most successful approaches from other domains have not performed as well when applied to satellite images, and the challenges outlined in the paper still hold true today.

According to my understanding, this is mainly because of the great diversity in satellite images. Resolution is only one part of the equation; the main problem lies in the diversity of the things being imaged. This makes it very difficult to come up with training samples that are a good fit, so traditional machine learning techniques based on supervised learning have a hard time. Moreover, the problem is compounded by the fact that when we classify satellite images, we are generally interested in extracting not one but several classes simultaneously, with great accuracy. The algorithms do perform well when classification is done one image at a time, but significant human involvement is needed to select good training samples for each image. To the best of my knowledge, no technique exists which can classify satellite images fully automatically.
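
To make the supervised-learning idea concrete, here is a minimal sketch of pixel classification with a nearest-centroid classifier. Everything in it is invented for illustration (the band values, the two classes, the helper names); real satellite classification uses many more bands, far larger training sets, and more capable models, which is exactly where the training-sample problem bites.

```python
# Toy illustration of supervised pixel classification: a nearest-centroid
# classifier trained on labelled "spectral" samples. Band values and class
# names are hypothetical, chosen only to show the mechanics.

def train_centroids(samples):
    """samples: {class_name: [(band1, band2, ...), ...]} -> class centroids."""
    centroids = {}
    for cls, pixels in samples.items():
        n = len(pixels)
        centroids[cls] = tuple(sum(p[i] for p in pixels) / n
                               for i in range(len(pixels[0])))
    return centroids

def classify(pixel, centroids):
    """Assign the pixel to the class whose centroid is nearest (squared distance)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, centroids[c])))

# Hypothetical training samples: two spectral bands per pixel.
training = {
    "water":      [(0.05, 0.10), (0.07, 0.12), (0.04, 0.09)],
    "vegetation": [(0.40, 0.80), (0.45, 0.85), (0.38, 0.78)],
}
cents = train_centroids(training)
print(classify((0.06, 0.11), cents))  # a pixel close to the water samples
```

The sketch also makes the failure mode visible: if the training pixels for "water" came from one image and the next image's water looks spectrally different, the centroids no longer fit, which is why human-selected samples per image remain necessary.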

-Dipto Sarkar

Error prone GIS

Monday, April 1st, 2013

In any data-related field, great efforts are put into ensuring the quality and integrity of the data being used. It has long been recognized that the quality of results can only be as good as the data itself; moreover, the quality of a dataset is no better than the worst apple in the lot. Hence, any data-intensive field invests heavily in data pre-processing to understand and improve the quality of the data. GIS is no exception when it comes to being cautious about data.

The various kinds of data handled in GIS make the problem of errors more profound. Not only does GIS work with vector and raster data, it also needs to handle data in the form of tables. Moreover, the way the data is procured and converted is also a concern. Often, data is obtained from external sources as tables of incidences with some field(s) containing the location of the event. Usually this data was not collected with the specific purpose of being analysed for spatial patterns, so the locational accuracy of the events varies greatly. Thus, when these files are converted into shapefiles, the shapefiles inherit the inaccuracies built into the dataset.

One thing to remember, however, is that the aim of GIS is to abstract reality into a form that can be understood and analysed efficiently. Thus it is important not to lay too much emphasis on how accurately the data fits the real world. The emphasis should instead be on finding the level of abstraction that is ideal for the application scenario, and then understanding the errors that can be accepted at that level of abstraction.

-Dipto Sarkar

Statutory warning: Geocoding may be prone to errors

Thursday, March 21st, 2013

The last few years have seen tremendous growth in the usage of spatial data. Innumerable applications have contributed to the gathering of spatial information from the public. Applications people use every day, like Facebook and Flickr, have also introduced features with which one can report their location. However, people are not generally interested in latitude-longitude coordinates; names of places make more sense in day-to-day life. Hence, these applications report not the spatial co-ordinates but the named location (at different scales) where the person is. The tremendous amount of location information generated has not gone unnoticed, and several studies have been conducted to leverage it. But one issue frequently overlooked in studies that use these locations is the accuracy of the geocoding service used to obtain them. Not only is displacement a problem, but the scale at which the location was geocoded will also have an effect on the study. The comparison of the accuracy of the various available geocoding services by Roongpiboonsopit et al. serves as a warning to anyone using geocoded results.
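
Positional displacement, the error the post warns about, is easy to quantify when a ground-truth coordinate is available: compute the great-circle distance between the geocoder's output and the true point. A minimal sketch with the haversine formula follows; the two coordinate pairs are invented for illustration, not taken from any real geocoder.

```python
# Sketch: measuring geocoding displacement with the haversine formula.
# Coordinates below are illustrative, not from any real geocoding service.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical example: true location vs. a geocoder's returned point.
true_pt = (45.5048, -73.5772)   # assumed ground truth
geocoded = (45.5100, -73.5700)  # assumed geocoder output
print(round(haversine_km(*true_pt, *geocoded), 2), "km displacement")
```

Aggregating such displacements over a sample of addresses is essentially how geocoder accuracy comparisons of the kind Roongpiboonsopit et al. performed are reported.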

-Dipto Sarkar

 

Radical changes in Time

Wednesday, March 13th, 2013

The paper by Langran et al. made me realize how little has been achieved in representing the temporal aspect through maps. Digital maps have tried portraying the change in some phenomenon over time through accessories like time sliders, but this only changes the overlay information on a static base map. The lack of tighter time-map integration makes it impossible to capture cause and effect in a more holistic way.

Though GIScience emerged as a merger of spatial sciences with technology, it embraced the concept of temporally static maps to represent data.

The foremost thought that comes to my mind is that a radical change is required in how we represent space-time. The whole concept of maps needs to be redesigned to break the triangle of theme, location and time. Though this may be a very strong statement without much backing, I think that with a redesign of representation and the right data structure, maps can be made to represent both location and time together, keeping the theme fixed. This would be akin to perceiving the world as a state machine, with a set of states and actions that cause state changes (though the sets of states and actions may be potentially infinite and not necessarily known a priori). The state machine concept addresses the "root" of the problem: different snapshots represent the states, but not the events that caused the changes. This, however, requires tremendous effort and a change of mind-set, coupled with an embrace of technology in redesigning the thought process.
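
The state-machine idea can be sketched very compactly: store one base state plus a log of timestamped events (the causes of change), and reconstruct the map's state at any time by replaying events. The parcel names, land-use values and event times below are all invented for illustration.

```python
# Sketch of the "state machine" idea for temporal GIS: a base state plus
# timestamped change events, replayed to recover the state at any time.
# Feature names, values and dates are hypothetical.

def state_at(base_state, events, t):
    """Replay all events with time <= t onto a copy of the base state."""
    state = dict(base_state)
    for time, feature, new_value in sorted(events):
        if time > t:
            break
        state[feature] = new_value
    return state

base = {"parcel_17": "forest", "parcel_18": "forest"}
events = [
    (2005, "parcel_17", "farmland"),     # clearing event
    (2012, "parcel_17", "residential"),  # development event
]
print(state_at(base, events, 2000))  # both parcels still forest
print(state_at(base, events, 2008))  # parcel_17 is farmland
```

Unlike a stack of snapshots, this representation keeps the events themselves, so the causes of change remain queryable alongside the states they produce.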

- Dipto Sarkar

Maps vs Reality vs Virtual Reality

Thursday, February 28th, 2013

To be very honest, I found the paper by Richardson et al. to be one of the more interesting papers that I have read. The comparisons they make are intriguing, and the results are even more surprising.

I found the experiment designed by the researchers to be very robust; hence, the results can be accepted as quite accurate. The question the results raised in my mind was about the effects that augmented reality systems have on our spatial cognition abilities. Considering a GPS navigator to be an augmented reality system, does it mean that we are becoming less adept at navigating naturally because we rely on it? Has anyone conducted research to understand the effect GPS navigation systems have on an individual's spatial cognition abilities? How accurately and efficiently can regular GPS navigator users work out the route between two places compared to non-users?

-Dipto Sarkar

 

Humans as Sensors

Monday, February 25th, 2013

The paper by Goodchild provides an overview of the various enabling factors that have led to the success of VGIS. I found the concept of "humans as sensors" to be particularly interesting. I feel that this has been the primary driving force behind VGIS services like Wikimapia, OpenStreetMap and even Google Maps. When maps started becoming digital, one of the primary challenges was to gather enough data to represent an area at different scales. This problem was not as profound for paper maps, which were produced only at certain discrete scales. To gather enough data for digital maps, mass public participation became inevitable. Collecting so much data at different granularity levels was made possible only because people with varying degrees of knowledge about an area started to contribute to services like OpenStreetMap, over time generating enough information to provide a fairly complete "patchwork". Despite all the public effort, Google Maps for India has been criticized as incomplete, incorrect and even non-existent in certain cases. In response, Google has organised an event called Mapathon 2013 (from the 12th of February 2013 to the 25th of March 2013) in India. The event aims to incentivise the process of adding geographic information to Google Maps by giving out attractive prizes to the top editors.

When it comes to the use of VGIS in case of emergency or disaster situations, where traditional data collection can become too slow to be useful, Ushahidi deserves special mention. “Ushahidi (Swahili for “testimony” or “witness”) created a website (http://legacy.ushahidi.com) in the aftermath of Kenya’s disputed 2007 presidential election that collected eyewitness reports of violence sent in by email and text-message and placed them on a Google Maps map” (Wikipedia). A visit to the Wikipedia entry for Ushahidi reveals several crisis situations where similar solutions based on the Ushahidi platform proved to be helpful. I also encourage a visit to the Ushahidi website (http://www.ushahidi.com/) to understand the wide range of technological support that it provides to build crisis/disaster mapping portals.

- Dipto Sarkar

Critical GIS

Thursday, February 21st, 2013

I found the paper by O’Sullivan very intriguing. I was completely unaware that GIS research is going on in some of the directions mentioned in the paper. I found the section ‘Gendering of GIS’ particularly interesting. In India, there is a lot of work going on in women's empowerment, and it will be very interesting to see whether someone can use similar systems there, given the limited penetration of the internet.

Privacy and ethics is another part of GIS that requires a lot of research. As more and more applications take the location of the user as a principal component, it is becoming very important to come up with standards for privacy protection. With the number of PPGIS applications increasing, a great number of people from society are contributing to the task of collecting geographic data. Though this means that GIS is gaining wider acceptance in society, it remains a challenge how to release this data while striking a balance between accountability and privacy.

-Dipto Sarkar

 

The near future of Augmented Reality

Monday, February 18th, 2013

After reading the paper by Azuma et al., I am convinced that augmented reality systems of the kind shown in science fiction movies are not far off. However, I think the first commercial applications of augmented reality will use mobile phones as the primary device. Mobile phones are already equipped with a range of sensors, like GPS, electronic compass, accelerometer and camera, which can be used to provide measurements of the environment. This is already leveraged by applications such as Google Goggles, and only slight improvements will make such systems real-time, thus qualifying them as augmented reality systems according to the definition given by Azuma et al. I also feel that acceptance of these applications will be higher as they do not require clunky wearable computers.

Another thought that came to my mind is the use of ubiquitous computing for augmented reality applications. Instead of putting all the responsibility for sensing the environment, doing calculations and displaying results on a single device, it might be useful to distribute some of these tasks to smaller specialized units present (or planted) in the physical environment of the user. When a user comes into proximity of these computers, the device they are carrying may simply fetch the data and display it after doing some minimal calculations.

-Dipto Sarkar

 

How to handle scale?

Tuesday, February 12th, 2013

Any discussion in the initial stages of a GIS project has an episode where people argue about the exact scale at which to carry out the analysis. The paper by Danielle J. Marceau gives a great overview of the various ways in which space and scale are conceived and how scale affects the results of analysis. However, many things in nature do repeat themselves very regularly with scale. An entire field of mathematics, fractal geometry, deals with things that are self-similar at different scales. A set of formulas can define such objects very precisely, and those formulas are all that is needed to reproduce them at any scale.

So, is it accurate to say that many things in geography appear entirely different at different scales? Or do they change gradually with scale? If so, perhaps we can view these things as continuous functions of scale, and it is possible that we will come up with equations that explain this gradual change. All we would require then would be an equation describing the process at a particular scale, and another equation describing how the process changes with scale, and we would be able to reconstruct how the object or phenomenon looks at any required scale.
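
The fractal case shows exactly this pair of equations in miniature. For the Koch curve, the "rule for how things change with scale" is that each refinement replaces every segment with four segments one third as long, so total length follows the closed form (4/3)^n and the fractal dimension is log 4 / log 3. A short sketch:

```python
# Sketch: self-similarity as "an equation at one scale plus a rule for how it
# changes with scale". For the Koch curve, each refinement replaces every
# segment with 4 segments of 1/3 the length, so after n refinements:
#   segment_length = (1/3)**n,  total_length = (4/3)**n
# and the fractal (similarity) dimension is log(4)/log(3).
import math

def koch_length(n):
    """Total length of the Koch curve after n refinements of a unit segment."""
    return (4 / 3) ** n

dimension = math.log(4) / math.log(3)
for n in range(4):
    print(n, round(koch_length(n), 4))
print("fractal dimension ~", round(dimension, 4))
```

Whether geographic processes admit anything as tidy as this scaling law is, of course, the open question the paragraph above raises; the Koch curve is only the idealized best case.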

- Dipto Sarkar

Do Mountains Exist?

Thursday, February 7th, 2013

The deep question with which the paper starts delves into the definitions of existence and our comprehension of the geographic features around us. The coming of predicate logic was the first attempt to consolidate questions about existence in a scientific framework, thus binding existence to a variable. However, to answer questions about categories and objects, predicate logic faces a challenge, as these definitions are by nature recursive. As rightly pointed out by Barry Smith and David M. Mark, the question then becomes twofold: “do token entities of a given kind or category K exist?” and “does the kind or category K itself exist?”. Predicate logic is good at explaining logical entailment but fails to take into account how humans perceive things. Thus, it may be right to say that mountains exist because they are part of the perceived environment.

Information systems, on the other hand, adopted a different definition of ontologies: a set of syntax and semantics to unambiguously describe concepts in a domain. Objects are thus classified into categories, and the categories are in turn arranged into a hierarchical structure. However, such an arrangement proved inadequate for describing things like mountains and soils, or phenomena such as gravity. One central goal of ontological regimentation is the resolution of the incompatibilities which result in such circumstances. Hence the concept of fields was developed to categorize these “things” efficiently.

However, there are still doubts about the naming of such “things” as mountains. Obviously, Mt. Everest exists because all the particles making up Mt. Everest exist, but exactly which particles constitute Mt. Everest? This is the inherent problem in dealing with fields, which are by nature continuous and lack discrete boundaries.

Ideally, the field of ontology should be able to explain the entire set of things that are conceptualized and perceived, with no ambiguity. This requires tremendous insight and reflection about why things exist in the first place.

- Dipto Sarkar

 

Statistics and GIS- a lot has changed

Tuesday, February 5th, 2013

A lot has changed in the two decades since the paper “Spatial Statistical Analysis and Geographic Information Systems” was published by Anselin and Getis. Today, the central focus of GIS is on spatial analysis and the rich set of statistical tools to perform it. The GIS database and the analysis tools are no longer looked upon as different pieces of software: spatial analysis is fully integrated in GIS packages like ArcGIS and QGIS. Furthermore, for very specialized applications, the modular or loosely coupled approach is often employed. Software like CrimeStat uses data in established GIS formats, performs analysis on it, and produces results for use in GIS packages.

When it comes to the nature of spatial data, two data models have been widely accepted, namely the object-based model and the field/raster-based model. Extensive sets of analysis tools have been developed for each. Data heterogeneity and relations between objects are also taken into account by slight improvements over these two models.

Exploratory data analysis and model-driven analysis have progressed hand in hand and complement each other. While new and innovative visualization and exploration tools help in understanding the data and the problem better, software has evolved over time to perform the complex non-linear estimations required for model-based analysis.

However, statistics and GIS form an ever-evolving field, and newer methodologies and techniques are developed every day, pushing the boundary of cutting-edge research further and further. Newer challenges in statistical analysis include handling big data and community-generated spatial information. How these new challenges evolve will be very interesting to observe.

-Dipto Sarkar

Geovisualization-What we have achieved

Thursday, January 31st, 2013

Many of the pressing problems of today have a geo-spatial component. The paper by MacEachren rightly points out the challenges in dealing with efficient representation of geospatial data. In the eleven years since the paper was written, radical changes have taken place in the domain of virtual mapping. Not only did GIS software like ArcGIS and QGIS develop rapidly, but other mapping and virtual earth services like Google Maps and Google Earth have also become popular. The authors had rightly pointed out the changes taking place as the internet became the prominent medium for disseminating geospatial data.

With reportedly 80% of all user-generated data on the web containing geo-location information, storing and leveraging this data generates a lot of interest. Some of the problems discussed in the paper have been dealt with efficiently in recent years. For example, multi-scale representation of objects has been handled with the concept of scale-dependent renderers, used extensively in GIS packages as well as in Google Maps and Google Earth; however, the decision of what to show at each scale is still subjective. When geographic objects are stored in the database as vectors, attribute information can be added to each object to further describe it in a non-spatial manner. The abstraction of layers provides the flexibility of a modular map-building and analysis approach, enabling reuse of the layers to create different themes. Crowdsourcing and mobile mapping applications have defined the way group mapping tasks are performed.
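
The core of a scale-dependent renderer is simple to sketch: each layer declares the zoom range at which it is visible, and the renderer draws only the layers whose range contains the current zoom level. The layer names and zoom thresholds below are illustrative assumptions, not taken from any particular GIS package, but the mechanism is the same one web map libraries expose as per-layer minimum and maximum zoom settings.

```python
# Sketch of a scale-dependent renderer: each layer carries the zoom range at
# which it should be drawn. Layer names and thresholds are hypothetical.

layers = [
    {"name": "country_borders",     "min_zoom": 0,  "max_zoom": 18},
    {"name": "major_roads",         "min_zoom": 6,  "max_zoom": 18},
    {"name": "building_footprints", "min_zoom": 14, "max_zoom": 18},
]

def visible_layers(zoom):
    """Return the names of layers to draw at this zoom level."""
    return [layer["name"] for layer in layers
            if layer["min_zoom"] <= zoom <= layer["max_zoom"]]

print(visible_layers(4))   # country borders only
print(visible_layers(15))  # all three layers
```

The subjectivity the post notes lives entirely in those threshold numbers: the mechanism is trivial, but choosing which content deserves to appear at which scale remains a cartographic judgment.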

The paper also emphasises several times the need for cross-domain research to address the problems of geovisualization and spatial analysis. In terms of geovisualization, research results from computer graphics, geosciences, cartography, human-computer interaction and information visualization need to be integrated in order to find new and innovative ways of creating maps. Multi-disciplinary, crosscutting research is the way forward to make further advances in how geographic information is presented.

-Dipto Sarkar

 

Eye-tracking in Augmented Reality

Monday, January 28th, 2013

The paper by Poole et al. discusses in detail the metrics used in eye-tracking research and some of its applications. However, the paper fails to mention one of the most successful commercial uses of the technology: Canon introduced SLR cameras as early as 1992 which employed eye-controlled autofocus. The system worked very well and has led to a lot of discussion amongst photographers as to why Canon does not include this technology in its recent cameras.

Now, with the coming of augmented reality systems, eye-tracking technology has the potential to revolutionize how users interact with their surroundings. Ubiquity is the most important requirement for any augmented reality system. Eye-tracking can be used to detect when the user seems to be confused and accordingly provide them with contextual information. Such applications of augmented reality will be less intrusive and more usable in day-to-day life. Eye-tracking can be further coupled with other technology such as GPS to make augmented reality systems more usable by increasing the speed at which they detect objects. The location information provided by the GPS can be used to narrow down the search space for the object. For example, if a tourist is staring at the Eiffel Tower, then the system knows that they are located near the Eiffel Tower in Paris, and the search space where the system needs to look for similar-looking objects is greatly reduced.
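
The GPS pre-filtering step can be sketched in a few lines: keep only the landmarks within some radius of the user's fix, and pass just those candidates to the expensive image matcher. The landmark coordinates and the 5 km radius below are illustrative assumptions, and the distance uses a flat-earth (equirectangular) approximation that is adequate at these short ranges.

```python
# Sketch: using a GPS fix to shrink an object-recognition search space.
# Landmark coordinates are approximate and illustrative.
import math

landmarks = {
    "Eiffel Tower":      (48.8584, 2.2945),
    "Louvre":            (48.8606, 2.3376),
    "Statue of Liberty": (40.6892, -74.0445),
}

def approx_km(lat1, lon1, lat2, lon2):
    """Equirectangular distance approximation in kilometres (fine for short ranges)."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def candidates_near(lat, lon, radius_km=5.0):
    """Landmarks worth matching against, given the user's GPS fix."""
    return [name for name, (la, lo) in landmarks.items()
            if approx_km(lat, lon, la, lo) <= radius_km]

print(candidates_near(48.8580, 2.2950))  # a user standing in Paris
```

Instead of matching the camera frame against every known landmark, the recognizer now considers only the nearby handful, which is precisely the speed-up the tourist example describes.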

The whole domain of augmented reality is still in its infancy, and it is up to the imagination of engineers to find supplementary technologies that might enhance such systems.

- Dipto Sarkar

 

GIS and Spatial Decision Support Systems

Tuesday, January 22nd, 2013

Decision Support Systems (DSS) are distinguished by the fact that they aid in making decisions about problems that are semi-structured in their definition. However, they do not replace the decision maker. A DSS has capabilities for handling and analyzing data, and provides multi-dimensional views to help highlight the different aspects of a problem.

One may notice that GISystems already deal with some of the things mentioned above; hence, it may be said that a complete GI suite is quite close to a DSS. The paper by Densham rightly points out, however, that there are some aspects in which GISystems fall short of being a complete Spatial Decision Support System.

GISystems are traditionally meant to handle only spatial data. For a GISystem to be useful as a Spatial DSS, it should have more flexibility in how it handles non-spatial data. Moreover, the outputs of GISystems are usually only cartographic in nature and might not provide some insights about the problem. It is necessary for the system to be able to generate reports, charts and other data visualizations to supplement the cartographic maps, thus ensuring a 360-degree view of the situation. A further challenge in simultaneously handling spatial and non-spatial data is to model the complex relationships between them and to come up with algorithms able to leverage these relationships.

The paper also proposes a framework for the development of SDSS. The framework leverages the modular approach to building software, which enables maximum flexibility in terms of re-use of components when building different systems. SDSS toolboxes can be combined into generators, combinations of which can be further configured to produce specific SDSS. This approach not only provides ease of component re-usability but also facilitates the addition of new capabilities to an existing system without disruption.

Densham also emphasizes the importance of incorporating research results from the field of DBMS to achieve a high-performance system. The UI needs to be built keeping in mind that the system is going to be used by decision makers who may not be GIS experts. Both the spatial and non-spatial analysis components should be intuitive to use, and a variety of outputs ranging from maps to charts to tables must be available in order to highlight all aspects of the problem.

-Dipto Sarkar

PPGIS in spatial planning

Monday, January 21st, 2013

Web 2.0 shifted the role of Internet users from mere consumers of services to a more active one where they are responsible for creating the content. The availability of mapping services like Google Maps and their public APIs has encouraged the development of various innovative mapping applications. However, there has been a lack of mapping applications whose main intent is to facilitate planning. There are various web-based applications of geographic information that generate masses of spatial information, but the kind of application that would narrow the divide between GIS for people and GIS for professionals has been lacking. ArgooMap, in fact, is an interesting experiment to understand the utility of public participation web mapping projects in facilitating planning.

The discussion thread for the application was first carried out in a non-GIS environment and yet generated a lot of spatial references. It is thus clear that the inherent way in which people think about planning problems is spatial; hence a UI with a map will help in better representing the locations being talked about. When the discussion was imported into ArgooMap, the linking of threads to geographic locations provided a better understanding of what place was being discussed. The end output of the system was also helpful for administrators, who could easily see the regions that generated the most interest without reading through all the messages. One of the problems with building such a system, however, will be to define what one means by high, medium and low spatial resolution, as the definitions are very application-sensitive. Moreover, a very intuitive UI is needed for such applications so as to ensure good participation from the public. Results of GIS research can also be incorporated to increase the efficiency and performance of these systems.

PPGIS applications such as these have the potential to change how grassroots public participation is incorporated into spatial planning decisions, and hence give rise to a new range of e-governance applications.

- Dipto Sarkar

People centric GIS -is it the only way?

Friday, January 18th, 2013

The paper by Miller is concerned with the shift in perspective towards making GIS people-centric rather than geography-centric. The rapid development of GIS has spawned several new applications, like Location Based Services, which essentially address the more commercial aspects of spatial information. Innovative LBS applications have been developed where the most important piece of information required is the location of the user. Location-based advertisements and offers are at one end of the spectrum; at the other end are more futuristic developments like Google Goggles and other augmented-reality-based applications.

However, it is to be noted that GIS does not merely encompass the likes of the above-mentioned applications. GIS has evolved into a scientific discipline which encompasses a whole range of problems. The "people centric" approaches to GIS will thus essentially be only a part of the larger scientific discipline. New data models and new analysis techniques will be developed for addressing the specific issues of these applications, but by and large the main focus of GIScience will continue to be geography, or the spatial domain.

-Dipto Sarkar

Reference:
What about People in Geographic Information Science?- by Harvey J. Miller

Tool to Science

Friday, January 18th, 2013

“The unexamined life is not worth living.”

How do subjects evolve?

The above quote comes from probably the first of the well-known Western philosophers, Socrates. Back in the time of Socrates, Plato and Aristotle, intellectuals used to ponder things both material and spiritual. They were theologians, mathematicians and logicians at the same time. Once the ball of intellectualism had started rolling, more and more people delved deeper into the realms of these subjects. Strato (known as "the Physicist") and Aristarchus (who anticipated Copernicus's claims) made important contributions to physics. Mathematics was enriched with the coming of Euclid. Eventually the body of knowledge started to increase, and by the time Newton arrived, philosophy had spawned two new fields, namely physics and mathematics.

The 1960s-80s saw the development of another new field which has made major inroads into all aspects of our lives: computer science. When computers started being developed, mainly electrical engineers and mathematicians showed interest in the new tool. However, computer users started to develop their own vocabulary, and as people delved deeper into the theory of how computers work, they realized that the computer was not merely solving some existing problems but also enabling the creation and solution of a whole new spectrum of problems that were previously unknown. Hence the entire spectrum of problems that could be solved with computers, and the ones they created, coalesced into a "Science" of its own called Computer Science.

What about GIS?

We, the people working in GIS, are at another crossroads which is seeing the development of a new science. Geographic Information Systems cannot be called a mere tool anymore. GIS has amalgamated several fields which were related but thought to be incompatible with each other. Today GIScience encompasses remote sensing, cartography, geography, computer science and several other earth science subjects. Several new tools have also been added to the arsenal, like GPS, which has transformed work flows. New ways of representing data have emerged. Active research is going on to solve a whole new class of spatial problems which was previously non-existent. The strong backbone of IT infrastructure is also creating interest in new data models, algorithms and large-scale distributed GIS systems. Many existing academic fields have started showing interest in using and developing this new "emerging field". The research interests in GIScience today are varied and far-reaching. All in all, GIScience is showing the same development cycle that has been followed by all the fields of science that have developed.

So it may rightfully be concluded that GIScience can definitely be considered an emergent science rather than merely a tool. We are at the crossroads where this transition is taking place. Sixteen years after the paper by Wright et al., there is little doubt that all the scepticism mentioned in the paper about deeming the field a science has been answered. The four conditions mentioned in the paper "for the emergence of a science from a technology" have effectively been fulfilled. GIS has thus progressed along the continuum from being "a tool", through "tool making", to a "Science".

 

-Dipto Sarkar

 

References:
Demystifying the Persistent Ambiguity of GIS as “Tool” Versus “Science” – Dawn J. Wright, Michael F. Goodchild, and James D. Proctor

Citizen scientists working with scientists

Tuesday, May 1st, 2012

A good article on assessing the data quality of volunteered contributions from citizen scientists:

Christopher Nagy, Kyle Bardwell, Robert F. Rockwell, Rod Christie and Mark Weckel. 2012. Validation of a Citizen Science-Based Model of Site Occupancy for Eastern Screech Owls with Systematic Data in Suburban New York and Connecticut. Northeastern Naturalist 19(sp6):143-158.

Abstract

We characterized the landscape-level habitat use of Megascops asio (Eastern Screech Owl) in a suburban/urban region of New York and Connecticut using citizen-science methodologies and GIS-based land-use information. Volunteers sampled their properties using call-playback surveys in the summers of 2009 and 2010. We modeled detection and occupancy as functions of distance to forest and two coarse measures of development. AICc-supported models were validated with an independent dataset collected by trained professionals. Validated models indicated a negative association between occupancy and percent forest cover or, similarly, a positive association with percent impervious cover. When compared against the systematic dataset, models that used forest cover as a predictor had the highest accuracy (kappa = 0.73 ± 0.18) in predicting the occupancy observations in the systematic survey. After accounting for detection, both datasets support similar owl-habitat patterns of predicting occupancy in developed areas compared to highly rural. While there is likely a minimum amount of forest cover and/or maximum level of urbanization that Screech Owls can tolerate, such limits appear to be beyond the ranges sampled in this study. Future research that seeks to determine this development limit should focus on very urbanized areas. The high accuracy of the citizen-science models in predicting the systematic dataset indicates that volunteer-based efforts can provide reliable data for wildlife studies.

Power, control and the social construction of place

Friday, March 30th, 2012

When reading the other posts, it seemed that Aitken and Michel’s (1995) article did not receive many positive remarks, mainly for its lack of clarity and vagueness. Perhaps I spent too much time reading marginal continental philosophy this semester, which made me more sympathetic to this piece. Although the article is more theory based, it examines pertinent issues of GIS that are still around today. The authors advocate for “all actors involved in the production and consumption of GIS to have some ownership in the creation of GIS knowledge” (17). They question the difference between ownership of a process and participation in a process. Power struggles arise when one group clearly dominates influence over the outcome at the expense of another. If GIS is identified and examined as a social construction in this article, how will we change power relations to find a more equal (not perfect) opportunity not only in the process of ownership, but also in the process of participation? According to the article, “a GIS cannot be divorced from the social context of its creation” (18). So how do we get the groups holding ownership rights to socially construct an alternative that grants more importance, and power, to those involved in the participation process? One thing I do find frustrating about critiques is the depressing feeling I am left with after reading them. It is often easier to identify the challenges than to find useful and workable solutions.

In addition to ownership, liveware is also a critical component of understanding power relations. It is defined as comprising the individuals responsible for the design, implementation, and use of GIS, and it is hailed as “the most significant part of a GIS” (18). What responsibility and influence does this particular group have on the reality of GIS? How much of it gets tangled up in political agendas and in territories (both academic and non-academic) that are expected to be defended? How is the misrepresentation of facts, the skewing of results, and the pursuit of private agendas accounted for, monitored, or, in the most optimistic scenario, eliminated?

“What it is not clear is how the communicative and power structures which develop between the GIS creator and user affect the people whose everyday lives become metrics and data within the system, and whether indeed these people’s voices are heard at all” (18). Do we just get used to these power dynamics? Work our lives around them? I’d like to be a little more positive than that. A lecture inspired me to think otherwise. Andrew Pickering encourages us to “try things… experiment, and mess around with them”; this is an alternative to being stuck on one idea, or on a particular set of definitions (especially when analyzing inequality) that confines us. This way of thinking seems to parallel Aitken and Michel’s statement that “empirical studies of technological innovation reveal a complex, messy, and nonlinear process” (27). The authors appreciate the flaws of empirical studies, maybe because, in some ways, empirical studies bring out the less tinkered-with ‘real’ in GIS.

-henry miller

Storytelling and integrated land-use models

Friday, March 30th, 2012

Couclelis (2005) outlines a rethinking of integrated land-use models, orienting the article around three interconnected roles: scenario writing, visioning, and storytelling. The article thoroughly covers the upsides and downsides of urban planning history with regard to the computational and spatial planning world. The role that intrigued me the most was storytelling. Storytelling, according to the article, strives to “build consensus by presenting particular desired or feared future developments in terms meaningful enough to be credible to non-specialists” (1354). I believe it to be a significant connection between the qualitative and quantitative attributes of planning systems. Couclelis notes that models leave much room for “interpretation and facts”; planning, however, emphasizes interpretation and values, a much more arbitrary combination (from a scientific stance, anyway). There is a specific comfort that we find in relying on facts rather than values. Their concreteness makes them somehow more plausible and tangible than individual intentions and agendas, hence having “models codify uncertain knowledge” (1359). We hold planning accountable for a particular outcome. We expect it to “lead to certain action” (1359). This only increases the pressure on planning to provide solutions to the problems at hand. If we are to eliminate the jargon of expert language so that implemented models are meaningful to the non-expert, we should develop methods that are creative and that can facilitate finding a balance between non-specialist and specialist interaction. What can we learn from both camps? In my opinion, storytelling in itself is not enough to be evocative. The way we tell the story has to be compelling. Ideas, experimentation, and action by means of imagination and sharing can make significant contributions to successful storytelling.

Another problem I want to address is the lack of clarity about what type of planning support system is actually necessary, and what is in need of support (1355). The individuals, groups, and communities involved all hold multiple agendas. “At the metropolitan level, transportation, commuting, growth, and sprawl cannot be addressed by one community without direct implications for several others” (1358). Will it ever be possible to address everyone’s needs? Is that feasible, realistic, or practical? If not, will compromise be enough for a potential solution? Or is it inevitable that certain groups’ requests will be sacrificed and overlooked?

-henry miller