Posts Tagged ‘GEOG506’

Scarce influence of technology when implementing the technology

Monday, November 24th, 2014

In this article, Sahay and Robey designed an interpretive research method that enabled a comparative analysis of two neighboring county government organizations that happened to be in the process of implementing GIS. Both intra- and inter-site comparisons were designed. From the results of the study, specific inferences are formulated in three general areas: “the relationship between structure and initiation, deployment and spread of knowledge; the relationship between capability and transition, deployment, and spread of knowledge; and organizational consequences of GIS”. Each set of inferences is used to shape general theoretical arguments about the implementation of information systems, based on specific comparisons between the two sites.

King’s (1983) analysis of centralized and decentralized computing supports the first set of inferences: the organization of computing resources is anchored in more fundamental questions of organizational power and control, and concentrating computing resources in a single unit of an organization is likely to preserve that unit’s power over the users of the technology. A distributed deployment of computing resources, by contrast, encourages the spread of knowledge and empowers users, so technological capabilities are more likely to expand and spread in an organization where new technology is configured in a distributed rather than a centralized manner. In the study, organizational structure, which is associated with the initiation, deployment and spread-of-knowledge aspects of the implementation process, was considered the major point of contrast between the two sites. The authors argue that a unified organizational structure enables better cohesion among the social interpretations of a new technology and the establishment of a single vision; information is therefore more widely shared in a unified organization than in a differentiated structure, allowing congruent technological frames of meaning to emerge. Rapid knowledge spread is also constrained where deployment of the technology is restricted to a single organizational unit. The study is therefore consistent with King’s argument… and so on.

As noted above, this article compares two organizations, contrasting their social contexts and implementation processes against pre-existing theories. I find the study well structured and quite convincing: the research method is sensitive to the assumptions underlying social construction and tries to identify the relevant social groups and their technological frames. In addition, the authors do an excellent job of explaining the organizational processes of each site in relation to its social context, and then point out specific contrasts by comparing them, building their arguments on the solid ground of pre-determined theories.

Therefore, when they underline that technology itself must not be considered a determinant of organizational impact, since each context is distinct and different consequences will be produced by the interactions of contextual and processual elements, and that one must not assume technology will be understood in the same way by different groups of people, I could do nothing more than nod my exhausted head. In an overly simplified manner, it is like handing out a set of Lego to children from different cultural and family backgrounds and expecting them all to come up with the same output by the end of the day. Some may play and create something, some may play but leave the pieces scattered, and perhaps some may not show any interest in it at all.

Coming back to the implementation of GIS, one can draw a parallel by comparing how GIS technologies are viewed and used by a tech-savvy generation versus an aboriginal population that is often new to such technology and comes from a distinct cultural background. The same technology may generate completely different and/or unexpected consequences due to social context alone. One cannot, or rather should not, blame the technology for an after-effect, but should instead re-investigate how it should have been introduced to the respective target population/culture.


Laughing But Serious…

Monday, November 17th, 2014

This article by Raper et al. (2007) identifies the main research issues within the field of Location Based Services (LBS), including the sciences and technologies involved in LBS, matters concerning LBS users, and legal, social and ethical issues. The majority of the article covers the distinct domains of science and technology research connected to LBS, and it establishes well how wide a range of subjects is associated with LBS. It was very rich and informative on that matter, but certain themes reappear frequently across the distinct subjects, such as visualization, users and ubiquity. Unfortunately, the specific differences were not elaborated, so the text becomes repetitive; at times it was difficult to tell whether one was reading about GIScience or spatial cognition, since it could have been either, and it therefore became tedious at some point.


As for the paragraphs discussing user issues, many subjects were not mentioned, such as VGI and the management of geospatial data after its immediate use. That is mainly because the article was written in 2007, when smartphones with GPS receivers and wireless broadband internet were not yet as widely distributed among the population as they are today. On the other hand, Raper et al. seemed to believe it only natural and obvious that LBS would replace the traditional paper map, which was a controversial subject in GEOG 506: “LBS have to ‘substitute’ existing analogue approaches, e.g. the use of cheap, durable and easy-to-use paper maps for the most part…”. Yet even today, a lot of people, myself included, still use paper maps despite carrying a smartphone with a perfectly good GPS receiver. Then again, for the majority of the population who can afford it, the paper map is slowly but surely being replaced by less-analogue technologies.


In the legal, social and ethical dimensions section, the authors consider the potential for surveillance and the exercise of power over individual movement as a negative effect, whereas the potential to guide people and the new social possibilities of LBS are positive implications. However, as we discussed in class, this is merely the perception of a particular culture, or perhaps a somewhat individualistic view, rather than a representative perspective of a culture as a whole.

What can be defined as positive and/or negative?  It is all relative….again….sigh


Social Network Analysis and GIScience

Monday, October 6th, 2014

Social Network Analysis (SNA), in my understanding, analyses social relationships, which can represent any type of link one individual has with another. There are two distinct ways to conduct the analysis: a quantitative approach and a qualitative approach, each with its own advantages. Interestingly, in Gemma Edwards’ article, it is argued that a third option, a mixed-method approach combining both quantitative and qualitative approaches, is appropriate for SNA. This was a very refreshing article, especially when it occurred to me that social network studies and GIScience share common features. Among others, the use of a relational database is one of them.

In SNA, the relationships between actors, such as the flow and exchange of resources, the flow of information and ideas, and the spatial embedding of network ties, are generated and analysed.

In GIScience, relational data are likewise collected, stored and managed, though perhaps in a different format/method than in SNA, and the software used is called a Relational Database Management System (RDBMS).

Of course, the objectives or the way the two fields use relational data may differ slightly, but I think it would be quite interesting to practice SNA with a geographic aspect added on top, visualizing actors and lines on an actual geographic map rather than in empty space, for certain subjects. That way it could be easier to spot a new relationship or a meaningful observation that one couldn’t find previously.
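
To make the idea concrete, here is a minimal sketch of how network ties could be embedded in geographic space: each actor gets a latitude/longitude and each tie is annotated with its real-world length. The actors, coordinates and ties are entirely made up for illustration.

```python
import math

# Hypothetical actors with (lat, lon) coordinates -- illustrative data only.
actors = {
    "Ana":   (45.50, -73.57),   # Montreal
    "Ben":   (43.65, -79.38),   # Toronto
    "Chloe": (45.42, -75.70),   # Ottawa
}

# Network ties between actors (who exchanges information with whom).
ties = [("Ana", "Ben"), ("Ana", "Chloe")]

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

# Annotate each tie with its geographic length -- the "spatial embedding".
tie_lengths = {(u, v): round(haversine_km(actors[u], actors[v]), 1)
               for u, v in ties}
for (u, v), km in tie_lengths.items():
    print(f"{u} -- {v}: {km} km")
```

With the positions in hand, the same coordinates could be handed to any mapping library to draw the actors and lines over a basemap instead of empty space.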


Are We Certain that Uncertainty is the Problem?

Thursday, April 4th, 2013

Unwin‘s 1995 paper on uncertainty in GIS was a solid overview of some of the issues with data representation that might fly under the radar or be assumed without further comment in day-to-day analysis.  He discussed vector (or object) and raster (or field) data representations, and the underlying error inherent in the formats themselves, rather than the data, per se.

While the paper itself is clear and fairly thorough, I can’t help but question whether error and uncertainty are worth fretting over. Of course there is error, and there will always be error in a digital representation of a real-world phenomenon. Those people, such as scientists and policy makers, who rely on GIS outputs, are not oblivious to these representation flaws. For instance, raster data is constrained by resolution. It is foolhardy to assume that the land cover in every inch of a 30-meter grid cell is exactly uniform. It is also wrong to suggest that some highly mobile data (like a flu outbreak) would remain stationary over the course of the interval between sensing/mapping. There are ways around this, such as spatial and temporal interpolation algorithms and other spatial statistics, and I feel like estimates are often sufficient. If they aren’t, then perhaps the problem isn’t with the GIS, but rather in the data collection. Better data collection techniques, perhaps involving more remote sensing (physical geography) or closer fieldwork (social geography) would go far in lessening error and uncertainty.
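
As a concrete illustration of the kind of estimate I mean, here is a minimal inverse-distance-weighting (IDW) sketch; the sample points and values are made up for illustration and are not from Unwin’s paper.

```python
# Inverse-distance weighting (IDW): estimate a value at an unsampled
# location as a weighted mean of nearby observations, with weights that
# fall off with distance.

def idw(x, y, samples, power=2):
    """samples: list of (sx, sy, value). Returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return v          # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Four hypothetical temperature readings at grid corners.
obs = [(0, 0, 10.0), (0, 10, 12.0), (10, 0, 14.0), (10, 10, 16.0)]
print(idw(5, 5, obs))  # at the midpoint all weights are equal -> 13.0
```

The estimate is never error-free, but for many purposes it is, as argued above, sufficient.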

With all of that said, I am not about to suggest that GIS is perfect. There is always room for growth and improvement. But, after all, the ultimate purpose of visualizing data is understanding and gaining a mental picture of what is happening in the real world. An error-free or completely “certain” data representation is not only impossible within human limitations, it is not particularly necessary.

– JMonterey

geocode all the things

Friday, March 22nd, 2013

Goldberg, Wilson, and Knoblock (2007) note how geocoding match rates are much higher in urban areas than rural ones. The authors describe two routes for alleviating this problem: geocoding to a less precise level or including additional detail from other sources. However, both these routes result in a “cartographic confounded” dataset where accuracy degrees are a function of location. Matching this idea — where urban areas and areas that have been previously geocoded with additional information are more accurate than previously un-geocoded rural areas — with the idea that geocoding advances to the extent of technological advances and their use, we could state that eventually we’ll be able to geocode everything on Earth with good accuracy. I think of it like digital exploration — there will come a time when everything has been geocoded! Nothing left to geocode! (“Oh, you’re in geography? But the world’s been mapped already”).
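
The two routes could be sketched as a fallback geocoder that tries the most precise reference layer first and records which level matched, making the location-dependent accuracy explicit in the output. The reference tables and addresses below are entirely hypothetical.

```python
# Toy reference layers, most precise first. Real geocoders use full street
# networks and gazetteers; these two dicts just stand in for them.
STREET = {"123 Main St, Smallville": (44.100, -79.500)}
CITY   = {"Smallville": (44.000, -79.400)}

def geocode(address):
    """Return ((lat, lon), precision) or (None, 'unmatched')."""
    if address in STREET:
        return STREET[address], "street"          # high precision
    city = address.split(",")[-1].strip()
    if city in CITY:
        return CITY[city], "city-centroid"        # coarse fallback
    return None, "unmatched"

print(geocode("123 Main St, Smallville"))   # street-level match
print(geocode("9 Elm Rd, Smallville"))      # falls back to the city centroid
```

Keeping the precision label alongside each coordinate is one way to stop the “cartographic confounding” from being silent: downstream analysis can at least see which records are centroid-level guesses.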

More interesting to think about, and what AMac has already touched on, is the cultural differences in wayfinding and address structures. How can we geocode the yellow building past the big tree? How can we geocode description-laden indigenous landscapes with layers of history? Geocoding historical landscapes: how do we quantify the different levels of error involved when we can’t even quantify positional accuracy? These nuanced definitions of the very entities that are being geocoded pose a whole different array of problems to be addressed in the future.


What to Use VGI For?

Thursday, February 28th, 2013

The advent of VGI has brought on a whole new set of issues, including, but not limited to, the reliability, motivations and frequency of users. For example, Goodchild outlines that VGI can be known as asserted information, as there is no system of checks and balances or peer review to ensure that the data is “correct.” While someone uploading data about a specific phenomenon in their locale may consider themselves an expert, there is still the potential for errors. There is also the issue of people purposely sabotaging projects, similar to the way in which people create viruses to spread via the internet.

Nonetheless, VGI has tremendous value, as Goodchild points out at the end of the paper. Personally, I believe that VGI must be evaluated on a case-by-case basis. It all depends on what the VGI is being used for and how accurate it needs to be. With this must come a level of reservation on the part of the person actually using the data. Because many of us are familiar with Wikipedia, I will use it as an example. I use Wikipedia when I am looking for general information on a topic where an error would have no detrimental effects, for example the history of a rock band I like. I will not, however, use Wikipedia as a source of in-depth analysis on an academic subject that I will be writing a paper on, such as Location Based Services. It is in this manner that I think VGI needs to be evaluated. If the information being gathered needs to be of the utmost accuracy, take the necessary steps to ensure that contributors have the necessary credentials. If not, let VGI run wild and see what kind of results you get!



Spatial Cognition and Personal Preference

Thursday, February 28th, 2013

The study done by Richardson et al. gives us a very interesting look into the various ways individuals can conceive and understand a certain space.  However, problems tend to arise when trying to develop a solid understanding of the exact differences between direct learning, map learning and Virtual Environment learning.  It was mentioned that there are direct contradictions between this study and past studies, as well as among those past studies.

While it may not explain all the differences, I believe that personal preference plays a huge role in the effectiveness of using a VE to understand a space versus a map or directly walking through the area.  Thus, our ability to spatially comprehend a space, whether it is a series of halls or an entire city block depends heavily on what sort of sources of information we prefer over others.  While reading the paper, I thought of a similarity between this study and how we learn in a classroom.  It is obvious that all people do not like to learn concepts in the same way.  That is, some people prefer to learn by doing, while others prefer to have something explained to them in a very concise and clear manner.  I believe that this sort of preferential learning can be extended to these concepts of spatial cognition.  As VE becomes more advanced and ubiquitous, I think that some people will still find it difficult to use it as a means of learning about a place and would rather look at a bird’s-eye-view map to understand the space.  Others will tend to reject the “antiquated” notion of maps and prefer to virtually explore somewhere before they actually go there.   Regardless, I am very excited to see how far the use of VE goes in terms of understanding an area before we go there.  Will we get to the point where we could essentially “place” ourselves on any point on the Earth and explore it as if we are there?   Instead of a map of campus, will students be able to download a VE of the building they will spend the most time in and have a walkthrough to their classrooms and respective libraries?  All this could get very interesting within the next couple decades.




Ontology in Augmented Reality

Thursday, February 21st, 2013

Reading through the paper by Azuma I could not help but get a little excited about all the sorts of AR applications we will see within as little as 5-10 years.  I envision video games that allow the gamer to feel like they are directly in and interacting with an environment by projecting it in their house.  I also see travelers wearing glasses and getting a tour of a foreign city without the help of a guide.  However, there are obviously a few limitations before Augmented Reality takes these jumps.  The one I want to focus on is User Interface Limitations.

This essentially comes down to how to display, and allow interaction with, the massive amounts of data we have access to. The amount of information we could potentially display on a pair of glasses is astronomical in my mind. But how do we go about deciding what information to display, and how to display it? To me, this comes down to an individual’s ontology of space. Take my previous tour-guide example: one person may want to know where all the museums in a city are, while another would prefer the best bars in the area. This is a bit of a trivial example, but it highlights how difficult it may be to take this amazing technology and make it equally useful for everyone. While this is an issue today, I agree with the paper that there will likely be “significant growth” in research on these problems. It is now a matter of putting the time, effort and money into improving the ubiquitous use of these AR systems. With the great potential for business growth, I do not see this being a problem.



Privacy vs. Efficiency in GIScience

Thursday, February 21st, 2013

O’Sullivan brings up three very important points when considering the direction of critical GIScience.  The one that struck home for me was the subjects of privacy, access and ethics.  It is hard to argue against Curry’s point, brought up by O’Sullivan, that the increasing availability of “spatial data forces us to reconceptualize privacy and associated ethical codes” (O’Sullivan, 2006:786).  With millions of people around the world constantly “sharing” their locational information via social networking sites such as Twitter or Facebook, it is easy to see that such information is no longer private.  The reconceptualization of privacy includes the fact that when something is shared on the internet, there is potential for that information becoming accessible to those other than the intended “target.”  We thus need to realize how easy it may be for locational information such as our home or school to essentially become public.  As a society, do we accept the fact that acquaintances (sometimes real, sometimes over the internet), will now know more about us than ever?  If not, how do we use these new applications in a way that respects individuals’ level of privacy while still allowing us to become more connected?

Traffic management is a great example of weighing privacy against increased connection. Obviously, with increased surveillance we will be able to detect traffic patterns better, allowing people to travel more efficiently. However, not everyone may be comfortable with such surveillance, even if it does make their commute easier. This is where the social theory of GIS meets the tool that is GIS. We can come up with hundreds of ways to track human activity so that we can travel more efficiently, but there may be a level at which people in a society are no longer comfortable with their location being readily available. Furthermore, who has the right to use this information? Is it the private businesses looking to create a useful traffic application, or is the government the only institution that should be able to use this data? It is here that critical GIS comes into play, as a way to evaluate how different societies value privacy versus efficiency. Again, this will differ across cultures, communities and individuals. These issues make the application of GIS inherently tricky, as it is not just a tool that can be used objectively.



Critical GIS or Geez-I’m-Sad

Wednesday, February 20th, 2013

I found Lake’s article incredibly interesting. Lake highlights critical components of GIS that are usually—in my experience—sidelined, and offers a shift away from the techie, positivist view that GIS practitioners typically (and perhaps unwittingly) hold. Lake makes several claims that sparked many more questions, and ultimately left me with an unsettling feeling; kind of dejected, all “what is all this even good for?”. I’m going to address and expand upon the bits that jumped out at me the most.

Subject-object dualism: Lake details how “the perspective, viewpoint, and ontology of the researcher are separate – and different – from those of the individuals constituting the data points comprising the GIS database,” (p. 408). Further, Lake notes how the data points (individuals) are stripped of their autonomy, becoming passive objects in the practitioners’ project. How can this notion be applied to concepts of VGI, where people are willingly providing their information? Does data derived from VGI or participatory crowdsourcing validate this subject-object dualism? Putting this dualism in a power framework; are the subjects granted more power (think of the Power Law) now? Are their ontologies embedded in the information they provide? I want to read Lake’s (and/or others’) opinions on how this dualism can be circumvented.

Technological mystification: Lake discusses how we reinforce existing structures of influence—undeniably true. GIS disenfranchises the less technically adept. This inherent technological mystification is just another type of mystification. Mystification, I would say, is inherent in pretty much everything—there is bureaucratic mystification of planning in an opaque government, for example, and I don’t see how this is going to be fully eradicated. While trying to make things more open and available to all people, there is inadvertent marginalization of certain groups. Nothing is going to reach everybody all the time—we just need to make effective tools that attempt to reach more people, more frequently. Maybe eventually we will have enough tools to satisfy everyone… We can dream, right?

I unequivocally agree with FischbobGeo’s statement that Lake’s article talks past GIS without engaging it. At certain points, this article could be talking about a whole range of topics. It raises more questions than it answers, and–call me a defeatist–but makes it seem like we will never get it right.


Different people, different ontologies

Thursday, February 7th, 2013

There is no one formal ontology for GIScience purposes. Agarwal notes Uschold and Gruninger (1996)’s four types of ontologies: ‘highly informal’, ‘semi-formal’, ‘formal’, and ‘rigorously formal’. Agarwal continues to outline other academics’ categories of ontologies, which can be loosely fit into the aforementioned four types. Most interesting to me are the ‘highly informal’ ontologies, which can comprise general or common ontologies and linguistic ontologies. How can these ontologies be incorporated into GIScience and into a GISystem? Do they need to be translated into a more formal or meta-ontology in order to be properly analysed, reproduced, and/or applied broadly across different applications? These are questions I don’t have answers for.

Agarwal acknowledges the lack of semantics in the ontological specifications. He notes that “explicit stating and consideration of semantics allows better merging and sharing of ontologies” (p. 508) – perhaps it is from here, in the recognition of varying semantics across cultures and people, that we can move from informal to formal ontologies. Concepts can therefore be qualified with criteria stemming from the merging and sharing of ontologies, consequently increasing our understanding and bettering our analyses.


GUIs, GIS, Geoweb

Thursday, January 31st, 2013

Lanter and Essinger’s paper, “User-Centred Graphical User Interface Design for GIS,” outlines the development from a typical user interface, to a graphical user interface, and finally to a user-centred graphical user interface. The biggest take-home point I gathered from the article was that the user interface must meet the user’s conceptual model of how the system is supposed to work. If the UI and the user’s model match up, then using the software becomes much more intuitive to the user, resulting in a minimal need for instruction manuals and other forms of support. It got me thinking about how quickly a preconceived conceptual model can be erased and/or replaced. Take switching operating systems for example—going from PC to Mac we already have a mental map of how to do basic computer tasks (how to find and transfer files, format the screen, etc), but these things are done differently on each system. Somehow we grow accustomed to the new operating system’s UI, and it will eventually replace our previous conceptual framework.

Following Elwood’s article and a call for a new framework for geovisualisation, it may be interesting to think about how our GIS conceptual frameworks will hold up in the new paradigm. The GUIs for geovisualisation are arguably easier to use than a traditional GIS (the idea of making it a public technology rather than an expert technology), so it follows that the GUI will fall into GIS users existing conceptual frameworks. Going the other way—starting with geovisualisation technologies and branching into traditional GIS—or even going back to GIS after extensive geowebbing—may be harder.


What does it all mean?

Thursday, January 31st, 2013

Part of Elwood’s paper considers the implications of using data provided from different users. Data providers stemming from different backgrounds and cultures approach information, its synthesis, and its portrayal in varying ways. This heterogeneous data is further transformed through the manipulations required to make any sense of it. Elwood notes, “data are dynamic, modified through individual and institutional interactions and practices” (259). How can we ensure that the meaning instilled by the original user is carried through all kinds of manipulations and transformations, especially when primarily deciphering the original meaning proves to be laden with complexities?

Elwood provides an overview of many solutions to grapple with a wide array of geovisualisation challenges, but I think we might be getting a little ahead of ourselves. Surely there are a vast number of challenges to be addressed, but can we do it all at the same time? Making sense of original user data seems to be of primary importance before we can assess how it changes through practice and collaboration. While initially seeming counterintuitive to user friendliness, approaches like “standardiz[ing] terms across multiple sources” (258) and using formal ontologies may prove necessary in trying to etch out semantic differences in user provided data.

How can we work collaboratively if we’re talking about different things? We can trace the “modification of concepts in a spatial database as they are used in the process of collaboration” (260), but what do these concepts mean? Can we actually standardize open, user-generated geospatial data in order for it to be interoperable? With the increasing amounts of data sources and data heterogeneity, it looks like there is a long, winding road ahead of us.

Elwood, S. 2009: Geographic Information Science: new geovisualization technologies — emerging questions and linkages with GIScience research. Progress in Human Geography 33(2), 256-263.


GIS and Personality

Thursday, January 24th, 2013

In his early overview of decision support systems (DSS), M. C. Er (1988) discusses the importance of allowing for variation in personal choice when choosing a support system. What was most interesting to me was the incorporation of cognitive style and Myers-Briggs personality types as determinants of people’s “preferred way of getting data and preferred way of processing data” (p. 359), and it led me to thinking about whether there is room for different personalities and cognitive styles in using a GIS for decision support (as a tool, that is). Stemming from the (pretty crude) dual personality descriptors on page 359 of Er’s article, I think GIS caters a bit to all of these types. On the other hand, I don’t think it is easy to use a GIS in any particular way that you want to—it’s known for a steep learning curve and definitely has its counter-intuitive moments—and people have to learn to think like the computer; learn to think like ArcMap. Maybe GIS is catered towards a certain cognitive style, which makes sense when it’s described as something you either love or hate.

I think this could be tested with a potential research project: get a group of people, give them a Myers-Briggs test, and give them a GIS task. See how they do it differently and compare that with their MBTI (while controlling for experience, etc).

Er, M. C. (1988). Decision Support Systems: A Summary, Problems, and Future Trends. Decision Support Systems 4, 355–363.



Thursday, January 17th, 2013

McNoleg speaks of the Tessellati and the Vectules, living in a prehistoric Europe (but still subject to the hazards of global warming). I had to read the article four or five or six times to pick out the important parts, discard the superfluous parts, and synthesize the general gist of the article.

We know he’s talking about conventional geospatial data models. It’s obvious that the Tessellati are the inventors of the raster data model and the Vectules are the inventors of the vector data model. I’m going to attempt to unpack the analogies McNoleg wittingly and creatively puts together.

The Tessellati need to fit the maximum number of individual pig cells on their small amount of land. They want a series of geometric shapes with no overlaps or gaps (a tessellation)–they want a raster grid of regular pixels. The system is short-lived for a reason akin to too much storage (I think). McNoleg suggests diversifying your diet–diversify your data types–insinuating that you can’t do everything you’ll ever want to do using a raster grid alone (or eating only pig products).

The Vectules are under threat of flooding and can’t swim, so they have to climb trees. Because they are climbing trees, we know where they will end up and where they are in relation to other things. Not being able to swim means they can’t float around wherever they want–and the trees give them a determined topological structure that they must follow. Eventually they develop a frame to hold their vacant polygons, completing their data structure model. The downfall of this system–like their religion–is that there are a lot of rules that need to be followed.
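
Dropping the allegory for a moment, the two data models can be sketched side by side. The pig pen and its coordinates below are my own toy example, not McNoleg’s.

```python
# The same square pig pen in the two conventional data models.
# Raster: a grid of cells, each holding a value (1 = pen, 0 = empty land).
raster = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 0],
]

# Vector: the pen stored as a polygon -- an ordered ring of (x, y) vertices.
vector_pen = [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]

# Raster area: count occupied cells (cell size = 1 unit squared).
raster_area = sum(cell for row in raster for cell in row)

# Vector area via the shoelace formula over the vertex ring.
def shoelace(ring):
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(ring, ring[1:]))) / 2

print(raster_area, shoelace(vector_pen))  # both describe a 2x2 pen: 4 and 4.0
```

The raster answer degrades as soon as the pen stops aligning with cell boundaries, while the vector answer depends on following the topological rules (a closed, non-self-intersecting ring), which is roughly the trade-off the two tribes embody.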

As an addition to this article, I would love to read McNoleg’s interpretation of what happens when the Tessellati meet the Vectules. Or if the Vectules suddenly start eating the Tessellati’s pigs.


Reviewing Geospatial Database Uncertainty with New Technologies

Wednesday, March 7th, 2012

The paper published by Hunter et al. in 1991 categorized the challenges of uncertainty in Geographic Information Science (GIScience) into three types: definition, communication, and management. The authors began with a presentation of error visualization and discussed uncertainty in visualization. They then pointed out approaches for error management in geospatial databases, as well as future research directions.

The authors also mentioned that it was very helpful to look at system logs for uncertainty determination. That might have been a good solution at the time the paper was published, but nowadays system log analysis has become a great challenge for pattern recognition due to its high dimensionality. So the new question becomes: what is the tolerance for uncertainty in system log analysis?

Uncertainty reduction and absorption were proposed as two solutions for error management, and the authors used several good examples to demonstrate them. But with new challenges in GIS research (e.g., data intensity), these two solutions should be adapted accordingly.

In their paper, the authors also mentioned that the main cause of poor utilization was a lack of confidence in the system, owing to the fact that users cannot obtain enough information about the quality of the databases and the unacceptable errors. This might have been true in the past, but nowadays Bayesian network techniques are used to handle this problem, as Laskey et al. presented in: Laskey, K.B., Wright, E.J. & da Costa, P.C.G., 2010. Envisioning uncertainty in geospatial information. International Journal of Approximate Reasoning, 51(2), pp. 209–223.