Posts Tagged ‘GEOG 506’

Spatial Statistic Skills Being Lost?

Thursday, February 7th, 2013

Nelson’s article, “Trends in Spatial Statistics”, although providing a good summary of past trends, seems disconnected from what is really happening, that is, from the reasons behind the needs and the education within GIS. Geographers are described as users rather than producers of GIS, but this is only because industry looks at short-term needs, and education therefore teaches to those needs rather than going into greater depth. Because geographers follow industry needs, the more advanced courses required for spatial analysis are often considered “too much training”, and students do not request or take these advanced courses in large enough numbers for them to be offered as geography-specific.

I believe that more open-source, online courses without time limits are needed to address the current gaps in spatial analysis skills. These courses would give professionals and the public access to the tools needed as GIS globalizes and spatial analysis techniques spread. In addition to courses, much of the technical background of the statistical functions in GIS programs is hidden. It may be beneficial to create a window that displays each statistical function’s equations and code.
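As a minimal sketch of what such a window could expose, assuming the statistical routine is written in Python, the standard library’s inspect module can surface a function’s formula (from its docstring) alongside its source code. The names mean_center and describe_tool are hypothetical placeholders, not part of any existing GIS package.

```python
import inspect


def mean_center(points):
    """Mean center of a set of (x, y) points: (sum(x)/n, sum(y)/n)."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)


def describe_tool(func):
    """Show a tool's formula (taken from its docstring) and its source code."""
    print("Formula / description:", inspect.getdoc(func))
    print("Source code:\n", inspect.getsource(func))


if __name__ == "__main__":
    describe_tool(mean_center)
```

Something this simple would at least let a curious user see what a “black box” tool is actually computing before trusting its output.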

Finally, geography may once have been the only home of spatial analysis, but in today’s global environment other fields, with their own specializations, may be better suited to certain analyses than geography. I believe that no one can be a master of all disciplines; the skills geographers once thought essential may no longer belong to them alone. In a global context of shared knowledge and specialized skills, people are becoming a cooperative of learners, and goals can be achieved with greater success through an interdisciplinary approach than through a single-discipline one.

C_N_Cycles

 

Graphical user interfaces

Thursday, January 31st, 2013

Lanter’s assessment of user-centered graphical user interfaces and their applicability to GIS is quite accurate in that visualization makes GIS easier to understand, learn and use. I believe this relates to how the human brain evolved in response to an ever-changing environment, developing a built-in set of steps for learning, understanding and using tools through touch and sight. To elaborate, the user-centered graphical interface is the connection between the GIS “tool” and the user. Because a person’s brain expects to see a result in response to an action, the interface plays a major role in understanding; humans learn by observing the results of their actions. In essence, humans create logical connections, or pathways, from which they can deduce the outcomes of other, similar actions.

The concept of interlinking the user and the system, through the system interface and the user model as Lanter writes, seems to be the best way of linking our natural interaction tendencies with the computer’s unnatural approach. Even so, the design of the interface may still cause problems, as a user’s “instinctive” approach to the interface may limit its function. Therefore, I agree with Lanter that input from users to interface designers is essential to reconciling complex results with user simplicity. One way to resolve the tension between complexity and ease of use may be to design an interface that offers both traditional and graphical modes (i.e., graphical to start and learn, and traditional for advanced users once they have mastered the basics).

 

C_N_Cycles

 

Geovisualization and GIScience

Thursday, January 31st, 2013

Sarah Elwood’s discussion of the emerging questions in geovisualization and their linkages to GIScience research does highlight the issues of qualitative and quantitative data overload and the dissemination of that data. However, I believe that dynamic change and additions to data, whether quantitative or qualitative, are needed in both standard and non-standard forms. Through my own research I have found that dynamic data in a non-standard form often tells more about a situation than standardized data does. That said, standardized data is still needed in order to “create order” in our understanding and in the transmission of data to other people.

The article makes me think about how, as humans, we want everything in order so as to make sense of what we see, and how GIScience strives to create order in data so that it can be useful. Nevertheless, is the universe not chaotic, and the basis of all data fundamentally chaotic? Maybe chaos and non-standard data tell us something more important about who we are as a people, and about how the tools and the ways we look at the world change from person to person and culture to culture. Perhaps the heterogeneity of the data and of the software and hardware we use is the norm, and GIScience is trying to place artificial boundaries on how we see data and use tools.

Beyond trying to fit data into standardized forms, the idea of “public” and “expert” technologies just does not make sense to me. Today technologies are so integrated into how youth (0-30 years old) see the world that it is not the technology that should be classified as “expert” or “public” but the person who manipulates it. Growing up during the advent of mass-produced home computers, and watching the purchase of video games and internet use drive processing power and performance past anything our parents had imagined, has shown me that it is the person, not the machine. I have learned that one must often draw on a plenitude of platform resources to achieve a result, as each platform, such as Google Earth or ArcGIS, has its strengths (one cannot create a single platform to satisfy all needs or wants).

 

C_N_Cycles

 

Making GIS UI friendly

Thursday, January 31st, 2013

Although not directly related to analysis, the user interface (UI) is an incredibly important aspect of any GIS. When using an application such as ArcGIS, the graphical user interface (GUI) is what the person sees when they interact with the software on their screen. Thus, the simpler and easier to use the interface is, the faster the end user will be able to learn the system and use it efficiently.

One of the best ways of organizing the UI seems to be the use of natural or interface mappings. These methods play on the user’s intuitive and logical reactions to occurrences. For example, Lanter uses the analogy of the steering wheel: if a person turns the wheel to the right, the car moves to the right, and vice versa. Similarly, when a user moves the mouse to the right or left, they logically assume the cursor on the screen will do the same. This seems to be the best way to teach users a particular system, as they are more likely to remember instinctive actions.

Lanter identifies two key concepts that should be taken into account during user-centered interface design: how to map the system interface to the user’s existing model, and how to shape and influence the user’s model while they interact with the system. The first part, as previously mentioned, has to do with designing the interface to take advantage of an individual’s intuitions and natural mappings. The second part, arguably the biggest challenge going forward in UI design, concerns how easily the user is able to learn the system based on the way it is organized and carries out functions. Overall, further development in UI, primarily in ease of use and intuitiveness, will open GIS to a larger variety of individuals, especially those relatively unfamiliar with GIS applications.

-Victor Manuel

The Future of Geovisualization technologies…

Thursday, January 31st, 2013

The emergence of new geovisualization technologies such as Google Maps and Google Earth is revolutionizing the way people interact with GIS. Unlike desktop software, these web-based applications allow unprecedented access, at no cost, to powerful visualization technologies. In addition, previously text-based web apps such as Facebook and Twitter are now incorporating spatial components. For example, a person is now able to tag their exact location, down to a particular building, when they update their status on Facebook. Web-based geovisualization technologies are growing in popularity because most of them are free, very easy to access (usually only an internet connection is required), and they allow for the standardization and greater sharing of spatial data. This last point is extremely important because it has opened up a wealth of research applications. For example, a researcher in Greenland might be tracking ice flows, while a researcher in northern Canada may be tracing the migration patterns of polar bears. Web-based geovisualization technologies such as Google Earth now allow both researchers to overlay their data on the same map, opening avenues for further research, such as how seasonal ice flows affect polar bear migration patterns.
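As a minimal sketch of how two research groups might share layers on the same virtual globe, the snippet below writes a simple KML file (a format Google Earth reads) from a hypothetical list of polar bear sightings. The sightings, coordinates, and output file name are illustrative assumptions of mine, not data from the article.

```python
# Minimal sketch: write point observations to a KML file that Google Earth can open.
# The sightings list and output name are hypothetical placeholders.
sightings = [
    ("Bear A", -68.35, 70.12),   # (label, longitude, latitude)
    ("Bear B", -70.10, 71.45),
]

placemarks = "\n".join(
    f"  <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in sightings
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
    f"{placemarks}\n</Document>\n</kml>\n"
)

with open("polar_bear_sightings.kml", "w", encoding="utf-8") as f:
    f.write(kml)   # open this file in Google Earth and drop other layers on top of it
```

Because the format is shared, the ice researcher and the polar bear researcher could each export a file like this and view both layers together without exchanging any specialized software.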

One other point that must be brought up, and that Elwood addresses well, is the heterogeneity of the huge amounts of data being generated by these new geovisualization technologies. Thanks to these technologies, large and diverse datasets covering a wide variety of user-inputted geospatial data are now available. However, an important challenge for the future will be how to standardize these datasets, as much of the data is based on opinions rather than standard data points; most people’s perceptions of their local geographic features shift over time. Standardization will also become more important as datasets grow larger and larger, requiring more automation of analysis.

 

-Victor Manuel

The evolution of Spatial Decision Making

Thursday, January 24th, 2013

Claus Rinner delves into the increasing importance of Web 2.0 applications in spatial analysis and the decision-making process. He highlights the fact that, with the advent of more advanced and easy-to-use web apps, local geographic knowledge is increasingly being included in decision-making processes. More specifically, the emergence of web-based GIS apps such as Google Maps and Yahoo Maps has made basic spatial analysis extremely easy for the average user. Rinner also notes that these tools have relatively similar interfaces, making them both faster and easier to use.

I find the evolution of web-based apps extremely interesting. As more and more people use these apps, enormous amounts of geodata are produced. This in turn provides a huge pool of potential data for a multitude of spatial analysis and research projects. Apart from the obvious ethical issues, I believe this type of data will revolutionize both research and decision-making processes in a wide variety of fields.

One particularly interesting aspect of the case study conducted by Rinner was how web-based mapping software could be used to enhance a discussion forum. Throughout the discussion, the mapping feature allowed a more focused conversation about the particular geographic areas the participants were interested in. It was interesting to see how, through the analysis of the geospatial data, it became visually apparent that most of the discussion members wanted to focus on improving a specific area of their school campus. This holds many implications for various fields, such as sustainable development, where policies could be tailored to be most effective based on the analysis of user-provided spatial data.

Overall, web-based concepts seem to be evolving quickly and becoming more and more integral to spatial analysis. As these technologies continue to develop, spatial decision making should become much more effective.

Victor Manuel

Are SDSS actually important?

Thursday, January 24th, 2013

Densham gives a good account, albeit a very dated one (1991), of the basic characteristics of spatial decision support systems. Densham chooses to focus on the importance of these systems in decision-making processes, arguing that they are more adaptable to the complex characteristics that decision makers must factor in. He concludes that further development would allow decision makers to solve more complex spatial problems.

As I read through the article, the one recurring thought that kept coming to my mind was how much the field of GIS has evolved since the article was written. Spatial decision support systems have evolved tremendously with the influx of huge amounts of user-generated geodata. This in turn has led to more complex spatial analysis, with ever-changing factors in space and time.

One component I found well written was the distinction between GIS and SDSS. Densham highlights the shortcomings of GIS, mainly the lack of geographical information analysis capabilities. He goes on to give a good description of SDSS, albeit one that was much more relevant at the time of writing. The age of the article becomes even more apparent when Densham describes some of the problems facing the evolution of SDSS. Modern technology, such as large increases in computing power, has transformed SDSS into dynamic models that are affected by a multitude of changing characteristics.

Overall, the article gives great insight into the early days of SDSS. However, modern technology has rendered many of the issues brought up by Densham rather obsolete.

-Victor Manuel

 

Spatial Decision Support Systems

Thursday, January 24th, 2013

P.J. Densham’s discussion and explanation of “Spatial Decision Support Systems” is a good summation of the basics of a spatial decision support system (SDSS). Even so, the discussion seems a bit out of date in relation to current GIScience and SDSSs, as user interfaces and report generators have been modified and further developed to resolve the issues and needs Densham identifies. Furthermore, some of the ways SDSSs are now used, such as the integration of dynamic modeling with GIS programs, are not even mentioned, as technology has advanced since the publication of this discussion. For example, my current research has an aspect of dynamic modeling that is represented spatially, and new programs now exist that can graphically represent dynamic models in the context of a spatial area. To clarify, Densham seems to consider only single-state representation (one time frame) in SDSSs, not states in dynamic flux that change in relation to changing conditions. Today, given the realities of environmental change and the speed of human development, dynamic representation is becoming the norm, especially with predictive capabilities, for managers and specialists looking at spatial variation in this new context of understanding. Although the article is a good representation of its time, a lot has changed. One example is that database management now has different classification and retrieval styles for spatial data, such as images and descriptions. With changes in interfaces and computing power, SDSSs are now integrated across programs and user-friendly, and they have become part of most types of spatial analysis and decision making today.
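To picture the single-state versus dynamic-state contrast in the simplest possible terms, here is a tiny, hypothetical sketch of my own (not from Densham): a raster whose state is updated over a series of time steps with a toy neighbour-averaging rule, so every step can be mapped rather than only one snapshot.

```python
import numpy as np

# Hypothetical single-band raster of some changing quantity (e.g., soil wetness).
rng = np.random.default_rng(0)
state = rng.random((50, 50))

def step(grid, decay=0.95):
    """One time step: simple neighbour averaging plus decay (toy dynamics only)."""
    padded = np.pad(grid, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return decay * (0.5 * grid + 0.5 * neighbours)

# A dynamic SDSS would keep (or stream) every state, not just the final snapshot.
states = [state]
for t in range(10):
    states.append(step(states[-1]))

print(f"kept {len(states)} time slices; mean value fell from "
      f"{states[0].mean():.3f} to {states[-1].mean():.3f}")
```

The point of the sketch is only that the list of states, not any single grid, is what a manager would want to see when conditions are in flux.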

 

C_N_Cycles

 

Decision Support Systems

Thursday, January 24th, 2013

M.C. Er’s article on “Decision Support Systems” seems to capture the early ideas that form some of the basic aspects of GIScience, where systems are used to help in decision planning. For instance, a GIS program such as ArcGIS, which has the characteristics of a decision support system (DSS), will help me decide the placement of sample sites based on the elevation and “wetness” of my study site. The data set would be too complex to analyse without a computer program supporting my determination of placement. That said, M.C. Er’s article, although describing a DSS and its uses, lacks knowledge (due to the age of the article) of modern computing power, which was not foreseen 25 years ago. To clarify, many of the problems mentioned with DSSs and their predecessors have been solved by present-day processing power, guerrilla programming, the evolution of AI and cloud computing. Furthermore, the power of DSSs has grown beyond the constraints of the article, to the point that DSSs are used in decision making at every level of organization and in every field. The article does leave one with the question of what the current role of the DSS is, and how it has been modified and improved in view of current GIScience’s integration of DSSs into GIS systems. The progress of today towers over the technology and its use in the past. The DSS is no longer just for business but is now an integrated part of the way GIS and environmental modeling programs function. Furthermore, hardware is no longer a large component of the DSS, since the PC is now in almost every home, institution and business, with enough computing power to run most programs, whether bundled with Microsoft’s pervasive Windows or Apple’s user-friendly Macintosh interfaces.
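As a toy illustration of that kind of site-selection support (my own hypothetical sketch, not anything from Er or ArcGIS), the snippet below ranks candidate sample sites from elevation and wetness rasters using a simple weighted suitability score. The rasters, weights and number of sites are arbitrary assumptions that a real DSS would let the analyst adjust.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical co-registered rasters for the study site.
elevation = rng.uniform(200.0, 400.0, size=(100, 100))  # metres
wetness = rng.uniform(0.0, 1.0, size=(100, 100))        # wetness index rescaled to 0-1

# Simple weighted suitability: prefer lower elevation and wetter cells.
elev_score = 1.0 - (elevation - elevation.min()) / (elevation.max() - elevation.min())
suitability = 0.4 * elev_score + 0.6 * wetness

# Pick the five most suitable cells as candidate sample sites.
flat_idx = np.argsort(suitability, axis=None)[-5:][::-1]
rows, cols = np.unravel_index(flat_idx, suitability.shape)
for r, c in zip(rows, cols):
    print(f"candidate cell ({r}, {c}): suitability {suitability[r, c]:.2f}")
```

The human still chooses the weights and interprets the ranking, which is exactly the “support” rather than “replacement” role Er describes for a DSS.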

 

C_N_Cycles

 

Is GIS a tool or a Science?

Friday, January 18th, 2013

One of the most interesting debates within the discipline of GIS is whether it should be categorized as a “tool” employed to solve problems in other disciplines, or whether it should be considered a “science” in its own right. Although the article is relatively dated, Wright et al. bring up some interesting points about the debate. It is interesting to note that from its inception up until the large debate sparked on the GIS-L listserv in 1993, GIS was employed almost exclusively as a tool to advance a specific focus. However, this huge exchange of scholarly opinions, along with the rapid development of technology, has greatly changed the field of GIS.

Miller does a great job of summarizing and analyzing the GIS-L debate of 1993, which at the time was an unprecedented online exchange between scholars and their colleagues around the world. It is fascinating to see, from the provided excerpts, how the argument developed over time. Before considering a solution to the argument, it is vital to define what “science” actually is. This, however, is problematic because science can be defined in so many ways, and sometimes incorrectly! Miller identifies science as “a logical and systematic approach to problems that seek generalizable answers”. But does a complex field such as GIS fit into this category? Miller hits the nail on the head when he concludes that GIS represents a continuum between tool and science. However, it is clear that out of all the ranges on the continuum, GIS must be considered a science because it encompasses the analysis of issues raised by the use of GIS itself.

– Victor Manuel

What about people?

Thursday, January 17th, 2013

Historically, place-based representation has been the norm in geographic information systems. However, in an increasingly complex and interconnected world, place-based methods are becoming more and more inadequate, especially with regard to transportation GIS and urban GIS. Miller proposes that a people-based method is required in order to better address the complex spatial and temporal patterns in people’s lives. A space-time perspective views the person in space and time as the center of social and economic phenomena.

Miller states that space-time activity (STA) data, which is collected through information technologies (IT) such as mobile phones, GPS, and the like, is crucial to this method. One aspect I found very interesting was the traditional methods of collection. STA data is usually collected in four ways: recall methods, stylized recall methods, diary methods, and prospective methods. However, each of these methods is flawed by the fact that it depends entirely on self-reported input from the subjects. As with any survey-based approach, individual bias and error can severely affect the accuracy of the data. Miller does a good job of citing the substantial problems with these traditional approaches.

It is also interesting to note how the evolution of IT, or more specifically GPS and location-based services, can enhance the collection of activity data. This is fascinating because it suggests that a people-based approach to data collection will grow and become significantly more reliable and accurate with time.

One issue I do have with this approach is the ethics surrounding the collection of data. Although STA data can reveal important patterns of human activity, it may also raise ethical concerns, chief among them privacy. Therefore, creating a method of collecting STA data that is not only accurate and unbiased but, above all, anonymous will be key to any development in the field in the future.
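As a small, hypothetical sketch of one such safeguard (my own illustration, not Miller’s method), the snippet below coarsens GPS fixes to a grid and drops the person identifier before records are shared, trading spatial precision for a degree of anonymity. The field names and grid size are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Fix:
    person_id: str
    timestamp: str   # ISO 8601, e.g. "2013-01-17T09:30:00"
    lat: float
    lon: float

def anonymize(fixes, cell_deg=0.01):
    """Snap coordinates to a coarse grid (~1 km at mid-latitudes) and drop identifiers."""
    out = []
    for f in fixes:
        out.append({
            "timestamp": f.timestamp,
            "lat": round(f.lat / cell_deg) * cell_deg,
            "lon": round(f.lon / cell_deg) * cell_deg,
        })
    return out

# Hypothetical raw track for one subject.
raw = [Fix("subject-007", "2013-01-17T09:30:00", 45.5048, -73.5772),
       Fix("subject-007", "2013-01-17T09:35:00", 45.5069, -73.5791)]

for rec in anonymize(raw):
    print(rec)
```

Real anonymization is much harder than this (movement patterns themselves can identify people), but even a crude step like this shows where the trade-off between analytical detail and privacy sits.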

-Victor Manuel

Agent-Based Models and Land-use Planning

Thursday, March 29th, 2012

I couldn’t help thinking about our lecture on agent-based modeling when I started to read Helen Couclelis’s article on coupling better models with land-use planning. I know that, here, Couclelis is thinking about a different kind of modeling than that often implied in agent-based approaches. She first notes how “sour” the relationship between planning and the academy has turned (135). Then, after detailing the debate over whether or not a planning process utilizing models has any effect on actual land-use plans, Couclelis delves into how, in practice, actual physical planning has become a decentralized activity that does not incorporate strategy (1357).

I get it, and I agree. The work being done on the ground, at least in North America, Couclelis argues, no longer resembles any of the model runs done in academia. I think the quote Outdoor Addict used from the article sums up this problem well. In sum, the models have failed to predict the future accurately enough or, at least, to keep up with land use in practice. Even the “systematic effort to understand what makes certain things about the future predictable and others not, or how to prepare for genuinely unpredictable futures, have so far had only a negligible impact on land-use planning and modeling” (1360). So, what do we do?

Perhaps this is the problem inherent in any modeling process. We do not know that the future will follow the path laid out in a model. We do not even know that the model’s initial parameters capture all the relevant variables and their interactions. Clearly, although she doesn’t say it quite so directly, modelers in this field are struggling with this problem. As Madskiier_JWong points out, uncertainty is the name of the game when it comes to modeling, or trying to predict the future (which is essentially what modeling is, once you unpack all its sophistication).

My suggestion: perhaps this is where agent-based modeling might come in. It seems from our previous lecture that agent-based modeling excels at representing many competing variables (or agents) and the emergent phenomena that result from their disparate actions. In a land-use context, this might help us better understand how certain sites might be used in future years and how they might handle such use. Couclelis does suggest this approach on page 1369, when she talks about assessing the cognitive and social dimensions of model interpretation so that modelers can move away from the macro level they currently operate at. Yet she doesn’t devote much time to it, and such an approach doesn’t fit within the confines of the models she describes, which try to predict varying types of land-use cover and changes to infrastructure. But it may add a valuable dimension. A quick search online shows that many people are, indeed, already trying to couple these two ideas.
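To make that concrete, here is a deliberately tiny, hypothetical agent-based sketch of my own (not from Couclelis or any published land-use model): agents on a grid choose between two land uses based on their neighbours, and clustering emerges from purely local decisions. The uses, rule and grid size are arbitrary assumptions.

```python
import random

SIZE, STEPS = 20, 5000
random.seed(1)

# Each cell is an agent holding a land use: "res" (residential) or "agr" (agriculture).
grid = [[random.choice(["res", "agr"]) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(r, c):
    """Yield the eight neighbouring cells, wrapping around the grid edges."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    nbrs = list(neighbours(r, c))
    # Local rule: an agent converts to the land use held by most of its neighbours.
    if nbrs.count("res") > len(nbrs) / 2:
        grid[r][c] = "res"
    elif nbrs.count("agr") > len(nbrs) / 2:
        grid[r][c] = "agr"

res_share = sum(row.count("res") for row in grid) / (SIZE * SIZE)
print(f"residential share after {STEPS} updates: {res_share:.2f}")
```

Nothing in the rule mentions clusters, yet contiguous patches of each land use appear, which is the kind of emergent, micro-to-macro behaviour that might complement the aggregate models Couclelis critiques.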

–ClimateNYC

Time and GIS

Wednesday, March 21st, 2012

We’ve heard how cyberinfrastructure handles temporal and spatial data separately, but it must be developed in a manner that allows users and researchers to utilize both sets of variables when interacting with a GISystem. Now Gail Langran and Nicholas Chrisman provide an interesting overview of the topological similarities between time and space, and of how best to design a GIS that can accurately display temporal elements.

I find the authors’ notion of time and its important elements to be simplified in a way that helps lend credence to their subject. In particular, they characterize cartographic time as “punctuated by ‘events,’ or changes” (4). Furthermore, they do a nice job contrasting GIS algorithms based on questions concerning space (what are its neighbors? what are its boundaries? what encloses it? what does it enclose?) with the similar questions one might ask of time (what was the previous state or version? what has changed? what is the periodicity of change? what trends are evident?) (7). Such examples help define this paper not just as a discussion of temporal data, but as a discussion of temporal data tied closely to its application in geographic space. This added dimension can be incredibly important when we begin to think about all the geographic phenomena that occur over differing timelines. It’s also an element we should try to remember more in our own research efforts.

I do wonder about the distinction the authors draw between real-world time and database time. Since many GIS databases are headed toward real-time, streaming data (as was pointed out in previous lectures), why make this distinction? Perhaps I’m not technically inclined enough to understand the importance of the difference in programming, or maybe it’s just a matter of how the system stores information. Does anyone have thoughts on why real-time data can’t be used in a manner that equates it to database time?
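One hedged way to picture the difference (a toy example of my own, not from Langran and Chrisman): each record can carry both the time a change happened in the world and the time the database learned about it, and the two can be queried separately. The parcel and dates below are invented.

```python
from dataclasses import dataclass

@dataclass
class ParcelVersion:
    parcel_id: str
    land_use: str
    valid_from: str     # world time: when the change actually occurred
    recorded_at: str    # database time: when the system was told about it

history = [
    ParcelVersion("P-12", "forest",      "2005-03-01", "2005-06-15"),
    ParcelVersion("P-12", "residential", "2010-08-20", "2011-01-05"),
]

def state_as_of(parcel_id, world_date):
    """Return the land use in effect in the real world on a given date."""
    versions = [v for v in history
                if v.parcel_id == parcel_id and v.valid_from <= world_date]
    return max(versions, key=lambda v: v.valid_from).land_use if versions else None

# The world changed in August 2010, but the database only knew in January 2011,
# so a query run against database time in late 2010 would still have said "forest".
print(state_as_of("P-12", "2010-09-01"))
```

Even with streaming data the lag between the two clocks never quite disappears, which may be why the authors keep them separate.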
–ClimateNYC

Thinking About Scale

Thursday, March 1st, 2012

I agree with cyberinfrastructure and henry miller in their thinking about how scale is presented in the paper by Dungan et al. The authors primarily provide examples from ecology, although they do discuss and provide context from other fields. I too think we must pay careful attention to which field we are working in when we think about the term scale.

My first introduction to the concept came from a political ecology class I took, where scale could be used outside of just its connotations in physical space and time. Scale, in this context, could be used to think about government, human communities, academic disciplines and more. Of course, political ecologists might often be more concerned with power relationships and how these relationships flow across different scales than we are in this course.

But, since we are looking at this in the context of GIS, I thought one interesting blog post that makes one of the same points as the authors of this article might be worth sharing (the pictures do it for me). Scale, even just in a physical sense, matters immensely when investigating landscapes or thinking about maps. As a human geographer, I find that the authors’ points about sample size and scale also hold many implications for deciding the appropriate scale at which to study human subjects or their communities. As cyberinfrastructure notes, we should be mindful of how scale might adjust our methodologies or observations by paying attention to scale itself. But I would argue that we also need to think about what discipline we are working in (and its definition or varying usages of scale) when we consider scale shifts and how they might affect our research.
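As a small, hypothetical numerical illustration of why measurement scale matters (my own sketch, not an example from Dungan et al.), the snippet below aggregates the same random surface to coarser and coarser grain sizes and shows how the apparent variance shrinks, even though nothing about the underlying phenomenon has changed.

```python
import numpy as np

rng = np.random.default_rng(7)
field = rng.normal(loc=10.0, scale=2.0, size=(64, 64))   # a hypothetical fine-grained surface

def aggregate(grid, factor):
    """Average the grid into blocks of factor x factor cells (coarser grain)."""
    n = grid.shape[0] // factor
    return grid[:n * factor, :n * factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

for factor in (1, 2, 4, 8, 16):
    coarse = aggregate(field, factor)
    print(f"grain {factor:2d}x{factor:<2d} -> variance {coarse.var():.3f}")
```

The same caution applies outside physical space: choosing to study individuals, neighbourhoods or cities is an aggregation decision of exactly this kind.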

-ClimateNYC

GIS and its Conceptual Framework

Sunday, January 22nd, 2012

Goodchild recounts eight topics that outline the research agenda for GIS and explains how they fit into a conceptual framework that “[combines] three domains in different proportions” (6), namely the computer, the individual user, and society. Put forth in 1992, the framework needs significant additions and alterations to be modernised.

For one, with the advancement of the GeoWeb 2.0, I think “public participation GIS” is better suited to a framework that gives a greater proportion to society (over the individual human). Participatory GIS is so influential in part because of its sheer volume: people all over the world are creating maps in different ways, mostly in collaborative settings. For VGI to be beneficial, it needs to come from a vast array of sources, used and updated by all of society rather than one individual user.

A few additions to the framework might include augmented reality, cloud computing, and perhaps the geoweb itself. Geographic information is advancing at an incredible rate, and GIS needs to account for such changes. Society is playing a larger role, but how will GIS incorporate semantics and natural language, for example, or different representations of place? We need to organise these different technologies and facets of GIS into a comprehensive (and user-friendly) conceptual framework in order to fully exploit the benefits GIS can bring to the understanding of geographic information.

Goodchild, Michael F. “Twenty Years of Progress: GIScience in 2010.” Journal of Spatial Information Science. (2010): 3-20.

– Sidewalk_Ballet