Geospatial Agents, Agents Everywhere…

Sengupta and Sieber’s review of the history and current landscape of artificial intelligence (AI) agent research sought to define ‘geospatial’ agents and to ponder their legitimacy within GIScience.

The discussion of artificial life agents, often used to model human interactions and other dynamic populations, complemented my current research into complexity theory and agent-based modeling of chaotic systems sensitive to initial conditions, since it related these ideas back to GIScience as a whole.

However, ‘software’ agents, defined as agents that mediate human-computer interactions, were an unfamiliar notion to me. I found these types of agents easier to understand when I mentally replaced the term with ‘computer program’, ‘process’, or ‘application’.

As a student familiar with software development, I found that the article made me question much of the computational theory I’ve learned thus far, and it raised some big questions: What does it truly take for an agent or program to be characterized as autonomous? If an agent or program engages in recursive processes, does that count as autonomy, since it essentially calls itself to action? And when is a software agent considered ‘rational’?
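To make that distinction concrete for myself, here is a tiny, purely illustrative Python sketch (the names `countdown` and `naive_agent` are my own inventions, not anything from the article): the first function calls itself recursively but every step is fully determined by its input, while the second repeatedly senses, decides, and acts toward a goal without a human prompting each step, which feels closer to what ‘autonomy’ might mean.

```python
def countdown(n: int) -> None:
    """Recursive, but not autonomous: every call is fully
    predetermined by its input. Nothing is sensed or chosen."""
    if n <= 0:
        return
    print(n)
    countdown(n - 1)


def naive_agent(environment: list[int], goal: int, max_steps: int = 10) -> int:
    """A toy 'agent' loop: it senses its surroundings, decides whether
    the goal is met, and acts (moves) on its own until it stops."""
    position = 0
    for _ in range(max_steps):
        reading = environment[position]                       # sense
        if reading == goal:                                   # decide
            break
        position = min(position + 1, len(environment) - 1)    # act
    return position


if __name__ == "__main__":
    countdown(3)                                  # prints 3, 2, 1
    print(naive_agent([4, 7, 2, 9], goal=2))      # stops at index 2
```

Of course, neither example would satisfy a strict definition of agency; the sketch only highlights why self-reference alone doesn’t seem sufficient.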

I wonder if rationality in decision making should even be part of the definition of an agent. Humans often make irrational decisions. Our decision-making processes and socialization patterns are highly complex and difficult to model, issues that become apparent even when attempting to analyze static representations of spatial social networks.

I look forward to seeing how this conversation evolves.

-ClaireM
