Archive for the ‘General’ Category

LBS, consent

Thursday, March 14th, 2013

Steinfield’s article on location-based services gives a useful overview of prominent technologies, applications and issues related to the domain. Questions of privacy and ethics are raised in the article, but given its date, the most pressing aspects of LBS and privacy had yet to arrive. Indeed it seems that Steinfield did not forecast the ubiquity of smartphones that we’re experiencing in the present. With current context in mind, I want to briefly revisit some of the questions of privacy raised by the author.

Steinfield cites a set of principles regarding privacy: Notice, Choice, Consent, Anonymity, Access and Security. With these in mind, I started thinking about what kinds of options we have in terms of communication today. While it is certainly possible to live without a cell phone, it is pretty rare and largely inconvenient, especially amongst my generation. It is expected that we be reachable at all times, and I have heard employment counsellors telling clients that a cell phone is pretty necessary to get a job. I don’t have a smart phone myself, but most of my friends do, and of late they’ve become a less expensive option than many more basic models. But when we opt in to a smart phone, does that mean we have to (to borrow lazily from Gramsci) consent to our own domination? Is it just that in order to be successful, to be able to communicate, we have to give up a large part of our privacy? Does this model of consent really respect the needs of all parties involved? Does it matter?

Wyatt

Spatial cognition, ontology, epistemology

Friday, March 1st, 2013

Tversky, in her paper, divides spatial cognition into three “spaces”: navigation, surrounding the body, and the body. Reading this article brought to mind previous discussions in our classes with respect to ontology and epistemology. While the article gave a series of examples of each type of spatial cognition, they were mostly rooted within a Western academic framework. It would be interesting to extend this discussion of spatial cognition to the ways in which it is variable.
I think that the way we think about space is highly structured by our environment and culture. That is to say that the way we order the environment is culturally located. The space of navigation is an easy place to see this difference. I remember in an earlier GIS class talking about the house numbering system in Japan. Wikipedia explains:

“In Japan and South Korea, a city is divided into small numbered zones. The houses within each zone are then labelled in the order in which they were constructed, or clockwise around the block. This system is comparable to the system of sestieri (sixths) used in Venice.”

Even this small detail will have bearing on the space of navigation. While I feel confident navigating the Canadian street system, I would be lost in this different system. I think that it would require that I think about space and spatial relationships in a new way. My spatial cognition is rooted in local understandings. Thinking of this in terms of GIS work, I think it is important to keep in mind the ways that we think about space in our work and how that accords with the people we are producing GIS with and for.

Wyatt

Will You Volunteer?

Thursday, February 28th, 2013

Goodchild’s article does a great job of giving an overview of the history, components, and some of the uses of Volunteered Geographic Information (VGI). Though he does a great job of highlighting the many benefits to this huge source of data, he also acknowledges some of the issues that arise with dependency on this type of data.

There are several issues in particular that I believe affect the future of the field. First of all, standardization of data is an issue when dealing with volunteered information. Contributors may not know the correct way to upload and cite data, which in turn could affect the results. This issue has been addressed somewhat by the use of volunteers who monitor the data, as well as agencies that have outlined ways to standardize certain types of data. Another issue is the ability of certain users to undermine the collective effort. This issue in particular is ever more relevant as larger and larger databases are compiled. Although it is generally accepted that contributors are working together for the collective good, there is a possibility that some people, with ulterior motives, could undermine the collective effort. One example of this is when anonymous users tamper with Wikipedia pages. Wikipedia allows any user to edit the content of its pages. And while there are some volunteers who monitor pages for legitimacy, there is a possibility of people propagating false information.

Overall, VGI has the ability to be a very useful field for current and future collective projects. However, there are still some issues that need to be addressed before it can be relied upon for important policy decisions.

-Victor Manuel

Living in a Virtual World

Thursday, February 28th, 2013

As I was reading through Richardson’s article, I kept thinking to myself time and time again: why aren’t Virtual Environments an effective tool for learning the layouts of real environments? It stands to reason that if the real environment is reproduced at a digital level, a test subject should be able to gain a similar amount of knowledge about the environment as a person who walked through said environment in real life.

Therefore, as the authors outlined some of the limitations of a VE, I started to brainstorm how an accurate and effective VE could be constructed and displayed. One of the main issues with using a VE as a learning tool was the alignment effect: users of the VE could become disoriented, especially when climbing sets of staircases. One potential solution to this conundrum could be the creation of a sort of “immersive” virtual environment, which visually surrounds the user. This could be achieved on a relatively portable scale through the use of some sort of “full experience” headset, which would make it appear as if the user is immersed in the real environment. Overall, the paper raises very thought-provoking questions about the limitations of Virtual Environments, especially how they are still not a viable substitute for experiencing said environment in real life.

-Victor Manuel

On Academia, Industry and Assumed Value Neutrality

Thursday, February 28th, 2013

Reading Coleman et al.’s paper, a useful piece examining VGI participants and their motivation, brought forth, for me, one of my bigger pet peeves: the idea of value-neutrality (and proficiency) within academia. Let me explain. In the list of motivations to contribute, the authors identify three negative motivations: mischief, agenda, and malice and/or criminal intent. While the article by no means classifies these motivations as specific to VGI, their placement sets them symbolically apart from the knowledge produced by experts. By positing these negative uses as illustrative of VGI as non-neutral, I read an assertion of value neutrality into the domain of experts.
I recognize that the rigorous demands of a publishing process cannot be ignored, and unquestionably account for a higher quality of data production within academic and professional realms. This does not mean that they are perfect, nor does it mean they are without agenda. Agenda is not always explicit, and I argue not always even conscious. However, the lay reader of an academic paper believes it to be value-neutral, while VGI is seen as never trustworthy. Let us bring this to the domain of GIS.
We trust the professionals at Google Maps and the peer-reviewed GIS paper, but not the contributors at OpenStreetMap. Both producers and produsers have to make decisions when they input data. We know that in spatial representations, it is easy to lie and it is easy to produce hierarchies. In fact, it is difficult not to. The difference between VGI and professional GIS is that people expect the former to do so and the latter to abstain. However, Google has to make money, and the academic has to be published, and they can mold their data to this end, as can their editors and publishers. I guess what I’m asking, in the end, is where can we make a useful critique of VGI that takes into account the unreliability of all data? How do we introduce accountability into academia, industry or VGI?

On the question of mischief, well, that one happens too. See here

Wyatt

VGI and the POWER LAW!!

Thursday, February 28th, 2013

Coleman, Georgiadou, and Labonte (2009) state that VGI causes a “more influential role [to be] assumed by the community” (p. 2). That’s great! But — is this influence level across the playing field of the “produsers” they talk about? Ross Mayfield’s Power Law of Participation says no.

WHERE DO YOU FIT???


As produsers, we fall somewhere along this graph, which indicates our respective influence in the application, according to Mayfield. This Law affirms one of the fundamental characteristics of informational ‘produsage’ outlined in the article: the environment allows for fluid movement of individuals between different roles in the community. You can move along the Power Law graph whenever you want. With this in mind, we must consider who is located in each part for different participatory applications, and whether the produsers comprising the high-engagement collaborative intelligence are a good representation for the application’s purpose. After CGIS, power comes hand-in-hand with thoughts of who is being left behind; who is not being represented by the high-engagement community.
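To make the skew concrete, here is a small sketch of how contributions in a participatory application often concentrate among a few high-engagement produsers. The numbers and the function are my own illustration, not anything from Mayfield or the article:

```python
def top_share(edit_counts, fraction=0.1):
    """Share of all edits made by the top `fraction` of contributors."""
    counts = sorted(edit_counts, reverse=True)
    k = max(1, int(len(counts) * fraction))
    return sum(counts[:k]) / sum(counts)

# Hypothetical edit counts for 100 produsers: a few heavy
# contributors at the collaborative-intelligence end, many
# occasional readers/taggers at the low-threshold end.
edits = [500, 120, 60, 30] + [5] * 16 + [1] * 80
share = top_share(edits, fraction=0.1)  # -> ~0.85
```

With this invented distribution, the top tenth of contributors account for roughly 85% of all activity, which is exactly the kind of long tail the Power Law graph depicts.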

The article provides a succinct overview of VGI, some of its applications, categories of users and their motivations, and potential data issues. Where does VGI fall short? In a world where collaboration and public participation see increasing popularity, will we be able to solely rely on VGI in the future? True, popularity != credibility — we still need to look at the holes in the maps.


-sidewalkballet

A model for your mental map

Thursday, February 28th, 2013

Tversky et al.’s explanation of mental spaces as “built around frameworks consisting of elements and the relations among them” (516) reminds me of an entity relationship model. The mental framework we have could consist of:

– Entities in line with Lynch’s city elements, and touched on in the Space of Navigation

  • Paths
  • Edges
  • Districts
  • Nodes
  • Landmarks

– Relationships to associate meaning between entities

  • Paths leading to landmarks
  • Edges surrounding districts

– Attributes distinguishing the characteristics of an entity

  • Significance of a landmark
  • Width of a path (maybe depicting how frequently it is used for travel opposed to actual width)

I would have liked this article to have a greater theoretical grounding within GIS. I struggle to see what cognitive maps can be used for in a GIS framework, but with this simplified schema in mind, can we translate these cognitive maps into usable data in a GIS? Maybe, but I think we would have to be very meticulous to grasp the nuances in spatial perception and cognition, and therefore the relationships between entities.
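As a thought experiment, the schema outlined above could be sketched as a minimal data model. The class and field names here are my own illustration, not anything proposed by Tversky et al. or Lynch:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A Lynch-style element of the mental map."""
    name: str
    kind: str  # 'path', 'edge', 'district', 'node', or 'landmark'
    attributes: dict = field(default_factory=dict)  # e.g. significance, perceived width

@dataclass
class Relationship:
    """An association carrying meaning between two entities."""
    subject: Entity
    predicate: str  # e.g. 'leads_to', 'surrounds'
    obj: Entity

# A tiny mental map: a path leading to a landmark, with the path's
# "width" recording perceived frequency of use rather than metres.
main_st = Entity("Main Street", "path", {"perceived_width": "high"})
clock_tower = Entity("Clock Tower", "landmark", {"significance": "high"})
mental_map = [Relationship(main_st, "leads_to", clock_tower)]
```

Even in this toy form, the hard part the post identifies is visible: the entities digitize easily, but the meaning loaded into the attributes and predicates only comes from debriefing the map-maker.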

Cognitive mapping methodology stresses the importance of debriefing after the maps are made. Discussions must be held in order to begin to establish reasoning regarding why things are placed in certain locations, why things are deemed to have greater importance, etc. I don’t think that a simply digitized cognitive map will serve much purpose (as a pedagogical tool or otherwise) without knowing the meaning behind it. Each user will have different experiences leading them to perceive different things—things that I don’t think we can make much sense of without dealing with the nitty-gritty relationships of entities.

-sidewalkballet

Explorations in the Use of Augmented Reality for Geographic Visualization

Thursday, February 21st, 2013

There is a small but significant difference that could make augmented reality boom or bust when it comes to GIS. It is the same problem that architects and engineers once faced as well. Only with the advent of computers and monitors were they able to rest their necks and sit down in chairs instead of hunching over a drafting board all day. GIS, for the most part, wasn’t subjected to such a fate.

Augmented reality could change that. Even now, similar displays are available to the public in shopping malls and showrooms, using the same table top, infrared projector method outlined in the article. What sets the visitors apart from GIS users is that they only use it for a couple of minutes at a time. As any GIS user knows, geospatial analysis rarely takes a short amount of time.

In light of that, augmented reality will need to make the jump from top-down to heads-up display before it makes significant inroads into the industry.

One part of the methodology that left something to be desired was the need for the user to place a flash card on each section of the table where they wanted to view supplementary information. Why not just display all the data at once? If it’s a matter of computing power, that is a simple fix. If, however, it is intrinsic to the software framework, it would greatly benefit the project if, instead of viewing a small section of a large map, the exocentric viewpoint were zoomed in to a smaller…bigger(?) scale so the data took up the extent of the display. After all, when’s the last time you squinted at a map of the island of Montreal when trying to figure out how far your house is from the nearest depanneur?

AMac

Critical GIS: Ethics, a Ghost of the Past

Thursday, February 21st, 2013

Robert Lake’s article “Planning and applied geography…” takes the idea of ethics transcending fields to the extreme. I believe that the type, or extent, of ethics is unique to a field of study and should not be pushed into areas where grey zones outnumber the black and white. This article seems to force the idea of practitioners as absent-minded about ethics, void of knowledge of technology’s impact on society. Maybe it is my “laissez-faire” attitude or ideals of “I do not care what you believe in, but just do not push it on me” that is speaking, but I do not believe practitioners have forgotten ethics and their applicability to structuring research in the digital realm. I would argue that it is how the ethics are applied that has changed and is causing this misunderstanding. For instance, equal access to GIS data is not truly flawed, as implied by Lake, as this data can be altered by a user and re-published as a modified version; i.e., multiple users can take the data and modify it for themselves to create multiple ethical data sets that correspond to each user’s ideals and background.

When Lake talks about a means to an end, this is a theoretically flawed assumption, because any good researcher or user of GIS knows that there is no end, only a variable set of conclusions that lead to more elaboration of data and a refinement of GIS systems. I personally consider GIS a dynamic tool for representing geographical data in a changing world. Furthermore, is it not the idea to show the variety of data from differing backgrounds during analysis, to create a mosaic of geographic data that can lead to new discoveries?

The way this article is written, and the way GIS and the application of ethical thought are paired, seems disconnected from reality. To clarify, the ethical ideas that Lake speaks of are the old way, a ghost of past thought. Ethics, I believe, are now considered in a new way, a way never available to older generations of researchers at the time. The ethics of how GIS is used are looser today, as a global society with a million views cannot be held to the archaic structures of Freudian dynamics of how research is done and how the tools are used.

C_N_Cycles

Hedley’s AR

Thursday, February 21st, 2013

**a quick post because wordpress ate my last one**

Hedley’s piece on AR provides a clear and pretty interesting, if dated, look at augmented reality, evaluating the merits of different interface designs. Eleven years on, it is interesting to look at how far AR has come.

A quick look at wikipedia shows a lot of different applications. While most of them are emblematic of everything that is weird about the economy these days, some piqued my interest as actually pretty valuable. One such thing was workplace apps. Wikipedia explains: “AR can help facilitate collaboration among distributed team members in a work force via conferences with real and virtual participants. AR tasks can include brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.”

While I could certainly put on my Critical GIS hat and problematize this on a number of grounds, I find it pretty exciting. I think that especially in a field like geography, the use of AR could make collaboration over space a lot more effective. Maybe I am drawn to it because it brings to mind my favorite geography term “reducing the friction of distance”; and that it does!

Wyatt

Ontology in Augmented Reality

Thursday, February 21st, 2013

Reading through the paper by Azuma, I could not help but get a little excited about all the sorts of AR applications we will see within as little as 5-10 years.  I envision video games that let the gamer feel like they are directly in, and interacting with, an environment by projecting it in their house.  I also see travelers wearing glasses and getting a tour of a foreign city without the help of a guide.  However, there are obviously a few limitations to overcome before Augmented Reality takes these jumps.  The one I want to focus on is User Interface Limitations.

This essentially comes down to how to display, and allow interaction with, the massive amounts of data that we have access to.  The amount of information that we could potentially display on a pair of glasses is astronomical in my mind.  But how do we go about deciding what information to display, and how to display it?  To me, this comes down to an individual’s ontology of space.  Take my previous tour guide example; one person may want to know where all the museums in a city are while another would prefer to have the best bars in the area.  This is a bit of a trivial example, however it highlights how it may become a bit difficult to take this amazing technology and make it equally useful for everyone.  While this is an issue today, I agree with the paper in that there will likely be “significant growth” in the research of these problems.  It is now a matter of putting time, effort and money into improving the ubiquitous use of these AR systems.  With the great potential for business growth (e.g.), I do not see this being a problem.
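To illustrate the tour-guide example, here is a hypothetical sketch of how an AR display might filter points of interest through one user’s preferences. The function, data, and weighting scheme are all my own invention, not anything from Azuma:

```python
def rank_pois(pois, preferences, top_n=3):
    """Rank points of interest by how well their category matches user preferences.

    `pois` is a list of (name, category) pairs; `preferences` maps
    category -> weight, a crude stand-in for one user's ontology of the city.
    """
    scored = [(preferences.get(cat, 0.0), name) for name, cat in pois]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

pois = [("Fine Arts Museum", "museum"), ("Bar Georges", "bar"),
        ("Old Port", "sight"), ("Science Centre", "museum")]
museum_lover = {"museum": 1.0, "sight": 0.5}  # bars carry zero weight
picks = rank_pois(pois, museum_lover)
# -> ['Science Centre', 'Fine Arts Museum', 'Old Port']
```

The point of the sketch is the limitation itself: the same `pois` list with a bar-goer’s weights produces a completely different overlay, so no single display configuration serves everyone.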

-Geogman15


Privacy vs. Efficiency in GIScience

Thursday, February 21st, 2013

O’Sullivan brings up three very important points when considering the direction of critical GIScience.  The one that struck home for me was the subjects of privacy, access and ethics.  It is hard to argue against Curry’s point, brought up by O’Sullivan, that the increasing availability of “spatial data forces us to reconceptualize privacy and associated ethical codes” (O’Sullivan, 2006:786).  With millions of people around the world constantly “sharing” their locational information via social networking sites such as Twitter or Facebook, it is easy to see that such information is no longer private.  The reconceptualization of privacy includes the fact that when something is shared on the internet, there is potential for that information becoming accessible to those other than the intended “target.”  We thus need to realize how easy it may be for locational information such as our home or school to essentially become public.  As a society, do we accept the fact that acquaintances (sometimes real, sometimes over the internet), will now know more about us than ever?  If not, how do we use these new applications in a way that respects individuals’ level of privacy while still allowing us to become more connected?

Traffic management is a great example of weighing privacy against increased connection.  Obviously, with increased surveillance, we will be able to detect traffic patterns better, allowing people to travel more efficiently.  However, not everyone may be comfortable with such surveillance, even if it does make their commute easier.  So, this is where the social theory of GIS meets the tool that is GIS.  We can come up with hundreds of ways to track human activity to allow us to travel more efficiently, but there may be a level at which people in a society are no longer comfortable with their location being readily available.  Furthermore, who has the right to use this information?  Is it the private businesses looking to create a useful traffic application, or is the government the only institution that should be able to use this data? It is here where critical GIS comes into play, as a way to evaluate the way different societies value privacy versus efficiency.  Again, this will be different across cultures, communities and individuals.  These issues make the application of GIS inherently tricky, as it is not just a tool that can be used objectively.

-Geogman15


Augmented Questions

Thursday, February 21st, 2013

Azuma et al. “update” (in 2001) the reader on advances, problems, and applications of augmented reality. Their intended audience appears to already be aware of the basics of registering objects and placing people in visible artificial environments. In contrast, the article we read a few weeks ago on eye-tracking technology explained seemingly advanced technological notions to the layperson much more nicely. Still, if the article’s purpose is to discuss AR from a multi-faceted perspective, discussing issues pertaining to the user, the augmented objects, and the environment, then the authors accomplished this well enough.

As someone with little to no experience with, or background knowledge of, augmented reality, I am concerned more with possible applications of the technology than with the technical side of things. Still, as someone approaching this article from a GIS-based perspective, I am intrigued by notions like georegistering and dynamic augmented reality. I’m sure the technology has advanced leaps and bounds in the past 12 years, including AR applications on smart phones that solve many of the weight and cost issues. I’m curious how AR is able to take an unprogrammed environment and situate its device so accurately within that space. Surely GPS is involved, as are internal sensors that collect aspect information, but beyond that, I am more intrigued and curious than critical.

– JMonterey

The fence straddle

Thursday, February 21st, 2013

Another interesting paper that raises more questions for me than it answered (which likely was the point). Parts escaped me – how is feminist geography a non-spatial community? But what resonated with me the most was the advice from Goodchild that “straddling the fence” between human geography and GIS could be particularly academically lucrative. O’Sullivan interprets this statement to refer to social theory criticisms of GIS (critical GIS) and uses this anecdote to introduce the paper. I think this statement may have had a broader interpretation, or at least is relevant in a broader context. I think the future of GIS (and actually of many academic disciplines) may be strongest in the areas that straddle fences – with economics, with health, with resource management, with computer science, and sub-disciplines within these. And likewise, from what I understand, it seems like critical geographers do well at straddling these fences too.

[Side note: being named Pickles would be awesome (p. 784)]

-Kathryn

Critical GIS or Geez-I’m-Sad

Wednesday, February 20th, 2013

I found Lake’s article incredibly interesting. Lake highlights critical components of GIS that are usually—in my experience—sidelined, and offers a shift away from the techie, positivist view that GIS practitioners typically (and perhaps unwittingly) hold. Lake makes several claims that sparked many more questions, and ultimately left me with an unsettling feeling; kind of dejected, all “what is all this even good for?”. I’m going to address and expand upon the bits that jumped out at me the most.

Subject-object dualism: Lake details how “the perspective, viewpoint, and ontology of the researcher are separate – and different – from those of the individuals constituting the data points comprising the GIS database,” (p. 408). Further, Lake notes how the data points (individuals) are stripped of their autonomy, becoming passive objects in the practitioners’ project. How can this notion be applied to concepts of VGI, where people are willingly providing their information? Does data derived from VGI or participatory crowdsourcing validate this subject-object dualism? Putting this dualism in a power framework; are the subjects granted more power (think of the Power Law) now? Are their ontologies embedded in the information they provide? I want to read Lake’s (and/or others’) opinions on how this dualism can be circumvented.

Technological mystification: Lake discusses how we reinforce existing structures of influence—undeniably true. GIS disenfranchises the less technically adept. This inherent technological mystification is just another type of mystification. Mystification, I would say, is inherent in pretty much everything—there is bureaucratic mystification of planning in an opaque government, for example, and I don’t see how this is going to be fully eradicated. While trying to make things more open and available to all people, there is inadvertent marginalization of certain groups. Nothing is going to reach everybody all the time—we just need to make effective tools that attempt to reach more people, more frequently. Maybe eventually we will have enough tools to satisfy everyone… We can dream, right?

I unequivocally agree with FischbobGeo’s statement that Lake’s article talks past GIS without engaging it. At certain points, this article could be talking about a whole range of topics. It raises more questions than it answers, and, call me a defeatist, makes it seem like we will never get it right.

-sidewalkballet

Better Algorithms or Better Computers?

Wednesday, February 20th, 2013

Anyone facing a dilemma about the above question should see the following video from 44:20 onward:

Data Structures and Algorithms – Richard Buckland on YouTube

Actually, the entire video is very useful for anyone interested in understanding the basics of algorithmic complexity.

– Dipto Sarkar

The near future of Augmented Reality

Monday, February 18th, 2013

After reading the paper by Azuma et al., I am convinced that augmented reality systems of the kind shown in science fiction movies are not far off. However, I think the first commercial applications of augmented reality will use mobile phones as the primary device. Mobile phones are already equipped with a range of sensors (GPS, electronic compass, accelerometer, camera, etc.) which can be used to take measurements of the environment. This fact is already leveraged by applications such as Google Goggles, and only slight improvements will make the system real-time, thus qualifying it as an Augmented Reality System according to the definition given by Azuma et al.  I also feel that acceptance of these applications will be higher as they do not require clunky wearable computers.

Another thought that came to my mind is the use of ubiquitous computing for augmented-reality applications. Instead of putting all the responsibility for sensing the environment, doing calculations and displaying results on a single device, it might be useful to distribute some of the work to smaller specialized units present (or planted) in the physical environment of the user. When a user comes into proximity with these computers, the device they are carrying may simply fetch the data and display it after doing some minimal calculations.

-Dipto Sarkar


Spatial data mining and spatial analysis

Friday, February 15th, 2013

I am late to post and I think everyone else has already posted lots of excellent ideas about these topics! I found the spatial data mining article very interesting. I think that statistical modeling and machine learning are two disciplines which share a lot in common and in some cases may even be redundant versions of one another. When I read papers written by computer scientists implementing machine learning with data, it seems that the goal (in this case mostly through unsupervised data mining) is to improve predictive ability, often measured by area under an ROC curve, for example. The goal of models in statistics is often to estimate (causal) effects and requires a different conceptual framework for model building and selection to avoid, for example, controlling for a variable in the causal pathway.
Additionally, many of the issues in spatial data mining / spatial statistics are mirrored as well. Correlation and dependence in space and time create problems for the traditional parameter estimators in statistics and for the traditional algorithms in classification/prediction/clustering in machine learning. It’s not enough to just consider spatial dependence; it’s also important to consider nuances of spatial data which may make goals different – such as the authors mention below figure 3.2, where they talk about how spatial accuracy should be measured not in a binary (correct/incorrect) sense but should account for how close (spatially) the classification was. I would really like to more thoroughly understand how statistics and machine learning algorithms really align and differ. It’s clear this is a highly interdisciplinary field – we need people trained in GIS, computer science, and statistics!
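As a rough illustration of that distance-sensitive idea, one could score each classified location by its distance to the nearest true occurrence of its predicted class, instead of a binary hit/miss. The exponential decay used here is my own assumption for the sketch, not the measure the authors propose:

```python
import math

def spatial_accuracy(predictions, truth_locations, scale=1.0):
    """Score classifications by how close (spatially) each guess was.

    Each prediction is (x, y, predicted_class); `truth_locations` maps a
    class to the (x, y) of its nearest true occurrence. An exact hit
    scores 1; a miss decays exponentially with distance, so a guess one
    cell away is penalized far less than one across the map.
    """
    total = 0.0
    for x, y, cls in predictions:
        tx, ty = truth_locations[cls]
        d = math.hypot(x - tx, y - ty)
        total += math.exp(-d / scale)
    return total / len(predictions)

# One exact hit and one prediction three units from the true location:
preds = [(0, 0, "wet"), (3, 0, "wet")]
truth = {"wet": (0, 0)}
score = spatial_accuracy(preds, truth)  # ~0.52, vs 0.5 under binary scoring
```

A binary measure would call the second prediction simply wrong; the distance-weighted score credits it for being nearly right, which is the nuance the authors flag.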

-Kathryn

The article every undergraduate geographer needs to read

Friday, February 15th, 2013

As a geographer, I found Danielle Marceau’s article “The scale issue in social and natural sciences” easily digestible. Familiar concepts such as the modifiable areal unit problem (MAUP) are presented in a very clear manner. The article focuses mainly on the effects of scale and aggregation on spatial inferences, and on linking spatial patterns and processes that occur across different scales.


Predicting and controlling for the MAUP can be very difficult, as pointed out by the author. New technologies may be able to help us solve this problem with their advanced data acquisition and analysis; however, even though these technologies exist, conducting such a study would be nearly impossible. So many processes are connected across varying scales, and when you make statistical inferences about specific phenomena, these inferences surely cannot account for them all. We may use GIS to create maps at multiple scales to run statistical tests and analyze the appropriate scales; however, even in the creation of these ‘test scales’ there is inherent bias, in that we assume we know the limits of the scales of these processes.
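A tiny sketch can show the MAUP mechanics at work: the same point observations, aggregated at two grid scales, yield very different zonal statistics. The data and grid scheme here are invented purely for illustration:

```python
from collections import defaultdict

def aggregate(points, cell_size):
    """Mean attribute value per grid cell at a given cell size."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, value in points:
        cell = (int(x // cell_size), int(y // cell_size))
        sums[cell][0] += value
        sums[cell][1] += 1
    return {cell: s / n for cell, (s, n) in sums.items()}

# The same four observations, aggregated at two different scales:
points = [(0.5, 0.5, 10), (1.5, 0.5, 2), (2.5, 0.5, 10), (3.5, 0.5, 2)]
fine = aggregate(points, cell_size=1)    # four cells: means 10, 2, 10, 2
coarse = aggregate(points, cell_size=2)  # two cells: both means are 6
```

At the fine scale the map shows strong spatial variation; at the coarse scale the variation vanishes entirely. Any inference drawn from the coarse zones would miss a pattern the underlying process clearly contains, which is exactly the ‘test scale’ bias described above.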


Though technology has advanced, I believe this comes down to a philosophical debate about science and about space: can we attempt to identify every exchange in process across scales, or do we simply attempt to understand using what seems to be the most intuitive and apparent scale? We may be able to use technology to improve the accuracy of our models, but only to a certain point. At that point, perhaps efforts would be better spent improving the processing capacity of the technology itself, rather than attempting to find the appropriate scale for phenomena that, in the end, we don’t even know are correct.


Point McPolygon


Spatial Data Mining and Geographic Knowledge Discovery

Thursday, February 14th, 2013

Unlike some other fields in GIScience, advances in spatial data mining and geographic knowledge discovery are not only needed, but time sensitive. The rate at which data is collected and produced is accelerating with little end in sight. This is due not only to the number of observations, but to the number of times an observation is made. Montreal’s public bus system, for instance, was in the dark ages until only a year or two ago. Now data is constantly collected from bus-mounted GPS units [Amyot]. At this rate, GISystems could drown in the surge of oncoming data. That is not to say that excess data is a bad thing. In a world in which one must choose between too much and too little data, too much, I think, wins out. That doesn’t mean an excess of input is not a double-edged sword.

Algorithms, data structures, and hardware limitations constrain the future of the science and must be improved upon. On the note of a double-edged sword, however, my guess is that as these factors improve, the incoming stream of data will only increase as well. What worries me are statements like the one made by Guo and Mennis: “more recent research efforts have sought to develop approaches to find approximate solutions for SAR so that it can process very large data sets.” I understand that projects may have deadlines, and researchers may have other places to be or feel obliged not to hog all the computing power. At the same time, the benefits of computing algorithms using complete data likely outweigh the computing costs. Then again, the largest data set I have ever created on my own was an Excel spreadsheet no bigger than 1 megabyte.

AMac