Archive for the ‘computer models’ Category

Hello, operator?

Thursday, September 29th, 2005

One phrase that strikes a rich chord in modern science, especially in chaotic systems, is “extreme sensitivity to initial conditions”. With a computer model of protein interactions, a research team at UCSD found that the same few proteins could produce radically different outputs from only minor differences in input. The model enabled the team to trace communications within the cell by inferring them from input/output pairs, giving a fairly complete picture of the parameters a cell might respond to.
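
For readers who have not met the idea before, here is a toy sketch in Python (the logistic map, my own illustration, nothing to do with the UCSD protein model) of how a one-in-a-million difference in input can produce radically different output:

    # Two runs of the chaotic logistic map that start almost identically
    # and diverge completely within a few dozen steps.
    def logistic_trajectory(x0, r=3.9, steps=40):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.200000)
    b = logistic_trajectory(0.200001)  # input differs by one part in a million
    for step in (0, 10, 20, 30, 40):
        print(step, round(abs(a[step] - b[step]), 6))  # the gap grows from 1e-6 to order 1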

The prevailing opinion prior to this study was that computational models would be hard-pressed to predict cellular function based on outside signals, which is exactly what these new findings have accomplished. The image on their news release says it all:
[image from the UCSD news release]

Obvious implications of this uncovering of “hidden conversations in the cell’s wiring” lie with cell-sourced problems, such as cancer treatments that start by boosting immune function. With a delicate drug tool, it is possible to “interfere with one of the pathological functions of the proteins, but leave the healthy functions intact.”

Plague decimates World of Warcraft

Friday, September 23rd, 2005

This is the true impact of computer technology: in what is considered to be the most popular online game in the world, thousands of virtual characters have fallen victim to a fatal in-game plague.

To give … powerful characters more of a challenge, Blizzard [the creator of WOW] regularly introduces new places to explore in the online world.

In the last week, it added the Zul’Gurub dungeon which gave players a chance to confront and kill the fearsome Hakkar – the god of Blood.

In his death throes Hakkar hits foes with a “corrupted blood” infection that can instantly kill weaker characters.

The infection was only supposed to affect those in the immediate vicinity of Hakkar’s corpse but some players found a way to transfer it to other areas of the game by infecting an in-game virtual pet with it.

This pet was then unleashed in the orc capital city of Orgrimmar and proved hugely effective as the Corrupted Blood plague spread from player to player.

Although computer controlled characters did not contract the plague, they are said to have acted as “carriers” and infected player-controlled characters they encountered.

Apparently, this isn’t the first virtual epidemic to hit an online game. According to the article, hundreds of Sims died from an infected guinea pig. Remember, it’s all virtual. (I hope it isn’t a hoax because it’s too delicious a story.)

Satellites and New Orleans wetlands

Saturday, September 17th, 2005

Scientists, policy makers, and the public have made enormous use of satellite images since Hurricane Katrina struck (e.g., see here). Using these images, NASA has just reported how important wetlands are in absorbing flood waters.

Economics of virtual worlds

Saturday, September 17th, 2005

The Washington Post has caught onto a phenomenon that we have reported on before, namely how the economics of virtual gaming worlds have intersected with the real world:

increasingly popular online role-playing games [called MMORPGs: massively multiplayer online role-playing games] have created a shadow economy in which the lines between the real world and the virtual world are getting blurred. More than 20 million people play these games worldwide, according to Edward Castronova, an economics professor at Indiana University who has written a book on the subject, and he thinks such gamers spend more than $200 million a year on virtual goods.

Virtual goods are those items, whether a sword or a character, that have been created online but are now sold in actual markets, such as eBay. The iconic example was an island that sold last year for $26,500US. This article has “photos” of the island for would-be real estate speculators.

I know how the financial exchange takes place in the real world but I always wondered how the exchange took place in the virtual world. The Washington Post article explains:

At one typical currency-exchange Web site, the MMORPG-Exchange, the current rate for Star Wars money is $24 for 5 million Imperial credits — about enough to buy a fast speeder bike. Put in an order via PayPal, and a green-skinned delivery guy will, within minutes, pop up inside the game to hand over the money in one shady corner of Mos Eisley, that corrupt city on Luke Skywalker’s home planet of Tatooine.
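
A bit of arithmetic on that quoted rate (the $24 and 5 million figures are from the article; the ten-million-credit comparison is my own illustration):

    usd = 24.0
    credits = 5_000_000
    rate = usd / credits              # dollars per Imperial credit
    print(rate * 1_000_000)           # about $4.80 per million credits
    print(rate * 10_000_000)          # so a 10-million-credit purchase runs about $48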

(Why journalists treat this phenomenon with wonder and view the sale of virtual goods as any different from the sale of financial instruments like derivatives is beyond me. That’s normalization for you: stocks are normal; virtual swords are weird.)

The article mentions two other websites (if only they’d do the work and provide the links!): Game USD, which tracks the value of virtual game currency against the US dollar (the site is cool because it’s so prosaic), and GamingTreasures.com, which lets you purchase a variety of products across the online gaming spectrum (a credible site because it doesn’t “dupe, exploit or farm”).

The Washington Post article probably came about because of the new book by Castronova, an Indiana University Bloomington professor. Called Synthetic Worlds: The Business and Culture of Online Games, it covers how online computer gaming has become a lucrative part of the worldwide entertainment industry.

See our previous posts about Sony launching its own virtual goods auction site and the impact of gaming on Chinese culture. Apparently, when reporting on the sheer size of these massively multiplayer games, neither the reporter nor Castronova checked the Guardian newspaper article, which reported that there were 40 million game players in China alone.

One final intersection of the virtual and the real world mentioned in the Washington Post:

After Hurricane Katrina, the operators of EverQuest II assured more than 13,000 members in the Gulf Coast region that their virtual property would be protected and preserved until they could resume playing.

Cheery news

Saturday, September 17th, 2005

Another cheery message from the US National Snow and Ice Data Center at the University of Colorado, whose researchers have been comparing satellite images of polar ice over time. Via the Independent:

A record loss of sea ice in the Arctic this summer has convinced scientists that the northern hemisphere may have crossed a critical threshold beyond which the climate may never recover. Scientists fear that the Arctic has now entered an irreversible phase of warming which will accelerate the loss of the polar sea ice that has helped to keep the climate stable for thousands of years.

It’ll be nice in Montreal, except for the extra ice storms and 30 degree summers that no one is prepared for…

Footprints across the U.S.

Tuesday, August 2nd, 2005

The Wildlife Conservation Society and the Center for International Earth Science Information Network (CIESIN) have just completed a comprehensive assessment of human impacts on wildlife across the globe. Part of their goal was to find the most untouched or pristine places in the world. The most pristine place in the U.S.? Alaska, although that may not be for long if developments like drilling in the Arctic National Wildlife Refuge (ANWR) go ahead.

A nice graphic in the NYTimes article shows the varied impacts.

As with the Famine Early Warning Systems posted on previously, this project depends more on data quality than on data analysis. Unlike FEWS, this is an entirely remotely sensed project. The NYTimes report mentions land use but it’s actually land cover, a subtle yet important distinction (see below). And the resolution in these types of analyses is coarse. Coarser resolution means bigger pixels, and the bigger the pixel, the more difficult it is to see small activities.

To give you a sense of how difficult it is to work at this scale with the data at hand, imagine that all your data has the resolution of a baseball stadium: you’re trying to infer hotdog and beer sales from a stadium-sized snapshot. The same analogy captures the land use versus land cover distinction. The covered stadium is akin to the land cover; what you’re trying to determine is the activity taking place under the dome, the land use.
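
To put rough numbers on the pixel problem, here is a small Python sketch; the resolutions are typical, illustrative values rather than the ones used in this particular study:

    # How much ground a single pixel covers at a few common sensor resolutions.
    resolutions_m = {"Landsat-class": 30, "MODIS-class": 1000, "coarse global product": 8000}
    for name, size in resolutions_m.items():
        hectares = (size * size) / 10_000   # 1 hectare = 10,000 square metres
        print(f"{name}: one {size} m pixel covers {hectares:,.0f} ha")
    # A smallholder field of 1-2 ha vanishes inside a 100 ha pixel, so the sensor
    # can tell you what covers the ground, not what people are doing on it.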

Famine Early Warning Systems in the News

Saturday, July 30th, 2005

The Famine Early Warning Systems (FEWS) recently announced an emergency for the Horn of Africa. The model now reports that 18 million people are facing severe food shortages. Most of these people are in Ethiopia.

FEWS is the best-known instance of a computer model that predicts potential famine hotspots. It is also an example of the extraordinary difficulty of creating reliable output at a continental or global scale. These models are very data intensive and therefore depend entirely on the quality of the data: poor data can result in massive under- or overstatement of a crisis. The models rely heavily on remotely sensed images, from which the modelers infer vegetation levels, water and rain availability, and crop conditions. The temptation is to rely primarily on the remote sensing instead of visiting the sites, which may be difficult or dangerous to reach and therefore expensive to monitor. Sophisticated models like FEWS are calibrated with ground-based data, but the availability of ground-based data over areas like the African continent is uneven and local data can be suspect. The Sudanese government, for example, has been known to control the availability, accuracy and interpretation of datasets characterizing its country as a way to play politics with humanitarian relief agencies. So, even with the most careful methods, 18 million is a rough estimate at best. At the same time, even rough estimates can save innumerable lives.
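
As an example of the kind of inference involved, the standard vegetation index derived from satellite imagery is NDVI, computed from red and near-infrared reflectance. A minimal Python sketch with illustrative values (not FEWS’ actual pipeline):

    import numpy as np

    def ndvi(nir, red):
        # Healthy vegetation reflects strongly in near-infrared and absorbs red,
        # so low or falling NDVI over cropland is one early-warning signal.
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

    # Toy reflectances for three pixels: dense crop, sparse crop, bare soil.
    print(ndvi([0.50, 0.30, 0.20], [0.08, 0.15, 0.18]))  # roughly [0.72, 0.33, 0.05]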

On another matter: Reuters, which carried the story as part of its alert system for humanitarians, has an associated interactive map that I find quite wanting. When I clicked on it I expected to see some numbers related to potential famine. Nothing. Indeed, a pulldown menu, with items like the Indian Ocean tsunami or AIDS in Asia, has at most standard map layers (roads, rivers, city locations). No information related to the subject. Also, the legend is broken for most of the links. Come on, guys, if you want map technology related to your stories then implement something. Don’t give us a standard atlas! Actually, this interactive map contains less information than an atlas. For a much better interactive map, see the Famine Early Warning Systems site.

Arctic ice melts

Saturday, July 30th, 2005

Nothing like the interpretation of remotely sensed imagery to ruin your day: Scientists sound alarm on Arctic ice cap:

Satellite data for the month of June show Arctic sea ice has shrunk to a record low, raising concerns about climate change, coastal erosion, and changes to wildlife patterns. Meier says circulation patterns are bringing more storms and warmer air from the South into the region, and that’s helping to break up the ice.

“June is really the first big month of melt in the central Arctic Ocean and so it’s an indication that the melt is progressing faster than normal,” [according to Walt Meier of the U.S.-based National Snow and Ice Data Center]. “And when you start melting the ice you’re leaving the open ocean there which absorbs much more solar energy and so it tends to heat up even more.”

Less sea ice means more moisture in the air and more rain.

It also leads to an increase in coastal erosion since the ice isn’t there to buffer the shoreline from waves.

Meier says the ice has retreated almost everywhere in the Arctic except for a small area in the East Greenland Sea.
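
The feedback Meier describes is easy to quantify roughly. A back-of-the-envelope Python sketch using typical textbook albedo values (my assumptions, not the Center’s numbers):

    albedo_sea_ice = 0.6     # fraction of sunlight reflected by ice and snow (rough)
    albedo_open_ocean = 0.06 # open water reflects very little
    incoming = 200.0         # W/m^2, illustrative summer-average insolation

    absorbed_ice = incoming * (1 - albedo_sea_ice)       # ~80 W/m^2
    absorbed_ocean = incoming * (1 - albedo_open_ocean)  # ~188 W/m^2
    print(absorbed_ocean / absorbed_ice)  # open water absorbs roughly 2.4x more energy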

I guess it’s time again for Canada to worry about its sovereignty over the Northwest Passage.

BTW, the Centre’s site contains an enormous amount of free snow and ice data on the atmosphere, biosphere, ground level, glaciers, hydrosphere, land surfaces, oceans, even paleoclimate. If only Canada could be similarly generous in its offerings. Indeed, why is this analysis coming from the U.S.?

U.S. messing with time

Friday, July 29th, 2005

So says a leaked proposal made to the United Nations, reported in the Wall Street Journal. As you know, every so often an extra second, called a leap second, has to be added to the clocks so that time tracks the movement of the sun. Apparently the U.S. doesn’t like this because the extra second disturbs existing computer programs and navigation systems (e.g., global positioning systems). Better, in its view, to have a standardized 24-hour clock and not worry about the drift.

Admittedly the drift is minimal. But it is upsetting to scientists,

including the Earth Rotation Service’s leap-second chief, Daniel Gambis, of the Paris Observatory. “As an astronomer, I think time should follow the Earth,” Dr. Gambis said in an interview. He calls the American effort a “coup de force,” or power play, and an “intrusion on the scientific dialogue.”…

[The U.S. proposal] has set off a wave of passionate opposition from astronomers, who argue that removing the link between time and the sun would require making changes to telescopes, changes that would cost between $10,000 and $500,000 per facility. That’s because a fancy telescope uses the exact time and the Earth’s position for aiming purposes when astronomers tell it to point at a specific star.
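
To see why observatories care, here is a quick Python calculation (my own numbers, not the Journal’s) of how much sky a one-second timing error corresponds to when a telescope converts time into the Earth’s rotation angle:

    sidereal_day_s = 86_164.1                     # seconds for one rotation of the Earth
    arcsec_per_second = 360 * 3600 / sidereal_day_s
    print(arcsec_per_second)                      # about 15 arcseconds of sky per second

    # A narrow telescope field of view is on the order of arcminutes, so an
    # unaccounted second shifts pointing by roughly 15 arcseconds, which is why
    # observatory software carries tables of leap seconds.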

[Note that there is a whiff of anti-Europeanism here because Gambis is from France and because Britain is considered to be the centre of time.]

Of course, the problem could actually be “lazy programmers”:

Deep down, though, the opposition is more about philosophy than cost. Should the convenience of lazy computer programmers triumph over the rising of the sun? To the government, which worries about safety more than astronomy, the answer is yes. In Mr. Allen’s view, absolutely not. [Steve Allen, an astronomer from the University of California, runs a website about leap seconds.] “Time has basically always really meant what you measure when you put a stick in the ground and look at its shadow,” he said.

Liam?

Mapping Hacks

Monday, July 11th, 2005

From boing boing on easy to use mapping software:

This time last year, I met Rich Gibson at Dorkbot and he told me that he had just started work on a Mapping Hacks book for O’Reilly. This week, I had the opportunity to peruse the finished book, co-written with Schuyler Erle and Jo Walsh. As a “map curious” newcomer to digital cartography, I can say with certainty that it’s an engaging and downright inspiring book. From Google Maps to Dodgeball, location-enhanced technologies are all the rage these days. But it’s easy to get lost in the hype of geocoding, Geographic Information Systems, and even GPS. Fortunately, as with the other books in O’Reilly’s Hacks series, Mapping Hacks is all about learning by doing.

The hacks range from gems like #7, perfectly titled “Will the Kids Barf?” (how to create an index of road curvedness), to “#39 View Your Photo Thumbnails on a Flash Map,” to “#76 Explore the Effects of Global Warming.” I’m told that even experienced map hackers will get off on the open source GIS tricks, geocoding Web hacks, and other technical material. For me though, Mapping Hacks is a perfect compass to guide me into the realm of digital cartography with plenty of welcome rest stops and fun tourist attractions along the way.

Link to O’Reilly catalog page, Link to Mapping Hacks blog.

Google Earth

Monday, July 11th, 2005

Google Earth is a new map viewer with overlay capacity, cardinal directions, and huge amounts of data, including topography, transportation (roads, railroads, transit stops), building footprints in major cities, socio-economic census data and crime statistics, and business locations. Some of the data is extruded to 3D. Most important, it’s free.

From Google’s site:

Want to know more about a specific location? Dive right in — Google Earth combines satellite imagery, maps and the power of Google Search to put the world’s geographic information at your fingertips.

  • Fly from space to your neighborhood. Type in an address and zoom right in.
  • Search for schools, parks, restaurants, and hotels. Get driving directions.
  • Tilt and rotate the view to see 3D terrain and buildings.
  • Save and share your searches and favorites. Even add your own annotations.

Google Earth puts a planet’s worth of imagery and other geographic information right on your desktop. View exotic locales like Maui and Paris as well as points of interest such as local restaurants, hospitals, schools, and more.

The Washington Post has a good review of Google Earth. The article focuses on one feature that exploits the interactivity and exchange potentials of the Internet:

You can add “placemarks” for any interesting spots you find, then share them with other Google Earth users via an online bulletin board. This ought to be directly integrated with Google Earth, instead of requiring you to save a placemark as a separate file, then switch to your Web browser to attach that file to a posting in that bulletin board.

It should then show up under the “Keyhole BBS” category in Google Earth’s Layers menu, but the program neglects to explain (as a Google publicist did) that it takes about two weeks for that to happen.

Despite those roadblocks, users of Google Earth and the earlier Keyhole program have accumulated a massive library of shared placemarks that span a wide range of geo-trivia. One individual, for example, has assembled a set of placemarks that point to historic lighthouses; another is mapping the locations of publicly accessible webcams.

[The Keyhole BBS could be one of the best features of Google Earth because it creates an online community of map users and data sharers. For those of you who are not intimate users of geographic information systems (GIS), the BBS has an excellent introduction to the software as well as FAQs posted by community members. See our prior post on the wonders of Keyhole technology.]
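
For the curious, a shared placemark ultimately boils down to a small KML file, which is just XML with a name, a description, and a longitude/latitude pair. A minimal Python sketch that writes one, assuming the KML 2.0 format Google Earth used at the time (the coordinates, roughly downtown Montreal, are my own illustration):

    placemark = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://earth.google.com/kml/2.0">
      <Placemark>
        <name>Example placemark</name>
        <description>A spot worth sharing on the Keyhole BBS</description>
        <Point>
          <coordinates>-73.57,45.50,0</coordinates>
        </Point>
      </Placemark>
    </kml>
    """

    with open("example.kml", "w") as f:
        f.write(placemark)
    # Opening the saved .kml file loads it as a placemark in Google Earth.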

What fascinates me is the impact that Google might have on GIS companies, particularly in the movement of GIS capability to the Internet. Not only is Google Earth offered for free but Google has value-added packages as well. Google Earth Plus adds GPS support ($20 US). Google Earth Pro, which is designed for professional and commercial users and promises to offer “the ultimate research, presentation and collaboration tool for location information”, is $400 US. There is also an enterprise solution, “for on-site deployment of custom Google Earth databases in your enterprise”. Earlier, Google Maps announced an API (for the geeky among you) that allows you to create mini applications. I have one sitting on my iBook desktop, a cool mapping utility for Montreal.

I’ve consistently been impressed by the user interface of Google Maps, although one can get pretty tired of the ICBM-like zoom every time you change locations. Plus you cannot really run it without a broadband connection, as the application doesn’t store the data on your computer but retrieves it as needed from Google’s servers. Nonetheless, the GIS community has been talking about distributed GIS for years, so we should accommodate a few glitches as it truly goes online. With all these features, the user interface, and the low, low price, I wonder if we’ll shortly be shifting to Google Earth as our standard GIS?

Politics and the hockey stick

Tuesday, July 5th, 2005

While we’re awaiting the decision to come out of the G8 summit on the issue of climate change, here’s the political dimension to the hockey stick controversy posted previously. It illustrates why this isn’t just a healthy debate between two groups of scientists but a case of harassment, the goal of which is likely the elimination of their federal research funds.

From Chris Mooney, author of The Republican War on Science:

[US House of Representatives] Energy and Commerce Committee chair Joe Barton has sent a threatening letter [on June 23rd] to the heads of the Intergovernmental Panel on Climate Change and the National Science Foundation, as well as to the three climate scientists who produced the original “hockey stick” study. Barton isn’t simply humoring questionable contrarian attacks on the “hockey stick” graph; he’s using his power as a member of Congress to intimidate the scientists involved in producing it.

You can read the actual letter here.

In what I would call “death by a thousand forms”, this is what the head of the Congressional Committee is demanding:

  1. Your curriculum vitae, including, but not limited to, a list of all studies relating to climate change research for which you were an author or co-author and the source of funding for those studies.
  2. List all financial support you have received related to your research, including, but not limited to, all private, state, and federal assistance, grants, contracts (including subgrants or subcontracts), or other financial awards or honoraria.
  3. Regarding all such work involving federal grants or funding support under which you were a recipient of funding or principal investigator, provide all agreements relating to those underlying grants or funding, including, but not limited to, any provisions, adjustments, or exceptions made in the agreements relating to the dissemination and sharing of research results.
  4. Provide the location of all data archives relating to each published study for which you were an author or co-author and indicate: (a) whether this information contains all the specific data you used and calculations you performed, including such supporting documentation as computer source code, validation information, and other ancillary information, necessary for full evaluation and application of the data, particularly for another party to replicate your research results; (b) when this information was available to researchers; (c) where and when you first identified the location of this information; (d) what modifications, if any, you have made to this information since publication of the respective study; and (e) if necessary information is not fully available, provide a detailed narrative description of the steps somebody must take to acquire the necessary information to replicate your study results or assess the quality of the proxy data you used.
  5. According to The Wall Street Journal, you have declined to release the exact computer code you used to generate your results. (a) Is this correct? (b) What policy on sharing research and methods do you follow? (c) What is the source of that policy? (d) Provide this exact computer code used to generate your results.
  6. Regarding study data and related information that is not publicly archived, what requests have you or your co-authors received for data relating to the climate change studies, what was your response, and why?
  7. The authors McIntyre and McKitrick (Energy & Environment, Vol. 16, No. 1, 2005) report a number of errors and omissions in Mann et al., 1998. Provide a detailed narrative explanation of these alleged errors and how these may affect the underlying conclusions of the work, including, but not limited to, answers to the following questions:
    a. Did you run calculations without the bristlecone pine series referenced in the article and, if so, what was the result?
    b. Did you or your co-authors calculate temperature reconstructions using the referenced “archived Gaspe tree ring data,” and what were the results?
    c. Did you calculate the R2 statistic for the temperature reconstruction, particularly for the 15th Century proxy record calculations and what were the results?
    d. What validation statistics did you calculate for the reconstruction prior to 1820, and what were the results?
    e. How did you choose particular proxies and proxy series?
  8. Explain in detail your work for and on behalf of the Intergovernmental Panel on Climate Change, including, but not limited to: (a) your role in the Third Assessment Report; (b) the process for review of studies and other information, including the dates of key meetings, upon which you worked during the TAR writing and review process; (c) the steps taken by you, reviewers, and lead authors to ensure the data underlying the studies forming the basis for key findings of the report were sound and accurate; (d) requests you received for revisions to your written contribution; and (e) the identity of the people who wrote and reviewed the historical temperature-record portions of the report, particularly Section 2.3, “Is the Recent Warming Unusual?”

The hockey stick controversy

Thursday, June 30th, 2005

No, it’s not about the NHL lockout. It’s about a February article in the journal Geophysical Research Letters that questions what has become the traditional graph of the rise in global temperature: the hockey stick. The hockey stick refers to the shape of the temperature line, which is approximately unchanging (straight) from 1000 to 1900 AD and then spikes from 1900 to 2000. The spike is particularly severe in the 1990s. The hockey stick came from a computer model created by a team led by Michael Mann and appeared in 1998 in a Nature article authored by Mann, Raymond Bradley and Malcolm Hughes.


(this image is from the IPCC 3rd assessment report, chapter 2 and describes “Millennial Northern Hemisphere (NH) temperature reconstruction (blue) and instrumental data (red) from AD 1000 to 1999, adapted from Mann et al. (1999). Smoother version of NH series (black), linear trend from AD 1000 to 1850 (purple-dashed) and two standard error limits (grey shaded) are shown.”)

To calculate modern temperatures, the authors used instrument readings. To calculate historical temperatures, Mann et al. could not use instruments: there were no thermometers in 1200 AD. Instead they relied on data from tree rings, ice cores, and corals, as well as historical accounts of temperature.

According to a report in the BBC, it was the assumptions in how this “proxy” data was modelled that posed the problem. The authors Stephen McIntyre and Ross McKitrick in Geophysical Research Letters asserted that the statistical treatment of the tree ring data favoured a hockey stick shape for the temperature curve. The strong bias essentially “flipped the entire analysis”.

(For a detailed explanation of the technique and the controversy, see this realclimate.org post.)
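
To make the statistical point concrete, here is a minimal Python sketch of the centering question only. It is my own toy construction with synthetic red noise, not code or data from Mann et al. or from McIntyre and McKitrick:

    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_proxies = 1000, 50

    # Synthetic "proxy" series: pure red noise, no real common climate signal.
    proxies = np.cumsum(0.1 * rng.normal(size=(n_years, n_proxies)), axis=0)

    def leading_pc(data, centering_rows):
        """First principal component after removing the mean computed
        over `centering_rows` only."""
        centred = data - data[centering_rows].mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return centred @ vt[0]

    pc_conventional = leading_pc(proxies, slice(None))   # mean over the full period
    pc_short = leading_pc(proxies, slice(-100, None))    # mean over the last 100 "years" only

    # With short centering, series whose recent values stray from their long-term
    # mean dominate the leading component, nudging it toward a hockey-stick shape
    # even in noise; with conventional centering that bias disappears.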

BTW, the controversy has a Canadian connection. Stephen McIntyre is from a company called Northwest Exploration in Toronto and Ross McKitrick hails from the Department of Economics, University of Guelph.

In a subsequent post, I’ll go over the recent political controversy surrounding this computer modelling conflict. For the moment, it shows how model results can hinge on a single data set, whether primary or secondary (“proxy”), and on a single assumption. This isn’t to say that assumptions are bad; they are a necessary component of any model. In this instance the statistical technique chosen is well respected (you couldn’t interpret remote sensing data without it) and extensively backed up by the literature. And it demonstrates that conflict is a healthy part of scientific advancement. The challenge is whether the scientists in the community are entrenched in a position or open to questioning the assumptions.

Updated post with graph directly from the IPCC report.

Politics and computer models: the nuclear edition

Wednesday, June 29th, 2005

An assumption underlying policy-driven computer models is that improving model sophistication, whether through more robust statistical techniques or more accurate data, will lead to better policy. Empirically, that assumption is refuted over and over again, but that doesn’t seem to register with most developers of these models.

Nowhere is this more evident than in site selection models. Site selection models are traditionally geographic information system (GIS)-based analyses that compare overlapping geographic features to determine the most appropriate location for a facility. Example geographic features are slope, location vis-à-vis a flood zone, adjacency to like facilities, and existing land use zoning. The facility could be a plaza, a subdivision or, in the most contentious instances, a hazardous waste disposal site. Of all hazardous wastes, the most contentious and most dangerous tends to be nuclear waste. Consequently, you’d want to get that one right and optimize placement of the waste facility. Siting should therefore be guided by data such as the stability of geological formations and should limit the amount of political interference.
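
For readers unfamiliar with these models, here is a minimal Python sketch of the basic overlay logic, with toy criteria and weights of my own invention, not anything Nirex actually used:

    import numpy as np

    rng = np.random.default_rng(1)
    shape = (100, 100)  # the study area as a grid of cells

    # Each layer scores every cell on one criterion (all random stand-ins here).
    geology = rng.random(shape)            # 1.0 = most stable formations
    flood_risk = rng.random(shape)         # 1.0 = highest flood risk
    distance_to_towns = rng.random(shape)  # 1.0 = farthest from settlements

    # Weighted overlay: combine the layers into a single suitability surface.
    suitability = 0.5 * geology + 0.3 * (1 - flood_risk) + 0.2 * distance_to_towns

    best = np.unravel_index(np.argmax(suitability), shape)
    print("Top-ranked cell:", best, "score:", round(float(suitability[best]), 3))
    # The point of the post: the sites actually chosen did not track scores like this at all.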

Now comes a New Scientist article (subscription required, although here’s a report on the sites chosen by Nirex and an article on the political calculations from Scotland’s Sunday Herald) on the sites chosen to dispose of the UK’s nuclear waste. New Scientist was one of the organizations that requested information about the siting process under the UK’s new Freedom of Information Act.

The articles report that, despite the sophistication of the computer siting models, the models were systematically ignored when sites were actually chosen. Nirex, a public-private partnership set up by the nuclear industry and the UK government to manage radioactive waste, chose the sites. Unsurprisingly, Nirex succumbed to political and personal calculations, such as political instability in Northern Ireland, which ruled out the entire region, and personal threats to Nirex staff, which ruled out all of Wales. Sites ultimately chosen for test bores? Places that already had nuclear power plants, because they exhibited a “measure of local support for nuclear activities.” Correlation with stability of geological formations? Zero.

What is surprising is the naivete about politics. Even New Scientist falls for the assumption in its article last week, entitled Politics left UK nuclear waste plans in disarray. Of course politics will significantly impact nuclear waste siting, to the point of derailing a ‘rational’ process. No one wants these facilities, and communities do what they can to avoid them. Nicholas Chrisman wrote a journal article on the use of GIS to site nuclear facilities in the US. Number of facilities actually built? Zero. What you need is to build political calculations into the process with enhancements like spatial decision support systems. It’s not necessarily pretty. Negotiating with the public or local officials is lengthy and contradictory, but then that’s the nature of democracy. It’s not easily quantifiable: how do you equitably measure one community’s opposition compared to another’s? However, if you want one of these sites ever to be built, then politics has to be acknowledged as an essential part of the science.

(Whether or not we should be generating the nuclear waste is another matter entirely.)

computer, build me a cure

Saturday, June 18th, 2005

In the fight against cancer, computing is a handy tool for drawing conclusions about various treatments. Of the various emerging areas of computing application, two are particularly ‘engaging’.

Using laser-scanning confocal microscopy, cross-sections of animal (that is, mouse or gerbil) tissue can be vacuumed of non-blood-vessel matter, leaving behind a 3D matrix of the blood network surrounding fat cells… by playing with the blood supply, one can destroy fat cells, and this principle extends to cancer cells quite nicely. Pretty 2D picture of a 3D model. Working as a technician at Harvard, I got to contort and rotate these models that the UNIX workstation spat out all day, one after another… but the mathematical analysis of the space between blood vessels and the growing/shrinking of vessels was left to the machine.
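
A rough Python sketch of the kind of spacing analysis described above; the thresholding and the random volume are my own stand-ins, not the lab’s actual pipeline:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    volume = rng.random((64, 64, 64))   # stand-in for a confocal image stack
    vessel_mask = volume > 0.995        # thresholding "vacuums away" non-vessel matter

    # Distance (in voxels) from every non-vessel voxel to the nearest vessel voxel.
    distance_to_vessel = ndimage.distance_transform_edt(~vessel_mask)
    print("Mean distance to nearest vessel:", round(float(distance_to_vessel.mean()), 2))
    # Tracking how this distance map changes over time is one way to quantify
    # vessel growth or regression around fat or tumour cells.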

And, more with nanotech, of course: computer models of special nanoparticles are constructed, which direct the spitting out of the physical molecules, and testing ensues in the bloodstream. The objective is to beat the speed of cancer cells by infiltrating their cell walls. A colorful model and a short write-up.

It’s a small world after all

Tuesday, May 24th, 2005

Computer models have linked ocean warmth to African drought. The authors of a new study predict that “a 50-year-long drying trend [in southern Africa] is likely to continue and appears tightly linked to substantial warming of the Indian Ocean.”

Read the whole article, which is cheery (not).

Touch nature, virtually

Saturday, May 21st, 2005

Via slashdot

Wired News reports that researchers have developed a computer system to allow physical interaction over the Internet. The system enables touching and feeling [tele-haptic sensing] of animals or other humans in real time, but it’s first being tried out on chickens. Researchers call it the “first human-poultry interaction system”, although they don’t explain why they chose chickens (actually roosters).

The Touchy Internet was built by researchers at the Mixed Reality Lab at the National University of Singapore (NUS) (with teams in other places such as Austria). Click on it. These guys have the coolest research website I’ve ever seen. Check out their video section, especially the ones on Human Pacman.

The immediate use that comes to mind is porn. After all, porn represents the number one use of the Internet. The Wired article mentions the possibility for rescue workers to remotely communicate with dogs as they search in dangerous or remote places. I wonder if this technology could be used as a component of nature interpretation, either in a virtual interpretive center or as a way for disabled people to interact with nature. Or it could be used to advance protection of distant habitats. For example, if we could touch them, would it help us better empathize with baby seals in northern Canada and therefore more vigorously protest the seal hunt? If we could pet dolphins, would we engage in protecting sea mammals from fishing? We could think of any number of environments that might benefit from tele-haptics.

Earthquake predictor

Wednesday, May 18th, 2005

Live in or planning to visit California? Suspect there’ll be an earthquake in the next 24 hours? Check out the US Geological Survey’s new online earthquake predictor. According to the Associated Press, these are “real-time, color-coded maps that provide earthquake probabilities in a specific region”: areas shaded in red represent a high chance of strong shaking within the next 24 hours (greater than a 1 in 10 chance), while those in blue represent a very remote chance (less than 1 in a million). The predictive maps are updated hourly with seismic data, and if you’re interested in a specific place in California, you can zoom in to take a close look.

Scientists say that it’s not designed to predict when the big quake will happen but instead where and how big the aftershocks will be.
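
For a sense of the arithmetic behind such a map, here is a hedged Python sketch: an Omori-law aftershock rate integrated over a 24-hour window and turned into a probability with a Poisson assumption. The parameters are illustrative, not the USGS model’s.

    import math

    K, c, p = 0.2, 0.05, 1.1   # assumed Omori-law parameters for shaking-level aftershocks
    t_start, t_end = 1.0, 2.0  # forecast window: day 1 to day 2 after a mainshock

    def expected_aftershocks(t0, t1, K, c, p, steps=1000):
        # Numerically integrate the decaying rate K / (t + c)**p over the window.
        dt = (t1 - t0) / steps
        return sum(K / (t0 + (i + 0.5) * dt + c) ** p for i in range(steps)) * dt

    n_expected = expected_aftershocks(t_start, t_end, K, c, p)
    prob_at_least_one = 1 - math.exp(-n_expected)   # Poisson: P(at least one event)
    print(round(n_expected, 3), round(prob_at_least_one, 3))
    # With these assumed parameters the window works out to roughly a 1 in 9 chance,
    # which would land the cell in the map's red, high-probability range.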

For those of you who want more details about the models, here’s the background: Real-time Forecasts of Tomorrow’s Earthquakes in California: a New Mapping Tool.

Robots gone wild!

Tuesday, May 17th, 2005

Researchers at Cornell have just convinced robots to reproduce. That is, researchers have built block-shaped robots that are able to pick up, integrate, and then hive off blocks to create duplicates of themselves. The video from the Cornell site is pretty cool (don’t try the mpg from the NYTimes site; it caused my computer to shut down).

The research is covered in the NYTimes, which reports on an article in Nature (sorry guys, it’s not free). I found the NYTimes article superficial, playing up the sensationalist angle (brave new world and all). The reporting at the Cornell site is better.

What the research shows me is how faaaaar we have to go before we achieve anything like what we have come to expect from movies like I Robot or earlier, Silent Running. Another site shows the state of the art on robot faces, fingers and eyes. It’s enlightening because the work is still primitive. We have robots in the world; they can handle the dirty jobs or give us the minimally intelligent toys. But they’re not C3PO and they are decades away from passing the Turing test.

What the research implies to me is that we have gotten blasé about what our science can deliver. Science is a long, slow process of careful incremental work. Rarely do advances appear quickly, and never just because we wish for them. And where is the sense of wonder in this type of innovation? This is still pretty cool, even if it is blocks. I suppose the sense of wonder is drained into movies like Star Wars, which can do it all in CGI.*

The Cornell article also brings up the problem recognized by researchers in labelling this reproduction or self-replication:

human beings reproduce but don’t literally self-replicate, since the offspring are not exact copies. And in many cases, the ability to replicate depends on the environment. Rabbits are good replicators in the forest, poor replicators in a desert and abysmal replicators in deep space, they note. “It is not enough to simply say they replicate or even that they replicate well, because these statements only hold in certain contexts,” the researchers conclude. The conference paper also discusses the reproduction of viruses and the splitting of light beams into two identical copies. The analysis they supply “allows us to look at an important aspect of biology and quantify it,” Lipson explains.

Associated Links:
Researchers’ web page

*If it’s all been done, then where’s my anti-grav machine?

Wired woodlands

Friday, May 13th, 2005

Take a look at the article on the Wired Woodlands, a small forested reserve up in the San Jacinto Mountains in southern California. The James Reserve is owned by the University of California at Riverside. It is now covered with “more than 100 tiny sensors, robots, cameras and computers, which are beginning to paint an unusually detailed portrait” of innumerable processes in the natural landscape, from nesting behaviour to soil chemistry.

I’ve been to the James Reserve several times because it is the natural home of the Society for Conservation GIS (well, ok, it’s not home but it’s the site of the original meetings of the group). I remember how exciting it was when the director of the reserve got word that he won this major National Science Foundation Grant. This represents a major innovation in the use of computers to monitor the environment.