Bringing DNA Computers to Life
Ehud Shapiro and Yaakov Benenson
Tapping the computing power of biological molecules gives rise to tiny machines that can speak directly to living cells.
When British mathematician Alan Turing conceived the notion of a universal programmable computing machine, the word "computer" typically referred not to an object but to a human being. It was 1936, and people with the job of computer, in modern terms, crunched numbers. Turing's design for a machine that could do such work instead—one capable of computing any computable problem—set the stage for theoretical study of computation and remains a foundation for all of computer science. But he never specified what materials should be used to build it.
Turing's purely conceptual machine had no electrical wires, transistors or logic gates. Indeed, he continued to imagine it as a person, one with an infinitely long piece of paper, a pencil and a simple instruction book. His tireless computer would read a symbol, change the symbol, then move on to the next symbol, according to its programmed rules, and would keep doing so until no further rules applied. Thus, the electronic computing machines made of metal and vacuum tubes that emerged in the 1940s and later evolved silicon parts may be the only "species" of nonhuman computer most people have ever encountered, but theirs is not the only possible form a computer can take.
Living organisms, for instance, also carry out complex physical processes under the direction of digital information. Biochemical reactions and ultimately an entire organism's operation are ruled by instructions stored in its genome, encoded in sequences of nucleic acids. When the workings of biomolecular machines inside cells that process DNA and RNA are compared to Turing's machine, striking similarities emerge: both systems process information stored in a string of symbols taken from a fixed alphabet, and both operate by moving step by step along those strings, modifying or adding symbols according to a given set of rules.
These parallels have inspired the idea that biological molecules could one day become the raw material of a new computer species. Such biological computers would not necessarily offer greater power or performance in traditional computing tasks. The speed of natural molecular machines such as the ribosome is only hundreds of operations a second, compared with billions of gate-switching operations a second in some electronic devices. But the molecules do have a unique ability: they speak the language of living cells.
The promise of computers made from biological molecules lies in their potential to operate within a biochemical environment, even within a living organism, and to interact with that environment through inputs and outputs in the form of other biological molecules. A bio-molecular computer might act as an autonomous "doctor" within a cell, for example. It could sense signals from the environment indicating disease, process them using its pre-programmed medical knowledge, and output a signal or a therapeutic drug.
Over the past seven years we have been working toward realizing this vision. We have already succeeded in creating a biological automaton made of DNA and proteins able to diagnose in a test tube the molecular symptoms of certain cancers and "treat" the disease by releasing a therapeutic molecule. This proof of concept was exciting, both because it has potential future medical applications and because it is not at all what we originally set out to build.
Viruses and vaccines
The terms viruses and vaccines have entered the jargon of the computer industry to describe some of the bad things that can happen to computer systems and programs. Unpleasant occurrences like the March 6, 1991, attack of the Michelangelo virus will be with us for years to come. In fact, from now on you need to check your IBM or IBM-compatible personal computer for the presence of Michelangelo before March 6 every year — or risk losing all the data on your hard disk when you turn on your machine that day. And Macintosh users need to do the same for another intruder, the Jerusalem virus, before each Friday the 13th, or risk a similar fate for their data.
A virus, as its name suggests, is contagious. It is a set of illicit instructions that infects other programs and may spread rapidly. The Michelangelo virus went worldwide within a year. Some types of viruses include the worm, a program that spreads by replicating itself; the bomb, a program intended to sabotage a computer by triggering damage based on certain conditions — usually at a later date; and the Trojan horse, a program that covertly places illegal, destructive instructions in the middle of an otherwise legitimate program. A virus may be dealt with by means of a vaccine, or anti-virus, program, a computer program that stops the spread of and often eradicates the virus.
Transmitting a Virus.
Consider this typical example. A programmer secretly inserts a few unauthorized instructions in a personal computer operating system program. The illicit instructions lie dormant until three events occur together: 1. the disk with the infected operating system is in use; 2. a disk in another drive contains another copy of the operating system and some data files; and 3. a command, such as COPY or DIR, from the infected operating system references a data file. Under these circumstances, the virus instructions are now inserted into the other operating system. Thus the virus has spread to another disk, and the process can be repeated again and again. In fact, each newly infected disk becomes a virus carrier.
Damage from Viruses.
We have explained how the virus is transmitted; now we come to the interesting part — the consequences. In this example, the virus instructions add 1 to a counter each time the virus is copied to another disk. When the counter reaches 4, the virus erases all data files. But this is not the end of the destruction, of course; three other disks have also been infected. Although viruses can be destructive, some are quite benign; one simply displays a peace message on the screen on a given date. Others may merely be a nuisance, like the Ping-Pong virus that bounces a "Ping-Pong ball" around your screen while you are working. But a few could result in disaster for your disk, as in the case of Michelangelo.
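The counter-triggered behavior described in these two sections can be sketched as a toy simulation. All names here (the `Disk` class, the file names, the `run_command` helper) are hypothetical illustrations, not part of the article; a real virus attaches itself to operating-system code, not to Python objects.

```python
# Toy model of the counter-triggered virus described above.
class Disk:
    def __init__(self, name):
        self.name = name
        self.infected = False
        self.copy_counter = 0                    # bumps each time this disk spreads the virus
        self.data_files = ["letter.txt", "budget.dat"]

def run_command(source, target):
    """A COPY/DIR-style command issued from `source` that references `target`."""
    if source.infected and not target.infected:
        target.infected = True                   # virus copied into the other operating system
        source.copy_counter += 1
        if source.copy_counter >= 4:             # trigger condition from the example
            source.data_files.clear()            # payload: erase all data files

a, b, c, d, e = (Disk(n) for n in "ABCDE")
a.infected = True                                # the originally infected system disk
for target in (b, c, d, e):
    run_command(a, target)

# b, c, d and e are now carriers; a's data files were erased on the 4th copy.
```

Each newly infected disk would itself spread the virus in the same way, which is why the example notes that the destruction does not end with the first disk.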
A word about prevention is in order. Although there are programs called vaccines that can prevent virus activity, protecting your computer from viruses depends more on common sense than on building a "fortress" around the machine. Although there have been occasions where commercial software was released with a virus, these situations are rare. Viruses tend to show up most often on free software acquired from friends. Even commercial bulletin board systems, once considered the most likely suspects in transferring viruses, have cleaned up their act and now assure their users of virus-free environments. But not all bulletin board systems are run professionally. So you should always test diskettes you share with others by putting their write-protection tabs in place.
If an attempt is made to write to such a protected diskette, a warning message appears on the screen. It is not easy to protect hard disks, so many people use anti-virus programs. Before any diskette can be used with a computer system, the anti-virus program scans the diskette for infection. The drawback is that once you buy this type of software, you must continuously pay the price for upgrades as new viruses are discovered.
Twenty-eight Ways to Build a Solar System
Ever since their discovery, extrasolar planets have challenged standard ideas about planet formation. Faced with increasing numbers of 'unusual' planetary systems, astronomers are now developing models that attempt to explain these giant planets around Sun-like stars.
A bewildering variety of extrasolar planetary systems is now known. Although the systems that have been discovered are heavily biased (by the methods used to detect them) towards planets at least as large as Jupiter orbiting very near their stars, we are already glimpsing the diversity of solar systems [1]. Some have Jupiter-like planets in circular orbits both near (closer than Mercury) and far (roughly as distant as Jupiter) from their stars. Some planets have highly elliptical orbits, perhaps suggesting gravitational interactions with other large planets, although there are alternative explanations. These newly discovered planets are believed to be 'gas giants', meaning that although they may have a solid core, most of their mass consists of hydrogen and helium. Writing in the Astronomical Journal, Levison and colleagues [2] have modeled a variety of possible 'outer solar systems'.
One general scheme for giant-planet formation consists of forming a set of 'embryos' (Mars- to Earth-size) by the aggregation of kilometre-sized 'planetesimals' from the disk-like nebula that swirls around a young star. The planetesimals form out of whatever solid material can locally condense from the nebula, depending largely on the local temperature. The embryos then come together to build giant-planet 'cores' with masses of 5-20 Earth masses; only then can the cores suck up vast quantities of nebular gas and grow into gas giant planets [3] before the remaining nebular gas is dispersed when the central star lights up. If this description seems vague, then the reader has caught on. The physics of these processes is very poorly known. It is not really understood how planetesimals form or what the physical conditions (temperature, mass and density as a function of solar distance) of the nebulae are. Other mysteries include how long gas disks remain around forming stars before being blown away, how planets accrete gas, and how the presence of all that gas in the disks affects the orbits of the embryos. There are, of course, reasonable guesses.
Levison et al. have made a brave attempt to study the diversity of possible planetary systems by performing 28 numerical simulations of the aggregation of planetary embryos. Because the important physical parameters are unknown, they chose ranges that probably encompass the 'real' values. After constructing plausible initial distributions of embryos surrounding the star, they compute the orbital evolution of these protoplanets as they interact with each other, sometimes coalescing and sometimes throwing each other out of the system. Of particular importance (or perhaps concern) is the modeling of huge gas cocoons around the embryos to make them coalesce more easily as they pass each other. This was done to prevent the known problem of the protoplanets' strong gravity slingshotting one another into high-velocity orbits, so that they bash each other to bits before they grow large enough to accrete gas.
The simulations continue until the orbits stabilize, and they produce a wide variety of final planetary systems. Some have a few widely spaced planets, others have many Uranus-sized planets evenly spaced, while yet others have big planets closely packed together. Some even look like our Solar System. Interestingly, the simulations do not produce the most prevalent class of known extrasolar planets — Jupiter-sized objects very close to their stars — indicating that some crucial piece of physics is missing. The authors observe that even the higher growth rates they use (produced by the gas cocoons) do not sufficiently damp the planetary velocities to produce solar systems like our own, with nearly circular planetary orbits. This problem is also observed in simulations of inner-planet formation, but because most of the accumulation is presumed to occur after the gas has vanished, it is not clear that gas is the culprit in removing the dynamical excitation.
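The chaotic variability of such outcomes can be illustrated with a deliberately crude toy model. This is emphatically not the Levison et al. code: the merger and ejection probabilities below are invented for illustration, and real simulations integrate gravitational orbits rather than drawing random pairs. The sketch only shows how identical starting embryos can end up as different numbers of planets depending on the sequence of encounters.

```python
import random

def coalesce(initial_masses, seed, stop_chance=0.02):
    """Toy embryo evolution: random pairwise encounters end in either a
    merger or an ejection, until the system happens to settle down.
    Probabilities are invented; this only illustrates chaotic outcomes."""
    rng = random.Random(seed)
    masses = list(initial_masses)
    while len(masses) > 1 and rng.random() > stop_chance:
        i, j = rng.sample(range(len(masses)), 2)
        if rng.random() < 0.7:
            # Close encounter ends in a merger: combine the two bodies.
            merged = masses[i] + masses[j]
            masses = [m for k, m in enumerate(masses) if k not in (i, j)]
            masses.append(merged)
        else:
            # ...or in an ejection: one body is thrown out of the system.
            masses = [m for k, m in enumerate(masses) if k != j]
    return sorted(masses, reverse=True)

# Twenty equal embryos, many random histories: the final planet count varies.
counts = [len(coalesce([1.0] * 20, seed)) for seed in range(50)]
```

Running with different seeds gives different final systems, which is the qualitative point of the paragraph above: even a fixed starting configuration does not determine the outcome.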
The variety found in these simulations raises two issues. One is that the range of physical parameters used (due largely to our ignorance) is far wider than the Universe actually produces in protoplanetary nebulae, implying that the diversity found is larger than would be observed in reality. For example, the authors double the gas mass of large cores on a timescale that varies by four orders of magnitude; perhaps real disks have much less variation, producing a smaller range of final planetary masses than emerge from these models.
The second issue is that of predictability for theories of solar system formation. The dynamical evolution of a small number of embryos is highly chaotic, giving variable locations and numbers for the resulting planets. Levison et al. find stable systems ranging from one to seven planets. So it is unreasonable to demand that planet-formation theories produce our Solar System in detail; rather, some of the time at least, they must provide systems similar to ours from what are thought to be plausible starting conditions. This is in contrast to more 'regularist' thinking of 10-20 years ago, when it was popular to postulate that most solar systems would have a Jupiter analogue just outside the 'snow line'. This is the distance beyond which the temperature has dropped enough that icy solids condense, and greatly increase the surface mass-density of solids and thus local accretion rates.
Radar uses radio waves to enable aircraft, ships and ground stations to see far into their surroundings even at night and in bad weather. The metal antennas behind those waves also strongly reflect radar, making them highly visible to others—a deadly disadvantage during wartime. A new class of nonmetallic radio antennas can become invisible to radar—by ceasing to reflect radio waves—when deactivated. This innovation, called plasma antenna technology, is based on energizing gases in sealed tubes to form clouds of freely moving electrons and charged ions.
Although the notion of the plasma antenna has been knocked around in labs for decades, Ted Anderson, president of Haleakala Research and Development—a small firm in Brookfield, Mass.—and physicist Igor Alexeff of the University of Tennessee-Knoxville have recently revived interest in the concept. Their research reopens the possibility of compact and jamming-resistant antennas that use modest amounts of power, generate little noise, do not interfere with other antennas and can be easily tuned to many frequencies. When a radio-frequency electric pulse is applied to one end of such a tube (Anderson and Alexeff use fluorescent lamps), the energy from the pulse ionizes the gas inside to produce a plasma. "The high electron density within this plasma makes it an excellent conductor of electricity, just like metal," Anderson says. When in an energized state, the enclosed plasma can readily radiate, absorb or reflect electromagnetic waves. Altering the plasma density by adjusting the applied power changes the radio frequencies it broadcasts and picks up. In addition, antennas tuned to the right plasma densities can be sensitive to lower radio frequencies while remaining unresponsive to the higher frequencies used by most radars. But unlike metal, once the voltage is switched off, the plasma rapidly returns to a neutral gas, and the antenna, in effect, disappears.
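The tuning behavior described here follows from the standard electron plasma frequency: a plasma reflects radio waves below that frequency and is largely transparent above it, so adjusting the electron density moves the cutoff. A minimal sketch of the textbook formula follows; the example density is an assumption for illustration, not a figure from the article.

```python
import math

def plasma_frequency_hz(n_e_per_cm3):
    """Electron plasma frequency f_p = (1/2*pi) * sqrt(n_e e^2 / (eps0 m_e)).

    n_e_per_cm3: electron number density in cm^-3.
    Returns f_p in Hz (the standard shortcut is f_p ~ 8980 * sqrt(n_e) Hz).
    """
    e = 1.602176634e-19       # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    m_e = 9.1093837015e-31    # electron mass, kg
    n_e = n_e_per_cm3 * 1e6   # convert cm^-3 to m^-3
    omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))
    return omega_p / (2 * math.pi)

# An assumed density of 1e10 electrons per cm^3 gives a cutoff near 0.9 GHz:
# such a plasma would interact with lower radio frequencies yet sit below
# the multi-gigahertz band used by most radars, and when the power is cut
# the density (and hence the cutoff) collapses to zero.
f_cutoff = plasma_frequency_hz(1e10)
```

This is why the same tube can act as an antenna at one density and effectively vanish at another.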
This vanishing act could have several applications, Alexeff reports. Defense contractor Lockheed Martin will soon flight-test a plasma antenna (encased in a tough, nonconducting polymer) that is designed to be immune from detection by radar even as it transmits and receives low-frequency radio waves. The U.S. Air Force, meanwhile, hopes that the technology will be able to shield satellite electronics from powerful jamming signals that might be beamed from enemy missiles. And the U.S. Army is supporting research on steerable plasma antenna arrays in which a radar transmitter-receiver is ringed by plasma antenna reflectors. "When one of the antennas is deactivated, microwave signals radiating from the center pass through the open window in a highly directional beam," Alexeff says. Conversely, the same apparatus can act as a directional receiver to precisely locate radio emitters.
Not all researchers familiar with the technology are so sanguine about its prospects, however. More than a decade ago the U.S. Navy explored plasma antenna technology, recalls Wally Manheimer, a plasma physicist at the Naval Research Laboratory. It hoped that plasmas could form the basis of a compact and stealthy upgrade to the metallic phased-array radars used today on the U.S. Navy's Aegis cruisers and other vessels. Microwave beams from these arrays of antenna elements can be steered electronically toward targets. Naval researchers, Manheimer recounts, attempted to use plasma antenna technology aimed by magnetic fields to create a more precise "agile mirror" array. To function well, the resulting beams needed to be steered in two dimensions; unfortunately, the scientists could move them in only one orientation, so the U.S. Navy canceled the program.
Sapphire plays supporting role for nanotubes
Charles Q. Choi
Carbon nanotubes would make ideal connecting wires in advanced circuits if not for the painstaking effort required to line up each tiny, sticky, floppy strand. Now scientists have found that crystalline sapphire can automatically help guide nanotubes into the patterns needed to build transistors and to make flexible electronics. Electrical signals can flow more quickly through carbon nanotubes than through silicon, which in principle could lead to faster computers, explains Chongwu Zhou, an electrical engineer at the University of Southern California. Moreover, nanotubes can be as small as one-fifth the theoretical minimum size of conventional silicon circuitry.
To make nanotube circuits, scientists scatter nanotubes randomly and attach electrodes wherever they can, or else they try to grow nanotubes toward one another and later fabricate electrodes on them. Such efforts, though, are slow and inefficient, leading scientists to wonder if substrates existed that could naturally orient the tubes. After more than a year of experiments on various crystals, Zhou and his colleagues discovered that sapphire could achieve just that. Sapphire crystal is hexagonal, rising from a flat base, and the researchers found that most vertical slices of sapphire apparently expose constituent aluminum and oxygen atoms in layouts that promote the formation of nanotubes in orderly rows.
In the January Nano Letters, Zhou's team reported the creation of transistors with such aligned nanotubes. The researchers coated commercially available artificial sapphire with a cagelike protein called ferritin and flowed hydrocarbon gas over it while baking it. Iron within the protein catalyzed the growth of single-walled nanotubes from carbon supplied by the gas. Once the sapphire was covered with nanotubes, they could place the metal electrodes of the transistors wherever they wanted and remove the unwanted nanotubes with highly ionized oxygen gas.
Past carbon nanotube transistors were typically constructed atop silicon composites common in the electronics industry. The drawback was that the metal electrodes and the silicon interacted to suck up electrical charge, slowing down performance and raising power consumption. Zhou's strategy eliminates the parasitic drain because sapphire is electrically insulating, not semiconducting like silicon. The method is closely related to the so-called silicon-on-sapphire approach that IBM and other chipmakers have used to build specialized high-performance circuitry, "so we can borrow a lot of knowledge from the semiconductor industry," Zhou remarks.
When compared with other carbon nanotube electronics, these findings display the highest density of aligned nanotubes, at up to 40 per micron. Other methods manage only one to five, Zhou states. Nanotube density is crucial, because the more there are between electrodes, the more signals will be conducted. The scientists can control nanotube density by varying how much iron they employ within the ferritin.
The researchers could readily create flexible electronics from their nanotube transistors by baking a plastic film onto the nanotube transistors and then peeling off the strips, which hold on to the transistors. Carbon nanotube flexible electronics could "relatively easily" outperform the silicon-based versions currently used by industry, Zhou says, and he foresees its use in applications such as large flat-panel displays, vehicle windshields and smart cards. He also notes that such aligned nanotubes could act as sensors: attached molecules could send an electrical signal across the nanotubes if they reacted with cancer markers or other compounds.
These findings are "a very important result in resolving one of the most difficult problems related to carbon nanotube manufacture for integrated circuits," says Kang Wang, director of the Center on Functional Engineered Nano Architectonics at the University of California, Los Angeles. He points to one crucial hurdle to overcome: ensuring that all nanotubes made by this technique are semiconducting, because it currently produces a mix of metallic (fully conducting) and semiconducting ones.
Robin E. Bell
Abundant liquid water newly discovered underneath the world's great ice sheets could intensify the destabilizing effects of global warming on the sheets. Then, even without melting, the sheets may slide into the sea and raise sea level catastrophically.
As our P-3 flying research laboratory skimmed above the icy surface of the Weddell Sea, I was glued to the floor. Lying flat on my stomach, I peered through the hatch on the bottom of the plane as seals, penguins and icebergs zoomed in and out of view. From 500 feet up everything appeared in miniature except the giant ice shelves—seemingly endless expanses of ice, as thick as the length of several football fields, that float in the Southern Ocean, fringing the ice sheets that virtually cover the Antarctic landmass. In the mid-1980s all our flights were survey flights: we had 12 hours in the air once we left our base in southern Chile, so we had plenty of time to chat with the pilots about making a forced landing on the ice shelves. It was no idle chatter. More than once we had lost one of our four engines, and in 1987 a giant crack became persistently visible along the edge of the Larsen B ice shelf, off the Antarctic Peninsula—making it abundantly clear that an emergency landing would be no gentle touchdown.
The crack also made us wonder: Could the ocean underlying these massive pieces of ice be warming enough to make them break up, even though they had been stable for more than 10,000 years?
Almost a decade later my colleague Ted Scambos of the National Snow and Ice Data Center in Boulder, Colo., began to notice a change in weather-satellite images of the same ice shelves that I had seen from the P-3. Dark spots, like freckles, began to appear on the monotonously white ice. Subsequent color images showed the dark spots to be areas of brilliant dark blue. Global climate change was warming the Antarctic Peninsula more rapidly than any other place on earth, and parts of the Larsen B ice surface were becoming blue ponds of meltwater. The late glaciologist Gordon de Q. Robin and Johannes Weertman, a glaciologist at Northwestern University, had suggested several decades earlier that surface water could crack open an ice shelf. Scambos realized that the ponding water might do just that, chiseling its way through the ice shelf to the ocean waters below it, making the entire shelf break up. Still, nothing happened.
Nothing, that is, until early in the Antarctic summer of 2001-2002. In November 2001 Scambos got a message he remembers vividly from Pedro Skvarca, a glaciologist based at the Argentine Antarctic Institute in Buenos Aires who was trying to conduct fieldwork on Larsen B. Water was everywhere. Deep cracks were forming. Skvarca was finding it impossible to work, impossible to move. Then, in late February 2002, the ponds began disappearing, draining - the water was indeed chiseling its way through the ice shelf. By mid-March remarkable satellite images showed that some 1,300 square miles of Larsen B, a slab bigger than the state of Rhode Island, had fragmented. Nothing remained of it except an armada of ice chunks, ranging from the size of Manhattan to the size of a microwave oven. Our emergency landing site, stable for thousands of years, was gone.
Suddenly the possibility that global warming might cause rapid change in the icy polar world was real. The following August, as if to underscore that possibility, the extent of sea ice on the other side of the globe reached a historic low, and summer melt on the surface of the Greenland ice sheet exceeded anything previously observed. The Greenland meltwaters, too, gushed into cracks and open holes in the ice known as moulins - and then, presumably, plunged to the base of the ice sheet, carrying the summer heat with them. There, instead of mixing with seawater, as it did in the breakup of Larsen B, the water probably mixed with mud, forming a slurry that was smoothing the way across the bedrock - "greasing," or lubricating, the boundary between ice and rock. But by whatever mechanism, the giant Greenland ice sheet was accelerating across its rocky moorings and toward the sea.
More recently, as a part of the investigations of the ongoing International Polar Year (IPY), my colleagues and I have been tracing the outlines of a watery "plumbing" system at the base of the great Antarctic ice sheets as well. Although much of the liquid water greasing the skids of the Antarctic sheets probably does not arrive from the surface, it has the same lubricating effect. And there, too, some of the ice sheets are responding with accelerated slippage and breakup.
Why are those processes so troubling and so vital to understand? A third of the world's population lives within about 300 feet of sea level, and most of the planet's largest cities are situated near the ocean. For every 150 cubic miles of ice that are transferred from land to the sea, the global sea level rises by about a 16th of an inch. That may not sound like a lot, but consider the volume of ice now locked up in the planet's three greatest ice sheets. If the West Antarctic ice sheet were to disappear, sea level would rise almost 19 feet; the ice in the Greenland ice sheet could add 24 feet to that; and the East Antarctic ice sheet could add yet another 170 feet to the level of the world's oceans: more than 213 feet in all. (For a comparison, the Statue of Liberty, from the top of the base to the top of the torch, is about 150 feet tall.) Liquid water plays a crucial and, until quite recently, underappreciated role in the internal movements and seaward flow of ice sheets. Determining how liquid water forms, where it occurs and how climate change can intensify its effects on the world's polar ice are paramount in predicting—and preparing for—the consequences of global warming on sea level.
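The arithmetic in this paragraph is easy to verify using the figures exactly as quoted in the text:

```python
# Sea-level contributions quoted in the article, in feet.
rise_ft = {"West Antarctic": 19, "Greenland": 24, "East Antarctic": 170}
total_ft = sum(rise_ft.values())    # 213, matching "more than 213 feet in all"

# The stated conversion: 150 cubic miles of melted land ice raises
# global sea level by about 1/16 of an inch.
def rise_inches(ice_cubic_miles):
    return (ice_cubic_miles / 150.0) * (1.0 / 16.0)

# Ten times the unit volume gives ten times the rise: 1,500 cubic miles
# of ice corresponds to about 0.625 inch of sea-level rise.
example_rise = rise_inches(1500)
```

The per-increment numbers look tiny, which is exactly the article's point: the danger lies in the sheer volume of ice the three sheets hold.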
Rumblings in the Ice
Glaciologists have long been aware that ice sheets do change; investigators simply assumed that such changes were gradual, the kind you infer from carbon 14 dating - not the kind, such as the breakup of the Larsen B ice shelf, that you can mark on an ordinary calendar. In the idealized view, an ice sheet accumulates snow - originating primarily in evaporated seawater - at its center and sheds a roughly equal mass to the ocean at its perimeter by melting and calving icebergs. In Antarctica, for instance, some 90 percent of the ice that reaches the sea is carried there by ice streams, giant conveyor belts of ice as thick as the surrounding sheet (between 3,500 and 6,500 feet) and 60 miles wide, extending more than 500 miles "upstream" from the sea. Ice streams moving through an ice sheet leave crevasses at their sides as they lurch forward. Near the seaward margins of the ice sheet, ice streams typically move between 650 and 3,500 feet a year; the surrounding sheet hardly moves at all.
But long-term ice equilibrium is an idealization; real ice sheets are not permanent fixtures on our planet. For example, ice-core studies suggest the Greenland ice sheet was smaller in the distant past than it is today, particularly during the most recent interglacial period, 120,000 years ago, when global temperatures were warm. In 2007 Eske Willerslev of the University of Copenhagen led an international team to search for evidence of ancient ecosystems, preserved in DNA from the base of the ice sheet. His group's findings revealed a Greenland that was covered with conifers as recently as 400,000 years ago and alive with invertebrates such as beetles and butterflies. In short, when global temperatures have warmed, the Greenland ice sheet has shrunk.
Today the snowfall on top of the Greenland ice cap is actually increasing, presumably because of changing climatic patterns. Yet the mass losses at its edges are big enough to tip the scales to a net decline. The elevation of the edges of the ice sheet is rapidly declining, and satellite measurements of small variations in the force of gravity also confirm that the sheet margins are losing mass. Velocity measurements indicate that the major outlet glaciers - ice streams bounded by mountains—are accelerating rapidly toward the sea, particularly in the south. The rumblings of glacial earthquakes have become increasingly frequent along the ice sheet's outlet glaciers.
Like the Greenland ice sheet, the West Antarctic ice sheet is also losing mass. And like the Greenland ice sheet, it disappeared in the geologically recent past - and, presumably, could do so again. Reed P. Scherer of Northern Illinois University discovered marine micro-fossils at the base of a borehole in the West Antarctic ice sheet that only form in open marine conditions. The age of the fossils showed that open-water life-forms might have lived there as recently as 400,000 years ago. Their presence implies that the West Antarctic ice sheet must have disappeared during that time.
Only the ice sheet in East Antarctica has persisted through the earth's temperature fluctuations of the past 30 million years. That makes it by far the oldest and most stable of the ice sheets. It is also the largest. In many places its ice is more than two miles thick, and its volume is roughly 10 times that of the ice sheet in Greenland. It first formed as Antarctica drew apart from South America some 35 million years ago and global levels of carbon dioxide declined. The East Antarctic ice sheet appears to be growing slightly in the interior, but observers have detected some localized losses in ice mass along the margins.
The Birth of the Mighty Amazon
Insight into how the world's largest river formed is helping scientists explain the extraordinary abundance of plant and animal life in the Amazon rain forest.
Looking down at the Amazon River from above, an observer cannot help noticing that water dominates the landscape even beyond the sinewy main channel. The river, which extends from the Pacific highlands of Peru some 6,500 kilometers to Brazil's Atlantic coast, swells out of its banks and inundates vast swaths of forest during the rainy seasons, and myriad lakes sprawl across its floodplains year-round.
All told, the river nurtures 2.5 million square kilometers of the most diverse rain forest on earth. Until recently, however, researchers had no idea how long this intimate relation between river and forest has actually existed. The inaccessibility of this remote region, now called Amazonia, meant that long-held theories about the early days of the river and surrounding forest were speculative at best.
In the past 15 years new opportunities to study the region's rock and fossil records have finally enabled investigators to piece together a more complete picture of Amazonian history. The findings suggest that the birth of the river was a complicated process lasting millions of years and that the river's development greatly influenced the evolution of native plants and animals. Indeed, many researchers now contend that the incipient river nourished a multitude of interconnected lakes in the continent's midsection before forging a direct connection to the Atlantic Ocean; this dynamic wetland produced ideal conditions for both aquatic and terrestrial creatures to flourish much earlier than previously thought. The new interpretations also explain how creatures that usually live only in the ocean - among them dolphins - now thrive in the inland rivers and lakes of Amazonia.
Understanding how and when the Amazon River came to be is essential for uncovering the details of how it shaped the evolution of life in Amazonia. Before the early 1990s geologists knew only that powerful movements of the earth's crust forged South America's Andes and towering mountain peaks elsewhere (including the Himalayas and the Alps) primarily between about 23 million and five million years ago, an epoch of the earth's history known as the Miocene. Those dramatic events triggered the birth of new rivers and altered the course of existing ones in Europe and Asia, and the experts assumed South America was no exception. But the specific nature and timing of such changes were unknown.
When I began exploring this mystery in 1988, I suspected that the best records of the ancient Amazonian environment were the massive deposits of mud, sand and plant debris stored in the trough that the mighty river now follows to the Atlantic. But getting to those sediments - long since solidified into mudstone, sandstone and other rocks - posed considerable challenges. A jungle big enough to straddle nine countries with differing laws does not yield its secrets easily. And the rocks forming the trough, which poke aboveground only rarely, usually do so along nearly inaccessible tributaries and tend to be covered by dense vegetation.
Along the hundreds of kilometers of waterways my field assistant and I surveyed in Colombia, Peru and Brazil, we encountered only a few dozen sizable outcrops. And often we had to wield a machete to cut away the foliage - once surprising a giant green anaconda and another time exposing the footprints of a jaguar. Even then, we could reach only the uppermost layers of the thick rock formation, which extends almost a kilometer below the surface in some locations.
Once the initial fieldwork was complete, my first conclusion was that the Amazon River did not exist before about 16 million years ago, the start of what geologists call the Middle Miocene. Most of the rocks we found that dated from earlier times consisted of the reddish clays and white quartz sand that clearly had formed from the erosion of granites and other light-colored rocks in the continent's interior. This composition implied that the region's earlier waterways originated in the heart of Amazonia. I inferred - and other researchers later confirmed - that during the Early Miocene, rivers flowed northwest from low hills in the continental interior, and some eventually emptied into the Caribbean Sea.
The Amazonian landscape altered significantly soon thereafter, when a violent episode of tectonic activity began pushing up the northeastern Andes. In the rock record, the red and white sediments disappear by about 16 million years ago. In their place we found intriguing alternations of turquoise-blue, gray and green clays, brown sandstone and fossilized plant matter called lignite. It was obvious that the dark particles of mud and sand were from a source other than light-colored granites. And distinctive layered patterns in the fossilized sediments indicated that the water that deposited them was no longer flowing north; instead it flowed eastward. My guess was that the rising mountains to the west shifted the drainage pattern, sending water east toward the Atlantic.
In support of this idea, later analysis of the sediment at Wageningen University in the Netherlands proved that many of the brown sand grains were indeed fragments of the dark-colored schists and other rocks that began washing away as the newborn Andes rose up. What is more, some of the pollen grains and spores I found in the clays and lignites came from conifers and tree ferns that could have grown only at the high altitudes of a mountain range. This pollen contrasted with that in the older Miocene sediments, which came from plants known to grow only in the low-lying continental interior. Drill cores of Miocene rocks in Brazil, which provided the only complete sequence of the change from reddish clays to the blue and brown sediments, further corroborated my conclusions.
Finally, scientists had undeniable evidence for when the budding Amazon River was born. But it soon became clear that the river did not establish its full grandeur until much later. In 1997 David M. Dobson, now at Guilford College in Greensboro, N.C., and his colleagues discovered that the Andean sand grains I found in Amazonia first began accumulating along the Brazilian coast only about 10 million years ago.
That timing means the river took at least six million years to develop into the fully connected, transcontinental drainage system of today. Research into the geologic changes that occurred in that transition period has now illuminated the origins of the region's enigmatic, present-day fauna.