CRN Science & Technology Essays - 2007
"Four
stages of acceptance: 1) this is worthless nonsense; 2) this is an interesting,
but perverse, point of view; 3) this is true, but quite unimportant; 4) I always
said so."
— Geneticist J.B.S. Haldane,
on the stages scientific theory goes through
Each issue of the C-R-Newsletter features a brief article explaining various
aspects of advanced nanotechnology. They are gathered in these archives for
your review. If you have comments or questions, please email Jessica Margolin,
CRN's Director of Research Communities.
1. More on Molecular Manufacturing Mechanics (February 6, 2007)
2. Practical Skepticism (February 28, 2007)
3. Mechanical Molecular Manipulations (March 31, 2007)
4. Nanomachines and Nanorobots (April 30, 2007)
5. Slip-Sliding Away (May 31, 2007)
6. Figuring Cost for Products of Molecular Manufacturing (June 29, 2007)
7. Civilization Without Metals (August 10, 2007)
8. Limitations of Early Nanofactory Products (August 31, 2007)
9. Levels of Nanotechnology Development (September 29, 2007)
10. Exploring the Productive Nanosystems Roadmap (October 31, 2007)
11. Imagining the Future (November 30, 2007)
12. Restating CRN’s Purpose (December 28, 2007)
More on Molecular Manufacturing Mechanics
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
In the last science essay, I promised
to provide additional detail on several topics that were beyond the scope of
that essay. First: How can a mechanosynthetic
reaction have nearly 100% yield, say 99.9999999999999%, when most reactions have
less than 99% yield? Second: Why will a well-designed molecular machine system
not suffer wear? Third: How can contaminant molecules be excluded from a
manufacturing system?
Mechanically guided reactions are very
different, in several important ways, from familiar chemical reactions. Pressure
can be two orders of magnitude higher; concentration, seven orders of magnitude
higher. The position and orientation of the reactant molecules can be
controlled, as well as the direction of forces. Molecular fragments that would
be far too reactive to survive long in any other form of chemistry could be
mechanically held apart from anything that would react with them, until the
desired reaction was lined up.
Nanosystems Table 8.1 and Section 8.3 give overviews of the
difference between mechanosynthesis and solution phase synthesis.
One of the most important differences is that reactions can be guided to the
correct site among hundreds of competing sites. An enzyme might have trouble
selecting between the atom five in from the edge, and the one six in from the
edge, on a nearly homogeneous surface. For a mechanical system, selecting an
atom is easy: just tell the design software that you want to move your reactive
fragment adjacent to the atom at 2.5 nanometers rather than 2.2 or 2.8.
Reactions can be made much more rapid and reliable than in solution-phase
chemistry. The reaction rate can be increased dramatically using pressure,
concentration, and orientation. Likewise, the equilibrium can be shifted quite
far toward the product by means of large energy differences between reactants
and product. Differences that would be quite large -- too large for convenience
-- in solution chemistry could easily be accommodated in mechanical chemistry.
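To put the yield question in perspective, here is a back-of-envelope sketch. The
two-state equilibrium model and the energy differences used below are my own
illustrative assumptions, not figures from Nanosystems: at equilibrium, the
fraction of reaction sites stuck in the wrong state falls off roughly as
exp(-dE/kT).

```python
import math

kT_300K = 0.02585  # thermal energy at room temperature, in eV

def error_fraction(delta_e_ev):
    """Approximate equilibrium error fraction for an energy difference dE (eV)."""
    return math.exp(-delta_e_ev / kT_300K)

for dE in (0.25, 0.5, 1.0, 1.5):  # illustrative energy differences, product vs. error state
    print(f"dE = {dE:4.2f} eV  ->  error fraction ~ {error_fraction(dE):.0e}")

# dE = 1.0 eV already gives an error fraction near 2e-17, i.e. better than the
# 99.9999999999999% yield figure mentioned above.
```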
In a macro-scale mechanical system, wear happens when tiny pieces of a component
are broken away or displaced. Small concentrations of force or imperfections in
the materials cause local failure at a scale too small to be considered
breakage. But even microscopic flecks of material contain many billions of
atoms. At the nano-scale, the smallest pieces -- the atoms -- are a large
fraction of the size of the components. A single atom breaking away or being
rearranged would constitute breakage, not wear. This also means that fatigue
cannot occur, since fatigue is also a physical rearrangement of the structure of
an object, and thus would constitute breakage.
We cannot simply dismiss the problem of wear (or fatigue) by giving it another
name; if mechanical breakage will happen randomly as a result of normal use,
then nanomachines will be less reliable than they need to be. Thus, it is
important to consider the mechanisms of random breakage. These include
high-energy radiation, mechanical force, high temperature, attack from
chemicals, and inherently weak bonds.
High-energy radiation, for these purposes, includes any photon or particle with
enough energy to disrupt a bond. Lower-frequency photons, ultraviolet and
below, can be shielded with opaque material. Higher-energy radiation cannot be
fully shielded, since it includes muons from cosmic rays; for many
nanomachines, even shielding from ordinary background radiation will be
impractical. So radiation damage is inescapable, but is not a result of
mechanical motion -- it is more analogous to rusting than to wear. And it
happens slowly: a cubic micron of nanomachinery only has a few percent chance of
being hit per year.
The mechanical force applied to moving parts can be controlled by the design of
the machine. Although an excess of mechanical force can of course break bonds,
most bonds are far stronger than they need to be to maintain their integrity,
and modest forces will not accelerate bond breakage enough to worry about.
High temperature can supply the energy needed to break and rearrange bonds. At
the nanoscale, thermal energy is not constant, but fluctuates randomly and
rapidly. This means that even at low temperatures, it will occasionally happen
that sufficient energy will be concentrated to break a bond. However, this will
be rare. Even taking modest mechanical forces into account, a wide variety of
molecular structures can be built that will be stable for decades. (See
Nanosystems Chapter 6.)
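A back-of-envelope estimate shows why decades of stability are plausible; the
attempt frequency, bond count, and lifetime below are illustrative assumptions
rather than values taken from Nanosystems.

```python
import math

kT_300K = 0.02585   # eV at room temperature
f0 = 1e13           # assumed thermal attempt frequency, per second

def required_barrier_ev(n_bonds, lifetime_years):
    """Barrier needed so that f0 * n_bonds * lifetime * exp(-Ea/kT) stays below ~1."""
    lifetime_s = lifetime_years * 3600 * 24 * 365
    return kT_300K * math.log(f0 * n_bonds * lifetime_s)

# A micron-scale machine with roughly a billion bonds, meant to last 30 years:
print(f"required barrier ~ {required_barrier_ev(1e9, 30):.1f} eV per bond")
# ~1.8 eV, comfortably below typical covalent bond strengths of 3-4 eV.
```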
Various chemicals can corrode certain materials. Although pure diamond is rather
inert, nanomachines may be made of other, more organic molecules. However,
harmful chemicals will be excluded from the working volume of nanosystems. The
"grit" effect of molecules getting physically caught between moving interfaces
need not be a concern -- that is, if random molecules can actually be excluded.
This brings us to the third topic.
The ability to build flawless diamondoid nanosystems implies the ability to
build atomically flat surfaces. Diamond seals should be able to exclude even
helium and hydrogen with very high reliability. (See
Nanosystems Section 11.4.2.) This provides a way to make sliding
interfaces with an uncontrolled environment on one side and a completely
contaminant-free environment on the other. (Of course this is not the only way,
although it may be the simplest to design.)
Extracting product from a hermetically sealed manufacturing system can be done
in at least three ways. The first is to build a quantity of product inside a
sealed system, then break the seal, destroying the manufacturing system. If the
system has an expandable compartment, perhaps using a bellows or unfolding
mechanism, then quite a lot of product can be built before the manufacturing
system must be destroyed; in particular, manufacturing systems several times as
big as the original can be built. The second way to extract product is to
incorporate a wall into the product that slides through a closely fitting port
in the manufacturing system. Part of the product can be extruded while the
remainder of the product and wall are being constructed; in this way, a product
bigger than the manufacturing system in every dimension can be constructed. The
third way to extrude product, a variant of the second, is to build a bag with a
neck that fits into the port. The bag can enclose any size of product, and a
second bag can be put into place before the first is removed, freeing its
product. With this method, the shape of the product need not be constrained.
Any manufacturing system, as well as several other classes of system, will need
to take in molecules from the environment. This implies that the molecules will
have to be carefully selected to exclude any unwanted types.
Nanosystems Section 13.2 discusses architectures for purifying impure
feedstocks, suggesting that a staged sorting system using only five stages
should be able to decrease the fraction of unwanted molecules by a factor of
10^15 or more.
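The staged-sorting arithmetic is simple to sketch. The per-stage rejection
factor below is an assumption chosen to be consistent with the five-stage,
10^15 figure; it is not a number taken from Nanosystems.

```python
# If each sorting stage passes only a fraction p of contaminant molecules while
# passing feedstock freely, then n independent stages pass only p**n of them.
per_stage_pass = 1e-3   # assumed fraction of contaminants slipping through one stage
stages = 5

overall_pass = per_stage_pass ** stages
print(f"contamination reduced by a factor of {1 / overall_pass:.0e}")  # 1e+15
```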
Erratum: In the
previous essay, I stated that each instruction
executed in a modern computer required tens of millions of transistor
operations. I'm told by Mike Frank that in a modern CPU, most of the transistors
aren't used on any given cycle -- it may be only 10^5 rather than 10^7.
On the other hand, I don't know how many transistor operations are used in the
graphics chip of a modern gaming PC; I suspect it may be substantially more than
in the CPU. In any case, downgrading that number doesn't change the argument I
was making, which is that computers do quite a lot more than 10^15 transistor
operations between errors.
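For a rough sense of scale, and with the clock rate and error-free interval
below being purely illustrative assumptions, even the downgraded figure leaves
an enormous margin:

```python
clock_hz = 3e9             # assumed modern CPU clock rate
ops_per_cycle = 1e5        # transistor operations per cycle, per the erratum
error_free_seconds = 3600  # suppose the machine runs error-free for just one hour

total_ops = clock_hz * ops_per_cycle * error_free_seconds
print(f"~{total_ops:.0e} transistor operations without error")  # ~1e+18, far above 10^15
```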
Practical Skepticism
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Engineers
occasionally daydream about being able to take some favorite piece of
technology, or the knowledge to build it, back in time with them. Some of them
even write fiction about it. For this month's essay, I'll daydream about taking
a bottomless box of modern computer chips back in time five decades.
{Although it may seem off-topic, everything in this essay relates to
molecular manufacturing.}
In 1957, computers were just starting
to be built out of transistors. They had some memory, a central processor, and
various other circuits for getting data in and out -- much like today's
computers, but with many orders of magnitude less capability. Computers were
also extremely expensive. Only six years earlier, Prof. Douglas Hartree, a
computer expert,
had declared that three computers would suffice for England's computing
needs, and no one else would need one or even be able to afford it. Hartree
added that computers were so difficult to use that only professional
mathematicians should be trusted with them.
Although I know a fair amount of
high-level information about computer architecture, it would be difficult for me
to design a useful computer by myself. If I went back to 1957, I'd be asking
engineers from that time to do a lot of the design work. Also, whatever
materials I took back would have to interface with then-current systems like
printers and tape drives. So, rather than trying to take back special-purpose
chips, I would choose the most flexible and general-purpose chip I know of.
Modern CPUs are actually quite specialized, requiring extremely high-speed
interfaces to intricate helper chips, which themselves have complicated
interfaces to memory and peripherals. It would be difficult if not impossible to
connect such chips to 1957-era hardware. Instead, I would take back a Field
Programmable Gate Array (FPGA): a chip containing lots of small reconfigurable
circuits called Logic Elements (LEs). FPGAs are designed to be as flexible as
possible; they don't have to be run at high speed, their interfaces are highly
configurable, and their internal circuits can simulate almost anything --
including a medium-strength CPU.
A single FPGA can implement a computer
that is reasonably powerful even by modern standards. By 1957 standards, it
would be near-miraculous. Not just a CPU, but an entire computer, including vast
quantities of "core" memory (hundreds of thousands of bytes, vs. tens of bytes
in 1957-era computers), could be put into a single chip.
{Similarly, molecular manufacturing
will use a few basic but general-purpose capabilities -- building programmable
functional shapes out of molecules -- to implement a wide range of nanoscale
functions. Each physical molecular feature might correspond to an FPGA's logic
element.}
A major part of
time-traveling-technology daydreams is the fun the engineer gets to have with
reinventing technologies that he knows can be made to work somehow. (It is, of
course, much easier to invent things when you know the goal can be achieved --
not just in daydreams, but in real life.) So I won't take back any programs for
my FPGAs. I'll hand them over to the engineers of the period, and try to get
myself included in their design teams. I would advise them not to get too fancy
-- just implement the circuits and architectures they already knew, and they'd
have a lightning-fast and stunningly inexpensive computer. After that, they
could figure out how to improve the design.
{Today, designs for machines built with
molecular manufacturing have not yet been developed.}
But wait -- would they accept the gift?
Or would they be skeptical enough to reject it, especially since they had never
seen it working?
Computer engineers in 1957 would be
accustomed to using analog components like resistors and capacitors. An FPGA
doesn't contain such components. An engineer might well argue that the FPGA
approach was too limited and inefficient, since it might take many LEs to
simulate a resistor even approximately. It might not even work at all! Of
course, we know today that it works just fine to build a CPU out of thousands of
identical digital elements -- and an FPGA has more than enough elements to
compensate for the lack of flexibility -- but an engineer accustomed to working
with diverse components might be less sanguine.
{One criticism of the molecular
manufacturing approach is that it does not make use of most of the techniques
and phenomena available through nanotechnology. Although this is true, it is
balanced by the great flexibility that comes from being able to build with
essentially zero cost per feature and billions of features per cubic micron. It
is worth noting that even analog functions these days are usually done
digitally, simulated with transistors, while analog computers have long been
abandoned.}
A modern FPGA can make computations in
a few billionths of a second. This is faster than the time it takes light to go
from one side of an old-style computer room to the other. A 1957 computer
engineer, shown the specifications for the FPGA chip and imagining it
implemented in room-sized hardware, might well assume that the speed of light
prevented the chip from working. Even those who managed to understand the
system's theoretical feasibility might have trouble understanding how to use
such high performance, or might convince themselves that the performance number
couldn't be practically useful.
{Molecular manufacturing is predicted
to offer extremely high performance. Nanotechnologists sometimes refuse to
believe that this is possible or useful. They point to supposed limitations in
physical law; they point out that even biology, after billions of years of
evolution, has not achieved these levels of performance. They usually don't stop
to understand the proposal in enough detail to criticize it meaningfully.}
Any computer chip has metal contact
points to connect to the circuit that it's part of. A modern FPGA can have
hundreds or even thousands of tiny wires or pads -- too small to solder by hand.
The hardware to connect to these wires did not exist in 1957; it would have to
have been invented. Furthermore, the voltage supply has to be precise within
1/10 of a volt, and the chip may require a very fast clock signal -- fast by
1957 standards, at least -- about the speed of an original IBM PC (from 1981).
Finally, an FPGA must be programmed, with thousands or millions of bytes loaded
into it each time it is turned on. Satisfying all these practical requirements
would require the invention of new hardware, before the chip could be made to
run and demonstrate its capabilities.
{Molecular manufacturing also will
require the invention of new hardware before it can start to show its stuff.}
In an FPGA, all the circuits are hidden
within one package: "No user-serviceable parts inside." That might make an
engineer from 1957 quite nervous. How can you repair it if it breaks? And
speaking of reliability, a modern chip can be destroyed by an electrostatic
shock too small to feel. Vacuum tubes are not static-sensitive. The extreme
sensitivity of the chip would increase its aura of unreliability.
{Molecular manufacturing designs
probably also would be non-repairable, at least at first. Thanks to molecular
precision, each nanodevice would be almost as reliable as modern transistors.
But today's nanotechnologists are not accustomed to working with that level of
reliability, and many of them don't believe it's possible.}
Even assuming the FPGA could be
interfaced with, and worked as advertised, it would be very difficult to design
circuits for. How can you debug it when you can't see what you're doing (the
1957 engineer might ask), when you can't put an oscilloscope on any of the
internal components? How can you implement all the different functions a
computer requires in a single device? How could you even get started on the
design problem? The FPGA has millions of transistors! Surely, programming its
circuits would be far more complex than anything that has ever been designed.
{Molecular manufacturing faces similar
concerns. But even simple repetitive FPGA designs -- for example, just using it
for core memory -- would be well worth doing in 1957.}
Rewiring a 1957-era computer required
hours or days of work with a soldering iron. An FPGA can be reprogrammed in
seconds. An interesting question to daydream about is whether engineers in 1957
could have used the rapid reprogrammability of FPGAs to speed their design
cycle. It would have been difficult but not impossible to rig up a system that
would allow changing the program quickly. It would certainly have been an
unfamiliar way of working, and might have taken a while to catch on.
But the bigger question is whether
engineers in 1957 would have made the million-dollar investment to gather the
hardware and skills in order to make use of FPGAs. Would they have said, "It
sounds good in theory, but we're doing well enough with our present technology?"
If I went back to 1957 with 2007-era technology, how many years or decades would
I have had to wait for sufficient investment?
What strategies would I have to use to
get people of that era familiar with these ideas? I would probably have to
publish theoretical papers on the benefits of designing with massive numbers of
transistors. (That's assuming I could find a journal to publish in. One hundred
million transistors in a single computer? Ridiculous!) I might have to hold my
own conferences, inviting the most forward-thinking scientists. I might have to
point out how the hardware of that period could be implemented more easily and
cheaply in FPGAs. (And in so doing, I might alienate a lot of the scientists.)
In the end, I might go to the media, not to do science but to put ideas in the
heads of students... and then I would have to wait for the students to graduate.
In short, I probably would have to do what the proponents of molecular
manufacturing were doing between 1981 and 2001. And it might have taken just
about that long before anyone started paying serious attention to the
possibilities.
All these reasons for skepticism make
sense to the skeptics, and the opinions of skeptics are important in determining
the schedule by which new ideas are incorporated into the grand system of
technology. It may be the case that molecular manufacturing proposals in the
mid-1980's simply could not have hoped to attract serious investment, regardless
of how carefully the technical case was presented. An extension of this argument
would suggest that molecular manufacturing will only be developed once it is no
longer revolutionary. But even if that is the case, technologies that are
evolutionary within their field can have revolutionary impacts in other areas.
The IBM PC was only an evolutionary
step forward from earlier hobby computers, but it revolutionized the
relationship between office workers and computers. Without a forward-looking
development program, molecular manufacturing may not be developed until other
nanotechnologies are capable of building engineered molecular machines -- say,
around 2020 or perhaps even 2025. But even at that late date, the simplicity,
flexibility, and affordability of molecular manufacturing could be expected to
open up revolutionary opportunities in fields from medicine to aerospace. And we
expect that, as the possibilities inherent in molecular manufacturing become
widely accepted, a targeted development program probably will be started within
the next few years, leading to development of basic (but revolutionary)
molecular manufacturing not long after.
Mechanical Molecular Manipulations
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Molecules used to be mysterious things that behaved in
weird quantum ways, and it was considered naive to think of them as machines, as
molecular manufacturing researchers like to do. But with more sophisticated
tools, that one-sided non-mechanistic view seems to be changing. Molecules are
now being studied as mechanical and mechanistic systems. Mechanical force is
being used to cause chemical reactions. Biomolecules are being studied as
machines. Molecular motors are being designed as though they were machines.
That's what we'll cover in this essay -- and as a bonus, I'll talk about
single-molecule and single-atom covalent deposition via scanning probe.
Mechanically Driven Chemistry
"By harnessing mechanical energy, we can go into molecules and pull on specific
bonds to drive desired reactions." This quote does not come from CRN, but from a
present-day researcher who has demonstrated a molecular system that does exactly
that. The system does not use a scanning probe -- in fact, it uses an innovative
fluid-based technique to deliver the force. But the study of molecule-as-machine
and its application to mechanical chemistry may herald a conceptual leap forward
that will make mechanosynthesis more thinkable.
Jeffrey Moore is the William H. and Janet Lycan Professor of Chemistry at the
University of Illinois at Urbana-Champaign, and also a researcher at the
Frederick Seitz Materials Research Laboratory on campus and at the school's Beckman
Institute for Advanced Science and Technology.
A story in Eurekalert describes what he has done. He built a long stringy molecule, put a "mechanophore"
in the middle, and tugged on the molecule using the high speeds and forces
produced by cavitation. The mechanophore is a mechanically active molecule that
"chooses" one of two reactions depending on whether it is stretched. The
research is reported in the March 22 issue of
Nature.
The work demonstrates the new potential of a novel way of directing chemical
reactions, but true mechanosynthesis will be even more flexible. The story
notes, "The directionally specific nature of mechanical force makes this
approach to reaction control fundamentally different from the usual chemical and
physical constraints." In other words, by pulling on the mechanophore from a
certain direction, you get more control over the reaction. But a mechanophore is
self-contained and, at least in the present design, can apply only a single
force along a single direction. Mechanosynthesis with a scanning probe (or
equivalent system)
will be able to apply a sequence of forces and positions.
It is significant that, despite the embryonic nature of this demonstration, the
potential flexibility of mechanically driven chemistry has been recognized. One
of the old objections to molecular manufacturing is that controlling the
reaction trajectory mechanically would not allow enough degrees of freedom to
control the reaction product. This research turns that idea on its head -- at
least in theory. (The objection never worried me -- the goal of mechanical
control is not to control every tiny parameter of the reaction, but simply to
constrain and bias the "space" of possible reactions so that only the desired
product could result.)
While doing an online search about this story, I stumbled upon the field of
inquiry that might have inspired it. It seems that polymer breakage in
cavitating fluids has been studied for several years; according to
this abstract, the polymers tend to break in the middle, and the force applied to various
polymer types can be calculated. If this was in fact the inspiration for this
experiment, then this research -- though highly relevant to molecular
manufacturing -- may have arisen independently of both molecular manufacturing
theory and scanning probe chemistry demonstrations.
Mechanical Biopolymers
"In molecular biology, biological phenomena used to be studied mainly from
functional aspects, but are now studied from mechanistic aspects to solve the
mechanisms by using the static structures of molecular machines." This is a
quote from a Nanonet
interview with Nobuo Shimamoto, a professor at the
Structural Biology Center, National Institute of Genetics, Research Organization
of Information and Systems. Prof. Shimamoto studies biomolecules using
single-molecule measurements and other emerging technologies. He seems to be
saying that back in the old days, when molecules could only be studied in
aggregate, function was the focus because it could be determined from bulk
effects; however, now that we can look at motions of single molecules, we can
start to focus on their mechanical behavior.
Prof. Shimamoto studied how RNA polymerase makes RNA strands from DNA -- and
also how it sometimes doesn't make a full strand, forming instead a "moribund
complex" that appears to be involved in regulating the amount of RNA produced.
By fastening a single molecule to a sphere and handling the sphere with optical
tweezers, the molecule's motion could be observed. RNA polymerase has been
observed working, as well as sliding along a strand of DNA and rotating around
it.
This is not to say that biology is always simple. One point made in the article
is that a biological reaction is not a linear chain of essential steps, but
rather a whole web of possibilities, some of which will lead to the ultimate
outcome and others that will be involved in regulating that outcome. Studying
the mechanics of molecules does not replace studying their function; however,
there has been a lot of focus on function to the exclusion of structure, and a
more balanced picture will provide new insights and accuracy.
I want to mention again the tension between mechanical and quantum models, although
the article quoted above does not go into it. Mechanical studies assume that
molecular components have a position and at least some structure that can be
viewed as transmitting force. In theory, position is uncertain for several
reasons, and calculating force is an inadequate analytical tool. In practice,
this will be true of some systems, but should not be taken as universal. The
classical mechanical approach does not contradict the quantum approach, any more
than Newton's laws of motion contradict Einstein's. Newton's laws are an
approximation that is useful for a wide variety of applications. Likewise,
position, force, and structure will be perfectly adequate and appropriate tools
with which to approach many molecular systems.
Mechanical Molecular Motors
"Looking at supramolecular chemistry from the viewpoint of functions with
references to devices of the macroscopic world is indeed a very interesting
exercise which introduces novel concepts into Chemistry as a scientific
discipline." In other words, even if you're designing with molecules, pretending
that you're designing with machine components can lead to some rather creative
experiments. This is the conclusion of
Alberto Credi and Belén Ferrer [PDF], who
have designed several molecular motor systems.
Credi and Ferrer define a molecular machine as "an assembly of a discrete number
of molecular components (that is, a supramolecular structure) designed to
perform mechanical-like movements as a consequence of appropriate external
stimuli." The molecules they are using must be fairly floppy, since they consist
of chains of single bonds. But they have found it useful to seek inspiration in
rigid macroscopic machines such as pistons and cylinders. Continuing the focus
on solid and mechanistic systems, the experimenters demonstrated that their
piston/cylinder system will work not only when floating in solution, but also
when caught in a gel or attached to a surface.
Another paper [PDF] reporting on this work makes several very interesting points.
The mechanical movements of molecular machines are usually binary -- that is,
they are in one of two distinct states and not drifting in a continuous range. I
have frequently emphasized the importance of binary (or more generally, digital)
operations for predictability and reliability. The paper makes explicit the
difference between a motor and a machine: a motor merely performs work, while a
machine accomplishes a function.
The machines described in the paper consist of multiple molecules joined
together into machine systems. The introduction mentions Feynman's "atom by
atom" approach only to disagree with it: it seems that although some physicists
liked the idea, chemists "know" that individual atoms are very reactive and
difficult to manipulate, while molecules can be combined easily into systems.
The authors note that "it is difficult to imagine that the atoms can be taken
from a starting material and transferred to another material." However, the
final section of this essay describes a system which does exactly that.
Transferring Molecules and Atoms
"In view of the increasing demand for nano-engineering operations in 'bottom-up'
nanotechnology, this method provides a tool that operates at the ultimate limits
of fabrication of organic surfaces, the single molecule." This quote is from
a
paper in Nature Nanotechnology,
describing how single molecules can be deposited onto a surface by transferring
them from a scanning probe microscope tip. This sounds exactly like what
molecular manufacturing needs, but it's not quite time to celebrate yet. There
are a few things yet to be achieved before we can start producing
diamondoid, but this work represents a very good start.
In the canonical vision of molecular manufacturing, a small molecular fragment
bonded to a "tool tip" (like a scanning probe microscope tip, only more precise)
would be pressed against a chemically active surface; its bonds would shift from
the tip to the surface; the tip would be retracted without the fragment; and the
transfer of atoms would fractionally extend the workpiece in a selected
location.
In this work, a long polymer is attached to a scanning probe tip at one end,
with the other end flopping free. Thus, the positional accuracy suffers.
Multiple polymers are attached to the tip, and sometimes (though rarely) two
polymers will transfer at once. The bond to the surface is not made under
mechanical force, but simply because it is a type of reaction that happens
spontaneously; this limits the scope of attachment chemistries and the range of
final products to some extent. The bond between the polymer and the tip is not
broken as part of the attachment to the surface; in other words, the attachment
and detachment do not take place in a single reaction complex. Instead, the
attachment happens first; then, as the tip is withdrawn, the molecule is
physically pulled apart and separates at its weakest link.
Despite these caveats, the process of depositing single polymer molecules onto a
surface is quite significant. First, it "looks and feels" like mechanosynthesis,
which will make it easier for other researchers to think in such directions.
Second, there is no actual requirement for the molecular transfer to take place
in a single reaction complex; if it happens in two steps, the end result is
still a mechanically guided chemical synthesis of a covalently bonded structure.
The lack of placement precision is somewhat troubling if the goal is to produce
atomically precise structures; however, there may be several ways around this.
First, a shorter and less floppy polymer might work. I suspect that large
polymers were used here to make them easier to image after the transfer. Second,
the molecular receptors on the surface could be spaced apart by any of a number
of methods. The tip with attached molecule(s) could be characterized by scanning
a known surface feature, to ensure that there was a molecule in a suitable
position and none in competing positions; this could allow reliable transfer of
a single molecule.
The imprecision issues raised by the use of floppy
polymers would not apply to the transfer of single atoms. But is such a thing
possible? In fact, it is. In 2003, the
Oyabu group in Japan was able to
transfer a single silicon atom from a covalent silicon crystal to a silicon tip,
then put it back. More recently, citing Oyabu's work,
another group has
worked out "proposed new atomistic mechanism and protocols for the
controlled manipulation of single atoms and vacancies on insulating
surfaces." Apparently, this sort of manipulation is now well enough understood
to be usefully simulated, and it seems that the surface can be scanned in a way
that detects single-atom "events" without disrupting the surface.
Molecular manufacturing is often criticized as viewing atoms as simple spheres
to be handled and joined. This is a straw man, since atomic transfer between
molecules is well known in chemistry, and no one is seriously proposing
mechanosynthetic operations on isolated or unbonded atoms. Nevertheless, the
work cited in the previous paragraph indicates that even a "billiard ball" model
of atoms may occasionally be relevant.
Summary
It is sometimes useful to think of molecules -- even biomolecules -- as simple
chunks of material with structure and position. Depending on the molecule, this
view can be accurate enough for invention and even study. The results described
here imply that a molecular manufacturing view of molecules -- as machines that
perform functions thanks to their structure -- is not flawed or inadequate, but
may be beneficial. It may even lead to new chemical capabilities, as
demonstrated by the mechanophore system. The relative unpopularity of the
mechanical view of molecules may be a result of the historical difficulty of
observing and manipulating individual molecules and atoms. As tools improve, the
mechanical interpretation may find increasing acceptance and utility. Although
it cannot supplant the more accurate quantum model, the mechanical model may
turn out to be quite suitable for certain molecular machine systems.
Nanomachines and Nanorobots
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Here's an example of the kind of nanoscale molecular system being
envisioned, and perhaps even developed, by today’s nanomedical researchers:
A molecular cage holds a potent and toxic anti-tumor drug. The cage has a lid
that can be opened by a different part of the molecule binding to a marker that
is on the surface of tumor cells. So the poison stays caged until the molecular
machine bumps into a tumor cell and sticks there; then it is released and kills
the cell.
This is clearly a machine; it can be understood as operating by causal
mechanical principles. Part A binds to the cell, which pulls on part B, and
transfers force or charge to part C, which then changes shape to let part D out
of the physical cage. (Of course, mechanical analysis will not reveal every
detail of how it works, but it is a good place to start in understanding or
conceptualizing the molecule's function.)
Researchers are getting to the point where they could design this system — they
could plan it, engineer it, design a trial version, test it, modify the design,
and before too long, have a machine that works the way they intend. It is
tempting to view this as the ultimate goal of nanotechnology: to be able to
design molecular systems to perform intricate tasks like anti-cancer drug
delivery. But the system described above is limited in a way that future systems
will not be. It is a machine, but it is not a robot.
While researching this essay, I tried to find a definition of "robot" that I
could extend to nanorobotics. I was unable to find a consistent definition of
robot. Several web sites tried to be rigorous, but the one I found most
insightful was
Wikipedia, which admits that there is no rigorous definition. So I won't try
to give a definition, but rather describe a continuum. The more robotic a
machine is, the more new uses you can invent for it. Likewise, the more robotic
it is, the less the designer knows about exactly what it will be used for.
A machine in which every component is engineered for a particular function is
not very robotic. In the molecular machine described above, each component would
have been carefully designed to work exactly as intended, in concert with the
other carefully-designed pieces. In order to change the function of the machine,
at least one component would have to be redesigned. And with the current state
of the art, the redesign would not simply be a matter of pulling another part
out of a library — it would require inventing something new. The machine's
function may be quite elegant, but the design process is laborious. Each new
machine will cost a lot, and new functions and applications will be developed
only slowly.
The next stage is to have a library of interchangeable components. If a bigger
cage is needed, just replace the cage; if a different cell sensor is needed,
swap that out. This is a level of engineered flexibility that does not exist yet
on the molecular scale. Design will be easier
as this level of capability is developed. But it is still not very robotic, just
as building a machine out of standard gears rather than special-order gears does
not make it more robotic. There are levels beyond this. Also, this flexibility
comes at the cost of being limited to standard parts; that cost will eventually
be mitigated, but not until very robotic (fully programmable) machines are
developed.
A stage beyond interchangeable components is configurable components. Rather
than having to build a different physical machine for each application, it may
be possible to build one machine and then select one of several functions with
some relatively simple manipulations, after manufacture and before use. This
requires designing each function into the machine. It may be worth doing in
order to save on manufacturing and logistical costs: fewer different products to
deal with. There is another reason that gains importance with more complex
products: if several choices can be made at several different stages, then, for
example, putting nine functions (three functions at each of three levels) into
the product may allow 27 (3x3x3) configuration options.
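The combinatorics are easy to see in a toy sketch; the stage names and function
labels below are invented purely for illustration.

```python
from itertools import product

# Three independent configuration stages, each offering three alternative functions.
stage_options = {
    "sensor":   ["A", "B", "C"],
    "actuator": ["A", "B", "C"],
    "release":  ["A", "B", "C"],
}

configs = list(product(*stage_options.values()))
print(len(configs))  # 27 distinct configurations from only 9 designed functions
```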
The first configurable products will be made with each possible configuration
implemented directly in machinery. More complex configuration options will be
implemented with onboard computation and control. The ultimate extent of this,
of course, is to install a general-purpose computer for software control of the
product. Once a computer is onboard, functions that used to be done in hardware
(such as interpreting sensory data) can be digitized, and the functionality of
the product can be varied over a wide range and made quite complex simply by
changing the programming; the product can also change its behavior more easily
in response to past and present external conditions. At this point, it starts to
make sense to call the product a robot.
There are several things worth noticing about this progression from
single-purpose specially-designed machines to general-purpose
computer-controlled robots. The first is that it applies not only to medical
devices, as in the example that opened this essay, but to any new field of
devices. The second thing to notice is that it is a continuum: there is no
hard-edged line. Nevertheless, it is clear that there is a lot of room for
growth beyond today's molecular constructions. The third thing to notice is that
even today's mature products have not become fully robotic. A car contains
mostly special-purpose components, from the switches that are hardwired directly
to lights, right down to the tires that are specialized for hard-paved surfaces.
That said, a car does contain a lot of programmable elements, some of which
might justifiably be called robotic: most of the complexity of the antilock
brake system is in the software that interprets the sensors.
At what points can we expect molecular machine systems to advance along this
continuum? I would expect the step from special-case components to
interchangeable components to begin over the next few years, as early
experiments are analyzed, design software improves, and the various molecular
design spaces start to become understood. (The US National Science Foundation’s
“four
generations” of nanotechnology seem to suggest this path toward increased
interoperability of systems.) Configurable components have already been
mentioned in one context: food products where the consumer can select the color
or flavor. They may also be useful in medicine, where different people have a
vast range of possible phenotypes. And they may be useful in bio-engineered or
fully artificial bacteria, where it may be more difficult to create and maintain
a library of strains than to build in switchable genes.
Programmable products, with onboard digital logic, will probably have to wait
for the development
of molecular manufacturing. Prior to molecular manufacturing, adding a single
digital switch will be a major engineering challenge, and adding enough to
implement digital logic will probably be prohibitive in almost all cases. But
with molecular manufacturing, adding more parts to the product being constructed
will simply be a matter of tweaking the CAD design: it will add almost no time
or cost to the actual manufacture, and because digital switches have a simple
repeatable design that is amenable to design rules, it should not require any
research to verify that a new digital layout will be manufactured as desired.
Very small products, including some medical nanorobots, may be space-limited,
requiring elegant and compact mechanical designs even after digital logic
becomes available. But a cubic micron has space for tens of thousands of logic
switches, so any non-microscopic product will be able to contain as much logic
as desired. (Today's fastest supercomputer would draw about ten watts if
implemented with
rod logic,
so heat will not be a problem unless the design is *really* compute-intensive.)
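A quick volume check is consistent with the "tens of thousands" figure; the
per-switch size assumed below is my own illustrative number, not a published
rod-logic dimension.

```python
switch_edge_nm = 30                      # assumed linear size of one switch, housing included
switch_volume_nm3 = switch_edge_nm ** 3  # 27,000 nm^3 per switch
cubic_micron_nm3 = 1000 ** 3             # 1e9 nm^3 in a cubic micron

print(cubic_micron_nm3 // switch_volume_nm3)  # ~37,000 switches per cubic micron
```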
What this all implies is that before molecular manufacturing arrives, products
will be designed with all the "smarts" front-loaded in the work of the molecular
"mechanical" engineers. Each product will be specially created with its own
special-purpose combination of "hardware" elements, though they may be pulled
from a molecular library.
But for products built with molecular manufacturing, the product designers will
find it much easier in most cases to offload the complexity to onboard
computers. Rather than wracking their brains to come up with a way to implement
some clever piece of functionality in the still-nascent field of molecular
mechanics, they often will prefer to specify a sensor, an actuator, and a
computer in the middle. By then, computer programming in the modern sense will
have been around for almost three-quarters of a century. Digital computation
will eclipse molecular tweaking as surely as digital computers eclipsed analog
computers.
And then the fun begins. Digital computers had fully eclipsed analog computers
by about the mid-1950's — before most people had even heard of computers, much
less used one. Think of all that's happened in computers since: the Internet,
logistics tracking, video games, business computing, electronic money, the
personal computer, cell phones, the Web, Google... Most of the comparable
advances in nanotechnology are still beyond anyone's ability to forecast.
Regardless of speculation about long-term possibilities, it seems pretty clear
that when molecular machines first become programmable, the design of "standard"
products will rapidly become easier. This may happen even
faster than the advance of computers in the 20th century, because many of
today's software and hardware technologies will be portable to the new systems.
Despite the impressive work currently being done in molecular machines, and
despite the rapid progress of that work, the development of molecular
manufacturing in the next decade or so is likely to yield a sudden advance in
the pace of molecular product design, including nanoscale robotics.
Slip-Sliding Away
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
There's a Paul Simon song that goes, "You know the nearer your destination,
the more you're slip-sliding away." Thinking about modern plans for
increasingly sophisticated nano-construction, I'm reminded of that song. As I
argued in a CRN
blog entry recently, it may turn out that developments which could bring
molecular manufacturing closer also will help to distract from the ultimate
power of the molecular manufacturing approach. People may say, "We already can
do this amazing thing; what more do we need?"
In this essay, I'll talk about a few technologies that may get us part way to
molecular manufacturing. I'll discuss why they're valuable -- but not nearly as
valuable as full molecular manufacturing could be. And I'll raise the
unanswerable question of whether everyone will be distracted by near-term
possibilities...or whether most people will be distracted, and thus unprepared
when someone does move forward.
The first technology is Zyvex's silicon-building system that I discussed in
another recent
blog article. Their plan is to take a silicon surface, carefully terminated
with one layer of hydrogen; use a scanning probe microscope to remove the
hydrogen in certain spots; hit it with a chemical that will deposit a single
additional silicon layer in the "depassivated" areas; and repeat to build up
multiple layers. As long as the scanning probe can remove single, selected
hydrogens -- and this capability has existed for a while, at least in the lab --
then this approach should be capable of building 3D structures (or at least,
2.5D) with atomic precision.
As I noted in that blog article, this "Atomically Precise Manufacturing" plan
can be extended in several ways for higher throughput and a broader range of
materials. The system may even be able to construct one of the key components
used in the fabrication machine. But, as I also noted, this will not be a
nanofactory. It will not be able to build the
vast majority of its own components. It will not be able to build on a large
scale, because the machine will be immensely larger than its products.
If you could build anything you wanted out of a million atoms of silicon, with
each atom placed precisely where you wanted it, what would you build? Well, it's
actually pretty hard to think of useful things to build with only one million
atoms. A million atoms would be a very large biomolecule, but biomolecules are a
lot more complex per atom than a silicon lattice.
And without the complexity of bio-type molecules, a million atoms is really too
small to build much of anything. You could build a lot of different structures
for research, such as newfangled transistors and quantum dots, perhaps new kinds
of sensors (but then you'd have to solve the problem of packaging them), and
perhaps some structures that could interact with other molecules in interesting
ways (but only a few at a time).
Another approach to building nanoscale structures uses self-assembly. In the
past, I haven't thought much of self-assembly, because it requires all the
complexity of the product to be built into the component molecules before they
are mixed together. For most molecules, this is a severe limitation. However,
DNA can encode large amounts of information, and can convert that information
more or less directly into structure. Most self-assembled combinations are doing
well to be able to form stacks of simple layers. DNA can form bit-mapped
artistic designs and three-dimensional geometric shapes.
A recent breakthrough in DNA structure
engineering has made it much easier to design and create the desired shapes. The
shapes are formed by taking a long inexpensive strand of DNA, and fastening it
together with short, easily-synthesized DNA "staples" that each bind to only one
place on the strand; thus, each end of the staple joins two different parts of
the strand together. This can, with fairly high reliability, make trillions of
copies of semi-arbitrary shapes. In each shape, the DNA components (nucleotides)
will be in the right place within a nanometer or so, and the connection of each
atom relative to its neighbors will be predictable and engineerable.
Building atomically precise structures sounds enough like molecular
manufacturing to be misleading. If researchers achieve it, and find that it's
not as useful as the molecular manufacturing stories led them to expect, they
may assume that molecular manufacturing won't be very useful either. In a way,
it's the opposite problem from the one CRN has been facing for the past four
years: rather than thinking that molecular manufacturing is impossible, they may
now think that it's already happened, and was not a big deal.
Of course, the technologies described above will have limitations. One of the
most interesting limitations is that they cannot build a significant part of the
machines that built them. As far as I can see, DNA stapling will always be
dependent on big machines to synthesize DNA molecules, measure them out, and
stir them together. No one has proposed building DNA-synthesizer machines out of
DNA. The cost of DNA synthesis is falling rapidly, but it is still far above the
price where you could commission even a sub-micron DNA sculpture for pocket
change. This also implies that there is no way to ramp up production beyond a
certain rate; the synthesizing machines simply wouldn't be available. And
although the Zyvex process doesn't exist yet, I'm sure it will be at least as
limited by the cost and scarcity of the machines involved.
A very old saying reminds us, "When all you have is a hammer, everything looks
like a nail." So if atomically precise shapes can be built by layering silicon,
or by joining DNA, then any limitations in that technology will be approached by
trying to improve that technology. Typically, people who have a perfectly good
technology won't say, "I'll use my technology to invent a better one that will
completely eclipse and obsolete the one I have now." Change never comes easily.
Instead of seeking a better technology, people usually develop incremental fixes
and improvements for the technology they already have.
So the question remains, will everyone assume that technologies such as
Atomically Precise Manufacturing and DNA stapling are the wave of the future,
and work on improving those technologies as their shortfalls become apparent? Or
will someone be able to get funding for the purpose of bypassing those
technologies entirely, in order to produce something better?
It will only take one visionary with access to a funding source. The cost of
developing molecular manufacturing, even today, appears to be well within the
reach of numerous private individuals as well as a large number of national
governments. And the cost will continue to fall rapidly. So if the mainstream
remains uninterested in molecular manufacturing, slipping seamlessly from denial
into apathy, the chance that someone outside the mainstream will choose to
develop it should rapidly approach certainty.
Figuring Cost for Products of Molecular Manufacturing
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
If finished products of molecular manufacturing will end up costing too much,
then the whole field might as well be scrapped now. But how much is too much?
And without knowing in detail how nanofactories will manufacture stuff, how can
we be sure that it actually will be worth developing and building them? In this
essay, I'll explore ways that we can reason about the costs of molecular
construction even with the existing knowledge gaps.
The cost of products made by molecular manufacturing will depend on the cost of
inputs and the cost of the machine that transforms the inputs into product. The
inputs are chemical feedstock, power, and information. The manufacturing system
will be an array of massive numbers of nanoscale machines which process the
input molecules and add them to build up nanoscale machine components, then join
the components into the product.
An ideal material for a molecular manufacturing system is a strongly bonded
covalent solid like diamond or sapphire (alumina). To build this kind of
crystalline material, just a few atoms at a time would be added, and the
feedstock would be small molecules. Small molecules tend not to cost much in
bulk; the limiting factor for cost in this kind of construction would probably
be the power. I have calculated that a
primitive manufacturing system with an inefficient (though flexible) design
might require 200 kWh per kg of product. Given the high strength of the product,
this cost is low enough to build structural materials; it would be quite
competitive with steel or aluminum.
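As a sketch of that cost argument, with the electricity price being an assumed
figure rather than anything from the calculation linked above:

```python
energy_per_kg_kwh = 200   # the essay's estimate for an inefficient early system
price_per_kwh = 0.10      # assumed electricity price, dollars per kWh

print(f"~${energy_per_kg_kwh * price_per_kwh:.0f} per kg of product, from power alone")
# ~$20/kg -- more than bulk steel per kilogram, but the relevant comparison is
# strength per kilogram, where a diamondoid structure needs far less mass.
```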
Exponential manufacturing implies that the size of the manufacturing system
would not be limited; it appears to make sense to talk of building vehicles and
even houses by such methods. With the strength of diamond, a pressure-stiffened
(inflatable) structural panel might cost less than a dollar per square meter.
Even if this is off by multiple orders of magnitude, the materials might still
be useful in aerospace.
The earliest molecular manufacturing systems may not be able to do
mechanosynthesis of covalent solids; instead, they may use nanoscale actuators
to join or place larger molecules. This would probably require a lot less
precision, as well as using less energy per atom, but produce less strong and
stiff materials. Also, the feedstock would probably be more costly — perhaps a
lot more costly, on the order of dollars per gram rather than dollars per
kilogram. So these products probably would not be used for large-scale
structural purposes, though they might be very useful for computation, sensing,
and display. The products might even be useful for actuation. As long as the
product molecules didn't have to be immersed in water to maintain their shape or
function, they might still get the scaling law advantages — power density and
operation frequency — predicted for diamondoid machines. With a power density
thousands of times greater than today's macro-scale machines, even expensive
feedstock would be worth using for motors.
The second major component of product cost is the cost of the machine being used
to make the product. If that machine is too expensive, then the product will be
too expensive. However, our analysis suggests that the machine will be quite
inexpensive relative to its products. Here again, scaling laws provide a major
advantage. Smaller systems have higher operational frequency, and a nanoscale
system might be able to process its own mass of product in a few seconds — even
working one small molecule at a time. This implies that a nanofactory would be
able to produce many times its weight in product over its working lifespan.
Since nanofactories would be built by nanofactories, and would have the same
cost as any other product, the proportion of product cost contributed by the
nanofactory itself would be minuscule. (This ignores licensing fees.)
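Here is the same scaling argument as a back-of-envelope sketch; the replication
time and working lifespan are assumptions in the spirit of the text, not design
figures.

```python
self_mass_time_s = 10   # assumed seconds for a nanofactory to process its own mass
lifespan_days = 365     # assumed working life of one nanofactory

mass_ratio = lifespan_days * 24 * 3600 / self_mass_time_s
print(f"~{mass_ratio:.0e} times its own mass in product over its lifetime")
# ~3e+06, so the factory itself contributes roughly one part in a million to product cost.
```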
When products are built with large machines that were built with other
processes, the machines may cost vastly more than the products they manufacture.
For example, each computer chip is worth only a few dollars, but it's made by
machines costing many millions of dollars. But when the machine is made by the
same process that makes its products, the machine will not cost more than the
other products.
To turn the argument around, for the nanofactory concept to work at all,
nanofactories have to be able to build other nanofactories. This implies minimum
levels of reliability and speed. But given even those minimum levels, the
nanofactory would be able to build products efficiently. It is, of course,
possible to propose nanofactory designs that appear to break this hopeful
analysis. For example, a nanofactory that required large masses of passive
structure might take a long time to fabricate its mass of product. But the
question is not whether broken examples can be found. The question is whether a
single working example can be found. Given the number of different chemistries
available, from biopolymer to covalent solid, and the vast number of different
mechanical designs that could be built with each, the answer to that question
seems very likely to be Yes.
Will low-cost atomically precise products still be valuable when nanofactories
are developed, or will other nanotechnologies have eclipsed the market? For an
initial answer, we might usefully compare molecular manufacturing with
semiconductor manufacturing.
In 1965, transistors cost
more than a dollar. Today, they cost well under one-millionth of a dollar,
and we can put a billion of them on a single computer chip. So the price of
transistors has fallen more than a million-fold in 40 years, and the number of
transistors on a chip has increased similarly. But this is still not very close
to the cost-per-feature that would be needed to build things atom-by-atom.
Worldwide, we build 10^18 transistors
per year; if each transistor were an atom, we would be building about 20
micrograms of stuff — worldwide — in factories that cost many billions of
dollars. And in another 40 years, if the semiconductor trends continue, those
billions of dollars would still be producing only 20 grams of stuff per year. By
contrast, a one-gram nanofactory might produce 20 grams of stuff per day. So
when nanoscale technologies are developed to the point that they can build a
nanofactory at all, it appears worthwhile to use them to do so, even at great
cost; the investment will pay back quite quickly.
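The 20-microgram figure above follows from simple arithmetic; the sketch below reconstructs it, treating each transistor-equivalent "atom" as a carbon atom (an assumption made only for the purpose of the calculation):

# Reconstruction of the "20 micrograms" comparison: 1e18 atoms per year,
# counted as carbon atoms.
AVOGADRO = 6.022e23
transistors_per_year = 1e18
carbon_molar_mass_g = 12.0
mass_g = transistors_per_year / AVOGADRO * carbon_molar_mass_g
print(f"~{mass_g * 1e6:.0f} micrograms per year")   # ~20 micrograms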
The comparison above equated transistors with atoms. Of course this is just an
analogy; putting an atom precisely in place may not be very useful. But then
again, it might. The functionality of nanoscale machinery will depend largely on
the number of features it includes, and if each feature requires only a few
atoms, then precise atom placement with exponential molecular manufacturing
technology implies the ability to build vast numbers of features.
For a surprisingly wide range of implementation technologies, molecular
manufacturing appears to provide a low-cost way of building huge numbers of
features into a product. For products that depend on huge numbers of features —
including computers, some sensors and displays, and perhaps parallel arrays of
high-power-density motors — molecular manufacturing appears to be a lower-cost
alternative to competing technologies. Even decades in the future, molecular
manufacturing may still be able to build vastly more features at vastly lower
cost than, for example, semiconductor manufacturing. And for some materials, it
appears that even structural products may be worth building.
Civilization Without Metals
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
There used to be an idea floating around -- maybe it still is -- that if our
current technological civilization collapsed, the human race would likely not
get a second chance because we've already used up all the easy-to-mine metals
and fossil fuels. Among other places, this idea showed up in Larry Niven's
Ringworld
novels: technology in a giant artificial space habitat collapsed, and because
there were no metal stocks available, civilization could not re-bootstrap
itself.
Fortunately, metals, though very useful, do not appear to be necessary for a
high-tech civilization. And there are lots of sources of energy other than
fossil fuels. Since fossil fuels add carbon dioxide to the atmosphere, and since
metal extraction causes various kinds of pollution (not to mention political
problems), the question is of more than theoretical interest. An advanced,
elegant technology should be able to use more local and greener resources.
Carbon is available everywhere on the surface of our planet. It may require
energy to convert it to useful form, but carbon-based solar collectors appear to
be feasible, and biomass can be used for modest amounts of energy. As a
structural material, carbon ranges from good to exceptional. Carbon fiber
composites are lighter and stronger than steel. Virtually all plastics are
carbon-based. Carbon nanotubes are dozens of times stronger than steel --
significantly better than carbon fiber. Carbon is an extremely versatile
element. Pure carbon can be opaque or transparent; it can be an electrical
conductor, semiconductor, or insulator; it can be rigid or flexible. In
combination with other readily-available elements, carbon can make a huge
variety of materials.
As technology advances, our ability to build smaller machines also advances.
Small machines work better; scaling laws mean that in general, smaller machines
have higher power density, operating frequency, and functional density. This
implies that, even if metals are needed to implement some functions,
increasingly small amounts will be needed as technology advances. But small
machines can implement a lot of functions -- actuation, sensing, computation,
display -- simply by mechanical motion and structure. Examples abound in Robert
Freitas's Nanomedicine I, which is
available online
in its entirety. This means that regardless of what molecular manufactured
structures are built out of -- diamond, alumina, silica, or something else --
they probably will be able to do a lot of things based on their mechanical
design rather than their elemental composition.
Just for fun, let's consider how people deprived of metal (and with technical
knowledge only slightly better than today's) might make their way back to a high
technology level. Glass, of course, can be made with primitive technology.
Polymers can be made from plants: plastic from corn, rubber from the sap of
certain trees. So, test tubes and flexible tubing could be produced, and perhaps
used to bootstrap a chemical industry. There are a number of ways to make carbon
nanotubes, some of which use electric arcs. Carbon is fairly high-resistance (it
was used for the first light bulb filaments), but might be adequate for carrying
high voltage at low current, and it has a long history of use as discharge
electrodes; an electrostatic generator could be made of glass and carbon, and
that plus some mechanical pumps might possibly be enough to make nanotubes for
high-quality wires.
Computers would be necessary for any high-tech civilization. Carbon nanotubes
are excellent electron emitters, so it might be possible to build small, cool,
and reliable vacuum-tube computing elements. Note that the first electronic
computers were made with vacuum tubes that used unreliable energy-consuming
(heated) electron emitters; if they were cool and reliable, many emitters could
be combined in a single vacuum enclosure. As an off-the-cuff guess: a computer
made by hand, with each logic element sculpted in miniature, might require some
thousands of hours of work, be small enough to fit on a large desk, and be as
powerful as computers available in the 1960s or maybe even the 1970s. The IBM
PC, a consumer-usable computer from the early 1980s, had about 10,000 logic
elements in its processor and 70,000 in its memory; this could be made by hand
if necessary, though computers suitable for controlling factory machines can be
built with fewer than 10,000 elements total.
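As a quick check on that off-the-cuff guess, with an assumed per-element assembly time:

# Rough check of the hand-built computer guess. The assembly rate is assumed.
logic_elements = 10_000 + 70_000      # processor plus memory, per the IBM PC figures above
minutes_per_element = 3               # assumed time to sculpt and wire one element by hand
hours_total = logic_elements * minutes_per_element / 60
print(f"~{hours_total:,.0f} hours of work")   # ~4,000 hours, i.e. "some thousands of hours"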
Computer-controlled manufacturing machines would presumably be able to use
nanotube-reinforced plastic to build a variety of structures comparable in
performance to today's carbon-fiber constructions. Rather than milling the
structures from large hunks of material, as is common with metals, they might be
built additively, as rapid-prototyping machines are already beginning to do.
This would reduce or eliminate the requirement for cutting tools. Sufficiently
delicate additive-construction machines should also be able to automate the
manufacture of computers.
Although I've considered only a few of the many technologies that would be
required, it seems feasible for a non-metals-based society to get to a level of
technology roughly comparable to today's capabilities -- though not necessarily
today's level of manufacturing efficiency. In other words, even if it were
possible to build a car, it might cost 100 times as much to manufacture as
today's cars. To build a technological civilization, manufacturing has to be
cheap: highly automated and using inexpensive materials and equipment. Rather
than try to figure out how today's machines could be translated into glass,
nanotubes, and plastic without raising their cost, I'll simply suggest that
molecular manufacturing will use automation, inexpensive materials, and
inexpensive equipment. In that case, all that would be needed is to build enough
laboratory equipment -- at almost any cost! -- to implement a recipe for
bootstrapping a molecular manufacturing system.
There are several plausible approaches to molecular
manufacturing. One of them is to build self-assembled structures out of
biopolymers such as DNA, structures complex enough to incorporate
computer-controlled actuation at the molecular level, and then use those to
build higher-performance structures out of better materials. With glass,
plastic, electricity, and computers, it should be possible to build DNA
synthesizers. Of course, it's far from trivial to do this effectively: as with
most of the technologies proposed here, it would require either a pre-designed
recipe or a large amount of research and development to do it at all. But it
should be feasible.
A recipe for a DNA-based molecular manufacturing system doesn't exist yet, so I
can't describe how it would work or what other technologies would be needed to
interface with it. But it seems unlikely that metal would be absolutely required
at any stage. And -- as is true today -- once a molecular manufacturing
proto-machine reached the exponential stage, where it could reliably make
multiple copies of its own structure, it would then be able to manufacture
larger structures to aid in interfacing to the macroscopic world.
Once molecular manufacturing reaches the point of building large structures via
molecular construction, metals become pretty much superfluous. Metals are metals
because they are heavy atoms with lots of electrons that mush together to form
malleable structures. Lighter atoms that form stronger bonds will be better
construction materials, once we can arrange the bonds the way we want them --
and that is exactly what molecular manufacturing promises to do.
Limitations of Early Nanofactory Products
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Although molecular manufacturing and its products will be amazingly powerful,
that power will not be unlimited. Products will have several important physical
limitations and other technological limitations as well. It may be true, as
Arthur C. Clarke suggests, that "any sufficiently advanced technology is
indistinguishable from magic," but early molecular manufacturing
(diamondoid-based nanofactories) will not, by that definition, be sufficiently
advanced.
Molecular manufacturing is based on building materials by putting atoms together
using ordinary covalent bonds. This means that the strength of materials will be
limited by the strength of those bonds. For several reasons, molecular
manufacturing-built materials will be stronger than those we are used to. A
structural defect can concentrate stress and cause failure; materials built
atom-by-atom can be almost perfect, and the few remaining defects can be dealt
with by branched structures that isolate failures. By contrast, today's carbon
fiber is chock-full of defects, so it is much weaker than it could be. Conventional
metallurgy produces metal that is also full of defects. So materials built with
molecular manufacturing could approach the strength of carbon nanotubes -- about
100 times stronger than steel -- but probably not exceed that strength.
Energy storage will be bulky and heavy. It appears that the best non-nuclear way
to store energy is via ordinary chemical fuel. In other words, energy storage
won't be much more compact than a tank of gasoline. Small nuclear energy
sources, on the order of 10-micron fuel particles, appear possible if the right
element is chosen that emits only easily-shielded particles. However, this would
be expensive, unpopular, and difficult to manufacture, and probably will be
pretty rare.
To make the most of chemical energy, a few tricks can be played. One (suggested
by
Eric Drexler in conversation) is building structures out of carbon that
store mechanical energy; springs and flywheels can store energy with
near-chemical density, because they depend on stretched bonds. After the
mechanical energy is extracted, the carbon can be oxidized to provide chemical
energy. As it happens, carbon oxidized with atmospheric oxygen appears to be the
most dense store of chemical energy. Of course, if the mechanical structures are
not oxidized, they can be recharged with energy from outside the device, in
effect forming a battery-like energy store with very high energy density
compared to today's batteries.
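For scale, a hedged comparison of energy stores (the figures are commonly cited ballpark values, not numbers from the essay):

# Ballpark energy densities for comparison (approximate, commonly cited values).
# (name, MJ per kg, kg per liter)
stores = [
    ("gasoline",               46.0, 0.74),
    ("carbon (diamond) + air", 33.0, 3.5),
    ("lithium-ion battery",     0.9, 2.5),
]
for name, mj_per_kg, kg_per_l in stores:
    print(f"{name:24s} {mj_per_kg:5.1f} MJ/kg   {mj_per_kg * kg_per_l:6.1f} MJ/L")
# Per liter, dense carbon oxidized in air beats gasoline severalfold, which is
# the sense in which it is the most dense practical chemical energy store; a
# carbon spring or flywheel could in principle approach that chemical line,
# far above today's batteries.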
Another trick that can make the most of chemical energy stores is to avoid
burning them. If energy is converted into heat, then only a fraction of it can
be used to do useful work; this is known as the Carnot limit. But if the energy
is never thermalized -- if the atoms are oxidized in a fuel cell or in an
efficient mechanochemical system -- then the Carnot limit does not apply. Fuel
cells that beat the Carnot limit exist today.
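For reference, the Carnot bound is the standard textbook relation between the hot and cold reservoir temperatures; a minimal illustration:

# Carnot efficiency: the maximum fraction of heat convertible to work by any
# heat engine operating between a hot and a cold reservoir.
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

# Example: an engine running at 900 K, rejecting heat at 300 K.
print(f"{carnot_efficiency(900, 300):.0%}")   # 67%, an upper bound for heat engines
# Fuel cells and mechanochemical conversion avoid thermalizing the energy,
# so this bound does not apply to them.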
For a lot more information about energy storage, transmission, and conversion,
see Chapter 6 of Nanomedicine I (available
online).
Computer power will be effectively unlimited by today's standards, in the sense
that few algorithms exist that could make efficient use of the computers
molecular manufacturing could build. This does not mean that computer capacity
will be literally unlimited. Conventional digital logic, storing information in
stable physical states, may be able to store a bit per atom. At that rate, the
entire Internet (about 2 petabytes) could be stored within a few human cells (a
few thousand cubic microns), but probably could not be stored within a typical
bacterium.
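A rough volume check, assuming diamond-density carbon and exactly one bit per atom (idealized assumptions; storage overhead and the assumed size of "the Internet" shift the answer by an order of magnitude or so):

# How much solid carbon would hold "the entire Internet" at one bit per atom?
AVOGADRO = 6.022e23
atoms_per_cm3 = 3.5 / 12.0 * AVOGADRO           # diamond: ~1.8e23 atoms per cm^3
atoms_per_um3 = atoms_per_cm3 * 1e-12           # ~1.8e11 atoms per cubic micron
internet_bits = 2e15 * 8                        # 2 petabytes
volume_um3 = internet_bits / atoms_per_um3
print(f"~{volume_um3:,.0f} cubic microns")      # on the order of 1e4 to 1e5 um^3
# A typical bacterium (~1 cubic micron) would hold only ~20 GB at this density,
# so the Internet fits in a volume of cellular scale but not bacterial scale.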
Of course, this does not take quantum computers into account. Molecular
manufacturing's precision may help in the construction of quantum computer
structures. Also, there may be arcane techniques that might store more than one
bit per atom, or do computation with sub-atomic particles. But these probably
would not work at room temperature. So for basic computer capacity, it's
probably reasonable to stick with the estimates found in Nanosystems: 10^17
logic gates per cubic millimeter, and 10^16 instructions per second
per watt. (A logic gate may require many more atoms than required to store a
bit.) These numbers are from Chapter 1 of Nanosystems (available
online).
It is not yet known what kinds of chemistry the first nanofactories will do.
Certainly they will not be able to do everything. Water, for example, is liquid
at room temperature, and water molecules will not stay where they are placed
unless the factory is operating at cryogenic temperatures. This may make it
difficult to manufacture things like food. (Building better greenhouses, on the
other hand, should be relatively straightforward.) Complicated molecules or
arcane materials may require special research to produce. And, of course, no
nanofactory will be able to convert one chemical element into another; if a
design requires a certain element, that element will have to be supplied in the
feedstock. The good news is that carbon is extremely versatile.
Sensors will be limited by basic physics in many ways. For example, a small
light-gathering surface may have to wait a long time before it collects enough
photons to make an image. Extremely small sensors will be subject to thermal
noise, which may obscure the desired data. Also, collecting data will require
energy to do computations. (For some calculations in this area, see
Nanomedicine I,
Chapter 4.)
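As a hedged illustration of the photon-budget problem (the illumination level and aperture below are assumed values, not figures from the essay):

# Photon budget for a very small light sensor under assumed indoor lighting.
photon_flux = 1e18            # assumed visible photons per m^2 per s (dim indoor light)
aperture_m2 = (100e-9) ** 2   # a 100 nm x 100 nm light-gathering patch
photons_per_s = photon_flux * aperture_m2
print(f"~{photons_per_s:,.0f} photons per second")   # ~10,000 per second
# To form even a crude 100 x 100 "pixel" image at ~100 photons per pixel, such
# a sensor would need to integrate for on the order of a minute or more.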
Power supply and heat dissipation will have to be taken into account in some
designs. Small widely-separated systems can run at amazing power densities
without heating up their environment much. However, small systems may not be
able to store much fuel, and large numbers of small systems in close proximity
(as in some nanomedical applications) may still create heat problems. Large
(meter-scale) systems with high functional density can easily overwhelm any
currently conceived method of cooling. Drexler calculated that a
centimeter-thick slab of solid nanocomputers could be cooled by a special
low-viscosity fluid with suspended encapsulated ice particles. This is quite a
high-tech proposal, and Drexler's calculated 100 kW per cubic centimeter (with
25% of the volume occupied by coolant pipes) probably indicates the highest
cooling rate that should be expected.
The good news on power dissipation is that nanomachines may be extremely
efficient. Scaling laws imply high power densities and operating frequencies
even at modest speeds -- speeds compatible with >99% efficiency. So if 10 kW per
cubic centimeter are lost as heat, that implies up to a megawatt per cubic
centimeter of useful mechanical work such as driving a shaft. (Computers, even
reversible computers, will spend a lot of energy on erasing bits, and
essentially all of the energy they use will be lost as heat. So the
factor-of-100 difference between heat dissipated and work accomplished does not
apply to computers. This means that you get only about 10^21
instructions per second per cubic centimeter.)
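To make the arithmetic behind those figures explicit, here is a short restatement using the numbers quoted above:

# Relating cooling capacity, efficiency, and useful output per cubic centimeter.
cooling_w_per_cm3 = 100e3          # ~100 kW/cm^3 cooling estimate quoted above
efficiency = 0.99                  # ">99% efficient" mechanical systems
# If roughly 1% of throughput power is lost as heat, a 10 kW/cm^3 heat load implies:
heat_w_per_cm3 = 10e3
work_w_per_cm3 = heat_w_per_cm3 * efficiency / (1 - efficiency)
print(f"~{work_w_per_cm3 / 1e6:.1f} MW/cm^3 of mechanical work")   # ~1 MW/cm^3

# Computers dissipate essentially all the power they use, so at the full
# 100 kW/cm^3 cooling budget and 1e16 instructions per second per watt:
ops_per_s_per_cm3 = 1e16 * cooling_w_per_cm3
print(f"~{ops_per_s_per_cm3:.0e} instructions/s per cm^3")          # ~1e21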
Most of the limitations listed here are orders of magnitude better than today's
technology. However, they are not infinite. What this means is that anyone
trying to project what products may be feasible with molecular manufacturing
will have to do the math. It is probably safe to assume that a molecular
manufacturing-built product will be one or two orders of magnitude (10 to 100
times) better than a comparable product built with today's manufacturing. But to
go beyond that, it will be necessary to compute what capabilities will be
available, and do at least a bit of exploratory engineering in order to make
sure that the required functionality will fit into the desired product.
Levels of Nanotechnology Development
Chris
Phoenix, Director of Research, Center for Responsible Nanotechnology
Nanotechnology capabilities have been improving rapidly. More different things
can be built, and the products can do more than they used to. As nanotechnology
advances, CRN is continually asked: Why do we focus only on molecular
manufacturing, when there's important stuff already being done? This essay will
put the various levels of nanotechnology in perspective, showing where molecular
manufacturing fits on a continuum of development -- quite far advanced in terms
of capabilities. Along the way, this will show which kinds of nanotechnology
CRN's concerns apply to.
For another perspective on
nanotechnology development, it's worth reading the section on "The Progression
of Nanotechnology" (pages 3-6) from a
joint committee economic study [PDF] for the U.S. House of Representatives.
It does not divide nanotech along exactly the same lines, but it is reasonably
close, and many of the projections echo mine. That document is also an early
source for the NSF's division of nanotechnology into
four
generations.
The development arc of nanotechnology
is comparable in some ways to the history of computers. Ever since the abacus
and clay tablets, people have been using mechanical devices to help them keep
track of numbers. Likewise, the ancient Chinese reportedly used nanoparticles of
carbon in their ink. But an abacus is basically a better way of counting on your
fingers; it is not a primitive computer in any meaningful sense. It only
remembers numbers, and does not manipulate them. But I am not going to try to
identify the first number-manipulator; there are all sorts of ancient
distance-measuring carts, timekeeping devices, and astronomical calculators to
choose from. Likewise, the early history of nanotechnology will remain shrouded
in myth and controversy, at least for the purposes of this essay.
The first computing devices in
widespread use were probably mechanical adding machines, 19th century cash
registers, and similar intricate contraptions full of gears. These had to be
specially designed and built, a different design for each different purpose.
Similarly, the first nanotechnology was purpose-built structures and materials.
Each different nanoparticle or nanostructure had a particular set of properties,
such as strength or moisture resistance, and it would be used for only that
purpose. Of course, a material might be used in many different products, as a
cash register would be used in many different stores. But the material, like the
cash register, was designed for its specialized function.
Because purpose-designed materials are
expensive to develop, and because a material is not a product but must be
incorporated into existing manufacturing chains, these early types of
nanotechnology are not having a huge impact on industry or society.
Nanoparticles are, for the most part, new types of industrial chemicals. They
may have unexpected or unwanted properties; they may enable better products to
be built, and occasionally even enable new products; but they are not going to
create a revolution. In Japan, I saw an abacus used at a train station ticket
counter in the early 1990's; cash registers and calculators had not yet
displaced it.
The second wave of computing devices
was an interesting sidetrack from the general course of computing. Instead of
handling numbers of the kind we write down and count with, they handled
quantities -- fuzzy, non-discrete values, frequently representing physics
problems. These analog computers were weird and arcane hybrids of mechanical and
electrical components. Only highly trained mathematicians and physicists could
design and use the most complex of these computers. They were built this way
because they were built by hand out of expensive components, and it was worth
making each component as elegant and functional as possible. A few vacuum tubes
could be wired up to add, subtract, multiply, divide, or even integrate and
differentiate. An assemblage of such things could do some very impressive
calculations -- but you had to know exactly what you were doing, to keep track
of what the voltage and current levels meant and what effect each piece would
have on the whole system.
Today, nanotechnologists are starting
to build useful devices that combine a few carefully-designed components into
larger functional units. They can be built by chemistry, self-assembly, or
scanning probe microscope; none of these ways is easy. Designing the devices is
not easy. Understanding the components is somewhat easy, depending on the
component, but even when the components appear simple, their interaction is
likely not to be simple. But when your technology only lets you have a few
components in each design, you have to get the most you can out of each
component. It goes without saying that only experts can design and build such
devices.
This level of nanotechnology will
enable new applications, as well as more powerful and effective versions of some
of today's products. In a technical sense, it is more interesting than
nanoparticles -- in fact, it is downright impressive. However, it is not a
general-purpose technology; it is far too difficult and specialized to be
applied easily to more than a tiny fraction of the products created today. As
such, though it will produce a few impressive breakthroughs, it will not be
revolutionary on a societal scale.
It is worth noting that some observers,
including some nanotechnologists, think that this will turn out to be the most
powerful kind of nanotechnology. Their reasoning goes something like this:
Biology uses this kind of elegant highly-functional component-web. Biology is
finely tuned for its application, so it must be doing things the best way
possible. And besides, biology is full of elegant designs just waiting for us to
steal and re-use them. Therefore, it's impossible to do better than biology, and
those who try are being inefficient in the short term (because they're ignoring
the existing designs) as well as the long term (because biology has the best
solutions). The trouble with this argument is that biology was not designed by
engineers for engineers. Even after we know what the components do, we will not
easily be able to modify and recombine them. The second trouble with the
argument is that biology is constrained to a particular design motif: linear
polymers modified by enzymes. There is no evidence that this is the most
efficient possible solution, any more than vacuum tubes were the most efficient
way to build computer components. A third weakness of the argument is that there
may be some things that simply can't be done with the biological toolbox. Back
when computers were mainly used for processing quantities representing physical
processes, it might have sounded strange to say that some things couldn't be
represented by analog values. But it would be more or less impossible to search
a billion-byte text database with an analog computer, or even to represent a
thousand-digit number accurately.
It may seem strange to take a circuit that could add two
high-precision numbers and rework it into a circuit that could add 1+1, so that
a computer would require thousands of those circuits rather than dozens. But
that is basically what was done by the designers of ENIAC, the famous early
digital computer. There were at least two or three good reasons for this. First,
the 1+1 circuit was not just high-precision, it was effectively infinite
precision (until a vacuum tube burned out) because it could only answer in
discrete quantities. You could string together as many of these circuits as you
wanted, and add ten- or twenty-digit numbers with infinite precision. Second,
the 1+1 circuit could be faster. Third, a computer doing many simple operations
was easier to understand and reprogram than a computer doing a few complex
operations. ENIAC was not revolutionary, compared with the analog computers of
its day; there were many problems that analog computers were better for. But it
was worth building. And more importantly, ENIAC could be improved by improving
just a few simple functions. When transistors were invented, they quickly
replaced vacuum tubes in digital computers, because digital computers required
fewer and less finicky circuit designs.
The third level of nanotechnology,
which is just barely getting a toehold in the lab today, is massively parallel
nano-construction via relatively large computer-controlled machines. For
example, arrays of tens of thousands of scanning probes have been built, and
these arrays have been used to build tens of thousands of micro-scale pictures,
each with tens of thousands of nano-scale dots. That's a billion features, give
or take an order of magnitude -- pretty close to the number of transistors on a
modern computer chip. That is impressive. However, a billion atoms would make an
object about the size of a bacterium; this type of approach will not be used to
build large objects. And although I can imagine ways to use it for
general-purpose construction, it would take some work to get there. Because it
uses large and delicate machines that it cannot itself build, it will be a
somewhat expensive family of processes. Nevertheless, as this kind of technology
improves, it may start to steal some excitement from the bio-nano approach,
especially once it becomes able to do atomically precise fabrication using
chemical reactions.
Massively parallel nano-construction
will likely be useful for building better computers and less expensive sensors,
as well as a lot of things no one has thought of yet. It will not yet be
revolutionary, by comparison with what comes later, but it starts to point the
way toward revolutionary construction capabilities. In particular, some
nano-construction methods, such as Zyvex's
Atomically Precise Manufacturing, might eventually be able to build
improved versions of their own tools. Once computer-controlled
nano-fabrication can build improved versions of its own tools, it will start to
lead to the next level of nanotechnology: exponential manufacturing. But until
that point, it appears too primitive and limited to be revolutionary.
ENIAC could store the numbers it was
computing on, but the instructions for running the computation were built into
the wiring, and it had to be rewired (but not rebuilt) for each different
computation. As transistors replaced vacuum tubes, and integrated circuits
replaced transistors, it became reasonable for computers to store their own
programs in numeric form, so that when a different program was needed, the
computer could simply read in a new set of numbers. This made computing a lot
more efficient. It also made it possible for computers to help to compile their
own programs. Humans could write programs using symbols that were more or less
human-friendly, and the computer could convert those symbols into the proper
numbers to tell the computer what to do. As computers became more powerful, the
ease of programming them increased rapidly, because the symbolic description of
their program could become richer, higher-level, and more human-friendly. (Note
that, in contrast, a larger analog computer would be more difficult to program.)
Within a decade after ENIAC, hobbyists could learn to use a computer, though
computers were still far too expensive for hobbyists to own.
The fourth level of nanotechnology is
early exponential manufacturing. Exponential manufacturing means that the
manufacturing system can build most of its key components. This will radically
increase the throughput, will help to drive down the cost, and also implies that
the system can build improved versions of itself fairly quickly. Although it's
not necessarily the case that exponential manufacturing will use molecular
operations and molecular precision (molecular manufacturing), this may turn out
to be easier than making exponential systems work at larger scales. Although the
most familiar projections of molecular manufacturing involve highly advanced
materials such as carbon lattice (diamondoid), the first molecular manufacturing
systems likely will use polymers that are weaker than diamondoid but easier to
work with. Exponential manufacturing systems with large numbers of fabrication
systems will require full automation, which means that each operation will have
to be extremely reliable. As previous science
essays have discussed, molecular manufacturing appears to provide the
required reliability, since covalent bonding can be treated as a digital
operation. In the same way that the 1+1 circuit is more precise than the analog
adder, adding a small piece onto a molecule can be far more precise and reliable
than any currently existing manufacturing operation -- reliable enough to be
worth doing millions of times rather than using one imprecise bulk operation to
build the same size of structure.
Early exponential manufacturing will
provide the ability to build lots of truly new things, as well as computers far
in advance of today's. With molecular construction and rapid prototyping, we
will probably see breakthrough medical devices. Products may still be quite
expensive per gram, especially at first, since early processes are likely to
require fairly expensive molecules as feedstocks. They may also require some
self-assembly and some big machines to deal with finicky reaction conditions.
This implies that for many applications, this technology still will be building
components rather than products. However, unlike the cost per gram, the cost per
feature will drop extremely rapidly. This implies far less expensive sensors. At
some point, as products get larger and conventional manufacturing gets more
precise, it will be able to interface with molecular manufactured products
directly; this will greatly broaden the applications and ease the design
process.
The implications of even early
molecular manufacturing are disruptive enough to be interesting to CRN. Massive
sensor networks imply several new kinds of weapons, as do advanced medical
devices. General-purpose automated manufacturing, even with limitations, implies
the first stirrings of a general revolution in manufacturing. Machines working
at the nanoscale will not only be used for manufacturing, but in a wide variety
of products, and will have far higher performance
than larger machines.
In one sense, there is a continuum from
the earliest mainframe computers to a modern high-powered gaming console. The
basic design is the same: a stored-program digital computer. But several decades
of rapid incremental change have taken us from million-dollar machines that
printed payroll checks to several-hundred-dollar machines that generate
real-time video. A modern desktop computer may contain a million times as many
computational elements as ENIAC, each one working almost a million times as fast
-- and the whole thing costs thousands of times less. That's about fifteen
orders of magnitude improvement. For what it's worth, the functional density of
nanometer-scale components is eighteen orders of magnitude higher than the
functional density of millimeter-scale components.
Diamondoid
molecular manufacturing is expected to produce the same kind of advances
relative to today's manufacturing.
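The orders-of-magnitude tallies in the preceding comparison are just products of the quoted factors; a quick restatement:

# Orders-of-magnitude arithmetic behind the computing and nanotech comparisons.
import math

# Computing since ENIAC: ~1e6 more elements, each ~1e6 faster, at ~1e3 lower cost.
computing_gain = 1e6 * 1e6 * 1e3
print(f"computing: ~{math.log10(computing_gain):.0f} orders of magnitude")      # ~15

# Functional density scales with the cube of the linear shrink: millimeter to
# nanometer is a factor of 1e6 in length, so density rises by (1e6)^3.
density_gain = (1e6) ** 3
print(f"functional density: ~{math.log10(density_gain):.0f} orders of magnitude")  # 18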
The implications of this level of
technology, and the suddenness with which it might be developed, have been the
focus of CRN's work since our founding almost five years ago. They cannot be
summarized here; they are too varied and extreme. We
hope you will learn more and join our efforts to prepare the world for this
transformative technology.
Exploring the Productive Nanosystems Roadmap
Damian Allis, Research Professor of Chemistry at Syracuse University and Senior
Scientist for Nanorex, Inc.
What follows is a brief series of notes and observations about the
Roadmap Conference,
some of the activities leading up to it, and a few points about the state of
some of the research that the Roadmap is hoping to address. All views expressed
are my own and not necessarily those of other Roadmap participants,
collaborators, or my affiliated organizations (though I hope not to straddle
that fine line between "instigation" and "inflaming" in anything I present
below).
Some Opening Praise for Foresight
There are, basically, three formats for scientific conferences. The first is
discipline-intensive, where everyone attending needs no introduction and
certainly needs no introductory slides (see the division rosters at most any
National ACS conference). The
only use of showing an example of
Watson-Crick base pairing
at a DNA nanotechnology conference of this format is to find out who found the
most aesthetically-pleasing image on "the Google."
There is the middle ground, where a single conference will have multiple
sessions divided into half-day or so tracks, allowing the carbon nanotube
chemists to see work in their field, then spend the rest of the conference
arguing points and comparing notes in the hotel lobby while the DNA scientists
occupy the conference room. The
FNANO
conference is of a format like this, which is an excellent way to run a
conference when scientists dominate the attendee list.
Finally, there is the one-speaker-per-discipline approach, where introductory
material consumes roughly 1/3 of each talk and attendees are given a taste of a
broad range of research areas. Such conferences are nontrivial for individual
academics to organize within a research plan, but are quite straightforward to
put together for external organizations with suitable budgets.
To my mind, Foresight
came close to perfecting this final approach for nanoscience over the course of
its annual Conferences on Molecular Nanotechnology. Much like the organizational
Roadmap meetings and the Roadmap conference itself, these Foresight conferences
served as two-day reviews of the entire field of nanoscience by people directly
involved in furthering the cause. In my own case, research ideas and
collaborations were formed that continue to this day and that I am sure would
not have formed otherwise. The attendee lists were far broader than the research itself,
mixing industry (the people turning research into products), government (the
people turning ideas into funding opportunities), and media (the people bringing
new discoveries to the attention of the public). One cannot say enough about the
usefulness of such broad-based conferences, which are instrumental in endeavors to
bring the variety of research areas currently under study into a single focus,
such as in the form of a technology Roadmap.
Why A "Productive Nanosystems" Roadmap?
The semiconductor industry
has its Roadmap. The
hydrogen storage community has its Roadmap. The
quantum computing and
cryptography
communities have their Roadmaps. These are major research and development
projects in groundbreaking areas that are not in obvious competition with one
another but see the need for all to benefit from all of the developments within
a field (in spirit, anyway). How could a single individual or research group
plan 20 years into the future (quantum computing) or plan for the absolute limit
of a technology (semiconductor)?
The
Technology Roadmap for Productive Nanosystems falls into the former
category: it is as much an effort to take a snapshot of current research and
very short-term pathways towards nanosystems in general as it is an effort to
begin plotting research directions that take advantage of the cross-disciplinary
work now under way in National Labs and large research universities towards
increasing complexity in nanoscale study.
On one far end of the spectrum, the "productive nanosystem" in all of its
atomically-precise glory as envisioned by many forward-thinking scientists is a
distant, famously debated, and occasionally ridiculed idea that far exceeds our
current understanding within any area of the physical or natural sciences. Ask
the workers on the first Model T assembly line how they expected robotics to
affect the livelihoods and the productivity of the assembly lines of their
grandchildren's generation, and you can begin to comprehend just how
incomprehensible the notion of a fully developed desktop nanofactory or medical
nanodevice is even to many people working in nanoscience.
On the other end of the spectrum (and this, I think, is the primary motivation
for molecular manufacturing research), it seems rather narrow-minded and
short-sighted to believe that we will never be able to control the fabrication
of matter at the atomic scale.
The prediction that scientists will still be unable in 50 years to abstract a
carbon atom from a diamond lattice or build a computer processing unit by
placing individual atoms within an insulating lattice of other atoms seems
absurd. That is, of course, not to say that
molecular
manufacturing-based approaches to the positional control of individual atoms
for fabrication purposes will be the best approach to generating various
materials, devices, or complicated nanosystems (yes, I'm in the field and I
state that to be a perfectly sound possibility).
To say that we will never have that kind of control, however, is a bold
statement: it assumes scientific progress will hit some kind of technological
wall. Given our current ability to manipulate individual hydrogen atoms (the
smallest atoms we have to work with) with positional control on atomic
lattices, that wall seems porous enough that atomically precise manufacturing,
including the mechanical approaches envisioned in molecular manufacturing
research, will continue on undaunted.
approaches to atomic manipulation, engineers can make the final decision of how
best to use the available technologies. Basically and bluntly, futurists are
planning the perfect paragraph in their heads while researchers are still
putting the keyboard together. That, of course, has been and will always
be the case at every step in human (and other!) development. And I mean that in
the most positive sense of the comparison. Some of my best friends are futurists
and provide some of the best reasons for putting together that keyboard in the
first place.
Perhaps a sea change over the next ten years will involve molecular
manufacturing antagonists beginning to agree that "better methods exist for
getting A or B" instead of now arguing that "molecular manufacturing towards A
and B is a waste of a thesis."
That said, it is important to recognize that the Technology Roadmap for
Productive Nanosystems is not a molecular manufacturing Roadmap, but rather a
Roadmap that serves to guide the development of nanosystems capable of atomic
precision in the manufacturing processes of molecules and larger systems. The
difference is largely semantic, though, founded in the descriptors of molecular
manufacturing as some of us have come to know and love it.
Definitions!
If we take the working definitions from the Roadmap...
Nanosystems are interacting nanoscale structures, components, and
devices.
Functional nanosystems are nanosystems that process material, energy, or
information.
Atomically precise structures are structures that consist of a specific
arrangement of atoms.
Atomically precise technology (APT) is any technology that exploits
atomically precise structures of substantial complexity.
Atomically precise functional nanosystems (APFNs) are functional
nanosystems that incorporate one or more nanoscale components that have
atomically precise structures of substantial complexity.
Atomically precise self-assembly (APSA) is any process in which
atomically precise structures align spontaneously and bind to form an atomically
precise structure of substantial complexity.
Atomically precise manufacturing (APM) is any manufacturing technology
that provides the capability to make atomically precise structures, components,
and devices under programmable control.
Atomically precise productive nanosystems (APPNs) are functional
nanosystems that make atomically precise structures, components, and devices
under programmable control, that is, they are advanced functional nanosystems
that perform atomically precise manufacturing.
The last definition is the clincher. It combines atomic precision (which means
you know the properties of a system at the atomic level and can, given the
position of one atom, know the rest of the system absolutely) and
programmable control (meaning information is translated into matter assembly).
Atomic precision does not mean "mostly (7,7) carbon nanotubes of more-or-less 20
nm lengths," "chemical reactions of more than 90% yield," "gold nanoparticles of
about 100 nm diameters," or "molecular nanocrystals with about 1000 molecules."
That is not atomic precision, only our current level of control over
matter. I am of the same opinion as
J. Fraser Stoddart,
who described the state of chemistry (in his
Feynman
Experimental Prize lecture) as "an 18 month old" learning the words of
chemistry but not yet able to speak the short sentences of supramolecular
assembly and simple functional chemical systems, to compose the paragraphs of
complex devices from self-assembling or directed molecules, or to write the
novels that approach the scales of nanofactories, entire cells, or whatever
hybrid system all scientists can first point to as a true productive nanosystem.
Plainly, there is no elegant, highly developed field in
the physical or natural sciences. None. Doesn't exist, and anyone arguing
otherwise is acknowledging that progress in their field is dead in the water.
Even chiseled stone was state-of-the-art at one point.
The closest thing we know of towards the productive nanosystem end is the
ribosome, a productive nanosystem that takes information (mRNA) and turns it
into matter (peptides) using a limited set of chemical reactions (amide bond
formation) and a very limited set of building materials (amino acids) to make a
very narrow range of products (proteins) which just happen to, in concert, lead
to living organisms. The ribosome serves as another important example for the
Roadmap. Atomic precision in materials and products does not mean
absolute positional knowledge in an engineering, fab facility manner. Most
cellular processes do not require knowledge of the location of any component,
only that those components will eventually come into Brownian-driven contact.
Molecular manufacturing proponents often point to the ribosome as "the example"
among reasons to believe that engineered matter is possible with atomic
precision. The logical progression from ribosome to
diamondoid nanofactory, if that progression exists on a well-behaved
wavefunction (continuous, finite -- yeesh -- with pleasant first derivatives), is
a series of substantial leaps of technological progress that molecular
manufacturing opponents believe may/can/will never be made. Fortunately, most of
them are not involved in research towards a molecular manufacturing end and so
are not providing examples of how it cannot be done, while those of us doing
molecular manufacturing research are showing both the potential and the
potential pitfalls, all the while happy to be doing the dirty work for opponents
in the interest of pushing the field along.
It is difficult to imagine that any single discipline will contain within its
practitioners all of the technology and know-how to provide the waiting world
with a productive nanosystem of any kind. The synthetic know-how to break and
form chemical bonds, the supramolecular understanding to be able to predict how
surfaces may interact as either part of self-assembly processes or as part of
mechanical assembly, the systems design to understand how the various parts will
come together, the physical and quantum chemistry to explain what's actually
happening and recommend improvements as part of the design and modeling process,
the characterization equipment to follow both device assembly and manufacturing:
each of these aspects relevant to the assembly and operation of productive
nanosystems is, in isolation, an area of current research to which many
researchers devote their entire careers and which is still very much in
development.
However, many branches of science are starting to merge, and the first formal
efforts at systems design across those disciplines may come to be considered the
ACTUAL beginning of experimental nanotechnology. The
interdisciplinaritization (yes, made that one up myself) of scientific research
is being pushed hard at major research institutions by way of the development of
Research Centers, large-scale facilities that intentionally house numerous
departments or simply broad ranges of individual research. Like research efforts
into atomically precise manufacturing, the pursuit of interdisciplinary research
is a combination of bottom-up and top-down approaches, with the bottom-up effort
a result of individual researchers collaborating on new projects as ideas and
opportunities allow and the top-down efforts a result of research universities
funding the building of Research Centers and, as an important addition, state
and federal funding agencies providing grant opportunities supporting
multi-disciplinary efforts and facilities.
But is that enough? Considering all of the varied research being performed in
the world, is it enough that unionized cats are herding themselves into small
packs to pursue various ends, or is there some greater benefit to having a
document that not only helps to put their research into the context of the
larger field of all nanoscience research, but also helps them draw connections
to other efforts? Will some cats choose to herd themselves when presented with a
good reason?
The Roadmap is not only a document that describes approaches to place us on the
way to Productive Nanosystems. It is also a significant summary of current
nanoscale research that came out of the three National Lab Working Group
meetings. As one might expect, these meetings were very much along the lines of
a typical Foresight Conference, in which every half hour saw a research
presentation on a completely different subject; because each presentation
provided a foundation for the development of pathways and future directions,
their intersections soon became apparent. The same is true of the research and
application talks at
the official SME
release conference. It's almost a law of science. Put two researchers into a
room and, eventually, a joint project will emerge.
On to the Conference
In describing my reactions to the conference, I'm going to skip many, many
details, inviting you, the reader, to check out the Roadmap proper when it's
made available online and, until then, to read through Chris Phoenix's
live-blogging.
As for what I will make mention of...
Pathways Panel
A panel consisting of Schafmeister, Randall, Drexler, and Firman (with Von Ehr
moderating) from the last section of the first day covered major pathway
branches presented in the Roadmap, with all the
important points caught by Chris Phoenix's QWERTY mastery.
I'll spare the discussion, as it was covered so well by Chris, but I will point
out a few important take-homes:
Firman said, "Negative results are a caustic subject... while fusing proteins,
sometimes we get two proteins that change each other's properties. And that's a
negative result, and doesn't get published. It shouldn't be lost." Given the
survey nature of the types of quantum chemical calculations being performed to
model tooltip designs that might be used for the purposes of mechanosynthesis
(molecular manufacturing or otherwise),
Drexler,
Freitas,
Merkle, and
I spend
considerable time diagnosing failure modes and possibly unusable molecular
designs, making what might otherwise be "negative results" important additions
to our respective design and analysis protocols. Wired readers will note
that Thomas Goetz covered this topic ("Dark Data") and some web efforts to make
this type of data available in Issue 15.10.
I loved the panel’s discussion of replication, long a point of great controversy
over both safety concerns and feasibility. Drexler mentioned how his original notion of a
"replicator" as proposed in
Engines of Creation is obsolete for pragmatic/logistical reasons. But
the next comment was from Schafmeister, who, in his research talk, had proposed
something that performs a form of replication (yes, that's the experimental
chemist making the bold statement); it would be driven externally, but
nonetheless something someone could imagine eventually automating. Christian
also performed a heroic feat in his talk by presenting what he himself called a
"science fiction" pathway for applying his lab research to a far more
technically demanding end, something far down the road as part of his larger
research vision.
Randall, on the use of the Roadmap, said, "The value of the Roadmap will be
judged by the number of people who read it and try to use it. Value will
increase exponentially if we come back and update it." The nature of nanoscience
research is that six months can mean a revolution. I (and a few others at the
very first Working Group meeting) had been familiar with structural DNA
nanotechnology, mostly from having seen
Ned Seeman present
something new at every research talk (that is also a feat in the sciences, where
a laboratory is producing quickly enough to always have results to hand off to the
professor in time for the next conference). The Rothemund
DNA Origami paper [PDF] was a turning point for many and made a profound
statement on the potential of DNA nanotech. I was amazed by it. Drexler's
discussions on the possibilities have been and continue to be contagious.
William Shih
mentioned that his research base changed fundamentally because of DNA Origami,
and seeing the complexity of the designs AND the elegance of the experimental
studies out of his group at the Roadmap Conference only cemented in my mind just
how fast a new idea can be extended into other applications. It would not
surprise me if several major advances before the first revision of the Roadmap
required major overhauls of large technical sections. At the very least, I hope
that scientific progress requires it.
Applications Panel
A panel consisting of Hall, Maniar, Theis, O'Neill (with Pearl moderating) from
the last section of the second day covered applications, with short-term and
very long-term visions represented on the panel (again,
all caught by Chris Phoenix).
For those who don't know him,
Josh Hall was the wildcard of the applications panel, both for contemplations of
technology far more distant than anything otherwise represented at the
conference and for his exhaustive historical perspective (he can synthesize
quite a bit of tech history and remind us just how little we actually know given
the current state of technology and how we perceive it; O'Neill mentioned this
as well, see below). Josh is far and away the most enlightening and entertaining
after-dinner raconteur I know. As a computer scientist who remembers wheeling
around hard drives in his graduate days, Josh knows well the technological
revolutions within the semiconductor industry and just how difficult it can be
for even industry insiders to gauge the path ahead and its consequences on
researchers and consumers.
Papu made an interesting point I'd not thought of before. While research labs
can push the absolute limits of nanotechnology in pursuit of new materials or
devices, manufacturers can only make the products that their facilities, or
their outsourcing partner facilities, can make with the equipment they have
available. A research lab antenna might represent a five-year leap in the
technology, but it can’t make it into today's mobile phone if the fab facility
can't churn it out in its modern
6 Sigma
manifestation.
Nanoscience isn't just about materials, but also new equipment for synthesis and
characterization, and the equipment for that is expensive in its first few
generations. While it’s perhaps inappropriate to refer to "consumer grade"
products as the "dumbed down" version of "research grade" technologies,
investors and conspiracy theorists alike can take comfort in knowing that there
really is "above-level" technology in laboratories just hoping the company lasts
long enough to provide a product in the next cycle.
O'Neill said, "To some of my friends, graphite epoxy is just black aluminum."
This comment was in regard to how a previous engineering and technician
generation sees advances in specific areas relative to their own mindset and not
as part of continuing advancements in their fields. It's safe to say that we all
love progress, but many fear change. The progress in science parallels that in
technology, and the ability to keep up with the state-of-the-art, much less put
it into practice as Papu described, is by no means a trivial matter. Just as
medical doctors require recertification, scientists must either keep up with
technology or simply see their efforts slow relative to every subsequent
generation. Part of the benefit of interdisciplinary research is that the
expertise in a separate field is provided automatically upon collaboration.
Given the time to understand the physics and the cost of equipment nowadays,
most researchers are all too happy to pass off major steps in development to
someone else.
Closing Thoughts
Non-researchers know the feeling. We've all fumbled with a new technology at one
point or another, be it a new cell phone or a new (improved?) operating system,
deciding to either "learn only the basics" or throw our hands up in disgust.
Imagine having your entire profession changed from the ground up or, even worse,
having your profession disappear because of technology. Research happening today
in nanoscience will serve a disruptive role in virtually all areas of technology
and our economy. Entire industries, too. Can you imagine the first catalytic
system that effortlessly turns water into hydrogen and oxygen gas? If filling
the tank of your jimmied VW ever means turning on your kitchen spigot, will your
neighborhood gas station survive selling peanut M&M's
and Snapple at ridiculous prices?
Imagining the Future
By Jamais Cascio, CRN Director of Impacts Analysis
I'm one of the lucky individuals who makes a living by thinking about what we
may be facing in the years ahead. Those of us who follow this professional path
have a variety of tools and methods at our disposal, from subjective
brainstorming to models and simulations. I tend to follow a middle path, one
that tries to give some structure to imagined futures; in much of the work that
I do, I rely on scenarios.
Recently, the Center for Responsible Nanotechnology
undertook a project to develop a variety of scenarios regarding the
different ways in which molecular manufacturing might develop. One of the
explicit goals of that project was to come up with a broad cross-section of
different types of deployment -- and in that task, I think we succeeded.
I'd like to offer up a different take on scenarios for this month's newsletter
essay, however. In the last scenario project, we used "drivers" -- the key
factors shaping how major outcomes play out -- that were consciously intended to
reflect issues specific to the development of molecular manufacturing. It's
also possible, however, to use a set of drivers with broader applicability,
teasing out specific scenarios from the general firmament. Such drivers usually
describe very high-level cultural, political and/or economic factors, allowing a
consistent set of heuristics to be applied to a variety of topics.
Recently, I developed a
set of scenarios for a project called "Green Tomorrows." While the scenario
stories themselves concerned different responses to the growing climate crisis,
the drivers I used operated at a more general level -- and could readily be
applied to thinking about different potential futures for
molecular manufacturing. The two drivers, each with two extremes, combine to
give four different images of the kinds of choices we'll face in the coming
decade or two.
The drivers I chose reflect my personal view that both how we live and how we
develop our tools and systems are ultimately political decisions. The first,
"Who Makes the Rules?", covers a spectrum from Centralized to Distributed. Is
the locus of authority and decision-making limited to small numbers of powerful
leaders, or found more broadly in the choices made by everyday citizens, working
both collaboratively and individually? The second, "How Do We Use Technology?",
runs from Precautionary to Proactionary. Do the choices we make with both
current and emerging technologies tend to adopt a "look before you leap" or a
"he who hesitates is lost" approach?
So, how do these combine?
The first scenario, occupying the combination of Centralized rule-making and
Precautionary technology use, is "Care Bears." The name refers to online games
in which the rules prevent players from attacking each other. For players who
want no controls, the rules are overly restrictive and remove the element of
surprise and innovation; for players who just want an enjoyable experience, the
rules are a welcome relief.
In this scenario, then, top-down rule-making with an emphasis on preventing harm
slows the overall rate of molecular manufacturing progress. The result
is a world where nanotechnology-derived solutions are harder to come by, but one
where nanotechnology-derived risks are less likely, as well. This is something
of a baseline scenario for people who believe that regulation, licensing, and
controls on research and development are ultimately good solutions for avoiding
disastrous outcomes. The stability of the scenario, however, depends upon both
how well the top-down controls work, and whether emerging
capabilities of molecular manufacturing tempt some people or states to grab
greater power. If this scenario breaks, it could easily push into the
lower/right world.
The second scenario, combining Centralized rule-making and Proactionary
technology use, is "There Once Was A Planet Called Earth..." The name sets out
the story fairly concisely: competition between centralized powers seeking to
adopt the most powerful technologies as quickly as possible -- whether for
benign or malignant reasons -- stands a very strong likelihood of leading to a
devastating conflict. For me, this is the scenario most likely to lead to a bad
outcome.
Mutually assured global destruction is not the only possible outcome; the more
probable path out of this scenario is a shift towards greater restrictions and
controls.
This could happen because people see the risks and act accordingly, but is more
likely to happen because of an accident or conflict that brings us to the brink
of disaster. In such a scenario, increasing restrictions (moving from
proactionary to precautionary) are more likely than increasing freedom (moving
from centralized to distributed).
The third scenario, combining Distributed rule-making and Proactionary
technology use, is "Open Source Heaven/Open Source Apocalypse." The name
reflects the two quite divergent possibilities inherent in this scenario: one
where the spread of user knowledge and access to molecular manufacturing
technologies actually makes the world safer by giving more people the ability to
recognize and respond to accidents and threats, and one where the spread of
knowledge and access makes it possible for super-empowered angry individuals to
unleash destruction without warning, from anywhere.
My own bias is towards the "Open Source Heaven" version, but I recognize the
risks that this entails. We wouldn't last long if the knowledge of how to make a
device that would blow up the planet with a single button-push became
widespread, and some of the arguments around the destructive potential of
late-game molecular manufacturing seem to approach that level of threat.
Conversely, it's not hard to find evidence that open source knowledge and access
tends to offer greater long-term safety and stability than a closed approach
does, and that insufficiently closed projects, which leak out to interested and
committed malefactors (but not as readily to those who might help to defend
against them), offer the risks of opening up without any of the benefits.
Finally, the fourth scenario, combining Distributed rule-making and
Precautionary technology use, is "We Are As Gods, So We Might As Well Get Good
At It." Stewart Brand used that as an opening line for his
Whole
Earth Catalogs, reflecting his sense that the emerging potential of new
technologies and social models gave us -- as human beings -- access to far
greater capabilities than ever before, and that our survival depended upon
careful, considered examination of the implications of this fact.
In this world, widespread knowledge of and access to molecular manufacturing
technologies give us a chance to deal with some of the more pressing big
problems we as a planet face -- extreme poverty, hunger, global warming, and the
like -- in effect allowing us breathing room to take stock of what kind of
future we'd like to create. Those individuals tempted to use these capabilities
for personal aggrandizement have to face a knowledgeable and empowered populace,
as do those states seeking to take control away from the citizenry. This is,
sadly, the least likely of the four worlds.
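As a minimal sketch of the four-box structure described above (an illustration
only: the driver extremes and scenario titles come from the essay, while the
Python names are invented for the example), the combinations can be written out
as a simple lookup:

    # Minimal sketch of the two-driver, four-scenario structure described above.
    # Driver extremes and scenario titles come from the essay; the function and
    # variable names are illustrative, not part of the original project.
    SCENARIOS = {
        ("Centralized", "Precautionary"): "Care Bears",
        ("Centralized", "Proactionary"):
            "There Once Was A Planet Called Earth...",
        ("Distributed", "Proactionary"):
            "Open Source Heaven/Open Source Apocalypse",
        ("Distributed", "Precautionary"):
            "We Are As Gods, So We Might As Well Get Good At It",
    }

    def scenario(who_makes_the_rules, how_we_use_technology):
        """Return the scenario named by a pair of driver extremes."""
        return SCENARIOS[(who_makes_the_rules, how_we_use_technology)]

    # Example: Distributed rule-making combined with Precautionary technology use.
    print(scenario("Distributed", "Precautionary"))

Laid out this way, the point is easier to see: each scenario is simply one
combination of a rule-making extreme and a technology-use extreme.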
But you don't have to take my word for it. This "four box" structure doesn't
offer predictions, but a set of lenses with which to understand possible
outcomes and the strategies that might be employed to reach or avoid them. The
world that will emerge will undoubtedly have elements of all four scenarios, as
different nations and regions are likely to take different paths. The main
purpose of this structure is to prompt discussion about what we can do now to
push towards the kind of world in which we'd want to live, and to thrive.
Restating CRN’s Purpose
By Jamais Cascio, Director of Impacts Analysis
How soon could molecular manufacturing (MM) arrive? It's an important question,
and one that the Center for Responsible Nanotechnology takes seriously. In our
recently released series of scenarios for the
emergence of molecular manufacturing, we talk about MM appearing by late in the
next decade; on the CRN main website, we describe MM as plausible
as early as 2015. If you follow the broader
conversation online and in the technical media about molecular manufacturing,
however, you might argue that such timelines are quite aggressive, and not at
all the consensus.
You'd be right.
CRN doesn't talk about the possible emergence of molecular manufacturing by
2015-2020 because we think that this timeline is necessarily the most realistic
forecast. Instead, we use that timeline because the purpose of the Center for
Responsible Nanotechnology is not prediction, but preparation.
While arguably not the most likely outcome, the emergence of molecular
manufacturing by 2015 is entirely plausible. A variety of public projects
underway today could, given the right solutions to current production challenges,
conceivably bring about the first working nanofactory within a decade. Covert
projects could do so as well, or even sooner, especially if they've been
underway for some time.
CRN's leaders do not focus on how soon molecular manufacturing could emerge
simply out of an affection for nifty technology, or as an aid to making
investment decisions, or to be technology pundits. The CRN timeline has always
been in the service of the larger goal of making useful preparations for (and
devising effective responses to) the onset of molecular manufacturing, so as to
avoid the worst possible outcomes such technology could unleash. We believe that
the risks of undesirable results increase if molecular manufacturing emerges as
a surprise, with leading nations (or companies, or NGOs) tempted to embrace
their first-mover advantage economically, politically, or militarily.
Recognizing that this event could plausibly happen in the next decade -- even if
the mainstream conclusion is that it's unlikely before 2025 or 2030 -- elicits
what we consider to be an appropriate sense of urgency regarding the need to be
prepared. Facing a world of molecular manufacturing without adequate forethought
is a far, far worse outcome than developing plans and policies for a
slow-to-arrive event.
There's a larger issue at work here, too, particularly in regard to the
scenario project. The further out we push the discussion of the likely arrival
of molecular manufacturing, the more difficult it becomes to make any kind of
useful observations about the political, environmental, economic, social and
especially technological context in which MM could occur. It's much more likely
that the world of 2020 will have conditions familiar to those of us in 2007 or
2008 than will the world of 2030 or 2040.
Barring what Nassim Nicholas Taleb calls "Black
Swans" (radical, transformative surprise developments that are extremely
difficult to predict), we can have a reasonable image of the kinds of drivers
the people of a decade hence might face. The same simply cannot be said for a
world 20 or 30 years down the road -- there are too many variables and
possible surprises. Devising scenarios that operate in the more conservative
timeframe would actually reduce their value as planning and preparation tools.
Again, this comes down to wanting to prepare for an outcome known to be almost
certain in the long term, and impossible to rule out in the near term.
CRN's Director of Research Communities Jessica Margolin noted in conversation
that this is a familiar concept for those of us who live in earthquake country.
We know, in the San Francisco region, that the Hayward Fault is
near-certain to unleash
a major (magnitude 7+) earthquake sometime this century. Even though the mainstream
geophysicists' view is that such a quake may not hit for another
couple of decades, it could happen tomorrow. Because of this, there are public
programs to educate people on what to have on hand, and wise residents of the
region have stocked up accordingly.
While we Bay Area residents go about our lives assuming that the emergency bottled
water and the batteries we have stored will expire unused, we know that if that
assumption is wrong we'll be extremely relieved to have planned ahead.
The same is true for the work of the Center for Responsible Nanotechnology. It
may well be that molecular manufacturing remains 20 or 30 years off and that the
preparations we make now will eventually "expire." But if it happens sooner --
if it happens "tomorrow," figuratively speaking -- we'll be very glad we started
preparing early.