
Sander Olson Interviews

André DeHon 

CONDUCTED OCTOBER 2004


Dr. André DeHon is a Professor of Computer Science at the California Institute of Technology. He has done pioneering research on many aspects of computation, including sublithographic computing, interconnects, and Programmable Logic Arrays.
 

Question 1: Tell us about yourself. What is your background, and what projects are you currently working on?

I won't do that one justice here. My web page and publication record should give many hints about that. I double majored in EE and CS at the undergraduate level because I wanted to design computers...and, at the time, VLSI and electrical circuits were the primary media for doing that.

I increasingly believe in the fundamental interaction between the substrate and its cost structure and what architectures (microarchitectures) are efficient for it. This is what led me into reconfigurable work. In hindsight, I can say that the silicon cost structure outgrew processors in the mid-1990s. FPGAs and FPGA-computing were the hint that there was something bigger to explore.

In addition to the substrate cost, the mapping tools play a big role in the architecture you want to build. So, starting as an architect, I've expanded to understand substrates, architecture, tools, run-time systems, and programming models---all the things that play into building a computer system. I continue to aspire to be the "Tall Thin Architect" which Carver Mead described.

I have three big themes in my lab these days:

  1. Compute models that will allow us to exploit modern/emerging silicon (exploit spatial computing systems --- go beyond the ISA abstraction)
  2. Interconnect -- understand the fundamental tradeoffs, requirements, and how to design
  3. Sublithographic computing systems -- how do we design/build/manage systems built without lithography

This sublithographic area is interesting to me as I see it changing the substrate costs even further. How does that change the efficient architectures?

If Howard Roark had been a Computer Architect, he might have asked:

Are we making copies
in submicron CMOS
of copies in early NMOS
of copies in discrete TTL
of vacuum tube computers?

When we design with nanowires or nanotubes,
should we make copies in nanowires
of copies in submicron CMOS
of copies in early NMOS
of copies in discrete TTL
of vacuum tube computers?

Question 2: How long do you believe that classical semiconductor scaling can continue? Do you think that we will ever see 20 nanometer transistors in mass production?

I guess I should be clear: I'm an academic, not an economist. I wouldn't bet against the semiconductor industry. However, I think there may be some good alternatives to pushing the current lithographic model. I hope we can demonstrate some better options before the industry gets there. Perhaps the stuff we're doing will be how they produce those 20nm (and smaller) transistors in mass production?

Question 3: Many molecular electronics schemes have been proposed. Which molecular electronics paradigm do you believe has the greatest chance of success?

Mine, of course! [laughs]

Within the stuff we're looking at, there are versions that are all silicon (or GaN...SiGe...). These things are potentially quite compatible with the existing silicon infrastructure. In the short run, at least, this looks like the most promising thing. I see paths to introduce these components incrementally into the traditional silicon manufacturing flow.

Question 4: There are many companies examining various molecular memory schemes, but molecular logic is a much dicier proposition. Are there molecular electronics technologies that exhibit good amplification (gain)?

The nanowires that Charles Lieber builds, and we've been designing with, appear to exhibit sufficient gain for logic operation, but one could quibble as to whether or not this is "molecular". It's certainly "molecular-scale," and the SiNW gain elements here are compatible with using molecules for the switches, if that turns out to be the best way to build the switches. We just don't depend on the programmable switches for gain.

One architecture is laid out pretty clearly in our recent FPGA 2004 paper.

Question 5: Tell us about configurable computing. Could Field-Programmable Gate Array (FPGA) chips ever be reprogrammed "on the fly", or be used for general computing tasks? Could FPGAs ever render CPUs obsolete?

That (first part, at least) was my PhD thesis 8 years ago.

For a more concise version of why you want to use FPGA-like things, see:  André DeHon. The Density Advantage of Configurable Computing. IEEE Computer, 33(4):41--49, April 2000.

FPGAs alone won't render CPUs obsolete. They complement them. The real question is what percentage of your die area should be FPGA-like logic versus CPU-like logic. I think there's an argument that, over time, the percentage of area on the die going into CPU-like logic diminishes.

If there's something that renders CPUs obsolete, it may be sublithographic fabrication---which appears to favor building things that look more like FPGAs than CPUs.

The work around my PhD actually contained the most aggressive approaches for "on the fly" reconfiguration. My more recent work looks at other options and begins to deal with programming models for this and supporting run-time routines to make it profitable:

  * http://brass.cs.berkeley.edu/documents/score_fpl2000.html
  * http://brass.cs.berkeley.edu/documents/fpga02_sched.html
  * http://www.cs.caltech.edu/research/ic/abstracts/hwassistsa_fpga2003.html
  * http://www.cs.caltech.edu/research/ic/abstracts/fastroute_fccm2002.html

Question 6: One of the most severe problems plaguing modern computing systems is heat dissipation. Some modern CPUs already dissipate up to 100 watts, and this problem will only get worse. Can FPGAs ameliorate this problem?

They may be able to help, but bigger measures are necessary to tackle both processor and FPGA energy consumption.

Some numbers I ran a while back suggested that, raw bit-op per bit-op, the CPU and the FPGA were pretty close in energy requirements in conventional silicon---making worst-case assumptions. There's an argument I can make that the FPGA may be able to better exploit correlation in the data to reduce the activity factor...and this can produce a big win. Jan Rabaey had some data comparing processors and FPGAs that showed the FPGAs coming in at almost an order of magnitude lower net energy per operation (compared to low-energy processor designs like the StrongARM). I think that comes from this correlation effect.

So, maybe the FPGAs have two things going for them:

  1. Ability to better exploit data correlations
  2. More flexibility in how the resources can be adapted to the problem...less wasted energy because of the architecture/problem mismatch

However, I think that's not enough of a win for the problem to go away. I think both processors and FPGAs will need to start looking at techniques like adiabatic switching.
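As a rough illustration of the activity-factor argument (my sketch, not numbers from the interview): in the standard dynamic-energy model, switching energy per cycle scales as alpha * C * V^2 per node, so a fabric that exploits data correlation to reduce the fraction of nodes toggling each cycle (alpha) saves energy proportionally. All constants below are invented round numbers.

```python
# Back-of-the-envelope dynamic-energy model: E ~ alpha * C * V^2 per node.
# Illustrates how a lower activity factor (alpha) -- e.g., from exploiting
# correlated data -- reduces switching energy. All values are illustrative
# assumptions, not measurements from the interview.

def dynamic_energy_per_cycle(alpha, c_farads, v_volts, nodes):
    """Joules switched per cycle across `nodes` capacitive nodes."""
    return alpha * c_farads * v_volts ** 2 * nodes

C_NODE = 2e-15     # ~2 fF per node (assumed)
VDD = 1.2          # supply voltage, volts (assumed)
NODES = 1_000_000  # clocked nodes on the die (assumed)

worst = dynamic_energy_per_cycle(1.0, C_NODE, VDD, NODES)       # every node toggles
correlated = dynamic_energy_per_cycle(0.1, C_NODE, VDD, NODES)  # ~10% toggle

print(f"worst-case:  {worst:.2e} J/cycle")
print(f"correlated:  {correlated:.2e} J/cycle")
print(f"savings:     {worst / correlated:.0f}x")
```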

Question 7: What is your opinion of the concept of 3-dimensional computing? Will it ever be feasible? How will computers deal with heat buildup issues?

I have ideas on how to make it feasible. There may be a synergy between nanowires and 3D computing structures.

This is where I think adiabatic switching is most motivated. If 3D gives us more area...such that heat-density is the key limiter to useful computational density, let's trade some area for reduced energy. I think we can find a sweet-spot which provides greater computational density than simply staying with 2D. But, there's definitely quite a bit of research needed to work this out --- a rich area to explore.
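To make the "trade some area for reduced energy" argument concrete, here is a hedged back-of-the-envelope sketch of adiabatic switching. The standard result is that charging a node capacitance C through resistance R with a voltage ramp of duration T dissipates roughly (RC/T) * C * V^2, versus about 0.5 * C * V^2 for conventional abrupt switching, valid when T >> RC; slowing the ramp (and spending more parallel area to keep throughput) buys a proportional energy reduction. The R, C, and V values are illustrative assumptions.

```python
# Hedged sketch: adiabatic charging dissipates ~ (RC/T) * C * V^2 for a
# ramp of duration T >> RC, versus ~ 0.5 * C * V^2 for abrupt switching.
# All device values are illustrative assumptions.

R = 10e3      # effective charging resistance, ohms (assumed)
C = 2e-15     # node capacitance, farads (assumed)
V = 1.2       # supply voltage, volts (assumed)
tau = R * C   # RC time constant

conventional = 0.5 * C * V ** 2  # energy lost per abrupt charging event

for ramp in (10 * tau, 100 * tau, 1000 * tau):
    adiabatic = (tau / ramp) * C * V ** 2
    print(f"ramp = {ramp / tau:5.0f} * RC: "
          f"{conventional / adiabatic:5.0f}x less energy than abrupt switching")
```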

Question 8: What is your opinion of artificial intelligence? Do you believe that we will ever see truly sentient machine systems?

Understanding intelligence is a great goal. Automation is good -- it's what computing is all about. We will see much smarter machines than we have today. I'm not sure if we understand "sentience" enough, yet, to be able to judge if a machine is sentient...but trying to understand that is also a laudable goal.

In an increasing number of focused domains, we'll certainly see things that start looking pretty sentient.

Question 9: The researcher Hugo de Garis wants to use FPGAs to make "evolvable hardware". In essence, this hardware would alter its circuitry to perform tasks better. What is your opinion of evolvable hardware?

To the extent that you can say "FPGA circuits can be altered," computer systems that "alter their circuitry to perform tasks better" will happen. I believe in that. I think we're seeing the first glimpses of how to do that...and we have a long way to go.

(By the way, are the circuits being altered? Or are we just loading in different instructions? {See my PhD thesis for more.} This becomes a philosophical question. Universal Turing equivalence tells us, in the end, it doesn't matter.)

Dynamic branch prediction is an example of this at some level. Trace history, JIT compilation, and on-the-fly binary translations are others. These are definitely small things at the start of a long journey, but they are the beginning.
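For readers who want to see the branch-prediction example in miniature, here is a sketch of the classic two-bit saturating-counter predictor (my illustration, not code from the interview): a tiny state machine adjusts itself from run-time branch history, so its behavior improves on repetitive patterns without anyone reprogramming it.

```python
# Minimal two-bit saturating-counter branch predictor: states 0-1 predict
# "not taken", states 2-3 predict "taken"; each outcome nudges the counter
# toward what actually happened, saturating at both ends.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start in "weakly taken"

    def predict(self):
        return self.state >= 2  # True = predict taken

    def update(self, taken):
        if taken:
            self.state = min(self.state + 1, 3)
        else:
            self.state = max(self.state - 1, 0)

# A loop branch: taken nine times, then falls through once, repeated.
history = ([True] * 9 + [False]) * 10

p = TwoBitPredictor()
correct = 0
for outcome in history:
    if p.predict() == outcome:
        correct += 1
    p.update(outcome)

print(f"accuracy: {correct / len(history):.0%}")  # ~90% on this pattern
```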

Whether or not the technique that is used to accomplish that is "evolvable"/"evolutionary" is a separate question about which I am less certain.

Randomness is clearly a very powerful technique in our optimization arsenal, and I think we will use it more and more in our automated optimization. How much that use looks like today's concepts of "genetic algorithms" I'm less sure of...and whether or not these future techniques are "evolvable" may be in the eye of the beholder. I suspect there will be smarter things we can do than most of the "genetic" algorithms which I'm most familiar with, but random walks and fitness criteria are likely to play a role.

Our spatial placement work, for example, has many of the key components:

  * we can use a device to find a good placement for itself -- a placement that allows the device to perform better (i.e., this is a way for the FPGA to "alter its circuitry to perform tasks better")
  * we exploit randomness in the search space
  * we use a "fitness" function

...but I don't think you'd call it "evolvable."
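To give a flavor of "randomness plus a fitness function" in placement, here is a minimal, hypothetical sketch of a simulated-annealing-style placer (my illustration, not the actual tool from the work described above): cells are swapped at random on a grid, scored by total Manhattan wirelength, and occasional uphill moves are accepted so the search can escape local minima.

```python
# Hypothetical annealing-style placement sketch: random swaps scored by a
# wirelength "fitness" function, with temperature-controlled acceptance of
# worse moves. A toy illustration, not a production placer.

import math
import random

def wirelength(placement, nets):
    """Fitness: total Manhattan length over two-pin nets (lower is better)."""
    total = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def place(cells, nets, grid, steps=20000, temp=2.0, cool=0.9995):
    # Start from a random assignment of cells to grid slots.
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    random.shuffle(slots)
    placement = dict(zip(cells, slots))
    cost = wirelength(placement, nets)
    for _ in range(steps):
        a, b = random.sample(cells, 2)  # random move: swap two cells
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wirelength(placement, nets)
        delta = new_cost - cost
        # Keep improvements; keep some uphill moves while "hot".
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cost = new_cost
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
        temp *= cool
    return placement, cost

cells = [f"c{i}" for i in range(16)]
nets = [(f"c{i}", f"c{i + 1}") for i in range(15)]  # a simple chain netlist
final, cost = place(cells, nets, grid=4)
print("final wirelength:", cost)
```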

Question 10: What are your plans for the next decade?

The start of it, at least, continues with the themes we're on. Find the right computing model to exploit the hardware capabilities...figure out how our computing models and stacks need to change to deal with atomic-scale building blocks...figure out how to raise the abstraction levels at which we program these things and increase the automation ("intelligence" if you want to call it that) with which we extract performance from them. We've probably got our hands full working out 2D nanowire architectures for the next few years, but this continues into the third dimension as noted above.

This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN.
