
Sander Olson Interviews

Hugo de Garis

Question 1: Tell us about yourself. What is your background, and what activities are you currently engaged in?

I’m 54; I was born and grew up in Australia. I call myself a “brain builder,” and I am now a professor of computer science at Utah State University. I’ll be teaching the world’s first PhD course in “brain building,” and I’ve been hired to set up a “BBC” (Brain Building Center).

Question 2: Tell us about your “Brain Building” projects.

To build a “brain” you need a special-purpose machine. Currently there are four first-generation machines in the world, each costing about half a million dollars. My previous lab was in Brussels, but it went bankrupt. There are two potential projects I’m working on, both worth about $100 million.

I consider myself the father of the “evolvable hardware” field. I didn’t technically invent the theory, but I got the concept off the ground, and evolvable hardware is now an established field with its own journals and conferences. You configure these chips and dictate their architecture. You have two circuits: a random circuit, and a second circuit that measures how the random circuit is performing. If you have 100 of these circuit pairs, you can measure how the circuits perform and score them. You throw out the “bad” ones that don’t perform well, and since each measurement takes only nanoseconds, the whole system evolves at incredible speed.

Say you have 100 “bit streams,” each an instruction to do something. You measure how well each bit stream does its task, throw out the bad ones, and copy the good ones. The big breakthrough is doing this at hardware speeds: I can evolve neural networks in seconds using these special “brain building” machines. I’m trying to obtain one and bring it here, and the idea is to come out with a new brain building machine every five years or so.
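The loop described here, score a population of bit streams, cull the worst, and copy the best with random changes, can be sketched in ordinary software; the brain building machines perform the same steps in hardware, vastly faster. This is an illustrative sketch only, with a toy fitness function (counting 1 bits) standing in for a real circuit-evaluation task:

```python
import random

def evolve(pop_size=100, bits=32, generations=200, seed=0):
    """Minimal evolutionary loop: score every bit stream, cull the
    worst half, refill with mutated copies of the survivors."""
    rng = random.Random(seed)
    # A "bit stream" is just a list of 0/1 values.
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # toy stand-in task: maximize the number of 1 bits

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # score and rank every "circuit"
        survivors = pop[: pop_size // 2]      # throw out the bad ones
        children = []
        for parent in survivors:              # copy the good ones...
            child = parent[:]
            child[rng.randrange(bits)] ^= 1   # ...with one random mutation
            children.append(child)
        pop = survivors + children

    return max(pop, key=fitness)

best = evolve()
```

Because the best half always survives unchanged, the top score never decreases, which is what makes the throw-out-and-copy loop converge.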

To give you an idea of what it can do, the first-generation machine can handle nearly 100 million neurons, and it can evolve a neural network circuit of 1,000 neurons in a second. It’s now practical to evolve tens of thousands of these circuits, and a human, either an “evolutionary engineer” or a higher-level “brain architect,” will give the brain building machine a task and evolve the modules. This part is still pretty much a black art. I’m thinking that within about five years there will be a new category of jobs in computer science, and an industry will emerge.

Question 3: It seems that this “Brain Building” paradigm has enormous potential. Do you have an ultimate goal for it, for say the next 20 years?

I’m hoping that a “Brain Building” industry will be started within the next five years; it may even be sooner. I already have a business partner who is pushing hard to create the world’s first brain building company, but I feel I still need to prove the concept by actually building a brain. The bankruptcy of my previous lab took the wind out of my sails and has pretty much put my plans on hold for the past year. In the meantime I’m moving on to a next-generation machine, because the Field Programmable Gate Array (FPGA) chips in the first-generation machine are five years old and obsolete.

Question 4: Could this project result in a sentient, self-aware machine?

Probably not in my lifetime, and I doubt it very much in the next 20 years. I don’t think we know enough yet about what consciousness is. I read a lot of brain science, and I have a large wall filled with books on the subject, and basically I’m very disappointed. The neuroscientists, to put it bluntly, haven’t a clue. They know quite a bit about the nitty-gritty fine details, but they don’t know the big-picture, fundamental stuff about how the brain functions.

I think we will need full-blown nanotech to create very powerful new tools that revolutionize brain science and let us figure out how the brain actually works. Once we know that, we can immediately translate those ideas into neuroengineering, and then of course we could speed things up a million times and provide virtually unlimited memory capacities. I talk about a “Moore’s window,” as in Moore’s law, because it’s only going to last another 20 years: if we extrapolate current trends, we’ll be storing information in single atoms in 20 years, and going to the next level, femtotech, would require something like 100,000 times as much energy.

Question 5: Speaking of Moore’s law, how much longer do you think it will continue? Aren’t your “Brain Building” projects dependent on the continuation of Moore’s law?

The continuation of Moore’s law is absolutely fundamental to my projects. In fact, today’s computing style is incredibly inefficient. If you put your hand on your PC or laptop, you’ll notice it’s always warm; that’s because we waste a lot of energy as heat in the way we compute today. It’s the destruction of information, the clearing of bits, that produces the heat (Landauer’s principle). In the 1970s an IBM researcher named Charles Bennett found a way to compute without destroying information and generating heat. The concept is called reversible computing: you never throw away information; you just reverse the computation and end up with what you started with. We’ll have to do this, because at the nanoscale, with molecular-scale circuits, so much heat is produced that the circuits won’t just melt, they will explode. But with heatless computing you can have solid 3D blocks of computing circuitry. And if Moore’s law holds, we will be storing bits in single atoms, and an atom can switch states in a femtosecond, a quadrillionth of a second.
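The heat cost of clearing bits has a concrete lower bound: Landauer’s principle says erasing one bit at temperature T must dissipate at least kT ln 2 of energy. A quick back-of-envelope check (the gigahertz-scale chip at the end is a hypothetical illustration, not a real device):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in J/K (exact SI value)
T = 300.0            # room temperature in kelvin

# Landauer limit: minimum energy dissipated per erased bit
landauer_joules = k_B * T * math.log(2)
print(f"{landauer_joules:.2e} J per erased bit")   # ~2.87e-21 J

# A hypothetical chip erasing 10^18 bits per second would dissipate
# only ~3 mW at this limit; real irreversible chips waste vastly more
# per operation, which is the gap reversible computing aims to close.
chip_watts = landauer_joules * 1e18
print(f"{chip_watts:.1e} W")
```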

I’m holding an orange in my hand. How many atoms are there in an orange? Approximately a trillion trillion. Imagine each one of them switching in femtoseconds; the potential computing capacity of what’s coming in ten years or so is mind-boggling, like an avalanche thundering down on us. I will be active for the next 20 years, so I’m here at the right time. This is an incredibly exciting time. Thirty years out? We’ll definitely see hugely more powerful artificial brains, and I think that will become a major industry: robots that can clean your house, sex robots, teacher robots, friendship robots. The potential market is astronomical. The big-ticket items people buy will be AIs, and they may even be fairly inexpensive, since once you’ve made one, the rest are copies.
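The “trillion trillion” figure can be sanity-checked with rough arithmetic. Assuming a 150-gram orange made mostly of water (both the mass and the pure-water composition are assumptions for the estimate):

```python
AVOGADRO = 6.02214076e23   # molecules per mole

orange_grams = 150.0       # assumed mass of the orange
molar_mass_water = 18.0    # g/mol for H2O
atoms_per_molecule = 3     # 2 hydrogens + 1 oxygen

molecules = (orange_grams / molar_mass_water) * AVOGADRO
atoms = molecules * atoms_per_molecule
print(f"{atoms:.1e} atoms")   # ~1.5e25, within an order of magnitude
                              # of a trillion trillion (1e24)

# If every atom flipped state once per femtosecond (1e15 times a second):
switches_per_second = atoms * 1e15
print(f"{switches_per_second:.1e} switches/s")   # ~1.5e40
```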

Question 6: Could you see a time, twenty years from now, in which most items are designed on your Brain Building machines?

I see very strong prospects for evolutionary engineering. I’m somewhat skeptical of Drexlerian nanotechnology: how do you build a human-scale object using that approach? Probably the major alternative is the way nature does it, manufacturing things using what I call an embryogenic approach.

Question 7: Would this be a combination of biochemistry and evolutionary engineering?

Yes. The evolutionary engineering comes in because it’s virtually impossible to predict exactly what kind of creature a given stretch of DNA will build. So how can you build a more complex creature in an embryogenic way? I think the answer is a Darwinian approach: you make random changes and judge the result. If it’s good you keep it; if it’s bad you kill it off. It may be the only effective way forward as complexity levels become that high. I’ve already reached that level with my circuits: there is just no way I could understand how they function unless I spent a man-year or so on each one, and there is no point, because my machine can handle 64,000 of them and I can evolve a new one in a second. My second-generation machine will probably handle about a million of these circuits, far too many for human teams to evolve one by one. So a second-generation machine will have to automate the evolution of multi-module systems as a unit, and right now I have no idea how to do that. That’s the major research challenge right now.

Question 8: What is your opinion of a technological singularity? Do you think that we are heading towards a singularity?

I have two major life goals. One is to build artificial brains: to get this field established, become editor-in-chief of a journal on “Brain Building,” teach the world’s first course on the subject, hopefully come out with a definitive textbook, and generate lots of students who go out into the brain building industries and build them up. But I’m not a purely scientist/engineer type; I’ve always been interested in social, political, and philosophical issues. That’s the other half of me.

Long term, say 50 to 100 years, I am very, very worried, and I’m predicting the worst war that humanity has ever seen. Specifically, I’m concerned about the issue of species dominance. And when you look at the technology, I haven’t even mentioned quantum computing, which is exponentially more powerful than classical computing. Once its technical problems are solved, I imagine a whole flood of applications; we’re limited more by our imaginations than anything else.

So when you think of femtosecond switching, a trillion trillion bits, and self-assembling three-dimensional circuitry with no heat, all of these fabulous 21st-century technologies will force the issue: humanity will have to decide whether we remain the dominant species or not. Today this idea is pretty much science fiction to most people, but a growing number of people like me take the prospect seriously, and it’s a question of when, not if. It will definitely be in this century. Humanity will have to make this enormous decision: whether we build these “artilects,” artificial intellects. These artilects could be godlike. Imagine a self-assembling artilect the size of an asteroid. When you do the math and analyze the potential capacity of these things compared to the human brain, you realize that these artilects would be literally trillions of times superior to the human brain. If you start taking these numbers seriously, you start asking serious political questions. Imagine that humanity does create these artilects: they would be immortal, they could go anywhere, change their shape, have virtually unlimited memory capacities, have a huge number of sensors, and think a million times faster than humans. At the moment this is all science fiction, but the debate is starting to heat up among specialists in the field.

I’m a member of the World Economic Forum, an institution that attracts world leaders, Fortune 500 CEOs, and other influential people. I’m trying to assemble a panel with other members, such as Hans Moravec, Ray Kurzweil, and Kevin Warwick. We’re going to push this issue, because it is going to supplant economics as the dominant political preoccupation. In time, as the performance of artificial brains keeps rising every year, the number-one issue will become species dominance. I see a debate heating up within five years or less; eventually it will rage. The machines will get smarter and smarter until virtually every thinking person asks, “Where is all this going? What happens when the IQ gap between the machines and humans closes? Are we going to let these machines soar past us? Should we stop it? Can we stop it?” I don’t think we can stop it even if we want to, because there are such enormous economic and military pressures pushing in favor of smarter machines. In the time frame I’m talking about, I’m predicting a huge rivalry between the U.S. and China, so the military won’t stop, even if the public is outraged.

There are probably trillions and trillions of lifeforms in the universe, and if humanity decides to transition from “biological” to “artilectual,” that is probably just a natural phenomenon that has occurred throughout the universe. If you continue this line of thinking, it actually answers the Fermi paradox: among the trillions of stars out there, the transition from biological to artilectual has probably already occurred many times, and there is probably a universal linkup among these artilects. Why would they pay any attention to us, given how primitive and slow we are?

Question 9: Philosophers such as Searle and Penrose argue against the feasibility of Artificial Intelligence. What is your response to their arguments?

I was in a television debate with Penrose; the debate is up on the website. Penrose argues that there is more to consciousness than computing. He may be right; it is an open question. But we are intelligent, conscious machines, so obviously we are an existence proof.

I’ve met Searle, and I found him quite flexible. His attitude was that if machines are sufficiently brainlike, then it would be logical to think that they would behave in brainlike ways, so I have no argument with him. To me, the fallacy in the Chinese room argument is that the Chinese room as a system is functioning as a neural net: an individual neuron doesn’t understand what it is doing either; it is simply following rules. I get somewhat impatient with philosophers who don’t know the nitty-gritty and seem to miss the point all the time.

Question 10: What are your plans for the future, for the next 10 years?

I’ll be bitterly disappointed and shocked if a brain building industry doesn’t exist within the next decade, because progress in electronics is going to make it a near certainty. A third-generation machine, for instance, should be able to handle a billion neurons. It will be just like the development of rockets in the last century: we went from basically firecrackers to the V-2 to the Saturn V, and I see brain building going the same way. In fact, I talk of national brain building projects.

The brain is incredibly complex, with about 100 billion neurons, and its complexity is such that nations will put huge resources into it. Animals essentially have brain modules, which can be translated and understood. We see this evolution now with the Aibo robot pet in Japan: the first-generation Aibo had about a dozen behaviors, the second generation can recognize about 50 words, and the third, now under development, will have face recognition. These things will get smarter until everyone starts asking deep questions about where this is going, and whether it should, or even can, be stopped.

I see the world evolving into factions. One celebrates the technological accomplishments of artilects and sees this activity as a kind of religion; I call this faction the cosmists. Another will be adamantly opposed, claiming that this is too risky; I term this faction the terrans. And there will be a third group, cybernetic organisms, who try to convert themselves into human-artilect cyborgs. I anticipate bitter ideological conflict between the cosmists and the terrans.

Some writers, such as Ray Kurzweil, say we should turn ourselves into artilects. I don’t think this will avoid the conflict; the emergence of cyborgs will just make the situation worse. You’re no longer talking about the defense of a country, but the survival of the species. I’m extremely pessimistic in the long term, and I have guilt feelings about this, because I’m part of the problem: I’m hoping to build the world’s first artificial brain.

This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN.




Copyright © 2002-2008 Center for Responsible Nanotechnology™. CRN is an affiliate of World Care®, an international, non-profit 501(c)(3) organization.