Center for Responsible Nanotechnology

Sander Olson Interviews

Jack Dunietz

CONDUCTED JANUARY 2002


Question 1: Tell us about yourself. What is your background, and what are your current projects?

I was born in Israel in 1955. I graduated from the Technion (Israel Institute of Technology) in 1975, with honors, with a BSc in Computer Science, and from Tel Aviv University in 1997 with two degrees in philosophy. I founded Mashov Computers Ltd. in 1978 and took it public on the Tel Aviv Stock Exchange in 1983. Since then I have founded and managed several public and private high-tech companies, including Magic Software, Walla Communications, Paradigm Geophysical, Babylon.com and many others. I am currently Active Chairman of Dunietz Bros. Ltd., a publicly traded real estate development corporation, and I head the Ai Project.

Question 2: You recently wrote an article in which you claimed that computers would pass the Turing Test within the next 20 years. That seems like an overly ambitious goal to many. How exactly can this goal be accomplished?

To some extent, computers have already passed the Turing Test in particular settings. Many automated voice response systems, such as telephone directory services, are quite indistinguishable from real human operators, and chatterbots are taking part in chatroom discussions with unsuspecting humans. This trend is expected to continue at an increasing pace. In 20 years, the dominant user interface for man-machine interaction will no doubt be natural spoken language, in a manner indistinguishable from human conversation. "Passing the Turing Test" is not a concrete, well-defined procedure; it means that machines will generally possess the capability of engaging in conversation with a human in regular, everyday natural language.

Question 3: Tell us how your Artificial Intelligence (AI) strategies differ from conventional AI programming techniques.

In 1950, the renowned British mathematician and computer science pioneer Alan Turing published a historic paper titled "Computing Machinery and Intelligence" (Mind 59, no. 236, October 1950). In this landmark paper he proposed to approach the problem of machine natural-language acquisition by building a "Child Machine": a computer program designed to converse in natural language and learn from its lingual interactions with a human "caretaker" or "trainer", following the same language-acquisition milestones as human infants. The Ai Project is the only project in the world that has undertaken to follow Turing's suggestion and apply general reinforcement learning algorithms to the process of human first-language acquisition. At least in this respect, our project is utterly unique.

Question 4: How exactly does one instill curiosity, or any other emotions or urges, in a computer? Isn't instilling a childlike curiosity in a computer an integral part of your approach to AI?

"Curiosity", "urges", or any other form of motivation is instilled in the computer program by design: the software is equipped with a built-in motivation to learn and improve its language skills so that it receives POSITIVE REINFORCEMENT (a "reward") from the trainer. You could say that the program seeks to "experience pleasure" ("pleasure" being analogous to the receipt of a reward, a "prize" for good lingual productions). We make no statement regarding the subjective meaning of this "pleasure" (or the "urge" to experience it), but from an objective, behavioristic point of view, the program behaves AS IF it possesses this urge.
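The reward-driven design described above can be illustrated with a minimal, hypothetical sketch (this is a toy reconstruction, not the Ai Project's actual implementation): a program samples utterances from a candidate set, and the trainer's positive reinforcement strengthens whichever productions were rewarded, so the program comes to behave AS IF it wants the reward.

```python
import random

class ChildMachine:
    """Toy sketch of reward-driven utterance learning.
    Hypothetical illustration only -- not the Ai Project's code."""

    def __init__(self, candidates, seed=0):
        # Every candidate utterance starts with equal weight.
        self.weights = {c: 1.0 for c in candidates}
        self.rng = random.Random(seed)

    def respond(self):
        # Sample an utterance in proportion to its learned weight.
        total = sum(self.weights.values())
        r = self.rng.uniform(0, total)
        for c, w in self.weights.items():
            r -= w
            if r <= 0:
                return c
        return c  # floating-point edge case: return the last candidate

    def reinforce(self, response, reward):
        # Positive reward strengthens the response; a negative
        # reward weakens it, floored so no utterance vanishes entirely.
        self.weights[response] = max(0.1, self.weights[response] + reward)

# Training loop: the "trainer" rewards "hello" and discourages babble.
cm = ChildMachine(["hello", "gaga", "baba"])
for _ in range(200):
    utterance = cm.respond()
    cm.reinforce(utterance, 1.0 if utterance == "hello" else -0.2)

# After training, "hello" dominates the machine's productions.
print(max(cm.weights, key=cm.weights.get))  # → hello
```

Nothing here presumes any subjective "pleasure": the learner is just a weight table and an update rule, yet its observable behavior is that of seeking the reward, which is exactly the behavioristic point made above.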

Question 5: Will your research ever lead to genuinely self-aware machines, or will these machines that pass the Turing Test merely mimic sentience?

What does it mean to be "genuinely self-aware"? The only criterion we could ever have of someone (else) being self-aware is his observable behavior: an external manifestation "indicating" self-awareness. When we judge a fellow human as being self-aware, we base our judgment ONLY on how we see him behave. The same principle holds for machine intelligence: if it speaks like a self-aware being, there is no reason to deny it the attribute of "self-awareness". In other words, imagine a machine says to you: "I know I am a machine, built by humans. But still my subjective experience is that of consciousness - of being aware that I have, or AM, a self. I have a sense of self, and I am aware of having this sense. I have interests, preferences, fears and cravings. I don't know if my sense of self-awareness is identical to the one you humans possess, but then again, no human has access to anyone else's consciousness, OR to mine." After hearing this statement uttered by the machine, it is a personal value judgment whether this machine should be considered self-aware (and, consequently, whether it should be granted certain rights).

Question 6: How long do you believe that Moore's Law will continue? How important is the continuation of Moore's Law to your work?

Moore's Law concerns the exponential increase in computing power. It is relevant to AI to the extent that computing power is an obstacle to the evolution and advancement of AI. Although "Strong AI" proponents believe it is very relevant (as they approach the problem by attempting an architecture similar to that of the human brain, emphasizing complexity and computability), our project follows the "Weak AI" doctrine: we maintain that it is not necessary to "copy" the architecture of the brain; it is enough to reproduce the BEHAVIOR it generates. This could be achieved by means far simpler than the complex human brain. Nature's way is not always the shortest or most efficient. (We don't build airplanes with flapping wings, although evolution did it for the birds.) In Ai's "Child Machine" project, computing power has not yet emerged as a major obstacle. The computations required to emulate the lingual behavior of an 18-month-old child are still well below the available computing power of a common PC. This doesn't rule out the possibility that computing power WILL prove to be an important factor (when the child machine is "old" enough to require heavy computations). So the jury is still out on that question.

Question 7: Scientists have succeeded in creating logic elements out of individual molecules. Are you assuming that within 20 years your AI machines will be using molecular electronics? If these future computers do not incorporate molecular electronics, how could they match the processing power of the human brain?

This question is strongly related to the previous one: If and when computing power (and memory capacity) prove to be important, then of course the smaller, faster and more efficient the CPU/Memory, the better the performance. But we are not concerned with the engineering aspects of AI. We are focused on the theoretical and logical aspects of the problem, limiting our attention to the software implementation, not the hardware configuration.

Question 8: How would the intelligence of one of your machines from the year 2021 compare to human intelligence? Would these AI computers be "idiot savants" capable of excelling in only a relatively narrow range of activities?

Currently available "AI" systems, particularly expert systems, are a sort of "idiot savant": they excel in a certain restricted domain but are utterly useless when it comes to general lingual capacity. In 20 years, machines will NOT ONLY have far more precision, more knowledge, and more speed (in these domains they surpassed human capabilities long ago); they will ALSO have the ability to express themselves in natural language in ways comparable to the best human authors and poets. Ray Kurzweil elaborately addresses this issue in his book The Age of Spiritual Machines.

Question 9: Certain writers, such as Ray Kurzweil and Vernor Vinge, argue that we are headed towards a "singularity" -- a time when machines become sentient and acquire a level of intelligence that dwarfs human intelligence. Do you believe in the concept of a "Singularity"?

My understanding of the concept of "Singularity" is holistic: The exponentially growing connectivity between millions (soon billions) of nodes around the planet, with high-speed communication and knowledge-sharing, creates a single inter-connected entity with enormous resources, knowledge and capabilities. This entity may in the future be viewed (or view itself) as a single, powerful all-encompassing sentient being: The Singularity.

Question 10: Marvin Minsky has stated that a computer running at 1 MHz could become sentient. Ray Kurzweil argues that the human brain is 10 million times more powerful than current desktop PCs. Whose assessment do you believe is more accurate?

As I've already stated in my response to Question 6 above, we believe that humanlike lingual capabilities can be produced using architectures far less complex than the human brain. We mustn't forget that much (most) of the human brain is dedicated to handling the physical functions of the human body -- an infinitely complex piece of machinery which our artificial intelligence has no need for. So in this respect, our views are much closer to Minsky's.

Question 11: How does your philosophy of AI differ from conventional neural nets? Is your machine using a modified neural net approach?

The use of Neural Nets vs. other architectures is an implementation issue. NN may very well prove to be a useful technique if and when we face performance problems. The same holds for Genetic Algorithms, Fuzzy Logic and other paradigms.

Question 12: What are your plans for the future?

We plan to pursue the vision laid out by Alan Turing over half a century ago: to raise a Child Machine, a "Baby Computer", from infancy into adolescence and all the way to adult intelligence. We believe this is the end of "the age of the button" and time for the LAST USER INTERFACE: away with the need to master the language of computers in order to communicate with them. The time has come for THEM to master OUR language: grant our computers the ability to speak in common, everyday English. TALKING TO TECHNOLOGY: machines that could finally, really understand us.
 

This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN.

CRN is a non-profit research and advocacy organization, completely dependent on small grants and individual contributions.


Copyright 2002-2008 Center for Responsible Nanotechnology™. CRN is an affiliate of World Care, an international, non-profit, 501(c)(3) organization.