
Problems and Projections in CS for the Next 49 Years

JOHN McCARTHY

Stanford University, Stanford, California

Author's address: John McCarthy, Computer Science Department, Stanford University, Stanford, CA 94305; email: jmc@cs.stanford.edu; url: http://www-formal.stanford.edu/jmc/.

Projection 50 years ahead is difficult, so I have eased my problem by offering problems and projections for only 49 years. My projections concern areas in which I have worked---artificial intelligence, mathematical theory of computation, and computer systems. However, I am surely not up to date in any of these areas, so some of what I project for the future may have already happened. I also talk about the problems caused by the way computing and computer science are practiced. We shouldn't exaggerate. The computer has caused big changes in society, but these probably aren't as great as those caused by the automobile.

The future is a continuation of the past, so we treat progress in areas of computers and computer science as a continuation of their histories.

1. The Personal Computer.

Let's begin by dating some facilities provided by computers. In each case the first date is when some people had the facility and the second is when it was widespread.

1.1 Computer Cost of Writing and Email Becomes Marginal.

By the late 1970s for some people and by the 1990s for almost all, computer power for writing and for email did not have to be rationed.

1.2 Online All the Time at Home or at Work.

For a few (me) this started in 1968. By the 1990s, it was widespread. The obvious culmination is that via a computer attached to clothing or even built into the body, one will be online all the time. It seems to me that the main payoff has already been achieved with computers at home and in the office. The additional time a person spends doing something with a computer through having it all the time will be a small fraction of what he is doing already.

1.3 Email to Worldwide Destinations. 1970 to 1990s.

1.4 Point and Click. 1980s.

1.5 Pocket Computer. 1990 to ?.

They aren't good enough for me yet. Surely we'll soon have rollable pocket displays or head-mounted displays good enough so that one can wear them continuously. Having the computer facilities of one's desk computer or laptop always at hand will be nice, but it will be a minor improvement, because the reason one is away from one's computer is to engage in some noncomputer activity.

1.6 Online Buying and Selling via the Web. 1990s.

This has a lot further to go.

1.7 Search Engines. 1980s to present.

Google is the first reasonably adequate web search engine. It can't answer questions but can often find an adequate human-readable source of information.

In several important respects, the use of computers is more difficult than it was in the 1970s and 1980s. The complexity of basic facilities like editors, operating systems, and application programs has grown now that the limitations on RAM and disk have been enormously relaxed.

Point and click has made it very difficult, even for programmers, to customize their environments. It has encouraged the authoritarian tendencies of people who design systems for other people to use. Another difficulty arises from the Microsoft and Apple basic software being proprietary and secret. Linux has made a minor improvement. Now the ordinary computer user needs the services of system administrators who do no programming but know how the facilities are connected together.

Perhaps this situation can be relieved by programs that can understand the features of system programs and can connect them in order to achieve goals specified in interaction with the user. Doing this right involves some AI but much less than human-level AI.

Here are some areas with problems to solve and opportunities for progress.

2. Programming Languages.

In some important respects, recently developed languages, for example, Java, are a step backward from Lisp. For example, while Java does include automatic garbage collection (which its ancestor C++ did not), it still doesn't include functions as first-class objects (like Lisp lambda-expressions); these facilitate the creation of programs which themselves inspect, synthesize, or otherwise manipulate programs [and this has many important applications]. A Lisp program can look at its own internal structure and modify itself or create a new Lisp program and interpret it right away. A Java program could modify itself at run time only by reference to the byte code, not the source code. Lisp also benefits from its internal form being list structure with the lead word saying what the expression is; for example, an assignment statement is (setq ...). To recognize an assignment statement, a Java program must search for =. The Java program could also use a Java parser to parse itself into a suitable data structure.
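As a minimal sketch of this point (the function names are mine, for illustration only, not from the article), a Lisp program can build a new expression as list structure, recognize what kind of expression it is from the leading symbol, and evaluate it on the spot:

    ;; Illustrative Common Lisp sketch: programs are list structure, so a
    ;; program can inspect and construct other programs directly.
    (defun assignment-p (form)
      "Recognize an assignment by its leading symbol."
      (and (consp form) (eq (first form) 'setq)))

    (defun make-increment (var)
      "Synthesize a new assignment form that increments VAR."
      (list 'setq var (list '+ var 1)))

    (defparameter *counter* 3)
    (let ((form (make-increment '*counter*)))   ; => (SETQ *COUNTER* (+ *COUNTER* 1))
      (when (assignment-p form)
        (eval form)))                           ; *counter* is now 4

The corresponding manipulation in Java would have to go through source text or byte code rather than through the program's own data structures.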

In the next 49 years, programs that extend themselves at run time will become important.

I see opportunities for improvement in regarding input-output as consisting of speech acts. Thus, computers can make requests or promises to people or other computer systems. They can also ask questions and give truthful and responsive answers to the questions.

The proper performance of speech acts gives new specifications for programs. For example, we can ask whether a program keeps its promises or at least intends to keep them. The speech acts can be represented in machines as strings, as XML, or (better) as Lisp S-expressions, but they will have an abstract syntax and semantics at a higher level, and promise(proposition, person) can be a program statement meaning to promise to person that proposition will be true. See http://www-formal.stanford.edu/jmc/elephant.html for some ideas that are still not at the level of concrete proposals.
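As a minimal sketch of the idea (the names and data layout here are hypothetical illustrations, not the Elephant 2000 design), a speech act can be represented as structured Lisp data whose correctness condition can be checked:

    ;; Hypothetical sketch: a promise as structured data with an explicit
    ;; correctness condition, rather than an uninterpreted string.
    (defstruct speech-act kind addressee proposition)

    (defun promise (addressee proposition)
      "Record a promise to ADDRESSEE that PROPOSITION will be made true."
      (make-speech-act :kind 'promise
                       :addressee addressee
                       :proposition proposition))

    (defun promise-kept-p (act holds-p)
      "HOLDS-P is a predicate, supplied by the environment, that tests whether a proposition has become true."
      (and (eq (speech-act-kind act) 'promise)
           (funcall holds-p (speech-act-proposition act))))

    ;; Example: (promise 'alice '(delivered parcel-37))

Whether a program keeps its promises then becomes a checkable property of its behavior rather than of its text alone.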

3. Mathematical Theory of Computation.

This area concerns the mathematical relations among computer programs, computable functions, and data structures.

3.1 Computability and Computational Complexity.

This is the one area in which computer science has developed deep mathematical questions, for example, P = NP. I trust others will characterize what may be expected from it in the next 49 years.

3.2 Channel Capacity?

Here's a problem whose solution would be as impressive as the discovery of NP-completeness: Is there a computational analog of Shannon's channel capacity theorem for communication? Shannon's theorem says that it is possible to transmit information over a communication channel at a rate arbitrarily close to its channel capacity with an arbitrarily low probability of error. A corresponding theorem for computation would say approximately that a computer of speed V would be able to do a computation of size X in time X/V given enough memory.
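To make the analogy concrete (this side-by-side formulation is my paraphrase, not a statement from the article), the two claims can be written:

    % Shannon's noisy-channel coding theorem, informally: for every rate R
    % below the capacity C and every error tolerance epsilon, a suitable
    % code exists.
    \forall R < C,\ \forall \varepsilon > 0 :\ \exists \text{ a code of rate } R
      \text{ with error probability } < \varepsilon .

    % Conjectured computational analog: a machine of speed V, given enough
    % memory, can do any computation of size X in time close to X/V.
    T(X, V) \le \frac{X}{V}\,\bigl(1 + o(1)\bigr).

The hard part, of course, is finding definitions of the "size" X of a computation and the "speed" V of a machine under which such a theorem is both true and nontrivial.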

3.3 Proving Correctness of Computer Programs.

In principle, no one should pay money for a computer program until its specifications are expressed formally, the program is proved to meet them, and the proof is mechanically checked. Considerable progress was made in the 1960s and 1970s, the outstanding achievement being the Boyer-Moore interactive theorem prover. While progress continues, it has been slowed by demands for short-term payoffs.
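As a minimal illustration of what "specifications expressed formally" can mean (the example is mine, not the article's), a Hoare-style specification for an integer square-root routine reads:

    % Precondition, program fragment, postcondition:
    \{\, n \ge 0 \,\} \quad r := \mathit{isqrt}(n) \quad
      \{\, r^{2} \le n \ \wedge\ n < (r+1)^{2} \,\}

A verified program comes with a machine-checked proof that every execution starting in a state satisfying the precondition ends in a state satisfying the postcondition.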

Today there is already great emphasis on computer programs that interact with people or other programs. Therefore, specification and verification of such programs will become increasingly important. As early as the 1970s, there was work on specifying text editors and proving that they met their specifications.

The inputs and outputs of interactive programs are often well regarded as speech acts as discussed in Austin [1962], Searle [1969], and McCarthy [1996]. Speech acts include promises, requests, acceptances and denials of requests, questions, answers to questions, and pronouncements. Each of these, as discussed in McCarthy [1996], has conditions for its correct execution, e.g. promises should be kept and answers to questions should be true and responsive. These aspects of programs will become increasingly important.

4. Artificial Intelligence.

The long term goal of artificial intelligence research should be human-level AI, that is, computer programs with at least the intellectual capabilities of humans. There are two main approaches to seeking this goal.

The biological approach builds agents that imitate features of the physiology or psychology of humans. Most cognitive science agents imitate humans at the psychological level; connectionist systems and their neural net relatives imitate at the physiological level.

For progress, the biological AI researchers need to figure out how to represent facts independent of their purpose, to make systems capable of sequential behavior, and to figure out what information to build into their systems corresponding to the rich information that human babies are born possessing.

The engineering approach to AI regards the world as presenting certain kinds of problems to an agent trying to survive and achieve goals. It studies directly how to achieve goals. The logical approach is a variety of the engineering approach. A logical agent represents what it knows in logical formulas and infers that certain actions or strategies are appropriate to achieve its goals.

The logical agents of the next 49 years need at least (1) continued existence over time, (2) improved ability to reason about action and change, (3) more elaboration tolerant formalisms, (4) the ability to represent and reason about approximately defined entities, (5) enough self-awareness and introspection to learn from the successes and failures of their previous reasoning, (6) domain-dependent control of theorem provers and problem solvers, and (7) identifying the most basic commonsense knowledge and getting it into the computer. This must include knowledge that people use without being able to formulate it verbally.

The classical AI problems---the frame problem, the qualification problem and the ramification problem [Shanahan 1997]---have been solved for particular formalisms for representing information about action and change. However, these currently known representation formalisms are unsatisfactory in various respects. Improved ways of representing action and change may present new versions of the frame, qualification, and ramification problems.

It's a race to human-level AI, and I think the logical approach is ahead. Why? The logical approach to AI has faced and partly solved problems that all approaches will face sooner or later. Mainly they concern identifying and representing as data and programs information about how the world works. Humans represent this information internally, but only some of it is verbally accessible. The logical approach also has the advantage that when we achieve human-level AI we will understand how intelligence works. Some of the evolutionary approaches might achieve an intelligent machine without anyone understanding how it works.

What are the problems faced by logical AI? Here are a few.

4.1 Facts about Action and Change.

This has been the major concentration of work in logical AI. Present formalisms are pretty good for representing facts about discrete sequences of actions and their consequences. Programs exist for the automatic translation of planning problems so described into propositional theories, and propositional problem solvers are often adequate to solve the problems. The situation is not so good for continuous change, concurrent events, and problems involving the actions of several agents. We can hope for progress in the next 50 years.

4.2 Elaboration Tolerance Including the Frame Problem.

Existing AI systems, including both logical and biological, are extremely specialized in what information they can take into account. Taking into account new information, even information not contradicting previous knowledge, often requires rebuilding the system from scratch. Logical AI has studied elaboration tolerance, special cases of which are presented by the frame problem and the qualification problem. The frame problem involves the implicit specification of what doesn't change when an event occurs, and the qualification problem involves elaborating the sufficient conditions for an event to have its normal effect. [Shanahan 1997] treats elaboration tolerance, and there is also my http://www-formal.stanford.edu/jmc/elaboration.html.
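For illustration (this is the standard Reiter-style successor-state axiom of the situation calculus, not a formula from the article), one common treatment of the frame problem packs the non-effects of actions into a single axiom per fluent f:

    % f holds after action a iff a made it true, or f already held and a
    % did not make it false.
    \mathit{Holds}(f, \mathit{do}(a, s)) \;\leftrightarrow\;
      \gamma^{+}_{f}(a, s) \;\vee\;
      \bigl( \mathit{Holds}(f, s) \wedge \neg\, \gamma^{-}_{f}(a, s) \bigr)

Elaboration tolerance then asks how much of such an axiomatization must be rewritten when a new kind of action, effect, or qualification is added.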

4.3 Nonmonotonic Reasoning.

Humans and machines usually reach conclusions on the basis of incomplete knowledge. These conclusions may change when new knowledge becomes available. Probability theory is applicable to part of the problem, but more general nonmonotonic logics also seem to be required.
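The standard small example, written here in default-logic notation (my illustration, not the article's):

    % "Birds normally fly": conclude Flies(x) from Bird(x) unless doing so
    % is inconsistent with what else is known.
    \frac{\mathit{Bird}(x) \;:\; \mathit{Flies}(x)}{\mathit{Flies}(x)}

From Bird(Tweety) one concludes Flies(Tweety); adding Penguin(Tweety) and the fact that penguins don't fly forces the conclusion to be withdrawn, which no monotonic logic permits.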

4.4 The Three Dimensional World: Approximate Knowledge.

The knowledge available to a person or robot of its three-dimensional surroundings and of the objects it needs to manipulate is almost always very approximate, though sometimes precise geometric models like rectangular parallelepipeds approximate real objects well enough. Logical AI needs a general theory of approximate objects.

4.5 The Relation between Appearance and Reality.

We and any robots we may build live in a world of three-dimensional objects built up from substances that are, in turn, made of molecules. Our information about this world comes directly from senses like vision and hearing that carry only partial information about the objects. We, even babies, are built to infer information about the three-dimensional objects from observations and from general information about what kinds of objects there are.

Machine learning and most AI have treated classifying appearances but do not go beyond appearance to the reality. I have a puzzle about this (http://www-formal.stanford.edu/jmc/appearance.html). I think the relation of appearance and reality will be an important topic of AI research in the next 49 years.

Logical AI has faced the above phenomena. Other approaches haven't faced them explicitly. Maybe it will turn out that they don't have to, but I have seen no arguments to that effect.

My views on many of the specific problems are in articles published in various journals; almost all are to be found on my web page http://www-formal.stanford.edu/jmc/. There is no complete summary, but http://www-formal.stanford.edu/jmc/whatisai.html and http://www-formal.stanford.edu/jmc/logicalai.html express the logical AI point of view. [Shanahan 1997] covers much of the logical AI approach.

No approach is close to human-level AI or within development range. No one can convincingly say that given a billion dollars he could reach human-level AI. A critical level of AI will be reached, as Douglas Lenat pointed out, when an AI system has enough basic common sense information to be able to get more information by reading books.

Human-level intelligence is a difficult scientific problem and probably needs some new ideas. These are more likely to be invented by a person of genius than as part of a government or industry project. Of course, present ideas and techniques are sufficient for many useful applications.

Still we can ask for a subjective probability that human-level AI will be reached in the next 49 years. In the past, I have said that human-level AI will take between 5 and 500 years. I'll guess 0.5 probability in the next 49 years but a 0.25 probability that 49 years from now, the problems will be just as confusing as they are today.

4.6 What if We Get Human-Level AI in the Next 49 Years?

The current speculation about the consequences of getting human-level AI is mostly beside the point, because it concentrates on the possible personalities of the intelligent agents. Present ideas about what human-level AI will be like seem to come from science fiction. Consequently, discussions of policies concerning AI are presently beside the point.

Because of the speed of computers, qualitatively human-level AI means quantitatively superhuman AI. Its immediate applications will be to get advice relevant to decisions of what to do. Even here the speculation, especially science fiction, is beside the point. In the typical story, the human asks what to do, the machine answers, and taking the machine's advice has unexpected and unpleasant consequences. The proper use of an AI advisor is to ask it the consequences of many possible courses of action and to have it give the reasoning leading to its opinion.

The main danger is of people using AI to take unfair advantage of other people. However, we won't know enough to regulate it until we see what it actually looks like. I would hate to see the American presidential candidates in 2004 asked for their positions on AI.

5. Quantum Computers.

I believe they will be made to work. There are many physical phenomena being tried and the discovery of quantum error correction makes getting a general quantum computer a finite task. On the other hand, there is still only Shor's factoring algorithm as an example of a spectacularly useful quantum algorithm. The next 49 years will surely tell us what quantum computers are good for. I'll conjecture that factoring will turn out to be a paradigmatic example.

6. Economics of Information.

Many cost-effective improvements in the way information is handled are not done, because society is not organized to pay for them properly. We can hope for improvements in the economic organization of information in the next 49 years. Here are some examples.

(1) Since about 1970, it has been economically feasible to put the world's literature, approximately the U.S. Library of Congress, online and make it available worldwide. It is now rather cheap, but John Ockerbloom's library catalog www.digital.library.upenn.edu lists only about 20,000 freely available books. The situation was made worse by the extension of copyright from 75 to 95 years. The economic interests of publishers have dominated those of authors and readers. There is more money available than ever before to pay authors for their talent and work. This will get even better if the publishers can be squeezed as the new technologies warrant. Technologically, there is still a little way to go before everyone can sit in his bathtub with a waterproof flat screen and browse the world's literature.

(2) The world would benefit from a lot more manpower going into free software. The volunteer efforts associated with the Free Software Foundation could use ten times the manpower. We need a way of paying for it out of public funds and for encouraging new volunteer efforts. Alternatively, the same result might be obtained by some improvement in the economics of commercial software.

(3) The previous sections of this article have described goals for computing and computer science for the next 49 years. How fast these goals are achieved depends significantly on the attitude of the researchers and the agencies that support research. A large fraction of computer science research in industry and academia over the last 20 years has been wasted, because the projects have had too short a time scale. For example, major researchers in program verification have spent almost all their time on verifying successive parts of microprocessors and have had little support for research in the general theory of verification.

One result has been many academic projects and Silicon Valley companies pursuing essentially identical minor ideas. This is likely to continue during the next 49 years. Big advances will not come from a succession of 25 two-year projects. The rush to patent has resulted in patenting trivial improvements. Among the sciences, computer science has been particularly badly off. One reads all the time about physics and astronomy projects with ten- and twenty-year completion times.

What long-term projects are worthwhile in computer science? For one, there need to be more long-term projects to develop automatic interactive theorem provers, better computer algebra systems, and a Lisp better than Scheme. It is important to choose experimental domains designed to be informative, rather than to emphasize short-term goals. The geneticists have worked with Drosophila since 1910, and yet the fruit flies of today are no better than the fruit flies of 1910. Couldn't the geneticists at least have bred them for speed so we could enjoy fruit fly races? The "practical" types sneer at "toy problems," but these are the Drosophilas.

To summarize, here are some problems I see.

(1) Human-level AI and how to get there;

(2) Getting AI to where programs can learn from books;

(3) Specifying programs that interact with people and other programs;

(4) Making formal proofs that programs meet specifications part of contracts;

(5) Giving users full control of their computing environments, i.e., devising ways for users to reprogram their environments without their having to understand more than necessary;

(6) Giving programming languages primitives for the abstract syntax of the language itself;

(7) Proving a computational analog of the Shannon channel capacity theorem.

REFERENCES

AUSTIN, J. L. 1962. How to Do Things with Words. Oxford University Press, Oxford, England.

MCCARTHY, J. 1996. Elephant 2000. Stanford Formal Reasoning Group. Available only as http://www-formal.stanford.edu/jmc/elephant.html.

SEARLE, J. R. 1969. Speech Acts. Cambridge University Press, Cambridge, England.

SHANAHAN, M. 1997. Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. MIT Press, Cambridge, Mass.

 
 
 