Theory of Computation (Theory of Computing)

王朝other · Author: anonymous  2006-01-08

Computation

From Wikipedia, the free encyclopedia.


The theory of computation, a subfield of computer science and mathematics, is the study of mathematical models of computing, independent of any particular computer hardware. It has its origins early in the twentieth century, before modern electronic computers had been invented. At that time, mathematicians were trying to determine which mathematical problems could be solved by simple methods and which could not. The first step was to define what they meant by a "simple method" for solving a problem. In other words, they needed a formal model of computation.

Several different computational models were devised by these early researchers. One model, the Turing machine, stores characters on an infinitely long tape, with one square at any given time being scanned by a read/write head. Another model, recursive functions, uses functions and function composition to operate on numbers. The lambda calculus uses a similar approach. Still others, including Markov algorithms and Post systems, use grammar-like rules to operate on strings. All of these formalisms were shown to be equivalent in computational power -- that is, any computation that can be performed with one can be performed with any of the others. They are also equivalent in power to the familiar electronic computer, if one pretends that electronic computers have infinite memory. Indeed, it is widely believed that all "proper" formalizations of the concept of algorithm will be equivalent in power to Turing machines; this is known as the Church-Turing thesis. In general, questions of what can be computed by various machines are investigated in computability theory.
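The first of these models is simple enough to simulate in a few lines of code. The sketch below (an illustrative example, not from the original article) runs a Turing machine given as a transition table; the sample machine walks right along the tape, flipping 0s and 1s until it reaches a blank.

```python
# Minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state).
# The tape is a dict so it can grow without bound in either
# direction, mimicking the infinitely long tape of the formal model.

def run_turing_machine(transitions, tape_input, start, accept):
    tape = dict(enumerate(tape_input))
    state, head = start, 0
    while state != accept:
        symbol = tape.get(head, "_")  # "_" stands for the blank symbol
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: scan right, flipping 0 <-> 1, halt at the first blank.
flip = {
    ("scan", "0"): ("1", "R", "done_scan" if False else "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "done"),
}

print(run_turing_machine(flip, "10110", "scan", "done"))  # prints 01001
```

Restricting the machine to a finite tape would collapse it to a finite automaton; the unbounded tape is exactly what gives the model its full power.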

The theory of computation studies these models of general computation, along with the limits of computing: Which problems are (provably) unsolvable by a computer? (See the halting problem.) Which problems are solvable by a computer, but require such an enormously long time to compute that the solution is impractical? (See Presburger arithmetic.) Can it be harder to solve a problem than to check a given solution? (See complexity classes P and NP.) In general, questions concerning the time or space requirements of given problems are investigated in complexity theory.
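The gap between solving and checking can be made concrete with subset sum, a standard NP-complete problem (the sketch below is illustrative and not part of the original article): finding a subset of numbers that adds up to a target may require searching exponentially many candidates, while verifying a proposed subset takes a single pass.

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Brute-force search: may try up to 2**len(nums) subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

def check_subset_sum(nums, target, certificate):
    """Polynomial-time verification of a proposed solution."""
    remaining = list(nums)
    for x in certificate:
        if x not in remaining:
            return False        # certificate uses a number not available
        remaining.remove(x)
    return sum(certificate) == target

nums = [3, 34, 4, 12, 5, 2]
witness = solve_subset_sum(nums, 9)           # exponential-time search
print(witness, check_subset_sum(nums, 9, witness))  # prints [4, 5] True
```

Whether the exponential search can in general be replaced by a polynomial-time one is precisely the open P versus NP question.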

In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, are used to specify string patterns in UNIX and in some programming languages such as Perl. Finite automata, a formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars are used to specify programming language syntax. Nondeterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a naturally defined subclass of the recursive functions.
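The equivalence of regular expressions and finite automata can be shown directly on a small language (a hypothetical example, using Python's `re` module for the regex side): strings over {a, b} containing an even number of a's, described once as a pattern and once as a two-state deterministic automaton.

```python
import re

# Regular-expression description of "even number of a's over {a, b}":
# blocks of b's separated by a's taken two at a time.
PATTERN = re.compile(r"^(b*ab*ab*)*b*$")

# Equivalent deterministic finite automaton: two states track the
# parity of a's seen so far; "even" is both start and accept state.
DFA = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}

def dfa_accepts(s):
    state = "even"
    for ch in s:
        state = DFA[(state, ch)]
    return state == "even"

# The two descriptions agree on every input.
for s in ["", "bab", "aabba", "abab", "bbb"]:
    assert bool(PATTERN.match(s)) == dfa_accepts(s)
```

The DFA needs only finitely many states because parity is all it must remember; languages that require unbounded memory, such as balanced parentheses, fall outside this model.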

Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; this leads to the Chomsky hierarchy of languages.
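For instance (an illustrative sketch, not from the original article), the context-free grammar S → aSb | ε generates the language of strings aⁿbⁿ, which sits at Type 2 of the hierarchy; no regular expression or finite automaton can describe it, since recognizing it requires counting without bound.

```python
# The grammar S -> "a" S "b" | "" as a recursive derivation:
# each call applies the recursive production once.
def derive(n):
    if n == 0:
        return ""                     # S -> epsilon
    return "a" + derive(n - 1) + "b"  # S -> a S b

# Membership test for a^n b^n. The counter i plays the role of the
# pushdown automaton's stack -- unbounded memory a finite automaton lacks.
def in_language(s):
    i = 0
    while i < len(s) and s[i] == "a":
        i += 1
    return s == "a" * i + "b" * i

for n in range(5):
    assert in_language(derive(n))
```

A pumping-lemma argument makes the impossibility precise: any finite automaton accepting aⁿbⁿ for all n would be forced to also accept some string with mismatched counts.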

The following classes of problems (or languages, or grammars) are considered in computability theory and complexity theory. The original page arranged them in a diagram, drawing a dark line from a class down to each strict subset and a lighter dotted line where the inclusion is not known to be strict. In textual form, the known containments are (⊊ marks a known strict inclusion, ⊆ one not known to be strict):

Decision problems and grammar classes: Type 3 (regular) ⊊ Type 2 (context-free) ⊊ Type 1 (context-sensitive) ⊊ decidable ⊊ Type 0 (unrestricted); decision problems outside the decidable class are undecidable.

Complexity classes: NC ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ EXPSPACE, where P ⊊ EXPTIME and PSPACE ⊊ EXPSPACE are known to be strict and the remaining inclusions are open. In addition, P ⊆ BPP ⊆ BQP ⊆ PSPACE, and Co-NP contains the complements of the problems in NP. P-Complete, NP-Complete, and PSPACE-Complete are the hardest problems in P, NP, and PSPACE respectively, under suitable reductions.

For Further Reading

Garey, Michael R., and David S. Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman & Co., 1979. The standard reference on NP-Complete problems - an important category of problems whose solutions appear to require an impractically long time to compute.

Hein, James L: Theory of Computation. Sudbury, MA: Jones & Bartlett, 1996. A gentle introduction to the field, appropriate for second-year undergraduate computer science students.

Hopcroft, John E., and Jeffrey D. Ullman: Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley, 1979. One of the standard references in the field.

Taylor, R. Gregory: Models of Computation. New York: Oxford University Press, 1998. An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students.

This article contains some content from an article by Nancy Tinkham, originally posted on Nupedia. This article is open content.


Last edited: Sunday, June 2, 2002, 15:01 (diff)
