Iterative Methods for Sparse Linear Systems, Second Edition (photographic reprint, hardcover; Series of Famous Foreign Mathematics Books)
Category: Books, Science & Nature, Mathematics, Computational Mathematics
Author: Yousef Saad
Basic Information · Publisher: Science Press
·Pages: 528
·Publication date: 2009
·ISBN: 9787030234834
·Barcode: 9787030234834
·Edition: 2nd
·Binding: Hardcover
·Format: 16mo
·Language of text: English
·Series: Series of Famous Foreign Mathematics Books (国外数学名著系列)
·Original title: Iterative Methods for Sparse Linear Systems
Synopsis: Iterative Methods for Sparse Linear Systems, Second Edition gives an in-depth, up-to-date view of practical algorithms for solving large-scale linear systems of equations. These equations can number in the millions and are sparse in the sense that each involves only a small number of unknowns. The methods described are iterative, i.e., they provide sequences of approximations that will converge to the solution.
This new edition includes a wide range of the best methods available today. The author has added a new chapter on multigrid techniques and has updated material throughout the text, particularly the chapters on sparse matrices, Krylov subspace methods, preconditioning techniques, and parallel preconditioners. Material on older topics has been removed or shortened, numerous exercises have been added, and many typographical errors have been corrected. The updated and expanded bibliography now includes more recent works emphasizing new and important research topics in this field.
This book can be used to teach graduate-level courses on iterative methods for linear systems. Engineers and mathematicians will find its contents easily accessible, and practitioners and educators will value it as a helpful resource. The preface includes syllabi that can be used for either a semester- or quarter-length course in both mathematics and computer science.
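To make the synopsis concrete, here is a minimal sketch (not taken from the book) of the simplest kind of iterative method the text covers, Jacobi iteration, applied to a small sparse system arising from a 1-D Poisson discretization of the sort discussed in Chapter 2. The problem size, tolerance, and iteration cap are illustrative assumptions.

```python
# A minimal sketch (not from the book) of Jacobi iteration, one of the classical
# iterative methods covered in Chapter 4, applied to a small sparse system A x = b.
# The problem size, tolerance, and iteration cap are illustrative assumptions.
import numpy as np
import scipy.sparse as sp

n = 20
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # sparse tridiagonal matrix
b = np.ones(n)

d_inv = 1.0 / A.diagonal()   # Jacobi uses only the diagonal of A
x = np.zeros(n)              # initial guess
for k in range(10_000):
    r = b - A @ x            # residual of the current iterate
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break                # relative residual small enough: stop
    x = x + d_inv * r        # Jacobi update: x_{k+1} = x_k + D^{-1} (b - A x_k)

print(f"stopped after {k} iterations, residual norm = {np.linalg.norm(b - A @ x):.2e}")
```

The Krylov subspace methods of Chapters 6 and 7 (GMRES, conjugate gradients, and their relatives) and the preconditioners of Chapters 9, 10, and 12 replace this simple update with far more effective ones, but the overall pattern is the same: build successively better approximations from cheap sparse matrix-vector products.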
Editorial Recommendation: For China's mathematical enterprise to develop further, mathematicians must remain indifferent to fame and fortune and work even harder. At the same time, we must also create a more favorable external environment for the development of mathematics, chiefly by strengthening support for and investment in it, so that mathematicians enjoy better working and living conditions; this includes improving and strengthening mathematical publishing. Science Press's purchase of the rights to, and photographic reprinting of, 23 mathematics books published by Springer in a single batch is therefore a welcome step and one worth continuing. Roughly classified, these 23 books comprise 5 in pure mathematics, 6 in applied mathematics, and 12 in computational mathematics, some of an interdisciplinary nature. They allow readers to get up to date quickly on the frontiers of particular areas of mathematics, and they are also very helpful to mathematicians working in those areas who wish to grasp the frontier and the overall picture of their field.
Table of Contents
Preface to the Second Edition
Preface to the First Edition
1 Background in Linear Algebra
1.1 Matrices
1.2 Square Matrices and Eigenvalues
1.3 Types of Matrices
1.4 Vector Inner Products and Norms
1.5 Matrix Norms
1.6 Subspaces, Range, and Kernel
1.7 Orthogonal Vectors and Subspaces
1.8 Canonical Forms of Matrices
1.8.1 Reduction to the Diagonal Form
1.8.2 The Jordan Canonical Form
1.8.3 The Schur Canonical Form
1.8.4 Application to Powers of Matrices
1.9 Normal and Hermitian Matrices
1.9.1 Normal Matrices
1.9.2 Hermitian Matrices
1.10 Nonnegative Matrices, M-Matrices
1.11 Positive Definite Matrices
1.12 Projection Operators
1.12.1 Range and Null Space of a Projector
1.12.2 Matrix Representations
1.12.3 Orthogonal and Oblique Projectors
1.12.4 Properties of Orthogonal Projectors
1.13 Basic Concepts in Linear Systems
1.13.1 Existence of a Solution
1.13.2 Perturbation Analysis
Exercises
Notes and References
2 Discretization of Partial Differential Equations
2.1 Partial Differential Equations
2.1.1 Elliptic Operators
2.1.2 The Convection Diffusion Equation
2.2 Finite Difference Methods
2.2.1 Basic Approximations
2.2.2 Difference Schemes for the Laplacian Operator
2.2.3 Finite Differences for One-Dimensional Problems
2.2.4 Upwind Schemes
2.2.5 Finite Differences for Two-Dimensional Problems
2.2.6 Fast Poisson Solvers
2.3 The Finite Element Method
2.4 Mesh Generation and Refinement
2.5 Finite Volume Method
Exercises
Notes and References
3 Sparse Matrices
3.1 Introduction
3.2 Graph Representations
3.2.1 Graphs and Adjacency Graphs
3.2.2 Graphs of PDE Matrices
3.3 Permutations and Reorderings
3.3.1 Basic Concepts
3.3.2 Relations with the Adjacency Graph
3.3.3 Common Reorderings
3.3.4 Irreducibility
3.4 Storage Schemes
3.5 Basic Sparse Matrix Operations
3.6 Sparse Direct Solution Methods
3.6.1 Minimum Degree (MD) Ordering
3.6.2 Nested Dissection (ND) Ordering
3.7 Test Problems
Exercises
Notes and References
4 Basic Iterative Methods
4.1 Jacobi, Gauss-Seidel, and Successive Overrelaxation
4.1.1 Block Relaxation Schemes
4.1.2 Iteration Matrices and Preconditioning
4.2 Convergence
4.2.1 General Convergence Result
4.2.2 Regular Splittings
4.2.3 Diagonally Dominant Matrices
4.2.4 Symmetric Positive Definite Matrices
4.2.5 Property A and Consistent Orderings
4.3 Alternating Direction Methods
Exercises
Notes and References
5 Projection Methods
5.1 Basic Definitions and Algorithms
5.1.1 General Projection Methods
5.1.2 Matrix Representation
5.2 General Theory
5.2.1 Two Optimality Results
5.2.2 Interpretation in Terms of Projectors
5.2.3 General Error Bound
5.3 One-Dimensional Projection Processes
5.3.1 Steepest Descent
5.3.2 MR Iteration
5.3.3 Residual Norm Steepest Descent
5.4 Additive and Multiplicative Processes
Exercises
Notes and References
6 Krylov Subspace Methods, Part I
6.1 Introduction
6.2 Krylov Subspaces
6.3 Arnoldi's Method
6.3.1 The Basic Algorithm
6.3.2 Practical Implementations
6.4 Arnoldi's Method for Linear Systems
6.4.1 Variation 1: Restarted FOM
6.4.2 Variation 2: IOM and DIOM
6.5 Generalized Minimal Residual Method
6.5.1 The Basic GMRES Algorithm
6.5.2 The Householder Version
6.5.3 Practical Implementation Issues
6.5.4 Breakdown of GMRES
6.5.5 Variation 1: Restarting
6.5.6 Variation 2: Truncated GMRES Versions
6.5.7 Relations Between FOM and GMRES
6.5.8 Residual Smoothing
6.5.9 GMRES for Complex Systems
6.6 The Symmetric Lanczos Algorithm
6.6.1 The Algorithm
6.6.2 Relation to Orthogonal Polynomials
6.7 The Conjugate Gradient Algorithm
6.7.1 Derivation and Theory
6.7.2 Alternative Formulations
6.7.3 Eigenvalue Estimates from the CG Coefficients
6.8 The Conjugate Residual Method
6.9 Generalized Conjugate Residual, ORTHOMIN, and ORTHODIR
6.10 The Faber-Manteuffel Theorem
6.11 Convergence Analysis
6.11.1 Real Chebyshev Polynomials
6.11.2 Complex Chebyshev Polynomials
6.11.3 Convergence of the CG Algorithm
6.11.4 Convergence of GMRES
6.12 Block Krylov Methods
Exercises
Notes and References
7 Krylov Subspace Methods, Part II
7.1 Lanczos Biorthogonalization
7.1.1 The Algorithm
7.1.2 Practical Implementations
7.2 The Lanczos Algorithm for Linear Systems
7.3 The Biconjugate Gradient and Quasi-Minimal Residual Algorithms
7.3.1 The BCG Algorithm
7.3.2 QMR Algorithm
7.4 Transpose-Free Variants
7.4.1 CGS
7.4.2 BICGSTAB
7.4.3 TFQMR
Exercises
Notes and References
8 Methods Related to the Normal Equations
8.1 The Normal Equations
8.2 Row Projection Methods
8.2.1 Gauss-Seidel on the Normal Equations
8.2.2 Cimmino's Method
8.3 Conjugate Gradient and Normal Equations
8.3.1 CGNR
8.3.2 CGNE
8.4 Saddle-Point Problems
Exercises
Notes and References
9 Preconditioned Iterations
9.1 Introduction
9.2 Preconditioned Conjugate Gradient
9.2.1 Preserving Symmetry
9.2.2 Efficient Implementations
……
10 Preconditioning Techniques
11 Parallel Implementations
12 Parallel Preconditioners
13 Multigrid Methods
14 Domain Decomposition Methods
Bibliography
Index
Foreword: For China's mathematical enterprise to develop further, mathematicians must remain indifferent to fame and fortune and work even harder. At the same time, we must also create a more favorable external environment for the development of mathematics, chiefly by strengthening support for and investment in it, so that mathematicians enjoy better working and living conditions; this also includes improving and strengthening mathematical publishing.
On the publishing side, besides publishing our own results well and promptly, importing outstanding foreign publications is undoubtedly important and indispensable. In mathematics, Springer remains the most authoritative publisher in the world. By photographically reprinting a batch of their good new books, Science Press enables Chinese mathematicians to buy them at lower prices, and in particular makes them widely available to mathematicians working in remote regions; this is unquestionably of great benefit to mathematical research and teaching in China.
This time Science Press has purchased the rights and reprinted, in a single batch, 23 mathematics books published by Springer; this is a welcome step and one worth continuing. Roughly classified, these 23 books comprise 5 in pure mathematics, 6 in applied mathematics, and 12 in computational mathematics, some of an interdisciplinary nature. All are quite recent: the great majority, 16 in all, were published after 2000, and the rest after 1990. They allow readers to get up to date quickly on the frontiers of particular areas of mathematics. For example, the three pure-mathematics volumes on number theory, algebra, and topology are all installments of the "Encyclopaedia of Mathematics" written by leading mathematicians in those fields, and are very helpful to researchers who wish to grasp the frontier and the overall picture of those areas. In keeping with the character of each discipline, the pure-mathematics titles are mainly classics, while the applied and computational titles are mainly frontier works. Most of the authors are internationally renowned mathematicians; the author of the Topology volume, Novikov, for instance, is a member of the Russian Academy of Sciences and a winner of the Fields Medal and the Wolf Prize in Mathematics. The works of such eminent mathematicians will undoubtedly provide excellent guidance to researchers in China.
Of course, 23 books can cover only part of mathematics, so this work should continue. Going a step further, some of the best books with a broad readership should also be translated into Chinese, so that they reach an even larger audience.
In short, I warmly support Science Press's initiative of photographically reprinting part of Springer's mathematical works, and I hope this undertaking achieves still greater success.