Description
About the Book
Professor Yuan Jinyun is a distinguished Chinese scholar residing in Brazil. Born in 1957 in Tangliu Town, Xinghua, Jiangsu Province, he was admitted to Nanjing Institute of Technology in 1977. He is a tenured professor in the Department of Mathematics at the Federal University of Paraná, Brazil, director of its Institute of Industrial Mathematics, vice president of the Brazilian Society of Computational and Applied Mathematics, president of the Paraná chapter of the Brazilian Mathematical Society, head of applied and computational mathematics on the final mathematics review panel of the funding agency of the Brazilian Ministry of Science and Technology, and a member of the mathematics review panel of the Paraná state funding agency. Practical Iterative Analysis (English edition, hardcover) is his English-language monograph on practical iterative analysis.
Table of Contents
Preface to the Series in Information and Computational Science
Preface
Chapter 1 Introduction
1.1 Background in linear algebra
1.1.1 Basic symbols, notations, and definitions
1.1.2 Vector norm
1.1.3 Matrix norm
1.1.4 Spectral radii
1.2 Spectral results of matrix
1.3 Special matrices
1.3.1 Reducible and irreducible matrices
1.3.2 Diagonally dominant matrices
1.3.3 Nonnegative matrices
1.3.4 p-cyclic matrices
1.3.5 Toeplitz, Hankel, Cauchy, Cauchy-like and Hessenberg matrices
1.4 Matrix decomposition
1.4.1 LU decomposition
1.4.2 Singular value decomposition
1.4.3 Conjugate decomposition
1.4.4 QZ decomposition
1.4.5 ST decomposition
1.5 Exercises
Chapter 2 Basic Methods and Convergence
2.1 Basic concepts
2.2 The Jacobi method
2.3 The Gauss-Seidel method
2.4 The SOR method
2.5 M-matrices and splitting methods
2.5.1 M-matrix
2.5.2 Splitting methods
2.5.3 Comparison theorems
2.5.4 Multi-splitting methods
2.5.5 Generalized Ostrowski-Reich theorem
2.6 Error analysis of iterative methods
2.7 Iterative refinement
2.8 Exercises
Chapter 3 Non-stationary Methods
3.1 Conjugate gradient methods
3.1.1 Steepest descent method
3.1.2 Conjugate gradient method
3.1.3 Preconditioned conjugate gradient method
3.1.4 Generalized conjugate gradient method
3.1.5 Theoretical results on the conjugate gradient method
3.1.6 Generalized product-type methods based on Bi-CG
3.1.7 Inexact preconditioned conjugate gradient method
3.2 Lanczos method
3.3 GMRES method and QMR method
3.3.1 GMRES method
3.3.2 QMR method
3.3.3 Variants of the QMR method
3.4 Direct projection method
3.4.1 Theory of the direct projection method
3.4.2 Direct projection algorithms
3.5 Semi-conjugate direction method
3.5.1 Semi-conjugate vectors
3.5.2 Left conjugate direction method
3.5.3 One possible way to find left conjugate vector set
3.5.4 Remedy for breakdown
3.5.5 Relation with Gaussian elimination
3.6 Krylov subspace methods
3.7 Exercises
Chapter 4 Iterative Methods for Least Squares Problems
4.1 Introduction
4.2 Basic iterative methods
4.3 Block SOR methods
4.3.1 Block SOR algorithms
4.3.2 Convergence and optimal factors
4.3.3 Example
4.4 Preconditioned conjugate gradient methods
4.5 Generalized least squares problems
4.5.1 Block SOR methods
4.5.2 Preconditioned conjugate gradient method
4.5.3 Comparison
4.5.4 SOR-like methods
4.6 Rank deficient problems
4.6.1 Augmented system of normal equation
4.6.2 Block SOR algorithms
4.6.3 Convergence and optimal factor
4.6.4 Preconditioned conjugate gradient method
4.6.5 Comparison results
4.7 Exercises
Chapter 5 Preconditioners
5.1 LU decomposition and orthogonal transformations
5.1.1 Gilbert and Peierls algorithm for LU decomposition
5.1.2 Orthogonal transformations
5.2 Stationary preconditioners
5.2.1 Jacobi preconditioner
5.2.2 SSOR preconditioner
5.3 Incomplete factorization
5.3.1 Point incomplete factorization
5.3.2 Modified incomplete factorization
5.3.3 Block incomplete factorization
5.4 Diagonally dominant preconditioner
5.5 Preconditioner for least squares problems
5.5.1 Preconditioner by LU decomposition
5.5.2 Preconditioner by direct projection method
5.5.3 Preconditioner by QR decomposition
5.6 Exercises
Chapter 6 Singular Linear Systems
6.1 Introduction
6.2 Properties of singular systems
6.3 Splitting methods for singular systems
6.4 Nonstationary methods for singular systems
6.4.1 Symmetric and positive semidefinite systems
6.4.2 General systems
6.5 Exercises
Bibliography
Index
Excerpt
Chapter 1
Introduction
In this chapter, we first give an overview of relevant concepts and some basic results of linear algebra and matrix analysis. The material contained here will be used throughout the book.
1.1 Background in linear algebra
1.1.1 Basic symbols, notations, and definitions
Let R be the set of real numbers; C, the set of complex numbers; and i ≡ √(−1). The symbol R^n denotes the set of real n-vectors and C^n, the set of complex n-vectors. Greek letters α, β, γ, etc., denote real numbers or constants. Vectors are almost always column vectors. We use R^(m×n) to denote the linear vector space of all m-by-n real matrices and C^(m×n), the linear vector space of all m-by-n complex matrices. The symbol dim(S) denotes the dimension of a linear vector space S.
The upper case letters A, B, C, etc., denote matrices, and the lower case letters x, y, z, etc., denote vectors.
Let (A)_ij = a_ij denote the (i, j)th entry of a matrix A = (a_ij). For an n-by-n matrix, the indices i and j usually run from 1 to n, but sometimes from 0 to n − 1 for convenience. Let A^T be the transpose of A; A*, the conjugate transpose of A; rank(A), the rank of A; and tr(A), the trace of A. An n-by-n diagonal matrix is denoted by diag(a_11, a_22, …, a_nn).
We use the notation I_n for the n-by-n identity matrix. When there is no ambiguity, we shall write it as I. The symbol e_j denotes the jth unit vector, i.e., the jth column vector of I.
A matrix A ∈ R^(n×n) is symmetric if A^T = A, and skew-symmetric if A^T = −A. A symmetric matrix A is positive definite (semidefinite) if x^T A x > 0 (≥ 0) for any nonzero vector x ∈ R^n. A matrix A ∈ C^(n×n) is Hermitian if A* = A. A Hermitian matrix A is positive definite (semidefinite) if x* A x > 0 (≥ 0) for any nonzero vector x ∈ C^n.
A number λ ∈ C is an eigenvalue of A ∈ C^(n×n) if there exists a nonzero vector x ∈ C^n such that Ax = λx, where x is called an eigenvector of A associated with λ. It is well known that the eigenvalues of all Hermitian matrices are real. Let λ_min(A) and λ_max(A) denote the smallest and largest eigenvalues of a Hermitian matrix A, respectively. We use ρ(A) = max_i |λ_i(A)| to denote the spectral radius of A, where λ_i(A) runs through the spectrum of A. Recall that the spectrum of A is the set of all eigenvalues of A.
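As a small illustration (not from the book), the eigenvalues and spectral radius of a Hermitian matrix can be computed with NumPy; the matrix below is our own choice:

```python
import numpy as np

# A small symmetric (hence Hermitian) matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals = np.linalg.eigvalsh(A)    # eigenvalues of a Hermitian matrix; real, ascending
lam_min, lam_max = eigvals[0], eigvals[-1]
rho = np.max(np.abs(eigvals))      # spectral radius rho(A) = max_i |lambda_i(A)|

# For a positive definite matrix, rho(A) coincides with lambda_max(A).
print(lam_min, lam_max, rho)
```

Here the eigenvalues are (5 ± √5)/2, so ρ(A) = λ_max(A) ≈ 3.618.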
We use ||·|| to denote a norm of a vector or matrix. The symbols ||·||_1, ||·||_2, and ||·||_∞ denote the p-norm with p = 1, 2, ∞, respectively. We also use κ_α(A), defined by κ_α(A) = ||A||_α ||A^(−1)||_α, to denote the condition number of the matrix A. When the subscript α is omitted, any norm satisfying the definition may be used; the 2-norm is the most common choice.
We use N(A) and R(A) to denote the null space and the range (image space) of a given matrix A ∈ R^(m×n), respectively, where N(A) = {x ∈ R^n : Ax = 0} and R(A) = {y ∈ R^m : y = Ax for some x ∈ R^n}.
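As our own illustration, the null space of a matrix can be obtained numerically from its singular value decomposition, since the right singular vectors belonging to zero singular values span N(A):

```python
import numpy as np

# Illustrative rank-deficient matrix (our own choice): rank 1, so N(A) is
# a one-dimensional subspace of R^2.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
null_basis = Vt[rank:].T           # columns form an orthonormal basis of N(A)

print(rank)                        # numerical rank of A
print(np.allclose(A @ null_basis, 0.0))
```

Every column v of `null_basis` satisfies Av = 0 up to rounding error.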
For matrix iterative analysis, we need some tools, such as vector norms, matrix norms and their extensions, and spectral radii.
1.1.2 Vector norm
In fact, a norm is an extension of the length of a vector in R^2 or of the absolute value in R. It is well known that for every x ∈ R, the absolute value |x| satisfies the following properties: (i) |x| ≥ 0, and |x| = 0 if and only if x = 0; (ii) |αx| = |α| |x| for all α ∈ R; (iii) |x + y| ≤ |x| + |y| for all x, y ∈ R.
We generalize the three properties above to the vector space R^n as follows.
Definition 1.1.1 A function μ : R^n → R is a vector norm on R^n if (i) μ(x) ≥ 0 for all x ∈ R^n, and μ(x) = 0 if and only if x = 0; (ii) μ(αx) = |α| μ(x) for all α ∈ R and x ∈ R^n; (iii) μ(x + y) ≤ μ(x) + μ(y) for all x, y ∈ R^n.
Example 1.1.1 There are three common norms on R^n, defined by ||x||_1 = Σ_{i=1}^n |x_i|, ||x||_2 = (Σ_{i=1}^n x_i^2)^(1/2), and ||x||_∞ = max_{1≤i≤n} |x_i|.
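The three common norms can be evaluated with NumPy; the vector below is our own illustrative choice:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

norm1   = np.linalg.norm(x, 1)       # sum of |x_i|       = 7.0
norm2   = np.linalg.norm(x, 2)       # sqrt(9 + 16)       = 5.0
norminf = np.linalg.norm(x, np.inf)  # max |x_i|          = 4.0
print(norm1, norm2, norminf)
```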
There are some important elementary consequences of Definition 1.1.1 of the vector norm.
Proposition 1.1.1 For any vector norm μ on R^n and any x, y ∈ R^n,
|μ(x) − μ(y)| ≤ μ(x − y), (1.1.1)
|μ(x) − μ(y)| ≤ μ(x + y). (1.1.2)
Proof By the triangle inequality, μ(x) = μ((x − y) + y) ≤ μ(x − y) + μ(y). Then,
μ(x) − μ(y) ≤ μ(x − y). (1.1.3)
By interchanging x and y, we can obtain
μ(y) − μ(x) ≤ μ(y − x) = μ(x − y). (1.1.4)
The result (1.1.1) follows from (1.1.3) and (1.1.4) together. We can prove (1.1.2) by replacing y with −y in (1.1.1), since μ(−y) = μ(y).
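A quick randomized sanity check of the two inequalities (our own illustration, taking μ to be the 2-norm; any vector norm would do):

```python
import numpy as np

# Check (1.1.1) and (1.1.2) on random vectors; dimension and seed are
# arbitrary choices for illustration.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    lhs = abs(np.linalg.norm(x) - np.linalg.norm(y))
    assert lhs <= np.linalg.norm(x - y) + 1e-12   # (1.1.1)
    assert lhs <= np.linalg.norm(x + y) + 1e-12   # (1.1.2)
print("both inequalities hold on all samples")
```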
The 2-norm is the natural generalization of the Euclidean length of a vector in R^2 or R^3 and is called the Euclidean norm. The ∞-norm is also sometimes called the maximum norm or the Chebyshev norm. In fact, these are special cases of the p-norm, defined for p ≥ 1 as ||x||_p = (Σ_{i=1}^n |x_i|^p)^(1/p).
Sometimes the usual norms are not enough for our purposes, and we have to construct a new norm. One useful technique for constructing new norms from a well-known norm is given in the following theorem.
Theorem 1.1.2 Let ν be a norm on R^m and let A ∈ R^(m×n) have linearly independent columns. Then μ(x) = ν(Ax) is a norm on R^n.
The proof is easy; one only needs to check the properties of the norm in Definition 1.1.1, and we leave it to the reader. The same technique works for matrix norms in the next subsection.
Corollary 1.1.3 Let A ∈ R^(n×n) be symmetric and positive definite. Then μ(x) = √(x^T A x) is a norm on R^n, denoted ||x||_A and called the weighted norm (with respect to A).
When we study iterative methods, we have to know whether the sequence generated by an iterative method converges to the solution. For this purpose, we shall give some concepts about limits of sequences in vector spaces.
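The weighted norm can be checked numerically; the sketch below (our own, with an arbitrarily chosen positive definite A) also shows its connection to Theorem 1.1.2, since the Cholesky factorization A = L L^T gives ||x||_A = ||L^T x||_2, i.e., ν(Bx) with ν the 2-norm and B = L^T:

```python
import numpy as np

# Symmetric positive definite matrix and test vector, chosen for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -2.0])

wnorm = np.sqrt(x @ A @ x)         # ||x||_A = sqrt(x^T A x); here x^T A x = 12
L = np.linalg.cholesky(A)          # lower triangular, A = L L^T
print(np.isclose(wnorm, np.linalg.norm(L.T @ x)))
```

The agreement of the two values illustrates why ||·||_A inherits the norm properties from the 2-norm.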
Definition 1.1.2 Let {x^(k)} be a sequence of n-vectors and x ∈ R^n. Then x is the limit of the sequence {x^(k)} (written x = lim_{k→∞} x^(k)) if lim_{k→∞} x_i^(k) = x_i for i = 1, 2, …, n, where the x_i are the components of x.
By the definition, convergence of a sequence of vectors is componentwise convergence. Furthermore, it follows from the equivalence of vector norms that x = lim_{k→∞} x^(k) if and only if lim_{k→∞} μ(x − x^(k)) = 0, where μ is any norm on R^n.
1.1.3 Matrix norm
Definition 1.1.3 A function μ : R^(m×n) → R is a matrix norm on R^(m×n) if it satisfies the analogues of the three properties in Definition 1.1.1: (i) μ(A) ≥ 0 for all A, and μ(A) = 0 if and only if A = 0; (ii) μ(αA) = |α| μ(A); (iii) μ(A + B) ≤ μ(A) + μ(B).
Example 1.1.2 Let A