Author: Alfred V. Aho
Basic Information
·Publisher: 人民邮电出版社 (Posts & Telecom Press)
·Pages: 1009
·Publication date: 2008
·ISBN: 9787115172655
·Barcode: 9787115172655
·Edition: 1st
·Binding: Paperback
·Format: 16-mo
·Language: Chinese
·Series: 国外著名高等院校信息科学与技术优秀教材 (Outstanding Textbooks in Information Science and Technology from Renowned Foreign Universities)
About the Book
This new edition has been thoroughly revised and incorporates recent developments in compiler technology. The book gives a comprehensive introduction to compiler design and continues to emphasize the broad applicability of compiler technology to software design and development.
Table of Contents
1 Introduction
1.1 Language Processors
1.1.1 Exercises for Section 1.1
1.2 The Structure of a Compiler
1.2.1 Lexical Analysis
1.2.2 Syntax Analysis
1.2.3 Semantic Analysis
1.2.4 Intermediate Code Generation
1.2.5 Code Optimization
1.2.6 Code Generation
1.2.7 Symbol-Table Management
1.2.8 The Grouping of Phases into Passes
1.2.9 Compiler-Construction Tools
1.3 The Evolution of Programming Languages
1.3.1 The Move to Higher-level Languages
1.3.2 Impacts on Compilers
1.3.3 Exercises for Section 1.3
1.4 The Science of Building a Compiler
1.4.1 Modeling in Compiler Design and Implementation
1.4.2 The Science of Code Optimization
1.5 Applications of Compiler Technology
1.5.1 Implementation of High-Level Programming Languages
1.5.2 Optimizations for Computer Architectures
1.5.3 Design of New Computer Architectures
1.5.4 Program Translations
1.5.5 Software Productivity Tools
1.6 Programming Language Basics
1.6.1 The Static/Dynamic Distinction
1.6.2 Environments and States
1.6.3 Static Scope and Block Structure
1.6.4 Explicit Access Control
1.6.5 Dynamic Scope
1.6.6 Parameter Passing Mechanisms
1.6.7 Aliasing
1.6.8 Exercises for Section 1.6
1.7 Summary of Chapter 1
1.8 References for Chapter 1
2 A Simple Syntax-Directed Translator
2.1 Introduction
2.2 Syntax Definition
2.2.1 Definition of Grammars
2.2.2 Derivations
2.2.3 Parse Trees
2.2.4 Ambiguity
2.2.5 Associativity of Operators
2.2.6 Precedence of Operators
2.2.7 Exercises for Section 2.2
2.3 Syntax-Directed Translation
2.3.1 Postfix Notation
2.3.2 Synthesized Attributes
2.3.3 Simple Syntax-Directed Definitions
2.3.4 Tree Traversals
2.3.5 Translation Schemes
2.3.6 Exercises for Section 2.3
2.4 Parsing
2.4.1 Top-Down Parsing
2.4.2 Predictive Parsing
2.4.3 When to Use ε-Productions
2.4.4 Designing a Predictive Parser
2.4.5 Left Recursion
2.4.6 Exercises for Section 2.4
2.5 A Translator for Simple Expressions
2.5.1 Abstract and Concrete Syntax
2.5.2 Adapting the Translation Scheme
2.5.3 Procedures for the Nonterminals
2.5.4 Simplifying the Translator
2.5.5 The Complete Program
2.6 Lexical Analysis
2.6.1 Removal of White Space and Comments
2.6.2 Reading Ahead
2.6.3 Constants
2.6.4 Recognizing Keywords and Identifiers
2.6.5 A Lexical Analyzer
2.6.6 Exercises for Section 2.6
2.7 Symbol Tables
2.7.1 Symbol Table Per Scope
2.7.2 The Use of Symbol Tables
2.8 Intermediate Code Generation
2.8.1 Two Kinds of Intermediate Representations
2.8.2 Construction of Syntax Trees
2.8.3 Static Checking
2.8.4 Three-Address Code
2.8.5 Exercises for Section 2.8
2.9 Summary of Chapter 2
3 Lexical Analysis
3.1 The Role of the Lexical Analyzer
3.1.1 Lexical Analysis Versus Parsing
3.1.2 Tokens, Patterns, and Lexemes
3.1.3 Attributes for Tokens
3.1.4 Lexical Errors
3.1.5 Exercises for Section 3.1
3.2 Input Buffering
3.2.1 Buffer Pairs
3.2.2 Sentinels
3.3 Specification of Tokens
3.3.1 Strings and Languages
3.3.2 Operations on Languages
3.3.3 Regular Expressions
3.3.4 Regular Definitions
3.3.5 Extensions of Regular Expressions
3.3.6 Exercises for Section 3.3
3.4 Recognition of Tokens
3.4.1 Transition Diagrams
3.4.2 Recognition of Reserved Words and Identifiers
3.4.3 Completion of the Running Example
3.4.4 Architecture of a Transition-Diagram-Based Lexical Analyzer
3.4.5 Exercises for Section 3.4
3.5 The Lexical-Analyzer Generator Lex
3.5.1 Use of Lex
3.5.2 Structure of Lex Programs
3.5.3 Conflict Resolution in Lex
3.5.4 The Lookahead Operator
3.5.5 Exercises for Section 3.5
3.6 Finite Automata
3.6.1 Nondeterministic Finite Automata
3.6.2 Transition Tables
3.6.3 Acceptance of Input Strings by Automata
3.6.4 Deterministic Finite Automata
3.6.5 Exercises for Section 3.6
3.7 From Regular Expressions to Automata
3.7.1 Conversion of an NFA to a DFA
3.7.2 Simulation of an NFA
3.7.3 Efficiency of NFA Simulation
3.7.4 Construction of an NFA from a Regular Expression
3.7.5 Efficiency of String-Processing Algorithms
3.7.6 Exercises for Section 3.7
3.8 Design of a Lexical-Analyzer Generator
3.8.1 The Structure of the Generated Analyzer
3.8.2 Pattern Matching Based on NFA's
3.8.3 DFA's for Lexical Analyzers
3.8.4 Implementing the Lookahead Operator
3.8.5 Exercises for Section 3.8
3.9 Optimization of DFA-Based Pattern Matchers
3.9.1 Important States of an NFA
3.9.2 Functions Computed From the Syntax Tree
3.9.3 Computing nullable, firstpos, and lastpos
3.9.4 Computing followpos
3.9.5 Converting a Regular Expression Directly to a DFA
3.9.6 Minimizing the Number of States of a DFA
3.9.7 State Minimization in Lexical Analyzers
3.9.8 Trading Time for Space in DFA Simulation
3.9.9 Exercises for Section 3.9
3.10 Summary of Chapter 3
3.11 References for Chapter 3
4 Syntax Analysis
4.1 Introduction
4.1.1 The Role of the Parser
4.1.2 Representative Grammars
4.1.3 Syntax Error Handling
4.1.4 Error-Recovery Strategies
4.2 Context-Free Grammars
4.2.1 The Formal Definition of a Context-Free Grammar
4.2.2 Notational Conventions
4.2.3 Derivations
4.2.4 Parse Trees and Derivations
4.2.5 Ambiguity
4.2.6 Verifying the Language Generated by a Grammar
4.2.7 Context-Free Grammars Versus Regular Expressions
4.2.8 Exercises for Section 4.2
4.3 Writing a Grammar
4.3.1 Lexical Versus Syntactic Analysis
4.3.2 Eliminating Ambiguity
4.3.3 Elimination of Left Recursion
4.3.4 Left Factoring
4.3.5 Non-Context-Free Language Constructs
4.3.6 Exercises for Section 4.3
4.4 Top-Down Parsing
4.4.1 Recursive-Descent Parsing
4.4.2 FIRST and FOLLOW
4.4.3 LL(1) Grammars
4.4.4 Nonrecursive Predictive Parsing
4.4.5 Error Recovery in Predictive Parsing
4.4.6 Exercises for Section 4.4
4.5 Bottom-Up Parsing
4.5.1 Reductions
4.5.2 Handle Pruning
4.5.3 Shift-Reduce Parsing
4.5.4 Conflicts During Shift-Reduce Parsing
4.5.5 Exercises for Section 4.5
4.6 Introduction to LR Parsing: Simple LR
4.6.1 Why LR Parsers
4.6.2 Items and the LR(0) Automaton
4.6.3 The LR-Parsing Algorithm
4.6.4 Constructing SLR-Parsing Tables
4.6.5 Viable Prefixes
4.6.6 Exercises for Section 4.6
4.7 More Powerful LR Parsers
4.7.1 Canonical LR(1) Items
4.7.2 Constructing LR(1) Sets of Items
4.7.3 Canonical LR(1) Parsing Tables
4.7.4 Constructing LALR Parsing Tables
4.7.5 Efficient Construction of LALR Parsing Tables
4.7.6 Compaction of LR Parsing Tables
4.7.7 Exercises for Section 4.7
4.8 Using Ambiguous Grammars
4.8.1 Precedence and Associativity to Resolve Conflicts
4.8.2 The “Dangling-Else” Ambiguity
4.8.3 Error Recovery in LR Parsing
4.8.4 Exercises for Section 4.8
4.9 Parser Generators
4.9.1 The Parser Generator Yacc
4.9.2 Using Yacc with Ambiguous Grammars
4.9.3 Creating Yacc Lexical Analyzers with Lex
4.9.4 Error Recovery in Yacc
4.9.5 Exercises for Section 4.9
4.10 Summary of Chapter 4
4.11 References for Chapter 4
5 Syntax-Directed Translation
5.1 Syntax-Directed Definitions
5.1.1 Inherited and Synthesized Attributes
5.1.2 Evaluating an SDD at the Nodes of a Parse Tree
5.1.3 Exercises for Section 5.1
5.2 Evaluation Orders for SDD's
5.2.1 Dependency Graphs
5.2.2 Ordering the Evaluation of Attributes
5.2.3 S-Attributed Definitions
5.2.4 L-Attributed Definitions
5.2.5 Semantic Rules with Controlled Side Effects
5.2.6 Exercises for Section 5.2
5.3 Applications of Syntax-Directed Translation
5.3.1 Construction of Syntax Trees
5.3.2 The Structure of a Type
5.3.3 Exercises for Section 5.3
5.4 Syntax-Directed Translation Schemes
5.4.1 Postfix Translation Schemes
5.4.2 Parser-Stack Implementation of Postfix SDT's
5.4.3 SDT's With Actions Inside Productions
5.4.4 Eliminating Left Recursion From SDT's
5.4.5 SDT's for L-Attributed Definitions
5.4.6 Exercises for Section 5.4
5.5 Implementing L-Attributed SDD's
5.5.1 Translation During Recursive-Descent Parsing
5.5.2 On-The-Fly Code Generation
5.5.3 L-Attributed SDD's and LL Parsing
5.5.4 Bottom-Up Parsing of L-Attributed SDD's
5.5.5 Exercises for Section 5.5
5.6 Summary of Chapter 5
5.7 References for Chapter 5
6 Intermediate-Code Generation
6.1 Variants of Syntax Trees
6.1.1 Directed Acyclic Graphs for Expressions
6.1.2 The Value-Number Method for Constructing DAG's
6.1.3 Exercises for Section 6.1
6.2 Three-Address Code
6.2.1 Addresses and Instructions
6.2.2 Quadruples
6.2.3 Triples
6.2.4 Static Single-Assignment Form
6.2.5 Exercises for Section 6.2
6.3 Types and Declarations
6.3.1 Type Expressions
6.3.2 Type Equivalence
6.3.3 Declarations
6.3.4 Storage Layout for Local Names
6.3.5 Sequences of Declarations
6.3.6 Fields in Records and Classes
6.3.7 Exercises for Section 6.3
6.4 Translation of Expressions
6.4.1 Operations Within Expressions
6.4.2 Incremental Translation
6.4.3 Addressing Array Elements
6.4.4 Translation of Array References
6.4.5 Exercises for Section 6.4
6.5 Type Checking
6.5.1 Rules for Type Checking
6.5.2 Type Conversions
6.5.3 Overloading of Functions and Operators
6.5.4 Type Inference and Polymorphic Functions
6.5.5 An Algorithm for Unification
6.5.6 Exercises for Section 6.5
6.6 Control Flow
6.6.1 Boolean Expressions
6.6.2 Short-Circuit Code
6.6.3 Flow-of-Control Statements
6.6.4 Control-Flow Translation of Boolean Expressions
6.6.5 Avoiding Redundant Gotos
6.6.6 Boolean Values and Jumping Code
6.6.7 Exercises for Section 6.6
6.7 Backpatching
6.7.1 One-Pass Code Generation Using Backpatching
6.7.2 Backpatching for Boolean Expressions
6.7.3 Flow-of-Control Statements
6.7.4 Break-, Continue-, and Goto-Statements
6.7.5 Exercises for Section 6.7
6.8 Switch-Statements
6.8.1 Translation of Switch-Statements
6.8.2 Syntax-Directed Translation of Switch-Statements
6.8.3 Exercises for Section 6.8
6.9 Intermediate Code for Procedures
6.10 Summary of Chapter 6
6.11 References for Chapter 6
7 Run-Time Environments
7.1 Storage Organization
7.1.1 Static Versus Dynamic Storage Allocation
7.2 Stack Allocation of Space
7.2.1 Activation Trees
7.2.2 Activation Records
7.2.3 Calling Sequences
7.2.4 Variable-Length Data on the Stack
7.2.5 Exercises for Section 7.2
7.3 Access to Nonlocal Data on the Stack
7.3.1 Data Access Without Nested Procedures
7.3.2 Issues With Nested Procedures
7.3.3 A Language With Nested Procedure Declarations
7.3.4 Nesting Depth
7.3.5 Access Links
7.3.6 Manipulating Access Links
7.3.7 Access Links for Procedure Parameters
7.3.8 Displays
7.3.9 Exercises for Section 7.3
7.4 Heap Management
7.4.1 The Memory Manager
7.4.2 The Memory Hierarchy of a Computer
7.4.3 Locality in Programs
7.4.4 Reducing Fragmentation
7.4.5 Manual Deallocation Requests
7.4.6 Exercises for Section 7.4
7.5 Introduction to Garbage Collection
7.5.1 Design Goals for Garbage Collectors
7.5.2 Reachability
7.5.3 Reference Counting Garbage Collectors
7.5.4 Exercises for Section 7.5
7.6 Introduction to Trace-Based Collection
7.6.1 A Basic Mark-and-Sweep Collector
7.6.2 Basic Abstraction
7.6.3 Optimizing Mark-and-Sweep
7.6.4 Mark-and-Compact Garbage Collectors
7.6.5 Copying Collectors
7.6.6 Comparing Costs
7.6.7 Exercises for Section 7.6
7.7 Short-Pause Garbage Collection
7.7.1 Incremental Garbage Collection
7.7.2 Incremental Reachability Analysis
7.7.3 Partial-Collection Basics
7.7.4 Generational Garbage Collection
7.7.5 The Train Algorithm
7.7.6 Exercises for Section 7.7
7.8 Advanced Topics in Garbage Collection
7.8.1 Parallel and Concurrent Garbage Collection
7.8.2 Partial Object Relocation
7.8.3 Conservative Collection for Unsafe Languages
7.8.4 Weak References
7.8.5 Exercises for Section 7.8
7.9 Summary of Chapter 7
7.10 References for Chapter 7
8 Code Generation
8.1 Issues in the Design of a Code Generator
8.1.1 Input to the Code Generator
8.1.2 The Target Program
8.1.3 Instruction Selection
8.1.4 Register Allocation
8.1.5 Evaluation Order
8.2 The Target Language
8.2.1 A Simple Target Machine Model
8.2.2 Program and Instruction Costs
8.2.3 Exercises for Section 8.2
8.3 Addresses in the Target Code
8.3.1 Static Allocation
8.3.2 Stack Allocation
8.3.3 Run-Time Addresses for Names
8.3.4 Exercises for Section 8.3
8.4 Basic Blocks and Flow Graphs
8.4.1 Basic Blocks
8.4.2 Next-Use Information
8.4.3 Flow Graphs
8.4.4 Representation of Flow Graphs
8.4.5 Loops
8.4.6 Exercises for Section 8.4
8.5 Optimization of Basic Blocks
8.5.1 The DAG Representation of Basic Blocks
8.5.2 Finding Local Common Subexpressions
8.5.3 Dead Code Elimination
8.5.4 The Use of Algebraic Identities
8.5.5 Representation of Array References
8.5.6 Pointer Assignments and Procedure Calls
8.5.7 Reassembling Basic Blocks From DAG's
8.5.8 Exercises for Section 8.5
8.6 A Simple Code Generator
8.6.1 Register and Address Descriptors
8.6.2 The Code-Generation Algorithm
8.6.3 Design of the Function getReg
8.6.4 Exercises for Section 8.6
8.7 Peephole Optimization
8.7.1 Eliminating Redundant Loads and Stores
8.7.2 Eliminating Unreachable Code
8.7.3 Flow-of-Control Optimizations
8.7.4 Algebraic Simplification and Reduction in Strength
8.7.5 Use of Machine Idioms
8.7.6 Exercises for Section 8.7
8.8 Register Allocation and Assignment
8.8.1 Global Register Allocation
8.8.2 Usage Counts
8.8.3 Register Assignment for Outer Loops
8.8.4 Register Allocation by Graph Coloring
8.8.5 Exercises for Section 8.8
8.9 Instruction Selection by Tree Rewriting
8.9.1 Tree-Translation Schemes
8.9.2 Code Generation by Tiling an Input Tree
8.9.3 Pattern Matching by Parsing
8.9.4 Routines for Semantic Checking
8.9.5 General Tree Matching
8.9.6 Exercises for Section 8.9
8.10 Optimal Code Generation for Expressions
8.10.1 Ershov Numbers
8.10.2 Generating Code From Labeled Expression Trees
8.10.3 Evaluating Expressions with an Insufficient Supply of Registers
8.10.4 Exercises for Section 8.10
8.11 Dynamic Programming Code-Generation
8.11.1 Contiguous Evaluation
8.11.2 The Dynamic Programming Algorithm
8.11.3 Exercises for Section 8.11
8.12 Summary of Chapter 8
8.13 References for Chapter 8
9 Machine-Independent Optimizations
9.1 The Principal Sources of Optimization
9.1.1 Causes of Redundancy
9.1.2 A Running Example: Quicksort
9.1.3 Semantics-Preserving Transformations
9.1.4 Global Common Subexpressions
9.1.5 Copy Propagation
9.1.6 Dead-Code Elimination
9.1.7 Code Motion
9.1.8 Induction Variables and Reduction in Strength
9.1.9 Exercises for Section 9.1
9.2 Introduction to Data-Flow Analysis
9.2.1 The Data-Flow Abstraction
9.2.2 The Data-Flow Analysis Schema
9.2.3 Data-Flow Schemas on Basic Blocks
9.2.4 Reaching Definitions
9.2.5 Live-Variable Analysis
9.2.6 Available Expressions
9.2.7 Summary
9.2.8 Exercises for Section 9.2
9.3 Foundations of Data-Flow Analysis
9.3.1 Semilattices
9.3.2 Transfer Functions
9.3.3 The Iterative Algorithm for General Frameworks
9.3.4 Meaning of a Data-Flow Solution
9.3.5 Exercises for Section 9.3
9.4 Constant Propagation
9.4.1 Data-Flow Values for the Constant-Propagation Framework
9.4.2 The Meet for the Constant-Propagation Framework
9.4.3 Transfer Functions for the Constant-Propagation Framework
9.4.4 Monotonicity of the Constant-Propagation Framework
9.4.5 Nondistributivity of the Constant-Propagation Framework
9.4.6 Interpretation of the Results
9.4.7 Exercises for Section 9.4
9.5 Partial-Redundancy Elimination
9.5.1 The Sources of Redundancy
9.5.2 Can All Redundancy Be Eliminated
9.5.3 The Lazy-Code-Motion Problem
9.5.4 Anticipation of Expressions
9.5.5 The Lazy-Code-Motion Algorithm
9.5.6 Exercises for Section 9.5
9.6 Loops in Flow Graphs
9.6.1 Dominators
9.6.2 Depth-First Ordering
9.6.3 Edges in a Depth-First Spanning Tree
9.6.4 Back Edges and Reducibility
9.6.5 Depth of a Flow Graph
9.6.6 Natural Loops
9.6.7 Speed of Convergence of Iterative Data-Flow Algorithms
9.6.8 Exercises for Section 9.6
9.7 Region-Based Analysis
9.7.1 Regions
9.7.2 Region Hierarchies for Reducible Flow Graphs
9.7.3 Overview of a Region-Based Analysis
9.7.4 Necessary Assumptions About Transfer Functions
9.7.5 An Algorithm for Region-Based Analysis
9.7.6 Handling Nonreducible Flow Graphs
9.7.7 Exercises for Section 9.7
9.8 Symbolic Analysis
9.8.1 Affine Expressions of Reference Variables
9.8.2 Data-Flow Problem Formulation
9.8.3 Region-Based Symbolic Analysis
9.8.4 Exercises for Section 9.8
9.9 Summary of Chapter 9
9.10 References for Chapter 9
10 Instruction-Level Parallelism
10.1 Processor Architectures
10.1.1 Instruction Pipelines and Branch Delays
10.1.2 Pipelined Execution
10.1.3 Multiple Instruction Issue
10.2 Code-Scheduling Constraints
10.2.1 Data Dependence
10.2.2 Finding Dependences Among Memory Accesses
10.2.3 Tradeoff Between Register Usage and Parallelism
10.2.4 Phase Ordering Between Register Allocation and Code Scheduling
10.2.5 Control Dependence
10.2.6 Speculative Execution Support
10.2.7 A Basic Machine Model
10.2.8 Exercises for Section 10.2
10.3 Basic-Block Scheduling
10.3.1 Data-Dependence Graphs
10.3.2 List Scheduling of Basic Blocks
10.3.3 Prioritized Topological Orders
10.3.4 Exercises for Section 10.3
10.4 Global Code Scheduling
10.4.1 Primitive Code Motion
10.4.2 Upward Code Motion
10.4.3 Downward Code Motion
10.4.4 Updating Data Dependences
10.4.5 Global Scheduling Algorithms
10.4.6 Advanced Code Motion Techniques
10.4.7 Interaction with Dynamic Schedulers
10.4.8 Exercises for Section 10.4
10.5 Software Pipelining
10.5.1 Introduction
10.5.2 Software Pipelining of Loops
10.5.3 Register Allocation and Code Generation
10.5.4 Do-Across Loops
10.5.5 Goals and Constraints of Software Pipelining
10.5.6 A Software-Pipelining Algorithm
10.5.7 Scheduling Acyclic Data-Dependence Graphs
10.5.8 Scheduling Cyclic Dependence Graphs
10.5.9 Improvements to the Pipelining Algorithms
10.5.10 Modular Variable Expansion
10.5.11 Conditional Statements
10.5.12 Hardware Support for Software Pipelining
10.5.13 Exercises for Section 10.5
10.6 Summary of Chapter 10
10.7 References for Chapter 10
11 Optimizing for Parallelism and Locality
11.1 Basic Concepts
11.1.1 Multiprocessors
11.1.2 Parallelism in Applications
11.1.3 Loop-Level Parallelism
11.1.4 Data Locality
11.1.5 Introduction to Affine Transform Theory
11.2 Matrix Multiply: An In-Depth Example
11.2.1 The Matrix-Multiplication Algorithm
11.2.2 Optimizations
11.2.3 Cache Interference
11.2.4 Exercises for Section 11.2
11.3 Iteration Spaces
11.3.1 Constructing Iteration Spaces from Loop Nests
11.3.2 Execution Order for Loop Nests
11.3.3 Matrix Formulation of Inequalities
11.3.4 Incorporating Symbolic Constants
11.3.5 Controlling the Order of Execution
11.3.6 Changing Axes
11.3.7 Exercises for Section 11.3
11.4 Affine Array Indexes
11.4.1 Affine Accesses
11.4.2 Affine and Nonaffine Accesses in Practice
11.4.3 Exercises for Section 11.4
11.5 Data Reuse
11.5.1 Types of Reuse
11.5.2 Self Reuse
11.5.3 Self-Spatial Reuse
11.5.4 Group Reuse
11.5.5 Exercises for Section 11.5
11.6 Array Data-Dependence Analysis
11.6.1 Definition of Data Dependence of Array Accesses
11.6.2 Integer Linear Programming
11.6.3 The GCD Test
11.6.4 Heuristics for Solving Integer Linear Programs
11.6.5 Solving General Integer Linear Programs
11.6.6 Summary
11.6.7 Exercises for Section 11.6
11.7 Finding Synchronization-Free Parallelism
11.7.1 An Introductory Example
11.7.2 Affine Space Partitions
11.7.3 Space-Partition Constraints
11.7.4 Solving Space-Partition Constraints
11.7.5 A Simple Code-Generation Algorithm
11.7.6 Eliminating Empty Iterations
11.7.7 Eliminating Tests from Innermost Loops
11.7.8 Source-Code Transforms
11.7.9 Exercises for Section 11.7
11.8 Synchronization Between Parallel Loops
11.8.1 A Constant Number of Synchronizations
11.8.2 Program-Dependence Graphs
11.8.3 Hierarchical Time
11.8.4 The Parallelization Algorithm
11.8.5 Exercises for Section 11.8
11.9 Pipelining
11.9.1 What is Pipelining
11.9.2 Successive Over-Relaxation (SOR): An Example
11.9.3 Fully Permutable Loops
11.9.4 Pipelining Fully Permutable Loops
11.9.5 General Theory
11.9.6 Time-Partition Constraints
11.9.7 Solving Time-Partition Constraints by Farkas' Lemma
11.9.8 Code Transformations
11.9.9 Parallelism With Minimum Synchronization
11.9.10 Exercises for Section 11.9
11.10 Locality Optimizations
11.10.1 Temporal Locality of Computed Data
11.10.2 Array Contraction
11.10.3 Partition Interleaving
11.10.4 Putting it All Together
11.10.5 Exercises for Section 11.10
11.11 Other Uses of Affine Transforms
11.11.1 Distributed Memory Machines
11.11.2 Multi-Instruction-Issue Processors
11.11.3 Vector and SIMD Instructions
11.11.4 Prefetching
11.12 Summary of Chapter 11
11.13 References for Chapter 11
12 Interprocedural Analysis
12.1 Basic Concepts
12.1.1 Call Graphs
12.1.2 Context Sensitivity
12.1.3 Call Strings
12.1.4 Cloning-Based Context-Sensitive Analysis
12.1.5 Summary-Based Context-Sensitive Analysis
12.1.6 Exercises for Section 12.1
12.2 Why Interprocedural Analysis
12.2.1 Virtual Method Invocation
12.2.2 Pointer Alias Analysis
12.2.3 Parallelization
12.2.4 Detection of Software Errors and Vulnerabilities
12.2.5 SQL Injection
12.2.6 Buffer Overflow
12.3 A Logical Representation of Data Flow
12.3.1 Introduction to Datalog
12.3.2 Datalog Rules
12.3.3 Intensional and Extensional Predicates
12.3.4 Execution of Datalog Programs
12.3.5 Incremental Evaluation of Datalog Programs
12.3.6 Problematic Datalog Rules
12.3.7 Exercises for Section 12.3
12.4 A Simple Pointer-Analysis Algorithm
12.4.1 Why is Pointer Analysis Difficult
12.4.2 A Model for Pointers and References
12.4.3 Flow Insensitivity
12.4.4 The Formulation in Datalog
12.4.5 Using Type Information
12.4.6 Exercises for Section 12.4
12.5 Context-Insensitive Interprocedural Analysis
12.5.1 Effects of a Method Invocation
12.5.2 Call Graph Discovery in Datalog
12.5.3 Dynamic Loading and Reflection
12.5.4 Exercises for Section 12.5
12.6 Context-Sensitive Pointer Analysis
12.6.1 Contexts and Call Strings
12.6.2 Adding Context to Datalog Rules
12.6.3 Additional Observations About Sensitivity
12.6.4 Exercises for Section 12.6
12.7 Datalog Implementation by BDD's
12.7.1 Binary Decision Diagrams
12.7.2 Transformations on BDD's
12.7.3 Representing Relations by BDD's
12.7.4 Relational Operations as BDD Operations
12.7.5 Using BDD's for Points-to Analysis
12.7.6 Exercises for Section 12.7
12.8 Summary of Chapter 12
12.9 References for Chapter 12
A A Complete Front End
A.1 The Source Language
A.2 Main
A.3 Lexical Analyzer
A.4 Symbol Tables and Types
A.5 Intermediate Code for Expressions
A.6 Jumping Code for Boolean Expressions
A.7 Intermediate Code for Statements
A.8 Parser
A.9 Creating the Front End
B Finding Linearly Independent Solutions
Index