Hidden Markov Models for Bioinformatics

Category: Books, Imported Original Editions, Science & Technology

Author: T. Koski

Publisher: Chemical Industry Press (化学工业出版社)

Publication date: 2001-12-01
Edition: 1
Pages: 391
Printing date: 2001-12-01
Format: 16开 (16mo)
Impression: 1
Paper: offset paper
ISBN: 9781402001352
Binding: hardcover

Description

The purpose of this book is to give a thorough and systematic introduction to probabilistic modeling in bioinformatics. It contains a mathematically rigorous and extensive presentation of the kinds of probabilistic models that have proved useful in genome analysis. Questions of parametric inference, selection between model families, and various architectures are treated. Several examples are given of known architectures (e.g., the profile HMM) used in genome analysis.
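As a concrete illustration of the kind of model the book treats, the following is a minimal sketch (not taken from the book) of the forward recursion for scoring a DNA sequence under a two-state hidden Markov model; the state interpretation, the parameter values, and the function name forward_likelihood are assumptions made for this example.

import numpy as np

# Map the DNA alphabet to column indices of the emission matrix.
ALPHABET = {"A": 0, "C": 1, "G": 2, "T": 3}

# Hypothetical two-state model: state 0 ~ AT-rich, state 1 ~ GC-rich.
# All probabilities below are illustrative, not estimated from data.
initial = np.array([0.5, 0.5])               # P(first hidden state)
transition = np.array([[0.9, 0.1],           # P(next state | state 0)
                       [0.1, 0.9]])          # P(next state | state 1)
emission = np.array([[0.35, 0.15, 0.15, 0.35],   # P(symbol | state 0)
                     [0.15, 0.35, 0.35, 0.15]])  # P(symbol | state 1)

def forward_likelihood(sequence):
    """Return P(sequence | model) via the forward recursion."""
    obs = [ALPHABET[c] for c in sequence]
    # alpha[i] = P(observations so far, current hidden state = i)
    alpha = initial * emission[:, obs[0]]
    for symbol in obs[1:]:
        alpha = (alpha @ transition) * emission[:, symbol]
    return float(alpha.sum())

print(forward_likelihood("ACGTGGCA"))

On genome-length sequences this recursion is run with scaling or in log space to avoid numerical underflow; Chapters 14 and 15 of the book develop the forward-backward recursions and the Baum-Welch algorithm that estimates such parameters from data.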

Audience: This book will be of interest to advanced undergraduate and graduate students with a fairly limited background in probability theory, but otherwise well trained in mathematics and already familiar with at least some of the techniques of algorithmic sequence analysis.

Table of Contents

Foreword

1 Prerequisites in Probability Calculus

1.1 Background

1.2 Formulae and Definitions

1.2.1 Alphabet, Sequence

1.2.2 Random Variables and their Distributions

1.2.3 Joint Probability Distributions

1.2.4 Conditional Probability Distributions

1.2.5 A Chain Rule

1.2.6 Independence

1.2.7 Conditional Independence

1.2.8 Probability Models with Independence

1.2.9 Multinomial Probability Distribution

1.2.10 A Weight Matrix Model for a Family of Sequences

1.2.11 Simplifying Notations

1.3 Learning and Bayes' Rule

1.3.1 Bayes' Rule

1.3.2 A Missing Information Principle and Inference

1.4 Some Distributions for DNA Analysis

1.4.1 Fragment Accuracy

1.4.2 The Distribution of the Number of Fragments

1.5 Expectation

1.6 Jensen's Inequality

1.7 Conditional Expectation

1.8 Law of Large Numbers

1.9 Exercises

1.10 References and Further Reading

2 Information and the Kullback Distance

2.1 Introduction

2.2 Mutual Information

2.3 Properties of Mutual Information

2.3.1 Entropy

2.3.2 Some Further Formulas

2.4 Shannon's Source Coding Theorems

2.4.1 AEP

2.4.2 The Source Coding Theorem

2.4.3 Lossless Compression Codes and Entropy

2.5 Kullback Distance

2.5.1 Definition and Examples

2.5.2 Calibration

2.5.3 Properties

2.6 The Score and the Fisher Information

2.7 Exercises on Mutual Information and Codelengths

2.8 Kullback Distance and Fisher Information

2.9 References and Further Reading

3 Probabilistic Models and Learning

3.1 Introduction

3.2 Bayesian Probability

3.2.1 Chance and Probability

3.2.2 Coherence

3.3 Models with Conditional Independence

3.3.1 Modelling and Learning for Tosses of a Thumbtack

3.3.2 Learning of the Multinomial Process

3.3.3 General Summary

3.4 Comparison of Model Families

3.4.1 Bayes Factor

3.4.2 Inductive Learning, Updates

3.5 Some Asymptotics for Evidence

3.6 Evidence and Bayesian Codelengths

……

4 EM Algorithm

5 Alignment and Scoring

6 Mixture Models and Profiles

7 Markov Chains

8 Learning of Markov Chains

9 Markovian Models for DNA Sequences

10 Hidden Markov Models: An Overview

11 HMM for DNA Sequences

12 Left-to-Right HMM for Sequences

13 Derin's Algorithm

14 Forward-Backward Algorithm

15 Baum-Welch Learning Algorithm

16 Limit Points of Baum-Welch

17 Asymptotics of Learning

18 Full Probabilistic HMM

Index

 
 