
BARTHÉLEMY CABOUAT
THIBAULT MATHIEU
SUPERVISED BY JEAN-MARC BARDET
MAY 2014

Study and research project:
Hidden Markov chains

Table of Contents
Introduction
General Presentation
Markov Chains
Markov Property
Homogeneity
Transition Kernels
Hidden Markov Model
Definition
Example
Properties
Inference
Parameter Estimators: the MLE
Probability of an Observation Sequence: the Forward-Backward Algorithm
Viterbi Algorithm
Notation
Principle
Applications
Matlab Simulations
Parametric Inference
Viterbi Algorithm
Application to Home Automation
Conclusion
Bibliography

Markov Chains

Markov Property

A Markov chain is a discrete-time, discrete-state stochastic process X_n satisfying the Markov property, one definition of which is given below:
Def. 1.1. X_n satisfies the weak Markov property if:

P(X_{n+1} = x_{n+1} | X_n = x_n, …, X_1 = x_1) = P(X_{n+1} = x_{n+1} | X_n = x_n), for all n ≥ 1 and all x_1, …, x_{n+1} ∈ E,

where E denotes the space of possible states of X_n.
This property means that the future state of a Markov chain depends only on the current state. In other words, knowledge of the past states provides no additional useful information for probabilistic prediction of the future.
Example: let E = {Sun, Rain}.
The so-called transition probabilities (see below) are:
P(Sun | Sun) = 0.7;
P(Rain | Sun) = 0.3;
P(Rain | Rain) = 0.6;
P(Sun | Rain) = 0.4.
The initial probabilities are P(Sun) = 0.2 and P(Rain) = 0.8.
Thus, by the Markov property, the probability of the sequence {Sun, Rain, Sun, Rain} is: P(Sun) × P(Rain | Sun) × P(Sun | Rain) × P(Rain | Sun) = 0.2 × 0.3 × 0.4 × 0.3 = 0.0072.
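This chain rule for the sequence probability can be sketched in a few lines of Python, using the numbers of the weather example (the variable and function names are ours, chosen for illustration):

```python
# Probability of a state sequence in a homogeneous Markov chain.
init = {"sun": 0.2, "rain": 0.8}            # initial distribution
trans = {"sun": {"sun": 0.7, "rain": 0.3},  # trans[current][next] = P(next | current)
         "rain": {"sun": 0.4, "rain": 0.6}}

def sequence_probability(seq):
    """P(seq) = init(seq[0]) * product of one-step transition probabilities."""
    p = init[seq[0]]
    for cur, nxt in zip(seq, seq[1:]):
        p *= trans[cur][nxt]
    return p

print(sequence_probability(["sun", "rain", "sun", "rain"]))  # ≈ 0.0072
```

Each factor only conditions on the immediately preceding state, which is exactly the Markov property of Def. 1.1.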

Furthermore, one can impose a homogeneity assumption on the Markov chain under consideration:

Homogeneity

Def. 1.2. X_n is a homogeneous Markov chain if:

P(X_{n+1} = y | X_n = x) = P(X_2 = y | X_1 = x), for all n ≥ 1 and all x, y ∈ E.

This property corresponds to the transition probabilities being invariant over time. In particular, it makes it possible to define the transition matrix and the transition kernels:

Transition Kernels

Def. 1.3. If X_n is a homogeneous Markov chain, we define its transition matrix A from one state to another by:

A = [a_ij], with a_ij = P(X_{n+1} = j | X_n = i) for all states i, j ∈ E.

Def. 1.4. The law of the process at time 1 is the probability distribution, denoted π, of the variable X_1:

π_i = P(X_1 = i), for all i ∈ E.
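Together, π and A fully determine the chain: π draws the first state and A drives every subsequent step. A minimal simulation sketch in Python, reusing the weather example's numbers (names are ours, not from the text):

```python
import random

random.seed(0)  # fixed seed so runs are reproducible

states = ["sun", "rain"]
pi = [0.2, 0.8]       # initial distribution (Def. 1.4)
A = [[0.7, 0.3],      # A[i][j] = P(X_{n+1} = j | X_n = i)  (Def. 1.3)
     [0.4, 0.6]]

def sample_path(length):
    """Draw a path of the chain: first state from pi, then steps from A."""
    path = [random.choices(range(len(pi)), weights=pi)[0]]
    while len(path) < length:
        path.append(random.choices(range(len(A[path[-1]])), weights=A[path[-1]])[0])
    return [states[i] for i in path]

print(sample_path(5))
```

Because the chain is homogeneous, the same matrix A is used at every step; a non-homogeneous chain would need a time-indexed family of matrices instead.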

Hidden Markov Model

Definition

In a Markov chain, the observations are the states of the process themselves. In a hidden Markov model, by contrast, the states of the process cannot be observed directly; one only observes symbols (also called observables) emitted by the states according to some probability distribution.
Given a sequence of observations, one cannot tell which sequence of states (or path) the process went through, hence the name hidden Markov model (HMM).
We distinguish the process X = X_1, X_2, …, X_n, which represents the evolution of the HMM's states, from the process O = O_1, O_2, …, O_n, which represents the sequence of symbols emitted by the HMM.

A hidden Markov chain is specified by the following elements:
n is the number of hidden states of the model. We write S = {s_1, s_2, …, s_n} for the set of hidden states; at time t, the state is represented by X_t (X_t ∈ S).
m is the number of distinct symbols that can be observed in each state. They are represented by the set V = {v_1, v_2, …, v_m}; at time t, the observed symbol is denoted O_t.

A transition probability matrix, denoted A = [a_ij], where a_ij is the prior probability of a transition from state i to state j. We define a_ij = P(X_{t+1} = s_j | X_t = s_i), 1 ≤ i, j ≤ n.

A matrix of distributions of the...
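The definition is cut off here; in the standard HMM formulation it presumably continues with the emission matrix B = [b_j(k)] = P(O_t = v_k | X_t = s_j). Under that assumption, generating an observation sequence from a triple (π, A, B) can be sketched as follows (all concrete numbers and names below are illustrative, not taken from the text):

```python
import random

random.seed(0)  # fixed seed so runs are reproducible

pi = [0.2, 0.8]       # initial state distribution
A = [[0.7, 0.3],      # hidden-state transitions, A[i][j] = P(X_{t+1}=j | X_t=i)
     [0.4, 0.6]]
B = [[0.9, 0.1],      # emissions: B[j][k] = P(O_t = v_k | X_t = s_j)
     [0.2, 0.8]]

def generate(length):
    """Run the hidden chain and emit one symbol per step."""
    x = random.choices(range(len(pi)), weights=pi)[0]
    states, obs = [], []
    for _ in range(length):
        states.append(x)
        obs.append(random.choices(range(len(B[x])), weights=B[x])[0])
        x = random.choices(range(len(A[x])), weights=A[x])[0]
    return states, obs

states, obs = generate(6)
print(states)  # the hidden path: not observable in practice
print(obs)     # the emitted symbols: all the observer sees
```

The sketch makes the "hidden" aspect concrete: only `obs` is available to an observer, and inference (the MLE estimators, Forward-Backward, and Viterbi discussed later) works backwards from it.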
