Viterbi algorithm use in Markov models

Post by Harry » Mon, 04 Feb 2002 00:22:13



What newsgroup or list might be best for finding a person
to communicate with about the Viterbi algorithm as a tool
for Markov model analysis?

I'm asking because I'm exploring the Viterbi algorithm
and have a few questions I would like to ask
someone who is very familiar with it.

Harry


Viterbi algorithm use in Markov models

Post by Jerry Avins » Mon, 04 Feb 2002 03:27:04



> What newsgroup or list might be best for finding a person
> to communicate with about the Viterbi algorithm as a tool
> for Markov model analysis?

> I'm asking because I'm exploring the Viterbi algorithm
> and have a few questions I would like to ask
> someone who is very familiar with it.

> Harry

I can't say what's best, but this one is good.  JA
--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------


Discrete-Time Markov Processes and Forney's Paper on the Viterbi Algorithm

Hi Group,

I have a few very basic questions/misunderstandings about the "discrete-time
Markov process" model Forney uses in his landmark 1973 paper
"The Viterbi Algorithm" (Proceedings of the IEEE, vol. 61, no. 3,
March 1973).

On p. 269, Forney arrives at a fundamental expression for P(z | x),
where x is a finite input state vector running from time 0 to
time K and z is a finite observation state vector running over
the same interval, the observation being made over a channel with
memoryless noise. His expression involves the transition
states xi[k] (Greek letter xi), where xi[k] = (x[k+1], x[k]).
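
For concreteness, here's a tiny Python sketch of how I read the
transition-state construction (the state values and the sequence
are made up purely for illustration):

  # Forney's transition states: xi[k] = (x[k+1], x[k]).
  # Hypothetical input state sequence, k = 0..K with K = 4.
  x = [0, 1, 1, 0, 1]

  # Each transition state pairs consecutive input states.
  xi = [(x[k + 1], x[k]) for k in range(len(x) - 1)]
  print(xi)  # [(1, 0), (1, 1), (0, 1), (1, 0)]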

Here's where I run into a really basic conceptual problem. The
probability we seek to evaluate, P(z | x), i.e., "the probability
of z GIVEN x", does not have ANYTHING to do with the transition
states. That is, we are GIVEN x - it matters not one whit how
probable x is. And since we are given x, it seems that the only
things needed to evaluate P(z | x) are the *CHANNEL* transition
probabilities P(z[k] | x[k]), NOT the source transition probabilities
P(x[k+1] | x[k]).

In other words, when Forney gives the relation

  P(z | x) = prod_{k=0}^{K-1} P(z[k] | xi[k])

I don't see that xi[k] has anything to do with it. We are GIVEN
x - the likelihood of x is 1!
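
To make the point concrete, here's a toy Python calculation under a
made-up binary symmetric channel (the crossover probability and both
sequences are hypothetical): once x is fixed, P(z | x) follows from
the channel probabilities alone.

  # With x GIVEN, the memoryless channel factors P(z | x) into a
  # product of per-symbol probabilities P(z[k] | x[k]).
  p = 0.1              # hypothetical BSC crossover probability
  x = [0, 1, 1, 0, 1]  # given input sequence
  z = [0, 1, 0, 0, 1]  # observed output sequence

  def p_channel(zk, xk):
      """P(z[k] | x[k]) for a binary symmetric channel."""
      return 1.0 - p if zk == xk else p

  likelihood = 1.0
  for zk, xk in zip(z, x):
      likelihood *= p_channel(zk, xk)
  print(likelihood)  # 0.9**4 * 0.1 = 0.06561 (up to float rounding)
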
--
%% Randy Yates
%%  DSP Engineer
%%  Ericsson / Research Triangle Park, NC, USA
