Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model outputs a sequence of n-dimensional real-valued vectors (with n a small integer, such as 10), emitting one of these every 10 milliseconds. The vectors consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech, decorrelating the spectrum using a cosine transform, and then keeping the first (most significant) coefficients. The hidden Markov model tends to have in each state a statistical distribution that is a mixture of diagonal-covariance Gaussians, which gives a likelihood for each observed vector. Each word, or (for more general speech recognition systems) each phoneme, has a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individually trained hidden Markov models for the separate words and phonemes.
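The per-state emission model described above can be sketched in a few lines. This is a minimal illustration, not tied to any particular speech toolkit: the function names and the tiny parameter values are invented for the example, and a real recognizer would learn the mixture weights, means, and variances from data (e.g. via Baum-Welch training).

```python
import math

def diag_gaussian_logpdf(x, mean, var):
    """Log-density of vector x under a Gaussian with diagonal covariance.

    With a diagonal covariance, the density factorizes over dimensions,
    so the log-density is a sum of one-dimensional Gaussian log-densities.
    """
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

def mixture_loglik(x, weights, means, variances):
    """Log-likelihood of x under a weighted mixture of diagonal Gaussians.

    This is the per-state emission likelihood an HMM state assigns to one
    observed cepstral vector. Uses log-sum-exp for numerical stability.
    """
    logs = [
        math.log(w) + diag_gaussian_logpdf(x, m, v)
        for w, m, v in zip(weights, means, variances)
    ]
    mx = max(logs)
    return mx + math.log(sum(math.exp(l - mx) for l in logs))

# Toy 3-dimensional "cepstral" observation scored by a two-component
# mixture (illustrative numbers only).
obs = [0.2, -1.1, 0.5]
score = mixture_loglik(
    obs,
    weights=[0.6, 0.4],
    means=[[0.0, -1.0, 0.4], [1.0, 0.0, 0.0]],
    variances=[[1.0, 0.5, 0.8], [2.0, 1.0, 1.0]],
)
```

In a full recognizer, one such likelihood is computed for every (state, frame) pair and combined with the state-transition probabilities by the Viterbi or forward algorithm to score whole word sequences.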