BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Paul Malcolm (ANU DST)
DTSTART:20220607T060000Z
DTEND:20220607T070000Z
DTSTAMP:20260423T024721Z
UID:anumacs/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/anumacs/3/">
 New representations for a semi-Markov chain and related filters</a>\nby Pa
 ul Malcolm (ANU DST) as part of ANU Mathematics and Computational Sciences
  Seminar\n\nLecture held in Room 1.33\, Hanna Neumann Building #145.\n\nAb
 stract\nIt is now usual that the null hypothesis for a finite-state stocha
 stic process is conveniently taken to be the standard Markov chain. In the
  absence of any other system knowledge\, this is the model that is often use
 d. Some reasons for this are: Markov chains are relatively simple\, they ha
 ve been well studied\, and much is known about these processes. Added to t
 his there are now decades of history applying the standard Hidden Markov M
 odel (HMM) to: defence science\, gene sequencing\, health science\, machin
 e learning\, artificial intelligence and many other areas. In this seminar
  we will briefly recall two common application domains of estimation with 
 latent Markov processes: 1) part-of-speech (POS) tagging in natural lang
 uage processing and 2) tracking a maneuvering object with a Jump Markov Sy
 stem. Semi-Markov models relax an implicit feature of every state in a fir
 st-order time-homogeneous Markov chain\, that is\, the sojourn random var
 iables of such states are geometrically distributed and are therefore (u
 niquely) memoryless random variables. In contrast\, semi-Markov chains all
 ow arbitrary sojourn models. Consequently\, a Hidden semi-Markov Model (Hs
 MM) offers a richer class of model\, but retains the classical HMM as a sp
 ecial degenerate case.\n\nThe main task we address in this seminar concern
 s model calibration\, or parameter estimation of an HsMM. We develop an Exp
 ectation Maximization (EM) algorithm to compute the best-fitting (in the M
 aximum Likelihood sense) HsMM for a given set of observation data. There a
 re several parts to this task\, the first is to derive a recursive filter 
 and smoother for a partially observed semi-Markov chain. The second and mo
 re challenging part of the task is to derive filters and smoothers for var
 ious processes derived from the latent semi-Markov chain\, for example\, a
  counting process that counts the number of transitions between two distin
 ct states labelled "i" and "j"\, up to and including time k. We will see th
 at estimators for such quantities are non-trivial\, largely because of the
  sojourn dependence in transition probabilities.\n\nThe estimators we pres
 ent are all for partially observed joint events\, that is\, the state of t
 he semi-Markov chain at time "k" and the cumulative time it has remained i
 n this state. This means we are assured of exponential forgetting of initi
 al conditions in our estimators. Separate estimators for individual quanti
 ties such as the semi-Markov state alone are easily computed via marginali
 zation.\n
LOCATION:https://researchseminars.org/talk/anumacs/3/
END:VEVENT
END:VCALENDAR
