Markov Chain vs Bayes Net
I am learning about Markov chains and Bayesian networks, but at this point I am a bit confused about which types of problems are modelled by each. From what I understand (mostly from the examples I have read), Markov chains are used to represent the change in a single variable over time. So for example, take a random variable $X$ representing the weather, with $X \in \{\text{sun}, \text{rain}\}$. For a Markov chain, at time $t = 0$ we are given the initial distribution $P(X_0)$ and a transition model $P(X_t \mid X_{t-1})$. With this knowledge we could calculate the stationary distribution $P(X_\infty)$. It is like asking: given the initial distribution of a random variable $X$ and a transition model, what is $P(X_t = x)$ at time $t$? Such a question can be answered with the mini-forward algorithm.
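To make sure I understand the mini-forward step, here is a minimal sketch of how I picture it (the probabilities are made-up numbers just for illustration):

```python
import numpy as np

# States: index 0 = sun, index 1 = rain (illustrative numbers only).
p0 = np.array([0.8, 0.2])  # initial distribution P(X_0)

# T[i, j] = P(X_t = j | X_{t-1} = i): rows are "from", columns are "to".
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def mini_forward(p, T, steps):
    """Push the distribution forward:
    P(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}) * P(x_{t-1})."""
    for _ in range(steps):
        p = p @ T
    return p

print(mini_forward(p0, T, 1))    # P(X_1)
print(mini_forward(p0, T, 100))  # approaches the stationary distribution P(X_infinity)
```

Is this the right picture, i.e. the mini-forward algorithm is just repeated application of the transition model to the current distribution?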
Now for Bayesian networks, from what I understand, we model dependencies among different random variables. Here we essentially have some random variables that may have causal relationships with other variables. Such networks have some nice properties that let us define the joint distribution over all the variables easily.
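To be concrete, the property I mean is the chain-rule factorization: for variables $X_1, \dots, X_n$ in a Bayesian network,

$$P(x_1, \dots, x_n) = \prod_{i=1}^{n} P\bigl(x_i \mid \text{parents}(X_i)\bigr),$$

so for a tiny two-node network $\text{Rain} \rightarrow \text{Traffic}$ the joint would just be $P(r, t) = P(r)\,P(t \mid r)$.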
On to my question: the topic of Markov chains is often introduced before Bayesian networks. What is the relationship between the two? I can't seem to draw parallels between them; to me they look like quite different approaches to modelling quite different problems.
In what other contexts can Markov chains be used, or are they always used to model a single variable varying over time steps? I hope to gain some clarity on how to distinguish between the two, which would hopefully help me understand both topics better. Any suggestions/readings/links are much appreciated!
Tags: bayesian-networks, markov-process
Category: Data Science