# What is a Markov Chain?


A Markov chain is a mathematical model that describes a system transitioning from one state to another within a set of possible states. The key feature of a Markov chain is the Markov property, which means that the probability of moving to the next state depends only on the current state and not on the sequence of events that led up to it. By generating a large number of possible state sequences through Markov chains, stochastic simulations can approximate the behavior of systems, helping to predict outcomes and assess risks in fields like finance, engineering, and biology.

## Markov chain based on an example

Let's consider this directed graph as a representation of weather conditions, where S stands for Sunny, C for Cloudy, and R for Rainy. This Markov chain model helps predict the weather based on the current condition. In this model, transitions occur between states, and each transition has an associated probability.

Imagine we start with a sunny day (S). According to the diagram, there's a 60% chance that the next day will also be sunny, a 30% chance it will be cloudy, and a 10% chance it will rain.

This model provides a simple yet powerful way to predict the weather for the next day based solely on the current day's weather, without needing to consider the sequence of previous days. For example, if today is sunny, we can expect there's a higher likelihood of another sunny day tomorrow, but there’s also a chance of cloudiness or rain, as indicated by the transition probabilities.
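This one-step prediction can be sketched as a small simulation. The Sunny row (60% / 30% / 10%) comes from the example above; the Cloudy and Rainy rows are illustrative assumptions, since the text only specifies the transitions out of Sunny:

```python
import random

# Transition probabilities out of each state.
# The "S" row matches the example (0.6 / 0.3 / 0.1);
# the "C" and "R" rows are illustrative assumptions.
transitions = {
    "S": [("S", 0.6), ("C", 0.3), ("R", 0.1)],
    "C": [("S", 0.4), ("C", 0.3), ("R", 0.3)],
    "R": [("S", 0.2), ("C", 0.55), ("R", 0.25)],
}

def next_state(state):
    """Sample the next day's weather given only today's weather."""
    states, probs = zip(*transitions[state])
    return random.choices(states, weights=probs, k=1)[0]

def simulate(start, days):
    """Generate a sequence of weather states, starting from `start`."""
    sequence = [start]
    for _ in range(days):
        sequence.append(next_state(sequence[-1]))
    return sequence

print(simulate("S", 7))
```

Note that `next_state` looks only at the current state, never at the history: that is the Markov property in code.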

## Transition matrix

To represent a Markov chain more efficiently, we can use a transition matrix. A transition matrix is a square matrix where each element represents the probability of transitioning from one state to another. For our weather example, we will have a 3x3 matrix since we have three states: Sunny (S), Cloudy (C), and Rainy (R).

The rows of the matrix represent the current state, while the columns represent the next state. The element in the ith row and jth column of the matrix indicates the probability of transitioning from state i to state j.

We can construct the transition matrix P as follows:
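As a minimal sketch, the matrix can be written down directly in code. Only the Sunny row (0.6, 0.3, 0.1) is fixed by the example; the Cloudy and Rainy rows below are illustrative assumptions:

```python
import numpy as np

# States, in a fixed order: index 0 = Sunny, 1 = Cloudy, 2 = Rainy.
states = ["S", "C", "R"]

# Transition matrix P: P[i, j] is the probability of moving from
# state i to state j.
P = np.array([
    [0.6, 0.3,  0.1],   # from Sunny (given in the example)
    [0.4, 0.3,  0.3],   # from Cloudy (assumed)
    [0.2, 0.55, 0.25],  # from Rainy  (assumed)
])

# Every row must sum to 1: from any state, the chain goes *somewhere*.
assert np.allclose(P.sum(axis=1), 1.0)

# P[0, 1]: probability that a sunny day is followed by a cloudy one.
print(P[0, 1])  # 0.3
```

The row-sum check is a useful sanity test for any hand-built transition matrix.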

## Stationary distribution

To understand the long-term behavior of our Markov chain, we introduce the concept of a stationary distribution. This distribution is a probability vector that remains invariant under the system's transitions over time. In other words, if we start with any weather state within this equilibrium, the transition probabilities will remain unchanged. The stationary distribution therefore represents the distribution of states that the chain will tend to visit repeatedly over time, regardless of the initial state.

Let’s denote this equilibrium vector as π, which represents the long-term probabilities of being in each state (Sunny, Cloudy, Rainy). Mathematically, π satisfies the equation

πP = π

This equation states that multiplying the equilibrium vector by the transition matrix P yields the same vector: π is a fixed point of the transition dynamics, or equivalently, a left eigenvector of P with eigenvalue 1.

Additionally, the elements of π must sum to 1 because they represent probabilities.

## Solving the equations

Solving these equations (which can be done using linear algebra techniques or computational tools), we find the stationary distribution π.
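As a sketch in code, the system πP = π together with the normalization constraint Σπᵢ = 1 can be solved as a linear least-squares problem. The Cloudy and Rainy rows of P are illustrative assumptions (only the Sunny row is given in the example):

```python
import numpy as np

# Transition matrix: Sunny row from the example,
# Cloudy and Rainy rows assumed for illustration.
P = np.array([
    [0.6, 0.3,  0.1],
    [0.4, 0.3,  0.3],
    [0.2, 0.55, 0.25],
])

# pi P = pi is equivalent to (P^T - I) pi = 0. Stacking the
# normalization row sum(pi) = 1 on top makes the solution unique
# for an irreducible chain; least squares then recovers it.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(pi, 3))  # stationary distribution, entries sum to 1
```

An equivalent approach is power iteration: repeatedly computing `pi = pi @ P` from any starting distribution converges to the same vector.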

For simplicity, let's assume we solve this and find π = (0.45, 0.35, 0.20).

This stationary distribution tells us the long-term probabilities of each weather condition, regardless of the initial state. Starting from any weather condition, as time goes on, the probabilities of experiencing Sunny, Cloudy, or Rainy days will converge to 45%, 35%, and 20%, respectively. This concept is useful for understanding the steady-state behavior of Markov chains in various applications, such as predicting long-term weather patterns or steady-state customer distributions in business contexts.

## Conclusion

In summary, Markov chains provide a robust framework for modeling and predicting systems with probabilistic transitions between states. By leveraging the transition matrix, we can effectively capture and analyze the probabilities of moving from one state to another. The stationary distribution offers valuable insights into the long-term behavior of the system, revealing the equilibrium probabilities that the system will converge to over time. Understanding these concepts allows us to make informed predictions and decisions based on the steady-state behavior of various processes, from forecasting weather to optimizing business strategies. As we apply these principles, we gain a deeper understanding of how systems evolve and stabilize, leading to more accurate forecasts and strategic planning.