
Package umontreal.iro.lecuyer.markovchain

This package provides tools to implement and use discrete-time Markov chains (DTMC).


Class Summary
ArrayOfComparableChains: Permits one to simulate an array of MarkovChainComparable using the array-RQMC method of [vLEC05a], where n copies of the chain are simulated in parallel and sorted, using a multi-dimensional sort (see MultiDimSort), at each step of the chain.
ArrayOfComparableChainsStop: Deprecated.
ArrayOfDoubleChains: Similar to ArrayOfComparableChains, except that instead of working with n clones of a MarkovChain, we use a single MarkovChainDouble object for all the chains.
ArrayOfDoubleChainsStop: Deprecated.
LeftScrambledFaureSequence: Deprecated.
LeftScrambledSobolSequence: Deprecated.
MarkovChain: This class defines a generic Markov chain and provides basic tools to simulate it for a given number of steps or until it stops, and to recover the performance measure.
MarkovChainComparable: A subclass of MarkovChain for which there is a total ordering between the states in each dimension, induced by the implementation of the MultiDimComparable interface in package umontreal.iro.lecuyer.util.
MarkovChainComparableStop: Deprecated.
MarkovChainDouble: A special kind of Markov chain whose state space is a subset of the real numbers.
MarkovChainDoubleStop: Deprecated.
 

Package umontreal.iro.lecuyer.markovchain Description

This package provides tools to implement and use discrete-time Markov chains (DTMC). A DTMC is an important class of Markovian processes with time index set I = {0, 1, 2,…}. It is defined as a sequence {X_i, i ∈ I} of random variables (X_i represents the state at index i), all defined on the same probability space. The evolution of the states is determined by the stochastic recurrence

X_0 = x_0,        X_j = φ(X_{j-1}, U_j),    j ≥ 1,

where the U_j are independent random variables uniformly distributed over [0, 1)^d. Here, the dimension d is usually 1, but can be larger.

A performance measure Y_i is defined over this sequence as

Y_i = ∑_{j=1}^{i} c_j(X_j).

Each c_j is a cost (or revenue) function at step j. The objective is to estimate μ = E[Y_τ], where τ is a stopping time (fixed or random). It is possible that c_j(⋅) = 0 for all j < τ.

The basic class is MarkovChain, which contains methods to simulate the chain one step at a time or for several runs, and to store the performance measure in a statistical collector. Simulation can be done using Monte Carlo or quasi-Monte Carlo.

To use these methods, one must write a class that inherits from MarkovChain and implements its three abstract methods: initialState() resets the chain to its initial state x_0; nextStep(stream) advances the chain by one step from the current state using a random stream, and thus plays the role of the function φ(⋅); getPerformance() returns the performance measure of the chain, i.e., the value of Y_i where i is the current step.
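
To give an idea, a minimal skeleton of such a class might look as follows. This is only a sketch: the class MyChain is hypothetical, and the exact signatures of the three abstract methods are inferred from the descriptions above, so they should be checked against the MarkovChain class documentation.

import umontreal.iro.lecuyer.rng.RandomStream;
import umontreal.iro.lecuyer.markovchain.MarkovChain;

// Hypothetical chain: a random walk on the real line whose step adds a
// uniform variable to the state.  Method signatures are assumed.
public class MyChain extends MarkovChain {
   double x;          // current state
   double x0 = 0.0;   // initial state x_0

   public void initialState () {
      // Reset the chain to its initial state x_0.
      x = x0;
   }

   public void nextStep (RandomStream stream) {
      // One step of the recurrence X_j = phi(X_{j-1}, U_j);
      // here phi(x, u) = x + u.
      x += stream.nextDouble ();
   }

   public double getPerformance () {
      // Value of the performance measure Y_i at the current step;
      // here simply the current state.
      return x;
   }
}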

However, it is recommended to inherit from MarkovChainComparable (if the chains can be sorted) or MarkovChainDouble (a special case for a one-dimensional real-valued state), which are subclasses of MarkovChain, rather than directly from MarkovChain itself. A few additional methods must then be implemented. See the examples below for more details.

The classes ArrayOfComparableChains and ArrayOfDoubleChains can be used to work with multiple Markov chains in parallel. The chains can then be sorted using the method sortChains. These classes also provide methods to simulate using the array-RQMC method of [vLEC05a].

The following examples demonstrate how to implement and use a Markov chain using this package.

First, the class Brownian.java shows a very simple implementation of a MarkovChainComparable. It represents a Brownian motion over the real line. The starting position x_0 and the time step dt are given in the constructor. Each step adds a normal random variable of mean 0 and variance dt to the current position. The performance measure here is simply the absolute distance between the current position and the initial position, but it could be anything else.
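
The implementation could look roughly like the sketch below. This is not the actual Brownian.java: the normal increment is generated here by inversion with NormalDist.inverseF01, and the field stateDim and the comparison method compareTo required by MarkovChainComparable are assumptions that should be checked against that class.

import umontreal.iro.lecuyer.rng.RandomStream;
import umontreal.iro.lecuyer.probdist.NormalDist;
import umontreal.iro.lecuyer.markovchain.MarkovChainComparable;

// Sketch of a one-dimensional Brownian motion chain.
public class Brownian extends MarkovChainComparable {
   double x0, dt, x;   // initial position, time step, current position

   public Brownian (double x0, double dt) {
      this.x0 = x0;
      this.dt = dt;
      stateDim = 1;     // assumed field giving the dimension of the state
   }

   public void initialState () {
      x = x0;
   }

   public void nextStep (RandomStream stream) {
      // Add a normal increment of mean 0 and variance dt (by inversion).
      x += Math.sqrt (dt) * NormalDist.inverseF01 (stream.nextDouble ());
   }

   public double getPerformance () {
      // Absolute distance between the current and the initial position.
      return Math.abs (x - x0);
   }

   public int compareTo (MarkovChainComparable other, int i) {
      // Assumed comparison method, used by the multi-dimensional sorts;
      // the chain has a single coordinate, so i is ignored.
      return Double.compare (x, ((Brownian) other).x);
   }

   public String toString () {
      return "Brownian at position " + x;
   }
}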

The program BrownianTest.java shows different ways to use the Markov chain.

1- How to simulate the trajectory and print the state of the chain at each step and the performance at the end.

2- How to simulate using Monte Carlo to get an unbiased estimator of the expectation of the performance and an estimate of its variance (a hand-coded version of this step is sketched after this list). If stream is a PointSetIterator, use simulRunsWithSubstreams instead of simulRuns. The Tally is a statistical collector; see package umontreal.iro.lecuyer.stat for how to use it.

3- Same as 2 but with randomized quasi-Monte Carlo. Basically, it takes a PointSet where the dimension of the points is the number of steps and the number of points is the number of trajectories. The PointSetRandomization must be compatible with the point set. See package umontreal.iro.lecuyer.hups for more information on these classes.

4- Same as 2 but with the array-RQMC method (see [vLEC05a]). The ArrayOfComparableChains is used to simulate the chains in parallel. It uses a PointSetRandomization to randomize the point sets and a MultiDimSort to sort the chains. Here, since the chain is one-dimensional, the sort used is a OneDimSort. It is important to call makeCopies in order to set the number of chains. See package umontreal.iro.lecuyer.util for more information on sorts.

5- How to simulate the trajectories with array-RQMC and do something with the chains at each step. The "Do something with mc" comment should be replaced by whatever processing is needed, using the MarkovChain mc; for example, storing or printing the state x of each chain for later use.
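
As an illustration of item 2 above, the Monte Carlo experiment can be sketched as follows. To avoid guessing the exact signature of simulRuns, the runs are coded by hand with the three abstract methods; in practice one would call simulRuns (or simulRunsWithSubstreams) as described above. The stopping time and parameter values used here are arbitrary.

import umontreal.iro.lecuyer.rng.MRG32k3a;
import umontreal.iro.lecuyer.rng.RandomStream;
import umontreal.iro.lecuyer.stat.Tally;

// Crude Monte Carlo estimation of E[Y_tau] for the Brownian chain sketched
// above, with a fixed stopping time of numSteps steps.
public class BrownianMC {
   public static void main (String[] args) {
      Brownian chain = new Brownian (0.0, 1.0 / 64.0);  // arbitrary x_0, dt
      RandomStream stream = new MRG32k3a ();
      Tally stat = new Tally ("Performance of the Brownian chain");
      int numSteps = 64;      // fixed stopping time tau
      int n = 100000;         // number of independent runs

      for (int rep = 0; rep < n; rep++) {
         chain.initialState ();
         for (int j = 0; j < numSteps; j++)
            chain.nextStep (stream);
         stat.add (chain.getPerformance ());
      }
      System.out.println (stat.report ());
   }
}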

The output of BrownianTest.java is shown in BrownianTest.res. For this example, the variance of the estimator with RQMC is 6.25 times smaller than with MC, and 388 times smaller with array-RQMC than with MC.



To submit a bug or ask questions, send an e-mail to Pierre L'Ecuyer.