Komlós' theorem


Komlós' theorem is a theorem from probability theory and mathematical analysis about the Cesàro convergence of a subsequence of a sequence of random variables (or functions), and of all its further subsequences, to an integrable random variable (or function). As such it is also an existence theorem for an integrable random variable (or function). There exist a probabilistic version and an analytic version for finite measure spaces.

The theorem was proven in 1967 by János Komlós.[1] A generalization was given in 1970 by Srishti D. Chatterji.[2]

Komlós' theorem

Probabilistic version

Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(\xi_n)_{n \in \mathbb{N}}$ be a sequence of real-valued random variables defined on this space with

$$\sup_{n \in \mathbb{N}} \mathbb{E}\left[ |\xi_n| \right] < \infty.$$

Then there exists a random variable $\psi \in L^1(P)$ and a subsequence $(\eta_k) = (\xi_{n_k})$, such that for every arbitrary subsequence $(\hat{\eta}_n) = (\eta_{k_n})$, as $n \to \infty$,

$$\frac{\hat{\eta}_1 + \cdots + \hat{\eta}_n}{n} \to \psi \quad P\text{-almost surely}.$$
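As a numerical illustration (the specific distribution, sample size, and subsequence below are illustrative assumptions, not part of the theorem), the following Python sketch uses an i.i.d. integrable sequence. In this special case the strong law of large numbers already identifies the Komlós limit as the common expectation, and any subsequence of an i.i.d. sequence is again i.i.d., so its Cesàro means tend to the same limit; Komlós' theorem extends this conclusion to arbitrary $L^1$-bounded sequences after passing to a suitable subsequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: xi_1, xi_2, ... i.i.d. Exp(1), so that
# sup_n E[|xi_n|] = 1 < infinity and Komlós' theorem applies; here the
# strong law of large numbers already gives the limit psi = E[xi_1] = 1.
N = 200_000
xi = rng.exponential(scale=1.0, size=N)

# Cesàro (arithmetic) mean of the full sequence.
print("full sequence:", xi.mean())

# Cesàro mean along a sparse subsequence n_k = k^2 (an arbitrary
# illustrative choice); an i.i.d. subsequence is again i.i.d., so its
# Cesàro means tend to the same limit.
idx = np.arange(1, int(N ** 0.5) + 1) ** 2
idx = idx[idx < N]
print("subsequence  :", xi[idx].mean())
```

Both printed values are close to 1, the common expectation; the interesting content of the theorem is that no independence is needed, only the uniform bound on the first moments.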

Analytic version

Let $(E, \mathcal{A}, \mu)$ be a finite measure space and $(f_n)_{n \in \mathbb{N}}$ be a sequence of real-valued functions in $L^1(\mu)$ with

$$\sup_{n \in \mathbb{N}} \int_E |f_n| \, \mathrm{d}\mu < \infty.$$

Then there exists a function $f \in L^1(\mu)$ and a subsequence $(g_k) = (f_{n_k})$ such that for every arbitrary subsequence $(\hat{g}_n) = (g_{k_n})$, as $n \to \infty$,

$$\frac{\hat{g}_1 + \cdots + \hat{g}_n}{n} \to f \quad \mu\text{-almost everywhere}.$$
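For the analytic version, a minimal numerical sketch (the measure space, the functions $f_n(x) = \sin(nx)$, and the evaluation points are illustrative assumptions): for this particular bounded sequence the Cesàro means of the full sequence already converge to $f = 0$ almost everywhere, which illustrates the mode of convergence; the substance of the theorem, the extraction of a subsequence along which all further subsequences behave this way, is not exhibited by the computation.

```python
import numpy as np

# Illustrative assumption: E = [0, 2*pi] with Lebesgue measure (a finite
# measure space) and f_n(x) = sin(n*x), so that the integrals of |f_n|
# are all equal to 4, hence uniformly bounded.
x = np.array([0.5, 1.0, 2.0, 3.0, 5.0])  # sample points in (0, 2*pi)
N = 100_000
n = np.arange(1, N + 1)

# Cesàro means (f_1(x) + ... + f_N(x)) / N; the closed form
# sin(N*x/2) * sin((N+1)*x/2) / (N * sin(x/2)) shows they tend to 0
# for every x that is not an integer multiple of 2*pi.
cesaro = np.sin(np.outer(n, x)).mean(axis=0)
print(cesaro)  # all entries are O(1/N), i.e. essentially 0
```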

Explanations

The theorem thus states that the extracted subsequence $(\eta_k)$ and all of its further subsequences are Cesàro convergent to the same limit $\psi$. Notably, only uniform boundedness in $L^1$ is assumed; no independence or distributional conditions are required.
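Spelled out, a short LaTeX note on the notion of Cesàro convergence used above (notation as in the probabilistic version):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A sequence $(h_n)_{n \in \mathbb{N}}$ is \emph{Ces\`aro convergent} to $h$
if its arithmetic means converge,
\[
  \frac{1}{n} \sum_{k=1}^{n} h_k \longrightarrow h
  \qquad (n \to \infty).
\]
Koml\'os' theorem extracts a single subsequence $(\eta_k) = (\xi_{n_k})$
such that \emph{every} further subsequence of $(\eta_k)$ is Ces\`aro
convergent to the same limit $\psi$, $P$-almost surely.
\end{document}
```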

Literature

  • Kabanov, Yuri; Pergamenshchikov, Sergei (2003). Two-Scale Stochastic Systems: Asymptotic Analysis and Control. Springer. doi:10.1007/978-3-662-13242-5. p. 250.

References

  1. ^ Komlós, János (1967). "A Generalisation of a Problem of Steinhaus". Acta Mathematica Academiae Scientiarum Hungaricae. 18 (1). doi:10.1007/BF02020976.
  2. ^ Chatterji, S. D. (1970). "A general strong law". Inventiones Mathematicae. 9: 235–245. doi:10.1007/BF01404326.