Engineering LibreTexts

Search

About 63 results
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/07%3A_Random_Walks_Large_Deviations_and_Martingales/7.07%3A_Submartingales_and_Supermartingales
    \[ \begin{align} &\{ |Z_n|;\, n\geq 1 \} \text{ is a submartingale} \\ &\{ Z_n^2; \, n\geq 1\} \text{ is a submartingale if } \mathsf{E}[Z_n^2]<\infty \\ &\{\exp (rZ_n); \, n\geq 1\} \text{ is a submartingale for } r \text{ such that } \mathsf{E}[\exp(rZ_n)]<\infty \end{align} \]
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/03%3A_Finite-State_Markov_Chains/3.07%3A_Summary
    If the chain is ergodic, i.e., has one aperiodic recurrent class, then the limit of the n-step transition probabilities becomes independent of the initial state, i.e., \(\lim _{n \rightarrow \infty} P^n_{ij} = \pi_j\), where \(\pi = (\pi_1, \ldots, \pi_M)\) is called the steady-state probability vector.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/03%3A_Finite-State_Markov_Chains/3.03%3A_The_Matrix_Representation
    The matrix [P] of transition probabilities of a Markov chain is called a stochastic matrix; that is, a stochastic matrix is a square matrix of nonnegative terms in which the elements in each row sum to 1.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/04%3A_Renewal_Processes/4.05%3A_Random_Stopping_Trials
    Note that \(S^r_{N^r(t)}\) is not only a renewal epoch for the renewal process, but also an arrival epoch for the arrival process; in particular, it is the \(N\left(S^r_{N^r(t)}\right)\text{th}\) arrival epoch, and the \(N\left(S^r_{N^r(t)}\right)-1\) earlier arrivals are the customers that have received service up to time \(S^r_{N^r(t)}\).
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/07%3A_Random_Walks_Large_Deviations_and_Martingales/7.03%3A_Detection_Decisions_and_Hypothesis_Testing
    The probability of this event is \(\operatorname{Pr}\{H=1 \mid y\}\). Similarly, if we select \(\hat{h}=1\), the probability of error is \(\operatorname{Pr}\{H=0 \mid y\}\). Thus the probability of error is minimized, for a given \(y\), by selecting \(\hat{h}=0\) if the ratio in (7.3.3) is greater than 1 and selecting \(\hat{h}=1\) otherwise.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/04%3A_Renewal_Processes/4.02%3A_The_Strong_Law_of_Large_Numbers_and_Convergence_WP1
    \[\operatorname{Pr}\left\{\sum_{n=1}^{m}\left|Y_{n}\right|>\alpha\right\} \leq \frac{\mathrm{E}\left[\sum_{n=1}^{m}\left|Y_{n}\right|\right]}{\alpha}=\frac{\sum_{n=1}^{m} \mathrm{E}\left[\left|Y_{n}\right|\right]}{\alpha}\] For any \(\omega\) such that \(\sum_{n=1}^{\infty}\left|Y_{n}(\omega)\right| \leq \alpha\), we see that \(\{|Y_n(\omega)|;\, n\geq 1\}\) is simply a sequence of non-negative numbers with a finite sum.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/03%3A_Finite-State_Markov_Chains/3.05%3A_Markov_Chains_with_Rewards
    Suppose that each state in a Markov chain is associated with a reward. As the Markov chain proceeds from state to state, there is an associated sequence of rewards that are not independent, but are related by the statistics of the Markov chain. The concept of a reward in each state is quite graphic for modeling corporate profits or portfolio performance, and is also useful for studying queueing delay, the time until some given state is entered, and many other phenomena. The reward associated wit…
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/05%3A_Countable-state_Markov_Chains/5.02%3A_Birth-Death_Markov_chains
    A birth-death Markov chain is a Markov chain in which the state space is the set of nonnegative integers; for all \(i \geq 0\), the transition probabilities satisfy \(P_{i,i+1} > 0\) and \(P_{i+1,i} > 0\), and for all \(|i-j| > 1\), \(P_{ij} = 0\) (see Figure 5.4). A transition from state \(i\) to \(i+1\) is regarded as a birth and one from \(i+1\) to \(i\) as a death. Thus the restriction on the transition probabilities means that only one birth or death can occur in one unit of time.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/07%3A_Random_Walks_Large_Deviations_and_Martingales/7.11%3A_Summary
    The focus in random walks, however, as in most of the processes we have studied, is more in the relationship between the terms (such as which term first crosses a threshold) than in the individual terms. For example, we showed that all of the random walk issues of earlier sections can be treated as a special case of martingales, and that martingales can be used to model both sums and products of random variables.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/07%3A_Random_Walks_Large_Deviations_and_Martingales/7.01%3A_Introduction
    If \(X_1, X_2, \ldots\) are IID positive random variables, then \(\{S_n;\, n\geq 1\}\) is both a special case of a random walk and also the sequence of arrival epochs of a renewal counting process, \(\{N(t);\, t>0\}\). In this special case, the sequence \(\{S_n;\, n\geq 1\}\) must eventually cross a threshold at any given positive value \(\alpha\), and the question of whether the threshold is ever crossed becomes uninteresting.
  • https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Discrete_Stochastic_Processes_(Gallager)/03%3A_Finite-State_Markov_Chains/3.06%3A_Markov_Decision_Theory_and_Dynamic_Programming
    In the previous section, we analyzed the behavior of a Markov chain with rewards. In this section, we consider a much more elaborate structure in which a decision maker can choose among various possible rewards and transition probabilities.
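Several of the results above (3.03 and 3.07) concern stochastic matrices and the steady-state probabilities \(\pi_j = \lim_{n\to\infty} P^n_{ij}\) of an ergodic chain. A minimal sketch of both ideas, using a made-up 3-state chain (the matrix below is illustrative and not from Gallager's text): it checks the row-sum property of a stochastic matrix, then approximates \(\pi\) by power iteration.

```python
# Hypothetical 3-state ergodic chain (illustrative values, not from the text).
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
]

# A stochastic matrix is square with nonnegative entries; each row sums to 1.
assert all(p >= 0.0 for row in P for p in row)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

# For an ergodic chain, lim P^n_ij is independent of the starting state i,
# so iterating pi <- pi P from any initial distribution converges to pi.
pi = [1.0 / len(P)] * len(P)  # start from the uniform distribution
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

print([round(p, 4) for p in pi])  # approximate steady-state vector pi
```

Solving \(\pi P = \pi\) with \(\sum_j \pi_j = 1\) directly gives the same answer; power iteration is just the numerical counterpart of the limit statement in 3.07.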
