Introduction to Algorithms
3rd Edition
ISBN: 9780262033848
Authors: Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein
Publisher: MIT Press
Question
Chapter 8.4, Problem 5E
Program Plan Intro
To describe an algorithm that sorts n random samples drawn from a continuous probability distribution function P, computable in O(1) time, in linear average-case time.
Students have asked these similar questions:
Let X and Y be two binary, discrete random variables with joint probability mass function P(X = 0, Y = 1) = P(X = 1, Y = 0) = 3/8 and P(X = 0, Y = 0) = P(X = 1, Y = 1) = 1/8. (a) Compute P(X = 0 | Y = 1). (b) Show that X and Y are not statistically independent.
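A minimal Python check of parts (a) and (b), taking the joint pmf exactly as stated above (the variable names are illustrative, not part of the question):

```python
# Joint pmf of two binary random variables, as given in the question.
joint = {
    (0, 1): 3/8, (1, 0): 3/8,
    (0, 0): 1/8, (1, 1): 1/8,
}

# Marginals by summing the joint pmf over the other variable.
p_x0 = sum(p for (x, y), p in joint.items() if x == 0)
p_y1 = sum(p for (x, y), p in joint.items() if y == 1)

# (a) P(X=0 | Y=1) = P(X=0, Y=1) / P(Y=1).
p_x0_given_y1 = joint[(0, 1)] / p_y1

# (b) Independence would require P(X=0, Y=1) == P(X=0) * P(Y=1);
# here 3/8 != 1/2 * 1/2, so X and Y are dependent.
independent = joint[(0, 1)] == p_x0 * p_y1

print(p_x0_given_y1)  # 0.75
print(independent)    # False
```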
Given a Sample Space S = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13},
Event A = {1, 3, 4, 7, 9}, and
Event B = {3, 7, 9, 11, 12, 13}
Find the probability P(A|B). State your answer as a value with one digit after the decimal point.
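Since S is finite and the outcomes are assumed equally likely, P(A|B) reduces to counting; a short Python sketch of that computation:

```python
from fractions import Fraction

S = set(range(1, 14))          # sample space {1, ..., 13}, equally likely
A = {1, 3, 4, 7, 9}
B = {3, 7, 9, 11, 12, 13}

# Under a uniform distribution on S, P(A|B) = |A ∩ B| / |B|.
p_a_given_b = Fraction(len(A & B), len(B))

print(float(p_a_given_b))  # 0.5  (|A ∩ B| = 3, |B| = 6)
```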
Solve in the R programming language:
Let the random variable X be defined on the support (1, 2) with pdf f_X(x) = (4/15)x^3.
(a) Find P(X < 1.25).
(b) Find E[X].
(c) Find the variance of X.
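The exercise asks for R, but the three quantities have closed-form integrals; a Python cross-check under the stated pdf (the helper name `cdf` is ours, not from the exercise):

```python
# pdf f(x) = (4/15) x^3 on the support (1, 2); all integrals in closed form.

def cdf(x):
    # F(x) = integral from 1 to x of (4/15) t^3 dt = (x^4 - 1) / 15
    return (x**4 - 1) / 15

# (a) P(X < 1.25)
p_a = cdf(1.25)

# (b) E[X] = integral of x * (4/15) x^3 over (1, 2) = (4/75)(2^5 - 1)
mean = 4 / 75 * (2**5 - 1)

# (c) Var(X) = E[X^2] - E[X]^2, with E[X^2] = (4/90)(2^6 - 1)
second_moment = 4 / 90 * (2**6 - 1)
variance = second_moment - mean**2

print(p_a)       # ~0.0961
print(mean)      # ~1.6533
print(variance)  # ~0.0665
```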
Similar questions
- Write an algorithm for generating a random number from a distribution described by a finite sequence of weights.
  in: sequence of n weights W describing the distribution (Wi ∈ N for i = 0, ..., n − 1, and 1 ≤ Σ_{i=0}^{n−1} Wi)
  out: randomly selected index r according to W (0 ≤ r ≤ n − 1)
- Consider the simple case of evolving a string that contains all 1s in every location. Let the length of the strings be 30. The initial population should be randomly created. Use standard mutation and one-point crossover. The fitness of a solution is the number of 1s in the string. Plot the average fitness of the population versus the generations passed. This exercise is to show the operation of a genetic algorithm.
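One standard way to realize the weighted-selection algorithm asked about above is prefix sums plus binary search; a Python sketch (the function name and `rng` parameter are illustrative choices, not from the question):

```python
import bisect
import itertools
import random

def weighted_index(weights, rng=random):
    """Return index r with probability weights[r] / sum(weights)."""
    # Prefix sums of the weights; total mass is prefix[-1].
    prefix = list(itertools.accumulate(weights))
    # Draw u uniformly from [0, total) and locate the first prefix sum > u.
    u = rng.random() * prefix[-1]
    return bisect.bisect_right(prefix, u)

# Degenerate distribution: all mass on index 2, so the result is always 2.
print(weighted_index([0, 0, 5]))  # 2
```

Building the prefix array costs O(n) and each draw costs O(log n), which is why this variant is preferred when many samples are drawn from the same weights.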
- A random variable X with two-sided exponential distribution given by
  f_X(x) = x + 1 for −1 ≤ x ≤ 0, and f_X(x) = 1 − x for 0 ≤ x ≤ 1,
  has moment generating function M_X(t) = (e^t + e^{−t} − 2) / t^2.
  (a) Using M_X(t) or otherwise, find the mean and variance of X.
  (b) Use the Chebyshev inequality to estimate the tail probability P(X > δ) for δ > 0, and compare your result with the exact tail probability.
- (control variates) Reproduce the class example of estimating ∫_0^1 2/(1 + x) dx by the MC approach using 100 uniform random variables, and after that by using a control variate with function g(U) = 1 + U as suggested in class. Compare the results.
- CashOrNothingOption = function(S, k, Time, r, q, sigma) {
    d1 = (log(S / k) + (r - q + sigma^2 / 2) * Time) / (sigma * sqrt(Time))
    d2 = d1 - sigma * sqrt(Time)
    if (k > S) {
      return(k * exp(-r * Time) * pnorm(-d2))
    } else {
      print("zero")
    }
  }
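For the control-variates question above, a Python sketch of one common formulation: it estimates the coefficient c* = Cov(f, g) / Var(g) from the same sample. That choice of coefficient is our assumption about what was "suggested in class", not a transcript of it:

```python
import math
import random

random.seed(1)  # arbitrary seed for reproducibility

n = 100
u = [random.random() for _ in range(n)]
f = [2 / (1 + x) for x in u]          # integrand; true value is 2 ln 2
g = [1 + x for x in u]                # control variate, E[g(U)] = 3/2

plain = sum(f) / n                    # crude Monte Carlo estimate

# Sample-estimated optimal coefficient c* = Cov(f, g) / Var(g).
fbar, gbar = plain, sum(g) / n
cov = sum((fi - fbar) * (gi - gbar) for fi, gi in zip(f, g)) / n
var_g = sum((gi - gbar) ** 2 for gi in g) / n
c = cov / var_g

# Control-variate estimator: correct by the known mean of g.
controlled = plain - c * (gbar - 1.5)

print(plain, controlled, 2 * math.log(2))
```

Because f(U) = 2/(1+U) is strongly (negatively) correlated with g(U) = 1+U, the corrected estimator typically has a much smaller variance than the crude one.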
- Hypergeometric distribution: given user-defined numbers k and n, if n cards are drawn from a deck, find the probability that k cards are black. Find the probability that at least k cards are black.
  INPUT: 11 7
  OUTPUT: 0.1628063397551007 0.24927823677714275
- Suppose we are given the following conditional probability values for the Bayesian network: P(L) = 0.1, P(P) = 0.2, P(F | L, P) = 0.999, P(F | not L, P) = 0.99, P(F | L, not P) = 0.99, P(F | not L, not P) = 0.1. From these values and the diagram, calculate the value of P(F) to three decimal places.
- Consider a logistic regression classifier that implements the 2-input OR gate. At iteration t, the parameters are given by w0 = 0, w1 = 0, w2 = 0. Given binary input (x1, x2), the output of logistic regression is given by 1 / (1 + exp(−w0 − w1·x1 − w2·x2)). What will be the value of the loss function at t? What will be the values of w0, w1 and w2 at (t + 1) with learning rate η = 1?
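For the hypergeometric question above (assuming a standard 52-card deck with 26 black cards, which reproduces the stated output), a Python sketch using exact binomial coefficients:

```python
from math import comb

def hypergeom_pmf(k, n, K=26, N=52):
    """P(exactly k black cards among n drawn from a deck with K black of N)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def at_least(k, n, K=26, N=52):
    """P(at least k black cards among n drawn): sum the pmf from k to n."""
    return sum(hypergeom_pmf(j, n, K, N) for j in range(k, n + 1))

n, k = 11, 7  # INPUT 11 7 from the question
print(hypergeom_pmf(k, n))  # 0.1628063397551007
print(at_least(k, n))       # 0.24927823677714275
```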
- You are given the following data: vocabulary V = {w1, w2, w3} and the bigram probability distribution p on V × V given by: p(w1, w1) = 1/4, p(w3, w3) = 1/8, p(w2, w2) = 0, p(w2, w1) = 1/4, p(w1, w3) = 1/4, p(w1, *) = 1/2 (that is, w1 as the first of a pair), p(*, w2) = 1/8. Calculate p(w1, w2) and p(w2 | w3) using Markov's rule.
- Use R to answer the following question. According to the central limit theorem, the sum of n independent identically distributed random variables will start to resemble a normal distribution as n grows large. The mean of the resulting distribution will be n times the mean of the summands, and the variance n times the variance of the summands. Demonstrate this property using Monte Carlo simulation. Over 10,000 trials, take the sum of 100 uniform random variables (with min = 0 and max = 1). Note: the variance of the uniform distribution with min 0 and max 1 is 1/12. Include: (1) a histogram of the results of the MC simulation; (2) a density plot of a normal distribution with the appropriate mean and standard deviation; (3) the mean and standard deviation of the MC simulation.
- Generate 100 synthetic data points (x, y) as follows: x is uniform over [0, 1]^10 and y = Σ_{i=1}^{10} i·x_i + 0.1·N(0, 1), where N(0, 1) is the standard normal distribution. Implement full gradient descent and stochastic gradient descent, and test them on linear regression over the synthetic data points. Subject: Python programming.
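The CLT exercise above asks for R with plots; as a plotting-free cross-check of the same moments, a Python sketch (the seed is an arbitrary choice; CLT predicts mean 100 × 1/2 = 50 and variance 100 × 1/12 ≈ 8.33 for each sum):

```python
import math
import random

random.seed(42)  # arbitrary seed for reproducibility

TRIALS, N = 10_000, 100
# Each trial: the sum of 100 independent Uniform(0, 1) draws.
sums = [sum(random.random() for _ in range(N)) for _ in range(TRIALS)]

mc_mean = sum(sums) / TRIALS
mc_var = sum((s - mc_mean) ** 2 for s in sums) / (TRIALS - 1)

# CLT prediction: mean = N * 1/2 = 50, variance = N * 1/12 ≈ 8.333.
print(mc_mean, math.sqrt(mc_var))
```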
Recommended textbooks for you
Operations Research: Applications and Algorithms
Computer Science
ISBN: 9780534380588
Author: Wayne L. Winston
Publisher: Brooks Cole