A1
Adrian Lau
2023-09-23
1.10
A single large experiment is much more likely to waste resources collecting irrelevant data that a pilot study
would have weeded out. A pilot study could also lead to a better match between the sample and the target
population.
2.27
a)
a <- c(2.7, 4.6, 2.6, 3.0, 3.2, 3.8)
b <- c(4.6, 3.4, 2.9, 3.5, 4.1, 5.1)
n <- 6
sa <- var(a)
sb <- var(b)
ma <- mean(a)
mb <- mean(b)
t <- (ma - mb) / sqrt(sa/n + sb/n)
dof <- (sa/n + sb/n)^2 / ((sa/n)^2/(n-1) + (sb/n)^2/(n-1))
2 * pt(abs(t), dof, lower.tail = FALSE)
## [1] 0.2070179
Since we do not know whether the variances are equal, a Welch test was performed, giving |t0| = 1.3487 and 9.94 degrees of freedom, which was rounded to 10.
b) The resulting p-value was calculated to be 0.207, providing no evidence against the hypothesis that the means are equal, meaning there is no evidence that C2F6 flow rate affects mean etch uniformity.
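As a sanity check (not required by the question), R's built-in t.test with var.equal = FALSE runs the same Welch procedure on the vectors a and b defined above, and should reproduce the statistic, degrees of freedom, and p-value up to sign and rounding:
# Welch two-sample t test on the same data; default is two-sided
t.test(a, b, var.equal = FALSE)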
c)
f <- sa/sb
2 * pf(f, 5, 5)
## [1] 0.8689017
The p-value from this F test was 0.8689, providing no evidence against the variances being equal, meaning there is no evidence that C2F6 affects wafer-to-wafer variability in etch uniformity.
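The same comparison is available through the built-in var.test, which computes the ratio var(a)/var(b) and a two-sided p-value; as a sketch of that cross-check, it should agree with the 0.8689 figure above:
# F test for equality of two variances (two-sided by default)
var.test(a, b)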
d)
boxplot(a, b, names = c(125, 200), xlab = "C2F6 Flow rate",
        ylab = "Etch Uniformity", main = "Etch Uniformity by Flow Rate")
[Figure: side-by-side boxplots of etch uniformity at C2F6 flow rates 125 and 200.]
2.29
a)
a <- c(11.176, 7.089, 8.097, 11.739, 11.291, 10.759, 6.467, 8.315)
b <- c(5.263, 6.748, 7.461, 7.015, 8.133, 7.418, 3.772, 8.963)
n <- 8
sa <- var(a)
sb <- var(b)
ma <- mean(a)
mb <- mean(b)
t <- (ma - mb) / sqrt(sa/n + sb/n)
dof <- (sa/n + sb/n)^2 / ((sa/n)^2/(n-1) + (sb/n)^2/(n-1))
pt(t, 13, lower.tail = FALSE)
## [1] 0.009538865
Since the variances are not assumed to be equal, a Welch test was performed. t0 was calculated to be 2.67 and the degrees of freedom were calculated to be 13.22, which was rounded to 13.
b) The resulting p-value was calculated to be 0.0095, providing very strong evidence against the hypothesis that the mean of the 100 °C group is greater than or equal to the mean of the 95 °C group. Thus there is very strong evidence that baking at a higher temperature produces a lower mean photoresist thickness.
c)
(ma - mb) - sqrt(sa/n + sb/n) * qt(0.95, 13)
## [1] 0.8517505
The resulting confidence interval is (0.8517505, ∞). One of the bounds is infinite since a one-sided test was performed. The interpretation of the interval is that 95% of intervals generated this way will contain the true difference in means, given that they are generated from random samples of the same two distributions.
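Both the one-sided test and the interval can be cross-checked with the built-in t.test (a sketch, not part of the hand calculation); its results should differ only slightly because t.test keeps the unrounded Welch degrees of freedom (13.22) rather than rounding to 13:
# One-sided Welch test; also prints the one-sided 95% confidence interval
t.test(a, b, alternative = "greater", var.equal = FALSE)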
e)
shapiro.test(a)
## 
##  Shapiro-Wilk normality test
## 
## data:  a
## W = 0.87501, p-value = 0.1686
shapiro.test(b)
## 
##  Shapiro-Wilk normality test
## 
## data:  b
## W = 0.9348, p-value = 0.5607
Both Shapiro-Wilk p-values show no evidence against the samples being normally distributed.
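A common visual companion to the Shapiro-Wilk tests (not required by the question) is a normal Q-Q plot of each sample; points falling near the reference line support the normality assumption:
# Normal Q-Q plots for the two samples defined above
par(mfrow = c(1, 2))
qqnorm(a, main = "Sample a"); qqline(a)
qqnorm(b, main = "Sample b"); qqline(b)
par(mfrow = c(1, 1))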
f)
rejection_region <- qt(0.05, 13)
shift <- 2.5 / sqrt(sa/n + sb/n)
pt(rejection_region + shift, 13)
## [1] 0.8033499
The power of this Welch test is 0.803.
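The 0.803 figure can be sanity-checked by simulation, treating the sample variances as if they were the true variances and imposing a true mean difference of 2.5 (a rough Monte Carlo sketch, not part of the original calculation); the estimated power should land near 0.80:
# Simulate the one-sided Welch test under a true difference of 2.5
set.seed(1)
rejections <- replicate(10000, {
  x <- rnorm(8, mean = 2.5, sd = sqrt(sa))  # assumed "true" sds taken from the samples
  y <- rnorm(8, mean = 0, sd = sqrt(sb))
  t.test(x, y, alternative = "greater")$p.value < 0.05
})
mean(rejections)  # proportion of rejections = estimated power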
g)
pooled_variance <- (sa + sb) / 2
power.t.test(delta = 1.5, sd = sqrt(pooled_variance), power = 0.9,
             sig.level = 0.05, alternative = "one.sided")
## 
##      Two-sample t test power calculation 
## 
##               n = 27.72409
##           delta = 1.5
##              sd = 1.884034
##       sig.level = 0.05
##           power = 0.9
##     alternative = one.sided
## 
## NOTE: n is number in *each* group
At least 28 samples are needed in each group to obtain a power greater than 0.9 for this test.
3.15
a)
numbers <- c(c(1000, 1500, 1200, 1800, 1600, 1100, 1000, 1250),
             c(1500, 1800, 2000, 1200, 2000, 1700, 1800, 1900),
             c(900, 1000, 1200, 1500, 1200, 1550, 1000, 1100))
factors <- c(rep("one", 8), rep("two", 8), rep("three", 8))
dat <- data.frame(numbers, factors)
one <- c(1000, 1500, 1200, 1800, 1600, 1100, 1000, 1250)
two <- c(1500, 1800, 2000, 1200, 2000, 1700, 1800, 1900)
three <- c(900, 1000, 1200, 1500, 1200, 1550, 1000, 1100)
mo <- mean(one)
mtw <- mean(two)
mtr <- mean(three)
mto <- mean(numbers)
ss_treat <- 8*(mo-mto)^2 + 8*(mtw-mto)^2 + 8*(mtr-mto)^2
ss_total <- sum((numbers - mto)^2)
ss_error <- ss_total - ss_treat
f0 <- (ss_treat/(3-1)) / (ss_error/(24-3))
pf(f0, 2, 21, lower.tail = FALSE)
## [1] 0.00120875
Since the p-value is very small (< 0.01), there is very strong evidence against the treatment having no effect. The data indicate a difference between the three approaches.
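As a cross-check (not part of the hand calculation), fitting the one-way model with lm and calling anova on it should reproduce the same F statistic and p-value from the data frame dat built above:
# Built-in one-way ANOVA on the same data
anova(lm(numbers ~ factors, data = dat))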
b)
intercept <- mo
trt_two <- mtw - mo
trt_three <- mtr - mo
shapiro.test(one - intercept)
## 
##  Shapiro-Wilk normality test
## 
## data:  one - intercept
## W = 0.91076, p-value = 0.3595
shapiro.test(two - trt_two)
## 
##  Shapiro-Wilk normality test
## 
## data:  two - trt_two
## W = 0.88411, p-value = 0.206
shapiro.test(three - trt_three)
## 
##  Shapiro-Wilk normality test
## 
## data:  three - trt_three
## W = 0.89968, p-value = 0.2871
Since the p-values are > 0.05, there is no evidence against the assumption of normality in each group's residuals.
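An equivalent check (a sketch whose output is not part of the original write-up) pools the residuals of the fitted one-way model and runs a single Shapiro-Wilk test:
# Normality check on the pooled residuals of the fitted model
fit_315 <- lm(numbers ~ factors, data = dat)
shapiro.test(residuals(fit_315))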
c)
s_p <- (7*var(one) + 7*var(two) + 7*var(three)) / 21
num <- 21*log(s_p) - (7*log(var(one)) + 7*log(var(two)) + 7*log(var(three)))
denom <- 1 + (1/6)*(3*(1/7) - 1/21)
t <- num/denom
pchisq(t, 2, lower.tail = FALSE)
## [1] 0.8455212
With a p-value of 0.85, this Bartlett test provides no evidence against the assumption of equal variances across the three groups.
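The built-in bartlett.test implements the same statistic and should reproduce the value computed by hand above (a cross-check, not part of the original solution):
# Bartlett test of homogeneity of variances across the three groups
bartlett.test(numbers ~ factors, data = dat)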
d)
MSE <- ss_error / 21
LSD <- qt(0.975, 21) * sqrt(2*MSE/8)
abs(mo - mtw) > LSD
## [1] TRUE
abs(mo - mtr) > LSD
## [1] FALSE
abs(mtr - mtw) > LSD
## [1] TRUE
Grouping the treatments according to these LSD comparisons: treatment two forms its own group, while treatments one and three fall into a second group with no significant difference between them.
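If Tukey-adjusted comparisons are wanted instead of the unadjusted LSD comparisons above, the built-in TukeyHSD can be applied to the fitted aov model (a sketch; its output is not shown in the original write-up):
# Tukey HSD intervals for all pairwise differences between the three approaches
TukeyHSD(aov(numbers ~ factor(factors), data = dat))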
e)
Rewriting the contrast as 2μ₁ − μ₂ − μ₃ = 0 gives c₁ = 2, c₂ = −1, c₃ = −1.
c <- c(2, -1, -1)
mu <- c(mean(one), mean(two), mean(three))
num <- sum(c*mu)
denom <- sqrt(MSE/8 * sum(c^2))
t <- num/denom
2 * pt(t, 21)
## [1] 0.2029711
Since the p-value is 0.20, there is no evidence against the null hypothesis that the contrast equals zero.
3.18
a)
one <- c(143, 141, 150, 146)
two <- c(152, 149, 137, 143)
three <- c(134, 136, 132, 127)
four <- c(129, 127, 132, 129)
numbers <- c(one, two, three, four)
factors <- c(rep("one", 4), rep("two", 4), rep("three", 4), rep("four", 4))
dat <- data.frame(numbers, factors)
means <- c(mean(one), mean(two), mean(three), mean(four))
m_to <- mean(numbers)
ss_trt <- 4 * sum((means - m_to)^2)
ss_total <- sum((numbers - m_to)^2)
ss_error <- ss_total - ss_trt
MSE <- ss_error / 12
MST <- ss_trt / 3
f0 <- MST/MSE
pf(f0, 3, 12, lower.tail = FALSE)
## [1] 0.0002881237
Since the P-value < 0.001, there is very strong evidence against the treatment having no effect. Thus there is
an observed difference in conductivity dependent on coating type.
b)
Leaving four as the intercept:
mean(four)
## [1] 129.25
mean(one) - mean(four)
## [1] 15.75
mean(two) - mean(four)
## [1] 16
mean(three) - mean(four)
## [1] 3
The intercept (the mean of coating four) is observed to be 129.25, with treatment one having an effect of 15.75, treatment two an effect of 16, and treatment three an effect of 3.
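The same estimates can be read off an lm fit with coating four set as the reference level (a cross-check, not part of the original solution); the intercept should come out as 129.25 and the remaining coefficients as the three treatment effects above:
# Effects model with coating "four" as the reference level
fit <- lm(numbers ~ relevel(factor(factors), ref = "four"), data = dat)
coef(fit)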
c)
Using the formula for a confidence interval of a contrast, let the mean of treatment 4 be represented by the contrast μ₄ + 0·μ₁ + 0·μ₂ + 0·μ₃, yielding c₄ = 1 and c₁ = c₂ = c₃ = 0.
c <- c(0, 0, 0, 1)
sum(c*means) + qt(0.025, 12, lower.tail = FALSE) * sqrt(MSE/4 * sum(c^2))
## [1] 134.0838
sum(c*means) - qt(0.025, 12, lower.tail = FALSE) * sqrt(MSE/4 * sum(c^2))
## [1] 124.4162
A 95% CI for the mean of coating type 4 is (124.4162, 134.0838).
Using the same process as above:
c <- c(1, 0, 0, -1)
sum(c*means) + qt(0.025, 12, lower.tail = FALSE) * sqrt(MSE/4 * sum(c^2))
## [1] 22.58597
sum(c*means) - qt(0.025, 12, lower.tail = FALSE) * sqrt(MSE/4 * sum(c^2))
## [1] 8.914029
A 95% CI for the difference in means of coating 1 and coating 4 is (8.914029, 22.58597).
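Both intervals should also match the rows of confint applied to a model with coating four as the reference level, since the intercept then estimates the coating-four mean and the coating-one coefficient estimates the difference from coating four (a cross-check, refitting the model from the sketch in part b):
# 95% confidence intervals for the intercept and treatment effects
fit <- lm(numbers ~ relevel(factor(factors), ref = "four"), data = dat)
confint(fit, level = 0.95)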
d)
LSD <- qt(0.975, 12) * sqrt(2*MSE/4)
abs(means[4] - means[1]) > LSD
## [1] TRUE
abs(means[4] - means[2]) > LSD
## [1] TRUE
abs(means[4] - means[3]) > LSD
## [1] FALSE
abs(means[3] - means[2]) > LSD
## [1] TRUE
abs(means[3] - means[1]) > LSD
## [1] TRUE
abs(means[2] - means[1]) > LSD
## [1] FALSE
There is a significant difference between coatings 1 and 4, coatings 2 and 4, coatings 2 and 3, and coatings 1 and 3. There is no significant difference between coatings 3 and 4 or between coatings 1 and 2.
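These decisions should line up with unadjusted pairwise t tests using the pooled error variance, which is what pairwise.t.test computes by default when no p-value adjustment is applied (a sketch of that cross-check; exactly the four significant pairs above should fall below 0.05):
# Unadjusted pairwise comparisons with a pooled standard deviation
pairwise.t.test(numbers, factors, p.adjust.method = "none")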
e)
If the manufacturer wishes to minimize conductivity, they should continue using coating four. There is no statistically significant difference between coatings 3 and 4, but there is also no reason to switch.