### Case Study

The owner claims that the bricks were defective. The brick manufacturer counters that poor design and shoddy management led to the damage. The goal is to estimate the spall/damage rate per 1,000 bricks.

Experiment

• Owner uses several scaffold-drop surveys. A scaffold-drop survey provides the most accurate estimate of the spall rate in each wall segment, but the drop areas were not selected at random: drops were made in areas with high spall concentrations, which leads to an overestimate of total damage.
• Brick manufacturer divides the walls of the complex into 83 wall segments and takes a photo of each.
• Counting the damaged bricks in all 83 photos yielded the total spall damage. This estimate was biased low, because not all damaged bricks could be made out from the photos (especially in areas with high spall concentrations).

Construct a scatter diagram of data

The data show how many bricks out of 1,000 were damaged at the 11 drop locations under the two different experiments, “drop spall rate” and “photo spall rate”. For the sample of 11 drop locations, it is important that each location corresponds to the same wall segment in both experiments, because the data show an increasing trend from location 1 to location 11.

This trend can be summarized with a least-squares prediction equation. The slope parameter β1 is positive in both experiments.
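As a minimal sketch of such a least-squares fit, the snippet below regresses spall counts on location number with `numpy.polyfit`. The spall counts here are illustrative placeholders (the actual survey data are not reproduced in this write-up); the point is only that a positive fitted slope confirms the increasing trend.

```python
import numpy as np

# Illustrative spall counts (per 1,000 bricks) at the 11 drop locations;
# these are NOT the actual survey data, just numbers with a rising trend.
location = np.arange(1, 12)  # drop locations 1..11
drop_rate = np.array([0, 5, 12, 20, 30, 38, 45, 55, 62, 70, 80], dtype=float)

# Least-squares fit: drop_rate = b0 + b1 * location
# np.polyfit returns coefficients highest-degree first, so slope comes first.
b1, b0 = np.polyfit(location, drop_rate, 1)
print(f"intercept b0 = {b0:.2f}, slope b1 = {b1:.2f}")
```

A positive `b1` is what the case study observes for both the drop and photo experiments.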

Find the prediction equation for the drop spall rate using MINITAB to perform regression analysis (for 1,000 bricks)

Drop location versus photo spall rate

• The regression equation is drop location = 3.70 + 0.477 photo spall rate
• S= 1.73107, R-Sq = 75.5%, R-Sq(adj) = 72.8%

Drop location versus drop spall rate

• The regression equation is drop location = 3.28 + 0.172 drop spall rate

The elements of the designed experiment are the response (dependent) variable, which is the spall rate, and the factor (a possible effect on the response variable), which is the type of experiment. This factor has two levels: “drop spalling” and “photo spalling”, and each is measured at the same 11 drop locations. We could use these two levels as our treatments (factor-level combinations) and conduct a completely randomized design, but we must be careful.

Earlier, we noticed an increasing amount of spalling (a positive slope in the least-squares equation) from location 1 to location 11. This is a problem: it means independent random samples were not selected for each treatment.

Because independent random samples were not selected for each treatment, we conduct a randomized block design instead. This better controls sampling variability within the treatments (as measured by MSE). A randomized block design groups the experimental units into matched sets (blocks) and assigns one unit from each set to each treatment.

These matched sets, or blocks, group together “k” experimental units (where “k” is the number of treatments) that are as similar as possible. Blocking reduces the sampling variability of the experimental units within each block, which in turn reduces the measure of error, MSE. This tends to prevent a Type II error: failing to reject the null hypothesis that the treatment means are equal when they actually differ. A conclusion that the treatment means for “drop spalling” and “photo spalling” are equal could simply be due to not using blocks, which inflates MSE. Such a faulty conclusion would be a function of how we designed our experiment.

There will be 11 blocks for the 11 locations, 2 treatments for the 2 experiments, and thus 22 responses (n=bk).
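As a quick sketch of this layout (the location numbers and treatment names come from the case study; the random seed is arbitrary), the block structure and the random ordering of treatments within each block could be generated like this:

```python
import random

random.seed(42)  # arbitrary seed, for a reproducible illustration

blocks = list(range(1, 12))                  # b = 11 drop locations (blocks)
treatments = ["drop spall", "photo spall"]   # k = 2 experiments (treatments)

# Within each block, every treatment is applied, in random order.
design = {}
for block in blocks:
    order = treatments[:]
    random.shuffle(order)
    design[block] = order

n = len(blocks) * len(treatments)            # n = bk = 22 responses
print(n)
for block, order in design.items():
    print(block, order)
```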

Completely Randomized Design

• One-way ANOVA (for 1,000 bricks): drop spall rate and photo spall rate

Randomized Block Design

• Regression Analysis: drop location versus drop spall rate, photo spall rate, and block mean
• *Block means are highly correlated with the other X variables
• *Block mean is therefore removed from the equation

The regression equation (for 1,000 bricks):

• Drop location = 3.31 + 0.152 drop spall rate + 0.057 photo spall rate
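This fitted equation can be wrapped in a small helper for computing predictions (the function name is mine; the coefficients are the ones reported in the MINITAB output above):

```python
def predict_drop_location(drop_spall_rate: float, photo_spall_rate: float) -> float:
    """Fitted regression equation from the case study (per 1,000 bricks).

    Coefficients are taken from the reported MINITAB output:
    drop location = 3.31 + 0.152 * drop spall rate + 0.057 * photo spall rate
    """
    return 3.31 + 0.152 * drop_spall_rate + 0.057 * photo_spall_rate

# Example: spall rates of 10 per 1,000 in both experiments.
print(round(predict_drop_location(10, 10), 2))  # 3.31 + 1.52 + 0.57 = 5.4
```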

Conduct a formal statistical hypothesis test to determine whether the photo spall rates contribute to the prediction of drop spall rates.

ANOVA F-Test to Compare “k” Treatment Means: Randomized Block Design

• H0: µ1 = µ2
• Ha: At least two treatment means differ
• Test statistic: F = MST/MSE
• Rejection region: F > Fα, where Fα is based on (k- 1) numerator DOF & (n – b – k + 1) denominator DOF.
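The critical value Fα for this rejection region can be computed directly with `scipy.stats.f` rather than looked up in a table. With k = 2 treatments, b = 11 blocks, and n = 22 responses, the degrees of freedom are (k − 1) = 1 and (n − b − k + 1) = 10:

```python
from scipy.stats import f

k, b, n = 2, 11, 22          # treatments, blocks, responses
df_num = k - 1               # numerator DOF = 1
df_den = n - b - k + 1       # denominator DOF = 10

alpha = 0.05
F_crit = f.ppf(1 - alpha, df_num, df_den)  # upper-tail critical value
print(round(F_crit, 2))  # 4.96, matching the Table IX value used below
```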

Conditions Required for a Valid ANOVA F-Test: Randomized Block Design

• The ‘b’ blocks are randomly selected, and all ‘k’ treatments are applied (in random order) to each block. Good
• The distributions of observations corresponding to all ‘bk’ block–treatment combinations are approximately normal. Good. A t-based check was performed, since the sample size n is small, and small samples can make the average spall rate for each treatment deviate from the normal distribution.
• The ‘bk’ block–treatment distributions have equal variances. Good (the sample variances 0.7566, 0.1368, and 0.3879 are of similar magnitude). Let’s set the significance level at α = 0.05 (95% confidence).

From Table IX of Appendix A, with ν1 = 1 DOF & ν2 = 10 DOF, we find F* = 4.96.

The F-ratio for the completely randomized design (where the factor is the type of experiment) is 4.06, which is less than the tabled value of 4.96, so we fail to reject the null hypothesis; there is insufficient evidence that the two treatment means differ. We could have reached the same conclusion using the fact that the p-value of 0.058 is greater than α = 0.05. At this point, there is evidence that we should employ a randomized block design.

The F-ratio for the randomized block design (where the factor is the type of experiment) is 14.85, which exceeds the tabled value of 4.96, so we reject the null hypothesis and conclude that the two treatment means differ. We could have reached the same conclusion using the fact that the p-value of 0.002 is less than α = 0.05.
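Both p-values can be recovered from the reported F-ratios with `scipy.stats.f.sf` (the upper-tail probability). For the completely randomized design the error DOF is n − k = 20; for the randomized block design it is n − b − k + 1 = 10:

```python
from scipy.stats import f

# Completely randomized design: F = 4.06 with (1, 20) DOF.
p_crd = f.sf(4.06, 1, 20)
print(round(p_crd, 3))  # near the 0.058 reported above: fail to reject H0

# Randomized block design: F = 14.85 with (1, 10) DOF.
p_rbd = f.sf(14.85, 1, 10)
print(round(p_rbd, 3))  # well below alpha = 0.05: reject H0
```

Blocking on location shrinks MSE enough that the same treatment difference, insignificant under the completely randomized design, becomes significant under the randomized block design.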