1. To test the hypothesis β1 = −1 in a linear regression model, we can check whether a 100(1 − α)% confidence interval contains 0.
2. When the random errors in a linear regression model are iid normal, the least-squares estimates of β equal the maximum likelihood estimates of β.
3. Larger values of R-squared imply that the data points are more closely grouped about the average value of the response variable.
4. For the fitted model Ŷᵢ = b₀ + b₁Xᵢ, the correlation between X and Y always has the same sign as b₁.
5. We should always automatically exclude outliers.
6. When the error terms have constant variance, a plot of the residuals versus the fitted values has a pattern that fans out or funnels in.
7. Residuals are the random variations that can be explained by the linear model.
8. The Box-Cox transformation is primarily used for transforming the covariate.
9. To check for a possible nonlinear relationship between the response variable and a predictor, we construct a plot of the residuals against that predictor.
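The first statement turns on the duality between a t-test on β1 and a confidence interval for β1. A minimal sketch in Python of how one would check such a hypothesis (the data here are synthetic and purely illustrative, generated with a true slope of −1):

```python
import numpy as np
from scipy import stats

# Illustrative synthetic data with true slope -1 (an assumption for this sketch).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 - 1.0 * x + rng.normal(scale=0.5, size=x.size)

n = x.size
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # least-squares slope
b0 = y.mean() - b1 * x.mean()                         # least-squares intercept
resid = y - (b0 + b1 * x)
s2 = resid @ resid / (n - 2)                          # error-variance estimate
se_b1 = np.sqrt(s2 / ((x - x.mean()) @ (x - x.mean())))

# 100(1 - alpha)% confidence interval for beta1.
alpha = 0.05
tcrit = stats.t.ppf(1 - alpha / 2, df=n - 2)
ci = (b1 - tcrit * se_b1, b1 + tcrit * se_b1)

# H0: beta1 = -1 is not rejected at level alpha iff -1 lies inside the interval,
# so the interval to inspect is the one for beta1 itself.
reject = not (ci[0] <= -1.0 <= ci[1])
print(ci, reject)
```

The key point the sketch makes explicit: the test of β1 = −1 asks whether −1 lies in the interval for β1, not whether the interval contains 0.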
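Statements 6 and 9 both concern residual diagnostics. As a numeric stand-in for the residual-versus-predictor plot (the data and setup below are assumptions for illustration), a straight-line fit to data that are truly quadratic leaves residuals that track the omitted curvature:

```python
import numpy as np

# Illustrative synthetic data: the true relationship is quadratic in x.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 80)
y = x**2 + rng.normal(scale=0.3, size=x.size)

# Straight-line least-squares fit.
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

# In place of an actual plot, quantify the pattern a residuals-vs-predictor
# plot would show: the residuals correlate strongly with x**2, the
# curvature the linear model failed to capture.
curvature_corr = np.corrcoef(resid, x**2)[0, 1]
print(curvature_corr)
```

A residual plot with no systematic pattern (and, for statement 6, no fanning or funneling) is the picture consistent with a correctly specified model with constant error variance.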