---
title: "4.4 — Nonlinearities & Variable Transformations — R Practice"
author: "Answer Key"
date: "November 16, 2022"
format:
  html:
    self-contained: true # so we don't need other files (like plot images)
    toc: true # show a table of contents
    toc-location: left
    theme: default
    df-print: paged # by default, show tables (tibbles) as paged tables
editor: visual
execute:
  echo: true # shows all code on rendered document
---
# Required Packages & Data
Load all the required packages we will use (**note I have installed them already into the cloud project**) by running (clicking the green play button) the chunk below:
```{r}
#| label: load-packages
#| warning: false
#| message: false
library(tidyverse) # your friend and mine
library(broom) # for tidy regression
library(modelsummary) # for nice regression tables
library(car) # for F-test
```
We are returning to the speeding tickets data that we began to explore in [R Practice 4.1 on Multivariate Regression](http://metricsf22.classes.ryansafner.com/r/4.1-r) and [R Practice 4.3 on Categorical Data and Interactions](https://metricsf22.classes.ryansafner.com/r/4.3-r). Download and read in (`read_csv`) the data below.
- [ `speeding_tickets.csv`](http://metricsf21.classes.ryansafner.com/data/speeding_tickets.csv)
```{r}
# run or edit this chunk (if you want to rename the data)
# read in data from url
# or you could download and upload it to this project instead
speed <- read_csv("https://metricsf22.classes.ryansafner.com/files/data/speeding_tickets.csv") %>%
  mutate_at(c("Black", "Hispanic", "Female", "OutTown", "OutState"), factor) %>%
  filter(Amount > 0)
# this code cleans the data the same way from last class
```
This data comes from a paper by Makowsky and Strattman (2009) that we will examine later. Even though state law sets a formula for tickets based on how fast a person was driving, police officers in practice often deviate from that formula. This dataset includes information on all traffic stops. An amount for the fine is given only for observations in which the police officer decided to assess a fine. There are a number of variables in this dataset, but the ones we'll look at are:
| Variable | Description |
|--------------|----------------------------------------------------------|
| `Amount` | Amount of fine (in dollars) assessed for speeding |
| `Age` | Age of speeding driver (in years) |
| `MPHover` | Miles per hour over the speed limit |
| `Black` | Dummy $=1$ if driver was black, $=0$ if not |
| `Hispanic` | Dummy $=1$ if driver was Hispanic, $=0$ if not |
| `Female` | Dummy $=1$ if driver was female, $=0$ if not |
| `OutTown` | Dummy $=1$ if driver was not from local town, $=0$ if not |
| `OutState` | Dummy $=1$ if driver was not from local state, $=0$ if not |
| `StatePol` | Dummy $=1$ if driver was stopped by State Police, $=0$ if stopped by other (local) |
We want to explore **who gets fines, and how much**. We'll return to some of the other (categorical) variables later in this practice.
## Question 1
Run a regression of `Amount` on `Age`. Write out the estimated regression equation, and interpret the coefficient on Age.
```{r}
reg_linear <- lm(Amount ~ Age, data = speed)
summary(reg_linear)
```
$\widehat{\text{Amount}_i}=131.71-0.29 \, \text{Age}_i$
For every year of age, expected fines decrease by $0.29.
## Question 2
Is the effect of `Age` on `Amount` nonlinear? Let's run a quadratic regression.
### Part A
Create a new variable for $Age^2$. Then run a quadratic regression:
$$\widehat{\text{Amount}}_i=\beta_0+\beta_1 \, \text{Age}_i+\beta_2 \, \text{Age}_i^2$$
```{r}
# make Age_sq variable
speed <- speed %>%
  mutate(Age_sq = Age^2)
# view it
speed
# run quadratic regression
reg_quad <- lm(Amount ~ Age + Age_sq, data = speed)
summary(reg_quad)
```
### Part B
Try running the same regression using the alternate notation: `lm(Y ~ X + I(X^2))`, replacing `X` and `Y` with our variables. This method allows you to run a quadratic regression without having to create a new variable first. Do you get the same results?
```{r}
reg_quad_alt <- lm(Amount ~ Age + I(Age^2), data = speed)
summary(reg_quad_alt)
```
This gives the same results.
### Part C
Write out the estimated regression equation.
$$\widehat{\text{Amount}_i}=146.75-1.17 \, \text{Age}_i+0.01 \, \text{Age}_i^2$$
### Part D
Is this model an improvement from the linear model? Compare $\bar{R}^2$ across the two models.
Yes, a slight improvement. $\bar{R}^2$ went from 0.00425 on the linear model to 0.00485 on the quadratic model.
### Part E
Is the coefficient on the quadratic term statistically significantly different from zero? i.e. could we reject $H_0: \beta_2=0$?
Yes. Since $p<0.05$, we have sufficient evidence to reject $H_0$, which implies the quadratic term belongs in the model.
### Part F
Write an equation for the marginal effect of `Age` on `Amount`.
The marginal effect is measured by the first derivative of the regression equation with respect to Age. But you can just remember the resulting formula and plug in the parameters:
\begin{align*}
\frac{d \, Y}{d \, X} &= \beta_1+2\beta_2 X\\
\frac{d \, Amount}{d \, Age} &= -1.17+2(0.01) \, Age\\
&=-1.17+0.02 \, Age\\
\end{align*}
### Part G
Predict the marginal effect on `Amount` of being one year older when you are 18. How about when you are 40?
For 18 year olds:
\begin{align*}
\frac{d \, Amount}{d \, Age} &=-1.17+0.02(18)\\
&=-1.17+0.36\\
&=-\$0.81\\
\end{align*}
For 40 year olds:
\begin{align*}
\frac{d \, Amount}{d \, Age} &=-1.17+0.02(40)\\
&=-1.17+0.80\\
&=-\$0.37\\
\end{align*}
```{r}
# Let's do this in R:
tidy_reg_quad <- tidy(reg_quad)
tidy_reg_quad
# save beta 1
quad_beta_1 <- tidy_reg_quad %>%
  filter(term == "Age") %>%
  pull(estimate)
# save beta 2
quad_beta_2 <- tidy_reg_quad %>%
  filter(term == "Age_sq") %>%
  pull(estimate)
# create function to estimate marginal effects
marginal_effect <- function(x){
  return(quad_beta_1 + 2 * quad_beta_2 * x)
}
# run the function on the 18-year-old and the 40-year-old
marginal_effect(c(18,40))
# close enough, we had some rounding error
```
### Part H
Our quadratic function is a $U$-shape. According to the model, at what age is the amount of the fine minimized?
We can set the derivative equal to 0, or you can just remember the formula and plug in the parameters:
\begin{align*}
\frac{d Y}{d X} &= \beta_1+2\beta_2 X\\
0 &=\beta_1+2\beta_2 X\\
-\beta_1&=2\beta_2 X\\
-\frac{1}{2} \frac{\beta_1}{\beta_2}&=Age^*\\
-\frac{1}{2} \frac{-1.17}{0.01} &= Age^*\\
-\frac{1}{2} (-117) & \approx Age^*\\
58.5 & \approx Age ^*\\
\end{align*}
```{r}
# Let's do this in R:
-0.5*(quad_beta_1 / quad_beta_2)
# again, some rounding error
```
### Part I
Create a scatterplot between `Amount` (`y`) and `Age` (`x`). Add a layer with a linear regression (as usual, `geom_smooth(method = "lm")`), and an additional layer with the predicted quadratic regression curve. This additional layer is similar, but we need to specify the formula of the curve to be quadratic: `geom_smooth(method = "lm", formula = "y ~ x + I(x^2)")`
```{r}
ggplot(data = speed)+
  aes(x = Age,
      y = Amount)+
  geom_point()+
  geom_smooth(method = "lm",
              color = "blue")+ # linear layer
  geom_smooth(method = "lm",
              formula = "y ~ x + I(x^2)",
              color = "red") # quadratic layer
```
### Part J
It's quite hard to see the quadratic curve with all those data points. Redo the plot, this time keeping only the quadratic `geom_smooth()` layer and leaving out the `geom_point()` layer. This will plot only the regression curve.
```{r}
ggplot(data = speed)+
  aes(x = Age,
      y = Amount)+
  geom_smooth(method = "lm",
              formula = "y ~ x + I(x^2)",
              color = "red")
```
## Question 3
Should we use a higher-order polynomial equation? Run the following cubic regression, and determine whether it is necessary.
$$\widehat{\text{Amount}}_i = \beta_0 + \beta_1 \, \text{Age}_i + \beta_2 \, \text{Age}_i^2 + \beta_3 \, \text{Age}_i^3$$
```{r}
reg_cube <- lm(Amount ~ Age + I(Age^2) + I(Age^3), data = speed)
summary(reg_cube)
```
The $t$-statistic on Age$^3$ is small (-1.03) and the $p$-value is 0.31, so the cubic term does not have a significant impact on `Amount`. We should *not* include it.
Just for fun, would the cubic model *look* any better?
```{r}
ggplot(data = speed)+
  aes(x = Age,
      y = Amount)+
  geom_point()+
  geom_smooth(method = "lm", formula = "y ~ x + I(x^2)", color = "red")+
  geom_smooth(method = "lm", formula = "y ~ x + I(x^2) + I(x^3)", color = "orange")
```
```{r}
ggplot(data = speed)+
  aes(x = Age,
      y = Amount)+
  geom_smooth(method = "lm", formula = "y ~ x + I(x^2)", color = "red")+
  geom_smooth(method = "lm", formula = "y ~ x + I(x^2) + I(x^3)", color = "orange")
```
## Question 4
Run an $F$-test to check if a nonlinear model is appropriate. Use the `car` package's `linearHypothesis()` command, which looks like:
```{r}
#| eval: false
linearHypothesis(reg_name, # name of your saved regression object
                 c("var1", "var2")) # names of the variables you are testing
```
Your null hypothesis is $H_0: \beta_2=\beta_3=0$ from the regression in Question 3. The command is
```{r}
linearHypothesis(reg_cube, c("I(Age^2)", "I(Age^3)")) # F-test
```
We get a large $F$ of 26.43, with a very small $p$-value. Therefore, we can reject the null hypothesis that the model is linear $(\beta_2=0, \beta_3=0)$. We should in fact *not* use a linear model. Note it does *not* tell us if the model should be *quadratic* or *cubic* (or even *logarithmic* of some sort), *only* that it is not linear. Remember, this was a *joint* hypothesis of all *non-linear* terms $(\beta_2$ and $\beta_3)$!
## Question 5
Now let's take a look at speed (`MPHover` the speed limit).
### Part A
Creating new variables as necessary, run a **linear-log** model of `Amount` on `MPHover`. Write down the estimated regression equation, and interpret the coefficient on `MPHover` $(\hat{\beta_1})$. Make a scatterplot with the regression line. Hint: The simple `geom_smooth(method = "lm")` layer is sufficient, so long as you use the right variables on the plot!
```{r}
# create log of MPHover
speed <- speed %>%
  mutate(log_mph = log(MPHover))
# Run linear-log regression
linear_log_reg <- lm(Amount ~ log_mph, data = speed)
summary(linear_log_reg)
# note we could have done this without creating the variable
# just take log() inside the regression:
linear_log_reg_alt <- lm(Amount ~ log(MPHover), data = speed)
summary(linear_log_reg_alt)
ggplot(data = speed)+
  aes(x = log_mph,
      y = Amount)+
  geom_point()+
  geom_smooth(method = "lm", color = "red")+
  labs(title = "Linear-Log Model")
```
$$\widehat{\text{Amount}_i}=-200.10+115.75\text{ln(MPHover}_i)$$
A 1% increase in speed (over the speed limit) increases the fine by $\frac{115.75}{100}\approx \$1.16$
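The $\hat{\beta_1}/100$ reading is an approximation for a small percent change. As a quick numerical check (a sketch, assuming the `linear_log_reg` object estimated above), we can compute the exact predicted change from a 1% increase in speed:

```{r}
# sketch: exact predicted change in Amount from a 1% increase in MPHover
b1 <- coef(linear_log_reg)["log_mph"]
b1 * log(1.01) # exact change; b1 / 100 is the familiar approximation
```

Since $\ln(1.01)\approx 0.00995$, the exact change is within a cent of the $\frac{\hat{\beta_1}}{100}$ approximation.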
### Part B
Creating new variables as necessary, run a **log-linear** model of `Amount` on `MPHover`. Write down the estimated regression equation, and interpret the coefficient on `MPHover` $(\hat{\beta_1})$. Make a scatterplot with the regression line. Hint: The simple `geom_smooth(method = "lm")` is sufficient, so long as you use the right variables on the plot!
```{r}
# create log of Amount
speed <- speed %>%
  mutate(log_Amount = log(Amount))
# Run log-linear regression
log_linear_reg <- lm(log_Amount ~ MPHover, data = speed)
summary(log_linear_reg)
# again we could have done this without creating the variable
# just take log() inside the regression:
log_linear_reg_alt <- lm(log(Amount) ~ MPHover, data = speed)
summary(log_linear_reg_alt)
ggplot(data = speed)+
  aes(x = MPHover,
      y = log_Amount)+
  geom_point()+
  geom_smooth(method = "lm", color = "red")+
  labs(title = "Log-Linear Model")
```
$$\widehat{\text{ln(Amount}_i)}=3.87+0.05 \, \text{MPHover}_i$$
For every 1 MPH in speed (over the speed limit), expected fine increases by $0.05 \times 100\%=5\%$
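The $100 \times \hat{\beta_1}\%$ reading is itself an approximation that works well for small coefficients; the exact percent change is $100(e^{\hat{\beta_1}}-1)$. A quick sketch (assuming the `log_linear_reg` object from above):

```{r}
# sketch: exact percent change in Amount per 1 MPH over the limit
b1 <- coef(log_linear_reg)["MPHover"]
100 * (exp(b1) - 1) # slightly above the 100 * b1 approximation for b1 > 0
```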
### Part C
Creating new variables as necessary, run a **log-log** model of `Amount` on `MPHover`. Write down the estimated regression equation, and interpret the coefficient on `MPHover` $(\hat{\beta_1})$. Make a scatterplot with the regression line. Hint: The simple `geom_smooth(method = "lm")` is sufficient, so long as you use the right variables on the plot!
```{r}
# Run log-log regression
log_log_reg <- lm(log_Amount ~ log_mph, data = speed)
summary(log_log_reg)
# again we could have done this just taking log()s inside the regression:
log_log_reg_alt <- lm(log(Amount) ~ log(MPHover), data = speed)
summary(log_log_reg_alt)
ggplot(data = speed)+
  aes(x = log_mph,
      y = log_Amount)+
  geom_point()+
  geom_smooth(method = "lm", color = "red")+
  labs(title = "Log-Log Model")
```
$$\widehat{\text{ln(Amount}_i)}=2.31+0.86\text{ln(MPHover}_i)$$
For every 1% increase in speed (over the speed limit), expected fine increases by 0.86%.
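Since $\hat{\beta_1}$ is an elasticity, a 1% increase in `MPHover` multiplies the predicted fine by $1.01^{\hat{\beta_1}}$. A quick numerical check (a sketch, assuming the `log_log_reg` object from above):

```{r}
# sketch: percent change in predicted Amount from a 1% increase in MPHover
b1 <- coef(log_log_reg)["log_mph"]
100 * (1.01^b1 - 1) # essentially beta_1 itself, as the elasticity reading says
```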
### Part D
Which of the three log models has the best fit? Hint: Check $R^2$
We can compare the $R^2$'s of the three models or compare scatterplots with the regression lines. I will make a table of the three regressions with `modelsummary` for easy comparison of fit:
```{r}
modelsummary(models = list("Amount" = linear_log_reg,
                           "ln Amount" = log_linear_reg,
                           "ln Amount" = log_log_reg),
             fmt = 2, # round to 2 decimals
             output = "html",
             coef_rename = c("(Intercept)" = "Constant",
                             "MPHover" = "MPH Over Speed Limit",
                             "log_mph" = "ln MPH Over Speed Limit"),
             gof_map = list(
               list("raw" = "nobs", "clean" = "n", "fmt" = 0),
               #list("raw" = "r.squared", "clean" = "R^{2}", "fmt" = 2),
               list("raw" = "adj.r.squared", "clean" = "Adj. R^{2}", "fmt" = 2),
               list("raw" = "rmse", "clean" = "SER", "fmt" = 2)
             ),
             escape = FALSE,
             stars = c('*' = .1, '**' = .05, '***' = 0.01)
             )
```
It appears the linear-log model has the best fit, with the highest adjusted $R^2$ of the three, though not by much. (Strictly speaking, $R^2$ is only directly comparable across models with the same dependent variable, so comparisons between the `Amount` and `ln Amount` models should be made with caution.)
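Alternatively, we can pull the fit statistics directly with `broom::glance()` (a sketch, assuming the three regression objects estimated above):

```{r}
# sketch: collect adjusted R^2 and sigma (the SER) for each model
bind_rows("linear-log" = glance(linear_log_reg),
          "log-linear" = glance(log_linear_reg),
          "log-log"    = glance(log_log_reg),
          .id = "model") %>%
  select(model, adj.r.squared, sigma)
```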
## Question 6
Return to the quadratic model from Question 2. Run a quadratic regression of `Amount` on `Age`, `Age`$^2$, `MPHover`, and all of the race dummy variables. Test the null hypothesis: *"the race of the driver has no effect on Amount"*
```{r}
full_reg <- lm(Amount ~ Age + Age_sq + MPHover + Black + Hispanic, data = speed)
summary(full_reg)
# it turns out we need to make Black and Hispanic into numeric variables for linearHypothesis to work
# so we'll redo this with them as numeric variables
speed <- speed %>%
  mutate_at(c("Black", "Hispanic"), as.numeric)
full_reg <- lm(Amount ~ Age + Age_sq + MPHover + Black + Hispanic, data = speed)
summary(full_reg)
linearHypothesis(full_reg, c("Black", "Hispanic"))
```
Since $p<0.05$, we can reject the null hypothesis in favor of the alternative hypothesis, which implies that the race of the driver has some effect on Amount.
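The same joint test can be run by comparing a restricted model (without the race dummies) to the unrestricted one with `anova()` — a sketch, assuming the `full_reg` object and `speed` data from above:

```{r}
# sketch: F-test via restricted vs. unrestricted model comparison
restricted <- lm(Amount ~ Age + Age_sq + MPHover, data = speed)
anova(restricted, full_reg) # F and p-value match linearHypothesis()
```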
## Question 7
Now let's try standardizing variables. Let's try running a regression of `Amount` on `Age` and `MPHover`, but standardizing each variable.
### Part A
Create new standardized variables for `Amount`, `Age`, and `MPHover`:
```{r}
# let's check the mean and sd of each variable
speed %>%
  summarize_at(vars(Amount, Age, MPHover),
               list("avg" = mean,
                    "sd" = sd)) # note: list() replaces the now-defunct funs()
```
| Variable | Mean | SD |
|----------|------|----|
| Amount | 122.03 | 56.25 |
| Age | 33.44 | 12.73 |
| MPHover | 17.08 | 5.79 |
```{r}
# make standardized variables
speed <- speed %>%
  mutate(Amount_Z = scale(Amount),
         Age_Z = scale(Age),
         MPHover_Z = scale(MPHover))
```
We can verify that now all variables have been standardized to have mean 0 and sd 1:
```{r}
speed %>%
  summarize_at(vars(Amount_Z, Age_Z, MPHover_Z),
               list("avg" = mean,
                    "sd" = sd))
```
| Variable | Mean | SD |
|-----------|------|----|
| Amount_Z | 0 | 1 |
| Age_Z | 0 | 1 |
| MPHover_Z | 0 | 1 |
### Part B
Run a regression of standardized `Amount_Z` on standardized `Age_Z` and `MPHover_Z`. Interpret $\hat{\beta_1}$ and $\hat{\beta_2}$. Which variable has a bigger effect on `Amount`?
```{r}
std_reg <- lm(Amount_Z ~ Age_Z + MPHover_Z, data = speed)
summary(std_reg)
```
$\hat{\beta_1}$: a 1 standard deviation increase in `Age` causes a 0.0056 standard deviation increase in Amount (i.e. $0.0056 \times 56.25 = \$0.315)$
$\hat{\beta_2}$: a 1 standard deviation increase in `MPHover` causes a 0.7095 standard deviation increase in Amount. (i.e. $0.7095 \times 56.25 = \$39.91)$
`MPHover` has a much larger marginal effect on Amount.
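Standardized coefficients relate to the raw ones by $\hat{\beta}^{std} = \hat{\beta} \times \frac{sd(X)}{sd(Y)}$. A quick sketch of this (assuming the `speed` data from above):

```{r}
# sketch: recover the standardized beta from the raw (unstandardized) regression
raw_reg <- lm(Amount ~ Age + MPHover, data = speed)
coef(raw_reg)["MPHover"] * sd(speed$MPHover) / sd(speed$Amount)
# should match the MPHover_Z coefficient from std_reg
```

This is why standardizing lets us compare variables measured in different units: each coefficient is expressed in common standard deviation units.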