Let’s start by having a look at the formula:

`lm(formula = Dem_Gov1984 ~ GDPPC1984, data = GDP_Dem)`

Dem_Gov1984 is our **dependent variable** (the outcome we are trying to explain)

GDPPC1984 is our **independent variable** (the predictor)
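Before reading the output, it can help to run the call end-to-end. The sketch below is self-contained: the `GDP_Dem` data frame here is simulated stand-in data (an assumption), since the real dataset is not included in this excerpt.

```r
# Simulated stand-in for GDP_Dem (assumption: the real data frame
# has a GDPPC1984 column and a Dem_Gov1984 column).
set.seed(1)
GDP_Dem <- data.frame(GDPPC1984 = runif(100, min = 500, max = 30000))
GDP_Dem$Dem_Gov1984 <- 19 + 0.0015 * GDP_Dem$GDPPC1984 + rnorm(100, sd = 20)

# Fit the linear model and print the full summary.
model <- lm(Dem_Gov1984 ~ GDPPC1984, data = GDP_Dem)
summary(model)  # prints the Residuals / Coefficients output discussed below
```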

```
## Residuals:
##      Min       1Q   Median       3Q      Max
## -104.470  -16.507   -6.817   16.879   53.869
```

Residuals: the differences between the observed values of Dem_Gov1984 (the dependent variable) and the values predicted by the model.
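To make this concrete, the residuals can be recomputed by hand as observed minus fitted values. As before, the `GDP_Dem` data here is simulated stand-in data, an assumption for illustration only.

```r
# Simulated stand-in data (assumption), so the example runs on its own.
set.seed(1)
GDP_Dem <- data.frame(GDPPC1984 = runif(100, 500, 30000))
GDP_Dem$Dem_Gov1984 <- 19 + 0.0015 * GDP_Dem$GDPPC1984 + rnorm(100, sd = 20)
model <- lm(Dem_Gov1984 ~ GDPPC1984, data = GDP_Dem)

# A residual is the observed outcome minus the model's prediction:
manual <- GDP_Dem$Dem_Gov1984 - fitted(model)
all.equal(unname(residuals(model)), unname(manual))  # TRUE

summary(residuals(model))  # Min / 1Q / Median / 3Q / Max, as in the output above
```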

```
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.924e+01  3.777e+00   5.094 1.70e-06 ***
## GDPPC1984   1.519e-03  2.522e-04   6.024 2.99e-08 ***
```

This part of the output describes the coefficients for the intercept and the independent variables.

Remember the general formula: Y = a + bX + e, where a is the intercept, b is the slope coefficient, and e is the error term.

We can re-write the formula using the coefficients to describe the relationship between Dem_Gov1984 and GDPPC1984.

Dem_Gov1984 = 19.24 + (0.0015 * GDPPC1984)

This tells us that **for each unit increase** in the variable **GDPPC1984**, **Dem_Gov1984 increases by roughly 0.0015** (the estimate 1.519e-03).
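As a quick sanity check of the arithmetic, we can plug a value into the fitted equation (the GDP per capita figure of 10000 is just an illustrative value, not from the data):

```r
# Predicted Dem_Gov1984 for a hypothetical GDP per capita of 10000,
# using the coefficients printed above (1.924e+01 and 1.519e-03):
19.24 + 0.001519 * 10000  # 19.24 + 15.19 = 34.43
```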

The standard error estimates the standard deviation of the sampling distribution of the coefficients in our model.

The t statistic is used to conduct hypothesis tests on the regression coefficients. It is obtained by dividing each coefficient by its standard error.
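You can verify this directly from the printed output: for GDPPC1984, dividing the estimate 1.519e-03 by the standard error 2.522e-04 reproduces the t value up to the rounding of the displayed digits.

```r
# t value = estimate / standard error, using the digits shown above:
t_value <- 1.519e-03 / 2.522e-04
round(t_value, 2)  # about 6.02, matching the printed 6.024 up to rounding
```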

`Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1`

This line lists the p-value for each of the coefficients included in the model. The null hypothesis is that the coefficient of interest is zero. Since hypothesis testing is based on the null hypothesis, the p-value tells us whether or not we can reject it: the smaller the p-value, the stronger the evidence against the null.

`R-squared: 0.2702, Adjusted R-squared: 0.2628 `

The R-squared and adjusted R-squared tell us how much of the variance in the dependent variable is accounted for by the independent variable(s) in the model.

The adjusted R-squared is a modified version of R-squared that has been adjusted for the number of predictors in the model.
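The adjustment follows the standard formula: adjusted R² = 1 − (1 − R²)(n − 1)/(n − k − 1), where n is the sample size and k the number of predictors. The sample size is not shown in this excerpt; assuming n = 100 for illustration, the printed values are consistent with each other.

```r
r2 <- 0.2702  # the R-squared printed above
n  <- 100     # assumption: the actual sample size is not shown in the excerpt
k  <- 1       # one predictor (GDPPC1984)

# Adjusted R-squared penalises R-squared for the number of predictors:
adj_r2 <- 1 - (1 - r2) * (n - 1) / (n - k - 1)
round(adj_r2, 4)  # 0.2628, matching the printed adjusted R-squared
```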