ggplot2::diamonds is a data frame included in ggplot2 with 50,000+ observations on 10 attributes of diamonds. The interest focuses on how price is related to the other variables. The main variables of interest are shown in Table 1.
library(xtable)  # for printing the variable-description table as HTML
Ddata <- t(data.frame(
price = "price in US dollars",
carat = "weight of the diamond (0.2-5.01)",
clarity = "I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best)",
color = "from J (worst) to D (best)",
cut = "Fair, Good, Very Good, Premium, Ideal"
))
colnames(Ddata) <- c("Description")
print(xtable(Ddata, digits=0, caption= "ggplot2::diamonds, selected variables"),
type = "html")
Table 1: ggplot2::diamonds, selected variables

Variable | Description |
---|---|
price | price in US dollars |
carat | weight of the diamond (0.2-5.01) |
clarity | I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best) |
color | from J (worst) to D (best) |
cut | Fair, Good, Very Good, Premium, Ideal |
library(tidyverse)  # ggplot2, dplyr, etc.
library(gridExtra)  # grid.arrange() for multi-panel layouts
p1 <- ggplot(diamonds, aes(cut, price)) + geom_boxplot(varwidth=TRUE)
p2 <- ggplot(diamonds, aes(color, price)) + geom_boxplot(varwidth=TRUE)
p3 <- ggplot(diamonds, aes(clarity, price)) + geom_boxplot(varwidth=TRUE)
grid.arrange(p1, p2, p3, nrow = 3)
Note that the worst diamond color is J (slightly yellow), and the worst clarity is I1 (inclusions visible to the naked eye).
From the multi-panel facet grid plot below, looking down the ‘Ideal’ column we see many diamonds with low values of ‘carat’; comparing with the ‘Fair’ column, the number of low-carat ‘Ideal’ diamonds greatly exceeds that of the ‘Fair’ cut diamonds. The pattern is similar for the other quality attribute, ‘clarity’.
We see from the plots that resolution is hampered by a few extremely large diamonds. The five-number summary below is for the carat values of diamonds exceeding 2.5 carats:
diamonds %>% filter(carat > 2.5) %>% select(carat) %>%
  {fivenum(unlist(., use.names=FALSE))} # note use of { }
## [1] 2.51 2.53 2.63 3.01 5.01
This data will be interesting to explore using GGobi.
diamonds %>% ggplot(aes(x=carat, y=price)) +
geom_hex(bins=40) +
facet_grid(color~cut) +
theme(legend.position="none") #remove legend to better see the data
From a visualization viewpoint,
DATA = PATTERN + NOISE
Often the PATTERN is called the fit or smoothed value and the NOISE is the residual or remainder.
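As a minimal sketch of this decomposition (using a simple linear fit purely for illustration), any fitted model splits the response exactly into fitted values plus residuals:

fit <- lm(price ~ carat, data = diamonds)
pattern <- fitted(fit)    # PATTERN: the fit / smoothed values
noise <- residuals(fit)   # NOISE: the residuals / remainder
all.equal(as.numeric(diamonds$price), unname(pattern + noise))  # TRUE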
To make for a better visualization and to simplify things, we restrict attention to diamonds of at most 2.5 carats and work on the log2 scale:
diamonds2 <- diamonds %>%
filter(carat <= 2.5) %>%
mutate(lprice = log2(price), lcarat = log2(carat))
These changes make it easier to see the relationship between carat and price:
ggplot(diamonds2, aes(lcarat, lprice)) +
geom_hex(bins = 35) +
geom_smooth(color=alpha("red", 0.7), size=2) +
geom_smooth(method="lm", color=alpha("yellow", 0.4), size=2)
Remark 1: Using geom_hex() instead of geom_point() is very useful when there is this much data. See also graphics::smoothScatter() in base R, which uses 2D kernel smoothing; the number of points is reflected by the colour saturation (in the plot below, dark blue means more data values), and it is more efficient for large datasets than geom_hex(). In the plot below we use smooth.spline(), which is an alternative to gam(). With large data, loess() is computationally infeasible.
with(diamonds2, {smoothScatter(lcarat, lprice)
lines(smooth.spline(lcarat, lprice, df=6), col="red", lwd=3)
})
Remark 2: Many smoothers, including loess(), smooth.spline() and gam(), tend to exaggerate changes at the end points. High-order polynomials are infamous for their wild behaviour, especially when extrapolated.
Remark 3: The log-transformation is particularly useful here because it makes the pattern linear. More about linearizing transformations later.
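To see why, note that if price follows an approximate power law in carat, taking logs turns it into a straight line:

\[ \text{price} \approx a \cdot \text{carat}^{\,b} \quad\Longrightarrow\quad \log_2(\text{price}) \approx \log_2(a) + b \, \log_2(\text{carat}), \]

so lprice is (approximately) a linear function of lcarat with slope b. We now fit this straight-line model on the log scale: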
mod_diamond <- lm(lprice ~ lcarat, data = diamonds2)
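A quick way to read the fit (a sketch using standard lm accessors): the intercept is the predicted log2 price at lcarat = 0, i.e. for a 1-carat diamond, and the slope estimates the elasticity of price with respect to weight.

coef(mod_diamond)        # (Intercept): predicted log2(price) when carat = 1
2 ^ coef(mod_diamond)[1] # back-transformed: predicted price of a 1-carat diamond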
Then we look at what the model tells us about the data. Note that I back-transform the predictions, undoing the log transformation, so I can overlay the predictions on the raw data:
library(modelr)  # data_grid(), seq_range(), add_predictions(), add_residuals()
grid <- diamonds2 %>%
data_grid(carat = seq_range(carat, 20)) %>%
mutate(lcarat = log2(carat)) %>%
add_predictions(mod_diamond, "log2price") %>%
mutate(price = 2 ^ log2price)
grid
## # A tibble: 20 x 4
## carat lcarat log2price price
## <dbl> <dbl> <dbl> <dbl>
## 1 0.200 -2.32 8.29 313.
## 2 0.321 -1.64 9.44 694.
## 3 0.442 -1.18 10.2 1188.
## 4 0.563 -0.828 10.8 1784.
## 5 0.684 -0.547 11.3 2475.
## 6 0.805 -0.312 11.7 3255.
## 7 0.926 -0.110 12.0 4119.
## 8 1.05 0.0668 12.3 5064.
## 9 1.17 0.225 12.6 6087.
## 10 1.29 0.367 12.8 7184.
## 11 1.41 0.496 13.0 8354.
## 12 1.53 0.615 13.2 9594.
## 13 1.65 0.725 13.4 10903.
## 14 1.77 0.827 13.6 12279.
## 15 1.89 0.922 13.7 13721.
## 16 2.02 1.01 13.9 15227.
## 17 2.14 1.10 14.0 16795.
## 18 2.26 1.17 14.2 18426.
## 19 2.38 1.25 14.3 20117.
## 20 2.50 1.32 14.4 21868.
ggplot(diamonds2, aes(carat, price)) +
geom_hex(bins = 35) +
geom_line(data = grid, colour = "red", size = 1)
#note: dataframe grid has both 'carat' and 'price'
Large diamonds are much cheaper than expected.
Now we can look at the residuals, which verifies that we’ve successfully removed the strong linear pattern:
diamonds2 <- diamonds2 %>%
add_residuals(mod_diamond, "lresid")
ggplot(diamonds2, aes(lcarat, lresid)) +
geom_hex(bins = 50) +
geom_smooth(se=FALSE, col="red", size=1.5)
Recall another interpretation of the residuals: they represent the adjusted value of the response variable, “controlling” for the explanatory variables.
ggT <- ggpubr::text_grob("Adjusted log_2(price)")
p1 <- ggplot(diamonds2, aes(cut, lresid)) + geom_boxplot()
p2 <- ggplot(diamonds2, aes(color, lresid)) + geom_boxplot()
p3 <- ggplot(diamonds2, aes(clarity, lresid)) + geom_boxplot()
p1 <- p1 + ylab("log_2 price")
p2 <- p2 + ylab("log_2 price")
p3 <- p3 + ylab("log_2 price")
grid.arrange(p1, p2, p3, nrow = 3, top=ggT)
Now we see the relationship we expect: as the quality of the diamond increases, so too does its relative price. To interpret the y-axis, we need to think about what the residuals are telling us, and what scale they are on. A residual of -1 indicates that lprice was 1 unit lower than a prediction based solely on the diamond’s weight. Since \(2^{-1} = 1/2\), points with a residual of -1 are half the predicted price, and points with a residual of 1 are twice the predicted price.
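The back-transformation is just powers of 2:

2 ^ c(-1, 0, 1)  # 0.5, 1, 2: half, equal to, and double the predicted price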
If we wanted to, we could continue to build up our model, moving the effects we’ve observed into the model to make them explicit. For example, we could include color, cut, and clarity in the model so that we also make explicit the effect of these three categorical variables:
mod_diamond2 <- lm(lprice ~ lcarat + color + cut + clarity, data = diamonds2)
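As a quick hedged check that the extra categorical terms earn their keep, compare the usual lm R-squared summaries of the two models:

summary(mod_diamond)$r.squared   # lprice ~ lcarat only
summary(mod_diamond2)$r.squared  # adding color, cut and clarity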
This model now includes four predictors, so it’s getting harder to visualise. Fortunately, they’re currently all independent, which means that we can plot them individually in four plots. To make the process a little easier, we’re going to use the .model argument to data_grid():
grid <- diamonds2 %>%
data_grid(cut, .model = mod_diamond2) %>%
add_predictions(mod_diamond2)
grid
## # A tibble: 5 x 5
## cut lcarat color clarity pred
## <ord> <dbl> <chr> <chr> <dbl>
## 1 Fair -0.515 G VS2 11.2
## 2 Good -0.515 G VS2 11.3
## 3 Very Good -0.515 G VS2 11.4
## 4 Premium -0.515 G VS2 11.4
## 5 Ideal -0.515 G VS2 11.4
ggplot(grid, aes(cut, pred)) +
geom_point()
If the model needs variables that you haven’t explicitly supplied, data_grid() will automatically fill them in with a “typical” value. For continuous variables it uses the median, and for categorical variables it uses the most common value (or values, if there’s a tie).
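A sketch of where those “typical” values come from, using base R equivalents of what modelr computes internally:

median(diamonds2$lcarat)                  # typical value for a continuous variable (-0.515 above)
names(which.max(table(diamonds2$color)))  # most common level of a categorical variable ("G" above)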
diamonds2 <- diamonds2 %>%
add_residuals(mod_diamond2, "lresid2")
ggplot(diamonds2, aes(lcarat, lresid2)) +
geom_hex(bins = 50)
This plot indicates that there are some diamonds with quite large residuals - remember a residual of 2 indicates that the diamond is 4x the price that we expected. It’s often useful to look at unusual values individually:
diamonds2 %>%
filter(abs(lresid2) > 1) %>%
add_predictions(mod_diamond2) %>%
mutate(pred = round(2 ^ pred)) %>%
select(price, pred, carat:table, x:z) %>%
arrange(price)
## # A tibble: 16 x 11
## price pred carat cut color clarity depth table x y z
## <int> <dbl> <dbl> <ord> <ord> <ord> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1013 264. 0.250 Fair F SI2 54.4 64. 4.30 4.23 2.32
## 2 1186 284. 0.250 Premium G SI2 59.0 60. 5.33 5.28 3.12
## 3 1186 284. 0.250 Premium G SI2 58.8 60. 5.33 5.28 3.12
## 4 1262 2644. 1.03 Fair E I1 78.2 54. 5.72 5.59 4.42
## 5 1415 639. 0.350 Fair G VS2 65.9 54. 5.57 5.53 3.66
## 6 1415 639. 0.350 Fair G VS2 65.9 54. 5.57 5.53 3.66
## 7 1715 576. 0.320 Fair F VS2 59.6 60. 4.42 4.34 2.61
## 8 1776 412. 0.290 Fair F SI1 55.8 60. 4.48 4.41 2.48
## 9 2160 314. 0.340 Fair F I1 55.8 62. 4.72 4.60 2.60
## 10 2366 774. 0.300 Very Go~ D VVS2 60.6 58. 4.33 4.35 2.63
## 11 3360 1373. 0.510 Premium F SI1 62.7 62. 5.09 4.96 3.15
## 12 3807 1540. 0.610 Good F SI2 62.5 65. 5.36 5.29 3.33
## 13 3920 1705. 0.510 Fair F VVS2 65.4 60. 4.98 4.90 3.23
## 14 4368 1705. 0.510 Fair F VVS2 60.7 66. 5.21 5.11 3.13
## 15 10011 4048. 1.01 Fair D SI2 64.6 58. 6.25 6.20 4.02
## 16 10470 23622. 2.46 Premium E SI2 59.7 59. 8.82 8.76 5.25
Nothing really jumps out at me here, but it’s probably worth spending time considering whether this indicates a problem with our model, or whether there are errors in the data. If there are mistakes in the data, this could be an opportunity to buy diamonds that have been incorrectly priced low.
We can assess the accuracy of the model in terms of the residual SD (root mean square of the residuals), the MAD (mean absolute deviation of the residuals), and the lower and upper 2.5% points of the residual distribution.
diamonds2 %>%
  add_predictions(mod_diamond2) %>%
  add_residuals(mod_diamond2) %>%
  summarise(sq_err = sqrt(mean(resid^2)),
            abs_err = mean(abs(resid)),
            p025_err = quantile(resid, 0.025),  # lower 2.5% point
            p975_err = quantile(resid, 0.975)) %>%  # upper 2.5% point
  knitr::kable(., format="html", digits=2,
               col.names=c("SD", "MAD", "lower 2.5%", "upper 2.5%"))
SD | MAD | lower 2.5% | upper 2.5% |
---|---|---|---|
0.19 | 0.15 | -0.37 | 0.38 |
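Because the residuals are on the log2 scale, the 2.5% points have a multiplicative reading: roughly 95% of prices fall within these factors of the model’s prediction.

2 ^ c(-0.37, 0.38)  # ~0.77 to ~1.30: middle 95% of prices within about -23%/+30% of prediction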
library(nycflights13)
library(lubridate)
daily <- flights %>%
mutate(date = lubridate::make_date(year, month, day)) %>%
group_by(date) %>%
summarise(n = n())
daily
## # A tibble: 365 x 2
## date n
## <date> <int>
## 1 2013-01-01 842
## 2 2013-01-02 943
## 3 2013-01-03 914
## 4 2013-01-04 915
## 5 2013-01-05 720
## 6 2013-01-06 832
## 7 2013-01-07 933
## 8 2013-01-08 899
## 9 2013-01-09 902
## 10 2013-01-10 932
## # ... with 355 more rows
ggplot(daily, aes(date, n)) +
geom_line()
There are very strong day-of-week effects.
daily <- daily %>%
mutate(wday = lubridate::wday(date, label = TRUE))
daily
## # A tibble: 365 x 3
## date n wday
## <date> <int> <ord>
## 1 2013-01-01 842 Tue
## 2 2013-01-02 943 Wed
## 3 2013-01-03 914 Thu
## 4 2013-01-04 915 Fri
## 5 2013-01-05 720 Sat
## 6 2013-01-06 832 Sun
## 7 2013-01-07 933 Mon
## 8 2013-01-08 899 Tue
## 9 2013-01-09 902 Wed
## 10 2013-01-10 932 Thu
## # ... with 355 more rows
ggplot(daily, aes(wday, n)) +
geom_boxplot()
One way to remove this strong pattern is to use a model. We fit the model, then display its predictions overlaid on the original data. The n’s are left-skewed, so, as would be expected in this case, the predicted n’s (group means) fall below the observed medians.
mod <- lm(n ~ wday, data = daily)
mod
##
## Call:
## lm(formula = n ~ wday, data = daily)
##
## Coefficients:
## (Intercept) wday.L wday.Q wday.C wday^4
## 922.595 -83.322 -155.111 -62.834 -80.126
## wday^5 wday^6
## -4.967 -16.934
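The names wday.L, wday.Q, wday.C, … appear because wday is an ordered factor, so lm() uses orthogonal polynomial contrasts by default. If one coefficient per day is easier to read, a hedged alternative is treatment contrasts:

# One coefficient per day relative to a baseline day, instead of polynomial terms:
mod_trt <- lm(n ~ wday, data = daily, contrasts = list(wday = "contr.treatment"))
coef(mod_trt)  # intercept = baseline day mean; other terms = differences from it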
grid <- daily %>%
data_grid(wday) %>%
add_predictions(mod, "n") #predicted n
grid
## # A tibble: 7 x 2
## wday n
## <ord> <dbl>
## 1 Sun 891.
## 2 Mon 975.
## 3 Tue 951.
## 4 Wed 963.
## 5 Thu 966.
## 6 Fri 967.
## 7 Sat 745.
ggplot(daily, aes(wday, n)) + #observed n
geom_boxplot() +
geom_point(data = grid, colour = "red", size = 4)
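A quick check of the skew claim: the lm predictions here are exactly the per-day means, which sit below the medians when counts are left-skewed.

daily %>%
  group_by(wday) %>%
  summarise(mean_n = mean(n), median_n = median(n))  # mean < median => left skew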
Next we compute and visualise the residuals.
daily <- daily %>%
add_residuals(mod)
daily %>%
ggplot(aes(date, resid)) +
geom_ref_line(h = 0) +
geom_line()
Note the change in the y-axis: now we are seeing the deviation from the expected number of flights, given the day of week. This plot is useful because now that we’ve removed much of the large day-of-week effect, we can see some of the subtler patterns that remain:
Our model seems to fail starting in June: you can still see a strong regular pattern that our model hasn’t captured. Drawing a plot with one line for each day of the week makes the cause easier to see.
ggplot(daily, aes(date, resid, colour = wday)) +
  geom_ref_line(h = 0) +
  geom_line(size=2)
The day-of-week lines are easier to distinguish with scale_color_brewer(palette = "Set1"):
ggplot(daily, aes(date, resid, colour = wday)) +
geom_ref_line(h = 0) +
geom_line(size=2) +
scale_color_brewer(palette = "Set1")
Or we can use faceting:
ggplot(daily, aes(date, resid)) +
geom_ref_line(h = 0) +
geom_line() +
facet_wrap(~wday)
Our model fails to accurately predict the number of flights on Saturday: during summer there are more flights than we expect, and during Fall there are fewer. We’ll see how we can do better to capture this pattern in the next section.
There are some days with far fewer flights than expected:
daily %>%
filter(resid < -100)
## # A tibble: 11 x 4
## date n wday resid
## <date> <int> <ord> <dbl>
## 1 2013-01-01 842 Tue -109.
## 2 2013-01-20 786 Sun -105.
## 3 2013-05-26 729 Sun -162.
## 4 2013-07-04 737 Thu -229.
## 5 2013-07-05 822 Fri -145.
## 6 2013-09-01 718 Sun -173.
## 7 2013-11-28 634 Thu -332.
## 8 2013-11-29 661 Fri -306.
## 9 2013-12-24 761 Tue -190.
## 10 2013-12-25 719 Wed -244.
## 11 2013-12-31 776 Tue -175.
Some of these days correspond to American national holidays (e.g. New Year’s Day, July 4th, Thanksgiving, Christmas).
There seems to be some smoother long-term trend over the course of a year. We can highlight that trend with geom_smooth():
daily %>%
ggplot(aes(date, resid)) +
geom_ref_line(h = 0) +
geom_line(colour = "grey50") +
geom_smooth(se = FALSE, span = 0.20)
There are fewer flights in January (and December), and more in summer (May-Sep). We can’t do much with this pattern quantitatively, because we only have a single year of data. But we can use our domain knowledge to brainstorm potential explanations.
Let’s first tackle our failure to accurately predict the number of flights on Saturday. A good place to start is to go back to the raw numbers, focusing on Saturdays:
daily %>%
filter(wday == "Sat") %>%
ggplot(aes(date, n)) +
geom_point() +
geom_line() +
scale_x_date(NULL, date_breaks = "1 month", date_labels = "%b")
I suspect this pattern is caused by summer holidays: many people go on holiday in the summer, and people don’t mind travelling on Saturdays for vacation. Looking at this plot, we might guess that summer holidays run from early June to late August. Let’s create a “term” variable that roughly captures the three school terms, and check our work with a plot:
term <- function(date) {
cut(date,
breaks = ymd(20130101, 20130605, 20130825, 20140101),
labels = c("spring", "summer", "fall")
)
}
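A quick sanity check of the cut points (test dates chosen arbitrarily inside each interval):

term(ymd(c("2013-01-15", "2013-07-01", "2013-10-01")))  # expect: spring summer fall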
daily <- daily %>%
mutate(term = term(date))
daily %>%
filter(wday == "Sat") %>%
ggplot(aes(date, n, colour = term)) +
geom_point(alpha = 1/3) +
geom_line() +
scale_x_date(NULL, date_breaks = "1 month", date_labels = "%b")
It’s useful to see how this new variable affects the other days of the week:
daily %>%
ggplot(aes(wday, n, colour = term)) +
geom_boxplot()
There is significant variation across the terms, so fitting a separate day-of-week effect for each term is reasonable. This gives some improvement:
mod1 <- lm(n ~ wday, data = daily)
mod2 <- lm(n ~ wday * term, data = daily)
daily %>%
gather_residuals(without_term = mod1, with_term = mod2) %>%
ggplot(aes(date, resid, colour = model)) +
geom_line(alpha=0.75)
daily %>%
gather_residuals(without_term = mod1, with_term = mod2) %>%
ggplot(aes(date, resid)) +
geom_line() +
facet_wrap(~model, nrow=2)
We can see the problem by overlaying the predictions from the model on to the raw data:
grid <- daily %>%
data_grid(wday, term) %>%
add_predictions(mod2, "n")
ggplot(daily, aes(wday, n)) +
geom_boxplot() +
geom_point(data = grid, colour = "red") +
facet_wrap(~ term)
Our model is finding the mean effect, but we have a lot of big outliers, so the mean tends to be far from the typical value. We can alleviate this problem by using a model that is robust to the effect of outliers: MASS::rlm(). This greatly reduces the impact of the outliers on our estimates, and gives a model that does a good job of removing the day-of-week pattern:
mod3 <- MASS::rlm(n ~ wday * term, data = daily)
daily %>%
add_residuals(mod3, "resid") %>%
ggplot(aes(date, resid)) +
geom_hline(yintercept = 0, size = 2, colour = "white") +
geom_line()
It’s now much easier to see the long-term trend, and the positive and negative outliers.
In the previous section we used our domain knowledge (how the US school term affects travel) to improve the model. An alternative is to give the data more room to speak: use the time of year directly, together with robust regression, and let a more flexible model capture the pattern we’re interested in. A simple linear trend isn’t adequate, and polynomials are not as good as splines, so we use a natural spline to fit a smooth curve across the year:
library(splines)
mod <- MASS::rlm(n ~ wday * ns(date, 5), data = daily)
daily %>%
data_grid(wday, date = seq_range(date, n = 13)) %>% #13*7=91 rows
add_predictions(mod) %>%
ggplot(aes(date, pred, colour = wday)) +
geom_line(size=2) +
geom_point(size=4) +
scale_color_brewer(palette = "Set1")
We see a strong pattern in the numbers of Saturday flights. This is reassuring, because we also saw that pattern in the raw data. It’s a good sign when you get the same signal from different approaches.
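The 5 degrees of freedom for ns() is a judgment call; a brief sketch (alternative df values chosen arbitrarily) of how the robust residual scale responds to the spline’s flexibility:

mod3df <- MASS::rlm(n ~ wday * ns(date, 3), data = daily)  # stiffer curve
mod7df <- MASS::rlm(n ~ wday * ns(date, 7), data = daily)  # more flexible curve
c(df3 = mod3df$s, df5 = mod$s, df7 = mod7df$s)             # robust scale estimates to compare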