suppressPackageStartupMessages(library(tidyverse))

In Regression I, the response was allowed to take on any real number. But what if the range is restricted?

Problems

Here are some common examples.

  1. Positive values: river flow.
    • Lower limit: 0
  2. Percent/proportion data: proportion of income spent on housing in Vancouver.
    • Lower limit: 0
    • Upper limit: 1.
  3. Binary data: success/failure data.
    • Only take values of 0 and 1.
  4. Count data: number of male crabs nearby a nesting female
    • Only take count values (0, 1, 2, …)

Here is an example: the fat content of a cow's milk, recorded over time. Data are from the paper "Transform or Link?". Let's consider the data as of week 10:

cow <- suppressMessages(read_csv("../data/milk_fat.csv"))
(plot_cow <- cow %>% 
    filter(week >= 10) %>% 
    ggplot(aes(week, fat*100)) +
    geom_point() +
    theme_bw() +
    labs(y = "Fat Content (%)") +
    ggtitle("Fat content of cow milk"))

Let’s try fitting a linear regression model.

plot_cow +
    geom_smooth(method = "lm", se = FALSE)

Notice the problem here: the regression line extends beyond the possible range of the response. This is mathematically incorrect, since the expected value cannot fall outside the range of \(Y\). But what are the practical consequences of this?

In practice, when we fit a linear regression model to a response with a restricted range, we cannot trust extrapolation: predictions outside the data range can be logically impossible. In this example, a cow is predicted to produce negative fat content after week 35!
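Here is a minimal sketch of that failure mode, using simulated data (not the cow dataset) with a downward linear trend on a positive response:

```r
# Simulated positive response (in percent) with a downward linear trend,
# roughly mimicking the shape of the cow data.
set.seed(1)
week <- 10:35
fat_pct <- pmax(4 - 0.1 * week + rnorm(length(week), sd = 0.2), 0.1)
fit <- lm(fat_pct ~ week)
# Extrapolating past the observed range yields a negative (impossible) value:
predict(fit, newdata = data.frame(week = 50))
```

The prediction at week 50 falls below zero even though the fitted line tracks the observed data well.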

Despite this, a linear regression model might still be useful in these settings. After all, the linear trend looks good for the range of the data.

Solutions

How can we fit a regression curve that stays within the bounds of the data, while still retaining the interpretability of a linear model function? Remember, non-parametric methods like random forests or loess will not give us this interpretability. Here are some options:

  1. Transform the data.
  2. Transform the linear model function: use a link function.
  3. Use a scientifically-backed parametric function.

Solution 1: Transformations

One possible solution is to transform the response so that its range is no longer restricted. The most typical example is positive data, like river flow. If we log-transform the response, the new response can be any real number, and we simply fit a linear regression model to the transformed data.

One downside is that we lose interpretability, since we are estimating the mean of \(\log(Y)\) (or some other transformation) given the predictors, not \(Y\) itself! Exponentiating the fitted model function will not fix this problem, either, since the exponential of an expectation is not the expectation of an exponential (Jensen's inequality). Still, this is a mathematical technicality, and the back-transformed fit might be a decent approximation in practice.
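A quick numerical illustration of that technicality: exponentiating the mean of the logs gives the geometric mean, which is generally smaller than the arithmetic mean.

```r
y <- c(1, 10, 100)
exp(mean(log(y)))  # geometric mean: 10
mean(y)            # arithmetic mean: 37
```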

Also, transforming the response might not be fruitful at all. For example, consider a binary response: no transformation of a two-valued variable can produce more than two values!

Solution 3: Scientifically-backed functions

Sometimes there are theoretically derived formulas for the relationship between response and predictors, which have parameters that carry some meaning to them.
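As a hypothetical sketch (the Michaelis-Menten form is our own illustrative choice here, not a formula from the source), such a function can be fit with nls(), and its parameters carry direct scientific meaning:

```r
# Michaelis-Menten curve: y = a*x / (b + x).
# a is the upper asymptote; b is the x-value at which y reaches a/2.
set.seed(1)
x <- 1:30
y <- 5 * x / (3 + x) + rnorm(length(x), sd = 0.1)
fit_mm <- nls(y ~ a * x / (b + x), start = list(a = 4, b = 2))
coef(fit_mm)  # estimates of a (asymptote) and b (half-saturation point)
```

Because the curve is bounded above by \(a\), predictions respect the range restriction by construction, and each parameter is directly interpretable.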