Authors: Yuanzhe Marco Ma, Arash Shamseddini, Kaicheng Tan, Zhenrui Yu
In this project, we look at the relationship between movie reviews and their IMDB scores (ranging from 0 to 10). Positive reviews are often associated with high IMDB scores, while negative reviews indicate the opposite. While it is easy for humans to read a review and guess the score, we wonder whether machines can do the same. Furthermore, we would like to automate this process, so that given a collection of movie reviews, we can easily predict their corresponding IMDB scores.
Most of the data wrangling in this project is powered by the pandas (McKinney 2010) library. The machine learning models and frameworks come from sklearn (Pedregosa et al. 2011). We use Ridge as our prediction model.
We obtained our data from an open-source GitHub repository:
https://github.com/nproellochs/SentimentDictionaries/blob/master/Dataset_IMDB.csv
The repository was originally used for sentiment analysis of movie reviews. Here we use Dataset_IMDB.csv as our main data source.
Let's look into the dataset by performing some Exploratory Data Analysis (EDA).
| Column Name | Column Type | Description |
|---|---|---|
| Id | Numeric | Unique ID assigned to each observation. |
| Text | Free Text | Body of the review content. |
| Author | Categorical | Author's name of the review. |
| Rating | Numeric | Rating given along with the review (normalized). |
For this project, we look primarily into the Text and Rating columns.
We recognize that Author may have a significant relationship with ratings, but since we are building a model intended to generalize to reviews from any audience, we have decided to discard it for this analysis. We therefore drop both the Author and Id columns.
### Text feature
The Text feature contains all the movie reviews. This will be our primary input feature.
Below are the top 10 most frequent words in the reviews:
import pandas as pd
# Top 20 most frequent words computed during EDA; display the first 10
freq = pd.read_csv('../results/top_20_frequent_words.csv')
freq[:10]
| | Word | Count |
|---|---|---|
| 0 | film | 17220 |
| 1 | movie | 9868 |
| 2 | like | 7170 |
| 3 | story | 6923 |
| 4 | director | 5525 |
| 5 | time | 5373 |
| 6 | just | 4689 |
| 7 | life | 4302 |
| 8 | good | 3464 |
| 9 | man | 3392 |
Note that when we used sklearn's CountVectorizer to generate the word counts, we discarded the most frequent function words such as "the", "an", and so on by passing the stop_words='english' argument. This ignores frequently used but largely meaningless English words, which carry little information for our ML model and could contribute to overfitting.
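As a rough illustration, a frequency table like the one above could be produced along these lines (a minimal sketch, not the project's exact script; the local path to Dataset_IMDB.csv is assumed):

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

reviews = pd.read_csv('Dataset_IMDB.csv')  # path to the downloaded dataset is assumed

# Bag-of-words counts with common English stop words removed
vectorizer = CountVectorizer(stop_words='english')
counts = vectorizer.fit_transform(reviews['Text'])

# Total count of each word across all reviews, sorted descending
totals = counts.sum(axis=0).A1
freq = (pd.DataFrame({'Word': vectorizer.get_feature_names_out(), 'Count': totals})
        .sort_values('Count', ascending=False)
        .head(20)
        .reset_index(drop=True))
```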
### Rating
Ratings will be our prediction target. Let's look at the distribution of Rating.
The ratings appear roughly normally distributed, with a slight left skew. Most ratings cluster between 5 and 8.
### Text length and Rating
We suspect that people who are more passionate about certain movies tend to write longer reviews to express their feelings. The same could be true for very negative reviews.
A bar plot of Text length vs Rating is presented below.
There doesn't seem to be a strong correlation between review length and rating. However, it is notable that for the most positive ratings (from 7 to 10), the reviews tend to be longer.
In addition to the Text feature itself, we extracted two potentially useful columns that could enhance our machine learning model:
- n_words: As mentioned above, we suspect some correlation between review length and rating. We therefore created an n_words feature, which counts the number of words in each review.
- sentiment: We used the NLTK (Bird 2009) package to extract the sentiment of each review. This sentiment feature has four ordinal categories: ['neg', 'compound', 'neu', 'pos'].

Now we have three columns to transform before fitting - Review, n_words, and sentiment; a sketch of the full preprocessing follows the list below.
- Review: since it is a text feature, we use sklearn's CountVectorizer to transform the text into a bag-of-words representation. As mentioned in the EDA, some of the most frequent words are often meaningless, so we pass stop_words='english' to ignore these common English words.
- n_words: we standardize this column with sklearn's StandardScaler so that its scale does not unduly influence the estimator.
- sentiment: since it is ordinal data, we use sklearn's OrdinalEncoder to encode the categories as integers.
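Putting this together, the feature engineering and preprocessing could look roughly like the sketch below. The sentiment label here is derived by taking the highest-scoring key from NLTK VADER's polarity_scores, which is our assumption about how the feature was built; the file path and the rename from Text to Review are likewise assumed.

```python
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OrdinalEncoder, StandardScaler

reviews = pd.read_csv('Dataset_IMDB.csv').rename(columns={'Text': 'Review'})  # path assumed

# n_words: number of words in each review
reviews['n_words'] = reviews['Review'].str.split().str.len()

# sentiment: the strongest VADER polarity component for each review (assumed derivation)
sia = SentimentIntensityAnalyzer()

def strongest_polarity(text):
    scores = sia.polarity_scores(text)  # keys: 'neg', 'neu', 'pos', 'compound'
    return max(scores, key=scores.get)

reviews['sentiment'] = reviews['Review'].apply(strongest_polarity)

# Column-wise preprocessing as described above
preprocessor = ColumnTransformer([
    ('text', CountVectorizer(stop_words='english'), 'Review'),
    ('scale', StandardScaler(), ['n_words']),
    ('ordinal', OrdinalEncoder(categories=[['neg', 'compound', 'neu', 'pos']]), ['sentiment']),
])
```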
With the training data ready, we consider which model to fit.
We selected three candidate models suitable for regression:

- Ridge
- SVR
- RandomForestRegressor
We tuned the hyperparameters of each model using sklearn's GridSearchCV.
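For instance, tuning Ridge's alpha might look roughly like this (a sketch; the parameter grid, split proportion, and random seed are assumptions, and preprocessor and reviews come from the preprocessing sketch above):

```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline

# `preprocessor` and `reviews` are defined in the preprocessing sketch above
X = reviews[['Review', 'n_words', 'sentiment']]
y = reviews['Rating']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)

pipe = make_pipeline(preprocessor, Ridge())
param_grid = {'ridge__alpha': [1, 10, 100, 500, 1000]}  # assumed grid
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1, return_train_score=True)
search.fit(X_train, y_train)

search.best_params_  # e.g. {'ridge__alpha': 500}
```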
Below are comparative cross-validation scores of the 3 models (a sketch of how such a comparison can be produced follows the table).
| model | fit_time | score_time | test_score | train_score |
|---|---|---|---|---|
| Ridge | 2.198422 | 0.511951 | 0.533365 | 0.841871 |
| SVR | 28.988741 | 8.086322 | 0.453478 | 0.714765 |
| RandomForestRegressor | 287.661862 | 0.499031 | 0.359660 | 0.906179 |
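A table of this shape resembles the averaged output of sklearn's cross_validate. A sketch (using default hyperparameters for SVR and RandomForestRegressor, since their tuned values are not listed in this report) could be:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

models = {
    'Ridge': Ridge(alpha=500),
    'SVR': SVR(),
    'RandomForestRegressor': RandomForestRegressor(random_state=123),
}

# `preprocessor`, `X_train`, `y_train` come from the sketches above
results = {}
for name, model in models.items():
    cv = cross_validate(make_pipeline(preprocessor, model),
                        X_train, y_train, cv=5, return_train_score=True)
    results[name] = pd.DataFrame(cv).mean()

pd.DataFrame(results).T  # columns: fit_time, score_time, test_score, train_score
```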
### Ridge
We selected Ridge based on two considerations: cross-validation scores and the risk of overfitting. Ridge has the highest CV score. Although all three models show some risk of overfitting, Ridge and SVR overfit to roughly the same degree, while RandomForestRegressor overfits the most. We therefore consider Ridge the best estimator. For more details on the model selection process, see the model comparison report.
Below is a detailed specification of our final model; a sketch of refitting and scoring it follows the table.
| Model Name | Hyperparameter - alpha | Mean Fit Time | Mean Scoring Time | Mean CV Score |
|---|---|---|---|---|
| Ridge | 500 | 0.52s | 0.53s | 0.533365 |
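Refitting this configuration on the training split and scoring it on the held-out test set could look like the following sketch (variable names carry over from the sketches above):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.pipeline import make_pipeline

# `preprocessor`, `X_train`, `y_train`, `X_test`, `y_test` come from the sketches above
final_model = make_pipeline(preprocessor, Ridge(alpha=500))
final_model.fit(X_train, y_train)

y_pred = final_model.predict(X_test)
r2 = r2_score(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
```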
Below are our model's prediction scores on the test dataset.
# Test-set metrics saved during the analysis
scores = pd.read_csv('../results/model_test_scores.csv', index_col=0)
scores
| | r2 | rmse |
|---|---|---|
| model | 0.543102 | 1.233982 |
$R^2$ and RMSE (root mean squared error) are two common metrics for evaluating a regression model's accuracy.
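For reference, with $y_i$ the true ratings, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the true ratings over the $n$ test observations:

$$
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}.
$$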
The obtained $R^2$ score on the test set is 0.543, which is comparable to, and even slightly better than, our validation score.
The RMSE is 1.234. This is fairly large: it means our predicted scores can be off by more than 1.2 points, on a scale that only runs from 0 to 10.
We have also created a scatterplot to compare our predicted ratings vs. the true ratings.
There is an obvious difference between the predicted ratings and the true ratings: people tend to give whole-number ratings (e.g. 3 rather than 3.247), and our model does not capture that.
Despite that, most of the points cluster somewhat around the identity line, which suggests that our model does not badly under-fit or over-fit.
Now that the prediction and analysis are done, we can examine the quality of our work. There are a few areas for improvement.
- We dropped the Author and Id columns at the beginning. These columns could in fact be influential features, especially Author. However, an inherent issue with our dataset is that all reviews come from only four critics (authors), and including Author would make it harder to generalize our model to the broader audience. It would be ideal to obtain reviews from the general audience as well.
- The sentiment feature generated with the NLTK package contained only neu and compound in our dataset. This is confusing, and we have yet to understand this behavior. We included the feature regardless because it may still provide useful information, but this is definitely something we need to investigate further.