Reviewing and validating data is an important part of research. Invalid data lead to invalid results and other negative consequences. To help ensure our research project is collecting reliable data, we use a two-tiered approach on our data collection web site. Data are first validated by computers following a set of pre-determined rules, which eliminates most impossible values. Data are then reviewed by people who have experience with the data. Humans are much better at catching unlikely values, because we can look at the data and ask, "Do the data make sense?" In general, I would say that our two-tiered system works well, but it is not without its weaknesses.

In the 2014-15 school year we collected data on 56,209 first grade students. Each student has up to 125 pieces of data entered about them on our web site. The data were reviewed by one of 298 people, so, on average, each person reviewed the data for about 189 students. That translates to each person reviewing as many as 23,625 pieces of data across their 189 students. As I go to conferences and talk to these people, one consistent comment I hear is that data review can be overwhelming. They struggle to find enough time to review the data properly, and they become mentally fatigued by the sheer quantity of data to review.

So we are investigating whether the computer science technique of machine learning can help ease the data review burden. We would like to see if we can train a machine learning algorithm to spot the more subtle errors in our data and draw them to the attention of data reviewers. This paper describes the early steps of this endeavor. The two sketches below illustrate, respectively, the kind of first-tier rule checking we already do and the kind of machine-assisted flagging we have in mind.
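To make the first tier concrete, here is a minimal sketch of the kind of rule-based checking a computer can apply before a human ever sees the data. The field names, ranges, and rules are hypothetical stand-ins for illustration, not our actual rule set.

    # Minimal sketch of first-tier, rule-based validation.
    # All field names and thresholds are hypothetical examples.
    from datetime import date

    def validate_record(record: dict) -> list[str]:
        """Return a list of rule violations for one student record."""
        errors = []

        # Range check: a hypothetical score bounded between 0 and 100.
        score = record.get("fall_score")
        if score is not None and not 0 <= score <= 100:
            errors.append(f"fall_score out of range: {score}")

        # Categorical check: grade must be one of the known values.
        if record.get("grade") not in {"K", "1", "2", "3"}:
            errors.append(f"unknown grade: {record.get('grade')}")

        # Cross-field check: the winter test cannot precede the fall test.
        fall, winter = record.get("fall_date"), record.get("winter_date")
        if fall and winter and winter < fall:
            errors.append("winter_date precedes fall_date")

        return errors

    # Example: an impossible score and an out-of-order test date.
    print(validate_record({
        "fall_score": 250,
        "grade": "1",
        "fall_date": date(2014, 9, 15),
        "winter_date": date(2014, 9, 1),
    }))

Checks like these catch impossible values, but a plausible-looking score that is wildly inconsistent with a student's other scores would sail through; that is exactly the kind of unlikely-but-valid value only the second tier catches today.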
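And here is a hedged sketch of the sort of machine-assisted flagging we would like to explore. It uses scikit-learn's IsolationForest on synthetic data purely as an illustration; we have not yet chosen an algorithm, and the data shapes and parameters here are assumptions.

    # Hedged sketch of anomaly flagging with an off-the-shelf model.
    # The data are synthetic and the algorithm choice is illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Stand-in for student records: 1,000 students, 5 numeric fields.
    X = rng.normal(loc=50, scale=10, size=(1000, 5))

    # Assume roughly 1% of records contain subtle errors.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(X)

    # predict() returns -1 for records the model considers anomalous.
    flags = model.predict(X)
    suspects = np.where(flags == -1)[0]
    print(f"{len(suspects)} of {len(X)} records flagged for review")

The important design choice, whatever algorithm we land on, is that flagged records would be drawn to a reviewer's attention rather than changed automatically; the human remains the final judge of whether the data make sense.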