Overview of the Data Release Guidelines

The ASAP data collected by The Common Pool, LLC (hereinafter “TCP”), represent a valuable source of information, and requests for data from qualified requesters will be given serious consideration. The data are the property of the state departments of education and the vendors that provided them. Requests for data will therefore be evaluated carefully, and TCP reserves the right to deny any request that it deems an inappropriate use of the data.

ASAP data may be used only for the purposes specified in the publication Guidelines for the Release of ASAP Data (hereinafter “Guidelines”) and the data license agreement between the requester and TCP.

To protect confidentiality, data containing individual student, school, district, state department of education, and vendor identifying information are not included in this release.

TCP will give priority to requests from state departments of education and vendors who participated in ASAP.…



The original data have been modified in the following way:  Original essays were put through an anonymization process that replaced references to persons, places, and locations with tokens.  For example, if a student wrote that “her name was Jada, attended school at Jackson High School in Canton, OH”, this would be replaced with “her name was @person, attended @place high school in @location”.  A few essays were eliminated because the anonymization process was ineffective in scrubbing the text sufficiently to remove information that might lead to the identification of an individual writer.
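The substitution step described above can be sketched as a simple replacement pass over each essay. The lookup lists and helper below are illustrative assumptions for demonstration only, not the actual ASAP anonymization pipeline, which would have relied on named-entity detection rather than fixed dictionaries.

```python
import re

# Hypothetical lookup lists; the real pipeline detected entities
# automatically rather than matching known strings.
PERSON_NAMES = {"Jada"}
PLACE_NAMES = {"Jackson High School"}
LOCATION_NAMES = {"Canton, OH"}

def anonymize(text: str) -> str:
    """Replace identifying strings with category tokens (@person, @place, @location)."""
    for place in PLACE_NAMES:
        text = text.replace(place, "@place")
    for location in LOCATION_NAMES:
        text = text.replace(location, "@location")
    for person in PERSON_NAMES:
        # Word boundaries avoid replacing substrings of longer words.
        text = re.sub(rf"\b{re.escape(person)}\b", "@person", text)
    return text
```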

In order to assess the suitability of the anonymized data for evaluating automated essay scoring systems, a small internal study was completed with the LightSIDE engine to determine the degree to which scoring performance might differ between the original and anonymized data. LightSIDE is an open-source scoring engine developed at Carnegie Mellon University; it was included along with the commercial vendors in the first study, where it demonstrated high agreement with human ratings despite having no NLP capabilities. The analysis was performed because it was suspected that the anonymized data might be harder to model than the original data, since it would contain less specific information. However, the LightSIDE model showed only a slight drop in quadratic weighted kappa across the data sets, from .763 to .759, which, based on a t-test, was not statistically significant (p = .15). While the data anonymization process therefore seems not to have substantially impeded the ability of machine-learning based systems to model human scores on this data set, it may have made it more difficult for participants to develop features related to deeper aspects of writing ability.  Since content words were replaced with meaningless symbols in the process, the grammatical structures and meaning relationships within each essay were certainly made less accessible, even to human readers.…
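Quadratic weighted kappa, the agreement statistic reported above, penalizes larger disagreements between two raters more heavily than near-misses. The following is a standard implementation sketch of the statistic, not the code used in the original study.

```python
import numpy as np

def quadratic_weighted_kappa(a, b, min_rating, max_rating):
    """Quadratic weighted kappa between two integer rating vectors.

    1.0 means perfect agreement; 0.0 means chance-level agreement.
    """
    a = np.asarray(a)
    b = np.asarray(b)
    n = max_rating - min_rating + 1
    # Observed agreement matrix: counts of (rating_a, rating_b) pairs.
    observed = np.zeros((n, n))
    for x, y in zip(a - min_rating, b - min_rating):
        observed[x, y] += 1
    # Expected matrix under independence: outer product of the marginals.
    expected = np.outer(np.bincount(a - min_rating, minlength=n),
                        np.bincount(b - min_rating, minlength=n)) / len(a)
    # Quadratic weights: disagreement of k score points costs k**2.
    weights = np.array([[(i - j) ** 2 for j in range(n)]
                        for i in range(n)]) / (n - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```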


Six of the essay sets were transcribed from their original paper-form administration in order to prepare them for processing by automated essay scoring engines, which require the essays to be in ASCII format.  This process involved retrieving the scanned copies of essays from the state or a vendor serving the state, randomly selecting a sample of essays for inclusion in the study, and then sending the selected documents out for transcription.

Both the scanning and transcription steps had the potential to introduce errors into the data that would have been minimized had the essays been typed directly into the computer by the student, the normal procedure for automated essay scoring.  Essays were scanned on high-quality digital scanners, but occasionally student writing was illegible because the original paper document was written with an instrument too light to reproduce well, was smudged, or included undecipherable handwriting. In such cases, or if the essay could not be scored by human raters (i.e., the essay was off-topic or inappropriate as determined by human raters), the essay was eliminated from the analyses.  Transcribers were instructed to be as faithful to the written document as possible while rendering the text as the student might have produced it with standard computer capabilities.  For example, more than a few students used a print style in which all letters were capitalized.  To address this challenge, we instructed the transcribers to capitalize according to conventional practice.  This modification may have corrected some genuine errors, but it also limited the over-identification of capitalization errors by the automated essay scoring engines.

The first transcription company handled four prompts from three states, comprising 11,496 essays.  In order to assess the potential impact of transcription errors, a random sample of 588 essays was re-transcribed and compared on the basis of error rates for punctuation, capitalization, misspellings, and skipped data.  Accuracy was calculated on the basis of the number of characters and the number of words, with an average rate of 98.12%[1].  The second transcription company was evaluated using similar metrics.  From a pool of 6,006 essays, a random sample of 300 essays was selected for re-transcription.  Accuracy for this set of essays was calculated to be 99.82%.
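An agreement rate of the kind reported above could be approximated by aligning the two transcriptions of the same essay and measuring the proportion of matching characters. The metric below is a rough sketch using sequence alignment; the exact accuracy formula used in the study (which also counted words and categorized error types) is not specified here.

```python
import difflib

def transcription_accuracy(first_pass: str, second_pass: str) -> float:
    """Character-level agreement between two transcriptions of one essay.

    Returns the fraction of characters in the longer transcription that
    align with the other, so identical texts score 1.0.
    """
    matcher = difflib.SequenceMatcher(None, first_pass, second_pass)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(first_pass), len(second_pass))
```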

Two of the essay sets were provided in ASCII format by their respective states.  The 10th grade students in those states had typed their responses directly into the computer using web-based software that emulated a basic word processor.  While the test employed digital technology, the conditions for testing were similar to those in states where the essays had been transcribed.…

ASAP Phase One Vendor Demonstration Data

The materials provided here are made available conditional on your acceptance of the licensing terms.

Several files are made available to you in support of your analysis of the ASAP Phase One Vendor Demonstration Data.  In addition to the data files themselves, you will have access to the actual prompt (or a reference/link to locate the prompt if the material is copyrighted), state rubrics for rating student responses, training essays (if available), and cut-score information (if available).

Included in the tab-delimited data files is information on:

  • Data Set (Training, Test, Validation)
  • Essay ID
  • Essay Set (1-8)
  • Essay Text
  • Word Count
  • Rater 1, Domain 1
  • Rater 2, Domain 1
  • Rater 3, Domain 1
  • Domain 1 Score
  • Domain 1 Prediction ID
  • Rater 1, Domain 2
  • Rater 2, Domain 2
  • Domain 2 Score
  • Domain 2 Prediction ID
  • Vendor 1 Prediction Score
  • Vendor 2 Prediction Score
  • Vendor 3 Prediction Score
  • Vendor 4 Prediction Score
  • Vendor 5 Prediction Score
  • Vendor 6 Prediction Score
  • Vendor 7 Prediction Score
  • Vendor 8 Prediction Score
  • Vendor 9 Prediction Score
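As an illustration, the tab-delimited data files can be read with a standard CSV parser configured for tab delimiters. The column names in the sample header below are assumptions derived from the field list above and may not match the released files exactly.

```python
import csv
import io

# Hypothetical two-line excerpt in the tab-delimited layout described above;
# real files carry the full set of rater, domain, and vendor columns.
SAMPLE = (
    "essay_id\tessay_set\tessay\tdomain1_score\n"
    "1\t1\tDear local newspaper, I think that computers...\t8\n"
)

def load_essays(text: str):
    """Parse tab-delimited essay records into a list of dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

rows = load_essays(SAMPLE)
```

Note that all values come back as strings, so score columns should be cast to integers before computing agreement statistics.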