Is your AI as smart as an 8th grader? Prove it to Paul Allen’s AI2 for a shot to win $50K

The Allen Institute for Artificial Intelligence (AI2) is partnering with Kaggle, an online collective of data scientists, to challenge anyone to step up and test their AI’s abilities. Each AI system will have to answer a large set of multiple-choice science questions set at the 8th grade level.
The system that gets the most questions right wins $50,000, with second and third prizes set at $20,000 and $10,000, respectively. Called “The Allen AI Science Challenge,” the contest is open to academics and researchers as well as those from the private sector, and organizers are hoping both will enter. An AI2 representative told us that “Kaggle roughly anticipates 1,000 participating teams.”
“IBM has announced that Watson is ‘going to college’ and ‘diagnosing patients,’ ” said Oren Etzioni, CEO of AI2, in a statement. “But before college and medical school, let’s make sure Watson can ace the 8th grade science test. We challenge Watson, and all other interested parties, to take the Allen AI Science Challenge.” So, Big Tech? Consider the gauntlet thrown down.
Etzioni told us via email that the idea originates with Paul Allen. “Paul has encouraged us to engage the AI community through awards,” he said, citing the past example of the $5.7 million Allen awarded to seven AI researchers as part of the Allen Distinguished Investigator Awards, which we reported on in 2014. “And competitions. We chose to partner with Kaggle due to their expertise in running these competitions.”
Why science questions? Well, besides being a natural fit for those developing AI, the AI2 release notes that one of its flagship projects, Project Aristo, is also focused on science. AI2 states that a “basic understanding of science requires building a knowledge base that also includes not just facts about science, but also elements of the unstated, common sense knowledge that humans generate over their lives.”
Of course, Allen’s team over at AI2 has already matched the average human score on the SAT math test, as we recently reported. “GeoS gets about 49 percent of questions right, which extrapolates to about a 500, out of a possible 800, on the SAT’s math test,” we wrote in September. “That’s pretty solid amongst its robot peers and was the average human score in 2015.”
For more details on how to enter, go here. The competition will last five months, closing in February 2016. Winners will be announced at the AAAI 2016 Conference, Feb. 12-17, 2016, in Phoenix.