How many times have you seen a photo of an amazing-looking dish and thought that if you only knew the recipe, you could re-create it yourself? The problem is that the recipe is nowhere to be found.
MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created an artificial intelligence system that could help find those recipes, as well as offer a better understanding of people’s eating habits.
The AI system can predict ingredients and suggest similar recipes simply by looking at images of food. MIT says the system was able to retrieve the correct recipe 65% of the time in tests.
“In computer vision, food is mostly neglected because we don’t have the large-scale datasets needed to make predictions,” says Yusuf Aytar, a postdoctoral associate at MIT. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”
How It Works
The MIT CSAIL project builds on previous work developing datasets that pair hundreds of thousands of food images with thousands of recipes, complete with ingredient lists and instructions.
Researchers combed websites such as Allrecipes.com and Food.com to build Recipe1M, a database of more than 1 million recipes annotated with information about the ingredients in a wide range of dishes. They then trained a neural network on this data to find patterns and make connections between food images and their recipes.
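The paper's exact architecture isn't described here, but the general idea of connecting images to recipes can be sketched as learning a shared embedding space: one encoder for images, one for recipe text, trained so that matching pairs land close together. The sketch below is an illustrative assumption, not MIT's actual code — the encoders are stand-in random projections and the margin loss is one common choice for this kind of training.

```python
import numpy as np

# Toy stand-ins for learned encoders: in the real system these would be
# neural networks; here they are fixed random projections (illustration only).
rng = np.random.default_rng(0)
IMG_DIM, TXT_DIM, EMB_DIM = 512, 300, 64
W_img = rng.normal(size=(IMG_DIM, EMB_DIM))
W_txt = rng.normal(size=(TXT_DIM, EMB_DIM))

def embed(features, W):
    """Project features into the shared space and L2-normalize."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def contrastive_margin_loss(img_emb, txt_emb, margin=0.2):
    """Push each image closer to its own recipe than to other recipes."""
    sim = img_emb @ txt_emb.T           # cosine similarities (rows: images)
    pos = np.diag(sim)                  # similarity of matching pairs
    loss = np.maximum(0.0, margin + sim - pos[:, None])  # hinge on mismatches
    np.fill_diagonal(loss, 0.0)         # ignore the matching pairs themselves
    return loss.mean()

batch_imgs = rng.normal(size=(8, IMG_DIM))     # e.g. CNN image features
batch_recipes = rng.normal(size=(8, TXT_DIM))  # e.g. recipe-text features
loss = contrastive_margin_loss(embed(batch_imgs, W_img),
                               embed(batch_recipes, W_txt))
print(round(loss, 3))
```

Training would adjust the encoder weights to drive this loss down, so that a dish photo and its written recipe end up as near neighbors.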
Given a photo of a food item, the system could identify ingredients such as flour, eggs and butter and then suggest several recipes that it determined to be similar to images from the database. MIT says the system did very well with desserts such as cookies or muffins but had a tougher time identifying foods such as sushi rolls and smoothies, which are more ambiguous.
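Once images and recipes share an embedding space, suggesting recipes for a photo reduces to a nearest-neighbor search. Here is a minimal sketch of that retrieval step; the recipe names and embedding vectors are invented for the demo, and the 64-dimension size is an arbitrary assumption.

```python
import numpy as np

def top_k_recipes(query_emb, recipe_embs, recipe_names, k=3):
    """Return the k recipes whose embeddings are most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    db = recipe_embs / np.linalg.norm(recipe_embs, axis=1, keepdims=True)
    sims = db @ q                        # cosine similarity to each recipe
    order = np.argsort(sims)[::-1][:k]   # indices of the k best matches
    return [(recipe_names[i], float(sims[i])) for i in order]

# Hypothetical pre-computed recipe embeddings (random for the demo).
rng = np.random.default_rng(42)
names = ["chocolate chip cookies", "blueberry muffins", "lasagna",
         "sushi roll", "strawberry smoothie"]
db = rng.normal(size=(len(names), 64))

# Pretend a photo of cookies embeds near the cookie recipe's vector.
photo_emb = db[0] + 0.1 * rng.normal(size=64)
matches = top_k_recipes(photo_emb, db, names)
print(matches[0][0])
```

This also hints at why ambiguous foods are hard: a smoothie photo reveals little about what went into it, so its embedding sits near many different recipes at once.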
The system was also stumped when many similar recipes existed for the same dish. There are dozens of ways to make lasagna, for example, and the system had difficulty choosing among the large array of recipes it found.
The next steps are to improve the system so it can understand food in even more detail: inferring how a dish was prepared, for instance, or distinguishing between variations of an ingredient such as mushrooms or onions. Future iterations of the AI system could also serve as a dinner aide or diet helper.
“This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” says Nick Hynes, a graduate student from MIT CSAIL. “For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal.”
A full research paper will be presented at the Computer Vision and Pattern Recognition (CVPR) conference next week.