Back in May of this year, Google announced the creation of AutoML, an AI that is able to design better AIs. AutoML has since built a new AI called NASNet, which is used to recognize objects. NASNet outperformed its human-designed counterparts, and the methods used to develop it seem to be the future of advanced AI.
The Google researchers automated the design of machine learning models using an approach called reinforcement learning: a controller proposes candidate model architectures, each candidate is trained and evaluated, and the resulting score is fed back to the controller so that better-performing designs become more likely over time. For NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real time. The Google Brain researchers, Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc Le, claim that their machine-designed model beats all state-of-the-art computer vision systems created by people. They applied AutoML to the ImageNet image classification and COCO object detection data sets, which according to the Google team are “two of the most respected large-scale academic data sets in computer vision.”
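To make the reinforcement-learning loop concrete, here is a minimal toy sketch of the idea — not Google's actual AutoML code. The search space, the `evaluate` stand-in (which fakes a validation score instead of training a real child network), and the specific update rule are all illustrative assumptions; real architecture search trains each sampled model, which is vastly more expensive.

```python
import math
import random

# Toy search space: a few hyperparameters a "controller" can choose between.
# (Illustrative only; NASNet's real search space covers layer operations
# and connections, not just these three knobs.)
SEARCH_SPACE = {
    "filters": [32, 64, 128],
    "kernel": [3, 5, 7],
    "layers": [2, 4, 6],
}

def evaluate(arch):
    """Stand-in for training a child model and measuring validation accuracy.

    Here we fake a reward: the fraction of choices matching a made-up
    'ideal' architecture. In real NAS this step is a full training run.
    """
    target = {"filters": 64, "kernel": 5, "layers": 4}
    return sum(arch[k] == v for k, v in target.items()) / len(target)

def search(steps=300, lr=2.0, seed=0):
    rng = random.Random(seed)
    # One preference score per option; higher preference -> sampled more often.
    prefs = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}
    baseline, best, best_score = 0.0, None, -1.0
    for _ in range(steps):
        # Sample an architecture from a softmax over the preferences.
        choice = {}
        for k, opts in SEARCH_SPACE.items():
            weights = [math.exp(p) for p in prefs[k]]
            choice[k] = rng.choices(range(len(opts)), weights=weights)[0]
        arch = {k: SEARCH_SPACE[k][i] for k, i in choice.items()}
        reward = evaluate(arch)
        if reward > best_score:
            best, best_score = arch, reward
        # REINFORCE-style update: reinforce choices that beat a running baseline.
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for k, i in choice.items():
            prefs[k][i] += lr * advantage
    return best, best_score

if __name__ == "__main__":
    best, score = search()
    print(best, score)
```

The key design point this sketch shares with the paper's approach is the feedback loop: the controller's sampling distribution shifts toward architecture choices that earn above-baseline rewards, so exploration concentrates on promising regions of the search space.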
Self-driving cars are one of the many possible uses of this architecture. It’s easy to imagine the system helping Google’s AVs identify traffic, pedestrians, and road hazards. NASNet could also be used in augmented reality to help apps interact with the environment in a faster, more accurate way than current computer vision solutions. But perhaps the most intriguing applications have yet to be identified.