Google's artificial neural network has run rampant throughout the internet in the last few weeks, turning demure Twitter photos into surrealistic nightmares and taking the already-hellish Fear and Loathing in Las Vegas to a level only Hunter S. Thompson himself could have imagined.
But let us not forget: Google's AI also serves practical ends. Google engineers are using their layered artificial neural network, also called a deep network, to synthesize unseen views from two or more images of a scene. They call the system "DeepStereo." For example, given a photo from the left and one from the right of a scene, the deep network can predict what the scene looks like from anywhere in between. Or, given five photos of a room, it can render novel views of the room from other angles, based on what it infers should be there.
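To see why this is hard, it helps to look at the naive alternative to a learned model: simply cross-fading the left and right images at a chosen position. The sketch below is illustrative only, not Google's method; it ignores parallax (objects at different depths shift by different amounts between views), which is exactly the problem the deep network learns to handle.

```python
import numpy as np

def naive_middle_view(left, right, t=0.5):
    """Cross-fade two views: t=0 returns the left image, t=1 the right.

    Because this blends pixels in place, anything not at the same
    position in both views turns into ghosting -- a learned model
    instead reasons about depth to shift pixels before combining them.
    """
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    blended = (1.0 - t) * left + t * right
    return blended.astype(np.uint8)

# Two tiny 2x2 grayscale "images" standing in for real photos
left = np.array([[0, 100], [200, 50]], dtype=np.uint8)
right = np.array([[100, 0], [100, 150]], dtype=np.uint8)
mid = naive_middle_view(left, right, t=0.5)
```

Running this on real photographs produces a blurry double exposure rather than a plausible new viewpoint, which is why DeepStereo's learned approach is a meaningful step beyond simple interpolation.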