Google teaches a neural network to recover images from just a few pixels
08.02.2017 Erika J. Wells
The Google Brain team has developed a computer algorithm that can "recreate" images from a source of only 64 pixels.
Movie scenes in which a camera zooms in almost infinitely, revealing ever more detail, used to draw nothing but a smile. Google, however, has achieved a result that makes such scenes look a little more realistic.
The researchers reduced original images of 32 x 32 pixels to a size of 8 x 8. A neural network then upscaled these reduced images back to 32 x 32, recreating details that give the new images a reasonable similarity to the originals.
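The downscaling step described above can be sketched in a few lines. This is a minimal illustration with numpy, assuming simple average pooling over 4 x 4 blocks (the article does not say which downscaling method was used); the function name is ours.

```python
import numpy as np

def downscale(image, factor=4):
    """Shrink an image by averaging factor x factor blocks of pixels --
    one simple way to turn a 32 x 32 image into an 8 x 8 one."""
    h, w = image.shape[:2]
    assert h % factor == 0 and w % factor == 0, "size must divide evenly"
    # Split each axis into (blocks, factor) and average inside each block.
    return image.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# A 32 x 32 RGB image shrinks to 8 x 8 -- only 64 pixels in total.
img = np.random.rand(32, 32, 3)
small = downscale(img)
print(small.shape)  # (8, 8, 3)
```

The 8 x 8 result is the entire input the network gets to work with, which is why the recreated detail has to come from learned examples rather than from the source photo.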
The source is a "blurred" photo of very low quality. In the first stage of processing, the neural network takes other high-quality images, reduces them to the same size, and looks for features they share with the input. In the second stage, the Google Brain algorithm tries to reproduce as much detail as possible at high resolution.
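One way to picture how the two stages are combined: each output pixel gets a score for every possible intensity from both a "conditioning" model (what the low-resolution input suggests) and a "prior" model (what tends to look plausible), and the scores are merged before choosing a value. The sketch below is a hypothetical numpy illustration of that idea, not the actual Google Brain code; all names and the random logits are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    """Turn raw scores (logits) into a probability distribution."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical per-pixel logits over 256 possible intensity values:
# one set from the conditioning model, one from the prior model.
conditioning_logits = rng.normal(size=256)
prior_logits = rng.normal(size=256)

# Merging the two score sets lets both "what the input says" and
# "what is plausible" influence the chosen pixel intensity.
probs = softmax(conditioning_logits + prior_logits)
intensity = int(probs.argmax())
print(intensity, round(float(probs.sum()), 6))
```

Repeating this choice pixel by pixel is what lets the system "think through" details, as in the eyebrow example below, instead of merely smoothing the blurry input.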
For example, if the neural network sees a band of brown pixels near the top of a photo, it may conclude that they are eyebrows and "fill them in". It is important to understand that the picture the computer generates is not the real one: Google's neural network essentially recreates the image from scratch, drawing on the many samples it has seen before.
Of course, there is a difference between the original images and the results of the AI's work, but the technology already has practical applications, for example, upscaling images while preserving detail in cases where the accuracy of the result is not critical.