Colorful Image Colorization uses Deep Learning to convert (or "hallucinate", as the paper's abstract puts it: http://richzhang.github.io/colorization/) black & white photographs into color. It was developed by Richard Zhang, Phillip Isola, and Alexei A. Efros.
Deep Learning is a subfield of Machine Learning that uses algorithms inspired by the structure and function of the human brain. These are known as artificial neural networks, and the idea is to 'teach' the artificial 'brain' a task, with results improving the more it practices.
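To make that 'teaching' idea concrete, here is a minimal sketch of my own (not the authors' model, and the "warmth" values are invented): a single artificial neuron repeatedly guesses, measures its error, and nudges its internal numbers to do better next time.

```python
# A toy illustration of learning by trial and error (hypothetical data,
# not the real colorization network): one "neuron" learns to map a
# grayscale brightness value to a made-up "warmth" value.
def train(samples, lr=0.1, steps=2000):
    w, b = 0.0, 0.0  # the neuron's adjustable parameters
    for _ in range(steps):
        for x, target in samples:
            pred = w * x + b     # the neuron's guess
            err = pred - target  # how wrong the guess was
            w -= lr * err * x    # nudge parameters to shrink the error
            b -= lr * err
    return w, b

# Invented training pairs: brighter input -> "warmer" output.
data = [(0.0, 0.1), (0.5, 0.5), (1.0, 0.9)]
w, b = train(data)
```

Real networks like this one have millions of such parameters instead of two, but the loop is the same: guess, compare, adjust, repeat.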
You can see from the images below that there are still some kinks in the system, but, for the most part, it's perceptive. The most interesting aspect is the way the AI reads how the light falls in the photograph. Two of the photos were taken during sunset, and somehow it sees the quality of the light and applies a lower color temperature. Or it seems to, at least. It might just skew white toward a yellow cast (evident in how many photos are converted to sepia). Skin tones seem to confuse the algorithm: my self-portraits ended up making my skin red or yellow, while my wife, Diane, inhabits a land of sepia tones.

The more light sources in a scene, the less chance it seems to have of selecting the right parameters (you can see this in the living room scene, for example, where one window is cyan and another is forest green). The AI sees a scattering of colors, as though reality were spray-painted by a four-year-old onto an oversized coloring book. The 'coloring' only stays within the lines when it encounters the most repetitive patterns, such as leaves, skies, and clouds. It will be interesting to see how it improves as more visual references and programming tags are applied to the various types of images it receives.
Below are some colorized photos and their black & white originals, which I shot on film or digital.
You can try it out for yourself here: