Wednesday, April 6, 2016

Highlights and new discoveries in Computer Vision, Machine Learning, and AI (March 2016)

In the third issue of this monthly digest series you can find out how Microsoft is bringing AI to the visually impaired, how to colorize your grayscale images, why a Google car caused a crash for the first time, and much more.

Microsoft Seeing AI

Last Thursday, Microsoft showed off its Seeing AI app for the first time. It's still under development, but it looks extremely promising.

Using a smartphone camera or a pair of camera-equipped smart glasses, the Seeing AI app can identify things in your environment—people, objects, and even emotions—to provide important context for what's going on around you.

With a swipe of the hand, the user can instruct the app to take a snapshot of the current visual scene and run it through image recognition software. The results are then read back to the user to help them make sense of their environment: "I see a young girl throwing an orange Frisbee in the park," the app will tell you. It works not only in a park, but also on a busy sidewalk before crossing the street, or in a restaurant when trying to read the menu.

Watch for yourself:

Source: BusinessInsider.

A convolutional neural network for automatic image colorization

Researchers from the University of California at Berkeley have developed a new computer vision system that generates color images from grayscale images in a way that looks convincing to humans. They trained a convolutional neural network on a set of over a million color images so that it could learn how image features tend to be colored. When the system is presented with a new image, it identifies those features and tries to figure out what colors they might have. The results are quite impressive:

The software is fully automated: all the user has to do is provide the grayscale image, and the system takes an educated guess at coloring it.
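The general recipe behind such systems can be illustrated with a toy sketch. This is not the Berkeley team's code; the helper below is a hypothetical illustration of how training pairs are typically built: the color information is discarded, so the network's input is the grayscale (luminance) channel and its supervision target is the original color image.

```python
import numpy as np

def make_training_pair(rgb):
    """Build one (input, target) pair for a colorization model.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns (grayscale input, original color target).
    """
    # ITU-R BT.601 luma approximation of grayscale
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray, rgb

# Toy example with a random 4x4 "image"
img = np.random.rand(4, 4, 3)
x, y = make_training_pair(img)
print(x.shape, y.shape)  # (4, 4) (4, 4, 3)
```

At test time the model only sees the grayscale channel and must hallucinate plausible colors, which is why results that merely look convincing (rather than match the original exactly) count as success.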

In order to quantify how well the system is working, the team took color photos, turned them into grayscale, and ran them through their software. They then showed both the original and the processed images (without any labels) to volunteers, and asked them to work out which was which. The software managed to trick the participants 20% of the time, which is apparently a much higher "fool rate" than previous studies have achieved.
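That evaluation boils down to a simple proportion. A minimal sketch, where each trial records whether a volunteer mistook the recolorized image for the real one (the trial data here is made up for illustration):

```python
def fool_rate(choices):
    """choices: list of booleans, True if the participant picked the
    machine-colorized image as the real one."""
    return sum(choices) / len(choices)

# Toy example: 2 of 10 judgments were fooled -> 20% fool rate
trials = [True, False, False, True, False,
          False, False, False, False, False]
print(f"Fool rate: {fool_rate(trials):.0%}")  # Fool rate: 20%
```

Note that in this two-alternative setup a perfect colorizer would top out around 50%, since at that point volunteers are guessing at random.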

Here it is applied to some black-and-white images by Ansel Adams:

Source: Gizmodo.

AlphaGo defeats Lee Sedol 4–1 in Google DeepMind Challenge Match

DeepMind's groundbreaking artificial intelligence, AlphaGo, defeated Lee Sedol in the final game of the Google DeepMind Challenge Match on March 15, 2016, winning the five-game match with a 4–1 score. Go is an ancient, abstract strategy board game that had long been thought too complex for AI to conquer.

After Lee pulled off a surprise win in the fourth game, hopes of a repeat performance in the fifth were high amongst Lee’s fans, but it was not to be. Although Lee got off to a good start in game five, and AlphaGo even made a miscalculation around move 50, the computer’s superior judgment and efficiency of play eventually won the day.

AlphaGo now goes down in history as the first computer Go program to defeat a top professional player, and was awarded an honorary professional 9-dan rank by the Korean Baduk Association (baduk is the Korean word for Go). Although AI may now outwit humans in chess and Go, it still cannot compete in games like blackjack and poker, which require intuition and judgment for bluffing, value-betting, and re-stealing.

Source: GoGameGuru.


NVIDIA has announced what it calls the world's first deep learning supercomputer. The DGX-1, built on NVIDIA's recently announced Pascal architecture, is as powerful as 250 conventional servers and can deliver 170 teraflops. Announcing it at the GPU Technology Conference in San Jose, NVIDIA CEO Jen-Hsun Huang described the computer as a "data center in a box", using the popular neural network AlexNet as an example: training AlexNet to recognize images would take 150 hours on a dual-Xeon system, but only two hours on the DGX-1. The supercomputer is likely to be used for deep learning and artificial intelligence, Huang said. (Wired)
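Taking the keynote's figures at face value (150 hours versus two hours, as quoted above), the claimed reduction in AlexNet training time works out to roughly a 75x speedup:

```python
# Back-of-the-envelope check of the speedup implied by the quoted figures
dual_xeon_hours = 150  # claimed AlexNet training time on a dual-Xeon system
dgx1_hours = 2         # claimed AlexNet training time on the DGX-1

speedup = dual_xeon_hours / dgx1_hours
print(f"DGX-1 trains AlexNet ~{speedup:.0f}x faster")  # ~75x faster
```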

A six-rotor drone from start-up Flirtey has made the first ever fully autonomous urban delivery. The drone delivered bottled water, food, and a first-aid kit to a house in a small town in Nevada. While Flirtey is making headlines with its drone delivery success, pizza company Domino's is readying an unmanned ground vehicle in Australia for its pizza delivery service. (BigStory, PopularMechanics)

Professor Tony Dyson, who built the original Star Wars R2-D2 droid, has died on the Maltese island of Gozo. The 68-year-old Briton was found by police after a neighbor called them, concerned his door was open. He is thought to have died of natural causes; a post-mortem is being carried out to confirm this. Dyson was commissioned to make eight R2-D2 robots for the film series, and said working on them was "one of the most exciting periods of my life". He also worked on Superman II, Moonraker, and Dragonslayer, and was nominated for an Emmy for his film special effects supervision. (BBC, The Verge)

On Wednesday, March 23, Microsoft unleashed its brand new AI on Twitter. Her name was Tay, and she was programmed to tweet like a teenage girl. Within 24 hours, she was tweeting like a Nazi. Microsoft didn't intend for that to happen, of course; it wanted to test and improve its algorithm for conversational language. According to Microsoft, Tay was built by "mining relevant [anonymous] public data", which was "modeled, cleaned, and filtered" to create her personality. The filtering went out the window when she went live, though, and you can see the results above. Thankfully, she didn't actually develop a hatred for humanity; she just parroted what users gave her. Microsoft has put Tay on ice for now, but this exercise in creating artificial intelligence is just one step in humanity's quest to create the perfect AI. (BigThink)

Microsoft wasn't the only big company with embarrassing news: Google revealed that one of its self-driving cars had caused a crash (with no injuries) that, for the first time, was not directly linked to human error but to an incorrect assumption the car made. On the positive side, a low-speed crash of this kind is actually a good opportunity to learn from and improve the safety of autonomous car programming. (TheVerge)

Next month

Anything I missed? Sound away in the comment section! Have something of interest or want your discovery to be considered for next month's issue? Let me know via mbeyeler (at) uci (dot) edu.
