In the second issue of this monthly digest series you can find out how DeepMind beat the reigning human champion at the game of Go, why a robot is playing art critic in a museum in Paris, why your phone might soon have a brain of its own, and much more.
Google's AI beats top human player at Go
Right after last month's digest it was announced that a computing system developed by Google researchers in Great Britain has beaten a top human player at the game of Go, the ancient Eastern contest of strategy and intuition that has bedeviled AI experts for decades.
Machines have topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, even Jeopardy!. But with Go—a 2,500-year-old game that's exponentially more complex than chess—human grandmasters had maintained an edge over even the most agile computing systems. Earlier this month, top AI experts outside of Google questioned whether a breakthrough could occur anytime soon, and as recently as last year, many believed another decade would pass before a machine could beat the top humans. But Google has done just that.
Researchers at DeepMind staged this machine-versus-man contest in October, at the company's offices in London. The DeepMind system, dubbed AlphaGo, matched its artificial wits against Fan Hui, Europe's reigning Go champion, and the AI system went undefeated in five games witnessed by an editor from the journal Nature and an arbiter representing the British Go Federation.
DeepMind's architecture is described in Nature. As its name suggests, the system makes clever use of deep learning techniques. Using a vast collection of Go moves from expert players (about 30 million moves in total), DeepMind researchers trained their system to play Go on its own. But this was merely a first step. In theory, such training only produces a system as good as the best humans. To beat the best, the researchers then matched their system against itself. This allowed them to generate a new collection of moves they could then use to train a new AI player that could top a grandmaster.
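The second stage, improving through self-play, can be illustrated with a deliberately tiny sketch. The code below is not DeepMind's method (AlphaGo pairs deep policy and value networks with Monte Carlo tree search); it only shows the self-play reinforcement idea on a five-stone game of Nim, and every name in it is hypothetical.

```python
import random

# Toy self-play reinforcement (not AlphaGo): five-stone Nim, where a move
# takes 1 or 2 stones and whoever takes the last stone wins. The "policy"
# is a table of move counts; moves made by the winning side get reinforced.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose(policy, stones):
    # Sample a move with probability proportional to its learned count.
    moves = legal_moves(stones)
    weights = [policy[(stones, m)] for m in moves]
    return random.choices(moves, weights=weights)[0]

def self_play_train(policy, games=5000):
    for _ in range(games):
        stones, player = 5, 0
        history = {0: [], 1: []}
        winner = 0
        while stones > 0:
            m = choose(policy, stones)
            history[player].append((stones, m))
            stones -= m
            winner = player          # whoever took the last stone wins
            player = 1 - player
        for state_move in history[winner]:
            policy[state_move] += 1  # reinforce only the winner's moves
    return policy

random.seed(0)
policy = {(s, m): 1 for s in range(1, 6) for m in (1, 2) if m <= s}
self_play_train(policy)
```

Here the "network" is just a table of counts: taking 2 stones from 5 is the provably winning opening, so self-play reinforces it far more often than the alternative. AlphaGo's contribution was making the analogous self-improvement loop work with deep neural networks on the vastly larger game tree of Go.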
Scientists made a robot art critic that forms its own opinions
A robot art critic—complete with a bowler hat and scarf—is strolling the galleries of the Musée du quai Branly in Paris and forming his own opinions about what is good and what is bad. The Berenson robot, developed in France in 2011 and named after American art expert Bernard Berenson, is the brainchild of anthropologist Denis Vidal and robotics engineer Philippe Gaussier. Its programming allows it to record reactions of museum visitors to certain pieces of art and then use the data to develop its own unique taste, which allows Berenson to judge whether or not it likes a certain work of art within an exhibition.
Vidal said: "When he likes something, he goes in this direction and smiles. When he does not like, he goes away and he frowns—that's how it works. Basically, the idea is by doing so, it adapts itself to its environment, on the basis of this artificial taste, and the aim is to develop a robot that's the equivalent of aesthetic exploration of the world and to see if because of that it may adapt itself more easily to the world around and make other things on this basis."
Vidal explained that while at first Berenson's tastes are dependent on the tastes of those around him, eventually the robot will sort out its own personalized taste, which is different from that of any other person, or any other robot.
The experimental setup is also a durability test for the neural network algorithm deployed on the robot, Gaussier clarified. Berenson has been strolling the museum for weeks running the same algorithms that allow him to sense his environment, recognize people's reactions, and associate sensory cues with values.
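That last idea, associating sensory cues with values, can be sketched in miniature. The following is a hypothetical toy, not Berenson's actual software (which runs a neural network on the robot); it simply averages the observed visitor reactions per artwork feature and judges a new piece from those averages.

```python
from collections import defaultdict

# Toy "artificial taste" model (hypothetical): learn a value per artwork
# feature from visitor reactions, then judge unseen pieces by those values.

class TasteModel:
    def __init__(self):
        self.total = defaultdict(float)  # summed reactions per feature
        self.count = defaultdict(int)    # observations per feature

    def observe(self, features, reaction):
        """Record a reaction to a piece: +1 for a smile, -1 for a frown."""
        for f in features:
            self.total[f] += reaction
            self.count[f] += 1

    def judge(self, features):
        """Mean learned value of the piece's features; > 0 means 'like'."""
        known = [self.total[f] / self.count[f]
                 for f in features if self.count[f]]
        return sum(known) / len(known) if known else 0.0

taste = TasteModel()
taste.observe({"bright", "abstract"}, +1)
taste.observe({"abstract", "mask"}, +1)
taste.observe({"mask", "dark"}, -1)
print(taste.judge({"abstract", "bright"}))  # → 1.0, i.e. "like"
```

As in the article, the model's taste starts out as a reflection of the visitors around it, but once features co-occur in new combinations the judgments it produces are its own.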
Could Berenson be used to infer popular opinions about the exhibited art pieces, I ask Gaussier. "Not necessarily," he chuckles. "It turns out that for most installations you can find groups of people with completely diverging opinions, no matter how popular the piece." Perhaps this observation agrees with the notion that for an art piece to be popular it need not be the easiest to interpret; it must simply be stimulating.
MIT has announced that it has designed a new chip that can perform powerful artificial intelligence tasks right on your phone. Called Eyeriss, the chip has 168 cores and is 10 times as efficient as current mobile GPUs. Each core has its own memory where it can store and analyze data; if it needs the help of another core, it compresses the data before sharing it. Because Eyeriss allows cores to talk to each other directly, they don't need to use up a phone's main memory. (MIT Press)
Nervana Systems launched a deep learning cloud while it builds a chip designed for AI. Today Nervana's cloud runs on graphics processors purchased from NVIDIA, but the founders hope to replace the underlying hardware with specialized chips of their own design by the end of 2016. The software framework, called Neon, competes with others such as Torch and Caffe, as well as those offered by Google (TensorFlow) and Microsoft (CNTK). Nervana CEO Naveen Rao maintains that his framework, however, is 10 times faster running on the NVIDIA hardware, faster even than NVIDIA's own software. This speed matters because training and then running a neural network can take weeks or months, so any jump in speed, even just halving the time, is a significant savings. (Fortune)
Yoshua Bengio thinks that people who worry that we're on course to invent dangerously intelligent machines are misunderstanding the state of computer science. "There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power", he says. "The hype isn't about whether the stuff we're doing is useful or not—it is. But people underestimate how much more science needs to be done. And it's difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical." (Technology Review)
NVIDIA released driver support for Vulkan, a new low-level graphics application programming interface (API) from the Khronos Group that gives direct access to the GPU to developers who want the ultimate in control. With a simpler, thinner driver, Vulkan has lower latency and overhead than traditional OpenGL or Direct3D. Its efficient multi-threading across multi-core CPUs keeps the graphics pipeline loaded, enabling a new level of performance on existing hardware. It is also the first new-generation, low-level API that is cross-platform. (NVIDIA Developer Zone)
Google has prided itself on the fact that its self-driving car fleet has never been responsible for any of its crashes—they've always been caused by another (decidedly more human) force—but that may have just changed. According to a California DMV filing first reported by writer Mark Harris, one of Google's self-driving Lexus SUVs drove into the side of a bus at low speed. It looks like a bad assumption led to a minor fender-bender. (The Verge)
Chipmaker Intel and telecommunications company AT&T have partnered to develop a system that connects drones to devices on the ground via LTE wireless networks. (AT&T Press Release)
Last but not least, Disney is reportedly planning to use drones to keep an eye out for other drones that may be spying on the set of Star Wars Episode VIII. (Digital Trends)
Anything I missed? Sound away in the comment section! Have something of interest or want your discovery to be considered for next month's issue? Let me know via mbeyeler (at) uci (dot) edu.