Wednesday, May 11, 2016

Highlights and new discoveries in Computer Vision, Machine Learning, and AI (April 2016)

In the fourth issue of this monthly digest series, you can find out how Qualcomm is bringing deep learning and AI to smart devices, why Daimler sent self-driving trucks across Europe, how to imitate Rembrandt's best work with the help of deep learning, and much more.


The Next Rembrandt

From the Smithsonian comes news—and a must-see video—about a painting generated from data on more than 168,000 fragments of Rembrandt's work. Over the course of 18 months, a group of engineers, Rembrandt experts, and data scientists analyzed 346 of Rembrandt's works, then trained a deep learning engine to "paint" in the master's signature style.

In order to stay true to Rembrandt’s art, the team decided to flex the engine's muscles on a portrait. They analyzed the demographics of the people Rembrandt painted over his lifetime and determined that it should paint a Caucasian male between 30 and 40 years of age, complete with black clothes, a white collar and hat, and facial hair.

Using what it knew about Rembrandt's style and his use of everything from geometry to paints, the machine then generated a 2D work of art that could have come from the Dutch painter himself. But things didn't end there—the team also used 3D scans of the paint heights in Rembrandt's works to mimic his brushstrokes. Using a 3D printer and the resulting height map, they printed 13 layers of pigment. The final result—all 148 million pixels of it—looks so much like a painting Rembrandt might have made during his lifetime that you'd be forgiven for walking right by it in a collection of his work.
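
The team has not released its pipeline, but the final step of turning a computed height map into a fixed number of print passes can be sketched in a few lines. Everything below (the quantization scheme, the array sizes, and the function names) is an illustrative assumption, not the project's actual code:

    import numpy as np

    def height_map_to_layers(height_map, num_layers=13):
        """Quantize a paint-height map (e.g., in mm) into binary deposition
        masks, one per print pass: pass k deposits pigment wherever the
        desired relief is at least k layer-thicknesses tall."""
        layer_thickness = height_map.max() / num_layers
        return [height_map >= k * layer_thickness for k in range(1, num_layers + 1)]

    # Toy example: a 4x4 relief with paint heights up to 0.65 mm.
    height_map = np.array([[0.00, 0.10, 0.30, 0.65],
                           [0.10, 0.20, 0.40, 0.50],
                           [0.00, 0.10, 0.20, 0.30],
                           [0.00, 0.00, 0.10, 0.20]])
    masks = height_map_to_layers(height_map)
    print(len(masks), "print passes,", int(sum(m.sum() for m in masks)), "deposition events")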

But why do all this? As Ron Augustus, director of SMB Markets at Microsoft, puts it in the video, "We are using a lot of data to improve business life, but we haven't been using data that much in a way that touches the human soul." The project succeeds in showing how deep learning can create something amazingly similar to the work of a person, not to mention an artistic genius. That said, the experiment doesn't purport to be a replacement for Rembrandt, and the team behind the painting is careful to avoid the direct comparison.

Sources: Smithsonian, Slate.

Qualcomm releases Zeroth Deep Learning SDK

Earlier this month, Qualcomm released a new software development kit (SDK) for its "machine intelligence platform", Zeroth. The SDK will make it easier for companies to run deep learning programs directly on devices like smartphones and drones—provided those devices are powered by one of Qualcomm's chips, of course.

"This means better privacy and lower latency as there are no uploads to the cloud," Qualcomm's director of product management, Gary Brotman, told The Verge. He gives the example of a medical app that doctors might use to analyze skin conditions. "Doing the image classification on the device with no trip to the cloud makes sense," Brotman says. "And it doesn't matter the kind of data—it could be visual, it could be audio."

Although running deep learning operations locally means limiting their complexity, the programs you can run on a phone or other portable device are still impressive. The real limitation will be Qualcomm's chips: the new SDK, due in the latter half of 2016, will only work with the latest Snapdragon 820 processors, and the company isn't saying whether it plans to expand its availability.

Qualcomm joins other companies such as chipmaker Movidius, which has been working with Google to produce processors that specialize in machine vision. Movidius' Myriad chips currently power the first generation of Google's space-mapping Tango tablets and, more recently, autonomous drones from DJI. Movidius' success shows that there is a market even for specialized deep learning chips; time will tell how this technology compares to Qualcomm's more generalist approach. In any case, smartphones and other mobile gadgets are only going to get smarter.

Sources: Qualcomm, The Verge.

National Robotics Week

April was a big month for robotics in the US, as the community hosted the seventh annual National Robotics Week, with numerous activities, talks, and workshops for anyone with an interest in robots. The aim of Robotics Week is to engage and inspire people at a time when the field is moving forward so quickly that many struggle to keep up. For kids in particular, the activities offer a rare insight into what robotics is all about—and how much fun it can be.

Events took place across the country, and one group in California, Robot Garden, stood out by hosting the most activities, ranging from a Lego Robotics Club to a robot swap event. And don't worry if you missed it—Robot Garden hosts events all year round; just check out their website.

New 20TB dataset for autonomous driving research

The Mobile Robotics Group at the University of Oxford, UK, has announced an enormous dataset for autonomous vehicle research, collected by their RobotCar platform over the course of a year. The dataset contains 20TB of stereo, monocular, LIDAR, and GPS data from 100 repetitions of a consistent 10km route through Oxford under different combinations of weather, traffic, pedestrians, construction, and roadwork. This amounts to approximately 1000km of recorded driving, with over 20 million images collected from six cameras mounted on the vehicle.

The dataset joins a number of other vision-based autonomous driving datasets, such as KITTI and Cityscapes. However, neither of those datasets addresses the challenges of long-term autonomy, such as localization and mapping in urban environments under significantly different conditions, or mapping in the presence of structural change over time. The dataset is supposed to go live in mid-2016.
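
Until the data goes live, here is a rough sketch of how one might iterate over such a multi-sensor collection, pairing camera frames with GPS records by timestamp. The directory layout, file names, and CSV schema below are assumptions made up for illustration; the actual release will define its own format and tools.

    import csv
    from pathlib import Path

    def iter_traversal(root, traversal):
        """Yield (timestamp, image_path, gps_record) for one traversal of the
        route, assuming images are named by timestamp and GPS comes as a CSV
        keyed by the same timestamps. The layout is hypothetical."""
        base = Path(root) / traversal
        gps = {}
        gps_file = base / "gps.csv"
        if gps_file.exists():
            with open(gps_file) as f:
                for row in csv.DictReader(f):
                    gps[row["timestamp"]] = row
        for image_path in sorted((base / "stereo_left").glob("*.png")):
            yield image_path.stem, image_path, gps.get(image_path.stem)

    # Example: count the stereo-left frames in one (hypothetical) traversal.
    n = sum(1 for _ in iter_traversal("/data/robotcar", "2015-03-10-run-042"))
    print(n, "stereo-left frames found")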

Source: Mobile Robotics Group at the University of Oxford, UK.

Miscellaneous

Researchers met at the 2016 Sustainable Oceans Summit in Washington, DC to discuss solutions for cleaning up our seas. Merging "aqua" with "AI", Simeon's Silicon Valley robotics startup Aquaai presented a robot fish prototype—a Nemo-like clownfish—that lights up and tweets images from below the water's surface. The bio-inspired vehicle (BIV) could be used to inspect oil rigs and ship hulls. Thanks to its patent-pending fin propulsion system, it uses very little external power. (RoboHub)

The Kickstart Accelerator program in Switzerland is currently accepting applications from early-stage startups making "smart and connected machines". They are especially (but not exclusively) interested in applications in the fields of smart homes, transportation, mobility, and logistics. The three-month program runs from late August to November and provides mentoring, expertise, up to CHF 25K in seed funding, living stipends, and office space to the selected teams. The program culminates in a Demo Day, where each startup presents its company to Swiss venture capitalists, corporate leaders, and journalists.

As part of the European Truck Platooning Challenge, Daimler let three of its autonomous trucks loose on German and Dutch roads. The trucks drove from Stuttgart to Rotterdam using the company's Highway Pilot Connect system and reached their destination without incident on 7 April. (Daimler)

Drone racing is becoming a "sport". Starting this August, ESPN will broadcast the first US National Drone Racing Championships online. (ESPN)

Next month

Anything I missed? Sound off in the comments! Have something of interest, or want your discovery to be considered for next month's issue? Let me know via mbeyeler (at) uci (dot) edu.