Saturday, September 24, 2016

Highlights and new discoveries in Computer Vision, Machine Learning, and AI (August 2016)

In the latest issue of this monthly digest you can learn how AI is expediting breast cancer risk prediction, what facial analysis tells us about the demographics of Trump/Clinton followers, and how deep learning can help you sort your cucumbers.


AI expedites breast cancer risk prediction

Researchers from the Houston Methodist Cancer Center combined natural language processing and data mining techniques to develop AI software that can reliably interpret mammograms, assisting doctors with quick and accurate prediction of breast cancer risk. The software translates patient charts into diagnostic information at 30 times human speed and with 99 percent accuracy.

The software quickly scanned millions of patient charts, collected diagnostic features, and correlated mammogram findings with breast cancer subtypes. Whereas it takes clinicians 50-70 hours to manually review 50 charts, the AI reviewed 500 charts in only a few hours, saving over 500 physician hours. Clinicians then used the results of the analysis, such as the expression of tumor proteins, to accurately predict each patient’s probability of breast cancer diagnosis.

Currently, when mammograms fall into the suspicious category, which spans a broad range of 3 to 95 percent cancer risk, patients are recommended for biopsies. In the United States, 12.1 million mammograms and 1.6 million subsequent breast biopsies are performed every year. The American Cancer Society (ACS) estimates that about 20% of these biopsies are unnecessary, triggered by false-positive mammogram results in cancer-free breasts. The Houston Methodist team therefore hopes their AI software will help physicians better define the level of risk that warrants a biopsy, reducing the number of unnecessary procedures.

Source

Cucumber sorting powered by deep learning

A Japanese cucumber farmer by the name of Makoto Koike recently made the news for helping out his parents by using deep learning to sort cucumbers.

Makoto learned very quickly that sorting cucumbers is as hard and tricky as actually growing them. "Each cucumber has different color, shape, quality and freshness," he said. In Japan, each farm has its own classification standard and there's no industry standard. At Makoto's farm, they sort them into nine different classes, and his mother sorts them all herself—spending up to eight hours per day at peak harvesting times. There are also some automatic sorters on the market, but they have limitations in terms of performance and cost, and small farms don't tend to use them.

Instead, Makoto decided to use deep learning and Google's TensorFlow platform to devise a state-of-the-art automatic cucumber sorter. The system is built on TensorFlow's "Deep MNIST for Experts" code sample, with only minor modifications to the convolution, pooling, and final layers, adapting the network design to the pixel format of cucumber images and the number of cucumber classes. It works in two phases: a Raspberry Pi 3 serves as the main controller and takes images of the cucumbers, and a small-scale neural network running directly on the Raspberry Pi determines whether each image contains a cucumber. Positives are then forwarded to a larger neural network running on a Linux server for more detailed classification.
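
The two-phase control flow can be sketched in miniature. This is a minimal sketch with placeholder classifiers: a crude green-channel heuristic stands in for the small on-device network, and a stub stands in for the server-side nine-class sorter; the real system runs trained TensorFlow models at both stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the small on-device network: a crude "is it green?" check.
def stage1_is_cucumber(image):
    return image[..., 1].mean() > 0.5  # green channel dominates

# Placeholder for the larger server-side network sorting into 9 classes.
def stage2_classify(image):
    return int(rng.integers(0, 9))

def sort_pipeline(images):
    """Run the cheap filter first; only positives reach the big classifier."""
    return [stage2_classify(img) if stage1_is_cucumber(img) else None
            for img in images]

# Two fake 80x80 RGB images: a green "cucumber" and a dark background shot.
cucumber = np.zeros((80, 80, 3)); cucumber[..., 1] = 0.8
background = np.full((80, 80, 3), 0.1)
labels = sort_pipeline([cucumber, background])  # background is filtered out on-device
```

The point of the split is that the cheap first stage keeps most non-cucumber frames from ever leaving the Raspberry Pi, so the server only sees images worth classifying.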

The system still faces some challenges, though. To train the model, Makoto spent three months taking some 7,000 pictures of cucumbers sorted by his mother, and subsequently achieved 95% recognition accuracy on his validation set. However, when he applied the system to a real-world use case, accuracy dropped to 70%. Makoto suspects the model suffers from "overfitting" (the phenomenon where a model fits its small training dataset so closely that it fails to generalize to new data) because of the insufficient number of training images.
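
Overfitting on a small dataset is easy to reproduce in miniature. Below, a high-capacity model (a degree-6 polynomial, standing in for a neural network) is fit to just seven noisy points: training error collapses to near zero while error on unseen data stays large. The data and model here are illustrative, not Makoto's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underlying "true" relationship plus measurement noise
f = np.sin
x_train = np.linspace(0, 3, 7)                       # tiny training set
y_train = f(x_train) + 0.2 * rng.standard_normal(7)

# A degree-6 polynomial has enough capacity to pass through all 7 points,
# noise included -- the model memorizes rather than generalizes.
coeffs = np.polyfit(x_train, y_train, deg=6)

x_test = np.linspace(0.1, 2.9, 50)                   # unseen data
y_test = f(x_test) + 0.2 * rng.standard_normal(50)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
# train_mse is essentially zero; test_mse is orders of magnitude larger
```

The 95%-validation versus 70%-field gap Makoto observed is the same pattern at larger scale: the fix is more (and more varied) training data, or a lower-capacity model.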

Another challenge is that the system consumes a lot of time and computing power. The current sorter uses a typical Windows desktop PC to train the neural network model. Although it converts each cucumber photo into a low-resolution 80 x 80 pixel image, training the model on 7,000 images still takes two to three days. "Even with this low-res image, the system can only classify a cucumber based on its shape, length and level of distortion. It can't recognize color, texture, scratches and prickles," Makoto explained. Increasing image resolution by zooming into the cucumber would yield much higher accuracy, but would also increase training time significantly.
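
The downscaling step can be sketched as simple block averaging (the exact resizing method in Makoto's preprocessing isn't specified; this sketch assumes input dimensions are exact multiples of 80):

```python
import numpy as np

def downscale(image, out_size=80):
    """Average non-overlapping blocks to shrink an image to out_size x out_size.
    Assumes height and width are exact multiples of out_size."""
    h, w = image.shape[:2]
    bh, bw = h // out_size, w // out_size
    return image.reshape(out_size, bh, out_size, bw, *image.shape[2:]).mean(axis=(1, 3))

hi_res = np.random.default_rng(1).random((320, 320))   # stand-in for a photo
lo_res = downscale(hi_res)                             # 80 x 80 result
```

Each output pixel is the mean of a 4x4 block of input pixels, which is exactly why fine detail such as texture, scratches, and prickles is averaged away at this resolution.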

To improve his system, Makoto is eagerly awaiting Google's Cloud Machine Learning (CloudML) platform, which would allow him to harness hundreds of dedicated cloud servers to train his network using TensorFlow.

via


Twitter facial analysis reveals demographics of presidential campaign followers

If you're following Donald Trump or Hillary Clinton on Twitter, chances are your profile has been analyzed by a machine learning algorithm to determine your gender, age, ethnicity, and political influence.

The study was performed by Yu Wang and colleagues at the University of Rochester. The profile pictures of both candidates' followers were analyzed to determine gender and ethnicity, and a number of parameters (not limited to follower count) were used to estimate political influence. An interesting angle is the gender balance of Clinton's followers. While Clinton enjoys substantial female support among politicians, Wang and colleagues say there is good evidence that her support among average Democratic women has fallen sharply. However, this does not seem to have affected the gender balance among her supporters on Twitter: women make up 45 percent of her followers. Remarkably, Trump has an almost identical level of female support, also at 45 percent. “Apparently Trump’s feud with Megyn Kelly has not alienated female voters,” say Wang and co.

The racial diversity among the followers is a different story. Wang and co say that Clinton’s supporters are more likely to be African American or Hispanic than Trump’s, who are more likely to be white. Not surprisingly, this pattern is consistent with historical voting patterns.

The analysis of followers’ ages is put in perspective by the stereotypical idea that Republican Party followers tend to be older and white. Indeed, Wang and co say Trump’s followers include more older people than Clinton’s. However, Trump also has more very young followers, although many of them do not appear to be old enough to vote. Clinton has a stronger presence in the 18 to 40 age group.

The analysis of the social status of each candidate’s followers is curious. Wang and co measure it simply by counting each follower’s own number of followers, on the assumption that people with higher social status have more followers. “We find that individuals with only a few followers and individuals with hundreds of followers make up a larger share in the Trump camp than in the Clinton camp, while by contrast individuals with a few dozen to 200 followers have a larger presence among the Clintonists,” say Wang and co.
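
The bucketing described can be sketched with a histogram. The follower counts and bin edges below are made up for illustration, roughly matching the brackets the authors mention ("a few", "a few dozen to 200", and "hundreds"):

```python
import numpy as np

# Illustrative follower counts for a sample of ten accounts
followers = np.array([3, 8, 45, 120, 180, 250, 600, 1500, 12, 90])

# Hypothetical bin edges approximating the paper's social-status brackets
bins = [0, 10, 200, np.inf]
counts, _ = np.histogram(followers, bins=bins)
# counts[i] = number of accounts in each bracket: "a few", "dozens to 200", "hundreds+"
```

Comparing these per-bracket shares between the two follower populations is all the social-status analysis amounts to.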

Source via

Robots learn from observing

It is now possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for, according to researchers at the University of Sheffield. This could mean advances in the world of technology with machines able to predict, among other things, human behavior.

"Our study uses the Turing test to reveal how a given system—not necessarily a human—works", said Dr. Roderich Gross from the Department of Automatic Control and Systems Engineering at the University of Sheffield. "In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm—made of learning robots—under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators."

However, in this case the interrogators are not humans but computer programs that learn by themselves. Their task is to distinguish between robots from either swarm. "They are rewarded for correctly categorizing the motion data from the original swarm as genuine, and those from the other swarm as counterfeit", Dr. Gross explained. "The learning robots that succeed in fooling an interrogator—making it believe their motion data were genuine—receive a reward."

The advantage of this approach, called "Turing Learning", is that humans no longer need to tell machines what to look for. "Imagine you want a robot to paint like Picasso. Conventional machine learning algorithms would rate the robot’s paintings for how closely they resembled a Picasso. But someone would have to tell the algorithms what is considered similar to a Picasso to begin with. Turing Learning does not require such prior knowledge. It would simply reward the robot if it painted something that was considered genuine by the interrogators. Turing Learning would simultaneously learn how to interrogate and how to paint."
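
The reward structure resembles that of a generative adversarial setup and can be sketched in miniature. In this sketch the hidden rule is a constant robot speed, the "motion data" are noisy step lengths, and the interrogator is a fixed nearest-mean classifier rather than a learned one; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_SPEED = 2.0  # the hidden rule governing the original swarm

def motion_data(speed, n=200):
    """Noisy step lengths observed from a swarm moving at `speed`."""
    return speed + 0.1 * rng.standard_normal(n)

genuine = motion_data(TRUE_SPEED)

def model_reward(model_speed):
    """Fraction of a model swarm's data the interrogator accepts as genuine.
    This toy interrogator labels a sample genuine if it lies nearer the
    genuine cluster's mean than the imitating cluster's mean."""
    fake = motion_data(model_speed)
    g_mean, f_mean = genuine.mean(), fake.mean()
    accepted = np.abs(fake - g_mean) < np.abs(fake - f_mean)
    return accepted.mean()

close_imitator = model_reward(1.95)  # nearly matches the hidden rule
poor_imitator = model_reward(3.00)   # easy to tell apart
```

Models that imitate the hidden rule closely earn higher rewards, while in the full method the interrogator is trained at the same time, so both sides improve together.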

Their results demonstrate that collective behaviors can be directly inferred from motion trajectories of individuals in the swarm, which may have significant implications for the study of animal collectives. Turing Learning could also prove useful whenever a behavior is not easily characterizable using metrics, making it suitable for a wide range of applications.

Source via


The road to autonomous vehicles

A lot has happened in the world of autonomous vehicles. Google was hit by another high-profile resignation, as top roboticist Chris Urmson quit the company's self-driving car division. If this trend continues, Google might lose its status as a leader in the field.

Meanwhile, others working on autonomous cars are making preparations to get their vehicles out onto the roads. In Singapore, Delphi Automotive is testing on-demand self-driving taxis, while in the US, cab service Uber’s autonomous cars are arriving in Pittsburgh. Not to be outdone, Ford has promised it’ll have fleets of driverless cars out on the roads by 2021 and Airbus has announced plans to build helicopter-like autonomous flying taxis to tackle city traffic.

Miscellaneous

Intel acquired San Diego-based deep learning startup Nervana Systems for $350M or more. The company provides a full-stack software-as-a-service platform called Nervana Cloud that enables businesses to develop custom deep learning software. (Nervana Systems)

San Francisco-based company URX released DeepHeart, a neural network designed for the 2016 Physionet Challenge which can predict cardiac abnormalities from phonocardiogram (PCG) data. The project is implemented in Google TensorFlow and available on GitHub.

Researchers in Hawaii used drones to help them count whales from the air, without having to disturb the animals.

A research team from Harvard University introduced Octobot, the first ever 3D-printed autonomous, untethered, entirely soft robot. A common struggle in soft robotics is to replace rigid components such as batteries and electronic controls with analogous soft systems. Octobot shows that it is possible to manufacture key components of a simple, entirely soft robot with 3D printing, which lays the foundation for more complex designs. (Nature via RoboHub)

JupyterLab, a new version of the Jupyter Notebook application, has been released in pre-alpha stage to the community. The new application offers an improved user interface and experience, which will allow you to arrange the file manager, notebooks, terminals, etc. in a columnar layout for side-by-side comparison. A full release is planned for later this year. (Jupyter Blog)

Microsoft launched Researcher, a new AI-based assistant for Microsoft Word, which lets you incorporate information from outside sources easily into your Word document. (via VentureBeat)

Last but not least, check out this beautiful video that used deep learning to turn New York City scenery into animated paintings:

Edit: Apparently the Blogger server crashed and reverted this article to a draft version from Friday. I have updated it and am sharing it again.