Monday, October 17, 2016

Highlights and new discoveries in Computer Vision, Machine Learning, and AI (September 2016)

In the latest issue of this monthly digest series, you can learn how computer vision and AI are expediting brain tumor detection, how deep learning is used to teach autonomous vehicles how to drive, what Amazon is up to in the Cambridgeshire countryside, and much more.


Computer program beats doctors at distinguishing brain tumors from radiation changes

Computer programs have defeated humans in Jeopardy!, chess and Go. Now a program developed at Case Western Reserve University has outperformed physicians on a more serious matter. The program was nearly twice as accurate as two neuroradiologists in determining whether abnormal tissue seen on magnetic resonance images (MRI) was dead brain cells caused by radiation, called radiation necrosis, or whether brain cancer had returned.

“One of the biggest challenges with the evaluation of brain tumor treatment is distinguishing between the confounding effects of radiation and cancer recurrence,” said Pallavi Tiwari, assistant professor of biomedical engineering at Case Western Reserve and leader of the study. “On an MRI, they look very similar.” But treatments for radiation necrosis and cancer recurrence are far different. Quick identification can help speed prognosis and therapy and improve patient outcomes, the researchers say.

With further confirmation of its accuracy, radiologists using their expertise together with the program may be able to eliminate unnecessary and costly biopsies, Tiwari said. Brain biopsies are currently the only definitive test but are highly invasive and risky, causing considerable morbidity and mortality.

To develop the program, the researchers employed machine learning algorithms in conjunction with radiomics, the term used for features extracted from images using computer algorithms. The engineers, scientists and physicians trained the computer to identify radiomic features that discriminate between brain cancer and radiation necrosis, using routine follow-up MRI scans from 43 patients. The images were all from University Hospitals Case Medical Center. The team then developed algorithms to find the most discriminating radiomic features, in this case, textures that can’t be seen by simply eyeballing the images.
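The study's exact feature set and classifier aren't spelled out here, but the general radiomics recipe is well established. Below is a minimal sketch, assuming gray-level co-occurrence (GLCM) texture features and a random forest as stand-ins for whatever the team actually used; the patch data is randomly generated purely so the example runs.

```python
# A minimal radiomics sketch: extract texture features from MRI lesion
# patches, then train a classifier to separate radiation necrosis from
# tumor recurrence. Feature set and model are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def texture_features(patch):
    """GLCM texture statistics for one 2-D patch (uint8, values 0-255)."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

# Stand-in data: random "lesion patches" with fake labels. In the study,
# these would be ROIs from the 43 patients' follow-up MRI scans, labeled
# 1 = cancer recurrence, 0 = radiation necrosis.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, 40)

X = np.array([texture_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, np.array(labels), cv=5).mean())
```

With real data, the cross-validated score above is the kind of accuracy figure that would be compared against the neuroradiologists' performance.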

“What the algorithms see that the radiologists don’t are the subtle differences in quantitative measurements of tumor heterogeneity and breakdown in microarchitecture on MRI, which are higher for tumor recurrence,” said Tiwari, who was appointed to the Department of Biomedical Engineering by the Case Western Reserve School of Medicine.

Tiwari and co-author Anant Madabhushi don't expect the computer program to be used alone, but rather as a decision-support tool to help neuroradiologists improve their confidence in identifying a suspicious lesion as radiation necrosis or cancer recurrence. Next, the researchers are seeking to validate the algorithms’ accuracy using a much larger collection of images from different sites.


Teaching autonomous vehicles how to drive with deep learning

Startup company Drive.ai has announced that it’s going to use deep learning to teach its autonomous cars how to drive. That involves providing a host of example situations, objects and scenarios and then letting the system extrapolate how the rules it learns might apply to novel or unexpected experiences. It still means logging a huge number of driving hours to provide the system with basic information, but Carol Reiley, co-founder and president of Drive.ai, explained in an interview that it should also help the company’s self-driving vehicles handle edge cases better.

“We are using deep learning for more of an end-to-end approach. We’re using it not just for object detection, but for making decisions, and for really asking the question ‘Is this safe or not, given this sensor input?’ on the road,” Reiley explained. “A rule-based approach for something like a human on a bicycle will probably break if you see different scenarios or different viewpoints. We think that deep learning is definitely the key to driving because there are so many different edge cases.” Reiley says that Drive.ai has seen “millions of these cases,” including things like people doing cartwheels across roads, running around their test cars in circles, and even dogs on skateboards. She argues that you could never write a comprehensive rulebook that takes all of this into account, which is why the company considers deep learning necessary to really solve the problem.
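Drive.ai hasn't published its architecture, but the end-to-end idea Reiley describes (raw sensor input in, driving decision out, with no hand-written rules in between) can be illustrated with a toy model. Everything below, from the class name to the layer sizes and the three-way action space, is an illustrative assumption rather than the company's actual system.

```python
# A toy end-to-end model: a single network maps raw camera pixels
# directly to a driving decision, instead of chaining hand-coded rules.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self, n_actions=3):  # e.g. brake / slow / proceed
        super().__init__()
        self.features = nn.Sequential(            # convolutional encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.decide = nn.Linear(32, n_actions)    # decision head

    def forward(self, camera_frame):
        # camera_frame: (batch, 3, H, W) tensor of RGB pixel values
        return self.decide(self.features(camera_frame))

model = EndToEndDriver()
logits = model(torch.randn(1, 3, 128, 128))  # one fake camera frame
print(logits.softmax(dim=-1))  # "is this safe?" as action probabilities
```

In practice, such a model would be trained on the "millions of cases" Reiley mentions, each labeled with the safe action, so that cartwheelers and skateboarding dogs become data points rather than special-case rules.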


Telepresence robots

Chronically ill, homebound children who use robotic surrogates to “attend” school feel more socially connected with their peers and more involved academically, according to a first-of-its-kind study by University of California, Irvine education researchers.

“Every year, large numbers of K-12 students are not able to go to school due to illness, which has negative academic, social and medical consequences,” said lead author Veronica Newhart, a Ph.D. student in UCI’s School of Education. “They face falling behind in their studies, feeling isolated from their friends and having their recovery impeded by depression. Tutors can make occasional home visits, but until recently, there hasn’t been a way to provide these homebound students with inclusive academic and social experiences.”

Telepresence robots could do just that. The Internet-enabled, two-way video streaming automatons have wheels for feet and a screen showing the user’s face at the top of a vertical “body.” From home, a student controlling the device with a laptop can see and hear everything in the classroom, talk with friends and the teacher, “raise his or her hand” via flashing lights to ask or answer questions, move around and even take field trips.

The exploratory case study – co-authored by Mark Warschauer, UCI professor of education and informatics – involved five homebound children, five parents, 10 teachers, 35 classmates and six school/district administrators. The students – four males and one female – ranged in age from 6 to 16, and their chronic illnesses included an immunodeficiency disorder, cancer and heart failure.

Telepresence robots have also been touted as a great research platform for human-robot interaction, navigation, and perception. Studies suggest that assisted navigation reduces the number of collisions with objects in the environment, making the case that integrating findings from recent AI research into the platform could greatly improve these robots' usability.
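As a rough illustration of what "assisted navigation" can mean in practice, here is a minimal sketch in which the robot blends the remote user's drive command with a simple proximity-based override. The function name, thresholds, and sensor format are assumptions for illustration, not details from the studies cited.

```python
# Assisted navigation sketch: the teleoperator's command is executed
# as-is in open space, scaled down near obstacles, and stopped when an
# obstacle is dangerously close (the user may still turn in place).
def assist(user_forward, user_turn, ranges, stop_dist=0.3, slow_dist=0.8):
    """ranges: distances (m) from a forward-facing range-sensor arc.
    Returns a (forward, turn) command that is safe to execute."""
    nearest = min(ranges)
    if nearest < stop_dist:                # too close: block forward motion
        return 0.0, user_turn
    if nearest < slow_dist:                # scale speed down near obstacles
        user_forward *= (nearest - stop_dist) / (slow_dist - stop_dist)
    return user_forward, user_turn

print(assist(0.5, 0.1, [1.2, 0.6, 0.9]))  # nearest = 0.6 m -> slows forward
```

Even this crude blending captures the idea behind the collision-reduction findings: the user stays in control of intent while the robot enforces local safety.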

This fall, telepresence robots will become available on the UCI campus – a gift from the class of 2016. “This is a solution for any student who’s prevented from completing a course or degree program because of a long-term injury or illness,” said Newhart, who will soon launch additional studies in school districts across the country.


Miscellaneous

A project from the Rehabilitation Engineering Laboratory at ETH Zurich, Switzerland uses the power of thought to control a robot that helps move a paralyzed hand, which could drastically change the therapy and daily lives of stroke patients. The lab developed an extraordinarily light exoskeleton that leaves the palm of the hand more or less free, allowing patients to perform daily activities that support both motor and somatosensory functions with ease. The exoskeleton features three overlapping leaf springs, where a motor moves the middle spring, which transmits the force to the different segments of the finger through the other two springs. The fingers thus automatically adapt to the shape of the object the patient wants to grasp. (via NeuroscienceNews)


Eric Horvitz of Microsoft Research invited leading thinkers to join him in launching a 100-year effort, hosted at Stanford University, to study the effects of artificial intelligence on society. The group plans to produce a report every five years to track the impact of AI on all aspects of work, family and social life – for a century! The studies are expected to develop assessments that provide expert-informed guidance for directions in AI research, development and systems design, as well as programs and policies to help ensure that these systems broadly benefit individuals and society. (Stanford)

With the dawn of drone journalism looming, the Electronic Privacy Information Center (EPIC), a privacy advocacy organization in Washington, has voiced concerns that increased use of drones could threaten people’s right to privacy, a risk the organization says current FAA regulations do not cover. (via IEEE Spectrum)

Amazon has apparently been testing its delivery drones in the UK, hiding its activities behind "a wall of haybales" in the middle of the Cambridgeshire countryside. A blue control tower with aerials has also been spotted in the remote field, which is constantly patrolled by security men and vans. The company has also created a manicured landing site, about the size of a football pitch, in front of the control tower, to resemble a front garden. (via DailyMail)

Victor Scheinman, inventor of the Stanford Arm, one of the first computer-controlled robotic arms, passed away in September, aged 73. He was one of the fathers of industrial robotics, and his design, commercialized as the Programmable Universal Machine for Assembly (PUMA) robotic arm, is still used across industry today. Scheinman was awarded the Robotics Industries Association’s Joseph F. Engelberger Award for technology in 1986, in recognition of his accomplishments. In 1990, he received the Leonardo da Vinci Award from the American Society of Mechanical Engineers, its top award in product design and invention. (via RoboHub)