
New Approaches and Tasks for Machine Learning

March 21, 2014


“We all know how difficult it can be using a text term to search for multimedia content online,” writes Darren Quick. “Many Internet radio sites, such as Pandora, pay experts in music theory to categorize songs and make catalogs easier to search, but this is time consuming and expensive. Another method, often employed by online music stores, involves basing recommendations on the purchases of others with similar tastes in music. However, this kind of collaborative filtering only works with music that is already popular, severely limiting things for those after something a bit more off the beaten track.” [“‘Game-powered machine learning’ could make searching for music online easier,” Gizmag, 7 May 2012] Quick goes on to report that a team from the University of California San Diego has developed software that taps the inputs of “unpaid music fans” who are “enticed by an online Facebook game called Herd It.” So what, you might ask, is the relationship between human game players and machine learning? Quick explains:

“Players are asked to place music into different categories (romantic, jazz, saxophone, happy, etc.) after listening to a snippet – this is the ‘game-powered’ bit. This human knowledge is then used to train the computer, which analyzes the waveforms of songs in these categories looking for commonalities in acoustic patterns – this is the ‘machine-learning’ part. The system can then use these patterns to automatically categorize the millions of songs on the internet – be they popular or previously unheard of. And because the descriptions of the music are in text form, people can search the database using text. ‘This is a very promising mechanism to address large-scale music search in the future,’ said Gert Lanckriet, a professor of electrical engineering at the UC San Diego Jacobs School of Engineering and leader of the UCSD study. To improve its auto-tagging algorithms, the system is also able to automatically create new Herd It games to collect the data it most needs. If it’s struggling to recognize jazz music patterns, for example, it can request more jazz to study.”
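To make the “game-powered” and “machine-learning” halves of that loop a little more concrete, here is a rough Python sketch of the general idea. Everything in it, including the tag vocabulary, the crude spectral features, and the synthetic clips and labels, is an assumption of mine for illustration; it is not the UCSD team’s actual system.

```python
# A minimal sketch of the game-powered machine-learning loop described above.
# The tags, features, thresholds, and data are illustrative assumptions, not the UCSD system.
import numpy as np
from sklearn.linear_model import LogisticRegression

TAGS = ["jazz", "romantic", "saxophone", "happy"]   # example tag vocabulary

def waveform_features(clip, n_bands=16):
    """Crude spectral features: average magnitude in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(clip))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

# "Game-powered" part: pretend Herd It-style players tagged 200 short clips.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(2048) for _ in range(200)]              # stand-in audio snippets
human_tags = {t: rng.integers(0, 2, size=len(clips)) for t in TAGS}  # stand-in player labels

# "Machine-learning" part: one binary classifier per tag, trained on waveform features.
X = np.vstack([waveform_features(c) for c in clips])
models = {t: LogisticRegression(max_iter=1000).fit(X, y) for t, y in human_tags.items()}

def auto_tag(clip, threshold=0.5):
    """Tag an unseen clip with text labels so it becomes text-searchable."""
    x = waveform_features(clip).reshape(1, -1)
    return [t for t, m in models.items() if m.predict_proba(x)[0, 1] >= threshold]

# Active-learning step: request a new game for the tag the models are least sure about.
confidence = {t: np.mean(np.abs(m.predict_proba(X)[:, 1] - 0.5)) for t, m in models.items()}
neediest_tag = min(confidence, key=confidence.get)
print("Auto-tags for a new clip:", auto_tag(rng.standard_normal(2048)))
print("Next game should collect more examples of:", neediest_tag)
```

The closing step mirrors the article’s point about auto-generated games: the tag the classifiers are least confident about is the one the next round of games should target.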

There are two things I like about that story. First, it underscores how machine learning can be utilized in unexpected ways (e.g., categorizing music using text and waveforms). Second, it highlights the fact that as machines learn, they can use that knowledge to make autonomous decisions (e.g., create new games). UCSD is only one of the top-tier universities studying the best ways to help machines learn. At Carnegie Mellon University, “researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean.” [“New Research Aims To Teach Computers Common Sense,” by Kevin Begos, Associated Press, Manufacturing.net, 25 November 2013] If that sounds a lot like the Google project in which a computer learned to identify a cat on its own, you won’t be surprised to learn that Google is also involved in the CMU project. Begos continues:

“The system at Carnegie Mellon University is called NEIL, short for Never Ending Image Learning. In mid-July [2013], it began searching the Internet for images 24/7 and, in tiny steps, is deciding for itself how those images relate to each other. The goal is to recreate what we call common sense — the ability to learn things without being specifically taught. It’s a new approach in the quest to solve computing’s Holy Grail: getting a machine to think on its own using a form of common sense. The project is being funded by Google and the Department of Defense’s Office of Naval Research.”

Abhinav Gupta, a professor in the Carnegie Mellon Robotics Institute, told Begos, “Any intelligent being needs to have common sense to make decisions.” If you think that machine learning comes easy, you’d be wrong. Begos writes:

“NEIL uses advances in computer vision to analyze and identify the shapes and colors in pictures, but it is also slowly discovering connections between objects on its own. For example, the computers have figured out that zebras tend to be found in savannahs and that tigers look somewhat like zebras. In just over four months, the network of 200 processors has identified 1,500 objects and 1,200 scenes and has connected the dots to make 2,500 associations. Some of NEIL’s computer-generated associations are wrong, such as ‘rhino can be a kind of antelope,’ while some are odd, such as ‘actor can be found in jail cell’ or ‘news anchor can look similar to Barack Obama.’ But Gupta said having a computer make its own associations is an entirely different type of challenge than programming a supercomputer to do one thing very well, or fast. For example, in 1985, Carnegie Mellon researchers programmed a computer to play chess; 12 years later, a computer beat world chess champion Garry Kasparov in a match.”
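As a way of picturing what “connecting the dots” might look like, here is a toy sketch in Python. The image records, appearance vectors, and thresholds are all placeholders I made up; NEIL’s real detectors and clustering are far more sophisticated.

```python
# A toy sketch of the kind of relation mining NEIL is described as doing.
# The image records, appearance vectors, and thresholds below are fabricated placeholders.
from collections import Counter
from itertools import combinations

# Each record: the objects detected in one image and the scene it was classified as.
images = [
    {"objects": {"zebra"}, "scene": "savannah"},
    {"objects": {"zebra", "acacia"}, "scene": "savannah"},
    {"objects": {"tiger"}, "scene": "jungle"},
    {"objects": {"zebra"}, "scene": "savannah"},
]

# "X can be found in Y": count how often each object appears in each scene.
object_scene = Counter((obj, img["scene"]) for img in images for obj in img["objects"])
for (obj, scene), n in object_scene.items():
    total = sum(c for (o, _), c in object_scene.items() if o == obj)
    if n / total > 0.6:                      # arbitrary confidence threshold
        print(f"{obj} tends to be found in {scene}")

# "X looks similar to Z": compare (made-up) appearance feature vectors.
appearance = {"zebra": [0.9, 0.1, 0.8], "tiger": [0.8, 0.2, 0.7], "acacia": [0.1, 0.9, 0.2]}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

for a, b in combinations(appearance, 2):
    if cosine(appearance[a], appearance[b]) > 0.95:
        print(f"{a} looks somewhat like {b}")
```

Even in this tiny example you can see where odd associations come from: the system only knows what co-occurs and what looks alike, not whether the connection actually makes sense.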

I’m not sure that the association “actor can be found in jail” is so surprising, given that the media publishes so many stories about actors behaving badly and far fewer about actors doing good. Nevertheless, the point is that, when starting from scratch without any human input or correction, it takes computers a long time to learn on their own, even running 24/7. That’s why at Enterra® we use experts to help speed up the learning process for our Cognitive Reasoning Platform™ (CRP) as well as the world’s largest common sense ontology. The faster a machine learns what is correct and incorrect in any given situation, the more useful it is in a business setting. CMU researchers told Begos that NEIL’s motto is “I Crawl, I See, I Learn.” Catherine Havasi, an artificial intelligence expert at the Massachusetts Institute of Technology, told Begos that “humans constantly make decisions using ‘this huge body of unspoken assumptions,’ while computers don’t.” That is why, she told him, “humans can also quickly respond to some questions that would take a computer longer to figure out.” It also highlights why human/computer partnerships are likely to dominate the business landscape for the foreseeable future.


Larry Hardesty, from the MIT News Office, reports that at the 2013 IEEE International Conference on Robotics and Automation, MIT students from the Learning and Intelligent Systems Group at the Computer Science and Artificial Intelligence Laboratory presented “a pair of papers showing how household robots could use a little lateral thinking to compensate for their physical shortcomings.” What, you might ask, is lateral thinking? Lateral thinking involves approaching a problem indirectly, from an unexpected angle, rather than attacking it head-on. One example I’ve heard goes like this: “Grandma is trying to knit and little Genny keeps bothering her. Mother suggests putting Genny in the playpen. Father, using lateral thinking, suggests putting Grandma in the playpen.” That, however, is not exactly the type of challenge facing the MIT students. Hardesty explains:

“Many commercial robotic arms perform what roboticists call ‘pick-and-place’ tasks: The arm picks up an object in one location and places it in another. Usually, the objects — say, automobile components along an assembly line — are positioned so that the arm can easily grasp them; the appendage that does the grasping may even be tailored to the objects’ shape. General-purpose household robots, however, would have to be able to manipulate objects of any shape, left in any location. And today, commercially available robots don’t have anything like the dexterity of the human hand. … One of the papers concentrates on picking, the other on placing. Jennifer Barry, a PhD student in the group, describes an algorithm that enables a robot to push an object across a table so that part of it hangs off the edge, where it can be grasped. Annie Holladay, an MIT senior majoring in electrical engineering and computer science, shows how a two-armed robot can use one of its graspers to steady an object set in place by the other.”
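Barry’s push-to-the-edge trick lends itself to a simple back-of-the-envelope check: how much of an object can overhang the table before it tips, and how far must it be pushed so a gripper can reach under it? The little function below is my own illustrative calculation under a uniform-density assumption, not her actual algorithm.

```python
# An illustrative calculation (not Barry's actual algorithm) of the push-to-the-edge trick:
# slide a box toward the table edge until enough of it overhangs to grasp, while keeping
# its center of mass over the table so it does not tip off.
def push_distance(box_length, grasp_overhang, distance_to_edge):
    """
    box_length       : length of the box along the push direction (m)
    grasp_overhang   : how much must hang off the edge for the gripper to reach under (m)
    distance_to_edge : gap between the box's leading face and the table edge (m)
    Returns how far to push, or None if the box would tip before it becomes graspable.
    """
    max_safe_overhang = box_length / 2      # assume uniform density: center of mass at the midpoint
    if grasp_overhang >= max_safe_overhang:
        return None                         # it would tip off the table before we could grasp it
    return distance_to_edge + grasp_overhang

print(push_distance(box_length=0.30, grasp_overhang=0.05, distance_to_edge=0.20))  # 0.25 m
print(push_distance(box_length=0.08, grasp_overhang=0.05, distance_to_edge=0.20))  # None: too small
```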

Like the Genny/Grandma example noted above, Holladay’s approach uses lateral thinking in that her “algorithm in some sense inverts the ordinary motion-planning task. Rather than identifying paths that avoid collisions and adhering to them, it identifies paths that introduce collisions and seals them off.”
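Here is one way to picture that inversion, as a toy two-dimensional sketch rather than the actual MIT planner: enumerate the directions in which a just-released object might slip, flag the ones that would carry it out of its goal region, and park the second gripper so it blocks, or “seals off,” exactly those failure paths. The goal tolerance and slip model below are assumptions of mine.

```python
# A toy sketch of "identify the failure paths and seal them off," not the actual MIT planner.
# The goal tolerance and slip model are assumptions made for illustration.
import math

goal_radius = 0.02               # the placed object must stay within 2 cm of its goal (assumed)
retract_direction = math.pi      # assume the placing arm pulls back along -x when it lets go

def expected_slip(theta):
    """Assumed disturbance model: the object slips mostly toward the retracting arm."""
    return 0.03 * max(0.0, math.cos(theta - retract_direction))

# Sample candidate slip directions and flag the ones that would end outside the goal region.
risky_directions = []
for k in range(16):
    theta = 2 * math.pi * k / 16
    if expected_slip(theta) > goal_radius:
        risky_directions.append(theta)       # this path ends in failure; it must be sealed off

if risky_directions:
    # Brace from the middle of the risky directions so the free gripper blocks them all.
    block_angle = math.atan2(sum(math.sin(t) for t in risky_directions),
                             sum(math.cos(t) for t in risky_directions))
    print(f"Steady the object with the second gripper at bearing {math.degrees(block_angle):.0f} degrees")
else:
    print("No slip direction leaves the goal region; no bracing needed")
```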


Let me end with one last example of machine learning that demonstrates how broadly machine learning can be applied. Clint Boulton notes that one problem “that has vexed retailers since the dawn of e-commerce” is “helping consumers figure out whether or not the apparel they want to buy will not only fit, but flatter them.” [“Startup Tries Amazon, Netflix Analytics Models On For Size,” Wall Street Journal, 19 February 2014] Boulton reports that a startup called True Fit is “using machine learning software” to solve that problem. He explains how this works:

“True Fit has built a vast database on how brands expect their apparel to be worn, including measurements, styles and colors from 1,000 clothing manufacturers and retailers. Macy’s Inc., Nordstrom Inc. and Guess? Inc. use the True Fit adviser tool on their websites to show customers how clothes they’re interested in will look on them. … Previous methods of right-sizing clothes online include everything from using a tape measure and entering in the data online to guessing and hoping for the best. But Mr. Adler said True Fit took its inspiration from Amazon Inc., Netflix Inc. and Pandora Media Inc., which process large amounts of transactional data to recommend books, movies and music. Using Big Data, True Fit seeks to help retailers curb their return rate. A consumer using True Fit on a retailer’s website, for example, would enter their height, weight and age and answer questions about their body shape to create a True Fit profile. Then they would add brands, styles and sizes from garments that fit well from their own closets. True Fit’s algorithms compare this information with the retailer’s purchase data to determine what products might best fit. It then recommends a size in each clothing item, describing fitting expectations, and summarizing them on a scale of one to five. True Fit maintains a single profile for each of its millions of members. Recommendations are personalized for each retailer with whom the True Fit user is shopping.”
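The recommendation step Boulton describes can be sketched in a few lines of Python. To be clear, the size charts, the way I infer a shopper’s preferred measurement from their closet, and the one-to-five scoring rule are all invented for illustration; True Fit’s actual models learn from far richer brand and transaction data.

```python
# A simplified sketch of the kind of matching True Fit is described as doing.
# The size charts, tolerances, and 1-5 scoring rule are invented for illustration.

# Hypothetical brand size charts: garment chest measurement in cm per brand and size.
SIZE_CHARTS = {
    ("BrandA", "M"): 100.0,
    ("BrandA", "L"): 106.0,
    ("BrandB", "M"): 98.0,
    ("BrandB", "L"): 104.0,
}

def estimate_fit_point(closet):
    """Infer the measurement the shopper likes from garments they say fit well."""
    measurements = [SIZE_CHARTS[(brand, size)] for brand, size in closet]
    return sum(measurements) / len(measurements)

def recommend_size(fit_point, brand, available_sizes):
    """Pick the closest size and summarize fit on a 1-5 scale (5 = near-perfect)."""
    best_size = min(available_sizes, key=lambda s: abs(SIZE_CHARTS[(brand, s)] - fit_point))
    gap = abs(SIZE_CHARTS[(brand, best_size)] - fit_point)
    score = max(1, 5 - int(gap // 2))        # lose a point for every 2 cm of mismatch
    return best_size, score

closet = [("BrandA", "M"), ("BrandB", "L")]   # garments the shopper says fit well
fit_point = estimate_fit_point(closet)
size, score = recommend_size(fit_point, "BrandB", ["M", "L"])
print(f"Recommended size {size} (fit score {score}/5)")
```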

With advances in artificial intelligence being made every day, the number of ways that machine learning will be used in the future will only be limited by our imaginations.
