How Algorithms Will Evolve | Google Automatic Machine Learning – Popular Mechanics

Google’s AutoML-Zero is capable of creating brand-new algorithms from scratch, through a Darwinian-style evolution process. Scientists working for the tech giant believe this leap in automatic machine learning research (AutoML) will revolutionize the field, opening up machine learning capabilities for non-experts. Last month, they posted their work to the preprint server arXiv, which means the research has not yet been peer-reviewed.

Machine learning is hard. Algorithms in a particular use case often either don’t work or don’t work well enough, leading to some serious debugging. And finding the perfect algorithm–the set of rules a computer should follow to perform an operation–can be a tall task. You can’t just pick the perfect algorithm if it doesn’t exist, and some solutions simply aren’t intuitive to the human mind. That means the process of choosing and refining algorithms is iterative and somewhat monotonous. It’s a perfect storm for automation.

Enter automatic machine learning, or AutoML, a branch of research exclusively devoted to methods and processes that automate machine learning so that non-experts can also reap its benefits.

Google believes a team of its computer scientists has come up with a new AutoML method that could automatically generate the best algorithm for a given task. The new research is outlined in a paper posted to the preprint server arXiv. It’s also been submitted to a scientific journal for review and could be published as early as June.

The premise goes like this: A new system, called AutoML-Zero, can adapt algorithms to different types of tasks and continuously improve them through a Darwinian-like evolution process that reduces the amount of human intervention required. Since humans can introduce bias into systems—and thus program their own limitations—that limits the results you ultimately get. So Google is trying to create a scenario where a computer can roam free and get creative—or take the red pill instead of the blue pill, so to speak.

Esteban Real, a software engineer at Google Brain, Research and Machine Intelligence, and lead author of the research, offers this metaphor: “Suppose your goal is to put together a house. If you had at your disposal pre-built bedrooms, kitchens, and bathrooms, your task would be manageable but you are also limited to the rooms you have in your inventory,” he tells Popular Mechanics. “If instead you were to start out with bricks and mortar, then your job is harder, but you have more space for creativity.”

Removing the Humans—and the Bias

In the past, AutoML research has heavily relied on human input. Neural architecture search, for example—which automates the design of a neural net, as its name suggests—relies on sophisticated, expert-built layers as building blocks for the new neural net. These are basically hand-coded instructions, or programs, that tell a computer what to do. By contrast, Google’s new AutoML-Zero uses mathematics, rather than human-designed components, as the building blocks for new algorithms.

Programming languages—from COBOL to Python to Ruby—make the act of building a program simpler. Machines understand numbers, specifically binary code, and the languages act as a buffer between the programmer and the machine. That way, humans don’t have to spend all day breaking down commands into a bunch of 1s and 0s.
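To make the "bricks and mortar" idea concrete, here is a minimal sketch of what it means to represent an algorithm as a list of primitive math operations acting on a small memory, rather than as hand-built, high-level components. This is a toy illustration, not Google's actual instruction set; the operation names and memory layout are invented for this example.

```python
# A toy interpreter: an "algorithm" is just a list of primitive math
# instructions operating on numbered memory slots -- bricks and mortar,
# not pre-built rooms.

def run(program, memory):
    """Execute a list of (op, dest, src1, src2) instructions in place."""
    for op, dest, a, b in program:
        if op == "add":
            memory[dest] = memory[a] + memory[b]
        elif op == "sub":
            memory[dest] = memory[a] - memory[b]
        elif op == "mul":
            memory[dest] = memory[a] * memory[b]
    return memory

# A hand-written example program that computes x^2 + x, with x in slot 0.
# An evolutionary search would instead generate and mutate such lists blindly.
program = [
    ("mul", 1, 0, 0),  # slot 1 = x * x
    ("add", 2, 1, 0),  # slot 2 = x^2 + x
]
memory = run(program, [3.0, 0.0, 0.0])
print(memory[2])  # 12.0
```

Because a program is just data (a list of tuples), a search process can freely rearrange, insert, or delete instructions without any human-designed layers getting in the way.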

But that choice of language and representation in the programming languages allows bias to creep in, says Armando Solar-Lezama, an associate professor at the Massachusetts Institute of Technology (MIT), who isn’t affiliated with the work. He leads MIT’s Computer Assisted Programming Group, which focuses on automating the programming process. Solar-Lezama tells Popular Mechanics the new Google paper is about seeing how far you can push a simple, mathematics-based language “so that the things you discover are not biased by your choice of language.” In this case, bias means limiting your options.

Going back to Real’s house metaphor, imagine you’re building your home out of whole rooms, and all you know is Roman style. “Then your house would be full of columns, atria, and impluvia; you wouldn’t be able to come up with the Empire State Building or the Sistine Chapel,” Real says. “If you use bricks and mortar, then you’re not limited to a specific style.”

Real and his coauthors Chen Liang, David So, and Quoc Le acknowledge there’s still some remaining bias in the program, despite their best efforts. For example, even the specific math operations they’ve chosen can contain implicit bias based on the researchers’ pre-existing knowledge of machine learning algorithms.

Genetic Algorithms

To discover new algorithms, AutoML-Zero starts with 100 random algorithms, generated through a combination of mathematical operations. Then, the system zips through the algorithms to find the best ones, which carry on to the next step, akin to the way favorable genes are passed down over generations in a game of “survival of the fittest.”


From there, the algorithms complete some sort of machine learning task, like distinguishing motorcycles from trucks, as you might do in one of those reCAPTCHA tests that check whether or not you’re a robot. AutoML-Zero uses the tasks to score each algorithm’s effectiveness in completing a certain objective and then “mutates” the best ones to begin another round. These new “child” algorithms are compared to the original “parent” algorithms to see if they’ve gotten better at the task at hand. The process is repeated until the best mutations win out and end up in the final algorithm. In the end, the system could search through 10,000 possible models per second, with the ability to skip over algorithms it had already seen. The researchers used a small dataset as a proxy for more complicated amounts of information, making the work a proof of concept.
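The evolutionary loop described above can be sketched in a few lines. Google's system evolves whole programs; the toy below runs the same loop (random population, pick the fittest of a sample as parent, mutate it, retire the oldest member) on a much simpler stand-in task. All names, parameters, and the polynomial-fitting task are invented for illustration.

```python
import random

# Stand-in "algorithm": three coefficients for y = c0 + c1*x + c2*x^2.
# Stand-in "task": fit the target function y = 3x^2 + 2x + 1 on sample points.
TARGET = lambda x: 3 * x * x + 2 * x + 1
POINTS = [i / 10 for i in range(-10, 11)]

def fitness(coeffs):
    """Score a candidate: negative mean squared error (higher is better)."""
    err = sum((coeffs[0] + coeffs[1] * x + coeffs[2] * x * x - TARGET(x)) ** 2
              for x in POINTS)
    return -err / len(POINTS)

def mutate(coeffs):
    """Child = parent with one randomly perturbed coefficient."""
    child = list(coeffs)
    child[random.randrange(len(child))] += random.gauss(0, 0.5)
    return child

def evolve(pop_size=100, rounds=2000, seed=0):
    random.seed(seed)
    # Start with a population of random candidate "algorithms".
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(rounds):
        # Tournament selection: the fittest of a random sample is the parent.
        parent = max(random.sample(population, 10), key=fitness)
        # The mutated child replaces the oldest population member
        # ("survival of the fittest" with aging).
        population.pop(0)
        population.append(mutate(parent))
    return max(population, key=fitness)

best = evolve()
print([round(c, 2) for c in best])  # best coefficients found
```

Replacing the oldest member rather than the worst keeps the search from getting stuck on one early winner, which is one reason this style of evolution can keep exploring new algorithms instead of converging prematurely.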

“The longer the piece of code you’re trying to generate, the easier it is for an error to sneak in.”

To do this, AutoML-Zero uses what are known as genetic algorithms, which have been around since the 1980s but have mostly fallen out of use, Solar-Lezama says. That’s because they’re usually best in unstructured environments, “where nothing else works,” and they often lead to unreadable code that’s difficult to reverse-engineer. Plus, they produce really long pieces of code. “The longer the piece of code you’re trying to generate, the easier it is for an error to sneak in,” Solar-Lezama says. “It can be the difference between a piece of code that does exactly what you want, and one that doesn’t work, and it could be one character. This is a general problem in program synthesis.” Still, genetic algorithms make sense in this case, because you don’t want to hamper the computer’s options.

The Problem with Scaling

Google has already developed its own suite of machine learning tools, called Cloud AutoML, which makes it easier to train machine learning models with minimal human expertise. But AutoML-Zero looks like a step toward even less human involvement. Scaling this method, however, will be a challenge, Solar-Lezama says. Because AutoML-Zero uses arithmetic, rather than higher-level programming languages, there aren’t any instructions to help the system quickly approach a problem it’s encountered some version of before. Instead, it will have to reinvent the wheel each time, which isn’t optimal.

To get past the scaling issue, Solar-Lezama says the researchers could take on a “divide and conquer” mentality in future work. By decoupling one part of the program from another portion, AutoML-Zero could find success. In addition, it’ll be vital to find the right balance between abstract arithmetic as building blocks and more substantial instructions that can do more work, but that could introduce bias.

If Google does scale up the system and let the machines really build the algorithms, it could mean way faster app development, language translation, video processing ... everything, Solar-Lezama says. It could even empower small-time developers and small businesses to take advantage of machine learning capabilities without having to hire or outsource a whole data science team.

“Being able to find an algorithm that is well-tuned and well set up for a particular application that you’re dealing with ... it can be a very powerful thing,” he says.