Machine Learning’s ‘Amazing’ Ability to Predict Chaos – Quanta Magazine


Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. After training itself on data from the past evolution of the Kuramoto-Sivashinsky equation, a reservoir computer built by researchers at the University of Maryland could closely predict how the flamelike system would continue to evolve out to eight “Lyapunov times” into the future; loosely speaking, that is eight times further ahead than previous methods allowed. The algorithm knows nothing about the Kuramoto-Sivashinsky equation itself; it only sees data recorded about the evolving solution to the equation. Ott and company’s results suggest you don’t need the equations — only data. “This paper suggests that one day we might be able perhaps to predict weather by machine-learning algorithms and not by sophisticated models of the atmosphere,” Kantz said. Besides weather forecasting, experts say the machine-learning technique could help with monitoring cardiac arrhythmias for signs of impending heart attacks and monitoring neuronal firing patterns in the brain for signs of neuron spikes. Ott particularly hopes the new tools will prove useful for giving advance warning of solar storms, like the one that erupted across 35,000 miles of the sun’s surface in 1859. If such a solar storm lashed the planet unexpectedly today, experts say it would severely damage Earth’s electronic infrastructure. “If you knew the storm was coming, you could just turn off the power and turn it back on later,” Ott said. Six or seven years ago, when the powerful algorithm known as “deep learning” was starting to master AI tasks like image and speech recognition, Ott and his colleagues started reading up on machine learning and thinking of clever ways to apply it to chaos.
Notably, in the early 2000s, Jaeger and fellow German chaos theorist Harald Haas made use of a network of randomly connected artificial neurons — which form the “reservoir” in reservoir computing — to learn the dynamics of three chaotically coevolving variables. After training on the three series of numbers, the network could predict the future values of the three variables out to an impressively distant horizon. But scaling the approach beyond a handful of variables was another matter, and it took years to strike upon a straightforward solution. “What we exploited was the locality of the interactions” in spatially extended chaotic systems, Pathak said. Locality means variables in one place are influenced by variables at nearby places but not by places far away. “By using that,” Pathak explained, “we can essentially break up the problem into chunks.” That is, you can parallelize the problem, using one reservoir of neurons to learn about one patch of a system, another reservoir to learn about the next patch, and so on, with slight overlaps of neighboring domains to account for their interactions. Parallelization allows the reservoir computing approach to handle chaotic systems of almost any size, as long as proportionate computer resources are dedicated to the task. Training the system on a flamelike solution proceeds in steps. First, you measure the height of the flame at five different points along the flame front, continuing to measure the height at these points as the flickering flame advances over a period of time. These measurements are fed into the network: the input data triggers the neurons to fire, triggering connected neurons in turn and sending a cascade of signals throughout the network. The second step is to make the neural network learn the dynamics of the evolving flame front from the input data. The goal is to adjust the weights of the various signals that go into calculating the outputs until those outputs consistently match the next set of inputs — the five new heights measured a moment later along the flame front.
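The patch-based parallelization Pathak describes can be pictured in a few lines. The sketch below is a hedged illustration, not the Maryland group's code: it only shows how a spatial grid of measurement points might be divided into core patches, each padded with a small overlap so that neighboring reservoirs see each other's boundary values. All names and sizes are invented for illustration.

```python
# Split n_sites grid points into n_chunks patches. Each patch gets a "core"
# range it is responsible for predicting, plus an "extended" range padded by
# `overlap` points on each side so adjacent reservoirs share boundary data.
def overlapping_chunks(n_sites, n_chunks, overlap):
    size = n_sites // n_chunks
    patches = []
    for k in range(n_chunks):
        start = k * size
        end = n_sites if k == n_chunks - 1 else start + size
        core = (start, end)
        extended = (max(0, start - overlap), min(n_sites, end + overlap))
        patches.append((core, extended))
    return patches

# Example: 64 measurement points, 4 reservoirs, 2-point overlap.
for core, extended in overlapping_chunks(64, 4, 2):
    print("predicts", core, "sees", extended)
```

One reservoir would then be assigned to each core patch and trained only on its extended slice of the data, which is why each reservoir can stay small no matter how large the full system grows.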
“What you want is that the output should be the input at a slightly later time,” Ott explained. To learn the correct weights, the algorithm simply compares each set of outputs, or predicted flame heights at each of the five points, to the next set of inputs, or actual flame heights, increasing or decreasing the weights of the various signals each time in whichever way would have made their combinations give the correct values for the five outputs. Once trained, the network can forecast: outputs are fed back in as the new inputs, whose outputs are fed back in as inputs, and so on, making a projection of how the heights at the five positions on the flame front will evolve. In a plot in their PRL paper, which appeared in January, the researchers show that their predicted flamelike solution to the Kuramoto-Sivashinsky equation exactly matches the true solution out to eight Lyapunov times before chaos finally wins, and the actual and predicted states of the system diverge. The usual approach to predicting a chaotic system is to measure its conditions at one moment as accurately as possible, use these data to calibrate a physical model, and then evolve the model forward. By contrast, in this series of results reported in the journals Physical Review Letters and Chaos, scientists used machine learning — the same computational technique behind recent successes in artificial intelligence — to predict the future evolution of chaotic systems out to stunningly distant horizons without any such model. That’s why machine learning is “a very useful and powerful approach,” said Ulrich Parlitz of the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, who, like Jaeger, also applied machine learning to low-dimensional chaotic systems in the early 2000s.
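The train-then-feed-back loop described above can be sketched with a toy reservoir computer (an echo state network). This is a hedged, minimal illustration rather than the researchers' actual setup: it trains on the one-dimensional logistic map as a stand-in chaotic signal instead of Kuramoto-Sivashinsky data, and the reservoir size, scalings and regularization are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in chaotic signal: the logistic map (the papers instead train on
# measured solutions of the Kuramoto-Sivashinsky equation).
def logistic_series(n, x0=0.4, r=3.9):
    xs, x = np.empty(n), x0
    for i in range(n):
        xs[i] = x
        x = r * x * (1 - x)
    return xs

data = logistic_series(3000)

# The reservoir: a fixed, random recurrent network. Only the linear readout
# weights W_out are trained, which is what makes reservoir computing cheap.
N = 300
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

def step(x, u):
    return np.tanh(W @ x + W_in * u)

# Drive the reservoir with the training signal and record its states.
states = np.zeros((len(data) - 1, N))
x = np.zeros(N)
for t in range(len(data) - 1):
    x = step(x, data[t])
    states[t] = x

# Train W_out by ridge regression so each state predicts the NEXT input:
# "the output should be the input at a slightly later time."
washout = 100                          # discard the initial transient
X, Y = states[washout:], data[washout + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

# Forecast by closing the loop: each output is fed back in as the next input.
u, preds = data[-1], []
for _ in range(20):
    x = step(x, u)
    u = x @ W_out
    preds.append(u)

truth = logistic_series(3020)[3000:]   # the true continuation of the signal
print(abs(preds[0] - truth[0]))        # small: early steps track the truth
```

As in the flame example, the forecast tracks the true trajectory for a while and then diverges, since the butterfly effect amplifies the tiny residual training error at each fed-back step.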
“I think it’s not only working in the example they present but is universal in some sense and can be applied to many processes and systems.” In a paper soon to be published in Chaos, Parlitz and a collaborator applied reservoir computing to predict the dynamics of “excitable media,” such as cardiac tissue. Recently, researchers at the Massachusetts Institute of Technology and ETH Zurich achieved results similar to the Maryland team’s using a “long short-term memory” neural network, which has recurrent loops that enable it to store temporary information for a long time. In new research accepted for publication in Chaos, the Maryland researchers showed that improved predictions of chaotic systems like the Kuramoto-Sivashinsky equation become possible by hybridizing the data-driven, machine-learning approach with traditional model-based prediction. Ott sees this as a more likely avenue for improving weather prediction and similar efforts, since we don’t always have complete high-resolution data or perfect physical models. “What we should do is use the good knowledge that we have where we have it,” he said, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.” The reservoir’s predictions can essentially calibrate the models; in the case of the Kuramoto-Sivashinsky equation, accurate predictions are extended out to 12 Lyapunov times. The duration of a Lyapunov time varies from system to system, from milliseconds to millions of years. (It’s a few days in the case of the weather.) The shorter it is, the touchier or more prone to the butterfly effect a system is, with similar states departing more rapidly for disparate futures. Yet strangely, chaos itself is hard to pin down. “It’s a term that most people in dynamical systems use, but they kind of hold their noses while using it,” said Amie Wilkinson, a professor of mathematics at the University of Chicago.
“You feel a bit cheesy for saying something is chaotic,” she said, because it grabs people’s attention while having no agreed-upon mathematical definition or necessary and sufficient conditions. “There is no easy concept,” Kantz agreed. Wilkinson and Kantz both define chaos in terms of stretching and folding, much like the repeated stretching and folding of dough in the making of puff pastries. The weather, wildfires, the stormy surface of the sun and all other chaotic systems act just this way, Kantz said. “In order to have this exponential divergence of trajectories you need this stretching, and in order not to run away to infinity you need some folding,” where folding comes from nonlinear relationships between variables in the systems. Exactly why reservoir computing is so good at learning the dynamics of chaotic systems is not yet well understood, beyond the idea that the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics. The technique works so well, in fact, that Ott and some of the other Maryland researchers now intend to use chaos theory as a way to better understand the internal machinations of neural networks. The Kuramoto-Sivashinsky equation itself also describes drift waves in plasmas and other phenomena, and serves as “a test bed for studying turbulence and spatiotemporal chaos,” said Jaideep Pathak, Ott’s graduate student and the lead author of the new papers.


AI helps grow 6 billion roaches at China’s largest breeding site


If your worst nightmare is to be trapped in a room with six billion roaches flying around you, here's the bad news. With the help of artificial intelligence, folks at a Chinese pharmaceutical company are breeding cockroaches by the billions every year, the South China Morning Post reported Thursday. Their purpose: to make a "healing potion" that can cure respiratory, gastric and other diseases. The "potion," consumed by over 40 million people in China, is made by crushing the cockroaches once they reach a desired weight and size, according to the publication. There is a "slightly fishy smell" to the potion, which tastes "slightly sweet" and looks like tea, it added. The giant facility, China's largest roach farm, is managed with the help of a "smart manufacturing" system powered by AI algorithms. The system is responsible for collecting and analysing over 80 categories of data to ensure an optimal environment for the cockroaches to grow, according to the SCMP. It also "learns" from its historical data so it can make improvements to grow the population, which is now so massive that there are warnings of a "catastrophe" if the roaches were to be suddenly released. AI has been implemented in several fields in China, including the country's surveillance system, which has received plenty of attention over the past few months. Last December, a BBC reporter was taken into custody in just seven minutes as part of a test showing how effectively the system works. Its facial recognition abilities also played a role in identifying and detaining a fugitive among 50,000 people attending a concert. And in the Chinese city of Shenzhen, an AI company is working with local authorities to identify jaywalkers and send them a text informing them about a fine.


AI experts’ salaries are topping $1 million—even at nonprofits


In the AI world, you don’t need to be working for a giant for-profit corporation to rake in the dough. Pay up: If you want to attract top AI talent, the lesson is simple: set aside the lion’s share of your budget for wages. In its first year, OpenAI spent a total of $11 million, and over $7 million of that went to salaries and benefits. Top tech companies increasingly see AI as integral to succeeding, and they’ll try anything, including some zany recruiting efforts, to lure in the very best minds.


This Machine Learning System Thinks About Music Like You Do


Scientists at the Massachusetts Institute of Technology reported in a new study that they’ve created a machine-learning system that processes sound just like humans, whether it’s discerning the meaning of a word or classifying music by genre. It’s the first artificial system to mimic the way the brain interprets sounds — and it rivals humans in its accuracy. It took thousands of examples to train it, but by the end, the model performed as well as humans. It even made errors in the same places that tripped up humans, struggling the most with the clips played over city sounds. But the researchers still weren’t sure if the model was processing these signals the same way a brain does — or if it had found its own way to solve the same problem. Lead author Alex Kell, of MIT, examined data from an fMRI scanner to see which regions of the brain were hardest at work as subjects listened to an array of natural sounds. He found that when the model was processing relatively basic information — such as the frequency of a sound or pattern — that processing corresponded with activity in one region of the brain. This suggests that the brain processes information the same way their model does, in a hierarchy that goes from simplest to most complex. This ability to connect the inner workings of a deep neural network to the brain is exciting, said Andrew Pfalz, a Ph.D. candidate in Experimental Music and Digital Media at Louisiana State University whose research applies neural networks to sound. Through many queries, the MIT researchers were able to shed light on which layers of the system were engaged when — and how that aligned with activity in brains processing the same sounds. Yet computer scientist Ching-Hua Chuan of the University of North Florida, whose research focuses on the use of machine-learning systems to generate music, is wary of the immensity of this claim.
“[Neural networks] were never intended to model how our brain works,” she said, adding that the difficulty of peering into the “black box” suggests to her that it would take more research to prove that the model truly mimics the brain. If the MIT team is right, though, the work could help scientists understand and simulate how the brain processes sound and other sensory signals, said MIT’s Josh McDermott, senior author on the study. The research, published today in the journal Neuron, offers a tantalizing new way to study the brain. Despite the ubiquity of machine-learning systems — in the software that gives you music recommendations, for example — even the engineers who design these systems often don’t know how they “think,” or how human-like their inner workings are.


Musiio uses AI to help the music industry curate tracks more efficiently


A former streaming industry exec and an AI specialist walk into a bar… they leave starting an AI company for the music industry. That’s not exactly how Singapore-based startup Musiio was formed, but it’s close enough — and the outcome is the same. While Spotify, which recently went public in an unconventional listing, might be the most visible company in need of smarts for music, it is not the only one by far. Spotify’s spending on smarter music tech includes acquisitions such as music intelligence company Echo Nest, for $100 million, and smaller AI startup Niland, which helps make recommendations and search results smarter. And we’re not even talking about direct rivals like Pandora, Apple Music, Google and co. Others involved in the less visible — but hugely lucrative — parts of the industry that also need help poring through millions of tracks include labels, which filter through talent on a daily basis, and agencies that pick out music for brands, advertising, media, etc. Musiio’s AI uses a combination of deep learning and feature extraction, the latter of which Musiio said allows it to identify and understand patterns and features of a track. The training is focused on the audio itself, rather than stats and data from third parties, which some services use to categorize tracks. Pettersson runs the AI. For what it’s worth, he cut his teeth with an algorithm for the Swedish stock market that netted him a 28 percent annual return for eight years. Musiio said it is developing solutions for a number of undisclosed clients, but one public name it is talking up is Free Music Archive (FMA), a Creative Commons-like free music site developed by independent U.S. radio station WFMU. The site has more than 120,000 tracks, each of which is hand-selected, but with just one part-time developer the curation side is lacking.
Savage, a Brit, was looking for new ideas after work brought her and her husband to Singapore, and after crunching through some problems that need fixing, the duo settled on an AI service that helps music platforms tackle content and curation. The initial face of the streaming revolution was based on giving users instant access to millions of songs in a single place, removing the pain of downloads and paying per song. Now that streaming is established, the puck has moved to smarter solutions that help music streamers sift through those tens of millions of songs to find music they like, or, better yet, discover new tracks they’ll love. Aside from consumer products such as Discover Weekly, a playlist that pulls in a weekly selection of music tailored to a user, Spotify has invested considerable resources in making its product smarter. “Not only are we backfilling the Echo Nest partnership [after Spotify closed the service following its acquisition] but the lead track in the inaugural playlist (Kurt Vile, ‘I Wanted Everything’) had received 3,000 plays when we found it, after eight years in the database.” For now, the playlists are created and held within Savage’s FMA account, but Musiio confirmed that it is considering the potential to develop a dashboard that would allow listeners themselves to use the AI to develop playlists. The startup will be part of the EF demo day in July, but Savage said it has already begun to have conversations with investors with a view to raising a seed round of funding.


Alibaba is developing its own AI chips, too


The Chinese e-commerce giant will join a raft of other tech firms in designing its own processors tailored to in-house machine-learning tasks. Why it matters: China spends over $200 billion a year importing integrated-circuit chips, mostly from the US. As trade tensions simmer between the nations, the prospect of using homegrown chips looks ever more appealing to China’s government and businesses. The news is another sign of China’s early success in growing its semiconductor industry—a key part of the government’s Made in China 2025 policy. The news: Alibaba announced that it’s building a chip called Ali-NPU—for “neural processing unit”—designed to handle AI tasks like image and video analysis. The firm says its performance will be 10 times that of a CPU or GPU performing the same task. What it’s for: The chips will be part of Alibaba’s ambitious plan to deliver AI through cloud computing and IoT devices. Joining a long list: In America, Google, Apple, and even Facebook are designing their own AI chips, and so are some startups.


ZTE may be too big to fail, as it remains the thin end of the wedge in China’s global tech ambition – South China Morning Post


The business turned around a year after it agreed to pay the United States government a record fine to settle a five-year probe of trade sanctions violations. One said he believed the “China government will help to solve the problem,” while another said he expects “some employees to lose their jobs” though it is “not difficult to get a new job” in Shenzhen, one of China’s main technology hubs and home to Huawei Technologies, drone maker DJI and internet company Tencent Holdings. Huawei, which is not subject to a parts ban, is likely to fill the gap as ZTE tries to negotiate a new deal with the US commerce department, Lee said. “We need to speed up the drive to build China into a strong country with advanced manufacturing, pushing for deep integration between the real economy and advanced technologies including the internet, big data, and artificial intelligence,” Xi said in his speech. China’s spending on technology research and development rose 10.6 per cent to 1.57 trillion yuan (US$249.5 billion) in 2016, equivalent to 2.1 per cent of gross domestic product (GDP) that year, according to data from the National Bureau of Statistics (NBS). The figure exceeded the European Union’s average of 2.08 per cent, but still lagged the 2.40 per cent average among the members of the Organisation for Economic Co-operation and Development (OECD). “If a big portion of Chinese chip buyers support the development of domestic chips, then the development of domestic chips will be unstoppable,” said an April 17 editorial in the state-run Global Times. “If the US lost these Chinese chip buyers, these US hi-tech chip makers will also lose momentum to upgrade their products continuously.” The strong language in the statement suggests that the company has the support of the government, said Li Yi, chief fellow at the Shanghai Academy of Social Sciences.
ZTE’s non-executive chairman Yin Yimin, who had issued a call for calm for his 80,000 employees in the immediate aftermath of the ban, said as much in a Friday press conference for selected Chinese state media.  The denial order by the US Department of Commerce, Yin said, has sent ZTE into a “state of shock” that would reverberate and hurt the interests of employees, telecom operators, consumers and shareholders, according to the official Xinhua News Agency. As part of the settlement for violating trade sanctions on Iran and North Korea, ZTE agreed to pay US$1.2 billion in penalties to the US government in return for a suspended seven-year ban during a probationary period.  ZTE also promised to dismiss four senior employees and discipline 35 others involved in the trade violation by either reducing their bonuses or reprimanding them. The US Department of Commerce revoked the probation after finding out that ZTE had paid full bonuses and lied about it, according to the denial order posted on the agency’s website.  If no settlement is reached, the export ban would not only hurt ZTE, China’s largest listed telecommunications equipment manufacturer, but deal a blow to the country’s goal of recasting itself as a leading innovator.


Machine-learning system processes sounds like humans do

Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre. The study, which appears in the April 19 issue of Neuron, also offers evidence that the human auditory cortex is arranged in a hierarchical organization, much like the visual cortex. For the speech task, the researchers gave the model thousands of two-second recordings of a person talking. The task was to identify the word in the middle of the clip. For the music task, the model was asked to identify the genre of a two-second clip of music. The processing units that make up a neural network can be combined in a variety of ways, forming different architectures that affect the performance of the model. The MIT team discovered that the best model for these two tasks was one that divided the processing into two sets of stages. The first set of stages was shared between tasks, but after that, it split into two branches for further analysis — one branch for the speech task, and one for the musical genre task. The researchers then used their model to explore a longstanding question about the structure of the auditory cortex: whether it is organized hierarchically. "We thought that if we could construct a model that could do some of the same things that people do, we might then be able to compare different stages of the model to different parts of the brain and get some evidence for whether those parts of the brain might be hierarchically organized," McDermott says. To see if the model stages might replicate how the human auditory cortex processes sound information, the researchers used functional magnetic resonance imaging (fMRI) to measure different regions of auditory cortex as the brain processes real-world sounds. They then compared the brain responses to the responses in the model when it processed the same sounds. 
They found that the middle stages of the model corresponded best to activity in the primary auditory cortex, and later stages corresponded best to activity outside of the primary cortex. The authors now plan to develop models that can perform other types of auditory tasks, such as determining the location from which a particular sound came, to explore whether these tasks can be done by the pathways identified in this model or if they require separate pathways, which could then be investigated in the brain. Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform difficult real-world tasks, and they have become the standard approach in many engineering applications. "That's been an exciting opportunity for neuroscience, in that we can actually create systems that can do some of the things people can do, and we can then interrogate the models and compare them to the brain," Kell says.
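The shared-then-branched architecture the MIT team settled on can be pictured with a tiny forward pass. The sketch below is a hedged illustration, not the study's actual network (which runs spectrogram-like sound representations through many trained convolutional stages); the layer sizes and class counts here are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def stage(x, w):
    # One processing stage: a linear transform followed by a ReLU nonlinearity.
    return np.maximum(0.0, w @ x)

# Early stages shared by both tasks, then a split into two task-specific
# branches: one head for word identification, one for musical genre.
shared = [rng.normal(0, 0.1, (64, 128)), rng.normal(0, 0.1, (64, 64))]
word_head = rng.normal(0, 0.1, (500, 64))   # placeholder word classes
genre_head = rng.normal(0, 0.1, (40, 64))   # placeholder genre classes

def forward(features):
    h = features
    for w in shared:                          # shared trunk
        h = stage(h, w)
    return word_head @ h, genre_head @ h      # task-specific branches

word_scores, genre_scores = forward(rng.normal(0, 1, 128))
print(word_scores.shape, genre_scores.shape)  # (500,) (40,)
```

The study's analysis compared responses at different depths of such a network with fMRI data; as described above, the middle stages lined up best with primary auditory cortex and the later, branch-specific stages with regions outside it.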


Betting On Artificial Intelligence To Guide Earthquake Response


A startup company in California is using machine learning and artificial intelligence to advise fire departments about how to plan for earthquakes and respond to them. The company, One Concern, hopes its algorithms can take a lot of the guesswork out of the planning process for disaster response by making accurate predictions about earthquake damage. It's one of a handful of companies rolling out artificial intelligence and machine learning systems that could help predict and respond to floods, cyber-attacks and other large-scale disasters. Nicole Hu, One Concern's chief technology officer, says the key is to feed the computers three main categories of data. The first is data about homes and other buildings, such as what materials they're made of, when they were built and how likely they are to collapse when the ground starts shaking. The second concerns the natural environment: "What is the general humidity like?" explains Hu. "The third thing we look at is live instant data," she says, such as the magnitude of the quake, the traffic in the area of the quake and the weather at the time of the quake. Deierlein says one of the most remarkable things about the company's software is its ability to incorporate data from an earthquake as it's happening, and to adjust its predictions in real time. "Those sort of things used to be research projects," says Deierlein. "After an event, we would collect data and a few years later we'd produce new models." Now the new models appear in a matter of minutes. He notes the company's exact methods are opaque. "Like many startup companies they're not fully transparent in everything they're doing," he says. "I mean, that's their proprietary knowledge that they're bringing to it." Nonetheless, some first responders are already convinced the software will be useful. Ghiorso says in the past, when an earthquake hit, he'd have to make educated guesses about what parts of his district might have suffered the most damage, and then drive to each place to make a visual inspection. "Instead of driving thirty-two square miles, in fifteen minutes on a computer I can get a good idea of the concerns," he says. "Instead of me, taking my educated guess, they're putting science behind it, so I'm very confident." Unfortunately, it's going to take a natural disaster to see if his confidence is justified.


A startup that uses artificial intelligence to discover new drugs just landed a $2 billion valuation


Traditionally, a drug company would seek to figure out the science behind a particular disease, working to find disease targets it could then design drugs to go after. This can be a lengthy process involving a lot of lab work and uncertainty over whether the drug will work when it is tested in animals. And often, the scientists working on it have a very specialized understanding of a particular disease that influences how they approach the problem. "We only know to look for the things that we know, rather than the actual signal that's in there," Mulvany told Business Insider. To start, BenevolentAI focuses in on a particular disease — so far, that's been around diseases of the central nervous system and rare cancers, as well as some work with Parkinson's disease — then finds drug targets with its technology, and goes on to test them out in its labs. The hope that some of these will pan out and make it through the clinical trial process factors into the company's high valuation. "I think the assumption from investors is that some will make it because the AI is changing the risk profile because we're getting better by predicting what may work and what may not," James Chandler, BenevolentAI's vice president of corporate affairs, told Business Insider. The funding will be used to keep developing the drugs the company has discovered, along with potentially expanding BenevolentAI's technology to other fields including energy and agriculture. BenevolentAI is not alone: in March, TwoXar, a startup that uses its software to discover new experimental drugs for other companies, raised $10 million, and Atomwise, which designs drugs that companies can then test out, raised $45 million in its series A round.
