While You Were Sleeping

It turns out, she explained, there are “3.6 million ways to arrange 10 people for dinner.” Classical computers don’t solve “big versions of this problem very well at all,” she said, like trying to crack sophisticated encrypted codes, where you need to try a massive number of variables, or modeling molecules, where you need to account for an exponential number of interactions. Quantum computers, with their exponential processing power, will be able to crack most encryption without breaking a sweat. It’s just another reason China, the N.S.A., IBM, Intel, Microsoft and Google are now all racing — full of sweat — to build usable quantum systems. “If I try to map a caffeine molecule problem on a normal computer, that computer would have to be one-tenth the volume of this planet in size,” said Arvind Krishna, head of research at IBM. “A quantum computer just three or four times the size of those we’ve built today should be able to solve that problem.” Universities and companies are already accessing three IBM quantum systems (ranging from 5 to 16 qubits) that are online and open source at ibm.com/IBMQ, and they’ve already run two million quantum programs to prove out, and write papers on, theories that we never before had the processing power to prove. But, again, look at where we are today thanks to artificial intelligence from digital computers — and the amount of middle-skill and even high-skill work they’re supplanting — and then factor in how all of this could be supercharged in a decade by quantum computing. As education-to-work expert Heather McGowan (www.futureislearning.com) points out: “In October 2016, Budweiser transported a truckload of beer 120 miles with an empty driver’s seat. … In December 2016, Amazon announced plans for the Amazon Go automated grocery store, in which a combination of computer vision and deep-learning technologies tracks items and charges customers only when they remove the items from the store. 
In February 2017, Bank of America began testing three ‘employee-less’ branch locations that offer full-service banking automatically, with access to a human, when necessary, via video teleconference.” This will be a challenge for developed countries, but even more so for countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work that’s now being automated. It’s why IBM’s C.E.O., Ginni Rometty, remarked to me in an interview: “Every job will require some technology, and therefore we’ll need to revamp education.” Technology companies, she said, “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.” To back that up, said Rometty, IBM designed Pathways in Technology (P-Tech) schools, partnering with close to 100 public high schools and community colleges to create a six-year program that serves large numbers of low-income students. Like others, I struggle to get this balance right, which is why I pause today to point out some incredible technological changes happening while Trump has kept us focused on him — changes that will pose as big an adaptation challenge to American workers as transitioning from farms to factories once did. Two and a half years ago I was researching a book that included a section on IBM’s cognitive computer, “Watson,” which had perfected the use of artificial intelligence enough to defeat the two all-time “Jeopardy!” champions. 
Kids graduate in six years or less with both a high school diploma and an associate degree from a junior college. “The graduation rates are four times the average, and those getting jobs are at two times the median salary,” said Rometty, “and many are going on to four-year colleges.” Each time work gets outsourced or tasks get handed off to a machine, “we must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential,” McGowan said. Therefore, education needs to shift “from education as a content transfer to learning as a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill,” said McGowan. “Instructors/teachers move from guiding and assessing that transfer process to providing social and emotional support to the individual as they move into the role of driving their own continuous learning.” Anyway, I didn’t mean to distract from the “Trump Reality Show,” but I just thought I’d mention that Star Wars technology is coming not only to a theater near you, but to a job near you. After my IBM hosts had shown me Watson at its Yorktown Heights, N.Y., lab, they took me through a room where a small group of IBM scientists were experimenting with something futuristic called “quantum computing.” They left me thinking this was Star Wars stuff — a galaxy and many years far away. Last week I visited the same lab, where my hosts showed me the world’s first quantum computer that can handle 50 quantum bits, or qubits, which it unveiled in November. 
Well, if you think it’s scary what we can now do with artificial intelligence produced by classical binary digital electronic computers built with transistors — like make cars that can drive themselves and software that can write news stories or produce humanlike speech — remember this: These “old” computers still don’t have enough memory or processing power to solve what IBM calls “historically intractable problems.” Quantum computers, paired with classical computers via the cloud, have the potential to do that in minutes or seconds. For instance, “while today’s supercomputers can simulate … simple molecules,” notes MIT Technology Review, “they quickly become overwhelmed.” So chemical modelers — who attempt to come up with new compounds for things like better batteries and lifesaving drugs — “are forced to approximate how an unknown molecule might behave, then test it in the real world to see if it works as expected.”
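Two figures in the passage above are easy to sanity-check with a few lines of Python. The “3.6 million ways to arrange 10 people” is just 10 factorial, and the reason classical machines choke on molecule-scale problems is that a full description of an n-qubit (or n-interaction) quantum system needs 2^n complex amplitudes. A quick sketch (the 16 bytes per amplitude assumes two 64-bit floats; the comparison is illustrative, not IBM’s actual accounting):

```python
import math

# "3.6 million ways to arrange 10 people for dinner" is 10 factorial:
seatings = math.factorial(10)
print(seatings)  # 3628800

# Classical memory needed to hold the full state of an n-qubit system:
# 2**n complex amplitudes at 16 bytes each (two 64-bit floats).
def state_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(state_bytes(16))  # 1048576 bytes -- about 1 MB, trivial
print(state_bytes(50))  # about 18 petabytes, at the 50-qubit frontier
```

The 16-qubit case fits on any laptop, which is part of why IBM can put such machines online for open experimentation; by 50 qubits, naive classical simulation is already out of reach.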

Read More

Google’s Self-Training AI Turns Coders into Machine-Learning Masters

A new generation of cloud-based machine-learning tools that can train themselves would make the technology far more versatile and easier to use. Disney used the service to develop a way to search its merchandise for particular cartoon characters, even if those products are not tagged with that character’s name. Joaquin Vanschoren, a professor at the Eindhoven University of Technology in the Netherlands who specializes in automated machine learning, says it’s still a relatively new research topic, though interest in the area has been heating up lately. “It is impressive that they can release this as a production service so quickly,” he says. That’s only likely to get worse as programmers attempt to design AI systems that move beyond simple image classification and attempt to tackle ever broader tasks. In 2016, one team showed that deep learning could itself be used to identify the best tweaks to a deep-learning system. Building and optimizing a deep neural network algorithm normally requires a detailed understanding of the underlying math and code, as well as extensive practice tweaking the parameters of algorithms to get things just right. The difficulty of developing AI systems has created a race to recruit talent, and it means that only big companies with deep pockets can usually afford to build their own bespoke AI algorithms. “We need to scale AI out to more people,” Fei-Fei Li, chief scientist at Google Cloud, said ahead of the launch today. Li estimates there are at most a few thousand people worldwide with the expertise needed to build the very best deep-learning models. “But there are an estimated 21 million developers worldwide today,” she says. “We want to reach out to them all, and make AI accessible to these developers.” Cloud computing is one of the keys to making AI more accessible. 
That limits what they can do—for example, programmers will only be able to use the tools to recognize a limited range of objects or scenes that they have already been trained to recognize.
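The 2016 result mentioned above — using deep learning to identify the best tweaks to a deep-learning system — is one flavor of automated hyperparameter search. A minimal stand-in uses plain random search; the scoring function below is a hypothetical placeholder for actually training and validating a model, so only the search loop itself is the point:

```python
import random

def validation_score(lr: float, layers: int) -> float:
    """Hypothetical stand-in for training a model and scoring it on
    held-out data; pretend the best setting is lr=0.01 with 3 layers."""
    return -((lr - 0.01) ** 2) - 0.1 * (layers - 3) ** 2

random.seed(0)  # reproducible search
trials = [
    {"lr": 10 ** random.uniform(-4, 0), "layers": random.randint(1, 8)}
    for _ in range(200)
]
best = max(trials, key=lambda cfg: validation_score(cfg["lr"], cfg["layers"]))
print(best)  # a configuration near lr=0.01, layers=3
```

Systems like AutoML replace the random sampler with a learned controller, but the interface is the same: propose a configuration, score it, keep the best.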

Read More

Google’s AutoML lets you train custom machine learning models without having to code

While Google plans to expand this custom ML model builder under the AutoML brand to other areas, the service for now only supports computer vision models, but you can expect the company to launch similar versions of AutoML for all the standard ML building blocks in its repertoire (think speech, translation, video, natural language recognition, etc.). The company didn’t share any pricing information yet, but chances are it will charge one fee for training the models and then another for accessing the model through its APIs. The basic idea here, Google says, is to allow virtually anybody to bring their images, upload them (and import their tags or create them in the app) and then have Google’s systems automatically create a custom machine learning model for them. The company says that Disney, for example, has used this system to make the search feature in its online store more robust because it can now find all the products that feature a likeness of Lightning McQueen and not just those where your favorite talking race car was tagged in the text description. Instead, Google is opting for a system where it handles all of the hard work and trains and tunes your model for you. “AI and machine learning is still a field with high barriers to entry that requires expertise and resources that few companies can afford on their own,” Google’s chief scientist for AI/ML Fei-Fei Li said during a press event earlier this week. If you assume there are about a million data scientists today, then that’s pretty much the number of people who will be able to use your tools. “Today, while AI offers countless benefits to businesses, developing a custom model often requires rare expertise and extensive resources.” Google argues that AutoML is the only system of its kind on the market.

Read More

The Rupee Coin Is The Latest Player In Cryptocurrency & It’s Here To Win The South Asian Market

Posted On Jan 15, 2018 | Updated On Jan 15, 2018

A few days ago, Jio announced its own cryptocurrency, the JioCoin, to be launched in the market. Since then, there has been a debate going on among market analysts over whether this cryptocurrency has a future or not. Actually, everybody is trying to get their hands on these 'get rich easy' cryptocurrencies, since Bitcoin is all the rage and is serving as a valuable asset these days. Though the government of India has cautioned against trading and investing in Bitcoin and similar currencies, there have been unsubstantiated rumors about the government evaluating the possibility of introducing its own digital currency. However, says Adam Syed, “one blockchain solution, in particular, has stuck with me – cryptocurrency.” He adds, “The opportunities that decentralized cryptocurrencies bring excite me. I wanted a cryptocurrency that resonated with the place I call home – India – as well as with the larger South Asian market.” The “Rupee” has been a powerful brand with 2,600 years of history in South Asia, where it originated in India in the 6th century B.C. The currency holds a strong emotional appeal for people belonging to the region, and that is why it is gaining a lot of interest from the South Asian community based in the United States, Europe, and across the world. A number of merchants in the US and Canada are particularly excited about the Rupee Coin and want to accept it as payment for transactions as soon as the Rupee Coin mobile wallet is ready. Adam explains why the Rupee Coin could thrive in the long run: “The other reason why we have managed to garner interest in that region is that most of the Rupee team in the US and Canada are of South Asian ethnicity. 
They are rooting for Rupee as an established trade symbol in the cryptocurrency world, because for many of them Rupee is an extension of their identity.” So how will the Rupee Coin be easier to trade when there are other cryptocurrencies circulating in the market? The Rupee Coin is sourced from Litecoin, and thus it is based on the Scrypt mining algorithm. FYI, one of the world's leading cryptocurrencies, Bitcoin, is being traded in India at a much higher rate than on the international market. Thus many big cryptocurrencies have become more like digital assets, and less usable. As of now, the Rupee Coin is being traded on two exchanges – coinexchange.io and cryptopia.co.nz. The team is working towards bringing it onto trading exchanges in India as well and hopes to be listed soon. Can Rupeebase.com become the potential facilitator for business in South Asia? Their team is 10 members strong, including Adam Syed, who wants to start a portal called Rupeebase.com, which will bring together merchants from countries like India, Pakistan, Indonesia, Nepal and Sri Lanka, whose fiat currencies are also known as the Rupee. Adam concludes by saying, “With smart contracts and cryptocurrencies becoming the next big thing, we believe a platform like Rupeebase.com can revolutionize trade in South Asia, especially India.” Do you know that until August 2017, RUP was trading at 0.004 USD, and now it is almost at 0.3 USD (30 cents)? This is almost a 7,500 percent jump in value from what it initially started trading at. The all-time high for the Rupee has been 0.94 USD. The ‘Rupee Coin’ is the latest cryptocurrency riding on this increased interest and enthusiasm. The Rupee coin is not the fiat currency that is backed and issued by a number of Asian countries, including India. He believes that blockchain can power transformation across sectors – from asset management to exchanges and wallets to healthcare to banking.  
Adam says, “When I think about the future of technology, I see blockchain-based platforms, products, and services used by the masses.”
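The percentage jump quoted in the piece is straightforward to verify:

```python
# Figures cited in the article: RUP traded at $0.004 until August 2017,
# and is now almost at $0.30.
start, now = 0.004, 0.30
gain_pct = (now - start) / start * 100
print(round(gain_pct))  # 7400 -- i.e. "almost a 7500 percent jump"
```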

Read More

Slack Hopes Its AI Will Keep You from Hating Slack

If you work at one of the 50,000 companies that pay to use Slack for workplace collaboration, you probably spend hours on it, swapping information, bantering, and sharing files with your colleagues. It’s a casual, flexible way to interact—you tap out brief messages in group chat rooms (called channels) instead of sending e-mail, and it feels more like a smartphone app than typical office software. Facebook says that more than 30,000 organizations, including Walmart, use its Workplace by Facebook service. (These numbers aren’t directly comparable to Slack’s 50,000, though, since neither Microsoft nor Facebook would say how many daily users their platforms have, while Slack wouldn’t say how many organizations use the free version of its service.) These chat products deliver not only steady revenues from monthly and annual service fees, but also troves of data that show how people interact within companies and what types of files and applications they use to get work done. Companies like Microsoft “will tie these tools in with their other enterprise-wide platforms,” such as Office 365, says Jeffrey Treem, an expert on communication technologies at the University of Texas at Austin. “All of these large technology companies are pursuing this same space because it’s a very rich market.”  Slack is not worried. “We think we have a bunch of important advantages, among them traction in the market, sharp focus, and a really deep understanding of our users,” says CEO and cofounder Stewart Butterfield. The work graph To understand how Slack intends to improve work through AI, I visited the company’s New York office, where the team is based. The space, at the edge of Manhattan’s East Village, is an eclectic mix of Zen-like décor (tall green fronds planted among polished stones) and cartoon kitsch (flat-screen monitors broadcasting emoji animal faces). 
But while it can be an efficient way to collaborate, keeping up with Slack can become a full-time task, particularly when you return from a few days away and find thousands of status updates, scattered across dozens of channels. At Slack, Weiss is applying what he learned at Google and Foursquare to refine search queries and give people recommendations when they open the app. The information, which appears when users conduct searches in Slack, is meant to pinpoint subject experts so people can direct questions to their most knowledgeable and accessible colleagues. Another feature, added last year, evaluates all of a user’s unread messages, across all Slack channels; highlights up to 10 of the ones its algorithms deem most important; and presents them in a single list. Slack is using machine-learning algorithms to highlight the most important messages you missed while away from the platform. Both innovations rely on a data structure that Weiss calls the “work graph.” It essentially looks at companies that use Slack and analyzes how the people within them are interrelated, where in the app their discussions are taking place, and what topics are being discussed. But while Google studies public data and Facebook promotes the idea of a single, global network of relationships, Slack thinks of the work graph as specific to each company — a representation of how work is structured within it. The work graph emerges mainly through a type of machine-learning algorithm called collaborative filtering, which predicts a person’s interests and preferences by collecting information about those of many other people. For example, when people start using Slack, the algorithms will look at the channels they’ve joined, who is active in them, and where else those people are active, in order to suggest several more channels to the new users. 
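The channel-suggestion behavior described above — look at the channels a new user joined, who is active in them, and where else those people are active — is classic collaborative filtering. A toy sketch (the users and channels are invented for illustration; Slack’s real work graph is far richer):

```python
from collections import Counter

# Hypothetical channel memberships for a few existing users.
memberships = {
    "ana":   {"#general", "#backend", "#oncall"},
    "bo":    {"#general", "#backend", "#design"},
    "carla": {"#general", "#oncall", "#infra"},
}

def suggest_channels(joined: set, k: int = 2) -> list:
    """Recommend channels favored by users who overlap with `joined`."""
    votes = Counter()
    for other in memberships.values():
        overlap = len(joined & other)   # similarity to the new user
        for channel in other - joined:  # channels the new user lacks
            votes[channel] += overlap
    return [channel for channel, _ in votes.most_common(k)]

print(suggest_channels({"#general", "#backend"}))  # ['#oncall', '#design']
```

Each existing user “votes” for the channels the newcomer hasn’t joined, weighted by how many channels they already share.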
“We’ve spent a lot of time building models that understand what you care about and what content you interact with,” says Jerry Talton, who helps lead the team’s technical work. “In the future, we’ll take that same understanding and apply it to content you don’t know about that could make you better at your job.”  Keeping an eye on you Another Slack goal is to help management keep a better eye on its employees. One of the team’s newest initiatives crunches data to construct online dashboards that give executives a bird’s-eye view of how employees are interacting, which topics are trending, and how sentiment changes over time. “You’d be able to see what your European set of offices are paying attention to versus your U.S. set of offices, or what people who have long tenure at your company are paying attention to versus people who are really new,” Weiss says. Slack is still working out the details—it is unclear, for example, whether companies will be able to access data from the past 24 hours or just the most recent week or weeks—but the AI team plans to roll it out in the near future. The very idea of “organizational insight” analyses shows how far Slack has come from its early days, when it was regarded as a startup beloved by other startups but out of sync with the demands of large companies. Weiss says his team hopes to assuage concerns by parsing activity only in public Slack channels (rather than the private ones where people can conduct confidential conversations). He also says Slack won’t turn on the feature unless companies request it. Still, employees may balk, particularly if they think they will get assessed on the basis of how active or popular they are on Slack. Adam Waytz, who researches social psychology and ethics at Northwestern University’s Kellogg School of Management, thinks the feature sounds invasive. 
“Given the increasing public unease about employers’ control over their employees’ lives and what gets said at work, this product could result in backlash or paranoia,” he says. Slack also needs to gain trust for its existing AI features. “AI can be tremendously beneficial in matching the right people with the right information to do the right tasks, but it’s not a perfect solution,” says Treem, the University of Texas communications professor. “If you were relying on algorithms to get you the most important messages and you find out a week later that you missed something particularly important, you’re going to lose confidence in Slack’s ability to do what you need it to do.” To gauge user satisfaction with its new tools, Slack includes thumbs-up, thumbs-down, and “dismiss” buttons with each message its algorithms highlight. Weiss says algorithm tweaks by the AI team last year made searches 50 percent more successful, and also made people 30 percent more likely to accept suggestions about new Slack channels to join. If all goes as planned, the intelligence layer the team is building on top of Slack will morph into a digital assistant that can make people more productive. Butterfield, the Slack CEO, sees AI as a long game. “I think what we have right now is good,” he says. “In a couple of years, it will be very good.” In early 2016, the startup hired Stanford-trained computer scientist Noah Weiss to make the platform smarter and more useful. Over the past year and a half, Weiss’s group has used machine learning to enable faster, more accurate information searches within Slack and identify which unread messages are likely to matter most to each user. Eventually, Weiss aims to make Slack function like your ruthlessly organized, multitasking assistant who knows everything that’s going on and keeps you briefed on only the most salient events.

Read More

How Microsoft is working AI into its software without you even knowing it – GeekWire

Steve Guggenheimer, Microsoft’s corporate vice president for AI business, talks about the company’s approach at the AI NextCon conference in Bellevue, Wash. (GeekWire Photo / Alan Boyle)

Microsoft worked with Jabil, a Florida-based chip engineering firm, on a software platform that could review thousands of pass-fail records for circuit boards and develop criteria for automated quality assurance. Workplace AI agents can do predictive maintenance, sending alerts about hardware that’s likely to fail and even identifying which workers are best placed to make repairs. (Just make sure HAL 9000 isn’t in charge.) People-detection software can even alert workers when there’s a forklift coming around the corner. AI assistants can help lawyers by making sure all the clauses are correct in a complicated contract, or review medical records for physicians to ensure that nothing is missed. All this may make it sound as if AI is merely about cold-eyed competence, but Guggenheimer pointed to some applications that warm the heart — for example, an AI-assisted watch bracelet that compensates for the hand tremors associated with Parkinson’s disease, or an app that lets blind people know what’s going on around them by whispering in their ears. That perspective on AI runs counter to the usual stereotype of a Terminator-style robot uprising — which suggests there might be hope for humanity after all. “The products just work better,” Guggenheimer told attendees at the AI NextCon conference here today. “You don’t actually go do a bunch of advertising and say, ‘Office, Now With AI!’ That’s not how it works.” But rest assured, it’s there. Those PowerPoint tricks take advantage of the latest in speech recognition and machine learning. “Most people wouldn’t think it’s AI, but it’s pretty cool,” Guggenheimer said. 
AI is benefiting from the rapid rise of cloud computing, big-data analysis and sophisticated software tools, with the result that AI programs are starting to outperform the average human in such categories as speech recognition and reading comprehension.
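The Jabil example above — reviewing thousands of pass-fail records to derive quality-assurance criteria — is, at its simplest, supervised learning of a decision rule. A deliberately tiny sketch with a one-dimensional threshold (the records are invented; a production system would use many features and a far richer model):

```python
# Hypothetical history: (measured defect score, did the board pass QA?).
records = [(0.1, True), (0.2, True), (0.35, True),
           (0.6, False), (0.8, False), (0.9, False)]

def fit_threshold(data):
    """Choose the cutoff that best separates passes from fails."""
    def accuracy(t):
        return sum((score <= t) == passed for score, passed in data)
    return max(sorted(score for score, _ in data), key=accuracy)

threshold = fit_threshold(records)
print(threshold)  # boards scoring above this get flagged for review
```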

Read More

It’s time for Washington to start working on artificial intelligence

One of the more unexpected sights within the United States Capitol lies just outside the Old Supreme Court Chamber. There you can find a plaque marking the first ever long-distance communication by electronic telegraph, which took place inside the Capitol in 1844, when Samuel Morse sent and received messages from Baltimore. As the founder of the Artificial Intelligence Caucus, I’ve been working to start a new dialogue on Capitol Hill that is focused on the future. When I talk to people in the private sector, in research, in the sciences, AI dominates the discussion. Moreover, artificial intelligence isn’t just analogous to previous groundbreaking technologies like the steam engine, the telegraph, the microchip — it’s potentially even more transformative, because it represents an innovation that reaches across technologies and disciplines, from health care to transportation and logistics and beyond. The goal of the AI Caucus is to bring in academicians, entrepreneurs, scientists, ethicists, etc. and have them brief Congress on what’s happening in AI, what it means and what we need to do. Our caucus is bipartisan and co-chaired by my friend Republican Rep. Advances in technology always are – we can often see the old jobs that are going to go away, but we can’t see right away the new jobs that are going to be created. Earlier this year, during a Facebook Live on AI I hosted, Dr. Sebastian Thrun made the point that before the Industrial Revolution, it was necessary for most people to work in the fields planting and harvesting food, often in physically demanding circumstances. The committee is required to have academics, technologists, labor organizations and civil liberties groups represented, as well as people from all parts of the country. This committee will be required to study four key issues and make recommendations to the White House and to Congress as to what we should do to serve the national interest. 
Including: 1) how to encourage more investment in research and make sure that the US is the global leader in innovation; 2) how we can make sure workers benefit and the workforce is prepared for the new kinds of jobs being created; 3) making sure AI programs aren’t biased; and 4) protecting civil liberties, privacy and individual rights. We’re working in a bipartisan way and it’s going to be focused on the facts. Let’s get all the experts together, take a close look at what’s happening and then hear their recommendations on what we need to do next to help the country. I want to make sure that AI is good for working people, good for businesses and good for our economy and that it’s implemented in an ethical way. Suddenly, we were not bound by the speed of the horse or the ship and the world would never be the same. Visitors to the Capitol today wouldn’t expect to see cutting-edge experiments taking place inside the building and sadly, they probably don’t have much faith that Congress is even thinking about the future at all. Washington spends way too much time re-litigating the past — witness how much time has been devoted to debating old trade deals, the 2010 Affordable Care Act or the 1980s Reagan tax cuts — and has increasingly budgeted and legislated in a backward-looking way. Instead of embracing the trends of the future and empowering our citizens, too many policymakers would rather roll back the clock. According to data collected by the Brookings Institution, federal investment in research and development has declined significantly in recent decades, falling from 2.23% of our Gross Domestic Product (GDP) in the 1960s to just 0.77% in 2016.
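In relative terms, the R&D figures cited amount to roughly a two-thirds cut:

```python
# Federal R&D as a share of GDP, per the Brookings data cited:
share_1960s, share_2016 = 2.23, 0.77
relative_decline = (share_1960s - share_2016) / share_1960s * 100
print(round(relative_decline))  # 65 -- about a 65% relative decline
```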

Read More

The technology behind AI in PPC

I believe artificial intelligence (AI) will be a key driver of change in PPC in 2018 as it leads to more and better PPC intelligence. Had the first car doubled its speed 27 times (the same number of times the microchip has doubled its speed since it was invented), it could have made the trip to the sun in about 4 minutes. So, if we’ve reached the point of PPC automation today where humans and computers are about equally good, consider that the pace of technological improvement makes it possible for the machines to leave humans in the dust later this year. And just like the first car is not the right vehicle for a flight to Neptune, the tools you used to manage AdWords a few years ago may no longer be the ones that make sense for managing AdWords today. Just like you want to know what your employees are capable of by interviewing them before hiring them, you should understand a technology’s capabilities (and limits) before adding it to your toolkit. Before the advent of AI as a research field in 1956, you could make a machine appear “intelligent” by programming it to deliver specific responses to a large number of scenarios. But that form of AI is very limited because it can’t deal with edge cases, of which there are invariably many in the real world. Rules are great for covering the majority of use cases, but the real world is messy, and trying to write rules for every scenario is simply impossible. Between the 1950s and 1980s, AI evolved into using symbolic systems to be able to take heuristic shortcuts like humans do. By framing problems in human-readable form, it was believed the machines could make logical deductions. Here’s a PPC problem: you’re adding a new keyword, but you don’t know the right bid to set because there is no historical data for it. By teaching the machine concepts like campaigns and keywords and how these relate to each other, we are providing it with the same heuristics we use to make reasonable guesses. 
So the system can now automate bid management and might set a similar bid to other keywords in the campaign because it knows that campaigns tend to have keywords that have something in common. The type of AI that is responsible for a lot of success in PPC today is based on statistics and machine learning to categorize things. Quality Score (QS) is a great example; Google looks at historical click behavior from users and uses machine learning to find correlations that help predict the likelihood of a click or a conversion. By having a score for how likely it is that each search will translate into a conversion, automated bidding products like those offered inside AdWords can “think” through many more dimensions (like geo-location, hour of day, device, or audience) that might impact the likelihood of a conversion than a person could. Thanks to the massively increased computing power available today, these systems can also consider interactions across dimensions without getting “overwhelmed” by the combinatorial nature of the problem. AI systems getting a lot of attention today, like AlphaGo Zero, are no longer dependent on structured data and can become “intelligent” without being “constrained by the limits of human knowledge,” as explained by DeepMind CEO Demis Hassabis. It can be applied to games because there is a clear outcome of “winning” or “losing.” When Google figures out what it means to win or lose in the game of AdWords, I bet we’ll see a huge acceleration in improvements of their automation tools. There are a lot of tools available to automate your PPC work, and multiple third-party vendors are starting to use AI and ML to provide stronger recommendations. For those willing to invest in connecting their own business data to AdWords and AI, I’m a big fan of prototyping solutions with AdWords Scripts because they provide a lot of customizability without requiring a lot of engineering resources. 
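The campaign-level heuristic described earlier — a new keyword with no history borrows from the keywords it sits alongside — can be sketched in a few lines (the campaign and bid values are hypothetical):

```python
from statistics import mean

# Hypothetical campaign: existing keyword -> current max CPC bid (USD).
campaign_bids = {"running shoes": 1.20, "trail shoes": 1.40, "buy sneakers": 1.00}

def initial_bid(campaign: dict) -> float:
    """No history for a new keyword, so start from the campaign average."""
    return round(mean(campaign.values()), 2)

print(initial_bid(campaign_bids))  # 1.2
```

A real bidding system would refine this starting point as click and conversion data accumulates.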
We’ve recently hit an inflection point where, due to the exponential nature of technological advances, improvements that used to take years now happen in weeks. Over the coming months, I will share my own experiences with AI so advertisers ready to take the plunge will have a better understanding of what is involved in building successful companies that leverage the latest state of the art in technology, computation, and statistics. Rather than reasoning about abstract chip speeds, let’s apply this doubling of speed to cars, where we can more easily understand how it impacts the distances we travel and how quickly we get somewhere.
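The “sun in about 4 minutes” figure works out if you assume a starting speed of roughly 10 mph for the first car — an assumption, since the article doesn’t state its base speed:

```python
SUN_DISTANCE_MILES = 93_000_000   # average Earth-sun distance
base_speed_mph = 10               # assumed speed of the first car
doublings = 27                    # microchip speed doublings cited

speed_mph = base_speed_mph * 2 ** doublings   # ~1.34 billion mph
minutes = SUN_DISTANCE_MILES / speed_mph * 60
print(round(minutes, 1))  # 4.2
```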

Read More

AI Could Diagnose Your Heart Attack on the Phone—Even If You’re Not the Caller

Copious VC funding and generous government support are tempting Chinese nationals and foreign students to work or learn in China. Homeward bound: Chinese nationals who studied overseas, many of whom went through college in the U.S., are heading back to China. Limits to career advancement in the West often seem to motivate the move. Incentives on offer: a report by PwC and CB Insights says Asia is close to surpassing North America in VC funding, raising $70.8 billion in 2017 versus North America's $74 billion.

Read More

Node.js + face-recognition.js : Simple and Robust Face Recognition using Deep Learning

Lately I have been trying to build a face recognition app with Node.js to extract and recognize the faces of characters from The Big Bang Theory. Initially, I wanted to build this with OpenCV's face recognizers, similarly to how I did it in my tutorial Node.js + OpenCV for Face Recognition. However, while those face recognizers deliver fast prediction results, I found them not to be robust enough. More precisely, while they seem to work well with frontal face images, they produce much less confident predictions as soon as the face pose differs even slightly.

Thus I looked for alternatives, came across the dlib C++ library, fiddled around with its Python API, was impressed by the results, and finally decided: I want to use this with Node.js! So I created an npm package providing a simplified Node.js API for face recognition. The dlib library uses deep learning methods and comes with some pretrained models, which have been shown to achieve an astonishing prediction accuracy of 99.38% on the LFW face recognition benchmark.

With face-recognition.js I wanted to provide an npm package which exposes a simple API to get started quickly, still allows for more fine-grained control if desired, and is easy to set up (optimally as simple as typing npm install). While this package is still a work in progress, you can already do the following with it. You can use either a deep neural net for face detection or a simple frontal face detector for fast but less robust detection. The face recognizer itself is a deep neural net, which uses the model mentioned above to compute a unique descriptor for each face. It can be trained with labeled face images and can afterwards predict the label of an input face image. You can also use the package to detect 5- and 68-point face landmarks.

Okay, as I said, I initially failed to solve this task with OpenCV. The code of this example can be found on the repo. I have collected roughly 20 faces per character in different poses. We will use 10 faces each to train the recognizer and the rest to evaluate its accuracy. The file name of each face image contains the person's name, so we can simply map our class names ['sheldon', 'lennard', 'raj', 'howard', 'stuart'] to an array of images per class. You can also detect and extract the faces, then save and label them.

Now that we have our data in place, we can train the recognizer. Training basically feeds each face image into the neural net, which outputs a descriptor for the face, and stores all the descriptors for the given class. Increasing the number of jittered versions of each image may increase prediction accuracy, but it also increases training time. Furthermore, we can save the recognizer's state to a file, so that the next time we can simply load it instead of training again.

Now we can check the prediction accuracy on our remaining data and log the results. Prediction is currently done by computing the Euclidean distance between the input face's descriptor vector and each stored descriptor of a class, then taking the mean of those distances per class. The output will look something like this:

{ className: 'sheldon', distance: 0.5 }

In case you want to obtain the distances of an input face to the descriptors of all classes, you can simply use recognizer.predict(image), which outputs an array with the distance for each class:

[ { className: 'sheldon', distance: 0.5 }, { className: 'raj', distance: 0.8 }, { className: 'howard', distance: 0.7 }, { className: 'lennard', distance: 0.69 }, { className: 'stuart', distance: 0.75 } ]

Running the above example gives the following results.

Using 10 faces each for training:
sheldon (90.9%): 10 of 11 faces recognized correctly
lennard (100%): 12 of 12 faces recognized correctly
raj (100%): 12 of 12 faces recognized correctly
howard (100%): 12 of 12 faces recognized correctly
stuart (100%): 3 of 3 faces recognized correctly

Using only 5 faces each for training:
sheldon (100%): 16 of 16 faces recognized correctly
lennard (88.23%): 15 of 17 faces recognized correctly
raj (100%): 17 of 17 faces recognized correctly
howard (100%): 17 of 17 faces recognized correctly
stuart (87.5%): 7 of 8 faces recognized correctly

Looking at the results, we can see that even with a small set of training data, we can already obtain pretty accurate results.
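The training and prediction logic described above (store descriptors per class, then classify by mean Euclidean distance) can be sketched in a few lines of plain JavaScript. This is an illustration of the principle with made-up toy descriptors, not the face-recognition.js implementation: in the real package a deep neural net maps each face image to its descriptor, while here the descriptors are simply hand-written arrays.

```javascript
// Store each training descriptor under its class label.
const classes = {};
function addFace(descriptor, className) {
  (classes[className] = classes[className] || []).push(descriptor);
}

// Euclidean distance between two descriptor vectors.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Mean distance from the input descriptor to every stored descriptor
// of each class; the best guess is the class with the smallest mean.
function predict(descriptor) {
  return Object.entries(classes)
    .map(([className, descriptors]) => ({
      className,
      distance:
        descriptors.reduce((s, d) => s + euclidean(descriptor, d), 0) /
        descriptors.length,
    }))
    .sort((a, b) => a.distance - b.distance);
}

// Toy 2-D "descriptors" for two classes:
addFace([0.1, 0.2], 'sheldon');
addFace([0.2, 0.1], 'sheldon');
addFace([0.9, 0.8], 'raj');

console.log(predict([0.15, 0.15])[0].className); // closest class: 'sheldon'
```

The nice property of this scheme is that adding a new person requires no retraining of the neural net itself, only storing a few more descriptors.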

Read More