Apple and Its Rivals Bet Their Futures on These Men’s Dreams

Computers have learned to recognize faces and objects, understand the spoken word, and translate scores of languages. This is the peculiar story—pieced together from my interviews with them—of why it took so long for neural nets to work, how these scientists stuck together, and why Canada, of all places, ended up as the staging ground for the rise of the machines. (Not everyone agrees with Canada’s pride of place.)

BENGIO: I don’t think that humans will necessarily be out of jobs, even if machines become very smart and maybe even smarter than us. We’ll always want real people for jobs that really are about human interactions. I believe that if we’re able to build machines that are as smart as us, they will also be smart enough to understand our values and our moral system, and so act in a way that’s good for us. That will take decades.

My real concern is around the potential misuse of AI, for example as applied to military weapons. We need to become collectively wiser.

SUTTON: I think it’s a big mistake that we’ve called the field “artificial intelligence.” It makes it seem like it’s very different from people and like it’s not real intelligence. It makes people think of it as more alien than it should be, but it’s a very human thing we’re trying to do: re-create human intelligence. Science has always revealed truths that not all people like—you get the truth but not always the one you wanted. We shouldn’t want to freeze the way we are now and say that’s the way it should always be.

Hinton (second from left) and Bengio (right) outside London at a workshop organized by the Gatsby Institute in 2011.

LECUN: Until we know exactly what it’s going to look like, worrying about this really is premature. I don’t believe in the concept of singularity, where one day we’ll figure out how to build superintelligent machines and the next day that machine will build even smarter ones and then it will take off.
I think people forget that every physical or social phenomenon will face friction, and so an exponentially growing process cannot grow indefinitely. This Hollywood scenario where some genius somewhere in Alaska comes up with the secret to AI and builds one robot and it takes over the world, that’s just preposterous.

TRUDEAU: It’s not something I overly fret about. I’m reassured that Canada is part of it in terms of trying to set us on the right path. And I wouldn’t want to slow down our research, our trying to figure out the nuts and bolts of the universe. The question is: What kind of world do we want? Do we want a world where the successful have to hide behind gated communities and everyone else is jealous and shows up with pitchforks? Or do you want a world where everyone has the potential to contribute to innovation?

HINTON: I think the social impact of all this stuff is very much up to the political system we’re in. Intrinsically, making the production of goods more efficient ought to increase the general good. The only way that’s going to be bad is if you have a society that takes all of the benefit of that rise in productivity and gives it to the top 1 percent. One of the reasons I live in Canada is its tax system; if you make a lot of money, the country taxes you a lot.

It could be driving a simulated car down a road or trying to recognize a cat in a photo. Within that, there’s a subset of machine learning called deep learning. The general idea is you build a neural network, and it has weights and biases that can be tweaked to home in on the desired outcome. That’s what Geoff Hinton and others have really worked on over the past decades, and it’s now the underpinning of what’s most exciting about AI. It does a better job of mimicking the way a human brain thinks.

Featured in Bloomberg Businessweek, May 21, 2018.
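The “weights and biases that can be tweaked to home in on the desired outcome” idea above can be made concrete in a few lines. This is a minimal sketch only, fitting a single artificial neuron to invented data by gradient descent; it is nothing like the deep networks Hinton’s group built, but the tweak-toward-the-target loop is the same in spirit:

```python
# Minimal sketch: one "neuron" with a weight and a bias,
# nudged by gradient descent toward a desired outcome.
# Toy data (invented for illustration): y = 2*x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # the tweakable weight and bias
lr = 0.05         # learning rate: how big each tweak is

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b      # the neuron's guess
        err = pred - y        # how wrong the guess was
        w -= lr * err * x     # tweak the weight...
        b -= lr * err         # ...and the bias, to shrink the error

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0
```

After enough passes, the weight and bias settle on the values that best reproduce the data, which is all “training” means at this scale.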
CADE METZ, reporter for the New York Times and author of a forthcoming history of AI: The idea of a neural network dates back to the 1940s—the notion of a computing system that would mimic the web of neurons in the brain. In the late 1950s, a researcher named Frank Rosenblatt was funded by the Navy and other parts of the government, and he developed this thing called a Perceptron based off the neural network concept. When he revealed it, places like the New York Times and the New Yorker covered it in pretty grand terms.

Rosenblatt claimed it would not only learn to do small tasks like recognize images but also could theoretically teach machines to walk and to talk and to show emotion. But it was a single layer of neurons, and that meant it was extremely limited in what it could do. Needless to say, none of the things he promised actually happened.

Marvin Minsky, a colleague of Rosenblatt’s who happened to be one of his old high school classmates from the Bronx, wrote a book in the late 1960s that detailed the limitations of the Perceptron and neural networks, and it kind of put the whole area of research into a deep freeze for a good 10 years at least.

GEOFF HINTON: Rosenblatt’s Perceptron could do some interesting things, but he got ahead of himself by about 50 years. The book by Minsky and Seymour Papert on the technology (Perceptrons: An Introduction to Computational Geometry) basically led to the demise of the field. During the 1970s a small group of people kept working on neural nets, but overall we were in the midst of an AI winter.

METZ: Geoff Hinton, at Carnegie Mellon University and then later at the University of Toronto, stuck with the neural network idea. Eventually he and his collaborators and others developed a multilayered neural network—a deep neural network—and this started to work in a lot of ways. A French computer scientist, Yann LeCun, spent a year doing postdoctoral research at Hinton’s lab in Toronto.

LECUN: I grew up in the 1960s, so there was space exploration, the emergence of the first computers, and AI.
So when I started studying engineering, I was really interested in artificial intelligence, a field that was very nascent.

LeCun (right) at Esiee Paris graduate school in 1979.

I heard about the Perceptron and was intrigued, because I thought learning was an integral part of intelligence. As an engineer, if you want to understand intelligence, the obvious approach is to try to build a smart machine—it forces you to focus on the components needed to foster intelligence. You don’t want to just mimic biological intelligence or the brain, because there are a lot of aspects of its function that are just due to biochemistry and biology—they’re not relevant to intelligence, really. Like how feathers aren’t crucial for flight: What’s important are the underlying aerodynamic principles.

METZ: There were people who thought LeCun was a complete nut and that this was sort of a Sisyphean task. You would go to these big AI conferences as a neural network researcher, and you weren’t accepted by the core of academia.

BENGIO: I had a scholarship from the government, so I could basically choose my topic, and it didn’t cost anything to the professor. We made a deal that I could do machine learning, but I would apply it to the thing that he cared about, which was speech recognition.

LECUN: Around 1986, there was a period of elation around neural nets, partly due to the interest in those models from physicists who came up with new mathematical techniques. That made the field acceptable again, and this led to a lot of excitement in the late 1980s and early 1990s.

But this was no overnight hit, nor was it the brainchild of a single Silicon Valley entrepreneur. The ideas behind modern AI—neural networks and machine learning—have roots you can trace to the last stages of World War II. Back then, academics were beginning to build computing systems meant to store and process information in ways similar to the human brain.
I worked on an automated system for reading checks with character recognition.

Pomerleau demonstrates his self-driving car in 1995.

METZ: At Carnegie Mellon, a guy named Dean Pomerleau built a self-driving car in the late 1980s using a neural network. LeCun used the technology in the 1990s to build a system that could recognize handwritten digits, which ended up being used commercially by banks. So through the late ’80s and on into the ’90s, there was this resurgence in neural networks and their practical applications, LeCun’s work being the prime example.

JÜRGEN SCHMIDHUBER: My first encounter with Yoshua was when he published the same thing, or more or less the same thing, four years after one of my students published it. And then a couple of years later there was a showdown at a conference where all of this came out. What you do in science is you clarify things. (Bengio has denied Schmidhuber’s claims.)

Over the decades, the technology had its ups and downs, but it failed to capture the attention of computer scientists broadly until around 2012, thanks to a handful of stubborn researchers who weren’t afraid to look foolish.

LECUN: The problem back then was that the methods required complicated software, lots of data, and powerful computers. That was kind of a dark period for Geoff, Yoshua, and me. We were not bitter, but perhaps a little sad that people didn’t want to see what we all thought was an obvious advantage. The number of people getting neural networks to work better was quite small. The Canadian Institute for Advanced Research got people like us from all over the world to talk to each other much more. It gave us something of a critical mass.

LECUN: There was this very small community of people who had this in the back of their minds, that eventually neural nets would come back to the fore.
They remained convinced that neural nets would light up the world and alter humanity’s destiny. While these pioneers were scattered around the globe, there happened to be an unusually large concentration of neural net devotees in Canada.

We got together and decided that we should strive to rekindle interest in our work. But we needed a safe space to have little workshops and meetings to really develop our ideas before publishing them. Geoff published one in Science.

Face-recognition test images from Hinton’s 2006 article in Science.

TRUDEAU: Learning that Canada had quietly built the foundations of modern AI during this most recent winter, when people had given up and moved on, is sort of a validation for me of something Canada’s always done well, which is support pure science. We give really smart people the capacity to do smart things that may or may not end up somewhere commercial or concrete.

HINTON: In 2006 in Toronto, we developed this method of training networks with lots of layers, which was more efficient. We had a paper that same year in Science that was very influential and helped back up our claims, which got a lot of people interested again. In 2009 two of the students in my lab developed a way of doing speech recognition using these deep nets, and that worked better than what was already out there. It was only a little better, but the existing technology had already been around for 30 years with no advances. The fact that these deep nets could do even slightly better over a few months meant that it was obvious that within a few years’ time they were going to progress even further.

Like just about everyone else, Li Deng believed in a different form of AI known as symbolic AI. In this approach, you basically had to build speech recognition systems one line at a time, coding in specific behavior, and this was really slow going. Hinton mentioned that his neural-net approach to speech recognition was showing real progress.
That’s only partly through luck: The government-backed Canadian Institute for Advanced Research (Cifar) attracted a small group of academics to the country by funding neural net research when it was anything but fashionable. It backed computer scientists such as Geoffrey Hinton and Yann LeCun at the University of Toronto, Yoshua Bengio at the University of Montreal, and the University of Alberta’s Richard Sutton, encouraging them to share ideas and stick to their beliefs.

It could learn to recognize words by analyzing the patterns in databases of spoken words, and it was performing faster than the symbolic, line-by-line work. Deng didn’t necessarily believe Hinton, but invited him and eventually two of his collaborators to Microsoft to work on the technology. Speech recognition took a huge leap forward at Microsoft, and then Google as well in 2010. Then, at the end of 2012, Hinton and two of his students had a huge image recognition breakthrough where they blew away previous techniques. That’s when not just Microsoft and Google but the rest of the industry woke up to these ideas. The thing to remember is, these are very old ideas. You need the data to train on, and you need the computing power to execute that training.

LECUN: Why did it take so long? Now we’re in a bit of a race between people trying to develop the algorithms and people trying to develop faster and faster computers. You have to sort of plan for your AI algorithms to work with the computers that will be available in 5 years’ and 10 years’ time.

SUTTON: The computer has to have a sense of what’s good and what’s bad, and so you give it a special signal called a reward. It’s where a purpose comes from. A neural net is where you store the learning, and reinforcement is how you decide what changes you’d like to make.

BENGIO: We’re still a long way from the kind of unsupervised learning that Geoff, Yann, and I dream about.
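Sutton’s description of the reward as a “special signal” that tells a learner what’s good and bad can be sketched with a toy example. This is a hedged illustration only: tabular Q-learning on an invented five-cell corridor, with a plain table of values rather than a neural net, and not any system described in the article. A reward of +1 at the right end is the signal that defines what counts as good:

```python
import random

# Toy example (invented): Q-learning on a made-up 5-cell corridor.
# Reaching the rightmost cell pays a reward of +1; that reward signal
# decides which stored action values get strengthened.
random.seed(0)
n_states = 5
q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.3   # step size, discount, exploration rate

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # mostly act greedily on stored values, sometimes explore
        if random.random() < eps:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)     # move, clipped to the corridor
        r = 1.0 if s2 == n_states - 1 else 0.0    # the special reward signal
        # nudge the stored value toward reward plus discounted future value
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, +1)]) - q[(s, a)])
        s = s2

# After training, the learned values prefer moving right in every cell.
policy = [max((-1, +1), key=lambda act: q[(s, act)]) for s in range(n_states - 1)]
print(policy)   # expected after convergence: [1, 1, 1, 1]
```

The table plays the role Sutton assigns to the neural net (it stores the learning), and the reward-driven update is the reinforcement that decides which changes to make.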
They came up with many of the concepts that fueled the AI revolution, and all are now considered godfathers of the technology.

A 2-year-old has intuitive notions of physics, gravity, pressure, and so on, and her parents never need to tell her about Isaac Newton’s equations for force and gravity. We interact with the world, observe, and somehow build a mental model of how things will unfold in the future, if we do this or that. We’re moving into a new phase of research into unsupervised learning, which connects with the work on reinforcement. We’re not just observing the world, but we’re acting in the world and then using the effect of those actions to figure out how it works. We can predict the consequences of our actions, which means that we don’t need to actually do something bad to realize it’s bad.

So, what I’m after is finding ways to train machines so that they can learn by observation, so they can build those kind of predictive models of the world. You could say that the ability to predict is really the essence of intelligence, combined with the ability to act on your predictions.

LECUN: It’s quite possible we’re going to make some significant progress over the next 3 years, 5 years, 10 years, or 15 years—something fairly nearby.


I Tried to Get an AI to Write This Story

Google announced a new AI-powered set of products and services at its I/O conference for developers, including one called Duplex that makes phone calls for you and sounds just like a real person, which freaked everyone right out. But most important is that they’ve got all that data and not enough programmers to make sense of it. Machine learning is an enormous shortcut, a path to new products and big savings.

“Watching a machine-learning model train itself is like watching a movie montage”

So out of curiosity and a deeply optimistic laziness, I set out to learn enough about machine learning that I could feed a neural network everything I’ve ever written and have it write an article, or even just a paragraph, that sounded like me. The first wall I hit is that, even for a nerd who’s used to befuddlement, machine learning is opaque. I am a veteran of jargon, and trust me, this is one big epistemological hootenanny. Even worse, when you look under the rock at all the machine learning, you see a horrible nest of mathematics: Squiggling brackets and functions and matrices scatter. Can’t I just turn a dial somewhere? It all reminds me of Linux and the web in the 1990s: a sense of wonderful possibility if you could just scale the wall of jargon.

You feed data to a program and it spits out a new program for classifying data. This should give us pause, but asking Silicon Valley to pause for reflection is like asking a puppy to drop its squeaky toy.

Here’s more good news: Machine learning is amazingly slow. We’re so used to computers being ridiculously fast, doing thousands of things at once—showing you a movie and connecting to dozens of Wikipedia pages while you chat in one window, write in a word processor, and tweet all the while (admittedly I might have a problem). But when I tried to feed a machine-learning toolkit all my writing in the hope of making the computer write some paragraphs for me, my laptop just shook its head.
It was going to take at least a night, maybe days, to make a model of my prose. At least for now, it’s faster for me to write the paragraphs myself. But I’d already read tutorials and didn’t want to give up. I’d downloaded and installed TensorFlow, a large machine-learning programming environment produced by Google and released as open source software. Fishing around, I decided to download my Google calendar and feed all my meetings to TensorFlow to see if it could generate new, realistic-sounding meetings. Just what the world needs: a meeting generator.

Unfortunately, my meetings are an enormous pile of events with names like “Staffing,” “Pipeline,” “John x Paul,” and “Office happy hour.” I ran a script once to load the data, then ran another script to spit out calendar invites. However, on that trial run I set the wrong “beam” (God only knows what that is) and the RNN just produced the word “pipeline” over and over again.

Add to this the public reveal that the musician Grimes and Elon Musk are dating, after the two shared a joke about AI. And yet when people ask what the software company I run is doing with machine learning, I say, calmly, “Nothing.” Because at some level there’s just nothing to do.

The hotness of the moment is machine learning, a subfield of AI. In machine learning you take regular old data—pictures, emails, songs—and run it all through some specialized software. Sales = my life.

“I went back to my laptop and applied a skill that’s fundamental to programming: cheating”

The thing is, that might look like failure. But I’d fed my machine learner a few thousand lines of text (tiny by machine learning standards), and it had learned one word. I was almost as proud as when I thought my infant son said “cat.” I was back to the seminal 1950 paper by Alan Turing in which he proposed simulating a child via computer. “Presumably the child brain is something like a notebook as one buys it from the stationer’s,” he wrote.
“Rather little mechanism, and lots of blank sheets.”

Change the settings, try again. After 50 “epochs” (when the program reads in all of your data one time, that’s an epoch—training a network requires beaucoup epochs) I had it generating meetings with titles like “BOOK,” “Sanananing broces,” and “Talking Upgepteeelrent,” even though I’ve never talked Upgepteeelrent with anyone.

A regular microprocessor is sort of a logic-powered sausage maker; you feed it meat (instructions) and it processes the meat and produces sausage (output) all day long. That software builds up a “model.” Since the model encodes what came before, it’s predictive—you can feed the model incomplete data and it will suggest ways to complete it. A trivial example: Anyone, including you and me, can feed the alphabet to a “recurrent neural network,” or RNN.

Sadly, even though I followed the instructions, I couldn’t get Linux to recognize my graphics card, which after 20 years of using Linux feels more like a familiar feature than a bug. Of course, all would not be lost: I could jump online and rent a TPU, or Tensor Processing Unit, from Google (a tensor is a math thing where things connect to other things) using its cloud services. But if you want to rent a Google TPU and blast through a ton of machine learning tasks, it’ll cost $6.50 an hour, billed by the second. If you’re looking at tons of satellite imagery or MRIs—probably.

I went back to my work laptop and applied a skill that’s fundamental to programming: cheating.
I switched from “character”-based neural networks to training against “words”—and since my pet neural network was no longer learning the alphabet but merely looking at “tokens,” my meetings got much more plausible in a hurry. After 2,000 epochs, it got to some relatively good meetings: “Paul and Paul!,” “Sarony Hears,” and the dreaded “Check-in,” but it was still mostly producing stuff like “Sit (Contench: Proposal/Gina Mcconk.”

I started to feel why everyone is so excited: There is always, always one more knob to turn, one other thing to tweak that could make the computer more thoughtful-seeming. Or, as then-Ph.D. student Andrej Karpathy wrote in a 2015 essay, The Unreasonable Effectiveness of Recurrent Neural Networks: “I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me.” He’s currently director of AI at Tesla Inc. His neural network must have been more than just amusing.

Messing with machine learning scratches a nerd itch to understand the world and master it a little, too—to reduce reality to inputs and outputs, and remix it. I wanted to forget my family and my company and just fall backward into this world of cloud TPUs, feeding it ever more data and letting it create ever more surprising models that I would explore and filter. At the end, a robotic Rocky runs up the stairs of the Philly art museum and raises his robot arms in the air. It’s too bad that Robot Rocky was trained on a data set of hockey films instead of boxing, but it’ll still be fascinating to watch him enter the ring and try to score a goal.

“At least for now, computers need people as much as we need them”

Finally, I just let ’er crank for 20,000 epochs and went home, but the results weren’t any better in the morning.
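The “cheat” above, switching from characters to word “tokens,” changes only what the model treats as one unit of input. A small sketch with an invented meeting title, using nothing but standard Python, shows why word tokens give the model more meaningful pieces to work with:

```python
title = "Office happy hour"   # invented meeting title for illustration

# Character-based: the network has to learn spelling before anything else.
char_tokens = list(title)
# Word-based: each token is already a meaningful chunk.
word_tokens = title.split()

print(len(char_tokens))   # 17 tiny tokens
print(word_tokens)        # ['Office', 'happy', 'hour']
```

Seventeen characters versus three words: with the same amount of training data, the word-level model spends its capacity on which titles follow which, not on how to spell “Office.”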
Now you execute that model (maybe by running a script) and give it the letters “ABC.” If your specially trained neural network is having a good day, it’ll say “D.”

“Pitch Lunch: Wendy no get,” and “Tyler chat Deck.” I don’t know what it says of my life that all these could be real invites. I’d tapped out the limit of what I could do without learning more. I’d learned that machine learning is very slow unless you use special equipment, and that my life, at least by the meetings I attend, is pretty boring. The reality is that my corpus wasn’t big enough; I need millions, billions of meetings to build a good predictive model. Get me a whiteboard!

I work in software, and machine learning is the big new thing, but I’m not worried, nor are we retooling our company. As with all software, machine-learning tools still need people to come along to make them look good and teach them how to behave.

Go up a level: Feed your neural network a million pictures with captions, then feed it a picture without a caption and ask it to fill in the missing caption. Feed it countless emails with replies, then show it one without a reply and ask it what to say.

You can jump onto Amazon’s SageMaker platform and get yourself a machine with 8 GPUs and 616 gigabytes of memory across all its processors for $24.48 an hour. It didn’t set out to be an ad company, but it is, and its market value is around $750 billion, so it will have to accept that.

Since we use software all the time, we create an unbelievable amount of data.
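The “ABC” then “D” behavior above is next-token prediction in its simplest form. The sketch below is a loud simplification and not a real RNN: it just memorizes which letter followed which in its training data (the alphabet). But it shows the shape of the trick, feeding a model incomplete data and asking it to suggest a completion:

```python
import string

# Drastically simplified stand-in for the trained model described above:
# record which letter followed which in the training data.
training = string.ascii_uppercase   # "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
follows = {a: b for a, b in zip(training, training[1:])}

def complete(prefix):
    """Suggest the next letter after the last one seen."""
    return follows.get(prefix[-1], "?")

print(complete("ABC"))   # D
```

A real RNN learns a far richer version of this mapping from examples rather than storing it in a lookup table, which is what lets it generalize to captions, replies, and meeting titles instead of just the alphabet.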
And machine learning is really effective at productizing (a real word) big data. So if I’m Google, the absolute, most horrible, worst-case outcome is that I will be able to use what machine learning gives me and apply it to my enormous suite of advertising products and make them smarter and better and more useful, and do smarter and better search across the enormous swaths of culture where I charge a toll, which includes YouTube, all world geography, and (practically) the web itself. Plus, I can make it easier to use Android phones, which I also indirectly control.

Simultaneously I, Google, will release TensorFlow, and that will bring a huge group of expensive-to-recruit engineers up to speed on the tools we use internally, creating in them a great desire to come and do machine learning at our massive scale, where they can have all the TPU hours they want. And that will add up to tens of billions of dollars over the years.

But—still channeling the Google in my heart—in my wildest dreams I will open up entirely new product lines around machine vision, translation, automatic trading services, and generate many hundreds of billions of dollars in value, all before machine learning succumbs to the inevitable downward pressure and gets too cheap and easy. I mean, even if TPUs shrink and everyone in the world can do machine learning, I’ll have the data. And I will be providing the cloud infrastructure for a whole machine learning world—clawing back what’s rightfully mine from those mere booksellers at Amazon—because my tools will be the standard, and our data will be the biggest, and the applications the most immense.

The cops can search for people who might become criminals, the credit agencies can predict people who will have bad credit, the homeland security offices of many nations can filter through their populations and make lists of questionable value.


Artificial Intelligence and Machine Learning: “Am I a real Boy?”

This article was exclusively written for the Sting by Mr Ahmed Rafay Afzal, a medical student from King Edward Medical University, Lahore, Pakistan, currently pursuing a career in the United States.

All this happens by the magic of Artificial Intelligence, which makes the patient the point of importance, creates a large amount of statistical data about an individual, and gives medical professionals tools to sort through and analyze that data. With such huge amounts of data, it has become almost impossible for a physician to analyze it all. This is where machine learning comes into play: although not strictly artificial intelligence itself, machine learning is still the biggest tool available to physicians who are embracing the new onslaught of technology in the medical field.

The National Institutes of Health defines precision medicine as an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment and lifestyle for each person.1 This approach involves algorithms which use supercomputers to mine data with the help of machine learning and deep learning. When combined with the results of a pathologist, the success rate increased up to 99.5%.3 Machine learning software has been written that helps record the patient-doctor encounter, reducing the workload of the physician by writing his notes for him.

With the current status quo of machine learning, it might not be sufficient to replace a physician, but there is enough evidence to argue that machine learning can definitely supplement a physician even in its current relatively untested phase. A computer cannot do a physical exam, it doesn’t have the cognitive ability of a physician, it doesn’t have compassion, and it can’t feel for the patient. Just like humans, A.I. is also flawed: who is to blame when the A.I. makes a wrong decision regarding patient care?
Would society ever entrust a machine to deal with the intricate details of health records, including sensitive data such as sexual history or HIV status? How would the paradigm of human-machine interactions work in a healthcare setting, and what laws would govern that interaction? There is also another issue with misconceptions and over-exaggerations about the potential of A.I. Artificial Intelligence is revolutionary, no doubt, but it’s not the answer to every problem that we have in healthcare.

In the long forgotten lores of eastern medicine, doctors were called “Hakeem”. Unfortunately, I, a doctor of real medicine, do not enjoy that status anymore as compared to my great-great-grandfather who was a “Hakeem”. The point being that the state of medicine has transformed from witchcraft to scribbled prescriptions to Electronic Health Records. We used to develop medical protocols keeping general principles and populations in mind, but with the advent of digital health, precision medicine has been making the rounds.

Ahmed Rafay Afzal is a medical student from King Edward Medical University, Lahore, Pakistan, currently pursuing a career in the United States. His current focus is research in pediatric Gastroenterology and on revamping the healthcare system of Pakistan with the introduction of digital technology.

However, the opinions expressed in this piece belong strictly to the writer and do not necessarily reflect IFMSA’s view on the topic, nor The European Sting’s one.


Microsoft Pakistan partners P@SHA for cloud-based trainings

Microsoft Pakistan has signed a partnership with P@SHA (Pakistan Software Houses Association) that will introduce a new universe of cloud computing with specialized courses and certificates. The participants of this training can connect with experts and meet like-minded cloud enthusiasts with the help of these courses specifically designed for them.

Digital transformation is a digital eco-system that can present untapped opportunities for a variety of organizations such as governments, community and business leaders who want to increase their export capabilities along with attracting foreign investment. Microsoft has a long history of partnering with governments, businesses, and individuals to make technology accessible to the younger generation. Further to these goals, Microsoft’s vision to use Azure as a way to democratize artificial intelligence (AI) will help companies build cloud-fueled, AI-based solutions across the globe.

Microsoft Pakistan is on its way to nurturing a highly evolved academic culture in the country by integrating global advancements and cutting-edge tools, thus empowering students and businesses to contribute and compete in the global workforce. Microsoft Pakistan has worked with all the business sectors and industries in Pakistan and is helping them to attain growth in all dimensions. Microsoft is a global leader in technology and innovation and has over 40,000 sellers and hundreds of thousands of partners in Pakistan. By plugging ISVs (Independent Software Vendors) into a sales engine, Microsoft Pakistan can connect them directly to customers.

Cloud technology is popular throughout the region and in Pakistan by way of contributing to national development, enhancing socio-economic growth, enabling innovation and improving service delivery.
To fully realize the potential of the cloud, Microsoft has launched the ‘Microsoft Cloud Society’ program to offer people working in the technology sector the chance to be trained, certified and work face to face with Microsoft cloud experts. Microsoft Cloud Society is aimed at equipping IT workers at all levels with Azure-based, cloud-ready skills. Microsoft Pakistan is steadfastly pursuing a cloud readiness strategy for the future within Pakistan.

The courses that will be offered by Microsoft Pakistan for this partnership are Microsoft Azure Virtual Machines, Migrating Workloads to Azure, Data Science Essentials, DevOps for Developers – Getting Started, and Artificial Intelligence.

Abid Zaidi, Microsoft Pakistan Country Manager, stated: “Microsoft’s vision is to enable and empower individuals and organizations through technology. Partnership with P@SHA is in line with our mutual goal of enabling the ISV ecosystem.” He then added, “In order to scale and have the required impact, we are keen to further collaborate and forge similar partnerships. I see these technology-driven alliances will go a long way in nurturing fresh efforts for nation-building.”

P@SHA Secretary General Shehryar Hydri said, “Microsoft Pakistan is very active and committed to the local market and this partnership around their cloud solutions will not only upskill our local workforce but also enable tech companies to expand outside Pakistan and scale aggressively.”

At the signing ceremony for Microsoft’s cloud-based training program: (R) Mr. Abid Zaidi, Country Manager, Microsoft Pakistan, and (L) Mr. Shehryar Hydri, Secretary General, P@SHA (Pakistan Software Houses Association).

As part of Microsoft’s worldwide efforts to empower people and organizations across a variety of industries and verticals, it is driving ‘digital transformation’ as a means for achieving economic prosperity.


World Telecom Day: How PTCL Harnesses The Power of AI


However, the rise of information and telecommunication technology, particularly over the last two decades, has outstripped the rest of the man-made wonders history has witnessed. For this reason, the 21st century stands out as the communication century, a golden era of all-inclusive socio-economic growth and technological development that has revolutionized everyone's lives.

World Telecom Day
On May 17th every year, World Telecommunication and Information Society Day (WTISD) is celebrated internationally under the auspices of the United Nations (UN). The International Telecommunication Union (ITU) is a specialized UN agency primarily responsible for issues and policy matters related to information and communication technology.

Raising Awareness
Held every year under a pre-defined theme, the prime objective of WTISD is to raise public awareness about the use of information and communication technology (ICT) and to explore how it can bridge the digital divide and bring convenience and ease to complex, time-consuming day-to-day tasks. On World Telecommunication Day, UN member countries organize awareness sessions, hold seminars and conduct special workshops and conferences to explore future possibilities and stimulate reflection and exchange of ideas about the various aspects of the year's theme.

Artificial Intelligence
This year the theme for WTISD is 'Artificial Intelligence for All', with a view to enabling the positive use of artificial intelligence by mapping out its potential to accelerate progress toward the United Nations' Sustainable Development Goals (SDGs) by 2030. AI technology has seen phenomenal growth and advancement in recent years and has emerged as the driving force behind ICT fields such as the Internet of Things (IoT), big data, ambient and mobile computing, machine learning, cloud computing, storage capacity, computing power and many others.

PTCL's Role
A leading telecom and ICT service provider in the country, Pakistan Telecommunication Company Limited (PTCL) has evolved in many ways since its inception and now offers the latest digital and telecommunication technologies. It has brought new innovations to customer services and contact center operations within Pakistan. Afiniti's Enterprise Behavioral Pairing™ algorithm analyzes and predicts the behavior of incoming customers and then pairs them with the best-suited agents to improve their overall experience. In the near future, this will help PTCL become an innovative ICT service provider and enable it to offer modern, technology-based solutions to millions of customers and to other telecom service providers and ICT-related companies in the country.
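Afiniti's actual pairing algorithm is proprietary, but the general idea of matching incoming customers to the available agent with the highest predicted compatibility can be sketched with a minimal greedy assignment. Everything below (function name, customers, agents, and scores) is a hypothetical illustration, not Afiniti's implementation:

```python
def pair_customers_to_agents(scores):
    """Greedily pair each customer with the free agent that has the
    highest predicted compatibility score.

    scores: dict mapping (customer, agent) -> predicted score.
    Returns a list of (customer, agent) pairs.
    """
    customers = sorted({c for c, _ in scores})
    free_agents = {a for _, a in scores}
    pairs = []
    for customer in customers:
        if not free_agents:
            break  # no agents left; remaining customers wait in queue
        # Pick the free agent predicted to serve this customer best.
        best = max(free_agents,
                   key=lambda a: scores.get((customer, a), float("-inf")))
        pairs.append((customer, best))
        free_agents.remove(best)
    return pairs

# Illustrative scores for two customers and two agents.
scores = {
    ("cust1", "agentA"): 0.9, ("cust1", "agentB"): 0.4,
    ("cust2", "agentA"): 0.8, ("cust2", "agentB"): 0.7,
}
print(pair_customers_to_agents(scores))
# → [('cust1', 'agentA'), ('cust2', 'agentB')]
```

A production system would optimize the assignment globally (e.g. with the Hungarian algorithm) rather than greedily, and the scores themselves would come from a trained behavioral model.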


Nvidia, AMD Get Buy Ratings On Artificial Intelligence Prospects


Graphics-chip makers Advanced Micro Devices (AMD) and Nvidia (NVDA) received fresh buy ratings on Friday as investment bank Cowen initiated coverage of a host of semiconductor stocks. The Cowen analyst rated eight chip stocks as outperform, or buy, and three as market perform, or neutral, placing Intel (INTC), Qualcomm (QCOM) and Cirrus Logic (CRUS) in the market-perform group. "Old guard" chip stocks tied to PCs and smartphones will be less attractive in the years ahead, he said.

Trade tensions between the U.S. and China are easing as the two countries negotiate, the report said. China's approval is the last regulatory roadblock to the Qualcomm-NXP deal, which was announced in October 2016; the $44 billion acquisition has received antitrust clearance from eight of the nine required government regulatory bodies around the world.

Nvidia is currently ranked No. 24 on the IBD 50 list of top-performing growth stocks. It fell 0.7% to close at 245.94 on the stock market today.


Etsy opens machine learning center in Toronto


The company broke the news yesterday during a meeting with Canadian prime minister Justin Trudeau in New York City. Etsy's third Machine Learning Center of Excellence, which follows on the heels of its Brooklyn and San Francisco locations, will play host to leading figures from local universities and Toronto's "deep pool of world-class machine learning talent," according to a statement. According to Etsy, the Canadian ecommerce market's growth was a deciding factor.


Baidu Falls, Artificial Intelligence Push In Limbo After Shake-up


Baidu (BIDU) stock sold off Friday as analysts pondered whether its push into artificial intelligence will stall in the wake of its chief operating officer stepping down. Shares in the Beijing-based web services company had broken out above a 275.07 buy point this week, but the China-based internet search leader tumbled 9.5% to close at 253.01 on the stock market today, falling out of the buy zone.

"In the past 1.5 years, Qi took over the operational management at Baidu enabling CEO Robin Li to spend more time on strategy," Credit Suisse analyst Thomas Chong said in a note to clients. "We view Dr. Lu as instrumental to Baidu's transition to becoming an 'All in AI' company." Baidu views voice-activated, smart-home devices as core to its long-term strategy, and said Haifeng Wang will take over as senior vice president and general manager of the company's AI Group.

The company still consolidates the financial results of iQiyi (IQ), a Netflix-like video streaming service that it recently spun off. At a conference Thursday, iQiyi said it plans to move into virtual reality applications. Baidu is just one stock to watch for artificial intelligence developments.


5 Traits Organizations Need to Get Value from Machine Learning


A new report identifies the traits of organizations that are getting value, in the form of higher revenues and profits, from their machine learning initiatives. Does your enterprise have what it takes?

Take a look at the hot technologies for the enterprise in 2018, and machine learning is right up at the top. Organizations are looking to get an edge on the competition by deploying the technology, even if many may not be ready for it. There are dozens of surveys out there about how many enterprise organizations have deployed this technology already or plan to deploy it, but just because they deploy it doesn't mean it will be successful. A new survey, compiled by The Economist Intelligence Unit in collaboration with SAP, pulls back the covers on machine learning in organizations, focusing on the best practices of those that are succeeding. The report identifies what it calls five traits of "machine learning leaders," or "fast learners." These are companies that are "already seeing substantial benefits from machine learning" spanning the entire organization, including higher profitability and revenues, greater competitive differentiation, and faster, more accurate and more cost-efficient processes, according to the report.

The fast learner traits include the following:

They make machine learning a C-level strategic priority. Senior management at these organizations are more open to change because they see the strategic value of the technology, according to the report.

They drive competitive differentiation and innovation. According to the report, 31% of fast learner companies say that machine learning yielded business model or business process innovation. A full 48% say increased profitability is the top benefit of machine learning, and another 48% expect revenue growth of more than 6% from 2018 to 2019. According to the survey, 58% of fast learners say they spend more than half their budget for business processes locally, not outsourced, compared to 39% of non-fast-learner machine learning users. They may end up spending more to keep those processes in-house, but they realize greater customer value in the process, according to the report.

However, the report notes that the biggest companies may struggle the most with the change necessary to become fast learners. Smaller companies are making up for a lack of resources by tapping into what's available in the cloud. "Smaller organizations now have access to substantial computing power at a fraction of the cost of maintaining such hardware on-premises," the report said. "They also have access to wide bodies of external AI and ML knowledge through roughly a dozen open source innovation platforms that big technology organizations and research institutes have been creating."

The report also offers recommendations for getting started with machine learning: organize a machine learning bootcamp for the executive team to help business unit heads understand how machine learning can help the business; identify external sources of machine learning knowledge by looking at examples of other organizations' initiatives; pilot the first machine learning initiatives in small sets of processes where the risk is low, then spread the successful projects across the rest of the business processes; and direct your marketing and communications teams to put together a handbook for directors that they can use to answer internal questions about why machine learning is being adopted and what it will mean to their teams.


Oracle acquires machine learning platform Datascience.com


In the near term, not much will change for customers of DataScience.com; it will continue to offer the same products and services to partners post-acquisition. "Data science requires a comprehensive platform to simplify operations and deliver value at scale," DataScience.com CEO Ian Swanson said in a press release. "With DataScience.com, customers leverage a robust, easy-to-use platform that removes barriers to deploying valuable machine learning models in production." In a statement provided to investors and members of the press, Oracle said that it is reviewing DataScience.com's existing product roadmap and will issue guidance to customers in the coming months.

At Oracle OpenWorld in San Francisco last October, the Silicon Valley firm took the wraps off Oracle AI Platform Cloud Service, a suite of pre-configured AI libraries and deep learning frameworks, alongside Oracle Mobile Cloud (a conversational AI platform), Oracle Autonomous Data Cloud (a machine learning utility for industrial workloads), Oracle Analytics Cloud (an AI data visualization tool), and Oracle Security and Management Cloud (AI-powered cybersecurity threat analysis).
