Learning to roll with being ‘good enough’ at work and parenting

I wondered if they were talking about me, but it was a few weeks before I had time to sit down and read it without interruption. When one editor asked me to submit clips of my longform writing, I realized it had been nearly a decade since I’d written anything longer than 1,000 words. There, I said it. And I am trying not to feel bad about it anymore.

Motherhood, especially when you have more than one child, is wonderful, miraculous and everything the women who came to your baby shower said it would be. But it’s also all-encompassing in a way that I never could have imagined before I had kids. There is always something to be done: meals to be made, messes to be cleaned, homework to be supervised, activities to be shuttled to, cuddles to be given (that last one is, admittedly, my favorite). And just when I think I’ve crossed everything off my to-do list and can finally sit down for a moment and maybe pitch an idea that’s been rolling around in my brain, or set up an interview with a source, something else pops up that needs my attention.

When I give my 3-year-old daughter the iPad or turn on the TV so I can get some work done, I think that I should be playing with her, rather than potentially stunting her brain development so I can be creatively fulfilled. When I take an hour to exercise, I worry that I should be writing, that I am squandering my potential, that all of my free time needs to be devoted to my craft. When I’m cleaning or straightening up the house, I feel guilty that I’m not using the time to actively engage with my children. When I just want to watch “This Is Us” for an hour by myself after the kids are in bed, I feel bad that I’m not spending that time watching TV with my husband, whom I haven’t seen all day.

On more than one occasion, I have felt paralyzed by what I’d like to accomplish in a day — and how I’m going to get it all done. This constant feeling of falling short was becoming a major source of unhappiness for me.
And that also made me feel terrible because, all things considered, my life is a good one. I continued to wallow in this until one afternoon, as I was scrolling through my Facebook feed (something that also feeds my “never enough” anxiety), I saw a post from a mom asking for advice on how to balance a creative career with parenting. Another mother, with grown children, replied with the idea that there are “seasons to life”: that even if you aren’t doing everything you want to be doing right this second, it’s okay, because someday you will have the time to devote to whatever it is you desire. Several women were interviewed for the piece, and as I read about their experiences, I felt as though I was among my people. Wow. This was an aha moment for me.

Yes, I know that my children won’t always need me the way they do now (and that I’ll probably mourn this), but dividing my life into “seasons” allowed me to embrace the fact that the kids are my focus, without feeling guilty or inadequate about what can’t be. It was freeing. I finally let go of the breath I didn’t realize I had been holding. So as I approach my 40s, I am trying to be more mindful of the metaphorical marathon that I, and so many of my fellow moms, are running. I am trying to be more present for my children mentally, not just physically.

But our conversation soon turned to how hard it is to keep the momentum going on the successful careers we had cultivated before we became parents. “I’m terrified of becoming irrelevant,” she confided in me. Same here.

I scaled back on freelancing when my son was born almost seven years ago, and at the time, I was fine with this. It was financially feasible, I had a newborn to take care of, and I was content to throw my entire being into that work. When my daughter was born in 2014, I again cut back on writing, but this time, when the itch to freelance returned, it was much harder to scratch it.
So many colleagues I knew from my days as a magazine editor had fled the uncertainty of the print industry for jobs in content branding or the digital space, and I found myself cold-pitching editors I didn’t know at publications for which I had written years earlier.

Read More

IKEA & Teenage Engineering Announce FREKVENS ‘Party’ Collaboration

The line is still early in the design stages. “We are just starting to shape the collection; it’s a work in progress,” says Teenage Engineering CEO Jesper Kouthoofd. “In FREKVENS, we want to make products that everybody can grasp and handle. Even those who are not so tech-savvy should swiftly be able to understand and use the products.” “Designing FREKVENS, we want to make something that feels like IKEA, and at the same time challenge how we perceive them today,” adds Kouthoofd. “IKEA is furniture, meatballs and soon… Party!”

Read More

AI Smartphones Will Soon Be Standard, Thanks to Machine Learning Chip

Almost every major player in the smartphone industry now says that their devices use the power of artificial intelligence (AI), or more specifically, machine learning algorithms. Apple has already designed and built a “neural engine” as part of the iPhone X’s main chipset, to handle the phone’s artificial neural networks for image and speech processing. British chip design firm ARM, the company behind virtually every chip in today’s smartphones, now wants to put the power of AI into every mobile device. Thanks to a processor dedicated to machine learning for mobile phones and other smart-home devices, AI smartphones could one day be standard. ARM’s Project Trillium would make running machine learning on the device itself much more efficient; a built-in AI chip would allow devices to continue running machine learning algorithms even when offline. “We analyze compute workloads, work out which bits are taking the time and the power, and look to see if we can improve on our existing processors,” Jem Davies, ARM’s machine learning group head, told the MIT Technology Review. ARM doesn’t actually make the chips it designs, so the company has started sharing its plans for this AI chip with hardware partners — like smartphone chipmaker Qualcomm. The MIT Tech Review notes, however, that ARM’s track record for energy-efficient mobile processors could translate to a more widespread adoption of its AI chip. With the advantages machine learning brings to mobile devices, it’s hard not to see this as the future of mobile computing.

Read More

“Smarticle” Robot Swarms Turn Random Behavior into Collective Intelligence

In a lab at the Georgia Institute of Technology, physicists run experiments with robots that look as though they came from the dollar store. The work with these robots, known as “smarticles,” is part of a broader interest in the feasibility and applications of self-organizing robots. In many of these cases the idea is to mimic emergent phenomena found in nature, like the regimented motion of a decentralized colony of army ants or the unconscious, self-programming assembly of DNA molecules. “We know what we want the collective to do, but in order to program it we need to know what each agent must be doing on the individual level,” said Melvin Gauci, a researcher at Harvard working on swarm robotics. “Going between those two levels is what’s very challenging.”

Beware of Leaders

Daniel Goldman is a physicist at Georgia Tech who is leading the experiments with smarticles (a portmanteau of “smart active particles”). His fundamental scientific interest is in the physics of active granular materials that have the ability to change their own shape. In a slide deck he brings to conferences, he includes a clip from Spider-Man 3 that shows the birth of the supervillain Sandman: Loose grains of sand skitter across the desert and then congeal into the shape of a man. Smarticles are Goldman’s way of testing active granular materials in a lab. “They give us a way to use geometry to control the properties of a material.” They can also be programmed to adjust the rate at which they swing their arms in response to the other smarticles they encounter in their immediate vicinity. These maneuvers could serve as building blocks for more complicated feats, but even the most basic functions, like compression, are hard to engineer when none of the smarticles have any idea where they’re positioned in relation to the overall group.
A single smarticle can’t see, it has limited memory, and the only thing it knows about the other smarticles it’s supposed to coordinate with is what it can learn from bumping into its immediate neighbors. “Imagine one person at a rock concert with his eyes closed,” said Joshua Daymude, a graduate student in computer science at Arizona State University who works on the smarticles project.

One strategy would be to appoint a leader that orchestrates the swarm, but that approach is vulnerable to disruption—if the leader goes down, the whole swarm goes down. Another is to give each robot in the swarm a unique job to perform, but that’s impractical to implement on a large scale. “Individually programming 1,000 robots is basically an impossible task,” said Jeff Dusek, a researcher at Olin College of Engineering and a former member of the Self-Organizing Systems research group at Harvard, where he worked on underwater robot swarms. But “if every agent is following the same set of rules, your code is exactly the same whether you have 10 or 1,000 or 10,000 agents.”

An algorithm used to program a swarm has two properties. First, it’s distributed, meaning it runs separately on each individual particle in the system (the way each army ant carries out the same simple set of instructions based on whatever it senses about its local environment). Second, it involves randomness, so that a particle’s next move is drawn from a probability distribution rather than fixed in advance. This means that if an army ant senses, say, five other army ants right around it, maybe there’s a 20 percent chance it moves to the left and an 80 percent chance it moves to the right.

Random Guarantees

In 2015, Goldman and Randall were discussing the possibility of finding rules that would lead Goldman’s smarticles to act coherently as a group. Randall realized that the swarm behaviors Goldman was after were similar to the behavior of idealized particle systems studied in computer science. “I was like, ‘I know exactly what’s going on,’” Randall said.
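The two properties described above (the same code on every agent, with random choices weighted by local sensing) can be sketched in a few lines of Python. This is an illustrative toy of my own, not the researchers’ code: the function name and the even split for neighbor counts other than five are assumptions.

```python
import random

def ant_step(num_neighbors: int, rng: random.Random) -> str:
    """One agent's move. Every agent in the swarm runs this same rule,
    using only locally sensed information (its neighbor count).
    The 20/80 split for five neighbors mirrors the army-ant example
    in the text; the even split for other counts is an assumption."""
    if num_neighbors == 5:
        return "left" if rng.random() < 0.2 else "right"
    return rng.choice(["left", "right"])

# The code is identical whether the swarm has 10 or 10,000 agents:
rng = random.Random(42)
sensed = [5, 3, 5, 0, 5, 1]          # each agent's local neighbor count
moves = [ant_step(n, rng) for n in sensed]
print(moves)
```

Nothing here is coordinated centrally, and the program does not change as the swarm grows—which is exactly the scaling point Dusek makes above.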
In the late 1960s, the economist Thomas Schelling wanted to understand how housing segregation takes hold in the absence of any centralized power sorting people into neighborhoods by skin color. In his model, each person on a housing grid looked at his immediate neighbors and decided, based on how many of them resembled him, whether to stay or to move. When the person moved, Schelling transported him to a random spot in the housing grid, where he repeated the algorithmic process of observing his neighbors and deciding whether to stay or go. Schelling discovered that, according to his rules, residential segregation is virtually guaranteed to take hold, even if individuals prefer to live in diverse neighborhoods. And in Schelling’s model the decisions can be made with an element of randomness—if your neighbors look different from you, maybe there’s a high probability you move, but also some small probability you choose to stay put.

Randall and her co-authors proved that if they weighted the die correctly, they were guaranteed to end up with a compressed swarm (in the same way Schelling could have proved that if he set individuals’ tolerance for diversity at the right level, segregation was unavoidable). The randomness in the algorithm helps particles in a swarm avoid getting stuck in locally compressed states, where lots of isolated subgroups are clustered together but the swarm as a whole isn’t compressed. The randomness ensures that if smarticles end up in small compressed groups, there’s a chance individuals will still decide to move to a new location, keeping the process alive until an overall compressed state is reached. (It takes just a little randomness to nudge particles out of locally compressed states—it takes a lot more to nudge them out of a globally compressed state.)

Into the World

Proving that particles in a theoretical world can run a simple algorithm and achieve specific swarm behaviors is one thing. Actually implementing the algorithm in cheap, faulty, real-life smarticles clanking around in a box is another.
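The Schelling dynamic described above is simple enough to simulate. Below is a minimal one-dimensional sketch of my own (the grid size, tolerance value, and function names are illustrative assumptions, not parameters from Schelling’s or Randall’s work): every agent follows the same local rule, moving to a random empty cell when too few neighbors match it, with a small chance of moving even when content.

```python
import random

def run_schelling(n=60, tolerance=0.5, steps=10_000, seed=1):
    """Minimal 1-D Schelling model: cells hold 'A', 'B', or None.
    An unhappy agent (too few like neighbors) moves to a random
    empty cell; a small chance of moving even when happy mirrors
    the element of randomness described in the article."""
    rng = random.Random(seed)
    grid = ["A"] * 25 + ["B"] * 25 + [None] * (n - 50)
    rng.shuffle(grid)
    for _ in range(steps):
        i = rng.randrange(n)
        if grid[i] is None:
            continue
        left, right = grid[(i - 1) % n], grid[(i + 1) % n]
        like = sum(1 for x in (left, right) if x == grid[i])
        occupied = sum(1 for x in (left, right) if x is not None)
        unhappy = occupied > 0 and like / occupied < tolerance
        if unhappy or rng.random() < 0.01:
            j = rng.choice([k for k in range(n) if grid[k] is None])
            grid[j], grid[i] = grid[i], None
    return grid

def segregation(grid):
    """Fraction of occupied adjacent pairs whose members match."""
    pairs = [(grid[i], grid[(i + 1) % len(grid)]) for i in range(len(grid))]
    occ = [(a, b) for a, b in pairs if a is not None and b is not None]
    return sum(a == b for a, b in occ) / len(occ)

final = run_schelling()
print(f"like-neighbor fraction: {segregation(final):.2f}")
```

The point mirrors the article’s: a uniform local rule plus a weighted coin is enough to drive a global outcome that no individual agent intended.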
“Our theory collaborators are coming up with ways to program these things, but we’re just in the beginning and we can’t yet say these schemes have been transferred directly,” Goldman said. But one day, as the physicists were observing the smarticles’ chaotic motion, the battery died in one of the units. Goldman and his collaborators noticed that the swarm suddenly started moving in the direction of the inactive unit. The work led to the recent development of an algorithm that will always get an idealized swarm to move in a specified direction. The researchers hope to eventually prove theoretically that a basic algorithm, implemented in a distributed way in a large collection of small, cheap robots, is guaranteed to produce a specified swarm behavior. “We’d like to move to a point where it’s not that batteries died and we found a phenomenon,” Daymude said. “We’d like it to be more intentional.”

Researchers are learning how to control these systems so that they function in a manner similar to swarms of bees or colonies of ants: Each individual operates in response to the same basic set of instructions. But when the swarm comes together, its members can carry out complex behaviors without any centralized direction. “Our whole perspective is: What’s the simplest computational model that will achieve these complicated tasks?” said Dana Randall, a computer scientist at Georgia Tech and one of the lead researchers on the project. “We’re looking for elegance and simplicity.” As a computer scientist, Randall thinks about the problem in algorithmic terms: What is the most basic set of instructions individual elements in a swarm can run, based on the meager data they can collect, that will lead inevitably to the complex collective behavior researchers want?

Reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Read More

Throwback Thursday: Teaching And Learning Are A Shared Responsibility

Teaching has been a passion of mine since I began riding horses as a young girl. Let me know if any of the subjects I touch on need more explanation.

Learn by making the right correction, not by telling your instructor what she already sees. Most horses do what they are told to do. If you are giving correct aids and you are not getting what you ask for, there are ways to convince the horse to be more generous.

Example: Student B crosses the half diagonal on the left lead. When she reaches the track, she asks for a flying change by putting the left leg back and pressing with it so unfeelingly that the horse actually goes in counter canter with the haunches to the inside, but doesn’t make a flying change. (This is impossible to do if you actually intend to do it, but I saw a student create this very scenario a few weeks ago!) The rider yells over her right shoulder, “I am asking and he is not doing it!” as she rides down the long side in haunches-in/counter canter, pressing firmly with the left leg. In reality, this rider was getting exactly what she asked for, so we had to go over the flying change aids again. When asked properly, with a light tap of the spur reinforced by a tap of the whip so that there could be no misunderstanding, the horse did the flying change.

Example: Student C wants to know how to correct her passage to piaffe transition. So she does it her way again to show me what she knows rather than trying to learn what I am teaching her. Now, this lady’s horse falls on the forehand in the transition to piaffe because she crams her lower leg back on the barrel and tips forward on her seatbones while pulling the head down in a tight frame. She needs to put her own weight on the back of her seat bones, tap the horse nearer to the girth with her spurs so that he will lift his withers, and allow him to lift his poll by giving with her hand.
She stops and says, “I’ve been taught to keep the same contact between piaffe and passage.” So I repeat, “Give on the reins in the piaffe.” She tries again, this time with the spur at the girth, which drops her back on her seatbones and starts to elevate the horse’s withers. When she gives on the reins his nose comes to the vertical and his poll is at the highest point.

Now, with any one of these scenarios, if the student had just NOT SPOKEN and instead tried to do what she was told, she would have been further along in correcting the mistakes in a shorter amount of time. I like to teach riders from all levels of the sport. I could have taken the time to explain all the tiny details of every correction, but the rider ALWAYS learns more if he or she actually does it first and receives the explanation afterward. If you talk first, the momentum and impulsion you need to achieve your goal are already lost.

Don’t bring the frustrations of your daily life with you to your lesson. Let them go at the entrance to the arena. Your riding instructor is there to help you with your riding, not your marriage, divorce or stress-related problems. Every successful lesson I have ever taught has involved the improvement of a rider’s basic training—be that in a sitting trot lesson without stirrups, or in training the half-halt to improve self-carriage in passage. Don’t tell the teacher how to teach you or what you need to learn.

Rita, I have been both a student and a teacher my whole life. I cannot completely separate one from the other, but I can tell you that as I progress and learn as a dressage rider, this sport continues to fascinate me every day.

Training Tip of the Day: Can you begin a riding lesson (as a student or a teacher) with a clear mind and maintain that state throughout the lesson?

I had taken a hiatus from teaching during the six years of my sponsorship with the JSS Trust because I had the financial support to concentrate solely on training and competing.
Now I am on the road again teaching a lot of clinics, and I am very pleased to have new opportunities to share my knowledge with other riders. The concept of sharing responsibility for a good riding lesson—50/50 between the student and the instructor—is strong with me now. Nowadays, the more I learn, the more I want to teach. And the more I teach, the more I want to learn!

Firstly, a good dressage teacher must have an appropriate level of technical understanding for the level of student that rides in front of her/him. If you’re trying to fix problems in the right half-pass without ever having ridden one, you might be in for a bit of a struggle. Secondly, if a rider comes to me for a lesson because she is struggling with pirouettes, I not only need the technical knowledge of how to ride one, but I also must develop the ability to observe her pirouettes with a skillful eye and offer a solution.

1. Evaluation: What is the basic problem with the pirouette? The rider is sitting too far to the outside and pushing the haunches too far to the inside. The correction: Begin the pirouette with shoulder-in on the centerline with the rider positioning her upper body more over the inner seatbone. This threefold approach can only be practiced by an instructor who possesses good technical skill AND the ability to evaluate, analyze and correct.

Thirdly, and perhaps most importantly, the best teachers in the world have a generous spirit. They have stepped into the arena not to show off what they know, but to tell a student what she or he must hear in order to learn. I cannot abide an instructor who steps into the arena and tells me everything that is going wrong. This is an ego-based action, and I have no time for it. I KNOW what is going wrong; what I need to hear is how to fix it. A good instructor does not step into the arena to show off, but rather, to HELP.
Having said that, what a student needs to hear is different for every individual, and if an instructor (especially a clinician) is going to be successful, he/she must have the ability to figure that out in a timely manner. Let’s say I have Student A, Student B and Student C in a clinic. Student A needs to hear: “Listen to the timing of the canter. Hear the rhythm of one-and-two, and one-and-two, and one-and-two.” Student B needs to hear: “Ride counter canter across the half diagonal.” Now how each of these horses and riders responds to the initial instruction may warrant an adjustment in the approach. And perhaps more importantly, an understanding of how you can get the student to respond in a way that helps them learn.

The more I teach, the more I appreciate the students who come to me with a centered, quiet mind, ready to listen, absorb and learn. I used to believe that giving a good riding lesson was solely my responsibility. I realize now, after many decades of teaching, that a student must also take responsibility for the outcome of their lesson.

Take my student Casey. She now professionally trains and shows the horses in my stable. Fifteen years ago, she needed to learn a lot of technical things (like how to count the flying changes). Today, she is still learning technical stuff (like how to speed up the piaffe) but she learns at an incredibly rapid pace now compared to 15 years ago. What has been consistent over the years is that Casey has always learned from FEEL, and she has always learned AFTER the lesson—when I’m not watching her anymore. I have to find ways to make her feel the things I want to see changed, and then I have to go away to let her learn it while I am not watching. This was not a comfortable process for me in the beginning, but it functions well for us now. I look up from a concentrated ride a few days later and voilà!—there is the picture I want to see. I tend to learn more from Morten the day after he is gone.
This just goes to show you that the relationship between each instructor and student will function in various and mysterious ways! Of course, when you have 15 years with the same student, you have the luxury of taking your time to figure out how they learn from you. A clinic situation is time sensitive, and the difference between a good and bad clinician is how quickly one can determine what works for each student.

Anybody can give 10 riding lessons in a day. Very few people are focused enough to give 10 GOOD riding lessons! Step into the arena to offer guidance, not to show your own knowledge. Correct one thing that is possible to correct in the moment. Do the best job that you can and then detach yourself from the outcome. The rest is up to the student. If a student is not willing to do that, he or she will not be long in my company.

Now let’s turn to the responsibility of the student. (I am not kidding, Rita!) Let the instructor do his/her job without your guidance. Rita, this is going to be a lengthy piece that contains a lot of information.

When she puts her left leg back, the horse swings the haunches out and comes simultaneously against the outside leg and against the hand. The rider tries to correct this by pushing with the left leg but the horse just tosses his head and swings his haunches more into that leg. I say: “You should bend him right in that moment.” Bending the horse to the right, which requires use of the RIGHT rein and leg, breaks up the resistance the horse is offering to the right rein (the actual crux of the problem) and puts the horse in a position to move his haunches away from the left spur when the bend created under the saddle follows through to the hindquarters. I know of course what you are trying to do. I’ve just offered you a solution, but you didn’t hear it because you want to tell me what I already see.

Read More

Inside the Chilling World of Artificially Intelligent Drones

According to Russian military spokesmen, the drones were equipped with barometric sensors that allowed them to climb to a preselected altitude, an automatic leveling system for their control surfaces, and precision GPS guidance that would have taken them to their preselected targets had they not been intercepted.

Chamayou’s continuum collapses when it’s no longer a case of humans killing humans but of a robot and its algorithms initiating the carnage. While it may be years before that kind of “hard” or “complex” AI—the programs that allow a machine to learn and exercise autonomy—is used by terrorists and other non-state actors, it will happen. The primitive AI that guided the drones toward the Russian bases in Syria, and that allows AQAP to use off-the-shelf drones to conduct surveillance in Yemen, was, just a few years ago, something that was only available to states.

The availability of this technology comes at a time when militant groups like ISIS and AQAP are calling for—and supporting—what they call “lone wolf” attacks on targets in the West. While these groups have few qualms about killing those they deem to be infidels, at the level of the individual operative there is almost always doubt, anxiety, fear, and even guilt. A member of a terrorist organization like ISIS could thus launch a “fly and forget” drone on a mission to release a bomb or chemical agent without mistake-inducing fear or anxiety. The operative is entirely removed from the act of killing, and that makes it far easier to carry out. Advanced AI paired with drone technology has the potential to overcome even the most effective countermeasures because it dramatically lowers and even eliminates the psychological, physical, and monetary costs of killing.

Yuval Noah Harari, author of Sapiens and Homo Deus: A Brief History of Tomorrow, argues that terrorism is a show or spectacle that captures the imagination and provokes states into overreacting.
He suggests that “this overreaction to terrorism poses a far greater threat to our security than the terrorists themselves.” The attacks on 9/11 successfully provoked the U.S. government into starting its war on terror, now in its seventeenth year. In addition to invading Afghanistan and Iraq, the unintended consequences of which continue to reverberate, the war on terror has driven the rapid development of drones and AI.

In many respects, drones are the perfect tools for states. They offer deniability, there are no images of flag-draped coffins, they do not get PTSD, they do not question orders, and they never entertain doubts about what their algorithms tell them to do. Unfortunately for all of us, they’re also the perfect tool for terrorists and militants, who are less constrained by political agendas, bureaucratic structures, and, to some degree, ethical considerations than states are.

In his timeless book War of the Flea, Robert Taber uses the analogy of the dog and its fleas. The state and its military forces are the dog and the guerrilla forces are the fleas attacking it. The dog is of course far bigger and more powerful than the fleas, but it can do little because its enemies are too small and too fast, while it is too big, too slow, and has too much territory to defend.

Major General Latiff’s call for governments to slow down the development of this technology and assess the consequences of its inevitable leakage into the public sphere should be heeded if we are to avert the kind of outcome foreseen by the AI experts at the Campaign to Stop Killer Robots. While states and the corporations that work for them remain in control of the most advanced military and surveillance technologies, they face the perennial problem of leakage: the inevitable diffusion of technology into the wider world.
In November 2017, the campaign released a short dramatic film entitled Slaughterbots that clearly shows where the technology is headed and how it can be used by terrorists and states. There will be more attacks like the one on the Russian base, and as the drones get smaller and more intelligent, they’ll start to look more and more like those slaughterbots.

The 13 crudely made aircraft, which were powered by small gas engines and flew on wings fashioned from laminated Styrofoam, zeroed in on their targets: the vast Russian army base at Khmeimim and the naval base at Tartus on the Syrian coast. The radar signature of the drones was minimal and, by taking advantage of a cool night, they were able to fly at low altitudes and avoid detection. It is unclear whether or not the drones were able to communicate with one another and thus behave as a swarm. Russian forces, it is claimed, detected the drones and, through a combination of kinetic and electronic air defense systems, destroyed some of them.

The incident was an ominous portent of what the world will soon face as governments race to develop smaller, more intelligent, and ultimately wholly autonomous drones. What is new, and what the attack on the Russian bases in Syria demonstrates, is that non-state actors are—just like states—becoming more capable of building and using drones that have minds—albeit primitive ones—of their own.
In his prescient and timely book Future War: Preparing for the New Global Battlefield, Major General (Ret.) Robert Latiff argues that we are at a point of divergence where technologies are becoming increasingly complex while our ability and willingness to understand them and their implications is on the decline. He asks, “Will we allow the divergence to continue unabated, or will we attempt to slow it down and take stock of what we as a society are doing?” At this point, there is little evidence that governments or the societies they preside over are undertaking the kind of probing reassessment of technology that Latiff calls for. On the contrary, they’re competing to develop ever-more advanced drones and the AI that will ultimately allow them to think for themselves.

In response, governments increasingly need to build and design a host of electronic and kinetic countermeasures to thwart the use of drones by non-state actors. The threat posed by drones is so difficult to overcome that even the Russians, who are at the forefront of electronic countermeasures, are using trained falcons to guard the Kremlin against the smallest drones.

A dangerous cycle has thus begun: governments and the corporations they rely on are driving the development of unmanned technologies and AI. This, in turn, will require ever-more advanced and costly countermeasures to defend the same governments against the technology that has leaked and will leak out. In addition to setting in motion this cycle, the spread of drone technology and AI threatens to overwhelm even the most advanced countermeasures. Few technologies are so capable of lowering or eliminating the psychological, physical, and monetary costs of killing as drones, and it is this subtle yet profound effect that may pose the greatest threat.

“Shotguns, everyone wanted shotguns,” an Iraqi commander said when asked about the weeks in late 2016 when ISIS first began using drones to drop small bombs on Iraqi soldiers.
“The quads [quad-copters] are the hardest to hear, see, and hit.” In January 2017, ISIS declared that it had formed the “Unmanned Aircraft of the Mujahedeen,” a unit devoted to drones. While the group had been using drone technology for surveillance and targeting for at least two years, the October 2016 attack in Syria marked the debut of its armed drones. “We watched how they got better and better at hitting us,” explained the same Iraqi commander. “First they send a drone in as a spotter, unarmed, that they use to figure out where we’re most vulnerable—an ammo cache, a patio where men are cooking or relaxing. Then they send an armed drone to those coordinates, often at night or in the very early morning when the winds are calm.” ISIS claims to have killed in excess of two hundred Iraqi soldiers with its drones. ISIS, just like the governments that are fighting it, realizes that drones are the future of warfare. At the same time that groups like ISIS are devoting more and more resources to developing their drone warfare capability, governments and corporations are racing to develop countermeasures. Late in 2016 and in early 2017, soldiers in Iraq and Syria—especially Iraqi soldiers—had few options to defend themselves beyond firing their weapons into the skies. Within weeks of the first attack by ISIS using an armed drone in October 2016, countermeasures, many of which were already under development, were rushed from laboratories to the battlefield.
These range from a variety of electronic “drone guns”—which cost tens of thousands of dollars—that jam drones’ ability to receive signals from their operators, to shotgun shells loaded with a wire net designed to entrap a drone’s propellers and bring it to the ground. Groups like ISIS are already developing ways of electronically hardening their drones and adjusting their strategies to make them less susceptible to countermeasures. “They say the rocks have ears,” explained a Yemeni journalist who studies and writes about al-Qaeda in the Arabian Peninsula (AQAP). “They’re sensors, but they look just like rocks.” The sensors, hidden in plasticized containers designed to mimic the rocks of the areas where they are dropped, were likely part of the U.S. government’s effort to combat AQAP in Yemen. Some are solar powered and can lie dormant for years, programmed to activate by anything from ground vibrations to the sound signature of a specific automobile engine. Once on, they can remain passive, continuing to collect information, or they can signal a drone to come and investigate or neutralize a target. The U.S.
government has continued to deploy and use a range of drones to hunt and kill those who end up on its kill lists, as well as so-called targets of opportunity that happen to be in designated “kill boxes.” The exact number of individuals killed by drones in Yemen is unknown, as is the number of civilians killed in these attacks. This is despite the fact that the U.S. government has spent billions of dollars fighting an organization that—at certain points in its history—had fewer than 50 dedicated operatives. The drone campaign has also forced AQAP to develop a range of countermeasures, including trying to co-opt the AI that drives—at least to some degree—target selection. “They claim that they have let the drones kill some of their rivals,” a Yemen-based analyst explained. “They planted phones in cars that were carrying people they wanted to be eliminated and the drones got them.” While these claims cannot be verified, AQAP knows that data from phones, such as voice signatures and numbers, are vacuumed up by sensors and fed into the algorithms that—at least partly—help analysts decide whom to target. This data is the equivalent of digital blood spoor for the drones that are hunting them. In a recent video, the emir of AQAP, Qasim al-Raymi, decried his operatives’ inability to refrain from using their phones, claiming that most of the attacks on the group over the last two years have been due to its operatives’ use of cell phones.
Given the organization’s expertise with explosives and the increasing availability on the black market of military-grade small drones (the UAE and Saudi Arabia are providing these to the forces they support in Yemen), it is only a matter of time until armed drones make their debut in Yemen or elsewhere. In the besieged Yemeni city of Taiz, where AQAP has been present for at least the last two years, off-the-shelf and modified drones have been used by all sides in the conflict for surveillance. Just as in Iraq and Syria, various groups, including AQAP, are becoming more and more adept at deploying drones with primitive but effective AI. In Taiz and in other parts of Yemen where Houthi rebels and various factions aligned with Saudi Arabia and the United Arab Emirates are fighting for control, semi-autonomous drones are being used to map enemy positions and monitor the movements of rival forces. These drones are programmed to fly to a preselected set of waypoints that, when desired, allows them to move in a grid pattern, thereby providing a comprehensive view of a particular area. This kind of persistent and low-cost surveillance is critical—just as it is, on a far larger and more precise scale, for the U.S. government’s drones—for determining patterns of life for an individual or group prior to targeting them. While there are no signs that militant groups like AQAP or ISIS are close to employing more advanced AI that would allow them to use drones to identify and target specific individuals, these groups and others will, within a short period, have access to such technology. Face and gait recognition software, and the high-pixel cameras that allow it to function, are also widely available and undoubtedly already being used by well-funded non-state actors like Hezbollah.
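To give a sense of how simple the "primitive but effective AI" behind grid-pattern waypoint flying is, here is a minimal sketch of generating a back-and-forth survey grid. This is purely illustrative; the function name, the bounding coordinates, and the grid density are assumptions, not details from any reporting.

```python
# Hypothetical sketch: a "lawnmower" grid of GPS waypoints of the kind a
# semi-autonomous survey drone might be programmed to fly over an area.

def grid_waypoints(lat_min, lat_max, lon_min, lon_max, rows, cols):
    """Return a back-and-forth (boustrophedon) list of (lat, lon) waypoints."""
    lat_step = (lat_max - lat_min) / (rows - 1)
    lon_step = (lon_max - lon_min) / (cols - 1)
    waypoints = []
    for r in range(rows):
        lat = lat_min + r * lat_step
        # Alternate sweep direction on each row so the path never doubles back.
        cols_order = range(cols) if r % 2 == 0 else reversed(range(cols))
        for c in cols_order:
            waypoints.append((round(lat, 6), round(lon_min + c * lon_step, 6)))
    return waypoints

# Illustrative bounding box and grid size (not real mission data).
route = grid_waypoints(13.57, 13.60, 44.01, 44.05, rows=4, cols=5)
print(len(route))  # 20 waypoints covering the box in a grid
```

A flight controller fed such a list simply visits each point in order, which is all the "intelligence" a basic mapping run requires.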
In January 2015, a small quadcopter crashed on the grounds of the White House. The two-pound device, operated by an unidentified federal government employee, was too small to be detected by the radar installed there. While the drone was reportedly being flown for recreational purposes, its ability to penetrate the airspace around one of the most secure buildings in the nation’s capital proved how vulnerable such sites are to drone-based attacks and surveillance. Apart from the stunning and multifold implications that this technology has for state security, the use of drones has had a more subtle yet profound effect on those who use them. In A Theory of the Drone, the French philosopher Grégoire Chamayou struggles to situate drones on a continuum of weapons used to hunt and kill other humans, one that correlates with the proximity between the hunted and the hunter. The most intimate form of killing is hand-to-hand combat; the most distant is the pilot releasing his payload of bombs at thirty thousand feet, or the officer ordering a missile to be launched at some distant target. Yet, just like the launch officer or, to a lesser degree, the bomber pilot, the drone operator is out of reach. In the case of the U.S. government, he or she is likely thousands of miles away and invulnerable to harm. Yet there is a kind of one-way intimacy between the hunter and the hunted, even though it is pixelated and mediated through screens—enough to unsettle and traumatize many of the soldiers who are charged with operating the drones that hunt and kill in countries like Yemen. Chamayou’s continuum collapses when it’s no longer a case of humans killing humans but of a robot and its algorithms initiating the carnage. While it may be years before that kind of “hard” or “complex” AI—the programs that allow a machine to learn and exercise autonomy—is used by terrorists and other non-state actors, it will happen.
The primitive AI that guided the drones toward the Russian bases in Syria, and that allows AQAP to use off-the-shelf drones to conduct surveillance in Yemen, was, just a few years ago, available only to states. The availability of this technology comes at a time when militant groups like ISIS and AQAP are calling for—and supporting—what they call “lone wolf” attacks on targets in the West. While these groups have few qualms about killing those they deem to be infidels, at the level of the individual operative there is almost always doubt, anxiety, fear, and even guilt. A member of a terrorist organization like ISIS could thus launch a “fly and forget” drone on a mission to release a bomb or chemical agent without mistake-inducing fear or anxiety. The operative is entirely removed from the act of killing, and that makes it far easier to carry out. Advanced AI paired with drone technology has the potential to overcome even the most effective countermeasures because it dramatically lowers and even eliminates the psychological, physical, and monetary costs of killing. Yuval Noah Harari, author of Sapiens and Homo Deus: A Brief History of Tomorrow [3], argues that terrorism is a show or spectacle that captures the imagination and provokes states into overreacting. He suggests that “this overreaction to terrorism poses a far greater threat to our security than the terrorists themselves.” The attacks on 9/11 successfully provoked the U.S. government into starting its war on terror, now in its seventeenth year.
In addition to invading Afghanistan and Iraq, the unintended consequences of which continue to reverberate, the war on terror has driven the rapid development of drones and AI. In many respects, drones are the perfect tools for states. They offer deniability, there are no images of flag-draped coffins, they do not get PTSD, they do not question orders, and they never entertain doubts about what their algorithms tell them to do. Unfortunately for all of us, they’re also the perfect tool for terrorists and militants, who are less constrained by political agendas, bureaucratic structures, and, to some degree, ethical considerations than states are. In his timeless book War of the Flea, Robert Taber uses the analogy of the dog and its fleas: the state and its military forces are the dog, and the guerrilla forces are the fleas attacking it. The dog is of course far bigger and more powerful than the fleas, but it can do little because its enemies are too small and too fast, while it is too big, too slow, and has too much territory to defend. Major General Latiff’s call for governments to slow down the development of this technology and assess the consequences of its inevitable leakage into the public sphere should be heeded if we are to avert the kind of outcome foreseen by the AI experts at the Campaign to Stop Killer Robots. In November 2017, the campaign released a short dramatic film entitled Slaughterbots [4] that clearly shows where the technology is headed and how it can be used by terrorists and states. There will be more attacks like the one on the Russian base, and as drones get smaller and more intelligent, they’ll start to look more and more like those slaughterbots.


Cooperative and Collaborative Learning: Student Partnership in Online Classrooms


Cooperative and collaborative learning are not new concepts in the field of education – they have been studied for decades and have been used as classroom practices for much longer than that. Although experts in the field might differentiate between the two, I'd suggest that the subtle differences are not all that important. What IS important is that the value proposition of each is similar: to create conditions where students gain the interpersonal and cognitive skills necessary for work and life. For those with an interest in the language of education, the distinction between the two practices lies primarily in the ownership of the learning process, although there are some who believe that cooperation and collaboration are essentially the same practice because they "overlap in their typical characteristics (i.e. shared knowledge and authority, socially co-constructed knowledge through peer interactions) and long-term goals which help students learn by working together on substantive issues". When specified, the major differences between cooperation and collaboration lie in the role of the instructor in the process and in the degree to which the community develops a valued and shared vision. Cooperative activities are more often utilized in the secondary classroom because the teacher assists in organizing and supervising the work, whereas truly collaborative activities require that students own the process of learning more independently. Regardless of terminology, we should all agree that as students progress through education they should be presented with frequent and meaningful opportunities to work with and learn from each other.

There are many benefits to learning in groups – the Eberly Center for Teaching Excellence & Educational Innovation at Carnegie Mellon University outlines many of them on its website. The list includes development and reinforcement of skills that transcend individual and group exercises, such as time management, project planning and task management, effective communication, and sharing or receiving feedback on performance. The Partnership for 21st Century Learning has also published a research brief entitled "What We Know about Collaboration" that contains valuable information and highlights examples of success.

Group activities can be especially challenging in an online classroom where students may live in different states or countries. Thanks to the work of education leaders and groups like Education Superhighway, improvements in access to infrastructure, devices, and software have made it easier for students to connect with peers around the world, but obstacles remain. When these roadblocks present themselves, teachers may be tempted to switch to an individualized version of an activity or to move away from group activities in the future. When I was teaching in my online classroom in the early 2000s, there were times I was tempted to do so myself. But abandoning collaboration because it isn't easy sends our students the wrong message. They learn that it might be better to go it alone rather than work together, and the opportunity to build those crucial life skills might be lost.

Instead of tossing in the towel, here are some ideas to reflect upon, regardless of whether you teach in an online or face-to-face classroom. This list is not exhaustive and certainly doesn't guarantee a successful group exercise, but the items below reflect feedback from our students, teachers, and curriculum staff, who have worked in cohort-based online classrooms for the past 20 years. They are the benchmarks used by our curriculum team as we discuss online group experiences and are an excellent starting point for any educator interested in enhancing the quality of cooperative or collaborative activities in their classroom. Do students have adequate time to establish closer working relationships and achieve the goals of the activities? Is the work contextualized so that students understand the value of the learning? Do students work individually and together, and are they accountable to the group for the overall success of the activity?


A Cocktail Lounge With a Champagne Vending Machine Is Coming to Boston’s Theater District [UPDATED]


The Ghost Walks — owned by the team behind Committee, Bijou, and Cafeteria — will open at 57 Stuart St., in the space beneath Bijou. The team doesn’t want to pigeonhole the venue as one thing or another, according to Peter Szigeti, the general manager. “We’re not calling ourselves a restaurant, and we’re not calling ourselves a bar,” he said. “We’ll be serving food until 1:30 a.m., and it’ll be elevated bar snacks: small plates, charcuterie, cheese boards.” According to Szigeti, the venue’s name comes from the first act of “Hamlet,” when the ghost of Hamlet’s father walks across the stage. It’s also a reference to an old-timey phrase theater promoters used to utter when they’d had a good week.


DeepMind’s latest AI transfers its learning to new tasks


Following a four-day trial, lawyers representing Uber and Waymo say they have suddenly come to a settlement in their suit over the theft of autonomous-car technology. What happened: The Verge reports that Waymo attorneys announced the settlement in a San Francisco courtroom this morning to “gasps of shock.” Federal judge William Alsup, who was presiding over a trial that was a year in the making, called the case “ancient history.” What Waymo says: “We have reached an agreement with Uber that we believe will protect Waymo’s intellectual property now and into the future.” Uber denies wrongdoing: In a statement, Uber CEO Dara Khosrowshahi said he doesn’t believe any trade secrets ever made their way from Waymo to Uber, adding, “Nor do we believe that Uber has used any of Waymo’s proprietary information in its self-driving technology.” The settlement isn’t an admission of guilt but a way of avoiding further legal battles. Google Ventures was an early investor in Uber; Google got annoyed when Uber started building driverless cars, then downright mad when it poached staff and, allegedly, secrets — and now it has a small part of the ride-hailer in its back pocket.


What is AI? Everything you need to know about Artificial Intelligence – ZDNet


While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace. Amazon bought Kiva robotics in 2012 and today uses Kiva robots throughout its warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand. Fully autonomous self-driving vehicles aren't a reality yet, but by some predictions the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers. What's uncertain is whether new roles will be created rapidly enough to offer employment to those displaced, and whether the newly unemployed will have the necessary skills or temperament to fill these emerging roles. Not only that, but some argue there will be a commercial imperative not to replace people outright, as an AI-assisted worker — think a human concierge with an AR headset that tells them exactly what a client wants before they ask for it — will be more productive or effective than an AI working on its own. Among AI experts there's a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities. Oxford University's Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades. They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

What are neural networks?

These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.
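The training idea described above, in which weights are repeatedly nudged until the output is close to what is desired, can be shown in its simplest possible form: a single artificial neuron. This sketch is mine, not from the article; the AND-gate data, learning rate, and epoch count are all illustrative assumptions.

```python
# Toy sketch of weight adjustment: a single neuron's weights are nudged
# after each example until its output matches the desired labels.

def train_perceptron(samples, labels, lr=0.1, epochs=50):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out            # how far from the desired output
            w[0] += lr * err * x1         # nudge each weight toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]                      # a logical AND gate
w, b = train_perceptron(samples, labels)
preds = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0 for x1, x2 in samples]
print(preds)   # after training, the neuron reproduces the AND labels
```

A deep network applies the same principle across millions of weights in many stacked layers, with a more sophisticated update rule (backpropagation of errors) in place of this simple correction.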
The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory, or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

It is worth stepping back to ask what counts as AI in the first place. Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

Another area of AI research is evolutionary computation, which borrows from Darwin's famous theory of natural selection and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem. This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and it could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.

What is fueling the resurgence in AI?

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.
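The mutate-recombine-select loop of evolutionary computation described above can be sketched in a few lines. The problem solved here (maximize the number of 1 bits in a string, often called "OneMax") is a standard toy example of my choosing, not something from the article, and the population size and mutation rate are arbitrary assumptions.

```python
import random

# Minimal genetic-algorithm sketch: candidate bitstrings are selected,
# recombined, and mutated across generations, and fitter ones survive.

random.seed(1)

def fitness(bits):
    return sum(bits)  # count of 1 bits: higher is fitter

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = mom[:cut] + dad[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]            # occasional random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # close to the maximum of 20 after 60 generations
```

Neuroevolution applies the same loop, but the "genome" encodes a neural network's weights or architecture instead of raw bits.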
This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running and, more recently, training machine-learning models. An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained. These chips are used not just to train models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of top-end graphics processing units (GPUs).

What are the elements of machine learning?

Machine-learning systems are typically taught using labeled examples. These might be photos labeled to indicate whether they contain a dog, or written sentences with footnotes to indicate whether the word 'bass' relates to music or a fish. This process of teaching a machine by example is called supervised learning, and the role of labeling these examples is commonly carried out by online workers employed through platforms like Amazon Mechanical Turk.
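The teach-by-labeled-example idea of supervised learning can be sketched with a toy classifier; the names and data below are invented for illustration, and real systems use far richer models. This one learns an 'average example' (centroid) per label from labeled points, then assigns new points to the nearest one.

```python
from collections import defaultdict

def train_centroids(labeled_points):
    """Learn one 'average example' (centroid) per label from labeled data."""
    groups = defaultdict(list)
    for point, label in labeled_points:
        groups[label].append(point)
    return {
        label: tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for label, pts in groups.items()
    }

def classify(centroids, point):
    """Predict the label whose learned centroid is closest to the point."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], point))

# Labeled examples, standing in for e.g. hand-labeled photos.
data = [((1, 1), "dog"), ((2, 1), "dog"), ((8, 9), "fish"), ((9, 8), "fish")]
model = train_centroids(data)
```

The quality of the labels determines the quality of the model, which is why labeling work, often done via platforms like Mechanical Turk, matters so much.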
Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively, although this is increasingly possible in an age of big data and widespread data mining. AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation and, to a lesser extent, social intelligence and creativity.

What are the uses for AI?

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or to detect credit card fraud.

What are the different types of AI?

At a very high level, artificial intelligence can be split into two broad types: narrow AI and general AI. Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so. This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, and in the recommendation engines that suggest products you might like based on what you bought in the past. In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.
By also looking at the score achieved in each game, the system builds a model of which action will maximize the score in different circumstances — for instance, in the case of the video game Breakout, where the paddle should be moved to intercept the ball.

[Image: Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait. Image: Gartner / Annotations: ZDNet]

Which are the leading firms in AI?

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

What can narrow AI do?

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices — the list goes on and on.

What can general AI do?

Artificial general intelligence is very different: it is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or reasoning about a wide variety of topics based on its accumulated experience.
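The trial-and-error loop of reinforcement learning described earlier can be sketched with tabular Q-learning on a toy 'corridor' world, an illustrative stand-in for a game like Breakout; all names and parameters below are invented. The agent tries moves, observes rewards, and gradually learns which action maximizes its score in each state.

```python
import random

def q_learn(n_states=6, episodes=400, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a corridor: start at state 0, reward at the end."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # value of (left, right) per state
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Trial and error: sometimes explore a random move (or break a tie),
            # otherwise exploit the action with the best learned value so far.
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randint(0, 1)
            else:
                a = q[s].index(max(q[s]))
            s2 = max(0, s - 1) if a == 0 else s + 1    # move left or right
            r = 1.0 if s2 == n_states - 1 else 0.0     # reward only at the goal
            # Nudge the estimate toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
```

After training, the learned values favor moving right in every state, i.e. toward the reward; systems like the Atari-playing agents do the same thing with a deep network standing in for the table.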
Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

Which AI services are available?

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units, custom chips whose design is optimized for training and running machine-learning models. All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models. These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user. Cloud-based machine-learning services are constantly evolving, and at the start of 2018 Amazon revealed a host of new AWS offerings designed to streamline the process of training machine-learning models. For those firms that don't want to build their own machine-learning models but instead want to consume AI-powered, on-demand services — such as voice, vision, and language recognition — Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS.
Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella — and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Which of the major tech firms is winning the AI race?

Internally, each of the tech giants — and others such as Facebook — uses AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam — the list is extensive. But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft's Cortana.

[Image: The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in. Image: Jason Cipriani/ZDNet]

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants. But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space — Google Assistant with its ability to answer a wide range of queries, and Amazon's Alexa with the massive number of 'Skills' that third-party developers have created to add to its capabilities. Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.
This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today, and AI experts are fiercely divided over how soon it will become a reality. This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks with a far smaller amount of labeled data than is necessary for training systems using supervised learning. The effect on the winners and losers in the AI race may be minimal, as the companies that control the largest amounts of data are generally the same firms with globe-spanning networks of datacenters for carrying out machine learning.

Which countries are leading the way in AI?

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

How can I get started with AI?

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud. All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

What are recent landmarks in the development of AI?
There are too many to put together a comprehensive list, but some recent highlights include the following. In 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles. A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

[Image: IBM Watson competes on Jeopardy! on January 14, 2011. Image: IBM]

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite: pictures of cats. Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome. And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.
That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

How will AI change the world?

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, its use is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping robots learn new skills. The group went even further, predicting that so-called 'superintelligence' — which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" — was expected some 30 years after the achievement of AGI. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.
Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology — including AI that can recognize emotions — will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases, and identifying molecules that could lead to more effective drugs. There have been trials of AI-related technology in hospitals across the world, including IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Will AI kill us all?

As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI, Elon Musk set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking warned that once a sufficiently advanced AI is created, it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race. Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers. Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, saying that when people worry about "Terminator and the rise of the machines and so on? Utter nonsense, yes."
