Women Entrepreneurs Finance Initiative allocates first round funding; Expected to mobilize twice the original target

WASHINGTON, APRIL 19, 2018 – The Women Entrepreneurs Finance Initiative (We-Fi) today announced its first funding allocations, expected to mobilize over $1.6 billion in additional funds from an allocation of $120 million, for programs designed to knock down the unique barriers facing women entrepreneurs in developing countries. This initial round of grant allocations alone mobilizes twice the amount originally targeted for We-Fi over its lifetime.

The first round includes funding for proposals from the Islamic Development Bank to complement and expand successful initiatives in Yemen, Mali and Nigeria; the Asian Development Bank to improve the business environment for women in Sri Lanka; and the World Bank Group for global, regional and country-specific activities to increase public and private sector support for women in business, with a focus on the poorest and most fragile environments.

We-Fi, which has received over $340 million from 14 governments, initially set out to mobilize $800 million in additional financing from the private sector, donors, governments and other development partners, but expected mobilization from today's first round of allocations exceeds those expectations. In addition, fifty-eight percent of the first allocations will go to IDA countries or states affected by fragility, conflict or violence, putting We-Fi on track to exceed its commitment to devote half its portfolio to those areas.

"We know that everyone benefits when women have the resources they need to fully participate in economies and societies," said World Bank Group President Jim Yong Kim. "By harnessing the public and private sector, We-Fi creates an unprecedented opportunity to maximize financing for women entrepreneurs in developing countries, so that they have a real and fair chance to start and run businesses, create wealth, share in prosperity, and achieve their highest aspirations."

Some 70% of women who own small and medium-sized enterprises (SMEs) in the developing world currently can't get the financing they need. They are either shut out of financial institutions or can only get high-interest, short-term loans, resulting in a $1.5 trillion credit deficit for women entrepreneurs in emerging markets. Women also often lack access to the technologies, market connections, networks, and training necessary to build and maintain a successful business.

We-Fi, announced last July at the G-20 Summit in Hamburg, Germany, is an innovative new facility that supports women-led businesses and works with governments to improve the laws and regulations stifling women entrepreneurs in developing countries.

"The response from stakeholders in both emerging and advanced markets has been enthusiastic and immediate, clearly demonstrating the urgent need to scale up efforts to help women entrepreneurs in developing countries," said Priya Basu, head of the We-Fi Secretariat at the World Bank.
"We-Fi fills a critically important gap; it's the first significant fund committed to tackling the full range of barriers facing women entrepreneurs across the developing world."

On Thursday, at its Spring Meetings, the World Bank Group will host a session on closing the digital economy gap facing women entrepreneurs.

We-Fi is supported by the governments of Australia, Canada, China, Denmark, Germany, Japan, the Netherlands, Norway, the Russian Federation, Saudi Arabia, the Republic of Korea, the United Arab Emirates, the United Kingdom, and the United States.

We-Fi was established as a Financial Intermediary Facility (FIF), which allows the international community to provide a direct and coordinated response to global priorities. FIF funds can be raised from multiple sources, both public and private, and are usually transferred to external agencies.

Details of the first-round allocations include:

• ADB's program aims to improve the business environment for women-owned/led SMEs in Sri Lanka, build the capacity of women entrepreneurs, improve access to finance, and strengthen evidence and data. It will increase business growth opportunities for women entrepreneurs by boosting investment and providing capacity-building support.

• The World Bank Group was granted $75 million for its program "Creating Finance and Markets for All", with $49 million allocated to the International Finance Corporation (IFC) to lead private-sector initiatives and $26 million allocated to the World Bank to lead public-sector activities. Over half of the funds allocated to the World Bank Group will be dedicated to International Development Association (IDA) eligible countries and conflict-affected states where women struggle most to grow their small and medium businesses, including countries like Bangladesh, Cote d'Ivoire, Mozambique, Nigeria, Pakistan, Senegal, Tanzania, and Zambia. The grant is expected to mobilize innovative private-sector-focused solutions, and to test and evaluate new approaches.

Read More

Top CEOs in Pakistan – 2018

Available data (online and offline) is limited and often unverified; couple that with the competitiveness and high-quality talent available in the current market, and it makes it a tough job to shortlist candidates. As with previous years, we review a number of factors which include the standard metrics most businesses consider, but we also try to add innovation, social media input, and factors like leadership style to that list.

Yusuf Hussain – National ICT R&D Fund

Over recent years the startup ecosystem has skyrocketed and the government is supporting the innovation. Yusuf is leading the innovation from the front through the National ICT R&D Fund, which is deployed under the Ministry of IT. He heads the ICT research & development fund, which is tasked with growing the knowledge economy through innovation and research. Previously he has been with Time Warner Cable in the US, led the Pakistan Software Export Board as MD/CEO, and founded companies that have provided organisational transformation and IT policy development to organisations like the UN. Yusuf tweets @yhuss2000

Junaid Iqbal – Careem

Junaid is somewhat of a household name now with the surge in popularity of Careem. Facing stiff competition from Uber, Careem's marketing ploys like 'Rishta Aunty' and 'Wasim Akram, CEO for a day' ensure it keeps the multinational behemoth sweating. Iqbal boasts a BA in Economics from the University of Michigan, later adding to his education at LUMS and Harvard Business School. Professionally he has served as CEO in finance and security companies. Junaid tweets @jiqbalpk

Zeshan Afzal – Shahid Afridi Foundation

Everyone has heard of (Lala) Shahid Afridi and, as you will know, he has been active in his philanthropic efforts of late. The foundation runs various projects such as hospitals, a schools initiative and water supply, amongst other projects. Afzal, known for his creative pocket squares, has previously served as CEO at Peshawar Zalmi, Executive Director at Stylo and Associate Director at KPMG. Zeshan tweets @zeshanc100

Shamoon Sultan – Khaadi

Over recent years Khaadi has become one of the leading retailers of clothing, with exponential growth throughout Pakistan and worldwide. Working with names like Noorjehan Bilgrami and Shahnaz Siddiq, Sultan wanted to bring something unique to the market, the result of which we see today as Khaadi.

Shazia Syed – Unilever

Shazia heads the Pakistani endeavour of one of the largest companies in the world, Unilever, which has a stake in most household items. Shazia started in Unilever building the brand in 1989 and has gained experience in various departments since then, before being named CEO in 2015.

Mian Abdul Rehman Talat – BlueEast

Some of you may know Talat from his appearance on the show 'Idea Croron Ka'. In 2016 he founded BlueEast, which is driving innovation by utilising the 'Internet of Things' (IoT). Apart from developing their branded IoT solution, BlueEast is supporting the open-source ecosystem with a few additions of their in-house software. Abdul Rehman tweets @artalat

Dr Faisel Sultan – Shaukat Khanum Memorial Cancer Hospital

We've all heard of the Shaukat Khanum Memorial Cancer Hospital (SKMCH) and Imran Khan, who envisioned the initiative, but some credit is due to the CEO, Dr Faisel Sultan. SKMCH has become a benchmark for the highest standards of healthcare in Pakistan. Dr Sultan graduated from King Edward Medical College in 1987, with postgraduate training in Internal Medicine and Infectious Disease. Dr Faisel tweets @ceoskmch

Sameer Ahmed Khan – SocialChamp

Khan runs a social media automation software company alongside a software consultancy. Graduating from Karachi University with a BS in Computer Science, Khan fulfilled roles as a developer before going on to start a couple of companies which had to close up. He sees the failures as experience which benefits him in his current role. Sameer tweets @sameerpeace

Bilal Athar – WifiGen (Start Up)

Athar, who we've covered in detail before, makes it as the CEO under the start-up category.

Read More

This Is How We’ll Survive After Robots Have Taken Our Jobs

April 23, 2018, 8:30 AM GMT. Photo credit: Mopic/Shutterstock.

Imagine for a moment that Uber drivers, rather than Uber stockholders, own Uber. That is one way to understand what a platform cooperative is. Uber is a company built on a software platform that connects service providers with customers. As a platform co-op, it would be owned by the people who provide the service, by its users, or both.

As the technology sector sends us hurtling toward a world of robots and artificial intelligence, "technological unemployment" becomes the threat as automation replaces jobs faster than we come up with new work for people to do. We will be left scrambling to find alternative sources of income for average people. Those decreases in salaries, wages, and benefits also increase corporate profitability and the investment income that profitability generates for shareholders.

Platform co-ops are the leading edge of the cooperative-ownership movement, and a number of platform co-ops already exist. Among the examples is Stocksy, an online stock photo service that is owned by its contributing photographers and videographers.

How to slow down a flywheel

As we think about scaling up platform co-ops, we run into the financing problem that all co-ops face. In a conventional corporation, cash flows out to shareholders through spigots: dividends, as well as stock buyback programs aimed at increasing the company's share price. Between 1999 and the end of 2017, dividends and stock buybacks averaged 88 percent of operating earnings for the S&P 500. These massive returns to shareholders have a macro impact of exacerbating income inequality, as well as a micro impact on the companies themselves. What is clear is that they contribute significantly to the concentration of wealth in the U.S.; recent research shows that 41 percent of that increased concentration of wealth took the form of profits to owners of pass-through corporations.

What do spigots have to do with flywheels? This ongoing extraction of profits acts as a drag on the flywheel of the enterprise: funds are drained out of the company rather than being reinvested to sustain its momentum. Contrast that to how things are designed to work within a cooperative company.

Reinvestment: Keeping the wheel spinning

REI is a consumer cooperative, which means that it is owned by and run for the benefit of its customers. A few years ago, I was talking with an REI employee who mentioned how good the company is about investing in technology, and more generally how it plows much of its profits back into the business and back into its stakeholders. REI does pay dividends to its owners ($138 million in 2016), but those owners are also its customers, and the dividends take the form of discounts on store purchases.

The Bureau of Labor Statistics reports that 20 percent of businesses fail within their first year, with the rate increasing to 50 percent by year five and 70 percent by the tenth year. Comparable statistics specific to cooperative failure rates do not yet exist in the United States, but analysis in other countries suggests that cooperatives tend to sustain themselves as well as or better than standard businesses.

Start-up funding: Getting the wheel turning

There are some 40,000 co-ops in the United States, less than two-tenths of 1 percent of our total 26 million businesses. Since co-ops sustain themselves as well as standard businesses do, the real challenge has less to do with keeping the wheel spinning and more to do with getting new cooperatives up and running.

Whether a cooperative or a traditional company, all startups are risky; their lack of assets and existing revenue streams means that traditional banks are generally not interested in financing them. The financial institutions that do serve cooperatives are not currently growing at the rate required to provide startup funding for the dramatic increase in cooperatives that we need.

For platform co-ops to play a vital role in addressing the technological unemployment of the emerging economy, we need new funding sources for them. This is particularly true given the upfront investments in technology development needed to build software platforms that stand a chance of competing with an Uber, TaskRabbit, or Airbnb. Three new developments may address this need.

The first is the development of the impact investing field and the growing awareness of the power of Program-Related Investments among philanthropic foundations. Laurie Lane-Zucker's concept of an Integrated Impact Finance Vehicle envisions a chain of financial support starting with philanthropic contributions, moving to loans, and finally into an impact-investment equity stake.

The second development is the growing popularity of crowdfunding strategies, turbocharged in recent years by the rise of sharing on social media.

The third opportunity for new cooperative funding is in the "Initial Coin Offerings," or ICOs, of cryptocurrencies. The underlying technologies for decentralized computing on platforms like Ethereum can be used not just for currencies but to build new types of organizations and services built around the premise of democratizing ownership. One example is a startup called Colony, whose technology aims to change the nature of ownership.

A platform for the future

The argument for platform cooperatives is essentially the argument for cooperatives more generally, with the added twist that platform co-ops carry the cooperative promise into important new technology markets. As automation and artificial intelligence cost jobs in one economic sector after another, employment-related income shrinks and investment income expands, exacerbating wealth disparity. Platform co-ops alter that equation by broadening the ownership of the companies that build and operate these technologies. An Airbnb-style platform co-op, for instance, would be owned by, and share dividends with, its property-listing members, perhaps even renters, much as REI does with its members today.

We are in a race that pits the explosion of artificial intelligence and automation against our ability to rapidly expand ownership of the engines that drive this technological revolution. That's why extending cooperatives into the technology sector is so critical, and where platform co-ops can play a vital role.

People have been thinking about platform cooperatives for a while. Trebor Scholz, a professor at the New School in New York City, coined the term "platform cooperativism." For the book Ours to Hack and to Own (2017), he and University of Colorado professor Nathan Schneider compiled 40 short essays by leading thinkers in this field. I've compiled a "Platform Co-op" list on Twitter that includes many of the contributors to the book, along with a handful of other experts.

Read More

Buccaneers to have a parrot announce Day 3 selection

The NFL announced that the Tampa Bay Buccaneers will have a parrot reveal a Day 3 selection from the pirate ship at Raymond James Stadium. Whereas the Buccaneers typically feature a remote-controlled parrot perched on the ship and engaging with fans, the draft called for the real thing. Zsa Zsa, an eight-year-old Catalina macaw hailing from the Florida Exotic Bird Sanctuary, will relay the Bucs' fourth-round selection to an announcer on Saturday. The parrot is slated to fly onto the pirate ship with the Bucs' selection in her beak and then repeat the announcement. Elsewhere, the Vikings will feature members of the 2018 U.S. men's Olympic curling team, which became the country's first ever to win gold, at the St. Paul Curling Club in Minnesota.

Read More

State Tests Do NOT Assess Student Learning

When my son did take the tests, he always scored in the high 3s or 4s. Our decision to opt him out had less to do with his capacity as a learner than with exercising our right not to create undue stress over something that won't even inform his classroom performance that year. For the record, when I shared that I opt my son out, many folks who read my post attacked me as a mother, and my son, claiming that I am a helicopter mom and that my son will never get into the Ivy League schools of his aspirations because he can't deal with the pressure.

The teachers aren't involved in making these tests, and like any standardized test, although the intention is to show where students fit on a spectrum larger than just your class or even your school, they aren't really about what students know and can do. If that were the ultimate goal, there would be multiple means for measuring it and comparing the data. And let's not even get into the kinds of bias that exist in the questions asked or the passages presented, in terms of students' location and life experience. The more families choose to opt their children out, the less the data even means, because people don't look closely enough at the demographics of the students who ARE taking the tests.

We, unfortunately, live in a world that wants to label everything for the sake of comparison instead of seeing each learner as an individual who has something positive to work with. No two people fit into the same boxes on a multiple-choice test, and by using these methods of assessment, we are reducing children and young adults to quantifiable measures for efficiency and ease. That doesn't seem a good enough reason, and this, of course, is oversimplifying the folks who are making money off these endeavors, as well as the institutions being held accountable by them. In educational institutions, it is our job to make sure all students get the learning they need to be prepared for the paths they will choose later, and not every child should be doing the same thing. And we would all agree that students at this age should NOT be tested, as they are too young, and what those tests would show would likely not shed any light on future learning.

Schools and educational institutions, from K-12 through higher education, need to re-evaluate how students are being assessed, well beyond state testing. As a profession, we need to take the reins back and ask ourselves what we are hoping to get out of these experiences. If our goal is truly to know what students know and can do, we should be providing multiple opportunities for them to show it. We should be giving them opportunities to regularly reflect and self-assess against agreed-upon criteria. Yes, I agree that students need to be challenged in ways that push them outside their comfort zones at times, but we also have to listen to them more. The classroom teacher should be allowed to develop the assessments that are appropriate for their students, review them immediately, and determine future learning experiences based on the data from those assessments. Kids should be free to learn in an environment that challenges their curiosities, where classroom teachers, in collaboration with school or district-wide expectations, teach curricula that truly engage and assess students.

Read More

10 Things You Should Know About Deep Learning

This type of machine learning is going to become increasingly important in analytics and enterprise applications.

Most IT leaders have heard of deep learning, but few really understand how this new technology works. Deep learning burst onto the public consciousness in 2016, when Google's AlphaGo software, which was based on deep learning, beat the human world champion at the board game Go. Since then, deep learning has begun appearing in news reports and product literature with more frequency, but few organizations are actually using it today.

One report noted, "Leaders in AI have long talked about the need to make deep learning accessible to developers without a Ph.D. That's essential to progress; AI must become accessible to domain experts in other disciplines."

The following slideshow doesn't even scratch the surface of what it takes to become an expert in deep learning. But it does provide a high-level overview of the topic and covers the basics that CIOs, IT managers, and business leaders need to understand about this emerging technology.

[Check out the related sessions in the Interop AI Summit and the Data and Analytics Track.]

Cynthia Harvey is a freelance writer and editor based in the Detroit area.

Read More

The Machine Learning Potential of a Combined Tech Approach

This is the first in a five-part series exploring the potential of unified deep learning with CPU, GPU and FPGA technologies. The series explores the challenges of deep learning training and inference, and discusses the benefits of a comprehensive approach for combining CPU, GPU, and FPGA technologies, along with the appropriate software frameworks, in a unified deep learning architecture. You can download the full report, "Unified Deep Learning with CPU, GPU and FPGA technology," here, courtesy of AMD and Xilinx.

Deep learning and complex machine learning have quickly become some of the most important computationally intensive applications for a wide variety of fields. Deep learning has emerged as the most effective method for learning and discerning classification of objects, speech, and other types of information. The combination of large data sets, high-performance computational capabilities, and evolving and improving algorithms has enabled many successful applications which were previously difficult or impossible to consider.

Each of these hardware technologies offers unique benefits to the deep learning problem, and a properly designed system can take advantage of the combination. Moreover, the combination can provide unique capabilities that result in higher performance, better efficiency, greater flexibility, and a hedge against algorithm obsolescence, compared to CPU/GPU and FPGA systems designed separately.

Aside from the underlying hardware approaches, a unified software environment is necessary to provide a clean interface to the application layer. This needs to account for several factors, including framework support, different compiler and code generator technologies, and optimization support for the underlying hardware engines. For application developers working below the framework level, the AMD ROCm and MIOpen software frameworks are discussed as an example of a unified software environment applicable to a CPU and GPU solution. FPGAs are primarily used for inference, and the xfDNN middleware from Xilinx captures the software features essential for implementing deep learning inference on FPGAs. A long-term vision for application developers is a full and seamless programming environment that works across CPUs, GPUs, and FPGAs. This could initially focus on support for a common language and runtime, such as OpenCL, and later be extended to additional languages.

A brief review of deep learning is useful, although there are many good references that cover the historical and state-of-the-art technology. The main purpose here is to illustrate the compute and data management challenges that come with implementing successful deep learning systems. A neural network is "trained" by adjusting the weights of the various artificial synapses so that the network produces a desired output for various input data. DL training, in other words, uses a set of training sample data to determine the optimal weights of the artificial neurons in a DNN. Modern DL models use a deep network with hidden layers and a process called stochastic gradient descent to train the network.
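To make "stochastic gradient descent" concrete, here is a minimal toy illustration, not code from the AMD/Xilinx report: a tiny two-layer network whose weights are nudged against the error gradient, one randomly chosen training sample per step.

import numpy as np

# Toy task: learn XOR with a 2-4-1 sigmoid network trained by SGD.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for step in range(10000):
    i = rng.integers(0, 4)          # "stochastic": one random sample per step
    x, t = X[i:i+1], y[i:i+1]

    # Forward pass
    h = sigmoid(x @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass (chain rule on squared error)
    d_out = (out - t) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # SGD update: move each weight a small step against its gradient
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * x.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))

Production frameworks do exactly this at vastly larger scale, which is why the mix of compute engines discussed in this series matters.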

Read More

Why data analytics initiatives still fail

Executives talk about the value of data in generalities, but Michele Koch, director of enterprise data intelligence at Navient Solutions, can calculate the actual worth of her company's data. In fact, Koch can figure, in real dollars, the increased revenue and decreased costs produced by the company's various data elements. A mistake in a key data field within a customer's profile, for instance, could mean the company can't process a loan at the lowest cost. "There's money involved here, so we have a data quality dashboard where we track all of this," Koch says.

Navient's governance program includes long-recognized best practices, such as standardizing definitions for data fields and ensuring clean data. It assigns ownership for each of its approximately 2,600 enterprise data elements; ownership goes either to the business area where the data field first originated or to the business area where the particular data field is integral to its processes. The company also has a data quality program that actively monitors the quality of fields to ensure high standards are constantly met, and it launched a Data Governance Council (in 2006) and an Analytics Data Governance Council (in 2017) to address ongoing questions or concerns, make decisions across the enterprise, and continually improve data operations and how data feeds the company's analytics work. "Data is so important to our business initiatives and to new business opportunities that we want to focus on always improving the data that supports our analytics program," Koch says.

However, one recent report found that almost 40 percent of responding organizations don't have a separate budget for data governance, and some 46 percent don't have a formal strategy for it. The findings are based on responses from 118 respondents, including CIOs, CTOs, data center managers, IT staff and consultants. Given those figures, experts say it's not surprising that there are weak spots in many enterprise data programs. Here's a look at seven such problematic data practices.

Bringing data together, but not really integrating it

Integration tops the list of challenges in the world of data and analytics today, says Anne Buff, vice president of communications for the Data Governance Professionals Organization. True, many organizations gather all their data in one place. But as Buff puts it: "You need to make it so, when this all comes together, it creates this larger view of who Bill Smith is. You have to have something to connect the dots." Various data integration technologies enable that, Buff says, and selecting, implementing and executing the right tools is critical to avoid both too much manual work and redoing the same work over and over. Moreover, integration is becoming increasingly critical because data scientists are searching for patterns within data to gain the kind of insights that can yield breakthroughs, competitive advantages and the like. "But if you can't bring together data that has never been brought together before, you can't find those patterns," says Buff, who is also an advisory business solutions manager at SAS in Cary, N.C.

Not realizing business units have unique needs

Yes, consolidated, integrated data is critical for a successful analytics program. But some business users may need a different version of that data, Buff says. "Data in one form doesn't meet the needs for everyone across the organization," she adds. Instead, IT needs to think about data provisioning, that is, providing the data needed for the business case determined by the business user or business division. She points to a financial institution's varying needs as an example. One team might want to search for someone using slight variations of their personal identifying information at the same address to apply for multiple loans. "You'll see similar data elements but with some variables, so you don't want to knock out too much of those variances and clean it up too much," Buff explains. On the other hand, she says, the marketing department at that financial institution would want the correct version of a customer's name, address and the like to properly target communications.

Recruiting only data scientists, not data engineers, too

As companies seek to move beyond basic business intelligence to predictive and prescriptive analytics, as well as machine learning and artificial intelligence, they need increasing levels of expertise on their data teams. That in turn has shined a spotlight on the data scientist position. But equally important is the data engineer, who wrangles all the data sets that need to come together for data scientists to do their work but has (so far) gained less attention in many organizations. That's been changing, says Lori Sherer, a partner in Bain & Co.'s San Francisco office and leader of the firm's Advanced Analytics and Digital practices. "We've seen the growth in the demand for data engineer is about 2x the growth in the demand for data scientist," Sherer says. The federal Bureau of Labor Statistics predicts that demand for data engineers will continue to grow at a fast clip for the next decade, with the U.S. economy adding 44,200 positions between 2016 and 2026, at an average annual pay already at $135,800. Yet, like many key positions in IT, experts say there aren't enough data engineers to match demand, leaving IT departments that are only now beginning to hire or train for the position playing catch-up.

Keeping data past its prime, instead of managing its lifecycle

The cost of storage has dropped dramatically over the past decade, enabling IT to more easily afford to store reams of data for much longer than it ever could before. That might seem like good news, considering the volume and speed at which data is now created, along with the increasing demand to have it for analysis. But while many have hailed the value of having troves and troves of data, it's often too much of a good thing, says Penny Garbus, co-founder of Soaring Eagle Consulting in Apollo Beach, Fla., and co-author of Mining New Gold: Managing Your Business Data. Garbus says too many businesses hold onto data for way too long. "Not only do you have to pay for it, but if it's older than 10 years, chances are the information is far from current," she says. "We encourage people to put some timelines on it." The expiration date for data varies not only from organization to organization, it varies by department, Garbus says. The inventory division within a retail company might only want relatively recent data, while marketing might want data that's years old to track trends. If that's the case, IT needs to implement the architecture that delivers the right timeframe of data to the right spot, to ensure everyone's needs are met and old data doesn't corrupt timely analytics programs. As Garbus notes: "Just because you have to keep [old data], doesn't mean you have to keep it inside your core environment. You just have to have it."

Focusing on volume, rather than targeting relevancy

"We're still building models and running analytics with the data that is most available rather than with the data that is most relevant," says Steve Escaravage, senior vice president at IT consulting company Booz Allen Hamilton. He says organizations frequently hold the mistaken notion that they should capture and add more and more datasets, when they should instead be asking which data is most relevant. Although that exercise starts with the business side, "the mechanisms to capture it and make it available, that's the realm of the CIO, CTO or chief data officer."

Providing data, but ignoring where it came from

One of the big topics today is bias in analytics, a scenario that can skew results or even produce faulty conclusions that lead to bad business decisions or outcomes. The problems that produce bias reside in many different arenas within an enterprise analytics program, including how IT handles the data itself, Escaravage says. Too often, he says, IT doesn't do a good enough job tracking the provenance of the data it holds. "And if you don't know that, it can impact the performance of your models," Escaravage says, noting that the lack of visibility into how and where data originated makes controlling for bias even more difficult. "It's IT's responsibility to understand where the data came from and what happened to it. There's so much investment in data management, but there should also be a metadata management solution," he says.

Providing data, but failing to help users understand context

IT should not only have a strong metadata management program in place, where it tracks the origin of data and how it moves through its systems; it should also provide users insight into some of that history, and provide context for some of the results produced via analytics, Escaravage says. "We get very excited about what we can create. We think we have pretty good data, particularly data that's not been analyzed, and we can build a mental model around how this data will be helpful," he says. "But while the analytics methods of the past half-decade have been amazing, the results of these techniques are less interpretable than in the past, when you had business rules applied after doing the data mining and it was easy to interpret the data." The newer deep learning models offer insights and actionable suggestions, Escaravage explains, but with far less transparency about how they got there.
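As a toy illustration of what "tracking provenance" can mean in practice, here is a minimal sketch in Python. It is not a prescription from Escaravage or any vendor mentioned above, and every name in it is hypothetical; real metadata management systems do far more.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source: str                                  # where the data originated
    steps: list = field(default_factory=list)    # transformations applied, in order

    def log(self, step: str):
        # Record each transformation with a UTC timestamp
        self.steps.append((datetime.now(timezone.utc).isoformat(), step))

# Hypothetical usage: attach a record to a dataset as it moves through a pipeline
record = ProvenanceRecord(source="crm_export_2018-04.csv")
record.log("deduplicated on customer_id")
record.log("addresses normalized to postal format")
print(record)

Even a lightweight lineage trail like this makes it possible to ask, after the fact, how a dataset reached the state a model was trained on.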

Read More

Maximum stake for ‘crack cocaine’ gambling machines WILL be slashed to just £2 after a deal is struck between the Treasury and Downing Street


Read More

Hallucinogenic Deep Reinforcement Learning Using Python and Keras

If Artificial Intelligence is your thing, you need to check this out: https://arxiv.org/abs/1803.10122

In short, it's a masterpiece, for three reasons:

1. It combines several deep/reinforcement learning techniques to produce an amazing result: the first known agent to solve the popular 'Car Racing' reinforcement learning environment.
2. It's written in a very accessible style, so it's a great learning resource for anyone interested in cutting-edge AI.
3. You can code the solution yourself.

This post is a step-by-step guide through the paper. We'll cover the technical details and also walk through how you can get a version running on your own machine. Similarly to my post on AlphaZero, I'm not associated with the authors of the paper; I just wanted to share my interpretation of their terrific work.

We're going to build a reinforcement learning algorithm (an 'agent') that gets good at driving a car around a 2D racetrack. This environment (Car Racing) is available through the OpenAI Gym. At each time-step, the algorithm is fed an observation (a 64 x 64 pixel colour image of the car and its immediate surroundings) and needs to return the next set of actions to take: specifically, the steering direction (-1 to 1), acceleration (0 to 1) and brake (0 to 1). This action is then passed to the environment, which returns the next observation, and the cycle starts again.

An agent scores 1000/N for each of the N track tiles visited and -0.1 for each time-step taken. For example, if the agent completes the track in 732 frames, the reward is 1000 - 0.1 * 732 = 926.8 points. Here's an example of an agent that chooses the action [0, 1, 0] for the first 200 time-steps, then something random: not a great driving strategy. The aim is to train the agent to understand that it can use information from its surroundings to inform the next best action.
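If you want to poke at the environment before diving into the paper, this is what the observation/action loop looks like. This is a generic sketch assuming gym with Box2D installed, not code from the repository covered below; note that stock gym returns a 96x96x3 observation, which the World Models pipeline resizes to 64x64x3.

import numpy as np
import gym

# A minimal random-rollout loop for CarRacing-v0 (classic gym API)
env = gym.make('CarRacing-v0')
obs = env.reset()              # RGB image of the car and its surroundings

total_reward = 0.0
for t in range(300):
    # action = [steering (-1..1), acceleration (0..1), brake (0..1)]
    action = np.array([np.random.uniform(-1, 1),
                       np.random.uniform(0, 1),
                       np.random.uniform(0, 1)])
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break

print('episode reward:', total_reward)
env.close()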
There is an excellent online interactive explanation of the methodology, written by the authors, so I won't go into the same level of detail here, but will instead focus on a high-level summary of how the pieces fit together, with an analogy to real driving to explain why the solution intuitively makes sense. The solution consists of three distinct parts, which are trained separately: the VAE, the MDN-RNN, and the Controller.

The VAE

When you make decisions whilst driving, you don't actively analyse every single 'pixel' in your view; instead, your brain condenses the visual information into a smaller number of 'latent' entities, such as the straightness of the road, upcoming bends and your position relative to the road, to inform your next action. This is exactly what the VAE is trained to do: condense the 64x64x3 (RGB) input image into a 32-dimensional latent vector (z) that follows a Gaussian distribution. This is useful because the agent can now work with a much smaller representation of its surroundings and therefore can be more efficient in its learning.
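As a sketch of what this looks like in Keras: the layer sizes below follow the convolutional encoder described in the paper (four conv layers, then a 32-dimensional Gaussian latent), but the repository's actual definition lives in ./vae/arch.py and may differ in detail.

from tensorflow.keras import layers, models
from tensorflow.keras import backend as K

Z_DIM = 32

# Encoder: 64x64x3 image -> mean and log-variance of a 32-d Gaussian
inp = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 4, strides=2, activation='relu')(inp)
x = layers.Conv2D(64, 4, strides=2, activation='relu')(x)
x = layers.Conv2D(128, 4, strides=2, activation='relu')(x)
x = layers.Conv2D(256, 4, strides=2, activation='relu')(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(Z_DIM)(x)
z_log_var = layers.Dense(Z_DIM)(x)

# Reparameterisation trick: sample z = mu + sigma * epsilon, so gradients
# can flow back through the (otherwise non-differentiable) sampling step
def sample(args):
    mu, log_var = args
    eps = K.random_normal(shape=K.shape(mu))
    return mu + K.exp(log_var / 2) * eps

z = layers.Lambda(sample)([z_mean, z_log_var])
encoder = models.Model(inp, [z_mean, z_log_var, z])
encoder.summary()

A mirror-image deconvolutional decoder (not shown) reconstructs the image from z; the training loss combines reconstruction error with a KL term that keeps the latent close to a unit Gaussian.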
The MDN-RNN

If you didn't have an MDN-RNN component to your decision making, your driving might look something like this. As you drive, each subsequent observation isn't a complete surprise to you. You know that if the current observation suggests a left turn in the road and you turn the wheel left, you expect the next observation to show that you are still in line with the road. This forward thinking is the job of the RNN; specifically, this is a Long Short-Term Memory network (LSTM) with 256 hidden units. The vector of hidden states is represented by h.

Similarly to the VAE, the RNN tries to capture a latent understanding of the current state of the car in its environment, but this time with the aim of predicting what the next z might look like, based on the previous z and the previous action. The MDN output layer simply allows for the fact that the next z could actually be drawn from any one of several Gaussian distributions. The same technique was applied in this article, by the same author, for handwriting generation, to describe the fact that the next pen point could land in any one of several distinct areas. Similarly, in the World Models paper, the next observed latent state could be drawn from any one of five Gaussian distributions.

The Controller

Up until this point, we haven't mentioned anything about choosing an action; that responsibility lies with the Controller. The 3 output neurons correspond to the three actions and are scaled to fall in the appropriate ranges.

To understand the different roles of the three components and how they work together, we can imagine a dialogue between them (see the diagram of the World Model architecture in the paper: https://arxiv.org/pdf/1803.10122.pdf):

VAE: (looks at latest 64*64*3 observation) This looks like a straight road, with a slight left bend approaching, with the car facing in the direction of the road (z).

RNN: Based on that description (z) and the fact that the Controller chose to accelerate hard at the last time-step (action), I will update my hidden state (h) so that the next observation is predicted to still be a straight road, but with slightly more left turn in view.

Controller: Based on the description from the VAE (z) and the current hidden state from the RNN (h), my neural network outputs [0.34, 0.8, 0] as the next action.
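Because the Controller is so small, it can be written out almost in full. The following numpy sketch is illustrative only: the function names are mine, and exactly how the three outputs are squashed into their valid ranges is an implementation detail of the repository.

import numpy as np

Z_DIM, H_DIM = 32, 256   # latent vector from the VAE, hidden state from the RNN

# The controller is a single linear mapping: 288 inputs -> 3 actions,
# giving (288 + 1) * 3 = 867 trainable parameters (weights plus biases)
W = np.random.randn(3, Z_DIM + H_DIM) * 0.1
b = np.zeros(3)

def controller_action(z, h):
    a = W @ np.concatenate([z, h]) + b
    # Scale each raw output into its valid range (one plausible choice):
    steering = np.tanh(a[0])                  # -1 .. 1
    accel    = (np.tanh(a[1]) + 1.0) / 2.0    #  0 .. 1
    brake    = np.clip(a[2], 0.0, 1.0)        #  0 .. 1
    return np.array([steering, accel, brake])

action = controller_action(np.zeros(Z_DIM), np.zeros(H_DIM))
print(action)

Keeping the Controller this tiny is deliberate: with only 867 parameters, it can be trained by a black-box evolutionary method rather than backpropagation, as we'll see below.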
We'll now look at how to set up an environment that allows you to train your own version of the agent for car racing. Time for some code!

If you've got a high-spec laptop, you can run the solution locally, but I'd recommend using Google Cloud Compute for access to powerful machines that you can use in short bursts. The following has been tested on Linux (Ubuntu 16.04); just change the relevant commands for package installation if you're on Mac or Windows. In the command line, navigate to the place you want to store the repository and enter the following:

git clone https://github.com/AppliedDataSciencePartners/WorldModels.git

The repository is adapted from the highly useful estool library developed by David Ha, the first author of the World Models paper. For the neural network training, this implementation uses Keras with a Tensorflow backend, though in the original paper the authors used raw Tensorflow.

Generating the training data

We start by generating random rollouts. Actually, we use pseudo-random actions, which force the car to accelerate initially, in order to get it off the start line. Since the VAE and RNN are independent of the decision-making Controller, all we need to ensure is that we encounter a diverse range of observations and choose a diverse range of actions to save as training data. To generate the random rollouts, run the following from the command line:

python 01_generate_data.py car_racing --total_episodes 2000 --start_batch 0 --time_steps 300

or, if you're on a server without a display:

xvfb-run -a -s "-screen 0 1400x900x24" python 01_generate_data.py car_racing --total_episodes 2000 --start_batch 0 --time_steps 300

This will produce 2000 rollouts (saved in ten batches of 200), starting with batch number 0. Each rollout will be a maximum of 300 time-steps long. Two sets of files are saved in ./data (* is the batch number):

obs_data_*.npy (stores the 64*64*3 images as numpy arrays)
action_data_*.npy (stores the 3-dimensional actions)

Training the VAE

Training the VAE only requires the obs_data_*.npy files. This way, you can iteratively train your VAE in batches, rather than all in one go. The VAE architecture is specified in the ./vae/arch.py file.

Training the MDN-RNN

Now that we have a trained VAE, we can use it to generate the training set for the RNN. The RNN requires encoded image data (z) from the VAE and actions (a) as input, and the encoded image data from the VAE one time-step ahead as output. You can generate this data by running:

python 03_generate_rnn_data.py --start_batch 0 --max_batch 9

This will take the obs_data_*.npy and action_data_*.npy files from batches 0 to 9 and convert them to the correct format required by the RNN for training. Two sets of files will be saved in ./data (* is the batch number):

rnn_input_*.npy (stores the [z a] concatenated vectors)
rnn_output_*.npy (stores the z vector one time-step ahead)

Training the RNN only requires the rnn_input_*.npy and rnn_output_*.npy files. The --new_model flag tells the script to train the model from scratch. Similarly to the VAE, if there is an existing weights.h5 in this folder and the --new_model flag is not specified, the script will load the weights from this file and continue training the existing model.

Training the Controller

So far, the VAE and the RNN could be trained with ordinary supervised learning, because we were able to create a training set for each using random rollout data. To train the controller, we'll use a form of reinforcement learning that utilises an evolutionary algorithm known as CMA-ES (Covariance Matrix Adaptation Evolution Strategy). Since the input is a vector of dimension 288 (= 32 + 256) and the output a vector of dimension 3, we have (288 + 1) * 3 = 867 parameters to train.

CMA-ES works by first creating multiple randomly initialised copies of the 867 parameters (the 'population'). In exactly the same principle as natural selection, the weights that generate the highest scores are allowed to 'reproduce' and spawn the next generation. To start this process on your machine, run the following command, with the appropriate values for the arguments:

python 05_train_controller.py car_racing --num_worker 16 --num_worker_trial 4 --num_episode 16 --max_length 1000 --eval_steps 25

or on a server without a display:

xvfb-run -s "-screen 0 1400x900x24" python 05_train_controller.py car_racing --num_worker 16 --num_worker_trial 2 --num_episode 4 --max_length 1000 --eval_steps 25

--num_worker 16 : set this to no more than the number of cores available
--num_worker_trial 2 : the number of members of the population that each worker will test (num_worker * num_worker_trial gives the total population size for each generation)
--num_episode 4 : the number of episodes each member of the population will be scored against (i.e. the score will be the average reward across this number of episodes)
--max_length 1000 : the maximum number of time-steps in an episode
--eval_steps 25 : the number of generations between evaluations of the best set of weights, across 100 episodes
--init_opt ./controller/car_racing.cma.4.32.es.pk : by default, the controller will start from scratch each time it is run and save the current state of the process to a pickle file in the controller directory; this argument allows you to continue training from the last save point, by pointing it at the relevant file

After each generation, the current state of the algorithm and the best set of weights will be output to the ./controller folder. At the time of writing, I've managed to train an agent to achieve an average score of ~833.13 after 200 generations of training (the paper itself used a more intensive regime: 10,000 episodes of training data, a population size of 64, a 64-core machine, 16 episodes per trial, and so on).
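To see the shape of the CMA-ES loop itself, here is a minimal sketch using the pycma package. The repository instead uses David Ha's estool, and the fitness function below is a stand-in for illustration, not the real rollout evaluation.

import cma
import numpy as np

NUM_PARAMS = 867   # (288 + 1) * 3 controller weights and biases

def evaluate(params):
    """Stand-in fitness. In the real setup this would run num_episode
    rollouts of the full VAE -> RNN -> Controller pipeline in the
    CarRacing environment and return the average episode reward."""
    return -np.sum(params ** 2)   # placeholder objective for illustration

es = cma.CMAEvolutionStrategy(NUM_PARAMS * [0.0], 0.5)  # mean 0, sigma 0.5
for generation in range(100):
    population = es.ask()                        # sample candidate weight vectors
    # CMA-ES minimises, so negate the reward in order to maximise it
    fitnesses = [-evaluate(np.array(p)) for p in population]
    es.tell(population, fitnesses)               # update mean and covariance
    es.disp()

best_weights = es.result.xbest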
Visualising the Controller

To visualise the current state of your Controller, simply run:

python model.py car_racing --filename ./controller/car_racing.cma.4.32.best.json --render_mode --record_video

--filename : the path to the json of weights that you want to attach to the controller
--render_mode : render the environment on your screen
--record_video : outputs mp4 files into the ./video folder, showing each episode
--final_mode : run a 100-episode test of your controller and output the average score

Here's a demo!

Learning inside the dream

That's already pretty cool, but the next part of the paper is mind-blowingly impressive, and I think it has major implications for AI. The paper goes on to show an amazing result in another environment, DoomTakeCover, where the object is to move an agent to avoid fireballs and stay alive as long as possible. The authors show how it is possible for the agent to actually learn how to play the game within its own VAE/RNN-inspired hallucinogenic dreams, rather than inside the environment itself. The only required addition is that the RNN is trained to also predict the probability of being killed in the next time-step.

Through this, the agent builds up a latent understanding of how the world 'works': its natural groupings, its physics, and how its own actions affect the state of the world. It can then use this understanding to establish an optimal strategy for a given task, without ever having to actually test it in the real world, because it can use its own mental model of the environment as the 'playground' for trying things out. This could easily be a description of a baby learning to walk.

Read More