Artificial Intelligence: What's The Difference Between Deep Learning And Reinforcement Learning?

The various cutting-edge technologies under the umbrella of artificial intelligence are getting a lot of attention lately. As the amount of data we generate continues to grow to mind-boggling levels, our AI maturity and the range of problems AI can help solve grow right along with it. This data, together with the amazing computing power now available at a reasonable cost, is what fuels the tremendous growth in AI technologies and makes deep learning and reinforcement learning possible. With the rapid changes in the AI industry, it can be challenging to keep up with the latest cutting-edge technologies. In this post, I want to provide easy-to-understand definitions of deep learning and reinforcement learning so that you can understand the difference.

Both deep learning and reinforcement learning are machine learning functions, which in turn are part of a wider set of artificial intelligence tools. What makes deep learning and reinforcement learning functions interesting is that they enable a computer to develop rules on its own to solve problems. This ability to learn is nothing new for computers – but until recently we didn’t have the data or computing power to make it an everyday tool.

What is deep learning?

Deep learning is essentially an autonomous, self-teaching system in which you use existing data to train algorithms to find patterns and then use those patterns to make predictions about new data. For example, you might train a deep learning algorithm to recognize cats in photographs. You would do that by feeding it millions of images that either contain cats or don’t. The program will then establish patterns by classifying and clustering the image data (e.g. edges, shapes, colors, distances between the shapes, etc.). Those patterns will then inform a predictive model that is able to look at a new set of images and predict whether they contain cats or not, based on the model it has created using the training data.

Deep learning algorithms do this via various layers of artificial neural networks which mimic the network of neurons in our brain. This allows the algorithm to perform various cycles to narrow down patterns and improve the predictions with each cycle.
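To make this concrete, here is a minimal, pure-Python sketch of the core idea: fit a model to labeled training data, then use it to classify inputs it has never seen. A real deep network stacks many layers of such neurons; this toy uses a single logistic neuron, and the two "features" per image are invented stand-ins for the patterns (edges, shapes) a real network would extract.

```python
import math

def train(samples, labels, epochs=500, lr=0.5):
    """Fit a single logistic neuron: the simplest building block
    of the layered neural networks deep learning uses."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))  # prediction
            err = p - y                                           # error signal
            w[0] -= lr * err * x1                                 # nudge weights
            w[1] -= lr * err * x2                                 # toward the labels
            b -= lr * err
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b))) > 0.5

# Made-up training data: each pair stands in for features extracted
# from an image (say, a "pointy-ear score" and a "whisker-edge score");
# label 1 means the image contains a cat.
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
model = train(samples, labels)

print(predict(model, 0.85, 0.9))   # new, unseen cat-like input
print(predict(model, 0.15, 0.1))   # new, unseen non-cat input
```

Training a real image classifier follows the same loop, just with millions of parameters and with the feature extraction learned by earlier layers rather than supplied by hand.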

A great example of deep learning in practice is Apple’s Face ID. When setting up your phone, you train the algorithm by scanning your face. Each time you log on with Face ID, the TrueDepth camera captures thousands of data points to create a depth map of your face, and the phone’s built-in neural engine performs the analysis to predict whether it is you or not.

What is reinforcement learning?

Reinforcement learning is an autonomous, self-teaching system that essentially learns by trial and error. It performs actions with the aim of maximizing rewards; in other words, it learns by doing in order to achieve the best outcomes. This is similar to how we learn to ride a bike: in the beginning we fall off a lot and make overly heavy, often erratic moves, but over time we use feedback about what worked and what didn’t to fine-tune our actions. The same is true when computers use reinforcement learning. They try different actions, learn from feedback whether an action delivered a better result, and then reinforce the actions that worked, reworking and modifying the algorithm autonomously over many iterations until it makes decisions that deliver the best result.

A good example of reinforcement learning is a robot learning how to walk. The robot first tries a large step forward and falls. The outcome, a fall with that big step, is a data point the reinforcement learning system responds to. Since the feedback was negative, the system adjusts the action and tries a smaller step, and the robot is able to move forward.
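The walking robot above can be sketched as a tiny reinforcement-learning loop. This is a deliberately simplified, hypothetical setup: the three step sizes and their reward values are invented, and the agent simply keeps a running average of the reward each action produced, occasionally exploring a random action instead of the best-known one.

```python
import random

random.seed(1)

actions = [10, 30, 60]   # hypothetical step lengths in cm

def reward(step):
    """Toy environment with made-up outcomes: the big step makes
    the robot fall, the medium step covers the most safe ground."""
    if step == 60:
        return -5   # falls over
    if step == 30:
        return 3    # good, steady progress
    return 1        # safe but slow

value = {a: 0.0 for a in actions}   # estimated reward per action
counts = {a: 0 for a in actions}

for _ in range(500):
    # Explore a random action 10% of the time; otherwise exploit
    # the action that has worked best so far.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    r = reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]   # update the running average

best = max(actions, key=lambda x: value[x])
print(best)   # the medium step wins out
```

The loop is the whole idea in miniature: try, observe the feedback, and shift future behavior toward the actions that paid off.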

One of the most fascinating examples of reinforcement learning in action I have seen was when Google’s DeepMind applied the tool to classic Atari computer games such as Breakout. The goal (or reward) was to maximize the score, and the actions were to move the bar at the bottom of the screen to bounce the ball back up to break the bricks at the top of the screen. You can watch the video here, which shows how, in the beginning, the algorithm makes lots of mistakes but quickly improves to a stage where it would beat even the best human players.

Difference between deep learning and reinforcement learning

Deep learning and reinforcement learning are both systems that learn autonomously. The difference between them is that deep learning learns from a training set and then applies that learning to a new data set, while reinforcement learning dynamically learns by adjusting actions based on continuous feedback to maximize a reward.

Deep learning and reinforcement learning aren’t mutually exclusive. In fact, you might use deep learning in a reinforcement learning system, which is referred to as deep reinforcement learning and will be a topic I cover in another post.

10 Amazing Examples Of How Deep Learning AI Is Used In Practice

You may have heard about deep learning and felt like it was an area of data science that is incredibly intimidating. How could you possibly get machines to learn like humans? And, an even scarier notion for some, why would we want machines to exhibit human-like behavior? Here, we look at 10 examples of how deep learning is used in practice that will help you visualize the potential.

What is deep learning?

Both machine learning and deep learning are subsets of artificial intelligence, but deep learning represents the next evolution of machine learning. In machine learning, algorithms created by human programmers are responsible for parsing and learning from the data, and they make decisions based on what they learn from it. Deep learning learns through an artificial neural network that acts very much like a human brain and allows the machine to analyze data in a structure much as humans do. Deep-learning machines don't require a human programmer to tell them what to do with the data. This is made possible by the extraordinary amount of data we collect and consume—data is the fuel for deep-learning models. For more on what deep learning is, please check out my previous post here.

10 ways deep learning is used in practice

1. Customer experience
Machine learning is already used by many businesses to enhance the customer experience. Just a couple of examples include online self-service solutions and more reliable workflows. Deep-learning models are already being used for chatbots, and as deep learning continues to mature, we can expect many more businesses to apply it in this area.

2. Translations
Although automatic machine translation isn’t new, deep learning is helping enhance automatic translation of text by using stacked neural networks and allowing translations from images.

3. Adding color to black-and-white images and videos
What used to be a very time-consuming process where humans had to add color to black-and-white images and videos by hand can now be automatically done with deep-learning models.

4. Language recognition
Deep learning machines are beginning to differentiate dialects of a language. A machine decides that someone is speaking English and then engages an AI that is learning to tell the differences between dialects. Once the dialect is determined, another AI will step in that specializes in that particular dialect. All of this happens without involvement from a human.
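A cascade like this can be sketched with plain rules standing in for the trained models. Everything below is invented for illustration: the keyword tests are crude stand-ins for a real language identifier and real dialect classifiers, but the hand-off structure is the same.

```python
# Stage 1: a toy "language" detector hands off to stage 2, a
# dialect-specialized classifier, mirroring the layered recognition
# described above. The keyword rules are illustrative only.
def detect_language(text):
    return "english" if " the " in f" {text} " else "unknown"

DIALECT_MARKERS = {
    "british": {"colour", "lorry", "flat"},
    "american": {"color", "truck", "apartment"},
}

def detect_dialect(text):
    words = set(text.lower().split())
    scores = {d: len(words & markers) for d, markers in DIALECT_MARKERS.items()}
    return max(scores, key=scores.get)   # dialect with the most marker hits

def pipeline(text):
    if detect_language(text) != "english":
        return "unsupported"
    return detect_dialect(text)

print(pipeline("the lorry was parked outside the flat"))        # 'british'
print(pipeline("the color of the truck near the apartment"))    # 'american'
```

In a production system each stage would be a trained neural network rather than a word list, but the dispatch pattern, one model deciding which specialist model runs next, is the point.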

5. Autonomous vehicles
There's not just one AI model at work as an autonomous vehicle drives down the street. Some deep-learning models specialize in street signs while others are trained to recognize pedestrians. As a car navigates down the road, it can be informed by many individual AI models working together to allow the car to act.

6. Computer vision
Deep learning has delivered super-human accuracy for image classification, object detection, image restoration and image segmentation—even handwritten digits can be recognized. Deep learning using enormous neural networks is teaching machines to automate the tasks performed by human visual systems.

7. Text generation
A machine learns the punctuation, grammar and style of a piece of text and can use the model it developed to automatically create entirely new text with the proper spelling, grammar and style of the example text. Everything from Shakespeare to Wikipedia entries has been generated this way.
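The idea can be illustrated with something far simpler than a deep network: a character-level Markov chain, which only memorizes which character tends to follow each short context. It is a toy stand-in for the recurrent networks typically used for this task, but it shows how a model built from example text can emit new text in the same style.

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Record which character follows each n-character context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=60):
    """Extend the seed one character at a time, sampling from the
    characters observed after the current context."""
    out = seed
    order = len(seed)
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:   # dead end: context never seen in training
            break
        out += random.choice(followers)
    return out

corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer") * 3
model = build_model(corpus)
random.seed(0)
print(generate(model, "to "))
```

A deep model replaces this lookup table with learned weights that generalize to contexts it never saw verbatim, which is why it can produce long, coherent passages instead of local word salad.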

8. Image caption generation
Another impressive capability of deep learning is to identify an image and create a coherent caption with proper sentence structure for that image just like a human would write.

9. News aggregator based on sentiment
When you want to filter out the negative news coming into your world, advanced natural language processing and deep learning can help. News aggregators using this technology can filter news based on sentiment, so you can create news streams that only cover the good news.
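A bare-bones version of such a filter can be written with nothing but word lists. The lists and headlines below are invented; a production aggregator would score sentiment with a trained deep model rather than keyword counts, but the filtering step looks the same.

```python
# A naive word-count sentiment filter: a crude stand-in for the deep
# NLP models described above. Word lists here are illustrative only.
POSITIVE = {"wins", "growth", "breakthrough", "record", "success"}
NEGATIVE = {"crash", "fraud", "decline", "scandal", "losses"}

def sentiment_score(headline):
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

headlines = [
    "Local team wins championship in record comeback",
    "Markets crash amid fraud scandal",
    "Startup reports breakthrough and strong growth",
]

# Keep only headlines that score net-positive.
good_news = [h for h in headlines if sentiment_score(h) > 0]
print(good_news)
```

Swapping `sentiment_score` for a neural model is all it would take to turn this sketch into the system the article describes; the aggregation logic stays identical.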

10. Deep-learning robots
Deep-learning applications for robots are plentiful and powerful, from an impressive deep-learning system that can teach a robot just by observing the actions of a human completing a task, to a housekeeping robot that’s provided with input from several other AIs in order to take action. Just as a human brain processes input from past experiences, current senses and any additional data provided, deep-learning models will help robots execute tasks based on the input of many different AI opinions.

The growth of deep-learning models is expected to accelerate and create even more innovative applications in the next few years.

3 Ways To Embrace Digitization To Improve Productivity

Going back to Economics 101, productivity is the stimulus that our economy needs. When productivity increases, wages and standards of living follow suit, causing the demand for goods and services to increase along with them. In a world where technology advances on a daily basis, could we really be seeing a decline in productivity? You bet.

According to a recent report by McKinsey, “Productivity growth has fluctuated over time; it has been declining since the 1960s and today stands near historical lows.” In fact, between 2010 and 2014, total labor productivity growth stood at negative 0.2%, compared with positive 3.6% in 2000-2004, just a decade earlier.

There is hope, however. McKinsey believes that there is potential for the productivity levels to recover to at least 2 percent. But how? Thanks to the ever-growing realm of tech, we now have digitization and the digital transformation.

Let’s discuss how digitization and the transformation can both be the key to our productivity struggle.

Transformation Through Upskilling and Training

There is always talk about robots taking all our jobs. However, that is proving not to be the case. Digitization and the digital transformation will impact both high-skill and low-skill jobs, yet will create more jobs for us all in the long run. However, digital natives, including millennials, Generation Z and their children, will make up the majority of those with the skills necessary to perform these positions. To increase our productivity at ground level, we need to begin upskilling our current workforce and implementing training that fits.

Companies such as AT&T are beginning to see the true value in upskilling their employees. Scott Smith of AT&T put it this way, “You can go out to the street and hire for the skills, but we all know that the supply of technical talent is limited, and everybody is going after it. Or you can do your best to step up and reskill your existing workforce to fill the gap.” Since beginning its upskilling initiative, AT&T has reduced its product development lifecycle by 40% and accelerated time to revenue. It’s an impressive feat that only upskilling and dedication can create.

During digitization and the digital transformation, your business will need to create a strategy for cybersecurity, artificial intelligence and more. How will you be able to manage these technologies? You must have a current talent base that can implement these tools once they hit your front door. Upskilling and training can completely change the game.

Digitization Through Diffusion

It is critical that digitization and technology are adopted by all enterprises, not just a select few, to boost productivity. This is also called digital diffusion. McKinsey states, “Action is needed both to overcome adoption barriers of large incumbent business and to broaden the adoption of digital tools by all companies and citizens. Actions that can promote digital diffusion include: leading by example and digitizing the public sector, leveraging public procurement and investment in R&D and driving digital adoption by small and medium-sized enterprises.”

The digital transformation will have an impact on businesses that choose not to adopt the up-and-coming technology. In fact, these businesses may suffer the consequences and be left behind. True digitization to boost our productivity will need to include all sectors and enterprises to make a difference. This means that large corporations will need to face their tech demons head-on and solve their adoption issues with strategy. Small businesses and mid-size enterprises will need to begin adopting technology to remain competitive.

Reinvention Through Strategy

McKinsey asks the question: “How do companies, labor organizations, and even economists respond to the challenge of restarting productivity growth in a digital age? Companies will need to develop a productivity strategy that includes the digital transformation of their business model as well as their entire sector and value chain.” Every change that must be made, every training strategy and move towards digitization is part of a larger digital transformation strategy.

When it comes to businesses that place emphasis on their digital strategy, “We found that more than twice as many leading companies closely tie their digital and corporate strategies than don’t. What’s more, winners tend to respond to digitization by changing their corporate strategies significantly.”

Businesses that are investing in digital transformation by changing their strategy are boosting their productivity through revenue growth and return on digital investment. In fact, McKinsey found in further research that 49 percent of leading companies, in revenue growth, EBIT growth and digital investment, are investing in digital more than their counterparts do.

McKinsey concluded that bold strategies win. And I agree. With a digital transformation strategy, strong levels of digital diffusion and upskilling of the workforce, we are sure to see the increase in productivity that is predicted. It is time now to embrace digitization more than ever, from the top of the ladder to the bottom. After all, our economy depends on it.

6 Ways To Make Smart Cities Future-Proof Against Cybersecurity Threats

By 2050, about 70% of the world’s population is expected to live in cities. Using the Internet of Things, analyzing lots of data, putting more services online—all herald the digital transformation of cities. Becoming digital, however, means a new life in the cybersecurity trenches.

There is no place like Israel to teach local government leaders how to make their cities and citizens cybersecurity resilient. Welcoming attendees from 80 countries to the Muni World 2018 event in Tel-Aviv, Eli Cohen, Israel’s minister of economy and industry, highlighted the fact that the country represents 10% of the global investment in cybersecurity. And it shares its expertise with others, including alerting 30 countries to pending cyber or terrorist attacks, Cohen said. (I was attending the event as a guest of Vibe Israel).

Cybersecurity is a prerequisite for the smart city, argued Gadi Mergi, CTO at Israel’s National Cyber Directorate. That means pursuing security, privacy and high-availability (having a cyberattack recovery plan, backup facility, cloud management, and manual overrides) by design. As other presenters discussed at the event (see the list of presenters below), smart cities must adjust and adapt to the requirements of the new cybersecurity landscape, characterized by:

The expansion of the attack surface with the introduction of new points of potential vulnerability such as connected and self-driving cars, and the Internet of Things (71% of local governments say IoT saves them money but 86% say they have already experienced an IoT-related security breach);

A wider range of attacker motivations, including ransomware (it was the motivation behind 50% of attacks in the US in 2017, with ransom payments totaling more than $1 billion) and hacktivism (drawing attention to a specific cause, adding cultural and political dimensions to cyberattacks);

Increased consumer concern about personal data privacy and loss (30% of customers will take action following a data breach—demand compensation, sue or quit their relationship with the vendor);

Not enough people with the right expertise and experience (the much talked-about cybersecurity skill shortage is exacerbated in municipalities which find it hard to compete for scarce talent with organizations with much deeper pockets; this challenge becomes even more severe with the introduction of new approaches to cybersecurity involving new tools based on machine learning and artificial intelligence);

Insisting on fast time-to-everything (Agile is not agile enough) results in reduced quality of cybersecurity applications.

What’s to be done about meeting these challenges? Here’s a short list of priorities for leaders of smart cities worldwide, based on the presentations at Muni World:

Prepare for the worst - develop a protection strategy and emergency plans, and get outside experts to help;

Practice - training and testing and more training and testing and simulations;

Automate - implement a continuous adaptive protection, automate the process of detection and response, apply algorithms liberally, including AI and machine learning-based solutions;

Upgrade - keep up with attackers’ new methods and tools, improve the state of hardware and software including leveraging the cloud and big data analytics and invest in elevating the skill level of the people responsible for cybersecurity defense;

Share - raise public awareness, disclose your experiences, and exchange information with other local governments;

Separate and disinfect - insert a virtual layer between the internal network and the internet, allowing only for sending commands and showing display windows, and make downloadable files harmless by deleting areas where programs may exist or transforming them into safe data, regardless of whether they are malicious.

In addition to Eli Cohen and Gadi Mergi, other presenters at Muni World included Jonathan Reichental, CIO, City of Palo Alto, California; Roy Zisapel, co-founder and CEO, Radware; Menny Barzilay, Co-founder and CEO, FortyTwo Global; Morten Illum, EMEA VP, Aruba/HPE; Takahiko Makino, City of Yokohama, Japan; Yosi Schneck, Senior VP, Israel Electric Corporation; and Sanaz Yashar, Senior Analyst, FireEye.

Tamir Pardo, the former Director of the Mossad (Israel’s national intelligence agency), also spoke at the event, comparing the cyber threat to “a soft and silent nuclear weapon.” There is no way to stop a penetration, he said, and there will never be a steady state for cyber security.

This means life in the cybersecurity trenches, for local governments and all other organizations, will continue to get very interesting. To quote FireEye’s Sanaz Yashar (who quoted President Eisenhower), “plans are nothing; planning is everything.”

A Complete Beginner's Guide To Bitcoin In 2018

When you dig into the details of Bitcoin, it’s almost an unbelievable tale about how to create money. Although it seems like fiction, it’s actually the best-known version of digital currency in use today. To help you wrap your head around what it is, what it does and how to earn Bitcoins, I pulled together this complete beginner’s guide to Bitcoin.

Before we go any further I just want to reiterate that investing in cryptocoins or tokens is highly speculative and the market is largely unregulated. Anyone considering it should be prepared to lose their entire investment.

A bit of bitcoin history

Bitcoin was the first established cryptocurrency - a digital asset that is secured with cryptography and can be exchanged like currency. Other versions of cryptocurrency had been launched but never fully developed when Bitcoin became available to the public in 2009. Behind the development of Bitcoin is the anonymous Satoshi Nakamoto - possibly an individual or a group whose real identity is still unknown - who stated that the goal of the technology was to create “a new electronic cash system” that was “completely decentralized with no server or central authority.” In 2010, someone spent Bitcoins for the first time, purchasing two pizzas for 10,000 Bitcoins. I hope the pizza was good, because had that person held onto those Bitcoins, they would be worth more than $100 million today. In 2011, Nakamoto shared the source code and domains with the Bitcoin community and hasn’t been heard from since.

What is Bitcoin, really?

Bitcoin is a digital currency, so there are no coins to mint or bills to print. There is no government, financial institution or any other authority that controls it, so it’s decentralized. The owners who have Bitcoins in the system are anonymous - there are no account numbers, names, social security numbers or any other identifying features that connect Bitcoins to their owners. Bitcoin uses blockchain technology and encryption keys to connect buyers and sellers. And, just like diamonds or gold, a Bitcoin gets “mined.”
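The blockchain part can be sketched in a few lines: each block stores the hash of the previous block, so altering any historical entry invalidates everything after it. This is a minimal illustration of the chaining idea, not Bitcoin's actual data structures, and the transactions are made up.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (excluding its stored hash)."""
    payload = {"index": block["index"], "data": block["data"], "prev": block["prev"]}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(index, data, prev_hash):
    """Each block commits to its predecessor's hash, so tampering
    with any earlier block breaks every link after it."""
    block = {"index": index, "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid(chain):
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):          # block contents altered
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:  # broken link
            return False
    return True

chain = [make_block(0, "genesis", "0" * 64)]
chain.append(make_block(1, "Alice pays Bob 1 coin", chain[0]["hash"]))
chain.append(make_block(2, "Bob pays Carol 0.5 coin", chain[1]["hash"]))

print(valid(chain))                             # True: the chain checks out
chain[1]["data"] = "Alice pays Bob 100 coins"   # try to rewrite history...
print(valid(chain))                             # False: hashes no longer line up
```

In the real network, thousands of nodes hold copies of the chain, so a forger would have to rewrite history on most of them at once, which is what makes the ledger trustworthy without a central authority.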

How do you “mine” Bitcoins?

People - or more accurately, extremely powerful, energy-intensive computers - “mine” Bitcoins to make more of them. There are currently about 16 million Bitcoins in existence, which leaves only about 5 million more available to mine, because Bitcoin’s developers capped the quantity at 21 million. Each Bitcoin can be divided into smaller parts, with the smallest fraction - one hundred-millionth of a Bitcoin - called a “Satoshi,” after the founder Nakamoto. The mining process involves computers solving an extremely challenging mathematical problem that progressively gets harder over time. Every time a problem is solved, a block of transactions is processed and the miner is rewarded with new Bitcoins. A user establishes a Bitcoin address to receive the Bitcoins they mine, sort of like a virtual mailbox with a string of 27-34 numbers and letters. Unlike a mailbox, the user’s identity isn’t attached to it.
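The "extremely challenging mathematical problem" is, at its core, a search for a number (a nonce) that makes the block's hash meet a target. Here is a scaled-down sketch using SHA-256; real Bitcoin mining hashes a structured block header against a far harder target, so treat this as an illustration of the puzzle's shape only.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 hash starts with `difficulty`
    hex zeros: a miniature version of Bitcoin's proof-of-work puzzle."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block: Alice pays Bob 1 BTC")
print(nonce, digest)   # anyone can verify this answer with a single hash
```

The asymmetry is the point: finding the nonce takes many attempts (each extra zero of difficulty multiplies the expected work by 16), while verifying it takes one hash. That is what lets the network check a miner's work cheaply.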

How are Bitcoins used?

In addition to mining Bitcoins, there are other ways to earn them. First, you can accept Bitcoins as a means of payment for goods or services. Setting up a Bitcoin wallet is as simple as setting up a PayPal account, and it’s the way you store, keep track of and spend your digital money. Wallets are free and available through providers such as Coinbase. While it might take more time than it’s worth, there are websites that will pay you in Bitcoins for completing certain tasks. Once you’ve earned Bitcoins, there are ways to lend them out and earn interest. There are even ways to earn Bitcoins through trading, and recently Bitcoin futures were launched as a legitimate asset class. In addition, you can trade your regular currency for Bitcoins at Bitcoin exchanges; at its peak, the largest one, Japan-based Mt. Gox, handled 70 percent of all Bitcoin transactions. There are more than 100,000 merchants who accept Bitcoin as payment for everything from gift cards to pizza.

What are the risks?

There’s risk as well as great opportunity with Bitcoin. While it has been appealing to criminals due to its anonymity and lack of regulation, there are lots of benefits to all of us if you’re willing to accept some risk to jump in to the Bitcoin marketplace. Since there is no governing body, it can be difficult to resolve issues if Bitcoins get stolen or lost. In 2014 Mt. Gox went offline, and 850,000 Bitcoins were never recovered. Once a transaction hits the blockchain it’s final. Since Bitcoin is relatively new, there are still a lot of unknowns and its value is very volatile and can change significantly daily.

So, the jury’s still out if Bitcoin will accomplish what its proponents predict, the replacement of government-controlled, centralized money. I fully expect 2018 to give us even more insight about the future of Bitcoin as the technology continues to grow and mature.

How Thermal Cameras Work

Our eyes work by seeing contrast between objects that are illuminated by the sun or another source of light. Thermal cameras instead work by “seeing” heat energy radiating from objects. All objects - living or not - give off heat energy that thermal cameras use to form an image. This is why thermal cameras can operate at all times, even in complete darkness.

Because thermal cameras work by “seeing” heat rather than reflected light, thermal images look very different than what’s seen by a visible camera or the eye.  In order to present heat in a format appropriate for human vision, thermal cameras convert the temperature of objects into shades of gray which are darker or lighter than the background. On a cold day a person stands out as lighter because they are hotter than the background. On a hot day a person stands out as darker because they are cooler than the background.
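The gray-scale conversion described above is, in its simplest form, a linear rescaling of scene temperatures onto 0-255 display levels. The temperatures in this sketch are made up, but they show how the same person flips from brightest pixel to darkest pixel as the background changes.

```python
def to_gray(temps_c):
    """Linearly map a scene's temperatures (deg C) onto 0-255 gray
    levels: a simplified model of a thermal camera's display stage."""
    lo, hi = min(temps_c), max(temps_c)
    span = (hi - lo) or 1          # avoid dividing by zero on a flat scene
    return [round(255 * (t - lo) / span) for t in temps_c]

cold_day = [2.0, 3.0, 37.0]    # background, background, person
hot_day = [45.0, 44.0, 37.0]   # sun-heated background, background, person

print(to_gray(cold_day))   # [0, 7, 255]: the person is the brightest pixel
print(to_gray(hot_day))    # [255, 223, 0]: the person is the darkest pixel
```

Real cameras apply more sophisticated mappings, but this captures why a person "stands out" in opposite directions on cold and hot days.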

Outdoor challenges can impact how thermal cameras work

For these reasons, thermal cameras have become a good choice as a sensor for “seeing in the dark” because at night background objects tend to be cooler than a person at 98.6 degrees. Under ideal conditions, people are well emphasized at night because they appear brighter than the background and stand out, even in zero light.

However, outdoor security conditions are rarely “ideal”, especially during the day when darker objects absorb the sun’s energy and heat up, an effect known as Thermal Loading. When objects in the scene become uniformly hot in any given area, many cameras have difficulty mapping the narrow range of temperature differences into a useful image. The result is an image with large areas that look “whited out” or “grayed out” and undefined. This makes it difficult to see what is happening in the scene, and it makes it difficult for smart thermal cameras to automatically detect intruders accurately.

The capture at right shows a daylight image from a thermal camera which cannot effectively compensate for white-out. Details such as the power lines, pavement, and other objects have become impossible to discern due to the effect of thermal loading. It’s even difficult to tell that this is a daytime image.

Lack of image clarity can reduce security effectiveness. Security personnel who have to view blurry, undefined video even on a single monitor can become fatigued and confused by images that are not as intuitive as they would be with daylight cameras, while on-board video analytics will have a more difficult time detecting intruders.

Video Processing and Thermal Cameras

Thermal imagery is very rich in data, sensing temperature variations down to 1/20th of a degree. Thermal cameras must convert these fine temperature variations - representing 16,384 shades of gray - into about 250 gray levels to more closely match the human eye’s ability to decipher shades of gray. The image below shows the eye’s difficulty distinguishing between close levels of gray. The top row shows six levels of gray, which the eye can see. The bottom row shows sixteen shades of gray - it is increasingly difficult to distinguish where the shades transition from one block to the next. Consider that a thermal imager has 16,384 shades of gray, over 1,000 times more than shown in the lower bar, and the magnitude of the problem becomes clear.

In the past, most thermal cameras converted this data in a simplistic way by mapping together gross areas that are close in temperature. This is why thermal images often look blurry, lack detail and conceal intruders, and why the analytics would often miss intruders entirely.

New cameras with a high level of image processing can emphasize small variations between objects and the background, exaggerating fine details to present a clearer image, while automatically detecting intruders accurately, every day and every night, under all outdoor conditions.