
In today’s post, I will teach you how to implement the Q-Learning algorithm. But before that, I will explain the idea behind Q-Learning and its limitations. Please be sure to have some Reinforcement Learning (RL) basics; otherwise, check my previous post about the intuition and the key math behind RL.

Well, let’s recall some definitions and equations that we need for implementing the Q-Learning algorithm.

In RL, we have an environment that we want to learn. To do that, we build an agent who interacts with the environment through a trial-and-error process. At each time step *t*, the agent is in a certain state *s_t* and chooses an action *a_t* to perform. The environment runs the selected action and returns a reward to the agent. The higher the reward, the better the action. The environment also tells the agent whether he is done or not. So an episode can be represented as a sequence of state-action-reward tuples.

The goal of the agent is to maximize the total reward he will get from the environment. The function to maximize is called the expected discounted return, which we denote as *G*.
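As a reminder, with a discount factor *γ* between 0 and 1, the discounted return takes the standard form:

G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}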

To do so, the agent needs to find an optimal policy *𝜋*, which is a probability distribution over actions given a state.

Under the optimal policy, the Bellman Optimality Equation is satisfied:
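In standard notation, it reads:

q_*(s, a) = \mathbb{E}\left[ R_{t+1} + \gamma \max_{a'} q_*(S_{t+1}, a') \mid S_t = s, A_t = a \right]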

where *q* is the Action-Value function, or Q-Value function.

All these functions are explained in my previous post.

In the Q-Learning algorithm, the goal is to learn iteratively the optimal Q-value function using the Bellman Optimality Equation. To do so, we store all the Q-values in a table that we will update at each time step using the Q-Learning iteration:
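In standard notation, the iteration is:

Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) \right]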

where *α* is the learning rate, an important hyperparameter that we need to tune since it controls the convergence.

Now, we could start implementing the Q-Learning algorithm, but first we need to talk about the exploration-exploitation trade-off. Why? In the beginning, the agent has no idea about the environment. He is more likely to explore new things than to exploit his knowledge because… he has no knowledge. As the time steps pass, the agent gets more and more information about how the environment works, and he becomes more likely to exploit his knowledge than to explore new things. If we skip this important step, the Q-Value function may converge to a suboptimal solution which, most of the time, is far from the optimal Q-Value function. To handle this, we will use a threshold which decays every episode following an exponential decay formula. At every time step *t*, we sample a variable uniformly over [0, 1]. If the variable is smaller than the threshold, the agent explores the environment. Otherwise, he exploits his knowledge.
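One common form of this decay, with *n* the episode index, is:

\epsilon_n = N_0 \, e^{-\lambda n}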

where *N_0* is the initial value and *λ* is a constant called the *decay constant*.

Below is an example of the exponential decay:
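As a quick illustration in Python (the values of `N_0` and `decay_rate` below are purely illustrative):

```python
import math

N_0 = 1.0           # initial exploration threshold
decay_rate = 0.001  # the decay constant (lambda)

# Exploration threshold after a few sample episodes
for episode in [0, 1000, 5000, 10000]:
    epsilon = N_0 * math.exp(-decay_rate * episode)
    print(f"episode {episode}: epsilon = {epsilon:.4f}")
```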

Alright, now we can start coding. Here, we will use the FrozenLake environment of the **gym** Python library, which provides many environments including Atari games and CartPole.

The FrozenLake environment consists of a 4 by 4 grid representing a frozen lake surface. The agent always starts from state 0, the top-left cell of the grid, and his goal is to reach state 15, the bottom-right cell. On his way, he can walk on frozen surface or fall into a hole. If he falls, the episode ends. When the agent reaches the goal, the reward is equal to 1. Otherwise, it is equal to 0.

First, we import the needed libraries: NumPy for accessing and updating the Q-table, and gym to use the FrozenLake environment.

```python
import numpy as np
import gym
```

Then, we instantiate our environment and get the sizes of its state and action spaces.

```python
env = gym.make("FrozenLake-v0")

n_observations = env.observation_space.n
n_actions = env.action_space.n
```

We need to create and initialize the Q-table to 0.

```python
# Initialize the Q-table to 0
Q_table = np.zeros((n_observations, n_actions))
print(Q_table)
```

We define the different parameters and hyperparameters we talked about earlier in this post.
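Here is a minimal sketch of these definitions; the names and values are illustrative and should be tuned for your own runs:

```python
# Number of training episodes and maximum steps per episode
n_episodes = 10000
max_steps_per_episode = 100

# Learning rate (alpha) and discount factor (gamma)
lr = 0.1
gamma = 0.99

# Exploration threshold: initial value N_0 and decay constant lambda
N_0 = 1.0
decay_rate = 0.001
```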

To evaluate the agent’s training, we will store the total reward he gets from the environment after each episode in a list that we will use once the training is finished.
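For example:

```python
# Total reward collected in each episode
rewards_per_episode = []
```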

Now, let’s go to the main loop where the whole training process happens.

Please read all the comments to follow the algorithm.
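Below is a sketch of such a training loop, using the illustrative names defined above:

```python
for episode in range(n_episodes):
    # Reset the environment at the start of each episode
    state = env.reset()
    done = False
    total_reward = 0

    # Decay the exploration threshold (epsilon) every episode
    epsilon = N_0 * np.exp(-decay_rate * episode)

    for step in range(max_steps_per_episode):
        # Exploration-exploitation trade-off:
        # sample u ~ U[0, 1]; explore if u < epsilon, otherwise exploit
        if np.random.uniform(0, 1) < epsilon:
            action = env.action_space.sample()       # explore: random action
        else:
            action = np.argmax(Q_table[state, :])    # exploit: best known action

        # Run the selected action in the environment
        new_state, reward, done, _ = env.step(action)

        # Q-Learning update of the Q-table
        Q_table[state, action] = (1 - lr) * Q_table[state, action] + lr * (
            reward + gamma * np.max(Q_table[new_state, :])
        )

        total_reward += reward
        state = new_state

        if done:
            break

    # Store the total reward obtained during this episode
    rewards_per_episode.append(total_reward)
```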

Once our agent is trained, we will test his performance using the rewards-per-episode list, by computing his average reward every 1000 episodes.
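A simple way to do it, assuming `n_episodes` is a multiple of 1000:

```python
# Split the rewards into chunks of 1000 episodes and average each chunk
rewards_per_thousand = np.split(np.array(rewards_per_episode), n_episodes // 1000)

for i, chunk in enumerate(rewards_per_thousand):
    print(f"Episodes {i * 1000 + 1}-{(i + 1) * 1000}: mean reward = {chunk.mean():.3f}")
```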

As we can see, the performance of the agent is very poor at the beginning, but he improves his efficiency through training.

The Q-Learning algorithm is a very efficient way for an agent to learn how the environment works when the state and action spaces are small. However, when the state space, the action space, or both are continuous, it becomes impossible to store all the Q-values because that would require a huge amount of memory. The agent would also need many more episodes to learn about the environment. As a solution, we can use a Deep Neural Network (DNN) to approximate the Q-Value function, since DNNs are known for their efficiency at approximating functions. We then talk about Deep Q-Networks (DQN), and this will be the topic of my next post.

I hope you understood the Q-Learning algorithm and enjoyed this post.

Thank you!