Introduction
Reinforcement learning is about how an agent can learn to interact with its environment. It uses the formal framework of Markov decision processes (MDPs) to define the interaction between a learning agent and its environment in terms of states, actions, and rewards.
Elements of Reinforcement Learning
Policy defines the way that an agent acts: a mapping from perceived states of the world to actions. It may be stochastic.
Reward defines the goal of the problem. A number is given to the agent as a (possibly stochastic) function of the state of the environment and the action taken.
Value function specifies what is good in the long run; the agent's objective is essentially to maximize expected cumulative reward. The central role of value estimation is arguably the most important thing that has been learned about reinforcement learning over the last six decades.
Model mimics the environment to facilitate planning. Not all reinforcement learning algorithms have a model; those that don't cannot plan, i.e. they must learn by trial and error, and are called model-free.
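The four elements above can be made concrete in a small sketch. The code below is an assumed toy example, not from the source: a 5-state chain MDP where the agent earns reward 1 for reaching the goal state. The `step` function plays the role of the model (it mimics the environment's dynamics), `policy` maps states to actions (stochastic via epsilon-greedy exploration), the reward defines the goal, and TD(0) learning estimates the value function.

```python
import random

random.seed(0)  # for reproducibility of this illustration

# Hypothetical chain environment: states 0..4, goal at state 4.
N_STATES = 5
GOAL = 4

def step(state, action):
    """Model: mimics the environment, returning (next state, reward)."""
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def policy(state, values, epsilon=0.1):
    """Epsilon-greedy policy: maps perceived states to actions (stochastic)."""
    if random.random() < epsilon:
        return random.choice([-1, 1])
    # Greedy one-step lookahead using the current value estimates.
    return max([-1, 1],
               key=lambda a: step(state, a)[1] + values[step(state, a)[0]])

def td0(episodes=500, alpha=0.1, gamma=0.9):
    """Value function: TD(0) estimates the long-run value of each state."""
    values = [0.0] * N_STATES
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            a = policy(s, values)
            s2, r = step(s, a)
            # Move the estimate toward the one-step bootstrapped target.
            values[s] += alpha * (r + gamma * values[s2] - values[s])
            s = s2
    return values

values = td0()
```

After training, states closer to the goal have higher estimated value, which is exactly the "good in the long run" notion the value function captures; a model-free method would learn the same estimates from sampled experience without calling `step` for planning.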