Teaching material and code examples for reinforcement learning and algorithmic models, used both to solve classic problems and as computational descriptions of recorded behavior (all code is in Matlab).
Disclaimer: I usually run through the code several times before publishing it, and the parts that have been used in a course/workshop are commented, but do let me know if you find any bug or have a question. Please keep in mind that I am not a programmer, so these examples may not represent the best possible coding practice.

This work by Vincenzo Fiore is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
ON and OFF policy solutions for the “cliff task”
As an example of the different solutions to a problem that emerge when adopting ON- or OFF-policy TD algorithms, I use here a task described in the second edition of Sutton and Barto (2017; the entire book can be downloaded for free as a PDF from Stanford.edu, HERE, see chapter 6)…
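The code in this repo is in Matlab; as a language-agnostic illustration, here is a minimal Python sketch of the two TD updates at the heart of this comparison. SARSA (on-policy) bootstraps on the action the agent actually takes next, while Q-learning (off-policy) bootstraps on the greedy action; in the cliff task this is what produces the safe path versus the shortest path along the cliff edge. The learning rate and discount defaults are illustrative assumptions, not the values used in the repo.

```python
import numpy as np

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.5, gamma=1.0):
    # On-policy TD: bootstrap on the action a2 actually taken in s2
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])

def q_learning_update(Q, s, a, r, s2, alpha=0.5, gamma=1.0):
    # Off-policy TD: bootstrap on the greedy (max-value) action in s2
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# Toy illustration: from state 0 the agent moves to state 1, where the
# epsilon-greedy policy happens to pick the lower-valued action 0.
Q = np.zeros((2, 2))
Q[1] = [1.0, 3.0]
sarsa_update(Q, s=0, a=0, r=0.0, s2=1, a2=0)   # backs up Q[1, 0] = 1.0
```

Because Q-learning always backs up the maximum over the next state's actions, its estimates ignore the exploratory behavior that, in the cliff task, occasionally walks the agent off the edge.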
ON and OFF policy solutions for the “windy grid-world task”
As an example of the different strategies that emerge when adopting ON- or OFF-policy TD algorithms, I use here a task described in the second edition of Sutton and Barto (2017; the entire book can be downloaded for free as a PDF from Stanford.edu, HERE, see chapter 6). In this task…
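The distinctive feature of the windy grid-world is its transition dynamics: a column-dependent wind displaces the agent upward on top of its chosen move. The repo code is in Matlab; the Python sketch below shows one way to write that transition (the wind strengths and the 7x10 grid follow the textbook version of the task, and applying the wind of the departure column is an assumption of this sketch).

```python
def windy_step(state, action, wind=(0, 0, 0, 1, 1, 1, 2, 2, 1, 0),
               height=7, width=10):
    # state = (row, col); action = (drow, dcol), e.g. (0, 1) for "right".
    # The wind of the column the agent leaves from pushes it toward row 0.
    row, col = state
    drow, dcol = action
    row = row - wind[col] + drow
    col = col + dcol
    # Clip to the grid boundaries
    row = min(max(row, 0), height - 1)
    col = min(max(col, 0), width - 1)
    return (row, col)
```

Plugging this step function into the SARSA or Q-learning loop is all that changes with respect to a standard grid-world.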
Lotka–Volterra equations: basic dynamics and input-controlled parameter modulation
Lotka-Volterra equations were first developed to simulate nonlinear ecological interactions among different species. Assuming two species affect each other in a prey-predator relationship, the basic Lotka-Volterra equations describe the fluctuations in the population sizes of both species as follows: \[ \frac{dx}{dt} = \alpha x - \beta xy \] \[…
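The repo implements these dynamics in Matlab; as an illustration, a minimal Python sketch of the standard prey-predator system integrated with a plain Euler step is shown below (the parameter values, step size, and initial populations are arbitrary assumptions for the demo).

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=10000):
    # Euler integration of the standard prey (x) / predator (y) system:
    #   dx/dt = alpha*x - beta*x*y
    #   dy/dt = delta*x*y - gamma*y
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = alpha * x - beta * x * y
        dy = delta * x * y - gamma * y
        x += dt * dx
        y += dt * dy
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Example run: equilibrium at x* = gamma/delta = 20, y* = alpha/beta = 10
xs, ys = lotka_volterra(10.0, 2.0, alpha=1.0, beta=0.1, delta=0.02, gamma=0.4)
```

Euler integration slowly inflates the orbits of this system, so for anything quantitative a higher-order integrator (e.g. Runge-Kutta, as in Matlab's ode45) is preferable; the sketch is only meant to expose the structure of the equations.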
Model-based and Model-free RL solving a sequential two-choice Markov decision task
In this example I replicated the task and model described in Gläscher et al. 2010 (available HERE). The task is essentially a two-armed bandit with probabilistic outcomes (probability distribution: 0.7-0.3), played on two levels, so that the agent has to perform two choices in sequence (left or right), to…
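The key contrast in this class of tasks is that a model-based agent evaluates the root choice by backing expected values up through its learned transition model, whereas a model-free agent only caches sampled values. The repo model is in Matlab; below is a minimal Python sketch of the model-based backup over a hypothetical two-level tree. Only the 0.7-0.3 transition split comes from the text; the state numbering and terminal rewards are invented for illustration and do not reproduce the Gläscher et al. task values.

```python
# Hypothetical two-level tree: root state 0; each choice leads to its
# "common" successor with p=0.7 and to the alternative with p=0.3.
T = {  # T[(state, action)] = {next_state: probability}
    (0, 'L'): {1: 0.7, 2: 0.3},
    (0, 'R'): {2: 0.7, 1: 0.3},
    (1, 'L'): {3: 0.7, 4: 0.3},
    (1, 'R'): {4: 0.7, 3: 0.3},
    (2, 'L'): {5: 0.7, 6: 0.3},
    (2, 'R'): {6: 0.7, 5: 0.3},
}
R = {3: 0.0, 4: 10.0, 5: 25.0, 6: 0.0}  # terminal rewards (illustrative)

def mb_value(s):
    # Model-based evaluation: recursively back up expected values,
    # taking the max over actions at each decision state.
    if s in R:
        return R[s]
    return max(sum(p * mb_value(s2) for s2, p in T[(s, a)].items())
               for a in ('L', 'R'))
```

A model-free learner (e.g. SARSA) would converge toward these same values, but only through repeated sampled experience, which is what makes the two systems dissociable in this sequential design.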
Reinforcement Learning vs Bayesian approach
As part of the Computational Psychiatry summer (pre-)course, I discussed the differences between the approaches characterising reinforcement learning (RL) and Bayesian models (see slides 22 onward, HERE). In particular, I presented a case in which values can be misleading, as the correct (optimal) choice selection leads to…
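The core formal contrast between the two approaches can be sketched in a few lines. An RL learner tracks a scalar value with a delta rule, while a Bayesian learner maintains a full posterior over the reward probability (here a Beta distribution over a Bernoulli outcome). The Python code below is a minimal illustration under those assumptions, not the model from the course slides; the learning rate and uniform prior are arbitrary.

```python
def rl_update(v, r, alpha=0.1):
    # Model-free RL: delta-rule update of a scalar value estimate
    return v + alpha * (r - v)

def bayes_update(a, b, r):
    # Bayesian: Beta(a, b) posterior over a Bernoulli reward probability;
    # r is 1 (reward) or 0 (no reward). Posterior mean is a / (a + b).
    return (a + r, b + (1 - r))
```

The difference matters for choice: the RL value is a point estimate, whereas the Beta posterior also carries uncertainty, which a Bayesian agent can exploit (e.g. for directed exploration).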