The following code examples present four distinct scripts that solve, in different ways, the task of generating a small onset-detector neural network, as portrayed in the figure below:

An arbitrary input reaches two units performing a simple (identical) computation. The output unit (neuron 1) receives the excitatory input and, after a very short delay due to the longer path of the signal, it also receives an inhibitory signal from the interneuron (neuron 2). As a result, the output unit exhibits neural activity only for a very short time, starting when the external input stimulus arrives and ending when the inhibitory signal from the interneuron takes effect.
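To see why this feedback inhibition confines the output unit's activity to the stimulus onset, here is a minimal Python sketch of the same two-unit dynamics (the step input and variable names are illustrative, not part of the original scripts; the weights are the hand-tuned values used in Example 1):

```python
import math

T = 20
inp = [0.0] * 5 + [0.7] * 15   # a step input switched on at t = 5
n1 = [0.0] * T                 # output unit / onset detector
n2 = [0.0] * T                 # inhibitory interneuron

w_in, w_12, w_21 = 1.5, 0.5, -1.5   # input->both, n1->n2 (excit.), n2->n1 (inhib.)

for t in range(1, T):
    # each unit: rectified tanh of the weighted inputs from the previous step
    n1[t] = max(0.0, math.tanh(inp[t-1] * w_in + n2[t-1] * w_21))
    n2[t] = max(0.0, math.tanh(inp[t-1] * w_in + n1[t-1] * w_12))

# n1 responds right after stimulus onset, then inhibition from n2 silences it
print(round(n1[6], 3), round(n1[15], 3))
```

The interneuron needs one extra time step to become active, so the output unit fires once at onset; from then on the inhibitory weight outweighs the excitatory input and the output stays at zero for the rest of the stimulus.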

**In the first example,** the parameters that set the weights of the neural system have been “hand tuned”, i.e. found by a process of trial and error.

**The second example** proposes a different version of the script generating the network, illustrating the use of a function for the computation of each unit/neuron. Due to the presence of this function, the new version of the code requires an extra file (download here, format .rar), which is also reported at the end of this page.

For the third and fourth examples, let’s consider an auditory input provided to a volunteer during an experimental task. Imagine the input keeps a certain tone for a few seconds, then increases in intensity for another few seconds, before ending. This simple task elicits neural activity in two sensory areas of the cortex: in one area, fMRI has revealed onset detection; in the other, a linear response to the intensity of the stimulus (i.e. the stronger the auditory stimulus, the stronger the BOLD activity). These two BOLD activities recorded via fMRI represent a “target”, and we want to find which parameters in the network allow simulating such activity. Both examples use as input and target the results of the previous simulation (Example 2). The final parameters can be interpreted as a simple version of effective connectivity values among brain areas, not dissimilar from DCM results.

In the **third example** the parameters are searched randomly, running the simulation multiple times with multiple random parameters and finally selecting the best fit (Monte Carlo method).
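The strategy can be sketched in a few lines of Python on a toy problem (fitting a single weight of a linear model to a target; the problem and all names here are illustrative, not the actual network fit performed in Example 3):

```python
import random

random.seed(0)

# toy target: responses generated with a "true" weight of 1.5 (illustrative)
xs = [0.2, 0.5, 0.8, 1.0]
target = [1.5 * x for x in xs]

def error(w):
    # sum of squared differences, analogous to tot_er in the MATLAB script
    return sum((w * x - t) ** 2 for x, t in zip(xs, target))

best_w, best_err = None, float("inf")
for _ in range(50000):            # many random attempts, keep the best one
    w = random.uniform(-5, 5)     # random "phenotype" in the range [-5, 5]
    e = error(w)
    if e < best_err:
        best_w, best_err = w, e

print(best_w)  # close to 1.5
```

With enough attempts, pure random sampling finds parameters close to the optimum, but the cost grows quickly with the number of parameters, which motivates the guided search of the fourth example.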

In the **fourth example** the parameters are generated following a heuristic inspired by Darwinism and survival of the fittest: random variations of the parameters are generated within a range, starting from the best known result (genetic algorithm method).
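The same toy one-parameter problem can illustrate the idea: each generation keeps the best "phenotype" and mutates it within a range that shrinks over generations, mirroring the coarse-to-fine schedule used in Example 4 (all names and the toy target are illustrative):

```python
import random

random.seed(1)

xs = [0.2, 0.5, 0.8, 1.0]
target = [1.5 * x for x in xs]   # illustrative target with "true" weight 1.5

def error(w):
    return sum((w * x - t) ** 2 for x, t in zip(xs, target))

gener_n, popul_n = 30, 50
best_w = random.uniform(-5, 5)   # initial random guess

for gen in range(gener_n):
    # mutation range shrinks over generations: wide, small, then minimal variations
    if gen < gener_n // 3:
        spread = 2.0
    elif gen < 2 * gener_n // 3:
        spread = 1.0
    else:
        spread = 0.1
    # new population: the current best plus random variations around it
    population = [best_w] + [best_w + random.uniform(-spread, spread)
                             for _ in range(popul_n - 1)]
    best_w = min(population, key=error)   # survival of the fittest

print(best_w)
```

Because the current best is always carried over, the error can only decrease across generations; the shrinking mutation range trades early exploration for late fine-tuning.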

**Code: Example 1**

```matlab
%Example for the creation of an onset detector: a neural unit responding
%with increased activity only at the onset of any given input
clear all
close all

T=100;     %Total number of cycles or "time" of the simulation
dim1=1;    %number of units per neural layer: in this example, the
           %network merely consists of two units, one "neuron" per
           %each "layer"

%initialization
neuron1 = zeros(T,dim1);
neuron2 = zeros(T,dim1);
inp = zeros(T,1);

%Weight assignment for the connections in the network
inp2neuron = 1.5;   %weight for the synapse from input to both neurons
neu12neu2 = 0.5;    %weight for the synapse from neuron1 to neuron2
neu22neu1 = -1.5;   %weight for the synapse from neuron2 to neuron1

for t=2:T
    %input generation
    if t>25 && t<50
        inp(t,:)=rand(1)/10+0.2;
    elseif t>=50 && t<75
        inp(t,:)=rand(1)/10+0.5;
    end
    %very simple function to represent the activity of a neuron as positive
    %values: max(0, f(.)) and a saturation function tanh(.)
    neuron1(t) = max(0, tanh(inp(t-1)*inp2neuron+neuron2(t-1)*neu22neu1));
    neuron2(t) = max(0, tanh(inp(t-1)*inp2neuron+neuron1(t-1)*neu12neu2));
end

%PLOT instructions
h=figure;
screen_size=get(0,'ScreenSize');
screen_size(1:2)=screen_size(3:4)*1/10;
screen_size(3:4)=[800 800];
set(h,'Position',screen_size)

subplot(2,1,1)
plot(inp,'LineWidth',3)
set(gca, 'XTick', 0:T/5:T, 'XTickLabel', 0:1:5, 'FontSize', 14) %example for altering the values on the axis
xlabel('Seconds', 'FontSize', 14)  %example for a label of the axis
title('Input', 'FontSize', 18)
axis([0 T -0.1 1.1])
legend('input signal 1')

subplot(2,1,2)
%% Syntax 1 to plot the activity of the two neurons
hold on
plot(neuron1,'LineWidth',3)
plot(neuron2,'LineWidth',3)
hold off
%% Syntax 2 to plot the activity of the two neurons
%x=1:T;
%plot(x,neuron1,x,neuron2,'LineWidth',3)
%%
title('Neural Activity', 'FontSize', 18)
axis([0 T -0.1 1.1])
legend('neuron 1: output/onset detector', 'neuron 2: interneuron')
```

**Code: Example 2**

```matlab
clear all
close all

T=100;     %Total number of cycles or "time" of the simulation
dim1=1;    %number of units per neural layer: in this example, the
           %network merely consists of two units, one "neuron" per
           %each "layer"

%initialization
neuron1 = zeros(T,dim1);
neuron2 = zeros(T,dim1);
inp = zeros(T,1);

%Weight assignment for the connections in the network
inp2neuron = 1.5;   %weight for the synapse from input to both neurons
neu12neu2 = 0.5;    %weight for the synapse from neuron1 to neuron2
neu22neu1 = -1.5;   %weight for the synapse from neuron2 to neuron1

for t=2:T
    %input generation
    if t>25 && t<50
        inp(t,:)=rand(1)/10+0.2;
    elseif t>=50 && t<75
        inp(t,:)=rand(1)/10+0.5;
    end
    %the neural activity of each neuron/unit has been encapsulated in a function
    %requiring as arguments: 1) the time point (t), 2) the input at the previous
    %time point, 3) the entire vector of activity of the relative neuron
    neuron1 = neural_step(t, inp(t-1)*inp2neuron+neuron2(t-1)*neu22neu1, neuron1);
    neuron2 = neural_step(t, inp(t-1)*inp2neuron+neuron1(t-1)*neu12neu2, neuron2);
end

%PLOT instructions as for Example 1
```

**Code: Example 3**

Below is the code for the main file. Download the entire package (3 files): Onset_Montecarlo

```matlab
clear all
close all

%% Generation of the target that will be used for the parameter regression.
% This also includes the input signal, which is loaded here and therefore
% is not generated again in the "for" cycle.
load('target_activity.mat')
target1=neuron1;
target2=neuron2;

T=100;     %Total number of cycles or "time" of the simulation
dim1=1;    %number of units per neural layer: in this example, the
           %network merely consists of two units, one "neuron" per
           %each "layer"

%initialization
neuron1 = zeros(T,dim1);
neuron2 = zeros(T,dim1);

tic
for seed_n=1:50000  %number of randomly generated attempts to find the best fit
    %rng(seed_n)   %seed control
    phenotype=rand(1,3)*10-5;  %generation of the random "phenotype" or random
                               %weights for the network, within the arbitrary
                               %range of [-5 5]
    inp2neuron = phenotype(1); %weight for the synapse from input to both neurons
    neu12neu2 = phenotype(2);  %weight for the synapse from neuron1 to neuron2
    neu22neu1 = phenotype(3);  %weight for the synapse from neuron2 to neuron1

    for t=2:T
        %the neural activity of each neuron/unit has been encapsulated in a function
        %requiring as arguments: 1) the time point (t), 2) the input at the previous
        %time point, 3) the entire vector of activity of the relative neuron
        neuron1 = neural_step(t, inp(t-1)*inp2neuron+neuron2(t-1)*neu22neu1, neuron1);
        neuron2 = neural_step(t, inp(t-1)*inp2neuron+neuron1(t-1)*neu12neu2, neuron2);
    end

    %% Error calculation: the distance between the target and the activity
    % recorded as a result of using the randomly generated parameters
    tot_er(seed_n)=sum(((neuron1-target1).^2)*10+(neuron2-target2).^2);
    % All phenotypes (combinations of random parameters) are stored here
    store_phen(seed_n,:)=phenotype;
end
toc

%% Matlab "min()" function is used to find which iteration has resulted in the
% minimum error, or in other words has resulted in activity that is the
% closest to the target
[M, I]=min(tot_er);
bstres=store_phen(I,:);

%% Matlab "disp()" function displays the array, without printing the array name.
disp('Best parameters:');
disp(bstres);
```

**Code: Example 4**

Below is the code for the main file. Download the entire package (3 files): Onset_GeneticAlgorithm

```matlab
clear all
close all

%% Generation of the target that will be used for the parameter regression.
% This also includes the input signal, which is loaded here and therefore
% is not generated again in the "for" cycle.
load('target_activity.mat')
target1=neuron1;
target2=neuron2;

T=100;     % Total number of cycles or "time" of the simulation
dim1=1;    % number of units per neural layer: in this example, the
           % network merely consists of two units, one "neuron" per
           % each "layer"

%initialization
neuron1 = zeros(T,dim1);
neuron2 = zeros(T,dim1);

popul_n=200;  % total number of elements or phenotypes in each population
gener_n=400;  % total number of generations: a new population will be created
              % for each generation. Each element or phenotype in the new
              % population will present minor (random) changes in the parameters
              % when compared with the best fit of the previous generation

tic
for gen=1:gener_n
    for pop=1:popul_n
        if gen==1
            phenotype=rand(1,3)*10-5;          % For the first generation, all phenotypes
                                               % are generated randomly
        elseif gen>1 && pop==1
            phenotype=best_phen;               % For any generation after the first, the first
                                               % element in the new population is represented
                                               % by the best fit of the previous generation
        elseif gen>1 && gen<=gener_n/3
            phenotype=best_phen+rand(1,3)*4-2; % The first (out of 3) group of generations
                                               % generates wide variations of the best fit
                                               % in the interval [-2 2]
        elseif gen>gener_n/3 && gen<=gener_n*2/3
            phenotype=best_phen+rand(1,3)*2-1; % The second (out of 3) group of generations
                                               % generates small variations of the best fit
                                               % in the interval [-1 1]
        elseif gen>gener_n*2/3
            phenotype=best_phen+rand(1,3)*0.2-0.1; % The third (out of 3) group of generations
                                               % generates minimal variations of the best fit
                                               % in the interval [-0.1 0.1]
        end

        inp2neuron = phenotype(1); % weight for the synapse from input to both neurons
        neu12neu2 = phenotype(2);  % weight for the synapse from neuron1 to neuron2
        neu22neu1 = phenotype(3);  % weight for the synapse from neuron2 to neuron1

        for t=2:T
            % the neural activity of each neuron/unit has been encapsulated in a function
            % requiring as arguments: 1) the time point (t), 2) the input at the previous
            % time point, 3) the entire vector of activity of the relative neuron
            neuron1 = neural_step(t, inp(t-1)*inp2neuron+neuron2(t-1)*neu22neu1, neuron1);
            neuron2 = neural_step(t, inp(t-1)*inp2neuron+neuron1(t-1)*neu12neu2, neuron2);
        end

        %% Error calculation: the distance between the target and the activity
        % recorded as a result of using the parameters generated by the
        % genetic algorithm
        tot_er(pop)=sum(((neuron1-target1).^2)*10+(neuron2-target2).^2);
        % All phenotypes (combinations of parameters) are stored here
        store_phen(pop,:)=phenotype;
    end

    %% Matlab "min()" function is used to find which iteration has resulted in the
    % minimum error, or in other words has resulted in activity that is the
    % closest to the target
    [M, I]=min(tot_er);
    best_phen=store_phen(I,:);

    % Matlab "fprintf()" prints the formatted message for each generation
    fprintf('Best parameters for generation %d:\n', gen);
    disp(best_phen)
end
toc
```

**EXTRA Code: Function named “neural_step” used to simulate neural activity**

```matlab
function output = neural_step(t, comp_input, neur_activity)

%action potential
neur_potential = tanh(comp_input);

%transfer function
neur_activity(t,:) = max(0, neur_potential);

output = neur_activity;

end
```