Creates an N-individuals by T-trials matrix of press-right decisions and the resulting outcomes. It takes task definitions, as well as individual-level values for each parameter governing the reinforcement learning algorithm.

generate_responses(N, M, K, mm, Tsubj, cue, n_cues, condition, outcome, beta_xi,
  beta_b, beta_eps, beta_rho, press_right = NULL)

Arguments

N

number of individuals

M

number of samples

K

number of conditions

mm

a vector of length N giving the group label (an integer in 1:M) for each individual

Tsubj

a vector of trial counts for each individual

cue

an N by max(Tsubj) matrix of integers in 1:n_cues identifying the cue displayed on each trial

n_cues

total number of unique cues

condition

an N by max(Tsubj) matrix of integers in 1:K identifying the condition for each trial

outcome

an N by max(Tsubj) by 2 array of real numbers identifying the outcome reward if the individual chooses left (outcome[,,1]) or right (outcome[,,2])

beta_xi

an N by K matrix of coefficients governing the amount of irreducible noise

beta_b

an N by K matrix of coefficients governing the amount of press-right bias

beta_eps

an N by K matrix of coefficients governing the learning rate

beta_rho

an N by K matrix of coefficients adjusting the relative amount of reward

press_right

an N by max(Tsubj) matrix of press-right responses (either 0 or 1). If supplied, these responses are used in place of draws from the binomial distribution. Useful if one wants to retrieve the expected probabilities of pressing right for a given set of model parameters and response patterns. Defaults to NULL.

Value

a list with three N by max(Tsubj) matrices (with NA in unused trial cells). The first matrix, press_right, contains 0 if the decision was to press left, and 1 if the decision was to press right. The second matrix, outcome_realized, contains the amount of feedback received after the press. The third matrix, p_press_right, contains the trial-by-trial probability of a right-key press.
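
Examples

A minimal sketch of a call, assuming generate_responses() is available from this package. All dimensions and parameter values below are illustrative placeholders, not recommendations.

```r
set.seed(1)
N      <- 4    # individuals
M      <- 2    # samples
K      <- 2    # conditions
n_cues <- 2    # unique cues
T_max  <- 10   # trials per individual

mm    <- rep(1:M, length.out = N)   # group label for each individual
Tsubj <- rep(T_max, N)              # trial count for each individual

# Task definition: cue and condition shown on each trial, and the
# potential reward for a left (outcome[,,1]) or right (outcome[,,2]) press
cue       <- matrix(sample(1:n_cues, N * T_max, replace = TRUE), N, T_max)
condition <- matrix(sample(1:K,      N * T_max, replace = TRUE), N, T_max)
outcome   <- array(rnorm(N * T_max * 2), dim = c(N, T_max, 2))

# N by K coefficient matrices, constant across individuals here for brevity
beta_xi  <- matrix(0.1, N, K)   # irreducible noise
beta_b   <- matrix(0.0, N, K)   # press-right bias
beta_eps <- matrix(0.2, N, K)   # learning rate
beta_rho <- matrix(1.0, N, K)   # relative amount of reward

sim <- generate_responses(N, M, K, mm, Tsubj, cue, n_cues, condition,
                          outcome, beta_xi, beta_b, beta_eps, beta_rho)
str(sim$press_right)        # N by max(Tsubj) matrix of 0/1 decisions
str(sim$outcome_realized)   # N by max(Tsubj) matrix of realized feedback

# To retrieve trial-by-trial probabilities for an observed response
# pattern, supply it via press_right instead of simulating new draws:
fit <- generate_responses(N, M, K, mm, Tsubj, cue, n_cues, condition,
                          outcome, beta_xi, beta_b, beta_eps, beta_rho,
                          press_right = sim$press_right)
# fit$p_press_right then holds the probability of a right-key press
# on each trial under the supplied parameters
```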