
New paper: Studying associative learning without solving learning equations

Note: This is a somewhat technical post

While writing my previous JMP paper, On elemental and configural models of associative learning, I was also working out how the equivalence between elemental and configural models could be exploited for better analytical methods. My rationale for this research was that, in most cases, associative learning models are studied either intuitively or with computer simulation, making it difficult to establish general claims rigorously. After some time, and fantastic input from reviewers and the editor, I am happy that Studying associative learning without solving learning equations finally came out over the summer. The paper shows that the predictions of many models can be calculated analytically simply by solving systems of linear equations, which is much easier than solving the models’ learning equations. For example, we can calculate that, in a simple summation experiment (training an associative strength v_A to stimulus A and v_B to stimulus B), the associative strength of the compound AB is, in the Rescorla & Wagner (1972) model:

v_{AB}=\frac{1}{1+c} \left( v_A + v_B \right)

and in Pearce’s (1987) model:

v_{AB}=\frac{1}{(1+c^2)(2-c)} \left( v_A + v_B \right)

where, in both cases, c is the proportion of stimulus elements common to A and B. This makes it immediately apparent that v_{AB} / (v_A + v_B) ranges between 1/2 and 1 in Rescorla & Wagner (1972), while in Pearce (1987) it ranges between 1/2 and 0.54 (the maximum, reached at c = 1/3). These results were previously known only in the special case c = 0.
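
For the curious, here is a quick numerical check of these ranges in Python (a sketch of my own, not code from the paper):

    import numpy as np

    # Summation ratio v_AB / (v_A + v_B) as a function of the proportion c
    # of stimulus elements that A and B share, using the two formulas above.

    def ratio_rescorla_wagner(c):
        # Rescorla & Wagner (1972): v_AB = (v_A + v_B) / (1 + c)
        return 1.0 / (1.0 + c)

    def ratio_pearce(c):
        # Pearce (1987): v_AB = (v_A + v_B) / ((1 + c^2)(2 - c))
        return 1.0 / ((1.0 + c**2) * (2.0 - c))

    c = np.linspace(0.0, 1.0, 1001)
    print(ratio_rescorla_wagner(c).min(), ratio_rescorla_wagner(c).max())  # 0.5, 1.0
    print(ratio_pearce(c).min(), ratio_pearce(c).max())  # 0.5, ~0.54 (peak at c = 1/3)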

I hope the method presented in the paper will also be used by others to derive new theoretical predictions and to design new theory-driven experiments!


New paper: Solution of the comparator theory of associative learning

A few weeks ago I had the good news that our paper on the comparator model of associative learning had been accepted for publication in Psychological Review. This is my first published paper co-authored with an undergraduate student, Ismet Ibadullaiev, which makes me even happier. The paper (I put up an unofficial copy on my Papers page) deals with a very interesting model of associative learning in which most of the interesting phenomena are generated as memories are retrieved, rather than when memories are stored, as assumed by most mainstream theories of associative learning (e.g., the Rescorla-Wagner model and its derivatives).
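
For readers who have not met the theory: in its simplest first-order form, the comparator hypothesis determines responding to a target stimulus at the time of retrieval, by weighing the target's direct association with the US against an indirect route through a comparator stimulus. Here is a minimal Python sketch of that idea; it is my own illustration rather than the equations analyzed in the paper, and the variable names and the weight k are assumptions:

    def comparator_response(v_target_us, v_target_cmp, v_cmp_us, k=1.0):
        # Responding = direct target-US association minus the indirect route
        # (target -> comparator) x (comparator -> US), scaled by a comparator
        # weight k. Everything happens at retrieval: the learning rule that
        # sets the v's plays no special role.
        return v_target_us - k * v_target_cmp * v_cmp_us

    # A target with a strong direct association, but also a strong link to a
    # well-trained comparator, can end up commanding little responding:
    print(comparator_response(v_target_us=0.9, v_target_cmp=0.8, v_cmp_us=0.9))  # ~0.18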

Our conclusion, unfortunately, is that the theory makes a number of paradoxical predictions that are hard to reconcile with empirical data on learning. For example, it predicts that, in many cases, animals should not be able to distinguish which of two stimuli is more strongly associated with a reward (they do distinguish, of course), and that they should learn equally about faint and intense stimuli (in reality, animals learn preferentially about intense stimuli).

These problems have been hard to recognize because the theory had been studied exclusively by intuition and computer simulation. Both are fine tools, but both can run into trouble. The predictions of comparator theory, as it turns out, vary greatly with the values of a few parameters, and our intuition is not well equipped to reason about the non-linear effects that abound in the theory. Simulations give correct results, but only for the parameter combinations we actually simulate. We were fortunate enough to realize that one could write down a formal mathematical solution to the theory. With this solution it became much easier to see the big picture and actually prove what the theory can and cannot do.
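
To give a flavor of why intuition and piecemeal simulation struggle, the toy sweep below (using the same illustrative first-order rule as in the sketch above, not the paper's full solution) shows the predicted ordering of two targets flipping as the comparator weight k varies:

    import numpy as np

    # Target X has the stronger direct association with the US, but also the
    # stronger link to a well-trained comparator stimulus; Y is weaker on both.
    v_x_us, v_x_cmp = 0.9, 0.8
    v_y_us, v_y_cmp = 0.6, 0.2
    v_cmp_us = 0.9

    for k in np.linspace(0.0, 1.0, 5):
        r_x = v_x_us - k * v_x_cmp * v_cmp_us
        r_y = v_y_us - k * v_y_cmp * v_cmp_us
        print(f"k={k:.2f}  r_X={r_x:+.2f}  r_Y={r_y:+.2f}  {'X>Y' if r_x > r_y else 'Y>X'}")

    # The ordering flips near k = 0.56; a simulation run at a single value
    # of k would miss the reversal entirely.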

I enjoyed working with comparator theory because of its distinct flavor – as hinted above, it is rather different from other learning models – and because of the many surprises we had while exploring its predictions. Although we found what appear to be serious flaws in the theory, these may lie more in its mathematical implementation than in its core concepts. The ideas that memory retrieval is an important factor in associative learning, and that stimulus-stimulus associations are more important than other models acknowledge, may well be worth pursuing. But the formulae that translate these ideas into a testable model will surely need to be revised.