New paper: On elemental and configural theories of associative learning

A new paper of mine just came out in the Journal of Mathematical Psychology. It considers an old issue that has traditionally split the field of associative learning, and that echoes various scientific disputes between holism and reductionism. The question is: when an animal learns about a stimulus, how is the stimulus endowed with the power to cause a response? Configural models of learning assume that a mental representation of the stimulus “as a whole” acquires associative strength (learning psychologists’ term for a stimulus’s power to cause a response), while elemental theories assume that the stimulus is fragmented into a number of small representation elements (say, shape, color, size, and so on), each of which carries some associative strength.
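To make the contrast concrete, here is a minimal sketch of the two kinds of model in Python. The stimulus encodings, similarity measure, and weights are invented for illustration; they do not correspond to any specific published model.

```python
import numpy as np

# A stimulus is encoded as a binary vector of representation elements
# (hypothetical features such as shape, color, size).
red_triangle = np.array([1, 0, 1, 0])
red_square   = np.array([1, 0, 0, 1])

# Elemental model: each element carries its own associative strength,
# and the response is the summed strength of the elements present.
v = np.array([0.2, 0.0, 0.5, 0.1])  # per-element associative strengths
def elemental_response(s):
    return v @ s

# Configural model: whole configurations carry associative strength,
# and a stimulus evokes a response via its similarity to each of them.
configurations = [red_triangle, red_square]
w = np.array([0.7, 0.3])             # per-configuration strengths
def similarity(a, b):
    # Toy overlap-based similarity, not Pearce's actual rule.
    return (a @ b) / np.sqrt((a @ a) * (b @ b))
def configural_response(s):
    return sum(wi * similarity(s, c) for wi, c in zip(w, configurations))

print(elemental_response(red_triangle))   # 0.7
print(configural_response(red_triangle))
```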

Long story short, it turns out that there is practically no difference between these two approaches. They amount to different ways of bookkeeping associative strength, without this necessarily having any observable consequence. In fact, the main result of the paper is that, given some mild assumptions, for every configural model there is an equivalent elemental model – one that makes exactly the same predictions about animal learning – and, vice versa, every elemental model has an equivalent configural model.
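In one direction, the argument is short. Suppose a configural model responds to a stimulus s with a similarity-weighted sum over stored configurations c, and suppose the similarity function k can be written as an inner product of representation vectors φ (an assumption of the same mild kind as in the paper; the notation here is mine, not the paper’s):

```latex
r(s) = \sum_{c} w_c \, k(s, c)
     = \sum_{c} w_c \, \phi(s) \cdot \phi(c)
     = \phi(s) \cdot \Big( \sum_{c} w_c \, \phi(c) \Big)
```

The bracketed sum is a single fixed vector of per-element associative strengths, so the configural model predicts exactly what an elemental model with those weights predicts.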

Thus there is no “better way” to think about how stimuli acquire associative strength, something that I expect will surprise some learning scholars. What I have personally most enjoyed discovering while working on this topic is that learning psychologists, and specifically John M. Pearce in this 1987 paper, have re-invented the formalism of kernel machines, a workhorse of machine learning and computer science since the 1960s. In fact, my proof of the equivalence of configural and elemental models is itself a re-discovery, in a much simpler setting, of the “kernel trick” of machine learning (see the previous link, and thanks to an anonymous reviewer for pointing this out).
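The same collapsing step can be checked numerically. In the sketch below, a toy degree-2 polynomial kernel, whose explicit feature map is easy to write down, stands in for Pearce’s similarity rule; the stimuli and weights are random numbers, not anything from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Degree-2 polynomial kernel k(a, b) = (a . b)^2 and its explicit
# feature map: all pairwise products of input components.
def k(a, b):
    return (a @ b) ** 2

def phi(s):
    return np.array([s[i] * s[j] for i, j in product(range(len(s)), repeat=2)])

configs = rng.random((5, 3))  # five stored configurations
w = rng.random(5)             # their associative strengths
s = rng.random(3)             # a test stimulus

# Configural prediction: similarity-weighted sum over configurations.
configural = sum(wi * k(s, c) for wi, c in zip(w, configs))

# Elemental prediction: collapse the configural weights into one
# vector of per-element strengths, then take a single dot product.
v = sum(wi * phi(c) for wi, c in zip(w, configs))
elemental = v @ phi(s)

print(np.isclose(configural, elemental))  # True
```

This is the “kernel trick” run in reverse: instead of using the kernel to avoid computing φ, we compute φ explicitly to show that the two bookkeeping schemes agree.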

Intriguingly, this is not the first time learning psychologists have independently developed concepts that had been introduced in machine learning. Another remarkable case is Donald Blough‘s 1975 re-invention of the least mean squares filter (or delta rule), a kind of error-correction learning that had been developed in 1960 to build self-regulating electronic circuits, and that Blough proposed as a model of animal learning (a minimal sketch of the rule follows below). I resist speculating too much on whether this means that there is only one way to be intelligent – be it for animals or machines.
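Here is that rule as a short Python sketch. The learning rate, stimulus encodings, and training regime are arbitrary choices of mine, not Blough’s or Widrow and Hoff’s:

```python
import numpy as np

def delta_rule(trials, lr=0.1, n_features=3):
    """Least mean squares / delta rule: nudge each weight in
    proportion to the current prediction error."""
    w = np.zeros(n_features)
    for x, target in trials:
        error = target - w @ x   # prediction error on this trial
        w += lr * error * x      # error-correction update
    return w

# Toy experiment: a compound of two elements, A and B, is reinforced.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
w = delta_rule([(A + B, 1.0)] * 100)
print(w @ A, w @ B)  # each element ends up with about half the strength
```

Run on a reinforced compound stimulus, the rule divides the available associative strength among the elements present – the elemental bookkeeping discussed above.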