Connection points between Ecological Psychology and Dynamic Systems Theory

TL;DR: EP and DST share a view of their subject matter as dynamic, systems-based, and relational.

Part of what I taught in my doctoral course brings up the connection between EP and DST: why do they seem to work so well together? Here’s a general and concise answer to this question:

EP connects well with DST, in my opinion, because EP takes as its ontological core a relational and dynamic perspective on the goings-on of the world where organisms are involved: relational in the sense of including, and implicating, environment and organism together as one system. DST likewise takes its subject matter as dynamic (perhaps obviously), since the analyses DST contains require dynamic measurements to be carried out.

Second, speaking as an Ecological Psychologist, I will tell you that DST works best with relational phenomena. In experiments on movement, for example, we would be less likely to use the literal three-dimensional position of, say, a hand as the sole basis of an analysis (because it is a component, not a relation) and more likely to use something like the angle between one limb and another (which does not exclude using three-dimensional positions to calculate a relational measure). However, cognitivists have argued that DST works fine with the former kind of measurement too, where a component is the basis of the analysis, so this may be a matter of preconceptions.

Then there is the third part: EP takes the entire system as the subject matter of psychology, and that includes the environment. To explain behavior we cannot talk only about the human, or worse, only about the inside of a human; we need to understand the entire organism-environment system to understand why a particular behavior arose or emerged. DST (again, perhaps obviously) shares the perspective that a system of many components, due to rules of engagement (relations), can produce the most complex of collective behaviors, given repetitions over time (partly summed up by the term emergence). However, some argue here that the brain has many components and can also be seen as a system, and by the rules of DST this is allowable. This is why DST is called a-theoretic: its collection of methods and analyses really only requires a numerical time series, which is why I can work with some colleagues without a shared ontological/epistemological basis of subject matter.
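The relational-measure point can be made concrete in code. Below is a minimal Python sketch (marker names and coordinates are hypothetical, not from any actual motion-capture study) of how three-dimensional positions of components are used only to derive a relational quantity, the angle at a joint:

```python
import numpy as np

def segment_angle(shoulder, elbow, wrist):
    """Angle (in degrees) at the elbow, i.e. between upper arm and forearm."""
    u = shoulder - elbow
    v = wrist - elbow
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical 3D marker positions (metres) at one sample of a recording
shoulder = np.array([0.0, 0.0, 1.4])
elbow = np.array([0.0, 0.0, 1.1])
wrist = np.array([0.3, 0.0, 1.1])
print(segment_angle(shoulder, elbow, wrist))  # → 90.0
```

Computed at every sample, this yields exactly the kind of numerical time series, of a relation rather than of a component, that DST’s analyses require.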

Further reading:
Self-Organization of Cognitive Performance
van Orden, Holden, & Turvey (2003)

Dynamics in cognition
Riley & Holden (2012)

Back on the anti-representation train

I have the wonderful opportunity at the moment to teach the methods of Ecological Psychology and Dynamic Systems Theory: their philosophical basis, their theoretical concepts, how they require certain analyses, and what kinds of explanations these perspectives give. I am lucky to be in a department that is (as of yet) wholly representation-dominant, yet is curious, interested, and promotes theoretical plurality. I just personally keep running into the wall of not being able to believe in representations as an ontological foundation for psychology.

Perhaps someone can convince me otherwise. Four different ways of defining representations are as follows.

First, the ‘literal-structural’ version, which amounts to old-school phrenology: it is literally neurons/physical structures in the brain that are, or store, representations. I do not meet many researchers who hold this belief. Mostly, I find it in folk psychology, I think because it is an easily envisionable version that gains face validity through movies and TV series that talk about brain function in this way (the British panel show QI is an example).

Second, you have the ‘literal-activational’ version, which amounts to what I would call modern phrenology: it is the electrical activity in the brain that constitutes representations. This and the literal-structural version are sometimes unhelpfully combined. fMRI, PET and similar techniques try to get at this by measuring blood flow to different parts of the brain, although these are indirect techniques, since the assumptions go: 1) thoughts, feelings, reactions, cognitions, etc., are produced in the brain; 2) electrical activity in the brain is presumed to indicate usage of a particular brain area; 3) when a brain area is used, increased blood flow is seen to that area; 4) blood flow can therefore indicate brain-part usage, and thereby thoughts, feelings, reactions, cognitions, etc.

Third, we leave the literal kinds and hop the fence to ‘symbolic-mathematical’ accounts: representations are not literal parts of the brain but something more abstract, instantiated by mathematics. This version is often combined with the methods of the ‘literal-activational’ version, and I have listened to several prominent cognition scholars express beliefs about how mathematical equations are running in, or produced by, the brain… somehow. Several of them, when explaining how the math got there to begin with, used various kinds of nativism as explanations. I have also found network explanations, ‘patterns of activation across groups of neurons’ and similar, both here and in the fourth grouping.

Fourth, the ‘symbolic-abstract’ version, which often amounts to more hand-waving than the third: representations are not mathematics, they can be groups of patterned activity, sometimes explained as dynamic clouds of activation of different kinds.

I just don’t find myself believing in any of them. With the literal-structural version, there have actually been attempts at finding such structures; they were called engrams at the time, and they simply could not be found. There is little wiggle room in this description: either you can show empirically that a literal neuron/structure changes when you memorize something new or change a memory, or you are more or less forced to accept that this story isn’t the best one. This is why I think so few subscribe to it beyond non-experts (e.g. folk psychology) and AI researchers, the latter of whom are almost exclusively dependent on this version.

The literal-activational version is far more popular, however. A lot of its support comes from the medical sciences where, if you stimulate a person’s brain with electricity while they talk or play an instrument, their activity is often disrupted. Or, if a patient has particular cognitive issues, a neurosurgeon can often quite easily pick out where a tumor is pushing against (if there is a tumor, which isn’t always the case). This evidence is a muddle to me, since you can also completely remove brain parts, even half the brain(!), and still maintain functioning, which seems to me to break down the validity of this narrative. It is usually explained away with brain plasticity, but if brain plasticity is true (in reference to literal-activational representations), then the reliability across time when looking at the same brain, or when looking at different brains, breaks down (Anderson’s book After Phrenology is an interesting read). In fact, if either of the literal accounts is true, we should have found specific structures and/or patterns of activation that are stable across time for a single person, or across people. And if this were substantiated by research and industry, where the f* are our mind-reading helmets? Often, when I bring this point up in discussions, neurocognitivists retreat to neurons and how we lack the technical means to measure every individual neuron in the brain simultaneously to answer that question. There are a couple of practices that seem to support the view, however: sending a whole bunch of electricity down the spine seems to help people with motor-function constraints regain movement, or make it less disrupted. Which is great. But it is a far cry from the specificity needed to support the theoretical position.

There are also toy versions, like moving cubes on a screen using EEG, which, beyond taking a surprising amount of time to train, reset once the person leaves the room: the process restarts and cannot simply be continued the next day. I understand this position though, I do; I can live alongside it (although I am extremely frustrated by grant decisions heavily favoring neuroscience). However, the evidence surrounding it promises a specificity in identifying recurring activational patterns that, very clearly, neither empirical work nor practical implementation lives up to.

When we get to the symbolic versions of brain activity, particularly the mathematical accounts, the story about activation standing in for representations seems to me either to give up the ontological foundation of representations (hand-wavy “it’s maths” explanations) or simply to add another layer of assumptions to the literal-activational account. Now, not only do you have to solve the above problems, you also have to explain why a conglomerate of biological-chemical-electrical activity would instantiate an accounting tool that humans created in the first place. A good tool, mind you, but human-created nonetheless. From this point, of course, you get all kinds of half-to-non-scientific abstractions about how everything in the world is made up of mathematics, a tall-tale version of ruthless reductionism, and you of course lose the ontology of the phenomena you are trying to explain. You’d be surprised who I have heard literally say, in very public circumstances, that babies run physics equations in their brains that they are born with. And they are given so much research funding.

My personal grievances aside, we have the symbolic-abstract version, which, perhaps surprisingly, I find the most convincing. In some ways it can be seen as a less specific version of the literal-activational/structural versions of representations. Often, the explanation begins from the point that larger structures in the brain are not themselves specific; rather, there may be particular patterns of activation across structures that are recruited in a kind of online fashion. The patterns can thus “move around” the brain in part-deterministic, part-stochastic ways, meaning that if you think about a cat, you enter that time period from a different starting point than if you had been asked to think about it the next day. A similar pattern would repeat, but not necessarily the same neurons, and not necessarily all the same structures.

This account would additionally fit the empirical findings detailed in After Phrenology. It would also explain how fMRI and MRI studies find general trends across people (averages over both spatial structures and time; how much time depends, of course, on the imaging technique). However: if this account is true, then I will most likely never get my mind-reading helmet, because it would be near-impossible to know how a particular pattern of activation could stand for only one object, and what that pattern would look like the next day. Of course, one could object that it is not objects that are represented, but everything we are continuously exposed to at the same time (plus memories, plus …). But then it would be nigh impossible to sort out which components of input lead to which activational pattern, particularly if that pattern changes with each instance. I do have some sympathy for this position, as it seems to me to fit more of the empirical data, but it gives a version of representations that is practically unusable except as a theoretical description.

To sum up: the more literal narratives of representations promise specificity (particularly the medically inspired accounts), but this just hasn’t materialized on the practical-functional end, and there is plenty of contradictory empirical evidence. The symbolic-mathematical perspective seems to explain some of the contradictions by shifting its ontological basis to mathematics, but this step seems fantastical, as it requires another set of beliefs to accept an ontological reality of maths. The world isn’t maths. The world is the world, although it can be described in detail by maths. Lastly, we have the more symbolic-abstract network version, which seems to me to cover most of the empirical literature and dispel the contradictions of the literal accounts. However, this perspective does not seem to live up to the definition of a representation in the first place, losing the ‘specificness’ that makes representations attractive to AI researchers at the outset. In a recent interview with a prominent Ecological Psychology researcher, the response they were met with was “I don’t see how anything could be anything else than computation”. I have the exact opposite view: I have a very hard time seeing how anything could really be computational, in a way that says something of value about psychological phenomena beyond simple, mechanical, surface-level generalisations. So what is the brain up to? Biology does not preserve components that are not used.

Well, finally, we have the Raja-ian version of brain activity, but here we have left the realm of representations: the brain and central nervous system are seen more as a tuning fork, a resonance device, than as something harboring ‘the real world’ in one way or another. As a non-content explanation of the brain I have high hopes for this perspective, and perhaps it is compatible with some versions of the symbolic-abstract network version of representations (which, again, do not live up to the demands of what a representation would be in the first place). But, just as the more literal proponents of representations wait for technological solutions to their theoretical problems, I’ll have to wait for the empirical evidence and theory-building required for a fuller account of the resonance narrative.

Network and doctoral course startup at Lund University for Dynamic Systems Theory and Ecological Psychology

TL;DR: I have officially started a research network for the proliferation of everything Dynamic Systems Theory and Ecological Psychology, along with my brand new doctoral course ‘The Psychology in Dynamic and Ecological Systems’.

Outside of the doctoral programs at the University of Connecticut and the University of Cincinnati, there are research groups and individual researchers who work on the combination of Ecological Psychology and Dynamic Systems Theory (also referred to as Ecological Dynamics). And while Dynamic Systems Theory is well known in its fields of origin, it is still not common in Psychology. I personally have often felt quite isolated at a university where it remains uncommon. So I decided to start this network: to gather international and interdisciplinary researchers and create seminars, presentations, and guest lectures on the topics of EP and/or DST. If this sounds like something for you, don’t hesitate to contact me, or read more about the Lund University Network for Dynamic Systems Theory and Ecological Psychology (LUNDSTEP) here.

In other good news, I’ve also put together an introductory doctoral course on the philosophy, psychological theory, research methods, and non-linear/dynamic analyses, which started today. If anyone is interested, we do take late signups, and I can organize Teams for remote students. You can also find more information on the Department of Psychology’s website here.

Comments on ENSO Seminar “Radical Embodiment and Real Cognition”

Over at the 4e Cognition Group, Anthony Chemero has given a talk (YouTube link) about a couple of interesting new directions that he and his students are working on for their dissertations and a paper. The main impetus is to explain “higher-order cognition” from a rECS-able perspective.

The first turn is through Gui Sanches de Oliveira’s Artifactualism approach to models, essentially a thorough and solid argument that scientific models are first and foremost tools, not accurate representations of the world. If a model works, we use it to predict, explain, plan, experiment, etc. It reminds me of the futile path that scientists are often found on: focusing on finding The Truth, or on finding objectivity. The world seems to me to contain neither, but even if it does, it doesn’t matter, at least not nearly as much as whether the proposed model can be used in an applied setting. It also reminds me of Nancy Cartwright’s arguments about truth and explanation: how far apart those two concepts are, and how opting for truth takes us further away from a functioning tool. This is a really important step. Artifactualism rightfully criticizes the assumption that thoughts are for representing the world accurately, and replaces it with the idea that cognition is for toolmaking: “Explicit, occurrent thoughts are tools, instruments, or artifacts that some agents create and use. Of course, models can meet formal definitions of representations, but that is not what they are for…”.

The second turn is through Vicente Raja Galian’s attempt at defining brain activity in terms of resonance and oscillators. In his case, TALONs are resonant networks of neurons that resonate to certain ecological information and not to other information, that can continue to oscillate in the absence of the initial resonant input, and that can be set in oscillatory motion again at a later point in time (again without the initial input, building on Ed Large’s work). The brain here is driven by everything else, not the other way around. Oscillators, and non-linear oscillators in particular, can act as filters and produce patterns not present in the original driver.

Then we take a turn into what Chemero refers to as slave/master systems, and while those words seem very culturally loaded, they make the point that slave systems wander (drift) in the absence of a master system. For example, circadian rhythms stay in tune when we are regularly exposed to sunlight, but when deprived of it, our rhythms start to drift. A connected idea is that when we do try to use TALONs to think about things, or about the past, we just don’t seem to be very good at it, because that is not what they (and the brain as a whole) are for. Marek McGann adds: “‘Memories’ are constructed on the fly, and confabulation is rife, because it is not retrieval of things, but it is temporary toolmaking”.
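The master/slave drift point can be illustrated with a toy coupled-oscillator simulation. This is my own sketch in Python; the frequencies, coupling strength, and function names are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def phase_difference(coupled, steps=20000, dt=0.01):
    """Euler-integrate a 'slave' phase oscillator that runs slightly fast
    (w0 = 1.1) against a 'master' driver (wd = 1.0), with coupling K."""
    w0, wd, K = 1.1, 1.0, 0.5
    theta = drive = 0.0
    diffs = []
    for _ in range(steps):
        forcing = K * np.sin(drive - theta) if coupled else 0.0
        theta += dt * (w0 + forcing)
        drive += dt * wd
        diffs.append(theta - drive)
    return np.array(diffs)

entrained = phase_difference(True)   # difference settles to a constant lag
drifting = phase_difference(False)   # difference grows without bound
```

With the driver present, the slave locks to it at a small constant phase lag; remove the driver and the slave’s own (slightly faster) frequency makes it drift away, which is the circadian-rhythm story in miniature.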

Ultimately, these initial steps toward making the idea of ‘resonance’ more concrete seem very promising. An interesting aspect of resonance is that it exists on all scales; it doesn’t matter whether we look at the behavioral or the neural scale, which makes resonant phenomena analyzable by methods such as fractal analyses. That makes this an empirically testable theory. Also, resonant networks no longer have to contain content. Anthony Chemero suggests toolmaking, which will have to be defined further for me to understand whether representational content hasn’t just been replaced by Gibsonian tool content. And don’t get me wrong, that would be a wonderful first step in better characterizing what humans do, but I am also currently on a quest for a non-content description of neural activity, and resonance seems to fit that description.
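As an example of a fractal method that applies at any scale of measurement, here is a bare-bones detrended fluctuation analysis (DFA) in Python. This is my own minimal sketch of the general algorithm, not the exact analysis pipeline from any of the papers mentioned above:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha.
    alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise, ~ 1.5 for Brownian motion."""
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    fluct = []
    for n in scales:
        f = []
        for i in range(len(y) // n):                 # non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f.append(np.mean((seg - trend) ** 2))    # squared linear-detrend residuals
        fluct.append(np.sqrt(np.mean(f)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(1)
white = rng.standard_normal(2000)   # uncorrelated noise
brown = np.cumsum(white)            # random walk (Brownian motion)
print(dfa(white, [16, 32, 64, 128, 256]))   # close to 0.5
print(dfa(brown, [16, 32, 64, 128, 256]))   # close to 1.5
```

The same function can be pointed at a behavioral series (response times, postural sway) or a neural one, which is precisely why scale-free methods suit the resonance story.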

Abstractions and Scaling Up

TL;DR: Abstract words and concepts are inseparable from specific instances, which makes the term’s usage confusing.

It seems that often, in discussions about whether or not a certain phenomenon ‘scales up’, or whether we engage in abstraction, the concepts we talk about take on a life of their own. For example, I see a curious indent in the wall; it turns out such things are called power outlets, and I can charge my laptop if I have a compatible prong. Here, some would try to convince us that we have created a new concept, and that for every instance we see of this new thing, we add it to the concept, or we extract central features, and then we go about talking ‘abstractly’ about some kind of general ‘power outlet’ that has gained its own level of existence. I urge everyone to think differently about this: to deny the assumption that we are creating something new. I don’t think anyone would disagree with me denying that we just created some kind of otherworldly, non-physical concept. But I think mainstream cognitive science would disagree with denying that we are creating an abstraction. In one sense, it is a mundane counter-argument: we see the first power outlet, a representation is created in the brain; we see another one, another representation; and/or we start creating a representation that is slightly less specific and only picks out the shared features of the first two. Any way you slice it, this is the work representations do for mainstream psychologists. But what do you do if you don’t believe in representations?

Taking a page out of Gibson’s 1979 bible, I would argue that ‘scaling up’ or ‘abstraction’ is simply a pole of attention. We can take any pole of attention we are aware of; we can say the word ‘ball’ and just kind of mean a ball in general. However. Describing something from different perspectives (poles of attention) is just that: it doesn’t entail an ontological difference in the world. Same with abstraction: I can choose any pole of attention to make things seem general or specific in any which way; I can call a less featureful ball an abstraction that can be applied to the next ball I haven’t seen. But all that is going on is that you are seeing a couple of aspects in a new thing that are also true of another thing; you are not ontologically creating an overarching concept.

If you think we are, I need to be convinced that it is not non-physical (enter contemporary cognition, representations, and similarity hierarchies). I currently think that position may be indefensible. It seems to me that we (EcoPsych/DynSys) don’t need to accept an ontological shift; it is enough to describe it as a shift in the pole of attention, and we can be taught, by others or by our own experience of the world, to take on a pole of attention we haven’t taken before, or didn’t know existed, or didn’t want to take, or anything else. It does not necessarily mean we have to accept a new ontological status of an utterance. With this, I think most mundane arguments about abstraction and higher-level (cognitive) faculties disappear, but not all.

Emergence, then: how in the world do we deal with things that ultimately do seem to create a new ‘level’ of functioning? A termite mound is not concerned with its shape; hell, not even the termites are. But because of extraneous factors guiding the drop-off of pheromone-marked dirt, all of those small, lawful actions create a temperature-regulated, multi-story apartment building. Here it is difficult to argue that the mound is just a pole of attention, since it clearly comes with new properties that aren’t written into its creation. I think this is a very different thing to talk about. Compare a termite mound to the word ‘honor’. Honor seems more non-physical, more like an abstraction, but as soon as you have to apply the word, you are forced to apply it to a specific situation. There is almost an asymmetry: the more abstract a word seems to be, the more specific an example needs to be to understand it, and multiple specific examples can be even more illuminating.
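The termite point can be made concrete with a toy stigmergy model (a sketch of my own, not a model of real termite behavior): agents that simply drop material where material already accumulates produce a ‘mound’ that no individual rule mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
SITES, DROPS = 50, 5000

def deposit(reinforced):
    """Drop DROPS units of material across SITES locations."""
    h = np.zeros(SITES)
    for _ in range(DROPS):
        if reinforced:
            # stigmergic rule: prefer sites where material has accumulated
            p = (h + 1) / (h + 1).sum()
            i = rng.choice(SITES, p=p)
        else:
            i = rng.integers(SITES)  # no feedback: uniform scatter
        h[i] += 1
    return h

mound = deposit(True)
scatter = deposit(False)
# The reinforced run piles up far above its average; the uniform run stays flat
print(mound.max() / mound.mean(), scatter.max() / scatter.mean())
```

No agent ‘knows about’ the peak; it emerges from a local deposition rule repeated over time, which is the sense of emergence meant above.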

Ultimately, I may just have a problem with the way the term ‘abstract’ is used. Colloquially it means ‘more general’ or ‘less specific’; in application, it is necessarily always a specific instance. To me, the term implies a separate thing with an ontological status (like a general concept), inviting representations. Perhaps it only invites them, which saves its usage somewhat, but I just find it confusing.

First conference talk and proceedings publication!

Going to CogSci17 in London this summer for my first research presentation; the paper is to be published in the proceedings (and can be found here). Here’s the abstract:

The actualization of affordances can often be accomplished in numerous, equifinal ways. For instance, an individual could discard an item in a rubbish bin by walking over and dropping it, or by throwing it from a distance. The aim of the current study was to investigate the behavioral dynamics associated with such metastability using a ball-to-bin transportation task. Using the time interval between sequential ball presentations as a control parameter, participants transported balls from a pickup location to a drop-off bin 9 m away. A high degree of variability in task actualization was expected and found, and the Cusp Catastrophe model was used to understand how this behavioral variability emerged as a function of hard (time interval) and soft (e.g. motivation) task dynamic constraints. Simulations demonstrated that this two-parameter state manifold could capture the wide range of participant behaviors, and explain how these behaviors naturally emerge in an under-constrained task context.

Keywords: affordances, dynamic systems, cusp catastrophe, dynamic modeling, simulations, constraints
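For readers unfamiliar with the model, the cusp catastrophe’s qualitative behavior can be sketched from its standard normal form, dx/dt = -(x^3 + b*x + a). The snippet below is my own illustration of that normal form, not the parameterization used in the paper:

```python
import numpy as np

def cusp_equilibria(a, b):
    """Real equilibria of the cusp normal form dx/dt = -(x**3 + b*x + a)."""
    roots = np.roots([1.0, 0.0, b, a])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# With b < 0 (inside the cusp region) two stable states coexist,
# which is how the model captures abrupt switches between behaviors:
print(len(cusp_equilibria(0.0, -3.0)))  # → 3 (two stable, one unstable)
print(len(cusp_equilibria(0.0, 3.0)))   # → 1 (a single stable state)
```

Sweeping a at fixed b < 0 moves the system across a fold where one equilibrium disappears and behavior jumps to the other branch, the hysteresis that makes the cusp apt for equifinal, under-constrained tasks.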

(3/4) Cognitive Psychology in Crisis: Ameliorating the Shortcomings of Representationalism. EcoPsy and rECS.

After a few more e-mails to a few people, I received my feedback. It was mostly general structuring issues and broader aspects of the thesis. Valuable and informative comments overall, so no change in posting the last two chapters as planned.
This chapter is a bit of an anti-climax to me. It mainly contains definitions and concepts, explanations and examples. So, if you already know your way around Gibson’s Ecological Psychology, Chemero’s radical Embodied Cognitive Science, van Gelder’s Watt governor example for Dynamic Systems Theory, and Wilson and Golonka’s four-point task analysis, there is not too much to gain from this chapter. You can find the whole 21-page chapter here. One thing of importance, however, is that in this chapter I attempt to define affordances ontologically and epistemologically, something I have not seen in the literature before. I have already posted my ontological query here. The last section of the chapter also brings up a novel area of interest for EcoPsy. It is called “Electronic Sports and Computer Resistance” and brings in the curious aspect of affordances/information from depictions. I have written about this in a previous blog post as well, but have extended and reworked it a bit. The following is a summary of that section:
Gibson, Chemero and Wilson discuss whether affordances actually exist when perceiving depictions. This is quite curious, because it is not intuitively simple to decide whether depictions actually afford something, or inform of something. Wilson is currently theorizing about this, so we will have to wait and see what comes of that. The current understanding (most likely to change) is that depictions do not afford us anything. This in turn impacts computer-screen research, if you wish to stick to EcoPsy, because the broad genres of computer gaming and on-screen research rely on it. If we immerse ourselves in virtual environments, are we dealing with affordances? Virtual affordances? Not affordances at all? Information? Virtual information? Do virtual environments inform us rather than afford us? Does a virtual environment offer virtual affordances to virtual agents? This could easily be a point of criticism against EcoPsy in a philosophy journal, but there’s no fun in that, is there? Instead, I attempt to define virtual affordances and virtual environments as separate concepts, at least until their possible integration depending on Wilson’s work. The simplest core concept here is the verbal notation ‘virtual’, which should be seen as a working definition.
I am going to try and summarize and post the last chapter, my thesis experiment, as soon as possible. If not today, then probably during the weekend seeing as Midsummer’s Eve is upon Sweden tomorrow! So, Happy Midsummer’s Eve and don’t forget to dance around the Midsummer Pole pretending you are a frog.