Back on the anti-representation train

I have the wonderful opportunity at the moment to teach the methods of Ecological Psychology and Dynamic Systems Theory, their philosophical basis, their theoretical concepts, how they require certain analyses, and what kinds of explanations these perspectives give. I am lucky to be in a department that is (as yet) wholly representation-dominant, yet curious, interested, and supportive of theoretical plurality. I just personally keep running into the wall of not being able to believe in representations as an ontological foundation for psychology.

Perhaps someone can convince me otherwise. Four different ways of defining representations are as follows. First, the ‘literal-structural’ version, which amounts to old-school phrenology: it is literally neurons/physical structures in the brain that are/store representations. I do not meet many researchers who hold this belief. Mostly, I find it in folk psychology, because I think it is an easily envisionable version that has surface validity through movies and TV series that talk about brain function in this way (the British “knowledge”-panel program QI is an example). Second, you have the ‘literal-activational’ version, which amounts to what I would call modern phrenology: it is the electrical activity in the brain that constitutes representations. This and the literal-structural version are sometimes unhelpfully combined. fMRI, PET and similar techniques try to get at this by measuring blood flow to different parts of the brain, although this is an indirect technique, since the assumptions go: 1) thoughts, feelings, reactions, cognitions, etc., are produced in the brain; 2) electrical activity in the brain is presumed to indicate usage of a particular brain area; 3) when a brain area is used, increased blood flow to that area is seen; 4) blood flow can therefore indicate brain-part usage, and thereby thoughts, feelings, reactions, cognitions, etc. Third, we leave the literal kinds and hop the fence to ‘symbolic-mathematical’ accounts: representations aren’t literal parts of the brain but something more abstract, instantiated by mathematics. This version is often combined with the methods of the ‘literal-activational’ version, and I’ve listened to several prominent cognition scholars express beliefs about how mathematical equations are running in/produced by the brain… somehow. And several of them, when explaining how the math got there to begin with, used various kinds of nativism as explanations.
I’ve also found network explanations, or ‘patterns of activation across groups of neurons’ and the like, both here and in the fourth grouping. Fourth, the ‘symbolic-abstract’ version, which often amounts to more hand-waving than the third: representations are not mathematics; they can be groups of patterned activity, sometimes explained as dynamic clouds of activation of different kinds.

I just don’t find myself believing in any of them. With the literal-structural version, there have actually been attempts at finding them. They were called engrams at the time, and they simply could not be found. There is little wiggle room in this description: either you can show empirically that a literal neuron/structure changes when you memorize something new or change a memory, or you are more or less forced to accept that this story isn’t the best one. Which is why I think so few subscribe to it beyond non-experts (e.g. folk psychology and AI researchers, the latter of whom are almost exclusively dependent on this version).

The literal-activational version is far more popular, however. In part, a lot of support comes from the medical sciences, where, if you stimulate a brain with electricity while the person is talking or playing an instrument, they are often disrupted in their activities. Or, if you have particular cognitive issues, a neurosurgeon can often quite easily pick out what a tumor is pushing up against (if you have a tumor, which isn’t always the case). This evidence is a muddle to me, since you can also completely remove brain parts, even half the brain(!), and still maintain functioning -which seems to me to break down the validity of this narrative. It is usually explained away with brain plasticity, but if brain plasticity is true (in reference to literal-activational representations), then the reliability across time when looking at the same brain, or when looking at different brains, breaks down (Anderson’s book After Phrenology is an interesting read). In fact, if either of the literal accounts is true, we should have found specific structures and/or patterns of activation that are stable across time for a single person, or across people. And if this were substantiated by research and industry, where the f* are our mind-reading helmets? Often when I bring this point up in discussions, neurocognitivists retreat to neurons and how we don’t have the technical expertise to measure individual neurons in the entire brain simultaneously to answer that question. There are a couple of practices that seem to support the view, however: sending a whole bunch of electricity down the spine seems to help people with motor-function constraints regain movement, or make it less disrupted. Which is great. But it is a far cry from being specific in the sense that’s needed to support the theoretical position.
There are also toy versions like moving cubes on a screen using EEG, which, beyond taking a surprising amount of time to train, reset once the person leaves the room: the process restarts the next day and cannot simply be continued. I understand this position though, I do, and I can live alongside it (although I am extremely frustrated by grant decisions heavily favoring neuroscience). However, the evidence surrounding it promises a specificity -an ability to identify recurring activational patterns- that very clearly neither the empirical record nor practical implementation lives up to.

When we get to the symbolic versions of brain activity, particularly the mathematical accounts, the story about activation standing in for representations seems to me to either give up the ontological foundation of representations (hand-wavy “it’s maths” explanations) or simply add another step of assumptions to the literal-activational account. Now, not only do you have to solve the above problems, you also have to explain why a conglomerate of biological-chemical-electrical activity would instantiate an accounting tool that humans created to begin with. A good tool, mind you, but human-created nonetheless. From this point, of course, you get all kinds of half-to-non-scientific abstractions about how everything in the world is made up of mathematics, a tall-tale version of ruthless reductionism, and you of course lose the ontology of the phenomena you are trying to explain. You’d be surprised who I have heard literally say, in very public circumstances, that babies run physics equations in their brains that they are born with. And they are given so much research funding. My personal grievances aside, we have the symbolic-abstract version, which I find the most convincing, perhaps surprisingly. In some ways it can be seen as a less specific version of the literal-activational/structural versions of representations. Often, the explanation begins from the point that larger structures in the brain are not themselves specific; rather, there may be particular patterns of activation across structures that are recruited in a kind of online fashion. The patterns can thus “move around” the brain in part-deterministic, part-stochastic ways, meaning that, if you think about a cat, you come into that time period from a different starting point than if you had been asked to think about it the next day. So, a similar pattern would repeat, but not necessarily the same neurons and not necessarily all the same structures.
This account would additionally fit the empirical findings detailed in After Phrenology. It would also explain how fMRI and MRI studies find general trends (averages over spatial structures and averages over time -how much time depends on the imaging technique) across people. However, if this account is true, then I will most likely never get my mind-reading helmet, because it would be near-impossible to know how a particular pattern of activation could stand for only one object, and what that pattern would look like the next day. Of course, an objection could be that it is not objects that are represented, but everything that we are exposed to continuously at the same time (plus memories, plus …). But then it would be nigh impossible to sort out which components of input lead to which activational pattern, particularly if that pattern changes with each instance. I do have some sympathy for this position though, as it seems to me to fit more of the empirical data, but it gives a version of representations that is practically unusable except as a theoretical description.

To sum up. The more literal narratives of representations give promises of specificity (particularly the medically inspired accounts), but this just hasn’t materialized on the practical-functional end, and there is plenty of contradictory empirical evidence. The symbolic-mathematical perspective seems to explain some of the contradictions by shifting the ontological basis to mathematics, but this step seems fantastical, as it requires another set of beliefs to accept an ontological reality of maths. The world isn’t maths. The world is the world. Although it can be described in detail by maths. Lastly, we have the symbolic-abstract-network version, which seems to me to cover most of the empirical literature and dispel the contradictions of the literal account. However, this perspective seems to me to not live up to the definition of a representation to begin with, losing the ‘specificness’ of representations that makes them attractive to AI researchers in the first place. In a recent interview with a prominent Ecological Psychology researcher, the reaction they met was “I don’t see how anything could be anything else than computation”, and I have the exact opposite view: I have a very hard time seeing how anything could really be computational (in a way that says something of value about psychological phenomena beyond simple, mechanical, surface-level generalisations). So what is the brain up to? Biology does not preserve components that are not used.

Well, finally, we have the Raja-ian version of brain activity, but here we have left the realm of representations. The brain and central nervous system are seen more as a tuning fork, a resonance device, than as something harboring ‘the real world’ in one way or another. As a non-content explanation of the brain I have high hopes for this perspective, and perhaps it is compatible with some versions of the symbolic-abstract-network version of representations (which, again, do not live up to the demands of what a representation would be in the first place). But, just as the more literal proponents of representations wait for technological solutions to their theoretical problems, I’ll have to wait for the empirical evidence and theory-building required for a fuller account of the resonance narrative.

Network and doctoral course startup at Lund University for Dynamic Systems Theory and Ecological Psychology

TL;DR: I have officially started a research network for the proliferation of everything Dynamic Systems Theory and Ecological Psychology, along with the start of my brand new doctoral course ‘The Psychology in Dynamic and Ecological Systems’.

Outside of the doctoral programs at the University of Connecticut and Cincinnati, there are research groups and individual researchers that work in Ecological Psychology and Dynamic Systems Theory (that is, the combination, also referred to as Ecological Dynamics). And while Dynamic Systems Theory is well known in its fields of origin, it is still not common in Psychology. I personally have often felt quite isolated at a university where it remains uncommon. Therefore, I decided to start this network, to gather international and interdisciplinary researchers and create seminars, presentations, and guest lectures on the topics of EP and/or DST. If this sounds like something for you, then don’t hesitate to contact me or read more about the network, the Lund University Network for Dynamic Systems Theory and Ecological Psychology (LUNDSTEP), here.

In other good news, I’ve also put together an introductory doctoral course to the philosophy, psychological theory, research methods, and non-linear/dynamic analyses that started today. If anyone has any interest in this then we do take late signups and I can organize Teams for remote students. Also find more information on the Department of Psychology’s website here.

Reflections on ‘Radical Enactivism and Ecological Psychology: friends or foes?’

Target article: Zahidi, K., Eemeren, J.V. (2018) Radical Enactivism and Ecological Psychology: friends or foes? (OPC on “Perception-Action Mutuality Obviates Mental Construction” by Martin Fultot et al.)

The article compares some core aspects of Enactivism (the REC version) with Ecological Psychology (on the basis that Fultot et al. reflect it); the authors find disagreement and put forth a choice between two aspects of EP -one amenable to REC, the other not. The one that isn’t is thankfully not a position held by EP, which makes it seem like the two are compatible (at least on these points). See below for extracts. Finally: the below is an attempt at understanding each other better. I like Enactivism, and I think we can mutually benefit each other -but we need to know each other’s positions better. Also, I am not a representative of the entire discipline, nor do I claim to know all of it. But this is a beginning to a discussion across fields so we can benefit from each other! (If I mischaracterize Enactivism, I hope you will help me too!)

 

p. 1 “The first thesis essentially says that all forms of basic cognition are “concrete spatio-temporally extended patterns of dynamic interaction between organisms and their environment ” (Hutto & Myin 2013: 5).”
If you take this quote out of its context of an Enactivism paper, you would need to be citing Gibson (1966, 1979) and Chemero (2009) here, as it is very close to how the concept of affordances is used in EP -that is, a relation within and/or between organisms and environments (this language can be tightened up if you want, with terms like niche and habitat; Baggs & Chemero, 2018). This is good news, because if it is a central tenet then there is hope we have amenable approaches.

p. 2 “Sensorimotor contingencies are the lawful ways in which sensory stimulation changes as the organism moves or acts in the environment . In perception the organism exploits these sensorimotor contingencies to find its way around the environment.”
I do not know the definition of sensory stimulation in Enactivism. If they mean it in the traditional behaviorist/representationalist sense, that stimulation is the basis of perception, then I disagree; I believe this leads down a path toward needing representations/content etc. in the brain to build up whatever it is that we “see”. Due to Direct Perception, I cannot accept that. If they mean sensory stimulation the way that Gibson (1979) does, then the next point of issue is the exact definition of sensorimotor contingencies (SMCs) in general. Are SMCs the change in perspective? The change in perspective plus the change in layout that it brings with it? All of the above plus the new changes in action that it also brings with it? The EP way of bringing about lawfulness is through specificity, which can be found in Gibson (1979), Turvey, Solomon and Burton (1990), Stoffregen and Bardy (2001) and Chemero (2009) (and there are more…). Specificity is a debated term -what is included, what isn’t- but as a basic definition we need (in the case of vision) light that bounces off a bunch of stuff in the world, making it structured in different directions. This structure is referred to both as an ambient optic array and as information. Information here is not content. Never content. Ever. But with our visual perceptual system (which includes the entire body, not just the eyes) we have evolved to detect differences and changes in this structure. Some argue that information necessarily implies an animal, some argue it doesn’t, and yet others argue that it doesn’t really matter either way… For all that I can tell, the quoted sentence doesn’t contradict specificity…

p. 2 “However, from a REC point of view that interaction is always embodied interaction. It is the embodied nature of the organism that grounds the kind of interactions that are possible. Note however that the “body” referred to in the Embodiment Thesis is not our common-sense notion of body but rather the body engaged in non-linear and far-reaching sensorimotor interactions while engaging with certain salient features of the environment . In this interaction the physical body ànd brain areas are in play. ”
On Gibson’s (1966, 1979) account you cannot be disembodied. Some of the initial research in EP was driven by the idea that, given the definition of affordances, our perception-action cycles must bear on aspects of our body in conjunction with aspects of the environment when we enact or perceive affordances. See Warren (1984) for an easy example. Lastly, for EP the brain surely does something; I won’t deny it is a curious structure that does something, but I definitely do not hold a modular view of the brain (see e.g. Anderson’s book After Phrenology on the specificity of brain regions). Now, this last point doesn’t necessarily contradict Anderson, in that the brain does specific things. However, this to me (in both Anderson and REC) too easily invites the concept of representations if viewed strictly, or content if viewed loosely, as something in the brain, or something that the brain does. That is not acceptable from an EP standpoint for very many different reasons. So, it does depend on what is actually meant by interaction, and on what the brain is supposed to do, which isn’t explicitly stated in this quote.

p. 2 “All interactions are made possible by previous interactions between the organism and its environment (and recall that the history of interactions may extend beyond the organism’s life-time). ”
A question for Enactivists: I like the idea, if it doesn’t invite nativist arguments. How is it brought about across generations? (I can think of candidate answers but do not wish to speculate.)

p. 2 “The answer is that the interactions affect and change the neural and non-neural body of the organism. In the neural machinery the past experience is sedimented, and in order to explain current interactions this “sediment” is called upon.”
Another question: is REC OK with content either outside of or inside of the body, in whatever shape it takes? Ecological Psychology is not, in either case. To me this hints at a real divide if content is assumed anywhere in the system on behalf of REC.

p. 2-3 “There is thus no danger that by invoking neural structure in enactivist explanation, representations slip back in by the back door.”
I want to apologize if I am reading REC unfavorably, but even though it is insisted that representations are not accepted in REC, it does seem like REC wants to talk about content without talking about representations. Is that true? Is that possible? If so, it hints again at a divide.

p. 3 “If the net of perception is so widely cast and is constituted by certain types of behavior by which organisms adapt to conditions in their environment, and if it is its aim to find in all these different kinds of behavior a kind of common structure as the essence of perception thus conceived, then it is unsurprising that perception does not depend on neural structure. One may wonder whether trying to find a communal structure to such a diverse set of behaviors is going to result in something genuinely explanatory.”
Firstly, ‘the essence of perception’ is not something that exists; it sounds representational/computational to me. It is slightly unclear what exactly is meant by ‘a communal structure’. I took it to mean that structure does things, but EP wouldn’t say that a structure does things. In fact, the way it is written makes me think it is referring to a fallacy not in EP but in representational/computational psychology, due to the assumption of content. Structure, in EP, is of course implicated, but it is not the driver, it is not the controller -because there isn’t one.

p. 3 “Arguably, these are the phenomena for which REC invokes neural structure to explain them .”
This may be our current dividing point. It’s not that neural structure is not involved -I of course think the brain is active- but I do not want to assign function to the brain if I don’t need to. So far, I haven’t come across any behavior, concept, situation, or event for which I need to invoke content/representations/computation. Maybe I will, and then that would be OK in certain forms, but I am going to try to explain things without the brain first, and if I can’t, well, then it may just be that ‘the brain does it’.

p. 4 “There is, at this point no reason to assume that the meaning of objects is in some way perceived by the organism . But EP makes the further assumption that the meaning is actually perceived through the perception of affordances. The way affordances are perceived is through the invariants in the optical array. The latter (as well as the ambient light at a convergence point) is said to contain (as in “content”) information about objects and their lay-out in the environment.”
Meaning is perceived in EP along with the relation between environment and organism; it is inherent in the term affordances. Affordances (and the Fultot quote) insist that ‘behavioral implications’ are what is important to the organism (for whatever reason, based on whatever past, current, and future the organism is currently acting on). Content is never assumed in EP; there is no place for it. If some piece of writing seems to invite this idea, rest assured it is a misreading (or that it is really hard, with the language at our disposal, to say it in a different way that sounds better). Again, information is structure in an energy array, like the optic array in vision, which we can detect if we have perceptual systems sensitive to light (although it is deeper than this). Starting with Gibson (1966) is a good idea, then reading his 1979 book and noticing the changes. But there is also a host of theorizing from then up until now, as a previous comment on a quote says (viz. specificity).

p. 4 “The use of the word “about” seems to indicate that information is a semantic notion and thus that information has semantic content.”
This made me think that perhaps the above, and this, hints that this is a problem in philosophy -I’m a psychologist and I will throw around the words ‘about’ and ‘for’ interchangeably; in philosophy it seems to be a thing. Information does not have semantic content (however, see Michaels & Carello, 1981, on information ‘about’ vs. ‘for’). Content is not a concept in EP.

p. 4 “That Fultot et al. take this semantic road is far from clear, however, we suspect they do (cf. their complaint in § 51 that REC “doesn’t even tolerate content”).”
Fultot et al., just like me, aren’t representative of the entire field, although I of course understand that their article is the focus of Zahidi and Eemeren’s OPC. This statement, if true, doesn’t generalize to EP in general.

p. 4 “One way to do this is to interpret EP information as co-variance of worldly lay-out in relation to a convergence point and optical structure at that certain point .”
In comparison to the above quote, this one is closer to what you’d find in Gibson and later authors.

p. 4 “In other words, while the meaning of the objects is something objective (which we can, from a third-person perspective, describe), that meaning is not given in perception to the organism, it is something the organism enacts.”
Nothing is objective; objective and subjective don’t really exist. There is a commonly used Gibson quote saying that affordances cut across the subjective-objective divide: they are both, or neither (1979). I take EP to include and couch behavior in culture, customs, norms, societies, groups, families, histories, futures… the list can be made long. We live in nested spatio-temporal scales. Of course, you can use words as if such a perspective exists, but in the reality of things it is just a “pole of attention” (1979), not some actual, real perspective that exists in the world. I think EP can give REC something here, if my assumptions about REC aren’t too misguided.

p. 5 “For example, in order to avoid falling back on some “impoverished stimulus”-type of psychology, some ecological psychologists like Michael Turvey have swayed in the opposite direction, postulating ultra-information-richness of the media (light, air, …) around us. (cf. A. Chemero 2009: 106-107).”
“Ultra” richness is misleading; it just has a certain richness. EP does not say we perceive light as such when we go about our daily business, but imagine the number of photons in a certain cubic area -yes, it is super ultra amazingly rich! The point is that we take it so for granted that we don’t realize this is the norm, and an impoverished stimulus is not. Light is incredibly dense; the pure physics of it checks out…

p. 5 “Here, the structure of the optical array determines completely the (behavioral) meaning of the object: there is no ambiguity possible . However, Rob Withagen and Anthony Chemero (2009) have challenged this view on evolutionary grounds.”
This is not exactly true; see Stoffregen and Bardy (2001) on their concept of the global array, and also the responses to their article (contained in the same pdf that the link directs to). The global array is a contested concept, along with specificity, but it should give some insight into where the debate is. Also, ambiguity is possible -but not in the traditional sense that would open the door to arguments about an impoverished stimulus- and not only for evolutionary reasons, but for reasons you’ll find in Eleanor Gibson’s absolutely stunning theory of development within Ecological Psychology (e.g. Gibson & Pick, 2003). It really is a game-changer.

p. 5 “It merely allows for information available through the optic array to not fully specify affordances . That doesn’t imply a need for it to be supplemented through some type of mental construction. The organism can find out the affordances of the object by interacting with it . When the available (optic) invariant co-varies only moderately with the affordances, so to say, the history of interaction, c.q. learning history, explains the organism’s improved adjustment to its situation .”
Not fully specifying affordances gets technically tricky. Other authors, including myself, are skeptical about this, because it depends on where you think the consequence might “pop up” if we allow a 1-1-1 specificity (see Turvey et al., 1990, above). As an example, finding out by interacting is a way of exploring the world for invariants -sometimes invariants other than the ones we first paid attention to, which happened to be nonspecifying. See Stoffregen and Bardy (2001) for this; link further up. Finally, see Eleanor Gibson’s developmental theory -it does an excellent job of explaining learning history through differentiation.

p. 5 “With respect to the use of semantic notions, we have argued that these can be banned from EP without loss of explanatory power.”
This is good news because we don’t have anything called “semantic information” or “semantic notion” in our theory =D.

Comments on ENSO Seminar “Radical Embodiment and Real Cognition”

Over at the 4e Cognition Group, Anthony Chemero has given a talk (YouTube link) about a couple of interesting new directions that he and his students are working on for their dissertations and a paper. The main impetus is to explain “higher order cognition” from a rECS-able perspective.

The first turn is through Gui Sanches de Oliveira’s Artifactualist approach to models, essentially a thorough and solid argument that scientific models are foremost tools, not accurate representations of the world. If a model works, we use it to predict, explain, plan, experiment, etc. It reminds me of the futile path that scientists are often found on: focusing on finding The Truth, or on finding objectivity. The world seems to me to contain neither, but even if it does, it doesn’t matter -at least not nearly as much as whether the proposed model can be used in an applied setting. It reminds me of Nancy Cartwright’s arguments about truth and explanation, how far apart those two concepts are -and how opting for truth takes us further away from a functioning tool. This is a really important step. Artifactualism rightfully criticizes the assumption that thoughts are for representing the world accurately, and replaces it with the idea that cognition is for toolmaking. “Explicit, occurrent thoughts are tools, instruments, or artifacts that some agents create and use. Of course, models can meet formal definitions of representations, but that is not what they are for…”.

The second turn is through Vicente Raja Galian’s attempt at defining brain activity through resonance and oscillators. In his case, TALONs are resonant networks of neurons that resonate to certain ecological information and not other information, that can continue to oscillate in the absence of the initial driver, and that can be set in oscillatory motion again at a later point in time (again without the initial driver, building on Ed Large’s work). The brain here is driven by everything else, not the other way around. Oscillators, and especially non-linear oscillators, can act as filters and produce patterns not present in the original driver.
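That last claim -that a non-linear oscillator can produce patterns not present in its driver- is easy to verify numerically. The sketch below is my own toy illustration (a Duffing-type oscillator with parameters chosen for demonstration; it is not Raja’s or Large’s actual model): driven by a pure sine wave, the oscillator’s response contains a third harmonic that the driver simply does not have.

```python
import numpy as np

def duffing_response(a=1.0, b=1.0, d=0.3, F=0.5, w=1.0,
                     steps_per_period=1000, transient_periods=50,
                     analysis_periods=32):
    """Integrate x'' + d*x' + a*x + b*x**3 = F*sin(w*t) with semi-implicit Euler."""
    period = 2 * np.pi / w
    dt = period / steps_per_period
    n_total = (transient_periods + analysis_periods) * steps_per_period
    x, v = 0.0, 0.0
    xs = np.empty(n_total)
    for i in range(n_total):
        acc = F * np.sin(w * i * dt) - d * v - a * x - b * x**3
        v += acc * dt          # semi-implicit Euler: velocity first, then position
        x += v * dt
        xs[i] = x
    # keep an integer number of periods so harmonics fall exactly on FFT bins
    cut = transient_periods * steps_per_period
    steady = xs[cut:]
    drive = F * np.sin(w * dt * np.arange(n_total))[cut:]
    return steady, drive, analysis_periods

def harmonic_amplitudes(signal, n_periods):
    """Amplitudes at the fundamental and the third harmonic of the drive."""
    spec = np.abs(np.fft.rfft(signal))
    # fundamental sits at bin n_periods, third harmonic at 3*n_periods
    return spec[n_periods], spec[3 * n_periods]

steady, drive, n_per = duffing_response()
resp_f1, resp_f3 = harmonic_amplitudes(steady, n_per)
drv_f1, drv_f3 = harmonic_amplitudes(drive, n_per)

print(f"driver   3rd/1st harmonic ratio: {drv_f3 / drv_f1:.2e}")
print(f"response 3rd/1st harmonic ratio: {resp_f3 / resp_f1:.2e}")
```

The cubic term is what does the filtering/creating: remove it (b=0) and the response spectrum collapses back onto the driver’s single line.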

Then we take a turn into what Chemero refers to as slave/master systems, and while those words seem very culturally loaded, they make the point that slave systems wander (drift) in the absence of a master system. E.g., circadian rhythms stay in tune when we are regularly exposed to sunlight, but when deprived of it, our rhythms start to drift. A connected idea: when we do try to use TALONs to think about things, or about the past, we just don’t seem to be very good at it, because that is not what they (and the brain as a whole) are for. Marek McGann adds: “‘Memories’ are constructed on the fly, and confabulation is rife, because it is not retrieval of things, but it is temporary toolmaking”.
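The drift-versus-entrainment point has a standard minimal model: a phase oscillator coupled to a periodic driver (an Adler-type equation). This is my own sketch with illustrative parameters, not anything from the talk -with the driver present the slave phase-locks; remove the driver and the detuning makes the phase wander off, just like circadian rhythms without sunlight.

```python
import numpy as np

def phase_difference(coupling, detuning=0.1, dt=0.01, t_end=200.0):
    """Evolve the phase difference phi between a 'slave' oscillator and a
    periodic driver: dphi/dt = detuning - coupling * sin(phi)."""
    phi = 0.0
    for _ in range(int(t_end / dt)):
        phi += (detuning - coupling * np.sin(phi)) * dt
    return phi

locked = phase_difference(coupling=0.5)    # driver present: settles near arcsin(0.2)
drifting = phase_difference(coupling=0.0)  # driver removed: phase drifts linearly

print(f"with driver:    final phase difference = {locked:.3f} rad")
print(f"without driver: final phase difference = {drifting:.3f} rad")
```

Locking happens whenever the coupling exceeds the detuning; below that threshold the slave drifts no matter how long you wait, which is the qualitative signature of a slave system without its master.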

Ultimately, these initial steps toward making the idea of ‘resonance’ more concrete seem very promising. An interesting aspect of resonance is that it exists on all scales -it doesn’t matter if we look at the behavioral or the neural scale- which makes it analyzable by methods like fractal analysis. That makes it an empirically testable theory. Also, resonant networks no longer have to contain content. Anthony Chemero suggests toolmaking, which will have to be defined further for me to understand whether representational content hasn’t just been replaced by Gibsonian tool content. And don’t get me wrong, that would be a wonderful first step in better characterizing what humans do, but I am also currently on a quest for a non-content description of neural activity -and resonance seems to fit that description.
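For readers who haven’t met the fractal methods mentioned above: the workhorse in this literature is detrended fluctuation analysis (DFA), which estimates a scaling exponent alpha from how fluctuations grow with window size (alpha ≈ 0.5 for uncorrelated white noise, ≈ 1.0 for the 1/f-type structure often reported in behavioral series). Below is a minimal DFA of my own writing, for illustration only -real analyses use more careful implementations.

```python
import numpy as np

def dfa_exponent(signal, scales=(8, 16, 32, 64, 128, 256)):
    """Minimal first-order detrended fluctuation analysis.

    Returns the scaling exponent alpha: ~0.5 for white noise,
    ~1.0 for pink (1/f) noise."""
    profile = np.cumsum(signal - np.mean(signal))  # integrate the series
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        rms = []
        for k in range(n_win):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # alpha is the slope of log F(s) versus log s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
alpha_white = dfa_exponent(rng.standard_normal(8192))
print(f"DFA alpha for white noise: {alpha_white:.2f}")
```

The appeal for the resonance story is exactly that the same exponent can be computed at any scale of measurement -reaction times, postural sway, neural recordings- making the multi-scale claim testable rather than metaphorical.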

A non-content brain. 2/2

There is some misreading of Ecological Psychology due to the way direct perception and information detection are spoken about. Direct perception seems to carry with it a connotation of specificity (a guarantee): that the world is the specific way it is seen, that we cannot be wrong, and that we have all of it available at once. There is an explicit rejection of the poverty of the stimulus. But pause here a second, because this is what information detection is about.

First, the production of photons exists regardless of my existence. Photons will bounce around on surfaces, be partly absorbed or reflected depending on surface makeup, and create structure (if we were to put an observer somewhere in this space). In this instance, it would be most appropriate to simply refer to this as the optic array, or structured light. It is not that this structure carries content; it simply is structured (and continuously restructured) in a manner specific to, and guaranteed by, the surfaces around it and the medium(s) through which it came to any specific point.

Second, over a very long time, organisms have evolved to detect such structures. I cannot remember the organism, I think it’s a deep-water fish, but a precursor to our eyes was sensitive only to ‘light’ or nothing. Since then, eyes seem to have caught on as an important way (in an evolutionary sense) to keep developing, which in our case meant becoming more and more sensitive to the structure that light carries with it. There is no reason to believe that, in any given slice of time, we can perceive at once all of the structures that light carries with it. ‘We see what we see’, and if we want to see more, we have to explore whatever we are trying to see by moving, to literally detect structure that may be occluded to us from one vantage point (like “illusions”), or we simply have not looked at something long enough to have learned to discriminate between smaller differences of structure in the optic array. I can, in the end, come to the same or a different conclusion about what I saw, depending on the history with which I came into the situation, but also depending on which parts of the array I was detecting, or trying to detect, at the time.

Third, we see and hear and detect pressure and other things at the location at which that information is available (though, as you might expect, we do not necessarily detect it -we merely have the possibility to). The firing of cells in the eye that propagates to the brain never held content; it was always a ‘language-neutral’, ‘symbol-neutral’, non-content “signal”.

However. Vicente Raja Galian pointed out that so far I have yet to assign any function to the brain, and it seems appropriate that we should, since it is a curious structure and we have kept it evolutionarily. Keeping a biological structure does not entail function or even importance (in the strictest interpretation of the word), but it seems to me a very valid point. So far, I am having trouble arguing against the idea that the brain is for ‘where’ (on/in the body) and ‘in what order’. Something is detected at the foot as intense pressure; I look down and see a dog biting it; this (in a sense) creates a loop where whatever signals propagate back from the retina and the pressure signals from the foot are happening simultaneously. There is simultaneously increased firing from two directions into the brain. Merely by being simultaneous in a geographically close space, the two become intertwined. Experience does not happen in the brain, it happens in the relationship between body and environment, but one thing happening before, after, or simultaneously with another may come to be through having a space within the body where the ‘where and when’ co-exist -because a lot of neural propagation going on in the body, in one way or another, travels to one collected structure, the brain. No content is needed; all we need to “know” is where and when, which is simply (although plastically) a matter of bodily geography.

I also have a sneaking suspicion that the brain is for drawn-ness and repulsion, but that currently requires more thought and explication before I feel comfortable laying it out publicly.

No ‘content’ in EcoPsych and Direct Perception

TL/DR: While a valid concern, I don’t think EcoPsych relies on ‘environmental’ content.

I share the worry with Dr. Edward Baggs that Enactivist criticism of Ecological Psychology’s Direct Perception hints at a possible dualism -even if I think it may mostly arise from reading EcoPsych unfavourably, or indeed from expressing EcoPsych unfavourably.

The idea is this. Representationalists assume content is in the brain (created and/or passed on from the senses as input). Perception is simply input for the brain-processor, which sends output signals to the passive body -hence Indirect Perception: what our eyes see is ultimately not what we experience; we experience what the brain creates (subject to criticism of being idealist and/or dualist, but that’s a different blog post). EcoPsych instead says: hang on, the world is its own best model; there is absolutely no need to conceptualize the perceptual systems as mere, passive input devices, and there is no need to conceptualize the brain as a processor -we need no processing (in the traditional sense, anyway). Rather, perception is active and intelligent on its own; what you are currently experiencing is unmediated by any interpretational process; what you experience is what your perceptual system detects. Perception requires movement, so perception and action are in this sense inseparable (your legs, e.g., are also a part of seeing -cue embodied theories). Importantly, though, perception is action and action is perception. It’s a continuous and simultaneous loop…

Enactivism asks, however, whether this means that EcoPsych simply places content on the outside, as opposed to the representationalists’ inside. If so, we are not really losing the dualistic consequences that believing in content brings with it.

I think one problem may arise from reading specificity (roughly: guaranteed perception) into Direct Perception. The straightforward answer here is that this is a bit too literal a take on Direct Perception, although it comes from considerations such as: if what we see is the world, then why does the world look different to different people -we have access to the same information? A simple answer from EcoPsych would be that we all bring different capabilities to any situation: we inhabit different bodies and we can have different goals, all of which affect what we attend to and why.

Another issue is that some EcoPsychs talk about properties and effectivities as if you can divide up organism from environment, landing us in traditional dualisms again. I do not subscribe to this way of talking about the organism or the environment specifically, because I think it too easily invites dualist interpretations -but those who do would still say the affordance is primary, and that being able to talk about its corresponding parts doesn’t mean they see those parts as non-constitutive. Which sounds fine to me, but I also understand how people can misread this.

As for answering the central question -do EcoPsychs conceptualize content to be on the outside?- I think a resounding ‘no’ is in order. Organisms detect structure in ambient arrays (e.g. the optic array) and they perceive/act on affordances (which are necessarily relational aspects of the current, and continuously evolving, organism-environment system). The information itself (the structure in an ambient array) is not content; in the case of vision it is (from a specific point of observation) all of the converging photons from all angles (as a whole, continuously flowing) on that point, photons that have bounced off surfaces where light has been partly absorbed, reflected, etc. (which is part of how light becomes structured), and that then reach the eyes. The eyes themselves have evolved to detect differences in structure to the degree that was necessary for survival, and we bring an entire cultural/societal/historical as well as developmental baggage with us, as we have started naming structures that we are taught from a young age to reproduce. But there is no content, there is no standing-in-for the things in the environment: a wooden table is made up of wooden particles which are made up of atoms; when light strikes the top of a dark wood, photons are absorbed by the material to a larger degree than with a light wood -but then, of course, this becomes circular, because we have already defined “dark” and “light” through the property of absorption. (It should be added here that “illusions” where dark and light can look the same, or where a blue dress can look yellow, are only a valid counter-argument if you rely on traditional optics, where you discount contextual factors like general lighting conditions, etcetera.)

First conference talk and proceedings publication!

Going to CogSci17 in London this summer for my first research presentation; the paper is to be published in the proceedings (and can be found here). Here’s the abstract:

The actualization of affordances can often be accomplished in numerous, equifinal ways. For instance, an individual could discard an item in a rubbish bin by walking over and dropping it, or by throwing it from a distance. The aim of the current study was to investigate the behavioral dynamics associated with such metastability using a ball-to-bin transportation task. Using time-interval between sequential ball-presentation as a control parameter, participants transported balls from a pickup location to a drop-off bin 9 m away. A high degree of variability in task-actualization was expected and found, and the Cusp Catastrophe model was used to understand how this behavioral variability emerged as a function of hard (time interval) and soft (e.g. motivation) task dynamic constraints. Simulations demonstrated that this two-parameter state manifold could capture the wide range of participant behaviors, and explain how these behaviors naturally emerge in an under-constrained task context.

Keywords: affordances, dynamic systems, cusp catastrophe, dynamic modeling, simulations, constraints
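
For readers unfamiliar with the cusp model: the canonical cusp potential is V(x) = x^4/4 + a*x^2/2 + b*x, and its equilibria are the real roots of x^3 + a*x + b = 0. The sketch below (illustrative only, not the simulation code from the paper) shows how one control parameter switches the system between a bistable regime (two coexisting behavioral modes, e.g. walk over vs. throw) and a monostable one.

```python
def cusp_equilibria(a, b):
    """Real roots of dV/dx = x**3 + a*x + b = 0, the equilibria of the
    canonical cusp potential V(x) = x**4/4 + a*x**2/2 + b*x, found by
    scanning for sign changes and bisecting. Three roots means two
    stable minima (bistability); one root means a single mode."""
    def f(x):
        return x ** 3 + a * x + b
    roots = []
    grid = [i * 0.001 - 5.0 for i in range(10001)]  # scan [-5, 5]
    for x0, x1 in zip(grid, grid[1:]):
        if f(x0) * f(x1) < 0:  # bracketed a root; bisect it
            lo, hi = x0, x1
            for _ in range(60):
                mid = (lo + hi) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    return roots

# a < 0: two stable modes coexist; a > 0: only one mode remains.
bistable = cusp_equilibria(a=-3.0, b=0.5)    # three equilibria
monostable = cusp_equilibria(a=3.0, b=0.5)   # one equilibrium
```

The mapping of a and b onto the paper’s hard and soft constraints is schematic here; the parameter values are arbitrary.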

An ecological note on the Müller-Lyer illusion

The Müller-Lyer illusion

Is one of the lines longer, shorter, or the same length as the other one?

Traditional psychology holds that perception is flawed: we see illusions because the underlying perceptual aspects of our experience need to be embellished, corrected, interpreted, etc., by our brain, and while doing this we make mistakes. The illusion is often posited as one of the major issues for ecological psychology to explain, because it seems to invite cognition. After all, if all the information is detectable out in the environment, then why would we perceive the top line as shorter and the bottom one as longer?

Perceptual illusions in real life (not on paper or a screen) can quite easily be dealt with. We could argue that we have not sampled enough of the available information in the ambient optic array, so we do just that, by locomotion and change of viewing angle. When we do, very many illusions are simply dispelled (Kennedy & Portal, 1990; Michaels & Carello, 1981). In fact, Michaels and Carello explain this very well: if we are in a desert looking for water and see a mirage (which may or may not be water), it is right of us to investigate whether it is -not doing so would be the wrong action to take. In the same way, they exemplify, we could be fooled by a hologram until we reach out to try and touch it.
Besides this point, the illusions that cannot be dispelled by simply exploring are always images or videos on a screen or paper, making them in part unexplorable. As for the Müller-Lyer illusion, it consists of simple but abstract forms. All the information given is what is there, but even looking at it now, knowing the lines are the same length, I still perceive them as different lengths. A solution that has been suggested elsewhere (lost the reference, sorry) is that the top figure shows an enclosing space: if we were to reach our hand in, we would be more constrained than in the bottom figure. The information is specifying a smaller versus a larger space.
Another assumption of traditional psychology is that (visual) perception works with the simplest aspects -lines, points- and that a constructive cognitive process then puts things together into more and more complex constructions of what we perceive (into the full 3D experience of the world). For a traditionalist, then, there is no doubt it is an illusion: the lines are the same length, we perceive them to be different -our brain is playing tricks on us. From an ecological standpoint, it is impossible not to see the four end lines; you can’t not perceive them when perceiving the ends of the lines. Therefore, they matter.
It is fully possible that we actually don’t perceive the “simplest” form and construct upward from it; in fact, this is precisely what ecological psychology says we don’t do. An example comes from the planimeter (Runeson): a simple mechanical device that can measure the area of irregular patches without knowing length or width and without doing any computations. (For its full explanation, see here.) It thus measures a “higher-order” quantity, square cm (or inches), without the “simpler” concepts. It is fully possible to conceive of the idea that we simply cannot ignore the four end lines -they are not individual lines, they are a form that matters, as a whole, to our perception of it. For this reason, it may just be that the question “Is one of the lines shorter, longer, or the same length as the other?” is a silly question to ask. Because our perceptual system does not work by simple structure and construction, the question forces us to do something that we simply don’t do, and we therefore do it poorly.
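
The planimeter’s trick -getting area directly from tracing a boundary, via a line integral- has a simple discrete analogue in the shoelace formula. The sketch below (my own illustration, not Runeson’s) computes the area of an irregular patch from a boundary traversal alone, never measuring length or width and never decomposing the shape into simpler parts:

```python
def traced_area(boundary):
    """Area enclosed by a closed boundary, traced point by point, via
    the shoelace formula (a discrete cousin of the line integral a
    planimeter performs mechanically). No width or length is ever
    measured, and no decomposition into simpler shapes is needed."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0

# An irregular five-sided patch: the area falls out of tracing alone.
patch = [(0, 0), (4, 0), (5, 2), (2, 4), (-1, 1)]
area = traced_area(patch)  # 15.0
```

The “higher-order” quantity (area) is registered directly, which is the analogy to perception not building up from “simple” elements.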
(Which unfortunately is not too uncommon in the traditional psychological literature.)

Is it an impossibility to “see” the Big Bang?

Even if we theorise there to be a universe contracting and expanding, or a Big Bang, it might be impossible to actually “see” it. The reason would be found in Ecological Psychology.

Only when ambient light has structure does it specify the environment. There have to be differences in different directions for it to contain any information. Perfect symmetry, maximum entropy, is theorised to be a whole bunch of homogeneity in layout, distance between particles, etc. If we have a medium that is perfectly evenly distributed, we do not have difference. So how could we ever “see” it?
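
A toy way to put this (my own illustration, with made-up luminance numbers): treat the ambient array as luminance samples in different directions and measure their variance. A perfectly homogeneous array has zero variance -no differences in any direction, hence nothing that could specify surfaces.

```python
def directional_contrast(samples):
    """Variance of luminance sampled across directions: zero variance
    means no structure, hence nothing that could specify surfaces."""
    mu = sum(samples) / len(samples)
    return sum((s - mu) ** 2 for s in samples) / len(samples)

# A perfectly homogeneous array carries no differences at all...
homogeneous = directional_contrast([1.0] * 360)
# ...while an array structured by reflecting surfaces does.
structured = directional_contrast(
    [1.0 if d < 180 else 0.2 for d in range(360)])
```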

Created and programmed depictions: A useful distinguishing aspect?

Skip the text between brackets (the first two paragraphs) to avoid (unnecessary) thoughts leading up to the depiction-theoretical argument.

[Computers are fallible. They also rely, necessarily, on electricity. Is it any wonder, then, that we can have a preference for books because they are perceived as tangible/physical, as opposed to a Kindle e-book, which is perceived as intangible? However, it should also be said that the capitalist consumer model is better suited to e-items, since buying and discarding them wouldn’t take as large a toll on the world’s resources (hence the ongoing work towards paper-free workplaces).

For me personally there is also a feeling of ownership involved. A book is mine, and I can keep it and pick it up when I want to. An e-book exists in something, and only by virtue of both the device working and continuous access to electricity (over a longer time perspective). It doesn’t feel like I own it, then, because my access to it relies on things outside my perceived control.]

This is another distinguishing aspect between screen-presented alternative objects/environments and, for example, painting a picture. I had written in my notes for the previous post “created vs. programmed” and couldn’t fit it in, because it is another stratification of depictions and virtuals (virtual objects, environments and agents). Here goes. Both are created, essentially, but programmed depictions necessarily require programming and created ones do not… Ugh, awesome start.

Created depictions are objects that are part of the environment but are not the actual objects themselves; they lend themselves to perception of information (about possible affordances of the object they depict, or of similar objects)*. This would include, for example, a painting of an apple. Created depictions rely necessarily on the existence of the environment, but not vice versa.

Programmed depictions are objects that are part of the virtual environment but do not lend themselves to virtual affordances; they lend themselves to perception of information (about possible affordances of the object they depict, or of similar objects)*. This would include, for example, a jpeg image of a painting of an apple. Programmed depictions rely necessarily on the existence of the environment, and of a virtual environment, but not vice versa.

I believe that a depicted object is experienced differently depending on whether it is created or programmed. This may have to do with the more tangible feel of a created depiction and the more intangible feel of a programmed depiction (but this is my personal experience and perception of created vs. programmed depictions, and so may hold very little value if put through a scientific method). I also get the feeling that this may have to do with “actual” and “perceived” reality, but that is a whole other dimension and I need to think it through more thoroughly before feeling confident it holds any value.


*Note, by the way, that the depiction merely needs to present a similar enough optic array for it to be considered a depiction of any specific object. Perception is also individual, depending on experience, but this is covered already (although it may need further theorising).