Back on the anti-representation train

I have the wonderful opportunity at the moment to teach methods of Ecological Psychology and Dynamic Systems Theory: their philosophical basis, their theoretical concepts, how they require certain analyses, and what kinds of explanations these perspectives give. I am lucky to be in a department that is (as yet) wholly representation-dominant, yet is curious, interested, and promotes theoretical plurality. I just personally keep running into the wall of not being able to believe in representations as an ontological foundation for psychology.

Perhaps someone can convince me otherwise. Four different ways of defining representations are as follows.

First, the ‘literal-structural’ version, which amounts to old-school phrenology: it is literally neurons/physical structures in the brain that are/store representations. I do not meet many researchers who hold this belief. Mostly, I find it in folk psychology, because I think it is an easily envisionable version that has surface validity through movies and TV series that talk about brain function in this way (the British “knowledge” panel program QI is an example of this).

Second, you have a ‘literal-activational’ version, which amounts to what I would call modern phrenology: it is the electrical activity in the brain that constitutes representations. This and the literal-structural version are sometimes unhelpfully combined. fMRI, PET, and similar techniques try to get at this by measuring blood flow to different parts of the brain, although this is an indirect technique, since the assumptions go: 1) thoughts, feelings, reactions, cognitions, etc., are produced in the brain; 2) electrical activity in the brain is presumed to indicate usage of a particular brain area; 3) when a brain area is used, increased blood flow to that area is seen; 4) therefore blood flow can indicate brain-part usage, and thereby thoughts, feelings, reactions, cognitions, etc.

Third, we leave the literal kinds and hop the fence to ‘symbolic-mathematical’ accounts: representations aren’t literal parts of the brain, but something more abstract, instantiated by mathematics. This version is often combined with the methods of the literal-activational version, and I’ve listened to several prominent cognition scholars who have expressed beliefs about how mathematical equations are running in/produced by the brain… somehow. And several of them, when explaining how the math got there to begin with, used various kinds of nativism as explanations. I’ve also found network explanations, or ‘patterns of activation across groups of neurons’ and similar, both here and in the fourth grouping.

Fourth, the ‘symbolic-abstract’ version, which often amounts to more hand-waving than the third: representations are not mathematics, but can be groups of patterned activity, sometimes explained as dynamic clouds of activation of different kinds.

I just don’t find myself believing in any of them. With the literal-structural version, there have actually been attempts at finding them. They were called engrams at the time, and they simply could not be found. There is little wiggle room in this description: either you can show empirically that a literal neuron/structure changes when you memorize something new or change a memory, or you are more or less forced to accept that this story isn’t the best one. Which is why I think so few beyond non-experts subscribe to it (e.g., folk psychology and AI researchers, the latter of whom are almost exclusively dependent on this version).

The literal-activational version is far more popular, however. In part, a lot of support comes from the medical sciences, where, if you stimulate a person’s brain with electricity while they are talking or playing an instrument, their activities are often disrupted. Or, if you have particular cognitive issues, a neurosurgeon can often quite easily pick out what a tumor is pushing up against (if you have a tumor, which isn’t always the case). This evidence is a muddle to me, since you can also completely remove brain parts, even half the brain(!), and still maintain functioning, which seems to me to break down the validity of this narrative. It is usually explained away with brain plasticity, but if brain plasticity is true (in reference to literal-activational representations), then the reliability across time when looking at the same brain, or when looking at different brains, breaks down (Anderson’s book After Phrenology is an interesting read). In fact, if either of the literal accounts is true, we should find specific structures and/or patterns of activation that are stable across time for a single person, or across people. And if this were substantiated by research and industry, where the f* are our mind-reading helmets? Often when I bring this point up in discussions, neurocognitivists retreat back to neurons and how we don’t have the technical capability to measure every individual neuron in the entire brain simultaneously to answer that question. There are a couple of practices that seem to support the view, however: sending a whole bunch of electricity down the spine seems to help people with motor-function constraints regain movement, or make it less disrupted. Which is great. But it is a far cry from the specificity needed to support the theoretical position.
There are also toy versions, like moving cubes on a screen using EEG. Beyond taking a surprising amount of time to train, once the person leaves the room and comes back the next day, the process restarts; it cannot simply be continued. I understand this position though, I do, and I can deal with living alongside it (although I am extremely frustrated by grant decisions heavily favoring neuroscience). However, the evidence surrounding it promises a specificity, of identifying recurring activational patterns, that very clearly neither the empirical record nor practical implementation lives up to.

When we get to the symbolic versions of brain activity, particularly the mathematical accounts, the story about activation standing in for representations seems to me to either give up the ontological foundation of representations (hand-wavy “it’s maths” explanations) or simply add another layer of assumptions to the literal-activational account. Now, not only do you have to solve the above problems, but you also have to explain why a conglomerate of biological-chemical-electrical activity would instantiate an accounting tool that humans created to begin with. A good tool, mind you, but human-created nonetheless. From this point, of course, you get all kinds of half-to-non-scientific abstractions about how everything in the world is made up of mathematics, a tall-tale version of ruthless reductionism, and you of course lose the ontology of the phenomena you are trying to explain. You’d be surprised who I have heard literally say, in very public circumstances, that babies run physics equations in their brains that they are born with. And they are given so much research funding. My personal grievances aside, we have the symbolic-abstract version, which I find the most convincing, perhaps surprisingly. In some ways it can be seen as a less specific version of the literal-activational/structural versions of representations. Often, the explanation begins from the point that larger structures in the brain are not themselves specific; rather, there may be particular patterns of activation across structures that are recruited in a kind of online fashion. The patterns can thus “move around” the brain in part-deterministic, part-stochastic ways, meaning that if you think about a cat today, you enter that time period from a different starting point than if you had been asked to think about it the next day. So a similar pattern would repeat, but not necessarily the same neurons and not necessarily all the same structures.
This account would additionally fit the empirical findings detailed in After Phrenology. It would also explain how fMRI and MRI studies find general trends (averages over spatial structures, and averages over time; how much time of course depends on the imaging technique) across people. However. If this account is true, then I will most likely never get my mind-reading helmet, because it would be near-impossible to know that a particular pattern of activation stands for only one object, or what that pattern would look like the next day. Of course, an objection could be that it is not objects that are represented, but everything we are continuously exposed to at the same time (plus memories, plus …). But then it would be nigh impossible to sort out which components of input lead to which activational pattern, particularly if that pattern changes with each instance. I do have some sympathy for this position though, as it seems to me to fit more of the empirical data, but it gives a version of representations that is practically unusable except as a theoretical description.

To sum up. The more literal narratives of representations give promises of specificity (particularly the medically inspired accounts), but this just hasn’t materialized on the practical-functional end, and there is plenty of contradictory empirical evidence. The symbolic-mathematical perspective seems to explain some of the contradictions by shifting the ontological basis to mathematics, but this step seems fantastical, as it requires another set of beliefs to accept an ontological reality of maths. The world isn’t maths. The world is the world. Although it can be described in detail by maths. Lastly, we have the symbolic-abstract-network version, which seems to me to cover most of the empirical literature and dispel the contradictions of the literal accounts. However, this perspective does not seem to me to live up to the definition of a representation to begin with, losing the ‘specificness’ of representations that makes them attractive to AI researchers at the outset. In a recent interview, a prominent Ecological Psychology researcher was met with “I don’t see how anything could be anything else than computation”, and I have the exact opposite view: I have a very hard time seeing how anything could really be computational (in a way that says something of value about psychological phenomena beyond simple, mechanical, surface-level generalisations). So what is the brain up to? Biology does not preserve components that are not used.

Well, finally, we have the Raja-ian version of brain activity, but here we have left the realm of representations. The brain and central nervous system are seen more as a tuning fork, a resonance device, than as something harboring ‘the real world’ in one way or another. As a non-content explanation of the brain, I have high hopes for this perspective, and perhaps it is compatible with some versions of the symbolic-abstract-network version of representations (which, again, do not live up to the demands of what a representation would be in the first place). But, just as the more literal proponents of representations wait for technological solutions to their theoretical problems, I’ll have to wait out the empirical evidence and theory-building required for a fuller account of the resonance narrative.

Exploding boxes and affordances

If I have understood this correctly, there is an issue with two boxes being identical but only one having a specific affordance, in this case being “pick-up-able” or “touch-able”. Both boxes, to us, are perceived to have the affordance, but this is not the case. Therefore, we can’t rely on perception to decide which affordances are available to us.

I am not sure this makes sense. The central point of the mass of words beneath is: does it not assume us to be constant naive explorers of the world? And is this a fair assumption? (Now you don’t have to read all the details. You’re welcome.)

It strikes me, however, that the consequence of trying to explore a possible affordance gives information on which affordances are available to us. Because we are explorers/seekers, we navigate our environment and find out what is, and what isn’t. We rely on perception to do so. Is ice walk-on-able? Sometimes. How do we go about finding out? We poke the ice in front of us with a stick, indirectly finding out if the ice is walk-on-able. So we use secondary mechanisms to find out if something has an affordance when it is not directly perceptible to us, but this relies on our having seen evidence that direct perception of our environment cannot be relied on here. The assumption in the issue may be that we are constant naive explorers, which we aren’t.

Is it not true that both boxes still persist in holding the affordance, only that in one case we end up exploded and in the other we don’t? A big enough box provides the affordance of sitting, regardless of what the consequence of sitting on it is?

It seems to me that this is a classic case of philosophy-of-mind issues, where we can’t rely on direct perception to perceive what is “actually out there”. For what it’s worth, if we are to be assumed to be naive explorers, I posit that we will always sit on the exploding chair, touch the exploding box, walk on what is perceived as a solid and rigid surface, and so on, because the visual properties, surface and rigidity and so on, lead us to perceive the existence of such an affordance.

So it depends on reliance then. If we rely solely on direct perception (making us constant naive explorers), then objects retain their perceived affordances, regardless of which affordances actually are available to us. If two objects are identical in all perceptible ways, then we cannot rely on direct perception to know which affordances are available to us. However. We are not constant naive explorers. We see others interact with objects; we get burned on stoves that look like they are turned off. So what do we do? We quickly touch the stove, or look at the knob, or hold a hand above the stove.

As for objects, without the involvement of interaction with other things, it is my view that they retain many, but a finite number of, affordances. These affordances will be available to other objects, but not all affordances to all other objects. Which ones are will only be realised when another thing, with its own affordances, interacts with it, perceives it. With this said, affordances are not the actual relationship between things; they are the fit between one object’s affordance and another object’s affordance. If they are compatible, then to the specific object, the other object has the specific affordance. They form a relationship, but importantly, both have their affordances retained in the non-presence of each other, if the affordances are available when interacting with each other.

For example, say the structure of DNA contains four characters, where A can bind to B but not to C or D, and vice versa. In A’s and B’s absence from each other, they both retain the affordance of being able to bind to one another, and they both lack the affordance of binding to C and D. In A’s and B’s presence with each other, they can bind, and thereby realise both of their affordances. In essence, they retain their separate affordances in absence because they can be realised in presence.
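The binding example above can be sketched in code: each character carries its binding potential whether or not a partner is present, and the affordance is realised only when the fit is mutual. This is a toy illustration of the “fit between affordances” idea, not a model of real DNA chemistry; the names are invented for this sketch.

```python
# Each "character" retains a set of partners it *can* bind to.
# This potential persists in the other's absence -- retention in absence.
BINDING_POTENTIAL = {
    "A": {"B"},
    "B": {"A"},
    "C": set(),
    "D": set(),
}

def can_bind(x: str, y: str) -> bool:
    """The affordance is realised only when the fit is mutual:
    x must afford binding y, and y must afford binding x."""
    return y in BINDING_POTENTIAL[x] and x in BINDING_POTENTIAL[y]

# Realisation in presence:
# can_bind("A", "B") is True; can_bind("A", "C") is False.
```

Note that `BINDING_POTENTIAL["A"]` is non-empty even when no B is around, which is the point: the affordance is retained in absence because it can be realised in presence.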

No wonder social relationships are so damn important evolutionarily; seeing others being blown up by boxes would surely rule out my going near any box even similar in visual makeup to the one that exploded!

Ugh… I always feel I lack knowledge when I finish a thought. Either way, these are thoughts associated with http://theboundsofcognition.blogspot.se/2011/01/s-does-not-visually-perceive-pick-up.html and http://psychsciencenotes.blogspot.se/2011/02/fcking-affordances-how-do-they-work.html

What in the world is the brain necessarily up to?

Some simple reflections on ‘necessariness’

How we consciously experience the world is not necessarily a reflection of what the brain is doing. While it is fully possible to assume that the brain does a bunch of things, I find it a better way of going about things to not assume that the brain does more than necessary. Is it possible that we internalize the world and represent it in our mind? Yes. Is it necessarily so? No. What, then, are the most basic abilities our brain necessarily has in order for us to function successfully in the world? In my perspective, it is necessary for our brain to perceive change in a meaningful way across all sensory modalities, to let those modalities inform each other, and to produce motor movement.

·         Change here is defined as whatever our senses can discern as different from something else.

·         Meaningful here is defined by exclusion: in experiments where we do not see change, it is often because the change would not matter for our safety or well-being. Changing words in a text when someone isn’t looking, and other change-blindness manipulations, are non-threatening and not part of the current goal of the situation, and thus non-meaningful. Even when repeating a pattern of coloured blocks, changing the colour of already-completed blocks is non-meaningful in the sense that, from a first-person perspective, a part turns non-meaningful once it has been completed (but obviously not in an objective sense, where the overarching goal is to create the same pattern of colour for all parts of the picture).

Change detection could be universal within the brain: wherever our sensory organs connect with the brain, cells could activate on change and connect to all other modalities. Detecting change is necessary, because without it we could not navigate through the environment. This all necessarily needs to be connected to motor movement of our bodies, because without it we couldn’t respond to these changes. Why then aren’t representations necessary? Because of the simple fact that we do not need to internalize the world in order to successfully navigate in it. The Portia spider and Webb’s crickets in Louise Barrett’s Beyond the Brain exemplify this. Does all of this mean that we don’t internalize the world and create “representations”? No. However, in order for us to conduct science, we need to criticize and reflect upon the assumptions we make about ourselves, even the ones that seem to make sense in regard to conscious experience, as well as the concepts standing for invisible inner processing.
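The change-detector-plus-motor-output idea can be made concrete with a minimal sketch: no stored model of the world, just the difference between successive sensory samples steering movement. This is loosely in the spirit of Webb’s robot crickets mentioned above, but every detail here (the threshold, the two-sensor steering rule) is invented for illustration, not taken from her models.

```python
def change_detector(previous: float, current: float, threshold: float = 0.1) -> float:
    """Signal only change: an unchanging world produces no output,
    and sub-threshold (non-meaningful) change is ignored."""
    delta = current - previous
    return delta if abs(delta) > threshold else 0.0

def motor_command(left_delta: float, right_delta: float) -> str:
    """Couple detected change directly to movement: turn toward the
    side whose signal grew the most. No internal model involved."""
    if left_delta == right_delta == 0.0:
        return "straight"
    return "left" if left_delta > right_delta else "right"

# Two successive samples per sensor: only the right sensor's change is
# meaningful (above threshold), so the agent turns right.
left = change_detector(0.5, 0.55)   # 0.05 change: below threshold, ignored
right = change_detector(0.5, 0.9)   # 0.4 change: above threshold, detected
command = motor_command(left, right)  # "right"
```

The point of the sketch is only that perception of change coupled to motor output suffices for this kind of steering; nothing in it stores or represents the stimulus.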
I believe it too indulgent to see the brain as an infinitely complex organ; I just do not believe it to be the pinnacle of evolution. We make far too many mistakes. I also believe that internalizing every single object that exists through our contact with it makes little sense. The amount of cognitive load this requires, in terms of representations, computation, memory, and other concepts created by the traditional cognitive literature, seems to me all too overwhelming. While it is true that our brain allows us to act in ways afforded to few other animals, we are still animals, and we are not too different from other animals either. In my mind, then, it is simply more probable that our brain evolved to sufficiently solve navigating our environment in a cost-effective way, rather than to overkill with extreme specialisation. Evolution should have selected for the simplest possible way to achieve this, shouldn’t it?

Embodied emotion?

A quick note on EC and the lack of discussion on emotion. Having become fascinated by the Embodied Cognition approach championed by, among many others, Barrett (2011), Wilson & Golonka (2012), and Noë (2009), I find that the embodied approach enables the explanation of behaviour without the use of representations. As taught through philosophy: the fewer assumptions a theory makes about the world while still being able to explain the phenomenon, the better the science we are producing. While there are personal and emotional resistances to the idea that cognition belongs more in the dynamic reciprocal relationship between environment, body, and brain than solely in the brain, I have found one area of discussion lacking. That of emotion.

[Edit 22/02/2013] Semin and Smith’s ‘Embodied grounding…’ has a few chapters on affect. Still, emotion seems hard to account for under environment-body-brain…

I have long wondered if our [subjective] experience of emotion may just be a consequence of brain activity [in a horribly general sense, as it obviously does not hold scientifically]. We happen to pay attention to some of it as it blends into the collected sensory experience [and their reciprocal relationships] we call consciousness. This [simplistic] view is not entirely coherent with representationalist ideas, and cognitive research seems to want to put the initiating processes of our inner workings down to cognition, not emotion (although, on occasion, emotion is defined within the cognition concept). Where does an embodied cognitive approach consider emotion?

I have, as of yet, not touched upon literature discussing the topic in embodied cognitive terms. I can accept that cognition is not in the brain, or perhaps rather, not solely in the brain. I can accept that perception of our environment is enough to have us experience the world such that we think it is inside us, because all we really need in a brain is an elaborate change-detector for all sensory modalities, the ability to form connections between these systems (as if they are separate or “geographically” determined biologically from the start anyway; the ferret example in Barrett comes to mind), and the ability to guide motor movement of our bodies [in relation to what we perceive in our environment and vice versa] (the cricket or spider example here) [also connecting this to modalities throughout the brain and body].

However, when it comes to emotion, accepting it as a stimulus-response mechanism follows representationalist assumptions. Although some literature would have it that arousal alone could be enough to place within the brain, with the act of determining what type of arousal it is (what am I feeling?) then being dependent on situational, environmental, and bodily factors. This is, however, slightly unsatisfactory, since it still seems to rely on representation.

I will thus, for now, await further literature in the embodied cognition perspective that deals with emotion. Unless I get to it first (I just have a master’s thesis in EC to take care of). Suggested literature is more than welcome.