A non-content brain. 2/2

There is some misreading of Ecological Psychology due to the way direct perception and information detection are spoken about. Direct perception seems to carry with it a connotation of specificity (guarantee): that the world is in the specific way it is seen, that we cannot be wrong, and that we have all of it available at once. There is an explicit rejection of the poverty of the stimulus. But pause here a second, because this is what information detection is about.

First, the production of photons exists regardless of my existence. They will bounce around on surfaces, be partly absorbed or reflected depending on surface makeup, and create structure (if we were to put an observer somewhere in this space). In this instance, it would be most appropriate to simply refer to this as the optic array, or structured light. It is not that this structure carries content; it simply is structured (and continuously restructured) in a manner specific to, and guaranteed by, the surfaces around it and the medium(s) through which it came to any specific point.

Second, for a very long time, organisms have evolved to be able to detect such structures. I cannot remember the organism, I think it’s a deep-water fish, but a precursor to our eyes was sensitive only to ‘light’ or nothing. Since then, eyes seem to have caught on as an important way (in an evolutionary sense) to keep developing, which in our case meant becoming more and more sensitive to the structure that light carries with it. There is no reason to believe that at once, in any given slice of time, we can perceive all of the structures that light carries with it. ‘We see what we see’, and if we want to see more, we have to explore whatever we are trying to see by moving, to literally detect structure that may be occluded to us from one vantage point (as with “illusions”); or, we simply have not looked at something for enough time, so we have yet to learn to discriminate between smaller differences in structure in the optic array. I can, in the end, come to the same or a different conclusion about what I saw, depending on the history with which I came into the situation, but also depending on which parts of the array I was detecting, or trying to detect, at the time.

Third, we see and hear and detect pressure and other things at the location at which that information is available (though, as you might expect, we do not necessarily detect it; we have the possibility to). The firing of cells in the eye that propagates to the brain never held content, and was always a ‘language-neutral’, ‘symbol-neutral’, non-content “signal”.

However. Vicente Raja Galian pointed out that so far, I have yet to assign any function to the brain, and it seems appropriate that we should, since it is a curious structure and we have kept it evolutionarily. Keeping a biological structure does not entail function or even importance (in the strictest interpretation of the word), but it seems to me a very valid point. So far, I am having trouble arguing against the idea that the brain is for ‘where’ (on/in the body) and ‘in what order’. Something is detected at the foot as intense pressure, I look down and see a dog biting it; this (in a sense) creates a loop where whatever signals are propagated back from the retina and the pressure signals from the foot are happening simultaneously. There is simultaneous increased firing from two directions into the brain. Solely by being simultaneous in a geographically close space, the two are intertwined. Experience does not happen in the brain, it happens in the relationship between body and environment, but one thing happening before, after or simultaneously with another may come to be through having a space within a body where the ‘where and when’ co-exist. This is because a lot of neural propagation going on in the body, in one way or another, travels to one collected structure: the brain. No content is needed; all we need to “know” is where and when, which is simply (although plastically) a matter of bodily geography.

I also have a sneaking suspicion that the brain is for drawn-ness and repulsion, but that currently requires more thought and explication before I feel comfortable laying it out publicly.

A non-content brain. 1/2

In search of a non-content perspective on brain activity, I often feel I come up empty-handed. Either non-content is not really directly spoken about (e.g. Anderson, 2014, and it isn’t really intended to be; it does however very importantly free us from other assumptions), or a positive account phrased as “but the brain does this or that” is more confusing to me than clarifying. So I’ve been criticized for not having my own positive account, or even a reasonable idea of what I would expect or accept as a good answer. So here’s a minimal start.

By a non-content view of the brain, I mean that any and all activity in the brain is not representational, symbolic, or in any way a carrier of content, in the sense that if I show you a picture of a cat then your cat neurons are firing (simplified, of course). To clarify this further, Anderson to me gets close, talking about the brain in a functional sense, non-reductionally. Instead, everything “magic” already happens in a) the continuously ongoing relationship in a given organism-environment system, and importantly, b) in the sensory system(s) (e.g. eyes, ears, legs, the body at large, etc.).

All that really would happen after sense-making at the sensory organs would be probabilistic (and likely functional, as Anderson suggests) networks of directed firing. I mean this in quite a specific way. For example, eyes connect to the brain at specific sites; electrical signals propagate from eye to brain at specific sites and in an initial direction, but after that, neuronal firing is (for specific reasons) a matter of what current state immediately neighbouring neurons are in. So, if one neighbour is in a post-firing state and another is not, the latter has a higher likelihood of firing. At a larger scale, what we will see in an image of the brain is a dendritic spreading that is part stochastic (and re-used), because neurons in this sense are non-essential. Of course, if a network of neurons (with part-stochastic spread) is firing together, like the oscillators they are, they are more likely to fire together again at a short time scale (they are also likely to fire together again at a longer time scale, but less so). Here is where a lot of the misinterpretation of brain images (by cognitivists, usually) exists, if you ask me: neurons, and often networks of neurons, are seen as essential or as carrying content, so we make a one-to-one mapping between an image of a cat and the specific neurons that are firing; but there are far too many confounds for this to be a confident finding.
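The idea above, firing as a probabilistic, state-dependent spread rather than a content-carrying signal, can be illustrated with a toy simulation. This is purely a sketch of the logic in the paragraph, not a claim about actual neurophysiology; the parameters (p_base, p_boost, the two-step refractory period) are invented for illustration.

```python
import random

def step(states, p_base=0.2, p_boost=0.5, refractory=2):
    """Advance a 1-D chain of model neurons one time step.

    states[i] == 0 means neuron i is ready to fire;
    states[i] > 0 means it fired recently and stays refractory
    for that many more steps. A ready neuron's firing probability
    rises when a neighbour fired on the previous step.
    """
    fired_last = [s == refractory for s in states]  # fired on the last step
    new = []
    for i, s in enumerate(states):
        if s > 0:                 # refractory: count down, cannot fire
            new.append(s - 1)
            continue
        neighbours = [fired_last[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(states)]
        p = p_boost if any(neighbours) else p_base
        new.append(refractory if random.random() < p else 0)
    return new

# Seed one firing neuron and watch activity spread part-stochastically:
# the same seed neuron can recruit different downstream neurons on
# different runs, which is the sense in which no neuron is essential.
random.seed(1)
states = [0] * 10
states[5] = 2  # neuron 5 has just fired
for _ in range(5):
    states = step(states)
```

Running this repeatedly with different seeds gives different spreading patterns from the same starting point, which is the confound gestured at above: the same stimulus need not light up the same neurons twice.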

As dynamical systems theory tells us, future (or current) state depends on the history of the system, and because there is no real beginning to any one individual’s brain activation, I cannot bring myself to believe that the brain ‘starts a series of neuronal firings to achieve a body movement’. Body movement is in relation to environment; that’s where the decision is made to move a certain way, that’s where “cognition” is. Actually moving a body part, yes, that is connected to brain firing, but not (necessarily) in a causal manner. Direction, intentionality, agency, mind, are not in the brain; they are in the relationship between organism and environment. A course of body movements is already given by that relationship; at most, and only in this sense, is the brain a “mediating” structure.

An aside. Blood flow through the brain is always already ongoing. Co-developing with all our other organs, it will also play a (perhaps minor) part in where and how a probabilistic dendritic network of neuronal firing will move through the brain. Then, wherever that was will need more blood flow (which is the basis of most imaging techniques). However (and again), because the route through the brain is part stochastic anyway, it makes no sense to talk about brain regions, networks, or neurons in any detached, representational, contentful, essential manner. Re-use, on the other hand, and functional (roughly similar from time 1 to time 2) networks of firing, over time, is what the brain is up to. Because of this, current imaging techniques can give us worse or better probabilities of ‘what’s going on’, and interventions can hit or miss depending on individual and time of intervention. But if you are interested in human behavior, it is probably not the most productive scale or scope at which to analyse it (although there will be some absolutely beautiful oscillator dynamics going on at a neuronal level).

The first response ever to anything non-representational: ‘yeah, well, how do you explain closing your eyes and thinking to yourself “I am going to move my hand now” and then moving your hand?’ Well, firstly, the question already assumes the brain did it, so it is always an unfair question. But. Nevertheless, it needs to be answered. As always, closing your eyes and remaining still isn’t some kind of magical state where you are closed off to the world; you are still continuously co-constituted with it. In fact, I can predict that the sentence above will be said because of the type of conversation we are having; the history of the system already determines and constrains direction and force into and with the future. But most importantly, the experience of the “decision” in the ‘word-sentence’ that you are thinking doesn’t ‘come from’/isn’t instantiated in the brain; it is already a decided course due to the relationship between you and the environment that you are in, like other body movement through the world. I could respond and say “do you know how many people choose their arm/hand to move when we get to this point in the conversation? 100% so far”; that is how constraining our history is (and the direction it already gives us), even on a short timescale. You could respond “ok, well, now I can think of anything, and maybe I won’t even move, just think that I will but don’t”, and we can go around forever in this type of dialogue, entrenching us further into that dissonant attractor state. The last point is that the question doesn’t really tell us what is going on; at worst it is a defensive reaction, at best a curiosity that likely can be satisfied empirically or by appealing to the continuously ongoing activity of our senses and sense-making.

Theory of Mind really is dead.

Brain in a vat, thoughts from embodiment.

The philosophical example goes;

If you put a brain in a vat and connect all the inputs necessary, would the brain be fooled into believing that it actually wasn’t a brain in a vat, but a normal brain in a normal world?

All kinds of fun philosophical issues follow. Embodiment, however, could firstly argue that since it is only a brain, it could not function at all, because brain is body; there is no separating them. The argument would then be that ‘all the inputs’ is a misleading assumption behind the question. We would obviously not have all the inputs (setting aside for a moment that input/output-type talk is difficult to maintain under this perspective). However, for argument’s sake, let’s accept both the word input (and all its assumptions) and that the brain is connected in such a way that it may as well have been part of a body and in a world. This does however take the fun out of the question, since we are basically saying that it already is fooled into being a normal brain in a normal world. The curiosity, however, is that from an embodied perspective you are more or less forced to clarify the example to the extent that it isn’t an exciting question.


It is only really exciting to begin with because, growing up, we are taught that the brain is separate from the body; we may even be taught that the mind is separate from the brain. So the example feeds off of commonsensical, traditional dualism: brain is something different from body, and/or mind is different from brain. Embodiment doesn’t allow this separation, which forces a restatement of the question in a way that answers it implicitly. Neat, right!?

Contrasting Ecological and Computational Strategy in a Virtual Interception Task 1/5

Well, it looks like my master’s thesis will be admitted and graded. They brought in an extra examiner on my thesis since it’s philosophically heavy, so it’ll take a few more days to get it graded. When it is, I am going to correct some errors and format it properly (as luck had it, I was following a previous APA-style guideline, and it was not appreciated). And I figured I’ll post the whole thing here in three chunks: the arguments against representationalism, the basic definitions of ecological psychology and radical embodied cognitive science, and lastly, my research paper.

As I have been invited to speak at the Social Sciences Master Graduation Ceremony, I have that to focus on, as well as waiting for critique from my second examiner. Roughly, I’ll be able to post my stuff a few days after the 11th.

Non-(?)necessary discrimination between actualisation and realisation

In Gibson’s perspective, are they not really the same thing? Perception in Gibson’s terms seems to imply that “acting on” is implied by perception. I am confused about how this unfolds in practice. Take the definition I outlined in a previous post: realisation is the perception of a possible interaction, as opposed to actualisation, which is the instantiation of an interaction. Perception is interaction?!

I think of studies on mirror neurons (if they exist, and if it is assumed they do not do what trad. cog. sci. says they do, but instead are simple sensory-modality + movement overlap/association neurons/clusters of neurons/areas, by happenstance rather than predetermination) in that ‘visual perception of’ and ‘engaging in’ are the same thing physiologically, since, as I suppose Gibson would have it, perception (regardless of which kind) always includes oneself. If one is not separated from the environment, then one perceives what others and oneself do as the same thing (obviously, humans distinguish between self and others, but, even then, not innately, which in itself doesn’t have to decide the matter, but may inform it). Maybe this could be seen as the process behind empathy or sympathy, for example. I feel disgust if I perceive rotting meat, because perception is that of systems and parallel modalities and not separate “input pathways”.

They may, however, have a practical, communicationally significant aspect to them, since it makes it easier to explain perspective or experience of a situation in those terms. Though I also get the feeling that they refer to the false dichotomy of conscious/unconscious perception: something superfluous to the ecological model. Indeed, it perpetuates the false assumption of consciousness per se. Note here, however, that “how we consciously experience” situations is central to psychiatry, for example, and can be useful to navigate within in therapy. Experimental psychology, however, should refrain from allowing this massive source of frame-of-reference error to guide theory too heavily.

On subjectivity/objectivity of affordances… (2/3)

Building on the previous post.. Reading Gibson.. I think, unfortunately, that there is good use in communicating an objective and a subjective perspective to clarify what there is and isn’t. This is an objective perspective in itself. Besides that, consider the point at which misperception of affordances comes into play: just by the word “misperception” there is an implication that, in a subjective perspective, we may perceive an affordance that, in actual fact, is not there. Then, it has to have not been there in an objective sense to begin with.

I appreciate the fact that Gibson tries intently to explain and visualise the non-subjective/objective nature of affordances themselves; I am on board here. It’s only that the dichotomous relationship of subj.-obj. bears on the information communicated and is entrenched in the linguistics. I do not think we can escape them unless we resort to dualism in some sense. Each time an affordance is said to exist or not exist, it must be said in a subjective sense. Unless we wish to abandon that specific set of philosophical underpinnings.. is that possible?

[Edit, 12:17, 3/4-2013]
[Gibson also seems to confuse me at times in this area. He speaks of affordances very strictly as relationships in an initial, definitional sense, but goes on to say that objects always afford their affordances to actors in a .. behavioural sense? But this is not entirely true; it is not only in a behavioural sense that he speaks of them as retained. He doesn’t speak of them differently in separate philosophical terms either (ontologically/epistemologically).. Could it be the distinction between realisation and actualisation that separates Koffka’s and Gibson’s views here? One that Gibson picks up on but doesn’t mention explicitly?]

Communicationally necessary separation of objective and subjective perspectives (in rECS) (1/3)

I began writing out the situated relationships between the concepts (mentioned in my previous post) and realised something terribly important. Even in the simplified taxonomy, I haven’t separated out the subjective from the objective, and I found out just how important that is when writing about the specific relationships. They exist in different realms (akin to the ontological and epistemological issues I have been writing about); also, communicating subjective relationships will depend on the specific organism and its umwelt (Louise Barrett). I have, for now, had human activity in mind, in an effort to keep it simple. This will guide the way I henceforth communicate about relationships in rECS where it is necessary to specify, unless someone has a good reason not to…

Objectively, here, refers to a mind-independent, theoretical perspective. I am not concerned here with how we come in contact with, or how we experience, the world, but rather with the relationships between the concepts, in how they affect each other, separated from how they are experienced (or might be experienced). It has nothing to do with separating ontology from epistemology, but there are surface similarities. For example, talking about Realisation and Actualisation: in an objective perspective you cannot have Actualisation without Realisation (I have written otherwise in other places, which should be revised on the basis of not having separated objective and subjective perspectives clearly). This is so because Realisation is defined as perception of affordances, and you cannot interact with, act on, Actualise, affordances without perceiving them. The same goes for Limitations, which may be present and affect Actualisation, but not necessarily be experienced.

But. In a subjective perspective, here defined as experiential, i.e. how we experience the world, we can Actualise affordances without “paying attention” or consciously or deliberately perceiving; we just act. An example is very quick decisions: we need not experience the Realisation of the acted-on affordances. Again, in a theoretical sense, an objective perspective, it is clear that we have to have Realisation (perception) on some level, whatever level that is, for us to be able to Actualise the intersituational affordance relationships. Reflexive behaviour could exemplify this, since it is usually experientially Realised after one begins Actualising, after the affordances have been Actualised, or not at all.

Thus, it is important to create two separate taxonomies: one for experiential, subjective, relationships (which will become mostly an empirical endeavour to sort out experimentally) and another for theoretical, objective, relationships. The theoretical perspective will necessarily incorporate more aspects and more relationships, and be truer to dynamic systems theory, than the experiential perspective. This is explained by the examples above and by the fact that what we experience is dependent on our senses, which obviously are “limited” (put in quotation marks because I do not wish to support the view that we ought to be ideal agents, should be measured against ideals, or are heading that way through evolution, since this imposes a frame-of-reference error. We are humans, we have developed under the pressures of our environment, and this is what we are, nothing more and nothing less).

If I find the time to explicate those taxonomies is another question…

Simplified taxonomy of modified rECS (5/5)

Well well, this is how far I’ve come in trying to visualise the whole tree of concepts in the modified version of rECS (Chemero), with additions from Golonka & Wilson and myself.

Starting out in the bottom right, with energy array and physical properties, it is worth mentioning that an energy array could also be said to be physical properties, since we are talking about, for example visually, light particles/waves. They are separated due to their function.

Energy array + Physical properties give rise to Structure.

Structure, non-perceived, is not information.

Structure + Perception give rise to Information.

Information gives rise to the Affordances of the object/agent and the Limitations.

Limitations + Affordances can be Realised and/or Actualised.

Affordances can be Realised and/or Actualised (without the need of perceiving Limitations).

Affordances can be Realised which can give rise to Actualisation.

Affordances can be Actualised giving rise to Realisation.

These are not static one-way relationships; a change in one changes the others, down to Perception. Practically, there should be arrows from Affordances, Realisation, Actualisation, Limitations, Information and Perception to each other.. My MS Paint skills need a bit of retouching for that to happen. I am in the process of separating out all the concepts one by one and linking them to their implicated and/or necessary concepts. This is meant as a simple overview.
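In lieu of retouched MS Paint skills, the chain of "gives rise to" statements above can be written down as a small directed graph. This is only a sketch of the simplified taxonomy as listed, with the fully interconnected arrows deliberately left out; the dictionary keys are the concepts as named in the post.

```python
# Each arrow reads "gives rise to / can give rise to".
taxonomy = {
    "Energy array":        ["Structure"],
    "Physical properties": ["Structure"],
    "Structure":           ["Information"],  # only together with Perception
    "Perception":          ["Information"],
    "Information":         ["Affordances", "Limitations"],
    "Affordances":         ["Realisation", "Actualisation"],
    "Limitations":         ["Realisation", "Actualisation"],
    "Realisation":         ["Actualisation"],
    "Actualisation":       ["Realisation"],
}

def reachable(graph, start):
    """Every concept reachable from `start` by following the arrows."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

One thing the graph form makes easy to check: from the energy array alone you can trace a path all the way to Actualisation, but only via Structure and Information, never around them; and the Realisation/Actualisation cycle at the end reflects that each can give rise to the other.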

Watch this space as I will try and post a new blog post each day (roughly) for each concept.

Temporary conclusion on affordance definitions (my head will explode if I don’t give this a rest for a while). (4/5)

I’ve been entirely engulfed by ontology, epistemology and affordances the past few days. My head is about to explode. But I’ve reached a temporary conclusion. A conclusion that is generally applicable, following most of the “traditional” ideas from ecological cog, embodied cog and rECS. They depart in some aspects, but I believe these departures to be necessary to live up to the philosophical demands.

Affordances need to be, or to be grounded in, [perceived]* physical properties. The reason I have is that there is no other possible way to define them without departing from realism. Please prove me wrong; I am staring myself blind at this.

Epistemologically, affordances are perceptible through information.

Information, [any] structure of [any] energy array (brilliantly defined by Sabrina Golonka)

Epistemologically, sensory modalities discriminate between and within structures.

Perception, “the apprehension of [information] where 1) the structure is specific to an event or property in the world, 2) where the meaning of the structure (for that organism in that task) is about that event or property (i.e., a dog’s bark is about the event of a barking dog), and 3) where the meaning of the structure must be learned (or, more correctly, where an organism must learn how to coordinate action with respect to this structure).” (stolen again from Sabrina Golonka).

Realisation, perception of affordances.

Epistemologically, perceiving information and coming to an understanding (need not be conscious, obviously… as if there is a black and white divide of conscious and non-conscious…) of some/all/the situationally relevant agent/objects’ affordances.

Actualisation, agent/object(s) affordance(s) interaction with agent/object(s) affordance(s).

Epistemologically, bodily movement between and/or within agents’/objects’ affordances, which can be either compatible (with the agent’s affordances, or by extension, like using a stick or something) or not (like lifting the earth; the earth does not lend itself to being lifted).

Constraints, boundaries of realisation and actualisation.

Epistemologically, restrict compatibility of affordances between and/or within agent(s)/object(s). The knee does not afford the leg to bend backwards. A local constraint that has consequences for bodily movement in the global environment. Being dynamically coupled to environment/objects/other agents, constraints vary depending on the current situationally available affordances.

*Edit 25/3

Ontological meanderings for the definition of affordance. (3/5)

Ontology deals with questions concerning what entities exist or can be said to exist.

Proposed rule: An ontological definition of affordances cannot include, in full or in part, a relationship between two entities, if we wish to adhere to a realist account of said concept.

Reason: Relationships imply mono-dependence or co-dependence.

Reasoning: An ontological definition of a concept including a relationship, implicates ‘mono- or co-dependence’ with ‘what exists’.

Premise A1: If either entity is dependent on the other, and
Premise A2: dependence is required for existence,
Conclusion A: then, there will be situations where either will not exist.

Premise B1: If both entities are dependent on each other, and
Premise B2: dependence is required for existence,
Conclusion B: then there will be situations where neither will exist.

Consequence: If affordances are, in full or in part, defined ontologically as a relationship, then affordances align themselves with idealism, since there will be situations where one or both entities do not exist.
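The two premise-conclusion pairs above can be checked mechanically by enumerating every combination of the two entities being present or absent. This is a minimal formalisation under one assumed reading: "dependence is required for existence" is taken to mean an entity exists only if it is present and, when dependent, its partner is present too. The helper name and encoding are mine, not part of the argument.

```python
from itertools import product

def exists(entity_present, partner_present, depends_on_partner):
    """An entity 'exists' only if present and, when dependent on its
    partner, the partner is present as well (the assumed reading of
    Premises A2/B2)."""
    return entity_present and (partner_present or not depends_on_partner)

# Case A: mono-dependence (X depends on Y; Y does not depend on X).
# Conclusion A predicts situations where X does not exist.
a_failures = [
    (x, y) for x, y in product([True, False], repeat=2)
    if not exists(x, y, True)
]

# Case B: co-dependence (X and Y depend on each other).
# Conclusion B predicts situations where neither exists.
b_failures = [
    (x, y) for x, y in product([True, False], repeat=2)
    if not exists(x, y, True) and not exists(y, x, True)
]
```

Under this encoding both failure lists are non-empty (any situation where the partner is absent), which is exactly what Conclusions A and B claim; the philosophical weight, of course, rests on whether the encoding of "dependence" is accepted.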