Comments on ENSO Seminar “Radical Embodiment and Real Cognition”

Over at the 4e Cognition Group, Anthony Chemero has given a talk (YouTube link) about a couple of interesting new directions that he and his students are working on for their dissertations and a paper. The main impetus is to explain “higher order cognition” from a rECS-compatible perspective.

The first turn is through Gui Sanches de Oliveira’s Artifactualism approach to models, which gives a thorough and solid argument that scientific models are foremost tools, not accurate representations of the world. If a model works, we use it to predict, explain, plan, experiment, etc. It reminds me of the futile path that scientists often find themselves on: focusing on finding The Truth, or on finding objectivity. The world seems to me to contain neither -but even if it does, it doesn’t matter, at least not nearly as much as whether the proposed model can be used in any applied setting. It also reminds me of Nancy Cartwright’s arguments about truth and explanation: how far apart those two concepts are, and how opting for truth takes us further away from a functioning tool. This is a really important step. Artifactualism rightfully criticizes the assumption that thoughts are for representing the world accurately, and replaces it with the view that cognition is for toolmaking. “Explicit, occurrent thoughts are tools, instruments, or artifacts that some agents create and use. Of course, models can meet formal definitions of representations, but that is not what they are for…”.

The second turn is through Vicente Raja Galian’s attempt at defining brain activity through resonance and oscillators. In his case, TALONs are resonant networks of neurons that resonate to certain ecological information and not to other, that can continue to oscillate in the absence of the initial resonant input, and that can be set in oscillatory motion again at a later point in time (again without the initial input -this draws on Ed Large’s work). The brain, here, is driven by everything else (not the other way around). Oscillators, and especially non-linear oscillators, can act as filters and produce patterns not present in the original driver.
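To get a feel for that last claim, here is a minimal sketch of my own (a toy, not Raja’s actual TALON model): a damped oscillator driven by a pure sine wave. With a nonlinear (cubic) restoring term, the steady-state response contains a third harmonic -a frequency that simply is not in the driver- while the linear version responds only at the driving frequency.

```python
import math

def simulate(beta, omega=1.0, dt=0.001, periods=60):
    """Euler-integrate x'' + 0.2*x' + x + beta*x**3 = cos(omega*t).
    beta=0 gives a linear oscillator; beta>0 a nonlinear (Duffing-type) one."""
    x, v, t = 0.0, 0.0, 0.0
    xs, ts = [], []
    for _ in range(int(periods * 2 * math.pi / omega / dt)):
        a = math.cos(omega * t) - 0.2 * v - x - beta * x ** 3
        x, v, t = x + v * dt, v + a * dt, t + dt
        xs.append(x)
        ts.append(t)
    return ts, xs

def amplitude_at(ts, xs, freq):
    """Fourier amplitude at a single frequency over the last ten driver
    periods (a whole number of periods, so spectral leakage is negligible,
    and late enough that transients have decayed)."""
    window = int(10 * 2 * math.pi / 0.001)  # ten periods at dt=0.001
    ts, xs = ts[-window:], xs[-window:]
    re = sum(x * math.cos(freq * t) for t, x in zip(ts, xs))
    im = sum(x * math.sin(freq * t) for t, x in zip(ts, xs))
    return 2 * math.hypot(re, im) / len(xs)

h3_linear = amplitude_at(*simulate(beta=0.0), freq=3.0)
h3_nonlinear = amplitude_at(*simulate(beta=1.0), freq=3.0)

# The driver contains only frequency 1.0, yet the nonlinear oscillator's
# steady state has clear energy at frequency 3.0; the linear one does not.
print(h3_nonlinear > 10 * h3_linear)
```

The point of the sketch is only the filtering claim: a nonlinear oscillator driven by a single frequency produces structure (here, a third harmonic) that was never in its input.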

Then we take a turn into what Chemero refers to as slave/master systems, and while those words seem very culturally loaded, they make the point that slave systems wander (drift) in the absence of a master system. E.g., circadian rhythms stay in tune when we are regularly exposed to sunlight, but when deprived of it, our rhythms start to drift. A connected idea: when we do try to use TALONs to think about things, or about the past, we just don’t seem to be very good at it, because that is not what they (and the brain as a whole) are for. Marek McGann adds: “‘Memories’ are constructed on the fly, and confabulation is rife, because it is not retrieval of things, but it is temporary toolmaking”.
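The drift point is easy to demonstrate with a toy phase oscillator (again my own minimal sketch, not a circadian model): a “slave” with a slightly-off natural frequency stays locked to a “master” while weakly coupled to it, and drifts steadily away as soon as the coupling is removed.

```python
import math

def phase_difference_spread(coupling, dt=0.01, steps=40000):
    """A slave phase oscillator (natural frequency 1.05) nudged toward a
    master running at 1.0. Returns how much the slave-master phase
    difference varies over the second half of the run, after settling."""
    slave, master = 0.0, 0.0
    diffs = []
    for i in range(steps):
        slave += (1.05 + coupling * math.sin(master - slave)) * dt
        master += 1.0 * dt
        if i >= steps // 2:
            diffs.append(slave - master)
    return max(diffs) - min(diffs)

entrained = phase_difference_spread(coupling=0.5)  # "sunlight" present
deprived = phase_difference_spread(coupling=0.0)   # master removed

# With coupling, the phase difference locks to a constant; without it,
# it drifts by the frequency mismatch (0.05 rad per unit time).
print(entrained < 0.01 < deprived)
```

Locking happens because the coupling is stronger than the frequency mismatch (0.5 versus 0.05); weaken it below the mismatch and the slave drifts even while “coupled”.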

Ultimately, these initial steps toward making the idea of ‘resonance’ more concrete seem very promising. An interesting aspect of resonance is that it exists at all scales -it doesn’t matter whether we look at the behavioral or the neural scale- which makes it analyzable by methods like fractal analysis. That makes it an empirically testable theory. Also, resonant networks no longer have to contain content. Anthony Chemero suggests tool-making, which will have to be defined further before I can tell whether representational content hasn’t simply been replaced by Gibsonian tool content. And don’t get me wrong, that would be a wonderful first step toward better characterizing what humans do, but I am also currently on a quest for a non-content description of neural activity -and resonance seems to fit that description.

A non-content brain. 2/2

There is some misreading of Ecological Psychology due to the way direct perception and information detection are spoken about. Direct perception seems to carry with it a connotation of specificity (guarantee): that the world is exactly the way it is seen, that we cannot be wrong, and that we have all of it available at once. There is an explicit rejection of the poverty of the stimulus. But pause here a second, because this is what information detection is about.

First, the production of photons exists regardless of my existence. They will bounce around on surfaces, be partly absorbed or reflected depending on surface makeup, and create structure (if we were to put an observer somewhere in this space). In this instance, it would be most appropriate to simply refer to this as the optic array, or structured light. It is not that this structure carries content; it simply is structured (and continuously restructured) in a manner specific to, and guaranteed by, the surfaces around it and the medium(s) through which it came to any specific point.

Second, over a very long time, organisms have evolved to detect such structures. I cannot remember the organism -I think it was a deep-water fish- but a precursor to our eyes was sensitive only to ‘light’ or nothing. Since then, eyes seem to have caught on as an important way (in an evolutionary sense) to keep developing, which in our case meant becoming more and more sensitive to the structure that light carries with it. There is no reason to believe that at once, in any given slice of time, we can perceive all of the structure that light carries. ‘We see what we see’, and if we want to see more, we have to explore whatever we are trying to see by moving -to literally detect structure that may be occluded from one vantage point (as with “illusions”)- or we simply have not looked at something long enough to learn to discriminate between smaller differences in the structure of the optic array. I can, in the end, come to the same or a different conclusion about what I saw, depending on the history with which I came into the situation, but also depending on which parts of the array I was detecting, or trying to detect, at the time.

Third, we see and hear and detect pressure and other things at the location at which that information is available (though, as you might expect, we do not necessarily detect it -we merely have the possibility to). The firing of cells in the eye that propagates to the brain never held content; it was always a ‘language-neutral’, ‘symbol-neutral’, non-content “signal”.

However. Vicente Raja Galian pointed out that so far I have yet to assign any function to the brain, and it seems appropriate that we should, since it is a curious structure and we have kept it evolutionarily. Keeping a biological structure does not entail function or even importance (in the strictest interpretation of those words), but it seems to me a very valid point. So far, I am having trouble arguing against the idea that the brain is for ‘where’ (on/in the body) and ‘in what order’. Something is detected at the foot as intense pressure; I look down and see a dog biting it; this (in a sense) creates a loop in which the signals propagated back from the retina and the pressure at the foot are happening simultaneously. There is simultaneously increased firing from two directions into the brain. Sheer simultaneity within one (geographically) close space intertwines the two. Experience does not happen in the brain -it happens in the relationship between body and environment- but one thing happening before, after, or simultaneously with another may come to be through having a space within the body where ‘where and when’ co-exist, because a lot of the neural propagation going on in the body, one way or another, travels to one collected structure: the brain. No content is needed; all we need to “know” is where and when, which is simply (although plastically) a matter of bodily geography.

I also have a sneaking suspicion that the brain is for drawn-ness and repulsion, but that currently requires more thought and explication before I feel comfortable laying it out publicly.

A non-content brain. 1/2

In search of a non-content perspective on brain activity, I often feel I come up empty-handed. Either non-content is not really spoken about directly (e.g. Anderson, 2014, which isn’t really intended to -it does, however, very importantly free us from other assumptions), or a positive account is phrased as “but the brain does this or that”, which is more confusing to me than clarifying. So I’ve been criticized for not having my own positive account, or even a reasonable idea of what I would expect or accept as a good answer. So here’s a minimal start.

By a non-content view of the brain, I mean that any and all activity in the brain is not representational, not symbolic, and does not in any way carry content in the sense that if I show you a picture of a cat then your cat neurons are firing (simplified, of course). To clarify this further: Anderson, to me, gets close, talking about the brain in a functional sense, non-reductively. Instead, everything “magic” already happens in (a) the continuously ongoing relationship in a given organism-environment system and, importantly, (b) the sensory systems (e.g. eyes, ears, legs, the body at large, etc.).

All that would really happen after sense-making at the sensory organs would be probabilistic (and likely functional, as Anderson suggests) networks of directed firing. I mean this in quite a specific way. For example, the eyes connect to the brain at specific sites; electrical signals propagate from eye to brain at those sites and with an initial direction, but after that, neuronal firing is (for specific reasons) a matter of what state the immediately neighbouring neurons are in. So, if one neighbour is in a post-firing state and another is not, the latter has a higher likelihood of firing. At a larger scale, what we will see in an image of the brain is a dendritic spreading that is partly stochastic (and re-used), because neurons in this sense are non-essential. Of course, if a network of neurons (with partly stochastic spread) fire together, like the oscillators they are, they are more likely to fire together again at a short time scale (they are also likely to fire together again at a longer time scale, but less so). Here, if you ask me, is where much of the misinterpretation of brain images (usually by cognitivists) lies: neurons, and often networks of neurons, are seen as essential or as carrying content, so we make a one-to-one mapping between an image of a cat and the specific neurons that are firing -but there are far too many confounds for this to be a confident finding.
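A toy illustration of what I mean by partly stochastic, neighbour-dependent firing (a hypothetical sketch of my own, not a biophysical model): neurons on a ring fire probabilistically as a function of their neighbours’ state, with a crude refractory period, so the same “sensory” kick takes a different route through the network each time.

```python
import random

def step(fired, p_spread=0.6, p_base=0.01):
    """One update of a ring of model neurons. A neuron is likely to fire if
    a neighbour fired on the previous step, cannot fire if it just fired
    itself (a crude refractory period), and otherwise fires spontaneously
    with a low base probability."""
    n = len(fired)
    nxt = [False] * n
    for i in range(n):
        if fired[i]:
            continue  # refractory: fired last step, sits this one out
        neighbour_active = fired[(i - 1) % n] or fired[(i + 1) % n]
        p = p_spread if neighbour_active else p_base
        nxt[i] = random.random() < p
    return nxt

def run(seed, n=50, steps=30):
    """Kick neuron 0 once and record how activity spreads."""
    random.seed(seed)
    s = [False] * n
    s[0] = True
    trace = [tuple(s)]
    for _ in range(steps):
        s = step(s)
        trace.append(tuple(s))
    return trace

a, b = run(seed=0), run(seed=1)
# Same kick, different stochastic routes through the "network".
print(a != b)
```

No individual neuron is essential here: which cells carry the activity differs between runs, while the constraint (activity spreads between neighbours, never through a just-fired cell) stays the same.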

As dynamical systems theory tells us, the future (or current) state depends on the history of the system, and because there is no real beginning to any one individual’s brain activation, I cannot bring myself to believe that the brain ‘starts a series of neuronal firings to achieve a body movement’. Body movement is in relation to the environment; that is where the decision to move a certain way is made, and that is where “cognition” is. Actually moving a body part -yes, that is connected to brain firing, but not (necessarily) in a causal manner. Direction, intentionality, agency, mind are not in the brain; they are in the relationship between organism and environment. A course of body movements is already given by that relationship; at most, and only in this sense, is the brain a “mediating” structure.

An aside. Blood flow through the brain is always already ongoing. Co-developing with all our other organs, it will also play a (perhaps minor) part in where and how a probabilistic dendritic network of neuronal firing moves through the brain. Then, wherever that was will need more blood flow (which is the basis of most imaging techniques). However (and again), because the route through the brain is partly stochastic anyway, it makes no sense to talk about brain regions, networks, or neurons in any detached, representational, contentful, essential manner. Re-use, on the other hand, and functional (roughly similar from time 1 to time 2) networks of firing, over time, is what the brain is up to. Because of this, current imaging techniques can give us better or worse probabilities of ‘what’s going on’, and interventions can hit or miss depending on the individual and the timing of the intervention. But if you are interested in human behavior, this is probably not the most productive scale or scope at which to analyse it (although there will be some absolutely beautiful oscillator dynamics going on at the neuronal level).

The first response ever to anything non-representational: ‘Yeah, well, how do you explain closing your eyes and thinking to yourself “I am going to move my hand now” and then moving your hand?’ Well, firstly, the question already assumes the brain did it, so it is always an unfair question. But nevertheless it needs to be answered. As always, closing your eyes and remaining still isn’t some kind of magical state in which you are closed off from the world; you are still continuously co-constituted with it. In fact, I can predict that the sentence above will be said because of the type of conversation we are having -the history of the system already determines and constrains direction and force into and with the future. But most importantly, the experience of the “decision” in the ‘word-sentence’ you are thinking doesn’t ‘come from’, and isn’t instantiated in, the brain; it is an already decided course due to the relationship between you and the environment you are in -like other body movements through the world. I could respond, “Do you know how many people choose their arm/hand to move when we get to this point in the conversation? 100% so far” -that is how constraining our history is (and the direction it already gives us), even on a short timescale. You could respond, “OK, well, now I can think of anything, and maybe I won’t even move -just think that I will but don’t”, and we can go around forever in this type of dialogue, entrenching ourselves further into that dissonant attractor state. The last point is that the question doesn’t really tell us what is going on; at worst it is a defensive reaction, at best a curiosity that can likely be satisfied empirically, or by appealing to the continuously ongoing activity of our senses and sense-making.

Theory of Mind really is dead.

No ‘content’ in EcoPsych and Direct Perception

TL/DR: While a valid concern, I don’t think EcoPsych relies on ‘environmental’ content.

I share Dr. Edward Baggs’s worry that the Enactivist criticism of Ecological Psychology’s Direct Perception hints at a possible dualism -even if I think it mostly arises from reading EcoPsych unfavourably, or indeed from expressing EcoPsych unfavourably.

The idea is this. Representationalists assume content is in the brain (created and/or passed on from the senses as input). Perception is simply input to the brain-processor, which sends output signals to the passive body -hence Indirect Perception: what our eyes see is ultimately not what we experience; we experience what the brain creates (subject to criticism of being idealist and/or dualist, but that’s a different blog post). EcoPsych instead says: hang on, the world is its own best model; there is absolutely no need to conceptualize the perceptual systems as mere passive input devices, and there is no need to conceptualize the brain as a processor -we need no processing (in the traditional sense, anyway). Rather, perception is active and intelligent on its own. What you are currently experiencing is unmediated by any interpretational process; what you experience is what your perceptual system detects. Perception requires movement; perception and action are in this sense inseparable (your legs, for example, are also part of seeing -cue embodied theories). Importantly, however, perception is action and action is perception. It’s a continuous and simultaneous loop…

Enactivism asks, however, whether this means that EcoPsych simply places content on the outside, as opposed to representationalists placing it on the inside. If so, we are not really losing the dualistic consequences that believing in content brings with it.

I think one problem may arise from reading specificity (roughly: guaranteed perception) into Direct Perception. The straightforward answer here is that this is a bit too literal a take on Direct Perception, although it comes from considerations such as: if what we see is the world, then why does the world look different to different people, given that we have access to the same information? A simple answer from EcoPsych would be that we all bring different capabilities to any situation -we inhabit different bodies, we can have different goals- and these all affect what we attend to and why.

Another issue is that some EcoPsychs talk about properties and effectivities as if you could divide organism from environment, landing us in traditional dualisms again. I do not subscribe to this way of talking specifically about the organism or the environment, because I think it too easily invites dualist interpretations -but those who do talk this way would still say the affordance is primary, and that being able to talk about its corresponding parts doesn’t mean they see those parts as non-constitutive. Which sounds fine to me, but I also understand how people can misread it.

As for answering the central question -do EcoPsychs conceptualize content to be on the outside?- I think a resounding ‘no’ is in order. Organisms detect structure in ambient arrays (e.g. the optic array) and they perceive/act on affordances (which are necessarily relational aspects of the current, continuously evolving, organism-environment system). The information itself (the structure in an ambient array) is not content. In the case of vision, it is (from a specific point of observation) all of the converging photons, from all angles, continuously flowing toward that point, having bounced off surfaces where light has been partly absorbed, reflected, etc. (which is part of how light becomes structured), before reaching the eyes. The eyes themselves have evolved to detect differences in structure to the degree that was necessary for survival, and we bring an entire cultural/societal/historical as well as developmental baggage with us, as we have started naming structures that we are taught from a young age to reproduce. But there is no content; there is no standing-in-for the things in the environment. A wooden table is made up of wooden particles, which are made up of atoms; when light strikes the top of a dark wood, photons are absorbed by the material to a larger degree than with a light wood -but then, of course, this becomes circular, because we have already defined “dark” and “light” through the property of absorption. (It should be added here that “illusions” in which dark and light can look the same, or in which a blue dress can look yellow, are only a valid counter-argument if you rely on traditional optics and discount contextual factors like general lighting conditions, etcetera.)

Abstractions and Scaling Up

TLDR: Abstract words and concepts are inseparable from specific instances, which makes the term’s usage confusing.

It seems that often, in discussions about whether or not a certain phenomenon ‘scales up’, or whether we engage in abstractions of things, the concepts we talk about take on a life of their own. For example, I see a curious indent in the wall; it turns out it is called a power outlet, and I can charge my laptop if I have a compatible prong. Here, some try to convince us that we have created a new concept, and that for every instance we see of this new thing, we add it to the concept, or we extract central features, and then we go about talking ‘abstractly’ about some kind of general ‘power outlet’ -it has gained its own level of existence. I urge everyone to think differently about this: to deny the assumption that we are creating something new. I don’t think anyone would disagree with me denying that we just created some kind of otherworldly, non-physical concept. But I think mainstream cognitive science would disagree with denying that we are creating an abstraction. In one sense, it is a mundane counter-argument: we see the first power outlet, a representation is created in the brain; we see another one, another representation; and/or we start creating a representation that is slightly less specific and only picks out the shared features of the first two. Any way you slice it, this is the work representations do for mainstream psychologists. But what do you do if you don’t believe in representations?

Taking a page out of Gibson’s ’79 bible, I would argue that ‘scaling up’ or ‘abstraction’ is simply a pole of attention. We can take any pole of attention that we are aware of; we can say the word ‘ball’ and just kind of mean a ball in general; we can take any pole of attention we want. However, describing something from different perspectives (poles of attention) is just that. It doesn’t entail an ontological difference in the world. Same with abstraction: I can choose any pole of attention to make things seem general or specific in any which way; I can call a less featureful ball an abstraction that can be applied to the next ball I haven’t yet seen. But all that is going on is that you are seeing a couple of aspects in a new thing that are also true of another thing -you are not ontologically creating an overarching concept.
If you think we are, I need to be convinced that this creation is not non-physical (enter contemporary cognitive science, with representations and similarity hierarchies). I currently think that position may be indefensible. It seems to me that we (EcoPsych/DynSys) need not accept an ontological shift; it is enough to describe it as a shift in the pole of attention, and we can be taught -by others or by our own experience of the world- to take on a pole of attention we haven’t before, or didn’t know existed, or didn’t want to, or anything else. It does not necessarily mean we have to accept a new ontological status for an utterance. I think most mundane arguments about abstraction and higher-level (cognitive) faculties thereby disappear -but not all.

Emergence. Then how in the world do we deal with things that ultimately do seem to create a new ‘level’ of functioning? A termite mound is not concerned with its shape -hell, not even the termites are- but because of extraneous factors guiding the drop-off of pheromone-induced dirt, all of those small lawful actions create a temperature-regulated multi-story apartment building. Here it is difficult to argue that the mound is just a pole of attention, since it clearly comes with new properties that aren’t written into its creation. I think this is a very different thing to talk about. Compare a termite mound to the word ‘honor’. Honor seems more non-physical, seems more like an abstraction, but as soon as you have to apply the word, you are forced to apply it to a specific situation. There is almost an asymmetry: the more abstract a word seems to be, the more specific an example needs to be to understand it -and multiple specific examples can be even more illuminating.

Ultimately, I may just have a problem with the way the term ‘abstract’ is used. Colloquially it means ‘more general’ or ‘less specific’; in application it is necessarily always a specific instance. The term seems to me to imply a separate thing with its own ontological status (like a general concept), inviting representations. Perhaps it only invites them, which saves its usage somewhat, but to me it just seems confusing.

First conference talk and proceedings publication!

Going to CogSci17 in London this summer for my first research presentation; the paper is to be published in the proceedings (and can be found here). Here’s the abstract:

The actualization of affordances can often be accomplished in numerous, equifinal ways. For instance, an individual could discard an item in a rubbish bin by walking over and dropping it, or by throwing it from a distance. The aim of the current study was to investigate the behavioral dynamics associated with such metastability using a ball-to-bin transportation task. Using the time interval between sequential ball presentations as a control parameter, participants transported balls from a pickup location to a drop-off bin 9 m away. A high degree of variability in task-actualization was expected and found, and the Cusp Catastrophe model was used to understand how this behavioral variability emerged as a function of hard (time interval) and soft (e.g. motivation) task dynamic constraints. Simulations demonstrated that this two-parameter state manifold could capture the wide range of participant behaviors, and explain how these behaviors naturally emerge in an under-constrained task context.

Keywords: affordances, dynamic systems, cusp catastrophe, dynamic modeling, simulations, constraints
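For readers unfamiliar with the model: a cusp catastrophe has two control parameters and one behaviour variable, with behaviour sitting at minima of the potential V(x) = x⁴/4 + b·x²/2 + a·x. Counting equilibria comes down to the discriminant of the cubic V′(x) = 0. (Mapping a and b onto the paper’s soft and hard constraints is my loose gloss here, not the paper’s exact parameterization.)

```python
def n_equilibria(a, b):
    """Number of equilibria of the cusp potential V(x) = x**4/4 + b*x**2/2 + a*x.
    Equilibria solve V'(x) = x**3 + b*x + a = 0; the cubic's discriminant
    -4*b**3 - 27*a**2 tells us whether there are one or three real roots."""
    disc = -4 * b ** 3 - 27 * a ** 2
    return 3 if disc > 0 else 1

# Inside the cusp region (disc > 0) two stable behaviours coexist around an
# unstable one, so behaviour can flip abruptly; outside it, only one exists.
print(n_equilibria(a=0.0, b=-1.0))  # 3 equilibria: x = -1, 0, +1
print(n_equilibria(a=0.0, b=1.0))   # 1 equilibrium: x = 0
```

The abrupt flips between coexisting equilibria as the parameters cross the cusp boundary are what make the model useful for metastable, under-constrained tasks like the one above.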

One more day.

The truth is sometimes, and at some levels, quite simple. We all have feelings, get feelings, feel feelings. It is also true that we sometimes do not act on our feelings, but we carry them with us. We let them out one way or another, because in the end we are not so different from the rest of the physical world. If there is a concentration of energy somewhere relative to its surroundings, there is a spontaneous process that always strives to even out that difference. So I believe our feelings are, and function in a general sense like, any other physical process. A warm glass in a cold room: over time, the temperature evens out. Anger, which we can release destructively through aggression -in a blow, in a word, or in many small ones. Naturally this analogy has its limits, but it is a start toward building a bridge between the dauntingly technical Dynamical Systems Theory/Complex Systems Theory and everyday talk and everyday events.

One thing is certain, though. We must start listening to each other, understanding each other, letting each other’s feelings take up space, but also keep the dialogue going and reach compromises -sometimes hard and sometimes soft. We must understand that we cannot use our feelings as weapons, but instead as tools, to understand ourselves a little better, to navigate the world a little better. If you were offended or felt violated, how can that feeling shape what and how you communicate it back? In a constructive way? Doing it destructively may give you a satisfying vendetta, but in the end it usually just leads to more of what you reacted to in the first place. You may not hear it, but the next person does.

We must also begin to understand that the world around us cannot always be split in two. We are capable of so much more than that -of at least a slightly less simple worldview. If we manage that, we can change the wellbeing of a great number of individuals who at this very moment are convincing themselves, for example, not to take their own lives. Because, for some, feelings are automatically turned into self-reflection. You are bad. Why do you go on? Don’t you understand there is no hope for you?

That is exactly why it is so incredibly important to show others that there is a place for them too. That they are not outside society, that they are not so different after all, that they do not need to fit societal norms to feel acceptance from the world around them. Feeling that you belong, on your own terms, is a strong connection to the world. You can give that to someone. Maybe not fully. But a little. Just the little that was perhaps needed that day, to make that small light deep inside flicker back to life. One more day. One day at a time.

…an old Radiolab episode about AI and assumptions about humans.

One of Radiolab’s older episodes, “Talking to Machines”, discusses programs that try to simulate conversation. The classic Turing test says that if you are talking to a robot and a human, and you cannot determine which is the human and which is the robot (or you get the classification wrong), then the robot/computer can be said to be intelligent.

Now, I would love to argue for a different definition of intelligence, but that isn’t really what this post is about. One of the conclusions from the owner of Cleverbot (a fascinating program that saves whatever you write to it and is coded to spit back a correlated response from its pool of existing phrases) is that thinking intensely about AI and conversation reveals how complex it is to sit in front of one another and have a conversation. The reasoning goes: because it is so difficult to code a program to converse ‘like a human’, our “coding” must be complex too. It falls in line with metaphors about the brain, giving them ostensible plausibility.
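For concreteness, a Cleverbot-style retrieval scheme can be sketched in a few lines (this is my own guess at the general idea -store everything, answer with the reply that followed the most similar stored prompt- not Cleverbot’s actual code):

```python
import difflib

class EchoBot:
    """Toy retrieval chatbot in the spirit described above: it stores every
    (prompt, reply) pair it has seen and answers a new prompt with the reply
    that followed the most similar stored prompt. No grammar and no
    'understanding' -- just correlation with conversational history."""

    def __init__(self):
        self.memory = []  # list of (prompt, reply) pairs

    def learn(self, prompt, reply):
        self.memory.append((prompt, reply))

    def respond(self, prompt):
        if not self.memory:
            return "..."
        # Pick the stored prompt most similar to the new one.
        best = max(self.memory,
                   key=lambda pair: difflib.SequenceMatcher(
                       None, prompt.lower(), pair[0].lower()).ratio())
        return best[1]

bot = EchoBot()
bot.learn("hello there", "hi! how are you?")
bot.learn("what is your name", "I'm just a pile of stored phrases.")
print(bot.respond("hello"))  # closest stored prompt is "hello there"
```

The interesting part is how far such a shallow trick gets in practice -and how little that tells us about what the humans supplying the phrases were doing.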

However, haven’t we reversed the reasoning here? Aren’t we assuming in the first place that brains/humans are computational machines, and that it should therefore be possible to code a program for something humans do? It started out the opposite way… We were trying to have the computer behave like a human, not the other way around! Could it not be that coding a program to converse ‘like a human’ is so difficult precisely because humans aren’t like computers?

If humans aren’t computational machines, then trying to code a program for something that isn’t written as “software on our brain hardware” is going to be very difficult indeed. But going from the latter to the former is a potentially fallacious way of thinking about it. You could even make the case that conversations aren’t that complex… After all, they are wholly ubiquitous in our everyday lives! We might distinguish successful from failed conversations, but I’d say they are conversations nonetheless, and very human.

Reproducibility in dynamic systems

What does it mean that something is easily reproducible?

It speaks to the stability of the processes and constraints on which the dynamic system in question relies for its existence.

Reproducibility, or recreation (or whichever other term you may want to use for the creation of something already existent, by itself or through its scaffolded/nested position), could possibly be used as a measure of something’s existential stability. For example, the survival of a language depends on many individuals speaking the language, a shared geographical space wherein the language is spoken, and a social context/culture (or perhaps, rather, all of these together as a system of mutual support and constraint).

Uttering a word of a language you have grown up with requires little energy (whereas the history of that word involves massive amounts of energy; the word is essentially a concentrated unit of historical energy expenditure, and it is from this history that it has gained its efficiency [or meaning, or …]). That you can say some specific word with ease and have it understood clearly speaks to the incredible stability of the underlying processes from which that word has sprung and by which it is maintained. Or imagine DNA.

The links between ease of reproducibility, the large amount of work expended to enable something’s creation, and the stability of its underlying processes over time explain the history and maintenance of any given phenomenon.

The ontological status of affordances…

…is that they only exist when they are being actualised and/or realised. It is not that they do not exist otherwise -the information for them always exists- but until that information is picked up, they practically do not exist.

Information exists independently. It exists for as long as there is a source to “illuminate” it: the sun shines, and other stars shine, and only if the sources of photons cease to exist does visual information vanish, leaving the visual systems of life forms without even the possibility of detecting it.

While information is not “what we see”, affordances are (often mediated by medium-sized objects, if our visual system is unaided). I guess you could say that objects are made up of information, though I’d be hard pressed to agree with this articulation in a strict scientific sense; it will do to make the point clear, however. Affordances, being relationships between (for this example’s sake) properties and objects of the environment and the capabilities and effectivities of a human, seem to be able not to exist.

If we are in a room with chairs in it, and we leave the room, the chairs still offer the affordance of sitting if we were to observe the room through a camera (the realisation part). If no life form is in the room to detect the information of said chairs affording sitting, then the affordance is what I’d like to call passive (or, let’s say, it doesn’t practically exist). Its ontological status is pending a life form able to perceive the information of, and act in accordance with, the affordance.

[Start Edit]Think of it as the establishment of any kind of relationship. Before one has been perceived/realised by anyone, there is a flux of information to be discovered and there is a non-relationship. Once discovered, however, that relationship most probably cannot be undone. It reminds me of the very dramatic difference between not being able to un-perceive a relationship once it has been established, compared to when it is just being perceived, or when we have yet to discover what type of relation we have (or can have) to it.

Is this problematic? I don’t think so. On the level of information, it must still exist. On the level of affordances, however, it cannot. This is not problematic, since nothing is going in and out of existence; rather, the relationship between environment and actor, which necessarily incorporates both, is temporarily suspended -it is inactive, or passive. Innovation and creativity are sometimes defined as the discovery of new ways to relate to existent (or new) objects, properties, life forms, ourselves, etc…[End Edit]

Is this important? Perhaps not. However, ontology is important in a general sense. I think it necessitates at least a mention in a blog post, in a galaxy, far, far away.