Brain in a vat, thoughts from embodiment.

The philosophical thought experiment goes:

If you put a brain in a vat and connect all the inputs necessary, would the brain be fooled into believing that it wasn’t a brain in a vat, but a normal brain in a normal world?

All kinds of fun philosophical issues follow. Embodiment, however, could firstly argue that since it is only a brain, it could not function at all, because brain is body -there is no separating the two. The argument would then be that ‘all the inputs’ is a misleading assumption built into the question. We would obviously not have all the inputs (setting aside for a moment that input/output-type talk is difficult to maintain under this perspective). For argument’s sake, however, let’s accept both the word ‘input’ (and all its assumptions) and that the brain is connected in such a way that it may as well have been part of a body in a world. This does, however, take the fun out of the question, since we are basically saying that it already is fooled into being a normal brain in a normal world. The curiosity, though, is that from an embodied perspective you are more or less forced to clarify the example to the point where it is no longer an exciting question.

Why?

It is only really exciting to begin with because, growing up, we are taught that the brain is separate from the body; we may even be taught that the mind is separate from the brain -so the example feeds off commonsensical, traditional dualism: brain is something different from body, and/or mind is different from brain. Embodiment doesn’t allow this separation, which forces a restatement of the question -in a way that answers it implicitly. Neat, right!?

It’s not really a coincidence that we exist…

“It is such a coincidence that we exist. It is such a coincidence… Accident… Chance… Random”

But the two most important things are these: every second of every minute of the universe’s existence these “coincidences” keep being created, and, if a “coincidence” can’t be reproduced over and over, across massive stretches of time, then it wouldn’t exist now (as so many things don’t). So we can call our being here “coincidental”, or our universe being hospitable to life a “coincidence”, but in reality we are no more an anomaly than anything else. Additionally, and very importantly, if we didn’t reproduce the stable patterns of a body -or, further back in time, cell division- then we wouldn’t be here either. But the vital aspects of the environment-organism relation have been kept stable, in a way that has allowed biochemical matter to reproduce, and us eventually to evolve.

So if you ask me: yes, of course it is a coincidence that we exist, and it depended on the stability of our environment. But we are as improbable as anything else that currently exists in our universe, and, considering the number of “coincidences” created (and the vast majority deconstructed) all the time, it is overwhelmingly probable that ‘that which can be reproduced’ will continue to exist. So we are not a coincidence. (I don’t subscribe to determinism or anthropocentrism, so, no, the universe didn’t evolve to provide space for us. The universe has no intention; it just is. But that is the topic of another post. Or book.)

Has “Has Milgram been misunderstood?” misunderstood Milgram?

Short article here. So this will be short too.

“This new analysis suggests that we may have misunderstood the ethical as well as the theoretical issues raised by Milgram’s studies. We need to ask whether it is right to protect participants’ own wellbeing by leading them to think that harming the wellbeing of others can be justified as long as it is in a good cause.”

There seems to be something missing here. What was unethical wasn’t ‘causing someone harm’, because that is not what actually happened in the experiment. Participants were debriefed and told that they hadn’t harmed anyone. So Milgram didn’t excuse the behaviour in the experiment by justifying it as being for a good cause; rather, he justified the deception by saying it was for the greater good. And it worked (according to his book and to the authors of the article). The ethical question rather is: is it ethically sound to temporarily cause participants distress? Even if debriefing removes this distress? Is it ok if the end justifies the means…

My own contention about Milgram’s study is that, while his means seem to have been worth the end, prior to running the experiment we could not know whether it was going to gain us anything. Opinion even held that nothing exciting would come from it (by researchers’ and students’ best estimates); finding out that this wasn’t the case could be argued to justify the means. But only in retrospect. A “luxury” we most definitely don’t have today.

I have, however, not read the paper that the article is based on. So perhaps I am misunderstanding the misunderstood Milgram misinterpretation.

Created and programmed depictions: A useful distinguishing aspect?

Skip the text between brackets (the first two paragraphs) to avoid (unnecessary) thoughts leading up to the depiction-theoretical argument.

[Computers are fallible. They also rely necessarily on electricity. Is it any wonder, then, that we can have a preference for books, because they are perceived as tangible/physical, as opposed to a Kindle e-book, which is perceived as intangible? It should also be said, however, that the capitalist consumer model is better suited to e-items, since buying and discarding them doesn’t take as large a toll on the world’s resources (hence the ongoing push for paper-free workplaces).

For me personally there is also a feeling of ownership involved. A book is mine, and I can keep it and pick it up whenever I want to. An e-book exists inside something, and only by virtue of both the device working and continued access to electricity (over a longer time perspective). It doesn’t then feel like I own it, because my access to it relies on things outside my perceived control.]

This is another aspect distinguishing screen-presented alternative objects/environments from, for example, painting a picture. I had written in my notes on the previous post “created vs. programmed” and couldn’t fit it in, because it is another stratification of depictions and virtuals (virtual objects, environments and agents). Here goes. Both are created, essentially, but the programmed necessarily requires programming and the created does not… Ugh, awesome start.

Created depictions are objects that are part of the environment but are not the actual objects themselves; they lend themselves to the perception of information (about possible affordances of the object they depict, or of similar objects)*. This would include, for example, a painting of an apple. Created depictions rely necessarily on the existence of the environment, but not vice versa.

Programmed depictions are objects that are part of the virtual environment but do not lend themselves to virtual affordances; they lend themselves to the perception of information (about possible affordances of the object they depict, or of similar objects)*. This would include, for example, a jpeg image of a painting of an apple. Programmed depictions rely necessarily on the existence of the environment and of a virtual environment, but not vice versa.

I believe that a depicted object is experienced differently depending on whether it is created or programmed. This may have to do with the more tangible feeling of a created depiction and the more intangible feeling of a programmed depiction (but this is my personal experience and perception of created vs. programmed depictions, and so may hold very little value if put through a scientific method). I also get the feeling that this may have to do with “actual” and “perceived” reality, but that is a whole other dimension, and I need to think more thoroughly about it before feeling confident it holds any value.


*Note, by the way, that a depiction merely needs to present a similar enough optical array to be considered a depiction of any specific object. Perception is also individual, depending on experience, but this is covered already (although it may need further theorising).

Do depictions afford us anything?

First and foremost, to be able to argue for the framework below, a restructuring of the language surrounding depictions may be needed. It is argued here that the overarching terms should be alternative objects and alternative environments, with the two (or four) subcategories depicted objects/environments and virtual objects/environments.

The main division bears on the ongoing discussion of whether depictions afford us anything (Wilson, 2013). This is an attempt to further that discussion.

Virtual objects (such as those used in screen-based research) can be interacted with -however, only in the virtual environment. Virtual objects, then, rely on a virtual environment. They do not afford the agent anything, unless the virtual objects are connected to the environment (for example, controlling the movement of an object in the environment by virtue of a Human Computer Interface). They do afford the virtual agent whatever the virtual environment and virtual objects are programmed to afford.

Depicted objects, then, do not afford an agent anything, but can provide (accurate or inaccurate) information about what the depicted object represents. They can inform us of possible affordances in the environment, just as objects afforded to others, perceived by us, inform us that an affordance may be possible. The point of divergence is that depicted objects do not afford us anything because we cannot become part of the depicted environment as depicted agents. We cannot become agents in a depicted environment; we can only ever be agents in the environment. If we can become part of the depicted environment, it is not a depicted environment but a virtual environment -and we only become part of it as a virtual agent. Virtual objects do not afford agents anything if we speak strictly of the environment; however, as a virtual agent in the virtual environment, the depictions afford the virtual agent, while they still only inform the agent, because hie [i:] is only ever in the environment. This is also to say that although an agent interacts with a virtual environment, this necessarily has to be done through some form of apparatus/machinery; without it, the virtual environment would merely be a depiction.

An agent is thus not afforded anything by either depicted or virtual objects/environments. Depicted objects can provide information that may or may not be useful in the environment, just as perceiving objects in the environment can provide such information. Virtual objects can afford virtual agents, just as objects can afford agents. They can also provide information, just like depicted objects. They can only afford agents by virtue of being connected back to the environment. This is, however, somewhat paradoxical, because agents are still only in the environment, and are only afforded something in the environment.

Cause for Distress: Dichotomization 1/n

Disclaimer: the Swedish language introduced the word ‘hen’ to refer to a person where gender is irrelevant. There is no such word in English; until there is, I am using ‘hie’ (silent h, long i, as in bee; [i:]). I have already spent too much time tip-toeing around using him/her and he/she, or changing whole sentence structures to fit in ‘one’, and so on; frankly, it is just much easier to use hie [i:]/hier [ɪə]/hierself [ɪəsɛlf]/etc. I don’t intend to discuss why, unless specifically asked. It is related to, but not the cause of, the following posts.

Since a class in gender studies last fall, it has dawned on me that much personal distress comes from holding a dichotomized view of whatever it is that is distressing someone. I began thinking intently about this -antecedents, consequents, a wide range of specific areas and so on- to see how far it could be generalized. And wow, this is a worthwhile concept to start using. Clinically, it can be very useful to challenge someone’s knowledge structure on this basis, as doing so opens up alternatives and reflections that are otherwise lost. I have used it to challenge my own dichotomized perspectives, with wide-ranging consequences and insights (what some would term self-development/realization). I intend to write up my thoughts and reflections, and invite anyone and everyone to comment with their own experiences, insights and/or theoretical concerns and perspectives.

I have been away from my blog for quite a while lately, due to moving countries and starting a new job. Also, I don’t have internet at home, which means I either have to come in early or stay late to type up my thoughts here. This also means that the posts to follow come from experience and armchair reasoning; I have yet to read up on the literature (huge apologies for that -please suggest readings if you have any!). I am really excited to share the dichotomization procedure we all are, and have been, subject to, and to hear your thoughts, concerns, insights and stories. Watch this space!

A tiny example of logical abstraction being unequal to EcoPsy

I had trouble coming up with a title for this, because it is only a tiny example I use when speaking with undergraduates (usually first-years) about the inadequacy of logic to account for human behaviour. I link it to Gigerenzer’s arguments on this matter.

If this example is useful to you, awesome; otherwise, you can always use it to tire people out and make something easy more of a burden (for no reason at all, which is always a fun pastime).

Imagine a round pillar with four plaques, one facing each cardinal direction; you are to read all four. You may choose either to walk one cardinal direction clockwise between each plaque for the entire procedure, or three cardinal directions counter-clockwise for the entire procedure. Regardless of which you choose, you read the plaques, and thus gain the same information, in the same order. Logically, abstracted from the procedure, you walk away with the same information. Ecologically, obviously, the two are different, because the movement is different. The larger the pillar, the larger the difference.
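The equivalence in reading order, and the divergence in movement, can be sketched in a few lines of Python (the plaque labels and the pillar radius are illustrative assumptions, not part of the original example):

```python
import math

# Plaques arranged clockwise around the pillar, one per cardinal direction.
plaques = ["N", "E", "S", "W"]

def reading_order(step):
    """Visit all four plaques starting at the first, moving `step`
    positions clockwise each time (negative = counter-clockwise)."""
    return [plaques[(i * step) % 4] for i in range(4)]

one_cw = reading_order(1)      # one cardinal direction clockwise each time
three_ccw = reading_order(-3)  # three cardinal directions counter-clockwise each time

# Same reading order, hence the same information, in the same sequence.
assert one_cw == three_ccw == ["N", "E", "S", "W"]

# Ecologically the procedures differ: for a pillar of radius r, each
# clockwise step covers a quarter of the circumference, each
# counter-clockwise step three quarters -and three steps are needed.
r = 2.0  # metres, hypothetical
quarter = 2 * math.pi * r / 4
print(3 * quarter, 3 * 3 * quarter)  # ~9.42 m vs ~28.27 m walked
```

Scaling `r` up makes the gap between the two procedures grow linearly, which is the “larger the pillar, larger the difference” point in miniature.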

The point can then be elaborated further depending on the exact idea you want to teach. I find it quite useful for undergraduates; for postgraduates, however, the example is often too basic, and I find Gigerenzer’s ‘Wason four-card task argument’ more useful.

If you have any fun techniques or examples you use to teach these ideas, please enrich the comment field!

Ecological Psychology and Locke(d) Doors

A point of entry into the free will debate is Locke’s example (no reference, I’m afraid; I’ve lost it) of a person entering a room and closing the door behind hier [ɪə]. In situation A, hie [i:] simply makes a decision to stay or leave; in situation B, hie is unaware of the door locking behind hier. My own take on this example is that it neatly shows subjective and objective ‘knowledge’ [and their impact on considering free will existent or not]. Hie makes a decision based on free will in the first case, and only believes hie does in the second. From a subjective perspective there is free will; from an objective one there isn’t.

Ecological Psychology does not like this at all. EcoPsy would probably hold that in both cases there is free will, because perception, belonging to the observer, does not include the information that the door is locked. Or? EcoPsy is positioned alongside embodied, embedded and often extended cognition, including dynamicism. This entails that movement and active exploration are important aspects of being human. Therefore, the decision to stay in the room can either be classified as free will in both cases, or defined as a non-decision. The reason for the latter would be that, at that point in time, the perceiver has not actively explored the environment enough to be able to make the decision in the first place. Even a half-arsed exploration of hies [i:z] environment allows perception of which options are available. As I see it, the original example assumes naïveté and passivity on the part of the observer, and this is unfair.

The most important point, however, is that the original example also defines decision-making in a strictly computational manner: at a single point in time, without temporal perspective. It does not take into consideration how we explore, find out and perceive in the real world -how decisions unfold over time and do not boil down to single points in time. In my view, several more philosophical examples are conundrums simply because of this distinctly connectionist/computationalist ignorance of temporal flow.

Retraction of exemplification of ‘virtual affordances’ in “Cognitive Psychology in Crisis” (2/2)

This reminded me of something that I have been struggling with in psychology in general for a very long time.

The issue I have is that, in my previous blog post, the exemplification by League of Legends (LoL) specifics (pp. 37-38) can be used as conceptually equal to the definition of virtual affordances. This is why I didn’t spot the fallacy to begin with.

On a conceptual level, LoL does indeed contain virtual affordances; ontologically, however, the programming is too weak for it to be anything else -it is not ontologically equal. Another distinction is needed here: of course virtual affordances will not be defined exactly the same as affordances ontologically -they consist of different matter. However, in the realm of virtual environments, the ontological definition comes down to programming, 010101011s, and eventually computer chips and electricity. As an abundance of philosophers argue, it does not come down to the hardware (and I will refrain from getting into this argument here; it is worthy of books and hours of deliberation). This may sound representationalist, by the way. I assure you it is not. The point is: the programming code, the 010111s and so on, is the environment in the sense that it is what everything reduces down to, but it is not the environment when considering the interactivity of virtual agents/objects/environments (the epistemological stuff). This is so for the exact same reason that Gibson defines the ecological level for most organisms, and not the physical or astronomical levels.

That said, should each programmed virtual environment be treated as a “full” virtual environment, with virtual affordances defined from the perspective of each virtual environment? Or should virtual environments all be graded as “weaker” or “stronger” programmed in comparison to the environment, essentially making the environment the strict criterion against which virtual environments are judged?

As for psychology in general, it seems to me that it lacks a connection between epistemology and ontology, whereas EcoPsy doesn’t. As usual, correct me if I am wrong.

Retraction of exemplification of ‘virtual affordances’ in “Cognitive Psychology in Crisis” (1/2)

I must admit a mistake. Virtual affordances, as defined in “Cognitive Psychology in Crisis: Ameliorating the Shortcomings of Representationalism”, read: “invariants programmed in environment, objects and agents, allowing, limiting or disallowing virtual behaviours, interactions and coupled systems between those environments, objects and agents” (p. 37). The examples used -League of Legends specifics- do not strictly hold up to this definition.

As one example, abilities activated by buttons lack one very important aspect of the traditional definition of affordances: reciprocity. Abilities in LoL do not essentially display virtual-agent interaction with virtual object/environment in the way that throwing corresponds to organism interaction with object/environment. An example that would count belongs to two characters, Volibear and Singed, who can run up to an enemy and toss them over their shoulder. But even then, it is a stretch to count this as a virtual affordance. Since no universal laws of physics are programmed into the game, even this activity does not strictly live up to the definition; it is simply a virtual behaviour visualised to mimic what would be an affordance had it been enacted in the environment.

There are better examples from even the earliest FPS games, such as Quake, where you can aim your rocket launcher at the floor and fire (called rocketjumping) to overcome gravity and reach high plateaus not otherwise reachable. Here, however, there would be debate about how much the virtual agent actually is a virtual agent… details, details…

In sum, Human Computer Interface-type stuff still involves human organisms and what they are and are not able to do depending on what is depicted on a screen (which is what my thesis experiment comes closer to). Virtual agents in virtual environments, however, require more from the programming than is currently (in general) displayed for me to feel comfortable calling these virtual affordances.