I must admit a mistake. "Cognitive Psychology in Crisis: Ameliorating the Shortcomings of Representationalism" defines virtual affordances as "invariants programmed in environment, objects and agents, allowing, limiting or disallowing virtual behaviours, interactions and coupled systems between those environments, objects and agents" (p. 37). The examples I used, specifics from League of Legends, do not strictly hold up to this definition.
As one example, abilities activated by button presses lack one very important aspect of the traditional definition of affordances: reciprocity. Abilities in LoL do not display virtual agent interaction with a virtual object/environment in the way that throwing corresponds to an organism's interaction with an object or environment. One example that might count belongs to two characters, Volibear and Singed, who can run up to an enemy and toss them over their shoulder. But even then, it is a stretch to count this as a virtual affordance. Since there are no universal laws of physics programmed into the game, even this activity does not strictly live up to the definition; it is simply a virtual behaviour visualised to mimic what would be an affordance had it been enacted in the environment.
There are better examples in even the earliest FPS games, such as Quake, where you can aim your rocket launcher at the floor and fire (called rocket jumping) to overcome gravity and reach high plateaus not otherwise reachable. Here, however, there would be debate about how much the virtual agent actually is a virtual agent, but details, details…
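What makes rocket jumping a better candidate is that it exploits invariants that genuinely are programmed into the environment: gravity and blast impulse interact lawfully, and the agent discovers a behaviour the designers never scripted. The mechanic can be sketched with back-of-the-envelope ballistics (all numbers are illustrative assumptions, not Quake's actual movement values):

```python
# Minimal sketch: why a rocket jump reaches ledges a normal jump cannot.
# Assumes simple ballistic motion; all constants are hypothetical.
GRAVITY = 9.8          # downward acceleration (m/s^2), illustrative
JUMP_SPEED = 5.0       # vertical speed from a plain jump (m/s)
ROCKET_IMPULSE = 9.0   # extra vertical speed from firing at the floor (m/s)

def peak_height(v0: float, g: float = GRAVITY) -> float:
    """Peak of a ballistic arc launched at vertical speed v0: v0^2 / (2g)."""
    return v0 ** 2 / (2 * g)

normal = peak_height(JUMP_SPEED)                    # jump alone
rocket = peak_height(JUMP_SPEED + ROCKET_IMPULSE)   # jump + blast impulse

print(f"normal jump peak: {normal:.2f} m")
print(f"rocket jump peak: {rocket:.2f} m")
```

The point of the sketch is only that the extra height falls out of two programmed invariants (gravity, impulse) combining, rather than being a canned animation, which is what brings it closer to the definition above.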
In sum, Human-Computer Interface work still involves human organisms and what they are and are not able to do depending on what is depicted on a screen (which is what my thesis experiment comes closer to). Virtual agents in virtual environments, however, require more from programming than is currently (in general) on display for me to feel comfortable calling these invariants virtual affordances.