Just had an idea as I was writing and don't want to forget it (even if it winds up not being a very good idea in the end). Given the direction my proposal is taking right now, it looks like I'd really like to look at interactivity: do residents want to watch a simulated flap move from donor to recipient site, or do they want to click and drag it there themselves? BUT I have to keep in mind that I am supposed to be conveying information not just about flap position (which can be learned in the wet lab), but also about the direction and magnitude of tension, proximity to anatomical landmarks, etc. So I had the idea of programming a vector arrow into the visual that would change direction and length in response to the position of the flap (which is either being controlled by the user or running through a pre-programmed linear animation). This would allow the user to:
a) see how the direction of tension relates to nearby landmarks (e.g. is it pulling on an eyelid?)
b) help the learner visualize the vector of tension in situations beyond the use of the e-resource (e.g. in the wet lab: "I am pulling the skin this way, and as I do this I have a mental image of what my action is doing to the vector of maximum tension")
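A minimal sketch of how the arrow could be driven (all names and the elastic model are my own assumptions, not anything implemented yet): treat tension as pulling the flap back toward its anchored donor edge, with magnitude proportional to displacement, and check whether that pull points toward a nearby landmark like an eyelid.

```python
import math

def tension_vector(anchor, flap_pos, stiffness=1.0):
    """Hypothetical linear-elastic model: tension pulls the flap back
    toward its anchored donor edge. Returns a unit direction vector and
    a magnitude; the on-screen arrow's angle and length would be
    re-derived from these every time the flap moves."""
    dx = anchor[0] - flap_pos[0]
    dy = anchor[1] - flap_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0), 0.0
    return (dx / dist, dy / dist), stiffness * dist

def pulls_toward(landmark, flap_pos, direction, cone_deg=30.0):
    """Flag whether the tension direction points within cone_deg of a
    landmark (e.g. an eyelid) -- a crude way to warn about distortion
    risk, point (a) above."""
    lx, ly = landmark[0] - flap_pos[0], landmark[1] - flap_pos[1]
    d = math.hypot(lx, ly)
    if d == 0:
        return True
    cos = (lx * direction[0] + ly * direction[1]) / d
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
    return angle <= cone_deg
```

Whether the flap position comes from a drag event or from a keyframe of a pre-programmed animation, the arrow update is the same call, which is what would make the two conditions comparable.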

I remember reading something about how deep comprehension occurs when the learner builds their own mental construction of the system being learned (I think it's Mayer and Moreno...), and I feel like this is somehow related.
I will have to re-read this in the morning and decide if it makes any sense or not.
*update: the paper I'm thinking of is Hari Narayanan & Hegarty 2002, NOT Mayer and Moreno