When you first hear that a computer or video game is trying to teach someone about the social or emotional world, you can’t be blamed if your first thought is, “Are you kidding?” If you want to learn how to shoot a lot of monsters or jump over an onslaught of obstacles, sure, go out and buy a video game. For learning about yourself and other people, your preferred “technology” might instead be a good novel or a great movie. It’s easy to forget that books and movies are forms of technology, too. What makes a movie or book great is not really the technology; it’s what’s in it: the style, story, and content. For any author, screenwriter, or director embarking on a new work, there are thousands of daunting choices about what to say and how to compose it, millions of possible combinations, and millions of ways to fail. It’s easy to end up writing a phone book of a novel or producing a low-quality movie.
I pondered these kinds of choices when I created FaceSay™, a social attention PC game for autistic students. What should the games teach about the social world: emotions, faces, attention to the eyes, imitation? What might make a human face as fun and compelling as a train’s or a robot’s face? How can the games leverage the strengths of a child with an ASD, such as enhanced local perception, and avoid the challenges, such as sensory overload? Can the games be playable for students with a wide range of IQs and verbal abilities?
I still ponder one question: what should we be teaching about the social world? Looking at the scores of emotion training games, it seems we often focus on teaching the analytic side of social interactions, perhaps at the expense of the affective side. That’s an understandable tendency. Decoding emotions seems to be the most concrete aspect of social interactions. We can reasonably simplify something that’s very nuanced to a one-word label and some specific facial movements. That is probably a good first step for kids who are concrete thinkers and who may be new to the nebulous idea of emotions. But how well does emotion decoding map to what we do in real-world social interactions?
When you are face to face with a friend, are you translating and labeling their facial expressions? I find that I’m just closely following the other person’s face and reliving the story they are telling. My face is probably subtly mirroring their movements. This simple mirroring provides a big bang for the buck. It signals to my friend that I’m participating and engaged, the opposite of the classic “still face” in infant experimental psychology. It also triggers physiological changes in me. Like those motion seats at the movies, raising my eyebrows in sync with a friend’s facial motions helps put me in their “movie.” It literally recreates a subset of the physiology they are feeling. Our ears are amazing transducers of physical air pressure into neurological electrical signals. Similarly, our faces are amazing transducers of our physiological responses (for example, an increased heart rate or adrenaline) from one person to another. My friend’s face reflects their physiology, which I generate in myself when I mirror it. In an interesting post on wrongplanet.net, a great resource for those on the spectrum and their families, someone expressed dismay that neurotypicals seem to be able to beam emotions to each other. Perhaps this invisible transduction, prosocial mirroring, is that puzzling beam. In any case, once we have introduced someone to the idea of emotions from a concrete, analytic, “decoding” perspective, I think it could be important to also introduce the affective experience: the wireless, non-verbal exchange that happens in face-to-face interactions.
When I was creating FaceSay, I was new to the field, the proverbial stranger in a new world. That helped me search widely and serendipitously. I generated a flood of ideas in an iterative design. Insights from articulate parents and feedback from researchers at the University of Alabama, Birmingham helped me identify the more promising ideas of the bunch. I made my thousand design choices and created three unique social attention games for FaceSay: Amazing Gazing for joint attention and basic theory of mind, Bandaid Clinic for facial recognition, and Follow the Face for emotion discrimination (no emotion labels are used) and prosocial following. All three avoid a conventional competitive video game framework. Instead, the games are inherently social, make use of pretend play, and put the child in the role of the leader or helper. There are points, but no explosions or chase scenes. The deliberately abbreviated dialog simulates a simple “collaborative conversation” between the child, a talking flower baby (our youngest at four months), and one of the talking animal coaches. The animals use a synthetic computer voice to address the child by name. The talking cat, for example, might give student Dave a hint by asking, “What do you think, Dave, is Rebecca looking at the nine?” The animated characters in the game are also unconventional. To make them seem more “real,” the animated talking heads are relatively large and photorealistic. One unique feature of FaceSay is its Montessori-ish approach: the games provide experiences from which the kids can inductively discover key insights. I hoped this non-didactic approach would engage the kids’ curiosity, their drive to understand the world. This is just a brief intro to the dozens of ideas and elements I wove into FaceSay. You can visit FaceSay.com to learn more about the games and their patented techniques for teaching social attention, or simply download the games to try them.
After making these thousand-plus choices, what did I end up with? Not an Oscar, but encouraging scientific evidence that FaceSay helps autistic students in kindergarten through about 6th grade in a wide range of social attention areas, including emotion recognition (1, 2), face recognition (1), theory of mind (2), and, most importantly, live social interactions with other kids (1). Over a half-dozen independent studies have been done, but only three have been published so far. I’ll briefly describe two, both randomized controlled trials (RCTs) published in the Journal of Autism and Developmental Disorders. Before I begin, I want to emphasize that although these are solid results, with not just significant p-values but statistical effect sizes that are considered large, they are group statistics. The results provide no evidence that FaceSay will help every child. Any improvement can help, but the benefits FaceSay can provide are just one more incremental resource for the proverbial toolbox.
In the first randomized controlled trial, led by Dr. Maria Hopkins at the University of Alabama, Birmingham, there were 49 subjects ages 6–14, roughly half with average to above-average IQs (Asperger’s/HFA) and half with IQs below average (LFA) (1). The control subjects played drawing software called Tux Paint while the treatment subjects played FaceSay. There was no coaching, tutoring, workbooks, or other active ingredient; the characters in the game taught the kids how to play. The kids played for 20 minutes twice a week on laptops that the grad student observers brought to a classroom in their school. The HFA FaceSay participants, but not the LFA FaceSay participants, improved significantly on the face recognition measure. Both the HFA and LFA FaceSay participants improved significantly relative to the corresponding controls on emotion recognition; most importantly, they improved significantly relative to controls in blinded playground observations. This level of generalization was a breakthrough for the field in 2007 and has yet to be matched. In the most recent RCT, led by Dr. Linda Rice, a school psychologist, there were 32 participants. All were HFA students in K–5th grade in the Moorpark school district. Without any coaching, workbooks, or other materials, the students played FaceSay (treatment) or Success Maker (control) once a week for about 12 weeks during their normal computer lab time. Dr. Rice’s study found that FaceSay participants improved significantly relative to controls on both emotion recognition and theory of mind measures. Again, these studies offer good support that FaceSay can help, but they are not yet FDA-grade scientific evidence.
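For readers curious what a “large effect size” means in practice, the standard statistic is Cohen’s d: the difference between the treatment and control group means divided by their pooled standard deviation, with d ≥ 0.8 conventionally considered large. Here is a minimal sketch of that calculation; the scores below are hypothetical illustrations, not values from the studies above.

```python
# Minimal sketch of Cohen's d, the effect-size statistic mentioned above.
# All numeric inputs in the example call are hypothetical.
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical emotion-recognition scores: treatment vs. control group
d = cohens_d(18.0, 14.0, 4.5, 4.8, 25, 24)
print(round(d, 2))  # a value of 0.8 or more is conventionally "large"
```

Unlike a p-value, which only says a difference is unlikely to be chance, Cohen’s d says how big the difference is relative to how much scores naturally vary.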
I’d like to conclude with a story that a parent shared with me about her son. It not only made my day, but also my year:
“Just so you understand what I thought was so amazing…My nine-year-old son really did not seem to have any friends. He always talked about his interests, then played with his toys and ignored the people. After FaceSay, he was sitting beside a boy in class, looking at his face and interacting back and forth with him. It was remarkable. The teachers at school remarked he has been talking with classmates on the playground and at lunch. The boy actually invited David to a playdate. That had never happened before.”
1. Hopkins IM, Gower MW, Perez TA, et al. (2011) Avatar assistant: improving social skills in students with an ASD through a computer-based intervention. Journal of Autism and Developmental Disorders 41(11): 1543–1555.
2. Rice LM, Wall CA, Fogel A, Shic F. (2015) Computer-assisted face processing instruction improves emotion recognition, mentalizing, and social skills in students with ASD. Journal of Autism and Developmental Disorders: 1–11.
This article was featured in Issue 62 – Motherhood: An Enduring Love