Accessible Academia: Serious Games for Dyslexia

In this shorter edition of “Accessible Academia” we look at a paper published back in May 2024 on a new method for collecting data for diagnosis of dyslexia.

What is Accessible Academia? (Click for the answer…)

We take the latest academic papers about neurodivergence and neurodivergent people and, with the permission of the authors, condense them into accessible summaries for non-academics.

Drop downs like this provide further information about research methods and writing styles and can help you learn more about how research is conducted and communicated.


“Serious Games”, those with a purpose beyond entertainment, can be a great way to engage children in data collection, rather than relying on lengthy assessments. The research team, split between a number of Pakistani universities and Queen Mary University of London, took advantage of serious game design to support dyslexia diagnosis and screening.

Dyslexia, as the researchers define it, is a condition that results in “an inability to efficiently process written language”. Other approaches to dyslexia diagnosis either require the presence of an expert or use machine learning on data from eye tracking, MRI, and EEG.

The authors suggest that game-based data collection can provide a cheaper and faster method for initial screening, identifying individuals who would benefit from an assessment with a specialist, overall reducing rates of late diagnosis.

When designing the game, the researchers ran two feedback sessions to evaluate the design. Their main goals were to identify how easy the game was for children to play, and whether the activities were pitched correctly for the target age bracket.

The game had nine rounds, each targeting a different trait associated with dyslexia, as defined by the DSM-5-TR. While rounds 1 and 2, those targeting letter and letter sound recognition, did not show any major between-group differences, rounds 3 and 6 had the greatest differences.

How do you know which had the biggest difference?

The authors used the Mann-Whitney U test, a statistical test that compares two independent groups. In this case, the scores of the diagnosed and non-diagnosed groups were compared.

When you calculate the test, you also get a p-value, which tells you whether an effect is statistically significant. If this is below some pre-defined significance level (in this case 0.05), the observed difference is unlikely to have occurred by chance alone, suggesting a genuine difference between the groups. Rounds 3 and 6 had p-values of 0.001 and <0.001 respectively.
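To illustrate how such a test works, here is a minimal sketch of the Mann-Whitney U test in plain Python, using a normal approximation for the p-value. The scores below are invented for demonstration and are not the study's data; this is not the authors' analysis code.

```python
import math

def mann_whitney_u(group_a, group_b):
    """Return (U, two-sided p-value) comparing two independent groups."""
    # Pool both groups and sort by value, remembering original positions
    combined = sorted((value, index) for index, value in enumerate(group_a + group_b))
    # Assign 1-based ranks; tied values share their average rank
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        average_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = average_rank
        i = j + 1
    n1, n2 = len(group_a), len(group_b)
    rank_sum_a = sum(ranks[:n1])  # group_a occupies the first n1 positions
    u = rank_sum_a - n1 * (n1 + 1) / 2
    # Normal approximation for the p-value (reasonable for larger samples)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return u, p

# Hypothetical round scores for two independent groups (invented numbers)
diagnosed = [3, 4, 2, 5, 3, 4, 2, 3]
non_diagnosed = [7, 8, 6, 9, 7, 8, 6, 7]

u, p = mann_whitney_u(diagnosed, non_diagnosed)
print(f"U = {u}, p = {p:.4f}")
```

Because every score in one invented group is lower than every score in the other, the p-value falls well below 0.05, so the sketch would call this a significant between-group difference. In practice a library routine (such as SciPy's `mannwhitneyu`) would be used rather than hand-rolled code.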

The researchers also looked at other data collected from the game, such as number of and accuracy of clicks. Some of these variables also showed between group differences, especially with total hits and total efficiency.

While the data looks promising, further research is needed with more participants. This way, the researchers can identify whether adjustments to the application are needed (such as removing rounds 1 and 2), and see whether the between-group differences hold for larger groups.

“Serious Game for Dyslexia Screening: Design and Verification” was written by Gulmina Rextina, Sohail Asghar, Tony Stockman, and Arooj Khan. Many thanks to Gulmina and Tony for allowing us to share this paper: the full citation is provided in the references.


Do you have a topic that you’d like us to explore for our future “Accessible Academia” articles? Get in touch with us:


References

  1. Rextina, Gulmina, Sohail Asghar, Tony Stockman, and Arooj Khan. 2024. ‘Serious Game for Dyslexia Screening: Design and Verification’. International Journal of Human–Computer Interaction, May, 1–17. https://doi.org/10.1080/10447318.2024.2352205.