I work with a student I’ll call Leo. He’s eight, quite bright, and lives in a European city where he attends an English-medium international school. Leo was born in a country that uses the Cyrillic alphabet. He speaks that language at home, where his parents actively teach him to read and write in it multiple times a week. At school, he’s learning to read in English and also taking classes in a third language. He’s a trilingual third grader navigating three phonological systems and two completely different alphabets.
Recently, his school raised concerns. His teachers reported that his spelling was poor, that he confused certain letters, and that he kept forgetting how to capitalize. They wanted to assess him for dysgraphia.
His teachers are experienced, well-meaning professionals. They are also the humans in the room. And they still got it wrong.
I’ve been working with Leo since late fall. He doesn’t feel like a child with a learning disability. He learns what I teach him and applies it. He retains new concepts across sessions. His progress has been steady and strong. When I looked at the specific errors his teachers were flagging, they could all be explained by his multilingual profile.
No app would have caught that. But neither did the trained professionals who see him every day.
What pattern matching misses, whether it’s human or machine
Let’s take the letter confusion his teachers reported: b, f, th, d. In a monolingual English-speaking child, that pattern might warrant investigation. But Leo’s first language doesn’t have a /th/ sound. Neither does the third language he’s studying at school. English is the only language in his life that uses this sound, and he’s been immersed in it for less than four years. When he writes “baf” instead of “bath,” he’s not making a random error – he’s encoding what he actually hears, filtered through the sound system of his first language.
The b/f confusion has an even more specific explanation. In his home language’s alphabet, the letter that looks like a capital B actually makes a /v/ sound. The sounds /v/ and /f/ are produced in the same position in the mouth – the only difference is vocal cord vibration. His brain is sorting out a genuine conflict between what a letter shape means in one script versus another.
The capitalization issue? His home language doesn’t capitalize nationalities, languages, days of the week, or months. He actively writes in that language at home. His brain is regularly reinforcing a set of rules that directly contradicts English conventions.
An AI reading tutor – pick one: Ello, Amira, Khanmigo, any of them – would hear Leo read “baf” for “bath” and flag it as a phonics error. It would serve him more /th/ practice. It would measure his progress against monolingual English-speaking norms and generate a report showing he’s behind. The algorithm would be technically correct and diagnostically useless.
And without the experience to know better, Leo’s teachers did essentially the same thing. They saw the errors, matched them against what they know about learning disabilities, and reached a conclusion. They were pattern matching, just like the algorithm would. The only difference is that a human was doing it.
The problem isn’t whether a human or a machine is looking at the data. The problem is whether whoever is looking knows to ask the question that actually matters: Why is this child making this specific error?
The expertise gap
I know why Leo makes these errors because I have a degree in linguistics, twenty-plus years of experience with complex learner profiles, and a background in structured literacy. I also know it because I live with a version of this every day: my husband is a native Persian speaker, and after decades of English fluency, the /th/ sound is still something he navigates. Persian doesn’t have it either. These phonological patterns run deep.
But that combination of knowledge – linguistics, structured literacy, multilingual development, and clinical experience – is rare. Most classroom teachers don’t have it. Most reading specialists don’t have it. Most school psychologists who conduct learning disability evaluations don’t have it. And no algorithm has it.
This isn’t a criticism of those professionals. They are doing important, difficult work with the training they were given. But the training most educators receive doesn’t equip them to distinguish between a learning disability and cross-linguistic interference. The result is that well-meaning, competent people – human beings with advanced degrees and years of classroom experience – can look at a child like Leo and reach the wrong conclusion.
The same principle applies far beyond multilingual learners. Earlier in my career, I researched how features of African American Vernacular English affect reading and spelling in Standard American English – work I presented at the New Ways of Analyzing Variation conference. The pattern is strikingly similar: a child whose phonological system produces “baf” for “bath” through th-fronting, a well-documented and rule-governed feature of their home dialect, gets flagged as having a phonological processing deficit. The child doesn’t need more phonics drill. They need someone who understands the linguistic system they’re bringing to the task and can build explicit bridges between that system and standard orthography.
An algorithm can’t do that. But most humans in the room can’t do it either. The issue isn’t human versus machine. It’s whether anyone involved has the specific expertise to interpret what they’re seeing.
We’re asking the wrong question
Here’s what I keep coming back to. We are pouring billions of dollars into building technology that can teach children: adaptive algorithms, AI tutors, gamified phonics programs. And we’re doing this while simultaneously undervaluing the human expertise that these tools cannot replicate.
The question the ed-tech industry keeps trying to answer is: How do we use technology to replace the expert?
But that’s not the right question, because most children don’t currently have access to the right expert in the first place. The teachers and specialists they do have access to are often working without the specialized knowledge these complex profiles require. Adding an AI tutor doesn’t solve the problem. It just automates the same gap.
The better question is: How do we use technology to connect every child with the right expert?
Leo lives in Amsterdam. I live in the United States. He attends an English-medium school, but the expertise he needs isn’t just “English reading instruction” – it’s someone who understands how his home literacy in a Cyrillic-script language interacts with English orthography, and who can tell the difference between cross-linguistic interference and a learning disability. Ten years ago, his family would have had to find that specific combination of skills locally. The odds of finding a tutor who combines structured literacy training, linguistics expertise, and experience with multilingual learners in any given city are slim.
But because we have the technology to connect us – a screen, a stable internet connection, a shared virtual workspace – Leo has access to exactly the expertise he needs. I didn’t have to sit next to him to understand what was happening. I needed the right knowledge, the right training, and the tools to reach him.
That’s what technology should be doing. Not replacing expertise. Connecting it.
The real cost of getting this wrong
If Leo’s school proceeds with their assessment – normed on monolingual English speakers – his scores will almost certainly look concerning. And here’s the critical part: the assessment will be administered and interpreted by credentialed professionals. Humans, not machines. But without the specific expertise to contextualize those results, Leo is at real risk of receiving a label that will follow him through his academic career.
Every future teacher will see “dysgraphia” on his file and interpret his work through that lens. His parents may be advised to simplify his language environment, drop a language, stop writing in his home script – I have heard every one of these recommendations made before. The research says this is harmful. But the assessment, interpreted without the right expertise, will say it’s necessary.
This is not a hypothetical. I have seen it happen, repeatedly, to multilingual children and to children whose home dialects differ from the language of instruction. The system is not designed to distinguish between a disability and a difference. It takes a specific kind of expertise to do that – expertise that most professionals in the pipeline don’t have, and that no algorithm can replicate.
Leo’s errors aren’t random. They’re patterned and predictable based on exactly where his three languages conflict with each other. That predictability is actually evidence against a learning disability, because dysgraphia produces inconsistent, unpatterned errors. What Leo is doing is systematic. It’s his brain doing the extraordinary cognitive work of managing three languages simultaneously. But it takes someone with the right training to see that, and “someone with the right training” is a much smaller group than “someone with a teaching certificate” or “someone with an Ed.D.”
What I want parents to know
If your child is multilingual, or bidialectal, or navigating any kind of complex linguistic profile, and someone raises concerns about their reading or writing, please ask one question before anything else: Does the person evaluating my child understand their full language background?
Don’t assume that because they’re a professional, they do. Don’t assume that because they’re human, they can see what an app can’t. The credential doesn’t guarantee the expertise. The question isn’t whether your child is being evaluated by a person or a program. The question is whether that person has the specific knowledge to interpret what they’re seeing.
An assessment is only as good as its interpretation. A tutor is only as good as their ability to understand why a child is struggling, not just that they are. An app, no matter how well-designed, cannot ask the interpretive questions that distinguish between a learning disability and the predictable, temporary, and entirely normal effects of being a child whose brain is doing more linguistic work than most adults will ever attempt. But neither can most humans without specialized training.
Technology is extraordinary. I use it every day to reach students across time zones, to build materials, to collaborate with families I’d never otherwise meet. But I use it as a conduit, not a replacement. The value isn’t in the screen. It’s in what travels through it and whether the person on the other end has the knowledge to make it count.
Amy Oswalt is the founder of Conduit Academy, a virtual school for bright students with complex learning profiles. She holds degrees in linguistics and special education and has over twenty years of experience in international education.