Apple’s recent research article, The Illusion of Thinking, made waves. But the reaction quickly moved beyond the technical. For some, it confirmed the limitations of LLMs; for others, it was inconsequential, and human-level intelligence remains inevitable. And so the cycle continues: one side dismisses, the other inflates.

This misses the point. The real question isn’t who’s right about the future, but whether these technologies are useful now, and how they might become more useful tomorrow. As an engineer, I think we should spend less energy predicting what these systems will become and more time evaluating what they allow us to do today.

My thinking on this was shaped by Karl Åström’s paper Automatic Control: The Hidden Technology. Åström noted that control systems are embedded in nearly every modern system (chemical plants, airplanes, electrical grids) yet rarely discussed. Control is everywhere, but invisible, because it works. When a technology becomes reliable, we stop talking about it and just build with it.

But Åström’s key point isn’t invisibility. It’s that control systems earned their place by being useful. Their value wasn’t established through hype, but by enabling better performance, safer systems, and more efficient processes. It was utility that made them fundamental.

This idea extends beyond control theory. Mark Weiser, who pioneered ubiquitous computing, wrote that “the most profound technologies are those that disappear.” He meant that the most transformative tools integrate so seamlessly into daily life that we stop noticing them. Writing, electricity, and indoor plumbing don’t provoke debates anymore. They just work.

Even that, to me, is secondary. Disappearance is a byproduct. The deeper point is that these technologies solved real problems. Their value wasn’t hypothetical; it was operational.

This is the lens I apply to LLMs. Are they intelligent? Conscious? Will they achieve human-level reasoning? Fascinating questions, but not the place to start. A better question is: can they help today? Can they assist in writing, improve documentation, offer design suggestions, or help debug logic? Are they useful now, in small, concrete ways?

Dismissive cynicism and breathless optimism are both forms of projection, and neither is helpful. As engineers, our job isn’t to make sweeping predictions but to test, build, and adapt. When a tool works, we adopt it. When it doesn’t, we move on. The path from prototype to infrastructure isn’t paved by forecasts; it’s built by usefulness.

So I argue for a shift in focus. Less energy spent on whether these models will “think.” More attention on what they help us do now, and how we can shape them to be more useful tomorrow. In the end, that’s what matters—not what we expect, but what we can use.