AI is burning through electricity at a rate that should make anyone paying a power bill nervous. In 2024, data centers worldwide consumed roughly 415 terawatt-hours, about 1.5% of global electricity, according to the International Energy Agency, with the US accounting for the largest share, and demand is on track to more than double by 2030.
So when a team of researchers says they have built an AI system that uses 100 times less energy while being more accurate, it is worth paying attention.
What the researchers actually built
A team at Tufts University, led by Matthias Scheutz, has developed what they call a neuro-symbolic visual-language-action model. The paper, titled “The Price Is Not Right: Neuro-Symbolic Methods Outperform VLAs on Structured Long-Horizon Manipulation Tasks with Significantly Lower Energy Consumption,” was published on arXiv in February 2026 and will be presented at the International Conference on Robotics and Automation in Vienna this May.
The idea is straightforward, even if the execution is not. Current AI models for robotics, called visual-language-action models or VLAs, work a lot like large language models. They learn by brute force, crunching massive datasets and running millions of trial-and-error cycles until they stumble onto patterns that work. This sometimes produces impressive results, but it is wildly inefficient. Think of it as memorizing every possible chess position instead of learning the rules.
Neuro-symbolic AI takes a different path. It combines the pattern-recognition power of neural networks with actual rules and logical reasoning. The neural network handles perception and motor control. The symbolic layer handles planning and logic. Together, they work more like how a person would actually approach a problem: observe, think about the rules, plan the steps, then act.
The numbers tell the story
The researchers tested both approaches on the Tower of Hanoi puzzle. If you are not familiar, it is a classic logic puzzle that requires moving disks between pegs following strict rules. It was chosen specifically because it demands sequential reasoning and multi-step planning, things current AI struggles with.
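The puzzle's structure is simple enough to state in a few lines. As a minimal sketch (my own, not code from the paper), the classic recursive solution shows the kind of multi-step plan the task demands: to move n disks, you must first move n-1 disks out of the way, relocate the largest, then restack.

```python
def hanoi(n, source, target, spare, moves=None):
    """Return the optimal move sequence for n disks on the Tower of Hanoi."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top
    return moves

# Three disks always take 2^n - 1 = 7 moves.
print(len(hanoi(3, "A", "C", "B")))  # → 7
```

Every move depends on the moves before it, which is exactly why the task punishes models that can only predict the next plausible action rather than plan a sequence.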
Here is what happened:
- Success rate: The neuro-symbolic system completed the puzzle correctly 95% of the time. Standard VLA models managed only 34%.
- Novel scenarios: When given a more complex version of the puzzle it had never seen before, the hybrid system still succeeded 78% of the time. The conventional model? Zero percent. It failed every single attempt.
- Training time: 34 minutes for the neuro-symbolic system versus more than a day and a half for the standard approach.
- Energy for training: The new system needed just 1% of the energy that standard training consumed.
- Energy during operation: 5% of what conventional models required.
Those are not incremental improvements. That is a completely different order of performance.
Why this matters beyond robotics
Scheutz made a point that stuck with me. He compared the inefficiency to everyday AI tools, noting that even a basic Google search now runs through layers of AI that burn disproportionate energy for what amounts to a simple information retrieval task.
“These systems are just trying to predict the next word or action in a sequence, but that can be imperfect, and they can come up with inaccurate results or hallucinations. Their energy expense is often disproportionate to the task.”
He is right. The current approach to AI is fundamentally wasteful. We throw more GPUs at problems, build bigger data centers, and consume more electricity, all to get models that still hallucinate, still fail at basic reasoning, and still cannot reliably plan three steps ahead.
Some data centers now draw as much power as small cities. The IEA projects AI electricity consumption will keep climbing sharply through the decade. At some point, the economics stop working. You cannot keep doubling your power bill every few years and call it sustainable growth.
Why this is not a quick fix
I want to be clear about the limitations. This research focused on robotics tasks with well-defined rules. The Tower of Hanoi has clear constraints: disks go in order, one at a time, following specific rules. That structure is exactly what symbolic reasoning excels at.
Open-ended tasks like writing essays, generating code, or answering general knowledge questions do not have such clean rule sets. It is not obvious how you would apply symbolic reasoning to something as fuzzy as “write a persuasive email.” The current research does not address that.
There is also the scale question. The experiments involved relatively controlled lab settings with specific robotic manipulation tasks. Whether these efficiency gains hold up when you scale to more complex, messy, real-world environments remains to be seen.
The bigger picture on AI efficiency
That said, this research points at something important. The AI industry’s current strategy of brute-forcing every problem with more compute and more data is hitting physical and economic limits. Energy is expensive. Chips are expensive. Building data centers takes years and billions of dollars.
Microsoft just announced a $10 billion investment in AI infrastructure in Japan. SoftBank put $10 billion into OpenAI. UnitedHealth Group is spending $3 billion to embed AI across its operations. These bets only pay off if the underlying technology becomes more efficient, not less.
The neuro-symbolic approach suggests there is real headroom. Instead of building ever-larger models that learn everything from scratch through trial and error, you can encode known rules and let the neural network focus on what it is actually good at: perception, pattern matching, and motor control. The symbolic system handles the logic. Neither part has to carry the full burden alone.
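To make that division of labor concrete, here is a toy sketch in the spirit of the approach. This is my own illustration, not the Tufts architecture: the "neural" half is stubbed out as a trivial parser, and the symbolic half plans over the puzzle's explicit rules with a breadth-first search instead of learned trial and error.

```python
from collections import deque
from itertools import permutations

def perceive(raw):
    # Stand-in for the neural half: map raw sensor input to a symbolic
    # state. Here the parse is trivial; in a real system a network would
    # produce this from camera pixels.
    return raw

def legal_moves(state):
    # Symbolic rules of the puzzle: only a peg's top disk may move, and
    # never onto a smaller disk. (Smaller number = smaller disk.)
    for src, dst in permutations(state, 2):
        if state[src] and (not state[dst] or state[src][-1] < state[dst][-1]):
            yield src, dst

def apply_move(state, move):
    src, dst = move
    new = {peg: list(disks) for peg, disks in state.items()}
    new[dst].append(new[src].pop())
    return new

def freeze(state):
    return tuple((peg, tuple(disks)) for peg, disks in sorted(state.items()))

def plan(state, goal):
    # Pure symbolic planning: search over rule-legal states rather than
    # running millions of trial-and-error rollouts.
    queue, seen = deque([(state, [])]), {freeze(state)}
    while queue:
        current, moves = queue.popleft()
        if freeze(current) == freeze(goal):
            return moves
        for move in legal_moves(current):
            nxt = apply_move(current, move)
            if freeze(nxt) not in seen:
                seen.add(freeze(nxt))
                queue.append((nxt, moves + [move]))

start = perceive({"A": [3, 2, 1], "B": [], "C": []})
goal = {"A": [], "B": [], "C": [3, 2, 1]}
print(len(plan(start, goal)))  # shortest plan for 3 disks → 7 moves
```

Because the rules are encoded explicitly, the planner never proposes an illegal move and generalizes to any disk count for free, which is the intuition behind the hybrid system's 78% success on configurations it had never seen.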
What to watch for next
The paper will be formally presented at ICRA in Vienna in May 2026. Watch for whether other labs can replicate these results on different tasks and at larger scales. If the efficiency gains hold up across domains, neuro-symbolic AI could shift how the industry thinks about building intelligent systems.
The research also raises a question that more people should be asking: if we can get better results with 100 times less energy by adding structured reasoning to our models, why were we ever satisfied with the brute-force approach in the first place? Part of the answer is that throwing compute at problems is easier than thinking carefully about architecture. But easier is not the same as smarter.
The paper is available on arXiv (arXiv:2602.19260).