Encyclopedia Autonomica

Grounded Autonomy: Neuro-symbolic Representations in the Reasoning Loop

The Ensigns of Command: Sense → Symbolize → Plan → Act

Jan Daniel Semrau (MFin, CAIO)
Aug 04, 2025

While monitoring Superbill, I started noticing a pattern. Some of its sub-agents were stalling, hitting about 90% accuracy on their assessments and refusing to go further.

While LLMs with reasoning capabilities have become the state of the art for online applications like Claude or ChatGPT, I noticed that even these high-end services exhibit problems with logical consistency in the output they generate. And since I am not working on a news summarizer but on a high-risk investment product, consistency is incredibly important to ensure accuracy and reliability, especially over longer reasoning horizons.

We humans conceptualize our world in roughly the same way. The sun rises in the morning. If it’s bright, it’s usually daytime. If you drop something, it will fall down.

LLMs lack that capability. Because of that, they struggle with reliability and consistency, especially in long-running tasks.

One of the potential solutions to this problem could be neuro-symbolic reasoning.


But before I start, here are some relevant terms I will be using throughout this post.

Key Terms & References

  • Symbolic Reasoning: A method of reasoning using formal rules and discrete symbols such as constants, variables, and logic statements.

E.g., ∀x (Battery(x) ∧ Low(x) → Recharge(x))

In natural terms: For every entity x, if x is a battery and x is low, then x should be recharged.

  • Paraconsistent logic: A type of logic that tolerates contradictions without collapsing (i.e., in classical logic, if something false is true, then everything becomes provable. Paraconsistent logics avoid this explosion).

  • Parametric knowledge: the information encoded in a model's parameters during training

  • Grounding: The process of mapping sensor data to symbolic representations

    E.g., visual input → Cup(Object1).

  • Analytic Containment (AC): A form of paraconsistent logic where LLMs return bilateral truth values ⟨u,v⟩ to logical queries. Allen et al. (2024) (see Sections 2.1 and 3).

  • Split-Brain Syndrome: The mismatch between an LLM-generated plan and actual execution behavior in agents, due to a lack of shared symbolic structure. Zhang (2024) (see Section 4).

  • Interpretation function: In formal logic, this assigns truth values to statements (e.g., "The sky is blue" is true). It's how the logic knows what a formula means in a specific context.

What Is Symbolic Reasoning?

Symbolic reasoning works with discrete logical elements like constants, predicates, and rules.

E.g., “if something’s a battery and it’s low, recharge it.”

Here, symbolic reasoning refers to the application of explicit logic rules and structured knowledge to make decisions. In that sense, it extends traditional expert systems: it uses structured external rules, e.g., financial regulations, investment heuristics, or tax rules, to reason through cause-and-effect chains.

e.g., “If interest rates rise and the portfolio holds rate-sensitive bonds, then reduce exposure.”
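To make this concrete, here is a minimal sketch of that rule written as explicit symbols over a fact set, in the style of a tiny forward-chaining expert system. The predicate names and the rule engine are my own illustration, not Superbill's actual rule set.

```python
# A minimal sketch (not Superbill's actual rules): the bond-exposure rule
# encoded as explicit symbols over a fact set. All predicate names are
# illustrative assumptions.

facts = {"InterestRatesRising", "HoldsRateSensitiveBonds"}

rules = [
    # (premises, conclusion): if every premise is in the fact set, assert the conclusion.
    ({"InterestRatesRising", "HoldsRateSensitiveBonds"}, "ReduceExposure"),
]

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("ReduceExposure" in forward_chain(facts, rules))  # True
```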

Anyone who reads English can understand that rule, and terms like “interest rate”, “portfolio”, or “exposure” probably hold some meaning for us. Yet we often fail at sophisticated deductive, inductive, or abductive reasoning when given a collection of premises and constraints.

Logical deductions usually fall into two categories: deciding whether a statement such as “the sky is blue” can be deduced from the provided information and assigned a truth value (true, false, unknown), or selecting, from multiple choices, the solution that satisfies a given set of premises.
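As a toy illustration of both categories, the sketch below assigns a truth value to an atomic statement against a hypothetical knowledge base, and then filters multiple choices against a set of premises. Every statement, fact, and premise here is invented for illustration.

```python
# Toy illustrations of both deduction categories. The knowledge base,
# statements, and premises are invented purely for this example.

# Category 1: deduce a truth value (true / false / unknown) for a statement.
def deduce(statement, known_true, known_false):
    if statement in known_true:
        return "true"
    if statement in known_false:
        return "false"
    return "unknown"

known_true, known_false = {"SkyIsBlue"}, {"ItIsRaining"}
print(deduce("SkyIsBlue", known_true, known_false))     # true
print(deduce("MarketIsOpen", known_true, known_false))  # unknown

# Category 2: pick the choice that satisfies every given premise.
premises = [lambda x: x % 2 == 0, lambda x: x > 10]     # "x is even", "x is greater than 10"
choices = [7, 9, 12, 15]
print([c for c in choices if all(p(c) for p in premises)])  # [12]
```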

But even with these extra steps, symbolic systems aren’t perfect. They don’t handle noise well, and, like us, they break when they don’t have all the facts. Maybe that’s part of our difficulty: words are all we have been using, while humans seem to take much stronger notice of actions.

Sense → Symbolize → Plan → Act

I am proposing a deviation from the venerable Reflect-Act pattern most reasoning agents deploy when they “think deeper”, i.e., loop over and over until they find a better solution. Reflect-Act is a good start: the agent does something, then reflects, then tries again. But if the seed is wrong, it likely never reaches a good conclusion. In that way, it’s all prompt glue and no internal structure. There is no “understanding”, if that is ever achievable, no real memory, no logic.

Just a number of retry/refine cycles.

Symbolic reflection should be a state change.

I think what needs to be understood is that symbolic systems are more efficient because they don’t regenerate the whole search tree; they update just what changed. Maybe this should form the core loop of symbolic autonomy: building systems that “understand”, not just “pattern match”.

True autonomy relies on hybrid systems. Systems that combine perception nets with structured logic. Since I started writing Encyclopedia Autonomica, I have always had sensors in my agent capability stack.
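A bare-bones skeleton of that loop might look like the following. Each function body is a stand-in assumption (a fake sensor reading, a single grounding rule, the battery rule from the key terms above), not a description of Superbill's internals.

```python
# A minimal skeleton of the Sense -> Symbolize -> Plan -> Act loop.
# Every function body below is a placeholder assumption, not real agent code.

def sense():
    """Collect raw observations from sensors, APIs, or tool calls."""
    return {"battery_level": 0.12}

def symbolize(observation, beliefs):
    """Ground raw observations into symbolic assertions, touching only what changed."""
    updated = set(beliefs)
    if observation["battery_level"] < 0.2:
        updated.add("Low(Battery1)")
    else:
        updated.discard("Low(Battery1)")
    return updated

def plan(beliefs):
    """Apply explicit rules to the current belief state."""
    if {"Battery(Battery1)", "Low(Battery1)"} <= beliefs:
        return "Recharge(Battery1)"
    return None

def act(action):
    """Execute the chosen action; here it is just printed."""
    if action:
        print("executing:", action)

beliefs = {"Battery(Battery1)"}
for _ in range(3):                       # a few turns of the loop
    beliefs = symbolize(sense(), beliefs)
    act(plan(beliefs))
```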

Action in Perception

Perception isn’t passive. It’s not a camera feed waiting to be parsed. In embodied agents, perception is shaped by action.

What you do determines what you can sense.

Let’s say an agent’s belief state at time t is a set of symbolic assertions. When the agent acts, it doesn’t just affect the environment; it also reshapes the observations it receives, which in turn update the symbol map.
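Here is a hedged sketch of that idea: the chosen action determines which observations arrive, and the grounding step maps them back into symbolic assertions. The drawer-and-cup scenario and all symbol names are illustrative assumptions.

```python
# A sketch of how acting reshapes what can be observed. The scenario and all
# symbols are illustrative, not from Superbill.

beliefs = {"Closed(Drawer1)"}            # belief state at time t

def act_and_observe(action, beliefs):
    """Simulate an action and return the observations it makes available."""
    if action == "Open(Drawer1)" and "Closed(Drawer1)" in beliefs:
        # Opening the drawer exposes its contents to the perception net.
        return ["drawer_open", "cup_detected"]
    return []

def update_symbol_map(observations, beliefs):
    """Grounding: map raw observations back into symbolic assertions."""
    updated = set(beliefs)
    if "drawer_open" in observations:
        updated.discard("Closed(Drawer1)")
        updated.add("Open(Drawer1)")
    if "cup_detected" in observations:
        updated.add("Cup(Object1)")      # same grounding example as in the key terms
    return updated

observations = act_and_observe("Open(Drawer1)", beliefs)
beliefs = update_symbol_map(observations, beliefs)   # belief state at time t+1
print(sorted(beliefs))  # ['Cup(Object1)', 'Open(Drawer1)']
```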
