Encyclopedia Autonomica

Reasoning with Graph Encoding: Techniques and Applications in AI

Structuring Data for Earthquake Risk Evaluation in Japan

Jan Daniel Semrau (MFin, CAIO)
Aug 22, 2024

Japan keeps me busy. That’s a good thing while I am enjoying the recovery. Fortunately, the Megaquake didn’t happen (yet…knock on wood), the warning has been lifted, and Prime Minister Kishida stepped down. Also, the Japanese stock market had a hiccup.

In my previous post, I introduced graph structures as a reasoning problem: optimized pathfinding and adding probabilities (weights) to an existing graph. While that sounds simple, these graphs can be exceedingly complex, and implementing applications like shortest-path algorithms makes things even more intricate.

Hence, I’d argue it is easy to lose sight of what matters.

What do we want to achieve? The idea is to convert our task into a graph structure, provide this graph to the agent, and then have the agent reason over it and return a useful reply to the user. That means the cognitive agent should do the following.

Construct a graph

  1. Receive a prompt

  2. Generate all relevant entities involved in answering the prompt

  3. Understand and define all relationships between these entities, and then

  4. Construct the most efficient graph structure

Reason over the graph

  1. Receive a task in the form of a prompt

  2. Analyze the structure of the graph relative to the task

  3. Predict the most likely response based on reasoning over the graph.
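The two phases above can be sketched as a minimal pipeline. The `extract_entities` and `extract_relationships` helpers below are hypothetical stand-ins for LLM calls, hard-coded here so the sketch runs on its own; a real agent would generate them from the prompt.

```python
# Minimal sketch of the two-phase agent loop: construct a graph, then
# reason over it. The extract_* helpers are hypothetical LLM-call stubs.

def extract_entities(prompt: str) -> list[str]:
    # In a real agent, an LLM call would generate these from the prompt.
    return ["Travelers", "Japan", "Earthquakes"]

def extract_relationships(entities: list[str]) -> list[tuple[str, str, str]]:
    # (subject, relation, object) triples; also an LLM call in practice.
    return [("Travelers", "plan_to_visit", "Japan"),
            ("Japan", "prone_to", "Earthquakes")]

def construct_graph(prompt: str) -> dict[str, list[tuple[str, str]]]:
    """Phase 1: build an adjacency-list graph from the prompt."""
    graph: dict[str, list[tuple[str, str]]] = {e: [] for e in extract_entities(prompt)}
    for subj, rel, obj in extract_relationships(list(graph)):
        graph[subj].append((rel, obj))
    return graph

def reason_over_graph(graph: dict, task: str) -> str:
    """Phase 2: analyze the graph relative to the task (stubbed as a
    flat list of facts that would be handed to the LLM)."""
    facts = [f"{s} --{r}--> {o}" for s, edges in graph.items() for r, o in edges]
    return f"Task: {task}\n" + "\n".join(facts)

g = construct_graph("Assess the risk of earthquakes for travelers visiting Japan.")
print(reason_over_graph(g, "earthquake risk assessment"))
```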

Let’s do an example first.

A suitable prompt could be this.

“Assess the risk of earthquakes for travelers planning to visit Japan.”

The agent then identifies the following entities:

  • Travelers

  • Japan

  • Earthquakes

  • Travel Destinations (e.g., Tokyo, Osaka, Kyoto)

  • Seismic Activity Data

  • Safety Measures

  • Emergency Services

  • Historical Earthquake Data

  • Travel Advisories

And then we can define the relationships between them:

  • Travelers are planning to visit Japan.

  • Japan is prone to earthquakes due to its location on the Pacific Ring of Fire.

  • Travel Destinations like Tokyo, Osaka, and Kyoto have varying levels of earthquake risk.

  • Seismic Activity Data provides information on recent and historical earthquakes.

  • Safety Measures include building codes, emergency kits, and evacuation plans.

  • Emergency Services are responsible for responding to earthquake-related incidents.

  • Historical Earthquake Data helps predict future seismic activity.

  • Travel Advisories provide current information on safety and risk levels.

If you express these relationships in a graph, it could look like this.

The length of the edges doesn’t indicate a difference in the quality of the relationship.
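The entities and relationships above can be captured in a plain adjacency structure, which is all the later encoding steps need. This is a dependency-free sketch using (subject, relation, object) triples; the relation labels are my own shorthand for the bullet points above.

```python
# The earthquake-risk entities and relationships above, expressed as
# (subject, relation, object) triples and folded into an adjacency list.
triples = [
    ("Travelers", "planning_to_visit", "Japan"),
    ("Japan", "prone_to", "Earthquakes"),
    ("Travel Destinations", "have_varying", "Earthquake Risk"),
    ("Seismic Activity Data", "informs_about", "Earthquakes"),
    ("Safety Measures", "mitigate", "Earthquakes"),
    ("Emergency Services", "respond_to", "Earthquakes"),
    ("Historical Earthquake Data", "helps_predict", "Seismic Activity"),
    ("Travel Advisories", "inform", "Travelers"),
]

adjacency: dict[str, list[tuple[str, str]]] = {}
for subj, rel, obj in triples:
    adjacency.setdefault(subj, []).append((rel, obj))
    adjacency.setdefault(obj, [])  # ensure every node appears, even leaf nodes

print(len(adjacency), "nodes,", len(triples), "edges")
```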

Graph Encoders

The above graph is already quite useful. How can we provide a data structure that describes this graph well to the agent? The background here is that we need an efficient way to create embeddings or tokenized representations of the graph, so it is easy for a vanilla LLM as part of our cognitive agent to understand it.

Of course, it is possible to fine-tune an LLM specifically on understanding graphs. This post, and my research approach in general, is about using vanilla LLMs that can be safely integrated into a standard corporate workflow. I hold the opinion that we will see model certifications in the same way we see SOC 3 compliance on Amazon AWS.

Graph Encoders are tasked with encoding graphs so they are understandable by LLMs. Common implementations include Graph Convolutional Networks, Gated Graph Neural Networks, and Graph Attention Networks. The goal is to be able to reason over the structure of the graph: to identify whether there is an edge between two nodes, whether the graph contains cycles, or whether there is a path through a given set of nodes and edges.
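The structural questions just listed (edge existence, cycle detection, path queries) are classic graph algorithms; it helps to see what the agent is ultimately being asked to reproduce. A minimal sketch over a plain adjacency-list dict:

```python
from collections import deque

def has_edge(adj: dict, u: str, v: str) -> bool:
    """Is there a directed edge u -> v?"""
    return v in adj.get(u, [])

def has_path(adj: dict, src: str, dst: str) -> bool:
    """BFS: is dst reachable from src?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def has_cycle(adj: dict) -> bool:
    """DFS with three colors: detect a cycle in a directed graph."""
    color: dict[str, str] = {}  # node -> "gray" (in progress) or "black" (done)
    def visit(node: str) -> bool:
        color[node] = "gray"
        for nxt in adj.get(node, []):
            if color.get(nxt) == "gray":   # back edge found -> cycle
                return True
            if nxt not in color and visit(nxt):
                return True
        color[node] = "black"
        return False
    return any(visit(n) for n in adj if n not in color)

adj = {"Travelers": ["Japan"], "Japan": ["Earthquakes"], "Earthquakes": []}
print(has_edge(adj, "Travelers", "Japan"),
      has_path(adj, "Travelers", "Earthquakes"),
      has_cycle(adj))
```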

A great example comes from the Google DeepMind team in their paper “Understanding Transformer Reasoning Capabilities via Graph Algorithms”.

(Source: page 2 of the paper)

The role of the graph encoder is to transform the structural elements of a graph—nodes and edges—into a sequential representation. For the graph encoding, the research team introduced two general functions.
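The paper's exact definitions are beyond this excerpt, but the general pattern is a node-encoding function and an edge-encoding function that together flatten the graph into a token sequence. A hypothetical sketch of that pattern (the function names and output format here are my own, not the paper's):

```python
# Hypothetical sketch of a node/edge encoding pair that flattens a graph
# into a text sequence a vanilla LLM can consume in its prompt.

def encode_nodes(nodes: list[str]) -> str:
    # One function handles the node set...
    return "Nodes: " + ", ".join(nodes) + "."

def encode_edges(edges: list[tuple[str, str]]) -> str:
    # ...and a second one handles the edge set.
    return "Edges: " + "; ".join(f"{u} -> {v}" for u, v in edges) + "."

def encode_graph(nodes: list[str], edges: list[tuple[str, str]]) -> str:
    """Sequential representation of the graph, ready to prepend to a task prompt."""
    return encode_nodes(nodes) + " " + encode_edges(edges)

prompt_text = encode_graph(
    ["Travelers", "Japan", "Earthquakes"],
    [("Travelers", "Japan"), ("Japan", "Earthquakes")],
)
print(prompt_text)
```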

This post is for paid subscribers
