Encyclopedia Autonomica

Beyond GPT: The Rise of Autonomous AI Agents in Technology

Attention companies! Don't get caught flat-footed by the next stage of AI.

Jan Daniel Semrau (MFin, CAIO)
Feb 14, 2024

While large language models (LLMs) like ChatGPT are impressive, they're just the opening act. The real game-changer is coming: autonomous agents. And I'm not the only one who thinks so; OpenAI agrees:

https://www.theinformation.com/articles/openai-shifts-ai-battleground-to-software-that-operates-devices-automates-tasks

I am in this space because I clearly see that these AI systems are not just clever chatbots: we will be building independent workhorses capable of planning, executing, and adapting to achieve our goals. Think of them as having the powerful brain of an LLM, plus memory, planning and reasoning, “arms and legs” to use tools, and a workflow that orchestrates it all.
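To make that concrete, here is a minimal, purely illustrative sketch of such a loop in Python. The call_llm stub, the toy tools, and the TOOL/FINAL convention are placeholders of my own, not any particular framework's API:

```python
# A minimal, illustrative agent loop: LLM "brain" + memory + tools + orchestration.
# call_llm() is a stand-in for any chat-completion API; the tools are toy examples.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion endpoint)."""
    return "FINAL: done"  # a real model would return a plan, a tool request, or an answer

TOOLS: dict[str, Callable[[str], str]] = {
    "search_docs": lambda query: f"(stub) top result for '{query}'",
    "run_report": lambda name: f"(stub) report '{name}' generated",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory: list[str] = []                      # observations gathered so far
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Memory: {memory}\n"
            f"Tools: {list(TOOLS)}\n"
            "Reply with 'TOOL <name> <arg>' or 'FINAL <answer>'."
        )
        decision = call_llm(prompt)             # the LLM plans the next step
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL").strip(": ").strip()
        _, name, arg = decision.split(" ", 2)   # the LLM asked to use a tool
        observation = TOOLS[name](arg)          # "arms and legs": execute the tool
        memory.append(observation)              # remember the result for the next step
    return "Stopped after max_steps without a final answer."

print(run_agent("Summarise last quarter's sales"))
```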

Next-Gen Agent Tech

Here's what I think will happen next:

  • Autonomous agents will be mainstream in 3-5 years. It’s time to start preparing.

  • Agents can automate entire workflows → freeing up your team for more strategic tasks.

  • Agents can simulate at scale → Testing/validating products, services, and scenarios will be faster and cheaper.

  • RPA (robotic process automation) will be tech debt soon.

I think the impact will be massive. Therefore start preparing your workforce and architecture now.

To be clear, I believe:

🔹 Large corporations will shift about 25% of their IT budget to AI
🔹 70% of corporations have already launched some kind of GenAI initiative.
🔹 GenAI activities are shifting from consumer apps to B2B (YC is already ahead here)
🔹 Testing AI remains a significant challenge, presenting opportunities
🔹 Hallucinations, safety, bias, and fairness are critical aspects to validate, especially with LLMs.
🔹 Companies are transitioning towards agents from simple LLM applications.
🔹 Even though manual processes also suffer from human error, AI will undergo more thorough testing than those processes ever did, hallucinations included.
🔹 I emphasized the importance of AI being self-aware to prevent hallucinations and ensure coherence. Memory will be an important stepping stone to provide context (Memory and new controls for ChatGPT (openai.com)); a minimal sketch follows after this list.
🔹 Cost and repurposing of data science teams emerged as the main discussion topics.
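As a rough illustration of memory as a stepping stone for context, here is a small sketch in the spirit of ChatGPT's Memory feature. The file-based store and the remember/build_messages helpers are hypothetical names of my own, not OpenAI's API:

```python
# Illustrative only: persisting simple user facts and injecting them as context.
# The storage format and file name are made up for this sketch.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # hypothetical local store

def remember(fact: str) -> None:
    """Append one fact about the user to the persistent memory store."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def build_messages(user_prompt: str) -> list[dict]:
    """Inject remembered facts as system context ahead of the user's prompt."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    system = "Known context about the user:\n" + "\n".join(f"- {f}" for f in facts)
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_prompt}]

remember("Prefers quarterly reports in EUR")
print(build_messages("Draft the Q1 summary"))
```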

What can you do?

  1. Prepare your architecture: enable two-way communication between LLMs and your internal systems, as sketched below.
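As a rough sketch of what that two-way communication can look like, the snippet below uses the OpenAI Python SDK's function-calling interface: the model asks to call into an internal system, and the result flows back into the conversation. The internal lookup get_invoice_status, the invoice example, and the model name are illustrative assumptions, not a prescription:

```python
# Two-way communication sketch: the LLM requests an internal lookup, and the
# result is fed back so it can answer the user. Assumes the openai SDK and an
# API key in the environment; a production version would check for tool_calls.
import json
from openai import OpenAI

client = OpenAI()

def get_invoice_status(invoice_id: str) -> dict:
    """Stand-in for a call into your internal ERP / billing system."""
    return {"invoice_id": invoice_id, "status": "paid"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the payment status of an invoice.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Has invoice INV-1042 been paid?"}]

# 1) Outbound: the LLM decides it needs data from an internal system.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]
result = get_invoice_status(**json.loads(call.function.arguments))

# 2) Inbound: return the internal result so the LLM can answer the user.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```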
