Encyclopedia Autonomica

Code Clinic | Improving LLM Response Reliability (Part 2)

Exploring Normal Computing’s “Outlines”

Jan Daniel Semrau (MFin, CAIO)
Aug 28, 2023

Part 1 of this series can be found here

Introduction

“Outlines” is an open-source library designed to be a “flexible replacement for the ‘generate’ method in the transformers library.”

The ‘generate’ method is a tool for generating text with a variety of Hugging Face models. It can be used for a variety of tasks, such as building chatbots, producing creative text formats, and translating languages.
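To make the comparison concrete, here is a minimal sketch of the greedy-decoding loop that a `generate`-style method performs. This is not the transformers API itself: the "model" is a hypothetical lookup table from the last token to next-token scores, standing in for a real neural network that scores the whole context.

```python
# Conceptual sketch of greedy decoding, NOT the Hugging Face API.
# `next_token_scores` is a hypothetical stand-in for model logits.
def greedy_generate(next_token_scores, prompt_tokens, eos="<eos>", max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = next_token_scores.get(tokens[-1], {})
        if not scores:
            break
        # Greedy decoding: always pick the highest-scoring next token.
        next_tok = max(scores, key=scores.get)
        if next_tok == eos:
            break
        tokens.append(next_tok)
    return tokens

# Toy score table: after "the" the model prefers "cat", and so on.
table = {
    "the": {"cat": 0.9, "dog": 0.1},
    "cat": {"sat": 0.8, "ran": 0.2},
    "sat": {"<eos>": 1.0},
}
print(greedy_generate(table, ["the"]))  # → ['the', 'cat', 'sat']
```

The key point: a plain `generate` loop places no constraint on what the next token can be, which is exactly the gap Outlines targets.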

This raises the question: if it is that powerful, why replace it?

Among the features of the Outlines library as of this writing are:

  1. Application of a Jinja templating engine to simplify prompt primitives

  2. Multiple-choice generation, type constraints, and dynamic stopping; we will use multiple-choice generation as the example in this post

  3. Regex-guided text generation

  4. JSON schema generation (really important for agent-to-agent communication)

  5. Interleaved completions with loops, conditionals, and custom functions

  6. Integration with Hugging Face transformers models
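The idea behind the multiple-choice feature (item 2) is that instead of letting the model emit free-form text, you score each allowed answer under the model and return the most likely one. Outlines does this by guiding decoding over the model's actual logits; the sketch below only illustrates the concept, using a hypothetical per-token probability table rather than the Outlines API.

```python
import math

# Conceptual sketch of multiple-choice constrained generation.
# `token_prob` is a hypothetical stand-in for a language model's
# next-token probabilities given some prompt.
def choose(token_prob, choices):
    """Return the choice with the highest total log-probability."""
    def score(choice):
        # Sum log-probs over the choice's tokens; unknown tokens get
        # a tiny floor probability so the log is defined.
        return sum(math.log(token_prob.get(tok, 1e-9)) for tok in choice.split())
    return max(choices, key=score)

# Toy probabilities for a prompt like:
# "Is this review positive or negative? Review: 'Great product!'"
token_prob = {"positive": 0.7, "negative": 0.2}
print(choose(token_prob, ["positive", "negative"]))  # → positive
```

Because the output is selected from a fixed set rather than sampled freely, the response is guaranteed to be one of the allowed choices, which is what makes this approach attractive for reliability.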

What we want to show here is how the Outlines library can be used to improve the reliability of responses from an LLM of our choice.

If you have come this far, thank you. Please leave a like or subscribe.

The code for this exercise can be found as usual on my GitHub

Let’s dive in.
