
Workflow

  1. Get a Generator object - usually with get_generator().
  2. Call generator.chat() to produce a ChatPipeline and ready it for generation.
  3. Call pipeline.run() to kick off generation and get your final Chat object.
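The three steps above can be sketched end-to-end. This is a minimal mock, not the real rigging implementation: the classes below are hypothetical stand-ins so the control flow is visible without a live model, and `run()` fabricates a reply instead of calling an LLM.

```python
import asyncio

class Chat:
    """Holds the full conversation after generation."""
    def __init__(self, prev, next_):
        self.prev = prev    # messages that existed before generation
        self.next = next_   # messages produced by the generator

class ChatPipeline:
    """Pre-generation step: queued messages plus configuration."""
    def __init__(self, messages):
        self.messages = messages

    async def run(self):
        # The real pipeline would invoke the generator here.
        reply = {"role": "assistant", "content": "(model reply)"}
        return Chat(prev=list(self.messages), next_=[reply])

class Generator:
    def chat(self, content):
        return ChatPipeline([{"role": "user", "content": content}])

def get_generator(identifier):
    # The real get_generator() resolves a model identifier; this ignores it.
    return Generator()

async def main():
    generator = get_generator("model-id")     # 1. get a Generator
    pipeline = generator.chat("Hello there")  # 2. build a ChatPipeline
    chat = await pipeline.run()               # 3. run to get a Chat
    return chat

chat = asyncio.run(main())
print(len(chat.prev), len(chat.next))  # 1 1
```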

ChatPipeline objects hold any messages waiting to be delivered to an LLM in exchange for a new response message. They are also where most of the power in rigging comes from: you build up a generation pipeline with options, parsing, callbacks, etc. Once prepared, the pipeline is used to produce a final Chat, which holds all messages prior to generation (.prev) and after generation (.next).

You can think of ChatPipeline objects as the configurable pre-generation step, with calls like .with_(), .apply(), .until(), .using(), etc. Once you call one of the many .run() methods, the generator produces the next message (or many messages) based on the prior context and any constraints you have in place. Once you have a Chat object, the interaction is "done" and you can inspect and operate on its messages.

Chats vs Completions

Rigging supports both Chat objects (messages with roles in a "conversation" format) and raw text completions. While we use Chat objects in most of our examples, check out the Completions section to learn more about their feature parity.

You'll often see us use functional-style chaining, as most of our utility functions return the object back to you.

```python
chat = (
    await generator.chat(...)
    .using(...)
    .until(...)
    .with_(...)
    .run()
)
```
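This kind of chaining works because each builder method returns the pipeline object itself. A minimal sketch of that pattern (hypothetical names, not rigging's actual implementation):

```python
class FluentPipeline:
    """Each configuration call mutates the pipeline and returns it,
    so calls can be chained in a single expression."""
    def __init__(self):
        self.config = {}

    def with_(self, **options):
        self.config.update(options)
        return self  # returning self is what enables chaining

    def until(self, condition):
        self.config["until"] = condition
        return self

pipeline = FluentPipeline().with_(temperature=0.5).until("stop")
print(pipeline.config)  # {'temperature': 0.5, 'until': 'stop'}
```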