Chats and Messages

Chat objects hold a sequence of Message objects from before and after generation. This is the most common way we interact with LLMs, and both Chat and ChatPipeline are flexible objects that let you tune the generation process, gather structured outputs, validate parsing, perform text replacements, serialize and deserialize, fork conversations, and more.

Basic Usage

import rigging as rg

generator = rg.get_generator("claude-2.1")
chat = await generator.chat(
    [
        {"role": "system", "content": "You're a helpful assistant."},
        {"role": "user", "content": "Say hello!"},
    ]
).run()

print(chat.last)
# [assistant]: Hello!

print(f"{chat.last!r}")
# Message(role='assistant', parts=[], content='Hello!')

print(chat.prev)
# [
#   Message(role='system', parts=[], content="You're a helpful assistant."),
#   Message(role='user', parts=[], content='Say hello!'),
# ]

print(chat.message_dicts)
# [
#   {'role': 'system', 'content': "You're a helpful assistant."},
#   {'role': 'user', 'content': 'Say hello!'},
#   {'role': 'assistant', 'content': 'Hello!'}
# ]

print(chat.conversation)
# [system]: You're a helpful assistant.

# [user]: Say hello!

# [assistant]: Hello!
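
Chats can also be extended after generation. Chat.continue_ (used again in the Stripping Parts section below) starts a fresh pipeline from an existing conversation. A minimal sketch, assuming continue_ accepts the same flexible message inputs as chat():

import rigging as rg

generator = rg.get_generator("claude-2.1")
chat = await generator.chat("Say hello!").run()

# Pick up where the conversation left off with a new user message
follow_up = await chat.continue_("Now say goodbye!").run()

print(follow_up.last)
# [assistant]: Goodbye!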

Templating (apply)

You can use both ChatPipeline.apply() and ChatPipeline.apply_to_all() to substitute values for $-prefixed placeholders inside message contents, giving you fast templating support.

This functionality uses string.Template.safe_substitute under the hood.

import rigging as rg

template = (
    rg.get_generator("gpt-4")
    .chat("What is the capitol of $country?")
)

for country in ["France", "Germany"]:
    chat = await template.apply(country=country).run()
    print(chat.last)

# The capital of France is Paris.
# The capital of Germany is Berlin.
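
As the names suggest, apply() targets the most recent message, while apply_to_all() substitutes across every message in the pipeline. A small sketch for placeholders that span multiple messages:

import rigging as rg

pipeline = rg.get_generator("gpt-4").chat(
    [
        {"role": "system", "content": "Always answer in $language."},
        {"role": "user", "content": "What is the capital of $country?"},
    ]
)

# Substitute placeholders in every message, not just the last one
chat = await pipeline.apply_to_all(language="French", country="Japan").run()
print(chat.last)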

Parsed Parts

Message objects hold all of their parsed ParsedMessagePart objects inside their .parts property. These parts maintain both the instance of the parsed Rigging model object and a .slice_ property that defines exactly where in the message content the part is located.

Every time parsing occurs, these parts are re-synced: .to_pretty_xml() is called on the model, the clean content is stitched back into the message, any other slices affected by the operation are fixed up, and the .parts property is re-ordered based on where each part occurs in the message content.

import rigging as rg
from pydantic import StringConstraints
from typing import Annotated

str_strip = Annotated[str, StringConstraints(strip_whitespace=True)]

class Summary(rg.Model):
    content: str_strip

message = rg.Message(
    "assistant",
    "Sure, the summary is: <summary  > Rigging is a very powerful library </summary>. I hope that helps!"
)

message.parse(Summary)

print(message.content) # (1)!
# Sure, the summary is: <summary>Rigging is a very powerful library</summary>. I hope that helps!

print(message.parts)
# [
#   ParsedMessagePart(model=Summary(content='Rigging is a very powerful library'), slice_=slice(22, 75, None))
# ]

print(message.content[message.parts[0].slice_])
# <summary>Rigging is a very powerful library</summary>
  1. Notice how the message content was updated to fix the extra whitespace in the start tag and to apply our string-stripping annotation.
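
When a message contains multiple instances of the same model, each occurrence becomes its own part with its own slice. A small sketch reusing the Summary model above, assuming parse_set() collects every occurrence the way parse() collects the first:

message = rg.Message(
    "assistant",
    "<summary>First point</summary> and <summary>Second point</summary>"
)

summaries = message.parse_set(Summary)  # assumed API for parsing a set of models

for part in message.parts:
    print(message.content[part.slice_])
# <summary>First point</summary>
# <summary>Second point</summary>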

Stripping Parts

Because we track exactly where a parsed model is inside a message, we can cleanly remove just that portion of the content and re-sync the other parts to align with the new content. This is helpful for removing context from a conversation that you don't want to carry into future generations.

This is a very powerful primitive that lets you operate on messages more like a collection of structured models than raw text.

import rigging as rg

class Reasoning(rg.Model):
    content: str

meaning = (
    await
    rg.get_generator("claude-2.1")
    .chat(
        "What is the meaning of life in one sentence? "
        f"Document your reasoning between {Reasoning.xml_tags()} tags.",
    )
    .run()
)

# Gracefully handle missing models
reasoning = meaning.last.try_parse(Reasoning)
if reasoning:
    print("Reasoning:", reasoning.content)

# Strip parsed content to avoid sharing
# previous thoughts with the model.
without_reasons = meaning.strip(Reasoning)
print("Meaning of life:", without_reasons.last.content)

follow_up = await without_reasons.continue_(...).run()

Metadata

Both Chats and ChatPipelines support arbitrary metadata that you can use to store things like tags, metrics, and supporting data for storage, sorting, and filtering.

Metadata will carry forward from a ChatPipeline to a Chat object when generation completes, and it is preserved during serialization.

import rigging as rg

chat = (
    await
    rg.get_generator("claude-2.1")
    .chat("Hello!")
    .meta(prompt_version=1)
    .run()
).meta(user="Will")

print(chat.metadata)
# {
#   'prompt_version': 1, 
#   'user': 'Will'
# }
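
Because metadata is preserved during serialization, you can round-trip a chat through JSON. Continuing from the chat above, a minimal sketch assuming Chat is a Pydantic model with the standard model_dump_json()/model_validate_json() methods:

serialized = chat.model_dump_json()
restored = rg.Chat.model_validate_json(serialized)

print(restored.metadata)
# {
#   'prompt_version': 1,
#   'user': 'Will'
# }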

Generation Context and Additional Data

Chats maintain some additional data to understand more about the generation process: the reason generation stopped (Chat.stop_reason), token usage (Chat.usage), and any extra data the generator attaches (Chat.extra).

It's the responsibility of the generator to populate these fields, and their content will vary depending on the underlying implementation. For instance, the transformers generator doesn't provide any usage information, while the vllm generator adds metrics information to the extra field.

We intentionally keep these fields as generic as possible to allow for future expansion. You'll often find deep information about the generation process in the Chat.extra field.

import rigging as rg

pipeline = (
    rg.get_generator("gpt-4")
    .chat("What is the 4th form of water?")
)

chat = await pipeline.with_(stop=["water"]).run()

print(chat.last.content) # "The fourth form of"
print(chat.stop_reason)  # stop
print(chat.usage)        # input_tokens=17 output_tokens=5 total_tokens=22
print(chat.extra)        # {'response_id': 'chatcmpl-9UgcwYrdaVrqUXoNrMGvgxGQqS04V'}

chat = await pipeline.with_(stop=[], max_tokens=10).run()

print(chat.last.content) # "The fourth form of water is often referred to as"
print(chat.stop_reason)  # length