rigging.chat
Chats are used before and after generation to hold messages, and are the primary way to interact with the generator.
DEFAULT_MAX_ROUNDS = 5
module-attribute
#
Maximum number of internal callback rounds to attempt during generation before giving up.
FailMode = t.Literal['raise', 'skip', 'include']
module-attribute
#
How to handle failures in pipelines.
- raise: Raise an exception when a failure is encountered.
- skip: Ignore the error and do not include the failed chat in the final output.
- include: Mark the message as failed and include it in the final output.
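The three modes can be sketched with a small dispatcher. This is an illustrative stand-in, not rigging's implementation: `handle_failures` and the dict-based chats (with an optional `"error"` key) are hypothetical.

```python
import typing as t

FailMode = t.Literal["raise", "skip", "include"]

# Hypothetical sketch of how a pipeline might apply each FailMode to a
# batch of stand-in chats (dicts with an optional "error" key).
def handle_failures(chats: list[dict], on_failed: FailMode) -> list[dict]:
    if on_failed == "raise":
        for chat in chats:
            if chat.get("error"):
                raise chat["error"]
        return chats
    if on_failed == "skip":
        return [c for c in chats if not c.get("error")]
    # "include": mark failed chats and keep them in the output
    return [{**c, "failed": bool(c.get("error"))} for c in chats]
```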
Chat(messages: Messages, generated: Messages | None = None, generator: t.Optional[Generator] = None, **kwargs: t.Any)
#
Bases: BaseModel
Represents a completed chat conversation.
Parameters:
- messages (Messages) – The messages for the chat.
- generated (Messages | None, default: None) – The next messages for the chat.
- generator (Optional[Generator], default: None) – The generator associated with this chat.
- **kwargs (Any, default: {}) – Additional keyword arguments (typically used for deserialization).
Source code in rigging/chat.py
all: list[Message]
property
#
Returns all messages in the chat, including the next messages.
conversation: str
property
#
Returns a string representation of the chat.
error: t.Optional[t.Annotated[Exception, PlainSerializer(lambda x: str(x), return_type=str, when_used='json-unless-none')]] = Field(None, repr=False)
class-attribute
instance-attribute
#
Holds any exception that was caught during the generation pipeline.
extra: dict[str, t.Any] = Field(default_factory=dict, repr=False)
class-attribute
instance-attribute
#
Any additional information from the generation.
failed: bool = Field(False, exclude=False, repr=True)
class-attribute
instance-attribute
#
Indicates whether conditions during generation were not met. This is typically used for graceful error handling when parsing.
generated: list[Message] = Field(default_factory=list)
class-attribute
instance-attribute
#
The list of messages resulting from the generation.
generator: t.Optional[Generator] = Field(None, exclude=True, repr=False)
class-attribute
instance-attribute
#
The generator associated with the chat.
generator_id: str | None
property
#
The identifier of the generator used to create the chat.
last: Message
property
#
Alias for .all[-1]
message_dicts: list[MessageDict]
property
#
Returns the chat as a list of minimal message dictionaries.
Returns:
- list[MessageDict] – The MessageDict list.
messages: list[Message]
instance-attribute
#
The list of messages prior to generation.
metadata: dict[str, t.Any] = Field(default_factory=dict)
class-attribute
instance-attribute
#
Additional metadata for the chat.
next: list[Message]
property
#
Alias for the .generated property
params: t.Optional[GenerateParams] = Field(None, exclude=True, repr=False)
class-attribute
instance-attribute
#
Any additional generation params used for this chat.
prev: list[Message]
property
#
Alias for the .messages property
stop_reason: StopReason = Field(default='unknown')
class-attribute
instance-attribute
#
The reason the generation stopped.
timestamp: datetime = Field(default_factory=datetime.now, repr=False)
class-attribute
instance-attribute
#
The timestamp when the chat was created.
usage: t.Optional[Usage] = Field(None, repr=False)
class-attribute
instance-attribute
#
The usage statistics for the generation if available.
uuid: UUID = Field(default_factory=uuid4)
class-attribute
instance-attribute
#
The unique identifier for the chat.
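The relationships between the `all`, `last`, `next`, and `prev` properties above can be illustrated with a minimal stand-in class. This is not the real Chat model (which is a Pydantic model with many more fields), only a sketch of the aliasing:

```python
# Minimal stand-in illustrating the property aliases documented above;
# not the real rigging Chat model.
class ChatView:
    def __init__(self, messages: list, generated: list):
        self.messages = messages    # prior to generation
        self.generated = generated  # produced by generation

    @property
    def all(self) -> list:
        # all messages, including the generated ones
        return self.messages + self.generated

    @property
    def last(self):
        return self.all[-1]    # alias for .all[-1]

    @property
    def next(self) -> list:
        return self.generated  # alias for .generated

    @property
    def prev(self) -> list:
        return self.messages   # alias for .messages
```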
apply(**kwargs: str) -> Chat
#
Calls rigging.message.Message.apply on the last message in the chat with the given keyword arguments.
Parameters:
- **kwargs (str, default: {}) – The string mapping of replacements.
Returns:
- Chat – The modified Chat object.
Source code in rigging/chat.py
apply_to_all(**kwargs: str) -> Chat
#
Calls rigging.message.Message.apply on all messages in the chat with the given keyword arguments.
Parameters:
- **kwargs (str, default: {}) – The string mapping of replacements.
Returns:
- Chat – The modified Chat object.
Source code in rigging/chat.py
clone(*, only_messages: bool = False) -> Chat
#
Creates a deep copy of the chat.
Parameters:
- only_messages (bool, default: False) – If True, only the messages will be cloned. If False (default), the entire chat object will be cloned.
Returns:
- Chat – A new instance of Chat.
Source code in rigging/chat.py
continue_(messages: t.Sequence[Message] | t.Sequence[MessageDict] | Message | str) -> ChatPipeline
#
Alias for rigging.chat.Chat.fork with include_all=True.
fork(messages: t.Sequence[Message] | t.Sequence[MessageDict] | Message | MessageDict | str, *, include_all: bool = False) -> ChatPipeline
#
Forks the chat by calling rigging.chat.Chat.restart and appending the specified messages.
Parameters:
- messages (Sequence[Message] | Sequence[MessageDict] | Message | MessageDict | str) – The messages to be added to the new ChatPipeline instance.
- include_all (bool, default: False) – Whether to include the next messages in the restarted chat.
Returns:
- ChatPipeline – A new instance of ChatPipeline with the specified messages added.
Source code in rigging/chat.py
inject_system_content(content: str) -> Message
#
Injects content into the chat as a system message.
Note
If the chat is empty or the first message is not a system message, a new system message with the given content is inserted at the beginning of the chat. If the first message is a system message, the content is appended to it.
Parameters:
- content (str) – The content to be injected.
Returns:
- Message – The updated system message.
Source code in rigging/chat.py
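The insert-or-append behavior described in the note can be sketched over plain role/content dicts. This is a simplification of rigging's Message objects, and the `"\n\n"` separator is an assumption for illustration:

```python
# Sketch of the insert-or-append rule described above, using plain dicts
# in place of rigging's Message objects; the separator is an assumption.
def inject_system_content(messages: list[dict], content: str) -> dict:
    if not messages or messages[0]["role"] != "system":
        # no leading system message: insert a new one at the beginning
        messages.insert(0, {"role": "system", "content": content})
    else:
        # existing system message: append the content to it
        messages[0]["content"] += "\n\n" + content
    return messages[0]
```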
inject_tool_prompt(tools: t.Sequence[Tool]) -> None
#
Injects a default tool use prompt into the system prompt.
Parameters:
- tools (Sequence[Tool]) – A sequence of Tool objects.
Source code in rigging/chat.py
meta(**kwargs: t.Any) -> Chat
#
Updates the metadata of the chat with the provided key-value pairs.
Parameters:
- **kwargs (Any, default: {}) – Key-value pairs representing the metadata to be updated.
Returns:
- Chat – The updated chat object.
Source code in rigging/chat.py
restart(*, generator: t.Optional[Generator] = None, include_all: bool = False) -> ChatPipeline
#
Attempt to convert back to a ChatPipeline for further generation.
Parameters:
- generator (Optional[Generator], default: None) – The generator to use for the restarted chat. Otherwise the generator from the original ChatPipeline will be used.
- include_all (bool, default: False) – Whether to include the next messages in the restarted chat.
Returns:
- ChatPipeline – The restarted chat.
Raises:
- ValueError – If the chat was not created with a ChatPipeline and no generator is provided.
Source code in rigging/chat.py
strip(model_type: type[Model], fail_on_missing: bool = False) -> Chat
#
Strips all parsed parts of a particular type from the message content.
Parameters:
- model_type (type[Model]) – The type of model to strip from the chat.
- fail_on_missing (bool, default: False) – Whether to raise an exception if a message of the specified model type is not found.
Returns:
- Chat – A new Chat object with the parsed parts of the specified model type removed.
Source code in rigging/chat.py
to_df() -> t.Any
#
Converts the chat to a Pandas DataFrame.
See rigging.data.chats_to_df for more information.
Returns:
- Any – The chat as a DataFrame.
Source code in rigging/chat.py
to_elastic(index: str, client: AsyncElasticsearch, *, op_type: ElasticOpType = 'index', create_index: bool = True, **kwargs: t.Any) -> int
async
#
Converts the chat data to Elasticsearch format and indexes it.
See rigging.data.chats_to_elastic for more information.
Returns:
- int – The number of chats indexed.
Source code in rigging/chat.py
ChatList
#
Bases: list[Chat]
Represents a list of chat objects.
Inherits from the built-in list class and is specialized for storing Chat objects.
to_df() -> t.Any
#
Converts the chat list to a Pandas DataFrame.
See rigging.data.chats_to_df for more information.
Returns:
- Any – The chat list as a DataFrame.
Source code in rigging/chat.py
to_elastic(index: str, client: AsyncElasticsearch, *, op_type: ElasticOpType = 'index', create_index: bool = True, **kwargs: t.Any) -> int
async
#
Converts the chat list to Elasticsearch format and indexes it.
See rigging.data.chats_to_elastic for more information.
Returns:
- int – The number of chats indexed.
Source code in rigging/chat.py
to_json() -> list[dict[str, t.Any]]
#
ChatPipeline(generator: Generator, messages: t.Sequence[Message], *, params: t.Optional[GenerateParams] = None, watch_callbacks: t.Optional[list[WatchChatCallback]] = None)
#
Pipeline to manipulate and produce chats.
Source code in rigging/chat.py
chat: Chat = Chat(messages)
instance-attribute
#
The chat object representing the conversation.
errors_to_exclude: set[type[Exception]] = set()
instance-attribute
#
The set of exceptions to exclude from the catch list.
errors_to_fail_on: set[type[Exception]] = set()
instance-attribute
#
The set of exceptions to catch during generation if you are including or skipping failures.
ExhaustedMaxRounds is implicitly included.
generator: Generator = generator
instance-attribute
#
The generator object responsible for generating the chat.
metadata: dict[str, t.Any] = {}
instance-attribute
#
Additional metadata associated with the chat.
on_failed: FailMode = 'raise'
instance-attribute
#
How to handle failures in the pipeline unless overridden in calls.
params = params
instance-attribute
#
The parameters for generating messages.
add(messages: t.Sequence[MessageDict] | t.Sequence[Message] | MessageDict | Message | str) -> ChatPipeline
#
Appends new message(s) to the internal chat before generation.
Note
If the last message in the chat is the same role as the first new message, the content will be appended instead of a new message being created.
Parameters:
- messages (Sequence[MessageDict] | Sequence[Message] | MessageDict | Message | str) – The messages to be added to the chat. It can be a single message or a sequence of messages.
Returns:
- ChatPipeline – The updated ChatPipeline object.
Source code in rigging/chat.py
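The role-merge rule from the note can be sketched with dict messages. How rigging joins content internally is an assumption here; only the merge-on-matching-role behavior is taken from the note:

```python
# Sketch of the role-merge rule described in the note above, using plain
# dicts; the "\n" join is an assumption for illustration.
def add_message(chat: list[dict], new: dict) -> list[dict]:
    if chat and chat[-1]["role"] == new["role"]:
        # same role as the last message: append content instead of
        # creating a new message
        chat[-1] = {
            "role": new["role"],
            "content": chat[-1]["content"] + "\n" + new["content"],
        }
    else:
        chat.append(new)
    return chat
```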
apply(**kwargs: str) -> ChatPipeline
#
Clones this chat pipeline and calls rigging.chat.Chat.apply with the given keyword arguments.
Parameters:
- **kwargs (str, default: {}) – Keyword arguments to be applied to the chat.
Returns:
- ChatPipeline – A new instance of ChatPipeline with the applied arguments.
Source code in rigging/chat.py
apply_to_all(**kwargs: str) -> ChatPipeline
#
Clones this chat pipeline and calls rigging.chat.Chat.apply_to_all with the given keyword arguments.
Parameters:
- **kwargs (str, default: {}) – Keyword arguments to be applied to the chat.
Returns:
- ChatPipeline – A new instance of ChatPipeline with the applied arguments.
Source code in rigging/chat.py
catch(*errors: type[Exception], on_failed: FailMode | None = None, exclude: list[type[Exception]] | None = None) -> ChatPipeline
#
Adds exceptions to catch during generation when including or skipping failures.
Parameters:
- *errors (type[Exception], default: ()) – The exception types to catch.
- on_failed (FailMode | None, default: None) – How to handle failures in the pipeline unless overridden in calls.
- exclude (list[type[Exception]] | None, default: None) – Exception types to exclude from the catch list.
Returns:
- ChatPipeline – The updated ChatPipeline object.
Source code in rigging/chat.py
clone(*, only_messages: bool = False) -> ChatPipeline
#
Creates a clone of the current ChatPipeline instance.
Parameters:
- only_messages (bool, default: False) – If True, only the messages will be cloned. If False (default), the entire ChatPipeline instance will be cloned, including callbacks, types, tools, metadata, etc.
Returns:
- ChatPipeline – A new instance of ChatPipeline that is a clone of the current instance.
Source code in rigging/chat.py
fork(messages: t.Sequence[MessageDict] | t.Sequence[Message] | MessageDict | Message | str) -> ChatPipeline
#
Creates a new instance of ChatPipeline by forking the current chat and adding the specified messages.
This is a convenience method for calling clone().add(messages).
Parameters:
- messages (Sequence[MessageDict] | Sequence[Message] | MessageDict | Message | str) – A sequence of messages or a single message to be added to the new chat.
Returns:
- ChatPipeline – A new instance of the pipeline with the specified messages added.
Source code in rigging/chat.py
map(callback: MapChatCallback) -> ChatPipeline
#
Registers a callback to be executed after the generation process completes.
Note
You must return a list of Chat objects from the callback which will represent the state of chats for the remainder of the callbacks and the final return of control.
Parameters:
- callback (MapChatCallback) – The callback function to be executed.
Returns:
- ChatPipeline – The current instance of the pipeline.
Source code in rigging/chat.py
meta(**kwargs: t.Any) -> ChatPipeline
#
Updates the metadata of the chat with the provided key-value pairs.
Parameters:
- **kwargs (Any, default: {}) – Key-value pairs representing the metadata to be updated.
Returns:
- ChatPipeline – The updated chat object.
Source code in rigging/chat.py
prompt(func: t.Callable[P, t.Coroutine[None, None, R]]) -> Prompt[P, R]
#
Decorator to convert a function into a prompt bound to this pipeline.
See rigging.prompt.prompt for more information.
Parameters:
- func (Callable[P, Coroutine[None, None, R]]) – The function to be converted into a prompt.
Returns:
- Prompt[P, R] – The prompt.
Source code in rigging/chat.py
run(*, allow_failed: bool = False, on_failed: FailMode | None = None) -> Chat
async
#
Execute the generation process to produce the final chat.
Parameters:
- allow_failed (bool, default: False) – Ignore any errors and potentially return the chat in a failed state.
- on_failed (FailMode | None, default: None) – The behavior when a message fails to generate (used as an alternative to allow_failed).
Returns:
- Chat – The generated Chat.
Source code in rigging/chat.py
run_batch(many: t.Sequence[t.Sequence[Message]] | t.Sequence[Message] | t.Sequence[MessageDict] | t.Sequence[str] | MessageDict | str, params: t.Sequence[t.Optional[GenerateParams]] | None = None, *, on_failed: FailMode | None = None) -> ChatList
async
#
Executes the generation process across multiple input messages.
Note
Anything already in this chat pipeline will be prepended to the input messages.
Parameters:
- many (Sequence[Sequence[Message]] | Sequence[Message] | Sequence[MessageDict] | Sequence[str] | MessageDict | str) – A sequence of message sequences to be generated.
- params (Sequence[Optional[GenerateParams]] | None, default: None) – A sequence of parameters to be used for each set of messages.
- on_failed (FailMode | None, default: None) – The behavior when a message fails to generate.
Returns:
- ChatList – A list of generated Chats.
Source code in rigging/chat.py
run_many(count: int, *, params: t.Sequence[t.Optional[GenerateParams]] | None = None, on_failed: FailMode | None = None) -> ChatList
async
#
Executes the generation process multiple times with the same inputs.
Parameters:
- count (int) – The number of times to execute the generation process.
- params (Sequence[Optional[GenerateParams]] | None, default: None) – A sequence of parameters to be used for each execution.
- on_failed (FailMode | None, default: None) – The behavior when a message fails to generate.
Returns:
- ChatList – A list of generated Chats.
Source code in rigging/chat.py
run_over(*generators: Generator | str, include_original: bool = True, on_failed: FailMode | None = None) -> ChatList
async
#
Executes the generation process across multiple generators.
For each generator, this pipeline is cloned and the generator is replaced before the run call. All callbacks and parameters are preserved.
Parameters:
- *generators (Generator | str, default: ()) – A sequence of generators to be used for the generation process.
- include_original (bool, default: True) – Whether to include the original generator in the list of runs.
- on_failed (FailMode | None, default: None) – The behavior when a message fails to generate.
Returns:
- ChatList – A list of generated Chats.
Source code in rigging/chat.py
run_prompt(prompt: Prompt[P, R], /, *args: P.args, **kwargs: P.kwargs) -> R
async
#
Calls rigging.prompt.Prompt.run with this pipeline.
Warning
This method is deprecated and will be removed in a future release. Use Prompt.bind(pipeline) instead.
Source code in rigging/chat.py
run_prompt_many(prompt: Prompt[P, R], count: int, /, *args: P.args, **kwargs: P.kwargs) -> list[R]
async
#
Calls rigging.prompt.Prompt.run_many with this pipeline.
Warning
This method is deprecated and will be removed in a future release. Use Prompt.bind_many(pipeline) instead.
Source code in rigging/chat.py
run_prompt_over(prompt: Prompt[P, R], generators: t.Sequence[Generator | str], /, *args: P.args, **kwargs: P.kwargs) -> list[R]
async
#
Calls rigging.prompt.Prompt.run_over with this pipeline.
Warning
This method is deprecated and will be removed in a future release. Use Prompt.bind_over(pipeline) instead.
Source code in rigging/chat.py
then(callback: ThenChatCallback) -> ChatPipeline
#
Registers a callback to be executed after the generation process completes.
Note
Returning a Chat object from the callback will replace the current chat for the remainder of the callbacks and the return value of run(). This is optional.
Parameters:
- callback (ThenChatCallback) – The callback function to be executed.
Returns:
- ChatPipeline – The current instance of the pipeline.
Source code in rigging/chat.py
until(callback: UntilMessageCallback, *, attempt_recovery: bool = True, drop_dialog: bool = True, max_rounds: int = DEFAULT_MAX_ROUNDS) -> ChatPipeline
#
Registers a callback to participate in validating the generation process.
# Takes the next message being generated, and returns whether or not to continue
# generating new messages in addition to a list of messages to append before continuing
def callback(message: Message) -> tuple[bool, list[Message]]:
    if is_valid(message):
        return (False, [message])
    else:
        return (True, [message, ...])

await pipeline.until(callback).run()
Note
In general, your callback function should always include the message that was passed to it.
Whether these messages get used or discarded in the next round depends on attempt_recovery.
Parameters:
- callback (UntilMessageCallback) – The callback function to be executed.
- attempt_recovery (bool, default: True) – Whether to attempt recovery by continuing to append prior messages before the next round of generation.
- drop_dialog (bool, default: True) – Whether to drop the intermediate dialog of recovery before returning the final chat back to the caller.
- max_rounds (int, default: DEFAULT_MAX_ROUNDS) – The maximum number of rounds to attempt generation + callbacks before giving up.
Returns:
- ChatPipeline – The current instance of the pipeline.
Source code in rigging/chat.py
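The round loop implied by until() can be sketched as follows. This is a simplified illustration, not rigging's code: `generate` is a stand-in, the recovery and dialog-dropping behavior is omitted, and the exception type is generic (rigging raises its own exhausted-rounds error).

```python
DEFAULT_MAX_ROUNDS = 5

# Simplified sketch of the until() round loop: generate a message, ask the
# callback whether to continue, and stop once it returns False or the
# rounds run out. Recovery and dialog-dropping details are omitted.
def run_until(generate, callback, max_rounds: int = DEFAULT_MAX_ROUNDS) -> list:
    history: list = []
    for _ in range(max_rounds):
        message = generate(history)
        should_continue, to_append = callback(message)
        history.extend(to_append)
        if not should_continue:
            return history
    raise RuntimeError("exhausted max rounds")  # rigging raises its own error type here
```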
until_parsed_as(*types: type[ModelT], attempt_recovery: bool = False, drop_dialog: bool = True, max_rounds: int = DEFAULT_MAX_ROUNDS) -> ChatPipeline
#
Adds the specified types to the list of types which should successfully parse before the generation process completes.
Parameters:
- *types (type[ModelT], default: ()) – The type or types of models to wait for.
- attempt_recovery (bool, default: False) – Whether to attempt recovery if parsing fails by providing validation feedback to the model before the next round.
- drop_dialog (bool, default: True) – Whether to drop the intermediate dialog of recovery efforts before returning the final chat to the caller.
- max_rounds (int, default: DEFAULT_MAX_ROUNDS) – The maximum number of rounds to try to parse successfully.
Returns:
- ChatPipeline – The updated ChatPipeline object.
Source code in rigging/chat.py
using(*tools: Tool, force: bool = False, attempt_recovery: bool = True, drop_dialog: bool = False, max_rounds: int = DEFAULT_MAX_ROUNDS, inject_prompt: bool | None = None) -> ChatPipeline
#
Adds a tool or a sequence of tools to participate in the generation process.
Parameters:
- *tools (Tool, default: ()) – The tool or sequence of tools to be added.
- force (bool, default: False) – Whether to force the use of the tool(s) at least once.
- attempt_recovery (bool, default: True) – Whether to attempt recovery if the tool(s) fail by providing validation feedback to the model before the next round.
- drop_dialog (bool, default: False) – Whether to drop the intermediate dialog of recovery efforts before returning the final chat to the caller.
- max_rounds (int, default: DEFAULT_MAX_ROUNDS) – The maximum number of rounds to attempt recovery.
- inject_prompt (bool | None, default: None) – Whether to inject the tool guidance prompt into a system message. Overrides self.inject_tool_prompt if provided.
Returns:
- ChatPipeline – The updated ChatPipeline object.
Source code in rigging/chat.py
watch(*callbacks: WatchChatCallback, allow_duplicates: bool = False) -> ChatPipeline
#
Registers a callback to monitor any chats produced.
Parameters:
- *callbacks (WatchChatCallback, default: ()) – The callback functions to be executed.
- allow_duplicates (bool, default: False) – Whether to allow (seemingly) duplicate callbacks to be added.
Returns:
- ChatPipeline – The current instance of the pipeline.
Source code in rigging/chat.py
with_(params: t.Optional[GenerateParams] = None, **kwargs: t.Any) -> ChatPipeline
#
Assign specific generation parameter overloads for this chat.
Note
This will trigger a clone if overload params have already been set.
Parameters:
- params (Optional[GenerateParams], default: None) – The parameters to set for the chat.
- **kwargs (Any, default: {}) – An alternative way to pass parameters as keyword arguments.
Returns:
- ChatPipeline – A new instance of ChatPipeline with the updated parameters.
Source code in rigging/chat.py
wrap(func: t.Callable[[CallableT], CallableT]) -> ChatPipeline
#
Helper for rigging.generator.base.Generator.wrap.
Parameters:
- func (Callable[[CallableT], CallableT]) – The function to wrap the calls with.
Returns:
- ChatPipeline – The current instance of the pipeline.
Source code in rigging/chat.py
MapChatCallback
#
Bases: Protocol
__call__(chats: list[Chat]) -> t.Awaitable[list[Chat]]
#
Passed the finalized chats to process. Can replace chats in the pipeline by returning new chat objects.
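A conforming map callback is just an async function over the list of chats. The tagging logic and the dict stand-ins below are illustrative, not part of rigging:

```python
import asyncio

# Illustrative map callback: receives every finalized chat and may return
# replacements. Plain dicts stand in for Chat objects here.
async def tag_chats(chats: list[dict]) -> list[dict]:
    return [{**chat, "reviewed": True} for chat in chats]

chats = asyncio.run(tag_chats([{"id": 1}, {"id": 2}]))
```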
UntilMessageCallback
#
Bases: Protocol
__call__(message: Message) -> tuple[bool, list[Message]]
#
Passed the next message, returns whether or not to continue and an optional list of messages to append before continuing.
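A conforming until-callback might look like the following. The Message stand-in and the "DONE" check are assumptions for illustration; note that the message passed in is always included in the returned list, as the until() documentation recommends:

```python
# Minimal stand-in for rigging's Message, for illustration only.
class Message:
    def __init__(self, content: str):
        self.content = content

# Conforming until-callback: keep generating (True) until the reply
# contains "DONE", always including the message that was passed in.
def until_done(message: Message) -> tuple[bool, list[Message]]:
    if "DONE" in message.content:
        return (False, [message])
    return (True, [message])
```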
WatchChatCallback
#
Bases: Protocol