16 changes: 15 additions & 1 deletion council/__init__.py
@@ -1,4 +1,18 @@
"""Init file."""
"""

A module for initializing various components of a language model-based system.

This module imports and provides the foundational building blocks for constructing and operating
language model agents, chains of agents, contexts, controllers, evaluators, and filters. Additionally, it
includes the specification for implementations of language model interfaces for Anthropic, Azure, and
OpenAI large language models (LLMs). Administrative structures like Chain Context and Skill Context are
imported for coordination of chain operations and management of skills within the agents, respectively.

Lastly, the module imports various runner objects to control the execution flow such as loops and
conditional executions that can operate either sequentially or in parallel.


"""

from .agents import Agent, AgentChain, AgentResult
from .chains import Chain, ChainBase
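The docstring above mentions runners that execute work either sequentially or in parallel. As a rough illustration of that distinction, here is a minimal stdlib-only sketch; the helper names and shapes below are assumptions for illustration, not council's actual runner API:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative helpers only -- these are NOT council's runner classes,
# just a sketch of the sequential vs. parallel execution styles described.

def run_sequential(tasks):
    """Run callables one after another, collecting results in order."""
    return [task() for task in tasks]

def run_parallel(tasks):
    """Run callables concurrently on a thread pool, preserving input order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task) for task in tasks]
        return [future.result() for future in futures]

tasks = [lambda: "controller", lambda: "evaluator", lambda: "filter"]
print(run_sequential(tasks))  # ['controller', 'evaluator', 'filter']
print(run_parallel(tasks))    # same results, computed concurrently
```

Both runners return results in submission order; the parallel variant only changes when the work happens, not what is produced.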
15 changes: 15 additions & 0 deletions council/agent_tests/__init__.py
@@ -1 +1,16 @@
"""

A module that initializes and includes the agent testing framework components.

This module imports the classes provided by the Agent testing framework which is responsible for running test cases against an AI agent to evaluate its performance. The testing framework allows for defining various test cases with specific prompts and expected outcomes, along with a suite that can run multiple test cases and aggregate their results.

Classes:
AgentTestCaseOutcome (Enum): An enumeration of possible outcomes for a test case, representing success, error, unknown, or inconclusive results.
AgentTestCaseResult: Encapsulates the outcome of a single test case execution including the test prompt, actual result, execution time, and scores generated by the applied scorers, along with any errors encountered.
AgentTestCase: Represents a single test case with a defined prompt and a list of scorers used to evaluate the agent's response.
AgentTestSuiteResult: Collects the results of all the test cases executed as part of a test suite.
AgentTestSuite: Encapsulates a collection of test cases that can be run against an agent, offering methods to add test cases individually and execute the entire suite, optionally displaying a progress bar.


"""
from .agent_tests import AgentTestCaseOutcome, AgentTestCaseResult, AgentTestCase, AgentTestSuiteResult, AgentTestSuite
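To make the relationships between these classes concrete, here is a self-contained sketch of the outcome/case/result shape the docstring describes. The field names, the `run` method, and the 0.5 score threshold are assumptions made for illustration; they mirror the class names above but are not council's actual implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch -- names mirror the docstring, but fields, methods,
# and the scoring threshold are illustrative assumptions.

class AgentTestCaseOutcome(Enum):
    SUCCESS = "success"
    ERROR = "error"
    UNKNOWN = "unknown"
    INCONCLUSIVE = "inconclusive"

@dataclass
class AgentTestCaseResult:
    prompt: str
    actual: str
    outcome: AgentTestCaseOutcome
    scores: list = field(default_factory=list)

@dataclass
class AgentTestCase:
    prompt: str
    scorers: list  # each scorer maps an answer string to a float score

    def run(self, agent) -> AgentTestCaseResult:
        answer = agent(self.prompt)
        scores = [scorer(answer) for scorer in self.scorers]
        # Assumed rule: every scorer must clear 0.5 for a success.
        outcome = (AgentTestCaseOutcome.SUCCESS
                   if all(s >= 0.5 for s in scores)
                   else AgentTestCaseOutcome.INCONCLUSIVE)
        return AgentTestCaseResult(self.prompt, answer, outcome, scores)

# Usage: an "agent" here is any prompt -> answer callable.
echo_agent = lambda prompt: prompt.upper()
case = AgentTestCase("hello", scorers=[lambda a: 1.0 if "HELLO" in a else 0.0])
result = case.run(echo_agent)
print(result.outcome)  # AgentTestCaseOutcome.SUCCESS
```

A suite, in this sketch, would simply hold a list of such cases and collect their `AgentTestCaseResult` objects into an `AgentTestSuiteResult`.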