# @enso-labs/agent-core

A TypeScript package for managing AI agent workflows with memory, tool execution, and conversation state management.
## Installation

```bash
npm install @enso-labs/agent-core
```

## API

### agentMemory

Adds events to the thread state memory system.
Parameters:
- `toolIntent: {intent: string; args: any} | string` - The tool intent or string identifier
- `content: string` - The content/message to store
- `state: ThreadState` - Current thread state
- `metadata?: any` - Optional metadata object
Returns: `Promise<ThreadState>` - Updated state with new event
Example:
```typescript
import { agentMemory } from '@enso-labs/agent-core';

const newState = await agentMemory(
  'user_input',
  'Hello, how are you?',
  currentState,
  { timestamp: new Date().toISOString() }
);
```

### executeTools

Executes multiple tool intents and updates the state with results.
Parameters:
- `toolIntents: ToolIntent[]` - Array of tool intents to execute
- `state: ThreadState` - Current thread state
- `tools: Tool[]` - Array of available LangChain tools
Returns: `Promise<ThreadState>` - Updated state with tool execution results
Example:
```typescript
import { executeTools } from '@enso-labs/agent-core';
import { Tool } from 'langchain/tools';

const toolIntents = [
  { intent: 'search', args: { query: 'weather today' } }
];

const updatedState = await executeTools(toolIntents, currentState, availableTools);
```

### convertStateToXML

Converts thread state to XML format for compatibility with systems expecting XML.
Parameters:
- `state: ThreadState` - The thread state to convert
Returns: `string` - XML representation of the thread state
Example:
```typescript
import { convertStateToXML } from '@enso-labs/agent-core';

const xmlString = convertStateToXML(currentState);
console.log(xmlString);
// Output: <thread>\n<event intent="user_input">Hello</event>\n</thread>
```

### agentLoop

Main orchestration function that processes user queries through the complete agent workflow.
Parameters (Object):
- `prompt: string` - User input query
- `model?: string` - Model identifier (default: `'openai:gpt-4.1-nano'`)
- `tools?: Tool[]` - Array of available tools (default: `[]`)
- `state?: ThreadState` - Current thread state (default: empty state with usage tracking)
Returns: `Promise<AgentResponse>` - Response containing content, updated state, and token usage
Example:
```typescript
import { agentLoop } from '@enso-labs/agent-core';
import type { ThreadState } from '@enso-labs/agent-core';

// Simple usage with just a prompt
const simpleResponse = await agentLoop({
  prompt: 'What is the weather like today?',
  tools: weatherTools
});

// Advanced usage with custom state
const initialState: ThreadState = {
  thread: {
    usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 },
    events: []
  }
};

const response = await agentLoop({
  prompt: 'What is the weather like today?',
  state: initialState,
  model: 'openai:gpt-4o-mini',
  tools: weatherTools
});

console.log(response.content); // AI response
console.log(response.tokens);  // Token usage stats
```

## Types

### ThreadState

Main state management structure for conversation threads.
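The package exports the real type; the sketch below only reconstructs its shape from the `initialState` example and the XML output shown earlier, so the event field names are assumptions.

```typescript
// Sketch of ThreadState, inferred from the examples in this README.
// Consult the exported type for the authoritative definition.
interface TokenUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Assumed event shape: agentMemory stores an intent, content, and optional metadata.
interface ThreadEvent {
  intent: string;   // e.g. 'user_input' or a tool name
  content: string;  // stored message or tool result
  metadata?: any;   // optional metadata passed to agentMemory
}

interface ThreadState {
  thread: {
    usage: TokenUsage;
    events: ThreadEvent[];
  };
}
```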
### AgentResponse

Response structure returned by agentLoop() containing:

- `content: string` - The AI response content
- `state: ThreadState` - Updated thread state
- `tokens?: object` - Token usage information
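A sketch of the corresponding interface, built only from the fields listed above; the inner shape of `tokens` is an assumption (it likely mirrors the thread's usage counters).

```typescript
// Sketch of AgentResponse based on the fields listed above.
interface AgentResponse {
  content: string;     // the AI response text
  state: ThreadState;  // updated thread state
  tokens?: {           // assumed to mirror the usage counters
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```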
### ToolIntent

Structure for tool execution requests:

- `intent: string` - Tool name/identifier
- `args: any` - Tool arguments
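Written out as an interface (a sketch matching the two fields above), with the same intent shape used in the executeTools example:

```typescript
// Sketch of ToolIntent matching the fields above.
interface ToolIntent {
  intent: string; // tool name/identifier, e.g. 'search'
  args: any;      // arguments forwarded to the tool
}

const intents: ToolIntent[] = [
  { intent: 'search', args: { query: 'weather today' } }
];
```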
## Error Handling

All functions include comprehensive error handling:
- Tool execution failures are captured and added to state
- LLM call failures return error messages in the response
- Invalid tool references are handled gracefully
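The snippet below is a sketch of what this means in practice. It assumes failures are reported through the returned values (an error message in `content`, error events appended to the returned state) rather than thrown, and it relies on the ThreadState shape sketched earlier.

```typescript
import { agentLoop } from '@enso-labs/agent-core';

// No tools registered here, so any tool intent the model produces
// would be an invalid tool reference and should be handled gracefully.
const response = await agentLoop({
  prompt: 'Look up the weather for a city that does not exist',
  tools: []
});

// LLM call failures come back as an error message in the response content.
console.log(response.content);

// Tool execution failures are captured as events in the updated state,
// so they can be inspected after the loop completes.
for (const event of response.state.thread.events) {
  console.log(event.intent, event.content);
}
```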
## Dependencies

- LangChain Tools for tool execution
- Internal utilities for intent classification and LLM calls
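As an illustration of the LangChain dependency, the `weatherTools` array used in the examples above could be built from `DynamicTool` instances. The import path depends on your LangChain version (`langchain/tools` in older releases, `@langchain/core/tools` in newer ones), and the tool body here is a placeholder.

```typescript
import { DynamicTool } from 'langchain/tools'; // or '@langchain/core/tools' in newer versions

// Placeholder weather tool; swap the body for a real API call.
const weatherTools = [
  new DynamicTool({
    name: 'search',
    description: 'Look up the current weather for a location',
    func: async (location: string) => `Sunny and 22°C in ${location}`
  })
];
```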
## References

- [Creating an NPM Package with TypeScript](https://medium.com/@the_nick_morgan/creating-an-npm-package-with-typescript-c38b97a793cf)