## Documentation Index

Fetch the complete documentation index at: https://valmiio.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Action Context

Use `action_context` to send actions with user identity:

```python
from value import initialize_sync

client = initialize_sync(agent_secret="your_agent_secret")

with client.action_context(user_id="user_123", anonymous_id="anon_456") as ctx:
    # Your agent logic here
    result = process_data()
    ctx.send(
        action_name="data_processed",
        **{"value.action.description": "Processed user data"}
    )
```
## Context Parameters

- `user_id` (optional): Identified user ID
- `anonymous_id` (required): Anonymous session/request ID
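As a rough mental model (not the SDK's actual implementation), the pattern can be sketched with a plain context manager that stamps every sent action with the identity it was opened with. All names below are illustrative stand-ins:

```python
from contextlib import contextmanager

sent_actions = []  # stand-in for the SDK's transport


class _Ctx:
    """Toy action context: stamps identity onto every action it sends."""

    def __init__(self, user_id, anonymous_id):
        self.user_id = user_id
        self.anonymous_id = anonymous_id

    def send(self, action_name, **attrs):
        sent_actions.append({
            "value.action.name": action_name,
            "value.action.user_id": self.user_id,
            "value.action.anonymous_id": self.anonymous_id,
            **attrs,
        })


@contextmanager
def action_context(user_id=None, anonymous_id=None):
    # anonymous_id is required; user_id is optional
    if anonymous_id is None:
        raise ValueError("anonymous_id is required")
    yield _Ctx(user_id, anonymous_id)


with action_context(user_id="user_123", anonymous_id="anon_456") as ctx:
    ctx.send("data_processed", **{"value.action.description": "Processed user data"})

print(sent_actions[0]["value.action.user_id"])  # user_123
```

The point of the pattern is that identity is set once, at context creation, rather than repeated on every `send` call.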
## Sending Actions

Within an action context, use `ctx.send()`:

```python
with client.action_context(user_id="user_123", anonymous_id="session_abc") as ctx:
    # Send an LLM call action
    ctx.send(
        action_name="llm_call",
        **{
            "value.action.description": "Generated response",
            "model": "gpt-4",
            "input_tokens": 1000,
            "output_tokens": 500,
        }
    )

    # Send a tool call action
    ctx.send(
        action_name="tool_call",
        **{
            "value.action.description": "Called external API",
            "tool_name": "stripe_api",
        }
    )
```
## Standard Attributes

Use the `value.action.*` prefix for standard attributes:

| Attribute | Description |
|---|---|
| `value.action.name` | Action name (set automatically) |
| `value.action.description` | Human-readable description |
| `value.action.user_id` | User ID (set from context) |
| `value.action.anonymous_id` | Anonymous ID (set from context) |

Custom attributes are stored in `value.action.user_attributes`.
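To make the split concrete, here is an illustrative helper (an assumption about the behavior described above, not the SDK's verbatim code) that partitions attributes passed to `ctx.send()`: keys with the `value.action.` prefix are kept as standard attributes, and everything else is collected under `value.action.user_attributes`:

```python
def partition_attributes(attrs):
    """Split sent attributes into standard and custom buckets.

    Keys with the 'value.action.' prefix are standard; all other
    keys are gathered under 'value.action.user_attributes'.
    """
    standard, custom = {}, {}
    for key, value in attrs.items():
        if key.startswith("value.action."):
            standard[key] = value
        else:
            custom[key] = value
    standard["value.action.user_attributes"] = custom
    return standard


record = partition_attributes({
    "value.action.description": "Generated response",
    "model": "gpt-4",
    "input_tokens": 1000,
})
print(record["value.action.user_attributes"])  # {'model': 'gpt-4', 'input_tokens': 1000}
```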
## Async Usage

```python
import asyncio

from value import initialize_async

async def main():
    client = await initialize_async(agent_secret="your_agent_secret")

    with client.action_context(user_id="user_123", anonymous_id="anon_456") as ctx:
        result = await async_process()
        ctx.send(action_name="async_action")

asyncio.run(main())
```
## Auto-Instrumentation

Automatically capture LLM calls without manual instrumentation:

```python
from value import initialize_sync, auto_instrument

client = initialize_sync(agent_secret="your_agent_secret")
auto_instrument(["gemini", "langchain"])

# LLM calls are now automatically traced
from google import genai

gemini_client = genai.Client(api_key="your-key")
response = gemini_client.models.generate_content(
    model="gemini-2.5-flash",
    contents=["Hello"]
)
```
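Auto-instrumentation of this kind typically works by wrapping the provider library's call sites so each call is recorded before the result is returned. The following is a generic, self-contained sketch of that wrapping technique under assumed names (`FakeModels`, `instrument`, `captured_calls`), not the `value` SDK's actual internals:

```python
import functools

captured_calls = []  # stand-in for the SDK's trace sink


class FakeModels:
    """Stand-in for a provider client's model API."""

    def generate_content(self, model, contents):
        return f"response from {model}"


def instrument(obj, method_name, sink):
    """Replace obj.method_name with a wrapper that records each call."""
    original = getattr(obj, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        sink.append({"method": method_name, "kwargs": kwargs})
        return result

    setattr(obj, method_name, wrapper)
    return original  # keep a handle so instrumentation can be undone


models = FakeModels()
original = instrument(models, "generate_content", captured_calls)

models.generate_content(model="gemini-2.5-flash", contents=["Hello"])
print(captured_calls[0]["method"])  # generate_content

# Restoring the original method is the essence of uninstrument()
setattr(models, "generate_content", original)
```

Keeping a reference to the original method is what makes a later `uninstrument()`-style restore possible.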
## Supported Libraries

| Library | Extra | Install |
|---|---|---|
| Google Generative AI (Gemini) | `genai` | `pip install value-python[genai]` |
| LangChain | `langchain` | `pip install value-python[langchain]` |
## Disable Auto-Instrumentation

```python
from value import uninstrument

uninstrument(["gemini"])  # Disable specific libraries
uninstrument()            # Disable all
```