Hugging Face smolagents

How to use the SmolagentsInstrumentor to trace smolagents by Hugging Face

smolagents is a minimalist AI agent framework developed by Hugging Face that lets you create and deploy powerful agents in just a few lines of code. Its focus on simplicity and efficiency makes it easy for developers to leverage large language models (LLMs) across a range of applications.
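For example, a basic code-writing agent can be created and run in a handful of lines. This is a minimal sketch for illustration only, using the same HfApiModel shown later in this guide; the question is a placeholder:

from smolagents import CodeAgent, HfApiModel

model = HfApiModel()                      # calls the Hugging Face Inference API using your HF_TOKEN
agent = CodeAgent(tools=[], model=model)  # agent that writes and executes Python code to answer
agent.run("What is the 20th Fibonacci number?")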

Phoenix provides auto-instrumentation, allowing you to track and visualize every step and call made by your agent.

Launch Phoenix

The code samples below show different ways to integrate with smolagents, depending on how you want to use Phoenix.

Sign up for Phoenix:

Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint and API Key:

import os

# Add Phoenix API Key for tracing
PHOENIX_API_KEY = "ADD YOUR API KEY"
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

Your Phoenix API key can be found in the Keys section of your dashboard.

Install

pip install openinference-instrumentation-smolagents smolagents

Setup

Add your HF_TOKEN as an environment variable:

os.environ["HF_TOKEN"] = "<your_hf_token_value>"

Connect to your Phoenix instance using the register function.

from phoenix.otel import register

# configure the Phoenix tracer
tracer_provider = register(
  project_name="my-llm-app", # Default is 'default'
  auto_instrument=True # Auto-instrument your app based on installed OI dependencies
)
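If you prefer to instrument explicitly rather than relying on auto_instrument, you can apply the SmolagentsInstrumentor from the openinference-instrumentation-smolagents package directly to the tracer provider. A minimal sketch of the standard OpenInference pattern:

from phoenix.otel import register
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

tracer_provider = register(project_name="my-llm-app")
# Explicitly attach the smolagents instrumentor to the Phoenix tracer provider
SmolagentsInstrumentor().instrument(tracer_provider=tracer_provider)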

Create & Run an Agent

Create your Hugging Face model and run the agent; traces will be sent to Phoenix on every run.

from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    ManagedAgent,
    DuckDuckGoSearchTool,
    VisitWebpageTool,
    HfApiModel,
)

model = HfApiModel()

# Web-search agent that can call the DuckDuckGo and webpage tools
agent = ToolCallingAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=model,
)
# Wrap the agent so it can be delegated to by a manager agent
managed_agent = ManagedAgent(
    agent=agent,
    name="managed_agent",
    description="This is an agent that can do web search.",
)
# Manager agent that writes code and delegates web searches to the managed agent
manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[managed_agent],
)
manager_agent.run(
    "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
)

Observe

Now that you have tracing set up, all invocations and steps of your agent will be streamed to your Phoenix instance for observability and evaluation.
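Beyond browsing traces in the Phoenix UI, you can also pull recorded spans into a DataFrame for analysis or evaluation. This sketch assumes the full arize-phoenix client package is installed and that the endpoint and API key environment variables set earlier are in place:

import phoenix as px

# Fetch recorded spans from Phoenix as a pandas DataFrame
client = px.Client()
spans_df = client.get_spans_dataframe()
print(spans_df.head())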
