Hyland’s AI agent stack pairs Hyland Agent Builder with agentic document processing to bring context-aware agents to core platforms like OnBase, Alfresco, and Nuxeo — turning document understanding into real actions. Given the company serves thousands of enterprises, including over half of the Fortune 100, reliability is essential.
In building agents, “determinism is greater than cleverness,” notes Gabriel Keith, Senior Manager of Engineering – Content Innovation Cloud at Hyland. “You can show a really cool demo to a customer and it’s pretty neat to see their eyes widen whenever you have an agent that did something really creative, but we need agents to always do what you ask them to do and never stray off course.”
To help with things like online evals and mitigating hallucinations, “Arize AX met most of our needs out of the box,” notes Keith.
About Hyland
Gabriel Keith: Hyland’s a company that’s been providing enterprise content management for almost 35 years. We have thousands of enterprise ECM customers, and we serve them with both cloud and on-premises solutions. So we have a very broad customer base.
The reason we’re talking today is we’ve more recently launched our Content Innovation Cloud—essentially, bringing AI to help our customers work with unstructured data for agentic workflows. What differentiates us in the space is our enterprise context engine and our agent mesh—capabilities we’re bringing to market first and positioning as a core part of our value proposition for enterprises. It’s a very exciting time.
Getting To Production
Gabriel Keith: There are a lot of learnings we’ve had as an organization partnering with customers. It’s not just about model quality or prompt engineering — it’s really about system behavior under real-world conditions, and the context engineering around that. When you move from a lab or a demo environment into production, agents stop being a neat little demo and start acting like what they are: distributed, probabilistic, non-deterministic systems. That’s where the challenges show up.
Building Agents
Gabriel Keith: Everyone obsesses over the LLM or the embeddings or whatever it is, but the differentiator is how you design how those systems work together. The basic stuff — retries, timeouts, monitoring, rate limiting — these are challenges every enterprise faces. For us to handle that on behalf of the agents is really valuable for customers. That’s how you determine whether it’s reliable under one request or under a thousand — whether you can actually operate at enterprise scale.
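To make the point concrete, here is a minimal sketch of the kind of reliability wrapper he is describing: bounded concurrency, a timeout, and retries with backoff around an agent call. The `call_agent` function and the limits are hypothetical placeholders, not Hyland's implementation.

```python
import asyncio
import random

# Illustrative limits; real values would come from platform configuration.
MAX_CONCURRENT = 8      # crude rate limiting via a semaphore
MAX_RETRIES = 3
CALL_TIMEOUT_S = 30

_semaphore = asyncio.Semaphore(MAX_CONCURRENT)


async def call_agent(prompt: str) -> str:
    """Stand-in for a real agent invocation (hypothetical)."""
    await asyncio.sleep(0.1)
    return f"result for: {prompt}"


async def reliable_call(prompt: str) -> str:
    """Wrap an agent call with bounded concurrency, a timeout, and retries."""
    last_error = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            async with _semaphore:                      # bound concurrency
                return await asyncio.wait_for(          # enforce a timeout
                    call_agent(prompt), timeout=CALL_TIMEOUT_S
                )
        except (asyncio.TimeoutError, ConnectionError) as exc:
            last_error = exc
            # Exponential backoff with jitter before retrying.
            await asyncio.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"agent call failed after {MAX_RETRIES} attempts") from last_error


if __name__ == "__main__":
    print(asyncio.run(reliable_call("summarize invoice 42")))
```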
Human In the Loop
Gabriel Keith: We need agents to be boringly predictable — to always do what you ask and never stray off course.
And human-in-the-loop isn’t a crutch. For the toughest workflows, you can’t expect 100% autonomy. You need escalation paths and human review so customers understand and trust what the system can do. As they get more familiar with the solution, some guardrails might come down—but building that in from the start is important.
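As an illustration of the escalation pattern Keith describes, the sketch below routes low-confidence agent decisions to a human review queue instead of executing them. The threshold, data shapes, and names are hypothetical, not part of Hyland's product.

```python
from dataclasses import dataclass

# Illustrative threshold; in practice this would be tuned per workflow.
AUTO_APPROVE_THRESHOLD = 0.9


@dataclass
class AgentDecision:
    action: str        # e.g. "approve_invoice"
    confidence: float  # model- or rule-derived score in [0, 1]
    rationale: str


def route_decision(decision: AgentDecision, review_queue: list) -> str:
    """Auto-apply high-confidence decisions; escalate the rest to a human."""
    if decision.confidence >= AUTO_APPROVE_THRESHOLD:
        return f"executed: {decision.action}"
    # Escalation path: park the decision for human review instead of acting.
    review_queue.append(decision)
    return f"escalated for review: {decision.action}"


if __name__ == "__main__":
    queue = []
    print(route_decision(AgentDecision("approve_invoice", 0.97, "matches PO"), queue))
    print(route_decision(AgentDecision("approve_invoice", 0.62, "amount mismatch"), queue))
    print(f"{len(queue)} decision(s) awaiting human review")
```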
Security & Connectivity
Gabriel Keith: Our agent platform — our AI platform — runs as a multi-tenant cloud runtime. Access is controlled through AM (Access Manager) for security, with guardrails around data.
We also have our Content Federation Service, which enables agents to talk to on-prem data. In the Content Innovation Cloud we can federate with on-prem systems, allowing agents to interact across VPC boundaries or into an on-prem environment to access the data and tools they need to make decisions.
Plugging Agents Into Existing Business Processes
Gabriel Keith: From the process-orchestration perspective, this is really unique to Hyland. We have many orchestration engines—that’s what we’ve built over the years. Our platforms—OnBase, Alfresco, Nuxeo—are orchestration engines.
Being able to integrate agents inside business processes that already exist is a big part of the battle. A lot of teams coming to an agent platform are trying to build everything from scratch; we can drive processes with agents, but we can also plug them into existing workflows to increase speed to market on decisions and assist in approvals. There’s a lot we can do with our existing solutions and orchestration layers.
Platform Components: Agent Builder, Core Runtime, Memory, Tools & RAG
Gabriel Keith: We have our Agent Builder, but it runs on top of our Agent Core Runtime. The runtime manages MCP tool registration, short- and long-term memory, and ties into our Hyland AI Platform for our inference services—so it’s all tightly integrated.
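For intuition, here is a toy sketch of a runtime that registers tools by name and keeps short-term and long-term memory separate. The class and methods are hypothetical and are not the Agent Core Runtime's actual API.

```python
from typing import Callable


class AgentRuntime:
    """Toy runtime: a tool registry plus two tiers of memory."""

    def __init__(self) -> None:
        self.tools: dict[str, Callable] = {}   # name -> callable (e.g. MCP-backed)
        self.short_term: list[dict] = []        # rolling conversation context
        self.long_term: dict[str, str] = {}     # durable facts keyed by topic

    def register_tool(self, name: str, fn: Callable) -> None:
        """Make a tool available to the agent by name."""
        self.tools[name] = fn

    def remember(self, role: str, content: str) -> None:
        """Append a turn to short-term memory, trimmed to a fixed window."""
        self.short_term.append({"role": role, "content": content})
        self.short_term = self.short_term[-20:]

    def persist(self, key: str, fact: str) -> None:
        """Promote a fact to long-term memory so later sessions can reuse it."""
        self.long_term[key] = fact


if __name__ == "__main__":
    runtime = AgentRuntime()
    runtime.register_tool("get_document_metadata", lambda doc_id: {"id": doc_id})
    runtime.remember("user", "Review invoice INV-1042")
    runtime.persist("vendor:acme:format", "two-column layout since 2024")
    print(sorted(runtime.tools), len(runtime.short_term), runtime.long_term)
```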
We support MCP for external APIs and for our own system APIs around Alfresco and OnBase. We build out MCP servers that can utilize those REST APIs, so agents can access data even if it isn’t federated. We don’t lock people into federation—we want to meet customers where they are.
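Here is a minimal sketch of that pattern, using the open-source MCP Python SDK's FastMCP helper to expose one REST-backed tool over stdio. The endpoint, auth header, and document schema are placeholders, not Hyland's actual APIs.

```python
# A minimal MCP server that wraps a hypothetical content-repository REST API.
# Requires the open-source `mcp` Python SDK and `requests`; the endpoint,
# parameters, and auth header below are illustrative placeholders.
import os

import requests
from mcp.server.fastmcp import FastMCP

API_BASE = os.environ.get("CONTENT_API_BASE", "https://example.invalid/api")
API_TOKEN = os.environ.get("CONTENT_API_TOKEN", "")

mcp = FastMCP("content-repository")


@mcp.tool()
def get_document_metadata(document_id: str) -> dict:
    """Fetch metadata for a document from the repository's REST API."""
    resp = requests.get(
        f"{API_BASE}/documents/{document_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Serve the tool over stdio so an agent runtime can register it.
    mcp.run()
```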
When we do federate with our RAG system, we have connectors for each of our platforms, which is a huge advantage. Customers don’t have to pre-process data or wrangle file shares or SharePoint to figure out what to include in RAG. Ingestion flows into our content lake, and we preserve security mappings between AM and local on-prem systems. We can cordon off access to specific documents based on your role and user within that system.
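The sketch below shows the general idea of carrying those security mappings into retrieval: each chunk keeps the roles allowed to see it, and anything the caller's roles do not permit is dropped before it ever reaches the model. The data structures and toy keyword retrieval are illustrative only.

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    text: str
    source: str
    allowed_roles: set[str] = field(default_factory=set)  # mapped from the source system's ACLs


def retrieve(query: str, index: list) -> list:
    """Toy retrieval: keyword match stands in for vector search."""
    terms = query.lower().split()
    return [c for c in index if any(t in c.text.lower() for t in terms)]


def retrieve_for_user(query: str, index: list, user_roles: set) -> list:
    """Drop any chunk the caller's roles don't permit before generation."""
    return [c for c in retrieve(query, index) if c.allowed_roles & user_roles]


if __name__ == "__main__":
    index = [
        Chunk("Q3 invoice totals by vendor", "finance/report.pdf", {"finance"}),
        Chunk("Employee onboarding checklist", "hr/onboarding.docx", {"hr", "finance"}),
    ]
    # A finance user sees the invoice report; an HR user does not.
    print([c.source for c in retrieve_for_user("invoice totals", index, {"finance"})])
    print([c.source for c in retrieve_for_user("invoice totals", index, {"hr"})])
```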
Why Evaluation Matters
Gabriel Keith: The most important part of evaluations is trust—making something repeatable and making sure it keeps doing the same thing over time, even as the data landscape changes. Processes that are built on content change as that content changes.
A simple example is invoices: if a vendor changes their invoice format, you’ve got to be able to deliver the right context and validate that when the agent runs against differently formatted data, it performs the same way. That’s what online evaluation helps us ensure.
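One way to picture such an online check: score each live extraction against the fields the downstream process needs, so a vendor's format change shows up as a drop in coverage. The field names and threshold below are illustrative, not a Hyland schema.

```python
# A minimal online-eval sketch: flag runs where the agent missed required
# fields, which is how format drift tends to surface in practice.
REQUIRED_FIELDS = ("vendor_name", "invoice_number", "total_amount", "due_date")


def field_coverage(extracted: dict) -> float:
    """Fraction of required fields the agent produced with non-empty values."""
    present = sum(1 for f in REQUIRED_FIELDS if extracted.get(f) not in (None, ""))
    return present / len(REQUIRED_FIELDS)


def evaluate_run(extracted: dict, threshold: float = 0.75) -> dict:
    """Label a single run; low-coverage runs can be flagged or escalated."""
    score = field_coverage(extracted)
    return {"score": score, "passed": score >= threshold}


if __name__ == "__main__":
    # A run against a reformatted invoice where the due date was missed.
    print(evaluate_run({"vendor_name": "Acme", "invoice_number": "INV-1042",
                        "total_amount": "1,250.00", "due_date": ""}))
```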
Why Arize AX for Online Evaluation & Observability
Gabriel Keith: We talked to many vendors with good capabilities, but we found that Arize AX met most of our needs out of the box. We don’t expect to require a lot of custom tooling to make it work well—there will always be some, no matter what you pick.
We need end-to-end ML observability—not just for agents. We also have intelligent document processing, as well as generative and predictive models. We need real-time monitoring and alerting, which is a huge benefit of going with Arize AX.
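For readers wiring this up themselves, a generic starting point is OpenTelemetry tracing, which platforms like Arize AX can ingest. The sketch below emits one span per agent run; the collector endpoint, headers, and attribute names are placeholders that would come from your own workspace configuration.

```python
# A minimal tracing sketch using the OpenTelemetry Python SDK.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://collector.example.invalid/v1/traces",  # placeholder
            headers={"api_key": "YOUR_KEY"},                         # placeholder
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-runtime")

# Emit one span per agent run so latency, errors, and attributes are queryable.
with tracer.start_as_current_span("agent.run") as span:
    span.set_attribute("agent.name", "invoice-review")      # illustrative attributes
    span.set_attribute("llm.model", "placeholder-model")
    # ... invoke the agent here ...
    span.set_attribute("agent.outcome", "escalated")
```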