Data to monitoring.
Own the whole AI path: source data, model behavior, deployment, production checks, and iteration after launch.
I'm Rob Dalida — an AI Engineer at Frontline Education. I own production AI systems end to end: data, model behavior, deployment, monitoring, and the cross-functional decisions that make them useful.
I spent years inside the messy reality of school-based Medicaid claiming — IEP documentation, compliance, client operations, the workflows that software is supposed to make easier. Now I build the AI systems that actually do.
Redesigned a broad agent into orchestrator + specialists. Same prompts, far less context.
Auto-research loop took an Absence Mgmt eval from ~23% to ~80% — bounded, reversible changes only.
5 days with Snyk MCP + repo harness. All 10 critical findings cleared, triaged the rest.
Resolution Cards lifted mean relevancy from 0.912 → 0.976 across 30 support scenarios.
Last synced: Apr 27, 2026
Operating across architecture, delivery, and adoption. The work has grown into broader system ownership: defining direction, creating reusable AI patterns, and helping cross-functional teams turn ambiguous problems into production workflows.
Own the whole AI path: source data, model behavior, deployment, production checks, and iteration after launch.
Propose architecture, evaluate tradeoffs, and push decisions that make systems safer, reusable, and easier to operate.
Standardize evaluation metrics, repo harnesses, agent workflows, and pipeline practices so teams can move faster with less guesswork.
Partner across Support, Product, Legal, GTM, and Services to translate business pressure into practical AI systems.
An AI knowledge graph builder that takes Learning Center articles, embeds them, and surfaces missing links to optimize content for AI retrieval and support deflection.
A ReactFlow drag-and-drop workflow builder for data transformation — supporting Financial Cost Report calculations and no-code-style processing pipelines.
Production Zendesk SSN redaction with scanner/redactor phases, dry-run safety, backups, daily Lambda automation, and stakeholder reporting. Cleared an 88-ticket / 180-day backlog at launch.
Structured knowledge extracted from solved tickets. Lifted mean relevancy 0.912 → 0.976 and faithfulness 0.935 → 0.963, and cut relevancy failures from 16.7% to 0%.
A governed loop that scores knowledge quality, proposes one bounded change, reruns deterministic eval, and keeps it only if quality improves — otherwise restores prior state.
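The loop above is a guarded hill-climb: propose exactly one bounded change, re-score deterministically, keep only improvements, and restore prior state otherwise. A minimal sketch of that shape (all function and variable names here are illustrative, not the production code):

```python
import copy

def governed_improvement_loop(kb, score, propose_change, apply_change, max_rounds=5):
    """Guarded hill-climb over a knowledge base.

    Accepts a bounded change only if the deterministic eval score
    improves; otherwise restores the prior state.
    """
    best_score = score(kb)
    for _ in range(max_rounds):
        snapshot = copy.deepcopy(kb)   # reversible: keep prior state
        change = propose_change(kb)    # exactly one bounded change
        if change is None:             # nothing left to propose
            break
        apply_change(kb, change)
        new_score = score(kb)          # rerun the deterministic eval
        if new_score > best_score:
            best_score = new_score     # keep the improvement
        else:
            kb = snapshot              # revert to the snapshot
    return kb, best_score
```

The deep-copy snapshot is what makes every change reversible; in practice the snapshot could be a git commit or an object-store version rather than an in-memory copy.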
Connected Snyk MCP to Codex for security triage, remediation, verification, and rescans. In 5 days, dropped findings 468 → 170 and cleared all 10 criticals.
A reusable repo contract for AI-assisted engineering: clear entrypoints, ownership boundaries, verification commands, secret stop conditions, and safe handoff rules.
A machine-learning platform for expense classification — datasets, training/eval concepts, model registry patterns, and a playground for testing classification logic.
A nationwide compliance engine that sources state and federal regulatory documents, translates them into structured outputs, and helps product and engineering create business rules.
Real observability for AI support systems — anomalies, retrieval behavior, guardrails, and incidents in a single operational surface the team can open every morning.
Monitors chatbot performance and conversation signals.
A GTM and Sales pipeline intelligence platform that ranks accounts, identifies expansion opportunities, and highlights clients with upsell potential.
Ideal customer profile match
Source → target solution path
Nearby customer references
Likelihood to buy signals
Recent trigger events
Risk factor weighting
An AWS-based Certificate of Insurance pipeline that replaced an annual manual legal workflow with secure storage, scheduled processing, audit trails, and personalized email + PDF delivery.
A browser-local workflow for cleaning, normalizing, validating, and previewing OKR import files before they move into downstream planning systems.
An automation path for preparing and provisioning learning-platform users, reducing manual account setup work and improving consistency across imports.
A markdown knowledge-base pattern where raw sources stay immutable, synthesized pages stay maintainable, and agents can navigate durable context without vector-first complexity.
Agent workflows for routing tasks, preserving context, and turning repeated AI work into reliable, inspectable execution paths instead of one-off prompts.
A Codex-native workflow that turned real internal repositories into browseable, refreshable documentation, including 62 markdown pages across 3 repos.
"Rob took the time to understand our workflows, listened thoughtfully, and delivered a practical, effective solution."
"The team resolved hundreds of vulnerabilities within the first week, making an immediate reduction in risk exposure."
"Rob turned a tedious quarterly process into a clean, reliable import workflow, saving hours every quarter and giving the team confidence in the data."
"Rob brought the vision to life quickly, beautifully, and with real partnership."
"Rob brought a complex automation across the finish line with patience, care, and follow-through."
A source-grounded markdown wiki pattern for keeping internal documentation structured, reviewable, and easier to improve over time.
A repo-wiki workflow that turned real internal repositories into browseable, refreshable documentation without needing a separate AI SaaS tool.
A governed loop that proposes one bounded knowledge-base change, scores it, and keeps only improvements that pass deterministic checks.
A test of operational knowledge from solved tickets, showing higher relevancy and faithfulness than documentation-only retrieval.
A practical mental model for how LLMs become useful agent systems through memory, context, tools, prompts, planning, and evaluation.
A repository design pattern that gives AI coding agents clear entrypoints, workflow contracts, verification paths, and stop conditions.
A retrieval pattern for giving LLMs the right slices of context instead of overwhelming them with long, noisy prompts.
A specialist-agent routing pattern that reduced token usage while preserving the feeling of one visible assistant experience.
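The routing idea can be sketched in a few lines: an orchestrator picks the specialist whose domain matches the message, so each call carries only that specialist's narrow context instead of one giant prompt. In production the router is typically an LLM call itself; this keyword version is only a hypothetical illustration of the shape, and every name in it is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Specialist:
    name: str
    keywords: set          # rough domain signal for this sketch
    system_prompt: str     # the only context this specialist carries

def route(message, specialists, fallback):
    """Pick the specialist with the strongest keyword overlap,
    falling back to a generalist when nothing matches."""
    words = set(message.lower().split())
    best = max(specialists, key=lambda s: len(s.keywords & words), default=None)
    if best and best.keywords & words:
        return best
    return fallback
```

Because only the chosen specialist's prompt ships with each call, token usage drops while the user still sees a single assistant.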