DIGITALOCEAN
DigitalOcean’s GenAI Platform
A zero-to-one no-code generative AI agent building experience
TL;DR
Designed, from zero, DigitalOcean’s first AI software layer: four independent products that work in concert to deliver a seamless AI agent building experience for everyone from users with no AI knowledge to those with deep industry expertise.
The Opportunity
In 2023, DigitalOcean identified two major trends in the burgeoning AI/ML infrastructure opportunity space that it wanted to take advantage of:
When it came to LLMs, inference, not training, was the long-term play for DO’s audience.
Data has gravity, and we needed to find a way to marry inference + data gravity to really make a play in the AI/ML landscape.
The GenAI Platform was developed as a way to address this opportunity space. From June 2024 to January 2025, I served on a small but quickly growing team of engineers as the sole designer to bring the GenAI Platform from a locally built PoC to a robust, all-in-one AI agent building experience.
My role on the team
I was the sole designer (yes really, literally, the only designer for 90%+ of this entire thing) on a team that started with seven engineers, one PM, and one director. The team is now over 20 engineers (30 by 25Q2), 0 PMs (I was acting PM Dec ‘24 - Mar ‘25), 1 director, 2 technical docs writers, and 1 dedicated GTM manager.
I was able to borrow a designer for a six-week stretch Aug - Oct 2024, a designer for a week to design a new hero on our welcome page for a new product feature, and a designer to deliver one feature in November 2024. I took work through our weekly design crits for feedback, and took larger initiatives through a new design UX governance review. Work was also subject to weekly executive leadership team (CEO, CTO, SVP, VP Eng) review and critique.
I completed the following product design / product management tasks over the course of the June ‘24 - Feb ‘25 timeline and am continuing to wear the hat of both PM and Lead PD until we hire PMs for the team:
Design: Concepting, user research definition and synthesis, wireframes, high res visual designs, component creation, illustration, interaction design and prototyping, documentation writing and editing, UX content strategy, IA, and probably a ton more I’m forgetting.
PM: Feature and project roadmapping definition and strategy, prioritization, PRFAQ and PRD writing, RFC contribution, analytics setup and analysis, pricing and packaging, customer support intake and triage, hands-on customer management for high value customers, GTM support, and probably a ton more I’m forgetting.
The Outcome
The GenAI Platform on DigitalOcean is an all-in-one AI agent building platform, where users can explore and compare foundation models to choose the right one for their AI agent. They can build an AI agent, then extend it by attaching resources: a knowledge base for retrieval-augmented generation (RAG); functions for completing tasks, fetching external data, or running code; other agents for routing through multi-agent crews; and guardrails for agent, user, and data protection.
Users can experiment, revise, manage, and deploy agents via two mechanisms: a free, provided chatbot interface they can embed on their website, or an endpoint they can hit from their own applications.
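As a rough sketch of that second mechanism, hitting an agent endpoint from your own application might look like the following. This is illustrative only: the endpoint URL, access key format, and payload shape are my assumptions modeled on a generic chat-completions-style API, not the platform’s documented contract.

```python
import json
from urllib import request

# Hypothetical values -- the URL and key here are placeholders,
# not DigitalOcean's documented endpoint format.
AGENT_ENDPOINT = "https://example-agent.agents.example.run/api/v1/chat/completions"
ACCESS_KEY = "your-endpoint-access-key"

def build_chat_request(user_message: str) -> request.Request:
    """Assemble an HTTP request for a chat-completions-style agent endpoint."""
    payload = {
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # flip to True for token-by-token streaming, if supported
    }
    return request.Request(
        AGENT_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ACCESS_KEY}",
        },
        method="POST",
    )

req = build_chat_request("What plans do you offer?")
```

The assembled request would then be sent with `urllib.request.urlopen(req)` (or any HTTP client), and the agent’s reply parsed from the JSON response.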
My work on the GenAI Platform began in earnest June 2024, was opened up to a private preview audience of over 500 invitees in October 2024, and released for public preview to all DigitalOcean users in January 2025.
Early access (EA) saw moderate adoption, with over 100 agents created and managed by teams whose use cases ranged from document classification and analysis to content generation to recommendation engines. Upon public preview release on January 22, 2025, we saw an immediate burst of adoption and are seeing the following adoption and growth numbers:
Feb 2025 (3 weeks after release): 3,500 agents / >1B tokens used between agents and knowledge bases
Mar 2025 (5 weeks after release): 5,500 agents / >8B tokens used between agents and knowledge bases

Work Timeline
June 2024
Onboarded to project; quick on-ramp to understand the target and likely audiences, use cases, and what the technology was and how it worked
Defined navigation strategy, content strategy, naming strategy, resource hierarchy
Translated engineering PoC to work inside DO’s cloud control panel
Designed
Welcome page (when you have no resources yet)
Index pages (when you do have resources)
Model Library
Model Playground (where you can converse with and compare up to two models to choose the one you vibe most with)
Agent creation journey
Agent management journey (Summary view, playground for experimentation and testing, attached resource management, endpoint management, insights, and settings)
Knowledge base creation journey
Knowledge base management journey (Summary view, view and manage data sources, add new data source, settings)
Embeddable chatbot
Prototyped end-to-end for Executive Leadership Team review
Prototyped demo for Deploy ‘24 presentation (first product announcement)
July 2024
Finalized demo for Deploy ‘24 Keynote
Recorded demo
Onboarded Docs writers
Onboarded new designer
Finalized billing strategy and UI/UX (net-new patterns as DO’s first true usage-based SKUs)
New feature work began
Function calling (we call this routing)
Guardrails
Welcome page redesign
Model playground redesign
User research conducted
Welcome page content strategy
Agent and KB create flows
Qualitative research - what are people building? what use cases are common?
August 2024
Split tasks with new designer - she took on role-based access controls documentation, guardrails experience, model library; I tackled feature completion for agents, knowledge bases, functions, model playground, docs, and technical review
Kicked off Model Fine-tuning requirements gathering, concepts, and high fidelity designs
Kicked off Anthropic model integration (BYOKey support), with OpenAI as an extension
Kicked off Agent Evaluations requirements gathering and concepts (DeepEval framework)
Legal, compliance, security, risk reviews and revisions
Collaborated on research plan for usability and general ux feedback with Research Lead
September 2024
Offboarded designer
Continued, then finalized agent evals designs
Finalized designs for functions, designed agent routing experience
Redesigned Agent Resource management
Redesigned Agent Endpoint management
Redesigned Model and Agent Playground (again)
Finalized designs for Anthropic BYOKey
QA for EA scope
October 2024
EA month - released product to a group of 500 invited existing users who had expressed interest in GenAI; also opened to internal teams
Designed and implemented knowledge base communication strategy, introduced new metadata
Agent evaluation revisions
Tabled Model Fine-tuning for higher priority work
Executive leadership team offsite - new scope added for next release (Jan ‘25)
new “example agents” concept: agents with pre-defined model settings, a knowledge base, and instructions, so the user can see how RAG works without having to set anything up themselves. I organized the entire design treatment of these in about four days so we could hand off the designing and testing to a small team of internal staff before handing off the rules to our engineers to put in the backend
This new feature required we redesign the Welcome page (again)
This new feature required a brand-new creation experience and strategy for these templated agents, as I anticipated we would be doing more of them in the future
November 2024
Designed example agents experience, worked with four other individuals within DO to plan the actual agents out, define and experiment with model settings, knowledge base data formatting and setup, prompt engineering for agent instructions
Oversaw another designer redesigning the Welcome page and banners to accommodate new example agents concept
New concept for endpoints: endpoint access keys
Oversaw another designer add content into the experience for account limits (tiered token rate limits)
Intake for feature requests from high value customers, EA customers
New knowledge base data source format: folder-level picking within Spaces (S3-compatible) buckets (formerly whole buckets only)
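The growing menu of knowledge base data source types (whole buckets, bucket folders, and later local file upload and web crawling) can be sketched as a hypothetical creation payload. The function and field names here are mine, for illustration only, not the platform’s actual API schema:

```python
# Hypothetical knowledge base creation payload -- field names are
# illustrative assumptions, not the platform's actual API schema.
def build_kb_payload(name: str, datasources: list[dict]) -> dict:
    """Assemble a knowledge base definition with mixed data source types."""
    allowed = {"spaces_bucket", "spaces_folder", "file_upload", "web_crawl"}
    for ds in datasources:
        if ds["type"] not in allowed:
            raise ValueError(f"unsupported data source type: {ds['type']}")
    return {"name": name, "datasources": datasources}

kb = build_kb_payload(
    "support-docs",
    [
        # Formerly only whole buckets; folder-level picking came later.
        {"type": "spaces_folder", "bucket": "docs", "path": "guides/"},
        {"type": "web_crawl", "url": "https://example.com/help"},
    ],
)
```

The design point this sketch captures is that each new source type slots into the same list, which is what let the create and manage flows stay stable as formats were added.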
December 2024
Documentation for role-based access controls, a11y, rate limits delivered
New knowledge base data source formats: local file upload, web crawling
Tapped to speak in Keynote for GenAI product launch and demo, which meant writing a script, reviewing GTM, designing and recording a demo
Then also tapped to demo Cloudways’ AI SRE agent, which meant another round of script writing, GTM review, and demo design and recording
Helped with content for www.digitalocean.com/products/gen-ai and GTM for January product launch
Finalized designs for Agent API Endpoint Access Keys; required redesign of Agent Overview page
QA
Our PM left Dec 30, so I took over the day-to-day PM role starting Jan 2.
January 2025
Release month - mad dash to the finish line
Final QA for all Public Preview requirements
Keynote rehearsals, demo recordings, all the GTM including prepping our external partner on their presentation, facilitating interviews and video editing for customer spotlights, reviewing all blog and social media content
Getting all docs into final review, including API docs
Owned the feature flag for turning the platform on for everyone the morning of the Deploy conference
Final review and flip on for file upload feature
Final review and flip on for KB indexing tracking improvements
Added DeepSeek as a model in the Model Playground, Model Library, Welcome page, agent creation, and agent management. This included a redesign (again) of the agent playground to accommodate the model’s reasoning output, plus communication moments to inform users about reasoning models: how they work, how they impact token usage and user experience, and the best use cases for a model like DeepSeek. DeepSeek was released publicly Jan 20; legal approved its addition Jan 29, and we released it fully benchmarked, designed, tested, and integrated into our platform Feb 6.
Roadmap definition and prioritization exercises
Customer incident triage and hand-off with support and solutions architects/engineers
Took over Priority Customer Requests backlog, where we work with support and account reps to identify high risk churn customers and facilitate ways to keep them on our cloud
On hiring panel for 3 PM roles
Requirements gathering, PRFAQ and PRD writing for Agent Evals (take 2), and all other roadmap items for Q1 and Q2.
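One detail from the DeepSeek work above: reasoning models interleave chain-of-thought with the final answer, which is what forced the playground redesign. A minimal sketch of separating the two for display, assuming the reasoning is wrapped in `<think>` tags as DeepSeek-R1 emits (the helper name is mine):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split a reasoning model's output into (reasoning, answer).

    Assumes reasoning is wrapped in <think>...</think>, as DeepSeek-R1
    emits; returns empty reasoning if no such block is present.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>The user wants a greeting.</think>Hello! How can I help?"
)
```

In a playground UI, the reasoning half can then be collapsed by default while its tokens still count toward usage, which is exactly the kind of communication moment the redesign had to cover.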
Remainder of Q1 2025
Hiring 2 designers, 2 eng managers, 1 user research manager
Web crawling as knowledge base data source
Redesigned empty state/welcome marketing page in control panel
OpenAI model support
Auto-indexing for new knowledge bases (comms effort)
UX improvements to function routing
Agent Evaluations (Q2 delivery)
Agent Application Templates (Q2 delivery)
Agent Insights (consulting, another designer delivering)
Agent versioning (consulting, another designer delivering)
A far from perfect UX effort
You might have noticed a distinct lack of a few critical “UX things” in the above section.
User Research
There was almost zero user research done for this platform. We relied heavily on market trends, feedback from our EA users, following best practices from industry leaders, and intuition. I can come up with a million excuses for why this was the case, but I won’t.
We should have started with users. We should have evaluated the market for what our users wanted and needed. We should have tailored our experience to our target demographic, rather than trying to capture both ends of the market: those nascent to GenAI who are just building chatbots because their boss told them to, and those trying to integrate complex multi-agent crew workflows into their existing applications using IaC tools. But we didn’t. So we are doing our best to learn now, with a robust user research effort in Q1, and to fill the analytics gaps now that we’re out in the wild.
Iterative Design
Pretty much every page in the GenAI Platform was redesigned multiple times, but almost never because of user feedback. Each redesign came because we got new feature requirements that didn’t fit, or we descoped enough content that the page no longer made sense, or a competitor started doing something we also needed for table stakes. I had the luxury of redesigning a few parts of the experience to just “feel better” after learning more about how people build agents, but there are a million and one things I want to do differently.
Starting with low fidelity concepts
I might have done a grand total of ten wireframes for this project. For some pages, I only had time to go with the first iteration. With our design system, I was able to move fast enough to treat my first draft as a high fidelity mockup, and it would have been slower to do wireframes than just drop new features and design elements into my auto-layout frames. I wish I had taken the time to do more wireframing for form pages as opposed to just building vertically.
Understanding the API experience
I admit that I was not fluent in using APIs at the start of this project. I don’t build my own software so I don’t regularly spend time outside of the UI experience. But understanding how people build agents and knowledge bases with the API specifically would have allowed me to head off some complexities we accidentally designed into the API, especially around content strategy, naming, and order of operations.
VL;DR (very long, did read)
Thanks for making it this far. You are probably either saying wow in a good way, or wowwww in a bad way, but either way, thanks for being here. This project has been a career-defining one for me in many ways. I am incredibly proud of the work I have done: becoming a subject matter expert and developing a true technical understanding not of “AI” but of how LLMs work, how agentic architectures need to be structured to be successful, how infrastructure is organized for multi-tenant “serverless” experiences, what RAG is, best practices for prompt engineering, and so much more. I feel fairly confident in saying that this is not the same type of work as designing “AI-powered SaaS tools”. I have not only designed a tool that consumes LLM output; I have also designed the tools that allow that LLM output to exist in the first place. This is no-code AI agent architecture design, testing, and deployment that didn’t exist two years ago.
But while I am immeasurably proud of being a single human being who has done all of this in eight months, I am also deeply disturbed by the environmental, socioeconomic, sociopolitical, and ethical impacts “AI” and specifically generative AI are having on our society. From datacenters being built in landlocked, climate-change impacted locations to the theft of IP and the loss of human jobs that leave people without the ability to pay their bills to survive, I know that this platform I’ve painstakingly built is contributing directly to those problems. AI can be used for good, and I remain hopeful that there are builders using the DigitalOcean GenAI Platform to do just that.