
GenAI Design Principles
Developing design principles and standards for the implementation of assistive AI workflows in a programmatic ad tech platform.
In accordance with confidentiality agreements, certain brand visuals have been modified for this case study.
All work presented is human-generated.
My Role
Lead Designer
Timeline
March 2025 – Present
Team Size
3 Designers
1 Researcher
Deliverables
Team Vision
Design Principles
Visual Identity
AI Components
Challenge
In March 2025, company leadership reached out to our design team with an ask:
Come up with a vision for what our ad tech platform could look like with a full suite of generative AI tools and AI-integrated workflows.
Generative AI evolves by leaps and bounds every day. Before we even started developing the visual elements of our design system, we knew we would have to address the following challenge:
How might we create a design system that scales to accommodate rapidly changing technologies that have not been invented yet?
Furthermore, AI features are already in active development, and we can't put that on hold to wait for design.
What is the most efficient way to produce a design system while actively supporting concurrent development?
As design lead, I was tasked to guide the team through the following:
Find a way to keep the team up to date on current AI trends and developing technologies
Create a shared design vision for what AI might look like in our platform
Establish a process for maintaining design cohesion despite a constantly changing field and short expected turnaround times.
Do all of the above while our engineers are actively implementing AI features as we go.

The Plan
Research — not just what AI technologies exist, but how other companies are approaching designing for AI
Design Vision — as a design team, agree on our guiding principles for AI interaction design
Align Existing Projects — heuristic evaluation for existing AI tools the platform had already started dabbling in
Establish New Design Processes — come up with a way to maintain design consistency as we lay down the tracks while the train is running
Help Product Figure Out What to Build — provide guidance on where to introduce AI into our product, pitch ideas for new workflows
Support Concurrent Feature Implementation — do all of the above while shipping designs for actively releasing AI features
Research
Topics to Cover
Industry Landscape
Generative AI, LLMs, hallucinations, tokens, context windows, agentic browsers…
The team buckled down and studied the existing technological landscape through a series of articles, research papers, YouTube videos, and presentations from experts both inside and outside the company.
Competitive Analysis
We compared 7 competitors not just on their existing AI tools, but also on how their design teams approached AI.
The AI design guidelines these companies shared would form the foundation of our own design thinking.
User Needs
Our researcher surveyed 131 participants and held in-depth interviews with 15 campaign managers to determine our users' expectations for how AI would fit into their workflows.
This data would later be combined with complexity/feasibility ratings from engineering to determine what features we should actually build.
User Needs
Before we started creating design principles, we needed to know what we would be building towards; otherwise, we would be designing in a vacuum.
As always, we asked our users what they wanted to see. The feedback was as follows:
Can have a detailed back-and-forth about specific topics
Gives different answers based on current context
Explicit confirmation before applying changes
Shows its work and backs up claims with citations


Results of user surveys, compiled by researcher Chen Zeng.
The first chart identifies the most important user tasks where AI might be useful.
The second maps out which workflows we have the technology to build.
Developing Design Principles
The Approach

We used an affinity diagram to prioritize our team's design principles.
On a series of sticky notes, we collected feedback from user surveys, design principles from competitors and research papers, and our own thoughts on what made for good AI design.
We then organized these ideas into themes and voted on which words or ideas best encapsulated each grouping. These themes would become the foundational categories for our design principles.
The data points within each category would help us determine tangible, specific examples for how to implement these themes in our own designs.
Design Principles
Here are our four final design principles.
High-Quality, Reliable Output
Output should be accurate and relevant to the user's current intent
No AI for AI's sake — Solve real problems. Only use AI if it's the right tool for the job.
Learn from feedback and context. Constantly improve usefulness of results
Accountable for Errors
Set the correct expectations. Clearly explain what the tool can and cannot do
Convey the potential consequences of user actions, especially in high-risk situations
Give users a way to move forward when the system fails or the result is not ideal
Allow users to correct and refine input and output
Save Users' Time
An AI workflow needs to be faster and easier than the same workflow without AI
Make it instantly clear how users should use a tool
Integrate AI seamlessly with existing workflows to minimize disruption
Be aware of long AI agent response times — Solutions should minimize time spent waiting
The Human is in Charge
The goal of the AI is to assist the human
AI should never take automatic action without user permission or confirmation
AI should adapt to the user's preferences, not the other way around
Never interrupt a user's workflow
Proposed Design Process
In an ideal world, we would have had the time to come up with a robust design system that covered all the components and flows we would need for all future AI products.
This is not an ideal world.
The company wanted to hit the ground running on AI, which meant there were already experimental AI tools in alpha testing and even greater asks on the horizon.
With less than a week to figure out our approach before we'd have to start designing in earnest, we agreed upon the following:
The Plan for Current Features in Alpha
We had a smattering of experimental AI features in alpha testing that either lacked design entirely or had only cursory mocks thrown together so the dev teams could start backend work.
These designs needed to be unified and brought up to spec.
For these features, we would run a heuristic evaluation, and the respective designer would then draft a second version of their designs based on our design principles.
The Go-Forward Plan for New Features
Given that we were laying down the train tracks as the train was moving, we would have to design features as they were needed.
In order to ensure consistency across all our designs, we would create a living library of shared components and do our best to document as we created them.
Components would rapidly change as product requirements changed. As such, we also established rules for how to use this library to minimize disruption to each other's work.
Heuristic Evaluation
A few features were already underway and in alpha testing. We gathered all the screens from those workflows and combined them into one Figma file for easy review (right). Then, in a Google Sheets file (below), we rated every feature against each design principle, noting whether or not the current design met it.
Wherever the designs fell short, designers would revisit them for a second iteration in accordance with the process from our "Go-Forward Plan".
We seeded our living library with components from these designs. From then on, all designs would pull from the living library if a desired component already existed, or create a new one and add it to the library if not.
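In spirit, the evaluation sheet worked like a pass/fail matrix: each feature rated against each principle, with failures queued for a second design pass. The sketch below illustrates that structure; the feature names and ratings are hypothetical, not the actual evaluation data.

```python
# Illustrative model of the heuristic-evaluation sheet: each alpha feature
# is checked against each design principle, and any failures are collected
# so the responsible designer knows what to revisit.
# "Feature X" / "Feature Y" and their ratings are made-up examples.

PRINCIPLES = [
    "High-Quality, Reliable Output",
    "Accountable for Errors",
    "Save Users' Time",
    "The Human is in Charge",
]

def rate(failures=()):
    """Build a row of the sheet: True where the design meets a principle."""
    return {p: p not in failures for p in PRINCIPLES}

ratings = {
    "Feature X": rate(failures=["Accountable for Errors"]),
    "Feature Y": rate(),  # meets all four principles
}

def needs_revision(ratings):
    """Map each feature to the principles its current design fails to meet."""
    return {
        feature: [p for p, passed in checks.items() if not passed]
        for feature, checks in ratings.items()
        if not all(checks.values())
    }

print(needs_revision(ratings))  # → {'Feature X': ['Accountable for Errors']}
```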


Product Vision
Cross-Functional Brainstorm
To determine what features we'd work on next, we conducted a cross-functional brainstorm between product, design, and engineering.
Step 1.
All stakeholders generate as many ideas as possible, one per sticky note
Step 2.
Break-out groups — engineering would rank ideas by technical feasibility, product/design would rank ideas by value to the user
Step 3.
Plot the results from Step 2 onto a decision matrix
We would then focus on the ideas that landed in the "high feasibility, high user value" quadrant.
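The quadrant step is simple to express programmatically. Below is a minimal sketch, assuming a 1–5 scale for both axes; the idea names, scores, and threshold are illustrative, not the actual brainstorm results.

```python
# Hypothetical sketch of the decision-matrix step: each idea carries a
# feasibility score from engineering and a user-value score from
# product/design, and we keep the "high feasibility, high user value"
# quadrant. All data below is made up for illustration.

ideas = {
    "Idea A": {"feasibility": 4, "user_value": 5},
    "Idea B": {"feasibility": 2, "user_value": 5},  # valuable but hard
    "Idea C": {"feasibility": 5, "user_value": 2},  # easy but low value
    "Idea D": {"feasibility": 4, "user_value": 4},
}

THRESHOLD = 3  # on a 1-5 scale, scores above this count as "high"

def high_priority(ideas, threshold=THRESHOLD):
    """Return the ideas landing in the high/high quadrant."""
    return sorted(
        name
        for name, scores in ideas.items()
        if scores["feasibility"] > threshold and scores["user_value"] > threshold
    )

print(high_priority(ideas))  # → ['Idea A', 'Idea D']
```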

The Process in Action
An example of one page of the living library is below.
The library would include a collection of components for easy reference, though due to resource limitations not every state or edge case would be mocked out.
It would also include a collection of screens to showcase previously-designed workflows, to demonstrate how these components should be used.


But what did we actually build?
Due to confidentiality agreements, I cannot discuss the specifics of the features we worked on. At a high level, our goal was to create an integrated AI assistant that fulfilled the following qualities:
Seamless
Integrates into users' existing workflows, rather than forcing users to learn a new workflow.
Does Not Replace the Human
Offers advice to the user and helps them complete their tasks, rather than automating everything.
Out of the Way
The AI components embedded in the UI remain unobtrusive. More involved flows that take up a lot of page real estate, such as the chatbot, will only appear when prompted by the user.
Context-dependent
Instead of a jack-of-all-trades generalist assistant, the AI recognizes the context within which it is performing tasks and offers specialized responses and interactions.
If you have any further questions, I am happy to discuss my work in an interview. Contact me!

The "Top 5 Underperforming Lines" widget was created by a fellow designer and is not my work. The AI Assistant panel, however, is mine.
© 2025 Eugenia Lee