Concepts and terminology
Case Study 1: How I defined terminology for two emerging AI products, aligning domain‑specific language, technical concepts, and cross‑functional stakeholders to create clarity in a rapidly evolving space.
In the early days of Dynamics 365 Customer Service Insights and Virtual Agent, we were building something that didn’t fully exist yet — a low‑code AI tool that allowed non‑technical users to design conversational experiences. The challenge wasn’t just designing the product; it was defining the language that made the product understandable. I worked across PM, engineering, research, and design to create terminology that balanced accuracy, emerging AI patterns, and user comprehension.
The Problem
The product was introducing new concepts that didn’t have established names
Existing Microsoft terminology was inconsistent or contradictory
AI and machine‑learning patterns were emerging faster than language could keep up
Domain‑specific terms (customer service, call centers, automation) did not map to new conversational AI concepts
Stakeholders had different mental models and strong opinions
Users needed more clarity and less jargon
My Role
UX Content Designer and terminologist for Dynamics 365 Customer Service Insights and Dynamics 365 Virtual Agent
Defined new terminology for emerging AI concepts
Aligned cross‑functional teams on shared language
Validated terms with users, PMs, and engineering
Created early conceptual models for conversational AI patterns
Built the linguistic foundation for the product’s information architecture
My Process
A. Reconciled Three Competing Terms for the Same Underlying Concept
I was working across two emerging products and one established AI platform, each with its own vocabulary:
“Cases” in Dynamics 365 Customer Service Insights (CSI) — grounded in CRM and call‑center operations
“Topics” in Dynamics 365 Virtual Agent — the building blocks of conversational design
“Intent” in Azure Cognitive Services — a machine‑learning term with a very specific technical meaning
My job was to determine whether these concepts were equivalent, adjacent, or fundamentally different, and then create a naming system that would work for Dynamics 365.
B. Evaluated Whether “Intent” Could Be Used in a Low‑Code, Non‑Technical Product
Azure’s intent was tied to:
model training
utterance classification
prediction confidence
developer‑level control
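In ML terms, an “intent” is the label a classifier predicts for a user utterance, along with a confidence score. A toy sketch of that idea (purely illustrative — this is not Azure’s actual API, and the rules and scores are invented for demonstration):

```python
def classify(utterance: str) -> tuple[str, float]:
    """Map an utterance to a predicted intent label and confidence score.

    A real system would run a trained model over the utterance;
    this sketch substitutes simple keyword rules to show the shape
    of the concept: (intent_label, prediction_confidence).
    """
    rules = {
        "password": ("ResetPassword", 0.92),
        "refund": ("RequestRefund", 0.88),
    }
    for keyword, (intent, confidence) in rules.items():
        if keyword in utterance.lower():
            return intent, confidence
    return "None", 0.0  # fallback when no intent matches

intent, confidence = classify("I forgot my password")
```

This is exactly the developer‑level framing — labels, classification, confidence thresholds — that a low‑code maker should never have to reason about.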
But the value proposition of Virtual Agent was to empower non‑technical subject‑matter experts, not data scientists.
Using “intent” would have:
introduced complexity the product didn’t expose
confused non‑technical users
created misalignment with CSI
forced us into Azure’s mental model instead of our own
This was a critical linguistic and UX decision point.
C. Defined “Topic” as the User‑Facing Concept That Bridged Both Products
I championed “topic” as the primary unit of meaning because it:
mapped cleanly to CSI’s “cases” — related cases were bundled into “topics”
was flexible enough to represent both automation of cases and the conversation design behind it
avoided the technical baggage of “intent”
aligned with how customer service teams already talked about recurring issues
scaled across the product as new features emerged
This decision became foundational to the information architecture of Virtual Agent (later Power Virtual Agents, or PVA).
D. Built a Conceptual Model That Clarified the Relationship Between the Terms
I articulated a simple, durable model:
Cases = real‑world customer problems
Topics = conversational representations of those problems
Intent = an internal ML mechanism the user never needed to see
This model helped PMs, engineers, and designers understand:
what belonged in the UI
what belonged in documentation
what belonged behind the scenes
how CSI and PVA connected without confusing users
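The layering in that model can be sketched as a tiny data structure (purely illustrative — the class names, fields, and sample data are hypothetical, not part of any product API):

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Internal ML mechanism: classifies utterances. Never shown in the UI."""
    model_id: str
    confidence_threshold: float = 0.7

@dataclass
class Case:
    """A real-world customer problem, as tracked in the CRM."""
    case_id: str
    summary: str

@dataclass
class Topic:
    """User-facing unit of conversational design: the concept makers see."""
    name: str
    trigger_phrases: list[str]
    backing_intent: Intent  # behind-the-scenes implementation detail

# Related cases bundle into a topic; the topic is what the maker works with,
# while the intent stays hidden behind it.
reset_cases = [
    Case("C-1042", "Can't reset password"),
    Case("C-1077", "Password reset link expired"),
]
reset_topic = Topic(
    name="Reset password",
    trigger_phrases=["reset my password", "forgot password"],
    backing_intent=Intent(model_id="clf-001"),
)
```

The design choice the model encodes: only `Topic` belongs in the UI, `Case` belongs to the CRM domain, and `Intent` never surfaces to the user at all.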
E. Aligned Stakeholders Across Products, Disciplines, and Mental Models
This was my first lesson in change management as well as terminology development.
I facilitated alignment across:
PMs who wanted to reuse Azure terminology
engineers who prioritized technical accuracy
designers who prioritized user comprehension
researchers who validated mental models
leadership who needed cross‑product consistency
Through rationale, examples, and user‑testing insights, I aligned the team around “topic” as the correct user‑facing term and ensured “intent” remained a backend concept.
F. Documented the Terminology Decisions to Prevent Drift
I created early naming guidance that:
defined “topic” and its boundaries
clarified when “intent” should and should not appear
established consistency across CSI, PVA, and related features
prevented future teams from reintroducing conflicting terms
This work helped build the foundation of early conversational AI terminology at Microsoft.
The Outcome
established a shared conceptual model that aligned design, PM, engineering, and leadership
reduced ambiguity and accelerated decision‑making across the product
prevented terminology drift by defining clear, durable concepts and boundaries
created a framework teams now use to evaluate new features and resolve naming conflicts
enabled the experience to scale without accumulating additional vocabulary debt
strengthened cross‑product alignment by grounding language in consistent mental models