
AI assistance that turns product context into safe action

Dot understands where the user is in the product, answers plain-language security questions, suggests next steps, supports workflow creation, and can proactively surface incidents.


MY CONTRIBUTION

  • Defined the assistant concept, name, and character

  • Designed context-aware behavior from any product surface

  • Created scenarios, prompts, response patterns, and action states

  • Added buttons, CTAs, and workflow support to drive remediation

  • Proposed proactive incident generation as part of the model

  • Wrote working specs in Figma as the product source of truth

  • Led developer review and real UX QA on implemented behavior

OVERVIEW

A context-aware AI security assistant

Dot is an AI assistant inside DoControl that helps security teams ask plain-language questions and get structured, trustworthy answers. Unlike generic chatbots, Dot understands the context of the page the user is on, so users never need to restate everything from zero.

The work was about designing an assistant that turns vague security questions into readable answers and safe next steps, with clear action paths, workflow support, and the ability to proactively surface incidents when relevant.

MY ROLE

Product Designer (End-to-end)

TEAM

Design Lead, PM, ML Engineer, 2 FE Engineers

TIMELINE

6 Months

THE PROBLEM

Security teams needed more than data; they needed understanding

DoControl gave security teams powerful visibility into their SaaS environment. But visibility alone does not create action. Teams needed a way to ask questions, get structured answers, and move toward safe remediation without switching context.

01

Signals without fast answers

Security teams had data, alerts, and risk signals across the platform, but not always a quick way to synthesize and understand what mattered most.

02

Context lost on every query

When users opened a generic chat or a separate tool to ask a question, they had to restate what they were looking at. Query context did not travel with them, so every investigation started from zero.

03

Answers without action paths

AI answers alone are not enough. Users need clear next steps, buttons, workflow support, and safe action framing to move forward with confidence.

04

Findings without follow-up

Valuable security findings should become operational follow-up. When relevant, insights should surface as incidents so teams can act on them.

DESIGN PRINCIPLES

Principles that shaped every decision

These principles guided the design from concept through implementation. They reflect what matters most in AI for security: context, action, trust, and operational value.

Context should travel with the user

Dot understands the page the user is on. From anywhere in the product, Dot already knows the relevant surface, object, and investigation context.

Action over conversation

Every response leads somewhere. Clear buttons, suggested actions, and next-step prompts drive remediation, not just conversation.

Trust before delight

Transparent reasoning, explicit limitations, Beta badge, and human-readable answers. Trust is earned through clarity, not hidden behind magic.

Visible boundaries

Users know what Dot can and cannot do. Data freshness is explicit. Mistakes are expected and feedback loops are built in.

Proactive when useful, not noisy

Dot can surface incidents and findings when relevant. Proactive assistance adds value without becoming another alert channel.

Remediation one step away

Dot can check existing workflows and help create new ones. The path from insight to action is direct and supported.

DEFINING THE ASSISTANT MODEL

From page context to safe action

The experience model defines how Dot moves from understanding context to enabling action. Each stage was designed with specific UX requirements and acceptance criteria.

01

Page Context

Dot knows where the user is

02

Suggestions

Contextual prompts offered

03

Ask

User asks a question

04

Thinking

AI processes with visible reasoning

05

Answer

Structured, readable response

06

Follow-up

Suggestions for next questions

07

Workflow

Check or create workflows

08

Incident

Proactively surface findings


RESEARCH & PRODUCT FRAMING

Product definition that lived inside the design work

The PRD was not the source of truth. The scenarios, research, state logic, UX acceptance details, and product behavior were created by me directly in Figma and used as the working spec through development.

Research created by me

User interviews, workflow observation, and query pattern analysis shaped the assistant model. I synthesized findings into actionable design requirements.

Behavior design

Every state, transition, and interaction was defined in Figma. Developers built from the design spec, not from a separate requirements document.

Scenario design

I created detailed scenarios covering entry points, question types, response patterns, follow-up flows, and error states. These became the working spec.

Figma as source of truth

The design file contained flows, states, microcopy, edge cases, and acceptance criteria. Engineering reviewed and built directly from Figma.


KEY EXPERIENCE AREAS

Five connected moments in the Dot experience

Each moment was designed to build trust, provide clarity, and enable action. The assistant feels present without being intrusive, accessible from anywhere in the product.

01

Context-aware entry from anywhere

From anywhere in the product, Dot already knows the relevant surface, object, and investigation context. Users do not need to restate everything from zero.

  • Global accessibility via sidebar and keyboard shortcut

  • Page context passed automatically to the assistant

  • Reduces cognitive load and investigation time


02

Default state and suggested prompts

The welcome state introduces Dot with security-specific example queries that demonstrate capability. These are real investigation patterns, not generic prompts.

  • Security-specific suggested prompts

  • Quick filter chips for common parameters

  • Clear assistant greeting and personality

03

Asking and answering

Responses are structured, readable, and actionable. Security recommendations are numbered, link to specific entities, and provide clear next steps.

  • Structured response formatting

  • Direct links to identities, files, and events

  • Follow-up suggestion chips


04

Thinking and transparency

When Dot processes a query, users see a collapsible thinking state that shows the AI reasoning. Power users expand it; casual users skip it but know it exists.

  • Expandable thinking state shows reasoning process

  • Flexibility for different mental models

  • Trust through transparency, not magic

05

Proactive incidents and operational follow-up

I proposed that Dot should proactively generate incidents, so the assistant could surface risk and help operationalize follow-up, not just answer questions.

  • Proactive incident generation when relevant

  • Findings become operational follow-up

  • Value without becoming another alert channel


EDGE CASES & SYSTEM STATES

Every state considered

A shipped product requires attention to every state. These edge cases were designed, specified, and validated through developer review and UX QA.


No recent queries

Empty state when conversation history is clear

New chat

Starting fresh with cleared context

Recent queries

Quick access to previous questions

Tooltip / hover behavior

Contextual help on hover

Suggestions hover

Interactive prompt suggestions

Long text handling

Truncation and expansion patterns

Modal vs fullscreen

Different viewing modes

No suggestions state

When prompts cannot be generated

MY CONTRIBUTION

End-to-end ownership

I owned the work from research through implementation. The product definition lived inside the design work. I also led developer review and real UX QA on the implemented experience, validating behavior and edge cases in the live product.

Research

Trust UX

Concept definition

Naming

Microcopy

Figma specs

Flows

Assistant character

Scenarios

Developer review

Information architecture

CLOSING REFLECTION

Making AI useful, not magical

This project was not about making AI feel magical. It was about making AI useful, bounded, context-aware, readable, and safe inside a real security workflow.


The biggest lesson: AI interfaces need to earn trust through transparency, not hide behind magic. Users do not just want answers; they want to understand why, and they need an escape hatch when the AI gets it wrong.


Looking back, the real win was not the AI itself. It was making complex security queries accessible through natural language. Dot lowered the barrier to investigation, which means more analysts can contribute to security posture, not just the experts who know the product deeply.

Thanks for reading this case study.

Designed with care at DoControl, 2026.

©2022 by dalitca.
