
How Notion AI Is Integrated Into Workflows (And What They Got Right)

Notion AI isn't a chatbot bolted onto a doc tool. It's embedded into the editing workflow at the precise moments where AI adds value. Here's the integration pattern.

Michael · April 7, 2026 · 4 min read

Where AI Is Used

Notion AI is embedded in four specific workflow contexts:

1. Inline generation (space bar trigger): Pressing space in an empty block opens an AI prompt. The user types a request ("write an intro for a product brief on X") and the AI writes directly into the document. The output streams token by token — the user sees text appearing as the model generates it.

2. Selection-based actions: Selecting existing text reveals an AI menu: Improve writing, Fix spelling and grammar, Make shorter, Make longer, Change tone, Translate, Summarize. These are bounded, specific operations on selected content — not open-ended prompts.

3. Q&A over the workspace: Notion AI can answer questions about the content of your entire workspace ("what did we decide about the pricing strategy?"). This uses vector search over embedded document chunks.

4. Autofill in databases: Database properties can be auto-filled by AI based on rules. A task database can auto-generate a "summary" property from the linked document. A CRM can auto-populate "next action" from the notes field.

Why It Matters

Most AI integrations fail because they're too open-ended. A "chat with your docs" interface requires users to know what to ask. Most users don't know what to ask — they know what they're trying to do.

Notion's integration is valuable because it meets users at the moment of the task, not in a sidebar chat interface:

  • Writing a document? AI helps you write faster.
  • Editing existing content? AI helps you improve it.
  • Searching for a decision? AI finds it in the workspace.
  • Managing a database? AI fills in the boring fields.

Each use case is scoped to a specific moment in a specific workflow. The AI doesn't ask "how can I help?" — it offers specific, relevant actions at the right time.

Implementation Guess

Inline generation:

  • POST /api/ai/generate with { prompt, context: surrounding_blocks }
  • Response streams via Server-Sent Events (SSE)
  • Frontend inserts streamed tokens into the editor at the cursor position
  • On completion, user can accept (keep), retry (regenerate), or discard
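The frontend half of this flow can be sketched without any network code. The sketch below is a minimal, hypothetical model of the editor state: each streamed token is inserted at the cursor, and the cursor advances past it, which is what produces the "text appearing as the model generates it" effect. The `EditorState` shape and function names are illustrative, not Notion's actual API.

```typescript
// Hypothetical editor model: content string plus a cursor offset.
type EditorState = { content: string; cursor: number };

// Insert one streamed token at the cursor and advance the cursor past it.
function insertToken(state: EditorState, token: string): EditorState {
  const before = state.content.slice(0, state.cursor);
  const after = state.content.slice(state.cursor);
  return {
    content: before + token + after,
    cursor: state.cursor + token.length,
  };
}

// Apply a whole stream of SSE tokens in arrival order.
function applyStream(state: EditorState, tokens: string[]): EditorState {
  return tokens.reduce(insertToken, state);
}
```

On "discard", the client would simply restore the `EditorState` it saved before the stream began; "accept" keeps the final state as-is.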

Selection actions:

  • Pre-defined system prompts for each action type (improve, shorten, translate)
  • Selected text is the user content; the system prompt defines the operation
  • Result replaces or is inserted after the selection
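A plausible sketch of the prompt plumbing, under the assumption stated above: one pre-defined system prompt per action, with the selection passed verbatim as the user message. The prompt wordings and the `buildMessages` helper are invented for illustration.

```typescript
// Hypothetical prompt table: one bounded system prompt per action type.
const ACTION_PROMPTS: Record<string, string> = {
  improve: "Rewrite the text to be clearer and more concise. Return only the rewritten text.",
  shorten: "Shorten the text while preserving its meaning. Return only the shortened text.",
  translate: "Translate the text into the target language. Return only the translation.",
};

// The system prompt defines the operation; the selection is the user content.
function buildMessages(action: string, selection: string) {
  const system = ACTION_PROMPTS[action];
  if (!system) throw new Error(`Unknown action: ${action}`);
  return [
    { role: "system" as const, content: system },
    { role: "user" as const, content: selection },
  ];
}
```

Because each operation is bounded, the "Return only..." constraint matters: the model's output can be spliced straight back into the document without parsing.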

Workspace Q&A:

  • Documents are chunked (512–1024 tokens per chunk) and embedded using OpenAI's text-embedding-3-small
  • Chunks stored in a vector database (pgvector in Postgres, or Pinecone)
  • On query: embed the question, retrieve top-K relevant chunks, pass to LLM with a RAG prompt
  • Response cites the source documents
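The retrieval step of that pipeline reduces to cosine similarity over embedding vectors. Here is a self-contained toy version, assuming chunks have already been embedded (a real system would call an embedding API and use pgvector or similar rather than an in-memory sort); the `Chunk` type and function names are illustrative.

```typescript
// A chunk of a document plus its precomputed embedding vector.
type Chunk = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Retrieve the k chunks most similar to the query embedding.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

The retrieved chunks are then concatenated into a RAG prompt along with the user's question, and the model is instructed to answer only from that context.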

Database autofill:

  • Background job triggered when a linked document is updated
  • Generates property value based on a configurable prompt template
  • Stores the result in the database property; it isn't labeled "AI generated" in the UI unless the user inspects the generation history
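The "configurable prompt template" piece is straightforward: the per-property template has placeholders that get filled from the record before the prompt is sent to the model. A minimal sketch, with a hypothetical `{{field}}` placeholder syntax:

```typescript
// Fill {{field}} placeholders in a prompt template from a record's fields.
// Missing fields become empty strings rather than throwing.
function fillTemplate(template: string, record: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => record[key] ?? "");
}
```

The background job would run `fillTemplate` against the updated record, send the result to the LLM, and write the completion back into the property.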

Better Alternatives

What Notion does well: contextual integration, streaming output, scoped operations.

What could be better:

  • AI actions are still triggered manually. A truly useful AI layer would proactively surface relevant information without being asked — noticing that a project page hasn't been updated in 2 weeks and prompting a summary.
  • The workspace Q&A doesn't have good source attribution. Knowing which document an AI summary came from is critical for trust.
  • AI-generated content is indistinguishable from human-written content in the UI. For team workflows, knowing which sections were AI-generated matters.

How You Can Build This

Minimum viable AI integration for a content product:

  1. Add a /ai slash command that triggers a generation modal
  2. Pass selected text or surrounding context as the user message
  3. Stream the response back via SSE
  4. Let the user accept or regenerate

Tech stack suggestion: Next.js API route + OpenAI API (with streaming) + ai npm package (Vercel AI SDK handles SSE streaming with minimal boilerplate).
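With that stack, the generation endpoint from step 1 might look roughly like this. This is a sketch, not a verified integration: it assumes the `ai` and `@ai-sdk/openai` packages are installed, `OPENAI_API_KEY` is set, and the model name and prompt shape are placeholders.

```typescript
// app/api/ai/generate/route.ts — hypothetical Next.js App Router handler.
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { prompt, context } = await req.json();

  // The SDK handles the SSE plumbing; the client consumes the stream
  // and inserts tokens at the cursor as they arrive.
  const result = streamText({
    model: openai("gpt-4o-mini"),
    system: "You are a writing assistant embedded in a document editor.",
    prompt: `Context:\n${context}\n\nRequest: ${prompt}`,
  });

  return result.toTextStreamResponse();
}
```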

Estimated complexity:

  • Basic generation endpoint with streaming: 1–2 days
  • Selection-based action menu: 2–3 days
  • Workspace search with RAG: 5–7 days (requires embedding pipeline)
  • Database autofill: 3–4 days

Written by

Michael

Lead Engineer, Greta Agency

Michael has built AI integrations across content, productivity, and data products.