How Sesame Uses LLMs to Supercharge Product Sales, Architect Q&A, and Website Content

Sesame is building a practical LLM workflow that helps architects and partners find the right product faster, while our team captures knowledge once and reuses it across sales, marketing and support. From on-page chat to transcript-driven content, this approach turns everyday conversations into SEO-rich, LLM-readable assets that compound over time.

Architects and partners frequently ask nuanced, product-specific questions (e.g., control options, pit depth formulas, or which model fits site constraints). Historically, answers lived in people’s heads, meeting notes, or scattered emails — hard to search, easy to forget, and time-consuming to repeat. We needed a way to capture expert knowledge once, make it available on demand, and keep the human conversation front-and-centre.

Sesame’s Approach

Record → Transcribe → Publish → Train

- Capture Q&A in transcripts and turn them into structured, public-facing Knowledge Hub entries (no client names, no site specifics).
- Publish SEO + LLM-optimised articles and cross-link to relevant product pages for deeper spec reading.
- Train an on-site chat assistant only on Sesame content, so architects can ask real questions and get precise, transcript-sourced answers (a minimal retrieval sketch follows this list).
- Review the bot’s answers and immediately fix any issues in the transcript source; this human-in-the-loop check is built into the process.
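
To make the “train only on Sesame content” step concrete, here is a minimal sketch of a retrieval-grounded assistant: every answer is built from Knowledge Hub entries rather than the open web. The entries, the word-overlap ranking, and all names (`KnowledgeEntry`, `build_prompt`, the example snippets) are illustrative assumptions, not Sesame’s production code.

```python
# Minimal sketch: ground the chat assistant strictly in Sesame-derived content.
# Snippet text, names, and the prompt wording are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class KnowledgeEntry:
    title: str
    text: str          # transcript-derived, already scrubbed of client details
    product_url: str   # cross-link for deeper spec reading

# A tiny stand-in for the Knowledge Hub index.
KNOWLEDGE_HUB = [
    KnowledgeEntry("Kensington controls",
                   "The Kensington retracting stairlift is discussed here ...",
                   "/products/kensington"),
    KnowledgeEntry("Thames pit depth",
                   "The Thames low-profile platform suits shallow pits ...",
                   "/products/thames"),
]


def retrieve(question: str, top_k: int = 2) -> list[KnowledgeEntry]:
    """Rank Knowledge Hub entries by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_HUB,
        key=lambda e: len(q_words & set(e.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str) -> str:
    """Constrain the model to retrieved Sesame sources, never the open web."""
    sources = retrieve(question)
    context = "\n\n".join(f"[{e.title}] {e.text} (see {e.product_url})" for e in sources)
    return (
        "Answer using ONLY the Sesame sources below. "
        "If they do not cover the question, offer to book a Teams meeting.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    print(build_prompt("What pit depth does the Thames need?"))
```

When a reviewer spots a wrong answer, the fix goes into the source transcript, the Knowledge Hub entry is regenerated, and the next retrieval picks up the corrected text automatically.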

- Product selection guidance: The assistant can discuss when a Kensington style applies, when a Thames low-profile approach might be better, or when other models warrant a look — always rooted in Sesame’s own transcripts.

- Lead progression: The chat can offer to book a Teams meeting with project managers at the right moment, keeping the human relationship central.

- Content at scale: Each transcript fuels new Knowledge Hub entries and Q&As, improving our SEO and LLM retrievability without classic blog bottlenecks.

- Data hygiene & governance: We keep transcripts product-focused (no client names/locations) and use access rules so sensitive drawings or pricing aren’t exposed via chat.
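
The governance rule above (product-focused, no client names or locations) can be enforced with a scrub pass before anything reaches the public Knowledge Hub. The sketch below assumes simple regex-based redaction and a hand-maintained list of client names; both are hypothetical, and a person still reviews the result before publishing.

```python
# Illustrative redaction pass applied to a raw transcript before publication.
# Patterns and the CLIENT_NAMES list are assumptions, not Sesame's actual rules.

import re

CLIENT_NAMES = ["Acme Architects", "Jane Smith"]      # assumed examples only
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
POSTCODE_RE = re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b")  # UK-style postcodes


def sanitise_transcript(text: str) -> str:
    """Strip client identifiers and site specifics, keep the product discussion."""
    for name in CLIENT_NAMES:
        text = text.replace(name, "[client]")
    text = EMAIL_RE.sub("[email]", text)
    text = POSTCODE_RE.sub("[location]", text)
    return text


raw = "Jane Smith (jane@example.com) asked about pit depth at SW1A 1AA."
print(sanitise_transcript(raw))
# -> "[client] ([email]) asked about pit depth at [location]."
```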

> **Principle:** *Capture once, reuse everywhere.* Every recorded discussion becomes searchable knowledge that improves the website, informs sales calls, and trains the assistant — with people supervising every step.

- Faster answers for architects: Clear responses to common questions (controls, site allowances, alternative models) with links to product pages for detail.

- Consistent messaging: The same, verified guidance appears in chat, on pages, and in follow-ups. Errors are corrected at the source and ripple out automatically.

- Compounding SEO: Transcript-driven articles add depth, long-tail coverage, and LLM-friendly structure that helps search engines and AI tools understand our expertise (see the structured-data sketch after this list).

- Human first: The LLM prompts meeting bookings instead of trying to replace conversations — striking the right balance between self-serve information and personal support.
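
On the compounding-SEO point above, one common way to make transcript-derived Q&As readable to both search engines and LLM crawlers is schema.org `FAQPage` markup. The helper below is our sketch, not Sesame’s tooling; the example question paraphrases this article.

```python
# Sketch: emit schema.org FAQPage JSON-LD from transcript-derived Q&A pairs.

import json


def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
    """Build a FAQPage JSON-LD block ready to embed in a Knowledge Hub page."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }, indent=2)


print(faq_jsonld([
    ("Which Sesame model suits a limited pit depth?",
     "The Thames low-profile approach is designed for shallow pits; "
     "see the product page for specifications."),
]))
```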

For complex, bespoke accessibility projects, trust and clarity win work. Our LLM workflow turns real Sesame knowledge into 24/7 guidance while keeping engineers and project managers in the loop. It reduces repetition, shortens product selection cycles, and raises the overall quality of pre‑sales conversations — especially with architects comparing multiple Sesame models.

> **Note on languages:** We’re working to ensure Knowledge Hub items are available across sites/languages so partners can browse in their preferred language while we maintain a single, verified knowledge source.

- **Kensington** – hidden retracting stairlift (add your product URL in admin)

- **Mayfair** – hidden platform lift (add your product URL in admin)

- **Westminster** – hidden platform lift (add your product URL in admin)

- **Thames** – low-profile solution for limited pit depth (add your product URL in admin)

- **Wellington** – vertical-style mechanism that supersedes older variants (add your product URL in admin)

**Q1. How is Sesame training the on-site chat assistant?** Only on Sesame’s transcripts and pages — not the open web. This ensures precise, brand-safe answers that match engineering reality. We review outputs and push corrections back into the source transcript.

**Q2. Will the LLM replace sales calls or technical meetings?** No. It accelerates discovery and then offers to book a Teams meeting with our project managers when a conversation will help. Human expertise stays central.
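
A minimal sketch of how that hand-off could look, using the JSON-schema tool format common to chat-completion APIs: the assistant is given one `book_teams_meeting` tool plus a simple trigger heuristic. The tool name, fields, and threshold are assumptions, not Sesame’s production setup, and the actual Teams booking is left out of scope.

```python
# Sketch: declare a single meeting-booking tool and a trigger for offering it.
# Names, fields, and the 0.6 threshold are illustrative assumptions.

BOOK_MEETING_TOOL = {
    "type": "function",
    "function": {
        "name": "book_teams_meeting",
        "description": "Offer a Teams meeting with a Sesame project manager "
                       "when the visitor's question needs bespoke engineering input.",
        "parameters": {
            "type": "object",
            "properties": {
                "topic": {"type": "string",
                          "description": "What the visitor wants to discuss"},
                "preferred_week": {"type": "string",
                                   "description": "e.g. 'week of 12 May'"},
            },
            "required": ["topic"],
        },
    },
}


def should_offer_meeting(answer_confidence: float, asked_for_pricing: bool) -> bool:
    """Escalate to a human when the grounded answer is weak or pricing comes up."""
    return answer_confidence < 0.6 or asked_for_pricing
```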

**Q3. Can it suggest alternative Sesame models when constraints appear?** Yes. For example, if pit depth is limited, it can surface the Thames approach or other suitable models, then link to the relevant pages for deeper reading.

**Q4. How are factual errors handled?** When we spot an issue (e.g., confirming that the removable handheld post does not have buttons), we immediately update the transcript and republish so every channel stays aligned.

**Q5. What about multi-language browsing?** We’re enabling Knowledge Hub items across sites/languages and keeping the canonical knowledge in one place so translations remain consistent.
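
One way to keep a single verified source while serving several languages is to hash the canonical text and flag any translation that predates the latest correction. The data model below is a sketch under that assumption; the field names are ours, not Sesame’s.

```python
# Sketch: one canonical Knowledge Hub entry, per-locale renderings, staleness check.

from dataclasses import dataclass, field
from hashlib import sha256


@dataclass
class HubEntry:
    slug: str
    canonical_text: str                                             # single verified source
    translations: dict[str, str] = field(default_factory=dict)      # locale -> rendered text
    translated_from: dict[str, str] = field(default_factory=dict)   # locale -> hash at translation time

    def canonical_hash(self) -> str:
        return sha256(self.canonical_text.encode()).hexdigest()

    def stale_locales(self) -> list[str]:
        """Locales whose translation predates the latest canonical correction."""
        return [loc for loc, h in self.translated_from.items()
                if h != self.canonical_hash()]


entry = HubEntry(slug="thames-pit-depth",
                 canonical_text="The Thames low-profile platform suits shallow pits ...")
entry.translations["pl"] = "Platforma Thames o niskim profilu ..."
entry.translated_from["pl"] = entry.canonical_hash()

entry.canonical_text = "Corrected: the Thames low-profile platform suits shallow pits ..."
print(entry.stale_locales())   # ['pl'] -> retranslate before the sites drift apart
```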
