Einstein Prompt Templates in Apex — Build a Sales Coach LWC

How to call Einstein prompt templates from Apex and surface coaching advice in a Lightning Web Component without wasting tokens. The walkthrough covers ConnectApi input construction, making the request, parsing the response, and LWC integration.

Introduction

Generative AI features in Salesforce are powerful, and sometimes you want to call prompt templates from your own Apex code rather than relying on out-of-the-box components. This post shows a practical use case: a “Sales Coach” LWC that calls an Einstein prompt template from Apex, hydrates it with an Opportunity Id, and returns actionable advice to the sales rep only when they open a non-default tab.

Use case

The goal is to provide sales reps concise, on-demand guidance for an Opportunity — whether to pursue it, what next steps to take, and relevant talking points. To avoid unnecessary cost and token consumption, the coach should only run when explicitly requested (for example, by opening a tab on the record page).

Prepare the input in Apex

Prompt templates accept a map of input parameters via ConnectApi.EinsteinPromptTemplateGenerationsInput. Each parameter is a ConnectApi.WrappedValue that itself holds a keyed map (in this example, the Opportunity Id keyed by 'id').

ConnectApi.EinsteinPromptTemplateGenerationsInput promptGenerationsInput = 
                           new ConnectApi.EinsteinPromptTemplateGenerationsInput();

Map<String,ConnectApi.WrappedValue> valueMap = new Map<String,ConnectApi.WrappedValue>();

Map<String, String> opportunityRecordIdMap = new Map<String, String>();
opportunityRecordIdMap.put('id', oppId); 

ConnectApi.WrappedValue opportunityWrappedValue = new ConnectApi.WrappedValue();
opportunityWrappedValue.value = opportunityRecordIdMap;

valueMap.put('Input:Candidate_Opportunity', opportunityWrappedValue);

promptGenerationsInput.inputParams = valueMap;

promptGenerationsInput.isPreview = false;

Notes:

  • Set isPreview=false to execute the prompt and return a model response.
  • Additional configuration (for example temperature) can be sent via the additionalConfig property, which takes a ConnectApi.EinsteinLlmAdditionalConfigInput.
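As an illustration, model settings could be attached as below. The temperature and maxTokens property names are assumptions here; check the ConnectApi reference for your org's API version.

ConnectApi.EinsteinLlmAdditionalConfigInput configInput =
    new ConnectApi.EinsteinLlmAdditionalConfigInput();
configInput.temperature = 0.2;  // lower values give more deterministic advice
configInput.maxTokens = 512;    // cap output length to control token cost
promptGenerationsInput.additionalConfig = configInput;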

Make the request

Use ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate to request generations for a prompt template. At the time of writing, prompt template records are referenced by Id in Apex: the GenAiPromptTemplate metadata type currently isn't queryable from Apex, so you'll typically store the template Id in configuration or pass it into your component.

ConnectApi.EinsteinPromptTemplateGenerationsRepresentation generationsOutput = 
    ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate('0hfao000000hh9lAAA', 
                                                             promptGenerationsInput); 

Extract the response

The output's generations list contains ConnectApi.EinsteinLLMGenerationItemOutput objects. Typically you read the text (and safety scores) from the first element. Handle nulls and empty lists defensively.
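Continuing from generationsOutput above, a defensive extraction might look like this (the fallback text is illustrative):

String adviceText = 'No advice available.';
if (generationsOutput != null
        && generationsOutput.generations != null
        && !generationsOutput.generations.isEmpty()) {
    // First generation is usually the one to surface to the user
    ConnectApi.EinsteinLLMGenerationItemOutput firstGeneration =
        generationsOutput.generations[0];
    adviceText = firstGeneration.text;
}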

Lightning Web Component integration

In the LWC, expose recordId as an @api getter/setter pair. Because the component lives in a non-default tab, the record page only sets the property when the user opens that tab; the setter then makes an imperative Apex call (it must be imperative because ConnectApi methods can't be marked cacheable, so they can't be exposed through a cacheable @wire).

@api get recordId() {
    return this._recordId;
}

set recordId(value) {
    this._recordId = value;
    if (!value) {
        return;
    }
    this.thinking = true;
    GetAdvice({ oppId: this._recordId })
        .then((data) => {
            this.thinking = false;
            this.advice = data;
        })
        .catch((error) => {
            this.thinking = false;
            this.advice = 'The coach is unavailable right now.';
            console.error(error);
        });
}
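For completeness, the imperative GetAdvice call assumes an Apex controller along these lines, stitching together the earlier steps. The class name and fallback text are illustrative, and the hard-coded template Id would normally come from configuration:

public with sharing class SalesCoachController {
    // ConnectApi methods can't run in a cacheable context, so cacheable=true is not an option
    @AuraEnabled
    public static String GetAdvice(String oppId) {
        ConnectApi.EinsteinPromptTemplateGenerationsInput promptGenerationsInput =
            new ConnectApi.EinsteinPromptTemplateGenerationsInput();

        // Wrap the Opportunity Id the way the prompt template expects
        ConnectApi.WrappedValue opportunityWrappedValue = new ConnectApi.WrappedValue();
        opportunityWrappedValue.value = new Map<String, String>{ 'id' => oppId };

        promptGenerationsInput.inputParams = new Map<String, ConnectApi.WrappedValue>{
            'Input:Candidate_Opportunity' => opportunityWrappedValue
        };
        promptGenerationsInput.isPreview = false;

        ConnectApi.EinsteinPromptTemplateGenerationsRepresentation generationsOutput =
            ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate(
                '0hfao000000hh9lAAA', promptGenerationsInput);

        return (generationsOutput != null && !generationsOutput.generations.isEmpty())
            ? generationsOutput.generations[0].text
            : 'No advice available.';
    }
}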

UX considerations

  • Run coaching only on explicit user action (opening a non-default tab or clicking a button) to avoid token costs.
  • Show a friendly “Coach is thinking…” state while waiting for the LLM response.
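Because the record page only renders components placed in a non-default tab once the user opens it, the template itself stays simple. A sketch, with markup and labels that are illustrative:

<!-- salesCoach.html: rendered only when the user opens the component's tab -->
<template>
    <lightning-card title="Sales Coach" icon-name="standard:opportunity">
        <div class="slds-var-p-around_medium">
            <template if:true={thinking}>
                <p>Coach is thinking&hellip;</p>
            </template>
            <template if:false={thinking}>
                <p>{advice}</p>
            </template>
        </div>
    </lightning-card>
</template>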

Best practices & caveats

  • Store the prompt template Id in configuration or a custom metadata record rather than hard-coding it in Apex when possible.
  • Validate and sanitize prompt outputs before showing them to users. Consider safety scores returned by the API.
  • Monitor token usage and add rate-limiting or usage quotas for expensive components.
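For the first point, a minimal sketch using a hypothetical custom metadata type Sales_Coach_Setting__mdt with a Prompt_Template_Id__c field (both names are assumptions for illustration):

// getInstance avoids a SOQL query and doesn't count against query limits
Sales_Coach_Setting__mdt setting = Sales_Coach_Setting__mdt.getInstance('Default');
String promptTemplateId = (setting != null) ? setting.Prompt_Template_Id__c : null;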

Conclusion — why this matters

Embedding Einstein prompt templates directly in Apex lets developers integrate generative AI into tailored workflows while controlling cost and user experience. For Salesforce admins, this pattern means you can add on-demand AI-driven coaching to records without changing core navigation. Developers gain a reusable ConnectApi-based pattern for hydrating prompts and parsing responses. Business users get timely, context-aware advice only when they ask for it — reducing noise and keeping the focus on high-value decisions.