How to Scale Apex Platform Event Triggers with Parallel Subscribers in Salesforce — Performance & Best Practices

Parallel platform event subscribers let you partition and balance high-volume Apex processing. This post explains how they work, performance trade-offs, and configuration best practices.

What Platform Events Are and Why Parallel Subscribers Matter

Platform events provide an asynchronous publish-subscribe mechanism in Salesforce. They live on an event bus (built on Apache Kafka), are immutable, and are available for replay for up to 72 hours. While platform events are great for decoupling integrations and on-platform processing, a single Apex subscriber can become a bottleneck when volumes spike.

How Parallel Subscribers Solve the Bottleneck

Parallel subscribers let Salesforce materialize multiple instances of the same Apex trigger. Each event is assigned to exactly one partition using a partition key (by default EventUuid), which provides native load balancing without duplicate processing across subscribers.

When to Use Deterministic Partition Keys

Use a deterministic partition key (e.g., Loyalty_Tier__c) when you need ordering guarantees for a subset of events. This preserves order within a partition but can create hotspots if traffic is uneven.
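
For example, assuming the event carries a Loyalty_Tier__c field, a hypothetical config (the metadata type itself is covered in the next section) points partitionKey at that field instead of EventUuid, so all events for a given tier land on the same partition and keep their relative order:

<?xml version="1.0" encoding="UTF-8" ?>
<PlatformEventSubscriberConfig xmlns="http://soap.sforce.com/2006/04/metadata">
    <masterLabel>LoyaltyTierOrderConfig</masterLabel>
    <platformEventConsumer>InboundOrderSubscriber</platformEventConsumer>
    <numPartitions>4</numPartitions>
    <partitionKey>Loyalty_Tier__c</partitionKey>
</PlatformEventSubscriberConfig>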

Configuration (PlatformEventSubscriberConfig)

Parallel subscriptions are configured via the PlatformEventSubscriberConfig metadata type. Key fields include:

  • numPartitions — number of parallel subscribers to materialize.
  • partitionKey — field used for partitioning (default EventUuid).
  • batchSize and user — the maximum trigger batch size and the user the trigger runs as.

<?xml version="1.0" encoding="UTF-8" ?>
<PlatformEventSubscriberConfig xmlns="http://soap.sforce.com/2006/04/metadata">
    <masterLabel>InboundOrderConfig</masterLabel>
    <platformEventConsumer>InboundOrderSubscriber</platformEventConsumer>
    <numPartitions>10</numPartitions>
    <partitionKey>EventUuid</partitionKey>
    <user>[email protected]</user>
</PlatformEventSubscriberConfig>
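
This config deploys like any other Metadata API component. After deployment, you can sanity-check that the subscription is active and see how far each subscriber has read on the bus; a minimal sketch, assuming the EventBusSubscriber object is queryable in your org and that the trigger consumes a hypothetical Inbound_Order__e event:

// Anonymous Apex: list subscribers on the event channel.
// Position is the replay ID the subscriber has processed up to; Status indicates
// whether the subscription is running or in error.
for (EventBusSubscriber sub : [
    SELECT Name, Type, Position, Status
    FROM EventBusSubscriber
    WHERE Topic = 'Inbound_Order__e'
]) {
  System.debug(sub.Name + ' (' + sub.Type + '): position=' + sub.Position + ', status=' + sub.Status);
}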

Example Apex Subscriber Handler

Keep triggers thin and use a handler for bulk processing and instrumentation. Wrap trace or monitoring calls to help you benchmark parallelism and batch characteristics.

public with sharing class InboundOrderEventSubHandler {
  // Single generic SObject method so the same handler can process multiple platform event types.

  public static void handleInboundOrders(List<SObject> orderEvents) {
    // used to capture metrics about the processing of platform events
    String batchId = UUID.randomUUID().toString();
    String eventName = orderEvents.getSObjectType().getDescribe().getName();
    Datetime timestamp = System.now();

    // store start trace record
    Tracer.trace(
      (String) orderEvents[0].get('Trace_Id__c'),
      batchId,
      'Trace point start',
      eventName,
      orderEvents.size(),
      timestamp
    );

    // straightforward processing of sobject (in this case platform events)
    // remember that PE trigger batch sizes are up to 2000
    List<Inbound_Order_Status__c> ordersToInsert = new List<Inbound_Order_Status__c>();

    for (SObject orderEvent : orderEvents) {
      Inbound_Order_Status__c orderStatus = new Inbound_Order_Status__c();
      orderStatus.Status__c = (String) orderEvent.get('Status__c');
      orderStatus.Event_Uuid__c = (String) orderEvent.get('EventUuid');
      orderStatus.Order_Type__c = (String) orderEvent.get('Type__c');
      orderStatus.Trace_Id__c = (String) orderEvent.get('Trace_Id__c');
      orderStatus.Batch_Id__c = batchId;
      orderStatus.Platform_Event_Name__c = eventName;

      ordersToInsert.add(orderStatus);
    }

    insert ordersToInsert;

    // fetch new timestamp for end trace
    timestamp = System.now();

    // store end trace record
    Tracer.trace(
      (String) orderEvents[0].get('Trace_Id__c'),
      batchId,
      'Trace point stop',
      eventName,
      orderEvents.size(),
      timestamp
    );
  }
}
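
The trigger itself stays thin and simply delegates to the handler. A minimal sketch, assuming the event's API name is Inbound_Order__e (the trigger name matches the platformEventConsumer in the config above):

trigger InboundOrderSubscriber on Inbound_Order__e (after insert) {
  InboundOrderEventSubHandler.handleInboundOrders(Trigger.new);
}

The Tracer utility the handler calls is assumed here; a hypothetical stand-in that simply logs each trace point (a real implementation would more likely persist a record for later analysis):

public with sharing class Tracer {
  // Hypothetical instrumentation sink: one call per trace point, logging the
  // trace ID, batch ID, event name, batch size, and capture time.
  public static void trace(String traceId, String batchId, String tracePoint,
                           String eventName, Integer batchSize, Datetime capturedAt) {
    System.debug(LoggingLevel.INFO,
      tracePoint + ' | trace=' + traceId + ' batch=' + batchId +
      ' event=' + eventName + ' size=' + batchSize + ' at=' + capturedAt);
  }
}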

Performance Findings & Best Practices

  • A small number of additional partitions (2–4) yields large performance gains.
  • There are diminishing returns beyond ~6 partitions; Salesforce reports little improvement beyond 10.
  • Batch sizes for parallel subscribers rarely hit configured maxima — subscribers often self-adjust to efficient batch sizes.
  • Expect cold-start mini-batches at the start of a run (many partitions pick up tiny batches while they bootstrap); the publishing sketch below is one way to observe this.
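
To reproduce these observations, publish a burst of test events from Anonymous Apex and compare the trace records the handler writes (batch IDs, batch sizes, timestamps) across partitions. A minimal sketch, assuming the event's API name is Inbound_Order__e and that it carries the Status__c, Type__c, and Trace_Id__c fields the handler reads:

// Anonymous Apex: publish a burst of events that share one trace ID so the
// resulting trace records can be grouped and compared per partition.
List<Inbound_Order__e> testEvents = new List<Inbound_Order__e>();
String traceId = UUID.randomUUID().toString();
for (Integer i = 0; i < 2000; i++) {
  testEvents.add(new Inbound_Order__e(
    Status__c = 'New',
    Type__c = 'Benchmark',
    Trace_Id__c = traceId
  ));
}
// EventBus.publish queues the events asynchronously and returns one SaveResult per event.
for (Database.SaveResult sr : EventBus.publish(testEvents)) {
  if (!sr.isSuccess()) {
    System.debug(LoggingLevel.ERROR, 'Publish failed: ' + sr.getErrors());
  }
}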

Considerations & Caveats

  • Older platform event schemas (pre-Spring ’19) do not support parallel subscribers — update these via metadata API if needed.
  • Repartitioning while events are in-flight can delay processing; plan carefully.
  • Platform events do not guarantee delivery; if you need guaranteed delivery, design additional safeguards (a publisher-side option is sketched after this list).
  • Some Setup pages and debug/logging for parallel subscribers may be limited or buggy in certain org contexts.
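
A successful EventBus.publish result only means the event was queued on the bus, not that every subscriber will process it. A minimal publisher-side sketch, assuming the hypothetical Inbound_Order__e event, that hands failed publish requests back to the caller to persist, retry, or reconcile:

public with sharing class InboundOrderPublisher {
  // Publishes the events and returns the ones whose publish request failed,
  // so the caller can store them for retry or reconcile them later.
  public static List<Inbound_Order__e> publishOrReturnFailures(List<Inbound_Order__e> events) {
    List<Inbound_Order__e> failed = new List<Inbound_Order__e>();
    List<Database.SaveResult> results = EventBus.publish(events);
    for (Integer i = 0; i < results.size(); i++) {
      if (!results[i].isSuccess()) {
        failed.add(events[i]);
      }
    }
    return failed;
  }
}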

Conclusion

Parallel subscribers extend platform event use cases into high-volume Apex processing by partitioning load and enabling parallelism with minimal duplication. Benchmark to find the right partition count for your workload, use deterministic partition keys when ordering is required, and instrument processing to understand behavior in your org.

Why this matters: Salesforce admins, architects, and developers can use parallel subscribers to offload high-throughput integrations from single-threaded subscribers — improving throughput and reducing event bus backlogs when designed correctly.