How to handle bulk record processing in Flows

Introduction

Salesforce Flow is bulk-aware when designed correctly. For record-triggered and auto-launched Flows that run on many records (up to 200 records per transaction), it’s essential to follow bulk-processing patterns so your automation stays efficient, scalable, and safe within governor limits. This post covers practical strategies, the Flow elements to use, and when to move to Apex.

Key concepts

Keep these guardrails in mind when designing Flows that must handle many records:

  • Flows can run in bulk: record-triggered Flows are executed for batches of records (up to 200 records per transaction).
  • Avoid per-record DML or queries inside loops—accumulate and execute one bulk operation instead.
  • Use “Before-Save” (fast field updates) where possible for best performance.
  • When processing complexity or volume exceeds Flow limits, use asynchronous Apex (Queueable/Batch) via an invocable Apex method.

Before Save vs After Save vs Scheduled

Before-Save (Record-Triggered, Prior to Save) — fastest option for updating fields on the same record that triggered the Flow. This avoids extra DML and is ideal for simple derived-field updates across many records.
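
For readers who think in Apex terms, a before-save Flow behaves like a before trigger: fields on the triggering records are set in memory and saved with the record, with no extra DML statement. A minimal sketch of that idea, assuming a hypothetical Invoice__c object with Due_Date__c and Days_To_Due__c fields:

trigger InvoiceBeforeSave on Invoice__c (before insert, before update) {
    for (Invoice__c inv : Trigger.new) {
        // Hypothetical derived field, set in memory; the platform persists it
        // as part of the same save, so no Update call is needed.
        inv.Days_To_Due__c = (inv.Due_Date__c == null)
            ? null
            : Date.today().daysBetween(inv.Due_Date__c);
    }
}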

After-Save (Record-Triggered, After Save) — required when you need to create or update related records, make callouts, or run logic that depends on the record’s ID. It is still bulk-capable, but you must keep every operation bulk-friendly.

Scheduled Paths — use for asynchronous or deferred processing where you don’t need results in the same transaction. Scheduled paths run in bulk too, and reduce immediate governor pressure.

Flow elements & patterns for bulk processing

  • Get Records — retrieve related data in a single operation rather than repeatedly fetching in a loop.
  • Loop + Assignment — iterate a collection only to build an output collection; avoid updating inside the loop. Use Assignment to collect records that need changes.
  • Update Records — pass a collection variable to a single Update Records action to perform all DML in bulk.
  • Collection Filters — use Decision elements and Collection Filter (or use Get Records with filters) to reduce the set you must process.
  • Subflows / Invocable Actions — factor repeated logic into subflows, or call an invocable Apex for heavier or asynchronous processing.

Typical bulk pattern (step-by-step)

  1. Record-triggered Flow starts with a collection of triggering records.
  2. Use a single Get Records element to fetch related data for all triggering records at once (never query inside a loop).
  3. Loop through the triggering records and use Assignment to build a collection of updated records.
  4. After the loop finishes, use one Update Records with the collection variable to update all records in bulk.

Example pseudo-assignments (Flow logic expressed conceptually):

// inside the Loop, for each triggeringRecord
if (needsUpdate) {
    updatedRecord = triggeringRecord;                        // Assignment: set fields as needed
    UpdatedCollection = UpdatedCollection + {updatedRecord}; // Assignment: add to the output collection
}
// after the Loop
Update Records using UpdatedCollection                       // one bulk DML for every change
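
The same accumulate-then-commit shape in Apex, for comparison. A minimal sketch using a hypothetical rule that escalates new phone Cases; the object and field values are illustrative only:

public with sharing class BulkPatternExample {
    // Hypothetical rule: raise priority on new phone Cases with one DML total.
    public static void escalate(List<Case> casesToProcess) {
        List<Case> toUpdate = new List<Case>();
        for (Case c : casesToProcess) {                      // loop only to accumulate
            if (c.Status == 'New' && c.Origin == 'Phone') {
                // Lean sObject carrying only the Id and the changed field
                toUpdate.add(new Case(Id = c.Id, Priority = 'High'));
            }
        }
        if (!toUpdate.isEmpty()) {
            update toUpdate;                                 // single bulk DML after the loop
        }
    }
}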

Avoid common anti-patterns

  • Do not perform a Get Records and an Update Records for each record inside a Loop; this multiplies SOQL and DML operations across the whole batch (see the Apex sketch after this list).
  • Avoid heavy formula evaluation or long chains of Decision outcomes inside loops; their cost is multiplied by every iteration.
  • Don’t rely on Screen Flows for bulk processing—screens are interactive and single-record oriented.
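
For contrast, here is the first anti-pattern spelled out in Apex terms. With 200 triggering records it issues 200 queries and 200 updates, far past the synchronous per-transaction limits of 100 SOQL queries and 150 DML statements (the escalation rule is hypothetical):

public with sharing class UnbulkifiedExample {
    // ANTI-PATTERN: per-record query and DML inside a loop. Do not do this.
    public static void escalateUnbulkified(List<Case> casesToProcess) {
        for (Case c : casesToProcess) {
            Account acct = [SELECT Id, Rating FROM Account
                            WHERE Id = :c.AccountId];   // one SOQL query per record
            acct.Rating = 'Hot';
            update acct;                                // one DML statement per record
        }
    }
}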

Error handling & partial failures

Flows run within transactions. If a Flow operation causes an unhandled error, the whole transaction may roll back. To isolate failures and allow partial success for large jobs:

  • Consider splitting processing into scheduled or asynchronous chunks.
  • Use Fault paths (Flow’s equivalent of try/catch); capture failed record IDs and log details to a custom object for retry.
  • For complex retry semantics, use an invocable Apex method that hands work to a Queueable, which can process subsets and commit partially (a partial-success sketch follows this list).
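
As a sketch of that last point, Apex can request partial success by setting the all-or-none flag to false on Database.update and then recording the failed rows. Flow_Error_Log__c below is a hypothetical custom logging object:

public with sharing class PartialSuccessExample {
    public static void updateAllowingPartialFailure(List<SObject> recordsToUpdate) {
        // allOrNone = false: valid rows commit even if some rows fail
        List<Database.SaveResult> results = Database.update(recordsToUpdate, false);
        List<Flow_Error_Log__c> logs = new List<Flow_Error_Log__c>();
        for (Integer i = 0; i < results.size(); i++) {
            if (!results[i].isSuccess()) {
                logs.add(new Flow_Error_Log__c(              // hypothetical logging object
                    Record_Id__c = recordsToUpdate[i].Id,
                    Message__c   = results[i].getErrors()[0].getMessage()
                ));
            }
        }
        if (!logs.isEmpty()) {
            insert logs;    // persist failures for later inspection and retry
        }
    }
}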

When to use Apex instead

Flows are powerful, but for very large volumes, complex joins, or long-running operations, Apex (Queueable or Batchable) is preferable. Build an @InvocableMethod wrapper so Flow can pass collections into Apex and you can leverage batch processing and tighter limit control. A minimal wrapper might look like this (the Queueable it enqueues is a placeholder, sketched after the class):


public with sharing class FlowInvocableExample {
    @InvocableMethod(label='Process Records Asynchronously')
    public static void process(List<Id> recordIds) {
        // Hand the IDs to asynchronous Apex so the heavy work runs
        // in its own transaction with fresh governor limits.
        System.enqueueJob(new RecordProcessorQueueable(recordIds));
    }
}
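
And a minimal Queueable the wrapper above could enqueue. The class name and the Case field change are placeholders for your own logic:

public with sharing class RecordProcessorQueueable implements Queueable {
    private List<Id> recordIds;

    public RecordProcessorQueueable(List<Id> recordIds) {
        this.recordIds = recordIds;
    }

    public void execute(QueueableContext ctx) {
        // Runs in its own transaction with fresh governor limits.
        List<Case> cases = [SELECT Id, Status FROM Case WHERE Id IN :recordIds];
        for (Case c : cases) {
            c.Status = 'Working';    // placeholder bulk-safe field change
        }
        update cases;                // one DML for the whole set
    }
}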

Testing & monitoring

  • Bulk-test with data loads (e.g., Data Loader) to simulate 200-record batches, or larger volumes via scheduled Flows (a bulk Apex test sketch follows this list).
  • Use Debug Logs and Flow Interviews (Paused Flows) to inspect collection sizes and element counts.
  • Monitor Apex and Flow limits from the Setup > Process Automation Usage and Limits pages.
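
A bulk Apex test is another way to exercise the 200-record path. A minimal sketch, assuming a record-triggered Flow on Case; the assertion is a placeholder for checks on your Flow’s actual side effects:

@IsTest
private class CaseFlowBulkTest {
    @IsTest
    static void flowHandlesTwoHundredRecords() {
        List<Case> cases = new List<Case>();
        for (Integer i = 0; i < 200; i++) {
            cases.add(new Case(Subject = 'Bulk test ' + i, Origin = 'Phone'));
        }
        Test.startTest();
        insert cases;    // fires the record-triggered Flow once for the whole batch
        Test.stopTest();
        // Replace with assertions on the Flow's expected side effects
        System.assertEquals(200, [SELECT count() FROM Case WHERE Subject LIKE 'Bulk test%']);
    }
}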

Practical checklist

  • Prefer Before-Save for same-record fast field updates.
  • Use one Get Records for related data, not in-loop queries.
  • Accumulate changes in a collection and run one Update Records action.
  • Use Subflows and Invocable Apex for reusable or heavy operations.
  • Implement Fault paths and logging to capture failures.
  • Load-test your Flow with batches and monitor limits.

Following these patterns lets Salesforce Flow scale and remain reliable for bulk record processing. When in doubt, prototype your Flow with realistic data volumes and consider moving heavy lifting to asynchronous Apex.