Can we call a batch job from Trigger?

The short answer: can you call a batch from a trigger?

So, can you actually call a batch job from trigger logic in Salesforce? The short answer is yes, you can. But here is the thing: just because you can doesn’t mean you should. In my experience, dropping a direct Database.executeBatch() call into a trigger is one of those things that works fine in a sandbox with three records but blows up the second you hit production.

I have seen teams run into major headaches with this. You might think it is a quick fix for a complex calculation, but you are essentially playing a game of chicken with Salesforce governor limits. If you want to keep your org healthy, you need a more reliable strategy than a direct call.

Why you should avoid calling a batch directly from a trigger

Triggers are meant to be fast. They run in a synchronous transaction, and they often fire in chunks. If you try to call a batch from trigger code directly, you are going to hit some hard walls very quickly. Here is why it usually goes south:

  • The Concurrency Wall: You can only have five batch jobs queued or actively processing at once. If your trigger fires for a bulk update, you will hit that limit in a heartbeat.
  • Duplicate Jobs: If a trigger fires multiple times in a single transaction (which happens more than you think), you might end up spawning five identical batch jobs for the same data.
  • Maintenance Nightmares: Debugging a batch job that was triggered by another asynchronous process is a special kind of pain. It is hard to track what started what.

One thing that trips people up is forgetting that triggers process bulk data in chunks of 200 records. If you are not careful, you will end up trying to start a new batch for every chunk in a 10,000-record upload. That is a fast track to an unhandled exception once the concurrent job queue fills up. You can read more about staying safe in my guide on asynchronous Apex limits.

Better ways to call a batch from trigger events

If you really need that batch processing, you have to be smart about how you kick it off. We want to aggregate those requests so we aren’t spamming the system. Here are the patterns I actually use in real projects.

1. The Queueable “Launcher” Pattern

This is probably the most common way to handle this. Instead of calling the batch directly, you call a Queueable class. The Queueable acts as a buffer. It collects the IDs from the trigger and then starts the batch job itself. This is a classic Apex trigger scenario that comes up in interviews all the time because it shows you understand scale.

public class BatchLauncher implements Queueable {
    private List<Id> recordIds;

    // Capture the IDs from the trigger; they are serialized with the job
    public BatchLauncher(List<Id> ids) {
        this.recordIds = ids;
    }

    public void execute(QueueableContext context) {
        // We are now outside the trigger transaction, so it is safe to start the batch
        Database.executeBatch(new MyProcessingBatch(recordIds));
    }
}

In your trigger handler, you just need to make sure you only enqueue this job once per transaction. Use a static boolean or a set of IDs to keep track of what you have already handled.
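Here is a minimal sketch of that guard, assuming a handler method wired to an after-update trigger on Account (the handler name and hasEnqueued flag are illustrative):

public class AccountTriggerHandler {
    // Static fields live for one transaction, so re-entrant trigger
    // fires cannot enqueue a second launcher for the same data
    private static Boolean hasEnqueued = false;

    public static void handleAfterUpdate(List<Account> newRecords) {
        if (hasEnqueued) {
            return;
        }
        List<Id> ids = new List<Id>(new Map<Id, Account>(newRecords).keySet());
        System.enqueueJob(new BatchLauncher(ids));
        hasEnqueued = true;
    }
}

The static boolean resets automatically between transactions, so there is nothing to clean up.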

2. The Staging Table (Buffer)

If you are managing large data volumes, sometimes even a Queueable isn’t enough. I have worked on orgs where we wrote “Request” records to a custom object instead. Then, a scheduled job would run every 10 minutes, pick up all the pending requests, and run one single batch job to process them all. It is much more stable for high-traffic environments.
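A rough sketch of that buffer, assuming a hypothetical Batch_Request__c custom object with Record_Id__c and Status__c fields (all of these names are illustrative):

// In the trigger handler: insert cheap request rows instead of starting a job
public static void bufferRequests(List<Id> recordIds) {
    List<Batch_Request__c> requests = new List<Batch_Request__c>();
    for (Id recordId : recordIds) {
        requests.add(new Batch_Request__c(
            Record_Id__c = recordId,
            Status__c = 'Pending'
        ));
    }
    insert requests;
}

// Scheduled every 10 minutes: one batch drains every pending request
public class RequestDrainScheduler implements Schedulable {
    public void execute(SchedulableContext ctx) {
        // The batch's start() method would query all Status__c = 'Pending' rows
        Database.executeBatch(new ProcessPendingRequestsBatch());
    }
}

Because the scheduled job is the only thing that calls executeBatch, you get exactly one job per interval no matter how busy the triggers were.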

3. Platform Events

Now, if you want to get fancy, use Platform Events. Your trigger publishes an event, and an automated process consumer (like a triggered Flow or another Apex trigger) picks it up. This decouples the logic entirely. It is a great way to kick off a batch from a trigger without making the original transaction wait for anything.
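Sketching the idea with a hypothetical Batch_Request__e platform event (the event and field names are assumptions): the record trigger publishes with EventBus.publish(), and a subscriber trigger on the event aggregates IDs and launches the batch.

trigger BatchRequestSubscriber on Batch_Request__e (after insert) {
    // Event triggers receive events in batches, so aggregation comes for free
    List<Id> ids = new List<Id>();
    for (Batch_Request__e evt : Trigger.new) {
        ids.add((Id) evt.Record_Id__c);
    }
    Database.executeBatch(new MyProcessingBatch(ids));
}

Publishing from the record trigger is then a single line: EventBus.publish(new Batch_Request__e(Record_Id__c = rec.Id));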

Pro Tip: Always check the current job queue before calling executeBatch. You can query the AsyncApexJob object to see how many batches are currently active. If you are already at 5, log an error or retry later instead of letting the code crash.
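That check is a simple aggregate query against AsyncApexJob (MyProcessingBatch here is the same batch class used in the Queueable example):

// Count the batch jobs currently holding a concurrency slot
Integer activeBatches = [
    SELECT COUNT()
    FROM AsyncApexJob
    WHERE JobType = 'BatchApex'
    AND Status IN ('Queued', 'Preparing', 'Processing')
];
if (activeBatches < 5) {
    Database.executeBatch(new MyProcessingBatch(recordIds));
} else {
    // At the limit: log it and let a scheduled retry pick the work up later
    System.debug(LoggingLevel.WARN, 'Concurrent batch limit reached, deferring job');
}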

When is it okay to call batch directly?

Look, I am a pragmatist. If you are working in a tiny org where you know the data volume will always be low, a direct batch call from a trigger might be fine. If you have a trigger on an object that only gets updated once a week by a single admin, go for it. But honestly, most teams get this wrong because they don’t plan for growth. It is almost always better to spend the extra 20 minutes to set up a Queueable wrapper.

Key Takeaways

  • Never call Database.executeBatch directly in a high-volume trigger.
  • The 5-job concurrency limit is your biggest enemy.
  • Use a Queueable class to wrap your batch call for better stability.
  • For massive data, consider a staging object and a scheduled job.
  • Platform Events are a solid way to decouple your trigger from the batch logic.

Wrapping up

The next time someone asks if they can call a batch from a trigger, tell them yes, but with a huge asterisk. It is all about protecting the user experience and making sure your org doesn’t fall over when someone decides to do a mass update via the Data Loader. Stick to the Queueable or buffering patterns, and you will save yourself a lot of late-night debugging sessions. Keep your triggers light, your batches aggregated, and your governor limits green.