If you're more than a week into Apex you've probably written a trigger. But the difference between a maintainable trigger and a fragile one usually comes down to picking the right type — the right combination of timing (before / after) and event (insert / update / delete / undelete). This guide walks through all seven, the order they execute in, and the patterns that turn ad-hoc triggers into a clean trigger framework.
The 7 trigger events at a glance
| Timing | Event | Has record Id? | Can modify same record without DML? | Common use |
|---|---|---|---|---|
| Before | Insert | No (yet) | Yes | Default field values, normalize input |
| Before | Update | Yes | Yes | Validate / transform field changes |
| Before | Delete | Yes | N/A — record is going away | Block deletion, archive snapshot |
| After | Insert | Yes (new) | No (needs DML) | Update related records, send notifications |
| After | Update | Yes | No | Cross-object roll-ups, audit log |
| After | Delete | Yes (read-only) | N/A | Cleanup of orphan child records |
| After | Undelete | Yes | No | Restore counters / index entries |
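As one example from the table, the "block deletion" use of a before delete trigger can be sketched like this. The `Locked__c` checkbox is a hypothetical custom field, not a standard one:

```apex
// Hypothetical sketch: prevent deleting accounts flagged by an assumed
// Locked__c checkbox field.
trigger AccountBeforeDelete on Account (before delete) {
    for (Account a : Trigger.old) { // before delete exposes Trigger.old, not Trigger.new
        if (a.Locked__c == true) {
            // addError rejects only this record; the rest of the batch still deletes
            a.addError('Locked accounts cannot be deleted.');
        }
    }
}
```

Note that `addError` fails just the offending record (in a partial-save DML), rather than throwing an exception that aborts the whole batch.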
Before vs after — the right mental model
Before triggers intercept the record on its way to the database. You hold the SObject in memory, you can change its fields directly, and the modified version is what gets saved — no extra DML required:
```apex
trigger AccountBefore on Account (before insert, before update) {
    for (Account a : Trigger.new) {
        if (a.Name != null) {
            a.Name = a.Name.trim(); // No DML needed; the saved record has the trimmed name
        }
    }
}
```
After triggers see the record after it's persisted. Inserts now have an Id (useful for creating child records), but to modify the same record you need a separate update DML — which fires another trigger pass:
```apex
trigger ContactAfter on Contact (after insert) {
    // Use Trigger.new — every contact now has an Id
    Map<Id, Account> updates = new Map<Id, Account>();
    for (Contact c : Trigger.new) {
        if (c.AccountId != null) { // skip contacts without a parent account
            updates.put(c.AccountId,
                new Account(Id = c.AccountId, Last_Contact_Created__c = Date.today()));
        }
    }
    // Keying by Id avoids a "Duplicate id in list" error when two
    // contacts share the same account
    if (!updates.isEmpty()) update updates.values();
}
```
The mental rule: modifying the same record? before. Modifying related records? after.
System.TriggerOperation: the modern dispatch pattern
The legacy way to branch trigger logic was a chain of booleans:
```apex
if (Trigger.isBefore && Trigger.isInsert) { ... }
else if (Trigger.isAfter && Trigger.isUpdate) { ... }
```
The modern way uses the System.TriggerOperation enum:
```apex
trigger AccountTrigger on Account (
    before insert, before update,
    after insert, after update, after delete
) {
    switch on Trigger.operationType {
        when BEFORE_INSERT { AccountHandler.handleBeforeInsert(Trigger.new); }
        when BEFORE_UPDATE { AccountHandler.handleBeforeUpdate(Trigger.new, Trigger.oldMap); }
        when AFTER_INSERT  { AccountHandler.handleAfterInsert(Trigger.new); }
        when AFTER_UPDATE  { AccountHandler.handleAfterUpdate(Trigger.new, Trigger.oldMap); }
        when AFTER_DELETE  { AccountHandler.handleAfterDelete(Trigger.old); }
        when else { /* no-op */ }
    }
}
```
The switch is more readable, makes each operation's handler explicit at a glance, and pairs naturally with the trigger framework pattern below. (Apex doesn't force the switch to be exhaustive, so keep the `when else` as a deliberate no-op for the operations you don't handle.)
One trigger per object: the framework pattern
Salesforce permits multiple triggers per object, but the execution order is unspecified — which means a debugging nightmare. Best practice is one trigger per object that delegates to a handler class:
```apex
// AccountTrigger.trigger — only one per object
trigger AccountTrigger on Account (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    new AccountHandler().run();
}
```
```apex
// AccountHandler.cls — testable, recursion-safe
public class AccountHandler {
    private static Boolean isRunning = false;

    public void run() {
        if (isRunning) return; // recursion guard
        isRunning = true;
        try {
            switch on Trigger.operationType {
                when BEFORE_INSERT { handleBeforeInsert(Trigger.new); }
                when BEFORE_UPDATE { handleBeforeUpdate(Trigger.new, Trigger.oldMap); }
                when AFTER_INSERT  { handleAfterInsert(Trigger.new); }
                // ...
            }
        } finally {
            isRunning = false;
        }
    }

    // private handler methods...
}
```
Three things this gives you:
- Unit-testable handlers. Call `new AccountHandler().handleBeforeInsert(records)` directly from a test class without going through real DML (mark the methods `@TestVisible` or public so tests can reach them).
- Recursion control. A single static flag prevents reentry within the same transaction.
- One known dispatch point per object. No mystery ordering between competing triggers.
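As a sketch of the first point, here's how a test could exercise a handler method directly. This assumes the name-trimming logic from the earlier `AccountBefore` example now lives in `handleBeforeInsert`, and that the method is `@TestVisible`:

```apex
// Hypothetical test sketch — assumes AccountHandler.handleBeforeInsert
// is @TestVisible and trims Account names.
@isTest
private class AccountHandlerTest {
    @isTest
    static void beforeInsertTrimsNames() {
        List<Account> records = new List<Account>{
            new Account(Name = '  Acme  ')
        };
        // Call the handler directly — no DML, no trigger context needed
        new AccountHandler().handleBeforeInsert(records);
        System.assertEquals('Acme', records[0].Name);
    }
}
```

Because no record is inserted, the test runs in milliseconds and doesn't consume DML governor limits.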
The full order of execution (you'll wish you knew this earlier)
When a record is saved, this is what happens:
1. System validation rules (required fields, lookup integrity)
2. Before triggers fire
3. Custom validation rules
4. Record is saved to the database (but not yet committed)
5. After triggers fire
6. Assignment rules run
7. Auto-response rules run
8. Workflow field updates run — if they change the record, the before and after update triggers (steps 2 and 5) fire one more time
9. Process Builder / record-triggered Flow automations run
10. Roll-up summary recalculations (cascading up to parent records)
11. Criteria-based sharing rules
12. DB commit happens
13. Post-commit logic: emails, async Apex (`@future`, `Queueable`, scheduled jobs), Platform Events
This is why a trigger sometimes appears to fire twice for one DML — workflow field updates re-trigger the save. Use the recursion guard pattern above to keep your handler logic idempotent. See Salesforce Order of Execution for the full reference with edge cases.
Common mistakes
- DML inside the loop. Always batch updates outside the for-loop.
- Forgetting Trigger.oldMap on update events. Comparing the prior values in `Trigger.oldMap` against `Trigger.new` is how you detect which fields actually changed.
- Mutating Trigger.new in after triggers. It's read-only after the save. Use before triggers for self-modification.
- No recursion guard. Workflow field updates will silently re-fire your trigger.
- Multiple triggers per object. Pick one; route everything through it.
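The first two mistakes have one combined fix. Here's a hedged sketch of an after update handler body that detects a real field change via `Trigger.oldMap` and batches a single DML outside the loop — the `Status__c` field and the Task subject are illustrative, not standard:

```apex
// Hypothetical after update logic: create one follow-up Task per
// opportunity whose (assumed) Status__c field actually changed.
List<Task> followUps = new List<Task>();
for (Opportunity opp : (List<Opportunity>) Trigger.new) {
    Opportunity old = (Opportunity) Trigger.oldMap.get(opp.Id);
    if (opp.Status__c != old.Status__c) { // fire only on a real change
        followUps.add(new Task(
            WhatId = opp.Id,
            Subject = 'Status changed to ' + opp.Status__c
        ));
    }
}
if (!followUps.isEmpty()) insert followUps; // one DML, not one per record
```

The change check matters as much as the batching: without it, every unrelated edit to the record would spawn another Task.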
Pick the right timing (before for self-edits, after for cross-record), the right event (one of the seven listed above), use System.TriggerOperation for dispatch, and route everything through a handler class with a recursion guard. That's 95% of the trigger discipline you'll ever need.