Native Apex Compression: How to Use the New Compression Namespace in Salesforce Spring ’25

The End of Zipping Workarounds: Native Compression in Apex

For years, Salesforce developers have faced significant hurdles when handling file compression. Whether they relied on heavy JavaScript libraries like JSZip in the browser or offloaded the work to external services such as AWS Lambda or Heroku via callouts, handling zip files was never “native” to the platform. With the Spring ’25 release, Salesforce has finally introduced the Compression namespace, providing a robust, on-platform solution for zipping and unzipping files.

Understanding the Compression Namespace

The namespace revolves around two primary classes: Compression.ZipWriter for creating archives and Compression.ZipReader for reading and extracting them. Because the functionality is built directly into the Apex runtime, it avoids the latency of external callouts and keeps data on-platform, making it faster and more secure than third-party or off-platform alternatives.

Creating Zip Files with ZipWriter

The ZipWriter class allows you to add multiple entries (files) to a single archive. You can pull content from ContentVersion, StaticResource, or even dynamically generated strings converted to Blobs.

// Example: Creating a Zip Archive
Compression.ZipWriter writer = new Compression.ZipWriter();
Blob fileBlob = [SELECT VersionData FROM ContentVersion WHERE Title = 'Contract' LIMIT 1].VersionData;

// Add a file from an existing Blob
writer.addEntry('documents/contract.pdf', fileBlob);

// Add a file from a string
writer.addEntry('readme.txt', Blob.valueOf('This is a native zip archive.'));

// Finalize the archive and retrieve it as a single Blob
Blob zipResult = writer.getArchive();
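
The resulting Blob can then be handled like any other file on the platform. As a minimal sketch, assuming you want to keep the archive in Salesforce Files, it could be saved as a new ContentVersion (the title and file name below are illustrative):

// Sketch: persist the generated archive as a Salesforce File
ContentVersion archiveVersion = new ContentVersion();
archiveVersion.Title = 'Contract Package';          // hypothetical file title
archiveVersion.PathOnClient = 'contract_package.zip';
archiveVersion.VersionData = zipResult;             // the Blob returned by getArchive()
insert archiveVersion;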

Extracting Content with ZipReader

Conversely, ZipReader makes it trivial to iterate through an uploaded or retrieved zip file and extract its components. You can retrieve a list of all entries and extract their data as individual Blobs for further processing or storage.

// Example: Extracting from a Zip Archive
// zipBlob is the archive to read, e.g., the VersionData of an uploaded ContentVersion
Compression.ZipReader reader = new Compression.ZipReader(zipBlob);
List<String> entries = reader.getEntries();

for (String fileName : entries) {
    Blob content = reader.extract(fileName);
    // Process the file content, e.g., create a new ContentVersion
}
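
You do not have to walk the entire archive if only one entry matters. The sketch below reuses the extract method shown above and assumes a hypothetical manifest.json entry that holds JSON metadata:

// Sketch: pull a single, known entry directly (the entry name is hypothetical)
Blob manifestBlob = reader.extract('manifest.json');
String manifestJson = manifestBlob.toString();

// Deserialize the JSON text into a generic map for further processing
Map<String, Object> manifest = (Map<String, Object>) JSON.deserializeUntyped(manifestJson);
System.debug(manifest.get('version'));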

Critical Considerations: Heap Limits and Performance

While native compression is a major milestone, developers must still respect platform constraints. Compression and extraction are memory-intensive: both the archive and its constituent files are handled as Blob data, so they all count toward the Apex heap limit. When processing large files or high volumes of documents, it is best practice to move the work into asynchronous Apex, such as a Queueable or Batch Apex job, which raises the heap limit from 6 MB to 12 MB and keeps the user experience responsive.
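
As a minimal sketch of that pattern, a Queueable job could accept a list of ContentVersion IDs, build the archive outside the synchronous transaction, and save the result as a new file. The class name, file names, and field choices below are assumptions for illustration, not part of the Compression API.

// Sketch: build a zip archive asynchronously (class and file names are hypothetical)
public class ZipDocumentsJob implements Queueable {
    private List<Id> contentVersionIds;

    public ZipDocumentsJob(List<Id> contentVersionIds) {
        this.contentVersionIds = contentVersionIds;
    }

    public void execute(QueueableContext context) {
        Compression.ZipWriter writer = new Compression.ZipWriter();

        // Add each selected file to the archive under its original file name
        for (ContentVersion cv : [SELECT PathOnClient, VersionData
                                  FROM ContentVersion
                                  WHERE Id IN :contentVersionIds]) {
            writer.addEntry(cv.PathOnClient, cv.VersionData);
        }

        // Persist the finished archive as a new Salesforce File
        ContentVersion archive = new ContentVersion();
        archive.Title = 'Document Package';
        archive.PathOnClient = 'document_package.zip';
        archive.VersionData = writer.getArchive();
        insert archive;
    }
}

// Enqueue the job from a trigger, controller, or invocable method
System.enqueueJob(new ZipDocumentsJob(selectedVersionIds));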

Summary of Benefits

  • Reduced Latency: No more external callouts or round-trips to process file archives.
  • Security: Sensitive data never leaves the Salesforce trust boundary for processing.
  • Code Cleanliness: Replaces hundreds of lines of complex workaround code with a few simple, native method calls.