It's not obvious how to use this for _incremental_ decompression:
You feed _the compressed data_ into inflate_add() _in pieces_.
The internal state of the zlib context ensures that you can split at any point and still get the correct total data out, as long as you keep reading until the end.
In this way, you never have to hold the complete uncompressed data in memory at any one time (and don't have to materialize it as a file for gzopen() etc.), allowing you to parse files much bigger than the available PHP memory limit.
<?php
$step = 500000;                                   // compressed bytes fed per call
$dataGz = load_gzip_compressed_data_to_string();  // whole compressed input as string
$start = 0;
$outLen = 0;

$inflCtxt = inflate_init(ZLIB_ENCODING_GZIP);
if ($inflCtxt === false) {
    die('Failed to initialize the inflate context.');
}

$status = inflate_get_status($inflCtxt);
// Loop until the stream reports ZLIB_STREAM_END (or an error); also stop
// when the input is used up, so a truncated archive cannot loop forever.
while ($status == ZLIB_OK && $start < strlen($dataGz)) {
    $split = substr($dataGz, $start, $step);
    $dataFragment = inflate_add($inflCtxt, $split);  // next slice of uncompressed data
    $outLen += strlen($dataFragment);
    $status = inflate_get_status($inflCtxt);
    $start += $step;
}
echo 'Input: ' . strlen($dataGz) . ' Bytes / Output: ' . $outLen . ' Bytes.';
?>
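
For comparison, here is a minimal sketch of the same loop streaming the compressed input from a file handle, so not even the compressed data has to fit in memory at once. 'huge-file.gz' and process_fragment() are placeholders of my own, not anything defined above.

<?php
$step = 500000;
$outLen = 0;

$fh = fopen('huge-file.gz', 'rb') or die('Cannot open input file.');
$inflCtxt = inflate_init(ZLIB_ENCODING_GZIP);

while (inflate_get_status($inflCtxt) == ZLIB_OK && !feof($fh)) {
    // Read the next compressed slice straight from disk and inflate it.
    $dataFragment = inflate_add($inflCtxt, fread($fh, $step));
    $outLen += strlen($dataFragment);
    // process_fragment($dataFragment);  // e.g. parse it line by line
}
fclose($fh);
echo 'Output: ' . $outLen . ' Bytes.';
?>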
N.B.: Archives with an extremely high compression ratio can still bomb out with memory exhaustion, since a single inflate_add() call may return up to roughly a thousand times as much data as it was fed, and it's not possible to define an output limit in inflate_init() similar to the max_length parameter of gzuncompress().
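
As a partial workaround (my own guard, not a zlib feature): deflate cannot expand data by more than roughly 1032:1, so shrinking $step bounds the size of each fragment, and you can additionally abort once a self-imposed total is exceeded. A sketch under those assumptions:

<?php
$step   = 65536;          // small input slices bound each fragment's size
$maxOut = 100 * 1048576;  // give up beyond 100 MiB of decompressed output
$start  = 0;
$outLen = 0;
$dataGz = load_gzip_compressed_data_to_string();
$inflCtxt = inflate_init(ZLIB_ENCODING_GZIP);

while (inflate_get_status($inflCtxt) == ZLIB_OK && $start < strlen($dataGz)) {
    $outLen += strlen(inflate_add($inflCtxt, substr($dataGz, $start, $step)));
    $start  += $step;
    if ($outLen > $maxOut) {
        die('Decompressed data exceeds the self-imposed limit.');
    }
}
?>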