Two of the most important features of ZFS are compression and deduplication.  Both can be used to reduce space utilization, but each has its own costs and benefits.  

Compression in ZFS is transparent: once enabled on a dataset, data is compressed as it is written and decompressed as it is read, with no changes required on the application side.  The default compression algorithm varies from one ZFS implementation to another, but it is generally a lightweight one, chosen to avoid impacting performance.  With the default settings, compression usually has little or no measurable impact on a modern system.
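
As a rough sketch, enabling compression and choosing an algorithm is a single property change on the dataset (the pool and dataset names here, tank and tank/colleague, are placeholders):

    # enable compression with the implementation's default algorithm
    zfs set compression=on tank/colleague

    # or request a specific lightweight algorithm such as lz4
    zfs set compression=lz4 tank/colleague

Only blocks written after the property is set are compressed; existing data is left as it was until it is rewritten.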

Space savings depend on the type of data being compressed.  Text data can be compressed by as much as 10:1 or more, while images or video, which are often already compressed, may see little benefit.  
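
Each dataset reports how well its data is actually compressing through the compressratio property, so it is easy to check whether compression is paying off for a given workload (dataset name again a placeholder):

    # report the achieved compression ratio for a dataset
    zfs get compressratio tank/colleague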

Deduplication also happens in real time, at write time, and applies only to data written after it is enabled.  Of course, deduplication is only useful if the data in question actually contains duplicates.  Unlike compression, deduplication can consume a lot of resources, because the deduplication table (DDT) must be kept in memory to perform well.  If the table grows beyond what fits in memory, every write can trigger additional disk reads and performance can drop dramatically.
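
A minimal sketch of enabling deduplication and estimating its memory cost, again assuming a pool named tank:

    # enable deduplication on a dataset (applies only to data written afterwards)
    zfs set dedup=on tank/colleague

    # simulate deduplication of the data already in the pool and print a DDT
    # histogram, which gives a rough estimate of how large the table would be
    zdb -S tank

    # the DEDUP column of zpool list shows the ratio actually being achieved
    zpool list tank

The zdb -S simulation is worth running before enabling deduplication for real, since it shows both the potential savings and the likely size of the table.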

Our servers use compression on all of our Colleague datasets.  This makes it possible to store the entire Colleague REP and a year's worth of backup snapshots in less than 150GB of space.