Because datasets share all available space in a pool, we use multiple datasets to maximize our flexibility in storing Colleague data.  Each Colleague environment has its own dataset, and the daemon, Colleague, and LPR databases (daemon, coll18, and repository_das) each have their own datasets as well.
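
If you were building this layout from scratch, it's only a handful of commands.  A minimal sketch, assuming the datasets were created directly with their mountpoints (in practice the mountpoint property can also be set after the fact):

# One dataset per Colleague environment, each with an explicit mountpoint
zfs create -o mountpoint=/datatel/coll18/production  m10-1/production
zfs create -o mountpoint=/datatel/coll18/test        m10-1/test
zfs create -o mountpoint=/datatel/coll18/development m10-1/development

# Separate datasets for the daemon, Colleague, and LPR databases
zfs create -o mountpoint=/datatel/coll18/daemon         m10-1/daemon
zfs create -o mountpoint=/datatel/coll18/coll18         m10-1/coll18
zfs create -o mountpoint=/datatel/coll18/repository_das m10-1/repository_das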

In addition to Colleague itself, the Unidata binaries are stored on ZFS, so both /usr/ud73 and /usr/unishared are mounted as ZFS datasets.  We also have an extra dataset, vol1, that we use to store backup data such as crontabs and config files usually kept in /etc.
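
The Unidata and vol1 datasets work the same way; since a dataset's mountpoint is just a ZFS property, they can live anywhere in the filesystem rather than under the pool's own tree.  A sketch of how they might be set up (commands shown for illustration, not a record of the exact steps used here):

# Datasets can be mounted anywhere in the filesystem
zfs create -o mountpoint=/usr/ud73      m10-1/ud73
zfs create -o mountpoint=/usr/unishared m10-1/unishared
zfs create -o mountpoint=/vol1          m10-1/vol1

# The mountpoint can also be changed later without recreating the dataset
zfs set mountpoint=/usr/unishared m10-1/unishared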


The resulting ZFS list looks like this:

NAME                   USED  AVAIL  REFER  MOUNTPOINT
m10-1                  345G   749G    31K  /m10-1
m10-1/coll18          47.0G   749G  46.6G  /datatel/coll18/coll18
m10-1/daemon          6.03M   749G  3.71M  /datatel/coll18/daemon
m10-1/development     2.03G   749G  88.0G  /datatel/coll18/development
m10-1/production       284G   749G  92.2G  /datatel/coll18/production
m10-1/repository_das  51.7M   749G  48.6M  /datatel/coll18/repository_das
m10-1/test            11.3G   749G  91.8G  /datatel/coll18/test
m10-1/ud73             650M   749G   640M  /usr/ud73
m10-1/unishared       2.25M   749G  1.90M  /usr/unishared
m10-1/vol1             140M   749G   139M  /vol1


You may have noticed that while all three environments refer to roughly the same amount of data (~90GB), the development and test environments actually use only a fraction of that.  That's because both were cloned from production, and a clone only consumes space for changes written since it was created (January for test and last September for development).  The Colleague clone process is one place where we can truly leverage ZFS features.
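
For reference, the clone workflow itself is only two commands: snapshot the production dataset, then clone that snapshot into a new dataset.  A sketch using a hypothetical snapshot name (the actual snapshot names and refresh dates on our system differ):

# Snapshot production at a known-good point in time
zfs snapshot m10-1/production@refresh-jan

# Clone the snapshot into the test environment; the clone initially
# consumes almost no space and only grows as blocks diverge from production
zfs clone -o mountpoint=/datatel/coll18/test m10-1/production@refresh-jan m10-1/test

# Confirm the clone's origin and the space it actually consumes
zfs get origin,used,referenced m10-1/test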