These warning messages have been really annoying me on my OpenSolaris installation, so I thought I’d see if I could do something about it. The data zpool is a mirrored 1TB pool, and even though I’ve got 50+GB of storage free, time-slider insists on clearing out my snapshots…
Jul 6 02:50:55 monkey time-slider-cleanup: [ID 702911 daemon.emerg] data is over 95% capacity. All automatic snapshots were destroyed
Jul 6 02:50:55 monkey time-slider-cleanup: [ID 702911 daemon.notice] 6 automatic snapshots were destroyed
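As far as I can tell, the percentage time-slider is complaining about is just the pool’s own capacity figure, so a quick sanity check (just the general idea, not output from my box) is to ask zpool directly:
hippy@monkey:~$ zpool list data
hippy@monkey:~$ zpool get capacity data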
Let’s see what service properties we have for the time-slider service:
hippy@monkey:~$ svcprop time-slider
general/enabled boolean true
general/action_authorization astring solaris.smf.manage.zfs-auto-snapshot
general/entity_stability astring Unstable
general/single_instance boolean true
general/value_authorization astring solaris.smf.manage.zfs-auto-snapshot
zfs/value_authorization astring solaris.smf.manage.zfs-auto-snapshot
zfs/custom-selection boolean true
zpool/critical-level integer 90
zpool/emergency-level integer 95
zpool/value_authorization astring solaris.smf.manage.zfs-auto-snapshot
zpool/warning-level integer 90
auto-snapshot-svcs/entities fmri svc:/system/filesystem/zfs/auto-snapshot:frequent svc:/system/filesystem/zfs/auto-snapshot:hourly svc:/system/filesystem/zfs/auto-snapshot:daily svc:/system/filesystem/zfs/auto-snapshot:weekly svc:/system/filesystem/zfs/auto-snapshot:monthly
auto-snapshot-svcs/grouping astring require_all
auto-snapshot-svcs/restart_on astring refresh
auto-snapshot-svcs/type astring service
startd/duration astring transient
start/exec astring /lib/svc/method/time-slider start
start/timeout_seconds count 60
start/type astring method
stop/exec astring /lib/svc/method/time-slider stop
stop/timeout_seconds count 60
stop/type astring method
tm_common_name/C ustring GNOME Desktop Snapshot Management Service
tm_man_zfs/manpath astring /usr/share/man
tm_man_zfs/section astring 1M
tm_man_zfs/title astring zfs
restarter/auxiliary_state astring none
restarter/logfile astring /var/svc/log/application-time-slider:default.log
restarter/start_pid count 1519
restarter/start_method_timestamp time 1246813450.942248000
restarter/start_method_waitstatus integer 0
restarter/transient_contract count
restarter/next_state astring none
restarter/state astring online
restarter/state_timestamp time 1246813450.947212000
So if we bump up the following properties, I shouldn’t need to worry about snapshots being destroyed until we’re at 98/99% pool capacity:
hippy@monkey:~$ pfexec svccfg -s time-slider setprop zpool/emergency-level=99
hippy@monkey:~$ pfexec svccfg -s time-slider setprop zpool/critical-level=98
hippy@monkey:~$ pfexec svccfg -s time-slider setprop zpool/warning-level=98
hippy@monkey:~$ pfexec svcadm refresh time-slider
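And just to double-check the new values have been picked up and the service is still happy (purely a sanity check):
hippy@monkey:~$ svcprop -p zpool time-slider
hippy@monkey:~$ svcs time-slider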