plenty of memory, but system is intensively swapping
Trond Endrestøl
Trond.Endrestol at fagskolen.gjovik.no
Tue Nov 20 10:12:50 UTC 2018
On Tue, 20 Nov 2018 14:53+0500, Eugene M. Zheganin wrote:
> Hello,
>
>
> I have a recent FreeBSD 11-STABLE which is mainly used as an iSCSI target. The
> system has 64G of RAM but is swapping intensively. Yup, about half of the
> memory is used as ZFS ARC (it isn't capped in loader.conf), and the other half
> is eaten by the kernel, but the kernel only uses about half of it (thus 25% of
> the total amount).
>
> Could this be tweaked by some sysctl OIDs? (I suppose not, but worth asking.)
On freebsd-hackers the other day,
https://lists.freebsd.org/pipermail/freebsd-hackers/2018-November/053575.html,
it was suggested to set vm.pageout_update_period=0. This sysctl defaults
to 600.
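
For example, a minimal sketch of trying that suggestion (assuming your
11-STABLE kernel exposes the sysctl; I have not tested it on your workload):

  # check the current value (600 by default)
  sysctl vm.pageout_update_period

  # turn off the periodic pageout scan, as suggested in the thread above
  sysctl vm.pageout_update_period=0

  # persist across reboots
  echo 'vm.pageout_update_period=0' >> /etc/sysctl.conf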
ZFS's ARC needs to be capped; otherwise it will eat most, if not all,
of your memory.
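
As a data point, your zfs-stats output below shows the ARC max size (high
water) at 61.21 GiB, i.e. nearly all of the 64 GiB in the box. A minimal
sketch of a cap in /boot/loader.conf, assuming you want to leave roughly
half the RAM to the kernel and userland (32 GiB is only an example value,
not a recommendation):

  # /boot/loader.conf
  # Cap the ZFS ARC at 32 GiB (value in bytes).
  vfs.zfs.arc_max="34359738368"

If I remember correctly, vfs.zfs.arc_max can also be lowered on a running
11.x system with sysctl(8), but setting it from loader.conf at boot is the
usual and safer route.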
> top, vmstat 1 snapshots and zfs-stats -a are listed below.
>
>
> Thanks.
>
>
> [root@san01:nginx/vhost.d]# vmstat 1
> procs memory page disks faults cpu
> r b w avm fre flt re pi po fr sr da0 da1 in sy cs us sy id
> 0 0 38 23G 609M 1544 68 118 64 895 839 0 0 3644 2678 649 0 13 87
> 0 0 53 23G 601M 1507 185 742 315 1780 33523 651 664 56438 785 476583 0 28 72
> 0 0 53 23G 548M 1727 330 809 380 2377 33256 758 763 55555 1273 468545 0 26 73
> 0 0 53 23G 528M 1702 239 660 305 1347 32335 611 631 59962 1025 490365 0 22 78
> 0 0 52 23G 854M 2409 309 693 203 97943 16944 525 515 64309 1570 540533 0 29 71
> 3 0 54 23G 1.1G 2756 639 641 149 124049 19531 542 538 64777 1576 553946 0 35 65
> 0 0 53 23G 982M 1694 236 680 282 2754 35602 597 603 66540 1385 583687 0 28 72
> 0 0 41 23G 867M 1882 223 767 307 1162 34936 682 638 67284 780 568818 0 33 67
> 0 0 39 23G 769M 1542 167 673 336 1187 35123 646 610 65925 1176 551623 0 23 77
> 2 0 41 23G 700M 3602 535 688 327 2192 37109 622 594 65862 4256 518934 0 33 67
> 0 0 54 23G 650M 2957 219 726 464 4838 36464 852 868 65384 4110 558132 1 37 62
> 0 0 54 23G 641M 1576 245 730 344 1139 33681 740 679 67216 970 560379 0 31 69
>
>
> [root@san01:nginx/vhost.d]# top
> last pid: 55190; load averages: 11.32, 12.15, 10.76 up 10+16:05:14 14:38:58
> 101 processes: 1 running, 100 sleeping
> CPU: 0.2% user, 0.0% nice, 28.9% system, 1.6% interrupt, 69.3% idle
> Mem: 85M Active, 1528K Inact, 12K Laundry, 62G Wired, 540M Free
> ARC: 31G Total, 19G MFU, 6935M MRU, 2979M Anon, 556M Header, 1046M Other
> 25G Compressed, 34G Uncompressed, 1.39:1 Ratio
> Swap: 32G Total, 1186M Used, 31G Free, 3% Inuse, 7920K In, 3752K Out
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 40132 root 131 52 0 3152M 75876K uwait 14 36:59 6.10% java
> 55142 root 1 20 0 7904K 2728K CPU20 20 0:00 0.72% top
> 20026 root 1 20 0 106M 5676K nanslp 28 1:23 0.60% gstat
> 53642 root 1 20 0 7904K 2896K select 14 0:03 0.58% top
> 977 zfsreplica 1 20 0 30300K 3568K kqread 21 4:00 0.42% uwsgi
> 968 zfsreplica 1 20 0 30300K 2224K swread 11 2:03 0.21% uwsgi
> 973 zfsreplica 1 20 0 30300K 2264K swread 13 12:26 0.13% uwsgi
> 53000 www 1 20 0 23376K 1372K kqread 24 0:00 0.05% nginx
> 1292 root 1 20 0 6584K 2040K select 29 0:23 0.04% blacklistd
> 776 zabbix 1 20 0 12408K 4236K nanslp 26 4:42 0.03% zabbix_agentd
> 1289 root 1 20 0 67760K 5148K select 13 9:50 0.03% bsnmpd
> 777 zabbix 1 20 0 12408K 1408K select 25 5:06 0.03% zabbix_agentd
> 785 zfsreplica 1 20 0 27688K 3960K kqread 28 2:04 0.02% uwsgi
> 975 zfsreplica 1 20 0 30300K 464K kqread 18 2:33 0.02% uwsgi
> 974 zfsreplica 1 20 0 30300K 480K kqread 30 3:39 0.02% uwsgi
> 965 zfsreplica 1 20 0 30300K 464K kqread 4 3:23 0.02% uwsgi
> 976 zfsreplica 1 20 0 30300K 464K kqread 14 2:59 0.01% uwsgi
> 972 zfsreplica 1 20 0 30300K 464K kqread 10 2:57 0.01% uwsgi
> 963 zfsreplica 1 20 0 30300K 460K kqread 3 2:45 0.01% uwsgi
> 971 zfsreplica 1 20 0 30300K 464K kqread 13 3:16 0.01% uwsgi
> 69644 emz 1 20 0 13148K 4596K select 24 0:05 0.01% sshd
> 18203 vryabov 1 20 0 13148K 4624K select 9 0:02 0.01% sshd
> 636 root 1 20 0 6412K 1884K select 17 4:10 0.01% syslogd
> 51266 emz 1 20 0 13148K 4576K select 5 0:00 0.01% sshd
> 964 zfsreplica 1 20 0 30300K 460K kqread 18 11:02 0.01% uwsgi
> 962 zfsreplica 1 20 0 30300K 460K kqread 28 6:56 0.01% uwsgi
> 969 zfsreplica 1 20 0 30300K 464K kqread 12 2:07 0.01% uwsgi
> 967 zfsreplica 1 20 0 30300K 464K kqread 27 5:18 0.01% uwsgi
> 970 zfsreplica 1 20 0 30300K 464K kqread 0 4:25 0.01% uwsgi
> 966 zfsreplica 1 22 0 30300K 468K kqread 14 4:29 0.01% uwsgi
> 53001 www 1 20 0 23376K 1256K kqread 10 0:00 0.01% nginx
> 791 zfsreplica 1 20 0 27664K 4244K kqread 17 1:34 0.01% uwsgi
> 52431 root 1 20 0 17132K 4492K select 21 0:00 0.01% mc
> 70013 root 1 20 0 17132K 4492K select 4 0:03 0.01% mc
> 870 root 1 20 0 12448K 12544K select 19 0:51 0.01% ntpd
> [root@san01:nginx/vhost.d]# zfs-stats -a
> ------------------------------------------------------------------------
> ZFS Subsystem Report Tue Nov 20 14:39:05 2018
> ------------------------------------------------------------------------
> System Information:
> Kernel Version: 1102503 (osreldate)
> Hardware Platform: amd64
> Processor Architecture: amd64
> ZFS Storage pool Version: 5000
> ZFS Filesystem Version: 5
> FreeBSD 11.2-STABLE #0 r340287M: Fri Nov 9 22:23:22 +05 2018 emz
> 14:39 up 10 days, 16:05, 5 users, load averages: 10,96 12,05 10,74
> ------------------------------------------------------------------------
> System Memory:
> 0.14% 90.20 MiB Active, 0.01% 8.62 MiB Inact
> 98.97% 61.57 GiB Wired, 0.00% 0 Cache
> 0.88% 560.02 MiB Free, -0.00% -184320 Bytes Gap
> Real Installed: 64.00 GiB
> Real Available: 99.77% 63.85 GiB
> Real Managed: 97.43% 62.21 GiB
> Logical Total: 64.00 GiB
> Logical Used: 99.13% 63.44 GiB
> Logical Free: 0.87% 568.64 MiB
> Kernel Memory: 22.77 GiB
> Data: 99.84% 22.73 GiB
> Text: 0.16% 36.52 MiB
> Kernel Memory Map: 62.21 GiB
> Size: 46.85% 29.15 GiB
> Free: 53.15% 33.06 GiB
> ------------------------------------------------------------------------
> ARC Summary: (HEALTHY)
> Memory Throttle Count: 0
> ARC Misc:
> Deleted: 5.61b
> Recycle Misses: 0
> Mutex Misses: 64.25m
> Evict Skips: 98.33m
> ARC Size: 50.02% 30.62 GiB
> Target Size: (Adaptive) 50.02% 30.62 GiB
> Min Size (Hard Limit): 12.50% 7.65 GiB
> Max Size (High Water): 8:1 61.21 GiB
> ARC Size Breakdown:
> Recently Used Cache Size: 50.69% 15.52 GiB
> Frequently Used Cache Size: 49.31% 15.10 GiB
> ARC Hash Breakdown:
> Elements Max: 8.35m
> Elements Current: 30.08% 2.51m
> Collisions: 2.18b
> Chain Max: 10
> Chains: 308.03k
> ------------------------------------------------------------------------
> ARC Efficiency: 48.52b
> Cache Hit Ratio: 84.49% 40.99b
> Cache Miss Ratio: 15.51% 7.53b
> Actual Hit Ratio: 84.19% 40.85b
> Data Demand Efficiency: 83.84% 13.06b
> Data Prefetch Efficiency: 40.66% 1.42b
> CACHE HITS BY CACHE LIST:
> Most Recently Used: 15.11% 6.19b
> Most Frequently Used: 84.54% 34.66b
> Most Recently Used Ghost: 0.97% 396.23m
> Most Frequently Used Ghost: 0.18% 75.49m
> CACHE HITS BY DATA TYPE:
> Demand Data: 26.70% 10.95b
> Prefetch Data: 1.41% 576.89m
> Demand Metadata: 70.83% 29.04b
> Prefetch Metadata: 1.06% 434.19m
> CACHE MISSES BY DATA TYPE:
> Demand Data: 28.04% 2.11b
> Prefetch Data: 11.18% 841.81m
> Demand Metadata: 60.37% 4.54b
> Prefetch Metadata: 0.40% 30.27m
> ------------------------------------------------------------------------
> L2ARC is disabled
> ------------------------------------------------------------------------
> File-Level Prefetch: (HEALTHY)
> DMU Efficiency: 9.20b
> Hit Ratio: 5.80% 533.41m
> Miss Ratio: 94.20% 8.66b
> Colinear: 0
> Hit Ratio: 100.00% 0
> Miss Ratio: 100.00% 0
> Stride: 0
> Hit Ratio: 100.00% 0
> Miss Ratio: 100.00% 0
> DMU Misc:
> Reclaim: 0
> Successes: 100.00% 0
> Failures: 100.00% 0
> Streams: 0
> +Resets: 100.00% 0
> -Resets: 100.00% 0
> Bogus: 0
> ------------------------------------------------------------------------
> VDEV cache is disabled
> ------------------------------------------------------------------------
> ZFS Tunables (sysctl):
> kern.maxusers 4422
> vm.kmem_size 66799345664
> vm.kmem_size_scale 1
> vm.kmem_size_min 0
> vm.kmem_size_max 1319413950874
> vfs.zfs.trim.max_interval 1
> vfs.zfs.trim.timeout 30
> vfs.zfs.trim.txg_delay 32
> vfs.zfs.trim.enabled 0
> vfs.zfs.vol.immediate_write_sz 131072
> vfs.zfs.vol.unmap_sync_enabled 0
> vfs.zfs.vol.unmap_enabled 1
> vfs.zfs.vol.recursive 0
> vfs.zfs.vol.mode 1
> vfs.zfs.version.zpl 5
> vfs.zfs.version.spa 5000
> vfs.zfs.version.acl 1
> vfs.zfs.version.ioctl 7
> vfs.zfs.debug 0
> vfs.zfs.super_owner 0
> vfs.zfs.immediate_write_sz 32768
> vfs.zfs.sync_pass_rewrite 2
> vfs.zfs.sync_pass_dont_compress 5
> vfs.zfs.sync_pass_deferred_free 2
> vfs.zfs.zio.dva_throttle_enabled 1
> vfs.zfs.zio.exclude_metadata 0
> vfs.zfs.zio.use_uma 1
> vfs.zfs.zil_slog_bulk 786432
> vfs.zfs.cache_flush_disable 0
> vfs.zfs.zil_replay_disable 0
> vfs.zfs.standard_sm_blksz 131072
> vfs.zfs.dtl_sm_blksz 4096
> vfs.zfs.min_auto_ashift 9
> vfs.zfs.max_auto_ashift 13
> vfs.zfs.vdev.trim_max_pending 10000
> vfs.zfs.vdev.bio_delete_disable 0
> vfs.zfs.vdev.bio_flush_disable 0
> vfs.zfs.vdev.def_queue_depth 32
> vfs.zfs.vdev.queue_depth_pct 1000
> vfs.zfs.vdev.write_gap_limit 4096
> vfs.zfs.vdev.read_gap_limit 32768
> vfs.zfs.vdev.aggregation_limit 1048576
> vfs.zfs.vdev.trim_max_active 64
> vfs.zfs.vdev.trim_min_active 1
> vfs.zfs.vdev.scrub_max_active 2
> vfs.zfs.vdev.scrub_min_active 1
> vfs.zfs.vdev.async_write_max_active 10
> vfs.zfs.vdev.async_write_min_active 1
> vfs.zfs.vdev.async_read_max_active 3
> vfs.zfs.vdev.async_read_min_active 1
> vfs.zfs.vdev.sync_write_max_active 10
> vfs.zfs.vdev.sync_write_min_active 10
> vfs.zfs.vdev.sync_read_max_active 10
> vfs.zfs.vdev.sync_read_min_active 10
> vfs.zfs.vdev.max_active 1000
> vfs.zfs.vdev.async_write_active_max_dirty_percent 60
> vfs.zfs.vdev.async_write_active_min_dirty_percent 30
> vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
> vfs.zfs.vdev.mirror.non_rotating_inc 0
> vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
> vfs.zfs.vdev.mirror.rotating_seek_inc 5
> vfs.zfs.vdev.mirror.rotating_inc 0
> vfs.zfs.vdev.trim_on_init 1
> vfs.zfs.vdev.cache.bshift 16
> vfs.zfs.vdev.cache.size 0
> vfs.zfs.vdev.cache.max 16384
> vfs.zfs.vdev.default_ms_shift 29
> vfs.zfs.vdev.min_ms_count 16
> vfs.zfs.vdev.max_ms_count 200
> vfs.zfs.txg.timeout 5
> vfs.zfs.space_map_ibs 14
> vfs.zfs.spa_allocators 4
> vfs.zfs.spa_min_slop 134217728
> vfs.zfs.spa_slop_shift 5
> vfs.zfs.spa_asize_inflation 24
> vfs.zfs.deadman_enabled 1
> vfs.zfs.deadman_checktime_ms 5000
> vfs.zfs.deadman_synctime_ms 1000000
> vfs.zfs.debug_flags 0
> vfs.zfs.debugflags 0
> vfs.zfs.recover 0
> vfs.zfs.spa_load_verify_data 1
> vfs.zfs.spa_load_verify_metadata 1
> vfs.zfs.spa_load_verify_maxinflight 10000
> vfs.zfs.max_missing_tvds_scan 0
> vfs.zfs.max_missing_tvds_cachefile 2
> vfs.zfs.max_missing_tvds 0
> vfs.zfs.spa_load_print_vdev_tree 0
> vfs.zfs.ccw_retry_interval 300
> vfs.zfs.check_hostid 1
> vfs.zfs.mg_fragmentation_threshold 85
> vfs.zfs.mg_noalloc_threshold 0
> vfs.zfs.condense_pct 200
> vfs.zfs.metaslab_sm_blksz 4096
> vfs.zfs.metaslab.bias_enabled 1
> vfs.zfs.metaslab.lba_weighting_enabled 1
> vfs.zfs.metaslab.fragmentation_factor_enabled 1
> vfs.zfs.metaslab.preload_enabled 1
> vfs.zfs.metaslab.preload_limit 3
> vfs.zfs.metaslab.unload_delay 8
> vfs.zfs.metaslab.load_pct 50
> vfs.zfs.metaslab.min_alloc_size 33554432
> vfs.zfs.metaslab.df_free_pct 4
> vfs.zfs.metaslab.df_alloc_threshold 131072
> vfs.zfs.metaslab.debug_unload 0
> vfs.zfs.metaslab.debug_load 0
> vfs.zfs.metaslab.fragmentation_threshold 70
> vfs.zfs.metaslab.force_ganging 16777217
> vfs.zfs.free_bpobj_enabled 1
> vfs.zfs.free_max_blocks -1
> vfs.zfs.zfs_scan_checkpoint_interval 7200
> vfs.zfs.zfs_scan_legacy 0
> vfs.zfs.no_scrub_prefetch 0
> vfs.zfs.no_scrub_io 0
> vfs.zfs.resilver_min_time_ms 3000
> vfs.zfs.free_min_time_ms 1000
> vfs.zfs.scan_min_time_ms 1000
> vfs.zfs.scan_idle 50
> vfs.zfs.scrub_delay 4
> vfs.zfs.resilver_delay 2
> vfs.zfs.top_maxinflight 32
> vfs.zfs.zfetch.array_rd_sz 1048576
> vfs.zfs.zfetch.max_idistance 67108864
> vfs.zfs.zfetch.max_distance 8388608
> vfs.zfs.zfetch.min_sec_reap 2
> vfs.zfs.zfetch.max_streams 8
> vfs.zfs.prefetch_disable 0
> vfs.zfs.delay_scale 500000
> vfs.zfs.delay_min_dirty_percent 60
> vfs.zfs.dirty_data_sync 67108864
> vfs.zfs.dirty_data_max_percent 10
> vfs.zfs.dirty_data_max_max 4294967296
> vfs.zfs.dirty_data_max 4294967296
> vfs.zfs.max_recordsize 1048576
> vfs.zfs.default_ibs 17
> vfs.zfs.default_bs 9
> vfs.zfs.send_holes_without_birth_time 1
> vfs.zfs.mdcomp_disable 0
> vfs.zfs.per_txg_dirty_frees_percent 30
> vfs.zfs.nopwrite_enabled 1
> vfs.zfs.dedup.prefetch 1
> vfs.zfs.dbuf_cache_lowater_pct 10
> vfs.zfs.dbuf_cache_hiwater_pct 10
> vfs.zfs.dbuf_metadata_cache_overflow 0
> vfs.zfs.dbuf_metadata_cache_shift 6
> vfs.zfs.dbuf_cache_shift 5
> vfs.zfs.dbuf_metadata_cache_max_bytes 1026962560
> vfs.zfs.dbuf_cache_max_bytes 2053925120
> vfs.zfs.arc_min_prescient_prefetch_ms 6
> vfs.zfs.arc_min_prefetch_ms 1
> vfs.zfs.l2c_only_size 0
> vfs.zfs.mfu_ghost_data_esize 1910587392
> vfs.zfs.mfu_ghost_metadata_esize 5158840832
> vfs.zfs.mfu_ghost_size 7069428224
> vfs.zfs.mfu_data_esize 17620227072
> vfs.zfs.mfu_metadata_esize 950300160
> vfs.zfs.mfu_size 20773338624
> vfs.zfs.mru_ghost_data_esize 6989578240
> vfs.zfs.mru_ghost_metadata_esize 18479132160
> vfs.zfs.mru_ghost_size 25468710400
> vfs.zfs.mru_data_esize 4455460352
> vfs.zfs.mru_metadata_esize 70236672
> vfs.zfs.mru_size 7413314560
> vfs.zfs.anon_data_esize 0
> vfs.zfs.anon_metadata_esize 0
> vfs.zfs.anon_size 3040037888
> vfs.zfs.l2arc_norw 1
> vfs.zfs.l2arc_feed_again 1
> vfs.zfs.l2arc_noprefetch 1
> vfs.zfs.l2arc_feed_min_ms 200
> vfs.zfs.l2arc_feed_secs 1
> vfs.zfs.l2arc_headroom 2
> vfs.zfs.l2arc_write_boost 8388608
> vfs.zfs.l2arc_write_max 8388608
> vfs.zfs.arc_meta_limit 16431400960
> vfs.zfs.arc_free_target 113124
> vfs.zfs.arc_kmem_cache_reap_retry_ms 0
> vfs.zfs.compressed_arc_enabled 1
> vfs.zfs.arc_grow_retry 60
> vfs.zfs.arc_shrink_shift 7
> vfs.zfs.arc_average_blocksize 8192
> vfs.zfs.arc_no_grow_shift 5
> vfs.zfs.arc_min 8215700480
> vfs.zfs.arc_max 65725603840
> vfs.zfs.abd_chunk_size 4096
> vfs.zfs.abd_scatter_enabled 1
> ------------------------------------------------------------------------
--
Trond.