CAM Target Layer and Linux (continued)
Nikolay Denev
ndenev at gmail.com
Thu Sep 27 15:33:30 UTC 2012
Hi All,
With the help of Chuck Tuffli, I'm now able to use CTL to export zvols over FC to a Linux host:
LUN Backend       Size (Blocks)  BS   Serial Number  Device ID
  0 block            4185915392  512  FBSDZFS001     ORA_ASM_01
      lun_type=0
      num_threads=14
      file=/dev/zvol/tank/oracle_asm_01
  1 block            4185915392  512  FBSDZFS002     ORA_ASM_02
      lun_type=0
      num_threads=14
      file=/dev/zvol/tank/oracle_asm_02
  2 block            4185915392  512  FBSDZFS003     ORA_ASM_03
      lun_type=0
      num_threads=14
      file=/dev/zvol/tank/oracle_asm_03
  3 block            4185915392  512  FBSDZFS004     ORA_ASM_04
      lun_type=0
      num_threads=14
      file=/dev/zvol/tank/oracle_asm_04
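For context, block-backend LUNs like the ones in the listing can be set up with ctladm. A sketch of what the commands might look like (option names per ctladm(8); the serial number and device ID values are taken from the listing above):

```shell
# Create one CTL block-backend LUN on top of a zvol (sketch; see ctladm(8)).
# -b selects the backend, -o passes backend options,
# -S sets the serial number, -d sets the device ID.
ctladm create -b block \
    -o file=/dev/zvol/tank/oracle_asm_01 \
    -o num_threads=14 \
    -S FBSDZFS001 -d ORA_ASM_01

# Verify, producing output like the listing above:
ctladm devlist -v
```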
Then we ran some tests against these LUNs using Oracle's ORION benchmark tool on the Linux host.
The first test run completed successfully. I then disabled ZFS prefetch ("vfs.zfs.prefetch_disable=1")
and reran the test, which failed with the following error.
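For reference, that tunable is an ordinary read/write sysctl on FreeBSD, so it can be flipped without a reboot:

```shell
# Disable ZFS file-level prefetch at runtime
sysctl vfs.zfs.prefetch_disable=1

# To persist across reboots, set it in /boot/loader.conf instead:
# vfs.zfs.prefetch_disable="1"
```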
On the FreeBSD side:
(0:3:0:1): READ(10). CDB: 28 0 84 f9 58 0 0 4 0 0
(0:3:0:1): Tag: 0x116220, Type: 1
(0:3:0:1): CTL Status: SCSI Error
(0:3:0:1): SCSI Status: Check Condition
(0:3:0:1): SCSI sense: NOT READY asc:4b,0 (Data phase error)
Linux reported:
sd 4:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Add. Sense: Data phase error
end_request: I/O error, dev sdr, sector 2230933504
device-mapper: multipath: Failing path 65:16.
sd 4:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
Add. Sense: Data phase error
end_request: I/O error, dev sdr, sector 2230934528
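As a sanity check, the failing READ(10) CDB from the CTL message can be decoded by hand. A small sketch (plain Python, nothing CTL-specific) pulling out the LBA and transfer length shows the request lines up with the first sector Linux reported as failing:

```python
# Decode the READ(10) CDB from the CTL error message:
#   28 0 84 f9 58 0 0 4 0 0
cdb = bytes([0x28, 0x00, 0x84, 0xF9, 0x58, 0x00, 0x00, 0x04, 0x00, 0x00])

opcode = cdb[0]                        # 0x28 = READ(10)
lba = int.from_bytes(cdb[2:6], "big")  # bytes 2-5: logical block address
nblocks = int.from_bytes(cdb[7:9], "big")  # bytes 7-8: transfer length

assert opcode == 0x28
print(lba)      # 2230933504 -- the same sector Linux logged in end_request
print(nblocks)  # 1024 blocks, i.e. a 512 KiB read at a 512-byte block size
```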
There are no other suspicious messages in dmesg.
Also, ctladm dumpooa does not show anything.
Here is the ctladm dumpstructs output:
CTL IID to WWPN map start:
CTL IID to WWPN map end
CTL Persistent Reservation information start:
CTL Persistent Reservation information end
CTL Frontends:
Frontend CTL ioctl Type 4 pport 0 vport 0 WWNN 0 WWPN 0
Frontend ctl2cam Type 8 pport 0 vport 0 WWNN 0x5000000995680700 WWPN 0x5000000995680702
Frontend CTL internal Type 8 pport 0 vport 0 WWNN 0 WWPN 0
Frontend isp0 Type 1 pport 0 vport 0 WWNN 0x20000024ff376b98 WWPN 0x21000024ff376b98
isp0: max tagged openings: 4096, max dev openings: 4096
isp0: max_ccbs: 20488, ccb_count: 79
isp0: ccb_freeq is NOT empty
isp0: alloc_queue.entries 0, alloc_openings 4096
isp0: qfrozen_cnt:0:0:0:0:0
(ctl2:isp0:0:0:0): 0 requests total waiting for CCBs
(ctl2:isp0:0:0:0): 0 CCBs oustanding (17788811 allocated, 17788811 freed)
(ctl2:isp0:0:0:0): 0 CTIOs outstanding (17788811 sent, 17788811 returned
(ctl4:isp0:0:0:1): 0 requests total waiting for CCBs
(ctl4:isp0:0:0:1): 0 CCBs oustanding (16708305 allocated, 16708305 freed)
(ctl4:isp0:0:0:1): 0 CTIOs outstanding (16708305 sent, 16708305 returned
(ctl6:isp0:0:0:2): 0 requests total waiting for CCBs
(ctl6:isp0:0:0:2): 0 CCBs oustanding (16712865 allocated, 16712865 freed)
(ctl6:isp0:0:0:2): 0 CTIOs outstanding (16712865 sent, 16712865 returned
(ctl8:isp0:0:0:3): 0 requests total waiting for CCBs
(ctl8:isp0:0:0:3): 0 CCBs oustanding (16699727 allocated, 16699727 freed)
(ctl8:isp0:0:0:3): 0 CTIOs outstanding (16699727 sent, 16699727 returned
isp1: max tagged openings: 4096, max dev openings: 4096
isp1: max_ccbs: 20488, ccb_count: 1
isp1: ccb_freeq is NOT empty
isp1: alloc_queue.entries 0, alloc_openings 4096
isp1: qfrozen_cnt:0:0:0:0:0
(ctl3:isp1:0:0:0): 0 requests total waiting for CCBs
(ctl3:isp1:0:0:0): 0 CCBs oustanding (0 allocated, 0 freed)
(ctl3:isp1:0:0:0): 0 CTIOs outstanding (0 sent, 0 returned
(ctl5:isp1:0:0:1): 0 requests total waiting for CCBs
(ctl5:isp1:0:0:1): 0 CCBs oustanding (0 allocated, 0 freed)
(ctl5:isp1:0:0:1): 0 CTIOs outstanding (0 sent, 0 returned
(ctl7:isp1:0:0:2): 0 requests total waiting for CCBs
(ctl7:isp1:0:0:2): 0 CCBs oustanding (0 allocated, 0 freed)
(ctl7:isp1:0:0:2): 0 CTIOs outstanding (0 sent, 0 returned
(ctl9:isp1:0:0:3): 0 requests total waiting for CCBs
(ctl9:isp1:0:0:3): 0 CCBs oustanding (0 allocated, 0 freed)
(ctl9:isp1:0:0:3): 0 CTIOs outstanding (0 sent, 0 returned
Frontend isp1 Type 1 pport 1 vport 0 WWNN 0x20000024ff376b99 WWPN 0x21000024ff376b99
CTL Frontend information end
zpool status shows no errors.
P.S.:
The machine is an 8-core 2.0 GHz Xeon E5-2650 with 196G of RAM.
Other things to note: I'm running with "hw.mfi.max_cmds=254",
and the zpool is on GELI hardware-encrypted disks: 24 JBOD/RAID0 drives on mfi, each encrypted with GELI and arranged in three raidz2 vdevs of eight drives each.
The machine also acts as a fairly loaded NFS server.