Discussion:
[Bug 240145] [zfs] kernel panic with hanging vdev
b***@freebsd.org
2019-08-28 08:48:04 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

Andriy Gapon <***@FreeBSD.org> changed:

What |Removed |Added
----------------------------------------------------------------------------
Assignee|***@FreeBSD.org |***@FreeBSD.org

--- Comment #1 from Andriy Gapon <***@FreeBSD.org> ---
ZFS just reported a stuck I/O operation.
The problem is likely to be either in the driver or in the hardware.
Maybe it's triggered by the I/O load that a scrub creates.
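
For context, the panic itself comes from ZFS's "deadman" logic, which fires
when an outstanding I/O makes no progress for a long time. A minimal sketch for
inspecting (or, as a stopgap while debugging, relaxing) that behavior on a
FreeBSD 11.x/12.x system, assuming the legacy vfs.zfs.deadman_* sysctls:

# Inspect the deadman settings:
sysctl vfs.zfs.deadman_enabled      # 1 = panic when an I/O hangs too long
sysctl vfs.zfs.deadman_synctime_ms  # how long an I/O may hang before firing

# Stopgap only: disable the panic while chasing the HBA problem.
# The hang itself remains; the pool may simply wedge instead.
sysctl vfs.zfs.deadman_enabled=0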
b***@freebsd.org
2019-08-28 08:48:36 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

Andriy Gapon <***@FreeBSD.org> changed:

What |Removed |Added
----------------------------------------------------------------------------
Summary|[zfs] kernel panic with hanging vdev |[smartpqi][zfs] kernel panic with hanging vdev
b***@freebsd.org
2019-08-28 09:18:04 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #2 from ***@ultra-secure.de ---
OK, thanks.

I have two of these servers; this is actually the one with less I/O (and
fewer drives: it finished scrubbing 19T in 4.5h yesterday).

So, I would also tend to point towards hardware. But what is it?
A specific drive? Or is the HBA toast?

I'll have to check whether I can actually swap out the HBA or whether I need to
swap the motherboard.

I've disabled scrubs, so the server works for the moment.
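
A sketch of the two usual knobs for that, assuming the stock periodic(8) setup
(pool name assumed):

# Cancel a scrub that is already running:
zpool scrub -s tank

# Keep periodic(8) from starting scrubs automatically (only relevant if
# daily_scrub_zfs_enable was set in /etc/periodic.conf):
sysrc -f /etc/periodic.conf daily_scrub_zfs_enable="NO"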
b***@freebsd.org
2019-08-28 11:27:39 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

Peter Eriksson <***@lysator.liu.se> changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@lysator.liu.se

--- Comment #3 from Peter Eriksson <***@lysator.liu.se> ---
Just another (rather worthless, but anyway) datapoint:

The same thing happened to us on one of our production file servers just this
Monday during prime daytime (1 pm). No scrub was running, just a normal load of
SMB and NFS traffic (~400 SMB clients and ~40 NFS clients).

FreeBSD kernel: 11.2-RELEASE-p10

Hardware: Dell PowerEdge R730xd with an LSI SAS3008 (Dell-branded) HBA; the
DATA pool the error occurred in has 12 x 10TB SAS 7200rpm drives in a RAID-Z2
config.

After the reboot, no errors could be found via smartctl or in any logs (other
than the "panic" message) on that disk or any other disk.

The vdev pointed at in the panic message was the one named
"diskid/DISK-7PK8RSLC" below:

# zpool status -v DATA
  pool: DATA
 state: ONLINE
  scan: scrub repaired 0 in 83h42m with 0 errors on Tue Jan 8 07:44:05 2019
config:

        NAME                              STATE   READ WRITE CKSUM
        DATA                              ONLINE     0     0     0
          raidz2-0                        ONLINE     0     0     0
            diskid/DISK-7PK784UC          ONLINE     0     0     0
            diskid/DISK-7PK2GT9G          ONLINE     0     0     0
            diskid/DISK-7PK8RSLC          ONLINE     0     0     0
            diskid/DISK-7PK77Z2C          ONLINE     0     0     0
            diskid/DISK-7PK1U91G          ONLINE     0     0     0
            diskid/DISK-7PK2GBPG          ONLINE     0     0     0
          raidz2-1                        ONLINE     0     0     0
            diskid/DISK-7PK1AZ4G          ONLINE     0     0     0
            diskid/DISK-7PK2GEEG          ONLINE     0     0     0
            diskid/DISK-7PK14ARG          ONLINE     0     0     0
            diskid/DISK-7PK7HS5C          ONLINE     0     0     0
            diskid/DISK-7PK2GERG          ONLINE     0     0     0
            diskid/DISK-7PK200TG          ONLINE     0     0     0
        logs
          diskid/DISK-BTHV7146043R400NGN  ONLINE     0     0     0
          diskid/DISK-BTHV715403A9400NGN  ONLINE     0     0     0
        cache
          diskid/DISK-CVCQ72660083400AGN  ONLINE     0     0     0
        spares
          diskid/DISK-7PK1RNVG            AVAIL
          diskid/DISK-7PK784NC            AVAIL

errors: No known data errors

# sas3ircu 0 DISPLAY
Avago Technologies SAS3 IR Configuration Utility.
Version 11.00.00.00 (2015.08.04)
Copyright (c) 2009-2015 Avago Technologies. All rights reserved.

Read configuration has been initiated for controller 0
------------------------------------------------------------------------
Controller information
------------------------------------------------------------------------
Controller type : SAS3008
BIOS version : 8.37.00.00
Firmware version : 16.00.04.00
Channel description : 1 Serial Attached SCSI
Initiator ID : 0
Maximum physical devices : 543
Concurrent commands supported : 9584
Slot : 5
Segment : 0
Bus : 2
Device : 0
Function : 0
RAID Support : No
...
Device is a Hard disk
Enclosure # : 2
Slot # : 2
SAS Address : 5000cca-2-51b8-fbb1
State : Ready (RDY)
Size (in MB)/(in sectors) : 9470975/2424569855
Manufacturer : HGST
Model Number : HUH721010AL4200
Firmware Revision : LS17
Serial No : 7PK8RSLC
GUID : N/A
Protocol : SAS
Drive Type : SAS_HDD
...
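
For reference, the smartctl check mentioned above can be run per drive; a
sketch (device node assumed, mapped from the serial in the sas3ircu output):

# Map the serial from the sas3ircu output to a daX device node:
geom disk list | egrep -i 'geom name|ident'
# SAS drives report health, grown defects and error counters via smartctl:
smartctl -a /dev/da2 | egrep -i 'serial|health|defect|error'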
b***@freebsd.org
2019-09-02 00:06:42 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #4 from ***@ultra-secure.de ---
So, replacing the controller (with an HPE E208i-p SR Gen10) seems to have
helped. The scrub went through.

I know hardware errors are difficult to diagnose from the OS above, but maybe
there could somehow be more diagnostics?
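
One crude userland diagnostic, sketched here for what it's worth: watch
per-disk latency while the pool is loaded; a wedged device shows up in
gstat(8) as a provider whose queue never drains.

# Physical providers only, refreshed every second; a hanging disk sits at
# a non-zero L(q) with ms/r and ms/w climbing while its siblings stay idle:
gstat -p -I 1s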


We will have to send back this controller (we pre-ordered a new one on a
hunch).
b***@freebsd.org
2019-10-05 12:11:19 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #5 from ***@ultra-secure.de ---
Now, the other of the two servers is also acting up.

After rebooting, it finished its scrub, though.

I've not yet ordered a replacement HBA but will do so soon.

The server with the replaced HBA has never shown a problem again. So far ;-)
b***@freebsd.org
2021-01-26 14:04:55 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #10 from ***@ultra-secure.de ---
OK, so I still get this panic:

[49406] [ERROR]::[55:655.0][0,68,0][CPU
9][pqi_map_request][540]:bus_dmamap_load_ccb failed = 36 count = 131072
[49406] [WARN]:[55:655.0][CPU 9][pqisrc_io_start][794]:In Progress on 68
[50411] panic: I/O to pool 'datapool' appears to be hung on vdev guid
3875563786885777386 at '/dev/da9'.
[50411] cpuid = 14
[50411] time = 1611665350
[50411] KDB: stack backtrace:
[50411] #0 0xffffffff80c0a8e5 at kdb_backtrace+0x65
[50411] #1 0xffffffff80bbeb9b at vpanic+0x17b
[50411] #2 0xffffffff80bbea13 at panic+0x43
[50411] #3 0xffffffff828a2314 at vdev_deadman+0x184
[50411] #4 0xffffffff828a21d1 at vdev_deadman+0x41
[50411] #5 0xffffffff828a21d1 at vdev_deadman+0x41
[50411] #6 0xffffffff828930f6 at spa_deadman+0x86
[50411] #7 0xffffffff80c1ced4 at taskqueue_run_locked+0x144
[50411] #8 0xffffffff80c1e2c6 at taskqueue_thread_loop+0xb6
[50411] #9 0xffffffff80b804ce at fork_exit+0x7e
[50411] #10 0xffffffff81067f9e at fork_trampoline+0xe
[50411] Uptime: 14h0m11s


This is with the OG Adaptec HBA:

<Adaptec Smart Adapter 3.21> at scbus1 target 72 lun 0 (ses1,pass12)
<Adaptec 3154-8i 3.21> at scbus1 target 1088 lun 0 (pass13)

set to HBA mode.
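
For the record, the GUID in the panic message can be tied back to the device
label; a quick sketch using the names from the panic above:

# The ZFS label on the device carries the vdev guid reported by the panic
# (3875563786885777386):
zdb -l /dev/da9 | grep -w guid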
b***@freebsd.org
2021-02-01 21:57:40 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

seri <***@gmail.com> changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@gmail.com

--- Comment #11 from seri <***@gmail.com> ---
This looks like zed trying to fault the disk:

PID: 8821 TASK: ffff89f704878000 CPU: 6 COMMAND: "zed"
#0 [ffffa03c47a0f930] __schedule at ffffffffa88789dc
#1 [ffffa03c47a0f9a8] schedule at ffffffffa8878e76
#2 [ffffa03c47a0f9c0] schedule_timeout at ffffffffa887bcc8
#3 [ffffa03c47a0fa60] wait_for_completion at ffffffffa887981d
#4 [ffffa03c47a0fab8] flush_work at ffffffffa80c02ca
#5 [ffffa03c47a0fb30] __cancel_work_timer at ffffffffa80c0443
#6 [ffffa03c47a0fba0] cancel_delayed_work_sync at ffffffffa80c0593
#7 [ffffa03c47a0fbb0] disk_block_events at ffffffffa83d9d67
#8 [ffffa03c47a0fbe8] __blkdev_get at ffffffffa828c147
#9 [ffffa03c47a0fc38] blkdev_get at ffffffffa828c6ff
#10 [ffffa03c47a0fcb8] blkdev_get_by_path at ffffffffa828ca13
#11 [ffffa03c47a0fce0] vdev_disk_open at ffffffffc0734591 [zfs]
#12 [ffffa03c47a0fd40] vdev_open at ffffffffc0730808 [zfs]
#13 [ffffa03c47a0fd88] vdev_reopen at ffffffffc07318c1 [zfs]
#14 [ffffa03c47a0fda8] vdev_fault at ffffffffc0732310 [zfs]
#15 [ffffa03c47a0fdd8] zfs_ioc_vdev_set_state at ffffffffc0762737 [zfs]
#16 [ffffa03c47a0fe08] zfsdev_ioctl at ffffffffc076ae82 [zfs]
#17 [ffffa03c47a0fe70] do_vfs_ioctl at ffffffffa8264a76
#18 [ffffa03c47a0fee8] sys_ioctl at ffffffffa8265009
#19 [ffffa03c47a0ff28] do_syscall_64 at ffffffffa8003997


crash> bt
PID: 47708 TASK: ffff89f5dfb5c000 CPU: 0 COMMAND: "z_ioctl_iss"
#0 [ffffa03c572ff820] machine_kexec at ffffffffa805a19c
#1 [ffffa03c572ff878] __crash_kexec at ffffffffa8137513
#2 [ffffa03c572ff940] crash_kexec at ffffffffa81375ec
#3 [ffffa03c572ff960] oops_end at ffffffffa802f81a
#4 [ffffa03c572ff988] no_context at ffffffffa8067c52
#5 [ffffa03c572ff9e0] __bad_area_nosemaphore at ffffffffa8067f8e
#6 [ffffa03c572ffa30] bad_area_nosemaphore at ffffffffa8068084
#7 [ffffa03c572ffa40] __do_page_fault at ffffffffa8068748
#8 [ffffa03c572ffab0] trace_do_page_fault at ffffffffa8068c43
#9 [ffffa03c572ffae8] do_async_page_fault at ffffffffa806162a
#10 [ffffa03c572ffb00] async_page_fault at ffffffffa887e9f8
[exception RIP: generic_make_request_checks+73]
RIP: ffffffffa83c6159 RSP: ffffa03c572ffbb0 RFLAGS: 00010287
RAX: 0000000000000000 RBX: ffff89ed72203700 RCX: 000000003cd63180
RDX: 0000000000000080 RSI: 0000884849f33767 RDI: ffff89f7020cee80
RBP: ffffa03c572ffc10 R8: 0000000000000010 R9: 0000000002400000
R10: ffff89f713407980 R11: 0000000000000000 R12: 000000003cd63200
R13: 0000000000000080 R14: 0000000000000000 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#11 [ffffa03c572ffc18] generic_make_request at ffffffffa83c8a44
#12 [ffffa03c572ffc68] submit_bio at ffffffffa83c8d2d
#13 [ffffa03c572ffcb8] vdev_disk_io_start at ffffffffc07350ce [zfs]
#14 [ffffa03c572ffd98] zio_vdev_io_start at ffffffffc07a0844 [zfs]
#15 [ffffa03c572ffde8] zio_execute at ffffffffc0796665 [zfs]
#16 [ffffa03c572ffe28] taskq_thread at ffffffffc059b396 [spl]
#17 [ffffa03c572ffec8] kthread at ffffffffa80c6ce7
#18 [ffffa03c572fff50] ret_from_fork at ffffffffa887d755

if (vd == NULL) {
        ...
} else if (ZIO_IS_TRIM(zio)) {
        /*
         * For TRIM, it is important to take the SCL_ZIO lock to
         * avoid another thread messing with the vdev state.
         */
        spa_config_enter(spa, SCL_ZIO, zio, RW_READER);
}

And in zio_vdev_io_assess:

if ((vd == NULL && !(zio->io_flags & ZIO_FLAG_CONFIG_WRITER)) ||
    (ZIO_IS_TRIM(zio)))
        spa_config_exit(zio->io_spa, SCL_ZIO, zio);
b***@freebsd.org
2021-02-01 22:18:50 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

***@microchip.com changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@microchip.com

--- Comment #12 from ***@microchip.com ---
Have you tried the latest FreeBSD drivers, found here:

https://storage.microsemi.com/en-us/speed/raid/aac/unix/smartpqi_freebsd_v4030.0.101_tgz.php

We've been trying to get the latest driver code changes into the tree at

https://reviews.freebsd.org/D24428

but I guess we've lost the magic touch for getting these changes submitted.
b***@freebsd.org
2021-02-06 22:03:45 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #13 from ***@ultra-secure.de ---
Hi,

thanks for this update.

I will try your driver, then. Unfortunately, I don't have a test environment to
try it out in.

I guess I can create a boot environment and, if it causes problems, just revert
to the old one?
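
That is the usual safety net; a minimal sketch with bectl(8) on a ZFS-on-root
12.x system (BE name assumed):

# Keep a fallback copy of the running system, then install the test kernel:
bectl create pre-smartpqi
make -C /usr/src installkernel
shutdown -r now

# If the new kernel misbehaves, revert from the loader menu or with:
bectl activate pre-smartpqi && shutdown -r now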

I'm sorry that your efforts have not been rewarded.
b***@freebsd.org
2021-02-06 22:46:39 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #14 from ***@ultra-secure.de ---
OK, so I realized I can compile this anywhere, not just on my server.

This is what I did:

- take a FreeBSD 12.2-RELEASE-p3 install
- download src.txz, extract
- freebsd-update fetch && freebsd-update install
- cd /usr/src
- patch -p 0 < /root/D24428.diff
- make buildkernel && make installkernel


This is what I get:

cc -target x86_64-unknown-freebsd12.2
--sysroot=/usr/obj/usr/src/amd64.amd64/tmp
-B/usr/obj/usr/src/amd64.amd64/tmp/usr/bin -c -O2 -pipe -fno-strict-aliasing
-g -nostdinc -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include
-I/usr/src/sys/contrib/libfdt -D_KERNEL -DHAVE_KERNEL_OPTION_HEADERS -include
opt_global.h -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer
-MD -MF.depend.nehemiah.o -MTnehemiah.o
-fdebug-prefix-map=./machine=/usr/src/sys/amd64/include
-fdebug-prefix-map=./x86=/usr/src/sys/x86/include -mcmodel=kernel -mno-red-zone
-mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding
-fwrapv -fstack-protector -gdwarf-2 -Wall -Wredundant-decls -Wnested-externs
-Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef
-Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs
-fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error-tautological-compare
-Wno-error-empty-body -Wno-error-parentheses-equality
-Wno-error-unused-function -Wno-error-pointer-sign
-Wno-error-shift-negative-value -Wno-address-of-packed-member -mno-aes
-mno-avx -std=iso9899:1999 -Werror /usr/src/sys/dev/random/nehemiah.c
ctfconvert -L VERSION -g nehemiah.o
cc -target x86_64-unknown-freebsd12.2
--sysroot=/usr/obj/usr/src/amd64.amd64/tmp
-B/usr/obj/usr/src/amd64.amd64/tmp/usr/bin -c -O2 -pipe -fno-strict-aliasing
-g -nostdinc -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include
-I/usr/src/sys/contrib/libfdt -D_KERNEL -DHAVE_KERNEL_OPTION_HEADERS -include
opt_global.h -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer
-MD -MF.depend.smartpqi_cam.o -MTsmartpqi_cam.o
-fdebug-prefix-map=./machine=/usr/src/sys/amd64/include
-fdebug-prefix-map=./x86=/usr/src/sys/x86/include -mcmodel=kernel -mno-red-zone
-mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding
-fwrapv -fstack-protector -gdwarf-2 -Wall -Wredundant-decls -Wnested-externs
-Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef
-Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs
-fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error-tautological-compare
-Wno-error-empty-body -Wno-error-parentheses-equality
-Wno-error-unused-function -Wno-error-pointer-sign
-Wno-error-shift-negative-value -Wno-address-of-packed-member -mno-aes
-mno-avx -std=iso9899:1999 -Werror /usr/src/sys/dev/smartpqi/smartpqi_cam.c
In file included from /usr/src/sys/dev/smartpqi/smartpqi_cam.c:34:
In file included from /usr/src/sys/dev/smartpqi/smartpqi_includes.h:86:
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:973:27: error: redefinition of
typedef 'OS_ATOMIC64_T' is a C11 feature [-Werror,-Wtypedef-redefinition]
typedef volatile uint64_t OS_ATOMIC64_T;
^
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:825:33: note: previous definition
is here
typedef volatile uint64_t OS_ATOMIC64_T;
^
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:975:9: error: 'OS_ATOMIC64_READ'
macro redefined [-Werror,-Wmacro-redefined]
#define OS_ATOMIC64_READ(_softs, target)
atomic_load_acq_64(&(_softs)->target)
^
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:826:9: note: previous definition
is here
#define OS_ATOMIC64_READ(p) atomic_load_acq_64(p)
^
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:976:9: error: 'OS_ATOMIC64_INC'
macro redefined [-Werror,-Wmacro-redefined]
#define OS_ATOMIC64_INC(_softs, target)
atomic_add_64(&(_softs)->target, 1)
^
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:831:9: note: previous definition
is here
#define OS_ATOMIC64_INC(p) (atomic_fetchadd_64(p, 1) + 1)
^
/usr/src/sys/dev/smartpqi/smartpqi_cam.c:619:4: error: use of undeclared
identifier 'bsd_status'
bsd_status = EIO;
^
/usr/src/sys/dev/smartpqi/smartpqi_cam.c:623:31: error: use of undeclared
identifier 'bsd_status'; did you mean 'dumpstatus'?
DBG_FUNC("OUT error = %d\n", bsd_status);
^~~~~~~~~~
dumpstatus
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:1083:58: note: expanded from macro
'DBG_FUNC'
printf("[FUNC]:[ %s ] [ %d
]"fmt,__func__,__LINE__,##args); \

^
/usr/src/sys/sys/systm.h:217:5: note: 'dumpstatus' declared here
int dumpstatus(vm_offset_t addr, off_t count);
^
/usr/src/sys/dev/smartpqi/smartpqi_cam.c:623:31: error: format specifies type
'int' but the argument has type 'int (*)(vm_offset_t, off_t)' (aka 'int
(*)(unsigned long, long)')
[-Werror,-Wformat]
DBG_FUNC("OUT error = %d\n", bsd_status);
~~ ^~~~~~~~~~
/usr/src/sys/dev/smartpqi/smartpqi_defines.h:1083:58: note: expanded from macro
'DBG_FUNC'
printf("[FUNC]:[ %s ] [ %d
]"fmt,__func__,__LINE__,##args); \
~~~
^~~~
/usr/src/sys/dev/smartpqi/smartpqi_cam.c:625:9: error: use of undeclared
identifier 'bsd_status'; did you mean 'dumpstatus'?
return bsd_status;
^~~~~~~~~~
dumpstatus
/usr/src/sys/sys/systm.h:217:5: note: 'dumpstatus' declared here
int dumpstatus(vm_offset_t addr, off_t count);
^
/usr/src/sys/dev/smartpqi/smartpqi_cam.c:625:9: error: incompatible pointer to
integer conversion returning 'int (vm_offset_t, off_t)' (aka 'int (unsigned
long, long)') from a function
with result type 'int' [-Werror,-Wint-conversion]
return bsd_status;
^~~~~~~~~~
8 errors generated.
*** Error code 1

Stop.
make[2]: stopped in /usr/obj/usr/src/amd64.amd64/sys/GENERIC
*** Error code 1
*** Error code 1


Unfortunately, I have no idea how to fix this.
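
The redefinition errors (the same typedef at lines 825 and 973 of
smartpqi_defines.h) suggest hunks were applied on top of a tree that already
contained part of the change; a quick sanity check, sketched under that
assumption:

# Any .rej/.orig files mean patch(1) did not apply cleanly:
find /usr/src/sys/dev/smartpqi \( -name '*.rej' -o -name '*.orig' \) -print

# Duplicate definitions point at doubled-up hunks:
grep -n 'OS_ATOMIC64_T' /usr/src/sys/dev/smartpqi/smartpqi_defines.h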
b***@freebsd.org
2021-02-18 18:02:37 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #15 from ***@ultra-secure.de ---
Hi,

with the updated diff, I get:

(f-hosting <src>) 1 # patch -l -p 0 < /root/D24428.diff
Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_cam.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_cam.c
|+++ sys/dev/smartpqi/smartpqi_cam.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_cam.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 36.
Hunk #3 succeeded at 56.
Hunk #4 succeeded at 67.
Hunk #5 succeeded at 79.
Hunk #6 succeeded at 106.
Hunk #7 succeeded at 135.
Hunk #8 succeeded at 151.
Hunk #9 succeeded at 162.
Hunk #10 succeeded at 184.
Hunk #11 succeeded at 210.
Hunk #12 succeeded at 241.
Hunk #13 succeeded at 257.
Hunk #14 succeeded at 348.
Hunk #15 succeeded at 363.
Hunk #16 succeeded at 380.
Hunk #17 succeeded at 400.
Hunk #18 succeeded at 439.
Hunk #19 succeeded at 466.
Hunk #20 succeeded at 489.
Hunk #21 succeeded at 515.
Hunk #22 failed at 539.
Hunk #23 failed at 577.
Hunk #24 succeeded at 613 (offset -14 lines).
Hunk #25 succeeded at 638 (offset -14 lines).
Hunk #26 succeeded at 646 (offset -14 lines).
Hunk #27 succeeded at 663 (offset -14 lines).
Hunk #28 succeeded at 702 (offset -14 lines).
Hunk #29 succeeded at 723 (offset -14 lines).
Hunk #30 succeeded at 747 (offset -14 lines).
Hunk #31 succeeded at 774 (offset -14 lines).
Hunk #32 succeeded at 798 (offset -14 lines).
Hunk #33 succeeded at 868 (offset -14 lines).
Hunk #34 succeeded at 946 (offset -14 lines).
Hunk #35 succeeded at 957 (offset -14 lines).
Hunk #36 succeeded at 985 (offset -14 lines).
Hunk #37 succeeded at 1003 (offset -14 lines).
Hunk #38 succeeded at 1046 (offset -14 lines).
Hunk #39 succeeded at 1112 (offset -14 lines).
Hunk #40 succeeded at 1125 (offset -14 lines).
Hunk #41 succeeded at 1170 (offset -14 lines).
Hunk #42 succeeded at 1190 (offset -14 lines).
Hunk #43 succeeded at 1206 (offset -14 lines).
Hunk #44 succeeded at 1217 (offset -14 lines).
Hunk #45 succeeded at 1228 (offset -14 lines).
Hunk #46 succeeded at 1247 (offset -14 lines).
Hunk #47 succeeded at 1267 (offset -14 lines).
Hunk #48 succeeded at 1294 (offset -14 lines).
Hunk #49 succeeded at 1317 (offset -14 lines).
2 out of 49 hunks failed--saving rejects to sys/dev/smartpqi/smartpqi_cam.c.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_cmd.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_cmd.c
|+++ sys/dev/smartpqi/smartpqi_cmd.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_cmd.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 34.
Hunk #3 succeeded at 45.
Hunk #4 succeeded at 72.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_defines.h
|===================================================================
|--- sys/dev/smartpqi/smartpqi_defines.h
|+++ sys/dev/smartpqi/smartpqi_defines.h
--------------------------
Patching file sys/dev/smartpqi/smartpqi_defines.h using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 31.
Hunk #3 succeeded at 90.
Hunk #4 succeeded at 100.
Hunk #5 succeeded at 124.
Hunk #6 succeeded at 186.
Hunk #7 succeeded at 208.
Hunk #8 succeeded at 221.
Hunk #9 succeeded at 240.
Hunk #10 succeeded at 276.
Hunk #11 succeeded at 327.
Hunk #12 succeeded at 346.
Hunk #13 succeeded at 355.
Hunk #14 succeeded at 380.
Hunk #15 succeeded at 403.
Hunk #16 succeeded at 423.
Hunk #17 succeeded at 490.
Hunk #18 succeeded at 555.
Hunk #19 succeeded at 604.
Hunk #20 succeeded at 666.
Hunk #21 succeeded at 682.
Hunk #22 succeeded at 706.
Hunk #23 succeeded at 760.
Hunk #24 succeeded at 789.
Hunk #25 succeeded at 800.
Hunk #26 succeeded at 818.
Hunk #27 succeeded at 917.
Hunk #28 failed at 980.
Hunk #29 succeeded at 1031.
Hunk #30 succeeded at 1042.
Hunk #31 succeeded at 1054.
Hunk #32 succeeded at 1073.
Hunk #33 succeeded at 1091.
Hunk #34 succeeded at 1163.
1 out of 34 hunks failed--saving rejects to
sys/dev/smartpqi/smartpqi_defines.h.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_discovery.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_discovery.c
|+++ sys/dev/smartpqi/smartpqi_discovery.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_discovery.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 31.
Hunk #3 succeeded at 44.
Hunk #4 succeeded at 58.
Hunk #5 succeeded at 70.
Hunk #6 succeeded at 129.
Hunk #7 succeeded at 150.
Hunk #8 succeeded at 162.
Hunk #9 succeeded at 197.
Hunk #10 succeeded at 212.
Hunk #11 succeeded at 259.
Hunk #12 succeeded at 290.
Hunk #13 succeeded at 302.
Hunk #14 succeeded at 315.
Hunk #15 succeeded at 335.
Hunk #16 succeeded at 348.
Hunk #17 succeeded at 358.
Hunk #18 succeeded at 367.
Hunk #19 succeeded at 399.
Hunk #20 succeeded at 432.
Hunk #21 succeeded at 515.
Hunk #22 succeeded at 530.
Hunk #23 succeeded at 577.
Hunk #24 succeeded at 590.
Hunk #25 succeeded at 602.
Hunk #26 succeeded at 617.
Hunk #27 succeeded at 630.
Hunk #28 succeeded at 642.
Hunk #29 succeeded at 653.
Hunk #30 succeeded at 684.
Hunk #31 succeeded at 753.
Hunk #32 succeeded at 766.
Hunk #33 succeeded at 787.
Hunk #34 succeeded at 808.
Hunk #35 succeeded at 816.
Hunk #36 succeeded at 857.
Hunk #37 succeeded at 877.
Hunk #38 succeeded at 888.
Hunk #39 succeeded at 898.
Hunk #40 succeeded at 927.
Hunk #41 succeeded at 940.
Hunk #42 succeeded at 958.
Hunk #43 succeeded at 991.
Hunk #44 succeeded at 1008.
Hunk #45 succeeded at 1027.
Hunk #46 succeeded at 1038.
Hunk #47 succeeded at 1057.
Hunk #48 succeeded at 1072.
Hunk #49 succeeded at 1108.
Hunk #50 succeeded at 1139.
Hunk #51 succeeded at 1152.
Hunk #52 succeeded at 1172.
Hunk #53 succeeded at 1186.
Hunk #54 succeeded at 1212.
Hunk #55 succeeded at 1277.
Hunk #56 succeeded at 1297.
Hunk #57 succeeded at 1345.
Hunk #58 succeeded at 1391.
Hunk #59 succeeded at 1424.
Hunk #60 succeeded at 1435.
Hunk #61 succeeded at 1486.
Hunk #62 succeeded at 1497.
Hunk #63 succeeded at 1555.
Hunk #64 succeeded at 1563.
Hunk #65 succeeded at 1581.
Hunk #66 succeeded at 1597.
Hunk #67 succeeded at 1616.
Hunk #68 succeeded at 1662.
Hunk #69 succeeded at 1674.
Hunk #70 succeeded at 1684.
Hunk #71 succeeded at 1708.
Hunk #72 succeeded at 1718.
Hunk #73 succeeded at 1737.
Hunk #74 succeeded at 1752.
Hunk #75 succeeded at 1761.
Hunk #76 succeeded at 1775.
Hunk #77 succeeded at 1804.
Hunk #78 succeeded at 1865.
Hunk #79 succeeded at 1882.
Hunk #80 succeeded at 1919.
Hunk #81 succeeded at 1970.
Hunk #82 succeeded at 1987.
Hunk #83 succeeded at 2007.
Hunk #84 succeeded at 2027.
Hunk #85 succeeded at 2045.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_event.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_event.c
|+++ sys/dev/smartpqi/smartpqi_event.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_event.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 37.
Hunk #3 succeeded at 61.
Hunk #4 succeeded at 74.
Hunk #5 succeeded at 94.
Hunk #6 succeeded at 109.
Hunk #7 succeeded at 121.
Hunk #8 succeeded at 168.
Hunk #9 succeeded at 176.
Hunk #10 succeeded at 209.
Hunk #11 succeeded at 224.
Hunk #12 succeeded at 246.
Hunk #13 succeeded at 259.
Hunk #14 succeeded at 281.
Hunk #15 succeeded at 301.
Hunk #16 succeeded at 320.
Hunk #17 succeeded at 347.
Hunk #18 succeeded at 381.
Hunk #19 succeeded at 399.
Hunk #20 succeeded at 419.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_helper.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_helper.c
|+++ sys/dev/smartpqi/smartpqi_helper.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_helper.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 30.
Hunk #3 succeeded at 97.
Hunk #4 succeeded at 142.
Hunk #5 succeeded at 151.
Hunk #6 succeeded at 160.
Hunk #7 succeeded at 188.
Hunk #8 succeeded at 230.
Hunk #9 succeeded at 287.
Hunk #10 succeeded at 299.
Hunk #11 succeeded at 319.
Hunk #12 succeeded at 330.
Hunk #13 succeeded at 353.
Hunk #14 succeeded at 364.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_includes.h
|===================================================================
|--- sys/dev/smartpqi/smartpqi_includes.h
|+++ sys/dev/smartpqi/smartpqi_includes.h
--------------------------
Patching file sys/dev/smartpqi/smartpqi_includes.h using Plan A...
Hunk #1 succeeded at 1.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_init.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_init.c
|+++ sys/dev/smartpqi/smartpqi_init.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_init.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 33.
Hunk #3 succeeded at 67.
Hunk #4 succeeded at 82.
Hunk #5 succeeded at 110.
Hunk #6 succeeded at 133.
Hunk #7 succeeded at 154.
Hunk #8 succeeded at 169.
Hunk #9 succeeded at 185.
Hunk #10 succeeded at 223.
Hunk #11 succeeded at 240.
Hunk #12 succeeded at 264.
Hunk #13 succeeded at 296.
Hunk #14 succeeded at 314.
Hunk #15 succeeded at 327.
Hunk #16 succeeded at 340.
Hunk #17 succeeded at 385.
Hunk #18 succeeded at 609.
Hunk #19 succeeded at 635.
Hunk #20 succeeded at 673.
Hunk #21 succeeded at 690.
Hunk #22 succeeded at 705.
Hunk #23 succeeded at 714.
Hunk #24 succeeded at 750.
Hunk #25 succeeded at 766.
Hunk #26 succeeded at 774.
Hunk #27 succeeded at 793.
Hunk #28 succeeded at 842.
Hunk #29 succeeded at 852.
Hunk #30 succeeded at 877.
Hunk #31 succeeded at 886.
Hunk #32 succeeded at 948.
Hunk #33 succeeded at 1024.
Hunk #34 succeeded at 1067.
Hunk #35 succeeded at 1089.
Hunk #36 succeeded at 1098.
Hunk #37 succeeded at 1119.
Hunk #38 succeeded at 1142.
Hunk #39 succeeded at 1155.
Hunk #40 succeeded at 1185.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_intr.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_intr.c
|+++ sys/dev/smartpqi/smartpqi_intr.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_intr.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 34.
Hunk #3 succeeded at 91.
Hunk #4 succeeded at 113.
Hunk #5 succeeded at 134.
Hunk #6 succeeded at 160.
Hunk #7 succeeded at 171.
Hunk #8 succeeded at 212.
Hunk #9 succeeded at 240.
Hunk #10 succeeded at 254.
Hunk #11 succeeded at 268.
Hunk #12 succeeded at 298.
Hunk #13 succeeded at 326.
Hunk #14 succeeded at 379.
Hunk #15 succeeded at 412.
Hunk #16 succeeded at 437.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_ioctl.h
|===================================================================
|--- sys/dev/smartpqi/smartpqi_ioctl.h
|+++ sys/dev/smartpqi/smartpqi_ioctl.h
--------------------------
Patching file sys/dev/smartpqi/smartpqi_ioctl.h using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 34.
Hunk #3 succeeded at 69.
Hunk #4 succeeded at 77.
Hunk #5 succeeded at 96.
Hunk #6 succeeded at 105.
Hunk #7 succeeded at 136.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_ioctl.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_ioctl.c
|+++ sys/dev/smartpqi/smartpqi_ioctl.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_ioctl.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 37.
Hunk #3 succeeded at 47.
Hunk #4 succeeded at 57.
Hunk #5 succeeded at 99.
Hunk #6 succeeded at 124.
Hunk #7 succeeded at 194.
Hunk #8 succeeded at 207.
Hunk #9 succeeded at 249.
Hunk #10 succeeded at 286.
Hunk #11 succeeded at 339.
Hunk #12 succeeded at 355.
Hunk #13 succeeded at 375.
Hunk #14 succeeded at 389.
Hunk #15 succeeded at 413.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_main.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_main.c
|+++ sys/dev/smartpqi/smartpqi_main.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_main.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 35.
Hunk #3 succeeded at 50.
Hunk #4 succeeded at 132.
Hunk #5 succeeded at 145.
Hunk #6 succeeded at 155.
Hunk #7 succeeded at 175.
Hunk #8 succeeded at 212.
Hunk #9 succeeded at 222.
Hunk #10 succeeded at 243.
Hunk #11 succeeded at 266.
Hunk #12 succeeded at 277.
Hunk #13 succeeded at 325.
Hunk #14 failed at 347.
Hunk #15 failed at 389.
Hunk #16 succeeded at 426 (offset -2 lines).
Hunk #17 failed at 439.
Hunk #18 succeeded at 480 (offset 2 lines).
Hunk #19 succeeded at 497 (offset -2 lines).
Hunk #20 succeeded at 554 (offset 2 lines).
3 out of 20 hunks failed--saving rejects to
sys/dev/smartpqi/smartpqi_main.c.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_mem.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_mem.c
|+++ sys/dev/smartpqi/smartpqi_mem.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_mem.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 failed at 30.
Hunk #3 succeeded at 42.
Hunk #4 succeeded at 94.
Hunk #5 succeeded at 118.
Hunk #6 failed at 169.
Hunk #7 succeeded at 187.
Hunk #8 succeeded at 200.
2 out of 8 hunks failed--saving rejects to sys/dev/smartpqi/smartpqi_mem.c.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_misc.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_misc.c
|+++ sys/dev/smartpqi/smartpqi_misc.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_misc.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 31.
Hunk #3 succeeded at 42.
Hunk #4 succeeded at 52.
Hunk #5 failed at 71.
Hunk #6 succeeded at 93 (offset 1 line).
Hunk #7 failed at 102.
Hunk #8 succeeded at 112 (offset 1 line).
Hunk #9 succeeded at 162 (offset 1 line).
2 out of 9 hunks failed--saving rejects to sys/dev/smartpqi/smartpqi_misc.c.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_prototypes.h
|===================================================================
|--- sys/dev/smartpqi/smartpqi_prototypes.h
|+++ sys/dev/smartpqi/smartpqi_prototypes.h
--------------------------
Patching file sys/dev/smartpqi/smartpqi_prototypes.h using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 40.
Hunk #3 succeeded at 55.
Hunk #4 succeeded at 93.
Hunk #5 succeeded at 112.
Hunk #6 succeeded at 129.
Hunk #7 succeeded at 137.
Hunk #8 succeeded at 212.
Hunk #9 succeeded at 229.
Hunk #10 succeeded at 237.
Hunk #11 succeeded at 271.
Hunk #12 succeeded at 295.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_queue.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_queue.c
|+++ sys/dev/smartpqi/smartpqi_queue.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_queue.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 34.
Hunk #3 succeeded at 51.
Hunk #4 succeeded at 60.
Hunk #5 succeeded at 73.
Hunk #6 succeeded at 109.
Hunk #7 succeeded at 123.
Hunk #8 succeeded at 131.
Hunk #9 succeeded at 148.
Hunk #10 succeeded at 160.
Hunk #11 succeeded at 190.
Hunk #12 succeeded at 231.
Hunk #13 succeeded at 240.
Hunk #14 succeeded at 289 with fuzz 2.
Hunk #15 succeeded at 309.
Hunk #16 succeeded at 332.
Hunk #17 succeeded at 343.
Hunk #18 succeeded at 370.
Hunk #19 succeeded at 385.
Hunk #20 succeeded at 408.
Hunk #21 succeeded at 420.
Hunk #22 succeeded at 433.
Hunk #23 succeeded at 443.
Hunk #24 succeeded at 459.
Hunk #25 succeeded at 477.
Hunk #26 succeeded at 492.
Hunk #27 succeeded at 520.
Hunk #28 succeeded at 529.
Hunk #29 succeeded at 539 with fuzz 1.
Hunk #30 succeeded at 549.
Hunk #31 succeeded at 565 with fuzz 1.
Hunk #32 succeeded at 579.
Hunk #33 succeeded at 591.
Hunk #34 succeeded at 599.
Hunk #35 succeeded at 610 with fuzz 1.
Hunk #36 succeeded at 629.
Hunk #37 failed at 672.
Hunk #38 succeeded at 689.
Hunk #39 succeeded at 740.
Hunk #40 succeeded at 761.
Hunk #41 succeeded at 771.
Hunk #42 succeeded at 794.
Hunk #43 succeeded at 817.
Hunk #44 succeeded at 825.
Hunk #45 succeeded at 858.
Hunk #46 succeeded at 900.
Hunk #47 succeeded at 919.
Hunk #48 succeeded at 953.
Hunk #49 succeeded at 969.
Hunk #50 succeeded at 979.
Hunk #51 succeeded at 992.
Hunk #52 succeeded at 1012.
1 out of 52 hunks failed--saving rejects to
sys/dev/smartpqi/smartpqi_queue.c.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_request.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_request.c
|+++ sys/dev/smartpqi/smartpqi_request.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_request.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 49.
Hunk #3 succeeded at 61.
Hunk #4 succeeded at 82.
Hunk #5 succeeded at 105.
Hunk #6 succeeded at 122.
Hunk #7 succeeded at 132.
Hunk #8 succeeded at 149.
Hunk #9 succeeded at 167.
Hunk #10 succeeded at 181.
Hunk #11 succeeded at 189.
Hunk #12 succeeded at 216.
Hunk #13 succeeded at 235.
Hunk #14 succeeded at 265.
Hunk #15 succeeded at 281.
Hunk #16 succeeded at 292.
Hunk #17 succeeded at 301.
Hunk #18 succeeded at 318.
Hunk #19 succeeded at 328.
Hunk #20 succeeded at 394.
Hunk #21 succeeded at 403.
Hunk #22 succeeded at 424.
Hunk #23 succeeded at 437.
Hunk #24 succeeded at 481.
Hunk #25 succeeded at 521.
Hunk #26 failed at 603.
Hunk #27 succeeded at 631.
Hunk #28 succeeded at 757.
Hunk #29 succeeded at 801.
Hunk #30 succeeded at 867.
1 out of 30 hunks failed--saving rejects to
sys/dev/smartpqi/smartpqi_request.c.rej
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_response.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_response.c
|+++ sys/dev/smartpqi/smartpqi_response.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_response.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 33.
Hunk #3 succeeded at 47.
Hunk #4 succeeded at 57.
Hunk #5 succeeded at 85.
Hunk #6 succeeded at 95.
Hunk #7 succeeded at 176.
Hunk #8 succeeded at 201.
Hunk #9 succeeded at 219.
Hunk #10 succeeded at 231.
Hunk #11 succeeded at 274.
Hunk #12 succeeded at 317.
Hunk #13 succeeded at 348.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_sis.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_sis.c
|+++ sys/dev/smartpqi/smartpqi_sis.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_sis.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 30.
Hunk #3 succeeded at 47.
Hunk #4 succeeded at 82.
Hunk #5 succeeded at 94.
Hunk #6 succeeded at 135.
Hunk #7 succeeded at 162.
Hunk #8 succeeded at 176.
Hunk #9 succeeded at 226.
Hunk #10 succeeded at 249.
Hunk #11 succeeded at 274.
Hunk #12 succeeded at 291.
Hunk #13 succeeded at 306.
Hunk #14 succeeded at 387.
Hunk #15 succeeded at 440.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_structures.h
|===================================================================
|--- sys/dev/smartpqi/smartpqi_structures.h
|+++ sys/dev/smartpqi/smartpqi_structures.h
--------------------------
Patching file sys/dev/smartpqi/smartpqi_structures.h using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 31.
Hunk #3 succeeded at 40.
Hunk #4 succeeded at 57.
Hunk #5 succeeded at 93.
Hunk #6 succeeded at 107.
Hunk #7 succeeded at 188.
Hunk #8 succeeded at 225.
Hunk #9 succeeded at 252.
Hunk #10 succeeded at 276.
Hunk #11 succeeded at 355.
Hunk #12 succeeded at 378.
Hunk #13 succeeded at 388.
Hunk #14 succeeded at 408.
Hunk #15 succeeded at 421.
Hunk #16 succeeded at 437.
Hunk #17 succeeded at 553.
Hunk #18 succeeded at 589.
Hunk #19 succeeded at 705.
Hunk #20 succeeded at 748.
Hunk #21 succeeded at 779.
Hunk #22 succeeded at 795.
Hunk #23 succeeded at 958.
Hunk #24 succeeded at 979.
Hunk #25 succeeded at 1063.
Hunk #26 succeeded at 1086.
Hunk #27 succeeded at 1107.
Hunk #28 succeeded at 1122.
Hunk #29 succeeded at 1150.
Hunk #30 succeeded at 1162.
Hunk #31 succeeded at 1181.
Hmm... The next patch looks like a unified diff to me...
The text leading up to this was:
--------------------------
|Index: sys/dev/smartpqi/smartpqi_tag.c
|===================================================================
|--- sys/dev/smartpqi/smartpqi_tag.c
|+++ sys/dev/smartpqi/smartpqi_tag.c
--------------------------
Patching file sys/dev/smartpqi/smartpqi_tag.c using Plan A...
Hunk #1 succeeded at 1.
Hunk #2 succeeded at 35.
Hunk #3 succeeded at 52.
Hunk #4 succeeded at 77.
Hunk #5 succeeded at 98.
Hunk #6 succeeded at 135.
Hunk #7 succeeded at 195.
Hunk #8 succeeded at 241.
Hunk #9 succeeded at 250.
Hunk #10 succeeded at 264.
done


I did a make clean, then deleted /usr/src and re-extracted a clean src.tar.gz
before running freebsd-update fetch && freebsd-update install again.
Then I ran

patch -l -p 0 < /root/D24428.diff

again.
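
Several hunks above still failed, so the build will hit the same kind of
errors until the rejects are merged by hand; a sketch for reviewing them:

# patch(1) saved every failed hunk next to its target file:
find /usr/src/sys/dev/smartpqi -name '*.rej' -exec ls -l {} +
# Compare each reject against the current source before merging manually:
for f in /usr/src/sys/dev/smartpqi/*.rej; do echo "== $f"; cat "$f"; done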
b***@freebsd.org
2021-02-24 13:07:38 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

Srikanth <***@microchip.com> changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@microchip.com

--- Comment #16 from Srikanth <***@microchip.com> ---
Hi,

Was the driver/kernel compiled following the steps mentioned in
https://reviews.freebsd.org/D24428 ?
b***@freebsd.org
2021-03-15 21:35:11 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #17 from ***@ultra-secure.de ---
Hi,

I'm not successful here:

(f-hosting <smartpqi>) 0 # git apply --check /root/D24428_2.diff
error: patch failed: sys/dev/smartpqi/smartpqi_cam.c:231
error: sys/dev/smartpqi/smartpqi_cam.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_cmd.c:43
error: sys/dev/smartpqi/smartpqi_cmd.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_defines.h:77
error: sys/dev/smartpqi/smartpqi_defines.h: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_discovery.c:62
error: sys/dev/smartpqi/smartpqi_discovery.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_event.c:35
error: sys/dev/smartpqi/smartpqi_event.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_helper.c:43
error: sys/dev/smartpqi/smartpqi_helper.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_init.c:31
error: sys/dev/smartpqi/smartpqi_init.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_intr.c:32
error: sys/dev/smartpqi/smartpqi_intr.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_ioctl.h:67
error: sys/dev/smartpqi/smartpqi_ioctl.h: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_ioctl.c:53
error: sys/dev/smartpqi/smartpqi_ioctl.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_main.c:134
error: sys/dev/smartpqi/smartpqi_main.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_misc.c:39
error: sys/dev/smartpqi/smartpqi_misc.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_prototypes.h:120
error: sys/dev/smartpqi/smartpqi_prototypes.h: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_queue.c:32
error: sys/dev/smartpqi/smartpqi_queue.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_request.c:47
error: sys/dev/smartpqi/smartpqi_request.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_response.c:85
error: sys/dev/smartpqi/smartpqi_response.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_sis.c:77
error: sys/dev/smartpqi/smartpqi_sis.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_structures.h:29
error: sys/dev/smartpqi/smartpqi_structures.h: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_tag.c:73
error: sys/dev/smartpqi/smartpqi_tag.c: patch does not apply


It seems it checks out HEAD/-CURRENT.
Is that supposed to happen?
b***@freebsd.org
2021-03-15 21:38:44 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #18 from ***@ultra-secure.de ---
OK,

with

git clone -b releng/12.2 --depth 1 https://git.freebsd.org/src.git src

(f-hosting <smartpqi>) 0 # git apply --check /root/D24428_2.diff
error: patch failed: sys/dev/smartpqi/smartpqi_cam.c:473
error: sys/dev/smartpqi/smartpqi_cam.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_defines.h:856
error: sys/dev/smartpqi/smartpqi_defines.h: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_main.c:312
error: sys/dev/smartpqi/smartpqi_main.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_mem.c:28
error: sys/dev/smartpqi/smartpqi_mem.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_misc.c:69
error: sys/dev/smartpqi/smartpqi_misc.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_queue.c:280
error: sys/dev/smartpqi/smartpqi_queue.c: patch does not apply
error: patch failed: sys/dev/smartpqi/smartpqi_request.c:540
error: sys/dev/smartpqi/smartpqi_request.c: patch does not apply
b***@freebsd.org
2021-03-26 15:12:46 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #19 from Srikanth <***@microchip.com> ---
(In reply to rainer from comment #18)
Can you please apply the latest patch on 12.2?
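
For completeness, a sketch of pulling the current revision of the review
straight from Phabricator (the ?download=true form is assumed from
Phabricator's usual raw-diff URL) and dry-running it first:

fetch -o /root/D24428_latest.diff 'https://reviews.freebsd.org/D24428?download=true'
cd /usr/src
git apply --stat /root/D24428_latest.diff    # what would change
git apply --check /root/D24428_latest.diff && git apply /root/D24428_latest.diff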
b***@freebsd.org
2021-08-16 13:19:20 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

Peter <***@guenschel.com> changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@guenschel.com

--- Comment #20 from Peter <***@guenschel.com> ---
Hopping on this thread as I have the exact same issue. I have spent significant
time attempting to debug this and will share what I've found so far. I have two
nearly identical systems, HPE DL180 Gen10 with P816i controllers. One has SATA
disks, the other SAS. Only the system with SAS disks seems to be affected, and
only a zfs scrub triggers this panic; the system is otherwise stable. The
hardware has been verified OK by successfully completing a scrub under CentOS
8.4 with 0 errors. I have been able to reproduce this on every
OS/driver/firmware combination up to and including:

FreeBSD 13.0
Microsemi driver v4130 (8/5/2021)
HPE SmartArray Firmware 3.53

I'm willing to help debug, as this is a 100% reproducible issue: sometimes the
panic hits within the first 1% of scrub progress, but never later than 8-9%.
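
A reproduction sketch matching that description (pool name assumed):

zpool scrub tank
# Watch progress; on the affected SAS box the panic hits before ~9% scanned:
while sleep 30; do zpool status tank | grep -A 1 'scan:'; done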
b***@freebsd.org
2021-08-16 16:28:11 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #21 from ***@ultra-secure.de ---
I believe this has been MFC'ed to stable/13 a while ago and to stable/12 recently.

Please try that.

https://cgit.freebsd.org/src/log/sys/dev/smartpqi?h=stable/13

https://cgit.freebsd.org/src/commit/sys/dev/smartpqi?h=stable/13&id=1569aab1cb38a38fb619f343ed1e47d4b4070ffe

For me (DL380 Gen10 with P408i + Microsemi Smart-RAID 3154-8i) it works
without issue, so far.
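
A minimal sketch for testing the MFC'd driver from stable/13 (paths and job
count assumed):

git clone -b stable/13 --depth 1 https://git.freebsd.org/src.git /usr/src
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildkernel
make installkernel && shutdown -r now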
b***@freebsd.org
2021-08-16 17:07:22 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #22 from Peter <***@guenschel.com> ---
(In reply to rainer from comment #21)

I don't understand what you're asking me to try. The latest drivers from
Microsemi are newer than anything in your links. Are you implying that the
official latest drivers don't contain these patches? Or that the driver
included in 13.0 doesn't contain them?
b***@freebsd.org
2021-08-16 18:13:02 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #23 from ***@ultra-secure.de ---
Ah, OK.

Sorry - when I checked, there were no newer drivers on Microsemi's homepage.

I have one DL380 Gen10 with a P408 on which I still have to run a scrub. The
other two show no problems.

Maybe you can try 13-stable and if there's a problem, comment on the
differential and open a new PR here?

The biggest problem is that none of the committers actually have access to the
hardware and are thus reliant on third parties like us for verification.
b***@freebsd.org
2021-08-16 19:17:52 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #24 from Peter <***@guenschel.com> ---
(In reply to rainer from comment #23)

As before, I can reproduce this bug on all versions of FreeBSD, up to and
including 13.0 stable. I'd prefer not to split the thread as it is the same
unresolved issue present in 12.x.
b***@freebsd.org
2021-08-16 19:22:17 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #25 from ***@ultra-secure.de ---
Yes, do so and post the PR here.


How many drives do you have?
b***@freebsd.org
2021-08-16 19:55:53 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #26 from Peter <***@guenschel.com> ---
(In reply to rainer from comment #25)

See bug #257890
12x Seagate ST16000NM002G
b***@freebsd.org
2021-08-17 22:55:53 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

Warner Losh <***@FreeBSD.org> changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@FreeBSD.org

--- Comment #27 from Warner Losh <***@FreeBSD.org> ---
As far as I know, I've committed all the smartpqi drivers from Microsemi. The
13.0 and -current drivers are identical. The 12.x driver has a few differences,
but I don't believe they will affect its operation on a 12.x kernel. The latest
drivers are not yet in a release, though, so you'd have to test on -stable
(which it looks like you are doing).

If there are newer drivers on the Microsemi website, can someone point me at
them? There were long delays in getting their last release in, and I'd like to
avoid that in the future by keeping more on top of it.
b***@freebsd.org
2021-08-18 12:06:04 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #28 from Peter <***@guenschel.com> ---
(In reply to Warner Losh from comment #27)

These are the drivers that were tested after the BSD-included drivers failed.
They have the same issue.

https://storage.microsemi.com/en-us/speed/raid/aac/unix/smartpqi_freebsd_v4130.0.1008_tgz.php
b***@freebsd.org
2021-08-18 14:38:21 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #29 from Warner Losh <***@FreeBSD.org> ---
(In reply to Peter from comment #28)
Yeah, the drivers that I found had no source included, so getting them into
FreeBSD is going to be tough.
b***@freebsd.org
2021-08-30 14:58:17 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

***@one-energy-it.de changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |***@one-energy-it.de

--- Comment #30 from ***@one-energy-it.de ---
I was hit by that one too...

Running a "HPE DL380 Gen10" with a "HPE Smart Array P816i-a SR Gen10"

pqi_map_requests: bus_dmamap_load_ccb failed error

All this using HPE's latest firmware for the controller:
HPE Smart Array P816i-a SR Gen10 3.53

Hope there'll be a fix soon!
b***@freebsd.org
2021-08-30 16:37:12 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #31 from Peter <***@guenschel.com> ---
(In reply to Mirco Schmidt from comment #30)

What model# and quantity of disks do you have in this system?
b***@freebsd.org
2021-08-30 17:47:41 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #33 from benoitc <***@enki-multimedia.eu> ---
(In reply to benoitc from comment #32)
on the latest 13.0-RELEASE-p4.
b***@freebsd.org
2021-08-31 08:53:20 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #35 from Mirco Schmidt <***@one-energy-it.de> ---
(In reply to Peter from comment #31)

Hi hi,

I've got 5 x 8TB 7.2k SAS disks (MB008000JWJRQ) behind the P816i-a. Additionally,
there are 2 x 480GB NVMe SSDs (VS000480KXALB) on the "HPE NS204i-p Gen10+ Boot
Controller", which I intend to use as log & ZIL, and two 240GB mSATA SSDs from
which I now boot the Proxmox that I had to set up yesterday because the BSD was
repeatedly crashing! And the machine had to go live that day...

So I'm now running the BSD from inside KVM with those 5 disks passed through
to the VM, and it is stable & fast ;-)
b***@freebsd.org
2021-08-31 12:03:13 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #36 from Peter <***@guenschel.com> ---
(In reply to Mirco Schmidt from comment #35)

Thanks for sharing that - my suspicion is that this issue is related to the SAS
transport. My systems with SATA disks do not have this issue, but having only
one system with SAS disks didn't seem like enough of a sample size. As an added
bonus, those look like HPE disks, so HPE can stop screeching about compatibility
issues being the cause of this. ;)
b***@freebsd.org
2021-08-31 15:29:45 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #37 from Mirco Schmidt <***@one-energy-it.de> ---
(In reply to Peter from comment #36)

If that is the case (HPE moaning about issues with "unsupported" disks),
consider me your testbed!

I'm willing to test this anytime HPE comes up with a change or firmware
upgrade... I'll easily drive to the client, drop in a USB stick, boot up to BSD,
and check whether the upgrade from HPE fixes the issue ;-)
b***@freebsd.org
2021-08-31 16:37:20 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #38 from benoitc <***@enki-multimedia.eu> ---
(In reply to Peter from comment #36)
The disks I have are also HPE drives. What would a possible fix be if it's due
to the SAS transport?
b***@freebsd.org
2021-08-31 16:53:41 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #39 from Peter <***@guenschel.com> ---
(In reply to Mirco Schmidt from comment #37)
I've already ruled out hardware as the issue. My system performs flawlessly
under CentOS 8.4. I intend to follow up with HPE when we have a resolution and
will definitely let them know HPE disks are also affected. These events are
intermittently logged as a hardware failure by the system BIOS (this seems
dependent on the driver version, though), which is why HPE was originally
involved.

(In reply to benoitc from comment #38)
Are your disks SATA or SAS? Post the model number if you can find it. All
indicators currently point to this being a driver issue and I'm trying to
collect as much information as possible for the devs at Microsemi.
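
For anyone gathering that info, camcontrol can usually report both the
transport and the model string; a minimal sketch (the device names are
examples, yours will differ):

$ camcontrol devlist -v        # lists every disk with its bus/driver attachment
$ camcontrol inquiry da0       # SAS/SCSI disk: vendor, product, firmware revision
$ camcontrol identify ada0     # SATA disk: model and firmware details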
b***@freebsd.org
2021-08-31 16:59:44 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #40 from benoitc <***@enki-multimedia.eu> ---
(In reply to Peter from comment #39)
SAS disks: 2 x 300GB (EG000300JWEBF) in RAID 1 and 2 x 2TB (MM2000JEFRC) in
HBA mode
b***@freebsd.org
2021-08-31 17:02:34 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #41 from benoitc <***@enki-multimedia.eu> ---
(In reply to benoitc from comment #40)
The RAID 1 is mounted as UFS while the two others are in a ZFS pool.
b***@freebsd.org
2021-09-02 08:45:19 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #42 from benoitc <***@enki-multimedia.eu> ---
It seems using the latest driver from 13.0-STABLE applied on releng/13.0 worked
for me. I am no longer using hardware RAID, only 2 ZFS pools (2x300GB and
2x2TB).


Commits used:


commit 2c98463a296974dec38707b3c346c570dbfb3630 (HEAD -> releng/13.0)
Author: Edward Tomasz Napierala <***@FreeBSD.org>
Date: Fri May 28 00:33:37 2021 -0600

smartpqi: clear CCBs allocated on the stack

Differential Revision: https://reviews.freebsd.org/D30299

(cherry picked from commit e20e60be501204c3ba742e266afecc6c6e498a6c)

commit 0ea861c05c484f5fcc8c1cc36c70f842daef04b1
Author: PAPANI SRIKANTH <***@microchip.com>
Date: Fri May 28 00:17:56 2021 -0600

Newly added features and bug fixes in latest Microchip SmartPQI driver

It includes:

1) Newly added TMF feature.
2) Added new Huawei & Inspur PCI IDs.
3) Fixed smartpqi driver hangs in zpool while running on FreeBSD 12.1.
4) Fixed flooding dmesg in kernel while the controller is offline during ioctls.
5) Avoided unnecessary host memory allocation for rcb sg buffers.
6) Fixed race conditions while accessing the internal rcb structure.
7) Fixed logical volumes exposing two different names to the OS, due to system
memory being overwritten with stale DMA data.
8) Fixed dynamically unloading the smartpqi driver.
9) Added device_shutdown callback instead of the deprecated shutdown_final
kernel event in the smartpqi driver.
10) Fixed OS crash during physical drive hot removal under heavy IO.
11) Fixed OS crash during controller lockup/offline under heavy IO.
12) Fixed Coverity issues in the smartpqi driver.
13) Fixed system crash while creating and deleting logical volumes in a
continuous loop.
14) Fixed the volume size not being exposed to the OS when it expands.
15) Added HC3 PCI IDs.

Reviewed by: Scott Benesh (microsemi), Murthy Bhat (microsemi), imp
Differential Revision: https://reviews.freebsd.org/D30182

(cherry picked from commit 9fac68fc3853b696c8479bb3a8181d62cb9f59c9)
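
For anyone wanting to do the same, a rough sketch of applying those
cherry-picks to a releng/13.0 source checkout (the paths and build steps below
are the usual ones for the FreeBSD src tree, not taken from this thread):

# in a git checkout of the FreeBSD src tree
$ git checkout releng/13.0
$ git cherry-pick 9fac68fc3853b696c8479bb3a8181d62cb9f59c9   # smartpqi driver update (D30182)
$ git cherry-pick e20e60be501204c3ba742e266afecc6c6e498a6c   # clear CCBs allocated on the stack (D30299)

# rebuild just the module, or the whole kernel if smartpqi is compiled in
$ cd sys/modules/smartpqi && make && make install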
b***@freebsd.org
2021-09-02 08:47:37 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #43 from benoitc <***@enki-multimedia.eu> ---
(In reply to benoitc from comment #42)

$ dmesg |grep -i smart
smartpqi0: <E208i-a SR Gen10> port 0xc000-0xc0ff mem 0xf3800000-0xf3807fff at
device 0.0 numa-domain 0 on pci12
smartpqi0: using MSI-X interrupts (16 vectors)
ses2 at smartpqi0 bus 0 scbus16 target 68 lun 0
ses2: <HPE Smart Adapter 3.53> Fixed Enclosure Services SPC-3 SCSI device
pass7 at smartpqi0 bus 0 scbus16 target 1088 lun 1
da3 at smartpqi0 bus 0 scbus16 target 67 lun 0
da2 at smartpqi0 bus 0 scbus16 target 66 lun 0
da0 at smartpqi0 bus 0 scbus16 target 64 lun 0
da1 at smartpqi0 bus 0 scbus16 target 65 lun 0


$ sudo camcontrol devlist
<AHCI SGPIO Enclosure 2.00 0001> at scbus6 target 0 lun 0 (ses0,pass0)
<AHCI SGPIO Enclosure 2.00 0001> at scbus15 target 0 lun 0 (ses1,pass1)
<HP EG000300JWEBF HPD4> at scbus16 target 64 lun 0 (da0,pass2)
<HP EG000300JWEBF HPD4> at scbus16 target 65 lun 0 (da1,pass3)
<HP MM2000JEFRC HPD8> at scbus16 target 66 lun 0 (da2,pass4)
<HP MM2000JEFRC HPD8> at scbus16 target 67 lun 0 (da3,pass5)
<HPE Smart Adapter 3.53> at scbus16 target 68 lun 0 (ses2,pass6)
<HPE E208i-a SR Gen10 3.53> at scbus16 target 1088 lun 1 (pass7)
b***@freebsd.org
2021-09-02 12:25:33 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #44 from Peter <***@guenschel.com> ---
(In reply to benoitc from comment #42)

Good to know there's progress being made. The latest driver dated 8/5/21 still
contains this issue though. In all fairness, this only became a problem for me
after the addition of disks to the system (4x->12x 16TB). It was stable for
over a year prior to the addition.
b***@freebsd.org
2021-11-02 13:58:25 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #46 from Palle Girgensohn <***@FreeBSD.org> ---
(In reply to Palle Girgensohn from comment #45)
...and I'm using UFS, btw. It works better than ZFS for PostgreSQL.
b***@freebsd.org
2021-11-02 14:21:26 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #47 from Peter <***@guenschel.com> ---
(In reply to Palle Girgensohn from comment #45)

The best tidbit I have to offer at the moment is that I distinctly remember
large amounts of ZFS checksum errors on reads under load using a particular
version of the smartpqi driver. Unfortunately, I don't remember exactly which
version(s). After performing a scrub under CentOS, my mind was at ease knowing
the integrity of the data written to disk was 100% and that these checksum
errors on reads were due to a driver issue. I can't say with any certainty
that's what's happening in your case, but it may be worth the peace of mind to
investigate.
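
For anyone wanting the same sanity check, the standard zpool commands are
enough to separate driver-induced read errors from actual on-disk corruption;
a minimal sketch, assuming a pool named "tank":

$ zpool scrub tank        # re-reads every block and verifies it against its checksum
$ zpool status -v tank    # per-vdev READ/WRITE/CKSUM counters, plus any damaged files
$ zpool clear tank        # resets the counters once you're satisfied the data is intact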
b***@freebsd.org
2021-11-09 20:21:57 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #50 from ***@ultra-secure.de ---
I may be able to further test this - if our customer decides to order the
hardware.

This would be 24 x 1.8TB SAS, likely on an HPE P408i.

Sometime near the end of the year.

I don't have spare hardware for this sitting around, especially not with that
many drives (each of these servers is around 20k CHF...)

I will update this ticket once it actually materializes (lead time for those is
usually weeks).
b***@freebsd.org
2021-11-09 20:46:27 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #51 from Warner Losh <***@FreeBSD.org> ---
As always, I'd love to have the latest SOURCES in the base system, so if there
are changes needed, I'm happy to usher them into the system. I believe that I
have the latest publicly available ones there now.
b***@freebsd.org
2021-11-11 05:36:01 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #52 from Hermes T K <***@microchip.com> ---
Hi,

In FreeBSD 13.0, while running I/O with a 1MB block size, we observed
corruption in the received SGL. The corrupted SGL was leading to a firmware
lockup, which in turn makes the driver hang and crash. The incomplete SGL is
observed during I/O with larger transfer sizes.

I created a FreeBSD Bugzilla ticket for this issue:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=259129
But there has been no update to that ticket so far.

The workaround for this issue is to reduce the maximum transfer size of the
I/O. With the attached patch, which reduces the transfer size, I was no longer
observing the issue.

Thanks & Regards
Hermes T K
b***@freebsd.org
2021-11-11 13:12:00 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #53 from Peter <***@guenschel.com> ---
(In reply to Hermes T K from comment #52)

From bug #259129:

> When we tried in FreeBSD 12.2, the maximum block size allowed to run in fio is 128k.
> We are suspecting some issue in SGL handling with FreeBSD 13.0.

The issue I'm having affects FreeBSD 12.X and 13.X with identical symptoms.
That said, if you feel you have a workaround, I'll gladly test it. You mentioned
an attached patch, but I don't see one here or on the other ticket - only a log
file.
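
For reference, a fio job along these lines should exercise the large-transfer
path described above; a sketch only (the directory, size, and runtime are
placeholders, not values from the ticket):

$ fio --name=sgltest --directory=/pool/test --size=4g \
      --bs=1m --rw=readwrite --ioengine=psync \
      --numjobs=4 --time_based --runtime=300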
b***@freebsd.org
2021-11-11 14:41:06 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #54 from Hermes T K <***@microchip.com> ---
Created attachment 229431
--> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=229431&action=edit
Attaching the changes, reducing the maximum transfer size
b***@freebsd.org
2021-11-11 15:56:51 UTC
Permalink
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240145

--- Comment #55 from Warner Losh <***@FreeBSD.org> ---
(In reply to Peter from comment #53)
A workaround for people w/o the patch is to set hw.maxphys=131072, which will
have the same effect and likely won't affect anything else in the system.

A question to the microsemi folks: What's the limit the firmware can do?
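
For the archives: that is a boot-time tunable, so it goes in /boot/loader.conf
rather than being set with sysctl at runtime; a sketch (on some 13.x kernels
the knob may be spelled kern.maxphys, so verify the exact name on your system):

# /boot/loader.conf -- cap the maximum I/O transfer size at 128k
hw.maxphys="131072"

After a reboot, `sysctl -a | grep maxphys` will confirm the value in effect.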