Proxmox server crashing daily

Hello,
I'm turning to you because for the past month my NUC running Proxmox has been crashing every day.
I've searched online for the cause but can't find anything.
During a crash I can still hear the fan spinning, the RJ45 port keeps blinking, and the USB sticks stay lit, but Proxmox disappears from the network and remains unreachable.

I deduce a crash last night at 4:17 a.m., the last log entry before the NUC was rebooted.

Here are the night's logs, but I don't see anything abnormal:

root@SVR:~# journalctl --since "2024-04-08 22:00:00" --until "2024-04-09 09:30:00"
Apr 08 22:06:05 SVR IPCC.xs[989]: pam_unix(proxmox-ve-auth:auth): authentication failure; logname= uid=0 euid=0 t>
Apr 08 22:06:06 SVR pvedaemon[989]: authentication failure; rhost=::ffff:192.168.1.68 user=root@pam msg=Authentic>
Apr 08 22:17:01 SVR CRON[154172]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 08 22:17:01 SVR CRON[154173]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 08 22:17:01 SVR CRON[154172]: pam_unix(cron:session): session closed for user root
Apr 08 22:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
Apr 08 23:17:01 SVR CRON[179399]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 08 23:17:01 SVR CRON[179400]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 08 23:17:01 SVR CRON[179399]: pam_unix(cron:session): session closed for user root
Apr 08 23:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
Apr 09 00:00:56 SVR systemd[1]: Starting dpkg-db-backup.service - Daily dpkg database backup service...
Apr 09 00:00:56 SVR systemd[1]: Starting logrotate.service - Rotate log files...
Apr 09 00:00:56 SVR systemd[1]: dpkg-db-backup.service: Deactivated successfully.
Apr 09 00:00:56 SVR systemd[1]: Finished dpkg-db-backup.service - Daily dpkg database backup service.
Apr 09 00:00:56 SVR systemd[1]: Reloading pveproxy.service - PVE API Proxy Server...
Apr 09 00:00:56 SVR pveproxy[197611]: send HUP to 995
Apr 09 00:00:56 SVR pveproxy[995]: received signal HUP
Apr 09 00:00:56 SVR pveproxy[995]: server closing
Apr 09 00:00:56 SVR pveproxy[995]: server shutdown (restart)
Apr 09 00:00:56 SVR systemd[1]: Reloaded pveproxy.service - PVE API Proxy Server.
Apr 09 00:00:56 SVR systemd[1]: Reloading spiceproxy.service - PVE SPICE Proxy Server...
Apr 09 00:00:57 SVR spiceproxy[197614]: send HUP to 1001
Apr 09 00:00:57 SVR spiceproxy[1001]: received signal HUP
Apr 09 00:00:57 SVR spiceproxy[1001]: server closing
Apr 09 00:00:57 SVR spiceproxy[1001]: server shutdown (restart)
Apr 09 00:00:57 SVR systemd[1]: Reloaded spiceproxy.service - PVE SPICE Proxy Server.
Apr 09 00:00:57 SVR pvefw-logger[619]: received terminate request (signal)
Apr 09 00:00:57 SVR pvefw-logger[619]: stopping pvefw logger
Apr 09 00:00:57 SVR systemd[1]: Stopping pvefw-logger.service - Proxmox VE firewall logger...
Apr 09 00:00:57 SVR spiceproxy[1001]: restarting server
Apr 09 00:00:57 SVR spiceproxy[1001]: starting 1 worker(s)
Apr 09 00:00:57 SVR spiceproxy[1001]: worker 197624 started
Apr 09 00:00:57 SVR pveproxy[995]: restarting server
Apr 09 00:00:57 SVR pveproxy[995]: starting 3 worker(s)
Apr 09 00:00:57 SVR pveproxy[995]: worker 197625 started
Apr 09 00:00:57 SVR pveproxy[995]: worker 197626 started
Apr 09 00:00:57 SVR pveproxy[995]: worker 197627 started
Apr 09 00:00:57 SVR systemd[1]: pvefw-logger.service: Deactivated successfully.
Apr 09 00:00:57 SVR systemd[1]: Stopped pvefw-logger.service - Proxmox VE firewall logger.
Apr 09 00:00:57 SVR systemd[1]: pvefw-logger.service: Consumed 4.568s CPU time.
Apr 09 00:00:57 SVR systemd[1]: Starting pvefw-logger.service - Proxmox VE firewall logger...
Apr 09 00:00:57 SVR pvefw-logger[197629]: starting pvefw logger
Apr 09 00:00:57 SVR systemd[1]: Started pvefw-logger.service - Proxmox VE firewall logger.
Apr 09 00:00:57 SVR systemd[1]: logrotate.service: Deactivated successfully.
Apr 09 00:00:57 SVR systemd[1]: Finished logrotate.service - Rotate log files.
Apr 09 00:01:02 SVR spiceproxy[1002]: worker exit
Apr 09 00:01:02 SVR spiceproxy[1001]: worker 1002 finished
Apr 09 00:01:02 SVR pveproxy[997]: worker exit
Apr 09 00:01:02 SVR pveproxy[998]: worker exit
Apr 09 00:01:02 SVR pveproxy[996]: worker exit
Apr 09 00:01:02 SVR pveproxy[995]: worker 998 finished
Apr 09 00:01:02 SVR pveproxy[995]: worker 996 finished
Apr 09 00:01:02 SVR pveproxy[995]: worker 997 finished
Apr 09 00:17:01 SVR CRON[204278]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 00:17:01 SVR CRON[204279]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 09 00:17:01 SVR CRON[204278]: pam_unix(cron:session): session closed for user root
Apr 09 00:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
Apr 09 00:24:01 SVR CRON[207207]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 00:24:01 SVR CRON[207208]: (root) CMD (if [ $(date +%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then />
Apr 09 00:24:01 SVR CRON[207207]: pam_unix(cron:session): session closed for user root
Apr 09 01:17:01 SVR CRON[229231]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 01:17:01 SVR CRON[229232]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 09 01:17:01 SVR CRON[229231]: pam_unix(cron:session): session closed for user root
Apr 09 01:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
Apr 09 02:17:01 SVR CRON[254125]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 02:17:01 SVR CRON[254126]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 09 02:17:01 SVR CRON[254125]: pam_unix(cron:session): session closed for user root
Apr 09 02:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
Apr 09 03:10:01 SVR CRON[276190]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 03:10:01 SVR CRON[276191]: (root) CMD (test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A >
Apr 09 03:10:01 SVR CRON[276190]: pam_unix(cron:session): session closed for user root
Apr 09 03:12:56 SVR systemd[1]: Starting pve-daily-update.service - Daily PVE download activities...
Apr 09 03:12:56 SVR pveupdate[277404]: <root@pam> starting task UPID:SVR:00043B9D:003C0DE2:66149618:aptupdate::ro>
Apr 09 03:13:03 SVR pveupdate[277405]: update new package list: /var/lib/pve-manager/pkgupdates
Apr 09 03:13:03 SVR pveupdate[277404]: <root@pam> end task UPID:SVR:00043B9D:003C0DE2:66149618:aptupdate::root@pa>
Apr 09 03:13:03 SVR systemd[1]: pve-daily-update.service: Deactivated successfully.
Apr 09 03:13:03 SVR systemd[1]: Finished pve-daily-update.service - Daily PVE download activities.
Apr 09 03:17:01 SVR CRON[279141]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 03:17:01 SVR CRON[279142]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 09 03:17:01 SVR CRON[279141]: pam_unix(cron:session): session closed for user root
Apr 09 03:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
Apr 09 04:17:01 SVR CRON[304075]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Apr 09 04:17:01 SVR CRON[304076]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Apr 09 04:17:01 SVR CRON[304075]: pam_unix(cron:session): session closed for user root
Apr 09 04:17:08 SVR pmxcfs[846]: [dcdb] notice: data verification successful
-- Boot 2b741931a47744a2bdd380b6040bbd80 --
Apr 09 09:14:34 SVR kernel: Linux version 6.2.16-3-pve (tom@sbuild) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU B>
Apr 09 09:14:34 SVR kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.2.16-3-pve root=/dev/mapper/pve-root ro quiet
Apr 09 09:14:34 SVR kernel: KERNEL supported cpus:
Apr 09 09:14:34 SVR kernel:   Intel GenuineIntel
Apr 09 09:14:34 SVR kernel:   AMD AuthenticAMD
Apr 09 09:14:34 SVR kernel:   Hygon HygonGenuine
Apr 09 09:14:34 SVR kernel:   Centaur CentaurHauls
Apr 09 09:14:34 SVR kernel:   zhaoxin   Shanghai  
Apr 09 09:14:34 SVR kernel: x86/split lock detection: #AC: crashing the kernel on kernel split_locks and warning >
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 09 09:14:34 SVR kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 09 09:14:34 SVR kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Apr 09 09:14:34 SVR kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Apr 09 09:14:34 SVR kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Apr 09 09:14:34 SVR kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 09 09:14:34 SVR kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]:    8
Apr 09 09:14:34 SVR kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted'>
Apr 09 09:14:34 SVR kernel: signal: max sigframe size: 3632
Apr 09 09:14:34 SVR kernel: BIOS-provided physical RAM map:
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x000000000009f000-0x00000000000fffff] reserved
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003a13afff] usable
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x000000003a13b000-0x0000000040ad7fff] reserved
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x0000000040ad8000-0x0000000040ba3fff] ACPI data
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x0000000040ba4000-0x0000000040cdcfff] ACPI NVS
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x0000000040cdd000-0x00000000416d0fff] reserved
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x00000000416d1000-0x00000000417fefff] type 20
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x00000000417ff000-0x00000000417fffff] usable
Apr 09 09:14:34 SVR kernel: BIOS-e820: [mem 0x0000000041800000-0x0000000047ffffff] reserved

My NUC isn't overloaded.

Here are my NUC's specs:

I had disabled everything except Jeedom, and it still crashed.

What do you think?

Thanks :slight_smile:

Hello,
If you have the monitoring plugin, is the NUC's temperature OK?
Thierry

It hovers between 61 and 68 °C.

Hello,

Run a memtest to check whether the RAM is at fault.

Hi,

I ran into more or less the same issue, though more randomly than "every day".

When I tried to reinstall from scratch (and then restore the VMs), I couldn't get anywhere: the mini PC kept rebooting during the installation.

I replaced the power supply: no more problems.

This?

sudo apt-get install memtester
sudo memtester 1024 5
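
For what it's worth, memtester's first argument is the amount of memory to lock (in megabytes unless you give a suffix) and the second is the number of passes, so this run tests 1 GB five times:

memtester <size>[B|K|M|G] [passes]

It can only exercise memory the OS is willing to allocate to it, so it won't reach RAM already in use.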

I'll give it a try, but my NUC and its PSU are 6 months old at most… and the load on the NUC stays low.

Hi

On my side, I had this kind of problem early on with a new J6412 model that was poorly supported by the kernel. Have you checked the errors at boot?

dmesg
journalctl -p 3 -xb
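
For context: dmesg dumps the kernel ring buffer, while the journalctl flags select messages of priority 3 (err) and worse (-p 3), with explanatory texts (-x), limited to the current boot (-b).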

dmesg

[15460.610946] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15460.612692] docker0: port 2(vethe4828a4) entered blocking state
[15460.612694] docker0: port 2(vethe4828a4) entered disabled state
[15460.612724] device vethe4828a4 entered promiscuous mode
[15460.802058] eth0: renamed from veth767a4c4
[15460.817744] IPv6: ADDRCONF(NETDEV_CHANGE): vethe4828a4: link becomes ready
[15460.817768] docker0: port 2(vethe4828a4) entered blocking state
[15460.817770] docker0: port 2(vethe4828a4) entered forwarding state
[15464.173680] docker0: port 2(vethe4828a4) entered disabled state
[15464.174574] veth767a4c4: renamed from eth0
[15464.261903] docker0: port 2(vethe4828a4) entered disabled state
[15464.262678] device vethe4828a4 left promiscuous mode
[15464.262681] docker0: port 2(vethe4828a4) entered disabled state
[15524.169721] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15524.172095] docker0: port 2(veth3b74d7f) entered blocking state
[15524.172098] docker0: port 2(veth3b74d7f) entered disabled state
[15524.172132] device veth3b74d7f entered promiscuous mode
[15524.402190] eth0: renamed from vetha3d492d
[15524.421703] IPv6: ADDRCONF(NETDEV_CHANGE): veth3b74d7f: link becomes ready
[15524.421737] docker0: port 2(veth3b74d7f) entered blocking state
[15524.421741] docker0: port 2(veth3b74d7f) entered forwarding state
[15527.862889] docker0: port 2(veth3b74d7f) entered disabled state
[15527.863880] vetha3d492d: renamed from eth0
[15527.922576] docker0: port 2(veth3b74d7f) entered disabled state
[15527.923963] device veth3b74d7f left promiscuous mode
[15527.923967] docker0: port 2(veth3b74d7f) entered disabled state
[15587.856762] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15587.858554] docker0: port 2(veth6d60cef) entered blocking state
[15587.858556] docker0: port 2(veth6d60cef) entered disabled state
[15587.858581] device veth6d60cef entered promiscuous mode
[15587.858610] docker0: port 2(veth6d60cef) entered blocking state
[15587.858611] docker0: port 2(veth6d60cef) entered forwarding state
[15587.859326] docker0: port 2(veth6d60cef) entered disabled state
[15588.074177] eth0: renamed from veth876ebf2
[15588.097753] IPv6: ADDRCONF(NETDEV_CHANGE): veth6d60cef: link becomes ready
[15588.097777] docker0: port 2(veth6d60cef) entered blocking state
[15588.097778] docker0: port 2(veth6d60cef) entered forwarding state
[15591.460640] docker0: port 2(veth6d60cef) entered disabled state
[15591.461425] veth876ebf2: renamed from eth0
[15591.501687] docker0: port 2(veth6d60cef) entered disabled state
[15591.502509] device veth6d60cef left promiscuous mode
[15591.502511] docker0: port 2(veth6d60cef) entered disabled state
[15651.456029] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15651.457806] docker0: port 2(veth01191b5) entered blocking state
[15651.457808] docker0: port 2(veth01191b5) entered disabled state
[15651.457833] device veth01191b5 entered promiscuous mode
[15651.457862] docker0: port 2(veth01191b5) entered blocking state
[15651.457863] docker0: port 2(veth01191b5) entered forwarding state
[15651.458000] docker0: port 2(veth01191b5) entered disabled state
[15651.614024] eth0: renamed from vethd158865
[15651.649715] IPv6: ADDRCONF(NETDEV_CHANGE): veth01191b5: link becomes ready
[15651.649739] docker0: port 2(veth01191b5) entered blocking state
[15651.649741] docker0: port 2(veth01191b5) entered forwarding state
[15655.016275] docker0: port 2(veth01191b5) entered disabled state
[15655.016915] vethd158865: renamed from eth0
[15655.093501] docker0: port 2(veth01191b5) entered disabled state
[15655.094449] device veth01191b5 left promiscuous mode
[15655.094451] docker0: port 2(veth01191b5) entered disabled state
[15715.010366] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15715.012189] docker0: port 2(vethb4f00ed) entered blocking state
[15715.012192] docker0: port 2(vethb4f00ed) entered disabled state
[15715.012219] device vethb4f00ed entered promiscuous mode
[15715.012249] docker0: port 2(vethb4f00ed) entered blocking state
[15715.012250] docker0: port 2(vethb4f00ed) entered forwarding state
[15715.012962] docker0: port 2(vethb4f00ed) entered disabled state
[15715.185932] eth0: renamed from vethb22ca33
[15715.201633] IPv6: ADDRCONF(NETDEV_CHANGE): vethb4f00ed: link becomes ready
[15715.201655] docker0: port 2(vethb4f00ed) entered blocking state
[15715.201657] docker0: port 2(vethb4f00ed) entered forwarding state
[15718.576789] docker0: port 2(vethb4f00ed) entered disabled state
[15718.577967] vethb22ca33: renamed from eth0
[15718.653380] docker0: port 2(vethb4f00ed) entered disabled state
[15718.654299] device vethb4f00ed left promiscuous mode
[15718.654303] docker0: port 2(vethb4f00ed) entered disabled state
[15778.570316] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15778.572042] docker0: port 2(vethe2b2353) entered blocking state
[15778.572044] docker0: port 2(vethe2b2353) entered disabled state
[15778.572069] device vethe2b2353 entered promiscuous mode
[15778.572099] docker0: port 2(vethe2b2353) entered blocking state
[15778.572100] docker0: port 2(vethe2b2353) entered forwarding state
[15778.573354] docker0: port 2(vethe2b2353) entered disabled state
[15778.758371] eth0: renamed from veth968f5f7
[15778.777695] IPv6: ADDRCONF(NETDEV_CHANGE): vethe2b2353: link becomes ready
[15778.777720] docker0: port 2(vethe2b2353) entered blocking state
[15778.777722] docker0: port 2(vethe2b2353) entered forwarding state
[15782.120214] docker0: port 2(vethe2b2353) entered disabled state
[15782.121009] veth968f5f7: renamed from eth0
[15782.173032] docker0: port 2(vethe2b2353) entered disabled state
[15782.174225] device vethe2b2353 left promiscuous mode
[15782.174228] docker0: port 2(vethe2b2353) entered disabled state
[15842.115015] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15842.116877] docker0: port 2(veth9f9c750) entered blocking state
[15842.116880] docker0: port 2(veth9f9c750) entered disabled state
[15842.116948] device veth9f9c750 entered promiscuous mode
[15842.310048] eth0: renamed from veth33885b2
[15842.329792] IPv6: ADDRCONF(NETDEV_CHANGE): veth9f9c750: link becomes ready
[15842.329816] docker0: port 2(veth9f9c750) entered blocking state
[15842.329818] docker0: port 2(veth9f9c750) entered forwarding state
[15845.704673] docker0: port 2(veth9f9c750) entered disabled state
[15845.705320] veth33885b2: renamed from eth0
[15845.773235] docker0: port 2(veth9f9c750) entered disabled state
[15845.774416] device veth9f9c750 left promiscuous mode
[15845.774419] docker0: port 2(veth9f9c750) entered disabled state
[15905.698893] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15905.701310] docker0: port 2(veth4f29e34) entered blocking state
[15905.701314] docker0: port 2(veth4f29e34) entered disabled state
[15905.701356] device veth4f29e34 entered promiscuous mode
[15905.886103] eth0: renamed from veth489c6b6
[15905.921798] IPv6: ADDRCONF(NETDEV_CHANGE): veth4f29e34: link becomes ready
[15905.921822] docker0: port 2(veth4f29e34) entered blocking state
[15905.921824] docker0: port 2(veth4f29e34) entered forwarding state
[15909.252022] docker0: port 2(veth4f29e34) entered disabled state
[15909.252774] veth489c6b6: renamed from eth0
[15909.313506] docker0: port 2(veth4f29e34) entered disabled state
[15909.314266] device veth4f29e34 left promiscuous mode
[15909.314268] docker0: port 2(veth4f29e34) entered disabled state
[15910.163700] docker0: port 1(vethf8a29cb) entered disabled state
[15910.163782] veth7f40fec: renamed from eth0
[15910.220621] docker0: port 1(vethf8a29cb) entered disabled state
[15910.220929] device vethf8a29cb left promiscuous mode
[15910.220932] docker0: port 1(vethf8a29cb) entered disabled state
[15910.285530] docker0: port 1(vethae1baf3) entered blocking state
[15910.285534] docker0: port 1(vethae1baf3) entered disabled state
[15910.285562] device vethae1baf3 entered promiscuous mode
[15910.285598] docker0: port 1(vethae1baf3) entered blocking state
[15910.285599] docker0: port 1(vethae1baf3) entered forwarding state
[15910.465857] eth0: renamed from vethce79bb7
[15910.493702] IPv6: ADDRCONF(NETDEV_CHANGE): vethae1baf3: link becomes ready
[15969.246653] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[15969.248830] docker0: port 2(vethb4a3a47) entered blocking state
[15969.248833] docker0: port 2(vethb4a3a47) entered disabled state
[15969.248864] device vethb4a3a47 entered promiscuous mode
[15969.470184] eth0: renamed from vethc7988d4
[15969.481781] IPv6: ADDRCONF(NETDEV_CHANGE): vethb4a3a47: link becomes ready
[15969.481805] docker0: port 2(vethb4a3a47) entered blocking state
[15969.481807] docker0: port 2(vethb4a3a47) entered forwarding state
[15972.845721] docker0: port 2(vethb4a3a47) entered disabled state
[15972.846442] vethc7988d4: renamed from eth0
[15972.912796] docker0: port 2(vethb4a3a47) entered disabled state
[15972.913861] device vethb4a3a47 left promiscuous mode
[15972.913864] docker0: port 2(vethb4a3a47) entered disabled state
[16032.840131] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16032.842517] docker0: port 2(vethbe6a3e1) entered blocking state
[16032.842520] docker0: port 2(vethbe6a3e1) entered disabled state
[16032.842550] device vethbe6a3e1 entered promiscuous mode
[16032.842610] docker0: port 2(vethbe6a3e1) entered blocking state
[16032.842611] docker0: port 2(vethbe6a3e1) entered forwarding state
[16032.843618] docker0: port 2(vethbe6a3e1) entered disabled state
[16032.994198] eth0: renamed from veth7651d9e
[16033.009650] IPv6: ADDRCONF(NETDEV_CHANGE): vethbe6a3e1: link becomes ready
[16033.009673] docker0: port 2(vethbe6a3e1) entered blocking state
[16033.009676] docker0: port 2(vethbe6a3e1) entered forwarding state
[16036.421824] docker0: port 2(vethbe6a3e1) entered disabled state
[16036.422503] veth7651d9e: renamed from eth0
[16036.481037] docker0: port 2(vethbe6a3e1) entered disabled state
[16036.481734] device vethbe6a3e1 left promiscuous mode
[16036.481737] docker0: port 2(vethbe6a3e1) entered disabled state
[16096.417768] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16096.420826] docker0: port 2(veth50b2ba2) entered blocking state
[16096.420830] docker0: port 2(veth50b2ba2) entered disabled state
[16096.420871] device veth50b2ba2 entered promiscuous mode
[16096.586205] eth0: renamed from veth362066f
[16096.609743] IPv6: ADDRCONF(NETDEV_CHANGE): veth50b2ba2: link becomes ready
[16096.609778] docker0: port 2(veth50b2ba2) entered blocking state
[16096.609782] docker0: port 2(veth50b2ba2) entered forwarding state
[16099.942729] docker0: port 2(veth50b2ba2) entered disabled state
[16099.943452] veth362066f: renamed from eth0
[16100.024835] docker0: port 2(veth50b2ba2) entered disabled state
[16100.025455] device veth50b2ba2 left promiscuous mode
[16100.025458] docker0: port 2(veth50b2ba2) entered disabled state
[16159.938196] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16159.939937] docker0: port 2(veth0b5dd9d) entered blocking state
[16159.939940] docker0: port 2(veth0b5dd9d) entered disabled state
[16159.939964] device veth0b5dd9d entered promiscuous mode
[16159.939994] docker0: port 2(veth0b5dd9d) entered blocking state
[16159.939996] docker0: port 2(veth0b5dd9d) entered forwarding state
[16159.940983] docker0: port 2(veth0b5dd9d) entered disabled state
[16160.106193] eth0: renamed from vethd957e22
[16160.133822] IPv6: ADDRCONF(NETDEV_CHANGE): veth0b5dd9d: link becomes ready
[16160.133846] docker0: port 2(veth0b5dd9d) entered blocking state
[16160.133848] docker0: port 2(veth0b5dd9d) entered forwarding state
[16163.493974] docker0: port 2(veth0b5dd9d) entered disabled state
[16163.494030] vethd957e22: renamed from eth0
[16163.548525] docker0: port 2(veth0b5dd9d) entered disabled state
[16163.549657] device veth0b5dd9d left promiscuous mode
[16163.549660] docker0: port 2(veth0b5dd9d) entered disabled state
[16223.489142] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16223.490972] docker0: port 2(veth1f78e5e) entered blocking state
[16223.490975] docker0: port 2(veth1f78e5e) entered disabled state
[16223.491032] device veth1f78e5e entered promiscuous mode
[16223.670229] eth0: renamed from veth1b8b2cf
[16223.689815] IPv6: ADDRCONF(NETDEV_CHANGE): veth1f78e5e: link becomes ready
[16223.689839] docker0: port 2(veth1f78e5e) entered blocking state
[16223.689841] docker0: port 2(veth1f78e5e) entered forwarding state
[16227.069270] docker0: port 2(veth1f78e5e) entered disabled state
[16227.069869] veth1b8b2cf: renamed from eth0
[16227.132408] docker0: port 2(veth1f78e5e) entered disabled state
[16227.133526] device veth1f78e5e left promiscuous mode
[16227.133530] docker0: port 2(veth1f78e5e) entered disabled state
[16287.064312] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16287.066454] docker0: port 2(veth98617ce) entered blocking state
[16287.066457] docker0: port 2(veth98617ce) entered disabled state
[16287.066528] device veth98617ce entered promiscuous mode
[16287.278148] eth0: renamed from vethd8b96ee
[16287.329840] IPv6: ADDRCONF(NETDEV_CHANGE): veth98617ce: link becomes ready
[16287.329863] docker0: port 2(veth98617ce) entered blocking state
[16287.329866] docker0: port 2(veth98617ce) entered forwarding state
[16290.720143] docker0: port 2(veth98617ce) entered disabled state
[16290.720699] vethd8b96ee: renamed from eth0
[16290.777102] docker0: port 2(veth98617ce) entered disabled state
[16290.778248] device veth98617ce left promiscuous mode
[16290.778250] docker0: port 2(veth98617ce) entered disabled state
[16350.715270] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16350.716984] docker0: port 2(veth98bee47) entered blocking state
[16350.716987] docker0: port 2(veth98bee47) entered disabled state
[16350.717012] device veth98bee47 entered promiscuous mode
[16350.717040] docker0: port 2(veth98bee47) entered blocking state
[16350.717042] docker0: port 2(veth98bee47) entered forwarding state
[16350.718230] docker0: port 2(veth98bee47) entered disabled state
[16350.922232] eth0: renamed from veth6d5b992
[16350.945819] IPv6: ADDRCONF(NETDEV_CHANGE): veth98bee47: link becomes ready
[16350.945843] docker0: port 2(veth98bee47) entered blocking state
[16350.945845] docker0: port 2(veth98bee47) entered forwarding state
[16354.313090] docker0: port 2(veth98bee47) entered disabled state
[16354.313783] veth6d5b992: renamed from eth0
[16354.379487] docker0: port 2(veth98bee47) entered disabled state
[16354.380110] device veth98bee47 left promiscuous mode
[16354.380113] docker0: port 2(veth98bee47) entered disabled state
[16414.307971] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16414.310241] docker0: port 2(veth886b39d) entered blocking state
[16414.310243] docker0: port 2(veth886b39d) entered disabled state
[16414.310314] device veth886b39d entered promiscuous mode
[16414.502232] eth0: renamed from vethe84662e
[16414.525833] IPv6: ADDRCONF(NETDEV_CHANGE): veth886b39d: link becomes ready
[16414.525857] docker0: port 2(veth886b39d) entered blocking state
[16414.525859] docker0: port 2(veth886b39d) entered forwarding state
[16417.864345] docker0: port 2(veth886b39d) entered disabled state
[16417.865143] vethe84662e: renamed from eth0
[16417.912953] docker0: port 2(veth886b39d) entered disabled state
[16417.913606] device veth886b39d left promiscuous mode
[16417.913610] docker0: port 2(veth886b39d) entered disabled state
[16477.859858] overlayfs: fs on '/var/lib/docker/overlay2/l/KBDMZBQBASPKALASWV4PLX2VND' does not support file handles, falling back to xino=off.
[16477.861849] docker0: port 2(veth0a7e5d7) entered blocking state
[16477.861852] docker0: port 2(veth0a7e5d7) entered disabled state
[16477.861881] device veth0a7e5d7 entered promiscuous mode
[16477.861912] docker0: port 2(veth0a7e5d7) entered blocking state
[16477.861914] docker0: port 2(veth0a7e5d7) entered forwarding state
[16477.863076] docker0: port 2(veth0a7e5d7) entered disabled state
[16478.018200] eth0: renamed from veth9357f05
[16478.053832] IPv6: ADDRCONF(NETDEV_CHANGE): veth0a7e5d7: link becomes ready
[16478.053859] docker0: port 2(veth0a7e5d7) entered blocking state
[16478.053861] docker0: port 2(veth0a7e5d7) entered forwarding state
[16481.423298] docker0: port 2(veth0a7e5d7) entered disabled state
[16481.423973] veth9357f05: renamed from eth0
[16481.510042] docker0: port 2(veth0a7e5d7) entered disabled state
[16481.510947] device veth0a7e5d7 left promiscuous mode
[16481.510953] docker0: port 2(veth0a7e5d7) entered disabled stat

journalctl -p 3 -xb

Apr 09 09:14:35 SVR kernel: Serial bus multi instantiate pseudo device driver INT3515:00: error -ENXIO: IRQ index>
Apr 09 09:14:35 SVR kernel: Serial bus multi instantiate pseudo device driver INT3515:00: error -ENXIO: Error req>
Apr 09 09:14:37 SVR pmxcfs[856]: [quorum] crit: quorum_initialize failed: 2
Apr 09 09:14:37 SVR pmxcfs[856]: [quorum] crit: can't initialize service
Apr 09 09:14:37 SVR pmxcfs[856]: [confdb] crit: cmap_initialize failed: 2
Apr 09 09:14:37 SVR pmxcfs[856]: [confdb] crit: can't initialize service
Apr 09 09:14:37 SVR pmxcfs[856]: [dcdb] crit: cpg_initialize failed: 2
Apr 09 09:14:37 SVR pmxcfs[856]: [dcdb] crit: can't initialize service
Apr 09 09:14:37 SVR pmxcfs[856]: [status] crit: cpg_initialize failed: 2
Apr 09 09:14:37 SVR pmxcfs[856]: [status] crit: can't initialize service
Apr 09 09:14:37 SVR kernel: Bluetooth: hci0: Malformed MSFT vendor event: 0x02
Apr 09 09:15:29 SVR QEMU[1032]: kvm: libusb_release_interface: -4 [NO_DEVICE]
Apr 09 09:15:29 SVR QEMU[1032]: kvm: libusb_release_interface: -4 [NO_DEVICE]
Apr 09 09:19:20 SVR pvedaemon[999]: authentication failure; rhost=::ffff:192.168.1.164 user=root@pam msg=Authenti>
Apr 09 09:19:28 SVR pvedaemon[998]: authentication failure; rhost=::ffff:192.168.1.164 user=root@pam msg=Authenti>
Apr 09 10:12:04 SVR pveproxy[1007]: detected empty handle

What did the memtest reveal?

I can't manage to install it.

root@SVR:~# apt-get install memtester
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package memtester is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
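
It may just be stale package lists; memtester is in the standard Debian repositories, so refreshing them first might be enough:

apt-get update
apt-get install memtester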

Hello

If it's an Intel NUC, have a look at this video from linuxtricks:

I ran it from my test VM, but I don't know whether it tests the 32 GB, since the VM only has 1024 MB.

Hello,

It needs to be run not in a VM but on the hypervisor itself, from a bootable USB key.

PS: for server use, if the motherboard supports it, ECC RAM is preferable, all the more so if you choose ZFS for Proxmox.

OK, since you ticked "solution", I thought you had spotted the issue and solved your problem.

Thanks

HDMI dummy plug ordered :slight_smile:
We'll see :slight_smile:

No no, that was just an accidental click.

Hello

If the NUC is frozen, there may have been a kernel panic.

Check the kern.log journal. Can you send us its contents?

I get a crash about once a month on my Intel NUC, even though I have the HDMI dummy dongle.
In kern.log I see error messages about the network card; I've made some changes and I'm waiting for the next crash.
Like Linuxtricks, I set up a bash script that records the date and time every minute, to locate the crash in the logs.

root@pve-nuc:~/testproxmox# cat testprox
#!/bin/bash
# append a timestamp; the last entry before a crash shows when the host died
date >> /root/testproxmox/testprox.txt

Then from the CLI:
crontab -e
and add this line to run the script every minute:

* * * * * bash /root/testproxmox/testprox

Contents of the file it creates:

Tue Apr  9 02:10:01 PM CEST 2024
Tue Apr  9 02:11:01 PM CEST 2024
Tue Apr  9 02:12:01 PM CEST 2024
Tue Apr  9 02:13:01 PM CEST 2024
Tue Apr  9 02:14:01 PM CEST 2024
Tue Apr  9 02:15:01 PM CEST 2024
Tue Apr  9 02:16:01 PM CEST 2024
Tue Apr  9 02:17:01 PM CEST 2024
Tue Apr  9 02:18:01 PM CEST 2024
Tue Apr  9 02:19:01 PM CEST 2024
Tue Apr  9 02:20:01 PM CEST 2024
Tue Apr  9 02:21:01 PM CEST 2024
root@pve-nuc:~/testproxmox# 
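
After the next freeze, the last line of that file pins the crash down to within a minute. Once the NUC is back up, for example:

tail -n 1 /root/testproxmox/testprox.txt

gives you the exact time window to search in the logs.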

In the folder
/var/log/

I don't have kern.log.

I am indeed running the commands from the SVR shell.

Ah, right. Are you on version 8.1.10?

Then either it's a configuration issue, or the kernel didn't generate any message.
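
If rsyslog isn't installed (recent Proxmox releases no longer ship it by default, so kernel messages only land in the systemd journal), the previous boot's kernel log should still be available with:

journalctl -k -b -1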

My Proxmox is 4 years old; I update it regularly.