"Stop node" command has no effect + questions

Hello,

I just bought the Proxmox plugin. My goal is to shut down my NUC i7 during a power outage, once my APC UPS reports less than 10% of battery runtime remaining. (To do this, I add a condition and an action in Jeedom on the UPS's "time on battery" information.)

I installed the plugin without any trouble, and the Proxmox information comes up fine.
However, I'm having some difficulty understanding Proxmox itself.

To power off the NUC i7 that hosts Proxmox (blue front LED off), I found that I have to stop the "Nucjeedom" node. (Can you confirm?)

However, the "Stop node" command has no effect on Proxmox, and the "Stop all" command only stops the VMs, not the node, and does not power off the NUC i7 at the hardware level.

How do I shut the NUC down cleanly?

Hello,

Please remember to add the plugin's tag when a question concerns it, otherwise I may not see it.

A Proxmox "node" is one server of the system (the cluster): your NUC, in other words.

So yes, the "stop node" command is meant for exactly that.
I use it for the same reason myself and have never noticed any problem.
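For reference, stopping a node goes through the Proxmox VE API endpoint `POST /nodes/{node}/status`. A minimal sketch of the equivalent calls from the node's own shell, using the node name `NucJEEDOM` seen in the logs (this is a standard `pvesh` invocation, not necessarily what the plugin does internally):

```shell
# Cleanly shut down every VM/CT on the node first
# (equivalent to the plugin's "Stop all"):
pvesh create /nodes/NucJEEDOM/stopall

# Then power the node itself off
# (equivalent to the plugin's "Stop node"):
pvesh create /nodes/NucJEEDOM/status --command shutdown
```

Running these by hand on the host is a good way to separate a plugin problem from a Proxmox permission problem.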

Do you have a log, both on the plugin side and on the Proxmox side?
Did you grant the required permissions?
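One permission detail worth checking: per the Proxmox privilege documentation, the built-in PVEAdmin role does not include `Sys.PowerMgmt`, which is required to shut a node down, so a "PVE admin" user can stop VMs but not the node. A sketch of how the API user could be granted that privilege, run on the PVE host (user name `Jeedom@pve` taken from this thread; the role name `JeedomPower` is made up for the example):

```shell
# Create the API user (skip if it already exists):
pveum user add Jeedom@pve --password '<secret>'

# PVEAdmin lacks Sys.PowerMgmt, so add a small custom role with it:
pveum role add JeedomPower --privs "Sys.PowerMgmt"

# Grant both roles on the whole tree
# (older releases use "pveum aclmod" instead of "pveum acl modify"):
pveum acl modify / --users Jeedom@pve --roles PVEAdmin,JeedomPower
```

Without `Sys.PowerMgmt`, the shutdown API call is simply refused, which would match a "stop node" command that appears to do nothing.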

Hello,

Thanks for the clarification about the node.
I did create a "Jeedom" user with full rights on Proxmox; it is a PVE admin.


Here is the Proxmox plugin log. It reports: [2021-11-28 18:23:02][WARNING]: Unable to assign the default parent to the ZWaveJS2MQTT device; a device with the same name already exists or the object no longer exists.
==> Indeed, that device already exists in jMQTT and on my NUC… I just renamed the device in jMQTT and re-synchronized the Proxmox plugin. I don't know whether that was the problem?

[2021-11-28 18:23:02][INFO] : Creating new resource with id='qemu/101' and name='JeedomBuster'
[2021-11-28 18:23:02][INFO] : Creating new resource with id='qemu/102' and name='TRUENAS'
[2021-11-28 18:23:02][INFO] : Creating new resource with id='qemu/103' and name='ZWaveJS2MQTT'
[2021-11-28 18:23:02][WARNING] : Unable to assign the default parent to the device ZWaveJS2MQTT; a device with the same name already exists or the object no longer exists.
[2021-11-28 18:23:02][INFO] : Creating new resource with id='storage/NucJEEDOM/local-lvm' and name='local-lvm'
[2021-11-28 18:23:02][INFO] : Creating new resource with id='storage/NucJEEDOM/Disktation' and name='Disktation'
[2021-11-28 18:23:02][INFO] : Creating new resource with id='storage/NucJEEDOM/local' and name='local'

Below is an excerpt of the Proxmox syslog. I first ran the "stop all" command and, a few minutes later, "stop node":

Nov 28 18:28:01 NucJEEDOM pvedaemon[1000]: <root@pam> successful auth for user 'Jeedom@pve'
Nov 28 18:28:17 NucJEEDOM pvedaemon[1001]: <root@pam> successful auth for user 'Jeedom@pve'
Nov 28 18:28:17 NucJEEDOM pvedaemon[1001]: <Jeedom@pve> starting task UPID:NucJEEDOM:000011A2:0013FA6C:61A3BC31:stopall::Jeedom@pve:
Nov 28 18:28:17 NucJEEDOM pvedaemon[4515]: shutdown VM 102: UPID:NucJEEDOM:000011A3:0013FA6D:61A3BC31:qmshutdown:102:Jeedom@pve:
Nov 28 18:28:17 NucJEEDOM pvedaemon[4514]: <Jeedom@pve> starting task UPID:NucJEEDOM:000011A3:0013FA6D:61A3BC31:qmshutdown:102:Jeedom@pve:
Nov 28 18:28:17 NucJEEDOM pvedaemon[4516]: shutdown VM 101: UPID:NucJEEDOM:000011A4:0013FA6E:61A3BC31:qmshutdown:101:Jeedom@pve:
Nov 28 18:28:17 NucJEEDOM pvedaemon[4514]: <Jeedom@pve> starting task UPID:NucJEEDOM:000011A4:0013FA6E:61A3BC31:qmshutdown:101:Jeedom@pve:
Nov 28 18:28:34 NucJEEDOM QEMU[1840]: kvm: terminating on signal 15 from pid 665 (/usr/sbin/qmeventd)
Nov 28 18:28:34 NucJEEDOM kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 28 18:28:34 NucJEEDOM kernel: GPT:5860533166 != 5860533167
Nov 28 18:28:34 NucJEEDOM kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 28 18:28:34 NucJEEDOM kernel: GPT:5860533166 != 5860533167
Nov 28 18:28:34 NucJEEDOM kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 28 18:28:34 NucJEEDOM kernel:  sda: sda1 sda2
Nov 28 18:28:34 NucJEEDOM kernel:  sdb: sdb1 sdb2
Nov 28 18:28:34 NucJEEDOM kernel:  sdc: sdc1 sdc2
Nov 28 18:28:34 NucJEEDOM kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Nov 28 18:28:34 NucJEEDOM kernel:  sdd: sdd1 sdd2
Nov 28 18:28:34 NucJEEDOM kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Nov 28 18:28:34 NucJEEDOM kernel: vmbr0: port 3(fwpr102p0) entered disabled state
Nov 28 18:28:34 NucJEEDOM kernel: device fwln102i0 left promiscuous mode
Nov 28 18:28:34 NucJEEDOM kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Nov 28 18:28:34 NucJEEDOM kernel: device fwpr102p0 left promiscuous mode
Nov 28 18:28:34 NucJEEDOM kernel: vmbr0: port 3(fwpr102p0) entered disabled state
Nov 28 18:28:35 NucJEEDOM systemd[1]: 102.scope: Succeeded.
Nov 28 18:28:35 NucJEEDOM qmeventd[657]: Starting cleanup for 102
Nov 28 18:28:35 NucJEEDOM qmeventd[657]: Finished cleanup for 102
Nov 28 18:28:36 NucJEEDOM pvedaemon[4514]: end task UPID:NucJEEDOM:000011A3:0013FA6D:61A3BC31:qmshutdown:102:Jeedom@pve:
Nov 28 18:29:00 NucJEEDOM systemd[1]: Starting Proxmox VE replication runner...
Nov 28 18:29:00 NucJEEDOM systemd[1]: pvesr.service: Succeeded.
Nov 28 18:29:00 NucJEEDOM systemd[1]: Started Proxmox VE replication runner.
Nov 28 18:29:31 NucJEEDOM pvedaemon[1000]: <root@pam> successful auth for user 'root@pam'
Nov 28 18:29:49 NucJEEDOM QEMU[1030]: kvm: terminating on signal 15 from pid 665 (/usr/sbin/qmeventd)
Nov 28 18:29:49 NucJEEDOM kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Nov 28 18:29:49 NucJEEDOM kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Nov 28 18:29:49 NucJEEDOM kernel: vmbr0: port 2(fwpr101p0) entered disabled state
Nov 28 18:29:49 NucJEEDOM kernel: device fwln101i0 left promiscuous mode
Nov 28 18:29:49 NucJEEDOM kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Nov 28 18:29:49 NucJEEDOM kernel: device fwpr101p0 left promiscuous mode
Nov 28 18:29:49 NucJEEDOM kernel: vmbr0: port 2(fwpr101p0) entered disabled state
Nov 28 18:29:50 NucJEEDOM kernel: usb 1-1: reset low-speed USB device number 2 using xhci_hcd
Nov 28 18:29:50 NucJEEDOM qmeventd[657]: Starting cleanup for 101
Nov 28 18:29:50 NucJEEDOM qmeventd[657]: trying to acquire lock...
Nov 28 18:29:50 NucJEEDOM kernel: hid-generic 0003:051D:0002.0002: hiddev0,hidraw0: USB HID v1.10 Device [APC Back-UPS ES 700G FW:871.O2 .I USB FW:O2 ] on usb-0000:00:14.0-1/input0
Nov 28 18:29:50 NucJEEDOM kernel: usb 1-4: reset full-speed USB device number 4 using xhci_hcd
Nov 28 18:29:50 NucJEEDOM kernel: ftdi_sio 1-4:1.0: FTDI USB Serial Device converter detected
Nov 28 18:29:50 NucJEEDOM kernel: usb 1-4: Detected FT232RL
Nov 28 18:29:50 NucJEEDOM kernel: usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB0
Nov 28 18:29:51 NucJEEDOM kernel: usb 1-3: reset full-speed USB device number 3 using xhci_hcd
Nov 28 18:29:51 NucJEEDOM kernel: cdc_acm 1-3:1.0: ttyACM0: USB ACM device
Nov 28 18:29:51 NucJEEDOM systemd[1]: 101.scope: Succeeded.
Nov 28 18:29:52 NucJEEDOM qmeventd[657]:  OK
Nov 28 18:29:52 NucJEEDOM qmeventd[657]: Finished cleanup for 101
Nov 28 18:29:53 NucJEEDOM pvedaemon[4514]: end task UPID:NucJEEDOM:000011A4:0013FA6E:61A3BC31:qmshutdown:101:Jeedom@pve:
Nov 28 18:29:54 NucJEEDOM pvedaemon[4514]: all VMs and CTs stopped
Nov 28 18:29:54 NucJEEDOM pvedaemon[1001]: <Jeedom@pve> end task UPID:NucJEEDOM:000011A2:0013FA6C:61A3BC31:stopall::Jeedom@pve: OK
Nov 28 18:30:00 NucJEEDOM systemd[1]: Starting Proxmox VE replication runner...
Nov 28 18:30:00 NucJEEDOM systemd[1]: pvesr.service: Succeeded.
Nov 28 18:30:00 NucJEEDOM systemd[1]: Started Proxmox VE replication runner.
Nov 28 18:30:10 NucJEEDOM pvedaemon[999]: <root@pam> successful auth for user 'root@pam'
Nov 28 18:31:00 NucJEEDOM systemd[1]: Starting Proxmox VE replication runner...
Nov 28 18:31:00 NucJEEDOM systemd[1]: pvesr.service: Succeeded.
Nov 28 18:31:00 NucJEEDOM systemd[1]: Started Proxmox VE replication runner.
Nov 28 18:31:36 NucJEEDOM pvedaemon[5127]: start VM 101: UPID:NucJEEDOM:00001407:0014483D:61A3BCF8:qmstart:101:root@pam:
Nov 28 18:31:36 NucJEEDOM pvedaemon[999]: <root@pam> starting task UPID:NucJEEDOM:00001407:0014483D:61A3BCF8:qmstart:101:root@pam:
Nov 28 18:31:36 NucJEEDOM systemd[1]: Started 101.scope.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5138]: Using default interface naming scheme 'v240'.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5138]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5138]: Could not generate persistent MAC address for tap101i0: No such file or directory
Nov 28 18:31:36 NucJEEDOM kernel: device tap101i0 entered promiscuous mode
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5138]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5138]: Could not generate persistent MAC address for fwbr101i0: No such file or directory
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5135]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5141]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5135]: Using default interface naming scheme 'v240'.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5141]: Using default interface naming scheme 'v240'.
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5135]: Could not generate persistent MAC address for fwpr101p0: No such file or directory
Nov 28 18:31:36 NucJEEDOM systemd-udevd[5141]: Could not generate persistent MAC address for fwln101i0: No such file or directory
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Nov 28 18:31:36 NucJEEDOM kernel: device fwln101i0 entered promiscuous mode
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Nov 28 18:31:36 NucJEEDOM kernel: vmbr0: port 2(fwpr101p0) entered blocking state
Nov 28 18:31:36 NucJEEDOM kernel: vmbr0: port 2(fwpr101p0) entered disabled state
Nov 28 18:31:36 NucJEEDOM kernel: device fwpr101p0 entered promiscuous mode
Nov 28 18:31:36 NucJEEDOM kernel: vmbr0: port 2(fwpr101p0) entered blocking state
Nov 28 18:31:36 NucJEEDOM kernel: vmbr0: port 2(fwpr101p0) entered forwarding state
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Nov 28 18:31:36 NucJEEDOM kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Nov 28 18:31:36 NucJEEDOM kernel: ftdi_sio ttyUSB0: FTDI USB Serial Device converter now disconnected from ttyUSB0
Nov 28 18:31:36 NucJEEDOM kernel: ftdi_sio 1-4:1.0: device disconnected
Nov 28 18:31:36 NucJEEDOM pvedaemon[999]: <root@pam> end task UPID:NucJEEDOM:00001407:0014483D:61A3BCF8:qmstart:101:root@pam: OK
Nov 28 18:31:42 NucJEEDOM pvedaemon[5227]: start VM 102: UPID:NucJEEDOM:0000146B:00144AAE:61A3BCFE:qmstart:102:root@pam:
Nov 28 18:31:42 NucJEEDOM pvedaemon[1001]: <root@pam> starting task UPID:NucJEEDOM:0000146B:00144AAE:61A3BCFE:qmstart:102:root@pam:
Nov 28 18:31:42 NucJEEDOM systemd[1]: Started 102.scope.
Nov 28 18:31:42 NucJEEDOM systemd-udevd[5239]: Using default interface naming scheme 'v240'.
Nov 28 18:31:42 NucJEEDOM systemd-udevd[5239]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:42 NucJEEDOM systemd-udevd[5239]: Could not generate persistent MAC address for tap102i0: No such file or directory
Nov 28 18:31:42 NucJEEDOM kernel: device tap102i0 entered promiscuous mode
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5239]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5239]: Could not generate persistent MAC address for fwbr102i0: No such file or directory
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5238]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5234]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5234]: Using default interface naming scheme 'v240'.
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5238]: Using default interface naming scheme 'v240'.
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5234]: Could not generate persistent MAC address for fwln102i0: No such file or directory
Nov 28 18:31:43 NucJEEDOM systemd-udevd[5238]: Could not generate persistent MAC address for fwpr102p0: No such file or directory
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Nov 28 18:31:43 NucJEEDOM kernel: device fwln102i0 entered promiscuous mode
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 1(fwln102i0) entered forwarding state
Nov 28 18:31:43 NucJEEDOM kernel: vmbr0: port 3(fwpr102p0) entered blocking state
Nov 28 18:31:43 NucJEEDOM kernel: vmbr0: port 3(fwpr102p0) entered disabled state
Nov 28 18:31:43 NucJEEDOM kernel: device fwpr102p0 entered promiscuous mode
Nov 28 18:31:43 NucJEEDOM kernel: vmbr0: port 3(fwpr102p0) entered blocking state
Nov 28 18:31:43 NucJEEDOM kernel: vmbr0: port 3(fwpr102p0) entered forwarding state
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Nov 28 18:31:43 NucJEEDOM kernel: fwbr102i0: port 2(tap102i0) entered forwarding state
Nov 28 18:31:43 NucJEEDOM pvedaemon[1001]: <root@pam> end task UPID:NucJEEDOM:0000146B:00144AAE:61A3BCFE:qmstart:102:root@pam: OK
Nov 28 18:31:46 NucJEEDOM kernel: usb 1-1: reset low-speed USB device number 2 using xhci_hcd
Nov 28 18:31:47 NucJEEDOM kernel: usb 1-3: reset full-speed USB device number 3 using xhci_hcd
Nov 28 18:31:48 NucJEEDOM kernel: usb 1-4: reset full-speed USB device number 4 using xhci_hcd
Nov 28 18:32:00 NucJEEDOM systemd[1]: Starting Proxmox VE replication runner...
Nov 28 18:32:00 NucJEEDOM systemd[1]: pvesr.service: Succeeded.
Nov 28 18:32:00 NucJEEDOM systemd[1]: Started Proxmox VE replication runner.
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 407)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 407)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 529)
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU4: Core temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU0: Core temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU6: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU2: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU4: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU0: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU7: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU3: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU5: Package temperature/speed normal
Nov 28 18:32:07 NucJEEDOM kernel: mce: CPU1: Package temperature/speed normal
Nov 28 18:33:00 NucJEEDOM systemd[1]: Starting Proxmox VE replication runner...
Nov 28 18:33:00 NucJEEDOM systemd[1]: pvesr.service: Succeeded.
Nov 28 18:33:00 NucJEEDOM systemd[1]: Started Proxmox VE replication runner.
Nov 28 18:33:03 NucJEEDOM pvedaemon[1000]: <root@pam> successful auth for user 'Jeedom@pve'
[… the minute-by-minute "Starting/Started Proxmox VE replication runner" and "successful auth for user 'Jeedom@pve'" lines repeat unchanged from 18:34 through 18:59, with occasional 'root@pam' logins and two further mce temperature-throttle bursts at 18:40:02 and 18:58:37; no shutdown or poweroff task appears anywhere in this interval …]

The full Proxmox log is attached: Log proxmox.txt (107.6 KB).

Finally, I ran the command by clicking the "Test" button:

No, that is just the automatic creation failing to place the device in the intended object; it has no impact.

Is this the log from the moment the "stop node" command was run? Can you switch to debug mode and check?

I'm not sure we will see anything in the syslog; perhaps more in the access and task audit (at the bottom of the Proxmox interface).
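That task audit can also be queried from the CLI. A sketch using the node name from the logs (standard `pvesh` call; `--errors 1` includes failed tasks):

```shell
# List the 20 most recent tasks on the node, failed ones included,
# to see whether a node-shutdown task was ever created and how it ended:
pvesh get /nodes/NucJEEDOM/tasks --errors 1 --limit 20
```

If no shutdown task shows up at all, the request never reached Proxmox (or was rejected before a task was created), which points at the API side rather than the node.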

Hello,

I confirm my first post: on my setup, the "stop node" command has no effect on the node.
How do you do it? Is there another way to stop the node, and therefore the NUC?
And if this command did work, would it shut down the VMs cleanly first?


And I confirm that this command works fine. For the rest, you would need to answer the questions asked three months ago, because we are not getting anywhere :wink:

Edit: I just realized that you actually edited your message preceding my first request for logs and added the logs there :confused:
Not very clever; if you don't reply to my question, I have no way of knowing there is something new on the topic…
And now the log is in a text file, too long and too awkward to read on mobile; I will look at it on a computer as soon as possible.

Regarding VM shutdown, I invite you to read the Proxmox documentation, since it is Proxmox itself that takes care of stopping the VMs (or not).
On my VMs, with the guest agent installed, they are shut down cleanly, but I admit I don't know whether there are conditions for this to go well.
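As a quick check that a given VM will be shut down cleanly, the QEMU guest agent can be pinged from the host (VM id 101 taken from the logs above; the agent must also be enabled in the VM's Options tab):

```shell
# Succeeds only if the guest agent is running inside the VM;
# with the agent active, "shutdown" is a clean, guest-cooperative stop
# rather than a hard power-off:
qm agent 101 ping
```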

Hello Mips,

In fact, I had edited my first post a long time ago, but power cuts are planned this week and I have installed a new UPS, so I am testing again.
Stopping the VMs works fine; it is the "stop node" command that has no effect.
Here is the debug-mode log for the test I just redid with the plugin's "stop node" command (also attached):
proxmox.txt (90.1 KB)

[2022-02-20 12:12:20][INFO] : Start sync
[2022-02-20 12:12:20][DEBUG] : Read Qemu config of TESTJEEDOM
[2022-02-20 12:12:20][DEBUG] : Checking commands of qemu / TESTJEEDOM
[2022-02-20 12:12:20][DEBUG] : set value:'0.90190887451172' / maxValue:'4' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][INFO] : Start refresh qemu/Lxc TESTJEEDOM
[2022-02-20 12:12:20][DEBUG] : set value:'0.90008163452148' / maxValue:'4' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][DEBUG] : Read Qemu config of JeedomBuster
[2022-02-20 12:12:20][DEBUG] : Checking commands of qemu / JeedomBuster
[2022-02-20 12:12:20][DEBUG] : set value:'3.1886596679688' / maxValue:'4' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][INFO] : Start refresh qemu/Lxc JeedomBuster
[2022-02-20 12:12:20][DEBUG] : set value:'3.1896057128906' / maxValue:'4' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][DEBUG] : Read Qemu config of TRUENAS
[2022-02-20 12:12:20][DEBUG] : Checking commands of qemu / TRUENAS
[2022-02-20 12:12:20][DEBUG] : set value:'6.1345933545381' / maxValue:'7.90625' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][INFO] : Start refresh qemu/Lxc TRUENAS
[2022-02-20 12:12:20][DEBUG] : set value:'6.1437997827306' / maxValue:'7.90625' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][DEBUG] : Read Qemu config of ZWaveJS2MQTT
[2022-02-20 12:12:20][DEBUG] : Checking commands of qemu / ZWaveJS2MQTT
[2022-02-20 12:12:20][DEBUG] : set value:'0' / maxValue:'2' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][INFO] : Start refresh qemu/Lxc ZWaveJS2MQTT
[2022-02-20 12:12:20][DEBUG] : set value:'0' / maxValue:'2' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][DEBUG] : Read node config of NucJEEDOM
[2022-02-20 12:12:20][DEBUG] : Checking commands of node / NucJEEDOM
[2022-02-20 12:12:20][DEBUG] : set value:'21.732719421387' / maxValue:'29.404209136963' / unit:'GiB' for cmdId:'disk'
[2022-02-20 12:12:20][DEBUG] : set value:'8.5982322692871' / maxValue:'15.568717956543' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][INFO] : Start refresh node NucJEEDOM
[2022-02-20 12:12:20][DEBUG] : set value:'8.5970687866211' / maxValue:'15.568717956543' / unit:'GiB' for cmdId:'mem'
[2022-02-20 12:12:20][DEBUG] : set value:'21.732723236084' / maxValue:'29.404209136963' / unit:'GiB' for cmdId:'disk'
[2022-02-20 12:12:20][DEBUG] : set value:'0.203369140625' / maxValue:'15.999996185303' / unit:'GiB' for cmdId:'swap'
[2022-02-20 12:12:20][DEBUG] : {"maxmem":8489271296,"status":"running","disk":0,"netout":90615795,"vmid":"102","netin":165447862,"diskwrite":0,"template":"","pid":"11766","mem":6596854785,"maxdisk":8589934592,"cpu":0.0197187603545898,"cpus":4,"uptime":3878,"name":"TRUENAS","diskread":0}
[2022-02-20 12:12:20][DEBUG] : {"cpus":2,"diskread":0,"name":"ZWaveJS2MQTT","uptime":0,"mem":0,"maxdisk":8589934592,"cpu":0,"pid":null,"template":"","maxmem":2147483648,"status":"stopped","diskwrite":0,"netin":0,"vmid":"103","netout":0,"disk":0}
[2022-02-20 12:12:20][DEBUG] : {"template":"","pid":"11374","netout":7855938,"vmid":"100","disk":0,"diskwrite":0,"netin":26861987,"status":"running","maxmem":4294967296,"diskread":0,"uptime":3969,"name":"TESTJEEDOM","cpus":8,"cpu":0.0182063449133948,"mem":1140191820,"maxdisk":17179869184}
[2022-02-20 12:12:20][DEBUG] : {"netin":251944472723,"diskwrite":0,"disk":0,"netout":189202881515,"vmid":"101","maxmem":4294967296,"status":"running","pid":"1024","template":"","cpu":0.0251064251542487,"maxdisk":25769803776,"mem":3005454792,"uptime":1886666,"name":"JeedomBuster","diskread":0,"cpus":8}
[2022-02-20 12:12:20][DEBUG] : qemuRunning=3 - qemuStopped=1
[2022-02-20 12:12:20][DEBUG] : lxcRunning=0 - lxcStopped=0
[2022-02-20 12:12:20][DEBUG] : Read storage config of Disktation
[2022-02-20 12:12:20][DEBUG] : Checking commands of storage / Disktation
[2022-02-20 12:12:20][DEBUG] : set value:'14.549144983292' / maxValue:'26.845287084579' / unit:'TiB' for cmdId:'disk'
[2022-02-20 12:12:20][DEBUG] : Read storage config of local-lvm
[2022-02-20 12:12:20][DEBUG] : Checking commands of storage / local-lvm
[2022-02-20 12:12:20][DEBUG] : set value:'77.226660155691' / maxValue:'166.9765625' / unit:'GiB' for cmdId:'disk'
[2022-02-20 12:12:20][DEBUG] : Read storage config of local
[2022-02-20 12:12:20][DEBUG] : Checking commands of storage / local
[2022-02-20 12:12:20][DEBUG] : set value:'21.732719421387' / maxValue:'29.404209136963' / unit:'GiB' for cmdId:'disk'
[2022-02-20 12:12:20][DEBUG] : checking for old resources: 8
[2022-02-20 12:12:26][INFO] : Trying to connect to 192.168.1.96
[2022-02-20 12:12:26][INFO] : Start sync
[2022-02-20 12:12:26][INFO] : Start refresh qemu/Lxc TESTJEEDOM
[2022-02-20 12:12:26][INFO] : Start refresh qemu/Lxc JeedomBuster
[2022-02-20 12:12:26][INFO] : Start refresh qemu/Lxc TRUENAS
[2022-02-20 12:12:27][INFO] : Start refresh qemu/Lxc ZWaveJS2MQTT
[2022-02-20 12:12:27][INFO] : Start refresh node NucJEEDOM
[2022-02-20 12:12:31][INFO] : Trying to connect to 192.168.1.96
[2022-02-20 12:12:31][INFO] : nodeShutdown Status: 403 - Response: 
[2022-02-20 12:12:57][INFO] : Trying to connect to 192.168.1.96
[2022-02-20 12:12:57][INFO] : Start sync
[2022-02-20 12:12:57][INFO] : Start refresh qemu/Lxc TESTJEEDOM
[2022-02-20 12:12:57][INFO] : Start refresh qemu/Lxc JeedomBuster
[2022-02-20 12:12:57][INFO] : Start refresh qemu/Lxc TRUENAS
[2022-02-20 12:12:57][INFO] : Start refresh qemu/Lxc ZWaveJS2MQTT
[2022-02-20 12:12:57][INFO] : Start refresh node NucJEEDOM

I don't know much about this, but the log shows: nodeShutdown Status: 403 - Response:
Apparently no response from the node to the shutdown command?
I checked the Jeedom user's rights in Proxmox: OK
Below is the plugin's health page

Indeed, the plugin receives a 403, which means Proxmox is refusing access to this action.
Are you sure your user has the required privileges?

Can you show the user's configuration in Proxmox?
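For reference, the plugin's "shutdown node" action corresponds to a POST on the node's status endpoint of the Proxmox API, and a 403 there is a permission refusal. A way to reproduce it outside of Jeedom, sketched with curl; the node name and IP are taken from the logs above, while the API token name and secret are placeholders you would create under Datacenter > Permissions > API Tokens:

```shell
# POST /api2/json/nodes/{node}/status with command=shutdown
# Returns HTTP 403 if the authenticated user lacks Sys.PowerMgmt on the node
curl -k -X POST \
  -H "Authorization: PVEAPIToken=jeedom@pve!mytoken=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  -d "command=shutdown" \
  "https://192.168.1.96:8006/api2/json/nodes/NucJEEDOM/status"
```

If the same call succeeds as root@pam but fails as jeedom@pve, the problem is purely the user's permissions.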

Here are the rights: the Jeedom user is administrator. I went through the long list of privileges: everything is set to "Yes"

Here is a screenshot of the Proxmox console for the user Jeedom@pve:

I don't see Sys.PowerMgmt, which in my opinion is the privilege required to shut down a node.

As a reminder, all the required privileges are documented in the table in the "Attribution des permissions" section of the documentation; I can hardly give more detail than that: https://mips2648.github.io/jeedom-plugins-docs/proxmox/fr_FR/#tocAnchor-1-3-2

Excerpt:
image
This means that to get node information you need Sys.Audit, and to run all the node actions supported by the plugin you additionally need Sys.Modify and Sys.PowerMgmt.

Edit: for what it's worth, I created a custom role in Proxmox containing all the permissions described in the documentation, and I assign that role to my Jeedom users.
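Such a custom role can also be created from a shell on the Proxmox host with pveum. A sketch, assuming the role name "JeedomRole" (the exact privilege list should be taken from the plugin documentation; the list below covers only the node privileges discussed here):

```shell
# Create a custom role carrying the node-related privileges from the plugin doc
pveum role add JeedomRole -privs "Sys.Audit,Sys.Modify,Sys.PowerMgmt"

# Grant that role to the jeedom@pve user at the root of the permission tree
pveum acl modify / -user jeedom@pve -role JeedomRole
```

Granting the role on "/" applies it to every node; a narrower path can be used to restrict it.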

That is not the user's configuration you are showing there.

I just reread your earlier posts:

The PVEAdmin role does not grant Sys.PowerMgmt; I think the problem is indeed a misconfigured Proxmox user.

OK, thanks, I'll change that.
Indeed, I thought that being admin gave the user all the required rights; apparently that is not the case.

There is "Administrator", which has all privileges (far more than the plugin needs), and "PVEAdmin".

I'll repeat the edit from my previous post: for what it's worth, I created a custom role in Proxmox containing all the permissions described in the documentation, and I assign that role to my Jeedom users.

Thanks for the clarification, sorry if I missed it.

No worries; with the logs and the configuration screenshots the "problem" was quickly identified :wink:

I'll let you test after fixing your configuration, and then mark the solution.

Proxmox really isn't simple!
I had to create a role with Sys.PowerMgmt and assign it to the Jeedom user...

But it finally works. Thanks again


This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.