Proxmox: VM stops - why?
- Creator: Don_2020
- Created on
redjack1000
Fleet Admiral
- Registered: March 2022
- Posts: 11,952
Have a look in the syslog; there may be hints about the cause in there.
Cu
redjack
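For reference, one way to pull exactly such a time window out of the host's journal (a sketch assuming a standard PVE install with systemd-journald; the timestamps and VM ID below are only placeholders):

journalctl --since "2024-07-14 09:50" --until "2024-07-14 10:00"
# or narrow it down to the Proxmox daemons and one VM ID:
journalctl --since "2024-07-14 09:50" --until "2024-07-14 10:00" | grep -E "pvedaemon|qmeventd|qmp| 101"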
Krik
Fleet Admiral
- Registered: June 2005
- Posts: 14,208
That doesn't tell us much on its own. If you have two VMs and give each of them 32 GB of RAM, Proxmox can't deliver that.
Don_2020 wrote:
64 GB RAM
16 x AMD Ryzen 7 7700 8-Core Processor (1 Socket)
Should actually be enough.
Anonymous209
Guest
Yep, check the log, or adjust the logging so you can actually see something.
dms
Lt. Commander
- Registered: Dec. 2020
- Posts: 1,508
@DON - which operating systems are you virtualizing?
Yes and no ... formerly known under the keyword ballooning
https://pve.proxmox.com/wiki/Dynamic_Memory_Management
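Roughly what that looks like on the CLI (a sketch, not taken from this thread; qm set with --memory/--balloon is standard PVE tooling, the VM ID is just an example):

# up to 8 GiB, the balloon driver may shrink the guest towards 4 GiB under host memory pressure
qm set 101 --memory 8192 --balloon 4096
# fixed 8 GiB, ballooning disabled
qm set 101 --memory 8192 --balloon 0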
0x8100
Admiral
- Registered: Oct. 2015
- Posts: 9,507
Yes, it can. Problems only start when the RAM the VMs actually use exceeds physical RAM plus swap; then the OOM killer kicks in.
Krik wrote:
If you have two VMs and give each of them 32 GB of RAM, Proxmox can't deliver that.
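If you suspect that happened, the host's kernel log will show it (the exact message wording varies by kernel version):

# look for OOM-killer activity on the host
dmesg -T | grep -i -E "out of memory|oom-killer|killed process"
journalctl -k | grep -i oom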
dms
Lt. Commander
- Registered: Dec. 2020
- Posts: 1,508
@Krik - yes, you may be right there - but a lab environment is meant for testing and learning
though possibly already a case of the "make it work anyway" from
https://www.computerbase.de/forum/threads/truenas-scale-als-proxmox-vm-oder-bar-metal.2202746/
0x8100
Admiral
- Registered: Oct. 2015
- Posts: 9,507
Don_2020
Where is the log? Under pve1 (that's the server) I only have System > System Log. Is that the log location? The last VM that was stopped is 101; my UrBackup server runs on it.
What should I be looking for in there?
Attached is my log from 09:50 to 10:00; VM 101 was shut down by Proxmox during this window.
Jul 14 09:51:34 pve1 pvedaemon[4045740]: worker exit
Jul 14 09:51:34 pve1 pvedaemon[2591]: worker 4045740 finished
Jul 14 09:51:34 pve1 pvedaemon[2591]: starting 1 worker(s)
Jul 14 09:51:34 pve1 pvedaemon[2591]: worker 732057 started
Jul 14 09:53:02 pve1 pvedaemon[2921181]: <root@pam> starting task UPIDve1:00100EFB:08291ABD:669383DE:vncshell::root@pam:
Jul 14 09:53:02 pve1 pvedaemon[1052411]: starting termproxy UPIDve1:00100EFB:08291ABD:669383DE:vncshell::root@pam:
Jul 14 09:53:02 pve1 pvedaemon[2094498]: <root@pam> successful auth for user 'root@pam'
Jul 14 09:53:02 pve1 login[1052629]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jul 14 09:53:02 pve1 systemd-logind[2062]: New session 10614 of user root.
Jul 14 09:53:02 pve1 systemd[1]: Started session-10614.scope - Session 10614 of User root.
Jul 14 09:53:02 pve1 login[1052730]: ROOT LOGIN on '/dev/pts/2'
Jul 14 09:53:28 pve1 nfsidmap[1147874]: nss_getpwnam: name 'root@localdomain' does not map into domain 'fritz.box'
Jul 14 09:53:28 pve1 nfsidmap[1147943]: nss_name_to_gid: name 'root@localdomain' does not map into domain 'fritz.box'
Jul 14 09:53:28 pve1 nfsidmap[1148111]: nss_getpwnam: name 'admin@localdomain' does not map into domain 'fritz.box'
Jul 14 09:53:28 pve1 nfsidmap[1148165]: nss_name_to_gid: name 'users@localdomain' does not map into domain 'fritz.box'
Jul 14 09:53:31 pve1 pvedaemon[732057]: <root@pam> successful auth for user 'root@pam'
Jul 14 09:53:53 pve1 postfix/qmgr[1213327]: 594982815A3: from=<root@pve1.fritzbox>, size=35740, nrcpt=1 (queue active)
Jul 14 09:53:53 pve1 postfix/qmgr[1213327]: 71AFD2816FB: from=<root@pve1.fritzbox>, size=34694, nrcpt=1 (queue active)
Jul 14 09:53:53 pve1 postfix/smtp[1238570]: 594982815A3: host mx01.emig.gmx.net[212.227.17.5] refused to talk to me: 554-gmx.net (mxgmx106) Nemesis ESMTP Service not available 554-No SMTP service 554-IP address is block listed. 554 For explanation visit https://postmaster.gmx.net/de/case?c=r0301&i=ip&v=??.??.??.??&r=1MiLEc-1rpjXz2ULO-00kfn5
Jul 14 09:53:53 pve1 postfix/smtp[1238589]: 71AFD2816FB: host mx01.emig.gmx.net[212.227.17.5] refused to talk to me: 554-gmx.net (mxgmx107) Nemesis ESMTP Service not available 554-No SMTP service 554-IP address is block listed. 554 For explanation visit https://postmaster.gmx.net/de/case?c=r0301&i=ip&v=??.??.??.??&r=1MT8Fj-1svt4o2VGZ-00XlJH
Jul 14 09:53:53 pve1 postfix/smtp[1238570]: 594982815A3: to=<@gmx.de>, relay=mx00.emig.gmx.net[212.227.15.9]:25, delay=304207, delays=304207/0.01/0.14/0, dsn=4.0.0, status=deferred (host mx00.emig.gmx.net[212.227.15.9] refused to talk to me: 554-gmx.net (mxgmx009) Nemesis ESMTP Service not available 554-No SMTP service 554-IP address is block listed. 554 For explanation visit https://postmaster.gmx.net/de/case?c=r0301&i=ip&v=??.??.??.??&r=1MG7kU-1saetR2eRY-00Be0U)
Jul 14 09:53:53 pve1 postfix/smtp[1238589]: 71AFD2816FB: to=<@gmx.de>, relay=mx00.emig.gmx.net[212.227.15.9]:25, delay=218736, delays=218736/0.01/0.14/0, dsn=4.0.0, status=deferred (host mx00.emig.gmx.net[212.227.15.9] refused to talk to me: 554-gmx.net (mxgmx008) Nemesis ESMTP Service not available 554-No SMTP service 554-IP address is block listed. 554 For explanation visit https://postmaster.gmx.net/de/case?c=r0301&i=ip&v=??.??.??.??&r=1N5CUh-1sKtMa2fx7-014w3B)
Jul 14 09:55:01 pve1 CRON[1494879]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 14 09:55:01 pve1 CRON[1494885]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jul 14 09:55:01 pve1 CRON[1494879]: pam_unix(cron:session): session closed for user root
Jul 14 09:56:18 pve1 systemd-logind[2062]: Session 10614 logged out. Waiting for processes to exit.
Jul 14 09:56:18 pve1 pvedaemon[2921181]: <root@pam> end task UPIDve1:00100EFB:08291ABD:669383DE:vncshell::root@pam: OK
Jul 14 09:56:19 pve1 systemd[1]: session-10614.scope: Deactivated successfully.
Jul 14 09:56:19 pve1 systemd[1]: session-10614.scope: Consumed 56.408s CPU time.
Jul 14 09:56:19 pve1 systemd-logind[2062]: Removed session 10614.
Jul 14 09:56:31 pve1 pvedaemon[1850303]: starting termproxy UPIDve1:001C3BBF:08296C50:669384AF:vncshell::root@pam:
Jul 14 09:56:31 pve1 pvedaemon[732057]: <root@pam> starting task UPIDve1:001C3BBF:08296C50:669384AF:vncshell::root@pam:
Jul 14 09:56:31 pve1 pvedaemon[2094498]: <root@pam> successful auth for user 'root@pam'
Jul 14 09:56:31 pve1 login[1850476]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jul 14 09:56:31 pve1 systemd-logind[2062]: New session 10617 of user root.
Jul 14 09:56:31 pve1 systemd[1]: Started session-10617.scope - Session 10617 of User root.
Jul 14 09:56:31 pve1 login[1850571]: ROOT LOGIN on '/dev/pts/2'
Jul 14 09:56:54 pve1 systemd-logind[2062]: Session 10617 logged out. Waiting for processes to exit.
Jul 14 09:56:54 pve1 pvedaemon[732057]: <root@pam> end task UPIDve1:001C3BBF:08296C50:669384AF:vncshell::root@pam: OK
Jul 14 09:56:55 pve1 systemd[1]: session-10617.scope: Deactivated successfully.
Jul 14 09:56:55 pve1 systemd[1]: session-10617.scope: Consumed 4.784s CPU time.
Jul 14 09:56:55 pve1 systemd-logind[2062]: Removed session 10617.
Jul 14 09:56:56 pve1 pvedaemon[1949397]: start VM 101: UPIDve1:001DBED5:08297606:669384C8:qmstart:101:root@pam:
Jul 14 09:56:56 pve1 pvedaemon[2094498]: <root@pam> starting task UPIDve1:001DBED5:08297606:669384C8:qmstart:101:root@pam:
Jul 14 09:56:56 pve1 systemd[1]: Started 101.scope.
Jul 14 09:56:56 pve1 kvm[1950394]: auxpropfunc error invalid parameter supplied
Jul 14 09:56:56 pve1 kvm[1950394]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb
Jul 14 09:56:56 pve1 kvm[1950394]: ldapdb_canonuser_plug_init() failed in sasl_canonuser_add_plugin(): invalid parameter supplied
Jul 14 09:56:56 pve1 kvm[1950394]: _sasl_plugin_load failed on sasl_canonuser_init for plugin: ldapdb
Jul 14 09:56:56 pve1 kernel: tap101i0: entered promiscuous mode
Jul 14 09:56:56 pve1 kernel: vmbr0: port 5(fwpr101p0) entered blocking state
Jul 14 09:56:56 pve1 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 14 09:56:56 pve1 kernel: fwpr101p0: entered allmulticast mode
Jul 14 09:56:56 pve1 kernel: fwpr101p0: entered promiscuous mode
Jul 14 09:56:56 pve1 kernel: vmbr0: port 5(fwpr101p0) entered blocking state
Jul 14 09:56:56 pve1 kernel: vmbr0: port 5(fwpr101p0) entered forwarding state
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 14 09:56:56 pve1 kernel: fwln101i0: entered allmulticast mode
Jul 14 09:56:56 pve1 kernel: fwln101i0: entered promiscuous mode
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jul 14 09:56:56 pve1 kernel: tap101i0: entered allmulticast mode
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jul 14 09:56:56 pve1 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Jul 14 09:56:57 pve1 pvedaemon[2094498]: <root@pam> end task UPIDve1:001DBED5:08297606:669384C8:qmstart:101:root@pam: OK
Jul 14 09:57:00 pve1 pvedaemon[732057]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Jul 14 09:57:08 pve1 pvedaemon[1997230]: shutdown VM 101: UPIDve1:001E79AE:08297AC1:669384D4:qmshutdown:101:root@pam:
Jul 14 09:57:08 pve1 pvedaemon[2094498]: <root@pam> starting task UPIDve1:001E79AE:08297AC1:669384D4:qmshutdown:101:root@pam:
Jul 14 09:57:10 pve1 kernel: tap101i0: left allmulticast mode
Jul 14 09:57:10 pve1 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jul 14 09:57:10 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 14 09:57:10 pve1 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 14 09:57:10 pve1 kernel: fwln101i0 (unregistering): left allmulticast mode
Jul 14 09:57:10 pve1 kernel: fwln101i0 (unregistering): left promiscuous mode
Jul 14 09:57:10 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 14 09:57:10 pve1 kernel: fwpr101p0 (unregistering): left allmulticast mode
Jul 14 09:57:10 pve1 kernel: fwpr101p0 (unregistering): left promiscuous mode
Jul 14 09:57:10 pve1 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 14 09:57:11 pve1 qmeventd[2061]: read: Connection reset by peer
Jul 14 09:57:11 pve1 pvedaemon[2921181]: VM 101 qmp command failed - unable to open monitor socket
Jul 14 09:57:11 pve1 pvedaemon[2094498]: <root@pam> end task UPIDve1:001E79AE:08297AC1:669384D4:qmshutdown:101:root@pam: OK
Jul 14 09:57:11 pve1 systemd[1]: 101.scope: Deactivated successfully.
Jul 14 09:57:11 pve1 systemd[1]: 101.scope: Consumed 13.347s CPU time.
Jul 14 09:57:11 pve1 qmeventd[2007903]: Starting cleanup for 101
Jul 14 09:57:11 pve1 qmeventd[2007903]: Finished cleanup for 101
Jul 14 09:57:25 pve1 pvedaemon[2921181]: <root@pam> update VM 101: -balloon 0 -delete shares -memory 8192
Jul 14 09:57:25 pve1 pvedaemon[2921181]: cannot delete 'shares' - not set in current configuration!
Jul 14 09:57:35 pve1 pvedaemon[2106475]: start VM 101: UPIDve1:0020246B:08298578:669384EF:qmstart:101:root@pam:
Jul 14 09:57:35 pve1 pvedaemon[2921181]: <root@pam> starting task UPIDve1:0020246B:08298578:669384EF:qmstart:101:root@pam:
Jul 14 09:57:35 pve1 systemd[1]: Started 101.scope.
Jul 14 09:57:35 pve1 kvm[2107326]: auxpropfunc error invalid parameter supplied
Jul 14 09:57:35 pve1 kvm[2107326]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb
Jul 14 09:57:35 pve1 kvm[2107326]: ldapdb_canonuser_plug_init() failed in sasl_canonuser_add_plugin(): invalid parameter supplied
Jul 14 09:57:35 pve1 kvm[2107326]: _sasl_plugin_load failed on sasl_canonuser_init for plugin: ldapdb
Jul 14 09:57:36 pve1 kernel: tap101i0: entered promiscuous mode
Jul 14 09:57:36 pve1 kernel: vmbr0: port 5(fwpr101p0) entered blocking state
Jul 14 09:57:36 pve1 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 14 09:57:36 pve1 kernel: fwpr101p0: entered allmulticast mode
Jul 14 09:57:36 pve1 kernel: fwpr101p0: entered promiscuous mode
Jul 14 09:57:36 pve1 kernel: vmbr0: port 5(fwpr101p0) entered blocking state
Jul 14 09:57:36 pve1 kernel: vmbr0: port 5(fwpr101p0) entered forwarding state
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 14 09:57:36 pve1 kernel: fwln101i0: entered allmulticast mode
Jul 14 09:57:36 pve1 kernel: fwln101i0: entered promiscuous mode
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jul 14 09:57:36 pve1 kernel: tap101i0: entered allmulticast mode
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jul 14 09:57:36 pve1 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Jul 14 09:57:36 pve1 pvedaemon[2921181]: <root@pam> end task UPIDve1:0020246B:08298578:669384EF:qmstart:101:root@pam: OK
Jul 14 09:57:43 pve1 pvedaemon[732057]: <root@pam> starting task UPIDve1:0020997F:08298862:669384F7:vncshell::root@pam:
Jul 14 09:57:43 pve1 pvedaemon[2136447]: starting termproxy UPIDve1:0020997F:08298862:669384F7:vncshell::root@pam:
Jul 14 09:57:43 pve1 pvedaemon[732057]: <root@pam> successful auth for user 'root@pam'
Jul 14 09:57:43 pve1 login[2136668]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jul 14 09:57:43 pve1 systemd-logind[2062]: New session 10618 of user root.
Jul 14 09:57:43 pve1 systemd[1]: Started session-10618.scope - Session 10618 of User root.
Jul 14 09:57:43 pve1 login[2136755]: ROOT LOGIN on '/dev/pts/2'
Anonymous209
Guest
09:56:56 pve1 pvedaemon[1949397]: start VM 101
09:57:08 pve1 pvedaemon[1997230]: shutdown VM 101: UPID
So it isn't even alive? It can't even really start?!
update VM 101: -balloon 0 -delete shares -memory 8192
Jul 14 09:57:25 pve1 pvedaemon[2921181]: cannot delete 'shares' - not set in current configuration!
VM 101 qmp command failed - unable to open monitor socket
Misconfigured, I'd say.
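The "guest-ping ... got timeout" above usually just means the QEMU guest agent isn't answering, e.g. the agent option is enabled in the VM config but the agent isn't installed or running inside the guest. Quick checks on the host (a sketch; the VM ID is taken from the log above):

# is the guest-agent option set for the VM?
qm config 101 | grep -i agent
# is the VM process up, and does its monitor socket exist?
qm status 101
ls -l /var/run/qemu-server/101.*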
redjack1000
Fleet Admiral
- Registered: March 2022
- Posts: 11,952
What is the ID of the VM that shows this behavior?
Which Proxmox version are you running?
CU
redjack
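The usual way to answer that on the host is pveversion (standard PVE tooling; the output shown in the comments is only an example of the format):

pveversion      # short form, e.g. pve-manager/8.x.y (running kernel: 6.8.8-2-pve)
pveversion -v   # full package version list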
Don_2020
uname -a outputs the following:
Linux pve1 6.8.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-2 (2024-06-24T09:00Z) x86_64 GNU/Linux
ID 101 is my server running UrBackup.
I only have one disk with 42 GB. There is no directory/disk named "shares".
redjack1000
Fleet Admiral
- Registered: March 2022
- Posts: 11,952
See #15.
Don_2020 wrote:
ID 101 is my server running UrBackup.
Cu
redjack
Anonymous209
Guest
Has the VM ever worked? Was it ever actually in use?
Don_2020
#15 doesn't help me any further.
Start VM: 09:56:56 pve1 pvedaemon[1949397]: start VM 101
Stop VM: 09:57:08 pve1 pvedaemon[1997230]: shutdown VM 101: UPID
So far, so good.
update VM 101: -balloon 0 -delete shares -memory 8192
Memory is set to 8 GiB and balloon = 0.
-delete shares ; what is that supposed to mean?
Jul 14 09:57:25 pve1 pvedaemon[2921181]: cannot delete 'shares' - not set in current configuration!
VM 101 qmp command failed - unable to open monitor socket
I can't make anything of that either!
The VM has been in use for about 4 weeks and backs up my data every 5 hours.
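For what it's worth: in a PVE VM config, "shares" is not a directory or disk - it is the memory-shares weight used for auto-ballooning, so "-delete shares" just removes that option from the config, and the warning is harmless if it was never set. On the CLI the same GUI edit would look roughly like this (sketch based on the log line quoted above):

qm set 101 --balloon 0 --memory 8192 --delete shares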