proxmox update breaks lxc fusionpbx

Status
Not open for further replies.

yaboc

New Member
Hi,

I'm running a rather outdated FusionPBX 4.2.4 on Debian 8 in an LXC container under Proxmox VE 6.2:

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1

I noticed that the latest Proxmox updates break the FusionPBX container: the FreeSWITCH systemd service no longer starts. I tested a fresh Debian 9 container (using the Proxmox Debian template) with the latest FusionPBX install (4.5.13) and can replicate the problem. I then tested a fresh Debian 10 KVM with a fresh FusionPBX install and everything works as expected.

Code:
systemctl status freeswitch.service
* freeswitch.service - freeswitch
   Loaded: loaded (/lib/systemd/system/freeswitch.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-05-17 22:17:54 EDT; 16min ago
  Process: 5968 ExecStartPre=/bin/mkdir -p /var/run/freeswitch/ (code=exited, status=214/SETSCHEDULER)

May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Control process exited, code=exited status=214
May 17 22:17:54 siplxc systemd[1]: Failed to start freeswitch.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Unit entered failed state.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Failed with result 'exit-code'.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Service hold-off time over, scheduling restart.
May 17 22:17:54 siplxc systemd[1]: Stopped freeswitch.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Start request repeated too quickly.
May 17 22:17:54 siplxc systemd[1]: Failed to start freeswitch.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Unit entered failed state.
May 17 22:17:54 siplxc systemd[1]: freeswitch.service: Failed with result 'exit-code'.

I was able to start it manually with /usr/bin/freeswitch -u www-data -g www-data -ncwait, so perhaps editing /etc/systemd/system/multi-user.target.wants/freeswitch.service would fix it.
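
Exit status 214/SETSCHEDULER means systemd could not apply the unit's CPU scheduling settings, which an LXC container typically cannot grant without extra capabilities. A minimal sketch of a drop-in override that relaxes those settings, assuming the stock unit is what requests realtime scheduling (you can check with systemctl cat freeswitch.service):

Code:
# /etc/systemd/system/freeswitch.service.d/override.conf
# Sketch only: drop the realtime scheduling requests that fail inside LXC.
[Service]
CPUSchedulingPolicy=other
IOSchedulingClass=best-effort

Then reload and restart:

Code:
systemctl daemon-reload
systemctl restart freeswitch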

Anyone experiencing the same?

I'm thinking about switching to KVM to avoid these things in the future, since I was just notified about this problem by a customer: a backup rebooted the container and the service didn't start.
 

DigitalDaz

Administrator
Staff member
Did you make the new container unprivileged? By default Proxmox seems to be creating unprivileged containers now, and this will break things.
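
For the record, in the GUI that's the "Unprivileged container" checkbox in the Create CT dialog; from the CLI it's the --unprivileged flag on pct create. A sketch (vmid, storage and template name are placeholders):

Code:
pct create 100 local:vztmpl/<debian-template>.tar.gz --unprivileged 0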
 

yaboc

New Member
Did you make the new container unprivileged? By default Proxmox seems to be creating unprivileged containers now, and this will break things.

Nope. I tried restoring the old one from backup and made sure "Unprivileged container" was unchecked; same with the fresh install, unprivileged was not checked.
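
For reference, the setting can also be double-checked from the Proxmox host (the vmid 100 below is just a placeholder):

Code:
pct config 100 | grep unprivileged
# an unprivileged container shows "unprivileged: 1"; a privileged one shows nothing (or 0)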

I thought this was related at first, but the container starts in my case, whereas in the Proxmox thread below it doesn't because of ZFS not mounting correctly.

https://forum.proxmox.com/threads/update-broke-lxc.59776/
 