Recordings to AWS S3: s3fs-fuse causing CPU & memory issues, services need restarting. Thinking of switching to the AWS API instead of a file system.


luisp

New Member
May 17, 2021
Hi guys, quickly, this is the history of our production FusionPBX server. It was previously hosted in a VM at a datacenter, and there were issues I couldn't pin down, for example a "504 Gateway error" almost weekly, and every 6-8 weeks or so the server would just drop registrations and then be fine again within 5 minutes. I am not an expert, so we would pay for support, and they would usually tell us there was nothing wrong and that it could have been our internet... We are using s3fs for call recordings; CDRs are kept locally/on an archive server.

Now that we have more control over the server, we are having similar issues, and we were able to determine that s3fs is causing them. We are hosting on an EC2 instance, and when we first discovered the problem we were experiencing call quality issues: s3fs-fuse was using 100% CPU! We decided to use a previous version of s3fs-fuse, but a week later we experienced quality issues again. Our next step was to upgrade to an EC2 instance with 8 vCPUs and 16 GB of memory, and now we are having "Out of memory" issues every 2-3 weeks. ***As I was writing this, the problem happened again*** :eek: and when it does, services stop and we need to restart them.

I have attached a screenshot from the current incident; please take a look at the image.

Have any of you experienced these issues as well? I would like to move away from the file system and develop against the AWS API instead. Is anyone using the AWS API to upload recordings to S3, play them back, and download them?

Or

Is there a way to fix the s3fs issue? I am leaning more toward the AWS API, though, since it seems more reliable; roughly the direction sketched below.
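
To make the idea concrete, here is a rough sketch of the direction I am thinking of, assuming Python and boto3; the bucket name, key prefix, and file paths below are placeholders, not anything FusionPBX ships with:

```python
# Rough sketch: move recordings with the S3 API instead of an s3fs
# mount. Assumes boto3 is installed and AWS credentials are available
# (environment variables, an instance role, etc.).
import boto3

s3 = boto3.client("s3")

BUCKET = "my-recordings-bucket"  # placeholder bucket name
PREFIX = "recordings/"           # placeholder key prefix


def upload_recording(local_path: str, filename: str) -> None:
    """Push a finished recording to S3; the local copy can then go."""
    s3.upload_file(local_path, BUCKET, PREFIX + filename)


def download_recording(filename: str, local_path: str) -> None:
    """Pull a recording back down for playback or download."""
    s3.download_file(BUCKET, PREFIX + filename, local_path)


if __name__ == "__main__":
    # Example paths only; adjust to wherever FreeSWITCH writes files.
    upload_recording("/var/lib/freeswitch/recordings/example.wav",
                     "example.wav")
    download_recording("example.wav", "/tmp/example.wav")
```

The appeal over s3fs is that each call is a single bounded HTTP transfer, so there is no FUSE process holding cache and metadata in memory between calls.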
 

Attachments

  • i-0d3ff0d98c1f46696 (1).jpg (screenshot of the out-of-memory incident)

KonradSC

Active Member
Mar 10, 2017
I am running a similar setup and have not experienced this issue. I have some servers on Deb 8 and some on Deb 10. Right now we have about 350,000 recordings in the db.

One thought... do you have any cron jobs that scan your s3fs directories or walk all the files? I'm thinking of the default backup script that deletes recordings older than x number of days using the "find" command; on an s3fs mount that walk has to stat every object, which gets expensive. I removed that and created a script that pulls the old files from the database and deletes them individually; a rough sketch of the idea is below.
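
Something in this spirit, in Python; the v_call_recordings table and its column names are assumptions on my part at a FusionPBX-style schema, so check them against your actual database before using anything like this:

```python
# Rough sketch: delete recordings older than N days by asking the
# database for each file's path, then unlinking files one at a time,
# instead of letting "find" stat every object on the s3fs mount.
# Table and column names below are assumptions -- verify your schema.
import os
import psycopg2

DAYS_TO_KEEP = 90  # placeholder retention window

conn = psycopg2.connect(dbname="fusionpbx", user="fusionpbx",
                        password="...", host="localhost")  # placeholders

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT call_recording_path, call_recording_name
        FROM v_call_recordings
        WHERE call_recording_date < now() - make_interval(days => %s)
        """,
        (DAYS_TO_KEEP,),
    )
    for path, name in cur.fetchall():
        try:
            os.remove(os.path.join(path, name))  # one targeted unlink
        except FileNotFoundError:
            pass  # file already gone; the DB row may still need cleanup

conn.close()
```

The point is that the database already knows which files are old, so the mount only ever sees a handful of targeted deletes instead of a full directory walk.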

I agree with you about having an option for accessing recordings with a URL instead of a local path. S3FS is not really meant for production at scale.
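
If that option materializes, a presigned URL is probably the simplest mechanism: the browser streams straight from S3 and nothing touches a local path or an s3fs mount. A minimal sketch with boto3 (bucket and key are placeholders):

```python
# Minimal sketch: hand the UI a short-lived presigned URL so playback
# and download go directly against S3.
import boto3

s3 = boto3.client("s3")


def recording_url(key: str, expires: int = 3600) -> str:
    """Return a presigned GET URL valid for `expires` seconds."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-recordings-bucket",  # placeholder
                "Key": key},
        ExpiresIn=expires,
    )


# Example: embed the result in an <audio> tag or redirect to it.
print(recording_url("recordings/example.wav"))
```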
 