BuddyBoss


BuddyBoss Facebook Groups

Besides the official BuddyBoss user group, there is a smaller Facebook group that is hyper-focused on working with BuddyBoss properly. It's run by Nick Chomey, who knows BuddyBoss inside and out.

https://www.facebook.com/groups/buddybossdevelopment

BuddyBoss Google Doc

The following document was created by a member of the BuddyBoss community. Below is the direct link.

https://docs.google.com/document/d/1S_mBVR2i3lU7cAO_mXbhxd_ekol5JEWx0wLZO0RENKU/edit?tab=t.0

Here is a mirror of the file as of 01-30-2025

Overview

Special thanks to Wes Tatters, who has graciously provided his expertise to the BuddyBoss community throughout the years. Nick Chomey has also done some great research into Hetzner bare metal servers and using xDebug. Much of this document is pulled from replies to people posting questions on the BuddyBoss Official Facebook User Group. This document contains both recommendations and explanations of what you need to take into consideration when selecting your hosting solution. While this document is being actively maintained, take any plugin or service recommendations with a grain of salt – research other available options and let us know if you find something better!  

(Highlighted text in the original document emphasizes recent updates; orange means the information is in question for one reason or another.)

BuddyBoss is far more than a simple plugin; it's an entire application framework running inside of WordPress, and it is database-intensive, CPU-intensive, and resource-intensive. On most active sites it will account for the lion's share of resource usage.

Note that it is much better, for a number of reasons, to develop your site locally on your computer, not least of which is the cost savings. Here is a tutorial on how to very easily set that up using LocalWP or DevKinsta. If you go that route, don't worry about hosting or reading through this document until you're good and ready to launch your site!

Basic Setup Requirements: 

1) AWS EC2 C6i, Vultr High Frequency, or Hetzner bare metal (i.e. a VPS with at least a 3rd-generation Xeon or EPYC processor, or a bare metal server with a Ryzen 7000-series processor; either way it needs a base clock of 3 GHz or better). For a discussion, see here.

2) OpenLiteSpeed (there are constraint issues with LSWorkers and memory on LiteSpeed Enterprise that make it a more expensive option). Nginx is an alternative option, but Apache should be avoided. For a discussion, see here.

3) At least PHP 8.0  

4) MySQL version 8.0+ or MariaDB version 10.3+

5) Install FFmpeg and the ImageMagick (Imagick) PHP extension (see here)

6) If not planning to offload media, configure Symbolic Links  (see here, and here)

7) Restrict Media Access in the (Open)LiteSpeed configuration by setting "Auto Load from .htaccess" to "Yes" (see here for Nginx instructions)

8) Run wp-cron via Linux crontab rather than WordPress default (see here)

9) Redis and Relay installed on server and Redis Object Cache Plugin activated (relay.so with Redis Object Cache plugin is a great value for performance). For a discussion, see here.

10) Install and configure WP OPCache (see discussion in Config section for recommended minimum requirements and ongoing monitoring). See here.

11) Offload media (note that BuddyBoss uses a custom media folder structure and the WP Offload Media plugin is currently the only plugin that works with it; Amazon S3, DigitalOcean Spaces, and Google Cloud Storage are the officially supported offload services, though see the Offloading Media section for S3-compatible alternatives), or pair the object storage with a compatible CDN (i.e. configure a CDN in front of the custom BuddyBoss media folder structure). See here.

12) Use a CDN, and, if hosting lots of video, a dedicated Video CDN (CloudFront works with the WP Offload Media plugin, and implementing a dedicated video streaming solution across BuddyBoss's entire social network may only be possible with CloudFront Video-On-Demand; otherwise, there are more affordable solutions that work well with WordPress and LMS plugins, see the discussion in the Infrastructure section)

13) Backups and Staging Site on an external server

14) Set up an email service provider for transactional emails (FluentSMTP connected to Amazon SES, see here, or SendInBlue, see here)

15) Set up security for your site (see here)

16) For sites with thousands of concurrent users, split the database onto a dedicated instance or, for ultimate performance, AWS Aurora/RDS (see the discussion in the Infrastructure section)

The WP Performance Tester plugin is used to benchmark the site's performance. A live BuddyBoss site should score at least 2200-2400 queries per second (on an instance with somewhere around 8 CPUs/32 GB of RAM).

Develop In A Local Dev Environment!

If you are planning to get BuddyBoss or are still developing your site, you can do it on your computer for free with DevKinsta or LocalWP. It is much more cost-effective, and you don't need to worry about hosting or security until you are ready for launch, which could take months or even years. If you do decide to develop on a live server, a 2-4 CPU plan on Vultr or Hetzner is much, much cheaper than Rapyd, and you can migrate either when you are ready to officially launch or when you outgrow your server.

Rapyd or Self-Managed? 

Hetzner bare metal servers using Ryzen 7000-series processors can be as much as 50% faster than Rapyd and even cheaper than Vultr. However, even though Hetzner bare metal is faster and cheaper: 1) Rapyd is fully managed and Hetzner is not; 2) Rapyd's CPU is scalable while bare metal servers' CPUs are not (memory and storage can be increased on bare metal); and 3) you need to implement additional redundancy measures for the storage when using bare metal (see the "Hardware Considerations" section).

Rapyd is a new hosting solution optimized for BuddyBoss. It uses AWS CPUs based on 4th-generation server-grade hardware clocked at around 3.7 GHz, with high-bandwidth DDR5 memory. Performance-1 is the likely entry point for BuddyBoss App users to ensure a seamless user experience, but for smaller sites only using the BuddyBoss web platform, Startup-3 is appropriate.

When people say Rapyd is “configured” and “optimized” for BuddyBoss, they largely mean that the server config settings as outlined in this document have been prepared for you and that it comes with Monarx server-side security, Redis Object Cache Pro, and Relay pre-installed. None of that is hard to implement on your own, and OCP and Relay premium (i.e. beyond the free versions) are not likely doing much for most BuddyBoss sites. You may also want to use a different stack than what Rapyd offers. For instance, BitNinja is another server-side security solution. With server-side security solutions like Monarx and BitNinja, if you have 10 sites on a server that you own, it is working on all 10. So Rapyd’s value is largely a matter of convenience and support.

Rapyd is using Redis Object Cache Pro and Relay premium, but the free versions of Redis Object Cache and Relay Community serve enough benefit for most sites (for a discussion see here). As another example, the free version of PatchStack alerts you to any known plugin vulnerabilities – PatchStack Developer *sometimes* offers virtual patches to temporarily secure vulnerable plugins until an update is available, but otherwise provides automatic updates and some developer-oriented addons.

When people don't benchmark their sites, they tend to either under-compensate or overcompensate, each of which has negative impacts of different sorts (though overcompensating can have the pleasant-feeling side effect of harnessing more power). Reading about benchmarking a website in documents like this and learning how to properly spec a server will help you determine how to pick the server that is right for you and, later, when to upgrade. This document also covers additional setup options not included with hosting plans, such as offloading media, dedicated Video CDNs, caching, and when a site may need more advanced architecture like load balancing and clustering. However, note that the document currently assumes 3rd-generation CPU cores, so some additional calculation is required when comparing Rapyd CPUs, as already mentioned.

…On the other hand, Rapyd becomes less important if you’re a decent sysadmin managing a number of (non-BB) client sites and turning enough profit to migrate to a larger server. Alternatively if you are not comfortable with configuring the server according to the instructions in this document you can hire somebody for a one-time fee and show them this document if they are not familiar with BuddyBoss’s hosting requirements already. For instance, Rahul Khosla and Nick Chomey (focused on Hetzner bare metal) are both experienced members of the BuddyBoss community who have helped many people to configure servers for BuddyBoss-based sites. Much of the work to configure a server is done at the beginning and then it is only a matter of paying for hosting and upgrading when necessary. 

BuddyBoss Resource Demands 

To run BuddyBoss, it is recommended that you use a VPS or bare metal server with CPU speeds over 3 GHz per core to handle the potential load. The resource needs will depend a lot on how many concurrent users are active on the site, as well as how many plugins are activated, what kind of plugins they are, and what tasks and loads they generate.

Concurrency

Concurrency is based on two factors: 1) how fast the CPU can handle a page request, and 2) how much memory needs to be uniquely allocated to that page request while it is running through the server. That period of time is the minimum concurrency window. So if it takes up to 5 seconds to handle a page request, your window is at least 3-5 seconds. During that time the PHP worker can only handle that one page request, and during that time the worker consumes and locks a physical block of RAM. It varies somewhat based on the plugin makeup, but on a typical BuddyBoss/LearnDash site that locked block of RAM is around 100 MB (a 50/50 allocation of resources, so BuddyBoss alone is only 50 MB). That means you need roughly 1 GB of RAM to simultaneously process 10 page requests in that 3-5 second window (or 20 page requests without LearnDash). So to achieve 100 concurrent requests within the concurrency window you need at least 10 GB of RAM uniquely dedicated to PHP worker processing, and that's before you consider the needs of the web server, database, OPcache, and Redis RAM. In a BuddyBoss environment OPcache on its own can consume up to 256 MB of RAM. In reality, what we find is that to hit 150 true concurrent users you need 8 high-frequency vCPUs (this assumes 3rd-generation cores) and between 8 and 12 GB of RAM.

RAM

Since BuddyBoss 1.9.0, Elementor 3.6.0, and LearnDash 4.x, the amount of memory needed per worker thread has increased significantly. This effectively translates into more RAM being needed per concurrent user: something like 1 GB of RAM per 20 concurrent users at the low end. The database alone can benefit from 4 GB of RAM or more. 8 CPU/32 GB RAM is becoming more common but, again, each site is unique.

CPU

To put CPU requirements into perspective: there are sites with 10,000 users and the BuddyBoss App running on 8 vCPUs all day, but with lower concurrency (100 concurrent users); others have 10,000 users with 300 concurrent users and are better suited to a 16 vCPU VPS, while some have very high concurrency periods (500-750) requiring 32 vCPU AWS instances. Someone recently demonstrated what 850 concurrent users looked like: the server config was 32 cores and at least 64 GB of RAM, and we have seen a 64 vCPU C6i instance on AWS with 128 GB of RAM handling thousands of concurrent users all day. More cores mean you can handle more users at once, but cores beyond 4 won't change the speed much if you only have a couple of users on.

A simple example here: the activity feed in BuddyBoss now uses a lot of statically cached elements to improve load speed. Statically cached elements are stored in RAM by the PHP worker for the duration of the page request, but how much of that RAM is needed differs significantly depending on the size of the site. A site with 20 posts and a couple of comments from the same 4 users needs much less static RAM allocation than an activity feed with 20 posts and 1,000 comments and likes from 500 different users, all of whose metadata gets statically cached. LearnDash is particularly heavy on resources: reliably handling 1,000 concurrent users needs at least 128 GB of RAM and more likely something closer to 32 cores.

Hardware Considerations

Complicating things further, not all cores are created equal. As Wes Tatters put it, "I've seen platforms with 30 cores that still run like a dog with 3 legs because of how the underlying hardware is set up." You need a dedicated VPS with at least a third-generation Intel Xeon or AMD EPYC scalable processor or, for a bare metal server, a Ryzen 7000-series processor. Regardless of which server type you choose, the processor needs a base clock of at least 3 GHz. VPSs allow for vertical scaling; vertical scalability is why the AMD EPYC and Intel Xeon product lines even exist. While bare metal servers are faster and significantly cheaper, they lack the scalability of a VPS, so you need to size one to meet or exceed your highest expected load and prepare to migrate to a larger bare metal server when it exceeds that expectation.

AWS can scale to 192 vCPUs, which is actually more like 96 physical CPUs. The Hetzner AX102 has 16 cores, but they're 50% faster, so it's more like 24 in comparison as far as concurrent users go. This is because of the faster, state-of-the-art Ryzen 7000-series CPU, DDR5 ECC RAM, local NVMe Gen4 storage (rather than storage that is accessed over the network, as with AWS), and no overhead from any VPS virtualization software. This is how certain 8-core setups are able to outrun many 16-core cloud setups, especially when those start talking about vCPUs, which are really just tenanted hyperthreads.

A Hetzner AX42, AX52, or AX102 managed through RunCloud with OpenLiteSpeed is a powerful, cost-effective alternative to Rapyd's hosting solution for both raw performance and value. A common objection to these servers is that their CPUs are meant for desktop usage, not server usage. However, AMD specifically markets them for server usage and they are commonly used in Korean PC bangs (gaming cafés) running full tilt 24/7. Because of their affordability, you can easily "over-provision" them so that they are not running anywhere near full capacity.

If the Hetzner AX servers aren’t enough, Hetzner also has some Dell brand server-grade plans that go up to 64 physical cores (of comparable performance to aws’ latest offerings).

Local NVMe storage on bare metal servers lacks the inherent redundancy and durability features provided by network-attached storage solutions like Amazon EBS. However, there are strategies you can implement to enhance the safety and reliability of data stored on local NVMe storage, and Hetzner plans come with 2 storage drives for implementing redundancy. You also need to set up a monitoring mechanism that will alert you if one of your two redundant storage drives has failed, so you can schedule a replacement.

Additional steps may include:

  1. RAID Configurations:
    • For servers installed using Hetzner's automatic installation system or ordered with a Windows or Linux addon, RAID level 1 (mirroring) is configured and running by default. RAID 5 is more efficient and scalable than RAID 1, but it needs to be set up manually and requires a minimum of 3 drives. Hetzner provides guides on how to set up different RAID levels for various RAID controllers. See https://docs.hetzner.com/robot/dedicated-server/raid/raid/
  2. Regular Backups:
    • Hetzner offers 100GB of backup space for free in the form of a “Storage Box BX10,” which you can retrieve in your account. You can also order additional Storage Boxes for a fee and use them as backup space to regularly back up critical data. This ensures that even if there’s a failure in the local NVMe storage, you can recover your data from backups. See https://docs.hetzner.com/robot/storage-box/
  3. Snapshots:
  4. Monitoring and Alerts:
    • Set up monitoring that alerts you when one of the drives in the RAID array has failed, so you can schedule a replacement (see the mdadm sketch after this list).
  5. Hot Spare Drives:
    • A hot spare drive is a backup drive in a RAID array that is not actively used for storing data but is ready to automatically replace a failed drive in the array. The main benefit of a hot spare drive is that it minimizes downtime in the event of a drive failure, as the RAID system can immediately start rebuilding the array using the hot spare drive. However, adding a hot spare drive to your RAID setup would require ordering an additional drive, which would incur extra charges. See https://community.hetzner.com/tutorials/howto-setup-mdadm#step-23—add-a-hot-spare-optional
  6. High Availability Setup:
    • Design a high availability (HA) setup with redundant servers. If one server experiences a hardware failure, another server in the cluster can take over, minimizing the impact on data availability.
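For item 4 (Monitoring and Alerts), here is a minimal sketch using mdadm, the standard Linux software RAID tool used by Hetzner's default installs. The array name /dev/md0 and the email address are placeholders, and a working mail transfer agent on the server is assumed:

# Check array health at any time
cat /proc/mdstat
mdadm --detail /dev/md0

# Get email alerts on drive failure: set a recipient in /etc/mdadm/mdadm.conf ...
MAILADDR you@example.com

# ...then send a one-off test alert to confirm mail delivery works
mdadm --monitor --scan --oneshot --test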

*High-frequency instance types are more appropriate for BuddyBoss-type platforms in most cases, but running a split RDS database can mitigate this to a certain extent. One of our performance testbed platforms is running on a 4 vCPU Lightsail instance and a split RDS instance with a CloudFront CDN, all configured inside a Lightsail container.

Example BuddyBoss Lifecycle/Autoscale ranges*:

  • 2 CPU and 4-5 GB RAM for sites in development (or 0-10 concurrent users)
  • 4 CPU and 5-10 GB RAM at launch (or 10-100 concurrent users)
  • 8 CPU and 10+ GB RAM for 100-150 concurrent users
  • 16 CPU and 20+ GB RAM for 300-350 concurrent users
  • See below for discussion on higher tiers

*The higher RAM ranges apply when Elementor and/or an LMS plugin like LearnDash or TutorLMS is activated on the site, while the starting range is the bare minimum. The more plugins there are, especially resource-heavy plugins, the more RAM will be needed, while more concurrent users will require more CPUs.

Infrastructure 

LiteSpeed is widely regarded as the best platform for handling high concurrent user loads. Its event-driven architecture ensures that every request is processed, even under heavy traffic, preventing server bottlenecks. Here’s why LiteSpeed stands out:

  1. Efficient Resource Management:
    LiteSpeed’s architecture allows more concurrent users per CPU core and per gigabyte of RAM compared to other web servers. Its use of the native LSPHP API for managing PHP workers further optimizes resource utilization and improves Time to First Byte (TTFB), making it particularly well-suited for dynamic content platforms like WordPress and WooCommerce.
  2. Best-in-Class Caching:
    LiteSpeed includes LSCache, a server-based caching solution that is integrated directly into the server. This eliminates the need for external caching plugins or configurations and ensures faster load times with minimal effort. LSCache supports advanced features like Edge Side Includes (ESI), object caching, and database query caching.
  3. Compatibility and Flexibility:
    LiteSpeed natively supports .htaccess files and offers an Apache-compatible configuration format. This makes it an ideal choice for sites migrating from Apache or for those requiring .htaccess rules.
  4. Built-In Scalability:
    LiteSpeed supports multiple server workers on the same platform, significantly enhancing scalability and enabling it to handle large-scale traffic with ease.
  5. Native Integration with QUIC.cloud:
    LiteSpeed integrates seamlessly with QUIC.cloud CDN, offering features such as image optimization, edge caching, and HTTP/3 support, further boosting site performance.

LiteSpeed shines in environments with highly concurrent user loads, such as busy e-commerce sites. Its efficient event-based routing and resource management provide a small but noticeable edge over competitors, ensuring that even under heavy load, users experience consistent performance.

OpenLiteSpeed is preferable to LiteSpeed:

However, we have tested both LiteSpeed and OpenLiteSpeed with BuddyBoss and, while both work, OpenLiteSpeed is ultimately a better fit for BuddyBoss in terms of cost. Firstly, well, it's free – but seriously, the LiteSpeed pricing model is designed for a specific sort of hosting mindset: think enterprise multi-site hosting farms. They don't need lots of concurrency, and the base license of LiteSpeed Enterprise is limited to a single LSWorker thread and memory-capped at 8 GB. OpenLiteSpeed is much more flexible in terms of configuration and can be used for free. In most real-world tests it is possible to achieve almost identical performance on OLS and LS. The key is in the configuration. OLS can have multiple httpd processes out of the box, which can be leveraged to handle complex CPU tasks like SSL decode and encode in a separate process.

Under the hood, LiteSpeed also works very differently from Apache. An LSWorker thread is very different from a PHP worker, and they do very different things. The LSWorker is a routing task; its job is to handle the request cycle. The cap on the base license of LiteSpeed Enterprise is usually sufficient for generic hosting, but for a system running something like BuddyBoss, and in particular the BuddyBoss App, there are strong arguments for running additional LSWorkers. The workers are affinity-based, so they are locked to a single CPU thread, which can result in a CPU performance imbalance and RAM under-utilization. Running 2 LSWorkers on an 8-core system makes more sense and balances the load, and OpenLiteSpeed allows you to configure workers and PHP threads in ways that would cost a lot more money on an Enterprise license.

LSphp workers are not the same as LSworkers

Each concurrent user needs a block of RAM that is allocated to only them for the time it takes to load a page. With a normal WP blog site that block of RAM is about 15 MB, but with a typical BuddyBoss, LearnDash, GamiPress, WooCommerce site it quickly gets up to 65 or even as much as 100 MB per page request (and Redis and the database need RAM on top of the PHP workers). Even at 50 MB, that means 20 concurrent users in, say, a 5-10 second window need 1 GB of RAM (on the web server). So at a bare minimum 100 users need at least 5 GB of LSPHP worker RAM, assuming all the requests arrive in the same 5-10 second window. Those are not capped by LiteSpeed; they are capped by RAM and CPU resource limits. The amount of RAM each LSPHP worker needs is entirely dependent on the site: how many plugins, what plugins, and what tasks and loads they generate. What we eventually find is that a 16 GB server will handle around 100-150 concurrent users, provided the CPU core speeds are fast enough, but 8 CPUs and 16-32 GB of RAM for 100-150 concurrent users is realistic. Again, each site is unique.

Load Balancing and Clustering 

Honestly, horizontal scaling and MariaDB clusters are expensive overkill for most sites. For most people, vertical scaling is what's most relevant. Load balancers start to come in when you scale beyond 8 or 16 cores and the database is still being overloaded. There is also a price crossover point where two web instances and a dedicated SQL instance become cheaper than a 64-core server with 128 GB of RAM (handling several thousand concurrent users). At that point, dynamic or elastic load balancing allows you to manage your cost across the day by spinning up instances as site load increases.*

Otherwise, load balancing lets you scale the web server and SQL server instances independently, and, depending on your hosting platform, there are some long-term benefits to be had from splitting your web server and MySQL server onto separate instances. For instance, splitting the database onto its own instance frees up CPU cycles for PHP worker threads and allows you to dedicate as much RAM as needed to the database. More RAM on the web server won't make it run any faster, while more cores can – especially, again, if the SQL database isn't stealing all the resources. However, until you hit 32 cores we find that running split databases has very diminishing returns on most servers due to the way BuddyBoss makes SQL requests: every one of those 300 to 500 SQL requests has to be sent back and forth over a comparatively slow network interface.
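As a concrete illustration, the only WordPress-side change for a split database is pointing DB_HOST in wp-config.php at the dedicated database instance rather than localhost. A minimal sketch; the address below is purely hypothetical and should be a private-network/VPC address so MySQL is never exposed publicly:

/** wp-config.php: connect to a dedicated database server (example address only) */
define( 'DB_HOST', '10.0.0.12:3306' );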

Multi-regional High-Availability

A multi-regional setup is infrastructure-heavy, requiring a number of instances and moving parts to make it work. It may not be worth the effort until a site hits 50,000 users.

Offloading Media and Connecting to a CDN

Since version 1.7, BuddyBoss has intentionally used a symbolic link system that is specifically designed to remove all media posted to the activity feed from the WordPress media library. This allows BuddyBoss to do its own permission management using symlinks in order to prevent unwanted or unauthorized access to documents and media. If you offload all the files to S3, that symlink system is no longer possible, since the media can then be accessed via a CDN-based URL, which the symlinks cannot block. The document manager and PDF preview thumbnails have similar issues with S3.

The new media folders are now located at: 

/uploads/bb-platform-previews 

/uploads/bbapp 

/uploads/bb_documents 

/uploads/bb_medias 

Possibly also: 

/uploads/avatars 

/uploads/group-avatars  

/uploads/buddypress/groups 

/uploads/buddypress/members 

The WP Offload Media plugin is the only solution that currently offers a BuddyBoss integration that offloads from their custom media folder structure (see here). WP Offload Media officially supports Amazon S3, DigitalOcean Spaces, and Google Cloud Storage, but it actually works with any S3-compatible object storage provider, so you can roll your own using MinIO or even iDrive e2 storage, which is the cheapest out there.

NOTE: The BuddyBoss symlinks and media permissions will not work with offloaded media. Those features are automatically disabled by BuddyBoss when WP Offload Media is activated in order to avoid some conflicts as well (if you are on a multisite network, you need to manually disable symbolic links and media permissions). However, you can achieve similar functionalities by taking advantage of WP Offload Media’s custom domain name feature and applying S3 bucket policies. The delivery settings and custom domain name feature of WP Offload Media can be used to achieve something like media.mysite.com/[unique-timestamp]/filename. WP Engine recommends setting up S3 with CloudFront as the CDN, but it should also be possible to accomplish something similar by configuring an Access Policy in CloudFlare’s Zero Trust service (check back later for this).

When you offload media files to AWS S3, there is typically no need to implement rules to block PHP execution or enable BuddyBoss code execution protection because S3 does not support the execution of scripts, and access control is managed in AWS S3. The .htaccess rules related to that aren’t applicable when offloading media to S3.

CDN

…The developers of the WP Offload Media plugin worked with BuddyBoss to provide a full CDN-based permission-locking system: basically, each file gets a private key assigned to it, which prevents the file from being downloaded as a URL or the URL from being copied. It's only offered as a feature of the WP Offload Media plugin when used with the CloudFront CDN. CloudFront is also the most native fit for S3 because the shared AWS infrastructure allows for blazingly fast speeds. However, Cloudflare and QUIC.cloud are other viable options.

If you offload your BuddyBoss media onto S3 using the WP Offload Media plugin, it's possible to connect BunnyCDN directly to your S3 bucket to reduce the retrieval costs in S3.* BunnyCDN is another very capable**, but far more affordable, premium CDN that may be configured to work with WP Offload Media. CloudFront and Cloudflare both offer a free tier, while BunnyCDN starts at a minimum of $1/month. CloudFront's free tier is a great option for websites starting out (it's the equivalent of $5-10/month for 1 TB of bandwidth on BunnyCDN). However, CloudFront is far more expensive than BunnyCDN once a site's CDN bandwidth grows beyond 1 TB, especially given Bunny's volume network pricing starting at as little as $0.005/GB globally, and even their standard network is far cheaper than CloudFront. New site owners might also want to take advantage of the customer support that is included with BunnyCDN but not with CloudFront. Given CloudFront's shared AWS architecture with S3, it is most likely faster than BunnyCDN, albeit more than 8x more expensive once your site's bandwidth expands beyond CloudFront's free tier.

* See this article about setting up your Amazon S3 file delivery using Bunny CDN. BunnyCDN also includes a wordpress plugin that allows you to configure it directly from your wordpress dashboard.

** See this likely biased but simple feature comparison of BunnyCDN and CloudFront, or this less biased comparison that ultimately recommends BunnyCDN over Cloudflare’s Anycast network model that uses the ISP to serve cached data from its nearest CDN edge. StackPath came up quite a bit as well but BunnyCDN’s pricing is cheaper.

Dedicated Video CDN 

Many sites running an LMS plugin on BuddyBoss, such as LearnDash or TutorLMS, offload their instructional videos. Amazon AWS S3 is the most popular offloading solution,* but it is a basic file storage system that is not optimized for video and does not offer any additional video features like a streaming-optimized CDN, video encoding, extra privacy protections, customizable players, etc. Dedicated Video CDNs actually come out to be very price-competitive once you consider the cost of putting together a similar configuration out of separate component parts. However, the BuddyBoss custom file structure and the frontend course builders of plugins like TutorLMS are potential concerns that will need to be checked/resolved. Because of this, implementing a dedicated video streaming solution across BuddyBoss's entire social network may only be possible with services that work with CloudFront and S3. There are solutions that will work in a more limited sense, i.e. that use the default WordPress media library.

To deliver video on demand (VOD) streaming with CloudFront, you must use S3 to store the content in its original format and the transcoded video. You must also use an encoder, such as AWS Elemental MediaConvert, to package video content before CloudFront can distribute the content. (for delivering live content, see this tutorial). You can explore how to use an AWS CloudFormation template to deploy a VOD AWS solution together with all the associated components. To see the steps for using the template, see Automated Deployment in the Video on Demand on AWS guide.

Wowza and Unified Streaming also provide tools that you can use for streaming video with CloudFront. For more information about using Wowza with CloudFront, see Bring your Wowza Streaming Engine license to CloudFront live HTTP streaming on the Wowza documentation website. For information about using Unified Streaming with CloudFront for VOD streaming, see Unified Streaming’s documentation on Amazon CloudFront.

In Swarmify’s case, their offloading process is kicked off as soon as the video is (pre)viewed. This will work for pages/posts and plugins that utilize the default WordPress media library. However, it will not work with media uploaded through BuddyBoss given its custom media file structure. A potential workaround would be to offload video source files to S3 and connect Swarmify directly to the bucket. 

Bunny Stream is an all-in-one storage, encoding, and Video CDN service* but it does not provide an automatic offloading solution for videos uploaded to the wordpress media library, BuddyBoss, or other plugins. Only users with direct access to the Bunny Stream account will be able to upload videos to Bunny Stream. Given its tightly integrated system that includes storage, using S3 with Bunny Stream as a workaround is not feasible.

The Presto Player plugin is a video player that integrates with Bunny Stream, LearnDash, and TutorLMS. Presto Player is currently working with BuddyBoss on an integration for the web platform, but there is currently no timeline for this. Presto Player can be used on BuddyBoss, but there are significant limitations that affect if and how it should be implemented on a BuddyBoss site (as of 9/10/2022). For users on the social network to use Presto Player, you have to create a Presto Player media hub. Once that is done, they can use the shortcode or video URL to post on activity feeds. Presto Player also currently works in the app, but it appears as a stripped-down version of the player that simply plays the video files and does not include any extra features.

(Need to confirm that CloudFront's VOD, Wowza, and Unified Streaming services are compatible with BuddyBoss's media restrictions system before comparing price and features.) Bunny Stream is a great option for smaller sites that have <24,000 streams (assuming 10-minute videos) per month. Not because it's cheaper than Swarmify per se,** but because Swarmify currently does not offer smaller plans, and their existing plans do not make sense below that number of streams per month when compared to Bunny Stream. However, for videos much longer than 10 minutes, the math shifts dramatically to Swarmify's advantage, as they charge per instant play, so a 45-minute video counts the same as a 10-minute video. Given the constraints of all these systems, S3 is currently the most widely compatible solution, but it is not a dedicated Video CDN service.

* Other video CDN services like Swarmify may not store your actual source video, at best they generate encoded versions that are cached in their CDN and available to download, but they may not be at the same compression levels as the original source file. For this reason, storing your videos on S3 is still necessary for those types of services because it is not guaranteed that the file will remain cached in the CDN caches, especially not for long periods of time if it’s not being accessed.

** Breaking down the cost of the available video hosting platform services is difficult given the different pricing models but let’s try to compare Swarmify with Bunny[.]net. The pricing model of Bunny Stream is pay-as-you-go starting from .01/gb storage and .005/gb CDN (streaming). To compare that with Swarmify, I will assume a 10 minute video is ~250mb, which also means 1gb of streaming is 4 views. Given that Swarmify’s pricing model is based on instant-start views I will also assume a 20% re-play per user, so 50k views is more like 40k views. At 1gb for every 4 views, that makes an equivalent of 10,000gb worth of streaming for Swarmify’s $30 plan. To stream an equivalent amount on Bunny[.]net would then cost $50 (10,000gb * .005). However, Bunny[.]net also charges for storage, while Swarmify does not. Plus, you also have to account for transcoding, which multiplies the storage per video to at least 3x the original upload. So storing 100, 10 minute videos on Bunny[.]net is 250mb-per-video * 100 videos * 3 video encodings / 1000 (to convert to gb) * .01 = $.75/month. Correct me if I’m wrong, but it looks like Swarmify is still a better option, it’s just that the $30 flat fee per month for the starter plan seems like a lot more than .01 and .005/gb. 

Restricting Who Can Upload Media In BuddyBoss

You can also control who can upload media in BuddyBoss > Settings > Media > Media Access. This will help you control your storage costs and also allow you to offer uploading capability as a premium feature based on membership. Note that this is different from restricting media access to resolve privacy concerns, which is covered in the next section.

Config

PHP version

The maximum PHP version supported by the BuddyBoss Platform is 8.0.

Upgrade MariaDB and Redis

The default version of MySQL/MariaDB that comes with popular OSs like Ubuntu 20.04 tends to be very out of date (e.g. MariaDB 10.5 vs 10.6 LTS). Likewise for Redis: v5 vs the current v7.
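A quick way to see what you are currently running (assumes the services are installed locally and on the PATH; the exact upgrade steps and package names vary by distro):

# Print the installed database, Redis, and PHP versions
mysql --version
redis-server --version
php -v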

Configuring a Server Side CronJob

You can configure cronjobs in RunCloud by going to your server > Cronjob > add new job. Choose your php version in the vendor binary dropdown and type the following in the command text field, replacing “https://your-site.com” and “YourWebApp” with your information. 

-q phar://wp-cli.phar/vendor/wp-cli/wp-cli --url=https://your-site.com --path=/webapps/YourWebApp/ wp cron event run --due-now

To log the output of the cron job to a file in order to monitor its execution, detect errors, and analyze its performance, just add this to the end of the cron job line above (Jordan Trask’s tip):

>> /var/log/wp-cli-cron.log 2>&1

The log file will likely be located at /var/log/wp-cli-cron.log. Select “every minute” or your preferred schedule in the “Run In” dropdown. Click save. 

To learn more, see this article.
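For reference, outside of RunCloud the same idea is a plain Linux crontab entry, plus disabling WordPress's built-in trigger by adding define( 'DISABLE_WP_CRON', true ); to wp-config.php. A minimal sketch, assuming WP-CLI is installed as wp and the site lives in /var/www/your-site.com (adjust both paths, and add it to the site user's crontab rather than root's):

# Run any due WordPress cron events every minute and log the output
* * * * * cd /var/www/your-site.com && wp cron event run --due-now --quiet >> /var/log/wp-cli-cron.log 2>&1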

FFmpeg is used by BuddyBoss to create video thumbnails. Follow this tutorial to install FFmpeg on Ubuntu 20.xx.* 

Symbolic links are used to create “shortcuts” to media files uploaded by members, providing optimal security and performance. Note that the Symbolic Links feature does not apply to offloaded media. See Offloading Media section for alternative options. ** 

The BuddyBoss documentation is outdated: the Imagick*** PHP extension is used now instead of the GD Library. While the GD Library has been included by default in PHP since 4.3, ImageMagick (Imagick) is an alternative image processing package that offers much more functionality, has much cleaner code, and supports far more image formats than GD (which only supports JPG, PNG, GIF, WBMP, WebP, XBM, and XPM, versus over 200 in Imagick). BuddyBoss uses Imagick for generating PDF previews, for instance.****

* If BuddyBoss has issues detecting FFmpeg, try updating the open_basedir path in Web App > Settings to /home/runcloud/webapps/your-web-app:/var/lib/php/session:/tmp:/usr/bin/ffmpeg, where your-web-app is the name of your web app in RunCloud. And/or add these lines to wp-config.php right before "/* That's all, stop editing! Happy Publishing */":

/** Binary file paths for FFmpeg */
define( "BB_FFMPEG_BINARY_PATH", "/usr/bin/ffmpeg" );
define( "BB_FFPROBE_BINARY_PATH", "/usr/bin/ffprobe" );

** If you are using OpenLiteSpeed on RunCloud, you can enable symbolic links by going to your Web Application -> Settings -> PHP Settings -> disable_functions and removing "symlink" from the list. Also confirm that, in Server -> LiteSpeed -> LiteSpeed Server Config, followSymbolLink is set to 1 in the fileAccessControl block.
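To confirm the change took effect under the same PHP configuration your site actually uses, here is a minimal check (hypothetical file name; drop it into the web root, load it in a browser, then delete it):

<?php
// symlink-check.php: reports whether the symlink() function is usable by PHP
$disabled = array_map( 'trim', explode( ',', (string) ini_get( 'disable_functions' ) ) );
echo function_exists( 'symlink' ) && ! in_array( 'symlink', $disabled, true )
	? 'symlink() is available'
	: 'symlink() is disabled';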

*** You can install imagick on RunCloud following this tutorial

**** ImageMagick might need to be tweaked once installed, see here.

Restrict Media Access 

To restrict access to media (e.g. photos, videos, and documents), protect your privacy, and maintain the security of your private files, set "autoLoadHtaccess" to 1 in the OpenLiteSpeed configuration; see here for more details (including how to set it up on Nginx). Media permissions will not apply to media that is offloaded. See the Offloading Media section for alternative options.
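For reference, the setting lives at the server level of OpenLiteSpeed. A minimal sketch of the relevant line in the text configuration (the WebAdmin console typically exposes the same option under the server's general settings; the file location varies by install, commonly /usr/local/lsws/conf/httpd_config.conf):

# Re-read .htaccess files automatically so BuddyBoss's media-protection rules are applied
autoLoadHtaccess          1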


Caching 

(This section is currently under construction but on BuddyBoss there are very few opportunities to cache content. As a matter of fact, this area will cover how to resolve potential conflicts that may be caused by common caching solutions)

Issues with the message inbox caching content in the app likely mean the REST API is being cached. Make sure wp-json is not cached.

LiteSpeed servers perform caching inside the server regardless of whether you configure the LSCache plugin; the LSCache plugin is only a control panel. Be sure to activate LSCache, add excludes for ^/wp-admin and ^/wp-json, and disable private access …
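A minimal sketch of those exclusions as entered in the LiteSpeed Cache plugin (assumed location in the current plugin UI: LiteSpeed Cache > Cache > Excludes > "Do Not Cache URIs"; one rule per line):

^/wp-admin
^/wp-json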

Redis installed on server and Redis Object Cache Plugin 

The BuddyBoss platform makes extensive use of object caching, powered by Redis, to significantly improve activity feed performance and how well the server handles concurrency.

What is Redis? It is a data store that we use to cache (store/save) values that typically take a long time to produce (via PHP code) or retrieve (from the database). It stores these values in RAM (memory), which can be accessed considerably faster than the storage drives. The database actually has a RAM cache as well, and NVMe storage drives on quality servers are quite fast too. But, still, adding Redis makes a BIG difference because we can avoid redundant PHP processing and slower MySQL database requests.

What is Redis Object Cache Pro? It is the Pro version of the Redis Object Cache plugin (https://wordpress.org/plugins/redis-cache/), both of which we use to connect WordPress/PHP code with the Redis database. The Pro version does some fancy things to both reduce the number of requests made to the Redis database and speed them up (e.g. by compressing the results).

What is Relay? Relay.so is a replacement for the PHP extension that connects the Redis server to the Redis Object Cache WP plugin. It is faster than the default phpredis client, which serves the same role, because of faster code. There is a free version which anyone can use, but the main benefit of the Pro version is that it adds *another* layer of RAM caching that is more readily accessible by your PHP code, often preventing the PHP code from needing to communicate with the Redis database at all.

Both the free and Pro versions of the Redis Object Cache plugin can work with Relay, since either can use it, or the built-in Redis module that comes with PHP, to handle the connection to the Redis server. In practice, even a moderately active BuddyBoss site during non-peak periods exhibits resource consumption, on average, of 21 MB of actual RAM and 38 MB of Redis RAM, with an impressive 95% hit ratio with compression fully enabled. The free Relay Community plan starts with 64 MB of PHP usage initially, diminishing to 16 MB after the first hour. Optimized compression facilitates effective utilization, ranging between 32-64 MB, depending on the data type within the object store.

After installing Redis on your server you will need the free Redis Object Cache plugin. 

Be careful, however, to test your site carefully on a staging site before pushing this to your live sites, especially if you have WooCommerce cart plugins, MemberPress, LearnDash reporting, or LearnDash groups installed. Not all plugins play well with object caching by default and may require consultation with plugin developers to achieve optimum results. Test, test, test… Once it's all working, the results are well worth the effort. This is the current exclusion list recommended for Redis:

// Excluding groups
'non_persistent_groups' => [
	'comment',
	'counts',
	'plugins',
	'themes',
	'wc_session_id',
	'learndash_reports',
	'learndash_admin_profile',
	'bp_messages',
	'bp_messages_threads',
	'bp_document',
	'bp_album',
],
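The array above is in the Object Cache Pro / Relay configuration format. If you are on the free Redis Object Cache plugin, a roughly equivalent sketch is to declare the same groups as non-persistent via the WP_REDIS_IGNORED_GROUPS constant in wp-config.php (verify the constant against the version of the plugin you have installed):

/** wp-config.php: do not persist these cache groups in Redis */
define( 'WP_REDIS_IGNORED_GROUPS', array(
	'comment',
	'counts',
	'plugins',
	'themes',
	'wc_session_id',
	'learndash_reports',
	'learndash_admin_profile',
	'bp_messages',
	'bp_messages_threads',
	'bp_document',
	'bp_album',
) );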

Do I need Redis Object Cache Pro and/or Relay Pro? (by Nick Chomey)

SHORT ANSWER: Almost certainly not.

You’ll save an extremely negligible amount of time per page request – 10 milliseconds (0.01 seconds) or maybe 100 milliseconds in a more extreme case.

If you are using the free Redis Object Cache plugin (which you should be), go to the WP Admin -> Settings -> Redis -> Metrics tab, where you will see a chart that shows the amount of time spent fetching data from your Redis database, per page request, across time.

It is likely exceptionally low, so there’s nothing meaningful to be gained by moving to Redis Object Cache Pro (estimated with the dashed grey line). If it isn’t low, the real solution is better server hardware and a more streamlined website. For most sites, we’ve already gotten maybe 95% of the gains simply by using Redis at all. Moreover, in most cases, there’s considerably more impactful things that you can do to improve your site’s performance.

Longer answer:

Clearly they are beneficial, but how much do they improve performance beyond just using the free Redis plugin (and possibly the free Relay client)? 

On each page request, hundreds (maybe even 1000+) requests are made to the redis server. For each request, the time taken (measured in fractions of a millisecond, because redis can typically handle well over 100000 requests per second on good hardware) is added to a cumulative total. At the end of the request, that time, along with other stats on hit ratio etc, are stored in redis. The graph then fetches all of the metric data for each request, groups them all on a per-minute basis, and then calculates the median for each metric. I’d prefer to see average, to reflect outliers. Or, better yet, some sort of intervals of 25, 50, 75, 95 percentiles. But it is what it is.

In the example chart above, a real-world site spends about 20 milliseconds (0.02 seconds) reading from Redis per page request. Meanwhile, each page request might take 3-5 seconds in total to load, depending on your server and configuration. The free Redis plugin itself estimates how much time you would save by upgrading to Redis Object Cache Pro; it shows this with the dotted grey line and estimates maybe 50%, or 0.01 seconds.

Relay Pro benchmarks say that it can speed things up another 10x. Wow! But, 10x faster than nearly 0 is still nearly 0. So, we’re saving another 0.01 seconds perhaps. Irrelevant.

Bottom line – the VAST majority of BB sites will not meaningfully benefit from Redis Object Cache Pro. Relay free version can help a bit, even without a meaningful degree of in-ram caching. But there’s 100 low hanging performance wins on most sites that are much more meaningful than that. Your Redis stats and overall performance are likely to improve considerably from better hardware, configuring php (and mysql/mariadb), and building a leaner, more streamlined WordPress site (fewer plugins, leaner plugins, selectively loaded plugins). Please first see this post where I described how I improved a site’s speed from 60 seconds to under 4 seconds for the real performance needle-movers.

https://www.facebook.com/groups/buddybossdevelopment/posts/889568009372949

And, here, install this tiny plugin that fixes a bug in BuddyBoss, and shave 0.2 seconds off of each page request – for free!

https://www.facebook.com/groups/buddybossdevelopment/permalink/919188219744261

DragonflyDB is a multithreaded, Redis-compatible data store that can handle considerably more requests per second. You could also install the free version of Relay if you want, or configure Redis to communicate over a faster Unix socket. (The chart here is from a server with those things; they're free, but tricky to implement.)

Optimizing OPCache for BuddyBoss Performance  

Configuring OPCache for optimal performance is crucial, especially for websites with demanding plugins like BuddyBoss and LearnDash. By default, most hosts allocate around 64MB or 128MB for precompiling and storing PHP files, which is sufficient for basic WordPress sites. However, as plugins like BuddyBoss or LearnDash are added, the cache requirements increase. Insufficient cache memory might lead to slower site performance due to frequent hard drive access instead of using cached PHP files stored in RAM. Here are essential steps to fine-tune OPCache.

  1. Install the WP OPCache plugin
  2. Check the Statistics page provided by the plugin.
  3. If the memory usage shows 100%, it’s an indicator that your server isn’t optimally configured.
  4. In the server php.ini file, increase opcache.memory_consumption to an adequate level, at least 200MB. This tells the server it can use more memory to cache all the PHP files on your server, which means it no longer has to "precompile" your site's PHP files on every page request. That lets your CPU get on with the task of showing a page to users faster.
  5. Similarly, increase opcache.interned_strings_buffer to 32MB. By default it's set around 8MB, but most heavy LearnDash and BuddyBoss sites will happily use double or more if it's available. This is basically a trick that lets PHP store a value "once" in cache so it can be used by many worker threads at the same time; the net result is that as your server comes under load, each worker thread needs less of the available memory to keep running.
  6. Check if Free Memory shows 0% in Memory Usage or Interned Strings Usage.
  7. If it is, increase memory settings slightly to free up memory and check again.

These are not fixes for every performance issue, but on a site "under load" with concurrent users (and LearnDash, for example) they can potentially make the difference between your site running and it grinding to a halt as you add more concurrent users. You should not need more than opcache.memory_consumption=200, and opcache.interned_strings_buffer=32 is a good balance on heavier servers, but it does vary.
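Putting those numbers together, here is a minimal php.ini sketch; the values come from the discussion above and are starting points to monitor rather than fixed rules. opcache.max_accelerated_files is an extra directive worth checking, since a BuddyBoss/LearnDash install can exceed PHP's default of 10000 cached files:

; OPcache starting points for a BuddyBoss/LearnDash site; monitor with the WP OPcache plugin
opcache.memory_consumption=200
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000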

* Additional Notes for RunCloud Users:

If you are using RunCloud, you can adjust the opcache settings by uncommenting and modifying lines in the php.ini file. For example, for PHP 8.1, locate the file at /usr/local/lsws/lsphp81/etc/php/8.1/litespeed/php.ini and modify the following lines:

; The OPcache shared memory storage size.

opcache.memory_consumption=256

; The amount of memory for interned strings in Mbytes.

opcache.interned_strings_buffer=16

Restart the service via the ssh terminal using the command: service lsws-rc restart. On RunCloud, it’s also necessary to Rebuild Web App Config after making modifications to the settings for the changes to take effect, even if the service is restarted in the terminal.

Following these steps, along with the RunCloud-specific adjustments, can optimize OPCache performance and potentially mitigate performance issues caused by increased user concurrency on your site. Cache needs are directly related to how many plugins you have activated, so check your OPCache every time you install a new plugin, especially a heavy one.

Optimizing MySQL Performance: Configuring InnoDB Buffer Pool and Monitoring Best Practices

Database access speed is closely linked to disk access speeds. Considering your server resources and the workload on your database, it’s advisable to fine-tune MySQL settings to enhance database performance. This involves optimizing the use of memory to minimize disk I/O operations and improve query response times. Assess the amount of available RAM utilized during daily operations on your server. This assessment will help determine if there’s excess RAM that could be allocated to MySQL’s InnoDB buffer pool, controlled by the innodb_buffer_pool_size parameter in the /etc/mysql/my.cnf configuration file.

The primary objective of setting innodb_buffer_pool_size is to ensure that frequently accessed data resides in memory, reducing the reliance on disk I/O operations. It significantly enhances database performance. It’s also wise to reserve a buffer of RAM beyond the current demand for daily operations. This precaution ensures system stability and flexibility to accommodate additional memory needs for other server processes or temporary resource spikes. For instance, if daily demand utilizes 40% of total RAM, consider allocating 50% or less to the MySQL InnoDB buffer pool.

Remember, monitoring the server’s performance post-adjustments is crucial. This practice allows for real-world assessment and ensures optimal performance under varying conditions. Monitoring serves as a best practice when optimizing MySQL performance on a server.
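A minimal sketch of the setting; the file location varies by distro, and the 8G figure is an assumption for a 32 GB server where the database can safely claim roughly a quarter of the RAM:

# /etc/mysql/my.cnf (or a drop-in file under the distro's conf.d directory)
[mysqld]
innodb_buffer_pool_size = 8G

After restarting MySQL/MariaDB, you can gauge whether the pool is large enough by running SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'; a high ratio of Innodb_buffer_pool_reads (reads that had to hit disk) to Innodb_buffer_pool_read_requests suggests the buffer pool is still too small.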

How to limit the number of draft posts saved

A database cleanup plugin (e.g. a pro-tier database cleaner) can actively clean up drafts, and LSCache also includes a feature to clean up drafts. The options table can also get very, very large. WordPress includes a feature called autoload, which lets plugins mark items in the options table as important enough to load with every page request. You can get significant performance improvements by cleaning up your wp_options table. Plugins like Plugin Organizer or Perfmatters also let you gain control by disabling plugins on pages where you don't need them.
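To see whether autoloaded options are a problem on your site, here is a quick check with WP-CLI (a sketch only: it assumes the default wp_ table prefix, and newer WordPress versions may store autoload values other than 'yes', so adjust the filter for your version):

# List the 20 largest autoloaded rows in the options table
wp db query "SELECT option_name, LENGTH(option_value) AS bytes FROM wp_options WHERE autoload IN ('yes','on') ORDER BY bytes DESC LIMIT 20"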

Monitoring 

There are several tools available for monitoring the resources of a website, including:

WordPress Hosting Benchmark tool 

WP Performance Tester plugin

Used to benchmark the site's performance as a synthetic reference against WordPress baseline performance across multiple hosting environments, as part of the industry-wide and highly respected https://wphostingbenchmarks.com/ suite of yearly global WP hosting performance evaluations. A live BuddyBoss site should score at least 2200-2400 queries per second (on an instance with somewhere around 8 CPUs/32 GB RAM).

Query Monitor plugin
A free wordpress plugin that allows you to debug database queries, PHP errors, hooks and actions, block editor blocks, enqueued scripts and stylesheets, HTTP API calls, and more. It includes the ability to narrow down much of its output by plugin or theme, allowing you to quickly determine poorly performing plugins, themes, or functions. https://wordpress.org/plugins/query-monitor/

Code Profiler

A WordPress plugin that profiles page load times and breaks down the load time of all plugins called, in order to identify heavy plugins or unnecessary calls on a particular page. https://wordpress.org/plugins/code-profiler/

Plugin Organizer or PerfMatters

These plugins work hand in hand with the Code Profiler plugin by allowing you to fine-tune when specific plugins load. https://wordpress.org/plugins/plugin-organizer/

Amazon CloudWatch

This is a built-in monitoring service provided by AWS that allows you to monitor and collect metrics for your resources, such as CPU and memory usage, network traffic, and disk I/O. It also allows you to set alarms and notifications based on specific thresholds, so you can be alerted when a resource is approaching capacity.

New Relic

This is a popular monitoring and performance management tool that provides detailed visibility into the performance of your application and infrastructure. It allows you to monitor metrics such as CPU and memory usage, network traffic, and disk I/O.

Datadog

This is a monitoring and analytics platform that allows you to collect and analyze metrics and traces from your infrastructure, applications, and logs. It allows you to monitor metrics such as CPU and memory usage, network traffic, and disk I/O.

Prometheus

This is an open-source monitoring system that allows you to collect and analyze metrics from your infrastructure and applications. It provides a flexible query language and a powerful alerting system.

Grafana

This is an open-source monitoring and visualization platform that allows you to create and share dashboards, and analyze metrics and logs from your infrastructure and applications.

It’s recommended to use a combination of these tools to gain a comprehensive view of the performance of your website and resources.

Security

Rapyd uses the premium version of PatchStack, Monarx, and a CDN firewall. The free version of PatchStack offers much the same service as the premium version, which provides automatic updates and *sometimes* offers virtual patches to temporarily secure vulnerable plugins until an update is available (note that WordPress core supports enabling automatic updates per plugin/theme if needed). Monarx is a server-level security suite that works on every site installed on that server, while Rapyd plans are per site. BitNinja is another good server-based security solution with a better focus on WordPress, and it is cheaper than Monarx, but it currently does not support the OpenLiteSpeed config files in JSON format that RunCloud uses (BitNinja is working on supporting OLS configs in JSON format). MalCare (a premium service) is a third, plugin-based solution that can be used in place of Monarx or BitNinja. It is easier to install, but because it is a plugin it needs to be activated per site.

In either case, setting up a firewall on a CDN is also recommended, and the free version of All In One Security is worth looking into for implementing 2FA, changing the default login path, and other WordPress hardening settings, but back up and test when first setting it up. Note that All In One Security actually includes an on-site WAF and malware scanner as well, but using Monarx, BitNinja, or MalCare is better for security and performance reasons, so keep the AIOS firewall and malware scanner disabled (MalCare connects your site to an offsite WAF and malware scanner).

See this github gist for adding a tab to the user account settings page on the frontend of your BuddyBoss site to let your users configure 2FA for themselves. 

Using Xdebug

By Nick Chomey

I have gone from knowing nothing to developing my own custom plugin that uses not just PHP (the backend/server language that runs WordPress) but also JavaScript (particularly the AJAX mechanism), which allows users to do interactive things without reloading the page. I have complete confidence that I can figure out whatever I need to learn, because Xdebug helps me to identify and inspect anything.

Setting it all up is pretty detailed, but here are various resources which should convince you of why you need it, and help you figure it out.

I found that using the Xdebug install wizard (https://xdebug.org/wizard) was easier and more successful than installing the package (https://xdebug.org/docs/install). I suspect this has something to do with foibles with my LiteSpeed server (which, itself, is worth using)

You can connect VS Code to your server with SSH with this extension, which essentially allows you to work on your website files as if they were on your local computer. https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh

You’ll need this extension as well to use Xdebug in VS Code https://marketplace.visualstudio.com/items?itemName=xdebug.php-debug

And this in your browser to let xdebug know you are on a relevant page https://chromewebstore.google.com/detail/xdebug-helper/eadndfjplgieldjbigjakmdgkmoaaaoc

I truly can’t think of a better investment of time and energy for someone who is trying to launch a WordPress website than to get this set up (and I’ve saved you hundreds of hours of failures with the information provided here…).

P.S. A lot of the resources you'll find will be for Xdebug v2. You definitely want to be using v3. In addition to being far more powerful and efficient, it's actually easier to set up and use anyway. The easiest way to tell whether instructions are for v3 is that v3 uses port 9003 by default, rather than 9000. That doesn't mean that v2 info won't be useful, though.
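For reference, here is a minimal Xdebug 3 configuration sketch. These are standard Xdebug 3 settings; the exact php.ini file to edit and the extension path depend on your server, and client_host must point at wherever your editor is listening (e.g. 127.0.0.1 when using the Remote-SSH extension, since the debugger then effectively runs on the server):

; Load the extension and only start a debug session when triggered (e.g. by the browser helper)
zend_extension=xdebug.so
xdebug.mode=debug
xdebug.start_with_request=trigger
xdebug.client_host=127.0.0.1
xdebug.client_port=9003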