For years, the notion that Moodle “does not behave well” in large installations or is not flexible enough to take advantage of additional processing resources has managed to survive in forums and articles around the web.
In reality, time and again Moodle has shown itself capable of handling installations featuring hundreds of thousands of users accessing resources and making changes simultaneously. When a reported issue is examined in detail, the conclusion is almost always that the code in question was not written with scale in mind, often because inexperienced developers failed to follow the extensive performance and scalability guidelines available online, starting with the Moodle documentation itself.
Moodle can handle sites with hundreds of thousands of users
First, it’s important to clarify that by “handling,” we could be referring to the number of users who are registered on the site as admins, teachers, students, or something else. We could also be talking about the number of users who at a given time are simultaneously logged in and actively navigating in Moodle, requesting, downloading, or updating information. The second definition is the more critical one when discussing performance.
Answering thousands of requests at the same time is primarily a matter of hardware specifications and the way the devices are connected. Software, of course, can place restrictions on the way physical resources are used, which may in fact help speed up the site. Moodle, for instance, lets developers set limits on the memory requested by a given page in order to prevent memory hoarding by one user or process.
If memory access is controlled and users still face a slow Moodle, it may be time to increase the hardware capacity.
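As a rough illustration, this control shows up in two common places: the site-wide ceiling set in config.php and a per-page request made from code. The values below are illustrative only, and the exact setting names should be checked against the documentation for your Moodle version.

```php
<?php
// Hedged sketch: two places where a Moodle administrator or developer
// commonly caps or raises per-request memory. Values are illustrative,
// not recommendations for any particular site.

// 1) In config.php, set the ceiling Moodle may request for memory-hungry
//    operations such as course backups:
$CFG->extramemorylimit = '1024M';

// 2) In plugin code, request a higher limit only for the page that needs
//    it, using Moodle's raise_memory_limit() helper and its MEMORY_*
//    constants, rather than raising the PHP limit globally:
raise_memory_limit(MEMORY_EXTRA);
```

Keeping the global limit modest while raising it per page is what prevents one heavy request from hoarding memory that hundreds of lighter requests could otherwise share.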
Moodle can scale and “balance loads” across CPUs
An installation may have plenty of free resources, yet the site does not always appear to take advantage of them. Again, there is nothing in Moodle that keeps it from using the extra resources it needs. But managing tasks and assigning them to available CPUs, or “cores,” is a complicated issue: the benefits of distributing the threads of one or more processes must be weighed against the costs of splitting and recombining them, as well as the risk of synchronization failure.
While these operations are best left to expert programmers, the first step they would probably take is to make sure the right tools are in place for each job. This example by SeveralNines illustrates the design of a network that eliminates “single points of failure” and takes advantage of open technologies to optimize clustering and the performance of the database engine.
Moodle’s performance does not have to be compromised by sudden increases in volume
Similarly to the point made above, the ability to provision extra resources automatically requires the help of broadly available software; the ability to use them, however, depends on the features of the hosting environment. AWS, for example, offers tools to scale services automatically and to deploy new servers with identical configurations when necessary.
Moodle Cache can be optimized
Check out Martin Langhoff’s talk at MoodleMoot US 2015, in which he showcases the Moodle Universal Cache, or MUC. MUC is the result of ongoing research into ways of keeping commonly used resources close to the users who access them repeatedly. It is a promising line of work that might involve usage-pattern forecasting.
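To make the idea concrete, here is a sketch of how plugin code leans on MUC through Moodle’s Cache API. The component name `mod_example`, the cache area `computedresults`, and the helper function are hypothetical; a real plugin must also declare its cache area in a db/caches.php definition file.

```php
<?php
// Hedged sketch of Moodle's Cache API (MUC). The component 'mod_example',
// the area 'computedresults', and mod_example_build_report() are
// hypothetical names used for illustration only.

function mod_example_get_report(int $courseid): array {
    // Ask MUC for a cache instance; MUC decides which backing store to
    // use (memory, files, Redis, Memcached...) based on site configuration,
    // so the plugin code stays the same on a laptop or a cluster.
    $cache = cache::make('mod_example', 'computedresults');

    $report = $cache->get($courseid);
    if ($report === false) {
        // Cache miss: do the expensive work once, then keep the result
        // close to the users who will request it repeatedly.
        $report = mod_example_build_report($courseid); // hypothetical helper
        $cache->set($courseid, $report);
    }
    return $report;
}
```

The point of the abstraction is that administrators, not developers, choose where cached data lives, which is exactly what lets the same code scale from a single server to a load-balanced cluster.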
Moodle plugins can schedule and automate large tasks
Finally, consider the all-too-common experience of building massive Moodle content repositories practically from scratch as a semi-annual routine, and the plugins Moodlers have developed to help optimize and automate thousands of such tasks. For operations that would take a long time in any LMS, Moodle welcomes any development that speeds things up and makes the results more available to every user.
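Plugins like these typically build on Moodle’s scheduled task API, which moves heavy work out of web requests and into cron. The plugin name `local_bulkimport` and the task body below are hypothetical; a real task is also registered, with its schedule, in the plugin’s db/tasks.php.

```php
<?php
// Hedged sketch of a Moodle scheduled task. The plugin 'local_bulkimport'
// and the import logic are hypothetical; real tasks are declared in the
// plugin's db/tasks.php, where admins can also adjust the schedule.

namespace local_bulkimport\task;

class import_courses extends \core\task\scheduled_task {

    public function get_name(): string {
        // Human-readable name shown on the admin task list.
        return get_string('importcoursestask', 'local_bulkimport');
    }

    public function execute(): void {
        // Runs from cron on the declared schedule, so a semi-annual bulk
        // import happens off-peak instead of inside a user's web request.
        mtrace('Starting bulk course import...');
        // ... fetch pending import jobs and process them in batches ...
    }
}
```

Because cron can run on a dedicated server in a cluster, this pattern is also how large sites keep bulk operations from competing with interactive users for resources.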
This Moodle Practice related post is made possible by: eThink Education, a Certified Moodle Partner that provides a fully-managed Moodle experience including implementation, integration, cloud-hosting, and management services. To learn more about eThink, click here.