
Lovinmalta Case Study

Scaling WordPress sites using Amazon ECS

About the customer

Lovinmalta is one of the leading news portals on the Maltese Islands. The portal also acts as an alternative guide to Malta and Gozo, celebrating the Maltese islands, their food and the people who shape them.

The Challenge

The Lovinmalta project involved very tight deadlines combined with the possibility of sudden visitor surges whenever news breaks: the portal commonly sees 5 to 10x traffic surges within a few minutes of a breaking story. Since keeping a large pool of idle servers waiting for such surges was neither cost-effective nor practical, another solution had to be found.

The Solution

Lovinmalta and 56Bit decided to solve this issue by eliminating the need for any significant capacity predictions in the first place. By using AWS's very mature and ever-growing portfolio of container orchestration products, we were able to provide a highly redundant, scalable and secure system whilst fully controlling costs.

Why AWS?

After evaluating various options, AWS was chosen as the public cloud provider. The decision was based not only on cost, which was considerably lower in a like-for-like comparison, but also on the maturity, scalability potential and unparalleled feature set of the AWS service portfolio.

Why 56Bit?

56Bit provides peace of mind to technology-driven businesses through best-in-class cloud solutions. Lovinmalta, whose core business is totally dependent on the underlying technology, required an experienced partner with profound knowledge of serverless technologies that could deliver a high-quality service on time and within budget. Lovinmalta teamed up with 56Bit to consult, design, build and maintain this platform, working hand-in-hand with the software development team.

Solution Details

The solution is based on the AWS Well-Architected Framework and includes the following components:

Lovinmalta infrastructure design
Containerized Compute

Amazon ECS is used to orchestrate the containerized backend logic and the dynamic HTML frontend. All containers, connected to VPCs, run across multiple AZs to ensure high availability and maximum scalability. Very aggressive and strict scaling policies are used to scale both the underlying EC2 hosts and the overlying containers. Multiple Lambda functions are also used for internal tasks, such as switching the Development and Staging environments on and off, which is key to reducing costs.
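As a sketch of what such a scaling policy looks like (the cluster and service names here are hypothetical placeholders, and the target value is illustrative), an aggressive target-tracking policy for an ECS service can be expressed as the request parameters for Application Auto Scaling's `put_scaling_policy` call:

```python
# Sketch of a target-tracking scaling policy for an ECS service, expressed as
# the parameters for Application Auto Scaling's put_scaling_policy.
# Cluster name, service name and target value are hypothetical.
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "ServiceNamespace": "ecs",
    # ResourceId format: service/<cluster-name>/<service-name>
    "ResourceId": "service/wordpress-cluster/wordpress-frontend",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Add tasks when average CPU across the service exceeds 50%.
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        # Short scale-out cooldown reacts quickly to traffic surges;
        # longer scale-in cooldown avoids flapping once the surge passes.
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
}

# With boto3 this would be applied as:
#   boto3.client("application-autoscaling").put_scaling_policy(**scaling_policy)
```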

Fully managed SQL database

Amazon RDS (Aurora engine with MySQL compatibility) was chosen to host a fully managed cluster of MySQL servers on multiple availability zones.
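A minimal sketch of such a cluster, expressed as illustrative parameters for boto3's `rds.create_db_cluster` (identifiers, region and AZs are hypothetical, and real use would add credentials and networking settings):

```python
# Illustrative parameters for a multi-AZ Aurora MySQL cluster.
# Identifiers and availability zones are hypothetical placeholders.
aurora_cluster = {
    "DBClusterIdentifier": "wordpress-db",
    "Engine": "aurora-mysql",
    "MasterUsername": "admin",
    # Spreading the cluster across AZs allows automatic failover
    # to a replica in another zone.
    "AvailabilityZones": ["eu-west-1a", "eu-west-1b", "eu-west-1c"],
    "StorageEncrypted": True,
}

# With boto3 this would be applied as:
#   boto3.client("rds").create_db_cluster(**aurora_cluster)
```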

Serverless Storage

Amazon S3 is used as a static file storage system. Considering it is designed for 99.99% availability and 99.999999999% (that's 11 x 9s!) durability, the decision was easy. Multiple buckets, some fronted by the Amazon CloudFront CDN, are used for image hosting, logging and other miscellaneous purposes.
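A small sketch of how a static asset reaches such a bucket (the bucket name and key are hypothetical): uploading with a long `Cache-Control` lets CloudFront edge locations serve the object repeatedly without returning to S3.

```python
# Illustrative s3.put_object parameters for a CDN-fronted image upload.
# Bucket name and object key are hypothetical placeholders.
put_request = {
    "Bucket": "lovinmalta-images",
    "Key": "articles/header.jpg",
    "Body": b"example image bytes",
    "ContentType": "image/jpeg",
    # A long max-age lets CloudFront cache the object at the edge,
    # cutting latency and S3 request costs.
    "CacheControl": "public, max-age=86400",
}

# With boto3 this would be applied as:
#   boto3.client("s3").put_object(**put_request)
```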

Serverless Security and Compliance
  • The system uses multiple AWS accounts, one for each of the Staging and Production environments. Another account acts as the master account with centralized billing, DNS, Single Sign-On, etc. Finally, two accounts are used for security logging, with one acting as a black hole (i.e. an immutable, highly secure dump of security logs) and the other acting as a single pane of glass for everything security related.
  • Security logging is handled using AWS Config, Amazon CloudWatch and AWS CloudTrail. The above accounts are provisioned using AWS Control Tower, which creates the AWS Organization, an SSO entry point, an AWS Account Factory (using AWS Service Catalog), as well as a number of Guardrails to protect the accounts from unintended changes. All management users are assigned MFA-backed credentials on the SSO endpoint, which in turn provides role-based temporary access to the different accounts.
  • Amazon GuardDuty provides intelligent threat detection and AWS Security Hub acts as a single pane of glass for security alerts and notifications.
  • All system components are deployed in private VPCs.
  • Non-public S3 buckets are encrypted with strong cryptographic keys (AES-256) managed by AWS KMS. All data is encrypted at rest and in transit with TLS-enabled network protocols. Even the backups and their transfer to a second physical location are encrypted. All secrets, such as database passwords, are encrypted and stored in a highly available and secure secrets vault provided by AWS Parameter Store (a sub-service of AWS Systems Manager) and automatically injected into ECS for consumption. PKI certificates are provisioned using AWS Certificate Manager.
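The Parameter Store injection mentioned above can be sketched as a fragment of an ECS container definition (the container name, image and parameter ARN are hypothetical placeholders): ECS resolves the SecureString parameter at task start, so the plaintext value never appears in the task definition or the container environment configuration.

```python
# Fragment of an ECS container definition showing a database password
# injected from AWS Systems Manager Parameter Store via the "secrets" field.
# Names, image and ARN are hypothetical placeholders.
container_definition = {
    "name": "wordpress",
    "image": "wordpress:php7.4-apache",
    "secrets": [
        {
            # Exposed to the container as an environment variable...
            "name": "WORDPRESS_DB_PASSWORD",
            # ...but stored only as a reference to the encrypted parameter.
            "valueFrom": "arn:aws:ssm:eu-west-1:123456789012:parameter/prod/db-password",
        }
    ],
}
```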
Serverless Deployment and Orchestration
  • The infrastructure was fully built using CloudFormation and is released through a CI/CD pipeline built on top of the AWS Code family of products (CodeCommit, CodeBuild, CodePipeline).
  • This Infrastructure as Code passes through the full commit-build-test-deploy lifecycle, incorporating the staging and production environments. The backend and frontend code bases also pass through multiple CI/CD pipelines, built using similar infrastructure.
  • Environment separation is implemented by using completely independent VPCs in different AWS accounts. New releases can always be tested in a staging environment, where unit and integration tests along with load and stress tests can be conducted by the same testing suite. This mitigates the risk of unexpected functional changes and performance degradation. The master account serves as a CI/CD single pane of glass for the teams working on the project, whilst cross-account IAM roles enable the pipelines to test, build and deploy code in different AWS accounts.
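The cross-account mechanism in the last point can be sketched as an STS `assume_role` call (the role ARN and session name are hypothetical): the pipeline in the master account assumes a role in the target workload account and uses the temporary credentials it receives for subsequent deploy calls.

```python
# Sketch of a cross-account deployment hop: the pipeline in the master
# account assumes an IAM role in a workload account via STS.
# The role ARN and session name are hypothetical placeholders.
assume_role_request = {
    "RoleArn": "arn:aws:iam::210987654321:role/pipeline-deploy-role",
    "RoleSessionName": "staging-deploy",
    # Temporary credentials expire, limiting the blast radius of a leak.
    "DurationSeconds": 3600,
}

# With boto3 the temporary credentials would be obtained as:
#   creds = boto3.client("sts").assume_role(**assume_role_request)["Credentials"]
```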

Possible future improvements

Implement a Disaster Recovery (DR) solution that handles region-wide failures. Whilst the system is already highly available, since everything runs in multiple AZs, this can be improved further with a cross-region solution. Moving to another region would not be very hard, since everything is implemented using IaC.

Implement database caching

At the moment we do not envisage the need for database caching, but a caching layer can easily be spun up using Amazon ElastiCache (Redis) to add a very fast cache between the ECS compute and the RDS database. This would increase performance, reduce database hosting costs and decouple the architecture even further.
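The pattern involved is cache-aside. A minimal, runnable sketch, with an in-memory TTL dict standing in for Redis (in production, `get`/`set` would be Redis `GET`/`SETEX`, and `fetch_article` and its names are hypothetical):

```python
import time

# Minimal cache-aside sketch: an in-memory dict with a TTL stands in for
# ElastiCache (Redis). All names here are illustrative, not the real system.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]
        return None  # missing or expired

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

cache = TTLCache(ttl_seconds=30)

def fetch_article(article_id, query_db):
    """Serve from cache when possible; hit the database only on a miss."""
    cached = cache.get(article_id)
    if cached is not None:
        return cached
    row = query_db(article_id)  # reaches RDS only on a cache miss
    cache.set(article_id, row)
    return row
```

Repeated reads of a hot article within the TTL never touch the database, which is exactly what absorbs a breaking-news surge.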

Go completely serverless

Moving to AWS Fargate (for ECS) and Aurora Serverless (for MySQL) would remove all manual management of the underlying servers and give us more space to focus on improving and monitoring the infrastructure.
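As a sketch of what the Fargate move would look like (family name, image, sizes and role ARN are hypothetical), the same container is registered as a task definition that declares only CPU and memory, with no EC2 hosts to manage:

```python
# Illustrative register_task_definition parameters for running the same
# container on Fargate. Names, sizes and the role ARN are hypothetical.
fargate_task = {
    "family": "wordpress-frontend",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required for Fargate tasks
    "cpu": "512",              # 0.5 vCPU
    "memory": "1024",          # 1 GiB (a valid pairing with 512 CPU units)
    "containerDefinitions": [
        {"name": "wordpress", "image": "wordpress:php7.4-apache"}
    ],
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
}

# With boto3 this would be applied as:
#   boto3.client("ecs").register_task_definition(**fargate_task)
```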