WordPress and Static websites on S3

WordPress and static websites on S3 are two very distinct hosting solutions. WordPress is a Content Management System (CMS) that allows users to easily build and maintain dynamic, content-rich websites. Static websites on S3, on the other hand, are a more traditional hosting solution where plain HTML, CSS, and JavaScript files are served directly from Amazon’s Simple Storage Service (S3). Both solutions offer different benefits depending on the type of website you are building, so it is important to weigh the pros and cons of each before making a decision.

This website is hosted on Amazon Web Services’ Simple Storage Service (S3). The content management system (CMS) used to edit the website is WordPress.

This is achieved by running a WordPress installation on a local server. The website is then edited like any other WordPress website. A static site exporter is then used to convert the WordPress site into a set of static HTML, CSS, and image files. Finally, Rclone is used to sync these files with an S3 bucket.
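The sync step looks something like this (the remote name, bucket, and export folder here are placeholders, not the actual values used):

rclone sync ./wordpress-export s3remote:my-site-bucket --progress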

Once the static files have been synced to S3, they can be hosted like any other static website. This cycle of editing, exporting, and syncing can be repeated as often as necessary.
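For reference, static website hosting can be enabled on the bucket with the AWS CLI (the bucket name is again a placeholder):

aws s3 website s3://my-site-bucket/ --index-document index.html --error-document error.html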

This method does have its limitations. Any pages that require dynamic content, such as an online store, will not be supported by a static export. However, for websites that can work within these constraints, hosting costs are greatly reduced and the site can be served through Amazon Web Services’ Content Delivery Network (CloudFront) for lightning-fast speeds from anywhere in the world.

HITS Stores

I recently launched two online stores that are built on WordPress/Woocommerce platforms. The first website, 800adventures.com.au, is targeted towards CF Moto 800MT owners. This website is hosted on AWS Lightsail and uses Lightsail’s Proxy CDN to cache images and static files with recommended settings for WordPress dynamic websites. It achieves a score of A on GTMetrix.

The second website, getpunnyshirts.com, offers both self-designed and reseller T-shirts. While this website currently has lower sales than 800 Adventures, it is hosted on my home network due to its lower traffic volume. To make it reachable, I run a proxy on Lightsail with a public IP: a dockerised installation of Nginx configured as a reverse proxy, holding the SSL certificates for the domain. Traffic is forwarded over a ZeroTier point-to-point VPN to my home Proxmox cluster, which hosts the web server. Unfortunately, due to the high time to first byte, the website only achieves an F on GTMetrix. However, the bandwidth is sufficient for low traffic.
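A minimal sketch of that reverse proxy setup, assuming Let’s Encrypt certificates and an illustrative ZeroTier address for the home web server (neither is the actual configuration):

cat > proxy.conf <<'EOF'
server {
    listen 443 ssl;
    server_name getpunnyshirts.com;
    ssl_certificate     /etc/letsencrypt/live/getpunnyshirts.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/getpunnyshirts.com/privkey.pem;
    location / {
        # Forward to the home server over the ZeroTier point-to-point link
        proxy_pass http://10.147.17.10;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF

docker run -d --name proxy -p 443:443 \
  -v $(pwd)/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  nginx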

Proxmox, clustering and high availability for the home lab

To enable HA in Proxmox you need a way for Proxmox to move VMs between nodes without also having to move each VM’s associated disks.

This is achieved by using a shared disk system between the cluster nodes.
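For context, once shared storage is in place a VM can be placed under HA management with ha-manager (VM ID 100 is just an example):

ha-manager add vm:100 --state started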

Options I have tried include:

NFS to a (very old) Dlink NAS

Using a DNS-345 with 4x 500GB drives in RAID 5 and a single 1Gbit network connection.

I enabled NFS and allowed the cluster access. After adding the share to the cluster I was able to use it to store VM and CT disks.
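Adding the share as storage can be done in the GUI or with pvesm, something like the following (the server address and export path are placeholders):

pvesm add nfs dlink-nas --server 192.168.1.50 --export /mnt/HD/HD_a2/proxmox --content images,rootdir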

I was able to run about 20 Windows- and Linux-based systems, as long as they didn’t need heavy disk IO. Any action that did required patience to complete.

Monitoring the DNS-345, it became apparent that the limitation was the network connection: CPU and disk IO on the NAS itself were minimal.

The NAS itself is not upgradeable.

NFS to Unraid

Similar to the DNS-345 but with a more powerful system.

A bit of background if you are not familiar: Unraid is not aimed at this sort of task and is more suited to home use as a media and file storage system. It worked well, but I also like Unraid’s ability to spin down disks during quiet times. This is possible because data can be read from a single drive (unlike ‘proper’ RAID, where reads are spread across disks). Running VMs meant constant disk access to any disk hosting a VM, though I could limit which disks were used via Unraid’s share settings.

I also went through several upgrades on this server, including:

  • Changed the CPU from a Ryzen 3 to a Xeon. The Ryzen worked well, but the motherboards in my price range lacked certain features (PCIe slots, ECC memory) that were better served by a server-style board.
  • SATA add-on cards were changed to SAS cards to allow the use of SAS drives. Enterprise-style drives are surprisingly cheap on eBay.
  • Emulex cards allowed 10Gb fibre connections (IP over fibre) for Unraid.

To cut a long story short, I felt a more dedicated storage system would work better than Unraid.

iSCSI to Unraid

While looking over options I had a look at the iSCSI plugin available for Unraid. This was a great way to learn about iSCSI, as it leads you through the process. I was able to make this available to the Proxmox cluster just as you would any iSCSI drive. Along with the 10Gb fibre IP network, this allowed for quite a quick disk backend.
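Attaching an iSCSI target to Proxmox can likewise be done with pvesm (the portal address and target IQN below are placeholders):

pvesm add iscsi unraid-iscsi --portal 10.0.10.2 --target iqn.2020-01.local.unraid:proxmox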

iSCSI to ESOS

After acquiring more dedicated hardware, including a rack-mount server with 10k SAS drives and (low-end) hardware RAID cards, I went looking for appropriate software to allow Proxmox access.

I settled on Enterprise Storage OS (ESOS). It is designed to boot off a USB stick, is Linux based, and is dedicated to being a storage system. Unlike FreeNAS and its various derivatives, it has few features beyond those required. It does have a text-based user interface for configuration and some statistics on connections and usage.

I initially ran with QLogic fibre cards, but these would not work with Unraid, and I didn’t want to run two separate fibre networks. I went with Emulex cards and iSCSI to share the disks with the Proxmox cluster.

This was easily the best disk IO of the options I tried in my low-budget home lab.

Proxmox for the home lab

Proxmox is a great option for a home lab. It has features enabled that other platforms offer only in their paid tiers.

It wouldn’t seem that way, as every time you log in you are reminded that you are not using the Enterprise supported version. This is completely fair, as the work certainly deserves its price tag.

For those of us who run a small home lab to learn and get familiar with running enterprise systems, Proxmox has the option of a ‘free’ upgrade system.

This will remove the enterprise repository and add the no-subscription repository.

Execute the following to change over each of your Proxmox installs:

# Remove the enterprise repo (subscription-only), add the no-subscription repo,
# then update and upgrade. 'buster' matches Proxmox VE 6.x on Debian 10; use the
# codename for your release (e.g. 'bullseye' for PVE 7).
rm /etc/apt/sources.list.d/pve-enterprise.list && \
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list && \
apt update -y && \
apt full-upgrade -y

Linux shell – Disk space commands

Find all files 100MB and larger on the root filesystem, without crossing into other mounted filesystems

find / -xdev -type f -size +100M
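A handy variation that also prints each file’s size and sorts the largest to the bottom:

find / -xdev -type f -size +100M -exec du -h {} + | sort -h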

Show the current directory’s disk space usage, this filesystem only, and only one level deep (but sizes still include all files in the subfolders)

du -hx --max-depth=1

As above, but sort largest folders/files to the bottom of the list

du -hx --max-depth=1 | sort -h

Laravel and Docker

Laravel is a PHP framework that makes building a web application faster (once you climb the mountain to learn it!)

Docker is the hosting environment that brings your ‘development’ and ‘production’ environments closer together.

Laradock is the glue holding them together.
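A minimal sketch of getting Laradock running (the containers started are just an example, and the env file name can vary between Laradock versions):

git clone https://github.com/laradock/laradock.git
cd laradock
cp env-example .env
docker-compose up -d nginx mysql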

Getting started with Laravel for Linux System Administrators

I recently had a go at learning Laravel.

I’m already very familiar with Linux servers, as I ran a web hosting business. Yet all the getting-started documents seemed to assume I was unfamiliar and led me down the path of using Vagrant on my Windows desktop.

To use Laravel on a Linux web hosting account, all you actually need is Composer. This is a PHP dependency manager, not unlike the yum or apt-get you would use to manage packages on your server.

You can install Composer with your distribution’s standard package manager (for example, apt install composer on Debian/Ubuntu). From there, create a new Laravel project with

composer create-project --prefer-dist laravel/laravel blog

in the folder where you would like it created, where blog is the name of the new project.

This creates a project containing the initial Laravel skeleton files.

Once that’s done you have a very basic Laravel website ready to go. Place it behind an Apache web server, with the document root pointing at the project’s public folder, and you should see the Laravel logo.
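As a sketch, a minimal Apache virtual host for this on Debian/Ubuntu (the domain and paths are placeholders):

cat > /etc/apache2/sites-available/blog.conf <<'EOF'
<VirtualHost *:80>
    ServerName blog.example.com
    # DocumentRoot must be the Laravel project's public directory
    DocumentRoot /var/www/blog/public
    <Directory /var/www/blog/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
EOF

a2ensite blog && a2enmod rewrite && systemctl reload apache2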

Another important piece is the artisan file. This command-line PHP script performs a few important functions, such as managing the associated database for the Laravel site in an intelligent way.
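A few common artisan commands, run from the project folder (examples only):

php artisan migrate              # apply pending database migrations
php artisan make:model Post -m   # scaffold a model plus its migration (example name)
php artisan serve                # run the built-in development server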
