McGarrah Technical Blog

Running GitHub Pages locally

How to run GitHub Pages locally in my Microsoft Windows 10 Pro WSL2 Ubuntu 22.04 LTS environment, using Visual Studio Code to edit the content. I’m not a Ruby or Jekyll expert by any means; I just wanted a quick guide for running my GitHub Pages website locally so I can review changes before pushing them to this site. It seemed like an easy enough thing, but there were a couple of hiccups to sort out, so I thought I’d write them down for future me when I try this again.

This should also let me test out new plugins, new versions, and template changes without breaking the public website. I’m still sorting out how to get the abstracts and the formatting of the archive pages right.
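For reference, the basic local workflow looks something like the following. This is a sketch assuming a standard Jekyll-based GitHub Pages repository with a Gemfile, on a fresh Ubuntu 22.04 WSL2 install; your Ruby setup may differ.

```shell
# Install Ruby tooling on Ubuntu 22.04 under WSL2
sudo apt install ruby-full build-essential zlib1g-dev

# Keep gems in the home directory instead of system-wide
echo 'export GEM_HOME="$HOME/gems"' >> ~/.bashrc
echo 'export PATH="$HOME/gems/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
gem install bundler

# From inside the cloned GitHub Pages repository
bundle install
bundle exec jekyll serve --livereload
# Site is then served at http://127.0.0.1:4000/
```

The `--livereload` flag (Jekyll 3.7+) rebuilds and refreshes the browser as you edit files in VS Code.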

HP ProCurve 2800 initial setup

Get access to switch console

You will need a console serial cable to get into your HP ProCurve 2800 switch.

HP ProCurve 2810-24 Console Cable

Here is the one I bought from Amazon, the OIKWAN USB to RJ45 Console Cable, which has been useful on some other projects as well. I have a breakout for the RJ45 that lets me use it on an old BlackArmor NAS and to interface with some robotics equipment. Your mileage may vary, but this one works great for me.
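Once the cable is plugged in, connecting from the WSL2/Linux side looks roughly like this. The device name `/dev/ttyUSB0` is an assumption (check `dmesg` for what your adapter actually registers as), and 9600 8N1 is the ProCurve factory-default console setting.

```shell
# Find the USB serial adapter (device name varies; /dev/ttyUSB0 is typical)
dmesg | grep -i tty

# Connect at the ProCurve default of 9600 baud, 8 data bits, no parity, 1 stop bit
screen /dev/ttyUSB0 9600

# or, using minicom instead of screen
minicom -D /dev/ttyUSB0 -b 9600
```

Press Enter once or twice after connecting to get the switch to present its console prompt.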

This is the ProCurve 2810-24, which is all 1Gbps ports plus four SFP (not SFP+) ports that you can use with fiber or DACs. I bought three of these so I have a SAN and two home networks… then I picked up a fourth as a spare because it was less than $25 on eBay. So I’m all in on this switch for my home networks.

HP ProCurve 2810-24 Front View

Ceph Cluster rebalance issue

This is a rough draft that I’m pushing out because it might be useful to someone, rather than letting it sit in my drafts folder forever… Good enough that ships beats perfect that never does, every time.

I think I have mentioned my ProxMox/Ceph combo cluster in an earlier post. A quick summary: it consists of a five (5) node cluster for ProxMox HA, and three of those nodes run Ceph with three (3) OSDs each, for a total of nine (9) 5TB OSDs. They are in a 3/2 Ceph configuration, with three copies of each piece of data (size=3) and a minimum of two (min_size=2), so the pool keeps serving I/O as long as two replicas are available. Those OSD hard drives were added in batches of three (3), one on each node, as I could get drives cleaned and available. So I added them piecemeal: a set of three OSDs, then three more, and finally the last batch of three. I’m also committing the sin of using 1Gbps rather than 10Gbps SAN networking for the Ceph cluster, so performance is impacted.

Adding them in pieces while I was also loading CephFS with media content is what is hurting me now. My first three OSDs, spread across the three nodes, are pretty full at 75-85%, and as I added the next batches, the cluster never fully caught up and rebalanced the initial contents. This shows up in my ‘ceph osd df tree’ results, which report less usable space than I actually have available.

Something I’m navigating is that Ceph will go into read-only mode when OSDs approach the full ratio, which is typically 95% of available space. It starts alerting like crazy at the 85% nearfull threshold, with warnings of dire things to come. Notice in my OSD status below that I have massive imbalances between the initial OSDs 0,1,2 versus 3,4,5 and 6,7,8.

Ceph OSD Status
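The commands below are the sort of thing I’ve been using to inspect the imbalance and nudge the cluster toward rebalancing. This is a sketch, not a fix-all: the upmap balancer mode requires all clients to be at Luminous or newer, and reweighting will kick off backfill traffic that is painful on a 1Gbps network.

```shell
# Show per-OSD utilization to spot the imbalance
ceph osd df tree

# Check the automatic balancer and enable upmap mode
ceph balancer status
ceph balancer mode upmap
ceph balancer on

# Alternative one-shot approach: reduce the weight of the most-filled OSDs
ceph osd reweight-by-utilization
```

Watching `ceph -s` while the backfill runs shows how quickly (or slowly) the misplaced objects drain onto the newer OSDs.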

Aggregated Network Connections with LAG/LACP

This is a meandering post without an immediate happy outcome.

I am working on a five node ProxMox 8.1 cluster with three nodes as a Ceph cluster to host my media collection. I’m learning a bunch about Ceph and Proxmox which I’ll post about later. The media collection I am importing into Ceph is a little over 16TB from ripping my VHS, DVD, and Blu-ray collections of movies and TV shows. Movies end up being less than a third of that content.
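For context, a LACP aggregation in this kind of setup has two halves: an LACP trunk on the ProCurve and a bond on the Proxmox node. The sketch below shows the general shape; the port numbers, NIC names (eno1/eno2), and addresses are placeholders for illustration, not my actual configuration.

```
# ProCurve side (config mode): bind ports 1-2 into an LACP trunk
#   trunk 1-2 trk1 lacp

# Proxmox side: /etc/network/interfaces (interface names and addresses assumed)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Note that 802.3ad mode only balances flows across links (per the hash policy); a single TCP stream still tops out at one link’s speed, which matters for Ceph replication traffic.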
