<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[vmh@me]]></title><description><![CDATA[Thoughts and stories]]></description><link>https://vmh.one/</link><image><url>https://vmh.one/favicon.png</url><title>vmh@me</title><link>https://vmh.one/</link></image><generator>Ghost 5.88</generator><lastBuildDate>Tue, 05 May 2026 01:20:39 GMT</lastBuildDate><atom:link href="https://vmh.one/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[My setup with Proxmox and Kubernetes]]></title><description><![CDATA[<blockquote>Coming from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab post</a>, <a href="https://proxmox.com/en/?ref=vmh.one" rel="noreferrer">Proxmox Virtual Environment</a> is a powerful hypervisor for my setup; PVE can be installed as the OS, replacing Windows and Hyper-V on my new machines. And <a href="https://kubernetes.io/?ref=vmh.one" rel="noreferrer">Kubernetes</a> is a powerful container orchestration platform that helps to deploy and manage backend containers. In this post, I</blockquote>]]></description><link>https://vmh.one/my-setup-with-proxmox-and-kubernetes/</link><guid isPermaLink="false">673f09310aaf420001a1ece4</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Tue, 26 Nov 2024 09:30:28 GMT</pubDate><content:encoded><![CDATA[<blockquote>Coming from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab post</a>, <a href="https://proxmox.com/en/?ref=vmh.one" rel="noreferrer">Proxmox Virtual Environment</a> is a powerful hypervisor for my setup; PVE can be installed as the OS, replacing Windows and Hyper-V on my new machines. 
And <a href="https://kubernetes.io/?ref=vmh.one" rel="noreferrer">Kubernetes</a> is a powerful container orchestration platform that helps to deploy and manage backend containers. In this post, I want to explain how I maximize hardware resources with this combo.</blockquote><h3 id="setup">Setup</h3><p>Proxmox is a Debian-based Linux distribution; I can deploy and manage virtualized environments through a web console or the command line. I used <a href="https://etcher.balena.io/?ref=vmh.one" rel="noreferrer">Etcher</a> to flash a USB drive with the Proxmox ISO, then chose Install Proxmox VE (Graphical). The detailed steps are easy to follow: <a href="https://phoenixnap.com/kb/install-proxmox?ref=vmh.one" rel="noreferrer">How to Install Proxmox VE</a>.</p><p>Kubernetes (k8s) is an open source system for automating deployment, scaling, and management of containerized applications. This guide covers the installation steps: <a href="https://phoenixnap.com/kb/install-kubernetes-on-ubuntu?ref=vmh.one" rel="noreferrer">How to Install Kubernetes on Ubuntu 22.04</a>. <a href="https://helm.sh/?ref=vmh.one" rel="noreferrer">Helm</a> is the package manager for Kubernetes; its charts are really useful for installing open source packages.</p><h3 id="machines-structure">Machines structure</h3><p>My hardware rigs run an Intel Xeon E5-2680 v4 CPU (14 cores, 28 threads) with 128GB of ECC RAM. I break a rig down into 3 or 4 virtual machines with Proxmox; a VM gets 4 vCPUs with 16GB RAM or 8 vCPUs with 32GB RAM, roughly equivalent to AWS EC2 t3.xlarge and t3.2xlarge instances.</p><p>I run Ubuntu Server 22.04 on all VMs, use them to host backend services (PSQL, Mongo, Elastic, vLLM), and make each VM a k8s worker node. With the xlarge and 2xlarge configs, a VM has enough resources for 1 or 2 BE services and can mix in smaller BE containers using k8s. 
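</p><p>Such a VM can be created from the Proxmox shell as well as from the WebUI; a sketch, with illustrative VM ID, storage, and bridge names:</p>

```shell
# Create an 8 vCPU / 32GB "2xlarge" worker VM (the VM ID, storage and
# bridge names are illustrative; adjust to your node)
qm create 201 --name k8s-worker-1 \
  --cores 8 --memory 32768 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:100 \
  --ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2'
```

<p>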
This is the key practice that helps me maximize my hardware resources.</p><h3 id="mornitoring-tools">Monitoring Tools</h3><p>There are many tools to monitor cluster health and resources.</p><p><strong>Prometheus and Grafana</strong>: Prometheus is a popular tool to collect and query real-time metrics, and Grafana is useful to visualize those metrics as graphs. I can <a href="https://medium.com/@gayatripawar401/deploy-prometheus-and-grafana-on-kubernetes-using-helm-5aa9d4fbae66?ref=vmh.one" rel="noreferrer">Deploy Prometheus and Grafana on Kubernetes using Helm</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/grafana.jpg" class="kg-image" alt loading="lazy" width="2000" height="1087" srcset="https://vmh.one/content/images/size/w600/2024/11/grafana.jpg 600w, https://vmh.one/content/images/size/w1000/2024/11/grafana.jpg 1000w, https://vmh.one/content/images/size/w1600/2024/11/grafana.jpg 1600w, https://vmh.one/content/images/2024/11/grafana.jpg 2358w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Cluster metrics with Prometheus and Grafana</span></figcaption></figure><p>Prometheus and Grafana require Persistent Volume Claims for their storage; TrueNAS is a great way to provide it, and <strong>Democratic CSI</strong> offers a convenient way to connect k8s pods to TrueNAS. 
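</p><p>From the cluster side, the result is just a storage class that chart values can point their PVCs at; a sketch, assuming Democratic CSI exposes a class named <code>truenas-iscsi</code> (the class name is illustrative):</p>

```yaml
# pvc.yaml: the Prometheus/Grafana chart values reference a storage
# class like this one (class name is an assumption for illustration)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  storageClassName: truenas-iscsi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

<p>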
<a href="https://www.lisenet.com/2021/moving-to-truenas-and-democratic-csi-for-kubernetes-persistent-storage/?ref=vmh.one" rel="noreferrer">Moving to TrueNAS and Democratic CSI for Kubernetes Persistent Storage</a> is tough to set up, but it works beautifully after the struggle.</p><p><strong>Graylog</strong> is my portal for centralized logs; the <a href="https://www.elastic.co/elastic-stack?ref=vmh.one" rel="noreferrer">ELK stack</a> is more popular but a bit overkill for my use, and Graylog is simple to set up: <a href="https://go2docs.graylog.org/current/downloading_and_installing_graylog/ubuntu_installation.htm?ref=vmh.one" rel="noreferrer">Ubuntu Installation</a></p><p><strong>Syslog</strong> helps to collect system logs and ship them to Graylog: <a href="https://linuxtechlab.medium.com/setup-syslog-server-on-ubuntu-or-centos-for-centralized-logs-management-7faeda81edf0?ref=vmh.one" rel="noreferrer">Setup syslog server on Ubuntu for Centralized Logs management</a></p><p>The <strong>Fluent Bit</strong> helm chart helps to collect k8s logs and ship them to Graylog: <a href="https://linuxtechlab.medium.com/setup-syslog-server-on-ubuntu-or-centos-for-centralized-logs-management-7faeda81edf0?ref=vmh.one" rel="noreferrer">Fluent Bit Kubernetes</a></p><h3 id="bonus">Bonus</h3><p><strong>HiveOS</strong> is a popular operating system for crypto miners; it makes it easy to manage GPUs and mining algorithms through the cloud WebUI. <a href="https://hiveon.com/os/?ref=vmh.one" rel="noreferrer">HiveOS</a> can be installed as a Proxmox Virtual Machine by importing the OS image disk and configuring it as a USB drive. 
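</p><p>The import itself comes down to two <code>qm</code> commands; a sketch with illustrative VM ID, image filename, and storage names (the USB-attach arguments are an assumption; verify against your PVE version):</p>

```shell
# Import the downloaded HiveOS image into VM 120's storage
# (VM ID, filename and storage name are illustrative)
qm importdisk 120 hiveos-image.img local-lvm

# Present the imported disk to the guest as a USB stick via raw QEMU
# args (syntax hedged; check your PVE version's documentation)
qm set 120 --args '-drive file=/dev/pve/vm-120-disk-0,format=raw,if=none,id=drive-usb0 -device usb-storage,drive=drive-usb0'
```

<p>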
The setup steps are similar to <a href="https://www.nicksherlock.com/2020/12/running-tails-as-a-vm-with-persistence-on-proxmox/?ref=vmh.one" rel="noreferrer">Running Tails as a VM with persistence on Proxmox</a>.</p><p><strong>Proxmox Private Network</strong>: I had a request to build one dedicated machine as a standalone Proxmox cluster, and a Proxmox private network is quite simple to set up as an internal network for this niche purpose: <a href="https://blog.jenningsga.com/private-network-with-proxmox/?ref=vmh.one" rel="noreferrer">How to Create a Private Network in Proxmox</a></p>]]></content:encoded></item><item><title><![CDATA[My setup for High Availability, Redundancy and Resilience]]></title><description><![CDATA[<blockquote>Coming from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab</a>, <a href="https://vmh.one/my-techniques-with-cloudflare/" rel="noreferrer">Cloudflare</a>, and <a href="https://vmh.one/my-packages-with-pfsense/" rel="noreferrer">pfSense</a> posts; to make my lab environment ready for production use, I must have Resilience in my setup. In this post, I want to explore how I configure and structure my production tools for High Availability and Redundancy of backend services, for Resilience purposes.</blockquote>]]></description><link>https://vmh.one/high-availability-redundancy-and-resilience/</link><guid isPermaLink="false">673ec8a40aaf420001a1ecb5</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Sun, 24 Nov 2024 12:31:55 GMT</pubDate><content:encoded><![CDATA[<blockquote>Coming from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab</a>, <a href="https://vmh.one/my-techniques-with-cloudflare/" rel="noreferrer">Cloudflare</a>, and <a href="https://vmh.one/my-packages-with-pfsense/" rel="noreferrer">pfSense</a> posts; to make my lab environment ready for production use, I must have Resilience in my setup. 
In this post, I want to explore how I configure and structure my production tools for High Availability and Redundancy of backend services, for Resilience purposes.</blockquote><h3 id="high-availability">High Availability</h3><p>pfSense is a crucial part of my lab; it&apos;s sometimes unstable and crashes, so I must have a backup instance for it, and there is a built-in High Availability feature for this purpose. The steps are straightforward:</p><ol><li>Clone my pfSense VM to another computer with the same setup; the original VM is the master and the clone is the failover</li><li>In the master, create a CARP Virtual IP for the WAN interface and another CARP Virtual IP for the LAN interface</li><li>Enable High Availability in the master and put in the failover&apos;s IP; after that, Firewall and Virtual IP settings will be synced between master and failover</li><li>Enable the <code>Sync HAProxy configuration to backup CARP members via XMLRPC. </code>setting to sync HAProxy configs as well.</li></ol><p>These two tutorials helped me with my setup: <a href="https://www.provya.com/blog/pfsense-configuring-high-availability/?ref=vmh.one" rel="noreferrer">[pfSense] Configuring High Availability</a> and <a href="https://www.youtube.com/watch?v=-1Og5ogkyZY&amp;ref=vmh.one" rel="noreferrer">pfsense HA / High Availability Setup and Testing Using CARP, XMLRPC &amp; pfsync</a>. Also, my pfSense instances are Hyper-V VMs and they couldn&apos;t find each other at first; there is a promiscuous-mode setting to enable on the Hyper-V virtual switch to solve this (<a href="https://answers.microsoft.com/en-us/windowserver/forum/all/hyper-v-nic-in-promiscuous-mode/e0ee8f6c-7a5a-4d8d-babc-601d871e2736?ref=vmh.one" rel="noreferrer">guide</a>). </p><p>After this setup, if the master pfSense goes down, the failover becomes master instantly, which provides resilience for my lab&apos;s connectivity. 
The Cloudflare Connector also needs high availability; I cloned the Connector to 3 replicas, so if any one fails, Cloudflare still has a connection to my lab.</p><h3 id="redundancy">Redundancy</h3><p>Redundancy is also necessary for a multi-computer setup. Kubernetes helps to clone my backend containers to as many instances as needed, and PostgreSQL, MongoDB, and Elasticsearch have built-in clustering features.</p><p><strong>Backend</strong>: with RoR as my backend, this guide, <a href="https://kubernetes-rails.com/?ref=vmh.one" rel="noreferrer">Deploying a Rails application to Kubernetes</a>, covers the steps to create the Docker image and deploy it to Kubernetes with multiple instances</p><p><strong>PostgreSQL</strong>: <a href="https://ubuntu.com/server/docs/install-and-configure-postgresql?ref=vmh.one#streaming-replication" rel="noreferrer">Streaming replication</a> is the key feature to provide redundancy for my SQL servers. I host the master database on a resilient PC with a UPS and battery; the 3 replicas are hosted on other machines with better performance, more optimized for queries.</p><p><strong>MongoDB</strong>: I had some confusion setting up a Replica Set for MongoDB; the docs are unclear about the exact steps, but it is quite stable after the correct 3-node setup. <a href="https://www.mongodb.com/docs/manual/tutorial/convert-standalone-to-replica-set/?ref=vmh.one" rel="noreferrer">Convert a Standalone Self-Managed mongod to a Replica Set</a> and <a href="https://www.mongodb.com/docs/manual/tutorial/expand-replica-set/?ref=vmh.one" rel="noreferrer">Add Members to a Self-Managed Replica Set</a></p><p><strong>Elasticsearch</strong>: the cluster setup for Elasticsearch is straightforward; a 3-node setup will grant Elasticsearch green status. 
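</p><p>The per-node configuration behind such a three-node cluster is small; a sketch of <code>elasticsearch.yml</code>, with illustrative node names and addresses:</p>

```yaml
# /etc/elasticsearch/elasticsearch.yml on the first node (names and
# addresses are illustrative; repeat on each node with its own
# node.name)
cluster.name: homelab-es
node.name: es-node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]
```

<p>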
<a href="https://logz.io/blog/elasticsearch-cluster-tutorial/?ref=vmh.one" rel="noreferrer">Creating an Elasticsearch Cluster: Getting Started</a> and <a href="https://opster.com/guides/elasticsearch/operations/elasticsearch-cluster-setup/?ref=vmh.one" rel="noreferrer">Mastering the Art of Elasticsearch Cluster Setup</a></p>]]></content:encoded></item><item><title><![CDATA[My packages with pfSense]]></title><description><![CDATA[<blockquote>Another topic I want to explore from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab post</a> is <a href="https://www.pfsense.org/?ref=vmh.one" rel="noreferrer">pfSense</a>, which is an open source enterprise solution for network security. pfSense is powerful for securing my network, and also very convenient to extend with packages for new use cases. The key pfSense packages in my lab are <code>HAProxy</code> <code>Acme</code></blockquote>]]></description><link>https://vmh.one/my-packages-with-pfsense/</link><guid isPermaLink="false">673f086f0aaf420001a1ecdc</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Sat, 23 Nov 2024 16:03:54 GMT</pubDate><content:encoded><![CDATA[<blockquote>Another topic I want to explore from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab post</a> is <a href="https://www.pfsense.org/?ref=vmh.one" rel="noreferrer">pfSense</a>, which is an open source enterprise solution for network security. pfSense is powerful for securing my network, and also very convenient to extend with packages for new use cases. The key pfSense packages in my lab are <code>HAProxy</code> <code>Acme</code> <code>Tailscale</code> <code>Wireguard</code> <code>iperf</code> </blockquote><h3 id="pfsense-setup">pfSense Setup</h3><p><a href="https://docs.netgate.com/pfsense/en/latest/recipes/virtualize-hyper-v.html?ref=vmh.one" rel="noreferrer">Virtualizing pfSense Software with Hyper-V</a> was my first step with pfSense - to build a separate router for my lab. 
My home internet connection comes from a consumer router, which has a minimal security level for easy daily use of home devices. This router provides the internet connection for my PC, and that connection serves as the WAN interface for my pfSense VM. I bought a second network card (2.5GbE), connected it to my PC through a PCIe x1 lane, and this NIC is the LAN interface for my pfSense. I connect this LAN to a switch and the switch to the other lab devices, and that&apos;s it: I have a pfSense router to manage network security for my lab.</p><h3 id="reverse-proxy-and-load-balancer">Reverse Proxy and Load Balancer</h3><p>Besides the built-in Firewall and DHCP server, my pfSense also acts as the Reverse Proxy and Load Balancer, which are necessary to expose backend services.</p><p>The key package to install is <code>HAProxy</code>, a powerful tool for setting up frontends that listen for network requests and backends that route the traffic to my BE instances. I can also set up HAProxy to load balance between BE instances using its built-in algorithms: Round Robin, Least Connections, etc. HAProxy makes it easy to scale my backend to multiple instances deployed across multiple computers.</p><p>Another package to add is <code>Acme</code>, which helps to create and manage Let&apos;s Encrypt SSL certificates. Cloudflare provides built-in SSL certificates for my public services, and Acme adds SSL certificates for my internal services.</p><p>It was confusing at first to set up a Reverse Proxy, but services became easy to expose once I got used to HAProxy. Two tutorials helped me a lot with my setup: <a href="https://youtu.be/FWodNSZXcXs?ref=vmh.one" rel="noreferrer">pfsense + HAProxy + Let&apos;s Encrypt Howto</a> and <a href="https://youtu.be/bU85dgHSb2E?ref=vmh.one" rel="noreferrer">How To Guide For HAProxy and Let&apos;s Encrypt on pfSense: Detailed Steps for Setting Up Reverse Proxy</a>. 
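</p><p>pfSense builds the HAProxy configuration from the GUI, but it helps to know what the generated frontends and backends boil down to; a hand-written sketch, with illustrative hostnames, certificate path, and addresses:</p>

```text
frontend https_in
    bind *:443 ssl crt /etc/ssl/haproxy/example.pem
    use_backend be_api if { hdr(host) -i api.example.com }

backend be_api
    balance roundrobin
    server be1 10.0.0.21:3000 check
    server be2 10.0.0.22:3000 check
```

<p>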
I later discovered another great video for understanding HAProxy: <a href="https://youtu.be/qYnA2DFEELw?ref=vmh.one" rel="noreferrer">HAProxy Crash Course</a></p><h3 id="other-packages">Other Packages</h3><p><strong>Tailscale</strong>: this helps pfSense route traffic to other nodes of the Tailscale overlay network. It&apos;s possible to make pfSense a <a href="https://tailscale.com/kb/1019/subnets?tab=linux&amp;ref=vmh.one" rel="noreferrer">Subnet router</a> and allow access from other nodes to the local subnet, and marking pfSense as an exit node will route the internet traffic of connected machines through pfSense (like a VPN server).</p><p><strong>WireGuard</strong>: this package can be installed to manage WireGuard VPN tunnels and peers, allowing direct connections to pfSense so it can act as a VPN server. However, its Web UI is harder to set up compared to wg-easy.</p><p><a href="https://github.com/esnet/iperf?ref=vmh.one" rel="noreferrer">iperf</a>: a useful tool to test local network speed; I install this package to run an iperf server, then test the speed with the <code>iperf3 -c</code> command from client nodes. </p>]]></content:encoded></item><item><title><![CDATA[My techniques with Cloudflare]]></title><description><![CDATA[<blockquote>Coming from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab post</a>, Cloudflare was the key part to expose my backend services. Their <a href="https://www.cloudflare.com/products/tunnel/?ref=vmh.one" rel="noreferrer">Zero Trust Tunnel </a>makes it so easy to connect my host with a domain, and <a href="https://www.cloudflare.com/lp/pg-load-balancing-bundle?ref=vmh.one" rel="noreferrer">Load Balancer</a> helps to provide availability and resilience to my setup. 
In this post, I want to cover some</blockquote>]]></description><link>https://vmh.one/my-techniques-with-cloudflare/</link><guid isPermaLink="false">673f076f0aaf420001a1ecd2</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Fri, 22 Nov 2024 11:45:47 GMT</pubDate><content:encoded><![CDATA[<blockquote>Coming from <a href="https://vmh.one/my-homelab-as-a-developer/" rel="noreferrer">my homelab post</a>, Cloudflare was the key part to expose my backend services. Their <a href="https://www.cloudflare.com/products/tunnel/?ref=vmh.one" rel="noreferrer">Zero Trust Tunnel </a>makes it so easy to connect my host with a domain, and <a href="https://www.cloudflare.com/lp/pg-load-balancing-bundle?ref=vmh.one" rel="noreferrer">Load Balancer</a> helps to provide availability and resilience to my setup. In this post, I want to cover some techniques with these two services.</blockquote><h3 id="performance-issues">Performance Issues</h3><p>Back to <a href="https://noted.lol/cloudflare-tunnel-and-zero-trust/?ref=vmh.one" rel="noreferrer">this setup</a>: my domains are bought at <a href="https://porkbun.com/?ref=vmh.one" rel="noreferrer">Porkbun</a>, name servers point to Cloudflare, Cloudflare Tunnels route to backend services, the backend services are hosted on Hyper-V and Proxmox Ubuntu Server VMs, and everything is connected together with a Tailscale overlay network. It is a cheap, easy-to-use, easy-to-scale combo; but it&apos;s also fragmented, with separate components owned by different entities.</p><p>The most common issue with a fragmented (or decentralized) setup is performance; no matter how neat it is, there will always be a bottleneck somewhere that drags everything down. I want to explore some tips to fix performance issues first, before getting into horizontal scaling solutions. 
And, with a multi-component setup, performance issues come mostly from networking.</p><p>In the context of <a href="https://justblog.1dreamm.com/app?l=en&amp;ref=vmh.one" rel="noreferrer">JustChill</a>, the backend is a legacy codebase on Rails 5 and Ruby 2.7; there are plans to migrate to Go/Rust frameworks, but that requires a huge amount of engineering resources. Looking closer, I figured out that this system is ancient but not actually slow; with <code>puma</code> as the application server and some fine-tuning of workers &amp; threads, REST API requests are handled in just a few milliseconds on average. The frontend side (mobile apps) is a different story: API latency jumps to nearly a second, sometimes a few seconds. This made it unfair to judge RoR as the bottleneck of this system, and brought me to solving network issues instead.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/newrelic-report.jpg" class="kg-image" alt loading="lazy" width="1290" height="475" srcset="https://vmh.one/content/images/size/w600/2024/11/newrelic-report.jpg 600w, https://vmh.one/content/images/size/w1000/2024/11/newrelic-report.jpg 1000w, https://vmh.one/content/images/2024/11/newrelic-report.jpg 1290w" sizes="(min-width: 720px) 720px"><figcaption><a href="https://newrelic.com/?ref=vmh.one" rel="noreferrer"><span style="white-space: pre-wrap;">New Relic</span></a><span style="white-space: pre-wrap;"> report for API servers</span></figcaption></figure><p>The first thing I noticed was the intermittent API speed: the same API call sometimes came through fast and other times really slow. Quick research led me to the slow upload speed issue of the Hyper-V Virtual Switch; disabling Large Send Offload helped to improve stability. 
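</p><p>Disabling LSO can be done inside the Linux guest or on the Hyper-V host; a sketch, with illustrative interface and adapter names:</p>

```shell
# Inside a Linux guest: turn off TCP segmentation offload with ethtool
# (interface name is illustrative)
ethtool -K eth0 tso off

# On the Hyper-V host, the PowerShell equivalent for the physical
# adapter behind the virtual switch (adapter name is illustrative):
#   Disable-NetAdapterLso -Name "Ethernet 2"
```

<p>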
Another change was to my internet router: it has an anti-hacking feature that limits the maximum number of TCP or UDP connections to 100; increasing it to 65000 allows more connections during traffic surges. The last change I made was to create <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/deploy-tunnels/deploy-cloudflared-replicas/?ref=vmh.one" rel="noreferrer">3 replicas of the Cloudflare Connector</a> - deployed to 3 different computers - which makes sure the Cloudflare Tunnel still has a connection to my services if one connector runs into trouble.</p><h3 id="load-balancer-and-failover">Load Balancer and Failover</h3><p>Another technique to improve overall performance is to deploy backend services to different locations and rely on the <a href="https://www.cloudflare.com/application-services/products/load-balancing/?ref=vmh.one" rel="noreferrer">Cloudflare DNS based Load Balancer</a> to load balance API requests.</p><p>I used the DNS Load Balancer to provide a failover scenario for my setup. Basically, I have another computer rig at my parents&apos; house - a very compact one with limited computing resources. It was built so that I just plug in power and an internet cable, and all BE services connect to Cloudflare through an independent Tunnel. With 2 separate tunnels serving the same backend stack, I followed <a href="https://nyan.im/p/cloudflare-load-balancer-tunnel?ref=vmh.one" rel="noreferrer">Use Cloudflare Load Balancer with Cloudflare Tunnel</a> to set up my services behind the Cloudflare Load Balancer.</p><p>With this upgrade, I add another level of resilience to the JustChill and FlashChat services; if my lab goes down, the failover trigger will steer traffic to my backup server while I&apos;m working on fixes. 
This may scale up to multiple geographic regions as well, to shorten network travel.</p>]]></content:encoded></item><item><title><![CDATA[My homelab as a developer]]></title><description><![CDATA[<blockquote>This is the last part of my homelab journey, after the <a href="https://vmh.one/my-homelab-journey/" rel="noreferrer">unexpected journey</a> and <a href="https://vmh.one/my-homelab-self-host-journey/" rel="noreferrer">self-host journey</a>. In this post, I want to cover all of my homelab capabilities, which have been very useful to my work as a software developer.</blockquote><h2 id="webserver">Webserver</h2><p>I have contributed to the products at <a href="https://1dreamm.com/?ref=vmh.one" rel="noreferrer">1dreamm</a></p>]]></description><link>https://vmh.one/my-homelab-as-a-developer/</link><guid isPermaLink="false">6739f5f270d40d000154fece</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Mon, 18 Nov 2024 10:26:14 GMT</pubDate><content:encoded><![CDATA[<blockquote>This is the last part of my homelab journey, after the <a href="https://vmh.one/my-homelab-journey/" rel="noreferrer">unexpected journey</a> and <a href="https://vmh.one/my-homelab-self-host-journey/" rel="noreferrer">self-host journey</a>. In this post, I want to cover all of my homelab capabilities, which have been very useful to my work as a software developer.</blockquote><h2 id="webserver">Webserver</h2><p>I have contributed to the products at <a href="https://1dreamm.com/?ref=vmh.one" rel="noreferrer">1dreamm</a>, one of which is <a href="https://justblog.1dreamm.com/app?l=en&amp;ref=vmh.one" rel="noreferrer">JustChill</a> - a mobile platform for meeting new people. 
Its backend was developed with <a href="https://rubyonrails.org/?ref=vmh.one" rel="noreferrer">Ruby on Rails</a>, with <a href="https://www.postgresql.org/?ref=vmh.one" rel="noreferrer">PostgreSQL</a>, <a href="https://www.mongodb.com/?ref=vmh.one" rel="noreferrer">MongoDB</a>, <a href="https://www.elastic.co/elasticsearch?ref=vmh.one" rel="noreferrer">Elasticsearch</a>, and <a href="https://redis.io/?ref=vmh.one" rel="noreferrer">Redis</a> as databases, and <a href="https://aws.amazon.com/s3/?ref=vmh.one" rel="noreferrer">Amazon S3</a> as the object storage. They were all hosted on AWS and were getting expensive, so the goal was to use my homelab to share some of the computing load. I moved the staging servers away from AWS, but a homelab is clearly not ideal for production use; I set up load balancing between AWS and the homelab to tackle that.</p><p>The first step was to move the master PostgreSQL to my homelab and keep the AWS database as a replica. That let me remove the RDS dependency and scale down the PostgreSQL replica. I created a new Ubuntu Server VM on my PC, named it <code>DatabaseCenter</code>, deployed the master database by following <a href="https://ubuntu.com/server/docs/install-and-configure-postgresql?ref=vmh.one" rel="noreferrer">this guide</a>, and took advantage of the Postgres streaming replication feature. The same strategy was applied to deploy MongoDB to my DatabaseCenter, by following <a href="https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/?ref=vmh.one" rel="noreferrer">Install MongoDB Community</a> and <a href="https://www.mongodb.com/docs/manual/tutorial/convert-standalone-to-replica-set/?ref=vmh.one" rel="noreferrer">Convert a Standalone mongod to Replica Set</a>. I added a UPS and battery to my PC to protect the DatabaseCenter.</p><p>The second step was to clone Elasticsearch and Redis to my homelab. 
Those services require more RAM, so I created another Ubuntu Server VM on the Xeon rig, named it <code>JustChillCenter</code>, and deployed Elasticsearch using <a href="https://logz.io/blog/elasticsearch-cluster-tutorial/?ref=vmh.one" rel="noreferrer">this guide</a> and <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-redis-on-ubuntu-20-04?ref=vmh.one" rel="noreferrer">this one</a> for Redis. The master Ruby on Rails server was also deployed to this VM without problems.</p><p>The last step was to connect all those servers together and load balance them. I used <a href="https://tailscale.com/?ref=vmh.one" rel="noreferrer">Tailscale</a> to connect my homelab servers with the AWS servers; it was free and easy to set up an overlay network. <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-remote-tunnel/?ref=vmh.one" rel="noreferrer">Cloudflare Tunnel</a> helped again to expose the webserver and RESTful API server to the public, and to load balance between my homelab and AWS servers with a <a href="https://nyan.im/p/cloudflare-load-balancer-tunnel?ref=vmh.one" rel="noreferrer">DNS based Cloudflare Tunnel</a>.</p><p>The above steps were really time consuming, and making a system stable enough for production use requires some expertise in Software Engineering, so I won&apos;t go deeper into the details. The key players in my setup were <strong>Cloudflare</strong> and <strong>Tailscale</strong>; with the proxy and overlay network, it is secure to expose my services. Flexibility is the key benefit this setup brings: I am able to scale my homelab or AWS servers up and down based on demand, at a more reasonable cost. 
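</p><p>Part of that flexibility is how cheap it is to grow the mesh: joining a new VM or VPS to the Tailscale overlay network is two commands (the install script URL is the one from Tailscale's docs):</p>

```shell
# On any new VM or VPS: install Tailscale and join the overlay network
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up   # prints an auth URL to approve the node
```

<p>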
It also allows me to deploy my services to more VPS providers in different geolocations.</p><h2 id="genai-server">GenAI server</h2><p>Another <a href="https://1dreamm.com/?ref=vmh.one" rel="noreferrer">1dreamm</a> product is <a href="https://apps.apple.com/us/app/id1671857001?ref=vmh.one" rel="noreferrer">FlashChat</a> - a GPT chat app with tools to learn new languages - which integrated with the <a href="https://platform.openai.com/docs/api-reference/chat?ref=vmh.one" rel="noreferrer">OpenAI API</a> to provide the chat completion service. The product use case was very minimal and the OpenAI integration seemed overkill, so I took advantage of open source Large Language Models and moved the chat completion service to my homelab.</p><p>The release of <a href="https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct?ref=vmh.one" rel="noreferrer">Meta Llama 3.2</a> was a game changer for me: its <code>Llama-3.2-3B-Instruct</code> model can be hosted with 8GB of VRAM and is smart enough for the FlashChat use case. Also, <a href="https://github.com/vllm-project/vllm?ref=vmh.one" rel="noreferrer">vLLM</a> is a great library for LLM inference and serving; its OpenAI-compatible API server was really helpful, and I migrated from the OpenAI API to the vLLM API without problems.</p><p>With these tools at hand, I took advantage of the RTX 3070 Ti GPU in my PC, and a vLLM server was built using Docker Desktop with <a href="https://docs.vllm.ai/en/latest/serving/deploying_with_docker.html?ref=vmh.one" rel="noreferrer">this guide</a>. The chat service was fast and good enough to handle chat completion requests, and Cloudflare Tunnel helped to expose this service for app integration.</p><p>The last key component of my setup was the PCIe riser; this item is very popular among crypto miners for connecting multiple GPUs through PCIe x1 lanes. 
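</p><p>Each riser-mounted GPU can then back its own vLLM container; a sketch of a single-GPU instance following vLLM's Docker deployment guide (image tag, port, and token placeholder are illustrative):</p>

```shell
# One vLLM OpenAI-compatible server pinned to GPU 0; the model is the
# one used above, other values are illustrative
docker run --gpus '"device=0"' -p 8000:8000 \
  --env HUGGING_FACE_HUB_TOKEN=<token> \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.2-3B-Instruct
```

<p>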
With this riser, I could scale up the number of vLLM instances by just adding more GPUs to the mainboard, with no real performance tradeoff for inference workloads.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/pcie-riser.jpg" class="kg-image" alt loading="lazy" width="1000" height="1000" srcset="https://vmh.one/content/images/size/w600/2024/11/pcie-riser.jpg 600w, https://vmh.one/content/images/2024/11/pcie-riser.jpg 1000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">PCIe riser</span></figcaption></figure><p>One problem came up: I had to use Docker Desktop with WSL on Windows to run these vLLM instances, because I was unable to pass the GPU through to my Ubuntu VMs. Hyper-V is not well optimized for this purpose, and quick research led me to <a href="https://www.proxmox.com/en/?ref=vmh.one" rel="noreferrer">Proxmox</a>, a better hypervisor choice that makes it easier to scale up my lab.</p><h2 id="cluster-setup">Cluster Setup</h2><p>So, my lab had proved capable of production use; I upgraded my PC to 32GB RAM and the Xeon rig to 128GB RAM, and added 3 more rigs and a couple of GPUs for redundancy and resilience.</p><p><a href="https://www.proxmox.com/en/?ref=vmh.one" rel="noreferrer">Proxmox</a> made it easier to manage the new Virtual Machines; the <a href="https://phoenixnap.com/kb/install-proxmox?ref=vmh.one" rel="noreferrer">installation steps</a> were straightforward, the Proxmox WebUI is easy to use, and creating a Virtual Machine is similar to Hyper-V. One problem with Proxmox: whenever I added a new PCIe device, the network interface names got reshuffled, leaving me unable to connect via the static IP. 
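</p><p>The fix is to re-point the Proxmox bridge at whatever the NIC has been renamed to; a sketch of the relevant stanza, with illustrative interface names and addresses:</p>

```text
# /etc/network/interfaces on the Proxmox node: after adding a PCIe
# card, the NIC may be renamed (e.g. enp2s0 -> enp3s0) and the bridge
# must follow it
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
```

<p>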
Editing <code>/etc/network/interfaces</code> was the key to resolving the network issue.</p><p><a href="https://www.pfsense.org/?ref=vmh.one" rel="noreferrer">pfSense</a>: my internet router could not provide adequate routing and firewalling for my new rigs and VMs, so I needed an enterprise-grade solution, and pfSense was the standout open-source option. I followed <a href="https://docs.netgate.com/pfsense/en/latest/recipes/virtualize-hyper-v.html?ref=vmh.one" rel="noreferrer">this guide</a> to set up pfSense as a Hyper-V VM and create a new isolated subnet for my lab. Many useful packages can be installed within pfSense: I use <code>acme</code> to manage Let&apos;s Encrypt certificates and <code>haproxy</code> to provide a Reverse Proxy and Load Balancer for my lab; <code>tailscale</code> can also be installed as a package. This part was a real struggle, and these two tutorials were extremely useful for me: <a href="https://www.youtube.com/watch?v=bU85dgHSb2E&amp;ref=vmh.one" rel="noreferrer">How To Guide For HAProxy and Let&apos;s Encrypt on pfSense: Detailed Steps for Setting Up Reverse Proxy</a> and <a href="https://www.youtube.com/watch?v=FWodNSZXcXs&amp;ref=vmh.one" rel="noreferrer">pfsense + HAProxy + Let&apos;s Encrypt Howto</a>. 
A network switch was also needed to work with pfSense.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/switch.jpg" class="kg-image" alt loading="lazy" width="2000" height="889" srcset="https://vmh.one/content/images/size/w600/2024/11/switch.jpg 600w, https://vmh.one/content/images/size/w1000/2024/11/switch.jpg 1000w, https://vmh.one/content/images/size/w1600/2024/11/switch.jpg 1600w, https://vmh.one/content/images/2024/11/switch.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">2.5G Ethernet Switch</span></figcaption></figure><p><a href="https://kubernetes.io/?ref=vmh.one" rel="noreferrer">Kubernetes</a>: with the new infrastructure in place, a container orchestrator was irresistible for my setup. Some people may see k8s as overkill, but I do enjoy the benefits it brings; the key practice is to deploy only the services that need to scale. I had the JustChill RESTful API server and the <a href="https://github.com/soketi/charts/tree/master/charts/soketi?ref=vmh.one" rel="noreferrer">soketi</a> websocket server deployed on Kubernetes, while database services stayed fixed on Ubuntu. Kubernetes was tough to set up: I followed <a href="https://phoenixnap.com/kb/install-kubernetes-on-ubuntu?ref=vmh.one" rel="noreferrer">this guide</a> to deploy a master node and 10 worker nodes, but the master node kept failing the API server health check, which cost me hours to resolve. 
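</p><p>A few commands that help when debugging a control plane like this (a general troubleshooting sketch; it assumes a kubeadm node on a containerd runtime, and CONTAINER_ID is a placeholder):</p>

```shell
# Watch the kubelet try to start the static control-plane pods
sudo systemctl status kubelet
sudo journalctl -u kubelet -f

# Inspect the kube-apiserver container directly through the container runtime
sudo crictl ps -a | grep kube-apiserver
sudo crictl logs CONTAINER_ID

# Wipe the failed attempt cleanly before retrying kubeadm init
sudo kubeadm reset -f
```

<p>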
This kubeadm command finally worked for me: <code>sudo kubeadm init --control-plane-endpoint=kube-master-node --pod-network-cidr=10.234.0.0/16 --service-cidr=10.10.0.0/24 --service-dns-domain=dcs.vmh.local --upload-certs</code>; the explicit <code>cidr</code> flags were the missing piece.</p><p>Some useful tools for working with k8s: <a href="https://github.com/derailed/k9s?ref=vmh.one" rel="noreferrer">k9s</a>, <a href="https://www.lisenet.com/2021/moving-to-truenas-and-democratic-csi-for-kubernetes-persistent-storage/?ref=vmh.one" rel="noreferrer">Democratic CSI</a>, <a href="https://medium.com/@gayatripawar401/deploy-prometheus-and-grafana-on-kubernetes-using-helm-5aa9d4fbae66?ref=vmh.one" rel="noreferrer">Prometheus and Grafana</a>, <a href="https://docs.fluentbit.io/manual/installation/kubernetes?ref=vmh.one" rel="noreferrer">Fluent Bit</a>, <a href="https://yusufmujawar.medium.com/install-configure-graylog-on-the-ubuntu-operating-system-e808b36d344b?ref=vmh.one" rel="noreferrer">Graylog</a>.</p><p>With this infra, I got PostgreSQL, Elasticsearch, MongoDB and vLLM scaled to 3 replicas each and connected to more than 10 backend instances, which provides enough redundancy for <a href="https://1dreamm.com/?ref=vmh.one" rel="noreferrer">1dreamm</a> products.</p><h2 id="the-end">The End</h2><p>This is the final post of my homelab journey; the series has covered the key parts of my last 6 months of work. I had a lot of struggles, but most of this journey was filled with fun, satisfaction and lessons. There are many details that could not be covered here, but we will have chances to come back to them in my future posts.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://vmh.one/my-techniques-with-cloudflare/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">My techniques with Cloudflare</div><div class="kg-bookmark-description">Coming from my homelab post, Cloudflare was the key part to expose my backend services. 
Their Zero Trust Tunnel makes it so easy to connect my host with a domain, and Load Balancer helps to provide availability and resilience to my setup. In this post, I want to cover some</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://vmh.one/content/images/size/w256h256/format/jpeg/2024/11/icon.jpg" alt><span class="kg-bookmark-author">vmh@me</span><span class="kg-bookmark-publisher">vmh</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://vmh.one/content/images/2024/11/newrelic-report.jpg" alt></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://vmh.one/my-packages-with-pfsense/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">My packages with pfSense</div><div class="kg-bookmark-description">Another topic I want to explore from my homelab post is pfSense, which is an open source enterprise solution for network security. pfSense is so powerful to secure my network, and also very convenient to extend its usecases with packages. The key pfSense packages in my lab are HAProxy Acme</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://vmh.one/content/images/size/w256h256/format/jpeg/2024/11/icon.jpg" alt><span class="kg-bookmark-author">vmh@me</span><span class="kg-bookmark-publisher">vmh</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://vmh.one/content/images/2024/11/logo-720p.jpg" alt></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://vmh.one/high-availability-redundancy-and-resilience/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">My setup for High Availability, Redundancy and Resilience</div><div class="kg-bookmark-description">Coming from my homelab, Cloudflare, pfSense posts; to make my lab environment ready for production use I must have Resilience in my setup. 
In this post, I want to explore on how I configure and structure my production tools for High Availability and Redundancy of backend services for Resilience purpose.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://vmh.one/content/images/size/w256h256/format/jpeg/2024/11/icon.jpg" alt><span class="kg-bookmark-author">vmh@me</span><span class="kg-bookmark-publisher">vmh</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://vmh.one/content/images/2024/11/logo-720p.jpg" alt></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://vmh.one/my-setup-with-proxmox-and-kubernetes/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">My setup with Proxmox and Kubernetes</div><div class="kg-bookmark-description">Coming from my homelab post, Proxmox Virtual Environment is a powerful hypervisor for my setup, PVE can be installed as the OS and replace Windows, Hyper-V in my new machines. And Kubernetes is a powerful container orchestration platform that helps to deploy and manage backend containers. 
In this post, I</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://vmh.one/content/images/size/w256h256/format/jpeg/2024/11/icon.jpg" alt><span class="kg-bookmark-author">vmh@me</span><span class="kg-bookmark-publisher">vmh</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://vmh.one/content/images/2024/11/grafana.jpg" alt></div></a></figure>]]></content:encoded></item><item><title><![CDATA[My homelab self-host journey]]></title><description><![CDATA[<blockquote>Continue from <a href="https://vmh.one/my-homelab-journey/" rel="noreferrer">my homelab unexpected journey</a>, this post will cover the journey that I discovered some open source tools can be hosted myself, which replace many external cloud services.</blockquote><h2 id="redundant-hard-drives">Redundant Hard Drives</h2><p>So, I had 50TB storage for my media server and a 12TB hard drive waiting to be utilized,</p>]]></description><link>https://vmh.one/my-homelab-self-host-journey/</link><guid isPermaLink="false">6735e8db70d40d000154f821</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Sat, 16 Nov 2024 17:39:13 GMT</pubDate><content:encoded><![CDATA[<blockquote>Continue from <a href="https://vmh.one/my-homelab-journey/" rel="noreferrer">my homelab unexpected journey</a>, this post will cover the journey in which I discovered that some open source tools can be self-hosted, replacing many external cloud services.</blockquote><h2 id="redundant-hard-drives">Redundant Hard Drives</h2><p>So, I had 50TB of storage for my media server and a 12TB hard drive waiting to be utilized; it&apos;s again overkill, so I looked for other uses for the redundant storage. I found out that Google Photos and Google Drive were charging me by the GB, so why not host them myself with terabytes of storage? 
A quick search led me to <a href="https://immich.app/?ref=vmh.one" rel="noreferrer">Immich</a> and <a href="https://www.truenas.com/?ref=vmh.one" rel="noreferrer">TrueNAS</a>, both free and open source.</p><p>Immich was straightforward; the app and its use cases are similar to Google Photos. Setting up the Immich server was a different story: I couldn&apos;t host it straight on my Windows PC like the Plex media server, as the <a href="https://immich.app/docs/install/docker-compose?ref=vmh.one" rel="noreferrer">recommended method</a> was to install it with Docker Compose. I knew Docker is a tool to containerize backend microservices, but Docker Compose was a new world to me, and exploring it was a necessary step to quickly get my self-host services running.</p><p>I could have used Docker Desktop on my Windows PC, but I chose the hard way to host Docker Compose: getting an Ubuntu Server and running it as a Virtual Machine (VM) on Hyper-V. My first experience with Virtual Machines was with VMware in 2012; back then I couldn&apos;t afford a MacBook, so I ran Mac OS X on VMware on my Dell laptop (which struggled quite a bit).<em> </em>Discovering Hyper-V was a game changer: I followed this <a href="https://www.makeuseof.com/windows-11-enable-hyper-v/?ref=vmh.one" rel="noreferrer">guide to enable Hyper-V on Windows</a>, then was shocked at how easy it is to set up a Virtual Machine nowadays; the steps are straightforward and the machine is powerful. For Ubuntu Server, I picked the <a href="https://ubuntu.com/download/server?ref=vmh.one" rel="noreferrer">22.04 LTS</a> version and followed this <a href="https://www.linuxtechi.com/install-ubuntu-server-22-04-step-by-step/?ref=vmh.one" rel="noreferrer">step-by-step guide</a> to get my first Ubuntu Server, which I named <em>vmh-ubuntu-home-server</em>. 
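</p><p>On Ubuntu Server 22.04, giving the VM a fixed address is done through netplan; a minimal sketch (the file name, interface name and addresses are illustrative for my subnet, not a universal config):</p>

```yaml
# /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:                           # interface name as reported by `ip link`
      dhcp4: false
      addresses: [192.168.1.23/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 1.1.1.1]
```

<p>Then <code>sudo netplan apply</code> activates it.</p><p>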
I gave my home server a static IP, learned to ssh into Ubuntu from my macOS terminal, and the last step was to <a href="https://docs.docker.com/engine/install/ubuntu/?ref=vmh.one" rel="noreferrer">Install Docker Engine on Ubuntu</a>.</p><p>Back to the Immich server: from the Ubuntu <code>~/immich-app/</code> folder, following the <a href="https://immich.app/docs/install/docker-compose?ref=vmh.one" rel="noreferrer">Docker Compose commands</a>, I realized I had missed an important part: how tf do I attach the hard drive? The Hyper-V VM was stored on my PC&apos;s SSD <code>D:\</code>, which clearly didn&apos;t have enough space for Immich assets. The solution was to create a 2TB <code>vhdx</code> drive, store that file on my 16TB HDD <code>E:\</code>, attach the <code>vhdx</code> hard drive to the Hyper-V VM, and from Ubuntu mount the drive at the <code>/mnt/immich</code> folder, all by following <a href="https://encircletech.freshservice.com/support/solutions/articles/21000267933?ref=vmh.one" rel="noreferrer">this guide</a>. After that, I changed the Immich environment variables accordingly in the <code>.env</code> file (<code>UPLOAD_LOCATION=/mnt/immich/library</code>, <code>DB_DATA_LOCATION=/mnt/immich/postgres</code>), ran <code>docker compose up -d</code>, and got my Immich server up and running at <a href="http://192.168.1.23:2283/?ref=vmh.one">http://&lt;ubuntu-home-server-ip&gt;:2283</a>. A small tip: to keep the drive mounted at startup, I added the line <code>/dev/sdb1 /mnt/immich ext4 defaults 0 1</code> to <code>/etc/fstab</code>. After that, the <a href="https://immich.app/docs/overview/quick-start?ref=vmh.one" rel="noreferrer">Immich Quick Start</a> guide was easy, and I got it working with no struggles.</p><p>My Immich server was successfully launched but had no assets in it yet; the next part was to migrate my data from Google Photos to the Immich server. 
The first step was to use <a href="https://takeout.google.com/?ref=vmh.one" rel="noreferrer">Google Takeout</a> to export my Google Photos; it took a few hours to package the whole library. I downloaded the zip files from Google Takeout, then used the <a href="https://github.com/simulot/immich-go?ref=vmh.one" rel="noreferrer">Immich-Go</a> tool to upload them to the Immich server. And here is the result:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/immich.jpg" class="kg-image" alt loading="lazy" width="720" height="1427" srcset="https://vmh.one/content/images/size/w600/2024/11/immich.jpg 600w, https://vmh.one/content/images/2024/11/immich.jpg 720w" sizes="(min-width: 720px) 720px"><figcaption><i><em class="italic" style="white-space: pre-wrap;">Immich from my iPhone</em></i></figcaption></figure><p>Hyper-V, Ubuntu Server, Docker Compose and Immich were so cool to me; getting them to work was a big milestone, and the self-host journey became much easier after this point. I still had plenty more storage to play with, and a NAS server was the next move.</p><h2 id="truenas-and-a-new-rig">TrueNAS and a new Rig</h2><p>A NAS server is a very popular way to deal with the limited capacity of external cloud storage. There are many options for hosting a NAS server: I saw many people own a Synology, while others self-host Unraid or TrueNAS; the standout option for me was TrueNAS SCALE.</p><p>The steps to install TrueNAS SCALE on Hyper-V were similar to installing the Ubuntu Server: download the <code>iso</code> file from <a href="https://www.truenas.com/download-truenas-scale/?ref=vmh.one" rel="noreferrer">their website</a>, create a Hyper-V Virtual Machine, then follow the OS installation steps; no struggles at all. 
I assigned the VM a static IP, then accessed the web UI in my browser, and the TrueNAS menu looked really interesting.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/truenas-menu.png" class="kg-image" alt loading="lazy" width="490" height="1090"><figcaption><span style="white-space: pre-wrap;">TrueNAS SCALE menu</span></figcaption></figure><p>It has more features than I thought; besides managing and sharing my storage, I can host Virtual Machines or Apps on TrueNAS SCALE. This was like another home server, but a problem came up: the 16GB RAM of my PC was not enough to host both Ubuntu and TrueNAS. After checking the TrueNAS <a href="https://www.truenas.com/docs/scale/24.04/gettingstarted/scalehardwareguide/?ref=vmh.one" rel="noreferrer">Minimum Hardware Requirements</a> and its full capabilities, I decided to get a dedicated computer with more suitable hardware for self-hosting.</p><p>The dedicated computer was not expensive; there were many X99 mainboard options at unbelievably cheap prices. My monitor had 2 extra HDMI ports so it could be shared between the 2 computers, and the mouse and keyboard were shared too using a KVM switch. I went for an open case rig with the Sniper X99 mainboard, an <a href="https://www.intel.com/content/www/us/en/products/sku/91754/intel-xeon-processor-e52680-v4-35m-cache-2-40-ghz/specifications.html?ref=vmh.one" rel="noreferrer">Intel Xeon E5 2680v4</a> CPU, 64 GB ECC RAM (2 sticks), an <a href="https://www.msi.com/Graphics-Card/Radeon-RX-5600-XT-GAMING-MX/Overview?ref=vmh.one" rel="noreferrer">MSI Radeon RX-5600-XT</a> GPU, a 512GB SSD, a 12TB HDD, and a 550W PSU. 
The rig fit perfectly under my gaming corner.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/x99.jpg" class="kg-image" alt loading="lazy" width="1080" height="810" srcset="https://vmh.one/content/images/size/w600/2024/11/x99.jpg 600w, https://vmh.one/content/images/size/w1000/2024/11/x99.jpg 1000w, https://vmh.one/content/images/2024/11/x99.jpg 1080w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">The open case rig</span></figcaption></figure><p>It was quite fun to build the rig myself piece by piece; I got the chance to learn about the cables, lanes, ports and components of a computer. I went with Windows 11 for the OS; a better choice would have been Proxmox, but that came into place later. With the new rig, I moved my TrueNAS VM over, with much more resources to play with.</p><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/TrueNAS.png" class="kg-image" alt loading="lazy" width="1080" height="633" srcset="https://vmh.one/content/images/size/w600/2024/11/TrueNAS.png 600w, https://vmh.one/content/images/size/w1000/2024/11/TrueNAS.png 1000w, https://vmh.one/content/images/2024/11/TrueNAS.png 1080w" sizes="(min-width: 720px) 720px"></figure><p>With TrueNAS at hand, I had a Warebox drive to store documents and software, a Backup drive for work and important data, and an SSD drive for claims by other services (this came later). 
The new rig had 8 SATA ports and 4 NVMe lanes using PCIe bifurcation, so I could easily build up RAID arrays from this setup.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://vmh.one/content/images/2024/11/TrueNAS-drives.png" class="kg-image" alt loading="lazy" width="1080" height="580" srcset="https://vmh.one/content/images/size/w600/2024/11/TrueNAS-drives.png 600w, https://vmh.one/content/images/size/w1000/2024/11/TrueNAS-drives.png 1000w, https://vmh.one/content/images/2024/11/TrueNAS-drives.png 1080w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">TrueNAS drives</span></figcaption></figure><p>By sharing over SMB and NFS, I could attach the drives to any Windows, macOS or Linux machine.</p><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/TrueNAS-sharing.png" class="kg-image" alt loading="lazy" width="1080" height="671" srcset="https://vmh.one/content/images/size/w600/2024/11/TrueNAS-sharing.png 600w, https://vmh.one/content/images/size/w1000/2024/11/TrueNAS-sharing.png 1000w, https://vmh.one/content/images/2024/11/TrueNAS-sharing.png 1080w" sizes="(min-width: 720px) 720px"></figure><h2 id="other-self-host">Other self-host</h2><p>I hosted many more apps with my Ubuntu Server and TrueNAS SCALE; some are really useful.</p><p><strong>WireGuard</strong>: I needed a VPN solution to access my homelab services from the outside; a VPN also routes traffic through a secure tunnel, so it is safer to use the internet from any location. WireGuard was the standout VPN solution, and <a href="https://github.com/wg-easy/wg-easy?ref=vmh.one" rel="noreferrer">wg-easy</a> helped to run both the WireGuard server and a WebUI to add new devices; wg-easy could be installed either with Docker Compose or as an app on TrueNAS. 
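</p><p>For the Docker Compose route, a minimal <code>docker-compose.yml</code> sketch along the lines of the wg-easy README at the time I set it up (the hostname and password are placeholders, and newer wg-easy releases may use different environment variables, e.g. a hashed password):</p>

```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy
    environment:
      - WG_HOST=my-lab.ddns.net     # public DDNS name clients connect to
      - PASSWORD=change-me          # WebUI login
    volumes:
      - ./wg-easy:/etc/wireguard
    ports:
      - "51820:51820/udp"           # WireGuard tunnel
      - "51821:51821/tcp"           # WebUI
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```

<p>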
The most challenging part was to <strong>Expose the WireGuard VPN Server to the Internet</strong>, which is described in this <a href="https://markliversedge.blogspot.com/2023/09/wireguard-setup-for-dummies.html?m=1&amp;ref=vmh.one" rel="noreferrer">Wireguard guide for dummies</a>. My internet router has built-in DDNS support for no-ip.com, so I took advantage of that and set up UDP port forwarding to my WireGuard server. My public IP was behind CGNAT, so the last part was to contact my ISP to remove me from CGNAT. All done: my WireGuard server was ready for connections from the WireGuard apps on my phone and laptop.</p><p><strong>Gitea</strong>: the best solution for a self-hosted git server; I got it working easily with Docker Compose by following <a href="https://docs.gitea.com/installation/install-with-docker?ref=vmh.one" rel="noreferrer">this guide</a>.</p><p><strong>Paperless-ngx</strong>: a <a href="https://docs.paperless-ngx.com/?ref=vmh.one" rel="noreferrer">great option</a> to upload and manage documents; I installed it as an app on TrueNAS.</p><p><strong>Vaultwarden</strong>: I wanted my own self-hosted service to store my secrets, so I followed <a href="https://noted.lol/vaultwarden/?ref=vmh.one" rel="noreferrer">this guide</a> to install it via Docker Compose and connected to the server using the Bitwarden app. Vaultwarden&apos;s text and file sharing were also cool.</p><p><strong>Ghost</strong>: a cool option for self-hosting a blog; it is easy to customize with great themes and great tools. 
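</p><p>As a sketch, a Compose file for Ghost along the lines of the official image docs (Ghost 5 needs MySQL 8; the URL and all credentials here are placeholders):</p>

```yaml
services:
  ghost:
    image: ghost:5
    ports:
      - "8080:2368"                 # Ghost listens on 2368 inside the container
    environment:
      url: https://blog.example.com
      database__client: mysql
      database__connection__host: db
      database__connection__user: ghost
      database__connection__password: example-password
      database__connection__database: ghost
    volumes:
      - ./ghost-content:/var/lib/ghost/content
    depends_on:
      - db
    restart: always
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example-root-password
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: example-password
    volumes:
      - ./mysql-data:/var/lib/mysql
    restart: always
```

<p>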
I&#x2019;m hosting this blog using <a href="https://hub.docker.com/_/ghost?ref=vmh.one" rel="noreferrer">ghost 5</a> via Docker Compose.</p><p><strong>Homepage</strong>: with so many services at hand, I used <a href="https://gethomepage.dev/installation/docker/?ref=vmh.one" rel="noreferrer">homepage</a> to organize all of them.</p><p><strong>Nginx Proxy Manager</strong>: I hosted different services on different machines, so a proxy server was needed to reach all of them from one portal; <a href="https://nginxproxymanager.com/setup/?ref=vmh.one" rel="noreferrer">the setup guide</a> with Docker Compose was straightforward. NPM also helps provide <code>https</code> for my self-host services via a free Let&apos;s Encrypt SSL certificate.</p><p><strong>Cloudflare</strong>: I wanted to expose vaultwarden and ghost to the public, so I needed Cloudflare. I followed <a href="https://noted.lol/cloudflare-tunnel-and-zero-trust/?ref=vmh.one" rel="noreferrer">this guide</a> to buy a domain, enable Cloudflare Tunnel, and create a Cloudflare connector using Docker Compose; then my services were ready to be accessed. Also, the Cloudflare API was the DNS challenge provider for my Let&apos;s Encrypt SSL cert.</p><h2 id="what-next">What next?</h2><p>This was quite a journey! I went through a lot of struggles to build all the servers, but it was really exciting and worth it after all. With the new rig at hand, I wanted to go even further than just hosting open source code. In the next journey, I worked on some other tools to host my own code within my homelab and replaced some expensive VPSes.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://vmh.one/my-homelab-as-a-developer/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">My homelab as a developer</div><div class="kg-bookmark-description">This is the last part of my homelab journey, after the unexpected journey and self-host journey. 
In this post, I want to cover all of my homelab capabilities, which has been very useful to my works as a software developer. Webserver I have some contribution to the products at 1dreamm,</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://vmh.one/content/images/size/w256h256/format/jpeg/2024/11/icon.jpg" alt><span class="kg-bookmark-author">vmh@me</span><span class="kg-bookmark-publisher">vmh</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://vmh.one/content/images/2024/11/pcie-riser.jpg" alt></div></a></figure>]]></content:encoded></item><item><title><![CDATA[My homelab unexpected journey]]></title><description><![CDATA[<blockquote>From a working corner, entertainment setup to Plex media server and self-host services.</blockquote><h2 id="the-new-corner">The New Corner</h2><p>Back in June 2021, I moved to a new apartment and had a dedicated room to host all of my working equipments. I tried to keep it compact with an adjustable desk, Macs and</p>]]></description><link>https://vmh.one/my-homelab-journey/</link><guid isPermaLink="false">6734c9e270d40d000154f476</guid><dc:creator><![CDATA[vmh]]></dc:creator><pubDate>Thu, 14 Nov 2024 12:09:39 GMT</pubDate><content:encoded><![CDATA[<blockquote>From a working corner, entertainment setup to Plex media server and self-host services.</blockquote><h2 id="the-new-corner">The New Corner</h2><p>Back in June 2021, I moved to a new apartment and had a dedicated room to host all of my working equipment. 
I tried to keep it compact with an adjustable desk, Macs and a monitor.</p><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/desk-720p.jpg" class="kg-image" alt loading="lazy" width="720" height="515" srcset="https://vmh.one/content/images/size/w600/2024/11/desk-720p.jpg 600w, https://vmh.one/content/images/2024/11/desk-720p.jpg 720w" sizes="(min-width: 720px) 720px"></figure><p>In the left corner, I had a Xiaomi projector, which was my main source of entertainment. Months later, I added an <a href="https://usa.yamaha.com/products/audio_visual/av_receivers_amps/rx-v6a/index.html?ref=vmh.one" rel="noreferrer">RX-V6A </a>AVR and a 5.1.2 sound system of Jamo speakers to this setup. This room became a home theatre, and the last piece was an <a href="https://www.nvidia.com/en-us/shield/shield-tv-pro/?ref=vmh.one" rel="noreferrer">Nvidia Shield TV Pro</a> player I bought in 2022.</p><p>I was quite satisfied with my setup: I got my work done during the daytime and enjoyed Netflix movies and TV shows at night, all in the same room. I kept it that way, and not much changed over the next 2 years.</p><h2 id="the-gaming-pc">The Gaming PC</h2><p>My work requires macOS, and not many games can be installed on my computer, so the obvious solution was a game console to fill the gap. I did buy a PS5 in late 2023, and it served me well for a couple of AAA games (God of War, Ghost of Tsushima, PES). But I really wanted better FPS games as well (Dota 2, CS2), and a gaming PC was inevitable.</p><p>Early this year (2024), I acquired a budget gaming PC. Back then, I didn&apos;t have much knowledge of how to build a PC, so I simply picked one based on recommendations from the internet. 
My PC had the <a href="https://www.asus.com/vn/motherboards-components/motherboards/prime/prime-b660m-k-d4/?ref=vmh.one" rel="noreferrer">ASUS B660M</a> motherboard, a 512GB SSD, 16GB RAM, an <a href="https://www.intel.com/content/www/us/en/products/sku/134587/intel-core-i512400f-processor-18m-cache-up-to-4-40-ghz/specifications.html?ref=vmh.one" rel="noreferrer">Intel 12400F</a> CPU and an <a href="https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3070-3070ti/?ref=vmh.one" rel="noreferrer">Nvidia RTX 3070ti</a> GPU, paired with a 240Hz 1440p monitor (<a href="https://www.msi.com/Monitor/Optix-MAG274QRX?ref=vmh.one" rel="noreferrer">MSI MAG274QRX</a>).</p><p>The new PC worked really well with my existing sound system; I had a really cool corner for gaming after adding a few RGB gadgets.</p><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/PC-corner.jpg" class="kg-image" alt loading="lazy" width="720" height="520" srcset="https://vmh.one/content/images/size/w600/2024/11/PC-corner.jpg 600w, https://vmh.one/content/images/2024/11/PC-corner.jpg 720w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/gaming-corner.jpg" class="kg-image" alt loading="lazy" width="720" height="1051" srcset="https://vmh.one/content/images/size/w600/2024/11/gaming-corner.jpg 600w, https://vmh.one/content/images/2024/11/gaming-corner.jpg 720w" sizes="(min-width: 720px) 720px"></figure><p>The Xiaomi projector was replaced with a <a href="https://www.benq.com/en-us/projector/gaming/tk700.html?ref=vmh.one" rel="noreferrer">BenQ TK700</a> 4K; it improved the image quality of my movies and AAA games quite a lot.</p><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/4k-gaming.jpg" class="kg-image" alt loading="lazy" width="720" height="960" srcset="https://vmh.one/content/images/size/w600/2024/11/4k-gaming.jpg 600w, 
https://vmh.one/content/images/2024/11/4k-gaming.jpg 720w" sizes="(min-width: 720px) 720px"></figure><p>My setup became overkill at this level, but I couldn&apos;t resist it. I had a really good time gaming for a few months, but when the joy faded I started to have regrets.</p><h2 id="the-media-server">The Media Server</h2><p>Around May 2024, playing games became too time-consuming, and I could not afford it anymore. I still enjoyed great movies with the current home theatre setup, but the gaming PC became a waste; I had no more energy and free time to put into gaming.</p><p>I searched for other uses for my Windows PC, and <a href="https://www.plex.tv/media-server-downloads?ref=vmh.one" rel="noreferrer">Plex media server</a> was the first to stand out. I also discovered Hyper-V, which can host Ubuntu VMs, but at this point I wanted to keep things simple and decided to host my media server on Windows.</p><p>With Plex media server in mind, I turned my attention to the PC components. The 512GB SSD was clearly not enough for the media server; I had 6TB of external hard drives, but that storage ran out quickly. I wanted to extend my PC storage with more external drives and went searching for good options, and it turned out there are more budget-friendly choices with internal HDDs. I learned that I could extend the PC storage over SATA (I had no idea about PC internals back then), and discovered that there are 4 SATA ports on my PC motherboard.</p><p>This soon became very exciting; I bought a <a href="https://www.westerndigital.com/en-ap/products/internal-drives/data-center-drives/ultrastar-dc-hc550-hdd?sku=0F38462&amp;ref=vmh.one" rel="noreferrer">16TB WD Ultrastar</a> HDD (2nd) at a really good price but then struggled to make it work. 
I figured out that, besides connecting to a SATA port for data transfer, I also needed to connect the HDD to a power source; this was not the case for my external drives, which only need a USB port for both power and data. I felt so dumb for not knowing this, and I had no clue where to attach the power source to the drive.</p><figure class="kg-card kg-image-card"><img src="https://vmh.one/content/images/2024/11/HDD.jpg" class="kg-image" alt loading="lazy" width="720" height="720" srcset="https://vmh.one/content/images/size/w600/2024/11/HDD.jpg 600w, https://vmh.one/content/images/2024/11/HDD.jpg 720w" sizes="(min-width: 720px) 720px"></figure><p>Scratching my head while watching tutorials on YouTube, I discovered that the HDD&apos;s power comes not from the motherboard but from the PSU. I had never paid attention to this part of my PC (besides the number of Watts), and it suddenly became so important. I removed the back cover of my PC case to discover many more cables coming from the PSU; these cables are usually hidden inside the case for a cleaner look. I finally found the SATA power cable, plugged it into the HDD, and plugged the HDD&apos;s data cable into the motherboard. The moment Windows recognized the new HDD was really satisfying.</p><p>From this point, I added 2 more HDDs and a SATA SSD to fully utilize the 4 SATA ports, giving me a total of 50TB of storage for my media server. I moved from Netflix, Spotify and Kodi to Plex as the single app for consuming most of my media. There are more open-source tools to manage and enhance the media server; some are not appropriate to share widely.</p><h2 id="what-next">What next?</h2><p>The media server was an attempt to add more uses to my gaming PC; on the journey to complete it, I discovered a new term: <strong>self-host</strong>, which became the next interesting part of my homelab journey. 
By adding more HDDs, I also discovered new capabilities of the motherboard: I could extend it through the PCIe lanes and host more cool things, which led to more computer rigs being added to my homelab. My self-host services involved more technical effort and Windows was no longer sufficient, which was when Linux Virtual Machines came into play.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://vmh.one/my-homelab-self-host-journey/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">My homelab self-host journey</div><div class="kg-bookmark-description">Continue from my homelab unexpected journey, this post will cover the journey that I discovered some open source tools that can be hosted myself, which replace many external cloud services. Redundant Hard Drives So, I had 50TB storage for my media server and a 12TB hard drive waiting to be</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://vmh.one/content/images/size/w256h256/format/jpeg/2024/11/icon.jpg" alt><span class="kg-bookmark-author">vmh@me</span><span class="kg-bookmark-publisher">vmh</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://vmh.one/content/images/2024/11/immich.jpg" alt></div></a></figure>]]></content:encoded></item></channel></rss>