<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[SyntacticSugar, yet another nerd blog]]></title><description><![CDATA[Thoughts of a 40-ish fossil webdeveloper / fulltime nerd]]></description><link>https://syntacticsugar.nl</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 00:17:23 GMT</lastBuildDate><atom:link href="https://syntacticsugar.nl/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Benchmarking a TrueNAS Scale VM in Proxmox]]></title><description><![CDATA[Small recap from my first post; I've been watching a lot of videos of other nerds claiming that running TrueNAS Scale as a VM works fine. Some videos even told me you can run TrueNAS Scale in a VM, SMB-share the ZFS datasets from that VM w...]]></description><link>https://syntacticsugar.nl/benchmarking-a-truenas-scale-vm-in-proxmox</link><guid isPermaLink="true">https://syntacticsugar.nl/benchmarking-a-truenas-scale-vm-in-proxmox</guid><category><![CDATA[server]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ivo Toby]]></dc:creator><pubDate>Tue, 17 May 2022 14:56:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652799303245/OJ7L2Pc9I.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Small recap <a target="_blank" href="https://syntacticsugar.nl/my-quest-for-a-new-home-server-part-1">from my first post</a>; I've been watching a lot of videos of other nerds claiming that running TrueNAS Scale as a VM works fine. Some videos even told me you can run TrueNAS Scale in a VM, SMB-share the ZFS datasets from that VM with the Proxmox host, and run other VMs from those SMB shares. </p>
<p><iframe src="https://giphy.com/embed/26ufdipQqU2lhNA4g" width="480" height="480" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/producthunt-mind-blown-blow-your-26ufdipQqU2lhNA4g">via GIPHY</a></p><p></p>
<p>The first thing that came to mind seeing this: <strong>WHY</strong>?? </p>
<p>Why would you want to run a VM from a Samba share? One could argue that having TrueNAS ZFS for replication and snapshots could be valuable, and since it's all running on a virtual network you would not be encumbered by the speed of your physical network adapter. Proxmox supports ZFS out of the box, and though Proxmox itself is not really a solid solution as a NAS, using ZFS with a bunch of SSDs in a mirror for VMs would probably be a lot faster than running those VMs from an SMB share. <em>Or NFS, for that matter.</em> </p>
<p>But then again: most of the guys in those videos were very positive about VMs on SMB shares, so I had to check.</p>
<p><iframe src="https://giphy.com/embed/Gpf8A8aX2uWAg" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/reactiongifs-Gpf8A8aX2uWAg">via GIPHY</a></p><p></p>
<h1 id="heading-the-setup">The setup</h1>
<p>I installed TrueNAS Scale in a VM running on an NVMe SSD. I gave it 4 cores, 24 GB of RAM and two virtual NICs, both paravirtualized VirtIO NICs. <em>This is important</em>, because it allows speeds up to 10 Gbit/s. If you use the Realtek or Intel models you're essentially emulating hardware with lower speeds (up to 1 Gbit/s).</p>
<p>My TrueNAS VM has two 3.5" hard disks in a mirror and one 2.5" SSD (Samsung) as a striped ZFS dataset. All disks are connected to SATA ports. I added these disks following <a target="_blank" href="https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)">this guide</a>. Neither ZFS dataset has any cache disks.</p>
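<p>For reference, the passthrough from that guide boils down to a couple of commands on the Proxmox host. This is a sketch; the VM ID (100) and the disk ID are placeholders for your own values:</p>

```shell
# Find the stable ID of the physical disk you want to pass through
ls -l /dev/disk/by-id/

# Attach it to VM 100 as a VirtIO block device
# ("ata-EXAMPLE_DISK" is a placeholder for your own disk ID)
qm set 100 --virtio1 /dev/disk/by-id/ata-EXAMPLE_DISK

# Verify it shows up in the VM configuration
qm config 100 | grep virtio1
```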
<p>For testing I created a clean Debian 11 VM with a GNOME desktop. The VM itself runs on the same NVMe SSD that TrueNAS Scale runs from. Debian got 2 cores, 2 GB of RAM and one paravirtualized VirtIO NIC.</p>
<h1 id="heading-benchmarking">Benchmarking</h1>
<p>I wanted to run several test cases:</p>
<ul>
<li>HDD vs. SSD </li>
<li>SMB vs NFS </li>
<li>Virtio SCSI vs Virtio Block (and SATA)</li>
<li>Raw vs qcow2 </li>
</ul>
<p>These are all Proxmox settings; every time I wanted to benchmark a new test case I created a new virtual drive and assigned it to the Debian VM. The SMB and NFS shares are mounted from Proxmox.</p>
<p>For benchmarking I used GNOME's default Disks app. It has a benchmark feature, and for every run I used 100 samples of 100 MiB each. I also ran some real-world tests by copying a large file from local storage to the mounted test drive.</p>
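<p>As a command-line alternative to the GNOME tool, a rough throughput check can be done with <code>dd</code>. This is a sketch; the target path is a placeholder for wherever the test drive is mounted:</p>

```shell
# Placeholder mount point; substitute the drive you want to test
TARGET=${TARGET:-/tmp/testdrive}
mkdir -p "$TARGET"

# Write test: conv=fdatasync forces the data to disk before dd reports a speed
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=64 conv=fdatasync

# Read test; as root, run "echo 3 > /proc/sys/vm/drop_caches" first,
# otherwise you are benchmarking the page cache instead of the disk
dd if="$TARGET/testfile" of=/dev/null bs=1M

rm "$TARGET/testfile"
```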
<h2 id="heading-observations">Observations</h2>
<p>I did test most scenarios with SMB as well. While benchmarking with the GNOME Disks tool went fine, I could not do any real-world tests; the VM would freeze after 10 seconds of copying and Proxmox would throw an IO error. I have encountered this before while trying to install a VM on an SMB share, so I left these results out of the conclusion.</p>
<p>Another observation: when using qcow2 as the format I could run the benchmark once; on the second run the transfer speeds would exceed 20 Gbit/s (caching, presumably). I'm not sure why this happens and it needs further investigation.</p>
<h1 id="heading-the-results">The results</h1>
<p>One day, when I finally understand Excel, I'll generate some nice graphs. But for now I'll stick to raw data and a conclusion.</p>
<p>To view the raw results, check out <a target="_blank" href="https://toby-brain.notion.site/fd1dbd6fa41e419185985e8b76407815?v=d1249629606a4aed9dc20f6e96ac73d7">this Notion database</a></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>The best performing setup is a ZFS dataset backed by an SSD, shared over NFS, connected to a VM using VirtIO Block and the raw file type. Real-world writes are around 280 MB/s and reads around 560 MB/s. Not bad at all, especially if you compare those numbers with a virtual disk running on the NVMe SSD (360 MB/s write, 420 MB/s read). What really surprised me is that reads are slower from NVMe, maybe because that SSD also runs the TrueNAS and Debian VMs? The TrueNAS SSD is only a SATA disk and it has some overhead from NFS as well, so I do not really understand why NVMe performs significantly worse in this scenario.</p>
<p>If you don't have an SSD-backed ZFS dataset you will still get decent read performance from an HDD-backed one, around 560 MB/s, but writing to that set is pretty slow: around 80 MB/s.</p>
<p><strong>There are some things to keep in mind. </strong></p>
<p><iframe src="https://giphy.com/embed/Y077qfBlPpZzjJCHkz" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/rupaulsdragrace-rupauls-drag-race-rpdr-all-stars-5-Y077qfBlPpZzjJCHkz">via GIPHY</a></p><p></p>
<p>If you are running a lot of VMs, and some of those VMs access data from TrueNAS over NFS, you will suffer from lower throughput. The VirtIO NIC's performance is limited by the CPU, which becomes the bottleneck in your system. I did not test disk throughput while NFS was handling other traffic as well (maybe another day).</p>
<h1 id="heading-final-thoughts">Final thoughts</h1>
<p>Though performance is really good when running from a shared ZFS dataset, it still feels wrong to me. Especially if VMs will be accessing files from the same ZFS dataset over NFS. I did not even touch on access times, and those are a lot worse on shares. Random access and databases running in a VM on a share would probably perform pretty badly. 
I'll be benchmarking the same setup with TrueNAS Core instead of Scale. I don't expect huge differences, but it would help me decide where to store my ZFS dataset. </p>
<p><em>In the end I will be running my VMs from two SSDs in a ZFS mirror on Proxmox, not using shares. </em></p>
<p><iframe src="https://giphy.com/embed/jS27LWasgUIYrXtP83" width="480" height="480" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/WhenWeAllVote-vote-go-when-we-all-jS27LWasgUIYrXtP83">via GIPHY</a></p><p></p>
]]></content:encoded></item><item><title><![CDATA[VM Inception]]></title><description><![CDATA[I'm still struggling to determine what kind of setup I want to use for my new home-server. I've watched countless YouTube videos about Proxmox, TrueNAS, TrueNAS in Proxmox, Debian in Proxmox and more and more alike. It all boils down to "use Proxmox...]]></description><link>https://syntacticsugar.nl/vm-inception</link><guid isPermaLink="true">https://syntacticsugar.nl/vm-inception</guid><category><![CDATA[Linux]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Ivo Toby]]></dc:creator><pubDate>Sat, 14 May 2022 12:29:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652530955828/ovwwbzy6C.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I'm still struggling to determine what kind of setup I want to use for my new home-server. I've watched countless YouTube videos about Proxmox, TrueNAS, TrueNAS in Proxmox, Debian in Proxmox, and more of the like. It all boils down to "use Proxmox as your hypervisor and use TrueNAS as your NAS" (either TrueNAS Core or TrueNAS Scale). I have one issue with this though: how do I get the ZFS datasets from the virtualized TrueNAS to other VMs? So far the answer seems to be SMB or NFS, and there's one problem with that: <strong>speed</strong>.</p>
<p><iframe src="https://giphy.com/embed/fBEMsUeGHdpsClFsxM" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/minions-minionsmovie-minionsriseofgru-riseofgru-fBEMsUeGHdpsClFsxM">via GIPHY</a></p><p></p>
<p>In my current setup TrueNAS Core shares datasets with FreeBSD jails (running apps like Plex, Sonarr, IceCast) using null-mounts. You mount a dataset from the host into the jail and you're done. No hassle with NFS, SMB or iSCSI, just simple null-mounts. You can make them read-only, which is nice, you generally have fewer issues with access rights, and most important of all: it's <strong>FAST</strong>. You're accessing the dataset directly, without any layers in between.</p>
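<p>For comparison, a null-mount really is a one-liner. This is a sketch with made-up dataset and jail paths:</p>

```shell
# One-off null-mount of a media dataset into a jail, read-only
mount -t nullfs -o ro /mnt/tank/media /mnt/tank/iocage/jails/plex/root/media

# Or persist it in the jail's fstab (e.g. /etc/fstab.plex on the host):
# /mnt/tank/media  /mnt/tank/iocage/jails/plex/root/media  nullfs  ro  0  0
```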
<p>I did run two VMs with bhyve on TrueNAS Core. It is slow to begin with, and you can't use null-mounts, so accessing data from the host in a timely fashion is quite a struggle. Those who remember the days of FreeNAS (especially 9.x) know the pain of PHP-virtualbox, so bhyve looked like a blessing at the time ;) </p>
<p>I've come up with several scenarios which I'm currently investigating.</p>
<p><iframe src="https://giphy.com/embed/3oriO8vwmRIZO6kcNO" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/impastortv-tv-land-tvland-3oriO8vwmRIZO6kcNO">via GIPHY</a></p><p></p>
<h1 id="heading-1-proxmox-host-truenas-core-vm">1. Proxmox Host, TrueNAS Core VM</h1>
<p><em>Effectively moving my current server to a VM</em></p>
<p>I'll be installing TrueNAS Core 13 as a VM running on Proxmox. All 3.5" disks (storage) will be mapped to the VM and TrueNAS will expose the datasets using NFS, AFP, SMB or iSCSI. 
There's one very big advantage to this setup: I'm able to transfer all my drives from my old server to the new one (even the SSDs running the jails) and restore the configuration from the old server in this new VM. All services will be up &amp; running in no time, and to a certain degree I have the power efficiency I was looking for. In time I'll move the jails in TrueNAS to VMs in Proxmox. </p>
<p>Another advantage: I'll keep using FreeBSD under the hood, albeit virtualized. I like FreeBSD.</p>
<p>The downside: I have 6 SATA ports available on the motherboard and I will need all of them for the 4+2 disks from my old setup. So there's no room for an extra set of SSDs to run new VMs. I could solve this by adding a PCI SATA controller (which I still have lying around); however, this would increase power usage.</p>
<p>Another downside: FreeBSD (on which TrueNAS Core runs) is not known as a very power-efficient OS, so eventually I will need to replace TrueNAS Core with something else once I'm done moving all the jails to VMs.</p>
<h1 id="heading-2-proxmox-with-truenas-scale-vm">2. Proxmox with TrueNAS Scale VM</h1>
<p>I'll be installing TrueNAS Scale as a VM running on Proxmox. My current dataset can be easily imported. I will have to rebuild all jails as VMs in Proxmox; datasets will be shared using NFS or SMB (no AFP support in Scale, unfortunately). </p>
<p>I can use plugins (K8s) in Scale to run services, which would make it a bit easier to migrate all my stuff. I can even create VMs inside the TrueNAS VM with KVM! I tried that and performance ain't that bad, though not nearly as fast as a VM on Proxmox. <em>True VM-Inception!</em></p>
<p>But then again: TrueNAS Scale has KVM support of its own, maybe not as polished as Proxmox but good enough for me. Both are Debian-based and both are fairly energy efficient, so the real advantage of virtualizing TrueNAS is not really clear to me. The only thing I came up with: I can easily manage the TrueNAS VM (rebooting and such) from Proxmox.</p>
<h1 id="heading-3-proxmox-with-debian-vm-and-docker">3. Proxmox with Debian VM (and Docker!)</h1>
<p>This solution was brought to my attention by one of the nerds from <a target="_blank" href="https://www.metnerdsomtafel.nl/community">Met-Nerd-Om-Tafel-Slack</a>. I'll be installing Debian as a VM in Proxmox, let Debian deal with ZFS and share the dataset. Debian will also run Docker for services. Other VMs can run from Proxmox (like pfSense or Home Assistant), so if the Debian-with-Docker instance needs to reboot, other services won't be impacted (as long as they don't use the shares from the dataset).</p>
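<p>A minimal sketch of this scenario, assuming a pool named <code>tank</code> and made-up dataset/container names; ZFS can manage the NFS export itself:</p>

```shell
# On the Debian VM: create a dataset and export it over NFS via ZFS
zfs create tank/media
zfs set sharenfs="rw=@192.168.1.0/24" tank/media

# Run a service in Docker with the dataset bind-mounted read-only
docker run -d --name plex -v /tank/media:/media:ro plexinc/pms-docker
```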
<p>This is a terminal-only solution (though you could run a desktop environment on Debian). I can manage that, but since I've been using FreeNAS I've gotten used to managing my server with a web interface. 
One big plus: this is very power-efficient.</p>
<h1 id="heading-4-just-truenas-scale">4. Just TrueNAS Scale</h1>
<p>Not virtualized, just TrueNAS Scale on bare metal. All the advantages from #2, but without the overhead of Proxmox (though it's not clear how much that overhead really is). I can run K8s templates/apps, Docker and docker-compose, use KVM, have a nice NAS solution and a pretty web interface. </p>
<p>I have not tried it yet, but I did run into some networking issues when I ran TrueNAS in a VM, so those need to be ironed out before I'm satisfied with this choice. And I'm not that well-versed in k8s yet, but the TrueNAS UI takes a lot of the complexity away.</p>
<h1 id="heading-5-just-truenas-core">5. Just TrueNAS Core</h1>
<p>This would be by far the easiest way to migrate. Just move the SSDs, HDDs and boot USB from the old server to the new one, boot it up and I'm good to go. But there's no fun in that, and the power efficiency of FreeBSD (or lack thereof) would become an issue for me. Oh, and the pretty bad virtualization support... So, this is a no-deal.</p>
<h1 id="heading-general-thoughts-and-unanswered-questions">General thoughts and unanswered questions</h1>
<p><iframe src="https://giphy.com/embed/XHVmD4RyXgSjd8aUMb" width="480" height="360" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/chuber-feelings-thoughts-questions-XHVmD4RyXgSjd8aUMb">via GIPHY</a></p><p></p>
<p>There are some general questions I've been thinking about.</p>
<ol>
<li>I only have 32 GB of RAM. Will that be enough when I'm moving to more virtualization?</li>
<li>The Fujitsu board (D3644-B) has 2 M.2 slots: an 80 mm one for SSDs and a 30 mm one for Wi-Fi. Can I use the latter for storage as well?</li>
<li>The specs of the Fujitsu board say it supports up to 64 GB of RAM, but I've seen reports it can also go up to 128 GB. Not sure if that's really the case though.</li>
<li>I have an Intel 1 Gbit dual-NIC PCI network card in my current server; should I move that to my new machine (the Fujitsu board has an Intel NIC as well)? It could be of use when splitting up my network, but it might use more power.</li>
<li>If I get a set of new SSDs for storing VMs, which ones are suited for ZFS in a mirror? I've read several posts on the TrueNAS forum about Samsung SSDs failing after a few weeks due to too many writes. I always assumed Samsung SSDs were among the best out there.</li>
<li>Can I (and should I) forward the Intel GPU to a VM to run hardware encoding for Plex? I've seen videos of people doing that, but I'm uncertain whether the onboard GPU can be forwarded to a VM.</li>
<li>Does TrueNAS Scale support forwarding USB devices to VMs? If not, I won't be able to migrate Home Assistant from my Raspberry Pi 4 to TrueNAS Scale.</li>
</ol>
<p>I'll be doing some benchmarking and reviewing of each of the possible setups in the next days/weeks to get a better understanding of performance impact and power efficiency. I'll report back here!</p>
<p><iframe src="https://giphy.com/embed/8JZxSl9H3gsn9vtfYd" width="480" height="480" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/dietz-and-watson-dietzandwatson-8JZxSl9H3gsn9vtfYd">via GIPHY</a></p><p></p>
]]></content:encoded></item><item><title><![CDATA[My Quest for a new Home Server - Part 1 (of many)]]></title><description><![CDATA[How it started
I've been running a home-server (NAS) for ages. It started back in the early 2000's with a Mac Mini and a bunch of USB-harddisks, connected with a USB-hub. I used it as NAS, recorded tv-shows with it (with an Elgato Coax-tuner), served...]]></description><link>https://syntacticsugar.nl/my-quest-for-a-new-home-server-part-1</link><guid isPermaLink="true">https://syntacticsugar.nl/my-quest-for-a-new-home-server-part-1</guid><category><![CDATA[Linux]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Ivo Toby]]></dc:creator><pubDate>Thu, 12 May 2022 18:43:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652381629552/Kegklhf27.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-how-it-started">How it started</h1>
<p>I've been running a home server (NAS) for ages. It started back in the early 2000s with a Mac Mini and a bunch of USB hard disks, connected through a USB hub. I used it as a NAS, recorded TV shows with it (with an Elgato coax tuner) and served PHP pages with Apache. Fun times ;) I bought a "real" NAS several years later (not sure, but I think it was a QNAP) and kept using the Mac Mini for services.</p>
<p>Around 2010, and a lot of failed USB disks later, I built my first full system: a tower PC with a Core2Duo, 8 GB of RAM and 6x 2 TB + 2x 500 GB HDDs. At first I tried Ubuntu with soft-RAID, until a friend mentioned ZFS. I quickly installed PCBSD (FreeBSD with a desktop environment) and created my first RAID-Z dataset (I used RAID-Z2, which was a wise choice in hindsight). I installed all the services I needed in FreeBSD jails and all was fine and dandy. Then one day I lost two drives at once; both were pretty old and I was very happy I had chosen RAID-Z2 for my dataset. After a while I stumbled upon FreeNAS and fell in love with it. I decided to back up all my data and set up my home server with FreeNAS. Again: fun times!</p>
<p>In 2016 I decided to build a new server with new drives. My focus was on a more energy-efficient system, and I needed more storage. So I built a new system with an i3-6100, 16 GB of RAM and 4x 3 TB in RAID-Z1, and added an SSD as a ZIL. I used FreeNAS again. </p>
<p><iframe src="https://giphy.com/embed/WSqsdbIH6mLrHe78tJ" width="480" height="480" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/wind-turbines-eolien-WSqsdbIH6mLrHe78tJ">via GIPHY</a></p>Around 2019 I replaced the disks with 4 TB models (I dropped WD Greens in favor of WD Reds), added two brand-new SSDs in a mirror (for running jails and VMs) and expanded the RAM to 32 GB. Eventually I upgraded FreeNAS to TrueNAS. <p></p>
<p>This system is still running, doing a lot of stuff: downloading, VPN gateway, reverse proxy, IceCast streaming, security-cam recording, Plex Media Server, hosting websites, Git, VSCode server, home automation. Officially the hardware is still good to go; however, the energy efficiency is not great compared to modern hardware. It's never idle, and doing its thing it uses about 115 watts, which is quite a lot.</p>
<p>So; <strong>time for a new home-server</strong></p>
<h1 id="heading-hardware">Hardware</h1>
<p>I spent a good month researching energy-efficient parts, from the motherboard and the best CPU to new drives. The forum on the Dutch website Tweakers.net has <a target="_blank" href="https://gathering.tweakers.net/forum/list_messages/2096876">a very long thread</a> about energy-efficient hardware and I talked a lot about it with my nerd friends on the <a target="_blank" href="https://www.metnerdsomtafel.nl/community">Met-Nerd-Om-Tafel-Slack</a>.</p>
<p><iframe src="https://giphy.com/embed/eU2sRBEme4GIM" width="343" height="480" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/freaks-and-geeks-portrait-eU2sRBEme4GIM">via GIPHY</a></p>
This is what I came up with:<p></p>
<ul>
<li>Intel I3-8100</li>
<li>Fujitsu D3644-B Motherboard</li>
<li>2x Corsair Valueselect DDR4-2400 memory (non-ecc, yeah, I know)</li>
<li>be quiet! Pure Rock 2 black CPU Cooler</li>
<li>Corsair RM550x power supply unit </li>
<li>Cooler Master MasterBox NR600 Midi Tower</li>
<li>Gigabyte 128 GB PCIe M.2 SSD</li>
</ul>
<p>I'll be reusing my 4x 4 TB WD Reds and I'm planning on buying two SATA SSDs. Eventually I'll add 2x 16 GB of RAM to get to 64 GB.
I expect the WD Reds to keep up for another 2 years, after which they will be replaced by either 2x 16 TB drives or a bunch of 2.5" disks (which are more energy efficient). </p>
<p>After receiving all the parts I went to work, and in 2 hours I had my new system up &amp; running. After some small tweaks in the BIOS (mainly enabling P-states) and installing Debian 11, I checked my power meter: <strong>7 watts idle!!</strong> Now that's a difference!
Ok, the system was running without the power-hungry 3.5" disks, but it looked promising already.</p>
<p>Next; <strong>determining what OS to use. </strong></p>
<h1 id="heading-software-analysis-paralysis-awaits">Software -  Analysis Paralysis Awaits!</h1>
<p><em>Kudos to Saber Karmous for the term Analysis Paralysis</em></p>
<p><iframe src="https://giphy.com/embed/Iy0Kg9oUBw9smEKWH8" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/stickergiant-sticker-giant-Iy0Kg9oUBw9smEKWH8">via GIPHY</a></p>
This is going to be a tough one. I have a list of must-haves and nice-to-haves and I don't want to have to start over because I'm not happy with the OS.  <p></p>
<p><strong>The list of must-haves</strong></p>
<ul>
<li>Mature ZFS support</li>
<li>Able to run docker containers</li>
<li>Able to install apps from either templates (jails/k8s) or docker-compose</li>
<li>Able to map USB devices to VMs or containers (in order to replace my Pi 4 with Home Assistant)</li>
<li>Able to run VMs</li>
<li>Assign a LAN IP to each of the VMs or containers</li>
<li>Energy-efficient (doh)</li>
<li>Be extremely quiet </li>
<li>Reuse my current ZFS dataset without moving data to a temporary set of disks</li>
<li>Easy to restore in case of emergency</li>
<li>Have decent throughput</li>
<li>Run all the stuff I use already (preferably more :p )</li>
</ul>
<p><strong>The list of wanna-haves</strong></p>
<ul>
<li>Able to assign VMs or containers to specific VLANs (though I still have no idea how I got this working on my current setup)</li>
<li>Capable of running OBS from a Linux VM and transcoding two live streams at once </li>
<li>Replace my current fiber-router (OPNsense)</li>
</ul>
<h2 id="heading-the-contenders">The contenders</h2>
<p><iframe src="https://giphy.com/embed/1RzxeL2PuHYD1pw32i" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/ShalitaGrant-shalita-grant-its-on-oh-it-is-1RzxeL2PuHYD1pw32i">via GIPHY</a></p><p></p>
<p>There's a lot out there, but this is my shortlist:</p>
<ul>
<li><strong>TrueNAS Core.</strong> Trusty TrueNAS. I know my way around this OS; it's incredibly stable and secure. It does most of the things I want it to do, but there is one big downside: it does not run everything I throw at it. I've had several moments in the past few years in which I simply could not get something to run on FreeBSD because there was no FreeBSD package for it. Yeah, you can use bhyve to start a VM, but it's quite cumbersome to configure and it's not fast. </li>
<li><strong>TrueNAS Scale</strong>. The new kid on the block. Looks very promising: based on Debian 11, with mature ZFS support, Docker/k8s out of the box and virtualization with KVM. Looks like a perfect solution. And it probably is. I've tested it on my laptop in VirtualBox and I was impressed. I have not been able to assign LAN IPs to containers or VMs, but I did not spend a lot of time on that yet; I'll probably get that working as well. This one I need to check out.</li>
<li><strong>Proxmox</strong>. Looks promising. I looked at Proxmox years ago and did not like it back then, but I saw a lot of videos about it and I think I should give it another chance. Also to be checked out!</li>
<li><strong>UnRAID</strong>. Has a lot of fans, I'm not one of them. The way UnRAID handles disks seems very efficient and safe, but I want ZFS, I like ZFS, I know ZFS and ZFS saved me several times before. There are plugins for UnRAID to have it support ZFS, but it feels a bit wonky to me. It's on the list, but I'm not sure if I really want to try it.</li>
<li>Plain old <strong>Debian</strong> with ZFS. I can install another Debian VM on top of it with Docker containers, for easy rebooting. No shiny GUI, just plain ol' terminal. But it seems flexible and possibly incredibly fast. Must try this as well.</li>
<li>Finally; <em>Any combination of one of the solutions mentioned above</em>. </li>
</ul>
<p>Say what? Yeah, I came across <a target="_blank" href="https://youtu.be/wPd6lpM01FY">this video on YouTube</a> where a guy has a Proxmox host, installed TrueNAS Scale as a VM, passed a hardware controller with a bunch of hard drives along to TrueNAS Scale, set up a ZFS dataset in the TrueNAS VM and returned the storage to Proxmox using SMB shares. Proxmox was then able to run VMs from those SMB shares. I was watching this and the first thing that came to mind: what's wrong with this guy? Proxmox has ZFS support, why would you want to run ZFS from a VM? And why would you want to run VMs from an SMB share???</p>
<h2 id="heading-i-have-questions">I have questions</h2>
<p><iframe src="https://giphy.com/embed/d1E1YlkOTe4IfdNC" width="480" height="480" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/latenightseth-question-seth-meyers-d1E1YlkOTe4IfdNC">via GIPHY</a></p><p></p>
<p>I mentioned this TrueNAS-as-a-VM-in-Proxmox idea to my nerd friends on Slack. The first reaction I got was: yeah, I have my server running exactly like in that video. Aight, this is a thing? </p>
<p>So I'll be installing Proxmox; I wanted to take a good look at it anyhow. Time for some benchmarking! In a few days I'll report back here!</p>
<p>Subscribe to be updated when I report my findings!</p>
<p><iframe src="https://giphy.com/embed/3o7qDSOvfaCO9b3MlO" width="480" height="342" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/obama-mic-drop-out-3o7qDSOvfaCO9b3MlO">via GIPHY</a></p><p></p>
]]></content:encoded></item><item><title><![CDATA[VSCode server on Azure Ubuntu VM.]]></title><description><![CDATA[In this article I'll explain step-by-step on how to create your own VSCode server running on a VM in Microsoft Azure.
You do not need Azure, you can also use this guide on a VM on a home-server, any other cloud provider or a VM provided by your emplo...]]></description><link>https://syntacticsugar.nl/vscode-server-on-azure-ubuntu-vm</link><guid isPermaLink="true">https://syntacticsugar.nl/vscode-server-on-azure-ubuntu-vm</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[Visual Studio Code]]></category><category><![CDATA[vscode extensions]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Ivo Toby]]></dc:creator><pubDate>Sat, 08 May 2021 22:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1651648003038/jrmwfyIah.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article I'll explain step by step how to create your own VSCode server running on a VM in Microsoft Azure.
You do not need Azure; you can also use this guide with a VM on a home server, any other cloud provider, or a VM provided by your employer.</p>
<h1 id="heading-what-to-expect-from-this-guide">What to expect from this guide?</h1>
<p>After following these steps, you'll end up with a development server which you can use to work on Node.js frontend and backend projects. You could probably use it for other stacks. From my experience working with this setup is almost the same as running VSCode on your own machine, except it's not on your local machine.</p>
<p>You can connect your local VSCode editor to the VSCode server using SSH. Code completion, 'go to definition', debugging, running your code, even the terminal: it's all the same. If you've ever worked in VSCode on Windows with locally running containers or WSL: it's exactly the same.</p>
<p>Your development server will connect to your Global Area Network using <a target="_blank" href="https://www.zerotier.com/">ZeroTier</a>. This makes it easy to connect without having to change your local configuration each time the server starts and gets a new IP address. Your development machine will be available as if it were on your local network. Nice bonus: you can completely fence the VM behind a firewall and still have SMB access.</p>
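<p>Joining the VM to a ZeroTier network only takes a few commands. This is a sketch; the network ID below is a placeholder for your own:</p>

```shell
# Install the ZeroTier client on the VM
curl -s https://install.zerotier.com | sudo bash

# Join your network (placeholder network ID)
sudo zerotier-cli join 0123456789abcdef

# After authorizing the member in ZeroTier Central, check the assigned address
sudo zerotier-cli listnetworks
```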
<h3 id="heading-why-would-you-want-to-run-vscode-from-a-server">Why would you want to run VSCode from a server?</h3>
<p>Couple of reasons</p>
<ul>
<li><strong>Resources</strong>: a fairly complex web app can contain a lot of files and do a lot of resource-hungry stuff. My main development machine is a MacBook Pro 13" from 2019. It has an Intel Core i7 and 16 GB of RAM. Should be enough, right? While working, most of us have other stuff running: Slack or other chat apps, an email client, your browser (most webdevs have several running at once), a music player. My MacBook throttles a lot and gets pretty hot when I'm working on a fairly large TypeScript codebase. Using a dedicated VM results in faster transpiling, faster response when testing your app, faster code completion and overall a more productive IDE/text editor.</li>
<li><strong>Security</strong>: the code I write for my employer stays on a system my employer owns, which is safe. If you're freelancing you can even use this as a USP: "everything I code for you is on systems you own".</li>
<li><strong>Flexibility</strong>: you can work from whatever machine you have within reach, as long as it is connected and has VSCode. Let me correct that: as long as it has a decent browser. You can use a normal local VSCode instance to connect to your VSCode server, and I'd recommend this as a daily driver. However, since VSCode is a web app you can also use your browser to connect to your VSCode server. You can use a Chromebook, or even an iPad with a keyboard, and you have the full functionality you'd have with the VSCode application. And since your code is not actually on the machine you're using, it does not really matter if it's a company laptop or a personal laptop.</li>
</ul>
<h1 id="heading-costs">Costs</h1>
<p>Free Microsoft credits aside, this VM will probably set you back around $25 per month. You can shut down the VM when you are not working, but you will need some grunt to comfortably run VSCode server. I use the <em>Standard B2ms (2 vcpus, 8 GB memory) VM size</em>, which costs $70 per month. That's steep, and you might get the same results with the B2s instance, which has 2 cores, 4 GB of RAM and a 16 GB SSD, and will set you back roughly $15 per month if you shut it down outside working hours; left running a full month it would cost about $35. I'll be testing the B2s instance in the upcoming week and will report back on my findings.</p>
<p><em>Update</em>: after one morning working on the B2s instance I ran into memory issues. I had 3 projects open, 2 of them running (a serverless backend and an isomorphic frontend), when I noticed the editor getting sluggish; <code>top</code> revealed there was no RAM left. Since Azure Linux VMs have no swap enabled by default, the VM was slowly grinding to a halt. So I created a swap-file using the procedure described at the end of this article and I'm currently working with 4GB of RAM and 5GB of swap.</p>
<h1 id="heading-prerequisites">Prerequisites</h1>
<p>I assume you have all of the following items in place, or know a decent amount about them;</p>
<ul>
<li>An Azure account, either with credits or a valid credit card, and a reasonable understanding of what Azure is, how to use it and how the portal works.</li>
<li>You're comfortable with the Linux terminal and know how to create an SSH key and install packages.</li>
<li>You already have a ZeroTier account and the ZeroTier client installed on your own machine. There are a lot of resources explaining how to set up ZeroTier, so use the-Google for that (or read <a target="_blank" href="https://www.stratospherix.com/support/setupvpn_01.php">this</a>).</li>
<li>If you want to secure the webinterface with an SSL certificate: a (sub)domain of which you can update the DNS records (recommended!)</li>
</ul>
<h1 id="heading-lets-get-started">Let's get started!</h1>
<ul>
<li><strong>Create</strong> a Virtual Machine in Azure in the region closest to where you are; select whatever type you want and your credit card allows. I will be setting up a B2s instance, with 2 cores and 4GB of RAM. </li>
<li><strong>Select</strong> Ubuntu Server 21.04 - Gen1 as image.</li>
<li>Use <strong>SSH public</strong> key authentication and use the key Azure creates, or a key you already have in place. Please note: you cannot use ed25519 keys for now. Don't forget to enter a username to log in with.</li>
<li><strong>Network</strong>; for now allow SSH (22) and port 80 (service: http).</li>
<li><strong>Disks;</strong> depending on your needs you can add extra data disks. For my situation the standard amount of 32 GB is enough.</li>
<li><strong>Management;</strong> enable auto shutdown and set a time that's convenient for you. I use 9 pm; the likelihood of me still working at 9 pm is very slim.</li>
<li>When the VM is up and running, connect to it with SSH. You can use the IP found under "Overview" in the Azure portal. If the SSH key you used is not your default key you can use the -i argument to select it, like so:  <pre><code>ssh <span class="hljs-operator">-</span>i <span class="hljs-operator">~</span><span class="hljs-operator">/</span>.ssh/id_rsa user@<span class="hljs-keyword">address</span>
</code></pre></li>
<li>First thing I usually do is add my <strong>ed25519</strong> key to <code>~/.ssh/authorized_keys</code> (<em>optional</em>).</li>
<li>Second thing; <strong>update</strong> the system; <pre><code>sudo apt<span class="hljs-operator">-</span>get update <span class="hljs-operator">&amp;</span><span class="hljs-operator">&amp;</span> sudo apt<span class="hljs-operator">-</span>get upgrade
</code></pre></li>
<li>Configure <strong>max_user_watches</strong>. If you keep this at its default value you might get errors like <code>Error: ENOSPC: System limit for number of file watchers reached</code> when you use tools like <code>nodemon</code> or other file watchers in larger codebases. You can increase the value for <code>max_user_watches</code>:<pre><code>echo fs.inotify.max_user_watches=<span class="hljs-number">524288</span> <span class="hljs-operator">|</span> sudo tee <span class="hljs-operator">-</span>a <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>sysctl.conf <span class="hljs-operator">&amp;</span><span class="hljs-operator">&amp;</span> sudo sysctl <span class="hljs-operator">-</span>p
</code></pre></li>
<li>Now install <strong>ZeroTier</strong>: <pre><code><span class="hljs-attribute">curl</span> -s https://install.zerotier.com | sudo bash
</code></pre></li>
<li>Authorize the client at the ZeroTier website and give it a static IP (by adding an address to the machine by hand on the website instead of letting the site decide).</li>
<li><p>Disable the ubuntu <strong>firewall</strong>:  </p>
<pre><code>sudo ufw <span class="hljs-keyword">disable</span>
</code></pre><p><em>Now try to connect to the VM with SSH on its ZeroTier address before proceeding. It could take a while before the virtual network is up &amp; running, also after rebooting!</em></p>
</li>
<li><p>Set a <strong>password</strong> for your user; you will need it to install packages from the VSCode terminal: <code>sudo passwd [your username]</code> </p>
</li>
<li><p>Download VSCode server from <a target="_blank" href="https://github.com/cdr/code-server/releases">https://github.com/cdr/code-server/releases</a> and install it:</p>
<pre><code><span class="hljs-attribute">wget</span> -q https://github.com/cdr/code-server/releases/download/v<span class="hljs-number">3</span>.<span class="hljs-number">9</span>.<span class="hljs-number">3</span>/code-server_<span class="hljs-number">3</span>.<span class="hljs-number">9</span>.<span class="hljs-number">3</span>_amd<span class="hljs-number">64</span>.deb
<span class="hljs-attribute">sudo</span> dpkg -i code-server_<span class="hljs-number">3</span>.<span class="hljs-number">9</span>.<span class="hljs-number">3</span>_amd<span class="hljs-number">64</span>.deb
</code></pre></li>
<li><p>Set up the <strong>systemd</strong> user service:</p>
<pre><code>systemctl <span class="hljs-operator">-</span><span class="hljs-operator">-</span>user start code<span class="hljs-operator">-</span>server
systemctl <span class="hljs-operator">-</span><span class="hljs-operator">-</span>user enable code<span class="hljs-operator">-</span>server
</code></pre></li>
<li><p>Configure <strong>authentication</strong> by editing <code>~/.config/code-server/config.yaml</code>. Set up a strong password; you won't need to change the IP-binding since we'll be setting up a reverse proxy.</p>
</li>
</ul>
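<p>For reference, after these steps my <code>~/.config/code-server/config.yaml</code> looks roughly like this (a sketch; the password here is obviously a placeholder you should swap for a strong one of your own):</p>
<pre><code>bind-addr: 127.0.0.1:8080
auth: password
password: replace-me-with-a-long-random-passphrase
cert: false
</code></pre>
<p>Leaving <code>bind-addr</code> on 127.0.0.1 means code-server is only reachable through the reverse proxy or an SSH tunnel, never directly.</p>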
<p><em>If you don't want to use the web interface and will only use SSH from another VSCode app you're basically ready; skip the next steps to finish up.</em></p>
<p>If you do like to use VSCode from a <strong>browser</strong>, move on to install <strong>NGINX</strong> and optionally <strong>Let's Encrypt</strong>.</p>
<p>You need to set up a (sub)domain with an A record that points to the IP address assigned to the VM. For this tutorial I set up <code>vscode.syntacticsugar.nl</code> with a TTL of 60 seconds to ensure it's available quickly. You can change the IP to the IP you've assigned from <strong>ZeroTier</strong> in a later stage.</p>
<p>Install Let's Encrypt</p>
<pre><code><span class="hljs-attribute">sudo</span> apt install certbot -y
</code></pre><p>Request a certificate</p>
<pre><code>sudo certbot certonly --standalone --agree-tos -m &lt;enter your email&gt; -d &lt;the domain you set up&gt;
</code></pre><p><em>This could fail the first few times as DNS updates tend to be slower whenever you need them to be fast.</em></p>
<p>When the certificate has been successfully created, change the DNS to the IP address you assigned in ZeroTier.</p>
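<p>Let's Encrypt certificates expire after 90 days. On Ubuntu the certbot package usually installs a systemd timer that renews them automatically (check with <code>systemctl list-timers</code>); if it doesn't, a cron entry like this sketch will do the job. Since the certificate was issued with <code>--standalone</code>, renewal needs port 80 free, hence the hooks. Also note that HTTP validation requires the domain to resolve to a publicly reachable IP, so with the DNS pointed at your ZeroTier address you may need to switch it back temporarily when renewing:</p>
<pre><code># /etc/cron.d/certbot-renew (example) - attempt renewal twice a day
0 */12 * * * root certbot renew --quiet --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
</code></pre>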
<p><strong>NGINX reverse (SSL) proxy</strong></p>
<ul>
<li>Install NGINX<pre><code><span class="hljs-attribute">sudo</span> apt install nginx -y
</code></pre></li>
<li>Create a config<pre><code>cd <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">/</span>sites<span class="hljs-operator">-</span>available<span class="hljs-operator">/</span>
sudo vim code<span class="hljs-operator">-</span>server
</code></pre></li>
<li>If you have set up SSL, paste this config (and replace the placeholders with your own domain): </li>
</ul>
<pre><code>server {
 listen <span class="hljs-number">80</span>;
 server_name <span class="hljs-operator">&lt;</span>YOUR DOMAIN<span class="hljs-operator">&gt;</span>;
 <span class="hljs-keyword">return</span> <span class="hljs-number">301</span> https:<span class="hljs-comment">//$server_name:443$request_uri;</span>
}

server {
 listen <span class="hljs-number">443</span> ssl http2;
 server_name <span class="hljs-operator">&lt;</span>YOUR DOMAIN<span class="hljs-operator">&gt;</span>;

 ssl_certificate <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>letsencrypt<span class="hljs-operator">/</span>live<span class="hljs-operator">/</span><span class="hljs-operator">&lt;</span>YOUR DOMAIN<span class="hljs-operator">&gt;</span><span class="hljs-operator">/</span>fullchain.pem;
 ssl_certificate_key <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>letsencrypt<span class="hljs-operator">/</span>live<span class="hljs-operator">/</span><span class="hljs-operator">&lt;</span>YOUR DOMAIN<span class="hljs-operator">&gt;</span><span class="hljs-operator">/</span>privkey.pem;

 location <span class="hljs-operator">/</span> {
 proxy_pass http:<span class="hljs-comment">//127.0.0.1:8080/;</span>
 proxy_set_header Host $host;
 proxy_set_header Upgrade $http_upgrade;
 proxy_set_header Connection upgrade;
 proxy_set_header Accept<span class="hljs-operator">-</span>Encoding gzip;
 }
}
</code></pre><ul>
<li>Activate the vhost<pre><code>sudo ln <span class="hljs-operator">-</span>s <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">/</span>sites<span class="hljs-operator">-</span>available<span class="hljs-operator">/</span>code<span class="hljs-operator">-</span>server <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">/</span>sites<span class="hljs-operator">-</span>enabled<span class="hljs-operator">/</span>
</code></pre></li>
<li>Check the config<pre><code><span class="hljs-attribute">sudo</span> nginx -t
</code></pre></li>
<li>If all's fine, restart the services;<pre><code>sudo systemctl <span class="hljs-keyword">restart</span> nginx
sudo systemctl <span class="hljs-keyword">enable</span> nginx
</code></pre></li>
</ul>
<p>Check if you can reach code-server from your browser by going to https://yourdomain</p>
<p>Harden the <strong>firewall</strong> of your VM in the Azure Portal in the Networking section. If you dare to rely on your ZeroTier connection you can disable SSH completely. If you're not the daring type, consider only allowing SSH connections from your own company or home IP. Also remove the rule for port 80. If you are planning to use VSCode from a browser <strong>outside</strong> ZeroTier you can leave port 80 and add an allow rule for port 443. This is <strong>NOT</strong> recommended from a security point of view.</p>
<h1 id="heading-optional-steps">Optional steps</h1>
<p>So basically you have a VSCode server running! Hurray! Next up is installing the toolset you need for the language you're working with. I'm sticking with NodeJS, so the following steps only apply if you want to work with Node.</p>
<p>Install <strong>NVM</strong> (node version manager, check <a target="_blank" href="https://github.com/nvm-sh/nvm">https://github.com/nvm-sh/nvm</a> for the latest version)</p>
<pre><code>curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
</code></pre><p>Setup <strong>paths</strong></p>
<pre><code>nano <span class="hljs-operator">~</span><span class="hljs-operator">/</span>.zshrc
</code></pre><p>Paste the following config at the <strong>end</strong> of the file:</p>
<pre><code><span class="hljs-built_in">export</span> NVM_DIR=<span class="hljs-string">"<span class="hljs-variable">$HOME</span>/.nvm"</span>
[ -s <span class="hljs-string">"<span class="hljs-variable">$NVM_DIR</span>/nvm.sh"</span> ] &amp;&amp; \. <span class="hljs-string">"<span class="hljs-variable">$NVM_DIR</span>/nvm.sh"</span>  <span class="hljs-comment"># This loads nvm</span>
[ -s <span class="hljs-string">"<span class="hljs-variable">$NVM_DIR</span>/bash_completion"</span> ] &amp;&amp; \. <span class="hljs-string">"<span class="hljs-variable">$NVM_DIR</span>/bash_completion"</span>
</code></pre><p><strong>Reload</strong> your environment:</p>
<pre><code>source <span class="hljs-operator">~</span><span class="hljs-operator">/</span>.zshrc
</code></pre><p>Install the <strong>Node.js</strong> version you want to use (to list all available versions, use <code>nvm ls-remote</code>):</p>
<pre><code><span class="hljs-attribute">nvm</span> install v<span class="hljs-number">16</span>.<span class="hljs-number">4</span>.<span class="hljs-number">2</span>
</code></pre><h1 id="heading-connecting-vscode-from-your-local-machine-with-vscode-server">Connecting VSCode from your local machine with VSCode server</h1>
<p>Install the <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack"><strong>VSCode Remote Development extension pack</strong></a>.
Open VSCode and click "<strong>Open Remote window</strong>" in the bottom-left corner.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651735143564/I3-XW9Ias.png" alt="n4c2hd9k220if6mmlc1p.png" /></p>
<p>Select <code>Open SSH Configuration File</code> and select the config file in your <code>home-directory/.ssh</code></p>
<p>Add the following configuration (and modify it to reflect your environment):</p>
<pre><code>Host [the hostname you used to create SSL or the ZeroTier IP address]
HostName [the ZeroTier IP address]
User [your username]
IdentityFile ~/.ssh/id_ed25519 [or the SSH private key file you use to connect]
</code></pre><p>Now click the same button <code>Open Remote window</code> , select <code>Connect to host</code> and select the host you just added.</p>
<p>If all is fine you should get an empty VSCode window, the button has changed and shows <code>SSH: [hostname]</code>. </p>
<p><strong>Congrats; you are now working on your VSCode server!</strong></p>
<h1 id="heading-plugins">Plugins</h1>
<p>Open the plugins tab and scroll through the list of locally installed plugins. Click <code>Install in SSH: [hostname]</code> to install them on your VSCode server. You will probably need to close VSCode and reconnect.</p>
<h1 id="heading-tips-and-tricks-and-daily-usage">Tips and tricks and daily usage</h1>
<h2 id="heading-getting-started-in-the-morning">Getting started in the morning</h2>
<p>I have not found an easy way to autostart my VM every morning. To be honest, I don't think I need that either. I have days with back-to-back meetings and I don't want the VM burning through my Azure credits without me using it.
So I log in to the Azure portal and start it manually every time I need it. When it's up and running I connect my local VSCode app and hack away.
<em>Update</em>: I stumbled upon the Azure app for iOS, which makes it very easy to start your development VM.</p>
<h2 id="heading-portmapping">Portmapping</h2>
<p>If you run a project using Node you'd normally fire up a browser and navigate to http://localhost:port. <em>Using VSCode server is exactly the same!</em> VSCode will create SSH tunnels for you so you can connect to localhost:portnumber. So you won't run into CORS issues or other strange behaviour. </p>
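<p>VSCode sets these tunnels up automatically, but if you ever connect with plain <code>ssh</code> you can get the same effect yourself with a <code>LocalForward</code> entry in <code>~/.ssh/config</code> (a sketch; the host alias, IP and port are examples):</p>
<pre><code>Host vscode-server
    HostName 10.147.17.10
    User ivo
    IdentityFile ~/.ssh/id_ed25519
    # expose the dev server running on the VM as localhost:3000 on your own machine
    LocalForward 3000 localhost:3000
</code></pre>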
<h2 id="heading-opening-a-vscode-window-from-the-vscode-terminal">Opening a VSCode window from the VSCode terminal</h2>
<p>Imagine: you are working on a frontend on your VSCode server from a local VSCode instance using SSH. You realize you need to check some stuff in another project, which has been cloned into another folder on your VSCode server. You can <code>cd</code> to that project using the terminal within VSCode and fire up a new editor by simply typing <code>code .</code></p>
<h2 id="heading-finishing-up-for-the-day">Finishing up for the day</h2>
<p>You had a productive day writing elegant code and finishing several tickets. You're ready for a quiet evening doing other stuff. Before closing the lid of your laptop be sure to save ALL files in VSCode and commit &amp; push your work. Your VM will shut down later tonight which could lead to data loss. I have not run into this, but better safe than sorry right?</p>
<h1 id="heading-known-issues">Known Issues</h1>
<ul>
<li>It could take a while for ZeroTier to connect after booting the server. If ZeroTier does not connect at all, try to log in using SSH with the dynamic IP assigned by Azure and run the ZeroTier join command: <code>sudo zerotier-cli join &lt;your network-id from ZeroTier&gt;</code></li>
<li>The VSCode webinterface might work better if you use Chrome.</li>
<li>Not enough memory? Enable swap on your Azure VM:
edit <code>/etc/waagent.conf</code> and
add or uncomment these lines (set <code>SwapSizeMB</code> to match the amount of RAM of your VM, or more):<pre><code>ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=<span class="hljs-operator">/</span>mnt<span class="hljs-operator">/</span>resource 
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=<span class="hljs-number">4096</span>
</code></pre>Reboot your VM and you should see swap memory in <code>top</code>.</li>
</ul>
<h1 id="heading-questions-praise-complaints">Questions? Praise? Complaints?</h1>
<p>Email: ivo@syntacticsugar.nl
Twitter: <a target="_blank" href="https://twitter.com/buttonfreak">https://twitter.com/buttonfreak</a></p>
]]></content:encoded></item><item><title><![CDATA[Securing Goodwe inverter]]></title><description><![CDATA[Last year we got 15 of PV’s/solar-panels installed, accompanied with a Goodwe inverter. The inverter has a USB-Wifi stick to get a connection to Goodwe and upload data so I can monitor the amount of power we produce with an app. 

No need for scannin...]]></description><link>https://syntacticsugar.nl/securing-goodwe-inverter</link><guid isPermaLink="true">https://syntacticsugar.nl/securing-goodwe-inverter</guid><category><![CDATA[hacking]]></category><dc:creator><![CDATA[Ivo Toby]]></dc:creator><pubDate>Thu, 22 Aug 2019 16:41:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652978185538/sxOygKIVy.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last year we got 15 PV’s/solar panels installed, accompanied by a Goodwe inverter. The inverter has a USB wifi stick that connects to Goodwe and uploads data, so I can monitor the amount of power we produce with an app. </p>
<blockquote>
<pre><code><span class="hljs-keyword">No</span> need <span class="hljs-keyword">for</span> scanning vulnerabilities <span class="hljs-keyword">and</span> <span class="hljs-keyword">no</span> brute-forcing.
Just <span class="hljs-keyword">log</span> <span class="hljs-keyword">on</span> <span class="hljs-keyword">and</span> your home network <span class="hljs-keyword">is</span> exposed!
</code></pre></blockquote>
<p>Besides my data being uploaded to some Chinese server, which I really don’t like, I also noticed the default WIFI-settings are very weak and can be guessed without any need for password-lists. </p>
<p>By default the inverter stays in AP mode visible as “Solar-Wifi”, even when connected to your home network, with password “12345678”. </p>
<p><iframe src="https://giphy.com/embed/l0HlJdvh9AEfwDAiI" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/southparkgifs-l0HlJdvh9AEfwDAiI">via GIPHY</a></p><p></p>
<p>After getting the connection all you need to do is find out which gateway you get assigned from DHCP, browse to that address (probably http://10.10.100.253 ), enter “admin/admin” as username &amp; password and you’re in.</p>
<p><iframe src="https://giphy.com/embed/YQitE4YNQNahy" width="480" height="270" class="giphy-embed"></iframe></p><p><a href="https://giphy.com/gifs/YQitE4YNQNahy">via GIPHY</a></p><p></p>
<p>Shockingly the interface shows the password of the home network in clear text, so connecting and penetrating the connected network is very very easy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652978358848/mtmTd94Tr.jpg" alt="Untitled-2-1024x860.jpg" /></p>
<p>Yes, my home network Wifi AP is called “<em>homeland-security</em>” :p</p>
<p>Several of our neighbors have the same inverter, all of them have their wifi exposed and none of them have taken action to secure the inverter (up until now ;-)</p>
<p>Unfortunately you can’t disable the wifi AP on the inverter itself, and you can’t change the admin password for the web interface. You can, however, change the default wifi password of the inverter, which I strongly recommend.</p>
<p>Sidenote: the inverter software seems to think it’s not connected to a wifi AP and reports having no connection, while it is in fact connected and reachable from the home network. This is probably a bug and might be responsible for keeping its own AP active and visible.</p>
<p>I’ve “reverse engineered” the latest Goodwe-API to allow syncing of power-data from Goodwe to pvoutput.org. It’s still “beta” but you can download and install the script from <a target="_blank" href="https://github.com/buttonfreak/goodwe-api">Github</a>. Fortunately Goodwe dropped the old API which had no authentication at all and was easy to query for other users’ power-data (and location).</p>
<p>For my next Goodwe project I’ll be working on getting rid of the Goodwe backends. I’ve already seen the values posted by the inverter by using an ARP spoof; the data is transmitted unencrypted with a simple HTTP POST, so creating a simple service in Node should not be very complex. </p>
]]></content:encoded></item></channel></rss>