Raspberry Pi DDNS


A couple of weeks back, I found my lost Raspberry Pi 1, and I wanted to put it to good use.


Since I built my lab, I have had a virtual machine running the Dynamic DNS service from noIP to “link” my public IP with a DNS name, so I can connect to my jump server from the internet (outside of my local LAN). This is necessary because your public IP is not static (unless you pay for one) and it changes from time to time, and let’s be frank, it’s easier to remember a name than an IP 😉
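
Conceptually, a Dynamic DNS client just tells the provider “this hostname now maps to my current public IP”. Here is a minimal sketch of the kind of update request the noIP client sends, using no-ip’s documented update endpoint; the hostname, IP and credentials below are placeholders, not real values:

```shell
# Placeholders: a real client fills these in from its config and by
# discovering the current public IP on its own.
NOIP_HOST="myhost.ddns.net"
CURRENT_IP="203.0.113.7"   # documentation-range IP, stands in for your public IP

# no-ip's update endpoint: GET /nic/update with the hostname and new IP
UPDATE_URL="https://dynupdate.no-ip.com/nic/update?hostname=${NOIP_HOST}&myip=${CURRENT_IP}"
echo "$UPDATE_URL"

# A real client would authenticate and fire the request, e.g.:
#   curl -u youruser:yourpassword "$UPDATE_URL"
```

The noip2 daemon we install below automates exactly this: it watches your public IP and re-sends the update whenever it changes.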


This was all fun and games except when something went wrong on my ESXi host and the VM rebooted, or when your better half asks you to power off the server because it was making too much noise 😀 (I bet you’ve all been through this).

The plan is simple: use the RPi (Raspberry Pi) to host the DDNS service as well as a WireGuard VPN server. For now, we will focus only on the DDNS service.

What you will need:

  • Raspberry Pi (any version will do)
  • SD or microSD card depending on the Raspberry Pi version you have
  • SD/microSD card reader
  • Power supply for your RPi
  • Network cable (CAT 6 in my case), or you can use the built-in Wi-Fi your RPi might have
  • noIP account with a host already created

Steps involved

  • Install an OS on your RPi
  • Install and configure the noIP client

How to install an OS on your Raspberry Pi

For this tutorial I will use the RPi Imager, which you can download from the official Raspberry Pi website.

Once you have it installed, just run it and follow the wizard.

First, let us select the operating system. In my case I will install Ubuntu Server 20.10, as I don’t need a GUI for this project. Because Ubuntu Server isn’t offered for the RPi 1, I opted for the 32-bit image (RPi 2/3/4/400).

Then select your storage device (pay close attention that you are selecting the correct one).

Press Write and Yes to continue, and in a couple of minutes your RPi will be installed and ready to rock. Well, almost…

After the OS was installed I had to enable SSH, as I wouldn’t have access to a monitor close to the RPi.

How to enable SSH on a fresh RPi installation

After the installation is complete you should see a new drive (the SD/microSD card) connected to your computer. In my case it’s drive I:. The letter might be different in your case, but the drive name will always be boot.

Navigate to the root folder of the boot drive and create a file called ssh, with no extension, and leave it empty.
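
If you prefer the command line, the same thing looks like this. A sketch: the BOOT_DIR path below is a local demo directory standing in for wherever the boot partition is actually mounted (e.g. /media/$USER/boot on Linux, or the root of the I: drive on Windows):

```shell
# BOOT_DIR stands in for the mounted boot partition; a local demo
# directory is used here so the sketch runs anywhere.
BOOT_DIR="${BOOT_DIR:-./boot-demo}"
mkdir -p "$BOOT_DIR"

# An empty file named exactly "ssh" (no extension) is all the firmware
# looks for to enable the SSH server on first boot.
touch "$BOOT_DIR/ssh"
ls "$BOOT_DIR"
```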

And that is it, SSH is enabled on your RPi.

Connecting to your RPi and installing the noIP service

The first step is to connect your RPi to the network using the network cable, plug in the power supply and let it boot up. Wait a minute, then connect to your router’s administration page, look for the connected device called RASPBERRYPI and copy its IP address.

You can use PuTTY to establish a connection to the RPi

Default credentials are:
user: pi
password: raspberry

(These are the Raspberry Pi OS defaults. If you installed Ubuntu Server like I did, the default user and password are both ubuntu, and you will be asked to change the password on first login.)

After you log in, the next step is to update and upgrade your RPi installation. For that, type the following commands:

$ sudo apt-get update
$ sudo apt-get upgrade

Press ENTER when prompted to continue.

Create a folder in your home directory called noip and download the noIP client tarball into it with the following commands:

$ mkdir noip
$ cd noip
$ wget https://www.noip.com/client/linux/noip-duc-linux.tar.gz

After the download completes, unpack the file, go into the noip-2.1.9-1 folder, and build and install the client:

$ tar vzxf noip-duc-linux.tar.gz
$ cd noip-2.1.9-1
$ sudo make
$ sudo make install

Now it’s time to follow the configuration prompts.

In this case select 0, as the RPi is connected using a network cable.

Next you will be asked for your no-ip.com account and password, and whether you want to update all your hosts. I initially selected no but ended up choosing all, as I have two right now. For the update interval, I left the default.

After this point, the noIP service is configured in your environment.

To run the service and check its status, run the following commands:

$ sudo /usr/local/bin/noip2
$ sudo noip2 -S

Now that the service is running fine, the next thing you’ll want to do is start it automatically when your RPi boots.

Adding noip2 service to autostart

To perform this task we will use systemd, which ships with Ubuntu.

Create a file at /etc/systemd/system/noip2.service using the following command:

$ sudo touch /etc/systemd/system/noip2.service

After the file is created, edit it and paste the following content inside:

$ sudo vi /etc/systemd/system/noip2.service

Once vi is open, press “i” once so you can insert text, then paste the lines below:

[Unit]
Description=noip2 service

[Service]
Type=forking
ExecStart=/usr/local/bin/noip2
Restart=always

[Install]
WantedBy=default.target

Once you’re finished, type “:wq” to save the file and close it.
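
As an alternative to touch plus vi, the whole unit can be written in one step with a here-doc. A sketch: I’m writing to a demo path here, while the real target is /etc/systemd/system/noip2.service (which needs root, hence tee rather than plain redirection):

```shell
# Demo path so the sketch runs without root; for real use set
# UNIT=/etc/systemd/system/noip2.service and prefix tee with sudo.
UNIT="${UNIT:-./noip2.service}"

# Quoted 'EOF' keeps the unit body literal (no variable expansion).
tee "$UNIT" > /dev/null <<'EOF'
[Unit]
Description=noip2 service

[Service]
Type=forking
ExecStart=/usr/local/bin/noip2
Restart=always

[Install]
WantedBy=default.target
EOF

cat "$UNIT"
```

Type=forking matches how noip2 behaves: the binary daemonizes itself and the parent process exits, so systemd tracks the forked child instead of the launcher.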

Afterwards, run the following command:

$ sudo systemctl daemon-reload

This will reload systemd so it’s aware of the new unit you have just created.

Now you can check the status of, start and stop your service using the following commands:

$ sudo systemctl status noip2
$ sudo systemctl start  noip2
$ sudo systemctl status noip2
$ sudo systemctl stop   noip2
$ sudo systemctl status noip2

To enable autostart, issue this command:

$ sudo systemctl enable noip2

And that’s it for today.

You have just installed noIP on your RPi and set it to autostart on every boot.

KISS 😉

References:

https://ubuntu.forumming.com/question/7826/can-39-t-get-service-noip2-to-start-on-boot

https://www.noip.com/support/knowledgebase/install-ip-duc-onto-raspberry-pi/

[VMwareLAB] – 3.the design

Time for the fun stuff 😀

In my previous posts (here and here) I talked about what made me decide to go for a physical server and the parts I selected considering my budget and the lowest possible noise; today I will talk about the logical design behind the lab.

I will use my physical server mainly to host nested environments, as I have plenty of resources for that. I can run multiple versions of the same product, and destroy and recreate everything without needing to reinstall the whole lab.

Without further ado, this is my lab

[Diagram: homelabv1]

Now that you’ve seen it let’s talk a little bit about every component and why I do have them.

vCenter Server: a vCenter Server Appliance with embedded PSC; it lets me use instant clones and clone VMs;

Windows Domain Controller: it serves as DNS/AD/DHCP for the first layer of VMs and for the nested environment;

Logical Router: pfSense, for routing and protecting (firewalling) the NESTED environment;

Automation VM: Ubuntu with Terraform/Ansible. Will be used for deploying the NESTED Env as well as some Kubernetes clusters;

Network-Attached Storage: running FreeNAS to provide storage for the NESTED Env;

NESTED Env: instant clones of an ESXi VM prebuilt and prepared for it; you can read all about that at VirtuallyGhetto. Later I will migrate the deployment from PowerCLI to Terraform.

Now for the network configuration

[Diagram: network]

I know this looks very simple to the great majority of you, but if you are just starting out and have no basis in how VMware works, I hope this helps. With that said, let me explain the setup.

I have two switches right now: switch0 and switch1.

vStandard switch 0 is for management: it has one uplink, accommodates all the management VMs and is connected to the internet.

vStandard switch 1 serves the NESTED Env and has no uplink, so everything inside it is isolated from the outside world. Besides that, two VMs are connected to it as well: the Windows Domain Controller and the Logical Router. The Logical Router, as the name suggests, acts as a router between the NESTED Env and everything else when needed (internet, vCenter, network-attached storage…). I connected the Windows Domain Controller directly to this switch because I want it to serve as a DHCP/DNS server for the NESTED Env, which has a different IP range (172.21.30.0) from all the other VMs (192.168.0.0). In the future I am thinking about letting the Logical Router do the DHCP part and disconnecting the Windows Domain Controller, but for now let’s keep it like this.

And this covers the basic setup of the environment. In my next posts, I’ll show you how I installed and configured some of the components.

As always,

Have fun and KISS

[VMwareLAB] – 2.hardware

Now that I made my choice in my previous post (good or bad, only the future will tell; you can read all about it here), it’s time to start the hunt for the hardware. At the end of this post, I’ll have the list of components and the price of each.

Because I want to keep the lab build as budget-effective as possible, my main “shop” will be eBay 😀

First things first, the motherboard. After talking with some colleagues and doing some research, I decided to go for the SUPERMICRO X9DRi-LN4F+, mainly because it’s dual CPU, can take up to 1.5TB of RAM, has IPMI and supports vSphere 6.7U3. eBay was kind enough to provide me with a seller offering this board with two passive coolers.

Now that I have the motherboard selected, it’s time for the CPUs. I wanted to get some Intel Xeon E5 v2 Low Voltage parts. A quick search on eBay led me to the E5-2650L v2, 10 cores at 1.7GHz, more than enough for what I need. And because I wanted to fill both sockets, I bought two (the seller was selling a pair).

Time for the memory. Again Low Voltage, and preferably from the compatibility list. SK hynix PC3L-10600R DDR3 1333MHz, ECC, HMT42GR7MFR4A-H9 was the one I went for, mainly because of value. I got 16 modules of 16GB (256GB) at a very nice price, at least I think it was.

So, the main components are already bought, nice job!!! Now the missing parts 🙂

For disks, I selected the WD Blue 3D SSD 500GB, mainly because my local shop had a nice sale on them; I got 2 (1TB of SSD).

Because I’ll install this board in a “normal” ATX case, mainly for noise reduction, I had to select a power supply that would fit the case, and because I didn’t want to come up short, I got a Zalman ZM1000-GVM, 1000W, which I now realize is a bit overkill.

And now the thing that took the longest to find: the case. Because this board has a non-standard size, EE-ATX (Enhanced Extended ATX), I was not able to find a “normal” desktop case that could hold it without any tweaking. After researching for quite some time I went for the Corsair Carbide Quiet Air 740 High Airflow: a low-noise, high-airflow case that can hold an EE-ATX board. I had to drill some extra holes, but everything fits quite nicely.

So, to sum it all up, here is the list of the components I bought:

Motherboard: SUPERMICRO X9DRi-LN4F+ Server Motherboard LGA 2011 + 2 HEATSINKS – 243EUROS
CPU: Set of 2x Intel® Xeon® Processor E5-2650L v2, 25M Cache, 1.70 GHz – 90EUROS
RAM: 256GB SK hynix (16x16GB), 2Rx4 PC3L-10600R DDR3 1333MHz, ECC, HMT42GR7MFR4A-H9 – 223EUROS
Disk: 2x WD Blue 3D NAND SSD 500GB – 2x65,5 = 131EUROS
Power Supply: Zalman ZM1000-GVM – 1000W – 116EUROS
Case: Corsair Carbide Quiet Air 740 High Airflow – 132EUROS

In the end, everything was around 935EUROS

Considering that I have “enough hardware” for the upcoming years, the price is not that high.

I hope I was able to shed some light for some of you.

In the next post I’ll show the assembly process and the first logical setup of the Lab itself.

As always,

Have fun and KISS

[VMwareLAB] – 1.first thoughts

A lot has been written about building your own VMware LAB; nevertheless, I just want to share my personal experience and maybe still help someone 🙂

As a VMware enthusiast you have, at some point in time, had the need to test new products or features from VMware and found yourself without options for it.

Like many of you, I started with VMware Workstation on my desktop, building a nested environment to try out simple features, but soon realized the resources were just not there, and that was the trigger to start thinking about alternatives.

There were a couple of options:

  1. Rent a physical server from an IaaS provider
  2. Use my friend’s lab (I have really nice friends)
  3. Upgrade my desktop to be used as a lab
  4. Buy dedicated hardware

Next step, define the Pros and Cons for each option.

1. Rent a physical server from an IaaS provider

PROS:

  • No need to worry about hardware
  • No electric bill
  • Easy and fast setup

CONS:

  • Not budget-friendly, even the cheapest one was too expensive
  • Limited hardware options (RAM, CPU, Disks)

2. Use my friend’s lab

PROS

  • Free or close to free (shared electrical bill)
  • Able to influence hardware upgrades

CONS

  • Be dependent on my friend’s goodwill
  • Don’t have full control of the Lab

3. Upgrade my desktop to be used as a Lab

PROS

  • Full control of hardware
  • Cheaper compared to 1st option
  • Just missing RAM

CONS

  • Hardware limitation (max 64GB RAM)
  • DDR4 is very expensive
  • Limited scalability

4. Buy dedicated hardware

PROS

  • Scalability
  • Customizable
  • Dedicated for the Lab

CONS

  • Big initial investment
  • Electric/Internet bill

Verdict

After breaking my piggybank I went for the 4th option.

Next steps are to actually select the hardware and assemble everything…

As always,

Have fun and KISS

App Volumes – Office 365


After installing Office 365 32-bit on a Windows 7 64-bit machine, everything seemed to work as expected.

But after attaching a writable volume to the VM, macros in Office stopped working, even with a clean writable volume (one not used before).

A VMware KB provided a workaround for environments that use snapvol.cfg in Writable Volumes or in AppStacks, but after trying to implement it nothing changed: as soon as a writable volume is attached to the VM, macros in Office 365 stop working.

KB2145079 (I will be using the Writable Volumes option, but the same applies to AppStacks):

Writable volumes:
1. Attach the writable volume to a virtual machine.
2. Log in to the virtual machine as the administrator.
3. Open Computer Management on the Windows machine.
4. Go to Storage > Disk Management.
5. Right-click CVWritable and click Change Drive Letter and Paths.
6. Assign a drive letter to CVWritable.
7. Copy the snapvol.cfg file from the root folder of CVWritable and paste it in another location, such as the desktop.
8. Open the snapvol.cfg file using a text editor.
9. Add this entry:
9. Add this entry:
################################################################
# Office 365 Virtual Registry exclusions
################################################################
exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\REGISTRY
10. Save and close the snapvol.cfg file.
11. Zip the snapvol.cfg file.
12. Go to the App Volumes Manager.
13. In the Volumes tab, click the Writables tab and click Update Writables.
14. Upload the snapvol.cfg.zip file to update the Writable volume.

After reading the workaround I noticed two things:

  1. Office 365 is not on 15.0 but instead on 16.0
  2. The path for the exclude_registry does not fully exist

After a quick search, I found that the REGISTRY key is not under

\REGISTRY\MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\REGISTRY

but instead

\REGISTRY\MACHINE\SOFTWARE\Microsoft\Office\ClickToRun\REGISTRY

So after changing:

exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\REGISTRY

to (removing the 15.0)

exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Microsoft\Office\ClickToRun\REGISTRY

and logging off and on again, Office 365 macros work as intended.
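
If you have many writable volumes to patch, the edit itself can be scripted. A sketch against a local copy of snapvol.cfg: the path and starting content below are illustrative, and you would run the sed on the copy you pulled off the CVWritable volume:

```shell
CFG="${CFG:-./snapvol.cfg}"

# Illustrative starting point: the KB's original (wrong) exclusion line.
printf '%s\n' 'exclude_registry=\REGISTRY\MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\REGISTRY' > "$CFG"

# Drop the version segment: ...\Office\15.0\ClickToRun\... becomes ...\Office\ClickToRun\...
sed -i 's/\\Office\\15\.0\\ClickToRun/\\Office\\ClickToRun/' "$CFG"

cat "$CFG"
```

After the edit, zip the file and upload it through the App Volumes Manager as in the KB steps above.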

I hope this helps someone that is facing the same issue!