Monitoring a system’s vital stats is crucial for optimizing performance and understanding usage patterns. In this post, I’ll build a simple web application in Go that displays key system statistics such as CPU usage, RAM, disk space, and uptime.
To begin with, let’s set up a new directory and initialize a Go module for our project. After creating a new directory, initialize a new module and create a main.go file:
mkdir go-proj && cd go-proj
go mod init go-proj
touch main.go
Now add the package name and some imports that we will need:
package main
import (
"fmt"
"html/template"
"net/http"
"os"
"strconv"
"strings"
"syscall"
)
We define a structure named PageVariables to hold the formatted strings of system statistics to display on the web page.
type PageVariables struct {
Uptime string
TotalRAM string
FreeRAM string
TotalDisk string
FreeDisk string
CPUUsage string
}
The syscall package in Go provides a straightforward interface to the operating system’s low-level system call API. The getUptime function takes sysInfo, a syscall.Sysinfo_t value, which is a struct containing various system information, including uptime. It returns a string that presents the system’s uptime in a human-readable format (days, hours, minutes, and seconds).

func getUptime(sysInfo syscall.Sysinfo_t) string {
// Retrieve uptime in seconds
seconds := sysInfo.Uptime
days := seconds / (60 * 60 * 24)
hours := (seconds % (60 * 60 * 24)) / (60 * 60)
minutes := (seconds % (60 * 60)) / 60
remainingSeconds := seconds % 60
// `%d` is a placeholder for decimal numbers
uptimeString := fmt.Sprintf("%d days, %d hours, %d minutes, %d seconds", days, hours, minutes, remainingSeconds)
return uptimeString
}
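To sanity-check the arithmetic above in isolation, here is a standalone sketch of the same formatting logic. It takes a plain int64 instead of a syscall.Sysinfo_t so it can run on any platform:

```go
package main

import "fmt"

// formatUptime renders an uptime given in seconds as a human-readable string.
// Same arithmetic as getUptime above, but decoupled from syscall.Sysinfo_t.
func formatUptime(seconds int64) string {
	days := seconds / (60 * 60 * 24)
	hours := (seconds % (60 * 60 * 24)) / (60 * 60)
	minutes := (seconds % (60 * 60)) / 60
	remainingSeconds := seconds % 60
	return fmt.Sprintf("%d days, %d hours, %d minutes, %d seconds", days, hours, minutes, remainingSeconds)
}

func main() {
	// 90061 s = 1 day + 1 hour + 1 minute + 1 second
	fmt.Println(formatUptime(90061))
}
```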
syscall.Sysinfo_t

Sysinfo_t is a struct provided by the syscall package that’s used to hold system information. Some of its fields:

Uptime: how long the system has been running, in seconds.
Loads: 1, 5, and 15-minute load averages, which give a rough idea of system usage.
Totalram: total usable RAM (i.e., physical RAM minus a few reserved bits and the kernel binary code).
Freeram: amount of free RAM.
Procs: number of current processes.

To fetch this information, we declare a variable and call syscall.Sysinfo:

var sysInfo syscall.Sysinfo_t
err := syscall.Sysinfo(&sysInfo)
The & operator is used to pass the memory address of sysInfo to the function, allowing the function to fill in the original variable. err holds any error that occurred during the call (e.g., if for some reason the system information could not be retrieved).

The current CPU status can be retrieved from the /proc/stat file:
data, err := os.ReadFile("/proc/stat")
More information: https://man7.org/linux/man-pages/man5/proc.5.html
getCPUUsage returns a float64 holding the calculated CPU usage percentage, and an error when applicable.

func getCPUUsage() (float64, error) {
// Read the contents of /proc/stat.
data, err := os.ReadFile("/proc/stat")
// If an error occurs during reading the file, return 0 and the error.
if err != nil {
return 0, err
}
// Split the contents of the data into lines.
lines := strings.Split(string(data), "\n")
// Iterate through each line of data.
for _, line := range lines {
// Split each line into fields based on white space.
fields := strings.Fields(line)
// Check if the current line contains the aggregate CPU information.
// Guard against empty lines before indexing into fields.
if len(fields) > 0 && fields[0] == "cpu" {
// Initialize a variable to keep track of total CPU time.
total := 0
// Iterate through each field (ignoring the first one) and convert them to integers,
// adding them to the `total`.
for _, v := range fields[1:] {
// Convert string to integer.
value, err := strconv.Atoi(v)
// If an error occurs during conversion, return 0 and the error.
if err != nil {
return 0, err
}
// Add the converted integer to total.
total += value
}
// Convert the 5th field, which is the idle time, to an integer.
idle, err := strconv.Atoi(fields[4])
// If an error occurs during conversion, return 0 and the error.
if err != nil {
return 0, err
}
// Calculate the CPU usage percentage and return it.
// The formula is: 100 * (1 - (idle time / total time)).
// Note: the /proc/stat counters are cumulative since boot, so this is
// the average usage since boot, not the instantaneous load.
return 100 * (1.0 - float64(idle)/float64(total)), nil
}
}
// If no line with "cpu" is found, return 0 and an error indicating so.
return 0, fmt.Errorf("cpu info not found")
}
func getRAM(sysInfo syscall.Sysinfo_t) (uint64, uint64) {
// Totalram and Freeram are counted in units of sysInfo.Unit bytes,
// so multiply by Unit before converting to MB.
totalRAM := uint64(sysInfo.Totalram) * uint64(sysInfo.Unit) / 1024 / 1024
freeRAM := uint64(sysInfo.Freeram) * uint64(sysInfo.Unit) / 1024 / 1024
return totalRAM, freeRAM
}
func getDiskSpace(stat syscall.Statfs_t) (uint64, uint64) {
totalDisk := (stat.Blocks * uint64(stat.Bsize)) / 1024 / 1024
freeDisk := (stat.Bfree * uint64(stat.Bsize)) / 1024 / 1024
return totalDisk, freeDisk
}
syscall.Statfs_t

syscall.Statfs_t is a structure in Go that’s defined in the syscall package. It’s used to hold information about a mounted filesystem. The data contained in Statfs_t provides various pieces of information about the filesystem, such as its type, its block size, and space usage (in terms of blocks). Some useful relations between the fields of the Statfs_t structure:

Blocks multiplied by Bsize gives you the total size of the filesystem.
Bfree multiplied by Bsize gives you the total free space in the filesystem.
Bavail can be smaller than Bfree, because some filesystems reserve a certain percentage of space that can only be used by the superuser.

func main() {
http.HandleFunc("/", handler)
fmt.Println("Starting Webserver at http://localhost:8080")
http.ListenAndServe(":8080", nil)
}
http.HandleFunc is a function provided by the net/http package. The http package provides functionality to implement HTTP clients and servers in Go applications.

The HandleFunc function has the following signature:

func HandleFunc(pattern string, handler func(ResponseWriter, *Request))

It takes two parameters:

pattern: a string that contains the URL pattern that you want your handler function to respond to.
handler: a function that gets called when the URL pattern is matched.

Our handler gathers all the stats and renders the template:

func handler(w http.ResponseWriter, r *http.Request) {
var sysInfo syscall.Sysinfo_t
err := syscall.Sysinfo(&sysInfo)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
var stat syscall.Statfs_t
err = syscall.Statfs("/", &stat)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
cpuUsage, err := getCPUUsage()
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
uptime := getUptime(sysInfo)
totalRAM, freeRAM := getRAM(sysInfo)
totalDisk, freeDisk := getDiskSpace(stat)
pageVariables := PageVariables{
Uptime: fmt.Sprintf("%v", uptime),
TotalRAM: fmt.Sprintf("%v", totalRAM),
FreeRAM: fmt.Sprintf("%v", freeRAM),
TotalDisk: fmt.Sprintf("%v", totalDisk),
FreeDisk: fmt.Sprintf("%v", freeDisk),
CPUUsage: fmt.Sprintf("%.2f", cpuUsage),
}
tmpl, err := template.ParseFiles("index.html")
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if err := tmpl.Execute(w, pageVariables); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
Create a simple HTML template named index.html to display the fetched statistics.
<!DOCTYPE html>
<html>
<head>
<title>System Stats</title>
</head>
<body>
<h1>System Statistics</h1>
<p>Uptime: {{.Uptime}}</p>
<p>Total RAM: {{.TotalRAM}} MB</p>
<p>Free RAM: {{.FreeRAM}} MB</p>
<p>Total Disk Space: {{.TotalDisk}} MB</p>
<p>Free Disk Space: {{.FreeDisk}} MB</p>
<p>CPU Usage: {{.CPUUsage}}%</p>
</body>
</html>
Ensure Go is installed and your project setup is complete. Navigate to your project directory and execute:
go run main.go
The web server should now be running, and you can navigate to http://localhost:8080 to view the system statistics.
I love experimenting with different wireless frequencies. The NRF24L01+ is a very-powerful-for-its-cost module that allows carefree experimentation because of its price. I want to have an accurate picture of what is happening around me, which frequencies are saturated, and perhaps discover hidden comms.
The nRF24L01+ can operate on 126 different channels (0-125), which means that it covers the Wi-Fi, Bluetooth and Zigbee ranges (among others).
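The channel-to-frequency mapping is linear; a tiny helper (based on the datasheet’s 1 MHz channel spacing starting at 2400 MHz) makes it explicit:

```python
def nrf24_channel_to_mhz(channel: int) -> int:
    """Map an nRF24L01+ RF channel number to its center frequency in MHz.

    Per the datasheet, F = 2400 MHz + channel, with channels 0-125.
    """
    if not 0 <= channel <= 125:
        raise ValueError("nRF24L01+ channels are 0-125")
    return 2400 + channel
```

For example, channel 76 (a common default) sits at 2476 MHz, right inside the Wi-Fi band.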
The hardware: a Raspberry Pi Pico, a 3.3V voltage regulator and a cheap Chinese NRF24L01+.
I haven’t yet used any sophisticated UI to view the per-frequency saturation; I am just using Thonny’s terminal.
I am thinking about adding a wide LCD screen to the schematics and maybe a battery, all well fitted in a 3D-printed box that could be carried around. I’d also like to track the saturation over time and draw some conclusions about its usage.
It must have been a long time since I started dreaming about creating a UGV with extensive telemetry and parametrization, military-grade (!) construction, semi (or maybe fully) autonomous navigation, good FPV support and encrypted communications. I decided to give it a go and see what I can do, as it’s very fun and challenges all parts of my knowledge: software, hardware, electronics, DIY etc. The first attempt should be something less than a prototype. I will focus on the design of the main control unit and its development on a small, easy-to-hack device. It should also be easy to transfer to another chassis, as I am planning to create one from scratch.
One of the RC cars I had in my closet was a Nikko Velocitrax.
It was given to me as a gift and I had been playing with it occasionally. The biggest issue was that it only worked at full speed. The remote controller that comes in the box does not have gimbals that gradually increase or decrease speed, only buttons, and the motors are way too powerful, so it’s very difficult to drive it in limited spaces (for example inside the house or in a small garden). No matter how much I liked the construction quality of the car, it lived on the shelf.
I first took some photos to keep track of the original design of the RC, then I desoldered the motor and power cables from the PCB.
Initially I tried to use the plain, classic, several-times-used Arduino Nano. I had some issues with my Linux OS and the drivers, because all my Nanos were Chinese clones; I had already spent some hours trying to make the CH340 USB chipset work, but it repeatedly failed. I had done this successfully in the past, but a long time had passed since then and I was starting to get deeply tired. I decided to give the new Raspberry Pi Pico a go, because it seemed like an upgraded Nano to me, and MicroPython is also quite fast and straightforward to program.
I had never used a third-party controller other than the ones included with my RCs. A friend of mine who has built an RC aircraft himself suggested a Futaba one, but unfortunately I found they were too expensive to invest in for an experiment that will most likely crash and burn. Several searches on similar projects showed that FlySky controllers are widely used for DIY projects and offer great value for money. I chose their latest one, which looks futuristic and has a touch screen.
The box with the controller and the receiver arrived.
I don’t get why they have to include a 6-channel receiver with a controller that supports 10 channels. I opened the box and installed some batteries in it. At this point, even though it wasn’t connected to a vehicle, it looked pretty awesome as a standalone gadget. It has a blue backlit screen and the touch layer has a great response. I didn’t expect it to be that good, to be honest.
One of the killer features is the iBus protocol, where you can read all channels over a single hardware pin instead of n pins, where n is the number of channels.
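To make the single-pin idea concrete, here is a sketch of iBus servo-frame parsing. The frame layout (from public iBus write-ups, not from this post) is: 32 bytes, header 0x20 0x40, then 14 little-endian 16-bit channel values, then a 16-bit checksum equal to 0xFFFF minus the sum of the first 30 bytes:

```python
def parse_ibus(frame: bytes):
    """Parse a 32-byte FlySky iBus servo frame.

    Returns the 14 channel values (typically 1000-2000),
    or None if the header or checksum is invalid.
    """
    if len(frame) != 32 or frame[0] != 0x20 or frame[1] != 0x40:
        return None
    expected = 0xFFFF - (sum(frame[:30]) & 0xFFFF)
    received = frame[30] | (frame[31] << 8)
    if expected != received:
        return None
    # Each channel is a little-endian 16-bit value starting at byte 2.
    return [frame[2 + 2 * i] | (frame[3 + 2 * i] << 8) for i in range(14)]
```

On the Pico you would feed this with bytes read from the UART connected to the receiver’s iBus pin.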
I usually try to keep things as simple as possible in order to reach the first "it just works" milestone as soon as possible, and then improve from that point on. My first attempt was to use a presoldered motor driver shield which seemed of very good quality to me. This Kitronik one:
The setup was very easy. I connected the cables from the two motors to the very well designed screw terminals. The GPIO pins are predefined so I downloaded this Micropython Kitronik Motor Driver uPython library and in 5 minutes the motors started moving.
No need to say much about this one. It is always a bit tricky, but very cheap and easy to use. The greatest downside is that it loses accuracy under 4cm.
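The HC-SR04 reports distance as the width of an echo pulse; sound travels roughly 0.0343 cm/µs at room temperature, and the pulse covers the round trip, so the reading is halved. A rough helper (the constant is approximate and temperature-dependent):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # approximate, at ~20 °C


def echo_to_cm(echo_us: float) -> float:
    """Convert an HC-SR04 echo pulse width (microseconds) to distance in cm."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2
```

An echo of about 1000 µs therefore corresponds to roughly 17 cm.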
This is how things looked after connecting them all together.
Inside…
and outside…
Thankfully, the motors worked fine, but I witnessed some strange behavior. I was going forwards/backwards several times and the motors accelerated strangely. At some point, after several minutes of driving and testing, smoke came out of the motor driver. It simply died, just like that, and never worked again.
The motors consume less than 1A while running and they peak at 2.2A. These peaks stretched the Kitronik shield too much. I think that maybe they should have some sort of protection for current overload.
I researched solutions and motor drivers quickly available in Greece, because I didn’t want to lose time ordering from Europe or abroad. Most of the drivers I could find were boards based on the L298N chipset, so I ordered two of them, just in case.
The L298N is a pretty old chipset which does the job but has some downsides. One of them is a voltage dropout of 2V, so my motors work at 5.4V instead of 7.4V. It also gets hot after some minutes of hard usage, but it has a heatsink installed, so I am more confident it won’t fry as easily as the Kitronik one. So far I’ve been using it for a month with no issues.
Another thing is that sometimes, when it gets very hot, it stops functioning. It might be just a coincidence, but I am happy with it because, as I said, I don’t have to buy new boards and switch over every few hours or days of experimenting and frying them. I have also purchased some new motor drivers (an MDD3A and another 7A motor driver) which will be on my next list of hardware upgrades.
This time everything went fine, except for some random glitches that I haven’t identified yet. I was able to control the car from several meters away, although I haven’t placed the antenna properly.
I also tried to implement an ABS algorithm for brake assistance, but things became complicated too quickly. So I decided to roll back and use just a simple condition: stop movement if the distance to an object in front of the car is less than 40cm. No need to go further at this point.
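The fallback stop condition is trivial; expressed as a pure function (threshold from the text, names are mine):

```python
STOP_DISTANCE_CM = 40


def should_stop(front_distance_cm: float) -> bool:
    """Stop if an obstacle in front is closer than the threshold."""
    return front_distance_cm < STOP_DISTANCE_CM
```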
There is a lack of space, but...
I will create a very rough plywood case as a start, or apply some modifications to the original one.
I am also conducting some tests with my nRF24 modules, to use them for telemetry and additional parametrization of the car. Light control with 5050/WS2812 LEDs will also possibly be included in the next version.
The image I am using is Debian 10, which is basic and sophisticated at the same time, and will also allow me to deploy to my Raspberry Pis.
Both tools can easily be installed in macOS with homebrew
and pip
.
brew install vagrant
Homebrew also takes care of dependency installation (such as Virtualbox), which takes a while to complete.
The Vagrantfile is pretty basic for now:
Vagrant.configure("2") do |config|
HOST_DIR = "/Users/chrisveleris/projects/paqetz"
GUEST_DIR = "/home/vagrant/paqetz"
config.vm.box = "generic/debian10"
config.vm.network "public_network"
config.vm.network "private_network", ip: "192.168.55.10"
config.vm.synced_folder HOST_DIR, GUEST_DIR, type: "nfs"
end
Vagrant will try to find the Vagrant box and, if it hasn’t been downloaded previously, it will download it and spin up a new machine, which you can then ssh into.
First install pip for python3. Then ansible:
sudo python3 -m pip install ansible
Create a configuration file in ~/.ansible.cfg
:
[defaults]
inventory = .ansible/hosts
Create ~/.ansible/hosts
:
[vagrant-box]
192.168.55.10 ansible_user=vagrant
Check that everything is fine:
ansible vagrant-box -m ping
# or ping every host in the inventory
ansible all -m ping
Start provisioning:
ansible-playbook -l vagrant-box provision.yml
What’s next? Setting up an ansible playbook for a new project.
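As a taste of what such a playbook could look like, here is a minimal, hypothetical provision.yml (package names are illustrative):

```yaml
# Hypothetical starting point; adjust packages to your project.
- hosts: vagrant-box
  become: true
  tasks:
    - name: Install base packages
      apt:
        name: [git, curl, build-essential]
        state: present
        update_cache: true
```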
The db/migrate directory tends to pile up with obsolete, unused migrations.
The best way I found to squash them into one file and move on is to:
1. Back up the db/migrate directory somewhere outside the project (or, if you decide to keep the backup inside of it, don’t forget to add it to .gitignore).
2. Run rails g migration squash_v1.
3. Copy the contents of the db/schema.rb file inside the ActiveRecord::Schema.define(version: xxxxyyyyzzzz) do block.
4. Open the db/migrate/xxxxxxxxxxxx_squash_v1.rb file and paste them inside the change block:
class SquashV1 < ActiveRecord::Migration[6.1]
def change
....
end
end
– Done –
I wanted to create a library of markdown files, each rendered to the user as an individual page on my website. These were documents about network services, e.g. SMTP, DNS, HTTP etc., each one having its own markdown file. The number of services/files was unknown, as lots had already been added and more would be added in the future. Not much thinking is necessary for 5 actions, but imagine if there are… 100 of them.
The solution:
I chose the Markdown format because I am not working with a database at the moment and I hate over-engineering things. Choosing redcarpet and rouge was easy, as they have a strong community supporting them. I used dynamic action/method creation in order to reduce useless code bloat. In the end, I just have to drop a markdown file in the directory mentioned and that’s it! *Note: as with any other problem, there isn’t only one solution. The purpose here is to use dynamic actions intentionally to solve this one.
First of all, I created a directory at the /public/md/ Rails path. I copied my .md files there, using the name of the service as the filename.
The usual path would be to create a GET route for each service and set up its own action in the controller, which means bloat and code smells. So I created a single route for all of the services:
resources :services do
get '/:service', on: :collection, action: 'service'
end
Next, I started building the controller. First the markdown processing:
# Reads markdown files from public directory path
def read_md_data(service)
path = File.join(Rails.root, 'public', 'md', service + '.md')
return File.read(path)
rescue Errno::ENOENT
return nil
end
# Renders text to syntax highlighted text
def create_content(data)
return Redcarpet::Markdown.new(
::PageRender, fenced_code_blocks: true)
.render(data)
.html_safe
rescue NoMethodError, TypeError
return ''
end
Then I added a respond_to_missing? override and used method_missing in order to accept services as service_<name>:
# Overrides default respond_to_missing? so it returns false
# when the method name doesn't match a service method
def respond_to_missing?(method_name, include_private = true)
!method_name.match(/^service_[a-z0-9]+$/).nil?
end
# Takes actions to render markdown text to the browser
def method_missing(method_name, *arguments)
service = method_name.match(/^service_([a-z0-9]+)$/)[1]
data = read_md_data(service)
@content = create_content(data)
render :service_page
rescue NoMethodError
@content = 'ERROR: There was an issue while rendering content'
render :service_page
end
Next, I built the main service action. I chose this regex because there are services which carry a number next to the name, e.g. POP3:
# Calls a dynamic method "service_<name>" if validated properly
def service
if params[:service] && params[:service].match(/^[a-z0-9]+$/)
self.send("service_#{params[:service]}")
else
raise StandardError
end
rescue StandardError
@content = "ERROR: Page for #{params[:service]} is missing"
render :service_page
end
Finally, the action_missing method:
# Calls the service method if the overridden respond_to_missing? returns true
def action_missing(name)
self.send(name) if self.send(:respond_to_missing?, name)
end
All the controller code in one block:
class ServicesController < ApplicationController
# Calls a dynamic method "service_<name>" if validated properly
def service
if params[:service] && params[:service].match(/^[a-z0-9]+$/)
self.send("service_#{params[:service]}")
else
raise StandardError
end
rescue StandardError
@content = "ERROR: Page for #{params[:service]} is missing"
render :service_page
end
# Calls service method If overridden respond_to_missing? returns true
def action_missing(name)
self.send(name) if self.send(:respond_to_missing?, name)
end
private
# Reads markdown files from public directory path
def read_md_data(service)
path = File.join(Rails.root, 'public/md/' + service + '.md')
return File.read(path)
rescue Errno::ENOENT
return nil
end
# Renders text to syntax highlighted text
def create_content(data)
return Redcarpet::Markdown.new(
::PageRender, fenced_code_blocks: true)
.render(data)
.html_safe
rescue NoMethodError, TypeError
return ''
end
# Overrides default respond_to_missing? so it returns false
# when the method name doesn't match a service method
def respond_to_missing?(method_name, include_private = true)
!method_name.match(/^service_[a-z0-9]+$/).nil?
end
# Takes actions to render markdown text to the browser
def method_missing(method_name, *arguments)
service = method_name.match(/^service_([a-z0-9]+)$/)[1]
data = read_md_data(service)
@content = create_content(data)
render :service_page
rescue NoMethodError
@content = 'ERROR: There was an issue while rendering content'
render :service_page
end
end
The view is pretty simple (bootstrap 4 used):
<div class="container-fluid">
<h2> <strong>Services</strong> </h2>
<hr>
</div>
<div class="container">
<div class="row">
<div class="col">
<%= @content %>
</div>
</div>
</div>
I know it’s not perfect and the code can be reduced in size even more, but that’s the first version of my implementation. To be honest, it didn’t take me more than an hour to set this up, so I will revise the code again and make the appropriate changes. Hope this helps someone.
Thankfully, the version control problem was solved a long way back when I started using Git, but I was still facing issues with my setup. I was working on different kinds of machines with different tools and different databases, building apps and uploading them by FTP to hosting providers. I had a rather complicated setup, built on different versions of Apache/nginx, different configurations and, worst of all, different OSes and versions. Then I found Virtualbox and gradually entered the world of virtualization.
Virtualbox was open source (and still is, for those who haven’t used it) and pretty easy to use, so I started building different kind of machines and kept my individual configurations in each one separately.
After many years of moving from PHP to the Ruby and Rails stack (with a stop at Python/Django for a bit more than half a year), I found Vagrant, and my development process changed for the better, lasting until today.
Vagrant is a cross-platform open source tool that lets us build complete development environments inside individual virtual machines. It decreases the time needed to set up a new development environment, lets you control multiple virtual machine “instances” entirely from the console, and does all this by parsing a simple configuration file.
Vagrant can be used with Virtualbox, vmware, Parallels among others. It can also use Chef, Puppet, Ansible etc. in order to provision the virtual machine. This means that by using a simple command you’re able to create as many instances (they are called “boxes”) of a machine as you want.
Because:
If you have not been using any kind of virtualization for development, you may want to start considering it soon. You will be doomed if your employer assigns you a project based on a different configuration.
If you feel you’ll be “just fine” developing for servers configured differently from your current “personal” one, then good luck! I guess you also see testing as a waste of time, correct?
There’s no need to spend a lot of time here as the installation is pretty smooth, at least on OSX/macOS and Ubuntu/Debian/Kali that I’ve tried.
Make sure you’ve installed the latest version of Virtualbox (or any other Virtualization tool you want) first.
You can either download the package from the official website downloads page or install it manually. In macOS a simple
brew install vagrant
will suffice, considering that you’ve installed homebrew first. In Debian-based O/Ses, it installs as usual with
sudo apt-get install vagrant
You can find out the version running by typing
vagrant --version
in the console.
The first location that must be bookmarked should be Hashicorp’s vagrant box search engine:
Discover Vagrant Boxes - Atlas by HashiCorp (atlas.hashicorp.com)
There are lots of ready-to-install boxes, including Windows, the BSDs and everything else. As an example, I’ll choose the official Ubuntu Server 14.04 LTS box, ubuntu/trusty64.
I won’t go through the process of creating a file by hand and entering all the information there from scratch, but will use the ready-to-go command that’s listed on the box’s page:
vagrant init ubuntu/trusty64; vagrant up --provider virtualbox
This command initializes a Vagrantfile for the box in the current directory and then boots the machine using the VirtualBox provider.
There will be lots of informative messages displayed and installation may take a while depending on the system, its resources and the box you’re installing. If everything completes fine, you’ll be able to SSH to the newly installed box with a simple command:
vagrant ssh
I am working with several operating systems, so I have to set them up separately. In the end, all of them run smoothly and in parallel on my MacBook.
The Debian configuration I am using lately to build a full Ruby on Rails v5 stack looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrant configuration
Vagrant.configure(2) do |config|
# Requires
require "./settings"
# Vagrant box
config.vm.box = "debian/jessie64"
# Networks
config.vm.network "public_network"
config.vm.network "private_network", ip: "192.168.44.10"
# Shared folders
config.vm.synced_folder HOST_DIR, QUEST_DIR, type: "nfs"
# Resource configuration
config.vm.provider "virtualbox" do |vb|
vb.cpus = 2
vb.memory = 512
end
# Shell scripts
config.vm.provision :shell,
path: "scripts/install-tools.sh"
config.vm.provision :shell,
path: "scripts/install-rvm.sh",
args: "stable",
privileged: false
config.vm.provision :shell,
path: "scripts/install-ruby.sh",
args: "2.3.0 bundler",
privileged: false
config.vm.provision :shell,
path: "scripts/postgresql.sh"
config.vm.provision :shell,
path: "scripts/install-redis.sh"
end
I usually work in a private_network but there are cases e.g. I want to scan the entire subnet or I want to show something to a colleague. Then I include a public_network with an IP Address acquired by DHCP Server. What I love in Vagrant about that is that you can write a script to create multiple network interfaces or simple copy and paste them. There was I case I wanted to add 8 network interfaces to test a network application, so I didn’t have to create them one by one in Virtualbox.
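The multi-interface trick can be scripted inside the Vagrantfile itself, since it is plain Ruby. A sketch (the box name and IP range are illustrative):

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "debian/jessie64"
  # Create 8 private interfaces in a loop instead of pasting them by hand.
  (1..8).each do |i|
    config.vm.network "private_network", ip: "192.168.44.#{10 + i}"
  end
end
```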
I am actually requiring ./settings.rb because I want to keep my folders dynamic, as I might want to share this Vagrantfile with a colleague who has a different folder structure on his machine. I have added the settings file to .gitignore and keep a copy named settings.local.rb in the repository, so that it’s there if someone clones it.
HOST_DIR = "~/myusername/projects/new-project"
QUEST_DIR = "/home/vagrant/new-project"
I usually follow the same practice to add multiple mappings with the help of NFS.
I am also using shell scripts to provision my VM as this is pretty simple and I don’t have to set up a whole Chef stack in order to complete basic operations.
Most things are obvious here: I use -y to auto-reply to prompts and autoremove to delete junk. The packages I am installing are the ones essential for a basic Rails v5 stack.
install-tools.sh
#!/bin/sh -e
apt-get update
apt-get -y upgrade
apt-get -y install curl git nodejs build-essential libgmp-dev libpq-dev
apt-get -y autoremove
install-rvm.sh
Time to install RVM. I am willing to replace it in the future, but for now it does what it says, so I am pretty fine with it. The setup instructions were grabbed from the official website: https://rvm.io.
I am only adding an argument as a version, in order to control its installation from the Vagrantfile.
#!/usr/bin/env bash
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s $1
install-ruby.sh
Passing an argument for the version I want to install from the Vagrantfile is the only different thing here.
#!/usr/bin/env bash
source $HOME/.rvm/scripts/rvm
rvm use --default --install $1
shift
if (( $# ))
then gem install $@
fi
gem update --system
postgresql.sh
Setting up PostgreSQL may seem a bit hard in the beginning, but if the steps are broken down, what we have to do is pretty obvious. Every command has a comment on top describing its purpose.
#!/bin/sh -e
# Edit the following to change the name of the database user that will be created:
APP_DB_USER=vagrant
APP_DB_PASS=dbpass
# Edit the following to change the version of PostgreSQL that is installed
PG_VERSION=9.4
export DEBIAN_FRONTEND=noninteractive
PROVISIONED_ON=/etc/vm_provision_on_timestamp
if [ -f "$PROVISIONED_ON" ]
then
echo "VM was already provisioned at: $(cat $PROVISIONED_ON)"
echo "To run system updates manually login via 'vagrant ssh' and run 'apt-get update && apt-get upgrade'"
echo ""
exit
fi
PG_REPO_APT_SOURCE=/etc/apt/sources.list.d/pgdg.list
if [ ! -f "$PG_REPO_APT_SOURCE" ]
then
# Add PG apt repo:
echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" > "$PG_REPO_APT_SOURCE"
# Add PGDG repo key:
wget --quiet -O - https://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | apt-key add -
fi
# Update package list and upgrade all packages
apt-get update
apt-get -y upgrade
apt-get -y install "postgresql-$PG_VERSION" "postgresql-contrib-$PG_VERSION"
PG_CONF="/etc/postgresql/$PG_VERSION/main/postgresql.conf"
PG_HBA="/etc/postgresql/$PG_VERSION/main/pg_hba.conf"
PG_DIR="/var/lib/postgresql/$PG_VERSION/main"
# Edit postgresql.conf to change listen address to '*':
sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" "$PG_CONF"
# Append to pg_hba.conf to add password auth:
echo "host all all localhost trust" >> "$PG_HBA"
echo "host all all all md5" >> "$PG_HBA"
# Explicitly set default client_encoding
echo "client_encoding = utf8" >> "$PG_CONF"
# Restart so that all new config is loaded:
service postgresql restart
cat << EOF | sudo su - postgres -c psql
-- Create the database user:
CREATE USER $APP_DB_USER WITH CREATEDB CREATEROLE;
EOF
# Tag the provision time:
date > "$PROVISIONED_ON"
echo "Successfully created PostgreSQL dev virtual machine."
echo ""
install-redis.sh
Redis is necessary for Rails v5’s ActionCable to work, so I am including it in the system configuration. I have also been using Sidekiq and other Redis-related tools, so I had no option but to include it.
#!/bin/bash
mkdir /opt/redis
cd /opt/redis
# Use latest stable
wget -q http://download.redis.io/redis-stable.tar.gz
# Only update newer files
tar -xz --keep-newer-files -f redis-stable.tar.gz
cd redis-stable
make
sudo make install
sudo mkdir -p /etc/redis
sudo mkdir -p /var/redis/6379
sudo useradd --system --home-dir /var/redis redis
sudo chown -R redis:redis /var/redis
sudo cp -u /vagrant/redis.conf /etc/redis/6379.conf
sudo cp -u /vagrant/redis.init.d /etc/init.d/redis_6379
sudo chmod a+x /etc/init.d/redis_6379
sudo update-rc.d redis_6379 defaults
sudo /etc/init.d/redis_6379 start
A quick reference of the basic commands:
vagrant init: creates a Vagrantfile in the current directory
vagrant up: creates and boots the machine
vagrant status: shows the state of the machine
vagrant suspend: saves the machine state and pauses it
vagrant halt: gracefully shuts the machine down
vagrant reload: halts and boots again, re-reading the Vagrantfile
vagrant ssh: opens an SSH session to the machine
vagrant box list: lists the boxes downloaded locally
vagrant destroy: stops and deletes the machine
vagrant global-status: lists the state of all Vagrant machines on the host
The company behind Vagrant is HashiCorp, which has been developing other awesome tools as well. They also tried to replace Vagrant with Otto, but people seem to love Vagrant too much to migrate to another tool, so it seems that Otto is being abandoned.
Vagrant has limitations, but it’s a wonderful tool for what it offers. Except for some adventures with NFS and Virtualbox on OSX that I recently ran into, it is stable and works issue-free, at least for the last two years I’ve been using it!
The issue started after upgrading OSX to macOS Sierra, Virtualbox to 5.1.8 and Vagrant to 1.6.8. What were the issues and the facts?
Some of the things I’ve tried:
There are a lot of bugs regarding compatibility between Vagrant and Virtualbox. Among them: Vagrant <1.8.6 is supposed to support Virtualbox 5.0.18, but it doesn’t. I was getting a message telling me that I needed to install a Virtualbox version >4.x when I already had 5.0.18 installed.
Then I tried to upgrade Virtualbox to 5.0.20 and the problem came up again in its full extent. It seems that something changed around 5.0.18-5.0.20 that broke the compatibility between Virtualbox, nfsd and macOS. There was no hope. Then a colleague suggested trying something like reversing the NFS shares. This is how I tried that:
# on macOS, /etc/fstab: mount the openBSD export, using a reserved port
192.168.10.25:/home/vagrant/shared_dir /Users/username/shared_dir nfs resvport
# on openBSD, /etc/exports: export the directory to the macOS subnet
/home/vagrant/shared_dir/ -network=192.168.10 -mask=255.255.255.0 -alldirs -mapall=<your_username_here>
# on openBSD: enable and configure the NFS daemons
rcctl enable portmap mountd nfsd
rcctl set nfsd flags -tun 4
# after making changes to the export file
rcctl reload mountd
mount -a
And finally I was able to share the directory I wanted from openBSD to macOS.
But I had a problem with permissions on macOS: I couldn’t read or write to the share. I created a user with the same username in openBSD and assigned him the same uid and gid as on macOS. I don’t know if there are arguments to set the uid/gid of a user in a single command (e.g. adduser), but as far as this worked, I’m fine:
# to get the id info of the user, do this both in openBSD and macOS.
# in my case they were uid=502, gid=20
id my_username
# create a user in openBSD with the same name as in macOS
useradd my_username
# assign uid and gid
usermod -u 502 my_username
usermod -g 20 my_username
and everything from this point worked perfectly!
References:
http://www.cyberciti.biz/faq/apple-mac-osx-nfs-mount-command-tutorial