Setting up passwordless SSH

Information on setting up passwordless SSH from one machine to another.

Background

Source Information Links

  1. https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
  2. https://stackoverflow.com/questions/17846529/could-not-open-a-connection-to-your-authentication-agent
  3. https://stackoverflow.com/questions/64043238/enter-pin-for-authenticator-issue-related-to-ssh

Summary

The Raspberrypi.org documentation gets you up and running with passwordless SSH.

However, when you create the key, if you specify a passphrase you will still need to enter that passphrase on your side to allow use of the key in the connection. You can additionally store the passphrase in your ssh-agent keychain so that you will not need to supply it each time the key is used.

A final caveat: if you are running this from WSL under Windows, ssh-agent is a service that, while running, will keep the WSL VM alive, which manifests as the memory it uses being held on to longer than perhaps is desired.

Implementation

The following summarizes the steps from the RaspberryPi documentation (Source Information Link #1, above).

Check for existing SSH keys

First, check whether there are already keys on the computer you are using to connect to the Raspberry Pi:

ls ~/.ssh

If you see files named id_rsa.pub or id_dsa.pub then you have keys set up already, so you can skip the ‘Generate new SSH keys’ step below.

Generate new SSH keys

To generate new SSH keys enter the following command:

ssh-keygen

Upon entering this command, you will be asked where to save the key. We suggest saving it in the default location (~/.ssh/id_rsa) by pressing Enter.

You will also be asked to enter a passphrase, which is optional. The passphrase is used to encrypt the private SSH key, so that if someone else copied the key, they could not impersonate you to gain access. If you choose to use a passphrase, type it here and press Enter, then type it again when prompted. Leave the field empty for no passphrase.
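
If you would rather not answer the prompts interactively, ssh-keygen also accepts the key type, size, output file, and comment as arguments. A sketch (the comment is arbitrary; you will still be prompted for a passphrase unless you also pass -N):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "user@host"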

Now look inside your .ssh directory:

ls ~/.ssh

and you should see the files id_rsa and id_rsa.pub:

authorized_keys id_rsa id_rsa.pub known_hosts

The id_rsa file is your private key. Keep this on your computer.

The id_rsa.pub file is your public key. This is what you share with machines that you connect to: in this case your Raspberry Pi. When the machine you try to connect to matches up your public and private key, it will allow you to connect.

Take a look at your public key to see what it looks like:

cat ~/.ssh/id_rsa.pub

It should be in the form:

ssh-rsa <REALLY LONG STRING OF RANDOM CHARACTERS> user@host

Copy your public key to your Raspberry Pi

Using the computer which you will be connecting from, append the public key to your authorized_keys file on the Raspberry Pi by sending it over SSH:

ssh-copy-id <USERNAME>@<IP-ADDRESS>

Note that for this step you will need to authenticate with your password.

Alternatively, if ssh-copy-id is not available on your system, you can copy the file manually over SSH:

cat ~/.ssh/id_rsa.pub | ssh <USERNAME>@<IP-ADDRESS> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

If you see the message ssh: connect to host <IP-ADDRESS> port 22: Connection refused and you know the IP-ADDRESS is correct, then you may not have enabled SSH on your Raspberry Pi. Run sudo raspi-config in the Pi’s terminal window, enable SSH, then try to copy the files again.

Now try ssh <USER>@<IP-ADDRESS> and you should connect without a password prompt.
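
If you are still prompted for a password, running the client verbosely shows which keys it offers and why they are refused:

ssh -v <USER>@<IP-ADDRESS>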

If you see the message "Agent admitted failure to sign using the key", add your RSA or DSA identities to the authentication agent, ssh-agent, by executing the following command:

ssh-add

If this does not work, you can get assistance on the Raspberry Pi forums.

Note: you can also send files over SSH using the scp command (secure copy). See the SCP guide for more information.

Note: I ran into this issue and found some additional information on https://stackoverflow.com/questions/17846529/could-not-open-a-connection-to-your-authentication-agent

The #1 rated answer solved my problem:

Did You Start ssh-agent?

You might need to start ssh-agent before you run the ssh-add command:

eval `ssh-agent -s`
ssh-add

Note that this will start the agent for msysgit Bash on Windows. If you’re using a different shell or operating system, you might need to use a variant of the command, such as those listed in the other answers.

See the following answers:

  1. ssh-add complains: Could not open a connection to your authentication agent
  2. Git push requires username and password (contains detailed instructions on how to use ssh-agent)
  3. How to run (git/ssh) authentication agent?.
  4. Could not open a connection to your authentication agent

To automatically start ssh-agent and allow a single instance to work in multiple console windows, see Start ssh-agent on login.
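
A minimal sketch of that idea for ~/.bashrc (assumes bash; the linked answer covers sharing one agent across multiple windows, which this simplification does not do):

# Start an agent and load the default key if no agent is reachable yet.
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)" > /dev/null
    ssh-add ~/.ssh/id_rsa
fi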

Why do we need to use eval instead of just ssh-agent?

To find out why, see Robin Green’s answer.

Public vs Private Keys

Also, whenever I use ssh-add, I always add private keys to it. The file ~/.ssh/id_rsa.pub looks like a public key, I’m not sure if that will work. Do you have a ~/.ssh/id_rsa file? If you open it in a text editor, does it say it’s a private key?

Adjust permissions for your home and .ssh directories

If you can’t establish a connection after following the steps above there might be a problem with your directory permissions. First, you want to check the logs for any errors:

tail -f /var/log/secure
# might return:
Nov 23 12:31:26 raspberrypi sshd[9146]: Authentication refused: bad ownership or modes for directory /home/pi

If the log says Authentication refused: bad ownership or modes for directory /home/pi there is a permission problem regarding your home directory. SSH needs your home and ~/.ssh directory to not have group write access. You can adjust the permissions using chmod:

chmod g-w $HOME
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys

Now only the user itself has access to .ssh and .ssh/authorized_keys in which the public keys of your remote machines are stored.
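
You can confirm the modes with a quick listing; you want to see drwx------ on .ssh and -rw------- on authorized_keys:

ls -ld $HOME $HOME/.ssh $HOME/.ssh/authorized_keys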

Store the passphrase in the macOS keychain

If you are using macOS, and after verifying that your new key allows you to connect, you have the option of storing the passphrase for your key in the macOS keychain. This allows you to connect to your Raspberry Pi without entering the passphrase.

Run the following command to store it in your keychain:

ssh-add -K ~/.ssh/id_rsa

Note: If you run into an "Enter PIN" issue at this step, see Source Information Link #3, above (https://stackoverflow.com/questions/64043238/enter-pin-for-authenticator-issue-related-to-ssh).

Not on macOS?

If you are not on macOS, ssh-add will likely not have a built-in keychain to store the passphrase in; at least that is the case for WSL on Windows, and it matches the ssh-add(1) man page.

Simply remove the -K argument:

ssh-add ~/.ssh/id_rsa
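
Either way, listing the identities the agent currently holds confirms the key was added:

ssh-add -l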

Weekly Unbound Updates on RaspberryPi

Information and configuration steps for setting up an automated cron job to pull fresh root hints weekly and restart unbound after doing so.

Helpful links

Crontab site(s)

Unbound Installation / Scheduling Info

The goal is to create a weekly schedule to run a script which will:

  1. Download a fresh set of root hints for unbound to use once a week
  2. Restart the unbound service to accept the new root hints

Cron Schedule String

The portion of the crontab line that dictates the scheduling.

"At 03:15 every Sunday."

15 3 * * 0

File locations

The locations I am using to store the scripts this will use.

  • /usr/local/bin/
    • update-unbound-root-hints.sh
      • Updates the root hints
    • unbound-weekly-maintenance.sh
      • Calls the update-unbound-root-hints.sh and then restarts unbound

Script Source

The source for the two scripts listed above.

  1. update-unbound-root-hints.sh

    #!/bin/bash
    
    # Download a fresh copy of the root hints from InterNIC and move it into place for unbound
    wget -O root.hints https://www.internic.net/domain/named.root
    mv root.hints /var/lib/unbound/
    
  2. unbound-weekly-maintenance.sh

    #!/bin/bash
    
    # Update root hints
    sh /usr/local/bin/update-unbound-root-hints.sh
    
    # Restart Unbound
    systemctl restart unbound
    
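Before scheduling anything, it is worth running the maintenance script once by hand to confirm the hints download and unbound restarts cleanly (assuming unbound is managed by systemd, as on Raspberry Pi OS):

sudo sh /usr/local/bin/unbound-weekly-maintenance.sh
systemctl status unbound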

Example Crontab line

# backup using the rsbu program to the internal 4TB HDD and then 4TB external
01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2

As a user Crontab line

Some jobs are more appropriate to run under a user context rather than root. The following crontab line would be added to the user's crontab file by running crontab -e as that user.

15 3 * * 0 sh /usr/local/bin/unbound-weekly-maintenance.sh > /usr/local/bin/unbound-weekly-maintenance.log 2>&1

The /etc/crontab line

System-wide jobs seem more appropriate to run via the /etc/crontab file, specifying the user in question.

15 3    * * 0   root    sh /usr/local/bin/unbound-weekly-maintenance.sh > /usr/local/bin/unbound-weekly-maintenance.log 2>&1

Pi-hole specific cron jobs

My pi-hole has this cron configuration in: /etc/cron.d/pihole.conf

# This file is under source-control of the Pi-hole installation and update
# scripts, any changes made to this file will be overwritten when the software
# is updated or re-installed. Please make any changes to the appropriate crontab
# or other cron file snippets.

# Pi-hole: Update the ad sources once a week on Sunday at a random time in the
#          early morning. Download any updates from the adlists
#          Squash output to log, then splat the log to stdout on error to allow for
#          standard crontab job error handling.
16 4   * * 7   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updateGravity >/var/log/pihole_updateGravity.log || cat /var/log/pihole_updateGravity.log

# Pi-hole: Flush the log daily at 00:00
#          The flush script will use logrotate if available
#          parameter "once": logrotate only once (default is twice)
#          parameter "quiet": don't print messages
00 00   * * *   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole flush once quiet

@reboot root /usr/sbin/logrotate /etc/pihole/logrotate

# Pi-hole: Grab local version and branch every 10 minutes
*/10 *  * * *   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker local

# Pi-hole: Grab remote version every 24 hours
38 14  * * *   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker remote
@reboot root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker remote reboot

crontab -e as user pi yields the following configuration. Note that the line #0 2 1 */4 * /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log is commented out. There is nothing wrong with this, but since it is commented out there is nothing here to do.

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
#0 2 1 */4 *  /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log

Whereas running sudo crontab -e yields the following. Note that 0 2 1 */4 * /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log is not commented out here for the root user.

The schedule below is

At 02:00 on day-of-month 1 in every 4th month.

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
0 2 1 */4 *  /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log

Windows Stat Collection with Telegraf, InfluxDB and Grafana

A small screenshot of the Telegraf & Influx Windows Host Overview Dashboard.

Pre-requisites

  1. InfluxDB

    • Not in scope of this guide.
  2. Grafana

    • Not in scope of this guide.
  3. Creating a new database for Telegraf stats.

    Replace localhost in the command below with the hostname or IP address of your InfluxDB server if InfluxDB is not running locally.

    curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE telegraf"
    
  4. Add an InfluxDb data source

    • Click the settings Gear Icon and choose the Data Sources option

    • Click the Add Data Source button

    • Find the InfluxDb data source and choose Select

    • Set the HTTP Url setting to the InfluxDb container’s IP address and port

      http://172.17.0.2:8086

    • Set the database name to the database name you created in the previous step (telegraf is the default.)

    • Click the Save and Test button to verify Grafana can connect to InfluxDb.

Telegraf for Windows installation

  1. Download Telegraf for Windows and extract it to your drive.

  2. Extract and update telegraf.conf

    • Create the install directory for telegraf; this guide will use c:\telegraf-1.17.0.

      mkdir "c:\telegraf-1.17.0"
      cd "c:\telegraf-1.17.0"
      telegraf.exe config > telegraf.conf
      
    • Then open telegraf.conf in a text editor and update it as you see fit. At a bare minimum, update:

      the agent section's interval key to 10 seconds.

      [agent]
      ## Default data collection interval for all inputs
      interval = "10s"
      
    • Update the InfluxDB output section's urls and database keys with the URL of the InfluxDB server and, if you do not want to use the default (telegraf), the database name.

      [[outputs.influxdb]]
      urls = ["http://192.168.2.221:8086"]
      database = "telegraf"
      
    • Finally, locate the [[inputs.win_perf_counters]] section and replace it completely with the following.

      See Collector Configuration Details from https://grafana.com/grafana/dashboards/1902 for more details.

      [[inputs.win_perf_counters]]
      [[inputs.win_perf_counters.object]]
      # Processor usage, alternative to native, reports on a per core.
      ObjectName = "Processor"
      Instances = ["*"]
      Counters = [
          "% Idle Time",
          "% Interrupt Time",
          "% Privileged Time",
          "% User Time",
          "% Processor Time"
      ]
      Measurement = "win_cpu"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      # Disk times and queues
      ObjectName = "LogicalDisk"
      Instances = ["*"]
      Counters = [
          "% Idle Time",
          "% Disk Time",
          "% Disk Read Time",
          "% Disk Write Time",
          "% User Time",
          "% Free Space",
          "Current Disk Queue Length",
          "Free Megabytes",
          "Disk Read Bytes/sec",
          "Disk Write Bytes/sec"
      ]
      Measurement = "win_disk"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      ObjectName = "System"
      Counters = [
          "Context Switches/sec",
          "System Calls/sec",
          "Processor Queue Length",
          "Threads",
          "System Up Time",
          "Processes"
      ]
      Instances = ["------"]
      Measurement = "win_system"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      # Example query where the Instance portion must be removed to get data back,
      # such as from the Memory object.
      ObjectName = "Memory"
      Counters = [
          "Available Bytes",
          "Cache Faults/sec",
          "Demand Zero Faults/sec",
          "Page Faults/sec",
          "Pages/sec",
          "Transition Faults/sec",
          "Pool Nonpaged Bytes",
          "Pool Paged Bytes"
      ]
      # Use 6 x - to remove the Instance bit from the query.
      Instances = ["------"]
      Measurement = "win_mem"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      # more counters for the Network Interface Object can be found at
      # https://msdn.microsoft.com/en-us/library/ms803962.aspx
      ObjectName = "Network Interface"
      Counters = [
          "Bytes Received/sec",
          "Bytes Sent/sec",
          "Packets Received/sec",
          "Packets Sent/sec"
      ]
      Instances = ["*"] # Use 6 x - to remove the Instance bit from the query.
      Measurement = "win_net"
      #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
      
      [[inputs.win_perf_counters.object]]
      # Process metrics
      ObjectName = "Process"
      Counters = [
          "% Processor Time",
          "Handle Count",
          "Private Bytes",
          "Thread Count",
          "Virtual Bytes",
          "Working Set"
      ]
      Instances = ["*"]
      Measurement = "win_proc"
      #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
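
    • Optionally, verify the configuration before installing the service. The --test flag gathers metrics once and prints them to the console instead of writing to InfluxDB (a quick sanity check; run it from the install directory):

      telegraf.exe --config "c:\telegraf-1.17.0\telegraf.conf" --test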
      
  3. Install Telegraf as a service

    • Start cmd.exe as an administrator.

    • execute the following command to have telegraf install itself as a service:

      telegraf.exe --service install --config "c:\telegraf-1.17.0\telegraf.conf"
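
    • Then start the service. The service name should default to telegraf, so net start should work (starting it from the Services console works too):

      net start telegraf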
      
  4. Import the Telegraf & Influx Windows Host Overview dashboard

Unifi-poller Setup and Configuration

A display of the Unifi-poller dashboards.

Background

The unifi-poller depends on either InfluxDb or Prometheus for the database and Grafana for the data display. This post documents creating three separate docker containers and configuring them so that unifi-poller feeds an influx database that Grafana uses to display our data.

The information in this guide was inspired by and based on the unifi-poller project.

Perhaps my ideal would be something similar to this docker compose guide at https://nerdygeek.uk/2020/06/18/unifi-poller-an-easy-step-by-step-guide/, which lends itself more to a single appliance, but I have other uses for Grafana and InfluxDb so this works for my current needs.

Sections

  1. Pre-requisites
  2. InfluxDb
  3. Grafana
  4. Unifi-poller
  5. Enjoy the fruits of your labors
  6. How to get the docker container id of a running docker container
  7. How to get the IP address of a running docker container
  8. How to get a shell inside a running docker container
    • How to exit a shell inside a docker container
  9. How to restart a docker container
  10. How to create a named docker volume
  11. How to remove a docker container

Pre-Requisites

  1. Create a read-only administrator account on the Unifi controller for use by the unifi-poller

    • Add a user to the UniFi Controller. After logging into your controller:
    1. Go to Settings -> Admins
    2. Add a read-only user (unifipoller) with a nice long password.
    3. The new user needs access to each site.
      • For each UniFi Site you want to poll, add admin via the ‘Invite existing admin’ option.
    4. Take note of this info, you need to put it into the unifi-poller config file in a moment.
  2. Install Docker

    • Make sure the Pi is up to date:

      sudo apt-get update && sudo apt-get upgrade
      
    • Agree to any updates, and wait for them to install. Then run the one-line command below to install Docker. It is worth noting that this downloads a script from the internet and runs it in your bash shell:

      curl -sSL https://get.docker.com | sh
      
    • Now add the user pi to the docker group, so as to avoid permissions issues later. Please note, this guide may not use sudo consistently in its shell commands.

      sudo usermod -aG docker pi
      
    • Reboot the Pi

      sudo reboot
      
    • Make sure everything is working by running

      docker run hello-world
      
    • The output should contain the magic lines:

      Hello from Docker!

  3. Create locations on the docker host for the configuration and data files that will be mapped into the containers for use.

    • Create the following directories:

      • A host location to hold files and directories one would normally use /etc/ for within the container.
        • /etc/docker_hosts
      • The influxdb directory will be used to hold the influxdb.conf file.
        • /etc/docker_hosts/influxdb
      • The unifi-poller directory will be used to hold the unifi-poller.conf file.
        • /etc/docker_hosts/unifi-poller
    • Optional steps

      These steps are only needed for mapping a host's local directory as a volume into a docker container. This guide uses docker named volumes for influxdb and grafana, which allows docker to manage the storage rather than the end-user.

      How to use this method is described in the How to create a named docker volume section, should you choose to use that instead.

      • /var/lib/docker_hosts/grafana
      • /var/lib/docker_hosts/influxdb
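
    • A one-shot sketch to create the host-side config directories (add the /var/lib paths only if you opt for mapped directories instead of named volumes):

      sudo mkdir -p /etc/docker_hosts/influxdb /etc/docker_hosts/unifi-poller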

InfluxDb

  1. Create config file

    Change to the docker_hosts/influxdb directory. You will want to be sudo/root for this.

    cd /etc/docker_hosts/influxdb
    

    Run the following to extract the influxdb.conf to the directory.

    docker run --rm influxdb influxd config > influxdb.conf
    
  2. Create the docker named volume for the influxdb data:

    docker volume create influxdb_data
    
  3. Run the container as a daemon (the -d argument specifies daemon mode.)

    docker run --name influxdb -d -p 8086:8086 -v influxdb_data:/var/lib/influxdb -v /etc/docker_hosts/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro influxdb -config /etc/influxdb/influxdb.conf
    

    To break down the above docker command:

    • --name influxdb: sets the container name to influxdb
    • -d: runs the container as a service/daemon
    • -p 8086:8086: maps host port 8086 (the left value) to container port 8086 (the right value)
    • -v influxdb_data:/var/lib/influxdb: maps the docker named volume influxdb_data on the host to /var/lib/influxdb inside the container, so the data persists on the host
    • -v /etc/docker_hosts/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro: maps the host file /etc/docker_hosts/influxdb/influxdb.conf to /etc/influxdb/influxdb.conf in the container as a read-only (:ro) file
    • influxdb: the image to run; docker resolves this
    • -config /etc/influxdb/influxdb.conf: tells influxd to use the configuration file at /etc/influxdb/influxdb.conf inside the container
  4. Create a database

    • Documentation as posted (https://hub.docker.com/_/influxdb/) has an issue. The command:

      curl -G http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
      
    • Using GET to transfer the data is flagged as a deprecated call:

      {"results":[{"statement_id":0,"messages":[{"level":"warning","text":"deprecated use of 'CREATE DATABASE mydb' in a read only context,      please use a POST request instead"}]}]}
      
    • Instead, use POST:

      curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
      

    Additional info: https://github.com/influxdata/docs.influxdata.com-ARCHIVE/issues/493

  5. Insert a Row

    curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
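
    To verify the write landed, query the point back:

    curl -G 'http://localhost:8086/query' --data-urlencode "db=mydb" --data-urlencode "q=SELECT * FROM cpu_load_short"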
    

Grafana

  1. Pull down the docker Grafana image

    docker pull grafana/grafana:latest
    
  2. Create the docker named volume for the grafana data:

    docker volume create grafana_data
    
  3. Run the container as a daemon (the -d argument specifies daemon mode.)

    docker run --name grafana -d -p 3000:3000 -v grafana_data:/var/lib/grafana grafana/grafana:latest
    
  4. Navigate to the Grafana web ui: http://localhost:3000

    • default username: admin
    • default password: admin
  5. Add an InfluxDb data source

    • Click the settings Gear Icon and choose the Data Sources option

    • Click the Add Data Source button

    • Find the InfluxDb data source and choose Select

    • Set the HTTP Url setting to the InfluxDb container’s IP address and port

      1. Find the IP of the influxdb container (see section How to get the IP address of a running docker container)

      2. Add the url including the port number used (8086 was used in this guide.)

      http://172.17.0.2:8086

    • Set the Database value to the name of the database you created during the InfluxDb steps. The guide uses mydb

    • Click the Save and Test button to verify Grafana can connect to InfluxDb.

  6. Go on to add and configure the Grafana dashboards for unifi-poller.

  7. Install the additional Grafana Plug-ins that the unifi-poller dashboards use.

Grafana dashboards

Source: https://grafana.com/grafana/dashboards?search=unifi-poller

  1. Navigate to the Grafana web ui: http://localhost:3000

  2. Click on the + Create icon and choose the Import option

  3. In the Import via grafana.com textbox, put the import code of the dashboard below to install.

    • Ensure you choose the InfluxDb Data Source in the drop-down at the bottom labeled: Select an InfluxDB data source
  4. Import the following Dashboards

Grafana Plug-ins

Additional Grafana plug-ins used by the Unifi-poller dashboards:

  1. grafana-piechart-panel
  2. grafana-clock-panel
  3. natel-discrete-panel

Note: Grafana requires a restart after installing new plug-ins.

Plug-in Installation

Perform the following from within the running Docker container (see section How to get a shell inside a running docker container):

grafana-cli plugins install PLUGIN-NAME
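
For example, to install all three from the docker host and then restart the container so Grafana loads them (assuming the container is named grafana, as in this guide):

docker exec -it grafana grafana-cli plugins install grafana-piechart-panel
docker exec -it grafana grafana-cli plugins install grafana-clock-panel
docker exec -it grafana grafana-cli plugins install natel-discrete-panel
docker restart grafana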

Unifi-poller

Based on the instructions here: https://github.com/unifi-poller/unifi-poller

  1. Pull down the docker unifi-poller image

    docker pull golift/unifi-poller
    
  2. Create a copy of unifi-poller.conf in the /etc/docker_hosts/unifi-poller directory on the host you created earlier.

    • Edit /etc/docker_hosts/unifi-poller/unifi-poller.conf as needed

    • I disabled Prometheus support as I am using Influx

      [prometheus]
      disable = true
      
    • Configure InfluxDb. I am running the defaults with no username or password, so only the url and disable = false needed to be updated.

      Do note: the IP address in the example below is the InfluxDb container's IP address.

      [influxdb]
      disable = false
      # InfluxDB does not require auth by default, so the user/password are probably unimportant.
      url  = "http://172.17.0.3:8086"
      db   = "mydb"
      
    • Configure the unifi.defaults section.

      InfluxDb supports more items than Prometheus, so those are enabled below, as well as saving the Deep Packet Inspection data (save_dpi).

      # The following section contains the default credentials/configuration for any
      # dynamic controller (see above section), or the primary controller if you do not
      # provide one and dynamic is disabled. In other words, you can just add your
      # controller here and delete the following section. The internal defaults are
      # shown below. Any missing values will assume these displayed defaults.
      [unifi.defaults]
      url            = "https://192.168.2.1:8443"
      user           = "unifi-admin-unifipoller-username"
      pass           = "unifi-admin-unifipoller-password"
      sites          = ["all"]
      save_ids       = true
      save_events    = true
      save_alarms    = true
      save_anomalies = true
      save_dpi       = true
      save_sites     = true
      hash_pii       = false
      verify_ssl     = false
      
  3. Run the unifi-poller container as a daemon (the -d argument specifies daemon mode.)

    docker run --name unifi-poller -d -v /etc/docker_hosts/unifi-poller/unifi-poller.conf:/config/unifi-poller.conf golift/unifi-poller
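
    To confirm the poller is reaching the controller and writing to InfluxDB, tail its logs:

    docker logs -f unifi-poller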
    

Enjoy the fruits of your labors

  1. Navigate back to your Grafana instance: http://localhost:3000
  2. Click the Dashboard icon and choose the Manage option.
  3. Click one of the imported dashboards to view the beautiful data.

How to get the docker container id of a running docker container

docker ps

Results:

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
3183bbb971ed        grafana/grafana:latest   "/run.sh"                3 hours ago         Up 3 hours          0.0.0.0:3000->3000/tcp                           grafana2
dfbce9a7c751        golift/unifi-poller      "/image"                 4 hours ago         Up 3 hours                                                           unifi-poller
a6e4f76a2677        influxdb                 "/entrypoint.sh -con…"   6 hours ago         Up 6 hours          0.0.0.0:8086->8086/tcp                           influxdb

How to get the IP address of a running docker container

  1. Get the container id of the container you’d like the IP address from

  2. Execute the ip command within the container

    docker exec influxdb ip -4 -o address
    

    Results

    1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
    21: eth0    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0\       valid_lft forever preferred_lft forever
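
    Alternatively, docker inspect can print just the address via a Go template (this form covers containers on the default bridge network):

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' influxdb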
    

How to get a shell inside a running docker container

  1. Get the container id of the container you want to start the shell in

  2. Use docker exec to open a shell

    docker exec -it 3183bbb971ed sh
    

How to exit a shell inside a docker container

  • exit command at the shell
  • CTRL+P followed by CTRL+Q docker shell escape sequence

How to restart a docker container

Source: https://docs.docker.com/engine/reference/commandline/container_restart/

  1. Get the container id of the container you want to restart

  2. Use docker container restart to restart it. The below will restart it immediately.

    docker container restart 3183bbb971ed
    

How to create a named docker volume

A "named docker volume" is simply a location on the host that docker is aware of. Volumes can be used by multiple containers and are available on the host as well. This is a convenient way to save application data and configurations to persist across container runs or the containers that reference them.

Named volumes are not required; the same thing can be done manually using a full host path with the docker run -v argument:

  • Named volume: docker run -v influxdb_data:/var/lib/influxdb
  • Mapped directory: docker run -v /var/docker_hosts/influxdb/influxdb_data:/var/lib/influxdb

To create a named volume

docker volume create [volume_name]

Example:

docker volume create influxdb_data

which can then later be attached with the -v command line argument of docker run:

docker run --name influxdb -d -p 8086:8086 -v influxdb_data:/var/lib/influxdb […]
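
To see where docker stores a named volume on the host, inspect it:

docker volume inspect influxdb_data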

How to remove a docker container

If you need to remove a docker container and re-run it with different starting arguments, you can remove an existing docker container using the docker container rm command:

  1. Get the container id or name of the container you want to remove

  2. Use docker container rm to remove it

    docker container rm 3183bbb971ed
    

    or

    docker container rm grafana
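
    Note that a running container must be stopped before it can be removed:

    docker stop grafana
    docker container rm grafana

    or, force-remove it in one step:

    docker container rm -f grafana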
    

Markdown Test

Markdown

Markdown is a lightweight markup language with plain text formatting syntax. Its design allows it to be converted to many output formats, but the original tool by the same name only supports HTML. Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor. Since the initial description of Markdown contained ambiguities and unanswered questions, the implementations that appeared over the years have subtle differences and many come with syntax extensions. Wikipedia

It is a delightful way to simply work with text and formatting in a single place. It really only gets tricky when you want to view the final stylized product. Anything that can save plain text can save the file, but some interpretation of the file is required to transform it to its marked up view.

This is a simple test of an iOS app called MWeb which has the facility to push to a WordPress site. I wanted to see how that worked and here we are!

Getting my new old job

I just upgraded my blog to a new version and found this old draft from October 19th, 2011. I’m not sure why I never published it, but I am now. It is not particularly interesting or exciting, but enjoy my thoughts.

This past Monday I started a new job, well sort-of new anyway. I find myself at a former employer of mine again, this time as a contractor.  “Always move forward, never back” is my philosophy when it comes to jobs. Vesting in my work is part of my method and it is difficult to untangle business from this investment. It’s just easier to keep moving forward, never mind the tangential concept of being able to follow opportunities that otherwise may have not come your way.

Opportunities seem to work in a very subtle way. I find I am much more likely to spot an opportunity when I need an opportunity. I bet they are everywhere and I simply do not see them, or fail to act on them.

The story starts at our annual Weenie Roast this past summer. A good friend of mine is there and I mention my concerns with my contract at the time. The job was great, the people were great, the commute was awesome! Unfortunately their standard contract term of 2 months at a time was proving difficult.

If you’ve never spent a few years as a contractor, you may not know. The most difficult time to get a new gig is the last two-thirds of the 4th quarter. There is a small burst near the end as any remaining budget surpluses are used to avoid being allocated less next year.

I mentioned my fears to my buddy and he told me to polish up the resume and he would put it in the process; an opening was in the pipe. In the interest of permanent employ, I did exactly that.

This was midsummer; I was squarely in a contract with plenty of projects and budget on the horizon, and the time was right to lay some groundwork. I fully wanted to and expected to stay where I was. A good team is hard to come by, special in my industry, and worth hanging on to.

Tick-tock, time passes, and mid-September comes along; my team lead voiced some concerns that project and budget were showing signs of beginning to lag. He used to be a contractor and appreciated the need to move light and maintain your reputation. This daddy’s got bills to pay!

I called my recruiter at the contracting company I work for and gave her the heads up. And I will say this: I really appreciate the way that this place has handled me and my career. Very professional, and they made me feel like a person and not just contractor number QQ3984.

In the meantime, my buddy’s organization had contacted me and I had a few phone screens and had to fly out to Oklahoma (beautiful state; I wish I had been able to take in some sights). I really liked everyone I spoke with and met, truly a great group of guys and gals! The one thing that worried me was that primary management was in California, and it was a marketing company. This in and of itself is not a big deal, but if you’re in IT, the goals of a marketing company differ from those of a systems or business company. Goal alignment may mismatch. This has been my experience; mileage may vary.

The day after the trip to OK, my recruiter called with an opportunity from a former employer of mine. I’ll be honest, I was at this place for 4 years and I truly enjoyed every second with the people I worked with. The politics I was not so fond of, but the culture was great as well, which I find I appreciate more the older I get. I have earnestly been trying to get back in there since I left; their HR department is a hard nut to crack. So I quickly accepted and then had a phone screen.

Now here’s where the story begins to get weird, at least for me. I ended up receiving an offer from both my buddy’s place, and my former employer, and it looked like my current employer would be able to renew me for two or three more terms based on what was pending. Three options, I am never that guy with options.


Disable Hyper-V under Windows 10

Image of the RAW mode error.

Trying to disable Hyper-V under Windows 10 is a little trickier than simply uninstalling it; but not much!

tl;dr If you’re just interested in how to disable Hyper-V so VirtualBox can get RAW mode, skip down to the “Disable Hyper-V” section below.

Some background

I am a fan of virtual machines but I don’t get to use them often at work. I do use them at home to evaluate operating systems like Linux from time to time for fun and staying aware.

Most of my computer use is with Microsoft platforms though I have had a few Macs and Linux desktops over the years.

The addition of Hyper-V to the Windows 10 operating system is exciting because it reduces the number of software setups required after a fresh install – everybody has time for that!

I have been digging into privacy and security lately; no doubt you’ve seen the data-breach and customer-privacy-abuse headlines over the years.

I have used the Tor browser by the Tor Project, which led me to their Tails operating system, “a live operating system that you can start on almost any computer from a USB stick or a DVD.”

The plot thickens

While looking into Tails, two other security- and privacy-minded operating systems turned up as more frequently recommended (high counts of recommendations, not necessarily authoritative ones): Qubes and Whonix.

This post won’t go into the details of what these operating systems offer, but they were interesting enough to make it on to the list for evaluation.

And the Gotcha!

Qubes will not install on a Hyper-V virtual machine. Hyper-V can run many systems; however, Qubes is not one of them. The only choice is disabling Hyper-V.

More digging uncovered reports of being able to convert an existing Qubes VM to the Hyper-V format. My use case is to install the operating system manually to an empty VM.

There are third-party Qubes VMs out there, but not being able to build it yourself stands against the fundamental privacy premise.

Starting a VirtualBox virtual machine with an active Hyper-V will cause the following error:

RAW mode error caused by Hyper-V being active. “Raw-mode is unavailable courtesy of Hyper-V. (VERR_SUPDRV_NO_RAW_MODE_HYPER_V_ROOT).”

Disable Hyper-V

Step 1: Disable Hyper-V Windows 10 features

  • Start the Turn Windows features on or off application
  • Ensure the following items’ checkbox icons are unchecked (a PowerShell alternative follows this list):
    • Hyper-V and its sub-items
      • Hyper-V Management Tools
        • Hyper-V GUI Management Tools
        • Hyper-V Module for Windows PowerShell
      • Hyper-V Platform
        • Hyper-V Hypervisor
        • Hyper-V Services
    • Windows Hypervisor Platform
    • Virtual Machine Platform (may be unrelated, needs additional testing. Not a feature I need or use normally.)
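
The same features can also be turned off from an elevated PowerShell prompt using the built-in Disable-WindowsOptionalFeature cmdlet. A sketch; the exact feature names are my assumption and can be confirmed with Get-WindowsOptionalFeature -Online:

Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
Disable-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform
Disable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform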

Step 2: Ensure Windows 10 boot does not automatically launch Hypervisor

I found this approach in a post at ErpNext.com by Sirjames:

  • Open the command prompt as an Administrator.
  • Run the command with no arguments: bcdedit. Note the property hypervisorlaunchtype is set to Auto by default.
  • Disable Hyper-V by running the command: bcdedit /set hypervisorlaunchtype off
  • Restart the system.

Turning it back on

To re-enable Hyper-V, run the command: bcdedit /set hypervisorlaunchtype auto
You will need to reboot the system before you can use Hyper-V again.

Symbolic Links on Windows 10

Screenshot of Link Shell Extension
This is not meant as a tutorial in any way; I’m simply trying to not forget this, as I need it about once every two years or so and always forget how to do it. The information was found here: https://superuser.com/questions/1020821/how-to-create-a-symbolic-link-on-windows-10

The option I went with was the PowerShell route suggested by Peter Hahndorf:
Open a PowerShell session as elevated administrator:
New-Item -ItemType SymbolicLink -Path E:\Data\MyGames -Target "C:\users\UserName\MyGames"
or using less verbose syntax:
ni E:\Data\MyGames -i SymbolicLink -ta "C:\users\UserName\MyGames"
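The classic cmd.exe equivalent is mklink, run from an elevated Command Prompt (note the order: the new link comes first, the existing target second, and /D is required for directory links):
mklink /D E:\Data\MyGames C:\users\UserName\MyGames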
Another approach is a Windows shell extension, which looked interesting but is probably overkill for my current needs (from odvpbre):
If you want a GUI tool for making/editing symlinks, use http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html

Link Shell Extension (LSE) provides for the creation of Hardlinks, Junctions, Volume Mountpoints, and Windows7/8’s Symbolic Links (herein referred to collectively as Links), a folder cloning process that utilises Hardlinks or Symbolic Links, and a copy process taking care of Junctions, Symbolic Links, and Hardlinks.

LSE, as its name implies, is implemented as a Shell extension and is accessed from Windows Explorer, or similar file/folder managers. The extension allows the user to select one or many files or folders, then using the mouse, complete the creation of the required Links – Hardlinks, Junctions or Symbolic Links or in the case of folders to create Clones consisting of Hard or Symbolic Links.

LSE is supported on all Windows versions that support NTFS version 5.0 or later, including Windows XP64 and Windows7/8/10. Hardlinks, Junctions and Symbolic Links are NOT supported on FAT file systems, and nor is the Cloning and Smart Copy process supported on FAT file systems.
Screenshot of Link Shell Extension
Link Shell Extension
Some additional information on the different types of links, from http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html#hardlinks :

Hardlinks are a feature common to many Unix based systems, but are not directly available with NT4/W2K/WXP. It is a feature which must be supported by the file system of the operating system. So what are Hardlinks? It is common to think of a file as being an association between a file name and a data object. Using Windows Explorer, the file system can be readily browsed, showing a 1:1 relationship between the file name and the data object, but this 1:1 relationship does not hold for all file systems. Some file systems, including UFS, XFS, and NTFS, have a N:1 relationship between file name and the data object, hence there can be more than one directory entry for a file.

So, how does one create multiple entries for the same data object? In Unix there is a command line utility ln, which is used to create link entries for existing files, hence there are many file names, or so called Hardlinks, for the one data object. For each Hardlink created, the file system increments a reference count stored with the data object, i.e. it stores how many file names refer to the data object; this counter is maintained (by the file system) within the data object itself. When a file name referencing a data object is deleted, the data object’s reference count is decremented by one. The data object itself only gets deleted when the reference count is decremented to zero. The reference count is the only way of determining whether there are multiple file name references to a data object, and it only informs of their number, NOT their whereabouts.

Junctions are wormholes in the tree structure of a directed graph. By browsing a Junction, a maybe far distant location in the file system is made available. Modifying, Creating, Renaming and Deleting files within a junction tree structure operates at the junction target, i.e. if you delete a file in a Junction it is deleted at the original location.

Symbolic Links are to files what Junctions are to folders in that they are both transparent and symbolic. Transparency means that an application can access them just as they would any other file; symbolism means that the data objects can reside on any available volume, i.e. they are not limited to a single volume like Hardlinks. Symbolic Links differ from Shortcuts in that they offer a transparent pathway to the desired data object; with a shortcut (.lnk), something has to read and interpret the content of the shortcut file and then open the file that it references (i.e. it is a two step process). When an application uses a symlink it gains immediate access to the data object referenced by the symlink (i.e. it is a one step process).

Limitations

  • Supported platforms are NT4/W2K/WXP/W2K3/W2K3R2/W2K8/W2K8R2/W2K12/W2K12R2/WXP64/Vista/Windows7/8/10 in 32bit, 64bit or Itanium.
  • Hardlinks can only be made on NTFS volumes, under the supported platforms.
  • Hardlinks can only be made within one NTFS volume, and can not span across NTFS volumes.
  • Junctions can not be created on NTFS volumes with NT4.
  • The Pick Link Source and Drop … choices are only visible if it’s possible to create Hardlinks/Junctions/Symbolic Links. E.g.: if you select a file on a FAT drive and press the action button, you won’t see the Pick Link Source in the action menu, because FAT file systems don’t support Hardlinks/Junctions/Symbolic Links. This also happens if you select source files on a network drive, or select a file as destination, etc.
  • There is an OS limit of creating more than 1023 hardlinks per file. This is less known, but it is there.
  • ReFS does not support hardlinks.

My first run with my new Moov 2

This week I received my new Moov fitness tracker, two in fact, because some of the sports it supports make use of multiple trackers (boxing), but the makers claim they are working on supporting multiple trackers for all of their supported activities.

I ordered them maybe two months ago, before they were released, and got them at $50 apiece instead of the $75 they are selling for now. I will probably try the boxing cardio just to check it out, but my main interest, right now anyway, is running and walking.

The Moov device is billed as a fitness trainer who monitors your movements in 3D space and recommends changes to make your form more efficient or safe. She (I’ll call her she, as the application has a woman’s voice; this may be changeable someday, but not at the time of this writing) also tracks your activity as it progresses and announces your split times and distances, or other activity-specific data you might be interested in while you are performing the activity.

When you go to use your Moov, there is a required free smartphone app that you download and pair to the tracker. You then pick an activity (like Running & Walking, Swimming, Cycling, Boxing, Isometric workouts), then you choose a goal for that activity. Running & Walking has “Run farther and easier,” which addresses running efficiency; “Improve my pace and distance” for speed endurance; “Walk to Sweat” for brisk walking; “Push to the limit” for sprint intervals; and “Run my own way” for open run/walk training.

I chose the Run & Walk activity and did a Run my own way workout. For running, the tracker goes around your ankle. The Moov package comes with two bands, a longer one meant for the ankle and a smaller one meant for the wrist. It is very easy to switch between the two.

The Moov also supports a Daily Moov concept where you wear the device all day and night on your wrist, and it tracks your steps and sleep patterns as well. That is great, but it seems weird that as a runner I would have to keep switching: Daily Moov on my wrist for most of my day, then onto my ankle for my runs. Maybe it really doesn’t matter, but the video on Moov’s site says to wear it on your wrist.

I am using the device just for the training aspect at this point, so I only strap it on to my ankle when I’m running; it sits on my nightstand the rest of the time, because my FitBit HR (with display) is already covering that duty and I can just glance at the device instead of navigating through the smartphone app, which, while good, makes a poor clock. Maybe an Apple Watch app can change that?

For the run, I decided to go with a free 5 mile run, which is what I typically log for a “regular” run distance. From recent memory, my average pace for 5 miles is somewhere in the 9:06 to 10:05 minutes-per-mile range. I ended up finishing it in 48 minutes.

I fired up my music, started the Moov activity, and got to running. After about two minutes Moov gave me some preliminary metrics, like current pace and average impact, and maybe mentioned something about cadence. After that she was pretty quiet, only announcing audibly at the mile markers with some quick splits info.

Each time an audible announcement played, my music volume lowered and the Moov lady’s voice was very cleanly mixed in. It really was very unobtrusive.

I’m not sure exactly what I expected, but I was surprised to not get more direction or hints or tips. Maybe my form isn’t as bad as I thought (probably not, complete self-taught amateur here.)

Overall I was happy with the implementation and the feel of the device. Thankfully my iPhone has a pretty decent GPS; I am not so sure the experience would have been as good on my last phone (a Samsung Note 4, which definitely had some GPS tracking issues.)

I am looking forward to trying the isometric 7-minute workouts as well as the speed walking, interval running, and cycling this winter, although most will be indoors as I am not a fan of the cold.

Below are some screen captures from the application that show the analysis of the run.


Display of the Cadence + Range of Motion screen
Cadence and Range of Motion
The route display of the Moov iPhone application.
GPS courtesy my iPhone.
Splits info
Don’t judge, I ran out of juice after the 4th mile of a 5 miler.
The elevation display of the Moov iPhone app.
Not too much of a climb.
The Elevation details graph, page 1.
The Elevation details graph, page 1.
Detailed elevation graph 2.
Detailed elevation graph, page 2.
Impact Detail data.
Impact Detail graph
Range of Motion Graph.
Range of Motion Graph
Cadence Graph.
Cadence Graph
Pace graph.
Pace graph

Musings on a plane

Rachel and I are flying to Charlotte, North Carolina to visit Cristian at school for dinner at the school restaurant, Phidon; he has the role of General Manager for this project. As I sit here, I realize I want to write down a few thoughts.

We are on the first leg of our journey, from Cleveland to Atlanta and our first and only layover until the return flight tomorrow night. Our flight boarded around 6:00 am at the Akron-Canton Airport. It’s a smaller airport and much quicker to get through but it’s a farther drive to get there, especially at a quarter to five in the morning; thank goodness Rachel was driving.

I have a window seat, and as I write this I am looking down onto the tops of the clouds; the view is amazing! My seat is directly next to the engine and I can see most of the wing in my frame of view out of the window, and when I look out it, I can’t help but marvel at the engineering that lets us play in places people aren’t made for.
We must be descending, because the clouds below are now closer and larger. The view is amazing; these phone pictures just don’t really do it justice.

Out the window and up

This is the first airplane trip that Rachel and I have taken together. We have been on trips that involved planes, however we have never been on the same one at the same time. I (well, both of us honestly) have been looking forward to this trip because, though short, it’s just her and I traveling. Now that the kids are old enough, this is some exciting new territory for us; LJ is a sophomore now and will be college bound soon enough, so we will be able to travel more and hopefully move somewhere whose climate agrees better with us.

Looking out the plane window above the cloud line.
Break on through to the other side!

Up here above the clouds, I am reminded that when the skies are dark and overcast on the ground, just above the clouds the skies are bright blue and it’s a beautiful day. I hope I can remember that when my feet are back on the ground.
We are descending now to land in Atlanta, GA at around 8:01 am. As we go through the clouds, you can’t see anything but grey out the window; I can only hope the pilot has ways of seeing through this (OK, I know they do 😉).


How do you navigate through this pea soup?

We’ll be landing soon, here’s hoping the landing gear works!