Setting up passwordless SSH

Information on setting up passwordless SSH from one machine to another.

Background

Source Information Links

  1. https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
  2. https://stackoverflow.com/questions/17846529/could-not-open-a-connection-to-your-authentication-agent
  3. https://stackoverflow.com/questions/64043238/enter-pin-for-authenticator-issue-related-to-ssh

Summary

The Raspberrypi.org documentation gets you up and running with passwordless SSH.

However, if you specify a passphrase when you create the key, you will still need to enter that passphrase on your side before the key can be used for the connection. You can additionally store the passphrase in your ssh agent (or keychain) so that you do not need to supply it on each use.

A final caveat: if you are running this from WSL under Windows, ssh-agent is a long-running process that will keep the WSL VM alive, which means the memory it is using is held on to longer than you might want.
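
If the lingering agent becomes a problem, one workaround (a sketch, assuming the agent was started in the current shell as shown later in this note) is to kill it when you are finished; on WSL 2 you can also stop the whole VM from the Windows side with wsl --shutdown.

# Stop the agent started in this shell and clear SSH_AGENT_PID / SSH_AUTH_SOCK
eval "$(ssh-agent -k)"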

Implementation

Summarizing the steps from the Raspberry Pi documentation (Source Information Link #1, above).

Check for existing SSH keys

First, check whether there are already keys on the computer you are using to connect to the Raspberry Pi:

ls ~/.ssh

If you see files named id_rsa.pub or id_dsa.pub then you have keys set up already, so you can skip the ‘Generate new SSH keys’ step below.

Generate new SSH keys

To generate new SSH keys enter the following command:

ssh-keygen

Upon entering this command, you will be asked where to save the key. We suggest saving it in the default location (~/.ssh/id_rsa) by pressing Enter.

You will also be asked to enter a passphrase, which is optional. The passphrase is used to encrypt the private SSH key, so that if someone else copied the key, they could not impersonate you to gain access. If you choose to use a passphrase, type it here and press Enter, then type it again when prompted. Leave the field empty for no passphrase.

Now look inside your .ssh directory:

ls ~/.ssh

and you should see the files id_rsa and id_rsa.pub:

authorized_keys id_rsa id_rsa.pub known_hosts

The id_rsa file is your private key. Keep this on your computer.

The id_rsa.pub file is your public key. This is what you share with machines that you connect to: in this case your Raspberry Pi. When the machine you try to connect to matches up your public and private key, it will allow you to connect.

Take a look at your public key to see what it looks like:

cat ~/.ssh/id_rsa.pub

It should be in the form:

ssh-rsa <REALLY LONG STRING OF RANDOM CHARACTERS> user@host

Copy your public key to your Raspberry Pi

Using the computer which you will be connecting from, append the public key to your authorized_keys file on the Raspberry Pi by sending it over SSH:

ssh-copy-id <USERNAME>@<IP-ADDRESS>

Note that for this step you will need to authenticate with your password.

Alternatively, if ssh-copy-id is not available on your system, you can copy the file manually over SSH:

cat ~/.ssh/id_rsa.pub | ssh <USERNAME>@<IP-ADDRESS> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

If you see the message ssh: connect to host <IP-ADDRESS> port 22: Connection refused and you know the IP-ADDRESS is correct, then you may not have enabled SSH on your Raspberry Pi. Run sudo raspi-config in the Pi’s terminal window, enable SSH, then try to copy the files again.

Now try ssh <USER>@<IP-ADDRESS> and you should connect without a password prompt.

If you see the message "Agent admitted failure to sign using the key", add your RSA or DSA identities to the authentication agent, ssh-agent, by executing the following command:

ssh-add

If this does not work, you can get assistance on the Raspberry Pi forums.

Note: you can also send files over SSH using the scp command (secure copy). See the SCP guide for more information.

Note: I ran into this issue and found some additional information on https://stackoverflow.com/questions/17846529/could-not-open-a-connection-to-your-authentication-agent

The #1 rated answer solved my problem:

Did You Start ssh-agent?

You might need to start ssh-agent before you run the ssh-add command:

eval `ssh-agent -s`
ssh-add

Note that this will start the agent for msysgit Bash on Windows. If you’re using a different shell or operating system, you might need to use a variant of the command, such as those listed in the other answers.

See the following answers:

  1. ssh-add complains: Could not open a connection to your authentication agent
  2. Git push requires username and password (contains detailed instructions on how to use ssh-agent)
  3. How to run (git/ssh) authentication agent?.
  4. Could not open a connection to your authentication agent

To automatically start ssh-agent and allow a single instance to work in multiple console windows, see Start ssh-agent on login.
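
For reference, a minimal sketch of that approach, assuming bash, a key at ~/.ssh/id_rsa, and a scratch file at ~/.ssh/agent.env (a name chosen here purely for illustration), which could go at the end of ~/.bashrc:

# Reuse a previously started agent if its environment file is still valid,
# otherwise start a new one, record it, and load the key (you will be asked
# for the passphrase once per new agent).
if [ -f ~/.ssh/agent.env ]; then
    . ~/.ssh/agent.env > /dev/null
fi
if [ -z "${SSH_AGENT_PID:-}" ] || ! kill -0 "$SSH_AGENT_PID" 2> /dev/null; then
    ssh-agent -s > ~/.ssh/agent.env
    . ~/.ssh/agent.env > /dev/null
    ssh-add ~/.ssh/id_rsa
fi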

Why do we need to use eval instead of just ssh-agent?

To find out why, see Robin Green’s answer.

Public vs Private Keys

Also, whenever I use ssh-add, I always add private keys to it. The file ~/.ssh/id_rsa.pub looks like a public key, I’m not sure if that will work. Do you have a ~/.ssh/id_rsa file? If you open it in a text editor, does it say it’s a private key?

Adjust permissions for your home and .ssh directories

If you can’t establish a connection after following the steps above there might be a problem with your directory permissions. First, you want to check the logs for any errors:

tail -f /var/log/secure
# might return:
Nov 23 12:31:26 raspberrypi sshd[9146]: Authentication refused: bad ownership or modes for directory /home/pi

If the log says Authentication refused: bad ownership or modes for directory /home/pi there is a permission problem regarding your home directory. SSH needs your home and ~/.ssh directory to not have group write access. You can adjust the permissions using chmod:

chmod g-w $HOME
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys

Now only the user itself has access to .ssh and .ssh/authorized_keys in which the public keys of your remote machines are stored.
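
A quick way to double-check the result (adjust the paths if your setup differs):

# Home should not be group-writable; .ssh should be 700 and authorized_keys 600
ls -ld $HOME $HOME/.ssh $HOME/.ssh/authorized_keys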

Store the passphrase in the macOS keychain

If you are using macOS, and after verifying that your new key allows you to connect, you have the option of storing the passphrase for your key in the macOS keychain. This allows you to connect to your Raspberry Pi without entering the passphrase.

Run the following command to store it in your keychain:

ssh-add -K ~/.ssh/id_rsa

Note: if you are prompted with an unexpected "Enter PIN for authenticator" message, see Source Information Link #3: https://stackoverflow.com/questions/64043238/enter-pin-for-authenticator-issue-related-to-ssh

Not on macOS?

If you are not on macOS, ssh-add likely does not have a built-in keychain to store the passphrase in; that is the case for WSL on Windows, and is consistent with the ssh-add(1) man page.

Simply remove the -K argument:

ssh-add ~/.ssh/id_rsa
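
Either way, you can confirm the key is actually loaded by listing the identities the agent holds:

ssh-add -l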

Weekly Unbound Updates on RaspberryPi

Information and configuration steps for setting up an automated cron job to pull fresh root hints weekly and restart unbound afterwards.

Helpful links

Crontab site(s)

Unbound Installation / Scheduling Info

The goal is to create a weekly schedule to run a script which will:

  1. Download a fresh set of root hints for unbound to use once a week
  2. Restart the unbound service to accept the new root hints

Cron Schedule String

The portion of the crontab line that dictates the scheduling.

"At 03:15 every Sunday."

15 3 * * 0

File locations

The locations I am using to store the scripts this job will use.

  • /usr/local/bin/
    • update-unbound-root-hints.sh
      • Updates the root hints
    • unbound-weekly-maintenance.sh
      • Calls the update-unbound-root-hints.sh and then restarts unbound

Script Source

The source for the two scripts listed above.

  1. update-unbound-root-hints.sh

    #!/bin/bash
    
    # Download a fresh copy of the root hints and move it into place for unbound
    wget -O root.hints https://www.internic.net/domain/named.root
    mv root.hints /var/lib/unbound/
    
  2. unbound-weekly-maintenance.sh

    #!/bin/bash
    
    #Update root hints
    sh /usr/local/bin/update-unbound-root-hints.sh
    
    #Restart Unbound
    systemctl restart unbound
    

Example Crontab line

# backup using the rsbu program to the internal 4TB HDD and then 4TB external
01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2

As a user Crontab line

Some jobs are more appropriate to run under a user context rather than root. The following crontab line would be added to the user's crontab file by running crontab -e as that user.

15 3 * * 0 sh /usr/local/bin/unbound-weekly-maintenance.sh > /usr/local/bin/unbound-weekly-maintenance.log 2>&1

The /etc/crontab line

System-wide jobs seem more appropriate to run via the /etc/crontab file, which specifies the user to run as in an extra field.

15 3    * * 0   root    sh /usr/local/bin/unbound-weekly-maintenance.sh > /usr/local/bin/unbound-weekly-maintenance.log 2>&1
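
After the job has run (or after running the script by hand once), a quick sanity check is to look at the log the crontab line writes and confirm unbound came back up:

cat /usr/local/bin/unbound-weekly-maintenance.log
systemctl status unbound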

Pi-hole specific cron jobs

My pi-hole has this cron configuration in: /etc/cron.d/pihole.conf

# This file is under source-control of the Pi-hole installation and update
# scripts, any changes made to this file will be overwritten when the software
# is updated or re-installed. Please make any changes to the appropriate crontab
# or other cron file snippets.

# Pi-hole: Update the ad sources once a week on Sunday at a random time in the
#          early morning. Download any updates from the adlists
#          Squash output to log, then splat the log to stdout on error to allow for
#          standard crontab job error handling.
16 4   * * 7   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updateGravity >/var/log/pihole_updateGravity.log || cat /var/log/pihole_updateGravity.log

# Pi-hole: Flush the log daily at 00:00
#          The flush script will use logrotate if available
#          parameter "once": logrotate only once (default is twice)
#          parameter "quiet": don't print messages
00 00   * * *   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole flush once quiet

@reboot root /usr/sbin/logrotate /etc/pihole/logrotate

# Pi-hole: Grab local version and branch every 10 minutes
*/10 *  * * *   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker local

# Pi-hole: Grab remote version every 24 hours
38 14  * * *   root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker remote
@reboot root    PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updatechecker remote reboot

Running crontab -e as user pi yields the following configuration. Note that the line #0 2 1 */4 * /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log is commented out. There is nothing wrong with that; it simply means this crontab has nothing to do.

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
#0 2 1 */4 *  /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log

Whereas running sudo crontab -e yields the following. Note that the line 0 2 1 */4 * /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log is not commented out here for the root user.

The schedule on that line is:

At 02:00 on day-of-month 1 in every 4th month.

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
0 2 1 */4 *  /usr/local/bin/update-unbound-root-hints.sh > unbound-root-hint-update.log

Windows Stat Collection with Telegraf, InfluxDB and Grafana

A small screenshot of the Telegraf & Influx Windows Host Overview Dashboard.

Pre-requisites

  1. InfluxDB

    • Not in scope of this guide.
  2. Grafana

    • Not in scope of this guide.
  3. Creating a new database for Telegraf stats.

    Replace localhost in the command below with the hostname or IP address of your InfluxDB server if InfluxDB is not running locally. (A quick check that the database exists follows this list.)

    curl -POST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE telegraf"
    
  4. Add an InfluxDb data source

    • Click the settings Gear Icon and choose the Data Sources option

    • Click the Add Data Source button

    • Find the InfluxDb data source and choose Select

    • Set the HTTP Url setting to the InfluxDb container’s IP address and port

      http://172.17.0.2:8086

    • Set the database name to the database name you created in the previous step (telegraf is the default.)

    • Click the Save and Test button to verify Grafana can connect to InfluxDb.
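
To confirm the telegraf database from step 3 exists, you can list the databases over the same HTTP API (a quick sketch, assuming InfluxDB 1.x listening on localhost:8086):

curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"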

Telegraf for Windows installation

  1. Download Telegraf for Windows and extract it to your drive.

  2. Extract and update telegraf.conf

    • Create the install directory for telegraf and generate a default config file; this guide uses c:\telegraf-1.17.0, with the config at c:\telegraf-1.17.0\telegraf.conf.

      mkdir "c:\telegraf-1.17.0"
      cd "c:\telegraf-1.17.0"
      telegraf.exe config > telegraf.conf
      
    • Then open telegraf.conf in a text editor and update it as you see fit. At a bare minimum, update:

      The [agent] section's interval key to 10 seconds:

      [agent]
      ## Default data collection interval for all inputs
      interval = "10s"
      
    • The [[outputs.influxdb]] section's urls and database keys with the URL of your InfluxDB server and, if you do not want the default of telegraf, the database name.

      [[outputs.influxdb]]
      urls = ["http://192.168.2.221:8086"]
      database = "telegraf"
      
    • Finally, locate the [[inputs.win_perf_counters]] section and replace it completely with the following.

      See Collector Configuration Details from https://grafana.com/grafana/dashboards/1902 for more details.

      [[inputs.win_perf_counters]]
      [[inputs.win_perf_counters.object]]
      # Processor usage, alternative to native, reports on a per core.
      ObjectName = "Processor"
      Instances = ["*"]
      Counters = [
          "% Idle Time",
          "% Interrupt Time",
          "% Privileged Time",
          "% User Time",
          "% Processor Time"
      ]
      Measurement = "win_cpu"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      # Disk times and queues
      ObjectName = "LogicalDisk"
      Instances = ["*"]
      Counters = [
          "% Idle Time",
          "% Disk Time",
          "% Disk Read Time",
          "% Disk Write Time",
          "% User Time",
          "% Free Space",
          "Current Disk Queue Length",
          "Free Megabytes",
          "Disk Read Bytes/sec",
          "Disk Write Bytes/sec"
      ]
      Measurement = "win_disk"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      ObjectName = "System"
      Counters = [
          "Context Switches/sec",
          "System Calls/sec",
          "Processor Queue Length",
          "Threads",
          "System Up Time",
          "Processes"
      ]
      Instances = ["------"]
      Measurement = "win_system"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      # Example query where the Instance portion must be removed to get data back,
      # such as from the Memory object.
      ObjectName = "Memory"
      Counters = [
          "Available Bytes",
          "Cache Faults/sec",
          "Demand Zero Faults/sec",
          "Page Faults/sec",
          "Pages/sec",
          "Transition Faults/sec",
          "Pool Nonpaged Bytes",
          "Pool Paged Bytes"
      ]
      # Use 6 x - to remove the Instance bit from the query.
      Instances = ["------"]
      Measurement = "win_mem"
      # Set to true to include _Total instance when querying for all (*).
      #IncludeTotal=false
      
      [[inputs.win_perf_counters.object]]
      # more counters for the Network Interface Object can be found at
      # https://msdn.microsoft.com/en-us/library/ms803962.aspx
      ObjectName = "Network Interface"
      Counters = [
          "Bytes Received/sec",
          "Bytes Sent/sec",
          "Packets Received/sec",
          "Packets Sent/sec"
      ]
      Instances = ["*"] # Use 6 x - to remove the Instance bit from the query.
      Measurement = "win_net"
      #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
      
      [[inputs.win_perf_counters.object]]
      # Process metrics
      ObjectName = "Process"
      Counters = [
          "% Processor Time",
          "Handle Count",
          "Private Bytes",
          "Thread Count",
          "Virtual Bytes",
          "Working Set"
      ]
      Instances = ["*"]
      Measurement = "win_proc"
      #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
      
  3. Install Telegraf as a service

    • Start cmd.exe as an administrator.

    • execute the following command to have telegraf install itself as a service:

      telegraf.exe --service install --config "c:\telegraf-1.17.0\telegraf.conf"
      
  4. Import the Telegraf & Influx Windows Host Overview dashboard (Grafana dashboard 1902, referenced in the collector configuration note above). A quick check that stats are reaching InfluxDB is shown below.
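
Before importing the dashboard, it can help to confirm metrics are actually arriving; a quick sketch, assuming the telegraf database and the InfluxDB server used in the output section above:

# Should return a few recent CPU samples reported by the Windows host
curl -G http://192.168.2.221:8086/query --data-urlencode "db=telegraf" --data-urlencode "q=SELECT * FROM win_cpu ORDER BY time DESC LIMIT 5"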

Unifi-poller Setup and Configuration

A display of the Unifi-poller dashboards.

Background

The unifi-poller depends on either InfluxDb or Prometheus for the database and Grafana for the data display. This guide documents creating three disparate docker containers and configuring them so that unifi-poller feeds an InfluxDb database that Grafana then uses to display the data.

The information in this guide was inspired by and based on the unifi-poller project.

My ideal would perhaps be something closer to this docker compose guide at https://nerdygeek.uk/2020/06/18/unifi-poller-an-easy-step-by-step-guide/, which lends itself more to a single appliance, but I have other uses for Grafana and InfluxDb so the approach here works for my current needs.

Sections

  1. Pre-requisites
  2. InfluxDb
  3. Grafana
  4. Unifi-poller
  5. Enjoy the fruits of your labors
  6. How to get the docker container id of a running docker container
  7. How to get the IP address of a running docker container
  8. How to get a shell inside a running docker container
    • How to exit a shell inside a docker container
  9. How to restart a docker container
  10. How to create a named docker volume
  11. How to remove a docker container

Pre-Requisites

  1. Create a read-only administrator account on the Unifi controller for use by the unifi-poller

    • Add a user to the UniFi Controller. After logging into your controller:
    1. Go to Settings -> Admins
    2. Add a read-only user (unifipoller) with a nice long password.
    3. The new user needs access to each site.
      • For each UniFi Site you want to poll, add admin via the ‘Invite existing admin’ option.
    4. Take note of this info, you need to put it into the unifi-poller config file in a moment.
  2. Install Docker

    • Make sure the Pi is up to date:

      sudo apt-get update && sudo apt-get upgrade
      
    • Agree to any updates and wait for them to install. Then install Docker with the one-line command below; it is worth noting that this downloads a script from the internet and runs it in your bash shell:

      curl -sSL https://get.docker.com | sh
      
    • Now add the user pi to the docker group, so as to avoid permissions issues later. Please note that this guide is not entirely consistent about when it uses sudo for shell commands.

      sudo usermod -aG docker pi
      
    • Reboot the PI

      sudo reboot
      
    • Make sure everything is working by running

      docker run hello-world
      
    • The output should contain the magic lines:

      Hello from Docker!

  3. Create locations on the docker host for the configuration and data files that will be mapped into the containers. (A sketch of the mkdir commands follows this list.)

    • Create the following directories:

      • A host location to hold files and directories one would normally use /etc/ for within the container.
        • /etc/docker_hosts
      • The influxdb directory will be used to hold the influxdb.conf file.
        • /etc/docker_hosts/influxdb
      • The unifi-poller directory will be used to hold the unifi-poller.conf file.
        • /etc/docker_hosts/unifi-poller
    • Optional steps

      These steps are only needed if you map a host directory directly into a docker container as a volume. This guide uses docker named volumes for influxdb and grafana, which lets docker manage the storage rather than the end user.

      How to use this method is described in the How to create a named docker volume section, should you choose to use that instead.

      • /var/lib/docker_hosts/grafana
      • /var/lib/docker_hosts/influxdb
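
A minimal sketch of creating the directories listed above (the named volumes themselves are created later in their respective sections):

# Host-side config locations mapped into the containers later in this guide
sudo mkdir -p /etc/docker_hosts/influxdb /etc/docker_hosts/unifi-poller

# Only needed if you opt for mapped host directories instead of named volumes
sudo mkdir -p /var/lib/docker_hosts/grafana /var/lib/docker_hosts/influxdb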

InfluxDb

  1. Create config file

    Change to the /etc/docker_hosts/influxdb directory. You will want to be root (or use sudo) for this.

    cd /etc/docker_hosts/influxdb
    

    Run the following to extract the influxdb.conf to the directory.

    docker run --rm influxdb influxd config > influxdb.conf
    
  2. Create the docker named volume for the influxdb data:

    docker volume create influxdb_data
    
  3. Run the container as a daemon (the -d argument runs it detached, as a daemon).

    docker run --name influxdb -d -p 8086:8086 -v influxdb_data:/var/lib/influxdb -v /etc/docker_hosts/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro influxdb -config /etc/influxdb/influxdb.conf
    

    To break down the above docker command:

      • --name influxdb: sets the container name to influxdb
      • -d: runs the container as a service/daemon
      • -p 8086:8086: maps a host port (left) to a container port (right); here host port 8086 is mapped to container port 8086
      • -v influxdb_data:/var/lib/influxdb: maps the docker named volume influxdb_data on the host to /var/lib/influxdb inside the container, so the data persists on the host
      • -v /etc/docker_hosts/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro: maps the host file /etc/docker_hosts/influxdb/influxdb.conf to /etc/influxdb/influxdb.conf in the container as read-only (:ro)
      • influxdb: the image to run; docker resolves this
      • -config /etc/influxdb/influxdb.conf: tells influxd to use the configuration file at /etc/influxdb/influxdb.conf inside the container
  4. Create a database

    • Documentation as posted (https://hub.docker.com/_/influxdb/) has an issue. The command:

      curl -G http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
      
    • Using GET for this query is flagged as deprecated:

      {"results":[{"statement_id":0,"messages":[{"level":"warning","text":"deprecated use of 'CREATE DATABASE mydb' in a read only context,      please use a POST request instead"}]}]}
      
    • Instead use Post:

      curl -POST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
      

    Addtl. Info: https://github.com/influxdata/docs.influxdata.com-ARCHIVE/issues/493

  5. Insert a Row

    curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
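
    To read that row back, confirming both the write and the database, query it over the same API (a sketch using the mydb database created above):

    curl -G 'http://localhost:8086/query?db=mydb' --data-urlencode "q=SELECT * FROM cpu_load_short"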
    

Grafana

  1. Pull down the docker Grafana image

    docker pull grafana/grafana:latest
    
  2. Create the docker named volume for the grafana data:

    docker volume create grafana_data
    
  3. Run the container as a daemon (the -d argument runs it detached, as a daemon).

    docker run --name grafana -d -p 3000:3000 -v grafana_data:/var/lib/grafana grafana/grafana:latest
    
  4. Navigate to the Grafana web ui: http://localhost:3000

    • default username: admin
    • default password: admin
  5. Add an InfluxDb data source

    • Click the settings Gear Icon and choose the Data Sources option

    • Click the Add Data Source button

    • Find the InfluxDb data source and choose Select

    • Set the HTTP Url setting to the InfluxDb container’s IP address and port

      1. Find the IP of the influxdb container (see section How to get the IP address of a running docker container)

      2. Add the url including the port number used (8086 was used in this guide.)

      http://172.17.0.2:8086

    • Set the Database value to the name of the database you created during the InfluxDb steps. The guide uses mydb

    • Click the Save and Test button to verify Grafana can connect to InfluxDb.

  6. Go on to add and configure the Grafana dashboards for unifi-poller.

  7. Install the additional Grafana Plug-ins that the unifi-poller dashboards use.

Grafana dashboards

Source: https://grafana.com/grafana/dashboards?search=unifi-poller

  1. Navigate to the Grafana web ui: http://localhost:3000

  2. Click on the + Create icon and choose the Import option

  3. In the Import via grafana.com textbox, enter the import ID of the dashboard you want to install.

    • Ensure you choose the InfluxDb Data Source in the drop-down at the bottom labeled: Select an InfluxDB data source
  4. Import the following Dashboards

Grafana Plug-ins

Additional Grafana plug-ins used by the Unifi-poller dashboards:

  1. grafana-piechart-panel
  2. grafana-clock-panel
  3. natel-discrete-panel

Note: Grafana requires a restart after installing new plug-ins.

Plug-in Installation

Perform the following from within the running Docker container (see section How to get a shell inside a running docker container):

grafana-cli plugins install PLUGIN-NAME
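
The same can be run from the docker host with docker exec; a sketch assuming the container is named grafana as in this guide:

docker exec grafana grafana-cli plugins install grafana-piechart-panel
docker exec grafana grafana-cli plugins install grafana-clock-panel
docker exec grafana grafana-cli plugins install natel-discrete-panel

# Grafana needs a restart to pick up the new plug-ins
docker restart grafana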

Unifi-poller

Based on the instructions here: https://github.com/unifi-poller/unifi-poller

  1. Pull down the docker unifi-poller image

    docker pull golift/unifi-poller
    
  2. Create a copy of the unifi-poller.conf in the /etc/docker_hosts/unifi-poller directory on the host you created earlier.

    • Edit /etc/docker_hosts/unifi-poller/unifi-poller.conf as needed

    • I disabled Prometheus support as I am using Influx

      [prometheus]
      disable = true
      
    • Configure InfluxDb. I am running with the defaults (no username or password), so only the url and disable = false needed to be updated.

      Do note: The IP address is the InfluxDb container’s IP address in the example below.

      [influxdb]
      disable = false
      # InfluxDB does not require auth by default, so the user/password are probably unimportant.
      url  = "http://172.17.0.3:8086"
      db   = "mydb"
      
    • Configure the unifi.defaults section.

      InfluxDb supports more items than Prometheus so those are enabled below as well as saving the Deep Packet Inspection data (save_dpi)

      # The following section contains the default credentials/configuration for any
      # dynamic controller (see above section), or the primary controller if you do not
      # provide one and dynamic is disabled. In other words, you can just add your
      # controller here and delete the following section. The internal defaults are
      # shown below. Any missing values will assume these displayed defaults.
      [unifi.defaults]
      url            = "https://192.168.2.1:8443"
      user           = "unifi-admin-unifipoller-username"
      pass           = "unifi-admin-unifiprofiler-password"
      sites          = ["all"]
      save_ids       = true
      save_events    = true
      save_alarms    = true
      save_anomalies = true
      save_dpi       = true
      save_sites     = true
      hash_pii       = false
      verify_ssl     = false
      
  3. Run the unifi-poller container as a daemon (the -d argument runs it detached, as a daemon).

    docker run --name unifi-poller -d -v /etc/docker_hosts/unifi-poller/unifi-poller.conf:/config/unifi-poller.conf golift/unifi-poller
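
    Once the container is up, a quick way to confirm the poller authenticated against the controller and is writing to InfluxDb is to tail its logs (using the container name from the run command above):

    docker logs --tail 50 unifi-poller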
    

Enjoy the fruits of your labors

  1. Navigate back to your Grafana instance: http://localhost:3000
  2. Click the Dashboard icon and choose the Manage option.
  3. Click one of the imported dashboards to view the beautiful data.

How to get the docker container id of a running docker container

docker ps

Results:

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
3183bbb971ed        grafana/grafana:latest   "/run.sh"                3 hours ago         Up 3 hours          0.0.0.0:3000->3000/tcp                           grafana2
dfbce9a7c751        golift/unifi-poller      "/image"                 4 hours ago         Up 3 hours                                                           unifi-poller
a6e4f76a2677        influxdb                 "/entrypoint.sh -con…"   6 hours ago         Up 6 hours          0.0.0.0:8086->8086/tcp                           influxdb

How to get the IP address of a running docker container

Sources:

  1. Get the container id of the container you’d like the IP address from

  2. Execute the ip a command within the container

    docker exec influxdb ip -4 -o address
    

    Results

    1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
    21: eth0    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0\       valid_lft forever preferred_lft forever
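
    If the container image does not ship the ip utility, docker inspect can report the same address from the host side (a sketch using the influxdb container):

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' influxdb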
    

How to get a shell inside a running docker container

  1. Get the container id of the container you want to start the shell in

  2. Use docker exec to open a shell

    docker exec -it 3183bbb971ed sh
    

How to exit a shell inside a docker container

  • exit command at the shell
  • CTRL+P followed by CTRL+Q docker shell escape sequence

How to restart a docker container

Source: https://docs.docker.com/engine/reference/commandline/container_restart/

  1. Get the container id of the container you want to restart

  2. Use docker container restart to restart it. The below will restart it immediately.

    docker container restart 3183bbb971ed
    

How to create a named docker volume

A "named docker volume" is simply a location on the host that docker is aware of. Volumes can be used by multiple containers and are available on the host as well. This is a convenient way to save application data and configurations to persist across container runs or the containers that reference them.

Named volumes are not required; the same mapping can be done manually by giving docker run -v the full path to a host directory:

  • Named volume: docker run -v influxdb_data:/var/lib/influxdb
  • Mapped directory: docker run -v /var/docker_hosts/influxdb/influxdb_data:/var/lib/influxdb

To create a named volume

docker volume create [volume_name]

Example:

docker volume create influxdb_data

which can then later be attached with the -v command line argument of docker run:

docker run --name influxdb -d -p 8086:8086 -v influxdb_data:/var/lib/influxdb […]
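
To see where docker actually stores a named volume on the host, docker volume inspect reports its mountpoint:

# The Mountpoint field in the output is the host path backing the volume
docker volume inspect influxdb_data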

How to remove a docker container

If you need to remove a docker container and re-run it with different starting arguments, you can remove an existing docker container using the docker container rm command:

  1. Get the container id or name of the container you want to remove

  2. Use docker container rm to remove it

    docker container rm 3183bbb971ed
    

    or

    docker container rm grafana
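
    Note that a running container must be stopped before it can be removed, or force-removed in one step with -f; for example:

    # Stop first, then remove
    docker stop grafana && docker container rm grafana

    # Or force-remove a running container in one step
    docker container rm -f grafana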
    

Symbolic Links on Windows 10

This is not meant as a tutorial in any way; I'm simply trying not to forget this, as I need it about once every two years and always forget how to do it. The information was found here: https://superuser.com/questions/1020821/how-to-create-a-symbolic-link-on-windows-10

The option I went with was the PowerShell route suggested by Peter Hahndorf:
Open a PowerShell session as elevated administrator:
New-Item -ItemType SymbolicLink -Path E:\Data\MyGames -Target "C:\users\UserName\MyGames"
or using less verbose syntax:
ni E:\Data\MyGames -i SymbolicLink -ta "C:\users\UserName\MyGames"
Another approach is a Windows Shell extension which looked interesting but probably overkill for my current needs (from odvpbre):
If you want a GUI tool for making/editing these symlinks, use Link Shell Extension: http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html

Link Shell Extension (LSE) provides for the creation of Hardlinks, Junctions, Volume Mountpoints, and Windows 7/8's Symbolic Links (herein referred to collectively as Links), a folder cloning process that utilises Hardlinks or Symbolic Links, and a copy process taking care of Junctions, Symbolic Links, and Hardlinks.

LSE, as its name implies, is implemented as a Shell extension and is accessed from Windows Explorer, or similar file/folder managers. The extension allows the user to select one or many files or folders, then, using the mouse, complete the creation of the required Links: Hardlinks, Junctions or Symbolic Links, or, in the case of folders, to create Clones consisting of Hard or Symbolic Links.

LSE is supported on all Windows versions that support NTFS version 5.0 or later, including Windows XP64 and Windows 7/8/10. Hardlinks, Junctions and Symbolic Links are NOT supported on FAT file systems, and nor is the Cloning and Smart Copy process supported on FAT file systems.
Screenshot of Link Shell Extension
Some additional information on the different types of links, from http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html#hardlinks:

Hardlinks are a feature common to many Unix based systems, but are not directly available with NT4/W2K/WXP. It is a feature which must be supported by the file system of the operating system. So what are Hardlinks? It is common to think of a file as being an association between a file name and a data object. Using Windows Explorer, the file system can be readily browsed, showing a 1:1 relationship between the file name and the data object, but this 1:1 relationship does not hold for all file systems. Some file systems, including UFS, XFS, and NTFS, have an N:1 relationship between file name and the data object, hence there can be more than one directory entry for a file.

So, how does one create multiple entries for the same data object? In Unix there is a command line utility ln, which is used to create link entries for existing files, hence there are many file names, or so called Hardlinks, for the one data object. For each Hardlink created, the file system increments a reference count stored with the data object, i.e. it stores how many file names refer to the data object; this counter is maintained (by the file system) within the data object itself. When a file name referencing a data object is deleted, the data object's reference count is decremented by one. The data object itself only gets deleted when the reference count is decremented to zero. The reference count is the only way of determining whether there are multiple file name references to a data object, and it only informs of their number, NOT their whereabouts.

Junctions are wormholes in the tree structure of a directed graph. By browsing a Junction, a possibly far distant location in the file system is made available. Modifying, Creating, Renaming and Deleting files within a junction tree structure operates at the junction target, i.e. if you delete a file in a Junction it is deleted at the original location.

Symbolic Links are to files what Junctions are to folders, in that they are both transparent and symbolic. Transparency means that an application can access them just as it would any other file; symbolism means that the data objects can reside on any available volume, i.e. they are not limited to a single volume like Hardlinks. Symbolic Links differ from Shortcuts in that they offer a transparent pathway to the desired data object; with a shortcut (.lnk), something has to read and interpret the content of the shortcut file and then open the file that it references (i.e. it is a two step process). When an application uses a symlink it gains immediate access to the data object referenced by the symlink (i.e. it is a one step process).

Limitations

  • Supported platforms are NT4/W2K/WXP/W2K3/W2K3R2/W2K8/W2K8R2/W2K12/W2K12R2/WXP64/Vista/Windows7/8/10 in 32bit, 64bit or Itanium.
  • Hardlinks can only be made on NTFS volumes, under the supported platforms.
  • Hardlinks can only be made within one NTFS volume, and cannot span across NTFS volumes.
  • Junctions can not be created on NTFS volumes with NT4.
  • The Pick Link Source and Drop … choices are only visible if it's possible to create Hardlinks/Junctions/Symbolic Links. E.g.: if you select a file on a FAT drive and press the action button, you won't see the Pick Link Source entry in the action menu, because FAT file systems don't support Hardlinks/Junctions/Symbolic Links. This also happens if you select source files on a network drive, or select a file as destination, etc.
  • There is an OS limit of 1023 hardlinks per file. This is less well known, but it is there.
  • ReFS does not support hardlinks.

“What do you need?”, a life lesson; courtesy Amazon Web Services

“What do you need?” is an important thing to know. It allows you to more accurately predict what your costs might be for any longer term obligation. And to be obvious, this relates to pretty much everything, from how many miles you drive to derive fuel costs and a budget for them, to how much free time you have to devote to a pet. You've got some quantity of a finite resource; generally it's a good thing to have some reserves at all times. And that, dear reader, never happens by accident.

If you've been reading along at home, you've noticed that, spurred on by a friend's experience migrating his web hosting to Amazon Web Services on Elastic Compute Cloud (EC2), I have started an EC2 instance of my own using their Free Tier services.

My feelings on the service are quite positive; it's pretty darn awesome to have the ability to go from a single small server with a website to an enterprise-size data-center and web-farm on demand. I don't use it for that, but it's worth something. Personally, I love having root SSH available and the ability to run whatever service I deem fit; it feels good, real good!

However, what had happened was, I popped on over to check my account activity and was greeted with this:

WhatTheWhat

What I’d like to draw your attention to is the “AWS Data Transfer (excluding Amazon CloudFront)” group. This section contains four items:

  1. Data transfer out under the monthly global free tier
  2. Region data transfer under the monthly global free tier
  3. Data transfer in per month
  4. First 10 TB / month data transfer out beyond the global free tier

And of those items, #4 is the little devil. Good ‘ole First 10 TB / month data transfer out beyond the global free tier.

I'm not going to complain about the price; $9.95 sounds reasonable for the transfers. The thing is, I've had this account for 20 days, and even if I did use quite a bit during the "load up" phase of server configuration, I'd honestly say it would have been under 10 GB. I did have a game server running for a week or so of that. I don't believe the bandwidth use would have been in the 60 GB range. I could be wrong about that.

And there you have it: I have no idea, because I don't know what I need, at least when it comes to pay-as-you-go computing platforms and web-enabled services. Well, at least I have something to think about. I'm not really quite sure where to begin to map out my needs on this.

I've turned my instance off until I can sort it all out; I have time for a free experiment, but sadly not the funds. I do have an email in to Amazon Web Services Support, in particular asking how I can tell whether the charges are valid and where my usage went to and came from. Hopefully they have that capability and it's just unpreparedness on my side.

Either way, the service is excellent and I highly recommend it. I probably wouldn't run a game server on it again without first getting a much better picture of the bandwidth usage scenarios.

Programmatic Paralysis, or a senior developer's hobby problem

Over the course of my career, heck, over the course of my life, I've had a recurring "issue", shall we say. In a nutshell: I feel like doing some programming, but what shall I write?

You see, programming has been my hobby since the age of nine. I just happen to be one of those guys that actually gets paid to do what he loves to do, and would be doing anyway. For the time being, I will ignore the downsides of that, as it's a post and then some in its own right.

What got me into it was really two things. The first is learning: that "Eureka" moment when a bunch of mumbo-jumbo suddenly clicks and makes sense is an awesome feeling. The other draw was making some machine do my bidding. Again, an awesome feeling which I can't put into words if you're not a programmer; I'm sure there are analogs in other disciplines, but I'm not familiar with those disciplines.

Well, that is all fine and good, but I've been a professional programmer (i.e. it's my sole income) for seventeen years. I've got a lot of experience across a lot of types of businesses, differing in both industry and company size (size may not be the right word; I'm trying to speak to the level of "enterprise" process an organization may or may not embrace).

Every now and again I get lucky and a new, relatively self-contained technology that I'd like to toy around with appears (I'm thinking Linq, jQuery, web services, etc.): things that are big enough to be a challenge but small enough to not require a massive underlying framework.

That last part is really the problem: if I'm going to spend any of my infrequent free time writing code, I need to be getting something out of it. I need it to genuinely teach me something and therefore enhance my skill-set, and thus my career. Or I need it to be fun enough that it's worth the cost.

As an example of fun being worth the cost, I will cite my work on the Quake III Arena mod, ReactanceUnlagged (sorry, the site is really nothing more than a place to grab the latest version, which is a few years old). Working on this was a blast! At one time we had a number of people running the mod on their servers and a fair amount of the very waning Quake 3 community playing it (it was a 10 year old game when I started modding). It was written in plain-jane C, which is my favorite play language, mostly because while it's possible to blow your foot off, it's really rewarding when you don't and it works right.

Now, the downside to that project was the ramp-up: learning their APIs and the virtual machine you coded to.

The other type of project I'd be drawn to for learning experiences is something more enterprise-level, where I'd need to design and implement a database, a data access layer, a back-end to process the data, and not to mention the UI to work with the data (I'm not really a UI guy, but I can make something clean, and jQuery is fun). So that's a lot of ramp-up too. A lot of the work I already know how to do, and to do well, but implementing it again just to have something to build on while learning something new is rough.

The same applies to open source projects: the ramp-up is rough, while the rest is rewarding.

I seem to be having a problem picking where I want to invest the ramp up time, and while I believe in supporting and working on open source projects, I also feel that if I have something unique to contribute to the world (even if it’s just my own immediate world) I’d rather do that.

And the more time passes while I can't decide, the more time I waste not ramping up on anything. Most likely I should just pick something and go; worst case, I find something else that rips me away, which must mean I was more interested in it than the other, so be it.

Team: Work Gaming Clan

A gaming group, "Team: Work", has just blown into town. It's not so much a competitive gaming clan, but more a group of folks who like to play online multi-player games of all styles and genres. The name comes from the concept that team-based games require Team Work! Crazy concept, I know. There is definitely room for individual excellence, but we feel the fun comes from playing with your team!

 Hop on over to the site and forums, and certainly try out the new Team: Work Team Fortress 2 server:

http://www.clanteamwork.com



Regards!

The search has ended!

Well my friends, I have rejoined the ranks of the employed once again, at a little spot called Insurance.com. I have to say I'm incredibly excited to start! The guys I interviewed with were awesome, and I have a long-time friend (26 years; whoa, I feel old lol 🙂) who has been working there for 7 years. I probably owe him everything I have for sparking my love of computers, plus he's a freakin' genius, so I get to absorb his smartness, heheh!

At any rate, I think it will be a great fit, as I am coming out of a large auto insurance company in northeast Ohio and into what I think is the natural progression of selling insurance: comparative rating, which is a big phrase for giving the consumer options without making them go to 16 insurance web sites and give their information again and again and again. And the development staff is much smaller, which I think gives the people doing the work an incredible sense of involvement and ownership that you just do not get in companies with multiple divisions of 300+ IT staff.

I want to thank all my friends who were so supportive, and Insurance.com for giving me an opportunity. Thank you all so very much!

Regards!