Ubiquiti UniFi 16 XG Switch Basics

First off, let's start with what Ubiquiti's UniFi 16 XG switch is and isn't. The 16 XG is a managed 10 Gbps switch that is configured through Ubiquiti's UniFi controller. Its 16 ports break down as 12 SFP+ ports, which accept DAC cables, RJ45 copper transceivers, or fiber transceivers, and 4 fixed RJ45 copper ports. However, the 16 XG is not a layer 3 managed switch, so you should not expect it to do switch-based inter-VLAN routing; if your uplink is a 1 Gbps link, expect traffic between VLANs to be limited by the speed of that port.

Connecting and Adopting the 16 XG

There are a few ways to tell new UniFi devices where the controller is:

  1. If the controller and UniFi device are on the same subnet the device will automagically show up in the controller for adoption
  2. Find the device's IP, SSH into it, and run: set-inform http://ip-of-controller:8080/inform (see the example after this list)
  3. Set DHCP Option 43 to the IP of your controller
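
For option 2, the SSH session looks roughly like this (the IP addresses are placeholders, and unadopted UniFi devices ship with ubnt/ubnt as the default SSH credentials):

# SSH to the device from a machine that can reach it
ssh ubnt@192.168.1.50

# then, from the device's own shell, hand it the controller's inform URL
set-inform http://192.168.1.10:8080/inform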

Port Basics

The first thing you need to know about VLANs on the 16 XG is that Ubiquiti calls them "Port Profiles". They may have additional functionality that I'm not aware of, but for the purposes of the switch and APs that I own they are simply named VLANs. Another thing a new user should know is that ports assigned the "All" profile function as trunk ports, with all passed packets being VLAN tagged. As usual, VLANs need to be added in the controller to be used on the switch, in addition to simply being passed to it over a port; Ubiquiti didn't add any auto-discovery magic here.

If you want to use a single uplink trunk port to your router or another switch, as I did, first configure the uplink port as an untagged VLAN and adopt the device. Then configure the management VLAN and the IP settings of your management VLAN before finally setting the uplink port to the desired set of trunked VLANs. Setting paths:

  • Management VLAN: switch > config > services > management vlan
  • IP settings: switch > config > network > your usual IP options
  • VLAN settings: settings > networks > new network > set VLAN only, name as desired, and set the VLAN ID to match your other devices

Conclusion

Overall, at a price of $599 or less, the UniFi 16 XG is a relatively affordable, capable, and mostly silent switch for homelab or homeprod purposes, or perhaps a small business setup. Although the 16 XG does have a pair of 80mm exhaust fans, they are not audible over the fan noise of my R210 II and the general background noise of my apartment. If you are running a business that needs more than sixteen 10 Gbps ports, or you need 10 Gbps layer 3 routing, you'll have to look elsewhere for a louder and likely more expensive switch, but then you probably aren't reading this blog anyway. Thanks for reading! I plan to write a follow-up post about running this switch as the network back end for my Proxmox Ceph cluster, so look for that in the near future. If you have any questions, please let us know by dropping a comment or via the contact us page.

Sources

https://help.ubnt.com/hc/en-us/articles/204909754-UniFi-Device-Adoption-Methods-for-Remote-UniFi-Controllers

Building and Pushing Docker Images with Gitlab

Intro

I've finally had time to start using Gitlab for more than just storing old code I wrote for my college courses that I have nearly no reason to ever look at again. One of the first problems I wanted to solve was a CI platform. We use GoCD at work, but its lackluster template system and frequent UI changes, not to mention the fact that it's what I already have to deal with at work, turn me off from using it for anything home-prod. Gitlab's native CI integration seems to be the ideal setup, and a few companies (Azure DevOps, for example) are following suit. Without further ado, let's get to it.

Requirements

  • Gitlab server – I use the Omnibus installation which works great
  • Gitlab Runners – I’m using a pair of shared runner docker containers
  • Sonatype Nexus repository for storing artifacts

The Plan

By now you're probably asking, "why not use Gitlab's internal Docker registry for storing images?" While that is an option, I wanted to keep my source control and package repository separate to split the load, and Nexus supports the other package types I'm using. In this example we're going to use the simplest repository I could think of: a container for Gitlab runners to run linting and syntax checking against my Ansible repositories.
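
For reference, the image itself doesn't need to be anything fancy; a minimal sketch of a Dockerfile for it might look like the following (the base image and packages here are assumptions, swap in whatever your playbooks actually need):

# Hypothetical Dockerfile for the ansible_ci_agent image:
# a small Python base with Ansible and ansible-lint preinstalled
FROM python:3.9-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/* \
    && pip install --no-cache-dir ansible ansible-lint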

The Process

It took a fair amount of trial and error to get Docker to talk to a repository fronted by a self-signed certificate authority, and then I ran into the same issue with the runner itself, but I eventually came up with a working system.
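
If you are fighting the same certificate errors on the Docker hosts themselves, Docker will trust a per-registry CA placed under /etc/docker/certs.d; a sketch of that, using my registry hostname and port and an assumed CA file name:

# the directory name must match the registry host:port used in docker login and image tags
sudo mkdir -p /etc/docker/certs.d/nexus.clevelandcoding.com:8083
sudo cp my-selfsigned-ca.crt /etc/docker/certs.d/nexus.clevelandcoding.com:8083/ca.crt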

The first thing we’ll need to do is create and configure a new internal docker repository in Nexus and a service account that has permissions to push to it.

  1. Log in to your Nexus instance and navigate to settings > security > roles
  2. Create a new role and name it whatever you like; I used GitlabCI
  3. Under privileges add nx-docker-view-* (I selected everything except delete permissions)
  4. Under roles assign the role nx-anonymous 
  5. Go to users > create user > new user, pick whatever username you desire, then add the role you created in step 2 to its permissions
  6. Set the user's password and record it for later
  7. Go to repositories > Create repository > docker internal and select whatever storage you desire, then enable an HTTP connector (or an HTTPS connector if you do not plan on putting it behind a reverse proxy)
  8. If your Nexus is behind a reverse proxy, add the connector port as an additional virtual server block to your site config for Nexus
# My nginx config file for Sonatype Nexus
server {
    listen 443 ssl;
    server_name nexus.clevelandcoding.com;
    ssl_certificate /etc/ssl/certs/mil-nrepo-sp01.pem;
    ssl_certificate_key /etc/ssl/private/mil-nrepo-sp01.key;
    # allow large uploads of files
    client_max_body_size 1G;

    # optimize downloading files larger than 1G
    #proxy_max_temp_file_size 2G;

    location / {
      proxy_pass http://172.16.1.158:8081/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
}

server {
    listen *:80;
    server_name nexus.clevelandcoding.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 8082 ssl;
    server_name nexus.clevelandcoding.com;
    ssl_certificate /etc/ssl/certs/mil-nrepo-sp01.pem;
    ssl_certificate_key /etc/ssl/private/mil-nrepo-sp01.key;
    # allow large uploads of files
    client_max_body_size 2G;

    # optimize downloading files larger than 1G
    #proxy_max_temp_file_size 2G;

    location / {
      proxy_pass http://172.16.1.158:8082/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
}

server {
    listen 8083 ssl;
    server_name nexus.clevelandcoding.com;
    ssl_certificate /etc/ssl/certs/mil-nrepo-sp01.pem;
    ssl_certificate_key /etc/ssl/private/mil-nrepo-sp01.key;
    # allow large uploads of files
    client_max_body_size 2G;

    # optimize downloading files larger than 1G
    #proxy_max_temp_file_size 2G;

    location / {
      proxy_pass http://172.16.1.158:8083/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
}

Screenshot for step 2: configuring a nx-ci role for our Gitlab user
Screenshot for step 7: creating a new internal Docker repository
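
Before moving on, it's worth sanity-checking that the Docker connector actually answers through the proxy; hitting the registry API base path should come back with an HTTP 200 or 401 (the -k flag is only there because of the self-signed certificate):

# quick reachability check of the Docker registry endpoint behind nginx
curl -k -i https://nexus.clevelandcoding.com:8083/v2/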

The next step is to connect our Gitlab runners to the repository with the docker login command.

  1. SSH into one of your Docker hosts and run: docker login https://nexus.domain.com:$port
  2. Enter the service account username and password when prompted
  3. Grab the DOCKER_AUTH_CONFIG content, which is simply the contents of the file docker login just wrote (usually ~/.docker/config.json; see the example after this list). Your runner needs it to authenticate against the repository when pulling images. NOTE: You do not need to add this variable for this repository, but you do for each repository where you pull an image as part of a pipeline
  4. Go to the Gitlab repository that contains the Dockerfile you want to build and navigate to settings > CI > Variables
  5. Add the new variable PASSWORD with the password you set for the service account
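
For reference, the DOCKER_AUTH_CONFIG value is just the contents of the config.json that docker login writes (assuming no credential helper is configured); it looks roughly like this, with the auth field being the base64 of username:password, shown here as a placeholder:

# contents of ~/.docker/config.json after the docker login above
{
  "auths": {
    "nexus.clevelandcoding.com:8083": {
      "auth": "<base64 of svcdockeragent:password>"
    }
  }
}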

Now the runner can push to and pull from our Nexus repository, but we need to write a pipeline file to tell Gitlab to do that:

  1. Create a new file in your repository called .gitlab-ci.yml. I use VS Code for this, but you can use pretty much any editor, or the web IDE if you prefer.
  2. See below for the file that I use. You will need to replace nexus.clevelandcoding.com:8083 with the URI and port you defined in your Nginx and/or Nexus config, svcdockeragent with your service account username, and ansible_ci_agent with the name you would like to use for the image.
image: docker:stable

services:
  - name: docker:dind
    command: ["--insecure-registry=nexus.clevelandcoding.com:8083"]

stages:
  - build
  - test
  - release

variables:
  TEST_IMAGE: nexus.clevelandcoding.com:8083/ansible_ci_agent:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: nexus.clevelandcoding.com:8083/ansible_ci_agent:latest

before_script:
  - docker info
  - docker login -u svcdockeragent -p $PASSWORD nexus.clevelandcoding.com:8083

build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE

release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
    - master
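
Once the image is published, the Ansible repositories that consume it just point their own pipeline at it; a trimmed-down sketch of what that looks like (the job name and playbook path are illustrative, and the DOCKER_AUTH_CONFIG variable from earlier is what lets the runner pull the image):

# hypothetical .gitlab-ci.yml in an Ansible playbook repository using the pre-built image
image: nexus.clevelandcoding.com:8083/ansible_ci_agent:latest

stages:
  - lint

lint:
  stage: lint
  script:
    - ansible-lint site.yml
    - ansible-playbook --syntax-check site.yml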

Results and Future Plans

I've been using the image I built for the two repositories of Ansible playbooks I keep for the past few weeks and it has been working flawlessly. Previously I was installing Ansible and Ansible-Lint in the before_script of my Gitlab pipeline, which resulted in an average pipeline run of 4.5 minutes; using this pre-built Docker image, the pipeline now completes in just over 1 minute. I would like to figure out if there's a better way to authenticate against a Docker repository in a pipeline without specifying --insecure-registry, but I haven't been able to figure out how to inject a self-signed CA certificate into the runner containers. If you have any questions, comments, or suggestions feel free to leave them below or contact me via Discord. Thanks for reading!

References