Compare commits


10 Commits

Author SHA1 Message Date
rskntroot
bf918a3aa8 add why.md; update 2026-02-14 09:05:02 +00:00
rskntroot
f73b9dfe86 expand k3s documentation (#1)
traefik
clusterissuer
longhorn
webhook
2025-06-20 23:42:36 -06:00
rskntroot
526683319b update dirs 2025-06-20 00:52:46 +00:00
rskntroot
d231294b15 add step ca guide 2025-06-19 04:55:26 +00:00
rskntroot
2f05bc88ac update k3s version info 2025-06-17 23:15:28 +00:00
rskntroot
c8829a7840 + docs updates 2025-06-17 22:07:10 +00:00
rskntroot
02aa1cda9b update and clean docs 2025-06-17 21:45:36 +00:00
rskntroot
103851089b + skyforge; nas^ 2025-02-26 02:20:14 -07:00
rskntroot
c7dbf97030 network+ 2025-02-22 17:39:51 -07:00
rskntroot
13784a90e7 +sbc_lab; linux^; network^ 2025-02-22 17:33:59 -07:00
20 changed files with 1233 additions and 648 deletions

View File

@@ -24,11 +24,11 @@ University Java courses were a breeze.
"Real life" had started for me; I didn't have $100 to my name, let alone a bed.
I pleaded with both friends and extended family to host me while I figured things out.
Within a few months, I managed to secure a job as a C++ programmer for a company that provided custom software solutions aimed at healthcarewild!
Within a few months, I managed to secure a job as a C++ programmer for a company that provided custom software solutions for healthcare--wild!
This time was short-lived, and out of desperation I decided to enlist.
As God would have it, I ended up in computer networking despite my best efforts at Navscoleod.
Looking back at that time, I marvel at how I operated.
A boy fixed on dreams of grandeur, yet consumed by the consequences of naivety.
A boy fixed on dreams of grandeur inevitably consumed by the consequences of naivety.
Imagine being a hobbyist and pseudo-classically trained programmer in the military.
Your only task: to maintain critical communications networks.
@@ -38,5 +38,5 @@ Imagine being a hobbyist and pseudo-classically trained programmer in the milita
After separating, I held several contracting positions, including a multi-year stint as a Security Operations Center Lead Engineer.
While tackling cybersecurity challenges in air-gapped environments, I grew weary of the pace of government work.
These days, I'm a full-time network development engineer, designing and deploying network infrastructure for a Tier-1 cloud provider.
These days, I'm a full-time network development engineer, designing and deploying network infrastructure for a tier-1 cloud provider.
In my spare time, I either work on personal projects or daydream of the financial freedom that would allow me to dedicate myself to those projects full-time.

View File

@@ -0,0 +1,64 @@
# oxpasta
A minimal shell script for interacting with a [rustypaste](https://github.com/orhun/rustypaste) server
## Brief
As someone who needed quick access to only a handful of features, [rpaste](https://github.com/orhun/rustypaste-cli) was overkill. As such, this shell script only provides shortcuts for 3 features: upload, oneshot (-o), and url shortening (-s).
## Help
``` zsh
Usage: oxpasta [OPTION] FILE
Options:
[none] {file} Upload a file
-o, --oneshot {file} Upload a file as a oneshot link
-s, --shorten-url {url} Shorten a given URL
-h, --help Display this help message
Description:
minimal rustypaste cli script
Requires:
export OXP_SERVER="https://example.com"
Examples:
oxpasta /path/to/file
| Uploads the file located at /path/to/file
oxpasta -o /path/to/file
| Uploads the oneshot URL https://example.com
oxpasta -s https://example.com/long/url
| Shortens the URL to https://<server>/<some-text>
```
## Setup
1. save `oxpasta.sh` file
1. symlink `oxpasta`
``` zsh
sudo ln -s /path/to/oxpasta.sh /usr/local/bin/oxpasta
```
1. set server url
``` zsh
echo 'export OXP_SERVER="https://<rustypaste-server-url>"' >> ~/.bashrc
source ~/.bashrc
```
## Example
``` zsh
$ git clone https://github.com/rskntroot/oxpasta.git
$ echo $PATH | grep -o '/usr/local/sbin'
$ sudo ln -s /home/${USER}/workspace/oxpasta/oxpasta.sh /usr/local/bin/oxpasta
$
$ sha256sum oxpasta/oxpasta.sh > file && cat file
8fb227774b7f24c22b1437303af7bcd222b4bd058563576102f87c351595deb0 workspace/oxpasta/oxpasta.sh
$ oxpasta file
https://paste.rskio.com/unsolicitous-fredricka.txt
$ curl https://paste.rskio.com/unsolicitous-fredricka.txt
8fb227774b7f24c22b1437303af7bcd222b4bd058563576102f87c351595deb0 workspace/oxpasta/oxpasta.sh
```

View File

@@ -14,20 +14,18 @@ This is intended to be installed on a public-facing loadbalancer.
## Assumptions
1. Your ISP randomly changes your PublicIP and that pisses you off.
1. Your ISP randomly changes your PublicIP and that upsets you.
1. You just want something that will curl `ipv4.icanhazip.com`, check 3rd-party dns, and update Route53.
1. Your Name records only contain a single IP. (future update maybe).
1. Your Name records only contain a single IP.
If so, this is for you.
## Setup
1. setup `Route53AllowRecordUpdate.policy`
```zsh
DNS_ZONE_ID=YOURZONEIDHERE \
envsubst < aws.policy > Route53AllowRecordUpdate.policy
```
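The repo's `aws.policy` template is the source of truth here; as a rough sketch (hypothetical — the real template may differ), a least-privilege template of that shape could look like the following. The quoted heredoc keeps `${DNS_ZONE_ID}` literal so `envsubst` can fill it in later.
``` bash
# hypothetical sketch of an aws.policy template; the repo's file is authoritative
cat <<'%%' > aws.policy.example
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/${DNS_ZONE_ID}"
    }
  ]
}
%%
```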
1. in aws, create IAM user, attach policy, generate access keys for automated service
1. get
1. in [aws console](https://console.aws.amazon.com):
- create IAM user
- attach the policy from the provided `aws.policy` file
- generate access keys for automated service
1. log into aws cli with the account you created above
```
aws configure
@@ -36,14 +34,18 @@ If so, this is for you.
``` zsh
ln -sf ~/r53-ddns/target/release/r53-ddns /usr/bin/r53-ddns
```
1. get your hosted_zone_id
``` zsh
aws route53 list-hosted-zones
```
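If you have several zones, you can filter straight to the Id with the AWS CLI's `--query` (a sketch; note that zone names include the trailing dot, and the Id value carries a `/hostedzone/` prefix):
``` bash
# print only the Id for a given zone
aws route53 list-hosted-zones \
  --query "HostedZones[?Name=='your.domain.com.'].Id" --output text
```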
1. setup systemd service and then install as normal
```zsh
``` zsh
DNS_ZONE_ID=YOURZONEIDHERE \
DOMAIN_NAME=your.domain.com. \
envsubst < r53-ddns.service | sudo tee -a /etc/systemd/system/r53-ddns.service
envsubst < r53-ddns.service | sudo tee /etc/systemd/system/r53-ddns.service
```
## CLI Usage
## Usage
```
$ r53-ddns -h
@@ -54,6 +56,7 @@ Usage: r53-ddns --dns-zone-id <DNS_ZONE_ID> --domain-name <DOMAIN_NAME>
Options:
-z, --dns-zone-id <DNS_ZONE_ID> DNS ZONE ID (see AWS Console Route53)
-d, --domain-name <DOMAIN_NAME> DOMAIN NAME (ex. 'docs.rskio.com.')
-s, --seconds <SECONDS> SECONDS refresh timer in seconds [default: 180]
-h, --help Print help
```
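For reference, a manual run (flags taken straight from the help text above; the zone id is a placeholder) might look like:
``` bash
r53-ddns --dns-zone-id YOURZONEIDHERE --domain-name your.domain.com. --seconds 300
```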
@@ -73,7 +76,9 @@ sudo systemctl status r53-ddns.service
```
```
$ systemctl status r53-ddns.service
$ envsubst < r53-ddns.service | sudo tee /etc/systemd/system/r53-ddns.service
$ sudo systemctl enable --now r53-ddns.service
$ sudo systemctl status r53-ddns.service
● r53-ddns.service - Route53 Dynamic DNS Service
Loaded: loaded (/etc/systemd/system/r53-ddns.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2024-07-29 09:03:40 UTC; 7min ago
@@ -86,18 +91,19 @@ $ systemctl status r53-ddns.service
Jul 29 09:03:40 hostname systemd[1]: Started Route53 Dynamic DNS Service.
Jul 29 09:03:40 hostname r53-ddns[215630]: [2024-07-29T09:03:40Z INFO r53_ddns] starting with options: -z [##TRUNCATED##] -d rskio.com.
Jul 29 09:03:40 hostname r53-ddns[215630]: [2024-07-29T09:03:40Z INFO r53_ddns] current public address is: 10.0.0.1
Jul 29 09:09:41 hostname r53-ddns[215630]: [2024-07-29T09:09:41Z INFO r53_ddns::dns] dynamic ip drift detected: 10.0.0.1 -> 71.211.88.219
Jul 29 09:09:41 hostname r53-ddns[215630]: [2024-07-29T09:09:41Z INFO r53_ddns::route53] requesting update to route53 record for A rskio.com. -> 71.211.88.219
Jul 29 09:09:41 hostname r53-ddns[215630]: [2024-07-29T09:09:41Z INFO r53_ddns::route53] change_id: /change/C02168177BNS6R50C32Q has status: Pending
Jul 29 09:10:41 hostname r53-ddns[215630]: [2024-07-29T09:09:41Z INFO r53_ddns::route53] change_id: /change/C02168177BNS6R50C32Q has status: Insync
```
## Q&A
## FAQs
> Why did you create this monster in rust?
> Does this handle multiple record updates?
To be able to handle errors in the future.
No. The goal here was for a single server to sync its dns record. If you are running multiple services from the same host, then consider using CNAMEs to point at a global A|AAAA record for this to update.
> wen IPv6?
> What if I need to update only a single address in the record?
It should work with IPv6.
Let me know. I have been considering this use-case, but haven't implemented it yet.

View File

@@ -1,154 +0,0 @@
# IPADDR
## Brief
A naive attempt at optimizing an ipv4 address with only std::env
Note: using `strace` to judge efficacy is not a valid approach.
I ended up trying a couple of different tests, but need to work on better methodology.
## Assumptions
=== "Cargo.tml"
``` toml
[profile.release]
strip = "symbols"
debug = 0
opt-level = "z"
lto = true
codegen-units = 1
panic = "abort"
```
## Code
### Unoptimized
- Stores args as an immutable (imut) string vector
- Stores `ip_addr` as imut string then shadows as imut string slice vector
- Uses len() calls for no real reason
=== "main.rs"
``` rust
use std::env;
fn main() {
let args: Vec<String> = env::args().collect();
if args.len() > 1 {
let ip_addr: String = args[1].to_string();
let ip_addr: Vec<&str> = ip_addr.split('.').collect();
if ip_addr.len() == 4 {
for octect in ip_addr {
octect.parse::<u8>().expect(&format!("invalid ip"));
}
} else {
panic!("invalid ip")
}
}
}
```
=== "strace"
``` zsh
~/workspace/ipcheck> sha256sum src/main.rs
4cb6865ea743c3a2cee6e05966e117b8db51f32cb55de6baad205196bbc4195d src/main.rs
~/workspace/ipcheck> cargo build --release
Compiling ipcheck v0.1.0 (/home/lost/workspace/ipcheck)
Finished `release` profile [optimized] target(s) in 2.93s
~/workspace/ipcheck> strace -c ./target/release/ipcheck 1.1.1.1
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ------------------
37.07 0.000470 470 1 execve
14.43 0.000183 14 13 mmap
8.52 0.000108 21 5 read
7.10 0.000090 15 6 mprotect
6.78 0.000086 21 4 openat
3.63 0.000046 23 2 munmap
3.08 0.000039 9 4 newfstatat
2.76 0.000035 11 3 brk
2.60 0.000033 6 5 rt_sigaction
2.52 0.000032 8 4 close
2.37 0.000030 7 4 pread64
1.50 0.000019 6 3 sigaltstack
1.34 0.000017 17 1 1 access
1.34 0.000017 8 2 prlimit64
1.10 0.000014 7 2 1 arch_prctl
1.03 0.000013 13 1 poll
0.71 0.000009 9 1 sched_getaffinity
0.63 0.000008 8 1 getrandom
0.55 0.000007 7 1 set_tid_address
0.47 0.000006 6 1 set_robust_list
0.47 0.000006 6 1 rseq
------ ----------- ----------- --------- --------- ------------------
100.00 0.001268 19 65 2 total
```
### Optimized
- Needs some cleanup
- Needs break for args after index 1
=== "main.rs"
``` rust
use std::env;
fn main() {
for (index, arg) in env::args().enumerate(){
if index == 1 {
for (i, octect) in arg.split('.').collect::<Vec<&str>>().iter().enumerate() {
if i > 3 {
panic!("invalid")
} else {
let _ = &octect.parse::<u8>().expect("invalid");
}
}
}
}
}
```
=== "strace"
``` zsh
~/workspace/ipcheck> sha256sum src/main.rs
838b3f0c99448e8bbe88001de4d12f5062d293a2a1fd236deacfabdb30a7e2e4 src/main.rs
~/workspace/ipcheck> cargo build --release
Compiling ipcheck v0.1.0 (/home/lost/workspace/ipcheck)
Finished `release` profile [optimized] target(s) in 2.89s
~/workspace/ipcheck> strace -c ./target/release/ipcheck 1.1.1.1
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ------------------
23.07 0.000161 12 13 mmap
15.33 0.000107 21 5 read
13.04 0.000091 15 6 mprotect
10.17 0.000071 17 4 openat
6.73 0.000047 23 2 munmap
4.87 0.000034 6 5 rt_sigaction
4.01 0.000028 7 4 pread64
4.01 0.000028 7 4 newfstatat
3.72 0.000026 6 4 close
2.87 0.000020 6 3 sigaltstack
2.15 0.000015 5 3 brk
2.01 0.000014 14 1 poll
1.86 0.000013 6 2 prlimit64
1.29 0.000009 9 1 sched_getaffinity
1.15 0.000008 8 1 getrandom
1.00 0.000007 3 2 1 arch_prctl
1.00 0.000007 7 1 set_tid_address
0.86 0.000006 6 1 set_robust_list
0.86 0.000006 6 1 rseq
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
------ ----------- ----------- --------- --------- ------------------
100.00 0.000698 10 65 2 total
```

View File

@@ -6,7 +6,7 @@ This site is meant to catalog my efforts.
Over the years, I've "spun my wheels" to learn, get things working, or explore interesting ideas--only for them to be lost to time.
You might see this site as a collection of my notes or at times my memoirs, words shaped only by my inspiration in the moment.
However, I intend for it to be much more.
This site exists for me along with the hope that something I've done might help you.
This site exists for me in hope that something I've done might help you.
## What does Rskio Mean?
@@ -26,13 +26,17 @@ Nothing.
It made sense if I blended "Ruskonator" (an old nickname) with Input/Output (IO).
The same goes for "rskntroot", it's a mix of that same nickname and "root".
## Coding
## Code
Currently, this is an unorganized list of things I have spent many of what corporate America refers to as "cycles" on.
Some code that I have spent many of what corporate America refers to as "cycles" on.
## Notes
References to information that I have found myself revisiting.
## Projects
Currently, the same as "coding".
An unorganized list of guides and project ideas that I have taken the time to document.
## Storage

mkdocs/docs/notes/cat8.md Normal file
View File

@@ -0,0 +1,20 @@
# CAT8
Never heard of her, but she is real.
## Really...
Telco Data [article](https://www.telco-data.com/blog/cat-cables/):
"Category 8 is the official successor to Cat6A cabling.
It is officially recognized by the IEEE and EIA and parts and pieces are standardized across manufacturers.
The primary benefit of Cat8 cabling is faster throughput over short distances: 40 Gbps up to 78 and 25 Gbps up to 100.
From 100 to 328, Cat8 provides the same 10Gbps throughput as Cat6A cabling."
ANSI/TIA [TIA Press Release](https://standards.tiaonline.org/tia-issues-new-balanced-twisted-pair-telecommunications-cabling-and-components-standard-addendum-1):
"TIA-568-C.2-1 - This addendum specifies minimum requirements for shielded category 8 balanced twisted-pair telecommunications
cabling (e.g. channels and permanent links) and components (e.g. cable,connectors, connecting hardware, and equipment cords)
that are used up to and including the equipment outlet/connector in data centers, equipment rooms, and other spaces that need
high speed applications. This addendum also specifies field test procedures and applicable laboratory reference measurement
procedures for all transmission parameters."

View File

@@ -54,38 +54,43 @@ exit
- see [https://docs.docker.com/engine/install/ubuntu/](https://docs.docker.com/engine/install/ubuntu/)
``` bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh
sudo systemctl enable --now docker
rm -f ./get-docker.sh
sudo usermod -a -G docker $(whoami)
sudo -i
curl -fsSL https://get.docker.com | sh
systemctl enable --now docker
```
``` bash
usermod -aG docker ${SOME_USER}
docker ps
```
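Group membership is only picked up on a new login session for that user; one way to pick it up immediately in an existing shell (as that user, assuming `newgrp` is available):
``` bash
# apply the new docker group without logging out
newgrp docker
docker ps   # should now work without sudo
```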
#### Completion
- see [https://docs.docker.com/config/completion/](https://docs.docker.com/config/completion/)
=== "Debian"
=== "docker"
``` bash
mkdir -p ~/.local/share/bash-completion/completions
docker completion bash > ~/.local/share/bash-completion/completions/docker
source ~/.bashrc
```
=== "bash"
``` bash
sudo apt install bash-completion -y
```
=== "Fedora"
``` bash
sudo dnf install bash-completion -y
cat <<%% >> ~/.bashrc
if [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
%%
```
``` bash
cat <<%% >> ~/.bashrc
if [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
%%
mkdir -p ~/.local/share/bash-completion/completions
docker completion bash > ~/.local/share/bash-completion/completions/docker
source ~/.bashrc
```
### Tools
@@ -114,37 +119,27 @@ source ~/.bashrc
#### fastfetch
- see [fastfetch](https://github.com/fastfetch-cli/fastfetch) for more info
- see [fastfetch](https://github.com/fastfetch-cli/fastfetch/releases) for more info
=== "Debian"
``` bash
url="https://github.com/fastfetch-cli/fastfetch/releases/download/2.47.0/fastfetch-linux-amd64.deb"
```
``` bash
url="https://github.com/fastfetch-cli/fastfetch/releases/download/2.37.0/fastfetch-linux-aarch64.deb"
```
``` bash
mkdir -p ~/downloads/ && cd ~/downloads
curl -fsSL ${url} -o fastfetch.deb
sudo dpkg -i ./fastfetch.deb
```
``` bash
mkdir -p ~/downloads/ && cd ~/downloads
curl -fsSL ${url} -o fastfetch.deb
sudo dpkg -i ./fastfetch.deb
```
=== "Fedora"
``` bash
url="https://github.com/fastfetch-cli/fastfetch/releases/download/2.37.0/fastfetch-linux-aarch64.deb"
```
``` bash
mkdir -p ~/downloads/ && cd ~/downloads
curl -fsSL ${url} -o fastfetch.rpm
sudo dnf install ./fastfetch.rpm
```
#### Profile
``` bash
cat <<%% >> ~/.bashrc
# RSKIO
fastfetch
alias ..="cd .."
alias ...="cd ../.."
alias q="exit"
alias ff="fastfetch"
%%
source ~/.bashrc
```

View File

@@ -0,0 +1,123 @@
# ClusterIssuer
Allows certificate requests from an ACME provider. This is used to enable HTTPS/TLS for services you stand up.
## Setup
see [cert-manager kubectl install](https://cert-manager.io/docs/installation/kubectl/) for more info
=== "v1.18"
``` bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.18.0/cert-manager.yaml
```
create at least one of the `ClusterIssuer` types below
### External
uses Let's Encrypt and public DNS records to issue HTTPS certs for your sites
``` yaml title="letsencrypt/clusterissuer.yml"
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: default
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: ${EMAIL}
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- selector: {}
http01:
ingress:
class: traefik
```
### Internal
pointed at an internal ACME provider to generate certs for an intranet
``` yaml title="internal/clusterissuer.yml"
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: internal-issuer
spec:
acme:
email: ${EMAIL}
server: ${ACME_URL}
privateKeySecretRef:
name: internal-issuer-account-key
caBundle: ${CA_BUNDLE_BASE64} # ca bundle that was used to generate the tls cert for the acme site
solvers:
- selector: {}
http01:
ingress:
class: traefik
```
## Certificate
### Example
create a `certificate.yml` file for a traefik `IngressRoute`
=== "Certificate"
``` yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: io-rsk-docs-tls
spec:
secretName: io-rsk-docs-tls
issuerRef:
name: dev-step-issuer
kind: ClusterIssuer
commonName: docs.dev.rsk.io
dnsNames:
- docs.dev.rsk.io
privateKey:
algorithm: RSA
encoding: PKCS1
size: 2048
usages:
- server auth
- client auth
duration: 2160h # 90 days
renewBefore: 360h # 15 days
secretTemplate:
annotations:
kubeseal-secret: "true"
labels:
domain: docs-dev-rsk-io
```
=== "IngressRoute"
``` yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: rskio-docs
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`docs.dev.rsk.io`)
kind: Rule
services:
- name: rskio-docs
port: 80
tls:
secretName: io-rsk-docs-tls
```
After applying this `Certificate` a `Secret` is created containing the `.crt` and `.key` files.
These are loaded by the traefik.io `IngressRoute` under `spec.tls.secretName`.
This enables the TLS cert to be used for HTTPS client reachability.
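To verify issuance (names match the example above), something like this should show the cert going Ready and let you peek at the issued certificate:
``` bash
# watch the certificate status, then decode the issued cert from the secret
kubectl get certificate io-rsk-docs-tls
kubectl get secret io-rsk-docs-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | head
```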

View File

@@ -0,0 +1,110 @@
# Longhorn
Provides distributed storage for the cluster.
We will only be editing the nodes as many of the defaults are sufficient.
## Requirements
All cluster nodes need these packages installed:
``` bash
sudo apt install open-iscsi nfs-common -y
```
see [longhorn os-specific requirements](https://longhorn.io/docs/1.9.0/deploy/install/#osdistro-specific-configuration) for more information.
## Setup
=== "v1.9.0"
``` bash
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.9.0/deploy/longhorn.yaml
```
see [longhorn installation](https://longhorn.io/docs/1.9.0/deploy/install/install-with-kubectl/#installing-longhorn) for more information.
## Dashboard
### Service
create and apply `longhorn/service.yml`
``` yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: longhorn-ui
name: longhorn-dashboard
namespace: longhorn-system
spec:
ports:
- port: 8000
protocol: TCP
targetPort: 8000
name: web
selector:
app: longhorn-ui
```
### Ingress
create and apply `longhorn/ingress.yml`
``` yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: longhorn-dashboard
namespace: longhorn-system
spec:
entryPoints:
- websecure
routes:
- match: Host(`storage.${DOMAIN_NAME}`)
kind: Rule
services:
- name: longhorn-dashboard
port: 8000
```
After creating a `ClusterIssuer` be sure to create a `Certificate` and apply it with `spec.tls.secretName`.
With Traefik you can also use certResolver, though clusterissuer certs allow for more fine-grained control.
## StorageClass
create and apply `longhorn/storageclass.yml`
``` yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn-data
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "300"
fromBackup: ""
fsType: "ext4"
```
## PVC
create and apply `some-app/pvc.yml`
``` yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: some-app-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500M
storageClassName: longhorn-data
```
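As a quick (hypothetical) smoke test, a throwaway pod can mount the claim to prove the volume provisions and attaches:
``` bash
kubectl apply -f - <<%%
apiVersion: v1
kind: Pod
metadata:
  name: pvc-smoke-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "df -h /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: some-app-pvc
%%
```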

View File

@@ -0,0 +1,163 @@
# Traefik
## Brief
Enabling access to the dashboard and metrics for the Traefik ingress controller in a k3s kubernetes cluster
- by `rskntroot` on `2024-07-01`
## Assumptions
``` bash
$ k3s --version
k3s version v1.32.5+k3s1 (8e8f2a47)
go version go1.23.8
```
``` bash
$ kubectl version
Client Version: v1.32.5+k3s1
Kustomize Version: v5.5.0
Server Version: v1.32.5+k3s1
```
## Dashboards
K3S comes packaged with `Traefik Dashboard` enabled by default, but not exposed.
### Preparation
#### DNS
=== "DNS"
Set DNS record `traefik.your.domain.com`
=== "Hosts File"
Alternatively, you can just edit your `hosts` file.
``` title="/etc/hosts"
10.0.0.1 traefik.your.domain.com
```
!!! warning "This example does not include authentication. Exposing these dashboards is a security risk. Recommend enabling mTLS."
#### Middlewares
On a host with `kubectl` access.
``` yaml title="middlewares.yml"
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: redirect-https
namespace: default
spec:
redirectScheme:
scheme: https
permanent: true
port: "443"
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: redirect-dashboard
namespace: default
spec:
redirectRegex:
regex: "^https?://([^/]+)/?$"
replacement: "https://${1}/dashboard/"
permanent: true
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: ratelimit
namespace: default
spec:
rateLimit:
average: 100
burst: 50
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: compress
namespace: default
spec:
compress: {}
```
``` bash
kubectl apply -f middlewares.yml
```
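Quick check that they registered (assuming the traefik CRDs are installed, as they are by default on k3s):
``` bash
kubectl get middlewares.traefik.io -n default
```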
### Setup IngressRoute
create `ingress.yml` and update `"edge.rskio.com"` with your domain name
``` yaml title="ingress.yml"
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: traefik-dashboard
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`edge.rskio.com`) # Update with your domain name
kind: Rule
services:
- name: api@internal
kind: TraefikService
middlewares:
- name: redirect-https
- name: redirect-dashboard
- name: ratelimit
- name: compress
```
``` bash
kubectl apply -f ingress.yml
```
## Access Dashboards
You should now be able to access the Traefik Ingress Controller Dashboard and metrics remotely.
From web browser go to the domain you specified in the ingress.
=== "Traefik Dashboard"
```
https://edge.your.domain.com
```
will follow `redirect-https` and get you to
```
https://edge.your.domain.com/dashboard/#/
```
### Disable Dashboards
=== "Bash"
``` bash
kubectl delete -f ingress.yml
```
=== "Example"
``` bash
$ kubectl delete -f traefik/ingress.yml
ingressroute.traefik.io "traefik-ingress" deleted
```
## References
- [https://docs.k3s.io](https://docs.k3s.io)
- [https://doc.traefik.io/traefik/](https://doc.traefik.io/traefik/)

View File

@@ -0,0 +1,372 @@
# Webhooks
Continuous integration on easy mode.
Webhooks allow for a ton of functionality,
but we are going to use them to kick off a kubernetes job,
effectively automating content reloads on a static website.
## Background
This docs website is a static site that is hosted inside an nginx container.
The storage for these redundant pods is a Longhorn RWX PVC that gets stood up.
To initialize the storage, a kubernetes job is run. This job does the following:
- git clones the `rskntroot/rskio` repo containing the artifacts required to render the site
- executes the `mkdocs` command to render the static site
So what if, when we push to github, we set up a webhook that tells kubernetes to kick off that job?
Well, we achieve some form of automation.
So how do we do this?
## Setup
### RBAC
=== "ServiceAccount"
``` yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: webhook-job-trigger
```
=== "Dev Roles"
``` yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: job-creator
namespace: dev
rules:
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: job-creator-binding
namespace: dev
subjects:
- kind: ServiceAccount
name: webhook-job-trigger
namespace: default
roleRef:
kind: Role
name: job-creator
apiGroup: rbac.authorization.k8s.io
```
=== "Prod Roles"
``` yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: job-creator
namespace: prod
rules:
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: job-creator-binding
namespace: prod
subjects:
- kind: ServiceAccount
name: webhook-job-trigger
namespace: default
roleRef:
kind: Role
name: job-creator
apiGroup: rbac.authorization.k8s.io
```
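Before wiring up the webhook, it's worth sanity-checking the bindings with `kubectl auth can-i`:
``` bash
# verify the service account can create jobs in both namespaces
kubectl auth can-i create jobs.batch -n dev \
  --as=system:serviceaccount:default:webhook-job-trigger
kubectl auth can-i create jobs.batch -n prod \
  --as=system:serviceaccount:default:webhook-job-trigger
```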
### ConfigMap
We will create a config map from a directory including the following files.
#### ConvertJob
We are going to be using curl to call the kubernetes API directly,
so we need to convert our job from yaml to json.
Convert the job to JSON and save to `etc/mkdocs-dev.json`
=== "Job"
``` yaml
apiVersion: batch/v1
kind: Job
metadata:
generateName: mkdocs-builder-
namespace: dev
spec:
ttlSecondsAfterFinished: 600
template:
spec:
containers:
- name: mkdocs
image: squidfunk/mkdocs-material
command: ["/bin/sh", "-c"]
args:
- |
git clone --single-branch -b dev https://github.com/rskntroot/rskio.git --depth 1 /docs
cd /docs/mkdocs
mkdocs build --site-dir /output
volumeMounts:
- name: mkdocs-storage
mountPath: /output
restartPolicy: Never
volumes:
- name: mkdocs-storage
persistentVolumeClaim:
claimName: mkdocs-pvc
```
=== "Convert"
``` bash
mkdir etc
cat job.yml | yq -e -j | jq > etc/mkdocs-dev.json
```
The docs below assume that you also created `etc/mkdocs-main.json`.
#### Hooks
create `etc/hooks.yaml`
=== "etc/hooks.yaml"
``` yaml
- id: rskio-mkdocs
execute-command: /etc/webhook/reload.sh
command-working-directory: /etc/webhook
response-message: payload received
response-headers:
- name: Access-Control-Allow-Origin
value: "*"
pass-arguments-to-command:
- source: payload
name: ref
- source: payload
name: repository.full_name
trigger-rule:
and:
- match:
type: value
value: push
parameter:
source: header
name: X-GitHub-Event
- match:
type: value
value: rskntroot/rskio
parameter:
source: payload
name: repository.full_name
```
=== "Secret"
after testing, come back and implement secrets
``` yaml
trigger-rule:
and:
- match:
type: payload-hmac-sha1
secret: mysecret
parameter:
source: header
name: X-Hub-Signature
```
apply the `configmap` and rollout restart the webhook deployment
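When testing the secret trigger-rule by hand, the signature header can be computed with openssl (a sketch assuming the `mysecret` value above; GitHub sends it as `sha1=<hex>`):
``` bash
# compute the X-Hub-Signature value for a test payload
payload='{"ref":"refs/heads/dev","repository":{"full_name":"rskntroot/rskio"}}'
echo -n "${payload}" | openssl dgst -sha1 -hmac 'mysecret' | awk '{print "sha1="$NF}'
```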
#### Command
``` bash title="etc/reload.sh"
#!/bin/sh
REF=$1
REPO=$2
dispatch() {
NS=$1
JOB_JSON=$2
SA_PATH="/var/run/secrets/kubernetes.io/serviceaccount"
curl https://kubernetes.default.svc/apis/batch/v1/namespaces/${NS}/jobs \
-X POST \
-H "Authorization: Bearer $(cat ${SA_PATH}/token)" \
-H "Content-Type: application/json" \
--cacert "${SA_PATH}/ca.crt" \
-d "@${JOB_JSON}"
}
docs(){
case ${REF} in
refs/heads/dev)
dispatch dev "/etc/webhook/mkdocs-dev.json"
;;
refs/heads/main)
dispatch prod "/etc/webhook/mkdocs-main.json"
;;
*)
echo "skipping push to unsupported ref ${REF}"
exit 0
;;
esac
}
case ${REPO} in
rskntroot/rskio)
docs
;;
*)
echo "skipping push to unsupported repo ${REPO}"
;;
esac
```
#### Create
once all resources in `etc` are created run the following command:
``` bash
kubectl create configmap webhook-etc --from-file=etc
```
if you need to update anything run the following:
``` bash
kubectl delete configmap webhook-etc
kubectl create configmap webhook-etc --from-file=etc
```
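Alternatively, the delete/create pair can be collapsed into one idempotent command (client-side dry-run piped into apply):
``` bash
kubectl create configmap webhook-etc --from-file=etc \
  --dry-run=client -o yaml | kubectl apply -f -
```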
### Resources
The following resources complete the setup
=== "Deployment"
``` yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: webhook-docs
spec:
replicas: 1
selector:
matchLabels:
app: webhook-docs
template:
metadata:
labels:
app: webhook-docs
spec:
serviceAccountName: webhook-job-trigger
containers:
- name: webhook-docs
image: ghcr.io/linuxserver-labs/webhook:latest
command: ["/app/webhook"]
args:
- -hooks=/etc/webhook/hooks.yaml
- -hotreload
- -verbose
volumeMounts:
- name: webhook-etc
mountPath: /etc/webhook
volumes:
- name: webhook-etc
configMap:
name: webhook-etc
defaultMode: 493 # 0755
```
=== "Service"
``` yaml
apiVersion: v1
kind: Service
metadata:
name: webhook
spec:
selector:
app: webhook-docs
ports:
- protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
```
=== "Certificate"
``` yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: io-rsk-dev-hooks-tls
spec:
secretName: io-rsk-dev-hooks-tls
issuerRef:
name: dev-step-issuer
kind: ClusterIssuer
commonName: hooks.dev.rsk.io
dnsNames:
- hooks.dev.rsk.io
privateKey:
algorithm: RSA
encoding: PKCS1
size: 2048
usages:
- server auth
- client auth
duration: 2160h # 90 days
renewBefore: 360h # 15 days
secretTemplate:
annotations:
kubeseal-secret: "true"
labels:
domain: hooks-dev-rsk-io
```
=== "Ingress"
``` yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: webhook
spec:
entryPoints:
- websecure
routes:
- match: Host(`hooks.dev.rsk.io`)
kind: Rule
services:
- name: webhook
port: 9000
middlewares:
- name: ratelimit
tls:
secretName: io-rsk-dev-hooks-tls
```
## Testing
``` bash
curl -X POST https://hooks.dev.rsk.io/hooks/rskio-mkdocs \
-H 'X-Github-Event: push' \
-H 'Content-Type: application/json' \
-d '{"ref": "refs/heads/dev","repository": {"full_name":"rskntroot/rskio"}}'
```
!!! note "Github needs access to a public domain for this to work."

View File

@@ -1,384 +0,0 @@
# K3S Traefik Setup
## Brief
Enabling access to the dashboard and metrics for the Traefik ingress controller in a k3s kubernetes cluster
- by `rskntroot` on `2024-07-01`
## Assumptions
``` bash
$ k3s --version
k3s version v1.29.5+k3s1 (4e53a323)
go version go1.21.9
```
``` bash
$ kubectl version
Client Version: v1.29.5+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.5+k3s1
```
## Traefik Dashboards
K3S comes packaged with `Traefik Dashboard` and `Prometheus Metrics` which are disabled by default.
### Preparation
=== "DNS"
Set DNS record `traefik.your.domain.com` in a non-public DNS
=== "Hosts File"
Alternatively, you can just edit your workstation's `hosts` file.
``` title="/etc/hosts"
10.0.0.1 traefik.your.domain.com
```
!!! warning "This example does not include authentication. Exposing these dashboards is a security risk."
### Update Manifest
On a host with `kubectl` access.
Add the following to `spec.valuesContent` in:
``` bash
vim /var/lib/rancher/k3s/server/manifests/traefik.yaml
```
=== "Yaml"
``` yaml
dashboard:
enabled: true
metrics:
prometheus: true
```
=== "Example"
``` yaml
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-25.0.3+up25.0.0.tgz
set:
global.systemDefaultRegistry: ""
valuesContent: |-
deployment:
podAnnotations:
prometheus.io/port: "8082"
prometheus.io/scrape: "true"
dashboard:
enabled: true
metrics:
prometheus: true
```
### Restart Ingress Controller
=== "Bash"
``` bash
kubectl -n kube-system scale deployment traefik --replicas=0
# wait a few seconds
kubectl -n kube-system get deployment traefik
kubectl -n kube-system scale deployment traefik --replicas=1
```
=== "Example"
``` bash
$ kubectls scale deployment traefik --replicas=0
deployment.apps/traefik scaled
$ kubectls get deployment traefik
NAME READY UP-TO-DATE AVAILABLE AGE
traefik 0/0 0 0 3d1h
$ kubectls scale deployment traefik --replicas=1
deployment.apps/traefik scaled
```
### Create Resource Definition YAML
Save the following to `traefik-dashboard.yml` in your workspace.
=== "Traefik Dashboard"
``` yaml title="traefik-dashboard.yml"
apiVersion: v1
kind: Service
metadata:
name: traefik-dashboard
namespace: kube-system
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/name: traefik-dashboard
spec:
type: ClusterIP
ports:
- name: traefik
port: 9000
targetPort: 9000
protocol: TCP
selector:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-ingress
namespace: kube-system
annotations:
spec.ingressClassName: traefik
spec:
rules:
- host: traefik.${DOMAIN}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: traefik-dashboard
port:
number: 9000
```
=== "Promethus Only"
``` yaml title="traefik-dashboard.yml"
apiVersion: v1
kind: Service
metadata:
name: traefik-metrics
namespace: kube-system
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/name: traefik-metrics
spec:
type: ClusterIP
ports:
- name: traefik
port: 9100
targetPort: 9100
protocol: TCP
selector:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-ingress
namespace: kube-system
annotations:
spec.ingressClassName: traefik
spec:
rules:
- host: traefik.${DOMAIN}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: traefik-dashboard
port:
number: 9000
- path: /metrics
pathType: Prefix
backend:
service:
name: traefik-metrics
port:
number: 9100
```
=== "Both"
``` yaml title="traefik-dashboard.yml"
apiVersion: v1
kind: Service
metadata:
name: traefik-dashboard
namespace: kube-system
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/name: traefik-dashboard
spec:
type: ClusterIP
ports:
- name: traefik
port: 9000
targetPort: 9000
protocol: TCP
selector:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/name: traefik
---
apiVersion: v1
kind: Service
metadata:
name: traefik-metrics
namespace: kube-system
labels:
app.kubernetes.io/instance: traefik
app.kubernetes.io/name: traefik-metrics
spec:
type: ClusterIP
ports:
- name: traefik
port: 9100
targetPort: 9100
protocol: TCP
selector:
app.kubernetes.io/instance: traefik-kube-system
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-ingress
namespace: kube-system
annotations:
spec.ingressClassName: traefik
spec:
rules:
- host: traefik.${DOMAIN}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: traefik-dashboard
port:
number: 9000
- path: /metrics
pathType: Prefix
backend:
service:
name: traefik-metrics
port:
number: 9100
```
### Create Service & Ingress Resources
First, set the environment variable to your domain.
``` bash
export DOMAIN=your.domain.com
```
=== "Bash"
``` bash
envsubst < traefik-dashboard.yml | kubectl apply -f -
```
=== "Example"
``` bash
$ envsubst < traefik-dashboards.yml | kubectl apply -f -
service/traefik-dashboard created
service/traefik-metrics created
ingress.networking.k8s.io/traefik-ingress created
$ kubectls get svc | grep traefik-
traefik-dashboard ClusterIP 10.43.157.54 <none> 9000/TCP 25s
traefik-metrics ClusterIP 10.43.189.128 <none> 9100/TCP 25s
```
!!! note annotate "Why are we passing the yaml file into `envsubst`? (1)"
1. `envsubst` - [gnu](https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) - enables code-reuse by providing environment variable substitution as demonstrated above.
### Access Dashboards
That's it. You should now be able to access the Traefik Ingress Controller Dashboard and metrics remotely.
Don't forget to include the appropriate uri paths:
=== "Traefik Dashboard"
```
https://traefik.your.domain.com/dashboard/
```
!!! tip "When navigating to the traefik dashboard the `/` at the end is necessary. `/dashboard` will not work. "
=== "Promethus Metrics"
```
https://traefik.your.domain.com/metrics
```
### Disable Dashboards
=== "Bash"
``` bash
envsubst < traefik-dashboard.yml | kubectl delete -f -
```
=== "Example"
``` bash
$ envsubst < traefik-dashboards.yml | kubectl delete -f -
service "traefik-dashboard" deleted
service "traefik-metrics" deleted
ingress.networking.k8s.io "traefik-ingress" deleted
```
## Shortcuts
### alias kubectls
!!! tip "When using an `alias` to substitute `kubectl` command completion will not work."
=== "Bash"
``` bash
echo 'alias kubectls="kubectl -n kube-system"' >> ~/.bashrc
source ~/.bashrc
```
=== "Example"
``` bash
$ echo 'alias kubectls="kubectl -n kube-system"' >> ~/.bashrc
$ source ~/.bashrc
$ kubectls get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 3d2h
local-path-provisioner 1/1 1 1 3d2h
metrics-server 1/1 1 1 3d2h
traefik 1/1 1 1 3d2h
```
#### Alternatives
- `skubectl` means you can hit `[up-arrow]` `[ctrl]+[a]` `[s]` `[enter]` when you inevitably forget to include `-n kube-system`
- `kubectls` just adds `[alt]+[right-arrow]` into the above before `[s]`
- `kubesctl` makes sense because all of these are really kube-system-ctl, but that adds 4x `[right-arrow]`, ewww.
## References
- [https://docs.k3s.io](https://docs.k3s.io)
- [https://k3s.rocks/traefik-dashboard/](https://k3s.rocks/traefik-dashboard/)
- [https://doc.traefik.io/traefik/v2.10/](https://doc.traefik.io/traefik/v2.10/)

View File

@@ -1,46 +1,113 @@
# HomeLab Network
# Premium Home Network
## Classy
## Brief
Welcome to my recommended HomeLab network setup! Here's a breakdown of the key components.
- by `rskntroot` on `2025-06-17`
---
## Components
### Router
Unifi [Dream Machine Special Edition](https://techspecs.ui.com/unifi/unifi-cloud-gateways/udm-se)
=== "USDM"
**Unifi [Dream Machine Special Edition](https://techspecs.ui.com/unifi/unifi-cloud-gateways/udm-se)**
- All-in-one gateway with security, routing, and network management.
- Provides 10Gb/s SFP ports
=== "Max"
**Unifi [Cloud Gateway Max](https://techspecs.ui.com/unifi/cloud-gateways/ucg-max)**
- All-in-one gateway with security, routing, and network management.
- Provides 2.5Gb/s SFP ports
- Limited Camera/NVR storage
---
### Switching
=== "Expensive"
=== "Premium"
Unifi [Pro Max 24 PoE](https://techspecs.ui.com/unifi/switching/usw-pro-max-24)
**Unifi [Pro Max 24 PoE](https://techspecs.ui.com/unifi/switching/usw-pro-max-24)**
`$799 USD` PoE Switch with 8 x 2.5GbE & 16 1GbE PoE++@400W + 2x 10G SFP Uplink
- **Price:** $799 USD
- **Specs:**
- 8 × 2.5GbE PoE++
- 16 × 1GbE PoE++ @ 400W
- 2 × 10G SFP+ uplinks
=== "Cheap"
=== "Standard"
Mokerlink [8-Port 2.5Gb PoE Switch](https://www.amazon.com/dp/B0C7VT8TVB/)
**Cisco [WS-C3650-12X48UQ-S](https://www.ebay.com/itm/365350985160)**
`$89 USD` Unmanaged PoE Switch with 8 x 2.5GbE PoE+@135W + 1x 10G SFP Uplink
- **Price** $130 USD
- **Specs:**
- 12 × 100Mbps/1/2.5/5/10 Gbps PoE+ @820W shared
- 36 × 10/100/1000 Gbps PoE+ @820W shared
- 4 × 10G SFP+
Amcrest [8-Port 1Gb PoE Switch](https://www.amazon.com/dp/B08FCQ8BRC/)
=== "Cheaper"
`$79 USD` Unmanaged PoE Switch with 8 x 1GbE PoE+@96W
**Mokerlink [8-Port 2.5Gb PoE Switch](https://www.amazon.com/dp/B0C7VT8TVB/)**
### Wifi
- **Price:** $89 USD
- **Specs:**
- 8 × 2.5GbE PoE+
- 1 × 10G SFP+ uplink
- 135W Total PoE
Unifi [U7 Pro](https://techspecs.ui.com/unifi/wifi/u7-pro) x6
**2x Amcrest [8-Port 1Gb PoE Switch](https://www.amazon.com/dp/B08FCQ8BRC/)**
- **Price:** $79 USD x2
- **Specs:**
- 8 × 1GbE PoE+
- 96W Total PoE
---
### WiFi
=== "Wifi7"
**Unifi [U7 Pro](https://techspecs.ui.com/unifi/wifi/u7-pro)**
- WiFi 7 access points with strong coverage and performance.
=== "Wifi6"
**Unifi [U6 Pro](https://techspecs.ui.com/unifi/wifi/u6-pro)**
- WiFi 6 access points with strong coverage and performance.
---
### Cameras
Unifi [G5 Bullet](https://techspecs.ui.com/unifi/cameras-nvrs/uvc-g5-bullet) x6
**Unifi [G5 Bullet](https://techspecs.ui.com/unifi/cameras-nvrs/uvc-g5-bullet)**
### Network Attached Storage
- 4MP resolution, HDR, AI motion detection.
=== "6-Bay"
UGREEN [DXP6800 PRO](https://www.ugreen.com/collections/nas-storage/products/ugreen-nasync-dxp6800-pro-nas-storage)
---
see [Personal NAS](../storage/personal_nas.md)
## Network Attached Storage
=== "8-Bay"
UGREEN [DXP9800 PRO](https://www.ugreen.com/collections/nas-storage/products/ugreen-nasync-dxp8800-plus-nas-storage)
=== "6-Bay NAS"
see [Enterprise NAS](../storage/enterprise_nas.md)
**UGREEN [DXP6800 PRO](https://www.ugreen.com/collections/nas-storage/products/ugreen-nasync-dxp6800-pro-nas-storage)**
- See [Personal NAS](../storage/personal_nas.md) for setup details.
=== "8-Bay NAS"
**UGREEN [DXP9800 PRO](https://www.ugreen.com/collections/nas-storage/products/ugreen-nasync-dxp8800-plus-nas-storage)**
- See [Enterprise NAS](../storage/soho_nas.md) for more details.
---
This setup balances cost and performance, making it flexible for both home and small business use.

View File

@@ -1,9 +1,11 @@
# Raspberry Pi Homelab Build
# RaspberryPi HomeLab
## Brief
This page documents a Raspberry Pi-based homelab build, listing the required components and their respective sources.
- by `rskntroot` on `2025-02-22`
## Core Components
These are the essential components required to build the Raspberry Pi homelab setup. Recommend between 3-4

View File

@@ -35,7 +35,7 @@ If you need more RAM, USB3.0, or AI Acceleration is mandatory, checkout LibreCom
## Projects
This website is hosted on 2 sweet potatoes with an alta as the cluster controller.
I am running a K3s cluster with a couple of these as worker nodes.
## Notes
@@ -43,26 +43,9 @@ This website is hosted on 2 sweet potatos with an alta as the cluster controller
Using Power over Ethernet (PoE) to run your SoCs is just awesome! You only need 1 cable?! Be sure to get yourself some good cables and a solid PoE switch.
I have personally been using these:
Examples:
- [CAT8 Ethernet cables](https://www.amazon.com/dp/B08PL1P53C/)
- I've used countless Ethernet cables and fashioned hundreds of my own; I can confirm these are premium.
- [1G PoE+ 8-port Switch](https://www.amazon.com/dp/B08FCQ8BRC)
- Unmanaged switch that I can recommend. Works like a charm.
### CAT8 Real?
Telco Data [article](https://www.telco-data.com/blog/cat-cables/):
"Category 8 is the official successor to Cat6A cabling.
It is officially recognized by the IEEE and EIA and parts and pieces are standardized across manufacturers.
The primary benefit of Cat8 cabling is faster throughput over short distances: 40 Gbps up to 78 and 25 Gbps up to 100.
From 100 to 328, Cat8 provides the same 10Gbps throughput as Cat6A cabling."
ANSI/TIA [TIA Press Release](https://standards.tiaonline.org/tia-issues-new-balanced-twisted-pair-telecommunications-cabling-and-components-standard-addendum-1):
"TIA-568-C.2-1 - This addendum specifies minimum requirements for shielded category 8 balanced twisted-pair telecommunications
cabling (e.g. channels and permanent links) and components (e.g. cable,connectors, connecting hardware, and equipment cords)
that are used up to and including the equipment outlet/connector in data centers, equipment rooms, and other spaces that need
high speed applications. This addendum also specifies field test procedures and applicable laboratory reference measurement
procedures for all transmission parameters."

View File

@@ -0,0 +1,159 @@
# Step CA
An internal CA and ACME Provider.
## Brief
Guide to set up an internal Certificate Authority and ACME provider
for issuing trusted TLS certs for internal sites.
This is useful for both a traefik certificateResolver and a kubernetes ClusterIssuer.
Step can do more, but let's configure the basics.
- by `rskntroot` on `2025-06-18`
## Assumptions
- An Internal DNS server is configured and accessible.
- Debian is your choice for the ACME/CA server install.
## Install
``` bash
sudo -i
```
``` bash
apt-get update && apt-get install -y --no-install-recommends curl vim gpg ca-certificates
curl -fsSL https://packages.smallstep.com/keys/apt/repo-signing-key.gpg -o /etc/apt/trusted.gpg.d/smallstep.asc && \
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/smallstep.asc] https://packages.smallstep.com/stable/debian debs main' \
| tee /etc/apt/sources.list.d/smallstep.list
apt-get update && apt-get -y install step-cli step-ca
```
!!! note "For more install instructions see smallstep [installation guide](https://smallstep.com/docs/step-ca/installation/)."
## Config Setup
``` bash
echo 'some-password' > secret
```
=== "Config"
``` bash
step ca init \
--deployment-type standalone \
--name ${CA_NAME} \
--dns=${CA_DNS_NAMES} \
--address "0.0.0.0:5001" \
--provisioner ${CA_EMAIL} \
--password-file ./secret
```
=== "Example"
``` bash
step ca init \
--deployment-type standalone \
--name rskio \
--dns=rskio.com,rskntr.com \
--address "0.0.0.0:5001" \
--provisioner dev@rskio.com \
--password-file ./secret
```
``` bash
step ca provisioner add dev --type ACME
mv secret /root/.step/config/.
```
## Service
``` bash
vi /root/.step/step.service
```
paste the following and save with `[ESC] [:] [x] [ENTER]`
``` ini
[Unit]
Description=Step CA & ACME Provider
After=network-online.target
Requires=network-online.target
[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/bin/step-ca /root/.step/config/ca.json --password-file /root/.step/config/secret
User=root
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
```
``` bash
ln -s /root/.step/step.service /etc/systemd/system/.
systemctl daemon-reload
systemctl enable --now step.service
systemctl status step.service
```
``` bash
ss -pnlt | grep 5001
curl -k https://localhost:5001/acme/dev/directory
```
you should see service logs showing it is listening on port `:5001`, and the `curl` should return the ACME directory contents
## Certificates
### Trust
``` bash
cat ~/.step/certs/root_ca.crt
cat ~/.step/certs/intermediate_ca.crt
```
Save and install these files into the trusted certificate store on your endpoint and enable trust for SSL signing.
You should now be able to browse to your sites without warnings.
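On a Debian-based endpoint, for example, system-wide trust would look roughly like this (the `.crt` extension under `/usr/local/share/ca-certificates/` is required by `update-ca-certificates`; the site URL is a placeholder):
``` bash
# install the step root CA into the system trust store (Debian/Ubuntu)
sudo cp root_ca.crt /usr/local/share/ca-certificates/step-root-ca.crt
sudo update-ca-certificates
curl https://some.internal.site/   # should now succeed without -k
```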
### ClusterIssuer
``` bash
cat .step/certs/root_ca.crt | base64 -w0
```
use above output under `spec.acme.caBundle`
``` yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: dev-step-issuer
spec:
acme:
email: ${SOME_EMAIL}
server: https://${CA_DOMAIN}/acme/dev/directory
privateKeySecretRef:
name: dev-step-issuer-account-key
caBundle: ${CA_ROOT_PEM}
solvers:
- selector: {}
http01:
ingress:
class: traefik
```
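After applying it (the filename here is arbitrary), cert-manager should report the issuer as Ready once it can reach the ACME directory:
``` bash
kubectl apply -f clusterissuer.yml
kubectl get clusterissuer dev-step-issuer
```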
## FAQs
> Why didn't you containerize this?
Because I have multiple kubernetes clusters.
Running this on a separate machine means that I don't have to install a `rootCA.pem` for each cluster instance.
You might say "yeah, but you can specify the rootCA as an input to step CA"--but who wants to manage key files and
set up a CA for each kubernetes install?
So yeah, maybe I'll do it in the future.

View File

@@ -30,8 +30,8 @@ Concerns regarding redundant power or battery backup for the solution are outside
=== "UGREEN DXP6800 PRO"
- `1x` QNAP TS-673A 6-Bay `$1199 USD`
- [QNAP Product Page](https://www.qnap.com/en-us/product/ts-673a)
- `1x` UGREEN NASync DXP6800 Pro `$1199 USD`
- [UGREEN Product Page](https://nas.ugreen.com/products/ugreen-nasync-dxp6800-pro-nas-storage)
=== "TeamGroup 32GB DDR5 SODIMM"

View File

@@ -0,0 +1,38 @@
# Why get a NAS
## Why Relying on Portable Drives Can Be a Risky Move for Your Business
Being reliant on portable storage media to keep your business running is dangerous. So what other options are there?
Portable drives are not only expensive per terabyte, but drive failures can be catastrophic. Imagine spending a weekend shooting footage for a one-time event. You fly home, start editing, and a few days later — the drive containing all that footage dies. Talk about a tragedy! Delivering a partial product isn't an option, and now you've lost both time and money. Maybe you can piece something together, but that's not the kind of product you strive to deliver. In the worst case, a single drive failure can damage not just customer trust, but your brand.
A few friends in photo and videography startups reached out to me to share their workflows. As you might expect, during events they would shoot footage, then offload it onto a fresh 4TB flash drive before shooting more. At the end of the event, they'd head home and spend considerable time editing directly from that drive. Once finished, they'd deliver the product — and then either delete the footage or buy a new drive for the next event. Honestly, who doesn't have a pile of multi-terabyte drives lying around with who-knows-what on them?
Boy, did I have some news for them! I had a few recommendations that could not only improve their workflow, but also prevent potential catastrophes from hurting their business. After some discussion, they convinced me to start a business to help small and medium-sized companies solve their storage and workflow challenges.
---
### So, what can we do to prevent disasters like drive failures?
It turns out that local storage solutions such as Network Attached Storage (NAS) have come down significantly in price. Consider that an average 4TB flash drive — with no redundancy — costs around $250. To safely store a critical project, you'd need at least two drives ($500 total), not to mention the extra time spent duplicating your data for backup.
Now compare that with our recommended solution: **56TB of usable, redundant storage** in a device that can be remotely accessed for your convenience. This is made possible with a Network Attached Storage unit — or NAS — a type of specialized computer designed for reliable storage and accessibility. Traditionally, NAS systems have been expensive or targeted toward enthusiasts due to their technical nature.
Our solution offers a **56TB NAS (84TB physical storage)** for under **$40 per terabyte**, shipped and ready to go at around **$3,200**. This supports roughly **thirteen 4TB projects** before you'd need to compress, delete, or expand your storage. (We also offer larger and custom configurations to meet your specific needs.)
---
### Additional Benefits
- Edit footage remotely over your local network
- Back up your entire laptop or workstation
- Securely share files with clients or editors — no need for cloud subscriptions
- Compress and archive data efficiently
- Host websites or services directly from your NAS
- *(Optional)* AI photo recognition to streamline your workflow
---
Thanks to those early conversations, we founded **RSKIO Limited** — a company dedicated to helping creators and small businesses safeguard their data, simplify their workflows, and scale their storage with confidence.
📩 **Reach out to [lost@rskio.com](mailto:lost@rskio.com)** for an estimate, custom solutions, or answers to your storage questions.

mkdocs/scripts/update.sh Executable file
View File

@@ -0,0 +1,17 @@
#!/bin/bash
set -e
repos=("oxpasta" "r53-ddns")
user="rskntroot"
dest_dir="docs/code"
mkdir -p "$dest_dir"
for repo in "${repos[@]}"; do
url="https://raw.githubusercontent.com/$user/$repo/main/README.md"
dest_file="$dest_dir/$repo.md"
echo "Fetching $url -> $dest_file"
curl -sSfL "$url" -o "$dest_file"
done