Compare commits
No commits in common. "51f55e8d07adf159579f25ce70c68914939ae96e" and "9868c823a69225aec6fc7a2e8ffd9836dfe9e02a" have entirely different histories.
51f55e8d07 ... 9868c823a6

22 README.md
@@ -3,17 +3,13 @@ A monorepo for all my projects and dotfiles. Hosted on [my Gitea](https://gitea.

## Map of Contents

| Project | Summary |
|:----------------------:|:-------:|
| [dotfiles](/dotfiles/) | Configuration and documentation for my PCs. |
| [homelab](/homelab/) | Configuration and documentation for my homelab. |
| [projects](/projects/) | Self-contained projects in a variety of scripting and programming languages. |
| [sites](/sites/) | Static site files. |
| [.gitea/workflows](/.gitea/workflows/) & [.github/workflows](/.github/workflows/) | GitHub Actions workflows running on [Gitea](https://gitea.jafner.tools/Jafner/Jafner.net/actions) and [GitHub](https://github.com/Jafner/Jafner.net/actions), respectively. |
| [.sops](/.sops/) | Scripts and documentation implementing [sops](https://github.com/getsops/sops) to securely store secrets in this repo. |

| Project | Summary | Path |
|:-------------------:|:-------:|:----:|
| homelab | Configuration and documentation for my homelab. | [`homelab/`](homelab/) |
| nix | Nix flake defining my PC & k3s cluster configurations. | [`nix/`](nix/) |
| Jafner.dev | Hugo static site configuration files for my [Jafner.dev](https://jafner.dev) blog. | [`blog/`](blog/) |
| razer-bat | Indicate Razer mouse battery level with the RGB LEDs on the dock. Less metal than it sounds. | [`projects/razer-bat/`](projects/razer-bat/) |
| 5etools-docker | Docker image to make self-hosting 5eTools a little bit better. | [`projects/5etools-docker/`](projects/5etools-docker/) |
| 5eHomebrew | 5eTools-compatible homebrew content by me. | [`projects/5ehomebrew/`](projects/5ehomebrew/) |
| archive | Old, abandoned, unmaintained projects. Fun to look back at. | [`archive/`](archive/) |

## LICENSE: MIT License

> See [LICENSE](/LICENSE) for details.

## Contributing

Presently this project is a one-man operation with no external contributors. All contributions will be addressed in good faith on a best-effort basis.

2 archive/5etools_on_oracle_cloud/.env Normal file
@@ -0,0 +1,2 @@
DOMAIN=
EMAIL=

77 archive/5etools_on_oracle_cloud/README.md Normal file
@@ -0,0 +1,77 @@
# Project No Longer Maintained
> There are better ways to do this.
> I recommend looking for general-purpose guides to self-hosting with Docker and Traefik. For 5eTools specifically, I do maintain [5etools-docker](https://github.com/Jafner/5etools-docker).

This guide will walk you through setting up an Oracle Cloud VM to host a personal instance of 5eTools using your own domain.

# Before Getting Started

1. You will need a domain! I used NameCheap to purchase my `.tools` domain for $7 per year.
2. You will need an [Oracle Cloud](https://www.oracle.com/cloud/) account. This will be used to create an Always Free cloud virtual machine, which will host the services we need. You will need to attach a credit card to your account. I used a [Privacy.com](https://privacy.com/) temporary card to ensure I wouldn't be charged accidentally at the end of the 30-day trial. The services used in this guide are under Oracle's Always Free category, so unless you exceed the 10TB monthly traffic allotment, you won't be charged.
3. You will need a [Cloudflare](https://www.cloudflare.com/) account. This will be used to manage the domain name after purchase. You will need to migrate your domain from the registrar you bought the domain from to Cloudflare.
4. You will need an SSH terminal (I use Tabby, formerly Terminus). This will be used to log into and manage the Oracle Cloud virtual machine.

# Walkthrough

## Purchase a domain name from a domain registrar.
I used NameCheap, which offered my `.tools` domain for $7 per year. Some top-level domains (TLDs) can be purchased for as little as $2-3 per year (such as `.xyz`, `.one`, or `.website`). Warning: these are usually 1-year special prices, and the price will increase significantly after the first year.

## Migrate your domain to Cloudflare.
The Cloudflare docs have a [domain transfer guide](https://developers.cloudflare.com/registrar/domain-transfers/transfer-to-cloudflare), which addresses how to do this. This process may take up to 24 hours. Cloudflare won't like that you are importing the domain without any DNS records, but that's okay.

## Create your Oracle Cloud virtual machine.
If you've already created your Oracle Cloud account, go to the [Oracle Cloud portal](https://cloud.oracle.com). Then under the "Launch Resources" section, click "Create a VM instance". Most of the default settings are fine. Click "Change Image" and uncheck Oracle Linux, then check Canonical Ubuntu, then click "Select Image". Under "Add SSH keys", download the private key for the instance by clicking the "Save Private Key" button. Finally, click "Create". You will need to wait a while for the instance to come online.

## Move your SSH key.
Move your downloaded SSH key to your `.ssh` folder with `mkdir ~/.ssh/` and then `mv ~/Downloads/ssh-key-*.key ~/.ssh/`. If you already have a process for SSH key management, feel free to ignore this.

## SSH into your VM.
Once your Oracle Cloud VM is provisioned (created), SSH into it. Get its public IP address from the "Instance Access" section of the instance's details page. Then run `ssh -i ~/.ssh/ssh-key-<YOUR-KEY>.key ubuntu@<YOUR-INSTANCE-IP>`, replacing `<YOUR-KEY>` and `<YOUR-INSTANCE-IP>` with your key name and instance IP. (Tip: you can use tab to auto-complete the filename of the key.) Then run the command to connect to the instance.
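
A minimal connection sketch, keeping the placeholders from the step above (note that `ssh` refuses keys with loose permissions, so the `chmod` is worth running first):

```bash
# ssh will reject a private key that other users can read
chmod 600 ~/.ssh/ssh-key-<YOUR-KEY>.key
# the Canonical Ubuntu image's default login user is "ubuntu"
ssh -i ~/.ssh/ssh-key-<YOUR-KEY>.key ubuntu@<YOUR-INSTANCE-IP>
```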

## Set up the VM with all the software we need.
Now that we're in a terminal on the VM, you can copy-paste commands to run. Either run the following one-liner (the individual commands strung together), or run each command one at a time by following the step-by-step instructions below:

`sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get -y install git docker docker-compose && sudo systemctl enable docker && sudo usermod -aG docker $USER && logout`

### Update the system.
With `sudo apt-get update && sudo apt-get upgrade -y`.

### Install Git, Docker, and Docker Compose.
With `sudo apt-get -y install git docker docker-compose`.

### Enable the docker service.
With `sudo systemctl enable docker`.

### Add your user to the docker group.
With `sudo usermod -aG docker $USER`.

### Log out.
With `logout`.
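
Logging out and back in is what makes the new group membership take effect. A quick sanity check after reconnecting (a sketch; any container-listing command works):

```bash
# should print an (empty) container table without needing sudo
docker ps
```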

## Configure the VM firewall.
On the "Compute -> Instances -> Instance Details" page, under "Instance Information -> Primary VNIC -> Subnet", click the link to the subnet's configuration page, then click on the default security list. Click "Add Ingress Rules", then "+ Another Ingress Rule", and fill out your ingress rules like this:

![ingress_rules.png](https://github.com/jafner/cloud_tools/blob/main/ingress_rules.PNG?raw=true)

This will allow incoming traffic from the internet on ports 80 and 443 (the ports used by HTTP and HTTPS, respectively).
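
Note that the security list only controls Oracle's network-level firewall. On some Oracle-provided Ubuntu images the host itself also ships restrictive iptables rules, so if the ports still appear closed after this step, a host-side exception along these lines may be needed (a sketch; verify against your image's own firewall setup):

```bash
# allow inbound HTTP/HTTPS on the host firewall as well
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
```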

## Configure the Cloudflare DNS records.
After your domain has been transferred to Cloudflare, log into the [Cloudflare dashboard](https://dash.cloudflare.com) and click on your domain. Then click on the DNS button at the top, and click "Add record" with the following information:

* Type: A
* Name: 5e
* IPv4 Address: `<YOUR-INSTANCE-IP>`
* TTL: Auto
* Proxy status: DNS only

This will route `5e.your.domain` to `<YOUR-INSTANCE-IP>`. You can change the name to whatever you prefer, or use `@` to use the root domain (just `your.domain`) instead. I found that using Cloudflare's proxy interferes with acquiring certificates.
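
The same record can also be created from the shell via Cloudflare's v4 API. A sketch, where `$ZONE_ID` and `$CF_API_TOKEN` are placeholders you'd fill in from your Cloudflare dashboard (`ttl` of 1 means "Auto", and `proxied: false` is the "DNS only" setting):

```bash
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"5e","content":"<YOUR-INSTANCE-IP>","ttl":1,"proxied":false}'
```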

## Log back into your VM and set up the services.
Clone this repository onto the host with `git clone https://github.com/jafner/cloud_tools.git`, then move into the directory with `cd cloud_tools/`. Edit the file `.env` with your domain (including subdomain) and email. For example:

```
DOMAIN=5e.your.domain
EMAIL=youremail@gmail.com
```

Make the setup script executable, then run it with `chmod +x setup.sh && ./setup.sh`.
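
Once the stack is up, a quick check that everything came online (a sketch, assuming the DNS record has propagated):

```bash
docker ps                      # traefik and 5etools should both show as running
curl -I https://5e.your.domain # expect a 200 once the certificate has been issued
```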

29 archive/5etools_on_oracle_cloud/docker-compose.yml Normal file
@@ -0,0 +1,29 @@
version: "3"
services:
  traefik:
    container_name: traefik
    image: traefik:latest
    networks:
      - web
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json

  5etools:
    container_name: 5etools
    image: jafner/5etools-docker
    volumes:
      - ./htdocs:/usr/local/apache2/htdocs
    networks:
      - web
    labels:
      - traefik.http.routers.5etools.rule=Host(`$DOMAIN`)
      - traefik.http.routers.5etools.tls.certresolver=lets-encrypt

networks:
  web:
    external: true

BIN archive/5etools_on_oracle_cloud/ingress_rules.PNG Normal file
Binary file not shown. (Size: 31 KiB)

7 archive/5etools_on_oracle_cloud/setup.sh Normal file
@@ -0,0 +1,7 @@
#!/bin/bash
# Load DOMAIN and EMAIL from .env so the sed below sees them
# (docker-compose reads .env on its own, but this shell does not).
source .env
docker network create web
sed -i "s/email = \"\"/email = \"$EMAIL\"/g" traefik.toml
mkdir -p ./htdocs/download
touch acme.json
chmod 600 acme.json   # Let's Encrypt storage must not be world-readable
docker-compose up -d

18 archive/5etools_on_oracle_cloud/traefik.toml Normal file
@@ -0,0 +1,18 @@
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[certificatesResolvers.lets-encrypt.acme]
  email = ""
  storage = "acme.json"
  caServer = "https://acme-v02.api.letsencrypt.org/directory"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

[providers.docker]
  watch = true
  network = "web"

19 archive/PyClipIt/UsefulCommands.md Normal file
@@ -0,0 +1,19 @@
# Assign test file names/paths
SOURCE_FILE=$(realpath ~/Git/Clip/TestClips/"x264Source.mkv") && echo "SOURCE_FILE: $SOURCE_FILE"
TRANSCODED_FILE="$(realpath ~/Git/Clip/TestClips)/TRANSCODED.mp4" && echo "TRANSCODED_FILE: $TRANSCODED_FILE"

# TRANSCODE $SOURCE_FILE into 'TRANSCODED.mp4'
ffmpeg -hide_banner -i "$SOURCE_FILE" -copyts -copytb 0 -map 0 -bf 0 -c:v libx264 -crf 23 -preset slow -video_track_timescale 1000 -g 60 -keyint_min 60 -bsf:v setts=ts=STARTPTS+N/TB_OUT/60 -c:a copy "$TRANSCODED_FILE"

# GET KEYFRAMES FROM $SOURCE_FILE
KEYFRAMES=( $(ffprobe -hide_banner -loglevel error -select_streams v:0 -show_entries packet=pts,flags -of csv=print_section=0 "$SOURCE_FILE" | grep K | cut -d',' -f 1) ) && echo "${KEYFRAMES[@]}"
ffprobe -hide_banner -loglevel error -select_streams v:0 -show_entries packet=pts,flags -of csv=print_section=0 "$SOURCE_FILE"

# GET KEYFRAMES FROM $TRANSCODED_FILE
KEYFRAMES=( $(ffprobe -hide_banner -loglevel error -select_streams v:0 -show_entries packet=pts,flags -of csv=print_section=0 "$TRANSCODED_FILE" | grep K | cut -d',' -f 1) ) && echo "${KEYFRAMES[@]}"
ffprobe -hide_banner -loglevel error -select_streams v:0 -show_entries packet=pts,flags -of csv=print_section=0 "$TRANSCODED_FILE"

# Compare keyframes between $SOURCE_FILE and $TRANSCODED_FILE
sdiff <(ffprobe -hide_banner -loglevel error -select_streams v:0 -show_entries packet=pts,flags -of csv=print_section=0 "$SOURCE_FILE") <(ffprobe -hide_banner -loglevel error -select_streams v:0 -show_entries packet=pts,flags -of csv=print_section=0 "$TRANSCODED_FILE")

https://code.videolan.org/videolan/x264/-/blob/master/common/base.c#L489

7 archive/PyClipIt/clip.code-workspace Normal file
@@ -0,0 +1,7 @@
{
    "folders": [
        {
            "path": "."
        }
    ]
}

262 archive/PyClipIt/main.py Executable file
@@ -0,0 +1,262 @@
import tkinter as tk # https://docs.python.org/3/library/tkinter.html
from tkinter import filedialog, ttk
import av
import subprocess
from pathlib import Path
import time
import datetime
from PIL import Image, ImageTk, ImageOps
from RangeSlider.RangeSlider import RangeSliderH
from ffpyplayer.player import MediaPlayer
from probe import get_keyframes_list, get_keyframe_interval, get_video_duration

class VideoClipExtractor:
    def __init__(self, master):
        # Initialize variables
        self.video_duration = int() # milliseconds
        self.video_path = Path() # Path object
        self.video_keyframes = list() # list of ints (keyframe pts in milliseconds)
        self.clip_start = tk.IntVar(value = 0) # milliseconds
        self.clip_end = tk.IntVar(value = 1) # milliseconds

        self.preview_image_timestamp = tk.IntVar(value = 0) # milliseconds

        self.debug_checkvar = tk.IntVar() # Checkbox variable

        self.background_color = "#BBBBBB"
        self.text_color = "#000000"
        self.preview_background_color = "#2222FF"

        # Set up master UI
        self.master = master
        self.master.title("Video Clip Extractor")
        self.master.configure(background=self.background_color)
        self.master.resizable(False, False)
        self.master.geometry("")
        self.window_max_width = self.master.winfo_screenwidth()*0.75
        self.window_max_height = self.master.winfo_screenheight()*0.75
        self.preview_width = 1280
        self.preview_height = 720
        self.preview_image = Image.new("RGB", (self.preview_width, self.preview_height), color=self.background_color)
        self.preview_image_tk = ImageTk.PhotoImage(self.preview_image)

        self.timeline_width = self.preview_width
        self.timeline_height = 64

        self.interface_width = self.preview_width
        self.interface_height = 200

        # Initialize frames, buttons and labels
        self.preview_frame = tk.Frame(self.master, width=self.preview_width, height=self.preview_height, bg=self.preview_background_color, borderwidth=0, bd=0)
        self.timeline_frame = tk.Frame(self.master, width=self.timeline_width, height=self.timeline_height, bg=self.background_color)
        self.interface_pane = tk.Frame(self.master, width=self.interface_width, height=self.interface_height, bg=self.background_color)
        self.buttons_pane = tk.Frame(self.interface_pane, bg=self.background_color)
        self.info_pane = tk.Frame(self.interface_pane, bg=self.background_color)

        self.preview_canvas = tk.Canvas(self.preview_frame, width=self.preview_width, height=self.preview_height, bg=self.preview_background_color, borderwidth=0, bd=0)
        self.browse_button = tk.Button(self.buttons_pane, text="Browse...", command=self.browse_video_file, background=self.background_color, foreground=self.text_color)
        self.extract_button = tk.Button(self.buttons_pane, text="Extract Clip", command=self.extract_clip, background=self.background_color, foreground=self.text_color)
        self.debug_checkbutton = tk.Checkbutton(self.buttons_pane, text="Print ffmpeg to console", variable=self.debug_checkvar, background=self.background_color, foreground=self.text_color)
        self.preview_button = tk.Button(self.buttons_pane, text="Preview Clip", command=self.ffplaySegment, background=self.background_color, foreground=self.text_color)
        self.video_path_label = tk.Label(self.info_pane, text=f"Source video: {self.video_path}", background=self.background_color, foreground=self.text_color)
        self.clip_start_label = tk.Label(self.timeline_frame, text=f"{self.timeStr(self.clip_start.get())}", background=self.background_color, foreground=self.text_color)
        self.clip_end_label = tk.Label(self.timeline_frame, text=f"{self.timeStr(self.clip_end.get())}", background=self.background_color, foreground=self.text_color)
        self.video_duration_label = tk.Label(self.info_pane, text=f"Video duration: {self.timeStr(self.video_duration)}", background=self.background_color, foreground=self.text_color)
        self.timeline_canvas = tk.Canvas(self.timeline_frame, width=self.preview_width, height=self.timeline_height, background=self.background_color)
        self.timeline = RangeSliderH(
            self.timeline_canvas,
            [self.clip_start, self.clip_end],
            max_val=max(self.video_duration,1),
            show_value=False,
            bgColor=self.background_color,
            Width=self.timeline_width,
            Height=self.timeline_height
        )
        self.preview_label = tk.Label(self.preview_frame, image=self.preview_image_tk)

        print(f"Widget widths (after pack):\n\
            self.clip_start_label.winfo_width(): {self.clip_start_label.winfo_width()}\n\
            self.clip_end_label.winfo_width(): {self.clip_end_label.winfo_width()}\n\
            self.timeline.winfo_width(): {self.timeline.winfo_width()}\n\
        ")

        # Arrange frames inside master window
        self.preview_frame.pack(side='top', fill='both', expand=True, padx=0, pady=0)
        self.timeline_frame.pack(fill='x', expand=True, padx=20, pady=20)
        self.interface_pane.pack(side='bottom', fill='both', expand=True, padx=10, pady=10)
        self.buttons_pane.pack(side='left')
        self.info_pane.pack(side='right')

        # Draw elements inside frames
        self.browse_button.pack(side='top')
        self.extract_button.pack(side='top')
        self.preview_button.pack(side='top')
        self.debug_checkbutton.pack(side='top')
        self.video_path_label.pack(side='top')
        self.clip_start_label.pack(side='left')
        self.clip_end_label.pack(side='right')
        self.video_duration_label.pack(side='top')
        self.preview_label.pack(fill='both', expand=True)

        # Draw timeline canvas and timeline slider
        self.timeline_canvas.pack(fill="both", expand=True)
        self.timeline.pack(fill="both", expand=True)

        print(f"Widget widths (after pack):\n\
            self.clip_start_label.winfo_width(): {self.clip_start_label.winfo_width()}\n\
            self.clip_end_label.winfo_width(): {self.clip_end_label.winfo_width()}\n\
            self.timeline.winfo_width(): {self.timeline.winfo_width()}\n\
        ")

    def getThumbnail(self):
        with av.open(str(self.video_path)) as container:
            time_ms = self.clip_start.get() # This works as long as container has a timebase of 1/1000
            container.seek(time_ms, stream=container.streams.video[0])
            time.sleep(0.1)
            frame = next(container.decode(video=0)) # Get the frame object for the seeked timestamp
            if self.preview_image_timestamp != time_ms:
                self.preview_image_tk = ImageTk.PhotoImage(frame.to_image(width=self.preview_width, height=self.preview_height)) # Convert the frame object to an image
                self.preview_label.config(image=self.preview_image_tk)
                self.preview_image_timestamp = time_ms

    def ffplaySegment(self):
        ffplay_command = [
            "ffplay",
            "-hide_banner",
            "-autoexit",
            "-volume", "10",
            "-window_title", f"{self.timeStr(self.clip_start.get())} to {self.timeStr(self.clip_end.get())}",
            "-x", "1280",
            "-y", "720",
            "-ss", f"{self.clip_start.get()}ms",
            "-i", str(self.video_path),
            "-t", f"{self.clip_end.get() - self.clip_start.get()}ms"
        ]
        print("Playing video. Press \"q\" or \"Esc\" to exit.")
        print("")
        subprocess.run(ffplay_command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

    def redrawTimeline(self):
        self.timeline.forget()
        step_size = get_keyframe_interval(self.video_keyframes)
        step_marker = False
        if len(self.video_keyframes) < self.timeline_width/4 and step_size > 0:
            step_marker = True
        self.timeline = RangeSliderH(
            self.timeline_canvas,
            [self.clip_start, self.clip_end],
            max_val=max(self.video_duration,1),
            step_marker=step_marker,
            step_size=step_size,
            show_value=False,
            bgColor=self.background_color,
            Width=self.timeline_width,
            Height=self.timeline_height
        )
        self.timeline.pack()
        #self.preview_canvas.create_text(self.preview_canvas.winfo_width() // 2, self.preview_canvas.winfo_height() // 2, text=f"Loading video...", fill="black", font=("Helvetica", 48))

    def timeStr(self, milliseconds: int): # Takes milliseconds int or float and returns a string in the preferred format
        h = int(milliseconds/3600000) # Get the hours component
        m = int((milliseconds%3600000)/60000) # Get the minutes component
        s = int((milliseconds%60000)/1000) # Get the seconds component
        ms = int(milliseconds%1000) # Get the milliseconds component
        if milliseconds < 60000:
            return f"{s}.{ms:03}"
        elif milliseconds < 3600000:
            return f"{m}:{s:02}.{ms:03}"
        else:
            return f"{h}:{m:02}:{s:02}.{ms:03}"

    def clip_selector(self):
        def updateClipRange(var, index, mode):
            clip_end = self.clip_end.get()
            nearest_keyframe_start = self.nearest_keyframe(self.clip_start.get(), self.video_keyframes)
            # Add a specific check to make sure that the clip end is not changing to be equal to or less than the clip start
            if clip_end <= nearest_keyframe_start:
                clip_end = nearest_keyframe_start + self.timeline.__dict__['step_size']
            self.clip_start_label.config(text=f"{self.timeStr(nearest_keyframe_start)}")
            self.clip_end_label.config(text=f"{self.timeStr(clip_end)}")
            self.timeline.forceValues([nearest_keyframe_start, clip_end])
            self.getThumbnail()
        if str(self.video_path) == "()":
            return False
        self.clip_start.trace_add("write", callback=updateClipRange) # This actually triggers on both start and end

    def nearest_keyframe(self, test_pts: int, valid_pts: list):
        return(min(valid_pts, key=lambda x:abs(x-float(test_pts))))

    def browse_video_file(self):
        video_path = filedialog.askopenfilename(
            initialdir="~/Git/Clip/TestClips/",
            title="Select file",
            filetypes=(("mp4/mkv files", '*.mp4 *.mkv'), ("all files", "*.*"))
        )
        print(f"video path: \"{video_path}\" (type: {type(video_path)})")
        if not Path(str(video_path)).is_file():
            return
        video_keyframes = get_keyframes_list(video_path)
        while video_keyframes == None:
            print(f"No keyframes found in {video_path}. Choose a different video file.")
            video_path = filedialog.askopenfilename(
                initialdir="~/Git/Clip/TestClips/",
                title="Select file",
                filetypes=(("mp4/mkv files", '*.mp4 *.mkv'), ("all files", "*.*"))
            )
            video_keyframes = get_keyframes_list(video_path) # Re-probe the new selection, otherwise this loop never exits
        # Once we have a video file, we need to set the Source video, Clip start, Clip end, and Video duration values and redraw the GUI.
        self.video_path = Path(video_path)
        self.video_duration = get_video_duration(video_path)
        self.video_keyframes = video_keyframes
        self.clip_start.set(min(self.video_keyframes))
        self.clip_end.set(max(self.video_keyframes))
        self.clip_start_label.config(text=f"{self.timeStr(self.nearest_keyframe(self.clip_start.get(), self.video_keyframes))}")
        self.clip_end_label.config(text=f"{self.timeStr(self.clip_end.get())}")

        self.getThumbnail()

        self.video_path_label.config(text=f"Source video: {self.video_path}")
        self.video_duration_label.config(text=f"Video duration: {self.timeStr(self.video_duration)}")
        self.redrawTimeline()
        self.clip_selector()

    def extract_clip(self):
        video_path = self.video_path
        file_extension = video_path.suffix
        clip_start = self.clip_start.get()
        clip_end = self.clip_end.get()

        output_path = Path(
            filedialog.asksaveasfilename(
                initialdir=video_path.parent,
                initialfile=str(
                    f"[Clip] {video_path.stem} ({datetime.timedelta(milliseconds=clip_start)}-{datetime.timedelta(milliseconds=clip_end)}){file_extension}"),
                title="Select output file",
                defaultextension=file_extension
            )
        )
        if output_path == Path("."):
            return False
        ffmpeg_command = [
            "ffmpeg",
            "-y", # The output path prompt asks for confirmation before overwriting
            "-hide_banner",
            "-i", str(video_path),
            "-ss", f"{clip_start}ms",
            "-to", f"{clip_end}ms",
            "-map", "0",
            "-c:v", "copy",
            "-c:a", "copy",
            str(output_path),
        ]
        if self.debug_checkvar.get() == 1:
            subprocess.run(ffmpeg_command)
        else:
            subprocess.run(ffmpeg_command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        print(f"Finished! Saved to {output_path}")

root = tk.Tk()
app = VideoClipExtractor(root)
root.mainloop()
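
To try the tool, something along these lines should work (a sketch; it assumes the PyPI packages from `requirements.txt`, a Tk-enabled Python, and ffmpeg/ffprobe/ffplay on the PATH):

```bash
pip install -r requirements.txt
python main.py
```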

35 archive/PyClipIt/preview.py Normal file
@@ -0,0 +1,35 @@
# Version using ffPyPlayer
from pathlib import Path
import tkinter as tk
import time
from ffpyplayer.player import MediaPlayer

def ffplaySegment(file: Path, start: int, end: int):
    print(f"Playing {file} from {start}ms to {end}ms")
    file = str(file) # Must be string
    seek_to = float(start)/1000
    play_for = float(end - start)/1000
    x = int(1280)
    y = int(720)
    volume = float(0.2) # Float 0.0 to 1.0
    # Must be dict
    ff_opts = {
        "paused": False, # Bool
        "t": play_for, # Float seconds
        "ss": seek_to, # Float seconds
        "x": x,
        "y": y,
        "volume": volume
    }
    val = ''
    player = MediaPlayer(file, ff_opts=ff_opts)
    while val != 'eof':
        frame, val = player.get_frame()
        print(f"frame: (type: {type(frame)})", end=', ')
        if val != 'eof' and frame is not None:
            img, t = frame
            print(f"img: (type: {type(img)})", end=', ')
            print(f"t: (type: {type(t)})")
            # Use the create_image method of the canvas widget to draw the image to the canvas.

43 archive/PyClipIt/probe.py Normal file
@@ -0,0 +1,43 @@
import subprocess
from pathlib import Path
import numpy as np
import ffmpeg

# Get a list of keyframes by pts (milliseconds from start) from a video file.
def get_keyframes_list(video_path):
    ffprobe_command = [
        "ffprobe",
        "-hide_banner",
        "-loglevel", "error",
        "-skip_frame", "nokey",
        "-select_streams", "v:0",
        "-show_entries", "packet=pts,flags",
        "-of", "csv=print_section=0",
        video_path
    ]

    ffprobe_output = subprocess.run(ffprobe_command, capture_output=True, text=True)
    # Keep only keyframe packets (the "K" flag) and take their pts values.
    # (Building the list directly also avoids an IndexError when ffprobe finds no keyframes at all.)
    keyframes = [int(line.split(",")[0]) for line in ffprobe_output.stdout.splitlines() if "K" in line]
    if len(keyframes) <= 1:
        # Pop up a warning if there are no keyframes.
        return(None)
    return(keyframes)

def get_keyframe_interval(keyframes_list: list): # Takes a list of ints representing keyframe pts (milliseconds from start) and returns either the keyframe interval in milliseconds, or 0 if the keyframe intervals are not all the same.
    intervals = np.diff(keyframes_list) # Array of keyframe intervals in milliseconds (kept as an array so the elementwise comparison below works).
    if np.all(intervals == intervals[0]):
        return(int(intervals[0]))
    else:
        return(0)

def get_keyframe_intervals(keyframes_list):
    # Return a list of keyframe intervals in milliseconds.
    return(list(np.diff(keyframes_list)))

def keyframe_intervals_are_clean(keyframe_intervals):
    # Return whether the keyframe intervals are all the same.
    keyframe_intervals = np.asarray(keyframe_intervals)
    return(np.all(keyframe_intervals == keyframe_intervals[0]))

# Get the duration of a video file in milliseconds (useful for ffmpeg pts).
def get_video_duration(video_path):
    return int(float(ffmpeg.probe(video_path)["format"]["duration"])*1000)

6 archive/PyClipIt/requirements.txt Normal file
@@ -0,0 +1,6 @@
av==12.0.0
ffmpeg-python==0.2.0
ffpyplayer==4.5.1
numpy==1.26.4
pillow==10.3.0
RangeSlider==2023.7.2

21 archive/PyClipIt/test.py Normal file
@@ -0,0 +1,21 @@
import av

with av.open("TestClips/x264Source.mkv") as container:
    frame_num = 622
    time_base = container.streams.video[0].time_base
    framerate = container.streams.video[0].average_rate
    timestamp = frame_num/framerate
    rounded_pts = round((frame_num / framerate) / time_base)
    print(f"Variables:\n\
        frame_num: {frame_num} (type: {type(frame_num)}\n\
        time_base: {time_base} (type: {type(time_base)}\n\
        timestamp: {timestamp} (type: {type(timestamp)}\n\
        frame_num / framerate: {frame_num / framerate}\n\
        frame_num / time_base: {frame_num / time_base}\n\
        (frame_num / framerate) / time_base: {(frame_num / framerate) / time_base}\n\
        rounded_pts = {rounded_pts}\
    ")

    container.seek(rounded_pts, backward=True, stream=container.streams.video[0])
    frame = next(container.decode(video=0))
    frame.to_image().save("TestClips/Thumbnail3.jpg".format(frame.pts), quality=80)

3 archive/docker_config/arr/.env Normal file
@@ -0,0 +1,3 @@
DOCKER_DATA=/home/joey/docker_data/arr
DOWNLOAD_DIR=/mnt/torrenting/NZB
INCOMPLETE_DOWNLOAD_DIR=/mnt/torrenting/NZB_incomplete

82 archive/docker_config/arr/docker-compose.yml Normal file
@@ -0,0 +1,82 @@
version: "3"
services:
  radarr:
    image: linuxserver/radarr
    container_name: radarr
    networks:
      - web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - /mnt/nas/Video/Movies:/movies
      - "${DOCKER_DATA}/radarr_config:/config"
      - "${DOWNLOAD_DIR}:/downloads"
    labels:
      - traefik.http.routers.radarr.rule=Host(`radarr.jafner.net`)
      - traefik.http.routers.radarr.tls.certresolver=lets-encrypt
      - traefik.http.services.radarr.loadbalancer.server.port=7878
      - traefik.http.routers.radarr.middlewares=lan-only@file
  sonarr:
    image: linuxserver/sonarr
    container_name: sonarr
    networks:
      - web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - /mnt/nas/Video/Shows:/shows
      - "${DOCKER_DATA}/sonarr_config:/config"
      - "${DOWNLOAD_DIR}:/downloads"
    labels:
      - traefik.http.routers.sonarr.rule=Host(`sonarr.jafner.net`)
      - traefik.http.routers.sonarr.tls.certresolver=lets-encrypt
      - traefik.http.services.sonarr.loadbalancer.server.port=8989
      - traefik.http.routers.sonarr.middlewares=lan-only@file

  nzbhydra2:
    image: linuxserver/nzbhydra2
    container_name: nzbhydra2
    networks:
      - web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - "${DOCKER_DATA}/nzbhydra2_config:/config"
      - "${DOWNLOAD_DIR}:/downloads"
    labels:
      - traefik.http.routers.nzbhydra2.rule=Host(`nzbhydra.jafner.net`)
      - traefik.http.routers.nzbhydra2.tls.certresolver=lets-encrypt
      - traefik.http.services.nzbhydra2.loadbalancer.server.port=5076
      - traefik.http.routers.nzbhydra2.middlewares=lan-only@file

  sabnzbd:
    image: linuxserver/sabnzbd
    container_name: sabnzbd
    networks:
      - web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    ports:
      - 8085:8080
    volumes:
      - "${DOCKER_DATA}/sabnzbd_config:/config"
      - "${DOWNLOAD_DIR}:/downloads"
      - "${INCOMPLETE_DOWNLOAD_DIR}:/incomplete-downloads"
    labels:
      - traefik.http.routers.sabnzbd.rule=Host(`sabnzbd.jafner.net`)
      - traefik.http.routers.sabnzbd.tls.certresolver=lets-encrypt
      - traefik.http.services.sabnzbd.loadbalancer.server.port=8080
      - traefik.http.routers.sabnzbd.middlewares=lan-only@file

networks:
  web:
    external: true
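
These `docker_config` stacks all attach to a pre-existing external Docker network named `web` (shared with a Traefik reverse proxy, per the labels above). A minimal bring-up sketch, assuming the stack's `.env` file has been filled in:

```bash
docker network create web   # once, if the shared network doesn't exist yet
docker-compose up -d        # run from the directory containing the compose file
```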

1 archive/docker_config/calibre-web/.env Normal file
@@ -0,0 +1 @@
LIBRARY_DIR=/mnt/nas/Ebooks/Calibre

40 archive/docker_config/calibre-web/docker-compose.yml Normal file
@@ -0,0 +1,40 @@
version: '3'
services:
  calibre-web-rpg:
    image: linuxserver/calibre-web
    container_name: calibre-web-rpg
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - calibre-web-rpg_data:/config
      - /mnt/calibre/rpg:/books
    labels:
      - traefik.http.routers.calibre-rpg.rule=Host(`rpg.jafner.net`)
      - traefik.http.routers.calibre-rpg.tls.certresolver=lets-encrypt
    networks:
      - web

  calibre-web-sff:
    image: linuxserver/calibre-web
    container_name: calibre-web-sff
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - calibre-web-sff_data:/config
      - /mnt/calibre/sff:/books
    labels:
      - traefik.http.routers.calibre.rule=Host(`calibre.jafner.net`)
      - traefik.http.routers.calibre.tls.certresolver=lets-encrypt
    networks:
      - web

networks:
  web:
    external: true
volumes:
  calibre-web-rpg_data:
  calibre-web-sff_data:

12 archive/docker_config/cloudflare-ddns/docker-compose.yml Normal file
@@ -0,0 +1,12 @@
version: "3"
services:
  cloudflare-ddns:
    image: oznu/cloudflare-ddns
    container_name: cloudflare-ddns
    restart: unless-stopped
    environment:
      - API_KEY=***REMOVED***
      - ZONE=jafner.net
      - SUBDOMAIN=*
    labels:
      - traefik.enable=false

1 archive/docker_config/drawio/.env Normal file
@@ -0,0 +1 @@
DRAWIO_BASE_URL=https://draw.jafner.net

74 archive/docker_config/drawio/docker-compose.yml Normal file
@@ -0,0 +1,74 @@
version: '3'
services:
  plantuml-server:
    image: jgraph/plantuml-server
    container_name: drawio_plantuml-server
    restart: unless-stopped
    expose:
      - "8080"
    networks:
      - drawionet
    volumes:
      - fonts_volume:/usr/share/fonts/drawio
  image-export:
    image: jgraph/export-server
    container_name: drawio_export-server
    restart: unless-stopped
    expose:
      - "8000"
    networks:
      - drawionet
    volumes:
      - fonts_volume:/usr/share/fonts/drawio
    environment:
      - DRAWIO_SERVER_URL=${DRAWIO_BASE_URL}
  drawio:
    image: jgraph/drawio
    container_name: drawio_drawio
    links:
      - plantuml-server:plantuml-server
      - image-export:image-export
    depends_on:
      - plantuml-server
      - image-export
    networks:
      - drawionet
      - web
    environment:
      - DRAWIO_SELF_CONTAINED=1
      - PLANTUML_URL=http://plantuml-server:8080/
      - EXPORT_URL=http://image-export:8000/
      - DRAWIO_BASE_URL=${DRAWIO_BASE_URL}
      - DRAWIO_CSP_HEADER=${DRAWIO_CSP_HEADER}
      - DRAWIO_VIEWER_URL=${DRAWIO_VIEWER_URL}
      - DRAWIO_CONFIG=${DRAWIO_CONFIG}
      - DRAWIO_GOOGLE_CLIENT_ID=${DRAWIO_GOOGLE_CLIENT_ID}
      - DRAWIO_GOOGLE_APP_ID=${DRAWIO_GOOGLE_APP_ID}
      - DRAWIO_GOOGLE_CLIENT_SECRET=${DRAWIO_GOOGLE_CLIENT_SECRET}
      - DRAWIO_GOOGLE_VIEWER_CLIENT_ID=${DRAWIO_GOOGLE_VIEWER_CLIENT_ID}
      - DRAWIO_GOOGLE_VIEWER_APP_ID=${DRAWIO_GOOGLE_VIEWER_APP_ID}
      - DRAWIO_GOOGLE_VIEWER_CLIENT_SECRET=${DRAWIO_GOOGLE_VIEWER_CLIENT_SECRET}
      - DRAWIO_MSGRAPH_CLIENT_ID=${DRAWIO_MSGRAPH_CLIENT_ID}
      - DRAWIO_MSGRAPH_CLIENT_SECRET=${DRAWIO_MSGRAPH_CLIENT_SECRET}
      - DRAWIO_GITLAB_ID=${DRAWIO_GITLAB_ID}
      - DRAWIO_GITLAB_URL=${DRAWIO_GITLAB_URL}
      - DRAWIO_CLOUD_CONVERT_APIKEY=${DRAWIO_CLOUD_CONVERT_APIKEY}
      - DRAWIO_CACHE_DOMAIN=${DRAWIO_CACHE_DOMAIN}
      - DRAWIO_MEMCACHED_ENDPOINT=${DRAWIO_MEMCACHED_ENDPOINT}
      - DRAWIO_PUSHER_MODE=2
      - DRAWIO_IOT_ENDPOINT=${DRAWIO_IOT_ENDPOINT}
      - DRAWIO_IOT_CERT_PEM=${DRAWIO_IOT_CERT_PEM}
      - DRAWIO_IOT_PRIVATE_KEY=${DRAWIO_IOT_PRIVATE_KEY}
      - DRAWIO_IOT_ROOT_CA=${DRAWIO_IOT_ROOT_CA}
      - DRAWIO_MXPUSHER_ENDPOINT=${DRAWIO_MXPUSHER_ENDPOINT}
    labels:
      - traefik.http.routers.drawio.rule=Host(`draw.jafner.net`)
      - traefik.http.routers.drawio.tls.certresolver=lets-encrypt

networks:
  drawionet:
  web:
    external: true

volumes:
  fonts_volume:

5 archive/docker_config/git_update.sh Executable file
@@ -0,0 +1,5 @@
#!/bin/bash
cd /home/joey/docker_config/
git add --all
git commit -am "$(date)"
git push

2 archive/docker_config/grafana-stack/.env Normal file
@@ -0,0 +1,2 @@
DOCKER_DATA=/home/joey/docker_data/grafana-stack
MINECRAFT_DIR=/home/joey/docker_data/minecraft

1 archive/docker_config/grafana-stack/.forgetps.json Normal file
@@ -0,0 +1 @@
[]

63 archive/docker_config/grafana-stack/docker-compose.yml Normal file
@@ -0,0 +1,63 @@
version: '3'
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    restart: unless-stopped
    networks:
      - monitoring
    ports:
      - 8086:8086
      - 8089:8089/udp
    volumes:
      - ./influxdb.conf:/etc/influxdb/influxdb.conf:ro
      - "${DOCKER_DATA}/influxdb:/var/lib/influxdb"
    environment:
      - TZ=America/Los_Angeles
      - INFLUXDB_HTTP_ENABLED=true
      - INFLUXDB_DB=host
    command: -config /etc/influxdb/influxdb.conf

  telegraf:
    image: telegraf:latest
    container_name: telegraf
    restart: unless-stopped
    depends_on:
      - influxdb
    networks:
      - monitoring
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - ./scripts/.forgetps.json:/.forgetps.json:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /sys:/rootfs/sys:ro
      - /proc:/rootfs/proc:ro
      - /etc:/rootfs/etc:ro

  grafana:
    image: mbarmem/grafana-render:latest
    container_name: grafana
    restart: unless-stopped
    depends_on:
      - influxdb
      - telegraf
    networks:
      - monitoring
      - web
    user: "0"
    volumes:
      - ./grafana:/var/lib/grafana
      - ./grafana.ini:/etc/grafana/grafana.ini
    environment:
      - GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource,grafana-worldmap-panel,grafana-piechart-panel
    labels:
      - traefik.http.routers.grafana.rule=Host(`grafana.jafner.net`)
      - traefik.http.routers.grafana.tls.certresolver=lets-encrypt
      #- traefik.http.routers.grafana.middlewares=authelia@file

networks:
  monitoring:
    external: true
  web:
    external: true

622 archive/docker_config/grafana-stack/grafana.ini Normal file
@@ -0,0 +1,622 @@
##################### Grafana Configuration Example #####################
#
# Everything has defaults so you only need to uncomment things you want to
# change

# possible values : production, development
;app_mode = production

# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty
;instance_name = ${HOSTNAME}

#################################### Paths ####################################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
;data = /var/lib/grafana

# Temporary files in `data` directory older than given duration will be removed
;temp_data_lifetime = 24h

# Directory where grafana can store logs
;logs = /var/log/grafana

# Directory where grafana will automatically scan and look for plugins
;plugins = /var/lib/grafana/plugins

# folder that contains provisioning config files that grafana will apply on startup and while running.
;provisioning = conf/provisioning

#################################### Server ####################################
[server]
# Protocol (http, https, h2, socket)
;protocol = http

# The ip address to bind to, empty will bind to all interfaces
;http_addr =

# The http port to use
;http_port = 3000

# The public facing domain name used to access grafana from a browser
;domain = localhost

# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
;enforce_domain = false

# The full public facing url you use in browser, used for redirects and emails
# If you use reverse proxy and sub path specify full url (with sub path)
;root_url = http://localhost:3000

# Serve Grafana from subpath specified in `root_url` setting. By default it is set to `false` for compatibility reasons.
;serve_from_sub_path = false

# Log web requests
;router_logging = false

# the path relative working path
;static_root_path = public

# enable gzip
;enable_gzip = false

# https certs & key file
;cert_file =
;cert_key =

# Unix socket path
;socket =

#################################### Database ####################################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as on string using the url properties.

# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =

# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
;url =

# For "postgres" only, either "disable", "require" or "verify-full"
;ssl_mode = disable

# For "sqlite3" only, path relative to data_path setting
;path = grafana.db

# Max idle conn setting default is 2
;max_idle_conn = 2

# Max conn setting default is 0 (mean not set)
;max_open_conn =

# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
;conn_max_lifetime = 14400

# Set to true to log the sql calls and execution times.
;log_queries =

# For "sqlite3" only. cache mode setting used for connecting to the database. (private, shared)
;cache_mode = private

#################################### Cache server #############################
[remote_cache]
# Either "redis", "memcached" or "database" default is "database"
;type = database

# cache connectionstring options
# database: will use Grafana primary database.
# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=0,ssl=false`. Only addr is required. ssl may be 'true', 'false', or 'insecure'.
# memcache: 127.0.0.1:11211
;connstr =

#################################### Data proxy ###########################
[dataproxy]

# This enables data proxy logging, default is false
;logging = false

# How long the data proxy should wait before timing out default is 30 (seconds)
;timeout = 30

# If enabled and user is not anonymous, data proxy will add X-Grafana-User header with username into the request, default is false.
;send_user_header = false

#################################### Analytics ####################################
[analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
# No ip addresses are being tracked, only simple counters to track
# running instances, dashboard and error counts. It is very helpful to us.
# Change this option to false to disable reporting.
;reporting_enabled = true

# Set to false to disable all checks to https://grafana.net
# for new versions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to http://grafana.com to get latest versions
;check_for_updates = true

# Google Analytics universal tracking code, only enabled if you specify an id here
;google_analytics_ua_id =

# Google Tag Manager ID, only enabled if you specify an id here
;google_tag_manager_id =

#################################### Security ####################################
[security]
# default admin user, created on startup
;admin_user = admin
admin_user = jafner

# default admin password, can be changed before first start of grafana, or in profile settings
;admin_password = admin
admin_password = joeyyeoj

# used for signing
;secret_key = ***REMOVED***

# disable gravatar profile images
;disable_gravatar = false

# data source proxy whitelist (ip_or_domain:port separated by spaces)
;data_source_proxy_whitelist =

# disable protection against brute force login attempts
;disable_brute_force_login_protection = false

# set to true if you host Grafana behind HTTPS. default is false.
;cookie_secure = false

# set cookie SameSite attribute. defaults to `lax`. can be set to "lax", "strict" and "none"
;cookie_samesite = lax

# set to true if you want to allow browsers to render Grafana in a <frame>, <iframe>, <embed> or <object>. default is false.
;allow_embedding = false

# Set to true if you want to enable http strict transport security (HSTS) response header.
# This is only sent when HTTPS is enabled in this configuration.
# HSTS tells browsers that the site should only be accessed using HTTPS.
# The default version will change to true in the next minor release, 6.3.
;strict_transport_security = false

# Sets how long a browser should cache HSTS. Only applied if strict_transport_security is enabled.
;strict_transport_security_max_age_seconds = 86400

# Set to true if to enable HSTS preloading option. Only applied if strict_transport_security is enabled.
;strict_transport_security_preload = false

# Set to true if to enable the HSTS includeSubDomains option. Only applied if strict_transport_security is enabled.
;strict_transport_security_subdomains = false

# Set to true to enable the X-Content-Type-Options response header.
# The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised
# in the Content-Type headers should not be changed and be followed. The default will change to true in the next minor release, 6.3.
;x_content_type_options = false

# Set to true to enable the X-XSS-Protection header, which tells browsers to stop pages from loading
# when they detect reflected cross-site scripting (XSS) attacks. The default will change to true in the next minor release, 6.3.
;x_xss_protection = false

#################################### Snapshots ###########################
[snapshots]
# snapshot sharing options
;external_enabled = true
;external_snapshot_url = https://snapshots-origin.raintank.io
;external_snapshot_name = Publish to snapshot.raintank.io

# Set to true to enable this Grafana instance act as an external snapshot server and allow unauthenticated requests for
# creating and deleting snapshots.
;public_mode = false

# remove expired snapshot
;snapshot_remove_expired = true

#################################### Dashboards History ##################
[dashboards]
# Number dashboard versions to keep (per dashboard). Default: 20, Minimum: 1
;versions_to_keep = 20

#################################### Users ###############################
[users]
# disable user signup / registration
;allow_sign_up = true

# Allow non admin users to create organizations
;allow_org_create = true

# Set to true to automatically assign new users to the default organization (id 1)
;auto_assign_org = true

# Default role new users will be automatically assigned (if disabled above is set to true)
;auto_assign_org_role = Viewer

# Background text for the user field on the login page
;login_hint = email or username
;password_hint = password

# Default UI theme ("dark" or "light")
;default_theme = dark

# External user management, these options affect the organization users view
;external_manage_link_url =
;external_manage_link_name =
;external_manage_info =

# Viewers can edit/inspect dashboard settings in the browser. But not save the dashboard.
;viewers_can_edit = false

# Editors can administrate dashboard, folders and teams they create
;editors_can_admin = false

[auth]
# Login cookie name
;login_cookie_name = grafana_session

# The lifetime (days) an authenticated user can be inactive before being required to login at next visit. Default is 7 days,
;login_maximum_inactive_lifetime_days = 7
login_maximum_inactive_lifetime_days = 999

# The maximum lifetime (days) an authenticated user can be logged in since login time before being required to login. Default is 30 days.
;login_maximum_lifetime_days = 30
login_maximum_lifetime_days = 999

# How often should auth tokens be rotated for authenticated users when being active. The default is each 10 minutes.
;token_rotation_interval_minutes = 10

# Set to true to disable (hide) the login form, useful if you use OAuth, defaults to false
;disable_login_form = false

# Set to true to disable the signout link in the side menu. useful if you use auth.proxy, defaults to false
;disable_signout_menu = false

# URL to redirect the user to after sign out
;signout_redirect_url =

# Set to true to attempt login with OAuth automatically, skipping the login screen.
# This setting is ignored if multiple OAuth providers are configured.
;oauth_auto_login = false

#################################### Anonymous Auth ######################
[auth.anonymous]
# enable anonymous access
;enabled = false
enabled = true

# specify organization name that should be used for unauthenticated users
;org_name = Main Org.

# specify role for unauthenticated users
;org_role = Viewer

#################################### Github Auth ##########################
[auth.github]
;enabled = false
;allow_sign_up = true
;client_id = some_id
;client_secret = some_secret
;scopes = user:email,read:org
;auth_url = https://github.com/login/oauth/authorize
;token_url = https://github.com/login/oauth/access_token
;api_url = https://api.github.com/user
;team_ids =
;allowed_organizations =

#################################### Google Auth ##########################
[auth.google]
;enabled = false
;allow_sign_up = true
;client_id = some_client_id
;client_secret = some_client_secret
;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
;auth_url = https://accounts.google.com/o/oauth2/auth
;token_url = https://accounts.google.com/o/oauth2/token
;api_url = https://www.googleapis.com/oauth2/v1/userinfo
;allowed_domains =

#################################### Generic OAuth ##########################
[auth.generic_oauth]
;enabled = false
;name = OAuth
;allow_sign_up = true
;client_id = some_id
;client_secret = some_secret
;scopes = user:email,read:org
;email_attribute_name = email:primary
;email_attribute_path =
;auth_url = https://foo.bar/login/oauth/authorize
;token_url = https://foo.bar/login/oauth/access_token
;api_url = https://foo.bar/user
;team_ids =
;allowed_organizations =
;tls_skip_verify_insecure = false
;tls_client_cert =
;tls_client_key =
;tls_client_ca =

; Set to true to enable sending client_id and client_secret via POST body instead of Basic authentication HTTP header
; This might be required if the OAuth provider is not RFC6749 compliant, only supporting credentials passed via POST payload
;send_client_credentials_via_post = false

#################################### SAML Auth ###########################
[auth.saml] # Enterprise only
# Defaults to false. If true, the feature is enabled.
;enabled = false

# Base64-encoded public X.509 certificate. Used to sign requests to the IdP
;certificate =

# Path to the public X.509 certificate. Used to sign requests to the IdP
;certificate_path =

# Base64-encoded private key. Used to decrypt assertions from the IdP
;private_key =

;# Path to the private key. Used to decrypt assertions from the IdP
;private_key_path =

# Base64-encoded IdP SAML metadata XML. Used to verify and obtain binding locations from the IdP
;idp_metadata =

# Path to the SAML metadata XML. Used to verify and obtain binding locations from the IdP
;idp_metadata_path =

# URL to fetch SAML IdP metadata. Used to verify and obtain binding locations from the IdP
;idp_metadata_url =

# Duration, since the IdP issued a response and the SP is allowed to process it. Defaults to 90 seconds.
;max_issue_delay = 90s

# Duration, for how long the SP's metadata should be valid. Defaults to 48 hours.
;metadata_valid_duration = 48h

# Friendly name or name of the attribute within the SAML assertion to use as the user's name
;assertion_attribute_name = displayName

# Friendly name or name of the attribute within the SAML assertion to use as the user's login handle
;assertion_attribute_login = mail

# Friendly name or name of the attribute within the SAML assertion to use as the user's email
;assertion_attribute_email = mail

#################################### Grafana.com Auth ####################
[auth.grafana_com]
;enabled = false
;allow_sign_up = true
;client_id = some_id
;client_secret = some_secret
;scopes = user:email
;allowed_organizations =

#################################### Auth Proxy ##########################
[auth.proxy]
;enabled = false
;header_name = X-WEBAUTH-USER
;header_property = username
;auto_sign_up = true
;ldap_sync_ttl = 60
;whitelist = 192.168.1.1, 192.168.2.1
;headers = Email:X-User-Email, Name:X-User-Name

#################################### Basic Auth ##########################
[auth.basic]
;enabled = true

#################################### Auth LDAP ##########################
[auth.ldap]
;enabled = false
;config_file = /etc/grafana/ldap.toml
;allow_sign_up = true

# LDAP background sync (Enterprise only)
# At 1 am every day
;sync_cron = "0 0 1 * * *"
;active_sync_enabled = true

#################################### SMTP / Emailing ##########################
[smtp]
# enabled = true
# host =
# user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
# password =
;cert_file =
;key_file =
# skip_verify = true
# from_address =
# from_name =
# EHLO identity in SMTP dialog (defaults to instance_name)
;ehlo_identity = dashboard.example.com

[emails]
;welcome_email_on_sign_up = false

#################################### Logging ##########################
[log]
# Either "console", "file", "syslog". Default is console and file
# Use space to separate multiple modes, e.g. "console file"
;mode = console file

# Either "debug", "info", "warn", "error", "critical", default is "info"
;level = info

# optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
;filters =

# For "console" mode only
[log.console]
;level =

# log line format, valid options are text, console and json
;format = console

# For "file" mode only
[log.file]
;level =

# log line format, valid options are text, console and json
|
||||
;format = text
|
||||
|
||||
# This enables automated log rotate(switch of following options), default is true
|
||||
;log_rotate = true
|
||||
|
||||
# Max line number of single file, default is 1000000
|
||||
;max_lines = 1000000
|
||||
|
||||
# Max size shift of single file, default is 28 means 1 << 28, 256MB
|
||||
;max_size_shift = 28
|
||||
|
||||
# Segment log daily, default is true
|
||||
;daily_rotate = true
|
||||
|
||||
# Expired days of log file(delete after max days), default is 7
|
||||
;max_days = 7
|
||||
|
||||
[log.syslog]
|
||||
;level =
|
||||
|
||||
# log line format, valid options are text, console and json
|
||||
;format = text
|
||||
|
||||
# Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used.
|
||||
;network =
|
||||
;address =
|
||||
|
||||
# Syslog facility. user, daemon and local0 through local7 are valid.
|
||||
;facility =
|
||||
|
||||
# Syslog tag. By default, the process' argv[0] is used.
|
||||
;tag =
|
||||
|
||||
#################################### Alerting ############################
|
||||
[alerting]
|
||||
# Disable alerting engine & UI features
|
||||
;enabled = true
|
||||
# Makes it possible to turn off alert rule execution but alerting UI is visible
|
||||
;execute_alerts = true
|
||||
|
||||
# Default setting for new alert rules. Defaults to categorize error and timeouts as alerting. (alerting, keep_state)
|
||||
;error_or_timeout = alerting
|
||||
|
||||
# Default setting for how Grafana handles nodata or null values in alerting. (alerting, no_data, keep_state, ok)
|
||||
;nodata_or_nullvalues = no_data
|
||||
|
||||
# Alert notifications can include images, but rendering many images at the same time can overload the server
|
||||
# This limit will protect the server from render overloading and make sure notifications are sent out quickly
|
||||
;concurrent_render_limit = 5
|
||||
|
||||
|
||||
# Default setting for alert calculation timeout. Default value is 30
|
||||
;evaluation_timeout_seconds = 30
|
||||
|
||||
# Default setting for alert notification timeout. Default value is 30
|
||||
;notification_timeout_seconds = 30
|
||||
|
||||
# Default setting for max attempts to sending alert notifications. Default value is 3
|
||||
;max_attempts = 3
|
||||
|
||||
#################################### Explore #############################
|
||||
[explore]
|
||||
# Enable the Explore section
|
||||
;enabled = true
|
||||
|
||||
#################################### Internal Grafana Metrics ##########################
|
||||
# Metrics available at HTTP API Url /metrics
|
||||
[metrics]
|
||||
# Disable / Enable internal metrics
|
||||
;enabled = true
|
||||
# Disable total stats (stat_totals_*) metrics to be generated
|
||||
;disable_total_stats = false
|
||||
|
||||
# Publish interval
|
||||
;interval_seconds = 10
|
||||
|
||||
# Send internal metrics to Graphite
|
||||
[metrics.graphite]
|
||||
# Enable by setting the address setting (ex localhost:2003)
|
||||
;address =
|
||||
;prefix = prod.grafana.%(instance_name)s.
|
||||
|
||||
#################################### Distributed tracing ############
|
||||
[tracing.jaeger]
|
||||
# Enable by setting the address sending traces to jaeger (ex localhost:6831)
|
||||
;address = localhost:6831
|
||||
# Tag that will always be included in when creating new spans. ex (tag1:value1,tag2:value2)
|
||||
;always_included_tag = tag1:value1
|
||||
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
|
||||
;sampler_type = const
|
||||
# jaeger samplerconfig param
|
||||
# for "const" sampler, 0 or 1 for always false/true respectively
|
||||
# for "probabilistic" sampler, a probability between 0 and 1
|
||||
# for "rateLimiting" sampler, the number of spans per second
|
||||
# for "remote" sampler, param is the same as for "probabilistic"
|
||||
# and indicates the initial sampling rate before the actual one
|
||||
# is received from the mothership
|
||||
;sampler_param = 1
|
||||
# Whether or not to use Zipkin propagation (x-b3- HTTP headers).
|
||||
;zipkin_propagation = false
|
||||
# Setting this to true disables shared RPC spans.
|
||||
# Not disabling is the most common setting when using Zipkin elsewhere in your infrastructure.
|
||||
;disable_shared_zipkin_spans = false
|
||||
|
||||
#################################### Grafana.com integration ##########################
|
||||
# Url used to import dashboards directly from Grafana.com
|
||||
[grafana_com]
|
||||
;url = https://grafana.com
|
||||
|
||||
#################################### External image storage ##########################
|
||||
[external_image_storage]
|
||||
# Used for uploading images to public servers so they can be included in slack/email messages.
|
||||
# you can choose between (s3, webdav, gcs, azure_blob, local)
|
||||
;provider =
|
||||
|
||||
[external_image_storage.s3]
|
||||
;bucket =
|
||||
;region =
|
||||
;path =
|
||||
;access_key =
|
||||
;secret_key =
|
||||
|
||||
[external_image_storage.webdav]
|
||||
;url =
|
||||
;public_url =
|
||||
;username =
|
||||
;password =
|
||||
|
||||
[external_image_storage.gcs]
|
||||
;key_file =
|
||||
;bucket =
|
||||
;path =
|
||||
|
||||
[external_image_storage.azure_blob]
|
||||
;account_name =
|
||||
;account_key =
|
||||
;container_name =
|
||||
|
||||
[external_image_storage.local]
|
||||
# does not require any configuration
|
||||
|
||||
[rendering]
|
||||
# Options to configure a remote HTTP image rendering service, e.g. using https://github.com/grafana/grafana-image-renderer.
|
||||
# URL to a remote HTTP image renderer service, e.g. http://localhost:8081/render, will enable Grafana to render panels and dashboards to PNG-images using HTTP requests to an external service.
|
||||
;server_url =
|
||||
# If the remote HTTP image renderer service runs on a different server than the Grafana server you may have to configure this to a URL where Grafana is reachable, e.g. http://grafana.domain/.
|
||||
;callback_url =
|
||||
|
||||
[enterprise]
|
||||
# Path to a valid Grafana Enterprise license.jwt file
|
||||
;license_path =
|
||||
|
||||
[panels]
|
||||
# If set to true Grafana will allow script tags in text panels. Not recommended as it enable XSS vulnerabilities.
|
||||
;disable_sanitize_html = false
|
||||
|
||||
[plugins]
|
||||
;enable_alpha = false
|
||||
;app_tls_skip_verify_insecure = false
|
305
archive/docker_config/grafana-stack/influxdb.conf
Normal file
@ -0,0 +1,305 @@
### Welcome to the InfluxDB configuration file.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to disable reporting.
reporting-disabled = false

# we'll try to get the hostname automatically, but if the OS returns something
# that isn't resolvable by other servers in the cluster, use this option to
# manually set the hostname
# hostname = "localhost"

###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  dir = "/var/lib/influxdb/meta"

  retention-autocreate = true

  # If log messages are printed for the meta service
  logging-enabled = true
  pprof-enabled = false

  # The default duration for leases.
  lease-duration = "1m0s"

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # Controls if this node holds time series data shards in the cluster
  enabled = true

  dir = "/var/lib/influxdb/data"

  # These are the WAL settings for the storage engine >= 0.9.3
  wal-dir = "/var/lib/influxdb/wal"
  wal-logging-enabled = true

  # Trace logging provides more verbose output around the tsm engine. Turning
  # this on can provide more useful output for debugging tsm engine issues.
  # trace-logging-enabled = false

  # Whether queries should be logged before execution. Very useful for troubleshooting, but will
  # log any sensitive data contained within a query.
  # query-log-enabled = true

  # Settings for the TSM engine

  # CacheMaxMemorySize is the maximum size a shard's cache can
  # reach before it starts rejecting writes.
  # cache-max-memory-size = 524288000

  # CacheSnapshotMemorySize is the size at which the engine will
  # snapshot the cache and write it to a TSM file, freeing up memory
  # cache-snapshot-memory-size = 26214400

  # CacheSnapshotWriteColdDuration is the length of time at
  # which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes
  # cache-snapshot-write-cold-duration = "1h"

  # MinCompactionFileCount is the minimum number of TSM files
  # that need to exist before a compaction cycle will run
  # compact-min-file-count = 3

  # CompactFullWriteColdDuration is the duration at which the engine
  # will compact all TSM files in a shard if it hasn't received a
  # write or delete
  # compact-full-write-cold-duration = "24h"

  # MaxPointsPerBlock is the maximum number of points in an encoded
  # block in a TSM file. Larger numbers may yield better compression
  # but could incur a performance penalty when querying
  # max-points-per-block = 1000

###
### [coordinator]
###
### Controls the clustering service configuration.
###

[coordinator]
  write-timeout = "10s"
  max-concurrent-queries = 0
  query-timeout = "0"
  log-queries-after = "0"
  max-select-point = 0
  max-select-series = 0
  max-select-buckets = 0

###
### [retention]
###
### Controls the enforcement of retention policies for evicting old data.
###

[retention]
  enabled = true
  check-interval = "30m"

###
### [shard-precreation]
###
### Controls the precreation of shards, so they are available before data arrives.
### Only shards that, after creation, will have both a start- and end-time in the
### future, will ever be created. Shards are never precreated that would be wholly
### or partially in the past.

[shard-precreation]
  enabled = true
  check-interval = "10m"
  advance-period = "30m"

###
### Controls the system self-monitoring, statistics and diagnostics.
###
### The internal database for monitoring data is created automatically
### if it does not already exist. The target retention within this database
### is called 'monitor' and is also created with a retention period of 7 days
### and a replication factor of 1, if it does not exist. In all cases
### this retention policy is configured as the default for the database.

[monitor]
  store-enabled = true # Whether to record statistics internally.
  store-database = "_internal" # The destination database for recorded statistics
  store-interval = "10s" # The interval at which to record statistics

###
### [admin]
###
### Controls the availability of the built-in, web-based admin interface. If HTTPS is
### enabled for the admin interface, HTTPS must also be enabled on the [http] service.
###

[admin]
  enabled = true
  bind-address = ":8083"
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"

###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###

[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
  ### Use a separate private key location.
  # https-private-key = ""
  max-row-limit = 10000
  realm = "InfluxDB"

###
### [subscriber]
###
### Controls the subscriptions, which can be used to fork a copy of all data
### received by the InfluxDB host.
###

[subscriber]
  enabled = true
  http-timeout = "30s"


###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###

[[graphite]]
  enabled = false
  # database = "graphite"
  # bind-address = ":2003"
  # protocol = "tcp"
  # consistency-level = "one"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # batch-size = 5000 # will flush if this many points get buffered
  # batch-pending = 10 # number of batches that may be pending in memory
  # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
  # udp-read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.

  ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
  # separator = "."

  ### Default tags that will be added to all metrics. These can be overridden at the template level
  ### or by tags extracted from metric
  # tags = ["region=us-east", "zone=1c"]

  ### Each template line requires a template pattern. It can have an optional
  ### filter before the template and separated by spaces. It can also have optional extra
  ### tags following the template. Multiple tags should be separated by commas and no spaces
  ### similar to the line protocol format. There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]

###
### [[collectd]]
###
### Controls one or many listeners for collectd data.
###

[[collectd]]
  enabled = false
  # bind-address = ""
  # database = ""
  # typesdb = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # batch-size = 1000 # will flush if this many points get buffered
  # batch-pending = 5 # number of batches that may be pending in memory
  # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
  # read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.

###
### [[opentsdb]]
###
### Controls one or many listeners for OpenTSDB data.
###

[[opentsdb]]
  enabled = false
  # bind-address = ":4242"
  # database = "opentsdb"
  # retention-policy = ""
  # consistency-level = "one"
  # tls-enabled = false
  # certificate= ""
  # log-point-errors = true # Log an error for every malformed point.

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Only points
  # metrics received over the telnet protocol undergo batching.

  # batch-size = 1000 # will flush if this many points get buffered
  # batch-pending = 5 # number of batches that may be pending in memory
  # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit

###
### [[udp]]
###
### Controls the listeners for InfluxDB line protocol data via UDP.
###

[[udp]]
  enabled = true
  bind-address = "0.0.0.0:8089"
  database = "host"
  # retention-policy = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  batch-size = 1000 # will flush if this many points get buffered
  # batch-pending = 5 # number of batches that may be pending in memory
  batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
  # read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.

  # set the expected UDP payload size; lower values tend to yield better performance, default is max UDP size 65536
  # udp-payload-size = 65536

###
### [continuous_queries]
###
### Controls how continuous queries are run within InfluxDB.
###

[continuous_queries]
  log-enabled = true
  enabled = true
  # run-interval = "1s" # interval for how often continuous queries will be checked if they need to run
19
archive/docker_config/grafana-stack/scripts/diskstatus.sh
Executable file
@ -0,0 +1,19 @@
#!/usr/bin/env sh

SMARTCTL=/usr/local/sbin/smartctl

DISKS=$(/sbin/sysctl -n kern.disks | cut -d= -f2)

for DISK in ${DISKS}
do
    TEMP=$(${SMARTCTL} -l scttemp /dev/${DISK} | grep '^Current Temperature:' | awk '{print $3}')
    HEALTH=$(${SMARTCTL} -H /dev/${DISK} | grep 'test result:' | cut -d: -f2 | sed 's/^[ \t]*//')
    # only emit an entry when both values were parsed successfully
    if [ -n "${TEMP}" ] && [ -n "${HEALTH}" ]
    then
        JSON=$(echo ${JSON}{\"disk\":\"${DISK}\",\"health\":\"${HEALTH}\",\"temperature\":${TEMP}},)
    fi
done

JSON=$(echo ${JSON} | sed 's/,$//') # strip the trailing comma so the output is valid JSON

echo [${JSON}] >&1
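For reference, a hypothetical run of the script above (disk names and values are illustrative only, assuming smartctl reports a current temperature and a PASSED health result for each disk):

$ ./diskstatus.sh
[{"disk":"ada0","health":"PASSED","temperature":34},{"disk":"ada1","health":"PASSED","temperature":36}]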
3
archive/docker_config/grafana-stack/scripts/forgepc.sh
Executable file
@ -0,0 +1,3 @@
#!/bin/bash

docker exec e6 rcon-cli forge entity list "minecraft:player" >&1
19
archive/docker_config/grafana-stack/scripts/forgetps-to-json.sh
Executable file
@ -0,0 +1,19 @@
#!/bin/bash
# this script converts the output of the "forge tps" command (in the form of the .forgetps file) into json for sending to influxdb
# it reads from the file passed as the first argument and writes to a .forgetps.json file
while IFS= read -r line; do
    if [ "$line" != "" ]; then
        DIM=$(echo -n "$line" | awk '{print $2}')
        if [ "$DIM" = "Mean" ]; then
            DIM="Overall"
        fi
        TPT=$(echo "$line" | grep -oE 'Mean tick time: .+ms' | awk '{print $4}')
        TPS=$(echo "$line" | grep -oE 'Mean TPS: .+' | awk '{print $3}')
        JSON+=\{$(echo \"dim\":\"$DIM\",\"tpt\":$TPT,\"tps\":$TPS)\},
    fi
#done < .forgetps # uncomment (and comment the line below) to read from the .forgetps file instead
done <"$1" # reads from the file named by the first argument
JSON=$(echo ${JSON} | sed 's/,$//') # strip the trailing comma so the output is valid JSON

#echo [${JSON}] >&1 # uncomment to write to stdout instead
echo [${JSON}] > .forgetps.json # writes to the .forgetps.json file
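As a sketch, one input line and the entry it would produce (values illustrative; the exact "forge tps" output format varies by Forge version, so treat the input line as an assumption):

$ cat .forgetps
Dim 0 (overworld): Mean tick time: 12.345 ms. Mean TPS: 20.000
$ ./forgetps-to-json.sh .forgetps && cat .forgetps.json
[{"dim":"0","tpt":12.345,"tps":20.000}]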
32
archive/docker_config/grafana-stack/telegraf.conf
Normal file
@ -0,0 +1,32 @@
[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "jafgraf"
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.mem]]
[[inputs.system]]
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
[[inputs.sensors]]
[[inputs.file]]
  files = ["/.forgetps.json"]
  data_format = "json"
  name_override = "tickinfo"
  tag_keys = ["dim"]
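The [[inputs.file]] block above consumes the JSON array written by scripts/forgetps-to-json.sh; a minimal sketch of a valid payload (values illustrative only):

[{"dim":"Overall","tpt":12.345,"tps":20.000},{"dim":"0","tpt":11.000,"tps":20.000}]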
12
archive/docker_config/joplin/.env
Normal file
@ -0,0 +1,12 @@
DOCKER_DATA=/home/joey/docker_data/joplin

DB_CLIENT=pg
POSTGRES_PASSWORD=postgres
POSTGRES_DATABASE=joplin
POSTGRES_DB=joplin
POSTGRES_USER=postgres
POSTGRES_PORT=5432
POSTGRES_HOST=joplin_db

APP_BASE_URL=https://joplin.jafner.net
APP_PORT=22300
34
archive/docker_config/joplin/docker-compose.yml
Normal file
@ -0,0 +1,34 @@
version: '3'
services:
  joplin:
    image: joplin/server:2.6-beta
    container_name: joplin
    restart: unless-stopped
    env_file:
      - .env
    depends_on:
      - joplin_db
    networks:
      - web
      - joplin
    labels:
      - traefik.http.routers.joplin.rule=Host(`joplin.jafner.net`)
      - traefik.http.routers.joplin.tls.certresolver=lets-encrypt
      - traefik.http.middlewares.joplin.headers.customrequestheaders.X-Forwarded-Proto=http
      - traefik.http.services.joplin.loadbalancer.server.port=22300
      - traefik.http.services.joplin.loadbalancer.passhostheader=true
  joplin_db:
    image: postgres:13.1
    container_name: joplin_db
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - ${DOCKER_DATA}/db:/var/lib/postgresql/data
    networks:
      - joplin

networks:
  joplin:
  web:
    external: true
2
archive/docker_config/landing/.env
Normal file
@ -0,0 +1,2 @@
DOCKER_DATA=/home/joey/docker_data/landing

46
archive/docker_config/landing/docker-compose.yml
Normal file
@ -0,0 +1,46 @@
version: '3.1'

services:
  landing:
    image: wordpress
    container_name: landing
    restart: always
    environment:
      WORDPRESS_DB_HOST: landing_db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpressdb
    volumes:
      - ${DOCKER_DATA}/html:/var/www/html
      - ./docker-php-memlimit.ini:/usr/local/etc/php/conf.d/docker-php-memlimit.ini:ro
    labels:
      - traefik.http.routers.landing.rule=Host(`www.jafner.net`)
      - traefik.http.routers.landing.tls=true
      - traefik.http.routers.landing.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - web
      - landing
    depends_on:
      - landing_db

  landing_db:
    image: mysql:5.7
    container_name: landing_db
    restart: always
    networks:
      - landing
    environment:
      MYSQL_DATABASE: wordpressdb
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - ${DOCKER_DATA}/db:/var/lib/mysql
    labels:
      - traefik.enable=false

networks:
  web:
    external: true
  landing:
1
archive/docker_config/landing/docker-php-memlimit.ini
Normal file
@ -0,0 +1 @@
memory_limit = 512M
18
archive/docker_config/mc-monitor/docker-compose.yml
Normal file
@ -0,0 +1,18 @@
version: '3'

services:
  telegraf:
    image: telegraf:1.13
    restart: unless-stopped
    ports:
      - 8094:8094
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
  monitor:
    image: itzg/mc-monitor
    command: gather-for-telegraf
    restart: unless-stopped
    environment:
      GATHER_INTERVAL: 10s
      GATHER_TELEGRAF_ADDRESS: telegraf:8094
      GATHER_SERVERS: e6.jafner.net
9
archive/docker_config/mc-monitor/telegraf.conf
Normal file
@ -0,0 +1,9 @@
[agent]
  interval = "10s"

[[outputs.influxdb]]
  urls = ["http://192.168.1.23:8086"]
  database = "mc-monitor"

[[inputs.socket_listener]]
  service_address = "tcp://:8094"
3
archive/docker_config/minecraft/.env
Normal file
@ -0,0 +1,3 @@
DOCKER_DATA=/home/joey/docker_data/minecraft
DOCKER_CONFIG=/home/joey/docker_config/minecraft
RCON_PASSWORD=***REMOVED***
23
archive/docker_config/minecraft/e6-056.yml
Normal file
@ -0,0 +1,23 @@
version: '3'

services:
  e6-056:
    image: itzg/minecraft-server:java8
    container_name: e6-056
    environment:
      - EULA=TRUE
      - MAX_MEMORY=12G
      - TYPE=FORGE
      - VERSION=1.16.5
      - FORGEVERSION=36.1.31
      - OPS=jafner425
      - ENABLE_RCON=true
      - RCON_PASSWORD=${RCON_PASSWORD}
    volumes:
      - $DOCKER_DATA/e6-056:/data:rw
    networks:
      - mc-router
    restart: always

networks:
  mc-router:
    external: true
17
archive/docker_config/minecraft/router.yml
Normal file
@ -0,0 +1,17 @@
version: '3'
services:
  router:
    image: itzg/mc-router
    container_name: mc-router
    restart: always
    networks:
      - mc-router
    ports:
      - 25565:25565
    command: --mapping=e6.jafner.net=e6-056:25565,vanilla.jafner.net=vanilla:25565,tnp.jafner.net=tnp:25565,bmcp.jafner.net=bmcp:25565 --api-binding=0.0.0.0:25566

networks:
  mc-router:
    external: true
volumes:
  mc-router:
21
archive/docker_config/minecraft/telegraf.conf
Normal file
@ -0,0 +1,21 @@
[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false
[[outputs.influxdb]]
  urls = ["http://192.168.1.23:8086"]
  database = "minecraft"
[[inputs.exec]]
  name_override = "tickinfo"
  commands = ["/data/get-tps.sh"]
  timeout = "30s"
  data_format = "json"
  tag_keys = ["dim","tpt"]
18
archive/docker_config/minecraft/vanilla.yml
Normal file
@ -0,0 +1,18 @@
version: '3'
services:
  vanilla:
    image: itzg/minecraft-server:java16
    container_name: vanilla
    environment:
      - EULA=TRUE
      - VERSION=1.17.1
      - OPS=mollymsmom
      - MAX_MEMORY=6G
    volumes:
      - $DOCKER_DATA/vanilla:/data:rw
    networks:
      - mc-router

networks:
  mc-router:
    external: true
2
archive/docker_config/nvgm/.env
Normal file
@ -0,0 +1,2 @@
DOCKER_DATA=/home/joey/docker_data/nvgm

45
archive/docker_config/nvgm/docker-compose.yml
Normal file
@ -0,0 +1,45 @@
version: '3.1'

services:
  nvgm:
    image: wordpress
    container_name: nvgm
    restart: unless-stopped
    environment:
      WORDPRESS_DB_HOST: nvgm_db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpressdb
    volumes:
      - ${DOCKER_DATA}/html:/var/www/html
    labels:
      - traefik.http.routers.nvgm.rule=Host(`nvgm.jafner.net`)
      - traefik.http.routers.nvgm.tls=true
      - traefik.http.routers.nvgm.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - web
      - nvgm
    depends_on:
      - nvgm_db

  nvgm_db:
    image: mysql:5.7
    container_name: nvgm_db
    restart: unless-stopped
    networks:
      - nvgm
    environment:
      MYSQL_DATABASE: wordpressdb
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    labels:
      - traefik.enable=false
    volumes:
      - ${DOCKER_DATA}/db:/var/lib/mysql

networks:
  web:
    external: true
  nvgm:
51
archive/docker_config/peertube/.env
Normal file
@ -0,0 +1,51 @@
# Database / Postgres service configuration
POSTGRES_USER=postgresuser
POSTGRES_PASSWORD=postgrespassword
# Postgres database name "peertube"
POSTGRES_DB=peertube
# Editable only with a suffix:
#POSTGRES_DB=peertube_prod
#PEERTUBE_DB_SUFFIX=_prod
PEERTUBE_DB_USERNAME=postgresuser
PEERTUBE_DB_PASSWORD=postgrespassword
PEERTUBE_DB_SSL=false
# Default to Postgres service name "postgres" in docker-compose.yml
PEERTUBE_DB_HOSTNAME=postgres

# Server configuration
PEERTUBE_WEBSERVER_HOSTNAME=peertube.jafner.net
# If you do not use https and a reverse-proxy in docker-compose.yml
#PEERTUBE_WEBSERVER_PORT=80
#PEERTUBE_WEBSERVER_HTTPS=false
# If you need more than one IP as trust_proxy
# pass them as a comma separated array:
PEERTUBE_TRUST_PROXY=["127.0.0.1", "loopback", "172.80.0.0/16"]

# E-mail configuration
# If you use a Custom SMTP server
#PEERTUBE_SMTP_USERNAME=
#PEERTUBE_SMTP_PASSWORD=
# Default to Postfix service name "postfix" in docker-compose.yml
# May be the hostname of your Custom SMTP server
PEERTUBE_SMTP_HOSTNAME=postfix
PEERTUBE_SMTP_PORT=25
PEERTUBE_SMTP_FROM=noreply@jafner.net
PEERTUBE_SMTP_TLS=false
PEERTUBE_SMTP_DISABLE_STARTTLS=false
PEERTUBE_ADMIN_EMAIL=joey@jafner.net

# Postfix service configuration
POSTFIX_myhostname=jafner.net
# If you need to generate a list of sub/DOMAIN keys
# pass them as a whitespace separated string <DOMAIN>=<selector>
OPENDKIM_DOMAINS=jafner.net=peertube
# see https://github.com/wader/postfix-relay/pull/18
OPENDKIM_RequireSafeKeys=no

# /!\ Prefer to use the PeerTube admin interface to set the following configurations /!\
#PEERTUBE_SIGNUP_ENABLED=true
#PEERTUBE_TRANSCODING_ENABLED=true
#PEERTUBE_CONTACT_FORM_ENABLED=true

# Docker volume location
DOCKER_VOLUME=/mnt/md0/peertube
70
archive/docker_config/peertube/docker-compose.yml
Normal file
@ -0,0 +1,70 @@
version: "3.3"

services:
  peertube:
    image: chocobozzz/peertube:production-buster
    container_name: peertube_peertube
    networks:
      web:
      peertube:
        ipv4_address: 172.80.0.42
    env_file:
      - .env
    ports:
      - "1935:1935" # If you don't want to use the live feature, you can comment this line
    volumes:
      - assets:/app/client/dist
      - ${DOCKER_VOLUME}/data:/data
      - ${DOCKER_VOLUME}/config:/config
    labels:
      - "traefik.http.routers.peertube.rule=Host(`peertube.jafner.net`)"
      - "traefik.http.routers.peertube.tls.certresolver=lets-encrypt"
      - "traefik.http.services.peertube.loadbalancer.server.port=9000"
    depends_on:
      - postgres
      - redis
      - postfix
    restart: "unless-stopped"

  postgres:
    image: postgres:13-alpine
    container_name: peertube_postgres
    networks:
      - peertube
    env_file:
      - .env
    volumes:
      - ${DOCKER_VOLUME}/db:/var/lib/postgresql/data
    restart: "unless-stopped"

  redis:
    image: redis:6-alpine
    container_name: peertube_redis
    networks:
      - peertube
    volumes:
      - ${DOCKER_VOLUME}/redis:/data
    restart: "unless-stopped"

  postfix:
    image: mwader/postfix-relay
    container_name: peertube_postfix
    networks:
      - peertube
    env_file:
      - .env
    volumes:
      - ${DOCKER_VOLUME}/opendkim/keys:/etc/opendkim/keys
    restart: "unless-stopped"

networks:
  peertube:
    ipam:
      driver: default
      config:
        - subnet: 172.80.0.0/16
  web:
    external: true

volumes:
  assets:
1
archive/docker_config/plex/.env
Normal file
@ -0,0 +1 @@
DOCKER_DATA=/home/joey/docker_data/plex
53
archive/docker_config/plex/docker-compose.yml
Normal file
@ -0,0 +1,53 @@
version: "3"
services:
  plex:
    image: linuxserver/plex
    container_name: plex
    restart: "no"
    networks:
      - web
    ports:
      - 32400:32400/tcp
      - 32400:32400/udp
      - 3005:3005/tcp
      - 8324:8324/tcp
      - 32469:32469/tcp
      - 1900:1900/udp
      - 32410:32410/udp
      - 32412:32412/udp
      - 32413:32413/udp
      - 32414:32414/udp
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=latest
      - ADVERTISE_IP="https://plex.jafner.net:443"
      - PLEX_CLAIM=claim-DPRoiMnzcby-YxKdFpqJ
    volumes:
      - /mnt/nas/Video/Movies:/movies
      - /mnt/nas/Video/Shows:/shows
      - "${DOCKER_DATA}/plex:/config"
    labels:
      - traefik.http.routers.plex.rule=Host(`plex.jafner.net`)
      - traefik.http.routers.plex.tls.certresolver=lets-encrypt
      - traefik.http.services.plex.loadbalancer.server.port=32400
  ombi:
    image: ghcr.io/linuxserver/ombi
    container_name: ombi
    restart: "no"
    networks:
      - web
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
    volumes:
      - "${DOCKER_DATA}/ombi:/config"
    labels:
      - traefik.http.routers.ombi.rule=Host(`ombi.jafner.net`)
      - traefik.http.routers.ombi.tls.certresolver=lets-encrypt
      - traefik.http.services.ombi.loadbalancer.server.port=3579

networks:
  web:
    external: true
24
archive/docker_config/portainer/docker-compose.yml
Normal file
@ -0,0 +1,24 @@
version: "3"
services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: unless-stopped
    command: -H unix:///var/run/docker.sock
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    labels:
      - traefik.http.routers.portainer.rule=Host(`portainer.jafner.net`)
      - traefik.http.routers.portainer.tls.certresolver=lets-encrypt
      - traefik.http.services.portainer.loadbalancer.server.port=9000
      - traefik.http.routers.portainer.middlewares=authelia@file

volumes:
  portainer_data:
networks:
  web:
    external: true
2
archive/docker_config/portfolio/.env
Normal file
@ -0,0 +1,2 @@
DOCKER_DATA=/home/joey/docker_data/portfolio

45
archive/docker_config/portfolio/docker-compose.yml
Normal file
@ -0,0 +1,45 @@
version: '3.1'

services:
  portfolio:
    image: wordpress
    container_name: portfolio
    restart: always
    environment:
      WORDPRESS_DB_HOST: portfolio_db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpressdb
    volumes:
      - ${DOCKER_DATA}/html:/var/www/html
    labels:
      - traefik.http.routers.portfolio.rule=Host(`portfolio.jafner.net`)
      - traefik.http.routers.portfolio.tls=true
      - traefik.http.routers.portfolio.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - web
      - portfolio
    depends_on:
      - portfolio_db

  portfolio_db:
    image: mysql:5.7
    container_name: portfolio_db
    restart: always
    networks:
      - portfolio
    environment:
      MYSQL_DATABASE: wordpressdb
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - ${DOCKER_DATA}/db:/var/lib/mysql
    labels:
      - traefik.enable=false

networks:
  web:
    external: true
  portfolio:
0
archive/docker_config/prometheus/.env
Normal file
45
archive/docker_config/prometheus/docker-compose.yml
Normal file
@ -0,0 +1,45 @@
version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    networks:
      - monitoring
      - web
    ports:
      - 9090:9090
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
    labels:
      - traefik.http.routers.prometheus.rule=Host(`prometheus.jafner.net`)
      - traefik.http.routers.prometheus.tls.certresolver=lets-encrypt

  5e-jafner-tools:
    image: lusotycoon/apache-exporter
    container_name: prometheus-5e-jafner-tools
    restart: unless-stopped
    networks:
      - monitoring
    command: '--scrape_uri "https://5e.jafner.tools/server-status/?auto"'

  pihole-jafner-net:
    image: ekofr/pihole-exporter:latest
    container_name: prometheus-pihole-jafner-net
    restart: unless-stopped
    networks:
      - monitoring
    environment:
      - PIHOLE_HOSTNAME=pihole.jafner.net
      - PIHOLE_PASSWORD=***REMOVED***
      - INTERVAL=15s
      - PORT=9617

networks:
  monitoring:
    external: true
  web:
    external: true
28
archive/docker_config/prometheus/prometheus.yml
Normal file
@ -0,0 +1,28 @@
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 60s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: '5e.jafner.tools'
    scrape_interval: 60s
    static_configs:
      - targets: ['5e-jafner-tools:9117']
    metrics_path: "/metrics"

  - job_name: 'pihole.jafner.net'
    scrape_interval: 60s
    static_configs:
      - targets: ['pihole-jafner-net:9617']
    metrics_path: "/metrics"

  - job_name: 'uptime-kuma'
    scrape_interval: 60s
    static_configs:
      - targets: ['uptime.jafner.net']
    basic_auth:
      username: Jafner
      password: ***REMOVED***
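One way to sanity-check this scrape config before restarting the container is the promtool binary bundled in the prom/prometheus image (a sketch; the bind-mount path here is an assumption):

docker run --rm -v "$(pwd)/prometheus.yml:/prometheus.yml:ro" --entrypoint promtool prom/prometheus:latest check config /prometheus.yml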
0
archive/docker_config/traefik/.env
Normal file
380
archive/docker_config/traefik/acme.json
Normal file
File diff suppressed because one or more lines are too long
59
archive/docker_config/traefik/authelia/configuration.yml
Normal file
@ -0,0 +1,59 @@
---
###############################################################
#                   Authelia configuration                    #
###############################################################

host: 0.0.0.0
port: 9091
# This secret can also be set using the env variables AUTHELIA_JWT_SECRET_FILE
jwt_secret: XvvpN7uFoSmQAPdgAynDTHG8yQX4tjtS
default_redirection_url: https://www.jafner.net
totp:
  issuer: authelia.com
  period: 30
  skew: 1

authentication_backend:
  file:
    path: /config/users_database.yml
    password:
      algorithm: argon2id
      iterations: 1
      salt_length: 16
      parallelism: 8
      memory: 1024

access_control:
  default_policy: deny
  rules:
    # Rules applied to everyone
    - domain:
        - "*.jafner.net"
        - "jafner.net"
      policy: two_factor

session:
  name: authelia_session
  # This secret can also be set using the env variables AUTHELIA_SESSION_SECRET_FILE
  secret: ***REMOVED***
  expiration: 3600 # 1 hour
  inactivity: 300 # 5 minutes
  domain: jafner.net # Should match whatever your root protected domain is
  redis:
    host: redis
    port: 6379
    # This secret can also be set using the env variables AUTHELIA_SESSION_REDIS_PASSWORD_FILE
    # password: authelia

regulation:
  max_retries: 3
  find_time: 120
  ban_time: 300

storage:
  local:
    path: /config/db.sqlite3

notifier:
  filesystem:
    filename: /config/notification.txt
58
archive/docker_config/traefik/docker-compose.yml
Normal file
@ -0,0 +1,58 @@
version: "3"

services:
  traefik:
    container_name: traefik
    image: traefik:latest
    depends_on:
      - authelia
    restart: unless-stopped
    networks:
      - web
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.toml:/traefik.toml
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
      - ./acme.json:/acme.json
      - ./.htpasswd:/.htpasswd
    labels:
      - traefik.http.routers.traefik.rule=Host(`traefik.jafner.net`)
      - traefik.http.routers.traefik.tls.certresolver=lets-encrypt

  authelia:
    image: authelia/authelia
    container_name: authelia
    restart: unless-stopped
    volumes:
      - ./authelia:/config
    networks:
      - web
    labels:
      - 'traefik.http.routers.authelia.rule=Host(`auth.jafner.net`)'
      - 'traefik.http.routers.authelia.entrypoints=websecure'
      - 'traefik.http.routers.authelia.tls.certresolver=lets-encrypt'
      - "traefik.http.middlewares.security.headers.sslRedirect=true"
      - "traefik.http.middlewares.security.headers.stsSeconds=15768000"
      - "traefik.http.middlewares.security.headers.browserXSSFilter=true"
      - "traefik.http.middlewares.security.headers.contentTypeNosniff=true"
      - "traefik.http.middlewares.security.headers.forceSTSHeader=true"
      - "traefik.http.middlewares.security.headers.stsPreload=true"
      - "traefik.http.middlewares.security.headers.frameDeny=true"

  redis:
    image: redis:alpine
    container_name: redis
    restart: unless-stopped
    volumes:
      - ./redis:/data
    networks:
      - web
    expose:
      - 6379

networks:
  web:
    external: true
8
archive/docker_config/traefik/labels.txt
Normal file
@ -0,0 +1,8 @@
# for all web-facing services
traefik.http.routers.router-name.rule=Host(`subdomain.jafner.net`)
traefik.http.routers.router-name.tls=true
traefik.http.routers.router-name.tls.certresolver=lets-encrypt
# for restricting service to LAN IPs
traefik.http.routers.router-name.middlewares=lan-only@file
# for setting a non-default port
traefik.http.services.service-name.loadbalancer.server.port=1234
28
archive/docker_config/traefik/traefik.toml
Normal file
@ -0,0 +1,28 @@
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[metrics]
  [metrics.prometheus]

[certificatesResolvers.lets-encrypt.acme]
  email = "jafner425@gmail.com"
  storage = "acme.json"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

[providers.docker]
  watch = true
  network = "web"

[providers.file]
  filename = "traefik_dynamic.toml"
18
archive/docker_config/traefik/traefik_dynamic.toml
Normal file
@ -0,0 +1,18 @@
[http.middlewares]
  [http.middlewares.lan-only.ipWhiteList]
    sourceRange = ["127.0.0.1/32", "192.168.1.1/24"]
  [http.middlewares.simpleauth.basicAuth]
    usersFile = "/.htpasswd"
  [http.middlewares.authelia.forwardAuth]
    address = "http://authelia:9091/api/verify?rd=https://auth.jafner.net"
    trustForwardHeader = true
    authResponseHeaders = ["Remote-User", "Remote-Groups", "Remote-Name", "Remote-Email"]

[http.routers.api]
  rule = "Host(`traefik.jafner.net`)"
  entrypoints = ["websecure"]
  middlewares = ["authelia@file"]
  service = "api@internal"
  [http.routers.api.tls]
    certResolver = "lets-encrypt"
23
archive/docker_config/unifi_controller/docker-compose.yml
Normal file
@ -0,0 +1,23 @@
version: '3'
services:
  unifi-controller:
    image: lscr.io/linuxserver/unifi-controller
    container_name: unifi_controller
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - MEM_LIMIT=1024M
      - MEM_STARTUP=1024M
    volumes:
      - ${PWD}/config:/config
    ports:
      - 3478:3478/udp # unifi STUN port
      - 10001:10001/udp # AP discovery port
      - 8080:8080 # communicate with devices
      - 8443:8443 # web admin port
      #- 1900:1900/udp # discoverability on layer 2
      - 8843:8843 # guest portal https
      - 8880:8880 # guest portal http
      - 6789:6789 # mobile throughput test port
      - 5514:5514/udp # remote syslog
21
archive/docker_config/uptime-kuma/docker-compose.yml
Normal file
@ -0,0 +1,21 @@
# Simple docker-compose.yml
# You can change your port or volume location

version: '3.3'

services:
  uptime-kuma:
    image: louislam/uptime-kuma
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - ./data:/app/data
    networks:
      - web
    labels:
      - traefik.http.routers.uptime-kuma.rule=Host(`uptime.jafner.net`)
      - traefik.http.routers.uptime-kuma.tls.certresolver=lets-encrypt

networks:
  web:
    external: true
1
archive/docker_config/vaultwarden/.env
Normal file
@ -0,0 +1 @@
DOCKER_DATA=/home/joey/docker_data/vaultwarden
17
archive/docker_config/vaultwarden/docker-compose.yml
Normal file
@ -0,0 +1,17 @@
version: '3'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    networks:
      - web
    volumes:
      - ${DOCKER_DATA}:/data
    labels:
      - traefik.http.routers.vaultwarden.rule=Host(`bitwarden.jafner.net`)
      - traefik.http.routers.vaultwarden.tls=true
      - traefik.http.routers.vaultwarden.tls.certresolver=lets-encrypt
networks:
  web:
    external: true
1
archive/docker_config/wikijs/.env
Normal file
@ -0,0 +1 @@
DOCKER_DATA=/home/joey/docker_data/wikijs
48
archive/docker_config/wikijs/docker-compose.yml
Normal file
@ -0,0 +1,48 @@
version: '3'
services:
  db:
    image: postgres:11-alpine
    container_name: wikijs_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: wiki
      POSTGRES_PASSWORD: wikijsrocks
      POSTGRES_USER: wikijs
    networks:
      - wikijs
    logging:
      driver: "none"
    volumes:
      - wikijs_db:/var/lib/postgresql/data

  wiki:
    image: requarks/wiki:2
    container_name: wikijs_wiki
    restart: unless-stopped
    depends_on:
      - db
    networks:
      - web
      - wikijs
    environment:
      DB_TYPE: postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: wikijs
      DB_PASS: wikijsrocks
      DB_NAME: wiki
    volumes:
      - ${PWD}/id_rsa:/id_rsa
      - ${PWD}/id_rsa.pub:/id_rsa.pub
    labels:
      - traefik.http.routers.wikijs.rule=Host(`wikijs.jafner.net`)
      - traefik.http.routers.wikijs.tls.certresolver=lets-encrypt
      - traefik.http.services.wikijs.loadbalancer.server.port=3000

volumes:
  wikijs_db:

networks:
  wikijs:
  web:
    external: true
@ -0,0 +1,6 @@
ORIG_SERVERURL="www.jafner.net"
ORIG_SERVERPORT="53820"
ORIG_PEERDNS="10.13.13.1"
ORIG_PEERS="joey-phone,joey-xps13,maddie-phone"
ORIG_INTERFACE="10.13.13"
ORIG_ALLOWEDIPS="0.0.0.0/0, ::/0"
4
archive/docker_config/wireguard/config/coredns/Corefile
Normal file
@ -0,0 +1,4 @@
. {
    loop
    forward . /etc/resolv.conf
}
@ -0,0 +1,10 @@
[Interface]
Address = 10.13.13.7
PrivateKey = ***REMOVED***
ListenPort = 51820
DNS = 10.13.13.1

[Peer]
PublicKey = 9MgzBzqw0J+xX/IJJHTBUTLRaT82p7fjv+1W5Y6vVE8=
Endpoint = www.jafner.net:53820
AllowedIPs = 0.0.0.0/0, ::/0
Binary file not shown.
@ -0,0 +1 @@
***REMOVED***
@ -0,0 +1 @@
+jGu2Iz7Wy7m3bfxcE3IXAo6bnUGtKVWUtD+yH7vc2M=
@ -0,0 +1,10 @@
[Interface]
Address = 10.13.13.8
PrivateKey = wNeoTRbK++UbDbvGR9k3DGKYsjhoOKMAO8QgV2aO5n4=
ListenPort = 51820
DNS = 10.13.13.1

[Peer]
PublicKey = 9MgzBzqw0J+xX/IJJHTBUTLRaT82p7fjv+1W5Y6vVE8=
Endpoint = www.jafner.net:53820
AllowedIPs = 0.0.0.0/0, ::/0
Binary file not shown.
@ -0,0 +1 @@
wNeoTRbK++UbDbvGR9k3DGKYsjhoOKMAO8QgV2aO5n4=
@ -0,0 +1 @@
***REMOVED***
@ -0,0 +1,10 @@
[Interface]
Address = 10.13.13.9
PrivateKey = MFUebhcmzAMVDy1DYvbj10K9AcDEaedG0k/MFWtVOWo=
ListenPort = 51820
DNS = 10.13.13.1

[Peer]
PublicKey = 9MgzBzqw0J+xX/IJJHTBUTLRaT82p7fjv+1W5Y6vVE8=
Endpoint = www.jafner.net:53820
AllowedIPs = 0.0.0.0/0, ::/0
Binary file not shown.
@ -0,0 +1 @@
MFUebhcmzAMVDy1DYvbj10K9AcDEaedG0k/MFWtVOWo=
@ -0,0 +1 @@
qh5czZ7MHzjNgUqxZ/fsAlMnaVHoUU8mxcJCXwb0Pxs=
@ -0,0 +1 @@
***REMOVED***
@ -0,0 +1 @@
9MgzBzqw0J+xX/IJJHTBUTLRaT82p7fjv+1W5Y6vVE8=
10
archive/docker_config/wireguard/config/templates/peer.conf
Normal file
@ -0,0 +1,10 @@
[Interface]
Address = ${CLIENT_IP}
PrivateKey = $(cat /config/${PEER_ID}/privatekey-${PEER_ID})
ListenPort = 51820
DNS = ${PEERDNS}

[Peer]
PublicKey = $(cat /config/server/publickey-server)
Endpoint = ${SERVERURL}:${SERVERPORT}
AllowedIPs = ${ALLOWEDIPS}
@ -0,0 +1,6 @@
[Interface]
Address = ${INTERFACE}.1
ListenPort = 51820
PrivateKey = $(cat /config/server/privatekey-server)
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
22
archive/docker_config/wireguard/config/wg0.conf
Normal file
@ -0,0 +1,22 @@
[Interface]
Address = 10.13.13.1
ListenPort = 51820
PrivateKey = ***REMOVED***
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# peer_joey-phone
PublicKey = +jGu2Iz7Wy7m3bfxcE3IXAo6bnUGtKVWUtD+yH7vc2M=
AllowedIPs = 10.13.13.7/32

[Peer]
# peer_joey-xps13
PublicKey = ***REMOVED***
AllowedIPs = 10.13.13.8/32

[Peer]
# peer_maddie-phone
PublicKey = qh5czZ7MHzjNgUqxZ/fsAlMnaVHoUU8mxcJCXwb0Pxs=
AllowedIPs = 10.13.13.9/32
23
archive/docker_config/wireguard/docker-compose.yml
Normal file
@ -0,0 +1,23 @@
version: "3"
services:
  wireguard:
    image: linuxserver/wireguard
    container_name: wireguard
    restart: unless-stopped
    ports:
      - 53820:51820/udp
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - PEERS=joey-phone,joey-xps13,maddie-phone
      - SERVERURL=www.jafner.net
      - SERVERPORT=53820
    volumes:
      - ./config:/config
      - /lib/modules:/lib/modules
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
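For completeness, a hedged smoke test for this container. The layout of the generated files under `./config` follows the linuxserver/wireguard image's conventions and is assumed here rather than documented in this repo:

```bash
docker compose up -d
# Generated server keys, per-peer configs, and QR codes land under ./config (assumed layout).
ls ./config
docker logs wireguard 2>&1 | tail -n 20   # the image prints peer QR codes at startup
```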
25
archive/doradash/API.md
Normal file
@ -0,0 +1,25 @@
# Design doc for API

Stuff we're gonna need:

- GET for each of the four metrics, plus most recent "rating".
- POST for data-generating events: deployment, outage/restoration.

Given that this service relies on having data *pushed* to it, we can only ever return metrics based on the most recent deployment or outage/restoration event.

So with that in mind, we have the following design for each of our endpoints:

| Method | Description | Endpoint | Request Payload | Response Payload |
|:------:|:-----------:|:--------:|:---------------:|:----------------:|
| GET | Get deployment frequency | /api/metrics/deployment_frequency | - | {"TIMESTAMP", "COUNT", "UNIT"} |
| GET | Get lead time for changes | /api/metrics/lead_time_for_changes | - | {"TIMESTAMP", "COMPUTED_TIME"} |
| GET | Get time to restore service | /api/metrics/time_to_restore_service | - | {"TIMESTAMP", "COMPUTED_TIME"} |
| GET | Get change failure rate | /api/metrics/change_failure_rate | - | {"TIMESTAMP", "RATE"} |
| GET | Get current rating | /api/metrics/vanity | - | {"TIMESTAMP", "RATING"} |
| POST | Post new deployment event | /api/events/deployment | {"TIMESTAMP", "{INCLUDED_GIT_HASHES}", "OLDEST_COMMIT_TIMESTAMP", "DEPLOY_RETURN_STATUS"} | OK |
| POST | Post new service availability change event | /api/events/service_availability | {"TIMESTAMP", "SERVICE_ID", "EVENT_TYPE"} | OK |

### Notes
- As-is, this API leaves no room for versioning, publisher IDs, or meaningful correlation between deployments and service availability changes.
- As-is, we have no identification, authentication, or authorization systems.
- As-is, we have no way to view the dataset from which the values are calculated.
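To make the table concrete, here is a sketch of one round trip against these endpoints. The payload field names follow the test scripts later in this diff rather than the ALL_CAPS placeholders above:

```bash
# Post a deployment event, then read back a derived metric.
curl -X POST 'http://127.0.0.1:8000/api/events/deployment' \
  -H 'Content-Type: application/json' \
  -d '{"event_timestamp": "2024-03-12T14:29:46-0700", "hashes": ["d7d8937e8f169727852dea77bae30a8749fe21fc"], "timestamp_oldest_commit": "2024-03-12T12:00:00-0700", "deploy_return_status": "Success"}'
curl 'http://127.0.0.1:8000/api/metrics/deployment_frequency'
```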
22
archive/doradash/README.Docker.md
Normal file
@ -0,0 +1,22 @@
### Building and running your application

When you're ready, start your application by running:
`docker compose up --build`.

Your application will be available at http://localhost:8000.

### Deploying your application to the cloud

First, build your image, e.g.: `docker build -t myapp .`.
If your cloud uses a different CPU architecture than your development
machine (e.g., you are on a Mac M1 and your cloud provider is amd64),
you'll want to build the image for that platform, e.g.:
`docker build --platform=linux/amd64 -t myapp .`.

Then, push it to your registry, e.g. `docker push myregistry.com/myapp`.

Consult Docker's [getting started](https://docs.docker.com/go/get-started-sharing/)
docs for more detail on building and pushing.

### References
* [Docker's Python guide](https://docs.docker.com/language/python/)
120
archive/doradash/app/main.py
Normal file
@ -0,0 +1,120 @@
from datetime import datetime, timedelta
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from pydantic.functional_validators import field_validator
import re

app = FastAPI()

class Deployment(BaseModel):
    event_timestamp: datetime = None  # should look like 2024-03-12T14:29:46-0700
    hashes: list = None  # each should match an sha1 hash format regex (\b[0-9a-f]{5,40}\b)
    timestamp_oldest_commit: datetime = None  # should look like 2024-03-12T14:29:46-0700
    deploy_return_status: str = None  # should be "Success", "Failure", or "Invalid"

    @field_validator("event_timestamp", "timestamp_oldest_commit")
    def validate_datetime(cls, d):
        # oh lord jesus datetime validation
        date_text = str(d)
        iso8601_regex = r"^([\+-]?\d{4}(?!\d{2}\b))((-?)((0[1-9]|1[0-2])(\3([12]\d|0[1-9]|3[01]))?|W([0-4]\d|5[0-2])(-?[1-7])?|(00[1-9]|0[1-9]\d|[12]\d{2}|3([0-5]\d|6[1-6])))([T\s]((([01]\d|2[0-3])((:?)[0-5]\d)?|24\:?00)([\.,]\d+(?!:))?)?(\17[0-5]\d([\.,]\d+)?)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?)?)?$"
        if re.match(iso8601_regex, date_text):
            return d
        else:
            raise ValueError(f"date must be in ISO-8601 format: {d}")

    @field_validator("hashes")
    def validate_hashes(cls, hashes):
        if not len(hashes) > 0:
            raise ValueError("commit hash list cannot be empty")
        for h in hashes:
            if not re.match(r"\b[0-9a-f]{5,40}\b", h):
                raise ValueError(f"hash not valid sha1: {h}")
        # return only after every hash has been checked
        # (an early return inside the loop would validate just the first element)
        return hashes

    @field_validator("deploy_return_status")
    def validate_return_status(cls, status):
        if status not in ["Success", "Failure", "Invalid"]:
            raise ValueError(f"return_status must be one of \"Success\", \"Failure\", or \"Invalid\": {status}")
        else:
            return status

class ServiceAvailabilityChange(BaseModel):
    event_timestamp: datetime  # should look like 2024-03-12T14:29:46-0700
    service_id: str  # practically arbitrary, but maybe useful for later
    event_type: str  # should be "outage" or "restoration"

    @field_validator("event_type")
    def validate_balanced_events(cls, event_type):
        # since all inputs are validated one at a time, we can simplify the balancing logic:
        # a restoration is only valid if the already-accepted events leave an unmatched outage
        if event_type not in ["outage", "restoration"]:
            raise ValueError(f"event_type must be \"outage\" or \"restoration\": {event_type}")
        if event_type == "restoration":
            open_outages = 0
            for event in service_events:
                if event.event_type == "outage":
                    open_outages += 1
                elif open_outages > 0:
                    open_outages -= 1
            if open_outages == 0:
                raise ValueError("no preceding outage for restoration event")
        return event_type

# please replace "store the dataset in an array in memory" before deploying
deployments = []
service_events = []

@app.post("/api/events/deployment")
def append_deployment(deployment: Deployment):
    deployments.append(deployment)
    return deployment

@app.post("/api/events/service_availability")
def append_service_availability(service_event: ServiceAvailabilityChange):
    service_events.append(service_event)
    return service_event

@app.get("/api/metrics/deployment_frequency")
def get_deployment_frequency():
    deploys_in_day = {}
    for deployment in deployments:
        if deployment.event_timestamp.date() in deploys_in_day:
            deploys_in_day[deployment.event_timestamp.date()] += 1
        else:
            deploys_in_day[deployment.event_timestamp.date()] = 1
    if not deploys_in_day:
        raise HTTPException(status_code=404, detail="no deployments recorded")
    return len(deployments) / len(deploys_in_day)

@app.get("/api/metrics/lead_time_for_changes")
def get_lead_time_for_changes():
    time_deltas = []
    for deployment in deployments:
        time_delta = deployment.event_timestamp - deployment.timestamp_oldest_commit
        time_deltas.append(time_delta.total_seconds())  # .seconds alone would drop whole days
    if not time_deltas:
        raise HTTPException(status_code=404, detail="no deployments recorded")
    lead_time_for_changes = sum(time_deltas) / len(time_deltas)
    return str(timedelta(seconds=lead_time_for_changes))  # standardize output format?

@app.get("/api/metrics/time_to_restore_service")
def get_time_to_restore_service():
    # check for balanced events (a preceding outage for each restoration)
    # for each balanced root-level event, get the time delta from first outage to final restoration
    # append time delta to array of time deltas
    # return average of time deltas array
    time_deltas = []
    open_outages = 0
    first_outage = None
    for event in sorted(service_events, key=lambda e: e.event_timestamp):
        if event.event_type == "outage":
            if open_outages == 0:
                first_outage = event.event_timestamp
            open_outages += 1
        elif open_outages > 0:
            open_outages -= 1
            if open_outages == 0:
                time_deltas.append((event.event_timestamp - first_outage).total_seconds())
    if not time_deltas:
        raise HTTPException(status_code=404, detail="no restored outages recorded")
    return str(timedelta(seconds=sum(time_deltas) / len(time_deltas)))

@app.get("/api/metrics/change_failure_rate")
def get_change_failure_rate():
    success_counter = 0
    failure_counter = 0
    for deployment in deployments:
        if deployment.deploy_return_status == "Invalid":
            pass
        elif deployment.deploy_return_status == "Success":
            success_counter += 1
        else:
            failure_counter += 1
    if success_counter + failure_counter == 0:
        raise HTTPException(status_code=404, detail="no deployments recorded")
    return failure_counter / (success_counter + failure_counter)

# @app.get("/api/metrics/vanity")
# def get_vanity():
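To run this service locally for the tests and scripts below, something like the following should work. The `main:app` module path assumes you launch from inside `app/`, which is inferred from this diff's file layout rather than documented in the repo:

```bash
# From the doradash project root (paths assumed from this diff's layout):
pip install -r requirements.txt requests   # requests is used by main_test.py but absent from requirements.txt
cd app
uvicorn main:app --reload --port 8000      # serves at http://127.0.0.1:8000, as the tests expect
```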
18
archive/doradash/app/main_test.py
Normal file
@ -0,0 +1,18 @@
from datetime import datetime
import requests

valid_deployment = {
    "event_timestamp": str(datetime.now()),
    "hashes": ["d7d8937e8f169727852dea77bae30a8749fe21fc"],
    "timestamp_oldest_commit": str(datetime.now()),  # must match the Deployment model's field name
    "deploy_return_status": "Success"
}

def test_valid_deployment():
    endpoint = "http://127.0.0.1:8000/api/events/deployment"
    # pass the dict directly via json=; wrapping it in json.dumps() would double-encode the body
    response = requests.post(endpoint, json=valid_deployment)
    print(response)
    print(valid_deployment)
    assert response.status_code == 200  # requires the app to be running locally
4
archive/doradash/requirements.txt
Normal file
@ -0,0 +1,4 @@
uvicorn==0.28.0
fastapi==0.110.0
pydantic==2.6.4
pytest==8.1.1
6
archive/doradash/test_deployments.sh
Executable file
@ -0,0 +1,6 @@
#!/bin/bash
curl -X 'POST' 'http://127.0.0.1:8000/api/events/deployment' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T22:02:38.689Z","hashes": ["6ece311c24dd6a4b3dbbf8525a3a61854a32838d","d7d8937e8f169727852dea77bae30a8749fe21fc"],"timestamp_oldest_commit": "2024-03-11T22:02:38.689Z","deploy_return_status": "Failure"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/deployment' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T23:03:38.689Z","hashes": ["f5521851965c4866c5dc0e8edc9d5e2a40b5ebe6","b8c3bb11a978dbcbe507c53c62f715a728cdfd52"],"timestamp_oldest_commit": "2024-03-10T22:05:38.689Z","deploy_return_status": "Success"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/deployment' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-11T21:03:38.689Z","hashes": ["ae35c9c0e4f71ddf280bd297c42f04f2c0ce3838","d53e974d7e60295ed36c38a57870d1a6bfc7e399"],"timestamp_oldest_commit": "2024-03-11T20:05:38.689Z","deploy_return_status": "Success"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/deployment' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-10T23:03:38.689Z","hashes": ["b6a707faa68bc987ae549c0f36d053a412bd40da","b6a707faa68bc987ae549c0f36d053a412bd40da"],"timestamp_oldest_commit": "2024-03-10T14:05:38.689Z","deploy_return_status": "Success"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/deployment' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-02-10T23:03:38.689Z","hashes": ["94036270dd329559b58edc6f8780e03bd94509a3","b6d1abc911c08778424fb244de1f172f54905b81"],"timestamp_oldest_commit": "2024-02-09T14:05:38.689Z","deploy_return_status": "Invalid"}'
11
archive/doradash/test_service_events.sh
Executable file
@ -0,0 +1,11 @@
#!/bin/bash
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-11T18:02:00.000Z","service_id": "plex","event_type": "outage"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-11T19:02:00.000Z","service_id": "plex","event_type": "restoration"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-11T20:02:00.000Z","service_id": "nextcloud","event_type": "outage"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-11T21:02:00.000Z","service_id": "nextcloud","event_type": "restoration"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-11T22:02:00.000Z","service_id": "nextcloud","event_type": "outage"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T01:02:00.000Z","service_id": "nextcloud","event_type": "restoration"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T02:02:00.000Z","service_id": "plex","event_type": "outage"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T03:02:00.000Z","service_id": "plex","event_type": "restoration"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T04:02:00.000Z","service_id": "plex","event_type": "outage"}'
curl -X 'POST' 'http://127.0.0.1:8000/api/events/service_availability' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"event_timestamp": "2024-03-12T04:02:00.000Z","service_id": "plex","event_type": "restoration"}'
25
archive/dotfiles/.gitconfig
Normal file
@ -0,0 +1,25 @@
[user]
    name = Joey Hafner
    email = joey@jafner.net
[init]
    defaultBranch = main
[filter "lfs"]
    clean = git-lfs clean -- %f
    smudge = git-lfs smudge -- %f
    process = git-lfs filter-process
    required = true
[credential "https://huggingface.co"]
    username = ShortSwedishMan
[core]
    editor = nano
    excludesfile = ~/.gitignore
    autocrlf = input
[color]
    branch = auto
    diff = auto
    interactive = auto
    status = auto
[push]
    default = simple
[pull]
    rebase = true
BIN
archive/dotfiles/.profile_img.webp
Normal file
Binary file not shown. (image, 5.2 KiB)
0
archive/dotfiles/README.md
Normal file
BIN
archive/dotfiles/config/goxlr/Default - Red.goxlr
Normal file
Binary file not shown.
Some files were not shown because too many files have changed in this diff.