Compare commits


29 Commits

Author SHA1 Message Date
ᗪєνιη ᗷυнʟ
93774a4ed6 [lidarr] use common chart (#123) 2020-11-08 13:25:59 -05:00
ᗪєνιη ᗷυнʟ
5cfe91e0f3 [sonarr] use common chart (#122) 2020-11-08 13:00:49 -05:00
nolte
aacd8ceac0 [mosquitto] add Prometheus Exporter as Sidecar Container (#118)
Signed-off-by: nolte <nolte07@googlemail.com>

Co-authored-by: nolte <nolte07@googlemail.com>
2020-11-08 12:41:30 -05:00
ᗪєνιη ᗷυнʟ
799111dddb [radarr] use new common chart (#121)
* radarr: use new common chart

* jackett

* radarr fix newline
2020-11-08 12:39:37 -05:00
Bernd Schörgers
2aa2718559 [jackett] Bump library, add ingress test (#117) 2020-11-07 16:40:14 -05:00
Bernd Schörgers
2b158892e3 [common] Add capabilities to determine apiVersion (#116)
* [common] Add capabilities to determine apiVersion

* [common] Add capabilities to determine apiVersion
2020-11-07 08:17:31 -05:00
Bernd Schörgers
45c9f3c39e [jackett] Migrate to common library (#113) 2020-11-06 16:40:53 -05:00
Bernd Schörgers
c7f15f37a2 [common] Fix syntax error (#114) 2020-11-06 16:22:57 -05:00
Bernd Schörgers
6b9650f348 [common] Fix classes logic (#112) 2020-11-06 15:54:25 -05:00
Bernd Schörgers
f36de85c15 [common] Better defaults for service and ingress (#111) 2020-11-06 14:28:01 -05:00
Bernd Schörgers
a3da4245f3 [media-common] Migrate to library chart (#109) 2020-11-06 13:46:50 -05:00
Patrik Boström
bc17f3cc7b [home-assistant] Added metricRelabelings for service monitor (#101)
* Added metricRelabelings for service monitor

* Fixes

* Added end

* Changed chart version to 2.6.0

Co-authored-by: Jeff Billimek <jeff@billimek.com>
2020-10-28 08:40:20 -04:00
nolte
cce27da342 bump up esphome (#103)
Signed-off-by: nolte <nolte07@googlemail.com>

Co-authored-by: nolte <nolte07@googlemail.com>
Co-authored-by: Jeff Billimek <jeff@billimek.com>
2020-10-28 08:32:07 -04:00
Michael Kötter
3a08566dd4 fix stable repo (#104) 2020-10-28 08:12:20 -04:00
Michael Kötter
2282b4113b add extraVolumes & extraVolumeMounts support (#98)
Co-authored-by: Jeff Billimek <jeff@billimek.com>
2020-10-26 10:08:00 -04:00
Michael Kötter
714708050a add extraEnv etc., extraVolumes & extraVolumeMounts (#99) 2020-10-26 08:25:07 -04:00
Patrik Boström
f55c117431 [piaware] Added support for BEASTHOST and BEASTPORT (#93)
Signed-off-by: Patrik Boström <patbos@patbos.com>
2020-10-20 11:25:14 -04:00
ᗪєνιη ᗷυнʟ
0470f937bf [zwave2mqtt] Remove the persistent /usr/local/etc/openzwave volume (#90)
* Remove the persistent /usr/local/etc/openzwave volume

* Bump chart version

* bump to major version

* add upgrade instructions
2020-10-14 10:37:26 -04:00
nolte
930df4c36b [home-assistant] pump up esphome chart version (#89)
Co-authored-by: nolte <nolte07@googlemail.com>
Co-authored-by: ᗪєνιη ᗷυнʟ <onedr0p@users.noreply.github.com>
2020-10-13 16:04:55 -04:00
Nicholas St. Germain
a1a0fd4c99 Merge pull request #85 from CuBiC3D/master
[media-common] Bump charts depending on media-common
2020-10-12 19:36:02 -05:00
Nicholas St. Germain
0487aa49fb Merge branch 'master' into master 2020-10-12 19:11:13 -05:00
Jeff Billimek
490dc82894 [multiple] Bump various chart image versions (#88)
* Bump various chart image versions

* friagte: 0.6.0
* home-assistant: 0.116.1
* plex: 1.20.2.3402-0fec14d92
* teslamate: 1.20.0

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* fix teslamate postgres dependency chart

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* remove global reference

Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-10-09 13:16:16 -04:00
CuBiC
54efffaf52 Merge branch 'master' into master 2020-10-09 01:57:32 +02:00
Hugo Fonseca
8be3edfc59 [Adguard-home] Allow to mount secret with certs so we can set the tls … (#87)
* Adguard-home: Allow to mount secret with certs so we can set the tls configs with these

* adguard-home bump to 2.2.0
2020-10-08 15:38:10 -04:00
CuBiC
20047cade1 Merge branch 'master' into master 2020-10-07 19:38:08 +02:00
Denis
eb2f4bac88 [uptimerobot-prometheus] Support annotations in Service (#86)
* bump chart version

* [uptimerobot-prometheus] Support annotations in Service (#1)

* add annotations to service

* add example annotations to values.yaml

* fix trailing spaces
2020-10-07 13:26:13 -04:00
Waldemar Faist
b4dda5a1ad Bump charts depending on media-common
Signed-off-by: Waldemar Faist <cubic@coldice.net>
2020-10-07 17:54:01 +02:00
Ryan Holt
7f1f2b9150 Merge pull request #84 from CuBiC3D/master
[media-common] Fixes HELM error on extraIngresses
2020-10-07 11:25:44 -04:00
Waldemar Faist
4bde4fa33f Fixes HELM error on extraIngresses
Signed-off-by: Waldemar Faist <cubic@coldice.net>
2020-10-07 14:01:34 +02:00
120 changed files with 1634 additions and 1361 deletions

View File

@@ -36,7 +36,7 @@ jobs:
./get_helm.sh
- name: Add dependency chart repos
run: |
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo add stable https://charts.helm.sh/stable
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.0.0
with:
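
The old chart repository URL at `kubernetes-charts.storage.googleapis.com` was deprecated in favor of `charts.helm.sh/stable`; a minimal sketch of making the same switch on a local machine (assuming the repo is registered under the name `stable`):

```bash
# Point the existing "stable" repo at the new, non-deprecated location
helm repo remove stable
helm repo add stable https://charts.helm.sh/stable
helm repo update
```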

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v0.102.0
description: DNS proxy as ad-blocker for local network
name: adguard-home
version: 2.1.1
version: 2.2.0
keywords:
- adguard-home
- adguard

View File

@@ -83,6 +83,11 @@ spec:
- name: config
mountPath: /opt/adguardhome/conf
readOnly: false
{{- if .Values.tlsSecretName }}
- name: certs
mountPath: /certs
readOnly: false
{{- end }}
ports:
- name: http
{{- if .Values.configAsCode.enabled }}
@@ -153,6 +158,11 @@ spec:
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumes:
{{- if .Values.tlsSecretName }}
- name: certs
secret:
secretName: {{ .Values.tlsSecretName }}
{{- end }}
{{- if .Values.configAsCode.enabled }}
- name: configmap
configMap:

View File

@@ -165,6 +165,10 @@ configAsCode:
verbose: false
schema_version: 6
tlsSecretName: ""
# name of the secret that contains the tls cert and key.
# this secret will be mounted inside the adguard container /certs path. e.g. works with cert-manager
image:
repository: adguard/adguardhome
# Image tag is set via charts appVersion. If you want to override the tag, specify it here
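
A hedged sketch of wiring a TLS secret into the new `tlsSecretName` value; the secret name below is an assumption, and any `kubernetes.io/tls` secret (for example one issued by cert-manager) should work since it is simply mounted at `/certs`:

```yaml
# Assuming a TLS secret already exists, e.g. created with
#   kubectl create secret tls adguard-home-tls --cert=tls.crt --key=tls.key
# or issued by cert-manager, reference it in the adguard-home values:
tlsSecretName: adguard-home-tls
```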

12
charts/common/Chart.yaml Normal file
View File

@@ -0,0 +1,12 @@
apiVersion: v2
name: common
description: Function library for k8s-at-home charts
type: library
version: 1.0.4
keywords:
- k8s-at-home
- common
home: https://github.com/k8s-at-home/charts/tree/master/charts/common
maintainers:
- name: BJW-S
email: me@juggels.online

30
charts/common/README.md Normal file
View File

@@ -0,0 +1,30 @@
# Library chart for k8s@home media charts
## **THIS CHART IS NOT MEANT TO BE INSTALLED DIRECTLY**
This is a [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between k8s@home charts.
## Introduction
This chart provides common template helpers which can be used to develop new charts using [Helm](https://helm.sh) package manager.
## TL;DR
```yaml
dependencies:
- name: common
version: 0.x.x
repository: https://k8s-at-home.com/charts/
```
```bash
$ helm dependency update
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "common.names.fullname" . }}
data:
myvalue: "Hello World"
```

View File

@@ -0,0 +1,24 @@
{{- define "common.all" -}}
{{- /* Merge the local chart values and the common chart defaults */ -}}
{{- $defaultValues := .Values.common -}}
{{- $_ := deepCopy $defaultValues | merge .Values -}}
{{- $_ := unset .Values "common" -}}
{{- /* Enable OpenVPN VPN add-on if required */ -}}
{{- if .Values.addons.vpn.enabled }}
{{- include "common.addon.vpn" . }}
{{- end -}}
{{- /* Build the templates */ -}}
{{- include "common.pvc" . }}
{{- print "---" | nindent 0 -}}
{{- if eq .Values.controllerType "statefulset" }}
{{- include "common.statefulset" . | nindent 0 }}
{{ else }}
{{- include "common.deployment" . | nindent 0 }}
{{- end -}}
{{- print "---" | nindent 0 -}}
{{ include "common.service" . | nindent 0 }}
{{- print "---" | nindent 0 -}}
{{ include "common.ingress" . | nindent 0 }}
{{- end -}}
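
In practice a chart that consumes this library needs only a dependency entry, a single template that includes `common.all`, and flat top-level values. A minimal sketch, mirroring how the jackett chart in this changeset is wired (the template file name is an assumption):

```yaml
# Chart.yaml (excerpt)
dependencies:
  - name: common
    repository: https://k8s-at-home.com/charts/
    version: ^1.0.4

# templates/common.yaml (file name assumed) -- its entire content is:
#   {{ include "common.all" . }}

# values.yaml (excerpt)
image:
  repository: linuxserver/jackett
  pullPolicy: IfNotPresent
  tag: version-v0.16.2106
service:
  port:
    port: 9117
ingress:
  enabled: false
```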

View File

@@ -0,0 +1,55 @@
{{- define "common.deployment" -}}
apiVersion: {{ include "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "common.names.fullname" . }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "common.labels.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "common.labels.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
{{- include "common.controller.mainContainer" . | nindent 6 }}
{{- with .Values.additionalContainers }}
{{- toYaml . | nindent 6 }}
{{- end }}
volumes:
{{- include "common.controller.volumes" . | nindent 6 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,25 @@
{{- define "common.ingress" -}}
{{- if .Values.ingress.enabled -}}
{{- $svcPort := .Values.service.port.port -}}
{{- /* Generate primary ingress */ -}}
{{- $ingressValues := .Values.ingress -}}
{{- $_ := set $ingressValues "svcPort" $svcPort -}}
{{- $_ := set . "ObjectValues" (dict "ingress" $ingressValues) -}}
{{- include "common.classes.ingress" . }}
{{- /* Generate additional ingresses as required */ -}}
{{- range $index, $extraIngress := .Values.ingress.additionalIngresses }}
{{- if $extraIngress.enabled -}}
{{- print ("---") | nindent 0 -}}
{{- $ingressValues := $extraIngress -}}
{{- $_ := set $ingressValues "svcPort" $svcPort -}}
{{- if not $ingressValues.nameSuffix -}}
{{- $_ := set $ingressValues "nameSuffix" $index -}}
{{ end -}}
{{- $_ := set . "ObjectValues" (dict "ingress" $ingressValues) -}}
{{- include "common.classes.ingress" . -}}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,19 +1,25 @@
{{/*
Default NOTES.txt content.
*/}}
{{- define "common.notes.defaultNotes" -}}
{{- $svcPort := .Values.service.port.port -}}
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ .host }}{{ (first .paths).path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "common.names.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
You can watch the status of by running 'kubectl get svc -w {{ include "common.names.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "common.names.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ $svcPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "common.names.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
kubectl port-forward $POD_NAME 8080:{{ $svcPort }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- define "common.pvc" -}}
{{- /* Generate pvc as required */ -}}
{{- $context := . -}}
{{- range $index, $PVC := .Values.persistence }}
{{- if and $PVC.enabled (not (or $PVC.emptyDir $PVC.existingClaim)) -}}
{{- $persistenceValues := $PVC -}}
{{- if not $persistenceValues.nameSuffix -}}
{{- $_ := set $persistenceValues "nameSuffix" $index -}}
{{- end -}}
{{- $_ := set $context "ObjectValues" (dict "persistence" $persistenceValues) -}}
{{- print ("---") | nindent 0 -}}
{{- include "common.classes.pvc" $context -}}
{{- end }}
{{- end }}
{{- end }}
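
A hedged sketch of the `persistence` values this helper consumes: the `config` entry below is rendered as a PersistentVolumeClaim (named with its map key as the suffix), while the `cache` entry is skipped here and surfaces as an emptyDir volume in the controller (both entries are illustrative):

```yaml
persistence:
  config:
    enabled: true
    mountPath: /config
    accessMode: ReadWriteOnce
    size: 1Gi
  cache:
    enabled: true
    emptyDir: true
    mountPath: /cache
```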

View File

@@ -0,0 +1,5 @@
{{- define "common.service" -}}
{{- if .Values.service.enabled -}}
{{- include "common.classes.service" . }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,56 @@
{{- define "common.statefulset" -}}
apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }}
kind: StatefulSet
metadata:
name: {{ template "common.names.fullname" . }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "common.labels.selectorLabels" . | nindent 6 }}
serviceName: {{ include "common.names.fullname" . }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "common.labels.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
{{- include "common.controller.mainContainer" . | nindent 6 }}
{{- with .Values.additionalContainers }}
{{- toYaml . | nindent 6 }}
{{- end }}
volumes:
{{- include "common.controller.volumes" . | nindent 6 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,26 @@
{{/*
The OpenVPN configmaps to be included
*/}}
{{- define "common.addon.vpn.configmap" -}}
{{- if or .Values.addons.vpn.configFile .Values.addons.vpn.scripts.up .Values.addons.vpn.scripts.down }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "common.names.fullname" . }}-vpn
labels:
{{- include "common.labels" . | nindent 4 }}
data:
{{- if .Values.addons.vpn.configFile }}
vpnConfigfile: |-
{{- .Values.addons.vpn.configFile | nindent 4}}
{{- end }}
{{- if .Values.addons.vpn.scripts.up }}
up.sh: |-
{{- .Values.addons.vpn.scripts.up | nindent 4}}
{{- end }}
{{- if .Values.addons.vpn.scripts.down }}
down.sh: |-
{{- .Values.addons.vpn.scripts.down | nindent 4}}
{{- end }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,21 @@
{{/*
The OpenVPN networkpolicy to be included
*/}}
{{- define "common.addon.vpn.networkpolicy" -}}
{{- if .Values.addons.vpn.networkPolicy.enabled -}}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "common.names.fullname" . }}
spec:
podSelector:
matchLabels:
{{- include "common.labels.selectorLabels" . | nindent 6 }}
policyTypes:
- Egress
egress:
{{- if .Values.addons.vpn.networkPolicy.egress }}
{{- .Values.addons.vpn.networkPolicy.egress | toYaml | nindent 4 }}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,25 @@
{{/*
The OpenVPN shared volume to be inserted
*/}}
{{- define "common.addon.vpn.volume" -}}
{{- if or .Values.addons.vpn.vpnConf .Values.addons.vpn.scripts.up .Values.addons.vpn.scripts.down -}}
name: vpnconfig
configMap:
name: {{ template "common.names.fullname" . }}-vpn
items:
{{- if .Values.addons.vpn.vpnConf }}
- key: vpnConfigfile
path: vpnConfigfile
{{- end }}
{{- if .Values.addons.vpn.scripts.up }}
- key: up.sh
path: up.sh
mode: 0777
{{- end }}
{{- if .Values.addons.vpn.scripts.down }}
- key: down.sh
path: down.sh
mode: 0777
{{- end }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,35 @@
{{/*
Template to render VPN addon
*/}}
{{- define "common.addon.vpn" -}}
{{- if .Values.addons.vpn.enabled -}}
{{- if eq "openvpn" .Values.addons.vpn.type -}}
{{- include "common.addon.openvpn" . }}
{{- end -}}
{{- if eq "wireguard" .Values.addons.vpn.type -}}
{{- include "common.addon.wireguard" . }}
{{- end -}}
{{/* Include the configmap if not empty */}}
{{- $configmap := include "common.addon.vpn.configmap" . -}}
{{- if $configmap -}}
{{- print "---" | nindent 0 -}}
{{- $configmap -}}
{{- end -}}
{{/* Append the vpn config volume to the additionalVolumes */}}
{{- $volume := include "common.addon.vpn.volume" . | fromYaml -}}
{{- if $volume -}}
{{- $additionalVolumes := append .Values.additionalVolumes $volume }}
{{- $_ := set .Values "additionalVolumes" $additionalVolumes -}}
{{- end -}}
{{/* Include the networkpolicy if not empty */}}
{{- $networkpolicy := include "common.addon.vpn.networkpolicy" . -}}
{{- if $networkpolicy -}}
{{- print "---" | nindent 0 -}}
{{- $networkpolicy -}}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,18 @@
{{/*
Template to render OpenVPN addon
*/}}
{{- define "common.addon.openvpn" -}}
{{/* Append the openVPN container to the additionalContainers */}}
{{- $container := include "common.addon.openvpn.container" . | fromYaml -}}
{{- if $container -}}
{{- $additionalContainers := append .Values.additionalContainers $container }}
{{- $_ := set .Values "additionalContainers" $additionalContainers -}}
{{- end -}}
{{/* Include the secret if not empty */}}
{{- $secret := include "common.addon.openvpn.secret" . -}}
{{- if $secret -}}
{{- print "---" | nindent 0 -}}
{{- $secret -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,57 @@
{{/*
The OpenVPN container(s) to be inserted
*/}}
{{- define "common.addon.openvpn.container" -}}
name: openvpn
image: "{{ .Values.addons.vpn.openvpn.image.repository }}:{{ .Values.addons.vpn.openvpn.image.tag }}"
imagePullPolicy: {{ .Values.addons.vpn.imagePullPolicy }}
securityContext:
capabilities:
add:
- NET_ADMIN
{{- if .Values.addons.vpn.env }}
env:
{{- range $k, $v := .Values.addons.vpn.env }}
- name: {{ $k }}
value: {{ $v }}
{{- end }}
{{- end }}
{{- if or .Values.addons.vpn.openvpn.auth .Values.addons.vpn.openvpn.authSecret }}
envFrom:
- secretRef:
{{- if .Values.addons.vpn.openvpn.authSecret }}
name: {{ .Values.addons.vpn.openvpn.authSecret }}
{{- else }}
name: {{ template "common.names.fullname" . }}-openvpn
{{- end }}
{{- end }}
{{- if or .Values.addons.vpn.configFile .Values.addons.vpn.scripts.up .Values.addons.vpn.scripts.down .Values.addons.vpn.additionalVolumeMounts .Values.persistence.shared.enabled }}
volumeMounts:
{{- if .Values.addons.vpn.configFile }}
- name: vpnconfig
mountPath: /vpn/vpn.conf
subPath: vpnConfigfile
{{- end }}
{{- if .Values.addons.vpn.scripts.up }}
- name: vpnconfig
mountPath: /vpn/up.sh
subPath: up.sh
{{- end }}
{{- if .Values.addons.vpn.scripts.down }}
- name: vpnconfig
mountPath: /vpn/down.sh
subPath: down.sh
{{- end }}
{{- if .Values.persistence.shared.enabled }}
- mountPath: {{ .Values.persistence.shared.mountPath }}
name: shared
{{- end }}
{{- if .Values.addons.vpn.additionalVolumeMounts }}
{{- toYaml .Values.addons.vpn.additionalVolumeMounts | nindent 2 }}
{{- end }}
{{- end }}
{{- if .Values.addons.vpn.livenessProbe }}
livenessProbe:
{{- toYaml .Values.addons.vpn.livenessProbe | nindent 4 }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,15 @@
{{/*
The OpenVPN secrets to be included
*/}}
{{- define "common.addon.openvpn.secret" -}}
{{- if .Values.addons.vpn.openvpn.auth -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "common.names.fullname" . }}-openvpn
labels:
{{- include "common.labels" . | nindent 4 }}
data:
VPN_AUTH: {{ .Values.addons.vpn.openvpn.auth | b64enc }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,11 @@
{{/*
Template to render Wireguard addon
*/}}
{{- define "common.addon.wireguard" -}}
{{/* Append the Wireguard container to the additionalContainers */}}
{{- $container := include "common.addon.wireguard.container" . | fromYaml -}}
{{- if $container -}}
{{- $additionalContainers := append .Values.additionalContainers $container }}
{{- $_ := set .Values "additionalContainers" $additionalContainers -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,50 @@
{{/*
The Wireguard container(s) to be inserted
*/}}
{{- define "common.addon.wireguard.container" -}}
name: wireguard
image: "{{ .Values.addons.vpn.wireguard.image.repository }}:{{ .Values.addons.vpn.wireguard.image.tag }}"
imagePullPolicy: {{ .Values.addons.vpn.imagePullPolicy }}
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
{{- if .Values.addons.vpn.env }}
env:
{{- range $k, $v := .Values.addons.vpn.env }}
- name: {{ $k }}
value: {{ $v }}
{{- end }}
{{- end }}
{{- if or .Values.addons.vpn.configFile .Values.addons.vpn.scripts.up .Values.addons.vpn.scripts.down .Values.addons.vpn.additionalVolumeMounts .Values.persistence.shared.enabled }}
volumeMounts:
{{- if .Values.addons.vpn.configFile }}
- name: vpnconfig
mountPath: /config/wg0.conf
subPath: vpnConfigfile
{{- end }}
{{- if .Values.addons.vpn.scripts.up }}
- name: vpnconfig
mountPath: /config/up.sh
subPath: up.sh
{{- end }}
{{- if .Values.addons.vpn.scripts.down }}
- name: vpnconfig
mountPath: /config/down.sh
subPath: down.sh
{{- end }}
{{- if .Values.persistence.shared.enabled }}
- mountPath: {{ .Values.persistence.shared.mountPath }}
name: shared
{{- end }}
{{- if .Values.addons.vpn.additionalVolumeMounts }}
{{- toYaml .Values.addons.vpn.additionalVolumeMounts | nindent 2 }}
{{- end }}
{{- end }}
{{- if .Values.addons.vpn.livenessProbe }}
livenessProbe:
{{- toYaml .Values.addons.vpn.livenessProbe | nindent 4 }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,46 @@
{{- define "common.classes.ingress" -}}
{{- $ingressName := include "common.names.fullname" . -}}
{{- $values := .Values.ingress -}}
{{- if hasKey . "ObjectValues" -}}
{{- with .ObjectValues.ingress -}}
{{- $values = . -}}
{{- end -}}
{{ end -}}
{{- if hasKey $values "nameSuffix" -}}
{{- $ingressName = printf "%v-%v" $ingressName $values.nameSuffix -}}
{{ end -}}
{{- $svcPort := $values.svcPort -}}
apiVersion: {{ include "common.capabilities.ingress.apiVersion" . }}
kind: Ingress
metadata:
name: {{ $ingressName }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with $values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if $values.tls }}
tls:
{{- range $values.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range $values.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
serviceName: {{ $ingressName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,36 @@
{{- define "common.classes.pvc" -}}
{{- $values := .Values.persistence -}}
{{- if hasKey . "ObjectValues" -}}
{{- with .ObjectValues.persistence -}}
{{- $values = . -}}
{{- end -}}
{{ end -}}
{{- $pvcName := include "common.names.fullname" . -}}
{{- if hasKey $values "nameSuffix" -}}
{{- $pvcName = printf "%v-%v" $pvcName $values.nameSuffix -}}
{{ end -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ $pvcName }}
{{- if or $values.skipuninstall $values.annotations }}
annotations:
{{- if $values.skipuninstall }}
"helm.sh/resource-policy": keep
{{- end }}
{{- with $values.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ required (printf "accessMode is required for PVC %v" $pvcName) $values.accessMode | quote }}
resources:
requests:
storage: {{ required (printf "size is required for PVC %v" $pvcName) $values.size | quote }}
{{- if $values.storageClass }}
storageClassName: {{ if (eq "-" $values.storageClass) }}""{{- else }}{{ $values.storageClass | quote }}{{- end }}
{{- end }}
{{- end -}}

View File

@@ -0,0 +1,70 @@
{{- define "common.classes.service" -}}
{{- $values := .Values.service -}}
{{- if hasKey . "ObjectValues" -}}
{{- with .ObjectValues.service -}}
{{- $values = . -}}
{{- end -}}
{{ end -}}
{{- $svcType := $values.type -}}
apiVersion: v1
kind: Service
metadata:
name: {{ include "common.names.fullname" . }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- if $values.labels }}
{{ toYaml $values.labels | nindent 4 }}
{{- end }}
{{- with $values.annotations }}
annotations:
{{ toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if (or (eq $svcType "ClusterIP") (empty $svcType)) }}
type: ClusterIP
{{- if $values.clusterIP }}
clusterIP: {{ $values.clusterIP }}
{{end}}
{{- else if eq $svcType "LoadBalancer" }}
type: {{ $svcType }}
{{- if $values.loadBalancerIP }}
loadBalancerIP: {{ $values.loadBalancerIP }}
{{- end }}
{{- if $values.externalTrafficPolicy }}
externalTrafficPolicy: {{ $values.externalTrafficPolicy }}
{{- end }}
{{- if $values.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml $values.loadBalancerSourceRanges | nindent 4 }}
{{- end -}}
{{- else }}
type: {{ $svcType }}
{{- end }}
{{- if $values.sessionAffinity }}
sessionAffinity: {{ $values.sessionAffinity }}
{{- if $values.sessionAffinityConfig }}
sessionAffinityConfig:
{{ toYaml $values.sessionAffinityConfig | nindent 4 }}
{{- end -}}
{{- end }}
{{- with $values.externalIPs }}
externalIPs:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if $values.publishNotReadyAddresses }}
publishNotReadyAddresses: {{ $values.publishNotReadyAddresses }}
{{- end }}
ports:
- port: {{ $values.port.port }}
targetPort: {{ $values.port.targetPort }}
protocol: {{ $values.port.protocol }}
name: {{ $values.port.name }}
{{- if (and (eq $svcType "NodePort") (not (empty $values.port.nodePort))) }}
nodePort: {{ $values.port.nodePort }}
{{ end }}
{{- with $values.additionalPorts }}
{{ toYaml . | nindent 4 }}
{{- end }}
selector:
{{- include "common.labels.selectorLabels" . | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,32 @@
{{/*
Return the appropriate apiVersion for deployment.
*/}}
{{- define "common.capabilities.deployment.apiVersion" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for statefulset.
*/}}
{{- define "common.capabilities.statefulset.apiVersion" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "apps/v1beta1" -}}
{{- else -}}
{{- print "apps/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "common.capabilities.ingress.apiVersion" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{/*
Common labels
*/}}
{{- define "common.labels" -}}
helm.sh/chart: {{ include "common.names.chart" . }}
{{ include "common.labels.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "common.labels.selectorLabels" -}}
app.kubernetes.io/name: {{ include "common.names.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -0,0 +1,42 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "common.names.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "common.names.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "common.names.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "common.names.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "k8s-at-home.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,43 @@
{{- /* The main container that will be included in the controller */ -}}
{{- define "common.controller.mainContainer" -}}
- name: {{ template "common.names.fullname" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.env }}
env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
ports:
- name: {{ .Values.service.port.name }}
containerPort: {{ .Values.service.port.port }}
protocol: {{ .Values.service.port.protocol }}
{{- range $port := .Values.service.additionalPorts }}
- name: {{ $port.name }}
containerPort: {{ $port.port }}
protocol: {{ $port.protocol }}
{{- end }}
volumeMounts:
{{- range $index, $PVC := .Values.persistence }}
{{- if $PVC.enabled }}
- mountPath: {{ $PVC.mountPath }}
name: {{ $index }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 2 }}
{{- end }}
{{- include "common.controller.probes.tcpSocket" . | nindent 2 }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- end -}}

View File

@@ -0,0 +1,29 @@
{{/*
Default liveness/readiness/startup probes
*/}}
{{- define "common.controller.probes.tcpSocket" -}}
{{- if .Values.probes.liveness.enabled -}}
livenessProbe:
tcpSocket:
port: {{ .Values.service.port.name }}
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
{{- end }}
{{- if .Values.probes.readiness.enabled }}
readinessProbe:
tcpSocket:
port: {{ .Values.service.port.name }}
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
{{- end }}
{{- if .Values.probes.startup.enabled }}
startupProbe:
tcpSocket:
port: {{ .Values.service.port.name }}
initialDelaySeconds: {{ .Values.probes.startup.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.startup.failureThreshold }}
periodSeconds: {{ .Values.probes.startup.periodSeconds }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,33 @@
{{/*
volumes included by the controller
*/}}
{{- define "common.controller.volumes" -}}
{{/* Store the context to refer in later scope */}}
{{- $context := . -}}
{{/* Determine the PVC name */}}
{{- range $index, $PVC := .Values.persistence }}
{{- if $PVC.enabled }}
{{- $claimName := "" -}}
{{- if $PVC.existingClaim -}}
{{- $claimName = $PVC.existingClaim -}}
{{- else }}
{{- if $PVC.nameSuffix -}}
{{- $claimName = printf "%s-%s" (include "common.names.fullname" $context) $PVC.nameSuffix -}}
{{- else }}
{{- $claimName = printf "%s-%s" (include "common.names.fullname" $context) $index -}}
{{- end -}}
{{- end -}}
- name: {{ $index }}
{{- if not $PVC.emptyDir }}
persistentVolumeClaim:
claimName: {{ $claimName }}
{{- else }}
emptyDir: {}
{{- end }}
{{ end }}
{{- end }}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes | nindent 0 }}
{{- end }}
{{- end }}

196
charts/common/values.yaml Normal file
View File

@@ -0,0 +1,196 @@
# type: options are statefulset or deployment
controllerType: deployment
env: {}
initContainers: []
additionalContainers: []
# Probes configuration
probes:
liveness:
enabled: true
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
readiness:
enabled: true
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
startup:
enabled: false
initialDelaySeconds: 5
failureThreshold: 30
periodSeconds: 10
service:
enabled: true
type: ClusterIP
# Specify the default port information
port:
port: ""
name: http
protocol: TCP
targetPort: http
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
additionalPorts: []
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
hosts:
- host: chart-example.local
paths:
- path: /
# Ignored if not kubeVersion >= 1.14-0
pathType: Prefix
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
additionalIngresses: []
# - enabled: false
# nameSuffix: "api"
# annotations: {}
# # kubernetes.io/ingress.class: nginx
# # kubernetes.io/tls-acme: "true"
# labels: {}
# hosts:
# - host: chart-example.local
# paths:
# - path: /api
# # Ignored if not kubeVersion >= 1.14-0
# pathType: Prefix
# tls: []
# # - secretName: chart-example-tls
# # hosts:
# # - chart-example.local
persistence:
config:
enabled: false
mountPath: /config
## configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
# Create an emptyDir volume to share between all containers
shared:
enabled: false
emptyDir: true
mountPath: /shared
additionalVolumes: []
additionalVolumeMounts: []
addons:
vpn:
enabled: false
# VPN type: options are openvpn or wireguard
type: openvpn
# OpenVPN specific configuration
openvpn:
image:
repository: dperson/openvpn-client
tag: latest
# Credentials to connect to the VPN Service (used with -a)
auth: # "user;password"
# OR specify an existing secret that contains the credentials. Credentials should be stored
# under the VPN_AUTH key
authSecret: # my-vpn-secret
# OpenVPN specific configuration
wireguard:
image:
repository: linuxserver/wireguard
tag: version-v1.0.20200827
imagePullPolicy: IfNotPresent
# All variables specified here will be added to the vpn sidecar container
# See the documentation of the VPN image for all config values
env: {}
# TZ: UTC
# Provide a customized vpn configuration file to be used by the VPN.
configFile: # |-
# Some Example Config
# remote greatvpnhost.com 8888
# auth-user-pass
# Cipher AES
# Provide custom up/down scripts that can be used by the vpnConf
scripts:
up: # |-
# #!/bin/bash
# echo "connected" > /shared/vpnstatus
down: # |-
# #!/bin/bash
# echo "disconnected" > /shared/vpnstatus
additionalVolumeMounts: []
# Optionally specify a livenessProbe, e.g. to check if the connection is still
# being protected by the VPN
livenessProbe: {}
# exec:
# command:
# - sh
# - -c
# - if [ $(curl -s https://ipinfo.io/country) == 'US' ]; then exit 0; else exit $?; fi
# initialDelaySeconds: 30
# periodSeconds: 60
# failureThreshold: 1
# If set to true, will deploy a network policy that blocks all outbound
# traffic except traffic specified as allowed
networkPolicy:
enabled: false
# The egress configuration for your network policy, All outbound traffic
# From the pod will be blocked unless specified here. Your cluster must
# have a CNI that supports network policies (Canal, Calico, etc...)
# https://kubernetes.io/docs/concepts/services-networking/network-policies/
# https://github.com/ahmetb/kubernetes-network-policy-recipes
egress:
# - to:
# - ipBlock:
# cidr: 0.0.0.0/0
# ports:
# - port: 53
# protocol: UDP
# - port: 53
# protocol: TCP
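
To illustrate the add-on block above, a hedged sketch of enabling the OpenVPN sidecar from a consuming chart's values (the secret name, remote host, and ports are placeholders):

```yaml
addons:
  vpn:
    enabled: true
    type: openvpn
    openvpn:
      # existing secret holding credentials under the VPN_AUTH key (hypothetical name)
      authSecret: my-vpn-secret
    configFile: |-
      remote greatvpnhost.com 8888
      auth-user-pass
    networkPolicy:
      enabled: true
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
          ports:
            - port: 53
              protocol: UDP
            - port: 1194   # assumed OpenVPN port
              protocol: UDP
```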

View File

@@ -1,8 +1,8 @@
apiVersion: v2
appVersion: 1.15.2
appVersion: 1.15.3
description: ESPHome
name: esphome
version: 2.2.0
version: 2.3.0
keywords:
- esphome
home: https://github.com/k8s-at-home/charts/tree/master/charts/esphome

View File

@@ -4,7 +4,7 @@
image:
repository: esphome/esphome
tag: 1.15.2
tag: 1.15.3
pullPolicy: IfNotPresent
pullSecrets: []

View File

@@ -1,8 +1,8 @@
apiVersion: v2
appVersion: "0.5.2"
appVersion: "0.6.0"
description: Realtime object detection on RTSP cameras with the Google Coral
name: frigate
version: 4.0.0
version: 4.0.1
keywords:
- tensorflow
- coral

View File

@@ -9,7 +9,7 @@ strategyType: Recreate
image:
repository: blakeblackshear/frigate
tag: 0.5.2
tag: 0.6.0
pullPolicy: IfNotPresent
rtspPassword: password

View File

@@ -1,8 +1,8 @@
apiVersion: v2
appVersion: 0.115.2
appVersion: 0.116.1
description: Home Assistant
name: home-assistant
version: 2.5.0
version: 2.6.0
keywords:
- home-assistant
- hass
@@ -22,7 +22,7 @@ maintainers:
dependencies:
- name: esphome
repository: https://k8s-at-home.com/charts/
version: ~1.0.0
version: ~2.2.0
condition: esphome.enabled
- name: postgresql
version: 9.1.2

View File

@@ -191,6 +191,7 @@ The following tables lists the configurable parameters of the Home Assistant cha
| `monitoring.serviceMonitor.labels` | Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator | `{}` |
| `monitoring.serviceMonitor.bearerTokenFile` | Set bearerTokenFile for home-assistant auth (use long lived access tokens) | `nil` |
| `monitoring.serviceMonitor.bearerTokenSecret` | Set bearerTokenSecret for home-assistant auth (use long lived access tokens) | `nil` |
| `monitoring.serviceMonitor.metricRelabelings` | Add metricRelabelings [Documentation](https://coreos.com/operators/prometheus/docs/latest/api.html#relabelconfig) | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
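
A hedged sketch of setting the new `monitoring.serviceMonitor.metricRelabelings` parameter through chart values, mirroring the commented example added to `values.yaml` further below (the regex and target label are illustrative):

```yaml
monitoring:
  serviceMonitor:
    metricRelabelings:
      - regex: hass_.*
        replacement: ""
        sourceLabels:
          - __name__
        targetLabel: instance
```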

View File

@@ -27,6 +27,10 @@ spec:
{{- if .Values.monitoring.serviceMonitor.bearerTokenSecret.optional }}
optional: {{ .Values.monitoring.serviceMonitor.bearerTokenSecret.optional }}
{{- end }}
{{- end }}
{{- if .Values.monitoring.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{ toYaml .Values.monitoring.serviceMonitor.metricRelabelings | indent 4 }}
{{- end }}
jobLabel: {{ template "home-assistant.fullname" . }}-prometheus-exporter
namespaceSelector:

View File

@@ -4,7 +4,7 @@
image:
repository: homeassistant/home-assistant
tag: 0.115.2
tag: 0.116.1
pullPolicy: IfNotPresent
pullSecrets: []
@@ -224,7 +224,18 @@ monitoring:
# bearerTokenFile:
# Set bearerTokenSecret for home assistant auth (use long lived access tokens)
# bearerTokenSecret:
# Relabel metrics if needed. The example below removes the pod and instance labels from metrics beginning with hass
# metricRelabelings: []
# - regex: hass.*
# replacement: ""
# sourceLabels:
# - __name__
# targetLabel: pod
# - regex: hass_.*
# replacement: ""
# sourceLabels:
# - __name__
# targetLabel: instance
vscode:
enabled: false

View File

@@ -1,8 +1,8 @@
apiVersion: v2
appVersion: v0.16.1045
appVersion: v0.16.2106
description: API Support for your favorite torrent trackers
name: jackett
version: 4.0.0
version: 5.0.2
keywords:
- jackett
- torrent
@@ -15,7 +15,6 @@ maintainers:
- name: billimek
email: jeff@billimek.com
dependencies:
- name: media-common
- name: common
repository: https://k8s-at-home.com/charts/
version: ^1.0.0
alias: jackett
version: ^1.0.4

View File

@@ -1,6 +1,6 @@
# Jackett
This is a helm chart for [Jackett](https://github.com/Jackett/Jackett) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/jackett/)
This is a helm chart for [Jackett](https://github.com/Jackett/Jackett).
## TL;DR;
@@ -28,8 +28,9 @@ helm delete my-release --purge
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
Read through the chart's [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/jackett/values.yaml)
file. It has several commented out suggested values.
Additionally you can take a look at the common library [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/common/values.yaml) for more (advanced) configuration options.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
@@ -43,11 +44,9 @@ chart. For example,
helm install jackett k8s-at-home/jackett --values values.yaml
```
These values will be nested as it is a dependency, for example
```yaml
jackett:
image:
tag: ...
image:
tag: ...
```
---
@@ -59,4 +58,19 @@ Error: rendered manifests contain a resource that already exists. Unable to cont
```
it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
---
---
## Upgrading an existing Release to a new major version
A major chart version change (like 4.0.1 -> 5.0.0) indicates that there is an incompatible breaking change potentially needing manual actions.
### Upgrading from 4.x.x to 5.x.x
Due to migrating to a centralized common library some values in `values.yaml` have changed.
Examples:
* `service.port` has been moved to `service.port.port`.
* `persistence.type` has been moved to `controllerType`.
Refer to the library values.yaml for more configuration options.
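
For reference, a hedged before/after sketch of that migration, taken from the values changes visible elsewhere in this diff:

```yaml
# 4.x layout (values nested under the media-common alias)
jackett:
  image:
    organization: linuxserver
    repository: jackett
    tag: v0.16.1045-ls14
  service:
    type: ClusterIP
    port: 9117

# 5.x layout (common library, top level)
image:
  repository: linuxserver/jackett
  tag: version-v0.16.2106
service:
  port:
    port: 9117
controllerType: deployment
```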

View File

@@ -1,10 +1,2 @@
jackett:
image:
organization: linuxserver
repository: jackett
tag: v0.16.1045-ls14
service:
type: ClusterIP
port: 9117
ingress:
enabled: false
ingress:
enabled: true

View File

@@ -1,20 +1 @@
{{- $svcPort := .Values.jackett.service.port -}}
1. Get the application URL by running these commands:
{{- if .Values.jackett.ingress.enabled }}
{{- range .Values.jackett.ingress.hosts }}
http{{ if $.Values.jackett.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.jackett.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.jackett.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.jackett.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ $svcPort }}
{{- else if contains "ClusterIP" .Values.jackett.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ $svcPort }}
{{- end }}
{{- include "common.notes.defaultNotes" . -}}

View File

@@ -0,0 +1 @@
{{ include "common.all" . }}

View File

@@ -1,22 +0,0 @@
{{- if and .Values.jackett.persistence.torrentblackhole.enabled (not .Values.jackett.persistence.torrentblackhole.existingClaim) }}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}-downloads
{{- if .Values.jackett.persistence.torrentblackhole.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.jackett.persistence.torrentblackhole.accessMode | quote }}
resources:
requests:
storage: {{ .Values.jackett.persistence.torrentblackhole.size | quote }}
{{- if .Values.jackett.persistence.torrentblackhole.storageClass }}
storageClassName: {{ if (eq "-" .Values.jackett.persistence.torrentblackhole.storageClass) }}""{{- else }}{{ .Values.jackett.persistence.torrentblackhole.storageClass | quote}}{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,43 +1,37 @@
# Default values for Jackett.
jackett:
image:
organization: linuxserver
repository: jackett
pullPolicy: IfNotPresent
tag: v0.16.1045-ls14
image:
repository: linuxserver/jackett
pullPolicy: IfNotPresent
tag: version-v0.16.2106
service:
service:
port:
port: 9117
env: {}
# TZ: UTC
# PUID: 1001
# PGID: 1001
env: {}
# TZ: UTC
# PUID: 1001
# PGID: 1001
persistence:
torrentblackhole:
enabled: false
## Jackett torrent torrentblackhole Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 1Gi
## Do not delete the pvc upon helm uninstall
# skipuninstall: false
# existingClaim: ""
persistence:
config:
enabled: true
emptyDir: true
additionalVolumes:
- name: torrentblackhole
emptyDir: {}
## When using persistence.torrentblackhole.enabled: true, adjust this to:
# persistentVolumeClaim:
# claimName: jackett-torrentblackhole
additionalVolumeMounts:
- name: torrentblackhole
mountPath: /downloads
torrentblackhole:
enabled: true
emptyDir: true
mountPath: /downloads
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 1Gi
## Do not delete the pvc upon helm uninstall
# skipuninstall: false
# existingClaim: ""

View File

@@ -1,31 +1,21 @@
apiVersion: v2
name: lidarr
appVersion: 0.8.0.1886
description: Looks and smells like Sonarr but made for music
type: application
version: 4.0.1
appVersion: 0.7.1.1785-ls18
name: lidarr
version: 5.0.0
keywords:
- lidarr
- torrent
- usenet
home: https://github.com/k8s-at-home/charts/tree/master/charts/lidarr
icon: https://github.com/lidarr/Lidarr/blob/develop/Logo/512.png?raw=true
sources:
- https://github.com/Lidarr/Lidarr
- https://hub.docker.com/r/linuxserver/lidarr
maintainers:
- name: DirtyCajunRice
email: nick@cajun.pro
url: https://github.com/dirtycajunrice
- name: billimek
email: jeff@billimek.com
dependencies:
- name: media-common
- name: common
repository: https://k8s-at-home.com/charts/
version: ^1.0.0
alias: lidarr
annotations:
artifacthub.io/links: |
- name: App Source
url: https://github.com/Lidarr/Lidarr
- name: Default Docker Image
url: https://hub.docker.com/r/linuxserver/lidarr
artifacthub.io/maintainers: |
- name: Nicholas St. Germain
email: nick@cajun.pro
version: ^1.0.4

View File

@@ -1,4 +1,4 @@
approvers:
- DirtyCajunRice
- billimek
reviewers:
- DirtyCajunRice
- billimek

View File

@@ -1,52 +1,36 @@
# Lidarr | Looks and smells like Sonarr but made for music
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/lidarr/)
* Deploys [Lidarr](https://github.com/lidarr/Lidarr)
# Lidarr
## TL;DR
```console
This is a helm chart for [Lidarr](https://github.com/lidarr/Lidarr).
## TL;DR;
```shell
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/lidarr
```
## Installing the Chart
To install the chart with the release name `lidarr`:
To install the chart with the release name `my-release`:
```console
helm install lidarr k8s-at-home/lidarr
helm install --name my-release k8s-at-home/lidarr
```
## Upgrading
Chart versions before 4.0.0 did not use media-common. Upgrading will require you to nest your values.yaml file under
a top-level `lidarr:` key.
Chart versions 1.0.1 and earlier used separate PVCs for Downloads and Music. This presented an issue where Lidarr would
be unable to hard-link files between the /downloads and /music directories when importing media. This is caused because
each PVC exposed to the pod as a separate filesystem. It resulted in Lidarr copying files rather than linking;
using additional storage without the user's knowledge.
This chart now uses a single PVC for Downloads and Music. This means all of your media (and downloads) must be in, or
be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, ie. media/music, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Lidarr's
`Mass Editor` under the `Library` tab. Simply select all artists in your library, and use the editor to change the
`Root Folder` and hit save.
## Uninstalling the Chart
To uninstall the `lidarr` deployment:
To uninstall/delete the `my-release` deployment:
```console
helm uninstall lidarr
helm delete my-release --purge
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
Read through the chart's [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/lidarr/values.yaml)
file. It has several commented out suggested values.
Additionally you can take a look at the common library [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/common/values.yaml) for more (advanced) configuration options.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
@@ -60,11 +44,9 @@ chart. For example,
helm install lidarr k8s-at-home/lidarr --values values.yaml
```
These values will be nested as it is a dependency, for example
```yaml
lidarr:
image:
tag: ...
image:
tag: ...
```
---
@@ -74,6 +56,21 @@ If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...`
```
it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use`existingClaim`.
it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
---
## Upgrading an existing Release to a new major version
A major chart version change (like 4.0.1 -> 5.0.0) indicates that there is an incompatible breaking change potentially needing manual actions.
### Upgrading from 4.x.x to 5.x.x
Due to migrating to a centralized common library some values in `values.yaml` have changed.
Examples:
* `service.port` has been moved to `service.port.port`.
* `persistence.type` has been moved to `controllerType`.
Refer to the library values.yaml for more configuration options.
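
The same migration applies here; as one concrete illustration, a hedged sketch of keeping media on a pre-existing claim under the new layout (the claim name is hypothetical, and `emptyDir` must be disabled for the claim to be used):

```yaml
service:
  port:
    port: 8686
persistence:
  media:
    enabled: true
    emptyDir: false
    mountPath: /media
    existingClaim: lidarr-media   # hypothetical PVC created outside the chart
```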

View File

@@ -0,0 +1,2 @@
ingress:
enabled: true

View File

@@ -0,0 +1 @@
{{- include "common.notes.defaultNotes" . -}}

View File

@@ -0,0 +1 @@
{{ include "common.all" . }}

View File

@@ -1,10 +1,37 @@
# Default values for lidarr.
# Default values for Lidarr.
lidarr:
image:
organization: linuxserver
repository: lidarr
pullPolicy: IfNotPresent
tag: 0.7.1.1785-ls18
service:
image:
repository: linuxserver/lidarr
pullPolicy: IfNotPresent
tag: version-0.8.0.1886
service:
port:
port: 8686
env: {}
# TZ: UTC
# PUID: 1001
# PGID: 1001
persistence:
config:
enabled: true
emptyDir: true
media:
enabled: true
emptyDir: true
mountPath: /media
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 1Gi
## Do not delete the pvc upon helm uninstall
# skipuninstall: false
# existingClaim: ""

View File

@@ -1,12 +0,0 @@
apiVersion: v2
name: media-common-openvpn
description: OpenVPN add-on for `media-common`-based charts
type: library
keywords:
- media-common
- openvpn
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common-openvpn
maintainers:
- name: bjw-s
email: bjw-s@users.noreply.github.com
version: 1.0.1

View File

@@ -1,16 +0,0 @@
# Add-on chart for k8s@home media charts
This chart provides a single maintainable OpenVPN add-on to the `meda-common` chart.
## Configuration
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common-openvpn/values.yaml) file.
It has several commented out suggested values.
These values will normally be nested as it is a dependency, for example:
```yaml
radarr:
openvpn:
enabled: true
<values>
```

View File

@@ -1,24 +0,0 @@
{{/*
The OpenVPN configmaps to be inserted
*/}}
{{- define "media-common.openvpn.configmap" -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "media-common.fullname" . }}-openvpn
labels:
{{- include "media-common.labels" . | nindent 4 }}
data:
{{- if .Values.openvpn.vpnConf }}
vpnConf: |-
{{- .Values.openvpn.vpnConf | nindent 4}}
{{- end }}
{{ if .Values.openvpn.scripts.up }}
up.sh: |-
{{- .Values.openvpn.scripts.up | nindent 4}}
{{- end }}
{{- if .Values.openvpn.scripts.down }}
down.sh: |-
{{- .Values.openvpn.scripts.down | nindent 4}}
{{- end }}
{{- end -}}

View File

@@ -1,48 +0,0 @@
{{/*
The OpenVPN container(s) to be inserted
*/}}
{{- define "media-common.openvpn.container" -}}
- name: openvpn
image: "{{ .Values.openvpn.image.repository }}:{{ .Values.openvpn.image.tag }}"
imagePullPolicy: {{ .Values.openvpn.image.pullPolicy }}
securityContext:
capabilities:
add: ["NET_ADMIN"]
{{- if .Values.openvpn.env }}
env:
{{- range $k, $v := .Values.openvpn.env }}
- name: {{ $k }}
value: {{ $v }}
{{- end }}
{{- end }}
envFrom:
{{- if or .Values.openvpn.auth .Values.openvpn.authSecret }}
- secretRef:
{{- if .Values.openvpn.authSecret }}
name: {{ .Values.openvpn.authSecret }}
{{- else }}
name: {{ template "media-common.fullname" . }}-openvpn
{{- end }}
{{- end }}
volumeMounts:
{{- if .Values.openvpn.vpnConf }}
- name: openvpnconf
mountPath: /vpn/vpn.conf
subPath: vpnConf
{{- end }}
{{- if .Values.openvpn.scripts.up }}
- name: openvpnconf
mountPath: /vpn/up.sh
subPath: up.sh
{{- end }}
{{- if .Values.openvpn.scripts.down }}
- name: openvpnconf
mountPath: /vpn/down.sh
subPath: down.sh
{{- end }}
{{- if .Values.openvpn.additionalVolumeMounts }}
{{- toYaml .Values.openvpn.additionalVolumeMounts | nindent 2 }}
{{- end }}
livenessProbe:
{{- toYaml .Values.openvpn.livenessProbe | nindent 4 }}
{{- end -}}

View File

@@ -1,22 +0,0 @@
{{/*
The OpenVPN networkpolicy to be inserted
*/}}
{{- define "media-common.openvpn.networkpolicy" -}}
{{- if .Values.openvpn.networkPolicy.enabled -}}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "media-common.fullname" . }}-deny-all-netpol
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: {{ include "media-common.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
policyTypes:
- Egress
egress:
{{- if .Values.openvpn.networkPolicy.egress }}
{{- .Values.openvpn.networkPolicy.egress | toYaml | nindent 4 }}
{{- end -}}
{{- end -}}
{{- end -}}

View File

@@ -1,15 +0,0 @@
{{/*
The OpenVPN secrets to be inserted
*/}}
{{- define "media-common.openvpn.secret" -}}
{{- if .Values.openvpn.auth -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "media-common.fullname" . }}-openvpn
labels:
{{- include "media-common.labels" . | nindent 4 }}
data:
VPN_AUTH: {{ .Values.openvpn.auth | b64enc }}
{{- end -}}
{{- end -}}
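If you prefer to manage credentials yourself, set `openvpn.authSecret` to an existing Secret instead; when `openvpn.auth` is left unset this template is skipped and the sidecar reads the referenced Secret via `envFrom`, expecting a `VPN_AUTH` key. A minimal sketch of such a Secret (name and credentials are placeholders):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-vpn-secret
type: Opaque
stringData:
  # "user;password" format, the same value the chart would otherwise generate
  VPN_AUTH: "vpnuser;vpnpassword"
```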

View File

@@ -1,25 +0,0 @@
{{/*
The OpenVPN volumes to be inserted
*/}}
{{- define "media-common.openvpn.volume" -}}
{{- if or .Values.openvpn.vpnConf .Values.openvpn.scripts.up .Values.openvpn.scripts.down -}}
- name: openvpnconf
configMap:
name: {{ template "media-common.fullname" . }}-openvpn
items:
{{- if .Values.openvpn.vpnConf }}
- key: vpnConf
path: vpnConf
{{- end }}
{{- if .Values.openvpn.scripts.up }}
- key: up.sh
path: up.sh
mode: 0777
{{- end }}
{{- if .Values.openvpn.scripts.down }}
- key: down.sh
path: down.sh
mode: 0777
{{- end }}
{{- end -}}
{{- end -}}

View File

@@ -1,67 +0,0 @@
# Default values for media-common-openvpn.
image:
repository: dperson/openvpn-client
tag: latest
pullPolicy: IfNotPresent
# All variables specified here will be added to the openvpn sidecar container
# Ref https://hub.docker.com/r/dperson/openvpn-client for all config values
env: []
# TZ: UTC
# Provide a customized vpn.conf file to be used by openvpn.
vpnConf: # |-
# Some Example Config
# remote greatvpnhost.com 8888
# auth-user-pass
  # cipher AES
# Provide custom up/down scripts that can be used by the vpnConf
scripts:
up: # |-
# #!/bin/bash
# echo "connected" > /shared/vpnstatus
down: # |-
# #!/bin/bash
# echo "disconnected" > /shared/vpnstatus
# Credentials to connect to the VPN Service (used with -a)
auth: # "user;password"
# OR specify an existing secret that contains the credentials. Credentials should be stored
# under the VPN_AUTH key
authSecret: # my-vpn-secret
additionalVolumeMounts: []
# Optionally specify a livenessProbe, e.g. to check if the connection is still
# being protected by the VPN
livenessProbe: {}
# exec:
# command:
# - sh
# - -c
# - if [ $(curl -s https://ipinfo.io/country) == 'US' ]; then exit 0; else exit $?; fi
# initialDelaySeconds: 30
# periodSeconds: 60
# failureThreshold: 1
# If set to true, will deploy a network policy that blocks all outbound
# traffic except traffic specified as allowed
networkPolicy:
enabled: false
# The egress configuration for your network policy. All outbound traffic
# from the pod will be blocked unless specified here. Your cluster must
# have a CNI that supports network policies (Canal, Calico, etc...)
# https://kubernetes.io/docs/concepts/services-networking/network-policies/
# https://github.com/ahmetb/kubernetes-network-policy-recipes
egress:
# - to:
# - ipBlock:
# cidr: 0.0.0.0/0
# ports:
# - port: 53
# protocol: UDP
# - port: 53
# protocol: TCP

View File

@@ -1,17 +0,0 @@
apiVersion: v2
name: media-common
description: Common dependency chart for media ecosystem containers
type: application
version: 1.3.0
keywords:
- media-common
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common
maintainers:
- name: DirtyCajunRice
email: nick@cajun.pro
dependencies:
- name: media-common-openvpn
repository: https://k8s-at-home.com/charts/
version: ^1.0.0
condition: openvpn.enabled
alias: openvpn

View File

@@ -1,4 +0,0 @@
approvers:
- DirtyCajunRice
reviewers:
- DirtyCajunRice

View File

@@ -1,30 +0,0 @@
# Shared base chart for k8s@home media charts
Many containers have no environmentally configurable settings. This chart allows a single maintainable
base with umbrella charts for container-specific differences. This chart does not have a default
repository or tag, and is not designed to be deployed directly.
## Known Parent Charts
* [k8s-at-home/radarr](https://github.com/k8s-at-home/charts/tree/master/charts/radarr)
* [k8s-at-home/sonarr](https://github.com/k8s-at-home/charts/tree/master/charts/sonarr)
* [k8s-at-home/lidarr](https://github.com/k8s-at-home/charts/tree/master/charts/lidarr)
* [k8s-at-home/tautulli](https://github.com/k8s-at-home/charts/tree/master/charts/tautulli)
* [k8s-at-home/ombi](https://github.com/k8s-at-home/charts/tree/master/charts/ombi)
* [k8s-at-home/organizr](https://github.com/k8s-at-home/charts/tree/master/charts/organizr)
## Configuration
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml) file.
It has several commented out suggested values.
These values will normally be nested as it is a dependency, for example:
```yaml
radarr:
<values>
```
## Add-ons
### OpenVPN
It is possible to enable an OpenVPN add-on by setting `openvpn.enabled: true`. For more information refer to [k8s-at-home/media-common-openvpn](https://github.com/k8s-at-home/charts/tree/master/charts/media-common-openvpn)
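A hedged sketch of what that might look like when nested under a `radarr` parent release, combined with the add-on's deny-all NetworkPolicy (the egress rule mirrors the commented suggestion in the add-on's values.yaml and only allows DNS):
```yaml
radarr:
  openvpn:
    enabled: true
    networkPolicy:
      enabled: true
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
          ports:
            - port: 53
              protocol: UDP
            - port: 53
              protocol: TCP
```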

View File

@@ -1,35 +0,0 @@
---
image:
organization: linuxserver
repository: radarr
tag: latest
service:
port: 7878
openvpn:
enabled: true
image:
repository: dperson/openvpn-client
tag: latest
pullPolicy: IfNotPresent
auth: user;pass
env:
TZ: UTC
scripts:
up:
down:
networkPolicy:
enabled: false
livenessProbe:
initialDelaySeconds: 10
periodSeconds: 10
exec:
command:
- echo
- success

View File

@@ -1,85 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "media-common.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "media-common.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "media-common.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "media-common.labels" -}}
helm.sh/chart: {{ include "media-common.chart" . }}
{{ include "media-common.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "media-common.selectorLabels" -}}
app.kubernetes.io/name: {{ include "media-common.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Init Containers
*/}}
{{- define "media-common.initContainers" -}}
{{- if .Values.initContainers }}
{{- toYaml .Values.initContainers }}
{{- end }}
{{- end -}}
{{/*
Additional Containers
*/}}
{{- define "media-common.additionalContainers" -}}
{{- if .Values.additionalContainers }}
{{- toYaml .Values.additionalContainers }}
{{- end }}
{{- if .Values.openvpn.enabled }}
{{ include "media-common.openvpn.container" . }}
{{- end }}
{{- end -}}
{{/*
Additional Volumes
*/}}
{{- define "media-common.additionalVolumes" -}}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes }}
{{- end }}
{{- if .Values.openvpn.enabled }}
{{ include "media-common.openvpn.volume" . }}
{{- end }}
{{- end -}}

View File

@@ -1,8 +0,0 @@
{{- if .Values.openvpn.enabled -}}
---
{{ include "media-common.openvpn.configmap" . }}
---
{{ include "media-common.openvpn.secret" . }}
---
{{ include "media-common.openvpn.networkpolicy" . }}
{{- end -}}

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- if .Values.env }}
data:
{{- toYaml .Values.env | nindent 2 }}
{{- end }}

View File

@@ -1,108 +0,0 @@
{{- if eq .Values.persistence.type "deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "media-common.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "media-common.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- include "media-common.initContainers" . | nindent 8 }}
{{- end }}
containers:
- name: {{ template "media-common.fullname" . }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
image: "{{ .Values.image.organization }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
envFrom:
- configMapRef:
name: {{ template "media-common.fullname" . }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
volumeMounts:
- mountPath: {{ .Values.configPath }}
name: config
{{- if .Values.persistence.config.subPath }}
subPath: {{ .Values.persistence.config.subPath }}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 12 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- include "media-common.additionalContainers" . | nindent 8 }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- name: media
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}-media{{- end }}
{{- end }}
{{- include "media-common.additionalVolumes" . | nindent 8 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@@ -1,106 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $kubeVersion := .Capabilities.KubeVersion.GitVersion -}}
{{- $fullName := include "media-common.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">= 1.19-0" $kubeVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">= 1.14-0 < 1.19-0" $kubeVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if semverCompare ">= 1.14-0" $kubeVersion}}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">= 1.19-0" $kubeVersion}}
service:
name: {{ $fullName }}
port:
name: http
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- range $index, $ingress := .Values.ingress.extraIngresses }}
---
{{- if semverCompare ">= 1.19-0" $kubeVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">= 1.14-0 < 1.19-0" $kubeVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}-{{ $ingress.nameSuffix | default $index }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- with $ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if $ingress.tls }}
tls:
{{- range $ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range $ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if semverCompare ">= 1.14-0" $kubeVersion}}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">= 1.19-0" $kubeVersion}}
service:
name: {{ $fullName }}
port:
name: http
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,44 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) -}}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
storageClassName: {{ if (eq "-" .Values.persistence.config.storageClass) }}""{{- else }}{{ .Values.persistence.config.storageClass | quote }}{{- end }}
{{- end }}
{{- end -}}
{{- if and .Values.persistence.media.enabled (not .Values.persistence.media.existingClaim) }}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}-media
{{- if .Values.persistence.media.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.persistence.media.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.media.size | quote }}
{{- if .Values.persistence.media.storageClass }}
storageClassName: {{ if (eq "-" .Values.persistence.media.storageClass) }}""{{- else }}{{ .Values.persistence.media.storageClass | quote}}{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,28 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
{{- with .Values.service.additionalSpec }}
{{- toYaml . | nindent 2 }}
{{- end }}
selector:
{{- include "media-common.selectorLabels" . | nindent 4 }}

View File

@@ -1,109 +0,0 @@
{{- if eq .Values.persistence.type "statefulset" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "media-common.selectorLabels" . | nindent 6 }}
serviceName: {{ include "media-common.fullname" . }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "media-common.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- include "media-common.initContainers" . | nindent 8 }}
{{- end }}
containers:
- name: {{ template "media-common.fullname" . }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
image: "{{ .Values.image.organization }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
envFrom:
- configMapRef:
name: {{ template "media-common.fullname" . }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
volumeMounts:
- mountPath: {{ .Values.configPath }}
name: config
{{- if .Values.persistence.config.subPath }}
subPath: {{ .Values.persistence.config.subPath }}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 12 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- include "media-common.additionalContainers" . | nindent 8 }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- name: media
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}-media{{- end }}
{{- end }}
{{- include "media-common.additionalVolumes" . | nindent 8 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@@ -1,162 +0,0 @@
# Default values for media-common.
image:
organization: ""
repository: ""
pullPolicy: IfNotPresent
tag: ""
# Probes configuration
probes:
liveness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
configPath: /config
env:
TZ: UTC
service:
type: ClusterIP
port: ""
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
additionalSpec: {}
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
hosts:
- host: chart-example.local
paths:
- path: /
# Ignored if not kubeVersion >= 1.14-0
pathType: Prefix
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
extraIngresses:
# - enabled: false
# nameSuffix: "api"
# annotations: {}
# # kubernetes.io/ingress.class: nginx
# # kubernetes.io/tls-acme: "true"
# labels: {}
# hosts:
# - host: chart-example.local
# paths:
# - path: /api
# # Ignored if not kubeVersion >= 1.14-0
# pathType: Prefix
# tls: []
# # - secretName: chart-example-tls
# # hosts:
# # - chart-example.local
persistence:
# type: options are statefulset or deployment
type: statefulset
config:
enabled: true
## media-common configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
media:
enabled: false
## media-common media volume configuration
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 10Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
initContainers: []
additionalContainers: []
additionalVolumes: []
additionalVolumeMounts: []
# Enable the OpenVPN add-on here
# See https://github.com/k8s-at-home/charts/tree/master/charts/media-common-openvpn for more details
openvpn:
enabled: false
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}

View File

@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "1.6.12"
description: Eclipse Mosquitto - An open source MQTT broker
name: mosquitto
version: 0.3.3
version: 0.5.0
keywords:
- message queue
- MQTT

View File

@@ -0,0 +1,3 @@
monitoring:
sidecar:
enabled: true

View File

@@ -0,0 +1,40 @@
{{- if .Values.monitoring.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
labels:
{{ include "mosquitto.labels" . | indent 4 }}
{{- if .Values.monitoring.podMonitor.labels }}
{{ toYaml .Values.monitoring.podMonitor.labels }}
{{- end }}
name: {{ template "mosquitto.fullname" . }}-prometheus-exporter
{{- if .Values.monitoring.podMonitor.namespace }}
namespace: {{ .Values.monitoring.podMonitor.namespace }}
{{- end }}
spec:
podMetricsEndpoints:
- port: prometheus
path: /metrics
{{- if .Values.monitoring.podMonitor.interval }}
interval: {{ .Values.monitoring.podMonitor.interval }}
{{- end }}
{{- if .Values.monitoring.podMonitor.bearerTokenFile }}
bearerTokenFile: {{ .Values.monitoring.podMonitor.bearerTokenFile }}
{{- end }}
{{- if .Values.monitoring.podMonitor.bearerTokenSecret }}
bearerTokenSecret:
name: {{ .Values.monitoring.podMonitor.bearerTokenSecret.name }}
key: {{ .Values.monitoring.podMonitor.bearerTokenSecret.key }}
{{- if .Values.monitoring.podMonitor.bearerTokenSecret.optional }}
optional: {{ .Values.monitoring.podMonitor.bearerTokenSecret.optional }}
{{- end }}
{{- end }}
jobLabel: {{ template "mosquitto.fullname" . }}-prometheus-exporter
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -25,6 +25,12 @@ spec:
targetPort: websocket
protocol: TCP
name: websocket
{{- if .Values.monitoring.sidecar.enabled }}
- port: {{ .Values.monitoring.sidecar.port }}
targetPort: prometheus
protocol: TCP
name: prometheus
{{- end }}
selector:
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -25,6 +25,23 @@ spec:
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
{{- if .Values.monitoring.sidecar.enabled }}
- name: exporter
image: "{{ .Values.monitoring.sidecar.image.repository }}:{{ .Values.monitoring.sidecar.image.tag }}"
imagePullPolicy: {{ .Values.monitoring.sidecar.image.pullPolicy }}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
args:
{{ toYaml .Values.monitoring.sidecar.args | indent 12 }}
env:
{{ toYaml .Values.monitoring.sidecar.envs | indent 12 }}
resources:
{{ toYaml .Values.monitoring.sidecar.resources | indent 12 }}
ports:
- containerPort: {{ .Values.monitoring.sidecar.port }}
name: prometheus
protocol: TCP
{{- end }}
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
@@ -44,6 +61,7 @@ spec:
mountPath: /mosquitto/config
- name: data
mountPath: /mosquitto/data
{{- if .Values.extraVolumeMounts }}{{ toYaml .Values.extraVolumeMounts | trim | nindent 12 }}{{ end }}
volumes:
- name: configmap
configMap:
@@ -57,6 +75,7 @@ spec:
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim }}
{{- end }}
{{- if .Values.extraVolumes }}{{ toYaml .Values.extraVolumes | trim | nindent 8 }}{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -74,3 +74,42 @@ persistence:
size: 5Gi
# customConfig:
# Any extra volumes to define for the pod
extraVolumes: []
# - name: example-name
# hostPath:
# path: /path/on/host
# type: DirectoryOrCreate
# Any extra volume mounts to define for the containers
extraVolumeMounts: []
# - name: example-name
# mountPath: /path/in/container
monitoring:
podMonitor:
enabled: false
sidecar:
enabled: false
port: 9234
args:
- "--use-splitted-config"
envs:
- name: MQTT_CLIENT_ID
value: exporter
- name: BROKER_HOST
valueFrom:
fieldRef:
fieldPath: status.podIP
image:
repository: nolte/mosquitto-exporter
tag: v0.6.3
pullPolicy: IfNotPresent
resources:
limits:
cpu: 300m
memory: 128Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v21.0
description: NZBGet is a Usenet downloader client
name: nzbget
version: 5.0.0
version: 5.0.1
keywords:
- nzbget
- usenet

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: ombi
description: Want a Movie or TV Show on Plex or Emby? Use Ombi!
type: application
version: 4.0.1
version: 4.0.2
appVersion: 4.0.471
keywords:
- ombi

View File

@@ -2,7 +2,7 @@ apiVersion: v2
name: organizr
description: HTPC/Homelab Services Organizer - Written in PHP
type: application
version: 1.0.1
version: 1.0.2
appVersion: latest
keywords:
- organizr

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v3.8.1
description: Program for forwarding ADS-B data to FlightAware
name: piaware
version: 2.0.1
version: 2.1.0
keywords:
- piaware
- flight-aware

View File

@@ -17,6 +17,15 @@ To install the chart with the release name `my-release`:
helm install --name my-release k8s-at-home/piaware
```
### Configuration
There are two main options for this chart: either use a USB device on the node where the pod runs, or use
[readsb](https://hub.docker.com/r/mikenye/readsb) with Beast
#### USB
Set the value
device: "/dev/bus/usb/001/004"
**IMPORTANT NOTE:** A FlightAware USB device must be accessible on the node where this pod runs in order for this chart to function properly.
A way to achieve this can be with nodeAffinity rules, for example:
@@ -35,6 +44,12 @@ affinity:
... where a node with an attached flight-aware USB device is labeled with `app: flight-aware`
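The affinity snippet itself is elided by the diff hunk above; a hedged sketch of such a rule, matching the `app: flight-aware` node label mentioned here, could look like:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: app
              operator: In
              values:
                - flight-aware
```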
#### Beast
Use this together with the [readsb](https://hub.docker.com/r/mikenye/readsb) container.
Set the value
beastHost: <host running readsb>
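A hedged values sketch for Beast mode (the host is a placeholder; 30005 is the usual readsb Beast output port):
```yaml
# Leave `device` unset so the USB hostPath mount and privileged mode are skipped
beastHost: 10.0.1.88
beastPort: 30005
```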
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
@@ -53,12 +68,12 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```console
helm install --name my-release \
--set rtspPassword="nosecrets" \
--set feederId="nosecrets" \
k8s-at-home/piaware
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install --name my-release -f values.yaml stable/piaware
helm install --name my-release -f values.yaml k8s-at-home/piaware
```

View File

@@ -33,8 +33,10 @@ spec:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.device }}
securityContext:
privileged: true
{{- end }}
ports:
- name: http
containerPort: 8080
@@ -56,15 +58,27 @@ spec:
- name: FEEDER_ID
value: "{{ .Values.feederId }}"
{{- end }}
{{- if .Values.beastHost }}
- name: BEASTHOST
value: "{{ .Values.beastHost }}"
{{- end }}
{{- if .Values.beastPort }}
- name: BEASTPORT
value: "{{ .Values.beastPort }}"
{{- end }}
{{- if .Values.device }}
volumeMounts:
- mountPath: {{ .Values.device }}
name: usb
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.device }}
volumes:
- name: usb
hostPath:
path: {{ .Values.device }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -21,7 +21,10 @@ image:
# longitude: "30.66783"
# device where the flight-aware device can be accessed
device: "/dev/bus/usb/001/004"
# device: "/dev/bus/usb/001/004"
# beastHost: 10.0.1.88
# beastPort: 30005
imagePullSecrets: []
nameOverride: ""

View File

@@ -1,8 +1,8 @@
apiVersion: v2
appVersion: 1.20.1.3252
appVersion: 1.20.2.3402
description: Plex Media Server
name: plex
version: 2.0.3
version: 2.0.4
keywords:
- plex
home: https://plex.tv/

View File

@@ -6,7 +6,7 @@
image:
repository: plexinc/pms-docker
tag: 1.20.1.3252-a78fef9a9
tag: 1.20.2.3402-0fec14d92
pullPolicy: IfNotPresent
##### START --> Official PLEX container environment variables

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: 4.2.5
description: qBittorrent is a cross-platform free and open-source BitTorrent client
name: qbittorrent
version: 5.0.0
version: 5.0.1
keywords:
- qbittorrent
- torrent

View File

@@ -1,31 +1,21 @@
apiVersion: v2
name: radarr
appVersion: 3.0.0.3989
description: A fork of Sonarr to work with movies à la Couchpotato
type: application
version: 6.0.1
appVersion: 3.0.0.3591
name: radarr
version: 7.0.0
keywords:
- radarr
- torrent
- usenet
home: https://github.com/k8s-at-home/charts/tree/master/charts/radarr
icon: https://github.com/Radarr/Radarr/blob/aphrodite/Logo/512.png?raw=true
sources:
- https://github.com/Radarr/Radarr
- https://hub.docker.com/r/linuxserver/radarr
maintainers:
- name: DirtyCajunRice
email: nick@cajun.pro
url: https://github.com/dirtycajunrice
- name: billimek
email: jeff@billimek.com
dependencies:
- name: media-common
- name: common
repository: https://k8s-at-home.com/charts/
version: ^1.0.0
alias: radarr
annotations:
artifacthub.io/links: |
- name: App Source
url: https://github.com/Radarr/Radarr
- name: Default Docker Image
url: https://hub.docker.com/r/linuxserver/radarr
artifacthub.io/maintainers: |
- name: Nicholas St. Germain
email: nick@cajun.pro
version: ^1.0.4

View File

@@ -1,4 +1,4 @@
approvers:
- DirtyCajunRice
- billimek
reviewers:
- DirtyCajunRice
- billimek

View File

@@ -1,52 +1,36 @@
# Radarr | A fork of Sonarr to work with movies à la Couchpotato
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/radarr/)
* Deploys [Radarr](https://github.com/Radarr/Radarr)
# Radarr
## TL;DR
```console
This is a helm chart for [Radarr](https://github.com/Radarr/Radarr).
## TL;DR;
```shell
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/radarr
```
## Installing the Chart
To install the chart with the release name `radarr`:
To install the chart with the release name `my-release`:
```console
helm install radarr k8s-at-home/radarr
helm install --name my-release k8s-at-home/radarr
```
## Upgrading
Chart versions before 6.0.0 did not use media-common. Upgrading will require you to nest your values.yaml file under
a top-level `radarr:` key.
Chart versions 3.2.0 and earlier used separate PVCs for Downloads and Movies. This presented an issue where Radarr would
be unable to hard-link files between the /downloads and /movies directories when importing media. This happened because
each PVC is exposed to the pod as a separate filesystem, which resulted in Radarr copying files rather than linking them,
using additional storage without the user's knowledge.
This chart now uses a single PVC for Downloads and Movies. This means all of your media (and downloads) must be in, or
be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, ie. media/movies, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Radarr's
`Movie Editor` under the `Movies` tab. Simply select all movies in your library, and use the editor to change the
`Root Folder` and hit save.
## Uninstalling the Chart
To uninstall the `radarr` deployment:
To uninstall/delete the `my-release` deployment:
```console
helm uninstall radarr
helm delete my-release --purge
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
Read through the chart's [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/radarr/values.yaml)
file. It has several commented out suggested values.
Additionally, you can take a look at the common library [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/common/values.yaml) for more (advanced) configuration options.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
@@ -60,11 +44,9 @@ chart. For example,
helm install radarr k8s-at-home/radarr --values values.yaml
```
These values will be nested as it is a dependency, for example
```yaml
radarr:
image:
tag: ...
image:
tag: ...
```
---
@@ -77,3 +59,18 @@ Error: rendered manifests contain a resource that already exists. Unable to cont
it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
---
## Upgrading an existing Release to a new major version
A major chart version change (like 4.0.1 -> 5.0.0) indicates an incompatible breaking change that may require manual action.
### Upgrading from 6.x.x to 7.x.x
Due to migrating to a centralized common library, some values in `values.yaml` have changed.
Examples:
* `service.port` has been moved to `service.port.port`.
* `persistence.type` has been moved to `controllerType`.
Refer to the library values.yaml for more configuration options.
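As an illustration of the renamed keys above, a sketch only (exact nesting follows the common library values.yaml; values are no longer nested under a top-level `radarr:` key):
```yaml
# 7.x values sketch; comments note the 6.x equivalents
service:
  port:
    port: 7878              # was `service.port`
controllerType: statefulset   # was `persistence.type`
```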

View File

@@ -0,0 +1,2 @@
ingress:
enabled: true

View File

@@ -0,0 +1 @@
{{- include "common.notes.defaultNotes" . -}}

View File

@@ -0,0 +1 @@
{{ include "common.all" . }}

View File

@@ -1,10 +1,37 @@
# Default values for radarr.
# Default values for Radarr.
radarr:
image:
organization: linuxserver
repository: radarr
pullPolicy: IfNotPresent
tag: 3.0.0.3624-ls21
service:
image:
repository: linuxserver/radarr
pullPolicy: IfNotPresent
tag: version-3.0.0.3989
service:
port:
port: 7878
env: {}
# TZ: UTC
# PUID: 1001
# PGID: 1001
persistence:
config:
enabled: true
emptyDir: true
media:
enabled: true
emptyDir: true
mountPath: /media
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 1Gi
## Do not delete the pvc upon helm uninstall
# skipuninstall: false
# existingClaim: ""

Some files were not shown because too many files have changed in this diff.