Compare commits


5 Commits

Author SHA1 Message Date
Feliksas The Lion
fb1c653533 [node-red] Added serviceAccountName to pod spec (#54)
* Added option to specify ServiceAccount (needed to associate a PodSecurityPolicy)

* Bumped chart version, added variable to README

* Bumped version to 3.1.0, as per request
2020-09-21 09:55:26 -04:00
Bernd Schörgers
9e284da7a6 [jackett] Migrate to media-common (#50)
* [jackett] Migrate to media-common

* [jackett] Rename alias

* [jackett] Process review comments
2020-09-21 09:36:29 -04:00
Bernd Schörgers
6929543b6f [nzbget] Migrate to media-common (#49)
* [nzbget] Migrate to media-common

* [nzbget] Migrate to media-common

* [nzbget] Fix indenting in values

Co-authored-by: Jeff Billimek <jeff@billimek.com>
2020-09-19 09:03:23 -04:00
Bernd Schörgers
979349b96f [prometheus-nut-exporter] Fix selector (#52)
* [prometheus-nut-exporter] Fix selector

* [prometheus-nut-exporter] Add tests

* Remove tests because no serviceMonitor in pipeline
2020-09-15 11:25:35 -04:00
Bernd Schörgers
521d473cc0 [prometheus-nut-exporter] New chart (#51) 2020-09-15 08:28:32 -04:00
42 changed files with 586 additions and 1152 deletions

.gitignore vendored
View File

@@ -1,4 +1,4 @@
.env
.idea
charts/*/Chart.lock
charts/*/charts
charts/*/charts

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v0.16.1045
description: API Support for your favorite torrent trackers
name: jackett
version: 3.1.0
version: 4.0.0
keywords:
- jackett
- torrent
@@ -14,3 +14,8 @@ sources:
maintainers:
- name: billimek
  email: jeff@billimek.com
dependencies:
- name: media-common
  repository: https://k8s-at-home.com/charts/
  version: ^1.0.0
  alias: jackett
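With media-common added as a dependency, the subchart has to be fetched before the chart can be linted or installed from a local checkout. A minimal sketch, assuming the chart sits at `charts/jackett` in a clone of this repository:

```console
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm dependency update charts/jackett
```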

View File

@@ -28,82 +28,35 @@ helm delete my-release --purge
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Jackett chart and their default values.
| Parameter | Description | Default |
| -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| `image.repository` | Image repository | `linuxserver/jackett` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/jackett/tags/). | `v0.12.1132-ls37` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the Jackett instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the Jackett instance should run as | `1001` |
| `pgid` | process groupID the Jackett instance should run as | `1001` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.periodSeconds` | Specify liveness `periodSeconds` parameter for the deployment | `10` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.periodSeconds` | Specify readiness `periodSeconds` parameter for the deployment | `10` |
| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the Jackett GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the Jackett GUI is exposed | `9117` |
| `service.annotations` | Service annotations for the Jackett GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | LoadBalancer IP for the Jackett GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.torrentblackhole.enabled` | Use persistent volume to store torrent files | `false` |
| `persistence.torrentblackhole.size` | Size of persistent volume claim | `1Gi` |
| `persistence.torrentblackhole.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.torrentblackhole.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.torrentblackhole.storageClass` | Type of persistent volume claim | `-` |
| `persistence.torrentblackhole.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.torrentblackhole.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
| `hostNetwork` | Specify whether pods should use host networking | `false` |
| `dnsPolicy` | Set the DNS policy for pods, ex: ClusterFirst, ClusterFirstWithHostNet. See info [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) | `ClusterFirst` |
| `dnsConfig` | Specify DNS options for pods, see values.yaml for details, or see [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config) | `{}` |
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
--set timezone="America/New_York" \
helm install jackett \
--set jackett.env.TZ="America/New_York" \
k8s-at-home/jackett
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,
```console
helm install --name my-release -f values.yaml k8s-at-home/jackett
helm install jackett k8s-at-home/jackett --values values.yaml
```
These values will be nested, since the chart is installed as a dependency. For example:
```yaml
jackett:
  image:
    tag: ...
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
---
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/jackett/values.yaml) file. It has several commented out suggested values.
---
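If the conflict comes from a claim that was kept back by `skipuninstall`, the old PVC can be located and removed by hand before reinstalling; a rough sketch, with namespace and claim name left as placeholders:

```console
kubectl get pvc --namespace <namespace>
kubectl delete pvc <old-jackett-claim> --namespace <namespace>
```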

View File

@@ -0,0 +1,10 @@
jackett:
  image:
    organization: linuxserver
    repository: jackett
    tag: v0.16.1045-ls14
  service:
    type: ClusterIP
    port: 9117
  ingress:
    enabled: false

View File

@@ -1,19 +1,20 @@
{{- $svcPort := .Values.jackett.service.port -}}
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- if .Values.jackett.ingress.enabled }}
{{- range .Values.jackett.ingress.hosts }}
http{{ if $.Values.jackett.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.jackett.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "jackett.fullname" . }})
{{- else if contains "NodePort" .Values.jackett.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
{{- else if contains "LoadBalancer" .Values.jackett.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ include "jackett.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "jackett.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "jackett.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9117 to use your application"
kubectl port-forward $POD_NAME 9117:80
{{- end }}
You can watch the status of it by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ $svcPort }}
{{- else if contains "ClusterIP" .Values.jackett.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ $svcPort }}
{{- end }}

View File

@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "jackett.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "jackett.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "jackett.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "jackett.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
helm.sh/chart: {{ include "jackett.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,122 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "jackett.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
helm.sh/chart: {{ include "jackett.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
hostNetwork: {{ .Values.hostNetwork }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- if .Values.dnsConfig }}
dnsConfig: {{ toYaml .Values.dnsConfig | nindent 8}}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 9117
protocol: TCP
livenessProbe:
tcpSocket:
port: http
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
periodSeconds: {{ .Values.probes.liveness.periodSeconds }}
readinessProbe:
tcpSocket:
port: http
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
periodSeconds: {{ .Values.probes.readiness.periodSeconds }}
startupProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.startup.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.startup.failureThreshold }}
periodSeconds: {{ .Values.probes.startup.periodSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
volumeMounts:
- mountPath: /config
name: config
{{- if .Values.persistence.config.subPath }}
subPath: "{{ .Values.persistence.config.subPath }}"
{{- end }}
- mountPath: /downloads
name: torrentblackhole
{{- if .Values.persistence.torrentblackhole.subPath }}
subPath: "{{ .Values.persistence.torrentblackhole.subPath }}"
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "jackett.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{ end }}
- name: torrentblackhole
{{- if .Values.persistence.torrentblackhole.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.torrentblackhole.existingClaim }}{{ .Values.persistence.torrentblackhole.existingClaim }}{{- else }}{{ template "jackett.fullname" . }}-torrentblackhole{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "jackett.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
helm.sh/chart: {{ include "jackett.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.labels -}}
{{ toYaml . | nindent 4 }}
{{- end -}}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}

View File

@@ -0,0 +1,22 @@
{{- if and .Values.jackett.persistence.torrentblackhole.enabled (not .Values.jackett.persistence.torrentblackhole.existingClaim) }}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}-downloads
{{- if .Values.jackett.persistence.torrentblackhole.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.jackett.persistence.torrentblackhole.accessMode | quote }}
resources:
requests:
storage: {{ .Values.jackett.persistence.torrentblackhole.size | quote }}
{{- if .Values.jackett.persistence.torrentblackhole.storageClass }}
storageClassName: {{ if (eq "-" .Values.jackett.persistence.torrentblackhole.storageClass) }}""{{- else }}{{ .Values.jackett.persistence.torrentblackhole.storageClass | quote}}{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,53 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "jackett.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
helm.sh/chart: {{ include "jackett.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{end}}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "jackett.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.torrentblackhole.enabled (not .Values.persistence.torrentblackhole.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "jackett.fullname" . }}-torrentblackhole
{{- if .Values.persistence.torrentblackhole.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "jackett.name" . }}
helm.sh/chart: {{ include "jackett.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.torrentblackhole.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.torrentblackhole.size | quote }}
{{- if .Values.persistence.torrentblackhole.storageClass }}
{{- if (eq "-" .Values.persistence.torrentblackhole.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.torrentblackhole.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,149 +1,43 @@
# Default values for Jackett.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: linuxserver/jackett
tag: v0.16.1045-ls14
pullPolicy: IfNotPresent
jackett:
image:
organization: linuxserver
repository: jackett
pullPolicy: IfNotPresent
tag: v0.16.1045-ls14
# upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
service:
port: 9117
# Probes configuration
probes:
liveness:
failureThreshold: 5
periodSeconds: 10
readiness:
failureThreshold: 5
periodSeconds: 10
startup:
initialDelaySeconds: 5
failureThreshold: 30
periodSeconds: 10
env: {}
# TZ: UTC
# PUID: 1001
# PGID: 1001
nameOverride: ""
fullnameOverride: ""
persistence:
torrentblackhole:
enabled: false
## Jackett torrent torrentblackhole Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 1Gi
## Do not delete the pvc upon helm uninstall
# skipuninstall: false
# existingClaim: ""
timezone: UTC
puid: 1001
pgid: 1001
additionalVolumes:
- name: torrentblackhole
emptyDir: {}
## When using persistence.torrentblackhole.enabled: true, adjust this to:
# persistentVolumeClaim:
# claimName: jackett-torrentblackhole
service:
type: ClusterIP
port: 9117
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
## Use loadBalancerIP to request a specific static IP,
## otherwise leave blank
##
loadBalancerIP:
# loadBalancerSourceRanges: []
## Set the externalTrafficPolicy in the Service to either Cluster or Local
# externalTrafficPolicy: Cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
persistence:
config:
enabled: true
## Jackett configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 1Gi
## If subPath is set mount a sub folder of a volume instead of the root of the volume.
## This is especially handy for volume plugins that don't natively support sub mounting (like glusterfs).
##
subPath: ""
## Do not delete the pvc upon helm uninstall
skipuninstall: false
torrentblackhole:
enabled: false
## Jackett torrentblackhole directory volume configuration
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
dnsPolicy: ClusterFirst
dnsConfig: {}
# dnsConfig may be used with any dnsPolicy, but is required when dnsPolicy: "None"
# To use, remove the braces above, and uncomment/modify the following lines.
# See https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
# for additional information
# nameservers:
# - 1.1.1.1
# searches:
# - ns1.mysearch.domain
# options:
# - name: ndots
# value: "1"
hostNetwork: false
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
deploymentAnnotations: {}
additionalVolumeMounts:
- name: torrentblackhole
mountPath: /downloads
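Following the comment on `additionalVolumes` above, a sketch of an override that backs the torrent blackhole with an existing claim instead of the default `emptyDir` (the claim name is only an example):

```yaml
jackett:
  additionalVolumes:
    - name: torrentblackhole
      persistentVolumeClaim:
        claimName: jackett-torrentblackhole
  additionalVolumeMounts:
    - name: torrentblackhole
      mountPath: /downloads
```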

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: 1.0.6-12
description: Node-RED is low-code programming for event-driven applications
name: node-red
version: 3.0.0
version: 3.1.0
keywords:
- nodered
- node-red

View File

@@ -42,6 +42,7 @@ The following table lists the configurable parameters of the Node-RED chart and
| `image.tag` | node-red image tag | `1.0.6-12-minimal` |
| `image.pullPolicy` | node-red image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `serviceAccountName` | Service account to run the pod as | `""` |
| `livenessProbePath` | Default livenessProbe path | `/` |
| `readinessProbePath` | Default readinessProbe path | `/` |
| `flows` | Default flows configuration | `flows.json` |

View File

@@ -33,6 +33,9 @@ spec:
{{- end }}
{{- end }}
spec:
{{- if .Values.serviceAccountName }}
serviceAccountName: {{ .Values.serviceAccountName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

View File

@@ -13,6 +13,8 @@ image:
nameOverride: ""
fullnameOverride: ""
serviceAccountName: ""
livenessProbePath: /
readinessProbePath: /
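To make use of the new value, an existing release can be upgraded with the ServiceAccount it should run under (the release and account names here are only examples); the account, and any PodSecurityPolicy binding, must already exist:

```console
helm upgrade node-red k8s-at-home/node-red --set serviceAccountName=node-red
```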

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v21.0
description: NZBGet is a Usenet downloader client
name: nzbget
version: 4.0.1
version: 5.0.0
keywords:
- nzbget
- usenet
@@ -14,3 +14,8 @@ sources:
maintainers:
- name: billimek
  email: jeff@billimek.com
dependencies:
- name: media-common
  repository: https://k8s-at-home.com/charts/
  version: ^1.0.0
  alias: nzbget

View File

@@ -33,75 +33,35 @@ helm delete my-release --purge
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the NZBGet chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `linuxserver/nzbget` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/nzbget/tags/).| `v21.0-ls14`|
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the nzbget instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the nzbget instance should run as | `1001` |
| `pgid` | process groupID the nzbget instance should run as | `1001` |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the nzbget GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the nzbget GUI is exposed | `6789` |
| `service.annotations` | Service annotations for the nzbget GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | LoadBalancer IP for the nzbget GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.downloads.enabled` | Use persistent volume to store downloads data | `true` |
| `persistence.downloads.size` | Size of persistent volume claim | `10Gi` |
| `persistence.downloads.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.downloads.storageClass` | Type of persistent volume claim | `-` |
| `persistence.downloads.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.downloads.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraMounts` | Array of additional claims to mount | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as pod annotations | `{}` |
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
--set timezone="America/New_York" \
helm install nzbget \
--set nzbget.env.TZ="America/New_York" \
k8s-at-home/nzbget
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,
```console
helm install --name my-release -f values.yaml stable/nzbget
helm install nzbget k8s-at-home/nzbget --values values.yaml
```
These values will be nested, since the chart is installed as a dependency. For example:
```yaml
nzbget:
  image:
    tag: ...
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
---
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/nzbget/values.yaml) file. It has several commented out suggested values.

View File

@@ -0,0 +1,10 @@
nzbget:
  image:
    organization: linuxserver
    repository: nzbget
    tag: latest
  service:
    type: ClusterIP
    port: 6789
  ingress:
    enabled: false

View File

@@ -1,21 +1,23 @@
{{- $svcPort := .Values.nzbget.service.port -}}
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- if .Values.nzbget.ingress.enabled }}
{{- range .Values.nzbget.ingress.hosts }}
http{{ if $.Values.nzbget.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.nzbget.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "nzbget.fullname" . }})
{{- else if contains "NodePort" .Values.nzbget.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
{{- else if contains "LoadBalancer" .Values.nzbget.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ include "nzbget.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "nzbget.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "nzbget.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
You can watch the status of it by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ $svcPort }}
{{- else if contains "ClusterIP" .Values.nzbget.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
kubectl port-forward $POD_NAME 8080:{{ $svcPort }}
{{- end }}
The default login to the GUI is login:nzbget, password:tegbzn6789
The default login to the GUI is login:nzbget, password:tegbzn6789
You should change this as soon as possible!

View File

@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nzbget.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nzbget.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nzbget.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "nzbget.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,140 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "nzbget.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 6789
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
volumeMounts:
- mountPath: /config
name: config
- mountPath: /downloads
name: downloads
{{- if .Values.persistence.downloads.subPath }}
subPath: {{ .Values.persistence.downloads.subPath }}
{{ end }}
{{- range .Values.persistence.extraMounts }}
{{- if .mountPath }}
- mountPath: /{{ .mountPath }}
{{- else }}
- mountPath: /{{ .name }}
{{- end }}
name: {{ .name }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.openvpn.enabled }}
- name: openvpn
image: "{{ .Values.openvpn.image.repository }}:{{ .Values.openvpn.image.tag }}"
imagePullPolicy: {{ .Values.openvpn.image.pullPolicy }}
securityContext:
capabilities:
add: ["NET_ADMIN"]
{{- if .Values.openvpn.env }}
envFrom:
- secretRef:
name: {{ template "nzbget.fullname" . }}-openvpnenv
{{- end }}
{{- if .Values.openvpn.vpnConf }}
volumeMounts:
- name: openvpnconf
mountPath: /vpn/vpn.conf
subPath: vpnConf
{{- end }}
env:
- name: NETWORK_POLICY_ENABLED
value: {{ .Values.openvpn.networkPolicy.enabled | quote }}
{{- end }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "nzbget.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{ end }}
- name: downloads
{{- if .Values.persistence.downloads.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.downloads.existingClaim }}{{ .Values.persistence.downloads.existingClaim }}{{- else }}{{ template "nzbget.fullname" . }}-downloads{{- end }}
{{- else }}
emptyDir: {}
{{ end }}
{{- if .Values.openvpn.vpnConf }}
- name: openvpnconf
configMap:
name: {{ template "nzbget.fullname" . }}-openvpnconf
{{ end }}
{{- range .Values.persistence.extraMounts }}
- name: {{ .name }}
persistentVolumeClaim:
claimName: {{ .claimName }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.downloads.enabled (not .Values.persistence.downloads.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "nzbget.fullname" . }}-downloads
{{- if .Values.persistence.downloads.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.downloads.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.downloads.size | quote }}
{{- if .Values.persistence.downloads.storageClass }}
{{- if (eq "-" .Values.persistence.downloads.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.downloads.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "nzbget.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.labels -}}
{{ toYaml . | nindent 4 }}
{{- end -}}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}

View File

@@ -1,16 +0,0 @@
{{- if and .Values.openvpn.enabled .Values.openvpn.vpnConf}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "nzbget.fullname" . }}-openvpnconf
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
{{- if .Values.openvpn.vpnConf }}
vpnConf: |-
{{- .Values.openvpn.vpnConf | nindent 4}}
{{- end }}
{{- end -}}

View File

@@ -1,20 +0,0 @@
{{- if and .Values.openvpn.enabled ( or .Values.openvpn.env .Values.openvpn.auth )}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "nzbget.fullname" . }}-openvpnenv
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
{{- if .Values.openvpn.auth }}
VPN_AUTH: {{ .Values.openvpn.auth | b64enc }}
{{- end }}
{{- if .Values.openvpn.env }}
{{- range $k, $v := .Values.openvpn.env }}
{{ $k }}: {{ $v | b64enc }}
{{- end }}
{{- end -}}
{{- end -}}

View File

@@ -1,17 +0,0 @@
{{- if .Values.openvpn.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "nzbget.fullname" . }}-deny-all-netpol
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
policyTypes:
- Egress
egress:
{{- if .Values.openvpn.networkPolicy.egress }}
{{- .Values.openvpn.networkPolicy.egress | toYaml | nindent 4 }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,22 @@
{{- if and .Values.nzbget.persistence.downloads.enabled (not .Values.nzbget.persistence.downloads.existingClaim) }}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}-downloads
{{- if .Values.nzbget.persistence.downloads.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.nzbget.persistence.downloads.accessMode | quote }}
resources:
requests:
storage: {{ .Values.nzbget.persistence.downloads.size | quote }}
{{- if .Values.nzbget.persistence.downloads.storageClass }}
storageClassName: {{ if (eq "-" .Values.nzbget.persistence.downloads.storageClass) }}""{{- else }}{{ .Values.nzbget.persistence.downloads.storageClass | quote}}{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,53 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nzbget.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
helm.sh/chart: {{ include "nzbget.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{end}}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "nzbget.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,179 +1,40 @@
# Default values for nzbget.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: linuxserver/nzbget
tag: v21.0-ls61
pullPolicy: IfNotPresent
# upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
# Probes configuration
probes:
liveness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
nameOverride: ""
fullnameOverride: ""
timezone: UTC
puid: 1001
pgid: 1001
service:
type: ClusterIP
port: 6789
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
## Use loadBalancerIP to request a specific static IP,
## otherwise leave blank
##
loadBalancerIP:
# loadBalancerSourceRanges: []
## Set the externalTrafficPolicy in the Service to either Cluster or Local
# externalTrafficPolicy: Cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
openvpn:
# Enables an openvpn sidecar that when configured properly will provide a
# Secure outbound VPN for use by NZBGet.
enabled: false
nzbget:
image:
repository: dperson/openvpn-client
tag: latest
organization: linuxserver
repository: nzbget
pullPolicy: IfNotPresent
tag: v21.0-ls61
service:
port: 6789
# All variables specified here will be added to the openvpn sidecar container
# Ref https://hub.docker.com/r/dperson/openvpn-client for all config values
env: []
# DNS: "true"
# TZ: EST5EDT
# Provide a customized vpn.conf file to be used by openvpn.
vpnConf: # |-
# Some Example Config
# remote greatvpnhost.com 8888
# auth-user-pass
# Cipher AES
# Credentials to connect to the VPN Service (used with -a)
auth: # "user;password"
# If set to true, will deploy a network policy that blocks all outbound
# traffic except traffic specified as allowed
networkPolicy:
# Configure the OpenVPN add-on
openvpn:
enabled: false
# The egress configuration for your network policy, All outbound traffic
# From the pod will be blocked unless specified here. Your cluster must
# have a CNI that supports network policies (Canal, Calico, etc...)
# https://kubernetes.io/docs/concepts/services-networking/network-policies/
# https://github.com/ahmetb/kubernetes-network-policy-recipes
egress:
# - to:
# - ipBlock:
# cidr: 0.0.0.0/0
# ports:
# - port: 53
# protocol: UDP
# - port: 53
# protocol: TCP
persistence:
downloads:
enabled: false
## nzbget downloads Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
# storageClass: "-"
# accessMode: ReadWriteOnce
# size: 1Gi
## Do not delete the pvc upon helm uninstall
# skipuninstall: false
persistence:
config:
enabled: true
## nzbget configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
downloads:
enabled: true
## nzbget torrents downloads volume configuration
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 10Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
extraMounts: []
## Include additional claims that can be mounted inside the
## pod. This is useful if you wish to use different paths with categories
## Claim will be mounted as /{mountPath} if specified. If no {mountPath} is given,
## mountPath will default to {name}
# - name: video
# claimName: video-claim
# mountPath: /mnt/path/in/pod
additionalVolumes:
- name: downloads
emptyDir: {}
## When using persistence.downloads.enabled: true, adjust this to:
# persistentVolumeClaim:
# claimName: nzbget-downloads
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
deploymentAnnotations: {}
additionalVolumeMounts:
- name: downloads
mountPath: /downloads
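As the storage-class comments above describe, a literal `-` forces `storageClassName: ""` while leaving it unset picks the cluster default. A sketch that enables the chart-managed downloads claim and points the volume list at it (size and claim name are only examples; the claim name follows the comment above):

```yaml
nzbget:
  persistence:
    downloads:
      enabled: true
      accessMode: ReadWriteOnce
      size: 10Gi
      # storageClass: "-"   # "-" would render storageClassName: "" (no dynamic provisioning)
  additionalVolumes:
    - name: downloads
      persistentVolumeClaim:
        claimName: nzbget-downloads
```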

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,16 @@
apiVersion: v2
name: prometheus-nut-exporter
description: A Helm chart for Kubernetes
type: application
version: 1.0.1
appVersion: 1.0.1
keywords:
- nut
- prometheus
home: https://github.com/k8s-at-home/charts/tree/master/charts/prometheus-nut-exporter
icon: https://www.iconfinder.com/data/icons/wpzoom-developer-icon-set/500/125-512.png
sources:
- https://github.com/HON95/prometheus-nut-exporter
maintainers:
- name: billimek
  email: jeff@billimek.com

View File

@@ -0,0 +1,50 @@
# Prometheus NUT Exporter
This Helm chart provides a service monitor to send NUT server metrics to a Prometheus instance. Based on [Prometheus NUT Exporter](https://github.com/HON95/prometheus-nut-exporter).
## TL;DR;
```console
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm install k8s-at-home/prometheus-nut-exporter
```
## Installing the Chart
To install the chart with the release name `prometheus-nut-exporter`:
```console
helm install prometheus-nut-exporter k8s-at-home/prometheus-nut-exporter
```
## Uninstalling the Chart
To uninstall/delete the `prometheus-nut-exporter` deployment:
```console
helm delete prometheus-nut-exporter
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/prometheus-nut-exporter/values.yaml) file. It has several commented out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install prometheus-nut-exporter \
--set serviceMonitor.enabled=true \
k8s-at-home/prometheus-nut-exporter
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install prometheus-nut-exporter -f values.yaml k8s-at-home/prometheus-nut-exporter
```
## Metrics
You can find the exported metrics here: [metrics](https://github.com/HON95/prometheus-nut-exporter/blob/master/metrics.md).
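The `serviceMonitor.enabled` switch from the install example above can equally be set in a values file; a minimal sketch:

```yaml
serviceMonitor:
  enabled: true
```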

View File

@@ -0,0 +1,15 @@
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "prometheus-nut-exporter.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "prometheus-nut-exporter.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "prometheus-nut-exporter.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "prometheus-nut-exporter.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:{{ .Values.service.port }}
{{- end }}

View File

@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "prometheus-nut-exporter.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "prometheus-nut-exporter.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "prometheus-nut-exporter.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "prometheus-nut-exporter.labels" -}}
helm.sh/chart: {{ include "prometheus-nut-exporter.chart" . }}
{{ include "prometheus-nut-exporter.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "prometheus-nut-exporter.selectorLabels" -}}
app.kubernetes.io/name: {{ include "prometheus-nut-exporter.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "prometheus-nut-exporter.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "prometheus-nut-exporter.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,72 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "prometheus-nut-exporter.fullname" . }}
labels:
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "prometheus-nut-exporter.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
{{- if .Values.env }}
env:
{{- range $key, $value := .Values.env }}
- name: {{ $key | quote }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
livenessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "prometheus-nut-exporter.fullname" . }}
labels:
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "prometheus-nut-exporter.serviceAccountName" . }}
labels:
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,22 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "prometheus-nut-exporter.fullname" . }}
labels:
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 6 }}
endpoints:
{{- range .Values.serviceMonitor.targets }}
- port: http
interval: 15s
scrapeTimeout: 10s
path: "/nut"
params:
target:
- "{{ .hostname }}:{{ .port }}"
{{- end }}
{{- end }}
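{{/*
Example (sketch): with a serviceMonitor.targets entry of
  - hostname: nut-server
    port: 3493
the range above renders one endpoint per target, scraping
"/nut?target=nut-server:3493" on the exporter's http port every 15s.
*/}}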

View File

@@ -0,0 +1,79 @@
# Default values for prometheus-nut-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: hon95/prometheus-nut-exporter
pullPolicy: IfNotPresent
tag: "1.0.1"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
env: {}
# TZ: UTC
serviceMonitor:
enabled: false
# Specify the list of NUT servers that should be monitored
targets: []
# - hostname: nut-server
# port: 3493
# Probes configuration
probes:
liveness:
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 30
failureThreshold: 5
timeoutSeconds: 10
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 9995
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}