Compare commits

26 commits, comparing `sonarr-6.0` ... `node-red-3`

| SHA1 |
|---|
| fb1c653533 |
| 9e284da7a6 |
| 6929543b6f |
| 979349b96f |
| 521d473cc0 |
| 00f3ce5523 |
| f7e980ab9c |
| a037936b3e |
| 63b87146a3 |
| a5b55b33e4 |
| 2eedb285e8 |
| 54d5f5aaeb |
| f8babcb5a2 |
| f15926425f |
| b6ec5f8e71 |
| 8158841f31 |
| ff58303989 |
| 614f2bd25f |
| ca2c348e6d |
| 5bff2ae5ed |
| 1bd47c326a |
| 7d06c3d5e3 |
| 451d0510c2 |
| cd06a6fb61 |
| 1eb548d382 |
| befa7553fa |
.gitignore (vendored, +2)
@@ -1,2 +1,4 @@
 .env
 .idea
+charts/*/Chart.lock
+charts/*/charts
.pre-commit-config.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
# See https://pre-commit.com for more information
repos:
  - repo: local
    hooks:
      - id: ct-lint
        name: "Chart Test: Lint"
        language: docker_image
        pass_filenames: false
        types: ['file']
        files: '^charts/.*(\.ya?ml|\.tpl|\.helmignore|NOTES.txt)'
        entry: -u 0 quay.io/helmpack/chart-testing:v3.0.0 ct
        args:
          - lint
@@ -33,9 +33,9 @@ See `git help commit`:

### Technical Requirements

-* Must follow [Charts best practices](https://helm.sh/docs/topics/chart_best_practices/)
-* Must pass CI jobs for linting and installing changed charts with the [chart-testing](https://github.com/helm/chart-testing) tool
-* Any change to a chart requires a version bump following [semver](https://semver.org/) principles. See [Immutability(#immutability) and [Versioning](#versioning) below
+* Must follow [Charts best practices](https://helm.sh/docs/topics/chart_best_practices/).
+* Must pass CI jobs for linting and installing changed charts with the [chart-testing](https://github.com/helm/chart-testing) tool. See [pre-commit](#pre-commit) below.
+* Any change to a chart requires a version bump following [semver](https://semver.org/) principles. See [Immutability](#immutability) and [Versioning](#versioning) below.

Once changes have been merged, the release job will automatically run to package and release changed charts.

@@ -51,3 +51,7 @@ Charts should start at `1.0.0`. Any breaking (backwards incompatible) changes to

1. Bump the MAJOR version
2. In the README, under a section called "Upgrading", describe the manual steps necessary to upgrade to the new (specified) MAJOR version
### pre-commit

This repo supports the [pre-commit](https://pre-commit.com) framework. By installing the framework (see the [docs](https://pre-commit.com/#install)) it is possible to perform the chart linting step before committing your code. This can help prevent linter issues in the pipeline. Note that this requires Docker to be running on your development machine.
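
A minimal sketch of that workflow, assuming you follow the pre-commit installation docs (the package manager and exact commands depend on your environment):

```shell
# Install the framework (see https://pre-commit.com/#install for alternatives)
pip install pre-commit

# Register the git hook in your local clone of this repo
pre-commit install

# Run the chart-testing lint hook against all charts (requires Docker)
pre-commit run --all-files
```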
@@ -2,7 +2,8 @@

[Apache-2.0 License](https://opensource.org/licenses/Apache-2.0)
[GitHub Actions](https://github.com/k8s-at-home/charts/actions)

[pre-commit](https://github.com/pre-commit/pre-commit)
[Artifact Hub](https://artifacthub.io/packages/search?repo=k8s-at-home)

## Usage

[Helm](https://helm.sh) must be installed to use the charts.
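
For example, adding this repository once Helm is installed (the same commands the chart TL;DR sections below use):

```shell
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm repo update
```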
charts/README.templates.md.gotmpl (new file, 72 lines)
@@ -0,0 +1,72 @@
{{- define "repository.organization" -}}
k8s-at-home
{{- end -}}

{{- define "repository.url" -}}
https://github.com/k8s-at-home/charts
{{- end -}}

{{- define "helm.url" -}}
https://k8s-at-home.com/charts/
{{- end -}}

{{- define "helm.path" -}}
{{ template "repository.organization" . }}/{{ template "chart.name" . }}
{{- end -}}
{{- define "badge.artifactHub" -}}
[](https://artifacthub.io/packages/helm/{{ template "chart.name" . }})
{{- end -}}
{{- define "description.multiarch" -}}
The default values and container images used in this chart will allow for running in a multi-arch cluster (amd64, arm, arm64)
{{- end -}}

{{- define "install.tldr" -}}
## TL;DR
```console
$ helm repo add {{ template "repository.organization" . }} {{ template "helm.url" . }}
$ helm install {{ template "helm.path" . }}
```
{{- end -}}

{{- define "install" -}}
## Installing the Chart
To install the chart with the release name `{{ template "chart.name" . }}`:
```console
helm install {{ template "chart.name" . }} {{ template "helm.path" . }}
```
{{- end -}}

{{- define "uninstall" -}}
## Uninstalling the Chart
To uninstall the `{{ template "chart.name" . }}` deployment:
```console
helm uninstall {{ template "chart.name" . }}
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
{{- end -}}

{{- define "configuration.header" -}}
## Configuration
{{- end -}}

{{- define "configuration.readValues" -}}
Read through the [values.yaml]({{ template "repository.url" . }}/blob/master/charts/{{ template "chart.name" . }}/values.yaml)
file. It has several commented out suggested values.
{{- end -}}

{{- define "configuration.example.set" -}}
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install {{ template "chart.name" . }} \
  --set env.TZ="America/New York" \
  {{ template "helm.path" . }}
```
{{- end -}}

{{- define "configuration.example.file" -}}
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart.
For example,
```console
helm install {{ template "chart.name" . }} {{ template "helm.path" . }} --values values.yaml
```
{{- end -}}
@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: 0.114.0
description: Home Assistant
name: home-assistant
-version: 2.0.0
+version: 2.1.0
keywords:
  - home-assistant
  - hass
@@ -222,7 +222,18 @@ Much of the home assistant configuration occurs inside the various files persist

## Git sync secret

-In order to sync the home assistant from a git repo, you have to store a ssh key as a kubernetes git secret
+In order to sync the Home Assistant configuration from a git repo, you can optionally store an ssh key as a Kubernetes secret:
```shell
kubectl create secret generic git-creds --from-file=id_rsa=git/k8s_id_rsa --from-file=known_hosts=git/known_hosts --from-file=id_rsa.pub=git/k8s_id_rsa.pub
```

## git-crypt support

When using Git sync it is possible to specify a file called `git-crypt-key` in the secret referred to in `git.secret`. When this file is present, `git-crypt unlock` will automatically be executed after the repo has been synced.

**Note:** `git-crypt` is not installed by default in the other images! If you wish to push changes from the VS Code or Configurator containers, you will have to make sure that it is installed.

The value for this secret can be obtained by running the following command in an unlocked copy of your Home Assistant settings repo. It exports the unlock key, base64-encodes it, and copies it to your clipboard.
```shell
git-crypt export-key ./tmp-key && cat ./tmp-key | base64 | pbcopy && rm ./tmp-key
```
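
A minimal sketch of wiring that key into the secret referenced by `git.secret` (file names mirror the example above; the chart expects the key file to be named `git-crypt-key`):

```shell
# Export the raw unlock key, then recreate the git-creds secret with it included.
# Delete the existing secret first if it already exists.
git-crypt export-key ./tmp-key
kubectl create secret generic git-creds \
  --from-file=id_rsa=git/k8s_id_rsa \
  --from-file=known_hosts=git/known_hosts \
  --from-file=id_rsa.pub=git/k8s_id_rsa.pub \
  --from-file=git-crypt-key=./tmp-key
rm ./tmp-key
```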
@@ -48,7 +48,28 @@ spec:
            - {{ . | quote }}
          {{- end }}
        {{- else }}
-          command: ['sh', '-c', '[ "$(ls {{ .Values.git.syncPath }})" ] || git clone {{ .Values.git.repo }} {{ .Values.git.syncPath }}']
+          command: ["/bin/sh", "-c"]
+          args:
+            - set -e;
+              if [ -d "{{ .Values.git.syncPath }}/.git" ];
+              then
+                git -C "{{ .Values.git.syncPath }}" pull || true;
+              else
+                if [ "$(ls -A {{ .Values.git.syncPath }})" ];
+                then
+                  git clone --depth 2 "{{ .Values.git.repo }}" /tmp/repo;
+                  cp -rf /tmp/repo/.git "{{ .Values.git.syncPath }}";
+                  cd "{{ .Values.git.syncPath }}";
+                  git checkout -f;
+                else
+                  git clone --depth 2 "{{ .Values.git.repo }}" "{{ .Values.git.syncPath }}";
+                fi;
+              fi;
+              if [ -f "{{ .Values.git.keyPath }}/git-crypt-key" ];
+              then
+                cd {{ .Values.git.syncPath }};
+                git-crypt unlock "{{ .Values.git.keyPath }}/git-crypt-key";
+              fi;
        {{- end }}
          volumeMounts:
            - mountPath: /config

@@ -396,6 +417,7 @@ spec:
        secret:
          defaultMode: 256
          secretName: {{ .Values.git.secret }}
+          optional: true
      {{ end }}
      {{- if .Values.extraVolumes }}{{ toYaml .Values.extraVolumes | trim | nindent 6 }}{{ end }}
      {{- with .Values.nodeSelector }}
@@ -118,12 +118,9 @@ usePodSecurityContext: true
git:
  enabled: false

-  ## we just use the hass-configurator container image
-  ## you can use any image which has git and openssh installed
-  ##
  image:
-    repository: causticlab/hass-configurator-docker
-    tag: 0.3.5-x86_64
+    repository: k8sathome/git-crypt
+    tag: 2020.09.07
    pullPolicy: IfNotPresent

  ## Specify the command that runs in the git-sync container to pull in configuration.
@@ -134,7 +131,7 @@ git:
  name: ""
  email: ""

-  # repo:
+  repo: ""
  secret: git-creds
  syncPath: /config
  keyPath: /root/.ssh
@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v0.16.1045
description: API Support for your favorite torrent trackers
name: jackett
-version: 3.1.0
+version: 4.0.0
keywords:
  - jackett
  - torrent
@@ -14,3 +14,8 @@ sources:
maintainers:
  - name: billimek
    email: jeff@billimek.com
+dependencies:
+  - name: media-common
+    repository: https://k8s-at-home.com/charts/
+    version: ^1.0.0
+    alias: jackett
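
With `media-common` now declared as a dependency, the subchart has to be fetched before installing or linting from a local checkout; a minimal sketch (the chart path assumes the repository layout shown in the file headers):

```shell
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm dependency update charts/jackett
```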
@@ -28,82 +28,35 @@ helm delete my-release --purge
The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

-The following tables lists the configurable parameters of the Sentry chart and their default values.

-| Parameter | Description | Default |
-|---|---|---|
-| `image.repository` | Image repository | `linuxserver/jackett` |
-| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/jackett/tags/). | `v0.12.1132-ls37` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
-| `timezone` | Timezone the Jackett instance should run as, e.g. 'America/New_York' | `UTC` |
-| `puid` | process userID the Jackett instance should run as | `1001` |
-| `pgid` | process groupID the Jackett instance should run as | `1001` |
-| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
-| `probes.liveness.periodSeconds` | Specify liveness `periodSeconds` parameter for the deployment | `10` |
-| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
-| `probes.readiness.periodSeconds` | Specify readiness `periodSeconds` parameter for the deployment | `10` |
-| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
-| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
-| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
-| `Service.type` | Kubernetes service type for the Jackett GUI | `ClusterIP` |
-| `Service.port` | Kubernetes port where the Jackett GUI is exposed | `9117` |
-| `Service.annotations` | Service annotations for the Jackett GUI | `{}` |
-| `Service.labels` | Custom labels | `{}` |
-| `Service.loadBalancerIP` | Loadbalance IP for the Jackett GUI | `{}` |
-| `Service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
-| `ingress.enabled` | Enables Ingress | `false` |
-| `ingress.annotations` | Ingress annotations | `{}` |
-| `ingress.labels` | Custom labels | `{}` |
-| `ingress.path` | Ingress path | `/` |
-| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
-| `ingress.tls` | Ingress TLS configuration | `[]` |
-| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
-| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
-| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
-| `persistence.config.subPath` | Mount a sub directory of the persistent volume if set | `""` |
-| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
-| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
-| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
-| `persistence.torrentblackhole.enabled` | Use persistent volume to store torrent files | `false` |
-| `persistence.torrentblackhole.size` | Size of persistent volume claim | `1Gi` |
-| `persistence.torrentblackhole.existingClaim` | Use an existing PVC to persist data | `nil` |
-| `persistence.torrentblackhole.subPath` | Mount a sub directory of the persistent volume if set | `""` |
-| `persistence.torrentblackhole.storageClass` | Type of persistent volume claim | `-` |
-| `persistence.torrentblackhole.accessMode` | Persistence access mode | `ReadWriteOnce` |
-| `persistence.torrentblackhole.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
-| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
-| `resources` | CPU/Memory resource requests/limits | `{}` |
-| `nodeSelector` | Node labels for pod assignment | `{}` |
-| `tolerations` | Toleration labels for pod assignment | `[]` |
-| `affinity` | Affinity settings for pod assignment | `{}` |
-| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
-| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
-| `hostNetwork` | Specify whether pods should use host networking | `false` |
-| `dnsPolicy` | Set the DNS policy for pods, ex: ClusterFirst, ClusterFirstWithHostNet. See info [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) | `ClusterFirst` |
-| `dnsConfig` | Specify DNS options for pods, see values.yaml for details, or see [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config) | `{}` |
+Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
+file. It has several commented out suggested values.

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
-helm install --name my-release \
-  --set timezone="America/New York" \
+helm install jackett \
+  --set jackett.env.TZ="America/New York" \
   k8s-at-home/jackett
```

-Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
+chart. For example,
```console
-helm install --name my-release -f values.yaml k8s-at-home/jackett
+helm install jackett k8s-at-home/jackett --values values.yaml
```

+These values will be nested as it is a dependency, for example
+```yaml
+jackett:
+  image:
+    tag: ...
+```

---

**NOTE**

-If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
+If you get
+```console
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
+```
+it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
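
A minimal sketch of that manual cleanup (the claim name is an assumption; list the PVCs first to find the one left behind by the previous release):

```shell
kubectl get pvc
kubectl delete pvc jackett-config
```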
---

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/jackett/values.yaml) file. It has several commented out suggested values.

---
charts/jackett/ci/ct-values.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
jackett:
  image:
    organization: linuxserver
    repository: jackett
    tag: v0.16.1045-ls14
  service:
    type: ClusterIP
    port: 9117
  ingress:
    enabled: false
@@ -1,19 +1,20 @@
|
||||
{{- $svcPort := .Values.jackett.service.port -}}
|
||||
1. Get the application URL by running these commands:
|
||||
{{- if .Values.ingress.enabled }}
|
||||
{{- range .Values.ingress.hosts }}
|
||||
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
|
||||
{{- if .Values.jackett.ingress.enabled }}
|
||||
{{- range .Values.jackett.ingress.hosts }}
|
||||
http{{ if $.Values.jackett.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.jackett.ingress.path }}
|
||||
{{- end }}
|
||||
{{- else if contains "NodePort" .Values.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "jackett.fullname" . }})
|
||||
{{- else if contains "NodePort" .Values.jackett.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{- else if contains "LoadBalancer" .Values.service.type }}
|
||||
{{- else if contains "LoadBalancer" .Values.jackett.service.type }}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of by running 'kubectl get svc -w {{ include "jackett.fullname" . }}'
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "jackett.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
echo http://$SERVICE_IP:{{ .Values.service.port }}
|
||||
{{- else if contains "ClusterIP" .Values.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "jackett.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
echo "Visit http://127.0.0.1:9117 to use your application"
|
||||
kubectl port-forward $POD_NAME 9117:80
|
||||
{{- end }}
|
||||
You can watch the status of by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
echo http://$SERVICE_IP:{{ $svcPort }}
|
||||
{{- else if contains "ClusterIP" .Values.jackett.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
echo "Visit http://127.0.0.1:8080 to use your application"
|
||||
kubectl port-forward $POD_NAME 8080:{{ $svcPort }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,29 +0,0 @@
|
||||
|
||||
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ template "jackett.fullname" . }}-config
|
||||
{{- if .Values.persistence.config.skipuninstall }}
|
||||
annotations:
|
||||
"helm.sh/resource-policy": keep
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
helm.sh/chart: {{ include "jackett.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.persistence.config.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.config.size | quote }}
|
||||
{{- if .Values.persistence.config.storageClass }}
|
||||
{{- if (eq "-" .Values.persistence.config.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: "{{ .Values.persistence.config.storageClass }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,122 +0,0 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ include "jackett.fullname" . }}
|
||||
{{- if .Values.deploymentAnnotations }}
|
||||
annotations:
|
||||
{{- range $key, $value := .Values.deploymentAnnotations }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
helm.sh/chart: {{ include "jackett.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
spec:
|
||||
replicas: 1
|
||||
revisionHistoryLimit: 3
|
||||
strategy:
|
||||
type: {{ .Values.strategyType }}
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- if .Values.podAnnotations }}
|
||||
annotations:
|
||||
{{- range $key, $value := .Values.podAnnotations }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
hostNetwork: {{ .Values.hostNetwork }}
|
||||
dnsPolicy: {{ .Values.dnsPolicy }}
|
||||
{{- if .Values.dnsConfig }}
|
||||
dnsConfig: {{ toYaml .Values.dnsConfig | nindent 8}}
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: {{ .Chart.Name }}
|
||||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 9117
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
tcpSocket:
|
||||
port: http
|
||||
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
|
||||
periodSeconds: {{ .Values.probes.liveness.periodSeconds }}
|
||||
readinessProbe:
|
||||
tcpSocket:
|
||||
port: http
|
||||
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
|
||||
periodSeconds: {{ .Values.probes.readiness.periodSeconds }}
|
||||
startupProbe:
|
||||
tcpSocket:
|
||||
port: http
|
||||
initialDelaySeconds: {{ .Values.probes.startup.initialDelaySeconds }}
|
||||
failureThreshold: {{ .Values.probes.startup.failureThreshold }}
|
||||
periodSeconds: {{ .Values.probes.startup.periodSeconds }}
|
||||
env:
|
||||
- name: TZ
|
||||
value: "{{ .Values.timezone }}"
|
||||
- name: PUID
|
||||
value: "{{ .Values.puid }}"
|
||||
- name: PGID
|
||||
value: "{{ .Values.pgid }}"
|
||||
volumeMounts:
|
||||
- mountPath: /config
|
||||
name: config
|
||||
{{- if .Values.persistence.config.subPath }}
|
||||
subPath: "{{ .Values.persistence.config.subPath }}"
|
||||
{{- end }}
|
||||
- mountPath: /downloads
|
||||
name: torrentblackhole
|
||||
{{- if .Values.persistence.torrentblackhole.subPath }}
|
||||
subPath: "{{ .Values.persistence.torrentblackhole.subPath }}"
|
||||
{{- end }}
|
||||
{{- range .Values.persistence.extraExistingClaimMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
readOnly: {{ .readOnly }}
|
||||
{{- end }}
|
||||
resources:
|
||||
{{ toYaml .Values.resources | indent 12 }}
|
||||
volumes:
|
||||
- name: config
|
||||
{{- if .Values.persistence.config.enabled }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "jackett.fullname" . }}-config{{- end }}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{ end }}
|
||||
- name: torrentblackhole
|
||||
{{- if .Values.persistence.torrentblackhole.enabled }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.torrentblackhole.existingClaim }}{{ .Values.persistence.torrentblackhole.existingClaim }}{{- else }}{{ template "jackett.fullname" . }}-torrentblackhole{{- end }}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{- end }}
|
||||
{{- range .Values.persistence.extraExistingClaimMounts }}
|
||||
- name: {{ .name }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ .existingClaim }}
|
||||
{{- end }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
@@ -1,41 +0,0 @@
|
||||
{{- if .Values.ingress.enabled -}}
|
||||
{{- $fullName := include "jackett.fullname" . -}}
|
||||
{{- $ingressPath := .Values.ingress.path -}}
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: {{ $fullName }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
helm.sh/chart: {{ include "jackett.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- with .Values.ingress.labels -}}
|
||||
{{ toYaml . | nindent 4 }}
|
||||
{{- end -}}
|
||||
{{- with .Values.ingress.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.ingress.tls }}
|
||||
tls:
|
||||
{{- range .Values.ingress.tls }}
|
||||
- hosts:
|
||||
{{- range .hosts }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
secretName: {{ .secretName }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
rules:
|
||||
{{- range .Values.ingress.hosts }}
|
||||
- host: {{ . | quote }}
|
||||
http:
|
||||
paths:
|
||||
- path: {{ $ingressPath }}
|
||||
backend:
|
||||
serviceName: {{ $fullName }}
|
||||
servicePort: http
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
22
charts/jackett/templates/pvc.yaml
Normal file
22
charts/jackett/templates/pvc.yaml
Normal file
@@ -0,0 +1,22 @@
|
||||
{{- if and .Values.jackett.persistence.torrentblackhole.enabled (not .Values.jackett.persistence.torrentblackhole.existingClaim) }}
|
||||
---
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ template "media-common.fullname" . }}-downloads
|
||||
{{- if .Values.jackett.persistence.torrentblackhole.skipuninstall }}
|
||||
annotations:
|
||||
"helm.sh/resource-policy": keep
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "media-common.labels" . | nindent 4 }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.jackett.persistence.torrentblackhole.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.jackett.persistence.torrentblackhole.size | quote }}
|
||||
{{- if .Values.jackett.persistence.torrentblackhole.storageClass }}
|
||||
storageClassName: {{ if (eq "-" .Values.jackett.persistence.torrentblackhole.storageClass) }}""{{- else }}{{ .Values.jackett.persistence.torrentblackhole.storageClass | quote}}{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,53 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ template "jackett.fullname" . }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
helm.sh/chart: {{ include "jackett.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- if .Values.service.labels }}
|
||||
{{ toYaml .Values.service.labels | indent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.service.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
|
||||
type: ClusterIP
|
||||
{{- if .Values.service.clusterIP }}
|
||||
clusterIP: {{ .Values.service.clusterIP }}
|
||||
{{end}}
|
||||
{{- else if eq .Values.service.type "LoadBalancer" }}
|
||||
type: {{ .Values.service.type }}
|
||||
{{- if .Values.service.loadBalancerIP }}
|
||||
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.loadBalancerSourceRanges }}
|
||||
loadBalancerSourceRanges:
|
||||
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
type: {{ .Values.service.type }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.externalIPs }}
|
||||
externalIPs:
|
||||
{{ toYaml .Values.service.externalIPs | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.externalTrafficPolicy }}
|
||||
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: http
|
||||
port: {{ .Values.service.port }}
|
||||
protocol: TCP
|
||||
targetPort: http
|
||||
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
|
||||
nodePort: {{.Values.service.nodePort}}
|
||||
{{ end }}
|
||||
selector:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
|
||||
@@ -1,29 +0,0 @@
|
||||
|
||||
{{- if and .Values.persistence.torrentblackhole.enabled (not .Values.persistence.torrentblackhole.existingClaim) }}
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ template "jackett.fullname" . }}-torrentblackhole
|
||||
{{- if .Values.persistence.torrentblackhole.skipuninstall }}
|
||||
annotations:
|
||||
"helm.sh/resource-policy": keep
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "jackett.name" . }}
|
||||
helm.sh/chart: {{ include "jackett.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.persistence.torrentblackhole.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.torrentblackhole.size | quote }}
|
||||
{{- if .Values.persistence.torrentblackhole.storageClass }}
|
||||
{{- if (eq "-" .Values.persistence.torrentblackhole.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: "{{ .Values.persistence.torrentblackhole.storageClass }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,149 +1,43 @@
|
||||
# Default values for Jackett.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
image:
|
||||
repository: linuxserver/jackett
|
||||
tag: v0.16.1045-ls14
|
||||
pullPolicy: IfNotPresent
|
||||
jackett:
|
||||
image:
|
||||
organization: linuxserver
|
||||
repository: jackett
|
||||
pullPolicy: IfNotPresent
|
||||
tag: v0.16.1045-ls14
|
||||
|
||||
# upgrade strategy type (e.g. Recreate or RollingUpdate)
|
||||
strategyType: Recreate
|
||||
service:
|
||||
port: 9117
|
||||
|
||||
# Probes configuration
|
||||
probes:
|
||||
liveness:
|
||||
failureThreshold: 5
|
||||
periodSeconds: 10
|
||||
readiness:
|
||||
failureThreshold: 5
|
||||
periodSeconds: 10
|
||||
startup:
|
||||
initialDelaySeconds: 5
|
||||
failureThreshold: 30
|
||||
periodSeconds: 10
|
||||
env: {}
|
||||
# TZ: UTC
|
||||
# PUID: 1001
|
||||
# PGID: 1001
|
||||
|
||||
nameOverride: ""
|
||||
fullnameOverride: ""
|
||||
persistence:
|
||||
torrentblackhole:
|
||||
enabled: false
|
||||
## Jackett torrent torrentblackhole Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
# storageClass: "-"
|
||||
# accessMode: ReadWriteOnce
|
||||
# size: 1Gi
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
# skipuninstall: false
|
||||
# existingClaim: ""
|
||||
|
||||
timezone: UTC
|
||||
puid: 1001
|
||||
pgid: 1001
|
||||
additionalVolumes:
|
||||
- name: torrentblackhole
|
||||
emptyDir: {}
|
||||
## When using persistence.torrentblackhole.enabled: true, adjust this to:
|
||||
# persistentVolumeClaim:
|
||||
# claimName: jackett-torrentblackhole
|
||||
|
||||
service:
|
||||
type: ClusterIP
|
||||
port: 9117
|
||||
## Specify the nodePort value for the LoadBalancer and NodePort service types.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
|
||||
##
|
||||
# nodePort:
|
||||
## Provide any additional annotations which may be required. This can be used to
|
||||
## set the LoadBalancer service type to internal only.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
|
||||
##
|
||||
annotations: {}
|
||||
labels: {}
|
||||
## Use loadBalancerIP to request a specific static IP,
|
||||
## otherwise leave blank
|
||||
##
|
||||
loadBalancerIP:
|
||||
# loadBalancerSourceRanges: []
|
||||
## Set the externalTrafficPolicy in the Service to either Cluster or Local
|
||||
# externalTrafficPolicy: Cluster
|
||||
|
||||
ingress:
|
||||
enabled: false
|
||||
annotations: {}
|
||||
# kubernetes.io/ingress.class: nginx
|
||||
# kubernetes.io/tls-acme: "true"
|
||||
labels: {}
|
||||
path: /
|
||||
hosts:
|
||||
- chart-example.local
|
||||
tls: []
|
||||
# - secretName: chart-example-tls
|
||||
# hosts:
|
||||
# - chart-example.local
|
||||
|
||||
persistence:
|
||||
config:
|
||||
enabled: true
|
||||
## Jackett configuration data Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
# storageClass: "-"
|
||||
##
|
||||
## If you want to reuse an existing claim, you can pass the name of the PVC using
|
||||
## the existingClaim variable
|
||||
# existingClaim: your-claim
|
||||
accessMode: ReadWriteOnce
|
||||
size: 1Gi
|
||||
|
||||
## If subPath is set mount a sub folder of a volume instead of the root of the volume.
|
||||
## This is especially handy for volume plugins that don't natively support sub mounting (like glusterfs).
|
||||
##
|
||||
subPath: ""
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
skipuninstall: false
|
||||
torrentblackhole:
|
||||
enabled: false
|
||||
## Jackett torrentblackhole directory volume configuration
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
# storageClass: "-"
|
||||
##
|
||||
## If you want to reuse an existing claim, you can pass the name of the PVC using
|
||||
## the existingClaim variable
|
||||
# existingClaim: your-claim
|
||||
# subPath: some-subpath
|
||||
accessMode: ReadWriteOnce
|
||||
size: 1Gi
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
skipuninstall: false
|
||||
|
||||
resources: {}
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
dnsPolicy: ClusterFirst
|
||||
|
||||
dnsConfig: {}
|
||||
# dnsConfig may be used with any dnsPolicy, but is required when dnsPolicy: "None"
|
||||
# To use, remove the braces above, and uncomment/modify the following lines.
|
||||
# See https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
|
||||
# for additional information
|
||||
# nameservers:
|
||||
# - 1.1.1.1
|
||||
# searches:
|
||||
# - ns1.mysearch.domain
|
||||
# options:
|
||||
# - name: ndots
|
||||
# value: "1"
|
||||
|
||||
hostNetwork: false
|
||||
|
||||
nodeSelector: {}
|
||||
|
||||
tolerations: []
|
||||
|
||||
affinity: {}
|
||||
|
||||
podAnnotations: {}
|
||||
|
||||
deploymentAnnotations: {}
|
||||
additionalVolumeMounts:
|
||||
- name: torrentblackhole
|
||||
mountPath: /downloads
|
||||
|
||||
23
charts/media-common-openvpn/.helmignore
Normal file
23
charts/media-common-openvpn/.helmignore
Normal file
@@ -0,0 +1,23 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*.orig
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
.vscode/
|
||||
charts/media-common-openvpn/Chart.yaml (new file, 11 lines)
@@ -0,0 +1,11 @@
apiVersion: v2
name: media-common-openvpn
description: OpenVPN add-on for `media-common`-based charts
type: library
keywords:
  - media-common
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common-openvpn
maintainers:
  - name: bjw-s
    email: bjw-s@users.noreply.github.com
version: 1.0.0

charts/media-common-openvpn/README.md (new file, 16 lines)
@@ -0,0 +1,16 @@
# Add-on chart for k8s@home media charts

This chart provides a single maintainable OpenVPN add-on to the `media-common` chart.

## Configuration

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common-openvpn/values.yaml) file.
It has several commented out suggested values.

These values will normally be nested as it is a dependency, for example:
```yaml
radarr:
  openvpn:
    enabled: true
    <values>
```
24
charts/media-common-openvpn/templates/_configmap.tpl
Normal file
24
charts/media-common-openvpn/templates/_configmap.tpl
Normal file
@@ -0,0 +1,24 @@
|
||||
{{/*
|
||||
The OpenVPN configmaps to be inserted
|
||||
*/}}
|
||||
{{- define "media-common.openvpn.configmap" -}}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ template "media-common.fullname" . }}-openvpn
|
||||
labels:
|
||||
{{- include "media-common.labels" . | nindent 4 }}
|
||||
data:
|
||||
{{- if .Values.openvpn.vpnConf }}
|
||||
vpnConf: |-
|
||||
{{- .Values.openvpn.vpnConf | nindent 4}}
|
||||
{{- end }}
|
||||
{{ if .Values.openvpn.scripts.up }}
|
||||
up.sh: |-
|
||||
{{- .Values.openvpn.scripts.up | nindent 4}}
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.scripts.down }}
|
||||
down.sh: |-
|
||||
{{- .Values.openvpn.scripts.down | nindent 4}}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
50
charts/media-common-openvpn/templates/_container.tpl
Normal file
50
charts/media-common-openvpn/templates/_container.tpl
Normal file
@@ -0,0 +1,50 @@
|
||||
{{/*
|
||||
The OpenVPN container(s) to be inserted
|
||||
*/}}
|
||||
{{- define "media-common.openvpn.container" -}}
|
||||
- name: openvpn
|
||||
image: "{{ .Values.openvpn.image.repository }}:{{ .Values.openvpn.image.tag }}"
|
||||
imagePullPolicy: {{ .Values.openvpn.image.pullPolicy }}
|
||||
securityContext:
|
||||
capabilities:
|
||||
add: ["NET_ADMIN"]
|
||||
{{- if .Values.openvpn.env }}
|
||||
env:
|
||||
{{- if .Values.openvpn.env }}
|
||||
{{- range $k, $v := .Values.openvpn.env }}
|
||||
- name: {{ $k }}
|
||||
value: {{ $v }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
envFrom:
|
||||
{{- if or .Values.openvpn.auth .Values.openvpn.authSecret }}
|
||||
- secretRef:
|
||||
{{- if .Values.openvpn.authSecret }}
|
||||
name: {{ .Values.openvpn.authSecret }}
|
||||
{{- else }}
|
||||
name: {{ template "media-common.fullname" . }}-openvpn
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
{{- if .Values.openvpn.vpnConf }}
|
||||
- name: openvpnconf
|
||||
mountPath: /vpn/vpn.conf
|
||||
subPath: vpnConf
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.scripts.up }}
|
||||
- name: openvpnconf
|
||||
mountPath: /vpn/up.sh
|
||||
subPath: up.sh
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.scripts.down }}
|
||||
- name: openvpnconf
|
||||
mountPath: /vpn/down.sh
|
||||
subPath: down.sh
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.additionalVolumeMounts }}
|
||||
{{- toYaml .Values.openvpn.additionalVolumeMounts | nindent 4 }}
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
{{- toYaml .Values.openvpn.livenessProbe | nindent 4 }}
|
||||
{{- end -}}
|
||||
@@ -1,12 +1,16 @@
|
||||
{{- if .Values.openvpn.networkPolicy.enabled }}
|
||||
{{/*
|
||||
The OpenVPN networkpolicy to be inserted
|
||||
*/}}
|
||||
{{- define "media-common.openvpn.networkpolicy" -}}
|
||||
{{- if .Values.openvpn.networkPolicy.enabled -}}
|
||||
kind: NetworkPolicy
|
||||
apiVersion: networking.k8s.io/v1
|
||||
metadata:
|
||||
name: {{ template "nzbget.fullname" . }}-deny-all-netpol
|
||||
name: {{ template "media-common.fullname" . }}-deny-all-netpol
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
app.kubernetes.io/name: {{ include "media-common.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
policyTypes:
|
||||
- Egress
|
||||
@@ -14,4 +18,5 @@ spec:
|
||||
{{- if .Values.openvpn.networkPolicy.egress }}
|
||||
{{- .Values.openvpn.networkPolicy.egress | toYaml | nindent 4 }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
15
charts/media-common-openvpn/templates/_secret.tpl
Normal file
15
charts/media-common-openvpn/templates/_secret.tpl
Normal file
@@ -0,0 +1,15 @@
|
||||
{{/*
|
||||
The OpenVPN secrets to be inserted
|
||||
*/}}
|
||||
{{- define "media-common.openvpn.secret" -}}
|
||||
{{- if .Values.openvpn.auth -}}
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ template "media-common.fullname" . }}-openvpn
|
||||
labels:
|
||||
{{- include "media-common.labels" . | nindent 4 }}
|
||||
data:
|
||||
VPN_AUTH: {{ .Values.openvpn.auth | b64enc }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
25
charts/media-common-openvpn/templates/_volume.tpl
Normal file
25
charts/media-common-openvpn/templates/_volume.tpl
Normal file
@@ -0,0 +1,25 @@
|
||||
{{/*
|
||||
The OpenVPN volumes to be inserted
|
||||
*/}}
|
||||
{{- define "media-common.openvpn.volume" -}}
|
||||
{{- if or .Values.openvpn.vpnConf .Values.openvpn.scripts.up .Values.openvpn.scripts.down -}}
|
||||
- name: openvpnconf
|
||||
configMap:
|
||||
name: {{ template "media-common.fullname" . }}-openvpn
|
||||
items:
|
||||
{{- if .Values.openvpn.vpnConf }}
|
||||
- key: vpnConf
|
||||
path: vpnConf
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.scripts.up }}
|
||||
- key: up.sh
|
||||
path: up.sh
|
||||
mode: 0777
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.scripts.down }}
|
||||
- key: down.sh
|
||||
path: down.sh
|
||||
mode: 0777
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
67
charts/media-common-openvpn/values.yaml
Normal file
67
charts/media-common-openvpn/values.yaml
Normal file
@@ -0,0 +1,67 @@
|
||||
# Default values for media-common-openvpn.
|
||||
|
||||
image:
|
||||
repository: dperson/openvpn-client
|
||||
tag: latest
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
# All variables specified here will be added to the openvpn sidecar container
|
||||
# Ref https://hub.docker.com/r/dperson/openvpn-client for all config values
|
||||
env: []
|
||||
# TZ: UTC
|
||||
|
||||
# Provide a customized vpn.conf file to be used by openvpn.
|
||||
vpnConf: # |-
|
||||
# Some Example Config
|
||||
# remote greatvpnhost.com 8888
|
||||
# auth-user-pass
|
||||
# Cipher AES
|
||||
|
||||
# Provide custom up/down scripts that can be used by the vpnConf
|
||||
scripts:
|
||||
up: # |-
|
||||
# #!/bin/bash
|
||||
# echo "connected" > /shared/vpnstatus
|
||||
down: # |-
|
||||
# #!/bin/bash
|
||||
# echo "disconnected" > /shared/vpnstatus
|
||||
|
||||
# Credentials to connect to the VPN Service (used with -a)
|
||||
auth: # "user;password"
|
||||
# OR specify an existing secret that contains the credentials. Credentials should be stored
|
||||
# under the VPN_AUTH key
|
||||
authSecret: # my-vpn-secret
|
||||
|
||||
additionalVolumeMounts: []
|
||||
|
||||
# Optionally specify a livenessProbe, e.g. to check if the connection is still
|
||||
# being protected by the VPN
|
||||
livenessProbe: {}
|
||||
# exec:
|
||||
# command:
|
||||
# - sh
|
||||
# - -c
|
||||
# - if [ $(curl -s https://ipinfo.io/country) == 'US' ]; then exit 0; else exit $?; fi
|
||||
# initialDelaySeconds: 30
|
||||
# periodSeconds: 60
|
||||
# failureThreshold: 1
|
||||
|
||||
# If set to true, will deploy a network policy that blocks all outbound
|
||||
# traffic except traffic specified as allowed
|
||||
networkPolicy:
|
||||
enabled: false
|
||||
|
||||
# The egress configuration for your network policy, All outbound traffic
|
||||
# From the pod will be blocked unless specified here. Your cluster must
|
||||
# have a CNI that supports network policies (Canal, Calico, etc...)
|
||||
# https://kubernetes.io/docs/concepts/services-networking/network-policies/
|
||||
# https://github.com/ahmetb/kubernetes-network-policy-recipes
|
||||
egress:
|
||||
# - to:
|
||||
# - ipBlock:
|
||||
# cidr: 0.0.0.0/0
|
||||
# ports:
|
||||
# - port: 53
|
||||
# protocol: UDP
|
||||
# - port: 53
|
||||
# protocol: TCP
|
||||
@@ -2,10 +2,16 @@ apiVersion: v2
name: media-common
description: Common dependency chart for media ecosystem containers
type: application
-version: 1.0.1
+version: 1.1.0
keywords:
  - media-common
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common
maintainers:
  - name: DirtyCajunRice
    email: nick@cajun.pro
+dependencies:
+  - name: media-common-openvpn
+    repository: https://k8s-at-home.com/charts/
+    version: ^1.0.0
+    condition: openvpn.enabled
+    alias: openvpn
@@ -22,4 +22,9 @@ These values will normally be nested as it is a dependency, for example:
```yaml
radarr:
  <values>
-```
+```
+
+## Add-ons
+
+### OpenVPN
+It is possible to enable an OpenVPN add-on by setting `openvpn.enabled: true`. For more information refer to [k8s-at-home/media-common-openvpn](https://github.com/k8s-at-home/charts/tree/master/charts/media-common-openvpn).
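
As a quick illustration, the add-on can also be toggled from the command line when installing a chart that wraps `media-common` (the `radarr` chart name here mirrors the nesting example above and is only an assumption):

```shell
helm install radarr k8s-at-home/radarr --set openvpn.enabled=true
```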
@@ -1,6 +1,30 @@
|
||||
---
|
||||
image:
|
||||
organization: linuxserver
|
||||
repository: radarr
|
||||
tag: latest
|
||||
service:
|
||||
port: 7878
|
||||
|
||||
openvpn:
|
||||
enabled: true
|
||||
|
||||
image:
|
||||
repository: dperson/openvpn-client
|
||||
tag: latest
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
scripts:
|
||||
up:
|
||||
down:
|
||||
|
||||
networkPolicy:
|
||||
enabled: false
|
||||
|
||||
livenessProbe:
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 10
|
||||
exec:
|
||||
command:
|
||||
- echo
|
||||
- success
|
||||
|
||||
@@ -50,3 +50,27 @@ Selector labels
|
||||
app.kubernetes.io/name: {{ include "media-common.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Additional Containers
|
||||
*/}}
|
||||
{{- define "media-common.additionalContainers" -}}
|
||||
{{- if .Values.additionalContainers }}
|
||||
{{- toYaml .Values.additionalContainers }}
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.enabled }}
|
||||
{{ include "media-common.openvpn.container" . }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Additional Volumes
|
||||
*/}}
|
||||
{{- define "media-common.additionalVolumes" -}}
|
||||
{{- if .Values.additionalVolumes }}
|
||||
{{- toYaml .Values.additionalVolumes }}
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.enabled }}
|
||||
{{ include "media-common.openvpn.volume" . }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
|
||||
8
charts/media-common/templates/addon-openvpn.yaml
Normal file
8
charts/media-common/templates/addon-openvpn.yaml
Normal file
@@ -0,0 +1,8 @@
|
||||
{{- if .Values.openvpn.enabled -}}
|
||||
---
|
||||
{{ include "media-common.openvpn.configmap" . }}
|
||||
---
|
||||
{{ include "media-common.openvpn.secret" . }}
|
||||
---
|
||||
{{ include "media-common.openvpn.networkpolicy" . }}
|
||||
{{- end -}}
|
||||
@@ -74,6 +74,7 @@ spec:
|
||||
resources:
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- include "media-common.additionalContainers" . | nindent 8 }}
|
||||
volumes:
|
||||
- name: config
|
||||
{{- if .Values.persistence.config.enabled }}
|
||||
@@ -87,9 +88,7 @@ spec:
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}-media{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.additionalVolumes }}
|
||||
{{- toYaml .Values.additionalVolumes | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- include "media-common.additionalVolumes" . | nindent 8 }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml . | indent 8 }}
|
||||
|
||||
@@ -75,6 +75,7 @@ spec:
|
||||
resources:
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- include "media-common.additionalContainers" . | nindent 8 }}
|
||||
volumes:
|
||||
- name: config
|
||||
{{- if .Values.persistence.config.enabled }}
|
||||
@@ -88,9 +89,7 @@ spec:
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}-media{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.additionalVolumes }}
|
||||
{{- toYaml .Values.additionalVolumes | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- include "media-common.additionalVolumes" . | nindent 8 }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml . | indent 8 }}
|
||||
|
||||
@@ -113,10 +113,17 @@ persistence:
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
skipuninstall: false
|
||||
|
||||
additionalContainers: []
|
||||
|
||||
additionalVolumes: []
|
||||
|
||||
additionalVolumeMounts: []
|
||||
|
||||
# Enable the OpenVPN add-on here
|
||||
# See https://github.com/k8s-at-home/charts/tree/master/charts/media-common-openvpn for more details
|
||||
openvpn:
|
||||
enabled: false
|
||||
|
||||
podSecurityContext: {}
|
||||
# fsGroup: 2000
|
||||
|
||||
|
||||
22
charts/mosquitto/.helmignore
Normal file
22
charts/mosquitto/.helmignore
Normal file
@@ -0,0 +1,22 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
.vscode/
|
||||
17
charts/mosquitto/Chart.yaml
Normal file
17
charts/mosquitto/Chart.yaml
Normal file
@@ -0,0 +1,17 @@
|
||||
apiVersion: v1
|
||||
appVersion: "1.6.12"
|
||||
description: Eclipse Mosquitto - An open source MQTT broker
|
||||
name: mosquitto
|
||||
version: 0.3.3
|
||||
keywords:
|
||||
- message queue
|
||||
- MQTT
|
||||
- mosquitto
|
||||
- eclipse-iot
|
||||
home: https://mosquitto.org/
|
||||
icon: https://mosquitto.org/images/mosquitto-text-side-28.png
|
||||
sources:
|
||||
- https://github.com/eclipse/mosquitto
|
||||
maintainers:
|
||||
- name: ishioni
|
||||
email: helm@movishell.pl
|
||||
charts/mosquitto/README.md (new file, 46 lines)
@@ -0,0 +1,46 @@
# Mosquitto: A small MQTT broker

This is a Helm chart for [mosquitto](https://mosquitto.org/).

## TL;DR;

```shell
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/mosquitto
```

## Installing the Chart

To install the chart with the release name `my-release`:

```console
helm install --name my-release k8s-at-home/mosquitto
```

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```console
helm delete my-release --purge
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/mosquitto/values.yaml) file. It has several commented out suggested values.

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install --name my-release \
  --set persistence.enabled=true \
  k8s-at-home/mosquitto
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
helm install --name my-release -f values.yaml k8s-at-home/mosquitto
```
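
After installation, a quick publish/subscribe round-trip makes an easy smoke test; a sketch assuming the mosquitto client tools are installed locally and that the release is named `my-release` (so the service name below is an assumption):

```shell
# Forward the broker port locally, then exchange one message
kubectl port-forward svc/my-release-mosquitto 1883:1883 &
mosquitto_sub -h 127.0.0.1 -p 1883 -t test/topic -C 1 &
mosquitto_pub -h 127.0.0.1 -p 1883 -t test/topic -m "hello"
```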
charts/mosquitto/templates/NOTES.txt (new file, 38 lines)
@@ -0,0 +1,38 @@
** Please be patient while the chart is being deployed **

Mosquitto can be accessed within the cluster on port 1883 at {{ template "mosquitto.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local

To access it from outside the cluster, perform the following steps:

{{- if contains "NodePort" .Values.service.type }}

Obtain the NodePort IP and ports:

  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[1].nodePort}" services {{ template "mosquitto.fullname" . }})

To Access the Mosquitto MQTT port:

  echo "URL : mqtt://$NODE_IP:$NODE_PORT/"

{{- else if contains "LoadBalancer" .Values.service.type }}

Obtain the LoadBalancer IP:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
  Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "mosquitto.fullname" . }}'

  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "mosquitto.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")

To Access the Mosquitto port:

  echo "URL : mqtt://$SERVICE_IP:1883/"

{{- else if contains "ClusterIP" .Values.service.type }}

To Access the Mosquitto MQTT port:

  kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "mosquitto.fullname" . }} 1883:1883
  echo "URL : mqtt://127.0.0.1:1883/"

{{- end }}
@@ -2,7 +2,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "jackett.name" -}}
{{- define "mosquitto.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

@@ -11,7 +11,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "jackett.fullname" -}}
{{- define "mosquitto.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
@@ -27,6 +27,30 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "jackett.chart" -}}
{{- define "mosquitto.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "mosquitto.labels" -}}
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
helm.sh/chart: {{ include "mosquitto.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}

{{/*
Create the name of the service account to use
*/}}
{{- define "mosquitto.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "mosquitto.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
1009
charts/mosquitto/templates/configmap.yaml
Normal file
File diff suppressed because it is too large
30
charts/mosquitto/templates/service.yaml
Normal file
@@ -0,0 +1,30 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mosquitto.fullname" . }}
  labels:
{{ include "mosquitto.labels" . | indent 4 }}
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  {{- if .Values.service.externalTrafficPolicy }}
  externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
  {{- end }}
  {{- if .Values.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
  {{- end }}
  ports:
    - port: 1883
      targetPort: default
      protocol: TCP
      name: default
    - port: 9001
      targetPort: websocket
      protocol: TCP
      name: websocket
  selector:
    app.kubernetes.io/name: {{ include "mosquitto.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
8
charts/mosquitto/templates/serviceaccount.yaml
Normal file
@@ -0,0 +1,8 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "mosquitto.serviceAccountName" . }}
  labels:
{{ include "mosquitto.labels" . | indent 4 }}
{{- end -}}
95
charts/mosquitto/templates/statefullset.yaml
Normal file
@@ -0,0 +1,95 @@
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: {{ include "mosquitto.fullname" . }}
|
||||
labels:
|
||||
{{ include "mosquitto.labels" . | indent 4 }}
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
serviceName: {{ include "mosquitto.name" . }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
spec:
|
||||
{{- with .Values.imagePullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
serviceAccountName: {{ template "mosquitto.serviceAccountName" . }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.podSecurityContext | nindent 8 }}
|
||||
containers:
|
||||
- name: {{ .Chart.Name }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.securityContext | nindent 12 }}
|
||||
image: "{{ .Values.image.repository }}:{{ tpl .Values.image.tag . }}"
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
ports:
|
||||
- name: default
|
||||
containerPort: 1883
|
||||
protocol: TCP
|
||||
- name: websocket
|
||||
containerPort: 9001
|
||||
protocol: TCP
|
||||
resources:
|
||||
{{- toYaml .Values.resources | nindent 12 }}
|
||||
volumeMounts:
|
||||
- name: configmap
|
||||
mountPath: /mosquitto/config
|
||||
- name: data
|
||||
mountPath: /mosquitto/data
|
||||
volumes:
|
||||
- name: configmap
|
||||
configMap:
|
||||
name: {{ template "mosquitto.fullname" . }}
|
||||
{{- if not .Values.persistence.enabled }}
|
||||
- name: data
|
||||
emptyDir: {}
|
||||
{{- end }}
|
||||
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ .Values.persistence.existingClaim }}
|
||||
{{- end }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
volumeClaimTemplates:
|
||||
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
|
||||
- metadata:
|
||||
name: data
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- if .Values.persistence.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.persistence.annotations | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
accessModes: [ {{ .Values.persistence.accessMode | quote }} ]
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.size | quote }}
|
||||
{{- if .Values.persistence.storageClass }}
|
||||
{{- if (eq "-" .Values.persistence.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: {{ .Values.persistence.storageClass | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
76
charts/mosquitto/values.yaml
Normal file
76
charts/mosquitto/values.yaml
Normal file
@@ -0,0 +1,76 @@
|
||||
# Default values for mosquitto.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
replicaCount: 1
|
||||
|
||||
image:
|
||||
repository: eclipse-mosquitto
|
||||
tag: "{{ .Chart.AppVersion }}"
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
imagePullSecrets: []
|
||||
nameOverride: ""
|
||||
fullnameOverride: ""
|
||||
|
||||
serviceAccount:
|
||||
# Specifies whether a service account should be created
|
||||
create: true
|
||||
# The name of the service account to use.
|
||||
# If not set and create is true, a name is generated using the fullname template
|
||||
name:
|
||||
|
||||
podSecurityContext: {}
|
||||
# fsGroup: 2000
|
||||
|
||||
securityContext: {}
|
||||
# capabilities:
|
||||
# drop:
|
||||
# - ALL
|
||||
# readOnlyRootFilesystem: true
|
||||
# runAsNonRoot: true
|
||||
# runAsUser: 1000
|
||||
|
||||
service:
|
||||
annotations: {}
|
||||
type: ClusterIP
|
||||
# externalTrafficPolicy:
|
||||
# loadBalancerIP:
|
||||
|
||||
resources: {}
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
nodeSelector: {}
|
||||
|
||||
tolerations: []
|
||||
|
||||
affinity: {}
|
||||
|
||||
persistence:
|
||||
enabled: False
|
||||
annotations: {}
|
||||
## mosquitto data Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
# storageClass: "-"
|
||||
##
|
||||
## If you want to reuse an existing claim, you can pass the name of the PVC using
|
||||
## the existingClaim variable
|
||||
# existingClaim: mosquitto-data
|
||||
accessMode: ReadWriteOnce
|
||||
size: 5Gi
|
||||
|
||||
# customConfig:
|
||||
@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: 1.0.6-12
description: Node-RED is low-code programming for event-driven applications
name: node-red
version: 3.0.0
version: 3.1.0
keywords:
- nodered
- node-red

@@ -42,6 +42,7 @@ The following tables lists the configurable parameters of the Node-RED chart and
| `image.tag` | node-red image tag | `1.0.6-12-minimal` |
| `image.pullPolicy` | node-red image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `serviceAccountName` | Service account to run the pod as | `` |
| `livenessProbePath` | Default livenessProbe path | `/` |
| `readinessProbePath` | Default readinessProbe path | `/` |
| `flows` | Default flows configuration | `flows.json` |

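The new `serviceAccountName` value can be supplied at install time; a minimal sketch is shown below (the `node-red-sa` account is an assumption and must already exist in the target namespace):

```console
helm install my-release k8s-at-home/node-red \
  --set serviceAccountName=node-red-sa
```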
@@ -33,6 +33,9 @@ spec:
{{- end }}
{{- end }}
    spec:
      {{- if .Values.serviceAccountName }}
      serviceAccountName: {{ .Values.serviceAccountName }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

@@ -13,6 +13,8 @@ image:
nameOverride: ""
fullnameOverride: ""

serviceAccountName: ""

livenessProbePath: /
readinessProbePath: /

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v21.0
description: NZBGet is a Usenet downloader client
name: nzbget
version: 4.0.1
version: 5.0.0
keywords:
- nzbget
- usenet
@@ -14,3 +14,8 @@ sources:
maintainers:
- name: billimek
  email: jeff@billimek.com
dependencies:
- name: media-common
  repository: https://k8s-at-home.com/charts/
  version: ^1.0.0
  alias: nzbget

@@ -33,75 +33,35 @@ helm delete my-release --purge
The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the nzbget chart and their default values.

| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `linuxserver/nzbget` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/nzbget/tags/). | `v21.0-ls14` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the nzbget instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the nzbget instance should run as | `1001` |
| `pgid` | process groupID the nzbget instance should run as | `1001` |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `Service.type` | Kubernetes service type for the nzbget GUI | `ClusterIP` |
| `Service.port` | Kubernetes port where the nzbget GUI is exposed | `6789` |
| `Service.annotations` | Service annotations for the nzbget GUI | `{}` |
| `Service.labels` | Custom labels | `{}` |
| `Service.loadBalancerIP` | Loadbalancer IP for the nzbget GUI | `{}` |
| `Service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.downloads.enabled` | Use persistent volume to store downloads data | `true` |
| `persistence.downloads.size` | Size of persistent volume claim | `10Gi` |
| `persistence.downloads.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.downloads.storageClass` | Type of persistent volume claim | `-` |
| `persistence.downloads.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.downloads.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraMounts` | Array of additional claims to mount | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented out suggested values.

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install --name my-release \
  --set timezone="America/New York" \
helm install nzbget \
  --set radarr.env.TZ="America/New York" \
  k8s-at-home/nzbget
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,
```console
helm install --name my-release -f values.yaml stable/nzbget
helm install radarr k8s-at-home/nzbget --values values.yaml
```

These values will be nested as it is a dependency, for example
```yaml
nzbget:
  image:
    tag: ...
```

---
**NOTE**

If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.

---

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/nzbget/values.yaml) file. It has several commented out suggested values.

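If you do hit the resource-conflict error described in the note above, one possible cleanup is sketched below; the PVC name is only an assumption, so check what is actually left behind in your namespace first:

```console
# List claims left behind by a previous install that used skipuninstall
kubectl get pvc

# Either delete the leftover claim (example name)...
kubectl delete pvc nzbget-config
# ...or keep it and point the chart at it through the existingClaim value instead
```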
10
charts/nzbget/ci/ct-values.yaml
Normal file
@@ -0,0 +1,10 @@
nzbget:
  image:
    organization: linuxserver
    repository: nzbget
    tag: latest
  service:
    type: ClusterIP
    port: 6789
  ingress:
    enabled: false
@@ -1,21 +1,23 @@
{{- $svcPort := .Values.nzbget.service.port -}}
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- if .Values.nzbget.ingress.enabled }}
{{- range .Values.nzbget.ingress.hosts }}
  http{{ if $.Values.nzbget.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.nzbget.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "nzbget.fullname" . }})
{{- else if contains "NodePort" .Values.nzbget.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
{{- else if contains "LoadBalancer" .Values.nzbget.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w {{ include "nzbget.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "nzbget.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "nzbget.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
           You can watch the status of by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ $svcPort }}
{{- else if contains "ClusterIP" .Values.nzbget.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
  kubectl port-forward $POD_NAME 8080:{{ $svcPort }}
{{- end }}

The default login to the GUI is login:nzbget, password:tegbzn6789
The default login to the GUI is login:nzbget, password:tegbzn6789
You should change this as soon as possible!

@@ -1,32 +0,0 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "nzbget.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "nzbget.fullname" -}}
|
||||
{{- if .Values.fullnameOverride -}}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride -}}
|
||||
{{- if contains $name .Release.Name -}}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- else -}}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "nzbget.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
|
||||
{{- end -}}
|
||||
@@ -1,29 +0,0 @@
|
||||
|
||||
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ template "nzbget.fullname" . }}-config
|
||||
{{- if .Values.persistence.config.skipuninstall }}
|
||||
annotations:
|
||||
"helm.sh/resource-policy": keep
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.persistence.config.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.config.size | quote }}
|
||||
{{- if .Values.persistence.config.storageClass }}
|
||||
{{- if (eq "-" .Values.persistence.config.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: "{{ .Values.persistence.config.storageClass }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,140 +0,0 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ include "nzbget.fullname" . }}
|
||||
{{- if .Values.deploymentAnnotations }}
|
||||
annotations:
|
||||
{{- range $key, $value := .Values.deploymentAnnotations }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
spec:
|
||||
replicas: 1
|
||||
revisionHistoryLimit: 3
|
||||
strategy:
|
||||
type: {{ .Values.strategyType }}
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- if .Values.podAnnotations }}
|
||||
annotations:
|
||||
{{- range $key, $value := .Values.podAnnotations }}
|
||||
{{ $key }}: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
containers:
|
||||
- name: {{ .Chart.Name }}
|
||||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 6789
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
tcpSocket:
|
||||
port: http
|
||||
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
|
||||
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
|
||||
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
|
||||
readinessProbe:
|
||||
tcpSocket:
|
||||
port: http
|
||||
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
|
||||
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
|
||||
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
|
||||
env:
|
||||
- name: TZ
|
||||
value: "{{ .Values.timezone }}"
|
||||
- name: PUID
|
||||
value: "{{ .Values.puid }}"
|
||||
- name: PGID
|
||||
value: "{{ .Values.pgid }}"
|
||||
volumeMounts:
|
||||
- mountPath: /config
|
||||
name: config
|
||||
- mountPath: /downloads
|
||||
name: downloads
|
||||
{{- if .Values.persistence.downloads.subPath }}
|
||||
subPath: {{ .Values.persistence.downloads.subPath }}
|
||||
{{ end }}
|
||||
{{- range .Values.persistence.extraMounts }}
|
||||
{{- if .mountPath }}
|
||||
- mountPath: /{{ .mountPath }}
|
||||
{{- else }}
|
||||
- mountPath: /{{ .name }}
|
||||
{{- end }}
|
||||
name: {{ .name }}
|
||||
{{- end }}
|
||||
resources:
|
||||
{{ toYaml .Values.resources | indent 12 }}
|
||||
{{- if .Values.openvpn.enabled }}
|
||||
- name: openvpn
|
||||
image: "{{ .Values.openvpn.image.repository }}:{{ .Values.openvpn.image.tag }}"
|
||||
imagePullPolicy: {{ .Values.openvpn.image.pullPolicy }}
|
||||
securityContext:
|
||||
capabilities:
|
||||
add: ["NET_ADMIN"]
|
||||
{{- if .Values.openvpn.env }}
|
||||
envFrom:
|
||||
- secretRef:
|
||||
name: {{ template "nzbget.fullname" . }}-openvpnenv
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.vpnConf }}
|
||||
volumeMounts:
|
||||
- name: openvpnconf
|
||||
mountPath: /vpn/vpn.conf
|
||||
subPath: vpnConf
|
||||
{{- end }}
|
||||
env:
|
||||
- name: NETWORK_POLICY_ENABLED
|
||||
value: {{ .Values.openvpn.networkPolicy.enabled | quote }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: config
|
||||
{{- if .Values.persistence.config.enabled }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "nzbget.fullname" . }}-config{{- end }}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{ end }}
|
||||
- name: downloads
|
||||
{{- if .Values.persistence.downloads.enabled }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.downloads.existingClaim }}{{ .Values.persistence.downloads.existingClaim }}{{- else }}{{ template "nzbget.fullname" . }}-downloads{{- end }}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{ end }}
|
||||
{{- if .Values.openvpn.vpnConf }}
|
||||
- name: openvpnconf
|
||||
configMap:
|
||||
name: {{ template "nzbget.fullname" . }}-openvpnconf
|
||||
{{ end }}
|
||||
{{- range .Values.persistence.extraMounts }}
|
||||
- name: {{ .name }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ .claimName }}
|
||||
{{- end }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{ toYaml . | indent 8 }}
|
||||
{{- end }}
|
||||
@@ -1,29 +0,0 @@
|
||||
|
||||
{{- if and .Values.persistence.downloads.enabled (not .Values.persistence.downloads.existingClaim) }}
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ template "nzbget.fullname" . }}-downloads
|
||||
{{- if .Values.persistence.downloads.skipuninstall }}
|
||||
annotations:
|
||||
"helm.sh/resource-policy": keep
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.persistence.downloads.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.downloads.size | quote }}
|
||||
{{- if .Values.persistence.downloads.storageClass }}
|
||||
{{- if (eq "-" .Values.persistence.downloads.storageClass) }}
|
||||
storageClassName: ""
|
||||
{{- else }}
|
||||
storageClassName: "{{ .Values.persistence.downloads.storageClass }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,41 +0,0 @@
|
||||
{{- if .Values.ingress.enabled -}}
|
||||
{{- $fullName := include "nzbget.fullname" . -}}
|
||||
{{- $ingressPath := .Values.ingress.path -}}
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: {{ $fullName }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- with .Values.ingress.labels -}}
|
||||
{{ toYaml . | nindent 4 }}
|
||||
{{- end -}}
|
||||
{{- with .Values.ingress.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.ingress.tls }}
|
||||
tls:
|
||||
{{- range .Values.ingress.tls }}
|
||||
- hosts:
|
||||
{{- range .hosts }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
secretName: {{ .secretName }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
rules:
|
||||
{{- range .Values.ingress.hosts }}
|
||||
- host: {{ . | quote }}
|
||||
http:
|
||||
paths:
|
||||
- path: {{ $ingressPath }}
|
||||
backend:
|
||||
serviceName: {{ $fullName }}
|
||||
servicePort: http
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
@@ -1,16 +0,0 @@
|
||||
{{- if and .Values.openvpn.enabled .Values.openvpn.vpnConf}}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ template "nzbget.fullname" . }}-openvpnconf
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
data:
|
||||
{{- if .Values.openvpn.vpnConf }}
|
||||
vpnConf: |-
|
||||
{{- .Values.openvpn.vpnConf | nindent 4}}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,20 +0,0 @@
|
||||
{{- if and .Values.openvpn.enabled ( or .Values.openvpn.env .Values.openvpn.auth )}}
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ template "nzbget.fullname" . }}-openvpnenv
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
data:
|
||||
{{- if .Values.openvpn.auth }}
|
||||
VPN_AUTH: {{ .Values.openvpn.auth | b64enc }}
|
||||
{{- end }}
|
||||
{{- if .Values.openvpn.env }}
|
||||
{{- range $k, $v := .Values.openvpn.env }}
|
||||
{{ $k }}: {{ $v | b64enc }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
22
charts/nzbget/templates/pvc.yaml
Normal file
22
charts/nzbget/templates/pvc.yaml
Normal file
@@ -0,0 +1,22 @@
|
||||
{{- if and .Values.nzbget.persistence.downloads.enabled (not .Values.nzbget.persistence.downloads.existingClaim) }}
|
||||
---
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ template "media-common.fullname" . }}-downloads
|
||||
{{- if .Values.nzbget.persistence.downloads.skipuninstall }}
|
||||
annotations:
|
||||
"helm.sh/resource-policy": keep
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "media-common.labels" . | nindent 4 }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.nzbget.persistence.downloads.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.nzbget.persistence.downloads.size | quote }}
|
||||
{{- if .Values.nzbget.persistence.downloads.storageClass }}
|
||||
storageClassName: {{ if (eq "-" .Values.nzbget.persistence.downloads.storageClass) }}""{{- else }}{{ .Values.nzbget.persistence.downloads.storageClass | quote}}{{- end }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
@@ -1,53 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ template "nzbget.fullname" . }}
|
||||
labels:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
helm.sh/chart: {{ include "nzbget.chart" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- if .Values.service.labels }}
|
||||
{{ toYaml .Values.service.labels | indent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.service.annotations }}
|
||||
annotations:
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
|
||||
type: ClusterIP
|
||||
{{- if .Values.service.clusterIP }}
|
||||
clusterIP: {{ .Values.service.clusterIP }}
|
||||
{{end}}
|
||||
{{- else if eq .Values.service.type "LoadBalancer" }}
|
||||
type: {{ .Values.service.type }}
|
||||
{{- if .Values.service.loadBalancerIP }}
|
||||
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.loadBalancerSourceRanges }}
|
||||
loadBalancerSourceRanges:
|
||||
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
|
||||
{{- end -}}
|
||||
{{- else }}
|
||||
type: {{ .Values.service.type }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.externalIPs }}
|
||||
externalIPs:
|
||||
{{ toYaml .Values.service.externalIPs | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.service.externalTrafficPolicy }}
|
||||
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: http
|
||||
port: {{ .Values.service.port }}
|
||||
protocol: TCP
|
||||
targetPort: http
|
||||
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
|
||||
nodePort: {{.Values.service.nodePort}}
|
||||
{{ end }}
|
||||
selector:
|
||||
app.kubernetes.io/name: {{ include "nzbget.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
|
||||
@@ -1,179 +1,40 @@
|
||||
# Default values for nzbget.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
image:
|
||||
repository: linuxserver/nzbget
|
||||
tag: v21.0-ls61
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
# upgrade strategy type (e.g. Recreate or RollingUpdate)
|
||||
strategyType: Recreate
|
||||
|
||||
# Probes configuration
|
||||
probes:
|
||||
liveness:
|
||||
initialDelaySeconds: 60
|
||||
failureThreshold: 5
|
||||
timeoutSeconds: 10
|
||||
readiness:
|
||||
initialDelaySeconds: 60
|
||||
failureThreshold: 5
|
||||
timeoutSeconds: 10
|
||||
|
||||
nameOverride: ""
|
||||
fullnameOverride: ""
|
||||
|
||||
timezone: UTC
|
||||
puid: 1001
|
||||
pgid: 1001
|
||||
|
||||
service:
|
||||
type: ClusterIP
|
||||
port: 6789
|
||||
## Specify the nodePort value for the LoadBalancer and NodePort service types.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
|
||||
##
|
||||
# nodePort:
|
||||
## Provide any additional annotations which may be required. This can be used to
|
||||
## set the LoadBalancer service type to internal only.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
|
||||
##
|
||||
annotations: {}
|
||||
labels: {}
|
||||
## Use loadBalancerIP to request a specific static IP,
|
||||
## otherwise leave blank
|
||||
##
|
||||
loadBalancerIP:
|
||||
# loadBalancerSourceRanges: []
|
||||
## Set the externalTrafficPolicy in the Service to either Cluster or Local
|
||||
# externalTrafficPolicy: Cluster
|
||||
|
||||
ingress:
|
||||
enabled: false
|
||||
annotations: {}
|
||||
# kubernetes.io/ingress.class: nginx
|
||||
# kubernetes.io/tls-acme: "true"
|
||||
labels: {}
|
||||
path: /
|
||||
hosts:
|
||||
- chart-example.local
|
||||
tls: []
|
||||
# - secretName: chart-example-tls
|
||||
# hosts:
|
||||
# - chart-example.local
|
||||
|
||||
openvpn:
|
||||
# Enables an openvpn sidecar that when configured properly will provide a
|
||||
# Secure outbound VPN for use by NZBGet.
|
||||
enabled: false
|
||||
|
||||
nzbget:
|
||||
image:
|
||||
repository: dperson/openvpn-client
|
||||
tag: latest
|
||||
organization: linuxserver
|
||||
repository: nzbget
|
||||
pullPolicy: IfNotPresent
|
||||
tag: v21.0-ls61
|
||||
service:
|
||||
port: 6789
|
||||
|
||||
# All variables specified here will be added to the openvpn sidecar container
|
||||
# Ref https://hub.docker.com/r/dperson/openvpn-client for all config values
|
||||
env: []
|
||||
# DNS: "true"
|
||||
# TZ: EST5EDT
|
||||
|
||||
# Provide a customized vpn.conf file to be used by openvpn.
|
||||
vpnConf: # |-
|
||||
# Some Example Config
|
||||
# remote greatvpnhost.com 8888
|
||||
# auth-user-pass
|
||||
# Cipher AES
|
||||
|
||||
# Credentials to connect to the VPN Service (used with -a)
|
||||
auth: # "user;password"
|
||||
|
||||
# If set to true, will deploy a network policy that blocks all outbound
|
||||
# traffic except traffic specified as allowed
|
||||
networkPolicy:
|
||||
# Configure the OpenVPN add-on
|
||||
openvpn:
|
||||
enabled: false
|
||||
|
||||
# The egress configuration for your network policy, All outbound traffic
|
||||
# From the pod will be blocked unless specified here. Your cluster must
|
||||
# have a CNI that supports network policies (Canal, Calico, etc...)
|
||||
# https://kubernetes.io/docs/concepts/services-networking/network-policies/
|
||||
# https://github.com/ahmetb/kubernetes-network-policy-recipes
|
||||
egress:
|
||||
# - to:
|
||||
# - ipBlock:
|
||||
# cidr: 0.0.0.0/0
|
||||
# ports:
|
||||
# - port: 53
|
||||
# protocol: UDP
|
||||
# - port: 53
|
||||
# protocol: TCP
|
||||
persistence:
|
||||
downloads:
|
||||
enabled: false
|
||||
## nzbget downloads Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
# storageClass: "-"
|
||||
# accessMode: ReadWriteOnce
|
||||
# size: 1Gi
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
# skipuninstall: false
|
||||
|
||||
persistence:
|
||||
config:
|
||||
enabled: true
|
||||
## nzbget configuration data Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
# storageClass: "-"
|
||||
##
|
||||
## If you want to reuse an existing claim, you can pass the name of the PVC using
|
||||
## the existingClaim variable
|
||||
# existingClaim: your-claim
|
||||
accessMode: ReadWriteOnce
|
||||
size: 1Gi
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
skipuninstall: false
|
||||
downloads:
|
||||
enabled: true
|
||||
## nzbget torrents downloads volume configuration
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
# storageClass: "-"
|
||||
##
|
||||
## If you want to reuse an existing claim, you can pass the name of the PVC using
|
||||
## the existingClaim variable
|
||||
# existingClaim: your-claim
|
||||
# subPath: some-subpath
|
||||
accessMode: ReadWriteOnce
|
||||
size: 10Gi
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
skipuninstall: false
|
||||
extraMounts: []
|
||||
## Include additional claims that can be mounted inside the
|
||||
## pod. This is useful if you wish to use different paths with categories
|
||||
## Claim will me mounted as /{mountPath} if specified. If no {mountPath} is given,
|
||||
## mountPath will default to {name}
|
||||
# - name: video
|
||||
# claimName: video-claim
|
||||
# mountPath: /mnt/path/in/pod
|
||||
additionalVolumes:
|
||||
- name: downloads
|
||||
emptyDir: {}
|
||||
## When using persistence.downloads.enabled: true, adjust this to:
|
||||
# persistentVolumeClaim:
|
||||
# claimName: nzbget-downloads
|
||||
|
||||
resources: {}
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
nodeSelector: {}
|
||||
|
||||
tolerations: []
|
||||
|
||||
affinity: {}
|
||||
|
||||
podAnnotations: {}
|
||||
|
||||
deploymentAnnotations: {}
|
||||
additionalVolumeMounts:
|
||||
- name: downloads
|
||||
mountPath: /downloads
|
||||
|
||||
23
charts/prometheus-nut-exporter/.helmignore
Normal file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
16
charts/prometheus-nut-exporter/Chart.yaml
Normal file
@@ -0,0 +1,16 @@
apiVersion: v2
name: prometheus-nut-exporter
description: A Helm chart for Kubernetes
type: application
version: 1.0.1
appVersion: 1.0.1
keywords:
- nut
- prometheus
home: https://github.com/k8s-at-home/charts/tree/master/charts/prometheus-nut-exporter
icon: https://www.iconfinder.com/data/icons/wpzoom-developer-icon-set/500/125-512.png
sources:
- https://github.com/HON95/prometheus-nut-exporter
maintainers:
- name: billimek
  email: jeff@billimek.com
50
charts/prometheus-nut-exporter/README.md
Normal file
@@ -0,0 +1,50 @@
# Prometheus NUT Exporter

This is a helm chart that provides a service monitor to send NUT server metrics to a Prometheus instance. Based on [Prometheus NUT Exporter](https://github.com/HON95/prometheus-nut-exporter).

## TL;DR;

```console
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm install k8s-at-home/prometheus-nut-exporter
```

## Installing the Chart

To install the chart with the release name `prometheus-nut-exporter`:

```console
helm install --name prometheus-nut-exporter k8s-at-home/prometheus-nut-exporter
```

## Uninstalling the Chart

To uninstall/delete the `prometheus-nut-exporter` deployment:

```console
helm delete prometheus-nut-exporter
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/prometheus-nut-exporter/values.yaml) file. It has several commented out suggested values.

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install --name prometheus-nut-exporter \
  --set serviceMonitor.enabled=true \
  k8s-at-home/prometheus-nut-exporter
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
helm install --name prometheus-nut-exporter -f values.yaml k8s-at-home/prometheus-nut-exporter
```

## Metrics

You can find the exported metrics here: [metrics](https://github.com/HON95/prometheus-nut-exporter/blob/master/metrics.md).
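For the ServiceMonitor to have anything to scrape, at least one NUT server target has to be listed; a minimal values sketch is shown below, where the hostname and port are placeholders for your own NUT server (keys match the chart's `values.yaml`):

```yaml
serviceMonitor:
  enabled: true
  targets:
    - hostname: nut-server   # placeholder: your NUT server's hostname
      port: 3493             # default NUT port
```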
15
charts/prometheus-nut-exporter/templates/NOTES.txt
Normal file
@@ -0,0 +1,15 @@
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "prometheus-nut-exporter.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "prometheus-nut-exporter.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "prometheus-nut-exporter.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "prometheus-nut-exporter.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:{{ .Values.service.port }}
{{- end }}
62
charts/prometheus-nut-exporter/templates/_helpers.tpl
Normal file
62
charts/prometheus-nut-exporter/templates/_helpers.tpl
Normal file
@@ -0,0 +1,62 @@
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "prometheus-nut-exporter.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "prometheus-nut-exporter.fullname" -}}
|
||||
{{- if .Values.fullnameOverride }}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
|
||||
{{- else }}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride }}
|
||||
{{- if contains $name .Release.Name }}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
|
||||
{{- else }}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "prometheus-nut-exporter.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Common labels
|
||||
*/}}
|
||||
{{- define "prometheus-nut-exporter.labels" -}}
|
||||
helm.sh/chart: {{ include "prometheus-nut-exporter.chart" . }}
|
||||
{{ include "prometheus-nut-exporter.selectorLabels" . }}
|
||||
{{- if .Chart.AppVersion }}
|
||||
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
|
||||
{{- end }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Selector labels
|
||||
*/}}
|
||||
{{- define "prometheus-nut-exporter.selectorLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "prometheus-nut-exporter.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use
|
||||
*/}}
|
||||
{{- define "prometheus-nut-exporter.serviceAccountName" -}}
|
||||
{{- if .Values.serviceAccount.create }}
|
||||
{{- default (include "prometheus-nut-exporter.fullname" .) .Values.serviceAccount.name }}
|
||||
{{- else }}
|
||||
{{- default "default" .Values.serviceAccount.name }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
72
charts/prometheus-nut-exporter/templates/deployment.yaml
Normal file
72
charts/prometheus-nut-exporter/templates/deployment.yaml
Normal file
@@ -0,0 +1,72 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ include "prometheus-nut-exporter.fullname" . }}
|
||||
labels:
|
||||
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
|
||||
spec:
|
||||
replicas: {{ .Values.replicaCount }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 6 }}
|
||||
template:
|
||||
metadata:
|
||||
{{- with .Values.podAnnotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 8 }}
|
||||
spec:
|
||||
{{- with .Values.imagePullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
serviceAccountName: {{ include "prometheus-nut-exporter.serviceAccountName" . }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.podSecurityContext | nindent 8 }}
|
||||
containers:
|
||||
- name: {{ .Chart.Name }}
|
||||
securityContext:
|
||||
{{- toYaml .Values.securityContext | nindent 12 }}
|
||||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: {{ .Values.service.port }}
|
||||
protocol: TCP
|
||||
{{- if .Values.env }}
|
||||
env:
|
||||
{{- range $key, $value := .Values.env }}
|
||||
- name: {{ $key | quote }}
|
||||
value: {{ $value | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /
|
||||
port: http
|
||||
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
|
||||
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
|
||||
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /
|
||||
port: http
|
||||
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
|
||||
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
|
||||
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
|
||||
resources:
|
||||
{{- toYaml .Values.resources | nindent 12 }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
15
charts/prometheus-nut-exporter/templates/service.yaml
Normal file
15
charts/prometheus-nut-exporter/templates/service.yaml
Normal file
@@ -0,0 +1,15 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ include "prometheus-nut-exporter.fullname" . }}
|
||||
labels:
|
||||
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
|
||||
spec:
|
||||
type: {{ .Values.service.type }}
|
||||
ports:
|
||||
- port: {{ .Values.service.port }}
|
||||
targetPort: http
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 4 }}
|
||||
12
charts/prometheus-nut-exporter/templates/serviceaccount.yaml
Normal file
12
charts/prometheus-nut-exporter/templates/serviceaccount.yaml
Normal file
@@ -0,0 +1,12 @@
|
||||
{{- if .Values.serviceAccount.create -}}
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ include "prometheus-nut-exporter.serviceAccountName" . }}
|
||||
labels:
|
||||
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
|
||||
{{- with .Values.serviceAccount.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
22
charts/prometheus-nut-exporter/templates/servicemonitor.yaml
Normal file
22
charts/prometheus-nut-exporter/templates/servicemonitor.yaml
Normal file
@@ -0,0 +1,22 @@
|
||||
{{- if .Values.serviceMonitor.enabled }}
|
||||
apiVersion: monitoring.coreos.com/v1
|
||||
kind: ServiceMonitor
|
||||
metadata:
|
||||
name: {{ include "prometheus-nut-exporter.fullname" . }}
|
||||
labels:
|
||||
{{- include "prometheus-nut-exporter.labels" . | nindent 4 }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "prometheus-nut-exporter.selectorLabels" . | nindent 6 }}
|
||||
endpoints:
|
||||
{{- range .Values.serviceMonitor.targets }}
|
||||
- port: http
|
||||
interval: 15s
|
||||
scrapeTimeout: 10s
|
||||
path: "/nut"
|
||||
params:
|
||||
target:
|
||||
- "{{ .hostname }}:{{ .port }}"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
79
charts/prometheus-nut-exporter/values.yaml
Normal file
79
charts/prometheus-nut-exporter/values.yaml
Normal file
@@ -0,0 +1,79 @@
|
||||
# Default values for prometheus-nut-exporter.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
replicaCount: 1
|
||||
|
||||
image:
|
||||
repository: hon95/prometheus-nut-exporter
|
||||
pullPolicy: IfNotPresent
|
||||
tag: "1.0.1"
|
||||
|
||||
imagePullSecrets: []
|
||||
nameOverride: ""
|
||||
fullnameOverride: ""
|
||||
|
||||
serviceAccount:
|
||||
# Specifies whether a service account should be created
|
||||
create: true
|
||||
# Annotations to add to the service account
|
||||
annotations: {}
|
||||
# The name of the service account to use.
|
||||
# If not set and create is true, a name is generated using the fullname template
|
||||
name: ""
|
||||
|
||||
env: {}
|
||||
# TZ: UTC
|
||||
|
||||
serviceMonitor:
|
||||
enabled: false
|
||||
# Specify the list of NUT servers that should be monitored
|
||||
targets: []
|
||||
# - hostname: nut-server
|
||||
# port: 3493
|
||||
|
||||
# Probes configuration
|
||||
probes:
|
||||
liveness:
|
||||
initialDelaySeconds: 30
|
||||
failureThreshold: 5
|
||||
timeoutSeconds: 10
|
||||
readiness:
|
||||
initialDelaySeconds: 30
|
||||
failureThreshold: 5
|
||||
timeoutSeconds: 10
|
||||
|
||||
podAnnotations: {}
|
||||
|
||||
podSecurityContext: {}
|
||||
# fsGroup: 2000
|
||||
|
||||
securityContext: {}
|
||||
# capabilities:
|
||||
# drop:
|
||||
# - ALL
|
||||
# readOnlyRootFilesystem: true
|
||||
# runAsNonRoot: true
|
||||
# runAsUser: 1000
|
||||
|
||||
service:
|
||||
type: ClusterIP
|
||||
port: 9995
|
||||
|
||||
resources: {}
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
nodeSelector: {}
|
||||
|
||||
tolerations: []
|
||||
|
||||
affinity: {}
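As a usage sketch (not one of the chart files in this diff; the NUT server hostname and port are placeholders), the commented `serviceMonitor` keys above could be enabled with an override file like this:

```yaml
# my-values.yaml -- hypothetical override file for prometheus-nut-exporter
serviceMonitor:
  enabled: true
  # Each entry is rendered by servicemonitor.yaml as an endpoint that scrapes
  # the exporter's /nut path with ?target=<hostname>:<port>.
  targets:
    - hostname: nut-server   # placeholder: a NUT server reachable from the cluster
      port: 3493             # default NUT port
```

Because the ServiceMonitor object uses `monitoring.coreos.com/v1`, this sketch assumes the Prometheus Operator CRDs are already installed in the cluster.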
|
||||
charts/statping/.helmignore (new file)
@@ -0,0 +1,23 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*.orig
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
.vscode/
|
||||
charts/statping/Chart.yaml (new file)
@@ -0,0 +1,22 @@
|
||||
apiVersion: v2
|
||||
name: statping
|
||||
description: Status page for monitoring your websites and applications
|
||||
type: application
|
||||
version: 1.0.0
|
||||
appVersion: v0.90.65
|
||||
keywords:
|
||||
- statping
|
||||
- status
|
||||
- status-page
|
||||
home: https://github.com/k8s-at-home/charts/tree/master/charts/statping
|
||||
sources:
|
||||
- https://github.com/statping/statping
|
||||
maintainers:
|
||||
- name: DirtyCajunRice
|
||||
email: nick@cajun.pro
|
||||
icon: https://github.com/statping/statping/blob/dev/frontend/src/assets/logo.png?raw=true
|
||||
dependencies:
|
||||
- name: postgresql
|
||||
repository: https://charts.bitnami.com/bitnami
|
||||
version: 9.4.0
|
||||
condition: postgres.posgresql.enabled
|
||||
charts/statping/OWNERS (new file)
@@ -0,0 +1,4 @@
|
||||
approvers:
|
||||
- DirtyCajunRice
|
||||
reviewers:
|
||||
- DirtyCajunRice
|
||||
charts/statping/README.md (new file)
@@ -0,0 +1,37 @@
|
||||
# statping | Status page for monitoring your websites and applications

## TL;DR

```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/statping
```

## Installing the Chart

To install the chart with the release name `statping`:

```console
helm install statping k8s-at-home/statping
```

## Uninstalling the Chart

To uninstall the `statping` deployment:

```console
helm uninstall statping
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/statping/values.yaml)
file. It has several commented-out suggested values.

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install statping \
  --set statping.env.TZ="America/New_York" \
  k8s-at-home/statping
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
helm install statping k8s-at-home/statping --values values.yaml
```
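For illustration only, a sketch of such a `values.yaml` (the key names come from this chart's `values.yaml` shown later in this diff; hostnames, names, and sizes are placeholders) might look like:

```yaml
# values.yaml -- hypothetical overrides for the statping chart
statping:
  name: "Home Status"                  # rendered into the NAME env var
  description: "Self-hosted status page"
  admin:
    user: "admin"
    email: "admin@example.com"         # placeholder; the admin password is generated into the chart's Secret
ingress:
  enabled: true
  hosts:
    - host: status.example.com         # placeholder hostname
      paths:
        - /
persistence:
  size: 1Gi
```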
|
||||
|
||||
charts/statping/templates/NOTES.txt (new file)
@@ -0,0 +1,21 @@
|
||||
1. Get the application URL by running these commands:
|
||||
{{- if .Values.ingress.enabled }}
|
||||
{{- range $host := .Values.ingress.hosts }}
|
||||
{{- range .paths }}
|
||||
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- else if contains "NodePort" .Values.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "statping.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{- else if contains "LoadBalancer" .Values.service.type }}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "statping.fullname" . }}'
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "statping.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
|
||||
echo http://$SERVICE_IP:{{ .Values.service.port }}
|
||||
{{- else if contains "ClusterIP" .Values.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "statping.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
echo "Visit http://127.0.0.1:8080 to use your application"
|
||||
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:8080
|
||||
{{- end }}
|
||||
charts/statping/templates/_helpers.tpl (new file)
@@ -0,0 +1,62 @@
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "statping.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "statping.fullname" -}}
|
||||
{{- if .Values.fullnameOverride }}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
|
||||
{{- else }}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride }}
|
||||
{{- if contains $name .Release.Name }}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
|
||||
{{- else }}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "statping.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Common labels
|
||||
*/}}
|
||||
{{- define "statping.labels" -}}
|
||||
helm.sh/chart: {{ include "statping.chart" . }}
|
||||
{{ include "statping.selectorLabels" . }}
|
||||
{{- if .Chart.AppVersion }}
|
||||
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
|
||||
{{- end }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Selector labels
|
||||
*/}}
|
||||
{{- define "statping.selectorLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "statping.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use
|
||||
*/}}
|
||||
{{- define "statping.serviceAccountName" -}}
|
||||
{{- if .Values.serviceAccount.create }}
|
||||
{{- default (include "statping.fullname" .) .Values.serviceAccount.name }}
|
||||
{{- else }}
|
||||
{{- default "default" .Values.serviceAccount.name }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
charts/statping/templates/deployment.yaml (new file)
@@ -0,0 +1,166 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: {{ include "statping.fullname" . }}
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
spec:
|
||||
{{- if not .Values.autoscaling.enabled }}
|
||||
replicas: {{ .Values.replicaCount }}
|
||||
{{- end }}
|
||||
selector:
|
||||
matchLabels:
|
||||
{{- include "statping.selectorLabels" . | nindent 6 }}
|
||||
template:
|
||||
metadata:
|
||||
{{- with .Values.podAnnotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
{{- include "statping.selectorLabels" . | nindent 8 }}
|
||||
spec:
|
||||
{{- with .Values.imagePullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
serviceAccountName: {{ include "statping.serviceAccountName" . }}
|
||||
{{- with .Values.podSecurityContext }}
|
||||
securityContext:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: {{ .Chart.Name }}
|
||||
{{- with .Values.securityContext }}
|
||||
securityContext:
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
env:
|
||||
{{- if .Values.statping.name }}
|
||||
- name: NAME
|
||||
value: {{ .Values.statping.name | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.statping.description }}
|
||||
- name: DESCRIPTION
|
||||
value: {{ .Values.statping.description | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.statping.domain }}
|
||||
- name: DOMAIN
|
||||
value: {{ .Values.statping.domain | quote }}
|
||||
{{- end }}
|
||||
- name: ADMIN_USER
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
{{- if .Values.statping.admin.existingSecret.enabled }}
|
||||
name: {{ .Values.statping.admin.existingSecret.name | quote }}
|
||||
key: {{ .Values.statping.admin.existingSecret.userKey | default "admin-user" }}
|
||||
{{- else }}
|
||||
name: {{ include "statping.fullname" . }}
|
||||
key: admin-user
|
||||
{{- end }}
|
||||
- name: ADMIN_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
{{- if .Values.statping.admin.existingSecret.enabled }}
|
||||
name: {{ .Values.statping.admin.existingSecret.name | quote }}
|
||||
key: {{ .Values.statping.admin.existingSecret.passwordKey | default "admin-password" }}
|
||||
{{- else }}
|
||||
name: {{ include "statping.fullname" . }}
|
||||
key: admin-password
|
||||
{{- end }}
|
||||
- name: ADMIN_EMAIL
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
{{- if .Values.statping.admin.existingSecret.enabled }}
|
||||
name: {{ .Values.statping.admin.existingSecret.name | quote }}
|
||||
key: {{ .Values.statping.admin.existingSecret.emailKey | default "admin-email" }}
|
||||
{{- else }}
|
||||
name: {{ include "statping.fullname" . }}
|
||||
key: admin-email
|
||||
{{- end }}
|
||||
{{- if and (eq .Values.postgres.type "kubedb") .Values.postgres.kubedb.enabled }}
|
||||
- name: DB_CONN
|
||||
value: postgres
|
||||
- name: DB_HOST
|
||||
value: postgres-{{ template "statping.fullname" . }}
|
||||
- name: DB_DATABASE
|
||||
value: postgres
|
||||
- name: DB_USER
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: postgres-{{ template "statping.fullname" . }}-auth
|
||||
key: POSTGRES_USER
|
||||
- name: DB_PASS
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: postgres-{{ template "statping.fullname" . }}-auth
|
||||
key: POSTGRES_PASSWORD
|
||||
{{- end }}
|
||||
{{- if and (eq .Values.postgres.type "postgresql") .Values.postgres.posgresql.enabled }}
|
||||
- name: DB_CONN
|
||||
value: postgres
|
||||
- name: DB_HOST
|
||||
value: {{ template "postgresql.fullname" . }}-postgresql
|
||||
- name: DB_DATABASE
|
||||
value: {{ template "postgresql.database" . }}
|
||||
- name: DB_USER
|
||||
value: {{ template "postgresql.username" . }}
|
||||
- name: DB_PASS
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: {{ template "postgresql.secretName" . }}-postgresql
|
||||
key: postgresql-password
|
||||
{{- end }}
|
||||
{{- with .Values.env }}
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 8080
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /
|
||||
port: http
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /
|
||||
port: http
|
||||
volumeMounts:
|
||||
- mountPath: /app
|
||||
name: config
|
||||
{{- if .Values.persistence.subPath }}
|
||||
subPath: {{ .Values.persistence.subPath }}
|
||||
{{- end }}
|
||||
{{- if .Values.additionalVolumeMounts }}
|
||||
{{- toYaml .Values.additionalVolumeMounts | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- with .Values.resources }}
|
||||
resources:
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: config
|
||||
{{- if .Values.persistence.enabled }}
|
||||
persistentVolumeClaim:
|
||||
claimName: {{ if .Values.persistence.existingClaim }}{{ .Values.persistence.existingClaim }}{{- else }}{{ template "statping.fullname" . }}{{- end }}
|
||||
{{- else }}
|
||||
emptyDir: {}
|
||||
{{- end }}
|
||||
{{- if .Values.additionalVolumes }}
|
||||
{{- toYaml .Values.additionalVolumes | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
charts/statping/templates/externalsecret.yaml (new file)
@@ -0,0 +1,14 @@
|
||||
{{- if and .Values.externalSecret.enabled (eq .Values.externalSecret.type "kubernetes-external-secrets") }}
|
||||
apiVersion: kubernetes-client.io/v1
|
||||
kind: ExternalSecret
|
||||
metadata:
|
||||
name: {{ include "statping.fullname" . }}
|
||||
spec:
|
||||
{{- with .Values.externalSecret.kubernetesExternalSecrets.spec }}
|
||||
{{- toYaml . | nindent 2 }}
|
||||
{{- end }}
|
||||
data:
|
||||
{{- with .Values.externalSecret.kubernetesExternalSecrets.data }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
charts/statping/templates/hpa.yaml (new file)
@@ -0,0 +1,28 @@
|
||||
{{- if .Values.autoscaling.enabled }}
|
||||
apiVersion: autoscaling/v2beta1
|
||||
kind: HorizontalPodAutoscaler
|
||||
metadata:
|
||||
name: {{ include "statping.fullname" . }}
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
spec:
|
||||
scaleTargetRef:
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: {{ include "statping.fullname" . }}
|
||||
minReplicas: {{ .Values.autoscaling.minReplicas }}
|
||||
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
|
||||
metrics:
|
||||
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
|
||||
- type: Resource
|
||||
resource:
|
||||
name: cpu
|
||||
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
|
||||
{{- end }}
|
||||
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
|
||||
- type: Resource
|
||||
resource:
|
||||
name: memory
|
||||
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
charts/statping/templates/ingress.yaml (new file)
@@ -0,0 +1,41 @@
|
||||
{{- if .Values.ingress.enabled -}}
|
||||
{{- $fullName := include "statping.fullname" . -}}
|
||||
{{- $svcPort := .Values.service.port -}}
|
||||
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
|
||||
apiVersion: networking.k8s.io/v1beta1
|
||||
{{- else -}}
|
||||
apiVersion: extensions/v1beta1
|
||||
{{- end }}
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: {{ $fullName }}
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
{{- with .Values.ingress.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.ingress.tls }}
|
||||
tls:
|
||||
{{- range .Values.ingress.tls }}
|
||||
- hosts:
|
||||
{{- range .hosts }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
secretName: {{ .secretName }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
rules:
|
||||
{{- range .Values.ingress.hosts }}
|
||||
- host: {{ .host | quote }}
|
||||
http:
|
||||
paths:
|
||||
{{- range .paths }}
|
||||
- path: {{ . }}
|
||||
backend:
|
||||
serviceName: {{ $fullName }}
|
||||
servicePort: {{ $svcPort }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
charts/statping/templates/postgres.yaml (new file)
@@ -0,0 +1,13 @@
|
||||
{{- if and .Values.postgres.enabled (eq .Values.postgres.type "kubedb") }}
|
||||
apiVersion: kubedb.com/v1alpha1
|
||||
kind: Postgres
|
||||
metadata:
|
||||
name: postgres-{{ template "statping.fullname" . }}
|
||||
spec:
|
||||
version: {{ .Values.postgres.kubedb.version }}
|
||||
storageType: {{ .Values.postgres.kubedb.storageType }}
|
||||
{{- with .Values.postgres.kubedb.storage }}
|
||||
storage:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
charts/statping/templates/pvc.yaml (new file)
@@ -0,0 +1,15 @@
|
||||
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) -}}
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: {{ include "statping.fullname" . }}
|
||||
spec:
|
||||
accessModes:
|
||||
- {{ .Values.persistence.accessMode | quote }}
|
||||
resources:
|
||||
requests:
|
||||
storage: {{ .Values.persistence.size | quote }}
|
||||
{{- if .Values.persistence.storageClass }}
|
||||
storageClassName: {{ .Values.persistence.storageClass | quote }}
|
||||
{{- end }}
|
||||
{{- end -}}
|
||||
charts/statping/templates/secret.yaml (new file)
@@ -0,0 +1,13 @@
|
||||
{{- if not .Values.statping.admin.existingSecret.enabled }}
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: {{ template "statping.fullname" . }}
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
type: Opaque
|
||||
data:
|
||||
admin-user: {{ default "admin" .Values.statping.admin.user | b64enc | quote }}
|
||||
admin-password: {{ randAlphaNum 16 | b64enc | quote }}
|
||||
admin-email: {{ default "info@admin.com" .Values.statping.admin.email | b64enc | quote}}
|
||||
{{- end }}
|
||||
charts/statping/templates/service.yaml (new file)
@@ -0,0 +1,15 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: {{ include "statping.fullname" . }}
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
spec:
|
||||
type: {{ .Values.service.type }}
|
||||
ports:
|
||||
- port: {{ .Values.service.port }}
|
||||
targetPort: http
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
{{- include "statping.selectorLabels" . | nindent 4 }}
|
||||
charts/statping/templates/serviceaccount.yaml (new file)
@@ -0,0 +1,12 @@
|
||||
{{- if .Values.serviceAccount.create -}}
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ include "statping.serviceAccountName" . }}
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
{{- with .Values.serviceAccount.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
charts/statping/templates/tests/test-connection.yaml (new file)
@@ -0,0 +1,15 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: "{{ include "statping.fullname" . }}-test-connection"
|
||||
labels:
|
||||
{{- include "statping.labels" . | nindent 4 }}
|
||||
annotations:
|
||||
"helm.sh/hook": test-success
|
||||
spec:
|
||||
containers:
|
||||
- name: wget
|
||||
image: busybox
|
||||
command: ['wget']
|
||||
args: ['{{ include "statping.fullname" . }}:{{ .Values.service.port }}']
|
||||
restartPolicy: Never
|
||||
charts/statping/values.yaml (new file)
@@ -0,0 +1,167 @@
|
||||
# Default values for statping.
|
||||
|
||||
image:
|
||||
repository: statping/statping
|
||||
pullPolicy: IfNotPresent
|
||||
tag: ""
|
||||
|
||||
global:
|
||||
postgresql:
|
||||
postgresqlDatabase: "postgres"
|
||||
postgresqlUsername: "postgres"
|
||||
|
||||
statping:
|
||||
name: ""
|
||||
description: ""
|
||||
domain: ""
|
||||
admin:
|
||||
user: ""
|
||||
password: ""
|
||||
email: ""
|
||||
existingSecret:
|
||||
enabled: false
|
||||
name: ""
|
||||
userKey: ""
|
||||
passwordKey: ""
|
||||
emailKey: ""
|
||||
|
||||
# Probes configuration
|
||||
probes:
|
||||
liveness:
|
||||
initialDelaySeconds: 60
|
||||
failureThreshold: 5
|
||||
timeoutSeconds: 10
|
||||
readiness:
|
||||
initialDelaySeconds: 60
|
||||
failureThreshold: 5
|
||||
timeoutSeconds: 10
|
||||
|
||||
imagePullSecrets: []
|
||||
nameOverride: ""
|
||||
fullnameOverride: ""
|
||||
|
||||
env: []
|
||||
|
||||
service:
|
||||
type: ClusterIP
|
||||
port: 8080
|
||||
## Specify the nodePort value for the LoadBalancer and NodePort service types.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
|
||||
##
|
||||
# nodePort:
|
||||
## Provide any additional annotations which may be required. This can be used to
|
||||
## set the LoadBalancer service type to internal only.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
|
||||
##
|
||||
annotations: {}
|
||||
labels: {}
|
||||
additionalSpec: {}
|
||||
|
||||
ingress:
|
||||
enabled: false
|
||||
annotations: {}
|
||||
# kubernetes.io/ingress.class: nginx
|
||||
# kubernetes.io/tls-acme: "true"
|
||||
labels: {}
|
||||
hosts:
|
||||
- host: chart-example.local
|
||||
paths:
|
||||
- /
|
||||
tls: []
|
||||
# - secretName: chart-example-tls
|
||||
# hosts:
|
||||
# - chart-example.local
|
||||
|
||||
persistence:
|
||||
enabled: true
|
||||
## statping configuration data Persistent Volume Storage Class
|
||||
## If defined, storageClassName: <storageClass>
|
||||
## If set to "-", storageClassName: "", which disables dynamic provisioning
|
||||
## If undefined (the default) or set to null, no storageClassName spec is
|
||||
## set, choosing the default provisioner. (gp2 on AWS, standard on
|
||||
## GKE, AWS & OpenStack)
|
||||
##
|
||||
# storageClass: "-"
|
||||
##
|
||||
## If you want to reuse an existing claim, you can pass the name of the PVC using
|
||||
## the existingClaim variable
|
||||
# existingClaim: your-claim
|
||||
# subPath: some-subpath
|
||||
accessMode: ReadWriteOnce
|
||||
size: 1Gi
|
||||
## Do not delete the pvc upon helm uninstall
|
||||
skipuninstall: false
|
||||
|
||||
postgres:
|
||||
type: postgresql
|
||||
kubedb:
|
||||
enabled: false
|
||||
version: 11.1
|
||||
storageType: Durable
|
||||
storage:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
posgresql:
|
||||
enabled: true
|
||||
# See https://github.com/bitnami/charts/tree/master/bitnami/postgresql for configuration
|
||||
|
||||
externalSecret:
|
||||
enabled: false
|
||||
type: kubernetes-external-secrets
|
||||
kubernetesExternalSecrets:
|
||||
spec: {}
|
||||
data: []
|
||||
|
||||
additionalVolumes: []
|
||||
|
||||
additionalVolumeMounts: []
|
||||
|
||||
serviceAccount:
|
||||
# Specifies whether a service account should be created
|
||||
create: true
|
||||
# Annotations to add to the service account
|
||||
annotations: {}
|
||||
# The name of the service account to use.
|
||||
# If not set and create is true, a name is generated using the fullname template
|
||||
name: ""
|
||||
|
||||
autoscaling:
|
||||
enabled: false
|
||||
minReplicas: 1
|
||||
maxReplicas: 3
|
||||
targetCPUUtilizationPercentage: 80
|
||||
# targetMemoryUtilizationPercentage: 80
|
||||
|
||||
podSecurityContext: {}
|
||||
# fsGroup: 2000
|
||||
|
||||
securityContext: {}
|
||||
# capabilities:
|
||||
# drop:
|
||||
# - ALL
|
||||
# readOnlyRootFilesystem: true
|
||||
# runAsNonRoot: true
|
||||
# runAsUser: 1000
|
||||
|
||||
resources: {}
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
nodeSelector: {}
|
||||
|
||||
tolerations: []
|
||||
|
||||
affinity: {}
|
||||
|
||||
podAnnotations: {}
|
||||
charts/traefik-forward-auth/.helmignore (new file)
@@ -0,0 +1,25 @@
|
||||
# Patterns to ignore when building packages.
|
||||
# This supports shell glob matching, relative path matching, and
|
||||
# negation (prefixed with !). Only one pattern per line.
|
||||
.DS_Store
|
||||
# Common VCS dirs
|
||||
.git/
|
||||
.gitignore
|
||||
.bzr/
|
||||
.bzrignore
|
||||
.hg/
|
||||
.hgignore
|
||||
.svn/
|
||||
# Common backup files
|
||||
*.swp
|
||||
*.bak
|
||||
*.tmp
|
||||
*.orig
|
||||
*~
|
||||
# Various IDEs
|
||||
.project
|
||||
.idea/
|
||||
*.tmproj
|
||||
.vscode/
|
||||
|
||||
README.md.gotmpl
|
||||
charts/traefik-forward-auth/Chart.yaml (new file)
@@ -0,0 +1,19 @@
|
||||
apiVersion: v2
|
||||
name: traefik-forward-auth
|
||||
description: A minimal forward authentication service that provides OAuth/SSO login and authentication for the traefik reverse proxy/load balancer
|
||||
type: application
|
||||
version: 1.0.0
|
||||
appVersion: 2.2.0
|
||||
keywords:
|
||||
- traefik
|
||||
- traefik-forward-auth
|
||||
- oauth
|
||||
- oauth2
|
||||
- oidc
|
||||
home: https://github.com/k8s-at-home/charts/tree/master/charts/traefik-forward-auth
|
||||
sources:
|
||||
- https://github.com/thomseddon/traefik-forward-auth
|
||||
- https://hub.docker.com/r/thomseddon/traefik-forward-auth
|
||||
maintainers:
|
||||
- name: DirtyCajunRice
|
||||
email: nick@cajun.pro
|
||||
charts/traefik-forward-auth/OWNERS (new file)
@@ -0,0 +1,4 @@
|
||||
approvers:
|
||||
- DirtyCajunRice
|
||||
reviewers:
|
||||
- DirtyCajunRice
|
||||
charts/traefik-forward-auth/README.md (new file)
@@ -0,0 +1,120 @@
|
||||
# traefik-forward-auth

[Artifact Hub](https://artifacthub.io/packages/helm/traefik-forward-auth)

A minimal forward authentication service that provides OAuth/SSO login and authentication for the traefik reverse proxy/load balancer

The default values and container images used in this chart will allow for running in a multi-arch cluster (amd64, arm, arm64)

Chart that
* Adds docker image information leveraging the [official image](https://github.com/thomseddon/traefik-forward-auth)
* Deploys [traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth)

## TL;DR

```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/traefik-forward-auth
```

## Installing the Chart

To install the chart with the release name `traefik-forward-auth`:

```console
helm install traefik-forward-auth k8s-at-home/traefik-forward-auth
```

## Uninstalling the Chart

To uninstall the `traefik-forward-auth` deployment:

```console
helm uninstall traefik-forward-auth
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/traefik-forward-auth/values.yaml)
file. It has several commented-out suggested values.

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install traefik-forward-auth \
  --set env.TZ="America/New_York" \
  k8s-at-home/traefik-forward-auth
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart.
For example,

```console
helm install traefik-forward-auth k8s-at-home/traefik-forward-auth --values values.yaml
```
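As a sketch only (key names come from the values table below; client credentials and domains are placeholders), a `values.yaml` enabling the Google provider could look like:

```yaml
# values.yaml -- hypothetical overrides for traefik-forward-auth
providers:
  google:
    enabled: true
    clientId: "my-client-id.apps.googleusercontent.com"   # placeholder
    clientSecret: "my-client-secret"                       # placeholder
secret: ""                      # left empty so the chart generates a signing secret
authHost: "auth.example.com"    # single host to return to after third-party auth
cookie:
  domain: "example.com"         # domain(s) to set the auth cookie on
middleware:
  enabled: true                 # deploy the preconfigured Traefik middleware
  name: "traefik-forward-auth"  # placeholder middleware name
```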
## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| authHost | string | `""` | Single host to use when returning from 3rd party auth |
| autoscaling.enabled | bool | `false` | |
| autoscaling.maxReplicas | int | `100` | |
| autoscaling.minReplicas | int | `1` | |
| autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| cookie.csrfName | string | `""` | CSRF Cookie Name (default: _forward_auth_csrf) |
| cookie.domain | string | `""` | Domain(s) to set auth cookie on. (Comma delimited) |
| cookie.insecure | string | `""` | Use insecure cookies |
| cookie.name | string | `""` | Cookie Name (default: _forward_auth) |
| default.action | string | `""` | [auth|allow] Default action (default: auth) |
| default.provider | string | `""` | [google|oidc|generic-oauth] Default provider (default: google) |
| env | list | `[]` | |
| fullnameOverride | string | `""` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.repository | string | `"thomseddon/traefik-forward-auth"` | |
| image.tag | string | `""` | |
| imagePullSecrets | list | `[]` | |
| ingress.annotations | object | `{}` | |
| ingress.enabled | bool | `false` | |
| ingress.hosts[0].host | string | `"chart-example.local"` | |
| ingress.hosts[0].paths | list | `[]` | |
| ingress.tls | list | `[]` | |
| lifetime | string | `""` | Lifetime in seconds (default: 43200) |
| logging.format | string | `""` | [text|json|pretty] Log format (default: text) |
| logging.level | string | `""` | [trace|debug|info|warn|error|fatal|panic] Log level (default: warn) |
| logoutRedirect | string | `""` | URL to redirect to following logout |
| middleware.enabled | bool | `false` | Enable to deploy a preconfigured middleware |
| middleware.name | string | `""` | Name for the middleware |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| podAnnotations | object | `{}` | |
| podSecurityContext | object | `{}` | |
| providers.genericOauth.authUrl | string | `""` | Auth/Login URL |
| providers.genericOauth.clientId | string | `""` | Client ID |
| providers.genericOauth.clientSecret | string | `""` | Client Secret |
| providers.genericOauth.enabled | bool | `false` | Enable the generic OAUTH2 provider |
| providers.genericOauth.resource | string | `""` | Optional resource indicator |
| providers.genericOauth.scope | string | `""` | Scopes (default: profile, email) |
| providers.genericOauth.tokenStyle | string | `""` | How token is presented when querying the User URL |
| providers.genericOauth.tokenUrl | string | `""` | Token URL |
| providers.genericOauth.userUrl | string | `""` | URL used to retrieve user info |
| providers.google.clientId | string | `""` | Client ID |
| providers.google.clientSecret | string | `""` | Client Secret |
| providers.google.enabled | bool | `false` | Enable the google provider |
| providers.google.prompt | string | `""` | Space separated list of OpenID prompt options |
| providers.oidc.clientId | string | `""` | Client ID |
| providers.oidc.clientSecret | string | `""` | Client Secret |
| providers.oidc.enabled | bool | `false` | Enable the generic OIDC provider |
| providers.oidc.issuerUrl | string | `""` | Issuer URL |
| providers.oidc.resource | string | `""` | Optional resource indicator |
| replicaCount | int | `1` | |
| resources | object | `{}` | |
| restrictions.domain | string | `""` | Only allow given email domains. (Comma delimited) |
| restrictions.whitelist | string | `""` | Only allow given email addresses. (Comma delimited) |
| secret | string | `""` | Secret used for signing. If empty, one will be generated. If specifying your own in env use "-" |
| securityContext | object | `{}` | |
| service.additionalSpec | object | `{}` | |
| service.annotations | object | `{}` | |
| service.labels | object | `{}` | |
| service.port | int | `4181` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `""` | |
| tolerations | list | `[]` | |
| urlPath | string | `""` | Callback URL Path (default: /_oauth) |
|
||||
charts/traefik-forward-auth/README.md.gotmpl (new file)
@@ -0,0 +1,26 @@
|
||||
{{ template "chart.header" . }}
|
||||
{{ template "chart.typeBadge" . }}{{ template "chart.versionBadge" . }}{{ template "chart.appVersionBadge" . }}{{ template "badge.artifactHub" . }}
|
||||
|
||||
{{ template "chart.description" . }}
|
||||
|
||||
{{ template "description.multiarch" . }}
|
||||
|
||||
Chart that
|
||||
* Adds docker image information leveraging the [official image](https://github.com/thomseddon/traefik-forward-auth)
|
||||
* Deploys [traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth)
|
||||
|
||||
{{ template "install.tldr" . }}
|
||||
|
||||
{{ template "install" . }}
|
||||
|
||||
{{ template "uninstall" . }}
|
||||
|
||||
{{ template "configuration.header" . }}
|
||||
|
||||
{{ template "configuration.readValues" . }}
|
||||
|
||||
{{ template "configuration.example.set" .}}
|
||||
|
||||
{{ template "configuration.example.file" . }}
|
||||
|
||||
{{ template "chart.valuesSection" . }}
|
||||
charts/traefik-forward-auth/ci/ct-values.yaml (new file)
@@ -0,0 +1,5 @@
|
||||
providers:
|
||||
google:
|
||||
enabled: true
|
||||
clientId: "fakeclientid"
|
||||
clientSecret: "fakeclientsecret"
|
||||
charts/traefik-forward-auth/templates/NOTES.txt (new file)
@@ -0,0 +1,21 @@
|
||||
1. Get the application URL by running these commands:
|
||||
{{- if .Values.ingress.enabled }}
|
||||
{{- range $host := .Values.ingress.hosts }}
|
||||
{{- range .paths }}
|
||||
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- else if contains "NodePort" .Values.service.type }}
|
||||
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "traefik-forward-auth.fullname" . }})
|
||||
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
{{- else if contains "LoadBalancer" .Values.service.type }}
|
||||
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
|
||||
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "traefik-forward-auth.fullname" . }}'
|
||||
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "traefik-forward-auth.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
|
||||
echo http://$SERVICE_IP:{{ .Values.service.port }}
|
||||
{{- else if contains "ClusterIP" .Values.service.type }}
|
||||
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "traefik-forward-auth.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
|
||||
echo "Visit http://127.0.0.1:8080 to use your application"
|
||||
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
|
||||
{{- end }}
|
||||
charts/traefik-forward-auth/templates/_helpers.tpl (new file)
@@ -0,0 +1,63 @@
|
||||
{{/* vim: set filetype=mustache: */}}
|
||||
{{/*
|
||||
Expand the name of the chart.
|
||||
*/}}
|
||||
{{- define "traefik-forward-auth.name" -}}
|
||||
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create a default fully qualified app name.
|
||||
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
|
||||
If release name contains chart name it will be used as a full name.
|
||||
*/}}
|
||||
{{- define "traefik-forward-auth.fullname" -}}
|
||||
{{- if .Values.fullnameOverride }}
|
||||
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
|
||||
{{- else }}
|
||||
{{- $name := default .Chart.Name .Values.nameOverride }}
|
||||
{{- if contains $name .Release.Name }}
|
||||
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
|
||||
{{- else }}
|
||||
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create chart name and version as used by the chart label.
|
||||
*/}}
|
||||
{{- define "traefik-forward-auth.chart" -}}
|
||||
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Common labels
|
||||
*/}}
|
||||
{{- define "traefik-forward-auth.labels" -}}
|
||||
helm.sh/chart: {{ include "traefik-forward-auth.chart" . }}
|
||||
{{ include "traefik-forward-auth.selectorLabels" . }}
|
||||
{{- if .Chart.AppVersion }}
|
||||
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
|
||||
{{- end }}
|
||||
app.kubernetes.io/managed-by: {{ .Release.Service }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Selector labels
|
||||
*/}}
|
||||
{{- define "traefik-forward-auth.selectorLabels" -}}
|
||||
app.kubernetes.io/name: {{ include "traefik-forward-auth.name" . }}
|
||||
app.kubernetes.io/instance: {{ .Release.Name }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Create the name of the service account to use
|
||||
*/}}
|
||||
{{- define "traefik-forward-auth.serviceAccountName" -}}
|
||||
{{- if .Values.serviceAccount.create }}
|
||||
{{- default (include "traefik-forward-auth.fullname" .) .Values.serviceAccount.name }}
|
||||
{{- else }}
|
||||
{{- default "default" .Values.serviceAccount.name }}
|
||||
{{- end }}
|
||||
{{- end }}
Some files were not shown because too many files have changed in this diff.