Compare commits

...

107 Commits

Author SHA1 Message Date
Ryan Holt
614f2bd25f Merge pull request #47 from ishioni/mosquitto
[mosquitto] Add chart
2020-09-11 17:11:44 -04:00
Piotr Maksymiuk
ca2c348e6d fix maintainer 2020-09-11 22:46:15 +02:00
Piotr Maksymiuk
7d06c3d5e3 Add mosquitto chart 2020-09-11 13:46:25 +02:00
Bernd Schörgers
1eb548d382 Add pre-commit support (#42) 2020-09-09 08:00:50 -04:00
Bernd Schörgers
befa7553fa [home-assistant] Improve git-sync script, add git-crypt support (#40)
* Improve git-sync script, add git-crypt support

* Remove old comment
2020-09-07 08:11:31 -04:00
Jeff Billimek
b629ecc876 Merge pull request #34 from k8s-at-home/media-common
Multiple Chart Replacement
2020-09-06 18:18:06 -04:00
Nicholas St. Germain
2676dbded2 quote checks [skip install] [skip lint] 2020-09-06 17:15:41 -05:00
Nicholas St. Germain
7e92803f87 space negative operator [skip install] [skip lint] 2020-09-06 17:14:25 -05:00
Nicholas St. Germain
10cfeb8bd1 fix copy/paste on readmes, standardize readme lines to approx 120 char per line, and allow commit messages to skip linting. [skip install] [skip lint] 2020-09-06 17:10:34 -05:00
Nicholas St. Germain
4f99bc67fb update readmes to original verbosity and address in separate pr 2020-09-06 16:22:54 -05:00
Nicholas St. Germain
6d5c992852 switch organization back to linuxserver until itscontained supports multiarch 2020-09-06 15:40:16 -05:00
Nicholas St. Germain
75fd9f4e6d add back icons sourced from project owner repos 2020-09-06 15:34:41 -05:00
Nicholas St. Germain
da9bea90b3 fix descriptions to projects' taglines 2020-09-06 15:23:15 -05:00
Nicholas St. Germain
3b06c431b0 add tag. .Chart.appVersion cannot be passed to dependency until https://github.com/helm/helm/pull/6876 is merged 2020-09-06 01:08:47 -05:00
Nicholas St. Germain
b899548da9 use ls version for now 2020-09-05 23:56:45 -05:00
Nicholas St. Germain
74845ca08e Merge remote-tracking branch 'origin/media-common' into media-common 2020-09-05 22:36:14 -05:00
Nicholas St. Germain
3a40f65b46 version bump to appease chart releaser 2020-09-05 22:36:08 -05:00
Nicholas St. Germain
43392e1e7a Merge branch 'master' into media-common 2020-09-05 22:30:50 -05:00
Nicholas St. Germain
d3406d1f39 add owners and readmes 2020-09-05 22:28:34 -05:00
Nicholas St. Germain
db24d009cc add readme for media-common 2020-09-05 22:17:16 -05:00
Nicholas St. Germain
b94814d3d7 move the rest and replace 2020-09-05 21:58:48 -05:00
Nicholas St. Germain
3070528d2f replace radarr 2020-09-05 21:45:52 -05:00
Ryan Holt
de73201b2b Merge pull request #33 from k8s-at-home/media-common
[media-common] New Chart
2020-09-05 21:38:50 -04:00
Nicholas St. Germain
ba4e6b978c Merge remote-tracking branch 'origin/media-common' into media-common 2020-09-05 20:32:36 -05:00
Nicholas St. Germain
48df925051 stupid line 2020-09-05 20:32:31 -05:00
Ryan Holt
2ecc70f1df newline 2020-09-05 21:31:10 -04:00
Nicholas St. Germain
5c35aa1a1d Merge remote-tracking branch 'origin/media-common' into media-common 2020-09-05 20:31:00 -05:00
Nicholas St. Germain
12853f3b9a add configpath 2020-09-05 20:30:55 -05:00
Ryan Holt
31959e5e37 newline 2020-09-05 21:30:32 -04:00
Ryan Holt
a75a6cef77 newline 2020-09-05 21:30:08 -04:00
Ryan Holt
ac68205d8b newline 2020-09-05 21:29:27 -04:00
Ryan Holt
66d5bd7193 newline 2020-09-05 21:29:05 -04:00
Ryan Holt
c40bdfeff7 newline 2020-09-05 21:28:48 -04:00
Ryan Holt
04478fd52f newline 2020-09-05 21:28:34 -04:00
Ryan Holt
ab4fd1b1e0 newline 2020-09-05 21:28:19 -04:00
Ryan Holt
aec35fe08f newline 2020-09-05 21:28:02 -04:00
Ryan Holt
16828ba415 newline 2020-09-05 21:27:49 -04:00
Ryan Holt
2a3f676426 newline 2020-09-05 21:27:32 -04:00
Ryan Holt
1b1898809b newline 2020-09-05 21:27:11 -04:00
Ryan Holt
1dff5670d8 new line 2020-09-05 21:26:56 -04:00
Ryan Holt
e5b78c7314 added newline 2020-09-05 21:26:02 -04:00
Ryan Holt
c5b81a263f added newline 2020-09-05 21:25:45 -04:00
Ryan Holt
e7e4665389 added newline 2020-09-05 21:25:24 -04:00
Ryan Holt
990ba59dfa added newline 2020-09-05 21:25:07 -04:00
Ryan Holt
0ed3ecbb48 added newline 2020-09-05 21:24:42 -04:00
Ryan Holt
480fa5a7d3 add newline 2020-09-05 21:24:20 -04:00
Nicholas St. Germain
1f6050759b fix configpath, volumemount, and helpers 2020-09-05 20:12:50 -05:00
Nicholas St. Germain
0f37c8776d Merge branch 'master' into media-common 2020-09-05 19:59:54 -05:00
Nicholas St. Germain
5451ce26ab ... 2020-09-05 19:50:27 -05:00
Nicholas St. Germain
107e53d3b7 add extra ingress option for apis and test ct-values.yaml 2020-09-05 19:49:12 -05:00
Jeff Billimek
f4855955cf Merge pull request #32 from k8s-at-home/unifi
[unifi] adding unifi chart
2020-09-05 11:20:11 -04:00
Jeff Billimek
a5694ab9d9 Merge branch 'master' into unifi 2020-09-05 11:13:11 -04:00
Thomas John Wesolowski
2508a42660 Add dns options to values.yaml (#30)
Signed-off-by: Thomas John Wesolowski <wojoinc@gmail.com>
2020-09-05 11:12:14 -04:00
Ryan Holt
cf4c0ba997 added newline to end of file 2020-09-05 11:06:08 -04:00
Nicholas St. Germain
5705371a35 fix last portselector 2020-09-05 04:36:30 -05:00
Nicholas St. Germain
76c5160e37 back to application or templates aren't rendered 2020-09-05 04:25:20 -05:00
Nicholas St. Germain
a26921bba5 add tautulli 2020-09-05 03:38:33 -05:00
Nicholas St. Germain
c67e3df333 add organizr and ombi 2020-09-05 03:31:43 -05:00
Nicholas St. Germain
97f18a033c cleanup 2020-09-05 03:21:49 -05:00
Nicholas St. Germain
22017632bc test 2020-09-05 03:20:13 -05:00
Nicholas St. Germain
c991d11bce fix gitignore 2020-09-05 01:14:28 -05:00
Nicholas St. Germain
561a0f25bb change type 2020-09-05 00:53:09 -05:00
Nicholas St. Germain
e0f64a26f2 fix old template var 2020-09-05 00:30:43 -05:00
Nicholas St. Germain
8999baca25 fix old template var 2020-09-05 00:28:40 -05:00
Nicholas St. Germain
90daf5bcf1 media-common with base radarr/sonarr/lidarr 2020-09-05 00:22:54 -05:00
Jeff Billimek
1746270044 changes to migrate chart to new repo
Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-09-04 23:49:37 -04:00
Matt Farmer
55313d0be2 [stable/unifi] Docs: Fix name of cert secret (#23379)
* Fix name of cert secret

The original name in the documentation is incorrect.

Signed-off-by: Matt Farmer <matt@frmr.me>

* Increment patch number

Signed-off-by: Matt Farmer <matt@frmr.me>

* Correctly bump unifi chart version

Signed-off-by: Matt Farmer <matt@frmr.me>
2020-09-04 23:47:36 -04:00
Stephen Liang
153620272e Add ingress for Unifi controller service when not using the unified service. (#22703)
Fixes #21887

Bump version to 0.10.0

Signed-off-by: Stephen Liang <stephenliang@users.noreply.github.com>
2020-09-04 23:47:36 -04:00
Marcin Iwiński
8a7fe72ea6 [stable/unifi] adding functionality to mount extra volumes (#22702)
* [stable/unifi] adding functionality to mount extra volumes

This change adds the possibility to specify additional volumes
when deploying the Unifi controller.

Signed-off-by: Marcin Iwinski <marcin.iwinski@gmail.com>

* fixing defaults in README.md

Signed-off-by: Marcin Iwinski <marcin.iwinski@gmail.com>

* [stable/unifi] bumping version to 0.9.0

Signed-off-by: Marcin Iwinski <marcin.iwinski@gmail.com>
2020-09-04 23:47:36 -04:00
Marcin Iwiński
ca6493faf3 Adding secretName variable to customCert (#22453)
Adding the possibility to expose a certificate and its key via a k8s secret/tls.
Since secret/tls keeps the cert under tls.crt and the key under tls.key, the
default values for customCert.certName and customCert.keyName were modified to be
more compatible with the k8s-native way of storing certificates.

Signed-off-by: Marcin Iwinski <marcin.iwinski@gmail.com>
2020-09-04 23:47:35 -04:00
James Choncholas
576ff487df stable/unifi implements subPath functionality (#22432)
* unifi chart supports subPath for existing PVCs

Signed-off-by: James Choncholas <jchoncholas@gmail.com>

* bump version number

Signed-off-by: James Choncholas <jchoncholas@gmail.com>
2020-09-04 23:47:35 -04:00
Jonas Janz
65abab892e [stable/unifi] add custom cert options (#21863)
* feat(unifi): add custom cert options

Signed-off-by: PixelJonas <jonas@janz.digital>

* feat(unifi): bump version to 0.7.0

Signed-off-by: PixelJonas <jonas@janz.digital>
2020-09-04 23:47:35 -04:00
Jeff Billimek
50ce4d6bde Bumping the container version to 5.12.35 (#21492)
Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-09-04 23:47:34 -04:00
Arno DUBOIS
c69cc6751f [stable/unifi] Ingress was not referring to the good service (#21321)
Signed-off-by: Arno <arno.du@orange.fr>
2020-09-04 23:47:34 -04:00
Arno DUBOIS
995ef7ef2b [stable/unifi] Fixed some mistakes with nodePort (#21320)
Signed-off-by: Arno <arno.du@orange.fr>
2020-09-04 23:47:34 -04:00
Arno DUBOIS
6a3b129a4b [stable/unifi] Adding captive portal service (#21241)
* [stable/unifi] Adding captive portal service
Signed-off-by: Arno Dubois <arno.du@orange.fr>

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

* [stable/unifi] Annnd bumping version
Signed-off-by: Arno Dubois <arno.du@orange.fr>

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

* Added an enabled switch

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

* [stable/unifi] Addressing review feedback

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

* [stable/unifi] Adding captive portal ingress

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

* [stable/unifi] Better table formatting

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

* [stable/unifi] Fixed ingress

Signed-off-by: Arno DUBOIS <arnodubois@sweetpunk.com>

Co-authored-by: Arno DUBOIS <arnodubois@sweetpunk.com>
2020-09-04 23:47:33 -04:00
Ryan Holt
6c8d01add3 add deploymentannotations, bump chart version (#20763)
Signed-off-by: Ryan Holt <ryan@ryanholt.net>
2020-09-04 23:47:33 -04:00
Marco Kilchhofer
9798bb82cc Add ability to specify additional jvm options and config files (#20163)
I use this to override the log4j config to see the logs also on stdout.

Signed-off-by: Marco Kilchhofer <marco@kilchhofer.info>
2020-09-04 23:47:33 -04:00
WTPascoe
0322acc6fe HTTPS is required for unifi gui (#19612)
* HTTPS is required for unifi gui

Signed-off-by: Wayne Pascoe <wayne@penguinpowered.org>

* Removed new annotation in values.yaml
Added instructions in README

Signed-off-by: Wayne Pascoe <wayne+github@penguinpowered.org>
2020-09-04 23:47:32 -04:00
lnattrass
0a221f5297 [stable/unifi] Allow wildcard ingress certificates (#18356)
* [stable/unifi] Allow wildcard ingress certificates

Signed-off-by: Liam Nattrass <liam.d.nattrass+git@gmail.com>

* [stable/unifi] Bump version

Signed-off-by: Liam Nattrass <liam.d.nattrass+git@gmail.com>
2020-09-04 23:47:32 -04:00
Per Otterström
ab941ae48d [stable/unifi] Make web interface ports configurable (#18052)
* bump the unifi docker image to version 5.11.50
* forward port values to unifi docker environment variables

Closes #18051

Signed-off-by: Per Otterström <per.otterstrom@gmail.com>
2020-09-04 23:47:32 -04:00
sherbang
a078da5499 Fix unifi NOTES to find correct service (#13252)
* Fix unifi NOTES to find correct service

Unifi installs the gui service as unifi-gui, but the command in the notes points to a non-existent 'unifi' service. Use unifi.name + '-gui' to construct the service name here, which duplicates the logic in gui-svc.yaml.

Signed-off-by: Brian Johnson <brian@sherbang.com>

* Increment unifi version to 0.4.2

Signed-off-by: Brian Johnson <brian@sherbang.com>
2020-09-04 23:47:31 -04:00
Jeff Billimek
c2df150921 fixing label-name migration (#12691)
Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-09-04 23:47:31 -04:00
Jeff Billimek
d28bf3fecf [stable/unifi] unifi chart enhancements (#12047)
* switching unifi chart to StatefulSet

* based on the persistent nature of this chart as well as [this
discussion](https://github.com/helm/charts/issues/1863), migrating the
chart to a StatefulSet instead of a deployment. As a result bumping the
major version
* bumping unifi controller to the latest stable version (5.10.19)
* adding @mcronce to the OWNERS file

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* using volumeClaimTemplates for statefulSet

* also updating label syntax to current helm standards (e.g.
`app.kubernetes.io/name`)

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* fixing indenting

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* using Parallel podManagementPolicy

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* revert to Deployment and leverage strategy types

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* include readme entry for strategyType

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* hard-code replica count and add mcronce to Chart maintainers

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* fixing linting error

Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-09-04 23:47:31 -04:00
Thiemo
7f3bc53d12 fix(stable/unifi ingress): fix scoping issue (#12482)
.Values was out of scope for the hosts block, since it's in a range statement.
Moved the failing access to unifiedService.enabled to a variable.

Signed-off-by: Thiemo Krause <krausethiemo@googlemail.com>
2020-09-04 23:47:30 -04:00
Mike Cronce
652612e76b stable/unifi: Added "unified service" option to place everything under one service (#11550)
Signed-off-by: Mike Cronce <mike@quadra-tec.net>
2020-09-04 23:47:30 -04:00
Jeff Billimek
93addda234 [stable/unifi] Revert #10789 (#11278)
* Revert "Simplify  for unifi (#10789)"

This reverts commit b09535dfb4.

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* bumping chart version as part of reversion

Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-09-04 23:47:30 -04:00
Werner Buck
08f9adbd73 Simplify for unifi (#10789)
The discovery and stun ports are part of the same service. Unifi depends
on them being on the same hostname.

Signed-off-by: Werner Buck <email@wernerbuck.nl>
2020-09-04 23:47:30 -04:00
nreisbeck
73956c3eed stable/unifi/README.md: fix current version (#10784)
Signed-off-by: Nolan Reisbeck <nolan.reisbeck@gmail.com>
2020-09-04 23:47:29 -04:00
Christian Erhardt
609b2dbe31 Port forward in NOTES.txt is wrong (#10200)
If you do a port-forward to 8080, the unifi controller tries to forward you to a secure TLS connection on port 8443. This fails because port 8443 is not forwarded. If you do a direct forward to 8443, everything works as expected.

Signed-off-by: Christian Erhardt <christian.erhardt@mojo2k.de>
2020-09-04 23:47:29 -04:00
Mike Cronce
ac0202a0c4 stable/unifi: Replace "addSetfcap" option with simply adding that capability when "runAsRoot" is not set to true (#10359)
Signed-off-by: Mike Cronce <mike@quadra-tec.net>
2020-09-04 23:47:29 -04:00
Jesse Stuart
1a67cf9070 [stable/unifi] Fix typos/formatting in README. (#10277)
Signed-off-by: Jesse Stuart <hi@jessestuart.com>
2020-09-04 23:47:28 -04:00
Jacob Block
4cbe828448 [stable/unifi] Add UID and GID options. (#10218)
Signed-off-by: Jacob Block <jacob.block@gmail.com>
2020-09-04 23:47:28 -04:00
Mike Cronce
be82a0fccb stable/unifi: Add "addSetfcap" option to give the SETFCAP capability to the Unifi container (#10143)
Signed-off-by: Mike Cronce <mike@quadra-tec.net>
2020-09-04 23:47:28 -04:00
Lyle Franklin
d1fbb47709 Add configurable podAnnotations to unifi chart (#9833)
Use case is using `ark` + `restic` to take backups, which requires pods
with persistent data to be annotated like:
```
kubectl annotate pod unifi-55f6dcc44c-khbrk backup.ark.heptio.com/backup-volumes=unifi-data
```

Signed-off-by: Lyle Franklin <lylejfranklin@gmail.com>
2020-09-04 23:47:27 -04:00
Lucas Servén Marín
214dd6eaac stable/unifi/templates/deployment.yaml: fix probes (#9180)
* stable/unifi/templates/deployment.yaml: fix probes

The `livenessProbe` and `readinessProbe` are incorrectly defined.
The `initialDelaySeconds` field should not be nested within the `httpGet`
field.

Signed-off-by: Lucas Serven <lserven@gmail.com>

* stable/unifi: bump patch version

Signed-off-by: Lucas Serven <lserven@gmail.com>
2020-09-04 23:47:27 -04:00
Jeff Billimek
3f50bc7f61 upgrading to unifi v5.9.29 (#8887)
* upgrading to unifi v5.9.29

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* Update Chart.yaml

Signed-off-by: Reinhard Nägele <unguiculus@gmail.com>
2020-09-04 23:47:27 -04:00
Jonathan Herlin
10348d1c0b [stable/unifi] Invalid link in chart sources (#8639)
* Invalid link in chart sources

There was an invalid link in sources; this commit fixes the link.
Signed-off-by: Jonathan Herlin <jonte@jherlin.se>

* stable/unifi bump version

Signed-off-by: Jonathan Herlin <jonte@jherlin.se>
2020-09-04 23:47:26 -04:00
Jeff Billimek
062db282ed [stable/unifi] unifi controller chart (New chart) (#6426)
* initial commit - unifi controller chart

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* enabling persistence by default, per guidelines

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* enabling persistence by default, per guidelines

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* changes requested in PR

* Pegging to a certain version for the chart (0.1.0) until otherwise directed
* Using consistent indentation for lists
* Using camelCase
* updating app version to current (5.8.28)

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* correcting linting failures

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* adding OWNERS for more timely merges in the future

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* Correcting inconsistent service definitions

* fixing inconsistencies with service port & name definitions as described in PR
* bumping app version to current
* correcting typo in Charts.yaml

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* correcting ingress servicePort definition

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* correcting ingress servicePort definition

Signed-off-by: Jeff Billimek <jeff@billimek.com>

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* adding missing NodePort settings

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* Expanding service definitions

* The values and readme reflect that the various services (deployment, stun, gui, controller) can handle annotations, but there is no use of those in the templates. This is now fixed
* Added externalTrafficPolicy to all services
* Some of these changes were requested via https://github.com/billimek/billimek-charts/issues/3

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* switching to apps/v1

Signed-off-by: Jeff Billimek <jeff@billimek.com>
2020-09-04 23:47:26 -04:00
Jeff Billimek
457a149637 Merge pull request #27 from halkeye/patch-1
[nzbhydra2] Fix case in readme for service
2020-09-02 07:58:52 -04:00
Gavin Mogan
1b9cfcfb80 Bump version 2020-09-01 21:56:13 -07:00
Gavin Mogan
23a666b18b [nzbhydra2] Fix case in readme for service 2020-09-01 21:54:28 -07:00
Ryan Holt
66a943c448 [dashmachine] initial chart release for dashmachine (#26) 2020-09-01 11:02:47 -04:00
Jeff Billimek
8c958cbadb Merge pull request #25 from blmhemu/master
[Bazarr] Added subpath for config
2020-09-01 07:50:21 -04:00
Devin Buhl
ba63649c59 Merge branch 'master' into master 2020-09-01 07:45:22 -04:00
Christian Haller
d149fb6bd7 [plex] Fix values reference for "customCertificateDomain" (#24) 2020-09-01 07:44:32 -04:00
Hemanth Bollamreddi
f5241bde3a [Bazarr] Added subpath for config 2020-09-01 14:24:34 +05:30
129 changed files with 3982 additions and 2686 deletions


@@ -1,30 +1,28 @@
name: Lint and Test Charts
on: pull_request
jobs:
lint-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Fetch history
run: git fetch --prune --unshallow
run: |
git fetch --prune --unshallow;
echo ::set-env name=commitmsg::$(git log --format=%B -n 1 ${{ github.event.after }})
- name: Run chart-testing (lint)
id: lint
uses: helm/chart-testing-action@v1.0.0
if: "! contains(env.commitmsg, '[skip lint]')"
with:
command: lint
config: ct.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.0.0
if: steps.lint.outputs.changed == 'true'
if: "steps.lint.outputs.changed == 'true' && ! contains(env.commitmsg, '[skip install]')"
- name: Run chart-testing (install)
uses: helm/chart-testing-action@v1.0.0
if: "steps.lint.outputs.changed == 'true' && ! contains(env.commitmsg, '[skip install]')"
with:
command: install
config: ct.yaml
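For illustration, this is how a contributor would exercise the skip tags the workflow above checks for; the commit message text itself is hypothetical:

```shell
# The workflow reads the message of the last commit and skips the lint
# and/or install jobs when these tags are present.
git commit -m "docs: fix a chart README typo [skip lint] [skip install]"
```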

.gitignore

@@ -1 +1,2 @@
.env
.idea

.pre-commit-config.yaml

@@ -0,0 +1,13 @@
# See https://pre-commit.com for more information
repos:
  - repo: local
    hooks:
      - id: ct-lint
        name: "Chart Test: Lint"
        language: docker_image
        pass_filenames: false
        types: ['file']
        files: '^charts/.*(\.ya?ml|\.tpl|\.helmignore|NOTES.txt)'
        entry: -u 0 quay.io/helmpack/chart-testing:v3.0.0 ct
        args:
          - lint


@@ -33,9 +33,9 @@ See `git help commit`:
### Technical Requirements
* Must follow [Charts best practices](https://helm.sh/docs/topics/chart_best_practices/)
* Must pass CI jobs for linting and installing changed charts with the [chart-testing](https://github.com/helm/chart-testing) tool
* Any change to a chart requires a version bump following [semver](https://semver.org/) principles. See [Immutability(#immutability) and [Versioning](#versioning) below
* Must follow [Charts best practices](https://helm.sh/docs/topics/chart_best_practices/).
* Must pass CI jobs for linting and installing changed charts with the [chart-testing](https://github.com/helm/chart-testing) tool. See [pre-commit](#pre-commit) below.
* Any change to a chart requires a version bump following [semver](https://semver.org/) principles. See [Immutability](#immutability) and [Versioning](#versioning) below.
Once changes have been merged, the release job will automatically run to package and release changed charts.
@@ -51,3 +51,7 @@ Charts should start at `1.0.0`. Any breaking (backwards incompatible) changes to
1. Bump the MAJOR version
2. In the README, under a section called "Upgrading", describe the manual steps necessary to upgrade to the new (specified) MAJOR version
### pre-commit
This repo supports the [pre-commit](https://pre-commit.com) framework. By installing the framework (see [docs](https://pre-commit.com/#install)) it is possible to perform the chart linting step before committing your code. This can help prevent linter issues in the pipeline. Note that this requires having Docker running in your development environment.
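A minimal sketch of that workflow, using the standard pre-commit CLI (any supported installation method works):

```shell
pip install pre-commit       # or brew install pre-commit, etc.
pre-commit install           # register the git hook in your clone of this repo
pre-commit run --all-files   # run the ct-lint hook once across all charts
```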


@@ -2,6 +2,7 @@
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![](https://github.com/k8s-at-home/charts/workflows/Release%20Charts/badge.svg?branch=master)](https://github.com/k8s-at-home/charts/actions)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
## Usage


@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v0.9.0.2
description: Bazarr is a companion application to Sonarr and Radarr. It manages and downloads subtitles based on your requirements
name: bazarr
version: 3.0.1
version: 3.1.0
keywords:
- bazarr
- radarr


@@ -75,6 +75,7 @@ The following tables lists the configurable parameters of the Sentry chart and t
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.subPath` | Select a subPath in the PVC | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |


@@ -64,6 +64,9 @@ spec:
volumeMounts:
- mountPath: /config
name: config
{{- if .Values.persistence.config.subPath }}
subPath: {{ .Values.persistence.config.subPath }}
{{- end }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}

View File

@@ -78,6 +78,7 @@ persistence:
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
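A hedged example of the new option, with the claim and directory names hypothetical: mounting only a `bazarr` subdirectory of a shared config PVC.

```shell
helm upgrade bazarr k8s-at-home/bazarr \
  --set persistence.config.existingClaim=shared-config \
  --set persistence.config.subPath=bazarr
```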


@@ -0,0 +1,12 @@
apiVersion: v2
appVersion: v0.5-4
description: DashMachine is another web application bookmark dashboard, with fun features.
icon: https://github.com/rmountjoy92/DashMachine/raw/master/dashmachine/static/images/logo/logo.png
home: https://github.com/rmountjoy92/DashMachine
name: dashmachine
version: 1.0.0
sources:
  - https://github.com/rmountjoy92/DashMachine
maintainers:
  - name: carpenike
    email: ryan@ryanholt.net


@@ -0,0 +1,31 @@
dashmachine
===========
DashMachine is another web application bookmark dashboard, with fun features.
## Chart Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| deploymentAnnotations | object | `{}` | |
| fullnameOverride | string | `""` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.repository | string | `"rmountjoy/dashmachine"` | |
| image.tag | string | `"latest"` | |
| ingress.annotations | object | `{}` | |
| ingress.enabled | bool | `false` | |
| ingress.hosts[0] | string | `"chart-example.local"` | |
| ingress.paths[0] | string | `"/"` | |
| ingress.tls | list | `[]` | |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| persistence.accessModes[0] | string | `"ReadWriteOnce"` | |
| persistence.enabled | bool | `false` | |
| persistence.size | string | `"1Gi"` | |
| persistence.storageClassName | string | `""` | |
| podAnnotations | object | `{}` | |
| replicaCount | int | `1` | |
| resources | object | `{}` | |
| service.port | int | `5000` | |
| service.type | string | `"ClusterIP"` | |
| tolerations | list | `[]` | |
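As a sketch (chart values from the table above; the hostname is hypothetical), the chart could be installed with:

```shell
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm install dashmachine k8s-at-home/dashmachine \
  --set ingress.enabled=true \
  --set "ingress.hosts[0]=dash.example.com"
```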


@@ -1,19 +1,21 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- range $host := .Values.ingress.hosts }}
{{- range $.Values.ingress.paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "radarr.fullname" . }})
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "dashmachine.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ include "radarr.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "radarr.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
You can watch the status by running 'kubectl get svc -w {{ include "dashmachine.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "dashmachine.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "radarr.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "dashmachine.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:5000
{{- end }}


@@ -2,7 +2,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "lidarr.name" -}}
{{- define "dashmachine.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
@@ -11,7 +11,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "lidarr.fullname" -}}
{{- define "dashmachine.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
@@ -27,6 +27,6 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "lidarr.chart" -}}
{{- define "dashmachine.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@@ -0,0 +1,78 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "dashmachine.fullname" . }}
  {{- if .Values.deploymentAnnotations }}
  annotations:
    {{- range $key, $value := .Values.deploymentAnnotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
  {{- end }}
  labels:
    app.kubernetes.io/name: {{ include "dashmachine.name" . }}
    helm.sh/chart: {{ include "dashmachine.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "dashmachine.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "dashmachine.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
      {{- if .Values.podAnnotations }}
      annotations:
        {{- range $key, $value := .Values.podAnnotations }}
        {{ $key }}: {{ $value | quote }}
        {{- end }}
      {{- end }}
    spec:
      {{- if .Values.dnsConfig }}
      dnsConfig:
        {{- toYaml .Values.dnsConfig | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 5000
              protocol: TCP
          # livenessProbe:
          #   httpGet:
          #     path: /notifications
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /notifications
          #     port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: config
              mountPath: /dashmachine/dashmachine/user_data
      volumes:
        - name: config
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            claimName: {{ template "dashmachine.fullname" . }}
          {{- else }}
          emptyDir: {}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}


@@ -1,22 +1,19 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "lidarr.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $fullName := include "dashmachine.fullname" . -}}
{{- $ingressPaths := .Values.ingress.paths -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
helm.sh/chart: {{ include "lidarr.chart" . }}
app.kubernetes.io/name: {{ include "dashmachine.name" . }}
helm.sh/chart: {{ include "dashmachine.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.labels -}}
{{ toYaml . | nindent 4 }}
{{- end -}}
{{- with .Values.ingress.annotations }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
@@ -33,9 +30,11 @@ spec:
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
{{- range $ingressPaths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,24 @@
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "dashmachine.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "dashmachine.name" . }}
    helm.sh/chart: {{ include "dashmachine.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  {{- with .Values.persistence.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
  accessModes:
    {{- range .Values.persistence.accessModes }}
    - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
  storageClassName: {{ .Values.persistence.storageClass }}
{{- end -}}


@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "dashmachine.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "dashmachine.name" . }}
    helm.sh/chart: {{ include "dashmachine.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "dashmachine.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}


@@ -0,0 +1,18 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "dashmachine.fullname" . }}-test-connection"
  labels:
    app.kubernetes.io/name: {{ include "dashmachine.name" . }}
    helm.sh/chart: {{ include "dashmachine.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "dashmachine.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
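Because the pod above carries the `helm.sh/hook: test-success` annotation, Helm runs it on demand; assuming a release named `dashmachine`:

```shell
helm test dashmachine   # launches the wget pod; passes if the service answers on its port
```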


@@ -0,0 +1,65 @@
# Default values for dashmachine.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: rmountjoy/dashmachine
  tag: v0.5-4
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 5000

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: ["/"]
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

persistence:
  enabled: false
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: ""
  accessModes:
    - ReadWriteOnce
  size: 1Gi

podAnnotations: {}
deploymentAnnotations: {}


@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: 0.114.0
description: Home Assistant
name: home-assistant
version: 2.0.0
version: 2.1.0
keywords:
- home-assistant
- hass


@@ -222,7 +222,18 @@ Much of the home assistant configuration occurs inside the various files persist
## Git sync secret
In order to sync the home assistant from a git repo, you have to store a ssh key as a kubernetes git secret
In order to sync the home assistant from a git repo, you can optionally store an ssh key as a kubernetes git secret:
```shell
kubectl create secret generic git-creds --from-file=id_rsa=git/k8s_id_rsa --from-file=known_hosts=git/known_hosts --from-file=id_rsa.pub=git/k8s_id_rsa.pub
```
## git-crypt support
When using Git sync it is possible to specify a file called `git-crypt-key` in the secret referred to in `git.secret`. When this file is present, `git-crypt unlock` will automatically be executed after the repo has been synced.
**Note:** `git-crypt` is not installed by default in the other images! If you wish to push changes from the VS Code or Configurator containers, you will have to make sure that it is installed.
The value for this secret can be obtained by running the following command in an unlocked version of your Home Assistant settings repo. It will export the unlock key, base64 encode it and copy it to your clipboard.
```shell
git-crypt export-key ./tmp-key && cat ./tmp-key | base64 | pbcopy && rm ./tmp-key
```
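Putting the two pieces together, one way to assemble a `git-creds` secret that carries both the ssh material and the `git-crypt-key` file (file paths follow the earlier example):

```shell
git-crypt export-key ./tmp-key
kubectl create secret generic git-creds \
  --from-file=id_rsa=git/k8s_id_rsa \
  --from-file=id_rsa.pub=git/k8s_id_rsa.pub \
  --from-file=known_hosts=git/known_hosts \
  --from-file=git-crypt-key=./tmp-key
rm ./tmp-key
```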


@@ -48,7 +48,28 @@ spec:
- {{ . | quote }}
{{- end }}
{{- else }}
command: ['sh', '-c', '[ "$(ls {{ .Values.git.syncPath }})" ] || git clone {{ .Values.git.repo }} {{ .Values.git.syncPath }}']
command: ["/bin/sh", "-c"]
args:
- set -e;
if [ -d "{{ .Values.git.syncPath }}/.git" ];
then
git -C "{{ .Values.git.syncPath }}" pull || true;
else
if [ "$(ls -A {{ .Values.git.syncPath }})" ];
then
git clone --depth 2 "{{ .Values.git.repo }}" /tmp/repo;
cp -rf /tmp/repo/.git "{{ .Values.git.syncPath }}";
cd "{{ .Values.git.syncPath }}";
git checkout -f;
else
git clone --depth 2 "{{ .Values.git.repo }}" "{{ .Values.git.syncPath }}";
fi;
fi;
if [ -f "{{ .Values.git.keyPath }}/git-crypt-key" ];
then
cd {{ .Values.git.syncPath }};
git-crypt unlock "{{ .Values.git.keyPath }}/git-crypt-key";
fi;
{{- end }}
volumeMounts:
- mountPath: /config
@@ -396,6 +417,7 @@ spec:
secret:
defaultMode: 256
secretName: {{ .Values.git.secret }}
optional: true
{{ end }}
{{- if .Values.extraVolumes }}{{ toYaml .Values.extraVolumes | trim | nindent 6 }}{{ end }}
{{- with .Values.nodeSelector }}


@@ -118,12 +118,9 @@ usePodSecurityContext: true
git:
enabled: false
## we just use the hass-configurator container image
## you can use any image which has git and openssh installed
##
image:
repository: causticlab/hass-configurator-docker
tag: 0.3.5-x86_64
repository: k8sathome/git-crypt
tag: 2020.09.07
pullPolicy: IfNotPresent
## Specify the command that runs in the git-sync container to pull in configuration.
@@ -134,7 +131,7 @@ git:
name: ""
email: ""
# repo:
repo: ""
secret: git-creds
syncPath: /config
keyPath: /root/.ssh


@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v0.16.1045
description: API Support for your favorite torrent trackers
name: jackett
version: 3.0.1
version: 3.1.0
keywords:
- jackett
- torrent


@@ -31,55 +31,58 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Jackett chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `linuxserver/jackett` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/jackett/tags/).| `v0.12.1132-ls37`|
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the Jackett instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the Jackett instance should run as | `1001` |
| `pgid` | process groupID the Jackett instance should run as | `1001` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.periodSeconds` | Specify liveness `periodSeconds` parameter for the deployment | `10` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.periodSeconds` | Specify readiness `periodSeconds` parameter for the deployment | `10` |
| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
| `Service.type` | Kubernetes service type for the Jackett GUI | `ClusterIP` |
| `Service.port` | Kubernetes port where the Jackett GUI is exposed| `9117` |
| `Service.annotations` | Service annotations for the Jackett GUI | `{}` |
| `Service.labels` | Custom labels | `{}` |
| `Service.loadBalancerIP` | Loadbalance IP for the Jackett GUI | `{}` |
| `Service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}`
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.config.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.torrentblackhole.enabled` | Use persistent volume to store torrent files | `false` |
| `persistence.torrentblackhole.size` | Size of persistent volume claim | `1Gi` |
| `persistence.torrentblackhole.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.torrentblackhole.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.torrentblackhole.storageClass` | Type of persistent volume claim | `-` |
| `persistence.torrentblackhole.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.torrentblackhole.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
| Parameter | Description | Default |
| -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| `image.repository` | Image repository | `linuxserver/jackett` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/jackett/tags/). | `v0.12.1132-ls37` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the Jackett instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the Jackett instance should run as | `1001` |
| `pgid` | process groupID the Jackett instance should run as | `1001` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.periodSeconds` | Specify liveness `periodSeconds` parameter for the deployment | `10` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.periodSeconds` | Specify readiness `periodSeconds` parameter for the deployment | `10` |
| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
| `Service.type` | Kubernetes service type for the Jackett GUI | `ClusterIP` |
| `Service.port` | Kubernetes port where the Jackett GUI is exposed | `9117` |
| `Service.annotations` | Service annotations for the Jackett GUI | `{}` |
| `Service.labels` | Custom labels | `{}` |
| `Service.loadBalancerIP`                     | Load balancer IP for the Jackett GUI                                                                                                                                                  | `{}`                  |
| `Service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.torrentblackhole.enabled` | Use persistent volume to store torrent files | `false` |
| `persistence.torrentblackhole.size` | Size of persistent volume claim | `1Gi` |
| `persistence.torrentblackhole.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.torrentblackhole.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.torrentblackhole.storageClass` | Type of persistent volume claim | `-` |
| `persistence.torrentblackhole.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.torrentblackhole.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
| `hostNetwork` | Specify whether pods should use host networking | `false` |
| `dnsPolicy` | Set the DNS policy for pods, ex: ClusterFirst, ClusterFirstWithHostNet. See info [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) | `ClusterFirst` |
| `dnsConfig` | Specify DNS options for pods, see values.yaml for details, or see [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config) | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -96,6 +99,7 @@ helm install --name my-release -f values.yaml k8s-at-home/jackett
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
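A sketch of the manual cleanup; the PVC name is hypothetical, so check the real one first:

```shell
kubectl get pvc                                # locate the PVC kept by skipuninstall
kubectl delete pvc my-release-jackett-config   # hypothetical name from the output above
```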


@@ -34,6 +34,11 @@ spec:
{{- end }}
{{- end }}
spec:
hostNetwork: {{ .Values.hostNetwork }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- if .Values.dnsConfig }}
dnsConfig: {{ toYaml .Values.dnsConfig | nindent 8}}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"


@@ -121,6 +121,23 @@ resources: {}
# cpu: 100m
# memory: 128Mi
dnsPolicy: ClusterFirst
dnsConfig: {}
# dnsConfig may be used with any dnsPolicy, but is required when dnsPolicy: "None"
# To use, remove the braces above, and uncomment/modify the following lines.
# See https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
# for additional information
# nameservers:
# - 1.1.1.1
# searches:
# - ns1.mysearch.domain
# options:
# - name: ndots
# value: "1"
hostNetwork: false
nodeSelector: {}
tolerations: []
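For example, the new options can be combined when Jackett must sit on the host network but still resolve cluster services (a hedged sketch; the keys follow the README table above):

```shell
helm upgrade --install jackett k8s-at-home/jackett \
  --set hostNetwork=true \
  --set dnsPolicy=ClusterFirstWithHostNet
```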


@@ -1,17 +1,21 @@
apiVersion: v2
appVersion: 0.7.1.1784
description: Looks and smells like Sonarr but made for music.
name: lidarr
version: 3.0.1
description: Looks and smells like Sonarr but made for music
type: application
version: 4.0.0
appVersion: 0.7.1.1785-ls18
keywords:
- lidarr
- usenet
- bittorrent
home: https://github.com/k8s-at-home/charts/tree/master/charts/lidarr
icon: https://lidarr.audio/img/logo.png
icon: https://github.com/lidarr/Lidarr/blob/develop/Logo/512.png?raw=true
sources:
- https://hub.docker.com/r/linuxserver/lidarr/
- https://github.com/lidarr/Lidarr/
- https://github.com/Lidarr/Lidarr
- https://hub.docker.com/r/linuxserver/lidarr
maintainers:
- name: billimek
email: jeff@billimek.com
- name: DirtyCajunRice
email: nick@cajun.pro
dependencies:
- name: media-common
repository: https://k8s-at-home.com/charts/
version: ~1.0.0
alias: lidarr


@@ -1,4 +1,4 @@
approvers:
- billimek
- DirtyCajunRice
reviewers:
- billimek
- DirtyCajunRice


@@ -1,115 +1,79 @@
# lidarr music download client
# Lidarr | Looks and smells like Sonarr but made for music
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/lidarr/)
* Deploys [Lidarr](https://github.com/lidarr/Lidarr)
This is a helm chart for [lidarr](https://github.com/lidarr/Lidarr) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/lidarr/)
## TL;DR;
```shell
## TL;DR
```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/lidarr
```
## Installing the Chart
To install the chart with the release name `my-release`:
To install the chart with the release name `lidarr`:
```console
helm install --name my-release k8s-at-home/lidarr
helm install lidarr k8s-at-home/lidarr
```
## Upgrading
Chart versions before 4.0.0 did not use media-common. Upgrading will require you to nest your values.yaml file under
a top-level `lidarr:` key.
Chart versions 1.0.1 and earlier used separate PVCs for Downloads and Music. This presented an issue where Lidarr would be unable to hard-link files between the /downloads and /music directories when importing media. This is caused because each PVC is exposed to the pod as a separate filesystem. This resulted in Lidarr copying files rather than linking; using additional storage without the user's knowledge.
Chart versions 1.0.1 and earlier used separate PVCs for Downloads and Music. This presented an issue where Lidarr would
be unable to hard-link files between the /downloads and /music directories when importing media. This is caused because
each PVC is exposed to the pod as a separate filesystem. This resulted in Lidarr copying files rather than linking,
using additional storage without the user's knowledge.
This chart now uses a single PVC for Downloads and Music. This means all of your media (and downloads) must be in, or be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
This chart now uses a single PVC for Downloads and Music. This means all of your media (and downloads) must be in, or
be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, ie. media/music, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Lidarr's `Mass Editor` under the `Library` tab. Simply select all artists in your library, and use the editor to change the `Root Folder` and hit save.
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Lidarr's
`Mass Editor` under the `Library` tab. Simply select all artists in your library, and use the editor to change the
`Root Folder` and hit save.
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
To uninstall the `lidarr` deployment:
```console
helm delete my-release --purge
helm uninstall lidarr
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following tables lists the configurable parameters of the Sentry chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `linuxserver/lidarr` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/lidarr/tags/).| `0.7.1.1381-ls7`|
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the lidarr instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the lidarr instance should run as | `1001` |
| `pgid` | process groupID the lidarr instance should run as | `1001` |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `Service.type` | Kubernetes service type for the lidarr GUI | `ClusterIP` |
| `Service.port` | Kubernetes port where the lidarr GUI is exposed| `8686` |
| `Service.annotations` | Service annotations for the lidarr GUI | `{}` |
| `Service.labels` | Custom labels | `{}` |
| `Service.loadBalancerIP` | Loadbalance IP for the lidarr GUI | `{}` |
| `Service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}`
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.media.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.media.size` | Size of persistent volume claim | `10Gi` |
| `persistence.media.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.media.storageClass` | Type of persistent volume claim | `-` |
| `persistence.media.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.media.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
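For example, additional existing claims can be mounted via `persistence.extraExistingClaimMounts` with entries of the
following shape, matching the keys the deployment template consumes (the claim name here is hypothetical; the PVC must
already exist in the namespace):
```yaml
persistence:
  extraExistingClaimMounts:
    - name: external-mount
      mountPath: /srv/external-mount
      # hypothetical, manually managed PVC that must already exist
      existingClaim: external-mount-pvc
      readOnly: true
```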
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented-out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
--set timezone="America/New_York" \
helm install lidarr \
--set lidarr.env.TZ="America/New_York" \
k8s-at-home/lidarr
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,
```console
helm install --name my-release -f values.yaml stable/lidarr
helm install lidarr k8s-at-home/lidarr --values values.yaml
```
These values will be nested, as this chart is installed as a dependency; for example:
```yaml
lidarr:
image:
tag: ...
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...`, it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
---
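As a concrete sketch for a release named `lidarr`, the kept claims would typically be named `lidarr-config` and
`lidarr-media`; verify the actual names in your cluster before deleting:
```console
kubectl get pvc
kubectl delete pvc lidarr-config lidarr-media
```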
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/lidarr/values.yaml) file. It has several commented-out suggested values.

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "lidarr.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
helm.sh/chart: {{ include "lidarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,110 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "lidarr.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
helm.sh/chart: {{ include "lidarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8686
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
volumeMounts:
- mountPath: /config
name: config
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "lidarr.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
- name: media
{{- if .Values.persistence.media.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "lidarr.fullname" . }}-media{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.media.enabled (not .Values.persistence.media.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "lidarr.fullname" . }}-media
{{- if .Values.persistence.media.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
helm.sh/chart: {{ include "lidarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.media.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.media.size | quote }}
{{- if .Values.persistence.media.storageClass }}
{{- if (eq "-" .Values.persistence.media.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.media.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,52 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "lidarr.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
helm.sh/chart: {{ include "lidarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{end}}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "lidarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,132 +1,10 @@
# Default values for lidarr.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: linuxserver/lidarr
tag: 0.7.1.1784-ls18
pullPolicy: IfNotPresent
# upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
# Probes configuration
probes:
liveness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
nameOverride: ""
fullnameOverride: ""
timezone: UTC
puid: 1001
pgid: 1001
service:
type: ClusterIP
port: 8686
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
## Use loadBalancerIP to request a specific static IP,
## otherwise leave blank
##
loadBalancerIP:
# loadBalancerSourceRanges: []
## Set the externalTrafficPolicy in the Service to either Cluster or Local
# externalTrafficPolicy: Cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
persistence:
config:
enabled: true
## lidarr configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
media:
enabled: true
## lidarr media volume configuration
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 10Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
extraExistingClaimMounts: []
# - name: external-mount
# mountPath: /srv/external-mount
## A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
# readOnly: true
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
deploymentAnnotations: {}
lidarr:
image:
organization: linuxserver
repository: lidarr
pullPolicy: IfNotPresent
tag: 0.7.1.1785-ls18
service:
port: 8686

View File

@@ -0,0 +1,11 @@
apiVersion: v2
name: media-common
description: Common dependency chart for media ecosystem containers
type: application
version: 1.0.1
keywords:
- media-common
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common
maintainers:
- name: DirtyCajunRice
email: nick@cajun.pro

View File

@@ -0,0 +1,4 @@
approvers:
- DirtyCajunRice
reviewers:
- DirtyCajunRice

View File

@@ -0,0 +1,25 @@
# Shared base chart for k8s@home media charts
Many containers have no environmentally configurable settings. This chart provides a single maintainable
base, with umbrella charts supplying the container-specific differences. It does not have a default
repository or tag, and it is not designed to be deployed directly.
## Known Parent Charts
* [k8s-at-home/radarr](https://github.com/k8s-at-home/charts/tree/master/charts/radarr)
* [k8s-at-home/sonarr](https://github.com/k8s-at-home/charts/tree/master/charts/sonarr)
* [k8s-at-home/lidarr](https://github.com/k8s-at-home/charts/tree/master/charts/lidarr)
* [k8s-at-home/tautulli](https://github.com/k8s-at-home/charts/tree/master/charts/tautulli)
* [k8s-at-home/ombi](https://github.com/k8s-at-home/charts/tree/master/charts/ombi)
* [k8s-at-home/organizr](https://github.com/k8s-at-home/charts/tree/master/charts/organizr)
## Configuration
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml) file.
It has several commented-out suggested values.
Because this chart is used as a dependency, its values will normally be nested under the parent chart's name, for example:
```yaml
radarr:
<values>
```
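A fuller sketch using this chart's value keys (the image and port shown are only examples, mirroring the radarr
defaults):
```yaml
radarr:
  image:
    organization: linuxserver
    repository: radarr
    tag: latest
  service:
    port: 7878
```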

View File

@@ -0,0 +1,6 @@
image:
organization: linuxserver
repository: radarr
tag: latest
service:
port: 7878

View File

@@ -4,16 +4,16 @@
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "lidarr.fullname" . }})
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "media-common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ include "lidarr.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "lidarr.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
You can watch the status of it by running 'kubectl get svc -w {{ include "media-common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "media-common.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "lidarr.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "media-common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}

View File

@@ -2,7 +2,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "radarr.name" -}}
{{- define "media-common.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
@@ -11,7 +11,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "radarr.fullname" -}}
{{- define "media-common.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
@@ -27,6 +27,26 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "radarr.chart" -}}
{{- define "media-common.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "media-common.labels" -}}
helm.sh/chart: {{ include "media-common.chart" . }}
{{ include "media-common.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "media-common.selectorLabels" -}}
app.kubernetes.io/name: {{ include "media-common.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- if .Values.env }}
data:
{{- toYaml .Values.env | nindent 2 }}
{{- end }}

View File

@@ -0,0 +1,105 @@
{{- if eq .Values.persistence.type "deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "media-common.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "media-common.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ template "media-common.fullname" . }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
image: "{{ .Values.image.organization }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
envFrom:
- configMapRef:
name: {{ template "media-common.fullname" . }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
volumeMounts:
- mountPath: {{ .Values.configPath }}
name: config
{{- if .Values.persistence.config.subPath }}
subPath: {{ .Values.persistence.config.subPath }}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 12 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- name: media
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}-media{{- end }}
{{- end }}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,81 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "media-common.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- range $index, $ingress := .Values.ingress.extraIngresses }}
---
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}-{{ $ingress.nameSuffix | default $index }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- with $ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if $ingress.tls }}
tls:
{{- range $ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range $ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,44 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) -}}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
storageClassName: {{ if (eq "-" .Values.persistence.config.storageClass) }}""{{- else }}{{ .Values.persistence.config.storageClass | quote }}{{- end }}
{{- end }}
{{- end -}}
{{- if and .Values.persistence.media.enabled (not .Values.persistence.media.existingClaim) }}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "media-common.fullname" . }}-media
{{- if .Values.persistence.media.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.persistence.media.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.media.size | quote }}
{{- if .Values.persistence.media.storageClass }}
storageClassName: {{ if (eq "-" .Values.persistence.media.storageClass) }}""{{- else }}{{ .Values.persistence.media.storageClass | quote}}{{- end }}
{{- end }}
{{- end -}}

View File

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
{{- with .Values.service.additionalSpec }}
{{- toYaml . | nindent 2 }}
{{- end }}
selector:
{{- include "media-common.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,106 @@
{{- if eq .Values.persistence.type "statefulset" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "media-common.fullname" . }}
labels:
{{- include "media-common.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "media-common.selectorLabels" . | nindent 6 }}
serviceName: {{ include "media-common.fullname" . }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "media-common.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ template "media-common.fullname" . }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
image: "{{ .Values.image.organization }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
envFrom:
- configMapRef:
name: {{ template "media-common.fullname" . }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
volumeMounts:
- mountPath: {{ .Values.configPath }}
name: config
{{- if .Values.persistence.config.subPath }}
subPath: {{ .Values.persistence.config.subPath }}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- end }}
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 12 }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.persistence.media.enabled }}
- name: media
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "media-common.fullname" . }}-media{{- end }}
{{- end }}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,149 @@
# Default values for media-common.
image:
organization: ""
repository: ""
pullPolicy: IfNotPresent
tag: ""
# Probes configuration
probes:
liveness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
configPath: /config
env:
TZ: UTC
service:
type: ClusterIP
port: ""
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
additionalSpec: {}
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
hosts:
- host: chart-example.local
paths:
- /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
extraIngresses:
# - enabled: false
# nameSuffix: "api"
# annotations: {}
# # kubernetes.io/ingress.class: nginx
# # kubernetes.io/tls-acme: "true"
# labels: {}
# hosts:
# - host: chart-example.local
# paths:
# - /api
# tls: []
# # - secretName: chart-example-tls
# # hosts:
# # - chart-example.local
persistence:
# type: options are statefulset or deployment
type: statefulset
config:
enabled: true
## media-common configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
media:
enabled: false
## media-common media volume configuration
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 10Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
additionalVolumes: []
additionalVolumeMounts: []
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}

View File

@@ -19,5 +19,4 @@
.project
.idea/
*.tmproj
# OWNERS file for Kubernetes
OWNERS
.vscode/

View File

@@ -0,0 +1,17 @@
apiVersion: v1
appVersion: "1.6.12"
description: Eclipse Mosquitto - An open source MQTT broker
name: mosquitto
version: 0.3.3
keywords:
- message queue
- MQTT
- mosquitto
- eclipse-iot
home: https://mosquitto.org/
icon: https://mosquitto.org/images/mosquitto-text-side-28.png
sources:
- https://github.com/eclipse/mosquitto
maintainers:
- name: ishioni
email: helm@movishell.pl

View File

@@ -0,0 +1,46 @@
# Mosquitto: A small MQTT broker
This is a Helm chart for [mosquitto](https://mosquitto.org/).
## TL;DR
```shell
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/mosquitto
```
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install --name my-release k8s-at-home/mosquitto
```
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
helm delete my-release --purge
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/mosquitto/values.yaml) file. It has several commented-out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
--set persistence.enabled=true \
k8s-at-home/mosquitto
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install --name my-release -f values.yaml k8s-at-home/mosquitto
```
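For instance, a minimal values.yaml enabling data persistence might look like the following sketch (keys taken from
the chart's values.yaml; the size is only an example):
```yaml
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 5Gi
```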

View File

@@ -0,0 +1,38 @@
** Please be patient while the chart is being deployed **
Mosquitto can be accessed within the cluster on port 1883 at {{ template "mosquitto.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
To access it from outside the cluster, perform the following steps:
{{- if contains "NodePort" .Values.service.type }}
Obtain the NodePort IP and ports:
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[1].nodePort}" services {{ template "mosquitto.fullname" . }})
To access the Mosquitto MQTT port:
echo "URL : mqtt://$NODE_IP:$NODE_PORT/"
{{- else if contains "LoadBalancer" .Values.service.type }}
Obtain the LoadBalancer IP:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "mosquitto.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "mosquitto.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
To access the Mosquitto MQTT port:
echo "URL : mqtt://$SERVICE_IP:1883/"
{{- else if contains "ClusterIP" .Values.service.type }}
To access the Mosquitto MQTT port:
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "mosquitto.fullname" . }} 1883:1883
echo "URL : mqtt://127.0.0.1:1883/"
{{- end }}

View File

@@ -2,7 +2,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "sonarr.name" -}}
{{- define "mosquitto.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
@@ -11,7 +11,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "sonarr.fullname" -}}
{{- define "mosquitto.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
@@ -27,6 +27,30 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "sonarr.chart" -}}
{{- define "mosquitto.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "mosquitto.labels" -}}
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
helm.sh/chart: {{ include "mosquitto.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "mosquitto.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "mosquitto.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,30 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "mosquitto.fullname" . }}
labels:
{{ include "mosquitto.labels" . | indent 4 }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
- port: 1883
targetPort: default
protocol: TCP
name: default
- port: 9001
targetPort: websocket
protocol: TCP
name: websocket
selector:
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -0,0 +1,8 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "mosquitto.serviceAccountName" . }}
labels:
{{ include "mosquitto.labels" . | indent 4 }}
{{- end -}}

View File

@@ -0,0 +1,95 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "mosquitto.fullname" . }}
labels:
{{ include "mosquitto.labels" . | indent 4 }}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
serviceName: {{ include "mosquitto.name" . }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ template "mosquitto.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ tpl .Values.image.tag . }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: default
containerPort: 1883
protocol: TCP
- name: websocket
containerPort: 9001
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: configmap
mountPath: /mosquitto/config
- name: data
mountPath: /mosquitto/data
volumes:
- name: configmap
configMap:
name: {{ template "mosquitto.fullname" . }}
{{- if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
- name: data
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumeClaimTemplates:
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
- metadata:
name: data
labels:
app.kubernetes.io/name: {{ include "mosquitto.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.persistence.annotations }}
annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
spec:
accessModes: [ {{ .Values.persistence.accessMode | quote }} ]
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.persistence.storageClass | quote }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,76 @@
# Default values for mosquitto.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: eclipse-mosquitto
tag: "{{ .Chart.AppVersion }}"
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
annotations: {}
type: ClusterIP
# externalTrafficPolicy:
# loadBalancerIP:
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
persistence:
enabled: false
annotations: {}
## mosquitto data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: mosquitto-data
accessMode: ReadWriteOnce
size: 5Gi
# customConfig:

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: v2.26.0
description: Usenet meta search
name: nzbhydra2
version: 3.0.1
version: 3.0.2
keywords:
- nzbhydra2
- usenet

View File

@@ -47,12 +47,12 @@ The following tables lists the configurable parameters of the Sentry chart and t
| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
| `Service.type` | Kubernetes service type for the nzbhydra2 GUI | `ClusterIP` |
| `Service.port` | Kubernetes port where the nzbhydra2 GUI is exposed| `5076` |
| `Service.annotations` | Service annotations for the nzbhydra2 GUI | `{}` |
| `Service.labels` | Custom labels | `{}` |
| `Service.loadBalancerIP` | Loadbalance IP for the nzbhydra2 GUI | `{}` |
| `Service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None
| `service.type` | Kubernetes service type for the nzbhydra2 GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the nzbhydra2 GUI is exposed| `5076` |
| `service.annotations` | Service annotations for the nzbhydra2 GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | Loadbalance IP for the nzbhydra2 GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}`

View File

@@ -1,16 +1,21 @@
apiVersion: v2
appVersion: v4.0.464
description: Want a Movie or TV Show on Plex or Emby? Use Ombi!
name: ombi
version: 3.0.1
description: Want a Movie or TV Show on Plex or Emby? Use Ombi!
type: application
version: 4.0.0
appVersion: 4.0.471
keywords:
- ombi
- plex
home: https://github.com/k8s-at-home/charts/tree/master/charts/ombi
icon: https://ombi.io/img/logo-orange-small.png
icon: https://github.com/tidusjar/Ombi/blob/feature/v4/src/Ombi/wwwroot/images/ms-icon-310x310.png?raw=true
sources:
- https://hub.docker.com/r/linuxserver/ombi/
- https://ombi.io/
- https://github.com/tidusjar/Ombi
- https://hub.docker.com/r/linuxserver/ombi
maintainers:
- name: billimek
email: jeff@billimek.com
- name: DirtyCajunRice
email: nick@cajun.pro
dependencies:
- name: media-common
repository: https://k8s-at-home.com/charts/
version: ~1.0.0
alias: ombi

View File

@@ -1,4 +1,4 @@
approvers:
- billimek
- DirtyCajunRice
reviewers:
- billimek
- DirtyCajunRice

View File

@@ -1,97 +1,79 @@
# Ombi
# Ombi | Want a Movie or TV Show on Plex or Emby? Use Ombi!
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/ombi/)
* Deploys [Ombi](https://github.com/tidusjar/Ombi)
This is a helm chart for [Ombi](https://ombi.io/) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/ombi/)
## TL;DR;
```shell
## TL;DR
```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/ombi
```
## Installing the Chart
To install the chart with the release name `my-release`:
To install the chart with the release name `ombi`:
```console
helm install --name my-release k8s-at-home/ombi
helm install ombi k8s-at-home/ombi
```
## Upgrading
Chart versions before 4.0.0 did not use media-common. Upgrading requires you to nest the values in your values.yaml
file under a top-level `ombi:` key, as sketched below.
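A sketch of the same image tag override before and after the upgrade (the tag is illustrative):
```yaml
# Chart < 4.0.0
image:
  tag: v4.0.464-ls10

# Chart >= 4.0.0
ombi:
  image:
    tag: v4.0.464-ls10
```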
Chart versions 1.0.1 and earlier used separate PVCs for Downloads and Music. This presented an issue where Ombi would
be unable to hard-link files between the /downloads and /music directories when importing media, because each PVC is
exposed to the pod as a separate filesystem. As a result, Ombi copied files rather than linking them, using additional
storage without the user's knowledge.
This chart now uses a single PVC for Downloads and Music. This means all of your media (and downloads) must be in, or
be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, e.g. media/music, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Ombi's
`Mass Editor` under the `Library` tab. Simply select all artists in your library, and use the editor to change the
`Root Folder` and hit save.
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
To uninstall the `ombi` deployment:
```console
helm delete my-release --purge
helm uninstall ombi
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Ombi chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `linuxserver/ombi` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/ombi/tags/). | `3.0.4914-ls72` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the Ombi instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the Ombi instance should run as | `1001` |
| `pgid` | process groupID the Ombi instance should run as | `1001` |
| `baseUrl` | adjust baseUrl if behind a reverse proxy | null |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the Ombi GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the Ombi GUI is exposed | `3579` |
| `service.annotations` | Service annotations for the Ombi GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | Load balancer IP for the Ombi GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to the load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.config.subPath` | Mount a sub directory of the persistent volume if set | `""` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented-out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
--set timezone="America/New_York" \
helm install ombi \
--set ombi.env.TZ="America/New_York" \
k8s-at-home/ombi
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart.
For example,
```console
helm install --name my-release -f values.yaml k8s-at-home/ombi
helm install ombi k8s-at-home/ombi --values values.yaml
```
These values will be nested, as this chart is installed as a dependency; for example:
```yaml
ombi:
image:
tag: ...
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...`, it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled; you need to manually delete the PVC or use `existingClaim`.
---
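Alternatively, a kept claim can be reused instead of deleted; a sketch, where `my-ombi-config` is a placeholder for
your actual PVC name:
```console
helm install ombi k8s-at-home/ombi \
  --set ombi.persistence.config.existingClaim=my-ombi-config
```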
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/ombi/values.yaml) file. It has several commented-out suggested values.

View File

@@ -1,19 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "ombi.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ include "ombi.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "ombi.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "ombi.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:3579 to use your application"
kubectl port-forward $POD_NAME 3579:80
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "ombi.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "ombi.name" . }}
helm.sh/chart: {{ include "ombi.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,95 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "ombi.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "ombi.name" . }}
helm.sh/chart: {{ include "ombi.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "ombi.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "ombi.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3579
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
{{- if .Values.baseUrl }}
- name: BASE_URL
value: "{{ .Values.baseUrl }}"
{{ end }}
volumeMounts:
- mountPath: /config
name: config
{{- if .Values.persistence.config.subPath }}
subPath: "{{ .Values.persistence.config.subPath }}"
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "ombi.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

View File

@@ -1,53 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "ombi.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "ombi.name" . }}
helm.sh/chart: {{ include "ombi.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{ end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "ombi.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,117 +1,10 @@
# Default values for Ombi.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Default values for ombi.
image:
repository: linuxserver/ombi
tag: v4.0.464-ls10
pullPolicy: IfNotPresent
# upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
# Probes configuration
probes:
liveness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
nameOverride: ""
fullnameOverride: ""
timezone: UTC
puid: 1001
pgid: 1001
# Subfolder can optionally be defined as an env variable for reverse proxies. Keep
# in mind that once this value is defined, the gui setting for base url no longer
# works. To use the gui setting, remove this env variable.
#
# baseUrl: /ombi
service:
type: ClusterIP
port: 3579
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
## Use loadBalancerIP to request a specific static IP,
## otherwise leave blank
##
loadBalancerIP:
# loadBalancerSourceRanges: []
## Set the externalTrafficPolicy in the Service to either Cluster or Local
# externalTrafficPolicy: Cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
persistence:
config:
enabled: true
## Ombi configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 1Gi
## If subPath is set mount a sub folder of a volume instead of the root of the volume.
## This is especially handy for volume plugins that don't natively support sub mounting (like glusterfs).
##
subPath: ""
## Do not delete the pvc upon helm uninstall
skipuninstall: false
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
deploymentAnnotations: {}
ombi:
image:
organization: linuxserver
repository: ombi
pullPolicy: IfNotPresent
tag: v4.0.471-ls10
service:
port: 3579

View File

@@ -0,0 +1,21 @@
apiVersion: v2
name: organizr
description: HTPC/Homelab Services Organizer - Written in PHP
type: application
version: 1.0.0
appVersion: latest
keywords:
- organizr
home: https://github.com/k8s-at-home/charts/tree/master/charts/organizr
icon: https://github.com/causefx/Organizr/blob/v2-master/plugins/images/organizr/logo.png?raw=true
sources:
- https://github.com/causefx/Organizr
- https://hub.docker.com/r/organizr/organizr
maintainers:
- name: DirtyCajunRice
email: nick@cajun.pro
dependencies:
- name: media-common
repository: https://k8s-at-home.com/charts/
version: ~1.0.0
alias: organizr

4
charts/organizr/OWNERS Normal file
View File

@@ -0,0 +1,4 @@
approvers:
- DirtyCajunRice
reviewers:
- DirtyCajunRice

58
charts/organizr/README.md Normal file
View File

@@ -0,0 +1,58 @@
# Organizr | HTPC/Homelab Services Organizer - Written in PHP
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [official image](https://hub.docker.com/r/organizr/organizr/)
* Deploys [Organizr](https://github.com/causefx/Organizr)
## TL;DR
```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/organizr
```
## Installing the Chart
To install the chart with the release name `organizr`:
```console
helm install organizr k8s-at-home/organizr
```
## Uninstalling the Chart
To uninstall the `organizr` deployment:
```console
helm uninstall organizr
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented-out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install organizr \
  --set organizr.env.TZ="America/New_York" \
k8s-at-home/organizr
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart.
For example,
```console
helm install organizr k8s-at-home/organizr --values values.yaml
```
Because this chart is installed as a dependency, these values must be nested under the `organizr:` key, for example
```yaml
organizr:
image:
tag: ...
```
---
**NOTE**
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled. You need to manually delete the PVC or use `existingClaim`.
---
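If you would rather re-attach the leftover PVC than delete it, point the chart at it by name. A minimal sketch, assuming media-common exposes the same `persistence.config.existingClaim` key as the charts it replaces (the claim name is hypothetical):
```yaml
organizr:
  persistence:
    config:
      enabled: true
      # Hypothetical name of the PVC left behind by skipuninstall
      existingClaim: organizr-config
```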

View File

@@ -0,0 +1,10 @@
# Default values for organizr.
organizr:
image:
organization: organizr
repository: organizr
pullPolicy: IfNotPresent
tag: latest
service:
port: 80

View File

@@ -2,7 +2,7 @@ apiVersion: v2
appVersion: 1.20.1.3252
description: Plex Media Server
name: plex
version: 2.0.1
version: 2.0.2
keywords:
- plex
home: https://plex.tv/

View File

@@ -188,7 +188,7 @@ spec:
name: {{ .Values.certificate.pkcsMangler.pfxPassword.secretName }}
key: {{ .Values.certificate.pkcsMangler.pfxPassword.passwordKey }}
- name: "PKCSMANGLER_CUSTOMCERTDOMAIN"
value: "customCertificateDomain={{.Values.certificate.pkcsMangler.plexPreferences.customCertificateDomain}}"
value: "customCertificateDomain={{.Values.certificate.pkcsMangler.setPlexPreferences.customCertificateDomain}}"
{{- end }}
{{- end }}
readinessProbe:
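For context, the corrected line above reads from `.Values.certificate.pkcsMangler.setPlexPreferences.customCertificateDomain`. A values sketch reconstructed from the `.Values` paths in this template, with a hypothetical Secret name and key:
```yaml
certificate:
  pkcsMangler:
    pfxPassword:
      # Hypothetical Secret holding the PFX password
      secretName: plex-pfx-password
      passwordKey: password
    setPlexPreferences:
      customCertificateDomain: plex.example.com
```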

View File

@@ -1,17 +1,21 @@
apiVersion: v2
appVersion: 3.0.0.3543
description: Radarr is a movie downloading client
name: radarr
version: 5.0.1
description: A fork of Sonarr to work with movies à la Couchpotato
type: application
version: 6.0.0
appVersion: 3.0.0.3591
keywords:
- radarr
- usenet
- bittorrent
home: https://github.com/k8s-at-home/charts/tree/master/charts/radarr
icon: https://avatars3.githubusercontent.com/u/25025331?s=400&v=4
icon: https://github.com/Radarr/Radarr/blob/aphrodite/Logo/512.png?raw=true
sources:
- https://hub.docker.com/r/linuxserver/radarr/
- https://github.com/Radarr/Radarr/
- https://github.com/Radarr/Radarr
- https://hub.docker.com/r/linuxserver/radarr
maintainers:
- name: billimek
email: jeff@billimek.com
- name: DirtyCajunRice
email: nick@cajun.pro
dependencies:
- name: media-common
repository: https://k8s-at-home.com/charts/
version: ~1.0.0
alias: radarr

View File

@@ -1,4 +1,4 @@
approvers:
- billimek
- DirtyCajunRice
reviewers:
- billimek
- DirtyCajunRice

View File

@@ -1,131 +1,79 @@
# radarr movie download client
# Radarr | A fork of Sonarr to work with movies à la Couchpotato
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/radarr/)
* Deploys [Radarr](https://github.com/Radarr/Radarr)
This is a helm chart for [radarr](https://github.com/Radarr/Radarr/) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/radarr/)
## TL;DR;
```shell
## TL;DR
```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/radarr
```
## Installing the Chart
To install the chart with the release name `my-release`:
To install the chart with the release name `radarr`:
```console
helm install --name my-release k8s-at-home/radarr
helm install radarr k8s-at-home/radarr
```
## Upgrading
Chart versions before 6.0.0 did not use media-common. Upgrading will require you to nest your values.yaml file under
a top-level `radarr:` key; a migration sketch follows the steps below.
Chart versions 3.2.0 and earlier used separate PVCs for Downloads and Movies. This presented an issue where Radarr would be unable to hard-link files between the /downloads and /movies directories when importing media. This is caused because each PVC is exposed to the pod as a separate filesystem. This resulted in Radarr copying files rather than linking; using additional storage without the user's knowledge.
Chart versions 3.2.0 and earlier used separate PVCs for Downloads and Movies. This presented an issue where Radarr would
be unable to hard-link files between the /downloads and /movies directories when importing media. This is caused because
each PVC is exposed to the pod as a separate filesystem. This resulted in Radarr copying files rather than linking,
using additional storage without the user's knowledge.
This chart now uses a single PVC for Downloads and Movies. This means all of your media (and downloads) must be in, or be subdirectories of, a single directory. If upgrading from v1 of the chart, do the following:
This chart now uses a single PVC for Downloads and Movies. This means all of your media (and downloads) must be in, or
be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, e.g. media/movies, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Radarr's `Movie Editor` under the `Movies` tab. Simply select all movies in your library, and use the editor to change the `Root Folder` and hit save.
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Radarr's
`Movie Editor` under the `Movies` tab. Simply select all movies in your library, and use the editor to change the
`Root Folder` and hit save.
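To translate an existing values file, nest the old keys under the alias. A minimal sketch, assuming media-common accepts `image.tag` and `env` the way the `--set radarr.env.TZ` example in the Configuration section suggests:
```yaml
# Before 6.0.0 (flat values.yaml):
# image:
#   tag: 3.0.0.3543-ls21
# timezone: UTC

# 6.0.0 and later (nested under the radarr: alias):
radarr:
  image:
    tag: 3.0.0.3624-ls21
  env:
    TZ: UTC
```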
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
To uninstall the `radarr` deployment:
```console
helm delete my-release --purge
helm uninstall radarr
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Radarr chart and their default values.
| Parameter | Description | Default |
| ------------------------------------------- | -------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| `image.repository` | Image repository | `linuxserver/radarr` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/radarr/tags/). | `v0.2.0.1480-ls58` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the instance should run as | `1001` |
| `pgid` | process groupID the instance should run as | `1001` |
| `exportarr.enabled` | Enable Prometheus monitoring with [Exportarr](https://github.com/onedr0p/exportarr) | `false` |
| `exportarr.image.repository` | Exportarr image repository | `onedr0p/exportarr` |
| `exportarr.image.tag` | Exportarr image tag | `v0.3.0` |
| `exportarr.image.pullPolicy` | Exportarr image pullPolicy | `IfNotPresent` |
| `exportarr.port` | Prometheus scrape port | `9708` |
| `exportarr.url` | Radarr's URL | `http://radarr.default.svc.cluster.local:7878` |
| `exportarr.apikey` | Radarr's API Key | |
| `exportarr.serviceMonitor.enabled` | Enable Prometheus Operator ServiceMonitor monitoring | `false` |
| `exportarr.serviceMonitor.namespace` | Define namespace where to deploy the ServiceMonitor resource | (namespace where you are deploying) |
| `exportarr.serviceMonitor.path` | Prometheus scrape path | `/metrics` |
| `exportarr.serviceMonitor.interval` | Prometheus scrape interval | `4m` |
| `exportarr.serviceMonitor.scrapeTimeout` | Prometheus scrape timeout | `90s` |
| `exportarr.serviceMonitor.additionalLabels` | Add custom labels to ServiceMonitor | `{}` |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the GUI is exposed | `7878` |
| `service.annotations` | Service annotations for the GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | Loadbalancer IP for the GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.subPath` | Mount a sub directory if set | `nil ` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.media.enabled` | Use persistent volume to store media | `true` |
| `persistence.media.size` | Size of persistent volume claim | `10Gi` |
| `persistence.media.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.media.storageClass` | Type of persistent volume claim | `-` |
| `persistence.media.subPath` | Mount a sub directory if set | `nil ` |
| `persistence.media.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.media.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented-out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
    --set timezone="America/New_York" \
helm install radarr \
  --set radarr.env.TZ="America/New_York" \
k8s-at-home/radarr
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,
```console
helm install --name my-release -f values.yaml stable/radarr
helm install radarr k8s-at-home/radarr --values values.yaml
```
Because this chart is installed as a dependency, these values must be nested under the `radarr:` key, for example
```yaml
radarr:
image:
tag: ...
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled. You need to manually delete the PVC or use `existingClaim`.
---
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/radarr/values.yaml) file. It has several commented-out suggested values.

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "radarr.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
helm.sh/chart: {{ include "radarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,149 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "radarr.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
helm.sh/chart: {{ include "radarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 7878
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
volumeMounts:
- mountPath: /config
name: config
{{- if .Values.persistence.config.subPath }}
subPath: {{ .Values.persistence.config.subPath }}
{{- end }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.exportarr.enabled }}
- name: radarr-exporter
image: "{{ .Values.exportarr.image.repository }}:{{ .Values.exportarr.image.tag }}"
imagePullPolicy: {{ .Values.exportarr.image.pullPolicy }}
command: ["exportarr"]
args: ["radarr"]
env:
- name: PORT
value: "{{ .Values.exportarr.port }}"
- name: URL
value: "{{ .Values.exportarr.url }}"
- name: APIKEY
value: "{{ .Values.exportarr.apikey }}"
ports:
- name: monitoring
containerPort: {{ .Values.exportarr.port }}
livenessProbe:
httpGet:
path: /healthz
port: monitoring
failureThreshold: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /healthz
port: monitoring
failureThreshold: 5
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 64Mi
limits:
cpu: 500m
memory: 256Mi
{{- end }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "radarr.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
- name: media
{{- if .Values.persistence.media.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "radarr.fullname" . }}-media{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "radarr.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
helm.sh/chart: {{ include "radarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.labels -}}
{{ toYaml . | nindent 4 }}
{{- end -}}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.media.enabled (not .Values.persistence.media.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "radarr.fullname" . }}-media
{{- if .Values.persistence.media.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
helm.sh/chart: {{ include "radarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.media.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.media.size | quote }}
{{- if .Values.persistence.media.storageClass }}
{{- if (eq "-" .Values.persistence.media.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.media.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,20 +0,0 @@
{{- if .Values.exportarr.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "radarr.fullname" . }}-exporter
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
helm.sh/chart: {{ include "radarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
clusterIP: None
ports:
- name: monitoring
port: {{ .Values.exportarr.port }}
targetPort: monitoring
selector:
app.kubernetes.io/name: {{ include "radarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -1,52 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "radarr.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
helm.sh/chart: {{ include "radarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{ end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "radarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,24 +0,0 @@
{{- if .Values.exportarr.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "radarr.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "radarr.chart" . }}
{{- with .Values.exportarr.serviceMonitor.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ include "radarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
endpoints:
- port: monitoring
interval: {{ .Values.exportarr.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.exportarr.serviceMonitor.scrapeTimeout }}
path: {{ .Values.exportarr.serviceMonitor.path }}
{{- end }}

View File

@@ -1,151 +1,10 @@
# Default values for radarr.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: linuxserver/radarr
tag: 3.0.0.3543-ls21
pullPolicy: IfNotPresent
# upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
# Probes configuration
probes:
liveness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
readiness:
initialDelaySeconds: 60
failureThreshold: 5
timeoutSeconds: 10
# Prometheus Metrics
exportarr:
enabled: false
radarr:
image:
repository: onedr0p/exportarr
tag: v0.3.0
organization: linuxserver
repository: radarr
pullPolicy: IfNotPresent
url: "http://radarr.default.svc.cluster.local:7878"
apikey:
port: 9708
serviceMonitor:
enabled: false
namespace: default
path: /metrics
interval: 4m
scrapeTimeout: 90s
additionalLabels: {}
nameOverride: ""
fullnameOverride: ""
timezone: UTC
puid: 1001
pgid: 1001
service:
type: ClusterIP
port: 7878
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
labels: {}
## Use loadBalancerIP to request a specific static IP,
## otherwise leave blank
##
loadBalancerIP:
# loadBalancerSourceRanges: []
## Set the externalTrafficPolicy in the Service to either Cluster or Local
# externalTrafficPolicy: Cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
persistence:
config:
enabled: true
## radarr configuration data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 1Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
media:
enabled: true
## radarr media volume configuration
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
# subPath: some-subpath
accessMode: ReadWriteOnce
size: 10Gi
## Do not delete the pvc upon helm uninstall
skipuninstall: false
extraExistingClaimMounts: []
# - name: external-mount
# mountPath: /srv/external-mount
## A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
# readOnly: true
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
deploymentAnnotations: {}
tag: 3.0.0.3624-ls21
service:
port: 7878

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for Kubernetes
OWNERS

View File

@@ -1,17 +1,21 @@
apiVersion: v2
appVersion: 3.0.3.911
description: Sonarr is a television show downloading client
name: sonarr
version: 5.0.1
description: Smart PVR for newsgroup and bittorrent users
type: application
version: 6.0.0
appVersion: 3.0.3.913
keywords:
- sonarr
- usenet
- bittorrent
home: https://github.com/k8s-at-home/charts/tree/master/charts/sonarr
icon: https://avatars1.githubusercontent.com/u/1082903?s=400&v=4
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common/sonarr
icon: https://github.com/Sonarr/Sonarr/blob/phantom-develop/Logo/512.png?raw=true
sources:
- https://hub.docker.com/r/linuxserver/sonarr/
- https://sonarr.tv/
- https://github.com/Sonarr/Sonarr
- https://hub.docker.com/r/linuxserver/sonarr
maintainers:
- name: billimek
email: jeff@billimek.com
- name: DirtyCajunRice
email: nick@cajun.pro
dependencies:
- name: media-common
repository: https://k8s-at-home.com/charts/
version: ~1.0.0
alias: sonarr

View File

@@ -1,4 +1,4 @@
approvers:
- billimek
- DirtyCajunRice
reviewers:
- billimek
- DirtyCajunRice

View File

@@ -1,132 +1,80 @@
# sonarr television show download client
# Sonarr | Smart PVR for newsgroup and bittorrent users
Umbrella chart that
* Uses [media-common](https://github.com/k8s-at-home/charts/tree/master/charts/media-common) as a base
* Adds docker image information leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/sonarr/)
* Deploys [Sonarr](https://github.com/sonarr/Sonarr)
This is a helm chart for [sonarr](https://github.com/sonarr/sonarr/) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/sonarr/)
## TL;DR;
```shell
## TL;DR
```console
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/sonarr
```
## Installing the Chart
To install the chart with the release name `my-release`:
To install the chart with the release name `sonarr`:
```console
helm install --name my-release k8s-at-home/sonarr
helm install sonarr k8s-at-home/sonarr
```
## Upgrading
Chart versions before 6.0.0 did not use media-common. Upgrading will require you to nest your values.yaml file under
a top-level `sonarr:` key; a migration sketch follows the steps below.
Chart versions 3.2.0 and earlier used separate PVCs for Downloads and TV. This presented an issue where Sonarr would be unable to hard-link files between the /downloads and /tv directories when importing media. This is caused because each PVC is exposed to the pod as a separate filesystem. This resulted in Sonarr copying files rather than linking; using additional storage without the user's knowledge.
Chart versions 3.2.0 and earlier used separate PVCs for Downloads and TV. This presented an issue where Sonarr would
be unable to hard-link files between the /downloads and /tv directories when importing media. This is caused because
each PVC is exposed to the pod as a separate filesystem. This resulted in Sonarr copying files rather than linking,
using additional storage without the user's knowledge.
This chart now uses a single PVC for Downloads and TV. This means all of your media (and downloads) must be in, or be subdirectories of, a single directory. If upgrading from v1 of the chart, do the following:
This chart now uses a single PVC for Downloads and TV. This means all of your media (and downloads) must be in, or
be subdirectories of, a single directory. If upgrading from an earlier version of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, e.g. media/tv, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Sonarr's `Series Editor` under the `Series` tab. Simply select all series in your library, and use the editor to change the `Root Folder` and hit save.
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Sonarr's
`Series Editor` under the `Series` tab. Simply select all series in your library, and use the editor to change the
`Root Folder` and hit save.
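The same nesting applies here. A minimal sketch, assuming media-common accepts `image.tag` and `env` as in the `--set sonarr.env.TZ` example in the Configuration section (the new tag is illustrative):
```yaml
# Before 6.0.0 (flat values.yaml):
# image:
#   tag: 3.0.3.911-ls39
# timezone: UTC

# 6.0.0 and later (nested under the sonarr: alias):
sonarr:
  image:
    # Illustrative tag; pick a current one from the linuxserver/sonarr repo
    tag: 3.0.3.913-ls40
  env:
    TZ: UTC
```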
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
To uninstall the `sonarr` deployment:
```console
helm delete my-release --purge
helm uninstall sonarr
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Sonarr chart and their default values.
| Parameter | Description | Default |
| ------------------------------------------- | -------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| `image.repository` | Image repository | `linuxserver/sonarr` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/sonarr/tags/). | `2.0.0.5344-ls60` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the instance should run as | `1001` |
| `pgid` | process groupID the instance should run as | `1001` |
| `exportarr.enabled` | Enable Prometheus monitoring with [Exportarr](https://github.com/onedr0p/exportarr) | `false` |
| `exportarr.image.repository` | Exportarr image repository | `onedr0p/exportarr` |
| `exportarr.image.tag` | Exportarr image tag | `v0.3.0` |
| `exportarr.image.pullPolicy` | Exportarr image pullPolicy | `IfNotPresent` |
| `exportarr.port` | Prometheus scrape port | `9707` |
| `exportarr.url` | Sonarr's URL | `http://sonarr.default.svc.cluster.local:8989` |
| `exportarr.apikey` | Sonarr's API Key | |
| `exportarr.enableEpisodeQualityMetrics` | Enable episode quality metrics gathering | `false` |
| `exportarr.serviceMonitor.enabled` | Enable Prometheus Operator ServiceMonitor monitoring | `false` |
| `exportarr.serviceMonitor.namespace` | Define namespace where to deploy the ServiceMonitor resource | (namespace where you are deploying) |
| `exportarr.serviceMonitor.path` | Prometheus scrape path | `/metrics` |
| `exportarr.serviceMonitor.interval` | Prometheus scrape interval | `4m` |
| `exportarr.serviceMonitor.scrapeTimeout` | Prometheus scrape timeout | `90s` |
| `exportarr.serviceMonitor.additionalLabels` | Add custom labels to ServiceMonitor | {} |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the GUI is exposed | `8989` |
| `service.annotations` | Service annotations for the GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | Loadbalancer IP for the GUI | `{}` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.subPath` | Mount a sub directory if set | `nil ` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.media.enabled` | Use persistent volume for media | `true` |
| `persistence.media.size` | Size of persistent volume claim | `10Gi` |
| `persistence.media.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.media.storageClass` | Type of persistent volume claim | `-` |
| `persistence.media.subPath` | Mount a sub directory if set | `nil ` |
| `persistence.media.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.media.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
Read through the media-common [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/media-common/values.yaml)
file. It has several commented-out suggested values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
    --set timezone="America/New_York" \
helm install sonarr \
  --set sonarr.env.TZ="America/New_York" \
k8s-at-home/sonarr
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
chart. For example,
```console
helm install --name my-release -f values.yaml stable/sonarr
helm install sonarr k8s-at-home/sonarr --values values.yaml
```
Because this chart is installed as a dependency, these values must be nested under the `sonarr:` key, for example
```yaml
sonarr:
image:
tag: ...
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
If you get
```console
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
```
it may be because you uninstalled the chart with `skipuninstall` enabled. You need to manually delete the PVC or use
`existingClaim`.
---
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/sonarr/values.yaml) file. It has several commented-out suggested values.

View File

@@ -1,19 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "sonarr.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ include "sonarr.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "sonarr.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "sonarr.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:8989
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "sonarr.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
helm.sh/chart: {{ include "sonarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,151 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "sonarr.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
helm.sh/chart: {{ include "sonarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8989
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
readinessProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
volumeMounts:
- mountPath: /config
name: config
{{- if .Values.persistence.config.subPath }}
subPath: "{{ .Values.persistence.config.subPath }}"
{{- end }}
- mountPath: /media
name: media
{{- if .Values.persistence.media.subPath }}
subPath: {{ .Values.persistence.media.subPath }}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.exportarr.enabled }}
- name: sonarr-exporter
image: "{{ .Values.exportarr.image.repository }}:{{ .Values.exportarr.image.tag }}"
imagePullPolicy: {{ .Values.exportarr.image.pullPolicy }}
command: ["exportarr"]
args: ["sonarr"]
env:
- name: PORT
value: "{{ .Values.exportarr.port }}"
- name: URL
value: "{{ .Values.exportarr.url }}"
- name: APIKEY
value: "{{ .Values.exportarr.apikey }}"
- name: ENABLE_EPISODE_QUALITY_METRICS
value: "{{ .Values.exportarr.enableEpisodeQualityMetrics }}"
ports:
- name: monitoring
containerPort: {{ .Values.exportarr.port }}
livenessProbe:
httpGet:
path: /healthz
port: monitoring
failureThreshold: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /healthz
port: monitoring
failureThreshold: 5
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 64Mi
limits:
cpu: 500m
memory: 256Mi
{{- end }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "sonarr.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
- name: media
{{- if .Values.persistence.media.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "sonarr.fullname" . }}-media{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
{{- range .Values.persistence.extraExistingClaimMounts }}
- name: {{ .name }}
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "sonarr.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
helm.sh/chart: {{ include "sonarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.labels -}}
{{ toYaml . | nindent 4 }}
{{- end -}}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.media.enabled (not .Values.persistence.media.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "sonarr.fullname" . }}-media
{{- if .Values.persistence.media.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
helm.sh/chart: {{ include "sonarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.media.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.media.size | quote }}
{{- if .Values.persistence.media.storageClass }}
{{- if (eq "-" .Values.persistence.media.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.media.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,20 +0,0 @@
{{- if .Values.exportarr.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "sonarr.fullname" . }}-exporter
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
helm.sh/chart: {{ include "sonarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
clusterIP: None
ports:
- name: monitoring
port: {{ .Values.exportarr.port }}
targetPort: monitoring
selector:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -1,52 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "sonarr.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
helm.sh/chart: {{ include "sonarr.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{ end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "sonarr.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,24 +0,0 @@
{{- if .Values.exportarr.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "sonarr.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "sonarr.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "sonarr.chart" . }}
    {{- with .Values.exportarr.serviceMonitor.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "sonarr.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  endpoints:
    - port: monitoring
      interval: {{ .Values.exportarr.serviceMonitor.interval }}
      scrapeTimeout: {{ .Values.exportarr.serviceMonitor.scrapeTimeout }}
      path: {{ .Values.exportarr.serviceMonitor.path }}
{{- end }}
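Wiring this up end to end requires both exportarr and its ServiceMonitor to be enabled, and the endpoint's port name must match the "monitoring" port of the exporter Service above. A minimal values sketch, assuming the defaults shown in values.yaml below (the API key value is a hypothetical placeholder):

exportarr:
  enabled: true
  apikey: "<your-sonarr-api-key>"   # hypothetical placeholder
  serviceMonitor:
    enabled: true
    interval: 4m
    scrapeTimeout: 90s
    path: /metrics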

View File

@@ -1,153 +1,10 @@
# Default values for sonarr.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  repository: linuxserver/sonarr
  tag: 3.0.3.911-ls39
  pullPolicy: IfNotPresent
# upgrade strategy type (e.g. Recreate or RollingUpdate)
strategyType: Recreate
# Probes configuration
probes:
  liveness:
    initialDelaySeconds: 60
    failureThreshold: 5
    timeoutSeconds: 10
  readiness:
    initialDelaySeconds: 60
    failureThreshold: 5
    timeoutSeconds: 10
# Prometheus Metrics
exportarr:
  enabled: false
sonarr:
  image:
    repository: onedr0p/exportarr
    tag: v0.3.0
    organization: linuxserver
    repository: sonarr
    pullPolicy: IfNotPresent
  url: "http://sonarr.default.svc.cluster.local:8989"
  apikey:
  port: 9707
  # Enable to gather episode quality metrics, if enabled slows down scrape timing due to more API calls
  enableEpisodeQualityMetrics: false
  serviceMonitor:
    enabled: false
    namespace: default
    path: /metrics
    interval: 4m
    scrapeTimeout: 90s
    additionalLabels: {}
nameOverride: ""
fullnameOverride: ""
timezone: UTC
puid: 1001
pgid: 1001
service:
  type: ClusterIP
  port: 8989
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:
  ## Provide any additional annotations which may be required. This can be used to
  ## set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
  labels: {}
  ## Use loadBalancerIP to request a specific static IP,
  ## otherwise leave blank
  ##
  loadBalancerIP:
  # loadBalancerSourceRanges: []
  ## Set the externalTrafficPolicy in the Service to either Cluster or Local
  # externalTrafficPolicy: Cluster
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  path: /
  hosts:
    - chart-example.local
  tls: []
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
persistence:
  config:
    enabled: true
    ## sonarr configuration data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ##
    ## If you want to reuse an existing claim, you can pass the name of the PVC using
    ## the existingClaim variable
    # existingClaim: your-claim
    # subPath: some-subpath
    accessMode: ReadWriteOnce
    size: 1Gi
    ## Do not delete the pvc upon helm uninstall
    skipuninstall: false
  media:
    enabled: true
    ## sonarr media volume configuration
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    ##
    ## If you want to reuse an existing claim, you can pass the name of the PVC using
    ## the existingClaim variable
    # existingClaim: your-claim
    # subPath: some-subpath
    accessMode: ReadWriteOnce
    size: 10Gi
    ## Do not delete the pvc upon helm uninstall
    skipuninstall: false
  extraExistingClaimMounts: []
    # - name: external-mount
    #   mountPath: /srv/external-mount
    #   ## A manually managed Persistent Volume and Claim
    #   ## If defined, PVC must be created manually before volume will be bound
    #   existingClaim:
    #   readOnly: true
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
deploymentAnnotations: {}
    tag: 3.0.3.913-ls40
  service:
    port: 8989
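This hunk replaces the 153-line values.yaml with a roughly 10-line file; the surviving added lines (sonarr:, organization, repository, tag: 3.0.3.913-ls40, service.port) suggest the replacement chart nests its settings under a sonarr key, presumably the alias of the new common dependency. If that reading is right, a user override file would look roughly like this (a sketch under that assumption, not a confirmed schema):

sonarr:
  image:
    organization: linuxserver
    repository: sonarr
    pullPolicy: IfNotPresent
    tag: 3.0.3.913-ls40   # pinned explicitly in values rather than derived from the chart
  service:
    port: 8989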

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for Kubernetes
OWNERS
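As the header comment of this file notes, .helmignore supports shell globs and ! negation, one pattern per line. A hypothetical pair of patterns illustrating negation (not part of the original file):

docs/*
!docs/README.md   # re-include a single file from an otherwise ignored directory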

Some files were not shown because too many files have changed in this diff.