
Checklist

[[TOC]]

This checklist will help you create a chart for your service.

So, let's gather some information:

Before you begin

Create a folder called checklist

Create two files:

  • Chart.yaml
  • values.yaml

Add the following content to your Chart.yaml:

apiVersion: v2
name: checklist
description: My first checklist
type: application
version: 0.1.0
appVersion: "1.16.0"

dependencies:
  - name: ohmyhelm
    alias: myservice
    repository: https://gitlab.com/api/v4/projects/28993678/packages/helm/stable
    version: 1.15.2
    condition: myservice.enabled

Basics

Enable the chart function

Add the following snippet to your values.yaml to enable your first chart for the dependency myservice.

In the next steps, you will add some other content to your values.yaml.

Start with this:

myservice:
  enabled: true
  chart:
    enabled: true

Which kind of workload will you create (Deployment, StatefulSet or DaemonSet)?

myservice:
  chart:
    # You just need one of the three types in your values.yaml. See next code snippet.
    deployment: true
    statefulset: false
    daemonset: false

For example, to use a StatefulSet instead:

myservice:
  chart:
    # ...
    statefulset: true

What's your service name?

myservice:
  chart:
    # ...
    fullnameOverride: "nodered"

Does your service need a dynamic port range?

myservice:
  chart:
    # This is the default value. You don't need to add this.
    hostNetwork: false
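
If your service does, set the flag to true. Note that the pod then shares the node's network namespace, so every containerPort is opened directly on the node and must not collide with other host processes:

```yaml
myservice:
  chart:
    # Share the node's network namespace with the pod
    hostNetwork: true
```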

Pulling your image

If you need credentials to pull your images, add the following code:

# You need to specify the imageCredentials helper just one time in your values.yaml.
# If you need to add multiple registry secrets, add a new entry within imageCredentials.
# Sometimes you need your imageCredentials in different namespaces. By default, all objects will be installed in the namespace that is set in the helm command.
imageCredentials:
  - name: my-container-registry
    registry: https://mydockerregistry.example.com
    username: "myname"
    accessToken: "mytoken"
  - name: my-second-container-registry
    # This Secret will be created in the dev namespace. If this namespace doesn't exist, the deployment will fail. (See HELPER namespaces)
    namespace: dev
    registry: https://my2dockerregistry.example.com
    username: "myname"
    accessToken: "mytoken"

myservice:
  chart:
    # Now, reference your imageCredentials entry in your chart. Remember that this chart will be installed in the namespace that is set in the helm command.
    imagePullSecrets: 
      - name: my-container-registry

Enable Prometheus metrics for this service

myservice:
  chart:
    # this will add an app label with the service name for Prometheus
    applabel: true
    # the names in `chart.fullnameOverride` and `monitoring[0].name` must match
    monitoring:
      - enabled: true
        name: nodered
        release: prometheus
        endpoints:
          - port: http
            interval: 15s
            path: /metrics
            scheme: http

container

This is the default configuration for a container.

myservice:
  chart:
    # ...
    container:
      # Add your container image here
      image: mydockerregistry.example.com/mycustomservice:latest
      # ohMyHelm default values. Remove unnecessary parts if you don't need them.
      # When disabling ports: you should also disable the service. (See "## Service")
      # When changing or adding new entries to ports (name, protocol), remember to add them to your service too. (See "## Service")
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
      imageConfig:
        pullPolicy: IfNotPresent
      securityContext: {}
      command: []
      args: []
      env: []
      extraEnv: []
      envFrom: []

So, we don't need to expose a port, but we need to add some environment variables:

myservice:
  chart:
    # ...
    container:
      # Add your container image here
      image: mydockerregistry.example.com/mycustomservice:latest
      # Disable ports
      ports: []
      env:
        - name: foo
          value: "bar"
          # Load a single environment variable from a config map. See next code block.
        - name: DEBUGLEVEL
          valueFrom:
            configMapKeyRef:
              name: anotherconfig
              key: debuglevel
          # Load a single secret into your environment. See next code block.
        - name: SECRET
          valueFrom:
            secretKeyRef:
              name: allsecrets
              key: myservicesecret
      # Load multiple environment variables from a config map. See next code block.
      envFrom:
        - configMapRef:
            name: specialconfig
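
envFrom also accepts a secretRef, which imports every key of a Secret as an environment variable at once. A sketch using the allsecrets Secret defined in the next code block (this assumes the chart passes envFrom through to the container spec unchanged):

```yaml
myservice:
  chart:
    # ...
    container:
      # ...
      envFrom:
        - configMapRef:
            name: specialconfig
        # Additionally import all keys of the Secret as environment variables
        - secretRef:
            name: allsecrets
```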

Do you need a ConfigMap or a Secret object?

Now, we need to add the ConfigMaps and Secrets.

myservice:
  chart:
    # Config CHART helper
    configs:
      - name: specialconfig
        values:
          BAR: foo
          TEST: "1"
      - name: anotherconfig
        values:
          debuglevel: WARN
          config.yaml: |
            number: "1"
            foo:
              bar:
                - 1
                - 2
                - 3
    # Secret CHART helper
    secrets:
      - name: allsecrets
        values:
          myservicesecret: "admin;-)"
          mytoken: "******"
          secret.file: |
            somesensitive content.
            FOO:BAR 1,2,3

liveness and readiness probe

By default, livenessProbe and readinessProbe are not set. To configure the liveness* sections, check the Kubernetes docs.

myservice:
  chart:
    # ...
    container:
      # ...
      livenessProbe:
        initialDelaySeconds: 120
        failureThreshold: 3
        periodSeconds: 10
        timeoutSeconds: 3
        httpGet:
          path: /healthz
          port: http

The readinessProbe is configured the same way:

myservice:
  chart:
    # ...
    container:
      # ...
      readinessProbe:
        initialDelaySeconds: 120
        failureThreshold: 3
        periodSeconds: 10
        timeoutSeconds: 3
        httpGet:
          path: /healthz
          port: http
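
httpGet is only one of the probe handlers Kubernetes supports; tcpSocket and exec are useful when the service has no HTTP health endpoint. A sketch, assuming the chart passes the probe sections through to the Pod spec unchanged (`/tmp/ready` is a hypothetical readiness file):

```yaml
myservice:
  chart:
    # ...
    container:
      # ...
      livenessProbe:
        initialDelaySeconds: 30
        periodSeconds: 10
        # Succeeds as soon as the port accepts a TCP connection
        tcpSocket:
          port: http
      readinessProbe:
        periodSeconds: 10
        # Succeeds when the command exits with code 0
        exec:
          command: ["cat", "/tmp/ready"]
```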

Service

This is the Kubernetes Service object. Check your chart.container.ports settings and modify the service object if necessary.

myservice:
  chart:
    # ...
    service:
      # This is the default configuration. You don't need to add them.
      type: ClusterIP
      #clusterIP: 1.2.3.4
      #selector:
      #  app.kubernetes.io/name: MyApp
      ports:
        - port: 80
          targetPort: http
          protocol: TCP
          name: http

Disable the Service

myservice:
  chart:
    # ...
    service: []

initContainer

This is the default configuration for an init-container.

The initContainer is disabled by default. There is a wait-for-job image called ohmyhelm-job-helper included.

If you use a Kubernetes Job object to init your database, the initContainer will wait until the job has finished with return code 0; only then will your container start. To use this feature, replace the value yourjobnamehere with the name given in chart.fullnameOverride, and enable job and rbac.

Restrictions:

  • You can't add a port to your initContainer
  • You can't mount a volume
myservice:
  chart:
    # ...
    initContainer:
      enabled: false
      # Add your container image here
      image: registry.gitlab.com/ayedocloudsolutions/ohmyhelm-job-helper:1.0.0
      imageConfig:
        pullPolicy: IfNotPresent
      securityContext: {}
      command: []
      args:
        - "job"
        - "yourjobnamehere"
      env: []
      extraEnv: []
      envFrom: []
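
Putting the pieces together: for a hypothetical service nodered whose database is initialized by a job, the initContainer waits for that job before the main container starts. job and rbac (see the following sections) must be enabled for the helper to work; the migrate argument is a made-up placeholder:

```yaml
myservice:
  chart:
    fullnameOverride: "nodered"
    initContainer:
      enabled: true
      image: registry.gitlab.com/ayedocloudsolutions/ohmyhelm-job-helper:1.0.0
      args:
        - "job"
        # Must match chart.fullnameOverride
        - "nodered"
    job:
      enabled: true
      # Reuses chart.container.image; "migrate" is a hypothetical argument
      args: ["migrate"]
    rbac:
      enabled: true
```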

job (container)

This is the default configuration for a job.

The job is disabled by default. If no image is set, we assume that you have an init job configured in chart.container.image, and you can start it by setting args.

Remember to add the rbac when using ohmyhelm-job-helper in initContainer.

You can clean up finished jobs by setting the value chart.job.removejob.enabled to true.

myservice:
  chart:
    # ...
    job:
      enabled: false
      # Add your container image here. If no image is set `chart.container.image` will be used.
      #image:
      imageConfig:
        pullPolicy: IfNotPresent
      securityContext: {}
      command: []
      args: []
      env: []
      extraEnv: []
      envFrom: []
      restartPolicy: Never
      removejob:
        enabled: false
        ttlSecondsAfterFinished: 60
      activeDeadlineSeconds: 1200
      backoffLimit: 20
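
For example, to run a one-off init command with the main container image and have the finished job deleted automatically after 60 seconds (the init-db argument is a made-up placeholder):

```yaml
myservice:
  chart:
    # ...
    job:
      enabled: true
      # No image set, so chart.container.image is reused
      args: ["init-db"]
      removejob:
        enabled: true
        ttlSecondsAfterFinished: 60
```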

Role based access control

This is the default configuration for rbac.

The rbac is disabled by default.

myservice:
  chart:
    # ...
    rbac:
      enabled: false
      roleRules:
        - apiGroups: ["", "batch"]
          resources: ["*"]
          verbs: ["*"]

Create an ingress

There are two ways to create an ingress in ohMyHelm:

  • ingress You can configure the full ingress from spec:, see the Kubernetes docs on ingress
  • ingressSimple A minimalistic setup with one host and one TLS secret.

ingress

myservice:
  chart:
    # ...
    ingress:
      enabled: false
      annotations: {}
      tls:
        - hosts:
            - my.example.com
          secretName: example-tls
      # You can add multiple hosts / paths
      hosts:
        - host: my.example.com
          http:
            paths:
            - path: /
              # pathType Supported Kubernetes >= v1.19 [stable]
              pathType: Prefix
              backend:
                serviceName: your-service-name-here
                servicePort: http

ingressSimple

myservice:
  chart:
    # ...
    ingressSimple:
      enabled: false
      annotations: {}
      host: my.example.com
      tlsSecretName: example-tls
      pathType: Prefix
      path: /

example with annotations:

myservice:
  chart:
    # ...
    ingressSimple:
      enabled: false
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
        cert-manager.io/cluster-issuer: "letsencrypt-staging"
        nginx.ingress.kubernetes.io/rewrite-target: /$2
        nginx.ingress.kubernetes.io/configuration-snippet: |
          rewrite ^(/myservice)$ $1/ redirect;
        nginx.ingress.kubernetes.io/x-forwarded-prefix: "/myservice"
      host: my.example.com
      tlsSecretName: example-tls
      pathType: Prefix
      path: /myservice(/|$)(.*)

Do you need persistence?

In a future release, we will improve this part.

Depending on your choice of chart.deployment, chart.statefulset or chart.daemonset, you need to use one of the following options to add a volume:

  • statefulsetVolume: (optionally with a PVC)
  • deploymentVolume:
  • daemonsetVolume:
  • (Not implemented yet, same as *Volume) persistance: (optionally with a PVC)

To configure ...

statefulsetVolume

myservice:
  chart:
    # ...
    statefulsetVolume:
      volumeClaimTemplate: []
      volumeMounts: []
      volumes: []

Add a PVC to your service.

myservice:
  chart:
    # ...
    statefulsetVolume:
      volumeClaimTemplate:
        - metadata:
            name: "database-data"
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 5Gi
      volumeMounts:
        - name: database-data
          mountPath: /app/database
          subPath:

deploymentVolume and daemonsetVolume

To add a PVC you need to use the manifests HELPER.

myservice:
  chart:
    # ...
    daemonsetVolume:
      volumeMounts: []
      volumes: []

Create a PVC for a Deployment

myservice:
  chart:
    # ...
    deploymentVolume:
      volumeMounts:
        - name: filesystem-data
          mountPath: /app/filesystem
          subPath:
      volumes:
        - name: filesystem-data
          persistentVolumeClaim:
            claimName: filesystem-data-deployment
  # Use manifests helper to create the PVC
  manifests:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      content:
        metadata:
          name: filesystem-data-deployment
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
          storageClassName: filesystem-data-deployment

persistance (not implemented yet)

:warn: This is not implemented yet!

By default, a statefulset has the option to create a PVC. A deployment and a daemonset don't have this function by default.

With persistance: you can simply add a PVC to a deployment or daemonset.

You can create a volume with PVC (statefulset, deployment, daemonset)

To configure ...

Default configuration

myservice:
  chart:
    # ...
    persistance:
      volumeClaimTemplate: []
      volumeMounts: []
      volumes: []

Volume and mount to override daemon.json with a configMap

myservice:
  chart:
    # ...
    persistance:
      volumeClaimTemplate: []
      volumeMounts:
        - name: daemon-json
          mountPath: /etc/docker/daemon.json
          subPath: daemon.json
      volumes:
        - name: daemon-json
          configMap:
            name: daemon-json
    configs:
      - name: daemon-json
        values:
          daemon.json: |
            {
              "mtu": 1350
            }

Add a PVC

myservice:
  chart:
    # ...
    persistance:
      volumeClaimTemplate:
        - metadata:
            name: "database-data"
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 5Gi
      volumeMounts:
        - name: database-data
          mountPath: /app/database
          subPath:
      volumes: []

TODO replica, serviceaccount, etc

:warn: TODO

myservice:
  chart:
    # ...
    replicaCount: 1
    podAnnotations: {}
    podSecurityContext: {}
    serviceAccount:
      create: true
      annotations: {}
      name: ""
    resources: {}
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 100
      targetCPUUtilizationPercentage: 80
      targetMemoryUtilizationPercentage: 80
    nodeSelector: {}
    tolerations: []
    affinity: {}
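
These map to the usual Kubernetes Pod and HPA fields. As an illustration only (the numbers are placeholders, not recommendations), a pinned two-replica setup with resource limits and a node selector could look like:

```yaml
myservice:
  chart:
    # ...
    replicaCount: 2
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
    nodeSelector:
      kubernetes.io/os: linux
```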

INSTALL YOUR CHART

Now, it's time to test your ohMyHelm chart.

helm dep update
helm upgrade --install -n "mychecklist" --create-namespace my-checklist .

Examples

  • Our pod and Service listens on port 80.
  • We need a service object.
  • Registry is public.

Chart.yaml

apiVersion: v2
name: my-application
description: my simple application
type: application
version: 1.2.2
appVersion: "1.3.5"

dependencies:
  - name: ohmyhelm
    alias: myservice
    repository: https://gitlab.com/api/v4/projects/28993678/packages/helm/stable
    version: 1.15.2
    condition: myservice.enabled

values.yaml

myservice:
  enabled: true
  chart:
    enabled: true
    deployment: true 
    fullnameOverride: "myservice"
    container:
      image: my-public-registry/my-image:1.3.5

Complex

Your service needs a volume that supports ReadWriteMany, but our CSI driver doesn't. So, we add the nfs-subdir-external-provisioner and an NFS server.

myservice:

  • Our pod and Service listen on port 80 and we need a service object (port 80 is the default, so nothing is set)
  • Registry is NOT public and we need imageCredentials (set in dependency secretsconfigs)
  • We need a configMap for our environments (set in dependency secretsconfigs)
  • We need volumes and a PVC for our deployment (set in dependency myservice)

nfs:

  • we need multiple ports
  • securityContext set to privileged: true
  • We need volumes and PVC for our statefulset

Chart.yaml

apiVersion: v2
name: my-application
description: myapplication needs ReadWriteMany
type: application
version: 1.2.3
appVersion: "1.3.5"

dependencies:
  - name: secretsconfigs
    repository: https://gitlab.com/api/v4/projects/28993678/packages/helm/stable
    version: 1.15.2
    condition: secretsconfigs.enabled
  - name: ohmyhelm
    alias: myservice
    repository: https://gitlab.com/api/v4/projects/28993678/packages/helm/stable
    version: 1.15.2
    condition: myservice.enabled
  - name: ohmyhelm
    alias: nfs
    repository: https://gitlab.com/api/v4/projects/28993678/packages/helm/stable
    version: 1.15.2
    condition: nfs.enabled
  - name: nfs-subdir-external-provisioner
    repository: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
    version: 4.0.16

values.yaml

secretsconfigs:
  enabled: true
  imageCredentials:
    - name: my-container-registry
      registry: https://mydockerregistry.example.com
      username: "myname"
      accessToken: "mytoken"
  # # ohMyHelm config Helper for a single config
  # config:
  #   enabled: true
  #   name: myspecialconfig
  #   values:
  #     ENV: prod
  #     DEBUG: info
  #     START_WITH: "2022"
  # # ohMyHelm config Helper for multiple configs
  configs:
    - name: myspecialconfig
      values:
        ENV: prod
        DEBUG: info
        START_WITH: "2022"

myservice:
  enabled: true
  chart:
    enabled: true
    deployment: true 
    fullnameOverride: "myservice"
    imagePullSecrets:
      - name: my-container-registry
    container:
      image: mydockerregistry.example.com/my-image:1.3.5
      envFrom:
        - configMapRef:
            name: myspecialconfig
    deploymentVolume:
      volumeMounts:
        - name: nfs-volume
          mountPath: /data
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: data-service-nfs
    # # ohMyHelm Chart config helper for multiple configs
    # configs:
    #   - name: myspecialconfig
    #     values:
    #       ENV: prod
    #       DEBUG: info
    #       START_WITH: "2022"
  # We need to add a PVC for the Deployment
  manifests:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      content:
        metadata:
          name: data-service-nfs
        spec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 100Gi
          storageClassName: nfs-client

nfs:
  enabled: true
  chart:
    enabled: true
    statefulset: true 
    fullnameOverride: "nfs"
    container:
      image: k8s.gcr.io/volume-nfs:0.8
      securityContext:
        privileged: true
      args:
        - /app/data
      ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
    service:
      type: ClusterIP
      clusterIP: 10.43.249.55
      ports:
        - port: 2049
          name: nfs
          targetPort: nfs
        - port: 20048
          targetPort: mountd
          name: mountd
        - port: 111
          targetPort: rpcbind
          name: rpcbind
    statefulsetVolume:
      volumeMounts:
        - name: data-nfs
          mountPath: /app/data
      volumeClaimTemplate:
        - metadata:
            name: "data-nfs"
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 10Gi

nfs-subdir-external-provisioner:
  storageClass:
    name: nfs-client
  nfs:
    server: 10.43.249.55
    path: "/"
    mountOptions:
      - nolock
      - nfsvers=4.2

Last update: September 26, 2022