
CloudFlare: Partial zone sign-up not allowed (1104) [Terraform]

If you are an Enterprise customer, you can use CloudFlare's Partial (CNAME) domain setup and keep maintaining your own DNS servers. Although the name suggests partial DNS, it actually creates a proxy host, and you point a CNAME record at it.

When you try to create a partial zone using the Terraform provider, you may end up with the error message below.

cloudflare_zone.testvettomzone: Creating...
 Error: error creating zone "vettom.online": Partial zone signup not allowed (1104)

   with cloudflare_zone.testvettomzone,
   on main.tf line 19, in resource "cloudflare_zone" "testvettomzone":
   19: resource "cloudflare_zone" "testvettomzone" {

Solution

This error is returned by the CloudFlare API through the Terraform provider. CloudFlare has to enable/allow partial zone creation for your account before Terraform can create one.

  • Raise a support ticket with CloudFlare asking them to allow partial zone creation via Terraform
  • Ensure you are using the Global API Key for authentication
  • An Enterprise plan is required; partial zones are not available on other plans

A minimal partial zone resource looks like this:
resource "cloudflare_zone" "zone" {
  account_id = var.accountid
  zone       = var.domain
  type       = "partial"
  plan       = "enterprise"
}
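
For the Global API Key requirement, the provider block needs email plus api_key rather than an API token. A minimal sketch, assuming the key is passed in via variables (the variable names are illustrative, not from the original post):

provider "cloudflare" {
  # Global API Key authentication; an API token may be rejected for partial zone creation
  email   = var.cloudflare_email
  api_key = var.cloudflare_api_key
}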

ECR public repo unable to retrieve credentials

Trying to pull a chart from a public repo hosted on AWS ECR can result in an authentication failure.

Error

helm pull oci://public.ecr.aws/dynatrace/dynatrace-operator
Error: GET "https://public.ecr.aws/v2/dynatrace/dynatrace-operator/tags/list": unable to retrieve credentials

Solution

Log in to the registry via helm using AWS ECR Public credentials:

aws ecr-public get-login-password \
     --region us-east-1 | helm registry login \
     --username AWS \
     --password-stdin public.ecr.aws
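
After logging in, the same pull should succeed. A quick check (the --version value below is only an illustrative placeholder, not a verified chart version):

helm pull oci://public.ecr.aws/dynatrace/dynatrace-operator --version 1.3.0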

Adobe Premiere: no audio while editing (macOS)

While editing videos in Adobe Premiere, you can run into a scenario where audio does not work during editing. You may also get an error stating "The sample rate is not supported by the current audio device". The video itself plays fine on the Mac, and changing the output hardware makes no difference!

Adobe Premiere audio error

It turns out that if the input or output hardware does not support the required sample rate, Premiere stops audio output completely! In my case, the audio input device was a Bose QC35 headset and the output was Bose speakers via USB. Since the input device was not supported, audio failed to work despite switching to various output devices, including the MacBook speakers. It was fixed only when the input device was changed.

Solution

Go to Adobe Premiere -> Settings -> Audio Hardware and update the Input/Output devices to a working device configuration.

Adobe Premiere audio fix

Alternative

Launch Audio MIDI Setup.app on the Mac and try to match the sample rate of your devices.

Mac midi audio setup

ArgoCD: how to reset the Admin password

ArgoCD stores the admin account credentials in the argocd-secret Kubernetes secret; the initially generated password is also kept in the argocd-initial-admin-secret secret. For security, you should remove argocd-initial-admin-secret and disable the admin account once other accounts are in place. The admin.password value in argocd-secret is a one-way (bcrypt) hash and hence cannot be decrypted.

Recovery steps

  • Enable the admin account by setting admin.enabled: "true" in the argocd-cm ConfigMap (configs.cm.admin.enabled when using the Helm chart); see the patch example after this list
  • Remove admin.password from the secret argocd-secret by editing it or by running the command below
  • Delete/redeploy the argocd-server pod to generate a new admin password
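A minimal sketch of the first step, patching the admin toggle directly in the argocd-cm ConfigMap (equivalent to configs.cm.admin.enabled: true in the Helm values):

kubectl patch -n argocd cm argocd-cm --type merge -p '{"data":{"admin.enabled":"true"}}'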
Removing admin password

kubectl patch -n argocd secret argocd-secret --type='json' -p='[{"op": "remove", "path": "/data/admin.password"}, {"op": "remove", "path": "/data/admin.passwordMtime"}]'

Example argocd-secret

apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
type: Opaque
data:
  admin.password: JDJhJDEwJHhFWTVCZUxHaW1oLnV5TVlzaGdhb3ZxSVE5ZklL
  admin.passwordMtime: MjAyNC0wNi0xOFQxMDowNzo0OVo=
  server.secretkey: K1ZCZlpEeWYwMFpjUzV5NG5tTUROOFllS0plYz0=
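
Once the argocd-server pod has been recreated, the newly generated admin password can be read back from argocd-initial-admin-secret:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d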

Complete EKS cluster [Terraform]

Getting started with creating a functional EKS cluster from scratch can be challenging, as it requires some specific settings. While the EKS module will create a new cluster, it does not address how you will expose an application, the tags required on subnets, the number of pod IP addresses, etc.

🖥 EKS cluster using Terraform contains everything required to spin up a new cluster and expose an application via an Application Load Balancer. All you need to do is apply the Terraform code.

Source Code, Sample app

  • [x] VPC with 2 private and 2 public zones
  • [x] EKS cluster with Managed NodeGroup (1 Node)
  • [x] VPC CNI add-on with prefix delegation
  • [x] AWS Loadbalancer controller
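
One of the EKS-specific details mentioned above is subnet tagging: the AWS Load Balancer controller discovers subnets through well-known tags. A minimal sketch using the community VPC module, assuming illustrative names, CIDRs and AZs (not taken from the linked source code):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "eks-vpc"               # illustrative name
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.0.0/24", "10.0.1.0/24"]
  private_subnets = ["10.0.10.0/24", "10.0.11.0/24"]

  # Tags the AWS Load Balancer controller uses for subnet discovery
  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"           # internet-facing ALBs
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"  # internal load balancers
  }
}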

EKS secrets as env variable with CSI driver


The CSI driver is configured by default to mount secrets as files. However, it is possible to expose secrets as environment variables using the method below. When the pod is started, the driver creates a Kubernetes secret that can be referenced as an environment variable. The secret object only exists while the pod is active.

Source code Aws-Eks-SecretsManager

Configuring the provider: AWS Secrets Manager and Config Provider

Scenario

  • You have configured a secret called MySecret with data username and password
  • The necessary IAM policy has been created in AWS to allow access to the secret
  • An IAM service account called 'nginx-deployment-sa' has been created with the policy attached; a sketch of this step follows the list
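
A sketch of the service-account step using eksctl (the cluster name and policy ARN below are placeholders, and the cluster is assumed to already have an IAM OIDC provider associated):

eksctl create iamserviceaccount \
  --cluster my-eks-cluster \
  --namespace default \
  --name nginx-deployment-sa \
  --attach-policy-arn arn:aws:iam::111122223333:policy/MySecretReadPolicy \
  --approve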

Creating K8s Secrets from AWS Secrets

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-to-k8s-secret
  namespace: default
spec:
  provider: aws
  parameters:
    objects: |
        - objectName: "MySecret"
          objectType: "secretsmanager"
          objectAlias: mysecret
          jmesPath:
            - path: "username"
              objectAlias: "Username"
  secretObjects:
    - secretName: myusername
      type: Opaque
      data: 
        - objectName: "Username"
          key: "username"
    - secretName: myk8ssecret
      type: Opaque
      data: 
        - objectName: "mysecret"
          key: "mysecret"

spec.parameters
  • objectName: Name of the secret object in the secret store
  • objectAlias: Optional alias name for the secret object
  • jmesPath.path: Name of the specific field within the secret to be exposed
  • jmesPath.objectAlias: Alias name for that field, used to reference it later

spec.secretObjects
  • secretName: Name of the secret to be created in k8s
  • data.objectName: Name of the secret object/alias to retrieve data from
  • key: Name of the key within the k8s secret used to store the retrieved data

The above configuration creates a k8s secret called 'myusername' holding the value of username under the key 'username'. The k8s secret 'myk8ssecret' will contain the whole MySecret object under the key 'mysecret'.
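
To actually get the value into a container as an environment variable, the pod must mount the CSI volume (which triggers the secret sync) and reference the synced k8s secret. A minimal sketch reusing the names from the scenario above; the pod name and nginx image are illustrative choices:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-demo
  namespace: default
spec:
  serviceAccountName: nginx-deployment-sa
  containers:
    - name: nginx
      image: nginx:1.25
      env:
        # Read the username from the k8s secret synced by the CSI driver
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: myusername
              key: username
      volumeMounts:
        # The CSI volume must be mounted for secret syncing to happen
        - name: secrets-store
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: aws-secret-to-k8s-secret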

EKS: avoid errors and timeout during deployment (ALB)

Scenario

An EKS cluster configured with an Application Load Balancer. During deployments, pods become unhealthy in the target group for a short while, causing a brief outage.


Root cause

There are two possible reasons for this scenario, and both must be addressed:
  1. The ALB takes time to register and initialize new pods
  2. The ALB is slow to detect and drain terminated pods

Solution

Enable pod readiness Gate

Configure the Pod readiness gate so the rollout waits until a new pod is registered with the ALB/NLB target group and healthy to receive traffic. This ensures the new pod is healthy in the target group before the old pod is terminated.

To enable the Pod readiness gate, add the label elbv2.k8s.aws/pod-readiness-gate-inject: enabled to the application's namespace. The change takes effect for any new pod being deployed.

apiVersion: v1
kind: Namespace
metadata:
  name: my-app        # placeholder: your application's namespace
  labels:
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
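
After redeploying, readiness-gate injection can be verified with kubectl; the READINESS GATES column of the wide output should show the ALB target-health condition (the namespace name matches the placeholder above):

kubectl get pods -n my-app -o wide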

Pod lifecycle preStop

When a pod is terminated, it can take couple of seconds for ALB to pick up the change and start draining connection. By this time, most likely pod already been terminated by K8s. Solution to this issue is a workaround. Add a lifecycle policy to the pod to ensure pods are de-registered before termination

    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: app            # your application container
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 60"]

Adjust the ALB/target group de-registration delay to be smaller than the preStop sleep time by adding the target group attribute deregistration_delay.timeout_seconds, as shown in the ingress annotations below.

ingress:
  enabled: true
  className: "alb"
  annotations: 
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30