
fix: add insecure registry configuration to containerd; refs #555 #572


Open · wants to merge 1 commit into main

Conversation


@Evynglais Evynglais commented Mar 4, 2025

Changes

Unable to pull images from local registry created with --registry

  • Adds the missing containerd configuration required to pull images from the local registry when the `--registry` flag is used (a configuration sketch follows below).

/kind bug

Fixes #555

Release Note

Kind is now able to pull images from the local registry created using the `--registry` flag.
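
For reference, the usual way to let a kind node pull from a plain-HTTP local registry is a containerdConfigPatches entry in the kind cluster configuration. The sketch below follows the mirror-based pattern from kind's local-registry documentation and is not necessarily the exact change in this PR; the registry name kind-registry and the ports localhost:5001 / 5000 are illustrative placeholders.

```yaml
# Sketch only: a kind cluster config that lets nodes pull from a plain-HTTP
# local registry. "kind-registry", 5001 and 5000 are placeholders, not
# necessarily the values used by kn-plugin-quickstart.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
# Applied to each node's containerd config: pulls addressed to localhost:5001
# are redirected to the registry container on the kind docker network.
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
```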

@knative-prow knative-prow bot added the kind/bug (Categorizes issue or PR as related to a bug.) label on Mar 4, 2025

linux-foundation-easycla bot commented Mar 4, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@knative-prow knative-prow bot requested review from dsimansk and rhuss March 4, 2025 09:53

knative-prow bot commented Mar 4, 2025

Welcome @Evynglais! It looks like this is your first PR to knative-extensions/kn-plugin-quickstart 🎉


knative-prow bot commented Mar 4, 2025

Hi @Evynglais. Thanks for your PR.

I'm waiting for a knative-extensions member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@knative-prow knative-prow bot added the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) and size/S (Denotes a PR that changes 10-29 lines, ignoring generated files.) labels on Mar 4, 2025
Contributor

@dsimansk dsimansk left a comment


Some other places need to be fixed to reflect the new function signature; see the review annotations.

Contributor

dsimansk commented Mar 4, 2025

@Evynglais thanks for the PR! There seems to be at least one more place that needs to be updated for the new signature. It might also be worth looking at the unit tests to see if we can add some coverage, but I need to take a look as well. :)

Contributor

dsimansk commented Mar 6, 2025

/ok-to-test

@knative-prow knative-prow bot added the ok-to-test (Indicates a non-member PR verified by an org member that is safe to test.) label and removed the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) label on Mar 6, 2025
@psschwei
Contributor

Hi @Evynglais, are you still working on this?

@Evynglais
Author

Hi @Evynglais, are you still working on this?

Hi,

Apologies, I've been distracted for a while. I'll aim to get this sorted this Sunday.

Thanks


knative-prow bot commented May 11, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Evynglais
Once this PR has been reviewed and has the lgtm label, please ask for approval from dsimansk. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

@psschwei psschwei left a comment


For me, the control plane fails to start when enabling the registry:

$ ./kn-quickstart kind --registry
Running Knative Quickstart using Kind
✅ Checking dependencies...
    Kind version is: 0.27.0
💽 Installing local registry...
Pulling from library/registry: 2
Digest: sha256:a3d8aaa63ed8681a604f1dea0aa03f100d5895b6a58ace528858a7b332415373: %!s(<nil>)
Status: Image is up to date for registry:2: %!s(<nil>)
☸ Creating Kind cluster...
Creating cluster "knative" ...
 ✓ Ensuring node image (kindest/node:v1.30.0) 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✗ Starting control-plane 🕹 
Deleted nodes: ["knative-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged knative-control-plane kubeadm init --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0512 13:21:00.546655     201 initconfiguration.go:260] loading configuration from "/kind/kubeadm.conf"
W0512 13:21:00.548542     201 initconfiguration.go:348] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.30.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0512 13:21:00.552555     201 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0512 13:21:00.641634     201 certs.go:483] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [knative-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
I0512 13:21:00.910745     201 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
I0512 13:21:01.005748     201 certs.go:483] validating certificate period for front-proxy-ca certificate
I0512 13:21:01.083804     201 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
I0512 13:21:01.148677     201 certs.go:483] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [knative-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [knative-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
I0512 13:21:01.644584     201 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0512 13:21:01.765234     201 kubeconfig.go:112] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0512 13:21:01.972373     201 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0512 13:21:02.058068     201 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0512 13:21:02.159833     201 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0512 13:21:02.195115     201 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0512 13:21:02.275027     201 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0512 13:21:02.275053     201 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0512 13:21:02.275232     201 certs.go:483] validating certificate period for CA certificate
I0512 13:21:02.275274     201 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0512 13:21:02.275278     201 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0512 13:21:02.275280     201 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0512 13:21:02.275282     201 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0512 13:21:02.275284     201 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0512 13:21:02.275780     201 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0512 13:21:02.275789     201 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0512 13:21:02.275892     201 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0512 13:21:02.275898     201 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0512 13:21:02.275901     201 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0512 13:21:02.275903     201 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0512 13:21:02.275906     201 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0512 13:21:02.275908     201 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0512 13:21:02.275910     201 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0512 13:21:02.276355     201 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0512 13:21:02.276363     201 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0512 13:21:02.276476     201 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0512 13:21:02.276757     201 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0512 13:21:02.276765     201 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0512 13:21:02.381158     201 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000887467s

Unfortunately, an error has occurred:
        The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' returned error: Get "http://localhost:10248/healthz": context deadline exceeded


This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:110
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:115
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
main.main
        k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        runtime/proc.go:271
runtime.goexit
        runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
main.main
        k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        runtime/proc.go:271
runtime.goexit
        runtime/asm_amd64.s:1695
Error: creating cluster: existing cluster: new cluster: kind create: piping output: exit status 1
Usage:
  kn-quickstart kind [flags]

Flags:
      --extraMountContainerPath string   set the extraMount containerPath on Kind quickstart cluster
      --extraMountHostPath string        set the extraMount hostPath on Kind quickstart cluster
  -h, --help                             help for kind
      --install-eventing                 install Eventing on quickstart cluster
      --install-serving                  install Serving on quickstart cluster
  -k, --kubernetes-version string        kubernetes version to use (1.x.y) or (kindest/node:v1.x.y)
  -n, --name string                      kind cluster name to be used by kn-quickstart (default "knative")
      --registry                         install registry for Kind quickstart cluster

creating cluster: existing cluster: new cluster: kind create: piping output: exit status 1
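
One possibility, stated here as an assumption rather than something confirmed in this thread: if the containerd patch injected for the registry is not valid once templated (for example, the TOML block loses its indentation), containerd fails to come up on the node and the kubelet health check times out exactly as in the output above. Creating the cluster directly with the kind CLI and a standalone config file takes the plugin's templating out of the equation; the sketch below reuses the same placeholder registry name and ports as in the description.

```yaml
# Sketch of a standalone config to reproduce without the plugin, e.g.:
#   kind create cluster --name knative --config kind-config.yaml
# If this succeeds but `kn-quickstart kind --registry` still fails, the patch
# generated by the plugin (rather than the approach itself) is the likely culprit.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
```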

@knative-prow-robot knative-prow-robot added the needs-rebase (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) label on Jul 1, 2025
@knative-prow-robot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
