Disclaimer
note
This document is maintained by the engineering team and is provided for informational purposes only. While every effort is made to keep the content accurate and up to date, it may not reflect the most current developments, practices, or standards. The information contained herein is not guaranteed to be complete, and no representations or warranties are made regarding its accuracy, reliability, or applicability. This document is not officially supported and might not be actively managed for ongoing updates or troubleshooting.
To suggest edits to this document, please visit https://github.com/crc-org/engineering-docs/
Developing CRC
Overview
The following sections describe how to build and test the project.
Prerequisites
- git
- make
- A recent Go distribution (>=1.11)
note
You should be able to develop the project on Linux, Windows, or macOS.
Setting up the development environment
Cloning the repository
Get the sources from GitHub:
$ git clone https://github.com/crc-org/crc.git
note
Do not keep the source code in your $GOPATH, as Go modules will cause make to fail.
Dependency management
CRC uses Go modules for dependency management.
For more information, see the Go modules documentation.
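After changing a dependency, the usual Go modules workflow applies. A minimal sketch (the module path and version are placeholders; if the repository vendors its dependencies, the exact steps may differ):
$ go get github.com/example/somelib@v1.2.3   # pull in or bump a dependency (example module)
$ go mod tidy                                # prune and record the module graph
$ go mod vendor                              # refresh the vendor/ directory if used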
Compiling the CRC Binary
In order to compile the crc executable for your local platform, run the following command:
$ make
By default, the above command will place the crc executable in the $GOBIN path.
Run the following command to cross-compile the crc executable for many platforms:
$ make cross
Note: This command will output the cross-compiled crc executable(s) in the out directory by default:
$ tree out/
out/
├── linux-amd64
│ └── crc
├── macos-amd64
│ └── crc
└── windows-amd64
└── crc.exe
Running unit tests
To run all unit tests, use:
$ make test
If you need to update mocks, use:
$ make generate_mocks
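To run the unit tests of a single package instead, you can invoke go test directly; a sketch (the package path is just an example):
$ go test -v ./pkg/crc/config/...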
Debugging guide
crc start failed and you don't know where to go next. This guide will help you find clues about the failure.
Access the VM
First, check if the VM is running and if you can enter it.
With the following ssh config, enter the VM. The IP can be found with crc ip.
~/.ssh/config
Host crc
Hostname 192.168.130.11
User core
IdentityFile ~/.crc/machines/crc/id_rsa
IdentityFile ~/.crc/machines/crc/id_ecdsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
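With this configuration saved in ~/.ssh/config, entering the VM is then simply:
$ ssh crc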
If you use vsock network mode, the IP is 127.0.0.1 and the port is 2222.
On Windows, the relevant SSH key is in C:\Users\%USERNAME%\.crc\machines\crc\id_ecdsa
You can also run this command directly:
Linux
$ ssh -i ~/.crc/machines/crc/id_ecdsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@192.168.130.11
MacOS
$ ssh -i ~/.crc/machines/crc/id_ecdsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 2222 core@127.0.0.1
Windows
PS> ssh -i C:\Users\$env:USERNAME\.crc\machines\crc\id_ecdsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 2222 core@127.0.0.1
Checking the status of the VM
First, you can check that you have internet connectivity with curl https://quay.io.
A working kubeconfig is stored in /opt/kubeconfig. You can use it to get the status of the cluster.
$ KUBECONFIG=/opt/kubeconfig kubectl get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.6.9 True False False 8h
cloud-credential 4.6.9 True False False 11d
cluster-autoscaler 4.6.9 True False False 11d
config-operator 4.6.9 True False False 11d
console 4.6.9 True False False 11d
note
They should all look like this
$ KUBECONFIG=/opt/kubeconfig kubectl get nodes
NAME STATUS ROLES AGE VERSION
crc-lf65c-master-0 Ready master,worker 11d v1.19.0+7070803
(should be ready)
$ KUBECONFIG=/opt/kubeconfig kubectl describe nodes
...
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 25 Jan 2021 18:55:15 +0000 Fri, 15 Jan 2021 02:46:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 25 Jan 2021 18:55:15 +0000 Fri, 15 Jan 2021 02:46:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 25 Jan 2021 18:55:15 +0000 Fri, 15 Jan 2021 02:46:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 25 Jan 2021 18:55:15 +0000 Fri, 15 Jan 2021 02:46:11 +0000 KubeletReady kubelet is posting ready status
...
note
Conditions should all be like this
$ KUBECONFIG=/opt/kubeconfig kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver-operator openshift-apiserver-operator-5677877bdf-8g6bm 1/1 Running 0 11d
openshift-apiserver apiserver-66f58cdf9f-d96bp 2/2 Running 0 10d
openshift-authentication-operator authentication-operator-76548bccd7-dq9g5 1/1 Running 0 11d
openshift-authentication oauth-openshift-5744c7c4bd-mnz8g 1/1 Running 0 10d
openshift-authentication oauth-openshift-5744c7c4bd-vnwms 1/1 Running 0 10d
openshift-cluster-machine-approver machine-approver-7f5c9dc658-rfr8k 2/2 Running 0 11d
openshift-cluster-node-tuning-operator cluster-node-tuning-operator-76bf4c756-6llzh 1/1 Running 0 11d
note
Look for suspicious or failed pods
If you still have no clue, you can take a look at container activity.
$ sudo crictl ps | head
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
7021ae2801875 registry.redhat.io/redhat/redhat-operator-index@sha256:6519ef7cef0601786e6956372abba556da20570ba03f43866dd1b7582043b061 15 minutes ago Running registry-server 0 cfcfe4356e368
53a1204ae4473 registry.redhat.io/redhat/community-operator-index@sha256:2bae3ba4b7acebf810770cbb7444d14b6b90226a0f53dfd453ca1509ea6aa5e0 3 hours ago Running registry-server 0 175e5557785eb
4609e49599e21 cfce721939963e593158b60ab6d1e16278a4c4e681d305af6124e978be6a3687 8 hours ago Running controller 1 8d05bd4f82250
note
The first container started 15 minutes ago, whereas almost all other containers started a few hours ago. This is suspicious.
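If a container looks suspicious, you can dig further with crictl; for example (the container ID is the one reported by crictl ps above):
$ sudo crictl logs 7021ae2801875   # inspect the logs of the suspicious container
$ sudo crictl ps -a | head         # also list recently exited containers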
Testing
Running e2e tests
We have automated e2e tests which ensure the quality of our deliverables: CRC and the system bundle.
Introduction
End-to-end (e2e) tests borrow code from the Clicumber package to provide basic functionality for testing CLI binaries. This facilitates running commands in a persistent shell instance (bash, tcsh, zsh, Command Prompt, or PowerShell), asserting their outputs (standard output, standard error, or exit code), checking configuration files, and so on. The general functionality of Clicumber is then extended by CRC-specific test code to cover the whole functionality of CRC.
How to run
First, one needs to set the following flags in the Makefile, under the e2e target:
- --pull-secret-file: absolute path to your OpenShift pull secret.
- --bundle-location: if the bundle is embedded, this flag should be set to --bundle-location=embedded or not passed at all; if the bundle is not embedded, then the absolute path to the bundle should be passed.
- --crc-binary: if the crc binary resides in $GOPATH/bin, then this flag does not need to be passed; otherwise the absolute path to the crc binary should be passed.
To start e2e tests, run:
$ make e2e
How to run only a subset of all e2e tests
Implicitly, all e2e tests for your operating system are executed. If you want to run only tests from one feature file, you have to override the GODOG_OPTS environment variable. For example:
make e2e GODOG_OPTS="--godog.tags='@basic && @windows'" BUNDLE_LOCATION=<bundle location> PULL_SECRET_FILE=<pull secret path>
Please notice @basic && @windows, where the @basic tag stands for the basic.feature file and the @windows tag for e2e tests designed for Windows.
How to test cert rotation
On Linux, first stop the network time synchronization using:
$ sudo timedatectl set-ntp off
Set the time 2 months ahead:
$ sudo date -s '2 month'
Start crc with CRC_DEBUG_ENABLE_STOP_NTP=true set:
$ CRC_DEBUG_ENABLE_STOP_NTP=true crc start
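Once you are done testing, remember to re-enable time synchronization on the host:
$ sudo timedatectl set-ntp on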
Logs
Test logs can be found in test/e2e/out/test-results.
Releasing on GitHub
Releasing using the github actions workflow
The GitHub Actions workflow Publish release on github creates a draft release and provides a template with all the component versions and the git change log.
To start the workflow, go to the workflow page and click the Run Workflow button; make sure to choose the appropriate tag for the release.
Once the draft release is available, edit it to include the notable changes for the release and press publish to make it public.
Releasing using the gh-release.sh script
In the CRC repository, we have a script gh-release.sh which uses the gh tool; make sure it is installed.
Create a markdown file named notable_changes.txt, containing a list of the notable changes, in the same directory as the script.
An example notable_changes.txt:
$ cat notable_changes.txt
- Fixes a bug where `oc` binary was not extracted from bundle when using microshift preset [#3581](https://github.com/crc-org/crc/issues/3581)
- Adds 'microshift' as a possible value to the help string of the 'preset' config option [#3576](https://github.com/crc-org/crc/issues/3576)
Then run the script from the release tag and follow the prompts; it’ll ask for confirmation before pushing the draft release to GitHub.
[!NOTE] The script will exit with an error if it doesn’t find a tag starting with v in the current git HEAD.
$ git checkout v2.18.0
$ ./gh-release.sh
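Before publishing, you can also check the draft from the command line with gh; for example:
$ gh release list --limit 5   # the new release should show up marked as Draft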
Verify the draft release on the releases page and, if everything looks good, press publish to make the release public.
Code signing the macOS installer
Instructions
This document lists the steps I took to code sign the crc installer:
- run make out/macos-universal/crc-macos-installer.tar
- copy the tarball to the macOS machine which will sign the installer
- unpack the tarball
- set CODESIGN_IDENTITY and PRODUCTSIGN_IDENTITY to match the certificates you'll be using.
For example:
$ export PRODUCTSIGN_IDENTITY="Developer ID Installer: Christophe Fergeau (GSP9DR7D3R)"
$ export CODESIGN_IDENTITY="Developer ID Application: Christophe Fergeau (GSP9DR7D3R)"
- run packaging/package.sh ./packaging; this will generate a signed packaging/crc-macos-installer.pkg file
This file can now be notarized with xcrun notarytool submit --apple-id apple@crc.dev --team-id GSP9DR7D3R --wait ./packaging/crc-macos-installer.pkg.
note
The --wait flag is optional. xcrun notarytool info and xcrun notarytool log can be used to monitor the progress.
Once the notarization reports Accepted, you can run xcrun stapler staple ./packaging/crc-macos-installer.pkg to attach the result to the installer.
Afterwards, spctl --assess -vv --type install ./packaging/crc-macos-installer.pkg can be used to check the signature and notarization of the .pkg file.
Windows installation process
On Windows, setting up the system is shared between the installer (MSI or Chocolatey) and crc preflights.
MSI installer
- creates the crc-users group
- adds the current user to the crc-users group
- sets up the admin-helper service
- creates the registry key required by hvsock
- adds the user to the Hyper-V admin group
- installs Hyper-V
- configures SMB for file sharing
Chocolatey
- creates the crc-users group
- sets up the admin-helper service
- creates the registry key required by hvsock
- installs Hyper-V
CRC preflights
- checks if the crc-users group exists
- checks if Hyper-V is installed and running
- checks if the hvsock registry key exists
- checks if the admin-helper service is running
- adds the current user to the crc-users group and the Hyper-V admin group
- starts the crc daemon task
Track TCP proxy connections
Create an image with this Containerfile:
FROM docker.io/library/centos:latest
RUN dnf install -y \
bcc-tools \
http://download.eng.bos.redhat.com/brewroot/vol/rhel-8/packages/kernel/4.18.0/147.3.1.el8_1/x86_64/kernel-devel-4.18.0-147.3.1.el8_1.x86_64.rpm \
http://download.eng.bos.redhat.com/brewroot/vol/rhel-8/packages/kernel/4.18.0/147.3.1.el8_1/x86_64/kernel-headers-4.18.0-147.3.1.el8_1.x86_64.rpm \
&& dnf clean all \
&& rm -rf /var/cache/yum
ENTRYPOINT ["/usr/share/bcc/tools/tcpconnect"]
note
The kernel-devel and kernel-headers versions must exactly match the kernel version used by the CRC bundle.
Image creation and publishing
$ podman build -t bcc-tcpconnect -f Containerfile .
$ podman push localhost/bcc-tcpconnect quay.io/teuf/experiments:147.3.1.el8_1
note
The image is published so that the VM is able to download it.
Then, after running crc start, you can run (possibly as soon as ssh is up in the VM):
$ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ~/.crc/machines/crc/id_rsa core@192.168.130.11 \
sudo podman run --privileged -v /lib/modules:/lib/modules:ro \
quay.io/teuf/experiments:4.18.0-147.3.1.el8_1
User-mode networking stack
With v1.19 CRC introduced a new network mode that allows users to work with a VPN turned on.
Instead of using a traditional NAT, it uses a userspace network stack. The virtual machine is
now seen by the host operating system as a normal application.
Instructions
Windows
Since 1.32.1, the default network mode is usermode.
macOS
Since 1.26.0, the default network mode is usermode.
- Run the tray application
- Click start, or run crc start
Linux
- Clean up the previous installation of crc: run crc delete, crc cleanup, and remove the $HOME/.crc folder
- Remove any *.crc.testing records from your hosts file /etc/hosts
- Activate user network mode: crc config set network-mode user
- Prepare the host machine: crc setup
- Start the virtual machine as usual: crc start
Reaching the host from the VM
You can enable this feature with the config property host-network-access.
- Close the application (or the daemon) and the VM.
- crc config set host-network-access true
- Start the application (or the daemon) and the VM.
In your containers, you will be able to use the host.crc.testing DNS name to reach the host. It is the equivalent of host.docker.internal in Docker Desktop.
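As a quick check, assuming something is listening on port 8000 on your host, a throwaway pod should be able to reach it (the image and port are just examples):
$ oc run hostcheck --rm -it --restart=Never --image=docker.io/curlimages/curl -- curl -sI http://host.crc.testing:8000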
Using with Docker Desktop
You can build your containers with Docker Desktop and push them to the OpenShift registry.
On macOS and Linux, you can directly use docker login.
On Windows, this is slightly more complicated. Please follow this guide:
- https://github.com/code-ready/crc/issues/1917#issuecomment-814037029
- or https://github.com/code-ready/crc/issues/2354#issuecomment-851320171
What to test
- Please turn on your VPN software before and after starting CRC. We would like to know if CRC behaves well and if you can log in to the OpenShift console (use crc console).
- Deploy a pod that connects to a resource on your VPN network.
caution
Don't run Docker Desktop with Kubernetes activated and CRC side by side. This might lead to port conflicts.
Technical details
Block traffic with nwfilter
When testing the bundle, we needed to block NTP traffic to test the cert recovery code. It turns out this can be done directly for the crc VM with libvirt on Linux:
$ cat drop-ntp.xml
<filter name='drop-ntp' chain='ipv4'>
<rule action='drop' direction='out' >
<ip protocol='udp' dstportstart='123'/>
</rule>
<rule action='drop' direction='in' priority='100'>
<ip protocol='udp' srcportstart='123'/>
</rule>
</filter>
$ virsh -c qemu:///system nwfilter-define drop-ntp.xml
$ virsh -c qemu:///system edit crc
Then, in the crc domain XML definition, a <filterref> element needs to be added:
<domain>
...
<devices>
...
<interface type='network'>
...
<filterref filter='drop-ntp'/>
</interface>
This filter can be applied dynamically to a running VM using virt-xml:
$ virt-xml -c qemu:///system crc --edit --update --network filterref.filter='drop-ntp'
With that in place, we can run the cluster "in the future" by changing this in the domain xml:
<clock offset='variable' adjustment='30000' basis='utc'>
'adjustment' is a value in seconds. I need to experiment a bit more with this: to exercise the cert recovery code, we probably need to change the time on the host too, and with NTP blocked, the cluster will probably sync with the host time without needing any changes to that <clock> element.
A similar nwfilter rule can be used for http/https traffic; this is useful for proxy testing. If the proxy is running on ports 3128/3129, this filter will block most http/https traffic which is not going through the proxy:
<filter name='drop-http-https' chain='ipv4'>
<rule action='drop' direction='out' >
<ip protocol='tcp' dstportstart='443'/>
</rule>
<rule action='drop' direction='in' priority='100'>
<ip protocol='tcp' srcportstart='443'/>
</rule>
<rule action='drop' direction='out' >
<ip protocol='tcp' dstportstart='80'/>
</rule>
<rule action='drop' direction='in' priority='100'>
<ip protocol='tcp' srcportstart='80'/>
</rule>
</filter>
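You can confirm the filters are defined with:
$ virsh -c qemu:///system nwfilter-list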
Reference
- https://github.com/crc-org/crc/issues/1242#issuecomment-629698002
- https://libvirt.org/formatnwfilter.html
Nested virtualization setup
warning
Using nested virtualization with CRC is generally not supported.
In some cases nested virtualization works and can be useful, for example for testing crc on Windows.
Consider for example the following setup:
- Ryzen 3600 with 32GiB RAM
vendor_id : AuthenticAMD
cpu family : 23
model : 113
model name : AMD Ryzen 5 3600 6-Core Processor
stepping : 0
- OS: RHEL8 (tested 8.3 and 8.4)
libvirt-6.0.0-35.module+el8.4.0+10230+7a9b21e4.x86_64
qemu-kvm-4.2.0-48.module+el8.4.0+10368+630e803b.x86_64
Most RHEL8 libvirt/qemu versions should work
Virtualization is enabled in the host's BIOS, and nested virt has to be explicitly enabled on the host through options kvm_amd nested=1 in /etc/modprobe.d/kvm.conf.
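You can check whether nested virtualization is actually enabled on an AMD host with:
$ cat /sys/module/kvm_amd/parameters/nested   # prints 1 (or Y on some kernels) when nested virt is enabled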
The VM is created using virt-manager; most settings are the default ones except for 14GB of memory, 4 vCPUs, and a 120GB qcow2 disk. These numbers are not representative of the minimum required config; they are just the ones I picked to be sure the VM is correctly sized.
For nested virtualization to work, the VM CPU config must be set to "host-passthrough". This can be done before starting the Windows VM installation, or at a later time through the 'Details' window. The VM then needs to be shut off and started again for this setting to take effect.
After this, just install your Windows VM, make sure it's up to date, and you should be able to download and start crc as if on a real machine. Since it's a VM, you can take disk snapshots at interesting times, so that you can get back to that state later on.
Apple Silicon support
M1 support uses vfkit, a small binary wrapper which maps command-line arguments to the API provided by the macOS virtualization framework. It does this using the Go bindings provided by https://github.com/Code-Hex/vz
Lifecycle
The main reason for needing this separate vfkit binary is that when creating VMs with macOS virtualization framework, their lifetime is tied to the process which created them. This is why we need a separate process which will stay alive as long as the VM is needed.
note
There is no separate machine driver for vfkit; it's integrated directly into the crc codebase, similarly to what is done for Hyper-V.
note
Apple silicon support has been available since CRC 2.4.1.
Add another user to the cluster
For CRC we use the htpasswd method to manage users in the OpenShift cluster (see https://docs.openshift.com/container-platform/latest/authentication/identity_providers/configuring-htpasswd-identity-provider.html#add-identity-provider_configuring-htpasswd-identity-provider). By default we have the developer and kubeadmin users, which are created at disk creation time, and the kubeadmin user has the cluster-admin role.
If you want to add a new user to the cluster, the following steps should work.
note
Make sure you have the htpasswd command. In Fedora it is provided by the httpd-tools package.
$ export HTPASSWD_FILE=/tmp/htpasswd
$ htpasswd -c -B -b $HTPASSWD_FILE user1 password1
$ htpasswd -b $HTPASSWD_FILE user2 password2
$ cat $HTPASSWD_FILE
user1:$2y$05$4QxnejXAJ2nmnVFXlNXn/ega9BUrKbaGLpOtdS2LJXmbOECXWSVDa
user2:$apr1$O9jL/dfz$qXs216/W8Waw2.p7rvhJR.
warning
Make sure the existing developer and kubeadmin users are part of the htpasswd file, because kubeadmin has the cluster-admin role.
$ oc get secrets htpass-secret -n openshift-config -ojsonpath='{.data.htpasswd}' | base64 -d >> htpasswd
$ oc create secret generic htpass-secret --from-file=$HTPASSWD_FILE -n openshift-config --dry-run -o yaml > /tmp/htpass-secret.yaml
$ oc replace -f /tmp/htpass-secret.yaml
Check the auth pods, which are going to be recreated because of this config change.
$ oc get pods -n openshift-authentication
$ oc get pods -n openshift-authentication
NAME READY STATUS RESTARTS AGE
oauth-openshift-7f4994c969-8fz44 1/1 Running 0 11s
oauth-openshift-7f4994c969-mjrjc 1/1 Running 0 11s
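You can then verify that a new user can log in, using the example credentials from the htpasswd file above:
$ oc login -u user1 -p password1 https://api.crc.testing:6443
$ oc adm policy add-cluster-role-to-user cluster-admin user1   # optional, run as kubeadmin if user1 needs admin rights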
Add mirror registry
- Create an ImageContentSourcePolicy
$ cat registryrepomirror.yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
name: ubi8repo
spec:
repositoryDigestMirrors:
- mirrors:
- <my-repo-host>:<port>/ubi8-minimal
source: registry.redhat.io/ubi8-minimal
$ oc apply -f registryrepomirror.yaml
- Add registry to the OpenShift image config
- https://docs.openshift.com/container-platform/latest/openshift_images/image-configuration.html#images-configuration-file_image-configuration
- In case of an insecure registry, check https://github.com/code-ready/crc/wiki/Adding-an-insecure-registry
- In case of a self-signed registry, check https://github.com/code-ready/crc/wiki/Adding-a-self-signed-certificate-registry
- SSH to the VM and perform the following
note
For more info about registries.conf, check https://github.com/containers/image/blob/master/docs/containers-registries.conf.5.md or man containers-registries.conf.
Here we are using a mirror registry which is self-signed and behind authentication.
$ crc ip
192.168.64.92
$ ssh -i ~/.crc/machines/crc/id_rsa -o StrictHostKeyChecking=no core@192.168.64.92
<CRC-VM> $ cat /etc/containers/registries.conf
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
[[registry]]
prefix = ""
location = "registry.redhat.io/ubi8-minimal"
mirror-by-digest-only = true
[[registry.mirror]]
location = "<your-mirror-registry>:<port>/ubi8-minimal"
- If you need to have a global pull secret, then update the /var/lib/kubelet/config.json file inside the VM along with the pull-secret secret in the openshift-config namespace.
$ oc get secret pull-secret -n openshift-config --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode > pull-secret.json
$ oc registry login -a pull-secret.json --registry <your-mirror-registry>:<port> --auth-basic='<username>:<password>'
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.json
<CRC-VM> $ cat /var/lib/kubelet/config.json
note
This should have the same content as the pull-secret.json file.
- Restart the kubelet and crio services in the VM.
<CRC-VM> $ systemctl restart crio
<CRC-VM> $ systemctl restart kubelet
References
- https://docs.openshift.com/container-platform/latest/openshift_images/samples-operator-alt-registry.html#installation-creating-mirror-registry_samples-operator-alt-registry
- https://docs.openshift.com/container-platform/latest/openshift_images/image-configuration.html#images-configuration-registry-mirror_image-configuration
Add a self-signed certificate registry
CRC does not have any option to configure a self-signed registry.
note
For insecure registries (no valid TLS certificates, or HTTP-only), see this page.
Instructions
To provide the self-signed registry my.self-signed.registry.com:
note
The registry needs to be resolvable by DNS and reachable from the CRC VM.
- Start the cluster and log in to it as kubeadmin via oc:
$ crc start
[...]
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
$ eval $(crc oc-env)
$ oc login -u kubeadmin -p <kubeadmin_password> https://api.crc.testing:6443
Login successful.
You have access to 51 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
- Follow https://docs.openshift.com/container-platform/latest/openshift_images/image-configuration.html#images-configuration-file_image-configuration to make the required changes in the cluster image resource.
- SSH to the VM and update the registry cert file (ref: https://github.com/containers/image/blob/master/docs/containers-certs.d.5.md):
<CRC-VM> $ sudo mkdir /etc/containers/certs.d/my.self-signed.registry.com
<CRC-VM> $ cat /etc/containers/certs.d/my.self-signed.registry.com/ca.crt
-----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE1ODEyOTA4MTUwHhcNMjAwMjA5MjMyNjU0WhcNMjUwMjA3
MjMyNjU1WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1ODEyOTA4MTUw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9GYTSeKJisiaUPVV8jlrm
Btx7cA6sFlPwjJB08G7eVdlBzjUwDZ59kTNlJbwzJXDcvIfYFabTt0G69diXOfmy
fQZvm5odVLARSVnDqBf3qIymiMMR8iWqxhwYkbB+Z5Dk8h2miR9FxyWg/q/pDw0G
yurtuOerMgZ0T5I0DSPnva1PXwJBV5lyR/65gC62F8H0K/MOInOfjdMifMoKdpBj
3o+tF1iv91mQztYC4y0G7Y3pq75bfJb1XQw0bqdYe4ULxDaZnAW7jRrvdiSSWbSd
zbGoZ2yFNIu+WKvUu8BOnUwFFVqLb8BLXtKbRxuQ
-----END CERTIFICATE-----
<CRC-VM> $ sudo systemctl restart crio
<CRC-VM> $ sudo systemctl restart kubelet
<CRC-VM> $ exit
- If the self-signed registry requires authentication, then you need to follow https://docs.openshift.com/container-platform/latest/openshift_images/managing-images/using-image-pull-secrets.html#images-allow-pods-to-reference-images-from-secure-registries_using-image-pull-secrets
- Deploy an app using the self-signed registry.
$ oc new-app --docker-image=my.self-signed.registry.com/test-project1/httpd-example:latest --allow-missing-images --name=world
[...]
--> Creating resources ...
deploymentconfig.apps.openshift.io "world" created
--> Success
Run 'oc status' to view your app.
$ oc get pods
NAME READY STATUS RESTARTS AGE
world-1-6xbpb 1/1 Running 0 2m10s
world-1-deploy 0/1 Completed 0 2m19s
Adding an insecure registry
CRC does not have a configuration option to provide an insecure registry. An insecure registry is a registry without a valid TLS certificate, or one which only supports HTTP connections.
note
For self-signed registries, see this page.
Instructions
To provide the insecure registry my.insecure.registry.com:8888:
note
The registry needs to be resolvable by DNS and reachable from the CRC VM.
- Start the cluster and log in to it as kubeadmin via oc:
$ crc start
[...]
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
$ eval $(crc oc-env)
$ oc login -u kubeadmin -p <kubeadmin_password> https://api.crc.testing:6443
Login successful.
You have access to 51 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
- Patch the image.config.openshift.io resource and add your insecure registry details:
$ oc patch --type=merge --patch='{
"spec": {
"registrySources": {
"insecureRegistries": [
"my.insecure.registry.com:8888"
]
}
}
}' image.config.openshift.io/cluster
image.config.openshift.io/cluster patched
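You can confirm the change was applied to the cluster image config:
$ oc get image.config.openshift.io/cluster -o yaml | grep -A 2 insecureRegistries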
- SSH to the VM and update the /etc/containers/registries.conf file to add details about the insecure registry:
$ crc ip
192.168.64.92
$ ssh -i ~/.crc/machines/crc/id_rsa -o StrictHostKeyChecking=no core@192.168.64.92
<CRC-VM> $ sudo cat /etc/containers/registries.conf
unqualified-search-registries = ['registry.access.redhat.com', 'docker.io']
[[registry]]
location = "my.insecure.registry.com:8888"
insecure = true
blocked = false
mirror-by-digest-only = false
prefix = ""
<CRC-VM> $ sudo systemctl restart crio
<CRC-VM> $ sudo systemctl restart kubelet
<CRC-VM> $ exit
- Deploy your workload using the insecure registry:
$ cat test.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: test
name: test
spec:
containers:
- image: my.insecure.registry.com:8888/test/testimage
name: test
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
$ oc create -f test.yaml
pod/test created
$ watch oc get events
46s Normal Scheduled pod/test Successfully assigned default/test to crc-shdl4-master-0
38s Normal Pulling pod/test Pulling image "my.insecure.registry.com:8888/test/testimage"
15s Normal Pulled pod/test Successfully pulled image "my.insecure.registry.com:8888/test/testimage"
15s Normal Created pod/test Created container test
15s Normal Started pod/test Started container test
$ oc get pods
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 8m40s
To perform all the steps as a bash script, you can try something like the following.
$ oc login -u kubeadmin -p $(crc console --credentials | awk -F "kubeadmin" '{print $2}' | cut -c 5- | rev | cut -c31- | rev) https://api.crc.testing:6443
$ oc patch --type=merge --patch='{
"spec": {
"registrySources": {
"insecureRegistries": [
"YOUR_REGISTRY"
]
}
}
}' image.config.openshift.io/cluster
$ ssh -i ~/.crc/machines/crc/id_rsa -o StrictHostKeyChecking=no core@$(crc ip) << EOF
sudo echo " " | sudo tee -a /etc/containers/registries.conf
sudo echo "[[registry]]" | sudo tee -a /etc/containers/registries.conf
sudo echo " location = \"YOUR_REGISTRY\"" | sudo tee -a /etc/containers/registries.conf
sudo echo " insecure = true" | sudo tee -a /etc/containers/registries.conf
sudo echo " blocked = false" | sudo tee -a /etc/containers/registries.conf
sudo echo " mirror-by-digest-only = false" | sudo tee -a /etc/containers/registries.conf
sudo echo " prefix = \"\"" | sudo tee -a /etc/containers/registries.conf
sudo systemctl restart crio
sudo systemctl restart kubelet
EOF
Change the domain for CRC
The default route for apps is apps-crc.testing and for the API server it is api.crc.testing. Some users want to use a different domain, and as long as it resolves to the instance IP, a user should be able to change the domain name.
Changes to the ingress domain are not permitted as a day-2 operation (https://access.redhat.com/solutions/4853401).
What we have to do is add component routes and appsDomain to the ingress resource to make our custom domain work with the cluster.
- https://docs.openshift.com/container-platform/latest/rest_api/config_apis/ingress-config-openshift-io-v1.html#spec-componentroutes
- https://docs.openshift.com/container-platform/latest/web_console/customizing-the-web-console.html#customizing-the-console-route_customizing-web-console
- https://docs.openshift.com/container-platform/latest/authentication/configuring-internal-oauth.html#customizing-the-oauth-server-url_configuring-internal-oauth
- https://docs.openshift.com/container-platform/latest/security/certificates/api-server.html#customize-certificates-api-add-named_api-server-certificates
In these steps we are using <VM_IP>.nip.io on a Linux box where the IP is set to 192.168.130.11; in case of user-mode networking, you can check it with the crc ip command.
note
Whatever domain you want to use, make sure it is resolvable inside the cluster. Otherwise, after all those steps you will see the following warning for the oauth and console operators, because console-openshift-console.apps.192.168.130.11.nip.io cannot be resolved inside the cluster.
RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.192.168.130.11.nip.io): Get "https://console-openshift-console.apps.192.168.130.11.nip.io": dial tcp: lookup console-openshift-console.apps.192.168.130.11.nip.io on 10.217.4.10:53: server misbehaving
Instructions
- Create a custom cert/key pair for the domain
$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout nip.key -out nip.crt -subj "/CN=192.168.130.11.nip.io" -addext "subjectAltName=DNS:apps.192.168.130.11.nip.io,DNS:*.apps.192.168.130.11.nip.io,DNS:api.192.168.130.11.nip.io"
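You can double-check that the generated certificate contains the expected subject alternative names:
$ openssl x509 -in nip.crt -noout -text | grep -A 1 'Subject Alternative Name'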
- Create a TLS secret using that cert/key pair (here named nip-secret)
$ oc create secret tls nip-secret --cert=nip.crt --key=nip.key -n openshift-config
- Create an ingress patch which has details about component routes and appsDomain, and apply it.
$ cat <<EOF > ingress-patch.yaml
spec:
appsDomain: apps.192.168.130.11.nip.io
componentRoutes:
- hostname: console-openshift-console.apps.192.168.130.11.nip.io
name: console
namespace: openshift-console
servingCertKeyPairSecret:
name: nip-secret
- hostname: oauth-openshift.apps.192.168.130.11.nip.io
name: oauth-openshift
namespace: openshift-authentication
servingCertKeyPairSecret:
name: nip-secret
EOF
$ oc patch ingresses.config.openshift.io cluster --type=merge --patch-file=ingress-patch.yaml
- Create a patch request for the apiserver to add our custom certificate as a serving cert.
$ oc patch apiserver cluster --type=merge -p '{"spec":{"servingCerts": {"namedCertificates":[{"names":["api.192.168.130.11.nip.io"],"servingCertificate": {"name": "nip-secret"}}]}}}'
- Update the old route's host to the new one.
$ oc patch -p '{"spec": {"host": "default-route-openshift-image-registry.192.168.130.11.nip.io"}}' route default-route -n openshift-image-registry --type=merge
- Keep looking at oc get co to make sure everything is available.
# Wait till all the operators reconcile and are in Available state (not progressing or degraded)
$ oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.11.3 True False False 73m
config-operator 4.11.3 True False False 5d19h
console 4.11.3 True False False 73m
dns 4.11.3 True False False 92m
etcd 4.11.3 True False False 5d19h
image-registry 4.11.3 True False False 87m
ingress 4.11.3 True False False 5d19h
kube-apiserver 4.11.3 True False False 5d19h
kube-controller-manager 4.11.3 True False False 5d19h
kube-scheduler 4.11.3 True False False 5d19h
machine-api 4.11.3 True False False 5d19h
machine-approver 4.11.3 True False False 5d19h
machine-config 4.11.3 True False False 5d19h
marketplace 4.11.3 True False False 5d19h
network 4.11.3 True False False 5d19h
node-tuning 4.11.3 True False False 5d19h
openshift-apiserver 4.11.3 True False False 80m
openshift-controller-manager 4.11.3 True False False 87m
openshift-samples 4.11.3 True False False 5d19h
operator-lifecycle-manager 4.11.3 True False False 5d19h
operator-lifecycle-manager-catalog 4.11.3 True False False 5d19h
operator-lifecycle-manager-packageserver 4.11.3 True False False 92m
service-ca 4.11.3 True False False 5d19h
Try to log in to the cluster using the new API URI
# Get the kubeadmin user password
$ crc console --credentials
$ oc login -u kubeadmin -p <password> https://api.192.168.130.11.nip.io:6443
The server is using a certificate that does not match its hostname: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, 172.25.0.1, not api.192.168.130.11.nip.io
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login successful.
You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Try to create a sample app and expose the route
$ oc new-project demo
$ oc new-app ruby~https://github.com/sclorg/ruby-ex.git
$ oc expose svc/ruby-ex
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
ruby-ex ruby-ex-demo.192.168.130.11.nip.io ruby-ex 8080-tcp None
$ curl -Ik ruby-ex-demo.192.168.130.11.nip.io
HTTP/1.1 200 OK
content-type: text/html
content-length: 39559
set-cookie: 5735a0b0e41f7362ba688320968404a3=4268ca9aa18f871004be9c1bd0112787; path=/; HttpOnly
cache-control: private
Custom CA cert for proxy
note
These steps are no longer needed; this is automated in newer CRC releases. This page is only kept for historical documentation.
- Start CRC with the proxy settings as mentioned here.
- Create a user-ca-bundle.yaml resource as instructed by the OpenShift docs:
$ cat user-ca-bundle.yaml
apiVersion: v1
data:
ca-bundle.crt: |
-----BEGIN CERTIFICATE-----
.
.
.
-----END CERTIFICATE-----
kind: ConfigMap
metadata:
name: user-ca-bundle
namespace: openshift-config
- Apply the resource to the cluster:
$ oc apply -f user-ca-bundle.yaml
- Check the status of the operators (most of them will go to a progressing state and then come back as available):
$ oc get co
- SSH to the crc VM, add the custom cert, and run update-ca-trust:
$ crc ip
$ ssh -i ~/.crc/machines/crc/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@<crc_ip>
$ sudo vi /etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt
$ sudo update-ca-trust
$ sudo systemctl restart crio
$ sudo systemctl restart kubelet
- Exit from the crc VM and check the operators:
$ oc get co
Dynamic volume provisioning
By default CRC does not support dynamic volume provisioning because it relies on hostPath volumes. https://github.com/rancher/local-path-provisioner has a way to use hostPath to create a local provisioner that can be used as a dynamic one. To make this work on OpenShift, some small changes are required.
Instructions for local-path-provisioner
Start crc the usual way and wait till the cluster is up.
note
This uses https://github.com/rancher/local-path-provisioner
$ oc login -u kubeadmin -p <passwd> https://api.crc.testing:6443
$ oc new-project local-path-storage
$ oc create serviceaccount local-path-provisioner-service-account -n local-path-storage
$ oc adm policy add-scc-to-user hostaccess -z local-path-provisioner-service-account -n local-path-storage
$ cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: local-path-provisioner-role
rules:
- apiGroups: [""]
resources: ["nodes", "persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "persistentvolumes", "pods"]
verbs: ["*"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-path-provisioner-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: local-path-provisioner
namespace: local-path-storage
spec:
replicas: 1
selector:
matchLabels:
app: local-path-provisioner
template:
metadata:
labels:
app: local-path-provisioner
spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.12
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
name: local-path-config
namespace: local-path-storage
data:
config.json: |-
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/mnt/pv-data"]
}
]
}
EOF
Check that the provisioner pod is running
$ oc get all -n local-path-storage
NAME READY STATUS RESTARTS AGE
pod/local-path-provisioner-58b55cb6b6-rn2vd 1/1 Running 0 15m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/local-path-provisioner 1/1 1 1 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/local-path-provisioner-58b55cb6b6 1 1 1 15m
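To verify that dynamic provisioning works, you can create a PVC using the local-path storage class together with a pod that consumes it (names and image are just examples; the PVC only binds once the pod is scheduled, because of WaitForFirstConsumer):
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-path
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path-pod
spec:
  containers:
  - name: test
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ['sh', '-c', 'echo hello > /data/hello && sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-local-path
EOF
$ oc get pvc test-local-path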
Instructions for hostpath-provisioner
note
This uses https://github.com/kubevirt/hostpath-provisioner/
oc apply -f 'https://raw.githubusercontent.com/kubevirt/hostpath-provisioner/main/deploy/kubevirt-hostpath-security-constraints.yaml'
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
name: kubevirt-hostpath-provisioner
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: kubevirt-hostpath-provisioner
provisioner: kubevirt.io/hostpath-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubevirt-hostpath-provisioner
subjects:
- kind: ServiceAccount
name: kubevirt-hostpath-provisioner-admin
namespace: kubevirt-hostpath-provisioner
roleRef:
kind: ClusterRole
name: kubevirt-hostpath-provisioner
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubevirt-hostpath-provisioner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubevirt-hostpath-provisioner-admin
namespace: kubevirt-hostpath-provisioner
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kubevirt-hostpath-provisioner
labels:
k8s-app: kubevirt-hostpath-provisioner
namespace: kubevirt-hostpath-provisioner
spec:
selector:
matchLabels:
k8s-app: kubevirt-hostpath-provisioner
template:
metadata:
labels:
k8s-app: kubevirt-hostpath-provisioner
spec:
serviceAccountName: kubevirt-hostpath-provisioner-admin
containers:
- name: kubevirt-hostpath-provisioner
image: quay.io/kubevirt/hostpath-provisioner
imagePullPolicy: Always
env:
- name: USE_NAMING_PREFIX
value: "false" # change to true, to have the name of the pvc be part of the directory
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: PV_DIR
value: /var/hpvolumes
volumeMounts:
- name: pv-volume # root dir where your bind mounts will be on the node
mountPath: /var/hpvolumes
#nodeSelector:
#- name: xxxxxx
volumes:
- name: pv-volume
hostPath:
path: /mnt/pv-data
EOF
Check that the provisioner pod is running
$ oc get all -n kubevirt-hostpath-provisioner
NAME READY STATUS RESTARTS AGE
pod/kubevirt-hostpath-provisioner-xw777 1/1 Running 0 41m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kubevirt-hostpath-provisioner 1 1 1 1 1 <none> 41m
Failed to add the user to the libvirt group in Fedora Silverblue
If you are trying to use CRC on Fedora Silverblue, you might hit the following error: crc setup fails to add the user to the libvirt group.
This is a longstanding bug in Fedora Silverblue where the system groups in /usr/lib/group are not reflected in /etc/group.
As a result, the libvirt group does exist after libvirt is installed, but the user cannot be added to said group via crc.
Users can manually work around this issue by copying the libvirt group info from /usr/lib/group to /etc/group:
grep -E '^libvirt:' /usr/lib/group | sudo tee -a /etc/group
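After that, you can verify the group is now visible and, if needed, add yourself to it manually:
$ getent group libvirt              # the group should now show up
$ sudo usermod -a -G libvirt $USER  # add the current user without rerunning crc setup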
References
- https://docs.fedoraproject.org/en-US/fedora-silverblue/troubleshooting/#_unable_to_add_user_to_group
Thanks to @adambkaplan for creating the issue and providing the workaround (#2402).
Podman support
Usage
- crc setup
- crc start
- eval $(crc podman-env)
- podman version (macOS/Windows) or podman-remote version (Linux)
Limitations
- It exposes the rootless podman socket of the virtual machine
- It still requires the full OpenShift cluster to be running
- Bind mounts don't work.
- Ports are not automatically exposed on the host.
  - Workaround when using vsock network mode:
    - Expose a port: curl --unix-socket ~/.crc/crc-http.sock http:/unix/network/services/forwarder/expose -X POST -d '{"local":":8080","remote":"192.168.127.3:8080"}'
    - Unexpose a port: curl --unix-socket ~/.crc/crc-http.sock http:/unix/network/services/forwarder/unexpose -X POST -d '{"local":":8080"}'
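As a usage sketch of this workaround, assuming a container in the VM publishes port 8080 (the image and ports are examples; 192.168.127.3 is the VM address from the expose call above):
$ eval $(crc podman-env)
$ podman run -d --name web -p 8080:80 docker.io/library/nginx
$ curl --unix-socket ~/.crc/crc-http.sock http:/unix/network/services/forwarder/expose -X POST -d '{"local":":8080","remote":"192.168.127.3:8080"}'
$ curl -I http://localhost:8080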
Using ko with the CRC exposed registry
By default CRC exposes the internal registry default-route-openshift-image-registry.apps-crc.testing for use. But this registry route uses a self-signed certificate. To use it with ko you need to follow some manual steps.
Instructions
- Download the route CA cert which is used to sign the registry route.
$ oc extract secret/router-ca --keys=tls.crt -n openshift-ingress-operator
- Use this cert to log in to the registry using docker.
$ sudo mkdir -p /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing
$ sudo cp tls.crt /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing
$ docker login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing
- ko doesn't have a way to specify the registry cert (https://github.com/google/ko/issues/142), so on Linux you can use the SSL_CERT_FILE environment variable to specify it, and on macOS you need to add it to the keyring (https://github.com/google/go-containerregistry/issues/211)
$ export SSL_CERT_FILE=/etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing/tls.crt
- Now you can use ko with the internal registry to push the image (using the Tekton Hub example here).
$ git clone https://github.com/redhat-developer/tekton-hub.git
$ cd /tekton-hub/backend/api
$ KO_DOCKER_REPO=default-route-openshift-image-registry.apps-crc.testing/tekton-hub ko apply -f config/
$ KO_DOCKER_REPO=default-route-openshift-image-registry.apps-crc.testing/tekton-hub ko apply -f config/
2020/03/16 12:15:55 Using base gcr.io/distroless/static:latest for github.com/redhat-developer/tekton-hub/backend/api/cmd/api
namespace/tekton-hub unchanged
secret/db configured
persistentvolumeclaim/db unchanged
deployment.apps/db unchanged
service/db unchanged
secret/api configured
2020/03/16 12:15:58 Building github.com/redhat-developer/tekton-hub/backend/api/cmd/api
2020/03/16 12:16:05 Publishing default-route-openshift-image-registry.apps-crc.testing/tekton-hub/api-b786b59ff17bae65aa137e516553ea05:latest
2020/03/16 12:16:05 Published default-route-openshift-image-registry.apps-crc.testing/tekton-hub/api-b786b59ff17bae65aa137e516553ea05@sha256:34f4ad707c69fc7592ae3f92f62cf5741468fc7083d0662dd67dc15b08cf5128
deployment.apps/api unchanged
route.route.openshift.io/api unchanged
service/api unchanged
- By default the exposed registry is behind authentication, so you will see the following.
$ oc get all -n tekton-hub
NAME READY STATUS RESTARTS AGE
pod/api-6cf586db66-4djtr 0/1 ImagePullBackOff 0 88m
pod/db-7f6bdf76c8-g6g84 1/1 Running 2 3d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/api NodePort 172.30.62.51 <none> 5000:32601/TCP 3d
service/db ClusterIP 172.30.16.148 <none> 5432/TCP 3d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/api 0/1 1 0 88m
deployment.apps/db 1/1 1 1 3d
NAME DESIRED CURRENT READY AGE
replicaset.apps/api-6cf586db66 1 1 0 88m
replicaset.apps/db-7f6bdf76c8 1 1 1 3d
NAME IMAGE REPOSITORY TAGS UPDATED
imagestream.image.openshift.io/api-b786b59ff17bae65aa137e516553ea05 default-route-openshift-image-registry.apps-crc.testing/tekton-hub/api-b786b59ff17bae65aa137e516553ea05 latest 2 hours ago
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/api api-tekton-hub.apps-crc.testing api <all> edge/Redirect None
$ oc get events -n tekton-hub
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Normal Scheduled pod/api-6cf586db66-4djtr Successfully assigned tekton-hub/api-6cf586db66-4djtr to crc-jccc5-master-0
87m Normal Pulling pod/api-6cf586db66-4djtr Pulling image "default-route-openshift-image-registry.apps-crc.testing/tekton-hub/api-b786b59ff17bae65aa137e516553ea05@sha256:34f4ad707c69fc7592ae3f92f62cf5741468fc7083d0662dd67dc15b08cf5128"
87m Warning Failed pod/api-6cf586db66-4djtr Failed to pull image "default-route-openshift-image-registry.apps-crc.testing/tekton-hub/api-b786b59ff17bae65aa137e516553ea05@sha256:34f4ad707c69fc7592ae3f92f62cf5741468fc7083d0662dd67dc15b08cf5128": rpc error: code = Unknown desc = Error reading manifest sha256:34f4ad707c69fc7592ae3f92f62cf5741468fc7083d0662dd67dc15b08cf5128 in default-route-openshift-image-registry.apps-crc.testing/tekton-hub/api-b786b59ff17bae65aa137e516553ea05: unauthorized: authentication required
- You need to add the docker registry secret to the tekton-hub namespace.
$ oc create secret docker-registry internal-registry --docker-server=default-route-openshift-image-registry.apps-crc.testing --docker-username=kubeadmin --docker-password=$(oc whoami -t) --docker-email=abc@gmail.com -n tekton-hub
$ oc secrets link default internal-registry --for=pull -n tekton-hub
$ oc secrets link builder internal-registry -n tekton-hub
$ KO_DOCKER_REPO=default-route-openshift-image-registry.apps-crc.testing/tekton-hub ko apply -f config/
<== Remove old ImagePullBackOff pod ==>
$ oc delete pod/api-6cf586db66-4djtr -n tekton-hub