1. Introducing CRC
1.1. About CRC
CRC brings a minimal OpenShift Container Platform 4 cluster and Podman container runtime to your local computer. These runtimes provide minimal environments for development and testing purposes. CRC is mainly targeted at running on developers' desktops. For other OpenShift Container Platform use cases, such as headless or multi-developer setups, use the full OpenShift installer.
See the OpenShift Container Platform documentation for a full introduction to OpenShift Container Platform.
CRC includes the crc
command-line interface (CLI) to interact with the CRC instance using the desired container runtime.
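For example, after installation you can use the CLI to confirm that the executable works and to inspect the state of the instance; crc version and crc status are standard crc subcommands:
$ crc version
$ crc status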
1.2. Differences from a production OpenShift Container Platform installation
The OpenShift preset for CRC provides a regular OpenShift Container Platform installation with the following notable differences:
-
The OpenShift Container Platform cluster is ephemeral and is not intended for production use.
-
CRC does not have a supported upgrade path to newer OpenShift Container Platform versions. Upgrading the OpenShift Container Platform version may cause issues that are difficult to reproduce.
-
It uses a single node which behaves as both a control plane and worker node.
-
It disables the Cluster Monitoring Operator by default. As a result, the corresponding part of the web console is non-functional.
-
The OpenShift Container Platform cluster runs in a virtual machine known as an instance. This may cause other differences, particularly with external networking.
The OpenShift Container Platform cluster provided by CRC also includes the following non-customizable cluster settings. These settings should not be modified:
-
Use of the *.crc.testing domain.
-
The address range used for internal cluster communication.
-
The cluster uses the 172 address range. This can cause issues when, for example, a proxy is run in the same address space.
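As a rough, optional check on Linux, you can list any host routes that already use the 172 address range before starting CRC; an overlap with a proxy or VPN in this range is a common source of conflicts:
$ ip route | grep -E '^172\.'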
2. Installation
2.1. Minimum system requirements
CRC has the following minimum hardware and operating system requirements.
2.1.1. Hardware requirements
CRC is supported on AMD64 and Intel 64 processor architectures. The Podman container runtime preset is supported on the ARM-based M1 architecture. The OpenShift Container Platform preset is not supported on the M1 architecture. CRC does not support nested virtualization.
Depending on the desired container runtime, CRC requires the following system resources:
For OpenShift Container Platform
-
4 physical CPU cores
-
9 GB of free memory
-
35 GB of storage space
Note: The OpenShift Container Platform cluster requires these minimum resources to run in the CRC instance. Some workloads may require more resources. To assign more resources to the CRC instance, see Configuring the instance.
For the Podman container runtime
-
2 physical CPU cores
-
2 GB of free memory
-
35 GB of storage space
2.1.2. Operating system requirements
CRC requires the following minimum version of a supported operating system:
Microsoft Windows
-
On Microsoft Windows, CRC requires the Windows 10 Fall Creators Update (version 1709) or later. CRC does not work on earlier versions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
-
On macOS, CRC requires macOS 11 Big Sur or later. CRC does not work on earlier versions of macOS.
Linux
-
On Linux, CRC is supported only on the latest two Red Hat Enterprise Linux/CentOS 7, 8 and 9 minor releases and on the latest two stable Fedora releases.
-
When using Red Hat Enterprise Linux, the machine running CRC must be registered with the Red Hat Customer Portal.
-
Ubuntu 18.04 LTS or later and Debian 10 or later are not supported and may require manual setup of the host machine.
-
See Required software packages to install the required packages for your Linux distribution.
2.2. Required software packages for Linux
CRC requires the libvirt
and NetworkManager
packages to run on Linux.
Consult the following table to find the command used to install these packages for your Linux distribution:
Linux Distribution | Installation command |
---|---|
Fedora/Red Hat Enterprise Linux/CentOS | sudo dnf install NetworkManager libvirt |
Debian/Ubuntu | sudo apt install network-manager libvirt-daemon libvirt-daemon-system |
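After installing the packages, you can optionally confirm that the corresponding services are present; the exact service names can vary slightly between distributions:
$ systemctl status NetworkManager libvirtd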
2.3. Installing CRC
CRC is available as a portable executable for Red Hat Enterprise Linux. On Microsoft Windows and macOS, CRC is available using a guided installer.
-
Your host machine must meet the minimum system requirements. For more information, see Minimum system requirements.
-
Download the latest release of CRC for your platform.
-
On Microsoft Windows, extract the contents of the archive.
-
On macOS or Microsoft Windows, run the guided installer and follow the instructions.
Note: On Microsoft Windows, you must install CRC to your local C:\ drive. You cannot run CRC from a network drive.
On Red Hat Enterprise Linux, assuming the archive is in the ~/Downloads directory, follow these steps:
-
Extract the contents of the archive:
$ cd ~/Downloads
$ tar xvf crc-linux-amd64.tar.xz
-
Create the ~/bin directory if it does not exist and copy the
crc
executable to it:$ mkdir -p ~/bin
$ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin
-
Add the ~/bin directory to your
$PATH
:$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
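To confirm that the crc executable is found through your $PATH, open a new terminal and check the reported version:
$ crc version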
2.4. About usage data collection
CRC prompts you before use for optional, anonymous usage data collection to assist with development. No personally identifiable information is collected. You can grant or revoke consent for usage data collection at any time.
-
For more information about collected data, see the Red Hat Telemetry data collection notice.
-
To grant or revoke consent for usage data collection, see Configuring usage data collection.
2.5. Configuring usage data collection
You can grant or revoke consent for usage data collection at any time using the following configuration commands.
Note: Changes to telemetry consent do not modify a running instance. The change will take effect the next time you run the crc start command.
-
To manually enable telemetry, run the following command:
$ crc config set consent-telemetry yes
-
To manually disable telemetry, run the following command:
$ crc config set consent-telemetry no
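To check the current consent setting at any time, query the property with the crc config get subcommand:
$ crc config get consent-telemetry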
-
For more information about the collected data, see the Red Hat Telemetry data collection notice.
2.6. Upgrading CRC
Newer versions of the CRC executable require manual setup to prevent potential incompatibilities with earlier versions.
-
Delete the existing CRC instance:
$ crc delete
Warning: The crc delete command results in the loss of data stored in the CRC instance. Save any desired information stored in the instance before running this command.
-
Replace the earlier
crc
executable with the executable of the latest release. Verify that the newcrc
executable is in use by checking its version:$ crc version
-
Set up the new CRC release:
$ crc setup
-
Start the new CRC instance:
$ crc start
3. Using CRC
3.1. About presets
CRC presets represent a managed container runtime and the lower bounds of system resources required by the instance to run it. CRC offers presets for OpenShift Container Platform, OKD and the Podman container runtime.
On Microsoft Windows and macOS, the CRC guided installer prompts you for your desired preset.
On Linux, the OpenShift Container Platform preset is selected by default.
You can change this selection using the crc config
command before running the crc setup
command.
You can change your selected preset from the system tray on Microsoft Windows and macOS or from the command line on all supported operating systems.
Only one preset can be active at a time.
-
For more information about the minimum system requirements for each preset, see Minimum system requirements.
-
For more information on changing the selected preset, see Changing the selected preset.
3.2. Setting up CRC
The crc setup
command performs operations to set up the environment of your host machine for the CRC instance.
The crc setup
command creates the ~/.crc directory if it does not already exist.
Warning: If you are setting up a new version, capture any changes made to the instance before setting up a new CRC release.
-
On Linux or macOS, ensure that your user account has permission to use the
sudo
command. On Microsoft Windows, ensure that your user account can elevate to Administrator privileges.
Note: Do not run the crc executable as root (or Administrator). Always run the crc executable with your user account.
-
(Optional) On Linux, the OpenShift Container Platform preset is selected by default. To select the Podman container runtime preset:
$ crc config set preset podman
-
Set up your host machine for CRC:
$ crc setup
-
For more information about the available container runtime presets, see About presets.
3.3. Starting the instance
The crc start
command starts the CRC instance and configured container runtime.
-
To avoid networking-related issues, ensure that you are not connected to a VPN and that your network connection is reliable.
-
You set up the host machine using the
crc setup
command. For more information, see Setting up CRC. -
On Microsoft Windows, ensure that your user account can elevate to Administrator privileges.
-
For the OpenShift preset, ensure that you have a valid OpenShift user pull secret. Copy or download the pull secret from the Pull Secret section of the CRC page on the Red Hat Hybrid Cloud Console.
Note: Accessing the user pull secret requires a Red Hat account.
-
Start the CRC instance:
$ crc start
-
For the OpenShift preset, supply your user pull secret when prompted.
Note: The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.
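While the cluster is starting, you can check progress from another terminal; the crc status command reports the state of the instance and the OpenShift cluster:
$ crc status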
-
To change the default resources allocated to the instance, see Configuring the instance.
-
If you see errors during
crc start
, see the Troubleshooting CRC section for potential solutions.
3.4. Accessing the OpenShift cluster
Access the OpenShift Container Platform cluster running in the CRC instance by using the OpenShift Container Platform web console or OpenShift CLI (oc
).
3.4.1. Accessing the OpenShift web console
Access the OpenShift Container Platform web console by using your web browser.
Access the cluster by using either the kubeadmin
or developer
user.
Use the developer
user for creating projects or OpenShift applications and for application deployment.
Use the kubeadmin
user only for administrative tasks such as creating new users or setting roles.
-
CRC is configured to use the OpenShift preset. For more information, see Changing the selected preset.
-
A running CRC instance. For more information, see Starting the instance.
-
To access the OpenShift Container Platform web console with your default web browser, run the following command:
$ crc console
-
Log in as the
developer
user with the password printed in the output of thecrc start
command. You can also view the password for thedeveloper
andkubeadmin
users by running the following command:$ crc console --credentials
See Troubleshooting CRC if you cannot access the OpenShift Container Platform cluster managed by CRC.
-
The OpenShift Container Platform documentation covers the creation of projects and applications.
3.4.2. Accessing the OpenShift cluster with the OpenShift CLI
Access the OpenShift Container Platform cluster managed by CRC by using the OpenShift CLI (oc
).
-
CRC is configured to use the OpenShift preset. For more information, see Changing the selected preset.
-
A running CRC instance. For more information, see Starting the instance.
-
Run the
crc oc-env
command to print the command needed to add the cachedoc
executable to your$PATH
:$ crc oc-env
-
Run the printed command.
-
Log in as the
developer
user:$ oc login -u developer https://api.crc.testing:6443
Note: The crc start command prints the password for the developer user. You can also view it by running the crc console --credentials command.
-
You can now use oc to interact with your OpenShift Container Platform cluster. For example, to verify that the OpenShift Container Platform cluster Operators are available, log in as the kubeadmin user and run the following commands:
$ oc config use-context crc-admin
$ oc whoami
kubeadmin
$ oc get co
Note: CRC disables the Cluster Monitoring Operator by default.
See Troubleshooting CRC if you cannot access the OpenShift Container Platform cluster managed by CRC.
-
The OpenShift Container Platform documentation covers the creation of projects and applications.
3.4.3. Accessing the internal OpenShift registry
The OpenShift Container Platform cluster running in the CRC instance includes an internal container image registry by default. This internal container image registry can be used as a publication target for locally developed container images. To access the internal OpenShift Container Platform registry, follow these steps.
-
CRC is configured to use the OpenShift preset. For more information, see Changing the selected preset.
-
A running CRC instance. For more information, see Starting the instance.
-
A working OpenShift CLI (
oc
) command. For more information, see Accessing the OpenShift cluster with the OpenShift CLI.
-
Check which user is logged in to the cluster:
$ oc whoami
Note: For demonstration purposes, the current user is assumed to be kubeadmin.
-
Log in to the registry as that user with its token:
$ oc registry login --insecure=true
-
Create a new project:
$ oc new-project demo
-
Mirror an example container image:
$ oc image mirror registry.access.redhat.com/ubi8/ubi:latest=default-route-openshift-image-registry.apps-crc.testing/demo/ubi8:latest --insecure=true --filter-by-os=linux/amd64
-
Get imagestreams and verify that the pushed image is listed:
$ oc get is
-
Enable image lookup in the imagestream:
$ oc set image-lookup ubi8
This setting allows the imagestream to be the source of images without having to provide the full URL to the internal registry.
-
Create a pod using the recently pushed image:
$ oc run demo --image=ubi8 --command -- sleep 600s
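As an optional check that is not part of the original procedure, you can confirm that the pod created from the mirrored image reaches the Running state:
$ oc get pod demo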
3.5. Deploying a sample application with odo
You can use odo
to create OpenShift projects and applications from the command line.
This procedure deploys a sample application to the OpenShift Container Platform cluster running in the CRC instance.
-
You have installed
odo
. For more information, see Installingodo
in theodo
documentation. -
CRC is configured to use the OpenShift preset. For more information, see Changing the selected preset.
-
The CRC instance is running. For more information, see Starting the instance.
-
Log in to the running OpenShift Container Platform cluster managed by CRC as the
developer
user:$ odo login -u developer -p developer
-
Create a project for your application:
$ odo project create sample-app
-
Create a directory for your components:
$ mkdir sample-app
$ cd sample-app
-
Clone an example Node.js application:
$ git clone https://github.com/openshift/nodejs-ex
$ cd nodejs-ex
-
Add a
nodejs
component to the application:$ odo create nodejs
-
Create a URL and add an entry to the local configuration file:
$ odo url create --port 8080
-
Push the changes:
$ odo push
Your component is now deployed to the cluster with an accessible URL.
-
List the URLs and check the desired URL for the component:
$ odo url list
-
View the deployed application using the generated URL.
-
For more information about using
odo
, see theodo
documentation.
3.6. Stopping the instance
The crc stop
command stops the running CRC instance and container runtime.
The stopping process takes a few minutes while the cluster shuts down.
-
Stop the CRC instance and container runtime:
$ crc stop
3.7. Deleting the instance
The crc delete
command deletes an existing CRC instance.
-
Delete the CRC instance:
$ crc delete
Warning: The crc delete command results in the loss of data stored in the CRC instance. Save any desired information stored in the instance before running this command.
4. Configuring CRC
4.1. About CRC configuration
Use the crc config
command to configure both the crc
executable and the CRC instance.
The crc config
command requires a subcommand to act on the configuration.
The available subcommands are get, set, unset, and view.
The get
, set
, and unset
subcommands operate on named configurable properties.
Run the crc config --help
command to list the available properties.
You can also use the crc config
command to configure the behavior of the startup checks for the crc start
and crc setup
commands.
By default, startup checks report an error and stop execution when their conditions are not met.
Set the value of a property starting with skip-check
to true
to skip the check.
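As a sketch, the skip-check workflow looks like the following; <property> is a placeholder for one of the skip-check property names listed by crc config --help:
$ crc config set <property> true
$ crc config unset <property>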
4.2. Viewing CRC configuration
The CRC executable provides commands to view configurable properties and the current CRC configuration.
-
To view the available configurable properties:
$ crc config --help
-
To view the values for a configurable property:
$ crc config get <property>
-
To view the complete current configuration:
$ crc config view
Note: The crc config view command does not return any information if the configuration consists of default values.
4.3. Changing the selected preset
You can change the container runtime used for the CRC instance by selecting the desired preset.
On Microsoft Windows and macOS, you can change the selected preset using the system tray or command line interface. On Linux, use the command line interface.
Important: You cannot change the preset of an existing CRC instance. Preset changes are only applied when a CRC instance is created. To enable preset changes, you must delete the existing instance and start a new one.
-
Change the selected preset from the command line:
$ crc config set preset <name>
Valid preset names are openshift for OpenShift Container Platform, okd for OKD, and podman for the Podman container runtime.
-
For more information about the minimum system requirements for each preset, see Minimum system requirements.
4.4. Configuring the instance
Use the cpus
and memory
properties to configure the default number of vCPUs and amount of memory available to the CRC instance, respectively.
Alternatively, the number of vCPUs and amount of memory can be assigned using the --cpus
and --memory
flags to the crc start
command, respectively.
Important: You cannot change the configuration of a running CRC instance. To enable configuration changes, you must stop the running instance and start it again.
-
To configure the number of vCPUs available to the instance:
$ crc config set cpus <number>
The default value for the
cpus
property is4
. The number of vCPUs to assign must be greater than or equal to the default. -
To start the instance with the desired number of vCPUs:
$ crc start --cpus <number>
-
To configure the memory available to the instance:
$ crc config set memory <number-in-mib>
Note: Values for available memory are set in mebibytes (MiB). One gibibyte (GiB) of memory is equal to 1024 MiB.
The default value for the
memory
property is9216
. The amount of memory to assign must be greater than or equal to the default. -
To start the instance with the desired amount of memory:
$ crc start --memory <number-in-mib>
5. Networking
5.1. DNS configuration details
5.1.1. General DNS setup
The OpenShift Container Platform cluster managed by CRC uses two DNS domain names, crc.testing
and apps-crc.testing
.
The crc.testing
domain is for core OpenShift Container Platform services.
The apps-crc.testing
domain is for accessing OpenShift applications deployed on the cluster.
For example, the OpenShift Container Platform API server is exposed as api.crc.testing
while the OpenShift Container Platform console is accessed as console-openshift-console.apps-crc.testing
.
These DNS domains are served by a dnsmasq
DNS container running inside the CRC instance.
The crc setup
command detects and adjusts your system DNS configuration so that it can resolve these domains.
Additional checks are done to verify DNS is properly configured when running crc start
.
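If you want to verify the DNS setup manually on Linux after running crc setup and crc start, a simple lookup of one of these names should succeed; the address returned depends on your platform and network configuration:
$ host api.crc.testing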
5.1.2. Linux
On Linux, depending on your distribution, CRC expects the following DNS configuration:
NetworkManager + systemd-resolved
This configuration is used by default on Fedora 33 or newer, and on Ubuntu Desktop editions.
-
CRC expects NetworkManager to manage networking.
-
CRC configures
systemd-resolved
to forward requests for thetesting
domain to the192.168.130.11
DNS server.192.168.130.11
is the IP of the CRC instance. -
systemd-resolved
configuration is done with a NetworkManager dispatcher script in /etc/NetworkManager/dispatcher.d/99-crc.sh:
#!/bin/sh
export LC_ALL=C
systemd-resolve --interface crc --set-dns 192.168.130.11 --set-domain ~testing
exit 0
NetworkManager + dnsmasq
This configuration is used by default on Fedora 32 or older, on Red Hat Enterprise Linux, and on CentOS.
-
CRC expects NetworkManager to manage networking.
-
NetworkManager uses
dnsmasq
with the /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf configuration file. -
The configuration file for this
dnsmasq
instance is /etc/NetworkManager/dnsmasq.d/crc.conf:
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11
-
The NetworkManager
dnsmasq
instance forwards requests for thecrc.testing
andapps-crc.testing
domains to the192.168.130.11
DNS server.
5.2. Reserved IP subnets
The OpenShift Container Platform cluster managed by CRC reserves IP subnets for internal use which should not collide with your host network. Ensure that the following IP subnets are available for use:
-
10.217.0.0/22
-
10.217.4.0/23
-
192.168.126.0/24
Additionally, the host hypervisor may reserve another IP subnet depending on the host operating system.
On Microsoft Windows, the hypervisor reserves a randomly generated IP subnet that cannot be determined ahead of time.
No additional subnet is reserved on macOS.
The additional reserved subnet for Linux is 192.168.130.0/24
.
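On a Linux host, a quick, optional way to check for collisions before starting CRC is to look for existing routes in these ranges:
$ ip route | grep -E '10\.217\.|192\.168\.126\.|192\.168\.130\.'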
5.3. Starting CRC behind a proxy
You can start CRC behind a defined proxy using environment variables or configurable properties.
Note: SOCKS proxies are not supported by OpenShift Container Platform.
-
If you are not using
crc oc-env
, when interacting with the cluster, export the.testing
domain as part of theno_proxy
environment variable. The embeddedoc
executable does not require manual settings. For more information about using the embeddedoc
executable, see Accessing the OpenShift cluster with the OpenShift CLI.
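For example, in a POSIX shell the .testing domain can be appended to an existing no_proxy value as follows; the exact variable name (no_proxy or NO_PROXY) depends on your environment:
$ export no_proxy="${no_proxy:+${no_proxy},}.testing"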
-
Define a proxy using the http_proxy and https_proxy environment variables or using the crc config set command as follows:
$ crc config set http-proxy http://proxy.example.com:<port>
$ crc config set https-proxy http://proxy.example.com:<port>
$ crc config set no-proxy <comma-separated-no-proxy-entries>
-
If the proxy uses a custom CA certificate file, set it as follows:
$ crc config set proxy-ca-file <path-to-custom-ca-file>
Note: Proxy-related values set in the configuration for CRC have priority over values set with environment variables.
5.4. Setting up CRC on a remote server
Configure a remote server to run an OpenShift Container Platform cluster provided by CRC.
This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS server. Run every command in this procedure on the remote server.
Warning: Perform this procedure only on a local network. Exposing an insecure server on the internet has many security implications.
-
CRC is installed and set up on the remote server. For more information, see Installing CRC and Setting up CRC.
-
CRC is configured to use the OpenShift preset on the remote server. For more information, see Changing the selected preset.
-
Your user account has
sudo
permissions on the remote server.
-
Start the cluster:
$ crc start
Ensure that the cluster remains running during this procedure.
-
Install the
haproxy
package and other utilities:$ sudo dnf install haproxy /usr/sbin/semanage
-
Modify the firewall to allow communication with the cluster:
$ sudo systemctl enable --now firewalld
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --add-service=https --permanent
$ sudo firewall-cmd --add-service=kube-apiserver --permanent
$ sudo firewall-cmd --reload
-
For SELinux, allow HAProxy to listen on TCP port 6443 to serve
kube-apiserver
on this port:$ sudo semanage port -a -t http_port_t -p tcp 6443
-
Create a backup of the default
haproxy
configuration:$ sudo cp /etc/haproxy/haproxy.cfg{,.bak}
-
Configure
haproxy
for use with the cluster:
$ export CRC_IP=$(crc ip)
$ sudo tee /etc/haproxy/haproxy.cfg &>/dev/null <<EOF
global
    log /dev/log local0
defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s
listen apps
    bind 0.0.0.0:80
    server crcvm $CRC_IP:80 check
listen apps_ssl
    bind 0.0.0.0:443
    server crcvm $CRC_IP:443 check
listen api
    bind 0.0.0.0:6443
    server crcvm $CRC_IP:6443 check
EOF
-
Start the
haproxy
service:$ sudo systemctl start haproxy
5.5. Connecting to a remote CRC instance
Use dnsmasq
to connect a client machine to a remote server running an OpenShift Container Platform cluster managed by CRC.
This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS client. Run every command in this procedure on the client.
Important: Connect to a server that is only exposed on your local network.
-
A remote server is set up for the client to connect to. For more information, see Setting up CRC on a remote server.
-
You know the external IP address of the server.
-
You have the latest OpenShift CLI (
oc
) in your$PATH
on the client.
-
Install the
dnsmasq
package:$ sudo dnf install dnsmasq
-
Enable the use of
dnsmasq
for DNS resolution in NetworkManager:
$ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
[main]
dns=dnsmasq
EOF
-
Add DNS entries for CRC to the
dnsmasq
configuration:
$ sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf &>/dev/null <<EOF
address=/apps-crc.testing/SERVER_IP_ADDRESS
address=/api.crc.testing/SERVER_IP_ADDRESS
EOF
Note: Comment out any existing entries in /etc/NetworkManager/dnsmasq.d/crc.conf. These entries are created by running a local instance of CRC and will conflict with the entries for the remote cluster.
-
Reload the NetworkManager service:
$ sudo systemctl reload NetworkManager
-
Log in to the remote cluster as the
developer
user withoc
:$ oc login -u developer -p developer https://api.crc.testing:6443
The remote OpenShift Container Platform web console is available at https://console-openshift-console.apps-crc.testing.
6. Administrative tasks
6.1. Starting monitoring
CRC disables cluster monitoring by default to ensure that CRC can run on a typical notebook. Monitoring is responsible for listing your cluster in the Red Hat Hybrid Cloud Console. Follow this procedure to enable monitoring for your cluster.
-
You must assign additional memory to the CRC instance. At least 14 GiB of memory, a value of
14336
, is recommended for core functionality. Increased workloads will require more memory. For more information, see Configuring the instance.
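For example, one way to meet this prerequisite before starting the instance is to set the memory property to the recommended value (in MiB), as described in Configuring the instance:
$ crc config set memory 14336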
-
Set the
enable-cluster-monitoring
configurable property totrue
:$ crc config set enable-cluster-monitoring true
-
Start the instance:
$ crc start
Warning: Cluster monitoring cannot be disabled. To remove monitoring, set the enable-cluster-monitoring configurable property to false and delete the existing CRC instance.
6.2. Enabling override Operators
To make sure CRC can run on a typical laptop, some resource-heavy services are disabled by default. These services can be enabled by manually removing the desired Operator from the Operator override list.
-
A running CRC virtual machine and a working
oc
command. For more information, see Accessing the OpenShift cluster withoc
. -
You must log in through
oc
as thekubeadmin
user.
-
List unmanaged Operators and note the numeric index for the desired Operator:
-
On Linux or macOS:
$ oc get clusterversion version -ojsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}' | nl -v 0
-
On Microsoft Windows using PowerShell:
PS> oc get clusterversion version -ojsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}' | % {$nl++;"`t$($nl-1) `t $_"};$nl=0
-
Start the desired Operator using the identified numeric index:
$ oc patch clusterversion/version --type='json' -p '[{"op":"remove", "path":"/spec/overrides/<unmanaged-operator-index>"}]'
clusterversion.config.openshift.io/version patched
7. Troubleshooting CRC
Note: The goal of CRC is to deliver an OpenShift Container Platform environment for development and testing purposes. Issues occurring during installation or usage of specific OpenShift applications are outside of the scope of CRC. Report such issues to the relevant project.
7.1. Getting shell access to the OpenShift cluster
To access the cluster for troubleshooting or debugging purposes, follow this procedure.
Note: Direct access to the OpenShift Container Platform cluster is not needed for regular use and is strongly discouraged.
-
Enable OpenShift CLI (
oc
) access to the cluster and log in as thekubeadmin
user. For detailed steps, see Accessing the OpenShift cluster with the OpenShift CLI.
-
Run the
oc get nodes
command to identify the desired node. The output will be similar to this:
$ oc get nodes
NAME                 STATUS   ROLES           AGE   VERSION
crc-shdl4-master-0   Ready    master,worker   7d7h  v1.14.6+7e13ab9a7
-
Run
oc debug nodes/<node>
where<node>
is the name of the node printed in the previous step.
7.2. Troubleshooting expired certificates
The system bundle in each released crc
executable expires 30 days after the release.
This expiration is due to certificates embedded in the OpenShift Container Platform cluster.
The crc start
command triggers an automatic certificate renewal process when needed.
Certificate renewal can add up to five minutes to the start time of the cluster.
To avoid this additional startup time, or to resolve expired certificate errors that cannot be renewed automatically, use the following procedure:
-
Download the latest CRC release and place the
crc
executable in your$PATH
. -
Remove the cluster with certificate errors using the
crc delete
command:$ crc delete
Warning: The crc delete command results in the loss of data stored in the CRC instance. Save any desired information stored in the instance before running this command.
-
Set up the new release:
$ crc setup
-
Start the new instance:
$ crc start
7.3. Troubleshooting bundle version mismatch
Created CRC instances contain bundle information and instance data. This information is not updated when you set up a new CRC release, because the earlier instance data may contain customizations. This leads to errors when running the crc start command:
$ crc start
...
FATA Bundle 'crc_hyperkit_4.2.8.crcbundle' was requested, but the existing VM is using 'crc_hyperkit_4.2.2.crcbundle'
-
Issue the
crc delete
command before attempting to start the instance:$ crc delete
Warning: The crc delete command results in the loss of data stored in the CRC instance. Save any desired information stored in the instance before running this command.
7.4. Troubleshooting unknown issues
Resolve most issues by restarting CRC with a clean state.
This involves stopping the instance, deleting it, reverting changes made by the crc setup
command, reapplying those changes, and restarting the instance.
-
You set up the host machine with the
crc setup
command. For more information, see Setting up CRC. -
You started CRC with the
crc start
command. For more information, see Starting the instance. -
You are using the latest CRC release. Using a version earlier than CRC 1.2.0 may result in errors related to expired x509 certificates. For more information, see Troubleshooting expired certificates.
To troubleshoot CRC, perform the following steps:
-
Stop the CRC instance:
$ crc stop
-
Delete the CRC instance:
$ crc delete
Warning: The crc delete command results in the loss of data stored in the CRC instance. Save any desired information stored in the instance before running this command.
-
Clean up remaining changes from the
crc setup
command:$ crc cleanup
Note: The crc cleanup command removes an existing CRC instance and reverts changes to DNS entries created by the crc setup command. On macOS, the crc cleanup command also removes the system tray.
-
Set up your host machine to reapply the changes:
$ crc setup
-
Start the CRC instance:
$ crc start
Note: The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.
If your issue is not resolved by this procedure, perform the following steps:
-
Search open issues for the issue that you are encountering.
-
If no existing issue addresses the encountered issue, create an issue and attach the ~/.crc/crc.log file to the created issue. The ~/.crc/crc.log file has detailed debugging and troubleshooting information which can help diagnose the problem that you are experiencing.