Networking
DNS configuration details
General DNS setup
The OpenShift Container Platform cluster managed by CRC uses two DNS domain names, crc.testing and apps-crc.testing.
The crc.testing domain is for core OpenShift Container Platform services.
The apps-crc.testing domain is for accessing OpenShift applications deployed on the cluster.
For example, the OpenShift Container Platform API server is exposed as api.crc.testing while the OpenShift Container Platform console is accessed as console-openshift-console.apps-crc.testing.
These DNS domains are served by a dnsmasq DNS container running inside the CRC instance.
The crc setup command detects and adjusts your system DNS configuration so that it can resolve these domains.
Additional checks are done to verify DNS is properly configured when running crc start.
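The split between the two domains can be illustrated with a small helper that classifies a hostname by suffix (purely illustrative; it does pure string matching, not DNS lookups):

```shell
# Sketch: report which of the two CRC DNS domains serves a given hostname.
crc_domain_for() {
  case "$1" in
    *.apps-crc.testing) echo "apps-crc.testing (applications)" ;;
    *.crc.testing)      echo "crc.testing (core services)" ;;
    *)                  echo "not a CRC domain" ;;
  esac
}

crc_domain_for api.crc.testing
# prints: crc.testing (core services)
crc_domain_for console-openshift-console.apps-crc.testing
# prints: apps-crc.testing (applications)
```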
DNS on Linux
On Linux, depending on your distribution, CRC expects the following DNS configuration:
NetworkManager + systemd-resolved
This configuration is used by default on Fedora 33 or newer, and on Ubuntu Desktop editions.
- CRC expects NetworkManager to manage networking.
- CRC configures systemd-resolved to forward requests for the testing domain to the 192.168.130.11 DNS server. 192.168.130.11 is the IP address of the CRC instance.
- The systemd-resolved configuration is done with a NetworkManager dispatcher script in /etc/NetworkManager/dispatcher.d/99-crc.sh:

```
#!/bin/sh
export LC_ALL=C

systemd-resolve --interface crc --set-dns 192.168.130.11 --set-domain ~testing

exit 0
```
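On recent systemd releases, the systemd-resolve tool is deprecated in favor of resolvectl. An equivalent dispatcher script could look like the following. This is a sketch, not the file that crc setup actually writes:

```shell
#!/bin/sh
# Hypothetical variant of 99-crc.sh using resolvectl instead of the
# deprecated systemd-resolve wrapper; same interface, DNS server, and domain.
export LC_ALL=C

resolvectl dns crc 192.168.130.11
resolvectl domain crc '~testing'

exit 0
```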
NetworkManager + dnsmasq
This configuration is used by default on Fedora 32 or older, on Red Hat Enterprise Linux, and on CentOS.
- CRC expects NetworkManager to manage networking.
- NetworkManager uses dnsmasq with the /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf configuration file.
- The configuration file for this dnsmasq instance is /etc/NetworkManager/dnsmasq.d/crc.conf:

```
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11
```

- The NetworkManager dnsmasq instance forwards requests for the crc.testing and apps-crc.testing domains to the 192.168.130.11 DNS server.
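Each server= line in the dnsmasq configuration maps a domain suffix to an upstream DNS server. A small helper can make that mapping explicit (illustrative only; the sample input mirrors the crc.conf content above):

```shell
# Sketch: list which domains a dnsmasq "server=" configuration forwards,
# and to which upstream DNS server.
list_forwards() {
  awk -F/ '/^server=/ { printf "%s -> %s\n", $2, $3 }'
}

list_forwards <<'EOF'
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11
EOF
# prints:
#   crc.testing -> 192.168.130.11
#   apps-crc.testing -> 192.168.130.11
```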
Reserved IP subnets
The OpenShift Container Platform cluster managed by CRC reserves IP subnets for internal use, which should not collide with your host network. Ensure that the following IP subnets are available for use:
- 10.217.0.0/22
- 10.217.4.0/23
- 192.168.126.0/24
Additionally, the host hypervisor might reserve another IP subnet depending on the host operating system.
No additional subnet is reserved on macOS and Microsoft Windows.
The additional reserved subnet for Linux is 192.168.130.0/24.
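Whether a host address collides with these reserved subnets can be checked with plain shell arithmetic. A sketch, covering the three cluster subnets plus the Linux-only hypervisor subnet:

```shell
# Sketch: check whether an IPv4 address falls inside a subnet CRC reserves.
ip_to_int() {
  oldifs=$IFS; IFS=.
  set -- $1
  IFS=$oldifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_subnet() {  # in_subnet ADDRESS NETWORK/PREFIX
  net=${2%/*}
  prefix=${2#*/}
  mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

is_reserved() {
  for cidr in 10.217.0.0/22 10.217.4.0/23 192.168.126.0/24 192.168.130.0/24; do
    if in_subnet "$1" "$cidr"; then
      echo "$1 collides with reserved subnet $cidr"
      return 0
    fi
  done
  echo "$1 does not collide with a reserved subnet"
  return 1
}

is_reserved 10.217.1.5         # collides with 10.217.0.0/22
is_reserved 192.168.1.10 || true  # no collision
```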
Starting CRC behind a proxy
You can start CRC behind a defined proxy using environment variables or configurable properties.
SOCKS proxies are not supported by OpenShift Container Platform.
- If you are not using crc oc-env, export the .testing domain as part of the no_proxy environment variable when interacting with the cluster. The embedded oc executable does not require manual settings. For more information about using the embedded oc executable, see Accessing the OpenShift cluster with the OpenShift CLI.
- Define a proxy using the http_proxy and https_proxy environment variables or using the crc config set command as follows:

```
$ crc config set http-proxy http://proxy.example.com:<port>
$ crc config set https-proxy http://proxy.example.com:<port>
$ crc config set no-proxy <comma-separated-no-proxy-entries>
```
- If the proxy uses a custom CA certificate file, set it as follows:

```
$ crc config set proxy-ca-file <path-to-custom-ca-file>
```
Proxy-related values set in the CRC configuration take priority over values set with environment variables.
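Put together, a proxy-aware shell session might look like the following. The proxy host proxy.example.com, port 3128, and the extra no_proxy entries are placeholders, not values CRC defines:

```shell
# Assumed placeholders: proxy.example.com:3128 and the local no_proxy entries.
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
# Without `crc oc-env`, keep cluster traffic off the proxy by including
# the .testing domain in no_proxy:
export no_proxy=localhost,127.0.0.1,.testing

# Equivalent CRC configuration (takes priority over the variables above):
#   crc config set http-proxy http://proxy.example.com:3128
#   crc config set https-proxy http://proxy.example.com:3128
#   crc config set no-proxy localhost,127.0.0.1,.testing
```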
Accessing services running on your host from CRC
When you run services on your host, you can configure CRC to access these services.
- You have a service running on your host and want to consume it from the CRC instance.
- You are using the user mode network. On macOS and Microsoft Windows, this is the default behavior.
- Enable accessing services running on the host from CRC:

```
$ crc config set host-network-access true
```
- Verify that the CRC configuration uses user network mode and enables host network access:

```
$ crc config view
[...]
- network-mode : user
- host-network-access : true
[...]
```
- If the CRC instance is already running, restart it (stop, then start); otherwise, just start it:

```
$ crc stop && crc start
```
Assuming your service is running on the host on port 8080, to access
it from the CRC instance, use host.crc.testing:8080.
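As a quick end-to-end check, you can stand up a throwaway HTTP server on the host and note the address it would have inside the instance. This is a sketch; port 8080 matches the example above, and the commands to enter the instance are not shown:

```shell
# Sketch: expose a throwaway service on the host.
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 2

# From a pod or shell inside the CRC instance, the service would be reachable
# as http://host.crc.testing:8080/ once host-network-access is enabled.
# Here we only sanity-check it from the host itself:
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/ || echo 000)

kill "$SERVER_PID"
```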
Setting up CRC on a remote server
Configure a remote server to run an OpenShift Container Platform cluster provided by CRC.
This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS server. Run every command in this procedure on the remote server.
These instructions only work with system-mode networking.

Perform this procedure only on a local network. Exposing an insecure server on the internet has many security implications.
- CRC is installed and set up on the remote server. For more information, see Installing CRC and Setting up CRC.
- CRC is configured to use the OpenShift preset on the remote server. For more information, see Changing the selected preset.
- CRC is configured to use system-mode networking.
- Your user account has sudo permissions on the remote server.
- Start the cluster:

```
$ crc start
```

Ensure that the cluster remains running during this procedure.
- Install the haproxy package and other utilities:

```
$ sudo dnf install haproxy /usr/sbin/semanage
```
- Modify the firewall to allow communication with the cluster:

```
$ sudo systemctl enable --now firewalld
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --add-service=https --permanent
$ sudo firewall-cmd --add-service=kube-apiserver --permanent
$ sudo firewall-cmd --reload
```
- For SELinux, allow HAProxy to listen on TCP port 6443 to serve kube-apiserver on this port:

```
$ sudo semanage port -a -t http_port_t -p tcp 6443
```
- Create a backup of the default haproxy configuration:

```
$ sudo cp /etc/haproxy/haproxy.cfg{,.bak}
```
- Configure haproxy for use with the cluster:

```
$ export CRC_IP=$(crc ip)
$ sudo tee /etc/haproxy/haproxy.cfg &>/dev/null <<EOF
global
    log /dev/log local0

defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s

listen apps
    bind 0.0.0.0:80
    server crcvm $CRC_IP:80 check

listen apps_ssl
    bind 0.0.0.0:443
    server crcvm $CRC_IP:443 check

listen api
    bind 0.0.0.0:6443
    server crcvm $CRC_IP:6443 check
EOF
```
- Start the haproxy service:

```
$ sudo systemctl start haproxy
```
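Because the instance IP can change, it can be convenient to wrap the configuration shown above in a small generator and rerun it after crc ip changes. A sketch; the function only prints the configuration to stdout:

```shell
# Sketch: template the haproxy.cfg shown above from the CRC instance IP.
gen_haproxy_cfg() {
  crc_ip=$1
  cat <<EOF
global
    log /dev/log local0
defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s
listen apps
    bind 0.0.0.0:80
    server crcvm $crc_ip:80 check
listen apps_ssl
    bind 0.0.0.0:443
    server crcvm $crc_ip:443 check
listen api
    bind 0.0.0.0:6443
    server crcvm $crc_ip:6443 check
EOF
}
```

Usage on the server would be something like `gen_haproxy_cfg "$(crc ip)" | sudo tee /etc/haproxy/haproxy.cfg`, followed by restarting the haproxy service.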
Connecting to a remote CRC instance
Use dnsmasq to connect a client machine to a remote server running an OpenShift Container Platform cluster managed by CRC.
This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS client. Run every command in this procedure on the client.
Connect to a server that is only exposed on your local network.
- A remote server is set up for the client to connect to. For more information, see Setting up CRC on a remote server.
- You know the external IP address of the server.
- You have the latest OpenShift CLI (oc) in your $PATH on the client.
- Install the dnsmasq package:

```
$ sudo dnf install dnsmasq
```
- Enable the use of dnsmasq for DNS resolution in NetworkManager:

```
$ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
[main]
dns=dnsmasq
EOF
```
- Add DNS entries for CRC to the dnsmasq configuration:

```
$ sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf &>/dev/null <<EOF
address=/apps-crc.testing/SERVER_IP_ADDRESS
address=/api.crc.testing/SERVER_IP_ADDRESS
EOF
```

Comment out any existing entries in /etc/NetworkManager/dnsmasq.d/crc.conf. These entries are created by running a local instance of CRC and will conflict with the entries for the remote cluster.
- Reload the NetworkManager service:

```
$ sudo systemctl reload NetworkManager
```
- Log in to the remote cluster as the developer user with oc:

```
$ oc login -u developer -p developer https://api.crc.testing:6443
```
The remote OpenShift Container Platform web console is available at https://console-openshift-console.apps-crc.testing.
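The server address appears in both dnsmasq entries, so it can be templated instead of edited by hand. A sketch; the function only prints the file content, and 203.0.113.10 in the usage line is a placeholder address:

```shell
# Sketch: generate the external-crc.conf content for a given server address.
gen_external_crc_conf() {
  cat <<EOF
address=/apps-crc.testing/$1
address=/api.crc.testing/$1
EOF
}
```

Usage on the client would be something like `gen_external_crc_conf 203.0.113.10 | sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf`, followed by reloading NetworkManager.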