Using Sonatype Nexus as a generic proxy registry to deploy OpenShift

( or how much I dislike arbitrary limitations )

Red Hat OpenShift

Red Hat OpenShift (OCP) is a superb container platform, but a few things in its ecosystem have bugged me from the start in my homelab.

  • Quay.io throttles how much content can be downloaded/mirrored in a given timeframe.
  • Most literature around disconnected installs revolves around using Red Hat Quay as -the- registry and involves a semi-manual way to mirror a -specific- OCP release.
  • Those DMZ Quay instances have to be pre-populated with content/versions before they can support any OCP disconnected install.

What if I wanted to:
  • be able to do 10-15 OCP deploys in a single day without hitting quay.io's download limits?
  • be able to deploy OCP 4.16.25 and, just a few minutes later, deploy OCP 4.8.z-latest or OCP 4.19-RC3 on another system in the lab?
  • be able to avoid taking down the family's Internet when I deploy OCP?
  • minimize manual steps involved (no more manual pre-mirroring involved)?
  • be able to have a general-purpose proxy registry in my Lab (or in a DMZ) to capture whatever goes through?

I tried to do that with Sonatype Nexus, and it has worked fine for me. 

First, a small VM with Linux, ZoL, and haproxy.

Here's the VM I'll be using in the rest of this article.
I named it 'registry'; it has 4 vCPUs and 12GB of RAM.

There is nothing fancy about this; you can just use any Linux distro with Docker CE or Podman. 
I chose to use RHEL8 and podman.

A little bit of design

Some time ago, Nexus did not play too well with SSL certs, so I decided to run Nexus without certificates and put haproxy in front to handle SSL termination.
Here are the takeaways:

- haproxy will run on the VM's public IP and carry the SSL certs for 'registry.lasthome.solace.krynn' (the FQDN of the registry in my Lab). haproxy will redirect ports 5000-5010 to ports 18000-18010 on the Nexus container. The SSL cert is managed by my own PKI.

I've placed a copy of the haproxy.cfg file here:

Remember to create an SSL cert and key for your FQDN.
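As a minimal sketch (the linked file is authoritative), one frontend/backend pair per port looks roughly like this, assuming the cert and key are concatenated into a single .pem as haproxy expects; repeat for ports 5001-5010 / 18001-18010:

frontend registry_5000
    bind <public-ip>:5000 ssl crt /etc/haproxy/certs/registry.lasthome.solace.krynn.pem
    mode http
    option forwardfor
    default_backend nexus_18000

backend nexus_18000
    mode http
    server nexus 127.0.0.1:18000 check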

- Nexus will run without encryption as a simple container. The main registry port will be 5000 (redirected to 18000 on the Nexus container). Nexus will also use ports 5001 to 5010 for each proxy registry (redirected to 18001-18010 on the container).
This is apparent in my docker/podman run script:
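A rough sketch of what that run script boils down to (the container name, volume path, and image tag are my choices; the volume path matches the ZFS dataset below and the tag matches the local build described next):

podman run -d --name nexus \
    -p 8081:8081 \
    -p 18000-18010:18000-18010 \
    -v /registry/nexus-data:/nexus-data:Z \
    localhost/sonatype/nexus3:latest

Make sure the data directory is writable by the container's 'nexus' user (UID 200 in the stock image).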

A copy of that script is here:

- I will be using the java17 Dockerfile from the Nexus docker-nexus3 git repo here:

Since I need to patch the Nexus Dockerfile (to drop the step that removes 'gzip'), I will be using a local copy of their repo, which I'll update and patch each time I need a new version:
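Roughly, the patch-and-build cycle looks like this (the sed pattern is a guess; check the Dockerfile before deleting lines):

cd ~/docker-nexus3 && git pull
# Drop whatever line removes 'gzip' from the image
sed -i '/gzip/d' Dockerfile.java17
podman build -f Dockerfile.java17 -t localhost/sonatype/nexus3:latest .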

The build script is located here:

- Since I expect to host a few hundred GBs in that Nexus, I went with ZFS (ZoL) on RHEL to set up a 1TB zpool to host the container blobs:
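Roughly (device, pool, and dataset names are from my lab; use your own):

# 1TB pool on a dedicated virtual disk
zpool create registry /dev/vdb
# Dataset for the Nexus blobs, mounted where the container expects it
zfs create registry/nexus-data
zfs set compression=lz4 registry/nexus-data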

Launching Nexus with podman or docker

If all goes well, it should now be as simple as './build.sh' and './run.sh' to get a working Nexus in a container:
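A quick sanity check, assuming the container is named 'nexus':

podman ps --filter name=nexus   # the container should show as 'Up'
podman logs -f nexus            # wait for the 'Started Sonatype Nexus' banner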

Let's log in to our Nexus as 'admin' and continue the configuration there:
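The initial 'admin' password is generated on first start inside the data directory; with the volume layout above, it can be read with:

podman exec nexus cat /nexus-data/admin.password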

Configuring Nexus for (OpenShift) containers

Let's check the blob store:

One important setting you'll need for this to work is enabling the 'Docker Bearer Token Realm' under Security → Realms:



We're all set; let's head over and grab our OpenShift pull secret:

The above pull secret, when base64-decoded, will show us what to configure in Nexus for proxy registries.
One Nexus proxy registry will be needed for each repo/login/password entry from the pull secret.

Use a small script like https://github.com/ElCoyote27/docker-nexus3-scripts/blob/master/decipher.sh to translate it to proper English:
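If you'd rather do it by hand, a jq one-liner along these lines does roughly the same (assuming the pull secret was saved as pull-secret.json):

# Print each registry with its decoded user:password pair
jq -r '.auths | to_entries[] | "\(.key) \(.value.auth)"' pull-secret.json |
while read -r registry auth; do
    echo "${registry} -> $(echo "${auth}" | base64 -d)"
done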

The OSS version of Nexus has some limitations with proxy registries; a way to work around them is to create one proxy registry per upstream repo and then aggregate them all under a 'group'.

First, let's allocate ports:
- cloud.redhat.com will be port 18001
- cloud.openshift.com will be port 18002
- registry.connect.redhat.com will be port 18003
- registry.redhat.io will be port 18004
- quay.io will be port 18005
- my own private registry will use port 18010

These ports are arbitrary, but they must match what you're using when starting the Nexus container (see above).

Let's create a 'Proxy' registry in Nexus for each of the above, e.g.:
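The screenshots below use the UI, but Nexus also exposes a repositories REST API if you'd rather script this. A rough sketch for quay.io, where the field values, admin credentials, and UI port are all assumptions to adapt:

# Hypothetical values; adjust name/remoteUrl/httpPort per repo
curl -u admin:password -H 'Content-Type: application/json' \
    -X POST http://localhost:8081/service/rest/v1/repositories/docker/proxy \
    -d '{
  "name": "quay.io",
  "online": true,
  "storage": { "blobStoreName": "default", "strictContentTypeValidation": true },
  "proxy": { "remoteUrl": "https://quay.io", "contentMaxAge": 1440, "metadataMaxAge": 1440 },
  "negativeCache": { "enabled": true, "timeToLive": 1440 },
  "httpClient": { "blocked": false, "autoBlock": true },
  "docker": { "v1Enabled": false, "forceBasicAuth": false, "httpPort": 18005 },
  "dockerProxy": { "indexType": "REGISTRY" }
}'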

Using quay.io as an example, here's what we'll configure.
For quay.io we'll use port 18005.



There are a few settings there which will make your life easier, e.g. anonymous docker pull, foreign layer caching, etc. Just pick what you need.

For quay.io, it'll be useful to cache everything for cdn0*.quay.io.



At the bottom of the configuration page, we'll find the location where the credentials from the pull secret are being used:

Do the same for all of the other repositories from the pull secret and add any other repo you might need, each with its own unique TCP port. We don't enable SSL here since haproxy handles it outside of Nexus.

Finally, create a 'group' registry in Nexus, add all of the proxy registries to it, and host it on port 18000; we're almost done!
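For reference, a REST sketch of the same step (the group name is an example, and the member names must match the proxy repos created above):

curl -u admin:password -H 'Content-Type: application/json' \
    -X POST http://localhost:8081/service/rest/v1/repositories/docker/group \
    -d '{
  "name": "ocp-group",
  "online": true,
  "storage": { "blobStoreName": "default", "strictContentTypeValidation": true },
  "docker": { "v1Enabled": false, "forceBasicAuth": true, "httpPort": 18000 },
  "group": { "memberNames": ["quay.io", "registry.redhat.io", "registry.connect.redhat.com"] }
}'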



Further down, select and add the proxy registries you previously created.

The last thing to do is to create a local user in Nexus that we'll use to pull from our cache, e.g.:



Testing the registry

If all works properly, we should now have a registry that will redirect and cache everything we throw at it:

quay.io/<container1> to registry.lasthome.solace.krynn:5000/<container1>

registry.redhat.io/<container2> to registry.lasthome.solace.krynn:5000/<container2>

cloud.openshift.com/<container3> to registry.lasthome.solace.krynn:5000/<container3>


Here are a few good URLs to try:

quay.io/projectquay/golang:1.17
quay.io/fairwinds/polaris:0.6
registry.redhat.io/openshift-kni/ztp-site-generator
quay.io/openshift-release-dev/ocp-release:4.16.25-x86_64
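For example, to pull the first one through the cache (log in with the local Nexus user created earlier; the hostname is from my lab):

podman login registry.lasthome.solace.krynn:5000
podman pull registry.lasthome.solace.krynn:5000/projectquay/golang:1.17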


Consuming the registry with OpenShift

For IPI or SNO installs, just add the following to your install-config:

imageContentSources:
- mirrors:
  - registry.lasthome.solace.krynn:5000/openshift-release-dev/ocp-v4.0-art-dev
  - registry.lasthome.solace.krynn:5000/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - registry.lasthome.solace.krynn:5000/openshift-release-dev
  source: quay.io/openshift-release-dev
- mirrors:
  - registry.lasthome.solace.krynn:5000/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release

If you are using Karim's excellent 'kcli' tool to deploy OpenShift ( https://github.com/karmab/kcli ), all you need to do is add those lines to your parameters file (adjust for the local Nexus user you created earlier):

disconnected_url: registry.lasthome.solace.krynn:5000
disconnected_user: openshift
disconnected_password: strongpassword

Just let it bake and after a few OCP installs you should be able to see the cached containers in Nexus:



Please note that for many operators, the above install-config is not sufficient. I've added a more extensive copy here for Day 2:

How can you tell it's working?
I performed the same test OCP deploy twice, once without Nexus (CDN only) and once with Nexus in place.
Please note that I am on 7Gbps fiber, so bandwidth is not the issue here, provided the Red Hat CDN behaves. For people on limited Internet plans, a proxy cache would yield even better results.
The network graph is telling: the deploy was about 25% faster and used far less Internet bandwidth: