OpenShift

Here I jot down stuff that is currently interesting and that I keep forgetting.
The list has (almost) no order and is very random. By the way: there is also an official cheat sheet.

OpenShift odds and ends

create and switch projects

oc new-project thespark
oc projects
oc project thespark
oc delete project thespark

restart something:

oc rollout restart deployment (deployment name)
oc rollout restart deployment schufi

port forward: (local:remote)

rschumm@kassandra:~$ oc port-forward postgresql-1-6pkns 15432:5432 (local:remote)
rschumm@kyburg ~ % oc port-forward mein-pod-b-a 5005:5005 8801:8801 (multiple port pairs possible)

edit a resource

oc edit dc/hallospark
oc -n sobrado-dev edit vshnpostgresqls.vshn.appcat.vshn.io sobrado-postgres-restored-demo

copy stuff from a PVC

oc cp sobrado-prod/sobrado-parser-podadflakefw:/var/rulesets . 

Logo in Topology View:

label: app.openshift.io/runtime=java
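
A minimal sketch of setting that label with oc (assuming a Deployment named schufi, as in the restart example above):

oc label deployment/schufi app.openshift.io/runtime=java --overwrite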

rollback to latest successful deployment

oc rollout undo dc/hallospark
re-enable triggers:  
oc set triggers --auto dc/hallospark

expose applications

oc expose dc/hallogo --port=8080 (generates a service from the DeploymentConfig)
oc expose svc/hallogo (generates a route from the service)

describe resource types, print as YAML, etc.

oc describe bc
oc describe dc
oc get secret gitlab-hallospark -o yaml
oc get user
oc get nodes

quotas:

oc describe namespace (name) (shows the namespace's resource quotas and limits)

templates

oc get templates -n openshift 
oc describe template postgresql-persistent -n openshift
oc get template jenkins-pipeline-example -n openshift -o yaml
oc export all --as-template=javapg

image streams
update an external image stream to the newest version (e.g. iceScrum):

oc get is
oc import-image icescrum

follow all logs of one deployment:

oc logs -f deployment/some-deployment

collect all logs of a namespace:

#!/bin/bash

namespace='openshift-monitoring'

oc project $namespace

mkdir -p $namespace-logs/pod

for pod in $(oc get pods -o name -n $namespace); do 
     echo "$pod"
     echo "===============$pod (tailed to 2000 lines)===================" >> $namespace-logs/$pod-logs.txt
     oc logs $pod --all-containers --tail 2000 >> $namespace-logs/$pod-logs.txt 
done

tar -cvf $namespace-logs.tar $namespace-logs/

Garbage collection

configure garbage collection.

get rid of evicted pods:

oc get pod --all-namespaces  | grep Evicted
oc get pod --all-namespaces  | awk '{if ($4=="Evicted") print "oc delete pod " $2 " -n " $1;}' | sh 

get rid of garbage by pruning:
(as a cluster admin on a master node, so that the registry is accessible)

oc login -u rschumm

oc adm prune builds --confirm 
oc adm prune images --confirm 

list, describe and delete all resources with a label:

oc get all --selector app=hallospark -o yaml
oc describe all --selector app=hallospark
oc delete all --selector app=hallospark 
oc delete pvc --all 

… e.g. delete (and restart) all Prometheus node-exporter and Prometheus pods:

oc project openshift-monitoring
oc get -o name pods --selector app=node-exporter
oc delete pods --selector app=node-exporter
oc delete pods --selector app=prometheus

look for nodes with DiskPressure

oc describe node  | grep -i NodeHasDiskPressure

drain and reboot node:

oc adm manage-node ocp-app-1 --schedulable=false
oc adm drain ocp-app-1

systemctl restart atomic-openshift-node.service
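
After the reboot, re-enable scheduling (a sketch using the same legacy manage-node command as above):

oc adm manage-node ocp-app-1 --schedulable=true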

📦 Legacy Notes

Admin, Access etc.

minishift: 
minishift addons apply admin-user
oc login -u admin (admin)

normal: (on first master)
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin rschumm

allow root user etc: 
oc adm policy add-scc-to-user anyuid -z default -n myproject --as system:admin

Edit Node Config Map:

oc edit cm node-config-compute -n openshift-node

minishift, eclipse che addon etc.

minishift start --cpus 3 --memory 6GB
minishift addons apply che
minishift addons remove che
minishift addons list
minishift update 

External access to the database, Java debugging, etc. (Routes in OpenShift are only for HTTP):

rschumm@kassandra:~/git/schufi$ oc port-forward postgresql-1-6pkns 15432:5432
rschumm@kyburg ~ % oc port-forward mein-pod-b-a 5005:5005 8801:8801

or even more directly:

oc exec postgresql-1-nhvs5 -- psql -d explic -c "select experiment from video"

kubernetes dashboard UI

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy 
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

Builds

also cf. my Polyglot Example Blog

s2i maven “binary workflow”

mvn package fabric8:resource fabric8:build fabric8:deploy

s2i “source workflow” for different languages:

oc new-app fabric8/s2i-java~https://github.com/rschumm/hallospark.git
oc new-app fabric8/s2i-java~https://gitlab.com/rschumm/hallospark.git --source-secret='gitlab-hallospark'
oc new-app openshift/php~https://github.com/rschumm/hallophp.git
oc new-app openshift/dotnet~https://github.com/rschumm/hallodotnet.git

Docs for different languages.

apply a resource

oc apply -f src/main/fabric8/pipeline_bc.yml 

increase resources for bc:

spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: 'hallogo-git:latest'
  resources:
    limits:
      cpu: 100m
      memory: 256Mi
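
A sketch of patching these limits directly with oc (assuming the BuildConfig is named hallogo-git, like the ImageStreamTag above):

oc patch bc/hallogo-git -p '{"spec":{"resources":{"limits":{"cpu":"100m","memory":"256Mi"}}}}'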

Java Options and other Configurations

when the encoding is wrong in a Java application…

JAVA_OPTIONS -> -Dfile.encoding=UTF-8

…in the environment of the DeploymentConfig
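
A minimal sketch of setting this via oc (assuming a DeploymentConfig named hallospark):

oc set env dc/hallospark JAVA_OPTIONS=-Dfile.encoding=UTF-8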

For Quarkus configuration: to override the config values in application.properties, just set an environment variable in the DeploymentConfig, e.g.:

quarkus.datasource.url -> jdbc:postgresql://postgresql:5432/holz?sslmode=disable

If a Java system property is needed, use -Dquarkus.datasource.url...; if a Unix environment variable is needed, use export QUARKUS_DATASOURCE_URL=... respectively.
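
Again a sketch with oc set env; the DeploymentConfig name holz is hypothetical:

oc set env dc/holz QUARKUS_DATASOURCE_URL='jdbc:postgresql://postgresql:5432/holz?sslmode=disable'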

OpenShift Multi-Stage Deployment

Simplest blueprint for a multi-stage deployment with a Jenkins build pipeline

tag manually:

oc tag sparkpipe/hallospark:latest sparkpipe/hallospark:prod

Deploying images from another namespace:

Service account default will need image pull authority to deploy images from sparkpipe. You can grant authority with the command:

oc policy add-role-to-user system:image-puller system:serviceaccount:sparkpipe-prod:default -n sparkpipe

Jenkinsfile

try {
    //node('maven') {
    node {
        stage('deploy to dev'){
            openshiftBuild(buildConfig: 'hallospark', showBuildLogs: 'true')
        }
        //stage ('deploy'){
        //    openshiftDeploy(deploymentConfig: 'hallospark')
        //}
        stage("approve the deployment") {
            input message: "Test deployment: Isch guät?", id: "approval"
        }
        stage("deploy prod"){
            openshift.withCluster() { 
                openshift.tag("sparkpipe/hallospark:latest", "sparkpipe/hallospark:prod")
            }
        }
    }
} catch (err) {
   echo "in catch block"
   echo "Caught: ${err}"
   currentBuild.result = 'FAILURE'
   throw err
}    

Cluster install and update

install

on bastion host:

[cloud-user@bastion openshift-ansible]$ cd /usr/share/ansible/openshift-ansible/
[cloud-user@bastion openshift-ansible]$ ansible-playbook playbooks/prerequisites.yml 
[cloud-user@bastion openshift-ansible]$ ansible-playbook playbooks/deploy_cluster.yml

The host inventory file is in /etc/ansible/hosts.
See the documentation.

update

Sample update release. See the automated update docs.

on bastion host as root:

subscription-manager refresh

update repos:

yum update -y openshift-ansible

or update everything with yum update

check:

/etc/ansible/hosts
openshift_master_manage_htpasswd=false 

perform the update:

[cloud-user@bastion ~]$ cd /usr/share/ansible/openshift-ansible
[cloud-user@bastion ~]$ ansible-playbook playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade.yml --check
[cloud-user@bastion ~]$ ansible-playbook playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade.yml

as a cluster admin:

reboot all nodes.

oc adm diagnostics

adding NFS provisioner

According to the OKD documentation, dynamic persistent volume provisioning for NFS is not supported,
so we have to use the following incubator project:

clone the repo and change into the folder that contains deploy/.

oc new-project nfs-provisioner

then replace the namespace using the following snippet:

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NAMESPACE=`oc project -q`
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
$ oc create -f deploy/rbac.yaml
$ oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner

in deployment.yaml, replace the values accordingly, e.g.:

server: 10.0.0.14
path: /var/nfs/general

then apply the files:

oc apply -f deploy/rbac.yaml
oc apply -f deploy/class.yaml
oc apply -f deploy/deployment.yaml

the provisioner will run as a normal pod and create PVs when they are claimed, including creating the directories on the NFS server and deleting them again.
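
A minimal sketch of a claim that uses the new class (the claim name test-claim and the size are just examples; the class name matches the managed-nfs-storage class patched below):

oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF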

To make the new storage class the default one, patch it with:

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Broken PostgreSQL

Thanks to the Government of British Columbia, Canada, here is the solution for accessing and fixing a corrupted, crash-looping PostgreSQL database:

Log Signature:

pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
..... done
server started
=> sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
ERROR:  tuple concurrently updated

in principle:

oc debug (a crashing pod)

scale down the postgresql deployment to 0 replicas

in the debug session:

run-postgresql (should provoke the same error)
pg_ctl stop -D /var/lib/pgsql/data/userdata 
pg_ctl start -D /var/lib/pgsql/data/userdata
pg_ctl stop -D /var/lib/pgsql/data/userdata 

end the debug session and re-init the postgresql deployment.
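
A sketch of the surrounding oc commands (the DeploymentConfig name postgresql is an assumption):

oc scale dc/postgresql --replicas=0   # stop the crash-looping pod
oc debug dc/postgresql                # debug pod based on the dc, with the data volume mounted
# …inside the debug session: the run-postgresql / pg_ctl steps from above…
oc scale dc/postgresql --replicas=1   # re-init the deployment afterwards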

credits

Hacks

Branding

To change the branding of the web console, you can change the /etc/origin/master/master-config.yaml file or, much easier:
in the namespace openshift-web-console just edit the ConfigMap webconsole-config and add the following snippet to its config YAML:

extensions:
  properties: {}
  scriptURLs:
    - https://deinewolke.io/exp/holz.js
  stylesheetURLs:
    - https://deinewolke.io/exp/holz.css

The config is watched, and after a few minutes all web console pods will be reloaded with the new config.
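
A minimal sketch of opening that ConfigMap for editing:

oc edit configmap webconsole-config -n openshift-web-console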

The snippets (the JavaScript and the CSS) could look like this and must reside on a web server somewhere:

// Add items to the application launcher dropdown menu.
window.OPENSHIFT_CONSTANTS.APP_LAUNCHER_NAVIGATION = [{
    title: "System Status Dashboards",                    // The text label
    iconClass: "fa fa-dashboard",          // The icon you want to appear
    href: "https://grafana-openshift-monitoring.apps.deinewolke.io",  // Where to go when this item is clicked
    tooltip: 'View Grafana dashboards provided by Prometheus'              // Optional tooltip to display on hover
  }, {
    title: "Internal Registry",
    iconClass: "fa fa-database",
    href: "https://registry-console-default.apps.deinewolke.io/registry",
    tooltip: "See the internal docker registry managed by the cluster"
  }];

#header-logo {
	background-image: url(https://deinewolke.io/new_logo.svg);
	width: 320px;
}

.nav-item-iconic {
    color: #aa3dcc;
}
