Recent Updates

  • Serdar Osman Onur 8:29 am on February 1, 2019 Permalink | Reply
    Tags: openshift-deployment-problem

    getsockopt: connection refused 

    I had this error while trying to deploy a pod on my cluster.

    I first checked the cluster events. The relevant section is shown below:

    bilgi-edinme-yonetimi-1-build Pod Normal Scheduled default-scheduler Successfully assigned bilgi-edinme-yonetimi-1-build to tybsrhosnode02.defence.local

    bilgi-edinme-yonetimi-1-build Pod spec.containers{sti-build} Normal Pulled kubelet, tybsrhosnode02.defence.local Container image "openshift3/ose-sti-builder:v3.6.173.0.21" already present on machine

    bilgi-edinme-yonetimi-1-build Pod spec.containers{sti-build} Normal Created kubelet, tybsrhosnode02.defence.local Created container

    bilgi-edinme-yonetimi-1-build Pod spec.containers{sti-build} Normal Started kubelet, tybsrhosnode02.defence.local Started container

    bilgi-edinme-yonetimi-1-pvp8g Pod Normal Scheduled default-scheduler Successfully assigned bilgi-edinme-yonetimi-1-pvp8g to tybsrhosnode01.defence.local

    bilgi-edinme-yonetimi-1-pvp8g Pod spec.containers{bilgi-edinme-yonetimi} Normal Pulling kubelet, tybsrhosnode01.defence.local pulling image "docker-registry.default.svc:5000/tybsdev/[email protected]:2f65a5c551207830c29b158cd2a82d8ee86b9c5c079c39c347df9c275a3e59cf"

    bilgi-edinme-yonetimi-1-pvp8g Pod spec.containers{bilgi-edinme-yonetimi} Warning Failed kubelet, tybsrhosnode01.defence.local Failed to pull image "docker-registry.default.svc:5000/tybsdev/[email protected]:2f65a5c551207830c29b158cd2a82d8ee86b9c5c079c39c347df9c275a3e59cf": rpc error: code = 2 desc = Get http://docker-registry.default.svc:5000/v2/: dial tcp 172.30.253.125:5000: getsockopt: connection refused

    bilgi-edinme-yonetimi-1-pvp8g Pod Warning FailedSync kubelet, tybsrhosnode01.defence.local Error syncing pod

    bilgi-edinme-yonetimi-1-pvp8g Pod spec.containers{bilgi-edinme-yonetimi} Normal BackOff kubelet, tybsrhosnode01.defence.local Back-off pulling image "docker-registry.default.svc:5000/tybsdev/[email protected]:2f65a5c551207830c29b158cd2a82d8ee86b9c5c079c39c347df9c275a3e59cf"

     

    Then I tried logging into the registry and manually pulling the image:
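    The exact commands were not captured in this post; they were along these lines (the registry address comes from the events above, and the image reference is elided in the log, so a placeholder is used here):

    # log in to the internal registry (the user name is illustrative)
    docker login -u admin -p $(oc whoami -t) docker-registry.default.svc:5000
    # then try to pull the image reported in the events
    docker pull docker-registry.default.svc:5000/tybsdev/<image>@sha256:2f65a5c551207830c29b158cd2a82d8ee86b9c5c079c39c347df9c275a3e59cf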

    Error response from daemon: Get http://docker-registry.default.svc:5000/v1/users/: dial tcp 172.30.253.125:5000: getsockopt: connection refused

    Then I checked the status of the registry pod:
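    The listing below came from something like the following (the exact flags were not preserved in my notes):

    oc get pods --all-namespaces -o wide | grep -i registry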

    default docker-registry-6-p8lgn 0/1 Pending 0 1d <none>
    default registry-console-1-4xj6p 1/1 Running 0 1d 10.131.1.200 tybsrhosnode02.defence.local
    openshift-infra recycler-for-registry-storage 0/1 ContainerCreating 0 53s <none> tybsrhosnode02.defence.local

    So the problem is actually with the registry pod…

    I switched to the “default” namespace, where the registry pod lives, and looked at the events.
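    The switch itself is just the usual project command:

    oc project default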

    [[email protected] ~]# oc get events
    LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
    2m 1d 6396 docker-registry-6-p8lgn Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the following predicates:: CheckServiceAffinity (2), MatchNodeSelector (2).
    7m 8m 7 docker-registry-6-p8lgn Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), MatchNodeSelector (1).
    4s 2m 11 docker-registry-6-ssg8j Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the following predicates:: CheckServiceAffinity (2), MatchNodeSelector (2).

    I logged into the infra node and checked the space left there:
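    The disk usage report below is simply the output of:

    df -h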

    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/rhel-root 50G 2.0G 49G 4% /
    devtmpfs 7.8G 0 7.8G 0% /dev
    /dev/sda1 1014M 186M 829M 19% /boot
    /dev/sdd2 50G 187M 50G 1% /srv/logging
    /dev/sdc1 300G 15G 286G 5% /srv/nfs
    /dev/sdd1 51G 33M 51G 1% /srv/metrics
    /dev/mapper/rhel-var 15G 15G 20K 100% /var
    /dev/mapper/rhel-tmp 1014M 33M 982M 4% /tmp
    /dev/mapper/rhel-usr_local_bin 1014M 33M 982M 4% /usr/local/bin

    Looks like we have a storage problem on the infra node: /var is 100% full.

    A quick & dirty solution is to clear some log files from the node.

    Go to /var/log. You will see huge log files there. Delete them.
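    For example (a hedged sketch; the file name below is a placeholder, so check what is actually consuming the space before deleting anything):

    # find the largest logs under /var/log
    du -xh /var/log | sort -h | tail
    # remove (or truncate) the biggest offenders, e.g.:
    rm -f /var/log/<huge-log-file>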

    Now check the space

    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/rhel-root 50G 2.0G 49G 4% /
    devtmpfs 7.8G 0 7.8G 0% /dev
    /dev/sda1 1014M 186M 829M 19% /boot
    /dev/sdd2 50G 187M 50G 1% /srv/logging
    /dev/sdc1 300G 15G 286G 5% /srv/nfs
    /dev/sdd1 51G 33M 51G 1% /srv/metrics
    /dev/mapper/rhel-var 15G 3.1G 12G 21% /var
    /dev/mapper/rhel-tmp 1014M 33M 982M 4% /tmp
    /dev/mapper/rhel-usr_local_bin 1014M 33M 982M 4% /usr/local/bin

    Once I freed some space, the pod was scheduled on the infra node:

    NAME READY STATUS RESTARTS AGE
    docker-registry-6-ssg8j 1/1 Running 0 9m
    registry-console-1-4xj6p 1/1 Running 1 1d
    router-3-46m1h 1/1 Running 0 1d

     

    Everything seemed fine. I restarted the deployment of my application by triggering the corresponding Jenkins pipeline (which in turn triggers a template in my OpenShift cluster and starts the S2I process).

    Now I get this:

     

    Starting S2I Java Build …..
    S2I source build for Maven detected
    Found pom.xml …
    Running 'mvn -Dmaven.repo.local=/tmp/artifacts/m2 package -Dfabric8.skip=true -Ddb=postgres -DskipTests=true -pl etkilesim-yonetimi/bilgi-edinme-yonetimi --also-make '
    Picked up JAVA_TOOL_OPTIONS: -Duser.home=/home/jboss -Duser.name=jboss
    Apache Maven 3.3.9 (Red Hat 3.3.9-2.8)
    Maven home: /opt/rh/rh-maven33/root/usr/share/maven
    Java version: 1.8.0_141, vendor: Oracle Corporation
    Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-1.b16.el7_3.x86_64/jre
    Default locale: en_US, platform encoding: ANSI_X3.4-1968
    OS name: "linux", version: "3.10.0-693.2.1.el7.x86_64", arch: "amd64", family: "unix"
    Picked up JAVA_TOOL_OPTIONS: -Duser.home=/home/jboss -Duser.name=jboss
    [INFO] Scanning for projects…
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/springframework/boot/spring-boot-dependencies/1.5.8.RELEASE/spring-boot-dependencies-1.5.8.RELEASE.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/springframework/boot/spring-boot-dependencies/1.5.8.RELEASE/spring-boot-dependencies-1.5.8.RELEASE.pom (100 KB at 21.6 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/com/fasterxml/jackson/jackson-bom/2.8.10/jackson-bom-2.8.10.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/com/fasterxml/jackson/jackson-bom/2.8.10/jackson-bom-2.8.10.pom (11 KB at 16.8 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/com/fasterxml/jackson/jackson-parent/2.8/jackson-parent-2.8.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/com/fasterxml/jackson/jackson-parent/2.8/jackson-parent-2.8.pom (8 KB at 40.5 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/com/fasterxml/oss-parent/27/oss-parent-27.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/com/fasterxml/oss-parent/27/oss-parent-27.pom (20 KB at 65.9 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/apache/logging/log4j/log4j-bom/2.7/log4j-bom-2.7.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/apache/logging/log4j/log4j-bom/2.7/log4j-bom-2.7.pom (6 KB at 26.3 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/apache/apache/9/apache-9.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/apache/apache/9/apache-9.pom (15 KB at 75.2 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/springframework/spring-framework-bom/4.3.12.RELEASE/spring-framework-bom-4.3.12.RELEASE.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/springframework/spring-framework-bom/4.3.12.RELEASE/spring-framework-bom-4.3.12.RELEASE.pom (6 KB at 9.9 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/springframework/data/spring-data-releasetrain/Ingalls-SR8/spring-data-releasetrain-Ingalls-SR8.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/springframework/data/spring-data-releasetrain/Ingalls-SR8/spring-data-releasetrain-Ingalls-SR8.pom (5 KB at 22.9 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/springframework/data/build/spring-data-build/1.9.8.RELEASE/spring-data-build-1.9.8.RELEASE.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/springframework/data/build/spring-data-build/1.9.8.RELEASE/spring-data-build-1.9.8.RELEASE.pom (7 KB at 62.3 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/springframework/integration/spring-integration-bom/4.3.12.RELEASE/spring-integration-bom-4.3.12.RELEASE.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/springframework/integration/spring-integration-bom/4.3.12.RELEASE/spring-integration-bom-4.3.12.RELEASE.pom (9 KB at 41.5 KB/sec)
    Downloading: http://192.168.63.121:8081/repository/maven-public/org/springframework/security/spring-security-bom/4.2.3.RELEASE/spring-security-bom-4.2.3.RELEASE.pom
    Downloaded: http://192.168.63.121:8081/repository/maven-public/org/springframework/security/spring-security-bom/4.2.3.RELEASE/spring-security-bom-4.2.3.RELEASE.pom (5 KB at 21.7 KB/sec)
    /usr/local/bin/mvn: line 9: 49 Killed $M2_HOME/bin/mvn “[email protected]
    Aborting due to error code 137 for Maven build
    error: build error: non-zero (13) exit code from registry.access.redhat.com/jboss-fuse-6/[email protected]:d106c79f3270426a24b318cac91435519bc03523a2c8b43a5beeb6e0d33a9abd

     

    I checked the node that the builder pod was scheduled on and tried manually fetching the image from the Red Hat registry; it worked!
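    The manual pull was roughly the following (the image name is obfuscated in the error output above, so a placeholder is used here):

    docker pull registry.access.redhat.com/jboss-fuse-6/<image>@sha256:d106c79f3270426a24b318cac91435519bc03523a2c8b43a5beeb6e0d33a9abd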

    So, what was the problem? I created a ticket and am waiting for the response.

    To be continued…

     
  • Serdar Osman Onur 11:01 am on January 21, 2019 Permalink | Reply
    Tags: HeapDump

    OpenShift Taking a HeapDump and Displaying Class Histogram

    heap dump

    histogram

    • In most cases, your Java process will be the only process running inside the container and will have process id 1 (see the hedged commands after this list)
    • anket-yonetimi-1-3h38r : name of the pod
    • anket-yonetimi : name of the container
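    The commands themselves did not survive the export of this post; a hedged reconstruction using the names above (this assumes jmap from a full JDK is available inside the container, and process id 1 as noted):

    # class histogram of the Java process (pid 1) inside the container
    oc exec anket-yonetimi-1-3h38r -c anket-yonetimi -- jmap -histo 1
    # heap dump written to /tmp inside the pod; copy it out afterwards with oc rsync
    oc exec anket-yonetimi-1-3h38r -c anket-yonetimi -- jmap -dump:live,format=b,file=/tmp/heap.hprof 1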
     
  • Serdar Osman Onur 10:58 am on January 21, 2019 Permalink | Reply
    Tags: PowerShell

    Emulate Linux’s top command in Windows PowerShell:

    while (1) {ps | sort -desc cpu | select -first 30; sleep -seconds 2; cls; write-host "Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName"; write-host "------- ------ ----- ----- ----- ------ -- -----------"}

     
  • Serdar Osman Onur 8:04 am on December 24, 2018 Permalink | Reply
    Tags: openshift-limits-quotas

    Creating a limit range for an OpenShift project 

    Create the below yaml file:
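    The YAML itself did not survive the export of this post; below is a minimal sketch of what such a LimitRange file can look like (the name and all values are illustrative, not the originals), saved as the tybsdev_limit_range file referenced in the command below:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: tybsdev-limit-range    # illustrative name
    spec:
      limits:
      - type: Container
        max:                # hard ceiling per container
          cpu: "2"
          memory: 2Gi
        min:                # hard floor per container
          cpu: 50m
          memory: 64Mi
        default:            # limit applied when a container specifies none
          cpu: 500m
          memory: 512Mi
        defaultRequest:     # request applied when a container specifies none
          cpu: 100m
          memory: 256Mi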

    Run the below command:

    oc create -f tybsdev_limit_range -n tybsdev

    Format of this command is  $ oc create -f <limit_range_file> -n <project>

    Check that LimitRange has been added
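    A hedged example of that check (LimitRange has the short name "limits" in oc):

    oc get limits -n tybsdev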

    View your limit settings
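    For example, using the illustrative name from the YAML sketch above:

    oc describe limits tybsdev-limit-range -n tybsdev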

    Remove any active limit range to stop enforcing the limits on a project:

    $ oc delete limits <limit_name>

     
  • Serdar Osman Onur 1:55 pm on November 13, 2018 Permalink | Reply
    Tags: , ,   

    error: build error: Failed to push image – OpenShift v3.6 

    We had the below problem while trying to deploy an application on OpenShift version 3.6. The build was successful but it failed trying to push the image to the registry:

    Copying Maven artifacts from /tmp/src/XX/XXX/XXXX/target to /deployments …

    Running: cp *-SNAPSHOT.jar /deployments

    … done

    Pushing image docker-registry.default.svc:5000/tybsdev/XXXX:latest …

    Registry server Address:

    Registry server User Name: serviceaccount

    Registry server Email: [email protected]

    Registry server Password: <<non-empty>>

    error: build error: Failed to push image: Get https://docker-registry.default.svc:5000/v1/_ping: dial tcp: lookup docker-registry.default.svc on 172.20.30.2:53: no such host

    I checked that the failed build pod was on Node 1, so I logged in to Node 1 and tried to log in to the registry:
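    The login attempt itself was not captured here; it was something along these lines:

    docker login -u admin -p $(oc whoami -t) docker-registry.default.svc:5000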

    And got the below message:

    Error response from daemon: Get https://docker-registry.default.svc:5000/v1/users/: dial tcp: lookup docker-registry.default.svc on 172.20.30.2:53: no such host

    Adding a line for the registry to /etc/hosts of Node 1 resolved the problem:
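    The entry looked roughly like this (the IP must be the ClusterIP of the docker-registry service in the default project, as reported by oc get svc docker-registry -n default; the address below is illustrative):

    # appended to /etc/hosts on Node 1
    172.30.253.125   docker-registry.default.svc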

     

     
  • Serdar Osman Onur 8:45 am on October 26, 2018 Permalink | Reply
    Tags: , , ,   

    gave up on Build for BuildConfig tybsdev/basvuru-arayuz (0) due to fatal error: the LastVersion(1) on build config xxx does not match the build request LastVersion(0) 

    The OpenShift builder pod failed with no error message in the oc logs -f pod_name output.

    **oc get events on the master showed this:

    Type: Warning
    Reason:BuildConfigInstantiateFailed
    Source: buildconfig-controller
    Message: gave up on Build for BuildConfig tybsdev/basvuru-arayuz (0) due to fatal error: the LastVersion(1) on build config tybsdev/basvuru-arayuz does not match the build request LastVersion(0)

    **oc describe pod said this:
    Events:
    FirstSeen LastSeen Count From SubObjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    25m 25m 1 default-scheduler Normal Scheduled Successfully assigned basvuru-arayuz-1-build to tybsrhosnode02.defence.local
    <invalid> <invalid> 1 kubelet, tybsrhosnode02.defence.local spec.containers{sti-build} Normal Pulled Container image "openshift3/ose-sti-builder:v3.6.173.0.21" already present on machine
    <invalid> <invalid> 1 kubelet, tybsrhosnode02.defence.local spec.containers{sti-build} Normal Created Created container
    <invalid> <invalid> 1 kubelet, tybsrhosnode02.defence.local spec.containers{sti-build} Normal Started Started container

     

    **oc get pods -o wide showed that the build pod was scheduled on node2

    node2 showed no problems:
    **[[email protected] ~]# oc describe node tybsrhosnode02.defence.local
    Name: tybsrhosnode02.defence.local
    Role:
    Labels: beta.kubernetes.io/arch=amd64
    beta.kubernetes.io/os=linux
    kubernetes.io/hostname=tybsrhosnode02.defence.local
    logging-infra-fluentd=true
    region=primary
    Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
    Taints: <none>
    CreationTimestamp: Wed, 13 Sep 2017 14:16:02 +0300
    Phase:
    Conditions:
    Type Status LastHeartbeatTime LastTransitionTime Reason Message
    ---- ------ ----------------- ------------------ ------ -------
    OutOfDisk False Fri, 26 Oct 2018 11:53:16 +0300 Wed, 10 Oct 2018 20:09:05 +0300 KubeletHasSufficientDisk kubelet has sufficient disk space available
    MemoryPressure False Fri, 26 Oct 2018 11:53:16 +0300 Wed, 10 Oct 2018 20:09:05 +0300 KubeletHasSufficientMemory kubelet has sufficient memory available
    DiskPressure False Fri, 26 Oct 2018 11:53:16 +0300 Wed, 10 Oct 2018 20:09:05 +0300 KubeletHasNoDiskPressure kubelet has no disk pressure
    Ready True Fri, 26 Oct 2018 11:53:16 +0300 Wed, 10 Oct 2018 20:08:54 +0300 KubeletReady kubelet is posting ready status
    Addresses: 172.20.30.224,172.20.30.224,tybsrhosnode02.defence.local
    Capacity:
    cpu: 8
    memory: 131865388Ki
    pods: 80
    Allocatable:
    cpu: 6
    memory: 125618988Ki
    pods: 80
    System Info:
    Machine ID: dfeb0732c1464538abc9eab4169868cf
    System UUID: 42184533-35FC-C47E-B84F-223AE30C8645
    Boot ID: e9e34a15-7d8e-4cfc-b995-3871f849f1d3
    Kernel Version: 3.10.0-693.2.1.el7.x86_64
    OS Image: OpenShift Enterprise
    Operating System: linux
    Architecture: amd64
    Container Runtime Version: docker://1.12.6
    Kubelet Version: v1.6.1+5115d708d7
    Kube-Proxy Version: v1.6.1+5115d708d7
    ExternalID: tybsrhosnode02.defence.local
    Non-terminated Pods: (11 in total)
    Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
    --------- ---- ------------ ---------- --------------- -------------
    amq broker-drainer-2-19w50 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    bpm-test proj1-1-h2kbs 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    logging logging-fluentd-tznxt 100m (1%) 100m (1%) 512Mi (0%) 512Mi (0%)
    process-server a1501-bpm-app-postgresql-4-bsrpv 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    sso sso-postgresql-1-gqxfr 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    tybsdev bilgi-edinme-yonetimi-1-km0dh 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    tybsdev infra-test-1-c3p9q 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    tybsdev komite-yonetimi-arayuz-1-pb11l 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    tybsdev panel-yonetimi-arayuz-1-393cr 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    tybsdev program-cagri-yonetimi-arayuz-1-x7q16 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    tybsdev teydeb-1-jt82x 0 (0%) 0 (0%) 0 (0%) 0 (0%)
    Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    CPU Requests CPU Limits Memory Requests Memory Limits
    ------------ ---------- --------------- -------------
    100m (1%) 100m (1%) 512Mi (0%) 512Mi (0%)
    Events: <none>

     

    SSHed into node2
    Tried to fetch the builder image manually from node2
    **docker pull docker-registry.default.svc:5000/openshift3/ose-sti-builder:v3.6.173.0.21

    it said:
    Trying to pull repository docker-registry.default.svc:5000/openshift3/ose-sti-builder …
    unable to retrieve auth token: 401 unauthorized

     

    Tried to pull the application’s image
    **docker pull docker-registry.default.svc:5000/tybsdev/basvuru-arayuz

    it said:
    Using default tag: latest
    Trying to pull repository docker-registry.default.svc:5000/tybsdev/basvuru-arayuz …
    unable to retrieve auth token: 401 unauthorized

    I logged in to the registry:
    **docker login -u admin -p $(oc whoami -t) docker-registry.default.svc:5000

    Tried to pull the image again:
    **docker pull docker-registry.default.svc:5000/tybsdev/basvuru-arayuz

    It said:
    Using default tag: latest
    Trying to pull repository docker-registry.default.svc:5000/tybsdev/basvuru-arayuz …
    manifest unknown: manifest unknown

    Did the same for builder image:

    ** docker pull docker-registry.default.svc:5000/openshift3/ose-sti-builder:v3.6.173.0.21

    it said:
    Trying to pull repository docker-registry.default.svc:5000/openshift3/ose-sti-builder …
    manifest unknown: manifest unknown

     

     

    Deleted the failed pod and re-ran the parameterized Jenkins pipeline for deploying this application.
    Failed again.
    ** oc get events still displayed the original problem.

     

    Exported build configuration. Contents are below:

    ** oc export bc basvuru-arayuz
    apiVersion: v1
    kind: BuildConfig
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"BuildConfig","metadata":{"annotations":{"openshift.io/generated-by":"OpenShiftNewApp"},"creationTimestamp":null,"labels":{"app":"basvuru-arayuz","template":"tybs-s2i-newapp-template"},"name":"basvuru-arayuz","namespace":"tybsdev"},"spec":{"nodeSelector":null,"output":{"to":{"kind":"ImageStreamTag","name":"basvuru-arayuz:latest"}},"postCommit":{},"resources":{},"source":{"contextDir":"dev","git":{"ref":"develop","uri":"http://serdar.onur:[email protected]:7990/scm/tybs/tybs_code.git"},"type":"Git"},"strategy":{"sourceStrategy":{"env":[{"name":"ARTIFACT_COPY_ARGS","value":"*-SNAPSHOT.jar"},{"name":"ARTIFACT_DIR","value":"basvuru/basvuru-arayuz/target/"},{"name":"MAVEN_ARGS","value":"package -Dfabric8.skip=true -Ddb=postgres -DskipTests=true -pl basvuru/basvuru-arayuz --also-make"},{"name":"MAVEN_MIRROR_URL","value":"http://192.168.63.121:8081/repository/maven-public/"}],"from":{"kind":"ImageStreamTag","name":"fis-java-openshift:latest","namespace":"openshift"}},"type":"Source"},"triggers":[{"github":{"secret":"wjj3wH0ppJ9nNc4lQC6_"},"type":"GitHub"},{"generic":{"secret":"jZEi04f0yjPpkDYbaxR4"},"type":"Generic"},{"type":"ConfigChange"},{"imageChange":{},"type":"ImageChange"}]},"status":{"lastVersion":0}}
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        app: basvuru-arayuz
        template: tybs-s2i-newapp-template
      name: basvuru-arayuz
    spec:
      nodeSelector: null
      output:
        to:
          kind: ImageStreamTag
          name: basvuru-arayuz:latest
      postCommit: {}
      resources: {}
      runPolicy: Serial
      source:
        contextDir: dev
        git:
          ref: develop
          uri: http://serdar.onur:[email protected]:7990/scm/tybs/tybs_code.git
        type: Git
      strategy:
        sourceStrategy:
          env:
          - name: ARTIFACT_COPY_ARGS
            value: '*-SNAPSHOT.jar'
          - name: ARTIFACT_DIR
            value: basvuru/basvuru-arayuz/target/
          - name: MAVEN_ARGS
            value: package -Dfabric8.skip=true -Ddb=postgres -DskipTests=true -pl basvuru/basvuru-arayuz --also-make
          - name: MAVEN_MIRROR_URL
            value: http://192.168.63.121:8081/repository/maven-public/
          from:
            kind: ImageStreamTag
            name: fis-java-openshift:latest
            namespace: openshift
        type: Source
      triggers:
      - github:
          secret: wjj3wH0ppJ9nNc4lQC6_
        type: GitHub
      - generic:
          secret: jZEi04f0yjPpkDYbaxR4
        type: Generic
      - type: ConfigChange
      - imageChange: {}
        type: ImageChange
    status:
      lastVersion: 0

     

     
    • Serdar Osman Onur 7:30 am on October 30, 2018 Permalink | Reply

      Red Hat Response

      Login the registry with the cluster-admin user.

      docker -D login -u $(oc whoami) -p $(oc whoami -t) docker-registry.default.svc:5000

      After successful login

      docker pull openshift3/ose-sti-builder

      > I think we need to solve the “manifest unknown: manifest unknown”. I researched but could not find a useful post on the net.
      This error message means this image is not available in the registry or the tag is missing.

      Verify by using docker search: # docker search registry.access.redhat.com/openshift3/ose-sti-builder

      For the build issue, please increase the build log level and capture the builder pod logs:

      oc start-build --build-loglevel=5
      oc logs -f

  • Serdar Osman Onur 11:44 am on August 13, 2018 Permalink | Reply
    Tags: , ,   

    Taking Config Files Outside of a POD – OpenShift

    There are configuration files that affect the way your application works and behaves. These files get deployed together with your application. So, when you deploy your application (in this case Red Hat SSO) these config files will also be deployed inside a POD. If you want to edit your configuration you will need to rsh into your pod and make changes to these configuration files. How about your PODs being destroyed and re-created on another node? What happens to the changes in your configuration files? They are gone!

    There are a couple of alternative approaches you can follow here. If you use a configmap or mount a PV, in both cases they become a part of the “DC” and when a pod is destroyed & re-created it will keep using the configmap or refer to the mounted PV. You get to keep any modifications you have made to your config files when a pod gets destroyed and re-created.

    You can use configmaps/secrets

    Using config maps is “like” mounting a volume to your POD.

    my-conf]# oc create configmap my-conf --from-file=. --dry-run -o yaml
    oc set volume dc/my-dc --configmap-name my-conf --mount-path /test --add=true

    You can mount secrets in a similar way:
    oc create secret generic my-secret --from-file=. --dry-run -o yaml
    oc set volume dc/my-dc --secret-name my-secret --mount-path /test

    In this case you will need to use the "oc edit" command to make changes to your configmaps, but the problem is that, for these changes to be reflected in your running application, you will need to re-deploy it (this is what Red Hat support wrote back to me…).
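    A hedged sketch of that flow, reusing the names from the example above:

    # edit the configmap in place
    oc edit configmap my-conf
    # trigger a new deployment so the running pods pick up the change
    oc rollout latest dc/my-dc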

    You can use PersistentVolumes

    In this scenario, you need to create a PersistentVolume, create a PersistentVolumeClaim and bind the POD to the PV using this claim.

    Your PV needs to include the config files that you want to use. A way to go about this could be:

    a) Copy all the files in your config directory to the PV
    b) Mount the PV to your config directory (inside your POD)

    Be Careful! You need to do a) before b) otherwise you will lose all the files and folders inside the config directory of your POD. The good thing about PersistentVolume usage is that you don’t need to re-deploy your PODs to your OpenShift cluster.
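    A hedged sketch of step b) (the claim name and mount path are illustrative; the claim must already be bound to the PV that holds the copied config files):

    oc set volume dc/my-dc --add --name=config \
      --type=persistentVolumeClaim --claim-name=my-config-pvc \
      --mount-path=/opt/eap/standalone/configuration    # illustrative path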

     
  • Serdar Osman Onur 7:52 am on August 2, 2018 Permalink | Reply
    Tags: , , , ,   

    Red Hat SSO 7.1 Persistent Application Template Deployment on OpenShift Failed

    I was having a problem deploying the persistent (PostgreSQL) Red Hat SSO 7.1 application on OpenShift. For some reason, my PostgreSQL pod was stuck in the ContainerCreating state. I saw the below message when I described the sso-postgresql pod:

    FailedMount Unable to mount volumes for pod “sso-postgresql-1-3gjgf_tybsdev(b652abc6-9002-11e8-a82a-0050569897ab)”: timeout expired waiting for volumes to attach/mount for pod “tybsdev”/”sso-postgresql-1-3gjgf”. list of unattached/unmounted volumes=[sso-postgresql-pvol]

    “mount.nfs: Connection refused ”

    I thought the problem was about my PV/PVC configurations. I checked them and they seemed alright. I tried changing the accessMode of the related PV. I changed it from ReadWriteOnce to ReadWriteMany just to try and it didn’t work.

    Then I checked the NFS service on the NFS server ("systemctl status nfs").
    The NFS service was stopped!

    I started the NFS service, changed the accessMode of the PV back to ReadWriteOnce, and re-started the installation process. It worked!
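    For reference, the commands on the NFS server were along these lines (on RHEL 7 the unit may also be named nfs-server; nfs is an alias for it):

    systemctl status nfs
    systemctl start nfs
    systemctl enable nfs    # so it comes back up after a reboot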

     
  • Serdar Osman Onur 7:19 am on June 21, 2018 Permalink | Reply
    Tags: , , ,   

    OpenShift POD in CrashLoopBackOff State

    *OpenShift V3.6
    From time to time, PODs in an OCP cluster can get stuck in the CrashLoopBackOff state. There are various reasons for this. Here I will talk about one exceptional case of getting stuck in this state.

    I opened a support ticket about this and I had a remote session to solve the problem together with a Red Hat support personnel.

    The thing was that, somehow, at some point, for an unknown reason (possibilities are network issues, proxy issues, etc.), the node that this pod was being scheduled on had not received the COMPLETE IMAGE to be used for this deployment. There was a missing layer! Once that missing layer was manually pulled on the failing NODE, the problem was gone and the POD was up & running again.

    There are 2 things to be done after SSHing to the target NODE.
    1- Login to the DOCKER REGISTRY
    docker login -u admin -p $(oc whoami -t) docker-registry.default.svc:5000

    2-Manually pull the image
    docker pull docker-registry.default.svc:5000/tybsdev/yazi-sablon-arayuz

    In step 2 you will see the missing layer being pulled from the registry.

     
  • Serdar Osman Onur 12:17 pm on June 6, 2018 Permalink | Reply
    Tags: , , ,   

    OpenShift – Basic Deployment Operations 

    Starting a deployment: (start a deployment manually)

    Viewing a deployment: (get basic information about all the available revisions of your application)

    Canceling a deployment: (cancel a running or stuck deployment process)

    Retrying a deployment: (retry a failed deployment)

    Rolling back a deployment: (If no revision is specified with --to-revision, then the last successfully deployed revision will be used)
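    The commands themselves were lost when this post was exported; they are the standard oc rollout operations for a DeploymentConfig (a placeholder name is used below):

    oc rollout latest dc/<dc_name>                  # start a deployment manually
    oc rollout history dc/<dc_name>                 # view revisions of a deployment
    oc rollout cancel dc/<dc_name>                  # cancel a running or stuck deployment
    oc rollout retry dc/<dc_name>                   # retry a failed deployment
    oc rollout undo dc/<dc_name> --to-revision=1    # roll back; omit --to-revision to use the last successful revision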

     

    https://docs.openshift.com/container-platform/3.6/dev_guide/deployments/basic_deployment_operations.html

     