Recent Updates

  • Serdar Osman Onur 2:41 pm on April 26, 2018 Permalink | Reply
    Tags: docker

    Could not transfer artifact "x" from/to mirror. Failed to transfer file. Return code is: 500. 

    Could not transfer artifact … from/to mirror.default (…/repository/maven-public/): Failed to transfer file: … Return code is: 500.

    Error occurred while executing a write operation to database ‘component’ due to limited free space on the disk (219 MB). The database is now working in read-only mode. Please close the database (or stop OrientDB), make room on your hard drive and then reopen the database. The minimal required space is 256 MB.

    This error caused our Jenkins pipelines and OpenShift builds to fail. Apparently, the VM hosting the Docker container for Nexus had run out of disk space. We increased the disk space and the problem is gone now.
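
    For future reference, a quick way to confirm and reclaim space on the VM hosting the Nexus container is sketched below; the paths and the use of docker prune are assumptions, so adjust them to your setup:

    # check overall disk usage on the VM
    df -h
    # find the largest directories under the Docker data root (path may differ)
    du -sh /var/lib/docker/* | sort -h | tail
    # remove stopped containers, dangling images and unused networks
    docker system prune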

     
  • Serdar Osman Onur 6:36 am on April 20, 2018 Permalink | Reply

    What are OpenShift Node Selectors and Pod Selectors

    Node selectors are parameters you can pass to the OpenShift CLI to target specific nodes.

    Pod selectors are parameters you can pass to the OpenShift CLI to target specific pods.

    Not all CLI commands require these selectors, but some accept, or even require, both.

    Consider the command below. This can be used to select specific pods on specific nodes:

    $ oc adm manage-node --selector=<node_selector> --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]

    An example would be something like below:

    oc adm manage-node --selector=region=primary --list-pods --pod-selector=app=basvuru-arayuz

    In this example, region=primary is a label that I use on my cluster’s schedulable nodes.
    “app” is a label that I use for my deployments. Each bc/dc/service/route/pod carries this “app” label. In the example above, “basvuru-arayuz” is the application name of one of my deployments.

    This example command lists all pods belonging to the “basvuru-arayuz” application that are deployed on schedulable nodes.
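
    As a rough sketch (the node name and app label below are just examples), you could label a node yourself and then combine both selectors:

    # label a schedulable node
    oc label node node01.example.com region=primary
    # list the pods of one application running on nodes carrying that label
    oc adm manage-node --selector=region=primary --list-pods --pod-selector=app=my-app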

     
  • Serdar Osman Onur 10:28 am on April 18, 2018 Permalink | Reply

    Attachments: oc-describe-node1, oc-describe-node2, oc-describe-pod-proj1-1-build

    No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), Insufficient pods (1), MatchNodeSelector (1).

    Hi,

    I am trying to deploy a new BPM application (proj1) on OpenShift. The thing is, the build pod is stuck in the Pending state, and the Events section says this:

    “No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), Insufficient pods (1), MatchNodeSelector (1).”

    I have 2 compute nodes in my cluster labeled as “region=primary”.

    oc get nodes prints this:

    NAME STATUS AGE VERSION
    tybsrhosinode01.defence.local Ready 215d v1.6.1+5115d708d7
    tybsrhosmaster01.defence.local Ready,SchedulingDisabled 215d v1.6.1+5115d708d7
    tybsrhosnode01.defence.local Ready 215d v1.6.1+5115d708d7
    tybsrhosnode02.defence.local Ready 215d v1.6.1+5115d708d7

    oc get pods prints this:

    NAME READY STATUS RESTARTS AGE
    basvuru-arayuz-1-ddwbs 1/1 Running 0 3d
    hakem-yonetimi-arayuz-1-0x379 1/1 Running 0 3d
    infra-test-6-h02nb 0/1 Pending 0 22h
    program-cagri-yonetimi-arayuz-1-cmcbb 1/1 Running 0 3d
    proj1-1-build 0/1 Pending 0 1h
    proj1-postgresql-1-deploy 0/1 Pending 0 1h

    I am attaching the outputs for oc describe node/node1, oc describe node/node2 and oc describe pod/proj1-1-build.

    I would like to solve this problem asap since I have a demo coming up.

    Thanks

    Where are you experiencing the behavior? What environment?

    This is our development environment.

    When does the behavior occur? Frequently? Repeatedly? At certain times?

    Never happened before.

    What information can you provide around timeframes and the business impact?

    I have a demo coming up so I would like to get this fixed asap.

    ************* ******************* ***************** ****************

    More info:

    “No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), Insufficient pods (1), MatchNodeSelector (1).”

    I would expect it to say:

    “No nodes are available that match all of the following predicates:: CheckServiceAffinity (2), Insufficient pods (2), MatchNodeSelector (2).” since I have 2 compute nodes. I feel like one of the nodes is not considered at all.

    Another thing, 1 of the compute nodes (node1) seems to be out of disk:

    Filesystem 1M-blocks Used Available Use% Mounted on
    /dev/mapper/rhel-root 46058 2081 43978 5% /
    devtmpfs 3901 0 3901 0% /dev
    tmpfs 3912 0 3912 0% /dev/shm
    tmpfs 3912 9 3903 1% /run
    tmpfs 3912 0 3912 0% /sys/fs/cgroup
    /dev/sda1 1014 185 830 19% /boot
    /dev/mapper/rhel-tmp 1014 33 982 4% /tmp
    /dev/mapper/rhel-var 15350 15337 14 100% /var
    /dev/mapper/rhel-usr_local_bin 1014 33 982 4% /usr/local/bin
    tmpfs 783 0 783 0% /run/user/0
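
    To see what was actually filling /var, something like the following helps (assuming root access on the node):

    # largest directories on the /var filesystem only
    du -xsh /var/* | sort -h | tail
    du -xsh /var/log/* | sort -h | tail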

    ****************** ******************* ***************** ***************

    Further info:

    I deleted all the other apps in the target namespace, and this time the build pod was successfully scheduled.

    1- Could this be a port issue?
    How should I manage the pods of my applications deployed in a namespace? Is it possible to have this kind of clash?

    2- After “1” is clarified, we should still think about why the error log says (1) instead of (2). I have 2 nodes, so it should have been (2).

    *************** ***************** ****************** ************** **************

    Red Hat Response

    From the attachments provided, the following can be seen for the “tybsrhosnode01.defence.local” node:

    —>

    Conditions:
    Type Status LastHeartbeatTime LastTransitionTime Reason Message
    ---- ------ ----------------- ------------------ ------ -------
    OutOfDisk True Mon, 16 Apr 2018 15:03:15 +0300 Mon, 16 Apr 2018 11:45:56 +0300 KubeletOutOfDisk out of disk space
    MemoryPressure False Mon, 16 Apr 2018 15:03:15 +0300 Mon, 16 Apr 2018 11:45:56 +0300 KubeletHasSufficientMemory kubelet has sufficient memory available

    —>

    Here the node clearly seems out of disk space.

    Hence, deleting the other applications scheduled on that node freed up disk space for the build pod “proj1-1-build”.

    When we run:

    $ oc get pods -o wide

    we can see which pod is scheduled on which node.

    I will answer your questions one by one:

    1- Could this be a port issue?
    How should I manage the pods of my applications deployed in a namespace? Is it possible to have this kind of clash?

    No, this is not a port issue; it is due to the unavailability of resources on the node to schedule a pod. Pods in a project can be scheduled on a node using the parameters in the scheduler.json file.

    $ vi /etc/origin/master/scheduler.json
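
    For reference, the predicates the scheduler evaluates (including MatchNodeSelector) are listed in that file. A quick way to inspect them, and to apply a change, is sketched below; the master service name varies by install, so treat it as an assumption:

    # list the configured predicate/priority names
    grep '"name"' /etc/origin/master/scheduler.json
    # restart the master after editing the file
    systemctl restart atomic-openshift-master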

    2- After “1” is clarified, we should still think about why the error log says (1) instead of (2). I have 2 nodes, so it should have been (2).

    The below error message means:

    ——— ——– —– —- ————- ——– —— ——-
    1h 2m 208 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), Insufficient pods (1), MatchNodeSelector (1).

    –>

    No nodes are available that match all of the following predicates:: CheckServiceAffinity (1 node failed), Insufficient pods (1 node failed), MatchNodeSelector (1 node failed).

    This is because of the node “tybsrhosnode01.defence.local” which does not meet the requirement to schedule a pod.

    My response

    Yeah, while checking the output of the describe command I realized the same thing. Obviously, there is a disk space problem with “node1”.
    Deleting existing applications is out of the question, so I think I need to add more disk space to this VM.

    What I don’t understand is: I have 2 compute nodes, node1 and node2. I see node1 is out of space, but what about node2? Why is it not used?
    The error message should have said “CheckServiceAffinity (2), Insufficient pods (2), MatchNodeSelector (2).” since it cannot schedule the pod on either of the 2 nodes.

    ***** *************** ************* ************

    Solution and Conclusion

    We saw that the build was failing with the following error:

    No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), Insufficient pods (1), MatchNodeSelector (1).

    We checked both the nodes, node1 and node2.

    node1 – its disk was full and no pods were deployed on it. As we checked further, the /var/log folder was filled with the messages file and its rotated copies.

    All the above led to no pods being deployed on node1.
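
    A sketch of how space could have been reclaimed in that situation (the retention values are examples only):

    # shrink the systemd journal
    journalctl --vacuum-size=200M
    # remove already-rotated copies of the messages log
    rm -f /var/log/messages-*
    # force a rotation so the remaining live logs are compressed
    logrotate -f /etc/logrotate.conf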

    node2 – why pods could not be scheduled on node2 was now the question for us. We checked the node2 description, and the following was the reason:

    Capacity:
    cpu: 1
    memory: 8010972Ki
    pods: 10
    Allocatable:
    cpu: 1
    memory: 7908572Ki
    pods: 10

    node2 already had 10 pods allocated, so no more pods could be scheduled on that node.
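
    For completeness: that 10-pod ceiling comes from the kubelet settings (with 1 CPU, a pods-per-core limit of 10 would produce exactly this capacity). On OpenShift 3.x it can be raised in the node configuration; the values and the service name below are assumptions for an RPM-based install, so verify them against your version:

    # on the node: edit /etc/origin/node/node-config.yaml and set, for example
    #   kubeletArguments:
    #     pods-per-core:
    #     - "20"
    #     max-pods:
    #     - "40"
    vi /etc/origin/node/node-config.yaml
    systemctl restart atomic-openshift-node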

    Clearing your doubt about the error message:

    No nodes are available that match all of the following predicates:: CheckServiceAffinity (1), Insufficient pods (1), MatchNodeSelector (1)

    The 1 in the brackets relates solely to node1; node2 is not considered here, as it has already allocated 10 pods.

    As mentioned earlier, 1 means 1 node failed. Here, that node is node1.

     
  • Serdar Osman Onur 9:10 am on April 18, 2018 Permalink | Reply

    Deploying BPM Processes on OpenShift

    When you create a Business Process in Red Hat BPM Suite, it will be living in a hierarchical structure similar to below:

    • Organizational Unit
      • Repository
        • Project
          • Business Process

    So, how do you deploy a Business Process on OpenShift? What is the unit of deployment?

    How do you deploy Business Process on Red Hat OpenShift?

    You can use OpenShift’s S2I (source-to-image) process.
    1- You can deploy your projects using the quickstart templates that already exist in the OpenShift catalog.
    2- You can create your own templates and use them instead to create your “oc objects” inside your OpenShift cluster.

    What is the unit of deployment in OpenShift?

    The unit of deployment is the “project”. You deploy BPM “projects” on OpenShift, not individual business processes.
    Therefore, if your project has multiple business processes, they will all scale together since they will all reside in a single pod. If you want maximum modularity and scalability, consider putting a single business process in each project.

    This post is based on answers to a Red Hat support case.

     
  • Serdar Osman Onur 8:42 am on April 18, 2018 Permalink | Reply

    Updating Sub Processes – Red Hat JBoss BPM Suite

    Using a process as a sub-process in another one introduces some dependencies between the 2 BPM processes.

    If the child process (sub-process) is to be updated, the following steps should be applied (a command-line sketch follows the list):

    • Implement changes in your child process
    • Increase the maven version of your child project
    • Build
    • Update parent’s pom.xml so it matches the new child maven version
    • Build & Deploy
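
    If you manage the kjar poms outside Business Central, the version bump and the parent update can be scripted roughly like this; the versions-maven-plugin usage and the coordinates are assumptions about your build setup:

    # in the child (sub-process) project: raise its version and install it
    mvn versions:set -DnewVersion=1.1 -DgenerateBackupPoms=false
    mvn clean install
    # in the parent project: point the dependency at the new child version, then build and deploy
    mvn versions:use-dep-version -Dincludes=org.example:child-kjar -DdepVersion=1.1 -DforceVersion=true
    mvn clean install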

    This post is based on answers to a Red Hat support case.

     
  • Serdar Osman Onur 10:52 am on April 12, 2018 Permalink | Reply
    Tags: user-management

    OpenShift – Changing the Password for a User

    If you are using file-based authentication:
    • Go to /etc/openshift/
    • Cat the “admin” file and check if the user you want to change the password for is in there
    • Now run “htpasswd -b admin user_name new_password”

    To check whether you are using file-based authentication:
    Cat the “/etc/origin/master/master-config.yaml” file and check the “identityProviders” section:

    identityProviders:
    - challenge: true
      login: true
      mappingMethod: claim
      name: htpasswd_auth
      provider:
        apiVersion: v1
        file: /etc/openshift/admin
        kind: HTPasswdPasswordIdentityProvider
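
    To verify the change (user_name and new_password are placeholders), logging in as that user should now succeed:

    oc login -u user_name -p new_password
    oc whoami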

     
  • Serdar Osman Onur 7:21 am on April 3, 2018 Permalink | Reply

    Using Sub Processes – Red Hat JBoss BPM Suite

    Hi,

    We are trying to use a process as a subprocess in another one. Say we want to use process 2 in process 1 as a sub process.
    It seems we cannot use process 2 as a subprocess if it is in another repository. Is that correct? I was hoping we would be able to organize our BPM flows by repository and reuse processes across repositories.

    We then tried to add process 2 as a jar archive to the project of process 1, but that didn’t work either.

    Also, we are able to add a flow as a sub-flow in the web interface of BPM Suite if both are in the same repository, even if they are not in the same project. But unfortunately, we just found out that even though we are able to add a process as a subprocess at design time (Web UI), Process 1 will not be able to find Process 2 if they are not in the same project.

    Now this is a big problem for us. This means that if we want to add a process to another one as a sub-process, then both HAVE TO BE in the same PROJECT. This would result in huge project-organization problems for us. Not only that, this would also be a huge problem while deploying our processes to OpenShift because, as far as I know, projects are the units of deployment for OpenShift. You deploy a BPM project onto OpenShift, not a business flow. If a project has multiple processes, then they will all go into the same POD. Not acceptable for the requirements of our project.

    Are we missing something? Is there a way to add processes to other processes as sub-flows(sub processes) if they are not in the same repository or in the same project?
    Would it help to use the latest version of Red Hat BPM Suite?

    Red Hat Support:

    Thank you for contacting Red Hat Global Support Services.

    I believe this article could help you [1].

    In essence: if you have multiple KJARs – i.e. KJAR X with maven coordinates
    org:kjar1:1 and KJAR Y with maven coordinates org:kjar2:1 – and you need to import a process from KJAR Y into a process inside KJAR X, you need to:

    1) Add dependency inside KJAR X to KJAR Y (pom.xml needs to depend on
    org:kjar2:1)
    2) You need to update kmodule.xml of KJAR X, to actually include the KieBase from the KJAR Y, i.e.:

    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
    <kbase name="kbaseX" includes="kbaseY">
    <ksession name="ksession1"/>
    </kbase>
    </kmodule>

    This of course assumes that your KieBase in KJAR Y is properly named.
    [1] https://access.redhat.com/solutions/966573

     

    skipping my response…

    Red Hat Support:

    Hi,

    here is an example I have prepared to make this work.

    Can you please give it a try and let me know if it works at your end?

    1) SUBPROJECT – the project including the child process, which we will later import as a reusable one:

    Using Project Editor -> Knowledge Base and Session, I have configured it as follows:

    • one default kiebase, with one default stateful kiesession – this allows a successful build of this project
    • a second non-default kiebase, with a non-default stateful kiesession – we include this kiebase in the parent project.

    Both kiebases include all packages, by using ‘*’ convention.

    final pom.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.redhat.gss</groupId>
    <artifactId>SUBPROJECT</artifactId>
    <version>1.0</version>
    <packaging>kjar</packaging>
    <name>SUBPROJECT</name>
    <repositories>
    <repository>
    <id>guvnor-m2-repo</id>
    <name>Guvnor M2 Repo</name>
    <url>http://localhost:8080/business-central/maven2/</url>
    </repository>
    </repositories>
    <build>
    <plugins>
    <plugin>
    <groupId>org.kie</groupId>
    <artifactId>kie-maven-plugin</artifactId>
    <version>6.5.0.Final-redhat-19</version>
    <extensions>true</extensions>
    </plugin>
    </plugins>
    </build>
    </project>

    final kmodule.xml:

    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <kbase name="subDefaultKieBase" default="true" eventProcessingMode="stream" equalsBehavior="identity" packages="*">
    <ksession name="subDefaultKieSession" type="stateful" default="true" clockType="realtime"/>
    </kbase>

    <kbase name="weWillImportThisKieBase" default="false" eventProcessingMode="stream" equalsBehavior="identity" packages="*">
    <ksession name="importingThisSession" type="stateful" default="false" clockType="realtime"/>
    </kbase>

    </kmodule>

    2) PARENT PROJECT (I have created it in *different* repository to follow your use case):

    I have added dependency to SUBPROJECT.

    final pom.xml:

     

    <?xml version="1.0" encoding="UTF-8"?>
    <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.redhat.gss</groupId>
    <artifactId>PARENT</artifactId>
    <version>1.0</version>
    <packaging>kjar</packaging>
    <name>PARENT</name>
    <dependencies>
    <dependency>
    <groupId>org.redhat.gss</groupId>
    <artifactId>SUBPROJECT</artifactId>
    <version>1.0</version>
    <scope>compile</scope>
    </dependency>
    </dependencies>
    <repositories>
    <repository>
    <id>guvnor-m2-repo</id>
    <name>Guvnor M2 Repo</name>
    <url>http://localhost:8080/business-central/maven2/</url>
    </repository>
    </repositories>
    <build>
    <plugins>
    <plugin>
    <groupId>org.kie</groupId>
    <artifactId>kie-maven-plugin</artifactId>
    <version>6.5.0.Final-redhat-19</version>
    <extensions>true</extensions>
    </plugin>
    </plugins>
    </build>
    </project>

     

    I have also created a non-default kiebase and included the non-default kiebase from the SUBPROJECT above.

    Final kmodule.xml:

    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <kbase name="nonDefaultParentBase" default="false" eventProcessingMode="stream" equalsBehavior="identity" packages="*" includes="weWillImportThisKieBase">
    <ksession name="nonDefaultSession" type="stateful" default="false" clockType="realtime"/>
    </kbase>

    </kmodule>

    Two catches though:

    1)
    Since the SUBPROJECT is in a different repository, its processes are not displayed in the Called Element menu inside the Web Designer. However, this field is still writable, and I can fill it manually with the ID of my desired process (which can be located in a separate subproject), i.e. “SUBPROJECT.SUBPROCESS”. Unfortunately, you need to know the ID beforehand.

    2) For deployment you should not use the “Build & Deploy” menu – because this will deploy the default kiebase, and we want to deploy the non-default one. Instead, you should use Deploy -> Process Deployments -> New Deployment Unit, configured like this:

    BASIC screen configured with these maven coordinates (PARENT project): org.redhat.gss:PARENT:1.0
    ADVANCED screen configured with Kie Base Name set to ‘nonDefaultParentBase’ and Kie Session Name set to ‘nonDefaultSession’.

    I assume you will be automating the second step in the later environments via the REST API, and it is possible to supply kiebase/kiesession names over the REST API as well.

    Please note that KieBase names are completely arbitrary, so they do not need to match project name or org unit name.
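
    Regarding automating the deployment step over REST: as far as I know, in BPM Suite 6.x the deployment unit id can carry the kbase and ksession names, so a call along these lines should work (host, credentials and the exact endpoint are assumptions to verify against your version's documentation):

    curl -u admin:password -X POST \
      "http://localhost:8080/business-central/rest/deployment/org.redhat.gss:PARENT:1.0:nonDefaultParentBase:nonDefaultSession/deploy"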

    skipping my response…

    Red Hat Support:

    Hi,

    here is what I did:

    1) I cloned repo4 inside business-central
    2) I did a build & deploy of project4 – to make sure it’s installed into the maven repo. Then I undeployed it, because my main deployment is going to be the parent.
    3) I cloned repo3 inside business-central
    4) I checked that its kmodule.xml includes the ‘kiebase4-nondefault’ which is correct.
    5) I executed build&deploy to make sure it’s installed into maven repo
    6) I undeployed it, because I plan to deploy it with different configuration
    7) I deployed it as follows via Deploy->Process Deployments-> New Deployment Unit:
    BASIC:example:proj3:1.0
    ADVANCED:
    strategy: per process instance
    kie base name:kiebase3
    kie session name: kiesession3
    8) Inside process management -> process definitions I can see both processes – process3 and process4, which suggests that child kiebase was correctly included
    9) I started process3 – the parent

    In the log, I can see:

    10:32:42,594 INFO [stdout] (http-127.0.0.1:8080-4) task3 activated
    10:32:42,629 INFO [stdout] (http-127.0.0.1:8080-4) task4 activated

    10) Under process management->process instances -> completed, I can see two entries – one for parent (process3), second for child (process4).

    Can you please try the above scenario at your end? I’d suggest if you have both these projects (proj3 and proj4) already installed in the maven repo, just try to undeploy everything, and then go straight to step 7).

    I remember that when I was trying this at my end, I hit similar behavior to yours – it was likely caused by the child project being built too soon, automatically, when the child process contained only a start->end node. So when I built my parent, it loaded this version of the child into memory. Later on, when I added a script task to my child and rebuilt it, it didn’t really matter, because the previous version had already been loaded by the parent. So I had to rebuild and redeploy everything. The way I discovered it was to inspect the process model of my completed process instance:
    Process Management -> Process Instances -> Completed -> here I located the child -> and clicked Options -> Process Model. Here you can see what version of the child was actually executed – at my end, I could see the ‘start->end’ process model (which was undesired), even though my child was already modeled as start->script->end.

    My Response

    Hello Anton,

    I was able to get it to work – thanks for that!

    Now I have some questions to understand what is going on and what we did for what purpose.

    Q1) We created 2 types of kiebases and kiesessions. One of them is default and the other is non-default. What was the purpose of this?
    Why can’t we just use default kiebases and kiesessions for both parent and sub-projects? What is the difference?

    Q2) In the procedure that you defined earlier, you did not mention creating a kiesession for the parent project, but I did it anyway. How do we know when a kiesession is not needed? Why did you not mention creating a kiesession for the parent? Any particular reason?

    Q3) What is the purpose of a kiebase and a kiesession? I think a kiebase is a library where kjars of the projects are stored and shared (this is also very much related to understanding “Q1”).
    What about the kiesession? Is it like a session object we have in java? If not, how is that different? Is kiesession used for storing process and task related data inside a session?

    Q4) In the steps you have described previously, you did not mention about using the “*” convention for the kiebases of the “parent” project, but for the sub-project. Is there a particular reason for this?

    Q5) In this current repository configurations that you also have imported, the kiebase for the parent project is not set as default but we have added the kiebase4-nondefault for the child process. If we had set kiebase3 as “default” in the parent project, then when we directly “build&deploy” project3 we would have both process3 and process4 under process management->process definitions, am I correct?

    Red Hat Support:

    Hi,

    this is a tricky topic but I will try to do my best.

    When you click the button build&deploy in the business-central, you probably noticed that you didn’t specify the name of the session anywhere.
    This is possible because if no kiebase/kiesession name is specified, then the default one will be used. If kmodule.xml is empty, then jbpm will use the default one – there are two default sessions, these are implicit, one stateless and one stateful. there is also implicit kiebase – defaultkiebase, which is used, if nothing is defined in kmodule.xml.

    With this in mind, we need to remember that there can be only ONE default kiesession PER PROJECT! (or two, but one stateless and one stateful).
    If there is more than ONE default session, then BPM wouldn’t know which one to use during deployment. This gets even trickier if you have a kmodule.xml which imports another kiebase – the same way we are doing – by using the keyword ‘includes’.

    Do you see where I am heading? If I had one default session in the parent, and one default in the child, then during deployment we would very likely see errors like this:

    log.warn(“Found more than one default KieBase: disabling all. KieBases will be accessible only by name”);
    OR
    log.warn(“Found more than one default KieSession: disabling all. KieSessions will be accessible only by name”);

    That being said – in my previous experience, where I had to fiddle around with kmodule.xml in multi-project setups, I started to define my own custom kiebases and kiesessions to avoid any sort of these issues (‘found more than one default..’).

    This is partially due to habit, partially due to my past experience that whenever I tried to simplify this, I ended up with more errors.

    For example: if you alter kmodule.xml in the CHILD project – i.e. you remove the *default* kiebase and leave just ‘kiebase4-nondefault’ – the build will fail. This is because if kmodule.xml is altered, then jbpm won’t resolve to the ‘defaultkiebase’. You are now in full control, and BPM Suite expects that you will provide the full, correct config inside your kmodule.xml.

     

    I know it sounds tricky, but I think the rule of thumb could be that for single-project setups you will likely be fine with the defaults – i.e. no changes required.
    If you start altering kmodule.xml, then you need to start using named kiebases and named kiesessions, and the rule of one default session per whole project (with all the ‘includes’) has to be followed.

    KieBase is an abstraction for partitioning your knowledge assets – rules, processes, decision tables, etc. It contains no runtime data, just design-time artifacts.
    Often you don’t need to include *all* your assets in one piece – and if this is your use case, then you can start creating multiple KieBases, pointing to multiple packages, which allows you to partition your KJAR. It also allows you to use ‘includes’ – to put together hierarchical kiebases.

    KieSession is an abstraction layer for the actual execution of your rules / processes – it contains runtime data. The reason for the possibility to create more than one is that you can have multiple sessions with multiple configs. This is especially useful in rule executions, as there are many options which can influence rule behavior directly.

    The above is an explanation in ‘layman’s terms’. If it is somewhat hard to understand, or even confusing, then please read the official documentation.
    Especially [1].

    Ad ‘*’ – I am always using ‘*’ for all my kiebases in all my projects. For dev/demo purposes I usually want to include *all* my artifacts, across *all* packages. The actual usage of ‘*’ may differ based on your use case. Not mentioning ‘*’ in both the child and parent project was likely just an oversight at my end (sorry for that!)

    I have also tried to simplify the config, as in your latest question, but that resulted in errors.

     

    1) PARENT kmodule.xml
    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <kbase name="kiebase3" default="true" eventProcessingMode="stream" equalsBehavior="identity" packages="*" includes="kiebase4-nondefault">
    <ksession name="kiesession3" type="stateful" default="true" clockType="realtime"/>
    </kbase>
    </kmodule>

    2) CHILD kmodule.xml
    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <kbase name="kiebase4-nondefault" default="false" eventProcessingMode="stream" equalsBehavior="identity" packages="*">
    <ksession name="kiesession4-nondefault" type="stateful" default="false" clockType="realtime"/>
    </kbase>
    <kbase name="defaultone" default="true" eventProcessingMode="stream" equalsBehavior="identity" packages="*">
    <ksession name="defaults" type="stateful" default="true" clockType="realtime"/>
    </kbase>
    </kmodule>

    this will fail with:

    Caused by: java.lang.IllegalStateException: Cannot find kbase, either it does not exist or there are multiple default kbases in kmodule.xml

    I’d expect that since we are importing nondefault ‘kiebase4-nondefault’, it would work, but the opposite is true. I am checking with engineering why this fails, but I guess just the mere presence of the default kiebase in the dependant project, is sufficient to cause this fail..perhaps we could improve this though.

    What would somehow work is this config:

    PARENT
    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <kbase name="kiebase3" default="true" eventProcessingMode="stream" equalsBehavior="identity" packages="*" includes="kiebase4-nondefault">
    <ksession name="kiesession3" type="stateful" default="true" clockType="realtime"/>
    </kbase>
    </kmodule>

     

    CHILD
    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <kbase name="kiebase4-nondefault" default="false" eventProcessingMode="stream" equalsBehavior="identity" packages="*">
    <ksession name="kiesession4-nondefault" type="stateful" default="false" clockType="realtime"/>
    </kbase>
    </kmodule>

     

    Whenever you build the CHILD with the above kmodule.xml, you will see an error complaining about the missing default kiebase – which is true. This is a deployment error, which does not prevent the installation into the maven repo. So if you then attempted to build & deploy the parent project, it would actually work…

    My response:

    Thanks a lot for the answer.

    Let me revisit my questions in light of the explanations you have provided.

    Q1) We created 2 types of kiebases and kiesessions. One of them is default and the other is non-default. What was the purpose of this?
    Why can’t we just use default kiebases and kiesessions for both parent and sub-projects? What is the difference?

    Answer: Default kiebase and kiesessions are used for default deployments of projects. Other -non-default- kiebases and sessions can be used for deploying different versions of a project that could potentially display different behaviors. Every project needs at least 1 default kiebase and session. If you do not provide one explicitly, then the implicit defaults will be used.
    If a parent project has a default kiebase or session specified in their configuration, then no child projects can have any default kiebases or sessions specified by them explicitly in their configuration. Otherwise BPM deployment engine (I don’t know what it is called) would not know which kiebase or session to use while deploying the parent project.
    Is this understanding correct?

    Q2) In the procedure that you defined earlier, you did not mention creating a kiesession for the parent project, but I did it anyway. How do we know when a kiesession is not needed? Why did you not mention creating a kiesession for the parent? Any particular reason?

    Answer: If you don’t explicitly specify a session, deployment time will use an implicit one automatically.

    Q3) What is the purpose of a kiebase and a kiesession? I think a kiebase is a library where kjars of the projects are stored and shared (this is also very much related to understanding “Q1”).
    What about the kiesession? Is it like a session object we have in java? If not, how is that different? Is kiesession used for storing process and task related data inside a session?

    Answer: Now I understand that a kiebase is an abstraction for partitioning assets like rules, processes etc. and kiesession is where your runtime variables/values are stored.
    You said this: “you can start creating multiple KieBases, pointing to multiple packages, which allows you to partition your KJAR.”
    If you can clarify for me the differences between kiebase, package, and kjar this will be clear for me.

    Q4) In the steps you have described previously, you did not mention about using the “*” convention for the kiebases of the “parent” project, but for the sub-project. Is there a particular reason for this?

    Answer: So, “*” convention is to be used when you are not sure about how to partition your knowledge assets in your project. I think using the “*” convention in general should be ok for most of the cases. Correct?

    Q5) In this current repository configurations that you also have imported, the kiebase for the parent project is not set as default but we have added the kiebase4-nondefault for the child process. If we had set kiebase3 as “default” in the parent project, then when we directly “build&deploy” project3 we would have both process3 and process4 under process management->process definitions, am I correct?

    Answer:
    You said: “I’d expect that since we are importing nondefault ‘kiebase4-nondefault’, it would work, but the opposite is true. I am checking with engineering why this fails, but I guess just the mere presence of the default kiebase in the dependant project, is sufficient to cause this fail..perhaps we could improve this though.”
    **Question: If the mere presence of a default kiebase in a subproject will stop parent projects from deploying, then no sub-projects can ever have a default kiebase and they can never be deployable on their own. Because BPM deployment time won’t add the implicit default kiebase automatically as we can see in the below question. Is that the case?

     

    And you said:
    CHILD
    <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <kbase name="kiebase4-nondefault" default="false" eventProcessingMode="stream" equalsBehavior="identity" packages="*">
    <ksession name="kiesession4-nondefault" type="stateful" default="false" clockType="realtime"/>
    </kbase>
    </kmodule>

     

    Whenever you build the CHILD with the above kmodule.xml, you will see an error complaining about the missing default kiebase – which is true. This is a deployment error, which does not prevent the installation into the maven repo. So if you then attempted to build & deploy the parent project, it would actually work…
    **Question: I thought that even if we don’t specify a default kiebase, deployment time of the BPM would use the implicit default kiebase. Why is that not the case here?

     

    Red Hat Support:

    >> Q1 – If a parent project has a default kiebase or session specified in their configuration, then no child projects
    >> can have any default kiebases or sessions specified by them explicitly in their configuration.
    >> Otherwise BPM deployment engine (I don’t know what it is called) would not know which kiebase or session to
    >> use while deploying the parent project.
    >> Is this understanding correct?

    Yes, this is correct. Basically, you cannot have two default kbases (or ksessions) in the same dependency hierarchy. One important aspect of the parent/child kjar approach is that the configuration of the two kjars are flattened, so this is like having two default kbases in the same kjar which is not allowed.

    jBPM needs to know which kbase to use – as this will drive the extraction from kbase what processes should be available for execution. As mentioned above, kbase hierarchy is actually flattened so if they are multiple default kbases then jBPM cannot simply find one and use it. Thus it enforces user to specify explicitly which kbase (and ksession) to use.

     

    >> Q2 – Answer: If you don’t explicitly specify a session, deployment time will use an implicit one automatically.

    Yes, the default kbase/ksessions are attempted to be used if nothing is specified at deployment time. For your scenario, it is recommended to explicitly define the kbase/ksession during deployment time as Anton has outlined.

     

    >> Q3 – If you can clarify for me the differences between kiebase, package, and kjar this will be clear for me.

    For an explanation of KieBase and KieSession, I’d recommend to take a look at the documentation Anton has already pointed out [1]. For a detailed explanation of a kjar, please refer to this KBase article [2].

     

    >> Q4 – I think using the “*” convention in general should be ok for most of the cases. Correct?

    You can use the ‘packages’ attribute to limit the number of compiled artifacts. Only the packages belonging to the list specified in this attribute are compiled. By default (or if using “*”), all artifacts (such as rules and processes) in the resources directory are included into a knowledge base.

     

    >> Q5 – **Question: If the mere presence of a default kiebase in a subproject will stop parent projects from deploying, then no sub-projects can ever have a default kiebase and they can never be deployable on their own.

    I think this goes back to the answer to question Q1, and there are two key aspects to consider:

    (1) The model of the merged projects is flattened at deployment time => two ‘default’ kbase/ksessions are not allowed

    (2) The ‘build&deploy’ button is a short cut for simplified deployments using the default kbase/ksession. For more sophisticated scenarios where the kbase/ksession name needs to be explicitly defined, deployment should be invoked by either the ‘Deploy->Process Deployments->New Deployment Unit’ GUI, or using REST calls.

     

    >> **Question: I thought that even if we don’t specify a default kiebase, deployment time of the BPM would use the implicit default kiebase. Why is that not the case here?

    jBPM is trying to use the implicit default kbase, this is correct. But there is no such default kbase with the provided configuration. As Anton explained: “If kmodule.xml is empty, then jbpm will use the default one – there are two default sessions, these are implicit, one stateless and one stateful. there is also implicit kiebase – defaultkiebase, which is used, if nothing is defined in kmodule.xml.”

    => note the importance of the ‘if kmodule.xml is empty’ statement – which is not the case with the provided kmodule.xml.

    I hope this helps to clarify. Kindly let us know if you have further questions or remarks.

     

    [1] https://access.redhat.com/documentation/en-us/red_hat_jboss_bpm_suite/6.4/html/development_guide/chap_java_apis#sect_kie_api

    [2] https://access.redhat.com/articles/3355141

    See this video to learn how to use a bpm process as a sub-process:

     
  • Serdar Osman Onur 7:21 am on April 3, 2018 Permalink | Reply

    BPM Sub-processes and Deployment on OpenShift

    Hello,

    These questions are at the intersection of 2 subjects. Red Hat JBoss BPM Suite, and Red Hat OpenShift.

    Question 1: As far as I know, a project is a deployment unit for OpenShift. So, Business Process 1 will be deployed in its own pod, Business Processes 2 & 3 will be deployed together in another single pod. Am I correct? (Please see the attached drawing.)

    Question 2: I want to re-use Business Process 4 (which is in Project3 under TYBS_Repo 2) inside Business Process 1 (which is in Project 1 under TYBS_Repo 1). Is this possible? We tried to re-use a business flow from another flow under a different repository, but we failed. I was hoping that it would be possible; otherwise it would be hard to manage the organization of a complex BPM project. It would imply we would have to keep all the BPM processes inside a single repository and organize the flows in “project” groupings, and this would mean (by Question 1 above) that we would be deploying more than 1 business process inside a single container, which is not what I want to achieve. Can you help me with this? Are we missing something? (Please see the attached drawing.)

    We are about to decide on the repo/project structure of our BPM processes, so a fast response is appreciated.

    Thanks a lot…

     

     

    Red Hat Support:

    Answer1:

    One pod has only one KieContainer. So process 1 from project 1, and processes 2 and 3 from project 2, will be in two different pods. Please revert in case of any query.

    Answer2:

    Check out this post: http://big.info/devops/2018/04/03/using-sub-processes-red-hat-jboss-bpm-suite/

     
  • Serdar Osman Onur 7:20 am on April 3, 2018 Permalink | Reply

    DB Clustering on OpenShift

    Hello there,

    I have a couple of questions about supported DB images, clustering and Data Grid on OpenShift.

    1- DataGrid Clustering:
    I have been considering Data Grid clustering on OpenShift. As far as I can see, Oracle cannot be used as a data source if we use data grid inside OpenShift. This means, If I was to create a data grid cluster inside OpenShift and wanted to use an oracle cluster for the persistent data storage, I would not be able to do that. Am I correct?
    https://access.redhat.com/documentation/en-us/red_hat_jboss_data_grid/7.1/pdf/data_grid_for_openshift/Red_Hat_JBoss_Data_Grid-7.1-Data_Grid_for_OpenShift-en-US.pdf –> see this link. It says:
    The will determine the driver for the datasource. Currently, only postgresql and
    mysql are supported.

    2- Supported DB images:
    I see that PostgreSQL and MySQL have supported images but Oracle does not. Correct? So, If I wanted to run Oracle on OpenShift, that would not be officially supported by Red Hat.

    3- DB Clustering:
    From the studies I have done on Red Hat documentation, I see that DB clustering -in general- is considered as “technology preview” and not recommended or officially supported. Is that correct?
    https://docs.openshift.com/container-platform/3.7/using_images/db_images/ -> see this link.

    Requirements:
    Our project has a requirement that applications should support both PostgreSQL and Oracle. Also, we need to set up a clustered DB structure, along with a clustered data grid in front, to provide high availability and increase throughput in times of request spikes.

    I would appreciate your help. Thanks a lot…

    Red Hat Support:

    Hello,

    Answering your questions:
    >I have been considering Data Grid clustering on OpenShift. As far as I can see, Oracle cannot be used as a data source if we use data grid inside OpenShift. This means, If I was to create a data grid cluster inside OpenShift and wanted to use an oracle cluster for the persistent data storage, I would not be able to do that. Am I correct?
    Yes, you are correct, the Oracle database is not supported as of now; only MySQL and PostgreSQL databases are supported.

    >I see that PostgreSQL and MySQL have supported images but Oracle does not. Correct? So, If I wanted to run Oracle on OpenShift, that would not be officially supported by Red Hat.
    Yes, we are sorry, but as of now we don’t have official Oracle database images; you can use unofficial Oracle images, but that would not be supported by Red Hat. Along with PostgreSQL and MySQL, an official image of MongoDB is also available.

    >From the studies I have done on Red Hat documentation, I see that DB clustering -in general- is considered as “technology preview” and not recommended or officially supported. Is that correct?
    https://docs.openshift.com/…/3.7/using_images/db_images/ -> see this link.

    Yes, clustering is in technology preview as of now and is not supported for production use, but you can use it on your test cluster. Please feel free to use it in your test environment and let us know if you face any bugs.

    Please feel free to get back to us if you have any more questions.

     
  • Serdar Osman Onur 10:48 am on April 2, 2018 Permalink | Reply

    Systemctl for Service Management

    Get list of enabled services
    systemctl list-unit-files --state=enabled
    systemctl list-unit-files | grep enabled

    Get list of running services
    systemctl | grep running

    Stop a service
    systemctl stop service_name

    Start a service
    systemctl start service_name

    Enable a service
    systemctl enable service_name
    Note: If you enable a service, it will be started during system boot.
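
    A few more everyday variants (service_name is a placeholder):

    Check the status of a service
    systemctl status service_name

    Restart a service
    systemctl restart service_name

    Disable a service
    systemctl disable service_name
    Note: If you disable a service, it will no longer be started during system boot.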

     