Fusion Middleware

Deploying an Application to Pivotal Cloud Foundry through Spinnaker and then invoking a resize operation

Pas Apicella - Thu, 2019-03-28 22:36
In this post we show a basic application deployment to Cloud Foundry (in fact Pivotal Cloud Foundry 2.4) using Spinnaker 1.13.0.


1. Configure a Cloud Foundry provider as shown below

spinnaker@myspinnaker-spinnaker-halyard-0:/workdir$ hal config provider cloudfoundry account add pez208 --user admin --password mypassword --api api.system.run.myenv.io --environment dev --appsManagerURI https://apps.system.run.myenv.io
+ Get current deployment
+ Add the pez208 account
Problems in default.provider.cloudfoundry:
- WARNING To be able to link server groups to CF Metrics a URI is
  required: pez208

+ Successfully added account pez208 for provider cloudfoundry.

2. Enable the Cloud Foundry provider

spinnaker@myspinnaker-spinnaker-halyard-0:/workdir$ hal config provider cloudfoundry enable
+ Get current deployment
+ Edit the cloudfoundry provider

+ Successfully enabled cloudfoundry

3. Run "hal deploy apply"

spinnaker@myspinnaker-spinnaker-halyard-0:/workdir$ hal deploy apply
+ Get current deployment
+ Prep deployment
+ Preparation complete... deploying Spinnaker
+ Get current deployment
+ Apply deployment
+ Deploy spin-clouddriver
+ Deploy spin-front50
+ Deploy spin-orca
+ Deploy spin-deck
+ Deploy spin-echo
+ Deploy spin-gate
+ Deploy spin-igor
+ Deploy spin-rosco
+ Run `hal deploy connect` to connect to Spinnaker.

In this demo I am simply going to deploy my artifact, sitting within my GitHub repo, using an HTTP endpoint, so we will need to enable HTTP artifact support in Spinnaker as shown below

$ hal config features edit --artifacts true
$ hal config artifact http enable
$ hal config artifact http account add apples-http
$ hal deploy apply


1. Let's create a new application called "pastest" as shown below. Be sure to select the "CloudFoundry" provider.

2. Click "Create"

3. Click on "Create Server group"

4. Fill in the fields as shown below. In this example I am using the following

  • Account "pez208", which is the Cloud Foundry provider account we created above
  • Region, which is essentially the CF space we will deploy into
  • The HTTP artifact account I enabled, called "apples-http"
  • The fully qualified path to the JAR file I wish to deploy
  • Form-based manifest settings to define my application deployment settings
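The form-based manifest settings above correspond to a standard Cloud Foundry application manifest. As a rough sketch only (the memory, instance count, and buildpack here are illustrative, not taken from the original screenshots):

```yaml
applications:
  - name: pastest
    memory: 1G
    instances: 1
    buildpacks:
      - java_buildpack
```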

5. Click "Create"

6. Verify your application is going through the deploy phase as shown in the dialog

7. Once complete we can see our deployed application in Pivotal Cloud Foundry Applications Manager as shown below.

8. Now if we return to the Spinnaker UI we will see various views of what we just deployed as follows

Server Group Main Page

Load Balancer Page

Instance Page

9. Now let's actually scale our application to 2 instances rather than just a single instance. To do that, let's click the "Resize" option on the "Server Group" page as shown below

10. In the dialog which appears set "Resize to" to "2"

11. Click "Submit"

12. Return to Pivotal Cloud Foundry Applications Manager and verify we now have 2 instances of our application as shown below

13. This will also be reflected in the Spinnaker UI

More Information

Cloud Foundry - Cloud Provider
Categories: Fusion Middleware

Two nice Pivotal Container Service (PKS) CLI commands I use very often

Pas Apicella - Wed, 2019-03-27 23:07
Having created multiple PKS clusters, at times I forget the configuration of my K8s clusters, and this command comes in very handy

First, let's list the clusters we have created with PKS

papicella@papicella:~$ pks clusters

Name    Plan Name  UUID                                  Status     Action
lemons  small      5c19c39e-88ae-4e06-a1cf-050b517f1b9c  succeeded  CREATE
banana  small      7c3ab1b3-a25c-498e-8179-9a14336004ff  succeeded  CREATE

Now let's see how many master nodes and worker nodes actually exist in my cluster using "pks cluster {name} --json"

papicella@papicella:~$ pks cluster banana --json

   "name": "banana",
   "plan_name": "small",
   "last_action": "CREATE",
   "last_action_state": "succeeded",
   "last_action_description": "Instance provisioning completed",
   "uuid": "7c3ab1b3-a25c-498e-8179-9a14336004ff",
   "kubernetes_master_ips": [
   "parameters": {
      "kubernetes_master_host": "banana.yyyy.hhh.pivotal.io",
      "kubernetes_master_port": 8443,
      "kubernetes_worker_instances": 3

One final PKS CLI option I use often when creating my clusters is --wait, so I know when it's done creating the cluster rather than continually checking using "pks cluster {name}"

papicella@papicella:~$ pks create-cluster cluster1 -e cluster1.run.yyyy.hhh.pivotal.io -p small -n 4 --wait

More Information


Categories: Fusion Middleware

Spring Initializr new look and feel

Pas Apicella - Tue, 2019-03-05 20:38
Head to http://start.spring.io and the new look and feel UI is now available

Categories: Fusion Middleware

Integrating Cloud Foundry with Spinnaker

Pas Apicella - Wed, 2019-02-13 19:36
I previously blogged about "Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere" and then how to invoke UI using a "kubectl port-forward".



1. Exec into hal pod using a command as follows:

$ kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

Note: You can get the POD name as follows

papicella@papicella:~$ kubectl get pods | grep halyard
myspinnaker-spinnaker-halyard-0       1/1       Running     0          6d

2. Create a file settings-local.js in the directory ~/.hal/default/profiles/

window.spinnakerSettings.providers.cloudfoundry = {
  defaults: {account: 'my-cloudfoundry-account'}
};

3. Create a file clouddriver-local.yml with contents as follows. You can add multiple accounts, but in this example I am just adding one

cloudfoundry:
  enabled: true
  accounts:
    - name: PWS
      user: papicella-pas@pivotal.io
      password: yyyyyyy
      api: api.run.pivotal.io

4. If you are working with an existing installation of Spinnaker, apply your changes:

spinnaker@myspinnaker-spinnaker-halyard-0:~/.hal/default/profiles$ hal deploy apply
+ Get current deployment
+ Prep deployment
Problems in halconfig:
- WARNING There is a newer version of Halyard available (1.15.0),
  please update when possible
? Run 'sudo apt-get update && sudo apt-get install
  spinnaker-halyard -y' to upgrade

+ Preparation complete... deploying Spinnaker
+ Get current deployment
+ Apply deployment
+ Run `hal deploy connect` to connect to Spinnaker.

5. Once this is done, in the UI you will see any applications in your organisations appear. In this example it's a single application called "Spring" as shown below

6. In the example below, when "Creating an Application" we can select the orgs/spaces we wish to use as shown

More Information

Cloud Foundry Integration

Cloud Foundry Resource Mapping

Categories: Fusion Middleware

Exposing Spinnaker UI endpoint from a helm based spinnaker install on PKS with NSX-T

Pas Apicella - Thu, 2019-02-07 22:50
I previously blogged about "Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere" and then quickly invoking the UI using a "kubectl port-forward" as per this post.


That will work, BUT it won't get you too far, so here is what you need to do so the UI works completely using the spin-gate API endpoint.

Steps (Once Spinnaker is Running)

1. Expose spin-deck and spin-gate to create external LB IPs. This is where NSX-T with PKS on-prem is extremely useful, as NSX-T provides LB capability for the K8s cluster services you create, making it easier than using a public cloud LB with Kubernetes.

$ kubectl expose service -n default spin-deck --type LoadBalancer --port 9000 --target-port 9000 --name spin-deck-public
service/spin-deck-public exposed

$ kubectl expose service -n default spin-gate --type LoadBalancer --port 8084 --target-port 8084 --name spin-gate-public
service/spin-gate-public exposed

2. That will create two external IPs for us as shown below

$ kubectl get svc


NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
spin-deck-public   LoadBalancer   ...          ...           9000:30131/TCP   ...
spin-gate-public   LoadBalancer   ...          ...           8084:30312/TCP   ...


3. Exec into hal pod using a command as follows

$ kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

4. Run these commands in order on the hal pod. Make sure you use the right IP addresses as per the output in step 2 above: the UI is spin-deck-public and the API is spin-gate-public

$ hal config security ui edit --override-base-url
$ hal config security api edit --override-base-url
$ hal deploy apply
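For illustration only, assuming step 2 had returned external IPs 10.195.1.1 for spin-deck-public and 10.195.1.2 for spin-gate-public (these addresses are hypothetical), the fully-formed commands would look something like:

```shell
hal config security ui edit --override-base-url http://10.195.1.1:9000
hal config security api edit --override-base-url http://10.195.1.2:8084
hal deploy apply
```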

5. Port-forward spin-gate on your localhost. You shouldn't really need to do this, but for some reason it was required; I suspect at some point it won't be.

$ export GATE_POD=$(kubectl get pods --namespace default -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
$ echo $GATE_POD
$ kubectl port-forward --namespace default $GATE_POD 8084
Forwarding from -> 8084
Forwarding from [::1]:8084 -> 8084

6. Access UI using IP of spin-deck-public

If it worked, you should see screenshots as follows, showing that we can access the tabs and "Create Application" without errors from the gate API endpoint

Categories: Fusion Middleware

Spring Cloud GCP and authentication from your Spring Boot Application

Pas Apicella - Wed, 2019-02-06 17:18
When using Spring Cloud GCP you will need to authenticate at some point in order to use GCP services. In the example below, using a GCP Cloud SQL instance, you really only need to do three things to access it externally from your Spring Boot application, as follows.

1. Enable the Google Cloud SQL API which is detailed here


2. Ensure that your GCP SDK can log in to your Google Cloud SQL. This command will take you to a web page asking which Google account you want to use

  $ gcloud auth application-default login

3. Finally some application properties in your Spring Boot application detailing the Google Cloud SQL instance name and database name as shown below.
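The properties screenshot hasn't survived here, but a minimal sketch using the Spring Cloud GCP starter's Cloud SQL properties (project, region, instance, and database names below are placeholders, not the originals) would be:

```properties
spring.cloud.gcp.sql.instance-connection-name=my-project:us-central1:my-instance
spring.cloud.gcp.sql.database-name=employees
```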


Now when you do that and your application starts up, you will see a log message as follows, clearly warning you that this method of authentication can have implications at some point.

2019-02-07 09:10:26.700  WARN 2477 --- [           main] c.g.a.oauth2.DefaultCredentialsProvider  : Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/.

Clearly that's something we have to resolve. To do that we can simply add another Spring Boot application property pointing to a service account JSON file to authenticate with, which removes the warning.
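Again the screenshot is missing; a sketch of that property, assuming a key file at a hypothetical local path, would be:

```properties
spring.cloud.gcp.credentials.location=file:/path/to/service-account-key.json
```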


Note: You can also use an ENV variable as follows


You can get a JSON key generated from the GCP console "IAM and Admin -> Service Accounts" page

For more information on authentication visit this link https://cloud.google.com/docs/authentication/getting-started

Categories: Fusion Middleware

Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere

Pas Apicella - Thu, 2019-01-31 19:47
I decided to install Spinnaker into one of the clusters on my vSphere PKS installation. Here is how I did it, step by step

1. You will need PKS installed, which I have on vSphere with PKS 1.2 using NSX-T. Here is a screenshot of that showing the Ops Manager UI

Make sure your PKS plans have these checkboxes enabled; without them checked, Spinnaker will not install using the Helm chart we will be using below

2. In my setup I created a datastore which will be used by my K8s cluster. This is optional; you can set up your PVCs however you see fit.

3. Now it's assumed you have a K8s cluster, which I have as shown below. I used the PKS CLI to create a small cluster with 1 master node and 3 worker nodes

$ pks cluster lemons

Name:                     lemons
Plan Name:                small
UUID:                     19318553-472d-4bb5-9783-425ce5626149
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   lemons.haas-65.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.y.y.y
Network Profile Name:

4. Create a StorageClass as follows; notice how we reference our vSphere datastore named "k8s" as per step 2

$ kubectl create -f storage-class-vsphere.yaml

Note: storage-class-vsphere.yaml defined as follows

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: k8s
  diskformat: thin
  fstype: ext3

5. Set this Storage Class as the default

$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'


papicella@papicella:~$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   14h

6. Install helm as shown below

$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller
$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ sleep 10
$ helm ls

Note: rbac-config.yaml defined as follows

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

7. Install Spinnaker into your K8s cluster as follows

$ helm install --name myspinnaker stable/spinnaker --timeout 6000 --debug

If everything worked, the pods will appear as follows

papicella@papicella:~$ kubectl get pods
NAME                                  READY     STATUS      RESTARTS   AGE
myspinnaker-install-using-hal-gbd96   0/1       Completed   0          14m
myspinnaker-minio-5d4c999f8b-ttm7f    1/1       Running     0          14m
myspinnaker-redis-master-0            1/1       Running     0          14m
myspinnaker-spinnaker-halyard-0       1/1       Running     0          14m
spin-clouddriver-7b8cd6f964-ksksl     1/1       Running     0          12m
spin-deck-749c84fd77-j2t4h            1/1       Running     0          12m
spin-echo-5b9fd6f9fd-k62kd            1/1       Running     0          12m
spin-front50-6bfffdbbf8-v4cr4         1/1       Running     1          12m
spin-gate-6c4959fc85-lj52h            1/1       Running     0          12m
spin-igor-5f6756d8d7-zrbkw            1/1       Running     0          12m
spin-orca-5dcb7d79f7-v7cds            1/1       Running     0          12m
spin-rosco-7cb8bd4849-c44wg           1/1       Running     0          12m

8. Once the Helm command completes you will see output as follows

1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace default $DECK_POD 9000

2. Visit the Spinnaker UI by opening your browser to:

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:

For more info on the Kubernetes integration for Spinnaker, visit:

9. Go ahead and run these commands to connect to the Spinnaker UI from your localhost

$ export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace default $DECK_POD 9000
Forwarding from -> 9000
Forwarding from [::1]:9000 -> 9000

10. Browse to

More Information


Pivotal Container Service

Categories: Fusion Middleware

Introducing Fishbowl’s XML Feed Connector for Google Cloud Search

Last November, Google released Cloud Search with third-party connectivity. While not a direct replacement for the Google Search Appliance (GSA), Google Cloud Search is Google’s next-generation search platform and is an excellent option for many existing GSA customers whose appliances are nearing (or past) their expiration. Fishbowl is aiming to make the transition for GSA customers even easier with our new XML Feed Connector for Google Cloud Search.

One of the GSA’s features was the ability to index content via custom feeds using the GSA Feeds Protocol. Custom XML feed files containing content, URLs, and metadata could be pushed directly to the GSA through a feed client, and the GSA would parse and index the content in those files. The XML Feed Connector for Google Cloud Search brings this same functionality to Google’s next-generation search platform, allowing GSA customers to continue to use their existing XML feed files with Cloud Search.

Our number one priority with the XML Feed Connector was to ensure that users would be able to use the exact same XML feed files they used with the GSA, with no modifications to the files required. These XML feed files can provide content either by pointing to a URL to be crawled, or by directly providing the text, HTML, or compressed content in the XML itself. For URLs, the GSA’s built-in web crawler would retrieve the content; however, Google Cloud Search has no built-in crawling capabilities. But fear not, as our XML Feed Connector will handle URL content retrieval before sending the content to Cloud Search for indexing. It will also extract the title and metadata from any HTML page or PDF document retrieved via the provided URL, allowing the metadata to be used for relevancy, display, and filtering purposes. For content feeds using base-64 compressed content, the connector will also handle decompression and extraction of content for indexing.

In order to queue feeds for indexing, we’ve implemented the GSA’s feed client functionality, allowing feed files to be pushed to the Connector through a web port. The same scripts and web forms you used with the GSA will work here. You can configure the HTTP listener port and restrict the Connector to only accept files from certain IP addresses.

Another difference between the GSA and Google Cloud Search is how they handle metadata. The GSA would accept and index any metadata provided for an item, but Cloud Search requires you to specify and register a structured data schema that defines the metadata fields that will be accepted. There are tighter restrictions on names of metadata fields in Cloud Search, so we implemented the ability to map metadata names between those in your feed files and those uploaded to Cloud Search. For example, let’s say your XML feed file has a metadata field titled “document_title”. Cloud Search does not allow for underscores in metadata definitions, so you could register your schema with the metadata field “documenttitle”, then using the XML Feed Connector, map the XML field “document_title” to the Cloud Search field “documenttitle”.
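As a sketch, a feed record carrying that metadata field might look like the following (the URL and values are hypothetical, following the general shape of the GSA feeds DTD):

```xml
<record url="http://example.com/docs/report.html" mimetype="text/html" action="add">
  <metadata>
    <meta name="document_title" content="Quarterly Report"/>
  </metadata>
</record>
```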

Here is a full rundown of the supported features in the XML Feed Connector for Google Cloud Search:

  • Full, incremental, and metadata-and-url feed types
  • Grouping of records
  • Add and delete actions
  • Mapping of metadata
  • Feed content:
    • Text content
    • HTML content
    • Base 64 binary content
    • Base 64 compressed content
    • Retrieve content via URL
    • Extract HTML title and meta tags
    • Extract PDF title and metadata
  • Basic authentication to retrieve content from URLs
  • Configurable HTTP feed port
  • Configurable feed source IP restrictions

Of course, you don’t have to have used the GSA to benefit from the XML Feed Connector. As previously mentioned, Google Cloud Search does not have a built-in web crawler, and the XML Feed Connector can be given a feed file with URLs to retrieve content from and index. Feeds are especially helpful for indexing html content that cannot be traversed using a traditional web/spidering approach such as web applications, web-based content libraries, or single-page applications. If you’d like to learn more about Google Cloud Search or the XML Feed Connector, please contact us.

Fishbowl Solutions is a Google Cloud Partner and authorized Cloud Search reseller.

The post Introducing Fishbowl’s XML Feed Connector for Google Cloud Search appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Automate your PDMWorks, EPDM, and File System to PTC Windchill Data Migrations Using Fishbowl’s LinkLoader

Companies don’t invest in Windchill so they can execute lengthy, complicated, expensive data migration projects. They invest in Windchill so they can realize the benefits of a well-executed PLM strategy. However, the lynchpin of a good Windchill deployment is getting accurate data into Windchill…and the faster it happens, the faster a company can move on their PLM strategy.

Fishbowl Solutions has worked with numerous customers on enterprise data migration projects over the years. When many companies first deploy Windchill, their migration projects deal with CAD data and documents. Manually loading CAD data is far too time consuming. As mentioned, these activities are vital to success, but why waste valuable engineering resources that could be better utilized on engineering work?

Fishbowl Solutions has the LinkLoader family of apps that automate the loading of files/data into Windchill PDMLink. When it comes to CAD data and documents, the files might be on the network file system (NFS) or in a 3rd-party PDM system such as EPDM or PDMWorks. In fact, migrating Solidworks data into Windchill PDMLink is the busiest I have ever seen this type of project.

Please note that Fishbowl can do a lot more than just migrate Solidworks from the file system, but hopefully this gives a little detail to some types of projects.

To read the rest of this blog post, please visit the PTC LiveWorx 2019 blog.

The post Automate your PDMWorks, EPDM, and File System to PTC Windchill Data Migrations Using Fishbowl’s LinkLoader appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Testing out the new PFS (Pivotal Function Service) alpha release on minikube

Pas Apicella - Mon, 2019-01-21 19:25
I quickly installed PFS on minikube as per the instructions below so I could write my own function service. Below shows that function service and how I invoked it using the PFS CLI and Postman

1. Install PFS using this url for minikube. Refer to these instructions to install PFS on minikube


2. Once installed, verify PFS is running using some commands as follows

$ watch -n 1 kubectl get pod --all-namespaces


Various namespaces are created as shown below:

$ kubectl get namespaces
NAME               STATUS    AGE
default            Active    19h
istio-system       Active    18h
knative-build      Active    18h
knative-eventing   Active    18h
knative-serving    Active    18h
kube-public        Active    19h
kube-system        Active    19h

Ensure PFS is installed as shown below:

$ pfs version
  pfs cli: 0.1.0 (e5de84d12d10a060aeb595310decbe7409467c99)

3. Now we are going to deploy this employee function which exists on GitHub as follows


The Function code is as follows:

package com.example.empfunctionservice;

import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import java.util.function.Function;

@Slf4j
@SpringBootApplication
public class EmpFunctionServiceApplication {

    private static EmployeeService employeeService;

    public EmpFunctionServiceApplication(EmployeeService employeeService) {
        this.employeeService = employeeService;
    }

    // Expose the function as a bean so the riff Java invoker can locate it
    @Bean
    public Function<String, String> findEmployee() {
        return id -> {
            String response = employeeService.getEmployee(id);
            return response;
        };
    }

    public static void main(String[] args) {
        SpringApplication.run(EmpFunctionServiceApplication.class, args);
    }
}


4. We are going to deploy a Spring Boot Function as per the REPO above. More information on Java Functions for PFS can be found here


5. Let's create a function called "emp-function" as shown below

$ pfs function create emp-function --git-repo https://github.com/papicella/emp-function-service --image $REGISTRY/$REGISTRY_USER/emp-function -w -v

Output: (Just showing the last few lines here)

papicella@papicella:~/pivotal/software/minikube$ pfs function create emp-function --git-repo https://github.com/papicella/emp-function-service --image $REGISTRY/$REGISTRY_USER/emp-function -w -v
Waiting for LatestCreatedRevisionName
Waiting on function creation: checkService failed to obtain service status for observedGeneration 1
LatestCreatedRevisionName available: emp-function-00001


default/emp-function-00001-gpn7p[build-step-build]: [INFO] BUILD SUCCESS
default/emp-function-00001-gpn7p[build-step-build]: [INFO] ------------------------------------------------------------------------
default/emp-function-00001-gpn7p[build-step-build]: [INFO] Total time: 12.407 s
default/emp-function-00001-gpn7p[build-step-build]: [INFO] Finished at: 2019-01-22T00:12:39Z
default/emp-function-00001-gpn7p[build-step-build]: [INFO] ------------------------------------------------------------------------
default/emp-function-00001-gpn7p[build-step-build]:        Removing source code
default/emp-function-00001-gpn7p[build-step-build]: -----> riff Buildpack 0.1.0
default/emp-function-00001-gpn7p[build-step-build]: -----> riff Java Invoker 0.1.3: Contributing to launch
default/emp-function-00001-gpn7p[build-step-build]:        Reusing cached download from buildpack
default/emp-function-00001-gpn7p[build-step-build]:        Copying to /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar
default/emp-function-00001-gpn7p[build-step-build]: -----> Process types:
default/emp-function-00001-gpn7p[build-step-build]:        web:      java -jar /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar $JAVA_OPTS --function.uri='file:///workspace/app'
default/emp-function-00001-gpn7p[build-step-build]:        function: java -jar /workspace/io.projectriff.riff/riff-invoker-java/java-function-invoker-0.1.3-exec.jar $JAVA_OPTS --function.uri='file:///workspace/app'


default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.617  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=1, name=pas)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.623  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=2, name=lucia)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.628  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=3, name=lucas)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: Hibernate: insert into employee (id, name) values (null, ?)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.632  INFO 1 --- [       Thread-4] c.e.empfunctionservice.LoadDatabase      : Preloading Employee(id=4, name=siena)
default/emp-function-00001-deployment-66fbd6bf4-bbqpq[user-container]: 2019-01-22 00:13:53.704  INFO 1 --- [       Thread-2] o.s.c.f.d.FunctionCreatorConfiguration   : Located bean: findEmployee of type class com.example.empfunctionservice.EmpFunctionServiceApplication$$Lambda$791/373359604

pfs function create completed successfully

6. Let's invoke our function as shown below, returning each employee record using its ID.

$ pfs service invoke emp-function --text -- -w '\n' -d '1'
curl -H 'Host: emp-function.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d 1
Employee(id=1, name=pas)

$ pfs service invoke emp-function --text -- -w '\n' -d '2'
curl -H 'Host: emp-function.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d 2
Employee(id=2, name=lucia)

The "pfs service invoke" will show you what an external command will look like to invoke the function service. The IP address here is just the same IP address returned by "minikube ip" as shown below.

$ minikube ip

7. Let's view our services using "pfs" CLI

$ pfs service list
NAME           STATUS
emp-function   Running
hello          Running

pfs service list completed successfully

8. Invoke from Postman, ensuring we issue a POST request and pass the correct headers as shown below

More Information


Categories: Fusion Middleware

Mind Over Matter

Greg Pavlik - Fri, 2019-01-18 18:13
"Plotinus gave exquisitely refined expression to the ancient intuition that the material order is not the basis of the mental, but rather the reverse. This is not only an eminently rational intuition; it is perhaps the only truly rational picture of reality as a whole. Mind does not emerge from mindless matter, as modern philosophical fashion would have it. The suggestion that is does is both a logical impossibility and a phenomenological absurdity. Plotinus and his contemporaries understood that all the things that most essentially characterize the act of rational consciousness—its irreducible unity of apprehension, its teleological structure, the logical syntax of reasoning, and on and on—are intrinsically incompatible with, and could not logically emerge from, a material reality devoid of mind. At the same time, they could not fail to notice that there is a constant correlation between that act of rational consciousness and the intelligibility of being, a correlation that is all but unimaginable if the structure and ground of all reality were not already rational. Happily, in Plotinus’s time no one had yet ventured the essentially magical theory of perception as representation. Plotinus was absolutely correct, therefore, to attempt to understand the structure of the whole of reality by looking inward to the structure of the mind; and he was just as correct to suppose that the reciprocity between the mind and objective reality must indicate a reality simpler and more capacious than either: a primordial intelligence, Nous, and an original unity, the One, generating, sustaining, and encompassing all things. And no thinker of late antiquity pursued these matters with greater persistence, rigor, and originality than he did."

DB Hart commenting on the new translation of Plotinus's Enneads.

Four Types of Mindbreeze Relevancy Boostings and How to Use Them

Strong relevancy is critical to a well-adopted search solution. Mindbreeze provides several ways to fine tune relevancy which they refer to as boosting. This post will explore how boosting works and four ways to apply boosting rules within Mindbreeze.

About Boosting & Rank Scores

Before we start altering relevancy, it’s important to examine how boosting works within Mindbreeze. Mindbreeze provides a baseline algorithm with factors such as term frequency, term proximity, freshness, etc. (while there are ways to alter these core signals, we’ll save that topic for another time). Boostings address many common relevancy-adjustment use cases and are the easiest way to alter rankings. Boostings are applied to the baseline rankings by a configured amount. Although the term “boost” generally implies an increase, boosting can be used to increase or decrease rank scores relative to the baseline.

Mindbreeze boostings are factor-based (i.e. multiplicative). For example, a boost factor of 2.0 would make something twice as relevant as the baseline, while a boost factor of 0.5 would make it half as relevant. For this reason, it’s helpful to monitor rankings (called rank scores) before and after boosting, in order to determine an appropriate boosting factor. Mindbreeze provides two options for viewing rank scores as described below.
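Because boosts are simple factors, the arithmetic is easy to sanity-check. A minimal sketch (the values are illustrative, not real Mindbreeze rank scores):

```javascript
// Multiplicative boosting: the boosted score is baseline * factor.
function applyBoost(baselineRank, factor) {
  return baselineRank * factor;
}

console.log(applyBoost(0.4, 2.0)); // 0.8 -- twice as relevant as baseline
console.log(applyBoost(0.4, 0.5)); // 0.2 -- half as relevant as baseline
```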

Viewing Rank using the Export

The easiest way to see the rank of multiple search results is to use the export feature. From the default Mindbreeze search client, perform your search and select Export. In the Export results window, select the plus sign (+) and add “mes:hitinfo:rank” to the visible columns. You can simply start typing “rank” in the input box and this column name will appear.

Viewing Rank within JSON Results

If using the Export feature is not available, or you need to test rankings that are specific to a custom search application, you can also view the rank within the Mindbreeze JSON response for a search request. Follow these instructions to do so:

  1. Open the developer-tools dock in your browser (F12).
  2. Navigate to the Network tab.
  3. Perform your search.
  4. Expand the search request.
  5. View the response body and drill down into the search request response data to find the desired rank. For example, to see the rank of the first result, you would select result set > results > 0 > rank_score.

For more information on the data contained in the search response, see the Mindbreeze documentation on api.v2.search.
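If you want to script this check rather than drill down by hand, the same path can be followed in code against the JSON body. The property names below (resultset, results, rank_score) mirror the drill-down in step 5, but treat them as an assumption to verify against your appliance's actual response:

```javascript
// Sample (abridged) search response shaped like the drill-down path:
// resultset > results > 0 > rank_score
var response = {
  resultset: {
    results: [
      { title: "First hit",  rank_score: 12.7 },
      { title: "Second hit", rank_score: 9.3 }
    ]
  }
};

// Rank of the first result
var topRank = response.resultset.results[0].rank_score;
console.log(topRank); // 12.7
```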

Term2Document Boosting

Term2DocumentBoost is a Mindbreeze query transformation plugin which allows you to apply relevance tuning to specific search queries (or all queries) based on defined rules. It is the primary way to adjust relevancy within Mindbreeze. The plugin is configured either for a specific index or globally. If you configure an index-specific boosting file, those rules will be applied instead of (not in addition to) the global rules. If you’d like to apply both sets of rules, the global rules should be copied into the index-specific boosting file, as each index can only reference one file at a time.

Term2Document boosting rules are always applied to searches against the indices for which they are configured, so they’re best suited to static boosting, as opposed to the more dynamic boosting options described in later sections. Term2Document boosting can be used to increase the relevance of certain documents for specific search queries. For example, a search for “news” can be tuned so that documents with the “post-type” metadata value “newsarticle” have higher relevance. Term2Document boosting can also be used to generally increase the relevance of certain documents based on any combination of metadata-value pairs: for example, all documents from a “Featured Results” data source, or all documents from the “products” section of a website. Rules can use regular expressions to accommodate more complex patterns, such as boosting pages one level off the root of a website.

The Term2Document Boost file uses five columns to apply boosting rules.

  • Term: The search term you want to trigger the rule. Leave this blank to trigger the rule for all searches.
  • Key: The name of the metadata field used to identify the content to be boosted. To boost documents matching a pattern in the full text of the document contents, set this column to the word “content”.
  • Pattern: A term or pattern matched against the metadata value to identify the content to be boosted. This column supports regular expressions. Please note, any fields you wish to boost in this way should be added to the Aggregated Metadata Keys configuration for the respective indices in order to enable regex matching.
  • Boost: The boost factor. Values less than one (e.g. 0.5) should be preceded by a zero (i.e. 0.5, not .5).
  • Query: Optional column for advanced configuration. Instead of specifying a Term, Key, and Pattern, you can use this column to create more flexible boosting rules via the Mindbreeze query language. This is helpful when you want the boosting rule to adapt to each user’s query: for example, if someone searches for a person (e.g. “John Doe”), documents with that person as the Author (i.e. stored in the Author metadata) can be boosted.
Term2Document Boosting Examples
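As an illustration of how these columns combine, the sketch below applies a Term2Document-style rule to a baseline rank score. This is an illustration of the rule semantics only, not Mindbreeze's implementation; the rule object mirrors the Term, Key, Pattern, and Boost columns, and the boost factor is a placeholder:

```javascript
// Illustrative only. A rule fires when its Term matches the query
// (or is blank, meaning "all searches") and its Pattern matches the
// document's metadata value for its Key; each firing rule multiplies
// its Boost onto the running rank score.
function boostedRank(baselineRank, query, metadata, rules) {
  var rank = baselineRank;
  rules.forEach(function (rule) {
    var termMatches = !rule.term || rule.term === query;
    var value = metadata[rule.key] || "";
    if (termMatches && new RegExp(rule.pattern).test(value)) {
      rank *= rule.boost;
    }
  });
  return rank;
}

// The "news" example from above: boost newsarticle documents.
var rules = [
  { term: "news", key: "post-type", pattern: "^newsarticle$", boost: 2.0 }
];

console.log(boostedRank(10, "news", { "post-type": "newsarticle" }, rules)); // 20
console.log(boostedRank(10, "memo", { "post-type": "newsarticle" }, rules)); // 10
```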
Additional information can be found in the Mindbreeze documentation on the Term2DocumentBoost Transformer Plugin.

Category Descriptor Boosting

Mindbreeze uses an XML file called the CategoryDescriptor to control various aspects of the search experience for each data source category (e.g. Web, DataIntegration, Microsoft File, etc.). Each category plugin includes a default CategoryDescriptor which can be extended or modified to meet your needs.

You may modify the CategoryDescriptor if you wish to add localized display labels for metadata field names or alter the default metadata visible from the search results page. In this case, we’re focused on how you can use it to boost the overall impact of a metadata field on relevancy. This is common if you wish to change the impact of a term’s presence in certain fields over others. Common candidates for up-boosting include title, keywords, or summary. Candidates for down-boosting may include ID numbers, GUIDs, or other values which could yield confusing or irrelevant results in certain cases.

The default CategoryDescriptor is located in the respective plugin’s zip file. You can extract this file and modify it as needed.

Category Descriptor Example

The example below shows the modification of two metadatum entries. The first, for Keywords, boosts the importance of this field by a factor of 5.0. The second, for Topic, adds a boost factor of 2.0 and localized display labels for four languages.

<?xml version="1.0" encoding="UTF-8"?>
<category id="MyExample" supportsPublic="true">

  <name>My Example Category</name>

  <metadatum aggregatable="true" boost="5.0" id="Keywords" selectable="true" visible="true"/>

  <metadatum aggregatable="true" boost="2.0" id="Topic" selectable="true" visible="true">
    <name xml:lang="de">Thema</name>
    <name xml:lang="en">Topic</name>
    <name xml:lang="fr">Sujet</name>
    <name xml:lang="es">Tema</name>
  </metadatum>

  <!-- ... additional metadatum entries omitted for brevity ... -->

</category>



Applying a Custom Category Descriptor

The easiest way to apply a custom category descriptor is to download a copy of the respective plugin from the Mindbreeze updates page. For example, if you wanted to change relevancy for crawled content, you would download Mindbreeze Web Connector.zip. Unzip the file and look for the categoryDescriptor.xml file which is located in Mindbreeze Web Connector\Web Connector\Plugin\WebConnector- (version number may vary).

Please note, if you update a plugin on the Mindbreeze appliance, your custom CategoryDescriptor will be overwritten. Keep a copy saved in case you need to reapply it after updating. Additional information can be found in the Mindbreeze documentation on Customizing the Category Descriptor.

Inline Boosting

Boosting can be set at query time using the Mindbreeze query language. This can be either done directly in the query box or as part of the search application’s design by applying it to the constraint property. This functionality can be leveraged to create contextual search by dynamically boosting results based on any number of factors and custom business logic.

For example:

  • ALL (company:fishbowl)^3.5 OR NOT (company:fishbowl)

Returns all results and ranks items with fishbowl in the company metadata field 3.5 times higher than other results.

  • (InSpire^2.0 OR "InSite") AND "efficient"

Results must contain InSpire or InSite, and must contain efficient. Occurrences of InSpire are twice as relevant as other terms.

  • holiday AND ((DocType:Policy)^5 OR (DocType:Memo))

Returns results which contain holiday and a DocType value of Policy or Memo. Ranks items with Policy as their DocType 5 times higher than those with the DocType of Memo.
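When a search application assembles these expressions dynamically, a small helper keeps the syntax consistent. The helper below is hypothetical (it is not part of the Mindbreeze API) and reproduces the "boost everything matching field:value" pattern from the first example above:

```javascript
// Hypothetical helper: builds an inline boosting expression that boosts
// every result matching field:value by the given factor, while still
// returning all other results unboosted.
function buildBoostAllExpr(field, value, factor) {
  var match = "(" + field + ":" + value + ")";
  return "ALL " + match + "^" + factor + " OR NOT " + match;
}

console.log(buildBoostAllExpr("company", "fishbowl", 3.5));
// ALL (company:fishbowl)^3.5 OR NOT (company:fishbowl)
```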

JavaScript Boosting

JavaScript boosting is similar to inline boosting in that it is also set at query time. It serves many of the same use cases as inline boosting, but can provide a cleaner implementation for clients already working within the Mindbreeze client.js framework. The examples below show how to apply boosting to three different scenarios. Please note, any fields you wish to boost in this way should be added to the Aggregated Metadata Keys configuration for the respective indices in order to enable regex matching.


This example ranks items with fishbowl in the company metadata field 3.5 times higher than other results. For comparison, this rule would have the same effect on rankings as the inline boosting shown in the first example within the previous section.

var application = new Application({});

application.execute(function (application) {

  application.models.search.set("boostings.fishbowl", {
    factor: 3.5,
    query_expr: {
      label: "company", regex: "^fishbowl$"
    }
  }, { silent: true });

});


This example ranks results with a department value of accounting 1.5 times higher than other results. This can be modified to dynamically set the department to a user’s given department: accounting may wish to see accounting documents boosted, whereas the engineering team would want engineering documents boosted. Please note, setting this dynamically requires access to the user’s department data, which is outside the scope of the Mindbreeze search API but is often accessible within the context of an intranet or other business application.

var application = new Application({});

application.execute(function (application) {

  application.models.search.set("boostings.accounting", {
    factor: 1.5,
    query_expr: {
      label: "department", regex: "^accounting$"
    }
  }, { silent: true });

});


This example shows how dynamic, non-indexed information (in this case, the current time) can be used to alter relevancy. Results with a featuredmeals value of dinner are boosted by a factor of 3 when the local time is between 5 PM and 10 PM. This could be extended to boost breakfast, brunch, and lunch for their respective date-time windows, which would be helpful if users were searching for restaurants or other meal-time points of interest.

var application = new Application({});

application.execute(function (application) {

  var now = new Date().getHours();

  if (now >= 17 && now <= 22) {
    application.models.search.set("boostings.dinner", {
      factor: 3.0,
      query_expr: {
        label: "featuredmeals", regex: ".*dinner.*"
      }
    }, { silent: true });
  }

});

As you can see, Mindbreeze offers a variety of options for relevancy adjustments. If you have any questions about our experience working with Mindbreeze or would like to know more, please contact us.

The post Four Types of Mindbreeze Relevancy Boostings and How to Use Them appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Robinson Jeffers

Greg Pavlik - Thu, 2019-01-10 12:25
Today is the birthday of one of the most under-rated American poets (in my view, one of the best we have produced), the builder of Tor House and Hawk Tower, which can still be visited in Carmel, California. A timely documentary on an American genius.

The website for Tor House visits, a fascinating experience:


Creating a local kubectl config file for the proxy to your Kubernetes API server

Pas Apicella - Fri, 2019-01-04 03:04
On my Mac the kubectl CONFIG file exists in a painful location as follows


When using the command "kubectl proxy", invoking the UI requires you to browse to the CONFIG file, which Finder doesn't expose easily. One way around this is as follows

1. Save a copy of that config file in your current directory as follows

papicella@papicella:~/temp$ cat ~/.kube/config > kubeconfig

2. Invoke "kubectl proxy" to start a UI server to your K8's cluster

papicella@papicella:~/temp$ kubectl proxy
Starting to serve on

3. Navigate to the UI using an URL as follows


4. At this point we can browse to the easily accessible TEMP directory, select the file "kubeconfig" we created at step #1, and then click the "Sign In" button

Categories: Fusion Middleware

PCF Heathwatch 1.4 just got a new UI

Pas Apicella - Thu, 2018-12-13 04:45
PCF Healthwatch is a service for monitoring and alerting on the current health, performance, and capacity of PCF, helping operators understand the operational state of their PCF deployment.

Finally got around to installing the new PCF Healthwatch 1.4, and the first thing that struck me was the main dashboard page. It's clear what I need to look at in seconds, and the alerts on the right-hand side are also useful.

Some screen shots below

papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-99$ http http://healthwatch.run.haas-99.pez.pivotal.io/info
HTTP/1.1 200 OK
Content-Length: 45
Content-Type: application/json;charset=utf-8
Date: Thu, 13 Dec 2018 10:26:07 GMT
X-Vcap-Request-Id: ac309698-74a6-4e94-429a-bb5673c1c8f7

    "message": "PCF Healthwatch available"

More Information

Pivotal Cloud Foundry Healthwatch

Categories: Fusion Middleware

Disabling Spring Security if you don't require it

Pas Apicella - Sun, 2018-12-09 17:42
When using the Spring Cloud Services Starter Config Client dependency, for example, Spring Security will also be included (Config Servers are protected by OAuth2). As a result, basic authentication is also enabled on all the service endpoints of your application, which may not be the desired result if you're just building a demo

Add the following to conditionally disable security in your Spring Boot main class
package com.example.employeeservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    @Configuration
    static class ApplicationSecurity extends WebSecurityConfigurerAdapter {

        @Override
        public void configure(WebSecurity web) throws Exception {
            // Disable security on all endpoints (demo use only)
            web.ignoring().antMatchers("/**");
        }
    }
}
Categories: Fusion Middleware

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Pas Apicella - Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here


Docs as follows

Categories: Fusion Middleware

Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems

Most organizations have silos of content spread out amongst databases, file shares, and one or more document management systems. Without a unified search system to tap into this information, knowledge often remains hidden and the assets employees create cannot be used to support design, manufacturing, or research objectives.

An enterprise search system that can connect these disparate content stores and provide a single search experience for users can help organizations increase operational efficiencies, enhance knowledge sharing, and ensure compliance. PTC Windchill provides a primary source for the digital product thread, but organizations often have other key systems storing valuable information. That is why it is critical to provide workers with access to associated information regardless of where it is stored.

This past August, Fishbowl released its PTC Windchill Connector for Google Cloud Search. Fishbowl developed the connector for companies needing a search solution that allows them to spend less time searching for existing information and more time developing new products and ideas. These companies need a centralized way to search their key engineering information stores, like PLM (in this case Windchill), ERP, quality database, and other legacy data systems. Google Cloud Search is Google’s next generation, cloud-based enterprise search platform from which customers can search large data sets both on-premise and in the cloud while taking advantage of Google’s world-class relevancy algorithms and search experience capabilities.

Connecting PTC Windchill and Google Cloud Search

Through Google Cloud Search, Google provides the power and reach of Google search to the enterprise. Fishbowl’s PTC Windchill Connector for Google Cloud Search provides customers with the ability to leverage Google’s industry-leading technology to search PTC Windchill for Documents, CAD files, Enterprise Parts, Promotion Requests, Change Requests, and Change Notices. The PTC Windchill Connector for Google Cloud Search assigns security to all items indexed through the connector based on the default ACL configuration specified in the connector configuration. The connector allows customers to take full advantage of additional search features provided by Google Cloud Search including Facets and Spelling Suggestions just as you would expect from a Google solution.

To read the rest of this blog post and see an architecture diagram showing how Fishbowl connects Google Cloud Search with PTC Windchill, please visit the PTC LiveWorx 2019 blog.

The post Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations

Fishbowl has helped numerous customers migrate Solidworks data into PTC Windchill. We have proven processes and proprietary applications to migrate from SolidWorks Enterprise PDM (EPDM) and PDMWorks, as well as WTPart migrations including structure and linking. This extensive experience, combined with our bulk-loading software, has made us one of the world’s premier PTC data migration specialists.

Over the years, we’ve created various resources for Windchill customers to help them understand their options to migrate Solidworks data into Windchill, as well as some best practices when doing so. After all, we’ve seen firsthand how moving CAD files manually wastes valuable engineering resources that can be better utilized on more important work.

We’ve categorized those resources below. Please explore them and learn how Fishbowl Solution can help you realize the automation gains you are looking for.

  • Blog Posts
  • Infographic
  • Webinar
  • Brochures
  • LinkLoader Web Page

The post Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other


Subscribe to Oracle FAQ aggregator - Fusion Middleware