Fusion Middleware

Disabling Spring Security if you don't require it

Pas Apicella - 4 hours 38 min ago
When using the Spring Cloud Services Starter Config Client dependency, for example, Spring Security will also be included (Config Servers are protected by OAuth2). As a result, basic authentication is enabled on all of your application's service endpoints, which may not be the desired result if you're just building a demo.

Add the following to your Spring Boot main class to disable security:
  
package com.example.employeeservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    // Tell Spring Security to ignore every request path,
    // effectively disabling authentication on all endpoints
    @Configuration
    static class ApplicationSecurity extends WebSecurityConfigurerAdapter {

        @Override
        public void configure(WebSecurity web) throws Exception {
            web
                .ignoring()
                .antMatchers("/**");
        }
    }
}
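If you are on Spring Boot 1.x, basic authentication can also be switched off via configuration properties instead of a WebSecurityConfigurerAdapter; a minimal sketch, noting that these properties were removed in Spring Boot 2.0:

```properties
# application.properties (Spring Boot 1.x only)
# Disable HTTP basic auth on application endpoints
security.basic.enabled=false
# Disable security on actuator endpoints
management.security.enabled=false
```

Either approach leaves every endpoint unauthenticated, so use it for demos only.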
Categories: Fusion Middleware

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Pas Apicella - Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here

https://content.pivotal.io/blog/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today

The docs are available here:

https://docs.pivotal.io/pfs/index.html
Categories: Fusion Middleware

Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems

Most organizations have silos of content spread out amongst databases, file shares, and one or more document management systems. Without a unified search system to tap into this information, knowledge often remains hidden and the assets employees create cannot be used to support design, manufacturing, or research objectives.

An enterprise search system that can connect these disparate content stores and provide a single search experience for users can help organizations increase operational efficiencies, enhance knowledge sharing, and ensure compliance. PTC Windchill provides a primary source for the digital product thread, but organizations often have other key systems storing valuable information. That is why it is critical to provide workers with access to associated information regardless of where it is stored.

This past August, Fishbowl released its PTC Windchill Connector for Google Cloud Search. Fishbowl developed the connector for companies needing a search solution that allows them to spend less time searching for existing information and more time developing new products and ideas. These companies need a centralized way to search their key engineering information stores, like PLM (in this case Windchill), ERP, quality database, and other legacy data systems. Google Cloud Search is Google’s next generation, cloud-based enterprise search platform from which customers can search large data sets both on-premise and in the cloud while taking advantage of Google’s world-class relevancy algorithms and search experience capabilities.

Connecting PTC Windchill and Google Cloud Search

Through Google Cloud Search, Google provides the power and reach of Google search to the enterprise. Fishbowl’s PTC Windchill Connector for Google Cloud Search lets customers leverage Google’s industry-leading technology to search PTC Windchill for Documents, CAD files, Enterprise Parts, Promotion Requests, Change Requests, and Change Notices. The connector assigns security to all items indexed through it based on the default ACL configuration specified in the connector configuration. It also allows customers to take full advantage of additional search features provided by Google Cloud Search, including Facets and Spelling Suggestions, just as you would expect from a Google solution.

To read the rest of this blog post and see an architecture diagram showing how Fishbowl connects Google Cloud Search with PTC Windchill, please visit the PTC LiveWorx 2019 blog.

The post Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations

Fishbowl has helped numerous customers migrate Solidworks data into PTC Windchill. We have proven processes and proprietary applications to migrate from SolidWorks Enterprise PDM (EPDM) and PDMWorks, and to perform WTPart migrations including structure and linking. This extensive experience, combined with our bulk-loading software, has made us one of the world’s premier PTC data migration specialists.

Over the years, we’ve created various resources for Windchill customers to help them understand their options to migrate Solidworks data into Windchill, as well as some best practices when doing so. After all, we’ve seen firsthand how moving CAD files manually wastes valuable engineering resources that can be better utilized on more important work.

We’ve categorized those resources below. Please explore them and learn how Fishbowl Solutions can help you realize the automation gains you are looking for.

Blog Posts
Infographic
Webinar
Brochures
LinkLoader Web Page

The post Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

The Way

Greg Pavlik - Thu, 2018-11-22 13:48
Walking the Camino one meets people from all walks of life - hailing from virtually everywhere, from virtually every continent and creed. All on a path to their destination. Isn't that life itself?

Conceptions of Fudo Myoo in Esoteric Buddhism

Greg Pavlik - Mon, 2018-11-12 17:14
Admittedly, this is an esoteric topic altogether - my own interest in understanding Fudo Myoo in Mahayana Buddhism has largely stemmed from an interest in Japanese art in the Edo wood block tradition - but I thought this was a rather interesting exploration of esoteric Buddhism and, by implication, currents of Japanese culture.

https://tricycle.org/magazine/evil-in-esoteric-japanese-buddhism/

On Education

Greg Pavlik - Mon, 2018-11-12 17:07
'We study to get diplomas and degrees and certifications, but imagine a life devoted to study for no other purpose than to be educated. Being educated is not the same as being informed or trained. Education is an "education", a drawing out of one's own genius, nature, and heart. The manifestation of one's essence, the unfolding of one's capacities, the revelation of one's heretofore hidden possibilities - these are the goals of study from the point of view of the person. From another side, study amplifies the speech and song of the world so that it's more palpably present.

Education in soul leads to the enchantment of the world and the attunement of self.'

Thomas Moore, 'Meditations'

The wait is over! Google Cloud Search with third-party connectivity is now available. Here’s what you need to know.

This month, Google Cloud opened the general availability of Cloud Search with third-party connectivity. This is an evolution of the Cloud Search available across G Suite, the set of cloud-native intelligent productivity and collaboration apps including Gmail, Drive, and Docs. We’ve been working with pre-release versions of Cloud Search since January and are excited to finally share the news and capabilities of this new functionality more broadly. This version supports third-party connectors and integrations with non-Google data sources, both on-premise and in the cloud, opening it up to all enterprise customers.

What is Google Cloud Search?

Google Cloud Search combines Google’s search expertise with features customized for business. Cloud Search can index both G Suite content like Gmail and Drive as well as third-party data both on-premise and in the cloud. This provides a unified search experience and enforces document-level permissions already in place in your repositories. Cloud Search boasts Google’s industry-leading machine learning relevancy and personalization to bring the speed, performance and reliability of Google.com to enterprise customers.

Who can use Cloud Search?

Any enterprise can purchase Cloud Search as a standalone platform edition regardless of whether you use other Google products such as G Suite or Google Cloud Platform. If users in your domain already have G Suite Enterprise licenses, they will now be able to access results from third-party data via the Cloud Search application provided as part of G Suite. You will be allotted a fixed quota of third-party data that you can index, based on the number of Enterprise licenses. G Suite customers can also purchase the standalone platform edition if additional quota or search applications are required.

What’s new in this release?

Third-Party Data Source Connectors

To enable easy indexing of third-party data both on-premise and in the cloud, Google Cloud has released the following reference connectors.

While the above list covers many popular sources, numerous other data sources exist within individual organizations. For this reason, Google Cloud also released content and identity SDKs to enable custom connector development. Fishbowl has been working with these SDKs for nearly a year and we’ve released the following connectors for Cloud Search:

Embeddable Search Widget

As part of this release, Google Cloud has introduced two new options for consuming search results that include third-party data. The first is an embeddable search widget which provides a customizable search interface for use within an organization’s internal web applications. With only a small amount of HTML and JavaScript, the widget enables customers to integrate with Cloud Search. Using the search widget offers common search features such as facets and pagination while minimizing required development efforts. You can customize components of the interface with your own CSS and JavaScript if desired.
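For illustration, embedding the widget looks roughly like the following sketch; the element IDs here are hypothetical, and the exact builder calls and initialization parameters should be taken from Google's Cloud Search widget documentation:

```html
<!-- Hypothetical page embedding the Cloud Search widget -->
<input type="text" id="search_box" placeholder="Search">
<div id="search_results"></div>

<script src="https://apis.google.com/js/api.js"
        onload="gapi.load('client:cloudsearch-widget', initWidget)"></script>
<script>
  function initWidget() {
    // OAuth client ID and the cloud_search scope are configured here
    gapi.client.init({ /* ... */ }).then(function () {
      var resultsContainer = new gapi.cloudsearch.widget.resultscontainer.Builder()
        .setSearchResultsContainerElement(document.getElementById('search_results'))
        .build();
      new gapi.cloudsearch.widget.searchbox.Builder()
        .setInput(document.getElementById('search_box'))
        .setResultsContainer(resultsContainer)
        .build();
    });
  }
</script>
```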

Query API

The second new search option is the Cloud Search Query API. The API provides search and suggest services for creating fully custom search interfaces powered by Cloud Search. It also offers more flexibility than the search widget when embedding search results in an existing application.
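For illustration, a request body for the Query API's query.search method might look something like this; the query text and search application ID below are placeholders:

```json
{
  "requestOptions": {
    "searchApplicationId": "searchapplications/default_search_application"
  },
  "query": "windchill change notice",
  "pageSize": 10
}
```

The response contains result items with snippets and metadata that you can render in your own interface.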

Note that in addition to the API and search widget, Google Cloud also offers a pre-built search interface available to customers at cloudsearch.google.com and via Google’s Cloud Search mobile apps available for iOS and Android. These interfaces now support the inclusion of third-party results.

Is this a replacement for Google Search Appliance?

Cloud Search is Google’s next-generation search platform. It is not a GSA in the Cloud but may be an excellent replacement option for many GSA customers. GSA customers who sign a 2-year Cloud Search contract before the end of the year can extend their appliances through the end of 2019 if needed. Google is also offering these customers a 30% discount on their first year of Cloud Search. If you have a GSA that is approaching expiration and are wondering whether Cloud Search would be a good fit, please contact us.

What’s next?

If you’d like to learn more about Google Cloud Search, schedule a demo, or discuss whether your search use case is a good fit, please get in touch.

Fishbowl Solutions is a Google Cloud Partner and authorized Cloud Search reseller.

The post The wait is over! Google Cloud Search with third-party connectivity is now available. Here’s what you need to know. appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Using the Mindbreeze client.js File to Create Custom Search Interfaces

This post describes how to create custom Mindbreeze search interfaces using the built-in Mindbreeze client.js file. It is a follow-up to our post Four Options for Creating Mindbreeze Search Interfaces, where we mention that Option 3 is creating custom Mindbreeze web applications.

For this example, I will be using my local web server with XAMPP for Windows which uses Apache in the background. Setting up XAMPP is beyond the scope of this blog post, but this approach can be taken with any web server or architecture.

The widgets and HTML snippets referenced in this post are based on the following documentation from the Mindbreeze website: Development of Search Apps.

Creating a Basic Search Page

To begin, I created a blog.html file referencing Mindbreeze’s client.js file and using RequireJS to load the Mindbreeze search application. To do this, I created a new application object and told the page where the starting <div> block is using the rootEl property of the application. Since I did not want the page to run a blank search right away, I also added the startSearch property and set it to false.

There are a few mustache templates that I injected onto the page by copying and pasting from the default Mindbreeze search application (index.html). I then removed some of the optional elements to create a no-frills search page as shown in the snippet below. I’ve included the templates for result count and spelling suggestions, which are contained in the searchinfo and results templates respectively.

<html>
  <head>
    <title>Mindbreeze Search</title>
  </head>
  <body>
    <script src="https://mindbreeze.fishbowlsolutions.com:23352/apps/scripts/client.js" data-global-export="false"></script>
    <script>
      Mindbreeze.require(["client/application"], function (Application) {
        var application = new Application({
          rootEl: document.getElementById("searchresults"),
          startSearch: false,
        });
      });
    </script>
    <div id="searchresults">
      <div data-template="view" data-count="10" data-constraint="ALL">
        <script type="text/x-mustache-template" data-attr-role="status" data-attr-class="{{^estimated_count?}}hide{{/estimated_count?}}"
          data-attr-tabindex="-1">
          {{^status_messages.no_results?}}
          <h3>
            {{#estimated_count?}}
              {{estimated_count}} {{i18n.editor_result_title}}
            {{/estimated_count?}}
          </h3>
          {{/status_messages.no_results?}}
        </script>
        <div data-template="searchinfo"></div>
        <div data-template="results" class="media-list list-group">
        </div>
      </div>
    </div>
  </body>
</html>

Navigating to this page with a query parameter (e.g. ?query=ALL) returns a simple search results list without any styling.

We can now add our own custom styling to the page. I added the Bootstrap CDN, along with jQuery, to style my page.

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
    <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script>
    <script defer src="https://use.fontawesome.com/releases/v5.0.13/js/all.js" integrity="sha384-xymdQtn1n3lH2wcu0qhcdaOpQwyoarkgLVxC/wZ5q7h9gHtxICrpcaSUfygqZGOe" crossorigin="anonymous"></script>

Adding some Bootstrap classes like container and col-md-9 for the results helps me quickly style the page in preparation for adding facets.

Mindbreeze offers two different facet widgets: FilteredFacets and FilteredFacet. FilteredFacets lets the configuration page in the Management Center control which facets are displayed on the page, while FilteredFacet lets developers manually add the desired facets individually. For this demonstration, I will manually add a facet using FilteredFacet. This is shown below with the addition of a facet for author.

<html>
<head>
  <title>Mindbreeze Search</title>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm"
    crossorigin="anonymous">
  <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN"
    crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q"
    crossorigin="anonymous"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl"
    crossorigin="anonymous"></script>
  <script defer src="https://use.fontawesome.com/releases/v5.0.13/js/all.js" integrity="sha384-xymdQtn1n3lH2wcu0qhcdaOpQwyoarkgLVxC/wZ5q7h9gHtxICrpcaSUfygqZGOe"
    crossorigin="anonymous"></script>
</head>
<body>
  <script src="https://mindbreeze.fishbowlsolutions.com:23352/apps/scripts/client.js" data-global-export="false"></script>
  <script>
    Mindbreeze.require(["client/application"], function (Application) {
        var application = new Application({
          rootEl: document.getElementById("searchresults"),
          startSearch: false,
        });
      });
  </script>
  <div id="searchresults" class="container">
    <div data-template="view" data-count="10" data-constraint="ALL">
      <!-- The constraint is optional, the same as in the redirect of the original page -->
      <script type="text/x-mustache-template" data-attr-role="status" data-attr-class="{{^estimated_count?}}hide{{/estimated_count?}}"
        data-attr-tabindex="-1">
        {{^status_messages.no_results?}}
          <h3>
            {{#estimated_count?}}
              {{estimated_count}} {{i18n.editor_result_title}}
            {{/estimated_count?}}
          </h3>
          {{/status_messages.no_results?}}
        </script>
      <div data-template="searchinfo"></div>
      <div class="row">
        <div class="col-md-9">
          <div data-template="results">
          </div>
        </div>
        <div class="col-md-3">
          <div data-template="filteredfacet" data-name="Author" data-container-tag-name="div" data-container-class-name="filter"
            data-entry-tag-name="div" data-entry-class-name="entry">
          </div>
        </div>
      </div>
    </div>
  </div>
</body>
</html>

We can see that I now have the author field as a configured facet. There are more configuration options offered, such as changing the template of the results in the filter or its HTML tags, allowing user input, and modifying the title label. Information on these options can be found here: Mindbreeze Filtered Facet Widget.

Now we will add the search form to the page. This will allow suggestions to show up while a user is typing a query. You can add additional parameters to the input tag to tell Mindbreeze which data sources to run suggestions against, such as popular searches, recent searches, and document properties. The documentation is available here: Mindbreeze Suggestions.

We’ll style the suggestions in the next section.

<form class="center search-field mb-print-left" data-template="searchform" data-requires-user-input="true">
  <input data-template="suggest" data-disabled="false" data-placeholder="search" data-shortcut="alt+1" id="query" data-source-id-pattern="document_property|popularsearches" data-initial-source-id-pattern="document_property|popularsearches" data-grouped="true" class="" name="query" type="search" autofocus="" autocomplete="off" placeholder="Search">
  <button class="btn btn-link mb-no-print" type="submit" tabindex="-1"><i class="icon-search"></i></button>
</form>

Now we want to add pagination or infinite scrolling to allow loading of more results. For this example, I will use paging via the Mindbreeze pages template. This displays the page numbers in a list, and Mindbreeze handles the paging actions for us.

<div class="col-md-9">
  <div data-template="results">
  </div>
  <div data-template="pages">
    <script type="text/x-mustache-template" data-class-name="mypaging" data-tag-name="ul">
      {{#pages?}}
        {{#pages}}
          <li class="{{#current_page?}}active{{/current_page?}}"><a href="#" data-action-name="setPage" data-page="{{page}}">{{page}}</a></li>
        {{/pages}}
      {{/pages?}}
    </script>
  </div>
</div>

Modifying Mustache Templates to Alter Result Data

Now we want to structure our result data. This is an easy way to adjust the information displayed for each result item. We can do this by overriding our results <div> and adding our own mustache template. Here is where you can make changes such as adding specific metadata to each result to display contextually relevant information.

<div data-template="results">
              <script type="text/x-mustache-template" data-class-name="media mb-clickable-phone" data-attr-role="group"
                data-attr-data-action-object='{ "toggleOpen": { "enabledSelector": ".visible-phone" }}'
                data-attr-aria-labelledby="result_{{id}}">
                <span class="pull-left media-object" aria-hidden="true">
                    {{#actions.data[0].value.href?}}
                    <a href="{{actions.data[0].value.href}}" data-disabled-selector=".visible-phone"
                        tabindex="-1" target="_self">{{/actions.data[0].value.href?}} {{{icon}}} {{#actions.data[0].value.href?}}
                    </a>{{/actions.data[0].value.href?}}
                </span>
                <div class="media-body">
                    <h3 class="media-heading" id="result_{{id}}">
                        {{#actions.data[0].value.href?}}
                        <a href="{{actions.data[0].value.href}}"
                            data-enabled-selector=".visible-phone" target="_self">{{/actions.data[0].value.href?}} {{{title}}} {{#actions.data[0].value.href?}}
                        </a>{{/actions.data[0].value.href?}}
                    </h3>
                    <ul class="mb-actions mb-separated hidden-phone mb-visible-open mb-no-print">
                        {{#actions.data}}
                        <li class="nowrap">{{{html}}}</li>{{/actions.data}}

                    </ul>
                    {{#content}}
                    <p class="mb-content">{{{.}}}</p>
                    {{/content}}
                    {{#mes:nested.data?}}
                    <ul class="mb-nested">
                        {{#mes:nested}}
                        <div class="media mb-nested" data-action-object="{ &quot;toggleOpen&quot;: { &quot;enabledSelector&quot;: &quot;.visible-phone&quot; }}">
                            {{#actions.data[0].value.href?}}
                            <a href="{{actions.data[0].value.href}}" data-disabled-selector=".visible-phone" target="_self">
                                {{/actions.data[0].value.href?}}
                                <b>{{{title}}}</b>
                                {{#actions.data[0].value.href?}}
                            </a>
                            {{/actions.data[0].value.href?}}
                            <ul class="mb-actions mb-separated hidden-phone mb-visible-submenu-open mb-no-print">
                                {{#actions.data}}
                                <li class="nowrap">
                                    <small>{{{html}}}</small>
                                </li>{{/actions.data}}
                            </ul>
                            <dl class="mb-comma-separated-values mb-separated mb-summarized-description-list mb-small">
                                <dd>{{{mes:date}}}</dd>
                            </dl>
                        </div>
                        {{/mes:nested}}
                    </ul>
                    {{/mes:nested.data?}}
                    <span class="clearfix"></span>
                </div>
            </script>

            </div>
Adding Custom Styling

After the appropriate data is returned, we can apply some styling. For this, I am going to reference a custom CSS file on my Apache server.

We recommend that your CSS handle displaying search results on both mobile and desktop displays.

You can see the styling of the Suggestions (using jQuery UI autocomplete) below.

I’ve included the final code for this example below. You can see how this could be modified and extended to suit a wide variety of needs and use cases. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us.

#searchresults {
    display: flex;
    flex-direction: column;
}

.header{
    background: #ffffff;
}

.header .container {
    display: flex;
    align-items: center;
    justify-content: flex-start;
    max-width: 100vw;
    height: 7rem;
    border-bottom: 1px solid rgba(0, 0, 0, 0.2);
    padding: 0;
}

.header .brand-logo {
    width: 72px;
    height: 72px;
    padding: 0 1em;
    margin: 0 0.5em;
    box-sizing: content-box;
}

form{
    position: relative;
    margin: 0;
    flex: 0 1 300px;
    max-width: 300px;
    padding: 0 1em;
    display: flex;
    flex-direction: column;
    align-items: center;
    height: 32px;
    box-shadow: inset 0 0 0 2px #adadad;
    border-radius: 1em;
    transition: box-shadow 0.25s;
}

form,
form:active,
form:focus,
form:focus-within,
form input,
form input:active,
form input:focus,
form input:focus-within {
    outline: none;
}

form .search-box {
    display: flex;
    flex-wrap: wrap;
    flex-direction: row;
    align-items: center;
    width: 100%;
    height: 100%;
}

form input{
    flex: 0 0 calc(100% - 44px);
    padding: 0;
    border: none;
}

form:focus-within {
    box-shadow: inset 0 0 0 2px #868686;
}

form .search-box button{
    border: none;
    padding-right: 0;
    flex: 0 0 44px;
}

form .search-box .ui-autocomplete {
    max-width: 300px;
    width: 100% !important;
    flex: 0 0 100%;
}

form span[role='status']{
    display: none;
}

form ul.ui-autocomplete{
    background: white;
    z-index: 10;
    list-style: none;
    border: 1px solid black;
    padding: 0;
    top: 0 !important;
    left: 0 !important;
}

form .ui-autocomplete-category {
    background: #bdbdbd;
    font-weight: 700;
}

form .ui-autocomplete li{
    padding-left: 0.5em;
}

ul.ui-autocomplete span.matched{
    font-weight: 700;
}

form ul a:hover{
    text-decoration: none;
}

.ui-autocomplete li:hover:not(.ui-autocomplete-category) {
    background: rgba(0, 0, 0, 0.1);
    cursor: pointer;
}

.ui-autocomplete a{
    display: block;
}

.ui-autocomplete a{
    color: black;
}

.resultHeader {
    font-size: 14pt;
}

.resultHeader a, .resultHeader a:hover{
    text-decoration: none;
    color: black;
}

div[role='status']{
    padding: 10px 0px;
}

.results-wrapper {
    margin: 0;
    padding: 0 1.5em;
    box-sizing: border-box;
    flex: 0 0 100%;
    max-width: 100%;
}

.results-wrapper .container {
    width: 100%;
    max-width: 100%;
    display: flex;
    flex-direction: column;
    padding: 0;
}

.results-wrapper .row {
    margin: 0;
    display: flex;
}

.results-wrapper .results-container {
    flex-direction: row;
    flex-wrap: nowrap;
}

.results-wrapper .results-container .results {
    padding: 0;
    flex: 1 0 75%;
    box-sizing: border-box;
}

.results-wrapper .row .results .results-list {
    display: flex;
    flex-direction: column;
}

.results-wrapper .row .results .results-list .row {
    display: flex;
    border-radius: 4px;
    box-shadow: 0 1px 6px rgba(0,0,0,0.16), 0 2px 6px rgba(0,0,0,0.23);
    transition: box-shadow 0.25s;
    flex-wrap: nowrap;
}

.results-wrapper .row .results .results-list .row:hover {
    box-shadow: 0 5px 20px rgba(0,0,0,0.19), 0 2px 6px rgba(0,0,0,0.23);
}

.results-wrapper .row .results .results-list .row .thumbnail {
    flex: 1 0 80px;
    box-sizing: border-box;
    padding: 14px;
    max-width: 125px;
}

.results-wrapper .results-container .filters {
    flex: 0 1 25%;
    box-sizing: border-box;
    margin: 0 10px 0 30px;
}

.row .thumbnail a {
    max-width: 100%;
}

.row .thumbnail a .mb-thumbnail {
    max-width: 100%;
    height: auto;
}

.results-wrapper .results-container .results .results-list .row .content {
    flex: 1 0 75%;
    display: flex;
    flex-direction: column;
    box-sizing: border-box;
    padding: 0.5rem 0.5rem 0.5rem 0;
}

.row .content .resultHeader {
    box-sizing: border-box;
    margin-bottom: 0.5rem;
}

.row .content .resultHeader a {
    font-weight: bold;
    font-size: 1rem;
}

.row .content .resultHeader a:hover {
    text-decoration: underline;
}

.row .content .mb-content {
    font-size: 0.9rem;
}

em{
    font-style: normal;
    background: rgba(0, 0, 0, 0.1);
}

div[data-template='results'] .row{
    margin-bottom: 1.5rem;
}

.paging {
    padding: 0;
}

.paging li{
    display: inline-block;
    /* You can also add some margins here to make it look prettier */
    zoom:1;
    *display:inline;
}

.filter input[type="checkbox"] {
    width: 0;
    height: 0;
    position: relative;
    top: -6px;
    margin-right: 0.8rem;
}

.filter input[type="checkbox"]:before {
    position: relative;
    display: block;
    width: 8px;
    height: 8px;
    left: 2px;
    top: 2px;
    box-sizing: border-box;
    content: "";
    border: 1px solid rgba(0, 0, 0, 0.4);
    background: transparent;
    transition: all 0.1s linear;
    cursor: pointer;
}

.filter input[type="checkbox"]:hover:before,
.filter input[type="checkbox"]:checked:before {
    width: 12px;
    height: 12px;
    left: 0;
    top: 0;
    border: 0;
}

.filter input[type="checkbox"]:hover:before {
    background: rgba(0, 123, 255, 0.6);
}

.filter input[type="checkbox"]:checked:before {
    background: rgba(0, 123, 255, 1);
}

.paging a{
    float: left;
    padding: 4px 12px;
    line-height: 20px;
    text-decoration: none;
    background-color: #fff;
    border: 1px solid #b5b5b5;
    border-radius: 4px;
}

.paging .disabled>span{
    color: #777;
    cursor: default;
    background-color: transparent;
    float: left;
    padding: 4px 12px;
    line-height: 20px;
    text-decoration: none;
    border: 1px solid #b5b5b5;
    border-radius: 4px;
    margin-right: 5px;
}

a[data-action-name='previousPage']{
    margin-right: 5px;
}

.facet >div > input{
    display: none;
}

.checkbox {
    display: flex;
    align-items: center;
    margin-bottom: 1rem;
}

.checkbox > span {
    font-size: 14px;
    line-height: 13px;
    padding-left: 8px;
}

@media screen and (max-width: 768px) {
    .results-wrapper .row .results .results-list .row .thumbnail {
        display: none;
    }

    .results-wrapper .row .results .results-list .row .content {
        padding: 14px;
    }

    .filters {
        display: none;
    }
}


Index.html
<html>

<head>
  <title>Mindbreeze Search</title>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm"
    crossorigin="anonymous">
  <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN"
    crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q"
    crossorigin="anonymous"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl"
    crossorigin="anonymous"></script>
  <script defer src="https://use.fontawesome.com/releases/v5.0.13/js/all.js" integrity="sha384-xymdQtn1n3lH2wcu0qhcdaOpQwyoarkgLVxC/wZ5q7h9gHtxICrpcaSUfygqZGOe"
    crossorigin="anonymous"></script>
  <link rel="stylesheet" href="/blog.css">
</head>

<body>
  <script src="https://mindbreeze.fishbowlsolutions.com:23382/apps/scripts/client.js" data-global-export="false"></script>
  <script>
    Mindbreeze.require(["client/application"], function (Application) {
        var application = new Application({
          rootEl: document.getElementById("searchresults"),
          startSearch: false,
        });
      });
  </script>
  <div id="searchresults" class="">
    <div class="header">
      <div class="container">
        <img class="brand-logo" src="/fb.svg" alt="Fishbowl Solutions">
        <form class="center search-field mb-print-left" data-template="searchform" data-requires-user-input="true">
          <div class="search-box">
            <input data-template="suggest" data-source-id-popularsearches-title="Popular Searches" data-disabled="false"
              data-placeholder="search" data-shortcut="alt+1" id="query" data-source-id-pattern="document_property|popularsearches|recent_query"
              data-initial-source-id-pattern="recent_query|popularsearches" data-grouped="true" class="" name="query"
              type="search" autofocus="" autocomplete="off" placeholder="Search">
            <button class="btn btn-link mb-no-print" type="submit" data-i18n="[title]action_search">
              <i class="fas fa-search"></i>
            </button>
          </div>
        </form>
      </div>
    </div>
    <div class="container results-wrapper">
      <div data-template="view" data-count="5" data-constraint="ALL" class="container">
        <script type="text/x-mustache-template" data-attr-role="status" data-attr-class="{{^estimated_count?}}hide{{/estimated_count?}}"
          data-attr-tabindex="-1">
          {{^status_messages.no_results?}}
          <h4>
            {{#estimated_count?}}
              {{estimated_count}} {{i18n.editor_result_title}}
            {{/estimated_count?}}
          </h4>
          {{/status_messages.no_results?}}
        </script>
        <div data-template="searchinfo"></div>
        <div class="row results-container">
          <div class="results">
            <div data-template="results" class="results-list">
              <script type="text/x-mustache-template" data-class-name="row" data-attr-role="group"
                data-attr-data-action-object='{ "toggleOpen": { "enabledSelector": ".visible-phone" }}'
                data-attr-aria-labelledby="result_{{id}}">
                <span class="thumbnail" aria-hidden="true">
                    {{#actions.data[0].value.href?}}
                    <a href="{{actions.data[0].value.href}}" data-disabled-selector=".visible-phone"
                        tabindex="-1" target="_self">{{/actions.data[0].value.href?}}{{#icon?}} {{{icon}}}{{/icon?}}{{^icon?}} <img src="/no-thumbnail.jpg" class="mb-thumbnail"/>{{/icon?}} {{#actions.data[0].value.href?}}
                    </a>{{/actions.data[0].value.href?}}
                </span>
                <div class="content">
                    <div class="resultHeader" id="result_{{id}}">
                        {{#actions.data[0].value.href?}}
                        <a href="{{actions.data[0].value.href}}"
                            data-enabled-selector=".visible-phone" target="_self">{{/actions.data[0].value.href?}} {{{title}}} {{#actions.data[0].value.href?}}
                        </a>{{/actions.data[0].value.href?}}
                      </div>

                    {{#content}}
                    <p class="mb-content">{{{.}}}</p>
                    {{/content}}
                    {{#mes:nested.data?}}
                    <ul class="mb-nested">
                        {{#mes:nested}}
                        <div class="media mb-nested" data-action-object="{ &quot;toggleOpen&quot;: { &quot;enabledSelector&quot;: &quot;.visible-phone&quot; }}">
                            {{#actions.data[0].value.href?}}
                            <a href="{{actions.data[0].value.href}}" data-disabled-selector=".visible-phone" target="_self">
                                {{/actions.data[0].value.href?}}
                                <b>{{{title}}}</b>
                                {{#actions.data[0].value.href?}}
                            </a>
                            {{/actions.data[0].value.href?}}
                            <ul class="mb-actions mb-separated hidden-phone mb-visible-submenu-open mb-no-print">
                                {{#actions.data}}
                                <li class="nowrap">
                                    <small>{{{html}}}</small>
                                </li>{{/actions.data}}
                            </ul>
                            <dl class="mb-comma-separated-values mb-separated mb-summarized-description-list mb-small">
                                <dd>{{{mes:date}}}</dd>
                            </dl>
                        </div>
                        {{/mes:nested}}
                    </ul>
                    {{/mes:nested.data?}}
                </div>
            </script>

            </div>

          </div>
          <div class="filters">
            <div data-template="filteredfacet" data-title-tag-name="h4" data-name="Author" data-container-tag-name="div"
              data-container-class-name="filter"  data-entry-tag-name="div"
              data-entry-class-name="entry" class="facet">
            </div>
          </div>

        </div>
        <div class="row">
            <div data-template="pages">
              <div class="pagination">
                <script type="text/x-mustache-template" data-class-name="paging" data-tag-name="ul">

                  {{#pages?}}

                      {{#onFirstPage?}}
                        <li class="disabled"><span>&laquo;</span></li>
                      {{/onFirstPage?}}
                      {{^onFirstPage?}}
                        <li><a href="#" data-action-name="previousPage">&laquo;</a></li>
                      {{/onFirstPage?}}

                      {{#pages}}
                        <li class="{{#current_page?}}active{{/current_page?}}"><a href="#" data-action-name="setPage" data-page="{{page_number}}">{{page}}</a></li>
                      {{/pages}}

                      {{#more_avail?}}
                        <li class="disabled"><span>&hellip;</span></li>
                      {{/more_avail?}}

                      {{#onLastPage?}}
                        <li class="disabled"><span>&raquo;</span></li>
                      {{/onLastPage?}}
                      {{^onLastPage?}}
                        <li><a href="#" data-action-name="nextPage">&raquo;</a></li>
                      {{/onLastPage?}}

                  {{/pages?}}
              </script>
              </div>
            </div>
            </div>
      </div>
    </div>
  </div>
</body>

</html>




<!--

var application = new Application({
  rootEl: document.getElementById("searchresults"),
  startSearch: false,
   queryURLParameter: "searchQuery",
});
// constraint if necessary
//application.setConstraint({unparsed: "extension:pdf"}); //constraint is optional
window.search = function () {
  var searchQuery = document.getElementById("searchQuery").value;
  //you need this only if you add the searchbox on the searchpage as well
  // The redirect targets the current page (window.location.href), so if you search from the search page the redirect goes back to the search page as well.
  window.location.href = "?searchQuery=" + encodeURIComponent(searchQuery);
}
function getUrlParameter(name) {
    name = name.replace(/[\[]/, '\\[').replace(/[\]]/, '\\]');
    var regex = new RegExp('[\\?&]' + name + '=([^&#]*)');
    var results = regex.exec(location.search);
    return results === null ? '' : decodeURIComponent(results[1].replace(/\+/g, ' '));
};
document.getElementById("searchQuery").value = getUrlParameter("searchQuery");

-->

The post Using the Mindbreeze client.js File to Create Custom Search Interfaces appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

One Click Access on the Shop Floor to Part Information with PTC ThingWorx® Navigate®

Shop floor technicians and operators involved in assembly and other processes are at the critical, final steps in the manufacturing process. Unfortunately, these workers are often at the mercy of out-of-date, less-than-accurate paper documentation, or they need to access multiple systems to find associated parts information. These issues create bottlenecks that can impact quality and on-time shipments, as well as lead to employee frustration.

The following is a summary of the problems Fishbowl Solutions has seen at customers when it comes to accessing parts information needed for assembly:

  • Having the design engineers create manufacturing documentation, print it, and deliver physical copies to shop floor workers
  • Storing associated parts information on a network drive, making it hard to find for shop floor staff but especially new workers
  • Quality alerts and other processes are not integrated with any systems
  • Parts information contained within PDFs requires excessive scrolling to get to the information needed
  • The MPMLink viewer requires multiple clicks to get to relevant parts information

To solve these problems, Fishbowl has worked with customers to leverage PTC ThingWorx to build shop floor viewing applications that surface relevant information to workers in one simple view.

To read the rest of this blog post and see sample screenshots of the shop floor viewing application, please click over to the PTC LiveWorx 2019 blog.

The post One Click Access on the Shop Floor to Part Information with PTC ThingWorx® Navigate® appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Spring Cloud GCP using Spring Data JPA with MySQL 2nd Gen 5.7

Pas Apicella - Sun, 2018-10-07 19:06
Spring Cloud GCP adds integrations with Spring JDBC so you can run your MySQL or PostgreSQL databases in Google Cloud SQL using Spring JDBC, or other libraries that depend on it like Spring Data JPA. Here is an example of using Spring Data JPA with Spring Cloud GCP.

1. First we need a MySQL 2nd Gen 5.7 instance to exist in our GCP account, which I previously created as shown below
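For completeness, the instance creation itself can be scripted with the Cloud SDK. This is only a sketch: the tier below is an assumption, the instance name and region match the example used later in this post, and the gcloud call is commented out because it requires an authenticated gcloud session against a billable project.

```shell
# Assumed values matching the example instance used in this post.
INSTANCE=apples-mysql-1
REGION=australia-southeast1

# Commented out: requires an authenticated gcloud session.
# gcloud sql instances create "$INSTANCE" \
#     --database-version=MYSQL_5_7 \
#     --tier=db-n1-standard-1 \
#     --region="$REGION"

echo "Cloud SQL instance: $INSTANCE ($REGION)"
```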




2. Create a new project using Spring Initializr, or however you like to create it, BUT ensure you have the following dependencies in place. Here is an example of what my pom.xml looks like; in short, add the Maven dependencies as per the image below



pom.xml

  
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.5.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <spring-cloud-gcp.version>1.0.0.RELEASE</spring-cloud-gcp.version>
    <spring-cloud.version>Finchley.SR1</spring-cloud.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-rest</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>


...

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>${spring-cloud-gcp.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3. Let's start by creating a basic Employee entity as shown below

Employee.java
  
package pas.apj.pa.sb.gcp;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
@Table(name = "employee")
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
}

4. Let's now add a Spring Data JpaRepository for our entity

EmployeeRepository.java
  
package pas.apj.pa.sb.gcp;

import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}

5. Let's create a basic RestController to show all our Employee entities

EmployeeRest.java
  
package pas.apj.pa.sb.gcp;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class EmployeeRest {

    private EmployeeRepository employeeRepository;

    public EmployeeRest(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @RequestMapping("/emps-rest")
    public List<Employee> getAllemps() {
        return employeeRepository.findAll();
    }
}

6. Let's create an ApplicationRunner to show our list of Employees as the application starts up

EmployeeRunner.java
  
package pas.apj.pa.sb.gcp;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
public class EmployeeRunner implements ApplicationRunner {

    private EmployeeRepository employeeRepository;

    public EmployeeRunner(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        employeeRepository.findAll().forEach(System.out::println);
    }
}

7. Add a data.sql file to create some records in the database at application startup

data.sql

insert into employee (name) values ('pas');
insert into employee (name) values ('lucia');
insert into employee (name) values ('lucas');
insert into employee (name) values ('siena');

8. Finally, our "application.yml" file needs to connect to our MySQL instance running in GCP as well as set some properties for JPA, as shown below

spring:
  jpa:
    hibernate:
      ddl-auto: create-drop
      use-new-id-generator-mappings: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MariaDB53Dialect
  cloud:
    gcp:
      sql:
        instance-connection-name: fe-papicella:australia-southeast1:apples-mysql-1
        database-name: employees
  datasource:
    initialization-mode: always
    hikari:
      maximum-pool-size: 1


A couple of things in here are important.

- Set the Hibernate property "dialect: org.hibernate.dialect.MariaDB53Dialect". Without it, when Hibernate creates tables for your entities you will run into the following error, because Cloud SQL database tables are created using the InnoDB storage engine.

ERROR 3161 (HY000): Storage engine MyISAM is disabled (Table creation is disallowed).

- For a demo I don't need multiple DB connections, so I set the datasource "maximum-pool-size" to 1.

- Notice how I set "instance-connection-name" and "database-name", which are vital for Spring Cloud GCP to establish database connections.
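The "instance-connection-name" always has the form PROJECT:REGION:INSTANCE. Here is a small sketch of assembling it (the project, region, and instance values are taken from this post's example setup; substitute your own), with the equivalent gcloud lookup shown as a comment:

```shell
# Values from this post's example setup; substitute your own.
PROJECT=fe-papicella
REGION=australia-southeast1
INSTANCE=apples-mysql-1

# Spring Cloud GCP expects PROJECT:REGION:INSTANCE.
CONNECTION_NAME="${PROJECT}:${REGION}:${INSTANCE}"
echo "$CONNECTION_NAME"

# The same value can be read back from an existing instance:
# gcloud sql instances describe "$INSTANCE" --format='value(connectionName)'
```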

9. Now we need to make sure we have a database called "employees" as per our "application.yml" setting.


10. Now let's run our Spring Boot application and verify it is working, showing some output from the logs

- Connection being established

2018-10-08 10:54:37.333  INFO 89922 --- [           main] c.google.cloud.sql.mysql.SocketFactory   : Connecting to Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1] via ssl socket.
2018-10-08 10:54:37.335  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : First Cloud SQL connection, generating RSA key pair.
2018-10-08 10:54:38.685  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : Obtaining ephemeral certificate for Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1].
2018-10-08 10:54:40.132  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : Connecting to Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1] on IP [35.197.180.223].
2018-10-08 10:54:40.748  INFO 89922 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.

- Showing the 4 Employee records

Employee(id=1, name=pas)
Employee(id=2, name=lucia)
Employee(id=3, name=lucas)
Employee(id=4, name=siena)

11. Finally, let's make a RESTful call as we defined above using HTTPie as follows

pasapicella@pas-macbook:~$ http :8080/emps-rest
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 08 Oct 2018 00:01:42 GMT
Transfer-Encoding: chunked

[
    {
        "id": 1,
        "name": "pas"
    },
    {
        "id": 2,
        "name": "lucia"
    },
    {
        "id": 3,
        "name": "lucas"
    },
    {
        "id": 4,
        "name": "siena"
    }
]

More Information

Spring Cloud GCP
https://cloud.spring.io/spring-cloud-gcp/

Spring Cloud GCP SQL demo (This one is using Spring JDBC)
https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-sql-sample

Categories: Fusion Middleware

Lizok's Bookshelf

Greg Pavlik - Sun, 2018-09-30 17:34
The first of Eugene Vodolazkin's novels translated into English was, of course, Laurus, which ranks as one of the significant literary works of the current century. I was impressed by the translator's ability to convey not just a feel for what I presume the original has, but a kind of "other-time-yet-our-timeness" that seems an essential part of the author's objective. I recently picked up Vodolazkin's Aviator and thought to look up the translator as well. I was delighted to find her blog on modern Russian literature, which can be found here:

http://lizoksbooks.blogspot.com/2018/09/the-2018-nose-award-longlist.html

Sea of Fertility

Greg Pavlik - Sun, 2018-09-23 18:24
In a discussion on some of my reservations about Murakami's take on 20th century Japanese literature, a friend commented on Mishima's Sea of Fertility tetralogy with some real insights I thought worth preserving and sharing, albeit anonymously (if you're not into Japanese literature, now's a good time to stop reading):

"My perspective is different: it was a perfect echo of the end of “Spring Snow” and a final liberation of the main character from his self-constructed prison of beliefs. Honda’s life across the novels represents the false path of consciousness: the inglorious decay and death of the soul trapped in a repetition of situations that it cannot fathom being forced into waking. He is forced into being an observer of his own life, eventually debasing himself into a “peeping Tom” even as he works as a judge. The irony is rich. Honda decays through the four novels since he clings to the memory of his friend (Kiyoaki) and does not understand the constructed nature of his experience and desires. He is asleep. He wants Matsugae’s final dream to be the truth (that they will “...meet again under the Falls.”) His desires have been leading him in a circle and the final scene in the garden is his recognition of what the Abbess (Satoko from Spring Snow) was trying to convey to him. When she tells him, “There was no such person as Kiyoaki Matsugae”, it is her attempt to cure him of his delusion (and spiritual illness that has rendered him desperate and weak - chasing the ego illusions of his youth and seeking the reincarnation of his friend everywhere.) Honda lives in the dream of his ego and desire. In the final scene, he wakes up for the first time. I loved the image of the shadows falling on the garden. He is finally dying, stripped of illusion. I found it to be Mishima at his most powerful. I agree about “Sailor”, that is a great novel and much more Japanese in its economy of expression. Now, Haruki Murakami is a world apart from Kawabata and Mishima. I love his use of the unconscious/Id as a place to inform and enthrall: the labyrinth of dreams. Most of his characters are trapped (at least part of the time) in this “place”: eg Kafka on the Shore, Windup Bird Chronicle, Hard-boiled Wonderland and End of the World, etc. Literature has to have room for all of them. 
I like the other Murakami, Ryu Murakami, whose “Audition” and “Famous Hits of the Shōwa Era” are dark, psychotic tales of unrestrained, escalating violence but redeemed by deep probing of unconscious, hidden motives (the inhuman work of the unconscious that guides the characters like the Greek sense of fate (Moira)) and occasional black humor."
 

PKS - What happens when we create a new namespace with NSX-T

Pas Apicella - Mon, 2018-09-17 07:02
I previously blogged about the integration between PKS and NSX-T in this post

http://theblasfrompas.blogspot.com/2018/09/pivotal-container-service-pks-with-nsx.html

In this post, let's look at what occurs within NSX-T when we create a new namespace in our K8s cluster.

1. List the K8s clusters we have available

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks clusters

Name    Plan Name  UUID                                  Status     Action
apples  small      d9f258e3-247c-4b4c-9055-629871be896c  succeeded  UPDATE

2. Fetch the cluster config for our cluster into our local Kubectl config

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks get-credentials apples

Fetching credentials for cluster apples.
Context set for cluster apples.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

3. Create a new Namespace for the K8s cluster as shown below

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl create namespace production
namespace "production" created

4. View the Namespaces in the K8s cluster

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl get ns
NAME          STATUS    AGE
default       Active    12d
kube-public   Active    12d
kube-system   Active    12d
production    Active    9s

Using the NSX-T manager, the first thing you will see is a new Tier 1 router created for the K8s namespace "production"



Let's view its configuration via the "Overview" screen


Finally, let's see the default "Logical Routes" as shown below



When we push workloads to the "production" namespace, this dynamically created configuration is what we get out of the box, allowing us to expose a "LoadBalancer" service as required across the Pods deployed within the namespace
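As a quick sketch, deploying into the new namespace only requires targeting it in the manifest (or with kubectl -n). The service name and selector below are hypothetical, and the kubectl calls are commented out because they need a live cluster:

```shell
# Hypothetical manifest: a LoadBalancer service in the "production" namespace.
cat > production-web.yml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: production
spec:
  type: LoadBalancer    # NSX-T provisions a Virtual Server for this dynamically
  ports:
  - port: 80
  selector:
    app: web
EOF

# Commented out: requires a live PKS cluster in your kubectl context.
# kubectl create -f production-web.yml
# kubectl -n production get svc web

echo "manifest written: production-web.yml"
```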

Categories: Fusion Middleware

Pivotal Container Service (PKS) with NSX-T on vSphere

Pas Apicella - Wed, 2018-09-05 06:15
It has taken some time, but I was finally able to officially test PKS with NSX-T rather than using Flannel.

While there is a bit of initial setup to install NSX-T and PKS and then ensure PKS networking uses NSX-T, rolling out multiple Kubernetes clusters with unique networking is greatly simplified by NSX-T. Here I am going to show what happens after pushing a workload to my PKS K8s cluster.

First, before we can do anything, we need the following...

Pre Steps

1. Ensure you have NSX-T setup and a dashboard UI as follows


2. Ensure you have PKS installed. In this example I have it installed on vSphere, which at the time of this blog is the only supported platform for using PKS with NSX-T



The PKS tile needs to be set up to use NSX-T, which is done on this page of the tile configuration



3. You can see from the NSX-T manager UI that we have a load balancer set up as shown below. Navigate to "Load Balancing -> Load Balancers"



And this Load Balancer is backed by a few "Virtual Servers", one for http (port 80) and the other for https (port 443), which can be seen when you select the Virtual Servers link


From here we have logical switches created for each of the Kubernetes namespaces. We see two for our load balancer, and the other three are for the three K8s namespaces (default, kube-public, kube-system)


Here is how we verify the namespaces we have in our K8s cluster

pasapicella@pas-macbook:~/pivotal $ kubectl get ns
NAME          STATUS    AGE
default       Active    5h
kube-public   Active    5h
kube-system   Active    5h

All of the logical switches are connected to the T0 Logical Router by a set of T1 Logical Routers


For these to be accessible, they are linked to the T0 Logical Router via a set of router ports



Now let's push a basic K8s workload and see what NSX-T and PKS give us out of the box...

Steps

Let's create our K8s cluster using the PKS CLI. You will need a PKS CLI user, which can be created by following this doc

https://docs.pivotal.io/runtimes/pks/1-1/manage-users.html

1. Login using the PKS CLI as follows

$ pks login -k -a api.pks.haas-148.pez.pivotal.io -u pas -p ****

2. Create a cluster as shown below

$ pks create-cluster apples --external-hostname apples.haas-148.pez.pivotal.io --plan small

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  In Progress

3. Wait for the cluster to be created, as follows

$ pks cluster apples

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  10.1.1.10

The PKS CLI is basically telling BOSH to go ahead and, based on the small plan, create a fully functional K8s cluster, from the VMs to all the processes that go along with them, and once it's up, keep it running in the event of failure.

Here is an example of one of the WORKER VMs of the cluster, shown in the vSphere Web Client



4. Using the following YAML file, let's push that workload to our K8s cluster

apiVersion: v1
kind: Service
metadata:
  labels:
    app: fortune-service
    deployment: pks-workshop
  name: fortune-service
spec:
  ports:
  - port: 80
    name: ui
  - port: 9080
    name: backend
  - port: 6379
    name: redis
  type: LoadBalancer
  selector:
    app: fortune
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: fortune
    deployment: pks-workshop
  name: fortune
spec:
  containers:
  - image: azwickey/fortune-ui:latest
    name: fortune-ui
    ports:
    - containerPort: 80
      protocol: TCP
  - image: azwickey/fortune-backend-jee:latest
    name: fortune-backend
    ports:
    - containerPort: 9080
      protocol: TCP
  - image: redis
    name: redis
    ports:
    - containerPort: 6379
      protocol: TCP

5. Push the workload as follows once the above YAML is saved to a file

$ kubectl create -f fortune-teller.yml
service "fortune-service" created
pod "fortune" created

6. Verify the PODS are running as follows

$ kubectl get all
NAME         READY     STATUS    RESTARTS   AGE
po/fortune   3/3       Running   0          35s

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                      AGE
svc/fortune-service   LoadBalancer   10.100.200.232   10.195.3.134   80:30591/TCP,9080:32487/TCP,6379:32360/TCP   36s
svc/kubernetes        ClusterIP      10.100.200.1     <none>         443/TCP                                      5h

Great, so now let's head back to our NSX-T manager UI and see what has been created. From the above output you can see an LB service is created and an external IP address assigned

7. The first thing you will notice in "Virtual Servers" is some new entries for each of our containers, as shown below


and ...


Finally, the LB we previously had in place shows our "Virtual Servers" added to its config and routable



More Information

Pivotal Container Service
https://docs.pivotal.io/runtimes/pks/1-1/

VMware NSX-T
https://docs.vmware.com/en/VMware-NSX-T/index.html
Categories: Fusion Middleware

PCF Platform Automation with Concourse (PCF Pipelines)

Pas Apicella - Mon, 2018-08-20 03:28
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

... and from there setting up Concourse

http://theblasfrompas.blogspot.com/2018/08/deploying-concourse-using-my-bubble.html

... of course this was created so I can now use the PCF Pipelines to deploy Pivotal Cloud Foundry's Pivotal Application Service (PAS). At a high level, here is how to achieve this, with some screenshots of the end result

Steps

1. To get started you would use this link as follows. In my example I was deploying PCF to AWS

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf

AWS Install Pipeline

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf/aws

2. Create a versioned bucket for holding terraform state. On AWS that will look as follows
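Creating the versioned bucket can also be scripted with the AWS CLI. The bucket name and region below are assumptions (S3 bucket names are globally unique, so pick your own), and the aws calls are commented out since they require AWS credentials:

```shell
# Assumed values; S3 bucket names are globally unique, so choose your own.
BUCKET=pcf-terraform-state-example
REGION=ap-southeast-2

# Commented out: requires AWS credentials.
# aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
#     --create-bucket-configuration LocationConstraint="$REGION"
# aws s3api put-bucket-versioning --bucket "$BUCKET" \
#     --versioning-configuration Status=Enabled

echo "terraform state bucket: $BUCKET"
```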


3. Unless you ensure the AWS pre-reqs are met you won't be able to install PCF, so this link highlights all that you will need for installing PCF on AWS, such as key pairs, limits, etc.

https://docs.pivotal.io/pivotalcf/2-1/customizing/aws.html

4. Create a public DNS zone and get its zone ID; we will need that when we set up the pipeline shortly. I also created a self-signed public certificate used for my DNS as part of the setup, which is required as well.
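The zone ID can also be looked up with the AWS CLI rather than the console. The domain below is a placeholder, and the aws call is commented out because it requires AWS credentials; note that Route 53 returns IDs prefixed with /hostedzone/, while only the trailing part is the zone ID itself:

```shell
DOMAIN=example.com   # placeholder; use the DNS zone you created

# Commented out: requires AWS credentials.
# aws route53 list-hosted-zones-by-name --dns-name "$DOMAIN" \
#     --query 'HostedZones[0].Id' --output text

# Strip the /hostedzone/ prefix from an example ID.
SAMPLE_ID="/hostedzone/Z0000000EXAMPLE"
echo "${SAMPLE_ID##*/}"
```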





5. At this point we can download the PCF Pipelines from network.pivotal.io or you can use the link as follows

https://network.pivotal.io/products/pcf-automation/



6. Once you have unzipped the file, change to the directory for the right IaaS, in my case "aws"

$ cd pcf-pipelines/install-pcf/aws


7. Change all of the CHANGEME values in params.yml to real values for your AWS env. This file is documented, so it is clear what you need to add and where. Most of the values are defaults, of course.

8. Login to concourse using the "fly" command line

$ fly --target pcfconcourse login  --concourse-url https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com -k

9. Add pipeline

$ fly -t pcfconcourse set-pipeline -p deploy-pcf -c pipeline.yml -l params.yml

10. Unpause pipeline

$ fly -t pcfconcourse unpause-pipeline -p deploy-pcf

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines/pcf-pipelines/install-pcf/aws$ fly -t pcfconcourse pipelines
name        paused  public
deploy-pcf  no      no

11. The pipeline on concourse will look as follows



12. Now to execute the pipeline you have to manually run 2 tasks

- Run bootstrap-terraform-state job manually




- Run create-infrastructure manually
 


At this point the pipeline will kick off automatically. If you need to re-run due to an issue, you can manually kick off the task after you fix what you need to fix. The "wipe-env" task will take everything for PAS down, and terraform removes all IaaS config as well.

While running each task, the current state is shown as per the image below


If successful, your AWS account will show the PCF VMs created, for example


Verifying that PCF installed correctly is best done using Pivotal Operations Manager, as shown below



More Information

https://network.pivotal.io/products/pcf-automation/


Categories: Fusion Middleware

Deploying concourse using my "Bubble" created Bosh director

Pas Apicella - Fri, 2018-08-17 23:27
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

Now with the bosh director deployed it's time to deploy concourse itself. The process is very straightforward as per the steps below

1. First let's clone the bosh concourse deployment using the GitHub project as follows



2. Target the bosh director and log in. We must set the ENV variables needed to connect to the AWS bosh director correctly, using "eval" as we did in the previous post. This will set all the ENV variables we need

$ eval "$(bbl print-env -s state)"
$ bosh alias-env aws-env
$ bosh -e aws-env log-in

3. At this point we need to set the external URL, which is essentially the load balancer we created when we deployed the Bosh Director in the previous post. To get that value, run the following command from the directory we deployed the bosh director from, as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bbl lbs -s state
Concourse LB: bosh-director-aws-concourse-lb [bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com]

4. Now let's set that ENV variable as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ export external_url=https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com
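Rather than copying the hostname by hand, the export can be derived from the `bbl lbs` output; a minimal sketch, assuming the bracketed-hostname format shown in step 3:

```shell
# Sketch: derive external_url from the `bbl lbs` output instead of copying it by hand.
# The sample line below mirrors the step 3 output; against a real environment you
# would capture it with: lbs_line="$(bbl lbs -s state)"
lbs_line="Concourse LB: bosh-director-aws-concourse-lb [bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com]"
# Extract the hostname between the square brackets and prefix the https scheme
lb_host="$(printf '%s' "$lbs_line" | sed -n 's/.*\[\(.*\)\].*/\1/p')"
export external_url="https://${lb_host}"
echo "$external_url"
```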

5. Now, from the cloned bosh concourse repository, change to the directory "concourse-bosh-deployment/cluster" as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ cd concourse-bosh-deployment/cluster

6. Upload stemcell as follows

$ bosh upload-stemcell light-bosh-stemcell-3363.69-aws-xen-hvm-ubuntu-trusty-go_agent.tgz

Verify:

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env stemcells
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name                                     Version  OS             CPI  CID
bosh-aws-xen-hvm-ubuntu-trusty-go_agent  3363.69  ubuntu-trusty  -    ami-0812e8018333d59a6

(*) Currently deployed

1 stemcells

Succeeded
 
7. Now let's deploy concourse with a command as follows. Make sure you set a password via "atc_basic_auth.password"

$ bosh deploy -d concourse concourse.yml \
  -l ../versions.yml \
  --vars-store cluster-creds.yml \
  -o operations/basic-auth.yml \
  -o operations/privileged-http.yml \
  -o operations/privileged-https.yml \
  -o operations/tls.yml \
  -o operations/tls-vars.yml \
  -o operations/web-network-extension.yml \
  -o operations/worker-ephemeral-disk.yml \
  --var network_name=default \
  --var external_url=$external_url \
  --var web_vm_type=default \
  --var db_vm_type=default \
  --var db_persistent_disk_type=10GB \
  --var worker_vm_type=default \
  --var deployment_name=concourse \
  --var web_network_name=private \
  --var web_network_vm_extension=lb \
  --var atc_basic_auth.username=admin \
  --var atc_basic_auth.password=..... \
  --var worker_ephemeral_disk=500GB_ephemeral_disk

8. Once deployed, verify the deployment and the VMs created as follows

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env deployments
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name       Release(s)          Stemcell(s)                                      Team(s)
concourse  concourse/3.13.0    bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3363.69  -
           garden-runc/1.13.1
           postgres/28

1 deployments

Succeeded
pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 32. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/db78de7f-55c5-42f5-bf9d-20b4ef0fd331      running        z1  10.0.16.5  i-04904fbdd1c7e829f  default  true
web/767b14c8-8fd3-46f0-b74f-0dca2c3b9572     running        z1  10.0.16.4  i-0e5f1275f635bd49d  default  true
worker/cde3ae19-5dbc-4c39-854d-842bbbfbe5cd  running        z1  10.0.16.6  i-0bd44407ec0bd1d8a  default  true

3 vms

Succeeded

9. Navigate to the LB URL we used above to access the concourse UI, using the username/password you set as per the deployment

https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com/


10. Finally we can see the Bosh Director and Concourse deployment VMs on our AWS EC2 instances page as follows



More Information

Categories: Fusion Middleware

bosh-bootloader or "Bubble" as pronounced and how to get started

Pas Apicella - Wed, 2018-08-15 06:50
I decided to try out installing bosh using the bosh-bootloader CLI today. bbl currently supports AWS, GCP, Microsoft Azure, Openstack and vSphere. In this example I started with AWS, but it won't be long until I try this on GCP

It's worth noting that this can all be done remotely from your laptop once you give BBL the access it needs for the cloud environment.

Steps

1. First you're going to need the bosh v2 CLI, which you can install from here

  https://bosh.io/docs/cli-v2/

Verify:

pasapicella@pas-macbook:~$ bosh -version
version 5.0.1-2432e5e9-2018-07-18T21:41:03Z

Succeeded

2. Second you will need Terraform. Being on a Mac, I use brew

$ brew install terraform

Verify:

pasapicella@pas-macbook:~$ terraform version
Terraform v0.11.7

3. Now we need to install bbl, which is done as follows on a Mac. I also show how to install the bosh CLI in case you missed step 1

$ brew tap cloudfoundry/tap
$ brew install bosh-cli
$ brew install bbl

Further instructions on this link

https://github.com/cloudfoundry/bosh-bootloader

4. At this point you're ready to deploy BOSH. The instructions for AWS are here

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

Pretty straightforward, but here is what I did at this point

5. In order for bbl to interact with AWS, an IAM user must be created. This user will be issuing API requests to create the infrastructure such as EC2 instances, load balancers, subnets, etc.

The user must have the following policy which I just copy into my clipboard to use later:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:*",
                "elasticloadbalancing:*",
                "cloudformation:*",
                "iam:*",
                "kms:*",
                "route53:*",
                "ec2:*"
            ],
            "Resource": "*"
        }
    ]
}


$ aws iam create-user --user-name "bbl-user"

This next command requires you to copy the policy JSON above

$ aws iam put-user-policy --user-name "bbl-user" --policy-name "bbl-policy" --policy-document "$(pbpaste)"

$ aws iam create-access-key --user-name "bbl-user"

You will get a JSON response at this point as follows. Save it, as it's used in the next few steps

{
    "AccessKey": {
        "UserName": "bbl-user",
        "Status": "Active",
        "CreateDate": "2018-08-07T03:30:39.993Z",
        "SecretAccessKey": ".....",
        "AccessKeyId": "........"
    }
}
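To avoid pasting the keys into later commands, they can be parsed from the saved JSON into environment variables; a sketch, where the file name access-key.json and the placeholder key values are assumptions for illustration (bbl also reads BBL_AWS_ACCESS_KEY_ID / BBL_AWS_SECRET_ACCESS_KEY, so the flags in step 6 could then be dropped — verify against your bbl version):

```shell
# Sketch: parse the create-access-key response into env vars rather than pasting keys inline.
# access-key.json and the placeholder values below are assumptions for illustration.
cat > access-key.json <<'EOF'
{"AccessKey": {"UserName": "bbl-user", "AccessKeyId": "AKIAEXAMPLE", "SecretAccessKey": "wJalrEXAMPLEKEY"}}
EOF
export BBL_AWS_ACCESS_KEY_ID="$(python3 -c 'import json; print(json.load(open("access-key.json"))["AccessKey"]["AccessKeyId"])')"
export BBL_AWS_SECRET_ACCESS_KEY="$(python3 -c 'import json; print(json.load(open("access-key.json"))["AccessKey"]["SecretAccessKey"])')"
echo "$BBL_AWS_ACCESS_KEY_ID"
```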

In the next step bbl will use these credentials to create infrastructure on AWS.

6. Now we can pave the infrastructure, create a jumpbox, and create a BOSH Director, as well as an LB, which I need as I plan to deploy concourse using BOSH.

$ bbl up --aws-access-key-id ..... --aws-secret-access-key ... --aws-region ap-southeast-2 --lb-type concourse --name bosh-director -d -s state --iaas aws

The process takes around 5-8 minutes.

The bbl state directory contains all of the files that were used to create your bosh director. This should be checked in to version control so that you have all the information necessary to destroy or update this environment at a later date.

7. Finally we target the bosh director as follows. Keep in mind everything we need is stored in the "state" directory as per above

$ eval "$(bbl print-env -s state)"

8. This will set various ENV variables which the bosh CLI will then use to target the bosh director. Now we just need to prepare ourselves to actually log in. I use a script as follows

target-bosh.sh

bbl director-ca-cert -s state > bosh.crt
export BOSH_CA_CERT=bosh.crt

export BOSH_ENVIRONMENT=$(bbl director-address -s state)

echo ""
echo "Username: $(bbl director-username -s state)"
echo "Password: $(bbl director-password -s state)"
echo ""
echo "Log in using -> bosh log-in"
echo ""

bosh alias-env aws-env

echo "ENV set to -> aws-env"
echo ""

Output when run, with the password omitted ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ ./target-bosh.sh

Username: admin
Password: ......

Log in using -> bosh log-in

Using environment 'https://10.0.0.6:25555' as client 'admin'

Name      bosh-bosh-director-aws
UUID      3ade0d28-77e6-4b5b-9be7-323a813ac87c
Version   266.4.0 (00000000)
CPI       aws_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin

Succeeded
ENV set to -> aws-env

9. Finally lets log-in as follows

$ bosh -e aws-env log-in

Output ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env log-in
Successfully authenticated with UAA

Succeeded

10. Last but not least, let's see what VMs bosh has under management. These VMs are for the concourse I installed. If you would like to install concourse use this link - https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/concourse.md

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 20. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/ec8aa978-1ec5-4402-9835-9a1cbce9c1e5      running        z1  10.0.16.5  i-0d33949ece572beeb  default  true
web/686546be-09d1-43ec-bbb7-d96bb5edc3df     running        z1  10.0.16.4  i-03af52f574399af28  default  true
worker/679be815-6250-477c-899c-b962076f26f5  running        z1  10.0.16.6  i-0efac99165e12f2e6  default  true

3 vms

Succeeded

More Information

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/howto-target-bosh-director.md


Categories: Fusion Middleware

Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information

A supplier of equipment, vessels, and services for offshore dredging and wet-mining markets, based in Europe with over 3,000 employees and 39 global locations, was struggling to get the most out of their enterprise business applications.

Business Problem

In 2012, the company started a transformation initiative, and as part of the project, they replaced most of their enterprise business applications.  The company had over 10 different business applications and wanted to provide employees with access to information through a single web experience or portal view. For example, the field engineers may need information for a ship’s parts from the PLM system (TeamCenter), as well as customer-specific cost information for parts from the company’s ERP system (IFS Applications). It was critical to the business that employees could quickly navigate, search, and view information regardless of where it is stored in the content management system. The company’s business is built from ships dredging, laying cable, etc., so the sooner field engineers are able to find information on servicing a broken part, the sooner the company is able to drive revenue.

Integrating Oracle WebCenter

The company chose Oracle WebCenter Portal because it had the best capabilities to integrate their various business systems, as well as its ability to scale. WebCenter enabled them to build a data integration portal that provided a single pane of glass to all enterprise information. Unfortunately, this single pane of glass did not perform as well as expected. The integrations, menu navigation, and the ability to render part drawings in the portal were all developed using Oracle Application Development Framework (Oracle ADF).  Oracle ADF is great for serving up content to WebCenter Portal using taskflows, but it requires a specialized development skill set. The company had limited Oracle ADF development resources, so each time a change or update was requested for the portal it took them weeks and sometimes months to implement the enhancement. Additionally, every change to the portal required a restart and these took in excess of forty minutes.

Platform Goals

The company wanted to shorten the time-to-market for portal changes, as well as reduce its dependency on and the overall development and design limitations with Oracle ADF. They wanted to modernize their portal and leverage a more designer-friendly, front-end development framework. They contacted Fishbowl Solutions after searching for Oracle WebCenter Portal partners and finding out about their single page application approach (SPA) to front-end portal development.

Fishbowl Solutions’ SPA for Oracle WebCenter Portal is a framework that overhauls the Oracle ADF UI with Oracle JET (JavaScript Extension Toolkit) or other front-end design technology such as Angular or React. The SPA framework includes components (taskflows) that act as progressive web applications and can be dropped onto pages from the portal resource catalog, meaning that no Oracle ADF development is necessary. Fishbowl’s SPA also enables portal components to be rendered on the client side with a single page load. This decreases the amount of processing being done on the portal application server, as well as how many times the page has to reload. This all leads to an improved experience for the user, as well as the ability for design and development teams to view changes or updates to the portal almost instantaneously.

Outcome

Fishbowl Solutions helped the company implement its SPA framework in under two weeks. Since the implementation, they have observed more return visits to the portal, as well as fewer support issues. They are also no longer constrained by the 40-minute portal restart after changes to the portal, and overall portal downtime has been significantly reduced. Lastly, Fishbowl’s SPA framework provided them with a go-forward design and development approach for portal projects, which will enable them to continue to evolve their portal to best serve their employees and customers alike.

The post Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Four Options for Creating Mindbreeze Search Interfaces

A well-designed search interface is a critical component of an engaging search experience. Mindbreeze provides a nice combination of both pre-built search apps and tools for customization. This post explores the following approaches to building a Mindbreeze search interface:

  • The Mindbreeze Default Search Client
  • The Mindbreeze Search App Designer
  • Custom Mindbreeze Web Applications
  • The Mindbreeze REST API
Option 1: The Mindbreeze Default Search Client Flexibility: Low | Development Effort: None

Mindbreeze includes a built-in search client which offers a feature-rich, mobile friendly, search interface out of the box. Built-in controls exist to configure filter facets, define suggestion sources, and enable or disable export. Features are enabled and disabled via the Client Service configuration interface within the Mindbreeze Management Center. The metadata displayed within the default client is determined by the value of the “visible” property set in the Category Descriptor for the respective data sources. Some of the Mindbreeze features exposed through the default client are not available via a designer-built search app (discussed in Option 2). These include saved searches, result groupings (i.e. summarize-by), the sort-by picker, sources filters, and tabs. Organizations that wish to use these features without much effort would be wise to consider the Mindbreeze Default Search Client.

In order to integrate the built-in client with a website or application, users are typically redirected from the primary website to the Mindbreeze client when performing a search. The default client is served directly from the search appliance and the query term can be passed in the URL from the website’s search box to the Mindbreeze client. Alternately, the built-in client can be embedded directly into a website using an iframe.

What is a Category Descriptor?

Mindbreeze uses an XML file called the Category Descriptor (categorydescriptor.xml) to control various aspects of both indexing and serving for each data source category (e.g. Web, SharePoint, Google Drive, etc.). Each category plugin includes a default Category Descriptor which can be extended or modified to meet your needs. Common modifications include adding localized display labels for metadata field names, boosting the overall impact of a metadata field on relevancy, and changing which fields are visible within the default search client.

Option 2: The Mindbreeze Search App Designer Flexibility: Moderate | Development Effort: None to Moderate

The Mindbreeze Search App Designer provides a drag-and-drop interface for creating modular, mobile-friendly, search applications. Some of the most popular modules include filters, maps, charts, and galleries. Many of these features are not enabled on the aforementioned default Client, so a search app is the easiest way to use them. This drag-and-drop configuration allows for layout adjustments, widget selection, and basic configurations without coding or technical knowledge. To further customize search apps, users can modify the mustache templates that control the rendering of each search widget within the search app. Common modifications include conditionally adjusting visible metadata, removing actions, or adding custom callouts or icons for certain result types. 

A key feature is the ability to export the code needed to embed a search app into a website or application from the Search Apps page in the Mindbreeze Management Center. That code can then be placed directly in a div or iframe on the target website eliminating the need to redirect users to the appliance. Custom CSS files may be used to style the results to match the rest of the website. Although you can add a search box directly to a search app, webpages usually have their own search box in the header. You can utilize query terms from an existing search box by passing them as a URL parameter where they will be picked up by the embedded search app.

Did you know? This website uses a search app for Mindbreeze-powered website search. For a deep-dive look at that integration, check out our blog post on How We Integrated this Website with Mindbreeze InSpire.

Option 3: Custom Mindbreeze Web Applications Flexibility: High | Development Effort: Low to Moderate

The default client mentioned in Option 1 can also be copied to create a new custom version of a Mindbreeze Web Application. The most common alteration is to add a reference to a custom CSS file which modifies the look and feel of the search results without changing the underlying data or DOM structure. This modification is easy and low risk. It is also very easy to isolate issues related to such a change, as you can always attempt to reproduce an issue using the default client without your custom CSS.

More substantial modifications to the application's index.html or JavaScript files can also be made to significantly customize and alter the behavior of the search experience. Examples include adding custom business logic to manipulate search constraints or applying dynamic boosting to alter relevancy at search time. Other Mindbreeze UI elements can also be added to customized web apps using Mindbreeze HTML building blocks; this includes many of the elements exposed through the search app Designer such as graphs, maps, and timelines. While these types of alterations require deeper technical knowledge than simply adding custom CSS, they are still often less effort than building a custom UI from scratch (as described in Option 4). These changes may require refactoring to be compatible with future versions or integrate new features over time, so this should be considered when implementing your results page.

Option 4: The Mindbreeze REST API Flexibility: High | Development Effort: Moderate to High

For customers seeking a more customized integration, the Mindbreeze REST API allows search results to be returned as JSON, giving you full control over their presentation. Custom search pages also allow for dynamic alterations to the query, constraints, or other parameters based on custom business logic. Filters, spelling suggestions, preview URLs, and other Mindbreeze features are all available in the JSON response, but it is up to the front-end developers to determine which features to render on the page, how to arrange them, and what styling to use. This approach allows for the most control and tightest integration with the containing site, but it is also the most effort. That said, the fact that custom search pages generally require the greatest effort does not mean selecting this option will always result in a lengthy deployment. In fact, one of our clients used the Mindbreeze API to power their custom search page and went from racking to go-live in 37 days.
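As an illustration of the request shape, here is a minimal sketch of calling such an endpoint with curl. The host name is hypothetical, and the endpoint path and the query.unparsed field follow the api.v2 search request format; verify both against your appliance's API documentation before relying on them:

```shell
# Sketch: build a minimal Mindbreeze-style JSON search request and (optionally) POST it.
# The host is hypothetical; check the endpoint path and fields against your appliance docs.
APPLIANCE="https://search.example.com"   # hypothetical appliance host
PAYLOAD='{ "query": { "unparsed": "dredging pump" }, "count": 5 }'
echo "$PAYLOAD"
# Uncomment to issue the request against a real appliance and pretty-print the JSON:
# curl -s -X POST "$APPLIANCE/api/v2/search" -H "Content-Type: application/json" -d "$PAYLOAD" | python3 -m json.tool
```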

Mindbreeze offers an excellent combination of built-in features with tools for extending capabilities when necessary. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us.

The post Four Options for Creating Mindbreeze Search Interfaces appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other
