Fusion Middleware

Command Line and Vim Tips from a Java Programmer

I’m always interested in learning more about useful development tools. In college, most programmers get an intro to the Linux command line environment, but I wanted to share some commands I use daily that I’ve learned since graduation.

Being comfortable on the command line is a great skill to have when a customer is looking over your shoulder on a Webex. They could be watching a software demo or deployment to their environment. It can also be useful when learning a new code base or working with a product with a large, unfamiliar directory structure with lots of logs.

If you’re on Windows, you can use Cygwin to get a Unix-like CLI to make these commands available.

Useful Linux Commands

Find

The command find helps you find files by recursively searching subdirectories. Here are some examples:

find .
    Prints all files and directories under the current directory.

find . -name '*.log'
  Prints all files and directories that end in “.log”.

find /tmp -type f -name '*.log'
   Prints only files in the directory “/tmp” that end in “.log”.

find . -type d
   Prints only directories.

find . -maxdepth 2
     Prints all files and directories under the current directory, and subdirectories (but not sub-subdirectories).

find . -type f -exec ls -la {} \;
  The -exec flag runs a command against each file instead of printing the name. In this example, it will run ls -la filename on each file under the current directory. The curly braces take the place of the filename.
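To get a feel for these, here is a small self-contained sketch you can run in a throwaway directory (the file and directory names are invented for the demo):

```shell
# Build a tiny directory tree to search (names are made up for the demo).
tmp=$(mktemp -d)
mkdir -p "$tmp/app/logs"
touch "$tmp/app/server.log" "$tmp/app/logs/access.log" "$tmp/app/readme.txt"

find "$tmp" -type f -name '*.log'        # the two .log files only
find "$tmp" -type d                      # the three directories only
find "$tmp" -maxdepth 1                  # nothing deeper than the first level
find "$tmp" -type f -exec ls -la {} \;   # run ls -la against each file

rm -rf "$tmp"
```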

Grep

The command grep lets you search text for lines that match a specific string. It can be helpful to add your initials to debug statements in your code and then grep for them to find them in the logs.

grep foo filename
  Prints each line in the file “filename” that matches the string “foo”.

grep 'foo\|bar' filename
  Grep supports regular expressions, so this prints each line in the file that matches “foo” or “bar”.

grep -i foo filename
  Add -i for case insensitive matching.

grep foo *
  Use the shell wildcard, an asterisk, to search all files in the current directory for the string “foo”.

grep -r foo *
  Recursively search all files and directories in the current directory for a string.

grep -rnH foo filename
  Add -n to print line numbers and -H to print the filename on each line.

find . -type f -name '*.log' -exec grep -nH foo {} \;
  Combining find and grep can let you easily search each file that matches a certain name for a string. This will print each line that matches “foo” along with the file name and line number in each file that ends in “.log” under the current directory.

ps -ef | grep processName
  The output of any command can be piped to grep, and the lines of STDOUT that match the expression will be printed. For example, you could use this to find the pid of a process with a known name.

cat file.txt | grep -v foo
  You can also use -v to print all lines that don’t match an expression.
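Here is a quick runnable tour of these flags against a small sample file (the file name and contents are invented for the demo):

```shell
# Create a sample log file to search (contents are made up for the demo).
tmp=$(mktemp -d)
printf 'foo one\nbar two\nFOO three\nbaz four\n' > "$tmp/sample.log"

grep foo "$tmp/sample.log"          # matches "foo one" only
grep -i foo "$tmp/sample.log"       # -i also matches "FOO three"
grep 'foo\|bar' "$tmp/sample.log"   # alternation: lines with foo or bar
grep -v foo "$tmp/sample.log"       # every line that does not contain "foo"
find "$tmp" -type f -name '*.log' -exec grep -nH foo {} \;   # find + grep

rm -rf "$tmp"
```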

Ln

The command ln lets you create links. I generally use this to create links in my home directory to quickly cd into long directory paths.

ln -s /some/really/long/path foo
  The -s is for symbolic, and the long path is the target. The output of ls -la in this case would be foo -> /some/really/long/path.
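A minimal sketch you can run to see this in action (all paths are invented for the demo):

```shell
# Create a "long" target path and a short symlink to it (paths are made up).
tmp=$(mktemp -d)
mkdir -p "$tmp/some/really/long/path"
ln -s "$tmp/some/really/long/path" "$tmp/foo"

readlink "$tmp/foo"    # prints the target the link points to
cd "$tmp/foo" && pwd   # cd follows the link into the long path
```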

Bashrc

The Bashrc is a shell script that gets executed whenever Bash is started in an interactive terminal. It is located in your home directory, ~/.bashrc. It provides a place to edit your $PATH, $PS1, or add aliases and functions to simplify commonly used tasks.

Aliases are a way you can define your own command line commands. Here are a couple of useful aliases I’ve added to my .bashrc that have saved a lot of keystrokes on a server where I’ve installed Oracle WebCenter:

WC_DOMAIN=/u01/oracle/fmw/user_projects/domains/wc_domain
alias assets="cd /var/www/html"
alias portalLogs="cd $WC_DOMAIN/servers/WC_Spaces/logs"
alias domain="cd $WC_DOMAIN"
alias components="cd $WC_DOMAIN/ucm/cs/custom"
alias rpl="portalLogs; vim -R WC_Spaces.out"

After making changes to your .bashrc, you can load them with source ~/.bashrc. Now I can type rpl, short for Read Portal Logs, from anywhere to quickly jump into the WebCenter portal log file.

alias grep="grep --color"

This grep alias adds the --color option to all of my grep commands. All of the above grep commands still work, but now all of the matches will be highlighted.
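One caveat worth noting: aliases are only expanded in interactive shells, so they won’t take effect inside scripts. Shell functions behave like aliases but also work in scripts and can accept arguments. A sketch using the same WebCenter paths as above (cdd is a hypothetical helper, not from the original aliases):

```shell
# The same shortcuts written as functions instead of aliases.
WC_DOMAIN=/u01/oracle/fmw/user_projects/domains/wc_domain

portalLogs() { cd "$WC_DOMAIN/servers/WC_Spaces/logs"; }
rpl()        { portalLogs && vim -R WC_Spaces.out; }

# Unlike aliases, functions can take arguments (cdd is hypothetical):
cdd() { cd "$WC_DOMAIN/$1"; }
```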

Vim

Knowing Vim key bindings can be convenient and efficient if you’re already working on the command line. Vim has many built-in shortcuts to make editing files quick and easy.

Run vim filename.txt to open a file in Vim. Vim starts in Normal Mode, where most characters have a special meaning, and typing a colon, :, lets you run Vim commands. For example, typing Shift-G will jump to the end of the file, and typing :q while in Normal Mode will quit Vim. Here is a list of useful commands:

:q
  Quits Vim

:w
  Write the file (save)

:wq
  Write and quit

:q!
  Quit and ignore warnings that you didn’t write the file

:wq!
  Write and quit, ignoring permission warnings

i
  Enter Insert Mode where you can edit the file like a normal text editor

a
  Enter Insert Mode and place the cursor after the current character

o
  Insert a blank line after the current line and enter Insert Mode

[escape]
  The escape button exits insert mode

:150
  Jump to line 150

shift-G
  Jump to the last line

gg
  Jump to the first line

/foo
  Search for the next occurrence of “foo”. Regex patterns work in the search.

?foo
  Search for the previous occurrence of “foo”

n
  Go to the next match

N
  Go to the previous match

*
  Search for the next occurrence of the word under the cursor

#
  Search for the previous occurrence of the word under the cursor

w
  Jump to the next word

b
  Jump to the previous word

``
  Jump back to the position before the last jump

dw
  Delete the word starting at the cursor

cw
  Delete the word starting at the cursor and enter insert mode

c$
  Delete everything from the cursor to the end of the line and enter insert mode

dd
  Delete the current line

D
  Delete everything from the cursor to the end of the line

u
  Undo the last action

ctrl-r
  Redo the last action

d[up]
  Delete the current line and the line above it. “[up]” is for the up arrow.

d[down]
  Delete the current line and the line below it

d3[down]
  Delete the current line and the three lines below it

r[any character]
  Replace the character under the cursor with another character

~
  Toggle the case (upper or lower) of the character under the cursor

v
  Enter Visual Mode. Use the arrow keys to highlight text.

shift-V
  Enter Visual Mode and highlight whole lines at a time.

ctrl-v
  Enter Visual Mode but highlight blocks of characters.

=
  While in Visual Mode, = will auto format highlighted text.

c
  While in Visual Mode, c will cut the highlighted text and enter Insert Mode.

y
  While in Visual Mode, y will yank (copy) the highlighted text.

p
  In Normal Mode, p will paste the text in the buffer (that’s been yanked or cut).

yw
  Yank the text from the cursor to the end of the current word.

:sort
  Highlight lines in Visual Mode, then use this command to sort them alphabetically.

:s/foo/bar/g
  Highlight lines in Visual Mode, then use search and replace to replace all instances of “foo” with “bar”.

:s/^/#/
  Highlight lines in Visual Mode, then add # at the start of each line. This is useful to comment out blocks of code.

:s/$/;/
  Highlight lines in Visual Mode, then add a semicolon at the end of each line.

:set paste
  This will turn off auto indenting. Use it before pasting into Vim from outside the terminal (you’ll want to be in insert mode before you paste).

:set nopaste
  Make auto indenting return to normal.

:set nu
  Turn on line numbers.

:set nonu
  Turn off line numbers.

:r!pwd
  Read the output of a command into Vim. In this example, we’ll read in the current directory.

:r!sed -n 5,10p /path/to/file
  Read lines 5 through 10 from another file in Vim. This can be a good way to copy and paste between files in the terminal.

:[up|down]
  Type a colon and then use the arrow keys to browse through your command history. If you type letters after the colon, it will only cycle through commands that start with those letters (e.g., :se and then up would find “:set paste” quickly).
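The substitution and sort commands above also have command-line counterparts, which is handy for applying the same edit non-interactively. A sketch using sed and sort (the sample file contents are invented for the demo):

```shell
# Command-line analogues of the Vim :s and :sort commands above.
tmp=$(mktemp -d)
printf 'foo line\nanother foo\n' > "$tmp/code.txt"

sed 's/foo/bar/g' "$tmp/code.txt"   # like :s/foo/bar/g on every line
sed 's/^/#/'      "$tmp/code.txt"   # like :s/^/#/  (comment out lines)
sed 's/$/;/'      "$tmp/code.txt"   # like :s/$/;/  (append semicolons)
sort "$tmp/code.txt"                # like :sort on highlighted lines

rm -rf "$tmp"
```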

Vimrc

The Vimrc is a configuration file that Vim loads whenever it starts up, similar to the Bashrc. It is located in your home directory at ~/.vimrc.

Here is a basic Vimrc I’d recommend for getting started if you don’t have one already. Run vim ~/.vimrc and paste in the following:

set backspace=2         " backspace in insert mode works like normal editor
syntax on               " syntax highlighting
filetype indent on      " activates indenting for files
set autoindent          " auto indenting
set number              " line numbers
colorscheme desert      " use the desert color scheme
set listchars=tab:>-,trail:.,extends:>,precedes:<
set list                " Set up whitespace characters
set ic                  " Ignore case by default in searches
set statusline+=%F      " Show the full path to the file
set laststatus=2        " Make the status line always visible

 

Perl

Perl comes installed by default on most Linux distributions, so it is worth mentioning its extensive command line capabilities. If you have ever tried to grep for a string that matches a line in a minified JavaScript file, you can probably see the benefit of being able to filter out lines longer than 500 characters.

grep -r foo * | perl -nle'print if 500 > length'
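As a quick sanity check, here is that filter run against generated input: a short line containing “foo” and a 600-character “minified” line that the filter should drop (the file name and contents are invented for the demo):

```shell
# One short matching line and one 600-character matching line.
tmp=$(mktemp -d)
{ echo 'foo in a short line'; printf 'foo%.0s' $(seq 1 200); echo; } > "$tmp/app.js"

# Only the short match survives the length filter.
grep -r foo "$tmp" | perl -nle 'print if 500 > length'

rm -rf "$tmp"
```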

Conclusion

I love learning the tools that are available in my development environment, and it is exciting to see how they can help customers as well.

Recently, I was working with a customer and we were running into SSL issues. Java processes can be run with the option -Djavax.net.ssl.trustStore=/path/to/trustStore.jks to specify which keystore to use for SSL certificates. It was really easy to run ps -ef | grep trustStore to quickly identify which keystore we needed to import certificates into.

I’ve also been able to use various find and grep commands to search through unfamiliar directories after exporting metadata from Oracle’s MDS Repository.

Even if you aren’t on the command line, I’d encourage everyone to learn something new about their development environment. Feel free to share your favorite Vim and command line tips in the comments!

Further reading

http://www.vim.org/docs.php

https://www.gnu.org/software/bash/manual/bash.html

http://perldoc.perl.org/perlrun.html

The post Command Line and Vim Tips from a Java Programmer appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Webinar Recording: Ryan Companies Leverages Fishbowl’s ControlCenter for Oracle WebCenter to Enhance Document Control Leading to Improved Knowledge Management

On Thursday, December 8th, Fishbowl had the privilege of presenting a webinar with Mike Ernst – VP of Construction Operations – at Ryan Companies regarding their use case for Fishbowl’s ControlCenter product for controlled document management. Mike was joined by Fishbowl’s ControlCenter product manager, Kim Negaard, who provided an overview of how the solution was implemented and how it is being used at Ryan.

Ryan Companies had been using Oracle WebCenter for many years, but they were looking for some additional document management functionality and a more intuitive interface to help improve knowledge management at the company. Their main initiative was to make it easier for users to access and manage their corporate knowledge documents (policies and procedures), manuals (safety), and real estate documents (leases) throughout each document’s life cycle.

Mike provided some interesting stats that factored into their decision to implement ControlCenter for WebCenter:

  • $16k – the average cost of “reinventing” procedures per project (ex. checklists and templates)
  • $25k – the average cost of estimating incorrect labor rates
  • 3x – salary to onboard someone new when an employee leaves the company

To hear more about how Ryan found knowledge management success with ControlCenter for WebCenter, watch the webinar recording: https://youtu.be/_NNFRV1LPaY


Spring Boot / Feign Client accessing external service

Pas Apicella - Thu, 2016-12-08 17:49
Previously we used Feign to create clients for our own services, which are registered on our Eureka Server using a service name, as shown in the previous blog post http://theblasfrompas.blogspot.com.au/2016/11/declarative-rest-client-feign-with_8.html. It's not unusual that you'd want to invoke an external REST endpoint, basically an endpoint that's not discoverable by Eureka. In that case, you can use the url property on the @FeignClient annotation, which gracefully supports property injection. Here's an example of this.

Full example on GitHub as follows

https://github.com/papicella/FeignClientExternalSpringBoot

1. Start by adding the correct Maven dependencies. The one you need is as follows; there would be others if you want to use a web-based Spring Boot project, etc.
  
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-feign</artifactId>
</dependency>

2. We are going to consume this external service as follows

http://country.io/names.json

To do that we create a simple interface as follows
  
package pas.au.pivotal.feign.external;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(name = "country-service-client", url = "http://country.io")
public interface CountryServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/names.json")
    String getCountries();
}

3. In this example I have created a RestController to consume this REST service and test it, because it's the easiest way to do so. We simply autowire the CountryServiceClient interface into the RestController to make those external calls through Feign.
  
package pas.au.pivotal.feign.external.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import pas.au.pivotal.feign.external.CountryServiceClient;

import java.util.Map;

@RestController
public class CountryRest
{
    private static final Logger logger = LoggerFactory.getLogger(CountryRest.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    @Autowired
    private CountryServiceClient countryServiceClient;

    @RequestMapping(value = "/countries", method = RequestMethod.GET,
                    produces = "application/json")
    public String allCountries()
    {
        return countryServiceClient.getCountries();
    }

    @RequestMapping(value = "/country_names", method = RequestMethod.GET)
    public String[] countryNames()
    {
        String countries = countryServiceClient.getCountries();

        Map<String, Object> countryMap = parser.parseMap(countries);

        String[] countryArray = new String[countryMap.size()];
        logger.info("Size of countries " + countryArray.length);

        int i = 0;
        for (Map.Entry<String, Object> entry : countryMap.entrySet()) {
            countryArray[i] = (String) entry.getValue();
            i++;
        }

        return countryArray;
    }
}

4. Of course we will have our main class to bootstrap the application, and the project includes the "spring-boot-starter-web" Maven dependency to start a Tomcat server for us.
  
package pas.au.pivotal.feign.external;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
public class FeignClientExternalSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(FeignClientExternalSpringBootApplication.class, args);
    }
}

5. Ensure your application.properties or application.yml has the following properties to disable timeouts.

feign:
  hystrix:
    enabled: false

hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false

6. Run the main class "FeignClientExternalSpringBootApplication"

Access as follows

http://localhost:8080/countries






Webinar: Quality, Safety, Knowledge Management with Oracle WebCenter Content and ControlCenter

DATE: THURSDAY, DECEMBER 8, 2016
TIME: 10:00 A.M. PST / 1:00 P.M. EST

Join Ryan Companies Vice President of Construction Operations, Mike Ernst, and Fishbowl Solutions Product Manager, Kim Negaard, to learn how Ryan Companies, a leading national construction firm, found knowledge management success with ControlCenter for Oracle WebCenter Content.

In this webinar, you’ll hear first-hand how ControlCenter has been implemented as part of Ryan’s Integrated Project Delivery Process helping them create a robust knowledge management system to promote consistent and effective operations across multiple regional offices. You’ll also learn how ControlCenter’s intuitive, modern user experience enabled Ryan to easily find documents across devices, implement reoccurring review cycles, and control both company-wide and project-specific documents throughout their lifecycle.

Register today.



Deploying Spring Boot Applications on Google Application Engine (GAE)

Pas Apicella - Tue, 2016-11-22 02:07
I previously blogged about how to how to deploy a Spring Boot application to Flexible VM's on Google Cloud Platform as shown below.

http://theblasfrompas.blogspot.com.au/2016/09/spring-boot-on-google-cloud-platform-gcp.html

In this example below I use Google Application Engine (GAE) to deploy a Spring Boot application without using a flexible VM, which is a lot faster and what I originally wanted to do when I did this previously. In short, this is using the [Standard environment] option for GAE.

Spring Boot uses Servlet 3.0 APIs to initialize the ServletContext (register Servlets etc.) so you can’t use the same application out of the box in a Servlet 2.5 container. It is however possible to run a Spring Boot application on an older container with some special tools. If you include org.springframework.boot:spring-boot-legacy as a dependency (maintained separately to the core of Spring Boot and currently available at 1.0.2.RELEASE), all you should need to do is create a web.xml and declare a context listener to create the application context and your filters and servlets. The context listener is a special purpose one for Spring Boot, but the rest of it is normal for a Spring application in Servlet 2.5

Visit for more Information:

   http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-servlet-2-5 

Steps

1. In order to use Servlet 2.5 and a web.xml we will need to add the spring-boot-legacy dependency to a local Maven repository as shown below.

$ git clone https://github.com/scratches/spring-boot-legacy
$ cd spring-boot-legacy
$ mvn install

2. Clone the Git repo as shown below

$ git clone https://github.com/papicella/GoogleAppEngineSpringBoot.git

3. Edit the file ./src/main/webapp/WEB-INF/appengine-web.xml to specify the correct APPLICATION ID, which we will also target in step 5.
  
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>fe-papicella</application>
    <version>5</version>
    <threadsafe>true</threadsafe>
    <manual-scaling>
        <instances>1</instances>
    </manual-scaling>
</appengine-web-app>

4. Package as shown below

$ mvn package

5. Target your project for deployment as follows

pasapicella@pas-macbook:~/piv-projects/GoogleAppEngineSpringBoot$ gcloud projects list
PROJECT_ID              NAME                    PROJECT_NUMBER
bionic-vertex-150302    AppEngineSpringBoot     97889500330
fe-papicella            FE-papicella            1049163203721
pas-spring-boot-on-gcp  Pas Spring Boot on GCP  1043917887789

pasapicella@pas-macbook:~/piv-projects/GoogleAppEngineSpringBoot$ gcloud config set project fe-papicella
Updated property [core/project].

6. Deploy as follows

mvn appengine:deploy

Finally, once deployed, you can access your application using its endpoint, which is displayed in the dashboard of the GCP console.





Project in IntelliJ IDEA




NOTE: Google AppEngine does not allow JMX, so you have to switch it off in a Spring Boot app (set spring.jmx.enabled=false in application.properties).

application.properties

spring.jmx.enabled=false

More Information

Full working example with code as follows on GitHub

https://github.com/papicella/GoogleAppEngineSpringBoot

Uploading Tiles into Pivotal Cloud Foundry Operations Manager from the Ops Manager VM directly

Pas Apicella - Fri, 2016-11-18 00:15
When deploying PCF, you start by deploying Ops Manager. This is basically a VM that you deploy into your IaaS system of choice and it orchestrates the PCF installation. The installation of PCF is done by you through a web interface that runs on the Ops Manager VM. Into that web interface, you can load various "tiles". Each tile provides a specific set of functionality.

For example, Ops Manager comes with a tile for Bosh Director. This is the only out-of-the-box tile, as all the other tiles depend on it. Most users will first install the PCF tile. This provides the Cloud Foundry installation. After that, tiles generally provide functionality for services. Popular tiles include MySQL, RabbitMQ and Redis. There are quite a few tiles in total now, you can see them all listed on https://network.pivotal.io.



Some tiles are quite large; for example, the "Elastic Runtime" tile in PCF 1.8 is 5G, so from Australia I don't want to download a 5G file to my laptop and then upload it into the Ops Manager Web UI. Here is how you can import tiles directly from the Ops Manager VM itself.

1. Log into the Ops Manager VM using SSH with your keyfile.

Note: 0.0.0.0 is a bogus ip address for obvious reasons

pasapicella@pas-macbook:~/pivotal/GCP/install/ops-manager-key$ ssh -i ubuntu-key ubuntu@0.0.0.0
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-47-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Nov 16 23:36:27 UTC 2016

  System load:  0.0                Processes:           119
  Usage of /:   36.4% of 49.18GB   Users logged in:     0
  Memory usage: 37%                IP address for eth0: 10.0.0.0
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

Your Hardware Enablement Stack (HWE) is supported until April 2019.

Last login: Wed Nov 16 23:36:30 2016 from 0.0.0.0
ubuntu@myvm-gcp:~$

2. Log into https://network.pivotal.io/ and click on "Edit Profile" as shown below


3. Locate your "API token" and record it; we will need it shortly.

4. In this example I am uploading the "Pivotal Cloud Foundry Elastic Runtime" tile so navigate to the correct file and select the "i" icon to reveal the API endpoint for the tile.


5. Issue a wget command with the following format. This will download the 5G file into the HOME directory. Wait for this to complete before moving to the next step.

wget -O {file-name} --post-data="" --header="Authorization: Token {TOKEN-FROM-STEP-3}" {API-LOCATION-URL}

$ wget -O cf-1.8.14-build.7.pivotal --post-data="" --header="Authorization: Token {TOKEN-FROM-STEP-3}" https://network.pivotal.io/api/v2/products/elastic-runtime/releases/2857/product_files/9161/download

6. Retrieve an access token, which requires the username/password for the Ops Manager admin account.

curl -s -k -H 'Accept: application/json;charset=utf-8' -d 'grant_type=password' -d 'username=admin' -d 'password=OPSMANAGER-ADMIN-PASSWD' -u 'opsman:' https://localhost/uaa/oauth/token

$ curl -s -k -H 'Accept: application/json;charset=utf-8' -d 'grant_type=password' -d 'username=admin' -d 'password=welcome1' -u 'opsman:' https://localhost/uaa/oauth/token
{"access_token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImxlZ2Fj ...... "
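If you script these steps, the access token can be pulled out of the UAA response without leaving the shell. A minimal sketch using sed against an invented sample response (a real token is much longer):

```shell
# Extract access_token from a UAA token response (sample JSON is invented).
response='{"access_token":"eyJhbGciOiJSUzI1NiJ9.sample","token_type":"bearer"}'
token=$(printf '%s' "$response" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$token"

# Then step 7 becomes: curl -H "Authorization: Bearer $token" 'https://localhost/api/products' ...
```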

7. Finally, upload the tile so it can be imported from the Ops Manager UI, using a format as follows. Make sure you use the correct file name as per the download in step 5.

curl -v -H "Authorization: Bearer STEP6-ACCESS-TOKEN" 'https://localhost/api/products' -F 'product[file]=@/home/ubuntu/cf-1.8.14-build.7.pivotal'  -X POST -k

Once complete you should see the tile in Ops Manager as shown below. This is a much faster way to upload tiles, especially from Australia.



More Information

https://docs.pivotal.io/pivotalcf/1-8/customizing/pcf-interface.html

Installing Pivotal Cloud Foundry (PCF) on Google Cloud Platform (GCP)

Pas Apicella - Wed, 2016-11-16 21:50
I decided to install PCF 1.8 onto Google Cloud Platform today, and I thought the experience was very straightforward. The GCP Console is fantastic and very powerful indeed. The steps to install it are as follows:

http://docs.pivotal.io/pivotalcf/1-8/customizing/gcp.html

Here are some screen shots you would expect to see along the way when using Operations Manager

Screen Shots 










Finally, once installed, here is how to create an org and a user and get started using the CLI. You will note you must log in as admin to get started, and finally I log in as the user who will be the OrgManager.

** Target my PCF Instance **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf api https://api.system.pas-apples.online --skip-ssl-validation
Setting api endpoint to https://api.system.pas-apples.online...
OK


API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
Not logged in. Use 'cf login' to log in.

** Login as ADMIN **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf login -u admin -p YYYY -o system -s system
API endpoint: https://api.system.pas-apples.online
Authenticating...
OK

Targeted org system

Targeted space system

API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
User:           admin
Org:            system
Space:          system

** Create Org **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf create-org gcp-pcf-org
Creating org gcp-pcf-org as admin...
OK

Assigning role OrgManager to user admin in org gcp-pcf-org ...
OK

TIP: Use 'cf target -o gcp-pcf-org' to target new org

** Create a USER **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf create-user pas YYYY
Creating user pas...
OK

TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'

** Set ORG Role **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf set-org-role pas gcp-pcf-org OrgManager
Assigning role OrgManager to user pas in org gcp-pcf-org as admin...
OK

** Target the newly created ORG **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf target -o gcp-pcf-org

API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
User:           admin
Org:            gcp-pcf-org
Space:          No space targeted, use 'cf target -s SPACE'

** Create a SPACE **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf create-space development
Creating space development in org gcp-pcf-org as admin...
OK
Assigning role RoleSpaceManager to user admin in org gcp-pcf-org / space development as admin...
OK
Assigning role RoleSpaceDeveloper to user admin in org gcp-pcf-org / space development as admin...
OK

TIP: Use 'cf target -o "gcp-pcf-org" -s "development"' to target new space

** Set Some Space Roles **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf set-space-role pas gcp-pcf-org development SpaceDeveloper
Assigning role RoleSpaceDeveloper to user pas in org gcp-pcf-org / space development as admin...
OK
pasapicella@pas-macbook:~/pivotal/GCP/install$ cf set-space-role pas gcp-pcf-org development SpaceManager
Assigning role RoleSpaceManager to user pas in org gcp-pcf-org / space development as admin...
OK

** Login as PAS user and target the correct ORG/SPACE **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf login -u pas -p YYYY -o gcp-pcf-org -s development
API endpoint: https://api.system.pas-apples.online
Authenticating...
OK

Targeted org gcp-pcf-org

Targeted space development

API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
User:           pas
Org:            gcp-pcf-org
Space:          development

Let's push a simple application

Application manifest.yml

pasapicella@pas-macbook:~/piv-projects/PivotalSpringBootJPA$ cat manifest-inmemory-db.yml
applications:
- name: pas-albums
  memory: 512M
  instances: 1
  random-route: true
  path: ./target/PivotalSpringBootJPA-0.0.1-SNAPSHOT.jar
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urando

Deploy

pasapicella@pas-macbook:~/piv-projects/PivotalSpringBootJPA$ cf push -f manifest-inmemory-db.yml
Using manifest file manifest-inmemory-db.yml

Creating app pas-albums in org gcp-pcf-org / space development as pas...
OK

Creating route pas-albums-gloomful-synapse.apps.pas-apples.online...
OK

Binding pas-albums-gloomful-synapse.apps.pas-apples.online to pas-albums...
OK

Uploading pas-albums...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app341113312
Uploading 31.6M, 195 files
Done uploading
OK

Starting app pas-albums in org gcp-pcf-org / space development as pas...

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started

OK

App pas-albums was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djava.security.egd=file:///dev/urando" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher`

Showing health and status for app pas-albums in org gcp-pcf-org / space development as pas...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-albums-gloomful-synapse.apps.pas-apples.online
last uploaded: Thu Nov 17 03:39:04 UTC 2016
stack: cflinuxfs2
buildpack: java-buildpack=v3.8.1-offline-https://github.com/cloudfoundry/java-buildpack.git#29c79f2 java-main java-opts open-jdk-like-jre=1.8.0_91-unlimited-crypto open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu      memory           disk         details
#0   running   2016-11-17 02:39:57 PM   142.6%   333.1M of 512M   161M of 1G

Get Route to Application

pasapicella@pas-macbook:~/piv-projects/PivotalSpringBootJPA$ cf apps
Getting apps in org gcp-pcf-org / space development as pas...
OK

name         requested state   instances   memory   disk   urls
pas-albums   started           1/1         512M     1G     pas-albums-gloomful-synapse.apps.pas-apples.online

More Information

https://cloud.google.com/solutions/cloud-foundry-on-gcp
Categories: Fusion Middleware

Accessing the Cloud Foundry REST API from SpringBoot

Pas Apicella - Mon, 2016-11-14 17:43
Accessing the Cloud Foundry REST API is straightforward; in the example below, using curl, we list all of our organizations.

Cloud Foundry REST API - https://apidocs.cloudfoundry.org/246/

The output below shows just the organization names; I am filtering on that using jq. If you want to see all of the output, remove the pipe to jq. You have to be logged in to use "cf oauth-token".

pasapicella@pas-macbook:~/apps$ curl -k "https://api.run.pivotal.io/v2/organizations" -X GET -H "Authorization: `cf oauth-token`" | jq -r ".resources[].entity.name"

APJ
apples-pivotal-org
Suncorp

In the example below I will show how to invoke this REST API using Spring's RestTemplate.

1. First, we need to retrieve a bearer token, as it is required for all calls into the CF REST API. The code below retrieves it for us using RestTemplate.
  
package com.pivotal.platform.pcf;

import org.apache.tomcat.util.codec.binary.Base64;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

import java.util.Arrays;
import java.util.Map;

public class Utils
{
    private final static String username = "papicella@pivotal.io";
    private final static String password = "PASSWORD";
    private static final Logger log = LoggerFactory.getLogger(Utils.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    public static String getAccessToken ()
    {
        String uri = "https://login.run.pivotal.io/oauth/token";
        String data = "username=%s&password=%s&client_id=cf&grant_type=password&response_type=token";
        RestTemplate restTemplate = new RestTemplate();

        // HTTP POST call with form-encoded data

        HttpHeaders headers = new HttpHeaders();

        headers.add("Authorization", "Basic " + encodePassword());
        headers.add("Content-Type", "application/x-www-form-urlencoded");

        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));

        String postArgs = String.format(data, username, password);

        HttpEntity<String> requestEntity = new HttpEntity<String>(postArgs, headers);

        String response = restTemplate.postForObject(uri, requestEntity, String.class);

        Map<String, Object> jsonMap = parser.parseMap(response);

        String accessToken = (String) jsonMap.get("access_token");

        return accessToken;
    }

    private static String encodePassword()
    {
        // Basic auth credentials are the "cf" client id with an empty secret
        String auth = "cf:";
        byte[] plainCredsBytes = auth.getBytes();
        byte[] base64CredsBytes = Base64.encodeBase64(plainCredsBytes);
        return new String(base64CredsBytes);
    }
}
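For reference, the Basic credentials built by encodePassword() are just the public "cf" client id with an empty secret, Base64 encoded. The same value can be produced with the JDK's own java.util.Base64, avoiding the Tomcat codec class; this is a standalone sketch, not the code the app uses:

```java
import java.util.Base64;

public class BasicAuthSketch
{
    // Mirrors encodePassword() above using only the JDK
    static String encodePassword()
    {
        String auth = "cf:"; // client id "cf" with an empty client secret
        return Base64.getEncoder().encodeToString(auth.getBytes());
    }

    public static void main(String[] args)
    {
        // The UAA token endpoint expects "Authorization: Basic Y2Y6"
        System.out.println("Basic " + encodePassword());
    }
}
```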

Achieving the same thing with curl looks as follows; I have stripped the actual bearer token from the output as it is a lot of text.

pasapicella@pas-macbook:~$ curl -v -XPOST -H "Application/json" -u "cf:" --data "username=papicella@pivotal.io&password=PASSWORD&client_id=cf&grant_type=password&response_type=token" https://login.run.pivotal.io/oauth/token

...

{"access_token":"YYYYYYYYYYY ....","token_type":"bearer","refresh_token":"3dd9a2b63f3640c38eb8220e2ae88dfc-r","expires_in":599,"scope":"openid uaa.user cloud_controller.read password.write cloud_controller.write","jti":"c3706c86e376445686a0dd289262bbfa"}

2. Once we have the bearer token, we can make calls to the CF REST API as shown below. The code first obtains the bearer token, then makes the API call, and then we are free to shape the output however we want. The method "getAllApps" simply returns the raw JSON output, while "getAllOrgs" strips out what we don't need and adds each organization to a list of POJOs defining exactly what we want to return.
  
package com.pivotal.platform.pcf;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.pivotal.platform.pcf.beans.Organization;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.*;

@RestController
public class CFRestAPISpringBoot
{
    private RestTemplate restTemplate = new RestTemplate();
    private static final Logger log = LoggerFactory.getLogger(CFRestAPISpringBoot.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    @RequestMapping(value = "/cf-apps", method = RequestMethod.GET, path = "/cf-apps")
    public String getAllApps ()
    {
        String uri = "https://api.run.pivotal.io/v2/apps";

        String accessToken = Utils.getAccessToken();

        // Make CF REST API call for Applications
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", String.format("Bearer %s", accessToken));
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));

        HttpEntity entity = new HttpEntity(headers);

        log.info("CF REST API Call - " + uri);

        HttpEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET, entity, String.class);

        return response.getBody();
    }

    @RequestMapping(value = "/cf-orgs", method = RequestMethod.GET, path = "/cf-orgs")
    public List<Organization> getAllOrgs ()
    {
        String uri = "https://api.run.pivotal.io/v2/organizations";

        String accessToken = Utils.getAccessToken();

        // Make CF REST API call for Organizations
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", String.format("Bearer %s", accessToken));
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));

        HttpEntity entity = new HttpEntity(headers);

        log.info("CF REST API Call - " + uri);
        HttpEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET, entity, String.class);

        log.info(response.getBody());

        Map<String, Object> jsonMap = parser.parseMap(response.getBody());

        List<Object> resourcesList = (List<Object>) jsonMap.get("resources");
        ObjectMapper mapper = new ObjectMapper();
        ArrayList<Organization> orgs = new ArrayList<Organization>();

        for (Object item : resourcesList)
        {
            Map map = (Map) item;

            Iterator entries = map.entrySet().iterator();

            while (entries.hasNext())
            {
                Map.Entry thisEntry = (Map.Entry) entries.next();
                if (thisEntry.getKey().toString().equals("entity"))
                {
                    // Each resource has "metadata" and "entity"; we only need "entity"
                    Map entityMap = (Map) thisEntry.getValue();
                    Organization org =
                            new Organization((String) entityMap.get("name"),
                                             (String) entityMap.get("status"),
                                             (String) entityMap.get("spaces_url"));
                    log.info(org.toString());
                    orgs.add(org);
                }
            }
        }

        return orgs;
    }
}
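The entity-walking loop in getAllOrgs can be exercised without any HTTP call by feeding it a hand-built map shaped like the /v2/organizations response. This is an illustrative, self-contained sketch: the org name and URL are made up, and extractNames is a helper introduced here, not part of the controller above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OrgExtractSketch
{
    // Same walk as getAllOrgs above: keep each resource's "entity" map and pull its name
    static List<String> extractNames(List<Object> resources)
    {
        List<String> names = new ArrayList<>();
        for (Object item : resources)
        {
            Map<?, ?> map = (Map<?, ?>) item;
            for (Map.Entry<?, ?> e : map.entrySet())
            {
                if (e.getKey().toString().equals("entity"))
                {
                    Map<?, ?> entityMap = (Map<?, ?>) e.getValue();
                    names.add((String) entityMap.get("name"));
                }
            }
        }
        return names;
    }

    public static void main(String[] args)
    {
        // Hand-built structure shaped like jsonMap.get("resources") (illustrative values)
        Map<String, Object> entity = new HashMap<>();
        entity.put("name", "example-org");
        entity.put("status", "active");
        entity.put("spaces_url", "/v2/organizations/xxx/spaces");

        Map<String, Object> resource = new HashMap<>();
        resource.put("metadata", new HashMap<String, Object>());
        resource.put("entity", entity);

        List<Object> resources = new ArrayList<>();
        resources.add(resource);

        System.out.println(extractNames(resources));
    }
}
```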

3. Of course we have the standard Spring Boot main class, which runs an embedded Tomcat server to serve the REST endpoints.
  
package com.pivotal.platform.pcf;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootCfRestApiApplication {

    public static void main(String[] args)
    {
        SpringApplication.run(SpringBootCfRestApiApplication.class, args);
    }
}

4. The POJO is as follows
  
package com.pivotal.platform.pcf.beans;

public final class Organization
{
    private String name;
    private String status;
    private String spacesUrl;

    public Organization()
    {
    }

    public Organization(String name, String status, String spacesUrl) {
        this.name = name;
        this.status = status;
        this.spacesUrl = spacesUrl;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getStatus() {
        return status;
    }

    public void setStatus(String status) {
        this.status = status;
    }

    public String getSpacesUrl() {
        return spacesUrl;
    }

    public void setSpacesUrl(String spacesUrl) {
        this.spacesUrl = spacesUrl;
    }

    @Override
    public String toString() {
        return "Organization{" +
                "name='" + name + '\'' +
                ", status='" + status + '\'' +
                ", spacesUrl='" + spacesUrl + '\'' +
                '}';
    }
}

Once our Spring Boot application is running, we can simply invoke one of the REST endpoints as follows; it logs in and makes the CF REST API call under the covers for us.

pasapicella@pas-macbook:~/apps$ curl http://localhost:8080/cf-orgs | jq -r
[
  {
    "name": "APJ",
    "status": "active",
    "spacesUrl": "/v2/organizations/b7ec654f-f7fd-40e2-a4f7-841379d396d7/spaces"
  },
  {
    "name": "apples-pivotal-org",
    "status": "active",
    "spacesUrl": "/v2/organizations/64c067c1-2e19-4d14-aa3f-38c07c46d552/spaces"
  },
  {
    "name": "Suncorp",
    "status": "active",
    "spacesUrl": "/v2/organizations/dd06618f-a062-4fbc-b8e9-7b829d9eaf37/spaces"
  }
]
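The same endpoint could also be called from another JVM process with the JDK's built-in java.net.http.HttpClient (Java 11+). This is a sketch only; actually sending the request assumes the Spring Boot app is running on localhost:8080, as in the curl example above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CfOrgsClientSketch
{
    // Builds a GET request for the /cf-orgs endpoint shown above
    static HttpRequest buildRequest(String baseUrl)
    {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/cf-orgs"))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) throws Exception
    {
        HttpRequest request = buildRequest("http://localhost:8080");
        // Sending the request requires the Spring Boot app to be running locally
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```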

More Information

1. Cloud Foundry REST API - https://apidocs.cloudfoundry.org/246/

2. RestTemplate - http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html



Categories: Fusion Middleware

Declarative REST Client Feign with Spring Boot

Pas Apicella - Mon, 2016-11-07 17:46
Feign is a declarative web service client that makes writing web service clients easier. To use Feign, create an interface and annotate it. It has pluggable annotation support, including Feign annotations and JAX-RS annotations, as well as pluggable encoders and decoders.

In this example I show how to use Feign in a Spring Cloud / Spring Boot application. The source code is here:

https://github.com/papicella/SpringBootEmployeeFeignClient

1. Include the required Maven dependency for Feign, as shown below.

  
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-feign</artifactId>
</dependency>

2. Assuming you're going to look up a service using Spring Cloud Service Discovery, include this dependency as well; the example below does exactly that.


<dependency>
    <groupId>io.pivotal.spring.cloud</groupId>
    <artifactId>spring-cloud-services-starter-service-registry</artifactId>
</dependency>


See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train

3. To enable Feign we simply add the annotation @EnableFeignClients, as shown below.


package pas.au.scs.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class SpringBootEmployeeFeignClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootEmployeeFeignClientApplication.class, args);
    }
}

4. Next we have to create an interface to call our service methods. The interface methods must match the service method signatures, as shown below. In this example we use Spring Cloud service discovery to find our service and invoke the right implementation method. Feign can do more than just call services registered through Spring Cloud service discovery, but that is what this example does.

EmployeeServiceClient Interface
 
package pas.au.scs.demo.employee;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import java.util.List;

@FeignClient("SPRINGBOOT-EMPLOYEE-SERVICE")
public interface EmployeeServiceClient
{
    @RequestMapping(method = RequestMethod.GET, value = "/emps")
    List<Employee> listEmployees();
}
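Because callers depend only on this interface, the Feign proxy is easy to replace with a stub in tests. The following dependency-free sketch mirrors the contract shape; Employee here is a hypothetical minimal class, not the project's real entity, and the names are made up.

```java
import java.util.Arrays;
import java.util.List;

public class FeignContractSketch
{
    // Hypothetical minimal class standing in for the project's Employee
    static class Employee
    {
        final String name;
        Employee(String name) { this.name = name; }
    }

    // Mirrors the Feign interface shape: callers see only this contract
    interface EmployeeServiceClient
    {
        List<Employee> listEmployees();
    }

    public static void main(String[] args)
    {
        // A stub implementation, as a test might use in place of the Feign proxy
        EmployeeServiceClient stub =
                () -> Arrays.asList(new Employee("SCOTT"), new Employee("KING"));

        System.out.println(stub.listEmployees().size() + " employees from stub");
    }
}
```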

So what does the actual service method look like?



@RestController
public class EmployeeRest
{
    private static Log logger = LogFactory.getLog(EmployeeRest.class);
    private EmployeeRepository employeeRepository;

    @Autowired
    public EmployeeRest(EmployeeRepository employeeRepository)
    {
        this.employeeRepository = employeeRepository;
    }

    @RequestMapping(value = "/emps",
            method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Employee> listEmployees()
    {
        logger.info("REST request to get all Employees");
        List<Employee> emps = employeeRepository.findAll();

        return emps;
    }

    .....


5. It's important to note that the Feign client calls the service method using Spring Cloud service discovery. The screenshot below shows how it looks inside Pivotal Cloud Foundry when we select our service registry instance and click on Manage.






6. Finally we just need to call our service through the Feign client interface, autowiring it as required. In the example below we use a class annotated with @Controller, which uses the returned data to display the results on a web page using Thymeleaf.


package pas.au.scs.demo.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import pas.au.scs.demo.employee.EmployeeServiceClient;

@Controller
public class EmployeeFeignController
{
    Logger logger = LoggerFactory.getLogger(EmployeeFeignController.class);

    @Autowired
    private EmployeeServiceClient employeeServiceClient;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String homePage(Model model) throws Exception
    {
        model.addAttribute("employees", employeeServiceClient.listEmployees());

        return "employees";
    }
}

7. The web page fragment in "employees.html" that renders the returned list of employees is as follows.

<div class="col-xs-12">
    <table id="example" class="table table-hover table-bordered table-striped table-condensed">
        <thead>
            <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Job</th>
                <th>Mgr</th>
                <th>Salary</th>
            </tr>
        </thead>
        <tbody>
            <tr th:each="employee : ${employees}">
                <td th:text="${employee.id}"></td>
                <td th:text="${employee.name}"></td>
                <td th:text="${employee.job}"></td>
                <td th:text="${employee.mgr}"></td>
                <td th:text="${employee.salary}"></td>
            </tr>
        </tbody>
    </table>
</div>

More Information

1. Spring Cloud
http://projects.spring.io/spring-cloud/

2. Declarative REST Client: Feign
http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html#spring-cloud-feign
Categories: Fusion Middleware

Approaches to Consider for Your Organization’s Windchill Consolidation Project

This post comes from Fishbowl Solutions’ Senior Solutions Architect, Seth Richter.

More and more organizations need to merge multiple Windchill instances into a single one, either after acquiring another company or because they ran separate Windchill implementations based on old divisional borders. Whatever the situation, these organizations want to merge into a single Windchill instance to gain efficiencies and/or other benefits.

The first task for a company in this situation is to assemble the right team and develop the right plan. The team will need to understand the budget and begin to document key requirements and their implications. Will they hire an experienced partner like Fishbowl Solutions? If so, we recommend involving the partner early in the process so they can help navigate the key decisions, avoid pitfalls, and develop the best approach for success.

Once you start evaluating the technical process and tools to merge the Windchill instances, the most likely options are:

1. Manual Method

Moving data from one Windchill system to another manually is always an option. This method might be viable if there are small pockets of data to move in an ad-hoc manner. However, it is extremely time consuming, so proceed with caution: if you get halfway through and then switch to one of the following methods, you might have hurt the process rather than helped it.

2. Third Party Tools (Fishbowl Solutions LinkExtract & LinkLoader tools)

This process can be a cost-effective alternative, but it is not as robust as the Windchill Bulk Migrator, so your requirements might dictate whether it is viable or not.

3. PTC Windchill Bulk Migrator (WBM) tool

This is a powerful, complex tool that works great if you have an experienced team running it. Fishbowl prefers the PTC Windchill Bulk Migrator in many situations because it can complete large merge projects over a weekend and historical versions are also included in the process.

A recent Fishbowl project involved a billion-dollar manufacturing company that had acquired another business and needed to consolidate CAD data from one Windchill system into its own. The project had an aggressive timeline because it needed to be completed before the company’s seasonal rush (and also be prepared for an ERP integration). During the three-month project window, we kicked off the project, executed all of the test migrations and validations, scheduled a ‘go live’ date, and then completed the final production migration over a weekend. Users at the acquired company checked their data into their “old” Windchill system on a Friday and were able to check their data out of the main corporate instance on Monday, with zero engineering downtime.

Fishbowl Solutions’ PTC/PLM team has completed many Windchill merge projects such as this one. The unique advantage of working with Fishbowl is that we are PTC software partners and Windchill programming experts. Oftentimes, when other reseller/consulting partners get stuck waiting on PTC technical support, Fishbowl has been able to problem-solve and keep projects on time and on budget.

If your organization is seeking to find an effective and efficient way to bulk load data from one Windchill system to another, our experts at Fishbowl Solutions are able to accomplish this on time and on budget. Urgency is a priority in these circumstances, and we want to ensure you’re able to make this transition process as hassle-free as possible with no downtime. Not sure which tool is the best fit for your Windchill migration project? Check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department for more information or to request a demo.

Contact Us

Rick Passolt
Senior Account Executive
952.456.3418
mcadsales@fishbowlsolutions.com

Seth Richter is a Senior Solutions Architect at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do.

The post Approaches to Consider for Your Organization’s Windchill Consolidation Project appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Consider Your Options for SolidWorks to Windchill Data Migrations

This post comes from Fishbowl Solutions’ Associate MCAD Consultant, Ben Sawyer.

CAD data migrations are most often seen as a huge burden. They can be lengthy, costly, messy, and a general road block to a successful project. Organizations planning on migrating SolidWorks data to PTC Windchill should consider their options when it comes to the process and tools they utilize to perform the bulk loading.

At Fishbowl Solutions, our belief is that the faster you can load all your data accurately into Windchill, the faster your company can implement critical PLM business processes and realize the results of initiatives like faster NPI, streamlined change and configuration management, and improved quality.

There are two typical scenarios we encounter with these kinds of data migration projects: the SolidWorks data either resides on a network file system (NFS) or in PDMWorks or EPDM.

The options for this process and the tools used will depend on other factors as well. The most common guiding factors are the quantity of data and the project completion date requirements. Here are the typical project scenarios.

Scenario One: Files on a Network File System

Manual Migration

There is always an option to manually migrate SolidWorks data into Windchill. However, if an organization has thousands of files from multiple products that need to be imported, this process can be extremely daunting. When loading manually, this process involves bringing files into the Windchill workspace, carefully resolving any missing dependents, errors, and duplicates, setting destination folders, revisions, and lifecycles, and fixing bad metadata. (Those who have tried this approach with large data quantities in the past know the pain we are talking about!)

Automated Solution

Years ago, Fishbowl developed its LinkLoader tool for SolidWorks as a viable solution to complete a Windchill bulk loading project with speed and accuracy.

Fishbowl’s LinkLoader solution follows a simple workflow to help identify data to be cleansed and mass loaded with accurate metadata. The steps are as follows:

1. Discovery
In this initial stage, the user chooses the mass of SolidWorks data to be loaded into Windchill. Since Windchill doesn’t allow duplicate named CAD files in the system, the software quickly identifies these duplicate files. It is up to the user to resolve the duplicate files or remove them from the data loading set.

2. Validation
The validation stage will ensure files are retrievable, attributes/parameters are extracted (for use in later stages), and relationships with other SolidWorks files are examined. LinkLoader captures all actions. The end user will need to resolve any errors or remove the data from the loading set.

3. Mapping
Moving toward the bulk loading stage, it is necessary to confirm and/or modify the attribute-mapping file as desired. The only required fields for mapping are lifecycle, revision/version, and the Windchill folder location. End users are able to leverage the attributes/parameter information from the validation as desired, or create their own ‘Instance Based Attribute’ list to map with the files.

4. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in an ‘Error List Report’ and LinkLoader will simply move on to the next file.

Scenario Two: Files reside in PDMWorks or EPDM

Manual Migration

There is also an option to do a manual data migration from one system to another if files reside in PDMWorks or EPDM. However, this process can be as tedious and drawn out as when the files are on an NFS, or perhaps even more so.

Automated Solution

Having files within PDMWorks or EPDM can make the migration process more straightforward and faster than the NFS projects. Fishbowl has created an automated solution tool that extracts the latest versions of each file from the legacy system and immediately prepares it for loading into Windchill. The steps are as follows:

1. Extraction (LinkExtract)
In this initial stage, Fishbowl uses its LinkExtract tool to pull the latest version of all SolidWorks files, determine references, and extract all the attributes for the files as defined in PDMWorks or EPDM.

2. Mapping
Before loading the files, it is necessary to confirm and/or modify the attribute mapping file as desired. Admins can fully leverage the attributes/parameter information from the extraction step, or can start from scratch if they find it easier. Often the destination Windchill system will have different terminology or states, and it is easy to remap those as needed in this step.

3. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in the Error List Report and LinkLoader will move on to the next file.

Proven Successes with LinkLoader

Many of Fishbowl’s customers have purchased and successfully ran LinkLoader themselves with little to no assistance from Fishbowl. Other customers of ours have utilized our consulting services to complete the migration project on their behalf.

With Fishbowl’s methodology centered on “Customer First”, our focus and support continuously keeps our customers satisfied. This is the same commitment and expertise we will bring to any and every data migration project.

If your organization is looking to consolidate SolidWorks CAD data to Windchill in a timely and effective manner, regardless of the size and scale of the project, our experts at Fishbowl Solutions can get it done.

For example, Fishbowl partnered with a multi-billion dollar medical device company with a short time frame to migrate over 30,000 SolidWorks files from a legacy system into Windchill. Fishbowl’s expert team took initiative and planned the process to meet their tight industry regulations and finish on time and on budget. After the Fishbowl team executed test migrations, the actual production migration process only took a few hours, thus eliminating engineering downtime.

If your organization is seeking the right team and tools to complete a SolidWorks data migration to Windchill, reach out to us at Fishbowl Solutions.

If you’d like more information about Fishbowl’s LinkLoader tool or our other products and services for PTC Windchill and Creo, check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department.

Contact Us

Rick Passolt
Senior Account Executive
952.465.3418
mcadsales@fishbowlsolutions.com

Ben Sawyer is an Associate MCAD Consultant at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do. 

The post Consider Your Options for SolidWorks to Windchill Data Migrations appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Pushing a Docker image using Docker Hub on Pivotal Cloud Foundry

Pas Apicella - Mon, 2016-10-10 21:13
In this example I will show how to push a Docker image to Pivotal Cloud Foundry (PCF) using Docker Hub. You can use your own private Docker registry, but here I am using Docker Hub.

The example Spring Boot application, which can easily be created as a Docker image, comes from the Spring guide below.

https://spring.io/guides/gs/spring-boot-docker/

1. First we need to ensure Docker support is enabled on Diego, as shown below.

pasapicella@pas-macbook:~$ cf feature-flag diego_docker
Retrieving status of diego_docker as admin...
OK

Features       State
diego_docker   enabled

Note: if it's not enabled, you need admin rights to enable it as follows

$ cf enable-feature-flag diego_docker

2. Login to Docker Hub from the command line

pasapicella@pas-macbook:~/pivotal/software/docker$ docker login -u pasapples -p ******
Login Succeeded

3. Push your local Docker image to your public Docker Hub Repository as follows

This assumes you have an image to push, as listed below.

pasapicella@pas-macbook:~/pivotal/software/docker$ docker images
REPOSITORY                        TAG      IMAGE ID       CREATED         SIZE
pasapples/cf                      0.0.1    b25e9b214774   3 days ago      881.4 MB
pasapples/gs-spring-boot-docker   latest   5fc76927eca2   3 days ago      195.5 MB
gregturn/gs-spring-boot-docker    latest   a813439710d3   3 days ago      195.4 MB
ubuntu                            14.04    f2d8ce9fa988   2 weeks ago     187.9 MB
frolvlad/alpine-oraclejdk8        slim     f8103909759b   2 weeks ago     167.1 MB
springio/gs-spring-boot-docker    latest   688d6c4ab4d3   18 months ago   609.9 MB

** Push to Docker Hub **

$ docker push pasapples/gs-spring-boot-docker
The push refers to a repository [docker.io/pasapples/gs-spring-boot-docker]
1a701a998f45: Layer already exists
0d4e0b525d4f: Layer already exists
a27c88827076: Pushed
58f7b9930e4f: Layer already exists
9007f5987db3: Layer already exists
latest: digest: sha256:6b3ccae43e096b1fa4d288900c6d2328e34f11e286996ffa582961bad599aee9 size: 1375

4. Login to Docker Hub and verify it's loaded as shown below

https://hub.docker.com/


At this point we are ready to deploy to our PCF instance; it's assumed you have already logged in to the instance prior to running this next step.

5. Push as shown below to PCF

pasapicella@pas-macbook:~$ cf push springboot-docker --docker-image pasapples/gs-spring-boot-docker --random-route -i 1 -m 512M -t 180
Creating app springboot-docker in org apples-org / space development as papicella@pivotal.io...
OK

Creating route springboot-docker-oological-superseniority.apps.pcfdemo.net...
OK

Binding springboot-docker-oological-superseniority.apps.pcfdemo.net to springboot-docker...
OK


Starting app springboot-docker in org apples-org / space development as papicella@pivotal.io...
Creating container
Successfully created container
Staging...
Staging process started ...
Staging process finished
Exit status 0
Staging Complete
Destroying container
Successfully destroyed container

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App springboot-docker was started using this command `java -Djava.security.egd=file:/dev/./urandom -jar /app.jar `

Showing health and status for app springboot-docker in org apples-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: springboot-docker-oological-superseniority.apps.pcfdemo.net
last uploaded: Tue Oct 11 02:04:42 UTC 2016
stack: unknown
buildpack: unknown

     state     since                    cpu      memory           disk         details
#0   running   2016-10-11 01:07:34 PM   104.3%   309.3M of 512M   1.4M of 1G



You can generate an application manifest (manifest.yml) from the running app as shown below

pasapicella@pas-macbook:~$ cf create-app-manifest springboot-docker
Creating an app manifest from current settings of app springboot-docker ...

OK
Manifest file created successfully at ./springboot-docker_manifest.yml

pasapicella@pas-macbook:~$ cat springboot-docker_manifest.yml
applications:
- name: springboot-docker
  instances: 1
  memory: 512M
  disk_quota: 1024M
  host: springboot-docker-oological-superseniority
  domain: apps.pcfdemo.net
  stack: cflinuxfs2
  timeout: 180

More Information

http://docs.pivotal.io/pivotalcf/1-8/adminguide/docker.html
Categories: Fusion Middleware

What I Have Learned as an Oracle WebCenter Consultant in My First Three Months at Fishbowl Solutions

This post comes from Fishbowl Solutions’ Associate Software Consultant, Jake Jehlicka.

Finishing college can be an intimidating experience for many. We leave what we know behind to open the gates to brand new experiences. Those of us fortunate enough to gain immediate employment often find ourselves leaving school and plunging headfirst into an entirely new culture a mere few weeks after turning in our last exam. It is exciting yet frightening, and what can make or break the whole experience is the new environment in which you find yourself. I consider myself one of the lucky ones.


I have been with Fishbowl Solutions for just over three months, and the experience is unlike any that I had encountered in my previous internships, work, or schooling in Duluth. I moved to the Twin Cities within a week of accepting the position. I was terrified, but my fears were very soon laid to rest. Fishbowl welcomed me with open arms, and I have learned an incredible amount in the short time that I have spent here. Here are just a few of the many aspects of Fishbowl and the skills I’ve gained since working here as an associate software consultant.

Culture

One of the things that really jumped out at me right away is how a company’s culture is a critical component to making work enjoyable and sustainable. Right from the outset, I was invited and even encouraged to take part in Fishbowl’s company activities like their summer softball team and happy hours celebrating new employees joining the team. I have seen first-hand how much these activities bring the workplace together in a way that not only makes employees happy, but makes them very approachable when it comes to questions or assistance. The culture here seems to bring everyone together in a way that is unique to Fishbowl, and the work itself sees benefits because of it.

Teamwork

Over the past three months, one thing that I have also learned is the importance of working together. I joined Fishbowl a few weeks after the other trainees in my group, and they were a bit ahead of me in the training program when I started. Not only were they ready and willing to answer any questions that I had, but they also shared the knowledge they had acquired in such a way that I was able to catch up before our training had completed. Of course, the other trainees weren't the only ones willing to lend their assistance. The team leads have always been there whenever I needed a technical question answered, or even if I just wanted advice in regard to where my own career may be heading.

People Skills

The team leads also taught me that not every skill is something that can be measured. Through my training, we were exposed to other elements outside of the expected technical skills. We were given guidance when it comes to oft-neglected soft skills such as public speaking and client interactions. These sorts of skills are utterly necessary to learn, regardless of which industry you are in. It is thanks to these that I have already had positive experiences working with our clients.

Technical Skills

As a new software consultant at Fishbowl, I have gained a plethora of knowledge about various technologies and applications, especially Oracle technologies. The training that I received has prepared me for working with technologies like Oracle WebCenter in such a way that I have been able to dive right into projects as soon as I finished. Working with actual systems was nearly a foreign concept after working with small individual projects in college, but I learned enough from my team members to be able to proceed with confidence. The training program at Fishbowl has a very well-defined structure, with an agenda laid out of what I should be working on in any given time period. A large portion of this was working directly with my own installation of the WebCenter content server. I was responsible for setting up, configuring, and writing custom code for the servers in both Windows and Linux environments. The training program was very well documented, and I always had the tools, information, and assistance needed to complete every task.

Once the formal training ended, I was immediately assigned a customer project involving web development using Oracle’s Site Studio Designer. The training had actually covered this application and I was sufficiently prepared to tackle the new endeavor! With that said, every single day at Fishbowl is another day of education; no two projects are identical and there is always something to be learned. For example, I am currently learning Ext JS with Sencha Architect in preparation for a new project!

Although we may never know with absolute certainty what the future has in store for us, I can confidently say that the experiences, skills, and knowledge that I have gained while working at Fishbowl Solutions will stay with me for the rest of my life.

Thank you to the entire Fishbowl team for everything they have done for me, and I look forward to growing alongside them!

j_jehlicka

Jake Jehlicka is an Associate Software Consultant at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do. 

The post What I Have Learned as an Oracle WebCenter Consultant in My First Three Months at Fishbowl Solutions appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Displaying Pivotal Cloud Foundry application Instances Buildpacks or Docker Images using CF CLI

Pas Apicella - Mon, 2016-10-10 00:34
I was recently asked how you could display, from the command line, the detected buildpack or Docker image being used by each PCF application instance. The CF REST API gives you all of this information and more, as per the documentation below to list all applications.

https://apidocs.cloudfoundry.org/244/apps/list_all_apps.html

This API call gives you lots of information, so to filter it down a fellow work colleague created this script to get just the output we want. You need to be logged into your PCF instance with "cf login" prior to running this script because it uses "cf curl" rather than calling the REST API directly

guids=$(cf curl /v2/apps?q=space_guid:`cf space development --guid` | jq -r ".resources[].metadata.guid")
echo -e "App Name, Buildpack, Docker"
for guid in $guids; do
  appName=$(cf curl /v2/apps/$guid/summary | jq -r ".name")
  buildpack=$(cf curl /v2/apps/$guid/summary | jq -r ".detected_buildpack")
  docker_image=$(cf curl /v2/apps/$guid/summary | jq -r ".docker_image")
  echo -e "$appName," "$buildpack," "$docker_image"
done

Output:

App Name, Buildpack, Docker
guestbook-backend, null, jamesclonk/guestbook-backend:latest
springboot-docker, null, pasapples/gs-spring-boot-docker:latest
pas-albums, java-buildpack=v3.8.1-offline-https://github.com/cloudfoundry/java-buildpack.git#29c79f2 java-main java-opts open-jdk-like-jre=1.8.0_91-unlimited-crypto open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE, null
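As a sketch of what the jq filter in the script above is doing, here is the same expression run against a minimal, made-up /v2/apps response body (the GUIDs are invented for illustration, not real API output):

```shell
# Minimal, made-up /v2/apps response body (illustrative only)
response='{"resources":[{"metadata":{"guid":"abc-123"}},{"metadata":{"guid":"def-456"}}]}'

# The same jq filter the script uses: prints one GUID per line
echo "$response" | jq -r ".resources[].metadata.guid"
```

This prints abc-123 and def-456, one per line; against a real foundation those GUIDs then feed the /v2/apps/GUID/summary calls in the loop.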

To use the REST API directly, replace

guids=$(cf curl /v2/apps?q=space_guid:`cf space development --guid` | jq -r ".resources[].metadata.guid")

WITH

guids=$(curl -k https://api.run.pivotal.io/v2/apps?q=space_guid:`cf space development --guid` -X GET -H "Authorization: `cf oauth-token`" | jq -r ".resources[].metadata.guid")
Categories: Fusion Middleware

Reading VCAP_SERVICES and VCAP_APPLICATION from a Spring Boot Rest Controller in PCF

Pas Apicella - Thu, 2016-10-06 06:22
Note for myself: Reading PCF System and ENV variables
  
package com.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;

@RestController
public class DemoRest
{
    private static final Logger logger = LoggerFactory.getLogger(DemoRest.class);

    @RequestMapping(value = "/version", method = RequestMethod.GET)
    public String version()
    {
        return "1.0";
    }

    @RequestMapping(value = "/vcapapplication", method = RequestMethod.GET)
    public Map vcapApplication() throws Exception
    {
        return Utils.getEnvMap("VCAP_APPLICATION");
    }

    @RequestMapping(value = "/vcapservices", method = RequestMethod.GET)
    public Map vcapServices() throws Exception
    {
        return Utils.getEnvMap("VCAP_SERVICES");
    }

    @RequestMapping(value = "/vcapservices_json", method = RequestMethod.GET)
    public String vcapServicesJSON() throws Exception
    {
        return System.getenv().get("VCAP_SERVICES");
    }

    @RequestMapping(value = "/appindex", method = RequestMethod.GET)
    public String appIndex() throws Exception
    {
        String instanceIndex = "N/A";

        try
        {
            instanceIndex =
                Utils.getEnvMap("VCAP_APPLICATION").getOrDefault("instance_index", "N/A").toString();
        }
        catch (Exception ex)
        {
            logger.info("Exception getting application index : " + ex.getMessage());
        }

        return instanceIndex;
    }

    @RequestMapping(value = "/getEnvVariable/{env_var}", method = RequestMethod.GET)
    public String getEnvVariable(@PathVariable String env_var)
    {
        return System.getenv().get(env_var);
    }
}

Utils.java (Referenced in Code above)
  
package com.example;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashMap;
import java.util.Map;

public class Utils
{
    public static Map getEnvMap(String vcap) throws Exception
    {
        String vcapEnv = System.getenv(vcap);
        ObjectMapper mapper = new ObjectMapper();

        if (vcapEnv != null) {
            Map<String, ?> vcapMap = mapper.readValue(vcapEnv, Map.class);
            return vcapMap;
        }

        return new HashMap<String, String>();
    }
}
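Because VCAP_SERVICES is just JSON held in an environment variable, the same inspection can be done from a shell with jq. This is only a sketch: the VCAP_SERVICES value and the "p-mysql" service label below are made up for illustration; real contents depend on the services bound to your app.

```shell
# Made-up VCAP_SERVICES value for illustration only
export VCAP_SERVICES='{"p-mysql":[{"name":"pas-mysql","credentials":{"uri":"mysql://user:pw@host:3306/db"}}]}'

# Drill into the JSON, much like Utils.getEnvMap does with Jackson in the Java code
echo "$VCAP_SERVICES" | jq -r '."p-mysql"[0].name'
```

This prints pas-mysql, the name of the (hypothetical) bound service instance.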
Categories: Fusion Middleware

Using Oracle 12c with Pivotal Cloud Foundry Applications and Spring Boot

Pas Apicella - Fri, 2016-09-23 01:24
In this post I walk through what it would take to access Oracle 12c using a Spring Boot application deployed to Pivotal Cloud Foundry (PCF), all from my MacBook Pro. Of course this can be done outside an isolated laptop like my MacBook Pro, but it's handy while doing DEV/TEST to still be able to use Oracle 12c.

Requirements
  • Oracle 12c instance
  • PCFDev 
  • Git Client

1. First you will need a 12c database, and the best way to get one is to use the Oracle VM image below. I use VirtualBox to start it up, and it gives me a working 12c database out of the box.

  http://www.oracle.com/technetwork/community/developer-vm/index.html#dbapp

Once imported into VirtualBox, you will want to configure the network to allow port forwarding on the database listener port 1521, and perhaps SSH on port 22 if you need that. The 1521 port-forward rule is vital to ensure your MacBook's localhost can access the database VM via the listener port. It's set up as follows.



2. This isn't required, but installing the Oracle 12c Instant Client will give you SQL*Plus, and to me that's vital. You could use a GUI tool if that's what you like, but for me SQL*Plus is more than good enough. Here is the link for the Mac OS X install.

  http://www.oracle.com/technetwork/topics/intel-macsoft-096467.html

Verify Setup:

Note: I am using the IP address of my local MacBook Pro. I could use "localhost" as I have set up a port-forward rule to enable that, BUT given I am using PCFDev it will need the IP address of my local MacBook Pro to ensure it's talking to the right host to get to the Oracle 12c instance VM.

pasapicella@pas-macbook:~/pivotal/software/oracle$ sqlplus scott/tiger@10.47.253.3/orcl

SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 23 15:57:11 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Fri Sep 23 2016 15:48:26 +10:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SCOTT@10.47.253.3/orcl>

3. I use PCFDev, and the reason is it's local to my MacBook Pro and I can get it to talk to the Oracle 12c instance easily. You can use any PCF as long as you have network access to your Oracle 12c instance.

Download from here : https://network.pivotal.io/products/pcfdev
Docs are here : https://docs.pivotal.io/pcf-dev/

At this point you're ready to go, so follow these steps to test your setup

4. Clone Spring Music as follows

$ git clone https://github.com/cloudfoundry-samples/spring-music.git

5. Download the Oracle 12c JDBC driver from the location below and place it into "src/main/webapp/WEB-INF/lib" folder

  http://www.oracle.com/technetwork/database/features/jdbc/jdbc-drivers-12c-download-1958347.html

6. Package as follows

$ ./gradlew assemble

7. Now let's create a CUPS (user-provided) service to enable our application to bind to Oracle 12c. We do that as follows

Note: It's vital we use the IP address of your local MacBook Pro, as PCFDev itself is a VM, so referencing "localhost" will not find the Oracle database instance

pasapicella@pas-macbook:~/apps/pcf-dev/demos/spring-music$ cf create-user-provided-service oracle-db -p '{"uri":"oracle://scott:tiger@10.47.253.3:1521/orcl"}'
Creating user provided service oracle-db in org pcfdev-org / space pcfdev-space as admin...
OK

8. Now let's create a file called manifest-oracle.yml that uses the CUPS service, as shown below

---
applications:
- name: spring-music
  memory: 512M
  instances: 1
  random-route: true
  path: build/libs/spring-music.war
  services:
    - oracle-db

9. Push as follows

$ cf push -f manifest-oracle.yml

Output:

pasapicella@pas-macbook:~/apps/pcf-dev/demos/spring-music$ cf push -f manifest-oracle.yml
Using manifest file manifest-oracle.yml

Creating app spring-music in org pcfdev-org / space pcfdev-space as admin...
OK

Creating route spring-music-apiaceous-interviewer.local.pcfdev.io...
OK

Binding spring-music-apiaceous-interviewer.local.pcfdev.io to spring-music...
OK

Uploading spring-music...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app274683538
Uploading 457K, 88 files
Done uploading
OK
Binding service oracle-db to app spring-music in org pcfdev-org / space pcfdev-space as admin...
OK

Starting app spring-music in org pcfdev-org / space pcfdev-space as admin...
Downloading binary_buildpack...
Downloading java_buildpack...
Downloading ruby_buildpack...
Downloading staticfile_buildpack...
Downloading nodejs_buildpack...
Downloading go_buildpack...
Downloading python_buildpack...
Downloading php_buildpack...
Downloaded java_buildpack
Downloaded binary_buildpack
Downloaded python_buildpack
Downloaded nodejs_buildpack
Downloaded ruby_buildpack
Downloaded go_buildpack
Downloaded staticfile_buildpack
Downloaded php_buildpack
Creating container
Successfully created container
Downloading app package...
Downloaded app package (27.5M)
Staging...
-----> Java Buildpack Version: v3.6 (offline) | https://github.com/cloudfoundry/java-buildpack.git#5194155
-----> Downloading Open Jdk JRE 1.8.0_71 from https://download.run.pivotal.io/openjdk/trusty/x86_64/openjdk-1.8.0_71.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
-----> Downloading Open JDK Like Memory Calculator 2.0.1_RELEASE from https://download.run.pivotal.io/memory-calculator/trusty/x86_64/memory-calculator-2.0.1_RELEASE.tar.gz (found in cache)
       Memory Settings: -Xmx382293K -XX:MaxMetaspaceSize=64M -Xss995K -Xms382293K -XX:MetaspaceSize=64M
-----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.10.0_RELEASE.jar (found in cache)
-----> Downloading Tomcat Instance 8.0.30 from https://download.run.pivotal.io/tomcat/tomcat-8.0.30.tar.gz (found in cache)
       Expanding Tomcat Instance to .java-buildpack/tomcat (0.1s)
-----> Downloading Tomcat Lifecycle Support 2.5.0_RELEASE from https://download.run.pivotal.io/tomcat-lifecycle-support/tomcat-lifecycle-support-2.5.0_RELEASE.jar (found in cache)
-----> Downloading Tomcat Logging Support 2.5.0_RELEASE from https://download.run.pivotal.io/tomcat-logging-support/tomcat-logging-support-2.5.0_RELEASE.jar (found in cache)
-----> Downloading Tomcat Access Logging Support 2.5.0_RELEASE from https://download.run.pivotal.io/tomcat-access-logging-support/tomcat-access-logging-support-2.5.0_RELEASE.jar (found in cache)
Exit status 0
Staging complete
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (108B)
Uploaded droplet (79.8M)
Uploading complete

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App spring-music was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.1_RELEASE -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,native:10,stack:5 -memoryInitials=heap:100%,metaspace:100% -totMemory=$MEMORY_LIMIT) &&  JAVA_HOME=$PWD/.java-buildpack/open_jdk_jre JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Daccess.logging.enabled=false -Dhttp.port=$PORT" exec $PWD/.java-buildpack/tomcat/bin/catalina.sh run`

Showing health and status for app spring-music in org pcfdev-org / space pcfdev-space as admin...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: spring-music-apiaceous-interviewer.local.pcfdev.io
last uploaded: Fri Sep 23 06:14:54 UTC 2016
stack: unknown
buildpack: java-buildpack=v3.6-offline-https://github.com/cloudfoundry/java-buildpack.git#5194155 open-jdk-like-jre=1.8.0_71 open-jdk-like-memory-calculator=2.0.1_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE tomcat-access-logging-support=2.5.0_RELEASE tomca...

     state     since                    cpu    memory         disk           details
#0   running   2016-09-23 04:15:22 PM   0.0%   844K of 512M   452K of 512M

10. Verify from SQL*Plus that it has created the table ALBUM in the SCOTT schema, as shown below
  
SCOTT@10.47.253.3:1521/orcl> describe album;
Name          Null?     Type
------------- --------- ---------------------
ID            NOT NULL  VARCHAR2(40 CHAR)
ALBUMID                 VARCHAR2(255 CHAR)
ARTIST                  VARCHAR2(255 CHAR)
GENRE                   VARCHAR2(255 CHAR)
RELEASEYEAR             VARCHAR2(255 CHAR)
TITLE                   VARCHAR2(255 CHAR)
TRACKCOUNT    NOT NULL  NUMBER(10)

SCOTT@10.47.253.3:1521/orcl>

11. Test application in a browser




Categories: Fusion Middleware

Using H2 Console in development with Spring Boot then NOT when deployed to Pivotal Cloud Foundry

Pas Apicella - Thu, 2016-09-22 07:02
Frequently when developing Spring-based applications, I will use the H2 in-memory database during the development process. H2 ships with a web-based database console, which you can use while your application is under development. It is a convenient way to view the tables created by Hibernate and run queries against the in-memory database. In this post I show what is required to set this up, as well as what it means to then deploy your Spring Boot application to Pivotal Cloud Foundry and rely on a database service, so that your application becomes cloud aware.

1. First ensure you have included the H2 Maven dependency as shown below. I also use DevTools, but that's not needed to enable the H2 web console.
  
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>
2. Then create a specific application.yml file for use in development mode only, and enable the H2 web console in it. Using the default name "application.yml" will ensure that while you're in DEV mode it will use that file. Notice how I give the database a name rather than use the default, and also specify a datasource. You don't need to go to that effort, BUT to me it's good practice to define a datasource, because it is what you will do for the application itself when not in DEV mode.

application.yml

server:
  error:
    whitelabel:
      enabled: false

spring:
  h2:
    console:
      enabled: true
  jpa:
    hibernate:
      ddl-auto: create
  datasource:
    url: jdbc:h2:mem:apples;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    username: sa
    password:
    driver-class-name: org.h2.Driver
    platform: h2

3. Run your spring boot application

....
2016-09-22 21:30:26.771  INFO 18929 --- [  restartedMain] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2016-09-22 21:30:26.778  INFO 18929 --- [  restartedMain] gBootJpaBootstrapEmployeeDemoApplication : Started SpringBootJpaBootstrapEmployeeDemoApplication in 6.021 seconds (JVM running for 6.553)

4. Connect to the H2 web console as follows

http://localhost:8080/h2-console/

The JDBC URL now becomes what you set in the dialog above; in short, the DB name I set was "apples"



When it comes to deployment in Pivotal Cloud Foundry (PCF), you most likely will not want to use H2 and will instead bind to a database service like MySQL, for example. To do that we would alter our project as follows.

5. Add the following Maven dependencies. I add the MySQL dependency; you can leave H2 in place, as it will be used if the application doesn't find a MySQL service instance to bind to. I also add "spring-boot-starter-cloud-connectors", as it's this which automatically creates and configures a DataSource, injecting the service details at runtime for me.
  
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cloud-connectors</artifactId>
</dependency>

6. Add a specific cloud application YML file named "application-cloud.yml", as follows. I have left out a datasource, and Spring Boot will create that for me when bound to the database service, BUT generally I always set the datasource with the correct properties required to meet my application's requirements.

application-cloud.yml

spring:
  jpa:
    hibernate:
      ddl-auto: create

server:
  error:
    whitelabel:
      enabled: false

7. When creating a manifest.yml file to deploy your application to PCF, all you need to do is add a MySQL database service and set the active profile to "cloud", as shown below, which will ensure we use the "application-cloud.yml" file we created above.

manifest.yml

---
applications:
- name: springboot-bootstrap-employee
  memory: 512M
  instances: 1
  random-route: true
  timeout: 180
  path: ./target/springbootjpabootstrapemployeedemo-0.0.1-SNAPSHOT.jar
  services:
    - pas-mysql
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom
    SPRING_PROFILES_ACTIVE: cloud

The project in IntelliJ is as follows


GitHub URL as follows:

https://github.com/papicella/SpringBootJPABootstrapEmployeeDemo
Categories: Fusion Middleware

Spring Boot on Google Cloud Platform (GCP)

Pas Apicella - Tue, 2016-09-20 01:30
I recently created a demo which can be used to deploy a basic Spring Boot application on Google Cloud Platform (GCP). There isn't really anything specific in the code to make this work on GCP, BUT the Maven pom.xml has what is required to make it one simple command to send this app to GCP.

$ mvn gcloud:deploy

You can run an App Engine application in two environments: the standard environment and the flexible environment. This is an example of Java with Spring Boot in the App Engine flexible environment. The following table summarizes some of the differences between the two environments.


Feature                      Standard environment       Flexible environment
----------------------------------------------------------------------------
Instance startup time        Milliseconds               Minutes
Scaling                      Manual, Basic, Automatic   Manual, Automatic
Writing to local disk        No                         Yes, ephemeral (disk initialized on each VM startup)
Customizable serving stack   No                         Yes (built by customizing a Dockerfile)
First time deployment may take several minutes. This is because App Engine Flexible environment will automatically provision a Google Compute Engine virtual machine for you behind the scenes to run this application.

GitHub URL:

https://github.com/papicella/PivotalSpringBoot


Categories: Fusion Middleware

Why taking good holidays is good practice

Steve Jones - Wed, 2016-08-24 02:22
Back when I was a fairly recent graduate I received one of the best pieces of advice I've ever received.  The project was having some delivery pressures and I was seen as crucial to one of the key parts.  As a result my manager was putting pressure on me to cancel my holiday (two weeks of Windsurfing bliss in the Med with friends) with a promise that the company would cover the costs.  I was
Categories: Fusion Middleware

Variable substitution for a manifest.yml for Cloud Foundry

Pas Apicella - Fri, 2016-08-19 06:45
If you have pushed applications to CF or PCF you have most likely used a manifest.yml file, and at some point wanted to use variable substitution. manifest.yml files don't support that, and a feature request has been raised for it as follows

https://github.com/cloudfoundry/cli/issues/820

With a recent customer we scripted the creation of a manifest.yml file from a Jenkins job, which would inject the required ROUTE into the application by generating the manifest.yml through a script, as shown below.

manifest-demo.sh

export ROUTE=$1

echo ""
echo "Setting route to $ROUTE ..."
echo ""

cat > manifest.yml <<!
---
applications:
- name: gs-rest-service
  memory: 256M
  instances: 1
  host: $ROUTE
  path: target/gs-rest-service-0.1.0.jar
!

cat manifest.yml

Script tested as follows

pasapicella@pas-macbook:~/bin/manifest-demo$ ./manifest-demo.sh apples-route-pas

Setting route to apples-route-pas ...

---
applications:
- name: gs-rest-service
  memory: 256M
  instances: 1
  host: apples-route-pas
  path: target/gs-rest-service-0.1.0.jar
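
A variation on the same idea, shown here as a hypothetical sketch rather than what the Jenkins job actually did, is to keep a static template file and substitute a placeholder token with sed (the __ROUTE__ marker is my own convention, not a CF feature):

```shell
# Template with a placeholder token (__ROUTE__ is a made-up convention)
cat > manifest-template.yml <<'EOF'
---
applications:
- name: gs-rest-service
  memory: 256M
  instances: 1
  host: __ROUTE__
  path: target/gs-rest-service-0.1.0.jar
EOF

# Substitute the placeholder to produce the final manifest
sed "s/__ROUTE__/apples-route-pas/" manifest-template.yml > manifest.yml
cat manifest.yml
```

This keeps the manifest content in one reviewable file and confines the shell logic to a single sed line.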

Categories: Fusion Middleware

Pages

Subscribe to Oracle FAQ aggregator - Fusion Middleware