Fusion Middleware

ASP.NET Core app deployed to Pivotal Cloud Foundry

Pas Apicella - Thu, 2017-03-16 22:37
This post will show you how to write your first ASP.NET Core application on macOS or Linux and push it to Pivotal Cloud Foundry without having to PUBLISH it for deployment.

Before getting started you will need the following:

1. .NET Core downloaded and installed
2. Visual Studio Code with the C# extension
3. The CF CLI installed: https://github.com/cloudfoundry/cli

Steps

Note: This assumes you are already logged into Pivotal Cloud Foundry and connected to Pivotal Web Services (run.pivotal.io). The command below shows I am connected and targeted:

pasapicella@pas-macbook:~$ cf target
API endpoint:   https://api.run.pivotal.io
API version:    2.75.0
User:           papicella@pivotal.io
Org:            apples-pivotal-org
Space:          development

1. Create new project

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet new mvc --auth None --framework netcoreapp1.0
Content generation time: 278.4748 ms
The template "ASP.NET Core Web App" created successfully.

2. Restore as follows

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
  Restoring packages for /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/dotnet-core-mvc.csproj...
  Generating MSBuild file /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/dotnet-core-mvc.csproj.nuget.g.props.
  Generating MSBuild file /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/dotnet-core-mvc.csproj.nuget.g.targets.
  Writing lock file to disk. Path: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/project.assets.json
  Restore completed in 1.09 sec for /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/dotnet-core-mvc.csproj.

  NuGet Config files used:
      /Users/pasapicella/.nuget/NuGet/NuGet.Config

  Feeds used:
      https://api.nuget.org/v3/index.json

3. At this point we can run the application and see what it looks like in a browser

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet run
Hosting environment: Production
Content root path: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.


Now, to prepare this demo for Pivotal Cloud Foundry, we need to make some changes to the generated code, as shown in the next few steps.

4. In Visual Studio Code, under the menu item “File/Open” select the “dotnet-core-mvc” folder and open it. Confirm all messages from Visual Studio Code.

The .NET Core buildpack configures the app web server automatically so you don't have to handle this yourself, but you do have to prepare your app so that the buildpack can deliver this configuration to it via the command line.

5. Open "Program.cs" and modify the Main() method, adding the "var config = ..." block and the ".UseConfiguration(config)" call as shown below
  
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

namespace dotnet_core_mvc
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var config = new ConfigurationBuilder()
                .AddCommandLine(args)
                .Build();

            var host = new WebHostBuilder()
                .UseKestrel()
                .UseConfiguration(config)
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

6. Open "dotnet-core-mvc.csproj" and add the following dependency "Microsoft.Extensions.Configuration.CommandLine" as shown below
  
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.0.4" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.3" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.0.2" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.0.2" />
    <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="1.0.0" />
    <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.0.1" />
  </ItemGroup>

</Project>


7. File -> Save All

8. Jump back out to a terminal window. You can actually restore from within the Visual Studio Code IDE, but I still like to do it from the command line.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
...

9. Deploy to Pivotal Cloud Foundry as follows. You will need to use a unique route, so replace "pas" with your own name and that should do it.

$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
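As an aside, the same settings could be captured in a manifest.yml so that future pushes need no flags. This is a sketch only; the values mirror the flags in the command above, and the app name would still need to be made unique:

```yaml
---
applications:
- name: pas-dotnetcore-mvc-demo
  memory: 512M
  buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack
```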

** Output **

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
Creating app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Using route pas-dotnetcore-mvc-demo.cfapps.io
Binding pas-dotnetcore-mvc-demo.cfapps.io to pas-dotnetcore-mvc-demo...
OK

Uploading pas-dotnetcore-mvc-demo...
Uploading app files from: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc
Uploading 208.7K, 84 files
Done uploading
OK

Starting app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
Creating container
Successfully created container
Downloading app package...
Downloaded app package (675.5K)
ASP.NET Core buildpack version: 1.0.13
ASP.NET Core buildpack starting compile
-----> Restoring files from buildpack cache
       OK
-----> Restoring NuGet packages cache
       OK
-----> Extracting libunwind
       libunwind version: 1.2
       https://buildpacks.cloudfoundry.org/dependencies/manual-binaries/dotnet/libunwind-1.2-linux-x64-f56347d4.tgz
       OK
-----> Installing .NET SDK
       .NET SDK version: 1.0.1
       OK
-----> Restoring dependencies with Dotnet CLI

       Welcome to .NET Core!
       ---------------------
       Telemetry
       The .NET Core tools collect usage data in order to improve your experience. The data is anonymous and does not include command-line arguments. The data is collected by Microsoft and shared with the community.
       You can opt out of telemetry by setting a DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 using your favorite shell.
       You can read more about .NET Core tools telemetry @ https://aka.ms/dotnet-cli-telemetry.
       Configuring...
       -------------------
       A command is running to initially populate your local package cache, to improve restore speed and enable offline access. This command will take up to a minute to complete and will only happen once.
       Decompressing 100% 16050 ms
-----> Buildpack version 1.0.13
       https://buildpacks.cloudfoundry.org/dependencies/dotnet/dotnet.1.0.1.linux-amd64-99324ccc.tar.gz
       Learn more about .NET Core @ https://aka.ms/dotnet-docs. Use dotnet --help to see available commands or go to https://aka.ms/dotnet-cli-docs.

       --------------

       Expanding 100% 13640 ms
         Restoring packages for /tmp/app/dotnet-core-mvc.csproj...
         Installing Microsoft.Extensions.Configuration 1.0.0.
         Installing Microsoft.Extensions.Configuration.CommandLine 1.0.0.
         Generating MSBuild file /tmp/app/obj/dotnet-core-mvc.csproj.nuget.g.props.
         Writing lock file to disk. Path: /tmp/app/obj/project.assets.json
         Restore completed in 2.7 sec for /tmp/app/dotnet-core-mvc.csproj.

         NuGet Config files used:
             /tmp/app/.nuget/NuGet/NuGet.Config

         Feeds used:
             https://api.nuget.org/v3/index.json

         Installed:
             2 package(s) to /tmp/app/dotnet-core-mvc.csproj
       OK
       Detected .NET Core runtime version(s) 1.0.4, 1.1.1 required according to 'dotnet restore'
-----> Installing required .NET Core runtime(s)
       .NET Core runtime 1.0.4 already installed
       .NET Core runtime 1.1.1 already installed
       OK
-----> Publishing application using Dotnet CLI
       Microsoft (R) Build Engine version 15.1.548.43366
       Copyright (C) Microsoft Corporation. All rights reserved.

         dotnet-core-mvc -> /tmp/app/bin/Debug/netcoreapp1.0/dotnet-core-mvc.dll
       Copied 38 files from /tmp/app/libunwind to /tmp/cache
-----> Saving to buildpack cache
       OK
       Copied 850 files from /tmp/app/.dotnet to /tmp/cache
       Copied 19152 files from /tmp/app/.nuget to /tmp/cache
       OK
-----> Cleaning staging area
       Removing /tmp/app/.nuget
       OK
ASP.NET Core buildpack is done creating the droplet
Exit status 0
Uploading droplet, build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (359.9M)
Uploaded droplet (131.7M)
Uploading complete
Successfully destroyed container

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-dotnetcore-mvc-demo was started using this command `cd .cloudfoundry/dotnet_publish && dotnet dotnet-core-mvc.dll --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-dotnetcore-mvc-demo.cfapps.io
last uploaded: Fri Mar 17 03:19:51 UTC 2017
stack: cflinuxfs2
buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

     state     since                    cpu    memory          disk           details
#0   running   2017-03-17 02:26:03 PM   0.0%   39.1M of 512M   302.7M of 1G

10. Finally, invoke the application using its URL, which can be determined from the output at the end of the push above or by running "cf apps"



More Information

https://docs.microsoft.com/en-us/aspnet/core/tutorials/your-first-mac-aspnet
Categories: Fusion Middleware

Run a Spring Cloud Task from Pivotal Cloud Foundry using Cloud Foundry Tasks

Pas Apicella - Fri, 2017-03-10 02:26
Recently we announced Spring Cloud Task under the umbrella of Spring Cloud through the following blog entry. In the post below I am going to show you how to create a Cloud Foundry Task that invokes this Spring Cloud Task.

Spring Cloud Task allows a user to develop and run short-lived microservices using Spring Cloud, and to run them locally, in the cloud, or even on Spring Cloud Data Flow. In this example we will run it in the cloud using Pivotal Cloud Foundry (the PWS instance run.pivotal.io). For more information, follow the link below.

https://cloud.spring.io/spring-cloud-task/

For more information on Cloud Foundry Tasks follow the link below

https://docs.cloudfoundry.org/devguide/using-tasks.html

Steps

Note: This demo assumes you are already logged into PCF; you can confirm that using a command as follows

pasapicella@pas-macbook:~/temp$ cf target
API endpoint:   https://api.run.pivotal.io
API version:    2.75.0
User:           papicella@pivotal.io
Org:            apples-pivotal-org
Space:          development

Also ensure you're using the correct version of the CF CLI. At the time of this blog it was as follows; you will need at least this version.

pasapicella@pas-macbook:~/temp$ cf --version
cf version 6.25.0+787326d95.2017-02-28

You will also need an instance of Pivotal Cloud Foundry that supports Tasks within the Applications Manager UI, which Pivotal Web Services (PWS) does.

1. Clone the simple Spring Cloud Task as follows

$ git clone https://github.com/papicella/SpringCloudTaskTodaysDate.git

pasapicella@pas-macbook:~/temp$ git clone https://github.com/papicella/SpringCloudTaskTodaysDate.git
Cloning into 'SpringCloudTaskTodaysDate'...
remote: Counting objects: 19, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 19 (delta 0), reused 19 (delta 0), pack-reused 0
Unpacking objects: 100% (19/19), done.

2. Change into SpringCloudTaskTodaysDate directory

3. If you look at the class "pas.au.pivotal.pa.sct.demo.SpringCloudTaskTodaysDateApplication" you will see it's just a Spring Boot application with the annotation "@EnableTask". As long as Spring Cloud Task is on the classpath, any Spring Boot application with "@EnableTask" will record the start and finish of the boot application.

4. Package the application using "mvn package"

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ mvn package
[INFO] Scanning for projects...
Downloading: https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-task-dependencies/1.2.0.BUILD-SNAPSHOT/maven-metadata.xml
Downloaded: https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-task-dependencies/1.2.0.BUILD-SNAPSHOT/maven-metadata.xml (809 B at 0.6 KB/sec)
[INFO]

..

[INFO] Building jar: /Users/pasapicella/temp/SpringCloudTaskTodaysDate/target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.2.RELEASE:repackage (default) @ springcloudtasktodaysdate ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.621 s
[INFO] Finished at: 2017-03-10T18:51:15+11:00
[INFO] Final Memory: 29M/199M
[INFO] ------------------------------------------------------------------------

5. Push the application as shown below

$ cf push springcloudtask-date --no-route --health-check-type none -p ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar -m 512m

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf push springcloudtask-date --no-route --health-check-type none -p ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar -m 512m

Creating app springcloud-task-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

App springcloud-task-date is a worker, skipping route creation
Uploading springcloud-task-date...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app069139431
Uploading 239.1K, 89 files

...

1 of 1 instances running

App started


OK

App springcloudtask-date was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher`

Showing health and status for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls:
last uploaded: Fri Mar 10 07:57:17 UTC 2017
stack: cflinuxfs2
buildpack: container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.14-offline-https://github.com/cloudfoundry/java-buildpack.git#d5d58c6 java-main open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10...

     state      since                    cpu    memory         disk         details
#0   starting   2017-03-10 06:58:43 PM   0.0%   936K of 512M   1.3M of 1G


6. Stop the application, as we only want to run it as a CF Task when we are ready to.

$ cf stop springcloudtask-date

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf stop springcloudtask-date
Stopping app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

7. In a separate terminal window, let's tail the logs from the application as follows. Don't worry that there is no output yet; the application has not yet been invoked through a task.

$ cf logs springcloudtask-date

** Output **

pasapicella@pas-macbook:~$ cf logs springcloudtask-date
Connected, tailing logs for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...


8. Now log into the PWS Apps Manager console and navigate to your application's settings page. On this page you will see the run command for the Spring Boot application.


9. To invoke the task, we run a command as follows using the "invocation command" from step #8 above.

Format: cf run-task {app-name} {invocation command}

$ cf run-task springcloudtask-date 'INVOCATION COMMAND from step #8 above'

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf run-task springcloudtask-date 'CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher'
Creating task for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Task has been submitted successfully for execution.
Task name:   371bb9b1
Task id:     1

10. Return to PWS Applications Manager and click on the "Tasks" tab to verify it was successful


11. Return to the terminal window where we were tailing the logs to verify the task was run

pasapicella@pas-macbook:~$ cf logs springcloudtask-date
Connected, tailing logs for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...

2017-03-10T19:15:29.55+1100 [APP/TASK/371bb9b1/0]OUT Creating container
2017-03-10T19:15:29.89+1100 [APP/TASK/371bb9b1/0]OUT Successfully created container
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT   .   ____          _            __ _ _
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT   '  |____| .__|_| |_|_| |_\__, | / / / /
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  =========|_|==============|___/=/_/_/_/
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  :: Spring Boot ::        (v1.5.2.RELEASE)
2017-03-10T19:15:34.71+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.706  INFO 7 --- [           main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext
2017-03-10T19:15:34.85+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.853  INFO 7 --- [           main] nfigurationApplicationContextInitializer : Adding cloud service auto-reconfiguration to ApplicationContext
2017-03-10T19:15:34.89+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.891  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : The following profiles are active: cloud
2017-03-10T19:15:34.89+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.890  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Starting SpringCloudTaskTodaysDateApplication on b00b045e-dea4-4e66-8298-19dd71edb9c8 with PID 7 (/home/vcap/app/BOOT-INF/classes started by vcap in /home/vcap/app)
2017-03-10T19:15:35.00+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.009  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a07c5b4: startup date [Fri Mar 10 08:15:35 UTC 2017]; root of context hierarchy
2017-03-10T19:15:35.91+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.912  INFO 7 --- [           main] urceCloudServiceBeanFactoryPostProcessor : Auto-reconfiguring beans of type javax.sql.DataSource
2017-03-10T19:15:35.91+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.916  INFO 7 --- [           main] urceCloudServiceBeanFactoryPostProcessor : No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
2017-03-10T19:15:36.26+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.259 DEBUG 7 --- [           main] o.s.c.t.c.SimpleTaskConfiguration        : Using org.springframework.cloud.task.configuration.DefaultTaskConfigurer TaskConfigurer
2017-03-10T19:15:36.74+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.748  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2017-03-10T19:15:36.75+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.758 DEBUG 7 --- [           main] o.s.c.t.r.support.SimpleTaskRepository   : Creating: TaskExecution{executionId=0, parentExecutionId=null, exitCode=null, taskName='DateSpringCloudTask:cloud:', startTime=Fri Mar 10 08:15:36 UTC 2017, endTime=null, exitMessage='null', externalExecutionId='null', errorMessage='null', arguments=[]}
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.776 DEBUG 7 --- [           main] o.s.c.t.r.support.SimpleTaskRepository   : Updating: TaskExecution with executionId=0 with the following {exitCode=0, endTime=Fri Mar 10 08:15:36 UTC 2017, exitMessage='null', errorMessage='null'}
2017-03-10T19:15:36.75+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.757  INFO 7 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.775  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Executed at : 3/10/17 8:15 AM
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.777  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a07c5b4: startup date [Fri Mar 10 08:15:35 UTC 2017]; root of context hierarchy
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.779  INFO 7 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Stopping beans in phase 0
2017-03-10T19:15:36.78+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.782  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Unregistering JMX-exposed beans on shutdown
2017-03-10T19:15:36.78+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.788  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Started SpringCloudTaskTodaysDateApplication in 3.205 seconds (JVM running for 3.985)
2017-03-10T19:15:36.83+1100 [APP/TASK/371bb9b1/0]OUT Exit status 0
2017-03-10T19:15:36.86+1100 [APP/TASK/371bb9b1/0]OUT Destroying container
2017-03-10T19:15:37.79+1100 [APP/TASK/371bb9b1/0]OUT Successfully destroyed container

12. Finally, you can verify tasks using a command as follows

$ cf tasks springcloudtask-date

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf tasks springcloudtask-date
Getting tasks for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

id   name       state       start time                      command
1    371bb9b1   SUCCEEDED   Fri, 10 Mar 2017 08:15:28 UTC   CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher



ASP .NET Core (CLR) on Pivotal Cloud Foundry

Pas Apicella - Wed, 2017-03-08 04:57
There are two ways to run .NET applications on Pivotal Cloud Foundry. In short, they are as follows:

  1. Windows 2012 R2 Stack (Windows 2016 coming soon)
  2. Linux Stack - ASP.NET Core CLR only

In the example below I am going to show how you would push a sample ASP.NET Core application using the default Linux stack. I am using run.pivotal.io, better known as PWS (Pivotal Web Services), an instance which only supports the Linux stack. In your own PCF installation an operator may have provided Windows support, in which case "cf stacks" is one way to find out, as shown below.

$ cf stacks
Getting stacks in org pivot-papicella / space development as papicella@pivotal.io...
OK

name            description
cflinuxfs2      Cloud Foundry Linux-based filesystem

windows2012R2   Microsoft Windows / .Net 64 bit

Steps

1. Clone a demo as shown below

$ git clone https://github.com/bingosummer/aspnet-core-helloworld.git
Cloning into 'aspnet-core-helloworld'...
remote: Counting objects: 206, done.
remote: Total 206 (delta 0), reused 0 (delta 0), pack-reused 206
Receiving objects: 100% (206/206), 43.40 KiB | 0 bytes/s, done.
Resolving deltas: 100% (78/78), done.

2. Change to the right directory as shown below

$ cd aspnet-core-helloworld

3. Edit manifest.yml to use the BETA buildpack as follows. You can list the available buildpacks using "cf buildpacks"

---
applications:
- name: sample-aspnetcore-helloworld
  random-route: true
  memory: 512M
  buildpack: dotnet_core_buildpack_beta

4. Push as shown below

pasapicella@pas-macbook:~/apps/dotnet/aspnet-core-helloworld$ cf push
Using manifest file /Users/pasapicella/apps/dotnet/aspnet-core-helloworld/manifest.yml

Updating app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Uploading sample-aspnetcore-helloworld...
Uploading app files from: /Users/pasapicella/pivotal/apps/dotnet/aspnet-core-helloworld
Uploading 21.9K, 15 files
Done uploading
OK

Stopping app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Starting app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
Downloading dotnet_core_buildpack_beta...
Downloaded dotnet_core_buildpack_beta
Creating container
Successfully created container
Downloading build artifacts cache...
Downloading app package...
Downloaded app package (21.5K)
Downloaded build artifacts cache (157.7M)

...

-----> Saving to buildpack cache
    Copied 0 files from /tmp/app/libunwind to /tmp/cache
    Copied 0 files from /tmp/app/.dotnet to /tmp/cache
    Copied 0 files from /tmp/app/.nuget to /tmp/cache
    OK
ASP.NET Core buildpack is done creating the droplet
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (157.7M)
Uploaded droplet (157.7M)
Uploading complete
Destroying container
Successfully destroyed container

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App sample-aspnetcore-helloworld was started using this command `dotnet run --project src/dotnetstarter --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: sample-aspnetcore-helloworld-gruffier-jackpot.cfapps.io
last uploaded: Wed Mar 8 10:46:44 UTC 2017
stack: cflinuxfs2
buildpack: dotnet_core_buildpack_beta

  state     since                    cpu     memory          disk           details
#0   running   2017-03-08 09:48:29 PM   22.4%   36.7M of 512M   556.8M of 1G


Verify the application using the URL given at the end of the push.


Spring Boot Actuator support added to Pivotal Web Services (PWS) Application Manager

Pas Apicella - Mon, 2017-03-06 17:07
Recently we added "Spring Boot Actuator support" to Pivotal Web Services (PWS), http://run.pivotal.io. If you want to try this out, simply use the demo below, which is all set up to show how this works.

https://github.com/papicella/SpringBootPCFPas

Once pushed, you will see a Spring Boot icon in the Application Manager UI indicating the Actuator support, as shown below.


Retrieving ATM locations using the NAB API

Pas Apicella - Mon, 2017-01-30 21:21
NAB has released an API to determine, amongst other things, ATM locations based on a geo location. It's documented here.

https://developer.nab.com.au/docs#locations-api-get-locations-by-geo-coordinates

Here we use this API, but I wanted to highlight a few things you need to know in order to consume it

1. You will need to provide the following, which has to be calculated from a LAT/LONG. The screenshot below shows what a geo location with a radius of 0.5 km would look like. You will see its starting point is your current location, in this case in the Melbourne CBD.



2. The NAB API requires the following to be set, which can be obtained using a calculation as per the screenshot above

swLat South-West latitude coordinates expressed as decimal
neLat North-East latitude coordinates expressed as decimal
neLng North-East longitude coordinates expressed as decimal
swLng South-West longitude coordinates expressed as decimal
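As a rough sketch of that calculation, a SW/NE bounding box can be derived from a centre point and a radius using a simple equirectangular approximation. This is my own illustrative helper, not part of the NAB API:

```python
import math

def bounding_box(lat, lng, radius_km):
    """Approximate a SW/NE bounding box around (lat, lng).

    Uses ~111.32 km per degree of latitude and scales longitude by
    cos(latitude); accurate enough for small radii such as 0.5 km.
    """
    dlat = radius_km / 111.32
    dlng = radius_km / (111.32 * math.cos(math.radians(lat)))
    return {
        "swLat": lat - dlat, "swLng": lng - dlng,
        "neLat": lat + dlat, "neLng": lng + dlng,
    }

# Roughly the Melbourne CBD, with a 0.5 km radius
box = bounding_box(-37.8154, 144.9564, 0.5)
```

The four values returned map directly onto the swLat/swLng/neLat/neLng query parameters above.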

3. The attribute locationType allows you to filter the type of location you're after. I set this to "atm" to find only ATM locations.

4. I also set the attribute fields to "extended", as this gives me detailed information.
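Putting those parameters together, the request URL can be assembled as follows. This is a sketch using only the parameters shown in this post (the x-nab-key API key is sent as a header, not a query parameter):

```python
from urllib.parse import urlencode

BASE = "https://api.developer.nab.com.au/v2/locations"

params = {
    "locationType": "atm",      # only ATM locations
    "searchCriteria": "geo",    # search within a bounding box
    "swLat": -37.81851471355399,
    "swLng": 144.95235719310358,
    "neLat": -37.812155549503025,
    "neLng": 144.96040686020137,
    "fields": "extended",       # detailed information
    "v": 1,
}

# urlencode percent-encodes values and joins them with '&'
url = BASE + "?" + urlencode(params)
```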

5. Once you have the data, here is an example of getting detailed information on ATM locations using the geo location co-ordinates. In this example cURL is good enough to illustrate that

pasapicella@pas-macbook:~/pivotal$ curl -H "x-nab-key: NABAPI-KEY" "https://api.developer.nab.com.au/v2/locations?locationType=atm&searchCriteria=geo&swLat=-37.81851471355399&swLng=144.95235719310358&neLat=-37.812155549503025&neLng=144.96040686020137&fields=extended&v=1" | jq -r
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10371  100 10371    0     0   2016      0  0:00:05  0:00:05 --:--:--  2871
{
  "locationSearchResponse": {
    "totalRecords": 16,
    "viewport": {
      "swLat": -37.81586424048582,
      "swLng": 144.9589117502319,
      "neLat": -37.81109231077813,
      "neLng": 144.96758064976802
    },
    "locations": [
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Melbourne Central",
          "address4": "Lower Ground floor",
          "id": 5058548,
          "key": "atm_3B46",
          "description": "Melbourne Central",
          "address1": "300 Elizabeth Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.812031,
          "longitude": 144.9621768,
          "hours": "Mon-Thu 10.00am-06.00pm, Fri 10.00am-09.00pm, Sat 10.00am-06.00pm, Sun 10.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Melbourne Central",
          "address4": "Ground Floor",
          "address5": "Near La Trobe St Entrance under escalator",
          "id": 5058552,
          "key": "atm_3B56",
          "description": "Melbourne Central",
          "address1": "300 Elizabeth Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.812031,
          "longitude": 144.9621768,
          "hours": "Mon-Thu 10.00am-06.00pm, Fri 10.00am-09.00pm, Sat 10.00am-06.00pm, Sun 10.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Queen Victoria Market",
          "address5": "Outside the market facing the street",
          "id": 5058555,
          "key": "atm_3B61",
          "description": "Queen Victoria Market",
          "address1": "Queen Street",
          "address2": "Corner Therry Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8130009,
          "longitude": 144.9597905,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Target Centre",
          "id": 5058577,
          "key": "atm_3CC7",
          "description": "Target Centre",
          "address1": "236 Bourke Street Mall",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8132227,
          "longitude": 144.9665518,
          "hours": "Mon-Fri 09.00am-05.00pm, Sat-Sun 10.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Queen Victoria Centre",
          "id": 5058614,
          "key": "atm_3F07",
          "description": "Queen Victoria",
          "address1": "228-234 Lonsdale Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8122729,
          "longitude": 144.9622383,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "University of Melbourne",
          "address5": "Kenneth Myer Building",
          "id": 5058653,
          "key": "atm_3G28",
          "description": "KMB Foyer",
          "address1": "30 Royal Parade",
          "suburb": "Parkville",
          "state": "VIC",
          "postcode": "3052",
          "latitude": -37.8149256,
          "longitude": 144.9643156,
          "hours": "Mon-Fri 07.00am-07.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Midtown Plaza",
          "address4": "Shop 8",
          "id": 5058783,
          "key": "atm_3S02",
          "description": "Midtown Plaza",
          "address1": "186 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8131315,
          "longitude": 144.9654723,
          "hours": "Mon-Fri 09.30am-05.00pm, Sat 10.00am-02.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Midtown Plaza",
          "address4": "Shop 8",
          "id": 5058784,
          "key": "atm_3S03",
          "description": "Midtown Plaza",
          "address1": "186 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8131315,
          "longitude": 144.9654723,
          "hours": "Mon-Fri 09.30am-05.00pm, Sat 10.00am-02.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Melbourne NAB House",
          "id": 5058814,
          "key": "atm_3S38",
          "description": "National Bank House",
          "address1": "500 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8154128,
          "longitude": 144.9590017
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Melbourne NAB House",
          "id": 5058815,
          "key": "atm_3S39",
          "description": "National Bank House",
          "address1": "500 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8154128,
          "longitude": 144.9590017,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Melbourne NAB House",
          "id": 5058837,
          "key": "atm_3S67",
          "description": "National Bank House",
          "address1": "500 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8154128,
          "longitude": 144.9590017,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Midtown Plaza",
          "address4": "Shop 8",
          "id": 5058842,
          "key": "atm_3S72",
          "description": "Midtown Plaza",
          "address1": "186 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8131315,
          "longitude": 144.9654723,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Midtown Plaza",
          "id": 5059024,
          "key": "atm_4G04",
          "description": "Midtown Plaza",
          "address1": "194 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8130332,
          "longitude": 144.9654279,
          "hours": "Mon-Tue 09.30am-05.30pm, Wed-Fri 09.30am-09.00pm, Sat 09.30am-05.30pm, Sun 11.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "BOQ ATM",
          "id": 5059452,
          "key": "atm_9036021I",
          "description": "455 Bourke Street",
          "address1": "455 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.81518,
          "longitude": 144.960583
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": false,
          "isDisabilityApproved": false,
          "isAudio": false,
          "source": "rediATM",
          "id": 5060659,
          "key": "atm_C11243",
          "description": "Bourke Street",
          "address1": "460 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.81494,
          "longitude": 144.96002,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": false,
          "isAudio": false,
          "source": "rediATM",
          "address3": "Emporium Melbourne",
          "address5": "ATM 02",
          "id": 5060908,
          "key": "atm_C11662",
          "description": "Emporium Melbourne",
          "address1": "269-321 Lonsdale Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.811932,
          "longitude": 144.963648,
          "hours": "Mon-Wed 10.00am-07.00pm, Thu-Fri 10.00am-09.00pm, Sat-Sun 10.00am-07.00pm"
        }
      }
    ]
  },
  "status": {
    "code": "API-200",
    "message": "Success"
  }
}
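The response above can also be post-processed on the command line. jq is the natural tool, but even plain sed can pull out a key field; here is a rough sketch, assuming the response has been saved to a local file (the file name and the abbreviated two-line sample below are hypothetical stand-ins for the full response):

```shell
#!/bin/sh
# Abbreviated stand-in for the saved API response (hypothetical sample)
cat > atms.json <<'EOF'
          "description": "Melbourne Central",
          "description": "Queen Victoria Market",
EOF

# With jq installed you would use something like:
#   jq -r '.locationSearchResponse.locations[].atm.description' atms.json
# A plain-sed equivalent for the flat "description" fields:
sed -n 's/.*"description": "\([^"]*\)".*/\1/p' atms.json
```

This prints one ATM description per line, which is handy for a quick eyeball of what came back.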
Categories: Fusion Middleware

Configuring Spring Boot Actuator information in Pivotal Cloud Foundry 1.9 Applications Manager

Pas Apicella - Tue, 2017-01-10 22:12
With the release of Pivotal Cloud Foundry 1.9 we added "Spring Boot Actuator in PCF" support to the Applications Manager web UI. In the example below I will show the configuration you need on the application side to get the Apps Manager web UI to show you this information.

Spring Boot Actuator exposes information about a running Spring Boot application via an http endpoint. Get useful diagnostics about an app programmatically using these RESTful APIs:  health, git commit, build information, and so on. Now, developers can view some of this data in PCF Apps Manager. It’s easier to debug and monitor your apps in production, since Apps Manager shows this diagnostics info in context.

The steps are described here, but the steps below are more detailed and provide a sample application to verify this with.

http://docs.pivotal.io/pivotalcf/1-9/console/spring-boot-actuators.html

1. Add the maven dependency "spring-boot-starter-actuator" to your pom.xml
  
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>

2. Ensure you're using Spring Boot 1.5 (snapshots, release candidates, or final), as earlier versions won't work.
  
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.0.BUILD-SNAPSHOT</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

https://spring.io/blog/2017/01/06/spring-boot-1-5-0-rc1-available-now

3. Configure the Info Actuator as per link below

http://docs.pivotal.io/pivotalcf/1-9/console/spring-boot-actuators.html#info-endpoint

Note: It will differ depending on whether you're using Maven or Gradle

Ensure you add this property for GIT integration in your YML or properties file. A YML example:

management:
  info:
    git:
      mode: full

4. The following demo has the required setup for a verification application. Clone and push it to CF as shown below.

Note: review pom.xml for everything that is needed for this to work

https://github.com/papicella/SpringBootPCFPas

$ git clone https://github.com/papicella/SpringBootPCFPas.git
$ mvn package
$ cf push -f manifest-inmemory-db.yml

....

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-springboot-pcf was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djava.security.egd=file:///dev/urando" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher`

Showing health and status for app pas-springboot-pcf in org pivot-papicella / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pas-springboot-pcf-squireless-clicha.cfapps.pez.pivotal.io
last uploaded: Wed Jan 11 03:59:24 UTC 2017
stack: cflinuxfs2
buildpack: java-buildpack=v3.10-offline-https://github.com/cloudfoundry/java-buildpack.git#193d6b7 java-main java-opts open-jdk-like-jre=1.8.0_111 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu      memory         disk           details
#0   running   2017-01-11 03:00:18 PM   207.1%   489.9M of 1G   161.7M of 1G

5. View the application in Application Manager UI as shown below and you will see that a Spring Icon appears next to the application name








More Information

http://docs.pivotal.io/pivotalcf/1-9/console/spring-boot-actuators.html
Categories: Fusion Middleware

Spring Boot Application Consuming NAB FxRates API

Pas Apicella - Sun, 2017-01-08 15:40
National Australia Bank (NAB) recently released a set of BETA API's as per the link below.

https://developer.nab.com.au/

The following example is a Spring Boot Application that consumes the FxRates API. It's all documented on GitHub and you will need a NAB API key to run this demo. It can run stand alone as a FAT JAR through Spring Boot or deployed to Cloud Foundry, instructions appear for both.

https://github.com/papicella/NABApi-fx-demo

 
Categories: Fusion Middleware

Script to tally Application Instance Counts for Pivotal Cloud Foundry

Pas Apicella - Mon, 2016-12-19 20:32
I was recently asked how to determine how many application instances exist in a given ORG at a point in time. The script below does this using the cf curl command, which means you must be logged into your Pivotal Cloud Foundry instance for it to work. You could use the CF REST API directly, but I find cf curl much easier.

CF REST API https://apidocs.cloudfoundry.org/249/

The script below assumes an ORG name of "apples-pivotal-org", so it would make sense to pass this in as a script variable, which is easy enough to do

Prior to running this script it's worth checking your current TARGET endpoints as shown below.

pasapicella@pas-macbook:~/pivotal/PCF/scripts$ cf target

API endpoint:   https://api.run.pivotal.io (API version: 2.65.0)
User:           papicella@pivotal.io
Org:            apples-pivotal-org
Space:          development

Script:

echo "AI Count for applications in a organization.."
echo ""

guids=$(cf curl /v2/apps?q=organization_guid:`cf org apples-pivotal-org --guid` | jq -r ".resources[].metadata.guid")
total=0
for guid in $guids; do
  name=$(cf curl /v2/apps/$guid | jq -r ".entity.name")
  count=$(cf curl /v2/apps/$guid | jq -r ".entity.instances")
  echo -e "App Name: $name , Instances: $count"
  total=$(( $total + $count ))
done

echo "-----"
echo "Total AI's = $total"
echo ""

Output:

pasapicella@pas-macbook:~/pivotal/PCF/scripts$ ./ai-count-org-details.sh
AI Count for applications in a organization..

App Name: pas-telstrawifiapi-client , Instances: 1
App Name: springboot-bootstrap-employee , Instances: 2
App Name: springboot-employee-feign-client , Instances: 1
App Name: greeting-config , Instances: 1
App Name: employee-feign-client-hystrix , Instances: 1
App Name: pas-albums , Instances: 2
App Name: pas-springboot-pcf , Instances: 2
App Name: springboot-typeahead , Instances: 1
-----
Total AI's = 11
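The accumulation logic in the script can be sanity-checked without a live Cloud Foundry endpoint by substituting mocked instance counts for the cf curl lookups; the sample data below mirrors the output above:

```shell
#!/bin/sh
# Mocked per-app instance counts standing in for the `cf curl` lookups
counts="1 2 1 1 1 2 2 1"

total=0
for count in $counts; do
  total=$(( total + count ))
done

echo "Total AI's = $total"
```

The same `total=$(( total + count ))` arithmetic is what the real script runs per application GUID.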
Categories: Fusion Middleware

Spring Boot with Feign and Twitter Typeahead JS library

Pas Apicella - Tue, 2016-12-13 18:35
I previously blogged about a demo using Spring Boot and Feign making an external REST call to a service I had created. The real purpose of that demo "http://theblasfrompas.blogspot.com.au/2016/12/spring-boot-feign-client-accessing.html" was to use Twitter Typeahead for auto-completion, which is the demo on the GitHub link below. The returned data is now used in an INPUT text field for auto-completion as the user types in the country name

https://github.com/papicella/FeignClientExternalSpringBoot


Categories: Fusion Middleware

Command Line and Vim Tips from a Java Programmer

I’m always interested in learning more about useful development tools. In college, most programmers get an intro to the Linux command line environment, but I wanted to share some commands I use daily that I’ve learned since graduation.

Being comfortable on the command line is a great skill to have when a customer is looking over your shoulder on a Webex. They could be watching a software demo or deployment to their environment. It can also be useful when learning a new code base or working with a product with a large, unfamiliar directory structure with lots of logs.

If you’re on Windows, you can use Cygwin to get a Unix-like CLI to make these commands available.

Useful Linux commands Find

The command find helps you find files by recursively searching subdirectories. Here are some examples:

find .
    Prints all files and directories under the current directory.

find . -name '*.log'
  Prints all files and directories that end in “.log”.

find /tmp -type f -name '*.log'
   Prints only files in the directory “/tmp” that end in “.log”.

find . -type d
   Prints only directories.

find . -maxdepth 2
     Prints all files and directories under the current directory, and subdirectories (but not sub-subdirectories).

find . -type f -exec ls -la {} \;
     The 
-exec
flag runs a command against each file instead of printing the name. In this example, it will run 
ls -la filename
  on each file under the current directory. The curly braces take the place of the filename.
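These variations are safe to try out against a scratch directory; here is a self-contained run-through (all paths below are throwaway):

```shell
#!/bin/sh
# Build a small scratch tree to exercise the find variations above
tmp=$(mktemp -d)
mkdir -p "$tmp/logs"
touch "$tmp/app.log" "$tmp/logs/server.log" "$tmp/readme.txt"

find "$tmp" -type f -name '*.log'          # prints both .log files
find "$tmp" -type d                        # prints $tmp and $tmp/logs

log_count=$(find "$tmp" -type f -name '*.log' | wc -l)
echo "log files found: $log_count"

rm -rf "$tmp"                              # clean up the scratch tree
```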

Grep

The command grep lets you search text for lines that match a specific string. It can be helpful to add your initials to debug statements in your code and then grep for them to find them in the logs.

grep foo filename
  Prints each line in the file “filename” that matches the string “foo”.

grep foo\\\|bar filename
Grep supports regular expressions, so this prints each line in the file that matches “foo” or “bar”.

grep -i foo filename
  Add -i for case insensitive matching.

grep foo *
  Use the shell wildcard, an asterisk, to search all files in the current directory for the string “foo”.

grep -r foo *
  Recursively search all files and directories in the current directory for a string.

grep -rnH foo filename
  Add -n to print line numbers and -H to print the filename on each line.

find . -type f -name '*.log' -exec grep -nH foo {} \;
  Combining find and grep can let you easily search each file that matches a certain name for a string. This will print each line that matches “foo” along with the file name and line number in each file that ends in “.log” under the current directory.

ps -ef | grep processName
  The output of any command can be piped to grep, and the lines of STDOUT that match the expression will be printed. For example, you could use this to find the pid of a process with a known name.

cat file.txt | grep -v foo
  You can also use -v to print all lines that don’t match an expression.
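As with find, grep is easy to experiment with against a throwaway file:

```shell
#!/bin/sh
# Exercise a few of the grep variations above against sample text
tmp=$(mktemp -d)
printf 'foo one\nbar two\nFOO three\n' > "$tmp/sample.txt"

grep foo "$tmp/sample.txt"          # matches "foo one" only
grep -i foo "$tmp/sample.txt"       # case-insensitive: also matches "FOO three"
grep -v foo "$tmp/sample.txt"       # inverted: lines without lowercase "foo"

match_count=$(grep -ic foo "$tmp/sample.txt")
echo "case-insensitive matches: $match_count"

rm -rf "$tmp"
```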

Ln

The command ln lets you create links. I generally use this to create links in my home directory to quickly cd into long directory paths.

ln -s /some/really/long/path foo
  The -s is for symbolic, and the long path is the target. The output of
ls -la
 in this case would be
foo -> /some/really/long/path
 .
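A quick self-contained illustration of the symlink example (scratch paths only):

```shell
#!/bin/sh
# Create a deep directory and a short symbolic link to it
tmp=$(mktemp -d)
mkdir -p "$tmp/some/really/long/path"
ln -s "$tmp/some/really/long/path" "$tmp/foo"

ls -la "$tmp" | grep foo            # shows: foo -> .../some/really/long/path
target=$(readlink "$tmp/foo")
echo "foo points to: $target"

rm -rf "$tmp"
```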

Bashrc

The Bashrc is a shell script that gets executed whenever Bash is started in an interactive terminal. It is located in your home directory,

~/.bashrc
 . It provides a place to edit your $PATH, $PS1, or add aliases and functions to simplify commonly used tasks.

Aliases are a way you can define your own command line commands. Here are a couple useful aliases I’ve added to my .bashrc that have saved a lot of keystrokes on a server where I’ve installed Oracle WebCenter:

WC_DOMAIN=/u01/oracle/fmw/user_projects/domains/wc_domain
alias assets="cd /var/www/html"
alias portalLogs="cd $WC_DOMAIN/servers/WC_Spaces/logs"
alias domain="cd $WC_DOMAIN"
alias components="cd $WC_DOMAIN/ucm/cs/custom"
alias rpl="portalLogs; vim -R WC_Spaces.out"

After making changes to your .bashrc, you can load them with

source ~/.bashrc
 . Now I can type
rpl
 , short for Read Portal Logs, from anywhere to quickly jump into the WebCenter portal log file.

alias grep="grep --color"

This grep alias adds the --color option to all of my grep commands. All of the above grep commands still work, but now all of the matches will be highlighted.

Vim

Knowing Vim key bindings can be convenient and efficient if you’re already working on the command line. Vim has many built-in shortcuts to make editing files quick and easy.

Run 

vim filename.txt
  to open a file in Vim. Vim starts in Normal Mode, where most characters have a special meaning, and typing a colon,
:
 , lets you run Vim commands. For example, typing 
Shift-G
  will jump to the end of the file, and typing
:q
 while in normal mode will quit Vim. Here is a list of useful commands:

:q
  Quits Vim

:w
  Write the file (save)

:wq
  Write and quit

:q!
  Quit and ignore warnings that you didn’t write the file

:wq!
  Write and quit, ignoring permission warnings

i
  Enter Insert Mode where you can edit the file like a normal text editor

a
  Enter Insert Mode and place the cursor after the current character

o
  Insert a blank line after the current line and enter Insert Mode

[escape]
  The escape button exits insert mode

:150
  Jump to line 150

shift-G
  Jump to the last line

gg
  Jump to the first line

/foo
  Search for the next occurrence of “foo”. Regex patterns work in the search.

?foo
  Search for the previous occurrence of “foo”

n
  Go to the next match

N
Go to the previous match

*
  Search for the next occurrence of the searched word under the cursor

#
  Search for the previous occurrence of the searched word under the cursor

w
  Jump to the next word

b
  Jump to the previous word

``
  Jump back to the position before the last jump

dw
  Delete the word starting at the cursor

cw
  Delete the word starting at the cursor and enter insert mode

c$
  Delete everything from the cursor to the end of the line and enter insert mode

dd
  Delete the current line

D
  Delete everything from the cursor to the end of the line

u
  Undo the last action

ctrl-r
  Redo the last action

d[up]
  Delete the current line and the line above it. “[up]” is for the up arrow.

d[down]
  Delete the current line and the line below it

d3[down]
  Delete the current line and the three lines below it

r[any character]
  Replace the character under the cursor with another character

~
  Toggle the case (upper or lower) of the character under the cursor

v
  Enter Visual Mode. Use the arrow keys to highlight text.

shift-V
  Enter Visual Mode and highlight whole lines at a time.

ctrl-v
  Enter Visual Mode but highlight blocks of characters.

=
  While in Visual Mode, = will auto format highlighted text.

c
  While in Visual Mode, c will cut the highlighted text.

y
  While in Visual Mode, y will yank (copy) the highlighted text.

p
  In Normal Mode, p will paste the text in the buffer (that’s been yanked or cut).

yw
  Yank the text from the cursor to the end of the current word.

:sort
  Highlight lines in Visual Mode, then use this command to sort them alphabetically.

:s/foo/bar/g
  Highlight lines in Visual Mode, then use search and replace to replace all instances of “foo” with “bar”.

:s/^/#/
  Highlight lines in Visual Mode, then add # at the start of each line. This is useful to comment out blocks of code.

:s/$/;/
Highlight lines in Visual Mode, then add a semicolon at the end of each line.

:set paste
  This will turn off auto indenting. Use it before pasting into Vim from outside the terminal (you’ll want to be in insert mode before you paste).

:set nopaste
  Make auto indenting return to normal.

:set nu
  Turn on line numbers.

:set nonu
  Turn off line numbers.

:r!pwd
  Read the output of a command into Vim. In this example, we’ll read in the current directory.

:r!sed -n 5,10p /path/to/file
  Read lines 5 through 10 from another file in Vim. This can be a good way to copy and paste between files in the terminal.

:[up|down]
  Type a colon and then use the arrow keys to browse through your command history. If you type letters after the colon, it will only go through commands that match them (e.g., typing :se and then up would help you find ":set paste" quickly).

Vimrc

The Vimrc is a configuration file that Vim loads whenever it starts up, similar to the Bashrc. It is in your home directory.

Here is a basic Vimrc I’d recommend for getting started if you don’t have one already. Run

vim ~/.vimrc
and paste in the following:

set backspace=2         " backspace in insert mode works like normal editor
syntax on               " syntax highlighting
filetype indent on      " activates indenting for files
set autoindent          " auto indenting
set number              " line numbers
colorscheme desert      " use the desert color scheme
set listchars=tab:>-,trail:.,extends:>,precedes:<
set list                " Set up whitespace characters
set ic                  " Ignore case by default in searches
set statusline+=%F      " Show the full path to the file
set laststatus=2        " Make the status line always visible

 

Perl

Perl comes installed by default on Linux, so it is worth mentioning that it has some extensive command line capabilities. If you have ever tried to grep for a string that matches a line in a minified Javascript file, you can probably see the benefit of being able to filter out lines longer than 500 characters.

grep -r foo * | perl -nle'print if 500 > length'

Conclusion

I love learning the tools that are available in my development environment, and it is exciting to see how they can help customers as well.

Recently, I was working with a customer and we were running into SSL issues. Java processes can be run with the option 

-Djavax.net.ssl.trustStore=/path/to/trustStore.jks
  to specify which keystore to use for SSL certificates. It was really easy to run
ps -ef | grep trustStore
to quickly identify which keystore we needed to import certificates into.

I’ve also been able to use various find and grep commands to search through unfamiliar directories after exporting metadata from Oracle’s MDS Repository.

Even if you aren’t on the command line, I’d encourage everyone to learn something new about their development environment. Feel free to share your favorite Vim and command line tips in the comments!

Further reading

http://www.vim.org/docs.php

https://www.gnu.org/software/bash/manual/bash.html

http://perldoc.perl.org/perlrun.html

The post Command Line and Vim Tips from a Java Programmer appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Webinar Recording: Ryan Companies Leverages Fishbowl’s ControlCenter for Oracle WebCenter to Enhance Document Control Leading to Improved Knowledge Management

On Thursday, December 8th, Fishbowl had the privilege of presenting a webinar with Mike Ernst, VP of Construction Operations at Ryan Companies, regarding their use case for Fishbowl's ControlCenter product for controlled document management. Mike was joined by Fishbowl's ControlCenter product manager, Kim Negaard, who provided an overview of how the solution was implemented and how it is being used at Ryan.

Ryan Companies had been using Oracle WebCenter for many years, but they were looking for some additional document management functionality and a more intuitive interface to help improve knowledge management at the company. Their main initiative was to make it easier for users to access and manage their corporate knowledge documents (policies and procedures), manuals (safety), and real estate documents (leases) throughout each document’s life cycle.

Mike provided some interesting stats that factored into their decision to implement ControlCenter for WebCenter:

  • $16k – the average cost of “reinventing” procedures per project (ex. checklists and templates)
  • $25k – the average cost of estimating incorrect labor rates
  • 3x – salary to onboard someone new when an employee leaves the company

To hear more about how Ryan found knowledge management success with ControlCenter for WebCenter, watch the webinar recording: https://youtu.be/_NNFRV1LPaY

The post Webinar Recording: Ryan Companies Leverages Fishbowl’s ControlCenter for Oracle WebCenter to Enhance Document Control Leading to Improved Knowledge Management appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Spring Boot / Feign Client accessing external service

Pas Apicella - Thu, 2016-12-08 17:49
Previously we used Feign to create clients for our own services, which are registered on our Eureka Server using a service name as shown in the previous blog post http://theblasfrompas.blogspot.com.au/2016/11/declarative-rest-client-feign-with_8.html. It's not unusual that you'd want to implement an external rest endpoint, basically an endpoint that's not discoverable by Eureka. In that case, you can use the url property on the @FeignClient annotation,
which gracefully supports property injection. Here is an example of this.

Full example on GitHub as follows

https://github.com/papicella/FeignClientExternalSpringBoot

1. Start by adding the correct Maven dependencies. The one you need is shown below; there would be others if you want to use a web-based Spring Boot project, etc.
  
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-feign</artifactId>
</dependency>

2. We are going to consume this external service as follows

http://country.io/names.json

To do that we create a simple interface as follows
  
package pas.au.pivotal.feign.external;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(name = "country-service-client", url = "http://country.io")
public interface CountryServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/names.json")
    String getCountries();
}

3. In this example I have created a RestController to consume this REST service and test it, because that's the easiest way to do so. We simply autowire the CountryServiceClient interface into the RestController to make those external calls through Feign.
  
package pas.au.pivotal.feign.external.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import pas.au.pivotal.feign.external.CountryServiceClient;

import java.util.Map;

@RestController
public class CountryRest
{
    Logger logger = LoggerFactory.getLogger(CountryRest.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    @Autowired
    private CountryServiceClient countryServiceClient;

    @RequestMapping(value = "/countries", method = RequestMethod.GET,
                    produces = "application/json")
    public String allCountries()
    {
        String countries = countryServiceClient.getCountries();

        return countries;
    }

    @RequestMapping(value = "/country_names", method = RequestMethod.GET)
    public String[] countryNames()
    {
        String countries = countryServiceClient.getCountries();

        Map<String, Object> countryMap = parser.parseMap(countries);

        String[] countryArray = new String[countryMap.size()];
        logger.info("Size of countries " + countryArray.length);

        int i = 0;
        for (Map.Entry<String, Object> entry : countryMap.entrySet()) {
            countryArray[i] = (String) entry.getValue();
            i++;
        }

        return countryArray;
    }
}
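
The value-extraction loop in countryNames() can be written more compactly: a JSON object parsed into a Map can be flattened to an array of its values in one call. A minimal, standalone sketch with hand-built sample data in place of the live country.io response:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CountryNamesSketch {

    // Same result as the for-loop above: collect the map's values into a String array.
    static String[] countryNames(Map<String, Object> countryMap) {
        return countryMap.values().toArray(new String[0]);
    }

    public static void main(String[] args) {
        // Stand-in for parser.parseMap(countries)
        Map<String, Object> sample = new LinkedHashMap<>();
        sample.put("AU", "Australia");
        sample.put("NZ", "New Zealand");

        String[] names = countryNames(sample);
        System.out.println(names.length);  // 2
        System.out.println(names[0]);      // Australia
    }
}
```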

4. Of course we have our main class to bootstrap the application, and the project includes the "spring-boot-starter-web" Maven dependency to start a Tomcat server for us.
  
package pas.au.pivotal.feign.external;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients
public class FeignClientExternalSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(FeignClientExternalSpringBootApplication.class, args);
    }
}

5. Ensure your application.properties or application.yml has the following properties to disable timeouts.

feign:
  hystrix:
    enabled: false

hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false

6. Run the main class "FeignClientExternalSpringBootApplication"

Access as follows

http://localhost:8080/countries

Categories: Fusion Middleware

Webinar: Quality, Safety, Knowledge Management with Oracle WebCenter Content and ControlCenter

DATE: THURSDAY, DECEMBER 8, 2016
TIME: 10:00 A.M. PST / 1:00 P.M. EST

Join Ryan Companies Vice President of Construction Operations, Mike Ernst, and Fishbowl Solutions Product Manager, Kim Negaard, to learn how Ryan Companies, a leading national construction firm, found knowledge management success with ControlCenter for Oracle WebCenter Content.

In this webinar, you’ll hear first-hand how ControlCenter has been implemented as part of Ryan’s Integrated Project Delivery Process helping them create a robust knowledge management system to promote consistent and effective operations across multiple regional offices. You’ll also learn how ControlCenter’s intuitive, modern user experience enabled Ryan to easily find documents across devices, implement reoccurring review cycles, and control both company-wide and project-specific documents throughout their lifecycle.

Register today.

The post Webinar: Quality, Safety, Knowledge Management with Oracle WebCenter Content and ControlCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Deploying Spring Boot Applications on Google App Engine (GAE)

Pas Apicella - Tue, 2016-11-22 02:07
I previously blogged about how to deploy a Spring Boot application to flexible VMs on Google Cloud Platform, as shown below.

http://theblasfrompas.blogspot.com.au/2016/09/spring-boot-on-google-cloud-platform-gcp.html

In the example below I use Google App Engine (GAE) to deploy a Spring Boot application without using a flexible VM, which is a lot faster and is what I originally wanted to do when I did this previously. In short, this uses the [Standard environment] option for GAE.

Spring Boot uses Servlet 3.0 APIs to initialize the ServletContext (register Servlets etc.) so you can’t use the same application out of the box in a Servlet 2.5 container. It is however possible to run a Spring Boot application on an older container with some special tools. If you include org.springframework.boot:spring-boot-legacy as a dependency (maintained separately to the core of Spring Boot and currently available at 1.0.2.RELEASE), all you should need to do is create a web.xml and declare a context listener to create the application context and your filters and servlets. The context listener is a special purpose one for Spring Boot, but the rest of it is normal for a Spring application in Servlet 2.5

Visit for more Information:

   http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-servlet-2-5 

Steps

1. In order to use Servlet 2.5 and a web.xml, we will need to install the spring-boot-legacy dependency into the local Maven repository as shown below.

$ git clone https://github.com/scratches/spring-boot-legacy
$ cd spring-boot-legacy
$ mvn install

2. Clone the Git repo as shown below

$ git clone https://github.com/papicella/GoogleAppEngineSpringBoot.git

3. Edit the file ./src/main/webapp/WEB-INF/appengine-web.xml to specify the correct APPLICATION ID, which we will also target in step 5.
  
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>fe-papicella</application>
    <version>5</version>
    <threadsafe>true</threadsafe>
    <manual-scaling>
        <instances>1</instances>
    </manual-scaling>
</appengine-web-app>

4. Package as shown below

$ mvn package

5. Target your project for deployment as follows

pasapicella@pas-macbook:~/piv-projects/GoogleAppEngineSpringBoot$ gcloud projects list
PROJECT_ID              NAME                    PROJECT_NUMBER
bionic-vertex-150302    AppEngineSpringBoot     97889500330
fe-papicella            FE-papicella            1049163203721
pas-spring-boot-on-gcp  Pas Spring Boot on GCP  1043917887789

pasapicella@pas-macbook:~/piv-projects/GoogleAppEngineSpringBoot$ gcloud config set project fe-papicella
Updated property [core/project].

6. Deploy as follows

mvn appengine:deploy

Finally, once deployed, you can access your application using its endpoint, which is displayed in the dashboard of the GCP console.

Project in IntelliJ IDEA

NOTE: Google App Engine does not allow JMX, so you have to switch it off in a Spring Boot app (set spring.jmx.enabled=false in application.properties).

application.properties

spring.jmx.enabled=false

More Information

Full working example with code as follows on GitHub

https://github.com/papicella/GoogleAppEngineSpringBoot
Categories: Fusion Middleware

Uploading Tiles into Pivotal Cloud Foundry Operations Manager from the Ops Manager VM directly

Pas Apicella - Fri, 2016-11-18 00:15
When deploying PCF, you start by deploying Ops Manager. This is basically a VM that you deploy into your IaaS system of choice and it orchestrates the PCF installation. The installation of PCF is done by you through a web interface that runs on the Ops Manager VM. Into that web interface, you can load various "tiles". Each tile provides a specific set of functionality.

For example, Ops Manager comes with a tile for Bosh Director. This is the only out-of-the-box tile, as all the other tiles depend on it. Most users will first install the PCF tile. This provides the Cloud Foundry installation. After that, tiles generally provide functionality for services. Popular tiles include MySQL, RabbitMQ and Redis. There are quite a few tiles in total now, you can see them all listed on https://network.pivotal.io.



Some tiles are quite large; for example, the "Elastic Runtime" tile in PCF 1.8 is 5G, so from Australia I don't want to download a 5G file to my laptop and then upload it into the Ops Manager Web UI. Here is how you can import tiles directly from the Ops Manager VM itself.

1. Log into the Ops Manager VM using SSH with your keyfile.

Note: 0.0.0.0 is a bogus IP address for obvious reasons

pasapicella@pas-macbook:~/pivotal/GCP/install/ops-manager-key$ ssh -i ubuntu-key ubuntu@0.0.0.0
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-47-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Nov 16 23:36:27 UTC 2016

  System load:  0.0                Processes:           119
  Usage of /:   36.4% of 49.18GB   Users logged in:     0
  Memory usage: 37%                IP address for eth0: 10.0.0.0
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

Your Hardware Enablement Stack (HWE) is supported until April 2019.

Last login: Wed Nov 16 23:36:30 2016 from 0.0.0.0
ubuntu@myvm-gcp:~$

2. Log into https://network.pivotal.io/ and click on "Edit Profile" as shown below


3. Locate your "API token" and record it we will need it shortly

4. In this example I am uploading the "Pivotal Cloud Foundry Elastic Runtime" tile, so navigate to the correct file and select the "i" icon to reveal the API endpoint for the tile.


5. Issue a wget command with the following format. This will download the 5G file into the HOME directory. Wait for this to complete before moving to the next step.

wget -O {file-name} --post-data="" --header="Authorization: Token {TOKEN-FROM-STEP-3}" {API-LOCATION-URL}

$ wget -O cf-1.8.14-build.7.pivotal  --post-data="" --header="Authorization: Token {TOKEN-FROM-STEP-3}" https://network.pivotal.io/api/v2/products/elastic-runtime/releases/2857/product_files/9161/download

6. Retrieve an access token, which requires the username/password for the Ops Manager admin account.

curl -s -k -H 'Accept: application/json;charset=utf-8' -d 'grant_type=password' -d 'username=admin' -d 'password=OPSMANAGER-ADMIN-PASSWD' -u 'opsman:' https://localhost/uaa/oauth/token

$ curl -s -k -H 'Accept: application/json;charset=utf-8' -d 'grant_type=password' -d 'username=admin' -d 'password=welcome1' -u 'opsman:' https://localhost/uaa/oauth/token
{"access_token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImxlZ2Fj ...... "

7. Finally, upload the tile to be imported into the Ops Manager UI using the following format. Make sure you use the correct file name as per the download in step 5.

curl -v -H "Authorization: Bearer STEP6-ACCESS-TOKEN" 'https://localhost/api/products' -F 'product[file]=@/home/ubuntu/cf-1.8.14-build.7.pivotal'  -X POST -k

Once complete you should see the tile in Ops Manager as shown below. This is a much faster way to upload tiles, especially from Australia.



More Information

https://docs.pivotal.io/pivotalcf/1-8/customizing/pcf-interface.html
Categories: Fusion Middleware

Installing Pivotal Cloud Foundry (PCF) on Google Cloud Platform (GCP)

Pas Apicella - Wed, 2016-11-16 21:50
I decided to install PCF 1.8 onto Google Cloud Platform today and I thought the experience was fantastic and very straight forward. The GCP Console is fantastic and very powerful indeed. The steps to install it are as follows

http://docs.pivotal.io/pivotalcf/1-8/customizing/gcp.html

Here are some screen shots you would expect to see along the way when using Operations Manager

Screen Shots

Finally, once installed, here is how to create an ORG and USER and get started using the CLI. Note you must log in as ADMIN to get started; at the end I log in as the user who will be the OrgManager.

** Target my PCF Instance **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf api https://api.system.pas-apples.online --skip-ssl-validation
Setting api endpoint to https://api.system.pas-apples.online...
OK


API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
Not logged in. Use 'cf login' to log in.

** Login as ADMIN **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf login -u admin -p YYYY -o system -s system
API endpoint: https://api.system.pas-apples.online
Authenticating...
OK

Targeted org system

Targeted space system

API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
User:           admin
Org:            system
Space:          system

** Create Org **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf create-org gcp-pcf-org
Creating org gcp-pcf-org as admin...
OK

Assigning role OrgManager to user admin in org gcp-pcf-org ...
OK

TIP: Use 'cf target -o gcp-pcf-org' to target new org

** Create a USER **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf create-user pas YYYY
Creating user pas...
OK

TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'

** Set ORG Role **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf set-org-role pas gcp-pcf-org OrgManager
Assigning role OrgManager to user pas in org gcp-pcf-org as admin...
OK

** Target the newly created ORG **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf target -o gcp-pcf-org

API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
User:           admin
Org:            gcp-pcf-org
Space:          No space targeted, use 'cf target -s SPACE'

** Create a SPACE **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf create-space development
Creating space development in org gcp-pcf-org as admin...
OK
Assigning role RoleSpaceManager to user admin in org gcp-pcf-org / space development as admin...
OK
Assigning role RoleSpaceDeveloper to user admin in org gcp-pcf-org / space development as admin...
OK

TIP: Use 'cf target -o "gcp-pcf-org" -s "development"' to target new space

** Set Some Space Roles **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf set-space-role pas gcp-pcf-org development SpaceDeveloper
Assigning role RoleSpaceDeveloper to user pas in org gcp-pcf-org / space development as admin...
OK
pasapicella@pas-macbook:~/pivotal/GCP/install$ cf set-space-role pas gcp-pcf-org development SpaceManager
Assigning role RoleSpaceManager to user pas in org gcp-pcf-org / space development as admin...
OK

** Login as PAS user and target the correct ORG/SPACE **

pasapicella@pas-macbook:~/pivotal/GCP/install$ cf login -u pas -p YYYY -o gcp-pcf-org -s development
API endpoint: https://api.system.pas-apples.online
Authenticating...
OK

Targeted org gcp-pcf-org

Targeted space development

API endpoint:   https://api.system.pas-apples.online (API version: 2.58.0)
User:           pas
Org:            gcp-pcf-org
Space:          development

Lets push a simple application

Application manifest.yml

pasapicella@pas-macbook:~/piv-projects/PivotalSpringBootJPA$ cat manifest-inmemory-db.yml
applications:
- name: pas-albums
  memory: 512M
  instances: 1
  random-route: true
  path: ./target/PivotalSpringBootJPA-0.0.1-SNAPSHOT.jar
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom

Deploy

pasapicella@pas-macbook:~/piv-projects/PivotalSpringBootJPA$ cf push -f manifest-inmemory-db.yml
Using manifest file manifest-inmemory-db.yml

Creating app pas-albums in org gcp-pcf-org / space development as pas...
OK

Creating route pas-albums-gloomful-synapse.apps.pas-apples.online...
OK

Binding pas-albums-gloomful-synapse.apps.pas-apples.online to pas-albums...
OK

Uploading pas-albums...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app341113312
Uploading 31.6M, 195 files
Done uploading
OK

Starting app pas-albums in org gcp-pcf-org / space development as pas...

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started

OK

App pas-albums was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djava.security.egd=file:///dev/urando" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher`

Showing health and status for app pas-albums in org gcp-pcf-org / space development as pas...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-albums-gloomful-synapse.apps.pas-apples.online
last uploaded: Thu Nov 17 03:39:04 UTC 2016
stack: cflinuxfs2
buildpack: java-buildpack=v3.8.1-offline-https://github.com/cloudfoundry/java-buildpack.git#29c79f2 java-main java-opts open-jdk-like-jre=1.8.0_91-unlimited-crypto open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu      memory           disk         details
#0   running   2016-11-17 02:39:57 PM   142.6%   333.1M of 512M   161M of 1G

Get Route to Application

pasapicella@pas-macbook:~/piv-projects/PivotalSpringBootJPA$ cf apps
Getting apps in org gcp-pcf-org / space development as pas...
OK

name         requested state   instances   memory   disk   urls
pas-albums   started           1/1         512M     1G     pas-albums-gloomful-synapse.apps.pas-apples.online

More Information

https://cloud.google.com/solutions/cloud-foundry-on-gcp
Categories: Fusion Middleware

Accessing the Cloud Foundry REST API from SpringBoot

Pas Apicella - Mon, 2016-11-14 17:43
Accessing the Cloud Foundry REST API is simple enough to do as shown in the example below using curl we can list all our organizations.

Cloud Foundry REST API - https://apidocs.cloudfoundry.org/246/

Below shows just the organization names, which I am filtering using jq; if you want to see all the output then remove the pipe to jq. You have to be logged in to use "cf oauth-token".

pasapicella@pas-macbook:~/apps$ curl -k "https://api.run.pivotal.io/v2/organizations" -X GET -H "Authorization: `cf oauth-token`" | jq -r ".resources[].entity.name"

APJ
apples-pivotal-org
Suncorp

In the example below I will show how you would invoke this REST API using SpringBoot's RestTemplate.

1. Firstly we need to retrieve our bearer token, as we will need it for all API calls into the CF REST API. The code below will retrieve it for us using the RestTemplate.
  
package com.pivotal.platform.pcf;

import org.apache.tomcat.util.codec.binary.Base64;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

import java.util.Arrays;
import java.util.Map;

public class Utils
{
    private final static String username = "papicella@pivotal.io";
    private final static String password = "PASSWORD";
    private static final Logger log = LoggerFactory.getLogger(Utils.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    public static String getAccessToken ()
    {
        String uri = "https://login.run.pivotal.io/oauth/token";
        String data = "username=%s&password=%s&client_id=cf&grant_type=password&response_type=token";
        RestTemplate restTemplate = new RestTemplate();

        // HTTP POST call with data

        HttpHeaders headers = new HttpHeaders();

        headers.add("Authorization", "Basic " + encodePassword());
        headers.add("Content-Type", "application/x-www-form-urlencoded");

        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));

        String postArgs = String.format(data, username, password);

        HttpEntity<String> requestEntity = new HttpEntity<String>(postArgs, headers);

        String response = restTemplate.postForObject(uri, requestEntity, String.class);

        Map<String, Object> jsonMap = parser.parseMap(response);

        String accessToken = (String) jsonMap.get("access_token");

        return accessToken;
    }

    private static String encodePassword()
    {
        String auth = "cf:";
        byte[] plainCredsBytes = auth.getBytes();
        byte[] base64CredsBytes = Base64.encodeBase64(plainCredsBytes);
        return new String(base64CredsBytes);
    }
}

To achieve the same thing as above using curl looks as follows; I have stripped the actual bearer token as it is a lot of text.

pasapicella@pas-macbook:~$ curl -v -XPOST -H "Accept: application/json" -u "cf:" --data "username=papicella@pivotal.io&password=PASSWORD&client_id=cf&grant_type=password&response_type=token" https://login.run.pivotal.io/oauth/token

...

{"access_token":"YYYYYYYYYYY ....","token_type":"bearer","refresh_token":"3dd9a2b63f3640c38eb8220e2ae88dfc-r","expires_in":599,"scope":"openid uaa.user cloud_controller.read password.write cloud_controller.write","jti":"c3706c86e376445686a0dd289262bbfa"}

2. Once we have the bearer token, we can make calls to the CF REST API as shown below. The code simply fetches the bearer token before each call, and then we are free to shape the output. The "getAllApps" method returns the raw JSON output, while "getAllOrgs" strips out what we don't want and returns a list of POJOs defining exactly what we want.
  
package com.pivotal.platform.pcf;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.pivotal.platform.pcf.beans.Organization;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.*;

@RestController
public class CFRestAPISpringBoot
{
    private RestTemplate restTemplate = new RestTemplate();
    private static final Logger log = LoggerFactory.getLogger(CFRestAPISpringBoot.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    @RequestMapping(value = "/cf-apps", method = RequestMethod.GET, path = "/cf-apps")
    public String getAllApps ()
    {
        String uri = "https://api.run.pivotal.io/v2/apps";

        String accessToken = Utils.getAccessToken();

        // Make CF REST API call for Applications
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", String.format("Bearer %s", accessToken));
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));

        HttpEntity entity = new HttpEntity(headers);

        log.info("CF REST API Call - " + uri);

        HttpEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET, entity, String.class);

        return response.getBody();
    }

    @RequestMapping(value = "/cf-orgs", method = RequestMethod.GET, path = "/cf-orgs")
    public List<Organization> getAllOrgs ()
    {
        String uri = "https://api.run.pivotal.io/v2/organizations";

        String accessToken = Utils.getAccessToken();

        // Make CF REST API call for Organizations
        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", String.format("Bearer %s", accessToken));
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));

        HttpEntity entity = new HttpEntity(headers);

        log.info("CF REST API Call - " + uri);
        HttpEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET, entity, String.class);

        log.info(response.getBody());

        Map<String, Object> jsonMap = parser.parseMap(response.getBody());

        List<Object> resourcesList = (List<Object>) jsonMap.get("resources");
        ObjectMapper mapper = new ObjectMapper();
        ArrayList<Organization> orgs = new ArrayList<Organization>();

        for (Object item : resourcesList)
        {
            Map map = (Map) item;

            Iterator entries = map.entrySet().iterator();

            while (entries.hasNext())
            {
                Map.Entry thisEntry = (Map.Entry) entries.next();
                if (thisEntry.getKey().toString().equals("entity"))
                {
                    Map entityMap = (Map) thisEntry.getValue();
                    Organization org =
                        new Organization((String) entityMap.get("name"),
                                         (String) entityMap.get("status"),
                                         (String) entityMap.get("spaces_url"));
                    log.info(org.toString());
                    orgs.add(org);
                }
            }
        }

        return orgs;
    }
}
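
The iterator/while dance in getAllOrgs() can also be expressed with streams. A standalone sketch of the same "pull the entity map out of each resource" idea, fed with hand-built sample data instead of a live API response:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OrgExtractionSketch {

    // Extract the "name" field from each resource's nested "entity" map.
    @SuppressWarnings("unchecked")
    static List<String> orgNames(List<Map<String, Object>> resources) {
        return resources.stream()
                .map(r -> (Map<String, Object>) r.get("entity"))
                .map(e -> (String) e.get("name"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Shape mirrors the CF REST API response: resources -> [{metadata, entity}, ...]
        Map<String, Object> entity = new HashMap<>();
        entity.put("name", "apples-pivotal-org");
        entity.put("status", "active");

        Map<String, Object> resource = new HashMap<>();
        resource.put("metadata", new HashMap<>());
        resource.put("entity", entity);

        System.out.println(orgNames(List.of(resource)));  // [apples-pivotal-org]
    }
}
```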

3. Of course we have the standard Spring Boot main class, which ensures we use an embedded Tomcat server to serve the REST endpoints
  
package com.pivotal.platform.pcf;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootCfRestApiApplication {

    public static void main(String[] args)
    {
        SpringApplication.run(SpringBootCfRestApiApplication.class, args);
    }
}

4. The POJO is as follows
  
package com.pivotal.platform.pcf.beans;

public final class Organization
{
    private String name;
    private String status;
    private String spacesUrl;

    public Organization()
    {
    }

    public Organization(String name, String status, String spacesUrl) {
        this.name = name;
        this.status = status;
        this.spacesUrl = spacesUrl;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getStatus() {
        return status;
    }

    public void setStatus(String status) {
        this.status = status;
    }

    public String getSpacesUrl() {
        return spacesUrl;
    }

    public void setSpacesUrl(String spacesUrl) {
        this.spacesUrl = spacesUrl;
    }

    @Override
    public String toString() {
        return "Organization{" +
               "name='" + name + '\'' +
               ", status='" + status + '\'' +
               ", spacesUrl='" + spacesUrl + '\'' +
               '}';
    }
}

Once our Spring Boot application is running we can simply invoke one of the REST endpoints as follows, and it will log in as well as make the REST call using the CF REST API under the covers for us.

pasapicella@pas-macbook:~/apps$ curl http://localhost:8080/cf-orgs | jq -r
[
  {
    "name": "APJ",
    "status": "active",
    "spacesUrl": "/v2/organizations/b7ec654f-f7fd-40e2-a4f7-841379d396d7/spaces"
  },
  {
    "name": "apples-pivotal-org",
    "status": "active",
    "spacesUrl": "/v2/organizations/64c067c1-2e19-4d14-aa3f-38c07c46d552/spaces"
  },
  {
    "name": "Suncorp",
    "status": "active",
    "spacesUrl": "/v2/organizations/dd06618f-a062-4fbc-b8e9-7b829d9eaf37/spaces"
  }
]

More Information

1. Cloud Foundry REST API - https://apidocs.cloudfoundry.org/246/

2. RestTemplate - http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html



Categories: Fusion Middleware

Declarative REST Client Feign with Spring Boot

Pas Apicella - Mon, 2016-11-07 17:46
Feign is a declarative web service client. It makes writing web service clients easier. To use Feign create an interface and annotate it. It has pluggable annotation support including Feign annotations and JAX-RS annotations. Feign also supports pluggable encoders and decoders.

In this example I show how to use Feign in a Spring Cloud / Spring Boot application. The source code for this is as follows

https://github.com/papicella/SpringBootEmployeeFeignClient

1. Include the required maven dependency for Feign as shown below

  
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-feign</artifactId>
</dependency>

2. Assuming you're going to look up a service using service discovery with Spring Cloud, then include this dependency as well; the example below is doing this using Spring Cloud Service Discovery.


<dependency>
    <groupId>io.pivotal.spring.cloud</groupId>
    <artifactId>spring-cloud-services-starter-service-registry</artifactId>
</dependency>


See the Spring Cloud Project page for details on setting up your build system with the current Spring Cloud Release Train

3. To enable Feign we simply add the annotation @EnableFeignClients as shown below


package pas.au.scs.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class SpringBootEmployeeFeignClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootEmployeeFeignClientApplication.class, args);
    }
}

4. Next we have to create an interface to call our service methods. The interface methods must match the service method signatures, as shown below. In this example we use Spring Cloud service discovery to find our service and invoke the right implementation method; Feign can do more than just call services registered through Spring Cloud service discovery, but that is what this example does.

EmployeeServiceClient Interface
 
package pas.au.scs.demo.employee;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import java.util.List;

@FeignClient("SPRINGBOOT-EMPLOYEE-SERVICE")
public interface EmployeeServiceClient
{
    @RequestMapping(method = RequestMethod.GET, value = "/emps")
    List<Employee> listEmployees();
}
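
Under the covers Feign turns an interface like this into a runtime-generated implementation (a dynamic proxy) that translates each annotated method into an HTTP call. A toy, standalone illustration of that mechanism using a plain JDK proxy; the canned employee names are made up, where a real client would issue the GET /emps request:

```java
import java.lang.reflect.Proxy;
import java.util.List;

interface EmployeeClient {
    List<String> listEmployees();
}

public class FeignStyleProxySketch {
    public static void main(String[] args) {
        // Build an implementation of the interface at runtime, Feign-style.
        // The invocation handler stands in for the HTTP machinery.
        EmployeeClient client = (EmployeeClient) Proxy.newProxyInstance(
                EmployeeClient.class.getClassLoader(),
                new Class<?>[] { EmployeeClient.class },
                (proxy, method, methodArgs) -> List.of("SCOTT", "KING"));

        System.out.println(client.listEmployees());  // [SCOTT, KING]
    }
}
```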

So what does the actual service method look like?



@RestController
public class EmployeeRest
{
    private static Log logger = LogFactory.getLog(EmployeeRest.class);
    private EmployeeRepository employeeRepository;

    @Autowired
    public EmployeeRest(EmployeeRepository employeeRepository)
    {
        this.employeeRepository = employeeRepository;
    }

    @RequestMapping(value = "/emps",
                    method = RequestMethod.GET,
                    produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Employee> listEmployees()
    {
        logger.info("REST request to get all Employees");
        List<Employee> emps = employeeRepository.findAll();

        return emps;
    }

    .....


5. It's important to note that the Feign client is calling a service method using Spring Cloud service discovery. The screen shot below shows how it looks inside Pivotal Cloud Foundry when we select our service registry instance and click on Manage.

6. Finally we just need to call our service using the Feign client interface, and we do that with @Autowired as required. In the example below we use a class annotated with @Controller, which then uses the returned data to display the results in a web page using Thymeleaf.


package pas.au.scs.demo.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import pas.au.scs.demo.employee.EmployeeServiceClient;

@Controller
public class EmployeeFeignController
{
    Logger logger = LoggerFactory.getLogger(EmployeeFeignController.class);

    @Autowired
    private EmployeeServiceClient employeeServiceClient;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String homePage(Model model) throws Exception
    {
        model.addAttribute("employees", employeeServiceClient.listEmployees());

        return "employees";
    }
}

7. The Web page "employees.html" fragment accessing the returned List of employees is as follows.

<div class="col-xs-12">
<table id="example" class="table table-hover table-bordered table-striped table-condensed">
<thead>
<tr>
<th>Id</th>
<th>Name</th>
<th>Job</th>
<th>Mgr</th>
<th>Salary</th>
</tr>
</thead>
<tbody>
<tr th:each="employee : ${employees}">
<td th:text="${employee.id}"></td>
<td th:text="${employee.name}"></td>
<td th:text="${employee.job}"></td>
<td th:text="${employee.mgr}"></td>
<td th:text="${employee.salary}"></td>
</tr>
</tbody>
</table>
</div>

More Information

1. Spring Cloud
http://projects.spring.io/spring-cloud/

2. Declarative REST Client: Feign
http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html#spring-cloud-feign
Categories: Fusion Middleware

Approaches to Consider for Your Organization’s Windchill Consolidation Project

This post comes from Fishbowl Solutions’ Senior Solutions Architect, Seth Richter.

More and more organizations need to merge multiple Windchill instances into a single Windchill instance, either after acquiring another company or because they have maintained separate Windchill implementations along old divisional borders. Whatever the situation, these organizations want to merge into a single Windchill instance to gain efficiencies and/or other benefits.

The first task for a company in this situation is to assemble the right team and develop the right plan. The team will need to understand the budget and begin to document the key requirements and their implications. Will they hire an experienced partner like Fishbowl Solutions? If so, we recommend involving the partner early in the process so they can help navigate the key decisions, avoid pitfalls, and develop the best approach for success.

Once you start evaluating the technical process and tools to merge the Windchill instances, the most likely options are:

1. Manual Method

Moving data from one Windchill system to another manually is always an option. This method might be viable if there are small pockets of data to move in an ad-hoc manner. However, it is extremely time consuming, so proceed with caution: if you get halfway through and then switch to one of the following methods, you may have hurt the process more than helped it.

2. Third Party Tools (Fishbowl Solutions LinkExtract & LinkLoader tools)

This process can be a cost-effective alternative, but it is not as robust as the Windchill Bulk Migrator, so your requirements will dictate whether it is viable.

3. PTC Windchill Bulk Migrator (WBM) tool

This is a powerful, complex tool that works great if you have an experienced team running it. Fishbowl prefers the PTC Windchill Bulk Migrator in many situations because it can complete large merge projects over a weekend and historical versions are also included in the process.

A recent Fishbowl project involved a billion-dollar manufacturing company that had acquired another business and needed to consolidate CAD data from one Windchill system into its own. The project had an aggressive timeline because it needed to be completed before the company’s seasonal rush (and also be ready for an ERP integration). During the three-month project window, we kicked off the project, executed all of the test migrations and validations, scheduled a ‘go live’ date, and then completed the final production migration over a weekend. Users at the acquired company checked their data into their “old” Windchill system on a Friday and were able to check it out of the main corporate instance on Monday, with zero engineering downtime.

Fishbowl Solutions’ PTC/PLM team has completed many Windchill merge projects such as this one. The unique advantage of working with Fishbowl is that we are PTC software partners and Windchill programming experts. Oftentimes, when other reseller/consulting partners get stuck waiting on PTC technical support, Fishbowl has been able to problem solve and keep projects on time and on budget.

If your organization is seeking an effective and efficient way to bulk load data from one Windchill system to another, our experts at Fishbowl Solutions can accomplish this on time and on budget. Urgency is a priority in these circumstances, and we want to make the transition as hassle-free as possible, with no downtime. Not sure which tool is the best fit for your Windchill migration project? Check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department for more information or to request a demo.

Contact Us

Rick Passolt
Senior Account Executive
952.456.3418
mcadsales@fishbowlsolutions.com

Seth Richter is a Senior Solutions Architect at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do.

The post Approaches to Consider for Your Organization’s Windchill Consolidation Project appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Consider Your Options for SolidWorks to Windchill Data Migrations

This post comes from Fishbowl Solutions’ Associate MCAD Consultant, Ben Sawyer.

CAD data migrations are most often seen as a huge burden. They can be lengthy, costly, messy, and a general roadblock to a successful project. Organizations planning to migrate SolidWorks data to PTC Windchill should consider their options when it comes to the process and tools they use to perform the bulk loading.

At Fishbowl Solutions, our belief is that the faster you can load all of your data accurately into Windchill, the faster your company can implement critical PLM business processes and realize the results of initiatives such as faster NPI, streamlined change and configuration management, and improved quality.

There are two typical scenarios we encounter with these kinds of data migration projects: the SolidWorks data resides on a Network File System (NFS), or it resides in either PDMWorks or EPDM.

The right process and tools will depend on other factors as well. The most common guiding factors are the quantity of data and the project completion date requirements. Here are the typical project scenarios.

Scenario One: Files on a Network File System

Manual Migration

There is always an option to manually migrate SolidWorks data into Windchill. However, if an organization has thousands of files from multiple products that need to be imported, this approach can be extremely daunting. Loading manually involves bringing files into the Windchill workspace; carefully resolving any missing dependents, errors, and duplicates; setting destination folders, revisions, and lifecycles; and fixing bad metadata. (Those who have tried this approach with large data quantities know the pain we are talking about!)

Automated Solution

Years ago, Fishbowl developed its LinkLoader tool for SolidWorks as a viable solution to complete a Windchill bulk loading project with speed and accuracy.

Fishbowl’s LinkLoader solution follows a simple workflow to help identify data to be cleansed and mass loaded with accurate metadata. The steps are as follows:

1. Discovery
In this initial stage, the user chooses the set of SolidWorks data to be loaded into Windchill. Since Windchill doesn’t allow duplicate CAD file names in the system, the software quickly identifies any duplicates. It is up to the user to resolve the duplicate files or remove them from the data loading set.

2. Validation
The validation stage will ensure files are retrievable, attributes/parameters are extracted (for use in later stages), and relationships with other SolidWorks files are examined. LinkLoader captures all actions. The end user will need to resolve any errors or remove the data from the loading set.

3. Mapping
Moving toward the bulk loading stage, it is necessary to confirm and/or modify the attribute-mapping file as desired. The only required fields for mapping are lifecycle, revision/version, and the Windchill folder location. End users are able to leverage the attributes/parameter information from the validation as desired, or create their own ‘Instance Based Attribute’ list to map with the files.

4. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in an ‘Error List Report’ and LinkLoader will simply move on to the next file.

Scenario Two: Files reside in PDMWorks or EPDM

Manual Migration

There is also an option to do a manual data migration from one system to the other if the files reside in PDMWorks or EPDM. However, this process can be just as tedious and drawn out as an NFS migration, or perhaps even more so.

Automated Solution

Having files within PDMWorks or EPDM can make the migration process more straightforward and faster than the NFS projects. Fishbowl has created an automated solution tool that extracts the latest versions of each file from the legacy system and immediately prepares it for loading into Windchill. The steps are as follows:

1. Extraction (LinkExtract)
In this initial stage, Fishbowl uses its LinkExtract tool to pull the latest version of all SolidWorks files, determine references, and extract all the attributes for the files as defined in PDMWorks or EPDM.

2. Mapping
Before loading the files, it is necessary to confirm and/or modify the attribute-mapping file as desired. Admins can fully leverage the attributes/parameter information from the Extraction step, or can start from scratch if they find that easier. Often the destination Windchill system will have different terminology or states, and it is easy to remap those as needed in this step.

3. Bulk Load
Once the mapping stage is completed, the loading process is ready. There is a progress indicator that displays the number of files completed and the percentage done. If there are errors with any files during the upload, it will document these in the Error List Report and LinkLoader will move on to the next file.

Proven Successes with LinkLoader

Many of Fishbowl’s customers have purchased LinkLoader and successfully run it themselves with little to no assistance from Fishbowl. Other customers have used our consulting services to complete the migration project on their behalf.

Fishbowl’s methodology is centered on putting the customer first, and our ongoing focus and support keep our customers satisfied. This is the same commitment and expertise we bring to any and every data migration project.

If your organization is looking to consolidate SolidWorks CAD data to Windchill in a timely and effective manner, regardless of the size and scale of the project, our experts at Fishbowl Solutions can get it done.

For example, Fishbowl partnered with a multi-billion dollar medical device company with a short time frame to migrate over 30,000 SolidWorks files from a legacy system into Windchill. Fishbowl’s expert team took initiative and planned the process to meet their tight industry regulations and finish on time and on budget. After the Fishbowl team executed test migrations, the actual production migration process only took a few hours, thus eliminating engineering downtime.

If your organization is seeking the right team and tools to complete a SolidWorks data migration to Windchill, reach out to us at Fishbowl Solutions.

If you’d like more information about Fishbowl’s LinkLoader tool or our other products and services for PTC Windchill and Creo, check out our website, click the “Contact Us” tab, or reach out to Rick Passolt in our business development department.

Contact Us

Rick Passolt
Senior Account Executive
952.465.3418
mcadsales@fishbowlsolutions.com

Ben Sawyer is an Associate MCAD Consultant at Fishbowl Solutions.

The post Consider Your Options for SolidWorks to Windchill Data Migrations appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other
