Fusion Middleware

Cross-origin resource sharing (CORS) from Spring Boot Rest Controllers

Pas Apicella - Tue, 2017-04-25 18:45
I was involved in a hackathon recently, and after I created a few Spring Boot APIs for the UI team to consume, they ran into errors around cross-origin resource sharing (CORS). For security reasons, browsers prohibit AJAX calls to resources residing outside the current origin.

I have seen this before, and Spring has support that lets you control which resources can be accessed from outside the current origin. It's as simple as the "@CrossOrigin" annotation, as shown below. In this example, every request handled by this REST controller supports resource calls residing outside the current origin.
  
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@CrossOrigin
@RestController
@RequestMapping(value = "/beacon")
public class BeaconRest
{
    private static Log logger = LogFactory.getLog(BeaconRest.class);

    @Autowired
    private BeaconRepository beaconRepository;

    @RequestMapping(value = "/all",
                    method = RequestMethod.GET,
                    produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Beacon> allBeacons()
    {
        logger.info("Invoking /beacon/all RESTful method");
        return beaconRepository.findAll();
    }
}

Of course it's much more flexible than that, adding the ability to set options such as allowed origins and max age; a short illustrative example follows the link below, and you can read more about it here.

https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/cors.html
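
For illustration, here is a hedged sketch of a more restrictive configuration; the origin URL and maxAge value are made-up examples, not from the original application.

import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Only allow AJAX calls from the named origin, and let browsers cache
// the CORS preflight response for one hour (3600 seconds).
@CrossOrigin(origins = "https://ui.example.com", maxAge = 3600)
@RestController
@RequestMapping(value = "/beacon")
public class RestrictedBeaconRest
{
    // handler methods as in the controller above
}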
Categories: Fusion Middleware

Proactively Manage Contracts, Policies, Web Assets, and Sales Content stored in Oracle WebCenter with Fishbowl’s Subscription Notifier

Oracle WebCenter Content is a great tool for keeping your company’s content organized, but it can be difficult to proactively manage new, updated, and expiring content. For example:

  • Users check out content and forget to check it back in
  • Plans and policies require reviews at varying intervals
  • Managers change, and metadata needs to be changed for several content items

In 2005, Fishbowl launched the initial release of Subscription Notifier to help solve these problems and many more. It was also specifically designed to satisfy these content management use cases:

  • Web Content Management: Ensure proactive updates to web content for optimal SEO.
  • Contract Management: Enable contract management knowledge workers to get ahead of contract renewals with scheduled email notifications at 90, 60, and 30 days.
  • Policy and Procedure Management: Trigger a workflow process to alert users of content requiring review annually to ensure company policies and procedures are up to date.
  • Sales Enablement: Provide key stakeholders with better visibility into new or updated sales or marketing material.

Today, Subscription Notifier is sold as part of the Admin Suite and included in our controlled document management solution – ControlCenter. Due to its value-add content management capabilities, Subscription Notifier has become one of our most popular products. We continue to make enhancements to the product, and just last month, we released version 5.0, which brings some customer-requested capabilities.

Before I provide an overview of what’s new in version 5.0, I want to start with a brief introduction on what Subscription Notifier actually does. Subscription Notifier is a query-based email notification and scheduled job utility that enables proactive content management in WebCenter Content. With an easy-to-understand subscription builder, you can quickly create subscriptions based on any business rule in your content server – not just expiration. You can schedule the subscription to run on an hourly, daily, weekly, monthly, or yearly basis, or let it run without a schedule to notify users as soon as possible. It also enables you to specify users and/or aliases to be notified of content that matches the subscription query, either directly by username or by using a metadata field, email address, or Idocscript. Other options are available to further customize the subscription, but the core is that simple – specify a schedule, include the users to notify, build the query, and you’re done!

As I highlighted in the policy and procedure management use case above, subscriptions can be set up as periodic reviews, which will put content items into the specified user’s “Documents Under Review” queue as the item’s expiration date (or any other specified date) approaches. Content remains in the queue until one of three actions is taken: “No Change Necessary”, allowing the user to update the review date without updating the content item; “Check Out and Revise”, updating the content item and its review date; or “Approve Expiration”, which lets the content item expire. The review queue appears in both the core WebCenter UI and ControlCenter. Periodic reviews are one of the most useful features of Subscription Notifier, enabling companies to stay on top of expiring content and ensure that content is always kept up to date.

Beyond notifications and reviews, Subscription Notifier can also empower data synchronization through Pre-query Actions and Side Effects. These are extra effects that are triggered either once before the query executes (Pre-query Actions) or once for every content item that matches the subscription query (Side Effects). Custom Pre-query Actions and Side Effects can be created, and Subscription Notifier comes packaged with some useful Side Effects. These include actions to update metadata, delete old revisions, check-out and check-in, update an external database, and resubmit a content item to Inbound Refinery for conversion. Subscriptions don’t need to send emails – you can set up subscriptions to only trigger these actions.

Now that I’ve gone over the core functionality of Subscription Notifier, I’d like to highlight our latest release, version 5.0, by discussing some of this release’s new features:

 

Simplified Subscriptions for End-Users

Simplified Subscription Builder Interface

By specifying a role in the component configuration, non-administrators can now create subscriptions in Subscription Notifier. Users with restricted access can create subscriptions in a restricted view, which both simplifies the view for the non-tech-savvy and ensures security so that end-users do not have access to more than they should. Administrators can still manage all subscriptions, but users with restricted access can only manage subscriptions they have created themselves. This has been a highly-requested feature, so we’re excited to finally bring these requests to fruition!

 

Type-ahead User Fields

Type-Ahead User Fields Screenshot

Taking a feature from ControlCenter, user and alias fields on the subscription creation page will now offer type-ahead suggestions based on both usernames and full names. No longer do you need to worry about the exact spelling of usernames – these validated fields will do the remembering for you! In addition to greatly improving the look and feel of the subscription creation page, these newly improved fields also enhance performance by cutting out the load time of populating option lists.

 

Job Run History

Job Run History Report Screenshot

Administrators can view an audit history of subscription job executions, allowing them to view when subscriptions are evaluated. The table can be sorted and filtered to allow for detailed auditing of Subscription Notifier. By inspecting an individual job run, you can see which content items matched the query and who was notified. If a job run failed, you can easily view the error message without delving into the content server logs.

 

Resubmit for Conversion Side Effect

Sometimes Inbound Refinery hits a snag, and content items will fail conversion for no apparent reason. This new Side Effect will allow you to resubmit content items to Inbound Refinery to attempt conversion again. You can specify the maximum number of times to attempt the re-conversion, and the flow of items into the conversion queue is throttled, so you don’t need to worry about clogging up Inbound Refinery with conversion requests.

 

Enforce Security on a Per-Subscription Basis

Subscription Notifier has always allowed you to specify whether to enforce content security when sending emails, either making sure users are only notified about content they have permission to read or letting users be notified of everything. Previously this was a component-wide configuration setting, but now it can be changed on each subscription individually.

That about wraps up this spotlight on Subscription Notifier. I hope I was able to share how a simple yet powerful notification and subscription solution for Oracle WebCenter supports multiple use cases for proactive content management. At its core, Subscription Notifier helps organizations keep their content up-to-date while providing visibility into the overall content creation process. Its powerful side-effects capabilities can be used to trigger workflows, update metadata, delete old revisions and more – providing more proactive methods for users to best manage high-value content in an organization. If you’re interested in purchasing Subscription Notifier or upgrading your existing copy, please contact us for more info.

The post Proactively Manage Contracts, Policies, Web Assets, and Sales Content stored in Oracle WebCenter with Fishbowl’s Subscription Notifier appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

We're All In This Thing Together

Greg Pavlik - Fri, 2017-04-14 17:26
This song pretty much summarizes everything I've learned to be true about life after nearly five decades of living...

Well my friend, well I see your face so clear
Little bit tired, a little worn through the years
You sound nervous, you seem alone
I hardly recognize your voice on the telephone

In between I remember
Just before we wound up broken down
We'd drive out to the edge of the highway
Follow that lonesome dead-end roadside sound

We're all in this thing together
Walkin' the line between faith and fear
This life don't last forever
When you cry I taste the salt in your tears

Well my friend let's put this thing together
And walk the path that worn out feet have trod
If you wanted we can go home forever
Give up your jaded ways, spell your name to God

We're all in this thing together
Walkin' the line between faith and fear
This life don't last forever
When you cry I taste the salt in your tears

All we are is a picture in a mirror
Fancy shoes to grace our feet
All that there is is a slow road to freedom
Heaven above and the devil beneath

We're all in this thing together
Walkin' the line between faith and fear
This life don't last forever
When you cry I taste the salt in your tears

Automate and expedite bulk loading into Windchill.

Data migration is the least attractive part of a PDM/PLM project.  Take a look at our latest infographic to learn how to speed up bulk loading data from Creo, Autodesk Inventor and AutoCAD, SolidWorks, Documents, WTParts and more into Windchill PDMLink and Pro/INTRALINK.

More information can also be found in our previous posts:

Approaches to Consider for Your Organization’s Windchill Consolidation Project

Consider Your Options for SolidWorks to Windchill Data Migrations

 

The post Automate and expedite bulk loading into Windchill. appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Spring Boot Application for Pivotal Cloud Cache Service

Pas Apicella - Thu, 2017-04-13 06:21
I previously blogged about the Pivotal Cloud Cache service in Pivotal Cloud Foundry as follows

http://theblasfrompas.blogspot.com.au/2017/04/getting-started-with-pivotal-cloud.html

In that post I promised a follow-up Spring Boot application that would use the PCC service to show what the code looks like. That demo exists at the GitHub URL below.

https://github.com/papicella/SpringBootPCCDemo

The GitHub project above shows how you can clone, package, and then push this application to PCF against your own PCC service instance using the "Spring Cloud GemFire Connector". A rough sketch of the shape of that code appears below.
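
As a hedged illustration only (this is not the actual code from the repository): with Spring Data GemFire, a domain class is typically mapped to a region and exposed through a repository along these lines. The Customer class, its fields, and the region name are illustrative assumptions, and the import locations match the Spring Data GemFire 1.x line.

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.Region;
import org.springframework.data.repository.CrudRepository;

// Illustrative entity stored in a GemFire region named "demoregion"
@Region("demoregion")
public class Customer
{
    @Id
    private String id;
    private String name;

    public Customer(String id, String name)
    {
        this.id = id;
        this.name = name;
    }

    public String getId() { return id; }
    public String getName() { return name; }
}

// Spring Data GemFire generates the CRUD implementation at runtime
interface CustomerRepository extends CrudRepository<Customer, String>
{
}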



More Information

Pivotal Cloud Cache Docs
http://docs.pivotal.io/p-cloud-cache/index.html



Categories: Fusion Middleware

Webinar Recording: Improve WebCenter Portal Performance by 30% and get out of Oracle ADF Development Hell

In this webinar Fishbowl’s Director of Solutions, Jerry Aber, shared how leveraging modern web development technologies like Oracle JET, instead of ADF taskflows, can dramatically improve the performance of a portal – including the overall time to load the home page, as well as making content or stylistic changes.

Jerry also shared how to architect a portal implementation to include a caching layer that further enhances performance. These topics were all backed by real-world customer metrics that Jerry and the Fishbowl team have seen through numerous successful customer deployments.

If you are a WebCenter Portal administrator and are frustrated with the challenges of improving your ADF-centric portal, this webinar is for you. Watch to learn how to overhaul the ADF UI, which will lead to fewer development complexities and more happy users.

 

The post Webinar Recording: Improve WebCenter Portal Performance by 30% and get out of Oracle ADF Development Hell appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Getting Started with Pivotal Cloud Cache on Pivotal Cloud Foundry

Pas Apicella - Sun, 2017-04-09 22:57
Recently we announced the new cache service Pivotal Cloud Cache (PCC) for Pivotal Cloud Foundry (PCF). In short, Pivotal Cloud Cache is an opinionated, distributed, highly available, high-speed key/value caching service. PCC can be easily horizontally scaled for capacity and performance.

In this post we will show how you would provision a service, log in to the Pulse UI dashboard, connect using GFSH, etc. I won't create a Spring Boot application to use the service at this stage, but that will follow in a post soon enough.

Steps

1. First you will need the PCC service, and if it has been installed it will look like this


2. Now let's view the current plans we have in place as shown below

pasapicella@pas-macbook:~$ cf marketplace -s p-cloudcache
Getting service plan information for service p-cloudcache as papicella@pivotal.io...
OK

service plan   description          free or paid
extra-small    Plan 1 Description   free
extra-large    Plan 5 Description   free

3. Now let's create a service as shown below

pasapicella@pas-macbook:~$ cf create-service p-cloudcache extra-small pas-pcc
Creating service instance pas-pcc in org pivot-papicella / space development as papicella@pivotal.io...
OK

Create in progress. Use 'cf services' or 'cf service pas-pcc' to check operation status.

4. At this point it will asynchronously create the GemFire cluster, which is essentially what PCC is. For more information on GemFire see the docs link here.

You can check the progress in one of two ways.

1. Using Pivotal Apps manager as shown below


2. Using a command as follows

pasapicella@pas-macbook:~$ cf service pas-pcc

Service instance: pas-pcc
Service: p-cloudcache
Bound apps:
Tags:
Plan: extra-small
Description: Pivotal CloudCache offers the ability to deploy a GemFire cluster as a service in Pivotal Cloud Foundry.
Documentation url: http://docs.pivotal.io/gemfire/index.html
Dashboard: http://gemfire-yyyyy.run.pez.pivotal.io/pulse

Last Operation
Status: create in progress
Message: Instance provisioning in progress
Started: 2017-04-10T01:34:58Z
Updated: 2017-04-10T01:36:59Z

5. Once complete it will look as follows


6. Now in order to log into both GFSH and Pulse we are going to need to create a service key for the service we just created, which we do as shown below.

pasapicella@pas-macbook:~/pivotal/PCF/services/PCC$ cf create-service-key pas-pcc pas-pcc-key
Creating service key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...
OK

7. Retrieve the service key as shown below

pasapicella@pas-macbook:~$ cf service-key pas-pcc pas-pcc-key
Getting key pas-pcc-key for service instance pas-pcc as papicella@pivotal.io...

{
 "locators": [
  "0.0.0.0[55221]",
  "0.0.0.0[55221]",
  "0.0.0.0[55221]"
 ],
 "urls": {
  "gfsh": "http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1",
  "pulse": "http://gemfire-yyyy.run.pez.pivotal.io/pulse"
 },
 "users": [
  {
   "password": "password",
   "username": "developer"
  },
  {
   "password": "password",
   "username": "operator"
  }
 ]
}

8. Now let's log into Pulse. The URL is available as part of the output above

Login Page


Pulse Dashboard: You can see from the dashboard page how many locators and cache server members we have as part of this default cluster



9. Now let's log into GFSH. Once again the URL is as per the output above

- First we will need to download Pivotal GemFire so we have the GFSH client. Download the zip at the link below and extract it to your file system

  https://network.pivotal.io/products/pivotal-gemfire

- Invoke as follows using the path to the extracted ZIP file

$GEMFIRE_HOME/bin/gfsh

pasapicella@pas-macbook:~/pivotal/software/gemfire/pivotal-gemfire-9.0.3/bin$ ./gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    9.0.3

Monitor and Manage Pivotal GemFire
gfsh>connect --use-http --url=http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1 --user=operator --password=password
Successfully connected to: GemFire Manager HTTP service @ http://gemfire-yyyy.run.pez.pivotal.io/gemfire/v1

gfsh>

10. Now let's create a region which we will use to store some cache data

$ create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
  
gfsh>create region --name=demoregion --type=PARTITION_HEAP_LRU --redundant-copies=1
Member                              | Status
----------------------------------- | ---------------------------------------------------------------------
cacheserver-PCF-PEZ-Heritage-RP04-1 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-1"
cacheserver-PCF-PEZ-Heritage-RP04-0 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-0"
cacheserver-PCF-PEZ-Heritage-RP04-2 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-2"
cacheserver-PCF-PEZ-Heritage-RP04-3 | Region "/demoregion" created on "cacheserver-PCF-PEZ-Heritage-RP04-3"

Note: The region types you can create are described in the Pivotal GemFire docs at the link below, but basically in the example above we created a partitioned region where primary and backup data is stored among the cache servers. As you can see, we asked for a single backup copy of each region entry to be placed on a separate cache server for redundancy.

http://gemfire.docs.pivotal.io/geode/developing/region_options/region_types.html#region_types

11. If we return to the Pulse Dashboard UI we will see from the "Data Browser" tab that we have a region


12. Now let's just add some data: a few entries which are simple String key/value pairs
  
gfsh>put --region=/demoregion --key=1 --value="value 1"
Result : true
Key Class : java.lang.String
Key : 1
Value Class : java.lang.String
Old Value : <NULL>


gfsh>put --region=/demoregion --key=2 --value="value 2"
Result : true
Key Class : java.lang.String
Key : 2
Value Class : java.lang.String
Old Value : <NULL>


gfsh>put --region=/demoregion --key=3 --value="value 3"
Result : true
Key Class : java.lang.String
Key : 3
Value Class : java.lang.String
Old Value : <NULL>

13. Finally let's query the data we have in the cache
  
gfsh>query --query="select * from /demoregion"

Result : true
startCount : 0
endCount : 20
Rows : 3

Result
-------
value 3
value 1
value 2

NEXT_STEP_NAME : END

14. We can return to Pulse and invoke the same query from the "Data Browser" tab as shown below.



Of course storing data in a cache isn't useful unless we actually have an application on PCF that can use the cache, but that will come in a separate post. Basically we will bind to this service, connect as a GemFire client using the locators we are given as part of the service key, and then extract the cache data we have just created above by invoking a query. A rough sketch of what such a client might look like follows.
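
Purely as a hedged sketch ahead of that post (not code from a working demo), a plain GemFire 9 Java client built from the service-key values above might look roughly like this. The locator host/port and credentials are placeholders, and depending on how security is configured a security-client-auth-init property may also be required.

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class PccClientSketch
{
    public static void main(String[] args)
    {
        // Locator host/port and credentials are placeholders taken from the
        // (redacted) "cf service-key" output shown earlier
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("10.0.8.4", 55221)
                .set("security-username", "developer")
                .set("security-password", "password")
                .create();

        // A PROXY region keeps no local state; every operation goes to the cluster
        Region<String, String> region = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("demoregion");

        System.out.println("key 1 -> " + region.get("1"));

        cache.close();
    }
}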

More Information

Download PCC for PCF
https://network.pivotal.io/products/cloud-cache

Data Sheet for PCC
https://content.pivotal.io/datasheets/pivotal-cloud-cache
Categories: Fusion Middleware

Hackathon Weekend at Fishbowl Solutions: Bots, Cloud Content Migrations, and Lightweight ECM Apps

Hackathon 2017 captains – from L to R: Andy Weaver, John Sim, and Jake Ferm.

It’s hackathon weekend at Fishbowl Solutions. This means our resident hackers (coders) will be working as teams to develop new solutions for Oracle WebCenter, enterprise search, and various cloud offerings. The overall theme this year is The Cloud, and each completed solution will integrate with a cloud offering from Oracle, Google, and perhaps even a few others if time allows.

This year three teams have formed, and they all began coding today at 1:00 PM. Teams have until 9:00 AM on Monday, April 10th to complete their innovative solutions. Each team will then present and demo their solution to everyone at Fishbowl Solutions during our quarterly meeting at 4 PM. The winning team will be decided by votes from employees that did NOT participate in the hackathon.

Here are the descriptions of the three solutions that will be developed over the weekend:

Team Captain: Andy Weaver
Team Name – for now: Cloud ECM Middleware
Overview: Lightweight ECM for The Cloud. Solution will provide content management capabilities (workflow, versioning, periodic review notifications, etc.) to Google’s cloud platform. Solution will also include a simple dashboard to notify users of documents awaiting their attention, and users will be able to use the solution on any device as well.

Team Captain: John Sim
Team Name: SkyNet – Rise of the Bots
Overview: This team has high aspirations, as they will be working on a number of solutions. The first is a bot they are calling Atlas that will essentially query Fishbowl’s Google Search Appliance and return documents, which are stored in Oracle WebCenter, based on what was asked. For example, “show me the standard work document on ordering food for the hackathon”. The bot will use Facebook Messenger as the input interface, and if time allows, a similar bot will be developed to support Siri, Slack, and Skype.

The next solution the team will try to code by Monday is a self-service bot that queries a human capital management/human resources system to return how many days of PTO an employee has.

The last solution will be a bot that integrates Alexa, which is the voice system that powers the Amazon Echo, with Oracle WebCenter. In this example, voice commands could be used to ask Alexa to tell the user the number of workflow items in their queue, or the last document checked in by their manager.

Team Captain: Jake Ferm
Team Name – for now: Cloud Content Migrator
Overview: Jake’s team will be working on an interface to enable users to select content to be migrated across Google Drive, Microsoft OneDrive, DropBox, and the Oracle Documents Cloud Service. The goal with this solution is to enable, with as few clicks as possible, the migration of content from, for example, OneDrive to the Oracle Documents Cloud Service. They will also be working on ensuring that content with larger file sizes can be migrated in the background so that users can carry on with other computer tasks.

Please check back on Tuesday, April 11th for a recap of the event and details on the winning solution. Happy hacking!

Taco bar to fuel the hackers!

 

The post Hackathon Weekend at Fishbowl Solutions: Bots, Cloud Content Migrations, and Lightweight ECM Apps appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Pivotal Cloud Foundry Cloud Service Brokers for AWS, Azure and GCP

Pas Apicella - Tue, 2017-04-04 00:10
Pivotal Cloud Foundry (PCF) has various cloud service brokers for all the public clouds we support, which include AWS, Azure, and GCP. You can download and install these service brokers on premise or off premise, giving you the capability to use cloud services where it makes sense for your cloud-native applications.

https://network.pivotal.io/

The three cloud service brokers are as follows:





In the example below we have a PCF install running on vSphere and it has the AWS service broker tile installed as shown by the Ops Manager UI


Once installed, this PCF instance can then provision AWS services, and you can do that in one of two ways.

1. Using Apps Manager UI as shown below


2. Use the CF CLI, invoking "cf marketplace" to list the services and then "cf create-service" to actually create an instance of a service (see the sketch below).
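
For example, the CLI flow might look like the following. The service name "aws-rds-mysql", plan "basic", and instance name "my-aws-db" are illustrative placeholders; the actual names depend on the broker tile you installed.

pasapicella@pas-macbook:~$ cf marketplace
pasapicella@pas-macbook:~$ cf create-service aws-rds-mysql basic my-aws-db
pasapicella@pas-macbook:~$ cf services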



Once provisioned within a SPACE of PCF, you can then bind the service to applications and consume it as you normally would, reading the VCAP_SERVICES environment variable to essentially access AWS services from your on-premise installation of PCF in the example above. A minimal sketch of reading VCAP_SERVICES follows.
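
As a minimal sketch of that last step, assuming the Jackson library is on the classpath, a Java application could read and parse VCAP_SERVICES along these lines. The service label "aws-rds-mysql" and the credential key "uri" are assumptions; check your broker's documentation for the actual names.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VcapServicesSketch
{
    public static void main(String[] args) throws Exception
    {
        // Cloud Foundry injects VCAP_SERVICES into every application
        // container that has services bound to it
        String vcap = System.getenv("VCAP_SERVICES");

        JsonNode root = new ObjectMapper().readTree(vcap);

        // "aws-rds-mysql" is an assumed service label; use the label your
        // broker actually registers, and the credential keys it documents
        JsonNode credentials = root.path("aws-rds-mysql").path(0).path("credentials");
        System.out.println("uri = " + credentials.path("uri").asText());
    }
}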

More Information

GCP service broker:
https://network.pivotal.io/products/gcp-service-broker

AWS service broker:
https://network.pivotal.io/products/pcf-service-broker-for-aws

Azure service broker:
https://network.pivotal.io/products/microsoft-azure-service-broker


Categories: Fusion Middleware

Manually running a BOSH errand for Pivotal Cloud Foundry on GCP

Pas Apicella - Mon, 2017-04-03 19:33
Pivotal Ops Manager has various errands it runs for different deployments within a PCF instance. These errands can be switched off manually when installing new tiles or upgrading the platform; in fact, in PCF 1.10 the errands themselves will only run if they need to, making installs and upgrades a lot faster.

Below I am going to show you how you would manually run an errand if you needed to on a PCF instance running on GCP. These instructions would also work for PCF running on AWS, Azure, or even vSphere, so they're not specific to PCF on GCP.

1. First log in to your Ops Manager VM itself

pasapicella@pas-macbook:~/pivotal/GCP/install/10/opsmanager$ ./ssh-opsman.sh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-66-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Apr  3 23:38:57 UTC 2017

  System load:  0.0                Processes:           141
  Usage of /:   14.7% of 78.71GB   Users logged in:     0
  Memory usage: 68%                IP address for eth0: 0.0.0.0
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

5 packages can be updated.
0 updates are security updates.

Your Hardware Enablement Stack (HWE) is supported until April 2019.

*** System restart required ***
Last login: Mon Apr  3 23:38:59 2017 from 110.175.56.52
ubuntu@om-pcf-110:~$

2. Target the Bosh director which would look like this

ubuntu@om-pcf-110:~$ bosh --ca-cert /var/tempest/workspaces/default/root_ca_certificate target 10.0.0.10
Target set to 'p-bosh'

Note: You may be asked to log in if you have not yet logged in to the BOSH director. You can determine the login details from the Ops Manager UI as follows

- Log into Ops Manager UI
- Click on the tile for the "Ops Manager Director", which will be specific to your IaaS provider; in the example below that is GCP


- Click on the credentials tab


3. Target the correct deployment. In the example below I am targeting the Elastic Runtime deployment.

ubuntu@om-pcf-110:~$ bosh deployment /var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml
Deployment set to '/var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml'

Note: You can list out the deployment names using "bosh deployments"

4. List out the errands as shown below using "bosh errands"

ubuntu@om-pcf-110:~$ bosh errands
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

+-----------------------------+
| Name                        |
+-----------------------------+
| smoke-tests                 |
| push-apps-manager           |
| notifications               |
| notifications-ui            |
| push-pivotal-account        |
| autoscaling                 |
| autoscaling-register-broker |
| nfsbrokerpush               |
| bootstrap                   |
| mysql-rejoin-unsafe         |
+-----------------------------+

5. Now in this example we are going to run the errand "push-apps-manager" and we do it as shown below

$ bosh run errand push-apps-manager

** Output **

ubuntu@om-pcf-110:~$ bosh run errand push-apps-manager
Acting as user 'director' on deployment 'cf-c099637fab39369d6ba0' on 'p-bosh'
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

Director task 621
  Started preparing deployment > Preparing deployment

  Started preparing package compilation > Finding packages to compile. Done (00:00:01)

     Done preparing deployment > Preparing deployment (00:00:05)

  Started creating missing vms > push-apps-manager/32218933-7511-4c0d-b512-731ca69c4254 (0)

...

+ '[' '!' -z 'Invitations deploy log: ' ']'
+ printf '** Invitations deploy log:  \n'
+ printf '*************************************************************************************************\n'
+ cat /var/vcap/packages/invitations/invitations.log

Errand 'push-apps-manager' completed successfully (exit code 0)
ubuntu@om-pcf-110:~$


Categories: Fusion Middleware

Mindbreeze Partnership Brings GSA Migration Path for Customers

This morning Fishbowl announced a new partnership with Mindbreeze bringing additional enterprise search options to our customers. As a leading provider of enterprise search software, Mindbreeze serves thousands of customers around the globe spanning governments, banks, healthcare, insurance, and educational institutions. Last Friday, Gartner released the 2017 Insight Engines Magic Quadrant; Mindbreeze has been positioned highest for Ability to Execute.

With the sunsetting of the Google Search Appliance announced last year, Fishbowl has been undergoing an evaluation of alternatives to serve both new and existing customers looking to improve information discovery. While Fishbowl will continue to partner with Google on cloud search initiatives, we feel Mindbreeze InSpire provides a superior solution to the problems faced by organizations with large volumes of on-premise content. In addition to on-premise appliances, Mindbreeze also provides cloud search services with federation options for creating a single, hybrid search experience. We’re excited about the opportunity this partnership brings to once again help customers get more value from the millions of unstructured documents buried in siloed systems across the enterprise, particularly those stored in Oracle WebCenter and PTC Windchill.

In the coming months, we’ll be expanding our connector offerings to integrate Mindbreeze InSpire with Oracle WebCenter Content and PTC Windchill. Mindbreeze InSpire is offered as an on-premise search appliance uniting information from varied internal data sources into one semantic search index. As a full-service Mindbreeze partner, Fishbowl will provide connectors, appliance resale, implementation services, and support for our customers. To learn more about Mindbreeze, GSA migration options, or beta access to our Mindbreeze connectors, please contact us.

The post Mindbreeze Partnership Brings GSA Migration Path for Customers appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Replacing the “V” in Oracle ADF’s MVC design pattern with Oracle JET or other front end framework

This post was written by Fishbowl’s own John Sim – our resident Oracle user experience expert. From front-end design to user journeys and persona mapping, John has helped numerous customers over 14 years enhance their desktop and mobile experiences with Oracle WebCenter. John is also an Oracle ACE, a program which recognizes leaders for their technical expertise and community evangelism.

One of our goals at Fishbowl is to continuously enhance and evolve the capabilities of WebCenter for both developers and clients with new tooling and pre-built custom components that are essential but not available today as part of the OOTB Oracle solution.

We have taken all of our collective knowledge and IP over the years since WebCenter PS3 and created the “Portal Solution Accelerator”, previously known as “Intranet In A Box”, which takes WebCenter Portal and its capabilities to the next level for creating digital workplace portals.


Today I’m going to cover one of the benefits of using our Portal Solution Accelerator: replacing the “V” in ADF’s MVC design pattern. This enables third party developers, web design agencies, and marketers (with basic web design skills) to use other libraries and front end frameworks of their choosing, such as Oracle JET, Angular, React, Vue, and Bootstrap – to name a few. By using a different front end library such as JET, you will be able to create more modern and dynamic responsive portals, widgets, and portlets with little to no experience of developing with ADF. You will also be able to leverage the benefits of the ADF model and controller and WebCenter’s personalisation, security, caching, and mashup integration capabilities with other solutions like Oracle E-Business Suite (EBS) and Business Intelligence (BI) on the back end.

So, let’s take a closer look at the Portal Solution Accelerator in the following diagram. You can see it is made up of two core components – our back end PSA (Portal Solution Accelerator) component and our front end SPA (Single Page Application) component architecture. One of the things we decided early on was to separate the back end and front end architecture so that SPA front end components are platform agnostic and can work as a Progressive Web App on other platforms outside of Portal. This enables us to deploy SPA front end components directly onto BI (to provide additional charting capabilities through its narrative components), EBS, SharePoint, and Liferay, as well as onto the cloud. This provides the potential for a hybrid on-premise Portal to Oracle Cloud (Site Cloud Service) content experience platform, enabling reuse of our portal components and security on the cloud.

To find out more about our Portal Solution Accelerator head over to our website – https://www.fishbowlsolutions.com/services/oracle-webcenter-portal-consulting/portal-solution-accelerator/

Let’s take a quick dive into WebCenter Portal taskflows and our Single Page Application (SPA) architecture.

WebCenter Portal – allows you to create Widgets (ADF Taskflows) that can easily be dragged and dropped onto a page by a contributor and can work independently or alongside another taskflow. The interface View is currently generated at the back end with Java processes and can be easily optimised to enable support of adaptive applications. However, you should be aware that this model is very server process intensive.

  • Pros
    • If you know ADF development it makes it extremely fast to create connected web applications using the ADF UI.
    • The ADF generated HTML/JS/CSS UI supports Mobile and desktop browsers.
    • The UI is generated by the application allowing developers to create applications without the need for designers to be involved.
  • Cons
    • If you don’t know ADF, or have a UI designed by a third party that does not align with ADF’s UI capabilities, it can be very challenging to create complex UIs using ADF tags, ADF skins, and ADF’s JavaScript framework.
    • It is bad practice to mix and match open source libraries that are not supported by Oracle, like jQuery or Bootstrap, with ADF tags. This limits the reuse of widely available open source code for creating dynamic, interactive components and interfaces such as a carousel.
    • It can also be very hard to brand, and is very server process intensive.

Single Page Applications –  are essentially browser generated applications with Javascript that use AJAX to quickly and easily update and populate the user interface to create fluid and responsive web apps. Instead of the server processing and managing the DOM generated and sent to the client, the client’s browser processes and generates and caches the UI on the fly.

  • Pros
    • All modern front end frameworks allow you to create Single Page Applications and tie into lots of open source front end solutions and interfaces.
  • Cons
    • It can be hard to create modular, isomorphic (universal) JavaScript applications.
    • You also need to test across the browsers and devices your application is looking to support.
    • The front end application can get very large if not managed correctly.

The Portal Solution Accelerator

What we have done with PSA is create a framework that provides the best of both worlds, allowing you to create modular Single Page Application taskflows that can be dragged and dropped onto a WebCenter Portal page. This allows your web design teams and agencies to manage and develop the front end quickly and effectively with any frameworks and standard HTML5, CSS, and JavaScript. You can also use Groovy scripts or JavaScript (via Oracle Nashorn) on the server side to create isomorphic JavaScript taskflow applications.

Please note – you cannot create a taskflow that leverages both ADF’s view layer and our framework together. You can, however, create one taskflow that is pure ADF and drop it on the same page as a taskflow that has been created with a custom front end (such as Angular) using our Portal Solution Accelerator view in place of the ADF view. This enables you to use existing OOTB WebCenter Portal taskflows and have them work in conjunction with custom-built components.

How Does it work?

Within WebCenter Portal, in the composer panel where you can drag and drop taskflows onto a page, there is a custom taskflow – Fishbowl Single Page Application.

Drop this onto the page and manage its parameters. Here is a quick screenshot of a sample taskflow component for loading in Recent News items.

The Template parameter points to the custom SPA front end JavaScript component you would like to load in and inject into the taskflow. You can define custom parameters to pass to this component, and these parameters can be dynamic ADF variables via the template parameter panel. The SPA component then handles the magic of loading in the template, events, JS libraries, CSS, and images to be generated from within the taskflow.

Within the SPA API there are custom methods we have created that allow you to pass AJAX JSON calls to the ADF back end Groovy or JavaScript code, enabling the app to work and communicate with other services or databases.

ADF Lifecycle… Timeouts.

One of the things that often comes up when we present our solution to others who have attempted to integrate JET applications with WebCenter Portal is how to manage the lifecycle and prevent ADF timeouts. For example, if you stay on the same WebCenter Portal page for some time working on a single page application, you will get a popup saying you will be automatically logged out. Remember, our Portal Solution Accelerator is a taskflow. We use a similar ADF message queue to pass JSON updates to the ADF lifecycle when a user is working on a complex modular single page application, so we don’t run into timeout issues.

Getting out of deployment hell (as well)!!!

One of the downsides of ADF development is having to build your ADF application, deploy it, and stop and start the server, only to test and find there is a bug that needs to be fixed, and then go through the entire process again. Trust me, it is not quick!

Once you have our framework deployed, you can easily deploy/upload standard JavaScript templates, CSS, and Groovy scripts to Apache or OHS, where they are automatically consumed by our ADF taskflow. There is no stop/start/test cycle. Just upload your updates and refresh the browser!!

I hear Oracle is working to integrate JET with ADF.

Yes, but it’s not there today.
Plus, you’re not stuck with just JET in our framework. You can use React or any other front end framework or library, and you get the benefits of all the additional components, apps, and tooling that the Portal Solution Accelerator provides.

Futures

Our next key release that we are working on will fully support Progressive Web Application taskflow development. To find out more about what a progressive web app is, head over to Google – https://developers.google.com/web/progressive-web-apps/checklist

 

The post Replacing the “V” in Oracle ADF’s MVC design pattern with Oracle JET or other front end framework appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

PTC Windchill Success Story: The Benefits of Moving from PDM to PLM

A Prominent Furniture Manufacturer deploys Fishbowl’s System Generated Drawing Automation to Increase Efficiencies with their Enterprise Part deployment within PTC Windchill

Our client has numerous global manufacturing facilities and is using PTC Windchill to streamline eBOM and mBOM processes. However, not all modifications to parts information propagate automatically or accurately at the drawing level. Updating plant-specific drawings with enterprise part information was a time-consuming process that was manual, error prone, and full of delays, and it diverted valuable engineering resources away from their value-added work.

The client desired a go-forward approach with their Windchill PLM implementation that would automatically update this critical enterprise part information. They became aware of our System Generated Drawing solution from a presentation at PTC LiveWorx. From the time of first contact the Fishbowl Solutions team worked to deliver a solution that helped them realize their vision.

BUSINESS PROBLEMS
  • Manufacturing waste due to ordering obsolete or incorrect parts
  • Manufacturing delays due to drawing updates needed for non-geometric changes – title block, lifecycle, BOM, as well as environmental/regulatory compliance markings, variant designs, etc.
  • Manually updating product drawings with plant specific parts information took away valuable engineering time
SOLUTION HIGHLIGHTS
  • Fishbowl’s System Generated Drawing Automation systematically combines data from BOM, CAD, Drawing/Model, Part Attributes, and enterprise resource planning (ERP) systems
  • Creates complete, static views of drawings based on multiple event triggers
  • Creates a template-based PDF that is overlaid along with the CAD geometry to produce a final document that can be dynamically stamped along with applicable lifecycle and approval information
  • Real-time watermarking on published PDFs
RESULTS

  • Increased accuracy of enterprise parts information included on drawings reduced product manufacturing waste
  • Allowed design changes to move downstream quickly, enabling an increase in design-to-manufacturing operational efficiency

 

“Fishbowl’s System Generated Drawing Automation solution is the linchpin to our enterprise processes. It provides us with an automated method to include, update and proliferate accurate parts information throughout the business. This automation has in turn led to better data integrity, less waste, and more process efficiencies.” -PTC Windchill Admin/Developer

 

For more information about Fishbowl’s solution for System Generated Drawing Automation, Click Here

The post PTC Windchill Success Story: The Benefits of Moving from PDM to PLM appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Webinar: Improve WebCenter Portal Performance by 30% and get out of Oracle ADF Development Hell

DATE: Thursday, March 30th
TIME: 12:00 PM CST, 1:00 PM EST

Join Fishbowl’s Enterprise Architect, Jerry Aber, as he shares recommendations on performance improvements for WebCenter-based portals. Jerry has been delivering portal projects for over 15 years, and has been instrumental in developing a technology framework and methodology that provides repeatable and reusable development patterns for portal deployments and their ongoing administration and management. In this webinar, Jerry will share how leveraging modern web development technologies like Oracle JET, instead of ADF taskflows, can dramatically improve the performance of a portal – including the overall time to load the home page, as well as making content or stylistic changes.

Jerry will also share how to architect a portal implementation to include a caching layer that further enhances performance. These topics will all be backed by real-world customer metrics Jerry and the Fishbowl team have seen through numerous, successful customer deployments.

If you are a WebCenter Portal administrator and are frustrated with the challenges of improving your ADF-centric portal, this webinar is for you. Come learn how to overhaul the ADF UI, which will lead to fewer development complexities and more happy users.

Register today. 

New to Zoom? Go to zoom.us/test to ensure you can access the webinar.

The post Webinar: Improve WebCenter Portal Performance by 30% and get out of Oracle ADF Development Hell appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Visual Studio Code editor support for Cloud Foundry Manifest files

Pas Apicella - Tue, 2017-03-21 19:14
An early BETA version of Cloud Foundry (CF) manifest file support is available in Visual Studio Code. To see this support in action, follow the link below to a video which shows how to install the extension, use code completion, and a bit more.

  https://www.youtube.com/watch?v=Ao6Mx6Q0XKE

With this extension for manifest files, it becomes a pleasure to write and modify CF manifest files. You get content assist, validations, and hover help - even for dynamic content like buildpacks and services (it integrates with the CF CLI for that).
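
As a reminder of what the extension is editing, a minimal CF manifest might look like the following; the application name, buildpack, and service instance name are placeholders.

---
applications:
- name: pas-demo-app        # placeholder application name
  memory: 512M
  instances: 1
  buildpack: java_buildpack # placeholder; any buildpack name or Git URL
  services:
  - my-service-instance     # placeholder bound service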

Some screenshots of this follow -






Categories: Fusion Middleware

dotnet publish - ASP.NET Core app deployed to Pivotal Cloud Foundry

Pas Apicella - Tue, 2017-03-21 06:16
I previously showed how to push an ASP.NET Core application to Pivotal Cloud Foundry by just using the source code files themselves. It turns out this creates a rather large droplet and hence slows down the deployment. So here we are going to take the same demo and use "dotnet publish" to make this a lot faster. The previous post, which is the base for this blog entry, is below.

ASP.NET Core app deployed to Pivotal Cloud Foundry
http://theblasfrompas.blogspot.com.au/2017/03/aspnet-core-app-deployed-to-pivotal.html

First we need to make some changes to our project

1. Open "dotnet-core-mvc.csproj" and add "RuntimeIdentifiers" inside the "PropertyGroup" tag
  
<PropertyGroup>
  <TargetFramework>netcoreapp1.0</TargetFramework>
  <RuntimeIdentifiers>osx.10.10-x64;osx.10.11-x64;ubuntu.14.04-x64;ubuntu.15.04-x64;debian.8-x64</RuntimeIdentifiers>
</PropertyGroup>



2. Perform a "dotnet restore" as shown below either form a terminal windows/prompt or from Visual Studio Code itself , this step is vital and is required

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
....

3. Now let's publish this as Release and ensure we target the correct runtime. For Cloud Foundry (CF) that will be "ubuntu.14.04-x64", and the framework version is 1.0 as we created the application using 1.0; we could have used 1.1 here if we wanted to.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet publish --output ./publish --configuration Release --runtime ubuntu.14.04-x64  --framework netcoreapp1.0
Microsoft (R) Build Engine version 15.1.548.43366
Copyright (C) Microsoft Corporation. All rights reserved.

  dotnet-core-mvc -> /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/bin/Release/netcoreapp1.0/ubuntu.14.04-x64/dotnet-core-mvc.dll

4. Finally, cd into the "publish" folder and verify it contains the required DLLs as well as project files and JSON files: everything needed to run your application.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc/publish$ ls -lartF
total 116848
-rwxr--r--    1 pasapicella  staff    25992 Jun 11  2016 Microsoft.Win32.Primitives.dll*

..

-rwxr--r--    1 pasapicella  staff      168 Mar 16 22:33 appsettings.Development.json*
drwxr-xr-x    7 pasapicella  staff      238 Mar 21 08:01 wwwroot/
-rwxr--r--    1 pasapicella  staff     1332 Mar 21 08:01 dotnet-core-mvc.pdb*
-rwxr--r--    1 pasapicella  staff     8704 Mar 21 08:01 dotnet-core-mvc.dll*
drwxr-xr-x    6 pasapicella  staff      204 Mar 21 08:01 Views/
drwxr-xr-x   16 pasapicella  staff      544 Mar 21 08:01 ../
-rwxr--r--    1 pasapicella  staff      362 Mar 21 08:01 web.config*
drwxr-xr-x   79 pasapicella  staff     2686 Mar 21 08:01 refs/
-rwxr--r--    1 pasapicella  staff       92 Mar 21 08:01 dotnet-core-mvc.runtimeconfig.json*
-rwxr--r--    1 pasapicella  staff   297972 Mar 21 08:01 dotnet-core-mvc.deps.json*
drwxr-xr-x  212 pasapicella  staff     7208 Mar 21 08:01 ./

5. Now this time let's "cf push" using the files in the "publish" folder as shown below

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc/publish$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
Creating app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Using route pas-dotnetcore-mvc-demo.cfapps.io
Binding pas-dotnetcore-mvc-demo.cfapps.io to pas-dotnetcore-mvc-demo...
OK

Uploading pas-dotnetcore-mvc-demo...
Uploading app files from: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/publish
Uploading 14.8M, 280 files
Done uploading
OK

Starting app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
Creating container
Successfully created container
Downloading app package...
Downloaded app package (23.7M)
-----> Buildpack version 1.0.13
ASP.NET Core buildpack version: 1.0.13
ASP.NET Core buildpack starting compile
-----> Restoring files from buildpack cache
       OK
-----> Restoring NuGet packages cache
-----> Extracting libunwind
       libunwind version: 1.2
       OK
       https://buildpacks.cloudfoundry.org/dependencies/manual-binaries/dotnet/libunwind-1.2-linux-x64-f56347d4.tgz
       OK
-----> Saving to buildpack cache
       Copied 38 files from /tmp/app/libunwind to /tmp/cache
       OK
-----> Cleaning staging area
       OK
ASP.NET Core buildpack is done creating the droplet
Exit status 0
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (995K)
Uploaded droplet (23.8M)
Uploading complete
Destroying container
Successfully destroyed container

1 of 1 instances running

App started


OK

App pas-dotnetcore-mvc-demo was started using this command `cd . && ./dotnet-core-mvc --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-dotnetcore-mvc-demo.cfapps.io
last uploaded: Mon Mar 20 21:05:08 UTC 2017
stack: cflinuxfs2
buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

     state     since                    cpu    memory          disk          details
#0   running   2017-03-21 08:06:05 AM   0.0%   39.2M of 512M   66.9M of 1G

Categories: Fusion Middleware

Welcome to the new Fishbowl Solutions Blog

Out with the old and in with the new. Welcome to the new home of the Fishbowl Solutions blog! Please enjoy the upgraded functionality and integration with our website. Check back often for new and exciting posts from our talented staff. If you want automatic updates, click the subscribe link to the right and be notified whenever a new post appears.

The post Welcome to the new Fishbowl Solutions Blog appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

ASP.NET Core app deployed to Pivotal Cloud Foundry

Pas Apicella - Thu, 2017-03-16 22:37
This post will show you how to write your first ASP.NET Core application on macOS or Linux and push it to Pivotal Cloud Foundry without having to PUBLISH it for deployment.

Before getting started you will need the following

1. Download and install .NET Core
2. Visual Studio Code with the C# extension.
3. CF CLI installed https://github.com/cloudfoundry/cli

Steps

Note: This assumes you're already logged into Pivotal Cloud Foundry and connected to Pivotal Web Services (run.pivotal.io); the command below shows I am connected and targeted

pasapicella@pas-macbook:~$ cf target
API endpoint:   https://api.run.pivotal.io
API version:    2.75.0
User:           papicella@pivotal.io
Org:            apples-pivotal-org
Space:          development

1. Create new project

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet new mvc --auth None --framework netcoreapp1.0
Content generation time: 278.4748 ms
The template "ASP.NET Core Web App" created successfully.

2. Restore as follows

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
  Restoring packages for /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/dotnet-core-mvc.csproj...
  Generating MSBuild file /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvco/obj/dotnet-core-mvc.csproj.nuget.g.props.
  Generating MSBuild file /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/dotnet-core-mvc.csproj.nuget.g.targets.
  Writing lock file to disk. Path: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/project.assets.json
  Restore completed in 1.09 sec for /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/dotnet-core-mvc.csproj.

  NuGet Config files used:
      /Users/pasapicella/.nuget/NuGet/NuGet.Config

  Feeds used:
      https://api.nuget.org/v3/index.json

3. At this point we can run the application and see what it looks like in a browser

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet run
Hosting environment: Production
Content root path: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.


Now, to prepare this demo for Pivotal Cloud Foundry we need to make some changes to the generated code, as shown in the next few steps.

4. In Visual Studio Code, under the menu item “File/Open” select the “dotnet-core-mvc” folder and open it. Confirm all messages from Visual Studio Code.



The .NET Core buildpack configures the app web server automatically so you don't have to handle this yourself, but you have to prepare your app in a way that allows the buildpack to deliver this information via the command line to your app.

5. Open "Program.cs" and modify the Main() method as follows adding "var config = ..." and ".UseConfiguration(config)" as shown below
  
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

namespace dotnet_core_mvc
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var config = new ConfigurationBuilder()
                .AddCommandLine(args)
                .Build();

            var host = new WebHostBuilder()
                .UseKestrel()
                .UseConfiguration(config)
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

6. Open "dotnet-core-mvc.csproj" and add the following dependency "Microsoft.Extensions.Configuration.CommandLine" as shown below
  
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.0.4" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.3" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.0.2" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.0.2" />
    <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="1.0.0" />
    <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.0.1" />
  </ItemGroup>

</Project>


7. File -> Save All

8. Jump back out to a terminal window and restore again. You can restore from within the Visual Studio Code IDE, but I still like to do it from the command line.

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
...
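
If you want to sanity-check the change locally before pushing, you can pass an explicit URL on the command line yourself (depending on the CLI version you may need "--" before the application arguments):

$ dotnet run --server.urls http://localhost:5001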

9. Deploy to Pivotal Cloud Foundry as follows. You will need to use a unique application name, so replace "pas" with your own name.

$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m

** Output **

pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
Creating app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Using route pas-dotnetcore-mvc-demo.cfapps.io
Binding pas-dotnetcore-mvc-demo.cfapps.io to pas-dotnetcore-mvc-demo...
OK

Uploading pas-dotnetcore-mvc-demo...
Uploading app files from: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc
Uploading 208.7K, 84 files
Done uploading
OK

Starting app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
Creating container
Successfully created container
Downloading app package...
Downloaded app package (675.5K)
ASP.NET Core buildpack version: 1.0.13
ASP.NET Core buildpack starting compile
-----> Restoring files from buildpack cache
       OK
-----> Restoring NuGet packages cache
       OK
-----> Extracting libunwind
       libunwind version: 1.2
       https://buildpacks.cloudfoundry.org/dependencies/manual-binaries/dotnet/libunwind-1.2-linux-x64-f56347d4.tgz
       OK
-----> Installing .NET SDK
       .NET SDK version: 1.0.1
       OK
-----> Restoring dependencies with Dotnet CLI

       Welcome to .NET Core!
       ---------------------
       Telemetry
       The .NET Core tools collect usage data in order to improve your experience. The data is anonymous and does not include command-line arguments. The data is collected by Microsoft and shared with the community.
       You can opt out of telemetry by setting a DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 using your favorite shell.
       You can read more about .NET Core tools telemetry @ https://aka.ms/dotnet-cli-telemetry.
       Configuring...
       -------------------
       A command is running to initially populate your local package cache, to improve restore speed and enable offline access. This command will take up to a minute to complete and will only happen once.
       Decompressing 100% 16050 ms
-----> Buildpack version 1.0.13
       https://buildpacks.cloudfoundry.org/dependencies/dotnet/dotnet.1.0.1.linux-amd64-99324ccc.tar.gz
       Learn more about .NET Core @ https://aka.ms/dotnet-docs. Use dotnet --help to see available commands or go to https://aka.ms/dotnet-cli-docs.

       --------------

       Expanding 100% 13640 ms
         Restoring packages for /tmp/app/dotnet-core-mvc.csproj...
         Installing Microsoft.Extensions.Configuration 1.0.0.
         Installing Microsoft.Extensions.Configuration.CommandLine 1.0.0.
         Generating MSBuild file /tmp/app/obj/dotnet-core-mvc.csproj.nuget.g.props.
         Writing lock file to disk. Path: /tmp/app/obj/project.assets.json
         Restore completed in 2.7 sec for /tmp/app/dotnet-core-mvc.csproj.

         NuGet Config files used:
             /tmp/app/.nuget/NuGet/NuGet.Config

         Feeds used:
             https://api.nuget.org/v3/index.json

         Installed:
             2 package(s) to /tmp/app/dotnet-core-mvc.csproj
       OK
       Detected .NET Core runtime version(s) 1.0.4, 1.1.1 required according to 'dotnet restore'
-----> Installing required .NET Core runtime(s)
       .NET Core runtime 1.0.4 already installed
       .NET Core runtime 1.1.1 already installed
       OK
-----> Publishing application using Dotnet CLI
       Microsoft (R) Build Engine version 15.1.548.43366
       Copyright (C) Microsoft Corporation. All rights reserved.

         dotnet-core-mvc -> /tmp/app/bin/Debug/netcoreapp1.0/dotnet-core-mvc.dll
       Copied 38 files from /tmp/app/libunwind to /tmp/cache
-----> Saving to buildpack cache
       OK
       Copied 850 files from /tmp/app/.dotnet to /tmp/cache
       Copied 19152 files from /tmp/app/.nuget to /tmp/cache
       OK
-----> Cleaning staging area
       Removing /tmp/app/.nuget
       OK
ASP.NET Core buildpack is done creating the droplet
Exit status 0
Uploading droplet, build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (359.9M)
Uploaded droplet (131.7M)
Uploading complete
Successfully destroyed container

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-dotnetcore-mvc-demo was started using this command `cd .cloudfoundry/dotnet_publish && dotnet dotnet-core-mvc.dll --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-dotnetcore-mvc-demo.cfapps.io
last uploaded: Fri Mar 17 03:19:51 UTC 2017
stack: cflinuxfs2
buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

     state     since                    cpu    memory          disk           details
#0   running   2017-03-17 02:26:03 PM   0.0%   39.1M of 512M   302.7M of 1G

10. Finally, invoke the application using its URL, which you can determine from the output at the end of the push above or by running "cf apps".
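
For example, using the route assigned during the push above (your route will differ):

$ curl -I http://pas-dotnetcore-mvc-demo.cfapps.io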



More Information

https://docs.microsoft.com/en-us/aspnet/core/tutorials/your-first-mac-aspnet
Categories: Fusion Middleware

Run a Spring Cloud Task from Pivotal Cloud Foundry using Cloud Foundry Tasks

Pas Apicella - Fri, 2017-03-10 02:26
Recently we announced Spring Cloud Task under the umbrella of Spring Cloud. In the post below I am going to show you how to create a Cloud Foundry Task that invokes a Spring Cloud Task application.

Spring Cloud Task allows a user to develop and run short-lived microservices using Spring Cloud, running them locally, in the cloud, or even on Spring Cloud Data Flow. In this example we will run one in the cloud using Pivotal Cloud Foundry (the PWS instance at run.pivotal.io). For more information on Spring Cloud Task, follow the link below.

https://cloud.spring.io/spring-cloud-task/

For more information on Cloud Foundry Tasks follow the link below

https://docs.cloudfoundry.org/devguide/using-tasks.html

Steps

Note: This demo assumes you are already logged into PCF. You can confirm that using a command as follows:

pasapicella@pas-macbook:~/temp$ cf target
API endpoint:   https://api.run.pivotal.io
API version:    2.75.0
User:           papicella@pivotal.io
Org:            apples-pivotal-org
Space:          development

Also ensure you are using the correct version of the CF CLI; at the time of this blog it was as follows, and you will need at least this version.

pasapicella@pas-macbook:~/temp$ cf --version
cf version 6.25.0+787326d95.2017-02-28

You will also need an instance of Pivotal Cloud Foundry that supports Tasks within the Applications Manager UI, which Pivotal Web Services (PWS) does.

1. Clone the simple Spring Cloud Task as follows

$ git clone https://github.com/papicella/SpringCloudTaskTodaysDate.git

pasapicella@pas-macbook:~/temp$ git clone https://github.com/papicella/SpringCloudTaskTodaysDate.git
Cloning into 'SpringCloudTaskTodaysDate'...
remote: Counting objects: 19, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 19 (delta 0), reused 19 (delta 0), pack-reused 0
Unpacking objects: 100% (19/19), done.

2. Change into SpringCloudTaskTodaysDate directory

3. If you look at the class "pas.au.pivotal.pa.sct.demo.SpringCloudTaskTodaysDateApplication" you will see it's just a Spring Boot application annotated with "@EnableTask". As long as Spring Cloud Task is on the classpath, any Spring Boot application with @EnableTask will record the start and finish of the application run.
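
As a minimal sketch, such a class looks roughly like the following (illustrative only, not a verbatim copy of the repo's code; it assumes the Spring Cloud Task starter is on the classpath):

package pas.au.pivotal.pa.sct.demo;

import java.util.Date;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@EnableTask // records each run's start time, end time and exit code in the task repository
@SpringBootApplication
public class SpringCloudTaskTodaysDateApplication {

    // The task's short-lived unit of work: log today's date, then exit
    @Bean
    public CommandLineRunner commandLineRunner() {
        return args -> System.out.println("Executed at : " + new Date());
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudTaskTodaysDateApplication.class, args);
    }
}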

4. Package the application using "mvn package"

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ mvn package
[INFO] Scanning for projects...
Downloading: https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-task-dependencies/1.2.0.BUILD-SNAPSHOT/maven-metadata.xml
Downloaded: https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-task-dependencies/1.2.0.BUILD-SNAPSHOT/maven-metadata.xml (809 B at 0.6 KB/sec)
[INFO]

..

[INFO] Building jar: /Users/pasapicella/temp/SpringCloudTaskTodaysDate/target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.2.RELEASE:repackage (default) @ springcloudtasktodaysdate ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.621 s
[INFO] Finished at: 2017-03-10T18:51:15+11:00
[INFO] Final Memory: 29M/199M
[INFO] ------------------------------------------------------------------------

5.  Push the application as shown below

$ cf push springcloudtask-date --no-route --health-check-type none -p ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar -m 512m

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf push springcloudtask-date --no-route --health-check-type none -p ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar -m 512m

Creating app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

App springcloudtask-date is a worker, skipping route creation
Uploading springcloudtask-date...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app069139431
Uploading 239.1K, 89 files

...

1 of 1 instances running

App started


OK

App springcloudtask-date was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher`

Showing health and status for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls:
last uploaded: Fri Mar 10 07:57:17 UTC 2017
stack: cflinuxfs2
buildpack: container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.14-offline-https://github.com/cloudfoundry/java-buildpack.git#d5d58c6 java-main open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10...

     state      since                    cpu    memory         disk         details
#0   starting   2017-03-10 06:58:43 PM   0.0%   936K of 512M   1.3M of 1G


6. Stop the application, as we only want to run it as a CF Task when we are ready to invoke it.

$ cf stop springcloudtask-date

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf stop springcloudtask-date
Stopping app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

7. In a separate terminal window, let's tail the logs from the application as follows. Don't worry that there is no output yet; the application has not yet been invoked through a task.

$ cf logs springcloudtask-date

** Output **

pasapicella@pas-macbook:~$ cf logs springcloudtask-date
Connected, tailing logs for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...


8. Now log into the PWS Apps Manager console and navigate to your application's settings page. On this page you will see the start command for the Spring Boot application, which we will use as the task's invocation command.


9. To invoke the task, we run a command as follows using the invocation command from step #8 above.

Format: cf run-task {app-name} {invocation command}

$ cf run-task springcloudtask-date 'INVOCATION COMMAND from step #8 above'

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf run-task springcloudtask-date 'CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher'
Creating task for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Task has been submitted successfully for execution.
Task name:   371bb9b1
Task id:     1

10. Return to the PWS Applications Manager and click on the "Tasks" tab to verify the run was successful.


11. Return to the terminal window where we were tailing the logs to verify the task was run

pasapicella@pas-macbook:~$ cf logs springcloudtask-date
Connected, tailing logs for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...

2017-03-10T19:15:29.55+1100 [APP/TASK/371bb9b1/0]OUT Creating container
2017-03-10T19:15:29.89+1100 [APP/TASK/371bb9b1/0]OUT Successfully created container
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT   .   ____          _            __ _ _
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT   '  |____| .__|_| |_|_| |_\__, | / / / /
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  =========|_|==============|___/=/_/_/_/
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  :: Spring Boot ::        (v1.5.2.RELEASE)
2017-03-10T19:15:34.71+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.706  INFO 7 --- [           main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext
2017-03-10T19:15:34.85+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.853  INFO 7 --- [           main] nfigurationApplicationContextInitializer : Adding cloud service auto-reconfiguration to ApplicationContext
2017-03-10T19:15:34.89+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.891  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : The following profiles are active: cloud
2017-03-10T19:15:34.89+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.890  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Starting SpringCloudTaskTodaysDateApplication on b00b045e-dea4-4e66-8298-19dd71edb9c8 with PID 7 (/home/vcap/app/BOOT-INF/classes started by vcap in /home/vcap/app)
2017-03-10T19:15:35.00+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.009  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a07c5b4: startup date [Fri Mar 10 08:15:35 UTC 2017]; root of context hierarchy
2017-03-10T19:15:35.91+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.912  INFO 7 --- [           main] urceCloudServiceBeanFactoryPostProcessor : Auto-reconfiguring beans of type javax.sql.DataSource
2017-03-10T19:15:35.91+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.916  INFO 7 --- [           main] urceCloudServiceBeanFactoryPostProcessor : No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
2017-03-10T19:15:36.26+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.259 DEBUG 7 --- [           main] o.s.c.t.c.SimpleTaskConfiguration        : Using org.springframework.cloud.task.configuration.DefaultTaskConfigurer TaskConfigurer
2017-03-10T19:15:36.74+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.748  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2017-03-10T19:15:36.75+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.758 DEBUG 7 --- [           main] o.s.c.t.r.support.SimpleTaskRepository   : Creating: TaskExecution{executionId=0, parentExecutionId=null, exitCode=null, taskName='DateSpringCloudTask:cloud:', startTime=Fri Mar 10 08:15:36 UTC 2017, endTime=null, exitMessage='null', externalExecutionId='null', errorMessage='null', arguments=[]}
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.776 DEBUG 7 --- [           main] o.s.c.t.r.support.SimpleTaskRepository   : Updating: TaskExecution with executionId=0 with the following {exitCode=0, endTime=Fri Mar 10 08:15:36 UTC 2017, exitMessage='null', errorMessage='null'}
2017-03-10T19:15:36.75+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.757  INFO 7 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.775  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Executed at : 3/10/17 8:15 AM
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.777  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a07c5b4: startup date [Fri Mar 10 08:15:35 UTC 2017]; root of context hierarchy
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.779  INFO 7 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Stopping beans in phase 0
2017-03-10T19:15:36.78+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.782  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Unregistering JMX-exposed beans on shutdown
2017-03-10T19:15:36.78+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.788  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Started SpringCloudTaskTodaysDateApplication in 3.205 seconds (JVM running for 3.985)
2017-03-10T19:15:36.83+1100 [APP/TASK/371bb9b1/0]OUT Exit status 0
2017-03-10T19:15:36.86+1100 [APP/TASK/371bb9b1/0]OUT Destroying container
2017-03-10T19:15:37.79+1100 [APP/TASK/371bb9b1/0]OUT Successfully destroyed container

12. Finally, you can verify task executions using a command as follows

$ cf tasks springcloudtask-date

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf tasks springcloudtask-date
Getting tasks for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

id   name       state       start time                      command
1    371bb9b1   SUCCEEDED   Fri, 10 Mar 2017 08:15:28 UTC   CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher


Categories: Fusion Middleware

ASP .NET Core (CLR) on Pivotal Cloud Foundry

Pas Apicella - Wed, 2017-03-08 04:57
There are two ways to run .NET applications on Pivotal Cloud Foundry. In short, they are as follows:

  1. Windows 2012 R2 Stack (Windows 2016 coming soon)
  2. Linux Stack - ASP.NET Core CLR only

In the example below I am going to show how you would push a sample ASP.NET Core application using the default Linux stack. I am using run.pivotal.io, better known as PWS (Pivotal Web Services), which only supports the Linux stack. In your own PCF installation an operator may have provided Windows support; running "cf stacks" is one way to find out, as shown below.

$ cf stacks
Getting stacks in org pivot-papicella / space development as papicella@pivotal.io...
OK

name            description
cflinuxfs2      Cloud Foundry Linux-based filesystem
windows2012R2   Microsoft Windows / .Net 64 bit

Steps

1. Clone a demo as shown below

$ git clone https://github.com/bingosummer/aspnet-core-helloworld.git
Cloning into 'aspnet-core-helloworld'...
remote: Counting objects: 206, done.
remote: Total 206 (delta 0), reused 0 (delta 0), pack-reused 206
Receiving objects: 100% (206/206), 43.40 KiB | 0 bytes/s, done.
Resolving deltas: 100% (78/78), done.

2. Change to the right directory as shown below

$ cd aspnet-core-helloworld

3. Edit manifest.yml to use the BETA buildpack as follows. You can list the available buildpacks using "cf buildpacks".

---
applications:
- name: sample-aspnetcore-helloworld
  random-route: true
  memory: 512M
  buildpack: dotnet_core_buildpack_beta

4. Push as shown below

pasapicella@pas-macbook:~/apps/dotnet/aspnet-core-helloworld$ cf push
Using manifest file /Users/pasapicella/apps/dotnet/aspnet-core-helloworld/manifest.yml

Updating app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Uploading sample-aspnetcore-helloworld...
Uploading app files from: /Users/pasapicella/pivotal/apps/dotnet/aspnet-core-helloworld
Uploading 21.9K, 15 files
Done uploading
OK

Stopping app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Starting app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
Downloading dotnet_core_buildpack_beta...
Downloaded dotnet_core_buildpack_beta
Creating container
Successfully created container
Downloading build artifacts cache...
Downloading app package...
Downloaded app package (21.5K)
Downloaded build artifacts cache (157.7M)

...

-----> Saving to buildpack cache
    Copied 0 files from /tmp/app/libunwind to /tmp/cache
    Copied 0 files from /tmp/app/.dotnet to /tmp/cache
    Copied 0 files from /tmp/app/.nuget to /tmp/cache
    OK
ASP.NET Core buildpack is done creating the droplet
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (157.7M)
Uploaded droplet (157.7M)
Uploading complete
Destroying container
Successfully destroyed container

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App sample-aspnetcore-helloworld was started using this command `dotnet run --project src/dotnetstarter --server.urls http://0.0.0.0:${PORT}`

Showing health and status for app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: sample-aspnetcore-helloworld-gruffier-jackpot.cfapps.io
last uploaded: Wed Mar 8 10:46:44 UTC 2017
stack: cflinuxfs2
buildpack: dotnet_core_buildpack_beta

     state     since                    cpu     memory          disk           details
#0   running   2017-03-08 09:48:29 PM   22.4%   36.7M of 512M   556.8M of 1G


Verify the application using the URL given at the end of the push.
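
For example, using the random route generated for my push (yours will differ):

$ curl http://sample-aspnetcore-helloworld-gruffier-jackpot.cfapps.io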

Categories: Fusion Middleware
