Feed aggregator

Updates to Oracle Cloud Infrastructure CLI

OTN TechBlog - Fri, 2017-12-01 15:01

We’ve been hard at work the last few months making updates to our command line interface for Oracle Cloud Infrastructure, and wanted to take a minute to share some of the new functionality! The full list of new features and services can be found in our changelog on GitHub, and below are a few core features we wanted to call out specifically:

Defaults

We know how tedious it can be to type out the same values again and again while using the CLI, so we have added the ability to specify default values for parameters. The example below shows a sample oci_cli_rc file which sets two defaults: one at a global level which will be applied to all operations with a --compartment-id parameter, and one for only ‘os’ (object storage) commands which will be applied to all ‘os’ commands with a --namespace parameter.

Content of ~/.oci/oci_cli_rc:

[DEFAULT]
# globally scoped default for all operations with a --compartment-id parameter
compartment-id=ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2…
# default for --namespace scoped specifically to Object Storage commands
os.namespace=mynamespace

Example commands that no longer need explicit parameters:

oci compute instance list   # no --compartment-id needed
oci os bucket list          # no --compartment-id or --namespace needed


Command and parameter aliases

To help with specifying long command and parameter names, we have also added support for defining aliases. The example oci_cli_rc file below shows examples of defining aliases for commands and parameters:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_PARAM_ALIASES]
--ad=--availability-domain
-a=--availability-domain
--dn=--display-name

[OCI_CLI_COMMAND_ALIASES]
# This lets you use "ls" instead of "list" for any list command in the CLI (e.g. oci compute instance ls)
ls = list
# This lets you do "oci os object rm" rather than "oci os object delete"
rm = os.object.delete

Table output

JSON output is great for parsing but can be problematic when it comes to readability on the command line. To help with this, we have added a table output format, which can be triggered for any operation by supplying --output table. This also makes it easier to use common tools like grep and awk on the CLI output to grab specific records from a table. See the section on JMESPath below for how you can filter data to make your table output more concise.
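
For instance, once the data is in table form (such as the region listing in the example below), a single record can be grabbed with a plain grep:

oci iam region list --output table | grep eu-frankfurt-1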

Here is an example command and output:

oci iam region list --output table

+-----+--------------------+
| key | name               |
+-----+--------------------+
| FRA | eu-frankfurt-1     |
| IAD | us-ashburn-1       |
| PHX | us-phoenix-1       |
+-----+--------------------+

JMESPath queries

Oftentimes a CLI operation will return more data than you are interested in. To help with filtering and querying data from CLI responses, we have added the --query option, which allows running arbitrary JMESPath (http://jmespath.org/) queries on the CLI output before the data is returned.

For example, you may want to list all of the instances in your compartment but only want to see the display-name and lifecycle-state. You can do this with the following query:

# using the oci_cli_rc file from above so we don't have to specify --compartment-id
oci compute instance list --query 'data[*].{"display-name":"display-name","lifecycle-state":"lifecycle-state"}'

This is especially convenient for use with table output so you can limit the output to a size that will fit in your terminal.
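
Putting the two together, the query from above combined with --output table produces a compact listing that fits comfortably in a terminal:

oci compute instance list --output table --query 'data[*].{"display-name":"display-name","lifecycle-state":"lifecycle-state"}'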

You can also define queries in your oci_cli_rc file and reference them by name so you don’t have to type out complex queries, for example:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_CANNED_QUERIES]
get_id_and_display_name_from_list=data[*].{id: id, "display-name": "display-name"}

Example command:

oci compute instance list -c $C --query query://get_id_and_display_name_from_list

To help you get started with some of these features, we have added the command 'oci setup oci-cli-rc', which generates a sample oci_cli_rc file with examples of canned queries, defaults, and parameter / command aliases.

JSON Input made easier

We have made a number of improvements to how our CLI works with complex parameters that require JSON input:

Reading JSON parameters from a file:

For any parameter marked as a "COMPLEX TYPE" you can now specify the value to be read from a file using the "file://" prefix instead of needing to format a JSON string on the command line. For example:

oci iam policy create --statements file://statements.json

Generate JSON skeletons for a single parameter

To help with specifying JSON input from a file we have added --generate-param-json-input to each command with complex parameters to enable generating a JSON template for a given input parameter. For example, if you are not sure of the format for the oci iam policy create --statements parameter you can issue the following command to generate a template:

oci iam policy create --generate-param-json-input statements

output:
[
  "string",
  "string"
]

You can then fill out this template and specify it as the input to a create policy call like so:

oci iam policy create --statements file://statements.json
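
For illustration, a filled-in statements.json could then look like this (the statement shown is just an example policy statement; use whichever statements your policy needs):

[
  "Allow group Administrators to manage all-resources in tenancy"
]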

Generate JSON skeletons for full command input

We also support generating a JSON skeleton for the full command input. A common workflow with this parameter is to dump the full JSON skeleton to a file, edit the file with the input values you want, and then execute the command using that file as input. Here is an example:

# command to emit full JSON skeleton for command to a file input.json
oci os preauth-request create --generate-full-command-json-input > input.json

# view content of input.json and edit values
cat input.json
{
  "accessType": "ObjectRead|ObjectWrite|ObjectReadWrite|AnyObjectWrite",
  "bucketName": "string",
  "name": "string",
  "namespace": "string",
  "objectName": "string",
  "opcClientRequestId": "string",
  "timeExpires": "2017-01-01T00:00:00.000000+00:00"
}

# run create pre-authenticated request with the values specified from a file
oci os preauth-request create --from-json file://input.json

Windows auto-complete for PowerShell

We have now added tab completion for Windows PowerShell! Completion works on commands and parameters and can be enabled with the following command:

oci setup autocomplete

For more in-depth documentation on these features and more, check out our main CLI documentation page here.


Secure Oracle E-Business Suite 12.2 with Cookie Domain Scoping

Steven Chan - Fri, 2017-12-01 11:56

A cookie is a mechanism for storing state across requests to a web site. When a site is accessed, a user's browser uses the cookie to store information such as a session identifier. When the site is accessed on a future occasion, the information in the cookie can be reused. If a domain is not specified, then the browser does not send the cookie beyond the originating host.

The Oracle E-Business Suite 12.2 Cookie Domain Scoping feature allows you to define the scope of the cookie. Your scoping configuration requirements will be dictated by the external integrations used with your Oracle E-Business Suite environment and your network configuration. Refer to the documentation for more information regarding your configuration requirements.
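
For illustration only (the cookie name and domain are made-up values; in Oracle E-Business Suite the cookie attributes are set by the configuration, not by hand), a host-only cookie and a domain-scoped cookie look roughly like this in an HTTP response:

Set-Cookie: SESSION=abc123; Path=/; HttpOnly
Set-Cookie: SESSION=abc123; Domain=.example.com; Path=/; HttpOnly

The first cookie is sent back only to the originating host; the second is sent to any host under example.com.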

Where can I learn more?

Related Articles

References

Categories: APPS Blogs

Announcing Mobile Authentication Plugin for Apache Cordova, and More!

OTN TechBlog - Fri, 2017-12-01 11:19

We are excited to announce the open source release on GitHub of the cordova-plugin-oracle-idm-auth plugin for Apache Cordova, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

This plugin provides a simple JavaScript API for performing complex authentication, powered by a native SDK developed by the Oracle Access Management Mobile & Social (OAMMS) team. The SDK has been tested and verified against Oracle Access Manager (OAM) and Oracle Identity Cloud Service (IDCS), and is compatible with other third-party authentication applications that support Basic Authentication, OAuth, Web SSO or OpenID Connect.

Whilst the plugin is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any Cordova-based app targeting Android or iOS.

Most mobile authentication scenarios are complex, often requiring interaction with the native operating system for use cases such as:

  • Retrieving authentication tokens and cookies following successful authentication
  • Securely storing tokens and user credentials
  • Performing offline authentication and automatic login

Writing code to handle each of the required authentication scenarios, especially within hybrid mobile applications, is tedious and can be error-prone.

The cordova-plugin-oracle-idm-auth plugin significantly reduces the amount of coding required to successfully authenticate your users and handle various error cases, by abstracting the complex logic behind a set of simple JavaScript APIs, thus allowing you to focus on implementation of your mobile app’s functional aspects.

To add this plugin to your Oracle JET app:

$ ojet add plugin cordova-plugin-oracle-idm-auth


To know more about the Oracle JET CLI, visit the ojet-cli project.

To add this plugin to your plain Apache Cordova app:

$ cordova plugin add cordova-plugin-oracle-idm-auth


Although the plugin itself contains detailed documentation, stay tuned for more technical posts describing common usage scenarios.

The release of this plugin continues Oracle’s commitment to the open source Apache Cordova community, along with these previously released plugins:

Hope you enjoy, and if you have any feedback, please submit issues to our Cordova projects on GitHub.

For more technical articles, you can also follow OracleDevs on Medium.com.


Enters Amazon Aurora Serverless

Pakistan's First Oracle Blog - Thu, 2017-11-30 23:06
More often than not, database administrators, whatever the technology, have to fight high load on their databases. It could be ad hoc queries, urgent reports, overrun jobs, or simply a high frequency and volume of queries from end users.

DBAs try their best to do generous capacity planning to ensure optimal response time and throughput for end users, but there are various scenarios where it becomes very hard to predict the demand. Storage and processing needs under unpredictable load are hard to foretell in advance.





Cloud computing offers the promise of unmatched scalability for processing and storage needs. Amazon AWS has introduced a new service which gets closer to that ultimate scalability. Amazon Aurora is a hosted relational database service by AWS. You set your instance size and storage needs while setting Aurora up. If your processing requirements change, you change your instance size, and if you need more read throughput, you add more read replicas.

But that is good for the loads we know about and can more or less predict. What about the loads which appear out of the blue? Maybe a blogging site, where some post has suddenly gone viral and has started getting millions of views instead of hundreds? And then the traffic disappears as suddenly as it appeared out of nowhere, and maybe after some days the same happens for another post?

In this case, if you are running Amazon Aurora, it would be fairly expensive to just increase the instance size or add read replicas in anticipation of a traffic burst that might or might not come.

Faced with this uncertainty, enter Amazon Aurora Serverless. With Serverless Aurora, you don't select your instance size. You simply specify an endpoint, and all the queries are routed to that endpoint. Behind that endpoint lies a warm proxy fleet of database capacity which can scale to your requirements within 5 seconds.

It's all on-demand and ideal for transient, spiky loads. What's sweeter is that billing is on a per-second basis, deals in Aurora capacity units, and has a 1-minute minimum for each newly addressed resource.
Categories: DBA Blogs

Oracle provider for OLE DB (OraOLEDB) 11.2.0.1.0 unable to connect to Oracle DB 10 Release 2

Tom Kyte - Thu, 2017-11-30 22:26
I have installed Oracle provider for OLE DB (OraOLEDB) 11.2.0.1.0 on a server to allow our SIEM to connect to our customer's Oracle DB 10g R2 for monitoring purposes. However, I'm still getting an error saying "ORA-12541: TNS:no listener" when I test...
Categories: DBA Blogs

pragma autonomous_transaction; and database links

Tom Kyte - Thu, 2017-11-30 22:26
I have a package of functions that return data from a SqlServer database through a link. Usually the results are just displayed in optional fields on a web page or client program. They take the form of: <code> function get_info(ar_key number) ...
Categories: DBA Blogs

Installing Visual Studio Code on Oracle Linux 7

Wim Coekaerts - Thu, 2017-11-30 17:43

Visual Studio Code is a popular editor. There is an RPM available for "el7" from the Microsoft yumrepo. This RPM can be manually downloaded on Oracle Linux 7 and installed with # yum localinstall code...  or # rpm -ivh code... but it's easier to just create a yum repo file so that you can just do # yum install code and # yum update code.

Here's an example. On Oracle Linux 7 (not 6), as user root:

# cd /etc/yum.repos.d

create a file, let's say vscode.repo with the following content:

[vscode]
name=vscode
baseurl=https://packages.microsoft.com/yumrepos/vscode/
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
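
If you prefer a single copy-and-paste step, the same file can be created with a heredoc (run as root; this is equivalent to creating the file in an editor):

# cat > /etc/yum.repos.d/vscode.repo <<'EOF'
[vscode]
name=vscode
baseurl=https://packages.microsoft.com/yumrepos/vscode/
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
EOF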


and now you can just do

# yum install code
Loaded plugins: langpacks, ulninfo
vscode                                                   | 2.9 kB     00:00     
Resolving Dependencies
--> Running transaction check
---> Package code.x86_64 0:1.18.1-1510857496.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package      Arch           Version                       Repository      Size
================================================================================
Installing:
 code         x86_64         1.18.1-1510857496.el7         vscode          63 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 63 M
Installed size: 186 M
Is this ok [y/d/N]: y
Downloading packages:
code-1.18.1-1510857496.el7.x86_64.rpm                      |  63 MB   00:41     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : code-1.18.1-1510857496.el7.x86_64                            1/1
  Verifying  : code-1.18.1-1510857496.el7.x86_64                            1/1

Installed:
  code.x86_64 0:1.18.1-1510857496.el7                                           

Complete!

That's it.


OpenJDK 9: Limitations/shortcomings of the Jshell

Dietrich Schroff - Thu, 2017-11-30 13:56
Jshell is a quite powerful tool to do some quick explorations and to write some scripts.

I found the following limitations:
  • No syntax highlighting
  • Only comments with // are allowed.
    If you open a script file containing
     /**
      *  My comments
      */
    this will work, but with many warnings:
    |  Error:
    |  illegal start of expression
    |   * Copyright (c) 1995, 2008, Oracle and/or its affiliates. All rights reserved.
    |   ^
    |  Error:
    |  illegal start of expression
    |   *
    |   ^
  • The usage of public, static, ... is quite different:
    -> public class MyClass {
    >> public int a;
    >> };
    |  Warning:
    |  Modifier 'public'  not permitted in top-level declarations, ignored
    |  public class MyClass {
    |  ^----^
    |  Added class MyClass
  • If you want to load a script with /open then tab completion does not work. The complete path including filename has to be typed / pasted.
  • If you load some example snippet, main(String[] args) is not run by default.
    Just tried with the swing tutorial from oracle:
    jshell Downloads/HelloWorldSwing.java
    -> String[] mystringarray;
    |  Added variable mystringarray of type String[]

    -> HelloWorldSwing.main(mystringarray); 
  • .. 
to be continued...

Top 12 Rank Tracking Software and services

Nilesh Jethwa - Thu, 2017-11-30 13:23

The role of Search Engine Optimization and keyword tracking tools are important in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically designed to look for rank and popularity.

Rank tracking software gauges the performance of a business. Using a rank tracker tool allows one to evaluate and track rankings in search engines as well as gauge the range of one's visibility to the prospective market. It can also be used to observe the progress of competitors in the market. This knowledge can act as an edge and a chance for improvement.

Monitoring your business using a keyword rank checker can offer numerous benefits to a business enterprise, but before that can happen, you must first find the rank tracker tool that suits you.

Here is a list of the top rank tracking software and services:

  1. ADVANCE WEB RANKING (AWR)

Founded in 2002, this company offers an extensive variety of keyword tracking tools. It prides itself on providing worry-free reporting that allows a user to focus on website optimization rather than tracking. It offers both a cloud service and desktop software.

Features and benefits of this rank tracker:

  • Allows white label reporting
  • It offers unlimited observation of campaigns and websites competition
  • Allows easy assignment of read/write permissions
  • This keyword rank checker has an international customer base that supports special characters
  • Simple project setup and easy sharing of data online
  • User-intuitive interface and can be accessed by mobile devices
  • Because of its custom Google locations, it can support international markets and gets results from over 50 countries.

What makes it better?

Compared to its competitors, Advanced Web Ranking can give localized rankings with pinpoint accuracy. Geolocation proxies in the company's system are the reason for this better accuracy. It also offers a free 30-day trial.

But what’s the price?

The least expensive option is $49 per month and the highest tier is $499. The pricing is a bit costly, but given the features, it is understandable.


Read more at https://www.infocaptor.com/dashboard/top-12-rank-tracking-software-and-services-for-your-business-needs

ARM, YUM, Cloud, containers,...

Wim Coekaerts - Thu, 2017-11-30 11:43

It's been a while since my last post so a lot of stuff has been going on! This one will be a random collection of things that I want to point out. I will have to use a lot of tags to keep search engines happy here :-)

Where to start...

Preview release : Oracle Linux 7 for ARM64 (aarch64)

Given the growing interest in ARM64, we created a publicly available, free-download version of OL7 for ARM64 (no registration keys, no access codes, no authentication codes). You can go download it here: http://www.oracle.com/technetwork/server-storage/linux/downloads/oracle-linux-arm-4072846.html

We have an ISO you can install on a few available ARM64 servers; more servers will be tested and added over time (see the release notes). We also created a little runtime image for the RPI3, so you can easily try it out in minutes on a cheap, readily available platform.

Tons of RPMs have been built and are on http://yum.oracle.com (specifically: http://yum.oracle.com/repo/OracleLinux/OL7/latest/aarch64/index.html ). We currently use a 4.13 kernel, but that will soon move to 4.14 (the basis for the next version of UEK).

One of the reasons we do a preview release right now and not GA is because it's still a fast-moving target. Lots of kernel changes are coming; we're looking at providing the latest toolchain (gcc7), creating a good public developer program around Oracle Linux for ARM64, and introducing new platforms over the next several months that might require adding new drivers, compiling the binaries with better optimizations, etc. So right now I would not want to call this Generally Available. It's certainly in a good state for developers to start using and get their feet wet, and for partners that are interested in ARM to start porting apps and work with us as we improve performance and build out the developer ecosystem. It's certainly an exciting development. We're working on all the usual things: ksplice, dtrace, lots of server-side enhancements that are still missing, testing of kvm, seeing if we can build even the kernel with gcc 7.2, picking the right chip to target for optimizations...

New packages for Oracle Linux

Over the last several months we started adding a ton of new RPMs on yum to make it easier for admins and developers that want newer stuff that's just not typically available directly from the Enterprise Linux vendor side.

We track the latest versions of terraform (and the OCI provider for terraform), and we released dotnet 2.0, powershell updates, over 1,000 RPMs added from the EPEL repository, and docker 17.06. We packaged the OCI SDK and CLI into RPMs to make them easy to install (no need to run pip install).
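
As a sketch, installing the packaged CLI then becomes a single yum command (the package name below is my assumption of how it is published; check the yum.oracle.com listing for the exact name):

# yum install python-oci-cli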

For the nitpickers - as I mentioned previously, we are just replicating EPEL, we are not 'forking' it, and we are not modifying source. The intent is to have it available from the same 'location', signed by us, built by us, and tested together in terms of dependencies. It's still EPEL. If we were to find bugs or whatever, we'd get that fixed on the EPEL source side. No other intent... just to re-iterate that.

"What's new" on yum

Since we do a lot of package updates on yum.oracle.com, we added a what's new page. It lists new RPMs that are published every day, and we keep 6 months of history. This way you can easily see if something got updated without having to run yum commands on a server.

Kernel Blog

In order to be more public about the type of development projects we have going on, we are finally back to writing regular articles about various kernel projects. You can find that here. It's a random collection of things developers will write up, stuff they worked on in the past or something like that. It gives a bit more context than just seeing commit messages. We started this way back when, then it went dormant but we picked it up again. Some good stuff can be found there.

Linux NFS appliance image for Oracle Cloud Infrastructure

Regular updates continue on our Linux NFS appliance image that can be found here. An easy way to create a Linux-based NFS server in your own tenancy. It's not an NFS service, it's just a standard Oracle Linux image that creates an NFS  server setup.

Oracle Container Registry

A reminder that we have replicas of the Oracle Container Registry in each of the Oracle Cloud Infrastructure regions, for fast in-region access to our docker images.

container-registry-ash.oracle.com (Ashburn datacenter)

container-registry-phx.oracle.com (Phoenix datacenter)

container-registry-fra.oracle.com (Frankfurt datacenter)

These registries are also externally accessible so you can use them from wherever you are. Pick the one that's fastest for you.
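
For example, to pull the Oracle Linux base image from the Phoenix replica (the repository path and tag are illustrative; use whatever image path the registry lists):

docker pull container-registry-phx.oracle.com/os/oraclelinux:7-slim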

We will introduce yum replicas soon as well.


Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots

Yann Neuhaus - Thu, 2017-11-30 04:58

If you store your databases on the ACFS cluster filesystem, you may use the Perl script gDBClone provided on OTN to clone databases or create snapshot databases. It is an interesting approach for creating clones of the Production DB in minutes, regardless of the production DB size. What you do is create a standby DB from your production DB on a separate cluster and use that standby DB as a Gold/Master copy for ACFS snapshots.

In a Production environment with Data Guard Broker we wanted to use that technique, but were confronted with an issue:

The Production DB already had a physical standby DB with the Data Guard Broker running. The protection mode was MaxAvailability, which means transport of the redo in sync mode. The master/gold copy to take the snapshots from should receive the redo data in async mode. How to achieve that?

Two parameters that are actually not very commonly used in a Broker configuration are:


ExternalDestination1
ExternalDestination2

With those parameters (which are available in 11.2.0.4 and 12.1.0.2 onwards) you actually can send your redo to a destination in async mode. The parameters are documented as follows:

The ExternalDestination1 configuration property is used to specify a redo transport destination that can receive redo data from the current primary database. To set up transport of redo data to the specified destination, the broker uses the values specified for this parameter to define a LOG_ARCHIVE_DEST_n initialization parameter on the primary database. The broker also monitors the health of the transport to the specified destination.

After a role change, the broker automatically sets up a LOG_ARCHIVE_DEST_n initialization parameter on the new primary database to ship redo data to the specified destination.

I.e. you can set the parameter the same as LOG_ARCHIVE_DEST_n, but the following options are not allowed:

ALTERNATE
DELAY
LOCATION
MANDATORY
MAX_FAILURE
NET_TIMEOUT
SYNC
VALID_FOR

So let’s assume I created my DB GOLDCOP as a standby DB using the rman duplicate command


RMAN> duplicate target database for standby from active database dorecover nofilenamecheck;

or alternatively using


# ./gDBClone clone -sdbname PRIM -sdbscan scoda7 -tdbname GOLDCOP -tdbhome OraDb11g_home1 -dataacfs /cloudfs -standby

In the broker configuration I added the DB GOLDCOP as follows:

DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
STBY - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS
 
DGMGRL> edit configuration set property ExternalDestination1 = 'service=goldcop db_unique_name=GOLDCOP noaffirm async';
Property "externaldestination1" updated
DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
STBY - Physical standby database
GOLDCOP - External destination 1
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS

Let’s check if I really do NOAFFIRM ASYNC redo transport on PRIM:

SQL> select DEST_NAME, DB_UNIQUE_NAME, AFFIRM, TRANSMIT_MODE from v$archive_dest where dest_id in (2,3);
 
DEST_NAME DB_UNIQUE_NAME AFF TRANSMIT_MOD
-------------------------------- ------------------------------ --- ------------
LOG_ARCHIVE_DEST_2 STBY YES PARALLELSYNC
LOG_ARCHIVE_DEST_3 GOLDCOP NO ASYNCHRONOUS

The external destination is not a database in the configuration:

DGMGRL> show database "GOLDCOP";
Object "GOLDCOP" was not found

But the transport to its destination is monitored by the Broker. I.e. when shutting down the DB GOLDCOP I do get an error:

DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
Error: ORA-16778: redo transport error for one or more databases
 
STBY - Physical standby database
GOLDCOP - External destination 1
 
Fast-Start Failover: DISABLED
 
Configuration Status:
ERROR
 
DGMGRL> show instance "PRIM";
 
Instance 'PRIM' of database 'PRIM'
 
Instance Error(s):
ORA-16737: the redo transport service for standby database "GOLDCOP" has an error
 
Instance Status:
ERROR

As the External destination 1 is not “a database” in the broker configuration, it actually also does not matter if the broker is started (dg_broker_start=TRUE) at the external destination GOLDCOP or not.
To start applying redo on the external destination, you have to start managed recovery as you would without a broker configuration:

alter database recover managed standby database using current logfile disconnect from session;

And redo real time apply is happening on GOLDCOP:

SQL> select name,value
2 from v$dataguard_stats
3 where name in ('apply lag','transport lag');
 
NAME VALUE
-------------------- --------------------
transport lag +00 00:00:00
apply lag +00 00:00:00
 
SQL>
SQL> select inst_id,process,pid,status,thread#,sequence#, block#
2 from gv$managed_standby
3 where process like 'MRP%';
 
INST_ID PROCESS PID STATUS THREAD# SEQUENCE# BLOCK#
---------- --------- ---------- ------------ ---------- ---------- ----------
1 MRP0 5155 APPLYING_LOG 1 50 420

To make the external destination self-managing, I set the archivelog deletion policy on GOLDCOP to

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

in rman so that applied archives become reclaimable automatically in the fast recovery area. In addition I set

fal_server='PRIM'

on GOLDCOP to ensure that archive gaps can be resolved.

The pro’s of above configuration are:
– the GOLDCOP-DB does not cause much overhead for my Production DB (async redo transport).
– Decoupling my GOLDCOP DB from Primary (temporarily) is fast and easy:
edit configuration set property ExternalDestination1 = '';

REMARK: Of course I do also have the other advantages of the gDBClone-approach:
– A production copy on a separate cluster which serves as a gold-copy to take snapshots from for testing or development purposes.
– Creating a snapshot database takes minutes regardless of the DB size.

Con’s:
– I have to take care to start managed standby database recovery on my GOLDCOP-DB. I.e. the same as when running data guard without the Broker.

To create a snapshot DB I just do something like:


# ./gDBClone snap -sdbname GOLDCOP -tdbname PRODCOP1

Et voilà a production copy in 2 minutes.

If PRODCOP1 is no longer needed I can delete it:


# ./gDBClone deldb -tdbname PRODCOP1

Besides using the configuration property ExternalDestination1 there are other possibilities in 12c to run a standby DB as a master copy for snapshots without affecting the production system (like e.g. the support of cascaded standby DBs in the Broker), but I still think that the external destinations feature offers a good possibility to run a master copy.


The article Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots first appeared on the dbi services blog.

nVision Performance Tuning: 11. Excel -v- OpenXML

David Kurtz - Thu, 2017-11-30 04:47
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The general objective of the performance tuning changes described in this series of blog posts has been to improve the performance of individual nVision reports, but also to allow many reports to execute concurrently.
However, if you use Excel 2010, Excel 2013 or above, then you may notice run times are significantly longer than with Excel 2007.  Also, from PeopleTools 8.54, Excel 2007 is no longer certified.
The problem is discussed in Oracle support note E-NV: nVision Performance using Microsoft Office 2010 and above (Doc ID 1537491.1).  Essentially, Excel 2010 upwards only runs single threaded.  Only one Excel nVision process that is not waiting for a database call to return can run concurrently on any one Windows server at any one time.  If you want to be able to run 10 concurrent nVision reports you would need to run one on each of 10 process schedulers, on 10 different windows servers.
From PT8.54, OpenXML is the default and preferred engine for executing nVision reports on the process scheduler.  This uses a different PeopleSoft executable (PSNVSSRV).  It does not suffer from the single-threading problem, so multiple processes can run concurrently.  It can also be run on non-Windows environments.
However, there are some limitations with OpenXML:
  • Excel macros are ignored during report generation, although macros can be put into a layout that will execute when the generated report is subsequently opened in Excel.
  • There are problems with nPlosion.  
  • Any print area set in the layout is lost.
  • When rerunning nVision to file any pre-existing file is not overwritten.
Therefore, it may be necessary to continue to run some nVision reports on Excel.  This would require:
  • Separate process schedulers configured to run Excel rather than OpenXML on each available Windows server.  Excel is selected by setting the variable Excel Automation = 1, in the nVision section of the process scheduler configuration file (psprcs.cfg).  
  • A new Excel nVision process type should be configured to run specific layouts or reportbooks on Excel.  
  • That new process type should only run on these additional process schedulers.  It should have a maximum concurrence of 1, or at most 2, on each Process Scheduler.  These schedulers should be configured to run this new process type (and a single Application Engine so that the purge process can run).

FMW, GoldenGate & Apps DBA + FREE Training This Week

Online Apps DBA - Thu, 2017-11-30 03:06

[K21Academy Weekly Newsletter] 171130 Subject: FMW, GoldenGate & Apps DBA + FREE Training This Week In this week's issue, you will find: 1. [Facebook Live] SSL/TLS Oracle Fusion Middleware & EBS (R12) 2. WebLogic / Oracle FMW to RAC Database connection: Using Active GridLink ? 3. [Video] Oracle GoldenGate: What Why And How To Learn 4. ADOP (R12.2 Online […]

The post FMW, GoldenGate & Apps DBA + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Advanced Oracle Troubleshooting seminar in 2018!

Tanel Poder - Wed, 2017-11-29 16:24

A lot of people have asked me to do another run of my Advanced Oracle Troubleshooting training or at least get access to previous recordings – so I decided to geek out over the holiday period, update the material with latest stuff and run one more AOT class in 2018!

The online training will take place on 29 January – 2 February 2018 (Part 1) & 26 February – 2 March 2018 (Part 2).

The latest TOC is below:

Seminar registration details:

Just like last time (AOT 2.5 about 2 years ago!), the attendees will get downloadable video recordings after the sessions for personal use! So, no crappy streaming with 14-day expiry date, you can download the video MP4 files straight to your computer or tablet and keep for your use forever!

If you sign up early and can’t wait until end of January, I can send the registered attendees most of the previous AOT 2.5 video recordings upfront, so you’d be ready for action in the live class :)

I also have a Youtube channel (that you may have missed), there are a couple of introductory videos about how I set up my environment & use some key scripts available now:

I plan to start posting some more Oracle/Linux/Hadoop stuff in the Youtube channel, but this is quite likely the last AOT class that I do, so see you soon! ;-)

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Google dashboard for lazy businessman

Nilesh Jethwa - Wed, 2017-11-29 15:20

Once you start your own business or startup, the very first thing that comes to mind is "How many visitors did my website receive?"

Google Analytics provides tons of metrics and it becomes quite a chore to keep up with all the dashboard pages and filter options. As a small biz owner myself, I went through a phase where I ended up spending a significant amount of time checking out Google Analytics.

To save time and mental energy on a daily routine task, I asked "What are the most basic metrics I need to measure from Google Analytics?"

The answer pretty much came down as a need to have "one page dashboard that displays various metrics".

Read more at http://www.infocaptor.com/dashboard/what-are-the-bare-minimum-traffic-metrics-that-i-can-track-easily

The Biggest Change to Reporting & Analysis in 2018 Won’t Be the Cloud

Look Smarter Than You Are - Wed, 2017-11-29 12:43
Screenshot from https://www.oracle.com/solutions/business-analytics/day-by-day.html

Companies spent most of 2017 either preparing their journey to the Cloud, getting started on moving their applications to the Cloud, or hoping the whole Cloud thing would go away if we just ignored it long enough (like my late fees at Blockbuster). But in the end, the Cloud isn’t revolutionary: the Cloud just means someone else is managing your server for you. While it’s nice that your servers are now someone else’s problem, there is an actual revolution happening in reporting & analysis and it’s a technology that’s been around for decades.
The Future of Reporting & Analysis Can Also Take Selfies
Up to this point, mobile has been an afterthought in the world of reporting & analysis: we design for a laptop first and if something ends up mobile-enabled, that’s a nice-to-have. The commonly held belief is that mobile devices (phones, tablets) are too small of a footprint to show formatted reports or intricate dashboards. That belief is correct in the same way that Microsoft Outlook is way too complex of an application to make reading emails on a mobile device practical… except that most emails in the world are now read on a mobile device. They’re just not using Outlook. We had to rethink of a smaller, faster, easier, more intuitive (sorry, Microsoft) way of consuming information to take email mobile.

Reporting & analysis will also hit that tipping point in 2018 where we ask ourselves simply “what questions do I need answered to make better business decisions faster?” and then our phones will give us exactly that without all the detail a typical report or dashboard provides. Will mobile analytics kill off desktop applications? No more than the desktop killed off paper reports. They all have their place: paper reports are good for quickly looking at a large amount of formatted information, desktops will be good for details (Excel will live on for the foreseeable future), and mobile will take its rightful place as the dominant form of information consumption.
Forget the Past and Pay Attention to the Present
The greatest thing about mobile is that everyone has their phone less than six feet from them at all times [you just glanced over at yours to see if I’m right]. But would you ever look at your phone if your screen took a month to update? Traditional reports are very backwards-looking. Your typical Income Statement, for instance, tells you how you spent the last year, it sometimes tells you about the upcoming forecast, but it rarely tells you, “am I making money at this moment?” Just like the dashboard of a car would be awfully useless if it gave you last month’s average gas tank reading – hey, I was 75% full in December! – mobile reports won’t be for looking at historically dated information. Instead, we’ll look to mobile to give us just the information we need to take physical actions now.
But Why is 2018 the Year of Mobile Analytics?
Quite simply, we didn’t have the technology to support our decisions until now. While we could take reports or dashboards and interact with them on mobile devices, we don’t want to actually perform analytics on our phones. We want the computers doing the analysis for us. While we’ve had data mining for years, it was relegated to high-priced data scientists or not-so-highly-paid analysts.

We now have artificial intelligence that can look through our data 24/7 and, with no guidance from us, determine what drivers correlate with which results. Machine learning can then determine which of the information it delivers we truly find useful. And so that we don't have to dig through all the results to find out what the system is trying to tell us, the mobile analytics apps of 2018 will convert complex information into natural language. They will simply tell us in plain English (or your language of choice), "I looked through all your information and here are the things you need to be aware of right now."

While that may seem like distant promises to many people, it’s here now. At Oracle’s OpenWorld 2017 conference, there was an amazing demonstration of everything I mentioned in the last paragraph. The audience was even more amazed when told that all that functionality would be in Oracle Analytics Cloud before OpenWorld 2018. I’m sure the employees of Microsoft, Tableau, QlikView, and others are either busy working on their own technological magic or they’re busier working on their resumés.
Am I Ready for the Future?
Start finding out at EPM.BI/Survey. Each year, I conduct a global survey of Business Analytics. Last year, I asked over 250 companies how they were doing in the world of reporting, analysis, planning, and consolidation.  To participate in this year’s survey, go to EPM.BI/Survey and spend 15 minutes answering questions about your State of Business Analytics that you maybe haven’t thought of in years. In exchange for filling in the survey, you’ll be invited to a webcast on January 31, 2018, at 1PM Eastern, where you’ll learn how your BI & EPM (Business Intelligence & Enterprise Performance Management) stacks up against the rest of the world.

If you have any questions, ask them in the comments or tweet them to me @ERoske.
Categories: BI & Warehousing

Secure Oracle E-Business Suite 12.2 with Allowed JSPs/Resources

Steven Chan - Wed, 2017-11-29 11:38

Oracle E-Business Suite is delivered with JSPs and servlets. Most customers use only a subset of these provided resources. The Allowed JSPs or Allowed Resources feature allows you to reduce your attack surface by disabling JSPs or servlets that are not used in your environment. You can allow or deny resources at the family, product or resource level.

The Allowed JSPs feature allows you to define a whitelist of allowed JSPs for your Oracle E-Business Suite 12.2 environment. When enabled, accessing JSPs that are not configured in your whitelist is not allowed.

The Allowed Resources feature expands upon the concept of the Allowed JSPs feature and allows you to define a whitelist of allowed JSPs and servlets for your Oracle E-Business Suite 12.2 environment. When enabled, accessing JSPs or servlets that are not configured in your whitelist is not allowed.

Your users will see an error message if a resource is blocked by the Allowed JSPs or Allowed Resources feature.

Refer to the documentation for more information on how to deploy and configure the Allowed JSPs or Allowed Resources feature.

Which EBS Releases include Allowed JSPs or Allowed Resources?

  • Allowed JSPs is delivered with Oracle E-Business Suite Release 12.2.4
  • Allowed Resources can be enabled with Oracle E-Business Suite 12.2.6+.
  • Allowed Resources with a new user interface and recommendations to provide ease of configuration is on by default with Oracle E-Business Suite 12.2.7.

Where can I learn more?

Related Articles

References

Categories: APPS Blogs

How can i read a csv file

Tom Kyte - Wed, 2017-11-29 09:46
Hi TOM :) Resources: 1) I have a table that has millions of records of the clients 2) and I have a CSV with only 1,200 clients 3) I don't have permissions to create a table. Problem: how can I read from the CSV to join with the pr...
Categories: DBA Blogs

Implementing Authentication for REST API calls from JET Applications embedded in ADF or WebCenter Portal using JSON Web Token (JWT)

Amis Blog - Wed, 2017-11-29 05:00

The situation discussed in this article is as follows: a rich client web application (JavaScript based, could be created with Oracle JET or based on Angular/Vue/React/Ember/…) is embedded in an ADF or WebCenter Portal application. Users are authenticated in that application through a regular login procedure that leverages the OPSS (Oracle Platform Security Service) in WebLogic, authenticating against an LDAP directory or another type of security provider. The embedded rich web application makes calls to REST APIs. These APIs enforce authentication and authorization – to prevent rogue calls. Note: these APIs have to be accessible from wherever the users are working with the ADF or WebCenter Portal application.

This article describes how the authenticated HTTP Session context in ADF – where we have the security context with authenticated principal with subjects and roles – can be leveraged to generate a secure token that can be passed to the embedded client web application and subsequently used by that application to make calls to REST APIs that can verify through that token that an authenticated user is making the call. The REST API can also extract relevant information from the token – such as the user’s identity, permissions or entitlements and custom attributes. The token could also be used by the REST API to retrieve additional information about the user and his or her session context.

Note: if calls are made to REST APIs that are deployed as part of the enterprise application (same EAR file) that contains the ADF or WebCenter Portal application, then the session cookie mechanism ensures that the REST API handles the request in the same [authenticated]session context. In that case, there is no need for a token exchange.


Steps described in this article:

  1. Create a managed session bean that can be called upon to generate the JWT Token
  2. Include the token from this session bean in the URL that loads the client web application into the IFrame embedded in the ADF application
  3. Store the token in the web client
  4. Append the token to REST API calls made from the client application
  5. Receive and inspect the token inside the REST API to ensure the authenticated status of the user; extract additional information from the token

As a starting point, we will assume an ADF application for which security has been configured, forcing users accessing the application to login by providing user credentials.

The complete application in a working – though somewhat crude – form, with code that is absolutely not standards compliant nor production ready, can be found on GitHub: https://github.com/lucasjellema/adf-embedded-js-client-token-rest.


Create a managed session bean that can be called upon to generate the JWT Token

I will use a managed bean to generate the JWT Token, either in session scope (to reuse the token) or in request scope (to generate fresh tokens on demand) .

JDeveloper and WebLogic both ship with libraries that support the generation of JWT Tokens. In a Fusion Web Application the correct libraries are present by default. Any one of these libraries will suffice:

[screenshot: JWT-capable libraries in the JDeveloper library list]

I create a new class as the Token Generator:

package nl.amis.portal.view;

import java.util.Date;

import javax.faces.bean.SessionScoped;
import javax.faces.bean.ManagedBean;

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

import oracle.security.restsec.jwt.JwtToken;
import java.util.HashMap;
import java.util.Map;
@ManagedBean
@SessionScoped
public class SessionTokenGenerator {
    
    private String token = "";
    private final String secretKey = "SpecialKeyKeepSecret";
    public SessionTokenGenerator() {
        super();
        ADFContext adfCtx = ADFContext.getCurrent();  
        SecurityContext secCntx = adfCtx.getSecurityContext();  
        String user = secCntx.getUserPrincipal().getName();  
        String _user = secCntx.getUserName();  
        try {
            String jwt = generateJWT(user, "some parameter value - just because we can", _user, secretKey);
            this.token = jwt;
        } catch (Exception e) {
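            // NOTE: any failure here is silently swallowed and the token stays empty; a real application should log or handle it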
        }
    }

    public String generateJWT(String subject, String extraParam, String extraParam2, String myKey) throws Exception {           
           String result = null;        
           JwtToken jwtToken = new JwtToken();
           //Fill in all the parameters- algorithm, issuer, expiry time, other claims etc
           jwtToken.setAlgorithm(JwtToken.SIGN_ALGORITHM.HS512.toString());
           jwtToken.setType(JwtToken.JWT);
           jwtToken.setClaimParameter("ExtraParam", extraParam);
           jwtToken.setClaimParameter("ExtraParam2", extraParam2);
           long nowMillis = System.currentTimeMillis();
           Date now = new Date(nowMillis);
           jwtToken.setIssueTime(now);
           // expiry = 5 minutes - only for demo purposes; in real life, several hours - equivalent to HttpSession Timeout in web.xml - seems more realistic
           jwtToken.setExpiryTime(new Date(nowMillis + 5*60*1000));
           jwtToken.setSubject(subject);
           jwtToken.setIssuer("ADF_JET_REST_APP");
           // Get the private key and sign the token with a secret key or a private key
           result = jwtToken.signAndSerialize(myKey.getBytes());
           return result;
       }

    public String getToken() {
        return token;
    }
}
Embed the Web Client Application

The ADF Application consists of a main page (index.jsf) that contains a region binding a taskflow, which in turn contains a page fragment (client-app.jsff). That fragment consists of a panelStretchLayout containing an inline frame (rendered as an IFrame) that loads the web client application.


The JWT token (just a long string) has to be included in the URL that loads the client web application into the IFrame. This is easily done by adding an EL Expression in the URL property:

 <af:inlineFrame source="client-web-app/index.xhtml?token=#{sessionTokenGenerator.token}"
                            id="if1" sizing="preferred" styleClass="AFStretchWidth"/>


Store the token in the web client

When the client application is loaded, the token can be retrieved from the query parameters. An extremely naive implementation uses an onLoad event trigger on the body object to call a function that reads the token from the query parameters on the window.location.href object and stores it in the session storage:

function getQueryParams() {
    token = getParameterByName('token');
    if (token) {
        document.getElementById('content').innerHTML += '<br>Token was received and saved in the client for future REST calls';
        // Save token to sessionStorage
        sessionStorage.setItem('portalToken', token);
    }
    else
        document.getElementById('content').innerHTML += '<br>Token was NOT received; you will not be able to use this web application in a meaningful way';
}

function getParameterByName(name, url) {
    if (!url)
        url = window.location.href;
    var regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)"), results = regex.exec(url);
    if (!results)
        return null;
    if (!results[2])
        return '';
    return decodeURIComponent(results[2].replace(/\+/g, " "));
}

If we wanted to do so, we could parse the token in the client and extract information from it – using a function like this one:


function parseJwt(token) {
    var base64Url = token.split('.')[1];
    var base64 = base64Url.replace('-', '+').replace('_', '/');
    return JSON.parse(window.atob(base64));
};


Append the token to REST API calls made from the client application

Whenever the client application makes REST API calls, it should include the JWT token in an HTTP Header. Here is example code for making an AJAX style REST API call – with the token included in the Authorization header:

function callServlet() {
    var portalToken = sessionStorage.getItem('portalToken');
    // in this example the REST API runs on the same host and port as the ADF Application; that need not be the case - the following URL is also a good example: 
    // var targetURL = 'http://some.otherhost.com:8123/api/things';
    var targetURL = '/ADF_JET_REST-ViewController-context-root/restproxy/rest-api/person';
    var xhr = new XMLHttpRequest();
    xhr.open('GET', targetURL)
    xhr.setRequestHeader("Authorization", "Bearer " +  portalToken);
    xhr.onload = function () {
        if (xhr.status === 200) {
            alert('Response ' + xhr.responseText);
        }
        else {
            alert('Request failed.  Returned status of ' + xhr.status);
        }
    };
    xhr.send();
}
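
For a quick manual test of the same protected endpoint outside the browser, an equivalent curl call looks like this (host, port and the token value are placeholders for your own environment):

curl -H "Authorization: Bearer <jwt-token-string>" \
     "http://<adf-host>:<port>/ADF_JET_REST-ViewController-context-root/restproxy/rest-api/person"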


Receive and inspect the token inside the REST API to ensure the authenticated status of the user

Depending on how the REST API is implemented – for example Java with JAX-RS, Node with Express, Python, PHP, C# – the inspection of the token will take place in a slightly different way.

With JAX-RS based REST APIs running on a Java EE Web Server, one possible approach to inspection of the token is using a ServletFilter. This filter can front the JAX-RS service and stay completely independent of it. By mapping the Servlet Filter to all URL paths on which REST APIs can be accessed, we ensure that these REST APIs can only be accessed by requests that contain valid tokens.

A more simplistic, less elegant approach is to just make the inspection of the token an explicit part of the REST API. The Java code required for both approaches is very similar. Here is the code I used in a simple servlet that sits between the incoming REST API request and the actual REST API, acting as a proxy that verifies the token, sets the CORS headers and does the routing:


package nl.amis.portal.view;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;

import java.net.HttpURLConnection;
import java.net.URL;

import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

import javax.ws.rs.core.HttpHeaders;

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

import java.util.Date;

import java.util.Map;

import oracle.security.restsec.jwt.JwtException;
import oracle.security.restsec.jwt.JwtToken;
import oracle.security.restsec.jwt.VerifyException;


@WebServlet(name = "RESTProxy", urlPatterns = { "/restproxy/*" })
public class RESTProxy extends HttpServlet {
    private static final String CONTENT_TYPE = "application/json; charset=UTF-8";
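    // this must be the same shared secret used by SessionTokenGenerator in the ADF application to sign the token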
    private final String secretKey = "SpecialKeyKeepSecret";


    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    private TokenDetails validateToken(HttpServletRequest request) {
        TokenDetails td = new TokenDetails();
        try {
            boolean tokenAccepted = false;
            boolean tokenValid = false;
            // 1. check if request contains token

            // Get the HTTP Authorization header from the request
            String authorizationHeader = request.getHeader(HttpHeaders.AUTHORIZATION);

            // Extract the token from the HTTP Authorization header
            String tokenString = authorizationHeader.substring("Bearer".length()).trim();

            String jwtToken = "";
            String issuer = "";
            td.setIsTokenPresent(true);

            try {
                JwtToken token = new JwtToken(tokenString);
                // verify whether token was signed with my key
                boolean result = token.verify(secretKey.getBytes());
                if (!result) {
                    td.addMotivation("Token was not signed with correct key");
                } else {
                    td.setIsTokenVerified(true);
                    td.setJwtTokenString(tokenString);
                    tokenAccepted = false;
                }

                // Validate the issued and expiry time stamp.
                if (token.getExpiryTime().after(new Date())) {
                    jwtToken = tokenString;
                    tokenValid = true;
                    td.setIsTokenFresh(true);
                } else {
                    td.addMotivation("Token has expired");
                }

                // Get the issuer from the token
                issuer = token.getIssuer();
                // possibly validate/verify the issuer as well
                
                td.setIsTokenAccepted(td.isIsTokenPresent() && td.isIsTokenFresh() && td.isIsTokenVerified());
                return td;

            } catch (JwtException e) {
                td.addMotivation("No valid token was found in request");

            } catch (VerifyException e) {
                td.addMotivation("Token was not verified (not signed using correct key");

            }
        } catch (Exception e) {
            td.addMotivation("No valid token was found in request");
        }
        return td;
    }

    private void addCORS(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
        response.setHeader("Access-Control-Max-Age", "3600");
        response.setHeader("Access-Control-Allow-Headers", "x-requested-with");
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        TokenDetails td = validateToken(request);
        if (!td.isIsTokenAccepted()) {
            response.setContentType(CONTENT_TYPE);
            response.setStatus(HttpServletResponse.SC_FORBIDDEN);
            response.addHeader("Refusal-Motivation", td.getMotivation());
            addCORS(response);
            response.getOutputStream().close();
        } else {

            // optionally parse token, extract details for user
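            // (e.g. the token's issuer or subject claim could be mapped to an application
            //  user at this point; as shown, this proxy only checks signature and expiry)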

            // get URL path for REST call
            String pathInfo = request.getPathInfo();

            // redirect the API call/ call API and return result

            URL url = new URL("http://127.0.0.1:7101/RESTBackend/resources" + pathInfo);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");

            if (conn.getResponseCode() != 200) {
                throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode());
            }

            BufferedReader br = new BufferedReader(new InputStreamReader((conn.getInputStream())));


            response.setContentType(CONTENT_TYPE);
            // see http://javahonk.com/enable-cors-cross-origin-requests-restful-web-service/
            addCORS(response);

            response.setStatus(conn.getResponseCode());
            RESTProxy.copyStream(conn.getInputStream(), response.getOutputStream());
            response.getOutputStream().close();
        } // token valid so continue

    }


    public static void copyStream(InputStream input, OutputStream output) throws IOException {
        byte[] buffer = new byte[1024]; // Adjust if you want
        int bytesRead;
        while ((bytesRead = input.read(buffer)) != -1) {
            output.write(buffer, 0, bytesRead);
        }
    }

   private class TokenDetails {
        private String jwtTokenString;
        private String motivation;

        private boolean isJSessionEstablished; // Http Session could be reestablished
        private boolean isTokenVerified; // signed with correct key
        private boolean isTokenFresh; // not expired yet
        private boolean isTokenPresent; // is there a token at all
        private boolean isTokenValid; // can it be parsed
        private boolean isTokenIssued; // issued by a trusted token issuer

        private boolean isTokenAccepted = false; // overall conclusion

        // ... plus getters and setters

    }
}

 

Running the ADF Application with the Embedded Client Web Application

When accessing the ADF application in the browser, we are presented with the login dialog:

image

After successful authentication, the ADF Web Application renders its first page. This includes the Taskflow that contains the Inline Frame that loads the client web application using a URL that contains the token.

image

When the link is clicked in the client web application, the AJAX call is made, with the token included in the Authorization request header. The first time we make the call, the result returned from the REST API is shown:

image

However, a second call after more than 5 minutes fails:

image

Upon closer inspection of the request, we find the reason: the token has expired:

image

The token-based authentication has done its job.

Similarly, when we try to access the REST API directly, the request only succeeds when it carries a valid token:

image

 

Inspecting the token in a Node-based REST API

REST APIs can be implemented in various technologies. One popular option is Node.js, which runs server-side JavaScript. Node applications are perfectly capable of inspecting JWT tokens: verifying their validity and extracting information from them. A simple example is shown here, using the NPM module jsonwebtoken:

 

// Handle REST requests (GET) for things, protected with a JWT check
var express = require('express') //npm install express
  , bodyParser = require('body-parser') // npm install body-parser
  , http = require('http')
  ;

var jwt = require('jsonwebtoken');
var PORT = process.env.PORT || 8123;


const app = express()
  .use(bodyParser.urlencoded({ extended: true }))
  ;

const server = http.createServer(app);

var allowCrossDomain = function (req, res, next) {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
  res.header('Access-Control-Allow-Credentials', true);
  res.header("Access-Control-Allow-Headers", "Access-Control-Allow-Headers, Origin,Accept, X-Requested-With, Content-Type, Authorization, Access-Control-Request-Method, Access-Control-Request-Headers");
  next();
}

app.use(allowCrossDomain);

server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

app.get('/api/things', function (req, res) {
  // check header or url parameters or post parameters for token
  var error = false;
  var token = req.body.token || req.query.token || req.headers['x-access-token'];
  if (req.headers && req.headers.authorization) {
    var parts = req.headers.authorization.split(' ');
    if (parts.length === 2 && parts[0] === 'Bearer') {
      // a token was sent both as a parameter and in the Authorization header
      if (token) {
        error = true;
      }
      token = parts[1];
    }
  }
  // get the decoded payload and header (decode does not verify the signature)
  var decoded = jwt.decode(token, { complete: true });
  var subject = decoded.payload.sub;
  var issuer = decoded.payload.iss;

  // verify key
  var myKey = "SpecialKeyKeepSecret";
  var rejectionMotivation;
  var tokenValid = false;

  jwt.verify(token, myKey, function (err, decoded) {
    if (err) {
      rejectionMotivation = err.name + " - " + err.message;
    } else {
      tokenValid = true;
    }
  });


  if (!tokenValid) {
    res.status(403);
    res.header("Refusal-Motivation", rejectionMotivation);
    res.end();
  } else {
      // do the thing the REST API is supposed to do
      var things = { "collection": [{ "name": "bicycle" }, { "name": "table" }, { "name": "car" }] }

      res.status(200);
      res.header('Content-Type', 'application/json');
      res.end(JSON.stringify(things));
  }
});

The post Implementing Authentication for REST API calls from JET Applications embedded in ADF or WebCenter Portal using JSON Web Token (JWT) appeared first on AMIS Oracle and Java Blog.

How to rename an existing Fusion Middleware WebLogic Domain

Yann Neuhaus - Tue, 2017-11-28 23:54

Sometimes we need to rename an existing Fusion Middleware WebLogic domain. I was asked to do this on a Fusion Middleware Reports & Forms environment.
I took some time to check how this can be done and did some testing to confirm it works as expected. The difficulty is not the WebLogic domain itself, as a WebLogic domain can be created quickly; it is the time it takes to redo the complete configuration (SSL, logging settings, etc.) and to recreate the system components.

I used pack and unpack to rename the FMW WebLogic Domain.

Let’s say I wanted to rename a Fusion Middleware Forms & Reports WebLogic domain from fr_domain to fr_domain_new.

First I used pack to create the domain archive:

cd $MW_HOME/oracle_common/common/bin
./pack.sh -domain /u01/config/domains/fr_domain -template $HOME/fr_domain.jar -template_name full_fr_domain

Then, using unpack, I changed the domain directory path and thus the domain name:

./unpack.sh -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain.jar -user_name weblogic -password Welcome1 -server_start_mode prod -app_dir /u01/config/applications/fr_domain_new -java_home $JAVA_HOME

Of course, the JAVA_HOME environment variable needs to be set beforehand.

This simply worked, but I had to recreate the security files (boot.properties) for the Administration Server and the Managed Servers where needed, as well as those for the system components.
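For reference, a boot.properties file is just a small properties file in the server’s security directory. A minimal example for the Administration Server of the renamed domain could look as follows; the path follows the standard DOMAIN_HOME/servers/<server_name>/security convention, and WebLogic encrypts the clear-text values at the next server start:

# /u01/config/domains/fr_domain_new/servers/AdminServer/security/boot.properties
username=weblogic
password=Welcome1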

To create the security files for the system components, the Node Manager needs to be started:

export WEBLOGIC_DOMAIN_HOME=/u01/config/domains/fr_domain_new/
nohup ${WEBLOGIC_DOMAIN_HOME}/bin/startNodeManager.sh >> ${WEBLOGIC_DOMAIN_HOME}/nodemanager/nohup-NodeManager.out 2>&1 &

Then start the system components once with the storeUserConfig option, for example:

cd /u01/config/domains/fr_domain_new/bin 
./startComponent.sh ohs1 storeUserConfig
./startComponent.sh vm01_reportsServer storeUserConfig

This was for a simple WebLogic domain on a single machine. For clustered WebLogic domains installed on several hosts, pack and unpack need to be used again to distribute the WebLogic Managed Servers to the target machines.

For example, to create the archive file for the Managed Servers to be installed on remote machines:

$MW_HOME/oracle_common/common/bin/pack.sh -managed=true -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain_new.jar -template_name fr_domain_new


The post How to rename an existing Fusion Middleware WebLogic Domain appeared first on Blog dbi services.
