Feed aggregator

Secure Oracle E-Business Suite 12.2 with Cookie Domain Scoping

Steven Chan - Fri, 2017-12-01 11:56

A cookie is a mechanism for storing state across requests to a web site. When a site is accessed, a user's browser uses the cookie to store information such as a session identifier. When the site is accessed again later, the information in the cookie can be reused. If a domain is not specified, the browser does not send the cookie beyond the originating host.
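As an illustration of the mechanism (a generic sketch using Python's standard http.cookies module with placeholder values, not Oracle E-Business Suite specifics): a cookie carrying a Domain attribute such as .example.com is sent to every host under that domain, whereas omitting the attribute keeps the cookie on the originating host only.

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie scoped to a parent domain, so that hosts
# such as apps.example.com and sso.example.com would both receive it.
cookie = SimpleCookie()
cookie["JSESSIONID"] = "abc123"
cookie["JSESSIONID"]["domain"] = ".example.com"
cookie["JSESSIONID"]["path"] = "/"

# The resulting Set-Cookie header value; without the Domain attribute,
# browsers treat the cookie as host-only, as described above.
header = cookie["JSESSIONID"].OutputString()
print(header)
```

Removing the domain line yields a host-only cookie, which is the default behaviour the paragraph above describes.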

The Oracle E-Business Suite 12.2 Cookie Domain Scoping feature allows you to define the scope of the cookie. Your scoping configuration requirements will be dictated by the external integrations used with your Oracle E-Business Suite environment and your network configuration. Refer to the documentation for more information regarding your configuration requirements.

Where can I learn more?

Related Articles


Categories: APPS Blogs

Announcing Mobile Authentication Plugin for Apache Cordova, and More!

OTN TechBlog - Fri, 2017-12-01 11:19

We are excited to announce the open source release on GitHub of the cordova-plugin-oracle-idm-auth plugin for Apache Cordova, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

This plugin provides a simple JavaScript API for performing complex authentication, powered by a native SDK developed by the Oracle Access Management Mobile & Social (OAMMS) team. The SDK has been tested and verified against Oracle Access Manager (OAM) and Oracle Identity Cloud Service (IDCS), and is compatible with third-party authentication applications that support Basic Authentication, OAuth, Web SSO, or OpenID Connect.

Whilst the plugin is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any Cordova-based app targeting Android or iOS.

Most mobile authentication scenarios are complex, often requiring interaction with the native operating system for use cases such as:

  • Retrieving authentication tokens and cookies following successful authentication
  • Securely storing tokens and user credentials
  • Performing offline authentication and automatic login

Writing code to handle each of the required authentication scenarios, especially within hybrid mobile applications, is tedious and can be error-prone.

The cordova-plugin-oracle-idm-auth plugin significantly reduces the amount of coding required to successfully authenticate your users and handle various error cases, by abstracting the complex logic behind a set of simple JavaScript APIs, thus allowing you to focus on implementation of your mobile app’s functional aspects.

To add this plugin to your Oracle JET app:

$ ojet add plugin cordova-plugin-oracle-idm-auth


To know more about the Oracle JET CLI, visit the ojet-cli project.

To add this plugin to your plain Apache Cordova app:

$ cordova plugin add cordova-plugin-oracle-idm-auth


Although the plugin itself contains detailed documentation, stay tuned for more technical posts describing common usage scenarios.

The release of this plugin continues Oracle’s commitment to the open source Apache Cordova community, along with previously released plugins.

Hope you enjoy, and if you have any feedback, please submit issues to our Cordova projects on GitHub.

For more technical articles, you can also follow OracleDevs on Medium.com.

Related content


OSB: Disable Chunked Streaming Mode recommendation

Darwin IT - Fri, 2017-12-01 05:37
Intro

These weeks I got involved in a document generation performance issue. It had been going on for several months, maybe even years, but it remained quite unclear what the actual issue was.

Often we got complaints that document generation from the front-end application (based on Siebel) was taking very long. End users often hit the button several times, with no luck. On further inquiry, it turned out that no document actually appeared in the content management system (Oracle UCM/WCC) either. So we concluded that it wasn't so much a performance issue as an exception somewhere along the document generation process. Since we had upgraded BI Publisher to 12c, it was assumed the upgrade might have something to do with it, but we did not find any problems with BI Publisher itself. There was also an issue within Siebel itself, but that is out of scope for this article.
The investigation

First, the retry interval of the particular Business Service on OSB was decreased from 60 seconds to 10, and the perceived performance improved: with a shorter interval, OSB retries sooner. But of course this did not solve the underlying problem.

As service developers we are often quite casual about retries: we just make up some settings. A common default is an interval of 30 seconds and a retry count of 3. But we should actually think this through and figure out what the possible failures are and what a sensible retry setting would be for them. For instance: is it likely that the remote system is out of order? What are the SLAs for bringing it back up again? If system startup takes 10 minutes, then a retry count of 3 with a 30-second interval makes no sense: the retries are exhausted long before the system is up again. In our case, however, settings sensible for a system outage would cause unacceptably long delays. We apparently needed to cater for network issues.

Last week our sysadmins encountered network failures, so they changed the load balancer in front of BI Publisher to get the chunks/packets of one request routed to the same BI Publisher node. I found SocketReadTimeOuts in the log files. And a query on the Siebel database, plotted in Excel, showed many requests in the 1-15 second range, but also clusters around 40 seconds and 80 seconds. We wondered where those came from.

The Connection and Read Timeout settings on the Business Service were set to 30s. So I figured the 40 and 80 second clusters could have something to do with the 10s retry interval added to the 30-second timeout.
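That back-of-the-envelope timing can be written down as a small calculation (a sketch: the 30s/10s values come from the configuration above, while the model of one full timeout plus one retry wait per failed attempt is my assumption):

```python
READ_TIMEOUT = 30    # seconds: Business Service read timeout
RETRY_INTERVAL = 10  # seconds: time OSB waits before retrying

def observed_duration(failed_attempts: int, final_attempt: float = 0.0) -> float:
    # Each failed attempt burns the full read timeout, then OSB waits
    # the retry interval before starting the next attempt.
    return failed_attempts * (READ_TIMEOUT + RETRY_INTERVAL) + final_attempt

print(observed_duration(1))  # one failed attempt, then a fast success: ~40s
print(observed_duration(2))  # two failed attempts: ~80s
```

This matches the 40- and 80-second clusters seen in the Excel plot of the Siebel request durations.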

I soon found out that the Chunked Streaming Mode option was enabled on the Business Service in OSB. This is a setting we have struggled with a lot; several issues we encountered were blamed on it. Just as a helpdesk employee would ask whether you have restarted your system, for OSB questions I would ask about this setting first... Actually, I did so for this case, long before I got actively involved.
Chunked Streaming Mode explained

Let's start with a diagram:

In this diagram you see that the OSB is fronted by a load balancer. Since 12c, the Oracle HTTP Server (OHS) is part of the WebLogic infrastructure, and following the Enterprise Deployment Guide we added an OHS to the WebLogic Infrastructure domain as a co-located OHS instance. And since both the OSB and the service provider (in our case BI Publisher) are clustered, the OHS load-balances the requests as well.

Now, chunked transfer encoding is part of the HTTP 1.1 specification. It is an improvement that allows clients to process data chunk by chunk, right after each chunk is read. But in most (of our) cases a chunk on its own is meaningless, since a SOAP request/XML document needs to be parsed as a whole.
The load balancer also processes the chunks as separate entities. So, by default, it routes the first chunk to the first endpoint and the next chunk to the next endpoint. Each SP managed server thus receives an incomplete message, and therefore a so-called Bad Request results. This happens with big requests, for instance when a report is requested together with its complete content: then chances are the request is split up into chunks.
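The chunked wire format described above can be sketched with a minimal encoder (illustrative Python; OSB's actual implementation lives in the JVM HTTP stack):

```python
def chunk_encode(payload: bytes, chunk_size: int = 4) -> bytes:
    """Encode a message body using HTTP/1.1 chunked transfer encoding:
    each chunk is a hex length, CRLF, the data, CRLF; a zero-length
    chunk terminates the body."""
    out = b""
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    return out + b"0\r\n\r\n"

# A SOAP envelope split into chunks: no single chunk is parseable XML,
# which is why routing chunks to different nodes yields Bad Requests.
encoded = chunk_encode(b"<soapenv:Envelope/>")
```

The tiny chunk size is only for demonstration; real stacks use much larger chunks, but the principle of the message only being meaningful when reassembled is the same.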

But although the sysadmins adapted the SP load balancer, and although I was involved in the BI Publisher 12c setup, even I forgot about the BIP 12c OHS! Even when the load balancer tries to keep the chunks together, the OHS will mess with them again. And even if the load balancer did not keep them together, the OHS instances could reroute them again to the correct end node.
The Solution

So, for all the Service Bus developers amongst you, I'd like you to memorize two concepts: "Chunked Streaming Mode" and "disable", the latter in combination with the first, of course.
In short: remember to set Chunked Streaming Mode to disabled on every SOAP/HTTP-based Business Service, especially for services that send potentially large requests, for instance document check-in services of content/document management systems.
The proof of the pudding

After some discussion, and not being able to test it on the acceptance test environment due to rebuilds, we decided to change this in production (which I would not recommend, at least not right away).

And this was the result:

This picture shows that during the first half of the day plenty of requests were retried at least once, and several even twice. Notice the request durations around 40 seconds (30 seconds read timeout + 10 seconds retry interval) and around 80 seconds. But since 12:45, when we disabled Chunked Streaming Mode, we don't see any timeout exceptions anymore. I hope the end users are happy now.

Or: how a simple setting can throw a spanner in the works, and how difficult it is to get such a simple change into production. Personally, I think it's a pity that Chunked Streaming Mode is enabled by default, since in most cases it causes problems, while only in rare cases does it provide a performance improvement. I think you should have to justify enabling it, instead of having to actively disable it.

Enters Amazon Aurora Serverless

Pakistan's First Oracle Blog - Thu, 2017-11-30 23:06
More often than not, database administrators across technologies have to fight high load on their databases. It could be ad hoc queries, urgent reports, overrunning jobs, or simply a high frequency and volume of queries from the end users.

DBAs try their best to do generous capacity planning to ensure optimal response time and throughput for the end users. But there are various scenarios in which demand becomes very hard to predict, and storage and processing needs for unpredictable loads are hard to foretell in advance.

Cloud computing offers the promise of unmatched scalability for processing and storage needs. Amazon AWS has introduced a new service which gets closer to that ultimate scalability. Amazon Aurora is a hosted relational database service by AWS. You set your instance size and storage needs while setting Aurora up. If your processing requirements change, you change your instance size, and if you need more read throughput you add more read replicas.

But that is good for the loads we know about and can more or less predict. What about loads that appear out of the blue? Say, a blogging site where some post has suddenly gone viral and starts getting millions of views instead of hundreds? The traffic then disappears as suddenly as it appeared, and maybe some days later it happens again for another post.

In this case, if you are running Amazon Aurora, it would be fairly expensive to just increase the instance size or add read replicas in anticipation of a traffic burst that might or might not come.

Into this uncertainty enters Amazon Aurora Serverless. With Serverless Aurora, you don't select an instance size. You simply specify an endpoint, and all queries are routed to that endpoint. Behind the endpoint lies a warm proxy fleet of database capacity that can scale to your requirements within 5 seconds.

It's all on-demand and ideal for transient, spiky loads. What's sweeter still: billing is on a per-second basis, in Aurora capacity units, with a 1-minute minimum for each newly activated resource.
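The billing rule can be sketched as simple arithmetic (an illustration of per-second metering with a one-minute minimum; actual pricing is in Aurora capacity units, which are not modelled here):

```python
def billed_seconds(active_seconds: int, minimum_seconds: int = 60) -> int:
    """Seconds billed for a newly activated resource: per-second
    billing, but never less than the one-minute minimum."""
    return max(active_seconds, minimum_seconds)

print(billed_seconds(10))   # a 10-second burst is billed as 60 seconds
print(billed_seconds(300))  # a 5-minute burst is billed as-is: 300 seconds
```

So very short bursts pay the one-minute floor, while anything longer pays exactly for the seconds used.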
Categories: DBA Blogs

Oracle provider for OLE DB (OraOLEDB) unable to connect to Oracle DB 10 Release 2

Tom Kyte - Thu, 2017-11-30 22:26
I have installed the Oracle Provider for OLE DB (OraOLEDB) on a server to allow our SIEM to connect to our customer's Oracle DB 10g R2 for monitoring purposes. However, I'm still getting an error saying "ORA-12541: TNS:no listener" when I test...
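ORA-12541 means the client reached the host but nothing was listening on the listener port. Before digging into OraOLEDB itself, a quick TCP-level check can rule the network layer in or out (a generic sketch; the host and port are placeholders for your environment, with 1521 being the default listener port):

```python
import socket

def listener_reachable(host: str, port: int = 1521, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. some
    process (hopefully the Oracle listener) is accepting connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for the host and port in your TNS descriptor, check on the database server that the listener is started (for example with lsnrctl status) and that no firewall is blocking the port.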
Categories: DBA Blogs

pragma autonomous_transaction; and database links

Tom Kyte - Thu, 2017-11-30 22:26
I have a package of functions that return data from a SQL Server database through a link. Usually the results are just displayed in optional fields on a web page or client program. They take the form of: function get_info(ar_key number) ...
Categories: DBA Blogs

Installing Visual Studio Code on Oracle Linux 7

Wim Coekaerts - Thu, 2017-11-30 17:43

Visual Studio Code is a popular editor. There is an RPM available for "el7" from the Microsoft yum repo. This RPM can be manually downloaded on Oracle Linux 7 and installed with # yum localinstall code... or # rpm -ivh code..., but it's easier to create a yum repo file so that you can just do # yum install code and # yum update code.

Here's an example. On Oracle Linux 7 (not 6), as user root:

# cd /etc/yum.repos.d

create a file, let's say vscode.repo, with the following content (this is the repo definition that Microsoft publishes for its yum repository):

[vscode]
name=vscode
baseurl=https://packages.microsoft.com/yumrepos/vscode
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc

and now you can just do

# yum install code
Loaded plugins: langpacks, ulninfo
vscode                                                   | 2.9 kB     00:00     
Resolving Dependencies
--> Running transaction check
---> Package code.x86_64 0:1.18.1-1510857496.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package      Arch           Version                       Repository      Size
 code         x86_64         1.18.1-1510857496.el7         vscode          63 M

Transaction Summary
Install  1 Package

Total download size: 63 M
Installed size: 186 M
Is this ok [y/d/N]: y
Downloading packages:
code-1.18.1-1510857496.el7.x86_64.rpm                      |  63 MB   00:41     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : code-1.18.1-1510857496.el7.x86_64                            1/1
  Verifying  : code-1.18.1-1510857496.el7.x86_64                            1/1

Installed:
  code.x86_64 0:1.18.1-1510857496.el7                                           

Complete!


That's it.


PLS-00231 – The best laid (cunning) plans and private package functions

The Anti-Kyte - Thu, 2017-11-30 15:02

There are times when I feel like Baldrick.
One moment I’m all boundless optimism and cunning plans and the next, I’m so confused I don’t know what my name is or where I live.
One such recent bout of existential uncertainty was caused by the error mentioned in the title of this post, or to give it its full name:

PLS-00231 : Function <function name> may not be used in SQL

So, from the beginning…

Let’s start with (appropriately enough) a simple package header :

create or replace package baldrick as
    procedure cunning_plan;
end baldrick;

No problems there, it compiles fine as you’d expect.
Now for the body…

create or replace package body baldrick as
    -- Private
    function catchphrase return varchar2 is
    begin
        return 'I have a cunning plan which cannot fail';
    end catchphrase;
    -- Public
    procedure cunning_plan is
        optimism varchar2(4000);
    begin
        select catchphrase
        into optimism
        from dual;
    end cunning_plan;
end baldrick;

That looks fine, right?
I mean, sure, the CATCHPHRASE function is private so it can only be referenced from inside the package, but that's not unusual, is it?
Well, it turns out that Oracle isn’t entirely happy about this and says so at compile time…

LINE/COL ERROR
-------- -----------------------------------------------------------------
12/9     PL/SQL: SQL Statement ignored
12/16    PL/SQL: ORA-00904: "CATCHPHRASE": invalid identifier
12/16    PLS-00231: function 'CATCHPHRASE' may not be used in SQL

After some head-scratching, I was beginning to worry that I was losing it. Maybe I should apply for that job as Village Idiot of Kensington.
Fortunately, I was saved from a career of Professional Idiocy in West London by the simple expedient of making the function public…

create or replace package baldrick as
    function catchphrase return varchar2;
    procedure cunning_plan;
end baldrick;

Re-creating the package header using this code, we can now see that the package body magically compiles and works without further complaint…

Cunning Plans are here again !

To discover why this happens may not require a plan more cunning than a Fox who has just been made Professor of Cunning at Oxford University but it’s my turn to cook.
So, now my code is working, I’m off to prepare the Turnip Surprise.

OpenJDK 9: Limitations/shortcomings of the Jshell

Dietrich Schroff - Thu, 2017-11-30 13:56
JShell is quite a powerful tool for doing some quick explorations and writing some scripts.

I found the following limitations:
  • No syntax highlighting
  • Only comments with // are allowed.
    If you open a script file containing /* ... */ block comments, with lines like
      *  My comments
    it will still load, but with many warnings:
    |  Error:
    |  illegal start of expression
    |   * Copyright (c) 1995, 2008, Oracle and/or its affiliates. All rights reserved.
    |   ^
    |  Error:
    |  illegal start of expression
    |   *
    |   ^
  • The usage of public, static, ... is quite different:
    -> public class MyClass {
    >> public int a;
    >> };
    |  Warning:
    |  Modifier 'public'  not permitted in top-level declarations, ignored
    |  public class MyClass {
    |  ^----^
    |  Added class MyClass
  • If you want to load a script with /open, tab completion does not work. The complete path including the filename has to be typed or pasted.
  • If you load some example snippet, main(String[] args) is not run by default.
    Just tried it with the Swing tutorial from Oracle:
    jshell Downloads/HelloWorldSwing.java
    -> String[] mystringarray;
    |  Added variable mystringarray of type String[]

    -> HelloWorldSwing.main(mystringarray);
  • .. 
to be continued...

Top 12 Rank Tracking Software and services

Nilesh Jethwa - Thu, 2017-11-30 13:23

The role of Search Engine Optimization and keyword tracking tools is important in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically designed to measure rank and popularity.

Rank tracking software gauges the performance of a business. A rank tracker tool allows one to evaluate and track rankings in search engines, and shows the range of one's visibility to the prospective market. It can also be used to observe the progress of competitors in the market. This knowledge can serve as an edge and a chance for improvement.

Monitoring a business using a keyword rank checker can offer numerous benefits to an enterprise, but before that can happen, you must first find the rank tracker tool that suits you.

Here is a list of the top rank tracking software and services.


Advanced Web Ranking

Founded in 2002, this company offers an extensive variety of keyword tracking tools. It prides itself on providing worry-free reporting that allows a user to focus on website optimization rather than tracking. It also offers a cloud service and desktop software.

Features and benefits of this rank tracker:

  • Allows white label reporting
  • It offers unlimited monitoring of campaigns and of competing websites
  • Allows easy assignment of read/write permissions
  • This keyword rank checker has an international customer base that supports special characters
  • Simple project setup and easy sharing of data online
  • User-intuitive interface and can be accessed by mobile devices
  • Because of its custom Google location feature, it can support international markets and gets results from over 50 countries.

What makes it better?

Compared to its competitors, Advanced Web Ranking can give localized rankings with pinpoint accuracy; the geolocation proxies in the company's system are the reason for this. It also offers a free 30-day trial.

But what’s the price?

The least expensive option is $49 per month, and the highest tier is priced at $499. That is a bit costly, but understandable given the features.


Read more at https://www.infocaptor.com/dashboard/top-12-rank-tracking-software-and-services-for-your-business-needs

ARM, YUM, Cloud, containers,...

Wim Coekaerts - Thu, 2017-11-30 11:43

It's been a while since my last post so a lot of stuff has been going on! This one will be a random collection of things that I want to point out. I will have to use a lot of tags to keep search engines happy here :-)

Where to start...

Preview release : Oracle Linux 7 for ARM64 (aarch64)

Given the growing interest in ARM64, we created a publicly available version of OL7 for ARM64: a free download, with no registration keys, no access codes, and no authentication codes. You can download it here: http://www.oracle.com/technetwork/server-storage/linux/downloads/oracle-linux-arm-4072846.html

We have an ISO you can install on a few available ARM64 servers; more servers will be tested and added over time (see the release notes). We also created a small runtime image for the RPI3, so that you can easily try it out in minutes on a cheap, readily available platform.

Tons of RPMs have been built and are on http://yum.oracle.com (specifically: http://yum.oracle.com/repo/OracleLinux/OL7/latest/aarch64/index.html). We currently use a 4.13 kernel, but that will soon move to 4.14 (the basis for the next version of UEK).

One of the reasons we are doing a preview release right now, rather than GA, is that this is still a fast-moving target. Lots of kernel changes are coming; we're looking at providing the latest toolchain (gcc7), creating a good public developer program around Oracle Linux for ARM64, and introducing new platforms over the next several months, which might require adding new drivers, compiling the binaries with better optimizations, and so on. So right now I would not want to call this generally available. It's certainly in a good state for developers to start using and get their feet wet, and for partners interested in ARM to start porting apps and work with us as we improve performance and build out the developer ecosystem. It's certainly an exciting development. We're working on all the usual things: ksplice, dtrace, lots of server-side enhancements that are still missing, testing of kvm, seeing if we can build even the kernel with gcc7.2, and picking the right chip to target for optimizations...

New packages for Oracle Linux

Over the last several months we started adding a ton of new RPMs on yum to make things easier for admins and developers who want newer stuff that's typically not available directly from the Enterprise Linux vendor side.

We track the latest versions of terraform (and the OCI provider for terraform); we released dotnet 2.0, powershell updates, over 1,000 RPMs added from the EPEL repository, and docker 17.06. We also packaged the OCI SDK and CLI into RPMs to make them easy to install (no need to run pip install).

For the nitpickers: as I mentioned previously, we are just replicating EPEL, we are not forking it, and we are not modifying source. The intent is to have it available from the same location, signed by us, built by us, and tested together in terms of dependencies. It's still EPEL. If we were to find bugs, we'd get them fixed on the EPEL source side. No other intent; just to reiterate that.

"What's new" on yum

Since we do a lot of package updates on yum.oracle.com, we added a "what's new" page. It lists the new RPMs that are published every day, and we keep 6 months of history. This way you can easily see whether something got updated without having to run yum commands on a server.

Kernel Blog

In order to be more public about the type of development projects we have going on, we are finally back to writing regular articles about various kernel projects. You can find them here. It's a random collection of write-ups by developers, about current work, things they worked on in the past, and the like; it gives a bit more context than just seeing commit messages. We started this way back when, then it went dormant, but we have picked it up again. Some good stuff can be found there.

Linux NFS appliance image for Oracle Cloud Infrastructure

Regular updates continue for our Linux NFS appliance image, which can be found here. It's an easy way to create a Linux-based NFS server in your own tenancy. It's not an NFS service; it's just a standard Oracle Linux image that creates an NFS server setup.

Oracle Container Registry

A reminder that we have replicas of the Oracle Container Registry in each of the Oracle Cloud Infrastructure regions, for fast, region-internal access to our docker images.

container-registry-ash.oracle.com (Ashburn datacenter)

container-registry-phx.oracle.com (Phoenix datacenter)

container-registry-fra.oracle.com (Frankfurt datacenter)

These registries are also externally accessible, so you can use them from wherever you are. Pick the one that's fastest for you.

We will introduce yum replicas soon as well.

Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots

Yann Neuhaus - Thu, 2017-11-30 04:58

If you store your databases on the ACFS cluster filesystem, you may use the gDBClone Perl script provided on OTN to clone databases or create snapshot databases. It is an interesting approach for creating clones of a production DB in minutes, regardless of the production DB's size. What you do is create a standby DB from your production DB on a separate cluster and use that standby DB as a gold/master copy for ACFS snapshots.

In a Production environment with Data Guard Broker we wanted to use that technique, but were confronted with an issue:

The production DB already had a physical standby DB with Data Guard Broker running. The protection mode was MaxAvailability, which means redo transport in sync mode. The master/gold copy to take the snapshots from, however, should receive the redo data in async mode. How to achieve that?

Not very commonly used parameters in a Broker configuration are ExternalDestination1 and ExternalDestination2.

With those parameters (which are available from 12c onwards) you actually can send your redo to a destination in async mode. The parameters are documented as follows:

The ExternalDestination1 configuration property is used to specify a redo transport destination that can receive redo data from the current primary database. To set up transport of redo data to the specified destination, the broker uses the values specified for this parameter to define a LOG_ARCHIVE_DEST_n initialization parameter on the primary database. The broker also monitors the health of the transport to the specified destination.

After a role change, the broker automatically sets up a LOG_ARCHIVE_DEST_n initialization parameter on the new primary database to ship redo data to the specified destination.

I.e. you can set the parameter the same as LOG_ARCHIVE_DEST_n, but the following options are not allowed:


So let’s assume I created my DB GOLDCOP as a standby DB using the rman duplicate command

RMAN> duplicate target database for standby from active database dorecover nofilenamecheck;

or alternatively using

# ./gDBClone clone -sdbname PRIM -sdbscan scoda7 -tdbname GOLDCOP -tdbhome OraDb11g_home1 -dataacfs /cloudfs -standby

In the broker configuration I added the DB GOLDCOP as follows:

DGMGRL> show configuration;

Configuration - MYPROD

  Protection Mode: MaxAvailability
  PRIM - Primary database
  STBY - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> edit configuration set property ExternalDestination1 = 'service=goldcop db_unique_name=GOLDCOP noaffirm async';
Property "externaldestination1" updated
DGMGRL> show configuration;

Configuration - MYPROD

  Protection Mode: MaxAvailability
  PRIM - Primary database
  STBY - Physical standby database
  GOLDCOP - External destination 1

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Let’s check if I really do NOAFFIRM ASYNC redo transport on PRIM:

SQL> select DEST_NAME, DB_UNIQUE_NAME, AFFIRM, TRANSMIT_MODE from v$archive_dest where dest_id in (2,3);

DEST_NAME                        DB_UNIQUE_NAME                 AFF TRANSMIT_MODE
-------------------------------- ------------------------------ --- ------------
LOG_ARCHIVE_DEST_2               STBY                           YES SYNCHRONOUS
LOG_ARCHIVE_DEST_3               GOLDCOP                        NO  ASYNCHRONOUS

The external destination is not a database in the configuration:

DGMGRL> show database "GOLDCOP";
Object "GOLDCOP" was not found

But the transport to this destination is monitored by the Broker; i.e., when shutting down the DB GOLDCOP, I do get an error:

DGMGRL> show configuration;

Configuration - MYPROD

  Protection Mode: MaxAvailability
  PRIM - Primary database
    Error: ORA-16778: redo transport error for one or more databases
  STBY - Physical standby database
  GOLDCOP - External destination 1

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

DGMGRL> show instance "PRIM";

Instance 'PRIM' of database 'PRIM'

  Instance Error(s):
    ORA-16737: the redo transport service for standby database "GOLDCOP" has an error

Instance Status:
ERROR

As External destination 1 is not "a database" in the Broker configuration, it actually does not matter whether the Broker is started (dg_broker_start=TRUE) at the external destination GOLDCOP or not.
To start applying redo on the external destination, you have to start managed recovery just as you would without a Broker configuration:

alter database recover managed standby database using current logfile disconnect from session;

And redo real time apply is happening on GOLDCOP:

SQL> select name,value
  2  from v$dataguard_stats
  3  where name in ('apply lag','transport lag');

NAME                 VALUE
-------------------- --------------------
transport lag        +00 00:00:00
apply lag            +00 00:00:00

SQL> select inst_id,process,pid,status,thread#,sequence#, block#
  2  from gv$managed_standby
  3  where process like 'MRP%';

   INST_ID PROCESS          PID STATUS          THREAD#  SEQUENCE#     BLOCK#
---------- --------- ---------- ------------ ---------- ---------- ----------
         1 MRP0            5155 APPLYING_LOG          1         50        420

To make the external destination self-managing, I set the archivelog deletion policy on GOLDCOP to

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

in RMAN, so that applied archives automatically become reclaimable in the fast recovery area. In addition I set

FAL_SERVER=PRIM

on GOLDCOP to ensure that archive gaps can be resolved.

The pros of the above configuration are:
– The GOLDCOP DB does not cause much overhead on my production DB (async redo transport).
– Decoupling my GOLDCOP DB from the primary (temporarily) is fast and easy:
edit configuration set property ExternalDestination1 = '';

REMARK: Of course I also get the other advantages of the gDBClone approach:
– A production copy on a separate cluster, which serves as a gold copy to take snapshots from for testing or development purposes.
– Creating a snapshot database takes minutes, regardless of the DB size.

The con is:
– I have to take care of starting managed standby database recovery on my GOLDCOP DB myself, i.e. the same as when running Data Guard without the Broker.

To create a snapshot DB I just do something like:

# ./gDBClone snap -sdbname GOLDCOP -tdbname PRODCOP1

Et voilà a production copy in 2 minutes.

If PRODCOP1 is no longer needed I can delete it:

# ./gDBClone deldb -tdbname PRODCOP1

Besides using the configuration property ExternalDestination1, there are other possibilities in 12c to run a standby DB as a master copy for snapshots without affecting the production system (such as the Broker's support for cascaded standby DBs), but I still think that the external destinations feature offers a good way to run a master copy.


The article Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots first appeared on Blog dbi services.

nVision Performance Tuning: 11. Excel -v- OpenXML

David Kurtz - Thu, 2017-11-30 04:47
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The general objective of the performance tuning changes described in this series of blog posts has been to improve the performance of individual nVision reports, but also to allow many reports to execute concurrently.
However, if you use Excel 2010, Excel 2013 or above, then you may notice run times are significantly longer than with Excel 2007.  Also, from PeopleTools 8.54, Excel 2007 is no longer certified.
The problem is discussed in Oracle support note E-NV: nVision Performance using Microsoft Office 2010 and above (Doc ID 1537491.1).  Essentially, Excel 2010 and later runs single-threaded: only one Excel nVision process that is not waiting for a database call to return can run on any one Windows server at any one time.  If you want to run 10 concurrent nVision reports, you would need one on each of 10 Process Schedulers on 10 different Windows servers.
From PT8.54, OpenXML is the default and preferred engine for executing nVision report on the process scheduler.  This uses a different PeopleSoft executable (PSNVSSRV).  It does not suffer from the single-threading problem so multiple processes can run concurrently.  It can also be run on non-Windows environments.
However, there are some limitations with OpenXML:
  • Excel macros are ignored during report generation, although macros can be put into a layout that will execute when the generated report is subsequently opened in Excel.
  • There are problems with nPlosion.  
  • Any print area set in the layout is lost.
  • When rerunning nVision to file any pre-existing file is not overwritten.
Therefore, it may be necessary to continue to run some nVision reports on Excel.  This would require:
  • Separate process schedulers configured to run Excel rather than OpenXML on each available Windows server.  Excel is selected by setting Excel Automation = 1 in the nVision section of the process scheduler configuration file (psprcs.cfg).
  • A new Excel nVision process type should be configured to run specific layouts or reportbooks on Excel.
  • That new process type should only run on these additional process schedulers.  It should have a maximum concurrency of 1, or at most 2, on each Process Scheduler.  These schedulers should be configured to run this new process type (and a single Application Engine so that the purge process can run).

FMW, GoldenGate & Apps DBA + FREE Training This Week

Online Apps DBA - Thu, 2017-11-30 03:06

[K21Academy Weekly Newsletter] 171130 Subject: FMW, GoldenGate & Apps DBA + FREE Training This Week In this week's issue, you will find: 1. [Facebook Live] SSL/TLS Oracle Fusion Middleware & EBS (R12) 2. WebLogic / Oracle FMW to RAC Database connection: Using Active GridLink? 3. [Video] Oracle GoldenGate: What Why And How To Learn 4. ADOP (R12.2 Online […]

The post FMW, GoldenGate & Apps DBA + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Advanced Oracle Troubleshooting seminar in 2018!

Tanel Poder - Wed, 2017-11-29 16:24

A lot of people have asked me to do another run of my Advanced Oracle Troubleshooting training or at least get access to previous recordings – so I decided to geek out over the holiday period, update the material with latest stuff and run one more AOT class in 2018!

The online training will take place on 29 January – 2 February 2018 (Part 1) & 26 February – 2 March 2018 (Part 2).

The latest TOC is below:

Seminar registration details:

Just like last time (AOT 2.5 about 2 years ago!), the attendees will get downloadable video recordings after the sessions for personal use! So, no crappy streaming with 14-day expiry date, you can download the video MP4 files straight to your computer or tablet and keep for your use forever!

If you sign up early and can’t wait until end of January, I can send the registered attendees most of the previous AOT 2.5 video recordings upfront, so you’d be ready for action in the live class :)

I also have a YouTube channel (that you may have missed); there are a couple of introductory videos available now about how I set up my environment and use some key scripts:

I plan to start posting some more Oracle/Linux/Hadoop stuff in the Youtube channel, but this is quite likely the last AOT class that I do, so see you soon! ;-)

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Google dashboard for lazy businessman

Nilesh Jethwa - Wed, 2017-11-29 15:20

Once you start your own business or startup, the very first thing that comes to mind is "How many visitors did my website receive?"

Google Analytics provides tons of metrics and it becomes quite a chore to keep up with all the dashboard pages and filter options. As a small biz owner myself, I went through a phase where I ended up spending a significant amount of time checking out Google Analytics.

To save time and mental energy on a daily routine task, I asked "What are the most basic metrics I need to measure from Google Analytics?"

The answer pretty much came down to a need for a "one-page dashboard that displays various metrics".

Read more at http://www.infocaptor.com/dashboard/what-are-the-bare-minimum-traffic-metrics-that-i-can-track-easily

The Biggest Change to Reporting & Analysis in 2018 Won’t Be the Cloud

Look Smarter Than You Are - Wed, 2017-11-29 12:43
Screenshot from https://www.oracle.com/solutions/business-analytics/day-by-day.html

Companies spent most of 2017 either preparing their journey to the Cloud, getting started on moving their applications to the Cloud, or hoping the whole Cloud thing would go away if we just ignored it long enough (like my late fees at Blockbuster). But in the end, the Cloud isn’t revolutionary: the Cloud just means someone else is managing your server for you. While it’s nice that your servers are now someone else’s problem, there is an actual revolution happening in reporting & analysis and it’s a technology that’s been around for decades.
The Future of Reporting & Analysis Can Also Take Selfies
Up to this point, mobile has been an afterthought in the world of reporting & analysis: we design for a laptop first and if something ends up mobile-enabled, that’s a nice-to-have. The commonly held belief is that mobile devices (phones, tablets) are too small of a footprint to show formatted reports or intricate dashboards. That belief is correct in the same way that Microsoft Outlook is way too complex of an application to make reading emails on a mobile device practical… except that most emails in the world are now read on a mobile device. They’re just not using Outlook. We had to rethink of a smaller, faster, easier, more intuitive (sorry, Microsoft) way of consuming information to take email mobile.

Reporting & analysis will also hit that tipping point in 2018 where we ask ourselves simply “what questions do I need answered to make better business decisions faster?” and then our phones will give us exactly that without all the detail a typical report or dashboard provides. Will mobile analytics kill off desktop applications? No more than the desktop killed off paper reports. They all have their place: paper reports are good for quickly looking at a large amount of formatted information, desktops will be good for details (Excel will live on for the foreseeable future), and mobile will take its rightful place as the dominant form of information consumption.
Forget the Past and Pay Attention to the Present
The greatest thing about mobile is that everyone has their phone less than six feet from them at all times [you just glanced over at yours to see if I’m right]. But would you ever look at your phone if your screen took a month to update? Traditional reports are very backwards-looking. Your typical Income Statement, for instance, tells you how you spent the last year, it sometimes tells you about the upcoming forecast, but it rarely tells you, “am I making money at this moment?” Just like the dashboard of a car would be awfully useless if it gave you last month’s average gas tank reading – hey, I was 75% full in December! – mobile reports won’t be for looking at historically dated information. Instead, we’ll look to mobile to give us just the information we need to take physical actions now.
But Why is 2018 the Year of Mobile Analytics?
Quite simply, we didn’t have the technology to support our decisions until now. While we could take reports or dashboards and interact with them on mobile devices, we don’t want to actually perform analytics on our phones. We want the computers doing the analysis for us. While we’ve had data mining for years, it was relegated to high-priced data scientists or not-so-highly-paid analysts.

We now have artificial intelligence that can look through our data 24/7 and, with no guidance from us, determine which drivers correlate with which results. Machine learning can then determine which of the information it delivers we truly find useful. And so that we don't have to dig through all the results to find out what the system is trying to tell us, the mobile analytics apps in 2018 will convert complex information into natural language. They will simply tell us in plain English (or your language of choice), "I looked through all your information and here are the things you need to be aware of right now."

While that may seem like distant promises to many people, it’s here now. At Oracle’s OpenWorld 2017 conference, there was an amazing demonstration of everything I mentioned in the last paragraph. The audience was even more amazed when told that all that functionality would be in Oracle Analytics Cloud before OpenWorld 2018. I’m sure the employees of Microsoft, Tableau, QlikView, and others are either busy working on their own technological magic or they’re busier working on their resumés.
Am I Ready for the Future?
Start finding out at EPM.BI/Survey. Each year, I conduct a global survey of Business Analytics. Last year, I asked over 250 companies how they were doing in the world of reporting, analysis, planning, and consolidation.  To participate in this year’s survey, go to EPM.BI/Survey and spend 15 minutes answering questions about your State of Business Analytics that you maybe haven’t thought of in years. In exchange for filling in the survey, you’ll be invited to a webcast on January 31, 2018, at 1PM Eastern, where you’ll learn how your BI & EPM (Business Intelligence & Enterprise Performance Management) stacks up against the rest of the world.

If you have any questions, ask them in the comments or tweet them to me @ERoske.
Categories: BI & Warehousing

Secure Oracle E-Business Suite 12.2 with Allowed JSPs/Resources

Steven Chan - Wed, 2017-11-29 11:38

Oracle E-Business Suite is delivered with JSPs and servlets. Most customers use only a subset of these provided resources. The Allowed JSPs or Allowed Resources feature allows you to reduce your attack surface by disabling JSPs or servlets that are not used in your environment. You can allow or deny resources at the family, product or resource level.

The Allowed JSPs feature allows you to define a whitelist of allowed JSPs for your Oracle E-Business Suite 12.2 environment. When enabled, accessing JSPs that are not configured in your whitelist is not allowed.

The Allowed Resources feature expands upon the concept of the Allowed JSPs feature and allows you to define a whitelist of allowed JSPs and servlets for your Oracle E-Business Suite 12.2 environment. When enabled, accessing JSPs or servlets that are not configured in your whitelist is not allowed.

Your users will see an error message if a resource is blocked by the Allowed JSPs or Allowed Resources feature.

Refer to the documentation for more information on how to deploy and configure the Allowed JSPs or Allowed Resources feature.

Which EBS Releases include Allowed JSPs or Allowed Resources?

  • Allowed JSPs is delivered with Oracle E-Business Suite Release 12.2.4
  • Allowed Resources can be enabled with Oracle E-Business Suite 12.2.6+.
  • Allowed Resources, with a new user interface and configuration recommendations for ease of setup, is enabled by default with Oracle E-Business Suite 12.2.7.

Where can I learn more?

Related Articles


Categories: APPS Blogs

How can I read a CSV file

Tom Kyte - Wed, 2017-11-29 09:46
Hi TOM :) Resources: 1) I have a table that has millions of records of the clients 2) and I have a CSV with only 1,200 clients 3) I don't have permissions to create a table. Problem: how can I read from the CSV to join with the pr...
Categories: DBA Blogs

Implementing Authentication for REST API calls from JET Applications embedded in ADF or WebCenter Portal using JSON Web Token (JWT)

Amis Blog - Wed, 2017-11-29 05:00

The situation discussed in this article is as follows: a rich client web application (JavaScript based, could be created with Oracle JET or based on Angular/Vue/React/Ember/…) is embedded in an ADF or WebCenter Portal application. Users are authenticated in that application through a regular login procedure that leverages the OPSS (Oracle Platform Security Service) in WebLogic, authenticating against an LDAP directory or another type of security provider. The embedded rich web application makes calls to REST APIs. These APIs enforce authentication and authorization – to prevent rogue calls. Note: these APIs have to be accessible from wherever the users are working with the ADF or WebCenter Portal application.

This article describes how the authenticated HTTP Session context in ADF – where we have the security context with authenticated principal with subjects and roles – can be leveraged to generate a secure token that can be passed to the embedded client web application and subsequently used by that application to make calls to REST APIs that can verify through that token that an authenticated user is making the call. The REST API can also extract relevant information from the token – such as the user’s identity, permissions or entitlements and custom attributes. The token could also be used by the REST API to retrieve additional information about the user and his or her session context.

Note: if calls are made to REST APIs that are deployed as part of the enterprise application (same EAR file) that contains the ADF or WebCenter Portal application, then the session cookie mechanism ensures that the REST API handles the request in the same [authenticated]session context. In that case, there is no need for a token exchange.


Steps described in this article:

  1. Create a managed session bean that can be called upon to generate the JWT Token
  2. Include the token from this session bean in the URL that loads the client web application into the IFrame embedded in the ADF application
  3. Store the token in the web client
  4. Append the token to REST API calls made from the client application
  5. Receive and inspect the token inside the REST API to ensure the authenticated status of the user; extract additional information from the token

As a starting point, we will assume an ADF application for which security has been configured, forcing users accessing the application to login by providing user credentials.

The complete application in a working – though somewhat crude – form with code that absolutely not standards compliant nor production ready can be found on GitHub: https://github.com/lucasjellema/adf-embedded-js-client-token-rest.


Create a managed session bean that can be called upon to generate the JWT Token

I will use a managed bean to generate the JWT Token, either in session scope (to reuse the token) or in request scope (to generate fresh tokens on demand).

JDeveloper and WebLogic both ship with libraries that support the generation of JWT Tokens. In a Fusion Web Application the correct libraries are present by default. Any one of these libraries will suffice:


I create a new class as the Token Generator:

package nl.amis.portal.view;

import java.util.Date;

import javax.faces.bean.SessionScoped;
import javax.faces.bean.ManagedBean;

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

import oracle.security.restsec.jwt.JwtToken;
import java.util.HashMap;
import java.util.Map;
public class SessionTokenGenerator {
    private String token = "";
    private final String secretKey = "SpecialKeyKeepSecret";

    public SessionTokenGenerator() {
        ADFContext adfCtx = ADFContext.getCurrent();
        SecurityContext secCntx = adfCtx.getSecurityContext();
        String user = secCntx.getUserPrincipal().getName();
        String _user = secCntx.getUserName();
        try {
            String jwt = generateJWT(user, "some parameter value - just because we can", _user, secretKey);
            this.token = jwt;
        } catch (Exception e) {
            // leave token empty; the client will detect the missing token
        }
    }

    public String generateJWT(String subject, String extraParam, String extraParam2, String myKey) throws Exception {
        String result = null;
        JwtToken jwtToken = new JwtToken();
        // Fill in all the parameters - algorithm, issuer, expiry time, other claims etc.
        jwtToken.setClaimParameter("ExtraParam", extraParam);
        jwtToken.setClaimParameter("ExtraParam2", extraParam2);
        long nowMillis = System.currentTimeMillis();
        Date now = new Date(nowMillis);
        // Expiry = 5 minutes - only for demo purposes; in real life, several hours -
        // equivalent to the HttpSession timeout in web.xml - seems more realistic
        jwtToken.setExpiryTime(new Date(nowMillis + 5 * 60 * 1000));
        // Sign the token with the secret key (or a private key)
        result = jwtToken.signAndSerialize(myKey.getBytes());
        return result;
    }

    public String getToken() {
        return token;
    }
}
Embed the Web Client Application

The ADF application consists of a main page – index.jsf – containing a region that binds a taskflow, which in turn contains a page fragment (client-app.jsff) with a panelStretchLayout containing an inline frame (rendered as an IFrame) that loads the web client application.


The JWT token (just a long string) has to be included in the URL that loads the client web application into the IFrame. This is easily done by adding an EL Expression in the URL property:

 <af:inlineFrame source="client-web-app/index.xhtml?token=#{sessionTokenGenerator.token}"
                            id="if1" sizing="preferred" styleClass="AFStretchWidth"/>


Store the token in the web client

When the client application is loaded, the token can be retrieved from the query parameters. An extremely naive implementation uses an onLoad event trigger on the body object to call a function that reads the token from the query parameters on the window.location.href object and stores it in the session storage:

function getQueryParams() {
    var token = getParameterByName('token');
    if (token) {
        document.getElementById('content').innerHTML += '<br>Token was received and saved in the client for future REST calls';
        // Save token to sessionStorage
        sessionStorage.setItem('portalToken', token);
    } else {
        document.getElementById('content').innerHTML += '<br>Token was NOT received; you will not be able to use this web application in a meaningful way';
    }
}

function getParameterByName(name, url) {
    if (!url)
        url = window.location.href;
    var regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)"), results = regex.exec(url);
    if (!results)
        return null;
    if (!results[2])
        return '';
    return decodeURIComponent(results[2].replace(/\+/g, " "));
}
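For what it's worth, modern browsers (and Node) can do the same parsing with the built-in URL API instead of a hand-rolled regular expression. A sketch, not the code used in the article's demo app (the URL is made up):

```javascript
// Parse a query parameter using the standard URL / URLSearchParams API
function getParam(name, href) {
  return new URL(href).searchParams.get(name);
}

console.log(getParam('token', 'https://host/client-web-app/index.xhtml?token=abc.def.ghi'));
// abc.def.ghi
```

URLSearchParams also handles '+' and percent-decoding, so no replace/decodeURIComponent dance is needed.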

If we wanted to so do, we can parse the token in the client and extract information from it – using a function like this one:


function parseJwt(token) {
    var base64Url = token.split('.')[1];
    // Use a global replace: a base64url payload may contain multiple '-' and '_' characters
    var base64 = base64Url.replace(/-/g, '+').replace(/_/g, '/');
    return JSON.parse(window.atob(base64));
}


Append the token to REST API calls made from the client application

Whenever the client application makes REST API calls, it should include the JWT token in an HTTP Header. Here is example code for making an AJAX style REST API call – with the token included in the Authorization header:

function callServlet() {
    var portalToken = sessionStorage.getItem('portalToken');
    // In this example the REST API runs on the same host and port as the ADF application;
    // that need not be the case - the following URL is also a good example:
    // var targetURL = 'http://some.otherhost.com:8123/api/things';
    var targetURL = '/ADF_JET_REST-ViewController-context-root/restproxy/rest-api/person';
    var xhr = new XMLHttpRequest();
    xhr.open('GET', targetURL);
    xhr.setRequestHeader("Authorization", "Bearer " + portalToken);
    xhr.onload = function () {
        if (xhr.status === 200) {
            alert('Response ' + xhr.responseText);
        } else {
            alert('Request failed.  Returned status of ' + xhr.status);
        }
    };
    xhr.send();
}

Receive and inspect the token inside the REST API to ensure the authenticated status of the user

Depending on how the REST API is implemented – for example Java with JAX-RS, Node with Express, Python, PHP, C# – the inspection of the token will take place in a slightly different way.

With JAX-RS based REST APIs running on a Java EE Web Server, one possible approach to inspection of the token is using a ServletFilter. This filter can front the JAX-RS service and stay completely independent of it. By mapping the Servlet Filter to all URL paths on which REST APIs can be accessed, we ensure that these REST APIs can only be accessed by requests that contain valid tokens.

A more simplistic, less elegant approach is to make the inspection of the token an explicit part of the REST API itself. The Java code required for both approaches is very similar. Here is the code I used in a simple servlet that sits between the incoming REST API request and the actual REST API as a proxy that verifies the token, sets the CORS headers and does the routing:


package nl.amis.portal.view;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;

import java.net.HttpURLConnection;
import java.net.URL;

import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

import javax.ws.rs.core.HttpHeaders;

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

import java.util.Date;

import java.util.Map;

import oracle.security.restsec.jwt.JwtException;
import oracle.security.restsec.jwt.JwtToken;
import oracle.security.restsec.jwt.VerifyException;

@WebServlet(name = "RESTProxy", urlPatterns = { "/restproxy/*" })
public class RESTProxy extends HttpServlet {
    private static final String CONTENT_TYPE = "application/json; charset=UTF-8";
    private final String secretKey = "SpecialKeyKeepSecret";

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }
    private TokenDetails validateToken(HttpServletRequest request) {
        TokenDetails td = new TokenDetails();
        try {
            // 1. check if request contains token

            // Get the HTTP Authorization header from the request
            String authorizationHeader = request.getHeader(HttpHeaders.AUTHORIZATION);

            // Extract the token from the HTTP Authorization header
            String tokenString = authorizationHeader.substring("Bearer".length()).trim();
            td.setIsTokenPresent(true);

            String issuer = "";
            try {
                JwtToken token = new JwtToken(tokenString);
                // verify whether token was signed with my key
                boolean result = token.verify(secretKey.getBytes());
                if (!result) {
                    td.addMotivation("Token was not signed with correct key");
                } else {
                    td.setIsTokenVerified(true);
                }

                // Validate the expiry time stamp.
                if (token.getExpiryTime().after(new Date())) {
                    td.setIsTokenFresh(true);
                } else {
                    td.addMotivation("Token has expired");
                }

                // Get the issuer from the token
                issuer = token.getIssuer();
                // possibly validate/verify the issuer as well
                td.setIsTokenAccepted(td.isIsTokenPresent() && td.isIsTokenFresh() && td.isIsTokenVerified());
                return td;
            } catch (JwtException e) {
                td.addMotivation("No valid token was found in request");
            } catch (VerifyException e) {
                td.addMotivation("Token was not verified (not signed using correct key)");
            }
        } catch (Exception e) {
            td.addMotivation("No valid token was found in request");
        }
        return td;
    }

    private void addCORS(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
        response.setHeader("Access-Control-Max-Age", "3600");
        response.setHeader("Access-Control-Allow-Headers", "x-requested-with");
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        TokenDetails td = validateToken(request);
        if (!td.isIsTokenAccepted()) {
            response.addHeader("Refusal-Motivation", td.getMotivation());
        } else {
            // token valid so continue

            // optionally parse token, extract details for user

            // get URL path for REST call
            String pathInfo = request.getPathInfo();

            // redirect the API call / call API and return result
            URL url = new URL("" + pathInfo);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");

            if (conn.getResponseCode() != 200) {
                throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode());
            }

            // see http://javahonk.com/enable-cors-cross-origin-requests-restful-web-service/
            addCORS(response);

            RESTProxy.copyStream(conn.getInputStream(), response.getOutputStream());
        }
    }


    public static void copyStream(InputStream input, OutputStream output) throws IOException {
        byte[] buffer = new byte[1024]; // Adjust if you want
        int bytesRead;
        while ((bytesRead = input.read(buffer)) != -1) {
            output.write(buffer, 0, bytesRead);
        }
    }

    private class TokenDetails {
        private String jwtTokenString;
        private String motivation;

        private boolean isJSessionEstablished; // Http Session could be reestablished
        private boolean isTokenVerified; // signed with correct key
        private boolean isTokenFresh; // not expired yet
        private boolean isTokenPresent; // is there a token at all
        private boolean isTokenValid; // can it be parsed
        private boolean isTokenIssued; // issued by a trusted token issuer

        private boolean isTokenAccepted = false; // overall conclusion

        // ... plus getters and setters, including addMotivation(String)
    }
}


Running the ADF Application with the Embedded Client Web Application

When  accessing the ADF application in the browser, we are prompted with the login dialog:


After successful authentication, the ADF Web Application renders its first page. This includes the Taskflow that contains the Inline Frame that loads the client web application using a URL that contains the token.


When the link is clicked in the client web application, the AJAX call is made – the call that has the token included in an Authorization request header. The first time we make the call, the result is shown as returned from the REST API:


However, a second call after more than 5 minutes fails:


Upon closer inspection of the request, we find the reason: the token has expired:


The token based authentication has done a good job.

Similarly, when we try to access the REST API directly, we need a valid token or the request is refused:



Inspect token in Node based REST API

REST APIs can be implemented in various technologies. One popular option is Node – using server side JavaScript. Node applications are perfectly capable of doing inspection of JWT tokens – verifying their validity and extracting information from the token. A simple example is shown here – using the NPM module jsonwebtoken:


// Handle REST requests (POST and GET) for departments
var express = require('express') // npm install express
  , bodyParser = require('body-parser') // npm install body-parser
  , http = require('http');

var jwt = require('jsonwebtoken'); // npm install jsonwebtoken
var PORT = process.env.PORT || 8123;

const app = express()
  .use(bodyParser.urlencoded({ extended: true }));

const server = http.createServer(app);

var allowCrossDomain = function (req, res, next) {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
  res.header('Access-Control-Allow-Credentials', true);
  res.header('Access-Control-Allow-Headers', 'Access-Control-Allow-Headers, Origin, Accept, X-Requested-With, Content-Type, Authorization, Access-Control-Request-Method, Access-Control-Request-Headers');
  next();
};
app.use(allowCrossDomain);

server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

app.get('/api/things', function (req, res) {
  // check header or url parameters or post parameters for token
  var token = req.body.token || req.query.token || req.headers['x-access-token'];
  var error = false;
  if (req.headers && req.headers.authorization) {
    var parts = req.headers.authorization.split(' ');
    if (parts.length === 2 && parts[0] === 'Bearer') {
      // two tokens sent in the request
      if (token) {
        error = true;
      }
      token = parts[1];
    }
  }

  // get the decoded payload and header
  var decoded = jwt.decode(token, { complete: true });
  var subject = decoded.payload.sub;
  var issuer = decoded.payload.iss;

  // verify key
  var myKey = "SpecialKeyKeepSecret";
  var rejectionMotivation;
  var tokenValid = false;

  // for HMAC-signed tokens, jwt.verify with a callback runs synchronously
  jwt.verify(token, myKey, function (err, decoded) {
    if (err) {
      rejectionMotivation = err.name + " - " + err.message;
    } else {
      tokenValid = true;
    }
  });

  if (!tokenValid) {
    res.header("Refusal-Motivation", rejectionMotivation);
    res.status(401).end();
  } else {
    // do the thing the REST API is supposed to do
    var things = { "collection": [{ "name": "bicycle" }, { "name": "table" }, { "name": "car" }] };
    res.header('Content-Type', 'application/json');
    res.json(things);
  }
});
The post Implementing Authentication for REST API calls from JET Applications embedded in ADF or WebCenter Portal using JSON Web Token (JWT) appeared first on AMIS Oracle and Java Blog.

