Feed aggregator

Using Official Oracle NoSQL Docker images

Marcelo Ochoa - Tue, 2016-03-01 11:54
At the time of writing the post Building an Oracle NoSQL cluster using Docker, there were no official Oracle NoSQL images to use as a base image, and Docker had no native networking functionality.
With these two new additions, building a NoSQL cluster is very easy :)
First step: create a NoSQL image with an automatic startup script. Here is the Dockerfile:
FROM oracle/nosql
MAINTAINER marcelo.ochoa@gmail.com
ADD start-nosql.sh /start-nosql.sh
RUN chmod +x /start-nosql.sh
CMD /start-nosql.sh
The start-nosql.sh script looks like this:
#!/bin/bash
mkdir -p $KVROOT
stop_database() {
  java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar stop -root $KVROOT
  exit
}
start_database() {
  nohup java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
}
create_bootconfig() {
  [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "m" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -admin 5001 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
  [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "s" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
}
trap stop_database SIGTERM
if [ ! -f $KVROOT/config.xml ]; then
  create_bootconfig
fi
start_database
touch $KVROOT/snaboot_0.log
tail -f $KVROOT/snaboot_0.log
Note that the script above uses an environment variable to decide whether we are launching a master node (with the admin tool enabled) or a regular node; in both cases, if $KVROOT/config.xml is not present we assume that the makebootconfig operation should be executed first.
To build an image using the above Dockerfile, run:
docker build -t "oracle-nosql/net" .
After that, a new Docker image will be ready to use in the local repository:
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
oracle-nosql/net    latest              2bccb187cbbe        4 days ago          476.9 MB
Now, as in the other blog post, we can launch a NoSQL cluster using either a volatile repository or a persistent one. Here are both examples. Volatile repository (changes to NoSQL storage will be lost when the Docker container is removed):
#!/bin/bash
export KVHOME=$(docker inspect --format='{{index .Config.Env 3}}' oracle-nosql/net)
export KVROOT=$(docker inspect --format='{{index .Config.Env 7}}' oracle-nosql/net)
echo starting cluster using $KVHOME and $KVROOT
docker network create -d bridge mycorp.com
docker run -d -t --net=mycorp.com --publish=5000:5000 --publish=5001:5001 -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/net
docker run -d -t --net=mycorp.com -e NODE_TYPE=s -P --name slave1 -h slave1.mycorp.com oracle-nosql/net
docker run -d -t --net=mycorp.com -e NODE_TYPE=s -P --name slave2 -h slave2.mycorp.com oracle-nosql/net
Persistent repository (NoSQL storage will persist across multiple cluster executions):
#!/bin/bash
export KVHOME=$(docker inspect --format='{{index .Config.Env 3}}' oracle-nosql/net)
export KVROOT=$(docker inspect --format='{{index .Config.Env 7}}' oracle-nosql/net)
echo starting cluster using $KVHOME and $KVROOT
mkdir -p /tmp/kvroot1
mkdir -p /tmp/kvroot2
mkdir -p /tmp/kvroot3
docker network create -d bridge mycorp.com
docker run -d -t --volume=/tmp/kvroot1:$KVROOT --net=mycorp.com --publish=5000:5000 --publish=5001:5001 -e NODE_TYPE=m -P \
  --name master -h master.mycorp.com oracle-nosql/net
docker run -d -t --volume=/tmp/kvroot2:$KVROOT --net=mycorp.com -e NODE_TYPE=s -P --name slave1 -h slave1.mycorp.com oracle-nosql/net
docker run -d -t --volume=/tmp/kvroot3:$KVROOT --net=mycorp.com -e NODE_TYPE=s -P --name slave2 -h slave2.mycorp.com oracle-nosql/net
Starting a NoSQL cluster in either the persistent or the volatile way is easy with the above scripts, for example:
./start-cluster-persistent.sh
starting cluster using /kv-3.5.2 and /var/kvroot
81ff17648736e366f5a30e74abb2168c6b784e17986576a9971974f1e4f8589e
b0ca038ee366fa1f4f2f645f46b9df32c9d2461365ab4f03a8caab94b4474027
53518935f250d43d04388e5e76372c088a7a933a110b6e11d6b31db60399d03d
05a7ba2908b77c1f5fa3d0594ab2600e8cb4f9bbfc00931cb634fa0be2aaeb56
To deploy the NoSQL store we use a simple script:
#!/bin/bash
export KVHOME=$(docker inspect --format='{{index .Config.Env 3}}' oracle-nosql/net)
export KVROOT=$(docker inspect --format='{{index .Config.Env 7}}' oracle-nosql/net)
echo deploying cluster using $KVHOME and $KVROOT
grep -v "^#" script.txt | while read line ;do
  docker exec -t master java -jar $KVHOME/lib/kvstore.jar runadmin -host master -port 5000 $line;
done
with script.txt as follows:
### build script start
configure -name mystore
plan deploy-zone -name "Boston" -rf 3 -wait
plan deploy-sn -zn zn1 -host master.mycorp.com -port 5000 -wait
plan deploy-admin -sn sn1 -port 5001 -wait
pool create -name BostonPool
pool join -name BostonPool -sn sn1
plan deploy-sn -zn zn1 -host slave1.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn2
plan deploy-sn -zn zn1 -host slave2.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn3
topology create -name topo -pool BostonPool -partitions 300
plan deploy-topology -name topo -wait
show topology
### build script end
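
The ping script itself isn't shown in the post; a minimal sketch, assuming the same docker exec pattern and KVHOME lookup used by the other helper scripts, could be:

```shell
#!/bin/bash
# Sketch only: run the Oracle NoSQL ping utility inside the master container.
KVHOME=$(docker inspect --format='{{index .Config.Env 3}}' oracle-nosql/net)
docker exec -t master java -jar $KVHOME/lib/kvstore.jar ping -host master -port 5000
```
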
A ping script returns:
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2016-03-01 14:51:36 UTC   Version: 12.1.3.5.2
Shard Status: healthy:1 writable-degraded:0 read-only:0 offline:0
Admin Status: healthy
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: online:3 offline:0 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.5.2 2015-12-03 08:34:31 UTC  Build id: 0c693aa1a5a0
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,REPLICA sequenceNumber:100,693 haPort:5011 delayMillis:0 catchupTimeSecs:0
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.5.2 2015-12-03 08:34:31 UTC  Build id: 0c693aa1a5a0
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:100,693 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.5.2 2015-12-03 08:34:31 UTC  Build id: 0c693aa1a5a0
Rep Node [rg1-rn3] Status: RUNNING,MASTER sequenceNumber:100,693 haPort:5010
Finally, we can test the cluster using some of the Oracle NoSQL examples; here are two. external-table.sh:
#!/bin/bash
export KVHOME=$(docker inspect --format='{{index .Config.Env 3}}' oracle-nosql/net)
export KVROOT=$(docker inspect --format='{{index .Config.Env 7}}' oracle-nosql/net)
echo hello world cluster using $KVHOME and $KVROOT
docker exec -t master javac -cp examples:lib/kvclient.jar examples/externaltables/UserInfo.java
docker exec -t master javac -cp examples:lib/kvclient.jar examples/externaltables/MyFormatter.java
docker exec -t master javac -cp examples:lib/kvclient.jar examples/externaltables/LoadCookbookData.java
docker exec -t master java -cp examples:lib/kvclient.jar externaltables.LoadCookbookData -store mystore -host master -port 5000 -delete
and parallel-scan.sh:
#!/bin/bash
export KVHOME=$(docker inspect --format='{{index .Config.Env 3}}' oracle-nosql/net)
export KVROOT=$(docker inspect --format='{{index .Config.Env 7}}' oracle-nosql/net)
echo parallel scan cluster using $KVHOME and $KVROOT
docker exec -t master javac -cp examples:lib/kvclient.jar examples/parallelscan/ParallelScanExample.java
docker exec -t master java -cp examples:lib/kvclient.jar parallelscan.ParallelScanExample -store mystore -host master -port 5000 -load 50000
docker exec -t master java -cp examples:lib/kvclient.jar parallelscan.ParallelScanExample -store mystore -host master -port 5000 -where 99
Here is some sample output:
# ./parallel-scan.sh
parallel scan cluster using /kv-3.5.2 and /var/kvroot
1400 records found in 3448 milliseconds.
Detailed Metrics: rg1 records: 50010 scanTime: 3035
# ./external-table.sh 
hello world cluster using /kv-3.5.2 and /var/kvroot
50010 records deleted
The NoSQL Web Admin tool shows:

And that's all; IMO much easier. A complete list of the files used in this post is available at GDrive.







Is the Mi Band the Harbinger of Affordable #fashtech?

Oracle AppsLab - Tue, 2016-03-01 10:15

So, here’s a new thing I’ve noticed lately: customizable wearables, specifically the Xiaomi Mi Band (#MiBand), which is cheap and completely extensible.

This happens to be Ultan’s (@ultan) new fitness band of choice, and coincidentally, Christina’s (@ChrisKolOrcl) as well. Although both are members of Oracle Applications User Experience (@usableapps), neither knew the other was wearing the Mi Band until they read Ultan’s post.

Since then, they’ve shared pictures of their custom bands.


Ultan’s Hello Kitty Mi Band.


Christina’s charcoal+red Mi Band.

The Mi Band already comes in a wider array of color options than most fitness bands, and a quick search of Amazon yields many pages of wristbands and other non-Xiaomi accessories. So, there’s already a market for customizing the $20 device.

And why not, given it’s the price of a nice pedometer with more bells and whistles, and a third the cost of the cheapest Fitbit, the Zip, leaving plenty of budget left over for making it yours.

Both Christina and Ultan have been tracking fitness for a long time as early adopters, so I’m ready to declare this a trend: super-cheap, completely-customizable fitness bands.

Of course, as with anything related to fashion (#fashtech), I’m the last to know. Much like a broken clock, my wardrobe is fashionable every 20 years or so. However, Ultan has been beating the #fashtech drum for a while now, and it seems the time has come to throw off the chains of the dull, black band and embrace color again.

Or something like that. Anyway, find the comments and share your Mi Bands or opinions. Either, both, all good.

Oracle Cloud – About buttons, icons, links and other stuff…

Marco Gralike - Tue, 2016-03-01 08:01
While scrolling to the DBaaS interface pages, I realized that I was spending a lot…

Fusion Financials (ERP) Cloud Support Resources

Chris Warticki - Tue, 2016-03-01 07:00

First and ALWAYS – the #1 investment is made in the PRODUCT, PRODUCT, PRODUCT.

Remain a student of the product.

1. ERP Cloud Product Information Page

· ERP Webcast Series

2. EPM Cloud Product Information Page

· EPM Webcast Series

3. ERP / PPM Documentation and Resources


4. Oracle Cloud Learning Library

5. ERP Cloud Learning Subscription

6. Oracle University – Fusion Applications Training


7. Cloud.Oracle.com – Oracle Cloud Portal (Subscription and Services Admin)



ERP Cloud - Applications Customer Connect

Personalize My Oracle Support Experience

· Setup Proactive Alerts and Notifications

· Customize your MOS Dashboard

Collaborate. Communicate. Connect

· Subscribe to Cloud and SaaS, Newsletters

· Enterprise Performance Management News

· Oracle Mobile App – News, Events, Mobile MOS, Videos etc

· Oracle Support's ERP Community

SOCIAL Circles of Influence

· Cloud Solutions Blog

· Oracle Applications Blog

· ERP Cloud Forum Twitter

· OraERP.com

· YouTube – ERP Cloud

· Oracle Cloud Zone

· Oracle Cloud Marketplace

· Cloud Café (Podcasts)

KNOW Support Best Practices

Oracle Support Document 104.2 (Information Center: Fusion Financials)

Oracle Support Document 1456185.1 (Get Proactive with Oracle Fusion Applications)

Oracle Support Document 1338511.1 (What Diagnostic Tests Are Available For Fusion Financials)

Oracle Support Document 1359493.1 (What Diagnostic Tests Are Available for Oracle Fusion Project Portfolio Management)

View and EXECUTE the list of diagnostic tools for this product

Engage with Oracle Support

1. Upload ALL reports if logging a Service Request

2. Leverage Oracle Collaborative Support (web conferencing)

3. Better Yet – Record your issue and upload it (why wait for a scheduled web conference?)

4. Request Management Attention as necessary


ADF BC REST Support for List Of Values (LOV) Data

Andrejus Baranovski - Mon, 2016-02-29 19:21
The ADF BC REST service supports LOV list data out of the box. You can define an LOV for an ADF BC View Object attribute and use it straight away in the REST service. This is especially useful when the LOV list is filtered based on other attributes from the current row - there is no need to collect and send these attributes to the service separately; the filtering logic is handled for you in the ADF BC backend. One more benefit - you can reuse an existing ADF BC implementation with LOVs and expose it through a REST service.

Below you can see employee #108's attributes with values returned by the REST service, generated on top of an ADF BC View Object. The ManagerId attribute in the View Object is configured with an LOV list. Based on this configuration, each item in the REST collection will have a link to a REST resource providing LOV data (Employees/280/lov/EmployeesLovView1). For employee #108, it retrieves LOV data from the VO EmployeesLovView1:


We can get the LOV list entries for employee #108 by executing a simple REST request, with no need to specify additional parameters (Employees/280/lov/EmployeesLovView1):
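
In shell terms that is just an HTTP GET against the LOV resource; a sketch with curl (the host, port, credentials, and context root below are placeholders - they depend entirely on your deployment):

```shell
# Hypothetical base URL; substitute your ADF BC REST application's context root.
curl -u user:password \
  "http://localhost:7101/restapp/rest/1/Employees/280/lov/EmployeesLovView1"
```
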


The LOV VO is set with a View Criteria to return a list of possible managers: employees with the same job, excluding the employee himself:


The LOV View Accessor in the main VO (Employees) is configured to use the employee ID and job ID from the current row (the current REST collection item) - this way the LOV list is filtered automatically by the View Criteria:


The only thing that needs to be done to enable LOV support is to define the LOV for the manager ID attribute in the base VO:


As you can see, it is pretty easy to reuse ADF BC LOVs in a REST service. Download the sample application - ADFBCRestApp_v6.zip.

Knowing Your Cloud From Your SaaS

Floyd Teter - Mon, 2016-02-29 11:23
I have recently spent far too much time in far too many conversations in which the terms "cloud" and "SaaS" are used interchangeably.  Let's be clear:  the two terms are not interchangeable as they describe very different concepts.

Cloud.  There are many definitions out there.  Marketers and sales people.  Engineers.  Industry analysts.  The National Institute of Standards and Technology.  Frankly, most of those definitions are either wrong, or they're technically accurate while thoroughly useless.  So let's go with a simple definition: a computer in a different physical location attached to a network.  It's about physical architecture.  Think about it.  Play with it.  Hit the comments if you have a better definition.

SaaS.  Acronym for "Software as a Service".  Same set of folks attempting to define this idea with the same set of sad results.  Try this on for size:  Applications accessed via a browser, licensed on a subscription basis and delivered via Cloud.

So it's very possible to have Cloud (think hosting operations) without having SaaS.  But there is no SaaS without Cloud.  SaaS is a subset of Cloud.

In "Oracle speak", Fusion Applications (including Taleo), are SaaS.  As a customer, I could also opt to have my licensed E-Business, PeopleSoft, or JD Edwards applications on the cloud...but that is not SaaS, as those applications are not offered on a subscription basis.

So there ya go.  Simple set of definitions.  Yes, there are more nuances if you dig into the subject.  But this is a simple foundation to start.  If nothing else, the next time you're involved in a conversation, you can use this to know your Cloud from your SaaS...which will put you way ahead of the curve ;)


Part 2: MDX Code Development for Beginners

Chris Foot - Mon, 2016-02-29 08:30

Welcome back to part 2 of MDX Code Development for Beginners.  In part 1, we looked at the basic SELECT FROM clause in T-SQL and converted it to its MDX equivalent, SELECT ... ON COLUMNS, ... ON ROWS FROM.  In part 2 we will look at the different basic filters that can be introduced into the SELECT statement, in comparison to T-SQL WHERE clauses.

node-oracledb 1.7.0 has a connection pool queue (Node.js add-on for Oracle Database)

Christopher Jones - Mon, 2016-02-29 07:56

Node-oracledb 1.7.0, the Node.js add-on for Oracle Database, is on NPM.

Top features: a new connection pool queue to make apps more resilient, and "Bind by position" syntax for PL/SQL Index-by array binds.

This release has a couple of interesting changes as well as some small bind fixes. A few reported build warnings with some compilers were also squashed.

Extended PL/SQL Index-by Array Bind Syntax

To start with, a followup PR from @doberkofler completes his PL/SQL Index-by array binding support project. In node-oracledb 1.7 he has added "bind by position" syntax to the already existing "bind by name" support. Thanks Dieter! The "bind by position" syntax looks like:

connection.execute(
  "BEGIN mypkg.myinproc(:id, :vals); END;",
  [
    1234,
    { type: oracledb.NUMBER,
       dir: oracledb.BIND_IN,
       val: [1, 2, 23, 4, 10]
    }
  ],
  function (err) { . . . });

Personally I'd recommend using bind by name for clarity, but this PR makes the feature congruent with binding scalar values, which is always a good thing.

Documentation is at PL/SQL Collection Associative Array (Index-by) Bind Parameters.

New Transparent JavaScript Wrapper for Existing Classes

The other major change in 1.7 is a new JavaScript wrapper over the current node-oracledb C++ API implementation, courtesy of some community discussion and the direction that users seemed to have been heading in: creating similar wrappers. It was also the result of some 'above and beyond' overtime from Dan McGhan who did the project. This wrapper should be transparent to most users. It gives a framework that will make it easier to extend node-oracledb in a consistent way and also let developers who know JavaScript better than C++ contribute to node-oracledb.

New Connection Pool Queue Enabled by Default

The layer has let Dan add his first new user feature: a request queue for connection pooling. It is enabled by a new Boolean pool attribute queueRequests. If a pool.getConnection() request is made but there are no free connections (aka sessions) in the pool, the request will now be queued until an in-use connection is released back to the pool. At this time the first request in the queue will be dequeued, and the underlying C++ implementation of pool.getConnection() will be called to return the now available connection to the waiting requester.

A second new pool attribute queueTimeout uses setTimeout to automatically dequeue and return an error for any request that has been waiting in the queue too long. The default value is 60000 milliseconds, i.e. 60 seconds. In normal cases, when requests are dequeued because a connection does become available, the timer is stopped before the underlying C++ layer gets called to return the connection.

The pool queue is enabled by default. If it is turned off, you get pre-1.7 behavior. For example, if more requests are concurrently thrown at an app than the poolMax value, then some of the pool.getConnection() calls would likely return the error ORA-24418: Cannot open further sessions. When enabled, the new queue nicely stops this error from occurring and lets apps be more resilient.

The pool option attribute _enableStats turns on lightweight gathering of basic pool and queue statistics. It is false by default. If it is enabled, applications can output stats to the console by calling pool._logStats() whenever needed. I think it will be wise to monitor the queue statistics to make sure your pool configuration is suitable for the load. You don't want the queue to be an invisible bottleneck when too many pool.getConnection() requests end up in the queue for too long. Statistics and the API may change in future versions, so the attribute and method have an underscore prefix to indicate they are internal.

Connection Queue Example

To look at an example, I used ab to throw some load at an app based on examples/webapp.js. I used a load concurrency of 25 parallel requests. The pool had a maximum of 20 sessions in its pool. The extra load was nicely handled by the connection queue without the application experiencing any connection failures.
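
The exact ab invocation isn't shown; with a concurrency of 25 it would look something like this (the port and total request count are assumptions - use whatever examples/webapp.js listens on):

```shell
# 100000 total requests, 25 concurrent, against the demo web app.
ab -c 25 -n 100000 http://localhost:7000/
```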

I'd modified the app to check for a particular URL and dump statistics on request:

    . . .
    var hs = http.createServer (
      function(request, response)
      {
        var urlparts = request.url.split("/");
        var arg = urlparts[1];
        if (arg === 'stats') {
          pool._logStats();
        }
    . . .

Here is snapshot of the output from _logStats() at one point during the test:

Pool statistics:
...total connection requests: 26624147
...total requests enqueued: 5821874
...total requests dequeued: 5821874
...total requests failed: 0
...total request timeouts: 0
...max queue length: 6
...sum of time in queue (milliseconds): 13920717
...min time in queue (milliseconds): 0
...max time in queue (milliseconds): 1506
...avg time in queue (milliseconds): 2
...pool connections in use: 12
...pool connections open: 20
Related pool attributes:
...queueRequests: true
...queueTimeout (milliseconds): 0
...poolMin: 10
...poolMax: 20
...poolIncrement: 10
...poolTimeout: 0
...stmtCacheSize: 30
Related environment variables:
...process.env.UV_THREADPOOL_SIZE: undefined

The connection pool was semi-arbitrarily configured for testing. It started out with 10 sessions open (poolMin) and as soon as they were in use, the pool would have grown by another 10 sessions (poolIncrement) to the maximum of 20 (poolMax).

What the stats show is that not all pool.getConnection() requests could get a pooled connection immediately. About 20% of requests ended up waiting in the queue. The connection pool poolMax is smaller than optimal for this load.

The queue was never large; it never had more than 6 requests in it. This is within expectations since there are at least 5 more concurrent requests at a time than there are connections available in the pool.

If this were a real app, I might decide to increase poolMax so no pool.getConnection() call ever waited. (I might also want to set poolTimeout so that when the pool was quiet, it could shrink, freeing up DB resources.) However the average wait time of 2 milliseconds is small. If I don't have DB resources to handle the extra sessions from a bigger pool, I might decide that a 2 millisecond wait is OK and that the pool size is fine.
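
As a quick sanity check on that average, the stats above report a queue-time sum of 13920717 ms across 5821874 enqueued requests, and the reported value is just the integer division of the two:

```shell
# avg time in queue = sum of time in queue / total requests enqueued
echo $((13920717 / 5821874))   # prints 2 (milliseconds, integer division)
```
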

At least one connection spent 1.5 seconds in the queue. Since I know my test infrastructure I'm guessing this was when the pool ramped up in size and my small, loaded DB took some time to create the second set of 10 sessions. Maybe I should experiment with a smaller poolIncrement or bigger poolMin?

Another important variable shown in the stats is UV_THREADPOOL_SIZE. I'd not set it so there were the default four worker threads in the Node process. Blindly increasing poolMax may not always help throughput. If DB operations take some time, you might find all threads get blocked waiting for their respective DB response. Increasing UV_THREADPOOL_SIZE may help improve application throughput.

The best settings for pool configuration, UV_THREADPOOL_SIZE, and any DRCP pool size will depend on your application and environment.

Connection Pool Queue Statistics

The table below shows the node-oracledb 1.7 pool statistics descriptions. These stats and the APIs to enable and log them may change in future versions of node-oracledb. I look forward to getting some PRs, for example to add a standard logging capability which the stats generation can be part of.

Connection Pool Metric: Description

total connection requests: Number of pool.getConnection() calls made for this pool

total requests enqueued: Number of connection requests that couldn't be satisfied because every session in the pool was already being used, and so had to be queued waiting for a session to be returned to the pool

total requests dequeued: Number of connection requests that were removed from the queue when a free connection became available. This is triggered when the app has finished with a connection and calls release() to return it to the pool

total requests failed: Number of connection calls that invoked the underlying C++ pool.getConnection() callback with an error state. Does not include queue request timeout errors

total request timeouts: Number of connection requests that were waiting in the queue but exceeded the queueTimeout setting. The timeout is triggered with a JavaScript setTimeout call

max queue length: Maximum number of connection requests that were ever waiting at one time

sum of time in queue: Total sum of time that connection requests have been waiting in the queue

min time in queue: Smallest amount of time that any request waited in the queue

max time in queue: Longest amount of time that any request waited in the queue

avg time in queue: Derived from the sum of time value divided by the number of requests enqueued

pool connections in use: A metric returned by the underlying Oracle C client session pool implementation. It is the number of currently active connections in the connection pool

pool connections open: Also returned by the underlying library. It shows the number of currently open sessions in the underlying connection pool

Note that the sum of time in queue, the min time in queue and the max time in queue values are calculated when queued requests are removed from the queue, so they don't record the amount of time for requests still waiting in the queue.

Resources

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Public Cloud – Plain Vanilla or Business Logic Tailored to You

WebCenter Team - Mon, 2016-02-29 06:00

Brent Seaman, Vice President, Cloud Solutions, Mythics, Inc.

When you think Public Cloud, do you think “vanilla”? Many people do.

Many SaaS applications are built to run essentially out-of-the-box and to be configurable without much room for customization. While that works for many organizations, some business problems benefit from a tailored application that maps to unique business processes. In those instances, when a custom process needs to be modeled or when process automation enhances efficiency, I’ve found that Business Process Management (BPM) tools are the place to turn.

In the fall of 2015, Mythics bought Oracle Process Cloud Service (OraclePCS or PCS) to aid in automating certain business processes in our operations. In the professional services industry, proposals, agreements, and contracts are more routine than getting an oil change or eating dinner. Mythics has processes common to other organizations. The ones of interest were not part of a particular CRM or ERP tool, but do interface with those systems. We chose automating the Statement of Work approval process as our first endeavor with OraclePCS.

Compared to BPM software installed on-premises, Oracle Process Cloud Service eliminates the IT burden for BPM infrastructure. It puts the problem and solution in the hands of the line of business (LoB) where the process expertise exists. OraclePCS also allows progression of processes from a Development or Test environment to UAT and Production, either in the Cloud or on on-premises BPM infrastructure. Even though Mythics has the expertise to efficiently stand up an on-premises BPM infrastructure, OraclePCS allowed us to eliminate those activities and better manage one of our most precious commodities – time.

With a growing business that is intent on remaining agile in operations, quick and effective innovation is important for Mythics. Process modeling and implementation accelerators came in handy for us on the very first use of Oracle PCS. We modeled the process in Business Process Composer and were able to test the process flow using the Play function - which most closely resembles the debug function of some Integrated Development Environments – allowing us to step through the process in a play-by-play fashion. Once all routes prove out with the Play feature, the process can be promoted to a Test environment or to Production.


A key approach we took with the toolset is to combine Oracle Process Cloud Service with Oracle Documents Cloud Service (OracleDOCS or DOCS). We use DOCS in conjunction with PCS to manage the documents associated with the process. Documents can be used to kick-off a process, for in-work editing, and to release an approved state. Processes use documents – they naturally fit together.

The desktop sync client in DOCS allows us to drop a SOW or other supporting documents into the opportunity folder to feed the approval request process with reference items. This folder example happens to be from Windows. The same function exists for Mac OSX.


Below is an example of the corresponding folder from a web browser. The desktop sync function allows users to work offline when not connected to the network, and it automatically syncs to the Documents Cloud Service when reconnected. We have used several different browsers without issue, including IE, Safari, Chrome, and Firefox.


In addition to browser access, we recently started using mobile apps as part of the approval process for PCS and DOCS. By clicking on our opportunity folder link in the PCS form, the appropriate DOCS folder automatically opens in the Oracle Documents mobile app (available on both Android and iOS). We have approvers using this functionality to review and approve from their mobile devices.

DOCS is a common repository for use across many Cloud services and can be leveraged for a variety of uses. We plan to leverage our investment in this Cloud tool with other projects, which include pulling in shared-drive content and integrating with our CRM tool as a system of record.

Similarly, we plan to leverage the Process Cloud Service technology across other processes. Our approach was to start with one business process, support the organizational change to process automation, then expand to other processes. Some organizations may take a different approach, but this one made natural sense to us. We are currently mapping an environment (VM) provisioning process for our Technology Innovation Center. Future implementations will include processes for procurement approval, new employee on-boarding, event planning, and solution development.

I joined a webcast in November with David Le Strat from Oracle Product Management to share our experience. My recorded portion is in the second half of this webcast. A collection of the Q&A from that session can be found here. A brief video summarizing the benefits from our solution strategy can be found here.

In Summary

For anyone looking at stepping into the Cloud, process mapping and automation is a safe way to start. You can define your process as big or small as you like. You can update the process over time. You can determine how many people to involve in the process from the start. Starting with a project that is the right size for you will help with adoption of the application.

Business Process Management is certainly a way to achieve quick time to benefits. Automating processes in this way provides ancillary benefits like traceability, reporting, cycle time reduction, and process improvement.

When considering a process automation project with BPM tools, start with a manageable scope, grow from there, and make sure to include a good communication plan from the start. Stakeholders will want to know what is planned and how things turn out. In addition, the broader organization will want to be kept well informed along the way.

If any part of this blog article was interesting to you, you may also like other Cloud Computing articles by Mythics. One good summary of the Oracle Cloud Platform Services by Shawn Ruff is in this collection.

Consider connecting with us at Oracle CloudWorld in DC on March 24. Oracle is planning several different areas of interest from HR, Marketing, and Customer Services cloud applications to Back Office and industry focused sessions.



An old dog learns a new trick

Mathias Magnusson - Mon, 2016-02-29 05:00

Reports of this blog's death have been greatly exaggerated. It has been very quiet here, though, while I worked on getting the Swedish part of Miracle started. It is now rocking the Stockholm market, so it's time to get back into more geeky stuff.

Talking of which: I have encountered Liquibase for database versioning time after time, and always come to the conclusion that it is not what a DBA wants to use. I recently took it out for a spin to prove once and for all that it is not capable of what real DBAs need.

Well, let's just say that it did not end like I expected. The phrase "an old dog learns a new trick" comes to mind. Once I got over the idea that it is the DDL I have to protect, I realised that what I really need is control over how changes occur. In fact, I get all the control I need, and problems such as backing out changes and adapting to different kinds of environments and/or database types are easily handled.
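To give a flavour of what that control looks like, here is a minimal sketch of a Liquibase changelog in its plain-SQL format (the table, sequence, and author names are made up for illustration). Each changeset carries its own rollback, which is what makes backing out changes straightforward:

```sql
--liquibase formatted sql

--changeset mathias:1
-- Forward change: create a small table.
CREATE TABLE demo_user (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(100) NOT NULL
);
--rollback DROP TABLE demo_user;

--changeset mathias:2 dbms:oracle
-- Changes can be restricted to a database type, e.g. Oracle-only objects.
CREATE SEQUENCE demo_user_seq START WITH 1;
--rollback DROP SEQUENCE demo_user_seq;
```

Running `liquibase update` applies only the changesets not yet recorded in the tracking table, and `liquibase rollbackCount 1` undoes the last one.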

Do take it out for a spin, you’ll like it.

If you read Swedish, I wrote a document with a full demo of lots of the features. You find it here. It is a long document, partly because it has a long appendix with a changelog and partly because I step through each change (of 37) and explain them.

I also show how APEX applications can be installed with Liquibase. This is often said not to work; supposedly you have to do it by hand with APEX. Well, not only is it possible, it is easy.

I’d translate the demo and the document to English if that would be useful to many people. Let me know if this sounds like a document you’d like to see in English.


Client support for WITH using PL/SQL

Gary Myers - Mon, 2016-02-29 03:00
My employer has been using 12c for about a year now, migrating away from 11gR2. It's fun working out the new functionality, including wider use of PL/SQL.

In the 'old' world, you had SQL statements that had to include PL/SQL, such as CREATE TRIGGER, CREATE PROCEDURE, etc. And you had statements that could never include PL/SQL, such as CREATE SYNONYM and CREATE SEQUENCE. DML (SELECT, INSERT, UPDATE, DELETE and MERGE) fell into the latter category.

One of the snazzy new 12c features is the use of PL/SQL in SELECTs, so we now have a new category of statements which may include PL/SQL. In some cases that confuses clients, which try to interpret the semi-colons in the PL/SQL as SQL statement terminators.
SQL Plus
The good news is that the 12c SQL Plus client works great (or at least I haven't got it confused yet), so it gets a grade A pass. However, if you're stuck with an older 11g client, you have to make accommodations to use this 12c stuff.

Fortunately, even the older sqlplus clients have a SET SQLTERMINATOR statement. By setting the value to OFF, the client will ignore the semi-colons. That means you'll be using the slash character on a new line to execute your SQL statements. Given the necessary workaround, I'll give it a B grade, but that's not bad for a superseded version of the client.

SET SQLTERMINATOR OFF

WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/
SQLCL
If you grab the latest version of SQLcl (mentioned by Jeff Smith here) you'll be fine with the WITH...SELECT option. It also seemed to work fine for the other DML statements. Note that, as per the docs, "If the top-level statement is a DELETE, MERGE, INSERT, or UPDATE statement, then it must have the WITH_PLSQL hint."

INSERT /*+WITH_PLSQL */ INTO t123 
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123
  FROM dual
/

It does fall down on the CREATE statements. The CREATE TABLE, CREATE VIEW and CREATE MATERIALIZED VIEW statements all allow WITH PL/SQL, and do not require the hint. The following works fine in SQL Plus (or if you send it straight to the SQL engine via JDBC or OCI, or through dynamic SQL).

CREATE TABLE t123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123  val
  FROM dual
/

Again, there's a workaround: sqlcl will process the statement if it does contain the WITH_PLSQL hint. However, that hint isn't genuine as far as the database is concerned (i.e., it is not stored in the data dictionary and won't be pulled out via DBMS_METADATA.GET_DDL). Also, sqlcl doesn't support the SQL Plus SET SQLTERMINATOR command, so we can't use that workaround. Still, I'll give it a B grade.

CREATE /*+WITH_PLSQL */ TABLE t123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123  val
  FROM dual
/
SQL Developer
As of 4.1.3, SQL Developer offers the weakest support for this 12c functionality. 
[Note: Scott in Perth noted the problems back in 2014.]

Currently the plain WITH...SELECT works correctly, but DML and CREATE statements all fail when it hits the semi-colons, and it tries to run the statement as two or more separate SQLs. The only workaround is to execute the statement as dynamic SQL through PL/SQL.
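As a sketch of that dynamic SQL workaround (reusing the same illustrative t123/r123 names as the statements above), wrapping the statement in EXECUTE IMMEDIATE means the client only has to parse an anonymous block, so the semi-colons inside the quoted string never confuse it:

```sql
BEGIN
  -- The client parses only this anonymous block; the semi-colons
  -- inside the q-quoted string are passed straight to the database.
  EXECUTE IMMEDIATE q'[
    CREATE TABLE t123 AS
    WITH
     FUNCTION r123 RETURN NUMBER IS
     BEGIN
       RETURN 123;
     END;
    SELECT r123 val
      FROM dual]';
END;
/
```

The q-quote syntax (q'[...]') avoids having to double up any single quotes inside the embedded statement.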

Since it seems to share most of the parsing logic with sqlcl, I'd expect it to catch up with its younger sibling on the next release. Hopefully they'll be quicker supporting any 12cR2 enhancements.

I'll give it a 'D' until the next release. In the meantime, pair it up with SQL Plus.
TOAD 11
While I very rarely use it, I do have access to TOAD at work. TOAD recognizes blank lines as the separator between statements, so doesn't have an issue with semi-colons in the middle of SQL statements. Grade A for this functionality.

Just for completeness, these are the test statements I used:

CLEAR SCREEN

SET SQLTERMINATOR OFF

DROP TABLE t123
/
DROP VIEW v123
/
DROP MATERIALIZED VIEW mv123
/

PROMPT SELECT 
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/

PROMPT CREATES

CREATE TABLE t123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123  val
  FROM dual
/

CREATE VIEW v123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/

CREATE MATERIALIZED VIEW mv123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/

PROMPT INSERT/DELETE/MERGE

INSERT /*+WITH_PLSQL */ INTO t123 
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123
  FROM dual
/

DELETE /*+WITH_PLSQL */FROM t123
WHERE val =
  (WITH
     FUNCTION r123 RETURN NUMBER IS
     BEGIN
       RETURN 123;
     END;
    SELECT r123
      FROM dual)
/

MERGE /*+WITH_PLSQL */ INTO  t123 D
   USING (WITH
             FUNCTION r123 RETURN NUMBER IS
             BEGIN
               RETURN 123;
             END;
            SELECT r123 val
              FROM dual) s
   ON (d.val = s.val )
   WHEN NOT MATCHED THEN INSERT (val) VALUES (s.val)
/

PROMPT UPDATES

UPDATE /*+WITH_PLSQL */
  (WITH
     FUNCTION r123 RETURN NUMBER IS
     BEGIN
       RETURN 123;
     END;
    SELECT val, r123
      FROM t123)
SET val = r123
/

UPDATE /*+WITH_PLSQL */ t123
SET val =
  (WITH
     FUNCTION r123 RETURN NUMBER IS
     BEGIN
       RETURN 123;
     END;
    SELECT r123
      FROM dual)
/      

CREATE /*+WITH_PLSQL */ TABLE t123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123  val
  FROM dual
/

CREATE /*+WITH_PLSQL */ VIEW v123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/

CREATE /*+WITH_PLSQL */ MATERIALIZED VIEW mv123 AS
WITH
 FUNCTION r123 RETURN NUMBER IS
 BEGIN
   RETURN 123;
 END;
SELECT r123 val
  FROM dual
/

Links for 2016-02-28 [del.icio.us]

Categories: DBA Blogs

What is FAHRCS?

David Haimes - Sun, 2016-02-28 11:00

FAHRCS (pronounced "farks") is the de facto acronym for the officially titled Accounting Hub Reporting Cloud Service. It stands for Fusion Accounting Hub Reporting Cloud Service, which is quite difficult to say. I have got pretty good at saying F.A.H.R.C.S. quickly, but I think "farks" is probably the easiest.

If you are wondering what FAHRCS actually is, you can follow @FAHRCS on twitter, or check out https://cloud.oracle.com/en_US/accounting-hub-reporting-cloud for official documentation.

I’ll be presenting about it at the Higher Education User Group Conference, Alliance16 in March and again at OAUG Collaborate16 in April.  So I hope to see you there and help you learn more about FAHRCS.


Categories: APPS Blogs

Change Item Position using jQuery

Denes Kubicek - Sun, 2016-02-28 07:51
See this example on how to change an item's position. In APEX you can position buttons after the action bar in an interactive report; however, you can't put page items there. Using jQuery, this is easy to achieve.

Categories: Development

React on Tab Change

Denes Kubicek - Sun, 2016-02-28 07:47
See this example on how to react to a tab change in APEX 5. The problem is determining the right selector to trigger the corresponding dynamic action. Thanks to Christian Rokitta for his help.

Categories: Development

Oracle JET Live List with WebSocket

Andrejus Baranovski - Sat, 2016-02-27 19:47
I have updated the sample - Oracle JET and WebSocket Integration for Live Data - to include a live data list. Along with the live updates delivered to the chart component, the list component displays the changed data. I'm displaying the five most recent changes, with new data appearing in the last row:


Watch the recorded demo; every 4 seconds new data arrives and the chart is updated, along with the list records:


The JET list component UI structure (a data grid) is rendered with a template, arranging a separate column for each job:


When new data arrives, it is pushed into a JET observable array (this triggers an automatic UI list reload/refresh). If there are more than five items in the list, the first one is removed. The list observable array is configured with a 500 ms delay to make the UI refresh smoother:


Download sample application (JET, WebSocket server side and ADF to simulate updates in the DB) - JETWebSocket_v2.zip.

I'm going to present JET and WebSocket integration at the Oracle Fusion Middleware Partner Community Forum 2016.

A Few of My Favorite Things for Ultimate Productivity with Oracle WebCenter

As an Oracle WebCenter consultant at Fishbowl Solutions, I have a number of tools that I use that keep me happy and productive. Whether or not you are a software developer, these tools can do the same for you and your business.

 

Slack


Unless you’ve been hibernating for the last year or so, you’ve probably heard of Slack. Haven’t adopted it for your business yet? Here’s why you should.

Slack facilitates contextual, transparent, and efficient communication for teams. Slack helps organize your communications into "channels." Working with Fishbowl Solutions on a WebCenter project? Create a Slack channel and centralize your communications. Quickly share files with the entire team, and "Slackers" can give instant feedback. On the go? Slack goes with you via mobile, of course. Slack provides direct messaging and private channels, too.

Even better, Slack lets you integrate dozens of apps, so that you can centralize all of the services you and your team use. Send calendar reminders and events, search for documents, even start a Skype call. Slack is team communication for the 21st century (with custom emojis!).

 

Twitter and Evernote


Twitter and Evernote are my number one/two combo punch for staying on top of all things development. Twitter allows me to keep up to date with the latest news and trends in the web development world. I can peruse dozens of articles every day with information I want to store for later use. I save links to the best articles in my Evernote account, which I have organized into different notebooks for various topics. For example, I’m working on a presentation for Collaborate 16 on Oracle JET, Oracle’s new front-end JavaScript framework. Anything interesting I see on Twitter re: Oracle JET goes right into my Evernote notebook. It makes keeping track of news and information a breeze.

 

Trello


Trello is the application for list-making over-achievers (like me). I organize my to-dos into different "boards," depending on the project; I have a different board for each project I'm working on at Fishbowl. As I think of something I need to do, I can quickly add it to the appropriate to-do column. When I'm busy with a task, I move it to the "doing" column, then slide it over to "done" when finished. It keeps me on top of my task flow; it's motivating, visually appealing, and goes with me wherever I go. Trello also allows me to share boards with others for easy collaboration. Oh, and did I mention I can integrate Trello with Slack (insert custom Slack emoji here)?

Toggl


Toggl is a fantastic little desktop timer tool my colleague Nate Yates introduced to me. We consultants at Fishbowl Solutions need to keep very accurate timing of the hours we spend on different projects. Toggl allows me to input my different projects and then just click the appropriate button when I start working on one. It keeps track of my time for the week on each project. It makes time tracking simple, so that I can focus on creating responsive single-page applications for Fishbowl Solutions customers.

 

 

The post A Few of My Favorite Things for Ultimate Productivity with Oracle WebCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Get Proactive - Leverage the Oracle Dynamic Toolbox

Joshua Solomin - Fri, 2016-02-26 10:57
Looking for a collection of diagnostic tools, scripts, data collectors, or health checks for your Oracle products?

Bookmark the Catalog: Oracle Toolbox (Doc ID 1987483.2). Whenever a new or updated tool or resource is published, the links will automatically be added or updated. Start exploring today!

Included in the Oracle Dynamic Toolbox for all product areas:

- Diagnostic Tools & Scripts: data collectors, diagnostics, health checks, utilities, wizards, etc.
- Service Request Data Collection Plans (SRDCs): these list the information and output needed to start analyzing your issue.
- Generic Tools (applicable to all products): details on tools like Oracle Configuration Manager (OCM), Remote Diagnostic Assistant (RDA), etc.
- User Guides: index page for the online user guides for any product area.
- My Oracle Support Communities (MOSC): link to the parent MOSC site, allowing you to browse and post questions for any product area.
- Information Centers (ICs) - Doc ID 1987485.1: these display aggregate content for a given focus area, presented in categories for easy browsing. They offer a variety of focused dynamic content organized around a specific task.
- Interactive Troubleshooting Assistants - Doc ID 1987486.1: these dynamic question-and-answer tools guide you to a targeted solution.

Webcast: Marketing Asset Management Integrated with Marketing Cloud

WebCenter Team - Fri, 2016-02-26 08:46
Drive Marketing Effectiveness with Oracle Cloud Solutions

Organizations are struggling with managing marketing assets across multiple digital channels where content on each channel (web, email, Facebook page, etc.) is created and delivered by different teams of marketers using different technologies.

Join this webcast to learn how you can enable IT to empower lines of business by putting the power to create rich microsites in their hands, driving business agility and innovation.
  • Save money by enabling non-technical users to create microsites for content sharing and distribution
  • Reduce approval and process bottlenecks via automation
  • Coordinate marketing asset management across channels
Register now for this live webcast: March 2, 2016, 10:00 AM PT / 1:00 PM ET. #OracleDOCS #OraclePCS #OracleSCS

Speaker: Mariam Tariq, Senior Director of Product Management, Content and Process, Oracle Integrated Cloud Applications and Platform Services.
