Feed aggregator

Quick Cross-Period AWR Comparison

Dominic Brooks - Thu, 2015-10-15 04:43

Here’s a query which I find useful for a very quick comparison of the high-level time model statistics across AWR snapshots.
The numbers should match those in the corresponding section of the AWR report.

If you are obsessive-compulsive about tuning, then you may see some blips here and there which encourage you to dive into the AWR detail for that snapshot.

Quite often I get in in the morning to find an email about slow overnight processes or a generic “the database has been slow” enquiry, and before I start hunting around to prove or disprove the allegations, this can be a useful frame of reference.

WITH subq_snaps AS
(SELECT dbid                dbid
 ,      instance_number     inst
 ,      snap_id             e_snap
 ,      lag(snap_id) over (partition by instance_number, startup_time order by snap_id) b_snap
 ,      TO_CHAR(begin_interval_time,'D') b_day
 ,      TO_CHAR(begin_interval_time,'DD-MON-YYYY HH24:MI') b_time
 ,      TO_CHAR(end_interval_time,'HH24:MI')   e_time
 ,    ((extract(day    from (end_interval_time - begin_interval_time))*86400)
     + (extract(hour   from (end_interval_time - begin_interval_time))*3600)
     + (extract(minute from (end_interval_time - begin_interval_time))*60)
     + (extract(second from (end_interval_time - begin_interval_time)))) duration
 FROM   dba_hist_snapshot)
SELECT ss.inst
,      ss.b_snap
,      ss.e_snap
,      ss.b_time
,      ss.e_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END)/1000000/60,2),'999999990.99')                                  db_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END)/(ss.duration*1000000),1),'999999990.99')        aas
,      (SELECT round(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = ss.dbid
        AND    sm.snap_id         = ss.e_snap
        AND    sm.instance_number = ss.inst
        AND    sm.metric_name     = 'Average Synchronous Single-Block Read Latency'
        AND    sm.group_id        = 2)                                                                                                                   assbl
,      (SELECT round(average,2)
FROM   dba_hist_sysmetric_summary sm
WHERE  sm.dbid            = ss.dbid
AND    sm.snap_id         = ss.e_snap
AND    sm.instance_number = ss.inst
AND    sm.metric_name     = 'Host CPU Utilization (%)'
AND    sm.group_id        = 2)                                                                                                                   cpu_util
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB CPU' THEN em.value - bm.value END)/1000000,2),'999999990.99')                                      db_cpu
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'sql execute elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                    sql_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'PL/SQL execution elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')               plsql_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'parse time elapsed' THEN em.value - bm.value END)/1000000,2),'999999990.99')                          parse_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'failed parse elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                   failed_parse
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'hard parse (sharing criteria) elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')  hard_parse_sharing
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'RMAN cpu time (backup/restore)' THEN em.value - bm.value END)/1000000,2),'999999990.99')              rman_cpu
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'connection management call elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')     connection_mgmt
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'sequence load elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                  sequence_load
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'DB CPU' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           db_cpu_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'sql execute elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           sql_time_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'PL/SQL execution elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           plsql_time_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'parse time elapsed' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           parse_time_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'failed parse elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           failed_parse_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'hard parse (sharing criteria) elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           hard_parse_sharing_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'RMAN cpu time (backup/restore)' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           rman_cpu_perc
FROM  subq_snaps              ss
,     dba_hist_sys_time_model em
,     dba_hist_sys_time_model bm
WHERE bm.dbid                   = ss.dbid
AND   bm.snap_id                = ss.b_snap
AND   bm.instance_number        = ss.inst
AND   em.dbid                   = ss.dbid
AND   em.snap_id                = ss.e_snap
AND   em.instance_number        = ss.inst
AND   bm.stat_id                = em.stat_id
GROUP BY ss.dbid
,      ss.inst
,      ss.b_day
,      ss.b_snap
,      ss.e_snap
,      ss.b_time
,      ss.e_time
,      ss.duration
--HAVING b_day NOT IN (6,7)
--AND    inst = 2
--AND b_snap = 18673
--AND    e_time = '17:00'
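For reference, the aas column above is just the DB time delta (which the time model reports in microseconds) divided by the snapshot duration in seconds, and db_time is the same delta expressed in minutes. A minimal sketch of the arithmetic in Python (the figures are invented for illustration):

```python
def average_active_sessions(db_time_delta_us, duration_secs):
    """AAS = DB time delta (microseconds) / elapsed wall clock (seconds)."""
    return round(db_time_delta_us / (duration_secs * 1_000_000), 1)

def db_time_minutes(db_time_delta_us):
    """The db_time column: microseconds converted to minutes."""
    return round(db_time_delta_us / 1_000_000 / 60, 2)

# A one-hour snapshot that accumulated two hours of DB time
# averages two active sessions.
print(average_active_sessions(7_200_000_000, 3600))  # 2.0
print(db_time_minutes(7_200_000_000))                # 120.0
```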

Here is a similar query for comparing the different IO metrics.
Again, the numbers should match those in the AWR report.

WITH subq_snaps AS
(SELECT dbid                dbid
 ,      instance_number     inst
 ,      snap_id             e_snap
 ,      lag(snap_id) over (partition by instance_number, startup_time order by snap_id) b_snap
 ,      TO_CHAR(begin_interval_time,'D') b_day
 ,      TO_CHAR(begin_interval_time,'DD-MON-YYYY HH24:MI') b_time
 ,      TO_CHAR(end_interval_time,'HH24:MI')   e_time
 ,    ((extract(day    from (end_interval_time - begin_interval_time))*86400)
     + (extract(hour   from (end_interval_time - begin_interval_time))*3600)
     + (extract(minute from (end_interval_time - begin_interval_time))*60)
     + (extract(second from (end_interval_time - begin_interval_time)))) duration
 FROM   dba_hist_snapshot)
,    io_stats AS
(SELECT ss.*
 ,      bv.event_name
 ,      ev.time_waited_micro_fg - bv.time_waited_micro_fg time_waited_micro
 ,      ev.total_waits_fg       - bv.total_waits_fg       waits
 FROM   subq_snaps            ss
 ,      dba_hist_system_event bv
 ,      dba_hist_system_event ev
 WHERE  bv.dbid                   = ss.dbid
 AND    bv.snap_id                = ss.b_snap
 AND    bv.instance_number        = ss.inst
 AND    bv.event_name            IN ('db file sequential read','direct path read','direct path read temp','db file scattered read','db file parallel read')
 AND    ev.dbid                   = ss.dbid
 AND    ev.snap_id                = ss.e_snap
 AND    ev.instance_number        = ss.inst
 AND    ev.event_id               = bv.event_id)
SELECT io.dbid
,      io.inst
,      io.b_snap
,      io.e_snap
,      io.b_time
,      io.e_time
,      (SELECT ROUND(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = io.dbid
        AND    sm.snap_id         = io.e_snap
        AND    sm.instance_number = io.inst
        AND    sm.metric_name     = 'Average Synchronous Single-Block Read Latency'
        AND    sm.group_id        = 2) assbl
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) single_waits
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) multi_waits
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) prefch_wait
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits  END) END) direct_waits
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END)  temp_waits
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_single
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) single_secs_total
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND((time_waited_micro/1000)/NULLif(waits,0)) END) single_avg
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_multi
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) multi_secs_total
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND((time_waited_micro/1000)/NULLif(waits,0)) END) multi_avg
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_prefch
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) prefch_secs_total
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND((time_waited_micro/1000)/NULLif(waits,0)) END) prefch_avg
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits  END/duration) END) iops_direct
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) direct_secs_total
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND((time_waited_micro/1000)/NULLif(waits,0)) END) direct_avg
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_temp
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) temp_secs_total
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND((time_waited_micro/1000)/NULLif(waits,0)) END) temp_avg
FROM   io_stats io
GROUP BY io.dbid
,      io.inst
,      io.b_day
,      io.b_snap
,      io.e_snap
,      io.b_time
,      io.e_time
,      io.duration
--HAVING b_day NOT IN (6,7)
--AND    inst = 2
--AND b_snap = 18673
--AND    e_time = '17:00'
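The derived columns in this query follow a simple pattern: waits divided by the snapshot duration gives IOPS, and time waited (in microseconds) divided by the number of waits gives the average latency in milliseconds. A rough sketch in Python, with invented numbers:

```python
def iops(waits, duration_secs):
    """Waits per second across the snapshot interval."""
    return round(waits / duration_secs)

def avg_latency_ms(time_waited_micro, waits):
    """Average wait time in milliseconds; None when there were no waits."""
    if waits == 0:
        return None
    return round((time_waited_micro / 1000) / waits)

# 360,000 single-block reads in an hour at ~8ms each
print(iops(360_000, 3600))                     # 100
print(avg_latency_ms(2_880_000_000, 360_000))  # 8
```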

Finally, one of the queries I used the most for quick checks – comparing top SQL in AWR across periods:

SELECT x.*, (SELECT sql_text from dba_hist_sqltext t where t.sql_id = x.sql_id and rownum = 1) txt
FROM (
SELECT sn.snap_id
,      TO_CHAR(sn.end_interval_time,'DD-MON-YYYY HH24:MI') dt
,      st.sql_id
,      st.instance_number
,      st.parsing_schema_name
,      st.plan_hash_value
,      SUM(st.fetches_delta) fch
,      SUM(rows_processed_delta) rws
,      SUM(executions_delta)     execs
,      ROUND(SUM(elapsed_time_delta)/1000/1000)   elp
,      ROUND(SUM(elapsed_time_delta)/1000/1000/NVL(NULLIF(SUM(executions_delta),0),1),2)   elpe
,      ROUND(SUM(cpu_time_delta)/1000/1000)       cpu
,      SUM(buffer_gets_delta)    gets
,      ROUND(SUM(iowait_delta)/1000/1000)         io
,      ROUND(SUM(clwait_delta)/1000/1000)         cl
,      ROUND(SUM(ccwait_delta)/1000/1000)         cc
,      ROUND(SUM(apwait_delta)/1000/1000)         ap
,      ROUND(SUM(plsexec_time_delta)/1000/1000)   pl
,      ROUND(SUM(disk_reads_delta))         disk_reads
,      ROUND(SUM(direct_writes_delta))        direct_writes
,      ROW_NUMBER() over (PARTITION BY sn.dbid, sn.snap_id, st.instance_number
                          ORDER BY SUM(elapsed_time_delta) desc) rn
FROM   dba_hist_snapshot sn
,      dba_hist_sqlstat  st
WHERE  st.dbid            = sn.dbid
AND    st.snap_id         = sn.snap_id
AND    sn.instance_number = st.instance_number
GROUP BY sn.dbid
,      sn.snap_id
,      sn.end_interval_time
,      st.sql_id
,      st.instance_number
,      st.parsing_schema_name
,      st.plan_hash_value
) x
WHERE rn <= 5
ORDER by snap_id DESC, instance_number, rn;
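The ROW_NUMBER() pattern in this query ranks statements by elapsed time within each snapshot and instance, then keeps the top 5. The same top-N-per-group logic can be sketched in Python (the row shape and values are invented):

```python
from collections import defaultdict

def top_n_per_group(rows, n=5):
    """rows: tuples of (snap_id, instance, sql_id, elapsed).
    Returns the n rows with the highest elapsed per (snap_id, instance)."""
    groups = defaultdict(list)
    for row in rows:
        groups[(row[0], row[1])].append(row)
    result = []
    for key in sorted(groups):
        # Rank descending by elapsed within the group, keep the top n.
        ranked = sorted(groups[key], key=lambda r: r[3], reverse=True)
        result.extend(ranked[:n])
    return result

rows = [(100, 1, 'a1b2', 50), (100, 1, 'c3d4', 90), (100, 1, 'e5f6', 10),
        (101, 1, 'a1b2', 70)]
print(top_n_per_group(rows, n=2))
# [(100, 1, 'c3d4', 90), (100, 1, 'a1b2', 50), (101, 1, 'a1b2', 70)]
```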

Let me know if you spot any errors or anomalies or obvious improvements / additions.

Where's my TNS connection? SHOW TNS

Barry McGillin - Thu, 2015-10-15 04:19
Lots of users have been scratching their heads as to which tnsnames.ora is being found and used when connecting to the database with SQL Developer and with SQLcl.

In the latest release we've added another new command.


This command walks the locations where we look for tnsnames.ora and lists them out in order.  Then it tells you which one the tool will actually use and lists the entries for you.

So, with nothing set, no ORACLE_HOME, no TNS_ADMIN, here's what you get.  TNS is going to look in your home directory for a file called tnsnames.ora or .tnsnames.

Now, if we have an ORACLE_HOME set, we'll look for $ORACLE_HOME/network/admin/tnsnames.ora

Further, if we set TNS_ADMIN, it will override ORACLE_HOME and go to that location as shown here.

Lastly, we'll come back to the User directory. If you have a tnsnames.ora there or a .tnsnames, it will override everything and this is what will be used.

Now, go ahead and make a connection.  You can then run another new command called


which will show you how you are connected and what you are connected to.

Finally, you can get a look at which driver you are using for a connection using


which will show something like this, detailing types, versions and the URL of the connection you have.

Scaling FLUID pages for iPhone 6

Javier Delgado - Thu, 2015-10-15 03:06
We are currently developing FLUID pages for a customer on PeopleSoft HCM 9.1. As they cannot benefit from the standard functionality delivered through Update Manager until they upgrade to PeopleSoft HCM 9.2, they have decided to provisionally implement FLUID through customisations.

When doing this, we have identified an issue in iPhone 6 by which the FLUID pages were not correctly scaling:

As you can see, the text is barely readable. After some research, we identified that standard pages deal with this scaling issue by using the following PeopleCode (normally in the component PostBuild event):

Declare Function GetDefaultViewportSetting PeopleCode PTLAYOUT.FUNCLIB FieldFormula;
Declare Function SetViewport PeopleCode PTLAYOUT.FUNCLIB FieldFormula;

Local string &Viewport;
Local number &Pos;

&Viewport = GetDefaultViewportSetting();

If %Request.BrowserDeviceFormFactor = 0 And
      %Request.BrowserPlatformClass = "ios" Then
   &Pos = Find("minimal-ui", &Viewport);
   If &Pos = 0 Then
      &Viewport = &Viewport | ", minimal-ui";
      SetViewport(&Viewport);
   End-If;
End-If;

AddMetaTag("format-detection", "telephone=no");

The page now displays in a much better format:

Opening of APEX R&D New Zealand

Dimitri Gielis - Thu, 2015-10-15 02:20
We're really excited to announce that next to our Belgium office, we have opened an office in New Zealand too. This will allow us to cover multiple timezones and better serve the Australian and New Zealand market. You can read more about it on Lino's blog.


Contact our team of experienced professionals for innovative, reliable and cost effective Oracle Application Express solutions. 

Regarding any custom application development, training/coaching or consulting related to Oracle APEX technology we would love to hear from you.

For more information please contact our Belgium or New Zealand office.

We look forward to working with you.

Categories: Development

node-oracledb 1.3.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Thu, 2015-10-15 01:24

Version 1.3 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM.

We kept the changes minimal in this release for several reasons. It contains two small enhancements and a few bug fixes.

  • A new oracledb.oracleClientVersion attribute gives the version of the Oracle client libraries that node-oracledb is linked with.

    A connection.oracleServerVersion attribute gives the Oracle Database version used by the connection.

    These attributes are handy for code that needs to run in multiple environments. See examples/version.js for usage.

  • The major bug fix resolves some corruption with result.outBinds when calling PL/SQL blocks. This was sometimes causing a crash.

See CHANGELOG for the other changes.

Issues and questions about node-oracledb can be posted on GitHub.

node-oracledb installation instructions are here.

node-oracledb documentation is here.

Copy Data Management for Oracle Database with EMC AppSync and XtremIO

Kevin Closson - Wed, 2015-10-14 12:08

This is a quick blog entry to invite readers to view this little demonstration video I created. The topic is Copy Data Management in an Oracle Database environment. We all know the pains involved with the number of database copies needed in today’s Oracle environment. Well, how about technology with these characteristics:

  1. 100% space efficient. There is no need for any full-copy “donor” in this solution. You can create 8192 XtremIO Virtual Copies of volumes in an XtremIO array and there is no reduction in user-capacity at the storage level. For example, 512 copies of a 1TB volume with Oracle tablespaces in it takes exactly 1TB from the array.
  2. Self service. With EMC AppSync, permissions can be set up so that developers can create, refresh, and expire their own copies.
  3. Speed. AppSync copy operations such as creation and refresh are measured in seconds.
  4. Data Services. All XtremIO Virtual Copies enjoy data reduction services. So as users begin to make changes to their database copies the modified blocks are first treated with de-duplication and then compression.

You more than likely need XtremIO in any case. However, it's now also time to think about the ease of provisioning copies of Oracle databases to test/dev and other functions the XtremIO way.

It only takes minutes so please give this a view:

Filed under: oracle

Ramp Up with Support Services at Oracle OpenWorld

Chris Warticki - Wed, 2015-10-14 11:13
Ramp Up with Support Services at Oracle OpenWorld

With over 500 experts on hand to help our customers, Oracle Support Services will be front and center at Oracle OpenWorld 2015. Come to the Moscone Center and Palace Hotel October 25–29, and learn best practices to help optimize your Oracle technology and increase the value of your Oracle products and services.

If you or others in your organization will be attending Oracle OpenWorld this year, take a moment to review the available Support Services sessions, events, and demonstrations.

If You'll be Attending

Learn about the must-see conference sessions and networking events hosted by Oracle Support Services, including hands-on, interactive demos and training.


Discover 50+ sessions that highlight the depth and breadth of Oracle's Support Services portfolio, and see how other customers are driving success with Oracle Support tools and resources.

  • Best practices for supporting and upgrading Oracle hardware and software, including how to utilize all the features and entitlements of Oracle Premier Support
  • Best practices, tips, and customer examples for how to leverage mission-critical support delivered by Oracle Advanced Customer Support
  • How customers are leveraging Oracle Consulting for faster adoption and ROI to help them succeed using the Oracle stack
  • How organizations are developing sharper, smarter, faster strategies which help them grow and meet today's changing business requirements with modern business platforms that drive innovation, create value, lower cost, and reduce risk
Oracle University Training Sessions

Explore 18 single-day, pre-conference training sessions offered by Oracle University on Sunday, October 25 covering popular Oracle technology topics, all taught by expert instructors.

See available training sessions on the Oracle University OpenWorld site.
Networking Events

If you're an Oracle Premier Support customer, come see us on Monday, October 26 at the sixth annual My Oracle Support Monday Mix. Relax and network with Oracle support engineers, support managers, and other Oracle customers starting at 6 p.m. PT. Find out more about the Monday Mix event and venue here.

Meet us in the Support Stars Bar and Mini-Briefing Center in Moscone West Exhibition Hall, booth 3361. Work one-on-one with some of Oracle's best to get answers to your support questions. Attend a 10-minute mini-briefing and get entered to win prizes!


DEMOgrounds feature live demonstrations focusing on specific Oracle products and solutions across the Moscone Center and other event venues.

Based on your Oracle products and services, visit a DEMOgrounds area appropriate to you:

Product | Location | Booth
Oracle Database | Moscone South | 122
Oracle HCM and Fusion HCM | Palace Hotel | Sunset Court
Middleware | Moscone South | 106
My Oracle Support & Proactive Tools | Moscone West | 3161 (next to the Stars Bar)
Oracle PeopleSoft | Palace Hotel | Sunset Court
Server / Storage | Moscone South | 737
Other Featured Support Services Demonstrations

In addition to the DEMOgrounds, Oracle Support Services will be offering specialized informational demos around maximizing Oracle Support resources and tools during convention hall hours.

Program or Service Area | Title | Location
HCM - Fusion Applications | Maximize Your Investment in Oracle HCM Cloud Services Support | Palace Hotel, Sunset Court
ERP - Fusion Applications | Maximize Your Investment in Oracle Enterprise Resource Planning Cloud Services Support | Moscone West, Level 1 ERP Showcase
PaaS | Maximize Your Investment in Oracle Technology Cloud Services Support | Moscone South, Oracle Cloud Platform and Infrastructure Showcase
Platinum Services | Oracle Platinum Services: Extreme Support for Engineered Systems | Moscone North, Lower Lobby, Engineered Systems Showcase (at the bottom of the escalator)
Oracle Consulting | (Announced at the event) | Moscone West, ERP Showcase
Oracle Advanced Customer Support | (Announced at the event) | Moscone South, Oracle Database DEMOgrounds (area 122)

Like a white stone in the depths of a well...

Greg Pavlik - Tue, 2015-10-13 13:26
Like a white stone in the depths of a well...
Anna Akhmatova
working translation by Greg Pavlik

Like a white stone in the depths of a well
within me there lies one memory.
I can not - and do not - want to expel
this, my greatest joy and my agony.

I think that anyone who closely looks
can see this recollection reads
as harrowing sadness in a tragic book -
a warning, and a sign of need.

I know the gods forever strive
to wreck the body but cannot touch the mind;
assurance that you will forever live
as a memory I can’t leave behind.

Как белый камень в глубине колодца,
Лежит во мне одно воспоминанье.
Я не могу и не хочу бороться:
Оно - веселье и оно - страданье.

Мне кажется, что тот, кто близко взглянет
В мои глаза, его увидит сразу.
Печальней и задумчивее станет
Внимающего скорбному рассказу.

Я ведаю, что боги превращали
Людей в предметы, не убив сознанья,
Чтоб вечно жили дивные печали.
Ты превращен в мое воспоминанье.

Got Published in AUSOUG's Foresight Online Spring 2015

Pakistan's First Oracle Blog - Tue, 2015-10-13 01:47
AUSOUG's Foresight Online Spring 2015 Edition is the premier publication of the Australian Oracle User Group.

Following are highlights of this edition:

  • President's Message
  • DBA Article: Automated Testing of Oracle BPM Suite 12c Processes with SOAP UI - Peter Kemp, State Revenue Office, Victoria
  • DBA Article: Best Practices for Oracle on Pure Storage
  • Apps Article: Performance Review Data Capture - Brad Sayer, More4Apps
  • DBA / Dev Article: Database Developers – Feeling Left Out of Agile? - D Nowrood, Dell Software
  • Apps Article:  Cost-effective alternative to Oracle Discoverer and BI Suite - Wilhelm Hamman, Excel4apps
  • DBA Article: DBA101 - characterset corruption - Paul Guerin, HP
  • Quick Tips 1: Five Reasons to Upgrade to APEX 5.0 - Scott Wesley, Sage Computing Services
  • Quick Tips 2: Last Successful login time in 12c - Fahd Mirza Chughtai, The Pythian Group
Categories: DBA Blogs

Sharding in Oracle 12c Database

Pakistan's First Oracle Blog - Mon, 2015-10-12 22:22
Sharding is still pretty much an alien, or at least a fairly new, concept for Oracle DBAs. In the realms of big data, the term is used quite extensively though.

What is Sharding in simple words:

Sharding is partitioning. Horizontal partitioning to be exact.

Sharding means partitioning a table's rows on the basis of some criteria and storing those partitioned rows (i.e. the shards) on different database servers. These database servers are cheap, low-end commodity servers.

The benefits include smaller data to manage, smaller backups, faster reads, and faster response time for the queries.

Just like the existing partitioning option in the Oracle database, there are generally three kinds of sharding:

Range Sharding
List Sharding
Hash Sharding
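To make the idea concrete, here is a toy sketch in Python of how a sharding key might be routed to a server under hash and range sharding (the server names, hash function, and ranges are invented for illustration; a real system would use a proper hash function and catalog):

```python
def hash_shard(key, servers):
    """Hash sharding: a deterministic hash of the key picks the server."""
    # Simple sum-of-bytes hash for illustration only.
    h = sum(str(key).encode())
    return servers[h % len(servers)]

def range_shard(key, ranges):
    """Range sharding: ranges is a list of (upper_bound, server) pairs."""
    for upper, server in ranges:
        if key < upper:
            return server
    raise ValueError("no shard covers this key")

servers = ['db1', 'db2', 'db3']
print(hash_shard(12345, servers))                                     # db1
print(range_shard(250, [(100, 'db1'), (500, 'db2'), (1000, 'db3')]))  # db2
```

List sharding would be the same lookup idea, but with an explicit mapping from discrete key values to servers.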

The news out there on social media is that the next version of Oracle 12c is coming with a sharding option. That is pretty exciting, so let's see what they come up with in this regard.

Categories: DBA Blogs

Next Oracle APEX NOVA Meetup Date Set

Scott Spendolini - Mon, 2015-10-12 15:26
The next Oracle APEX NOVA MeetUp is going to be held on November 12th, 2015 at 7PM.  We decided to mix things up a bit and are going to have it at Vapianos in the Reston Town Center.  We're also going to try a more informal agenda.  In other words, there will be no agenda.

So if you're around Reston on November 12th from 7-9PM (or so), feel free to stop by.  Here's the MeetUp.com link: http://www.meetup.com/orclapex-NOVA/events/226009784/

Submit your abstract for KScope 16 - only 3 days left

Dimitri Gielis - Mon, 2015-10-12 08:01
Only 3 days left to submit your abstract for KScope 16... so don't wait longer, submit now! :)

My first ODTUG conference was in 2006 and it was awesome. On this blog you'll find different blog posts about my experience and why I love it, so I won't go over that... but from time to time I get some questions about being a presenter at the event.

"What should I submit, I don't know what to talk about?"

The topics in the Oracle Application Express track are following:
  • Integration: Relates to any work where other products are integrated with APEX or a gap is bridged between platforms. Perhaps it's integration with EBS, SAP, Raspberry Pi, Node.js, OAuth, REST, etc...
  • New Release: Presentations that relate to the latest version of APEX. Typically the Oracle development team is presenting those topics.
  • Real World: Everybody loves real life examples. This is anything where a real life system is presented. The problem is explained and how APEX was used to solve it.
  • Detail Plunge: These are presentations on any topic, but the topic, even if it seems narrow, is explored in detail and with focus. It doesn't have to be complicated. It could be building plugins or Dynamic Actions.
  • Other Application Express: Everything else goes here. This is the catch-all bucket.
You can submit up to 4 abstracts.

The way I choose my topics typically falls into one of the following categories:

"Why even bother, I won't be selected"

Well, without submitting you don't know. What do you have to lose? I'm sure I'm not alone when I say I would love to meet new people and hear about what they do and discover. There's not a single day I don't learn something, tech or non-tech. This weekend I learned something new, and the feature has been in APEX for many years! (but that's for another blog post)
The level of presentations is very broad as well. We need presentations of all levels: beginner, intermediate, and advanced.

"I don't know how to write an abstract"

On the internet there are many resources on how to present and write an abstract. On the ODTUG site you'll find some tips and examples.

Give it a go, you'll feel good when you've done it. Here's the link. It's all about the learning experience!

All the best, and I look forward to meeting you in Chicago.
Categories: Development

Requiem Fragment

Greg Pavlik - Sun, 2015-10-11 16:59
Requiem Fragment
Anna Akhmatova
translation by Greg Pavlik

10 Crucifixion

      Do not lament me, O Mother,
      seeing me in the tomb.

The angelic chorus glorified the hour of eternity,
when the heavens convulsed in a river of fire:
He cried to His Father “Eli, Eli, lama sabachthani?”
And to His Mother, he spoke: “Do not lament me”…


Mary Magdalena writhed, and wept
as the beloved John froze like stone (or salt).
Where the Mother stood in silence -
no one dared to look.


Translator notes. I chose to translate this poem much less loosely than my interpretative translation of Akhmatova’s Lot’s Wife – there Russian readers will recognize that the final stanza is essentially my own poem (with an explicit reference to the unrelated work of Scott Cairns, in fact).

Instead, this section of Requiem is much closer to the original (1), with only minimal augmentation. Nonetheless, it is also very different from Akhmatova’s in certain critical respects. Requiem itself is a difficult and evocative work – its melancholy is inseparable from the suffering of both Akhmatova herself and the Soviet people under Stalin. In some sense, I have abandoned this context in my translation. Educated Russian readers would have recognized Akhmatova’s work as using the imagery of the hymnography of the Paschal Nocturnes, the final liturgical setting of Great and Holy Saturday chanted before the entombed body of the dead Christ, which includes a deeply moving dialogue with his Mother. (2)

I have chosen to deepen the liturgical elements of the poem and play off themes that recur in and around the Lenten Triodion. The text of the Slavonic service is rendered here in English as it is commonly used in American parishes within the Russian Orthodox tradition. Second, I use the Hebrew directly in quotation from the Psalter, emphasizing its position as a liturgical prayer. While the dialogic element from the Nocturnes service is repeated, maintaining the liturgical connection, here the address to the Mother is clearly one of human filial affection.

The river of fire is evocative of the image of God as “consuming fire”, which, St Isaac says, is experienced as bliss by the pure in heart. The second stanza – and I do not believe there is any intention at all in the original to do this – also points back to the story of Lot and Sodom and implicitly re-invokes the image of fire. Akhmatova makes no association with salt. I have tried intentionally not to recall the poetics of Stabat Mater in the final lines.

(1) Original Russian:


   Не рыдай Мене, Мати,
   во гробе зрящия.

Хор ангелов великий час восславил,
И небеса расплавились в огне.
Отцу сказал: "Почто Меня оставил!"
А матери: "О, не рыдай Мене..."


Магдалина билась и рыдала,
Ученик любимый каменел,
А туда, где молча Мать стояла,
Так никто взглянуть и не посмел.

1940, Фонтанный Дом

(2) From the Eastern Orthodox Liturgical Service of Paschal Nocturnes:

Do not lament me, O Mother, seeing me in the tomb, the Son conceived in the womb without seed, for I shall arise and be glorified with eternal glory as God. I shall exalt all who magnify thee in faith and in love.

Glory to Thee, our God, glory to Thee!

I escaped sufferings and was blessed beyond nature at Thy strange birth, O Son, who art without beginning. But now, beholding Thee, my God, dead and without breath, I am sorely pierced by the sword of sorrow. But arise, that I may be magnified.

Glory to Thee, our God, glory to Thee!

By my own will, the earth covers me, O Mother, but the gatekeepers of hell tremble at seeing me clothed in the blood-stained garments of vengeance; for when I have vanquished my enemies on the cross, I shall arise as God and magnify thee.

Glory to the Father and to the Son and to the Holy Spirit, now and ever and unto ages of ages. Amen.

Let creation rejoice, let all born on earth be glad, for hateful hell has been despoiled, let the women with myrrh come to meet me, for I am redeeming Adam and Eve and all their descendants, and on the third day shall I arise.

Oracle package for HTTPS/HTTP[version 0.2]

XTended Oracle SQL - Sun, 2015-10-11 16:30

A couple of days ago I created a simple package for HTTPS/HTTP, but I have now decided to improve it:

  1. A timeout parameter – it is better to be able to control connection time;
  2. Simple page parsing with PCRE regular expressions – to speed up and simplify page parsing: if you want to get a large number of matched expressions out of a CLOB with the regexp_xxx Oracle functions, you have to call those functions many times with different [occurrence] parameters, accessing the CLOB on every pass, whereas within a Java procedure it is just one pass.
  3. Support for plsqldoc – the tool for automatically generating documentation in HTML format (a javadoc analogue).
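For comparison, here is a sketch of the one-call-per-occurrence pattern that the package avoids, using only the built-in regexp functions against an illustrative literal (the CLOB is effectively re-scanned for every row):

```sql
-- Each row forces another regexp call with a different occurrence number
WITH page AS (SELECT 'alt="a" alt="b" alt="c"' AS txt FROM dual)
SELECT regexp_substr(txt, 'alt="([^"]+)"', 1, level, NULL, 1) AS match
FROM   page
CONNECT BY level <= regexp_count(txt, 'alt="([^"]+)"');
```

With a real multi-megabyte CLOB rather than a short literal, the repeated scans are where the time goes.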

Upd 2015-10-11:

  1. Added an HttpMethod parameter – so you can choose the POST or GET method
  2. Added the function get_last_response, which returns the last HTTP response code.

You can download new version from github: https://github.com/xtender/xt_http
It may also be interesting if you want to see how to return a collection of CLOBs/varchar2 values from a Java stored procedure.

So with the new functions I can even more easily get the list of upvoters from the previous post:

select * 
from table(
            pUrl     => '&url'
           ,pPattern => 'alt="([^"]+)"'
           ,pGroup   => 1
    ) t
Enter value for url: https://community.oracle.com/voting-history.jspa?ideaID=6901&start=0&numResults=1000

Denes Kubicek
Pavel Luzanov
Martin Preiss
Toon Koppelaars
Dom Brooks
Mahir M. Quluzade
Dmitry A. Bogomolov
Mohamed Houri
Randolf Geist
Kot Dmitriy
Timur Akhmadeev
Franck Pachot
Alexander Semenov
Bud Light
Harun Kucuksabanoglu
Vigneswar Battu
be here now
Oleh Tyshchenko
Yury Pudovchenko
Sergey Navrotskiy
Mikhail Velikikh
Oren Nakdimon
Sergei Perminov
Victor Osolovskiy
Alfredo Abate
Trihin Pavel
Menno Hoogendijk
Pinto Das
Mehmet Kaplan
naeel maqsudov

Categories: Development

Exadata Upgrade - Adding X5-2 Half Rack to X3-2 Existing Half Rack

Senthil Rajendran - Sat, 2015-10-10 23:26

Recently I did an Exadata expansion project on an X3 half rack, making it a full rack by adding X5 hardware.
The resulting setup is a full Exadata rack with two different flavours of engineered-system hardware: four X3-2 compute nodes and four X5-2 compute nodes, along with seven X3 cell nodes and seven X5 cell nodes.
This topology is supported, but there was a lot of work behind it.

  • First, getting the X3 onto the supported 12c release; this involved a four-node grid upgrade from 11g to 12c and upgrading the X3 compute/cell node image from 11g to 12c.
  • Next, physically racking the X5 hardware into the X3 rack space
  • Then upgrading the X5 to the latest 12c software stack, the same as the X3
  • Interconnecting the X3 and X5 so they co-exist
  • Prechecking the full Exadata rack before making them into one cluster
  • Next, adding the four X5 nodes to the X3 RAC cluster using addnode.sh
  • Now the storage: the X3 had 3TB disks and the X5 has 4TB disks, so the grid disks have to be the same size.
    • Following the doc "How to Add Exadata Storage Servers Using 3TB/4TB Disks to an Existing Database Machine (Doc ID 1476336.1)" will help.
    • The additional 1TB of space from the X5 cell storage was made into a RECO2 disk group for future recovery storage, as the current setup is 80:20 (DATA:RECO)
  • Completing cluster verification and returning the rack to service

This whole exercise was cool and went as documented.
For documentation, the guide below was used (you may have to refer to the latest document for extending Exadata):
Oracle® Exadata Database Machine Extending and Multi-Rack Cabling Guide
12c Release 1 (12.1)

IBM Bluemix - Spring Boot Elasticsearch Repositories demo

Pas Apicella - Sat, 2015-10-10 05:51
I decided to take the Elasticsearch repositories, part of the Spring Data project, for a test drive. The Spring Boot Elasticsearch repositories are described in the link below.


In this example I create a basic Spring Boot application using Elasticsearch, Web, and Rest Repositories, along with Thymeleaf/Bootstrap for the view pages. The code for this is on GitHub, which also provides the "Deploy to Bluemix" button to deploy your own instance of this application into your own Bluemix account.


Like all Spring Data repositories, you can create an interface and be given basic CRUD operations on the Elasticsearch document, as shown below.

package pas.au.ibm.bluemix.elastic;

import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

import java.util.List;

public interface EmployeeRepository extends ElasticsearchRepository<Employee, String>
{
    public List<Employee> findByFirstNameContaining(String firstName);
}

package pas.au.ibm.bluemix.elastic;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

@Document(indexName = "employee", type = "employee", shards = 1, replicas = 0, refreshInterval = "-1")
public class Employee
{
    @Id
    private String id;
    private String firstName;
    private String lastName;
    private String job;
    private int deptNo;

    public Employee()
    {
    }

    // getters and setters omitted for brevity
}

If you don't have an IBM DevOps Jazzhub account, the manifest.yml can be used if you clone the project and compile it using Maven with "mvn package". Be sure to alter the application name/host to a name that is unique within Bluemix.

applications:
 - name: pas-sb-elastic
   memory: 450M
   path: ./target/SpringBootElasticSearch-0.0.1-SNAPSHOT.jar
   instances: 1
   host: pas-sb-elastic
   buildpack: java_buildpack
   domain: mybluemix.net

Example Deployment

pasapicella@pas-macbook-pro:~/ibm/DemoProjects/spring-starter/jazzhub/SpringBootElasticSearch$ cf push
Using manifest file /Users/pasapicella/ibm/DemoProjects/spring-starter/jazzhub/SpringBootElasticSearch/manifest.yml

Creating app pas-sb-elastic in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...

Using route pas-sb-elastic.mybluemix.net
Binding pas-sb-elastic.mybluemix.net to pas-sb-elastic...

Uploading pas-sb-elastic...
Uploading app files from: /Users/pasapicella/ibm/DemoProjects/spring-starter/jazzhub/SpringBootElasticSearch/target/SpringBootElasticSearch-0.0.1-SNAPSHOT.jar
Uploading 1M, 133 files
Done uploading

Starting app pas-sb-elastic in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (35M)
-----> Java Buildpack Version: v3.0 | https://github.com/cloudfoundry/java-buildpack.git#3bd15e1
-----> Downloading Open Jdk JRE 1.8.0_60 from https://download.run.pivotal.io/openjdk/trusty/x86_64/openjdk-1.8.0_60.tar.gz (2.7s)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.7s)
-----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.10.0_RELEASE.jar (0.9s)

-----> Uploading droplet (79M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


App pas-sb-elastic was started using this command `SERVER_PORT=$PORT $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar -Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh -Xmx329386K -Xms329386K -XX:MaxMetaspaceSize=64M -XX:MetaspaceSize=64M -Xss975K org.springframework.boot.loader.JarLauncher`

Showing health and status for app pas-sb-elastic in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...

requested state: started
instances: 1/1
usage: 450M x 1 instances
urls: pas-sb-elastic.mybluemix.net
last uploaded: Sat Oct 10 10:20:52 UTC 2015
stack: cflinuxfs2
buildpack: java_buildpack

     state     since                    cpu    memory           disk           details
#0   running   2015-10-10 09:22:42 PM   0.3%   425.1M of 450M   158.1M of 1G

The application is currently deployed and is accessible at the URL below.


More Information


Categories: Fusion Middleware

sftp failure due to newline character difference between windows and unix.

Vikram Das - Fri, 2015-10-09 21:36
Recently I spent almost a full day struggling to work out why an sftp connection would not work without a password after setting up ssh equivalence. The keys were correct, the permissions on the directories were correct, and the authorized_keys file looked OK. I copied the authorized_keys file of another account that was working fine, and when I replaced the original authorized_keys with it (after taking a backup), it started working. So I proceeded to compare the contents in a hex editor.

On the left side you have the authorized_keys file created in Windows.
On the right side you have the same authorized_keys file created in Unix.

If you look at the ends of the lines, the Windows file shows CR LF, whereas the Unix one shows LF.

This difference is well described in the Wikipedia article on the newline character.

The one mistake I had made this time was creating the authorized_keys file in Windows Notepad, as I was teaching a developer how to create one. Once I used vi on Unix to create the authorized_keys file and pasted the same ssh key, sftp started working without prompting for a password. I knew that Windows/DOS and Unix have different newline characters; however, I was not able to apply that knowledge until I compared the files in a hex editor.
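You don't need a GUI hex editor to see (or fix) this from the command line; a small sketch, with an illustrative key and file names:

```shell
# A key line saved from Windows Notepad ends in CR LF (\r\n)
printf 'ssh-rsa AAAAB3NzaC1yc2E user@host\r\n' > authorized_keys_win

od -c authorized_keys_win | tail -2    # shows \r \n at the end of the line

# Strip the carriage returns to get Unix LF-only line endings
tr -d '\r' < authorized_keys_win > authorized_keys

od -c authorized_keys | tail -2        # now the line ends with \n only
```

Where it is installed, dos2unix does the same job as the tr command above.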

Whenever a techie gets to the root cause of a problem, there is a deep sense of satisfaction. I am glad I got the opportunity to troubleshoot and fix this issue by getting to its root cause.
Categories: APPS Blogs

The Support Stars Bar at Oracle OpenWorld - Live Help, Demos, and More From Some of Oracle's Best and Brightest

Joshua Solomin - Fri, 2015-10-09 11:54

Come to the Oracle Support Stars Bar at Oracle OpenWorld and get the answers you need from Oracle Support experts—across all Oracle product lines.

Visit us in Moscone West, Booth 3361 and learn about new features and tools that will help make your job simpler.

  • Find Answers: Ask the Stars of Oracle Support your toughest questions.
  • Win a Prize: Attend a 10-minute mini-briefing on a specialized topic for a chance to win.
  • Get Proactive: Discover powerful new proactive support tools.
  • Power Up: Maximize availability and performance with Advanced Customer Support.
Stars Bar Hours

Moscone West Exhibition Hall, Booth 3361

Monday, October 26: 10:15 a.m. to 6:00 p.m. PT
Tuesday, October 27: 10:15 a.m. to 6:00 p.m. PT
Wednesday, October 28: 10:15 a.m. to 4:15 p.m. PT

See the current Stars Bar Mini-briefing Session Schedule here.

Find Answers
Our product experts are here to help. We will have senior support engineers on hand spanning all major Oracle product areas, from systems and database products to middleware, applications, and Oracle Cloud.
Get Proactive
Discover powerful new proactive support tools, services and best practices to help you prevent problems, resolve issues more quickly, and upgrade more smoothly. Ask us about Oracle Platinum Services, Oracle Auto Service Request, Oracle Upgrade Advisors, the My Oracle Support Accreditation Program, and more.
Win a Prize
While at the Stars Bar attend a 10-minute mini-briefing for your chance to win! The briefings cover a wide variety of topics, presenting valuable tips and best practices directly from Oracle Support experts. You are sure to leave with great ideas that will make your life easier and help you get the most from your Oracle products.
Power Up
Take your support to the next level. Stop by the Oracle Advanced Customer Support table where you can talk to experts about best practices and specialized services for your mission-critical environments.

Orphan Table Rows ... ugh!

Tim Dexter - Fri, 2015-10-09 10:57

This week, orphaned table rows and how to avoid them.

It's a bit more subtle than rows breaking across a page border, and the solution is a doozy!

I'm using another video to demonstrate because:

  1. I don't have to type and grab screen shots, even though I have one above
  2. It's faster and more easily understood, even with my umming and erring English.
  3. I'm hip and happening and video help is the future, kids!
  4. You get to hear my Southern (England) drawl; a great sleep aid for insomniacs!

Here it is. You might want to 'fullscreen' it. Enjoy!

Categories: BI & Warehousing

generate safe passwords

Laurent Schneider - Fri, 2015-10-09 08:29

This is probably ambitious and I’ll start with a disclaimer, there is no such thing.

But OK, we know that system/manager isn't one.

Apart from system/manager, there are hundreds of weak passwords that are commonly used and easy to guess.

On your database server, after a few tries, the account is locked. And maybe the attacker has been detected by then.

So the worst passwords are the default passwords and passwords like oracle.

To enforce good passwords, we have verify functions, like ora12c_strong_verify_function in 12c, which checks for mixed case, special characters, etc. One may prefer to write one's own and not disclose exactly what it checks.
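For reference, a verify function takes effect once it is attached to a profile; a minimal sketch using the 12c function mentioned above (catpvf.sql must have been run so that the function exists):

```sql
-- Enforce the strong verifier for every user on the DEFAULT profile
ALTER PROFILE default LIMIT
  PASSWORD_VERIFY_FUNCTION ora12c_strong_verify_function;
```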

That function in rdbms/admin states "The maximum length of any DB User password is 128 bytes.", but it is 30 characters in most cases.

If you have a failed login attempt limit of 10, choosing "eleven" as a password does not make it safe. If the attacker gets hold of the user metadata, you are screwed in no time. In Oracle 4, it's clear text. In 7-10, it's a doubled-DES, unsalted encryption with a fixed, disclosed key; there a dictionary attack takes milliseconds, and a 6-character password falls in under a second. It got better in 11, where SHA-1 could take weeks to years for an 8-character password, depending on its complexity. In 12c, generating a hash costs lots of CPU cycles, so it is no longer possible to test millions of passwords per second, even with the strongest hardware.

But to get a good password it is recommended and often required to use digit / letters / special signs / mixed case and no dictionary word.

I have made a small password generator for my readers using dbms_random.string, which generates a pseudorandom string. It would be best to use the cryptographically secure dbms_crypto.randombytes, but then you must still turn the result into a password that you can type. It should also be possible to use Unicode if you like. And depending on where you are going to use it, it is sometimes safer not to use signs like * or ' because, who knows, your password may produce an error and end up in a logfile.
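The trade-off between the two packages, sketched (both are standard Oracle packages; dbms_crypto needs an EXECUTE grant, and the output of course varies per run):

```sql
-- Pseudorandom but directly typeable: 'P' means any printable characters
SELECT dbms_random.string('P', 10) FROM dual;

-- Cryptographically secure, but returns RAW bytes that still need
-- mapping onto a typeable alphabet before they can be a password
SELECT dbms_crypto.randombytes(10) FROM dual;
```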

Okay, I wrote a small function that generates a 10-character string and verifies it with the 12c strong verifier, looping until one is good enough.

The chance that a random password is "manager" is pretty low, but it is probably best to check that you got not only a safe random string, but also a strong one:

-- @?/rdbms/admin/catpvf
-- (function name is illustrative; the fragment's wrapper is reconstructed)
CREATE OR REPLACE FUNCTION gen_password (username     VARCHAR2,
                                         old_password VARCHAR2 DEFAULT NULL)
   RETURN VARCHAR2
IS
   p   VARCHAR2 (30);
   c   BOOLEAN := FALSE;
   i   NUMBER := 0;
BEGIN
   WHILE NOT c AND i < 1000 LOOP
      p := DBMS_RANDOM.string ('P', 10);
      i := i + 1;
      BEGIN
         c := sys.ora12c_strong_verify_function (username, p, old_password);
      EXCEPTION
         WHEN OTHERS THEN NULL;  -- the verifier raises on weak passwords
      END;
   END LOOP;
   RETURN p;
END;
/



This could well be a good initial expired password for your user. Later the user will find something easier to remember.

