Feed aggregator

What AWR isn’t telling you

Dominic Brooks - Fri, 2015-10-16 14:08

It is well known that AWR, and Statspack before it, takes snapshots of V$ views (or rather of the underlying objects) to produce the data in AWR.

It is also well known that, when considering sql and its statistics in the shared pool, if something big-hitting happens but the big-hitter is no longer in the shared pool by the time of the snapshot, then it can't be recorded in your AWR picture of activity.

But like many things supposedly well-known, it can still come back and remind you that you’ve forgotten or overlooked this behaviour.

Here is a little illustration which happened earlier this week.

This week I’ve been looking closely at activity which happens on a particular DB between 16:00 and 17:00 because the IO subsystem is showing signs of stress and the timings for ‘Average Synchronous Single-Block Read Latency’ (V$SYSMETRIC_SUMMARY / DBA_HIST_SYSMETRIC_SUMMARY) have pushed above our amber warning levels.
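
For reference, a quick way to eyeball the current value of that metric over the recent window (a minimal sketch; the 20ms amber threshold is our own, not Oracle's):

SELECT metric_name
,      ROUND(average,2) avg_ms
FROM   v$sysmetric_summary
WHERE  metric_name = 'Average Synchronous Single-Block Read Latency';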

Don’t get me started but our amber level for a single block read is 20ms!
Way too high for my liking as a sustained average, but that is apparently the SLA from the SAN.

Why do we have such an alert?
The purpose is two-fold.

  1. It can sometimes be a warning that some of our IO-sensitive batch work might show some performance degradations, threatening SLAs
  2. In the past, it has been an effective warning that a sql statement executed concurrently in multiple threads from the applications has switched to a "bad" execution plan whose impact is system-wide (normally heavy concurrent direct path reads).

So… I was hovering around this system during the relevant timeframe looking at what was going on.

And I observed some heavy hitting queries running in parallel originating from another system (i.e. not my application).

There were two executions of one sql id (0cu2sm062jutc) executing concurrently with a single execution of another sql statement.

The top-level RTSM execution metrics for these three executions were all very similar to this:

Each was executing for about 10 minutes, doing a lot of physical direct path reads in parallel (38GB) and accumulating over 1 hour of parallel processing time.

 Status              :  DONE (ALL ROWS)          
 Instance ID         :  2                        
 SQL ID              :  0cu2sm062jutc            
 SQL Execution ID    :  33554432                 
 Execution Started   :  10/13/2015 16:03:35      
 First Refresh Time  :  10/13/2015 16:03:37      
 Last Refresh Time   :  10/13/2015 16:13:19      
 Duration            :  584s                     
 Module/Action       :  JDBC Thin Client/-       
 Program             :  JDBC Thin Client         
 Fetch Calls         :  257                      

Global Stats
===================================================================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency | Cluster  |  Other   | Fetch | Buffer | Read  | Read  | Write | Write |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) | Waits(s) | Calls |  Gets  | Reqs  | Bytes | Reqs  | Bytes |
===================================================================================================================================
|    4041 |     252 |     3782 |        0.06 |        1.75 |     0.97 |     3.95 |   257 |     6M | 75051 |  38GB |  1070 | 125MB |
===================================================================================================================================
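
(For reference, a report like the one above can be pulled with DBMS_SQLTUNE.REPORT_SQL_MONITOR while the statement is still being monitored; a minimal sketch using the sql id above:)

SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '0cu2sm062jutc', type => 'TEXT') report
FROM   dual;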

And then whilst I was fiddling with various queries against V$SQL the queries in question disappeared from V$SQL.

No big surprise of course; stuff is liable to be released from the library cache whenever Oracle sees fit.

But then the remembrance penny hit home and I was left thinking that it was no wonder I couldn't see this activity when looking retrospectively at this timeframe via AWR.

But the cumulative execution time above is larger than that of the number 3 SQL statement in AWR ordered by elapsed time … so I'm not making this up. It's significant.

This got me thinking – how can we try to identify heavy-hitting SQL which is being missed by AWR – ASH surely? – and what can we do to help – COLORED SQL?

Here is my first attempt at such an ASH query, trying to find queries which have taken longer than 10 minutes (600 seconds) in a single execution within a single snapshot period but which aren't in AWR.

SELECT * 
FROM   (SELECT h.dbid
        ,      h.instance_number
        ,      h.snap_id
        ,      h.sql_id
        ,      h.sql_exec_id
        ,      h.sql_exec_start
        ,      h.qc_instance_id||'::'||h.qc_session_id||'::'||h.qc_session_serial# px_details
        ,      COUNT(*)
        ,      COUNT(DISTINCT h.session_id)
        --,      MIN(h.snap_id) min_snap
        --,      MAX(h.snap_id) max_snap
        ,      MIN(h.sample_time)
        ,      MAX(h.sample_time)
        ,    ((EXTRACT(DAY    FROM (MAX(h.sample_time) - MIN(h.sample_time)))*86400)
            + (EXTRACT(HOUR   FROM (MAX(h.sample_time) - MIN(h.sample_time)))*3600)
            + (EXTRACT(MINUTE FROM (MAX(h.sample_time) - MIN(h.sample_time)))*60)
            + (EXTRACT(SECOND FROM (MAX(h.sample_time) - MIN(h.sample_time))))) duration
        FROM   dba_hist_active_sess_history h
        ,      dba_hist_snapshot            s
        WHERE  h.dbid            = s.dbid
        AND    h.snap_id         = s.snap_id
        AND    h.instance_number = s.instance_number
        AND    sql_id           IS NOT NULL
        AND    sql_exec_id      IS NOT NULL
        AND    NOT EXISTS (SELECT 1
                           FROM   dba_hist_sqlstat st
                           WHERE  st.dbid            = h.dbid
                           AND    st.snap_id         = h.snap_id --BETWEEN xxx.min_snap AND xxx.max_snap
                           AND    st.instance_number = h.instance_number
                           AND    st.sql_id          = h.sql_id)
        GROUP BY 
               h.dbid
        --
        ,      h.snap_id
        --
        ,      h.instance_number
        ,      h.sql_id
        ,      h.sql_exec_id
        ,      h.sql_exec_start
        ,      h.qc_instance_id
        ,      h.qc_session_id
        ,      h.qc_session_serial#) xxx
WHERE  duration > 600
ORDER BY snap_id DESC, duration DESC;

The main problem with this query is that DURATION does not tell us how long the query has been active in the database.

We could have a query which has been active all that time but was doing relatively little in the DB.

I had a quick look (again) at TM_DELTA_TIME and TM_DELTA_DB_TIME but they are still unreliable – in some of the rows it looks ok, in others not.
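
For illustration, the sort of cross-check I mean looks like this (TM_DELTA_DB_TIME being the DB time accumulated over the TM_DELTA_TIME microseconds of wall clock around each sample; sql id from above):

SELECT sql_id
,      ROUND(SUM(tm_delta_db_time)/1000000) db_secs
,      ROUND(SUM(tm_delta_time)/1000000)    wall_secs
FROM   dba_hist_active_sess_history
WHERE  sql_id = '0cu2sm062jutc'
GROUP BY sql_id;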

Let’s forget that and pretend that the big ASH query above works ok.

Perhaps we can add a filter on COUNT(*) above some threshold, because otherwise this currently gives me hundreds of statements in my production database.
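
A minimal sketch of that tweak (the sample_count alias is my own addition; each row retained in DBA_HIST_ACTIVE_SESS_HISTORY represents roughly ten seconds of activity, so 60 samples approximates 600 seconds of DB time):

,      COUNT(*) sample_count      -- in the inner query, in place of the bare COUNT(*)
...
WHERE  duration > 600
AND    sample_count > 60          -- roughly 600 seconds of sampled DB time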

That gives me a token 166 statements over the past 30 days.

And that’s a problem because my intention was to use this to identify SQL statements to add to AWR via ADD_COLORED_SQL.
And that has a limit of 100 statements.
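
(For reference, coloring a statement is a one-liner; a minimal sketch using the sql id from earlier:)

BEGIN
   DBMS_WORKLOAD_REPOSITORY.ADD_COLORED_SQL(sql_id => '0cu2sm062jutc');
END;
/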

Ah… but no, the other penny has just dropped whilst writing this…

If I color this SQL, it will make no difference. That just means AWR will record the SQL if it's there.

But if it’s not in the shared pool at snapshot time, it’s not in the shared pool so it’s not in AWR.

Duh!

It’s almost as if we need a mechanism whereby, once a sql statement has breached a certain threshold, it won’t be allowed to be flushed until MMON has written it to AWR.

No, that’s not right. That would present other problems.

I could shorten the AWR snapshot interval but I’m not convinced.
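
(If you did want to go that route, it's a one-liner too; a minimal sketch, with the interval specified in minutes:)

BEGIN
   DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 15);
END;
/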

I think this is an oft-overlooked yet significant shortcoming.

So, for the moment at least, I’m left with what I think is a problem and I can’t think of a good solution… can you?

Or are you thinking mountain vs molehill?


Accessibility Guide Updated

PeopleSoft Technology Blog - Fri, 2015-10-16 12:38

We've updated the PeopleSoft Accessibility Guidelines for 8.54.  This guide is for people developing with PeopleTools who want to ensure that their designs conform to current accessibility standards.  The PeopleSoft Applications development teams employ these same guidelines.  You can download them from My Oracle Support:  https://mosemp.us.oracle.com/epmos/faces/DocumentDisplay?id=1941471.1   This document applies to the new Fluid User Interface.

PeopleSoft publishes our VPATs (accessibility conformance declarations) here.

General accessibility guidelines for PeopleSoft are documented here.


About Bugs

Floyd Teter - Thu, 2015-10-15 16:53
Been a few weeks since I last checked in.  Onboarding with Oracle has been like drinking from a data firehose, so I've been a bit pressed for time...

As I write this, I'm sitting out on my backyard patio right around sundown.  It's been unseasonable warm here in Utah of late, so the flying bugs are out in abundance.  While they're a bit irritating, they're not the type of bugs I have on my mind this evening.  I'm thinking more about software bugs.

I define a software bug as a flaw that causes said software to perform differently than designed or intended.  Simple definition for my simple mind, I suppose.

One of the eye-opening insights in having an insider's perspective at Oracle has been looking at software bugs.  Not the volume of bugs so much as the type of bugs.  If I apply my simple definition of a software bug, over half the logged bugs are not really bugs at all.  The software is working as designed or intended, but we're not seeing the expected result.  Could be any one of a number of reasons:  applying the software incorrectly due to lack of knowledge, desire for features not provided, violation of business rules, data quality flaws in converted or interfaced data, errors in writing data...the list goes on and on.

Wading through this data, I see a couple of trends.  First, it's pretty obvious that enterprise software vendors could do a better job of educating and enabling customers and partners on how their software works...especially from a systems engineering perspective.  Second, the industry needs better tools for evaluating data quality prior to converting or interfacing data between data sources...a symptom of the old "garbage in, garbage out" rule.

If we could improve in this area, think of the reduced workload for application developers.  Keep in mind that most enterprise software application development teams are dealing with at least four application versions simultaneously:  the previously released version(s) in the field, the latest release in the field, the next release being built, and the design work for the release after the version currently being built.  Anything we can do to alleviate the bug resolution workload allows vendors to apply that extra bandwidth in ways that will shorten release cycle times...something everybody wants.

I'm sure there are more trends to add to the list.  You have a contribution to make?  The comments await.

Oracle Priority Support Infogram for 15-OCT-2015

Oracle Infogram - Thu, 2015-10-15 16:10

Oracle OpenWorld

RDBMS

Loading blobs without a spoon at Kris’ Blog: SQLCL - Blob loading ALL the files

Fusion

Release 10: Understanding RESTful Services in HCM Cloud, from Fusion Applications Developer Relations.

Java

Available: NetBeans IDE 8.1 RC 2, from Geertjan’s Blog.


Mobile Computing


From the same source: Oracle MAF 2.2 New Features

Big Data

Big Data SQL 2.0 - Now Available, from The Data Warehouse Insider.

Oracle Utilities

OUAF 4.3.0.1.0 Feature: Required Field Indicator, from The Shorten Spot, along with a couple of other good postings:



Solaris

Solaris 11.2 SysAdmin Handbook is now available, from Dr Cloud's Flying Software Circus.

And a blog on Solaris Studio and related topics I haven’t seen before, All in a day’s work, had several hands-on postings:

Ops Center


EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

Couchbase 4.0 and related subjects

Curt Monash - Thu, 2015-10-15 10:17

I last wrote about Couchbase in November, 2012, around the time of Couchbase 2.0. One of the many new features I mentioned then was secondary indexing. Ravi Mayuram just checked in to tell me about Couchbase 4.0. One of the important new features he mentioned was what I think he said was Couchbase’s “first version” of secondary indexing. Obviously, I’m confused.

Now that you’re duly warned, let me remind you of some aspects of the Couchbase timeline.

  • 2 corporate name changes ago, Couchbase was organized to commercialize memcached. memcached, of course, was internet companies’ default way to scale out short-request processing before the rise of NoSQL, typically backed by manually sharded MySQL.
  • Couchbase’s original value proposition, under the name Membase, was to provide persistence and of course support for memcached. This later grew into a caching-oriented pitch even to customers who weren’t already memcached users.
  • A merger with the makers of CouchDB ensued, with the intention of replacing Membase’s SQLite back end with CouchDB at the same time as JSON support was introduced. This went badly.
  • By now, however, Couchbase sells for more than distributed cache use cases. Ravi rattled off a variety of big-name customer examples for system-of-record kinds of use cases, especially in session logging (duh) and also in travel reservations.
  • Couchbase 4.0 has been in beta for a few months.

Technical notes on Couchbase 4.0 — and related riffs :) — start:

  • There’s a new SQL-like language called N1QL (pronounced like “nickel”). I’m hearing a lot about SQL-on-NoSQL these days. More on that below.
  • “Index”, “data” and “query” are three different services/tiers.
    • You can run them all on the same nodes or separately. Couchbase doesn’t have enough experience yet with the technology to know which choice will wind up as a best practice.
    • I’m hearing a lot about heterogeneous-node/multi-tier DBMS architectures these days, and would no longer stand by my 2009 statement that they are unusual. Other examples include Oracle Exadata, MySQL, MongoDB (now that it has pluggable storage engines), MarkLogic, and of course the whole worlds of Hadoop and Spark.
  • To be clear — the secondary indexes are global, and not tied to the same nodes as the data they index.
  • There’s a new back end called ForestDB, but if I understood correctly, it’s used just for the indexes, not for the underlying data.
  • ForestDB represents Couchbase indexes in something that resembles b-trees, but also relies on tries. Indeed, if I’m reading the relevant poster correctly, it’s based on a trie of b-trees.
  • In another increasingly common trend, Couchbase uses Bloom filters to help decide which partitions to retrieve for any particular query.

Up to a point, SQL-on-NoSQL stories can be fairly straightforward.

  • You define some kind of a table,* perhaps in a SQL-like DDL (Data Description Language).
  • SELECT, FROM and WHERE clauses work in the usual way.
  • Hopefully, if a column is going to have a lot of WHERE clauses on it, it also has an index.

For example, I think that’s the idea behind most ODBC/JDBC drivers for NoSQL systems. I think it’s also the idea behind most “SQL-like” languages that NoSQL vendors ship.

*Nobody I talk to about this ever wants to call it a “view”, but it sure sounds like a view to me — not a materialized view, of course, but a view nonetheless.

JOIN syntax can actually be straightforward as well under these assumptions. As for JOIN execution, Couchbase pulls all the data into the relevant tier and does nested-loop execution there. My new clients at SequoiaDB have a similar strategy, by the way, although in their case there’s a hash join option as well.

But if things stopped there, they would miss an important complication: NoSQL has nested data. I.e., a value can actually be an array, whose entries are arrays themselves, and so on. That said, the “turtles all the way down” joke doesn’t quite apply, because at some point there are actual scalar or string values, and those are the ones SQL wants to actually operate on.

Most approaches I know of to that problem boil down to identifying particular fields as table columns, with or without aliases/renaming; I think that’s the old Hadapt/Vertica strategy, for example. Couchbase claims to be doing something a little different however, with a SQL-extending operator called UNNEST. Truth be told, I’m finding the N1QL language reference a bit terse, and haven’t figured out what the practical differences vs. the usual approach are, if any. But it sounds like there may be some interesting ideas in there somewhere.
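
For illustration, here is a hedged sketch of what an UNNEST query looks like in N1QL, going by my reading of the language reference (the bucket and field names are made up):

SELECT o.orderId, i.productId
FROM   orders AS o
UNNEST o.lineItems AS i
WHERE  i.quantity > 1;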

Quick Cross-Period AWR Comparison

Dominic Brooks - Thu, 2015-10-15 04:43

Here’s a query which I find useful in order to have a very quick comparison across AWR snapshots of the high level time model statistics.
The numbers should match those in the associated section in the AWR report.

If you are obsessive, compulsive even, about tuning then you may see some blips here and there which then encourage you to dive into the AWR detail for that snapshot.

Or quite often I get in in the morning and there might be an email about slow overnight processes and generic “the database has been slow” enquiries, and before I start hunting around to prove or disprove the allegations, this can be a useful frame of reference.

WITH subq_snaps AS
(SELECT dbid                dbid
 ,      instance_number     inst
 ,      snap_id             e_snap
 ,      lag(snap_id) over (partition by instance_number, startup_time order by snap_id) b_snap
 ,      TO_CHAR(begin_interval_time,'D') b_day
 ,      TO_CHAR(begin_interval_time,'DD-MON-YYYY HH24:MI') b_time
 ,      TO_CHAR(end_interval_time,'HH24:MI')   e_time
 ,    ((extract(day    from (end_interval_time - begin_interval_time))*86400)
     + (extract(hour   from (end_interval_time - begin_interval_time))*3600)
     + (extract(minute from (end_interval_time - begin_interval_time))*60)
     + (extract(second from (end_interval_time - begin_interval_time)))) duration
 FROM   dba_hist_snapshot)
SELECT ss.inst
,      ss.b_snap
,      ss.e_snap
,      ss.b_time
,      ss.e_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END)/1000000/60,2),'999999990.99')                                  db_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END)/(ss.duration*1000000),1),'999999990.99')        aas
,      (SELECT round(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = ss.dbid
        AND    sm.snap_id         = ss.e_snap
        AND    sm.instance_number = ss.inst
        AND    sm.metric_name     = 'Average Synchronous Single-Block Read Latency'
        AND    sm.group_id        = 2)                                                                                                                   assbl
,      (SELECT round(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = ss.dbid
        AND    sm.snap_id         = ss.e_snap
        AND    sm.instance_number = ss.inst
        AND    sm.metric_name     = 'Host CPU Utilization (%)'
        AND    sm.group_id        = 2)                                                                                                                   cpu_util
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB CPU' THEN em.value - bm.value END)/1000000,2),'999999990.99')                                      db_cpu
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'sql execute elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                    sql_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'PL/SQL execution elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')               plsql_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'parse time elapsed' THEN em.value - bm.value END)/1000000,2),'999999990.99')                          parse_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'failed parse elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                   failed_parse
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'hard parse (sharing criteria) elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')  hard_parse_sharing
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'RMAN cpu time (backup/restore)' THEN em.value - bm.value END)/1000000,2),'999999990.99')              rman_cpu
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'connection management call elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')     connection_mgmt
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'sequence load elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                  sequence_load
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'DB CPU' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           db_cpu_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'sql execute elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           sql_time_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'PL/SQL execution elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           plsql_time_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'parse time elapsed' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           parse_time_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'failed parse elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           failed_parse_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'hard parse (sharing criteria) elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           hard_parse_sharing_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'RMAN cpu time (backup/restore)' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           rman_cpu_perc
FROM  subq_snaps              ss
,     dba_hist_sys_time_model em
,     dba_hist_sys_time_model bm
WHERE bm.dbid                   = ss.dbid
AND   bm.snap_id                = ss.b_snap
AND   bm.instance_number        = ss.inst
AND   em.dbid                   = ss.dbid
AND   em.snap_id                = ss.e_snap
AND   em.instance_number        = ss.inst
AND   bm.stat_id                = em.stat_id
GROUP BY
       ss.dbid
,      ss.inst
,      ss.b_day
,      ss.b_snap
,      ss.e_snap
,      ss.b_time
,      ss.e_time
,      ss.duration
--HAVING b_day NOT IN (6,7)
--AND    inst = 2
--AND b_snap = 18673
--AND    e_time = '17:00'
ORDER BY b_snap DESC;

Here is a similar query for comparing the various IO metrics.
Again, the numbers should match those in AWR.

WITH subq_snaps AS
(SELECT dbid                dbid
 ,      instance_number     inst
 ,      snap_id             e_snap
 ,      lag(snap_id) over (partition by instance_number, startup_time order by snap_id) b_snap
 ,      TO_CHAR(begin_interval_time,'D') b_day
 ,      TO_CHAR(begin_interval_time,'DD-MON-YYYY HH24:MI') b_time
 ,      TO_CHAR(end_interval_time,'HH24:MI')   e_time
 ,    ((extract(day    from (end_interval_time - begin_interval_time))*86400)
     + (extract(hour   from (end_interval_time - begin_interval_time))*3600)
     + (extract(minute from (end_interval_time - begin_interval_time))*60)
     + (extract(second from (end_interval_time - begin_interval_time)))) duration
 FROM   dba_hist_snapshot)
,    io_stats AS
(SELECT ss.*
 ,      bv.event_name
 ,      ev.time_waited_micro_fg - bv.time_waited_micro_fg time_waited_micro
 ,      ev.total_waits_fg       - bv.total_waits_fg       waits
 FROM   subq_snaps            ss
 ,      dba_hist_system_event bv
 ,      dba_hist_system_event ev
 WHERE  bv.dbid                   = ss.dbid
 AND    bv.snap_id                = ss.b_snap
 AND    bv.instance_number        = ss.inst
 AND    bv.event_name            IN ('db file sequential read','direct path read','direct path read temp','db file scattered read','db file parallel read')
 AND    ev.dbid                   = ss.dbid
 AND    ev.snap_id                = ss.e_snap
 AND    ev.instance_number        = ss.inst
 AND    ev.event_id               = bv.event_id)
SELECT io.dbid
,      io.inst
,      io.b_snap
,      io.e_snap
,      io.b_time
,      io.e_time
,      (SELECT ROUND(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = io.dbid
        AND    sm.snap_id         = io.e_snap
        AND    sm.instance_number = io.inst
        AND    sm.metric_name     = 'Average Synchronous Single-Block Read Latency'
        AND    sm.group_id        = 2) assbl
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) single_waits
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) multi_waits
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) prefch_wait
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) direct_waits
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) temp_waits
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_single
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) single_secs_total
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) single_avg
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_multi
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) multi_secs_total
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) multi_avg
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_prefch
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) prefch_secs_total
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) prefch_avg
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_direct
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) direct_secs_total
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) direct_avg
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_temp
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) temp_secs_total
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) temp_avg
FROM   io_stats io
GROUP BY
       io.dbid
,      io.inst
,      io.b_day
,      io.b_snap
,      io.e_snap
,      io.b_time
,      io.e_time
,      io.duration
--HAVING b_day NOT IN (6,7)
--AND    inst = 2
--AND b_snap = 18673
--AND    e_time = '17:00'
ORDER BY b_snap DESC;

Finally, here is one of the queries I use the most for quick checks – comparing top SQL in AWR across periods:

SELECT x.*, (SELECT sql_text from dba_hist_sqltext t where t.sql_id = x.sql_id and rownum = 1) txt
FROM (
SELECT sn.snap_id
,      TO_CHAR(sn.end_interval_time,'DD-MON-YYYY HH24:MI') dt
,      st.sql_id
,      st.instance_number
,      st.parsing_schema_name
,      st.plan_hash_value
,      SUM(st.fetches_delta) fch
,      SUM(rows_processed_delta) rws
,      SUM(executions_delta)     execs
,      ROUND(SUM(elapsed_time_delta)/1000/1000)   elp
,      ROUND(SUM(elapsed_time_delta)/1000/1000/NVL(NULLIF(SUM(executions_delta),0),1),2)   elpe
,      ROUND(SUM(cpu_time_delta)/1000/1000)       cpu
,      SUM(buffer_gets_delta)    gets
,      ROUND(SUM(iowait_delta)/1000/1000)         io
,      ROUND(SUM(clwait_delta)/1000/1000)         cl
,      ROUND(SUM(ccwait_delta)/1000/1000)         cc
,      ROUND(SUM(apwait_delta)/1000/1000)         ap
,      ROUND(SUM(plsexec_time_delta)/1000/1000)   pl
,      ROUND(SUM(disk_reads_delta))         disk_reads
,      ROUND(SUM(direct_writes_delta))        direct_writes
,      ROW_NUMBER() over (PARTITION BY sn.dbid, sn.snap_id, st.instance_number
                          ORDER BY SUM(elapsed_time_delta) desc) rn
FROM   dba_hist_snapshot sn
,      dba_hist_sqlstat  st
WHERE  st.dbid            = sn.dbid
AND    st.snap_id         = sn.snap_id
AND    sn.instance_number = st.instance_number
GROUP BY
       sn.dbid
,      sn.snap_id
,      sn.end_interval_time
,      st.sql_id
,      st.instance_number
,      st.parsing_schema_name
,      st.plan_hash_value
) x
WHERE rn <= 5
ORDER by snap_id DESC, instance_number, rn;

Let me know if you spot any errors or anomalies or obvious improvements / additions.


Where’s my TNS connection? SHOW TNS

Barry McGillin - Thu, 2015-10-15 04:19
Lots of users have been scratching their heads as to which tnsnames.ora is being found and used when connecting to the database with SQL Developer and with SQLcl.

In the latest release we've added another new command.

SHOW TNS

What this will do is walk the locations where we look for tnsnames.ora and list these out in order.  Then it will tell you which one the tool will actually use and list the entries for you.

So, with nothing set, no ORACLE_HOME, no TNS_ADMIN, here's what you get.  TNS is going to look in your home directory for a file called tnsnames.ora or .tnsnames.


Now, if we have an ORACLE_HOME set, we'll look for $ORACLE_HOME/network/admin/tnsnames.ora


Further, if we set TNS_ADMIN, it will override ORACLE_HOME and go to that location as shown here.


Lastly, we'll come back to the User directory. If you have a tnsnames.ora there or a .tnsnames, it will override everything and this is what will be used.



Now, go ahead and make a connection.  You can then do another new command called

SHOW CONNECTION

which will show you how you are connected and what you are connected to.


Finally, you can get a look at which driver you are using for a connection using

SHOW JDBC

which will show something like this, detailing types, versions and the URL of the connection you have.


Scaling FLUID pages for iPhone 6

Javier Delgado - Thu, 2015-10-15 03:06
We are currently developing FLUID pages for a customer on PeopleSoft HCM 9.1. As they cannot benefit from the standard functionality delivered through Update Manager until they upgrade to PeopleSoft HCM 9.2, they have decided to provisionally implement FLUID through customisations.

When doing this, we identified an issue on the iPhone 6 whereby the FLUID pages were not scaling correctly:



As you see, the text is barely readable. After some research, we have identified that standard pages deal with this scaling issue by using the following PeopleCode (normally in the component PostBuild event):

Declare Function GetDefaultViewportSetting PeopleCode PTLAYOUT.FUNCLIB FieldFormula;
Declare Function SetViewport PeopleCode PTLAYOUT.FUNCLIB FieldFormula;

Local string &Viewport;
Local number &Pos;

&Viewport = GetDefaultViewportSetting();

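/* On iOS phones, append "minimal-ui" to the default viewport setting if it is not already present */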
If %Request.BrowserDeviceFormFactor = 0 And
      %Request.BrowserPlatformClass = "ios" Then
   &Pos = Find("minimal-ui", &Viewport);
   If &Pos = 0 Then
      &Viewport = &Viewport | ", minimal-ui";
   End-If;
End-If;

SetViewport(&Viewport);
AddMetaTag("format-detection", "telephone=no");

The page now displays in a much better format:




Opening of APEX R&D New Zealand

Dimitri Gielis - Thu, 2015-10-15 02:20
We're really excited to announce that, in addition to our Belgium office, we have opened an office in New Zealand too. This will allow us to cover multiple timezones and better serve the Australian and New Zealand markets. You can read more about it on Lino's blog.

 APEX R&D

Contact our team of experienced professionals for innovative, reliable and cost-effective Oracle Application Express solutions.

For any custom application development, training/coaching or consulting related to Oracle APEX technology, we would love to hear from you.

For more information please contact our Belgium or New Zealand office.

We look forward to working with you.

Categories: Development

node-oracledb 1.3.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Thu, 2015-10-15 01:24

Version 1.3 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM.

We kept the changes minimal in this release for several reasons. It has two small enhancements and a few bug fixes.

  • A new oracledb.oracleClientVersion attribute gives the version of the Oracle client libraries that node-oracledb is linked with.

    A connection.oracleServerVersion attribute gives the Oracle Database version used by the connection.

    These attributes are handy for code that needs to run in multiple environments. See examples/version.js for usage.

  • The major bug fix resolves some corruption with result.outBinds when calling PL/SQL blocks. This was sometimes causing a crash.

See CHANGELOG for the other changes.

Issues and questions about node-oracledb can be posted on GitHub.

node-oracledb installation instructions are here.

node-oracledb documentation is here.

Copy Data Management for Oracle Database with EMC AppSync and XtremIO

Kevin Closson - Wed, 2015-10-14 12:08

This is a quick blog entry to invite readers to view this little demonstration video I created. The topic is Copy Data Management in an Oracle Database environment. We all know the pains involved with the number of database copies needed in today’s Oracle environment. Well, how about technology with these characteristics:

  1. 100% space efficient. There is no need for any full-copy “donor” in this solution. You can create 8192 XtremIO Virtual Copies of volumes in an XtremIO array and there is no reduction in user-capacity at the storage level. For example, 512 copies of a 1TB volume with Oracle tablespaces in it takes exactly 1TB from the array.
  2. Self service. With EMC AppSync permissions can be set up so that developers can create their own copies, refresh their own copies and expire their own copies.
  3. Speed. AppSync copy operations such as creation and refresh are measured in seconds.
  4. Data Services. All XtremIO Virtual Copies enjoy data reduction services. So as users begin to make changes to their database copies the modified blocks are first treated with de-duplication and then compression.

You more than likely need XtremIO in any case. However, now it's also time to think about the ease of provisioning copies of Oracle databases to test/dev and other functions the XtremIO way.

It only takes minutes so please give this a view:


Filed under: oracle

Ramp Up with Support Services at Oracle OpenWorld

Chris Warticki - Wed, 2015-10-14 11:13
Ramp Up with Support Services at Oracle OpenWorld

With over 500 experts on hand to help our customers, Oracle Support Services will be front and center at Oracle OpenWorld 2015. Come to the Moscone Center and Palace Hotel October 25–29, and learn best practices to help optimize your Oracle technology and increase the value of your Oracle products and services.

If you or others in your organization will be attending Oracle OpenWorld this year, take a moment to review the available Support Services sessions, events, and demonstrations.

If You'll be Attending

Learn about the must-see conference sessions and networking events hosted by Oracle Support Services, including hands-on, interactive demos and training.

Sessions

Discover 50+ sessions that highlight the depth and breadth of Oracle's Support Services portfolio, and see how other customers are driving success with Oracle Support tools and resources.

  • Best practices for supporting and upgrading Oracle hardware and software, including how to utilize all the features and entitlements of Oracle Premier Support
  • Best practices, tips, and customer examples for how to leverage mission-critical support delivered by Oracle Advanced Customer Support
  • How customers are leveraging Oracle Consulting for faster adoption and ROI to help them succeed using the Oracle stack
  • How organizations are developing sharper, smarter, faster strategies which help them grow and meet today's changing business requirements with modern business platforms that drive innovation, create value, lower cost, and reduce risk
Oracle University Trainings

Explore 18 single-day, pre-conference training sessions offered by Oracle University on Sunday, October 25 covering popular Oracle technology topics, all taught by expert instructors.

See available training sessions on the Oracle University OpenWorld site.
Networking Events

If you're an Oracle Premier Support customer, come see us on Monday, October 26 at the sixth annual My Oracle Support Monday Mix. Relax and network with Oracle support engineers, support managers, and other Oracle customers starting at 6 p.m. PT. Find out more about the Monday Mix event and venue here.

Meet us in the Support Stars Bar and Mini-Briefing Center in Moscone West Exhibition Hall, booth 3361. Work one-on-one with some of Oracle's best to get answers to your support questions. Attend a 10-minute mini-briefing and get entered to win prizes!

DEMOgrounds

DEMOgrounds feature live demonstrations focusing on specific Oracle products and solutions across the Moscone Center and other event venues.

Based on your Oracle products and services, visit a DEMOgrounds area appropriate to you:

  • Oracle Database – Moscone South, booth 122
  • Oracle HCM and Fusion HCM – Palace Hotel, Sunset Court
  • Middleware – Moscone South, booth 106
  • My Oracle Support & Proactive Tools – Moscone West, booth 3161 (next to the Stars Bar)
  • Oracle PeopleSoft – Palace Hotel, Sunset Court
  • Server / Storage – Moscone South, booth 737

Other Featured Support Services Demonstrations

In addition to the DEMOgrounds, Oracle Support Services will be offering specialized informational demos around maximizing Oracle Support resources and tools during convention hall hours.

  • HCM – Fusion Applications: Maximize Your Investment in Oracle HCM Cloud Services Support – Palace Hotel, Sunset Court
  • ERP – Fusion Applications: Maximize Your Investment in Oracle Enterprise Resource Planning Cloud Services Support – Moscone West, Level 1 ERP Showcase
  • PaaS: Maximize Your Investment in Oracle Technology Cloud Services Support – Moscone South, Oracle Cloud Platform and Infrastructure Showcase
  • Platinum Services: Oracle Platinum Services: Extreme Support for Engineered Systems – Moscone North, Lower Lobby, Engineered Systems Showcase (at the bottom of the escalator)
  • Oracle Consulting: (Announced at the event) – Moscone West, ERP Showcase
  • Oracle Advanced Customer Support: (Announced at the event) – Moscone South, Oracle Database DEMOgrounds (area 122)
Additional Links

Like a white stone in the depths of a well...

Greg Pavlik - Tue, 2015-10-13 13:26
Like a white stone in the depths of a well...
Anna Akhmatova
working translation by Greg Pavlik

Like a white stone in the depths of a well
within me there lies one memory.
I can not - and do not - want to expel
this, my greatest joy and my agony.

I think that anyone who closely looks
can see this recollection reads
as harrowing sadness in a tragic book -
a warning, and a sign of need.

I know the gods forever strive
to wreck the body but cannot touch the mind;
assurance that you will forever live
as a memory I can’t leave behind.

Original:
Как белый камень в глубине колодца,
Лежит во мне одно воспоминанье.
Я не могу и не хочу бороться:
Оно - веселье и оно - страданье.

Мне кажется, что тот, кто близко взглянет
В мои глаза, его увидит сразу.
Печальней и задумчивее станет
Внимающего скорбному рассказу.

Я ведаю, что боги превращали
Людей в предметы, не убив сознанья,
Чтоб вечно жили дивные печали.
Ты превращен в мое воспоминанье.

Got Published in AUSOUG's Foresight Online Spring 2015

Pakistan's First Oracle Blog - Tue, 2015-10-13 01:47
AUSOUG's Foresight Online Spring 2015 Edition is the premier publication of the Australian Oracle User Group.

Following are highlights of this edition:

  • President's Message
  • DBA Article: Automated Testing of Oracle BPM Suite 12c Processes with SOAP UI - Peter Kemp, State Revenue Office, Victoria
  • DBA Article: Best Practices for Oracle on Pure Storage
  • Apps Article: Performance Review Data Capture - Brad Sayer, More4Apps
  • DBA / Dev Article: Database Developers – Feeling Left Out of Agile? - D Nowrood, Dell Software
  • Apps Article:  Cost-effective alternative to Oracle Discoverer and BI Suite - Wilhelm Hamman, Excel4apps
  • DBA Article: DBA101 - characterset corruption - Paul Guerin, HP
  • Quick Tips 1: Five Reasons to Upgrade to APEX 5.0 - Scott Wesley, Sage Computing Services
  • Quick Tips 2: Last Successful login time in 12c - Fahd Mirza Chughtai, The Pythian Group
Categories: DBA Blogs

Sharding in Oracle 12c Database

Pakistan's First Oracle Blog - Mon, 2015-10-12 22:22
For Oracle DBAs, sharding is still a fairly alien, or at least fairly new, concept. In the realm of big data, though, the term is used quite extensively.

What is Sharding in simple words:

Sharding is partitioning. Horizontal partitioning to be exact.

Sharding means partitioning a table's rows on the basis of some criteria and storing those partitioned rows of the table (i.e. a shard) on different database servers. These database servers are cheap, low-end commodity servers.

The benefits include smaller data sets to manage, smaller backups, faster reads, and faster response times for queries.

Just like the existing partitioning options in the Oracle database, there are generally three kinds of sharding:

  • Range Sharding
  • List Sharding
  • Hash Sharding
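
By way of analogy, hash sharding distributes rows much as the existing hash partitioning option does, except that the partitions live on separate servers. A minimal sketch of the familiar single-database DDL (the 12c sharding syntax itself was not public at the time of writing):

CREATE TABLE orders
( order_id    NUMBER PRIMARY KEY
, customer_id NUMBER
, order_date  DATE
)
PARTITION BY HASH (customer_id)
PARTITIONS 8;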

The news out there on social media is that the next version of Oracle 12c is coming with a sharding option. That is pretty exciting; let's see what they come up with in this regard.


Categories: DBA Blogs

Next Oracle APEX NOVA Meetup Date Set

Scott Spendolini - Mon, 2015-10-12 15:26
The next Oracle APEX NOVA MeetUp is going to be held on November 12th, 2015 at 7PM.  We decided to mix things up a bit and are going to have it at Vapianos in the Reston Town Center.  We're also going to try a more informal agenda.  In other words, there will be no agenda.

So if you're around Reston on November 12th from 7-9PM (or so), feel free to stop by.  Here's the MeetUp.com link: http://www.meetup.com/orclapex-NOVA/events/226009784/

Submit your abstract for KScope 16 - only 3 days left

Dimitri Gielis - Mon, 2015-10-12 08:01
Only 3 days left to submit your abstract for KScope 16... so don't wait any longer, submit now! :)


My first ODTUG conference was in 2006 and it was awesome. On this blog you'll find different blog posts about my experience and why I love it, so I won't go over that... but from time to time I get some questions about being a presenter at the event.

"What should I submit, I don't know what to talk about?"

The topics in the Oracle Application Express track are as follows:
  • Integration: Relates to any work where other products are integrated with APEX or a gap is bridged between platforms. Perhaps it's integration with EBS, SAP, Raspberry Pi, Node.js, OAuth, REST, etc...
  • New Release: Presentations that relate to the latest version of APEX. Typically the Oracle development team presents these topics.
  • Real World: Everybody loves real-life examples. This is anything where a real-life system is presented. The problem is explained, along with how APEX was used to solve it.
  • Detail Plunge: These are presentations on any topic, but the topic, even if it seems narrow, is explored in detail and with focus. It doesn't have to be complicated. It could be building plugins or Dynamic Actions.
  • Other Application Express: Everything else goes here. This is the catch-all bucket.
You can submit up to 4 abstracts.

The topics I choose typically fall into one of the following categories:



"Why even bother, I won't be selected"

Well, without submitting you don’t know. What do you have to lose? I’m sure I don’t speak only for myself when I say I would love to meet new people and hear about what they do and discover. There’s not a single day I don’t learn something, tech or non-tech. This weekend I learned something new, and the feature has been in APEX for many years! (but that’s for another blog post)
The range of presentation levels is very broad as well. We need presentations of all levels: beginner, intermediate and advanced.


"I don't know how to write an abstract"

On the internet there are many resources on how to present and how to write an abstract. On the ODTUG site you'll find some tips and examples.


Give it a go, you'll feel good when you've done it. Here's the link. It's all about the learning experience!

All the best, and I look forward to meeting you in Chicago.
Categories: Development

Requiem Fragment

Greg Pavlik - Sun, 2015-10-11 16:59
Requiem Fragment
Anna Akhmatova
translation by Greg Pavlik

10 Crucifixion

      Do not lament me, O Mother,
      seeing me in the tomb.


The angelic chorus glorified the hour of eternity,
when the heavens convulsed in a river of fire:
He cried to His Father “Eli, Eli, lama sabachthani?”
And to His Mother, he spoke: “Do not lament me”…

1938

Mary Magdalena writhed, and wept
as the beloved John froze like stone (or salt).
Where the Mother stood in silence -
no one dared to look.

1940

Translator notes. I chose to translate this poem much less loosely than my interpretative translation of Akhmatova’s Lot’s Wife – there Russian readers will recognize that the final stanza is essentially my own poem (with an explicit reference to the unrelated work of Scott Cairns, in fact).

Instead, this section of Requiem is much closer to the original (1), with only minimal augmentation. Nonetheless, it is also very different from Akhmatova’s in certain critical respects. Requiem itself is a difficult and evocative work – its melancholy is inseparable from the suffering of both Akhmatova herself and the Soviet people under Stalin. In some sense, I have abandoned this context in my translation. Educated Russian readers would have recognized Akhmatova’s work as using the imagery of the hymnography of the Paschal Nocturnes, the final liturgical setting of Great and Holy Saturday chanted before the entombed body of the dead Christ, which includes a deeply moving dialogue with his Mother. (2)

I have chosen to deepen the liturgical elements of the poem and play off themes that recur in and around the Lenten Triodion. The text of the Slavonic service is rendered here in English as it is commonly used in American parishes within the Russian Orthodox tradition. Second, I use the Hebrew directly in quotation from the Psalter, emphasizing its position as a liturgical prayer. While the dialogic element from the Nocturnes service is repeated, maintaining the liturgical connection, here the address to the Mother is clearly one of human filial affection.

The river of fire is evocative of the image of God as a “consuming fire”, which, St Isaac says, is experienced as bliss by the pure in heart. The second stanza – and I do not believe there is any intention at all in the original to do this – also points back to the story of Lot and Sodom and implicitly re-invokes the image of fire. Akhmatova makes no association with salt. I have tried intentionally not to recall the poetics of Stabat Mater in the final lines.

(1) Original Russian:

10
РАСПЯТИЕ

   Не рыдай Мене, Мати,
   во гробе зрящия.

Хор ангелов великий час восславил,
И небеса расплавились в огне.
Отцу сказал: "Почто Меня оставил!"
А матери: "О, не рыдай Мене..."

1938

Магдалина билась и рыдала,
Ученик любимый каменел,
А туда, где молча Мать стояла,
Так никто взглянуть и не посмел.

1940, Фонтанный Дом

(2) From the Eastern Orthodox Liturgical Service of Paschal Nocturnes:

Do not lament me, O Mother, seeing me in the tomb, the Son conceived in the womb without seed, for I shall arise and be glorified with eternal glory as God. I shall exalt all who magnify thee in faith and in love.

Glory to Thee, our God, glory to Thee!

I escaped sufferings and was blessed beyond nature at Thy strange birth, O Son, who art without beginning. But now, beholding Thee, my God, dead and without breath, I am sorely pierced by the sword of sorrow. But arise, that I may be magnified.

Glory to Thee, our God, glory to Thee!

By my own will, the earth covers me, O Mother, but the gatekeepers of hell tremble at seeing me clothed in the blood-stained garments of vengeance; for when I have vanquished my enemies on the cross, I shall arise as God and magnify thee.

Glory to the Father and to the Son and to the Holy Spirit, now and ever and unto ages of ages. Amen.

Let creation rejoice, let all born on earth be glad, for hateful hell has been despoiled, let the women with myrrh come to meet me, for I am redeeming Adam and Eve and all their descendants, and on the third day shall I arise.

Oracle package for HTTPS/HTTP[version 0.2]

XTended Oracle SQL - Sun, 2015-10-11 16:30

A couple of days ago I created a simple package for HTTPS/HTTP, but I've now decided to improve it:

  1. Timeout parameter – it is better to be able to control the connection time;
  2. Simple page parsing with PCRE regular expressions – to speed up and simplify page parsing, because if you want to get a large number of matched expressions from a CLOB with the regexp_xxx Oracle functions, you have to call those functions many times with different [occurrence] parameters, passing/accessing the CLOB many times, whereas within a Java procedure it is just one pass;
  3. Support for plsqldoc – the tool for automatically generating documentation in HTML format (a javadoc analogue).

Upd 2015-10-11:

  1. added an HttpMethod parameter – so you can choose the POST or GET method;
  2. added a function get_last_response – returns the last HTTP response code.
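
A hedged usage sketch (I am assuming get_last_response is a parameterless function; check the package spec on github for the exact signature):

SELECT xt_http.get_last_response() http_status FROM dual;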

You can download the new version from github: https://github.com/xtender/xt_http
It may also be interesting if you want to see how to get a collection of CLOBs/varchar2 from a Java stored procedure.

So with the new functions I can even more easily get the list of upvoters from the previous post:

select * 
from table(
         xt_http.get_matches(
            pUrl     => '&url'
           ,pPattern => 'alt="([^"]+)"'
           ,pGroup   => 1
         )
    ) t
Results:

Enter value for url: https://community.oracle.com/voting-history.jspa?ideaID=6901&start=0&numResults=1000

NAME
--------------------------------------------------
Denes Kubicek
Pavel Luzanov
Martin Preiss
AlexAA
scherbak
TimHall
Toon Koppelaars
Dom Brooks
mweedman
BluShadow
Dmitry-Oracle
Mahir M. Quluzade
SA2
Dmitry A. Bogomolov
SQL*Plus
Alexander.Ryndin-Oracle
Mohamed Houri
Randolf Geist
ctrieb
UltraBlast
Kot Dmitriy
user9506228
Timur Akhmadeev
Franck Pachot
pudge
user12068799
user11933056
user11994768
iRAV
user12228999
nicher100
vva
Alexander Semenov
Dmitry_Nikiforov
Bud Light
user7111641
dbms_photoshop
AcidMan
achervov
GokhanAtil
user2616810
Harun Kucuksabanoglu
_Nikotin
Maki
user9066618
user10487079
IgorUsoltsev
edw_otn
Vigneswar Battu
user11198823
be here now
869219
user7543311
VladimirSitnikov
kamineff
Asmodeus
djeday84
Oleh Tyshchenko
87Rb-87Sr
911978
KoTTT
Konstantin
945154
953255
user12217223
Alexander_Anokhin-Oracle
oragraf.sql
Jack10154746
user9502569
Yury Pudovchenko
Sergey Navrotskiy
985277
xifos
MatthiasRogel
Mikhail Velikikh
user12134743
1025067
Oren Nakdimon
1051964
1094595
1209426
user11211533
user6115180
user3990689
d.nemolchev
user8925862
user11222376
user882251
user12279047
1284785
1323138
1336159
Grigory-OC
1373320
1373354
1411786
1421824
user13287062
1442254
1443436
1447180
KSDaemon
user12097700
Sergei Perminov
user3539222
Victor Osolovskiy
1560322
user13609377
user6672754
Alfredo Abate
user11315510
user12245839
1636030
user5399907
user3983717
user12276855
1744386
user9171605
ksAsmodeus
user12058508
1856219
user3214869
Trihin Pavel
1879578
1886567
user8869337
1890583
1913344
user11978061
user1438531
Menno Hoogendijk
2616420
2646629
rpc1
user4770257
user5217858
user11949728
al.netrusov
2693742
AKalugin
2721788
RZGiampaoli
2728073
2769955
2786798
2789541
BobDJ
2816036
user10123230
2831055
DBA_Hans_007
2880604
2896072
2902504
ApInvent
Pinto Das
natalka
2913706
2924025
2957186
2960288
-KE-
user5337688
3001391
3004351
oleksii.kondratenko
zeynep.samsa
Mehmet Kaplan
adbee6ae-dcb4-4c59-915b-487b33773a3f
6648e909-359d-4a32-b7bd-6cea3fd9fec3
mehmetyalcin
60e3f71b-bd5d-422c-b479-7a087d5b3827
3032511
c09042fd-15da-494b-b7b0-243ff4ceeb4b
5e087e18-4789-4923-92cb-cce149ba3072
5291adcf-242b-4f01-bbe6-b7ce44db1aac
9c0321bf-a358-49f9-88e4-462bd4fd5674
03b9266b-53f4-4503-ae04-b339c039bffd
user12044736
4a89f5a6-cc35-424e-8e3c-59964ad8d56b
54113fd4-6592-4a32-9920-663813bdd4c5
590d4b9f-32b6-43ad-b0c4-3b767407c055
ffb03087-d390-4068-98e5-4cfd73e66d00
3fa47a10-7b28-4857-9274-d175f3b7fd48
user5814569
d0b20163-7a58-4d4d-9a7d-01ec973bc3e1
8aaa1ed8-e0f6-4712-bef7-6b1e6579798f
1aee554c-832b-4fa5-bead-0680a53d1cc5
5daa756f-80aa-4260-b91d-10d2c51b78d6
e7897e6a-993d-46db-ad93-215b61b715ad
9f982eda-2b58-4d61-aad9-4c6a50d2dab7
user2503867
3032876
f3012cfb-62b5-4c86-a102-2172c3640d5b
68fe6d1e-d41c-4528-b076-ac3bc5289cc5
41232c56-5a29-442c-a1c6-d5b94477be1d
0682b6e1-8662-498c-8455-629032a25cea
user6592033
59961cb1-b4a6-470c-9802-44432911a7ff
user7345691
ab7980ce-71ba-4ec1-a578-6b716f2ae1ae
user5844404
723b639c-f6c6-4780-8ad6-0315564ef937
28e651f6-c9c3-4d2a-af03-001837eb99ba
3032942
841b96b4-ab84-461e-aed9-58f9df710406
user9961876
d7e48e5c-868f-4b2d-88b5-8614e9d35c80
3033022
f095cbb6-707a-4f40-8f18-a6a9dc37894d
3033091
9e9d3c99-9b5e-4fa2-89ac-4e6216209566
a9e702d3-f8e5-43ee-8e6c-0fe722d9ab50
bdd07d60-c467-4115-8149-8ef2af880d9a
d5571104-4726-4f06-b529-293dc667ae6e
user10865764
petrelevich
87a6503f-5717-4887-ac77-cd916002f53a
user7355088
user6083916
user2300447
user12299863
81ddc21a-7cb3-4298-a96b-ea7c9774b2c4
3405d13c-9d19-4903-8eb0-14a2544cb32b
user2427414
4670adeb-1c9d-4ce4-98eb-962bc4c68f5d
oldhook
1e4428aa-3a63-4a1a-90b3-f2b74292f502
user6367919
7c75e315-487d-4797-8e5b-f3dee58bbc79
user8828289
1a8ad4e7-759f-48ee-8054-c449540d0573
6f3b1262-c9d6-42d6-b703-fc4e6a40b7a1
037f209b-f643-4642-a059-79988d19d77b
PL99
3034106
3034166
user5489918
user2340616
naeel maqsudov
3036157
user2626322
52e8d732-4289-4d0d-b8c5-80e701f3c07d
4b2deedd-84c4-4b8b-8724-837c54dd764e
user12569643
07292d40-2bab-4e94-b68e-cfaae6c093a8
8ec394ac-fd54-4896-9810-0381bb75260e
caec3a42-0f98-440b-ad71-9522cb1e0a0e
9c52d45b-e2e4-489c-9a32-548c77f159b3
f6e966be-8576-4da9-a0df-8fec374b6cd3
5214be2e-d761-4a4e-aeba-23ff7bb4cf4e
f070b484-017d-4d4c-a740-f6ad9db37286
d4f322d3-0265-458d-948d-83bd66d5c7e3

Categories: Development

Exadata Upgrade - Adding X5-2 Half Rack to X3-2 Existing Half Rack

Senthil Rajendran - Sat, 2015-10-10 23:26

Recently I did an Exadata expansion project on an X3 half rack, making it a full rack by adding X5 hardware.
This setup is a full Exadata rack with two different flavours of engineered system hardware: four X3 compute nodes and four X5-2 compute nodes, along with seven X3 cell nodes and seven X5 cell nodes.
This topology is supported, but there was a lot of work behind it.

  • First, getting the X3 onto the supported 12c release; this involved a four-node grid upgrade from 11g to 12c and taking the X3 compute/cell node images from 11g to 12c.
  • Next, physically racking the X5 hardware into the X3 rack space.
  • Then, upgrading the X5 to the latest 12c software stack, the same as the X3.
  • Interconnecting the X3 and X5 so they co-exist.
  • Prechecking the full Exadata rack before making the nodes into one cluster.
  • Next, adding the four X5 nodes to the X3 RAC cluster using addnode.sh.
  • Now the storage: the X3 had 3TB disks and the X5 has 4TB disks, so the grid disks have to be the same size.
    • Following the doc "How to Add Exadata Storage Servers Using 3TB/4TB Disks to an Existing Database Machine (Doc ID 1476336.1)" will help.
    • The additional 1TB of space on the X5 cell storage was created as a RECO2 disk group for future recovery storage, as the current setup is 80:20 (DATA:RECO).
  • Completing cluster verification and returning the rack to service.


This whole exercise was very cool and went as documented.
For documentation, the guide below was used (you may have to refer to the latest document for extending Exadata):
Oracle® Exadata Database Machine Extending and Multi-Rack Cabling Guide
12c Release 1 (12.1)
E51952-10
https://docs.oracle.com/cd/E50790_01/doc/doc.121/e51952/hardware.htm#DBMMR21157
