
Feed aggregator

Conference Season

Scott Spendolini - Mon, 2016-01-11 21:56
It's conference season!  That means time to start looking at flights and hotels and ensuring that while I'm on the road, my wife is not at work (no easy task).  In addition to many of the conferences that I've been presenting at for years, I have a couple of new additions to the list.

Here it is:

RMOUG - Denver, CO
One of the larger conferences, the year usually starts out in Denver for me, where crowds are always large and appreciative.  RMOUG has some of the most dedicated volunteers and puts on a great conference year after year.

GAOUG - Atlanta, GA
This will be my first time at GAOUG, and I'm excited to help them get their annual conference started.  Lots of familiar faces will be in attendance.  At only $150, it's worth checking out if you're within driving distance of Atlanta.

OCOJ - Williamsburg, VA (submitted)
This will (hopefully) also be my first Oracle Conference on the James.  Held in historic Williamsburg, OCOJ is also a steal at just $99.

UTOUG - Salt Lake City, UT
I'll head back out west to Utah for UTOUG.  Always good to catch up with the local Oracle community in Utah each year.  Plus, I make my annual SLC brewery tour while there.

GLOC - Cleveland, OH (submitted)
Steadily growing in popularity, the folks at GLOC put on an excellent conference.  Waiting to hear back on whether my sessions got accepted.

KSCOPE - Chicago, IL
Like everyone else, I'm looking forward to one of the best annual technical conferences that I regularly attend.  In addition to the traditional APEX content, there are a few surprises planned this year!

ECO - Raleigh/Durham, NC (planning on submitting)
ECO - formerly VOUG - is also growing in numbers each year.  There's a lot of tech in the RDU area, and many of the talented locals present here.  Bonus: a brewery/bar tour curated by Jeff Smith the night before.

OOW - San Francisco, CA (planning on submitting)
As always, the conference year typically ends with the biggest one - Oracle Open World.  While there's not as much APEX content as there once was, OOW has always been more focused on the marketing side of technology, which is good to hear every now and then.



User group conferences are one of the best types of training available, especially since they typically cost just a couple hundred dollars.  I encourage you to check out one near you.  Smaller groups are also great places to get an opportunity to present.  In addition to annual conferences, many smaller groups meet monthly or quarterly and are always on the lookout for new content.

David Bowie 1947-2016. My Memories.

Richard Foote - Mon, 2016-01-11 20:06
In mid-April 1979, as a nerdy 13 year old, I sat in my bedroom in Sale, North England listening to the radio when a song called “Boys Keep Swinging” came on by a singer called David Bowie, whom I had never heard of before. I instantly loved it and taped it the next time it came on the radio via my […]
Categories: DBA Blogs

Parent Company of University of Phoenix Could Be Sold to Owner of McGraw-Hill Education

Michael Feldstein - Mon, 2016-01-11 16:45

By Phil Hill

Apollo Education Group, parent company of the University of Phoenix as well as Apollo Global, is in “advanced talks” to be purchased by Apollo Global Management, owner of McGraw-Hill Education and of Cengage debt. Got that?

To clarify, the Apollo Education Group is the parent company of the University of Phoenix, and it has a subsidiary called Apollo Global, which is a joint venture with the Carlyle Group, another private equity firm. While the confusion is understandable, Apollo Global Management previously had nothing in common with the Apollo Education Group other than their admiration for the choir-directing sun-god.

With that in mind, here is the news from the Wall Street Journal today:

A deal between Phoenix-based Apollo Education and Apollo Global Management, a New York private-equity firm, could be worth about $1 billion, some of the people said, with one of them adding an agreement could be reached in the next few weeks. Apollo Education had been in discussions with a number of private-equity firms since late last year, but Apollo Global Management is the only one still in the running now, this person said.

It is possible, as always, in such situations that there will be no deal, and another buyout firm could re-emerge.

This news coincided with Apollo Education Group’s announcement of its Q1 2016 earnings. To get a sense of just how far the company has declined, consider that in the earnings release they stated their “cash and marketable securities were approximately $756 million and our outstanding debt was $43 million”. In other words, AGM is talking about paying $1 billion to get just over $700 million in cash and marketable securities (net of debt), meaning it would pay less than $300 million for the actual business. The following chart shows the historical Apollo Education Group combined enrollment.

Update: The markets are taking this news well, as shares of Apollo Education Group are up 23% in after-hours trading. At the close today, the publicly-traded market cap of Apollo was just $692 million.

Later in the WSJ article:

Apollo Global Management, founded by veterans of junk-bond pioneer Drexel Burnham Lambert, is known for a willingness to invest in struggling industries and companies. For example, last year it pursued—though never closed—a deal for property mogul Nicholas Schorsch’s troubled real-estate empire.

Connections In Higher Ed

Unstated in the article, but quite important to understand, is that AGM purchased McGraw-Hill Education for $2.5 billion in late 2012 and is trying to take that company public (which I personally doubt will happen in the near term). According to the NY Post, AGM also owns debt from Cengage Learning.

There is one other aspect to consider: the University of Phoenix is in the middle of migrating from its homegrown learning platform, which cost several hundred million dollars to develop, to Blackboard’s Learn Ultra. While no one has said that this agreement will change, if AGM does buy the Apollo Education Group, then there would be additional risk to this Learn Ultra migration. University of Phoenix instructors have been told about the changes, but the actual implementation with students and classes is scheduled for Summer 2016. It is quite common for acquisitions of distressed companies to lead to major strategic initiatives being put on hold until the new owners figure out what they plan to do with the company.

There are a lot of moves happening in education this year, and this potential sale will likely have ramifications beyond just the for-profit sector.

Update: Sorry for multiple updates, but this observation by Audrey is too good not to include.

Future of edu irony: news that U of Phoenix may be for sale, same day college football championship to be played in U of Phoenix stadium

— Audrey Watters (@audreywatters) January 11, 2016


The post Parent Company of University of Phoenix Could Be Sold to Owner of McGraw-Hill Education appeared first on e-Literate.

Pythian – Through the Eyes of a CO-OP Student

Pythian Group - Mon, 2016-01-11 15:32

 

I worked at Pythian as a Global Talent Acquisition Coordinator (co-op student) within the company’s human resources function in Ottawa, Canada. At the core of my role, I provided direct support to my colleagues in guiding job applicants through our hiring process. I also had the opportunity to take on and create small projects for myself during my four months there.

Since our past technical CO-OPs have written fantastic blogs about their experiences at Pythian (be sure to read them!), how about I write this one about the business side of working there? Here are my top three reasons why any business student would want to work at Pythian:

1. What better place is there to develop cross-cultural literacy?

With Pythian, I had the pleasure of working with remote and international colleagues for the first time. Top that with actively communicating with a global pool of job applicants on a daily basis. Succeeding in this kind of environment definitely requires you to be cross-culturally literate, which means that you understand how cultural differences—both inside and outside an organization—affect a business’s day-to-day practices.

In business school, we are often reminded about the importance of considering the external environment when a firm goes global (CDSTEP, anyone?), so it was quite eye-opening to see how my experience at Pythian really validated my studies. For example, processes that are of no legal concern in Canada might present a huge obstacle when hiring abroad, and pieces of information that North Americans prefer to keep private are quite openly discussed in other cultures. Talking to candidates from around the world definitely taught me how to think more critically about my communication—not only in terms of what I say, but also how I say it.

2. It feels nice not to be just “the CO-OP student”.

From my first day, I immediately felt that I would not be looked at as simply an intern. My team greeted me with open arms (already knowing my name!) and repeatedly emphasized the value of having me on board throughout my term. Within days, I could already understand the significance of my tasks and how they contributed not only to the team, but also to the organization as a whole.

Another great thing about not being just “the CO-OP student” is empowerment. At Pythian, you are really treated like a colleague rather than a subordinate. I never worked for my boss, but I always worked with her. During my term, my team enthusiastically invited me to explore our work processes and offer ideas to make things better. It was pretty cool to see my thoughts being listened to and put into action, even after my departure!

3. There’s nothing more rewarding than stepping out of your comfort zone.

One of the things I hated doing most before working at Pythian was using the phone. If you run a quick Google search, you will find many studies showing that Millennials are not as comfortable with making phone calls as their predecessors were—and I could speak to that, 100 percent! Want me to order a pizza? I’ll do it through the restaurant’s online platform. Am I due for a dentist appointment? I’ll ignore the receptionist’s voicemails until she sends me an e-mail (although my telephonophobia might not be the only reason for that one!).

My colleagues helped me overcome this discomfort by having me conduct reference checks. As embarrassing as it might sound, I actually had to take the greatest leap of faith to get my fingers dialling for the first time. Although I certainly had a mini heart-attack whenever I was asked to help with a reference, I eventually eased into the task with time. While I might still shy away from the telephone now and then, I really do feel accomplished in getting more comfortable with using less texting and more talking!

All in all, my experience at Pythian has been nothing less than fantastic. It has truly been a pleasure to work with a diverse group of people from around the world, and I would be thrilled to see my future take me back there one day. If you’re looking to intern for a company with a global focus and an information-sharing, empowering culture, then you would definitely love to join Pythian!

Categories: DBA Blogs

Use Case for SQL Server Table Partitioning

Pythian Group - Mon, 2016-01-11 15:31

 

Often we are asked by our clients about Table Partitioning, and specifically, which tables are good candidates for partitioning.
Here are some of the main use cases for Table Partitioning:

  1. You have the Enterprise Edition of SQL Server 2005 or higher.
  2. The table contains or will contain more than 5-6 million rows and is growing fast, or its size is growing by around 1GB per month or more.
  3. The FULL backup is taking too long or the backup file is too large, and older data is not being updated (e.g.: users can only update data from the last 3 months).
  4. Data needs to be archived or purged on a regular basis, and the current archiving or deletion of data may be causing blocking or deadlocks for other processes.
  5. There is a NOT NULL date column or another NOT NULL sequential column that the table can be partitioned upon.
  6. It is better if most queries include the partitioning column in their WHERE clauses (e.g.: a date range).

When partitioning a table, here are a few things to look into:

  1. Create a file and a filegroup per partition (even if the files are created in the same place). This way, it is easy to back up (e.g.: FILEGROUP backups), maintain, and archive/purge.
  2. The best way to partition a table is by a date column, because data is usually archived or purged by date. If you do not have such a column, you may want to consider adding a column that will contain the current date/time when the row is created; this column can have a default of GETDATE(). Using an ID or a computed column may cause too many headaches.
    If the application returns an error when adding the date column, consider using a view on top of the underlying partitioned table.
  3. Partitioning requires maintenance:
    1. Adding files, filegroups and partitions on a regular basis and in advance.
    2. Monitoring data growth and potentially modifying the partition architecture (e.g.: moving from a monthly partition to a weekly or daily partition).
    3. Archiving/purging partitions on a regular basis. Consider using partition SWITCH for quick archiving and purging.
    4. You can defragment indexes per partition.
    5. Remember that statistics cannot be updated by partition, however, you may want to consider FILTERED INDEXES or FILTERED STATISTICS to avoid updating statistics on the entire table, as well as improving performance in specific cases.
  4. Consider using MAXDOP <> 1 on the instance level or for specific queries that span multiple partitions in order to take advantage of parallelism. Configure parallelism with caution.

Additional links:

There is a lot of information around SQL Server Table Partitioning. Here are some of the useful links:

SQL Server Database Partitioning Myths and Truths
Table Partitioning in SQL Server – The Basics
How To Decide if You Should Use Table Partitioning
Query Processing Enhancements on Partitioned Tables and Indexes

 

Discover more about our expertise in SQL Server.

Categories: DBA Blogs

January 27: Laser Technology―Oracle Sales Cloud Customer Reference Forum

Linda Fishman Hoyle - Mon, 2016-01-11 14:37

Join us for an Oracle Customer Reference Forum on January 27, 2016. Roosevelt Rogers, Executive Vice President of Laser Technology will talk about why the company used Oracle Sales and Marketing Clouds to formalize their sales process with predictability and accountability. Laser Technology's environment is one where trade shows, regional events, and web leads represent the primary sales funnel; therefore, the need to nurture and educate before and after the sales cycle is critical.

Register now to attend the live forum on Wednesday, January 27, 2016, at 8:00 a.m. PST and learn more about Laser Technology’s experience with Oracle Sales Cloud.

[FREE Live Webinar] Oracle Access Manager Live Webinar: Join k21 Team on 16 January

Online Apps DBA - Mon, 2016-01-11 11:59


We’re so glad and excited to share that 2016 is off to an amazing start for us, and we’ve noticed that you are seeking the same. How?

A couple of days back, we launched our FREE Oracle Fusion Middleware Live Webinar, where Atul Kumar (an Oracle ACE and Fusion Middleware and Identity and Access Management expert) taught Fusion Middleware concepts, SSL, Q&A, etc. And you know what? We had 300+ registrations and 250 attendees! This is incredible!

Now, since so many learners read our blogs, attend our events, and even register for our paid trainings, we have decided to come up with a FREE Oracle Access Manager Live Webinar. If Oracle Fusion Middleware experts keep learning and growing, why can’t you? This world is all about learning and growing your career, and it’s all about results, so why miss it?

We’re officially inviting you to attend our FREE Oracle Access Manager Live Webinar, which will take place on Saturday, 16th January 2016 at 3:30 PM GMT / 7:30 AM PST / 9:00 PM IST, where Atul Kumar and Ganesh Kamble will be covering the following topics:

  • Architecture of Oracle Access Manager (OAM)
  • Oracle Access Manager and EBS (R12) integration
  • Answers to challenges you are facing in your own projects
  • Live Q&A

We would be glad if you take a couple of hours to join the K21 Team in this FREE Live Online Event to learn something good for a great career ahead!
Click the button below to register for our webinar and grab your seat quickly!

Click Here to Register For FREE Webinar

Please note that we have a limited number of seats for a limited time. So grab one before it’s gone!

The post [FREE Live Webinar] Oracle Access Manager Live Webinar: Join k21 Team on 16 January appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Subquery Effects

Jonathan Lewis - Mon, 2016-01-11 06:50

Towards the end of last year I used a query with a couple of “constant” subqueries as a focal point for a blog note on reading parallel execution plans. One of the comments on that note raised a question about cardinality estimates and, coincidentally, I received an email about the cost calculations for a similar query a few days later.

Unfortunately there are all sorts of anomalies, special cases, and changes that show up across versions when subqueries come into play – it’s only in recent versions of 11.2, for example, that a very simple example I’ve got of three equivalent statements produces the same execution plan, costs, and cardinality for all three. (The queries are: table with IN subquery, table with EXISTS subquery, and table joined to a “manually unnested” subquery – all three plans take the unnested subquery shape.)

I’m just going to pick out one particular anomaly, which is a costing error with multiple subqueries when “OR-ed”. Here’s my sample data set:


create table t1
nologging
as
select
        rownum                  n1,
        rownum                  n2,
        rownum                  n3,
        lpad(rownum,10)         small_vc,
        rpad('x',100,'x')       padding
from dual
connect by
        level <= 20000
;


create table t2
nologging
as
select
        rownum                  n1,
        rownum                  n2,
        rownum                  n3,
        lpad(rownum,10)         small_vc,
        rpad('x',100,'x')       padding
from dual
connect by
        level <= 25000
;

create table t3
nologging
as
select
        rownum                  n1,
        rownum                  n2,
        rownum                  n3,
        lpad(rownum,10)         small_vc,
        rpad('x',100,'x')       padding
from dual
connect by
        level <= 30000
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'t1',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'t2',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'t3',
                method_opt       => 'for all columns size 1'
        );
end;
/

The three tables are slightly different sizes so that it will be easy to see the different costs of tablescans, and there are no indexes, so everything I do in the queries will be a tablescan. Here are six queries I’m going to test – they all scan t1, with “constant” subqueries against t2 and/or t3. The first pair just shows you the basic cost of the query with a single subquery, the second pair shows you the default action with two subqueries in two different orders, and the final pair shows you what happens with two subqueries when you block subquery pushing.


select
        max(t1.n1)
from
        t1
where
        t1.n1 > 10000
and     t1.n2 > (select avg(t2.n2) from t2)
;

select
        max(t1.n1)
from
        t1
where
        t1.n1 > 10000
and     t1.n3 > (select avg(t3.n3) from t3)
;

select
        max(t1.n1)
from
        t1
where
        t1.n1 > 10000
and     (
            t1.n2 > (select avg(t2.n2) from t2)
         or t1.n3 > (select avg(t3.n3) from t3)
        )
;

select
        max(t1.n1)
from
        t1
where
        t1.n1 > 10000
and     (
            t1.n3 > (select avg(t3.n3) from t3)
         or t1.n2 > (select avg(t2.n2) from t2)
        )
;

select
        max(t1.n1)
from
        t1
where
        t1.n1 > 10000
and     (
            t1.n2 > (select /*+ no_push_subq */ avg(t2.n2) from t2)
         or t1.n3 > (select /*+ no_push_subq */ avg(t3.n3) from t3)
        )
;

select
        max(t1.n1)
from
        t1
where
        t1.n1 > 10000
and     (
            t1.n3 > (select /*+ no_push_subq */ avg(t3.n3) from t3)
         or t1.n2 > (select /*+ no_push_subq */ avg(t2.n2) from t2)
        )
;

Here are the first two plans, pulled from memory (which you might have guessed thanks to the “disappearing subquery predicate” in the predicate section). These examples came from 12.1.0.2, but the same happens in 11.2.0.4:


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   111 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |    10 |            |          |
|*  2 |   TABLE ACCESS FULL  | T1   |   500 |  5000 |    49   (3)| 00:00:01 |
|   3 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   4 |     TABLE ACCESS FULL| T2   | 25000 |   122K|    62   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T1"."N1">10000 AND "T1"."N2">))

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   123 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |    10 |            |          |
|*  2 |   TABLE ACCESS FULL  | T1   |   500 |  5000 |    49   (3)| 00:00:01 |
|   3 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   4 |     TABLE ACCESS FULL| T3   | 30000 |   146K|    74   (3)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T1"."N1">10000 AND "T1"."N3">))

As you can see, the cost of the query is the cost of the t1 tablescan plus the cost of running the t2 or t3 subquery once: 111 = 49 + 62, and 123 = 49 + 74.

(As a general guideline, recent versions of the optimizer tend to allow for subqueries by including “cost of subquery” * “number of times the optimizer thinks it will execute” – in this case the optimizer knows that the subquery will run exactly once).

But what happens when we test the query that applies BOTH subqueries to the tablescan ?


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |    50 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |    15 |            |          |
|*  2 |   TABLE ACCESS FULL  | T1   |   975 | 14625 |    50   (4)| 00:00:01 |
|   3 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   4 |     TABLE ACCESS FULL| T2   | 25000 |   122K|    62   (4)| 00:00:01 |
|   5 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   6 |     TABLE ACCESS FULL| T3   | 30000 |   146K|    74   (3)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T1"."N1">10000 AND ("T1"."N2"> OR "T1"."N3">)))


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |    50 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |    15 |            |          |
|*  2 |   TABLE ACCESS FULL  | T1   |   975 | 14625 |    50   (4)| 00:00:01 |
|   3 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   4 |     TABLE ACCESS FULL| T3   | 30000 |   146K|    74   (3)| 00:00:01 |
|   5 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   6 |     TABLE ACCESS FULL| T2   | 25000 |   122K|    62   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T1"."N1">10000 AND ("T1"."N3"> OR "T1"."N2">)))

The cost of the query in both cases is just the cost of the tablescan of t1 – the subqueries are, apparently, free. You can check from the predicate section, by the way, that the subqueries are applied in the order they appear in the original statement.

Does anything change if the subqueries are not pushed ?


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   111 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |    15 |            |          |
|*  2 |   FILTER             |      |       |       |            |          |
|*  3 |    TABLE ACCESS FULL | T1   | 10001 |   146K|    50   (4)| 00:00:01 |
|   4 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   5 |     TABLE ACCESS FULL| T2   | 25000 |   122K|    62   (4)| 00:00:01 |
|   6 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   7 |     TABLE ACCESS FULL| T3   | 30000 |   146K|    74   (3)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T1"."N2"> OR "T1"."N3">))
   3 - filter("T1"."N1">10000)

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   124 (100)|          |
|   1 |  SORT AGGREGATE      |      |     1 |    15 |            |          |
|*  2 |   FILTER             |      |       |       |            |          |
|*  3 |    TABLE ACCESS FULL | T1   | 10001 |   146K|    50   (4)| 00:00:01 |
|   4 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   5 |     TABLE ACCESS FULL| T3   | 30000 |   146K|    74   (3)| 00:00:01 |
|   6 |    SORT AGGREGATE    |      |     1 |     5 |            |          |
|   7 |     TABLE ACCESS FULL| T2   | 25000 |   122K|    62   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T1"."N3"> OR "T1"."N2">))
   3 - filter("T1"."N1">10000)

The two plans have different costs – and the cost is the cost of the tablescan of t1 plus the cost of just the first subquery in the filter predicate list.

The non-pushed subqueries show up another anomaly: you’ll notice that the t1 tablescan reports 10,001 rows cardinality, but the FILTER operation doesn’t have an associated cardinality so we can’t see how many rows the optimizer thinks will survive the subqueries. So let’s run a query that allows us to see the surviving row estimate:


select
        max(n1)
from
        (
        select
                /*+ no_eliminate_oby */
                t1.n1
        from
                t1
        where
                t1.n1 > 10000
        and     (
                   t1.n3 > (select /*+ no_push_subq */ avg(t3.n3) from t3)
                or t1.n2 > (select /*+ no_push_subq */ avg(t2.n2) from t2)
                )
        order by
                n1
        )
;

-------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |       |       |   126 (100)|          |
|   1 |  SORT AGGREGATE        |      |     1 |    13 |            |          |
|   2 |   VIEW                 |      | 10001 |   126K|   126   (5)| 00:00:01 |
|   3 |    SORT ORDER BY       |      | 10001 |   146K|   126   (5)| 00:00:01 |
|*  4 |     FILTER             |      |       |       |            |          |
|*  5 |      TABLE ACCESS FULL | T1   | 10001 |   146K|    50   (4)| 00:00:01 |
|   6 |      SORT AGGREGATE    |      |     1 |     5 |            |          |
|   7 |       TABLE ACCESS FULL| T3   | 30000 |   146K|    74   (3)| 00:00:01 |
|   8 |      SORT AGGREGATE    |      |     1 |     5 |            |          |
|   9 |       TABLE ACCESS FULL| T2   | 25000 |   122K|    62   (4)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(("T1"."N3"> OR "T1"."N2">))
   5 - filter("T1"."N1">10000)

As you can see, the SORT ORDER BY operation thinks it’s going to handle 10,001 rows – it looks as if the optimizer arithmetic hasn’t applied the usual subquery guess of 5% for the two subqueries. (When the subqueries were automatically pushed you saw a cardinality of 975 – which is 5% for subquery t2 plus (due to OR) 5% for subquery t3 minus 5% of 5% (= 25 rows) for the overlap – this is the standard OR arithmetic.)

tl;dr

Although the optimizer code has been enhanced in many places for dealing with subquery estimates, there are still some odd errors and inconsistencies that you need to be aware of. The examples I’ve shown may not be particularly significant in terms of what they do, but the pattern is one that you may recognise in more complex queries.

 

Reference script: subq_cost_anomaly_2.sql

 


Intro to JCS

Angelo Santagata - Mon, 2016-01-11 06:04

Happy New Year all!!!

 

To start this year's blogging I thought I'd highlight a nice, easy quick win for getting started with Oracle Java Cloud Service. This link https://www.youtube.com/watch?v=5DHsE2x5mks takes you to a YouTube video which I found very clear and easy to follow.

 

Enjoy

MobaXterm 8.5

Tim Hall - Mon, 2016-01-11 02:28

MobaXterm 8.5 was released at the end of last year. The download and changelog can be found here.

If you use a Windows desktop and you ever need to SSH to a server or use X emulation, you need this bit of software in your life. There is a free version, or you can subscribe to support the development.

Give it a go. You won’t regret it.

Cheers

Tim…

MobaXterm 8.5 was first posted on January 11, 2016 at 9:28 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

KeePass 2.31

Tim Hall - Mon, 2016-01-11 02:05

KeePass 2.31 has been released. You can download it from here.

You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

Cheers

Tim…

KeePass 2.31 was first posted on January 11, 2016 at 9:05 am.

ORA-12514: TNS:listener during switchover using DGMGRL

Michael Dinh - Sun, 2016-01-10 20:02

Not sure what I am doing wrong, because the DEFAULT StaticConnectIdentifier does not seem to work.

Google and MOS were not much help.

Using the StaticConnectIdentifier created by dgmgrl fails with the following error during switchover:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

The error can be found in the listener.log

When using tnsnames for StaticConnectIdentifier, switchover works like a charm.

I keep thinking there must be something wrong with my configuration, but as you can see from the test case, the only changes were to StaticConnectIdentifier.

Both databases are on the same host with each listening on different ports.

Updated: Just found this note
DGMGRL>switchover to Fails with ORA-12514 (Doc ID 1582927.1)
Oracle Database – Enterprise Edition – Version 12.1.0.1 to 12.1.0.1 [Release 12.1]

It will be interesting to see if this works.

oracle@arrow:hawklas:/home/oracle
$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> connect sys/oracle@dc_las
Connected.
DGMGRL> connect sys/oracle@dc_san
Connected.
DGMGRL> show configuration verbose

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database verbose hawklas

Database - hawklas

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    hawklas

  Properties:
    DGConnectIdentifier             = 'dc_las'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '1800'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    SidName                         = 'hawklas'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '8191'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL> show database verbose hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 0 seconds ago)
  Apply Lag:       0 seconds (computed 0 seconds ago)
  Apply Rate:      32.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

  Properties:
    DGConnectIdentifier             = 'dc_san'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '30'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '1800'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    SidName                         = 'hawksan'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '8191'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      32.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS
DGMGRL> connect sys/oracle@dc_las
Connected.
DGMGRL> switchover to hawksan
Performing switchover NOW, please wait...
Operation requires a connection to instance "hawksan" on database "hawksan"
Connecting to instance "hawksan"...
Connected.
New primary database "hawksan" is opening...
Operation requires startup of instance "hawklas" on database "hawklas"
Starting instance "hawklas"...
Unable to connect to database
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Failed.
Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up instance "hawklas" of database "hawklas"

DGMGRL> exit
oracle@arrow:hawklas:/home/oracle
$ sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 17:36:20 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@hawklas> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2260088 bytes
Variable Size             385876872 bytes
Database Buffers          675282944 bytes
Redo Buffers                5517312 bytes
Database mounted.
Database opened.
SYS@hawklas> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:hawklas:/home/oracle
$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawksan - Primary database
    hawklas - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
ORA-16534: switchover, failover or convert operation in progress
DGM-17017: unable to determine configuration status

DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawksan - Primary database
    hawklas - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> edit database hawksan set property StaticConnectIdentifier='dc_san';
Property "staticconnectidentifier" updated
DGMGRL> edit database hawklas set property StaticConnectIdentifier='dc_las';
Property "staticconnectidentifier" updated
DGMGRL> connect sys/oracle@dc_san
Connected.
DGMGRL> switchover to hawklas
Performing switchover NOW, please wait...
Operation requires a connection to instance "hawklas" on database "hawklas"
Connecting to instance "hawklas"...
Connected.
New primary database "hawklas" is opening...
Operation requires startup of instance "hawksan" on database "hawksan"
Starting instance "hawksan"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "hawklas"
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      0 Byte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> exit
oracle@arrow:hawklas:/home/oracle
$
oracle@arrow:hawklas:/home/oracle
$ tns
oracle@arrow:hawklas:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$ tnsping dc_las

TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 10-JAN-2016 17:50:08

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (SDU=32767) (SEND_BUF_SIZE=32767) (RECV_BUF_SIZE=32767) (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = arrow)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = hawklas)))
OK (0 msec)
oracle@arrow:hawklas:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$ tnsping dc_san

TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 10-JAN-2016 17:50:14

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (SDU=32767) (SEND_BUF_SIZE=32767) (RECV_BUF_SIZE=32767) (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = arrow)(PORT = 1531))) (CONNECT_DATA = (SERVICE_NAME = hawksan)))
OK (0 msec)
oracle@arrow:hawklas:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$ lsnrctl status listener_las

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-JAN-2016 17:50:26

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow)(PORT=1521)(SEND_BUF_SIZE=32767)(RECV_BUF_SIZE=32767)))
STATUS of the LISTENER
------------------------
Alias                     listener_las
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                10-JAN-2016 16:15:51
Uptime                    0 days 1 hr. 34 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_las.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521)))
Services Summary...
Service "foo2" has 1 instance(s).
  Instance "foo2", status UNKNOWN, has 1 handler(s) for this service...
Service "hawk" has 1 instance(s).
  Instance "hawklas", status READY, has 1 handler(s) for this service...
Service "hawklas" has 2 instance(s).
  Instance "hawklas", status UNKNOWN, has 2 handler(s) for this service...
  Instance "hawklas", status READY, has 1 handler(s) for this service...
Service "hawklas_DGB" has 1 instance(s).
  Instance "hawklas", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@arrow:hawklas:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$ lsnrctl status listener_san

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-JAN-2016 17:50:29

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow)(PORT=1531)(SEND_BUF_SIZE=32767)(RECV_BUF_SIZE=32767)))
STATUS of the LISTENER
------------------------
Alias                     listener_san
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                10-JAN-2016 16:15:54
Uptime                    0 days 1 hr. 34 min. 35 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_san.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531)))
Services Summary...
Service "hawk" has 1 instance(s).
  Instance "hawksan", status READY, has 1 handler(s) for this service...
Service "hawksan" has 2 instance(s).
  Instance "hawksan", status UNKNOWN, has 2 handler(s) for this service...
  Instance "hawksan", status READY, has 1 handler(s) for this service...
Service "hawksan_DGB" has 1 instance(s).
  Instance "hawksan", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@arrow:hawklas:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$

It seems that static registration is not working for db_unique_name_DGMGRL.

oracle@arrow:hawksan:/home/oracle
$ dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show database hawklas StaticConnectIdentifier
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))'
DGMGRL> show database hawksan StaticConnectIdentifier
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))'
DGMGRL> exit
oracle@arrow:hawksan:/home/oracle
$ lsnrctl status listener_las

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-JAN-2016 22:16:09

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521)(SEND_BUF_SIZE=32767)(RECV_BUF_SIZE=32767)))
STATUS of the LISTENER
------------------------
Alias                     listener_las
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                10-JAN-2016 22:02:05
Uptime                    0 days 0 hr. 14 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_las.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521)))
Services Summary...
Service "foo2" has 1 instance(s).
  Instance "foo2", status UNKNOWN, has 1 handler(s) for this service...
Service "hawk" has 1 instance(s).
  Instance "hawklas", status READY, has 1 handler(s) for this service...
Service "hawklas" has 2 instance(s).
  Instance "hawklas", status UNKNOWN, has 2 handler(s) for this service...
  Instance "hawklas", status READY, has 1 handler(s) for this service...
Service "hawklas_DGB" has 1 instance(s).
  Instance "hawklas", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@arrow:hawksan:/home/oracle
$ lsnrctl status listener_san

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 10-JAN-2016 22:16:12

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531)(SEND_BUF_SIZE=32767)(RECV_BUF_SIZE=32767)))
STATUS of the LISTENER
------------------------
Alias                     listener_san
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                10-JAN-2016 22:02:03
Uptime                    0 days 0 hr. 14 min. 9 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/dbhome_1/network/log/listener_san.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531)))
Services Summary...
Service "hawk" has 1 instance(s).
  Instance "hawksan", status READY, has 1 handler(s) for this service...
Service "hawksan" has 2 instance(s).
  Instance "hawksan", status UNKNOWN, has 2 handler(s) for this service...
  Instance "hawksan", status READY, has 1 handler(s) for this service...
Service "hawksan_DGB" has 1 instance(s).
  Instance "hawksan", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@arrow:hawksan:/home/oracle
$ sqlplus sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGMGRL)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))' as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 22:17:09 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name: ^C
oracle@arrow:hawksan:/home/oracle
$ sqlplus sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGMGRL)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))' as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 22:17:20 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name: ^C
oracle@arrow:hawksan:/home/oracle
$ sqlplus sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=hawklas_DGB)(INSTANCE_NAME=hawklas)(SERVER=DEDICATED)))' as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 22:17:48 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@hawklas):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:hawksan:/home/oracle
$ sqlplus sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=arrow.localdomain)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=hawksan_DGB)(INSTANCE_NAME=hawksan)(SERVER=DEDICATED)))' as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 22:17:55 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@hawksan):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:hawksan:/home/oracle
$

$ tns
oracle@arrow:hawksan:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$ grep -i dg listener.ora
        (GLOBAL_NAME = hawklas_DGB)
        (GLOBAL_NAME = hawklas_DGMGRL)
        (GLOBAL_NAME = hawksan_DGB)
        (GLOBAL_NAME = hawksan_DGMGRL)
oracle@arrow:hawksan:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
$

UPDATED Jan 16, 2016.

Thanks to Shivananda Rao (http://shivanandarao-oracle.com/) for pointing out the obvious which I was blind to.

IT’S NOT GLOBAL_NAME = hawklas_DGMGRL !!!

It’s GLOBAL_DBNAME = hawklas_DGMGRL (note the missing “DB”)
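For reference, a correct static registration entry in listener.ora would look something like the fragment below (SID and path values follow this test case; adjust for your environment — this is a sketch, not the author's actual file):

```
SID_LIST_LISTENER_LAS =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = hawklas_DGMGRL)   # GLOBAL_DBNAME, not GLOBAL_NAME
      (SID_NAME = hawklas)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )
```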

R.I.P.

 


Handling ADF BC 12.2.1 REST Validation in Oracle JET

Andrejus Baranovski - Sun, 2016-01-10 15:37
A CRUD use case would not be complete without validation logic. Oracle JET allows you to implement standard and custom validators on the client, and most simple to moderately complex logic will probably be implemented in Oracle JET. Complex use cases can be handled by ADF BC validation rules, with data validated through REST calls. When validation fails in ADF BC, the error message is propagated through REST back to the Oracle JET client, where we can parse it and attach it to the UI field.

You can watch the demo video; it shows how it works. I'm trying to update a row with invalid data, the rules are executed in ADF BC, and validation messages are displayed next to the UI fields in JET:


There are three rules defined in ADF BC. It might be tricky to map a UI field to an error message. This is easy with the PATCH method, as ADF BC REST returns the failed attribute name together with the message. It doesn't work the same way with the POST method, when a new row is created. For this reason, I have introduced a custom indicator into each validation message: *AttributeName*. I parse this value in JET to understand which UI field should be marked as incorrect.
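A minimal sketch of that parsing step (the *AttributeName* marker format is from this post; the helper name and exact message text are hypothetical):

```javascript
// Hypothetical helper: split an ADF BC validation message carrying a
// *AttributeName* marker into the target attribute and the display text,
// e.g. "Salary out of range *Salary*" -> { attribute: "Salary", message: "Salary out of range" }
function parseValidationMessage(text) {
  const match = text.match(/\*([^*]+)\*/);   // grab the name between the markers
  return {
    attribute: match ? match[1] : null,      // null when no marker is present
    message: text.replace(/\*[^*]+\*/, '').trim()
  };
}
```

The attribute value can then be used to pick which field's messageCustom to populate.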

Rule 1. Unique Key validation rule for Email attribute.


Rule 2. Hire Date validation rule, it should not be in the future.


Rule 3. Salary range validation rule.


The mandatory check is performed in JET; if a mandatory UI field is not set, JET will prevent the call to the REST service. I have achieved this with the JET invalidComponentTracker property defined on the JET UI input fields:


The tracker variable is defined and accessed in JavaScript; here we evaluate whether there are any issues on the UI components before calling the REST service. Three variables are defined to report validation messages from ADF BC REST for the Hire Date, Email and Salary fields:


In the update function, before calling REST, I check that there are no issues with the UI components by obtaining the tracker:


The showMessages method is invoked on the tracker; if there are issues, focus is set on the first one. If this doesn't happen, it means there are no issues and the REST call is allowed:
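In outline, that guard looks roughly like the function below (the tracker property and method names follow the JET 2.x invalidComponentTracker API as I understand it; treat the exact calls as an assumption, not the post's actual code):

```javascript
// Sketch: decide whether the REST call may proceed, given a JET
// invalidComponentTracker object (API names assumed from JET 2.x docs).
function canSubmit(tracker) {
  if (!tracker) {
    return true;                       // nothing tracked yet - allow the call
  }
  if (tracker.invalidShown || tracker.invalidHidden) {
    tracker.showMessages();            // surface any hidden validation messages
    tracker.focusOnFirstInvalid();     // move focus to the first failing field
    return false;                      // block the REST call
  }
  return true;
}
```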


This is how it looks in the UI. I'm trying to update an empty row; there are mandatory fields, and the built-in functionality prevents me from submitting the data:


Let's move to the main topic of this post: handling validation errors from the ADF BC REST call. JET allows you to define success/error callbacks for a REST call action. If the ADF BC REST call fails with a validation issue, the error callback is invoked. Here we can get the response text, parse it and assign it to the UI fields. Multiple validation rules can fail at once in ADF BC, and REST returns multiple messages in one response. This is good: we can display all errors at once:


Messages are attached to a UI field through a special property (and the variable defined above), messageCustom. This must be set on the JET UI field:


In the JavaScript callback, we parse the response, construct the error message and assign it to messageCustom of the specific UI field. This is how the message is displayed in the UI, next to the field:


Here is an example of all three validation rules failing, with the messages displayed in the JET UI:


All three are reported in the REST call response by ADF BC validation. We just need to parse the returned message in Oracle JET and display it:


Download sample application - JETCRUDApp_v4.zip.

How Reliable is v$archive_dest_status?

Michael Dinh - Sun, 2016-01-10 13:23

v$archive_dest_status

Are there any DG experts out there who know more about the reliability of v$archive_dest_status? It has been many years since I wrote scripts to monitor DG.

Don’t want to reinvent the wheel if I cannot make it better.

Note: gap_status is only available from 11gR2 onwards.

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 11:06:55 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@hawklas):PRIMARY> select
  2    arch.dest_id,
  3    arch.thread# "thread",
  4    arch.sequence# "last sequence received",
  5    appl.sequence# "last sequence applied",
  6   (arch.sequence#-appl.sequence#) "difference"
  7  from
  8    (
  9      select dest_id,thread#,sequence# from v$archived_log
 10      where (thread#,first_time)
 11      in (select thread#,max(first_time) from v$archived_log where resetlogs_change#=(select resetlogs_change# from v$database) group by thread#)
 12    ) arch,
 13    (
 14      select thread# ,sequence# from v$log_history
 15      where (thread#,first_time)
 16      in (select thread#,max(first_time) from v$log_history where resetlogs_change#=(select resetlogs_change# from v$database) group by thread#)
 17    ) appl
 18  where arch.thread#=appl.thread#
 19  order by 1
 20  ;

DEST_ID thread last sequence received last sequence applied difference
------- ------ ---------------------- --------------------- ----------
      1      1                    234                   234          0
      2      1                    234                   234          0

Elapsed: 00:00:07.99
ARROW:(SYS@hawklas):PRIMARY> select
  2  DEST_ID,STATUS,TYPE,DATABASE_MODE,RECOVERY_MODE,STANDBY_LOGFILE_COUNT srl_ct,STANDBY_LOGFILE_ACTIVE srl_active,
  3  ARCHIVED_THREAD# thrd,ARCHIVED_SEQ# seq,APPLIED_SEQ# applied,SRL,
  4  GAP_STATUS, decode(ERROR,null,'NONE','ERROR') error
  5  from v$archive_dest_status
  6  where status!='INACTIVE'
  7  ;

DEST_ID STATUS    TYPE           DATABASE_MODE   RECOVERY_MODE           SRL_CT SRL_ACTIVE THRD  SEQ APPLIED SRL GAP_STATUS               ERROR
------- --------- -------------- --------------- ----------------------- ------ ---------- ---- ---- ------- --- ------------------------ -----
      1 VALID     LOCAL          OPEN            IDLE                         0          0    1  234       0 NO                           NONE
      2 VALID     PHYSICAL       MOUNTED-STANDBY MANAGED REAL TIME APPLY      4          0    1  234     233 YES NO GAP                   NONE

Elapsed: 00:00:00.01
ARROW:(SYS@hawklas):PRIMARY>
SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 10 11:06:36 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@hawksan):PHYSICAL STANDBY> select
  2    arch.dest_id,
  3    arch.thread# "thread",
  4    arch.sequence# "last sequence received",
  5    appl.sequence# "last sequence applied",
  6   (arch.sequence#-appl.sequence#) "difference"
  7  from
  8    (
  9      select dest_id,thread#,sequence# from v$archived_log
 10      where (thread#,first_time)
 11      in (select thread#,max(first_time) from v$archived_log where resetlogs_change#=(select resetlogs_change# from v$database) group by thread#)
 12    ) arch,
 13    (
 14      select thread# ,sequence# from v$log_history
 15      where (thread#,first_time)
 16      in (select thread#,max(first_time) from v$log_history where resetlogs_change#=(select resetlogs_change# from v$database) group by thread#)
 17    ) appl
 18  where arch.thread#=appl.thread#
 19  order by 1
 20  ;

DEST_ID thread last sequence received last sequence applied difference
------- ------ ---------------------- --------------------- ----------
      1      1                    234                   234          0

Elapsed: 00:00:00.01
ARROW:(SYS@hawksan):PHYSICAL STANDBY> select
  2  DEST_ID,STATUS,TYPE,DATABASE_MODE,RECOVERY_MODE,STANDBY_LOGFILE_COUNT srl_ct,STANDBY_LOGFILE_ACTIVE srl_active,
  3  ARCHIVED_THREAD# thrd,ARCHIVED_SEQ# seq,APPLIED_SEQ# applied,SRL,
  4  GAP_STATUS, decode(ERROR,null,'NONE','ERROR') error
  5  from v$archive_dest_status
  6  where status!='INACTIVE'
  7  ;

DEST_ID STATUS    TYPE           DATABASE_MODE   RECOVERY_MODE           SRL_CT SRL_ACTIVE THRD  SEQ APPLIED SRL GAP_STATUS               ERROR
------- --------- -------------- --------------- ----------------------- ------ ---------- ---- ---- ------- --- ------------------------ -----
      1 VALID     LOCAL          MOUNTED-STANDBY MANAGED REAL TIME APPLY      0          0    1  234       0 NO                           NONE
      2 VALID     UNKNOWN        UNKNOWN         IDLE                         0          0    0    0       0 NO                           NONE
     32 VALID     UNKNOWN        UNKNOWN         IDLE                         0          0    1  234     234 NO                           NONE

Elapsed: 00:00:00.00
ARROW:(SYS@hawksan):PHYSICAL STANDBY>

DBMS_METADATA and SELECT_CATALOG_ROLE – Cat Herding in Oracle

The Anti-Kyte - Sun, 2016-01-10 11:28

Last year we got a kitten. Little Cleo was full of the joys of…well…being a cat. Then, one day, she just disappeared.
Several months later, having given up hope of ever seeing her again, we adopted Nutmeg.
Then, just before Christmas, Cleo suddenly re-appeared.
It’s a complete mystery as to where she had been for the last year and she has not condescended to comment on the matter.
The end result is that we are now a two cat family.
This brings with it certain complications.
When they aren’t studiously ignoring each other, the cats sit there giving each other hard stares for hours on end.
I think there may be some tension over exactly who owns that fluffy ball.
To ensure that our sleep is not disturbed by these two fighting like cats in a sack, it’s necessary to ensure that they are in separate rooms before we retire for the evening.
As a result we’ve become rather expert at the art of Cat Herding, which largely consists of bribery with cat-nip.

Whilst acquiring a reputation as a “dealer” among the feline population of Milton Keynes, I have had cause to reflect on the similarity of our new hobby with the trials and tribulations of persuading DBMS_METADATA.GET_DDL that you do actually have permissions to see the source code you’ve asked for.

This can be a fairly tortuous process. In what follows I will be examining just why SELECT_CATALOG_ROLE is DBMS_METADATA cat-nip…and why SELECT ANY DICTIONARY isn’t.
I’ll also look at how you can stop chasing your tail and get this function to behave itself when invoked from within a stored program unit…

SELECT ANY DICTIONARY

According to the Oracle Documentation, the SELECT ANY DICTIONARY privilege allows you to :

“Query any data dictionary object in the SYS schema. This privilege lets you selectively override the default FALSE setting of the O7_DICTIONARY_ACCESSIBILITY initialization parameter.”

Some explanation is required here. Prior to Oracle 8, a user with the SELECT ANY TABLE privilege had access to any table in the database. From 8 onwards, this privilege was restricted to exclude the Data Dictionary. This is where SELECT ANY DICTIONARY came in.

Before we go any further, let’s just make sure that the O7_DICTIONARY_ACCESSIBILITY parameter is not set to TRUE…

select value
from v$parameter
where name = 'O7_DICTIONARY_ACCESSIBILITY'
/

VALUE
--------------------------------------------------------------------------------
FALSE

Now, to see the effect of this privilege on DBMS_METADATA.GET_DDL, let’s create a user :

grant create session, select any dictionary
    to cleo identified by password_you_can_type_with_paws
/

If we now connect to the database as cleo, we can see that she has permissions to query the DBA_SOURCE view, among other things, and therefore to retrieve the source for a procedure in the HR schema :

set heading off
set pages 500
set lines 130
select text
from dba_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/

Sure enough, we get the expected output :

PROCEDURE add_job_history
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

13 rows selected.

SQL>

By default, EXECUTE on the DBMS_METADATA package is granted to PUBLIC. So, invoking the GET_DDL function for the same program unit should return the DDL statement required to re-create it. We already know we have access to the source so this should be no problem, right ?

set long 5000
select dbms_metadata.get_ddl( schema => 'HR', name => 'ADD_JOB_HISTORY', object_type => 'PROCEDURE')
from dual
/
ERROR:
ORA-31603: object "ADD_JOB_HISTORY" of type PROCEDURE not found in schema "HR"
ORA-06512: at "SYS.DBMS_METADATA", line 4018
ORA-06512: at "SYS.DBMS_METADATA", line 5843
ORA-06512: at line 1

no rows selected

SQL>

Has Oracle forgotten about this procedure ? Maybe it’s rolled under the sofa ?
Either way, further investigation would seem to be in order…

No SELECT_CATALOG_ROLE, no comment

Like a cat, DBMS_METADATA.GET_DDL can appear to be, to put it delicately, rather independently minded. It certainly doesn’t always do what it’s told.

To try to determine what’s happening, we could do some tracing and look through the recursive statements to see precisely which one is causing the error. However, there is a quicker way.
Let’s start by looking at the comments in the DBMS_METADATA package header :

-- SECURITY
-- This package is owned by SYS with execute access granted to PUBLIC.
-- It runs with invokers rights, i.e., with the security profile of
-- the caller.  It calls DBMS_METADATA_INT to perform privileged
-- functions.
-- The object views defined in catmeta.sql implement the package's security
-- policy via the WHERE clause on the public views which include syntax to
-- control user access to metadata: if the current user is SYS or has
-- SELECT_CATALOG_ROLE, then all objects are visible; otherwise, only
-- objects in the schema of the current user are visible.

This points us in the direction of the file…

$ORACLE_HOME/rdbms/admin/catmeta.sql

Sure enough, when we get to line 10209, things become a bit clearer…

-- base view for procedures, functions, packages and package bodies

create or replace force view ku$_base_proc_view of ku$_proc_t
  with object identifier (obj_num) as
  select '1','1',
         oo.obj#,
         oo.type#,
         value(o),
         sys.dbms_metadata_util.get_source_lines(oo.name,oo.obj#,oo.type#)
  from  sys.ku$_edition_schemaobj_view o, sys.ku$_edition_obj_view oo
  where (oo.type# = 7 or oo.type# = 8 or oo.type# = 9 or oo.type# = 11)
    and oo.obj#  = o.obj_num and oo.linkname is NULL
         AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
              EXISTS ( SELECT * FROM sys.session_roles
                       WHERE role='SELECT_CATALOG_ROLE' ))
/

Unless you are either the owner of the stored program unit you’re trying to retrieve, or connected as USER_ID 0 (i.e. SYS), the only way that you’re going to get anything back from a query on this view is if you have been granted a role called SELECT_CATALOG_ROLE.
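You can check directly whether the role is active in your current session. Connected as cleo (who has SELECT ANY DICTIONARY but not the role), this query should come back empty :

select role
from session_roles
where role = 'SELECT_CATALOG_ROLE'
/

no rows selected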

To verify this, let’s create another user…

grant create session
    to nutmeg identified by must_have_catnip
/

grant select_catalog_role
    to nutmeg
/

Once again, we have access to the DBA_SOURCE view…

set heading off
set pages 500
set lines 130
select text
from dba_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/

PROCEDURE add_job_history
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

13 rows selected.

SQL>

…however, we can now also use DBMS_METADATA.GET_DDL …

set long 5000
select dbms_metadata.get_ddl( schema => 'HR', name => 'ADD_JOB_HISTORY', object_type => 'PROCEDURE')
from dual
/
  CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date,
 p_job_id, p_department_id);
END add_job_history;

SQL>

It seems that DBMS_METADATA has retrieved its fluffy ball from under the sofa.

DBMS_METADATA.GET_DDL in a Stored Program Unit

The fact that SELECT_CATALOG_ROLE is a role rather than a system privilege does tend to make life interesting if you put calls to DBMS_METADATA.GET_DDL into a stored program unit.

To demonstrate, let’s create a function that does just that (in the nutmeg schema – i.e. a schema that does not have the role granted to it) :

alter session set current_schema = nutmeg
/

create or replace function get_ddl( i_owner in varchar2, i_name in varchar2, i_type in varchar2)
    return clob
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    return dbms_metadata.get_ddl( schema => i_owner, name => i_name, object_type => i_type);
end;
/

This should work fine for a user with the role granted, shouldn’t it ?
Let’s test it (once again, connected as cleo)…

set serveroutput on size unlimited
select get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/
ERROR:
ORA-31603: object "ADD_JOB_HISTORY" of type PROCEDURE not found in schema "HR"
ORA-06512: at "SYS.DBMS_METADATA", line 4018
ORA-06512: at "SYS.DBMS_METADATA", line 5843
ORA-06512: at "NUTMEG.GET_DDL", line 10

no rows selected

Listing active session roles...
Roles listed
SQL>

Because we’re calling the DBMS_METADATA.GET_DDL function from inside a stored program unit, the role is disabled.
In order for this to work we need to make the stored program unit invoker’s rights…

alter session set current_schema = nutmeg
/

create or replace function get_ddl( i_owner in varchar2, i_name in varchar2, i_type in varchar2)
    return clob
    authid current_user
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    return dbms_metadata.get_ddl( schema => i_owner, name => i_name, object_type => i_type);
end;
/

Now, when we invoke this function (as cleo once again), the role is still applicable…

set serveroutput on size unlimited
set long 5000
set heading off
select get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

  CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN

  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

Listing active session roles...
Role : SELECT_CATALOG_ROLE
Role : HS_ADMIN_SELECT_ROLE
Roles listed
SQL>

It would seem then, that only a user granted SELECT_CATALOG_ROLE can usefully call DBMS_METADATA.GET_DDL from a stored program unit to retrieve DDL for objects not owned by them.

Something you might want to consider at this point, is that SELECT_CATALOG_ROLE is itself granted the HS_ADMIN_SELECT_ROLE role. Between them, these roles have SELECT access on quite a large number of SYS objects :

select count(*)
from dba_tab_privs
where grantee in ('SELECT_CATALOG_ROLE', 'HS_ADMIN_SELECT_ROLE')
/

  COUNT(*)
----------
      2207

SQL>
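The nested grant itself can be confirmed by querying ROLE_ROLE_PRIVS; on this version you should see HS_ADMIN_SELECT_ROLE listed :

select granted_role
from role_role_privs
where role = 'SELECT_CATALOG_ROLE'
/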

At this point, you may well ask if there is any way for users to utilise our function without having this role granted.
After all, a common approach to application security is to grant users execute access on a stored program unit without giving them any visibility of the underlying tables and views.

Well, there is…

Doing without the SELECT_CATALOG_ROLE

Before we go any further I think I should point out that there are several issues with creating objects in the SYS schema.

The objects in the schema are effectively part of the Oracle software. The schema can be seen as being analogous to root on a *nix system.

There are various admonitions against performing DDL in the SYS schema, not least in Oracle’s own documentation and support notes.

As with most “golden rules” however, there is at least one exception – in this case, the Password Verify Function springs to mind.

I suppose the best reason for avoiding this sort of thing is that it would only take one moment of inattention to cause potentially irreparable damage to your Oracle installation. Even with a backup, re-installing Oracle is no trivial matter. Therefore, I strongly suggest that you consider carefully whether the benefits of the approach I’m about to take here outweigh the issues of granting SELECT_CATALOG_ROLE.

So then, as SYS…

create function get_ddl( i_owner in varchar2, i_name in varchar2, i_type in varchar2)
    return clob
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    dbms_output.put_line('Userid is : '||sys_context('userenv', 'current_userid'));
    return dbms_metadata.get_ddl( schema => i_owner, name => i_name, object_type => i_type);
end;
/

grant execute on get_ddl to cleo
/

Note that we’re using CREATE rather than CREATE OR REPLACE to ensure that we don’t accidentally overwrite anything.

Now, when we call this function as cleo…

set serveroutput on
set heading off
set lines 130
set long 5000
select sys.get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

  CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN

  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date,
 p_job_id, p_department_id);
END add_job_history;

Listing active session roles...
Roles listed
Userid is : 0
SQL>

The fact that we don’t have the role becomes irrelevant because the function is running with definer’s rights (i.e. as SYS, which has a CURRENT_USERID of 0).
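Incidentally, if you ever need to confirm which way a given program unit has been defined, the AUTHID column of DBA_PROCEDURES (or USER_PROCEDURES) records whether it runs with definer’s or invoker’s rights :

select owner, object_name, authid
from dba_procedures
where object_name = 'GET_DDL'
/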

Conclusions

The security model implemented for DBMS_METADATA is quite unusual for Oracle supplied PL/SQL packages.
As we have seen, this package, and especially its GET_DDL function, often behaves in unexpected (and possibly entertaining) ways.


Filed under: Oracle, PL/SQL Tagged: 07_dictionary_accessibility, catmeta.sql, current_userid, DBMS_METADATA, DBMS_METADATA.GET_DDL, definer's rights, ku$_edition_schemaobj_view, ORA-31603, select any dictionary, SELECT_CATALOG_ROLE, sys.session_roles, SYS_CONTEXT

Public Appearances H1 2016

Tanel Poder - Sat, 2016-01-09 21:53

Here’s where I’ll hang out in the following months:

26-28 January 2016: BIWA Summit 2016 in Redwood Shores, CA

10-11 February 2016: RMOUG Training Days in Denver, CO

25 February 2016: Yorkshire Database (YoDB) in Leeds, UK

6-10 March 2016: Hotsos Symposium, Dallas, TX

10-14 April 2016: IOUG Collaborate, Las Vegas, NV

  • Beer session: Not speaking myself but planning to hang out on a first couple of conference days, drink beer and attend Gluent colleague Maxym Kharchenko‘s presentations

24-26 April 2016: Enkitec E4, Barcelona, Spain

18-19 May 2016: Great Lakes Oracle Conference (GLOC) in Cleveland, OH

  • I plan to submit abstracts (and hope to get some accepted :)
  • The abstract submission is still open until 1st February 2016

2-3 June 2016: AMIS 25 – Beyond the Horizon near Leiden, Netherlands

  • This AMIS 25th anniversary event will take place in a pretty cool location – an old military airport hangar (and abstract submission is still open :)
  • I plan to deliver 2 presentations, one about the usual Oracle performance stuff I do and one about Hadoop

5-7 June 2016: Enkitec E4, Dallas, TX

 

As you can see, I have changed my “I don’t want to travel anymore” policy ;-)

 

NB! If you want to move to the "New World" - and benefit from the awesomeness of Hadoop, without having to re-engineer your existing applications - check out Gluent, my new startup that will make history! ;-)


VR Skeptic: 1994

Oracle AppsLab - Sat, 2016-01-09 12:52

Here is a blast from the past: a letter I wrote to some friends back in 1994 about my very first VR experience.

VR enjoyed a brief spin as the next big thing that year. Jaron Lanier had been featured in the second issue of Wired magazine and virtual reality arcades began to appear in the hipper shopping malls. In short, it was as hyped and inevitable then as it is again today.

So without any further ado, here is my unedited account of what virtual reality was like in 1994:

A view of the Dactyl Nightmare playing field

 

For my birthday last weekend, Janine, Betsy, Tom and I tried an interesting experiment: a virtual reality arcade in San Francisco called Cyber Mind.

It was a pleasant enough little boutique, not overrun by pimply-faced hordes as I had expected. They had a total of ten machines: two sit-down, one-person flight simulator contraptions, and two sets of four networked platforms. We chose to play one of the four-person games called Dactyl Nightmare.

I stepped up on a sleek-looking platform and an attendant lowered a railing over my head so that I would not wander off while my mind was in other worlds. I then strapped on a belt with more equipment and cables and donned a six pound Darth-Vader-on-steroids helmet. The attendant placed a gun in my hand.

When they pulled the switch I found myself standing on a little chessboard floating in space. There were a total of four such platforms with stairs leading down to a larger chessboard, all decorated by arches and columns. I could look in any direction and if I held out my arm I could see a computer-generated rendition of my arm flailing around, holding a gun. If I pushed the thumb switch on top of the gun I began to walk in whatever direction I was looking in.

I began bumping into columns and stumbling down stairs. It wasn’t long before I saw Janine, Betsy, and Tom also stumbling around, walking with an odd gait, and pointing guns at me. The game, as old as childhood itself, was to get them before they could get me. Usually, by the time I could get my bearings and take careful aim, someone else (usually Betsy) had sneaked up behind me and blasted me into computer-generated smithereens. After a few seconds, I reformed and rejoined the hunt.

This happy situation was somewhat complicated by a large green Pterodactyl with the bad habit of swooping down and carrying off anyone who kept firing their guns into the ground (which was usually where I tended to fire). If you were true of heart and steady of aim you could blast the creature just before its claws sank in. I managed this once, but the other three or four times I was carried HIGH above the little chessboard world and unceremoniously dropped.

After a quick six minutes it was all over. The total cost was $20 for the four of us, about a dollar per person per minute. I found the graphics interesting but not compelling and resolved to come back in a few years when the technology had improved.

I was not dizzy or disoriented during the game itself, but I emerged from my helmet slightly seasick, especially after the second round. This feeling persisted for the rest of the day. But it was a worthy experiment. Twenty dollars and a dizzy day: a small price to pay for my first glimpse at virtual reality.

VR Skeptic: Immersion and Serendipity

Oracle AppsLab - Sat, 2016-01-09 12:42

 

John staring boldly into the future of VR

Look, I’m as fond of holodecks and the matrix as the next nerd. I was having queasy VR experiences back in 1994. That’s me just last month, strapped into a cheap plastic viewer, staring boldly into the future. I’ve been thinking and writing about virtual reality for over twenty years now.

But are we there yet? Is VR ready for mainstream adoption? And, aside from a few obvious niche cases, does it have any significant relevance for the enterprise?

This is the first of a series of “VR Skeptic” blog posts that will explore these questions. The AppsLab has already started to acquire some VR gear and is hunting for enterprise use cases. I’ll share what I find along the way.

So why am I a skeptic? Despite all the breathless reviews of the Oculus Rift over the last few years and the current hype storm at CES, the VR industry still faces many serious hurdles:

  • Chicken and egg: developers need a market, the market needs developers
  • Hand controls remain awkward
  • The headsets are still bulky
  • Most PCs will need an upgrade to keep up
  • People are still getting sick

To this I would add what I’ve noticed about the user experience while sampling Google Cardboard VR content:

  • Limited range of view unless you’re sitting in a swivel chair
  • Viewer fatigue after wearing the goggles for a few minutes
  • Likelihood of missing key content and affordances behind you
  • Little or no interactivity
  • Low quality resolution for reading text

But every time I’m ready to give up on VR, something like this pulls me back: Google Cardboard Saves Baby’s Life  (hat tip to my colleague Cindy Fong for finding this).

Immersion and Serendipity

I think VR does have two unique qualities that might help us figure out where it could make an impact: immersion and serendipity.

Immersion refers to the distinct feeling of being “inside” an experience. When done well, this can be a powerful effect, the difference between watching a tiger through the bars of its cage and being in the cage with the tiger. Your level of engagement rises dramatically. You are impelled to give the experience your full attention – no multi-tasking! And you may feel a greater sense of empathy with people portrayed in whatever situation you find yourself in.

Serendipity refers to the potential for creative discovery amidst the jumbled overflow of information that tends to come with VR. A VR experience typically shows you more content than you can easily absorb, including many things you won’t even see unless you happen to be looking in the right direction at the right moment. This makes it harder to guide users through a fixed presentation of information. But it might be an advantage in situations where users need to explore vast spaces, each one using his or her instincts to find unique, unpredictable insights.

It might be fruitful, then, to look for enterprise use cases that require either immersion or serendipity. For immersion this might include sales presentations or job training. Serendipity could play a role in ideation (e.g. creating marketing campaigns) or investigation (e.g. audits or budget reconciliations where you don’t know what you’re looking for until you find it).

Collaboration

Because VR content and viewers are not yet ubiquitous, VR today tends to be a solitary experience. There are certainly a number of solitary enterprise use cases, but the essence of “enterprise” is collaboration: multiple people working together to achieve things no single person could. So if there is a killer enterprise VR app, my guess is that it will involve rich collaboration.

The most obvious example is virtual meetings. Business people still fill airports because phone and even video conferences cannot fully replace the subtleties and bonding opportunities that happen when people share a physical space. If VR meetings could achieve enough verisimilitude to close a tricky business deal or facilitate a delicate negotiation, that would be a game changer. But this is a very hard problem. The AppsLab will keep thinking about this, but I don’t see a breakthrough anytime soon.

Are there any easier collaborations that could benefit from VR? Instead of meetings with an unlimited number of participants, perhaps we could start with use cases involving just two people. And since capturing subtle facial expressions and gestures is hard, maybe we could look for situations which are less about personal interactions and more about a mutual exploration of some kind of visualized information space.

One example I heard about at last year’s EyeO conference was the Mars Rover team’s use of Microsoft’s HoloLens. Two team members in remote locations could seem to be standing together in a Martian crater as they decide where to send the rover next. One could find an interesting rock and call the other over to look at it. One could stand next to a distant feature to give the other a better sense of scale.

Can we find a similar (but more mundane) situation? Maybe an on-site construction worker with an AR headset sharing a live 360 view with a remote supervisor with a VR headset. In addition to seeing what the worker sees, the supervisor could point to elements that the worker would then see highlighted in his AR display. Or maybe two office managers taking a stroll together through a virtual floor plan allocating cubicle assignments.

These are some of the ideas I hope to explore in future installments. Stay tuned and please join the conversation.

Oracle Linux : UEK4 Released

Tim Hall - Sat, 2016-01-09 07:29

I wrote a post a couple of months ago called Which version of Oracle Linux should I pick for Oracle server product installations? One of the points I raised was that using UEK gives you all the latest kernel goodies, regardless of being on an older release, like OL6.

I saw a post today about the release of UEK4, so now you have access to all the improvements in the 4.1 mainline Linux kernel, whether you are running OL6 or OL7. That just goes to prove the point really.

If you are running RHEL, you might be feeling pressure to move from RHEL6 to RHEL7 to get a bunch of the kernel enhancements that came with it. If you are running OL6, just switch to UEK4 and your kernel is ahead of the RHEL7 kernel. No stress and no having to deal with systemd and firewalld. :)
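For what it’s worth, the switch on OL6 is just a matter of enabling the UEK Release 4 yum channel and installing the new kernel. A rough sketch, assuming the public-yum repository layout (the repo id on your system may differ):

# enable the UEK Release 4 channel (requires yum-utils)
sudo yum-config-manager --enable ol6_UEKR4
# install the UEK4 kernel, then boot into it
sudo yum install kernel-uek
sudo reboot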

Cheers

Tim…

Oracle Linux : UEK4 Released was first posted on January 9, 2016 at 2:29 pm.