Feed aggregator

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - 5 min 56 sec ago

Oracle Application Express is a great rapid application development tool where you can write your application's functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Oracle Security and Electronics

Pete Finnigan - 5 min 56 sec ago

How do Oracle Security and Electronics mix together? - Well I started my working life in 1979 as an apprentice electrician in a factory here in York, England where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Categories: Security Blogs

New Conference Speaking Dates Added

Pete Finnigan - 5 min 56 sec ago

In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]

Posted by Pete On 06/07/15 At 09:40 AM

Categories: Security Blogs

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - 5 min 56 sec ago

Make a Sad Face..:-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post, last year and until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - 5 min 56 sec ago

I wrote a paper about Oracle Database Vault in 12c for SANS last year and this was published in January 2015 by SANS on their website. I also prepared and did a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Unique Oracle Security Trainings In York, England, September 2015

Pete Finnigan - 5 min 56 sec ago

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page PDF flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Categories: Security Blogs

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - 5 min 56 sec ago

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - 5 min 56 sec ago

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

IN/EXISTS bugs

Jonathan Lewis - 3 hours 45 min ago

Here’s a simple data set – I’m only interested in three of the columns in the work that follows, but it’s a data set that I use for a number of different models:


execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;
create table t2 nologging 
as
select * from t1
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt 	 => 'for all columns size 1'
	);

	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T2',
		method_opt 	 => 'for all columns size 1'
	);
end;
/

The columns I want to consider are n_3, n_400, and n_1000. As their names suggest the columns have 3, 400, and 1000 distinct values respectively and since I’ve used the dbms_random.value() function to generate the data the distinct values are fairly evenly spread across the million rows of the table.

Consider, then, the following two queries:


select
        *
from
        t1
where
        exists (
                select  null
                from    t2
                where   n_1000 = 0
                and     t2.n_400 = t1.n_400
                and     t2.n_3 = t1.n_3
        )
;


select
        *
from
        t1
where
        (t1.n_400, t1.n_3) in (
                select  t2.n_400, t2.n_3
                from    t2
                where   t2.n_1000 = 0
        )
;

The first point to check is that these two queries are logically equivalent.

Once you’re happy with that idea we can work out, informally, how many rows we should expect the queries to return: there are 1,200 combinations for (n_400, n_3) so each combination should return roughly 833 rows; if we pick 1,000 rows from the 1 million available we can expect to see 679 of those combinations (that’s Alberto Dell’Era’s “selection without replacement” formula that Oracle uses for adjusting num_distinct to allow for filter predicates). So we might reasonably estimate the final number of rows as 833 * 679 = 565,607. It turns out that that’s a pretty good estimate – when I ran the query the result was actually 567,018 rows.
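
If you want to check the informal arithmetic rather than take it on trust, a couple of quick queries will do it – this is just a sketch reusing the t1/t2 tables created above, counting the distinct (n_400, n_3) combinations the subquery identifies and the actual number of rows returned:


select  count(*)        subquery_combinations
from    (
        select  distinct n_400, n_3
        from    t2
        where   n_1000 = 0
        )
;

select  count(*)        rows_returned
from    t1
where   (t1.n_400, t1.n_3) in (
                select  t2.n_400, t2.n_3
                from    t2
                where   t2.n_1000 = 0
        )
;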

So what does Oracle produce for the two execution plans – here are the results from 12c (EXISTS first, then IN):


===================
Multi-column EXISTS
===================
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   920K|    34M|  1259  (11)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT SEMI|      |   920K|    34M|  1259  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   3 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_400"="T1"."N_400" AND "T2"."N_3"="T1"."N_3")
   2 - filter("N_1000"=0)

===================
Equivalent IN query
===================
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   833K|    30M|  1259  (11)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT SEMI|      |   833K|    30M|  1259  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   3 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N_400"="T2"."N_400" AND "T1"."N_3"="T2"."N_3")
   2 - filter("T2"."N_1000"=0)

The first thing to note is that the shape of the plans is identical, and the predicate sections are identical – but the final cardinalities are different. Clearly at least one of the cardinalities has to be wrong by a significant amount (9.5% or 10.4%, depending which way round you want to look at it). If you run the test on 11.2.0.4 you find that both plans give the same estimated row count – and it’s the 920,000 rows; so arguably 12c has “fixed” the IN subquery calculation, bringing it closer to a reasonable prediction, but it hasn’t fixed the EXISTS subquery calculation. That 833K prediction, by the way, is what you would expect to see with this data with a basic join – and a semi-join shouldn’t be able to produce more data than a join.

But both predictions are way off the (informal) expectation, so how have they appeared?

Working backwards it’s easy to spot that 833K = 833 * 1,000: Oracle is behaving as if every single row identified in the subquery will produce a separate combination of (n_400, n_3). If we reverse-engineer 920K we get: 920K / 833 = 1,104 – it would appear that the optimizer thinks the 1,000 rows produced by the subquery will produce 1,104 distinct combinations of (n_400, n_3). Since 1,000 rows cannot produce more than 1,000 distinct combinations, how did the impossible 1,104 appear in the arithmetic?

If you apply the “selection without replacement” formula to picking 1,000 rows with 400 distinct values from 1,000,000 rows the expected number of distinct values (with rounding) will be 368; if you apply the formula for picking 1,000 rows with 3 distinct values from 1,000,000 rows the expected number will be 3. And 3 * 368 = 1,104. (Remember that in my original estimate I applied the formula after multiplying out the combination of distinct values). The optimizer is using its standard methods, but using intermediate results in an unsuitable fashion.
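
For reference, a common way of writing the “selection without replacement” adjustment is D * (1 - (1 - n/N)^(N/D)) for a sample of n rows taken from N rows holding D distinct values – treat that form as my sketch rather than a quote from the Oracle code. A quick check from SQL*Plus reproduces the figures used in this note to within rounding:


select
        round( 400 * (1 - power(1 - 1e3/1e6, 1e6/400 )), 2)     adj_ndv_400,    -- ~367 (the "368" above)
        round(   3 * (1 - power(1 - 1e3/1e6, 1e6/3   )), 2)     adj_ndv_3,      -- 3
        round(1200 * (1 - power(1 - 1e3/1e6, 1e6/1200)), 2)     adj_ndv_1200    -- ~679
from    dual
;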

It’s impossible to say what the impact of this particular code path – and the change on the upgrade – might be. The optimizer has over-estimated by 47% in one case and 62% in the other but (a) there may be something about my data that exaggerated an effect that few people will see in the wild and (b) in many cases getting in the right ballpark is enough to get a reasonable plan, and a factor of 2 is the right ballpark.

Of course, a few people will be unlucky with a few queries on the upgrade where the estimate changes – after all a single row difference in the estimate can cause the optimizer to flip between a hash join and a nested loop – but at least you’ve got a little extra information that might help when you see a bad estimate on an important semi-join.

So is there a workaround? Given that I’ve got 12c, the obvious thing to try is to create a column group at both ends of the semi-join and see what happens. It shouldn’t really make any difference because column groups are targeted at the problems of correlated columns – but we might as well try it:


execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns (n_400,n_3) size 1')
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for columns (n_400,n_3) size 1')

Unfortunately when I did this the final cardinality estimate for both queries dropped to just 833 (the absence of a K on the end isn’t a typo!).

Manually unnesting got me closer:


select
        *
from
        (
        select  distinct n_3, n_400
        from    t2
        where   n_1000 = 0
        )       sq,
        t1
where   
        sq.n_400 = t1.n_400
and     sq.n_3 = t1.n_3
;

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   649K|    33M|  1260  (11)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   649K|    33M|  1260  (11)| 00:00:01 |
|   2 |   VIEW               |      |   779 | 20254 |   612   (8)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |   779 |  8569 |   612   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("SQ"."N_400"="T1"."N_400" AND "SQ"."N_3"="T1"."N_3")
   4 - filter("N_1000"=0)

The cardinality of 649K is (allowing for rounding) 833 * 779, so we need to know where the 779 came from. It’s the optimizer’s standard arithmetic for “distinct” – multiply the N individual selectivities together, then divide by sqrt(2) “N-1” times. So we apply the “selection without replacement” formula twice:

  • adjusted selectivity of n_400 = 367.21
  • adjusted selectivity of n_3 = 3
  • 367.21 * 3 / sqrt(2) = 779
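
A quick sanity check of that arithmetic – just the numbers quoted above plugged into SQL:


select
        round(367.21 * 3 / sqrt(2))             distinct_combinations,  -- 779
        round(833 * (367.21 * 3 / sqrt(2)))     estimated_join_rows     -- ~649K
from    dual
;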

If you create column group statistics for (n_400, n_3) this doesn’t change the optimizer’s estimate for the number of distinct combinations after selection – maybe that’s another enhancement in the pipeline – but, at least in this case, the manual unnesting has got us a little closer to the right estimates without any statistical intervention.

Footnote:

Just for the sake of completeness, here are the plans (with yet more cardinality predictions) that you get if you block the unnesting:


select 
	*
from 
	t1 
where 
	exists (
		select	
			/*+ no_unnest */
			null  
		from	t2 
		where	n_1000 = 0 
		and	t2.n_400 = t1.n_400 
		and	t2.n_3 = t1.n_3
	)
;



---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  1179 | 33012 |   766K (12)| 00:00:30 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   |  1000K|    26M|   632  (11)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |     1 |    11 |   638  (12)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T2" "T2" WHERE
              "N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2))
   3 - filter("N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2)



=====================================
Unnesting blocked and subquery pushed
=====================================
select 
	*
from 
	t1 
where 
	exists (
		select	
			/*+ no_unnest push_subq */
			null  
		from	t2 
		where	n_1000 = 0 
		and	t2.n_400 = t1.n_400 
		and	t2.n_3 = t1.n_3
	)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 50000 |  1367K|  1271  (12)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL | T1   | 50000 |  1367K|   632  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |     1 |    11 |   638  (12)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST */ 0 FROM "T2"
              "T2" WHERE "N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2))
   2 - filter("N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2)

The 1179 comes from the magic of sqrt(2):  1179 = 1,000,000 / (400 * 3 / sqrt(2)).

The 50,000 is just the basic “I dunno, let’s call it 5%”.
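
Both of those figures are easy to verify from SQL*Plus (a sketch only – the 5% is just the optimizer’s standard default for an opaque filter):


select
        round(1e6 / (400 * 3 / sqrt(2)))        filter_estimate,        -- 1179
        round(1e6 * 0.05)                       five_percent_guess      -- 50000
from    dual
;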

 

Reference script: aggregate_selectivity_c.sql

 


Bigger Than Ever―Oracle’s Commerce Solutions at OpenWorld 2015

Linda Fishman Hoyle - Tue, 2015-09-01 13:24

A Guest Post by Jeri Kelley (pictured left), Senior Principal Product Manager, Oracle

There are a lot of great reasons for Oracle Commerce customers to attend OpenWorld at the end of October, including in-depth product updates, many customer success stories, hands-on labs, and networking events. Attendees will walk away with a better understanding of how Oracle’s commerce solutions can help them stay competitive in today’s rapidly changing commerce market.

What’s New and Different?

  • Meet Oracle Commerce Cloud―it's the newest addition to Oracle’s CX Applications portfolio. See demos, learn about the roadmap, and hear directly from our first customers leveraging this new product
  • Check out the Hands-on Labs: See how you can quickly stand up an online storefront with Oracle Commerce Cloud
  • Catch the Interactive Customer Showcases in the CX Commerce Demo Zone, featuring Oracle Commerce and Commerce Cloud customers

All sessions and the demo zone for customer experience will be located on the 2nd floor of Moscone West in San Francisco.

Conference Sessions

Commerce attendees can explore best practices and share knowledge with more than 20 commerce-focused sessions:

  • Learn about roadmap and release updates
  • Get an in-depth look at Oracle Commerce Cloud
  • Attend thought-leadership sessions featuring Oracle strategy experts and industry analysts
  • Sit in on customer panels featuring both Oracle Commerce and Commerce Cloud customers
  • Experience manager and business control center best practice sessions
  • Listen to customer and partner case studies
  • Take part in more than just commerce-focused sessions and explore all that CX Central @ OpenWorld has to offer

Sessions of Special Interest

  • The Future of Oracle Commerce: Roadmap and Release Update (CON6303), Tuesday Oct. 27, 5:15-6:00 p.m., Moscone West, Room 2005
  • Meet Oracle Commerce Cloud―A New SaaS Solution for Commerce (CON8647), Wednesday, Oct. 28, 12:15-1:00 p.m., Moscone West Room 2005
  • Accelerating Success with Oracle Commerce―Panel discussion with KLX Aerospace, Tilly’s, and other Oracle Commerce Customers (CON8641), Tuesday, Oct. 27, 4:00-4:45 p.m., Moscone West, Room 2005
  • Building Commerce Experiences In The Cloud―Panel discussion with Rock/Creek, Hollander, and Elaine Turner (CON8842), Wednesday, Oct. 28, 3-3:45 p.m., Moscone West Room 2005

Guest Customer and Partner Appearances Include:

Vitamix, American Greetings, Maritz Reward Solutions, KLX Aerospace, Tilly’s, Ulta, Rock/Creek, Hollander, Elaine Turner, JC Penney, Furniture Row, TOMS, Bodybuilding.com, Lojas Renner, Verizon, Razorfish, Compasso, SapientNitro, Cirrus10, and more!

Commerce Demo Zone

Take a break in the CX-Commerce Demo Zone. You’ll see the latest Oracle Commerce product demonstrations led by members of the Oracle Commerce product management and sales consulting teams. Take note of the latest features and learn from our customers at these demonstrations:

  • Oracle Commerce On-Premise: See the latest features for both B2C and B2B commerce
  • Oracle Commerce Cloud: Learn all about our newest offering
  • Interactive Customer Showcase: Stop by and visit Oracle Commerce and Commerce Cloud customers as they showcase their latest product offerings. You also can see how they are using Oracle Commerce or Commerce Cloud to power their online shopping experiences.
    • Note: These customers will be offering special OpenWorld-only discounts on their products, so make sure to stop by! Featured customers include Vitamix, Rock/Creek, Elaine Turner, and Hollander.

Customer Events

Finally, a preview of Oracle Commerce at OpenWorld would not be complete without a mention of customer appreciation events:

  • Monday, October 26: Commerce Customer Dinner @ The Waterbar Restaurant; by invitation only and your chance to network with Oracle Commerce product management and your commerce peers.
  • Tuesday, October 27: CX customer appreciation event; planning is in progress!
  • Wednesday, October 28: Oracle Appreciation Event at Treasure Island!

At a Glance

Visit Commerce—CX Central @ OpenWorld for full details on speakers, conference sessions, exhibits and entertainment!

We look forward to seeing everyone in San Francisco, October 25–October 29, 2015!

A New Approach to Recruiting: Create a Haystack of Needles

Linda Fishman Hoyle - Tue, 2015-09-01 13:09

A Guest Post by Joachim Skura, (pictured left), member of Oracle’s HCM Sales Development team

For many years, proactive in-house recruitment efforts have been looked down upon in recruitment circles. The internal recruitment role within companies has been partially eclipsed by the work of dedicated recruitment consultants and agencies, exacerbated by the fact that in-house HR teams have enough to do without having to seek out talent.

As a result, responsibility for recruiting new employees has for many years been split between internal recruitment, advertising, and third-party recruitment agencies.

The problem with advertising is that it is something of a blunt instrument. Organizations will quite often not get enough applications from the high-quality talent they want and it can be a time-sapping experience going through piles of speculative applications. It is a process that has been likened to finding a needle in a haystack.

Using external recruitment agencies, meanwhile, is a huge cost that many businesses can ill afford. The effectiveness of such agencies is declining as is their ability to deliver volume in today’s competitive business environment.

It, therefore, seems clear that the time is right for companies to take another look at how they recruit. So how can they do this? The answer is simple: take the concept of the proactive recruitment desk, dust it off, and bring it into the 21st century.

The first step in this process is to take responsibility for recruitment away from HR. This may seem counterintuitive to many, but for me it is absolutely critical. Recruitment is a very different animal to core HR, combining as it does the disciplines of sales and assessment. To be effective, recruiters must have an excellent understanding of the business―in fact they must be every bit as much business professionals as HR professionals.

Of course, recruitment can only be split from HR with the complete buy-in of senior leadership. In part, this involves a recognition that getting the best people into the business can and does positively affect the bottom line; recruitment must, in short, be seen as mission-critical. The fact is that far too few businesses can currently claim this is the case.

The next step is to ensure the proactive recruitment desk has itself been staffed correctly. As I have said, recruitment requires people with two very distinct skills: the ability to sell a company and the ability to assess an individual. It is a mix of competencies most usually found in head-hunting agencies, and businesses should look to hire people who have come from that background. The result is a team of dedicated in-house recruiters trusted with a single responsibility: to use their industry expertise and extensive social networks to identify the best possible candidates for the business and activate them.

There are two other important features to be considered in a modern proactive recruitment desk. Firstly, the recruitment desk should play no part in the interview process. With a sound understanding of their industry, the recruiter’s focus should be squarely on pre-selection, as good pre-selection is the best possible preparation for line managers and senior managers to then assess the candidates’ fitness. For me, a good pre-selection will ensure that not only do the best candidates get through, but also that managers are armed with the right questions to ask during interviews.

Secondly, the proactive recruitment desk must maintain its competitive edge. Recruitment is a results-based process and teams will perform most effectively if they compete against third-party recruiters. Similarly, the pay structure for recruiters should model that of sales organizations, with a good 60/40 split between core pay and performance-related bonuses. This will encourage proactivity and service excellence from the recruitment desk.

In a former blog on the subject of recruitment, I discussed the need for modern electronic word-of-mouth marketing and the importance of building strong referral and candidate pools, supported by social capabilities in an integrated applicant tracking system. This is a vital component which should be built in to this new model for recruiting. With such an approach, we are no longer searching for a needle in a haystack but are, in effect, creating a haystack of needles.

That this new approach to recruiting is effective cannot be doubted. In fact we have adopted the model at Oracle for major recruitment drives in EMEA and APAC. The results are astonishing. Supported by a successful public relations effort in these regions to let people know we were recruiting, we managed to rapidly fill thousands of positions with just the right people. In fact, so successful was the approach that we now do almost all of our recruiting in-house.

For more information, go to Modern HR in the Cloud.

And if you would like to hear more about this transformative approach to recruiting, then please come to the Zukunft Personal in Cologne on September 15, 2015. Vice President Recruiting EMEA and CEE, Anne-Marie O’Donnell, will be giving a talk on how Oracle went about overhauling its approach to recruiting and the huge benefits we are enjoying as a result.

Index Usage – 4

Jonathan Lewis - Tue, 2015-09-01 11:41

Here’s a thought that came to me while I was writing up a note about identifying redundant indexes a few minutes ago. Sometimes you end up supporting applications with unexpected duplication of data and indexes and need to find ways to reduce overheads. Here’s some code modelling a scenario that I’ve seen more often than I like (actually, just once would be more often than I’d like):


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e5
)
select
        rownum                                          id,
        trunc(sysdate,'MM') + (rownum-1)/1440           date_time,
        trunc(sysdate,'MM') + trunc((rownum-1)/1440)    date_only,
        rpad('x',100)                                   padding
from
        generator       v1,
        generator       v2
where
	rownum <= 1e5
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

I’ve got a table holding one row per minute since the start of the month; there’s a column which holds the date and time accurate to the minute, and another column which is supposed to hold just the date part. Is it possible to create a single index that allows Oracle to handle queries relatively efficiently whether they refer to date_time or date_only? As a starting step could we get an index range scan on the same index for both of the following queries:


select
        max(id)
from
        t1
where
        date_only between sysdate-1 and sysdate
;


select
        max(id)
from
        t1
where
        date_time between sysdate-1 and sysdate
;

As Bob the Builder likes to say: “yes we can”.

There are a few lines of SQL between the table creation and the stats gathering that I didn’t show you. The first creates the constraint that describes the relationship between date_time and date_only – one is the truncated version of the other; the second defines the index we need, and the third (unfortunately) has to be there to declare the date_time column as a mandatory column:

alter table t1
        add constraint t1_trunc_date
        check(
                  date_only = trunc(date_time)
              and (   (date_only is null and date_time is null)
                   or (date_only is not null and date_time is not null)
              )
        )
;

create index t1_i1 on t1(trunc(date_time)) nologging;

alter table t1 modify (date_time not null);

(Given the requirement for date_time to be not null to get my indexing strategy to work, we could simplify the t1_trunc_date constraint to just (date_only = trunc(date_time)) if we declared date_only to be not null as well).
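
For completeness, here’s a sketch (untested here) of what that simplified variant would look like – you would put it in place of the original t1_trunc_date constraint:


alter table t1 drop constraint t1_trunc_date;

alter table t1 modify (date_only not null);

alter table t1
        add constraint t1_trunc_date
        check(date_only = trunc(date_time))
;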

With the extra lines of SQL included here are the resulting execution plans for the two queries (running on 11.2.0.4, but you get the same plans on 12.1.0.2):


=======================================
date_only between sysdate-1 and sysdate
=======================================

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |     1 |    21 |    92   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE               |       |     1 |    21 |            |          |
|*  2 |   FILTER                      |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |  4306 | 90426 |    92   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_ONLY"<=SYSDATE@! AND "DATE_ONLY">=SYSDATE@!-1)
   4 - access(TRUNC(INTERNAL_FUNCTION("DATE_TIME"))>=SYSDATE@!-1 AND
              TRUNC(INTERNAL_FUNCTION("DATE_TIME"))<=SYSDATE@!)
=======================================
date_time between sysdate-1 and sysdate
=======================================

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |     1 |    21 |    92   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE               |       |     1 |    21 |            |          |
|*  2 |   FILTER                      |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |  1442 | 30282 |    92   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_TIME"<=SYSDATE@! AND "DATE_TIME">=SYSDATE@!-1)
   4 - access(TRUNC(INTERNAL_FUNCTION("DATE_TIME"))>=TRUNC(SYSDATE@!-1) AND
              TRUNC(INTERNAL_FUNCTION("DATE_TIME"))<=TRUNC(SYSDATE@!))

The optimizer has managed to generate extra predicates in both cases by applying transitive closure to the critical constraint to produce queries that can be addressed (with some inefficiencies) through the single index.

Within limits, therefore, I can reduce two indexes to a single index. The strategy isn’t ideal but it may be appropriate in a few special cases. There are several problems that should be considered carefully:

  • The date_time column has to be declared not null for this optimization strategy to appear – that’s going to limit its applicability.
  • You may have more complex code where the transformation simply can’t be made to appear.
  • The introduction of the trunc() function may change the optimizer’s arithmetic in ways that cause plans to change for the worse.
  • (Most important) The index range scan is always a multiple of 24 hours, with the excess data discarded after you reach the table. If you have lots of time-based queries for short time intervals (e.g. less than 8 hours) then the extra work done may outweigh the benefit of reducing the number of indexes – especially if all the excess table visits turn into randomly scattered single block reads.

Despite these drawbacks you may decide that you have a case where the strategy is “good enough” to help you reduce the workload on your system at some critical times during the day or night.

 


Index Usage – 3

Jonathan Lewis - Tue, 2015-09-01 10:52

In my last note on index usage I introduced the idea of looking at v$segstat (or v$segment_statistics) and comparing the “logical reads” statistic with the “db block changes” statistic as an indicator of whether or not the index was used in execution plans. This week I’ll explain the idea and show you some results – with a little commentary – from a production system that was reported on the OTN database forum.

The idea is fairly simple (and simplistic). If you update a typical index you will traverse three blocks (root, branch, leaf) to find the index entry that has to be updated, so if the only reason you use an index is to find out which index entry has to be updated then the number of “db block changes” for that index will be (we hope) roughly one-third of the number of “session logical I/Os” for the index.

We can do some testing of this hypothesis with some simple SQL:


create table t1 nologging as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                  id,
        trunc(dbms_random.value(0,333333))      n1,
        rpad('x',100)                           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(id) using index nologging;
create index t1_i1 on t1(n1) nologging;

So I’ve got a table with a million rows, a primary key, and an index on a column of randomly generated data. Now all I need to do is run the following little script a few thousand times and check the segment stats – I’ve avoided using a pl/sql script because of all the special buffer-handling optimisations that could appear if I did:


exec :b1 := trunc(dbms_random.value(1,1000001))

update t1
        set n1 = trunc(dbms_random.value(0,333333))
        where   id = :b1;

commit;

There are various ways of checking the segment stats: you could simply take an AWR snapshot (or statspack snapshot at level 7) before and after the test – the results from the “Segments by …” sections of the report should tell you all you need to know – or you could run a simple piece of SQL like the following before and after the test and then do some arithmetic:

select
        object_name, statistic_name, value 
from
       v$segment_statistics
where
       owner = {your user name here}
and    object_name in ('T1','T1_PK','T1_I1')
and    statistic_name in (
              'db block changes',
              'logical reads'
)
and     value != 0
order by
        object_name,
        statistic_name
;

I happen to have some snapshot code in a little procedure that does the job I need, so my testbed code looks like this:

execute snap_my_stats.start_snap
execute snap_segstat.start_snap

set termout off
set serveroutput off

variable b1 number

@start_10000    -- invoke my script 10,000 times

spool test

set serveroutput on
set termout on

execute snap_segstat.end_snap
execute snap_my_stats.end_snap

spool off

The question is, what do we expect the results to look like, and what do they actually look like? Given we have 10,000 updates going on we might expect something like the following:

  • T1_PK – index access by primary key, 10,000 * 3 logical I/Os
  • T1 – 10,000 logical I/Os as we find the rows then 10,000 db block changes
  • T1_I1 – index access to find entry to be deleted (10,000 * 3 logical I/Os), repeated to find leaf block for insertion of new entry (10,000 * 3 logical I/Os), with 10,000 * 2 db block changes for the delete/insert actions.

Here are a few results from 12.1.0.2 – if I don’t include a commit in the update script:


12.1.0.2 with no commit
Segment stats
=======================
T1
logical reads                               20,016
db block changes                            19,952

T1_PK
logical reads                               30,016
physical reads                                  19
physical read requests                          19

T1_I1
logical reads                               60,000
db block changes                            21,616

Session Stats
=============
Name                                         Value
----                                         -----
session logical reads                      110,919
consistent gets                             30,051
consistent gets examination                 30,037
db block gets                               80,868
db block changes                            81,989

Some of the figures match the predictions very nicely – in particular the logical reads and db block changes on the T1_I1 index are amazing (so good I feel I have to promise that I didn’t fake them, or wait until after the test to make my prediction;)

There are, however, some anomalies: why have I got 20,000 logical reads and db block changes on the table when I did only 10,000 updates? I was surprised by this, but it is something I’ve seen before: Oracle was locking each row before updating it, so generating two changes and two redo entries (Op Codes 11.4 and 11.5). In the past I’d noticed this as a side effect of setting the audit_trail to DB, but it was happening here with audit_trail = none. (Something to add to my “todo” list – why is this happening, and when did it appear?)

You’ll also notice that the session level stats for logical reads nearly match the table and index level (20K + 30K + 60K = ca. 110K) while the db block changes stats are out by a factor of 2. Don’t forget that for each change to a table or index we make a change to an undo block describing how to reverse that change, so the 40,000 data changes are matched by a further 40,000 undo block changes; and on top of this every time we get the next undo block we change our transaction table entry in the undo segment header we’re using, and that accounts for most of the rest. The discrepancy in the number of logical reads is small because while we keep getting and releasing the table and index blocks, we pin the undo block from the moment we acquire it to the moment it’s full, so we don’t record extra logical reads each time we modify it.

Big observation

Based on the figures above, we could probably say that, for an index with a blevel = 2 (height = 3), if the number of db block changes recorded is close to one-third of the logical reads recorded, then that index is a good candidate for review: it may be an index that is not used to access data, one that does nothing except use up resources to keep itself up to date.
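
As a starting point for that sort of review, here’s a sketch of a query that ranks a schema’s indexes by their “change percentage” – the statistic names come straight from v$segment_statistics, but treat it as illustrative rather than a polished report (the case study below was based on a similar, more sophisticated report):


select
        object_name,
        sum(case when statistic_name = 'db block changes' then value end)   db_block_changes,
        sum(case when statistic_name = 'logical reads'    then value end)   logical_reads,
        round(
                100 *
                sum(case when statistic_name = 'db block changes' then value end) /
                nullif(sum(case when statistic_name = 'logical reads' then value end), 0)
        )   change_pct
from
        v$segment_statistics
where
        owner          = user
and     object_type    = 'INDEX'
and     statistic_name in ('db block changes', 'logical reads')
group by
        object_name
order by
        change_pct desc nulls last
;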

Big problem

Take a look at the statistics when I included the commit in my test case:

12.1.0.2 with commit
Segment Stats
====================
T1
logical reads                               20,000

T1_PK
logical reads                               30,000

T1_I1
logical reads                                  512
db block changes                               160

Session Stats
=============
Name                                         Value
----                                         -----
session logical reads                       80,625
consistent gets                             30,106
consistent gets examination                 30,039
db block gets                               50,519
db block changes                            60,489

Apparently my session has made 60,000 changes – but none of them applied to the table or index! In fact I haven’t even accessed the T1_I1 index! The segment statistics have to be wrong. Moreover, if I commit every update I ought to change a segment header block at the start and end of every update, which means I should see at least 20,000 more db block changes in the session (not 20,000 fewer); and since I’m not pinning undo blocks for long transactions I should see about 10,000 extra logical reads as I acquire 10,000 undo blocks at the start of each short transaction. The session statistics have to be wrong as well!

A quick check on the redo stream shows exactly the change vectors I expect to see for these transactions:

  • 11.4 – lock row piece (table)
  • 5.2 – start transaction (update undo segment header)
  • 11.5 – update row piece (table)
  • 10.4 – delete leaf row (index)
  • 10.2 – insert leaf row (index)
  • 5.4 – commit (update undo segment header)
  • 5.1 – update undo block (op 11.1 – undo table row operation)
  • 5.1 – update undo block (op 11.1 – undo table row operation)
  • 5.1 – update undo block (op 10.22 – undo leaf operation)
  • 5.1 – update undo block (op 10.22 – undo leaf operation)

That’s a total of 10 changes per transaction – which means 100,000 db block changes  in total, not 60,000.

This anomaly is so large that it HAS to make my suggested use of the segment stats suspect.  Fortunately, though, the error is in a direction that, while sapping our confidence, doesn’t make checking the numbers a completely pointless exercise.  If the error is such that we lose sight of the work done in modifying the index then the figures remaining are such that they increase our perception of the index as one that is being used for queries as well – in other words the error doesn’t make an index that’s used for queries look like an index that’s only used for self-maintenance.

Case Study

The following figures were the results from the OTN database forum posting that prompted me to write this note and the previous one:

[Image: the 26-index report from the OTN posting]

The poster has some code which gives a report of the indexes on a table (all 26 of them in this case) with their column definitions and segment statistics. What (tentative) clues do we get about these indexes as far as this article is concerned?

Conveniently the code arranges the indexes in order of “change percentage”, and we can see very easily that the first nine indexes in the list show “db block changes” > one-third of “logical reads”, the cut-off point for the article, so it’s worth taking a quick look at those indexes to see if they are suitable candidates for dropping. Inevitably the moment you start looking closely there are a number of observations to add to this starting point.

  1. Look at the number of changes in the first 12 indexes and notice how frequently numbers around 300,000 appear – perhaps that’s indicative of about 300,000 inserts taking place in the interval, in which case the first and 14th indexes (on (zcid) and (ps_spdh) respectively) must be on columns which are very frequently null and are therefore much smaller than the rest of the indexes. Even though the index on (zcid) is reported at 39%, perhaps this is an index with a blevel of 1 (height = 2) in which case its cut-off point would be 50% rather than 33% – which means it could well be used for a lot of queries.
  2. The tenth index on (dp_datetime) reports a “change percentage” of 26%, which is below the cut-off, but it’s worth noting that there are three other indexes (12, 13 and 21) on that table that start with a column called dp_datetime_date. Is dp_datetime_date the truncated value of dp_datetime, and is it a real column or a virtual column? Given my comments about the optimizer’s clever trick with indexes on trunc(date_column) in the second post in this series perhaps there’s scope here for getting rid of the dp_datetime index even though the simple numeric suggests that it probably is used for some queries.
  3. Of the three indexes starting with dp_datetime_date, one consists of just that single column – so perhaps (as suggested in the first post in this series) we could simply drop that too. Then, when we look at the other two (indexes 12 and 13) we note that index 13 is subject to five times as much change as index 12 (is that one insert plus 2 updates, given that an update means two changes), but fifteen times as much logical I/O. The extra LIO may be because the index is larger (so many more columns), or it may be because the index is used very inefficiently – either way, we might look very carefully at the column ordering to see if index 13 could be rearranged to start the same way as index 12, and then drop index 12. On top of everything else we might also want to check whether we have the right level of compression on the index – if it’s not very effective until we’ve selected on many columns then it must be subject to a lot of repetition in the first few columns.
  4. I gave a few examples in part one of reasons for dropping indexes based on similarity of columns used – the examples came from this output so I won’t repeat them, but if you refer back to them you will note that the desirability of some of the suggestions in the earlier article is reinforced by the workload statistics – for example: the similarity of indexes 24 and 25, with an exact ordered match on the first 4 columns, suggests that we consider combining the two indexes into a single index: the fact that both indexes were subject to 2.7 million changes makes this look like a highly desirable target.

Summary

There are a lot of indexes on this table but it looks as if we might be able to drop nearly half of them, although we will have to be very careful before we do so and will probably want to make a couple at a time invisible (and we can make the change “online” in 12c) for a while before dropping them.

Remember, though, that everything I’ve said in this note is guesswork based on a few simple numbers, and I want to emphasise an important point – this note wasn’t trying to tell you how to decide if an index could be dropped, it was pointing out that there’s a simple way to focus your attention on a few places where you’re most likely to find some indexes that are worth dropping. Run a report like this against the five biggest tables or the five busiest tables or the five tables with the most indexes and you’ll probably find a few easy wins as far as redundant indexes are concerned.

Footnote

While writing up my comments about the optimizer’s tricks with columns like dp_datetime and a virtual dp_datetime_date I had a sudden sneaky thought about how we could play games with the optimizer if both columns were real columns that were kept in synch with each other. If it works out I’ll write it up in a further blog.

 

 


You work for a software company. You don’t? Think again.

Pythian Group - Tue, 2015-09-01 09:26

If someone asks what business your company is in, you might say transportation, or networks, or retail, or a hundred other possibilities. But what you should be saying is that you are also in the software business.

At its core, your company is a software company. Or it should be.

Why? Because your competitors are growing in numbers, emerging from nowhere and aggressively using digital strategies to succeed over you.

To be successful, you must continually innovate and differentiate your company, no matter what your industry. You must do things better, faster, and cheaper. And you must engage your customers and your partners in new and meaningful ways. It doesn’t matter whether you’re a bank, a pharmaceutical company, or a logistics provider. Think like a startup and use software to stay one step ahead.

This connection can be easy if your business is already using software to provide differentiating product features or services. If you sell goods online, or you deliver content, or you offer software as a service, you know you’re in the software business. You probably focus on being more responsive and being agile, that is delivering new features faster than ever before and using data to gain business insights and to optimize the user experience.

For those companies who don’t initially think of themselves as software companies, it’s a little more interesting. In time, they will realize that software is what is differentiating them.

For example, Redline Communications thinks of itself as a wireless infrastructure company that delivers wireless networks in remote locations. In actuality, it uses software to add new features to its network components. It also uses software to expand a network’s capacity on demand, and to troubleshoot problems. Redline might manufacture hardware but it is solidly in the software business.

Pythian is often described as an IT services company, but it is undoubtedly a software company. Nobody at Pythian touches a customer’s critical production system or data without going through a software portal called Adminiscope that secures access and records all activity. Pythian doesn’t sell software, but it is absolutely in the software business.

Then there are the companies that would not traditionally be classified as technology businesses at all, but have clearly made the connection. And that doesn’t mean just having an online presence. Take retailer Neiman Marcus, a company that has consciously entered the software space with the development of apps like  “Snap. Find. Shop.” a tool that lets users take photos of a product they want and helps them track it down. They know they need to engage customers more personally, and the way to do that is through software that enables them to interact with customers, to understand and respond to buying behaviors and preferences.

KAR Auction Services, which you might know as a car auction company, has stated publicly that they no longer want to be a car auction company that uses technology but “a technology company that sells cars”. They know that software will drive the future of their business.

It is increasingly difficult to sell, deliver or support any product or service without using software. It is increasingly difficult to truly understand your business without being data driven, the byproduct of software. It is increasingly difficult to recruit employees without using software. Your customers and your employees expect you to be agile and responsive, and software helps you meet those expectations, and then measures, monitors, analyzes, and integrates data to keep you ahead of the game.

In today’s hyper-competitive world, your company must use software and technology to become agile in order to respond to ever-changing customer needs. Then you must remain just as aggressive by measuring, monitoring, evaluating, and responding to data about your products and services as well as their impact on your customers and their environment. Whether it’s customer feedback about product features, or changing market trends, you need to be ready to react and iterate your products and processes at lightning speed. Software is the one thing that’s going to enable that.

So what does it mean to use software to be competitive? It means departing from tradition. It means empowering IT to go beyond cutting costs to transform the business. It means empowering everyone in the company to innovate around software. It means encouraging radical disruptive ideas on how to change the business. And it means putting a digital strategy at the heart of your planning. And this is certainly what your competition is doing.

The post You work for a software company. You don’t? Think again. appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

The greatest OBIEE usage study ever…

Rittman Mead Consulting - Tue, 2015-09-01 09:03

Rittman Mead is excited to announce the first ever global OBIEE usage survey. Our goal is to provide deep analysis into how organisations use their OBIEE systems and to create a set of industry benchmarks. We will be releasing the results of this research for free to the OBIEE community.

The types of metrics we are looking at are:

  • DAU/MAU (daily active users/monthly active users) – this gives an approximation of utility;
  • Frequency of use – how often your users access dashboards;
  • Recency of use – when was the last time reports and dashboards were accessed;
  • Reach of system – how many users are accessing the system.

Here’s how it works: we need your Usage Tracking data. To make providing this data easier we can send you a script for your system administrator to run to extract this. We even have an option to obfuscate key attributes so we can’t see any usernames or sensitive details.
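
(For anyone curious about what the extract involves: metrics like DAU and MAU can be derived from OBIEE’s standard usage tracking table – by default S_NQ_ACCT – with queries along the lines of the sketch below. This is purely illustrative and is not the script we send out; the table and column names are the usual defaults and should be checked against your own configuration.)


-- Daily active users (illustrative only)
select  trunc(start_ts)             usage_day,
        count(distinct user_name)   dau
from    s_nq_acct
group by trunc(start_ts)
order by usage_day;

-- Monthly active users
select  trunc(start_ts, 'MM')       usage_month,
        count(distinct user_name)   mau
from    s_nq_acct
group by trunc(start_ts, 'MM')
order by usage_month;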

Once we receive your data, we will analyze it individually and provide you with a free report designed to give you unique insight into your system’s usage, an example of which is available here.

We will also add your obfuscated, depersonalised and aggregated data to our benchmarking database and let you know how your system performs against industry standards.

Please note: you do have to be running the usage tracking feature of OBIEE for this to work. We strongly advise having this running in any system and can help you get it turned on, if required. Also, any data you send to Rittman Mead is completely anonymous and holds no personal or sensitive attributes. It will only be used for benchmarking.

At the end of the survey we will perform a detailed analysis of the complete OBIEE usage database and publish the results.

How do I take part?

Please email us at ux@rittmanmead.com and we will send over the scripts and full instructions.

Why are we doing this?

We are currently focused on user engagement of BI and analytics systems and have been conducting research over the last few months. We have found very few tangible studies about enterprise BI usage, in particular OBIEE usage.

We are creating this database from OBIEE users around the world and will use this as the academic basis for furthering our research into user engagement and OBIEE.

Categories: BI & Warehousing

Oracle CIO's Take on Why Cloud Matters to IT

WebCenter Team - Tue, 2015-09-01 07:38

The benefits of an effective and comprehensive cloud file sharing and collaboration solution are quite obvious - you have access to the documents you need to be productive anytime, anywhere and on any device; remote employees and employees on the go are empowered; team collaboration and project work, whether among employees or with outside contractors and suppliers, is vastly simplified; and when delivered securely and in context, the solution drives work efficiency and employee satisfaction and even improves time to market, and so on.

But what does it mean for IT? Less governance overhead and fewer information silos to deal with; more efficient provisioning; automatic updates and almost zero maintenance; more self service and less administrative overhead; streamlined infrastructure and better governance. Here's what Oracle CIO and SVP Mark Sunday has to say about driving IT transformation with Oracle Documents Cloud Service.

And why Oracle Documents Cloud Service needs to be a ubiquitous solution for any organization.


For more information, visit oracle.com/digitalcollaboration.

ADF 12c Dynamic Forms with ADF BC UI Categories

Andrejus Baranovski - Tue, 2015-09-01 04:54
ADF 12c offers a completely new way to implement UI forms. We can implement dynamic forms based on static ADF BC components. Such an approach could simplify form maintenance - when there are changes in the DB, the developer only needs to adjust the ADF BC EO/VO structure, without changing anything on the UI. The bindings layer can determine the data structure dynamically and dictate how to render UI blocks. The ADF BC VO provides a special metadata option to define how the form UI should be rendered - UI Categories. Through categories we can group attributes, and this helps to render a dynamic UI with separate blocks and proper ordering.

Using UI Categories is straightforward: add a category group and move attributes under the category. A label and tooltip can be defined; the label becomes the title for the UI block:


ADF 12c uses a general collection binding for the dynamic UI; it doesn't depend on specific attribute bindings:


The collection binding is generic - without a list of accessible attributes, it just points to the VO. This means we can change the list of attributes registered in the VO at any time:


This is how a dynamic form generated in ADF 12c looks. I have customised it a bit, but the major part is generated automatically by JDeveloper 12c. There is a global iterator over UI Categories which renders the UI blocks, and an inner iterator renders each UI Category's attributes. Each attribute is rendered through the ADF 12c dynamic component, so there is no dependency on the attribute's UI implementation (input text, LOV, choice list, etc.) - the UI implementation is controlled from the attribute UI hints in ADF BC:


It is easier to understand how the dynamic form is organised from the UI structure view:


This is the result - the dynamic form is rendered with two blocks, based on the defined UI Categories:


I have customised the dynamic form rendering using metadata. Buttons are rendered only in the first block, identified by PersonalData:


The block identified by JobData is rendered with a maximum of 4 rows:


I hope this post gives you some new ideas about robust ADF UI implementation. Download the sample application - ADFAltaApp_v7.zip.

How to Call All Operators for Free over the Internet

Daniel Fink - Tue, 2015-09-01 01:42
How to Call All Operators for Free over the Internet - Are you out of credit right now and want to call family, friends or a partner? A few of these latest 2015 ways to call any operator for free should help you out of that problem. Most of the free-call applications I list below are PC-to-phone programs, which means you can make free calls from your computer to a real phone number, so you do not even need a mobile phone of your own. Some other free internet calling programs will connect your real phone to another real phone for free, which makes expensive long-distance plans unnecessary, and however it works, it is still a free call.

How to Call All Operators for Free over the Internet
Yes, making a call the free way is easy and not difficult at all. We can use an available internet connection and call the number we want without thinking about credit. These free internet calling services work with every operator in Indonesia, including Telkomsel, Indosat, XL, Three and the others. The method is also quite simple, because several sites provide the free calling service at no charge. If you are far from home or working away, rarely see your family and have no money, you surely miss them, so this time I will share how to call all operators for free over the internet; below are the websites you can use.

That said, there are a few requirements users must meet to make free calls through these sites. So which sites provide free calls, and what are the requirements? Read the tips and tricks for free calls to all operators below carefully.
1. CALL2FRIENDS - Call2Friends is one website you can use to call all operators for free over the internet. It only supports Indonesian numbers, each call is limited to 30 seconds, and to a couple of calls a day. The site also offers paid calls at very low rates. You can try it at CALL2FRIENDS.
2. Freetring - You can call for free on this website, but to do so you have to earn points by completing offers. You can try it at FREETRING.
3. Rebtel - You can call for free on Rebtel by registering and downloading the application provided for PC, Blackberry, Android or iPhone. Free calls are available on Rebtel, and other calls are quite cheap. You can try it at REBTEL.

Those are some of the latest ways to call for free over the internet, which may be just what you are looking for right now; I hope they are useful for all of you.

What is the most dangerous food at Olive Garden?

Nilesh Jethwa - Mon, 2015-08-31 21:24

Olive Garden is one of the favorite destinations for Italian food, and today we got hold of the entire Olive Garden menu along with its nutrition data.

A typical meal at Olive Garden starts with a drink, then appetizers [free breadsticks], a main dish and finally the dessert.

So, going in the same sequence, let's see what the menu data for wine and beer has to offer.

Amount of Carbs per serving in your favorite Wine at Olive Garden

Read more at: http://www.infocaptor.com/dashboard/what-is-the-most-dangerous-food-at-olive-garden

Plan change monitor prevents user impact from bad plan

Bobby Durrett's DBA Blog - Mon, 2015-08-31 13:07

This morning a colleague of mine noticed an alert from our plan change monitor and, by forcing the SQL back to its normal fast plan, prevented the bad plan from affecting our users. I want to share the monitor script and describe how we use its output.

I know this is long, but I want to paste the SQL text of the monitor script here. Review my comments on each step. I cannot remember the details of the script myself even though I wrote it, but hopefully the comments will help:

set linesize 160
set pagesize 1000
set echo on
set termout on
set trimspool on
set define off

-- status active
-- this plan isn't the most commonly
-- run plan.
-- average this plan in v$sql > 10 x average of
-- most common plan

-- status active sessions
-- sql_id and plan_hash_value
-- elapsed and executions
-- max elapsed for the sql
-- eliminate pl/sql which has
-- plan hash value of 0

drop table active_sql;

create table active_sql as
select
vs.sql_id,
sq.plan_hash_value,
max(sq.ELAPSED_TIME) elapsed,
max(sq.executions) executions
from
v$session vs,
v$sql sq
where
vs.sql_id=sq.sql_id and
vs.SQL_CHILD_NUMBER=sq.child_number and
vs.status='ACTIVE' and
sq.plan_hash_value <> 0
group by 
vs.sql_id,
sq.plan_hash_value;

-- to get the most frequently
-- used plan first get the number
-- of executions by plan
-- for each of the active sqls

drop table plan_executions;

create table plan_executions as
select
ss.sql_id,
ss.plan_hash_value,
sum(ss.executions_delta) total_executions
from
dba_hist_sqlstat ss,
active_sql a
where
a.sql_id=ss.sql_id
group by 
ss.sql_id,
ss.plan_hash_value;

-- use the previous table to get
-- the plans that are most frequently 
-- used. note that two plans could
-- have the same number of executions
-- but this is unlikely.

drop table most_frequent_executions;

create table most_frequent_executions as
select
pe1.sql_id,
pe1.plan_hash_value
from plan_executions pe1
where
pe1.total_executions =
(select max(pe2.total_executions)
from plan_executions pe2
where
pe1.sql_id=pe2.sql_id);

-- handle special case of two plans with
-- same number of executions.
-- pick the one with the highest plan hash value
-- just to eliminate dups.

drop table most_frequent_nodups;

create table most_frequent_nodups as
select
mfe1.sql_id,
mfe1.plan_hash_value
from most_frequent_executions mfe1
where
mfe1.plan_hash_value =
(select max(mfe2.plan_hash_value)
from most_frequent_executions mfe2
where
mfe1.sql_id=mfe2.sql_id);

-- get list of active sql that 
-- are not running the most
-- frequently executed plan

drop table not_most_freq;

create table not_most_freq as
select * from active_sql
where
(sql_id,plan_hash_value) not in
(select sql_id,plan_hash_value from most_frequent_nodups);

-- working on this part of the logic:
-- average this plan in v$sql > 10 x average of
-- most common plan

-- get average elapsed of most
-- frequently executed plans
-- add 1 to handle case of 0 executions

drop table avg_elapsed_most_f;

create table avg_elapsed_most_f as
select
ss.sql_id,
ss.plan_hash_value,
sum(ss.ELAPSED_TIME_DELTA)/(sum(ss.executions_delta)+1) 
avg_elapsed
from DBA_HIST_SQLSTAT ss,
most_frequent_nodups nd
where 
ss.sql_id = nd.sql_id and
ss.plan_hash_value = nd.plan_hash_value
group by
ss.sql_id,
ss.plan_hash_value;

-- get list of the sqls that are running
-- the plan that isn't most frequently 
-- executed and has an average elapsed 
-- more than 10 times the average of 
-- the most frequently executed plan
-- add 1 to executions to prevent 
-- divide by zero

drop table more_than_10x;

create table more_than_10x as
select 
n.sql_id,
n.plan_hash_value
from 
not_most_freq n,
avg_elapsed_most_f m
where
(n.elapsed/(n.executions+1)) > 10 * m.avg_elapsed and
n.sql_id=m.sql_id;

spool planchangemonitor.log

select name db_name from v$database;

-- The listed sql_id and plan_hash_value items correspond to 
-- sql statements that have plans that may be
-- inefficient and need to be investigated.
-- The session id and username are included if a
-- session is currently running the sql with the plan.

select
'CHANGED '||'PLAN' flag,
m.sql_id,
m.plan_hash_value,
s.sid,
s.username
from 
more_than_10x m,
v$session s,
v$sql q
where
m.sql_id=s.sql_id(+) and
m.plan_hash_value=q.plan_hash_value(+) and
s.sql_id=q.sql_id and
s.SQL_CHILD_NUMBER=q.child_number
order by
m.sql_id,
m.plan_hash_value,
s.sid,
s.username;

spool off

If I remember correctly, the script looks for active sessions running a plan whose average run time in v$sql is more than 10 times that of the statement's most frequently executed plan. The script is not perfect: the join to v$sql is not exact and in some cases you can get duplicates, and people could quibble about the approach. Why 10 times the previous run time? I thought about more complex approaches, but I just needed to get something in place. Still, on one database with a lot of small transactions we have made good use of this script, despite its flaws.
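
Condensed into a single statement, the core condition looks roughly like this. This is just a sketch of the same idea, not the monitor we actually run; it uses only standard v$session, v$sql and dba_hist_sqlstat columns, and in practice you would restrict dba_hist_sqlstat to the sql_ids that are currently active, as the full script does:

-- condensed sketch, not the production monitor:
-- flag ACTIVE sessions whose current plan averages more than
-- 10x the AWR average of the statement's most-executed plan
with awr as (
select sql_id, plan_hash_value,
sum(executions_delta) total_execs,
sum(elapsed_time_delta)/(sum(executions_delta)+1) avg_elapsed
from dba_hist_sqlstat
group by sql_id, plan_hash_value
), base as (
select sql_id,
max(plan_hash_value) keep
(dense_rank last order by total_execs, plan_hash_value) base_plan,
max(avg_elapsed) keep
(dense_rank last order by total_execs, plan_hash_value) base_elapsed
from awr
group by sql_id
)
select s.sid, s.username, s.sql_id, q.plan_hash_value
from v$session s, v$sql q, base b
where s.status = 'ACTIVE'
and s.sql_id = q.sql_id
and s.sql_child_number = q.child_number
and q.plan_hash_value <> 0
and q.sql_id = b.sql_id
and q.plan_hash_value <> b.base_plan
and q.elapsed_time/(q.executions+1) > 10 * b.base_elapsed;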

This morning my colleague noticed emails from the script that runs this SQL, listing a query whose plan had changed. The output looked like this:

FLAG         SQL_ID        PLAN_HASH_VALUE        SID USERNAME
------------ ------------- --------------- ---------- ---------
CHANGED PLAN 75ufmwrcmsuwz      2484041482         35 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        394 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        395 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        446 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        463 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        464 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        544 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        613 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        631 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        665 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        678 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        738 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        746 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        750 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482        752 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1333 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1416 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1573 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1943 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       1957 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       3038 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       3445 YOURUSER
CHANGED PLAN 75ufmwrcmsuwz      2484041482       3816 YOURUSER

I changed the real user name to YOURUSER. This output indicates that 23 sessions were all running the same SQL - sql_id 75ufmwrcmsuwz - and that this SQL was on a new plan that was causing it to run at least 10 times longer than normal. In fact it was about 30 times as long.
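
As a quick sanity check (not part of the monitor), a query along these lines shows which plan each session is currently on for a given sql_id; it only uses standard v$session and v$sql columns:

-- spot check: which plan are sessions using for this sql_id right now?
select s.sid, s.username, q.plan_hash_value, q.child_number
from v$session s, v$sql q
where s.sql_id = q.sql_id
and s.sql_child_number = q.child_number
and s.sql_id = '75ufmwrcmsuwz'
order by q.plan_hash_value, s.sid;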

To resolve the issue my colleague used our script to find the history of plans for 75ufmwrcmsuwz.

select ss.sql_id,
ss.plan_hash_value,
sn.END_INTERVAL_TIME,
ss.executions_delta,
ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
DISK_READS_DELTA/executions_delta "Average disk reads",
ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = '75ufmwrcmsuwz'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id;

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
75ufmwrcmsuwz      2297146707 30-AUG-15 09.00.54.472 AM              830         587.207673     49.9638554    541.617188                  0                      0                      0          5234.01928         125.083133              332.66747
75ufmwrcmsuwz      2297146707 30-AUG-15 10.00.46.872 AM             1178         578.205867     49.3972835    532.377174                  0                      0                      0          4870.22326         126.048387              324.50764
75ufmwrcmsuwz      2297146707 30-AUG-15 11.00.39.206 AM             1433         631.484713     49.1486392    585.826676                  0                      0                      0          4624.11305         125.446615              299.57083
75ufmwrcmsuwz      2297146707 30-AUG-15 12.00.32.247 PM             1620         592.593823     49.5987654     546.29731                  0                      0                      0          4744.17284         121.961728             312.735185
75ufmwrcmsuwz      2297146707 30-AUG-15 01.00.20.783 PM             1774         534.412339      51.059752     485.46836                  0                      0                      0          4983.44983         119.564825             326.067644
75ufmwrcmsuwz      2297146707 30-AUG-15 02.00.08.843 PM             1757         447.385105     44.9345475    404.415659                  0                      0                      0          4525.13147         107.277746             293.739328
75ufmwrcmsuwz      2297146707 30-AUG-15 03.00.57.308 PM             1626         431.718507      45.904059    388.200416                  0                      0                      0          4462.93296         118.027675             300.724477
75ufmwrcmsuwz      2297146707 30-AUG-15 04.00.44.209 PM             1080         375.905966      44.212963    334.434835                  0                      0                      0          4766.81574         109.157407             310.712037
75ufmwrcmsuwz      2297146707 30-AUG-15 05.00.36.753 PM              707         368.289475     44.3140028    327.166223                  0                      0                      0          4894.20509         108.050919             315.565771
75ufmwrcmsuwz      2297146707 30-AUG-15 06.00.26.449 PM              529         341.483588     39.6408318     305.47356                  0                      0                      0          4381.19849         96.2646503             288.030246
75ufmwrcmsuwz      2297146707 30-AUG-15 07.00.17.636 PM              356         380.733635     41.5168539    342.034876                  0                      0                      0           4553.4691         105.272472             292.283708
75ufmwrcmsuwz      2297146707 30-AUG-15 08.00.11.170 PM              313         435.449406     37.1565495    402.636489                  0                      0                      0          4144.30351         92.8690096             264.923323
75ufmwrcmsuwz      2297146707 30-AUG-15 09.00.07.894 PM              214         516.455509     44.5794393    477.020692                  0                      0                      0          4567.67757         114.415888             289.607477
75ufmwrcmsuwz      2297146707 30-AUG-15 10.00.59.991 PM              182         720.749681     44.3956044    684.439467                  0                      0                      0          3811.83516         95.2362637             239.027473
75ufmwrcmsuwz      2297146707 30-AUG-15 11.00.47.388 PM               83          1043.1503     43.7349398    1008.41358                  0                      0                      0          3575.96386         114.289157             250.120482
75ufmwrcmsuwz      2484041482 30-AUG-15 11.00.47.388 PM                6         25314.6558     4311.66667    22971.4913                  0                      0                      0          78533.8333         69813.3333             157.833333
75ufmwrcmsuwz      2484041482 31-AUG-15 12.00.36.033 AM               96         25173.7346     5105.20833    21475.9516                  0                      0                      0          135242.802         62433.3125             118.395833
75ufmwrcmsuwz      2484041482 31-AUG-15 01.00.29.070 AM               39         26877.0626     5540.51282    22977.6229                  0                      0                      0          139959.308         68478.1795             93.7179487
75ufmwrcmsuwz      2484041482 31-AUG-15 02.00.18.755 AM               38          26993.419     5998.15789    22768.4285                  0                      0                      0          153843.342              74492             149.342105
75ufmwrcmsuwz      2484041482 31-AUG-15 03.00.09.615 AM               29         25432.5074     4952.06897    22288.7966                  0                      0                      0          112813.552         69803.0345             187.689655
75ufmwrcmsuwz      2484041482 31-AUG-15 04.00.01.749 AM               34         27281.7339     4541.47059    24543.1609                  0                      0                      0             95144.5         69187.3824             135.676471
75ufmwrcmsuwz      2484041482 31-AUG-15 05.00.55.876 AM              146         30512.9976     5421.43836    26984.2559                  0                      0                      0          115531.801         71886.6644             136.321918
75ufmwrcmsuwz      2484041482 31-AUG-15 06.00.50.769 AM              405         24339.6641     4853.40741    20794.0472                  0                      0                      0           115490.01         62004.4642             229.106173
75ufmwrcmsuwz      2484041482 31-AUG-15 07.00.43.047 AM              838         27552.3731     4903.06683    23661.2101                  0                      0                      0          111654.558         58324.9511             176.058473
75ufmwrcmsuwz      2484041482 31-AUG-15 08.00.50.864 AM             1653         30522.8358     4550.56261    26526.2183                  0                      0                      0           93818.418         49865.4701             137.212341

Based on this output my colleague chose 2297146707 as the good plan. She ran coe_xfr_sql_profile.sql, which is found in SQLT's utl directory, like this:

coe_xfr_sql_profile.sql 75ufmwrcmsuwz 2297146707

Then she ran the generated script:

coe_xfr_sql_profile_75ufmwrcmsuwz_2297146707.sql

This forced the SQL back to its earlier efficient plan. My coworker did all of this early this morning, before we reached our peak usage time and before the bad plan could affect our users.
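
If you want to confirm that the profile took hold, a check along these lines should work. The LIKE pattern below is an assumption based on how coe_xfr_sql_profile normally names the profile it creates (coe_<sql_id>_<plan_hash_value>), so adjust it to whatever name the generated script reports:

-- profile name pattern assumed from coe_xfr_sql_profile's usual
-- coe_<sql_id>_<plan_hash_value> convention
select name, status, created
from dba_sql_profiles
where name like 'coe_75ufmwrcmsuwz%';

-- new child cursors should show the profile and the old plan
select sql_id, child_number, plan_hash_value, sql_profile, executions
from v$sql
where sql_id = '75ufmwrcmsuwz';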

Bobby

P.S. This example is from an 11.2.0.3 database running on HP-UX Itanium.

Categories: DBA Blogs