
DBA Blogs

Oracle EBS News: OpenWorld 2015

Pythian Group - Tue, 2015-11-10 14:41

 

Oracle OpenWorld is always an exciting and energizing experience. This year was no different! There is always a big laundry list of new service offerings and upcoming release information at OpenWorld. This year, the major buzz was Oracle Cloud. The Oracle Cloud offerings actually overshadowed a lot of other important product updates at OpenWorld.

In the database area, Oracle announced the Database 12c Release 2 beta program. We can now have the In-Memory option enabled in Active Data Guard as well. Markus Michalewicz’s session “Introducing the Next Generation of Oracle Real Application Clusters [CON8769]” gave very good insight into DB 12cR2 RAC new features, such as node weighting during evictions, Domain Services Clusters and Member Clusters.

In the Middleware area, Larry Ellison announced WebLogic Multitenant 12cR2.

Coming to Oracle E-Business Suite, Oracle announced EBS 12.2.5 just a few days before OpenWorld, and almost all OpenWorld EBS sessions were filled with 12.2.5 updates. Cliff Godwin announced the new functional updates in 12.2.5 and also confirmed that there will be 12.2.6, 12.2.7 and 12.3 releases in the future. You can get a full list of updates from the Oracle E-Business Suite Learning Stream and this press release. On the technology side, 12.2.5 introduced the new Alta UI, which is tablet friendly. 12.2.5 also introduced new tools like adopmon, to monitor the online patching cycle; adcfgclone dualfs=yes, to configure the patch and run file systems in parallel during a clone; and the adop validate option, to validate the EBS techstack before even starting an online patching cycle. Oracle also certified Oracle Unified Directory with EBS Release 12.2.5.

Coming to the main buzz of the event Oracle Cloud, there are two things I liked the most.

  1. Oracle Key Vault – a centralized on-premise Key Manager for the enterprise, through which clients can control their Cloud data.
  2. Oracle Audit Vault – Audit Trails managed by the customer using an on-premise system, which can help analyze audit data and detect breaches.

I think these two features will set Oracle apart from other Cloud providers.

If you are looking for the OpenWorld presentation PPTs or PDFs, you can find most of them online.

 

Discover more about our expertise in the world of Oracle.

 

Categories: DBA Blogs

Implementing Fuzzy Search in SQL Server – Part 1

Pythian Group - Tue, 2015-11-10 14:24

 

SQL Server does not handle Fuzzy Search very well out of the box. This post will cover how to implement Fuzzy Search capabilities using several approaches.

 

What is it?

Fuzzy Search is the process of locating records that are relevant to a search, even when the search criteria don’t match exactly. Fuzzy Searches are used to:

  1. Suggest the correct spelling of a word (“Did you mean this…”).
  2. Find results related to your search term (“You might also like…”).
  3. Find synonyms for search terms (Search for Dog and also get results for Puppies and Pets).
  4. Probably other things…

What we’re covering here relates to finding results after a search term has been misspelled. Related Results & Synonyms are handled quite a bit differently; they are treated more like geographic points on a map, used to find the closest other points.

 

What Does it Fix?

As suggested above, Fuzzy Search logic is great for when you or the users don’t know the exact term they’re looking for.

For example, let’s say I want to find the latest James Bond movie, but only vaguely know its name.

An IMDB search for “Specter” brings up the James Bond “Spectre” movie as a suggestion today. If you actually search for the term, you’ll get a wide variety of increasingly weird results (And also the right answer!). I don’t have access to the IMDB code, but I believe they’re using a combination of the Levenshtein Distance, Trigrams, and maybe Double Metaphones. Coincidentally, those are exactly what we’re covering here.


An example search on IMDB

 

Fuzzy Search in SQL Server

Before implementing Fuzzy Search in SQL Server, I’m going to define what each function does. As you’ll see, they are all complementary to each other and can be used together to return a wide range of results that would be missed with traditional queries or even just one of these functions.

The three functions we’re going to look at are:

  1. Damerau-Levenshtein Distance
  2. Trigrams
  3. Double Metaphones

All of these have been written in T-SQL by very good DBAs. In future posts, I’ll be comparing the T-SQL performance to CLRs written in C# code to hopefully move the string manipulation and comparisons to a more appropriate place.

Damerau-Levenshtein Distance

The Levenshtein Distance is a calculation of how different two strings are, and it’s expressed as the number of steps required to make StringA look like StringB. The steps are counted in terms of Inserts, Updates, and Deletes of individual letters in the two words being compared.

This is good for short strings where not many differences are expected, AKA misspelled words.

A simple example is changing the word Car to Date. The Levenshtein Distance here is 3, and we get there by:

Step 1: UPDATE ‘C’ to ‘D’
Step 2: UPDATE ‘r’ to ‘t’
Step 3: APPEND ‘e’

So, in the IMDB example, my search for “Specter” has a Levenshtein Distance of 2 from “Spectre”. Again, we get there by:

STEP 1: UPDATE ‘r’ to ‘e’
STEP 2: UPDATE ‘e’ to ‘r’

We’re going to be looking at the Damerau-Levenshtein Distance. This builds on the Levenshtein Distance, but also accounts for transpositions of letters right next to each other. The same comparison would have returned a value of 1 with this logic:
STEP 1: Flip ‘r’ and ‘e’
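
To make that concrete, below is a minimal, unoptimized T-SQL sketch of the plain Levenshtein Distance (without the Damerau transposition step), using a table variable as the dynamic-programming matrix. The function name and approach are mine for illustration only; the community T-SQL implementations mentioned earlier are far better tuned.

CREATE FUNCTION dbo.Levenshtein (@a NVARCHAR(255), @b NVARCHAR(255))
RETURNS INT
AS
BEGIN
    DECLARE @la INT = LEN(@a), @lb INT = LEN(@b);
    DECLARE @i INT = 0, @j INT, @cost INT, @del INT, @ins INT, @sub INT;
    DECLARE @d TABLE (i INT, j INT, v INT, PRIMARY KEY (i, j));

    -- Row 0 and column 0 hold the distance from the empty string:
    -- turning '' into the first i characters of a word costs i inserts.
    WHILE @i <= @la BEGIN INSERT @d VALUES (@i, 0, @i); SET @i += 1; END;
    SET @j = 1;
    WHILE @j <= @lb BEGIN INSERT @d VALUES (0, @j, @j); SET @j += 1; END;

    -- Each cell is the cheapest of a delete, an insert, or a substitution;
    -- substituting is free when the two letters already match.
    SET @i = 1;
    WHILE @i <= @la
    BEGIN
        SET @j = 1;
        WHILE @j <= @lb
        BEGIN
            SET @cost = CASE WHEN SUBSTRING(@a, @i, 1) = SUBSTRING(@b, @j, 1)
                             THEN 0 ELSE 1 END;
            SET @del = (SELECT v FROM @d WHERE i = @i - 1 AND j = @j) + 1;
            SET @ins = (SELECT v FROM @d WHERE i = @i AND j = @j - 1) + 1;
            SET @sub = (SELECT v FROM @d WHERE i = @i - 1 AND j = @j - 1) + @cost;
            INSERT @d VALUES (@i, @j,
                CASE WHEN @del <= @ins AND @del <= @sub THEN @del
                     WHEN @ins <= @sub THEN @ins
                     ELSE @sub END);
            SET @j += 1;
        END;
        SET @i += 1;
    END;

    RETURN (SELECT v FROM @d WHERE i = @la AND j = @lb);
END;
GO

-- SELECT dbo.Levenshtein('Car', 'Date');        -- returns 3
-- SELECT dbo.Levenshtein('Specter', 'Spectre'); -- returns 2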

Trigrams

Trigrams are used to find matching sets of words or characters in words or phrases. As the name implies, each Trigram is a set of 3 characters or words, and you simply count how many trigrams in each string match the other string’s trigrams to get a number.

These are great for comparing phrases.

An easy example is to compare a search for “SQL Server” to “SQL Sever”:

STEP 1: Break ‘SQL Server’ up into trigrams
‘SQL’, ‘QL ‘, ‘L S’, ‘ Se’, ‘Ser’, ‘erv’, ‘rve’, ‘ver’
STEP 2: Break ‘SQL Sever’ up into trigrams
‘SQL’, ‘QL ‘, ‘L S’, ‘ Se’, ‘Sev’, ‘eve’, ‘ver’
STEP 3: Count the number of trigrams that match: 5

A matching set of 5 trigrams might mean these are close enough for the application to suggest as an alternative.
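
If you want to experiment with trigram matching in T-SQL before reaching for a dedicated function, here is a rough sketch (mine, not from any particular library) that splits both strings into trigrams with a numbers CTE and counts the matches:

DECLARE @s1 NVARCHAR(100) = N'SQL Server',
        @s2 NVARCHAR(100) = N'SQL Sever';

WITH n AS (
    -- Tally of positions 1..100, plenty for short phrases.
    SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
    FROM sys.all_objects
),
t1 AS (SELECT DISTINCT SUBSTRING(@s1, i, 3) AS tri FROM n WHERE i <= LEN(@s1) - 2),
t2 AS (SELECT DISTINCT SUBSTRING(@s2, i, 3) AS tri FROM n WHERE i <= LEN(@s2) - 2)
SELECT COUNT(*) AS matching_trigrams    -- returns 5 for this pair, as counted above
FROM t1
JOIN t2 ON t1.tri = t2.tri;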

Another example is comparing the phrase “Oracle” to “Relational Database Management System”.

STEP 1: Break ‘Oracle’ up into trigrams
‘Ora’, ‘rac’, ‘acl’, ‘cle’
STEP 2: Break ‘Relational Database Management System’ into trigrams
‘Rel’, ‘ela’, ‘lat’, ‘ati’, ‘ona’, ‘nal’, ‘al_’, ‘l_D’, …, ‘tem’
STEP 3: Count the number of trigrams that match between them: 0

As you can see, Oracle isn’t very close to being an RDBMS at all.

Finally, in our IMDB example, you can see that the movie Unexpected was returned. Why? I don’t actually know, but I believe it’s because there are several matching trigrams between “Specter” and “Unexpected” which pushed it into the possible suggestions.


I don’t like scrolling up.

Double Metaphone

Double Metaphones are the updated version of the SOUNDEX() function. Updated as in: SOUNDEX was first patented in 1918, while double metaphones date from the 1990s. They both work by encoding consonants as what they sound like and comparing the encoded values for the closest match; however, SOUNDEX assumes each consonant has a single pronunciation, while metaphones allow for multiple pronunciations of the same word.

Double Metaphones are geared towards names.

The first example is one where SOUNDEX and a Double Metaphone work identically: the name “Bruce” and its common(?) misspelling “Broos”.

STEP 1: SELECT SOUNDEX(‘Broos’) and we get B620
STEP 2: SELECT SOUNDEX(‘Bruce’) and we get B620
STEP 3: Use an online calculator to get the Metaphone values for Broos and Bruce
Both return ‘PRS’ and ‘PRS’

The benefit of the Double Metaphone here is that, because the results for both words are the same, you can have a higher level of confidence that this is a misspelling.
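
Because SOUNDEX() and its companion DIFFERENCE() are built into SQL Server, this first comparison is easy to verify yourself; the metaphone values are the part that still needs an external implementation:

SELECT SOUNDEX('Bruce')             AS bruce,   -- B620
       SOUNDEX('Broos')             AS broos,   -- B620
       DIFFERENCE('Bruce', 'Broos') AS score;   -- 4 = strongest match on the 0-4 scale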

Another example is “Smith” and “Schmidt”.

STEP 1: SELECT SOUNDEX(‘smith’) and we get S530
STEP 2: SELECT SOUNDEX(‘schmidt’) and we get S530
STEP 3: Use an online calculator to get the Metaphone values for Smith and Schmidt
The most common pronunciation is first:
Smith yields ‘SM0’ and ‘XMT’
Schmidt yields ‘XMT’ and ‘SMT’

Using the SOUNDEX() function, you might return this “Smith” as a misspelling of “Schmidt”, which is unlikely. On the other hand, using a double metaphone function, you would know that, while these two names are occasionally pronounced identically, they are more frequently not pronounced the same, and you could either discard the result or push it to the bottom of your suggestions.
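
Running the same built-in check against this pair shows exactly why SOUNDEX alone is too permissive here:

SELECT SOUNDEX('Smith')               AS smith,    -- S530
       SOUNDEX('Schmidt')             AS schmidt,  -- S530
       DIFFERENCE('Smith', 'Schmidt') AS score;    -- 4: SOUNDEX cannot tell these names apart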

If you do pronounce any of these words the same, here’s a website to compare the expected English pronunciations.

In my next post, I’ll implement each of these as CLRs in C# code. I’ll also point out some nice T-SQL implementations I’ve found and compare performance.

 

Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Not an Oracle Partner? Join Oracle Partner Network now For Free!

If you are an IT Company, Independent Software Vendor or even a Digital Agency, and you are not an Oracle Partner yet, don't miss this chance to join the Oracle Partner Network with no membership fee for...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2015-11-09 [del.icio.us]

Categories: DBA Blogs

Oracle Live SQL

Oracle recently introduced a new generation online SQL tool, called Live SQL. Live SQL is a kind of search engine for database developers but also a great free learning tool for those...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Trace Files -- 7 : SQL in PL/SQL

Hemant K Chitale - Sun, 2015-11-08 09:50
So far, all my examples of Tracing have been 'pure' SQL. What does a Trace File show for SQL that is executed from within PL/SQL?

I run this code, where I trace only the execution of the stored procedure:

SQL> drop table departmental_salary purge;

Table dropped.

SQL> drop procedure my_dept_salary_proc ;

Procedure dropped.

SQL>
SQL> create table departmental_salary (department_id number, total_salary number);

Table created.

SQL>
SQL> create or replace procedure my_dept_salary_proc as
2 l_dept_id number(4);
3 l_tot_salary number(10,2);
4 cursor get_dept_id is select distinct department_id from employees;
5
6 begin
7 open get_dept_id;
8 loop
9 fetch get_dept_id into l_dept_id;
10 exit when get_dept_id%notfound;
11
12 -- get the sum(salary)
13 select sum(salary) into l_tot_salary from employees where department_id = l_dept_id ;
14 -- the developer mistakenly entered the same query twice in the procedure
15 -- the second SELECT is textually different
16 SELECT SUM(SALARY) into l_tot_salary FROM employees WHERE DEPARTMENT_ID = l_dept_id ;
17
18 insert into departmental_salary values (l_dept_id, l_tot_salary);
19 -- again, the developer mistakenly entered the same INSERT twice
20 -- not exactly the same text
21 INSERT into DEPARTMENTAL_SALARY values (L_DEPT_ID, L_TOT_SALARY);
22
23 end loop;
24
25 commit;
26
27 end;
28 /

Procedure created.

SQL>
SQL> exec dbms_session.session_trace_enable(waits=>TRUE);

PL/SQL procedure successfully completed.

SQL>
SQL> execute my_dept_salary_proc;

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_session.session_trace_disable;

PL/SQL procedure successfully completed.

SQL>
SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3927.trc

SQL>


Note the repetition of the SELECT and INSERT statements. In "normal" SQL, the two SELECTs for SUM(SALARY) would have two different SQL_IDs, because any SQL that differs by case or blank characters gets a different SQL_ID. Likewise, the two INSERTs should have different SQL_IDs.

What does a tkprof report show?

[oracle@ora11204 ~]$ tkprof /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3927.trc \
> EXEC_MY_DEPT_SALARY_PROC.PRF sys=NO aggregate=NO

TKPROF: Release 11.2.0.4.0 - Development on Sun Nov 8 23:23:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.


[oracle@ora11204 ~]$



Note that I've set SYS=NO to exclude recursive SQL statements parsed as SYS.

Selectively reporting from the tkprof output, I see:

SQL ID: 0wan8zwsb8000 Plan Hash: 0

BEGIN dbms_session.session_trace_enable(waits=>TRUE); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 43


SQL ID: a8p49fngpats5 Plan Hash: 0

BEGIN my_dept_salary_proc; END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 43


SQL ID: 4xts9z878v4yb Plan Hash: 3042654289

SELECT DISTINCT DEPARTMENT_ID
FROM
EMPLOYEES


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 13 0.00 0.00 0 6 0 12
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 15 0.00 0.00 0 6 0 12

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 43 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
12 12 12 HASH UNIQUE (cr=6 pr=0 pw=0 time=743 us cost=4 size=33 card=11)
214 214 214 TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=917 us cost=3 size=642 card=214)


SQL ID: bnr7u92kg18ak Plan Hash: 1756381138

SELECT SUM(SALARY)
FROM
EMPLOYEES WHERE DEPARTMENT_ID = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 12 0.00 0.00 0 0 0 0
Fetch 12 0.00 0.00 0 66 0 12
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 25 0.00 0.00 0 66 0 12

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 43 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=6 pr=0 pw=0 time=73 us)
12 12 12 TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=64 us cost=3 size=133 card=19)


SQL ID: bnr7u92kg18ak Plan Hash: 1756381138

SELECT SUM(SALARY)
FROM
EMPLOYEES WHERE DEPARTMENT_ID = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 12 0.00 0.00 0 0 0 0
Fetch 12 0.00 0.00 0 66 0 12
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 25 0.00 0.00 0 66 0 12

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 43 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=6 pr=0 pw=0 time=50 us)
12 12 12 TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=40 us cost=3 size=133 card=19)


SQL ID: b6yyghrcpvy4y Plan Hash: 0

INSERT INTO DEPARTMENTAL_SALARY
VALUES
(:B2 , :B1 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 12 0.00 0.00 0 4 55 12
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 13 0.00 0.00 0 4 55 12

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 43 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=2 pr=0 pw=0 time=479 us)


SQL ID: b6yyghrcpvy4y Plan Hash: 0

INSERT INTO DEPARTMENTAL_SALARY
VALUES
(:B2 , :B1 )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 12 0.00 0.00 0 1 14 12
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 13 0.00 0.00 0 1 14 12

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 43 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=1 pr=0 pw=0 time=63 us)


SQL ID: 8ggw94h7mvxd7 Plan Hash: 0

COMMIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 1 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 1 0

Misses in library cache during parse: 0
Parsing user id: 43 (recursive depth: 1)


SQL ID: 4pzfk82288xrc Plan Hash: 0

BEGIN dbms_session.session_trace_disable; END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 43


********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 3
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.00 0.01 0 0 0 3

Misses in library cache during parse: 1



OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 15 0.00 0.00 0 0 0 0
Execute 59 0.00 0.00 0 17 77 28
Fetch 40 0.00 0.00 0 148 0 37
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 114 0.01 0.01 0 165 77 65

Misses in library cache during parse: 1
Misses in library cache during execute: 1





Note 1: The first query for DISTINCT DEPARTMENT_IDs returned 12 rows (12 distinct Department_IDs). Therefore, each of the SELECT and INSERT statements was executed 12 times.
There are some very interesting observations to be made.

Note 2: The Rows (1st), (avg), (max) statistics for the SELECT SUM(SALARY) are not accurate, as the trace file has not captured Row Source Statistics individually for each execution of the query. Different DEPARTMENT_IDs actually have different row counts. You have to be careful when interpreting these statistics. Check the trace file to see if Row Source Statistics (indicated by "STAT" lines) are captured.


Firstly: the SQLs called from the PL/SQL procedure are not accounted under "OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS". This can be surmised by looking at the "Execute" and "Fetch" count statistics. The SQLs called from the PL/SQL procedure are reported under "OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS". That is why it can sometimes be very important to *not* ignore the "Recursive Statements" statistics. Here, all the effort is in what tkprof identifies as "Recursive Statements". Some of the 59 Executions under "Recursive Statements" are NOT parse overheads but actual user SQLs (called from the PL/SQL procedure)!
I can also see from the raw trace file that my SELECT and INSERT statements appear with "dep=1" (and not "dep=0"). So, they are tracked as Recursive Depth SQLs, not parent-level SQLs.

PARSING IN CURSOR #139685522349264 len=44 dep=1 uid=43 oct=3 lid=43 tim=1446996115239966 hv=244159435 ad='70eaa1c0' sqlid='4xts9z878v4yb'
SELECT DISTINCT DEPARTMENT_ID FROM EMPLOYEES
END OF STMT

PARSING IN CURSOR #139685521574368 len=60 dep=1 uid=43 oct=3 lid=43 tim=1446996115241015 hv=2767233362 ad='70f496b0' sqlid='bnr7u92kg18ak'
SELECT SUM(SALARY) FROM EMPLOYEES WHERE DEPARTMENT_ID = :B1
END OF STMT

PARSING IN CURSOR #139685521624960 len=51 dep=1 uid=43 oct=2 lid=43 tim=1446996115241988 hv=3646814366 ad='6e08dea0' sqlid='b6yyghrcpvy4y'
INSERT INTO DEPARTMENTAL_SALARY VALUES (:B2 , :B1 )
END OF STMT


Secondly: the case-sensitive SQL statements in the procedure, which should have generated different SQL_IDs, are all transformed to ALL-UPPERCASE SQL statements with the additional blank spaces stripped out. Thus, although we have two different SELECT statements for the SUM(SALARY), they appear exactly the same (transformed to ALL-UPPERCASE) with the *same* SQL_ID in the raw trace. Similarly, the two different INSERT statements appear as exactly the same ALL-UPPERCASE statement.
This is an optimization that Oracle applies to SQL statements inside PL/SQL: they effectively become case-insensitive and identical. Also, you can see that Oracle automatically assigns bind variables (:B1, :B2) in place of the PL/SQL variables I used (l_dept_id, l_tot_salary).
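
A quick cross-check (not part of the trace exercise itself) is to query V$SQL for the normalized text; both textually different SELECTs from the procedure should show up as the single cursor bnr7u92kg18ak seen in the tkprof output, not as two separate statements:

SQL> select sql_id, executions, sql_text
  2  from v$sql
  3  where sql_text like 'SELECT SUM(SALARY) FROM EMPLOYEES%';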


Note: SELECT statement statistics (consistent gets, disk reads, rows etc.) are accounted under FETCH, while DML (e.g. INSERT) statement statistics are accounted under EXECUTE.


I can verify that the INSERTs were executed twice for each DEPARTMENT_ID :

SQL> select department_id, total_salary from departmental_salary order by 1;

DEPARTMENT_ID TOTAL_SALARY
------------- ------------
10 8800
10 8800
20 38000
20 38000
30 49800
30 49800
40 13000
40 13000
50 312800
50 312800
60 57600
60 57600
70 20000
70 20000
80 609000
80 609000
90 116000
90 116000
100 103216
100 103216
110 40616
110 40616



24 rows selected.

SQL>



Categories: DBA Blogs

Tested 1000 Select Statements on New Exadata X5

Bobby Durrett's DBA Blog - Fri, 2015-11-06 17:37

I finished testing 1000 select statements on our new Exadata X5 to see if they would run faster or slower than on our older Exadata V2. Our current production V2 has 12 nodes and the new X5 has only 2. The memory and parallel server parameters on the X5 are 6 times as large as on the old one, since we have one sixth as many hosts and more than 6 times the memory and CPU per host. I think that memory parameters can sometimes change execution plans, and of course with the newer Exadata software who knows what other differences we might see. I wanted to see if any plan changes or other issues caused some queries to run much slower on our newer Exadata system than on the old one. I picked 1000 select statements at random from our current production system and tested them, comparing plans and execution times. In the end I did not find any bad plan changes, and on average the tested select statements ran about 4 times faster on the X5 than on the older V2.

I used my testselect package that I have mentioned in several other posts. Here are some other examples of using this package for performance tuning:

http://www.bobbydurrettdba.com/2015/03/02/different-plan_hash_value-same-plan/

http://www.bobbydurrettdba.com/2013/12/09/testing-removing-subpartitions-from-a-table-with-query-testing-package/

In the other posts I was using the package to test the effect of some change on query plans and performance, so I was comparing two different situations on the same host. In this case I was comparing two different hosts with essentially the same data and settings, but with different versions of Exadata hardware, larger parameters, and fewer nodes on the newer host. I got the execution plan for all 1000 select statements on both systems, but only executed the ones with different plans. Here were the results of my first run:

>execute TEST_SELECT.display_results('X5','V2');
        
Select statements that ran 3 times faster with X5 than with V2.
        
T1=X5
T2=V2
        
        SQLNUMBER T1_EXECUTE_PLAN_HASH T2_EXECUTE_PLAN_HASH T1_ELAPSED_IN_SECONDS T2_ELAPSED_IN_SECONDS
        --------- -------------------- -------------------- --------------------- ---------------------
                3            287237826            287237826                     3                    34
                4           1245040971           1245040971                     1                    11
                9             36705296           2770058206                     4                    22

... edited out most of the lines for brevity ...

              997           2423577330           2423577330                     0                     9
              998           2217180459           3921538090                     1                    13
             1000           3842377551           1690646521                     2                    12
        
Number of selects=329
        
Select statements that ran 3 times faster with V2 than with X5.
        
T1=V2
T2=X5
        
        SQLNUMBER T1_EXECUTE_PLAN_HASH T2_EXECUTE_PLAN_HASH T1_ELAPSED_IN_SECONDS T2_ELAPSED_IN_SECONDS
        --------- -------------------- -------------------- --------------------- ---------------------
               95           3919277442           3919277442                     0                     2
              210           3508255766           3508255766                     0                     2
              282           3946849555           3085057493                     0                     6
              347           3278587008            789099618                    19                   170
              375            581067860            460184496                     0                     3
              429            534521834            534521834                     1                     6
              569           3953904703           3484839332                     0                     2
              681            946688683           3451337204                     1                     6
              697            908111030           2971368043                     0                     1
              699           3756954097           1915145267                     0                     1
              706           1121196591           1121196591                     0                     2
              708            581067860            460184496                     0                     4
              797            908111030           2841065272                     0                     5
              950            786005624           2571241212                    45                   460
              966           3151548044           3151548044                     1                     5
        
Number of selects=15
        
Summary of test results
        
                   TEST_NAME TOTAL_ELAPSED_IN_SECONDS SELECTS_EXECUTED AVERAGE_ELAPSED_IN_SECONDS
        -------------------- ------------------------ ---------------- --------------------------
                          X5 5545.9999999999999999999              486                         11
                          V2                    21138              486                         43

Of the tested statements, 329 ran 3 or more times faster on the X5, but 15 selects ran 3 or more times faster on the old V2. So, I needed to test those 15 selects again on both servers.

I’m not sure if it was smart or not, but I decided to run all the selects 5 times in a row to maximize caching.  The X5 is new and not in use so there wouldn’t be any activity to stimulate caching.  My test script for the X5 looked like this:

truncate table test_results;

execute TEST_SELECT.execute_all('X5');
execute TEST_SELECT.execute_all('X5');
execute TEST_SELECT.execute_all('X5');
execute TEST_SELECT.execute_all('X5');
execute TEST_SELECT.execute_all('X5');
execute TEST_SELECT.reexecute_errored('X5');
execute TEST_SELECT.reexecute_errored('X5');
execute TEST_SELECT.reexecute_errored('X5');
execute TEST_SELECT.reexecute_errored('X5');
execute TEST_SELECT.reexecute_errored('X5');

After we made sure that the system had cached everything, all 15 selects ran, on average, 4 times faster on the X5 than the V2:

TE  SQLNUMBER SQL_ID        EXPLAIN_PLAN_HASH EXECUTE_PLAN_HASH ROWS_FETCHED ELAPSED_IN_SECONDS CPU_USED_BY_THIS_SESSION CONSISTENT_GETS DB_BLOCK_GETS PARSE_TIME_ELAPSED PHYSICAL_READS ERROR_MESSAGE
-- ---------- ------------- ----------------- ----------------- ------------ ------------------ ------------------------ --------------- ------------- ------------------ -------------- ----------------------------------------------------------------
X5         95 54a8k0yhbgyfq                          3919277442            1                  0                       12            2583            14                  0              1
V2         95 54a8k0yhbgyfq                          3919277442            1                  1                       15            2583            14                  1              1
V2        210 b132ygmp743h4                          3508255766            0                  2                       19            1592            14                  0              1
X5        210 b132ygmp743h4                          3508255766            0                  2                        8            1430            14                  0              1
V2        282 aw5f12xsa8c2h                          3946849555            0                  0                       14            3468            14                  0              2
X5        282 aw5f12xsa8c2h                          3946849555            0                  0                        8            3322            14                  2              2
V2        347 8ncbyjttnq0sk                          3278587008            1                  3                      462         1203794            14                  0          61838
X5        347 8ncbyjttnq0sk                          3278587008            1                  2                      206         1126539            14                  4          51849
X5        375 4yq5jkmz2khv5                           581067860            0                  0                        9           14530            14                  0              2
V2        375 4yq5jkmz2khv5                           581067860            0                  0                       19           14686            14                  1              2
V2        429 49pyzgr4swm4p                           534521834            0                  2                       11            1814            14                  0              0
X5        429 49pyzgr4swm4p                           534521834            0                  0                        5            1638            14                  1              0
X5        569 3afmdkmzx6fw8                           630418386          694                  0                       74           70173            14                  3              0
V2        569 3afmdkmzx6fw8                          3527323087          694                  1                       73           68349            14                  0           3588
V2        681 dyufm9tukaqbz                           668513927            0                  0                       10            6298            14                  0              2
X5        681 dyufm9tukaqbz                          3317934314            0                  0                        8            6096            14                  0              2
V2        697 1fqc3xkzw8bhk                           908111030            0                  0                        3            1406            14                  0              1
X5        697 1fqc3xkzw8bhk                           908111030            0                  0                        2            1406            14                  0              1
V2        699 03qk2cjgr4q2k                          1915145267           31                  0                      476           95922            14                  1              0
X5        699 03qk2cjgr4q2k                          1915145267           31                  0                      272           96299            14                  0              0
V2        706 28fnjtdhjqwrg                          1121196591            0                  0                       21            1355            14                  0              4
X5        706 28fnjtdhjqwrg                          1121196591            0                  0                       13            1355            14                  0              4
V2        708 2yrkwqs46nju0                           581067860            0                  0                       14           14684            14                  0              0
X5        708 2yrkwqs46nju0                           581067860            0                  0                        9           14528            14                  0              0
V2        797 dc5481yn8pm85                           908111030            0                  0                        3            1407            14                  0              2
X5        797 dc5481yn8pm85                           908111030            0                  0                        2            1407            14                  0              2
V2        950 by6n1m74j82rt                           786005624            6                  7                     2087          249736            14                  1         245443
X5        950 by6n1m74j82rt                          2571241212            6                  0                      186           90897            14                  0              3
X5        966 5c2n74gfrxwxx                          3151548044           12                  0                       24          116360            14                  9          84949
V2        966 5c2n74gfrxwxx                          3151548044           12                  0                       52          119701            14                  1          88002

The summary of the results:

Select statements that ran 3 times faster with X5 than with V2.
	
T1=X5
T2=V2
	
	SQLNUMBER T1_EXECUTE_PLAN_HASH T2_EXECUTE_PLAN_HASH T1_ELAPSED_IN_SECONDS T2_ELAPSED_IN_SECONDS
	--------- -------------------- -------------------- --------------------- ---------------------
	       95           3919277442           3919277442                     0                     1
	      429            534521834            534521834                     0                     2
	      569            630418386           3527323087                     0                     1
	      950           2571241212            786005624                     0                     7
	
Number of selects=4
	
Select statements that ran 3 times faster with V2 than with X5.
	
T1=V2
T2=X5
	
	SQLNUMBER T1_EXECUTE_PLAN_HASH T2_EXECUTE_PLAN_HASH T1_ELAPSED_IN_SECONDS T2_ELAPSED_IN_SECONDS
	--------- -------------------- -------------------- --------------------- ---------------------
	
Number of selects=0
	
Summary of test results
	
	           TEST_NAME TOTAL_ELAPSED_IN_SECONDS SELECTS_EXECUTED AVERAGE_ELAPSED_IN_SECONDS
	-------------------- ------------------------ ---------------- --------------------------
	                  X5                        4               15                          0
	                  V2                       16               15                          1

I guess it is no surprise that the X5 is faster than the five-year-older V2. But I thought this was a good example of how to use my testselect package to see how a set of queries will run in two different situations.

Bobby

Categories: DBA Blogs

JDBC connection samples in Oracle Enterprise Data Quality (OEDQ) to Oracle Service Name and MS SQL SSL

Ittichai Chammavanijakul - Fri, 2015-11-06 16:46

This post is just a quick note to myself on how to configure JDBC in Oracle Enterprise Data Quality (OEDQ) to connect to different databases.

First, let’s talk about connecting to an Oracle database. By default, OEDQ’s Data Store can only connect to an Oracle database by SID, not by Service Name.

 

Fortunately, the JDBC connection option is available to take advantage of a connection string that can support a Service Name and multiple hosts if needed.

[Screenshot: OEDQ Data Store JDBC connection settings]

 

Sample:

Driver class name: weblogic.jdbc.oracle.OracleDriver
JDBC URL:  jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = host3)(PORT = 1521))(CONNECT_DATA=(SERVICE_NAME = srvname)))

 

Another sample is connecting to a Microsoft SQL Server that requires an SSL connection. By default, if using Server > Database > Microsoft SQL Server (2000-2008), you will get this error:

[FMWGEN][SQLServer JDBC Driver] The SQL Server login requires an SSL connection. (Code 2,011)

[Screenshot: OEDQ SQL Server SSL error message]

Again, let’s use the JDBC connection instead.

Driver class name: com.microsoft.sqlserver.jdbc.SQLServerDriver
JDBC URL:  jdbc:weblogic:sqlserver://host:1433;databaseName=DBNAME;EncryptionMethod=SSL;ValidateServerCertificate=false

[Screenshot: OEDQ SQL Server JDBC connection settings]

Categories: DBA Blogs

Log Buffer #448: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-11-06 11:38

This Log Buffer is dedicated to top-quality news from the arenas of Oracle, SQL Server and MySQL.

Oracle:

  • We had a question on AskTom the other day, asking us to explain what a “latch” was.
  • Jonathan Lewis thinks column groups can be amazingly useful in helping the optimizer to generate good execution plans because of the way they supply better details about cardinality.
  • Today it’s all about developing software that makes access to your product easier.
  • Steve Jobs sets a great perspective on the journey of simplicity. It starts from simple, goes through complexity and ends up in simplicity.
  • AWR period comparison is pretty easy if you have access to the two periods in the same AWR repository.

SQL Server:

  • Understanding Peer-to-Peer Transactional Replication, Part 2.
  • Knee-Jerk Wait Statistics : PAGELATCH.
  • Stairway to Columnstore Indexes Level 5: Adding New Data To Columnstore Indexes.
  • SQL Server Reporting Services General Best Practices.
  • Hello Azure: Azure IaaS – Getting Started.
  • A Skills Roadmap for DBAs in the Cloud Era.

MySQL:

  • MySQL Performance: 1M QPS on mixed OLTP_RO with MySQL 5.7 GA.
  • Deploying MongoDB, MySQL, PostgreSQL & MariaDB’s MaxScale in 40min.
  • ClusterControl Tips & Tricks: wtmp Log Rotation Settings for Sudo User.
  • Setting-up second mysql instance & replication on Linux in 10 steps.
  • s9s Tools and Resources: ‘Become a MySQL DBA’ series, ClusterControl 1.2.11 release, and more!

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Links for 2015-11-04 [del.icio.us]

Categories: DBA Blogs

Partner Webcast – Enterprise Managed Oracle MySQL with EM12c

Oracle recently announced the general availability of MySQL 5.7, the latest version of the world’s most popular open source database. The new version delivers greater performance, scalability and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Building Oracle #GoldenGate Studio Repository…. a walk through

DBASolved - Wed, 2015-11-04 10:49

With the announcement of Oracle GoldenGate Studio at OOW this year, there has been a good bit of interest in what it can do for any Oracle GoldenGate environment. The basic idea of this new design tool is that it allows the end user to quickly build out GoldenGate architectures and mappings; however, before you can build the architectures and mappings, there needs to be a repository to store this information.

Oracle GoldenGate Studio is built on the same framework as Oracle Data Integrator (ODI). With this framework, a repository database has to be created to retain all of the architectures and mappings. To do this, you use the Repository Creation Utility (RCU). Unlike ODI, the GoldenGate Studio repository can only be created in an Oracle database. The RCU can be used to create the repository in any edition of the Oracle Database (EE, SE, or XE).

After identifying or installing a new Oracle database for the repository, the RCU will need to be run. The steps below will guide you through the creation of the repository needed for Oracle GoldenGate Studio.

Note: The RCU will be run out of the Oracle GoldenGate Studio home directory.

To run the RCU, you will need to be in the Oracle GoldenGate Studio home directory. From there, go to oracle_common/bin as indicated below, then execute the RCU.

$ cd $GGSTUDIO_HOME/oracle_common/bin
$ ./rcu &

Executing the “rcu” command starts the RCU wizard to build the repository. The first screen of the RCU is the welcome screen. Click Next.

Since this is a new repository, you will want to select the “Create Repository” and “System Load and Product Load” options. Click Next.

The next screen of the RCU will ask you for connection information related to the database where the repository will be built. Provide the information and click Next.

While the RCU is attempting to connect to the database, it will run a few checks to verify that the database is supported and can be used for the repository. If you get a warning, this is normal and can be ignored. Once the warning has been ignored, the prerequisite checks will complete. Click OK, then Next.

The next step of the RCU allows you to select the components needed for the repository. There are only two main components needed: “Common Infrastructure Services” (selected by default) and “Oracle GoldenGate -> Repository” (selected by default). Both of these selections will have a prefix of “DEV” by default. This can be changed in the “Create new prefix” box.

Note: I like changing it to GGR (GoldenGate Repository); this way I can keep different schemas in the same repository database.

Just like the database connection prerequisites, the RCU will check for all the items needed. Click OK.

The next screen will ask you for the passwords that will be used with the schemas in the repository. You have the option of using a single password for all schemas or specifying different passwords. Since this is mostly for testing, a single password works for my setup. Click Next.

The custom variables step will require you to create a password for the Supervisor user. Remember, the Supervisor user is a holdover from the ODI framework. Provide a password that you would like to use. Also notice that the “Encryption Algorithm” variable is empty. This is meant to be empty; do not place anything here. Then click Next.

Now the wizard will prompt you for the information needed to create default and temp tablespaces for the schemas set up earlier in the wizard. Take all the defaults unless there is something specific you would like to change. Click Next.

The summary page will provide you with the information on items that will be created with the RCU. Click Create and wait for the repository to be created.

Once the “create” button has been pushed, the RCU will begin building the repository.

Upon completion of the repository, the RCU will provide a Completion Summary screen with all the details of the repository build. At this point, you can close out of the RCU by clicking “Close”.

If you are familiar with any of the Oracle Data Integration tools, this repository wizard is very similar to those of other products that use a repository (for example, Oracle Data Integrator). The repository is very useful with Oracle GoldenGate Studio because it will be used to keep track of the projects, solutions and mappings that you will be working on.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Nagios Authentication with Active Directory.

Pythian Group - Wed, 2015-11-04 10:14

 

Nagios authentication with Active Directory aligns with user management consolidation policies in most organizations. This post explains how to set up Nagios authentication with Active Directory, using Apache as the web server.

mod_authz_ldap is an Apache LDAP authorization module. It can be used to authorize a user based on an LDAP query.

Install mod_authz_ldap.

# yum install mod_authz_ldap

Make sure that the module is loaded in apache:

/etc/httpd/conf.d/authz_ldap.conf

LoadModule authz_ldap_module modules/mod_authz_ldap.so

To query LDAP, ldapsearch can be used. Install the following package:

# yum install openldap-clients

Active Directory will not allow an LDAP client to operate against it anonymously; therefore, a user DN and password with minimal permissions are required.

For example: CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org

The CN attribute corresponds to the “Display Name” of the account in Active Directory.

ldapsearch can be used to query the LDAP server, in this case Active Directory.

In this example, we will look at how to enable access for all the members of the ‘Pythian’ group who in turn have membership in the ‘Nagios Admins’ group.

To find the members of Pythian group, run following command:

# ldapsearch -x -LLL -D 'CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org' -W -H ldap://192.168.1.1 -b 'CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org'
Enter LDAP Password:
dn: CN=Pythian,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
objectClass: top
objectClass: group
cn: pythian
description: General Pythian group.
member: CN=Joseph Minto,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC=hq,DC=CORP,DC=abc,DC=org <—————
member: CN=Test All,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC=hq,DC=CORP,DC=abc,DC=org <—————
distinguishedName: CN=pythian,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
instanceType: 4
whenCreated: 20120720203444.0Z
whenChanged: 20150611152516.0Z
uSNCreated: 11258263
memberOf: CN=OA Admins,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
uSNChanged: 128023795
name: pythian
objectGUID:: XY68X44xZU6KQckM3gckcw==
objectSid:: AQUAAAAAAAUVAAAAly2pALIyHF9ZQexpa+IAAA==
sAMAccountName: pythian
sAMAccountType: 268435456
groupType: -2147483646
objectCategory: CN=Group,CN=Schema,CN=Configuration,DC=CORP,DC=abc,DC=org
dSCorePropagationData: 20140718174533.0Z
dSCorePropagationData: 20121012140635.0Z
dSCorePropagationData: 20120823115415.0Z
dSCorePropagationData: 20120723133138.0Z
dSCorePropagationData: 16010714223649.0Z

To find the details of a user account, the following command can be used:

# ldapsearch -x -LLL -D 'CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org' -W -H ldap://192.168.1.1 -b 'CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org' -s sub "sAMAccountName=jminto"
Enter LDAP Password:
dn: CN=Joseph Minto,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC= hq,DC=CORP,DC=abc,DC=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Joseph Minto
sn: Minto
c: US
l: Arlington
st: VA
description: 09/30/15 – Consultant – Pythian
postalCode: 22314
telephoneNumber: 1 866 – 798 – 4426
givenName: Joseph
distinguishedName: CN=Joseph Minto,OU=Service Consultants,OU=Consultants,OU=User Accounts,DC=hq,DC=CORP,DC=abc,DC=org
instanceType: 4
whenCreated: 20131203160403.0Z
whenChanged: 20150811045216.0Z
displayName: Joseph Minto
uSNCreated: 62354283
info: sponsored by: sam@abc.org
memberOf: CN=Pythian,OU=Internal Security Groups,DC=hq,DC=CORP,DC=abc,DC=org
memberOf: CN=Nagios Admins,OU=Nagios Groups,OU=AppSecurityGroups,DC=hq,DC=CORP,DC=abc,DC=org <————-
uSNChanged: 137182483
co: United States
name: Joseph Minto
objectGUID:: uh9bC/ke6Uap0/dUk9gyLw==
userAccountControl: 512
badPwdCount: 0
codePage: 0
countryCode: 840
badPasswordTime: 130360542953202075
lastLogoff: 0
lastLogon: 130844674893200195
scriptPath: callsl.bat
logonHours:: ////////////////////////////
pwdLastSet: 130305602432591455
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAAly2pALIyHF9ZQexphO8AAA==
adminCount: 1
accountExpires: 130881456000000000
logonCount: 116
sAMAccountName: jminto
sAMAccountType: 805306368
userPrincipalName: jminto@hq.CORP.abc.org
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=CORP,DC=abc,DC=org
dSCorePropagationData: 20150320162428.0Z
dSCorePropagationData: 20140718174545.0Z
dSCorePropagationData: 20131203161019.0Z
dSCorePropagationData: 16010101181632.0Z
lastLogonTimestamp: 130837423368430625
mail: jo@pythian.com

Following are the ldapsearch switches used above:

-x   Use simple authentication instead of SASL.
-L   Display search results in LDAP Data Interchange Format (LDIF), detailed in ldif(5). A single -L restricts the output to LDIFv1; a second -L disables comments; a third -L disables printing of the LDIF version. The default is to use an extended version of LDIF.
-D binddn   Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value.
-W   Prompt for simple authentication. This is used instead of specifying the password on the command line.
-H ldapuri   Specify URI(s) referring to the ldap server(s); a list of URIs, separated by whitespace or commas, is expected; only the protocol/host/port fields are allowed. As an exception, if no host/port is specified, but a DN is, the DN is used to look up the corresponding host(s) using the DNS SRV records, according to RFC 2782. The DN must be a non-empty sequence of AVAs whose attribute type is "dc" (domain component), and must be escaped according to RFC 2396.
-b searchbase   Use searchbase as the starting point for the search instead of the default.
-s {base|one|sub|children}   Specify the scope of the search to be one of base, one, sub, or children for a base object, one-level, subtree, or children search. The default is sub. Note: children scope requires the LDAPv3 subordinate feature extension.

In the Nagios configuration in Apache, mod_authz_ldap parameters can be used to validate a user, just as we did with ldapsearch:

# cat /etc/httpd/conf.d/nagios.conf
# SAMPLE CONFIG SNIPPETS FOR APACHE WEB SERVER
# Last Modified: 11-26-2005
#
# This file contains examples of entries that need
# to be incorporated into your Apache web server
# configuration file. Customize the paths, etc. as
# needed to fit your system.

ScriptAlias /nagios/cgi-bin/ "/usr/lib64/nagios/cgi-bin/"

<Directory "/usr/lib64/nagios/cgi-bin/">
   Options ExecCGI
   AllowOverride None
   Order allow,deny
   Allow from all
   AuthName "Nagios Access"
   AuthType Basic
   AuthzLDAPMethod ldap
   AuthzLDAPServer "192.168.1.1"
   AuthzLDAPBindDN "CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPBindPassword "typepasswordhere"
   AuthzLDAPUserKey sAMAccountName
   AuthzLDAPUserBase "CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPUserScope subtree
   AuthzLDAPGroupKey cn
   AuthzLDAPMemberKey member
   AuthzLDAPSetGroupAuth ldapdn
   require group "Nagios Admins"
</Directory>

Alias /nagios "/usr/share/nagios/html"

<Directory "/usr/share/nagios/html">
   Options None
   AllowOverride None
   Order allow,deny
   Allow from all
   AuthName "Nagios Access"
   AuthType Basic
   AuthzLDAPMethod ldap
   AuthzLDAPServer "192.168.1.1"
   AuthzLDAPBindDN "CN=Nagios User,CN=Users,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPBindPassword "typepasswordhere"
   AuthzLDAPUserKey sAMAccountName
   AuthzLDAPUserBase "CN=Pythian,OU=Internal Groups,DC=hq,DC=CORP,DC=abc,DC=org"
   AuthzLDAPUserScope subtree
   AuthzLDAPGroupKey cn
   AuthzLDAPMemberKey member
   AuthzLDAPSetGroupAuth ldapdn
   require group "WUG Admins"
</Directory>

In the above configuration, mod_authz_ldap uses parameters like the LDAP server, bind DN, bind password, scope, and search base to see if the supplied user credentials can be found in Active Directory. It also checks whether the user is a member of the ‘Nagios Admins’ group.

Restarting Apache then enables Active Directory based authentication for Nagios.

 

Discover more about our expertise in Infrastructure Management.

Categories: DBA Blogs

Online Resizing of ASM Disks

Pythian Group - Wed, 2015-11-04 07:01

 

The SAN administrator informed us that they had extended the disks. This was the information I had from our client. The disks were labelled:

mpath_compellent_oraarch
mpath_compellent_oraarch02
mpath_compellent_oraarch03

 

The original size of the disks was 300GB and they had been extended to 600GB. These were multipath disks belonging to the diskgroup ARCH, which was being used to store archive logs in ASM. The database was 11.2.0.3 in a 2-node RAC configuration. The server was Red Hat Linux 5.9 – 2.6.18-406.el5 – 64bit.

I checked the disks using fdisk (as the root user) and got the following:

fdisk -l /dev/mpath/mpath_compellent_oraarch

Disk /dev/mpath/mpath_compellent_oraarch: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mpath/mpath_compellent_oraarch doesn't contain a valid partition table

This confirmed that the OS was not aware of the size extension.

 

Firstly, I wanted to confirm that the correct disks had been extended. So the first place to look is in ASM:

select g.name 
     , d.path
     , d.os_mb
     , d.total_mb
from v$asm_diskgroup g
   , v$asm_disk      d
where g.group_number = d.group_number
and   g.name = 'ARCH'
/

NAME       PATH                           OS_MB   TOTAL_MB
---------- ------------------------- ---------- ----------
ARCH       ORCL:ASMDISK_NEW_ARCH03       307200     307200
ARCH       ORCL:ASMDISK_NEW_ARCH01       307200     307200
ARCH       ORCL:ASMDISK_NEW_ARCH02       307200     307200

 

Now we need to match these names to those provided by the SAN administrator.

Check the directory:

/dev/oracleasm/disks
ls -l /dev/oracleasm/disks/ASMDISK_NEW_ARCH*
brw-rw---- 1 oracle dba 253, 30 Oct  6 00:35 /dev/oracleasm/disks/ASMDISK_NEW_ARCH01
brw-rw---- 1 oracle dba 253, 29 Oct  6 00:35 /dev/oracleasm/disks/ASMDISK_NEW_ARCH02
brw-rw---- 1 oracle dba 253, 32 Oct  6 00:35 /dev/oracleasm/disks/ASMDISK_NEW_ARCH03

This gives us the major and minor numbers for the disks – the major number is 253 and the minor numbers are 30, 29 and 32.

 

Then compare these numbers against the devices listed in:

/dev/mapper
ls -l /dev/mapper/mpath_compellent_oraarch*
brw-rw---- 1 root disk 253, 30 Oct  6 00:34 /dev/mapper/mpath_compellent_oraarch
brw-rw---- 1 root disk 253, 29 Oct  6 00:34 /dev/mapper/mpath_compellent_oraarch02
brw-rw---- 1 root disk 253, 32 Oct  6 00:34 /dev/mapper/mpath_compellent_oraarch03

The numbers match, showing that they are the same devices.

 

Now we need to find the actual disks that make up the multipath devices.

multipath -l
Output truncated for brevity

mpath_compellent_oraarch03 (36000d310009aa700000000000000002b) dm-32 COMPELNT,Compellent Vol
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 13:0:0:3  sdfm       130:128 [active][undef]
 \_ 11:0:0:3  sdgd       131:144 [active][undef]

mpath_compellent_oraarch02 (36000d310009aa700000000000000002a) dm-29 COMPELNT,Compellent Vol
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 12:0:0:2  sdfi       130:64  [active][undef]
 \_ 14:0:0:2  sdfk       130:96  [active][undef]

mpath_compellent_oraarch (36000d310009aa7000000000000000026) dm-30 COMPELNT,Compellent Vol
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 13:0:0:1  sdfj       130:80  [active][undef]
 \_ 11:0:0:1  sdgc       131:128 [active][undef]

From here we can see the disks:

/dev/sdfm
/dev/sdgd
/dev/sdfi
/dev/sdfk
/dev/sdfj
/dev/sdgc

We need to find this information on the other node as well, as the underlying disk names will very likely be different on the other server.

 

Now, for each disk, we need to rescan the device to register the new size. To do this, we run the following for each disk on both nodes:

echo 1 > /sys/block/sdfm/device/rescan

Then we can check each disk to make sure it has successfully been extended:

fdisk -l /dev/sdfm

Disk /dev/sdfm: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdfm doesn't contain a valid partition table

Looks good – once done on all nodes we can then extend the multipath devices for each device name on both nodes:

multipathd -k'resize map mpath_compellent_oraarch'
ok

Then we can check the multipath device disk size:

fdisk -l /dev/mpath/mpath_compellent_oraarch

Disk /dev/mpath/mpath_compellent_oraarch: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mpath/mpath_compellent_oraarch doesn't contain a valid partition table

Looks good – once done on both nodes we can then resize the ASM disks within ASM:

SQL> select g.name
     , d.path
     , d.os_mb
     , d.total_mb
from v$asm_diskgroup g
   , v$asm_disk      d
where g.group_number = d.group_number
and   g.name = 'ARCH'
/

NAME       PATH                           OS_MB   TOTAL_MB
---------- ------------------------- ---------- ----------
ARCH       ORCL:ASMDISK_NEW_ARCH03       614400     307200
ARCH       ORCL:ASMDISK_NEW_ARCH01       614400     307200
ARCH       ORCL:ASMDISK_NEW_ARCH02       614400     307200

SQL> alter diskgroup ARCH resize all;

Diskgroup altered.

SQL> select g.name
     , d.path
     , d.os_mb
     , d.total_mb
from v$asm_diskgroup g
   , v$asm_disk      d
where g.group_number = d.group_number
and   g.name = 'ARCH'
/

NAME       PATH                           OS_MB   TOTAL_MB
---------- ------------------------- ---------- ----------
ARCH       ORCL:ASMDISK_NEW_ARCH03       614400     614400
ARCH       ORCL:ASMDISK_NEW_ARCH01       614400     614400
ARCH       ORCL:ASMDISK_NEW_ARCH02       614400     614400

The disks and diskgroup were successfully resized.

 

Discover more about our expertise in Database Management.

Categories: DBA Blogs

Oracle ISVs : Leading The Charge in Cloud Transformation

At Oracle we are offering our application solutions in the Cloud (e.g. ERP, HCM); this also means our ISV (Independent Software Vendor) ecosystem is critical in enriching the SaaS portfolio...

We share our skills to maximize your revenue!
Categories: DBA Blogs

OakTable video of myself and others

Bobby Durrett's DBA Blog - Tue, 2015-11-03 10:53

You can find the full length video of my Delphix talk that I did at OakTable World on Tuesday here: url

Also, the OakTable folks have updated the OakTable World agenda page with video of all the talks. This has lots of good material and for free. Scroll down to the bottom of the page to find the links to the videos.

Bobby

Categories: DBA Blogs

Partner Webcast – Oracle Mobile Security Suite (OMSS): Unified Security for Mobility

Since the latest release, Oracle Mobile Security Suite (OMSS) is a fully featured Identity-Centric Enterprise Mobility Management (EMM) Platform that can address a mix of both BYOD and corporate...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #447: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-11-02 09:34

 

This Log Buffer Edition covers the week's blog posts about Oracle, SQL Server and MySQL.

Oracle:

  • An Index or Disaster, You Choose (It’s The End Of The World As We Know It).
  • SQL Monitoring in Oracle Database 12c.
  • RMAN Full Backup vs. Level 0 Incremental.
  • Auto optimizer stats after CTAS or direct loads in #Oracle 12c.
  • How to move OEM12c management agent to new location.

SQL Server:

  • Automate SQL Server Log Monitoring.
  • 10 New Features Worth Exploring in SQL Server 2016.
  • The ABCs of Database Creation.
  • Top 10 Most Common Database Scripts.
  • In-Memory OLTP Table Checkpoint Processes Performance Comparison.

MySQL:

  • The Client Library, Part 1: The API, the Whole API and Nothing but the API.
  • Performance of Connection Routing plugin in MySQL Router 2.0.
  • MariaDB 10.0.22 now available.
  • Semi-synchronous Replication Performance in MySQL 5.7.
  • MySQL and Trojan.Chikdos.A.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs