DBA Blogs
Restrict Application access to developers in same workspace
Find if event spanning several dates happened via SQL
pragma autonomous_transaction within procedure before creating synonyms
Undo Tablespaces.
How to count no of records in table without count?
Square root in excel – A Step-By-Step Tutorial
Have you ever stopped to wonder what life would have been like when there wasn’t a calculator to perform the arithmetical operations for people? It surely sends a shiver down your spine to even imagine the horror of having to survive without a calculator. But in the present times, it wouldn’t be wrong to state […]
The post Square root in excel – A Step-By-Step Tutorial appeared first on VitalSoftTech.
Top 8 Post Limits on Tumblr (and Other Limitations) Revealed
Do you know about post limits on Tumblr? Are you aware that the blogging platform has certain rules and regulations which social media users must abide by? Tumblr has become one of the best and most entertaining platforms out there for blogging and sharing multimedia content. It is not just immensely popular amongst the younger […]
The post Top 8 Post Limits on Tumblr (and Other Limitations) Revealed appeared first on VitalSoftTech.
So Far So Good with Force Logging
I mentioned in my previous two posts that I had tried to figure out if it would be safe to turn on force logging on a production database that does a bunch of batch processing on the weekend: post1, post2. We know that many of the tables are set to NOLOGGING and some of the inserts have the append hint. We put in force logging on Friday and the heavy weekend processing ran fine last weekend.
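As a sketch of the kind of check that informs this decision, the standard DBA views show which segments are set to NOLOGGING; the owner name here is a placeholder, not from the actual system:

```sql
-- Sketch: list tables and indexes set to NOLOGGING for one schema.
-- 'APP_OWNER' is a hypothetical owner name for illustration.
SELECT owner, table_name
FROM   dba_tables
WHERE  owner = 'APP_OWNER'
AND    logging = 'NO';

SELECT owner, index_name
FROM   dba_indexes
WHERE  owner = 'APP_OWNER'
AND    logging = 'NO';
```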
I used an AWR report to check the top INSERT statements from the weekend and I only found one that was significantly slower. But the table it inserts into is set for LOGGING, it does not have an append hint, and the parallel degree is set to 1. So, it is a normal insert that was slower last weekend for some other reason. Here is the output of my sqlstatsumday.sql script for the slower insert:
Day SQL_ID PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 6mcqczrk3k5wm 472069319 129 36734.0024 20656.8462 462.098677 0 0 38.8160385 666208.285 1139.86923 486.323077
2019-09-29 6mcqczrk3k5wm 472069319 130 44951.6935 27021.6031 573.245664 0 0 21.8764885 879019.29 1273.52672 522.083969
2019-10-06 6mcqczrk3k5wm 472069319 130 9624.33742 7530.07634 264.929008 0 0 1.26370992 241467.023 678.458015 443.427481
2019-10-13 6mcqczrk3k5wm 472069319 130 55773.0864 41109.542 472.788031 0 0 17.5326031 1232828.64 932.083969 289.183206
2019-10-20 6mcqczrk3k5wm 472069319 130 89684.8089 59261.2977 621.276122 0 0 33.7963893 1803517.19 1242.61069 433.473282
2019-10-27 6mcqczrk3k5wm 472069319 130 197062.591 144222.595 561.707321 0 0 362.101267 10636602.9 1228.91603 629.839695
It averaged 197062 milliseconds last weekend but 89684 the previous one. The target table has always been set to LOGGING so FORCE LOGGING would not change anything with it.
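A quick way to confirm those table attributes is a query along these lines (owner and table name are placeholders):

```sql
-- Sketch: confirm the target table's LOGGING setting and parallel degree.
-- 'APP_OWNER' and 'TARGET_TABLE' are hypothetical names.
SELECT logging, degree
FROM   dba_tables
WHERE  owner = 'APP_OWNER'
AND    table_name = 'TARGET_TABLE';
```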
One of the three INSERT statements that I expected to be slowed by FORCE LOGGING was faster this weekend than without FORCE LOGGING last weekend:
Day SQL_ID PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 0u0drxbt5qtqk 382840242 1 2610257.66 391635 926539.984 0 0 13718.453 5483472 745816.5 3689449
2019-09-29 0u0drxbt5qtqk 382840242 1 17127212.3 1507065 12885171.7 0 0 14888.4595 18070434 6793555.5 15028884.5
2019-10-06 0u0drxbt5qtqk 382840242 1 3531931.07 420150 2355139.38 0 0 12045.0115 5004273 1692754 5101998
2019-10-13 0u0drxbt5qtqk 382840242 1 1693415.59 180730 1250325.41 0 0 819.7725 2242638.5 737704.5 2142812
2019-10-20 0u0drxbt5qtqk 382840242 1 5672230.17 536115 3759795.33 0 0 10072.9125 6149731 2332038 2806037.5
2019-10-27 0u0drxbt5qtqk 382840242 1 2421533.59 272585 1748338.89 0 0 9390.821 3311219.5 958592.5 2794748.5
It ran 2421533 milliseconds this weekend and 5672230 the prior one. So clearly FORCE LOGGING did not have much effect on its overall run time.
It went so well this weekend that we decided to leave FORCE LOGGING in for now to see if it slows down the mid-week jobs and the web-based front end. I was confident on Friday, but I am even more confident now that NOLOGGING writes have minimal performance benefits on this system. But we will let it bake in for a while. Really, we might as well leave it in for good, if only for the recovery benefits. Then, when we configure GGS for the zero-downtime upgrade, it will already have been in place for some time.
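For reference, force logging is toggled and verified at the database level; a minimal sketch:

```sql
-- Enable force logging (overrides NOLOGGING on tables and indexes
-- and unlogged direct-path writes).
ALTER DATABASE FORCE LOGGING;

-- Verify the current setting.
SELECT force_logging FROM v$database;

-- Turn it back off if needed.
ALTER DATABASE NO FORCE LOGGING;
```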
The lesson for me from this experience, and the message of my last three posts, is that NOLOGGING writes may have fewer benefits than you think, or your system may be doing fewer NOLOGGING writes than you think. That was true for me for this one database. It may be true for other systems that I expect to have a lot of NOLOGGING writes. Maybe someone reading this will find that they can safely use FORCE LOGGING on a database that they think does a lot of NOLOGGING writes, but which really does not need NOLOGGING for good performance.
Bobby
Basic Replication -- 10 : ON PREBUILT TABLE
You can also define a Materialized View over an *existing* (pre-populated) Table.
Let's say you have a Source Table and have built a Replica of it in another Schema or Database. Building the Replica may have taken an hour or even a few hours. You now know that the Source Table will have some changes every day and want the Replica to be updated as well. Instead of executing, say, a TRUNCATE and INSERT into the Replica every day, you define a Fast Refresh Materialized View over it and let Oracle identify all the changes (which, on a daily basis, could be a small percentage of the total size of the Source/Replica) and update the Replica using a Refresh call.
Here's a quick demo.
SQL> select count(*) from my_large_source;
COUNT(*)
----------
72447
SQL> grant select on my_large_source to hr;
Grant succeeded.
SQL> connect hr/HR@orclpdb1
Connected.
SQL> alter session enable parallel dml;
Session altered.
SQL> create table my_large_replica
2 as select * from hemant.my_large_source
3 where 1=2;
Table created.
SQL> insert /*+ PARALLEL (8) */
2 into my_large_replica
3 select * from hemant.my_large_source;
72447 rows created.
SQL>
So, now, HR has a Replica of the Source Table in the HEMANT schema. Without any subsequent updates to the Source Table, I create the Materialized View definition, with the "ON PREBUILT TABLE" clause.
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> create materialized view log on my_large_source;
Materialized view log created.
SQL> grant select, delete on mlog$_my_large_source to hr;
Grant succeeded.
SQL> connect hr/HR@orclpdb1
Connected.
SQL>
SQL> create materialized view my_large_replica
2 on prebuilt table
3 refresh fast
4 as select * from hemant.my_large_source;
Materialized view created.
SQL> select count(*) from hemant.my_large_source;
COUNT(*)
----------
72447
SQL> select count(*) from my_large_replica;
COUNT(*)
----------
72447
SQL>
I am now ready to add data and Refresh the MV.
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> desc my_large_source
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COL NOT NULL NUMBER
PRODUCT_NAME VARCHAR2(128)
FACTORY VARCHAR2(128)
SQL> insert into my_large_source
2 values (74000,'Revolutionary Pin','Outer Space');
1 row created.
SQL> commit;
Commit complete.
SQL> select count(*) from mlog$_my_large_source;
COUNT(*)
----------
1
SQL>
SQL> connect hr/HR@orclpdb1
Connected.
SQL> select count(*) from hemant.my_large_source;
COUNT(*)
----------
72448
SQL> select count(*) from my_large_replica;
COUNT(*)
----------
72447
SQL>
SQL> execute dbms_mview.refresh('MY_LARGE_REPLICA','F');
PL/SQL procedure successfully completed.
SQL> select count(*) from my_large_replica;
COUNT(*)
----------
72448
SQL>
SQL> select id_col, product_name
2 from my_large_replica
3 where factory = 'Outer Space'
4 /
ID_COL
----------
PRODUCT_NAME
--------------------------------------------------------------------------------
74000
Revolutionary Pin
SQL>
SQL> select count(*) from hemant.mlog$_my_large_source;
COUNT(*)
----------
0
SQL>
Instead of rebuilding / repopulating the Replica Table with all 72,448 rows, I used the MV definition and the MV Log on the Source Table to copy over that 1 new row.
The above demonstration is against 19c.
Here are two older posts, one in March 2009 and the other in January 2012 on an earlier release of Oracle.
GRANT INHERIT PRIVILEGES
Basic Replication -- 9 : BUILD DEFERRED
However, it is possible to define a Materialized View without actually populating it.
You might want to take such a course of action for scenarios like :
1. Building a number of Materialized Views along with a code migration but not wanting to spend the time required to actually populate the MVs, deferring the population to a subsequent maintenance window, after which the code and data will be referenced by the application/users
2. Building a number of MVs in a Tablespace that is initially small but will be enlarged in the maintenance window to handle the millions of rows that will be inserted
3. Building an MV definition without actually having all the "clean" Source Table(s) rows currently available, deferring the cleansing of data to a later date and then populating the MV after the cleansing
The BUILD DEFERRED clause comes in handy here.
Let's say that we have a NEW_SOURCE_TABLE (with many rows and/or with rows that are yet to be cleansed) and want to build an "empty" MV on it (OR that this MV is one of a number of MVs that are being built together simply for migration of dependent code, without the data).
SQL> desc new_source_table
Name Null? Type
----------------------------------------- -------- ----------------------------
ID NOT NULL NUMBER
DATA_ELEMENT_1 VARCHAR2(15)
DATA_ELEMENT_2 VARCHAR2(15)
DATE_COL DATE
SQL>
SQL> create materialized view log on new_source_table;
create materialized view log on new_source_table
*
ERROR at line 1:
ORA-12014: table 'NEW_SOURCE_TABLE' does not contain a primary key constraint
SQL> create materialized view log on new_source_table with rowid;
Materialized view log created.
SQL>
SQL> create materialized view new_mv
2 build deferred
3 refresh with rowid
4 as select id as id_number,
5 data_element_1 as data_key,
6 data_element_2 as data_val,
7 date_col as data_date
8 from new_source_table
9 /
Materialized view created.
SQL>
Notice that my Source Table currently does not have a Primary Key. The MV Log can be created with the "WITH ROWID" clause in the absence of the Primary Key.
The Materialized View is also built with the ROWID as the Refresh cannot use a Primary Key.
Of course, you may well have a Source Table with a Primary Key. In that case, you can let the MV Log and the Refresh default to using the Primary Key instead of the ROWID.
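In that case, the log and MV would be built in the default Primary Key mode instead; a sketch, assuming a hypothetical source table PK_SOURCE_TABLE that has a primary key:

```sql
-- Sketch: MV log and deferred-build MV using the default
-- Primary Key mode. PK_SOURCE_TABLE and PK_MV are hypothetical names.
create materialized view log on pk_source_table with primary key;

create materialized view pk_mv
build deferred
refresh fast
as select * from pk_source_table;
```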
Once the Source Table is properly populated / cleansed and/or the tablespace containing the MV is large enough, the MV is first refreshed with a COMPLETE Refresh and subsequently with FAST Refreshes.
SQL> select count(*) from new_source_table;
COUNT(*)
----------
106
SQL> execute dbms_mview.refresh('NEW_MV','C',atomic_refresh=>FALSE);
PL/SQL procedure successfully completed.
SQL>
Subsequently, when one or more rows are inserted/updated in the Source Table, the next Refresh is a Fast Refresh.
SQL> execute dbms_mview.refresh('NEW_MV','F');
PL/SQL procedure successfully completed.
SQL>
SQL> select mview_name, refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'NEW_MV'
4 /
MVIEW_NAME REFRESH_M REFRESH_ LAST_REF
------------------ --------- -------- --------
NEW_MV DEMAND FORCE FAST
SQL>
Thus, we started off with an empty MV and later used Refreshes (COMPLETE and FAST) to populate it.
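As a follow-up check, the BUILD_MODE and STALENESS columns of USER_MVIEWS can confirm that the deferred build has been completed and whether the MV is current with its source; a sketch:

```sql
-- BUILD_MODE shows DEFERRED until the first refresh;
-- STALENESS indicates whether the MV still reflects the source table.
select mview_name, build_mode, staleness, last_refresh_type
from   user_mviews
where  mview_name = 'NEW_MV';
```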
Oracle database software client 18c 32 bit win 7
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [1384], [1270], [2885], [], [], [], [], [], [], []
Last run date of a function on a menu
SQL to return 12 months of this year
Oracle - validate date format (yyyy-mm-ddThh24:mi:ssZ) in XML against XSD
REGR_R2 returns a value of 1 when y column contains a constant value (e.g., all rows have value of 0.0038).
ServiceManager … as a daemon
In an earlier post on ServiceManager, I took a look at how you could start/stop the ServiceManager manually. A lot of what was said in that post still applies to this post; however, in this one I’m going to take a look at how to review the ServiceManager when it is configured as a daemon […]
The post ServiceManager … as a daemon appeared first on DBASolved.
What is an LP? (EP vs LP)
If you’re an aspiring musician or independent artist looking to promote your work in the music industry, you’ve probably wondered at one point or another; what is an LP? You might have heard of EPs as well, bringing up an even bigger question in your head; EP vs. LP? Which format for music promotion is […]
The post What is an LP? (EP vs LP) appeared first on VitalSoftTech.
Multiple SQL statements in a single Execute Immediate