DBA Blogs

REGEXP_LIKE Statement

Tom Kyte - Thu, 2019-03-21 01:06
Why is this statement returning value - <code>select * from ( SELECT 'AAaaanders4n' name FROM dual ) WHERE REGEXP_LIKE (name, '^[A]{1}');</code> I have given {1} in regexp_like, still this statement returns 'AAaaanders4n'
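This behaves as documented: {1} means "exactly one occurrence", so '^[A]{1}' is equivalent to '^A' and only requires the string to *start* with an 'A'; it says nothing about the characters that follow. A hedged sketch of a pattern that also rejects a second leading 'A':

```sql
-- Matches only when the first character is 'A' and the second is not:
SELECT name
FROM   (SELECT 'AAaaanders4n' name FROM dual)
WHERE  REGEXP_LIKE(name, '^A[^A]');
-- no rows selected here, because the second character is also 'A'
```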
Categories: DBA Blogs

Speed of Light

Bobby Durrett's DBA Blog - Wed, 2019-03-20 16:30

Looking at cloud databases has me thinking about the speed of light. Wikipedia says that the speed of light is about 186,000 miles per second; if my calculations are correct, that is 5.37 microseconds per mile. The United States is about 2,680 miles wide, so it would take light about 14.4 milliseconds to cross the US. When I ping one of my favorite web sites it takes tens of milliseconds, which kind of makes sense because those sites are in other cities and I am going through various routers.

I did some tests with my company's storage and found that reading from our storage when the data is cached in the storage server takes around 200 microseconds, round trip. I'm sure that our database servers and storage are a lot less than a mile apart, so most of that time has nothing to do with the speed of light. I heard about a cloud vendor whose fast network connection took 100 microseconds plus the speed of light; I guess 100 microseconds is the cost of getting your data onto the fiber, and light does the rest.

If your cloud database were on the other side of the country, that could add at least 14 milliseconds each way to every SQL request. If the cloud database were in your own city, say 10 miles away, it would only tack on about 53.7 microseconds each way on top of the 100 microseconds of overhead. Maybe 100 microseconds plus the speed of light is the cost of moving data in the best case?

Bobby

Categories: DBA Blogs

Partitioning -- 13d : TRUNCATE and DROP Partitions and Global Indexes

Hemant K Chitale - Wed, 2019-03-20 07:11
A TRUNCATE or DROP Partition makes Global Indexes on a Partitioned Table UNUSABLE.

You may be lucky if the target partition was empty, resulting in Oracle maintaining Global Indexes as valid.  However, the accepted rule is that you either (a) use the UPDATE INDEXES clause [resulting in the TRUNCATE or DROP taking longer to run, effectively locking the table partitions] OR  (b) do a REBUILD of the Indexes that become UNUSABLE after the TRUNCATE or DROP.

12c introduced what it calls Asynchronous Global Index Maintenance.  With this feature, the TRUNCATE or DROP runs much faster as a DDL without actually removing the target rows' entries from the Global Indexes [but it still requires the UPDATE INDEXES clause to be specified].

So, now in my 12.2 database I have these two Indexes on SALES_DATA :

SQL> select index_name, partitioned, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by 2,1
5 /

INDEX_NAME PAR STATUS
------------------------------ --- --------
SALES_DATA_PK NO VALID
SALES_DATA_LCL_NDX_1 YES N/A

SQL>


I then TRUNCATE a non-empty Partition and check the Indexes

SQL> alter table sales_data truncate partition P_2015 update indexes;

Table truncated.

SQL>
SQL> select index_name, partitioned, status, orphaned_entries
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by 2,1
5 /

INDEX_NAME PAR STATUS ORP
------------------------------ --- -------- ---
SALES_DATA_PK NO VALID YES
SALES_DATA_LCL_NDX_1 YES N/A NO

SQL>


The ORPHANED_ENTRIES column indicates that SALES_DATA_PK is subject to Asynchronous Index Maintenance.

This is the job that will do the Index Maintenance at 2 AM:

SQL> l
1 select owner, job_name, last_start_date, next_run_Date
2 from dba_scheduler_jobs
3* where job_name = 'PMO_DEFERRED_GIDX_MAINT_JOB'
SQL> /

OWNER
---------------------------------------------------------------------------
JOB_NAME
---------------------------------------------------------------------------
LAST_START_DATE
---------------------------------------------------------------------------
NEXT_RUN_DATE
---------------------------------------------------------------------------
SYS
PMO_DEFERRED_GIDX_MAINT_JOB
20-MAR-19 10.18.51.215433 AM UTC
21-MAR-19 02.00.00.223589 AM UTC


SQL> !date
Wed Mar 20 20:05:24 SGT 2019

SQL>


So, I could:
(1) wait for the next run of the job, OR
(2) manually trigger the job (which will scan the entire database for all indexes that require such maintenance), OR
(3) execute DBMS_PART.CLEANUP_GIDX to initiate the maintenance for the specific table's indexes, OR
(4) execute an ALTER INDEX ... REBUILD, which also clears the orphaned entries.

SQL> execute dbms_part.cleanup_gidx('HEMANT','SALES_DATA');

PL/SQL procedure successfully completed.

SQL> select index_name, partitioned, status, orphaned_entries
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by 2,1
5 /

INDEX_NAME PAR STATUS ORP
------------------------------ --- -------- ---
SALES_DATA_PK NO VALID NO
SALES_DATA_LCL_NDX_1 YES N/A NO

SQL>


Note that the argument to CLEANUP_GIDX is the *Table Name*, not an Index Name.


Here I have demonstrated a TRUNCATE Partition, but the same method would be usable for a DROP Partition.
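For example, the DROP case could look like this sketch (same table, partition and schema names as above):

```sql
ALTER TABLE sales_data DROP PARTITION P_2015 UPDATE INDEXES;

-- then, optionally, clean up the orphaned Global Index entries immediately
EXECUTE DBMS_PART.CLEANUP_GIDX('HEMANT','SALES_DATA');
```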




Categories: DBA Blogs

Generate number based on start and end columns.

Tom Kyte - Wed, 2019-03-20 06:46
Generate value based on start and end columns without using procedure. How to modify the select query. <i>select key_column, start_point, end_point FROM tab1 WHERE key_column='10254';</i> key_column start_point end_point 10254 -2 ...
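A hedged sketch of one approach without a procedure, using CONNECT BY LEVEL (this assumes start_point <= end_point and that the key filter returns a single row):

```sql
SELECT key_column,
       start_point + LEVEL - 1 AS generated_value
FROM   tab1
WHERE  key_column = '10254'
CONNECT BY LEVEL <= end_point - start_point + 1;
```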
Categories: DBA Blogs

How to recover the whole database with RMAN Backup

Tom Kyte - Wed, 2019-03-20 06:46
Hi Team, First of all a big thanks for your support. Now I want to know the steps to recover a fully operational database with an RMAN backup. I haven't done this scenario before, so I am going for a test case here. My requirement is: 1) I have dat...
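In outline, a full restore and recovery from an RMAN backup looks like the sketch below (this assumes the backup pieces and archived logs are available and the instance can at least be mounted; rehearse it on a throwaway database first):

```sql
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
```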
Categories: DBA Blogs

cannot access objects in different schema

Tom Kyte - Wed, 2019-03-20 06:46
I am the admin user and can create tables and procedures in any schema. I have a few tables in Schema B which I am referencing in a package I am creating in Schema A; however, upon compiling it does not see the tables in Schema B. Schema B does not ha...
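The usual cause: privileges granted through a role are ignored inside definer's-rights PL/SQL, so Schema A needs direct object grants. A sketch (the table name is a placeholder):

```sql
-- Run as B (or as the admin user), once per referenced table:
GRANT SELECT ON b.some_table TO a;
```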
Categories: DBA Blogs

Virtual columns in Oracle 11g

Tom Kyte - Wed, 2019-03-20 06:46
Hi Tom, what is a virtual column in 11g? Why has Oracle introduced it? Can you give us its possible usages? Regards, Amir Riaz
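As a quick illustration (a minimal sketch; the table and column names are made up), a virtual column is computed from other columns on the fly rather than stored:

```sql
CREATE TABLE order_lines (
  price    NUMBER,
  quantity NUMBER,
  total    NUMBER GENERATED ALWAYS AS (price * quantity) VIRTUAL
);
-- TOTAL occupies no storage but can be queried, constrained and indexed.
```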
Categories: DBA Blogs

New utility Python scripts for DBAs

Bobby Durrett's DBA Blog - Tue, 2019-03-19 14:45

I pushed out three new Python scripts that might be helpful to Oracle DBAs. They are in my miscpython repository.

Might be helpful to some people.

Bobby

Categories: DBA Blogs

Predict tablespace growth for next 30 days

Tom Kyte - Tue, 2019-03-19 12:26
How can I predict tablespace growth for the next 30 days? Does this need to be configured using OEM, or are there any other possible solutions?
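Outside OEM, one hedged sketch is to fit a linear trend over the AWR history (this assumes Diagnostics Pack licensing; TABLESPACE_USEDSIZE is in database blocks, with an 8K block size assumed here):

```sql
SELECT ts.name AS tablespace_name,
       ROUND(REGR_SLOPE(u.tablespace_usedsize * 8192 / 1024 / 1024,
                        CAST(sn.end_interval_time AS DATE) - DATE '2000-01-01')
             * 30) AS projected_growth_mb_30d   -- MB/day slope x 30 days
FROM   dba_hist_tbspc_space_usage u
JOIN   dba_hist_snapshot sn ON sn.snap_id = u.snap_id AND sn.dbid = u.dbid
JOIN   v$tablespace ts      ON ts.ts# = u.tablespace_id
GROUP  BY ts.name;
```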
Categories: DBA Blogs

Oracle Database 19c Automatic Indexing: Predicted Back In 2008 (A Better Future)

Richard Foote - Mon, 2019-03-18 19:24
I’ve recently received a number of correspondences regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones). In the comments section, there’s an interesting discussion where I mention: “If Oracle19 does everything for you and all the various indexes structures get […]
Categories: DBA Blogs

How to count pairs in a consecutive number of rows

Tom Kyte - Mon, 2019-03-18 18:06
I have the following example: COLUMN 19 20 26 28 29 32 33 34 I need to count the rows based on pairs: 19-20, 28-29, 32-33. I'm having difficulty checking whether a pair has already been counted or not; any suggestions? The result should be s...
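One hedged way to avoid double-counting is to let the database do the pairing, e.g. with MATCH_RECOGNIZE in 12c+ (the table and column names are assumptions):

```sql
SELECT COUNT(*) AS pairs
FROM   t
MATCH_RECOGNIZE (
  ORDER BY n
  MEASURES a.n AS pair_start
  ONE ROW PER MATCH
  PATTERN (a b)              -- two adjacent rows form one match...
  DEFINE b AS b.n = a.n + 1  -- ...only if their values are consecutive
);
-- for 19,20,26,28,29,32,33,34 this counts (19,20), (28,29), (32,33) = 3
```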
Categories: DBA Blogs

Error generating DUMP: ORA-39006: internal error

Tom Kyte - Mon, 2019-03-18 18:06
Hi, I have a problem creating a dump with SQL Developer, the PL/SQL generated is: <code> set scan off set serveroutput on set escape off whenever sqlerror exit DECLARE h1 number; s varchar2(1000):=NULL; errorvarchar varchar2(1...
Categories: DBA Blogs

ORA-06533: Subscript Beyond Count error

Tom Kyte - Mon, 2019-03-18 18:06
Hi, I have the following PL/SQL code. If run the 1st time it works fine; running it a 2nd or 3rd time, it fails with a "Subscript beyond count" error. If I make the declaration of g_response private to the procedure (not global in the package), it works...
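The classic cause is that package-level variables keep their state for the whole session, so the collection still holds the previous run's elements while a counter restarts at 1. A hedged sketch of the usual fix (procedure and variable names are assumptions):

```sql
PROCEDURE get_response IS
BEGIN
  g_response.DELETE;  -- clear state persisted from the previous call
  -- ... EXTEND and populate g_response as before ...
END get_response;
```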
Categories: DBA Blogs

Partner Webcast – Oracle Cloud Business Analytics Data Visualizations

Providing fast and flexible analysis of any data from any source is a business requirement these days. Oracle Analytics Cloud is a cloud-first analytics platform, built on the industry-leading Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Data Pump Exit Codes

Learn oracle 12c database management - Mon, 2019-03-18 11:48


oracle@Linux01:[/u01/oracle/DPUMP] $ exp atoorpu file=abcd.dmp logfile=test.log table=sys.aud$
About to export specified tables via Conventional Path ...
. . exporting table                           AUD$     494321 rows exported
Export terminated successfully without warnings.

oracle@qpdbuat211:[/d01/oracle/DPUMP] $ echo $?
0


oracle@Linux01:[/u01/oracle/DPUMP] $ imp atoorpu file=abcd.dmp logifle=test.log
LRM-00101: unknown parameter name 'logifle'

IMP-00022: failed to process parameters, type 'IMP HELP=Y' for help
IMP-00000: Import terminated unsuccessfully

oracle@Linux01:[/u01/oracle/DPUMP] $ echo $?
1
The exit status can be used in export shell scripts for verification:

status=$?
if test $status -eq 0
then
    echo "export was successful."
else
    echo "export was not successful."
fi
Categories: DBA Blogs

Automate recyclebin purge in oracle

Learn oracle 12c database management - Mon, 2019-03-18 11:46


Set up this simple Scheduler job as SYSDBA to purge the objects in the recycle bin.
This is one of the most space-consuming locations that DBAs often forget to clean up, and
dropped objects pile up there, occupying a lot of space. Based on how long you want to keep these dropped objects, set up a Scheduler job to run the PL/SQL block below daily, weekly or monthly.


I suggest running it weekly.
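A hedged sketch of such a weekly job (the job name and schedule are my assumptions; create it as SYSDBA):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_DBA_RECYCLEBIN_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN EXECUTE IMMEDIATE ''PURGE DBA_RECYCLEBIN''; END;',
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=2',
    enabled         => TRUE);
END;
/
```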


--For user_recyclebin purge--
-- PL/SQL --

declare
  VSQL varchar2(500);
begin
  VSQL := 'purge user_recyclebin';
  execute immediate VSQL;
  dbms_output.put_line('USER RECYCLEBIN has been purged.');
end;
/




--For dba_recyclebin purge--
-- PL/SQL --

declare
  VSQL varchar2(500);
begin
  VSQL := 'purge dba_recyclebin';
  execute immediate VSQL;
  dbms_output.put_line('DBA RECYCLEBIN has been purged.');
end;
/





Prerequisites
The database object must reside in your own schema or you must have the DROP ANY ... system privilege for the type of object to be purged, or you must have the SYSDBA system privilege. To perform the PURGE DBA_RECYCLEBIN operation, you must have the SYSDBA or PURGE DBA_RECYCLEBIN system privilege.
Categories: DBA Blogs

Monitoring Database in AWS Aurora After Migrating from Oracle to PostgreSQL

Pakistan's First Oracle Blog - Fri, 2019-03-15 19:08
Suppose you have an Oracle database on-premises, which you have now moved over to the AWS Cloud into AWS Aurora PostgreSQL.
For your Oracle database, you have been using V$ views to monitor runtime performance: instance activity, long-running operations, top SQL from ASH, blocking, and so on. How do you keep doing that when you migrate your database to the cloud, especially to AWS Aurora PostgreSQL?

Well, PostgreSQL provides a statistics collector, a subsystem that gathers dynamic runtime information about server activity and exposes it through statistics views. For example, you can use the pg_stat_activity view to check for long-running queries.
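For instance, a sketch of spotting the longest-running active statements:

```sql
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM   pg_stat_activity
WHERE  state <> 'idle'
ORDER  BY runtime DESC NULLS LAST;
```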

There are various other statistics views in PostgreSQL, such as pg_stat_all_tables to see how a table is being accessed (sequential scans vs. index scans), and so on. There are further views to check I/O on tables and indexes, and a plethora of others.

In addition to these statistics views, Aurora PostgreSQL provides a nifty tool called Performance Insights. Performance Insights monitors Amazon RDS and Aurora databases (both MySQL and PostgreSQL) and captures their workloads so that you can analyze and troubleshoot database performance. It visualizes the database load and provides very useful filtering on attributes such as waits, SQL statements, hosts, or users.

As part of operational excellence, it's imperative after a database migration that performance is monitored, documented and continuously improved. Performance Insights and the statistics views are great for both proactive and reactive database tuning in AWS RDS and AWS Aurora.
Categories: DBA Blogs

Explicitly providing values in a WHERE clause showing much better performance compared to using sub query

Tom Kyte - Fri, 2019-03-15 16:46
Hi, I am new to Oracle and not sure how to provide the LiveSQL link. I have 2 tables to join: huge_table contains about 1 billion rows, big_table contains about 100 million rows, and the small table contains 999 rows providing the condition to fil...
Categories: DBA Blogs

Compare columns in two tables and report which column is different

Tom Kyte - Fri, 2019-03-15 16:46
Compare columns in two tables and list out the column names which are different. For example: create table t1(c1 number(2), c2 varchar2(10)); create table t2(c1 number(2), c2 varchar2(10)); insert into t1 values(1,'a'); insert into t2 values(1,'b'); result ...
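For the two example tables, one hedged sketch (DECODE treats two NULLs as equal, which a plain `=` comparison would not):

```sql
SELECT t1.c1, 'c2' AS differing_column
FROM   t1
JOIN   t2 ON t1.c1 = t2.c1
WHERE  DECODE(t1.c2, t2.c2, 0, 1) = 1;
-- with the sample inserts this returns: 1, c2
```

For tables with many columns you would repeat the DECODE predicate per column (or unpivot), since SQL cannot enumerate differing columns generically.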
Categories: DBA Blogs

Taking More Indexing Time on ORACLE database While Performing QUERYS

Tom Kyte - Thu, 2019-03-14 22:26
Hi Team, Very impressive for your way of explaining. So we have ORACLE db production server,from that we are trying to perform indexing with 40 Querys on every day.For Example i am showing few Querys:: <code>DROP INDEX GINQCON_ix1 FORCE; ...
Categories: DBA Blogs
