DBA Blogs

How to retrieve data from a quarter?

Tom Kyte - Wed, 2018-10-03 09:46
Right now I am using the below query to extract the data: FROM c WHERE date <= TO_DATE ('30-SEP-18', 'DD-MON-YY'). Can you suggest a way where I don't need to hardcode a date like 30-SEP-18? Note: the example date is 30 Sep 2018 because I'm ...
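One common way to avoid the hardcoded literal (a sketch only; the column name is a placeholder, and the exact predicate depends on whether your dates carry time components) is to derive the quarter boundary from SYSDATE with TRUNC:

-- all rows up to and including the last day of the previous quarter
SELECT *
FROM   c
WHERE  order_date < TRUNC(SYSDATE, 'Q');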
Categories: DBA Blogs

Fatal NI connect error 12170

Tom Kyte - Wed, 2018-10-03 09:46
Dear Tom, Please help to advise whether the below attachment is a network error or not? On our production database there are lots of such TNS-12535: TNS:operation timed out errors. *************************************************************...
Categories: DBA Blogs

Foreign Key violation not happening

Tom Kyte - Wed, 2018-10-03 09:46
Hi, I have a parent table X (code table) and a child table CUSTOMER. I have a FK constraint enabled on CUSTOMER. But still the database is accepting/inserting CDE values in the CUSTOMER table which are not present in table X. When the FK constraint is enab...
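A first thing to check (a sketch; the table name is taken from the question) is whether the constraint is really enabled, validated and not deferred, since a DISABLED constraint performs no checking at all and a DEFERRED one only checks at commit:

SELECT constraint_name, status, validated, deferred
FROM   user_constraints
WHERE  table_name = 'CUSTOMER'
AND    constraint_type = 'R';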
Categories: DBA Blogs

The Future of Partitioning

Tom Kyte - Wed, 2018-10-03 09:46
Hi Tom, The 11g release nicely completed (nearly all) possible combinations of composite partitioning schemes. The missing hash-hash partitioning scheme is probably not very interesting. The question is, does it mean that the development of ...
Categories: DBA Blogs

Migrating spatial data from SQL Server to Oracle

Tom Kyte - Tue, 2018-10-02 15:26
Hi, I want to migrate spatial columns of data from SQL Server to an Oracle database server. Can you please advise how to begin? The SQL Developer tool does not support migration of spatial data. Could you provide the appropriate tool/steps necessa...
Categories: DBA Blogs

Compressing LOB data increases table size

Tom Kyte - Tue, 2018-10-02 15:26
Hello, I think this is a more conceptual question: in our core production database system, we have a pretty big table that weighs 15.5 TB. Of this, about 14.4 TB is XML data stored in a LOB column. We decided to reduce the size of the database...
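For context, one typical way to compress existing LOB data (a sketch only; table and column names are placeholders, and SecureFiles compression requires the Advanced Compression option) is to move the LOB into a compressed SecureFile segment:

ALTER TABLE big_xml_table
  MOVE LOB (xml_data) STORE AS SECUREFILE (COMPRESS HIGH);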
Categories: DBA Blogs

Delete on billions record table

Tom Kyte - Tue, 2018-10-02 15:26
Hi Chris, Hi Connor, Apologies for putting the query in a theoretical manner, but I would request you to please suggest something which I can try in the below scenario. There is a requirement where I need to clean up a table (e.g. orders) having approx. 25 b...
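A commonly suggested alternative to deleting most of a huge table (a sketch only; the table name comes from the question, the kept-rows predicate is a placeholder) is to recreate the table with just the rows you want to keep and swap it in, recreating indexes, constraints and grants afterwards:

CREATE TABLE orders_keep AS
  SELECT * FROM orders
  WHERE  order_date >= DATE '2018-01-01';   -- rows to keep

DROP TABLE orders;
RENAME orders_keep TO orders;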
Categories: DBA Blogs

Database Password Change

Tom Kyte - Tue, 2018-10-02 15:26
Hello Experts, I am just curious about changing the password for the SYS user via SQL on the command line. I issued the below command: alter user SYS identified by new_password; commit; Then I restarted the instance and OS. When I tried to use the new password, it ...
Categories: DBA Blogs

Regarding Undo tablespace

Tom Kyte - Tue, 2018-10-02 15:26
Which background process writes the copy of the "old image" from the "db buffer cache" to the undo segment of the undo tablespace?
Categories: DBA Blogs

Find the min date from a list of dates without using a sub-select query

Tom Kyte - Mon, 2018-10-01 21:06
Hi, I am working on a project and I need some help with the query. I have the following data in my existing table: <code>
-----------------
S_ID    S_DATE
-----------------
A       01-FEB-12
A       14-MAR-12
A       28-APR-14
A       28-MAR...
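For data like the S_ID/S_DATE sample above, an analytic function returns the minimum date per S_ID without a separate sub-select (a sketch; the table name is a placeholder):

SELECT s_id,
       s_date,
       MIN(s_date) OVER (PARTITION BY s_id) AS min_s_date
FROM   my_table;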
Categories: DBA Blogs

Using the Query Cache for good performance in #Exasol

The Oracle Instructor - Mon, 2018-10-01 08:52

The result of a query can be cached in Exasol, so that repeated identical queries complete almost instantly. This feature was introduced in version 5 and is enabled by default.

SQL_EXA> select session_value,system_value  from exa_parameters where parameter_name='QUERY_CACHE';
EXA: select session_value,system_value  from exa_parameters where parameter...

SESSION_VALUE        SYSTEM_VALUE
-------------------- --------------------
ON                   ON

1 row in resultset.

The Query Cache can be (de-)activated on the session level as well as on the system level.
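Only the session level is demonstrated below; the system-wide equivalent would presumably be a corresponding ALTER SYSTEM statement (a sketch, not run on this system):

alter system set query_cache='off';

which would change the SYSTEM_VALUE shown in EXA_PARAMETERS above.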

SQL_EXA> alter session set query_cache='off';
EXA: alter session set query_cache='off';

Rows affected: 0
SQL_EXA> select object_name,mem_object_size/1024/1024 as mb from exa_user_object_sizes where object_name='T';
EXA: select object_name,mem_object_size/1024/1024 as mb from exa_user_objec...

OBJECT_NAME          MB
-------------------- ----------------------------------
T                             1537.49641990661621093750

1 row in resultset.

SQL_EXA> select count(*) from t;
EXA: select count(*) from t;

COUNT(*)
---------------------
            320000000

1 row in resultset.
SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 4
Elapsed: 00:00:03.022

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 5
Elapsed: 00:00:02.620

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 6
Elapsed: 00:00:02.724

Without using the Query Cache the repeated query takes roughly 3 seconds.

SQL_EXA> alter session set query_cache='on';
EXA: alter session set query_cache='on';

Rows affected: 0


Timing element: 7
Elapsed: 00:00:00.008

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 8
Elapsed: 00:00:00.009

Lightning fast! If statement profiling is enabled, QUERY CACHE RESULT shows up as PART_NAME in tables like EXA_USER_PROFILE_LAST_DAY.
The EXECUTION_MODE column of EXA_SQL_LAST_DAY also shows the usage of the Query Cache:

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
                   35

1 row in resultset.

SQL_EXA> show autocommit;
AUTOCOMMIT = "ON"
SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=33 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         0.005                     1 CACHED

1 row in resultset.
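Likewise, if statement profiling is enabled, the cache hit should be visible in the profiling statistics after a flush; a query along these lines (a sketch: only PART_NAME is confirmed above, the other column names are assumed) would show it:

select stmt_id, part_name
from   exa_user_profile_last_day
where  session_id = current_session
and    part_name  = 'QUERY CACHE RESULT';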

If DML changes the table, the result in the Query Cache is invalidated automatically:

SQL_EXA> update t set numcol2=1 where rowid in (select rowid from t limit 1);
EXA: update t set numcol2=1 where rowid in (select rowid from t limit 1);
Rows affected: 1

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.

Timing element: 10
Elapsed: 00:00:02.870

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
                   51

1 row in resultset.

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------

0 rows in resultset.

There is a one-minute interval for syncing the statistics tables. The sync can be triggered manually, though:

SQL_EXA> flush statistics;
EXA: flush statistics;

Rows affected: 0

SQL_EXA> commit;
EXA: commit;

Rows affected: 0

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         2.862                     1 EXECUTE

1 row in resultset.

The runtime and the EXECUTION_MODE of EXECUTE confirm that the cached result was invalidated by the UPDATE above. Now it is automatically refreshed:

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 11
Elapsed: 00:00:00.010
Categories: DBA Blogs

Convert Column to Rows Dynamically

Tom Kyte - Mon, 2018-10-01 02:46
Hi, I have a situation where I am trying to convert rows into columns (dynamically). Here is the scenario - 1. I have a user-defined function which takes in a SQL query as input and returns the result as a table type. 2. The result is ...
Categories: DBA Blogs

Insufficient privilege to access object SYS.DBMS_LOCK

Tom Kyte - Mon, 2018-10-01 02:46
Dear Tom, If we are using the dbms_lock package in an anonymous block it works fine, but it is not working inside the procedure. We have gone through the following link asktom.oracle.com/~tkyte/Misc/RolesAndProcedures.html and even tried with '...
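As that note explains, privileges received through a role are not visible inside a definer's-rights procedure, so what usually resolves this is a direct grant to the owning schema (a sketch; the grantee is a placeholder):

GRANT EXECUTE ON SYS.DBMS_LOCK TO your_schema;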
Categories: DBA Blogs

Alter Table

Tom Kyte - Mon, 2018-10-01 02:46
Hi Tom, I want to ALTER TABLE by adding columns to it; if the columns already exist in the table then it doesn't add them, else it adds the columns. How can I do that? Regards, Adil
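One way to do that (a sketch; the table name, column name and datatype are placeholders) is to check the data dictionary first and only issue the ALTER TABLE when the column is missing:

DECLARE
  l_cnt PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
  INTO   l_cnt
  FROM   user_tab_columns
  WHERE  table_name  = 'MY_TABLE'
  AND    column_name = 'NEW_COL';

  IF l_cnt = 0 THEN
    EXECUTE IMMEDIATE 'alter table my_table add (new_col varchar2(30))';
  END IF;
END;
/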
Categories: DBA Blogs

Partitioning -- 6 : Hash Partitioning

Hemant K Chitale - Sun, 2018-09-30 06:25
Unlike Range or List Partitioning where you define the rule which identifies which Partition a row will be inserted into (based on the value in the Partition Key Column(s)),  Hash Partitioning relies on Oracle applying a "hashing formula (algorithm)" to "randomly" distribute incoming rows across the available Partitions in the table.
This would be useful when you want to break up a table into smaller physical segments (maybe even into separate Tablespaces on different disks) without considering grouping of data.  In Date based Range Partitioning, data is grouped into different Partitions -- i.e. physical segments on disk --  based on the Date value (e.g. by Month or Year).  In List Partitioning, data is grouped based on the value in the Partition Key Column.

Here is a small example on Hash Partitioning :

SQL> create table iot_incoming_data
2 (data_item_number number,
3 data_item_key varchar2(32),
4 data_item_value varchar2(64),
5 data_item_timestamp timestamp)
6 partition by hash (data_item_number)
7 (partition p1 tablespace hash_ptn_1,
8 partition p2 tablespace hash_ptn_2,
9 partition p3 tablespace hash_ptn_3,
10 partition p4 tablespace hash_ptn_4)
11 /

Table created.

SQL>


In this definition of the table, I have "randomly" distributed incoming rows across 4 Partitions in 4 different Tablespaces.  Given the incoming "data_item_number" values (either machine generated or from a sequence), each of the 4 Partitions would be equally loaded.
(In contrast, in Date based Range Partitioning of, say, a SALES table, you might have fewer rows in older Partitions and an increasing number of rows in new Partitions as your business and Sales Volume grow over time !).

Unlike Range Partitioning, Hash Partitioning will not perform well for a "range based query"  (e.g. a range of sales dates or a range of data item numbers).  It is suitable for "equality" or "in-list" predicates.  If you do need a range based query, you would need a Global Index.
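For example, a plain CREATE INDEX on the hash-partitioned table above defaults to a global, non-partitioned index, which can then serve range predicates on the indexed column (the index name here is a placeholder):

create index iot_incoming_data_ndx on iot_incoming_data (data_item_number);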

Note that it is advised to use a power of 2 for the number of Hash Partitions; otherwise the rows will not be distributed evenly across the Partitions.

Let me demonstrate insertion of data :

SQL> insert into iot_incoming_data
2 select rownum,
3 dbms_random.string('X',16),
4 dbms_random.string('X',32),
5 systimestamp
6 from dual
7 connect by level < 10001;

10000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('HEMANT','IOT_INCOMING_DATA');

PL/SQL procedure successfully completed.

SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'IOT_INCOMING_DATA'
4 order by partition_position
5 /

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
P1                                   2471
P2                                   2527
P3                                   2521
P4                                   2481

SQL>


Note that I have inserted the 10,000 rows from a single session.  In the real world, you would have multiple sessions concurrently inserting rows into the table.
Based on the Hashing algorithm that Oracle used (note : this is internal to Oracle and we cannot use any custom algorithm), Oracle has more or less evenly distributed the incoming rows across the 4 Partitions.

Let me select some random rows from the Partitions :

SQL> select data_item_number  
2 from iot_incoming_data partition (P1)
3 where rownum < 6
4 order by 1;

DATA_ITEM_NUMBER
----------------
8361
8362
8369
8379
8380

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P2)
3 where rownum < 6
4 order by 1
5 /

DATA_ITEM_NUMBER
----------------
8087
8099
8101
8105
8109

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P3)
3 where rownum < 6
4 and data_item_number < 100
5 order by 1
6 /

DATA_ITEM_NUMBER
----------------
2
5
8
18
20

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P4)
3 where rownum < 6
4 and data_item_number between 1000 and 1100
5 order by 1
6 /

DATA_ITEM_NUMBER
----------------
1001
1002
1005
1008
1009

SQL>


(The first two queries returned rows with values greater than 8000 simply because I didn't specify a range of values as a filter and those rows came from the first few blocks that Oracle read from the buffer cache).
Note how the DATA_ITEM_NUMBER values indicate "near-random" distribution of rows across the Partitions.  It is likely that if I had created multiple sessions concurrently running inserts into the table, distribution of the rows would have been even more "random".



Categories: DBA Blogs

Insertion over db links creating more rows than expected

Tom Kyte - Fri, 2018-09-28 19:46
Hi Tom, I am facing a weird issue. Below is the scenario: my package creates insert statements for 4 tables which lie on some other Oracle 11g DB: tab1, tab2, tab3, tab4. The same 4 tables exist in 7 servers (admin and cus1 to cu6). ...
Categories: DBA Blogs

How to resolve the error ORA-01406: fetched column value was truncated

Tom Kyte - Fri, 2018-09-28 01:26
How to resolve the error ORA-01406: fetched column value was truncated?
Categories: DBA Blogs

Outer join with row archival

Tom Kyte - Fri, 2018-09-28 01:26
LiveSQL-Link: https://livesql.oracle.com/apex/livesql/s/hblhxmq40jtini45sivyqj4le <code> create table test_table (id number(10),name varchar2(10)) row archival; insert into test_table (id,name) values (1,'name1'); insert into test_table (id...
Categories: DBA Blogs

Find Closest Matching Single Record

Tom Kyte - Fri, 2018-09-28 01:26
I want a Query that should fetch a single record based on match conditions: table looks like <code> create table SERVICES ( srvc VARCHAR2(10) not null, location VARCHAR2(10), grp VARCHAR2(10), empno VARCHAR2(10), pric...
Categories: DBA Blogs

Renaming a RAC cluster

DBA Scripts and Articles - Thu, 2018-09-27 09:34

Introduction: Renaming an Oracle RAC cluster is not an easy thing; unfortunately for me, I had to do it today because the name chosen for the newly installed cluster was wrong. Oracle does not provide a simple command to do this, and you have to go through a deconfiguration/reconfiguration of the whole cluster. Changing the … Continue reading Renaming a RAC cluster

The post Renaming a RAC cluster appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs
