Feed aggregator

How to get database name in oracle db node js

Tom Kyte - Fri, 2016-07-08 21:46
How to get database name in oracle db in node js without query, using the connection object: var oracledb = require('oracledb'); oracledb.getConnection( { user: 'c##root', password: 'root', connectString: ...
Categories: DBA Blogs

How to load a xlsx file in to a table

Tom Kyte - Fri, 2016-07-08 21:46
I have an xlsx file, and I am using SQL*Loader in Toad to load it into a table. I have also created a control file for it. The file content is given below: ID Name Employee name Designation Status 0000700 John,Sam John sam (000070) Sr.Developer, ...
Categories: DBA Blogs

Oracle Security Expert Seminar

Pete Finnigan - Fri, 2016-07-08 21:46

I am happy to announce that I will be teaching a five-day Oracle Security expert seminar class with Oracle University at the Oracle offices in Reading, UK, from September 26th to September 30th 2016. This is a 5-day expert....[Read More]

Posted by Pete On 08/07/16 At 02:45 PM

Categories: Security Blogs

SQL Server 2016 AlwaysOn: Direct seeding and performance considerations

Yann Neuhaus - Fri, 2016-07-08 13:05

This blog post follows the previous one about the new direct seeding feature shipped with SQL Server 2016. As a reminder, I had some doubts about using direct seeding with large databases, because the log stream is not compressed by default, but I had forgotten the performance improvements described in the Microsoft BOL. I remember talking about them a couple of months ago in this blog post.

Microsoft did a good job of improving the AlwaysOn log transport layer, and it is very interesting to compare two methods of adding a 100 GB database to an availability group: the usual way, which includes backup and restore operations from the primary to the secondary, and the new direct seeding feature. Which one is quicker?

Let’s just have a quick look at my test environment. Two virtual machines with the following configuration:

  • 4x Intel Core i7-3630QM 2.3 GHz
  • 4GB of RAM
  • 2x 10Gbps network cards
  • One disk hosting both the database data and log files on my primary replica (Samsung Portable SSD T3 500GB, with S.M.A.R.T, NCQ and TRIM)
  • One disk hosting both the database data and log files on my secondary replica (Samsung Portable SSD T1 500GB, with S.M.A.R.T, NCQ and TRIM)
  • One disk hosting backups (Samsung SSD 840 EVO 250GB, with S.M.A.R.T, NCQ and TRIM), used by both virtual machines

As an aside, each SSD disk is able to deliver at least 450MB/s and 35000 IOPS.

[Figure: lab environment]

I also used a custom database named GestionCom, which contains roughly 100GB of data, for my test; 100GB should be sufficient to get relevant results.
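As a quick sanity check of the size, a sketch against sys.master_files (which reports sizes in 8 KB pages):

SELECT name AS file_name,
       size * 8 / 1024 / 1024 AS size_gb
FROM sys.master_files
WHERE database_id = DB_ID(N'GestionCom');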

 

[Figure: the GestionCom database]

 

Let's go ahead and compare both synchronization methods.

 

First test: the usual way of adding a database to an availability group

As said earlier, my first test consists of the usual way, so far, of adding a database to an availability group. There are three options for initial data synchronization: FULL, join only, and skip initial synchronization. I will use the first one, which includes all the steps: back up and restore the concerned database, then join it to the availability group. We can easily imagine that most of the time will be spent in the backup and restore steps. I should also point out that I deliberately did not use fine-tuning options like BUFFERCOUNT, MAXTRANSFERSIZE, or splitting the backup across several media files, in order to stay compliant with the availability group wizard.
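For reference, a minimal sketch of those steps in T-SQL (paths and share names are assumptions from my lab; a real run may also need WITH MOVE clauses if file layouts differ between replicas):

-- on the primary: back up the database and its log to the local backup disk
BACKUP DATABASE GestionCom
TO DISK = N'E:\backup\GestionCom.bak'
WITH CHECKSUM, COMPRESSION, INIT;

BACKUP LOG GestionCom
TO DISK = N'E:\backup\GestionCom.trn'
WITH CHECKSUM, COMPRESSION;

-- on the secondary: restore both from the network share, leaving the database in recovery
RESTORE DATABASE GestionCom
FROM DISK = N'\\primary\backup\GestionCom.bak'
WITH CHECKSUM, NORECOVERY;

RESTORE LOG GestionCom
FROM DISK = N'\\primary\backup\GestionCom.trn'
WITH CHECKSUM, NORECOVERY;

-- join: on the primary, then on the secondary
ALTER AVAILABILITY GROUP TestGrp ADD DATABASE GestionCom;        -- primary
ALTER DATABASE GestionCom SET HADR AVAILABILITY GROUP = TestGrp; -- secondary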

 

Step                                                                          Duration
-------------------------------------------------------------------------    --------
Backup database to local backup disk (primary), WITH CHECKSUM, COMPRESSION   06'55''
Restore database from network share (secondary), WITH CHECKSUM, NORECOVERY   17'10''
Join database to availability group + start synchronization                  00'01''
Total                                                                         24'06''

 

What about resource consumption?

On the primary …

[Figure: CPU and disk activity during backup]

 

[Figure: network bandwidth during restore, from the primary]

On the secondary …

 

[Figure: CPU and disk activity during restore]

[Figure: network bandwidth during restore, from the secondary]

CPU utilization is 35% on average during the test. Moreover, disk write throughput stagnates at around 130 MB/s on average, covering both backup and restore activities. Network throughput also stagnates between 135 Mbps and 174 Mbps in my test.

So it is clear that my environment is under-utilized in this first test.

 

Second test: the new direct seeding method

This time I will use the new database deployment method: direct seeding. As said in my previous blog, this feature greatly simplifies the process of adding a database, but what about synchronization speed and resource consumption?
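As a quick reminder of the setup (a sketch; TestGrp and the replica name come from my lab), direct seeding is switched on per replica, and each secondary must authorize the availability group to create databases:

-- on the primary: switch each replica to automatic seeding
ALTER AVAILABILITY GROUP TestGrp
MODIFY REPLICA ON N'SQL16-SEC' WITH (SEEDING_MODE = AUTOMATIC);

-- on the secondary: allow the AG to create the seeded database
ALTER AVAILABILITY GROUP TestGrp GRANT CREATE ANY DATABASE;

-- on the primary: adding the database now triggers direct seeding
ALTER AVAILABILITY GROUP TestGrp ADD DATABASE GestionCom;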

Well, to get a good picture of what happens during the seeding process, we will use different tools, such as the new sys.dm_hadr_automatic_seeding DMV and extended events. Extended events will help us understand what happens under the covers; to measure only the duration of the operation we don't need them. If you look at the event list and its categories, you will probably notice a new dbseed category that corresponds to direct seeding. Events in this category are only available from the debug channel. That's fine, because we want to track when the seeding process starts, when it finishes, and what happens between these two events (failures, timeouts, progress). By the way, the hadr_physical_seeding_progress event may be very useful to get a picture of the network activity of the concerned seeding session if your network card is shared with other sessions or availability group replication traffic. In my case I'm the only user, so I will get this information directly from the Task Manager panel.

So let’s create the extended event session:

CREATE EVENT SESSION [hadr_direct_seeding]
ON SERVER
ADD EVENT sqlserver.hadr_automatic_seeding_start
(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_state_transition
(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_success
(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_timeout
(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_physical_seeding_progress
(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
    SET filename = N'hadr_direct_seeding',
        max_file_size = (2048),
        max_rollover_files = (10)
)
WITH
(
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
)
GO
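Once the test is complete, the captured events can be read back from the session's target files. A minimal sketch (with no explicit path given above, the .xel files land in the instance's default LOG directory, so you may need to prefix that path to the pattern):

SELECT
    object_name AS event_name,
    CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(N'hadr_direct_seeding*.xel', NULL, NULL, NULL);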

 

And after adding the GestionCom database to the TestGrp availability group, the direct seeding feature comes into play. Honestly, it was a very big surprise! Let’s take a look at the network utilization:

[Figure: network bandwidth during seeding]

 

A network usage of 2.2 Gbps on average this time! The direct seeding feature makes much better use of the network bandwidth, and we can clearly see why Microsoft put effort into improving the synchronization process.

Let's now move on to CPU and disk utilization, first on the primary and then on the secondary:

[Figure: disk throughput during seeding, from the primary]

[Figure: disk throughput during seeding, from the secondary]

Roughly 350 up to 450 MB/s on each side (a 2x gain), but CPU utilization climbed to 70% during my test (also a 2x increase). So better resource usage, but at the cost of higher CPU utilization.

Let’s finish by looking at the sys.dm_hadr_automatic_seeding DMV that provides the answer to the question: are we faster in this case?

select
    ag.name as aag_name,
    ar.replica_server_name,
    d.name as database_name,
    has.current_state,
    has.failure_state_desc as failure_state,
    has.error_code,
    has.performed_seeding,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.start_time) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.completion_time) as completion_time,
    has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
    on ag.group_id = has.ag_id
join sys.availability_replicas as ar
    on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
    on d.group_database_id = has.ag_db_id

 

 

[Figure: DMV statistics]

 

And the answer is yes, as we might expect! Only 8 minutes (a 3x gain) to replicate and synchronize the GestionCom database between the two replicas, compared to 24 minutes with the first method. But that's not all: let's focus on the redo thread activity on the secondary, where you may notice a very interesting rate value (~ 12 MB/s). I don't remember having seen such a value with current availability groups at customer sites. This is the second improvement made by Microsoft: SQL Server 2016 introduces parallel redo capability. As a reminder, before SQL Server 2016 there was only one redo thread per database; in this context, a single redo thread simply could not keep up with applying the changes persisted in the log.

From the secondary, we may see some changes by looking at the sys.dm_exec_requests DMV:

select 
	r.command, 
	r.status,
	r.wait_type,
	r.wait_time,
	r.last_wait_type,
	r.scheduler_id
from sys.dm_exec_requests as r
where r.command like '%REDO%'
order by r.scheduler_id

 

 

[Figure: parallel redo threads]

Direct seeding is definitely a solution to take into account for our future database deployments, but I think we have to keep in mind two things from this test: CPU and network consumption from the seeding activity may impact the performance of other applications, and vice versa. In the real world, there is a good chance of being in this situation.

Finally, let's have a look at the extended event output. With respect to what we want to highlight in this blog post, we don't get any other valuable information, but one thing got my attention: the LIMIT_CONCURRENT_BACKUPS value in the current value column (underlined in red). What does it mean exactly? Let's talk about it in a future blog post, because it is a little bit out of scope here.

[Figure: extended events output with direct seeding]

 

Third test: direct seeding with compression

Let's talk about the last test I performed. I used direct seeding without compression in the previous test, since SQL Server does not compress the data stream by default. However, we can force SQL Server to use compression by enabling trace flag 9567. After all, we want to avoid direct seeding flooding the wire and impacting the existing workload of other applications.
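Enabling it is a one-liner on the primary (as a sketch; set it as a startup parameter instead if it must survive a restart):

-- enable compression of the direct seeding stream, global scope
DBCC TRACEON (9567, -1);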

I have to admit that verifying compression with direct seeding is not so obvious. For instance, I didn't see any difference in the DMVs indicating that compression is in use (the is_compression_enabled column of the sys.dm_hadr_physical_seeding_stats DMV is always equal to 0 whether or not we use compression). The only obvious difference comes from the network throughput, which is lower with compression (a 2.5x gain). However, I noticed a significant increase in CPU utilization, close to 100% on the primary in my case.
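For reference, the check I ran looks like this (a sketch against the SQL Server 2016 DMV):

SELECT
    local_database_name,
    role_desc,
    transfer_rate_bytes_per_second,
    is_compression_enabled
FROM sys.dm_hadr_physical_seeding_stats;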

[Figure: resource usage with seeding and compression, from the primary]

What about seeding time? Well, I didn't notice any gain there. Does compression allow us to save network bandwidth? Maybe. It is hard to say with only this test on one specific environment.

I then tried adding 3 vCPUs to each replica, leaving one vCPU to the system, for a total of 7 vCPUs dedicated to SQL Server.

[Figure: CPU usage during seeding with compression, 8 vCPUs]

At this point, I admit to being a little bit surprised, and I wonder whether compression correctly uses all the available processors, given the uneven distribution of CPU usage. The picture above is a good representation of what I saw during the other tests I performed with compression. In addition, I didn't see any obvious performance gain in terms of duration, except that the wire is less used. I'm a little bit disappointed by compression, but once again it is still much too early to draw a conclusion, and I'm looking forward to seeing direct seeding in action at my customers, on real production infrastructure.

The bottom line is that direct seeding is a very promising feature, and I love it because it is the directly visible part of the AlwaysOn performance improvements shipped with SQL Server 2016. However, and this is my personal opinion, we shouldn't let it fool us: consider using direct seeding carefully, according to your workload and available resources. Fortunately, in most cases it will be suitable.

Stay tuned!


This article, SQL Server 2016 AlwaysOn: Direct seeding and performance considerations, appeared first on Blog dbi services.

Partner Webcast - Oracle Mobile Cloud Service for Developers: Updates and Features

The goal of Oracle Mobile Cloud Service (MCS) is to enable mobile application developers, who are continually looking for new ways to accelerate development of apps to meet their consumers'...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DML and Bloom

Jonathan Lewis - Fri, 2016-07-08 07:01

One of the comments on my recent posting about “Why use pl/sql bulk strategies over simple SQL” pointed out that it’s not just distributed queries that can change plans dramatically when you change from a simple select to “insert into … select …”; there’s a similar problem with queries that use Bloom filters – the filter disappears when you change from the query to the DML.

This seemed a little bizarre, so I did a quick search on MoS (using the terms “insert select Bloom Filter”) to check for known bugs and then tried to run up a quick demo. Here’s a summary of the related bugs that I found through my first simple search:

Bug 20112932 : BLOOM FILTER IS NOT CHOOSED WHEN EXECUTING INSERT SELECT STATEMENT
11.2.0.4 - Nov 2014: "not a bug".
This cites three more documents:
  Bug 13801198  BLOOM PRUNING/FILTER NOT USED IN DML STATEMENTS 
    11.2.0.3 - March 2012 - Duplicate, base bug: 18949550
    18949550: MERGE QUERY NOT USING BLOOM FILTER
    11.2.0.3 - June 2014: fixed in 12.2 (no patches yet)

  Bug 14325392  BLOOM FILTER NOT SUPPORTED IN PARTITION WISE JOIN 
    11.2.0.3 - July 2012 - fixed in 12.1 
    (but I think this is a PX filter, not a partition filter)

  Bug 17716301  BLOOM FILTER NOT USED FOR OUTER JOIN

    Aug 2014 - hidden bug note. Patches available for 11.2.0.3
    Note 1919508.1 describes the problem.

And here’s some code to demonstrate the problem – running 11.2.0.4 (with similar results on 12.1.0.2).


rem
rem     Script:         dml_bloom_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem

create table pt_hash (
        id,
        grp,
        small_vc,
        padding
)
nologging
pctfree 90 pctused 10
partition by hash(grp)
(
        partition p1,
        partition p2,
        partition p3,
        partition p4
)
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        trunc(rownum/50)                grp,
        to_char(trunc(rownum/20))       small_vc,
        rpad('x',100)                   padding
from
        generator, generator
where
        rownum <= 1e5
;

create table t1
as
select
        rownum  id,
        rownum  n1,
        lpad(rownum,10) v1,
        rpad('x',100) padding
from
        dual
connect by
        rownum <= 4
;

create table target (
        n1      number,
        id      number
)
;

-- gather stats on t1 and pt_hash (for all columns size 1, otherwise default)

There’s nothing fancy in this code, just a hash partitioned table with 100,000 rows (and a lot of wasted space), and a “driving” table with 4 rows that I can use in a very simple join to the partitioned table. I’ve also got an empty table ready for an insert statement.

So here’s a query with its execution plan (with rowsource execution statistics) that (based on the content of the t1 table) should select from at most two of the four partitions. I’ve hinted a hash join which is where Bloom filtering can be very effective with partitioned table joins:


set serveroutput off
alter session set statistics_level = all;

select
        /*+
                leading(t1 h1)
                use_hash(h1)    no_swap_join_inputs(h1)
        */
        t1.n1,
        h1.id
from
        t1,
        pt_hash h1
where
        t1.id between 2 and 3
and     h1.grp = t1.n1
and     h1.id <= 100
;

select * from table(dbms_xplan.display_cursor(null,null,'outline allstats last cost partition'));

------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |      1 |        |  2115 (100)|       |       |      1 |00:00:00.16 |    8374 |   8367 |       |       |          |
|*  1 |  HASH JOIN                  |         |      1 |      3 |  2115   (2)|       |       |      1 |00:00:00.16 |    8374 |   8367 |  2440K|  2440K|  777K (0)|
|   2 |   PART JOIN FILTER CREATE   | :BF0000 |      1 |      3 |     2   (0)|       |       |      2 |00:00:00.01 |       2 |      0 |       |       |          |
|*  3 |    TABLE ACCESS FULL        | T1      |      1 |      3 |     2   (0)|       |       |      2 |00:00:00.01 |       2 |      0 |       |       |          |
|   4 |   PARTITION HASH JOIN-FILTER|         |      1 |    100 |  2113   (2)|:BF0000|:BF0000|     51 |00:00:00.16 |    8372 |   8367 |       |       |          |
|*  5 |    TABLE ACCESS FULL        | PT_HASH |      2 |    100 |  2113   (2)|:BF0000|:BF0000|     51 |00:00:00.16 |    8372 |   8367 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------------------

-- Predicates and Outline redacted

The most significant points to notice are:

  • Operation 2 – Part Join Filter Created: we’ve created a Bloom filter
  • Operation 4 – Partition Hash Join – Filter: we’ve used the Bloom filter, Pstart/Pstop = :BF0000
  • Operations 5 – Table Access Full: starts twice, the two partitions we need

And now we use the same SQL statement but insert the result set into the holding table:

insert into target(n1, id)
select
        /*+
                ordered
                use_hash(h1)    no_swap_join_inputs(h1)
        */
        t1.id,
        h1.id
from
        t1,
        pt_hash h1
where
        t1.id between 2 and 3
and     h1.grp = t1.n1
and     h1.id <= 100
;

select * from table(dbms_xplan.display_cursor(null,null,'outline allstats last cost partition'));

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |         |      1 |        |  2115 (100)|       |       |      0 |00:00:00.02 |   16698 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL |         |      1 |        |            |       |       |      0 |00:00:00.02 |   16698 |       |       |          |
|*  2 |   HASH JOIN              |         |      1 |      3 |  2115   (2)|       |       |      1 |00:00:00.02 |   16689 |  1969K|  1969K|  723K (0)|
|*  3 |    TABLE ACCESS FULL     | T1      |      1 |      3 |     2   (0)|       |       |      2 |00:00:00.01 |       2 |       |       |          |
|   4 |    PARTITION HASH ALL    |         |      1 |    100 |  2113   (2)|     1 |     4 |    100 |00:00:00.02 |   16687 |       |       |          |
|*  5 |     TABLE ACCESS FULL    | PT_HASH |      4 |    100 |  2113   (2)|     1 |     4 |    100 |00:00:00.02 |   16687 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

-- Predicate and Outline sections redacted

In this case we don't see any operation creating a Bloom filter, and operation 5 reports 4 starts, one for each partition, with the Pstart/Pstop columns showing 1 - 4. As a cross-check you can also see that the number of buffers visited has doubled and the time (though don't trust it too much on such a small test) has increased.

To my surprise the outline information for the first plan didn't report a Bloom filter hint (px_join_filter), but I tried adding one to the insert statement anyway and nothing changed. However, there is a "subquery pruning" mechanism that has been available to the optimizer for a very long time now, so I decided to try hinting that for both queries with /*+ subquery_pruning(@sel$1 h1@sel$1 partition) */. This had no effect on the select statement, but produced the following plan for the insert:


-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name    | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT          |         |      1 |        |  2115 (100)|       |       |      0 |00:00:00.01 |    8390 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL  |         |      1 |        |            |       |       |      0 |00:00:00.01 |    8390 |       |       |          |
|*  2 |   HASH JOIN               |         |      1 |      3 |  2115   (2)|       |       |      1 |00:00:00.01 |    8381 |  1969K|  1969K|  829K (0)|
|*  3 |    TABLE ACCESS FULL      | T1      |      1 |      3 |     2   (0)|       |       |      2 |00:00:00.01 |       2 |       |       |          |
|   4 |    PARTITION HASH SUBQUERY|         |      1 |    100 |  2113   (2)|KEY(SQ)|KEY(SQ)|     51 |00:00:00.01 |    8379 |       |       |          |
|*  5 |     TABLE ACCESS FULL     | PT_HASH |      2 |    100 |  2113   (2)|KEY(SQ)|KEY(SQ)|     51 |00:00:00.01 |    8377 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------------

As you can see operation 4 now reports Partition Hash Subquery – Oracle has “pre-queried” the driving table (you can see the statement in the 10046 trace file if you enable SQL trace) to discover which partitions it will have to visit – and operation 5 starts twice with a Pstart/Pstop of KEY(SQ).

Interestingly the hint didn’t end up in the Outline section of the execution plan, so this is an example of a query which you can’t fix by capturing the SQL Plan Baseline for one statement against the text for another – though you can hack the hint into an SQL Profile or (my preference in this case, I think) an SQL Patch.
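For reference, here's a sketch of hacking the hint into an SQL Patch through the internal (undocumented, version-dependent) call available in 11g and 12.1; note that the sql_text must match the cursor's text exactly, so in practice you would copy it from v$sql.sql_fulltext:

begin
        sys.dbms_sqldiag_internal.i_create_patch(
                -- shown condensed for readability; must match sql_fulltext exactly, whitespace included
                sql_text  => 'insert into target(n1, id) select /*+ ordered use_hash(h1) no_swap_join_inputs(h1) */ t1.id, h1.id from t1, pt_hash h1 where t1.id between 2 and 3 and h1.grp = t1.n1 and h1.id <= 100',
                hint_text => 'subquery_pruning(@sel$1 h1@sel$1 partition)',
                name      => 'dml_bloom_patch'
        );
end;
/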

Footnote:

After publishing this note I started to browse through the comments on the article I'd linked to for subquery pruning, and discovered that an example of this insert/select issue appeared there with a surprising workaround: using a "multi-table insert" (with just a single table). At the moment, though, I haven't managed to get this working with this example.

Footnote 2:

I’ve just been reminded by email of another detail mentioned in the comments of the earlier post – this problem isn’t present for direct path (i.e. /*+ append */) inserts – and I have run a quick test on 12.1.0.2 to check that this is indeed the case (though the fix isn’t present in 11.2.0.4).  Don’t forget to check the listed Bug notes on MoS, in case there’s a back-port available for your version of Oracle.

 


How to get access to the Oracle Public Cloud


We share our skills to maximize your revenue!
Categories: DBA Blogs

GATHER_TABLE_STATS difference between 11g and 12c (DBMS_STATS.flush_database_monitoring_info)

Tom Kyte - Fri, 2016-07-08 03:26
Hi Tom, We are looking to refresh stats when they become stale while a lot of DML is happening on a table, and we were testing how to handle this. FLUSH_DATABASE_MONITORING_INFO is the right procedure, as it updates the dictionary. We ...
Categories: DBA Blogs

Anyway to Access Data in a Table That Has Been Made Invalid By Dropping an XMLSchema

Tom Kyte - Fri, 2016-07-08 03:26
Is there any way to make a table valid again if it has an XMLType column with an associated XMLSchema which has been deleted via dbms_xmlschema.deleteSchema with the DELETE_INVALIDATE parameter? I cannot even successfully drop the XMLType column from...
Categories: DBA Blogs

LEFT JOIN on nested table and TABLE function: unexpected results

Tom Kyte - Fri, 2016-07-08 03:26
Hi! This is my first question here, so don't hesitate to tell me if I am unclear or if key information is missing. I have been getting unexpected results while trying a LEFT JOIN on a nested table. Here are my tables: CREATE OR REPLACE ...
Categories: DBA Blogs

utl_http results to ORA-12541: TNS:no listener

Tom Kyte - Fri, 2016-07-08 03:26
Hi, I have an 11g DB and the URL below, which I can open in the browser to display a PDF. This is a PDF report generated from a Jasper server. http://serv-ora11g:8081/jasperserver/flow.html?_flowId=viewReportFlow&j_username=jasperadmin&j_password=jaspe...
Categories: DBA Blogs

utl_http begin_request is resulting in ORA-29263: HTTP protocol error

Tom Kyte - Fri, 2016-07-08 03:26
Hi, I have a URL that I can access with a browser, and it displays the PDF: http://serv-ora11g:8081/jasperserver/flow.html?_flowId=viewReportFlow&j_username=jasperadmin&j_password=jasperadmin&reportUnit=/KVS/Collection/col1190&output=pdf I c...
Categories: DBA Blogs

Cross Platform "Endian" Conversion from Solaris to Linux Intel-Based Exadata -- Mounting NFS to Exadata Machine

Tom Kyte - Fri, 2016-07-08 03:26
Hi, Is it possible for a Solaris NFS share to be mounted on a Linux Intel-based Exadata machine? This approach seems (at least to me) to be alluded to in this section of the Oracle Database Backup and Recovery User's Guide: https://docs.oracle.com/cd/E11882_01/back...
Categories: DBA Blogs

Creating tables (maybe)

Tom Kyte - Fri, 2016-07-08 03:26
How do I create a table for this one? AAA BBB CCC / RED 1 2 3 / BLUE 4 5 6 / GREEN 7 8 9. I tried it in SQL Developer; it gave me an error. And converted into an output like: RED BLUE GREEN / AAA 1 4 ...
Categories: DBA Blogs

cursor for loop

Tom Kyte - Fri, 2016-07-08 03:26
Hi, I have a cursor FOR loop which does some calculation in the cursor query. Within the loop I have an INSERT with a COMMIT for every iteration. There are no statements other than the INSERT and COMMIT inside the loop. Example: DECLARE cursor...
Categories: DBA Blogs

Dba vs Apps DBA

Tom Kyte - Fri, 2016-07-08 03:26
What is the difference between an Oracle DBA and an Oracle Apps DBA? I see so many websites where some use the term Oracle DBA and some Oracle Apps DBA.
Categories: DBA Blogs

Undo Table space

Tom Kyte - Fri, 2016-07-08 03:26
Hi Tom, Following are the scenarios; please let me know what will happen in each: 1. I have an undo tablespace of minimal size, and I delete records from a table which is huge in size (a size which the undo tablespace ca...
Categories: DBA Blogs

Node-oracledb 1.10 has Enhanced Metadata

Christopher Jones - Fri, 2016-07-08 03:06

Top feature: Enhanced Metadata

The changes in node-oracledb 1.10 are:
  • Enhanced query metadata thanks to a Pull Request from Leonardo. He kindly allowed us to take over and fine tune the implementation.

    Additional metadata for query and REF CURSOR columns is available in the metaData object when the new boolean oracledb.extendedMetaData attribute or the corresponding execute() option attribute extendedMetaData is true.

    For example, if the DEPARTMENTS table is like:

    SQL> desc departments
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     DEPARTMENT_ID                             NOT NULL NUMBER(4)
     DEPARTMENT_NAME                           NOT NULL VARCHAR2(30)
     MANAGER_ID                                         NUMBER(6)
     LOCATION_ID                                        NUMBER(4)
    

    Then a query in node-oracledb would give extended metadata:

    [ { name: 'DEPARTMENT_ID',
           fetchType: 2002,
           dbType: 2,
           precision: 4,
           scale: 0,
           nullable: false },
         { name: 'DEPARTMENT_NAME',
           fetchType: 2001,
           dbType: 1,
           byteSize: 30,
           nullable: false },
         { name: 'MANAGER_ID',
           fetchType: 2002,
           dbType: 2,
           precision: 6,
           scale: 0,
           nullable: true },
         { name: 'LOCATION_ID',
           fetchType: 2002,
           dbType: 2,
           precision: 4,
           scale: 0,
           nullable: true } ]

    You can see that the available attributes vary with the database type. The attributes are described in the metaData documentation.

    The commonly used column name is always available in metaData regardless of the value of extendedMetaData. This is consistent with previous versions.

    The numbers in the metadata dbType and fetchType attributes are described in the new DB_TYPE_* constants and the existing node-oracledb type constants, respectively. Your code should use these constants when checking metadata types.

    Why did we make the extra metadata optional and off by default? Why do the types use numbers instead of strings? We had a lot of debate about common use cases, out-of-box experience, performance etc. and this is the way the cookie crumbled.

    I know this enhancement will make your applications easier to maintain and more powerful.

  • Fixed an issue preventing the garbage collector from cleaning up when a query with LOBs is executed but the LOB data isn't actually streamed.

  • Report an error earlier when a named bind object is used in a bind-by-position context. A new error NJS-044 is returned. Previously errors like ORA-06502 were given since the expected attributes were not found and bind settings ended up as defaults. You can still use unnamed objects for bind-by-position binds like:

    var sql = "begin myproc(:1, :2, :3); end;";
    var binds = [ id, name, { type: oracledb.STRING, dir: oracledb.BIND_OUT } ];
    

    Here the third array element is an unnamed object.

  • Fixed a bug where an error event could have been emitted on a QueryStream instance prior to the underlying ResultSet having been closed. This would cause problems if the user tried to close the connection in the error event handler as the ResultSet could have prevented it.

  • Fixed a bug where the public close method was invoked on the ResultSet instance that underlies the QueryStream instance if an error occurred during a call to getRows. The public method would have thrown an error had the QueryStream instance been created from a ResultSet instance via the toQueryStream method. Now the call to the C layer close method is invoked directly.

  • Updated Pool._logStats to throw an error instead of printing to the console if the pool is not valid.

  • Added GitHub Issue and Pull Request templates.

  • Updated installation instructions for OS X using the new Instant Client 12.1 release.

  • Added installation instructions for AIX and Solaris x64.

  • Some enhancements were made to the underlying DPI data access layer. These were developed in conjunction with a non-node-oracledb consumer of DPI, but a couple of changes lay the groundwork for potential, user-visible node-oracledb enhancements:

    • Allow SYSDBA connections

    • Allow session tagging

    • Allow the character set and national character set to be specified via parameters to the DPI layer

    • Support heterogeneous pools (in addition to existing homogeneous pools)

    To reiterate, these are not exposed to node-oracledb.

Resources

Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Enable auditing in oracle database

Learn DB Concepts with me... - Thu, 2016-07-07 13:54

ENABLE AUDITING IN ORACLE DATABASE
SERVER SETUP FOR DB AUDITING


Auditing is a default feature available in Oracle server. The initialization parameters that influence its behaviour can be displayed using the SHOW PARAMETER SQL*Plus command.

SQL> SHOW PARAMETER AUDIT

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      C:\ORACLE\PRODUCT\10.2.0\ADMIN
                                                 \DB10G\ADUMP
audit_sys_operations                 boolean     FALSE
audit_trail                          string      NONE

SQL>

Auditing is disabled by default, but can be enabled by setting the AUDIT_TRAIL static parameter, which has the following allowed values.

AUDIT_TRAIL = { none | os | db | db,extended | xml | xml,extended }

The following list provides a description of each setting (choose one based on your requirement):


none or false - Auditing is disabled.
os - Auditing is enabled, with all audit records directed to the operating system's audit trail.
db or true - Auditing is enabled, with all audit records stored in the database audit trail (SYS.AUD$).
db,extended - As db, but the SQL_BIND and SQL_TEXT columns are also populated.
xml - Auditing is enabled, with all audit records stored as XML-format OS files.
xml,extended - As xml, but the SQL_BIND and SQL_TEXT columns are also populated.


To enable auditing and direct audit records to the database audit trail, we would do the following.
SQL> ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;
System altered.

SQL> SHUTDOWN
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> STARTUP
ORACLE instance started.

Total System Global Area  289406976 bytes
Fixed Size                  1248600 bytes
Variable Size              71303848 bytes
Database Buffers          213909504 bytes
Redo Buffers                2945024 bytes
Database mounted.
Database opened.

SQL>

Note: Enabling or disabling auditing in the database will only take effect after a database restart.
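With auditing active, you choose what to audit and then read the results back from the audit trail. A couple of illustrative statements (SCOTT is just an example user):

AUDIT SESSION;                              -- audit all logons/logoffs

AUDIT SELECT TABLE, UPDATE TABLE BY scott;  -- audit SELECT and UPDATE on tables by SCOTT

-- review the captured records (AUDIT_TRAIL=db writes to SYS.AUD$, exposed via DBA_AUDIT_TRAIL)
SELECT username, action_name, timestamp
FROM dba_audit_trail
ORDER BY timestamp;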
Categories: DBA Blogs

Invisible Bug

Jonathan Lewis - Thu, 2016-07-07 11:27

At this Wednesday’s Oracle Midlands event someone asked me if Oracle would use the statistics on invisible indexes for the index sanity check. I answered that there had been a bug in the very early days of invisible indexes when the distinct_key statistic on the index could be used even though the index itself would not be considered as a candidate in the plan (and the invisible index is still used to avoid foreign key locking – even in 12c – it’s only supposed to be invisible to the optimizer).

The bug was fixed quite a long time ago – but a comment on the “Index Sanity” article has introduced me to a related bug that is still present in 11.2.0.4 where the presence of an invisible index can affect an execution plan. Here’s a little model (run under 11.2.0.4) to demonstrate:

rem
rem     Script:         invisible_index_bug.sql
rem     Author:         Jonathan Lewis
rem

execute dbms_random.seed(0)

drop table t2;
drop table t1;

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(mod(rownum,1000) as number(8,0))           n1,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create table t2
as
select
        rownum id,
        trunc(dbms_random.value(0,10000)) n1
from
        dual
connect by
        level <= 100
;
begin 
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

column n1 new_value m_n1
select n1 from t2 where id = 50;
clear columns

set autotrace traceonly explain

select
        t1.*
from
        t1, t2
where
        t2.n1 = &m_n1
;

create unique index t2_i1 on t2(n1)
-- invisible
;

select
        t1.*
from
        t1, t2
where
        t2.n1 = &m_n1
;

set autotrace off

All I’ve done is create a couple of tables then do a join that we might expect to see executed as a cartesian merge join; at one point I was going to make the data more complicated and include a join condition, but decided to keep things small and simple so it’s a silly example but it is sufficient to make the point. The funny little bit about selecting an n1 value from t2 was also in anticipation of a more complex example but it does, at least, ensure I query for a value that is in range.

Here are the two execution plans from 11.2.0.4 – the key feature is that the plan changes after the invisible index is created:


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1000K|   119M|  2263   (3)| 00:00:12 |
|   1 |  MERGE JOIN CARTESIAN|      |  1000K|   119M|  2263   (3)| 00:00:12 |
|*  2 |   TABLE ACCESS FULL  | T2   |     1 |     4 |     2   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |  1000K|   115M|  2261   (3)| 00:00:12 |
|   4 |    TABLE ACCESS FULL | T1   |  1000K|   115M|  2261   (3)| 00:00:12 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T2"."N1"=5308)


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  1000K|   119M|  2263   (3)| 00:00:12 |
|   1 |  NESTED LOOPS      |      |  1000K|   119M|  2263   (3)| 00:00:12 |
|*  2 |   TABLE ACCESS FULL| T2   |     1 |     4 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |  1000K|   115M|  2261   (3)| 00:00:12 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T2"."N1"=5308)

Notice how the plan without the invisible index does a “sort” (actually a “buffer sort” so it’s just copying the data into local memory), while the plan with the not quite invisible enough index in place gets away with just a full tablescan. This is bug 16564891, associated with base bug 16544878.

The bug notes say “fixed in 12.2”, but in Oracle 12.1.0.2 the first plan appears in both cases, and we have to make the index visible to get the second plan. (Take note of the need for the “negative” test to prove the point; the fact that the same plan appears for both cases doesn’t, by itself, prove that the bug was fixed, we have to show that the plan would have changed if the bug had still been present).
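For anyone repeating the test, toggling the index visibility (and the session parameter that lets the optimizer use invisible indexes) is simply:

alter index t2_i1 invisible;
alter index t2_i1 visible;

-- alternatively, leave the index invisible but let the optimizer consider it:
alter session set optimizer_use_invisible_indexes = true;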

I believe the problem isn't that Oracle is using the statistics when it shouldn't; the change appears because in 11g Oracle incorrectly allows itself to see the uniqueness of the index and infer that table t2 is a "single row" table. In 12c the optimizer calculates that there will probably be only one row, but that doesn't stop it choosing the merge join cartesian as the "insurance bet" against having to do more than one tablescan of the t1 table. We can see this difference in the 10053 trace files: the 11g file has an entry for the "Single Table Access Path" for t2 that reads:

1-ROW TABLES:  T2[T2]#0

If you read the bug note for bug 16564891 you’ll see that it has a more realistic example of the problem – and it may give you some idea of where you might run into the bug. In general I don’t think many people are likely to come across the problem since it revolves around uniqueness, which is rather an important property, and there can’t be many occasions when someone decides to add (or test dropping) a unique index. Given that the example in the bug looks like “add a unique index to a dimension table that’s joining to a fact table” that may be a good pointer to where you’re most likely to run into the problem — when you’re trying to enforce data correctness in a data warehouse.

 

