SQL & PL/SQL
This is one of those myths that is probably best demonstrated by example. First, let’s create some test data: 10 million rows with the following columns:
- COL1 – Alternating values of 1 and 0 – not indexed
- COL2 – Identical to COL1 – bitmap indexed
- PAYLOAD – 100 bytes of X’s to make the rows a realistic width – not indexed
COL1 and COL2 are identical low-cardinality columns: each contains only two distinct values.
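The setup described above can be sketched as follows (the original script isn't shown, so the table name, column types, and row-generator are assumptions):

[code]
-- Assumed table name T; PAYLOAD width taken from the description
create table t (
  col1    number(1),        -- alternating 1/0, not indexed
  col2    number(1),        -- identical to col1, bitmap indexed
  payload varchar2(100)     -- 100 bytes of X's for realistic row width
);

-- Cross join of two row generators produces 10,000 x 1,000 = 10M rows
insert /*+ append */ into t
select mod(rownum, 2), mod(rownum, 2), rpad('X', 100, 'X')
from  (select level from dual connect by level <= 10000) a,
      (select level from dual connect by level <= 1000)  b;
commit;

create bitmap index t_col2_bix on t (col2);

exec dbms_stats.gather_table_stats(user, 'T')
[/code]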
I recently compared the caching mechanisms of scalar subquery caching (SSC) and deterministic functions in 11.2. Unfortunately, I do not have enough time for a full analysis, so I will post it in parts.
1. Both mechanisms are based on hash functions. (Tom Kyte and Jonathan Lewis have written excellent material about hash tables and hash collisions for scalar subquery caching; see "Cost-Based Oracle Fundamentals", chapter 9.)
2. Deterministic caching depends on the fetch size (arraysize): results are cached only within one fetch call. SSC has no such limitation.
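The two mechanisms can be exercised side by side like this (the function and table names are illustrative, not from the original tests):

[code]
-- A deliberately simple deterministic function
create or replace function f (p number)
  return number deterministic
is
begin
  return p * 2;
end;
/

-- Scalar subquery caching: wrap the call in a scalar subquery;
-- the cache survives across fetch calls
select (select f(col1) from dual) from t;

-- Deterministic caching: call the function directly; the cache
-- lives only within one fetch call, so a small arraysize means
-- more real executions
select f(col1) from t;
[/code]

Adding a package-level counter inside the function makes the difference visible: run each query at different arraysize settings and compare how many times the function body actually executes.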
I want to share my modifications of Tom Kyte's runstats package, which include:
- Any number of run sets for analysis
- Standalone: No need to create other objects
- Ability to specify session SID for statistics gathering
- Ability to specify what to gather: latches, stats or both
- Separate mask filters for output by statname and latchname
- Ability to specify the difference percentage for output, separately for latches and statistics
- More accurate with some statistics because it avoids global temporary table usage, but less in
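The modified package itself isn't shown here, but Tom Kyte's original runstats is driven with a three-call harness like the one below, and a modified version would presumably keep a similar shape (procedure names are from the original package):

[code]
exec runstats_pkg.rs_start;
-- ... run approach #1 ...
exec runstats_pkg.rs_middle;
-- ... run approach #2 ...
exec runstats_pkg.rs_stop(1000);  -- report latches/stats differing by more than 1000
[/code]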
Oracle Database A-to-Z questions and answers; very importantly, you can also share your experiences here...
This blog is for both beginners and experienced people to discuss and gain knowledge about Oracle.
Here we are going to share everything about Oracle....
Simply put, we will work only on Oracle-related topics and nothing else here....
We will eat Oracle, drink Oracle and walk Oracle.
1. DML data can be recovered by ROLLBACK before COMMIT
2. Can remove any subset of data
3. DELETE is slower when the table has numerous indexes and triggers
4. DML triggers fire for the DELETE method
5. Data can be recovered after COMMIT by the flashback method
6. The high-water mark will remain the same
7. Deleted data can be captured with the RETURNING clause
8. You cannot delete data if a function-based index on the table is invalid
9. You cannot delete data through complex views.
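Points 1 and 7 can be demonstrated together (the EMP table and its columns are assumed for illustration):

[code]
declare
  l_ids sys.odcinumberlist;
begin
  delete from emp
   where deptno = 10
  returning empno bulk collect into l_ids;   -- point 7: capture deleted rows

  dbms_output.put_line(l_ids.count || ' rows deleted');

  rollback;   -- point 1: uncommitted DML is fully recoverable
end;
/
[/code]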
We all know that CHAR and VARCHAR2 are two different datatypes used for storing strings in Oracle. When I asked many people what exactly the difference is, all of them said that VARCHAR2 can be used for variable-sized character strings. As I was not satisfied with that answer, as always, I decided to do my own experiments on these two.
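A minimal experiment along these lines (the table and column names are mine, not the author's) makes the storage difference visible:

[code]
create table t_str (c char(10), v varchar2(10));

insert into t_str values ('abc', 'abc');

select lengthb(c) c_len,   -- 10: CHAR blank-pads to its full declared width
       lengthb(v) v_len    -- 3:  VARCHAR2 stores only what was inserted
from   t_str;

-- Comparison semantics differ too:
--   c = 'abc'  is TRUE  (both operands are CHAR, blank-padded comparison)
--   c = v      is FALSE (a VARCHAR2 operand forces non-padded comparison,
--                        so the trailing spaces in C make them unequal)
[/code]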
The old Oracle join syntax really should be consigned to history.
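To show what is meant, here is the same (illustrative) query in both styles, using the classic EMP/DEPT tables:

[code]
-- Old Oracle-specific syntax: join and filter predicates are mixed
-- in the WHERE clause, and the outer join is marked with (+)
select e.ename, d.dname
from   emp e, dept d
where  e.deptno = d.deptno (+)
and    e.sal > 1000;

-- ANSI syntax: the join condition lives in the FROM clause,
-- and the outer join is stated explicitly
select e.ename, d.dname
from   emp e
       left join dept d on d.deptno = e.deptno
where  e.sal > 1000;
[/code]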
This single script provides an overall picture of the database in terms of wait events, active/inactive/killed sessions, top processes (physical I/O, logical I/O, memory and CPU), top CPU usage by users, etc.
[code]set serveroutput on
declare
  cursor c1 is
    select version from v$instance;
  cursor c2 is
    select to_char(sysdate, 'HH24:MI:SS DD-MON-YY')       currtime
         , to_char(startup_time, 'HH24:MI:SS DD-MON-YY')  starttime
    from   v$instance;
  cursor c4 is
    select * from (select count(*) cnt, substr(event, 1, 50) event
                   from   v$session_wait
                   group  by substr(event, 1, 50)
                   order  by count(*) desc);[/code]
Suppose your boss calls you one day and tells you that there have been some unexpected changes in the employee database. Employees' designations and salaries are being manipulated illegally. This has been going on for the past few days, and he asks if you could help get hold of the culprit.
Don't worry: Oracle's DBMS_FGA package will save your day and earn you a raise.
The Oracle DBMS_FGA package provides fine-grained auditing on objects.
For an overview of the DBMS_FGA subprograms, visit:
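For the scenario above, a policy like the following would flag salary and designation changes (the HR.EMPLOYEES object and policy name are illustrative):

[code]
begin
  dbms_fga.add_policy(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'AUDIT_SAL_CHANGES',
    audit_column    => 'SALARY,JOB_ID',   -- audit only when these columns are touched
    statement_types => 'UPDATE',
    enable          => true
  );
end;
/

-- Audited statements, with who ran them and when, appear here:
select db_user, timestamp, sql_text
from   dba_fga_audit_trail
where  policy_name = 'AUDIT_SAL_CHANGES';
[/code]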
What I love about writing SQL Tuning articles is that I very rarely end up publishing the findings I set out to achieve. With this one, I set out to demonstrate the advantages of PARALLEL DML, didn't find what I thought I would, and ended up testing 8 different techniques to find out how they differed. And guess what? I still didn't get the results I expected. Hey, at least I learned something.
As an ETL designer, I hate updates. They are just plain nasty. I spend an inordinate proportion of the design time of an ETL system worrying about the relative proportions of rows inserted vs. updated. I worry about how ETL tools apply updates (did you know DataStage applies updates singly, but batches inserts into arrays?), about how I might cluster together rows that are subject to updates, and about what I might do if I just get too many updates to handle.
It would be fair to say I obsess about them. A little bit.