Feed aggregator

Table creation script

Tom Kyte - Mon, 2016-07-11 04:46
Hai Tom sir...I have doubt. 1)How can i create this table.see below. RED GREEN YELLOW AAA 1 2 3 BBB 4 5 6 CCC 7 8 9 Please create this one.. and more question only... 2)How can in write a quer...
Categories: DBA Blogs

RFS not starting for standby

Tom Kyte - Mon, 2016-07-11 04:46
HI, what could be the reason for RFS not starting at standby. anything specific I have to look for. DR:- SQL> select inst_id,process,status,thread#,sequence#,block#,blocks from gv$managed_standby; SQL> INST_ID PROCESS STATUS ...
Categories: DBA Blogs

Loading SQL*Plus HELP into the Database

Hemant K Chitale - Sun, 2016-07-10 22:39
Oracle provides scripts to load the HELP command for SQL*Plus.

See $ORACLE_HOME/sqlplus/admin/help

The schema to use is SYSTEM, not SYS.

I demonstrate
(a) How to load SQL*Plus Help into the database
(b) How to customise the Help (e.g. add new commands)

[oracle@ora11204 help]$ cd $ORACLE_HOME/sqlplus/admin/help
[oracle@ora11204 help]$ ls -l
total 84
-rwxrwxrwx. 1 oracle oracle 265 Feb 17 2003 helpbld.sql
-rwxrwxrwx. 1 oracle oracle 366 Jan 4 2011 helpdrop.sql
-rwxrwxrwx. 1 oracle oracle 71817 Aug 17 2012 helpus.sql
-rwxrwxrwx. 1 oracle oracle 2154 Jan 4 2011 hlpbld.sql
[oracle@ora11204 help]$ sqlplus -S system/oracle @helpbld.sql `pwd` helpus.sql
...
...
...
View created.


58 rows created.


Commit complete.


PL/SQL procedure successfully completed.

[oracle@ora11204 help]$


The `pwd` (note the back-quote character, not the single quote character) is a way of specifying the current directory in Unix and Linux shells. This specifies where the help datafile is located. helpus.sql is the help data in English (US-English).

The scripts create a table called "HELP" in the SYSTEM schema.  SQL*Plus's "HELP" command then uses this table.
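Once loaded, the help text is just rows in that table, so you can also query it directly. A minimal sketch, assuming the TOPIC, SEQ and INFO columns that the help scripts create (run as SYSTEM or any user granted SELECT on the table):

-- list the stored help text for the SET command, in display order
select topic, seq, info
from system.help
where topic = 'SET'
order by seq;
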

Examples :

SQL> connect hemant/hemant
Connected.
SQL> help

HELP
----

Accesses this command line help system. Enter HELP INDEX or ? INDEX
for a list of topics.

You can view SQL*Plus resources at
http://www.oracle.com/technology/documentation/

HELP|? [topic]


SQL>
SQL> help set

SET
---

Sets a system variable to alter the SQL*Plus environment settings
for your current session. For example, to:
- set the display width for data
- customize HTML formatting
- enable or disable printing of column headings
- set the number of lines per page

SET system_variable value

where system_variable and value represent one of the following clauses:

APPI[NFO]{OFF|ON|text} NEWP[AGE] {1|n|NONE}
ARRAY[SIZE] {15|n} NULL text
AUTO[COMMIT] {OFF|ON|IMM[EDIATE]|n} NUMF[ORMAT] format
AUTOP[RINT] {OFF|ON} NUM[WIDTH] {10|n}
AUTORECOVERY {OFF|ON} PAGES[IZE] {14|n}
AUTOT[RACE] {OFF|ON|TRACE[ONLY]} PAU[SE] {OFF|ON|text}
[EXP[LAIN]] [STAT[ISTICS]] RECSEP {WR[APPED]|EA[CH]|OFF}
BLO[CKTERMINATOR] {.|c|ON|OFF} RECSEPCHAR {_|c}
CMDS[EP] {;|c|OFF|ON} SERVEROUT[PUT] {ON|OFF}
COLSEP {_|text} [SIZE {n | UNLIMITED}]
CON[CAT] {.|c|ON|OFF} [FOR[MAT] {WRA[PPED] |
COPYC[OMMIT] {0|n} WOR[D_WRAPPED] |
COPYTYPECHECK {ON|OFF} TRU[NCATED]}]
DEF[INE] {&|c|ON|OFF} SHIFT[INOUT] {VIS[IBLE] |
DESCRIBE [DEPTH {1|n|ALL}] INV[ISIBLE]}
[LINENUM {OFF|ON}] [INDENT {OFF|ON}] SHOW[MODE] {OFF|ON}
ECHO {OFF|ON} SQLBL[ANKLINES] {OFF|ON}
EDITF[ILE] file_name[.ext] SQLC[ASE] {MIX[ED] |
EMB[EDDED] {OFF|ON} LO[WER] | UP[PER]}
ERRORL[OGGING] {ON|OFF} SQLCO[NTINUE] {> | text}
[TABLE [schema.]tablename] SQLN[UMBER] {ON|OFF}
[TRUNCATE] [IDENTIFIER identifier] SQLPLUSCOMPAT[IBILITY] {x.y[.z]}
ESC[APE] {\|c|OFF|ON} SQLPRE[FIX] {#|c}
ESCCHAR {@|?|%|$|OFF} SQLP[ROMPT] {SQL>|text}
EXITC[OMMIT] {ON|OFF} SQLT[ERMINATOR] {;|c|ON|OFF}
FEED[BACK] {6|n|ON|OFF} SUF[FIX] {SQL|text}
FLAGGER {OFF|ENTRY|INTERMED[IATE]|FULL} TAB {ON|OFF}
FLU[SH] {ON|OFF} TERM[OUT] {ON|OFF}
HEA[DING] {ON|OFF} TI[ME] {OFF|ON}
HEADS[EP] {||c|ON|OFF} TIMI[NG] {OFF|ON}
INSTANCE [instance_path|LOCAL] TRIM[OUT] {ON|OFF}
LIN[ESIZE] {80|n} TRIMS[POOL] {OFF|ON}
LOBOF[FSET] {1|n} UND[ERLINE] {-|c|ON|OFF}
LOGSOURCE [pathname] VER[IFY] {ON|OFF}
LONG {80|n} WRA[P] {ON|OFF}
LONGC[HUNKSIZE] {80|n} XQUERY {BASEURI text|
MARK[UP] HTML [OFF|ON] ORDERING{UNORDERED|
[HEAD text] [BODY text] [TABLE text] ORDERED|DEFAULT}|
[ENTMAP {ON|OFF}] NODE{BYVALUE|BYREFERENCE|
[SPOOL {OFF|ON}] DEFAULT}|
[PRE[FORMAT] {OFF|ON}] CONTEXT text}


SQL>
SQL> help show

SHOW
----

Shows the value of a SQL*Plus system variable, or the current
SQL*Plus environment. SHOW SGA requires a DBA privileged login.

SHO[W] option

where option represents one of the following terms or clauses:
system_variable
ALL
BTI[TLE]
ERR[ORS] [{FUNCTION | PROCEDURE | PACKAGE | PACKAGE BODY | TRIGGER
| VIEW | TYPE | TYPE BODY | DIMENSION | JAVA CLASS} [schema.]name]
LNO
PARAMETERS [parameter_name]
PNO
RECYC[LEBIN] [original_name]
REL[EASE]
REPF[OOTER]
REPH[EADER]
SGA
SPOO[L]
SPPARAMETERS [parameter_name]
SQLCODE
TTI[TLE]
USER


SQL>
SQL> help connect

CONNECT
-------

Connects a given username to the Oracle Database. When you run a
CONNECT command, the site profile, glogin.sql, and the user profile,
login.sql, are processed in that order. CONNECT does not reprompt
for username or password if the initial connection does not succeed.

CONN[ECT] [{logon|/|proxy} [AS {SYSOPER|SYSDBA|SYSASM}] [edition=value]]

where logon has the following syntax:
username[/password][@connect_identifier]

where proxy has the syntax:
proxyuser[username][/password][@connect_identifier]
NOTE: Brackets around username in proxy are required syntax


SQL>


Remember !  These are SQL*Plus commands, not SQL Language commands.  So you won't see help about CREATE or ALTER or SELECT and other such commands.

Since it uses a plain-text file (helpus.sql in this case) to load the help information, it is possible to extend this.

For example, I copy helpus.sql as helpcustom.sql and add these lines into the script file :

INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 2, 'This Hemant''s Test Database');
INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 3, 'A Playground database');
INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 4, 'Running 11.2.0.4 on Linux');

INSERT INTO SYSTEM.HELP VALUES ('OWNERINFO', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('OWNERINFO', 2, 'Test Database owned by Hemant');
INSERT INTO SYSTEM.HELP VALUES ('CONTENTS', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('CONTENTS', 2, 'Various Experiments by Hemant');

INSERT INTO SYSTEM.HELP VALUES ('WHO IS HEMANT', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('WHO IS HEMANT', 2, 'Hemant K Chitale');
INSERT INTO SYSTEM.HELP VALUES ('WHO IS HEMANT', 3, 'https://hemantoracledba.blogspot.com');

COMMIT;


and then I run the command :

sqlplus -S system/oracle @helpbld.sql `pwd` helpcustom.sql


And view the results :

SQL> connect hemant/hemant
Connected.
SQL> help dbinfo

This Hemant's Test Database
A Playground database
Running 11.2.0.4 on Linux

SQL> help ownerinfo

Test Database owned by Hemant

SQL> help who is hemant

Hemant K Chitale
https://hemantoracledba.blogspot.com

SQL>
SQL> help startup

STARTUP
-------

Starts an Oracle instance with several options, including mounting,
and opening a database.

STARTUP options | upgrade_options

where options has the following syntax:
[FORCE] [RESTRICT] [PFILE=filename] [QUIET] [ MOUNT [dbname] |
[ OPEN [open_options] [dbname] ] |
NOMOUNT ]

where open_options has the following syntax:
READ {ONLY | WRITE [RECOVER]} | RECOVER

and where upgrade_options has the following syntax:
[PFILE=filename] {UPGRADE | DOWNGRADE} [QUIET]


SQL> help shutdown

SHUTDOWN
--------

Shuts down a currently running Oracle Database instance, optionally
closing and dismounting a database.

SHUTDOWN [ABORT|IMMEDIATE|NORMAL|TRANSACTIONAL [LOCAL]]


SQL>


And, so, the SQL*Plus HELP command can be customised !

.
.
.

Categories: DBA Blogs

Oracle Multitenant feature name

Yann Neuhaus - Sun, 2016-07-10 15:46

We all know what the multitenant option is: a container database (CDB) with multiple user pluggable databases (PDB). This requires Enterprise Edition plus the Multitenant option. But you can use the same architecture without the option, and even in Standard Edition: a CDB with only one user PDB. It is called "single-tenant" or "lone PDB". So what do you call this new 12c architecture?

I use the term "multitenant architecture" for the following set of features, available in all editions:

  • dictionary separation between system metadata/objects and user metadata/objects
  • unplug / plug features to transport PDBs
  • create a new PDB as a clone of another existing PDB
  • ability for a session to switch between PDB and CDB$ROOT explicitly with ‘ALTER SESSION’ or implicitly through metadata and object links
  • ability to read data for several containers in one query

But it may look strange to use the "multitenant" term when in Standard Edition or in Enterprise Edition without the option.

Container database (CDB)

One idea would be to simply call it ‘CDB architecture’, because it’s the architecture of the container database, and because a database without this architecture is called “non-CDB”:

@franckpachot Oracle docs says „CDB architecture“ and „Multitenant Architecture“ – I prefer CDB, because there’s Non-CDB, too :)

— Markus Flechtner (@markusdba) July 8, 2016

Consolidated database (CDB)

However, as very often with Oracle, the names change and the acronyms remain. Old names are found in the software before the new name is made public to market it. We all know that CDB stands for "container database" because a CDB is a container that contains at least 3 containers (CDB$ROOT, PDB$SEED and your PDB), and more when you have the multitenant option. However, I see no mention of that name in the ORACLE_HOME/rdbms/admin scripts:
[oracle@CDB ~]$ grep -i "container database" $ORACLE_HOME/rdbms/admin/* | wc
0 0 0

It seems that the original meaning of CDB was “consolidated database”:

[oracle@CDB ~]$ grep -i "consolidated database" $ORACLE_HOME/rdbms/admin/* | wc
58 465 7276
 
[oracle@CDB ~]$ grep -i "consolidated database" $ORACLE_HOME/rdbms/admin/*.bsq
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dcore.bsq:REM Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dcore.bsq:REM SEED Pluggable Database in a Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/doptim.bsq:Rem the caching in library cache in a Consolidated Database.
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dsec.bsq: * Consolidated Database
/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/dsec.bsq: * Consolidated Database

So, the idea behind CDB is the same as behind multitenant: it's the consolidation of several pluggable databases. And consolidation is not really what single-tenant does, where we have only one user PDB per CDB and where this architecture requires 3 containers instead of one non-CDB.

Pluggable databases

Another idea would be to call it "pluggable databases architecture" because this is the real evolution. User tablespaces have been transportable for a long time, since the 8i introduction of extended rowids and locally managed tablespaces. 12c brings the same for system tablespaces, so that PDBs are fully transportable physically, thanks to the separation of the dictionary.

The real point of this new architecture is the total separation of system data/metadata and user data/metadata, the separation of the system DBA role and the application DBA role - and this is what pluggable databases provide. Multitenancy is just one thing that is possible with this new architecture. It's important today because it's required for the cloud (for consolidation and easy provisioning).

Oracle Multitenant

Actually, the feature was called ‘Oracle Pluggable Database’ until 12.1.0.1 was released publicly with a new name: Oracle Multitenant. And this is the name we can see in Database Feature Usage Statistics. Well, if you look at the first patchset of 12cR1, 12.1.0.2, you will see the old name ‘Oracle Pluggable Databases’ but this is a bug (Patch 20718081 changes back the name).

Here is what you see from Database Feature Usage Statistics after an upgrade from 12.1.0.1 to 12.1.0.2, in EM Express:

[Image: Database Feature Usage Statistics showing Oracle Multitenant in EM Express]

And from DBA_FEATURE_USAGE_STATISTICS view:


SQL> select name, version, detected_usages, currently_used, aux_count from dba_feature_usage_statistics where name like '%Multitenant%' or name like '%Pluggable%';
 
NAME                           VERSION           DETECTED_USAGES CURRE  AUX_COUNT
------------------------------ ----------------- --------------- ----- ----------
Oracle Multitenant             12.1.0.1.0                     98 FALSE          1
Oracle Pluggable Databases     12.1.0.2.0                     32 FALSE          1

So, bug aside, the name of the feature is 'Oracle Multitenant' and this is true for any container database, because the feature is considered used as soon as V$DATABASE.CDB='YES':
select count(*) into feature_boolean from v$database where cdb = 'YES'

The usage of the multitenant option is detected by the AUX_COUNT which is the number of user PDBs:
select count(*) into aux_count from v$pdbs where con_id > 2
CON_ID=0 is for the CDB, CON_ID=1 is for CDB$ROOT, CON_ID=2 is for PDB$SEED and user PDBs start at CON_ID > 2
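
To see this from SQL, here is a minimal sketch (my own illustration, not taken from the feature usage code) that classifies the architecture the same way, by counting user PDBs:

-- returns one row on a CDB; on a 12c non-CDB, V$PDBS is empty so the count is 0
select count(*) as user_pdbs,
       case when count(*) > 1 then 'multitenant option required'
            when count(*) = 1 then 'single-tenant (lone PDB)'
            else 'no user PDB'
       end as usage
from v$pdbs
where con_id > 2;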

So the name is “multitenant” whatever the number of PDBs.

So what?

I prefer to stick with "multitenant architecture" even when it is used without the multitenant option. It's a good way to keep in mind that, even if it brings a lot of interesting features for single-tenant as well, Oracle has taken the step of dictionary separation with the motivation of Cloud, Consolidation and Multitenancy. Charged options are a good trigger to increase the priority of evolution requests…
However, this "multitenant architecture" brings very interesting features to Standard Edition and to Enterprise Edition even without the option. And if you have doubts, I'll try to convince you in San Francisco, September 18th.

 

This article Oracle Multitenant feature name appeared first on Blog dbi services.

SQL questions for improving skills

Tom Kyte - Sun, 2016-07-10 10:26
Hi Tom, My question is little weird. Where can i find good SQL questions (scenarios) for practice. I tried google but did not find good list. Can you please suggest any website or book/magazine? Thanks, DP
Categories: DBA Blogs

How to register a "service" with a listener without using local_listener

Tom Kyte - Sun, 2016-07-10 10:26
Hello I create a service like so: exec dbms_service.create_service('MYSERVICE','MYSERVICE'); exec dbms_service.start_service('MYSERVICE'); Then, I add the service in the listener.ora sid_list parameter like so: SID_LIST_LISTENER = (SI...
Categories: DBA Blogs

Politicians won't move on climate change cause they know we don't REALLY care.

FeuerThoughts - Sun, 2016-07-10 10:23


Yep. That's the truth (at the least the truth that seems to be taking shape between my ears these days).

I've been thinking about the Great Pacific Garbage Patch of late....


Lots of us seem to know it exists, and we are disgusted by it. Disgusted by us - humans who are disastrously trashing our planet.

And what are we going to do about it?

We are going to demand that Congress DO SOMETHING!

And our demands are going to be expressed in extremely powerful ways:
  • Online petitions
  • Facebook rants
  • Lots and lots of outraged tweets
Oh yes. Those. Lots of them, lots of indignation, shared outrage, thank you Facebook Echo Chamber.

And yet, and yet...somehow those awful Congresspeople ignore the Will of the People. How can this be? 

Time for more outraged and indignant rants and sarcastic memes on Facebook.

How ridiculous on two fronts:

1. Online "activism" is largely ineffective. 

2. Politicians will only listen to us when we take action that demonstrates our seriousness.

And this is where we really fall short.

So you read about all the awful plastic clogging up our oceans, killing fish and whales and dolphins and....everything, really, just about everything.

And what do you actually do?

Do you change even one iota of the way you live your life? It doesn't seem that way to me. We bitch and moan for a while, and then watch Game of Thrones or go to Six Flags or buy another case of plastic bottled water.

And since we don't seem to be willing to make the smallest sacrifices in our lives, politicians know they can just keep on serving their real masters: lobbyists of corporations.

Let's face it: if you consume and discard plastic, it's going somewhere, and it's going to be nasty, no matter the location. 

But if you don't consume that plastic, you will have not contributed to the problem. You will have not made things worse. And if millions of people did this same thing - took action in their life to change patterns of consumption - the impact would be enormous.

Here are some of the things I do to avoid plastic consumption:

1. I never, never, NEVER (well, hardly ever) buy plastic bottled water. And I especially never buy cases of plastic bottled water that is wrapped in plastic. How grotesque. Instead, buy a glass or stainless steel bottle and refill the damn thing, people.

2. I hardly ever buy processed food. I mostly buy food, like broccoli and fruit and eggs. Sure, they all require some processing. But nothing like buying a Lunchable. So gross.

3. I travel with a set of bamboo "silverware" so I can avoid using plastic-wrapped plastic forks and knives. I so detest those.

4. When I get ice cream, I get a cone: no need for a plastic dish, no plastic spoon. Of course, if I go to a lovely ice cream shop like Oberweis and eat my delight there, they use glass bowls and glasses and real silverware. So then I will treat myself to a milkshake or sundae. Yummy and no plastic.

5. I make my own yogurt instead of buying lots of plastic containers of the stuff. It's easy to do: just buy one of these

6. I buy milk in reusable glass containers. Again, thanks Oberweis!

And there's more, but you get the idea. It mostly comes down to being more intentional about how you go through the day: think ahead, always carry your water bottle and bamboo silverware, just say no to treats that come in plastic that you do not really need to eat, etc.

If millions of humans took action like this, the amount of garbage going to landfills and into the ocean would decrease substantially. 

With reduced demand, less plastic would be produced in factories, less pollution would be produced, etc.

But if you do not do things like this, if you direct your outrage to distant politicians who will never pay you attention and do not address some of that outrage at yourself, well...

Then the coral and whales and sharks and fish and birds and eventually even (dare I say it!) humans will suffer. 

Bottom line: if you want politicians to change their behavior, first change yours

That way, when they still don't give a rat's ass about you, at least you will have helped make the planet a little bit healthier.

Multiply that by a million or a billion, and maybe the coral will notice.
Categories: Development

Comparing Columns Containing NULL Values

Randolf Geist - Sun, 2016-07-10 06:32
Prompted by a (not really that) recent discussion on the OTN forum I've decided to publish this note.

Sometimes you have the task of comparing column values and handling the NULL value cases correctly makes this rather cumbersome for columns that are allowed to be NULL.

The "official" SQL way of comparing two column values to find out whether they are equal or not - under the assumption that having NULL in both columns should be treated as equal (a point that can be argued) - would read in SQL something like the following for the "unequal" case:

column1 != column2 or (column1 is null and column2 is not null) or (column1 is not null and column2 is null)
and

column1 = column2 or (column1 is null and column2 is null)
for the "equal" case.

Imagine this being done for a lot of columns and it becomes obvious that people might be looking for more concise alternatives for expressing this.

Sometimes you might have come across code that uses NVL or a similar function with some "special" value that is not supposed to be part of the column data to be compared - like -1 for (positive) integer values for example, but this has the inherent risk of producing wrong results if the "special" value is (or might become in future) not that special after all.
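
Here is a small self-contained sketch of that approach (with -1 as the "special" value), showing how the comparison silently goes wrong once that value appears in the data:

with t as (
  select 1 as column1, 1 as column2 from dual union all
  select null, null from dual union all
  select -1, null from dual -- the "special" value sneaks into the real data
)
select column1, column2,
       case when nvl(column1, -1) = nvl(column2, -1) then 'EQUAL' else 'UNEQUAL' end as cmp
from t;
-- the last row reports EQUAL although the two columns clearly differ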

The not really documented SYS_OP_MAP_NONNULL function could come handy if it was official, because it covers the NULL cases automatically.

Note: I would still call it undocumented even if it shows up in some "Materialized View" related part of the official 12c documentation, because it is not part of the official SQL language manual.

SYS_OP_MAP_NONNULL(column1) != SYS_OP_MAP_NONNULL(column2)
is all that is needed. More concise than above, but undocumented.
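
If you are curious what the function actually returns, a quick sketch (my own illustration) is to DUMP its output - NULL gets mapped to a dedicated value, and non-NULL inputs grow by one byte, which already hints at the problem shown next:

select dump(sys_op_map_nonnull(null)) as mapped_null,
       dump(sys_op_map_nonnull('x'))  as mapped_x
from dual;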

And: There is one obvious drawback to SYS_OP_MAP_NONNULL, since it adds one byte to the input value to ensure the mapped NULL value differs from any other value: It doesn't work properly with maxed out columns, for example this:

SQL> create table t
2 as
3 select rpad('x', 4000) as col1, rpad('x', 4000) as col2 from dual
4 union all
5 select rpad('x', 4000) as col1, rpad('y', 4000) as col2 from dual
6 union all
7 select null as col1, rpad('y', 4000) as col2 from dual
8 union all
9 select rpad('x', 4000) as col1, null from dual
10 union all
11 select null as col1, null as col2 from dual
12 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't')

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t where sys_op_map_nonnull(col1) = sys_op_map_nonnull(col2);
select count(*) from t where sys_op_map_nonnull(col1) = sys_op_map_nonnull(col2)
*
ERROR at line 1:
ORA-01706: user function result value was too large
errors out with "ORA-01706: user function result value was too large".

DECODE
Decode is an exception to the rule of how Oracle treats NULLs in SQL - it returns the "equal" result if both operands to compare are NULL, so it allows the following construct to achieve the above comparison (see the OTN forum discussion above for credits where this expression appeared first, and more recently here on Stew Ashton's blog):

For checking for inequality:

decode(column1, column2, 0, 1) = 1
For equality:

decode(column1, column2, 0, 1) = 0
Very concise, and officially documented.

Interestingly, starting with 11.2.0.2 there is an implicit rewrite enabled by default that rewrites the above expression (and only this variation, not the one testing for inequality):

decode(column1, column2, 0, 1) = 0
into

sys_op_map_nonnull(column1) = sys_op_map_nonnull(column2)
This is controlled via fix control 8551880 ("Rewrite decode predicate to join").
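
Besides the per-statement OPT_PARAM hint used in the following tests, the fix control can - as a sketch of an alternative - also be switched off for the whole session:

-- disable the DECODE-to-SYS_OP_MAP_NONNULL rewrite (fix control 8551880) session-wide
alter session set "_fix_control" = '8551880:0';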

Guess what happens when running the following query in 11.2.0.2 or later against the above data set (the fix control is enabled by default in these versions):

SQL> select /*+ opt_param('_fix_control', '8551880:0') */ count(*) from t where decode(col1, col2, 0, 1) = 0;

COUNT(*)
----------
2

SQL>
SQL> select /*+ opt_param('_fix_control', '8551880:1') */ count(*) from t where decode(col1, col2, 0, 1) = 0;
select /*+ opt_param('_fix_control', '8551880:1') */ count(*) from t where decode(col1, col2, 0, 1) = 0
*
ERROR at line 1:
ORA-01706: user function result value was too large
Finally, what about performance? For that purpose I created the following compressed 50M rows table, covering different cases:

create table t1 compress
as
select case mod(rownum, 5) + 1
when 1 then cast('C1' as varchar2(20))
when 2 then cast(null as varchar2(20))
when 3 then cast('C3' as varchar2(20))
when 4 then cast(null as varchar2(20))
when 5 then cast('C5' as varchar2(20))
end as col1,
case mod(rownum, 5) + 1
when 1 then cast('C2' as varchar2(20))
when 2 then cast(null as varchar2(20))
when 3 then cast('C3' as varchar2(20))
when 4 then cast('C4' as varchar2(20))
when 5 then cast(null as varchar2(20))
end as col2
from
(select /*+ cardinality(1e4) */ null from dual connect by level <= 1e4)
, (select /*+ cardinality(5e3) */ null from dual connect by level <= 5e3)
;
and got the following results from 11.2.0.4 (ignoring the different cardinality estimates for the moment, since I'm only interested in the elapsed time here):

SQL> select count(*) from t1;

Elapsed: 00:00:00.94

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18743 (2)| 00:03:45 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 50M| 18743 (2)| 00:03:45 |
-------------------------------------------------------------------


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> select count(*) from t1 where decode(col1, col2, 0, 1) = 1;

Elapsed: 00:00:02.55

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 19176 (4)| 00:03:51 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| T1 | 500K| 1953K| 19176 (4)| 00:03:51 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(DECODE("COL1","COL2",0,1)=1)


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> select count(*) from t1 where SYS_OP_MAP_NONNULL("COL1")!=SYS_OP_MAP_NONNULL("COL2");

Elapsed: 00:00:03.04

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 19515 (6)| 00:03:55 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| T1 | 15M| 60M| 19515 (6)| 00:03:55 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(SYS_OP_MAP_NONNULL("COL1")<>SYS_OP_MAP_NONNULL("COL2"))


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> select count(*) from t1 where (col1 != col2 or (col1 is null and col2 is not null) or (col1 is not null and col2 is null));

Elapsed: 00:00:03.51

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 18876 (2)| 00:03:47 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| T1 | 30M| 115M| 18876 (2)| 00:03:47 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COL1" IS NULL AND "COL2" IS NOT NULL OR "COL2" IS NULL
AND "COL1" IS NOT NULL OR "COL1"<>"COL2")



Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> select /*+ opt_param('_fix_control', '8551880:0') */ count(*) from t1 where decode(col1, col2, 0, 1) = 0;

Elapsed: 00:00:02.49

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 19176 (4)| 00:03:51 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| T1 | 500K| 1953K| 19176 (4)| 00:03:51 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(DECODE("COL1","COL2",0,1)=0)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> select /*+ opt_param('_fix_control', '8551880:1') */ count(*) from t1 where decode(col1, col2, 0, 1) = 0;

Elapsed: 00:00:03.04

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 19515 (6)| 00:03:55 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| T1 | 6000K| 22M| 19515 (6)| 00:03:55 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(SYS_OP_MAP_NONNULL("COL1")=SYS_OP_MAP_NONNULL("COL2"))


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> select count(*) from t1 where col1 = col2 or (col1 is null and col2 is null);

Elapsed: 00:00:02.30

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 18865 (2)| 00:03:47 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| T1 | 13M| 49M| 18865 (2)| 00:03:47 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COL1" IS NULL AND "COL2" IS NULL OR "COL1"="COL2")


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
68456 consistent gets
0 physical reads
0 redo size
346 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Surprisingly, in my tests SYS_OP_MAP_NONNULL always performed worst, while the DECODE expression came pretty close to the more verbose SQL expression - depending on the data set it was sometimes even faster.

So using SYS_OP_MAP_NONNULL, and in particular the implicit rewrite, might not be such a good idea after all. And now you know how the automatic rewrite can be disabled if desired.

RMAN 12c NF – SQL interface in RMAN


Michael Dinh - Sat, 2016-07-09 18:51

Admittedly, I did not realize the benefit at first until there came a need.

Example:


# Set ORACLE_SID
ORACLE_SID=$1
export ORACLE_SID
### This retrieve the numeric value from PROD1/PROD2, i.e. 1,2 respectively
n=`echo "${ORACLE_SID:(-1)}"`

run {
backup tag 'ARCHIVED_LOG' archivelog all delete input skip inaccessible;
alter database backup controlfile to trace as '/rman_bkup$n/$ORACLE_SID/cf_@.sql' REUSE RESETLOGS;
create pfile='/rman_bkup$n/$ORACLE_SID/init@.ora' from spfile;
create pfile from spfile;
}

 


Oracle Education Dissatisfaction

Michael Dinh - Sat, 2016-07-09 18:28

Just like anything, there is good and bad.

Unfortunately for me, my experience with Oracle education has been predominantly bad, and a former manager experienced the same: he was able to retake a course almost a year later because the original was unrealistic for real-world scenarios.

With Oracle education, the most simplistic setup is used, e.g. no ASM, no Grid Infrastructure - perhaps because they would add a little more complexity to set up?

A little advice, try to find out who will be the instructor before registering for the course. Call and ask.

Next, there is a large and great Oracle community where you may learn more about technology and new features.

Here are 2 features I came to know about from my Twitter feed and blogs, which were never covered in the 12c Data Guard Administration course.

ROLLING FORWARD A PHYSICAL STANDBY USING RECOVER FROM SERVICE COMMAND IN 12C (Doc ID 1987763.1) (Courtesy Nassyam Basha)
RMAN ACTIVE DUPLICATE USING BACKUPSET IN 12C(NEW FEATURE) (Doc ID 1987193.1) (Courtesy of Deiby Gomez)

 

 


ADF 12.2.1.1 Improved Support for Programmatic View Object

Andrejus Baranovski - Sat, 2016-07-09 13:15
ADF 12.2.1.1 brings improved support for programmatic VO creation. Such VOs are handy when we want to base a VO on an alternative data source, such as a PL/SQL ref cursor. In ADF 12.2.1.1 the developer doesn't need to worry about which framework methods to override; it is now enough to extend the Programmatic View Object Implementation class. This is a special framework helper class designed for programmatic VO support. See the example below.

Sample application (ADF12211App.zip) is based on one regular VO, which renders employees table. Programmatic VO renders data for tag cloud component, located below table:


Steps to create a programmatic VO are much simpler in ADF 12.2.1.1. Select the Programmatic data source option in the VO creation wizard:


JDEV will create the VO with Java implementation classes extended from the ProgrammaticViewObjectImpl and ProgrammaticViewRowImpl classes. These classes take care of the special lifecycle required for programmatic VO behavior. See the extends part:


The generated class contains the extended getScrollableData method. This is the place to supply the data collection for the VO. In my example, I'm creating an ArrayList (the VO rows) of HashMaps (each HashMap represents one row's data as attribute/value pairs). Attributes are populated with values and the collection is returned to the framework to manage. The method is called automatically by the framework. There are other methods available, to retrieve a row by key, etc.:


The new framework class makes it easier to work with programmatic VOs and to leverage framework features. I would expect it to provide better support for programmatic VO data filtering.

VO data is accessed in the UI through regular binding expressions:


Initialized by ADF bindings layer:

Errors during iPod touch iOS update

Tom Kyte - Fri, 2016-07-08 21:46
My IPod touch iOS update keeps saying that an error has accured and when i tap on settings it asks me to update it! I'm trying to update to iOS 8.14. Please help!!
Categories: DBA Blogs

Revert 12.1.0.2.0 CDB to stand alone non-CDB with same product version

Tom Kyte - Fri, 2016-07-08 21:46
Hello Team, We have installed Oracle 12c version 12.1.0.2.0 to be pluggable. However, many scripts would require the stand alone non pluggable mode of oracle. Can you please guide to convert CDB to non-CDB without downgrading to older versions. ...
Categories: DBA Blogs

datapump export - ORA-39126 / ORA-12838

Tom Kyte - Fri, 2016-07-08 21:46
HTP0#>expdp system/***** full=y DIRECTORY=data_pump_dir4bos dumpfile=EZTAG_UiAT.dmp logfile=myfullexp.log job_name=eztag_uat_ex5 Export: Release 11.2.0.4.0 - Production on Fri Jul 8 06:46:18 2016 Copyright (c) 1982, 2011, Oracle and/or its affi...
Categories: DBA Blogs

How to get database name in oracle db node js

Tom Kyte - Fri, 2016-07-08 21:46
How to get database name in oracle db in node js without query using connection object: var oracledb = require('oracledb'); oracledb.getConnection( { user: 'c##root', password: 'root', connectString: ...
Categories: DBA Blogs

How to load a xlsx file in to a table

Tom Kyte - Fri, 2016-07-08 21:46
I have a xlsx file ,and i am using sql loader in toad to load in to table.I also have created control file for the same. the file content is given below: ID Name Employee name Designation Status 0000700 John,Sam John sam (000070) Sr.Developer, ...
Categories: DBA Blogs

Oracle Security Expert Seminar

Pete Finnigan - Fri, 2016-07-08 21:46

I am happy to announce that I will be teaching a five day Oracle Security expert seminar class with Oracle University at Oracle offices in Reading, UK from September 26th to September 30th 2016. This is a 5 days expert....[Read More]

Posted by Pete On 08/07/16 At 02:45 PM

Categories: Security Blogs

SQL Server 2016 AlwaysOn: Direct seeding and performance considerations

Yann Neuhaus - Fri, 2016-07-08 13:05

This blog post follows the previous one about the new direct seeding feature shipped with SQL Server 2016. As a reminder, I had some doubts about using direct seeding with large databases because the log stream is not compressed by default, but I had forgotten about the performance improvements described in the Microsoft BOL. I remember talking about them a couple of months ago in this blog post.

Microsoft did a good job of improving the AlwaysOn log transport layer, and it could be very interesting to compare two methods: adding a 100 GB database the usual way, with backup and restore operations from the primary to the secondary, or using the direct seeding feature. Which one is the quickest?

Let’s just have a quick look at my test environment. Two virtual machines with the following configuration:

  • 4x Intel Core i7-3630QM 2.3 GHz
  • 4GB of RAM
  • 2 10Gbps network cards
  • One disk that will host both the database data and log files on my primary replica (Samsung Portable SSD T3 500GB with S.M.A.R.T, NCQ and TRIM)
  • One disk that will host both the database data and log files on my secondary replica (Samsung Portable SSD T1 500GB with S.M.A.R.T, NCQ and TRIM)
  • One disk that will host backups (Samsung SSD 840 EVC, 250GB with S.M.A.R.T, NCQ and TRIM) used by both virtual machines

As an aside, each SSD disk is able to deliver at least 450MB/s and 35000 IOPS.

[Image: lab environment]

I also used a custom database named GestionCom that contains roughly 100GB of data for my test. 100GB would be sufficient to get relevant results.

 

[Image: the GestionCom database]

 

Go ahead and let’s compare both synchronization methods

 

First test by using the usual way to add a database to an availability group

As said earlier, my first test consists of using the usual way to add a database to an availability group. We may use 3 ways for data synchronization: FULL, join only, and skip initial synchronization. We will use the first method for this test, which includes all the steps: back up and restore the concerned database and then join it to the availability group. At this point we can easily imagine that most of the time will be consumed by the backup and restore steps. I also want to point out that I deliberately did not use fine-tuning options like BUFFERCOUNT, MAXTRANSFERSIZE or splitting the backup into several media files, in order to stay compliant with the availability group wizard.
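
In T-SQL, the sequence boils down to something like the following sketch (the share path and file names are illustrative assumptions; the database and availability group names come from this test):

-- on the primary: full + log backup with the same options the wizard uses
BACKUP DATABASE GestionCom TO DISK = N'\\backupshare\GestionCom.bak' WITH CHECKSUM, COMPRESSION;
BACKUP LOG GestionCom TO DISK = N'\\backupshare\GestionCom.trn' WITH CHECKSUM;

-- on the secondary: restore everything WITH NORECOVERY from the network share
RESTORE DATABASE GestionCom FROM DISK = N'\\backupshare\GestionCom.bak' WITH CHECKSUM, NORECOVERY;
RESTORE LOG GestionCom FROM DISK = N'\\backupshare\GestionCom.trn' WITH NORECOVERY;

-- still on the secondary: join the restored database to the availability group
ALTER DATABASE GestionCom SET HADR AVAILABILITY GROUP = TestGrp;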

 

Step                                                                         Duration
Backup database to backup local disk (primary) WITH CHECKSUM, COMPRESSION   06'55''
Restore database from network share (secondary) WITH CHECKSUM, NORECOVERY   17'10''
Join database to availability group + start synchronization                 00'01''
Total                                                                        24'06''

 

What about resource consumption?

On the primary …

[Image: CPU and disk activity during backup]

 

[Image: network bandwidth during restore, from the primary]

On the secondary …

 

[Image: CPU and disk activity during restore]

[Image: network bandwidth during restore, from the secondary]

CPU utilization is 35% on average during the test. Moreover, disk write throughput seems to stagnate at 130 MB/s on average, which includes both backup and restore activities. The network throughput also seems to stagnate between 135 Mbps and 174 Mbps during my test.

So it is clear that my environment is under-used regarding resource consumption in this first test.

 

Second test by using the new direct seeding method

This time I will use the new database deployment method: direct seeding. As said in my previous blog, using this feature greatly simplifies the process of adding a database, but what about synchronization speed and resource consumption in this case?
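
For reference, here is a minimal sketch of the setup (the secondary replica name SQL162 is an assumption; the availability group and database names come from this test):

-- on the primary: switch the secondary replica to automatic (direct) seeding
ALTER AVAILABILITY GROUP TestGrp MODIFY REPLICA ON N'SQL162' WITH (SEEDING_MODE = AUTOMATIC);

-- on the secondary: allow the availability group to create the seeded database
ALTER AVAILABILITY GROUP TestGrp GRANT CREATE ANY DATABASE;

-- back on the primary: add the database - no backup / restore step needed
ALTER AVAILABILITY GROUP TestGrp ADD DATABASE GestionCom;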

Well, to get a good picture of what happens during the seeding process, we will use different tools, such as the new sys.dm_hadr_automatic_seeding DMV as well as extended events. Extended events will help us understand what happens under the covers in this case, but to measure only the duration of the operation we don't need them. If you look at the event list as well as the categories, you will probably notice a new dbseed category that corresponds to direct seeding. Events in this category are only available from the debug channel. That's fine because we want to track when the seeding process starts, when it finishes and what happens between these two events (like failures, timeouts, progress). By the way, the hadr_physical_seeding_progress event may be very useful to get a picture of network activity for the concerned seeding session if your network card is shared with other sessions or availability group replication activities. In my case, I'm the only one, and I will get this information directly from the task manager panel.

So let’s create the extended event session:

CREATE EVENT SESSION [hadr_direct_seeding] 
ON SERVER 
ADD EVENT sqlserver.hadr_automatic_seeding_start
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
	)
,
ADD EVENT sqlserver.hadr_automatic_seeding_state_transition
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_success
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_automatic_seeding_timeout
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
),
ADD EVENT sqlserver.hadr_physical_seeding_progress
(
    ACTION(sqlserver.database_name,sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
	SET filename = N'hadr_direct_seeding',
	max_file_size = (2048), 
	max_rollover_files = (10))
WITH 
(
	MAX_MEMORY=4096 KB,
	EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
	MAX_DISPATCH_LATENCY = 30 SECONDS,
	MAX_EVENT_SIZE = 0 KB,
	MEMORY_PARTITION_MODE = NONE,
	TRACK_CAUSALITY = OFF,
	STARTUP_STATE = OFF
)
GO

 

And after adding the GestionCom database to the TestGrp availability group, the direct seeding feature comes into play. Honestly, it was a very big surprise! Let’s take a look at the network utilization:

[Image: network bandwidth during seeding]

 

A network usage of 2.2 Gbps on average this time! The direct seeding feature makes much better use of the network bandwidth, and we can clearly see why efforts have been made by Microsoft to improve the synchronization process.

Let's now move on to the CPU and disk utilization, first from the primary and then from the secondary:

[Image: disk throughput during seeding, from the primary]

[Image: disk throughput during seeding, from the secondary]

~350 up to 450 MB/s on each side (a 2x gain), but CPU utilization increased to 70% during my test (also a 2x increase). So better resource usage, but at the cost of higher CPU utilization …

Let’s finish by looking at the sys.dm_hadr_automatic_seeding DMV that provides the answer to the question: are we faster in this case?

select
    ag.name as aag_name,
    ar.replica_server_name,
    d.name as database_name,
    has.current_state,
    has.failure_state_desc as failure_state,
    has.error_code,
    has.performed_seeding,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.start_time) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), has.completion_time) as completion_time,
    has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
    on ag.group_id = has.ag_id
join sys.availability_replicas as ar
    on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
    on d.group_database_id = has.ag_db_id

 

 

[Image: DMV statistics for the seeding session]

 

And the answer is yes, as we might expect! Only 8 minutes (a 3x gain) to replicate and synchronize the GestionCom database between the two highly available replicas, compared to the first method. But that's not all … let's focus on the redo thread activity on the secondary and you may notice a very interesting rate value (~12 MB/s). I don't remember having seen such a value with current availability groups at customer sites. This is the second improvement made by Microsoft: the introduction of parallel redo capability. As a reminder, before SQL Server 2016 there was only one redo thread per database. In this context, a single redo thread simply could not keep up with applying the changes as persisted in the log.

From the secondary, we may see some changes by looking at the sys.dm_exec_requests DMV:

select 
	r.command, 
	r.status,
	r.wait_type,
	r.wait_time,
	r.last_wait_type,
	r.scheduler_id
from sys.dm_exec_requests as r
where r.command like '%REDO%'
order by r.scheduler_id

 

 

[Image: parallel redo threads]

Using direct seeding is definitely a solution to take into account for our future database deployments, but I think we have to keep two things in mind according to this test: CPU and network consumption from the seeding activity may impact the performance of other applications, and vice-versa. In the real world, there is a good chance of being in this situation.

Finally, let's have a look at the extended event output. With respect to what we want to highlight in this blog post, we don't get any other valuable information, but one thing got my attention: the LIMIT_CONCURRENT_BACKUPS value in the current value column (underlined in red). What does it mean exactly? Let's talk about it in a future blog post, because it is a little bit out of scope of the main subject.

[Image: extended events captured with direct seeding]

 

Third test by using direct seeding and compression

Let's talk about the last test I performed. I used direct seeding without compression in the previous test, so SQL Server didn't compress the data stream by default in this case. However, we may force SQL Server to use compression by setting the special trace flag 9567. After all, we want to avoid direct seeding flooding the wire and impacting the existing workload of other applications.
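
As a sketch, the trace flag can be enabled globally like this on the replica that sends the seeding stream (it could also be added as a -T9567 startup parameter):

DBCC TRACEON (9567, -1);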

I have to admit that enabling compression with direct seeding is not so obvious. For instance, I didn't see any difference in the DMVs that indicates we're using compression (the is_compression_enabled column in the sys.dm_hadr_physical_seeding_stats DMV is always equal to 0 regardless of whether we use compression or not). The only obvious difference comes from the network throughput, which is lower with compression (a 2.5x gain). However, I noticed a significant increase in CPU utilization, close to 100% on the primary in my case.
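
For completeness, a sketch of how that DMV can be watched while seeding is running (the column list is from memory and may need adjusting):

select local_database_name,
       transfer_rate_bytes_per_second,
       transferred_size_bytes,
       database_size_bytes,
       is_compression_enabled
from sys.dm_hadr_physical_seeding_stats;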

[Image: resource usage with seeding and compression, from the primary]

What about seeding time? Well, I didn't notice any gain in this area. Does compression save network bandwidth? Maybe … hard to say with only this test and one specific environment.

I tried adding 3 vCPUs to each replica and leaving one vCPU to the system, so a total of 7 vCPUs dedicated to SQL Server use.

[Image: CPU usage during seeding with compression, 8 vCPUs]

At this point, I admit to being a little bit surprised, and I wonder whether compression correctly uses all the available processors, given the uneven distribution of CPU resource usage. The above picture is a good representation of what I saw during the other tests I performed with compression. In addition, I didn't see any obvious performance gain in terms of duration, except that the wire is less used. I'm a little bit disappointed by compression, but once again it is still much too early to draw a conclusion, and I'm looking forward to seeing direct seeding in action at my customers' sites with real production infrastructure.

The bottom line is that direct seeding is a very promising feature and I love it because it is the directly visible part of the AlwaysOn performance improvements shipped with SQL Server 2016. However, and this is my personal opinion, I think we shouldn't let it fool us: consider using direct seeding carefully according to your workload and available resources. Fortunately, in most cases it will be suitable.

Stay tuned!

 

 

 

This article SQL Server 2016 AlwaysOn: Direct seeding and performance considerations appeared first on Blog dbi services.

Partner Webcast - Oracle Mobile Cloud Service for Developers: Updates and Features

The Oracle Mobile Cloud Service (MCS) goal is to easily enable mobile application developers that are continually looking for new ways to accelerate development of apps to meet their consumer’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs
