Feed aggregator

How to Deserialize an HTTP POST Response

Tom Kyte - Mon, 2017-11-06 08:46
I am getting a response for a request (l_value) as below: <?xml version="1.0" encoding="utf-8"?> <boolean xmlns="http://tempuri.org/">false</boolean> How do I get the boolean value false and assign it to a variable lv_return httpres...
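A minimal sketch of one possible approach (not the actual AskTom answer), assuming the full response is already in a VARCHAR2 variable l_value as shown in the question:

DECLARE
  l_value   VARCHAR2(4000) := '<?xml version="1.0" encoding="utf-8"?><boolean xmlns="http://tempuri.org/">false</boolean>';
  lv_return VARCHAR2(10);
BEGIN
  -- the default namespace must be declared, otherwise '/boolean' matches nothing
  SELECT x.val
    INTO lv_return
    FROM XMLTABLE(XMLNAMESPACES(DEFAULT 'http://tempuri.org/'),
                  '/boolean' PASSING XMLTYPE(l_value)
                  COLUMNS val VARCHAR2(10) PATH '.') x;
  DBMS_OUTPUT.PUT_LINE(lv_return);  -- prints: false
END;
/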
Categories: DBA Blogs

Warning Event: <hostname.domain name>: <db name> DB - Failed logon attempts

Tom Kyte - Mon, 2017-11-06 08:46
OS_USER HOST_NAME TERMINAL USERID Login Date CLIENT_PROGRAM_NAME Error message oracle hostname.domain name SYSTEM 11-02-2017 14:54:05 sqlplus@hostname.domain name (TNS V1-V3) ORA-1017:invalid user...
Categories: DBA Blogs

Oracle memory and processor requirement

Tom Kyte - Mon, 2017-11-06 08:46
Hello Tom, I'm new to databases and just want to understand the memory and processor requirements for a database server. If I say that approx. 500-600 concurrent users are going to connect to a 12c database through an application, and the size of the database is ap...
Categories: DBA Blogs

Synchronize data from Oracle SE to Oracle EE

Tom Kyte - Mon, 2017-11-06 08:46
Dear All, Could anyone share with me whether data from Oracle SE can be synchronized to Oracle EE? Regards
Categories: DBA Blogs

Presenting at Cloud day event of North India Chapter of AIOUG

Amardeep Sidhu - Mon, 2017-11-06 05:47

I will be presenting a session titled “An 18 pointers guide to setting up an Exadata machine” at the Cloud Day being organized by the North India chapter of AIOUG. Vivek Sharma is doing multiple sessions on various cloud and performance related topics. You can register for the event here:

https://www.meraevents.com/event/aioug-nic-cloud-day 

 

Categories: BI & Warehousing

Microservices: Running a webserver (caddy) on Ubuntu Core with snap

Dietrich Schroff - Sun, 2017-11-05 14:00
After the installation of an Ubuntu Core system inside VirtualBox, I was keen to find out how to put a microservice onto the server via a snap package.

First a listing of the installed snap packages:
~$ snap list
Name          Version     Rev  Developer  Notes
core          16.04.1     394  canonical  -
pc            16.04-0.8   9    canonical  -
pc-kernel     4.4.0-45-4  37   canonical  -
To get a list of available packages you can use "snap find" + a search value:
$ snap find http
Name                          Version            Developer       Notes  Summary
http                          0.9.9-1            chipaca         -      HTTPie in a snap
httpstat                      1.1.3              simosx          -      Curl statistics made simple
gost                          2.4                ginuerzh        -      GO Simple Tunnel
spreed-webrtc-snap            0.24.11-4          garywzl77       -      WebRTC audio/video calls and conferences
squid-gary                    0.3                garywzl77       -      Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more.
littlewatcher                 0.9.9              littlewatcher   -      Client for a distributed monitoring network
tinyproxy-snap                0.2                garywzl77       -      a light-weight HTTP(S) proxy daemon for POSIX operating systems.
caddy-hduran                  0.9.3              hduran          -      The HTTP/2 web server with automatic HTTPS
demo-curl                     7.47.0-1ubuntu2.1  woodrow         -      command line tool for transferring data with URL syntax
conn-check                    1.3.2-2            1stvamp         -      Utility for verifying connectivity between services
reqcounter                    0.1                meehow          -      HTTP requests counter
bhttp                         0                  rog             -      Macaroon-aware HTTP command line client
unixhttp                      1                  teknoraver      -      HTTP over Unix
wuzz                          dd696dc-1          nhandler        -      interactive cli tool for HTTP inspection
prometheus-blackbox-exporter  0.5.0              jacek           -      The Prometheus Blackbox Exporter
gnocchi                       4.0.3              james-page      -      Time Series Database as a Service
kurly                         master             carla-sella     -      kurly is an alternative to the widely popular curl program.
inadyn                        0.1                snapcrafters    -      Internet Automated Dynamic DNS Client
ipfs                          v0.4.11            elopio          -      global, versioned, peer-to-peer filesystem
tinyproxy-ogra                1.8.3              ogra            -      very tiny proxy server
demo-wget                     1.17.1-2           woodrow         -      retrieves files from the web
links                         2.12-1             zygoon          -      Web browser running in text mode
couchdb                       2.0                apache-couchdb  -      RESTful document oriented database
attfeeder                     0.0.1              sphengineering  -      Attitude angles feeder
I chose caddy-hduran:
$ snap install caddy-hduran
A few seconds later the installation was finished.

The deployment structure can be found with this command:
~$ mount | grep caddy
/var/lib/snapd/snaps/caddy-hduran_12.snap on /snap/caddy-hduran/12 type squashfs (ro,relatime)
/var/lib/snapd/snaps/caddy-hduran_12.snap on /writable/system-data/snap/caddy-hduran/12 type squashfs (ro,relatime)
nsfs on /run/snapd/ns/caddy-hduran.mnt type nsfs (rw)
Both mounts are read-only squashfs filesystems. So where do you put the configuration file?

I found /writable/system-data/var/snap/caddy-hduran/12/ and put a Caddyfile there with this content:
192.168.178.31:8080
tls off
plus a simple index.html. Writing there only works with sudo.
Then I started the caddy server:
/var/snap/caddy-hduran/12# caddy-hduran.caddy
Activating privacy features... done.
http://192.168.178.31:8080
WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with "ulimit -n 8192".
A strange behaviour is that it will not run if you stay in /writable/system-data/var/snap/caddy-hduran/12/, but from /var/snap/caddy-hduran/12 it starts...
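To recap, here is the whole sequence as shell commands (a sketch, assuming snap revision 12 and the IP address used above):

# write the Caddyfile into the writable area (needs sudo)
sudo sh -c 'printf "192.168.178.31:8080\ntls off\n" > /writable/system-data/var/snap/caddy-hduran/12/Caddyfile'
# start caddy from the path where it actually works
cd /var/snap/caddy-hduran/12
caddy-hduran.caddy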



If you want more information about how to configure caddy, take a look at this blog: https://www.booleanworld.com/host-website-caddy-web-server-linux/

Multitenant internals: INT$ and INT$INT$ views

Yann Neuhaus - Sun, 2017-11-05 13:24

This month, I’ll talk – with lot of demos – about multitenant internals at DOAG conference. CaptureMultitenantInternals
The multitenant dictionary architecture starts with a simple idea: system metadata and data are in CDB$ROOT and user metadata and data are in PDB. And it could have been that simple. When a session connected to a PDB needs to read some system information, the session context is switched to the CDB$ROOT container and reads from CDB$ROOT SYS tablespace and objects, and then switches back to the PDB container. This is implemented by metadata and data links: the PDB lists the objects for which the session has to switch to CDB$ROOT to get metadata or data.

But, for compatibility reasons and for ease of administration, the dictionary views must display information from both containers transparently, and then things become a little more complex, with common views and extended data views.

At Oracle Open World, the multitenant architects answered questions about the multitenant architecture posted on Twitter in the #PDBExpert session. My first question (because I was investigating a bug at that time) was about some views, such as INT$INT$DBA_CONSTRAINTS, introduced to handle the complexity of showing the same information in the dictionary views as the ones we had on non-CDB databases. Of course, the architects didn't want to go too far into this and gave a very concise answer: INT$ is for internal, and here you have two 'INT$', so you shouldn't look at that.

But I like to understand how things work, so here is the explanation of these INT$INT$ views. I'm not even sure that INT stands for 'internal'; it may be 'intermediate'. But for sure, the $ at the end marks Oracle internal dictionary objects.

INT$ Extended Data views

We are used to seeing all objects, system ones and user ones, listed by the dictionary views. For example, DBA_PROCEDURES shows all procedures, system and user ones, and thus has to read from both containers (the current PDB and CDB$ROOT) through extended data links. ALL_PROCEDURES shows all procedures accessible by the user, and also has to switch to CDB$ROOT if the user has been granted access to system objects. USER_PROCEDURES shows only the objects owned by the current user, and can therefore read from the current container only.

To ease the definitions, in 12c all the joins on the underlying tables (such as procedureinfo$, user$, obj$) are done by an intermediate view such as INT$DBA_PROCEDURES, which is defined as an EXTENDED DATA link to read from CDB$ROOT in addition to the local tables. Then DBA_PROCEDURES, ALL_PROCEDURES and USER_PROCEDURES are defined on top of it, with the required where clause to filter on owner and privilege accessibility.

INT$INT$ Extended Data views

In this post, I’ll detail the special case of DBA_CONSTRAINTS because things are more complex to get the multitenant architecture behaving the same as the non-CDB.

There are several types of constraints, identified by the CONSTRAINT_TYPE column of DBA_CONSTRAINTS, or by the TYPE# column of the underlying table CDEF$.

Here, I query the underlying table with the CONTAINERS() function to see what is stored in each container:

SQL> select decode(type#,1,'C',2,'P',3,'U',4,'R',5,'V',6,'O',7,'C',8,'H',9,'F',10,'F',11,'F',13,'F','?') constraint_type,
2 type#,con_id,count(*) from containers(cdef$)
3 group by type#,con_id order by type#,con_id;
 
CONSTRAINT_TYPE      TYPE#     CON_ID   COUNT(*)
--------------- ---------- ---------- ----------
C                        1          1         74
C                        1          3         74
P                        2          1        843
P                        2          3        844
U                        3          1        238
U                        3          3        238
R                        4          1        324
R                        4          3        324
V                        5          1         11
O                        6          1        172
O                        6          3         26
C                        7          1       5337
C                        7          3       5337
F                       11          1         11
F                       11          3         11
?                       12          1          3
?                       12          3          3

I have very few user objects in this database. CON_ID=1 is CDB$ROOT and CON_ID=3 is my PDB. What we can see here is that we have nearly the same number of rows in both containers for the constraint types related to tables: C (check constraint on a table), P (primary key), U (unique key), R (referential integrity), and so on. And some types have most or all of their rows in CDB$ROOT only: V (check option on views) and O (read only on views).

That’s an implementation specificity of the multitenant architecture which makes things more complex for the dictionary views. For some objects (such as procedures and views) the metadata is stored in only one container: system objects have all their information in CDB$ROOT and the PDB has only a link which is a dummy row in OBJ$ which mentions the sharing (such as metadata link), owner and name (to match to the object in CDB$ROOT), and a signature (to verify that the DDL creating the object is the same). But other objects (such as tables) have their information duplicated in all containers for system objects (CDB$ROOT, PDB$SEED and all user PDBs). This is the reason why we see rows in both containers for constraint definition when they are related to a table.

Example on view constraint

I’ll take a constraint on system view as an example: constraint SYS_C003357 on table SYS.DBA_XS_SESSIONS


SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='DBA_XS_SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS DBA_XS_SESSIONS VIEW METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from containers(dba_constraints) where owner='SYS' and table_name='DBA_XS_SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS DBA_XS_SESSIONS O SYS_C003357

I’m looking at the dependencies for the DBA_CONSTRAINTS view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS DBA_CONSTRAINTS SYS GETLONG
SYS DBA_CONSTRAINTS SYS INT$DBA_CONSTRAINTS

So DBA_CONSTRAINTS is a view on INT$DBA_CONSTRAINTS, as we have seen above. However, this view does not read the tables directly, but yet another view:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$DBA_CONSTRAINTS SYS GETLONG
SYS INT$DBA_CONSTRAINTS SYS INT$INT$DBA_CONSTRAINTS

Here is our additional INT$INT$ view which is reading the tables:

SQL> select owner,name,referenced_owner,referenced_name from dba_dependencies where owner='SYS' and name='INT$INT$DBA_CONSTRAINTS' and type='VIEW';
 
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
----- ---- ---------------- ---------------
SYS INT$INT$DBA_CONSTRAINTS SYS USER$
SYS INT$INT$DBA_CONSTRAINTS SYS CDEF$
SYS INT$INT$DBA_CONSTRAINTS SYS OBJ$
SYS INT$INT$DBA_CONSTRAINTS SYS CON$
SYS INT$INT$DBA_CONSTRAINTS SYS _CURRENT_EDITION_OBJ
SYS INT$INT$DBA_CONSTRAINTS SYS _BASE_USER
SYS INT$INT$DBA_CONSTRAINTS SYS GETLONG

In summary, the EXTENDED DATA view which reads the tables in each container (CDB$ROOT and PDB) is here INT$INT$DBA_CONSTRAINTS, and INT$DBA_CONSTRAINTS is another intermediate view sitting below the DBA_CONSTRAINTS view.


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('DBA_CONSTRAINTS','INT$DBA_CONSTRAINTS','INT$INT$DBA_CONSTRAINTS') order by object_id desc;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
PUBLIC DBA_CONSTRAINTS SYNONYM METADATA LINK
SYS DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$DBA_CONSTRAINTS VIEW METADATA LINK
SYS INT$INT$DBA_CONSTRAINTS VIEW EXTENDED DATA LINK

In this example, we don’t understand the reason for the additional intermediate view because the return all the same number of rows in each container:


SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357
 
SQL> select con_id,constraint_type,constraint_name from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 O SYS_C003357
3 O SYS_C003357

The difference is only a few additional columns from the object definition (OWNERID, OBJECT_ID, OBJECT_TYPE#, SHARING) in the INT$ and INT$INT$ views, which are not selected in the final view:

SQL> select * from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER OWNERID CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME OBJECT_ID OBJECT_TYPE# SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED SHARING ORIGIN_CON_ID CON_ID
----- ------- --------------- --------------- ---------- --------- ------------ ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------- ------------- ------
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 1
SYS 0 SYS_C003357 O DBA_XS_SESSIONS 10316 4 ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1 3
 
SQL> select * from containers(DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003357'
 
OWNER CONSTRAINT_NAME CONSTRAINT_TYPE TABLE_NAME SEARCH_CONDITION_VC R_OWNER R_CONSTRAINT_NAME DELETE_RULE STATUS DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHANGE INDEX_OWNER INDEX_NAME INVALID VIEW_RELATED ORIGIN_CON_ID CON_ID
----- --------------- --------------- ---------- ------------------- ------- ----------------- ----------- ------ ---------- -------- --------- --------- --- ---- ----------- ----------- ---------- ------- ------------ ------------- ------
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17 1 1
SYS SYS_C003357 O DBA_XS_SESSIONS ENABLED NOT DEFERRABLE IMMEDIATE NOT VALIDATED GENERATED NAME 26-JAN-17

If we look at the INT$DBA_CONSTRAINTS definition, we see some filters on those object columns:

SQL> ddl INT$DBA_CONSTRAINTS
 
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."INT$DBA_CONSTRAINTS" ("OWNER", "OWNERID", "CONSTRAINT_NAME", "CONSTRAINT_TYPE", "TABLE_NAME", "OBJECT_ID", "OBJECT_TYPE#", "SEARCH_CONDITION", "SEARCH_CONDITION_VC", "R_OWNER", "R_CONSTRAINT_NAME", "DELETE_RULE", "STATUS", "DEFERRABLE", "DEFERRED", "VALIDATED", "GENERATED", "BAD", "RELY", "LAST_CHANGE", "INDEX_OWNER", "INDEX_NAME", "INVALID", "VIEW_RELATED", "SHARING", "ORIGIN_CON_ID") AS
select OWNER, OWNERID, CONSTRAINT_NAME, CONSTRAINT_TYPE,
TABLE_NAME, OBJECT_ID, OBJECT_TYPE#, SEARCH_CONDITION,
SEARCH_CONDITION_VC, R_OWNER, R_CONSTRAINT_NAME, DELETE_RULE, STATUS,
DEFERRABLE, DEFERRED, VALIDATED, GENERATED,
BAD, RELY, LAST_CHANGE, INDEX_OWNER, INDEX_NAME,
INVALID, VIEW_RELATED, SHARING, ORIGIN_CON_ID
from INT$INT$DBA_CONSTRAINTS INT$INT$DBA_CONSTRAINTS
where INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 4 /* views */
OR (INT$INT$DBA_CONSTRAINTS.OBJECT_TYPE# = 2 /* tables */
AND (INT$INT$DBA_CONSTRAINTS.ORIGIN_CON_ID
= TO_NUMBER(SYS_CONTEXT('USERENV', 'CON_ID'))));

For views (OBJECT_TYPE#=4) there is no filter, which explains why we saw the same number of rows in the previous example. But for tables (OBJECT_TYPE#=2) there is an additional filter to keep the rows from the current container only.

Example on table constraint

Then, I’ll take another example with a constraint definition for a table:

SQL> select owner,object_name,object_type,sharing from dba_objects where owner='SYS' and object_name='RXS$SESSIONS';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ----------- ----------- -------
SYS RXS$SESSIONS TABLE METADATA LINK
 
SQL> select owner,table_name,constraint_type,constraint_name from dba_constraints where owner='SYS' and table_name='RXS$SESSIONS' and rownum=1;
 
OWNER TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME
----- ---------- --------------- ---------------
SYS RXS$SESSIONS C SYS_C003339

From the INT$INT$ view, we get a duplicate when we query from a PDB, because for tables the PDB holds not only a dummy row in OBJ$ but also full information about the table, duplicated in other tables such as TAB$ and CDEF$:

SQL> select con_id,constraint_type,constraint_name from containers(INT$INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339
3 C SYS_C003339

This is the reason for the additional intermediate view: filtering out those duplicates by removing the rows from CDB$ROOT when queried from a PDB.

SQL> select con_id,constraint_type,constraint_name from containers(INT$DBA_CONSTRAINTS) where OWNER='SYS' and constraint_name='SYS_C003339'
 
CON_ID CONSTRAINT_TYPE CONSTRAINT_NAME
------ --------------- ---------------
1 C SYS_C003339
3 C SYS_C003339

Thanks to that, the duplicates are not visible in the end-user views DBA_CONSTRAINTS and CDB_CONSTRAINTS.

You may wonder why only DBA_CONSTRAINTS needs these views and not DBA_TABLES, DBA_INDEXES or DBA_TAB_COLUMNS. That's because all information about system tables and indexes is replicated into all PDBs, so there is no need for EXTENDED DATA links and context switches there. DBA_CONSTRAINTS has the particularity of showing information about both tables and views, which implement metadata links in different ways.

 

The article Multitenant internals: INT$ and INT$INT$ views appeared first on Blog dbi services.

Multitenant dictionary: what is consolidated and what is not

Yann Neuhaus - Sun, 2017-11-05 11:38

The documentation says that, for reduction of duplication and ease of database upgrade, Oracle-supplied objects such as data dictionary table definitions and PL/SQL packages are represented only in the root.

Unfortunately, this is only partly true. System PL/SQL packages are only in the root, but system table definitions are replicated into all PDBs.

This post is an extension of a previous blog post which was on 12cR1. This one is on 12cR2.

As I did at Open World and will do at DOAG, I show multitenant internals by creating a metadata link procedure. When I do a simple ‘describe’ when connected to a PDB, the sql_trace shows that the session switches to the CDB$ROOT to get the procedure information:

*** 2017-11-05T16:17:36.339126+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140420856738440 len=143 dep=1 uid=0 oct=3 lid=0 tim=101728244788 hv=2206365737 ad='7f60a7f0' sqlid='9fjf75a1s4y19'
select procedure#,procedurename,properties,itypeobj#, properties2 from procedureinfo$ where obj#=:1 order by procedurename desc, overload# desc
END OF STMT

All information about system PL/SQL procedures is stored in the root only. The PDB has only a dummy row in OBJ$ to mention that it is a metadata link. And this is what you pay for with the multitenant option: consolidation of all system dictionary objects into the root only. You save space (on disk and in the related memory) and you have only one place to upgrade.
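The SHARING column of DBA_OBJECTS shows which objects are consolidated this way (a quick sketch, to be run as SYS):

SQL> -- how many SYS objects of each type are metadata links vs. extended data links
SQL> select object_type,sharing,count(*) from dba_objects
  2  where owner='SYS' group by object_type,sharing order by 1,2;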

But this is implemented only for some objects, like PL/SQL procedures, and not for others, like tables and indexes. If you 'describe' a metadata link table when connected to a PDB, you will not see any switch to CDB$ROOT in the sql_trace:

*** 2017-11-05T13:01:53.541231+01:00 (PDB1(3))
PARSING IN CURSOR #139835947128936 len=86 dep=1 uid=0 oct=3 lid=0 tim=98244321664 hv=2195287067 ad='75f823b8' sqlid='32bhha21dkv0v'
select col#,intcol#,charsetid,charsetform from col$ where obj#=:1 order by intcol# asc
END OF STMT
PARSE #139835947128936:c=0,e=158,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=3765558045,tim=98244321664
BINDS #139835947128936:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=08 fl2=1000001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f2e124fef10 bln=22 avl=03 flg=05
value=747
EXEC #139835947128936:c=1000,e=603,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=3765558045,tim=98244322311
FETCH #139835947128936:c=0,e=15,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,plh=3765558045,tim=98244322342
FETCH #139835947128936:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=3765558045,tim=98244322356
FETCH #139835947128936:c=0,e=4,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3765558045,tim=98244322369
STAT #139835947128936 id=1 cnt=2 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=3 pr=0 pw=0 str=1 time=16 us cost=3 size=234 card=13)'
STAT #139835947128936 id=2 cnt=2 pid=1 pos=1 obj=21 op='TABLE ACCESS CLUSTER COL$ (cr=3 pr=0 pw=0 str=1 time=11 us cost=2 size=234 card=13)'
STAT #139835947128936 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=6 us cost=1 size=0 card=1)'
CLOSE #139835947128936:c=0,e=1,dep=1,type=3,tim=98244322439

Here, all information about the columns is read from COL$ in the PDB. And if you look at TAB$ (tables), COL$ (table columns), IND$ (indexes), CON$ and CDEF$ (constraints), you will see that they contain rows even in a PDB where no user objects have been created. This is the case for all information related to tables: it is stored in CDB$ROOT and replicated into all other containers: PDB$SEED and all user-created PDBs. Only the information related to non-data objects is stored in one container only.

I’ve run a query to count the rows in CDB$ROOT and PDB$SEED and here is the result:
[screenshot: row counts per dictionary table in CDB$ROOT and PDB$SEED]
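The query behind it was of roughly this shape, run once per dictionary table (a sketch, here for COL$, using the CONTAINERS() clause from CDB$ROOT):

SQL> select con_id,count(*) from containers(col$) group by con_id order by con_id;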

All rows in OBJ$ are replicated, which is expected because this is where the metadata link information is stored. But you also see that all information related to tables is replicated, such as the 100000+ columns in COL$. And this is the reason why you do not see a big consolidation benefit when you look at the size of the SYSTEM tablespace in pluggable databases which do not contain any user data:

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 820 SYSTEM YES /u01/oradata/CDB1A/system01.dbf
3 630 SYSAUX NO /u01/oradata/CDB1A/sysaux01.dbf
4 80 UNDOTBS1 YES /u01/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u01/oradata/CDB1A/pdbseed/system01.dbf
6 390 PDB$SEED:SYSAUX NO /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u01/oradata/CDB1A/users01.dbf
8 100 PDB$SEED:UNDOTBS1 NO /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
9 270 PDB1:SYSTEM YES /u01/oradata/CDB1A/PDB1/system01.dbf
10 440 PDB1:SYSAUX NO /u01/oradata/CDB1A/PDB1/sysaux01.dbf
11 100 PDB1:UNDOTBS1 YES /u01/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u01/oradata/CDB1A/PDB1/users01.dbf

Here I have 250MB in PDB$SEED, which is supposed to contain only links to the 820MB SYSTEM tablespace of CDB$ROOT, but there is a lot more than that.

So, basically, not all of the dictionary is consolidated in multitenant, but only the non-data part, such as the PL/SQL packages and the dictionary view definitions. You can think of the multitenant option consolidation as an extension of sharing the Oracle Home among several databases: it concerns the software part only. But the part of the dictionary which contains data about system objects is replicated into all containers and is read locally, without a context switch. This also means that a patch or upgrade on this part has to be run in all containers.

With the fact that some information is replicated and some is not comes the complexity of managing that in the dictionary views, and this is the subject of the next blog post, about the INT$INT$ views.

 

The article Multitenant dictionary: what is consolidated and what is not appeared first on Blog dbi services.

ksplice kernel updates and Exadata patching

Amardeep Sidhu - Sun, 2017-11-05 11:32

If you have installed a one-off ksplice fix for the kernel on Exadata, remember to uninstall it before you do a kernel upgrade, e.g. regular Exadata patching. Such fixes are kernel-version specific, so they may not work with the newer version of the kernel.

Categories: BI & Warehousing

GoldenGate Naming Convention P03

Michael Dinh - Sun, 2017-11-05 10:44

GoldenGate Naming Convention P01
GoldenGate Naming Convention P02

Here I provide an example of how I would implement 3-way replication.
I used capitalization in the trail names for some clarity (not tested yet).

N-way Replication calculations:
Primary Extract for each silo: 1
Pump Extract for each silo: N-1 
Replicats for each silo: N-1 
Total processes for each silo: 2N-1
Sequence start value: (1001-100N), increment by 100

++++++++++

3-way Replication:
Primary Extract for each silo: 1
Pump Extract for each silo: 3-1=2
Replicats for each silo: 3-1=2
Total processes for each silo: 2*3-1=5
Sequence start value: (1001,1002,1003) increment by 100

++++++++++

(Silo 1 - NYPRD) E_NY(aa) | pump (aB):(aC) | replicate (bA):(cA) | 1001+100 (sequence)
(Silo 2 - DCPRD) E_DC(bb) | pump (bA):(bC) | replicate (aB):(cB) | 1002+100 (sequence)
(Silo 3 - STDBY) E_SB(cc) | pump (cA):(cB) | replicate (aC):(bC) | 1003+100 (sequence)

++++++++++

E_NY1 (aa)

-- Pumps include all other silos except the current silo
P_NY2 (aB)
P_NY3 (aC)  

-- Replicats include all other silos except the current silo
R_DC2 (bA)
R_SB3 (cA)

++++++++++

E_DC2 (bb)

P_DC1 (bA)
P_DC3 (bC)

R_NY1 (aB)
R_SB3 (cB)

++++++++++

E_SB3 (cc)

P_SB1 (cA)
P_SB2 (cB)

R_DC2 (bC)
R_NY1 (aC)

++++++++++

Stop replication from NYPRD:
(Silo 1) stop *NY* (stops E_NY1/P_NY2/P_NY3)

-- This may be optional depending on requirements.
-- If nothing is extracted, then nothing is replicated.
(Silo 2) stop *NY* (stops R_NY1) 
(Silo 3) stop *NY* (stops R_NY1)

Essential WebLogic Tuning to Run on Docker and Avoid OOM

Andrejus Baranovski - Sun, 2017-11-05 09:43
Read my previous post about how to run ADF on Docker - Oracle ADF on Docker Container. The Docker WebLogic image is based on the official Oracle Docker image for FMW infrastructure - OracleFMWInfrastructure. A WebLogic container created from this image runs, but not for long - eventually the JVM process eats up all memory and an OOM (out of memory) exception is thrown. This is a known issue related to JVMs running in Docker containers - Running a JVM in a Container Without Getting Killed. The good news: we can switch on WebLogic memory management functionality to prevent the OOM error while running in a Docker container. This functionality is turned on with the special flag -XX:+ResourceManagement. To set this flag, we need to update the startWebLogic.sh script, but we probably don't want to rebuild the Docker image. Read below how to achieve this.

First we need to get the startWebLogic.sh script out of the Docker container. Make sure the Docker container on your host is running and execute the docker cp command:

docker cp RedSamuraiWLS:/u01/oracle/user_projects/domains/InfraDomain/bin/startWebLogic.sh /Users/andrejusbaranovskis/infra/shared

This will copy startWebLogic.sh file from Docker container to your host system.

Open the startWebLogic.sh script and search for the resource management config; by default it is commented out. Set this string for JAVA_OPTIONS. It enables WebLogic resource management and the G1 garbage collector:

JAVA_OPTIONS="-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC ${SAVE_JAVA_OPTIONS}"

The startWebLogic.sh script contains a comment which recommends enabling this option:


Once the JAVA_OPTIONS variable is updated, copy the startWebLogic.sh script back to the Docker container:

docker cp /Users/andrejusbaranovskis/infra/shared/startWebLogic.sh RedSamuraiWLS:/u01/

Enter the Docker container command prompt (in my case user 501 is the root user for the Docker container):

docker exec -u 501 -it RedSamuraiWLS bash

Change file permissions for startWebLogic.sh:

chmod 777 startWebLogic.sh

Enter the Docker container as the oracle user:

docker exec -it RedSamuraiWLS bash

Copy the startWebLogic.sh script from /u01 into the bin folder (overwriting the existing script file):

cp startWebLogic.sh /u01/oracle/user_projects/domains/InfraDomain/bin

Stop the Docker container and run docker commit to create a new image (which includes the change in startWebLogic.sh):

docker commit RedSamuraiWLS abaranovskis/redsamurai-wls:v2

The Docker image is created with the delta changes only, which saves space. Run the docker images command to verify that the new image was created successfully:


Run docker push to upload the new image version into the Docker repository. The upload will be fast, because only the delta of changes is uploaded:

docker push abaranovskis/redsamurai-wls:v2

You should see new image version uploaded into Docker repository:


To run the container online, we can log into the Digital Ocean console and execute the docker run command (I'm using a container memory limit of -m 4g (4 GB)); it will pull and run the new image:
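The command itself is only visible in the screenshot; it would look roughly like this (the port mapping is an assumption based on the WebLogic default):

docker run -d --name RedSamuraiWLS -m 4g -p 7001:7001 abaranovskis/redsamurai-wls:v2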


Once the Docker container is running, execute the top command in the Digital Ocean console to monitor memory consumption. The Java process memory consumption should not grow if there is no user activity on the WebLogic server.

Relocate Services Back To Instance Before Patching

Michael Dinh - Sun, 2017-11-05 07:16

This will only work for a 2-node RAC!

Prerequisite:
Patching starts at instance1, services failover to instance2.
Patching completed at instance1, restart instance1.
Patching starts at instance2, services failover to instance1.
Patching completed at instance2, restart instance2.
All services are now running at instance1.
Relocate services from instance2 back to where they belong.

Save existing service configuration before patching.
[oracle@racnode-dc1-2 rac_relocate]$ ./save_service.sh

 

+ srvctl status database -d orclcdb -v
+ srvctl status database -d orclcdb -v
+ awk '-F ' '{print $2}'
+ cat /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
+ cat /tmp/instance.conf
orclcdb1
orclcdb2
++ tail -1 /tmp/services.conf
++ awk '-F ' '{print $11}'
++ awk '{$0=substr($0,1,length($0)-1); print $0}'
+ svc=testsvc26,testsvc27,testsvc28,testsvc29
+ exit
[oracle@racnode-dc1-2 rac_relocate]$

 

Patching completed at instance1 and starting at instance2.
All services are running on instance1 after failover of instance2.

 

[oracle@racnode-dc1-2 rac_relocate]$ srvctl stop instance -db orclcdb -instance orclcdb2 -failover
[oracle@racnode-dc1-2 rac_relocate]$ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is not running on node racnode-dc1-2
[oracle@racnode-dc1-2 rac_relocate]$

 

Patching completed at instance2, start instance2, all services running from instance1.

[oracle@racnode-dc1-2 rac_relocate]$ srvctl start instance -db orclcdb -instance orclcdb2
[oracle@racnode-dc1-2 rac_relocate]$ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
[oracle@racnode-dc1-2 rac_relocate]$

Verify that relocating the services will work as intended by testing first: the commands are printed but not executed.

[oracle@racnode-dc1-2 rac_relocate]$ ./test_relocate.sh
================================================================================
++++++ Saved Configuration
-rw-r--r-- 1 oracle oinstall  18 Nov  5 13:01 /tmp/instance.conf
-rw-r--r-- 1 oracle oinstall 291 Nov  5 13:01 /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
orclcdb1
orclcdb2
================================================================================
++++++ Relocate Configuration
newinst=orclcdb2
oldinst=orclcdb1
svc=testsvc26,testsvc27,testsvc28,testsvc29
================================================================================
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
srvctl relocate service -db orclcdb -service testsvc26 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc27 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc28 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc29 -oldinst orclcdb1 -newinst orclcdb2
[oracle@racnode-dc1-2 rac_relocate]$

Relocate services back to the original saved configuration.

[oracle@racnode-dc1-2 rac_relocate]$ ./relocate_service.sh
================================================================================
++++++ Saved Configuration
-rw-r--r-- 1 oracle oinstall  18 Nov  5 13:01 /tmp/instance.conf
-rw-r--r-- 1 oracle oinstall 291 Nov  5 13:01 /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
orclcdb1
orclcdb2
================================================================================
++++++ Relocate Configuration
newinst=orclcdb2
oldinst=orclcdb1
svc=testsvc26,testsvc27,testsvc28,testsvc29
================================================================================
+ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
+ IFS=,
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc26 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc27 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc28 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc29 -oldinst orclcdb1 -newinst orclcdb2
+ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
+ exit
[oracle@racnode-dc1-2 rac_relocate]$

I have ranted about hardcoding before.
YES! I hardcoded the conf file location to provide a permanent and consistent location for all environments.

I don't like having to dig through code to find such information.
ex:
SCRIPT_DIR=/u01/app/oracle/scripts
LOG_DIR=$SCRIPT_DIR/log

save_service.sh


#!/bin/sh -x
# expects ${db} to be set beforehand, e.g. export db=orclcdb
srvctl status database -d ${db} -v > /tmp/services.conf
srvctl status database -d ${db} -v|awk -F" " '{print $2}' > /tmp/instance.conf
cat /tmp/services.conf
cat /tmp/instance.conf
# last field of the last services.conf line = comma-separated service list (trailing dot stripped)
svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
exit
# NOTE: everything below the exit above never runs (leftover from relocate_service.sh)
IFS=","
srvctl status database -d ${db} -v
for s in ${svc}
do
srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}
done
srvctl status database -d ${db} -v
exit

 

test_relocate.sh


#!/bin/sh
# expects ${db} to be set, e.g. export db=orclcdb
echo "================================================================================"
echo "++++++ Saved Configuration"
ls -l /tmp/*.conf
cat /tmp/services.conf
cat /tmp/instance.conf
echo "================================================================================"
echo "++++++ Relocate Configuration"
export svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
export oldinst=`head -1 /tmp/instance.conf`
export newinst=`tail -1 /tmp/instance.conf`
env|egrep 'svc|inst'|sort
echo "================================================================================"
srvctl status database -d ${db} -v
IFS=","
for s in ${svc}
do
echo "srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}"
done
exit

 

relocate_service.sh


#!/bin/sh
# expects ${db} to be set, e.g. export db=orclcdb
echo "================================================================================"
echo "++++++ Saved Configuration"
ls -l /tmp/*.conf
cat /tmp/services.conf
cat /tmp/instance.conf
echo "================================================================================"
echo "++++++ Relocate Configuration"
export svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
export oldinst=`head -1 /tmp/instance.conf`
export newinst=`tail -1 /tmp/instance.conf`
env|egrep 'svc|inst'|sort
echo "================================================================================"
set -x
srvctl status database -d ${db} -v
IFS=","
for s in ${svc}
do
srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}
done
srvctl status database -d ${db} -v
exit

 


Exadata X5-2L deployment & Challenges

Syed Jaffar - Sun, 2017-11-05 07:01
A brand new Exadata X5-2L eighth rack (I know the latest is X7 now, but this was for a POC, so no worries) was recently deployed at a customer for an Oracle EBS Exadata migration POC. It wasn't the easy walk in the park I initially presumed. Some challenges (network, configuration) were thrown at us during the migration, but we happily overcame them and completed the installation and the EBS database migration.

So, I am going to share yet another Exadata bare metal deployment story, explaining the challenges I faced and how they were fixed.

Issue 1) DB network cable issues:
After successful execution of elasticConfig, all the Exadata factory IP addresses were set to client IPs. Though the management network was accessible from outside, the client network was not. When we checked with the network team about enabling the ports on the corporate switch, they confirmed that the ports were enabled, but the connection showed as not active, and they asked us to investigate the network cables connected to the DB nodes. When we verified the network cable ports, we didn't find any lights flashing, and after an extensive investigation (switch ports, SFPs on Exadata and the corporate switch, cable status), it was found that the cable pin direction was not properly connected. We also found that the network bonding interfaces (eth1 and eth2) were down, confirmed with the ethtool eth1 command. After fixing the cables and bringing up the interfaces (ifup eth1 & eth2), we could see that the cables were connected properly and the lights on the ports were on.

$ ethtool eth1 (output shows the interface was not connected)
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: Unknown!
        Duplex: Unknown! (255)

        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: no

Issue 2) Wrong Netmask selection for client network:
After fixing the cable issues, we continued with the onecommand execution. The validation failed because of a different netmask for the client network (under the cluster information section). The customer unfortunately made a mistake in the client network netmask selection for the cluster settings, so the client netmask value differed between the client and cluster sections. This was fixed by modifying the netmask value in the ifcfg-bondeth0 file (/etc/sysconfig/network-scripts) and restarting the network services.

Issue 3) Failed eight Rack configuration (rack type and disk size):
Since the system was delivered somewhere around the end of August 2015, no one knew exactly the disk size and rack model. The BOQ (bill of quantity) for the order only shows X5-2 HC storage. So, the wrong Exadata rack and disk size were selected in OEDA: 8TB instead of 4TB disks, and a fixed eighth rack instead of an elastic configuration. This was fixed by rerunning OEDA with the correct options.

Issue 4) Cell server IP issues:
Another obstacle was faced while doing the cell connectivity step (part of onecommand). The cell server IPs were not modified by elasticConfig. Fortunately, I found my friend's blog on this topic and quickly fixed the issue. This is why I like to blog about all the technical issues; who knows, it could solve someone's pain.

http://blog.umairmansoob.com/exadata-deployment-error-cell-02559/

Issue 5) SCAN Listener configuration:
Cluster validation failed due to inconsistent values for the SCAN name. During the investigation of the earlier issues, the private, public & SCAN IPs had been put in the /etc/hosts file, and this caused the failure while configuring LISTENER_SCAN2 and LISTENER_SCAN3. This was fairly understandable: it happened because of the 3 SCAN entries in the /etc/hosts file. A quick google about the issue turned up the following blog, which helped me fix it:

https://learnwithme11g.wordpress.com/2010/09/03/how-to-add-scan-listener-in-11gr2-2/

Finally, I managed to deploy the Exadata successfully and perform the Oracle EBS database migration. No doubt, this experience really made me stronger in networking and other areas. So, every challenge comes with an opportunity to learn.

I thank those individuals who write blogs and share their experience to help the Oracle community.

There is still one open issue yet to be resolved: slow sqlplus and DB startup. I presume this is due to heavy resource utilization on the server. The mystery is yet to be solved. Stay tuned for more updates.






12.1 Improved Service Failover

Michael Dinh - Sun, 2017-11-05 06:51

11gR2 Database Services and Instance Shutdown

The thought of having to manually relocate dozens of services was not very appealing.

As it turns out, there is no need to manually relocate services.

srvctl stop instance -db orclcdb -instance orclcdb1 -failover will do the trick.

Comparing the 2 commands, 12c is a lot clearer / cleaner.

12c:
srvctl add service -db orclcdb -service DBA_TEST -preferred orclcdb1 -available orclcdb2 -failovertype SELECT -tafpolicy BASIC

11g:
srvctl add service -d orclcdb -s DBA_TEST -P BASIC -e SELECT -r orclcdb1 -a orclcdb2

DEMO:

$ srvctl config service -d orclcdb -s DBA_TEST|egrep -i 'Service name|Preferred instances|Available instances|failover'

Service name: DBA_TEST
Failover type: SELECT
Failover method:
TAF failover retries:
TAF failover delay:
Preferred instances: orclcdb1
Available instances: orclcdb2

$ srvctl status database -d orclcdb

Instance orclcdb1 is running on node racnode-dc1-1
Instance orclcdb2 is running on node racnode-dc1-2

$ sqlplus mdinh/mdinh@dbatest @t.sql

SQL*Plus: Release 12.1.0.2.0 Production on Sun Nov 5 04:17:56 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Nov 05 2017 04:15:29 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


   INST_ID STARTUP_TIME
---------- -----------------------------
         1 05-NOV-2017 04:12:55
         2 05-NOV-2017 04:14:49


   INST_ID FAILOVER_TYPE FAILOVER_M FAI
---------- ------------- ---------- ---
         1 NONE          NONE       NO
         1 SELECT        BASIC      NO
         2 NONE          NONE       NO

04:17:57 MDINH @ dbatest:>host
[oracle@racnode-dc1-1 ~]$ srvctl stop instance -db orclcdb -instance orclcdb1 -failover;date
Sun Nov 5 04:18:34 CET 2017
[oracle@racnode-dc1-1 ~]$ exit
exit

04:18:37 MDINH @ dbatest:>@t.sql

   INST_ID STARTUP_TIME
---------- -----------------------------
         2 05-NOV-2017 04:14:49


   INST_ID FAILOVER_TYPE FAILOVER_M FAI
---------- ------------- ---------- ---
         2 SELECT        BASIC      YES

04:18:40 MDINH @ dbatest:>
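t.sql itself is not listed in the post; a plausible reconstruction matching the columns shown above would be:

-- instance startup times
select inst_id, startup_time from gv$instance order by inst_id;
-- failover attributes of this user's sessions
select inst_id, failover_type, failover_method, failed_over
from gv$session where username = 'MDINH' order by inst_id;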

Determine whether a given string is numeric, alphanumeric, or hexadecimal

Tom Kyte - Sat, 2017-11-04 21:06
Dear Team, May I know how we can determine the below for a string: 1. If it's numeric. 2. Alphanumeric. 3. Hexadecimal (e.g. a MAC address). Regards Kalyana Chakravarthy
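One common approach (not the actual AskTom answer) is REGEXP_LIKE with the patterns tested from most to least specific, since every numeric string is also valid hex and every hex string is also alphanumeric:

select str,
       case
         when regexp_like(str,'^[0-9]+$') then 'numeric'
         when regexp_like(str,'^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$') then 'hexadecimal (MAC address)'
         when regexp_like(str,'^[0-9A-Fa-f]+$') then 'hexadecimal'
         when regexp_like(str,'^[0-9A-Za-z]+$') then 'alphanumeric'
       end as classification
from   (select '00:1B:44:11:3A:B7' as str from dual);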
Categories: DBA Blogs

Microservices: Ubuntu Core and snap - a minimal linux

Dietrich Schroff - Sat, 2017-11-04 15:54
After some first steps with coreOS
I read about Ubuntu Core, which also targets a minimal Linux. Here is an architecture overview from Ubuntu:
Ubuntu provides an image for KVM (the link points to an installation howto), but I want to stay with VirtualBox. I followed this tutorial:
wget http://releases.ubuntu.com/ubuntu-core/16/ubuntu-core-16-amd64.img.xz
unxz ubuntu-core-16-amd64.img.xz
VBoxManage convertdd ubuntu-core-16-amd64.img ubuntu-core-16-amd64.vdi --format VDI

VBoxManage modifyhd ubuntu-core-16-amd64.vdi --resize 20480

And then configure the VirtualBox VM:
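The screenshots of the VM settings are not reproduced here; roughly the same configuration can be scripted (a sketch; the VM name, memory size and host adapter name are my assumptions):

VBoxManage createvm --name ubuntu-core --ostype Ubuntu_64 --register
VBoxManage modifyvm ubuntu-core --memory 1024 --nic1 bridged --bridgeadapter1 eth0
VBoxManage storagectl ubuntu-core --name SATA --add sata --controller IntelAhci
VBoxManage storageattach ubuntu-core --storagectl SATA --port 0 --device 0 --type hdd --medium ubuntu-core-16-amd64.vdi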




And Go!




To pass this step you have to create an account at https://login.ubuntu.com
 
 At login.ubuntu.com you have to provide your public ssh keyfile:

and then a login which does not work:


and the login via ssh:

schroff@zerberus:~$ ssh d-schroff@192.168.178.31
The authenticity of host '192.168.178.31 (192.168.178.31)' can't be established.
ECDSA key fingerprint is SHA256:yKE/g7JYnlED6jOF/8gsUeVrdkuEU/zytFdlCcVzNEs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.178.31' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to Snappy Ubuntu Core, a transactionally updated Ubuntu.

 * See https://ubuntu.com/snappy

It's a brave new world here in Snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snap --help'
for app installation and transactional updates.

d-schroff@localhost:~$
First impression: getting an ssh login is much easier than configuring ssh with Ignition on coreOS.

GoldenGate Naming Convention P02

Michael Dinh - Sat, 2017-11-04 08:05

GoldenGate Naming Convention P01

Bidirectional replication:

E_10G (write to aa) | P_10G (read from aa, write to ab) | R_10G (read from ab, write to 12C DB)
E_12C (write to bb) | P_12C (read from bb, write to ba) | R_12C (read from ba, write to 10G DB)

Source (10G DB)     | Target (12C DB)
-------------------------------------
E_10G [>aa]         | E_12C [>bb]
P_10G [>ab]         | P_12C [>ba]
R_12C [<ba]         | R_10G [<ab]

The first method: create the same process name (suffix) for extract, pump, and replicat.

Using the example above:
to stop 10G replication, stop *10G at source and target;
to stop 12C replication, stop *12C at source and target.

Another method.

Source (10G DB)     | Target (12C DB)
-------------------------------------
E_10G [>aa]         | E_12C [>bb]
P_10G [>ab]         | P_12C [>ba]
R_10G [<ba]         | R_12C [<ab]

Process names were created based on DB versions.

Using the example above:
stop 10G replication, stop E_10G,P_10G at source and stop R_12C at target.
stop 12C replication, stop E_12C,P_12C at source and stop R_10G at target.

Of the 2 methods, which do you prefer?

Splitting extracts:

Source      | Target
-------------------------------------
E_USR [>aa] | R_JOE [<az]
P_JOE [>az] | R_SUE [<ay]
P_SUE [>ay] | R_AMY [<ax]
P_AMY [>ax] |

How does one know where the source is from? You don't, unless you comment the parameter files.

Example: RAC environment where VIP is used for PUMP

EXTRACT e_hawk
-- CHECKPARAMS
-- ADD EXTRACT e_hawk, INTEGRATED TRANLOG, BEGIN NOW
-- ADD EXTTRAIL ./dirdat/aa EXTRACT e_hawk, MEGABYTES 500

EXTRACT p_hawk
-- CHECKPARAMS
-- Target: host03/04
-- ADD EXTRACT p_hawk, EXTTRAILSOURCE ./dirdat/aa
-- ADD RMTTRAIL ./dirdat/ab, EXTRACT p_hawk, MEGABYTES 500
RMTHOST OGG_VIP MGRPORT 7801, TCPBUFSIZE 1048576, TCPFLUSHBYTES 1048576 

REPLICAT r_hawk
-- CHECKPARAMS
-- Source: host01/02
-- REGISTER REPLICAT r_hawk DATABASE
-- ADD REPLICAT r_hawk, INTEGRATED, EXTTRAIL ./dirdat/ab

In conclusion, there is really no best practice, but some thought and planning do help.


REG_EXP is a problem

Tom Kyte - Sat, 2017-11-04 02:46
WHERE email IN ( select regexp_substr('one@gmail.com,two@gamil.com,three@gmail.com,four@gmail.com','[^,]+', 1, level) from dual connect by regexp_substr('one@gmail.com,two@gamil.com,three@gmail.com,four@gmail.com', '[^,]+', 1, level) is not nul...
Categories: DBA Blogs

I need to replace this query with substr and instr

Tom Kyte - Sat, 2017-11-04 02:46
SELECT REGEXP_SUBSTR(val_PC, '[^, ]+', 1, LEVEL) FROM DUAL CONNECT BY REGEXP_SUBSTR(val_PC, '[^, ]+', 1, LEVEL) IS NOT NULL
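Assuming the list is only comma-delimited (the regexp above also treats a space as a delimiter), a classic SUBSTR/INSTR rewrite pads the string with commas on both sides (a sketch with a sample value standing in for val_PC):

WITH t AS (SELECT 'a,b,c' AS val_pc FROM dual)
SELECT SUBSTR(','||val_pc||',',
              INSTR(','||val_pc||',', ',', 1, LEVEL) + 1,
              INSTR(','||val_pc||',', ',', 1, LEVEL + 1)
                - INSTR(','||val_pc||',', ',', 1, LEVEL) - 1) AS token
FROM   t
CONNECT BY LEVEL <= LENGTH(val_pc) - LENGTH(REPLACE(val_pc, ',')) + 1;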
Categories: DBA Blogs

Multiple query question

Tom Kyte - Sat, 2017-11-04 02:46
I have an Argos report that takes a query (or queries) and outputs a report based on those entries. My problem is there are 5 possible queries. I'm trying to use and/or (I already tried the CASE function) to pull data based on their entry. The queries are; Recei...
Categories: DBA Blogs
