Feed aggregator

Hybrid Histogram

Tom Kyte - Fri, 2016-08-19 20:26
Team, Was reading about Hybrid histogram from documentation http://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm#TGSQL372 demo@ORA12C> create table t as select * from all_objects; Table created. demo@ORA12C> column ow...
Categories: DBA Blogs

XML Parse error due to escape characters

Tom Kyte - Fri, 2016-08-19 20:26
Hi Tom, We have a process wherein we try to consume the XML and store it in a table as CLOB . These XMLs which we are getting are having the escape characters like '&' or '<' in the text part of the XML. When we try to parse the XML using XMLt...
Categories: DBA Blogs

insert only value of rows which has data from tabular form into database

Tom Kyte - Fri, 2016-08-19 20:26
Hello can you please help me how to create a PL/SQL process to insert only rows which has values from tabular form into database,not those rows which does not have value, below is the insert PL/SQL process that i created, but it insert empty rows al...
Categories: DBA Blogs

Bulk collect workaround for memory bug

Bobby Durrett's DBA Blog - Fri, 2016-08-19 16:42

A coworker passed a test script on to me that was failing with the following memory error:

ORA-04030: out of process memory when trying to allocate 4088 bytes (PLS CGA hp,pdzgM64_New_Link)

The error occurred when initializing a PL/SQL table variable with 7500 objects. Here is my sanitized version of the code:

CREATE OR REPLACE TYPE ARRAY_ELEMENT
AS
  OBJECT
  (
    n1 NUMBER,
    n2 NUMBER,
    n3 NUMBER,
    n4 NUMBER );
/

CREATE OR REPLACE TYPE MY_ARRAY
IS
  TABLE OF ARRAY_ELEMENT;
/

DECLARE
  MY_LIST MY_ARRAY;
BEGIN
  MY_LIST := MY_ARRAY(
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234),
...
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234)
  );

END;
/

The real code had different meaningful constants for each entry in the table. Here is the error:

8004      ARRAY_ELEMENT(1234,5678,1314,245234)
8005    );
8006  
8007  END;
8008  /
DECLARE
*
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 4088 bytes 
(PLS CGA hp,pdzgM64_New_Link)


Elapsed: 00:02:51.31

I wrapped the error code manually so it would fit on the page.

The solution looks like this:

create table MY_OBJECTS
  (
    o ARRAY_ELEMENT );

DECLARE
  MY_LIST MY_ARRAY;
BEGIN
 MY_LIST := MY_ARRAY( );
  
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
...
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));

 commit;
  
 SELECT o
   BULK COLLECT INTO MY_LIST
   FROM MY_OBJECTS; 

END;
/

Here is what the successful run looks like:

8004    insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
8005    insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
8006  
8007    commit;
8008  
8009    SELECT o
8010      BULK COLLECT INTO MY_LIST
8011      FROM MY_OBJECTS;
8012  
8013  END;
8014  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:21.36
SQL> 

There is an Oracle document about this bug:

ORA-4030 (PLSQL Opt Pool,pdziM01_Create: New Set), ORA-4030 (PLS CGA hp,pdzgM64_New_Link) (Doc ID 1551115.1)

It doesn’t list bulk collect as a workaround. My workaround may only be useful in very specific cases, but I thought it was worth sharing.
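
If you adapt this workaround, one quick sanity check is that the collection size after the BULK COLLECT matches the number of rows inserted; a minimal sketch (the DBMS_OUTPUT call is illustrative and not part of the original test scripts):

 SELECT o
   BULK COLLECT INTO MY_LIST
   FROM MY_OBJECTS;

 DBMS_OUTPUT.PUT_LINE('Loaded '||MY_LIST.COUNT||' elements');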

Here are my scripts and their logs: zip

This is on HP-UX Itanium Oracle 11.2.0.3.

Bobby

Categories: DBA Blogs

Digging into ADD SCHEMATRANDATA … what is #GoldenGate doing?

DBASolved - Fri, 2016-08-19 10:30

In my post on the differences between ADD TRANDATA and ADD SCHEMATRANDATA, I highlighted the differences between the two ways of adding supplemental logging to tables. One of the things I pointed out was that ADD SCHEMATRANDATA doesn’t add any supplemental log groups (SLGs). Without SLGs, how does ADD SCHEMATRANDATA work? That was the question I was left with, so I started digging around to find the answer, and I think you may be interested in this as well.

Tracing

In order to figure out what is actually going on within the database when ADD SCHEMATRANDATA is run, I had to trace the GoldenGate session. To do this, I first had to log in to the database from GGSCI using "dblogin useridalias ggate". After logging in, I had to identify the session and then set up tracing. This was done with the following scripts:

-- SQL to identify the GGate session
select 'exec dbms_monitor.session_trace_enable(session_id=>'||sid||', serial_num=>'||serial#||', binds=>true, waits=>true)' 
from v$session where username = 'GGATE';

-- Start tracing of the session
exec dbms_monitor.session_trace_enable(session_id=>156, serial_num=>15799, binds=>true, waits=>true);
-- Disable tracing when done
exec dbms_monitor.session_trace_disable(session_id=>156, serial_num=>15799);

Now with tracing enabled for the session, I’m able to trace the GoldenGate session from within GGSCI. At this time, I’m able to run DELETE SCHEMATRANDATA and ADD SCHEMATRANDATA against the schema I want to add the logging to.
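
For reference, that GGSCI sequence looks roughly like this (assuming the SOE schema used later in this post and the ggate alias from above):

GGSCI> dblogin useridalias ggate
GGSCI> delete schematrandata soe
GGSCI> add schematrandata soe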

As the last step of tracing, I need to disable the tracing (script above) for the GoldenGate user and identify the trace file. By default the trace file should be in $ORACLE_BASE/diag/rdbms/<db>/<db>/trace. In this directory, you need to identify the trace files for the session that was traced.

oracle >ls -ltr src12c_ora_23267.*
-rw-r-----. 1 oracle oracle   2788 Aug 15 16:09 src12c_ora_23267.trm
-rw-r-----. 1 oracle oracle 300531 Aug 15 16:09 src12c_ora_23267.trc

Once you know the trace file, you can run TKPROF against it.

oracle > tkprof src12c_ora_23267.trc src12c_ora_23267.tkprofs

This generates a tkprof output file that shows specifics for the session, such as the SQL that was run. This is what I’m most interested in: what SQL is run when ADD SCHEMATRANDATA is called.

Quick Review of TKProf file

I’m not going to go into all the details that are in a tkprof generated file, but I took a look through the file trying to figure out what is going on when running the ADD SCHEMATRANDATA. As I was searching the file, I found a reference to LOGMNR$ALWAYS_SUPLOG_COLUMNS.

If you have kept up with the versions of Oracle GoldenGate, you will know that this is a reference to LogMiner and that Oracle is integrating the Oracle GoldenGate processes with it. This is also a hint about where to look, namely Oracle Streams. As I continued to look through the tkprof file, I found a few references to a Streams package, DBMS_CAPTURE_ADM, along with SQL statements making calls to views like STREAMS$_PREPARE_DDL.

BEGIN sys.dbms_capture_adm.PREPARE_SCHEMA_INSTANTIATION('soe','ALLKEYS_OFF');
  END;
BEGIN sys.dbms_capture_adm.ABORT_SCHEMA_INSTANTIATION('soe'); END;

At this point, it is safe to say that Oracle is merging Oracle GoldenGate into Oracle Streams.

Review of Packages

Note: These are Oracle packages that are wrapped (encrypted) in the database. You can use UnwrapIt to view them if needed. The output here is only for teaching purposes.

Now that I’m armed with what package the ADD SCHEMATRANDATA is calling, I can drill into what exactly is happening.

In taking a look at the DBMS_CAPTURE_ADM package, I wanted to look at the PREPARE_SCHEMA_INSTANTIATION procedure. I can quickly see that the procedure takes the schema_name and that defaults will be used for supplemental logging and container.

PROCEDURE PREPARE_SCHEMA_INSTANTIATION( 
 SCHEMA_NAME               IN VARCHAR2, 
 SUPPLEMENTAL_LOGGING      IN VARCHAR2 DEFAULT 'KEYS', 
 CONTAINER                 IN VARCHAR2 DEFAULT 'CURRENT') 
 IS 
 BEGIN 
 DBMS_CAPTURE_ADM_IVK.PREPARE_SCHEMA_INST_IVK( 
 SCHEMA_NAME => SCHEMA_NAME,  
 SUPPLEMENTAL_LOGGING => SUPPLEMENTAL_LOGGING, 
 CONTAINER => CONTAINER); 
 END;

After the schema is passed to the procedure, this procedure passes all three of the parameters to another package and procedure for execution. This package is DBMS_CAPTURE_ADM_IVK.PREPARE_SCHEMA_INST_IVK.

PROCEDURE PREPARE_SCHEMA_INST_IVK(
SCHEMA_NAME               IN VARCHAR2,
SUPPLEMENTAL_LOGGING      IN VARCHAR2 DEFAULT 'KEYS',
CONTAINER                 IN VARCHAR2 DEFAULT 'CURRENT')
IS
SUPP_LOG_LEVEL      BINARY_INTEGER;    
SYNCHRONIZATION     VARCHAR2(4) := 'LOCK';
CANON_CONTAINER     DBMS_ID;
BEGIN
  DBMS_LOGREP_UTIL.WRITE_TRACE('prepare_schema_inst_ivk()+ container: ' ||
  CONTAINER, DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
  COMMIT;

  SUPP_LOG_LEVEL := DBMS_CAPTURE_ADM_IVK.SUPP_LOG_LEVEL_CHECK(SUPPLEMENTAL_LOGGING, 
  IF DBMS_XSTREAM_GG_ADM.SYNCHRONIZATION IS NOT NULL 
  THEN
         SYNCHRONIZATION := UPPER(DBMS_XSTREAM_GG_ADM.SYNCHRONIZATION);
  END IF;

  IF (SYNCHRONIZATION != 'NONE' AND SYNCHRONIZATION != 'LOCK' AND
           SYNCHRONIZATION != 'WAIT') 
  THEN
           DBMS_LOGREP_UTIL.RAISE_SYSTEM_ERROR(
           DBMS_STREAMS_ADM.INVALID_PARAMETER_NUM,
           DBMS_LOGREP_UTIL.GET_PROC_USE_CONTEXT, 'SYNCHRONIZATION'); 
  END IF;

  DBMS_UTILITY.CANONICALIZE(CONTAINER, CANON_CONTAINER, 30);   
  IF CANON_CONTAINER = 'CURRENT' 
  THEN
     DBMS_LOGREP_UTIL.WRITE_TRACE('prepare schema objects for current' ||
            ' container: ', DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
     DBMS_CAPTURE_ADM_INTERNAL.PREPARE_SCHEMA_INSTANTIATION(
        SCHEMA_NAME, SYS_CONTEXT('USERENV','_USER'), SUPP_LOG_LEVEL,
        SYNCHRONIZATION, CANON_CONTAINER);
  ELSE
     DBMS_CAPTURE_ADM_INTERNAL.PREPARE_SCHEMA_INST_CDB(
            SCHEMA_NAME, CANON_CONTAINER,
            SYS_CONTEXT('USERENV','_USER'), SUPP_LOG_LEVEL,
           SYNCHRONIZATION);
  END IF;

  COMMIT;
  END;

In this procedure, you will notice that the input is all three of the parameters passed from the PREPARE_SCHEMA_INSTANTIATION procedure. The procedure then proceeds to check the supplemental logging and attempts to sync the tables via the DBMS_XSTREAM_GG_ADM package. After the tables have been synced, the procedure calls another package and procedure (DBMS_CAPTURE_ADM_INTERNAL.PREPARE_SCHEMA_INSTANTIATION) to prepare the tables with the required supplemental logging level.

This procedure takes in five different parameters, two of which have defaults. After the procedure sets up tracing, it checks to see if the calling procedure is specifying GoldenGate. Once everything is confirmed and synchronization is complete, the procedure grabs the CURRENT_SCN, waits for any in-flight transactions, and prepares the schema tables before exiting the package.

PROCEDURE PREPARE_SCHEMA_INSTANTIATION(
SCHEMA_NAME               IN VARCHAR2,
CANON_INVOKER             IN VARCHAR2,
SUPP_LOG_LEVEL            IN BINARY_INTEGER,
SYNCHRONIZATION           IN VARCHAR2 DEFAULT 'LOCK',
CONTAINER                 IN VARCHAR2 DEFAULT 'CURRENT')
IS
CANON_SCHEMA  DBMS_ID;
IDX           NUMBER :=0;
NAME_ARRAY    DBMS_UTILITY.NAME_ARRAY;
OWNER_ARRAY   DBMS_UTILITY.NAME_ARRAY;
CURRENT_SCN   NUMBER;
WFIT          BOOLEAN := FALSE;
RUN_TKLRWT1   BOOLEAN := FALSE;
SUPPLOG       BINARY_INTEGER := SUPP_LOG_LEVEL;
TEST_EVENT_LVL   BINARY_INTEGER := 0;
BEGIN
 DBMS_LOGREP_UTIL.WRITE_TRACE(
 'dbms_capture_adm_internal.prepare_schema_instantiation()+', DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
 DBMS_LOGREP_UTIL.WRITE_TRACE('schema_name='||SCHEMA_NAME,   DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
  DBMS_LOGREP_UTIL.WRITE_TRACE('supp_log_level='||SUPP_LOG_LEVEL, DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH); 
 DBMS_UTILITY.CANONICALIZE(SCHEMA_NAME, CANON_SCHEMA, 30);  

 IF ((SUPP_LOG_LEVEL = DBMS_STREAMS_DECL.SUPPLOG_ALLKEYS_ON) OR (SUPP_LOG_LEVEL = DBMS_STREAMS_DECL.SUPPLOG_ALLKEYS_OFF)) 
 THEN
    STORE_PREPARE_INFO(CANON_SCHEMA, SUPP_LOG_LEVEL);
    RETURN;
 END IF;
    DBMS_LOGREP_UTIL.WRITE_TRACE('dbms_capture_adm_internal.prepare_schema_instantiation()+ before read_ev', DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
    DBMS_SYSTEM.READ_EV(DBMS_LOGREP_UTIL.EXT_TRACE_EV, TEST_EVENT_LVL);

    IF (DBMS_XSTREAM_GG_ADM.IS_GOLDENGATE AND TEST_EVENT_LVL = 0) 
    THEN
     SUPPLOG := 0;
    END IF;
   DBMS_LOGREP_UTIL.WRITE_TRACE('dbms_capture_adm_internal.prepare_schema_instantiation()+ before prepare_ddl',DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
    PREPARE_DDL(CANON_SCHEMA, SUPPLOG);
    DBMS_LOGREP_UTIL.WRITE_TRACE('dbms_capture_adm_internal.prepare_schema_instantiation()+ before prepare_schema_tables',DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
	RUN_TKLRWT1 := RUN_WFIT_TEST;

    IF (UPPER(SYNCHRONIZATION) = 'NONE') 
    THEN
		PREPARE_SCHEMA_TABLES(CANON_SCHEMA, FALSE, CONTAINER);
    ELSIF (UPPER(SYNCHRONIZATION) = 'LOCK') 
    THEN
		PREPARE_SCHEMA_TABLES(CANON_SCHEMA, TRUE, CONTAINER);
    ELSIF (UPPER(SYNCHRONIZATION) = 'WAIT') 
    THEN
		WFIT := TRUE;
 		CURRENT_SCN  := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
        DBMS_CAPTURE_ADM_INTERNAL.WAIT_FOR_INFLIGHT_TXN(CURRENT_SCN, 'Prepare_schema_instantiation');
        PREPARE_SCHEMA_TABLES(CANON_SCHEMA, FALSE, CONTAINER);
    END IF;
        DONE_WFIT_TEST(WFIT, RUN_TKLRWT1);
    END;

Up to this point, the last three packages have been preparing the tables. It seems like a lot of preparing just to get tables set up for supplemental logging. The last call in the previous package invoked the procedure that prepares the tables. When I look at this procedure, I see that there is a call to a PREPARE_TABLE_INST procedure (I’m getting a bit lazy on copying the procedures – I don’t want to include too much since this is Oracle’s code).

FOR I IN 1..IDX LOOP
BEGIN
   DBMS_LOGREP_UTIL.WRITE_TRACE('prepare_schema_tables():' || CANON_OWNER_ARRAY(I) || '.' || CANON_NAME_ARRAY(I), DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
   PREPARE_TABLE_INST(CANON_OWNER_ARRAY(I), CANON_NAME_ARRAY(I),DBMS_STREAMS_DECL.SUPPLOG_NONE, LOCKING, CONTAINER);
EXCEPTION WHEN OTHERS THEN
   DBMS_LOGREP_UTIL.DUMP_TRACE('error:dbms_capture_adm_internal.prepare_table_inst('||'owner='||CANON_OWNER_ARRAY(I)||' name='||CANON_NAME_ARRAY(I));
END;

Now, when I go and look at the PREPARE_TABLE_INST procedure, I see that it calls an external C routine named "knlcpreptabinst" to set the supplemental logging on the tables associated with the schema.

PROCEDURE PREPARE_TABLE_INST(
CANON_OWNER                 IN VARCHAR2,
CANON_TABLE_NAME            IN VARCHAR2,
SUPPLEMENTAL_LOGGING_LEVEL  IN BINARY_INTEGER,
LOCKING                     IN BOOLEAN,
CONTAINER                   IN VARCHAR2) 
IS
EXTERNAL
 NAME "knlcpreptabinst"
 LIBRARY DBMS_LOGREP_LIB
 WITH CONTEXT
 PARAMETERS(CONTEXT,    
      CANON_OWNER       OCISTRING, CANON_OWNER       INDICATOR SB2,
      CANON_TABLE_NAME  OCISTRING, CANON_TABLE_NAME  INDICATOR SB2,
      SUPPLEMENTAL_LOGGING_LEVEL UB4,
      LOCKING           UB2,       LOCKING           INDICATOR SB2,
      CONTAINER         OCISTRING, CONTAINER         INDICATOR SB2)
LANGUAGE C;

Without drilling down into the C code, I cannot see exactly how the supplemental logging is added or where exactly it is stored in the database. I can only assume (we know what that means … lol) that it is in the data dictionary somewhere.

Looking for tables or views that may shed some light on this as well, I found LOGMNR$SCHEMA_ALLKEY_SUPLOG, which shows the schema, whether all keys are in supplemental log mode, and whether novalidate PKs are allowed. The following query is what I used to extract information about the SOE schema:

select * from LOGMNR$SCHEMA_ALLKEY_SUPLOG
where allkey_suplog = 'YES';


Output is as follows:

SCHEMA_NAME     ALL ALL
--------------- --- ---
SOE             YES NO

Drilling down further, after a friend of mine pointed out a function to me (follow him on Twitter -> @resetlogs), you can get down to the table level of supplemental logging when using ADD SCHEMATRANDATA. There is a LogMiner function that has to be called from SQL to pull the correct information. This function is named similarly to the table I referenced above … LOGMNR$ALWAYS_SUPLOG_COLUMNS.

This function takes two parameters. The first is the schema that holds the objects and the second is the table name. So in the following example, I can see that the ORDERS table of the SOE schema has supplemental logging added.

select * from table(logmnr$always_suplog_columns('SOE','ORDERS'));

OWNER           TABLE_NAME                     COLUMN_NAME                        INTCOL     SEGCOL    USERCOL
--------------- ------------------------------ ------------------------------ ---------- ---------- ----------
SOE             ORDERS                         ORDER_ID                                1          1          1
SOE             ORDERS                         ORDER_TOTAL                             6          6          6
SOE             ORDERS                         COST_OF_DELIVERY                       11         11         11
SOE             ORDERS                         DELIVERY_ADDRESS_ID                    13         13         13
SOE             ORDERS                         ORDER_DATE                              2          2          2
SOE             ORDERS                         CUSTOMER_CLASS                         14         14         14
SOE             ORDERS                         CUSTOMER_ID                             4          4          4
SOE             ORDERS                         ORDER_STATUS                            5          5          5
SOE             ORDERS                         PROMOTION_ID                            8          8          8
SOE             ORDERS                         ORDER_MODE                              3          3          3
SOE             ORDERS                         SALES_REP_ID                            7          7          7
SOE             ORDERS                         WAREHOUSE_ID                            9          9          9
SOE             ORDERS                         DELIVERY_TYPE                          10         10         10
SOE             ORDERS                         WAIT_TILL_ALL_AVAILABLE                12         12         12
SOE             ORDERS                         CARD_ID                                15         15         15
SOE             ORDERS                         INVOICE_ADDRESS_ID                     16         16         16


I know this has been a long post, but hopefully I’ve been able to show how the ADD SCHEMATRANDATA command within GGSCI works and where you can see if supplemental logging is turned on for the selected schema. As I was trying to dig to the root of this, I found it interesting that so many packages are involved in setting the supplemental logging on a schema and its tables, while identifying whether it is enabled is not as easy as it is with ADD TRANDATA, where the tables are easily identified and you can quickly see that the supplemental log groups have been added.

As Oracle GoldenGate for Oracle moves to a more integrated approach to replication, I think more items will be tied to the log miner and streams architecture.

Enjoy!!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

New! Enterprise Mobility for Dummies, 2nd Edition

WebCenter Team - Fri, 2016-08-19 10:00

Get up to speed on the latest mobile trends, and gain a better understanding of the Oracle Mobile Platform. Download this NEW edition of Enterprise Mobility for Dummies.

  • Gain a solid understanding of enterprise mobility and latest trends
  • Mobile clients, services, backend integrations, security, API First, microservices, and more
  • Explore what the Oracle Mobile Platform has to offer

You know that mobile is eating the world, right? So you'd better get your seat at the dinner table. Download it now!

Reducing Oracle Docker images size by using a central repo

Marcelo Ochoa - Fri, 2016-08-19 09:42
Oracle provides a way to build several Docker images from their official GitHub repo. But IMO these builds have two big problems:

  • unlike the usual Docker way, you have to manually download the binary installation files
  • the size of the images is big due to the use of the Docker COPY command (ADD or COPY section)

To mitigate these two factors I tested the local repo idea: a new Docker image which exposes all Oracle binary files over HTTP within your data center without breaking the license terms.
You can build a local repository image, adding or removing Oracle binary packages as needed, for example:
FROM gliderlabs/alpine:3.4
# Maintainer
# ----------
MAINTAINER Marcelo Ochoa
RUN apk --no-cache add nginx && mkdir /run/nginx
# Oracle Linux 64Bits 12.1.0.2
COPY linuxamd64_12102_database_1of2.zip /var/lib/nginx/html
COPY linuxamd64_12102_database_2of2.zip /var/lib/nginx/html
# Oracle Linux XE 11.2.0
COPY oracle-xe-11.2.0-1.0.x86_64.rpm.zip /var/lib/nginx/html
# Oracle JDK 8b101
COPY server-jre-8u101-linux-x64.tar.gz /var/lib/nginx/html
# Oracle WebLogic 12.2.1.1.0
COPY fmw_12.2.1.1.0_wls_Disk1_1of1.zip /var/lib/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The files referenced in the COPY lines above should be downloaded first (manually from OTN) and placed in the same directory as your local repo Dockerfile, for example:
mochoa@localhost:~/jdeveloper/mywork/docker/localrepo$ ls -l
total 3805068
-rwxr-xr-x 1 mochoa mochoa         48 ago 12 10:23 buildDockerImage.sh
-rw-rw-r-- 1 mochoa mochoa        493 ago 15 18:19 Dockerfile
-rw-rw-r-- 1 mochoa mochoa  832500826 ago 15 18:01 fmw_12.2.1.1.0_wls_Disk1_1of1.zip
-rw-r--r-- 1 mochoa mochoa 1673544724 jul 16 19:19 linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 mochoa mochoa 1014530602 jul 16 19:20 linuxamd64_12102_database_2of2.zip
-rw-rw-r-- 1 mochoa mochoa  315891481 jul 18 12:20 oracle-xe-11.2.0-1.0.x86_64.rpm.zip
-rwxr-xr-x 1 mochoa mochoa         97 ago 12 10:25 run-repo.sh
-rw-rw-r-- 1 mochoa mochoa   59874321 ago 15 17:32 server-jre-8u101-linux-x64.tar.gz
To build and run your local repo, use buildDockerImage.sh and run-repo.sh:
# docker build -t "localrepo:1.0.0" .
# docker run --name localrepo --hostname localrepo --detach=true --publish=8000:80 localrepo:1.0.0
echo "Local repo is at: http://$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' localrepo):80/"
Finally, if we use a modified version of the official Oracle Dockerfile:
mochoa@localhost:~/jdeveloper/mywork/docker/docker-images/OracleDatabase/dockerfiles$ git diff 11.2.0.2/Dockerfile.xe
diff --git a/OracleDatabase/dockerfiles/11.2.0.2/Dockerfile.xe b/OracleDatabase/dockerfiles/11.2.0.2/Dockerfile.xe
index 00ed28a..cade77c 100644
--- a/OracleDatabase/dockerfiles/11.2.0.2/Dockerfile.xe
+++ b/OracleDatabase/dockerfiles/11.2.0.2/Dockerfile.xe
@@ -50,7 +50,7 @@ ARG ORACLE_PWD

 # Copy binaries
 # -------------
-COPY $INSTALL_FILE_1 $CONFIG_RSP $INSTALL_DIR/
+COPY $CONFIG_RSP $INSTALL_DIR/

 # Update yum
 # ----------
@@ -61,8 +61,9 @@ WORKDIR $INSTALL_DIR

 # Install Oracle Express Edition
 # ------------------------------
-RUN unzip $INSTALL_FILE_1 && \
-    rm $INSTALL_FILE_1 &&    \

+RUN curl -sSL http://192.168.200.55:8000/$INSTALL_FILE_1 -o $INSTALL_FILE_1 && \
+    unzip $INSTALL_FILE_1 && \
+    rm -f $INSTALL_FILE_1 && \
     rpm -i Disk1/*.rpm &&    \
     sed -i -e "s|###ORACLE_PWD###|$ORACLE_PWD|g" $INSTALL_DIR/$CONFIG_RSP && \
     /etc/init.d/oracle-xe configure responseFile=$CONFIG_RSP
@@ -91,5 +92,6 @@ RUN echo "DEDICATED_THROUGH_BROKER_LISTENER=ON"  >> $ORACLE_HOME/network/admin/l
     echo "DIAG_ADR_ENABLED = off"  >> $ORACLE_HOME/network/admin/listener.ora;

 EXPOSE 1521 8080
+VOLUME ["/u01/app/oracle/oradata","/u01/app/oracle/fast_recovery_area"]

 CMD $ORACLE_BASE/runOracle.sh
If we build the Oracle XE Docker image using the original script, the size of the image is 2.84 GB; using the local repo and the modified Dockerfile the result is 2.524 GB. The difference is basically the size of the rpm.zip binary file. Obviously, if you add the size of the local repo to the size of the Oracle image the total is similar, but remember that the resulting Oracle XE image is spread across all of your data center nodes.
The difference is even bigger for the Oracle EE image, using a modified version of this Dockerfile:
mochoa@localhost:~/jdeveloper/mywork/docker/docker-images/OracleDatabase/dockerfiles$ git diff 12.1.0.2/Dockerfile.ee
diff --git a/OracleDatabase/dockerfiles/12.1.0.2/Dockerfile.ee b/OracleDatabase/dockerfiles/12.1.0.2/Dockerfile.ee
index fccc3c4..8b38153 100644
--- a/OracleDatabase/dockerfiles/12.1.0.2/Dockerfile.ee
+++ b/OracleDatabase/dockerfiles/12.1.0.2/Dockerfile.ee
@@ -48,7 +48,7 @@ ENV INSTALL_DIR=$ORACLE_BASE/install \

 # Copy binaries
 # -------------
-COPY $INSTALL_FILE_1 $INSTALL_FILE_2 $INSTALL_RSP $CONFIG_RSP $INSTALL_DIR/
+COPY $INSTALL_RSP $CONFIG_RSP $INSTALL_DIR/
 COPY $RUN_FILE $ORACLE_BASE/

 # Setup filesystem and oracle user
@@ -78,10 +78,12 @@ RUN sed -i -e "s|###ORACLE_EDITION###|EE|g" $INSTALL_DIR/$INSTALL_RSP &&
 # -------------------
 USER oracle

-RUN unzip $INSTALL_FILE_1 && \
-    rm $INSTALL_FILE_1 &&    \
-    unzip $INSTALL_FILE_2 && \
-    rm $INSTALL_FILE_2 &&    \

+RUN curl -sSL http://192.168.200.55:8000/$INSTALL_FILE_1 -o $INSTALL_FILE_1 && \
+    unzip $INSTALL_FILE_1 -d $INSTALL_DIR/ && \
+    rm -f $INSTALL_FILE_1 && \
+    curl -sSL http://192.168.200.55:8000/$INSTALL_FILE_2 -o $INSTALL_FILE_2 && \
+    unzip $INSTALL_FILE_2 -d $INSTALL_DIR/ && \
+    rm -f $INSTALL_FILE_2 && \

     $INSTALL_DIR/database/runInstaller -silent -force -waitforcompletion -responsefile $INSTALL_DIR/$INSTALL_RSP -ignoresysprereqs -ignoreprereq && \
     rm -rf $INSTALL_DIR/database

@@ -142,6 +144,7 @@ RUN echo "startup;" | sqlplus / as sysdba && \

 RUN rm -rf $INSTALL_DIR
 EXPOSE 1521 5500
-  
+VOLUME ["/opt/oracle/oradata","/opt/oracle/fast_recovery_area"]
+

 # Define default command to start Oracle Database.
 CMD $ORACLE_BASE/runOracle.sh
The result is 13.08 GB instead of 18.45 GB, more than 5 GB less!!
Hope these tips help!!!

PS: I don't know why the official Oracle RDBMS images include the Oracle data files (data); containers are immutable, so do not store data in containers!!! I prefer to build a Docker image which creates the database data files outside the container.
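
With the VOLUME declarations added in the diffs above, the data files can live on the host rather than inside the container. A rough sketch for the EE image (the image tag and the host directories are illustrative, not taken from the original post):

docker run --name oracle-ee --detach=true --publish=1521:1521 \
  -v /data/oradata:/opt/oracle/oradata \
  -v /data/fra:/opt/oracle/fast_recovery_area \
  oracle/database:12.1.0.2-ee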






10 Oracle documents every Apps DBA must read before Upgrade to Oracle EBS 12.2

Online Apps DBA - Fri, 2016-08-19 09:22

 Are you an Oracle Apps DBA looking for Upgrading Oracle E-Business to version 12.2 ? If you confused where to start and what documents to read before you Upgrade to Oracle E-Business Suite 12.2 then you are at right place. This post covers 10 important Oracle documents that every Apps DBA must read before doing Oracle EBS […]

The post 10 Oracle documents every Apps DBA must read before Upgrade to Oracle EBS 12.2 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

New features in the Oracle Compute Cloud

Pat Shuff - Fri, 2016-08-19 07:00
Today Oracle updated the Oracle Compute Cloud Service by adding three new features.
  • Integration of the Oracle Marketplace into the Compute Cloud Console to make it easier to deploy custom solutions
  • Expanding the functionality of Backup Services to snapshot Compute Instances and clone them from snapshots
  • Making it easier to import existing virtual machines into the Oracle Cloud and making these images available to the Public and Private Marketplace
We talked earlier this week about pulling an image from the Oracle Marketplace. Prior to today you had to go to cloud.oracle.com/marketplace, set up preferences to link your account to your compute account, get an app from the marketplace, and provision the instance through the compute cloud. Today we just need to go into the create instance menu system from the Compute Console and select an image from the Marketplace to provision into a compute instance. This modification reduces the number of steps required to use the Marketplace as well as making it easier to provision preconfigured solutions into a compute instance or set of compute instances.

Note the Marketplace tab below the Private Images. A list of instances in the Marketplace along with a search engine are integrated into the provisioning of a new compute instance.

Compute instances now also have a backup tab similar to the monitor tab that was introduced a few weeks ago. This allows you to create snapshots of whole instances, save the snapshot as a bootable image, and provision new instances based on this snapshot.

This allows you to provision a predefined instance from a default OS or Marketplace instance, customize it, take a snapshot, then provision new instances from the customized installation.

The third new feature gives users the ability to import VMware instances directly into the Compute Cloud as private images. The goal of this release is to allow users to import VMDK formatted images into the Oracle Cloud and run them with little or no modification. This includes defining multiple network interfaces at import time rather than having to go back and configure multiple interfaces after the fact. The import does not require users to modify the network drivers before importing but leverages the experience of the Ravello team in translating VMware definitions into Oracle Compute Cloud definitions, using Orchestration to create the network definition and provision it as the compute instance is started.

In summary, three new features were released today to make it easier to use the Oracle Compute Cloud Service. This is an ongoing improvement of services to help allow for frictionless migration of services from your data center into the cloud. These improvements along with those that will be announced between now and Oracle OpenWorld in September will help users treat the Oracle Public Cloud as an extension of their own data center for capacity expansion, disaster recovery, and development and test.

Variable substitution for a manifest.yml for Cloud Foundry

Pas Apicella - Fri, 2016-08-19 06:45
If you have pushed applications to CF or PCF you have most likely used a manifest.yml file, and at some point wanted to use variable substitution. manifest.yml files don't support that, and a feature request has been raised for it as follows

https://github.com/cloudfoundry/cli/issues/820

With a recent customer we scripted the creation of the manifest.yml file from a Jenkins job, which would inject the required ROUTE for the application by generating the manifest.yml through a script, as shown below.

manifest-demo.sh

export ROUTE=$1

echo ""
echo "Setting route to $ROUTE ..."
echo ""

cat > manifest.yml <<!
---
applications:
- name: gs-rest-service
  memory: 256M
  instances: 1
  host: $ROUTE
  path: target/gs-rest-service-0.1.0.jar
!

cat manifest.yml

Script tested as follows

pasapicella@pas-macbook:~/bin/manifest-demo$ ./manifest-demo.sh apples-route-pas

Setting route to apples-route-pas ...

---
applications:
- name: gs-rest-service
  memory: 256M
  instances: 1
  host: apples-route-pas
  path: target/gs-rest-service-0.1.0.jar
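
With the manifest generated, the push itself is just a normal cf push from the application directory, since the cf CLI picks up manifest.yml by default; for example:

./manifest-demo.sh apples-route-pas
cf push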

Categories: Fusion Middleware

Oracle TO_NCLOB Function with Examples

Complete IT Professional - Fri, 2016-08-19 06:00
The Oracle TO_NCLOB function is a simple function and is used for conversion. Learn what it does and see some examples in this article. Purpose of the Oracle TO_NCLOB Function The purpose of the TO_NCLOB function is to convert a value from a LOB to an NCLOB value. You may not use it very often, […]
Categories: Development

Simplifying your mobile app UI design with Oracle MAF 2.3.2

As you may already know Oracle MAF 2.3.2 has been released and brought a bunch of new features including important support for IPv6 only networks on iOS (requirement from Apple that any apps...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Documentum Administrator UCF Troubleshooting

Yann Neuhaus - Fri, 2016-08-19 04:23

Maybe you have had some issues with UCF in DA like me. I had this happen for no apparent reason for a few days at a customer. The problem was that we use SSL with DA and the Unified Content Facilities (UCF) wasn’t happy about it.
Thus, in this short blog I’ll talk about troubleshooting UCF.

The error I got happened when trying to edit, view or create documents; I had a popup saying an error occurred with UCF.

First, we must know our enemy in order to fight it!

UCF stands for Unified Content Facilities. It’s a Java applet made by EMC and used by WDK applications to manage and optimize content transfer between the application and your workstation. Thanks to UCF you can transfer large files with compression and reconnect if the network drops some packets. The applet is downloaded to your workstation at runtime when you connect to a WDK application.
You can find the UCF configuration in your user folder like follow:
C:\Users\<USER>\Documentum\ucf

Refresh UCF Cache

Before going deeper into the debugging, maybe try to clear the UCF cache first and re-download the latest one from the server. In order to do so you’ll have to perform the following steps:
Clear your browser cache. If you have IE, go to Tools -> Delete Browsing History (or press CTRL+SHIFT+DEL).
Then check every checkbox and click Delete.

Capture1

Close the browser afterwards.

Now be sure that you don’t have any browser pointing to a wdk application and go to C:\Users\<USER>\Documentum and try deleting the ucf folder.
If you have an error telling you it is already used, open the task manager and search for javaw.exe processes, and then close them down.
You should be able to delete the ucf folder now.

Also clear the cached ucf jar files by opening the java control panel. Go to Control Panel -> search for Java -> General Tab -> Temporary Internet Files -> Settings -> Delete Files

Now test again by opening Documentum Administrator and creating/editing a document. You shouldn’t have a popup error about UCF.

If you have reached this point in the blog, it means you didn’t get rid of the problem; neither did I. At this point we made some corrections, but we still don’t know what the real UCF error is; we only get this stack trace saying UCF failed. We can now enable UCF tracing in order to see if something more interesting is written in the logs.
You can enable the tracing on both sides, the server and your workstation. The easiest is to begin with your workstation so go back to the ucf folder C:\Users\<USER>\Documentum\ucf
Then navigate to <PCNAME>\shared\config and edit ucf.client.config.xml
Add the following options inside the <configuration> element:

<option name="tracing.enabled">
    <value>true</value>
</option>
<option name="debug.mode">
    <value>true</value>
</option>

Also edit the file: ucf.client.logging.properties by changing .level=WARNING to .level=ALL
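
After that change, the relevant line in ucf.client.logging.properties simply reads (only this line changes; the rest of the file stays as delivered):

.level=ALL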

Now reproduce the error and check what has been written in C:\Users\<USER>\Documentum\Logs

If you can’t see what the problem is, you can also activate tracing on the web server by editing ../WEB-INF/classes/ucf.server.config.xml in the same way, but note that you need to restart the web server for it to take effect.

The errors in the generated log should allow you to find the real cause of the ucf error. In my case it was the SSL handshake that was failing.


The article Documentum Administrator UCF Troubleshooting first appeared on the dbi services blog.

New Database Console Command (DBCC): CLONEDATABASE

Yann Neuhaus - Fri, 2016-08-19 02:52

You may not have seen it yet, but with Service Pack 2 of SQL Server 2014 there is a new DBCC command: CLONEDATABASE.

"DBCC CLONEDATABASE should create copies of a schema and the statistics of a production database in order to investigate queries in case of performance problems." The MSDN source can be found here.

This command can only clone user databases.

Clonedatabase01

As you can see, it is not possible for the system databases; error messages are raised:
Msg 12603, Level 16, State 1, Line 5
DBCC CLONEDATABASE does not support cloning system databases.

DBCC CLONEDATABASE creates a new database. It is an internal snapshot that copies the system metadata, all schemas and all statistics for all indexes. That is why the database is empty and in read-only mode.
Clonedatabase02
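
The command itself just takes the source database and the name of the clone to create; a minimal example matching the clone used in the rest of this post:

DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_clone);
GO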

The schemas are copied… OK, I will now run a "Schema Compare" with SQL Server Data Tools (SSDT):
Clonedatabase03

All schemas are in the cloned database. The differences are in the full-text objects and the data in the XML schema, such as the MileRangeType with its meaning.
Clonedatabase04

I talked about this functionality with my friend Michel, and he asked me how it behaves with regard to the data.
My first step is to look at the file structure after cloning:
Clonedatabase05
You can see that my cloned database has inherited neither the user database's file structure nor that of the model database.
I set up a new filegroup [FG_Employees] with a new file, AdventureWorks2014_Employees.ndf.
I move my clustered index PK_Employee_BusinessEntityID to this new filegroup:

USE [master]
GO
ALTER DATABASE [AdventureWorks2014] ADD FILEGROUP [FG_Employees]
GO
ALTER DATABASE [AdventureWorks2014]
ADD FILE ( NAME = N'AdventureWorks2014_Employees',
FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.TEST\MSSQL\DATA\AdventureWorks2014_Employees.ndf'
, SIZE = 16384KB , FILEGROWTH = 8192KB ) TO FILEGROUP [FG_Employees]
GO
USE [AdventureWorks2014];
GO
CREATE UNIQUE CLUSTERED INDEX PK_Employee_BusinessEntityID
    ON HumanResources.Employee(BusinessEntityID)
WITH (DROP_EXISTING =  ON )
ON [FG_Employees]

And now I clone my database again:
Clonedatabase06
The file structure is copied, cool!
Can I switch the database out of read-only mode and import data?

USE [master]
GO
ALTER DATABASE [AdventureWorks2014_clone] SET  READ_WRITE WITH NO_WAIT
GO
INSERT INTO [AdventureWorks2014_clone].[Sales].[Currency]
SELECT *  FROM [AdventureWorks2014].[Sales].[Currency]

No problem, the data is copied directly.
Clonedatabase07

This new functionality is really easy to use.
I hope that the CLONEDATABASE function will also be integrated in the next service packs of SQL Server 2012 and SQL Server 2016. For more information, see the MSDN link here.


The article New Database Console Command (DBCC): CLONEDATABASE first appeared on the dbi services blog.

Node-oracledb 1.11 Released to NPM

Christopher Jones - Fri, 2016-08-19 02:09

We've pushed out a release of node-oracledb to GitHub and NPM.

Top feature: Connection Pool Cache

The node-oracledb driver connects Node.js to Oracle Database for fast and functional applications.

The changes in node-oracledb 1.11 are:
  • Added a connection pool cache feature allowing pools to be given a string alias at creation. This makes pools easier to use.

    Pools are typically created in one file and later used in other files. Giving a pool a string alias allows these other modules to access the pool by knowing only its alias name.

    When pools are created with oracledb.createPool(), the pool attributes can now optionally contain a new poolAlias string.

    var hrPoolPromise = oracledb.createPool({
      poolAlias: 'pool1',
      user: 'hr',
      password: 'welcome',
      connectString: 'localhost/orcl'
    });

    Pools can be retrieved from the cache using a new oracledb.getPool() method, and then used as before to get connections. Multiple pools, each with a different alias, can be used (there is a short usage sketch after this list).

    Interestingly, oracledb.getConnection() has been enhanced to take a pool alias specifying which pool to return a connection from. This bypasses the need to call oracledb.getPool().

    And there's more! The first pool (in a normal code flow) created without a poolAlias attribute is given the alias "default". Connections can be retrieved from this pool by using oracledb.getConnection() without passing an alias at all. This means applications that create only a single pool can simply use oracledb.getConnection() anywhere to get a connection from the pool.

    More information and examples are in the documentation. It is worth checking this since oracledb.getConnection() now has several different behaviors, depending on the use of aliases (and whether callbacks or promises are used).

    We'd recommend using aliases all the time if you create more than one pool - and want to access them via aliases. Using aliases is optional. Don't mix un-aliased and aliased pools unless you want to confuse yourself.

  • Improved the bootstrap error message when the node-oracledb binary cannot be loaded.

  • Fixed memory leaks with DATE and TIMESTAMP bind values.

  • Fixed external authentication which broke in 1.10.

  • Fixed metadata scale and precision values on AIX.

  • A couple of other internal improvements are shown in the CHANGELOG.
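
As a rough sketch of the pool cache in use (assuming the 'pool1' alias from the createPool() example above; error handling trimmed for brevity):

var oracledb = require('oracledb');

// Look the pool up by its alias and get a connection from it
var pool = oracledb.getPool('pool1');
pool.getConnection(function (err, connection) {
  if (err) { console.error(err); return; }
  connection.execute('SELECT SYSDATE FROM DUAL', function (err, result) {
    if (!err) { console.log(result.rows); }
    connection.release(function (err) { if (err) { console.error(err); } });
  });
});

// Equivalent shortcut: pass the alias straight to getConnection()
// oracledb.getConnection('pool1', function (err, connection) { ... });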

Resources

Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line!

Node-oracledb installation instructions are here.

Node-oracledb API and user documentation is here.

How to automatically execute sql queries in SQl developer

Tom Kyte - Fri, 2016-08-19 02:06
Hi thatjeffsmith, I'm from Vietnam. I have a trouble need your help I want to run this queries in SQL developer automatically, frequency will be daily: spool D:\test.csv select /*csv*/ * from dwh.DEPARTMENT_TCB where rownum <= 10 / spool o...
Categories: DBA Blogs

Query index information is very slow

Tom Kyte - Fri, 2016-08-19 02:06
Hello, I do a SELECT AIC.INDEX_NAME AS IndexName, CASE ALC.CONSTRAINT_TYPE WHEN 'P' THEN 'T' ELSE 'F' END AS IsPrimary, CASE ALC.CONSTRAINT_TYPE W...
Categories: DBA Blogs

Partitioning

Tom Kyte - Fri, 2016-08-19 02:06
I have a table that is partitioned on a date column. When I insert rows into the table and commit all is well. When I select from the table => select * from tableA partition(xyz); I get the number of rows I expect. Also if i select from the table...
Categories: DBA Blogs

New Time Model Clarifications

Tom Kyte - Fri, 2016-08-19 02:06
Good Saturday Morning, Gentlemen. I am getting ready to give a talk to a local Oracle User's group. One item that I plan to discuss with the club is the Time Model. I have a few questions that will help me understand it better. Where will t...
Categories: DBA Blogs

expdp/impdp with Full Database but need only METADATA

Tom Kyte - Fri, 2016-08-19 02:06
i want to do expdp and impdp of full database but condition is METADATA_ONLY. SOURCE DATABASE IS 11.1.0 and TARGET DATABASE IS 11.2.0. Please tell a. while exporting and importing which user should be used to export. b. FULL=Y, CONTENT=MET...
Categories: DBA Blogs
