Feed aggregator

OTN Appreciation Day : ADVM

Yann Neuhaus - Tue, 2016-10-11 05:26

Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we all post them on the same day. Here is my favorite feature: ADVM – The Oracle ASM Dynamic Volume Manager.

So, what is it? The docs tell you this: “Oracle ASM Dynamic Volume Manager (Oracle ADVM) provides volume management services and a standard disk device driver interface to clients. File systems and other disk-based applications send I/O requests to Oracle ADVM volume devices as they would to other storage devices on a vendor operating system.”

The easy to understand version is this: It enables us to use regular file systems on top of ASM.

Does it make sense to use it? When you have ASM running on the host, or on all the hosts of a Grid Infrastructure cluster, anyway, then it definitely makes sense. ASM will do all the mirroring and striping for you, so there is no need to use another technology for that when you can create ADVM volumes and put file systems on top of them. Although the most common scenario is to create an ACFS file system on top of the volumes, you are actually not limited to that. Let's do a short demo.

Let's say we have these devices available for use by the grid user:

[root@rac1 ~] ls -la /dev/sd[b-f]*
brw-rw----. 1 root disk     8, 16 Oct 10 17:54 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 17 Oct 10 18:10 /dev/sdb1
brw-rw----. 1 root disk     8, 32 Oct 10 17:54 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 33 Oct 10 18:10 /dev/sdc1
brw-rw----. 1 root disk     8, 48 Oct 10 17:54 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 49 Oct 10 18:10 /dev/sdd1
brw-rw----. 1 root disk     8, 64 Oct 10 17:54 /dev/sde
brw-rw----. 1 grid asmadmin 8, 65 Oct 10 18:10 /dev/sde1
brw-rw----. 1 root disk     8, 80 Oct 10 17:54 /dev/sdf
brw-rw----. 1 grid asmadmin 8, 81 Oct 10 18:10 /dev/sdf1

We want to use "/dev/sde1" for our new ADVM volume. The first thing we need is an ASM diskgroup, because an ADVM volume must be placed inside a diskgroup:

grid@rac1:/home/grid/ [+ASM1] sqlplus / as sysasm
SQL> create diskgroup ADVM external redundancy disk '/dev/sde1';

Diskgroup created.

SQL> 

Ok, fine. How can we proceed with creating a volume? Quite easy:

grid@rac1:/home/grid/ [+ASM1] asmcmd volcreate -G ADVM -s 2g VOLADVM
ORA-15032: not all alterations performed
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher (DBD ERROR: OCIStmtExecute)

Hm, it is quite clear when you search the documentation: ADVM has been available since 11gR2, so the diskgroup's compatible.asm attribute must be at least 11.2.0.0.0.

Easy to fix:

grid@rac1:/home/grid/ [+ASM1] sqlplus / as sysasm

SQL> alter diskgroup ADVM set attribute 'compatible.asm'='12.1';

Diskgroup altered.

SQL> 

Let's try again:

grid@rac1:/home/grid/ [+ASM1] asmcmd volcreate -G ADVM -s 2g VOLADVM
grid@rac1:/home/grid/ [+ASM1] 

Perfect. Now I have a volume visible to the operating system:

grid@rac1:/home/grid/ [+ASM1] ls -la /dev/asm/*advm*
brwxrwx---. 1 root asmadmin 252, 115201 Oct 10 18:20 /dev/asm/voladvm-225

On top of this volume we can now create file systems. The natural one would be ACFS:

[root@rac1 ~] mkfs.acfs /dev/asm/voladvm-225
mkfs.acfs: version                   = 12.1.0.2.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/voladvm-225
mkfs.acfs: volume size               = 2147483648  (   2.00 GB )
mkfs.acfs: Format complete.
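
To actually use the file system it still needs to be mounted somewhere. A minimal sketch, assuming the ACFS kernel drivers are loaded and using a purely illustrative mount point:

[root@rac1 ~] mkdir -p /u01/app/grid/acfsmounts/voladvm
[root@rac1 ~] mount -t acfs /dev/asm/voladvm-225 /u01/app/grid/acfsmounts/voladvm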

But in fact every other file system the operating system supports is possible, too:

[root@rac1 ~] mkfs.xfs /dev/asm/voladvm-225
meta-data=/dev/asm/voladvm-225   isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Quite cool, isn’t it? Whatever file system your operating system supports can be put on ASM disk groups …

 

This article, OTN Appreciation Day : ADVM, first appeared on the Blog dbi services.

OTN Appreciation Day : Prebuilt Developer VMs

Bar Solutions - Tue, 2016-10-11 05:11

I learnt a lot from the entire Oracle Technology Network and I still do. One of the best features of OTN these days, IMHO, is the ability to download prebuilt Virtual Machines. Since I am a developer and not an administrator, I don't like to be bothered with stuff like how much disk space I need, how many cores I should use, etc. I can just download a VirtualBox image, import it and start experimenting with the technology I am interested in, for instance the multitenant features of the Oracle 12c database. The best thing about using a virtual machine is that when you screw up really, really badly, you can just throw away the virtual machine, import the original version again and try again.
Thanks, OTN, for making it possible for me to learn new technologies without having to learn all the administrator stuff.
Oh, and if I need some extra information or find out what an error means and what I can do about it, there is almost always an answer to be found at OTN.

#ThanksOTN

OTN Appreciation Day: Automatic Storage Management (ASM)

Jeff Moss - Tue, 2016-10-11 04:48

Big shout out to Tim for kicking this off!

Automatic Storage Management (ASM) provides optimised volume management and filesystem capabilities for Oracle databases, whether they be single or multi instance (RAC) implementations.

Although introduced with Oracle 10g Release 1 in 2004, I first used it in a production scenario around 2008, when upgrading a hardware platform for a Data Warehouse. It seemed like a logical choice for myself and the DBAs at the time, although the storage team were less pleased at losing some control. Ultimately it proved a big success on that project and is still in stable, reliable use today.

Things I like about ASM include:

  • Simplifies storage management
  • Automatic rebalancing when capacity is added
  • Visibility within Enterprise Manager for monitoring
  • Availability of detailed metrics within the database
  • Reliable, balanced and consistent performance
  • Works with RAC
  • Rolling upgrades and patching
  • Provides a reliable cluster filesystem (ACFS)
  • Even more cool features coming in 12c such as Flex ASM
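
On the detailed-metrics point above: even a quick query against the ASM instance gives a useful overview of the disk groups. A minimal sketch, run as SYSASM on the ASM instance:

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;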

 

Some useful links:

ASM Administrators Guide 12cR1 (Oracle Docs)

The Mother Of All ASM Scripts (John Hallas)

Technical overview of new features for ASM in 12c (Whitepaper)


Oracle 12c – Managing RMAN persistent settings via SQL

Yann Neuhaus - Tue, 2016-10-11 04:29

RMAN persistent settings can be managed in two different ways.

  • Via the RMAN interface
    – e.g. RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
  • Via SQL
    – e.g. VARIABLE RECNO NUMBER;
    EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');

There are several scenarios when it might be helpful to use the SQL way. I will show 3 of them:

  • Automation
  • Reset to default
  • Rebuilding the RMAN persistent settings after losing all controlfiles (no catalog)

Let's take a look at the first scenario: suppose you have an automated way to run SQL scripts against all of your databases and you want to change the RMAN retention from 3 days to 4 days everywhere. Then you could run the following.

SQL> select conf#, name, value from v$rman_configuration where name = 'RETENTION POLICY';

CONF# NAME                             VALUE
----- -------------------------------- ----------------------------------------------------------------------------------------
    1 RETENTION POLICY                 TO RECOVERY WINDOW OF 3 DAYS


SQL> EXECUTE DBMS_BACKUP_RESTORE.DELETECONFIG(CONF# => 1);

PL/SQL procedure successfully completed.

SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 4 DAYS');

PL/SQL procedure successfully completed.


SQL> select conf#, name, value from v$rman_configuration where name = 'RETENTION POLICY';

CONF# NAME                             VALUE
----- -------------------------------- ----------------------------------------------------------------------------------------
    1 RETENTION POLICY                 TO RECOVERY WINDOW OF 4 DAYS

	
-- The new value is, of course, immediately reflected via the RMAN interface as well

RMAN> SHOW RETENTION POLICY;

RMAN configuration parameters for database with db_unique_name OCM121 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;

 

The second useful scenario is resetting the whole RMAN configuration in one shot. Instead of running several clear commands like "RMAN> CONFIGURE BACKUP OPTIMIZATION CLEAR;", simply run RESETCONFIG.

SQL> EXECUTE DBMS_BACKUP_RESTORE.RESETCONFIG;

PL/SQL procedure successfully completed.

-- After executing this command, the v$rman_configuration view is empty, which means that all
-- RMAN persistent settings are default.

SQL> select conf#, name, value from v$rman_configuration;

no rows selected

 

And last but not least, the SQL approach lets you restore the RMAN persistent settings in case you have lost all of your controlfiles and no RMAN catalog is in place.

One little side note in case you have an RMAN catalog: the RMAN sync from the controlfile to the catalog is usually unidirectional, meaning that the controlfile is always the master and syncs its information to the catalog. However, there are exceptions where it is bidirectional. One of them is when you recreate the controlfile manually; in that case RMAN is able to get the last RMAN persistent settings from the catalog and apply them to the controlfile.

However, if you don't have a catalog, you can dump the RMAN persistent settings as SQL simply by backing up the controlfile to trace.

SQL> alter database backup controlfile to trace as '/tmp/cntrl.trc';

Database altered.
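
One way to pull just the persistent-settings section out of the trace file is a quick grep, for example (a sketch, assuming the trace path used above):

grep -E "RMAN configuration record|RECNO" /tmp/cntrl.trc

In the trace, that section looks like this: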

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 4 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 14 DAYS');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/nfs/OCM121)''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO APPLIED ON ALL STANDBY');

And if you run into the severe situation of losing all controlfiles, you can restore the RMAN persistent settings quite quickly. This is especially useful when you have configured complex Media Manager settings.
Cheers,
William

P.S. Managing RMAN persistent settings via SQL is not a 12c feature. It has existed for quite a long time.

 

This article, Oracle 12c – Managing RMAN persistent settings via SQL, first appeared on the Blog dbi services.

OTN Appreciation Day: Breaking Barriers…In Memory

Marco Gralike - Tue, 2016-10-11 04:17
The stuff I liked most in the database releases of the last years is the…

OTN Appreciation Day: OBIEE's Export to Excel Functionality

Rittman Mead Consulting - Tue, 2016-10-11 04:11

Only kidding… Did you know that almost any transformation doable in Excel can be achieved in OBIEE, probably faster and with zero impact on your local workstation?

[Image: cat with a million rows. Image credit: https://twitter.com/fomin_andrew/status/749305992198881281]

Why bother downloading data to Excel when you have pivot tables, conditional formatting and a huge variety of graphs with drilling/action capabilities all in OBIEE, a platform where analysis can be shared by passing a single URL instead of emailing huge XLS files?

Sometimes, however, there is a good reason to export to Excel, like when preparing a presentation on top of OBIEE data and analyses. The following are the possible ways of achieving the OBIEE/Excel integration:

  • Dashboard and Analysis can be exported to excel with a single click
  • A BI Publisher version of a dashboard can be created and used by default when exporting
  • Excel can be linked via Smartview to a single Analysis: Data and Visualisations can be downloaded and refreshed upon request with configurable parameters.
  • Excel can directly query the BI Server Subject Areas via Smartview.
  • Excel version of Dashboard and Analysis can be delivered by email via Agents.

An important note: Oracle published "OBIEE 11.1.1.7 - New Features, Export Guidance And Recommendations For Working With Microsoft Office (Doc ID 1558070.1)". This document contains recommendations on how to provide the export to Excel depending on the output data volume. The document was written for OBIEE 11.1.1.7, but the same suggestions apply to almost any OBIEE version available.

Categories: BI & Warehousing

OTN Appreciation Day : Partition your table online !

Laurent Schneider - Tue, 2016-10-11 03:26

#ThanksOTN @oraclebase

No, I am not talking about DBMS_REDEFINITION, where you get a kind of online feeling.

No, I don’t want to rename my table, rename my foreign keys, my primary key, my not-null-constraints, recreate my referential integrity, recompile my triggers.

I just want to partition a non-partitioned table.

ALTER TABLE EMP 
  MODIFY 
  PARTITION BY RANGE(HIREDATE)
  INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION P1 VALUES LESS THAN (DATE '1980-01-01'))
  ONLINE;

This is going to save me a lot of Saturday work 🙂

You need 12.2 to run this.
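
If you want to double-check the result afterwards, a simple dictionary query does the job (a sketch, assuming the classic EMP table):

SQL> select partition_name, high_value
     from user_tab_partitions
     where table_name = 'EMP'
     order by partition_position;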

OTN Appreciation Day: Flashback

John Scott - Tue, 2016-10-11 02:48

This is my contribution to the excellent OTN Appreciation Day idea by Tim.

The Flashback (particularly Flashback Query) features of the database have saved my neck many (too many!) times over the years.

For example, retrieving the value of an employee's salary as it was 10 minutes ago:

SQL> select sal 
 from emp 
 as of timestamp sysdate - interval '10' minute 
 where ename = 'SCOTT';

This feature can be used almost everywhere Oracle is used, for example it’s embedded into the Export Application feature of Oracle Application Express


As an extension to this, using the Flashback Data Archive feature allows you to maintain an archive of your application data that you can query in real time – without having to write your own logic.
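
As a rough sketch of Flashback Data Archive (archive name, tablespace and retention are purely illustrative):

SQL> create flashback archive fda_one_year tablespace users retention 1 year;
SQL> alter table emp flashback archive fda_one_year;

After these two statements, AS OF queries against EMP can go back up to a year without any custom history-logging code.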

Flashback – an extremely useful but often little known and underused feature of Oracle!


Which EBS 12.1.3 RPC Should I Apply?

Steven Chan - Tue, 2016-10-11 02:05

A customer was in the process of upgrading their EBS 12.1.3 environment to RPC 4 when they heard about the release of RPC 5.  They asked, "Which one should we apply?"

This kind of question crops up frequently since we're continually producing new EBS updates. There is no definitive right answer.  For example, the answer depends upon:

  • How far a customer might have progressed in their testing of the previous update
  • The number of customizations affected by the respective updates
  • Whether the customer wants new functionality available only in the latest update
  • Whether the customer needs fixes that are only available in the latest update
  • How risk-averse the customer is (e.g. "Let other customers live on the bleeding edge")
  • etc.


What is a Recommended Patch Collection?

Recommended Patch Collections (RPC) are collections of patches that we have already released individually for EBS 12.1.3.

Customers can apply individual patches or they can apply a Recommended Patch Collection.  Either approach is fine.

We periodically bundle the latest patches together to save you the trouble of applying them individually.  As an added bonus, all patches in a given RPC are tested together for compatibility, which isn't always true for individual patches that are released on an emergency basis.

Recommended Patch Collections are not a “release.”  We currently have no plans to make a specific RPC a minimum baseline or prerequisite for new EBS 12.1 patches. 

What is our policy for releasing EBS patches?

Our official policy for releasing EBS 12.1 and 12.2 patches is published here:

Our current policy is that EBS 12.1.3 users should apply the latest 12.1.3 product family patches for the products in use, plus a small number of ATG technology stack patches. We will produce new EBS patches for issues that can be reproduced in environments with those prerequisites.

What's the best strategy for applying EBS 12.1.3 Recommended Patch Collections?

We generally recommend applying the latest EBS updates to production as soon as it's convenient for your users.

If you're in the middle of testing a specific RPC and it's too late to switch to a newer one, go ahead and complete your patching project on production. You can always apply the next RPC in a subsequent project.


Categories: APPS Blogs

Documentum story – Manual deployment of X DARs on Y docbases

Yann Neuhaus - Tue, 2016-10-11 02:00

In a previous blog (click here), I presented a common issue that might occur during the installation of some DARs and how to handle it with what Documentum provides, but there are some limitations to that approach. The script repositoryPatch.sh is pretty good (except for the small bug explained in the other blog), but its execution is limited to a single docbase, and it is pretty tedious to always type the full path of the DAR files, knowing that usually all DARs will be in the same place (or at least this is what I would recommend). In addition, repositoryPatch.sh might not even be available on your Content Server, because it is normally only delivered with a patch of the Content Server. Therefore we usually use our own shell script to deploy X DARs on Y docbases with a single command.

 

For this blog, let’s use the following:

  • Documentum CS 7.2
  • RedHat Linux 6.6
  • $DOCUMENTUM=/app/dctm/server
  • $DM_HOME=/app/dctm/server/product/7.2

 

In this blog I will propose three different solutions that avoid the issue of spaces in DAR names and let you deploy all the DARs you want on all the docbases you define.

  1. Variable with a space-separated list
#!/bin/bash
# bash (not plain sh) is needed here because the DAR list below uses an array
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
dar_list=("DAR 1.dar" "DAR 2.dar" "DAR 3.dar")
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        for dar in "${dar_list[@]}"
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

This is probably not the best solution because you have to manually add double quotes around each DAR name, which is a little tedious unless you already have such a list. Please note that with this script, all DARs must be in the folder $DM_HOME/install/DARsInternal/, which is the folder used by Documentum by default for DARs.

 

  2. No variable, but still a space-separated list
#!/bin/sh
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        for dar in "DAR 1.dar" "DAR 2.dar" "DAR 3.dar"
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

This one is much the same as before: you don't need the "@" array trick since the list of DARs is directly in the for loop, but you still need to manually put double quotes around the file names.

 

  3. Variable with a comma-separated list
#!/bin/sh
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
dar_list="DAR 1.dar,DAR 2.dar,DAR 3.dar"
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        IFS=',' ; for dar in $dar_list
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

This version is my preferred one because all you need is a comma-separated list of the DARs to be installed, which is simple to obtain and easier to manage than double quotes everywhere. All three versions produce the following output, showing that the script works properly even for DARs containing spaces in their names:

Deploying DAR 1.dar into DOCBASE1
Deploying DAR 2.dar into DOCBASE1
Deploying DAR 3.dar into DOCBASE1
Deploying DAR 1.dar into DOCBASE2
Deploying DAR 2.dar into DOCBASE2
Deploying DAR 3.dar into DOCBASE2
Deploying DAR 1.dar into DOCBASE3
Deploying DAR 2.dar into DOCBASE3
Deploying DAR 3.dar into DOCBASE3

 

So that covers the deployment of several DARs in several docbases. By default, Documentum assumes that the username is "dmadmin". If this isn't the case, this script will not work in its current state. Yes, I know, we specified the user in the script, but Documentum doesn't care and it will fail if you aren't using dmadmin. If you need to specify another name for the Installation Owner, you need to do three additional things. The first one is to add a new parameter to the script, which will then look like the following:

#!/bin/sh
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
dar_list="DAR 1.dar,DAR 2.dar,DAR 3.dar"
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        IFS=',' ; for dar in $dar_list
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -Dinstallparam="$dar_location/installparam.xml" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

After doing that, the second thing to do is to create the file installparam.xml that we used above. In this case, I put this file in $DM_HOME/install/DARsInternal but you can put it wherever you want.

[dmadmin@content_server_01 ~]$ cat $DM_HOME/install/DARsInternal/installparam.xml
<?xml version="1.0" encoding="UTF-8"?>
<installparam:InputFile xmlns:installparam="installparam" xmlns:xmi="http://www.omg.org/XMI" xmi:version="2.0">
    <parameter value="dmadmin" key="YOUR_INSTALL_OWNER"/>
</installparam:InputFile>

 

Just replace YOUR_INSTALL_OWNER in this file with the name of your Installation Owner. Finally, the last thing to do is to update the buildfile. In our script we are using the default one provided by EMC. In this buildfile you need to tell Documentum explicitly to take a custom parameter file into account, which is done by adding a single line to the emc.install XML tag:

[dmadmin@content_server_01 ~]$ grep -A5 emc.install $DM_HOME/install/composer/deploy.xml
        <emc.install dar="${dar}"
                     docbase="${docbase}"
                     username="${user}"
                     password="${password}"
                     domain="${domain}"
                     inputfile="${installparam}" />

 

Once this is done, you can just restart the deployment of the DARs and it should be successful this time. Another way to specify a different Installation Owner or add more install parameters is to not use the default buildfile provided by EMC but your own custom buildfile. This will be an ANT file (XML with project, target, and so on) that defines exactly what to do, so it is highly customizable. There are a lot of possibilities!
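
As a rough sketch (project and target names are made up), such a custom buildfile can be as small as a single target wrapping the same emc.install task that deploy.xml uses, with the inputfile attribute already wired in:

<?xml version="1.0"?>
<project name="custom-dar-deploy" default="deploy" basedir=".">
    <!-- minimal custom buildfile: same emc.install task as in deploy.xml,
         but with our installparam.xml passed as input file -->
    <target name="deploy">
        <emc.install dar="${dar}"
                     docbase="${docbase}"
                     username="${user}"
                     password="${password}"
                     domain="${domain}"
                     inputfile="${installparam}" />
    </target>
</project>

You would then point the -buildfile option of the java command to this file instead of $DM_HOME/install/composer/deploy.xml.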

Note: Once done, don’t forget to remove the line from the file deploy.xml ;)

 

Hope you enjoyed this blog and that this will give you some ideas about how to improve your processes or how to do more with less. See you soon!

 

This article, Documentum story – Manual deployment of X DARs on Y docbases, first appeared on the Blog dbi services.

OTN Appreciation Day : Transportable tablespaces

Yann Neuhaus - Tue, 2016-10-11 01:15

Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we all post them on the same day. Here is my favorite feature, which I described at the "EOUC Database ACES Share Their Favorite Database Things" session: Transportable Tablespaces, which appeared in Oracle 8.1.5.

ROWID

I'll start with a change that came between Oracle 7 and Oracle 8. The ROWID, which identifies the physical location of a row within a database (with file ID, block offset, and row directory number), changed to be the location within a tablespace only. The format did not change, but the file ID became a relative file number instead of an absolute file number.
Here is the idea:

Actually, to be able to migrate without visiting each block (the ROWID is present in all blocks, all redo vectors, etc.), they kept the same number, but that number is now unique only within a tablespace. The first goal was to allow more datafiles per database (Oracle 8 introduced the VLDB, Very Large Database, concepts). The limit of 255 datafiles per database became a limit of 255 datafiles per tablespace. So the numbers start the same as before but can go further.

This change was simple because anytime you want to fetch a row by its ROWID you know which table you query, so you know the tablespace. The exception is when the ROWID comes from a global index on a partitioned table, and for this case Oracle 8 introduced an extended ROWID that contains additional bytes to identify the segment by its DATA_OBJECT_ID.

By the way, this makes tablespaces more independent of the database that contains them, because all row addressing is relative.

Locally Managed Tablespaces

Another change in 8i was Locally Managed Tablespaces. Before, the space management of the tablespaces was centralized in the database dictionary. Now it is decentralized into each tablespace: what was stored in the UET$ system table is now managed as a bitmap in the first datafile's header.

Pluggable tablespaces

The original name of transportable tablespaces was "pluggable tablespaces". Because they are now more self-contained, you can detach them from a database and attach them to another database, without changing the content. This means that data is moved physically, which is faster than the selects/inserts that are behind a logical export/import. There are only two things that do not come with the datafiles.

Open transactions store their undo in the database UNDO tablespace. This means that if you detach a user tablespace, you don't have the information to roll back the ongoing transactions when you re-attach it elsewhere. For this reason, this 'detach' is possible only when there are no ongoing transactions: you have to put the tablespace READ ONLY.

The user object metadata is stored in the database dictionary. Without it, the datafiles are just a bunch of bytes. You need the metadata to know what is a table or an index, and which one. So, with transportable tablespaces, a logical export/import remains for the metadata only. This was done with exp/imp when TTS was introduced and is now done with DataPump. Small metadata is moved logically. Large data is moved physically.


Transportable tablespaces

TTS is faster than plain DataPump because data is moved physically, by moving the datafiles. TTS is more flexible than an RMAN duplicate because you can easily move a subset of a database. Because the metadata is still transported logically, and datafiles are compatible with newer versions, TTS can be done cross-version, which makes it a nice way to migrate and upgrade. It is also used for tablespace point-in-time recovery, where you have to recover to an auxiliary instance and then transport the tablespace to the target.
TTS is also used to move data quickly from an operational database to a data warehouse ODS.
It is also a good way to publish and share a database in read-only mode, on a DVD for example.
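
As a rough sketch of the flow (tablespace name, paths and directory objects are purely illustrative), transporting a tablespace with DataPump looks like this:

-- on the source: freeze the tablespace
ALTER TABLESPACE ts_data READ ONLY;
-- export the metadata only (OS command):
--   expdp system DIRECTORY=dp_dir DUMPFILE=ts_data.dmp TRANSPORT_TABLESPACES=ts_data
-- copy the datafile(s) and the dump file to the target server, then plug the metadata in:
--   impdp system DIRECTORY=dp_dir DUMPFILE=ts_data.dmp TRANSPORT_DATAFILES='/u02/oradata/TGT/ts_data01.dbf'
-- on the target, if the data should be writable again:
ALTER TABLESPACE ts_data READ WRITE;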

And beyond

Except for the move to DataPump for the metadata transfer, TTS did not change a lot until 12c. In 12.1 you have full transportable export/import, which automates the operations when you want to move a whole database. This can be used to migrate from a non-CDB to the multitenant architecture.

With multitenant, pluggable databases are an extension of TTS. Because the user metadata comes with the PDB system tablespaces, you don't need to export it logically anymore: you transport the whole PDB. That's the first restriction relieved. The second restriction, the need for read only, will be relieved as well once UNDO is local to the PDB, and I don't think I disclose any secret when saying that local undo has been announced for 12.2.

OTN Appreciation Day

This was my contribution to the "EOUC Database ACES Share Their Favorite Database Things" session at Oracle Open World 2016, organized by Debra Lilley. Tim Hall's idea for "OTN Appreciation Day" comes from that. You still have time to contribute for this day. No need for long posts – I always write a bit more than I plan to. The "rules" for this day are described on oracle-base.com.

 

This article, OTN Appreciation Day : Transportable tablespaces, first appeared on the Blog dbi services.

OTN Appreciation Day: Oracle Data Integrator 12c - Flexibility

Rittman Mead Consulting - Mon, 2016-10-10 22:49

As you may already know by now, it’s OTN Appreciation Day! The idea was thought up by Oracle ACE Director Tim Hall to give thanks to the Oracle Technical Network that we all love and use on a daily basis (see #ThanksOTN on Twitter).

The focus for these blog posts is not only to give thanks to OTN, but also to generate some interesting conversation around different features of various Oracle products. One of my favorite features of Oracle Data Integrator 12c is its flexibility. This isn’t necessarily a single feature of the product, but more of an overall assessment of ODI as a whole. Let me breakdown a few of the features that make ODI flexible and easy to adapt. Note: I actually mentioned a couple of these in my previous blog post, “Oracle Data Integrator 12c: Getting Started - What is ODI?”, but why not point them out again?

Knowledge Modules

The typical main goal of Oracle Data Integrator is to develop mappings that use one or more source datastores to load one or more target datastores. These mappings can have many different components that join, filter, aggregate or transform the data before it reaches its final destination. Logically, this is completed in ODI by dragging lines between boxes and configuring properties for each component.

The physical implementation of a mapping is set to use one or more Knowledge Modules to generate the code or objects required for specific loading and integration types. These KMs are code templates that use the ODI Substitution API to fill in the mapping metadata based on the logical implementation. The great part is that these KMs can be easily customized and modified to fit your data integration needs! Your environment doesn’t have to conform to the tool, you get to make the tool conform to your environment.

"But wait!", says the current ODI developers. “We can’t modify Component Knowledge Modules.” Yes, that’s a true statement. The current ODI 12c version has two types of KMs: Template and Component. The former have been in the product since inception and are easily customizable while the latter are more of a black box. But, as announced at Oracle OpenWorld last month, a brand new, highly extensible Knowledge Module framework, and the ability to edit all KMs, is coming soon!

Procedures

If Knowledge Module customization doesn’t give you the flexibility necessary to perform specific actions in your data integration project, we also have ODI Procedures. These objects are used to execute a specific set of code for any sort of task. With so many different technologies accessible via ODI, you can write nearly any bit of code to execute. Procedures are often used for exception handling, file processing, and all types of scripting via Jython or Groovy. These objects just add to the ultimate flexibility of Oracle Data Integrator.

Java API

Last, but definitely not least, we have the ODI SDK Java API. Using the SDK, we can perform nearly every action that can be completed via ODI Studio. That's a huge amount of flexibility! Now if a batch creation of objects or a change is necessary, I can write a few lines of Groovy code, execute the script, and it's all completed in seconds rather than days of manual work. Take a look at a couple of examples I've posted in the past, adding columns to a set of ODI Datastores and creating Interfaces (in ODI 11g) based on a SQL query. Any chance I get, I try to use the ODI SDK to make my development life just that much easier.

There you have it, my favorite feature of Oracle Data Integrator 12c - flexibility. I hope you all have a great OTN Appreciation Day and thanks for reading! And of course, #ThanksOTN!

Categories: BI & Warehousing

OTN Appreciation Day : Undo and Redo

Hemant K Chitale - Mon, 2016-10-10 20:06
On OTN Appreciation Day, let me say I like the Undo and Redo features of Oracle.  I name them together as they work together.

Undo also supports MultiVersionReadConsistency -- a great advantage of Oracle.

Redo, with Archive Logging, also supports Online Backups -- an absolute necessity.

These features have been around for almost 30 years now.

Here are some Quick and Rough Notes on Undo and Redo.
.
.
.
Categories: DBA Blogs

Undo and Redo

Hemant K Chitale - Mon, 2016-10-10 20:04
Quick and Rough Notes :


Undo and Redo


Undo is where Oracle logs how to reverse a transaction (one or more DMLs in a transaction)

Redo is where Oracle logs how to replay a transaction

Undo and Redo are written to as the transaction proceeds, not merely at the end of the transaction
(imagine a transaction that consists of 1million single-row inserts, each distinct insert is written to undo and redo)
Undo segments
Oracle dynamically creates and drops Undo segments depending on transaction volume
An undo segment consists of multiple extents. As a transaction grows beyond the current extent, a new extent may be allocated
One undo segment can support multiple transactions but a transaction cannot span multiple undo segments
After COMMIT the undo information is retained for undo_retention or autotuned_undo_retention.
At the end of the retention period, the undo is discarded, the extent is expired

Undo retention
Oracle may autotune the undo retention
If the datafile(s) for the active undo tablespace are set to autoextend OFF, Oracle automatically uses the datafile to the fullest and ignores undo_retention
If the datafile(s) are set to autoextend ON, Oracle autotunes undo_retention to match query lengths
Check V$undostat for this information

Undo and Read Consistency
Oracle's implementation of MultiVersionReadConsistency relies on a user session being able to read the undo generated by another session
A session may need to read the prior image of data because the data has been modified (and may even have been commited) by another session
It clones the current version of the block it is reading and applies the undo for that block to create its read consistent version
Flashback Query is supported by reading from Undo
Isolation levels (READ COMMITTED, SERIALIZABLE, READ ONLY) 
Read Consistency with READ COMMITTED is at *statement* level by default
A session running multiple queries may each read a different version by default because Read Committed is enforced for each statement
(This also means that if you have a PLSQL block running the same SQL multiple times, each execution can see a different version of the data-- if the data is modified by another session between executions of the SQL !)
A session can choose to set its ISOLATION LEVEL to SERIALIZABLE, which means that every query sees the same version of data
This works only for short running queries and with few changes to the data or read only data.
SERIALIZABLE can update data provided that the same data hasn't been updated and committed by another session after the start (else you get ORA-08177)
READ ONLY does not allow the session to make changes

Transactions
When a transaction is in progress, it is identified by the Transaction Address, Undo segment, slot and sequence
The ITL slot in the block header contains the reference (address) to the Undo
The SCN is assigned at commit time (therefore a transaction doesn't begin with an SCN)

Temp Undo
12c also allows temporary undo
Normally, changes to GTT generate undo which needs to be written to undo segments
With 12c temp undo, those undo entries are also, like the actual changes, temporary and can be discarded when the commit is issued
Thus, the undo doesn't need to be written to disk (remember data in a GTT is not visible to another session, so there is no need to persist the undo)
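
A one-line illustration (12c, session level; a sketch assuming the session may change the parameter):

ALTER SESSION SET temp_undo_enabled = TRUE;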
Redo also captures Undo
One transaction (or multiple concurrent transactions) may have updated multiple database blocks
So, DBWR may have written down some of the modified buffers to disk, even before the transaction COMMIT has been issued
This means that some of the blocks on disk may have uncommitted changes
What happens if the instance were to fail (e.g. a bug takes down a background process or the server crashes due to an OS bug or a CPU failure)?
On instance recovery, Oracle must identify the uncommitted transactions and roll them back
But if the undo for that was only in memory and was lost on instance/server failure, how can Oracle roll back the uncommitted transaction?
Oracle knows that it must "undo" modified blocks
This is done by protecting the undo through the redo as well
Before a modified buffer is written to disk by DBWR, LGWR writes the redo for it
That redo also captures the undo
This ensures that, when Instance Recovery or Media Recovery is needed, the undo is also available
The rollforward process writes the undo to the undo segments
This allows Oracle to roll back the uncommitted transaction because the undo is now on disk (and not lost from memory)

Redo Strands
Redo consists of multiple strands
Since 10g, Oracle has introduced private strands for single-instance databases
This allows a process to manage its private strand of redo until it decides to commit
At commit time, the private strand is written into the public redo area and this allows LGWR to flush the redo to disk

IMU
Similarly, Oracle also manages undo "in memory" (using IMU pools)
This means that, for a short period or for small transactions, undo is managed in memory rather than through undo segments
Therefore, Oracle doesn't have to track undo segment changes in the redo
This also allows bundling the undo for multiple changes into a single redo record, instead of separate redo records

RAC
In RAC, every instance has (a) a separate Redo Thread and (b) a separate Undo Tablespace
However, the redo thread must be readable by every other instance -- as instance recovery by another (surviving) instance needs to read the redo
Similarly, the undo tablespace is read by any other instance because queries in instance 2 may need to read undo of instance 1 for read-consistency
Categories: DBA Blogs

Export APEX application with SQLcl

Kris Rice - Mon, 2016-10-10 19:23
APEXExport has been around a long time for exporting an application and anything else, like images, feedback, websheets, etc., into a file system so that they can be version controlled. This is a must if there is ever a need to roll back or see what the application looked like X days ago. It is a Java program that is part of the APEX distribution. The catch for some folks is that it's a java program and

Input parameters transmission into logging procedure

Tom Kyte - Mon, 2016-10-10 18:06
Hello, Tom! I'm sure, my problem is not nontrivial one, but I've certainly stuck on it and need your help. This is it. I need to write a logging procedure that will catch exceptions from other pl-sql blocks and assembly error messages with the str...
Categories: DBA Blogs

Inconsistent results from queries involving date predicates

Tom Kyte - Mon, 2016-10-10 18:06
Hey there, I am confused to see inconsistent query results from (seemingly) same queries: SELECT NVL(ROUND(SUM(TRD_BRKRG_VL)/10000000,2),0) FROM TRD_TRD_DTLS WHERE TRUNC(TRD_TRD_DT) BETWEEN '01-Apr-2014' AND '31-Mar-2015' ...
Categories: DBA Blogs

identify increasing amount.

Tom Kyte - Mon, 2016-10-10 18:06
Hi Tom, I have just came across a requirement, where I need to identify the increasing amount. say I have a table for quarterly expenses and I need to identify the dept which has increasing expense by 105 or more from previous quarter, and if any dep...
Categories: DBA Blogs

Passing array size for a structure dynamically in PRO*C using a select query in a function.

Tom Kyte - Mon, 2016-10-10 18:06
Hi, I have declared a C variable in declare section and trying to get its value using a select query from a function as follwing, Note: I have mentioned only relevant codelines. ---------------------------------------------------------------- #...
Categories: DBA Blogs
