Feed aggregator

Modify your nodemanager.properties in wlst

Darwin IT - Mon, 2018-01-22 09:12
In 2016 I did several posts on automatic installs of Fusion Middleware, including domain creation using WLST.

With WebLogic 12c you automatically get a pre-configured per-domain nodemanager. But you might find that the configuration does not completely suit your wishes.

It would be nice to update the nodemanager.properties file with your own properties in the same script.

Today I started upgrading our WebLogic Tuning and Troubleshooting training to 12c, and one of the steps is to adapt the domain creation script. In the old script, the AdminServer is started right away to add the managed server to the domain. In my aforementioned script, I do that offline. But since I like to be able to update the nodemanager.properties file as well, I figured out how to do that.

Earlier, I created a function to just write a new property file:
#
# Create a NodeManager properties file.
def createNodeManagerPropertiesFile(javaHome, nodeMgrHome, nodeMgrType, nodeMgrListenAddress, nodeMgrListenPort):
  print ('Create Nodemanager Properties File for home: '+nodeMgrHome)
  print (lineSeperator)
  nmProps=nodeMgrHome+'/nodemanager.properties'
  fileNew=open(nmProps, 'w')
  fileNew.write('#Node manager properties\n')
  fileNew.write('#%s\n' % str(datetime.now()))
  fileNew.write('DomainsFile=%s/%s\n' % (nodeMgrHome,'nodemanager.domains'))
  fileNew.write('LogLimit=0\n')
  fileNew.write('PropertiesVersion=12.2.1\n')
  fileNew.write('AuthenticationEnabled=true\n')
  fileNew.write('NodeManagerHome=%s\n' % nodeMgrHome)
  fileNew.write('JavaHome=%s\n' % javaHome)
  fileNew.write('LogLevel=INFO\n')
  fileNew.write('DomainsFileEnabled=true\n')
  fileNew.write('ListenAddress=%s\n' % nodeMgrListenAddress)
  fileNew.write('NativeVersionEnabled=true\n')
  fileNew.write('ListenPort=%s\n' % nodeMgrListenPort)
  fileNew.write('LogToStderr=true\n')
  fileNew.write('weblogic.StartScriptName=startWebLogic.sh\n')
  if nodeMgrType == 'ssl':
    fileNew.write('SecureListener=true\n')
  else:
    fileNew.write('SecureListener=false\n')
  fileNew.write('LogCount=1\n')
  fileNew.write('QuitEnabled=true\n')
  fileNew.write('LogAppend=true\n')
  fileNew.write('weblogic.StopScriptEnabled=true\n')
  fileNew.write('StateCheckInterval=500\n')
  fileNew.write('CrashRecoveryEnabled=false\n')
  fileNew.write('weblogic.StartScriptEnabled=true\n')
  fileNew.write('LogFile=%s/%s\n' % (nodeMgrHome,'nodemanager.log'))
  fileNew.write('LogFormatter=weblogic.nodemanager.server.LogFormatter\n')
  fileNew.write('ListenBacklog=50\n')
  fileNew.flush()
  fileNew.close()

But this one just rewrites the file, and so I would need to determine the values for properties like DomainsFile, JavaHome, etc., which are already set correctly in the original file. I only want to update the ListenAddress and ListenPort, and possibly the SecureListener property, based on the nodemanager type. Besides that, I want to back up the original file as well.

So, I adapted this function to:
#
# Update the Nodemanager Properties
def updateNMProps(nmPropertyFile, nodeMgrListenAddress, nodeMgrListenPort, nodeMgrType):
  nmProps = ''
  print ('Read Nodemanager properties file: %s' % nmPropertyFile)
  f = open(nmPropertyFile)
  for line in f.readlines():
    if line.strip().startswith('ListenPort'):
      line = 'ListenPort=%s\n' % nodeMgrListenPort
    elif line.strip().startswith('ListenAddress'):
      line = 'ListenAddress=%s\n' % nodeMgrListenAddress
    elif line.strip().startswith('SecureListener'):
      if nodeMgrType == 'ssl':
        line = 'SecureListener=true\n'
      else:
        line = 'SecureListener=false\n'
    # making sure these properties are set to true:
    elif line.strip().startswith('QuitEnabled'):
      line = 'QuitEnabled=%s\n' % 'true'
    elif line.strip().startswith('CrashRecoveryEnabled'):
      line = 'CrashRecoveryEnabled=%s\n' % 'true'
    elif line.strip().startswith('weblogic.StartScriptEnabled'):
      line = 'weblogic.StartScriptEnabled=%s\n' % 'true'
    elif line.strip().startswith('weblogic.StopScriptEnabled'):
      line = 'weblogic.StopScriptEnabled=%s\n' % 'true'
    nmProps = nmProps + line
  f.close()
  # Backup file
  print (nmProps)
  nmPropertyFileOrg=nmPropertyFile+'.org'
  print ('Rename File %s to %s ' % (nmPropertyFile, nmPropertyFileOrg))
  os.rename(nmPropertyFile, nmPropertyFileOrg)
  # Save New File
  print ('\nNow save the changed property file to %s' % nmPropertyFile)
  fileNew=open(nmPropertyFile, 'w')
  fileNew.write(nmProps)
  fileNew.flush()
  fileNew.close()
It first reads the property file, denoted by nmPropertyFile, line by line.
If a line starts with a particular property that I want to set specifically, the line is replaced. Each line is then appended to the nmProps variable. For completeness and validation I print the resulting variable.
Then I rename the original file to nmPropertyFile+'.org' using os.rename(). And lastly, I write the contents of nmProps to the original file name in one go.
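The same read-replace-backup pattern can be generalized into a small helper. This is a minimal, self-contained sketch only; the function name and the dict-based interface are my own, not part of the original script:

```python
import os

def update_properties(path, overrides):
    """Rewrite a properties file in place, backing up the original to <path>.org.

    overrides maps property name -> new value; lines whose key is not in
    overrides (including comment lines) are copied through unchanged.
    """
    new_lines = []
    with open(path) as f:
        for line in f:
            key = line.split('=', 1)[0].strip()
            if key in overrides:
                line = '%s=%s\n' % (key, overrides[key])
            new_lines.append(line)
    os.rename(path, path + '.org')   # keep a backup of the original file
    with open(path, 'w') as f:
        f.writelines(new_lines)

# Hypothetical usage, with a made-up domain path:
# update_properties('/u01/domains/mydomain/nodemanager/nodemanager.properties',
#                   {'ListenAddress': 'localhost', 'ListenPort': '5556',
#                    'QuitEnabled': 'true'})
```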


This brings me again one step further to a completely scripted domain.

Partner Webcast – Exadata Database Machine X7 is the biggest change in Exadata in many years


We share our skills to maximize your revenue!
Categories: DBA Blogs

Data Guard And Notification

Tom Kyte - Mon, 2018-01-22 02:26
As part of my project, we are migrating a legacy system to a new system. As part of one process, we have a reporting database from which different reports are generated. Report generation is started by a trigger (alert) once the replication of producti...
Categories: DBA Blogs

how to recover a bad block in a big file tablespace

Tom Kyte - Mon, 2018-01-22 02:26
Some days ago, I asked a question about big file tablespaces vs normal small file tablespaces, and you said "That said, its rare to need to recover a particular datafile - the most common scenarios are recover a database, or recover/fix some blocks. In eith...
Categories: DBA Blogs

How many context switches my session does?

Tom Kyte - Mon, 2018-01-22 02:26
Hello, My question is already in the subject. Which performance views have information about PL/SQL and SQL context switches? SELECT n.Name FROM v$statname n WHERE n.Name LIKE '%context%' returns only "OS Voluntary context switches" and "OS...
Categories: DBA Blogs

two_tasks

Tom Kyte - Mon, 2018-01-22 02:26
I would like to know why it is not advisable to use "two_tasks". Thank you. Regards
Categories: DBA Blogs

Primary Key

Tom Kyte - Mon, 2018-01-22 02:26
Dear Sir, When asked what is the difference between a Primary Key and a Unique Key, people say a Primary Key cannot be NULL and a Unique Key can be NULL. My question is: if we add a NOT NULL constraint on a Unique Key, we can use the Unique Key constraint...
Categories: DBA Blogs

Announcement: Oracle Indexing Internals Seminars Coming to New Zealand in March 2018 (Shipyards of New Zealand)

Richard Foote - Sun, 2018-01-21 19:14
I’m very pleased to announce I’ve now finalised some dates in New Zealand for my popular and highly acclaimed “Oracle Indexing Internals and Best Practices” seminar. They are: Wellington 12-13 March 2018: Tickets and Registration Link Auckland 15-16 March 2018: Tickets and Registration Link As usual, numbers will be strictly limited due to the small class nature […]
Categories: DBA Blogs

Oracle SOA Suite: Installing the Fusion Middleware SOA Suite

Dietrich Schroff - Sun, 2018-01-21 14:30
After the installation of the Fusion Middleware Infrastructure, the next step is to install the SOA Suite software.

The software can be found here:

The first try failed with this error:

java -d64 -jar fmw_12.2.1.3.0_soa_quickstart.jar
Launcher log file is /tmp/OraInstall2017-10-07_11-47-20PM/launcher2017-10-07_11-47-20PM.log.
Not enough free space in /tmp/orcl3797124329273264119.tmp to extract the installer. Current: 2796 MB. Required: 3532 MB.

Ok. Some cleanups inside /tmp and then:

Next step: Run the config.sh to create a SOA Suite Server....

Be Friend With awk/sed | ASM Mapping

Michael Dinh - Sun, 2018-01-21 11:10

I had a request to add disks to an ASM Disk Group without any further details on which new disks had been added.

I needed to figure out which disks are in ASM now and which disks should be used as the new ones.

Got lazy and created scripts for this for future use.

[root@racnode-dc1-1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01

--- [8,49] is major,minor for device
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

--- Extract major,minor for device
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'
[8,49]

--- Remove [] brackets
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'
[8,49
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'
8,49

--- Alternative option to remove []
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed 's/[][]//g'
8,49

--- Create patterns for grep
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'
8,.*49

--- Test grep using pattern
[oracle@racnode-dc1-1 ~]$ ls -l /dev/* | grep -E '8,.*49'
brw-rw---- 1 root    disk      8,  49 Jan 21 16:42 /dev/sdd1
[oracle@racnode-dc1-1 ~]$

--- Test grep with command line syntax
[oracle@racnode-dc1-1 ~]$ ls -l /dev/*|grep -E `oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
brw-rw---- 1 root    disk      8,  49 Jan 21 16:42 /dev/sdd1
[oracle@racnode-dc1-1 ~]$

--- Run script
[oracle@racnode-dc1-1 ~]$ /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Jan 21 21:42 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Jan 21 21:42 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Jan 21 21:42 /dev/sde1

[oracle@racnode-dc1-1 ~]$

--- ASM Lib version
[oracle@racnode-dc1-1 ~]$ rpm -qa|grep asm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-19.0.1.el7.x86_64
[oracle@racnode-dc1-1 ~]$

--- Script
[oracle@racnode-dc1-1 ~]$ cat /sf_working/scripts/asm_mapping.sh

#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done
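The chained awk/sed pipeline can also be expressed in a few lines of Python. A minimal sketch, based only on the querydisk output format shown above (the function names are mine):

```python
import re

def device_numbers(querydisk_line):
    """Extract (major, minor) from 'oracleasm querydisk -d' output,
    e.g. 'Disk "DATA01" is a valid ASM disk on device [8,49]' -> (8, 49)."""
    m = re.search(r'\[(\d+),(\d+)\]', querydisk_line)
    if m is None:
        raise ValueError('no [major,minor] pair found in: %r' % querydisk_line)
    return int(m.group(1)), int(m.group(2))

def grep_pattern(querydisk_line):
    """Build the same 'major,.*minor' pattern the sed/awk pipeline produces,
    suitable for matching the ls -l /dev/* listing."""
    major, minor = device_numbers(querydisk_line)
    return '%d,.*%d' % (major, minor)
```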
[root@racnode-dc1-1 ~]# fdisk -l /dev/sdd1

Disk /dev/sdd1: 8587 MB, 8587837440 bytes, 16773120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@racnode-dc1-1 ~]#

Using SQL*Plus to Seed your Dockerized Oracle Database

Debu Panda - Sat, 2018-01-20 19:28
In my last blog, you learned to create a containerized Oracle database for development/testing purposes. You also learned how to connect to your container and run commands in the container.

Most applications require some reference data. For example, my OrderApp application, based on Apache TomEE, requires catalog data to be pre-populated before I can test the application.

One of the readers asked me how to run a SQL script from a local or shared drive to seed the containerized database.

In this blog, I will show how you can execute scripts with SQL*Plus inside the container to seed your dockerized Oracle database.

Connecting to SQL*Plus

In the last blog, we learned that ORACLE_HOME for the database running in the container is /u01/app/oracle/product/12.1.0/dbhome_1.

I can connect to the database by running the following command:

docker exec -it orcldb /u01/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus system/welcome1@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 20 06:22:58 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Jan 14 2018 03:09:54 +00:00

Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL>


You might remember that ORCL was the instance name that I provided for my database.

Also note that when I run the command, SQL*Plus is getting executed inside the container.

Running a SQL Script Using SQL*Plus

As the command is executed inside the container running the Oracle database, the SQL script has to be accessible from within the container.
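For scripting, the docker exec invocation used in this post can be assembled programmatically. This is a sketch only; the helper name is mine, and the container name, ORACLE_HOME, and script paths follow the examples in this post:

```python
def sqlplus_exec_cmd(container, oracle_home, connect_string, script=None):
    """Assemble the 'docker exec' argument list that runs SQL*Plus inside
    a database container, optionally executing a script."""
    cmd = ['docker', 'exec', '-it', container,
           oracle_home + '/bin/sqlplus', connect_string]
    if script:
        cmd.append('@' + script)  # script path as seen *inside* the container
    return cmd

# Hypothetical usage (subprocess.call(cmd) would then run it):
# cmd = sqlplus_exec_cmd('orcldb', '/u01/app/oracle/product/12.1.0/dbhome_1',
#                        'system/welcome1@PDB1', '/u04/app/sql/user')
```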

My Script

My application depends upon a user in the PDB. My script creates the user, creates tables in that user's schema and populates data in those tables. 

I have a script named user.sql that I want to execute and here are the contents of /Users/dpanda/orderapp2/orcl/sql/user.sql script.


create user orderapp identified by orderapp2
default tablespace users temporary tablespace temp
/

alter user orderapp quota unlimited on users
/
grant connect, resource to orderapp
/
connect orderapp/orderapp@pdb1
@/u04/app/sql/sample_oow_tomcat_create.sql
@/u04/app/sql/sample_oow_productline.sql
commit;
exit;

As I am invoking the SQL*Plus inside the container, I have to specify the drive inside the container.

Mapping Volume from the Container to Local or Shared Drive

You might remember from the last blog that when I started the database container, I mapped /u04/app in the container to /Users/dpanda/orderapp2/orcl using the –v option, as below:

docker run -d --env-file db.properties -p 1521:1521 -p 5500:5500 --name orcldb --net appnet  --shm-size="4g" -v /Users/dpanda/orderapp2/orcl:/u04/app container-registry.oracle.com/database/standard

The script directory has to be specified as /u04/app/sql, as my script is located in the /Users/dpanda/orderapp2/orcl/sql directory on my Mac.
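That host-to-container path translation follows mechanically from the volume mapping; a tiny sketch (the helper name is mine, not part of Docker):

```python
def to_container_path(host_path, host_mount, container_mount):
    """Map a host path to the corresponding path inside the container,
    given a '-v host_mount:container_mount' volume mapping."""
    root = host_mount.rstrip('/')
    if not host_path.startswith(root + '/'):
        raise ValueError('%s is not under the mounted directory %s'
                         % (host_path, host_mount))
    return container_mount.rstrip('/') + host_path[len(root):]

# to_container_path('/Users/dpanda/orderapp2/orcl/sql/user.sql',
#                   '/Users/dpanda/orderapp2/orcl', '/u04/app')
# yields '/u04/app/sql/user.sql'
```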


Here is the docker command I can use to run my script:

docker exec -it orcldb /u01/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus system/welcome1@PDB1 @/u04/app/sql/user

As you can see, I am connecting to the PDB1 database by executing SQL*Plus and running the user.sql script.

Here is the output you will get:

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 13 06:16:32 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sat Jan 13 2018 06:16:19 +00:00

Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production


User created.


User altered.


Grant succeeded.

…..


1 row created.


1 row created.


1 row created.


Commit complete.

Disconnected from Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production


Hope this helps you automate the scripts that seed your containerized Oracle database.


In a future blog, I will demonstrate how you can use Oracle Instant Client in a Docker container to automate your scripts.

Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)

Yann Neuhaus - Sat, 2018-01-20 16:16

In the Oracle Database Cloud DBaaS you provision a multitenant database where tablespaces are encrypted. This means that when you unplug/plug the pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet, because the wallet contains all CDB keys. Usually you can be guided by the error messages, but this one needs a little explanation and an example.

Here I’ll unplug PDB6 from CDB1 and plug it into CDB2

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
22:22:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB6 READ WRITE NO

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb1 AcyH+Z... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
3 pdb6 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
 
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
 
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it in the CDB$ROOT and open it with a password.

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB6 altered.

The PDB is opened in restricted mode and then I have to import the wallet:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
6 PDB6 READ WRITE YES
 
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS MESSAGE ACTION
---- ----- ---- ------ ------- ------
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
 
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb2 AdTdo9... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
4 pdb1 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

 

The post Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported) appeared first on Blog dbi services.

Who used apex.oracle.com in 2017?

Joel Kallman - Sat, 2018-01-20 09:55
A number of years ago, I provided statistics on the geographic distribution of people who used the free evaluation service for Oracle Application Express at https://apex.oracle.com.  I did this in 2008, 2009 and 2011.  It's time for an update.

I've included a graphic of the top 30 and the full report below, and here are my observations:

  • Since 2008, the number of visitor sessions to apex.oracle.com increased 280 times! That's not a percentage, but times.  Percentage-wise, that's 27,985% growth from 2008 to 2017.
  • In 2008, there were 1 or more visitor sessions from 122 countries/territories.  In 2017, that number increased to 212 countries/territories.
  • The USA, India and the United Kingdom remain in the top 3.  But the UK is barely hanging onto the #3 position.
  • Colombia vaulted from #11 in 2011 to #6 in 2017.  Bangladesh jumped from #26 to #14.  Japan jumped from #19 to #9.  Wow!
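The growth percentage quoted above follows from simple ratio arithmetic; a quick sketch (the conversion function is my own):

```python
def growth_pct(ratio):
    """Convert a 'grew N times' multiple into a percentage-growth figure."""
    return (ratio - 1) * 100.0

# 'increased 280 times' is (280 - 1) * 100 = 27,900% growth; the quoted
# 27,985% corresponds to a slightly more precise multiple of about 280.85x.
```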

Usage of apex.oracle.com doesn't necessarily translate into usage of APEX.  These statistics are only a measurement of who is logging into apex.oracle.com - the actual usage of APEX is far greater.  Also, I fully anticipate the comment "...but as a percentage of population size, our country has the top usage in the world!" (here's looking at you, Netherlands).  But just because someone's country has declined in the rankings doesn't mean that APEX isn't growing there....it's just growing in other countries too!

Lastly, the statistics presented below are sessions, not to be confused with activity.  At the time of this writing, there are 36,133 workspaces on apex.oracle.com.  In the past 7 days, there were 4,643,958 page views, and 1,808 new workspaces requested and approved.

Not only is usage of APEX growing on a global basis, but that growth is accelerating.

Hard Partitioning with Oracle VM Server

Amis Blog - Sat, 2018-01-20 09:03

Some quick notes to “pin” (or hard partition) a virtual machine to a specific core.

Download OVM utils which is found in patch 13602094 (Oracle Support): ORACLE VM 3.0 UTILS RELEASES: 1.0.2, 2.0.1, 2.1.0.

When you extract the zip file you will find three zip files for the different Oracle VM versions:
Patch Details
ovm utils now consists of 3 packages
* ovm_utils_1.0.2.zip : for Oracle VM versions 3.0, 3.1 and 3.2
* ovm_utils_2.0.1.zip : for Oracle VM version 3.3
* ovm_utils_2.1.0.zip : for Oracle VM version 3.4

Extract the correct version on the Oracle VM Manager server to /u01/app/oracle/ovm-manager-3/

You can use “xm info” on an Oracle VM Server to print out CPU information.
This server has one socket with four cores and two threads per core.

The “xenpm get-cpu-topology” command prints out the thread/core/socket topology:
xenpm get-cpu-topology
CPU0 is thread 1 of core 0 and CPU1 is thread 2 of core 0.

You can check which virtual machines are using which CPU’s with the command “xm vcpu-list”:
xm vcpu-list
The actual pinning is performed on the Oracle VM Manager server by using the command “ovm_vmcontrol”.

Use the parameter “-c getvcpu” to get the current hard partitioning information for a virtual machine. Use the parameter “-c setvcpu” to pin a virtual machine to a CPU.
Stop and start the virtual machine after pinning the CPU’s.

You can also pin CPUs by editing vm.cfg. Stop and start the virtual machine after making changes to the vm.cfg file.
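Once VMs are pinned, the mapping can be checked by parsing the “xm vcpu-list” output. A minimal sketch, assuming the standard xm vcpu-list column layout (the parser and the sample handling are my own):

```python
def parse_vcpu_list(output):
    """Parse 'xm vcpu-list' output into (domain, vcpu, cpu, affinity) tuples.

    Columns are: Name ID VCPU CPU State Time(s) CPU-Affinity.
    CPU is kept as a string because it can be '-' for a paused VCPU."""
    rows = []
    for line in output.strip().splitlines()[1:]:   # skip the header row
        parts = line.split()
        name, vcpu, cpu = parts[0], int(parts[2]), parts[3]
        affinity = ' '.join(parts[6:])             # e.g. 'any cpu' or '2'
        rows.append((name, vcpu, cpu, affinity))
    return rows
```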

The post Hard Partitioning with Oracle VM Server appeared first on AMIS Oracle and Java Blog.

Blocking Prepared XA Transaction

Dominic Brooks - Sat, 2018-01-20 06:20

There was an oracle-l thread last month about blocking sessions which could not be identified.

I replied back about a very niche scenario which I’d seen a few times before. And it’s just happened again so I thought it would be useful to just post some evidence on this.

Here are a couple of articles posted previously on XA transactions:

First, a reminder, from the posts above, that a transaction doesn’t have to have a session.

With normal “vanilla” sessions, there’s a 1:1 relationship. But with an XA transaction, a session can attach and detach – but only one session can be attached to a transaction at any one time.

And a bit of context about these XA transactions.

This is a complex Java application with multiple resources participating in the XA transaction. HornetQ was involved; it was apparently timing out and was bounced, the taking down probably being done with a kill -9.

And an XA transaction does what is called a 2-phase commit where the first phase involves the transaction co-ordinator instructing the participating resources to prepare to commit and subsequently the second phase is to commit.

What I have seen many times on this application is that something happens to the transaction co-ordinator in between the PREPARE and the COMMIT and we are left we an orphaned transaction.

This transaction will eventually time-out – depending on the properties set on the transaction by the co-ordinator – at which point it will become an in-doubt transaction and be visible in dba_2pc_pending.

Back to reality… there is nothing in DBA_2PC_PENDING:

select * from dba_2pc_pending;

no rows selected

But we can see there is a transaction in prepared state in V$TRANSACTION, a transaction which started yesterday afternoon:

select * from v$transaction where status = 'PREPARED';
ADDR                 XIDUSN    XIDSLOT     XIDSQN     UBAFIL     UBABLK     UBASQN     UBAREC STATUS           START_TIME           START_SCNB START_SCNW START_UEXT START_UBAFIL START_UBABLK START_UBASQN START_UBAREC SES_ADDR               FLAG SPACE RECURSIVE NOUNDO PTX NAME                                                                                                                                                                                                                                                             PRV_XIDUSN PRV_XIDSLT PRV_XIDSQN PTX_XIDUSN PTX_XIDSLT PTX_XIDSQN     DSCN-B     DSCN-W  USED_UBLK  USED_UREC     LOG_IO     PHY_IO     CR_GET  CR_CHANGE START_DATE            DSCN_BASE  DSCN_WRAP  START_SCN DEPENDENT_SCN XID              PRV_XID          PTX_XID        
---------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------------- -------------------- ---------- ---------- ---------- ------------ ------------ ------------ ------------ ---------------- ---------- ----- --------- ------ --- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- -------------------- ---------- ---------- ---------- ------------- ---------------- ---------------- ----------------
0000004A833D6868        691         32     315541         22    1085802     -28624         18 PREPARED         01/19/18 15:39:45    3454176189       2913         13           22      1085802       -28624            9 0000004B82E584A0    4201987 NO    NO        NO     NO                                                                                                                                                                                                                                                                            0          0          0          0          0          0          0          0          1         12     -40287       -163  -66597824      -1385 19-JAN-2018 15:39:45          0          0    1.3E+13             0 B302200095D00400 0000000000000000 0000000000000000 


And this is the script which I mentioned on the oracle-l thread which is one I use to see what locks transaction are holding, when the transaction started and what sessions are attached:

select s.machine
,      lo.inst_id
,      lo.object_id
,      lo.session_id
,      lo.os_user_name
,      lo.process
,      ob.owner
,      ob.object_name
,      ob.subobject_name
,      tx.addr
,      tx.start_time txn_start_time
,      tx.status
,      tx.xid
,      s.*
from   gv$locked_object lo
,      dba_objects      ob
,      gv$transaction    tx
,      gv$session        s
where  ob.object_id = lo.object_id
and    tx.xidusn    (+) = lo.xidusn
and    tx.xidslot   (+) = lo.xidslot
and    tx.xidsqn    (+) = lo.xidsqn
and    s.taddr      (+) = tx.addr
order by txn_start_time, session_id, object_name;

For privacy reasons and as this is a real-world situation and not an isolated test case, I won’t share the output of the script.

But it shows that:

  • the transaction is holding mode 6 exclusive TX row locks on a number of objects
  • that the transaction is in PREPARED
  • and that there is no session attached to the transaction (although v$locked_object does tell us what the SID was when it was there)

Now, from the application perspective, something has apparently rolled back a message. HornetQ has been bounced, everything is back up and running, and it seems the message that resulted in our orphaned transaction is being reattempted and is being blocked by the exclusive locks still being held.

From an ACTIVE SESSION HISTORY perspective, this is what we see from this script from which I’ve removed columns for brevity and privacy but left them in so you can see what I run normally:

select count(*) over (partition by h.sample_time) sess_cnt
--,      h.user_id
--,      (select username from dba_users u where u.user_id = h.user_id) u, h.service_hash
,      xid--, sample_id
, sample_time, session_state, session_id, session_serial#,--sql_id,
sql_exec_id, sql_exec_start, event, --p1,
mod(p1,16), blocking_session,blocking_session_serial#--, current_obj#
--,      (select object_name||' - '||subobject_name from dba_objects where object_id = current_obj#) obj
--,      (select sql_fulltext from v$sql s where s.sql_id = h.sql_id and rownum = 1) sqltxt
--,      (select sql_text from dba_hist_sqltext s where s.sql_id = h.sql_id and rownum = 1) sqltxt
--, h.*
from   v$active_session_history h
where event = 'enq: TX - row lock contention'
order by h.sample_id desc;
XID              SESSION_STATE SESSION_ID SESSION_SERIAL# EVENT                                                            MOD(P1,16) BLOCKING_SESSION BLOCKING_SESSION_SERIAL#
---------------- ------------- ---------- --------------- ---------------------------------------------------------------- ---------- ---------------- ------------------------
4F021700A3C00600 WAITING              232           53035 enq: TX - row lock contention                                             6
FC010B00196E0A00 WAITING              471            5205 enq: TX - row lock contention                                             6
670320004FA50300 WAITING             2652           11791 enq: TX - row lock contention                                             6
640204005BA40500 WAITING             4300           49665 enq: TX - row lock contention                                             6

So, you can see we have four sessions being blocked on exclusive mode 6 row level locks and that the blocking session is null. If I showed you the full history then you would see that these sessions have been repeatedly trying for many hours.

BTW, this is RAC but all these sessions are intentionally on the same instance so there’s none of that jiggery-pokery involved.

I imagine at some point there was an internal conversation in Oracle about whether to report blocking session or blocking transaction.

At this point, it’s just a normal lock held by a “normal” transaction which hasn’t committed yet. The application just keeps retrying the transaction: each attempt waits for one minute until it hits the default distributed transaction timeout:

ORA-02049: timeout: distributed transaction waiting for lock

which will be logged somewhere obscure in the application logs – and there’ll be a brief pause and then it starts all over again.
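That one-minute wait is governed by the DISTRIBUTED_LOCK_TIMEOUT initialization parameter, which defaults to 60 seconds. As a quick sanity check (a sketch; on a default system you'd expect a value of 60):

```sql
-- How long a session waits on a lock in a distributed transaction
-- before ORA-02049 is raised (default 60 seconds)
select name, value, isdefault
from   v$parameter
where  name = 'distributed_lock_timeout';
```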

Anyway at this point what to do?

At this point, the transaction hasn’t timed out.

In a few hours, the transaction will time out and become an in-doubt transaction.

Once it does, the application will receive a message:

ORA-01591: lock held by in-doubt distributed transaction

At which time, it can be manually resolved.

This is what we usually do:

begin
  for txn in (select local_tran_id from dba_2pc_pending)
  loop
    -- if txn is not already forced rollback
    execute immediate 'rollback force '''||txn.local_tran_id||'''';
    commit;
    dbms_transaction.purge_lost_db_entry(txn.local_tran_id);
    commit;
  end loop;
end;
/

If we were going to intervene right now and resolve it, what could we do?

We could bounce the database. Then the transaction would become in-doubt and see above.

But bouncing the database is quite drastic.

A lot of the advice in the manual is about what to do once it is IN-DOUBT. Some of it might work now – not sure.

What I advise is this: if we know we want to roll back or commit this transaction now, and we can’t wait for it to become IN-DOUBT (which often we can’t), then we can programmatically attach to the transaction using DBMS_XA and do something with it.

First of all, we need some information about the transaction.
CAVEAT DBA!!! This isn’t exactly well documented but I have found that what we tend to need is to identify the transactions in V$GLOBAL_TRANSACTION which are in COLLECTING state:

select state
,      utl_raw.cast_to_binary_integer(globalid)
,      utl_raw.cast_to_binary_integer(branchid)
,      t.*
from   v$global_transaction t
where  state = 'COLLECTING';
STATE                                  UTL_RAW.CAST_TO_BINARY_INTEGER(GLOBALID) UTL_RAW.CAST_TO_BINARY_INTEGER(BRANCHID)   FORMATID GLOBALID                                                                                                                         BRANCHID                                                                                                                           BRANCHES   REFCOUNT PREPARECOUNT STATE                                       FLAGS COUPLING      
-------------------------------------- ---------------------------------------- ---------------------------------------- ---------- -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ---------- ---------- ------------ -------------------------------------- ---------- ---------------
COLLECTING                                                                   49                                       45     131075 312D2D35363832376566363A393236643A35613562363664363A633738353036                                                                 2D35363832376566363A393236643A35613562363664363A633738353065                                                                              1          1            1 COLLECTING                                      1 TIGHTLY COUPLED 


Then we can replace the formatid, global id and branch id in the script below. Whether you pass numbers or raw values depends on the signature of DBMS_XA_XID – see the documentation.

set serveroutput on
DECLARE
  l_xid    DBMS_XA_XID :=
             DBMS_XA_XID(131075,
                         '312D2D35363832376566363A393236643A35613562363664363A633738353036',
                         '2D35363832376566363A393236643A35613562363664363A633738353065');
  l_return PLS_INTEGER;
BEGIN
  l_return := SYS.dbms_xa.xa_rollback(xid => l_xid);
  dbms_output.put_line(l_return);
END;
/

This approach above comes with no guarantees.
But it has worked for me several times in the past.
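For completeness: if the decision is to commit the stranded transaction rather than roll it back, DBMS_XA also exposes XA_COMMIT. This is a sketch only, mirroring the rollback block above and reusing the same XID values pulled from V$GLOBAL_TRANSACTION; the same no-guarantees caveat applies:

```sql
set serveroutput on
DECLARE
  l_xid    DBMS_XA_XID :=
             DBMS_XA_XID(131075,
                         '312D2D35363832376566363A393236643A35613562363664363A633738353036',
                         '2D35363832376566363A393236643A35613562363664363A633738353065');
  l_return PLS_INTEGER;
BEGIN
  -- onePhase => TRUE requests a one-phase commit of this branch
  l_return := SYS.dbms_xa.xa_commit(xid => l_xid, onePhase => TRUE);
  dbms_output.put_line(l_return);  -- 0 (DBMS_XA.XA_OK) indicates success
END;
/
```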

Best Practice: Does the SQL Mantra simplify Data Verification?

Tom Kyte - Fri, 2018-01-19 19:46
Tom-- I subscribe to your SQL mantra for data (and truly, is there anything else? :)) which is: 1. Do it in a single SQL statement if at all possible. 2. If you cannot, then do it in PL/SQL (as little PL/SQL as possible!). 3. If you cannot do it...
Categories: DBA Blogs

Loading records in separate rows using sqlloader

Tom Kyte - Fri, 2018-01-19 19:46
Hi, First of all thanks a lot for answering my previous questions, that helped me a lot. Now I came up with a new questions regarding SQL loader. So, basically I have a file that contains a records like below A|B|C|D| Now, I have a requirem...
Categories: DBA Blogs

Switch objects to add where clause

Tom Kyte - Fri, 2018-01-19 19:46
Hi , We have a process that cannot be changed and that executes a query over a specific table. Due to a new development it is necessary to had a new statment to the WHERE clause of the query. But the executable cannot be changed. So the idea w...
Categories: DBA Blogs

Transaction set consistency

Tom Kyte - Fri, 2018-01-19 19:46
hello , i am reading <<Database Concepts>> , i can not understand "Transaction set consistency" clearly , will you explain this to me pls ? can you show me some simple examples to show what is "Transaction set consistency" ? thanks . the...
Categories: DBA Blogs

Quarterly EBS Upgrade Recommendations: Jan 2018 Edition

Steven Chan - Fri, 2018-01-19 11:54

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, latest support timelines, and newly-certified releases.

You can research these yourself using this Note:

Upgrade Recommendations for January 2018

Check your EBS support status and patching baseline

EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to September 30, 2023.
EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
EBS 12.0: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
EBS 11.5.10: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

EBS 12.2: 12.2.7 (Sept. 2017)
EBS 12.1: 12.1.3 RPC5 (Aug. 2016)
EBS 12.0: 12.0.6
EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

EBS 12.2: StartCD 51 (Feb. 2016)
EBS 12.1: StartCD 13 (Aug. 2011)
EBS 12.0: 12.0.6
EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

EBS 12.2: AD/TXK Delta 10 (Sept. 2017); FND (Apr. 2017); EBS 12.2.6 OAF Update 8 (Dec. 2017); EBS 12.2.5 OAF Update 18 (Dec. 2017); EBS 12.2.4 OAF Update 18 (Dec. 2017); ETCC (Oct. 2017); Web Tier Utilities 11.1.1.9; Daylight Savings Time DSTv28 (Nov. 2016); upgrade to JDK 7
EBS 12.1: FND (Apr. 2017); OAF Bundle 5 (Jun. 2016); JTT Update 4 (Oct. 2016); Daylight Savings Time DSTv28 (Nov. 2016); upgrade to JDK 7

Apply the latest security updates

EBS 12.2 and 12.1: Jan. 2018 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager; migrate from SSL or TLS 1.0 to TLS 1.2; sign JAR files
EBS 12.0: Oct. 2015 Critical Patch Update
EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components

EBS 12.2 and 12.1: Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements; switch to Java Web Start; upgrade to IE 11; upgrade to Firefox ESR 52; upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016); upgrade Windows XP, Vista, and Win 10v1507 to later versions (e.g. Windows 10v1607)

Upgrade to the latest database

All releases (12.2, 12.1, 12.0, 11.5.10): Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

EBS 12.2: Upgrade to Oracle Access Manager 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9
EBS 12.1: Migrate from Oracle SSO to OAM 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9

If you're using Oracle Discoverer

EBS 12.2 and 12.1: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA). Discoverer 11.1.1.7 is in Sustaining Support as of June 2017.

If you're using Oracle Portal

EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9 or upgrade to Portal 11.1.1.6 (End of Life Jun. 2017)
Categories: APPS Blogs
