Feed aggregator

New on Oracle Mobile Cloud Service 2.0

Our work to simplify mobile development has led us to develop important new features in the new release of Oracle Mobile Cloud Service 2.0, such as Mobile Application Accelerator (MAX), a new...


Scripted Domain Creation for SOA/BPM, OSB 12.2.1

Darwin IT - Thu, 2016-06-09 02:53
Recently I blogged about the automatic install of SOASuite and ServiceBus 12.2.1. It catered for the installation of the binaries and the creation of a repository.

What it does not handle is the creation of a domain. Over the last few weeks I worked on a script to do that. It's based on a WLST script for 12.1.3 by Edwin Biemond, which was quite educational for me. But as noted, it was made for 12.1.3, and I wanted more control over what artefacts are created and how. So I gave it my own swing, also adapting it to some wishes of my current customer.

Although some improvements are certainly conceivable, and will probably be made in future uses, for now it is ready to share with the world.

One of the changes I made was to divide the main function into sections to make the structure of the script clear. I also moved some duplicate code and functional parts into separate functions.

Let me describe the sections first.

1. Create Base domain
The script starts with creating a base domain. It reads the default wls template 'wls.jar' and sets the log properties of the domain. It then adapts the AdminServer to:
  • change the name as set in the property file: you can have your own naming convention;
  • change the listen address and port;
  • set default SSL settings;
  • set log file properties.
Then it sets the admin password on the domain, sets the server start mode and saves the domain.
It then creates boot.properties files for the nodemanager and AdminServer and sets the password of the NodeManager. Finally it sets the Applications home folder.

2. Extending the domain with templates
The second section extends the domain with templates. Another improvement I made is that you can select which components you want to add by toggling the appropriate 'xxxEnabled' switches in the property file, where xxx stands for the component (for instance 'soa', 'bpm', 'osb', 'bam', 'ess', etc.).

It supports the following components:
  • ServiceBus
  • SOA and BPM Suite and B2B
  • BAM
  • Enterprise Scheduler Service
Components such as Managed File Transfer can be added quite easily.
3. DataSources
Section 3 takes care of pointing the datasources to the created repository, based on the repository user '{Prefix}_STB', via the 'LocalSvcTblDataSource' datasource. In the property file you need to set:
  • soaRepositoryDbUrl: jdbc connect string to the repository database
  • soaRepositoryDbUserPrefix=prefix used in the Repository creation 
  • soaRepositoryStbPwd=Password for the {Prefix}_STB user.
It will adapt the LocalSvcTblDataSource with these properties and use it to load the information on the other repository users, in order to create and adapt the datasources needed by the modules with which the domain is extended. These datasources are then made XA-enabled.

4. Create UnixMachines, Clusters and Managed Servers
This section creates Machine definitions of type 'Unix', based on the properties:
  • server1Address=darlin-vce-db.darwin-it.local
  • server1Machine=darlin-vce-db
  • server2Enabled=true
  • server2Address=darlin-vce-db2.darwin-it.local
  • server2Machine=darlin-vce-db2
The machine denoted by 'server1Machine' is always created. The AdminServer is added to it, as well as all the first, default, managed servers. The machine denoted by the property 'server2Machine' is only created when 'server2Enabled' is set to true.

I realize that 'server' in this context might be a little confusing. By serverYAddress and serverYMachine I actually mean a server host, not a managed or admin server.

For each component to configure (soa, osb, etc.) a cluster is created, denoted by for instance osbClr or soaClr.

When you extend the domain with SOA Suite or OSB, a managed server called 'soa_server1' or 'osb_server1' is automatically created with the appropriate deployments targeted to it. In Edwin's script these are removed and new ones are created. I ran into problems with that and found it quite unnecessary, since we can rename the existing ones to the names given in the property file, denoted by soaSvr1 or osbSvr1, etc., as is done with the AdminServer. So I keep the already created servers, but rename them to the desired value.

These first servers are added to the appropriate cluster, which magically re-targets the deployments to that cluster.

Then, if enabled via osbSvr2Enabled or soaSvr2Enabled, etc., the 'second' servers are created and added to the corresponding cluster.

5. Add Servers to ServerGroups
New in 12c is the concept of ServerGroups. In 11g you had only one definition of USER_MEM_ARGS in setDomainEnv.sh/cmd, which applied to every server (admin or managed) started using the start(Managed)Weblogic.sh/cmd scripts. But in 12c the determination of the USER_MEM_ARGS is done in a separate script: setStartupEnv.sh/cmd.
In this script the determination is based on so-called ServerGroups. This provides a means to differentiate the memory settings for the particular servers, which was lacking in 11g.

So in this section all the Managed and Admin Servers are added to a particular ServerGroup.
6. Create boot properties files
Lastly, for each created managed server a boot.properties file with the username and password is created. Smart: I used to do this every single time by hand...

The example property file
Here's an example of the property file:
#############################################################################
# Properties for creating the SOA Domain
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-15
#
#############################################################################
#
fmwHome=/u01/app/oracle/FMW12210
#
soaDomainName=osb_domain
domainsHome=/u01/app/work/domains
applicationsHome=/u01/app/work/applications
productionMode=true
#
# Server Settings
server1Address=darlin-vce-db.darwin-it.local
server1Machine=darlin-vce-db
server2Enabled=true
server2Address=darlin-vce-db2.darwin-it.local
server2Machine=darlin-vce-db2
#
# Properties for AdminServer
adminServerName=AdminServer
adminListenAddress=darlin-vce-db
adminListenPort=7001
adminJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
# Properties for OSB
osbEnabled=true
osbJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
osbClr=OsbCluster
osbSvr1=OsbServer1
osbSvr1Port=8011
osbSvr2Enabled=true
osbSvr2=OsbServer2
osbSvr2Port=8012
# Properties for SOA
soaEnabled=true
bpmEnabled=true
b2bEnabled=true
soaJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=752m -Xms1024m -Xmx1532m
soaClr=SoaCluster
soaSvr1=SoaServer1
soaSvr1Port=8001
soaSvr2Enabled=true
soaSvr2=SoaServer2
soaSvr2Port=8002
# Properties for ESS
essEnabled=true
essJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
essClr=essCluster
essSvr1=EssServer1
essSvr1Port=8021
essSvr2Enabled=true
essSvr2=EssServer2
essSvr2Port=8022
# Properties for BAM
bamEnabled=true
bamJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
bamClr=BamCluster
bamSvr1=BamServer1
bamSvr1Port=9001
bamSvr2Enabled=true
bamSvr2=BamServer2
bamSvr2Port=9002
# AdminUser
adminUser=weblogic
adminPwd=welcome1
# SoaRepository Settings
soaRepositoryDbUrl=jdbc:oracle:thin:@darlin-vce-db.darwin-it.local:1521/pdborcl
soaRepositoryDbUserPrefix=DEV
soaRepositoryStbPwd=DEV_STB
# Logs
logsHome=/u01/app/work/logs
fileCount=10
fileMinSize=5000
fileTimeSpan=24
rotationType=byTime
#
# Settings
webtierEnabled=false
jsseEnabled=false

Save it with a name like darlin-vce-db.properties, adapted for each particular environment, and pass it to the script via WLST's -loadProperties option.

The script
(And of course I don't mean the band that my daughter likes...)
#############################################################################
# Create a SOA/BPM/OSB domain
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-09
#
#############################################################################
# Modify these values as necessary
import sys, traceback
scriptName = 'createSoaBpmDomain.py'
#
#Home Folders
wlsHome = fmwHome+'/wlserver'
soaDomainHome = domainsHome+'/'+soaDomainName
soaApplicationsHome = applicationsHome+'/'+soaDomainName
#
# Templates for 12.1.3
#wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
#oracleCommonTplHome=fmwHome+'/oracle_common/common/templates'
#wlservicetpl=oracleCommonTplHome+'/oracle.wls-webservice-template_12.1.3.jar'
#osbtpl=fmwHome+'/osb/common/templates/wls/oracle.osb_template_12.1.3.jar'
#applCoreTpl=oracleCommonTplHome+'/wls/oracle.applcore.model.stub.1.0.0_template.jar'
#soatpl=fmwHome+'/soa/common/templates/wls/oracle.soa_template_12.1.3.jar'
#bamtpl=fmwHome+'/soa/common/templates/wls/oracle.bam.server_template_12.1.3.jar'
#bpmtpl=fmwHome+'/soa/common/templates/wls/oracle.bpm_template_12.1.3.jar'
#essBasicTpl=oracleCommonTplHome+'/wls/oracle.ess.basic_template_12.1.3.jar'
#essEmTpl=fmwHome+'/em/common/templates/wls/oracle.em_ess_template_12.1.3.jar'
#ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template_12.1.3.jar'
#b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template_12.1.3.jar'
#
# Templates for 12.2.1
wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
oracleCommonTplHome=fmwHome+'/oracle_common/common/templates'
wlservicetpl=oracleCommonTplHome+'/wls/oracle.wls-webservice-template.jar'
osbtpl=fmwHome+'/osb/common/templates/wls/oracle.osb_template.jar'
applCoreTpl=oracleCommonTplHome+'/wls/oracle.applcore.model.stub_template.jar'
soatpl=fmwHome+'/soa/common/templates/wls/oracle.soa_template.jar'
bamtpl=fmwHome+'/soa/common/templates/wls/oracle.bam.server_template.jar'
bpmtpl=fmwHome+'/soa/common/templates/wls/oracle.bpm_template.jar'
essBasicTpl=oracleCommonTplHome+'/wls/oracle.ess.basic_template.jar'
essEmTpl=fmwHome+'/em/common/templates/wls/oracle.em_ess_template.jar'
ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template.jar' # need to be validated!
b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template.jar' # need to be validated!
#
# ServerGroup definitions
adminSvrGrpDesc='WSM-CACHE-SVR WSMPM-MAN-SVR JRF-MAN-SVR'
adminSvrGrp=["WSM-CACHE-SVR" , "WSMPM-MAN-SVR" , "JRF-MAN-SVR"]
essSvrGrpDesc="ESS-MGD-SVRS"
essSvrGrp=["ESS-MGD-SVRS"]
soaSvrGrpDesc="SOA-MGD-SVRS"
soaSvrGrp=["SOA-MGD-SVRS"]
bamSvrGrpDesc="BAM12-MGD-SVRS"
bamSvrGrp=["BAM12-MGD-SVRS"]
osbSvrGrpDesc="OSB-MGD-SVRS-COMBINED"
osbSvrGrp=["OSB-MGD-SVRS-COMBINED"]
#
#
lineSeperator='__________________________________________________________________________________'
#
#
def usage():
print 'Call script as: '
print 'Windows: wlst.cmd '+scriptName+' -loadProperties localhost.properties'
print 'Linux: wlst.sh '+scriptName+' -loadProperties environment.properties'
print 'Property file should contain the following properties: '
print "adminUrl='localhost:7101'"
print "adminUser='weblogic'"
print "adminPwd='welcome1'"
#
# Create a boot properties file.
def createBootPropertiesFile(directoryPath,fileName, username, password):
print ('Create Boot Properties File for folder: '+directoryPath)
print (lineSeperator)
serverDir = File(directoryPath)
bool = serverDir.mkdirs()
fileNew=open(directoryPath + '/'+fileName, 'w')
fileNew.write('username=%s\n' % username)
fileNew.write('password=%s\n' % password)
fileNew.flush()
fileNew.close()
#
# Create Startup Properties File
def createAdminStartupPropertiesFile(directoryPath, args):
print 'Create AdminServer Boot Properties File for folder: '+directoryPath
print (lineSeperator)
adminserverDir = File(directoryPath)
bool = adminserverDir.mkdirs()
fileNew=open(directoryPath + '/startup.properties', 'w')
args=args.replace(':','\\:')
args=args.replace('=','\\=')
fileNew.write('Arguments=%s\n' % args)
fileNew.flush()
fileNew.close()
#
# Set Log properties
def setLogProperties(logMBeanPath, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
print '\nSet Log Properties for: '+logMBeanPath
print (lineSeperator)
cd(logMBeanPath)
print ('Server log path: '+pwd())
print '. set FileName to '+logFile
set('FileName' ,logFile)
print '. set FileCount to '+str(fileCount)
set('FileCount' ,int(fileCount))
print '. set FileMinSize to '+str(fileMinSize)
set('FileMinSize' ,int(fileMinSize))
print '. set RotationType to '+rotationType
set('RotationType',rotationType)
print '. set FileTimeSpan to '+str(fileTimeSpan)
set('FileTimeSpan',int(fileTimeSpan))
#
#
def createServerLog(serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
print ('\nCreate Log for '+serverName)
print (lineSeperator)
cd('/Server/'+serverName)
create(serverName,'Log')
setLogProperties('/Server/'+serverName+'/Log/'+serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan)
#
# Change DataSource to XA
def changeDatasourceToXA(datasource):
print 'Change datasource '+datasource
print (lineSeperator)
cd('/')
cd('/JDBCSystemResource/'+datasource+'/JdbcResource/'+datasource+'/JDBCDriverParams/NO_NAME_0')
set('DriverName','oracle.jdbc.xa.client.OracleXADataSource')
print '. Set UseXADataSourceInterface='+'True'
set('UseXADataSourceInterface','True')
cd('/JDBCSystemResource/'+datasource+'/JdbcResource/'+datasource+'/JDBCDataSourceParams/NO_NAME_0')
print '. Set GlobalTransactionsProtocol='+'TwoPhaseCommit'
set('GlobalTransactionsProtocol','TwoPhaseCommit')
cd('/')
#
#
def createCluster(cluster):
print ('\nCreate '+cluster)
print (lineSeperator)
cd('/')
create(cluster, 'Cluster')
#
# Create a Unix Machine
def createUnixMachine(serverMachine,serverAddress):
print('\nCreate machine '+serverMachine+' with type UnixMachine')
print (lineSeperator)
cd('/')
create(serverMachine,'UnixMachine')
cd('UnixMachine/'+serverMachine)
create(serverMachine,'NodeManager')
cd('NodeManager/'+serverMachine)
set('ListenAddress',serverAddress)
#
# Add server to Unix Machine
def addServerToMachine(serverName, serverMachine):
print('\nAdd server '+serverName+' to '+serverMachine)
print (lineSeperator)
cd('/Servers/'+serverName)
set('Machine',serverMachine)
#
# Determine the Server Java Args
def getServerJavaArgs(serverName,javaArgsBase,logsHome):
javaArgs = javaArgsBase+' -Dweblogic.Stdout='+logsHome+'/'+serverName+'.out -Dweblogic.Stderr='+logsHome+'/'+serverName+'_err.out'
return javaArgs
#
# Change Managed Server
def changeManagedServer(server,listenAddress,listenPort,javaArgs):
print '\nChange ManagedServer '+server
print (lineSeperator)
cd('/Servers/'+server)
print '. Set listen address and port to: '+listenAddress+':'+str(listenPort)
set('ListenAddress',listenAddress)
set('ListenPort' ,int(listenPort))
# ServerStart
print ('. Create ServerStart')
create(server,'ServerStart')
cd('ServerStart/'+server)
print ('. Set Arguments to: '+javaArgs)
set('Arguments' , javaArgs)
# SSL
cd('/Server/'+server)
print ('. Create server SSL')
create(server,'SSL')
cd('SSL/'+server)
print ('. Set SSL Enabled to: '+'False')
set('Enabled' , 'False')
print ('. Set SSL HostNameVerificationIgnored to: '+'True')
set('HostNameVerificationIgnored', 'True')
#
if jsseEnabled == 'true':
print ('. Set JSSEEnabled to: '+ 'True')
set('JSSEEnabled','True')
else:
print ('. Set JSSEEnabled to: '+ 'False')
set('JSSEEnabled','False')
#
# Create a Managed Server
def createManagedServer(server,listenAddress,listenPort,cluster,machine,
javaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan):
print('\nCreate '+server)
print (lineSeperator)
cd('/')
create(server, 'Server')
cd('/Servers/'+server)
javaArgs=getServerJavaArgs(server,javaArgsBase,logsHome)
changeManagedServer(server,listenAddress,listenPort,javaArgs)
createServerLog(server, logsHome+'/'+server+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('Add '+server+' to cluster '+cluster)
cd('/')
assign('Server',server,'Cluster',cluster)
addServerToMachine(server, machine)
#
# Adapt a Managed Server
def adaptManagedServer(server,newSrvName,listenAddress,listenPort,cluster,machine,
javaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan):
print('\nAdapt '+server)
print (lineSeperator)
cd('/')
cd('/Servers/'+server)
# name of adminserver
print '. Rename '+server+' to '+ newSrvName
set('Name',newSrvName )
cd('/Servers/'+newSrvName)
javaArgs=getServerJavaArgs(newSrvName,javaArgsBase,logsHome)
changeManagedServer(newSrvName,listenAddress,listenPort,javaArgs)
createServerLog(newSrvName, logsHome+'/'+newSrvName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('Add '+newSrvName+' to cluster '+cluster)
cd('/')
assign('Server',newSrvName,'Cluster',cluster)
addServerToMachine(newSrvName, machine)
#
# Change Admin Server
def changeAdminServer(adminServerName,listenAddress,listenPort,javaArguments):
print '\nChange AdminServer'
print (lineSeperator)
cd('/Servers/AdminServer')
# name of adminserver
print '. Set Name to '+ adminServerName
set('Name',adminServerName )
cd('/Servers/'+adminServerName)
# address and port
print '. Set ListenAddress to '+ server1Address
set('ListenAddress',server1Address)
print '. Set ListenPort to '+ str(listenPort)
set('ListenPort' ,int(listenPort))
#
# ServerStart
print 'Create ServerStart'
create(adminServerName,'ServerStart')
cd('ServerStart/'+adminServerName)
print '. Set Arguments to: '+javaArguments
set('Arguments' , javaArguments)
# SSL
cd('/Server/'+adminServerName)
print 'Create SSL'
create(adminServerName,'SSL')
cd('SSL/'+adminServerName)
set('Enabled' , 'False')
set('HostNameVerificationIgnored', 'True')
#
if jsseEnabled == 'true':
print ('. Set JSSEEnabled to: '+ 'True')
set('JSSEEnabled','True')
else:
print ('. Set JSSEEnabled to: '+ 'False')
set('JSSEEnabled','False')
#
#
def main():
try:
#
# Section 1: Base Domain + Admin Server
print (lineSeperator)
print ('1. Create Base domain '+soaDomainName)
print('\nCreate base wls domain with template '+wlsjar)
print (lineSeperator)
readTemplate(wlsjar)
#
cd('/')
# Domain Log
print('Set base_domain log')
create('base_domain','Log')
setLogProperties('/Log/base_domain', logsHome+'/'+soaDomainName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
#
# Admin Server
adminJavaArgs = getServerJavaArgs(adminServerName,adminJavaArgsBase,logsHome)
changeAdminServer(adminServerName,adminListenAddress,adminListenPort,adminJavaArgs)
createServerLog(adminServerName, logsHome+'/'+adminServerName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
#
print('\nSet password in '+'/Security/base_domain/User/weblogic')
cd('/')
cd('Security/base_domain/User/weblogic')
# weblogic user name + password
print('. Set Name to: ' +adminUser)
set('Name',adminUser)
cmo.setPassword(adminPwd)
#
if productionMode == 'true':
print('. Set ServerStartMode to: ' +'prod')
setOption('ServerStartMode', 'prod')
else:
print('. Set ServerStartMode to: ' +'dev')
setOption('ServerStartMode', 'dev')
#
print('write Domain...')
# write path + domain name
writeDomain(soaDomainHome)
closeTemplate()
#
createAdminStartupPropertiesFile(soaDomainHome+'/servers/'+adminServerName+'/data/nodemanager',adminJavaArgs)
createBootPropertiesFile(soaDomainHome+'/servers/'+adminServerName+'/security','boot.properties',adminUser,adminPwd)
createBootPropertiesFile(soaDomainHome+'/config/nodemanager','nm_password.properties',adminUser,adminPwd)
#
es = encrypt(adminPwd,soaDomainHome)
#
readDomain(soaDomainHome)
#
print('set Domain password for '+soaDomainName)
cd('/SecurityConfiguration/'+soaDomainName)
set('CredentialEncrypted',es)
#
print('Set nodemanager password')
set('NodeManagerUsername' ,adminUser )
set('NodeManagerPasswordEncrypted',es )
#
cd('/')
setOption( "AppDir", soaApplicationsHome )
#
print('Finished base domain.')
#
# Section 2: Templates
print('\n2. Extend Base domain with templates.')
print (lineSeperator)
print ('Adding Webservice template '+wlservicetpl)
addTemplate(wlservicetpl)
# SOA Suite
if soaEnabled == 'true':
print ('Adding SOA Template '+soatpl)
addTemplate(soatpl)
else:
print('SOA is disabled')
# BPM
if bpmEnabled == 'true':
print ('Adding BPM Template '+bpmtpl)
addTemplate(bpmtpl)
else:
print('BPM is disabled')
# OSB
if osbEnabled == 'true':
print ('Adding OSB template '+osbtpl)
addTemplate(osbtpl)
else:
print('OSB is disabled')
#
print ('Adding ApplCore Template '+applCoreTpl)
addTemplate(applCoreTpl)
#
if bamEnabled == 'true':
print ('Adding BAM Template '+bamtpl)
addTemplate(bamtpl)
else:
print ('BAM is disabled')
#
if webtierEnabled == 'true':
print ('Adding OHS Template '+ohsTpl)
addTemplate(ohsTpl)
else:
print('OHS is disabled')
#
if b2bEnabled == 'true':
print 'Adding B2B Template '+b2bTpl
addTemplate(b2bTpl)
else:
print('B2B is disabled')
#
if essEnabled == 'true':
print ('Adding ESS Template '+essBasicTpl)
addTemplate(essBasicTpl)
print ('Adding ESS Em Template '+essEmTpl)
addTemplate(essEmTpl)
else:
print('ESS is disabled')
#
dumpStack()
print ('Finished templates')
#
# Section 3: Change Datasources
print ('\n3. Change datasources')
print 'Change datasource LocalSvcTblDataSource'
cd('/JDBCSystemResource/LocalSvcTblDataSource/JdbcResource/LocalSvcTblDataSource/JDBCDriverParams/NO_NAME_0')
set('URL',soaRepositoryDbUrl)
set('PasswordEncrypted',soaRepositoryStbPwd)
cd('Properties/NO_NAME_0/Property/user')
set('Value',soaRepositoryDbUserPrefix+'_STB')
#
print ('Call getDatabaseDefaults which reads the service table')
getDatabaseDefaults()
#
if soaEnabled == 'true':
changeDatasourceToXA('EDNDataSource')
if osbEnabled == 'true':
changeDatasourceToXA('wlsbjmsrpDataSource')
changeDatasourceToXA('OraSDPMDataSource')
changeDatasourceToXA('SOADataSource')
#
if bamEnabled == 'true':
changeDatasourceToXA('BamDataSource')
#
print 'Finished DataSources'
#
# Section 4: Create UnixMachines, Clusters and Managed Servers
print ('\n4. Create UnixMachines, Clusters and Managed Servers')
print (lineSeperator)
cd('/')
#
createUnixMachine(server1Machine,server1Address)
if server2Enabled == 'true':
createUnixMachine(server2Machine,server2Address)
#
addServerToMachine(adminServerName,server1Machine)
#
cd('/')
# SOA Suite
if soaEnabled == 'true':
createCluster(soaClr)
adaptManagedServer('soa_server1',soaSvr1,server1Address, soaSvr1Port,soaClr,server1Machine,
soaJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if soaSvr2Enabled == 'true':
createManagedServer(soaSvr2,server2Address,soaSvr2Port,soaClr,server2Machine,
soaJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create SOA Server2')
#
# OSB
if osbEnabled == 'true':
createCluster(osbClr)
adaptManagedServer('osb_server1',osbSvr1,server1Address,osbSvr1Port,osbClr,server1Machine,
osbJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if osbSvr2Enabled == 'true':
createManagedServer(osbSvr2,server2Address,osbSvr2Port,osbClr,server2Machine,
osbJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create OSB Server2')
#
# BAM
if bamEnabled == 'true':
createCluster(bamClr)
adaptManagedServer('bam_server1',bamSvr1,server1Address,bamSvr1Port,bamClr,server1Machine,
bamJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if bamSvr2Enabled == 'true':
createManagedServer(bamSvr2,server2Address,bamSvr2Port,bamClr,server2Machine,
bamJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create BAM Server2')
#
# ESS
if essEnabled == 'true':
createCluster(essClr)
adaptManagedServer('ess_server1',essSvr1,server1Address,essSvr1Port,essClr,server1Machine,
essJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if essSvr2Enabled == 'true':
createManagedServer(essSvr2,server2Address,essSvr2Port,essClr,server2Machine,
essJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create ESS Server2')

#
print ('Finished creating Machines, Clusters and ManagedServers')
#
# Section 5: Add Servers to ServerGroups.
print ('\n5. Add Servers to ServerGroups')
print (lineSeperator)
cd('/')
print 'Add server groups '+adminSvrGrpDesc+ ' to '+adminServerName
setServerGroups(adminServerName, adminSvrGrp)
# SOA
if soaEnabled == 'true':
print 'Add server group '+soaSvrGrpDesc+' to '+soaSvr1+' and possibly '+soaSvr2
setServerGroups(soaSvr1, soaSvrGrp)
if soaSvr2Enabled == 'true':
setServerGroups(soaSvr2, soaSvrGrp)
#
# OSB
if osbEnabled == 'true':
print 'Add server group '+osbSvrGrpDesc+' to '+osbSvr1+' and possibly '+osbSvr2
setServerGroups(osbSvr1, osbSvrGrp)
if osbSvr2Enabled == 'true':
setServerGroups(osbSvr2, osbSvrGrp)
#
if bamEnabled == 'true':
print 'Add server group '+bamSvrGrpDesc+' to '+bamSvr1+' and possibly '+bamSvr2
setServerGroups(bamSvr1, bamSvrGrp)
if bamSvr2Enabled == 'true':
setServerGroups(bamSvr2, bamSvrGrp)
#
if essEnabled == 'true':
print 'Add server group '+essSvrGrpDesc+' to '+essSvr1+' and possibly '+essSvr2
setServerGroups(essSvr1, essSvrGrp)
if essSvr2Enabled == 'true':
setServerGroups(essSvr2, essSvrGrp)
#
print ('Finished ServerGroups.')
#
updateDomain()
closeDomain();
#
# Section 6: Create boot properties files.
print ('\n6. Create boot properties files')
print (lineSeperator)
# SOA
if soaEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+soaSvr1+'/security','boot.properties',adminUser,adminPwd)
if soaSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+soaSvr2+'/security','boot.properties',adminUser,adminPwd)
#
# OSB
if osbEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+osbSvr1+'/security','boot.properties',adminUser,adminPwd)
if osbSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+osbSvr2+'/security','boot.properties',adminUser,adminPwd)
#
if bamEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+bamSvr1+'/security','boot.properties',adminUser,adminPwd)
if bamSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+bamSvr2+'/security','boot.properties',adminUser,adminPwd)
#
if essEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+essSvr1+'/security','boot.properties',adminUser,adminPwd)
if essSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+essSvr2+'/security','boot.properties',adminUser,adminPwd)
#
print ('\nFinished')
#
print('\nExiting...')
exit()
except NameError, e:
print 'Apparently properties not set.'
print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
usage()
except:
apply(traceback.print_exception, sys.exc_info())
stopEdit('y')
exit(exitcode=1)
#call main()
main()
exit()

Conclusion
As said, although I think this script is already quite adaptable using the property file, of course there are many improvements conceivable for your particular situation. It creates a 'skeleton' SOA or Service Bus domain, but you might need to adapt it for your network topology and security settings.
And although it creates a 'per domain' nodemanager configuration, you would need to adapt it for your particular needs to get the domain started. I only tested this by starting the Admin server using the startWeblogic.sh script.

Having such a script is a valuable asset: it allows you to (re-)create your domains repeatably in a standard way, ensuring that different environments (dev, test, acc, prod) are created similarly.

One last thing though: the script somehow registers the creation of the domain, and thus the use of the datasources, in the repository. So you can't just throw away the domain and recreate it against the current repository; you'll need to recreate the repository as well.


database option - compression part 2

Pat Shuff - Thu, 2016-06-09 02:07
Yesterday we looked at the different compression options that are available for a database. Today we are going to walk through an example. The example comes from Oracle by Example - Compression. This is a hands on tutorial that has you execute code in an 11g database. Note that you must create this database as a High Performance or Extreme Performance database. If you create a Standard Edition or Enterprise Edition the execution will fail with an option not available error as we saw with partitioning a couple of days ago.

To start, we create an 11g database in the Oracle Public Cloud. We create the instance, wait an hour or so, change the network configuration to open port 1521 to the public, and connect using sys as sysdba to the instance. We are going to use SQL Developer in our Windows 2012 instance to make the connection. To get the connection information, we go to the database console and get the ip address of the instance.

We then go to our sqldeveloper tool and add this database connection. We can use ssh tunneling or open port 1521 to the world to make the connection.

The first step that we are told to do is to execute the setup.sql file available via the tutorial. We are not going to execute this script but will do everything by hand through SQL Developer. The purpose of this script is to enable the user sh, set a password, and grant privileges to the user. We can do this from SQL Developer. The code that it recommends using is

connect / as sysdba
set echo on
alter user sh identified by sh account unlock;
grant create tablespace to sh;
grant drop tablespace to sh;
First, we don't want to use such a simple password, so we change it to something a little more secure. We select the database instance, in our example prs11gHP, where we are connected as the sys user. We select Other Users..., then the user sh, and edit the entry. When the screen comes up to edit the user, we enable the account, set the password, grant the create tablespace and drop tablespace rights to the user, and apply. This effectively executes the script shown above.

At this point, we have a user that can create and drop tables. We now want to load the create_sales_tbls.sql code from the tutorial.

The create script first drops the existing tables. This might generate an error if the tables do not exist. This error is not significant and won't stop everything from executing. We then create a non-compressed and a compressed table by selecting from the demo sales table, which exists if you installed the demo data during your install.

drop table sales_nocompress purge
/
drop table sales_compress purge
/
set echo on
set timing on
create table sales_nocompress
as select * from sales
/

create table sales_compress compress for all operations
as select * from sales where 1=0
/

select count(*)
from sales_compress
/
Note that the two create statements define tables with the same structure. What we see is that the creation of the first table takes just over 4 seconds because we pull in the sales table data. The second creation does not take as long because the where clause is false for every row, so no data is copied.

When we do the select, the row count should be zero because of the where clause. We then do an insert into the compressed table to populate it with the same data as the uncompressed table. This is done by executing

@oltp_insert

set timing off
select count(*) from sales_compress
/
select count(*) from sales_nocompress
/
This executes the oltp_insert.sql code then compares the counts of the two tables to make sure they contain the same number of records. The code that is executed in the insert script is
SQL> set timing on
SQL> declare

  commit_after integer := 0 ;
  loop_variable integer ;

  cursor c_sales is
  select prod_id
  , cust_id
  , time_id
  , channel_id
  , promo_id
  , quantity_sold
  , amount_sold
  from sales ;

begin

  for r_sales in c_sales
  loop

    if commit_after = 0
    then

      loop_variable := 0 ;

      commit_after := round(dbms_random.value(1,1)) ;

    end if ;

    insert into sales_compress
    (prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold)
    values
    ( r_sales.prod_id
    , r_sales.cust_id
    , r_sales.time_id
    , r_sales.channel_id
    , r_sales.promo_id
    , r_sales.quantity_sold
    , r_sales.amount_sold
    ) ;

    if loop_variable = commit_after
    then
      commit ;
      commit_after := 0 ;
    end if ;

    loop_variable := loop_variable + 1 ;

  end loop ;

end ;
/
We are not going to go through this code, but it does return the same number of entries as the uncompressed table. The values that are inserted are pulled from the sales table and inserted into the compressed table. Note that the table uses OLTP compression, since we created it with 'compress for all operations'.

We can execute the examine_storage.sql script to see that the compressed table takes up about half the storage of the uncompressed table. We can also see that the table is enabled for oltp compression by looking at the parameters of the table from a select statement.
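
The tutorial's examine_storage.sql is not reproduced in this post; as a minimal sketch of the same kind of checks, something like the following could be run as the sh user against the two tables created above (a sketch only, not the tutorial's exact script):

-- Minimal sketch, not the tutorial's exact examine_storage.sql
select segment_name, bytes/1024/1024 as size_mb
from   user_segments
where  segment_name in ('SALES_NOCOMPRESS', 'SALES_COMPRESS');

-- COMPRESSION / COMPRESS_FOR show whether a table is enabled for OLTP compression
select table_name, compression, compress_for
from   user_tables
where  table_name in ('SALES_NOCOMPRESS', 'SALES_COMPRESS');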

We can also look at the differences in select time by reading all of the data from the compressed and uncompressed tables. Note that the query against the compressed table takes about 3/4 of the time of the query against the uncompressed table.
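
A simple way to see that difference for yourself is to time a full read of each table (timings will of course vary per environment):

set timing on
select count(*) from sales_nocompress;
select count(*) from sales_compress;
set timing off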

In summary, we were able to create an 11g database, create compressed and non-compressed tables, and look at the relative size and timing of retrieving data from them. We can experiment with this data and grow the table size to see if we still get the same improvements as the table gets larger. We can try different compression algorithms to see if they affect performance or compression ratios. We have done all of this in a database as a service public cloud instance. The only tools that we needed were a SQL Developer connection and an Oracle Cloud account. We could have done this with Amazon RDS as well as EC2 and Microsoft Azure Compute. The key difference is that this experiment took about two hours to execute and we only consumed about $15 to learn and play with compression on 11g (or 12c), given that a low memory option for the database is only $6.720 per OCPU per hour. With the pay as you go option we burn less than $15 and turn off the service. We could have uploaded our own data sets into the database instance and played with the compression advisor in a sandbox without affecting our production environment. If we were using database backup as a service we could have restored a single table from our backup and played with the compression variations and compression advisor.

Combining Features - Wrong Results With Scalar Subquery Caching

Randolf Geist - Wed, 2016-06-08 15:39
Quite often you can get into trouble with Oracle when you start combining different features.

In the case of one of my clients it is the combination of user-defined PL/SQL functions that can raise exceptions (think of a currency conversion where a non-existent currency code gets passed into the function), DML error logging, and attempting to improve performance by wrapping the PL/SQL function call in a scalar subquery to benefit from the built-in scalar subquery caching feature of the SQL runtime engine.

As long as the scalar subquery wasn't used, everything worked as expected, but after adding the scalar subquery it became obvious after a while that wrong results occurred - in this particular case it meant that rows that should have been rejected and written to the error logging table, due to the exception raised in the user-defined PL/SQL function, suddenly showed up in the target table, and what was even more worrying - they included a converted amount, obviously using some arbitrary conversion rate, although the PL/SQL function didn't actually return any value but raised an exception for unknown currency codes.

The interesting point here is that everything works as expected if the error logging feature doesn't get used, or if the scalar subquery doesn't get used.

In case of no error logging the whole statement will roll back if an exception occurs, and that holds true and works even with scalar subquery caching in place.

Without scalar subquery caching the correct result gets produced, all rows that are supposed to be rejected are written to the error logging table and don't show up in the target table.

And if no exception gets raised (only valid and existing currency codes get used), again the results are correct in the target table even when using scalar subquery caching - so the caching of the values in principle works correctly.

Only when combining those features and dealing with invalid data the problem shows up, and as it can be demonstrated from a simple test case, even then it only occurs under certain circumstances.

Here is a simple test case for reproducing the issue:

set echo on timing on time on

drop table t_source purge;

drop table t_dest purge;

drop table ERR$_T_DEST purge;

drop package test_plsql_caching_with_except;

create table t_source
as
select
rownum as id
, cast('C' || to_char(round(trunc(rownum / 100000)) + 1, 'FM00') as varchar2(3)) as currency_code_clust
, cast('C' || to_char(mod(rownum, 10) + 1, 'FM00') as varchar2(3)) as currency_code_scat
, 100 as amount
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3) gen1
, (select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3) gen2
;

exec dbms_stats.gather_table_stats(null, 't_source')

create or replace package test_plsql_caching_with_except
as
function test_plsql_caching_with_except(s_currency_code in varchar2, s_do_excpt_on in varchar2 default 'C10') return number parallel_enable;
end;
/

create or replace package body test_plsql_caching_with_except
as
function test_plsql_caching_with_except(s_currency_code in varchar2, s_do_excpt_on in varchar2 default 'C10') return number parallel_enable
is
begin
if s_currency_code = s_do_excpt_on then
raise_application_error(-20001, 'exception encountered');
end if;
--
return to_number(substr(s_currency_code, 2));
end;
end;
/

create table t_dest (id number(*, 0), currency_code varchar2(3), org_amount number, conv_amount number);

exec DBMS_ERRLOG.create_error_log (dml_table_name => 't_dest')

truncate table t_dest;

truncate table ERR$_T_DEST;

alter session enable parallel dml;

-- Using "currency_code_scat" as function parameter gives correct result in T_DEST
-- But column CONV_AMOUNT in ERR$_T_DEST shows some function result that shouldn't be there since the function raised an exception (reproduces in 11.2.0.4 but not 12.1.0.2)
--
-- define ccy_code = currency_code_scat

-- Using "currency_code_clust" as function parameter gives wrong result in T_DEST - rows that should go to ERR$_T_DEST
-- due to exception being raised in PL/SQL function end up in T_DEST (with some "cached" function result applied)
-- There shouldn't be any rows in T_DEST with "currency_code = 'C10'"
-- Applies to both serial and Parallel Execution, conventional and direct path insert
define ccy_code = currency_code_clust

-- Use commented, non-cached function call to see correct results
insert /*+
append
no_parallel
-- parallel(n)
*/
into t_dest (id, currency_code, org_amount, conv_amount)
select id, &ccy_code, amount,
--test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code) * amount
(select test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code) * amount from dual)

from t_source
log errors
reject limit unlimited;

commit;

-- This is supposed to 900K
select count(*) from t_dest;

-- This is supposed to be 0
select count(*) from t_dest where currency_code = 'C10';

-- This is supposed to 100K
select count(*) from ERR$_T_DEST;

-- This is supposed to 0
select count(*) from ERR$_T_DEST where conv_amount is not null;
The second parameter to the PL/SQL function determines for which code an exception should be raised (or whether one should be raised at all), and hence allows reproducing different scenarios. By default it will raise an exception for code "C10" which represents 10% (100K rows) of the data in T_SOURCE. So 900K rows should end up in the destination table and those 100K rows having code "C10" should go into the error logging table.
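
To reproduce the "no exception" scenario, for example, the scalar subquery expression in the insert can simply pass a second parameter that matches none of the generated codes - the value 'XXX' below is just an arbitrary non-matching code used for illustration:

(select test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code, 'XXX') * amount from dual)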

Running the test case without the scalar subquery gets the correct result:

SQL> -- This is supposed to 900K
SQL> select count(*) from t_dest;

COUNT(*)
----------
900000

Elapsed: 00:00:00.70
SQL>
SQL> -- This is supposed to be 0
SQL> select count(*) from t_dest where currency_code = 'C10';

COUNT(*)
----------
0

Elapsed: 00:00:00.21
SQL>
SQL> -- This is supposed to 100K
SQL> select count(*) from ERR$_T_DEST;

COUNT(*)
----------
100000

Elapsed: 00:00:00.70
SQL>
SQL> -- This is supposed to 0
SQL> select count(*) from ERR$_T_DEST where conv_amount is not null;

COUNT(*)
----------
0

Elapsed: 00:00:00.04
SQL>
Use the scalar subquery, and suddenly instead of 100K rows that should be rejected only a single one gets rejected and 99,999 make it into the target table with some converted amount:

SQL> -- This is supposed to 900K
SQL> select count(*) from t_dest;

COUNT(*)
----------
999999

Elapsed: 00:00:00.23
SQL>
SQL> -- This is supposed to be 0
SQL> select count(*) from t_dest where currency_code = 'C10';

COUNT(*)
----------
99999

Elapsed: 00:00:00.20
SQL>
SQL> -- This is supposed to 100K
SQL> select count(*) from ERR$_T_DEST;

COUNT(*)
----------
1

Elapsed: 00:00:00.00
SQL>
SQL> -- This is supposed to 0
SQL> select count(*) from ERR$_T_DEST where conv_amount is not null;

COUNT(*)
----------
1

Elapsed: 00:00:00.01
SQL>
Now the critical point is the order of the column values in the column passed into the PL/SQL function. The problem can only be reproduced when using the "clustered" variant of the currency codes (column CURRENCY_CODE_CLUST in T_SOURCE). Switch to the "scattered" variant (CURRENCY_CODE_SCAT), and the (almost) correct result gets produced, at least in the target table. I say "almost" because looking closer at the rows written to the error logging table it becomes obvious that they contain a converted amount that definitely shouldn't be there - so the wrong result error already shows up here, since the row was written to the error logging table due to the PL/SQL function raising an exception, hence the converted amount should be NULL. This "wrong converted amount in the error logging table" no longer occurs in 12c (12.1.0.2), so something has changed but the problem of wrong results (rows with errors / exceptions go into the target table) is still reproducible in 12c.
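
To see the difference in value ordering between the two columns, a quick look at T_SOURCE around the 100K boundary is enough (just an illustration, not part of the original test case): CURRENCY_CODE_CLUST keeps equal values together in long runs, while CURRENCY_CODE_SCAT cycles through the ten codes on every row.

select id, currency_code_clust, currency_code_scat
from   t_source
where  id between 99998 and 100002
order  by id;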

The problem seems to be related to the way the scalar subquery caching works when the same value comes again - Jonathan Lewis described this feature in "Cost Based Oracle" a long time ago. There seems to be a special optimisation that avoids the cache lookup in case the same value as before is used again, like a "single value cache" on top of the actual cache. And it looks like it is this special treatment that doesn't correctly handle the case of the exception being raised in the PL/SQL function.

The test case also allows checking whether other caching mechanisms suffer from the same problem, like declaring the function "deterministic" (although in the real-life case it isn't really deterministic and hence shouldn't be declared as such, since it opens up other possibilities like building a Materialized View or a function-based index on the PL/SQL function, which potentially would again lead to wrong results) or making use of the PL/SQL function result cache - and they don't. Both produce the correct results under all scenarios tested.
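
For reference, this is roughly what those two alternative declarations look like; a hedged sketch with assumed function names, not the exact package used in the test case above:

-- Sketch only: a standalone variant of the conversion function relying on
-- DETERMINISTIC caching in the SQL engine (assumed names, not the test package)
create or replace function convert_amount_det(s_currency_code in varchar2)
return number deterministic
is
begin
  return to_number(substr(s_currency_code, 2));
end;
/

-- Sketch only: the same function relying on the PL/SQL function result cache
create or replace function convert_amount_rc(s_currency_code in varchar2)
return number result_cache
is
begin
  return to_number(substr(s_currency_code, 2));
end;
/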

I have a more complex variant of the test case that also allows spotting more clearly the critical difference between the "scalar subquery / deterministic" caching that obviously happens in the SQL engine and the PL/SQL function result cache that is implemented in the PL/SQL engine, by counting the number of times the function code actually gets executed and how many times the PL/SQL code block got executed according to the Library Cache information.
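
The general idea of counting actual executions can be sketched with a simple package-level counter (again just an illustration of the instrumentation technique, not the author's more complex test case):

-- Illustration only: count real executions of the function body via package state
create or replace package exec_counter
as
  g_calls number := 0;
  function convert_amount(s_currency_code in varchar2) return number;
end;
/

create or replace package body exec_counter
as
  function convert_amount(s_currency_code in varchar2) return number
  is
  begin
    g_calls := g_calls + 1;  -- incremented only when the function body really runs
    return to_number(substr(s_currency_code, 2));
  end;
end;
/

-- after running a statement that calls the function, compare the counter
-- with the number of source rows processed (package state is per session)
set serveroutput on
exec dbms_output.put_line('actual function executions: ' || exec_counter.g_calls)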

In the case here, where the overhead of the actual function call is very small but it is called very frequently, the PL/SQL function result cache adds much more overhead than the SQL engine caching, because the context switch to PL/SQL needs to be done for every function invocation, only to be resolved then in the PL/SQL function result cache instead of actually executing the function code.

Whereas the "scalar subquery / deterministic" cache avoids the context switch to PL/SQL and uses values from a cache maintained in the SQL engine.

So the smaller the overhead of the actual PL/SQL function code and the more frequently the function gets called, the more overhead the usage of the PL/SQL function result cache will produce in comparison to the SQL engine based caching mechanisms.

Of course the PL/SQL function result cache comes with other features that are not there with the others - like cross-session / cross-statement caching that might make a big difference if a single function call causes a lot of overhead.

The Future of Work: What does a Smart Office look like today?

Oracle AppsLab - Wed, 2016-06-08 15:23

Editor’s note: We just returned from Holland last week where we attended AMIS 25, which was a wonderful show. One of the demos we showed was the Smart Office; Noel (@noelportugal) also gave a presentation on it.

We’ve been showing the Smart Office since OOW last year, and it remains one of our most popular demos because it uses off-the-shelf components that are available today, e.g. Amazon Echo, Leap Motion, Philips Hue lights, beacons, etc., making it an experience that anyone could replicate today with some development work.

The following article is cross-posted from our Emerging Technologies pages on oracle.com. Stay tuned for details on each of the components of the Smart Office.

In early 2015, the AppsLab team decided we were going to showcase the latest emerging technologies in an integrated demo. As part of the Oracle Applications User Experience group, our main goal as the emerging technologies team is to design products that will increase productivity and user participation in Oracle software.

We settled on the idea of the Smart Office, which is designed with the future of enterprise workplaces in mind. With the advent of the Internet of Things and more home automation in consumer products, users are expecting similar experiences in the workplace. We wanted to build an overall vision of how users will accomplish their tasks with the help of emerging technologies, no matter where they might be working.

Technologies such as voice control, gesture, and proximity have reached what we consider an acceptable maturity level for public consumption. Inexpensive products such as the Amazon Echo, Leap Motion and Bluetooth beacons are becoming more common in users’ daily lives. These examples of emerging technology have become cornerstones in our vision for the Smart Office.

Wearable technology also plays an important role in our idea of the Future of Work. Smart watches are becoming ubiquitous, and the price of wireless microprocessors continues to decrease. Dedicated mobile devices, our research shows, can increase productivity in the workplace when they are properly incorporated into the user experience as a whole.

Building for you, a Sales Cloud example

We first created what we call a user persona to assist us in building the Smart Office. This helps us develop very specific work flows using very specific technology that can be widely applied to a variety of software users. In this case, we started with a sales example, as salespeople are often mobile workers.


Sally Smith, our development example for the Smart Office, is a regional sales vice president who is traveling to her headquarters office. Traveling to another office often requires extra effort to find and book a working space. To help Sally with that task, we built a geo-fence-enabled mobile app as well as a Smart Badge. Here’s what these two components help her do:

  • As Sally approaches the office building, her mobile device (using geo-fencing capabilities) alerts her via her smart watch and helps her find her way to an available office space, using micro-location with beacons. She uses her Smart Badge, which has access to data about her employee status, to go through the security doors at the office building.
  • As Sally approaches the available office space, her Smart Badge proximity sensor (a Bluetooth beacon) connects with a Lighthouse, which is a small touch-screen device outside the office space that displays space availability and works as the “brain” to control IoT devices inside the space. The proximity with the Lighthouse triggers a second confirmation to her smart watch to unlock the office and reserve the space in the company’s calendar system. This authenticates her reservation in two ways.


  • As Sally enters the office, her global preferences are loaded into the office “brain.” Settings such as light brightness and color (Hue Lights), and room temperature (Nest Thermostat) are set to her liking.
  • The office screens then start to load Sally’s familiar pictures as well as useful data relative to her location, such as weather or local events, on two Infoscreens. An Infoscreen is a Wi-Fi-enabled digital frame or LCD screen hung on the wall.


Sally has already interacted with her Smart Office in several ways. But up to this point, all of the interactions have been triggered or captured by emerging technology built into mobile devices that she is carrying with her. Now, she is ready to interact more purposefully with the Smart Office.

  • Sally uses the Amazon Echo voice control to talk to the office: “Alexa, start my day.” Since she has been authenticated by the system already, it knows that the Oracle Sales Cloud is the application she is most likely to need, and the welcome page is now loaded in the touchscreen at the desk. She can use voice navigation to check on her opportunities, leads, or any other section of the Sales Cloud.
  • Sally was working on the plane with Oracle Sales Cloud, but she did not have a chance to save her work before landing. Session portability is built into the cloud user experience, which takes care of saving her work when she is offline. Now that she is sitting inside the Smart Office and back online, she just swipes her screen to transfer her incomplete work onto the desktop screen.


  • The Smart Office also uses empty wall space to project data throughout the day. On this Ambient Screen, Sally could use her voice (Amazon Echo), or hand gestures (Leap Motion), to continue her work. Since Sally has a global sales team, she can use the Ambient Screen to project a basic overview of her team performance metrics, location, and notifications.
  • If Sally needs to interact with any of the notifications or actions she sees on the Ambient Screen, she can use a grab-and-throw motion to bring the content to her desk screen. She can also use voice commands to call up a team map, for example, and ask questions about her team such as their general location.


  • As Sally finishes her day and gets ready to close her session inside the Smart Office, she can use voice commands to turn everything off.

Find out more

The Smart Office was designed to use off-the-shelf components on purpose. We truly believe that the Future of Work no longer relies on a single device. Instead, a set of cloud-connected devices help us accomplish our work in the most efficient manner.

For more on how we decide which pieces of emerging technology to investigate and develop in a new way for use in the enterprise world, read “Influence of Emerging Technology,” on the Usable Apps website.

See this for yourself and get inspired by what the Oracle Applications Cloud looks like when it’s connected to the future. Request a lab tour.

OGh Oracle DBA and SQL Celebration Day 2016

Tim Hall - Wed, 2016-06-08 12:01

The OGh Oracle DBA and SQL Celebration Day was the last stop on my trip and it went super-quick! The morning started with a keynote by Dad. He managed to resist the urge to talk about me for 60 minutes, which was good. It gets really embarrassing when he keeps telling everyone I’ve eclipsed his accomplishments in every way possible!

After the keynote it was me speaking about efficient PL/SQL calls from SQL. I lost the ability to type, but I think the session went OK. There is a playlist of videos about this stuff here.

Using Index Hints in oracle

Learn DB Concepts with me... - Wed, 2016-06-08 09:59

Hints: Hints are used to give Oracle specific information that we know about our data and application. This can further improve the performance of our system. There can be instances where the default optimizer plan may not be efficient for certain SQL statements. We can specify hints with those SQL statements to improve their efficiency. Hints should only be used as a last resort, if statistics have been gathered and the query is still following a sub-optimal execution plan.

Example of the correct syntax for an index hint:

select /*+ index(TEST_IDX IDX_OS_USR) */ * from TEST_IDX;







If we alias the table (A in the case below), we must use the alias in the index hint:

select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

Note:

Oracle decides whether or not to use this hint; if Oracle finds that it has a faster execution plan without using the hint, it ignores it. You might think that an index would be helpful and provide it as a hint, but Oracle may still ignore it. In the case below you can see the hint being ignored.
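
One way to check for yourself whether the hint was honoured is to look at the execution plan; a minimal sketch, assuming the TEST_IDX table and IDX_OS_USR index from the examples above:

explain plan for
select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

select * from table(dbms_xplan.display);
-- An INDEX access step referencing IDX_OS_USR means the hint was used;
-- a TABLE ACCESS FULL on TEST_IDX means the optimizer ignored it.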







@OraclePartners FY17 Global Kickoff

REGISTER NOW: ORACLE FY17 GLOBAL PARTNER KICKOFF It’s the start of a new Oracle fiscal year and we are excited about the tremendous opportunity ahead of us. Join...


database options - compression

Pat Shuff - Wed, 2016-06-08 02:07
A natural follow-on to database partitioning is database compression. With partitioning we wanted to split everything into buckets based on how frequently it is used and minimize the more heavily used stuff so that it would fit into memory. The older stuff that we don't access that frequently can be put on slower and lower cost storage. In this blog we are going to look at different techniques to use the cheaper storage even more. Since we don't access this data very frequently, and most of the time when we access it we only need to read it and not write to it, we should be able to take advantage of common data and compress the information to consume less storage. If, for example, we are storing census data and we want to store city and state information, we can take advantage of not having Punxsutawney, Pennsylvania stored 5900 times based on the current population. If we stored a copy of this roughly 6000 times it would take up 6000 times 12 bytes for the city and 6000 times 12 bytes for the state. We would also store 15767 as the zip code roughly 6000 times, consuming 6000 times 9 bytes. If we could create a secondary table that contains Punxsutawney, Pennsylvania 15767 and correlate it to the hexadecimal value 2e, we could store 2e for the city, state, and zip code, thus consuming one byte each rather than 12, 12, and 9 bytes. We effectively save 180,000 bytes by storing a replacement value rather than storing the long strings multiple times. This is effectively the way that hybrid columnar compression works.

Compression can be done at a variety of levels and locations. Disk vendors for years have touted compression in place on storage to consume less space. Compression has been used in a variety of industries. Audio compression, for example, takes recorded audio, undersamples the changes in volume and pitch, and records only 8,000 samples per second since the ear can not really hear changes faster than that. These changes are then compressed and stored in an mp3 or avi format. Programs know how to take the mp3 format and rebuild the 8k sample and drive a speaker to estimate the sound that was originally created. Some people can hear the differences and still want to listen to music recorded on reel to reel tape or vinyl because the fidelity is better than CD-ROM or DVD. Videos do the same thing by compressing a large number of bits on a screen and breaking it into squares on the screen. Only the squares that are changing are transmitted rather than sending all of the data across the whole screen, and the blocks that did not change are redisplayed rather than being retransmitted thirty times a second. This allows for video distribution of movies and video recordings across the internet and storage on a DVD rather than recording all of the data all of the time.

Generically compressing data for a database can be complex and if done properly works well. It can also be done very poorly and cause performance problems and issues when reading back the data. Let's take the census data that we talked about earlier. If we store the data as bytes it will consume 198K of space on the disk. If we use the compression ratio that we talked about we will consume roughly 20K of data. This gives us a 10x compression ratio and saves us a significant amount of space on the disk. If the disk sub-system does this compression for us we write 198K of data to the disk, it consumes 20K of storage on the spindles, but when we read it back it has to be rehydrated and we transfer 198K back to the processor and consume 198K of memory to hold the rehydrated data. If the database knew the compression algorithm and compressed the data initially in memory it would only transmit 20K to the disk, store 20K on the spindles, read 20K back from the disk, and consume 20K of memory to hold the data. This might not seem significant but if we are reading the data across a 2.5 G/second SCSI connection it takes 80ms to read the data rather than 8ms. This 72ms difference can be significant if we have to repeat this a few thousand times. It can also be significant if we have a 1 GigE network connection rather than a direct attached disk. The transfer time jumps to 200ms by moving the data from an attached disk to an nfs or smb mounted disk. We see performance problems like this with database backups to third party storage solutions like Data Domain. If you take a database backup and copy it to a Data Domain solution you get the 10x compression and the backup takes roughly an hour. You have to estimate that it will take seven to eight times the time to rehydrate the data so a restore will take 7-8 hours to recover your database.

The recommended solution is to use compression inside the database rather than third-party compression solutions that are designed to compress backups, home directories, and email attachments. Oracle offers the Advanced Compression option for information stored in the database. If you look at the 12c Advanced Compression Data Sheet you will notice that there is a variety of options available for compression. You can compress

  • using historic access patterns (heat map and ADO options)
  • using row compression (by analyzing update and insert operations as they occur)
  • file compression (duplicate file links and file compression of LOBs, BLOBs, and CLOBs)
  • backup data compression
  • Data Guard compression of redo logs before transmission
  • index compressions
  • network transmission compression of results to client systems
  • hybrid columnar compression (Exadata and ZFS only)
  • storage snapshot optimization (ZFS only)

Heat Map Compression

At the segment level, Heat Map tracks the timestamps of the most recent modification and query of each table and partition in the database. At the block level, Heat Map tracks the most recent modification timestamp. These timestamps are used by Automatic Data Optimization to define compression and storage policies which will be automatically maintained throughout the lifecycle of the data. Heat Map skips internal operations done for system tasks -- automatically excluding Stats Gathering, DDLs, Table Redefinitions and similar operations. In addition, Heat Map can be disabled at the session level, allowing DBAs to exclude manual maintenance, avoiding pollution of Heat Map data.

With the data collected by Heat Map, Oracle Database can automatically compress each partition of a table independently based on Heat Map data, implementing compression tiering. This compression tiering can use all forms of Oracle table compression, including: Advanced Row Compression and all levels of Hybrid Columnar Compression (HCC) if the underlying storage supports HCC. Oracle Database can also compress individual database blocks with Advanced Row Compression based on Heat Map data.
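
As a hedged sketch of how this is switched on (assuming a 12c database licensed for Advanced Compression, and using an HR schema purely as an example), Heat Map is enabled at the instance level and its collected statistics can then be inspected before defining ADO policies like those shown below:

-- Enable Heat Map tracking for the whole instance
ALTER SYSTEM SET HEAT_MAP = ON SCOPE=BOTH;

-- Exclude a manual maintenance session from Heat Map tracking
ALTER SESSION SET HEAT_MAP = OFF;

-- Review the collected segment-level access statistics
SELECT object_name, segment_write_time, segment_read_time, full_scan
FROM   dba_heat_map_segment
WHERE  owner = 'HR';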

Row Compression

In this first example, a segment-level ADO policy is created to automatically compress the entire table after there have been no modifications for at least 30 days, using Advanced Row Compression:

ALTER TABLE employee ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;
In this next example, a row-level ADO policy is created to automatically compress blocks in the table, after no rows in the block have been modified for at least 3 days, using Advanced Row Compression:
ALTER TABLE employee ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER 3 DAYS OF NO MODIFICATION;
In addition to Smart Compression, other ADO policy actions can include data movement to other storage tiers, including lower cost storage tiers or storage tiers with other compression capabilities such as Hybrid Columnar Compression (HCC). HCC requires the use of Oracle Storage – Exadata, Pillar Axiom or Sun ZFS Storage Appliance (ZFSSA).

In this example, a tablespace-level ADO policy automatically moves the table to a different tablespace when the tablespace currently containing the object meets a pre-defined tablespace fullness threshold:

ALTER TABLE employee ILM ADD POLICY tier to ilmtbs;
Another option when moving a segment to another tablespace is to set the target tablespace to READ ONLY after the object is moved. This is useful for historical data during database backups, since subsequent full database backups will skip READ ONLY tablespaces.

Advanced Row Compression uses a unique compression algorithm specifically designed to work with OLTP applications. The algorithm works by eliminating duplicate values within a database block, even across multiple columns. Compressed blocks contain a structure called a symbol table that maintains compression metadata. When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table. Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table.
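
As a minimal sketch of enabling Advanced Row Compression directly (reusing the employee table from the ADO examples above; the employee_hist name is invented for illustration), note that altering an existing table only compresses future DML unless the segment is rebuilt:

-- New tables can be created compressed
CREATE TABLE employee_hist ROW STORE COMPRESS ADVANCED
AS SELECT * FROM employee;

-- Existing tables: compress rows touched by future DML...
ALTER TABLE employee ROW STORE COMPRESS ADVANCED;

-- ...or rebuild the segment to compress the rows already in place
ALTER TABLE employee MOVE ROW STORE COMPRESS ADVANCED;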

File Compression

Consider an email application where 10 users receive an email with the same 1MB attachment. Without Advanced LOB Deduplication, the system would store one copy of the file for each of the 10 users – requiring 10MB of storage. If the email application in our example uses Advanced LOB Deduplication, it will store the 1MB attachment just once. That’s a 90% savings in storage requirements.

In addition to the storage savings, Advanced LOB Deduplication also increases application performance. Specifically, write and copy operations are much more efficient since only references to the SecureFiles data are written. Further, read operations may improve if duplicate SecureFiles data already exists in the buffer cache.
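
A hedged sketch of what the email example might look like with SecureFiles deduplication (and, optionally, Advanced LOB Compression); the table and column names are assumptions made for illustration:

-- Attachments stored as SecureFiles LOBs with deduplication and compression
CREATE TABLE email_attachment (
  msg_id      NUMBER,
  recipient   VARCHAR2(100),
  attachment  BLOB
)
LOB (attachment) STORE AS SECUREFILE (
  DEDUPLICATE
  COMPRESS HIGH
);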

Backup data compression

RMAN makes a block-by-block backup of the database data, also known as a “physical” backup, which can be used to perform database, tablespace or block level recovery. Data Pump is used to perform a “logical” backup by offloading data from one or more tables into a flat file.

Due to RMAN’s tight integration with Oracle Database, backup data is compressed before it is written to disk or tape and doesn’t need to be uncompressed before recovery – providing an enormous reduction in storage costs and a potentially large reduction in backup and restore times. There are three levels of RMAN Compression: LOW, MEDIUM, and HIGH. The amount of storage savings increases from LOW to HIGH, while potentially consuming more CPU resources.

Data Pump compression is an inline operation, so the reduced dump file size means a significant savings in disk space. Unlike operating system or file system compression utilities, Data Pump compression is fully inline on the import side as well, so there is no need to decompress a dump file before importing it. The compressed dump file sets are automatically decompressed during import without any additional steps by the Database Administrator.

Data Guard redo log compression

Data Guard Redo Transport Services are used to transfer this redo data to the standby site(s). With Advanced Compression, redo data may be transmitted in a compressed format to reduce network bandwidth consumption and in some cases reduce transmission time of redo data. Redo data can be transmitted in a compressed format when the Oracle Data Guard configuration uses either synchronous redo transport (SYNC) or asynchronous redo transport (ASYNC).
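
As an illustrative sketch (the standby service name standby_db is hypothetical), redo transport compression is enabled per destination through the COMPRESSION attribute of the LOG_ARCHIVE_DEST_n parameter:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db ASYNC NOAFFIRM COMPRESSION=ENABLE' SCOPE=BOTH;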

Index Compression

Advanced Index Compression is a new form of index block compression. Creating an index using Advanced Index Compression reduces the size of all supported unique and non-unique indexes -- while still providing efficient access to the indexes. Advanced Index Compression works well on all supported indexes, including those that are not good candidates for the existing index Prefix Compression feature (indexes with no duplicate values, or few duplicate values, for a given number of leading columns of the index).
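
A minimal sketch of Advanced Index Compression follows; the index and column names are assumptions for illustration, and COMPRESS ADVANCED LOW is the form introduced in 12.1:

-- Create a new index with Advanced Index Compression
CREATE INDEX emp_name_ix ON employee (last_name, first_name) COMPRESS ADVANCED LOW;

-- Or rebuild an existing index to compress it
ALTER INDEX emp_name_ix REBUILD COMPRESS ADVANCED LOW;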

Network Compression

Advanced Network Compression, also referred to as SQL Network Data Compression, can be used to compress the network data to be transmitted at the sending side and then uncompress it at the receiving side to reduce the network traffic. Advanced Network Compression reduces the size of the session data unit (SDU) transmitted over a data connection. Reducing the size of data reduces the time required to transmit the SDU.

Advanced Network Compression not only makes SQL query responses faster but also saves bandwidth. On narrow bandwidth connections, with faster CPU, it could significantly improve performance. The compression is transparent to client applications.

We won't cover the last two options since they don't apply to database services in the cloud unless you purchase the Exadata as a Service option. There is a Compression Estimation Tool to help you estimate the benefits of compression. A sample of this looking at 100 TB of database data shows a significant cost savings in the millions of dollars.

There is also a Compression Advisor that can be downloaded and installed in your database to look at your tables and estimate how much storage you can save based on your data and your usage patterns. You can watch a four-minute marketing video on the tool and how to use it. I recommend Tyler Mouth's blog entry on customizing the output of the compression advisor to be a little more user-friendly. I would also look at Mike Haas's blog on compression and the DBAORA blog that provides a good overview of 11g compression. Mike Messin's blog is a good resource on installing and executing the compression advisor.

In summary, compression can be used with a variety of mechanisms based on your usage patterns and objectives. This option is not one-size-fits-all and requires a DBA with knowledge of the usage patterns and familiarity with the data and applications. Letting a non-DBA decide on the compression mechanism can lead to poor performance, missed recovery time objectives, more network traffic, and higher processor utilization than necessary. The Database 12c Compression Documentation details how to create tables that are compressed, how to check whether tables are compressed, and how to update tables for compression. Compression is a mechanism that can directly reduce your storage costs by consuming a significantly smaller amount of storage to store the same data. In the cloud this correlates directly to storage cost savings. You get compression as an option for the High Performance Edition and Extreme Performance Edition but not the Standard Edition or Enterprise Edition versions of the database.

ORA-01110 data file %s

VitalSoftTech - Tue, 2016-06-07 19:42
What is the cause of the error "ORA-01110 data file %s"? How do I resolve this?
Categories: DBA Blogs

June 29: MoneyGram International—Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Tue, 2016-06-07 18:52

Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, June 29, 2016 at 9:00 am PDT.

Paula Peacher, Senior Director, Global Payroll and HRIS at MoneyGram International, will discuss MoneyGram's decision to streamline its payroll, core HR, and recruitment processes so employees could have better access to data, align employee execution with corporate strategy, and improve HR regulatory compliance.

Paula also will explain MoneyGram's move from Ceridian to Oracle HCM Cloud to provide its employees a better user experience and to increase employee reporting capabilities, productivity, and participation in company results.

MoneyGram selected Oracle HCM Cloud with the following modules: Core HR, US Payroll, Workforce Compensation, Performance and Goal Management, Talent Review and Succession, Recruiting and Onboarding.

MoneyGram now offers services at approximately 350,000 agent locations in more than 200 countries.

Register now to attend the live Forum and learn more about MoneyGram International’s experience with Oracle HCM Cloud.

June 28: Inspirage—Oracle ERP and Sales Clouds Customer Forum

Linda Fishman Hoyle - Tue, 2016-06-07 18:52

Join us for another Oracle Customer Reference Forum on June 28, 2016, at 9:00 a.m. PDT. 

Del Clark, Chief Financial Officer at Inspirage, will talk about Inspirage’s business drivers to implement Oracle ERP Cloud and Oracle Sales Cloud. Get details of the content and a registration link for this live Forum.

Automated Configuration Management in PeopleSoft

PeopleSoft Technology Blog - Tue, 2016-06-07 18:50

Content Contributed by Biju Narayanan

This post is the first in a series on Configuration Management

Configuration the Old Way

PeopleSoft system administrators have generally spent a good deal of time on manual, repetitive tasks while setting up and configuring their PeopleSoft systems. When you set up a new PeopleSoft demo environment or upgrade or clone environments, considerable effort is spent on configuration-related activities. This often involves going through hundreds of user interface pages to manually configure functional components like Integration Broker, Secure Enterprise Search, Process Scheduler, and so on. Testing for correctness is also required. If you cloned or made copies of system configurations, they needed to be redone. Host name entries are spread across many configurations, from the web server to the app server tier and in many objects within the database, which adds to the challenge.

What We've Done to Improve the Process

With PeopleSoft 8.55, configuration is centralized with the Cloud Deployment Architecture (CDA).  Deployment Packages (DPK) are a new delivery mechanism provided with PeopleTools 8.55 (for use with Linux and Windows operating systems). Deployment Packages contain all the software created by PeopleSoft as well as the required components pre-installed and fully patched, along with a deployment and configuration automation system. In addition, all Update Images delivered after 8.55 will also take advantage of the Deployment Framework and will be composed of Deployment Packages.  DPKs have three parts:

  • Binaries based on platform: middleware binaries for the application server runtime (Tuxedo), web servers (WebLogic), PeopleSoft binaries (PS_HOME), the database, client binaries, etc.
  • Puppet-based automation for installation and deployment of middleware - The PeopleSoft Puppet modules provide automation for setting up the middleware, deployment of PeopleTools binaries, and configuration of Application Server and WebServer domains.  (For detailed information on Puppet see puppet.com and docs.puppetlabs.com)
  • ACM - Automated Configuration Manager to configure PeopleSoft components like Integration Broker, Secure Search, Process Scheduler etc.


With DPKs, configuration is centralized in Puppet configuration files and can be customized by overriding configuration attributes in psft_customizations.yaml.

Automating PeopleSoft Configuration Using ACM

Automated Configuration Manager (ACM) is a tool set available beginning with PeopleTools 8.54.  It is used to automate PeopleTools product configuration. ACM allows you to externalize product configuration as a template and then use the template to automate configurations across systems. ACM comes with graphical user interfaces,  command-line tools, and a suite of configuration plugins that help administrators configure the base PeopleSoft components.



PeopleTools 8.55 delivers a number of predefined templates that you can start with.



The other functional plugins include the following:

  • Integration Broker
    • Rename local node, purge messages
    • Configure default local node, set target location, configure content/portal URI
    • Configure gateway URL, load gateway connectors, configure nodes in the gateway
    • Set advanced gateway properties like keystore password, proxy/non proxy information
    • Activate IB Domains and Queues
    • Cleanup
  • Secure Enterprise Search
    • Configure Search Instance
    • Configure Identity plugin and proxy login
    • Deploy/Undeploy, Index search definitions
    • Cleanup Meta-data
  • Process Scheduler
    • Report Node configurations
    • Server Definition configuration
  • Cluster
    • Network Node Registration
    • Single Sign-On configuration
    • Unified Navigation configuration, creation of remote folders
    • Import SES search indexes to Portal environment
  • Approvals
    • Enable SSL configuration for MAP layouts
    • MAP approval Svc Install configuration
    • Introspection and Deployment of IB services
  • Miscellaneous Utilities
    • Performance Monitor Configurations
    • Grant PTF roles to users
    • Setting URL identifiers, URL properties, and their values
    • Web Profile related configurations
Working with ACM

An administrator typically performs the following process to develop a configuration automation template. The template contains configuration units, and the sequence and dependencies in which the configuration has to be automated. After the template is developed, it is tested and used as a base version to automate other environments.

Pre-Requisites for Executing ACM
  1. User should have "ACM Administrator" Role:  Navigate to Main Menu > PeopleTools > Security > User Profiles > User Profiles, then add the "ACM Administrator" role for the corresponding user.
  2. The newly created Template should have sufficient privilege: Navigate to Main Menu > PeopleTools > Security > Permissions and Roles > Permission Lists, then choose permission list "PTPT4800", then add the newly created template to the list present in the "ACM Templates" tab.
The Process
  1. Design a Template: Automate functional units of configuration. You can group configuration plugins, set configuration attributes, and sequence them to automate functional parts. This can be done from PeopleTools > Automation Configuration Manager > Templates



  2. Execute the Template: The template can be executed in three ways:
    1. Interactive: Navigate to: PeopleTools > Automation Configuration Manager > Templates
    2. Command line: Administrators generally prefer to automate configurations by running scripts. This is the recommended method for automating configurations.



    3. DPK Puppet: Set the configuration in Puppet modules prior to running the deployment scripts. You can specify your entire configuration in the "/etc/puppet/data/psft_configuration.yaml" file before running the DPK automation process. The code block below shows how to set the IB configuration in your deployment configuration.

integration_broker:
    run_control_id: intbroker
    os_user: '%{hiera(''domain_user'')}'
    db_settings: {
      db_name: '%{hiera(''db_name'')}',
      db_type: '%{hiera(''db_platform'')}',
      db_connect_id: '%{hiera(''db_connect_id'')}',
      db_connect_pwd: '%{hiera(''db_connect_pwd'')}'
    }
    acm_plugin_list:
      PTIBRenameNode: {
        env.default_local_node: '%{hiera(''gateway_node_name'')}',
        env.app_msg_purge_all_dms: true
      }
      PTIBConfigureDBNode: {
        env.pia_webserver_host: '%{::fqdn}',
        env.pia_webserver_port: '%{hiera(''pia_http_port'')}',
        env.pia_webserver_ssl_port: '%{hiera(''pia_https_port'')}',
        env.pia_site_name: '%{hiera(''pia_site_name'')}'
      }
      # remaining plugin attributes omitted from this excerpt

   3.  Test and Debug: After the execution, you can monitor and debug it by navigating to PeopleTools > Automation Configuration Manager > Monitor



After you generate a working version of the template, you can keep it as a configuration baseline and use it to clone configurations across multiple environments.  The template is the source of truth for configuration, and ACM allows you to re-run the configuration any number of times.


Tutorial: Using Oracle EBS Adapter in Integration Cloud Service

Steven Chan - Tue, 2016-06-07 13:46

Oracle Integration Cloud Service (ICS) is Oracle’s integration Platform as a Service (iPaaS). It provides a web-based interface to integrate Software as a Service (SaaS) and on-premise applications. It includes various technology and application adapters.

Oracle E-Business Suite Adapter from Integration Cloud Service provides native and secure connectivity to an Oracle E-Business Suite instance. It lets you create integrations with EBS public integration interfaces. As of ICS 16.2.5, the EBS Adapter supports: 

  • EBS R12.1.3, R12.2.3, R12.2.4 and R12.2.5.
  • PL/SQL-based REST Services for outbound integrations from ICS (also called inbound integrations to EBS).
  • Business Events and XML Gateway Messages for inbound integrations to ICS (also called outbound integrations from EBS).
  • Custom interfaces registered in Integration Repository.
  • Connectivity via an on-premise Agent. If EBS REST Services are directly accessible from ICS, the on-premise Agent need not be configured. 

Screenshot of Adapter video

Here is a tutorial on using EBS Adapter for integrations in Integration Cloud Service:

It highlights the steps required to utilize the Business Event information from EBS. It is based on an Order-to-Manual Invoice scenario. When a Sales Order is booked in EBS Order Management, EBS raises a business event with the Header ID and Status of the corresponding Sales Order. This information is used to retrieve details of the Sales Order. A manual invoice is then created in EBS Accounts Receivable for this Sales Order.

This tutorial uses the EBS Adapter as a Trigger (Source) as well as an Invoke (Target) connection. A Business Event is used to trigger the ICS integration, whereas a PL/SQL-based REST Service is used to retrieve or fetch Sales Order details from EBS Order Management. It uses a REST Adapter to create the manual invoice in EBS Accounts Receivable.


Categories: APPS Blogs

Quiz Night

Jonathan Lewis - Tue, 2016-06-07 13:35

Here’s an execution plan from a recent OTN database forum posting:

 
------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT          |                    |     1 |   231 |   160   (6)| 00:00:02 |
|   1 |  UPDATE                   | GS_TABLE           |       |       |            |          |
|*  2 |   HASH JOIN SEMI          |                    |     1 |   231 |   130   (0)| 00:00:02 |
|*  3 |    TABLE ACCESS FULL      | GS_TABLE           |     5 |   895 |   123   (0)| 00:00:02 |
|*  4 |    TABLE ACCESS FULL      | UPDATEDPROGRAMCODE |   850 | 44200 |     7   (0)| 00:00:01 |
|*  5 |   VIEW                    |                    |    11 |  2024 |     8  (13)| 00:00:01 |
|*  6 |    WINDOW SORT PUSHED RANK|                    |    11 |   440 |     8  (13)| 00:00:01 |
|*  7 |     FILTER                |                    |       |       |            |          |
|*  8 |      TABLE ACCESS FULL    | UPDATEDPROGRAMCODE |    11 |   440 |     7   (0)| 00:00:01 |
|   9 |   VIEW                    |                    |   850 |  1138K|     9  (23)| 00:00:01 |
|  10 |    SORT ORDER BY          |                    |   850 |   685K|     9  (23)| 00:00:01 |
|* 11 |     VIEW                  |                    |   850 |   685K|     8  (13)| 00:00:01 |
|  12 |      WINDOW SORT          |                    |   850 | 47600 |     8  (13)| 00:00:01 |
|* 13 |       FILTER              |                    |       |       |            |          |
|* 14 |        TABLE ACCESS FULL  | UPDATEDPROGRAMCODE |   850 | 47600 |     7   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Unfortunately the originator of this plan declined to show us the query or answer any questions about where the work was going, but did confirm a speculative comment I had made that the instance was 12c. So the question is this: can you spot what it was that made me think that the plan came from 12c ?

I have to say, by the way, that there may be ways to get this plan from 11g, it was just that my first impression was that it was probably 12c and I didn’t attempt to come up with a way of getting a similar plan from 11g. (And, as far as the general shape of the plan is concerned, I can think of two different types of query that could produce it.)

Footnote

You are allowed to prove me wrong.

Answer

(Which might be me showing ignorance rather than inspiration)

The basic shape of the plan suggests to me that the query is of the form:

update gs_table 
set     col1 = (select from updatedprogramcode),
        col2 = (select from updatedprogramcode)
where   exists (select from updatedprogramcode)         -- possibly "where IN (subquery)"
;

There are a couple of variations in how the “set” subquery content might vary, and I’ll write up a short blog about that later.

Having noted this basic shape, I then noted that the subqueries involved analytic functions – as indicated by the WINDOW SORT operations; moreover one of them used a PUSHED RANK option and the other was embedded in a non-mergeable VIEW (operation 11). Updates with subqueries generally involve correlated columns – and prior to 12c there are some restrictions on how far up the tree the correlation can go. Here’s a sample query (using two tables that I’ve cloned from all_objects) to demonstrate:


update t1 set
        data_object_id = (
                select  objno
                from    (
                        select
                                object_id objno,
                                row_number() over (order by  object_id desc) rn
                        from
                                t2
                        where
                                t2.object_type = t1.object_type
                        )
                where rn = 1
        )
/

We need to embed the inner select statement in an inline view because we want to use the result of the row_number() analytic function in a filter predicate, but in Oracle 11g the reference to t1.object_type can’t correlate back to the outer t1 table, while in 12c this isn’t a problem. Here’s the 12c plan, followed by the 11g error:


12c Plan (autotrace)
----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT          |      |  5000 | 45000 | 70012  (15)| 00:04:34 |
|   1 |  UPDATE                   | T1   |       |       |            |          |
|   2 |   TABLE ACCESS FULL       | T1   |  5000 | 45000 |    12   (0)| 00:00:01 |
|*  3 |   VIEW                    |      |     1 |    26 |    13   (8)| 00:00:01 |
|*  4 |    WINDOW SORT PUSHED RANK|      |   556 |  6116 |    13   (8)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL     | T2   |   556 |  6116 |    12   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("RN"=1)
   4 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("OBJECT_ID")
              DESC )<=1)
   5 - filter("T2"."OBJECT_TYPE"=:B1)


11g Error
---------
                                t2.object_type = t1.object_type
                                                 *
ERROR at line 11:
ORA-00904: "T1"."OBJECT_TYPE": invalid identifier

Notice, by the way that my predicate “rn = 1” has resulted in the WINDOW SORT PUSHED RANK that appeared in the original plan.

In case I haven’t said it enough times: this is a just a rapid inference I drew from looking briefly at the plan and I haven’t tried hard to work out whether there is a way to get a plan like this in 11g. It was nice being proved right by the follow-up post from the OP, but my guess may have been right by accident – I’d rather be proved wrong than carry on thinking I’d got it right when I hadn’t … so feel free to supply an example in the comments.

 


CREATE AN INVISIBLE INDEX ON A TABLE

Learn DB Concepts with me... - Tue, 2016-06-07 12:50
INVISIBLE INDEX:

Oracle 11g gives us the ability to create indexes that can be marked as invisible. Invisible indexes are maintained like any other index, but they are ignored by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level.

CREATE AN INVISIBLE INDEX:

CREATE INDEX INV_IDX_OS_USR ON TEST_IDX (ID) INVISIBLE;

Let's check the newly created index:

SQL> select OWNER,INDEX_NAME,TABLE_OWNER,TABLE_NAME,VISIBILITY from all_indexes where index_name='INV_IDX_OS_USR';

OWNER      INDEX_NAME        TABLE_OWNER   TABLE_NAME   VISIBILITY
---------  ----------------  ------------  -----------  ----------
ATOORPU    INV_IDX_OS_USR    ATOORPU       TEST_IDX     INVISIBLE


A USER CAN'T MAKE USE OF AN INVISIBLE INDEX UNTIL IT IS MADE VISIBLE IN THAT SESSION. LET'S SEE IF WE CAN USE THE INVISIBLE INDEX WITHOUT ENABLING IT IN THE OPTIMIZER:

select /*+ index(TEST_IDX INV_IDX_OS_USR) */ * from TEST_IDX where ID=284;




MAKING AN INDEX VISIBLE IN CURRENT SESSION:

ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;
select /*+ index(TEST_IDX INV_IDX_OS_USR) */ * from TEST_IDX where ID=284;







MAKING AN INDEX INVISIBLE IN CURRENT SESSION:

ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE;

select /*+ index(TEST_IDX INV_IDX_OS_USR) */ * from TEST_IDX where ID=284;

-->> You do not have to provide any hint for Oracle to use the index. The hint is included here just to make sure Oracle uses it.

select  * from TEST_IDX where ID=284;        -->> Same as above



MAKING AN INDEX INVISIBLE OR VISIBLE:


Indexes can be created as invisible by using the INVISIBLE keyword at the end, and their visibility can be managed using the ALTER INDEX command.

TO MAKE AN EXISTING INDEX INVISIBLE USE BELOW SYNTAX: 

ALTER INDEX index_name INVISIBLE;

TO MAKE AN EXISTING INDEX VISIBLE USE BELOW SYNTAX: 


ALTER INDEX index_name VISIBLE;
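
The OPTIMIZER_USE_INVISIBLE_INDEXES parameter can also be flipped for the whole instance rather than a single session (use with care, since every session can then use invisible indexes); a sketch:

ALTER SYSTEM SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;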


Categories: DBA Blogs


“Why 'Fast Follower' Is Yesterday's Tech And Business Strategy”

Linda Fishman Hoyle - Tue, 2016-06-07 11:29
In this Forbes article, Oracle's Rob Preston, Editorial Director (pictured left), challenges companies to consider whether they can afford to be fast followers, rather than first movers, when it comes to innovating and adopting technology.

It used to be that being first to adopt, innovate, or change was too dicey for many companies. Being a follower reduced one's risk while others worked out the bugs; however, today, according to Sonny Singh, SVP and general manager of Oracle’s financial services global business unit, that strategy is “incomprehensible” because of all the disruption going on around us.

No Need to Boil the Ocean

Preston says being a first mover is not about being the very first to innovate. It's about investing in the “tools, infrastructure, processes, and talent" to create a steady flow of innovation. There’s still plenty of room (and advantages) to be a first mover in areas such as mobile and data analytics. Otherwise, Singh contends you could miss “an entire generation of technology, an entire customer base, or an entire trend."

Takeaway: Being a fast follower isn’t really an option in today’s fast-paced digital world.

Read the article: Why 'Fast Follower' Is Yesterday's Tech And Business Strategy

Oracle JET Executing Dynamic ADF BC REST URL

Andrejus Baranovski - Tue, 2016-06-07 10:56
I'm going to explain how to call an ADF BC REST service from the JET API and how to control the parameters passed through to the REST URL. Most of the time REST should return data based on different parameter values; the URL is not static. I will be exploring two cases - search functionality, where the parameter value is entered by the user, and LOV functionality, where the current employee key is submitted to fetch a colleagues list.

You can go through the slides from my AMIS25 session - Oracle JET CRUD and ADF BC REST. I explain similar concepts:


Download sample application (you must run ADF BC REST application in JDEV 12.2.1 and JET in NetBeans 8) - JETCRUDApp_v9.zip.

1. Filter implementation

We should take a look first, how filter/search functionality is implemented. User enters keyword for Last Name and filters data in JET table:


The user can reset the search results and change the filter criteria. Filtering is done with the startswith operator by the ADF BC VO (it creates a temporary bind variable from the REST URL parameter q=LastName LIKE value):


Filter block is implemented in a separate HTML div. Filter value is mapped with observable variable registered in JET View Model:


The table is based on a collection with a dynamic REST URL. The URL is calculated through the customURL property (referencing a JavaScript method) in the collection. On the filter event, we clear up all dependent data and call the JET collection API method - refresh(). This ensures a reload of the collection and a re-calculation of the REST URL. The same is done on reset:


The getURL method is responsible for reading the filter value and constructing the REST URL accordingly (this will be used to re-fetch the collection content):


Here you can see generated REST request URL logged by NetBeans Network Monitor. This URL contains filter information supplied through dynamic REST URL defined for the collection:


2. LOV data fetch implementation

There is an LOV in ADF BC which returns the current employee's colleagues (employees with the same job, excluding the employee himself). ADF BC REST encapsulates the parameter complexity and allows us to simplify the REST request. The JET UI renders the LOV data in a chart:


Each time an employee record is selected, we execute a dynamic REST call (with the new employee ID) to fetch the colleagues collection:


This is chart definition in JET UI, it references observable variables for series/groups from JET View Model:


As in the Filter case, we supply a dynamic URL for the collection using the customURL property:


Unlike the Filter implementation, here we don't refresh the collection. We execute the JET API method fetch(...) to retrieve a new collection and push its contents into the observable variables referenced by the chart component in the UI:
