Feed aggregator

Goodbye Oracle (Modern Love)

Richard Foote - Mon, 2017-08-14 02:39
  It’s with a mixture of excitement, trepidation and a touch of sorrow that I today handed in my resignation after 6 years at Oracle Corporation. My plans at this stage are to take a bit of time-off initially, before going back to providing Oracle Database consulting and training services as I’ve done in the […]
Categories: DBA Blogs

TDE local auto-login key question

Tom Kyte - Sun, 2017-08-13 15:26
How does the TDE local auto-login key identify the machine that it has been created on? If I create it on a vm with a system name (of the vm) of my_test_ora01, does TDE use the system name? the ip address assigned to the system? or the vm host syst...
Categories: DBA Blogs

Prebuilt BI Contents should replace BI Tools

Dylan's BI Notes - Sun, 2017-08-13 09:03
Most school districts need the same kind of reports and dashboards for measuring the performance of students, teachers, and schools. They do not really need IT to build reports for them if the vendors can provide the reports OOTB. There is hardly a need for a custom reporting tool for building […]
Categories: BI & Warehousing

First step to microservices: CoreOS on VirtualBox (1)

Dietrich Schroff - Sun, 2017-08-13 03:41
I read some articles about microservice architecture and about Kubernetes / Apache Mesos.
My conclusion from these articles is that lightweight servers are a key point of a microservice architecture. Just to learn how tiny such an OS can be, I will try to install CoreOS inside VirtualBox.

There is an easy way to get CoreOS running ("booting CoreOS container linux from ISO"), but there are some limitations stated on the CoreOS website:

  1. There is no straightforward way to provide an Ignition config.
  2. A minimum of 1024M of RAM is required to boot Container Linux via ISO.
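For context, an Ignition config is just a JSON document handed to the machine on first boot. A minimal sketch could look like the following (the SSH key is a placeholder, and the exact schema depends on the Ignition version):

```json
{
  "ignition": { "version": "2.0.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [ "ssh-rsa AAAA...placeholder user@host" ]
      }
    ]
  }
}
```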
So just download the ISO image coreos_production_iso_image.iso and create a VirtualBox VM. Then mount coreos_production_iso_image.iso as the boot CD/DVD, and here we go:

At this point we have something like a live CD running: nothing was installed on the HDD, and it is not possible to unmount the ISO image.

How to get from this point to an installed operating system on disk, I will show in a follow-up posting.

WebLogic – Cannot register for disconnect events on local server

Yann Neuhaus - Sun, 2017-08-13 03:15

When working with WebLogic, there will probably be a moment when you ask yourself: damn, what is wrong? Nowadays, software is so complex that it is quite easy to introduce bugs or unwanted behaviour. In this blog, I will present a small thing that just blew my mind when I faced it: I had a fully working WLS on which I executed a WLST script to configure the SSL ciphers for the Admin Server as well as the Managed Servers. After this, the WLS wasn't able to start anymore, but this had nothing to do with the WLST script: it didn't contain any error and it did what I expected. To stay generic, to update the ciphers of all Managed Servers/Domains of a single server, and because we manage more than one hundred WebLogic Servers at this customer, an automated deployment was really necessary.

Among all these WLS, a few contain two domains on the same server (with different IPs/DNS aliases, of course). To handle this case, we had to use a variable that points to the Admin Server in order to connect to it and execute the WLST script against it. The name of the variable we defined was, of course, "ADMIN_URL". This is the name used by WebLogic in its configuration files to point to the Admin Server, so what better name than this? Well actually, anything would have been better, since this is what caused the issue…

 

So let’s demonstrate this. First, I’m just trying to start the Admin Server without the variable defined:

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/bin
[weblogic@weblogic_server_01 bin]$ 
[weblogic@weblogic_server_01 bin]$ ./startWebLogic.sh
.
.
JAVA Memory arguments: -Xms256m -Xmx512m -XX:MaxPermSize=256m
.
CLASSPATH=$ORACLE_HOME/wlserver/server/lib/jcmFIPS.jar:$ORACLE_HOME/wlserver/server/lib/sslj.jar:$ORACLE_HOME/wlserver/server/lib/cryptoj.jar::$JAVA_HOME/lib/tools.jar:$ORACLE_HOME/wlserver/server/lib/weblogic_sp.jar:$ORACLE_HOME/wlserver/server/lib/weblogic.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar:$ORACLE_HOME/wlserver/modules/features/oracle.wls.common.nodemanager_2.0.0.0.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/com.oracle.cie.config-wls-online_8.1.0.0.jar:$ORACLE_HOME/wlserver/common/derby/lib/derbyclient.jar:$ORACLE_HOME/wlserver/common/derby/lib/derby.jar:$ORACLE_HOME/wlserver/server/lib/xqrl.jar:$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar:
.
PATH=$ORACLE_HOME/wlserver/server/bin:$ORACLE_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$DOMAIN_HOME/D2/lockbox:$DOMAIN_HOME/D2/lockbox/lib/native/linux_gcc34_x64:$JAVA_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin
.
***************************************************
* To start WebLogic Server, use a username and    *
* password assigned to an admin-level user. For   *
* server administration, use the WebLogic Server  *
* console at http://hostname:port/console         *
***************************************************
starting weblogic with Java version:
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
Starting WLS with line:
$JAVA_HOME/bin/java -server -Xms256m -Xmx512m -XX:MaxPermSize=256m -Dweblogic.Name=AdminServer -Djava.security.policy=$ORACLE_HOME/wlserver/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -Ddomain.home=$DOMAIN_HOME -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.nodemanager.ServiceEnabled=true -Djava.endorsed.dirs=$JAVA_HOME/jre/lib/endorsed:$ORACLE_HOME/wlserver/../oracle_common/modules/endorsed -da -Dwls.home=$ORACLE_HOME/wlserver/server -Dweblogic.home=$ORACLE_HOME/wlserver/server -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
<Aug 5, 2017 1:37:30 PM UTC> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Aug 5, 2017 1:37:30 PM UTC> <Notice> <WebLogicServer> <BEA-000395> <The following extensions directory contents added to the end of the classpath:
$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar.>
<Aug 5, 2017 1:37:30 PM UTC> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 25.102-b14 from Oracle Corporation.>
<Aug 5, 2017 1:37:31 PM UTC> <Info> <Management> <BEA-141107> <Version: WebLogic Server 12.1.3.0.0 Wed May 21 18:53:34 PDT 2014 1604337 >
<Aug 5, 2017 1:37:32 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
<Aug 5, 2017 1:37:32 PM UTC> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.>
<Aug 5, 2017 1:37:32 PM UTC> <Info> <WorkManager> <BEA-002942> <CMM memory level becomes 0. Setting standby thread pool size to 256.>
<Aug 5, 2017 1:37:33 PM UTC> <Notice> <Log Management> <BEA-170019> <The server log file $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.>
<Aug 5, 2017 1:37:34 PM UTC> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<Aug 5, 2017 1:37:36 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY.>
<Aug 5, 2017 1:37:36 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
Aug 5, 2017 1:37:36 PM weblogic.wsee.WseeCoreMessages logWseeServiceStarting
INFO: The Wsee Service is starting
<Aug 5, 2017 1:37:37 PM UTC> <Warning> <Munger> <BEA-2156227> <The "META-INF/application.xml" deployment descriptor for the "consoleapp" module is in DTD format. Overrides from the deployment plan will NOT be applied since only deployment descriptors in XML Schema format are supported.>
<Aug 5, 2017 1:37:37 PM UTC> <Warning> <Munger> <BEA-2156227> <The "WEB-INF/weblogic.xml" deployment descriptor for the "consolehelp" module is in DTD format. Overrides from the deployment plan will NOT be applied since only deployment descriptors in XML Schema format are supported.>
<Aug 5, 2017 1:37:38 PM UTC> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias mycert from the JKS keystore file $DOMAIN_HOME/certs/identity.jks.>
<Aug 5, 2017 1:37:38 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the JKS keystore file $DOMAIN_HOME/certs/trust.jks.>
<Aug 5, 2017 1:37:39 PM UTC> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on weblogic_server_01:8443 for protocols iiops, t3s, ldaps, https.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000329> <Started the WebLogic Server Administration Server "AdminServer" for domain "DOMAIN" running in production mode.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING.>

^C
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000388> <JVM called the WebLogic Server shutdown hook. The server will force shutdown now.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000396> <Server shutdown has been requested by <WLS Kernel>.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SUSPENDING.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <Server> <BEA-002607> <Channel "DefaultSecure", listening on weblogic_server_01:8443, was shut down.>
[weblogic@weblogic_server_01 bin]$

 

As you can see above, the Admin Server starts properly. Once it reached the RUNNING state, I just stopped it (CTRL+C) to continue the demo.

The next step is to find the current value of "ADMIN_URL". This variable is used in the files stopWebLogic.sh, stopManagedWebLogic.sh and startManagedWebLogic.sh: to be able to stop these components, the address of the Admin Server is needed. The funny thing is that two of these three files use the "t3s" protocol while the third one uses "https" (or t3/http if SSL is not enabled). Once you have this value, you define your environment variable using this exact same value (manual setup or a more automated setup):

[weblogic@weblogic_server_01 bin]$ echo $ADMIN_URL

[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ grep -E 'ADMIN_URL="[th]' *.sh
startManagedWebLogic.sh:	ADMIN_URL="https://weblogic_server_01:8443"
stopManagedWebLogic.sh:		ADMIN_URL="t3s://weblogic_server_01:8443"
stopWebLogic.sh:			ADMIN_URL="t3s://weblogic_server_01:8443"
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ export ADMIN_URL="https://weblogic_server_01:8443"
[weblogic@weblogic_server_01 bin]$ # or
[weblogic@weblogic_server_01 bin]$ export ADMIN_URL=`grep -E 'ADMIN_URL="[th]' stopWebLogic.sh | sed 's,ADMIN_URL="\([^"]*\)",\1,'`
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ echo $ADMIN_URL
t3s://weblogic_server_01:8443
[weblogic@weblogic_server_01 bin]$

 

At this point, I defined the ADMIN_URL variable using the t3s protocol. You might think that this wouldn't affect the start of our domain: we are just defining a variable that exists in the WebLogic shell scripts, with the exact same value… But note that this variable isn't set in the file "startWebLogic.sh" itself. Once this is done, we can reproduce the issue. For that, simply try to start the Admin Server again:

[weblogic@weblogic_server_01 bin]$ ./startWebLogic.sh
.
.
JAVA Memory arguments: -Xms256m -Xmx512m -XX:MaxPermSize=256m
.
CLASSPATH=$ORACLE_HOME/wlserver/server/lib/jcmFIPS.jar:$ORACLE_HOME/wlserver/server/lib/sslj.jar:$ORACLE_HOME/wlserver/server/lib/cryptoj.jar::$JAVA_HOME/lib/tools.jar:$ORACLE_HOME/wlserver/server/lib/weblogic_sp.jar:$ORACLE_HOME/wlserver/server/lib/weblogic.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar:$ORACLE_HOME/wlserver/modules/features/oracle.wls.common.nodemanager_2.0.0.0.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/com.oracle.cie.config-wls-online_8.1.0.0.jar:$ORACLE_HOME/wlserver/common/derby/lib/derbyclient.jar:$ORACLE_HOME/wlserver/common/derby/lib/derby.jar:$ORACLE_HOME/wlserver/server/lib/xqrl.jar:$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar:
.
PATH=$ORACLE_HOME/wlserver/server/bin:$ORACLE_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$DOMAIN_HOME/D2/lockbox:$DOMAIN_HOME/D2/lockbox/lib/native/linux_gcc34_x64:$JAVA_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin
.
***************************************************
* To start WebLogic Server, use a username and    *
* password assigned to an admin-level user. For   *
* server administration, use the WebLogic Server  *
* console at http://hostname:port/console         *
***************************************************
starting weblogic with Java version:
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
Starting WLS with line:
$JAVA_HOME/bin/java -server -Xms256m -Xmx512m -XX:MaxPermSize=256m -Dweblogic.Name=AdminServer -Djava.security.policy=$ORACLE_HOME/wlserver/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -Ddomain.home=$DOMAIN_HOME -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.nodemanager.ServiceEnabled=true -Djava.endorsed.dirs=$JAVA_HOME/jre/lib/endorsed:$ORACLE_HOME/wlserver/../oracle_common/modules/endorsed -da -Dwls.home=$ORACLE_HOME/wlserver/server -Dweblogic.home=$ORACLE_HOME/wlserver/server -Dweblogic.management.server=t3s://weblogic_server_01:8443 -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
<Aug 5, 2017 1:40:04 PM UTC> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Aug 5, 2017 1:40:04 PM UTC> <Notice> <WebLogicServer> <BEA-000395> <The following extensions directory contents added to the end of the classpath:
$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar.>
<Aug 5, 2017 1:40:04 PM UTC> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 25.102-b14 from Oracle Corporation.>
<Aug 5, 2017 1:40:05 PM UTC> <Warning> <Security> <BEA-090924> <JSSE has been selected by default, since the SSLMBean is not available.>
<Aug 5, 2017 1:40:05 PM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
<Aug 5, 2017 1:40:05 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $ORACLE_HOME/wlserver/server/lib/DemoTrust.jks.>
<Aug 5, 2017 1:40:06 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $JAVA_HOME/jre/lib/security/cacerts.>
<Aug 5, 2017 1:40:06 PM UTC> <Info> <Management> <BEA-141298> <Could not register with the Administration Server: java.rmi.RemoteException: [Deployer:149147]Could not reach the Administration Server through any of its URLs: "https://weblogic_server_01:8443".>
<Aug 5, 2017 1:40:06 PM UTC> <Info> <Management> <BEA-141107> <Version: WebLogic Server 12.1.3.0.0 Wed May 21 18:53:34 PDT 2014 1604337 >
<Aug 5, 2017 1:40:07 PM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $ORACLE_HOME/wlserver/server/lib/DemoTrust.jks.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $JAVA_HOME/jre/lib/security/cacerts.>
<Aug 5, 2017 1:40:07 PM UTC> <Alert> <Management> <BEA-141151> <The Administration Server could not be reached at https://weblogic_server_01:8443.>
<Aug 5, 2017 1:40:07 PM UTC> <Info> <Configuration Management> <BEA-150018> <This server is being started in Managed Server independence mode in the absence of the Administration Server.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
<Aug 5, 2017 1:40:07 PM UTC> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.>
<Aug 5, 2017 1:40:07 PM UTC> <Info> <WorkManager> <BEA-002942> <CMM memory level becomes 0. Setting standby thread pool size to 256.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <Log Management> <BEA-170019> <The server log file $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.>
<Aug 5, 2017 1:40:09 PM UTC> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<Aug 5, 2017 1:40:10 PM UTC> <Error> <Configuration Management> <BEA-150000> <An error occurred while establishing a connection back to the Adminstration Server t3s://weblogic_server_01:8443 during startup. Since bootstrap succeeded, check that t3s://weblogic_server_01:8443 uniquely identifies the Administration Server.
javax.naming.ConfigurationException: Cannot register for disconnect events on local server
    at weblogic.server.channels.RemoteChannelServiceImpl.registerInternal(RemoteChannelServiceImpl.java:234)
    at weblogic.server.channels.RemoteChannelServiceImpl.registerForever(RemoteChannelServiceImpl.java:190)
    at weblogic.protocol.ConnectMonitorFactory.registerForever(ConnectMonitorFactory.java:54)
    at weblogic.management.provider.MSIService.registerForReconnectToAdminServer(MSIService.java:188)
    at weblogic.management.mbeanservers.compatibility.internal.CompatibilityMBeanServerService.start(CompatibilityMBeanServerService.java:183)
    Truncated. see log file for complete stacktrace
>
<Aug 5, 2017 1:40:10 PM UTC> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias myacert from the JKS keystore file $DOMAIN_HOME/certs/identity.jks.>
<Aug 5, 2017 1:40:10 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the JKS keystore file $DOMAIN_HOME/certs/trust.jks.>
<Aug 5, 2017 1:40:12 PM UTC> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:

    There are 1 nested errors:

javax.naming.ConfigurationException: Cannot register for disconnect events on local server
    at weblogic.server.channels.RemoteChannelServiceImpl.registerInternal(RemoteChannelServiceImpl.java:234)
    at weblogic.server.channels.RemoteChannelServiceImpl.registerForever(RemoteChannelServiceImpl.java:190)
    at weblogic.protocol.ConnectMonitorFactory.registerForever(ConnectMonitorFactory.java:54)
    at weblogic.management.provider.MSIService.registerForReconnectToAdminServer(MSIService.java:188)
    at weblogic.management.mbeanservers.compatibility.internal.CompatibilityMBeanServerService.start(CompatibilityMBeanServerService.java:183)
    at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:78)
    at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1017)
    at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:388)
    at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:430)
    at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:456)
    at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:225)
    at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:82)
    at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2488)
    at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:98)
    at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:87)
    at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1162)
    at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1147)
    at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$UpOneLevel.run(CurrentTaskFuture.java:753)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:553)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)

>
<Aug 5, 2017 1:40:12 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED.>
<Aug 5, 2017 1:40:12 PM UTC> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down.>
<Aug 5, 2017 1:40:12 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>
[weblogic@weblogic_server_01 bin]$

 

So what exactly is happening that can cause this simple variable definition to prevent you from starting your Admin Server? If you take a look at the script "startWebLogic.sh", you will not see any use of this variable, so you should be good, right? Well, not really, because this script actually loads its environment using the well-known "setDomainEnv.sh" (in the same folder). The interesting part is in this second file: if you check the usage of "ADMIN_URL" in setDomainEnv.sh, you will see an "if-then-else" section:

[weblogic@weblogic_server_01 bin]$ grep -C2 "ADMIN_URL" setDomainEnv.sh
# Clustering support (edit for your cluster!)

if [ "${ADMIN_URL}" = "" ] ; then
        # The then part of this block is telling us we are either starting an admin server OR we are non-clustered
        CLUSTER_PROPERTIES=""
        export CLUSTER_PROPERTIES
else
        CLUSTER_PROPERTIES="-Dweblogic.management.server=${ADMIN_URL}"
        export CLUSTER_PROPERTIES
fi
[weblogic@weblogic_server_01 bin]$
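To make the effect of this if-then-else tangible in isolation, here is a small stand-alone sketch that mimics the decision. The cluster_properties function is hypothetical and only illustrates what setDomainEnv.sh computes; it is not part of WebLogic:

```shell
#!/bin/sh
# Mimic the ADMIN_URL check from setDomainEnv.sh (illustrative only)
cluster_properties() {
    if [ "${ADMIN_URL}" = "" ] ; then
        # Admin Server or non-clustered: no extra JVM flag
        echo ""
    else
        # ADMIN_URL set: WebLogic is pointed at a (supposedly remote) Admin Server
        echo "-Dweblogic.management.server=${ADMIN_URL}"
    fi
}

unset ADMIN_URL
echo "unset   -> '$(cluster_properties)'"

export ADMIN_URL="t3s://weblogic_server_01:8443"
echo "defined -> '$(cluster_properties)'"
```

With the variable unset nothing is added to the Java command line; as soon as it is defined, the -Dweblogic.management.server flag appears, exactly as seen in the second startup attempt above.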

 

With this small portion of code, you can actually understand the issue:

  • if the variable "ADMIN_URL" isn't defined when the file setDomainEnv.sh is loaded, then WebLogic assumes that you are starting an Admin Server OR that you are in a non-clustered environment.
  • if the variable "ADMIN_URL" is defined when the file setDomainEnv.sh is loaded, then WebLogic assumes that you are starting a server in a clustered environment.

 

Therefore, by defining ADMIN_URL in the environment, we are actually – unintentionally – telling WebLogic that this is a cluster and that the Admin Server can be found at $ADMIN_URL… But this URL points to the local Admin Server, which is exactly the server we are trying to start. Hence the failure to start…
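A simple way to avoid the trap when scripting is to clear the variable in a subshell before calling startWebLogic.sh, so that the stop/start scripts for the Managed Servers still see it afterwards. A sketch (the real call would be ./startWebLogic.sh; an echo stands in for it here):

```shell
#!/bin/sh
export ADMIN_URL="t3s://weblogic_server_01:8443"

# Start the Admin Server in a subshell where ADMIN_URL is cleared;
# in a real deployment script, replace the echo with ./startWebLogic.sh
( unset ADMIN_URL; echo "inside subshell: ADMIN_URL='${ADMIN_URL}'" )

# Back in the parent shell, the variable is untouched for the other scripts
echo "parent shell:   ADMIN_URL='${ADMIN_URL}'"
```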

If you read the logs above carefully, you could actually spot this small difference: in the failed startup attempt, the JVM parameter "-Dweblogic.management.server=t3s://weblogic_server_01:8443" is present at the end of the Java command line used to start WLS, and that is the reason for this issue.

 

 

This article, WebLogic – Cannot register for disconnect events on local server, appeared first on Blog dbi services.

Create the Oracle database objects

Dimitri Gielis - Sat, 2017-08-12 13:49
This post is part of a series of posts: From idea to app or how I do an Oracle APEX project anno 2017

In a previous post we defined the ERD of the multiplication table application we're going to build. Now I want to go into detail how I create the Oracle database objects like tables, foreign keys, sequences, triggers, etc.

Before I tell you what I do today, let me first walk you through my history so you understand why I'm using it.
Data Modeling Tools
Data modeling tools allow you to visually create your tables, relationships, etc. If you have worked with Oracle for 10 or more years, you probably know Oracle Designer. This tool was really popular, but today it is legacy and no longer maintained by Oracle. Oracle moved forward with SQL Developer Data Modeler. Although I used those tools in the beginning, today I typically don't use them anymore, except for generating a visual diagram (ERD) of my tables, but that is now part of SQL Developer, so I don't have a separate data modeling tool installed anymore. The main reason was that adding the entities took me so much time that it was too slow for my process. As written earlier, I typically draw my entities on a whiteboard or a piece of paper; after that I want to get going with real tables and data as fast as I can to verify the model.

If you have a big team and somebody is solely responsible for modeling, I see a benefit in using a modeling tool, as I do if you simply like the process of visually creating a model. There are probably more advantages to using a modeling tool, so if you use one and like it, don't change because of this blog post; I just share how I typically do a project.
Manual
So if I didn't use a modeling tool, what did I do? I created the tables manually. I hear you think: yeah right, and that is faster? It was for me, because I had a certain workflow. I used strict naming conventions: a 3-letter prefix for the project, singular table names, a meaningless id column in every table, etc. Here's an overview screen:


You find the complete guide of my naming conventions here.
Those naming conventions were the base, but per project or customer we could adapt to their standards. The principle was that I created tables with only the relevant columns, a meaningful unique key and a comment.

Next I used a script that would generate all the primary keys, foreign key relationships, sequences, triggers, audit columns and everything else that I could automate for the particular project. You can find the base of the script I used until two years ago here, and a screenshot of part of the script:


The only reason I could do this was that I used strict naming conventions, e.g. a table_id column (FK) would be a reference to the id column of that table (PK). By doing this, I could iterate really fast on creating and adapting tables; it worked great for me.
The final step was to create a visual ERD of it in SQL Developer (or another tool), as it was easier to communicate with the team, and we always include it as part of the documentation.

As I'm writing in the past tense, you have probably figured out that I stopped using this method. The reason is simple: something came along that I like even more :)

But before we move on, a final word on naming conventions: it's not which naming conventions you use that is important, it's being consistent within your project, which makes it easier to understand your model and speeds up knowledge transfer. So whatever naming conventions you use is fine; there is no such thing as "this is the best" in my view.
Quick SQL
So now we come to today... this little tool is what I use to create a script for my database objects.

I can't explain Quick SQL better than what you find on the site:

Quick SQL enables you to rapidly design and prototype data models using a markdown-like shorthand syntax that expands to standards-based Oracle SQL. You can easily create master detail relationships, check constraints, and even generate sample data.
Quick SQL is a packaged app which comes with Oracle APEX, so you already have it without knowing it :)

So back to our multiplication project; the first thing I did was install the packaged app in our APEX workspace. Go to App Builder > Create > Packaged App > Quick SQL. Next, run the app and this is the screen you will see:


Next you start typing your tables and columns, plus some additional syntax to specify constraints etc. The power of Quick SQL is that it not only generates the tables, but it has built-in naming conventions and generates indexes, constraints, triggers, views and even sample data.
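As an illustration of the shorthand, a model can be as terse as the sketch below. Note that the table and column names here are my own guesses for this project, not the exact script from the video; child tables are nested by indentation to create a master-detail relationship, vc50 expands to a VARCHAR2(50), /nn marks NOT NULL, and tswltz is the timestamp-with-local-time-zone shorthand mentioned later in this post:

```
players
  name vc50 /nn
  games
    start_time tswltz
    end_time tswltz
    score num
```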

Here's a video of me creating the script for our multiplication table project:



The next thing I do is generate with SQL Developer the ERD, so I visually see it. Just follow the wizard in SQL Developer you find in File - Data Modeler - Import - Data Dictionary:


While reviewing the visual ERD, I saw that I had made a small mistake in Quick SQL: I didn't specify timestamp with local time zone (tswltz), but just timestamp with time zone (tstz). In the application for my son I used a date for that column, which is why I called the columns start_date and end_date, but the more logical names are start_time and end_time, so I changed that too. I want to use timestamp with local time zone because this app will probably be used around the world, so it would be nice if you could see the time you actually played in your own timezone.

Here's the final version of the script in Quick SQL:


It's very easy to make changes in Quick SQL: in the settings I just included the drop statements, re-ran the entire script in SQL Developer, and I was done :)

I really like Quick SQL and I hope it gets improved even more in the future. It would be really nice to version-control the scripts and to be able to generate the differences for the tables (alter statements instead of create statements), or to do reverse engineering of a specific schema. Another improvement I hope to see in the future is the ability to define unique constraints on multiple columns (or it might already be there and I just don't know how to use it).

The visual ERD is below:


In this post we went from the paper ERD to real Oracle objects. Now we are ready to build our app.
Categories: Development

My Blog Samples Download Repository (For Samples Before 2014 January)

Andrejus Baranovski - Sat, 2017-08-12 08:44
I have been asked where to download my blog sample applications posted before January 2014. If you try to download such a sample, you will get an error about the sample not being found. That's because Google discontinued support for the Google Code repository.

However, you can still download all my samples posted before January 2014 from the Google Code archive. Go to the archive URL – Google Code Archive for jdevsamples – and you can browse all the old samples there by date:


All new samples (after January 2014) are hosted on Google Drive or GitHub and are accessible directly from the blog by URL.

Adding a Cross Instance, Cross Restarts and Cross Application Cache to Node Applications on Oracle Application Container Cloud

Amis Blog - Sat, 2017-08-12 06:00

In a previous post I described how to do Continuous Integration & Delivery from Oracle Developer Cloud to Oracle Application Container Cloud on simple Node applications: Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud. In this post, I am going to extend that very simple application with the functionality to count requests. With every HTTP request to the application, a counter is incremented and the current counter value is returned in the response.

image

The initial implementation is a very naïve one: the Node application contains a global variable that is increased for each request that is handled. This is naïve because:

  • multiple instances run concurrently and each keeps its own count; because of load balancing, subsequent requests are handled by various instances and the responses show a somewhat irregular request counter pattern; the total number of requests is not known: each instance has only its own subtotal
  • when the application is restarted – or even a single instance is restarted or added – the request counter for each instance involved is reset

Additionally, the request count value is not available outside the Node application and it can only be retrieved by calling the application, which in turn increases the count.

A much better implementation would be one that uses a cache – that is shared by the application instances and that survives application (instance) restarts. This would also potentially make the request count value available to other microservices that can access the same cache – if we allow that to happen.

This post demonstrates how an Application Cache can be set up on Application Container Cloud Service and how it can be leveraged from a Node application. It shows that the request counter will be shared across instances and survives redeployments and restarts.

image

Note: there is still the small matter of race conditions, which are not addressed in this simple example because read, update and write are not performed as an atomic operation and no locking has been implemented.
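The lost-update problem behind this race condition can be illustrated without any cache at all. The sketch below is hypothetical code (not part of the greeting application): two handlers both read the shared counter before either writes back, so one increment is lost.

```javascript
// Simplified illustration of the race condition: two "instances" perform a
// non-atomic read-update-write cycle against a shared store, so one of the
// two increments is lost.
var store = { requestCounter: 0 };

function readCounter(cb) {
  setImmediate(function () { cb(store.requestCounter); });
}

function writeCounter(value, cb) {
  setImmediate(function () { store.requestCounter = value; cb(); });
}

function handleRequest(done) {
  readCounter(function (value) {       // both handlers read 0 ...
    writeCounter(value + 1, done);     // ... before either writes 1
  });
}

var pending = 2;
function report() {
  if (--pending === 0) {
    // two requests were handled, yet the counter ends at 1, not 2
    console.log('counter after two concurrent requests: ' + store.requestCounter);
  }
}
handleRequest(report);
handleRequest(report);
```

A cache-side atomic increment operation, or locking around the read-update-write cycle, would be the usual remedies.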

The steps are:

  • Add (naïve) request counting capability to greeting microservice
  • Demonstrate shortcomings upon multiple requests (handled by multiple instances) and by instance restart
  • Implement Application Cache
  • Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service
  • Utilize Application Cache in greeting microservice
  • Redeploy greeting microservice and demonstrate that request counter is shared and preserved

Sources for this article are in GitHub: https://github.com/lucasjellema/greeting .

Add (naïve) request counting capability to greeting microservice

The very simple HTTP request handler is extended with a global variable requestCounter that is displayed and incremented for each request:

image

It’s not hard to demonstrate the shortcomings upon multiple requests (handled by multiple instances):

image

Here we see how subsequent requests are handled (apparently) by two different instances that each have their own, independently increasing count.

After application restart, the count is back to the beginning.

Implement Application Cache

To configure an Application Cache we need to work from the Oracle Application Container Cloud Service console.

image

image

Specify the details – the name and possibly the sizing:

image

image

Press Create and the cache will be created:

image

I got notified about its completion by email:

image

 

Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service

In order to be able to access the cache from within an application on ACCS, the application needs a service binding to the Cache service. This can be configured in the console (manually) as well as via the REST API, psm cli and the deployment descriptor in the Deployment configuration in Developer Cloud Service.

Manual configuration through the web ui looks like this:

image

or through a service binding:

image

image

and applying the changes:

image

image

I can then utilize the psm command line interface to inspect the JSON definition of the application instance on ACCS and so learn how to edit the deployment.json file with the service binding for the application cache. First setup psm:

image

And inspect the greeting application:

psm accs app -n greeting -o verbose -of json

image

to learn about the JSON definition for the service binding:

image

Now I know how to update the deployment descriptor in the Deployment configuration in Developer Cloud Service:

image

The next time this deployment is performed, the service binding to the application cache is configured.

Note: the credentials for accessing the application cache have to be provided and yes, horrible as it sounds and is, the password is in clear text!

It seems that the credentials are not required. The value of password is now BogusPassword – which is not the true value of my password – and still accessing the cache works fine. Presumably the fact that the application is running inside the right network domain qualifies it for accessing the cache.

The Service Binding makes the following environment variable available to the application – populated at runtime by the ACCS platform:

image

Utilize Application Cache in greeting microservice

The simplest way to make use of the service binding’s environment variable is demonstrated here (note that this does not yet actually use the cache):

image

and the effect on requests:

image

Now to actually interact with the cache – through REST calls as explained here: http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/apaas/node/node-accs-caching-basic/node-accs-caching-basic.html – we will use a node module node-rest-client. This module is added to the application using

npm install node-rest-client --save

image

Note: this instruction will update package.json and download the module code. Only the changed package.json is committed to the git repository. When the application is next built in Developer Cloud Service, it will perform npm install prior to zipping the Node application into a single archive. That action of npm install ensures that the sources of node-rest-client are downloaded and will get added to the greeting.zip file that is deployed to ACCS.

Using this module, the app.js file is extended to read from and write to the application cache. See here the changed code (also in GitHub https://github.com/lucasjellema/greeting/blob/master/app.js):

var http = require('http');
var Client = require("node-rest-client").Client;

var version = '1.2.3';

// Read Environment Parameters
var port = Number(process.env.PORT || 8080);
var greeting = process.env.GREETING || 'Hello World!';

var requestCounter = 0;

var server = http.createServer(function (request, response) {
  getRequestCounter( function (value) {
     requestCounter = (value?value+1:requestCounter+1);
     // put new value in cache  - but do not wait for a response          
     console.log("write value to cache "+requestCounter);
     writeRequestCounter(requestCounter);         
     response.writeHead(200, {"Content-Type": "text/plain"});
     response.end( "Version "+version+" says an unequivocal: "+greeting 
                 + ". Request counter: "+ requestCounter +". \n"
                 );
  })
});

server.listen(port);

// functionality for cache interaction
// for interaction with cache
var CCSHOST = process.env.CACHING_INTERNAL_CACHE_URL; 
var baseCCSURL = 'http://' + CCSHOST + ':8080/ccs';
var cacheName = "greetingCache";
var client = new Client(); 


var keyString = "requestCount";

function getRequestCounter(callback)  {
    client.get(baseCCSURL.concat('/').concat(cacheName).concat('/').concat(keyString),
        function(data, rawResponse){
            var value;
            // If nothing there, return not found
            if(rawResponse.statusCode == 404){
              console.log("nothing found in the cache");
              value = null;
            }
            else{
              // Note: data is a Buffer object.
              console.log("value found in the cache "+data.toString());
              value = JSON.parse(data.toString()).requestCounter;
            }          
            callback(value);
        }
     );
};//getRequestCounter

function writeRequestCounter(requestCounter) {
var args = {
        data: { "requestCounter": requestCounter},
        headers: { "Content-Type" : "application/json" }
    };
      client.put(baseCCSURL.concat('/').concat(cacheName).concat('/').concat(keyString),
        args,
        function (data, rawResponse) {   
            // Proper response is 204, no content.
            if(rawResponse.statusCode == 204){
              console.log("Successfully put in cache "+JSON.stringify(data))
            }
            else{
              console.error("Error in PUT "+rawResponse);
              console.error('writeRequestCounter returned error '.concat(rawResponse.statusCode.toString()));
            } 
        }
      );
}// writeRequestCounter

Redeploy greeting microservice and demonstrate that request counter is shared and preserved

When we make multiple invocations to the greeting service, we see a consistently increasing series of count values:

image

Even when the application is restarted or redeployed, the request count is preserved and when the application becomes available again, we simply resume counting.

The logs from the two ACCS application instances provide insight into what takes place – how load balancing makes these instances handle requests intermittently – and how they read each other's results from the cache:

image

 

Resources

Sources for this article are in GitHub: https://github.com/lucasjellema/greeting .

Blog article by Mike Lehmann, announcing the Cache feature on ACCS: https://blogs.oracle.com/developers/caching-with-oracle-application-container-cloud

Documentation on ACCS Caches: https://docs.oracle.com/en/cloud/paas/app-container-cloud/cache/getting-started-caches.html

Tutorials on cache enabling various technology based applications on ACCS: https://docs.oracle.com/en/cloud/paas/app-container-cloud/create-sample-caching-applications.html

Tutorial on Creating a Node.js Application Using the Caching REST API in Oracle Application Container Cloud Service http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/apaas/node/node-accs-caching-basic/node-accs-caching-basic.html#section4

Public API Docs for Cache Service – https://apicatalog.oraclecloud.com/ui/views/apicollection/oracle-public/cache-app/v1

Using psm to retrieve deployment details of ACCS application: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/accs-app.html (to find out how Application Cache reference is defined)

The post Adding a Cross Instance, Cross Restarts and Cross Application Cache to Node Applications on Oracle Application Container Cloud appeared first on AMIS Oracle and Java Blog.

Disk space details of every db and table

Tom Kyte - Sat, 2017-08-12 02:46
How can I get database details like the recovery model used, total capacity, free space and space used for each database and the tables present?
Categories: DBA Blogs

PARALLEL DML with exclusive table locks

Tom Kyte - Fri, 2017-08-11 08:26
Hi ! Just running into a funny situation. Developing a package that has to update a set of very large tables with Parallel DML. Because of parallel DML restrictions I need to disable the trigger on the tables first. To make sure that nobo...
Categories: DBA Blogs

manually pivoting large amounts of data to be PIPELINED through a function

Tom Kyte - Fri, 2017-08-11 08:26
I have 3 SQL statements that will run independently and produced the required output in the required format. However, I need to combine these statements such that they are all contained and called within 1 stored procedure or such. I need to use th...
Categories: DBA Blogs

Triggers

Tom Kyte - Fri, 2017-08-11 08:26
Version: Oracle 11g Schema: JAYG Table: MyTable Fields: ID (as primary key, Allows Nulls ), field1, ......,field5 (Note: I have changed (names) to generic ones for the purpose of this inquiry) I have an ODBC connection to the table from ...
Categories: DBA Blogs

Oracle Mobile Cloud Service (MCS): Overview of integration options

Amis Blog - Fri, 2017-08-11 04:40

Oracle Mobile Cloud Service has a lot of options which allows it to integrate with other services and systems. Since it runs JavaScript on Node.js for custom APIs, it is very flexible.

Some features extend its own functionality, such as the Firebase configuration option to send notifications to mobile devices, while for example the connectors allow wizard-driven integration with other systems. The custom API functionality running on a recent Node.js version ties it all together. In this blog article I'll provide a quick overview and some background of the integration options of MCS.

MCS is very well documented here and there are many YouTube videos available explaining/demonstrating various MCS features here. So if you want to know more, I suggest looking at those.

Some recent features

Oracle is working hard on improving and expanding MCS functionality. For the latest improvements to the service, see the following page. Some highlights from the past half year that I personally appreciate and that will also get some attention in this blog:

  • Zero footprint SSO (June 2017)
  • Swagger support in addition to RAML for the REST connector (April 2017)
  • Node.js version v6.10.0 support (April 2017)
  • Support for Firebase (FCM) to replace GCM (December 2016)
  • Support for third party tokens (December 2016)
Feature integration

Notification support

In general there are two options for sending notifications from MCS. Integrating with FCM and integrating with Syniverse. Since they are third party suppliers, you should compare these options (license, support, performance, cost, etc) before choosing one of them.

You can also use any other notification provider if it offers a REST interface by using the REST connector. You will not get much help in configuring it through the MCS interface though; it will be a custom implementation.

Firebase Cloud Messaging / Google Cloud Messaging

Notification support is implemented by integrating with Google cloud messaging products. Google Cloud Messaging (GCM) is being replaced with Firebase Cloud Messaging (FCM) in MCS. GCM has been deprecated by Google for quite a while now so this is a good move. You do need a Google Cloud Account though and have to purchase their services in order to use this functionality. See for example here on how to implement this from a JET hybrid application.

Syniverse

Read more on how to implement this here. You first have to create a Syniverse account. Next subscribe to the Syniverse Messaging Service, register the app and get credentials. These credentials you can register in MCS, client management.

 

Beacon support

Beacons create packets which can be detected over Bluetooth by mobile devices. The packet structure the beacons broadcast can differ. There are samples available for iBeacon, altBeacon and Eddystone, but others can be added if you know the corresponding packet structure. See the following presentation for some background on beacons and how they can be integrated in MCS. How to implement this for an Android app can be watched here.

 

Client support

MCS comes with several SDKs which provide easy integration of a client with MCS APIs. Available client SDKs are iOS, Android, Windows, Web (plain JavaScript). These SDKs provide an easy alternative to using the raw MCS REST APIs. They provide a wrapper for the APIs and provide easy access in the respective language the client uses.

Authentication options (incoming) SAML, JWT

Third party token support for SAML and JWT is available. Read more here. A token exchange is available as part of MCS which creates MCS tokens from third party tokens based on specifically defined mappings. These MCS tokens can be used by clients in subsequent requests. This does require some work on the client side, but the SDKs of course help with this.

Facebook Login

Read here for an example on how to implement this in a hybrid JET application.

OAuth2 and Basic authentication support.

No third party OAuth tokens are supported. This is not strange since the OAuth token does not contain user data and MCS needs a way to validate the token. MCS provides its own OAuth2 STS (Secure Token Service) to create tokens for MCS users. Read more here.

Oracle Enterprise Single Sign-on support.

Read here. This is not to be confused with the Oracle Enterprise Single Sign-on Suite (ESSO). This is browser-based authentication of Oracle Cloud users who are allowed access to MCS.

These provide the most common web authentication methods. Especially the third party SAML and JWT support provides for many integration options with third party authentication providers. OKTA is given as an example in the documentation.

Application integration: connectors

MCS provides connectors which allow wizard-driven configuration in MCS. Connectors are used for outgoing calls. A connector API is available which makes it easy to interface with the connectors from custom JavaScript code. The connectors support the use of Oracle Credential Store Framework (CSF) keys and certificates. TLS versions up to TLS 1.2 are supported. You are of course warned that older versions might not be secure. The requests the connectors make are over HTTP, since no other technologies are currently directly supported. You can of course use REST APIs and ICS as wrappers should you need to.

Connector security settings

For the different connectors, several Oracle Web Service Security Manager (OWSM) policies are used. See here. These allow you to configure several security settings and for example allow usage of WS Security and SAML tokens for outgoing connections. The policies can be configured with security policy properties. See here.

REST

It is recommended to use the REST connector instead of making calls directly from your custom API code, because connectors integrate well with MCS and provide security and monitoring benefits – for example, out-of-the-box analytics.

SOAP

The SOAP connector can do a transformation from SOAP to JSON and back to make working with the XML easier in JavaScript code. This has some limitations however:

Connector scope

There are also some general limitations defined by the scope of the API of the connector:

  • Only SOAP version 1.1 and WSDL version 1.2 are supported.
  • Only the WS-Security standard is supported. Other WS-* standards, such as WS-RM or WS-AT, aren’t supported.
  • Only document style and literal encoding are supported.
  • Attachments aren’t supported.
  • Of the possible combinations of input and output message operations, only input-output operations and input-only operations are supported. These operations are described in the Web Services Description Language (WSDL) Version 1.2 specification.
Transformation limitations

The transformation from SOAP to JSON has limitations; the following constructs aren't supported:
  • A choice group with child elements belonging to different namespaces having the same (local) name. This is because JSON doesn't have any namespace information.
  • A sequence group with child elements having duplicate local names. For example, <Parent><ChildA/><ChildB/>…<ChildA/>…</Parent>. This translates to an object with duplicate property names, which isn't valid.
  • XML Schema Instance (xsi) attributes.
Integration Cloud Service connector

Read more about this connector here. This connector allows you to call ICS integrations. You can connect to your ICS instance and select an integration from a drop-down menu. For people who also use ICS in their cloud architecture, this will probably be the most commonly used connector.

Fusion Applications connector

Read more about this connector here. The flow looks similar to that of the ICS Cloud Adapters (here). In short, you authenticate, a resource discovery is done and local artifacts are generated which contain the connector configuration. At runtime this configuration is used to access the service. The wizard driven configuration of the connector is a great strength. MCS does not provide the full range of cloud adapters as is available in ICS and SOA CS.

Finally

Flexibility

Oracle Mobile Cloud Service allows you to define custom APIs using JavaScript code. Oracle Mobile Cloud Service V17.2.5-201705101347 runs Node.js version v6.10.0 and OpenSSL version 1.0.2k (process.versions) which are quite new! Because a new OpenSSL version is supported, TLS 1.2 ciphers are also supported and can be used to create connections to other systems. This can be done from custom API code or by configuring the OWSM settings in the connector configuration. It runs on Oracle Enterprise Linux 6 kernel 2.6.39-400.109.6.el6uek.x86_64 (JavaScript: os.release()). Most JavaScript packages will run on this version so few limitations there.

ICS also provides an option to define custom JavaScript functions (see here). I haven't looked at the engine used in ICS though, but I doubt it will be a full-blown Node.js instance and suspect (please correct me if I'm wrong) that a JVM JavaScript engine is used, like in SOA Suite / SOA CS. This provides less functionality and performance compared to a Node.js instance.

What is missing?

Integration with other Oracle Cloud services

Mobile Cloud Service does lack out-of-the-box integration options with other Oracle Cloud services. Only 4 HTTP-based connectors are available. Thus if you want to integrate with an Oracle Cloud database (a different one than the one provided), you have to use the external DB's REST API (with the REST connector or from custom API code) or use for example the Integration Cloud Service connector or the Application Container Cloud Service to wrap the database functionality. This of course requires a license for the respective services.

Cloud adapters

A Fusion Applications connector is present in MCS, and OWSM policies are used in MCS. It would therefore not be strange if MCS were technically capable of running more of the Cloud adapters which are present in ICS. This would greatly increase the integration options for MCS.

Mapping options for complex payloads

Related to the above, if the payloads become large and complex, mapping fields also becomes more of a challenge. ICS does a better job at this than MCS currently. It has a better mapping interface and provides mapping suggestions.

The post Oracle Mobile Cloud Service (MCS): Overview of integration options appeared first on AMIS Oracle and Java Blog.

Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud

Amis Blog - Fri, 2017-08-11 02:57

A familiar story:

  • Develop a Node application with one or more developers
  • Use Oracle Developer Cloud Service to organize the development work, host the source code and coordinate build jobs and the ensuing deployment
  • Run the Node application on Oracle Application Container Cloud

I have read multiple tutorials and blog posts that each seemed to provide a piece of the puzzle. This article shows the full story – in its simplest form.

We will:

  • Start a new project on Developer Cloud Service
  • Clone the Git repository for this new project
  • Locally work on the Node application and configure it for Application Container Cloud
  • Commit and push the sources to the Git repo
  • Create a Build job in Developer Cloud service that creates the zip file that is suitable for deployment; the job is triggered by changes on the master branch in the Git repo
  • Create a Deployment linking to an existing Oracle Application Container Cloud service instance; associate the deployment with the build task (and vice versa)
  • Run the build job – and verify that the application will be deployed to ACCS
  • Add the ACCS Deployment descriptor with the definition of environment variables (that are used inside the Node application)
  • Make a change in the sources of the application, commit and push and verify that the live application gets updated

Prerequisites: access to a Developer Cloud instance and an Application Container Cloud service, plus local access to git and ideally Node and npm.

Sources for this article are in GitHub: https://github.com/lucasjellema/greeting .

Start a new project on Developer Cloud Service

Create the new project greeting in Developer Cloud

image

image

image

image

After you press Finish, the new project is initialized along with all associated resources and facilities, such as a new Git repository, a Wiki, an Issue store.

image

When the provisioning is done, the project can be accessed.

image

 

Locally work on the Node application

Copy the git URL for the source code repository.

image

Clone the Git repository for this new project

git clone https://lucas.jellema%40amis.nl@developer.us2.oraclecloud.com/developer73410-a512292/s/developer73410-a512292_greeting_19451/scm/greeting.git

image

Start a new Node application, using npm init:

image

This will create the package.json file.

To prepare the application for eventual deployment to Application Container Cloud, we need to add the manifest.json file.

image

We also need to create a .gitignore file, to prevent node_modules from being committed and pushed to Git.

image

Implement the application itself, in file app.js. This is a very simplistic application – that will handle an incoming request and return a greeting of some sort:

image

Note how the greeting can be read from an environment variable, just like the port on which the requests should be listened to. When no environment values are provided, defaults are used instead.

Commit and push the sources to the Git repo

The Git repository in the Developer Cloud Service project is empty except for the readme.md when the project is first created:

image

Now we commit and push the files created locally:

image

and

image

A little while later, these sources show up in Developer Cloud Service console:

image

Create a Build job in Developer Cloud service

To have the application built, we can create a build job in Developer Cloud Service that creates the zip file that is suitable for deployment; this zip file needs to contain all sources from Git and all dependencies (all node modules) specified in package.json. The job is triggered by changes on the master branch in the Git repo. Note: the build job should ideally also perform automated tests – such as described by Yannick here.

image

image

Specify free-style job. Specify the name – here BuildAndDeploy.

Configure the Git repository that contains the sources to build; this is the repository that was first set up when the project was created.

image

Configure the build job to be performed whenever sources are committed to (the master branch in) the Git repository:

image

 

Create a Build Step, of type Execute Shell:

image

 

Enter the following shell-script commands:

git config --global url.https://github.com/.insteadOf git://github.com/

npm install

zip -r greeting.zip .

This will download all required node modules and package all sources in a single zip-file called greeting.zip.

image

Define as post build step that the greeting.zip file should be archived. That makes this zip file available as artifact produced by the build job – for use in deployments or other build jobs.

image

 

Run the job a first time with Build Now.

image

image

image

The console output for running the shell commands is shown. Note that the implicit first steps performed in a build include the retrieval of all sources from the git repositories on to the file system of the build server. The explicit shell commands are executed subsequently – and can make use of these cloned git repo sources.

image

The build job produces greeting.zip as artifact:

image

Create a Deployment linking to an existing Oracle Application Container Cloud service instance

The build job produces an artifact that can be deployed to an ACCS instance. We need a Deployment to create an ACCS instance based on that artifact. The Deployment is the bridge between the build artifact and a specific target environment – in this case an ACCS instance.

image

Specify the name of the configuration – for use within Developer Cloud Service – and of the application – that will be used in Application Container Cloud. Specify the type of Deployment – we want On Demand because that type of Deployment can be associated with a Build job to be automatically performed at the end of the build. Specify the Deployment Target – New, of type Application Container Cloud.

image

Provide the connection details for an ACCS instance. Press Test Connection to verify these details.

image

Upon success, click on Use Connection.

image

Specify the type of Runtime – Node in this case. Select the Build Job and Artifact to base this Deployment on:

image

image

Note: for now, the Deployment is tied to a specific instance of the build job. When we add the Deployment as a Post Build step to the Build Job, we will always use the artifact produced by that specific build instance.

When the Deployment is saved, it starts to execute the deployment immediately:

image

In the Application Container Cloud Console, we can see the new Node application greeting being created

image

image

After some time (actually, quite some time) the application is deployed and ready to be accessed:

image

And here is the result of opening the application in a browser:

image

Now associate the build job with the Deployment, in order to have the deployment performed at the end of each successful build:

image

Go to the Post Build tab, check the box for Oracle Cloud Service Deployment and add a Deployment Task of type Deploy:

image

Select the Deployment we created earlier:

image

And press Save to save the changes to the build job’s definition.

 

Run the build job – and verify that the application will be deployed to ACCS (again)

If we now run the build job, as its last action it should perform the deployment:

image

image

image

The ACCS console shows that now we have Version 2.0, deployed just now.

image

 

Add the ACCS Deployment descriptor with the definition of environment variables

The app.js file contains the line

var greeting = process.env.GREETING || 'Hello World!';

This line references the environment variable GREETING – which currently is not set. By defining a deployment descriptor as part of the Deployment definition, we can specify not only the number of instances and their size, but also any Service Bindings and the values of Environment Variables such as GREETING.

 

image

Add the Deployment Descriptor json:

{
  "memory": "1G",
  "instances": "1",
  "environment": {
    "GREETING": "Greetings to you",
    "APPLICATION_PREFIX": "GRTZ"
  }
}

Note: variable APPLICATION_PREFIX is not currently used.

image

Save and the deployment will be performed again:

image

image

When done, the application can be accessed. This time, the greeting returned is the one specified in the deployment descriptor deployment.json (as an environment variable) and picked up by the application at run time (using process.env.GREETING).

image

Make a change in the sources of the application and do the end-to-end workflow

If we make a change in the application and commit and push the change to Git then after some time we should be able to verify that the live application gets updated.

Make the change – a new version label and a small change in the text returned by the application.

    image

    Then commit the change and push the changes – to the Developer CS Git repo:

    image

     

    The changes arrive in the Git repo:

    image

    Now that the Git repo has been updated, the build job should be triggered:

    image

    image

    Some of the console output – showing that deployment has started:

    image

    The ACCS Service Console makes it clear too

    image

    When the deployment is done, it is clear that the code changes made it through to the running application:

    image

    So editing the source code and committing plus pushing to Git suffices to trigger the build and redeployment of the application – thanks to the setup made in Developer Cloud Service.

    Next Steps

    Show how multiple instances of an application each have their own state – and how using an Application Cache can make them share state.

    Show how an ACCS application can easily access a DBaaS instance through Service Bindings (and, in the case of a Node application, through the oracle node driver and OCI libraries that come prepackaged with the ACCS Node runtime).

    Show how Oracle Management Cloud APM can be setup as part of an ACCS instance in order to perform application monitoring of applications running on ACCS; probably works for Log Analytics as well.

     

    Resources

    Sources for this article are available in GitHub: https://github.com/lucasjellema/greeting.

    Oracle Community Article by Abhinav Shroff – Oracle Developer Cloud to build and deploy Nodejs REST project on Application Container Cloud

    A-Team Chronicle by Yannick Ongena – Automated unit tests with Node.JS and Developer Cloud Services

    Article by Fabrizio Marini – Oracle Application Container Cloud & Developer Cloud Service: How to create a Node.js application with DB connection in pool (in cloud) and how to deploy it on Oracle Application Container Cloud (Node.js) using Developer Cloud Service

    Create Node.js Applications (Oracle Documentation) – https://docs.oracle.com/en/cloud/paas/app-container-cloud/create-sample-node.js-applications.html

    Developer Cloud Service Docs – Managing Releases in Oracle Developer Cloud Service

    Oracle Documentation – Creating Meta Files for ACCS deployments – https://docs.oracle.com/en/cloud/paas/app-container-cloud/dvcjv/creating-meta-data-files.html

    The post Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud appeared first on AMIS Oracle and Java Blog.

    Using Edition Based Redefinition for Rolling Back Stored Program Unit Changes

    The Anti-Kyte - Thu, 2017-08-10 15:14

    We had a few days of warm, sunny weather in Milton Keynes recently and this induced Deb and me to purchase a Garden Umbrella to provide some shade.
    After a lifetime of Great British Summers we should have known better. The sun hasn’t been seen since.
    As for the umbrella ? Well that does still serve a purpose – it keeps the rain off.

    Rather like an umbrella, Oracle's Edition Based Redefinition feature can be utilized for purposes other than those for which it was designed.
    Introduced in Oracle Database 11gR2, Edition Based Redefinition (EBR to its friends) is a mechanism for facilitating zero-downtime releases of application code.
    It achieves this by separating the deployment of code to the database from that code being made visible in the application.

    To fully retro-fit EBR to an application, you would need to create special views – Editioning Views – for each application table and then ensure that any application code referenced those views and not the underlying tables.
    Even if you do have a fully automated test suite to perform your regression tests, this is likely to be a major undertaking.
    The other aspect of EBR, one which is of interest here, is the way it allows you to have multiple versions of the same stored program unit in the database concurrently.

    Generally speaking, as a database application matures, the changes made to it tend to be in the code rather more than in the table structure.
    So, rather than diving feet-first into a full EBR deployment, what I’m going to look at here is how we could use EBR to:

    • decouple the deployment and release of stored program units
    • speed up the process of rolling back the release of multiple stored program unit changes
    • create a simple mechanism to roll back individual stored program unit changes

    There’s a very good introductory article to EBR on OracleBase.
    Whilst you’re here though, forget any Cross-Edition Trigger or Editioning View complexity and let’s dive into…

    Fun and Games when releasing Stored Program Units

    As I’ve observed before, deploying a new version of a PL/SQL program unit is destructive in nature. By default, the old version of the program is overwritten by the new version and is unrecoverable from within the database.
    This can be problematic, especially on those occasions when you discover that your source control repository doesn’t contain what you thought it did.

    Having the safety net of the old version stored in the database, ready to be restored should the need arise, is not something to be sniffed at.
    Incidentally, Connor Macdonald has his own roll-your-own method for backing up PL/SQL source.

    Before we get into how EBR can help with this, we need to do a bit of configuration…

    Setting up Editions

    From 11gR2 onward, any Oracle database will have at least one Edition…

    When you connect to the database, you can specify an Edition to connect to. By default this is the current Database Edition.
    To start with, when you’re connected, both the current edition and session edition will be ORA$BASE :

    select sys_context('userenv', 'current_edition_name') as default_edition,
        sys_context('userenv', 'session_edition_name') as session_edition
    from dual;
    

    However, by default, it does not appear that any database objects are associated with the ORA$BASE edition.
    Taking the HR schema as an example :

    select object_name, object_type, edition_name
    from dba_objects_ae
    where owner = 'HR'
    and object_type != 'INDEX'
    order by 1,2
    /
    

    When we execute this query, we get :

    That’s because, at this point, HR is blissfully unaware of any Editions. However, if we enable editions for this user…

    alter user hr enable editions
    /
    

    …and re-execute the query, we can see that things have changed a bit…

    The Editionable objects in the schema ( Procedures, Triggers and the View) are now associated with the ORA$BASE edition.

    The scope of an Edition is the database (or the PDB if you're on 12c). To demonstrate this, let's say we have another schema – called EDDY :

    create user eddy identified by eddy
    /
    
    alter user eddy temporary tablespace temp
    /
    
    alter user eddy default tablespace users
    /
    
    alter user eddy quota unlimited on users
    /
    
    grant create session, create table, create procedure to eddy
    /
    
    alter user eddy enable editions
    /
    
    

    Eddy is a bit of an ‘ed-banger and the schema contains the following…

    create table rock_classics
    (
        artist varchar2(100),
        track_name varchar2(100)
    )
    /
    
    create or replace package i_love_rock_n_roll as
    	function eddy_the_ed return varchar2;
    end i_love_rock_n_roll;
    /
    
    create or replace package body i_love_rock_n_roll as
        function eddy_the_ed return varchar2 is
        begin
            return 'Run to the Hills !';
        end eddy_the_ed;
    end i_love_rock_n_roll;
    /
    

    At this stage, these objects have the Edition you would expect. This time, we can query the USER_ version of the OBJECTS_AE view whilst connected as EDDY…

    select object_name, object_type, edition_name
    from user_objects_ae
    order by 2,1
    /
    

    I want to make some changes to the code in the EDDY application. In order to preserve the “old” code, as well as making deployment a fair bit easier, I need a new edition…

    Using a New Edition

    First off, as a user with the CREATE EDITION privilege…

    create edition deep_purple
    /
    

    We can see that the new Edition has been created with ORA$BASE as its parent…

    At present (i.e. 12cR2), an Edition can have only one child and a maximum of one parent. Every Edition other than ORA$BASE must have a parent.
    Therefore, it’s probably helpful to think of Editions as release labels rather than branches.
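    If you need the parent Edition to be explicit, CREATE EDITION accepts an AS CHILD OF clause. A quick sketch (the edition name here is made up for illustration) :

    -- Hypothetical example : name the parent explicitly.
    -- Omitting AS CHILD OF would have the same effect here,
    -- as DEEP_PURPLE is currently the leaf Edition.
    create edition led_zeppelin as child of deep_purple
    /
    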

    At this point, whilst we now have two editions in the database, it’s only possible for EDDY to use one of them.
    If EDDY attempts to switch to the new Edition…

    alter session set edition = deep_purple
    /
    

    …we get…

    In order for EDDY to be able to use the new Edition, we need to grant it…

    grant use on edition deep_purple to eddy
    /
    

    Now Eddy can see the new edition as well as the existing one :

    alter session set edition = deep_purple
    /
    
    select property_value as default_edition,
        sys_context('userenv', 'session_edition_name') as session_edition
    from database_properties
    where property_name = 'DEFAULT_EDITION'
    /
    

    Now we have access to the new Edition, we’re going to make some changes to the application code.
    First of all, we want to add a function to the package :

    create or replace package i_love_rock_n_roll as
    	function eddy_the_ed return varchar2;
        function motor_ed return varchar2;
    end i_love_rock_n_roll;
    /
    
    create or replace package body i_love_rock_n_roll as
        function eddy_the_ed return varchar2 is
        begin
            return 'Run to the Hills !';
        end eddy_the_ed;
        function motor_ed return varchar2
        is
        begin
            return 'Sunrise, wrong side of another day';
        end motor_ed;
    end i_love_rock_n_roll;
    /
    

    We’re also going to create a new standalone function :

    create or replace function for_those_about_to_rock
    	return varchar2 as
    begin
    	return 'We salute you !';
    end for_those_about_to_rock;
    /
    

    Looking at how these changes have affected the Editions with which these objects are associated is revealing :

    select object_name, object_type, edition_name
    from user_objects_ae
    order by 1,2
    /
    
    OBJECT_NAME              OBJECT_TYPE   EDITION_NAME
    -----------              -----------   ------------
    FOR_THOSE_ABOUT_TO_ROCK  FUNCTION      DEEP_PURPLE
    I_LOVE_ROCK_N_ROLL       PACKAGE       ORA$BASE
    I_LOVE_ROCK_N_ROLL       PACKAGE       DEEP_PURPLE
    I_LOVE_ROCK_N_ROLL       PACKAGE BODY  ORA$BASE
    I_LOVE_ROCK_N_ROLL       PACKAGE BODY  DEEP_PURPLE
    ROCK_CLASSICS            TABLE
    

    The new function, for_those_about_to_rock, is assigned to the current session edition as we would expect. However, it appears that the i_love_rock_n_roll package is now assigned to both versions.
    That’s not right, surely ?

    Let’s do a quick check…

    select i_love_rock_n_roll.motor_ed
    from dual
    /
    
    MOTOR_ED
    --------
    Sunrise, wrong side of another day
    

    So, we can see the new package function.
    However, if we now switch to the other Edition…

    alter session set edition = ORA$BASE
    /
    
    Session altered.
    

    …and try to invoke the standalone function we just created…

    select i_love_rock_n_roll.motor_ed
    from dual
    /
    
    Error starting at line : 1 in command -
    select i_love_rock_n_roll.motor_ed
    from dual
    
    Error at Command Line : 1 Column : 8
    Error report -
    SQL Error: ORA-00904: "I_LOVE_ROCK_N_ROLL"."MOTOR_ED": invalid identifier
    00904. 00000 -  "%s: invalid identifier"
    *Cause:
    *Action:
    

    However, we can still see the original package…

    select i_love_rock_n_roll.eddy_the_ed
    from dual
    /
    
    EDDY_THE_ED
    -----------
    Run to the Hills !
    

    Where it gets really interesting – for our current purposes at least – is that we can see the source code for both versions of the package in the USER_SOURCE_AE view.
    For the original Package Header :

    select text
    from user_source_ae
    where type = 'PACKAGE'
    and name = 'I_LOVE_ROCK_N_ROLL'
    and edition_name = 'ORA$BASE'
    order by line
    /
    

    …we get …

    TEXT
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    package i_love_rock_n_roll as
    	function eddy_the_ed return varchar2;
    end i_love_rock_n_roll;
    

    …but we can also get the new version from the same view…

    select text
    from user_source_ae
    where type = 'PACKAGE'
    and name = 'I_LOVE_ROCK_N_ROLL'
    and edition_name = 'DEEP_PURPLE'
    order by line
    /
    

    …returns…

    TEXT
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    package i_love_rock_n_roll as
    	function eddy_the_ed return varchar2;
        function motor_ed return varchar2;
    end i_love_rock_n_roll;
    

    One other point to note is that you can grant privileges on an object that only exists in your “new” edition…

    SQL> grant execute on for_those_about_to_rock to hr;
    
    Grant succeeded.
    

    …but when connected as that user, the object will not be visible…

    select eddy.for_those_about_to_rock from dual;
    
    Error starting at line : 1 in command -
    select eddy.for_those_about_to_rock from dual
    Error at Command Line : 1 Column : 8
    Error report -
    SQL Error: ORA-00904: "EDDY"."FOR_THOSE_ABOUT_TO_ROCK": invalid identifier
    00904. 00000 -  "%s: invalid identifier"
    *Cause:
    *Action:
    

    …nor will the grantee be able to see the Edition if they do not otherwise have privileges to do so…

    alter session set edition = deep_purple;
    
    Error starting at line : 1 in command -
    alter session set edition = deep_purple
    Error report -
    ORA-38802: edition does not exist
    38802. 00000 -  "edition does not exist"
    *Cause:    This error occurred because a reference was made to an edition that
               does not exist or that the current user does not have privileges on.
    *Action:   Check the edition name and verify that the current user has
               appropriate privileges on the edition.
    
    Releasing code using Editions

    As we can see, Editions allow us to separate the deployment of code from the actual release.
    We’ve already deployed our application changes but they are only visible to eddy at the moment.
    NOTE – as I said at the start, we’re only using EBR for releasing stored program units. If we had any table DDL then we’d need to deal with that separately from EBR in these particular circumstances.

    Anyhow, once we’re sure that all is well, we just need to “release” the code from the DEEP_PURPLE edition as follows :

    alter database default edition = deep_purple
    /
    

    Now when we connect as hr…

    select sys_context('userenv', 'session_edition_name')
    from dual
    /
    
    SYS_CONTEXT('USERENV','SESSION_EDITION_NAME')
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    DEEP_PURPLE
    

    …and the new function is now accessible…

    select eddy.for_those_about_to_rock
    from dual
    /
    
    FOR_THOSE_ABOUT_TO_ROCK
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    We salute you !                                                                                                                                                        
    
    

    Note that, whilst the Editionable objects in the HR schema itself have not directly inherited the new Edition…

    select object_name, object_type, edition_name
    from user_objects_ae
    where object_type in ('PROCEDURE', 'TRIGGER', 'VIEW')
    /
    
    OBJECT_NAME               OBJECT_TYPE         EDITION_NAME
    ------------------------- ------------------- ------------------------------
    UPDATE_JOB_HISTORY        TRIGGER             ORA$BASE
    ADD_JOB_HISTORY           PROCEDURE           ORA$BASE
    SECURE_EMPLOYEES          TRIGGER             ORA$BASE
    SECURE_DML                PROCEDURE           ORA$BASE
    EMP_DETAILS_VIEW          VIEW                ORA$BASE                      
    
    

    …they are still usable now that we’ve migrated to the DEEP_PURPLE edition…

    select first_name, last_name
    from emp_details_view
    where department_id = 60
    /
    
    FIRST_NAME           LAST_NAME
    -------------------- -------------------------
    Alexander            Hunold
    Bruce                Ernst
    David                Austin
    Valli                Pataballa
    Diana                Lorentz                  
    
    
    Rolling back the entire release

    If we need to roll back all of the code changes we've made, EBR makes this process very simple.

    Remember, the objects owned by EDDY in the DEEP_PURPLE Edition are :

    select object_name, object_type
    from dba_objects
    where owner = 'EDDY'
    order by 2,1
    /
    
    OBJECT_NAME              OBJECT_TYPE
    -----------              -----------
    FOR_THOSE_ABOUT_TO_ROCK  FUNCTION
    I_LOVE_ROCK_N_ROLL       PACKAGE
    I_LOVE_ROCK_N_ROLL       PACKAGE BODY
    ROCK_CLASSICS            TABLE         
    
    

    …and the package members are…

    Now, to roll back all of the application changes associated with the DEEP_PURPLE Edition, we simply need to run…

    alter database default edition = ora$base
    /
    

    We can see that this has had the desired effect

    select object_name, object_type
    from dba_objects
    where owner = 'EDDY'
    order by 2,1
    /
    
    OBJECT_NAME         OBJECT_TYPE
    -----------         -----------
    I_LOVE_ROCK_N_ROLL  PACKAGE
    I_LOVE_ROCK_N_ROLL  PACKAGE BODY
    ROCK_CLASSICS       TABLE
    

    The function has disappeared, along with the additional package member…

    Well, that's nice and easy, but how could we use EBR to roll back a single change rather than the entire release ?

    Rolling back a single change

    To demonstrate this, we need to set the current Edition back to DEEP_PURPLE…

    alter database default edition = deep_purple
    /
    

    Remember that, where relevant, EBR ensures that a copy of an object’s source code for previous Editions is kept in the Data Dictionary.
    We can use this stored code to restore these versions to the current Edition.
    NOTE – the owner of this next procedure will need the ALTER ANY PROCEDURE privilege :

    create or replace procedure restore_version
    (
        i_owner dba_source_ae.owner%type,
        i_name dba_source_ae.name%type,
        i_type dba_source_ae.type%type,
        i_source_edition dba_source_ae.edition_name%type,
        i_target_edition dba_source_ae.edition_name%type
    )
    is
    --
    -- Simple procedure to demonstrate restoring a given Edition's version
    -- of a stored program unit.
    --
        -- The DDL we execute will complete the current transaction so...
        pragma autonomous_transaction;
    
        rb_source clob;
    begin
        if i_owner is null or i_name is null or i_type is null
            or i_source_edition is null or i_target_edition is null
        then
            raise_application_error(-20000, 'Values for all parameters must be supplied');
        end if;
    
        -- Make sure our session is in the target edition. If not then error.
        if upper(i_target_edition) != upper(sys_context('userenv', 'session_edition_name')) then
            raise_application_error(-20001, 'Session must be in the target edition');
        end if;
    
        for r_code in
        (
            select line,text
            from dba_source_ae
            where owner = upper(i_owner)
            and name = upper(i_name)
            and type = upper(i_type)
            and edition_name = upper(i_source_edition)
            order by line
        )
        loop
            if r_code.line = 1 then
                rb_source := 'create or replace '
                    ||replace(lower(r_code.text), lower(i_type)||' ', i_type||' '||i_owner||'.');
            else
                rb_source := rb_source||r_code.text;
            end if;
        end loop;
    
        if nvl(length(rb_source),0) = 0 then
            raise_application_error(-20002, 'Object source not found');
        end if;    
    
        -- execute the ddl to restore the object
        execute immediate rb_source;
    
    end restore_version;
    /
    

    In the current example we have the Package header and Package body of EDDY.I_LOVE_ROCK_N_ROLL in both the ORA$BASE and DEEP_PURPLE Editions.
    If we want to reverse these changes but leave the rest of the release unaffected, we can simply invoke this procedure…

    begin
        restore_version('EDDY', 'I_LOVE_ROCK_N_ROLL', 'PACKAGE', 'ORA$BASE', 'DEEP_PURPLE');
        restore_version('EDDY', 'I_LOVE_ROCK_N_ROLL', 'PACKAGE BODY', 'ORA$BASE', 'DEEP_PURPLE');
    end;
    /
    

    We can now see that the original package has been restored and is available in the DEEP_PURPLE Edition, along with the other code from the release. However, the package function we’ve removed isn’t :

    As it stands, this is a one-time operation, as we're effectively restoring the old version by creating it in the new Edition. At that point the stored program units are identical in both Editions.
    The obvious solution would be to change the session edition programmatically in the procedure. Unfortunately, attempts to do so run into :

    ORA-38815: ALTER SESSION SET EDITION must be a top-level SQL statement
    

    Of course, you could issue the ALTER SESSION commands in the script you’re using to call the procedure. However, you would then also need to make a copy of the current Edition code before restoring the old version and it all gets fairly involved.
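    One possible alternative – offered here only as an untested sketch – is the supplied DBMS_SESSION.SET_EDITION_DEFERRED procedure, which requests the change from inside PL/SQL and applies it at the next statement boundary rather than immediately :

    begin
        -- Deferred : the new edition only takes effect once the current
        -- top-level call completes, so the restore itself would still
        -- need to happen in a subsequent call.
        dbms_session.set_edition_deferred('ORA$BASE');
    end;
    /
    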

    Conclusion

    Whilst all of this isn't quite using EBR for its intended purpose, it does offer a couple of advantages over the more traditional method of releasing stored program unit changes.
    First of all, you can separate the deployment of code into your production environment and making it visible to users.
    Secondly, releasing the code becomes a single ALTER DATABASE statement, as does rolling back those changes.
    Finally, it is possible to quickly revert individual stored program units should the need become evident once the release has been completed.
    All of this functionality becomes available without you having to write much code.
    The downside is that a reversion of an individual program unit is a one-time operation unless you write some custom code around this, which is what we were trying to get away from to start with.
    Additionally, without implementing any Editioning Views, you will still have to manage structural changes to tables in the same way as before.

    The weather forecast is sunny for this coming weekend. Unfortunately, that means I’ll have to mow the lawn rather than sit under the umbrella. Honestly, I’m sure astro-turf can’t be that expensive…


    Filed under: Oracle, PL/SQL Tagged: 'current_edition_name'), 'session_edition_name'), alter database default edition, alter session set edition, alter user enable editions, create edition, database default edition, dba_objects_ae, dba_source_ae, Edition Based Redefinition, execute immediate, ORA-38802: edition does not exist, ORA-38815: ALTER SESSION SET EDITION must be a top-level SQL statement, session edition, sys_context('userenv', user_objects_ae, user_source_ae

    Explain plan changed and query performance

    Tom Kyte - Thu, 2017-08-10 14:06
    Hello. Recently the database server crashed, so the Oracle server was reinstalled on another host and a recovery was made from the last backup. Now everything is working, but I have a performance problem with some queries: SELECT p.utmx, ...
    Categories: DBA Blogs

    How to create global partitioned Index with different partition structure as underlying list partitioned table

    Tom Kyte - Thu, 2017-08-10 14:06
    Hello together ! Today I run in issues if I try to create a global list partitioned Index on a list partitioned table. I would like to have a different partitioning structure on the index. I see some examples in the web with Hash partitioning. Bu...
    Categories: DBA Blogs

    How to create a view with parameter

    Tom Kyte - Thu, 2017-08-10 14:06
    Hi, how to create view with parameter? we want to create view definition as below, <code>create view test_item_v as select item_class, nvl(rev_id,v_rev_id), sum(total_cost), sum(total_resale), sum(margin) from test_item_class where ...
    Categories: DBA Blogs

    When do we use table reorg?

    Tom Kyte - Thu, 2017-08-10 14:06
    Hi TOM, I am using reorg on around 50 tables. I am using below queries. "alter table tablename enable row movement;" "alter table tablename shrink space;" "ALTER TABLE tablename DEALLOCATE UNUSED;" Is there any way that we can check w...
    Categories: DBA Blogs

    Howto mask (remap) data during import over database link with Datapump ?

    Tom Kyte - Thu, 2017-08-10 14:06
    Hi Tom I have an issue with Data Masking (remap_data). It seems, that the package for masking the data has to be present on the importing database, when I use dbms_datapump to import tables from the source database. This seems to go against al...
    Categories: DBA Blogs
