Feed aggregator

Configure easily your Stretch Database

Yann Neuhaus - Fri, 2016-10-21 10:07

In this blog, I will present the new Stretch Database feature of SQL Server 2016. It couples your on-premises SQL Server database with an Azure SQL Database, allowing you to stretch data from one or more tables to the Azure cloud.
This mechanism lets you use the low-cost storage available in Azure instead of fast and expensive local solid-state drives. Note that during data transfers and remote queries, it is the resources of the Azure SQL Database server that are solicited, not those of the on-premises SQL Server.

First, you need to enable the “Remote Data Archive” option at the instance level. To verify if the option is enabled:
USE master;
SELECT name, value, value_in_use, description FROM sys.configurations WHERE name = 'remote data archive';

To enable this option at the instance level:

EXEC sys.sp_configure N'remote data archive', '1';
RECONFIGURE;

Now, you have to link your on-premises database with a remote SQL Database server:
USE AdventureWorks2014;
CREATE DATABASE SCOPED CREDENTIAL Stretch_cred
WITH IDENTITY = 'dbi' , SECRET = 'userPa$$w0rd' ;
ALTER DATABASE AdventureWorks2014
SET ( REMOTE_DATA_ARCHIVE = ON (
SERVER = 'dbisqldatabase.database.windows.net' ,
CREDENTIAL = Stretch_cred
) ) ;

The process may take some time, as it creates a new SQL Database in Azure linked to your on-premises database. The credential entered to connect to your SQL Database server is defined in SQL Database. Beforehand, you need to secure the credential with a database master key.

To view all the remote databases from your instance:
Select * from sys.remote_data_archive_databases

Now, if you want to migrate one table from your database ([Purchasing].[PurchaseOrderDetail] in my example), proceed as follows:
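The original post showed this step through screenshots; as a sketch, the equivalent T-SQL (the standard SQL Server 2016 Stretch syntax) would be:

```sql
-- Enable Stretch on the table and start migrating its rows to Azure
ALTER TABLE [Purchasing].[PurchaseOrderDetail]
    SET ( REMOTE_DATA_ARCHIVE = ON ( MIGRATION_STATE = OUTBOUND ) ) ;
```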

Of course, repeat this process for each table you want to stretch. You can still access your data during the migration process.

To view all the remote tables from your instance:
Select * from sys.remote_data_archive_tables

To view the batch process of all the data being migrated (you can also filter by a specific table):
Select * from sys.dm_db_rda_migration_status
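For example, assuming you want to follow a single table, you can (as a sketch) filter this DMV on the table's object id:

```sql
-- Migration batches for the stretched table only
SELECT *
FROM sys.dm_db_rda_migration_status
WHERE table_id = OBJECT_ID('Purchasing.PurchaseOrderDetail') ;
```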

It is also possible to easily migrate your data back:
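This step was shown as a screenshot in the original post; as a sketch, switching the migration direction back is done with the INBOUND state:

```sql
-- Bring the migrated rows back from Azure to the on-premises table
ALTER TABLE [Purchasing].[PurchaseOrderDetail]
    SET ( REMOTE_DATA_ARCHIVE = ON ( MIGRATION_STATE = INBOUND ) ) ;
```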

Moreover, you can select which rows to migrate by using a filter function. Here is an example:
CREATE FUNCTION dbo.fn_stretchpredicate(@column9 datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
WHERE @column9 > CONVERT(datetime, '1/1/2014', 101)

Then, when enabling the data migration, specify the filter function:
ALTER TABLE [Purchasing].[PurchaseOrderDetail] SET ( REMOTE_DATA_ARCHIVE = ON (
FILTER_PREDICATE = dbo.fn_stretchpredicate(ModifiedDate),
MIGRATION_STATE = OUTBOUND
) )

Of course, in the Microsoft world, you can also use a wizard to set up this feature. The choice is up to you!


The article Configure easily your Stretch Database appeared first on Blog dbi services.

Rapid analytics

DBMS2 - Fri, 2016-10-21 09:17

“Real-time” technology excites people, and has for decades. Yet the actual, useful technology to meet “real-time” requirements remains immature, especially in cases which call for rapid human decision-making. Here are some notes on that conundrum.

1. I recently posted that “real-time” is getting real. But there are multiple technology challenges involved, including:

  • General streaming. Some of my posts on that subject are linked at the bottom of my August post on Flink.
  • Low-latency ingest of data into structures from which it can be immediately analyzed. That helps drive the (re)integration of operational data stores, analytic data stores, and other analytic support — e.g. via Spark.
  • Business intelligence that can be used quickly enough. This is a major ongoing challenge. My clients at Zoomdata may be thinking about this area more clearly than most, but even they are still in the early stages of providing what users need.
  • Advanced analytics that can be done quickly enough. Answers there may come through developments in anomaly management, but that area is still in its super-early days.
  • Alerting, which has been under-addressed for decades. Perhaps the anomaly management vendors will finally solve it.

2. In early 2011, I coined the phrase investigative analytics, about which I said three main things:

  • It is meant to contrast with “operational analytics”.
  • It is meant to conflate “several disciplines, namely”:
    • Statistics, data mining, machine learning, and/or predictive analytics.
    • The more research-oriented aspects of business intelligence tools.
    • Analogous technologies as applied to non-tabular data types such as text or graph.
  • A simple definition would be “Seeking (previously unknown) patterns in data.”

Generally, that has held up pretty well, although “exploratory” is the more widely used term. But the investigative/operational dichotomy obscures one key fact, which is the central point of this post: There’s a widespread need for very rapid data investigation.

3. This is not just a niche need. There are numerous rapid-investigation use cases, some already mentioned in my recent posts on anomaly management and real-time applications.

  • Network operations. This is my paradigmatic example.
    • Data is zooming all over the place, in many formats and structures, among many kinds of devices. That’s log data, header data and payload data alike. Many kinds of problems can arise …
    • … which operators want to diagnose and correct, in as few minutes as possible.
    • Interfaces commonly include real-time business intelligence, some drilldown, and a lot of command-line options.
    • I’ve written about various specifics, especially in connection with the vendors Splunk and Rocana.
  • Security and anti-fraud. Infosec and cyberfraud, to a considerable extent, are just common problems in network operations. Much of the response is necessarily automated — but the bad guys are always trying to outwit your automation. If you think they may have succeeded, you want to figure that out very, very fast.
  • Consumer promotion and engagement. Consumer marketers feel a great need for speed. Some of it is even genuine. :)
    • If an online promotion is going badly (or particularly well), they can in theory react almost instantly. So they’d like to know almost instantly, perhaps via BI tools with great drilldown.
    • The same is even truer in the case of social media eruptions and the like. Obviously, the tools here are heavily text-oriented.
    • Call centers and even physical stores have some of the same aspects as internet consumer operations.
  • Consumer internet backends, for e-commerce, publishing, gaming or whatever. These cases combine and in some cases integrate the previous three points. For example, if you get a really absurd-looking business result, that could be your first indication of network malfunctions or automated fraud.
  • Industrial technology, such as factory operations, power/gas/water networks, vehicle fleets or oil rigs. Much as in IT networks, these contain a diversity of equipment — each now spewing its own logs — and have multiple possible modes of failure. More often than is the case in IT networks, you can recognize danger signs, then head off failure altogether via preventive maintenance. But when you can’t, it is crucial to identify the causes of failure fast.
  • General IoT (Internet of Things) operation. This covers several of the examples above, as well as cases in which you sell a lot of devices, have them “phone home”, and labor to keep that whole multi-owner network working.
  • National security. If I told you what I meant by this one, I’d have to … [redacted].

4. And then there’s the investment industry, which obviously needs very rapid analysis. When I was a stock analyst, I could be awakened by a phone call and told news that I would need to explain to 1000s of conference call listeners 20 minutes later. This was >30 years ago. The business moves yet faster today.

The investment industry has invested greatly in high-speed supporting technology for decades. That’s how Mike Bloomberg got so rich founding a vertical market tech business. But investment-oriented technology indeed remains a very vertical sector; little of it gets more broadly applied.

I think the reason may be that investing is about guesswork, while other use cases call for more definitive answers. In particular:

  • If you’re wrong 49.9% of the time in investing, you might still be a big winner.
  • In high-frequency trading, speed is paramount; you have to be faster than your competitors. In speed/accuracy trade-offs, speed wins.

5. Of course, it’s possible to overstate these requirements. As in all real-time discussions, one needs to think hard about:

  • How much speed is important in meeting users’ needs.
  • How much additional speed, if any, is important in satisfying users’ desires.

But overall, I have little doubt that rapid analytics is a legitimate area for technology advancement and growth.

Categories: Other

Webcast: Oracle WebCenter Sites Product Roadmap Review Webcast

WebCenter Team - Fri, 2016-10-21 07:54

The IOUG WebCenter SIG is hosting a webcast on October 31st at 11:00am Central time.

Learn about the exciting new features available in Oracle WebCenter Sites for the 11g and 12c releases. Understand how marketers, content contributors, developers, and administrators can take advantage of the advancements made in the latest release of Oracle WebCenter Sites. Get a sneak peek into what’s coming in Oracle WebCenter Sites.

Featured Speaker: Sripathy Rao, WebCenter Principal Product Manager

IOUG WebCenter Special Interest Group: http://www.ioug.org/p/cm/ld/fid=148&gid=61

Oracle NoSQL Database Version 4.2 Now available

Oracle NoSQL Database Version 4.2 is now available for download! Download Page – Click Here - Documentation Page – Click Here Oracle NoSQL...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Reminder: EBS 12.2 Minimum Patching Baselines and Dates

Steven Chan - Fri, 2016-10-21 02:05

Oracle E-Business Suite 12.2 is now covered by Premier Support to September 30, 2023. This is documented here:

Oracle Lifetime Support table for EBS

Premier Support includes new EBS 12.2 patches for:

  • Individual product families (e.g. Oracle Financials)
  • Quarterly security updates released via the Critical Patch Update process
  • New technology stack certifications with server-based components
  • New certifications for end-user desktops and mobile devices

What are the minimum patching baselines for EBS 12.2?

The minimum patching baselines for EBS 12.2 have not changed.  New EBS 12.2 patches are created and tested against the minimum patching baseline documented in Section 4.1, "E-Business Suite 12.2 Minimum Prerequisites" in this document:

All EBS 12.2 customers must apply the minimum patch prerequisites to be eligible for Premier Support. Those patches include the suite-wide EBS 12.2.3 Release Update Pack and a small number of technology stack infrastructure updates.

What should you apply instead of the minimum baseline?

Many new updates have been released since that patching baseline was originally published.  These new patches contain stability, performance, and security-related updates and are strongly recommended for all customers. 

If you haven't applied any of those updates yet, the simplest solution would be to apply the latest suite-wide EBS 12.2.6 Release Update Pack (RUP).  That way, you get all of the 12.2 minimum patching requirements and the latest updates simultaneously.

If you've reached this blog article via a search engine, it is possible that a new Release Update Pack has been released since this article was first published.  Click on the "Certifications" link in the sidebar for a pointer to the latest EBS 12.2 RUP.

Related Articles

Categories: APPS Blogs

Documentum story – Unable to start a new Managed Server in SSL in WebLogic

Yann Neuhaus - Fri, 2016-10-21 02:00

Some time ago, I was creating a new Managed Server named msD2-02 on an existing domain of a WebLogic Server created loooong ago and I faced a small issue that I will try to explain in this blog. This Managed Server will be used to host a D2 4.5 Application (Documentum Client) and I created it using the Administration Console, customized it, enabled the SSL with internal SSL Certificates, the SAML2 Single Sign-On, aso…


When I wanted to start it for the first time, I got an error showing that the username/password used was wrong… So I tried to recreate the boot.properties file from scratch, setting the username/password in there, and tried again: same error. What to do then? To be sure that the password was correct (even if I was pretty sure it was), I copied the boot.properties file from another Managed Server and tried again, but got the same result over and over. Therefore I tried one last time, removing the boot.properties completely in order to enter the credentials during the startup:

[weblogic@weblogic_server_01 msD2-02]$ /app/weblogic/domains/DOMAIN/bin/startManagedWebLogic.sh msD2-02 t3s://weblogic_server_01:8443

JAVA Memory arguments: -Xms2048m -Xmx2048m -XX:MaxMetaspaceSize=512m



*  To start WebLogic Server, use a username and   *
*  password assigned to an admin-level user.  For *
*  server administration, use the WebLogic Server *
*  console at http://hostname:port/console        *
starting weblogic with Java version:
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
Starting WLS with line:
/app/weblogic/Java/jdk1.8.0_45/bin/java -server -Xms2048m -Xmx2048m -XX:MaxMetaspaceSize=512m -Dweblogic.Name=msD2-02 -Djava.security.policy=/app/weblogic/Middleware/wlserver/server/lib/weblogic.policy  -Dweblogic.ProductionModeEnabled=true -Dweblogic.security.SSL.trustedCAKeyStore=/app/weblogic/Middleware/wlserver/server/lib/cacerts  -Dcom.sun.xml.ws.api.streaming.XMLStreamReaderFactory.woodstox=true -Dcom.sun.xml.ws.api.streaming.XMLStreamWriterFactory.woodstox=true -Djava.io.tmpdir=/app/weblogic/tmp/DOMAIN/msD2-02 -Ddomain.home=/app/weblogic/domains/DOMAIN -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.protocolVersion=TLS1 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.nodemanager.ServiceEnabled=true  -Djava.endorsed.dirs=/app/weblogic/Java/jdk1.8.0_45/jre/lib/endorsed:/app/weblogic/Middleware/wlserver/../oracle_common/modules/endorsed  -da -Dwls.home=/app/weblogic/Middleware/wlserver/server -Dweblogic.home=/app/weblogic/Middleware/wlserver/server   -Dweblogic.management.server=t3s://weblogic_server_01:8443  -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true  weblogic.Server
<Jun 14, 2016 11:52:43 AM UTC> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Jun 14, 2016 11:52:43 AM UTC> <Notice> <WebLogicServer> <BEA-000395> <The following extensions directory contents added to the end of the classpath:
<Jun 14, 2016 11:52:44 AM UTC> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 25.45-b02 from Oracle Corporation.>
<Jun 14, 2016 11:52:44 AM UTC> <Info> <Security> <BEA-090065> <Getting boot identity from user.>
Enter username to boot WebLogic server:weblogic
Enter password to boot WebLogic server:
<Jun 14, 2016 11:52:54 AM UTC> <Warning> <Security> <BEA-090924> <JSSE has been selected by default, since the SSLMBean is not available.>
<Jun 14, 2016 11:52:54 AM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
<Jun 14, 2016 11:52:54 AM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /app/weblogic/Middleware/wlserver/server/lib/cacerts.>
<Jun 14, 2016 11:52:54 AM UTC> <Info> <Management> <BEA-141298> <Could not register with the Administration Server: java.rmi.RemoteException: [Deployer:149150]An IOException occurred while reading the input.; nested exception is:
        javax.net.ssl.SSLException: Error using PKIX CertPathBuilder.>
<Jun 14, 2016 11:52:54 AM UTC> <Info> <Management> <BEA-141107> <Version: WebLogic Server  Wed May 21 18:53:34 PDT 2014 1604337 >
<Jun 14, 2016 11:52:55 AM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
<Jun 14, 2016 11:52:55 AM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /app/weblogic/Middleware/wlserver/server/lib/cacerts.>
<Jun 14, 2016 11:52:55 AM UTC> <Alert> <Management> <BEA-141151> <The Administration Server could not be reached at https://weblogic_server_01:8443.>
<Jun 14, 2016 11:52:55 AM UTC> <Info> <Configuration Management> <BEA-150018> <This server is being started in Managed Server independence mode in the absence of the Administration Server.>
<Jun 14, 2016 11:52:55 AM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
<Jun 14, 2016 11:52:55 AM UTC> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.>
<Jun 14, 2016 11:52:55 AM UTC> <Info> <WorkManager> <BEA-002942> <CMM memory level becomes 0. Setting standby thread pool size to 256.>
<Jun 14, 2016 11:52:55 AM UTC> <Notice> <Log Management> <BEA-170019> <The server log file /app/weblogic/domains/DOMAIN/servers/msD2-02/logs/msD2-02.log is opened. All server side log events will be written to this file.>
<Jun 14, 2016 11:52:57 AM UTC> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<Jun 14, 2016 11:52:57 AM UTC> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias alias_cert from the JKS keystore file /app/weblogic/domains/DOMAIN/certs/identity.jks.>
<Jun 14, 2016 11:52:57 AM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the JKS keystore file /app/weblogic/domains/DOMAIN/certs/trust.jks.>
<Jun 14, 2016 11:52:58 AM UTC> <Critical> <Security> <BEA-090403> <Authentication for user weblogic denied.>
<Jun 14, 2016 11:52:58 AM UTC> <Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: A MultiException has 6 exceptions.  They are:
1. weblogic.security.SecurityInitializationException: Authentication for user weblogic denied.
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.security.SecurityService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.jndi.internal.RemoteNamingService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.jndi.internal.RemoteNamingService
5. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.t3.srvr.T3InitializationService errors were found
6. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.t3.srvr.T3InitializationService

A MultiException has 6 exceptions.  They are:
1. weblogic.security.SecurityInitializationException: Authentication for user weblogic denied.
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.security.SecurityService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.jndi.internal.RemoteNamingService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.jndi.internal.RemoteNamingService
5. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.t3.srvr.T3InitializationService errors were found
6. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.t3.srvr.T3InitializationService

        at org.jvnet.hk2.internal.Collector.throwIfErrors(Collector.java:88)
        at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:269)
        at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:413)
        at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:456)
        at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:225)
        Truncated. see log file for complete stacktrace
Caused By: weblogic.security.SecurityInitializationException: Authentication for user weblogic denied.
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.doBootAuthorization(CommonSecurityServiceManagerDelegateImpl.java:1023)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.postInitialize(CommonSecurityServiceManagerDelegateImpl.java:1131)
        at weblogic.security.service.SecurityServiceManager.postInitialize(SecurityServiceManager.java:943)
        at weblogic.security.SecurityService.start(SecurityService.java:159)
        at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:78)
        Truncated. see log file for complete stacktrace
Caused By: javax.security.auth.login.FailedLoginException: [Security:090303]Authentication Failed: User weblogic weblogic.security.providers.authentication.LDAPAtnDelegateException: [Security:090295]caught unexpected exception
        at weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl.login(LDAPAtnLoginModuleImpl.java:257)
        at com.bea.common.security.internal.service.LoginModuleWrapper$1.run(LoginModuleWrapper.java:110)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.bea.common.security.internal.service.LoginModuleWrapper.login(LoginModuleWrapper.java:106)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        Truncated. see log file for complete stacktrace
<Jun 14, 2016 11:52:58 AM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED.>
<Jun 14, 2016 11:52:58 AM UTC> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down.>
<Jun 14, 2016 11:52:58 AM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>


As you can see above, the WebLogic Managed Server is able to retrieve and read the SSL keystores (identity and trust), so this apparently isn’t the issue; the problem seems instead to be linked to a wrong username/password. Strange, isn’t it?


All other Managed Servers are working perfectly: the applications are accessible in HTTPS, we can see the status of the servers via WLST/AdminConsole, aso… But this specific Managed Server isn’t able to start. After some reflection, I thought about the Embedded LDAP! This is a completely new Managed Server and I tried to start it directly in HTTPS. What if this Managed Server isn’t able to authenticate the user weblogic because this user doesn’t exist in the Embedded LDAP of the Managed Server? Indeed, during its first start, a Managed Server will try to automatically replicate the Embedded LDAP from the AdminServer, which contains the primary Embedded LDAP. Just for information, we usually create a bunch of Managed Servers for Documentum during the domain creation, and therefore all these Managed Servers are usually started at least once in HTTP before SSL is set up in the domain: that’s the main difference between the existing Managed Servers and the new one, so I dug deeper in this direction.


To test my theory, I tried to replicate the Embedded LDAP manually. In case you don’t know how to do it, please take a look at this blog, which explains it in detail: click here. After doing that, the Managed Server msD2-02 was indeed able to start, because it could now authenticate the user weblogic, but that doesn’t explain why the Embedded LDAP wasn’t replicated automatically in the first place…


So I took a deeper look at the logs, and actually the first strange message during startup is always the same:

<Jun 14, 2016 11:52:54 AM UTC> <Info> <Management> <BEA-141298> <Could not register with the Administration Server: java.rmi.RemoteException: [Deployer:149150]An IOException occurred while reading the input.; nested exception is:
        javax.net.ssl.SSLException: Error using PKIX CertPathBuilder.>


As said previously, all components are set up in HTTPS and only HTTPS. Therefore all communications use an SSL certificate. For this customer, we weren’t using a self-signed certificate but a certificate signed by an internal Certificate Authority. As shown in the Info message, the Managed Server wasn’t able to register with the AdminServer because of an SSL exception… Therefore I checked the SSL certificate, as well as the Root and Gold Certificate Authorities, but as far as I could see everything was working properly. The Admin Console is accessible in HTTPS, all applications are accessible, the status of the Managed Servers is visible in the Administration Console and via WLST (which shows that they are able to communicate internally too), aso… So what could be wrong? Well, after checking the startup command of the Managed Server (it is also mentioned in the startup logs), I found the following:

[weblogic@weblogic_server_01 servers]$ ps -ef | grep msD2-02 | grep -v grep
weblogic 31313     1  0 14:34 pts/2    00:00:00 /bin/sh ../startManagedWebLogic.sh msD2-02 t3s://weblogic_server_01:8443
weblogic 31378 31315 26 14:34 pts/2    00:00:35 /app/weblogic/Java/jdk1.8.0_45/bin/java -server
    -Xms2048m -Xmx2048m -XX:MaxMetaspaceSize=512m -Dweblogic.Name=msD2-02
    -Djava.security.policy=/app/weblogic/Middleware/wlserver/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true
    -Dcom.sun.xml.ws.api.streaming.XMLStreamReaderFactory.woodstox=true -Dcom.sun.xml.ws.api.streaming.XMLStreamWriterFactory.woodstox=true
    -Djava.io.tmpdir=/app/weblogic/tmp/DOMAIN/msD2-02 -Ddomain.home=/app/weblogic/domains/DOMAIN -Dweblogic.nodemanager.ServiceEnabled=true
    -Dweblogic.security.SSL.protocolVersion=TLS1 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom
    -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.nodemanager.ServiceEnabled=true
    -da -Dwls.home=/app/weblogic/Middleware/wlserver/server -Dweblogic.home=/app/weblogic/Middleware/wlserver/server
    -Dweblogic.management.server=t3s://weblogic_server_01:8443 -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server


What is this -Dweblogic.security.SSL.trustedCAKeyStore JVM parameter (you can also see it in the startup logs above)? Why does WebLogic define a specific cacerts for this Managed Server instead of using the default one (included in Java)? Something is strange with this startup command… So I checked all the other WebLogic Server processes, and apparently ALL Managed Servers include this custom cacerts while the AdminServer doesn’t… Is that a bug?! Even if it makes sense to create a custom cacerts for WebLogic only, why isn’t the AdminServer using it? This inconsistency is exactly why we are facing this issue:
– All Managed Servers are using: /app/weblogic/Middleware/wlserver/server/lib/cacerts
– The AdminServer is using: /app/weblogic/Java/jdk1.8.0_45/jre/lib/security/cacerts
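To see at a glance which trust store a running server picked up, you can extract the property from its start command. This is a sketch: the echo below stands in for a captured process line so the command is self-contained; in practice you would feed it from `ps -ef | grep '[w]eblogic.Server'`:

```shell
# Extract the trust store a WebLogic JVM was started with.
# The echoed line is a shortened stand-in for a real process entry.
echo "/app/weblogic/Java/jdk1.8.0_45/bin/java -server -Dweblogic.security.SSL.trustedCAKeyStore=/app/weblogic/Middleware/wlserver/server/lib/cacerts -Dweblogic.Name=msD2-02 weblogic.Server" |
  grep -o 'trustedCAKeyStore=[^ ]*'
# → trustedCAKeyStore=/app/weblogic/Middleware/wlserver/server/lib/cacerts
```

An AdminServer process line produces no match, which is exactly the asymmetry described above.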


After checking the different startup scripts, it appears that this is defined in the file startManagedWebLogic.sh. This JVM parameter is therefore only used by the Managed Servers, so it is apparently a choice from Oracle (or something that has been forgotten…) to start only the Managed Servers with this option and not the AdminServer… Using different cacerts means that the SSL certificates trusted by Java (the default cacerts) will be trusted by the AdminServer, but not by the Managed Servers. In our setup, we always add the Root and Gold Certificates (SSL chain) to the default Java cacerts, because that is what allows us to set up our domain and our applications in SSL. This works properly, but it isn’t enough to let the Managed Servers start: you also need to take care of this second cacerts, and that’s the reason why the new Managed Server wasn’t able to register with the AdminServer and therefore couldn’t replicate the Embedded LDAP.


So how to correct that? First, let’s export the certificate chain from the identity keystore and import it into the WebLogic cacerts too:

[weblogic@weblogic_server_01 servers]$ keytool -export -v -alias root_ca -file rootCA.der -keystore /app/weblogic/domains/DOMAIN/certs/identity.jks
[weblogic@weblogic_server_01 servers]$ keytool -export -v -alias gold_ca -file goldCA.der -keystore /app/weblogic/domains/DOMAIN/certs/identity.jks
[weblogic@weblogic_server_01 servers]$
[weblogic@weblogic_server_01 servers]$ keytool -import -v -trustcacerts -alias root_ca -file rootCA.der -keystore /app/weblogic/Middleware/wlserver/server/lib/cacerts
Enter keystore password:
[weblogic@weblogic_server_01 servers]$ keytool -import -v -trustcacerts -alias gold_ca -file goldCA.der -keystore /app/weblogic/Middleware/wlserver/server/lib/cacerts
Enter keystore password:
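To double-check that both CAs are now trusted by the Managed Servers, you can list the content of the WebLogic cacerts (paths and alias names as used in this setup; output depends on your environment):

```shell
# List the trusted entries of the WebLogic-specific cacerts and look for our two CA aliases
keytool -list -keystore /app/weblogic/Middleware/wlserver/server/lib/cacerts | grep -iE 'root_ca|gold_ca'
```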


After doing that, you just have to remove the Embedded LDAP of this Managed Server to reinitialize it, using the same steps as before, but this time do not copy the ldap folder from the AdminServer, since we need to ensure that the automatic replication now works. Then start the Managed Server one last time and verify that the replication happens properly and that the Managed Server is therefore able to start. For me, everything was now working properly, so that’s a victory! :)


The article Documentum story – Unable to start a new Managed Server in SSL in WebLogic appeared first on Blog dbi services.

Links for 2016-10-20 [del.icio.us]

Categories: DBA Blogs

Update rows using MERGE on rows that do not have a unique identifier

Tom Kyte - Thu, 2016-10-20 16:06
I have an external table that reads from a csv file. I then need to merge any updates or new rows to a table. The problem is the table does not have a unique identifier. I have account numbers and dates, but the date may get updated on an account ...
Categories: DBA Blogs

Handle individual UKs on bulk inserts

Tom Kyte - Thu, 2016-10-20 16:06
Hello, I need to execute bulk insert into a table where two columns have unique constraints. One column has native values (cannot be changed) and another one contains abstract pseudo-random value (I generate it myself but I cannot change the algorit...
Categories: DBA Blogs

Convert my rows in columns

Tom Kyte - Thu, 2016-10-20 16:06
Hi friends! I tried to use pivot, unpivot and other ways to return the expected, but i'm not successfully. <code>create table t1 (ID NUMBER(5), tp char(1), nm varchar2(5), st number(2), en number(2)); insert into t1 values (1,'A', 'a', 0, ...
Categories: DBA Blogs

How to use case statement inside where clause ?

Tom Kyte - Thu, 2016-10-20 16:06
I'm getting error-- PL/SQL: ORA-00905: missing keyword when i compile the following script create or replace procedure testing (ass_line in char, curs out sys_refcursor ) is begin open curs for select asl.Production_Group,asl.last_sequen...
Categories: DBA Blogs

Mixing 9s and 0s in format number

Tom Kyte - Thu, 2016-10-20 16:06
Hello, I'm a bit confused by SQL Refenrence Lets consider Elements 0 and 9 of "Table 2-14 Number Format Elements" of the current "SQL Reference" Lets also consider the emxamples of "Table 2-15 Results of Number Conversions" I have SO much to a...
Categories: DBA Blogs

Column level access restrictions on a table

Tom Kyte - Thu, 2016-10-20 16:06
Let's say I have a table T with columns A, B, C and D. Data in each column by itself is not considered sensitive, but a combination of columns A,B,C in the same resultset is considered sensitive. Is it possible to allow queries that select A,B,D or A...
Categories: DBA Blogs

Related to Job scheduler

Tom Kyte - Thu, 2016-10-20 16:06
Hi Tom, actually i created a job (it will run for every 5 mins)which will execute the procedure,In that procedure we are reading file list from directory and constructing a string which contains all the file names using java class. Here my prob...
Categories: DBA Blogs

Invalid identifier while merge using dynamic sql - static merge statement succeeds

Tom Kyte - Thu, 2016-10-20 16:06
Hi, I am facing a weird problem in which a merge statement using dynamic sql is failing with the below error while the static merge statement is succeeding. <b>Error:</b> ORA-00904: : invalid identifier ORA-06512: at line 60<u></u> Below is...
Categories: DBA Blogs

The Best Transformation Story Ever

Linda Fishman Hoyle - Thu, 2016-10-20 15:19

Oracle’s transformation story isn’t new. It just gets better over time. As companies transition to the cloud and become digital businesses, who better than Oracle to guide them through these changes? Oracle writer Margaret Lindquist shares many of these insights in Oracle’s Safra Catz (pictured left): How Finance Can Lead Cloud Transformation.

Show Me the Money

Unless you’re a start-up, most companies already have an ERP system in place, if not many ERP systems accumulated over time. That was the case in 1999 when Oracle set out to reduce multiple, disparate systems around the world to a single source of truth.

“By eliminating that duplication and consolidating systems, we were able to invest in our main business. When we started this effort, we spent $650 million a year on R&D. Now, we spend $5.6 billion. That is the goal with these ERP transformations. It’s critical to simplify and run the business in such a way that resources are released to invest in your main business.” – Safra Catz

Simplification, consolidation, and rationalization are just as relevant with the cloud. GE Digital has taken this advice to heart, replacing its fragmented ERP structure with ERP Cloud and investing those savings in innovation.

Don’t Forget the Human Element

Not every transformation is driven by cost savings. CIO Mark Sunday describes Oracle’s transformation into a cloud-first company as a way to engage with customers and employees in a digital world. No matter where your cloud journey begins, the hardest part will be dealing with people’s resistance to change.

Safra offers this advice: “You have to provide employees with so many benefits in terms of improved productivity, cost savings, and better efficiencies, that on their own they begin to look for other opportunities to push the capabilities of the systems.”

HugePages speeds up Oracle login process on Linux

Bobby Durrett's DBA Blog - Thu, 2016-10-20 13:28

We bumped a Linux database up to a 12 gigabyte SGA and the login time went up to about 2.5 seconds. Then a Linux admin configured 12 gigabytes of HugePages to fit the SGA and login time went down to 0.13 seconds. Here is how I tested the login time. e.sql just has the exit command in it, so this logs in as SYSDBA and immediately exits:

$ time sqlplus / as sysdba < e.sql

... edited out for space ...

real    0m0.137s
user    0m0.007s
sys     0m0.020s

So, then the question came up about our databases with 3 gig SGAs without HugePages. So I tested one of them:

real    0m0.822s
user    0m0.014s
sys     0m0.007s

Same version of Oracle/Linux/etc. Seems like even with a 3 gig SGA the page table creation is adding more than half a second to the login time. No wonder they came up with HugePages for Linux!
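If you want to try this on a similar system, here is a minimal sizing sketch. The 12 GB SGA and the small page cushion are assumptions to adjust for your own instance; on x86_64 Linux the default HugePage size is 2 MB:

```shell
# Hypothetical sizing for a 12 GB SGA; the SGA size and the 8-page
# cushion are assumptions -- adjust them for your own instance.
SGA_MB=12288                            # 12 GB SGA expressed in MB
HUGEPAGE_MB=2                           # default x86_64 HugePage size
PAGES=$(( SGA_MB / HUGEPAGE_MB + 8 ))   # pages to reserve, with headroom
echo "vm.nr_hugepages = $PAGES"
# As root, persist the value in /etc/sysctl.conf, apply it with
# `sysctl -p`, and confirm with `grep HugePages /proc/meminfo`.
```

Since the kernel needs contiguous memory to reserve the pages, it is worth checking HugePages_Total in /proc/meminfo before restarting the instance, so the SGA actually lands in the reserved pages.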


Categories: DBA Blogs

Common Criteria and the Future of Security Evaluations

Oracle Security Team - Thu, 2016-10-20 12:08

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct but quite complementary approaches to security, and both should increasingly be of value to all customers. The good news is that many IT customers – whether of on-premises products or cloud services – are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance, the Common Criteria (CC) (ISO 15408), is at risk.

The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level. The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present). The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product.

Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie-cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance.

The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether it is developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work. (Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want. A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.)

Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes.

Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to-date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn't per-country testing or per-customer testing, because it is in nobody's interests and not feasible for vendors to do repeated one-off assurance fire-drills for multiple system integrators. Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon.

Demand for Higher Assurance

In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well-understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint and fewer, well-defined interfaces, and thus they lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs) – was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers - and the financial institutions who are heavy issuers of them - have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.”

The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software. “EAL-none”—the direction of new style community Protection Profiles (cPP)—hasn’t captured the imagination of the global marketplace for evaluated software in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honorable, realistic assessment of “new-style” cPPs would explain what the benefits are of the new approach and what the downsides are as part of making a case that “new is better than old.” Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, clearer picture, and better value for money.

Product Complexity and Evaluations

To the extent security evaluation methodology can be more precise and repeatable, that facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and have far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories.

For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation.

Relational databases, on the other hand, use structured query language (SQL) but that does not mean all SQL syntax in all commercial databases is identical, or that protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality.

As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.”

Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products,[1] or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them.

Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA) or negation of the ability to evaluate merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase and will explode if vendors are precluded from evaluating entire classes of products that are widely-used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, or the global marketplace for information technology.


One of the downsides of rapid, basic, vanilla evaluations is that they stifle the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want the new innovations vetted properly (via a CC evaluation).

Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X does. Add-ons can in theory be done via an extended package (EP) – if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address the threat. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off. Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for:

a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality)

b) the cPP to have been modified, and

c) all vendors to have evaluated against the new cPP (that includes the new security feature)

Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving.

Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire-drills for multiple system integrators?


The Common Criteria – and in particular, the Common Criteria recognition – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by:

1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product

2) enabling faster innovation in security and the ability to evaluate it via EPs

3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems)

4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive.

Creating Oracle Application Builder Cloud Service App Based on Oracle ADF Business Components

Shay Shmeltzer - Thu, 2016-10-20 11:29

Oracle Application Builder Cloud Service (ABCS for short) enables you (and your business users) to create rich web and mobile apps in a quick visual way from a browser with no coding required (though coding is possible).

The UI that ABCS creates is based on Oracle JET, which many of our customers love because of its responsiveness and lightness.

Some Oracle ADF customers have been on the hunt for a new client-side UI solution for their apps, and Oracle JET is certainly a technology that will work for those use cases.

A nice feature for Oracle ADF customers is that their data-access and business-service layer is built in a reusable way that is decoupled from the UI. And now, with the ability to expose ADF Business Components as REST services, they can use any modern UI framework, including Oracle JET, to develop the UI. There are already many blog entries with code samples on how to write JET apps that connect to ADF Business Components.

But what if we could give you the simplicity of ABCS for the UI creation, the power of JET for the UI experience, and the ability to leverage your existing investment in Oracle ADF all without writing a single line of code manually?

Well, in the demo below I'll show you how you can reuse the logic you have in Oracle ADF Business Components and build a JET-based UI on top of them in a declarative way with Oracle Application Builder Cloud Service.

Basically you get the best of each tool, and you don't need to write a single line of code!


In the 9-minute demo I'll show you how to:

  • Create an ADF Business Components layer on top of Oracle Database in the Cloud - (0:00)
  • Expose the ADF Business Components as REST service - (1:45)
  • Deploy the REST service to Java Cloud Service (JCS) - (2:19)
  • Create an Oracle Application Builder Cloud Service application - (6:00)
  • Add an ADF BC REST Service as a data source to the app - (6:30)
  • Create the user interface to your application - (7:20)

(Times are indicated in case you want to skip sections you are already familiar with) 

If you are interested in a bit of background on why this is so simple, the answer is that ABCS was built to enable easy integration with Oracle SaaS, leveraging the REST services it exposes. To quickly build the full app with all the defaulting you are seeing in there (full CRUD with a simple drag and drop), ABCS needs to know some basic information about the data that it needs to render (primary key, data types, etc). Since Oracle SaaS is built on Oracle ADF, we built into ABCS the capability to analyze the describe that ADF BC REST services provide. This makes it dead simple to consume ADF REST services in ABCS, whether these services come from Oracle's apps - or your own ADF apps :-) 

As you can see there is a great synergy between Oracle ADF, Oracle Application Builder Cloud Service and Oracle JET. 

Want to try it on your own? Get a trial of Oracle Application Builder Cloud Service here

Categories: Development

While upgrading to 12c I faced error ORA-01830 / ORA-06512

Pythian Group - Thu, 2016-10-20 10:35

The other day I was running an upgrade to 12c for a client that is using ACLs (Access Control Lists). If you have been doing upgrades to 12c, you know that running catctl.pl -n 4 catupgrd.sql entails 73 steps. This upgrade failed at step 65 with the following error (I have trimmed the output for reading purposes):

Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/dev/product/12.1.0/lib; export LD_LIBRARY_PATH;/u01/dev/product/12.1.0/perl/bin/perl -I /u01/dev/product/12.1.0/rdbms/admin -I /u01/dev/product/12.1.0/rdbms/admin/../../sqlpatch /u01/dev/product/12.1.0/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > catupgrd_datapatch_upgrade.log 2> catupgrd_datapatch_upgrade.err
returned from sqlpatch
    Time: 80s
Serial   Phase #:66 Files: 1     Time: 71s
Serial   Phase #:67 Files: 1     Time: 1s
Serial   Phase #:68 Files: 1     Time: 0s
Serial   Phase #:69 Files: 1     Time: 20s

Grand Total Time: 4946s

catuppst.sql unable to run in Database: DEVSTAR Id: 0
        ERRORS FOUND: during upgrade CATCTL ERROR COUNT=5
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-01830: date format picture ends before converting entire input string ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 167
ORA-06512: at line 28
ORA-06512: at line 69
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 167 ORA-06512: at line 28
ORA-06512: at line 69
STATEMENT = [as above]
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at line 28 ORA-06512: at line 69
STATEMENT = [as above]
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at line 69]
STATEMENT = [as above]
Identifier ORDIM 16-09-25 12:28:53 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-20000: Oracle XML Database component not valid. Oracle XML Database must be installed and valid prior to Oracle Multimedia install, upgrade, downgrade, or patch.
ORA-06512: at line 3

And the worst part of it was that the upgrade also corrupted my database, which is a good point to stress: always have a good backup before attempting an upgrade.

Sun Sep 25 13:55:52 2016
Checker run found 59 new persistent data failures
Sun Sep 25 14:00:18 2016
Hex dump of (file 5, block 1) in trace file /u01/app/diag/rdbms/dev/dev/trace/de_ora_13476.trc
Corrupt block relative dba: 0x01400001 (file 5, block 1)
Bad header found during kcvxfh v8
Data in bad block:
 type: 0 format: 2 rdba: 0x01400001
 last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000001
 check value in block header: 0xa641
 computed block checksum: 0x0
Reading datafile '/u01/dev/oradata/dev/system_01.dbf' for corruption at rdba: 0x01400001 (file 5, block 1)
Reread (file 1, block 1) found same corrupt data (no logical check)

So what I had to do was restore my database to its pre-upgrade state, as I couldn't even do a flashback due to the corrupt block.

But to fix this error, I had to apply patch 20369415 to the 12c binaries before rerunning catupgrd.sql:

[oracle@dev 20369415]$ opatch lsinventory | grep 20369415
Patch  20369415     : applied on Sun Sep 25 14:49:59 CDT 2016

Once the patch was applied, I reran the upgrade, and this time it finished successfully:

Serial   Phase #:65      Files: 1     Time: 133s
Serial   Phase #:66      Files: 1     Time: 78s
Serial   Phase #:68      Files: 1     Time: 0s
Serial   Phase #:69      Files: 1     Time: 275s
Serial   Phase #:70      Files: 1     Time: 171s
Serial   Phase #:71      Files: 1     Time: 0s
Serial   Phase #:72      Files: 1     Time: 0s
Serial   Phase #:73      Files: 1     Time: 20s

Phases [0-73]         End Time:[2016_09_26 17:42:54]

Grand Total Time: 5352s

LOG FILES: (catupgrd*.log)
COMP_ID              COMP_NAME                                VERSION  STATUS
-------------------- ---------------------------------------- -------- ---------------
APEX                 Oracle Application Express      VALID
OWB                  OWB                             VALID
AMD                  OLAP Catalog                    OPTION OFF
SDO                  Spatial                         VALID
ORDIM                Oracle Multimedia               VALID
XDB                  Oracle XML Database             VALID
CONTEXT              Oracle Text                     VALID
OWM                  Oracle Workspace Manager        VALID
CATALOG              Oracle Database Catalog Views   VALID
CATPROC              Oracle Database Packages and Types VALID
JAVAVM               JServer JAVA Virtual Machine    VALID
XML                  Oracle XDK                      VALID
CATJAVA              Oracle Database Java Packages   VALID
APS                  OLAP Analytic Workspace         VALID
XOQ                  Oracle OLAP API                 VALID
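Even when the phases report success, it is worth scanning the catupgrd*.log files for leftover ORA- errors before trusting the component listing. A minimal sketch of that check follows; the sample log text is an illustrative stand-in for a real log file:

```shell
# Scan upgrade log text for ORA- errors, as you would with
# `grep 'ORA-' catupgrd*.log`; LOGTEXT stands in for a real log.
LOGTEXT='Serial   Phase #:73      Files: 1     Time: 20s
Grand Total Time: 5352s'
if printf '%s\n' "$LOGTEXT" | grep -q 'ORA-'; then
  STATUS="errors found"
else
  STATUS="clean"
fi
echo "$STATUS"
```

Against a real environment you would run the grep directly over the log files and only proceed once it comes back empty.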


This was a small post to make you aware that if you are using ACLs, you need to apply patch 20369415 to the 12c binaries before upgrading, so that you don't face a possible database corruption and a much harder time upgrading your database.

Note: This post was originally posted in rene-ace.com

Categories: DBA Blogs

