Feed aggregator

table partition exchange

Tom Kyte - Mon, 2018-02-19 08:46
Hi Team, we have a situation where we have two partitioned tables (range partitioned), for example tables A1 and A1_HISTORY. The application team wants to archive all partitions except the last 3, from the current production table to the archival table. ...
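The standard pattern for this kind of archival is a partition exchange through an interim table. A minimal, untested sketch: the table names A1 and A1_HISTORY come from the question, while the partition name P_2017_Q1 and the staging table A1_STAGE are made-up placeholders, and the CREATE TABLE ... FOR EXCHANGE clause assumes 12.2 or later.

```shell
# Generate the archival DDL for one partition; all object names are
# placeholders taken from the question, not a tested procedure.
cat > archive_partition.sql <<'EOF'
-- Interim heap table with a structure matching A1 (12.2+ syntax)
CREATE TABLE A1_STAGE FOR EXCHANGE WITH TABLE A1;
-- Swap the old partition's segment out of the production table
ALTER TABLE A1 EXCHANGE PARTITION P_2017_Q1 WITH TABLE A1_STAGE
  INCLUDING INDEXES WITHOUT VALIDATION;
-- Swap the same data into the matching history partition
ALTER TABLE A1_HISTORY EXCHANGE PARTITION P_2017_Q1 WITH TABLE A1_STAGE
  INCLUDING INDEXES WITHOUT VALIDATION;
-- The production partition is now empty and can be dropped
ALTER TABLE A1 DROP PARTITION P_2017_Q1;
EOF
grep -c 'EXCHANGE PARTITION' archive_partition.sql
```

Repeating this per old partition (exchange is a metadata-only swap of segments) avoids moving the rows at all, which is why it is the usual approach for such archival.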
Categories: DBA Blogs

Delay in row commit or Uncommitted rows by Application

Tom Kyte - Mon, 2018-02-19 08:46
Hi Tom, an Application (.Net) component is calling an SP and creates entries in a few tables, and sometimes there is a delay in committing these transactions by the .Net component, or a transaction is left uncommitted in the Oracle DB. We are aware of this issu...
Categories: DBA Blogs

Rebuilding all the unusable index

Tom Kyte - Mon, 2018-02-19 08:46
Hi Tom, I really need to identify all the failed or unusable indexes in all schemas in the database and also to rebuild them. Kindly provide a query for both. I'm using the SQL Developer tool on Oracle 12c.
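A hedged sketch of the kind of query asked for, generating the matching REBUILD statements from the dictionary views; the STATUS filters rely on the documented DBA_INDEXES and DBA_IND_PARTITIONS values, but review the generated statements before running them.

```shell
# Write a SQL*Plus script that lists unusable indexes across all schemas
# and emits the corresponding REBUILD statements (run as a DBA).
cat > rebuild_unusable.sql <<'EOF'
SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
-- Unusable top-level indexes
SELECT 'ALTER INDEX "'||owner||'"."'||index_name||'" REBUILD;'
FROM   dba_indexes WHERE status = 'UNUSABLE';
-- Unusable index partitions
SELECT 'ALTER INDEX "'||index_owner||'"."'||index_name||
       '" REBUILD PARTITION '||partition_name||';'
FROM   dba_ind_partitions WHERE status = 'UNUSABLE';
EOF
grep -c 'UNUSABLE' rebuild_unusable.sql
```

Spooling the output of this script gives a ready-to-run rebuild script; partitioned indexes must be rebuilt partition by partition, hence the second query.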
Categories: DBA Blogs

Oracle Statistics

Tom Kyte - Mon, 2018-02-19 08:46
I have an upcoming production migration I need help with. Table A has hundreds of millions of rows. The migration is supposed to update a column in table A, but instead of doing an UPDATE it creates table B (one-to-one with table A) and then does a ...
Categories: DBA Blogs

Spring into action with our new OBIEE 12c Systems Management & Security On Demand Training course

Rittman Mead Consulting - Mon, 2018-02-19 05:49

Rittman Mead are happy to release a new course to the On Demand Training platform.

The OBIEE 12c Systems Management & Security course is the essential learning tool for any developers or administrators who will be working on the maintenance & optimisation of their OBIEE platform.


View lessons and live demos from our experts on the following subjects:

  • What's new in OBIEE 12c
  • Starting & Stopping Services
  • Managing Metadata
  • System Preferences
  • Troubleshooting Issues
  • Caching
  • Usage Tracking
  • Baseline Validation Tool
  • Direct Database Request
  • Write Back
  • LDAP Users & Groups
  • Application Roles
  • Permissions

Get hands-on with the practical version of the course, which comes with an OBIEE 12c training environment and 9 lab exercises.

Rittman Mead will also be releasing a theory version of the course. This will not include the lab exercises but gives you each of the lessons and demos that you'd get as part of the practical course.

Course prices are as follows:

OBIEE 12c Systems Management & Security - PRACTICAL - $499

  • 30 days access to lessons & demos
  • 30 days access to OBIEE 12c training environment for lab exercises
  • 30 days access to Rittman Mead knowledge base for Q&A and lab support

OBIEE 12c Systems Management & Security - THEORY - $299

  • 30 days access to lessons & demos
  • 30 days access to Rittman Mead knowledge base for Q&A

To celebrate the changing of seasons we suggest you Spring into action with OBIEE 12c by receiving a 25% discount on both courses until 31st March 2018 using voucher code:


Access both courses and the rest of our catalog at learn.rittmanmead.com

Categories: BI & Warehousing

18c Read Only Oracle Home

Yann Neuhaus - Sun, 2018-02-18 13:49

This is the big new feature of Oracle 18c for database software installation, something that was needed for decades for the ease of software deployment. Piet de Visser raised this to Oracle a long time ago, and we were talking about it recently when discussing the new excitement of deploying software in Docker containers. Docker containers are by definition immutable images. You need a Read Only Oracle Home, with all the mutable files (configuration, logs, database) in an external volume. Then, to upgrade the software, you just open this volume with an image of the new database version.


In 12.2 you may have seen a ‘roohctl’ script in ORACLE_HOME/bin. The help explains that ‘rooh’ stands for Read-Only Oracle Home:

[oracle@VM122 ~]$ roohctl -help
Usage: roohctl [<flag>] [<command>]
Following are the possible flags:
Following are the possible commands:
-enable Enable Read-only Oracle Home
-disable Disable Read-only Oracle Home

Note that in 18c the help does not show ‘-disable’ even though it is accepted…
So in 12cR2 you were able to run ‘roohctl -enable’, but the only thing it did was change the Yes/No flag in orabasetab:

cat $ORACLE_HOME/install/orabasetab
#orabasetab file is used to track Oracle Home associated with Oracle Base

Oracle 18

Here is an Oracle 18 that I re-installed (as an Oracle Home Clone) with the following:

runInstaller -clone ORACLE_HOME=/u01/app/oracle/product/181 ORACLE_HOME_NAME=O181 ORACLE_BASE=/u00/app/oracle

My idea is to be able to easily differentiate the different paths (ORACLE_HOME under /u01 and ORACLE_BASE under /u00)

The $ORACLE_HOME/install/orabasetab records the ORACLE_HOME, ORACLE_BASE and ORACLE_HOME_NAME:

[oracle@VM181 18c]$ cat $ORACLE_HOME/install/orabasetab
#orabasetab file is used to track Oracle Home associated with Oracle Base

ORACLE_HOME: This may seem useless because this file is under ORACLE_HOME, so if you read it you are supposed to know the ORACLE_HOME. However, you may find it from different paths (symbolic links, /../.) and this is a good way to normalize it.

ORACLE_BASE: This will be used to externalize the mutable files outside of the ORACLE_HOME

ORACLE_HOME_NAME: is the name of Oracle Home that you provide when installing and you can find in the Oracle Inventory.

The last field is ‘N’ when the mutable files are under ORACLE_HOME and ‘Y’ when they are externalized to have an immutable Read Only Oracle Home.
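As a sketch only, the flag could be read in a script from a mock orabasetab built from the fields described above; the real file lives under $ORACLE_HOME/install and its exact format may differ slightly between versions.

```shell
# Mock orabasetab based on the fields described in the post:
# ORACLE_HOME:ORACLE_BASE:ORACLE_HOME_NAME:Y|N
cat > orabasetab <<'EOF'
#orabasetab file is used to track Oracle Home associated with Oracle Base
/u01/app/oracle/product/181:/u00/app/oracle:O181:Y
EOF
# Extract the read-only flag of the last non-comment entry
awk -F: '!/^#/ {flag=$NF} END {print flag}' orabasetab
```

With a ‘Y’ in the last field, the mutable files are externalized and the Home can be shipped read-only.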

We are not supposed to use this file directly. It is modified by runInstaller and roohctl, and it is read by orabasehome and orabaseconfig.

orabasehome and orabaseconfig

We have two new location names derived from the orabasetab content.

One is the ‘Oracle Base Config’ which is mostly there to find the configuration files (.ora, .dat) in the /dbs subdirectory. With Read Only Oracle Home, this is set to the ORACLE_BASE:

[oracle@VM181 18c]$ orabaseconfig

Most of the files in /dbs have the ORACLE_SID in their name, which is unique in the host, and this is why they can all go into the same directory. However, I would prefer a subdirectory per database. When you move a database from one system to another, it is easier to move a directory. You can do per-file symbolic links but be sure to maintain them as they may be re-created as files.

The other is the ‘Oracle Base Home’ which is mostly there for the /network subdirectory (with the SQL*Net configuration files, logs and traces) and the /assistants (DBCA templates) and /install ones. With Read Only Oracle Home, this goes to a /homes subdirectory of ORACLE_BASE:

[oracle@VM181 18c]$ orabasehome

As you see, there is an additional subdirectory with the name of the Oracle Home. In my opinion, it is not a good idea to put sqlnet.ora, tnsnames.ora and listener.ora here. It is better to have one common TNS_ADMIN. However, because the default was one directory per Oracle Home, the Read Only Oracle Home feature had to keep this possibility. In 12.2 an ORACLE_HOME/env.ora was introduced to set TNS_ADMIN in a consistent way.

With Read Only Oracle Home enabled, I strace-ed a ‘startup’ to show which files are read:

[oracle@VM181 18c]$ ORACLE_SID=CDB18 strace -e trace=file -f sqlplus / as sysdba <<<startup 2>&1 | grep /u00
open("/u00/app/oracle/homes/O181/network/admin/oraaccess.xml", O_RDONLY) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/oraaccess.xml", F_OK) = -1 ENOENT (No such file or directory)
open("/u00/app/oracle/homes/O181/network/admin/oraaccess.xml", O_RDONLY) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/intchg.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/tnsnav.ora", F_OK) = -1 ENOENT (No such file or directory)
open("/u00/app/oracle/dbs/cm_CDB18.dat", O_RDONLY|O_SYNC) = -1 ENOENT (No such file or directory)
[pid 15339] access("/u00/app/oracle/homes/O181/network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
[pid 15339] stat("/u00/app/oracle/dbs/spfileCDB18.ora", 0x7ffe6a5785b8) = -1 ENOENT (No such file or directory)
[pid 15339] open("/u00/app/oracle/dbs", O_RDONLY) = 8
[pid 15339] stat("/u00/app/oracle/dbs/spfileCDB18.ora", 0x7ffe6a578010) = -1 ENOENT (No such file or directory)
[pid 15339] stat("/u00/app/oracle/homes/O181/dbs/spfile.ora", 0x7ffe6a5785b8) = -1 ENOENT (No such file or directory)
[pid 15339] open("/u00/app/oracle/homes/O181/dbs", O_RDONLY) = 8
[pid 15339] stat("/u00/app/oracle/homes/O181/dbs/spfile.ora", 0x7ffe6a578010) = -1 ENOENT (No such file or directory)
[pid 15339] access("/u00/app/oracle/dbs/initCDB18.ora", F_OK) = -1 ENOENT (No such file or directory)
[pid 15339] open("/u00/app/oracle/dbs/initCDB18.ora", O_RDONLY) = -1 ENOENT (No such file or directory)
LRM-00109: could not open parameter file '/u00/app/oracle/dbs/initCDB18.ora'

The files were not there as I’ve not created any database here. The goal is to show that there is no attempt to read any configuration file under ORACLE_HOME.

You can also see that DBCA will search for templates in this new directory:


I mentioned the network and assistants subdirectories above. But it concerns all directories where the instance can write files:

[oracle@VM181 18c]$ du $ORACLE_BASE/homes
4 /u01/app/oracle/homes/O181/assistants/dbca/templates
8 /u01/app/oracle/homes/O181/assistants/dbca
12 /u01/app/oracle/homes/O181/assistants
4 /u01/app/oracle/homes/O181/network/trace
4 /u01/app/oracle/homes/O181/network/admin
4 /u01/app/oracle/homes/O181/network/log
16 /u01/app/oracle/homes/O181/network
4 /u01/app/oracle/homes/O181/dbs
4 /u01/app/oracle/homes/O181/install
64 /u01/app/oracle/homes/O181/rdbms/log
72 /u01/app/oracle/homes/O181/rdbms/audit
140 /u01/app/oracle/homes/O181/rdbms
180 /u01/app/oracle/homes/O181
184 /u01/app/oracle/homes

You may wonder why we see a /dbs subdirectory here as the instance configuration files are in the common /u01/app/oracle/dbs. The /dbs is also the current working directory for oracle processes. And this one will be set to ORACLE_BASE/homes/oracle_home_name/dbs.

We can also see /rdbms/log here. I opened a bug 2 years ago about SBTIO.LOG not going to the right place under ADR_HOME, but to ORACLE_HOME/rdbms/log (Bug 23559013 USER_DUMP_DEST VALUE NOT IGNORED EVEN THOUGH DIAGNOSTIC_DEST IS SET). I have no idea about the status of the bug, but at least this file will not go to the Oracle Home anymore. Even if you don't really need a Read Only Oracle Home, this feature is a good way to ensure that the Oracle Home will not grow and fill the filesystem.

Enable Read Only Oracle Home

You enable this feature with ‘roohctl -enable’ after software installation and before any creation of databases or listeners:

[oracle@VM181 18c]$ roohctl -enable
Enabling Read-Only Oracle home.
Update orabasetab file to enable Read-Only Oracle home.
Orabasetab file has been updated successfully.
Create bootstrap directories for Read-Only Oracle home.
Bootstrap directories have been created successfully.
Bootstrap files have been processed successfully.
Read-Only Oracle home has been enabled successfully.
Check the log file /u01/app/oracle/cfgtoollogs/roohctl/roohctl-180217PM111551.log.

If the utility finds an existing database or listener related to this Oracle Home, it returns this kind of error:

Cannot enable Read-Only Oracle home in a configured Oracle home.
The Oracle Home is configured with databases 'CDB18'.
The Oracle Home is configured with listeners 'LISTENER'.

There is an undocumented ‘-force’ parameter for ‘roohctl -enable’ which proceeds anyway, but it will not move the configuration files.

I have not tested all possibilities because the General Availability of 18c is currently limited to Exadata and Oracle Public Cloud. But it seems that this roohctl will work the same on Windows (with /database instead of /dbs and with registry settings instead of orabasetab) and with Grid Infrastructure (there’s a -nodeList argument).

I mentioned above that the ORACLE_HOME/install goes to $(orabasehome)/install. I don’t know which files go there when ROOH is enabled. The orabasetab remains under ORACLE_HOME, of course. And some logs, such as re-running root.sh, still go to ORACLE_HOME/install:

[oracle@VM181 ~]$ sudo $ORACLE_HOME/root.sh
Check /u01/app/oracle/product/181/install/root_VM181_2018-02-18_19-06-23-833474515.log for the output of root script

This looks strange, but remember that the idea of a Read Only Oracle Home is to ship it once all changes are done. If you have something to change (patch, re-link, …), that goes into another Oracle Home, maybe cloned from the first one and made read-only after the changes.


Do you use the question mark as a shortcut to ORACLE_HOME? This does not change and remains the ORACLE_HOME:

[oracle@VM181 ~]$ sqlplus / as sysdba
SQL*Plus: Release Production on Sun Feb 18 20:26:33 2018
Copyright (c) 1982, 2017, Oracle. All rights reserved.
Connected to an idle instance.
SQL> start ?
SP2-0310: unable to open file "/u01/app/oracle/product/181.sql"
SQL> exit

This is ok as I mostly use it to read files from the software distribution (such as ?/rdbms/admin/awrrpt)

If you use it in database configuration files, then be careful. Here I have enabled ROOH and defined a pfile mentioning the spfile with the ‘?’ shortcut:

[oracle@VM181 ~]$ orabaseconfig
[oracle@VM181 ~]$ cat $(orabaseconfig)/dbs/init$ORACLE_SID.ora

However, the ‘?’ is resolved to ORACLE_HOME and not Oracle Base Config:

[oracle@VM181 ~]$ strace -f sqlplus / as sysdba <<<startup 2>&1 | grep xxx
[pid 1898] read(10, "spfile=?/dbs/xxx\n", 4096) = 17
[pid 1898] stat("/u01/app/oracle/product/181/dbs/xxx", 0x7ffc5ac1c758) = -1 ENOENT (No such file or directory)
[pid 1898] stat("/u01/app/oracle/product/181/dbs/xxx", 0x7ffc5ac1c1b0) = -1 ENOENT (No such file or directory)
ORA-01565: error in identifying file '?/dbs/xxx'
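The expansion can be illustrated without a database; this is only a bash simulation of what the server does with the leading ‘?’, not Oracle code.

```shell
# '?' at the start of a parameter-file path is replaced by ORACLE_HOME,
# and only by ORACLE_HOME, never by the Oracle Base Config location.
ORACLE_HOME=/u01/app/oracle/product/181
p='?/dbs/xxx'
echo "${p/#\?/$ORACLE_HOME}"
# prints /u01/app/oracle/product/181/dbs/xxx
```

So a ‘spfile=?/dbs/…’ reference keeps pointing into the (read-only) software tree, which is exactly the trap shown in the strace above.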

So what?

Having a read-only Oracle Home, or at least being sure that no files are written into it, is a good idea: easier to manage space, easier to deploy by cloning the Oracle Home, and good practice to separate the big software directory from the small configuration files, and to keep the current working directory outside of it. Having configuration files in the same place as the software has always been a bad idea (and reminds me of the .ini files in the C:\WIN directory a long time ago). So, even if it is not enabled by default, Read Only Oracle Home is the way to go.

I think the risks are very limited once tested, as it only changes directories, and any problem can be worked around with symbolic links on the directories. However, this may change some habits and scripts, and not finding the right configuration file in a stressful situation can be annoying.

So, don’t wait: even in 12c you can change your habits and replace all references to ${ORACLE_HOME}/dbs with $(orabaseconfig)/dbs, and other ${ORACLE_HOME} references with $(orabasehome). In 12c they still resolve to the same ORACLE_HOME, and then you will be ready to enable ROOH in 18c.
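A possible transition pattern for scripts, sketched here with a hypothetical dbs_dir helper of my own: use $(orabaseconfig) when the utility exists, and fall back to ${ORACLE_HOME}/dbs on older homes.

```shell
# Hypothetical wrapper: prefer the 18c orabaseconfig location when the
# utility is installed, otherwise fall back to the classic ORACLE_HOME/dbs.
dbs_dir() {
  if command -v orabaseconfig >/dev/null 2>&1; then
    echo "$(orabaseconfig)/dbs"
  else
    echo "${ORACLE_HOME}/dbs"
  fi
}
ORACLE_HOME=/u01/app/oracle/product/181
dbs_dir
```

Scripts written against such a wrapper keep working unchanged whether ROOH is enabled or not.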


Cet article 18c Read Only Oracle Home est apparu en premier sur Blog dbi services.

Firefox tab crashes when loading maps.google.com on Ubuntu 16.04

Dietrich Schroff - Sun, 2018-02-18 10:21
On an Ubuntu 16.04 system Firefox crashes when loading maps.google.com:

The only way to fix this was to go to "about:config":

On this page, search for "webgl.disabled" and change the value from false to true:

After that, maps.google.com loads without any problem...
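For what it's worth, the same preference can also be set from the command line via a user.js file in the Firefox profile, which Firefox applies over about:config on startup; the profile path below is illustrative, not a real one.

```shell
# Append the pref to a Firefox profile's user.js; Firefox reads user.js
# at startup and applies it to about:config. Profile path is illustrative,
# the real one looks like ~/.mozilla/firefox/<xxxx>.default
PROFILE=./example.default
mkdir -p "$PROFILE"
echo 'user_pref("webgl.disabled", true);' >> "$PROFILE/user.js"
cat "$PROFILE/user.js"
```

This is handy when you have to apply the workaround on several machines.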

My personal journey with Linux on the desktop

Yann Neuhaus - Sun, 2018-02-18 10:19

My Linux experience started back in 1999 when we needed a router for our ISDN connection in our shared apartment when we were students. I don't remember the hardware we used for that, but I am sure at that time it was a SUSE operating system. Not sure about the version, but based on this it must have been version 4 or 5 something. The KDE desktop environment looked like this in version 1, which was released July 12, 1998.


At the company I worked for at that time we had no choice and Windows was the only platform available. When I moved to the next company in 2002 it was the same situation: having all the infrastructure based on Microsoft products (Active Directory, Outlook, Office …) it was nearly impossible to switch to Linux on the desktop. So my Linux experience focused on server platforms (SUSE as well at that time), mainly in combination with Oracle databases. It was then that I had my first experiences with Oracle RAC 9.x on SUSE. Believe it or not: SUSE already had an rpm which installed all the dependencies you needed for Oracle back in the Oracle 9i days. As far as I know, Oracle came up with that for Oracle Linux years later. I did some experiments with Linux on my personal workstation, but because of the non-availability of games, and because everyone used Windows, it was not more than playing around.

Things started to change when I moved on to the next company in 2007. Everything was based on Microsoft as well, but we had several terminal servers mainly used for VPN, and that opened the door for Linux on the desktop. As the terminal servers had the complete Microsoft Office package installed, I could use them for all the Word, Excel and Outlook stuff and use Linux on my notebook. The only bits I had to figure out were:

  • What software to use for remote connections to Windows machines (rdp)?
  • Can I copy paste between the terminal session and my Linux workstation? (especially for pictures pasted into Outlook)
  • Can I share a common drive for exchanging files between the terminal server and my workstation?

What worked really well was freerdp. Using an alias like this:

alias termserver='nohup xfreerdp --plugin rdpdr --data disk:dwetemp:/home/dwe/Downloads -- --plugin cliprdr -z -x l -g 1280x1024 -u dwe -p Rev0luti0n -k 0x00000807 &'

… I could easily connect to the terminal server, use the clipboard to copy/paste into and out of the terminal server session, and have a shared drive for exchanging files. For all the other work at that time (we had site-to-site VPN connections to our customers) I could use my Linux desktop. All the Oracle stuff was mainly based on Solaris 9 and 10, so defining aliases for all the connections I required and exchanging my public ssh key brought a great speed-up compared to my colleagues who worked with putty on Windows. Later on, all the Solaris machines were replaced with RedHat, but that did not change my way of working.

Maybe the biggest issue was to find the distribution that worked for me. I tried openSUSE, Fedora and pure Debian. Nothing made me really happy, as I didn't want to compile software from source or add additional software repositories just to have the basic things working: music, video, Flash and so on. For a desktop, things have to work out of the box, at least for me. I never really liked Ubuntu, but I think this was mainly because of the Unity desktop. I know you can have Ubuntu with various other desktops, but somehow I did not consider them. Then I came across Linux Mint (Maya, 2012) and I must say this is one of the best distributions out there. Everything just worked out of the box and the Cinnamon desktop is really great:


I was really happy with that for the next one or two years, until I re-installed Linux Mint, this time with the KDE desktop, maybe just to see how it had evolved over time. There were big discussions when the KDE project switched from version three:


… to version four:

… and I really was interested in how the product looked and how it feels. Since then, KDE is what I really love and use every day. It comes with a powerful editor called Kate, and that is essential for me; beside vim this is one of the best editors I've ever used. But then the Linux Mint project decided to drop the KDE edition, and again I had to search for a stable KDE distribution. I tried Kubuntu, Manjaro and the KDE spin of Fedora, and just a few weeks ago I gave GNOME a try with Fedora Workstation. Finally I switched to KDE neon and I am quite happy with it. This is how my desktop looks today:


Not much on it, you might think, but I feel there is no need for more. On KDE you can do almost anything with the keyboard, and all I need is KRunner: ALT-F2 brings it up by default, and from there you can start whatever you want, no need to use the mouse for that. Define a bunch of custom keyboard shortcuts and all the aliases required, configure Dolphin the way I like it (especially the Places panel), install terminator, and the desktop is ready. For running Microsoft Office, CrossOver Linux works fine for me.

Almost all the applications required are available for Linux today, but there are still a few trade-offs. I still need a Windows VM for some stuff (mainly VPN clients to connect to customers, which are only available for Windows). But the most important point is that the company you work for has an environment in which you can work with Linux. As more and more stuff is web based today this is becoming easier and easier, but you still mostly cannot use Linux in big companies as it is simply not supported. Some companies switch to Apple products, but I never really considered that for myself. What I recommend from my personal experience: use something that is either Debian based (Ubuntu, Mint, …) or rpm based (RedHat, CentOS, SUSE, …). A lot of software is available as rpm or deb, but nothing else. For a desktop, things must just work out of the box.

The good thing with Linux and all the desktops available on top of it: you have plenty of products to choose from. Maybe it takes years until you find the right one for you, but I am sure there is something which fits your needs.


Cet article My personal journey with Linux on the desktop est apparu en premier sur Blog dbi services.

Oracle ERP DB Exadata migration + AC-50480: Internal error occurred + ORA-12154

Syed Jaffar - Sun, 2018-02-18 06:24
I was recently involved in a project to migrate an Oracle ERP database to an Exadata server. The ERP database was a non-RAC Oracle 12cR1 running on RHEL with Oracle E-Business Suite 12.1.3.

The migration involved only moving the ERP database from traditional storage/server technology to the Exadata machine and upgrading from a non-RAC database to RAC; the application remained on the same host. Though similar migration projects had been done successfully earlier, this time I faced a tough challenge resolving autoconfig issues. Here are the details of the issue and how I resolved the problem.

Downtime was not an issue, hence I opted for an RMAN backup and restore approach. In a nutshell, the following was done:

  • Prepared the Oracle home for ERP on Exadata and applied all recommended patches on the home.
  • Performed all mandatory steps required for ERP Oracle Home.
  • After a graceful shutdown of the application tier, the ERP database was stopped and started in MOUNT state to proceed with an RMAN full backup (this is the approach I used, though many different approaches are possible).
  • Copied the files to the target (Exadata system) and completed the recovery procedure.
  • Through a manual approach, converted the non-RAC database to RAC mode and completed the post-migration steps.
It is mandatory to run the catbundle script after the database migration to avoid issues like a blank login page and problems changing passwords. Some of you might differ with my advice, but I faced this at multiple clients, so I decided to make this a standard practice right after migration.
  • Autoconfig on the database nodes was successful. However, when autoconfig was executed on the application tier, it completed with warnings. The autoconfig.log has the following errors:
Updating s_tnsmode to 'generateTNS'
UpdateContext exited with status: 0
AC-50480: Internal error occurred: java.lang.Exception: Error while generating listener.ora.
Error generating tnsnames.ora from the database, temporary tnsnames.ora will be generated using templates
Instantiating Tools tnsnames.ora
Tools tnsnames.ora instantiated
Web tnsnames.ora instantiated

The NetServiceHandler.log reported the following error:

SQLPLUS Executable : /oracle/PROD/apps/tech_st/10.1.2/bin/sqlplus

SQL*Plus: Release - Production on Thu Feb 15 21:27:02 2018

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter value for 1: Enter value for 2: ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

Unable to generate listener.ora from database
Using default listener.ora file

 After googling for a solution, the common workaround was the following:
  1. Clean up the entries from fnd_nodes using EXEC FND_CONC_CLONE.SETUP_CLEAN;
  2. Run autoconfig on the DB tier and then on the App tier
 Unfortunately this didn't help in our case.

When I looked at the hostnames in FND_NODES for the database, the hostname registered was the management hostname (Exadata has management and public IPs). Though the client had opened ports for the public hostname, they had not opened the firewall for the management network. Though we have multiple IPs (management and public), the DB server can only take one hostname. So, if you are on Exadata, you need to check which hostname is registered on the server, despite the different IPs.
After opening the firewall for the LISTENER ports on the management IP, autoconfig on the App tier ran successfully and we managed to connect to the application.

The bottom line is: when you run ERP databases on Exadata, you need to watch out for the hostnames and for the ports opened against the IPs from the application server.

Shredding and Querying with Oracle Offline Persistence in JET

Andrejus Baranovski - Sat, 2018-02-17 14:44
I think the offline functionality topic will become a trend in the future. It's great that Oracle already provides a solution for offline: the Oracle Offline Persistence Toolkit. This is my second post related to offline support; read the previous post first: Oracle Offline Persistence Toolkit - Simple GET Response Example with JET. There I tested and explained with a sample app how a simple GET response is handled offline. Today I would like to go one step further and check how to filter offline data: shredding and querying offline.

The sample app fetches a list of employees with the Get Employees button. It shows the online/offline status (see the icon in the top right corner). We are online, and the GET response was cached by the persistence toolkit:

We can test offline behaviour easily through Chrome Developer Tools: turn on Offline mode. Btw, take a look at the Initiator field for the GET request; it comes from the Oracle Offline Persistence Toolkit. As I mentioned in my previous post, once the persistence toolkit is enabled, all REST calls go through the toolkit; this is how it is able to cache response data:

While offline, click the Get Employees button: you should see data returned from the cache. Did you notice that the icon in the top right corner changed to indicate we are offline:

Ok, now we will see how the shredding mechanism works (read more about it on GitHub). While offline, we can search a subset of the cached data. Search By Name does exactly that; here it gets the entry for Lex from the cache:

Switch online and call the same action again, but with a different name: the REST call will be invoked against the back-end server as expected. Again it is transparent to the JET developer; no need to worry whether the app state is online or offline, the same REST request is done in both cases:

Let's take a quick look at the implementation part (the complete example is available in my GitHub repository).

The online/offline status icon is controlled by an observable variable:

It is very simple to determine the online/offline state. We need to add event listeners for online/offline and reset the observable variable accordingly:

The persistence toolkit supports Simple and Oracle shredder/query handlers. I'm using ADF BC REST for the backend, so my choice is oracleRestJsonShredding and oracleRestQueryHandler. The Oracle shredder understands the REST structure returned by ADF BC REST. The Oracle query handler supports ADF BC REST filtering parameters for offline filtering; this allows using the same query format both online and offline. I was happy to read that the Oracle query handler explicitly supports ADF BC REST (queryHandlers):

Same REST call with filtering is executed online and offline:

What you can do when your Veritas cluster shows interfaces as down

Yann Neuhaus - Sat, 2018-02-17 07:40

Recently we had a situation where the Veritas cluster (InfoScale 7.3) showed interfaces as down on two RedHat 7.3 nodes. This can happen, e.g., when you change hardware. Although all service groups were up and running, this is a situation you usually want to avoid, as you never know what happens when the cluster is in such a state. When you have something like this:

[root@xxxxx-node1 ~]$ lltstat -nvv | head
LLT node information:
Node State Link Status Address
  * 0 xxxxx-node1 OPEN
      eth3 UP yy:yy:yy:yy:yy:yy
      eth1 UP xx:xx:xx:xx:xx:xx
      bond0 UP rr:rr:rr:rr:rr:rr
    1 xxxxx-node2 OPEN
      eth3 UP ee:ee:ee:ee:ee:ee
      eth1 DOWN tt:tt:tt:tt:tt:tt
      bond0 DOWN qq:qq:qq:qq:qq:qq

… what can you do?

In our configuration eth1 and eth3 are used for the interconnect and bond0 is the public network. As you can see above, eth1 and bond0 are reported as down for the second node. Of course, the first check you need to do is the interface status at the operating system level, but that was fine in our case.

Veritas comes with a tiny utility (dlpiping) you can use to check connectivity at the Veritas level. Using the information from the lltstat command, start dlpiping in “send” mode on the first node:

[root@xxxxx-node1 ~]$ /opt/VRTSllt/dlpiping -vs eth1

While that is running (it will not detach from the terminal), start dlpiping in “receive” mode on the second node:

[root@xxxxx-node2 ~]$ /opt/VRTSllt/dlpiping -vc eth1 xx:xx:xx:xx:xx:xx
using packet size = 78
dlpiping: sent a request to xx:xx:xx:xx:xx:xx
dlpiping: received a packet from xx:xx:xx:xx:xx:xx

This confirms that connectivity is fine for eth1. When you repeat that for the remaining interfaces (eth3 and bond0) and all is fine, then you can proceed. If not, then you have a different issue from what we faced.

The next step is to freeze all your service groups so the cluster will not touch them:

[root@xxxxx-node1 ~]$ haconf -makerw
[root@xxxxx-node1 ~]$ hagrp -freeze SERVICE_GROUP -persistent # do that for all service groups defined in the cluster
[root@xxxxx-node1 ~]$ haconf -dump -makero

Now the magic:

[root@xxxxx-node1 ~]$ hastop -all -force 

Why magic? This command stops the cluster stack on all nodes BUT leaves all the resources running, so you can do this without shutting down any user-defined cluster services (Oracle databases in our case). Once the stack is down on all nodes, stop gab and llt on both nodes as well:

[root@xxxxx-node1 ~]$ systemctl stop gab
[root@xxxxx-node1 ~]$ systemctl stop llt

Having stopped llt and gab you just need to start them again in the correct order on both systems:

[root@xxxxx-node1 ~]$ systemctl start llt
[root@xxxxx-node1 ~]$ systemctl start gab

… and after that start the cluster:

[root@xxxxx-node1 ~]$ systemctl start vcs

In our case that was enough to make llt work as expected again, and the cluster is fine:

[root@xxxxx-node1 ~]$ gabconfig -a
GAB Port Memberships
Port a gen f44203 membership 01
Port h gen f44204 membership 01
[root@xxxxx-node1 ~]#

[root@xxxxx-node1 ~]$ lltstat -nvv | head
LLT node information:
   Node State Link Status Address
    * 0 xxxxx-node1 OPEN
      eth3 UP yy:yy:yy:yy:yy:yy
      eth1 UP xx:xx:xx:xx:xx:xx
      bond0 UP rr:rr:rr:rr:rr:rr
    1 xxxxx-node2 OPEN
      eth3 UP ee:ee:ee:ee:ee:ee
      eth1 UP qq:qq:qq:qq:qq:qq
      bond0 UP tt:tt:tt:tt:tt:tt 

Hope that helps …


The post What you can do when your Veritas cluster shows interfaces as down appeared first on Blog dbi services.

CPUs: Cores versus Threads on an Oracle Server

Yann Neuhaus - Sat, 2018-02-17 06:49

When doing a performance review I often talk with the DBA about the CPU utilization of the server. How reliable is the server CPU utilization reported by tools like top, or the host CPU utilization in the AWR report? E.g. on a Linux Intel x86-64 server with 8 cores and 16 logical CPUs (Intel Hyperthreading), what does a utilization of 50% mean?
As I had an ODA X7-M available in a test lab, I thought I'd run some tests on it.

In my old days at Oracle Support we used a small script to test the CPU single thread performance of an Oracle DB-server:

set echo on
set linesize 120
set timing on time on
with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 60 )
select /*+ ALL_ROWS */ count(*) from t,t,t,t,t;

The SQL just burns a CPU core for around 20 seconds. Depending on your CPU's single-thread performance it may take a bit longer or complete faster.

On the ODA X7-M I have 16 cores enabled, and as hyperthreading is enabled I get 32 CPUs in /proc/cpuinfo:

oracle@dbi-oda01:/home/oracle/cbleile/ [CBL122] grep processor /proc/cpuinfo | wc -l
32
oracle@dbi-oda01:/home/oracle/cbleile/ [CBL122] lscpu | egrep "Thread|Core|Socket|Model name"
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
Model name: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz

The CPU speed was at 2.3 GHz all the time:

[root@dbi-oda01 ~]# for a in `ls -l /sys/devices/system/cpu/cpu*/cpufreq | grep cpufreq | cut -d "/" -f6 | cut -d "u" -f2`; do echo "scale=3;`cat /sys/devices/system/cpu/cpu${a}/cpufreq/cpuinfo_cur_freq`/1000000" | bc; done

The CPU is capable of running at up to 3.7 GHz, but that did not happen on my machine.

Running my SQL-script on the ODA X7-M actually took 17.49 seconds:

18:44:00 SQL> with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 60 )
18:44:00 2 select /*+ ALL_ROWS */ count(*) from t,t,t,t,t
18:44:00 3 /
Elapsed: 00:00:17.49

I continued with the following tests (a job means running the SQL script above):
– 1 Job alone
– 2 Jobs concurrently
– 4 Jobs concurrently
– 8 Jobs concurrently
– 16 Jobs concurrently
– 24 Jobs concurrently
– 32 Jobs concurrently
– 40 Jobs concurrently
– 50 Jobs concurrently
– 60 Jobs concurrently
– 64 Jobs concurrently
– 128 Jobs concurrently
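The concurrent runs above can be driven with a small shell helper. This is only a sketch with the actual workload left as a parameter; in my case each job was a sqlplus session running the burn script.

```shell
# Launch N copies of a command in the background, print each job's
# wall-clock elapsed time, and wait for all of them to finish.
run_jobs() {
    n=$1; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        (
            t0=$(date +%s)
            "$@" >/dev/null 2>&1
            t1=$(date +%s)
            echo "job $i: $((t1 - t0))s"
        ) &
        i=$((i + 1))
    done
    wait
}

# e.g. (hypothetical credentials/script name):
# run_jobs 16 sqlplus -s user/pass @burn_cpu.sql
```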

Here are the results:

Jobs  Min Time  Max Time  Avg Time  Jobs/Cores  Jobs/Threads  Avg/Single-Time  Thread utilization
   1     17.49     17.49     17.49        0.06          0.03             1.00                1.00
   2     17.51     17.58     17.55        0.13          0.06             1.00                1.00
   4     17.47     17.86     17.62        0.25          0.13             1.01                0.99
   8     17.47     17.66     17.55        0.50          0.25             1.00                1.00
  16     17.64     21.65     18.50        1.00          0.50             1.06                0.95
  24     18.00     27.38     24.20        1.50          0.75             1.38                0.72
  32     32.65     34.57     33.21        2.00          1.00             1.90                0.53
  40     34.76     42.74     40.31        2.50          1.25             2.30                0.54
  50     48.26     52.64     51.21        3.13          1.56             2.93                0.53
  60     52.40     63.60     60.63        3.75          1.88             3.47                0.54
  64     54.20     68.40     64.27        4.00          2.00             3.67                0.54
 128    119.49    134.34    129.01        8.00          4.00             7.38                0.54

When running with 16 jobs, top showed a utilization of around 50-52%. However, running more than 16 jobs increased the average time a job takes, i.e. with 16 jobs the 16-core server is already almost fully utilized. Running with 32 jobs results in an average elapsed time 1.9 times that of running 16 jobs (or fewer) concurrently. As it is 1.9 times and not 2 times, I can conclude that there is an advantage to running with hyperthreading enabled, but it is only around 5-10%.

So when calculating the utilization of your server, base it on the number of cores and not on the number of threads. When looking at the host CPU utilization in top or in the AWR report on a hyperthreading-enabled server, it is a good idea to multiply the reported utilization by 1.9 (capped at 100%).
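As a quick back-of-the-envelope helper, that scaling can be put into a tiny function (hypothetical; the factor 1.9 is simply the one measured above, and the result is capped at 100%):

```shell
# Estimate core-based utilization from the thread-based percentage
# reported by top/AWR on a box with 2 hardware threads per core.
core_util() {
    awk -v u="$1" 'BEGIN {
        c = u * 1.9                 # measured scaling factor
        if (c > 100) c = 100        # cannot be busier than 100%
        printf "%.0f\n", c
    }'
}

core_util 50   # a reported 50% means the cores are roughly 95% busy
```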


The post CPUs: Cores versus Threads on an Oracle Server appeared first on Blog dbi services.

Oracle Code is back – Bigger and Better!

OTN TechBlog - Fri, 2018-02-16 16:24

2018 is yet another great year for developers! Oracle’s awesome global developer conference series, Oracle Code, is back – and it’s bigger and better!

In 2017 Oracle ran the first series of Oracle Code developer conferences. Across more than 20 cities around the globe, the series attracted over 10,000 developers, giving them the opportunity to learn new skills, network with peers and take home some great memories. Following that huge success, Oracle is about to run another 14 events worldwide, kicking off in late February in Los Angeles. The great thing about Oracle Code: attending and speaking at the conferences is completely free of charge, showing Oracle holding true to its commitment to the developer communities out there. Across four continents you will get to hear about everything that is hot in the industry: Blockchain, Containers, Microservices, API Design, Machine Learning, AI, Mobile, Chatbots, Databases, Low Code Development, trendy programming languages, CI/CD, DevOps and much, much more, right at the center of Oracle Code.

Throughout the one-day events, which provide space for 500 people each, developers can share their experiences, participate in hands-on labs, talk to subject-matter experts and, most importantly, have a lot of fun in the Oracle Code Lounge.

IoT Cloud Brewed Beer

Got a few minutes to try the IoT Cloud Brewed Beer from a local microbrewery? Extend manufacturing processes and logistics operations quickly using data from connected devices. Tech behind the brew: IoT Production Monitoring, IoT Asset Monitoring, Big Data, Event Hub, Oracle JET.

3D Builder Playground

Create your own sculptures and furniture with the 3D printer and help complete the furniture created using a Java constructive geometry library. The Oracle technology used is Application Container Cloud running a visual IDE, and Java SE running the JSCG library.

Oracle Zip Labs Challenge

Want some bragging rights and to win prizes at the same time? Sign up for a 15-minute lab on Oracle Cloud content and see your name on the leaderboard as the person to beat in Oracle Zip Labs Challenge.

IoT Workshop

Interact and exchange ideas with other attendees at the IoT Workshop spaces. Get your own Wi-Fi microcontroller and connect to Oracle IoT Cloud Service. Oracle Developer Community is partnering with AppsLab and the Oracle Applications Cloud User Experience emerging technologies team to make these workshops happen.

Robots Rule with Cloud Chatbot Robot

Ask NAO the robot to do Tai Chi, or ask it "who brewed the beer?" So how does NAO do what it does? It uses the Intelligent Bot API on Oracle Mobile Cloud Service to understand your command and responds by speaking back to you.

Dev Live

The Oracle Code crew also thought of the folks who aren't lucky enough to attend Oracle Code in person: Dev Live features live interviews at Oracle Code that are streamed online across the globe, so that everyone can watch developers and community members share their experiences.

Register NOW!

Register now for an Oracle Code event near you at: https://developer.oracle.com/code

Have something interesting that you did and want to share it with the world? Submit a proposal in the Call for Papers at: https://developer.oracle.com/code/cfp

See you soon, at Oracle Code!

Oracle Adaptive Intelligent Applications for ERP

OracleApps Epicenter - Fri, 2018-02-16 11:51
Recently Oracle announced new AI-based apps for finance leaders, to empower CFOs with data-driven insights to adapt to change, develop new markets and increase profitability. With Oracle Adaptive Intelligent Applications for ERP, finance leaders can benefit from: Better insight: applying analytics and artificial intelligence to finance can enhance efficiency and increase agility […]
Categories: APPS Blogs

Duplex RMAN backups between disk and tape

Yann Neuhaus - Fri, 2018-02-16 09:46

Below is a workaround showing how to “duplex” archivelog backups between disk and tape.

Backup on disk (normal way):

backup device type disk archivelog all;


Immediately afterwards, back up on tape:

backup device type sbt archivelog until time 'sysdate' not backed up 2 times;


This backup command backs up all archivelogs that have not yet been backed up twice, i.e. exactly those that were backed up by the first command. As the first backup command includes a logfile switch, no logfile switch should occur between the two backup commands, otherwise the “duplexing” does not work. The until time clause in the second command is added to prevent RMAN from performing another logfile switch, which would lead to different contents of the two backups. This clause does not filter anything, because sysdate means the date and time at which the command is issued.
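Both steps can also be wrapped into a single RMAN invocation; this is just the same two commands from above in one run block, and the caveat about logfile switches between them still applies:

```
run {
  # 1st copy: to disk, includes the implicit logfile switch
  backup device type disk archivelog all;
  # 2nd copy: to tape, exactly the logs just backed up
  backup device type sbt archivelog until time 'sysdate'
    not backed up 2 times;
}
```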


The post Duplex RMAN backups between disk and tape appeared first on Blog dbi services.

variable in FROM clause inside pl/sql

Tom Kyte - Fri, 2018-02-16 07:46
Hi Tom We have an anonymous pl/sql block which looks like follows but using dbms_sql (the following doesnt work) declare vRows number; begin for i in (select * from user_tables) loop select count(*) into vRows from i....
Categories: DBA Blogs

Update current row with values from previous row

Tom Kyte - Fri, 2018-02-16 07:46
Hi, I'm searching for a solution to solve update with a single SQL statement instead of a PL/SQL procedure: <code>create table test (stock_date DATE, stock NUMBER(5), stock_in NUMBER(5), stock_out NUMBER(5), stock_val NUMBER(5)); INSERT INTO tes...
Categories: DBA Blogs

How to find the SQL_ID of the sql statements running in a stored procedure?

Tom Kyte - Fri, 2018-02-16 07:46
Hi Team, I have scenario in which I need to check which of my procedures(they run in batch) are slowing down the operations. The procedure consist of two main tasks: 1.) Get data from multiple table (has multiple joins and vast data) 2.) insert ...
Categories: DBA Blogs

Based on parameter value need to execute the condition .

Tom Kyte - Fri, 2018-02-16 07:46
<code>create or replace procedure fetch_ids(ename in varchar2,hiredate in date) as begin select * from emp where empname=ename and join_date=hiredate ; end;</code> Problem statement: 1)if i will not pass the ename, i need to fetch all the e...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator