Feed aggregator

Links for 2016-10-22 [del.icio.us]

Categories: DBA Blogs

Oracle Health Check

Michael Dinh - Sat, 2016-10-22 07:44

Currently, I am working on a health check for ODA and find there are too many tools with disparate information.

I am sure there are more than the ones listed below, but I stopped searching.

ODA Oracle Database Appliance orachk Healthcheck (Doc ID 2126926.1)
Multiplexing Redolog and Control File on ODA (Doc ID 2086289.1)

ORAchk – Health Checks for the Oracle Stack (Doc ID 1268927.2)
How to Perform a Health Check on the Database (Doc ID 122669.1)
Health Monitor (Doc ID 466920.1)

Oracle Configuration Manager Quick Start Guide (Doc ID 728988.5)
Pre-12+ OCM Collectors to Be Decommissioned Summer of 2015 (Doc ID 1986521.1)

cluvfy comp healthcheck

One example found: ORAchk reports when fewer than 3 SCANs are configured, while cluvfy comp healthcheck (11.2) does not.

Interesting side track: < 3 not escaped is ❤

Complete cluvfy comp healthcheck results, plus how to create the database user CVUSYS (WARNING: ~1600 lines).
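For reference, a typical invocation looks like this (a sketch based on the documented cluvfy options; the save directory is my own choice):

cluvfy comp healthcheck -collect cluster -bestpractice -html -save -savedir /tmp/cvu
cluvfy comp healthcheck -collect database -db emu -bestpractice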

Some failures from cluvfy comp healthcheck.

Database recommendation checks for "emu"

Verification Check        :  DB Log Mode
Verification Description  :  Checks the database log archiving mode
Verification Result       :  NOT MET
Verification Summary      :  Check for DB Log Mode failed
Additional Details        :  If the database is in log archiving mode, then it is
                             always desirable and advisable to upgrade the database in
                             noarchivelog mode as that will reduce the time taken to
                             upgrade the database. After the upgrade, the database can
                             be reverted to the archivelog mode.
References (URLs/Notes)   :  https://support.oracle.com/CSP/main/article?cmd=show&type=N

Database(Instance)  Status    Expected Value                Actual Value

emu                 FAILED    db_log_mode = NOARCHIVELOG    db_log_mode = ARCHIVELOG


Database(Instance)  Error details

emu                 Error - NOARCHIVELOG mode is recommended when upgrading
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available

Verification Check        :  Users Granted CONNECT Role
Verification Description  :  Checks for the presence of any users with CONNECT role
Verification Result       :  NOT MET
Verification Summary      :  Check for Users Granted CONNECT Role failed

Database(Instance)  Status    Expected Value                Actual Value

emu                 FAILED    connect_role_grantees = 0     connect_role_grantees = 5


Database(Instance)  Error details

emu                 Error - CONNECT role granted users found
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
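Both findings are easy to double-check with standard dictionary queries, run as a DBA user:

SQL> select log_mode from v$database;
SQL> select grantee from dba_role_privs where granted_role = 'CONNECT';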

Does Oracle itself need a health check?

Is Fixed Objects Stats needed in PDB?

Syed Jaffar - Sat, 2016-10-22 04:53
It doesn't matter whether we are working on a new technology or something we are very familiar with: when we encounter tricky situations, sometimes we can find answers neither in the manuals nor in any useful references.

All you need is to take action yourself and then become a reference for others.

Mike Dietrich received an interesting question about whether gathering fixed objects statistics is required for each individual PDB or just for the ROOT container. He did a small test and explained the answer on his Oracle blog.
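For context, fixed objects statistics are gathered with DBMS_STATS.GATHER_FIXED_OBJECTS_STATS. A minimal sketch of such a test, assuming a PDB named PDB1, is to run the gather in each container and then check what was analyzed:

SQL> alter session set container = CDB$ROOT;
SQL> exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS
SQL> alter session set container = PDB1;
SQL> select count(*) from dba_tab_statistics where object_type = 'FIXED TABLE' and last_analyzed is not null;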


Catch ORA-28002 in Oracle forms

Tom Kyte - Sat, 2016-10-22 04:46
Hello, How can I catch warning ora-28002 (password will expire within days) in Oracle forms 6i? I tried in several triggers but with no success. Is there a way to catch such warnings in Forms? Also for example the ora-28001 (expired) code. ...
Categories: DBA Blogs

Command execution sequence of SQL*Plus

Tom Kyte - Sat, 2016-10-22 04:46
dear Tom, I have command echo "exit" | sqlplus $CONDB @LONGSQL in AIX. Questions: 1. will sqlplus always execute exit after executing LONGSQL. 2. If there is no exit in LONGSQL?what is the better way to let sqlplus exit after executin...
Categories: DBA Blogs

Result of view, if base table is modified

Tom Kyte - Sat, 2016-10-22 04:46
Hi Tom, During a recent interview, I was asked that what happens to a view, if the base table is modified. For eg. I have a table emp with 3 columns viz eid, did, sal. I have created a view vw_emp as CREATE VIEW vw_emp AS SELECT * FROM emp; Thing...
Categories: DBA Blogs

ORAPWD file usage and it's not working properly

Tom Kyte - Sat, 2016-10-22 04:46
Hi Tom, I dont know below is fault or am i missing something in the architecture. Iam using orapwd file , -- When connecting from OS level password will be taken from this file -- once i change / recreate the ORAPWD file its not working as ...
Categories: DBA Blogs

Extract domain names from a column having multiple email addresses

Tom Kyte - Sat, 2016-10-22 04:46
Hi Tom, i am trying to extract the domain names from a comma delimited email address and show them as comma delimited. i was successful to some extent where i am able to grab the domain name using REGEXP_SUBSTR and REGEXP_REPLACE and show them in...
Categories: DBA Blogs

Elasticsearch for PeopleSoft Now Available!

PeopleSoft Technology Blog - Fri, 2016-10-21 13:05

We’ve been announcing for some time that Elasticsearch would be available for PeopleSoft, and that day has come!  Customers can now download a DPK from My Oracle Support and install and configure Elasticsearch for their PeopleSoft systems.  Elasticsearch is available for the PeopleTools 8.55.11 patch, and customers must be on PeopleTools 8.55.11 or higher to use Elasticsearch.  You can get the DPK from the PeopleTools Maintenance Page on MOS.  Elasticsearch DPKs are available for Linux and Windows.

There is also documentation to help with installation, deployment, and maintenance.  Visit the Elasticsearch Documentation Home Page on My Oracle Support.  If you are currently using Secure Enterprise Search (SES), we cover how to transition from SES to Elasticsearch.  If you are not using SES, we cover how to do a fresh install of Elasticsearch.  In the near future we will provide additional resources including Oracle University Training, a Spotlight Series video, and more.  Our Cumulative Feature Overview tool has been updated with Elasticsearch content.

All of our testing indicates that Elasticsearch will be significantly easier to install and maintain and will perform much better than SES both for indexing and results retrieval.  With this big improvement, we hope customers will take advantage of Elasticsearch and make searching an important part of their user experience.  Search can be especially valuable with the Fluid User Experience because the Fluid header—which includes the Search widget—is always available, so users will be able to initiate a search from any context and at any point of their process.

Note that Oracle Secure Enterprise Search (SES) will be supported until April 30, 2018, eighteen months after Elasticsearch is delivered in PeopleTools 8.55.11.

2 x ODA X6-2S + Dbvisit Standby: Easy DR in SE

Yann Neuhaus - Fri, 2016-10-21 11:28

What’s common with Standard Edition, simplicity, reliability, high performance, and affordable price?
Dbvisit standby can be an answer because it brings Disaster Recovery to Standard Edition without adding complexity
ODA Lite (the new X6-2S and 2M) is another answer because you can run Standard Edition in those new appliance.
So it makes sense to bring them together, this is what I did recently at a customer.

I’ll not cover the reasons and the results here as this will be done later. Just sharing a few tips to set-up the following configuration: two ODA X6-2S runnimg 12c Standard Edition databases, protected by Dbvisit standby over two datacenters.

ODA repository

ODA X6 comes with a new interface to provision databases from the command line (odacli) or the GUI (https://oda:7093/mgmt/index.html). It's a layer over the tools we usually use: it calls dbca behind the scenes. What it does in addition is log what has been done in a Java DB repository.

What is done is logged in /opt/oracle/dcs/log/dcs-agent.log:
2016-10-13 15:33:59,816 DEBUG [Database Creation] [] c.o.d.a.u.CommonUtils: run: cmd= '[su, -, oracle, -c, export PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin:/u01/app/oracle/product/; export ORACLE_SID=MYNEWDB; export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/oracle/product/; export PWD=******** /u01/app/oracle/product/ -createDatabase -silent -gdbName MYNEWDB.das.ch -sid MYNEWDB -sysPassword ******* -systemPassword ******* -dbsnmpPassword ******* -asmSysPassword ******* -storageType ASM -datafileJarLocation /u01/app/oracle/product/ -emConfiguration DBEXPRESS -databaseConfType SINGLE -createAsContainerDatabase false -characterSet WE8MSWIN1252 -nationalCharacterSet AL16UTF16 -databaseType MULTIPURPOSE -responseFile NO_VALUE -templateName seed_noncdb_se2016-10-13_15-33-59.0709.dbc -initParams "db_recovery_file_dest_size=174080,db_unique_name=MYNEWDB" -recoveryAreaDestination /u03/app/oracle/fast_recovery_area/]'

Do I like it? Actually, I don't, for two reasons. The first reason is that I don't want to learn a new syntax every year. I have known CREATE DATABASE for decades and DBCA for years. I just prefer to use those.
The second reason is that if you want to add a layer on top of something, you need to provide at least the same functionality and the same quality as the tool you call behind the scenes. If you provide a command to create a database, then you must provide a command to delete it, even if the creation failed. I created a database whose creation failed: the reason was that I had changed the listener port, but the template explicitly sets local_listener to port 1521. Fortunately it calls DBCA, and I know where the logs are. So my ODA repository has a database in failed status. The problem is that you can't drop it (it doesn't exist for DBCA) and you cannot re-create it (it exists for ODA). I'm not a developer, but when I write code I try to handle exceptions. At the very least, they should implement a 'force' mode where errors are ignored when deleting something that does not exist.

So if you have the same problem, here is what I did:

  • Open an SR, in the hope that they understand there's something to fix in their code without asking me to upload all the log files
  • Create a database with the same name directly with DBCA, then drop it with ODACLI

Finally, my workaround worked, and Oracle Support came back with two solutions: create the database with another name, or re-image the ODA!

But, when it doesn’t fail, the creation is very fast: from templates with datafiles, and datafiles in those very fast NVMe SSDs.

Create the standby

I don’t like this additional layer, but I have the feeling that it’s better than the ODA repository knows about my databases. The standby database is created with Dbvisit interface (I’m talking about real user friendly interface there, where errors are handled and you even have the possibility to resume a creation that failed). How to make it go to the ODA repository?

I see 3 possibilities.

The odacli has a "--register-database" option to register an already created database. But that probably does too much here, because it was designed to register databases created with oakcli on previous ODAs.

The odacli has a "--instanceonly" option, which is there to register a standby database that will be created later, with an RMAN duplicate for example. Again, this does too much, as it creates an instance. I tried it and didn't have the patience to make it work: when ODACLI encounters a problem, it doesn't explain what's wrong, but just shows the command-line help.

Finally, what I did is create a database with ODACLI and then drop it (outside of ODACLI). This is ugly, but it's the only way I got something where I understand exactly what is done. This is where I encountered the issue above, so my workflow was actually: create from ODACLI -> fails -> drop from DBCA -> re-create from ODACLI -> success -> drop

I didn’t drop it from DBCA because I wanted to keep the entry in ORATAB. I did it from RMAN:

RMAN> startup force dba mount
RMAN> drop database including backups noprompt;

Then, there was no problem creating the standby from the Dbvisit GUI.

Create a filesystem

I’ve created the database directly in ASM. I don’t see any reason to create an ACFS volume for them, especially for Standard Edition where you cannot use ACFS snapshots. It’s just a performance overhead (and with those no-latency disks, any CPU overhead counts) and a risk to remove a datafile as they are exposed in a filesystem with no reason for it.

However, Dbvisit needs a folder where it stores the archived logs that are shipped to the standby. I could have created a folder in the local filesystem, but I preferred to create an ACFS filesystem for it.
I did it from ODACLI:

odacli create-dbstorage --dataSize 200 -n DBVISIT -r ACFS

This creates a 200GB filesystem mounted as /u02/app/oracle/oradata/DBVISIT/
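To double-check what was provisioned, assuming the matching list command (odacli list-dbstorages) and a standard df:

odacli list-dbstorages
df -h /u02/app/oracle/oradata/DBVISIT/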

Who starts the database?

Dbvisit comes with a scheduler that can start the databases in the required mode. But on the ODA, the resources are managed by Grid Infrastructure. So, after creating the standby database, you must modify its mount mode:

srvctl modify database -d MYNEWDB -startoption mount

Don’t forget to change the mount modes after a switchover or failover.

This can be scripted with something like:

srvctl modify database -db $db -startoption $(/opt/dbvisit/standby/dbv_oraStartStop status $db | awk '/^Regular Database/{print "OPEN"}/^Standby Database/{print "MOUNT"}')

Keep it simple and test it

ODA is simple if you do what it has been designed for: run the database versions that are certified, and don't try to customize the configuration. Always test the switchover, so that you can rely on the protection. It's easy with Dbvisit standby, either from the GUI or the command line. And be sure that your network can keep up with the redo rate. Again, this is easy to check from the GUI, as I did when testing the migration with a Data Pump import.

From public prices, and before any discount, you can get two ODA X6-2S plus perpetual licences for Oracle Database Standard Edition and Dbvisit standby for less than 90K USD.
If you need more storage, you can double the capacity for about an additional 10K USD per ODA.
And if you think that an ODA may still need a DBA sometimes, have a look at our SLAs, and you have a reliable and affordable system on your premises to store and process your data.


The article 2 x ODA X6-2S + Dbvisit Standby: Easy DR in SE appeared first on Blog dbi services.

Deadlock on two delete statements

Tom Kyte - Fri, 2016-10-21 10:26
Hi Tom, I'm not sure I understand the root cause of the following deadlock trace. Assuming I'm reading it correctly the trace is showing two different sql sessions attempting to delete the same row in the AAA_WF_OPERAND table. However, I do not se...
Categories: DBA Blogs


Tom Kyte - Fri, 2016-10-21 10:26
Hi! This is (should be...) a trivial question for those who are familiar with Regular Expressions, I guess (and hope). I used them almost 25 years ago, and I remember I was comfortable with them at the time. Weird enough, no matter how hard I am stru...
Categories: DBA Blogs

DDL of composite partition table without subpartition in each partition

Tom Kyte - Fri, 2016-10-21 10:26
Hi Tom, I have one composite partitioned table. When we generate ddl of the table, subpartition details are present in each partition. I know that we can have different high values for subpartition in different partition so subpartition details in...
Categories: DBA Blogs

alert logfile

Tom Kyte - Fri, 2016-10-21 10:26
Hi , I need your help to find alert log text data between a range of dates eg: 10 june 2016 to 11 june 2016 in linux RHEL 6 ,as my alert log file is very big i can't do that manually. Your help will be appreciated ,thanks in advance. regards.
Categories: DBA Blogs

before insert or update on a column ROW trigger

Tom Kyte - Fri, 2016-10-21 10:26
currently i am using REGEXP_SUBSTR function to encrypt cc number in a text column of varchar2. is there a function for oracle 9i that i can use to encrypt the number in any combination to XXX. thank you
Categories: DBA Blogs


Tom Kyte - Fri, 2016-10-21 10:26
1) What is difference between conventional path load and direct path load? 2) when to use conventional path load? 3) when to use direct path load?
Categories: DBA Blogs

dbms_parallel_execute and 2 packages of mine !

Tom Kyte - Fri, 2016-10-21 10:26
Hi I've 2 PL/SQL packages. - First is dedicated to : * create a parallel task, * then to create chunks (by rowid), * and finally to execute (previous) created task by executing a second PL/SQL Package....
Categories: DBA Blogs

How to check if oracle directory points to the right location?

Tom Kyte - Fri, 2016-10-21 10:26
Hello! Is there a way to check if my directories look at the right folders on my network? ALL_DIRECTORIES view says the directory path is something like /d01/data/xfer/BLA/BLABLA/IN. The actual network location is something like \\xyz14311.bla.c...
Categories: DBA Blogs

Configure easily your Stretch Database

Yann Neuhaus - Fri, 2016-10-21 10:07

In this blog, I will present the new Stretch Database feature of SQL Server 2016. It couples your on-premises SQL Server database with an Azure SQL Database, allowing you to stretch data from one or more tables to the Azure cloud.
This mechanism lets you use the low-cost storage available in Azure instead of fast and expensive local solid state drives. Note that it is the Azure SQL Database server's resources, not the on-premises SQL Server's, that are used during data transfers and remote queries.

First, you need to enable the “Remote Data Archive” option at the instance level. To verify if the option is enabled:
USE master;
SELECT name, value, value_in_use, description
FROM sys.configurations
WHERE name LIKE 'remote data archive';

To enable this option at the instance level:

EXEC sys.sp_configure N'remote data archive', '1';
RECONFIGURE;

Now, you have to link your on-premises database with a remote SQL Database server:
USE AdventureWorks2014;
CREATE DATABASE SCOPED CREDENTIAL Stretch_cred
WITH IDENTITY = 'dbi' , SECRET = 'userPa$$w0rd' ;
ALTER DATABASE AdventureWorks2014
SET REMOTE_DATA_ARCHIVE = ON (
SERVER = 'dbisqldatabase.database.windows.net' ,
CREDENTIAL = Stretch_cred
) ;

The process may take some time, as it creates a new SQL Database in Azure linked to your on-premises database. The credential entered to connect to your SQL Database server is defined as a database scoped credential, which must first be protected by a database master key.
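If the database master key does not exist yet, a minimal sketch (the password is only a placeholder):

USE AdventureWorks2014;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strongPassword>';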

To view all the remote databases from your instance:
Select * from sys.remote_data_archive_databases

Now, if you want to migrate one table from your database ([Purchasing].[PurchaseOrderDetail] in my example), proceed as follows:
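This can be done from the SSMS wizard, or directly with the documented REMOTE_DATA_ARCHIVE table option; a minimal T-SQL sketch:

ALTER TABLE [Purchasing].[PurchaseOrderDetail]
SET ( REMOTE_DATA_ARCHIVE = ON ( MIGRATION_STATE = OUTBOUND ) );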

Of course, repeat this process for each table you want to stretch. You can still access your data during the migration process.

To view all the remote tables from your instance:
Select * from sys.remote_data_archive_tables

To view the batch process of all the data being migrated (indeed, you can filter on a specific table):
Select * from sys.dm_db_rda_migration_status

It is also possible to easily migrate your data back:
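A minimal sketch, using the documented MIGRATION_STATE = INBOUND option on the same example table:

ALTER TABLE [Purchasing].[PurchaseOrderDetail]
SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = INBOUND ) );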

Moreover, you can select which rows to migrate by using a filter function. Here is an example:
CREATE FUNCTION dbo.fn_stretchpredicate(@column9 datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
WHERE @column9 > CONVERT(datetime, '1/1/2014', 101)

Then, when enabling the data migration, specify the filter function:
ALTER TABLE [Purchasing].[PurchaseOrderDetail] SET ( REMOTE_DATA_ARCHIVE = ON (
FILTER_PREDICATE = dbo.fn_stretchpredicate(ModifiedDate),
MIGRATION_STATE = OUTBOUND
) )

Of course, in the Microsoft world you can also use a wizard to set up this feature. The choice is up to you!


The article Configure easily your Stretch Database appeared first on Blog dbi services.

Rapid analytics

DBMS2 - Fri, 2016-10-21 09:17

“Real-time” technology excites people, and has for decades. Yet the actual, useful technology to meet “real-time” requirements remains immature, especially in cases which call for rapid human decision-making. Here are some notes on that conundrum.

1. I recently posted that “real-time” is getting real. But there are multiple technology challenges involved, including:

  • General streaming. Some of my posts on that subject are linked at the bottom of my August post on Flink.
  • Low-latency ingest of data into structures from which it can be immediately analyzed. That helps drive the (re)integration of operational data stores, analytic data stores, and other analytic support — e.g. via Spark.
  • Business intelligence that can be used quickly enough. This is a major ongoing challenge. My clients at Zoomdata may be thinking about this area more clearly than most, but even they are still in the early stages of providing what users need.
  • Advanced analytics that can be done quickly enough. Answers there may come through developments in anomaly management, but that area is still in its super-early days.
  • Alerting, which has been under-addressed for decades. Perhaps the anomaly management vendors will finally solve it.

2. In early 2011, I coined the phrase investigative analytics, about which I said three main things:

  • It is meant to contrast with “operational analytics”.
  • It is meant to conflate “several disciplines, namely”:
    • Statistics, data mining, machine learning, and/or predictive analytics.
    • The more research-oriented aspects of business intelligence tools.
    • Analogous technologies as applied to non-tabular data types such as text or graph.
  • A simple definition would be “Seeking (previously unknown) patterns in data.”

Generally, that has held up pretty well, although “exploratory” is the more widely used term. But the investigative/operational dichotomy obscures one key fact, which is the central point of this post: There’s a widespread need for very rapid data investigation.

3. This is not just a niche need. There are numerous rapid-investigation use cases, some already mentioned in my recent posts on anomaly management and real-time applications.

  • Network operations. This is my paradigmatic example.
    • Data is zooming all over the place, in many formats and structures, among many kinds of devices. That’s log data, header data and payload data alike. Many kinds of problems can arise …
    • … which operators want to diagnose and correct, in as few minutes as possible.
    • Interfaces commonly include real-time business intelligence, some drilldown, and a lot of command-line options.
    • I’ve written about various specifics, especially in connection with the vendors Splunk and Rocana.
  • Security and anti-fraud. Infosec and cyberfraud, to a considerable extent, are just common problems in network operations. Much of the response is necessarily automated — but the bad guys are always trying to outwit your automation. If you think they may have succeeded, you want to figure that out very, very fast.
  • Consumer promotion and engagement. Consumer marketers feel a great need for speed. Some of it is even genuine. :)
    • If an online promotion is going badly (or particularly well), they can in theory react almost instantly. So they’d like to know almost instantly, perhaps via BI tools with great drilldown.
    • The same is even truer in the case of social media eruptions and the like. Obviously, the tools here are heavily text-oriented.
    • Call centers and even physical stores have some of the same aspects as internet consumer operations.
  • Consumer internet backends, for e-commerce, publishing, gaming or whatever. These cases combine and in some cases integrate the previous three points. For example, if you get a really absurd-looking business result, that could be your first indication of network malfunctions or automated fraud.
  • Industrial technology, such as factory operations, power/gas/water networks, vehicle fleets or oil rigs. Much as in IT networks, these contain a diversity of equipment — each now spewing its own logs — and have multiple possible modes of failure. More often than is the case in IT networks, you can recognize danger signs, then head off failure altogether via preventive maintenance. But when you can’t, it is crucial to identify the causes of failure fast.
  • General IoT (Internet of Things) operation. This covers several of the examples above, as well as cases in which you sell a lot of devices, have them “phone home”, and labor to keep that whole multi-owner network working.
  • National security. If I told you what I meant by this one, I’d have to … [redacted].

4. And then there’s the investment industry, which obviously needs very rapid analysis. When I was a stock analyst, I could be awakened by a phone call and told news that I would need to explain to 1000s of conference call listeners 20 minutes later. This was >30 years ago. The business moves yet faster today.

The investment industry has invested greatly in high-speed supporting technology for decades. That's how Mike Bloomberg got so rich founding a vertical market tech business. But investment-oriented technology indeed remains a very vertical sector; little of it gets more broadly applied.

I think the reason may be that investing is about guesswork, while other use cases call for more definitive answers. In particular:

  • If you’re wrong 49.9% of the time in investing, you might still be a big winner.
  • In high-frequency trading, speed is paramount; you have to be faster than your competitors. In speed/accuracy trade-offs, speed wins.

5. Of course, it’s possible to overstate these requirements. As in all real-time discussions, one needs to think hard about:

  • How much speed is important in meeting users’ needs.
  • How much additional speed, if any, is important in satisfying users’ desires.

But overall, I have little doubt that rapid analytics is a legitimate area for technology advancement and growth.

Categories: Other

