Feed aggregator

Fast Refreshable Materialized Views - Direct Path Logs on the Master Table

Tom Kyte - Tue, 2017-06-20 09:46
Hello Masters :-), I read the documentation about fast refreshable materialized views. It states: "The changes for conventional DML changes are stored in the materialized view log associated with the master table. The changes for direct-path IN...
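
For context, the documentation passage being quoted contrasts two mechanisms: conventional DML is recorded in the materialized view log, while direct-path INSERT changes are recorded in a separate direct loader log. A minimal sketch to observe the difference (object names are assumed):

create materialized view log on scott.emp_master with rowid;

-- conventional insert: recorded in the MV log (MLOG$_EMP_MASTER)
insert into scott.emp_master select * from scott.emp_stage where rownum = 1;
commit;

-- direct-path insert: recorded in the direct loader log instead,
-- visible through the ALL_SUMDELTA view
insert /*+ append */ into scott.emp_master select * from scott.emp_stage;
commit;
select * from all_sumdelta;
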
Categories: DBA Blogs

Populating sequence for tab delimited input data using SQL*Loader

Tom Kyte - Tue, 2017-06-20 09:46
Hi Tom I have an input file that contains data that is tab delimited containing info like name, type, date, result, value as separate rows. The fields are not enclosed in quotes or anything else. I need to populate sequence for the data while load...
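
One common approach, sketched below with assumed file, table, and column names: let SQL*Loader generate the value itself with the SEQUENCE keyword, and declare the tab-delimited fields with TERMINATED BY X'09':

LOAD DATA
INFILE 'results.dat'
APPEND INTO TABLE results
FIELDS TERMINATED BY X'09' TRAILING NULLCOLS
( seq_no    SEQUENCE(MAX,1),   -- generated at load time, not read from the file
  name      CHAR,
  rec_type  CHAR,
  run_date  DATE "DD-MM-YYYY",
  result    CHAR,
  res_value CHAR
)
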
Categories: DBA Blogs

Unload Javascript web page contents to a text file to load in database table

Tom Kyte - Tue, 2017-06-20 09:46
Hello Experts, I have a requirement wherein I need to unload/export web page contents whose data is populated by javascript to a text/csv file, which in turn will be loaded in the database table. I am not sure if utl_http will get the job d...
Categories: DBA Blogs

Basic stored procedures with Oracle

Tom Kyte - Tue, 2017-06-20 09:46
1. Use table [Employee] Write a stored procedure named Get_Employee_Detail which will accept 2 parameters. @col_name varchar(100) @col_value varchar(500) col_name parameter can accept any column name from the table and col_val...
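
The @-prefixed parameters look like SQL Server syntax; an Oracle-style sketch (the EMPLOYEE table comes from the question, everything else is assumed) would return a ref cursor and validate the column name before concatenating it into dynamic SQL:

create or replace procedure get_employee_detail (
  p_col_name  in  varchar2,
  p_col_value in  varchar2,
  p_result    out sys_refcursor
) as
begin
  -- dbms_assert rejects anything that is not a simple SQL name,
  -- which guards the concatenation against SQL injection
  open p_result for
       'select * from employee where '
    || sys.dbms_assert.simple_sql_name(p_col_name)
    || ' = :v'
    using p_col_value;
end;
/
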
Categories: DBA Blogs

enq: FB - contention

Tom Kyte - Tue, 2017-06-20 09:46
Hi, We are getting enq: FB - contention for some client and it's because of an Insert statement. We have very few Oracle documents and SRs on this; can we have an idea of why we are getting this and a way to find and fix the issue ( possible w...
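
For background: enq: FB is the "format block" enqueue, taken while sessions format new blocks in an ASSM segment, which is why it typically shows up under concurrent INSERT load. A starting point for locating the statements involved (a sketch; v$active_session_history requires the Diagnostics Pack):

select sql_id, count(*) as samples
from   v$active_session_history
where  event = 'enq: FB - contention'
group  by sql_id
order  by samples desc;
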
Categories: DBA Blogs

Webcast: Maintenance Strategies for Oracle E-Business Suite

Steven Chan - Tue, 2017-06-20 02:00

Oracle University has a large collection of free recorded webcasts for Oracle E-Business Suite.  Here's an excellent primer for system administrators interested in creating a framework for regular EBS maintenance:

Elke Phelps, Senior Principal Product Manager, presents the best techniques Applications DBAs can use to proactively maintain their Oracle E-Business Suite environment. It starts with a practical approach for defining a preventive maintenance strategy and then moves to detailed descriptions and tips for various tools and approaches for efficiently performing maintenance tasks. Topics include patching shortcuts, distributing worker processes across multiple servers, shared file systems, avoiding resource bottlenecks, and more. Leave this session with the tools required for creating a plan to keep your environment healthy and up to date. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

12cR2 Application Containers and Foreign Keys

Yann Neuhaus - Mon, 2017-06-19 15:58

Application Container brings a new way to share data among databases, and adds a new dimension to referential integrity. A foreign key in an application PDB can reference a row belonging to a root data link table. But then, should a delete on the root validate that there are no orphans in the application PDBs? And what if those PDBs are closed at the time of this delete? Here is a small example.

If you run this in 12.2.0.1 you will get an error, because the search for the parent key is done only on the current container. This is considered a bug: 21955394: CDB:ORA-02291 WHEN FOREIGN KEY REFERS TO THE PRIMARY KEY IN DATA LINK

The example that follows is run with the patch applied to fix this bug.

I’m connecting to root where I have no user PDB yet.

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
SQL> select con_id, name, application_root application_root, application_pdb application_pdb,application_root_con_id application_root_con_id from v$containers;
 
CON_ID NAME APPLICATION_ROOT APPLICATION_PDB APPLICATION_ROOT_CON_ID
------ ---- ---------------- --------------- -----------------------
1 CDB$ROOT NO NO
2 PDB$SEED NO NO

I create the application container root

SQL> create pluggable database SCOTT_ROOT as application container admin user SCOTT_ADMIN identified by oracle roles=(DBA);
Pluggable database SCOTT_ROOT created.
 
SQL> alter pluggable database SCOTT_ROOT open;
Pluggable database SCOTT_ROOT altered.
 
SQL> select con_id, name, application_root application_root, application_pdb application_pdb,application_root_con_id application_root_con_id from v$containers;
 
CON_ID NAME APPLICATION_ROOT APPLICATION_PDB APPLICATION_ROOT_CON_ID
------ ---- ---------------- --------------- -----------------------
1 CDB$ROOT NO NO
2 PDB$SEED NO NO
6 SCOTT_ROOT YES NO

I connect to this application root and start the installation of the application

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
 
SQL> alter pluggable database application SCOTT begin install '1.0';
Pluggable database APPLICATION altered.

I’m installing the SCOTT tables DEPT and EMP, but I changed their definitions from utlsampl.sql:

  • DEPT is an EXTENDED DATA LINK, where a set of rows is common, inserted in the application root and visible to all application PDBs
  • EMP is a METADATA LINK, where each application PDB has its own data, isolated from the others but with the same structure


SQL> GRANT CONNECT,RESOURCE,UNLIMITED TABLESPACE TO SCOTT IDENTIFIED BY tiger container=all;
Grant succeeded.
 
SQL> alter session set current_schema=SCOTT;
Session altered.
 
SQL> CREATE TABLE DEPT sharing=extended data
2 (DEPTNO NUMBER(2) CONSTRAINT PK_DEPT PRIMARY KEY,
3 DNAME VARCHAR2(14) ,
4 LOC VARCHAR2(13) ) ;
Table DEPT created.
 
SQL> CREATE TABLE EMP sharing=metadata
2 (EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7,2),
8 COMM NUMBER(7,2),
9 DEPTNO NUMBER(2) CONSTRAINT FK_DEPTNO REFERENCES DEPT);
Table EMP created.
 
SQL> INSERT INTO DEPT VALUES
2 (10,'ACCOUNTING','NEW YORK');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES (20,'RESEARCH','DALLAS');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES
2 (30,'SALES','CHICAGO');
1 row inserted.
 
SQL> INSERT INTO DEPT VALUES
2 (40,'OPERATIONS','BOSTON');
1 row inserted.
 
SQL> COMMIT;
Commit complete.
 
SQL> alter pluggable database application SCOTT end install '1.0';
Pluggable database APPLICATION altered.

The application root has departments 10, 20, 30 and 40 in DEPT shared with all PDBs and has defined that EMP has a foreign key to it.

I create an application PDB from this application root

SQL> create pluggable database SCOTT_ONE admin user SCOTT_ONE_ADMIN identified by covfefe;
Pluggable database SCOTT_ONE created.
 
SQL> alter pluggable database SCOTT_ONE open;
Pluggable database SCOTT_ONE altered.

I sync it to get common DDL and DML applied

SQL> connect sys/oracle@//localhost/SCOTT_ONE as sysdba
Connected.
 
SQL> alter pluggable database application SCOTT sync;
Pluggable database APPLICATION altered.
 
SQL> select name,con_id,application_pdb,application_root_con_id from v$containers;
 
NAME CON_ID APPLICATION_PDB APPLICATION_ROOT_CON_ID
---- ------ --------------- -----------------------
SCOTT_ONE 8 YES 6

Now let’s connect to the application PDB. I can see the DEPT rows inserted from the root because DEPT is a data link.

SQL> connect scott/tiger@//localhost/SCOTT_ONE
Connected.
 
SQL> select * from dept;
 
DEPTNO DNAME LOC
------ ----- ---
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

EMP is empty here

SQL> select * from emp;
 
no rows selected

I insert an EMP row in the application PDB which references a DEPT row in the application root:

SQL> INSERT INTO EMP VALUES
2 (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
 
1 row inserted.

As DEPT is an EXTENDED DATA LINK, I can add new rows in my PDB:

SQL> INSERT INTO DEPT VALUES
2 (50,'MY LOCAL DEPT','LAUSANNE');
 
1 row inserted.

And I can have an EMP row referencing this local parent:

SQL> INSERT INTO EMP VALUES
2 (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,50);
1 row inserted.
 
SQL> commit;
Commit complete.

This looks good. Now what happens if we delete all rows from DEPT in the application root?

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> delete from SCOTT.DEPT;
4 rows deleted.
 
SQL> commit;
Commit complete.

No error here. But then, I have orphans in my application PDB:

SQL> connect scott/tiger@//localhost/SCOTT_ONE
Connected.
SQL> select * from dept;
 
DEPTNO DNAME LOC
---------- -------------- -------------
50 MY LOCAL DEPT LAUSANNE
 
SQL> select * from emp;
 
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

So what?

Referential integrity works across containers: an application PDB can reference a parent key in the application root (provided that bug is fixed). However, no ORA-02292 (child record found) is raised when the child records are not in the current container. This makes sense: enforcing the verification of child records in all PDBs would require that they are opened, and may require locking the table in all containers. We must be aware that DML on the application root can lead to inconsistency if not done correctly.
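
Detecting such orphans is therefore left to us. A minimal check (a sketch), run inside each application PDB, where DEPT, as an extended data link, shows the common rows plus the local ones:

select e.empno, e.deptno
from   emp e
where  not exists (select 1 from dept d where d.deptno = e.deptno);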

Operations on the application root are application releases (upgrades and patches) and must be validated and tested carefully. For the example above, deleting all rows from DEPT can be done as an application patch which deletes from the EMP table as well:

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> alter pluggable database application SCOTT begin patch 1 ;
Pluggable database APPLICATION altered.
SQL> delete from scott.emp;
0 rows deleted.
SQL> delete from scott.dept where deptno in (10,20,30,40);
4 rows deleted.
SQL> alter pluggable database application SCOTT end patch 1 ;
Pluggable database APPLICATION altered.

The delete from EMP does nothing in the application root here, but it will be done on the PDB when applying the patch:

SQL> select * from dept;
 
DEPTNO DNAME LOC
---------- -------------- -------------
50 MY LOCAL DEPT LAUSANNE
 
SQL> select * from emp;
 
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

Note that I’ve defined exactly which rows from DEPT I wanted to delete in the where clause: delete from scott.dept where deptno in (10,20,30,40);
You may be tempted to do something like: delete from scott.dept where deptno in (select deptno from scott.dept);
But keep in mind that the statements you run in the root are re-played as-is in the PDBs. And when you sync the PDB, the subquery sees no rows from DEPT because they were already purged from the root. Actually, what you want is to delete from EMP the rows which refer to the rows you have deleted from the root. It is not possible to get them with a subquery, unless you have stored them in another data link table before deleting them. Changes in the application root must be managed like application patches.
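
To make that concrete, here is a sketch of the staging approach (the DEPT_PURGED data link table is an assumption, created as a common object in an earlier application version):

-- in the application root, stage the keys before purging them
insert into scott.dept_purged select deptno from scott.dept;
commit;
-- in the application patch, delete children first; when the patch is
-- replayed on sync, the subquery still finds the staged keys
delete from scott.emp  where deptno in (select deptno from scott.dept_purged);
delete from scott.dept where deptno in (select deptno from scott.dept_purged);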

 

This article 12cR2 Application Containers and Foreign Keys appeared first on Blog dbi services.

Getting SQL IDs for a SQL text

Tom Kyte - Mon, 2017-06-19 15:26
Hello Tom & Team, Can you please provide a way to get the list of SQL IDs along with elapsed times and other metrics for a given SQL statement. Once obtained, can I force Oracle to use a particular SQL ID which was performing better? Regards, Manjun...
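
For the first part, a sketch of the usual starting point (the statement text is an assumption); for the second part, loading the better cursor's plan as a SQL plan baseline is the supported way to favour it:

-- candidate SQL_IDs with elapsed-time metrics for one statement text
select sql_id, plan_hash_value, executions,
       round(elapsed_time/1e6, 2) as elapsed_seconds
from   v$sql
where  sql_text like 'select /* my_query */%'
order  by elapsed_time desc;

-- pin the preferred plan, filling in the values found above
declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => '&sql_id',
         plan_hash_value => &plan_hash_value);
end;
/
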
Categories: DBA Blogs

SQL - Get Date Ranges for contiguous records

Tom Kyte - Mon, 2017-06-19 15:26
Hi Tom, I have been on your site numerous times and learnt a lot from your opinions and solutions. I am looking for SQL that displays in chronological order in which country "Tom" lived. Below are my details: <code>--create table. create ...
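
Since the table DDL in the question is truncated above, here is a sketch of the classic technique with assumed names: flag each change of country per person with LAG, turn the flags into group numbers with a running SUM, then aggregate each group into a date range:

select name, country,
       min(start_date) as from_date,
       max(end_date)   as to_date
from (
  select t.*,
         sum(chg) over (partition by name order by start_date) as grp
  from (
    select r.*,
           case when country = lag(country)
                               over (partition by name order by start_date)
                then 0 else 1 end as chg
    from   residency r
  ) t
)
group by name, country, grp
order by name, from_date;
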
Categories: DBA Blogs

SQL*Loader - Double Quotes...

Tom Kyte - Mon, 2017-06-19 15:26
Hi, I have data in the below format; how to load it without losing the double quotes... all columns are enclosed in "" and separated by ~ ; also the last column has values with double quotes and needs to be loaded with the quotes: "Jobid"~"Jobname"~"Status"~"Comment...
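
A control-file sketch for that layout (table and column names assumed): OPTIONALLY ENCLOSED BY strips the enclosing quotes, while a field-level clause on the last column, assumed here to override the global enclosure, keeps its embedded quotes:

LOAD DATA
INFILE 'jobs.dat'
APPEND INTO TABLE job_status
FIELDS TERMINATED BY '~' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( jobid,
  jobname,
  status,
  comments CHAR(4000) TERMINATED BY '~'  -- no enclosure: quotes are kept
)
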
Categories: DBA Blogs

best way to derive deltas when there is nothing set up on source

Tom Kyte - Mon, 2017-06-19 15:26
We are looking at a gradual migration of a legacy system running on 11gR1. We do not have control over the source system (although their DBAs are willing to accommodate <i>small</i> requests we might have). The data model is being changed drastical...
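
Without any change capture on the source, one low-impact pattern is full-extract comparison: pull the tables over a database link into staging tables each cycle and diff the current extract against the previous one. A sketch with assumed staging table names:

-- rows that are new or changed since the last extract
select * from stage_current
minus
select * from stage_previous;

-- rows that were deleted at the source
select * from stage_previous
minus
select * from stage_current;
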
Categories: DBA Blogs

More Fine-Grained 'ALTER USER' Privilege

Tom Kyte - Mon, 2017-06-19 15:26
I am currently looking for a more fine-grained approach to the user management within an Oracle 11g Release 2 (soon to be 12.1.0.2) Database: There is an idea to give some users the permission to manage some aspects of a user account such as: -...
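
Since ALTER USER cannot be granted clause by clause, the usual workaround is a definer-rights API owned by a privileged schema that exposes only the permitted actions. A sketch (the SECADM schema, MANAGED_USERS whitelist table, and HELPDESK_ROLE are assumptions):

create or replace procedure secadm.unlock_account (p_user in varchar2)
authid definer
as
  l_cnt pls_integer;
begin
  -- only accounts listed in an explicit whitelist may be managed
  select count(*) into l_cnt
  from   secadm.managed_users
  where  username = upper(p_user);
  if l_cnt = 0 then
    raise_application_error(-20001, 'account not manageable: ' || p_user);
  end if;
  execute immediate
    'alter user ' || sys.dbms_assert.enquote_name(upper(p_user))
    || ' account unlock';
end;
/
grant execute on secadm.unlock_account to helpdesk_role;
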
Categories: DBA Blogs

Oracle 12c Index unusable after partition drop

Tom Kyte - Mon, 2017-06-19 15:26
In Oracle 11g, if a global index is maintained during execution of the DROP PARTITION command, the command could take hours to complete. Oracle 12c says the DROP PARTITION command executes immediately and the index will remain usable. I had a database table in Or...
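
For reference, the 12c behaviour the question refers to is asynchronous global index maintenance: the index stays USABLE after the drop, and the orphaned entries are cleaned up later by a scheduled job. A sketch (table and partition names assumed):

alter table sales drop partition p_2016 update global indexes;

-- the index remains usable; orphaned entries are flagged...
select index_name, status, orphaned_entries
from   user_indexes
where  table_name = 'SALES';

-- ...and can be cleaned up immediately instead of waiting for the job
exec dbms_part.cleanup_gidx(user, 'SALES')
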
Categories: DBA Blogs

Security Alert CVE-2017-3629 Released

Oracle Security Team - Mon, 2017-06-19 15:21

Oracle just released Security Alert CVE-2017-3629 to address three vulnerabilities affecting Oracle Solaris:

  • Vulnerability CVE-2017-3629 affects Oracle Solaris version 10 and version 11.3 and has a CVSS Base Score of 7.8.
  • CVE-2017-3630 affects Oracle Solaris version 10 and version 11.3 and has a CVSS Base Score of 5.3.
  • CVE-2017-3631 only affects Oracle Solaris 11.3 and has a CVSS Base Score of 5.3.

Oracle recommends affected Oracle Solaris customers apply the fixes released with this Security Alert as soon as possible.

For More Information:
The Advisory for Security Alert CVE-2017-3629 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2017-3629-3757403.html

Installing SQLDeveloper and SQLCL on CentOS

The Anti-Kyte - Mon, 2017-06-19 14:02

As is becoming usual in the UK, the nation has been left somewhat confused in the aftermath of yet another “epoch-defining” vote.
In this case, we’ve just had a General Election campaign in which Brexit – Britain’s Exit from the EU – played a vanishingly small part. However, the result is now being interpreted as a judgement on the sort of Brexit that is demanded by the Great British Public.
It doesn’t help that, beyond prefixing the word “Brexit” with an adjective, there’s not much detail on the options that each term represents.
Up until now, we’ve had “Soft Brexit” and “Hard Brexit”, which could describe the future relationship with the EU but equally could be how you prefer your pillows.
Suddenly we’re getting Open Brexit and even Red-White-and-Blue Brexit.
It looks like the latest craze sweeping the nation is Brexit Bingo.
This involves drawing up a list of adjectives and ticking them off as they get used as a prefix for the word “Brexit”.
As an example, we could use the names of the Seven Dwarfs. After all, no-one wants a Dopey Brexit, ideally we’d like a Happy Brexit but realistically, we’re likely to end up with a Grumpy Brexit.

To take my mind off all of this wacky word-play, I’ve been playing around with CentOS again. What I’m going to cover here is how to install Oracle’s database development tools and persuade them to talk to a locally installed Express Edition database.

Specifically, I’ll be looking at :

  • Installing the appropriate Java Developer Kit (JDK)
  • Installing and configuring SQLDeveloper
  • Installing SQLCL

Sound like a Chocolate Brexit with sprinkles ? OK then…

Environment

I’m running on CentOS 7 (64 bit). I’m using the default Gnome 3 desktop (3.1.4.2).
CentOS is part of the Red Hat family of Linux distros which includes Red Hat, Fedora and Oracle Linux. If you’re running on one of these distros, or on something that’s based on one of them then these instructions should work pretty much unaltered.
If, on the other hand, you’re running a Debian based distro ( e.g. Ubuntu, Mint etc) then you’ll probably find these instructions rather more useful.

I’ve also got Oracle Database 11gR2 Express Edition installed locally. Should you feel so inclined, you can perform that install on CentOS using these instructions.

One other point to note, I haven’t bothered with any Oracle database client software on this particular machine.

Both SQLDeveloper and SQLCL require Java so…

Installing the JDK

To start with, we’ll need to download the JDK version that SQLDeveloper needs to run against. At the time of writing (SQLDeveloper 4.2), this is Java 8.

So, we need to head over to the Java download page
… and download the appropriate rpm package. In our case :

jdk-8u131-linux-x64.rpm

Once the file has been downloaded, open the containing directory in Files, right-click our new rpm and open it with Software Install :

Now press the install button.

Once it’s all finished, you need to make a note of the directory that the jdk has been installed into as we’ll need to point SQLDeveloper at it. In my case, the directory is :

/usr/java/jdk1.8.0_131

Speaking of SQLDeveloper…

SQLDeveloper

Head over to the SQLDeveloper Download Page and get the latest version. We’re looking for the Linux RPM option. In my case :

sqldeveloper-4.2.0.17.089.1709-1.noarch.rpm

While we’re here, we may as well get the latest SQLCL version as well. The download for this is a single file as it’s platform independent.

Once again, we can take advantage of the fact that Oracle provides us with an rpm file by right-clicking it in Files and opening with Software Install.

Press the install button and wait for a bit…

Once the installation is complete, we need to configure SQLDeveloper to point to the JDK we’ve installed. To do this, we need to run :

sh /opt/sqldeveloper/sqldeveloper.sh

…and provide the jdk path when prompted, in this case :

/usr/java/jdk1.8.0_131

The end result should look something like this :

In my case I have no previous install to import preferences from so I’ll hit the No button.

Once SQLDeveloper opens, you’ll want to create a connection to your database.

To do this, go to the File Menu and select New/Connection.

To connect as SYSTEM to my local XE database I created a connection that looks like this :

Once you’ve entered the connection details, you can hit Test to confirm that all is in order and you can actually connect to the database.
Provided all is well, hit Save and the Connection will appear in the Tree in the left-side of the tool from this point forward.

One final point to note, as part of the installation, a menu item for SQLDeveloper is created in the Programming Menu. Once you’ve done the JDK configuration, you can start the tool using this menu option.

SQLCL

As previously noted, SQLCL is a zip file rather than an rpm, so the installation process is slightly different.
As with SQLDeveloper, I want to install SQLCL in /opt .
To do this, I’m going to need to use sudo so I have write privileges to /opt.

To start with then, open a Terminal and then start Files as sudo for the directory that holds the zip. So, if the directory is $HOME/Downloads …

sudo nautilus $HOME/Downloads

In Files, right click the zip file and select Open With Archive Manager

Click the Extract Button and extract to /opt

You should now have a sqlcl directory under /opt.

To start sqlcl, run

/opt/sqlcl/bin/sql

…and you should be rewarded with…

There, hopefully that’s all gone as expected and you’ve not been left with a Sneezy Brexit.


Filed under: Linux, Oracle, SQLDeveloper Tagged: jdk, sqlcl, SQLDeveloper, sudo nautilus

Oracle's Security Fixing Practices

Oracle Security Team - Mon, 2017-06-19 13:53

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we're going to discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers.

As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible.

So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for what product versions do security fixes get first generated)? What goes into security releases? What are Oracle's objectives?

The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing all customers with the same vulnerability information and access to fixes across actively-used platform and version combinations at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The only and narrow exception to this practice is for customers who report a security vulnerability. When a customer reports a security vulnerability, Oracle will treat the customer in much the same way the company treats security researchers: the customer gets detailed information about the vulnerability as well as information about the expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability reported by the customer.

Another objective of Oracle's security fixing policies is not so much producing fixes as quickly as possible as it is making sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (much as Oracle would today release a Security Alert). The feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.

It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in a specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases.

Let's now have a look at the order in which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order:

  • Main code line. The main code line is the code line for the next major release version of the product.
  • Patch set for non-terminal release version. Patch sets are rollup patches for major release versions. A Terminal release version is a version where no additional patch sets are planned.
  • Critical Patch Update. These are fixes against initial release versions or their subsequent patch sets.

This means that, in certain instances, security fixes can be backported for inclusion in future patch sets or product releases before their actual inclusion in a future Critical Patch Update release. This also means that systems updated with patch sets or upgraded to a new product release will receive the security fixes previously included in the patch set or release.

One consequence of Oracle's practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been or cannot be backported by Oracle.

In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to:

  1. Remain on actively supported release versions and their most recent patch set—so that they can have continued access to security fixes;
  2. Move to the most recent release version of a product—so that they benefit from fixes that cannot be backported and other security enhancements introduced in the code line over time;
  3. Promptly apply Critical Patch Updates and Security Alert fixes—so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known by malicious attackers and can be quickly weaponized after the release of Oracle fixes.

For more information:
- Oracle Software Security Assurance website
- Security Alerts and Critical Patch Updates

Real ODA “one button patching”, 12.1.2.6 to 12.1.2.10

Geert De Paep - Mon, 2017-06-19 13:46

Today I had to patch a virtualized Oracle Database Appliance X5-2 HA at a customer site. Current version was 12.1.2.6, installed in June 2016, and it had to be upgraded to 12.1.2.10. I’ve heard a lot of scary stories of colleagues and friends about possible issues during patching, so I was a little reluctant to do this. Anyway, no choice, what needs to be done, needs to be done.

I had one big advantage: this ODA was installed by myself and I was 100% sure that nobody had ever done anything ‘unsupported’ with it. During the installation in June 2016, I took great care to follow the books accurately and to ensure that nothing went wrong. This was a really clean install without any issues, now in production for 1 year.

Now, one year later, I have to say that I am very happy with “my ODA”. The patching today went smoothly. Not a single issue was encountered during the patching. Actually I was a little surprised because, as I said, I have heard about many issues others hit during patching.

More specifically I did the following:

I followed MOS: README for 25499210

Luckily version 12.1.2.6 can be directly upgraded to 12.1.2.10 (lower versions need to go in 2 or more steps).

Server

I did the Server part “locally”, i.e. one node at a time and hence patching twice. Although I had downtime for the full ODA, I preferred to do it node by node. If something went completely wrong on node 1, I would still have a valid node 2.

Warnings and tips:

  • Ensure each node (oda_base) has more than 15 GB free on /u01
  • Verify that the rpm orclovn-user-6.0.r7494-1.el5 exists in Dom0 on each node (cf. one of those issues – someone I know had this rpm missing in Dom0…)
  • Run the patching one node at a time, not both together
  • It is best to stop any VMs manually before starting the patching (but at the end you will have to start them manually again as well)

This command performs the patching:

/opt/oracle/oak/bin/oakcli update -patch 12.1.2.10.0 --server --local

You may be interested in the timings, if you need to do this as well and have to plan downtime:

These are the current and proposed versions (I think the number of components to be patched will influence the time it takes):

Component Name            Installed Version         Proposed Patch Version  
---------------           ------------------        -----------------       
Controller_INT            4.230.40-3739             Up-to-date               
Controller_EXT            06.00.02.00               10.00.00.00             
Expander                  0018                      Up-to-date              
SSD_SHARED {                                                                
[ c1d20,c1d21,c1d22,      A29A                      Up-to-date              
c1d23 ]                                                                     
[ c1d16,c1d17,c1d18,      A29A                      Up-to-date              
c1d19 ]}                                                               
HDD_LOCAL                 A720                      A72A                    
HDD_SHARED                P901                      PAG1                    
ILOM                      3.2.4.52 r101649          3.2.8.24 r114580        
BIOS                      30050100                  30100400                
IPMI                      1.8.12.4                  Up-to-date              
HMP                       2.3.4.0.1                 2.3.5.2.8               
OAK                       12.1.2.6.0                12.1.2.10.0             
OL                        6.7                       6.8                     
OVM                       3.2.9                     3.2.11                  
GI_HOME                   12.1.0.2.160119(21948354,21948344)   12.1.0.2.170117(24732082,24828633)
DB_HOME                   12.1.0.2.160119(21948354,21948344)   12.1.0.2.170117(24732082,24828633)

Each node took about 75 minutes to do the patching. However at the end a reboot is triggered and you should calculate about 15 minutes extra before everything is up again and you can verify that all went well. But good to know, while node 1 is patched, node 2 remains fully available and vice versa.

Shared storage

After this, the shared storage part needs to be patched using

/opt/oracle/oak/bin/oakcli update -patch 12.1.2.10.0 --storage

You run this command only on node 1.

This took 24 minutes, but at the end both nodes reboot (both at the same time, so this is not rolling and you have db unavailability, even with RAC). So calculate some extra time here.

Database homes

Finally the database homes (I got 4 of them) need to be patched. In fact only 3 of the homes needed to be patched in my case, because we first want to validate the patch before patching production (4th home). On node 1 you can do:

/opt/oracle/oak/bin/oakcli update -patch 12.1.2.10.0 --database

Note, I didn’t use the “--local” option in this case, so the Oracle homes on both nodes are patched in one command.

Tip:

  • I don’t know if it is required, but it is safe to set this first (again, one of those issues others hit):
    • export NLS_LANG=American_America.US7ASCII
  • You don’t have to stop the databases in advance. This will be done, one node at a time, while patching.
  • “datapatch” will be run automatically during the patching. So after the patching all databases are open again and no additional steps are required.

After starting the command, a list of homes to be patched is displayed and you can select whether to patch them all together or not. In my case I answered N, and then I could choose which homes NOT(!) to patch. Yes, it is somewhat strange: instead of asking which homes I want to patch, I had to enter the opposite:

…
Would you like to patch all the above homes: Y | N ? : N
Please enter the comma separated list of Database Home names that you do NOT want to patch:OraDb12102_home4,OraDb12102_home1,OraDb12102_home2
…

In the meantime I filed an enhancement request with Oracle Support to add a confirmation after entering these homes, because at the moment, after pressing RETURN, the script immediately starts patching. I don’t want to know what would happen if you had a typo in your list of home names. It might be that the script would patch the wrong home in that case… So be careful.

I noticed that datapatch was automatically run for each database and I could verify this in:

SELECT patch_id, version, status, bundle_id, bundle_series
       FROM dba_registry_sqlpatch;

patch_id version  status  bundle_id bundle_series
21948354 12.1.0.2 SUCCESS    160119 PSU         
24732082 12.1.0.2 SUCCESS    170117 PSU

Regarding timings, in my case it took about 20 minutes for each home. However, I had 4 databases in each home. If you have more or fewer databases, timings may be somewhat different, but at least you have an idea.

Summary

In summary, all patch steps went through without a single error or issue. So this is the real ODA “one button patching”. Very nice.

However, I don’t know how many patchings go like this. It looks to me that you can easily do (unsupported) things on your ODA that may cause issues during patching, e.g. installing custom rpms, changing OS settings, not creating databases with the oakcli-provided command, or manually adding other Oracle products to the oraInventory. Yes I know, it is hard to treat an appliance like an appliance if all backdoors are open…

So if you install or manage an ODA, understand the importance of keeping it clean and get rewarded during patching. You really don’t want issues during patching, especially if your downtime is e.g. in the middle of the night.


Fixes for ADF Cloud User Experience Rapid Development Kit (RDK) UI Layout Issues

Andrejus Baranovski - Mon, 2017-06-19 13:45
If you were evaluating the Oracle RDK UI template, you probably noticed an information popup coming up when the RDK home page is loaded. The popup is loaded through a showPopupBehavior listener, which is executed on the Welcome page load event. Such a popup is not required in practice and usually is disabled. But as soon as you disable it, there will be layout issues with the Welcome page. The user information widget will not align the name, and the menu navigation items will not be ordered correctly:


This is not nice. And you will get such behaviour only when the popup is not loaded:


I looked into it in more detail and saw that a second HTTP PPR request is executed when the popup is loaded. It seems this second HTTP request was triggering a partial response, and this was forcing the UI to load correctly:


Fortunately I found a simple fix for that. We need to set layout="horizontal" for the springboard panelGroupLayout component located in the Welcome page:
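
Since the screenshot is not reproduced here, a sketch of the change (the component id and contents are assumed):

<!-- Welcome page: springboard panelGroupLayout with the fix applied -->
<af:panelGroupLayout id="pglSpringboard" layout="horizontal">
  <!-- springboard navigation items -->
</af:panelGroupLayout>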


This change does the job, and now the Welcome page layout is rendered correctly from the start, even without loading the popup and forcing a second HTTP PPR request:


There is another issue, related to panelGridLayout usage in ADF Task Flows loaded through the Film Strip page. You can check my previous example about customising/extending the RDK template - Extending ADF Cloud User Experience Rapid Development Kit (RDK). Let's assume a use case with an ADF Task Flow implementing two fragments (search and edit functionality):


The search screen renders an ADF list implemented using panelGridLayout:


Navigate to the edit screen:


Try to navigate back to the search screen, and you will get an empty list displayed:


The fix is simple. RDK is using the property stretchChildren="first" in the FilmStrip page, and this seems to break the UI layout for regions with the panelGridLayout component:


Remove the stretchChildren="first" property from the FilmStrip page showDetailItem component with id="sdi1":
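
Again as a sketch, since the screenshot is not reproduced here (only the id comes from the post; the other attributes are assumed):

<!-- FilmStrip page: stretchChildren="first" removed from this component -->
<af:showDetailItem id="sdi1" text="Film Strip">
  <!-- film strip region content -->
</af:showDetailItem>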


With this fix applied, try to navigate from edit to search:


This time the search page layout with the panelGridLayout component is displayed as it should be:


Download extended RDK application code with applied fixes - AppsCloudUIKit_v3.zip.

Agile Development with PL/SQL

Gerger Consulting - Mon, 2017-06-19 13:12
Agile Development gives us the ability to work on multiple features at the same time and to change which ones to ship at any point in time, quickly. This can be challenging for PL/SQL teams to accomplish, to say the least. This short video shows how Gitora, a version control tool for PL/SQL, helps Oracle PL/SQL developers solve this problem.





If you prefer reading a step by step guide instead of watching a video, please click here.
Categories: Development

Unify: Could it be any easier?

Rittman Mead Consulting - Mon, 2017-06-19 09:00

Rittman Mead’s Unify is the easiest and most efficient method to pull your OBIEE reporting data directly into your local Tableau environment. No longer will you have to worry about database connection credentials, Excel exports, or any other roundabout way to get your data where you need it to be.

Unify leverages OBIEE’s existing metadata layer to provide quick access to your curated data through a standard Tableau Web Data Connector. After a short installation and configuration process, you can be building Tableau workbooks from your OBIEE data in minutes.

This blog post will demonstrate how intuitive and easy it is to use the Unify application. We will only cover using Unify and its features, as once the data gets into Tableau it can be used the same as any other Tableau Data Source. The environment shown already has Unify installed and configured, so we can jump right in and start using the tool immediately.

To start pulling data from OBIEE using Unify, we need to create a new Web Data Connector Data Source in Tableau. This data source will prompt us for a URL to access Unify. In this instance, Unify is installed as a desktop application, so the URL is http://localhost:8080/unify.

Once we put in the URL, we’re shown an authentication screen. This screen will allow us to authenticate against OBIEE using the same credentials. In this case, I will authenticate as the weblogic user.

Once authenticated, we are welcomed by a window where we can construct an OBIEE query visually. On the left hand side of the application, I can select the Subject Area I wish to query, and I am shown a list of tables and columns in the selected Subject Area. There are additional options along the top of the window, and I can see all saved queries on the right hand side of the window.

The center of the window is where we can see the current query, as well as a preview of the query results. Since I have not started building a query yet, this area is blank.

Unify allows us to either build a new query from scratch, or select an existing OBIEE report. First, let’s build our own query. The lefthand side of the screen displays the Subject Areas and Columns which I have access to in OBIEE. With a Subject Area selected, I can drag columns, or double click them, to add them to the current query. In the screenshot above, I have added three columns to my current query, “P1 Product”, “P2 Product Type”, and “1 - Revenue”.

If we wanted to, we could also create new columns by defining a Column Name and Column Formula. We even have the ability to modify existing column formulas for our query. We can do this by clicking the gear icon for a specific column, or by double-clicking the grey bar at the top of the query window.

It’s also possible to add filters to our data set. By clicking the Filter icon at the top of the window, we can view the current filters for the query. We can then add filters the same way we would add columns, by double clicking or dragging the specific column. In the example shown, I have a filter on the column "D2 Department" where the column value equals "Local Plants Dept.".

Filters can be configured using any of the familiar methods, such as checking if a value exists in a list of values, numerical comparisons, or even using repository or session variables.

Now that we have our columns selected and our filters defined, we can execute this query and see a preview of the result set. By clicking the “Table” icon in the top header of the window, we can preview the result.

Once we are comfortable with the results of the query, we can export the results to Tableau. It is important to understand that the preview data is trimmed down to 500 rows by default, so don’t worry if you think something is missing! This value, and the export row limit, can be configured, but for now we can export the results using the green “Unify” button at the top right hand corner of the window.

When this button is clicked, the Unify window will close and the query will execute. You will then be taken to a new Tableau Workbook with the results of the query as a Data Source. We can now use this query as a data source in Tableau, just as we would with any other data source.

But what if we have existing reports we want to use? Do we have to rebuild the report from scratch in the web data connector? Of course not! With Unify, you can select existing reports and pull them directly into Tableau.

Instead of adding columns from the lefthand pane, we can select the "Open" icon, which will let us select an existing report. We can then export this report to Tableau, just as before.

Now let’s try to do something a little more complicated. OBIEE doesn’t have the capability to execute queries across Subject Areas without common tables in the business model; Tableau, however, can perform joins between two data sources (so long as we select the correct join conditions). We can use Unify to pull two queries from OBIEE from different Subject Areas, and perform a data mashup with the two Subject Areas in Tableau.

Here I’ve created a query with “Product Number” and “Revenue”, both from the Subject Area “A - Sample Sales”. I’ve saved this query as “Sales”. I can then click the “New” icon in the header to create a new query.

This second query is using the “C - Sample Costs” Subject Area, and is saved as “Costs”. This query contains the columns “Product Number”, “Variable Costs”, and “Fixed Costs”.

When I click the Unify button, both of these queries will be pulled into Tableau as two separate data sources. Since both of the queries contain the “Product Number” column, I can join these data sources on the “Product Number” column. In fact, Tableau is smart enough to do this for us:

We now have two data sets, each from a different OBIEE subject area, joined and available for visualization in Tableau. Wow, that was easy!

What about refreshing the data? Good question! The exported data sources are published as data extracts, so all you need to do to refresh the data is select the data source and hit the refresh button. If you are not authenticated with OBIEE, or your session has expired, you will simply be prompted to re-authenticate.

Using Tableau to consume OBIEE data has never been easier. Rittman Mead’s Unify allows users to connect to OBIEE as a data source within a Tableau environment in an intuitive and efficient method. If only everything was this easy!

Interested in getting OBIEE data into Tableau? Contact us to see how we can help, or head over to https://unify.ritt.md to get a free Unify trial version.

Categories: BI & Warehousing
