Feed aggregator

Linux locate/print block device attributes ASMLib

Michael Dinh - Tue, 2016-12-13 08:38

I just learned about this, specifically as a way to identify whether a disk is being used by ASMLib.

blkid – command-line utility to locate/print block device attributes
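
If you only want to list the disks stamped by ASMLib, blkid can filter on the TYPE token directly (a quick sketch, using the same host shown below):

[root@rac02:/root]
# /sbin/blkid -t TYPE="oracleasm" | sort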

Oracle Linux Server release 6.4
[root@rac02:/root]
# ll /etc/*release*
-rw-r--r--. 1 root root 32 Feb 22  2013 /etc/oracle-release
-rw-r--r--. 1 root root 55 Feb 22  2013 /etc/redhat-release
lrwxrwxrwx. 1 root root 14 Nov 29  2014 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root 45 Feb 22  2013 /etc/system-release-cpe

[root@rac02:/root]
# cat /etc/system-release
Oracle Linux Server release 6.4

[root@rac02:/root]
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)

[root@rac02:/root]
# /sbin/blkid |sort
/dev/mapper/vg01-lv_root: UUID="97968448-9997-42a0-a106-c438a47345b8" TYPE="ext4"
/dev/mapper/vg01-lv_swap: UUID="8fe8b719-a367-4448-9ae2-76b376ad91d5" TYPE="swap"
/dev/sda1: UUID="33162370-b7a1-45ae-9c8c-966b8bd720e3" TYPE="ext4"
/dev/sda2: UUID="qHuGcf-Ntnr-hoLR-qtEb-cdgz-2sph-jgaKDK" TYPE="LVM2_member"
/dev/sdb1: LABEL="DISK1" TYPE="oracleasm"
/dev/sdc1: LABEL="DISK2" TYPE="oracleasm"
/dev/sdd1: LABEL="DISK3" TYPE="oracleasm"
/dev/sde1: LABEL="DISK4" TYPE="oracleasm"
/dev/sdf1: LABEL="DISK5" TYPE="oracleasm"
/dev/sdg1: LABEL="DISK6" TYPE="oracleasm"
/dev/sdh1: LABEL="DISK7" TYPE="oracleasm"
/dev/sdi1: LABEL="DISK8" TYPE="oracleasm"
Oracle Linux Server release 6.6
[root@arrow1 ~]# ll /etc/*release*
-rw-r--r--. 1 root root 32 Oct 15  2014 /etc/oracle-release
-rw-r--r--. 1 root root 55 Oct 15  2014 /etc/redhat-release
lrwxrwxrwx. 1 root root 14 Jun 24  2015 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root 45 Oct 15  2014 /etc/system-release-cpe

[root@arrow1 ~]# cat /etc/system-release
Oracle Linux Server release 6.6

[root@arrow1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)

[root@arrow1 ~]# /sbin/blkid
/dev/sda1: UUID="7c42cea0-f23d-4fec-ad0f-b1bf4b50b17e" TYPE="ext4"
/dev/sda2: UUID="ZVWdTj-8EQN-F3ac-3Tev-xTsb-ssL8-7euz5X" TYPE="LVM2_member"
/dev/sdb1: UUID="KUEy1I-X5i2-CuSm-krTA-R5K5-i4je-Ek56ZK" TYPE="LVM2_member"
/dev/mapper/vg01-LogVol01: UUID="8e2c236c-dd87-4be6-9eaf-f72f32f0dcc6" TYPE="ext4"
/dev/mapper/vg01-LogVol00: UUID="919bab13-82b6-425b-95c8-87975cb0bf20" TYPE="swap"
Oracle Linux Server release 7.3
[root@owl ~]# ll /etc/*release*
-rw-r--r--. 1 root root  32 Nov  7 22:07 /etc/oracle-release
-rw-r--r--. 1 root root 398 Nov  7 22:07 /etc/os-release --- (New as of OEL7?)
-rw-r--r--. 1 root root  52 Nov  7 22:07 /etc/redhat-release
lrwxrwxrwx. 1 root root  14 Dec 12 23:31 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root  31 Nov  7 22:07 /etc/system-release-cpe

[root@owl ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@owl ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

[root@owl ~]# cat /etc/os-release
NAME="Oracle Linux Server"
VERSION="7.3"
ID="ol"
VERSION_ID="7.3"
PRETTY_NAME="Oracle Linux Server 7.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:3:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.3
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.3

[root@owl ~]# /sbin/blkid
/dev/sr0: UUID="2016-11-21-16-51-51-00" LABEL="VBOXADDITIONS_5.1.10_112026" TYPE="iso9660"
/dev/sda1: UUID="65bf9c73-fb56-49c3-b55e-85de8f318892" TYPE="xfs"
/dev/sda2: UUID="p4rqW2-uzvr-6DpX-nof0-7tgf-Yfvn-n8ehaB" TYPE="LVM2_member"
/dev/mapper/vg01-root: UUID="de9c65d9-60cc-45b8-b9fa-7459ef3f3850" TYPE="xfs"
/dev/mapper/vg01-swap: UUID="1f85c253-261e-4021-9468-651de69b8e7a" TYPE="swap"
[root@owl ~]#

Upgrade to Enterprise Manager 13.2

Yann Neuhaus - Tue, 2016-12-13 08:11

I will describe how to upgrade Enterprise Manager 13.1.0.0 to the new 13.2.0.0 version.

First of all, we have to ensure that the latest PSU has been applied to the repository database.

It is mandatory to apply the following patch before upgrading to Enterprise Manager Cloud Control 13.2.0.0:

DATABASE PATCH SET UPDATE 12.1.0.2.160419

To install the patch you have to check the following point:

- you have to use Oracle Interim Patch Installer version 12.2.0.1.8

Once you have downloaded the patch and unzipped it, you can check for potential conflicts:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail 
-phBaseDir /oracle/u01/app/oracle/software/22899531/22806133
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed. 
OPatch succeeded.

 

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail -phBaseDir 
/oracle/u01/app/oracle/software/22899531/23006522
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

And finally, you check for the system space available:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/opatch prereq 
CheckSystemSpace -phBaseFile /oracle/u01/app/oracle/software/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.

Then, once the prerequisites are ok, you stop the repository database and run the classical opatch apply command from the directory where you have unzipped the PSU.
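
That apply step is not captured here, but a minimal sketch of it looks like this (using the same illustrative paths as above; a 12.1 PSU would normally be followed by datapatch once the database is restarted):

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] cd /oracle/u01/app/oracle/software/22899531/22806133
oracle@vmCC13c:/oracle/u01/app/oracle/software/22899531/22806133/ [EMREP13C] $ORACLE_HOME/OPatch/opatch apply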

You finally check the Oracle inventory:

oracle@vmtestoraCC13c:/home/oracle/ [EMREP13C] opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
--------------------------------------------------------------------------------
Local Machine Information:
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 12c                                      12.1.0.2.0
There are 1 products installed in this Oracle Home. 
Interim patches (1) : 
Patch  22806133     : applied on Tue Nov 22 11:19:55 CET 2016
Unique Patch ID:  19983161
Patch description:  "DATABASE BUNDLE PATCH: 12.1.0.2.160419 (22806133)"

Secondly you have to disable the optimizer_adaptive_features parameter in the repository database:

SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.

Then we have to ensure that the tables in the Management Repository do not have any snapshots created:

SQL> select master , log_table from all_mview_logs where log_owner='SYSMAN';
no rows selected

We verify if any login triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';

We verify if any logoff triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';

If we find a trigger enabled, we disable it:

SQL> SELECT trigger_name, owner from sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
 
TRIGGER_NAME         OWNER
GSMLOGOFF            GSMADMIN_INTERNAL
 
SQL> alter trigger gsmadmin_internal.gsmlogoff disable;
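
Do not forget to re-enable it once the upgrade is finished:

SQL> alter trigger gsmadmin_internal.gsmlogoff enable;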

Then you have to copy the EMKEY to the repository database:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl config emkey 
-copy_to_repos -sysman_pwd dbi05manager
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, 
secure the EMKey by running "emctl config emkey -remove_from_repos".
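
As the output itself reminds us, the EMKey should be secured again once the upgrade is complete; a sketch, run from the new OMS home (the sysman password is site-specific):

oracle@vmCC13c:/u03/app/oracle/oms13cr2/bin/ [oms13c] emctl config emkey -remove_from_repos -sysman_pwd dbi05manager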

To shut down the OMS, we follow this procedure:

We stop the JVMD and ADP engines explicitly:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework
 
oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms adp stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework

Then we shut the OMS down completely:

oracle@v333:/home/oracle/ [oms13c] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down

It is mandatory to stop the management agent in order to avoid errors during the migration phase:

oracle@vmtestoraCC13c:/home/oracle/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

WATCH OUT: before running the installer, unset the PERLLIB and PERL5LIB environment variables (if, for example, they are defined for your oracle user), otherwise you will get the following error:

/opt/oracle/Middleware13c/perl/lib/5.10.0/x86_64-linux-thread-multi/auto/Cwd/Cwd.so: 
undefined symbol: Perl_Gthr_key_ptr
ERROR: Unable to continue with the installation 
because some one-off patches could not be applied successfully.
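
A quick way to do that in the shell session you launch the installer from (a simple sketch):

oracle@v333:/opt/software/ [oms13c] unset PERLLIB
oracle@v333:/opt/software/ [oms13c] unset PERL5LIB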

Then, once the OMS and the management agent are stopped, you can run the installer:

oracle@v333:/opt/software/ [oms13c] ./em13200_linux64.bin
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 7999 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    
Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer 
from /tmp/OraInstall2016-11-18_10-07-45AM
====Prereq Config Location main===
/tmp/OraInstall2016-11-18_10-07-45AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2016-11-18_10-07-45AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2016-11-18_10-07-45AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_f8wrWz/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/opt/software
EMFileLoc:/tmp/OraInstall2016-11-18_10-07-45AM/oui/em/
ScratchPathValue :/tmp/OraInstall2016-11-18_10-07-45AM

The first installer screen appears:

em1

I chose not to receive security alerts, then I select Next:

em2

I select Next and skip the software updates

em3

We check that the prerequisites are ok

em4

We choose to upgrade an existing Enterprise Manager system, we enter the old Middleware home, and we select Next

em5

We enter the new Middleware home, and we select Next

em6

We enter the repository connection details, the sys and sysman passwords, we confirm we have a valid repository backup, we disable the DDMP jobs and we select Next

em7

We select Yes to fix the issues

em8

We review the plugins and we select Next

em9

We have the possibility to add plugins we want to deploy while upgrading to EM 13.2

em10

We enter the weblogic username and password and we select Next

em11

I chose not to configure a shared location for Oracle BI Publisher

em12

We choose the default ports and we select Next

em13

We select Upgrade:

em14

The upgrade is running fine:=)

At the end of the upgrade, you have to run the root.sh script from the new OMS home, connected as root:

[root@vmCC13c oms13cr2]# . root.sh
/etc exist
/u03/app/oracle/oms13cr2

Finally you have successfully upgraded to EM 13.2:

em15

Once the OMS is upgraded, you have to upgrade the management agents with the classical procedure. From the EM console you select upgrade agent as follows:

em16

em17

You add the management agent, and you select Submit

em18

You can follow the upgrade phase; finally you have to run the root.sh script on the management agent and clean up the old agent environment with the EM console as follows:

You select the Post Agent Upgrade Tasks, you select the agent previously migrated:

em19

And you select Submit

The old agent home will be removed; you will just have to adapt your /etc/oratab file to give the new ORACLE_HOME for your 13.2 agent, as in the example below.
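
For example, an /etc/oratab entry along these lines (the agent home path is purely illustrative and depends on where the 13.2 agent was installed):

agent13c:/u01/app/oracle/agent13c/agent_13.2.0.0.0:N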

The last phase consists of deleting the old OMS home. As we were on version 13.1, we only have to check that nothing is running from the old environment, then delete the old home.

oracle@vmCC13c:/u03/app/oracle/oms13cr2/ [oms13c] ps -ef | grep oms | grep -v 13cr2
oracle   28463 27736  0 15:52 pts/5    00:00:00 grep --color=auto oms
 
oracle@vmCC13c:/u03/app/oracle/ [oms13c] ls
agent13c  CC13c_setupinfo.txt  gc_inst  gc_inst1  oms13c  oms13cr2  swlib
oracle@vmCC13c:/u03/app/oracle/ [oms13c] rm -rf oms13c

The new features are listed at the following URL and mainly concern virtualization, Middleware and Cloud Management, and incident management (for example, Always-On Monitoring installed on a different host).

http://docs.oracle.com/cd/E73210_01/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Finally, the upgrade to EM 13.2 was completed in a couple of hours and did not present any errors.

Cet article Upgrade to Enterprise Manager 13.2 est apparu en premier sur Blog dbi services.

Index Compression

Jonathan Lewis - Tue, 2016-12-13 07:11

Richard Foote has published a couple of articles in the last few days on the new (licensed under the advanced compression option) compression mechanism in 12.2 for index leaf blocks. The second of these pointed out that the new “high compression” mechanism was even able to compress single-column unique indexes – something that doesn’t make sense for, and isn’t allowed by, the older-style “leading edge deduplication” mechanism for index compression.

In 12.2 an index can be created (or rebuilt) with the option “compress advanced high” – and at the leaf-block level this will create “compression units” (possibly just one per leaf block, based on my early testing) that take the complexity of compression far beyond the level of constructing a directory of prefixes. Richard demonstrated the benefit by creating a table with a numeric unique index and then compressing the index, reducing its size from 2,088 leaf blocks to 965 leaf blocks, which is a pretty dramatic difference.

It crossed my mind, though, to wonder whether the level of compression was a side effect of the very straightforward code that Richard had used to create the data set: the table was a million rows with a primary key that had been generated as the rownum selected from the now-classic “connect by..” query against dual, and the row length happened to allow for 242 rows per 8KB table block.

If you pause to think about this data set you realise that if you pick the correct starting point and walk through 242 consecutive entries of the index you will be walking through 242 consecutive rows in the table starting from the zeroth row in a particular table block and ending at the 241st row in that block. A rowid (as stored by Oracle in a simple B-tree index) consists of 6 bytes and the first five bytes of the rowid will be the same for the first 256 rows in any one table block (and the first four will still be the same for the remaining rows in the block). Richard’s data set will be very close to ideal for any byte-oriented, or bit-oriented, compression algorithm because (to use Oracle terminology) the data will have a perfect clustering_factor. (He might even have got a slightly better compression ratio if he’d used an /*+ append */ on the insert, or done a CTAS, and reduced the rowsize to get a uniform 256 rows per table block.)
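
If you want to check the rowid pattern for yourself, something like this (run against a table like the t1 created below) will show the file, block, and row number components:

select
        rowid,
        dbms_rowid.rowid_relative_fno(rowid)    file#,
        dbms_rowid.rowid_block_number(rowid)    block#,
        dbms_rowid.rowid_row_number(rowid)      row#
from    t1
where   rownum <= 5
;

For consecutive rows in the same block only the row number changes – which is exactly the sort of repetition a compression algorithm loves.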

So how do things change if you randomise the ordering of the key ? Here’s a variant on Richard’s code:


rem
rem	Script:		index_compression_12c_2.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Dec 2016
rem
rem	Last tested 
rem		12.2.0.1
rem

execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4
)
select
	rownum				id,
	lpad('x',130,'x')		padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
order by
	dbms_random.value
;

select table_name, blocks, round(num_rows/blocks,0) rpb from user_tables where table_name = 'T1';

drop index t1_i1;
create unique index t1_i1 on t1(id);
execute dbms_stats.gather_index_stats(user,'t1_i1');
select index_name, compression, pct_free, leaf_blocks from user_indexes where index_name = 'T1_I1';

drop index t1_i1;
create unique index t1_i1 on t1(id) compress advanced high;
execute dbms_stats.gather_index_stats(user,'t1_i1');
select index_name, compression, pct_free, leaf_blocks from user_indexes where index_name = 'T1_I1';

The initial drop index is obviously redundant, and the calls to gather_index_stats should also be redundant – but they’re there just to make it obvious I haven’t overlooked any checks for correctness in the build and stats.

You’ll notice that my row length is going to be slightly more “real-world” than Richard’s so that the degree of compression I get from nearly identical rowid values is likely to be reduced slightly, and I’ve completely randomised the order of key values.

So what do the results look like?

With the default pctfree = 10, and in a tablespace of uniform 1MB extents, 8KB blocks, utilising ASSM I get this:


TABLE_NAME               BLOCKS        RPB
-------------------- ---------- ----------
T1                        19782         51

INDEX_NAME           COMPRESSION     PCT_FREE LEAF_BLOCKS
-------------------- ------------- ---------- -----------
T1_I1                DISABLED              10        2088
T1_I1                ADVANCED HIGH         10        1303

Unsurprisingly the uncompressed index is exactly the same size as Richard’s (well, it was just the integers from 1 to 1M in both cases) but the compression ratio is significantly less – though still pretty impressive.

Of course, for this type of index my example probably goes to the opposite extreme from Richard’s. Realistically, if you have a sequence-based key with an OLTP pattern of data arrival then consecutive key values are likely to be scattered within a few blocks of each other rather than being scattered completely randomly across the entire width of the table; so a more subtle model (using a suitable number of concurrent processes to insert ids based on a sequence, perhaps) would probably get a better compression ratio than I did, though a worse one than Richard’s. There’s also the issue of the size of the key value itself – once you get to values in the order of 10 million to 100 million you’re looking at mostly 4 bytes (internal format) of storage where, for long runs of values, the first 3 bytes match, possibly leading to a better compression ratio.

Of course the question of globally partitioned indexes will be relevant for some people, since the principal reason for global indexes on partitioned tables is to enforce uniqueness on column combinations that don’t include the partition key, and that introduces another possible benefit – the rowid goes up to 10 bytes, of which the first 4 bytes are the object id of the table partition: depending on the nature of the partitioning, those repeated 4 bytes per row may be close to non-existent after compression, giving you a better compression ratio on globally partitioned indexes than you get on any other type of single-column unique index.

Once you start looking at the details there are a surprising number of factors that can affect how well advanced compression high can work.

Footnote:

Once you’ve created the index, you can start poking around in all sorts of ways to try and work out what the compression algorithm does. A simple block dump is very informative, with lots of clues in the descriptive details – and lots of puzzles when you start looking too closely. There are hints that this type of index compression adopts a strategy similar to “OLTP compression” for tables, in that compression occurs only as the leaf block becomes full – and possibly allows some sort of batching process within a leaf block before finally compressing to a single contiguous unit. (This is just conjecture, at present: the only firm statement I’ll make about the actual implementation of index compression is that it uses a lot of CPU; in my example the baseline create index took about 1.5 seconds of CPU, the compressed create took about 4.5 seconds of CPU.)
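
If you want to do some digging of your own, something like the following will get you started (the numbers are illustrative – the treedump level is the object_id of the index, and the file/block pair identifies a leaf block reported by the treedump):

SQL> alter session set events 'immediate trace name treedump level 98765';
SQL> alter system dump datafile 7 block 131;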

There are also a couple of amusing side effects that may confound those who use the old “validate index / query index_stats” two-step to decide when to rebuild indexes. Here’s what I got on the compressed index:


SQL> validate index t1_i1;

SQL> select blocks, lf_rows, lf_rows_len, btree_space, used_space, pct_used from index_stats;

    BLOCKS    LF_ROWS LF_ROWS_LEN BTREE_SPACE USED_SPACE   PCT_USED
---------- ---------- ----------- ----------- ---------- ----------
      1408    1000000    14979802    10416812   14994105        144

My index is using 144% of the space that it has been allocated. You don’t have to be very good at maths (or math, even) to realise that something’s wrong with that number.


12c Database: perl adcfgclone.pl appsTier : RC-40201: Unable to connect to Database

Online Apps DBA - Tue, 2016-12-13 06:52

12c Database: perl adcfgclone.pl appsTier : RC-40201: Unable to connect to Database CloneContext_***.log Shows the Message: Exception occurred: java.sql.SQLException: ORA-28040: No matching authentication protocol Environment: 12.1.3 Applications with 12c Database (12.1.0.2) Bug 14575666 In 12.1, the default value for the SQLNET.ALLOWED_LOGON_VERSION parameter has been updated to 11. This means that database clients using pre-11g JDBC […]

The post 12c Database: perl adcfgclone.pl appsTier : RC-40201: Unable to connect to Database appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

The Visual Plugin Pack for OBIEE

Rittman Mead Consulting - Tue, 2016-12-13 04:00

Last week we announced the Rittman Mead Open Source project, and released into open source:

  • the excellent Insights project, a javascript API/framework for building a new frontend for OBIEE, written by Minesh Patel
  • Enhanced usage tracking for OBIEE, to track click-by-click how your users interact with the application

Today it is the turn of the Visual Plugin Pack.....

What is the Visual Plugin Pack for OBIEE ?

Visual Plugin Pack (VPP) is a means by which users of OBIEE Answers can use custom JavaScript visualisations without having to write any javascript!

It is a framework that enables developers to build JavaScript visualisation plugins that report builders can then utilise and configure through the native OBIEE user interface.

I want to point out from the very start that, despite its name, the Visual Plugin Pack is not a pack of all-singing, all-dancing, super-duper visualisations for OBIEE.

Instead, VPP should be thought of as a framework that allows you to quickly develop and integrate all-singing, all-dancing, super-duper visualisations that will work natively within OBIEE.

Subtle difference, but an important one.

So what does it do ?

Essentially, VPP allows you to accelerate the development and deployment of custom, configurable and reusable OBIEE JavaScript visualisations.

Back in 2013 I wrote this post describing how to embed a D3 visualisation within OBIEE. The same method will still work today, but it's a cumbersome process and requires heavy use of the narrative form, which, let's be honest, is a painful experience when it comes to JavaScript development.

Some drawbacks with this method:

  • Code editing in the Narrative view is not productive.
  • Reusing visualisations in other analyses requires the copying and pasting of code.
  • Basic Visualisation configuration changes, for example, width, height, colours, font etc requires code changes.
  • Remapping Column bindings requires code changes.
  • JavaScript library dependencies and load order can be tricky to manage.

The Visual Plugin Pack attempts to address these issues by abstracting away the complexities of the Narrative form and allowing developers to focus on visualisation code, not OBIEE integration code.
If you choose to use VPP for your visualisations then you will never have to touch the narrative form, all visualisation development can take place outside of OBIEE in your favourite code editor and deployed to Weblogic when you are done.

VPP also allows you to define design-time controls that affect column bindings and visualisation behaviour. The example visualisation below has been written to accept 5 column bindings and 1 configuration component, which controls the visualisation size. You can create as many column bindings and configuration components as you need

scatterplot matrix

How do I get started ?

You can download or fork the repository from here.

Installation and developer guides can be found on the wiki.

There are several visualisations that come bundled with VPP, some more polished than others, but they should serve as good examples that can be enhanced further.

Summary

If you've got some in-house JavaScript skills and are looking to develop and integrate custom visualisations into OBIEE, then VPP can help alleviate a lot of the frustrations associated with the traditional method. Once you're up and running you'll be able to develop faster, integrate quickly and share your visualisations with all OBIEE report writers.

If you'd like to discuss how Rittman Mead can help with deployment or assist with custom visualisation development feel free to contact us.

Categories: BI & Warehousing

Query on application continuity

Tom Kyte - Tue, 2016-12-13 02:06
I'm thinking of configuring Application Continuity for my 2 node RAC (12c) database where my application is running on Weblogic server. Before I proceed with the setup of AC, I need to clarify/confirm few things regarding AC: 1. AC does not requir...
Categories: DBA Blogs

In the wait event "disk file operations i/o", P1/P2/P3 is 8/0/8; the first 8 refers to the file type – what kind of file does file type 8 stand for?

Tom Kyte - Tue, 2016-12-13 02:06
one of our DBs experienced a performance slow problem after a network issue, according to AWR report, the 2 bottlenecks are "disk file operations i/o" and "session login/logoff", since we have db audit turned on and it writes audit info to reboot os ...
Categories: DBA Blogs

Package Performance: Cloned vs Original

Tom Kyte - Tue, 2016-12-13 02:06
Hello Tom, This is my First POST. I experienced a performance issue with one of the package and in order to debug/troubleshoot the package, I cloned the original package and without changing any content of the cloned package; I ran a test for a...
Categories: DBA Blogs

Index Clustering Factor and Insert

Tom Kyte - Tue, 2016-12-13 02:06
HI Connor, I followed the below article by Tom, http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52asktom-1735913.html Lets ignore the CHAR, VARCHAR, and NUMBER context, and focus only on the clustering factor. Then, what woul...
Categories: DBA Blogs

APPEND hint in MERGE without INSERT clause

Tom Kyte - Tue, 2016-12-13 02:06
Hello, from the performance point of view: does it make sense to add the APPEND hint into the MERGE statement in that case there is only an UPDATE clause? If yes, what is the benefit? Thank you, Zdenek
Categories: DBA Blogs

Is there any way to convert an complex object type to JSON in 12c

Tom Kyte - Tue, 2016-12-13 02:06
Hello Tom We have a procedure which has an complex object type as output (we didn use Refcursor as we had lot of reusable codes in case of using object). This data is populated in Application layer for display and the same is moved back to a JSON...
Categories: DBA Blogs

Does Flashback Data Archive work with EBS?

Steven Chan - Tue, 2016-12-13 02:05

Flashback Data Archive is a group of Oracle Database features that let you view past states of database objects or return database objects to a previous state without using point-in-time media recovery. Oracle Flashback features use the Automatic Undo Management (AUM) system to obtain metadata and historical data for transactions. They rely on undo data, which are records of the effects of individual transactions.

Flashback Data Archive was formerly known as Total Recall. Flashback Data Archive is licensed via the Oracle Advanced Compression (ACO) database option.

Oracle Flashback Data Archive diagram

Oracle Flashback is not explicitly certified with the E-Business Suite, but it is supported.  We do not currently publish any Best Practices whitepapers with guidance on how to use Flashback Data Archive with EBS.  The generic Oracle Flashback Technology documentation is applicable to E-Business Suite databases.
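
For reference, the basic mechanics are generic database features rather than anything EBS-specific; a minimal sketch (the tablespace, archive, and table names are illustrative):

SQL> create flashback archive fda_1year tablespace fda_ts retention 1 year;
SQL> alter table hr.employees flashback archive fda_1year;
SQL> select * from hr.employees as of timestamp (systimestamp - interval '1' hour);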

Categories: APPS Blogs

Why You Need a Long Term Career Plan and How to Create One

Complete IT Professional - Mon, 2016-12-12 19:57
Why should you have a career plan? Do you even need to know what your job will be in the long term? Learn why and how to create a career plan in this article. The answer is “yes”. You need to know what your career will look like, long term. Or at least have an […]
Categories: Development

Getting started with Docker – 2 – building your own base image

Yann Neuhaus - Mon, 2016-12-12 13:30

In the last post we quickly looked at how you can install everything you need to run docker containers on a CentOS 7 host and brought up the official PostgreSQL image. However, there are good reasons not to rely on an existing image: you may want to deliver a pre-configured base image which includes everything your application requires, and you want to have control over that image. When this image needs to be updated you can update the base image and then re-deploy it wherever it is needed.

For CentOS there is a script on GitHub which you can use to build your base image. For other distributions you can check the docker documentation. This script basically makes use of the “--installroot” switch of yum, which allows you to install packages to a location other than the default one.
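
In essence the script boils down to something like this simplified sketch (not the script itself – just the idea of installing a package group into an alternative root):

[docker@centos7 ~]$ sudo yum -y --installroot=/tmp/rootfs --releasever=7 groupinstall Core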

Using this script we can build a CentOS 7 base image. There are three parameters you can use:

  -p ""  The list of packages to install in the container.
                   The default is blank.
  -g ""    The groups of packages to install in the container.
                   The default is "Core".
  -y      The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora

We’ll use the “-p” switch to install all the packages which are required to build PostgreSQL from source (this depends on the compile options, of course) and some additional packages which are useful for daily work (such as screen):

[docker@centos7 ~]$ sudo ./mkimage-yum.sh -p "gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip" centospg

Once done the new image is available locally:

[docker@centos7 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centospg            7.2.1511            184509483d52        40 seconds ago      510.6 MB
postgres            latest              78e3985acac0        2 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

If you wonder how that image made it into docker, take a look at the last lines of the script:

tar --numeric-owner -c -C "$target" . | docker import - $name:$version

What happened is that a tar file of the temporary chroot directory was created:

[docker@centos7 ~]$ ls -latr /tmp/mkimage-yum.sh.dKHtsq/
total 24
drwxrwxrwt.  2 root root    6 Aug 12  2015 tmp
dr-xr-xr-x.  2 root root    6 Aug 12  2015 sys
drwxr-xr-x.  2 root root    6 Aug 12  2015 srv
drwxr-xr-x.  2 root root    6 Aug 12  2015 run
dr-xr-x---.  2 root root    6 Aug 12  2015 root
dr-xr-xr-x.  2 root root    6 Aug 12  2015 proc
drwxr-xr-x.  2 root root    6 Aug 12  2015 opt
drwxr-xr-x.  2 root root    6 Aug 12  2015 mnt
drwxr-xr-x.  2 root root    6 Aug 12  2015 media
drwxr-xr-x.  2 root root    6 Aug 12  2015 home
drwxr-xr-x.  2 root root 4096 Aug 12  2015 dev
dr-xr-xr-x.  2 root root    6 Aug 12  2015 boot
lrwxrwxrwx.  1 root root    7 Dec 10 15:02 bin -> usr/bin
lrwxrwxrwx.  1 root root    8 Dec 10 15:02 sbin -> usr/sbin
lrwxrwxrwx.  1 root root    9 Dec 10 15:02 lib64 -> usr/lib64
lrwxrwxrwx.  1 root root    7 Dec 10 15:02 lib -> usr/lib
dr-xr-xr-x. 17 root root 4096 Dec 10 15:02 .
drwxr-xr-x. 13 root root 4096 Dec 10 15:02 usr
drwxr-xr-x. 17 root root 4096 Dec 10 15:02 var
drwxr-xr-x. 19 root root 4096 Dec 10 15:02 etc
drwxrwxrwt. 11 root root 4096 Dec 12 11:43 ..

… which then was imported into docker with the “docker import” command.

To test if it really works we can start it and execute bash:

[docker@centos7 ~]$ docker run -it --rm  centospg:7.2.1511 bash
[root@cf690e9d9476 /]$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Fine, so far so good. Now we can really begin: we have our own CentOS base image which we want our PostgreSQL image to be based on. How, then, can we create a PostgreSQL image on top of our base image?

We’ll execute the commands necessary first and provide explanations afterwards. So, the first step:

[docker@centos7 ~]$ sudo yum install -y git
[docker@centos7 ~]$ mkdir centospg
[docker@centos7 ~]$ cd centospg
[docker@centos7 centospg]$ git init
Initialized empty Git repository in /home/docker/centospglatest/.git/
[docker@centos7 centospg]$ git config --global user.email "daniel@abc.def"
[docker@centos7 centospg]$ git config --global user.name "Daniel"

Why do we need git? It is not required to use git at all, but you probably want the files used to build your containers to be managed by git, so that you can combine the advantages of git with the advantages of docker. It will make more and more sense as we step through the commands.

What we need to create now is a so-called Dockerfile. This file lists the instructions that Docker will execute to build your image. Let's go:

[docker@centos7 centospg]$ touch Dockerfile
[docker@centos7 centospg]$ git add Dockerfile 
[docker@centos7 centospg]$ git commit -m "initial" Dockerfile 
[master (root-commit) ce3727a] initial
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 Dockerfile

Our very basic Dockerfile will look like this:

# use our CentOS base images as source
FROM centospg:7.2.1511
# set the PostgreSQL we will download
ENV PG_VERSION 9.6.1
# include the PostgreSQL binaries in the PATH
ENV PATH /u01/app/postgres/product/96/db_01/bin:$PATH
# add a postgres group and postgres user
RUN groupadd postgres
RUN useradd -g postgres -m postgres
# prepare the directories
RUN mkdir -p /u01/app/postgres
RUN chown postgres:postgres /u01/app/postgres
# allow sudo for the postgres user
RUN echo 'postgres ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# download, configure, compile and install PostgreSQL from source
USER postgres
RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN cd /var/tmp; tar -axf /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN rm -f /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN cd /var/tmp/postgres*; ./configure --prefix=/u01/app/postgres/product/96/db_01
RUN cd /var/tmp/postgres*; make
RUN cd /var/tmp/postgres*; make install
# cleanup
RUN rm -rf /var/tmp/postgres*

Using this Dockerfile we can build our PostgreSQL image:

[docker@centos7 centospg]$ docker build -t centospg:PG9.6.1 .

The output of this is quite long, here is a snippet:

Sending build context to Docker daemon 45.06 kB
Step 1 : FROM centospg:7.2.1511
 ---> 184509483d52
Step 2 : ENV PG_VERSION 9.6.1
 ---> Running in 054900c7ebe1
 ---> 866815b9f092
Removing intermediate container 054900c7ebe1
Step 3 : ENV PATH /u01/app/postgres/product/96/db_01/bin:$PATH
 ---> Running in 46bcf7667a06
 ---> 94c9adb0402b
Removing intermediate container 46bcf7667a06
Step 4 : RUN groupadd postgres
 ---> Running in 24a7d9b7a1ea
 ---> eb4ff8268e2e
Removing intermediate container 24a7d9b7a1ea
Step 5 : RUN useradd -g postgres -m postgres
 ---> Running in 3e09b556fed8
 ---> acff1dcf2d4c
Removing intermediate container 3e09b556fed8
Step 6 : RUN mkdir -p /u01/app/postgres
 ---> Running in 999a62d075c0
 ---> fa4bdfa74d31
Removing intermediate container 999a62d075c0
Step 7 : RUN chown postgres:postgres /u01/app/postgres
 ---> Running in 37773e484260
 ---> 668c491b534b
Removing intermediate container 37773e484260
Step 8 : RUN echo 'postgres ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
 ---> Running in bb9cbfd20623
 ---> 589959efbda5
Removing intermediate container bb9cbfd20623
Step 9 : USER postgres
 ---> Running in f70b8c70c3fc
 ---> 32d3d3d603d2
Removing intermediate container f70b8c70c3fc
Step 10 : RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2
 ---> Running in c5cc11840a15
--2016-12-12 12:43:05--  https://ftp.postgresql.org/pub/source/v9.6.1/postgresql-9.6.1.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 174.143.35.246, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|174.143.35.246|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19260568 (18M) [application/x-bzip]
Saving to: '/var/tmp/postgresql-9.6.1.tar.bz2'

     0K .......... .......... .......... .......... ..........  0%  180K 1m44s
    50K .......... .......... .......... .......... ..........  0%  368K 77s
...
 ---> 645cf59717f4
Removing intermediate container c5cc11840a15
Step 11 : RUN cd /var/tmp; tar -axf /var/tmp/postgresql-${PG_VERSION}.tar.bz2
 ---> Running in 6a47968ddeb5
...
# PostgreSQL configure, make, make install
...
PostgreSQL installation complete.
 ---> 7f6b11b357d7
Removing intermediate container 041441816c4d
Step 16 : RUN rm -rf /var/tmp/postgres*
 ---> Running in 480cc2157b9a
 ---> a7b0bf1d1c35
Removing intermediate container 480cc2157b9a
Successfully built a7b0bf1d1c35

Once all of that completed we have a new Docker image:

[docker@centos7 centospg]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centospg            PG9.6.1             a7b0bf1d1c35        45 seconds ago      706.7 MB
centospg            7.2.1511            184509483d52        47 hours ago        510.6 MB
postgres            latest              78e3985acac0        4 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

Using this image we can now bring up a container which complies with all the standards we built into the base image:

[docker@centos7 centospg]$ docker run -it centospg:PG9.6.1 bash
[postgres@7ac7780b0b1b /]$ which initdb
/u01/app/postgres/product/96/db_01/bin/initdb
[postgres@7ac7780b0b1b /]$ initdb --version
initdb (PostgreSQL) 9.6.1
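
From there you could already initialise and start a cluster inside the container to prove the binaries work end to end (a quick sketch; the data directory location is arbitrary):

[postgres@7ac7780b0b1b /]$ initdb -D /home/postgres/data
[postgres@7ac7780b0b1b /]$ pg_ctl -D /home/postgres/data -l /home/postgres/logfile start
[postgres@7ac7780b0b1b /]$ psql -c "select version()" postgres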

Finally we commit our Dockerfile:

[docker@centos7 centospg]$ git commit -m "First working version" Dockerfile 
[master f0ba897] First working version
 1 file changed, 25 insertions(+)

Something to start with, isn’t it?

Note: The Docker best practices tell you to add only a few instructions to a Dockerfile because every instruction creates a new layer (see the sketch below). In general you should run only one service per image. This makes it easier to scale your containers.
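
Applied to the Dockerfile above, that would mean collapsing the download/compile/cleanup steps into a single RUN instruction, roughly like this (a sketch of the idea, not a tested drop-in replacement):

RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2 \
 && cd /var/tmp && tar -axf postgresql-${PG_VERSION}.tar.bz2 \
 && cd postgresql-${PG_VERSION} \
 && ./configure --prefix=/u01/app/postgres/product/96/db_01 \
 && make && make install \
 && rm -rf /var/tmp/postgres*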

 

Cet article Getting started with Docker – 2 – building your own base image est apparu en premier sur Blog dbi services.

I’m Colton Fischer and this is how I work

Duncan Davies - Mon, 2016-12-12 08:00

The next profile in our ‘How I Work‘ series is Colton Fischer. Colton is one of the crop of new PeopleSoft bloggers bursting onto the scene during 2016. His PeopleSoft Mods blog is a treasure trove of great content, and if you’ve not been there before it’s worth scrolling back through Colton’s historical posts – he doesn’t just look at existing delivered functionality but extends PeopleSoft in ways where the business benefit is clear to see, frequently explaining how you can get the same results.

colton_fischer

Name: Colton Fischer

Occupation: Application Developer at the University of Houston
Location: Houston, Texas, USA
Current computer: My desktop is a Dell Optiplex 9020 and my laptop is a Dell Precision 3510
Current mobile devices: Samsung Galaxy S5
I work: Eagerly

What apps/software/tools can’t you live without?
Snipping Tool – Helpful tool for when I need a quick screenshot to document or demonstrate something
VPN Client and Remote Desktop – I don’t work from home too much, but this is a lifesaver when I forget some files on my machine at the office
YouTube Mobile App – I am more of a visual learner, so I enjoy watching educational YouTube videos on my phone when I have some down time
ESPN Mobile App – Allows me keep up with collegiate and professional sports

Besides your phone and computer, what gadget can’t you live without?
I’d have to say my coffee maker. That thing gets me through each morning because I don’t know what I would do if I didn’t have my coffee.

What’s your workspace like?
I would say that my workspace is pretty basic. I will admit that it is not normally as clean as it is in this picture. Most of the work I do happens on my desktop computer. I use my laptop for running personal demo systems to do “experimental” development on. I also use the systems on my laptop when I do development that requires constant app and web server bounces so that I don’t bother anyone else.

desk

What do you listen to while you work?
I enjoy most all genres of music, but what I listen to while I work relies heavily on the type of work that I am doing. If I am doing something that requires me to think hard, I listen to soft acoustic guitar songs. If I don’t need to think too hard, then I mostly find myself listening to metal rock, punk rock, and occasionally dubstep.

What PeopleSoft-related productivity apps do you use?
I have App Designer, SQL Developer, and Chrome windows open at all times. I like to use Chrome add-ons or other web-based productivity apps. I use Advanced REST client for Chrome when I need to work with web services. When I need to quickly view some unformatted XML, I use an online XML viewer to format it for me. I often work with base64 encoded values, so I use an online base64 decoder to obtain the decoded values. I also use the Chrome DevTools to view the source behind fields on PeopleSoft pages.

Do you have a 2-line tip that some others might not know?
I am not sure how well-known this is, but I recently discovered that meta-HTML can be used in the message catalog. Using meta-HTML (or even regular HTML) in messages can allow for some cool configuration-based functionality.

What SQL/Code do you find yourself writing most often?
I usually write queries on the security tables that deal with roles and permission lists. However, here lately I have been writing some queries on the PSACCESSLOG and PSPTLOGINAUDIT tables. There is some really interesting information in these tables from a security perspective.

What would be the one item you’d add to PeopleSoft if you could?
I would like to see some built-in functionality to make it easier to identify sensitive data fields (bank account numbers, SSNs, etc.) in PeopleSoft. It would be nice if the field definitions that are used to store/display sensitive data had a setting to distinguish the level of sensitivity. Having this information stored in the meta-data tables would make it easy to write queries to determine where in the application sensitive data is being exposed. This would, in turn, allow PeopleSoft security administrators to more effectively secure their PeopleSoft applications.

What everyday thing are you better at than anyone else?
I am not sure if I am better than anyone at this, but I am good at time management and not procrastinating.

What do you do to keep yourself healthy and happy?
I enjoy playing golf and tennis when I have someone available to play with. When I am solo and I want to do something physical, I usually go skateboarding. Something else that keeps me happy (but not so healthy) is playing Xbox.

What’s the best advice you’ve ever received?
The best piece of advice I’ve received is to start each day fresh and with a clean slate regardless of how the previous day went.  It’s easy to carry struggles and frustrations with us from day to day, but it only hinders your productivity and outlook.


ORA-02070: database does not support in this context

Tom Kyte - Mon, 2016-12-12 07:46
Hi, I am facing the below exception - ORA-02070: database does not support in this context Here is the query I am using - INSERT INTO table_1 (a,b,c) SELECT table_name , partition_name , subpartition_name FROM user_subpartitions; ...
Categories: DBA Blogs

Problem with PL/SQL, Cursor and trigger

Tom Kyte - Mon, 2016-12-12 07:46
Hi, I have some little problem with my exercises: i have these two tables CREATE TABLE PRENOTAZIONI ( CodPrenotazioni NUMBER(5) PRIMARY KEY, DataPrenotazioni DATE DEFAULT SYSDATE NOT NULL, CodProg NUMBER (3) NOT NULL, CodUtente VARCHAR2 (20) N...
Categories: DBA Blogs

Date Validation without function

Tom Kyte - Mon, 2016-12-12 07:46
Hi There, I've a string column and i would like to validate it to date between 01 Jan 1930 to 01 Jan 2030, Please help me. Date might be anything like below 1226547879 basdhbcus489 34^&dsulch56
Categories: DBA Blogs

Globally unique integers

Tom Kyte - Mon, 2016-12-12 07:46
My application uses NUMBER columns as primary keys, and uses sequences to populate the column (via a trigger with the typical "if :new.customer_id is null then select seq_customer_id.nextval into :new.customer_id from dual" construct). Now, I woul...
Categories: DBA Blogs

Most efficient way to UNPIVOT a table with large # of columns

Tom Kyte - Mon, 2016-12-12 07:46
Database is 11.2.0.4. At this point, cannot use full featured FLASHBACK ARCHIVE with CONTEXT... but it would have the same issues I think <b>ENVIRONMENT</b> I have a change logging table (e.g. CHANGESTABLE) that is basically the mirror image col...
Categories: DBA Blogs
