Feed aggregator

Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations?

Yann Neuhaus - Wed, 2016-12-14 01:33

For security reasons, you may want your Data Guard operations to be done by a different UNIX user and a different Oracle user that is not as highly privileged as SYSDBA. This is exactly where the SYSDG administrative privilege for Oracle Data Guard operations comes into play.

The SYSDG privilege is quite powerful. It allows you to work with the Broker (DGMGRL) command line interface and, besides that, it enables the following operations:

  • STARTUP
  • SHUTDOWN
  • ALTER DATABASE
  • ALTER SESSION
  • ALTER SYSTEM
  • CREATE RESTORE POINT (including GUARANTEED restore points)
  • CREATE SESSION
  • DROP RESTORE POINT (including GUARANTEED restore points)
  • FLASHBACK DATABASE
  • SELECT ANY DICTIONARY
  • SELECT
    • X$ tables (that is, the fixed tables)
    • V$ and GV$ views (that is, the dynamic performance views)
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • DELETE
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • EXECUTE
    • SYS.DBMS_DRS

In addition, the SYSDG privilege enables you to connect to the database even if it is not open.

Ok. Let’s give it a try. I want to give the user scott all the privileges he needs for the Data Guard operational tasks. So … I create a UNIX user scott and a database user scott with the SYSDG privilege.

[root@dbidg02 ~]# useradd scott
[root@dbidg02 ~]# usermod -a -G sysdg scott
[root@dbidg02 ~]# cat /etc/group |grep sysdg
sysdg:x:54324:oracle,scott

SQL> create user scott identified by tiger;

User created.

SQL> grant sysdg to scott;

Grant succeeded.

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS where USERNAME = 'SCOTT';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
SCOTT                  FALSE FALSE FALSE TRUE  FALSE
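
A quick sanity check before going further. The following is just a minimal sketch of how such a connection would look in this setup (output abbreviated and illustrative). Note two things: the database does not need to be open (here the standby is only mounted), and when connecting AS SYSDG the session user is SYSDG, not SCOTT:

[scott@dbidg01 ~]$ sqlplus scott/tiger@DBIT122_SITE1 as sysdg

SQL> show user
USER is "SYSDG"
SQL> select database_role, open_mode from v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY MOUNTED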

So far so good. Everything works. Scott can do switchovers, convert the physical standby to a snapshot standby, create restore points and much more. But what happens when an error pops up? Then you need to take a look at the most important log files in a Data Guard environment, which are the alert log and the broker log file.

If you do a “show database verbose”, you will find the locations of the log files at the end of the output, which is quite useful from my point of view. This is new in Oracle 12cR2.

DGMGRL> show database verbose 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Active Apply Rate:  152.00 KByte/s
  Maximum Apply Rate: 152.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    DBIT122
  ...
  ...
The Broker shows you the log file locations:

    Alert log               : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log

But unfortunately, the scott user can’t read those files, because there are no read permissions for others and
scott is not part of the oinstall group.

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 trace]$ ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 37787 Dec 13 10:36 drcDBIT122.log
[scott@dbidg01 trace]$ ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 221096 Dec 13 12:04 alert_DBIT122.log

So what possibilities do we have to overcome this issue?

1. We can add user scott to the oinstall group, but then we haven’t gained much security (see the sketch below)
2. We can set the parameter “_trace_files_public”=true, but when it is enabled, all Oracle trace files are world readable, not just the alert and broker log
3. We can configure XFS access control lists, so that user scott gets only the permissions he needs
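
For reference, the first two options would look roughly like this (a sketch only; “_trace_files_public” is an undocumented underscore parameter and is static, so it requires an spfile change and an instance restart):

# Option 1: add scott to the oinstall group
[root@dbidg01 ~]# usermod -a -G oinstall scott

-- Option 2: make all trace files world readable (not recommended)
SQL> alter system set "_trace_files_public"=true scope=spfile;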

For security reasons, I decided to go for the last one.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r drcDBIT122.log


oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r-----+ 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log
# file: alert_DBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl drcDBIT122.log
# file: drcDBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

Cool. Now the scott user is really able to do a lot of Data Guard operational tasks, including some debugging.

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log | grep 'MAXIMUM AVAILABILITY mode' | tail -1
Primary database is in MAXIMUM AVAILABILITY mode

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log |grep "Protection Mode" | tail -1
      Protection Mode:            Maximum Availability

Conclusion

Using XFS ACLs is quite cool if you want to give a specific user permissions on a file without adding him to a group or making all files world readable. But be careful to configure the same ACLs on all other standby nodes as well, and make sure that you use a backup solution which supports ACLs.

For example, using ‘cp’ or ‘cp -p’ makes a huge difference. In one case you lose the ACL on the copy; in the other case you preserve it. The (+) sign at the end of the file permissions shows the difference.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp alert_DBIT122.log alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp -p alert_DBIT122.log alert_DBIT122.log.b
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.a
-rw-r----- 1 oracle oinstall 312894 Dec 13 14:25 alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.b
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log.b
 

The article Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations? appeared first on Blog dbi services.

Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER

Yann Neuhaus - Wed, 2016-12-14 01:18

In a Data Guard environment, the password of the SYS user is used by default to authenticate redo transport sessions when a password file is used. But for security reasons, you might not want to use such a highly privileged user just for the redo transmission. To overcome this, Oracle has implemented the REDO_TRANSPORT_USER initialization parameter.

The REDO_TRANSPORT_USER specifies the name of the user whose password verifier is used when a remote login password file is used for redo transport authentication.

But take care: the password must be the same at both databases to create a redo transport session, and the value of this parameter is case sensitive and must exactly match the value of the USERNAME column in the V$PWFILE_USERS view.

Besides that, this user must have the SYSDBA or SYSOPER privilege, and we obviously don’t want to grant SYSDBA. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the same value on the redo source database and at each redo transport destination.
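
As a quick sketch, the current setting can be checked from SQL*Plus at any time (by default the parameter is empty, meaning the SYS password is used):

SQL> show parameter redo_transport_user

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
redo_transport_user                  string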

Ok. Let’s give it a try. I am creating a user called ‘DBIDG’ which will be used for redo transmission between my primary and standby.

SQL> create user DBIDG identified by manager;

User created.

SQL> grant connect to DBIDG;

Grant succeeded.

SQL> grant sysoper to DBIDG;

Grant succeeded.

Once done, I check V$PWFILE_USERS to see if my new user ‘DBIDG’ exists.

-- On Primary

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE


-- On Standby
SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

no rows selected

Ok. Like in previous versions of Oracle, I have to copy the password file to the destination host myself to make it work.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] scp -p orapwDBIT122 oracle@dbidg02:$PWD

SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE

 

When connecting with the ‘DBIDG’ user, you can hardly do anything; you can’t even select from the dba_tablespaces view, for example. From a security perspective, this user is much less of a concern.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] sqlplus dbidg/Manager1@DBIT122_SITE1 as sysoper

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 13 11:08:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> desc dba_tablespaces
ERROR:
ORA-04043: object "SYS"."DBA_TABLESPACES" does not exist

Nevertheless, the ‘DBIDG’ user is completely sufficient for my use case. Now that I have my ‘DBIDG’ redo transport user in both password files (primary and standby), I can activate the redo_transport_user feature on both sites and check if everything works by doing a switchover and a switchback.

-- On Primary and Standby

SQL> alter system set redo_transport_user='DBIDG';

System altered.


DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 33 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE1"
Switchover succeeded, new primary is "DBIT122_SITE2"

DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE2 - Primary database
    DBIT122_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 71 seconds ago)


DGMGRL> SWITCHOVER TO 'DBIT122_SITE1' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE1"
Connecting ...
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE2"
Switchover succeeded, new primary is "DBIT122_SITE1"

Looks very good so far. But what happens if I have to change the password of the ‘DBIDG’ user?

-- On Primary

SQL> alter user dbidg identified by Manager1;

User altered.

-- On Primary
oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122


-- On Standby
oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122

That’s cool. The password files on both sites have been updated successfully. They have the same time stamps, and even the MD5 checksums are exactly the same. This is because of the new “Automatic Password Propagation to Standby” feature of 12cR2.

Conclusion

REDO_TRANSPORT_USER and “Automatic Password Propagation to Standby” are nice little features from Oracle.  The REDO_TRANSPORT_USER parameter has existed for quite a while now, at least since 11gR2; “Automatic Password Propagation to Standby”, however, is new in 12cR2.

 

The article Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER appeared first on Blog dbi services.

Upgrade Merge Patch shows the Error loading seed data for FND_ATTACHMENT_FUNCTIONS

Online Apps DBA - Wed, 2016-12-14 00:50

ERROR/ISSUE: During an upgrade from 12.1.3 to 12.2.0, the Upgrade Merge Patch throws the error loading seed data for FND_ATTACHMENT_FUNCTIONS: FUNCTION_NAME = CSF_CSFDCMAI, FUNCTION_TYPE = F, APPLICATION_SHORT_NAME = CSF, ORA-01422: exact fetch returns more than requested number of rows. Resolution: […]

The post Upgrade Merge Patch shows the Error loading seed data for FND_ATTACHMENT_FUNCTIONS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

filtered hierarchical query

Tom Kyte - Tue, 2016-12-13 20:26
I have some hierarchical data (potentially ~80 million rows, if that's important): drop table test; create table test (id number,pid number,t varchar2(20)); insert into test values (1,NULL,'Animalia'); insert into test values (2,1,'Mamm...
Categories: DBA Blogs

Identifying latest partition.

Tom Kyte - Tue, 2016-12-13 20:26
Hi Guys, I have this query to identify partitions to be moved to a new tablespace. select distinct 'ALTER TABLE ' ||table_owner ||'.'||table_name|| ' MOVE PARTITION ' ||PARTITION_NAME|| ' TABLESPACE TBS_NAME UPDATE GLOBAL INDEXES;' from dba_tab...
Categories: DBA Blogs

Partitions in 11g

Tom Kyte - Tue, 2016-12-13 20:26
We need to partition existing tables having billions of records in 11g. In 11g there is no provision where we can alter a table to add partitions. Can the DBMS_REDEFINITION package be used to do that? I have never used it, so could you please su...
Categories: DBA Blogs

ONLINE datafile defragmentation Oracle 11.2g

Tom Kyte - Tue, 2016-12-13 20:26
Hello Guys, in the past few days I was trying to figure out how to defragment a datafile. The tablespace has even more than one datafile. I thought that with this method I would be able to move extents from the end of the datafile to the beginning. ALTER TABLE ...
Categories: DBA Blogs

Grouping on column to many columns

Tom Kyte - Tue, 2016-12-13 20:26
What's up guys, I'm new here. I've been scavenging the internet looking for an answer to my challenge and, to be honest, it's hard to articulate what I need. I figured this would be my next step. I have the following table which I need to self joi...
Categories: DBA Blogs

Markus Winand on Database Optimizers

Gerger Consulting - Tue, 2016-12-13 19:52
If you are a DBA or a developer and would like to learn more about database optimizers, there is no one better in the world than Markus Winand to learn from!



Markus is hosting a free webinar on December 15th. Attend his webinar and learn how you can build better performing applications.

Register at this link.

Categories: Development

Connection Pools

Anthony Shorten - Tue, 2016-12-13 19:42

The Oracle Utilities Application Framework uses connection pooling to manage the number of connections in the architecture. The advantage of pools is the ability to share connections across concurrent users rather than allocating connections to individual users, where they may stay allocated even while inactive. This ensures that the allocated connections are actually being used, and therefore the number of connections can be lower than the number of users using the product at any time.

The pool has a number of configuration settings:

  • Minimum Size - This is the initial size of the connection pool at startup. For non-production this is typically set to 1, but in production it is set to a size that represents the minimum number of concurrent users at any time. The number of connections in the pool will not fall below this setting.
  • Maximum Size - The maximum number of connections to support in the pool. This number typically represents the expected peak concurrent connections available in the pool. If this number is too small, connections will queue, causing delays, and eventually connections against the pool will be refused.
  • Growth rate - This is the number of new connections to add to the pool when the pool is busy and new connections are required, up to the Maximum Size. Some pool technologies allow you to configure the number of connections to add each time.
  • Inactive detection - Pools will have a collection of settings to define when a connection in the pool can be shut down due to inactivity. This allows the pool to dynamically shrink when traffic becomes off-peak.

There are typically two main connection pools that are used in the product:

  • Client Connection Pool - These are the connections between the client machines (browser, web services client, etc.) and the server. This connection pool can be direct or via proxies (if you are using a proxy generally or as part of a high availability solution). Now, if you are using Oracle WebLogic, this is managed by default by a Global Work Manager, which has unlimited connections. While that sounds ideal, it means you can run into possible out-of-memory issues as the number of connections increases, before you ever hit a connection limit. It is possible to configure dedicated Work Managers to restrict the number of connections and prevent out-of-memory conditions. Most customers use the Global Work Manager with no issues, but it may be worth investigating Work Managers to see if you want to use the Capacity Constraint or Max Threads Constraint capabilities to restrict connection numbers.
  • Database Connection Pool - These are the connections between the product and the database. The connection pools can use UCP or JDBC data sources. The latter uses Oracle WebLogic JNDI to define the attributes of the connection pools as well as advanced connection properties such as FCF and GridLink. The choice between UCP and JDBC data sources depends on your site standards and how often you want to be able to change the connections to the database: JDBC data sources are more flexible in terms of maintaining pool information, whereas UCP is more desirable where fixed configurations are typical. A configuration sketch follows this list.
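
To make the settings above more tangible, here is a rough sketch of how they map onto a WebLogic JDBC data source module (the data source name, URL and values are purely illustrative, not a recommendation):

<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>OUAF-DataSource</name>
  <jdbc-driver-params>
    <url>jdbc:oracle:thin:@//dbhost:1521/OUAFDB</url>
    <driver-name>oracle.jdbc.OracleDriver</driver-name>
  </jdbc-driver-params>
  <jdbc-connection-pool-params>
    <initial-capacity>25</initial-capacity>                  <!-- Minimum Size -->
    <min-capacity>25</min-capacity>                          <!-- pool will not shrink below this -->
    <max-capacity>150</max-capacity>                         <!-- Maximum Size -->
    <capacity-increment>5</capacity-increment>               <!-- Growth rate -->
    <shrink-frequency-seconds>60</shrink-frequency-seconds>  <!-- Inactive detection -->
    <inactive-connection-timeout-seconds>60</inactive-connection-timeout-seconds>
  </jdbc-connection-pool-params>
</jdbc-data-source>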

Additionally if you are using a proxy to get to the product, most proxies have connection number restrictions to consider when determining pool sizes.

So when deciding on the size of the pools and their attributes, there are a number of considerations:

  • The goal is to have enough connections in a pool to satisfy the number of concurrent users at any time. This includes peak and non-peak periods.
  • When designing pool sizes and other attributes, remember that wasted connections are a burden on resources. Making the pool dynamic will ensure the resources used are optimally used as traffic fluctuates.
  • Conversely, the establishment of new connections is an overhead when traffic grows, although in terms of overall performance this overhead is minimal.
  • The connections in the pool are only needed for users actively using the server resources. They are not used when users are idle or using the client aspects of the product (for example, moving their mouse across the screen, interacting with non-dynamic fields, etc.).
  • Set the minimum number of connections to the absolute minimum you want to start with, or the number you want to always have available at all times. It is not recommended to use the non-production default of one (1), as that would force the pool to create lots of new connections as traffic ramps up during the business day.
  • Set the maximum number to the expected peak concurrent connections at any time, with some headroom for growth. Active pool connections take resources (CPU, memory, etc.), so make sure this number is reasonable for your business. Some customers use test figures as a starting point, talk to their management to determine the number of peak user connections, or use performance testing to decide the figure. I have heard implementation partners talk about rules of thumb where they estimate based upon total users.
  • Set the inactivity settings to destroy connections when they become idle for a period of time. This period can vary, but generally a low value is not recommended due to the nature of the traffic typically seen on site. For example, partners will generally look at an inactivity time between 30 and 60 seconds, maybe more. The idea is to gradually remove inactive connections as traffic drops off peak.
  • If the pool allows you to specify the number of new connections to create, consider using a number other than one (1) for online channels. Whilst this low value seems logical, it will result in a slower ramp-up rate.
  • Monitor the connection pools for connection queuing, as that may indicate your maximum is too low for your site.

One other piece of advice from my troubleshooting days: do not assume the figures you use today will still be valid in a year’s time. I have found that as the product implementation ages, end users will use the product very differently over time. I have chatted with consultants about the fact that I have personally seen traffic double in the first 18 months of an implementation. Now, that is not a hard and fast rule, just an observation that when a product is initially implemented, end users are conservative in its use, but over time, as they get more accustomed to the product, their usage and therefore traffic volume increases. This must be reflected in the pool sizing and attributes.

The Best of Both Worlds Regarding Mainframe Storage and the Cloud

Kubilay Çilkara - Tue, 2016-12-13 17:01
It might shock you to hear that managing data has never been more difficult than it is today.  Data is growing at the speed of light, while IT budgets are shrinking at a similar pace.  All of this growth and change is forcing administrators to find more relevant ways to successfully manage and store data.  This is no easy task, as there are many regulatory constraints with respect to data retention, and the business value of the data needs to be considered as well.  Those within the IT world likely remember (with fondness) the hierarchical storage management systems (HSM), which have traditionally played a key role in the mainframe information lifecycle management (ILM).  Though this was once a reliable and effective way to manage company data, gone are the days when businesses can put full confidence in such a method.  The truth of the matter is that things have become much more complicated.

There is a growing need to collect information and data, and the bad news with this is that there is simply not enough money in the budget to handle the huge load.  In fact, not only are budgets feeling the growth, but even current systems can’t keep up with the vast pace of the increase in data and its value.  It is estimated that global data center traffic will soon triple its numbers from 2013.  You can imagine what a tremendous strain this quick growth poses to HSM and ILM.  Administrators are left with loads of questions such as how long must data be kept, what data must be stored, what data is safe for deletion, and when it is safe to delete certain data.  These questions are simply the tip of the iceberg when it comes to data management.  Regulatory requirements, estimated costs, and the issues of backup, recovery and accessibility for critical data are areas of concern that also must be addressed with the changing atmosphere of tremendous data growth.

There is an alluring solution that has come on the scene that might make heads turn with respect to management of stored data.  The idea of hybrid cloud storage is making administrators within the IT world and businesses alike think that there might actually be a way to manage this vast amount of data in a cost effective way.  So, what would this hybrid cloud look like?  Essentially, it would be a combination of capabilities found in both private and public cloud storage solutions.  It would combine on-site company data with storage capabilities found on the public cloud.  Why would this be a good solution?  The reason is because companies are looking for a cost effective way to manage the massive influx of data.  This hybrid cloud solution would offer just that.  The best part is, users would only need to pay for what they use regarding their storage needs.  The good news is, the options are seemingly unlimited, increasing or decreasing as client needs shift over time.  With a virtualized architecture in place, the variety of storage options is endless.  Imagine what it would be like to no longer be worried about the provider or the type of storage you are managing.  With the hybrid cloud storage system in place, these worries would go out the window.  Think of it as commodity storage.  Those within the business world understand that this type of storage has proven to work well within their spheres, ultimately offering a limitless capacity to meet all of their data storage needs.  What could be better?

In this fast-paced, shifting world, it’s high time relevant solutions come to the forefront that are effective for the growth and change so common in the world of technology today.  Keep in mind that the vast influx of data could become a huge problem if solutions such as the hybrid cloud options are not considered.  This combination of cloud storage is a great way to lower storage costs as the retention time increases, and the data value decreases.  With this solution, policies are respected, flexibility is gained, and costs are cut.  When it comes to managing data effectively over time, the hybrid cloud storage system is a solution that almost anyone could get behind!


Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.

Categories: DBA Blogs

Cutting Edge IMS Database Management

Kubilay Çilkara - Tue, 2016-12-13 16:39
Never before has the management of a database been more difficult for those within the IT world.  This should not come as a shock to those reading this, especially when you consider how vast the data volumes and streams currently are.  The unfortunate news is that these volumes and streams are not shrinking anytime soon, but IT budgets ARE, and so are things such as human resources and technical skills.  The question remains...how are businesses supposed to manage these databases in the most effective way?  Well, the very factors mentioned above make automation an extremely attractive choice.

Oftentimes, clients have very specific requirements when it comes to automating their IMS systems.  Concerns arise such as how to make the most of CPU usage, what capabilities are available, strategic advantages, and how to save with respect to OpEx.  These are not simply concerns, but necessities, and factors that almost all clients consider.  Generally speaking, these requirements can be streamlined into 2 main categories.  The first is how to monitor database exceptions and the second is how to implement conditional reorgs.

Regarding the monitoring of database exceptions, clients must consider how long this process actually takes without automation.  Without this tool, it needs to be accomplished manually and requires complicated analysis by staff that is well-experienced in this arena. However, when automation is utilized, policies are the monitors of the databases.  In this instance, exceptions trigger an email notification, which reports what sort of help is necessary to find a solution to the problem.

Now, don’t hear what we are NOT saying here.  Does automation make life easier for everyone?  Yes!  Is implementing automation an easy and seamless process? No!  In fact, automation requires some detailed work prior to setting it up.  This work is accomplished so that it is clear what needs to be monitored and how that monitoring should be carried out.  Ultimately, a monitor list is created which will help to define and assign various policies. Overall, this list will make clear WHO gets sent a notification in the event of an exception, as well as what type of notification will be sent.  Even further, this list will show what the trigger of the exception was, and in the end will assign a notification list to the policy.  It may sound like a lot of work up front, but one could argue that it is certainly worth it in the long run. 

When it comes to conditional reorgs, automation saves the day once again.  Many clients can prove to be quite scattered with respect to their reorg cycle, some even organizing them throughout a spotty 4-week cycle.  The issue here is that reorg jobs are scheduled during a particular window of time, without the time or resources even being evaluated.  When clients choose to automate a reorg, automation will help determine the necessity of a reorg.  The best part of the whole process is that no manual work is needed!  In fact, the scheduler will continue to submit the reorg jobs, but execute them only if necessary.  Talk about a good use of both time and resources!  It ends up being a win-win situation. 

Automation often ends up “saving the day” when it comes to meeting and exceeding client goals.  Did you know that individual utilities -when teamed with the functionality and the vast capabilities of the automation process- actually improve overall business performance and value?  It is true.  So, if you are looking for the most cutting edge way to manage your IMS database, look no further than the process of automation!


Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.
Categories: DBA Blogs

Command Line and Vim Tips from a Java Programmer

I’m always interested in learning more about useful development tools. In college, most programmers get an intro to the Linux command line environment, but I wanted to share some commands I use daily that I’ve learned since graduation.

Being comfortable on the command line is a great skill to have when a customer is looking over your shoulder on a Webex. They could be watching a software demo or deployment to their environment. It can also be useful when learning a new code base or working with a product with a large, unfamiliar directory structure with lots of logs.

If you’re on Windows, you can use Cygwin to get a Unix-like CLI to make these commands available.

Useful Linux commands

Find

The command find helps you find files by recursively searching subdirectories. Here are some examples:

find .
  Prints all files and directories under the current directory.

find . -name '*.log'
  Prints all files and directories that end in “.log”.

find /tmp -type f -name '*.log'
  Prints only files in the directory “/tmp” that end in “.log”.

find . -type d
  Prints only directories.

find . -maxdepth 2
  Prints all files and directories under the current directory and its immediate subdirectories (but not sub-subdirectories).

find . -type f -exec ls -la {} \;
  The -exec flag runs a command against each file instead of printing the name. In this example, it will run ls -la filename on each file under the current directory. The curly braces take the place of the filename.

Grep

The command grep lets you search text for lines that match a specific string. It can be helpful to add your initials to debug statements in your code and then grep for them to find them in the logs.

grep foo filename
  Prints each line in the file “filename” that matches the string “foo”.

grep foo\\\|bar filename
  Grep supports regular expressions, so this prints each line in the file that matches “foo” or “bar”.

grep -i foo filename
  Add -i for case insensitive matching.

grep foo *
  Use the shell wildcard, an asterisk, to search all files in the current directory for the string “foo”.

grep -r foo *
  Recursively search all files and directories in the current directory for a string.

grep -rnH foo filename
  Add -n to print line numbers and -H to print the filename on each line.

find . -type f -name '*.log' -exec grep -nH foo {} \;
  Combining find and grep can let you easily search each file that matches a certain name for a string. This will print each line that matches “foo” along with the file name and line number in each file that ends in “.log” under the current directory.

ps -ef | grep processName
  The output of any command can be piped to grep, and the lines of STDOUT that match the expression will be printed. For example, you could use this to find the pid of a process with a known name.

cat file.txt | grep -v foo
  You can also use -v to print all lines that don’t match an expression.
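
Tying back to the tip about tagging debug statements with your initials, here is a small made-up example (the initials “MJD” and the logs directory are hypothetical):

grep -rnH 'MJD:' logs/
  Recursively finds every line tagged with your initials under logs/, printing the file name and line number for each match.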

Ln

The command ln lets you create links. I generally use this to create links in my home directory to quickly cd into long directory paths.

ln -s /some/really/long/path foo
  The -s is for symbolic, and the long path is the target. The output of ls -la in this case would be foo -> /some/really/long/path.

Bashrc

The Bashrc is a shell script that gets executed whenever Bash is started in an interactive terminal. It is located in your home directory, ~/.bashrc. It provides a place to edit your $PATH, $PS1, or add aliases and functions to simplify commonly used tasks.

Aliases are a way you can define your own command line commands. Here are a couple useful aliases I’ve added to my .bashrc that have saved a lot of keystrokes on a server where I’ve installed Oracle WebCenter:

WC_DOMAIN=/u01/oracle/fmw/user_projects/domains/wc_domain
alias assets="cd /var/www/html"
alias portalLogs="cd $WC_DOMAIN/servers/WC_Spaces/logs"
alias domain="cd $WC_DOMAIN"
alias components="cd $WC_DOMAIN/ucm/cs/custom"
alias rpl="portalLogs; vim -R WC_Spaces.out"

After making changes to your .bashrc, you can load them with source ~/.bashrc. Now I can type rpl, short for Read Portal Logs, from anywhere to quickly jump into the WebCenter portal log file.
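
Functions work in the .bashrc too, when an alias isn't flexible enough. Here is a small, hypothetical example (the name findlog is made up) that combines the find and grep commands from earlier:

# Search all *.log files under a directory (default: current dir) for a pattern
findlog() {
    find "${2:-.}" -type f -name '*.log' -exec grep -nH "$1" {} \;
}

After sourcing the .bashrc again, findlog foo /tmp prints every line matching "foo", with file name and line number, from the log files under /tmp.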

alias grep="grep --color"

This grep alias adds the --color option to all of my grep commands. All of the above grep commands still work, but now all of the matches will be highlighted.

Vim

Knowing Vim key bindings can be convenient and efficient if you’re already working on the command line. Vim has many built-in shortcuts to make editing files quick and easy.

Run vim filename.txt to open a file in Vim. Vim starts in Normal Mode, where most characters have a special meaning, and typing a colon (:) lets you run Vim commands. For example, typing Shift-G will jump to the end of the file, and typing :q while in Normal Mode will quit Vim. Here is a list of useful commands:

:q
  Quits Vim

:w
  Write the file (save)

:wq
  Write and quit

:q!
  Quit and ignore warnings that you didn’t write the file

:wq!
  Write and quit, ignoring permission warnings

i
  Enter Insert Mode where you can edit the file like a normal text editor

a
  Enter Insert Mode and place the cursor after the current character

o
  Insert a blank line after the current line and enter Insert Mode

[escape]
  The escape button exits insert mode

:150
  Jump to line 150

shift-G
  Jump to the last line

gg
  Jump to the first line

/foo
  Search for the next occurrence of “foo”. Regex patterns work in the search.

?foo
  Search for the previous occurrence of “foo”

n
  Go to the next match

N
  Go to the previous match

*
  Search for the next occurrence of the searched word under the cursor

#
  Search for the previous occurrence of the searched word under the cursor

w
  Jump to the next word

b
  Jump to the previous word

``
  Jump to the last action

dw
  Delete the word starting at the cursor

cw
  Delete the word starting at the cursor and enter insert mode

c$
  Delete everything from the cursor to the end of the line and enter insert mode

dd
  Delete the current line

D
  Delete everything from the cursor to the end of the line

u
  Undo the last action

ctrl-r
  Redo the last action

d[up]
  Delete the current line and the line above it. “[up]” is for the up arrow.

d[down]
  Delete the current line and the line below it

d3[down]
  Delete the current line and the three lines below it

r[any character]
  Replace the character under the cursor with another character

~
  Toggle the case (upper or lower) of the character under the cursor

v
  Enter Visual Mode. Use the arrow keys to highlight text.

shift-V
  Enter Visual Mode and highlight whole lines at a time.

ctrl-v
  Enter Visual Mode but highlight blocks of characters.

=
  While in Visual Mode, = will auto format highlighted text.

c
  While in Visual Mode, c will cut the highlighted text.

y
  While in Visual Mode, y will yank (copy) the highlighted text.

p
  In Normal Mode, p will paste the text in the buffer (that’s been yanked or cut).

yw
  Yank the text from the cursor to the end of the current word.

:sort
  Highlight lines in Visual Mode, then use this command to sort them alphabetically.

:s/foo/bar/g
  Highlight lines in Visual Mode, then use search and replace to replace all instances of “foo” with “bar”.

:s/^/#/
  Highlight lines in Visual Mode, then add # at the start of each line. This is useful to comment out blocks of code.

:s/$/;/
  Highlight lines in Visual Mode, then add a semicolon at the end of each line.

:set paste
  This will turn off auto indenting. Use it before pasting into Vim from outside the terminal (you’ll want to be in insert mode before you paste).

:set nopaste
  Make auto indenting return to normal.

:set nu
  Turn on line numbers.

:set nonu
  Turn off line numbers.

:r!pwd
  Read the output of a command into Vim. In this example, we’ll read in the current directory.

:r!sed -n 5,10p /path/to/file
  Read lines 5 through 10 from another file in Vim. This can be a good way to copy and paste between files in the terminal.

:[up|down]
  Type a colon and then use the arrow keys to browse through your command history. If you type letters after the colon, it will only go through commands that match (i.e., typing :se and then up would help find “:set paste” quickly).

Vimrc

The Vimrc is a configuration file that Vim loads whenever it starts up, similar to the Bashrc. It is in your home directory.

Here is a basic Vimrc I’d recommend for getting started if you don’t have one already. Run vim ~/.vimrc and paste in the following:

set backspace=2         " backspace in insert mode works like normal editor
syntax on               " syntax highlighting
filetype indent on      " activates indenting for files
set autoindent          " auto indenting
set number              " line numbers
colorscheme desert      " use the desert color scheme
set listchars=tab:>-,trail:.,extends:>,precedes:<
set list                " Set up whitespace characters
set ic                  " Ignore case by default in searches
set statusline+=%F      " Show the full path to the file
set laststatus=2        " Make the status line always visible

 

Perl

Perl comes installed by default on Linux, so it is worth mentioning that it has some extensive command line capabilities. If you have ever tried to grep for a string that matches a line in a minified Javascript file, you can probably see the benefit of being able to filter out lines longer than 500 characters.

grep -r foo * | perl -nle'print if 500 > length'

Conclusion

I love learning the tools that are available in my development environment, and it is exciting to see how they can help customers as well.

Recently, I was working with a customer and we were running into SSL issues. Java processes can be run with the option -Djavax.net.ssl.trustStore=/path/to/trustStore.jks to specify which keystore to use for SSL certificates. It was really easy to run ps -ef | grep trustStore to quickly identify which keystore we needed to import certificates into.

I’ve also been able to use various find and grep commands to search through unfamiliar directories after exporting metadata from Oracle’s MDS Repository.

Even if you aren’t on the command line, I’d encourage everyone to learn something new about their development environment. Feel free to share your favorite Vim and command line tips in the comments!

Further reading

http://www.vim.org/docs.php

https://www.gnu.org/software/bash/manual/bash.html

http://perldoc.perl.org/perlrun.html

The post Command Line and Vim Tips from a Java Programmer appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Investigating IO Performance on Amazon RDS for Oracle

Pythian Group - Tue, 2016-12-13 15:55

I’ve recently been involved in quite a few database migrations to Oracle RDS. One thing I noticed when dealing with post-migration performance issues was related to queries that used TABLE SCAN FULL in their execution plans. It seemed that, in many cases, it took just a single query to max out the allocated IOPS (IOs per second) or bandwidth, which in turn caused overall slowness of the RDS instance.

A search in the documentation showed that this could be caused by how IO operations are counted on Amazon RDS, which is quite different from what a routine Oracle DBA like me would expect. For multi-block reads, the database (depending on storage) typically issues IOs of up to 1MB in size, so if an 8K block size is used, table scans read up to 128 blocks in a single IO of db file scattered read or direct path read.

Now, pay attention to what the AWS documentation says:
While Provisioned IOPS (io1 storage) can work with I/O sizes up to 256 KB, most databases do not typically use such large I/O. An I/O request smaller than 32 KB is handled as one I/O; for example, 1000 16 KB I/O requests are treated the same as 1000 32 KB requests. I/O requests larger than 32 KB consume more than one I/O request; Provisioned IOPS consumption is a linear function of I/O request size above 32 KB. For example, a 48 KB I/O request consumes 1.5 I/O requests of storage capacity; a 64 KB I/O request consumes 2 I/O requests, etc. … Note that I/O size does not affect the IOPS values reported by the metrics, which are based solely on the number of I/Os over time. This means that it is possible to consume all of the IOPS provisioned with fewer I/Os than specified if the I/O sizes are larger than 32 KB. For example, a system provisioned for 5,000 IOPS can attain a maximum of 2,500 IOPS with 64 KB I/O or 1,250 IOPS with 128 KB IO.
… and …
I/O requests larger than 32 KB are treated as more than one I/O for the purposes of PIOPS capacity consumption. A 40 KB I/O request will consume 1.25 I/Os, a 48 KB request will consume 1.5 I/Os, a 64 KB request will consume 2 I/Os, and so on. The I/O request is not split into separate I/Os; all I/O requests are presented to the storage device unchanged. For example, if the database submits a 128 KB I/O request, it goes to the storage device as a single 128 KB I/O request, but it will consume the same amount of PIOPS capacity as four 32 KB I/O requests.

Based on the statements above, it looked like the large 1M IOs issued by the DB would be accounted as 32 separate IO operations, which would obviously exhaust the allocated IOPS much sooner than expected. The documentation talks only about Provisioned IOPS, but I think this would apply to General Purpose SSDs (gp2 storage) too, for which the IOPS baseline is 3 IOPS/GB (i.e. 300 IOPS if the allocated size is 100GB of gp2).
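
A quick back-of-the-envelope calculation (based purely on my reading of the documentation at this point) shows why this would matter for full scans:

1 MB multiblock read = 1024 KB / 32 KB = 32 accounted IOs
1000 PIOPS / 32      = ~31 large reads/s = ~31 MB/s effective scan throughput

In other words, if the accounting worked that way, a single session doing 1 MB reads could exhaust a 1000 PIOPS allocation at only about 31 MB/s.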

I decided to do some testing to find out how RDS for Oracle handles large IOs.

The Testing

For testing purposes I used the following code to create a 1G table (Thanks Connor McDonald and AskTom):

ORCL> create table t(a number, b varchar2(100)) pctfree 99 pctused 1;

Table T created.

ORCL> insert into t  values (1,lpad('x',100));

1 row inserted.

ORCL> commit;

Commit complete.

ORCL> alter table t minimize records_per_block;

Table T altered.

ORCL> insert into t select rownum+1,lpad('x',100) from dual connect by level<131072;

131,071 rows inserted.

ORCL> commit;

Commit complete.

ORCL> exec dbms_stats.gather_table_stats(user,'T');

PL/SQL procedure successfully completed.

ORCL> select sum(bytes)/1024/1024 sizemb from user_segments where segment_name='T';

    SIZEMB
----------
      1088

ORCL> select value/1024/1024 buffer_cache from v$sga where name='Database Buffers';

BUFFER_CACHE
------------
        1184

The code for testing will be:

exec rdsadmin.rdsadmin_util.flush_buffer_cache;
alter session set "_serial_direct_read"=always;
alter session set db_file_multiblock_read_count=&1;

-- Run FTS against T table forever.
declare
  n number:=1;
begin
  while n>0
  loop
    select /*+ full(t) */ count(*) into n from t;
  end loop;
end;
/

Basically, I’ll flush the buffer cache, force the direct path reads by setting _serial_direct_read to “ALWAYS”, and then choose the db_file_multiblock_read_count based on how large the IOs I want to issue are (note that by default, db_file_multiblock_read_count is not set on RDS, and it resolves to 128, so the maximum size of an IO from the DB is 1 MB). I’ll test with different sizes of IOs, and will capture the throughput and effective IOPS by using the “Enhanced Monitoring” of the RDS instance.

Side note: the testing I had to do turned out to be more complex than I had expected before I started. In a few cases, I was limited by the instance throughput before I could reach the maximum allocated IOPS, and due to this, the main testing needed to be done on a large enough instance (db.m4.4xlarge) that had more of the dedicated EBS throughput.

The Results

Provisioned IOPS storage

Testing was done on a db.m4.4xlarge instance that was allocated 100GB of io1 storage of 1000 Provisioned IOPS. The EBS-optimized throughput for such instance is 256000 KB/s.
The tests were completed by using db_file_multiblock_read_count of 1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 16, 32, 64 and 128.
For each test the Throughput and IO/s were captured (from RDS CloudWatch graphs), and the effective IO size was derived.
The DB instance was idle, but still, a few small IOs could have been happening during the test.

Provisioned IOPS Measured Throughput

Provisioned IOPS Measured Throughput

Provisioned IOPS Measured IO/s

Provisioned IOPS Measured IO/s

From the graphs above, the following undocumented behaviors can be observed:

  • The RDS instance dynamically chooses the physical IO size (I’ll call them “physical”, just to differentiate that these are the IOs to storage; in fact, that’s only what I see in the CloudWatch graphs, and the real physical IO could be something different) based on the size of the IO request from the database. The possible physical IO sizes appear to be 16K, 32K, 64K, 128K and probably also 8K (this could also in fact be a 16K physical IO reading just 8K of data)
  • The IOPS limit applies only to smaller physical IO sizes (up to 32K); for larger physical IOs (64K, 128K) the throughput is the limiting factor of the IO capability. The throughput limit appears to be quite close to the maximum throughput that the instance is capable of delivering, but at this point, it’s not clear how the throughput limit for a particular situation is calculated.

Throughput Limits for Provisioned IOPS

I ran additional tests on differently sized instances with io1 storage to better understand how the maximum throughput is determined. The graph below represents the throughput achieved on different instances that all had the same 100G of 1000 PIOPS io1 storage. The tests were run using db_file_multiblock_read_count=128:

PIOPS Throughput by Instance Type

It appears that the maximum throughput is indeed limited by the instance type, except for the very largest instance, db.m4.10xlarge. (For this instance the situation is somewhat weird even in the documentation, because the maximum throughput is mentioned as 500 MB/s, but the maximum throughput for a single io1 EBS volume, which should be there underneath the RDS, is just 320 MB/s, and I was unable to reach either of these limits.)

General Purpose SSD storage

Testing was done on a db.m4.4xlarge instance that was allocated 100GB of gp2 storage with a 300 IOPS baseline. The EBS-optimized throughput for such an instance is 256000 KB/s.
The tests were conducted similarly to how they were done for Provisioned IOPS above (note, this is the baseline performance, not burst performance).

General Purpose SSD Measured Throughput

General Purpose SSD Measured IO/s

Similarly to Provisioned IOPS, the General Purpose SSD storage behaves differently from what’s explained in the documentation:

  • The physical IO size again is determined dynamically based on the size of the IO request from the database. The possible sizes appear to be the same as for io1: (8K), 16K, 32K, 64K and 128K.
  • The IOPS limit (at the baseline level) appears to apply to IO sizes only up to 16K (compared to 32K in the case of Provisioned IOPS); for larger physical IOs, starting from 32K, the limit appears to be throughput-driven.
  • It’s not clear how the throughput limit is determined for a particular instance/storage combination, but in this case it appeared to be around 30% of the maximum throughput for the instance; however, I couldn’t confirm the same ratio for db.m4.large, where the maximum achievable throughput depended on the allocated size of the gp2 storage.

Burst Performance

I haven’t collected enough data to derive anything concrete, but during my testing I observed that burst performance applied to both the maximum IOPS and the maximum throughput. For example, while testing on db.m4.large (max instance throughput of 57600 KB/s) with 30G of gp2 at a 90 IOPS baseline, I saw that for small physical IOs it allowed bursting up to 3059 IOPS for short periods of time, while normally it indeed allowed only 300 IOPS. For larger IOs (32K+), the baseline maximum throughput was around 24500 KB/s, but the burst throughput was 55000 KB/s.

Throughput Limits for General Purpose SSD storage

I don’t really know how the maximum allowed throughput is calculated for different instance types and storage configurations on gp2, but one thing is clear: instance size and the size of the allocated gp2 storage are both considered in determining the maximum throughput. I was able to achieve the following throughput measures when gp2 storage was used:

  • 75144 KB/s (133776 KB/s burst) on db.m4.4xlarge (100G gp2)
  • 54500 KB/s (same as burst, this is close to the instance limit) on db.m4.large (100G gp2)
  • 24537 KB/s (54872 KB/s burst) on db.m4.large (30G gp2)
  • 29116 KB/s (burst was not measured) on db.m4.large (40G gp2)
  • 37291 KB/s (burst was not measured) on db.m4.large (50G gp2)

Conclusions

The testing provided some insight into how the maximum IO performance is determined on Amazon RDS for Oracle, based on the instance type, storage type, and volume size. Despite finding some clues, I also understood that managing IO performance on RDS is far more difficult than expected for the mixed-size IO requests that are typically issued by Oracle databases. There are many questions that still need to be answered (i.e. how the maximum throughput is calculated for gp2 storage instances), and it’d take many, many hours to find all the answers.

On the other hand, the testing already revealed a few valuable findings:

  1. Opposite to the documentation, which states that all IOs are measured and accounted in 32KB units, we found that an IO reported by Amazon can be of size 8K (probably), 16K, 32K, 64K or 128K
  2. For small physical IOs (up to 32K in case of Provisioned IOPS and up to 16K in case of General Purpose SSD) the allocated IOPS is used as the limit for the max performance.
  3. For larger physical IOs (from 64K in case of Provisioned IOPS and from 32K in case of General Purpose SSD) the throughput is used as the limit for the max performance, and the IOPS limit no longer applies.
  4. The Burst performance applies to both IOPS and throughput

P.S. As to my original issue of a single TABLE SCAN FULL severely impacting overall performance: I found that in many cases we were using small RDS instances (db.m3.large or db.m4.large), for which the maximum throughput was ridiculously low, and we were hitting the throughput limitation, not the IOPS limit, which actually didn’t apply to the larger physical IOs on gp2 storage.

Categories: DBA Blogs

Oracle ACE Director Alumni Status

Randolf Geist - Tue, 2016-12-13 15:49
I've recently requested to be removed from the Oracle ACE program and move to the "Alumni" status.

I've already felt for a quite a while now that my area of expertise and interest as well as my public profile (no Twitter etc.) is no longer really a good fit to the ACE program.

The most recent changes to the ACE program have just made my decision to step back easier.

All the best to the ACE program and thanks for the support during the last eight years!

This decision won't really influence what I'll do in the future - I'll continue to publish notes and videos about things I find interesting about Oracle database technology, the same way I did before - probably with a little less pressure to maintain a certain level of output.

Partner Webcast - Oracle Cloud Machine Technical Overview

Oracle Cloud Machine delivers Oracle Cloud Services directly in your data center, fully managed by Oracle, so that you can take advantage of the agility, innovation and subscription-based pricing...

We share our skills to maximize your revenue!
Categories: DBA Blogs

GNW05 – Extending Databases With the Full Power of Hadoop: How Gluent Does It

Tanel Poder - Tue, 2016-12-13 14:15

It’s time to announce the next webinar in the Gluent New World series. This time I will deliver it myself (and let’s have some fun :-)

Details below:

GNW05 – Extending Databases With the Full Power of Hadoop: How Gluent Does It

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Linux locate/print block device attributes ASMLib

Michael Dinh - Tue, 2016-12-13 08:38

Just learned about this, specifically to identify whether a disk is being used by ASMLib.

blkid – command-line utility to locate/print block device attributes

Oracle Linux Server release 6.4
[root@rac02:/root]
# ll /etc/*release*
-rw-r--r--. 1 root root 32 Feb 22  2013 /etc/oracle-release
-rw-r--r--. 1 root root 55 Feb 22  2013 /etc/redhat-release
lrwxrwxrwx. 1 root root 14 Nov 29  2014 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root 45 Feb 22  2013 /etc/system-release-cpe

[root@rac02:/root]
# cat /etc/system-release
Oracle Linux Server release 6.4

[root@rac02:/root]
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)

[root@rac02:/root]
# /sbin/blkid |sort
/dev/mapper/vg01-lv_root: UUID="97968448-9997-42a0-a106-c438a47345b8" TYPE="ext4"
/dev/mapper/vg01-lv_swap: UUID="8fe8b719-a367-4448-9ae2-76b376ad91d5" TYPE="swap"
/dev/sda1: UUID="33162370-b7a1-45ae-9c8c-966b8bd720e3" TYPE="ext4"
/dev/sda2: UUID="qHuGcf-Ntnr-hoLR-qtEb-cdgz-2sph-jgaKDK" TYPE="LVM2_member"
/dev/sdb1: LABEL="DISK1" TYPE="oracleasm"
/dev/sdc1: LABEL="DISK2" TYPE="oracleasm"
/dev/sdd1: LABEL="DISK3" TYPE="oracleasm"
/dev/sde1: LABEL="DISK4" TYPE="oracleasm"
/dev/sdf1: LABEL="DISK5" TYPE="oracleasm"
/dev/sdg1: LABEL="DISK6" TYPE="oracleasm"
/dev/sdh1: LABEL="DISK7" TYPE="oracleasm"
/dev/sdi1: LABEL="DISK8" TYPE="oracleasm"
Oracle Linux Server release 6.6
[root@arrow1 ~]# ll /etc/*release*
-rw-r--r--. 1 root root 32 Oct 15  2014 /etc/oracle-release
-rw-r--r--. 1 root root 55 Oct 15  2014 /etc/redhat-release
lrwxrwxrwx. 1 root root 14 Jun 24  2015 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root 45 Oct 15  2014 /etc/system-release-cpe

[root@arrow1 ~]# cat /etc/system-release
Oracle Linux Server release 6.6

[root@arrow1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)

[root@arrow1 ~]# /sbin/blkid
/dev/sda1: UUID="7c42cea0-f23d-4fec-ad0f-b1bf4b50b17e" TYPE="ext4"
/dev/sda2: UUID="ZVWdTj-8EQN-F3ac-3Tev-xTsb-ssL8-7euz5X" TYPE="LVM2_member"
/dev/sdb1: UUID="KUEy1I-X5i2-CuSm-krTA-R5K5-i4je-Ek56ZK" TYPE="LVM2_member"
/dev/mapper/vg01-LogVol01: UUID="8e2c236c-dd87-4be6-9eaf-f72f32f0dcc6" TYPE="ext4"
/dev/mapper/vg01-LogVol00: UUID="919bab13-82b6-425b-95c8-87975cb0bf20" TYPE="swap"
Oracle Linux Server release 7.3
[root@owl ~]# ll /etc/*release*
-rw-r--r--. 1 root root  32 Nov  7 22:07 /etc/oracle-release
-rw-r--r--. 1 root root 398 Nov  7 22:07 /etc/os-release --- (New as of OEL7?)
-rw-r--r--. 1 root root  52 Nov  7 22:07 /etc/redhat-release
lrwxrwxrwx. 1 root root  14 Dec 12 23:31 /etc/system-release -> oracle-release
-rw-r--r--. 1 root root  31 Nov  7 22:07 /etc/system-release-cpe

[root@owl ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@owl ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

[root@owl ~]# cat /etc/os-release
NAME="Oracle Linux Server"
VERSION="7.3"
ID="ol"
VERSION_ID="7.3"
PRETTY_NAME="Oracle Linux Server 7.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:3:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.3
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.3

[root@owl ~]# /sbin/blkid
/dev/sr0: UUID="2016-11-21-16-51-51-00" LABEL="VBOXADDITIONS_5.1.10_112026" TYPE="iso9660"
/dev/sda1: UUID="65bf9c73-fb56-49c3-b55e-85de8f318892" TYPE="xfs"
/dev/sda2: UUID="p4rqW2-uzvr-6DpX-nof0-7tgf-Yfvn-n8ehaB" TYPE="LVM2_member"
/dev/mapper/vg01-root: UUID="de9c65d9-60cc-45b8-b9fa-7459ef3f3850" TYPE="xfs"
/dev/mapper/vg01-swap: UUID="1f85c253-261e-4021-9468-651de69b8e7a" TYPE="swap"
[root@owl ~]#
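
If the goal is just to list the ASMLib disks, blkid can also filter on the TYPE token directly (assuming a util-linux blkid that supports -t; on the rac02 host above this should print exactly the eight oracleasm partitions):

[root@rac02:/root]
# /sbin/blkid -t TYPE="oracleasm" -o device
/dev/sdb1
/dev/sdc1
/dev/sdd1
/dev/sde1
/dev/sdf1
/dev/sdg1
/dev/sdh1
/dev/sdi1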

Upgrade to Enterprise Manager 13.2

Yann Neuhaus - Tue, 2016-12-13 08:11

I will describe how to upgrade Enterprise Manager 13.1.0.0 to the new 13.2.0.0 version.

First, we have to ensure that the latest PSU has been applied to the repository database.

It is mandatory to apply the following patch before upgrading to Enterprise Manager Cloud Control 13.2.0.0:

DATABASE PATCH SET UPDATE 12.1.0.2.160419

To install the patch, you must use Oracle Interim Patch Installer (OPatch) version 12.2.0.1.8.

Once you have downloaded the patch and unzipped it, you can check for potential conflicts:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail 
-phBaseDir /oracle/u01/app/oracle/software/22899531/22806133
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed. 
OPatch succeeded.


oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail -phBaseDir 
/oracle/u01/app/oracle/software/22899531/23006522
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

And finally, you check for the system space available:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/opatch prereq 
CheckSystemSpace -phBaseFile /oracle/u01/app/oracle/software/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.

Then, once the prerequisites are ok, you stop the repository database and run the classical opatch apply command from the directory where you unzipped the PSU.
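
As a reminder, the sequence looks roughly like this (a sketch reusing the paths above; always follow the patch README for any post-install steps such as datapatch):

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] cd /oracle/u01/app/oracle/software/22899531/22806133
oracle@vmCC13c:/oracle/u01/app/oracle/software/22899531/22806133/ [EMREP13C] opatch apply
oracle@vmCC13c:/oracle/u01/app/oracle/software/22899531/22806133/ [EMREP13C] sqlplus / as sysdba
SQL> startup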

You finally check the Oracle inventory:

oracle@vmtestoraCC13c:/home/oracle/ [EMREP13C] opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
--------------------------------------------------------------------------------
Local Machine Information:
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 12c                                      12.1.0.2.0
There are 1 products installed in this Oracle Home. 
Interim patches (1) : 
Patch  22806133     : applied on Tue Nov 22 11:19:55 CET 2016
Unique Patch ID:  19983161
Patch description:  "DATABASE BUNDLE PATCH: 12.1.0.2.160419 (22806133)"

Secondly you have to disable the optimizer_adaptive_features parameter in the repository database:

SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.
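
You can verify the change directly (a quick check, not part of the original procedure); the parameter should now report FALSE:

SQL> show parameter optimizer_adaptive_features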

Then we have to ensure that the tables in the Management Repository do not have any snapshots created:

SQL> select master , log_table from all_mview_logs where log_owner='SYSMAN';
no rows selected

We verify if any login triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';

We verify if any logoff triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';

If we find a trigger enabled, we disable it:

SQL> SELECT trigger_name, owner from sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
 
TRIGGER_NAME         OWNER
GSMLOGOFF            GSMADMIN_INTERNAL
 
SQL> alter trigger gsmadmin_internal.gsmlogoff disable;
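
Make a note of any trigger you disable here: once the upgrade is complete, it should be re-enabled, for example:

SQL> alter trigger gsmadmin_internal.gsmlogoff enable;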

Then you have to copy the EMKEY to the repository database:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl config emkey 
-copy_to_repos -sysman_pwd dbi05manager
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, 
secure the EMKey by running "emctl config emkey -remove_from_repos".
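
As the message says, once the upgrade is finished the EMKey should be secured again (a sketch; replace the password placeholder with your own sysman password):

oracle@vmCC13c:/u03/app/oracle/oms13cr2/bin/ [oms13c] emctl config emkey -remove_from_repos -sysman_pwd <sysman_password>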

For the OMS shutdown, we follow this procedure:

We stop the JVMD and ADP engines explicitly:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework
 
oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms adp stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework

Then we shut down completely the OMS:

oracle@v333:/home/oracle/ [oms13c] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down

It is mandatory to stop the management agent in order to avoid errors during the migration phase:

oracle@vmtestoraCC13c:/home/oracle/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

WATCH OUT: before running the installer, unset the PERLLIB and PERL5LIB environment variables (if, for example, they are defined for your oracle user); otherwise you will get the following error:

/opt/oracle/Middleware13c/perl/lib/5.10.0/x86_64-linux-thread-multi/auto/Cwd/Cwd.so: 
undefined symbol: Perl_Gthr_key_ptr
ERROR: Unable to continue with the installation 
because some one-off patches could not be applied successfully.
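
A simple way to clear them in the shell session that will launch the installer:

oracle@v333:/opt/software/ [oms13c] unset PERLLIB PERL5LIB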

Then, once the OMS and the management agent are stopped, you can run the installer:

oracle@v333:/opt/software/ [oms13c] ./em13200_linux64.bin
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 7999 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    
Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer 
from /tmp/OraInstall2016-11-18_10-07-45AM
====Prereq Config Location main===
/tmp/OraInstall2016-11-18_10-07-45AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2016-11-18_10-07-45AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2016-11-18_10-07-45AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_f8wrWz/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/opt/software
EMFileLoc:/tmp/OraInstall2016-11-18_10-07-45AM/oui/em/
ScratchPathValue :/tmp/OraInstall2016-11-18_10-07-45AM

The first installer screen appears. I chose not to receive security alerts, then selected Next.

On the next screen I selected Next again to skip the software updates, and then checked that all the prerequisites were ok.

We choose to upgrade an existing Enterprise Manager system, enter the old Middleware home, and select Next. We then enter the new Middleware home and select Next.

We enter the repository connection details and the sys and sysman passwords, confirm we have a correct repository backup, disable the DDMP jobs, and select Next. We select Yes to fix the reported issues.

We review the plugins and select Next; at this stage we also have the possibility to add the plugins we want to deploy while upgrading to EM 13.2.

We enter the WebLogic username and password and select Next. I chose not to configure a shared location for Oracle BI Publisher.

We keep the default ports, select Next, and finally select Upgrade.

The upgrade runs fine :-)

At the end of the upgrade, you have to run the root.sh script as root from the new OMS home:

[root@vmCC13c oms13cr2]# . root.sh
/etc exist
/u03/app/oracle/oms13cr2

Finally, you have successfully upgraded to EM 13.2.

Once the OMS is upgraded, you have to upgrade the management agents with the classical procedure. From the EM console you select Upgrade Agent, add the management agent, and select Submit.

You can follow the upgrade phase; at the end you have to run the root.sh script on the management agent and clean up the old agent environment from the EM console: you select Post Agent Upgrade Tasks, select the agent previously migrated, and select Submit.

The old agent home will be removed; you will just have to adapt your /etc/oratab file to give the new ORACLE_HOME for your 13.2 agent.
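
For example, an oratab entry of this shape (the agent home path is purely illustrative; use the one from your own installation):

agent13c:/u03/app/oracle/agent13c/agent_13.2.0.0.0:N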

The last phase consists of deleting the old OMS home. As we were on version 13.1, we only have to check that nothing is running from the old environment, then delete the old home.

oracle@vmCC13c:/u03/app/oracle/oms13cr2/ [oms13c] ps -ef | grep oms | grep -v 13cr2
oracle   28463 27736  0 15:52 pts/5    00:00:00 grep --color=auto oms
 
oracle@vmCC13c:/u03/app/oracle/ [oms13c] ls
agent13c  CC13c_setupinfo.txt  gc_inst  gc_inst1  oms13c  oms13cr2  swlib
oracle@vmCC13c:/u03/app/oracle/ [oms13c] rm -rf oms13c

The new features are listed at the following URL and mainly concern virtualization, Middleware and Cloud Management, plus some new incident management features (for example, Always-On Monitoring installed on a different host).

http://docs.oracle.com/cd/E73210_01/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Finally, the upgrade to EM 13.2 finished in a couple of hours and did not present any errors.

The article Upgrade to Enterprise Manager 13.2 first appeared on Blog dbi services.

Index Compression

Jonathan Lewis - Tue, 2016-12-13 07:11

Richard Foote has published a couple of articles in the last few days on the new (licensed under the Advanced Compression option) compression mechanism in 12.2 for index leaf blocks. The second of these pointed out that the new "high compression" mechanism was even able to compress single-column unique indexes – a detail that doesn't make sense for, and isn't allowed by, the older-style "leading edge deduplication" mechanism for index compression.

In 12.2 an index can be created (or rebuilt) with the option "compress advanced high" – and at the leaf-block level this will create "compression units" (possibly just one per leaf block, based on my early testing) that take the complexity of compression far beyond the level of constructing a directory of prefixes. Richard demonstrated the benefit by creating a table with a numeric unique index, then compressing the index, reducing its size from 2,088 leaf blocks to 965 leaf blocks – a pretty dramatic difference.

It crossed my mind, though, to wonder whether the level of compression was a side effect of the very straightforward code that Richard had used to create the data set: the table was a million rows with a primary key that had been generated as the rownum selected from the now-classic “connect by..” query against dual, and the row length happened to allow for 242 rows per 8KB table block.

If you pause to think about this data set you realise that if you pick the correct starting point and walk through 242 consecutive entries of the index you will be walking through 242 consecutive rows in the table starting from the zeroth row in a particular table block and ending at the 241st row in that block. A rowid (as stored by Oracle in a simple B-tree index) consists of 6 bytes and the first five bytes of the rowid will be the same for the first 256 rows in any one table block (and the first four will still be the same for the remaining rows in the block). Richard’s data set will be very close to ideal for any byte-oriented, or bit-oriented, compression algorithm because (to use Oracle terminology) the data will have a perfect clustering_factor. (He might even have got a slightly better compression ratio if he’d used an /*+ append */ on the insert, or done a CTAS, and reduced the rowsize to get a uniform 256 rows per table block.)
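
This is easy to see for yourself with dbms_rowid (a quick illustration, not part of the original argument; run it against the t1 created below, or any table): rows in the same table block report the same file and block components, so their stored rowids differ only in the trailing row-number bytes.

select
        rowid,
        dbms_rowid.rowid_relative_fno(rowid)    file#,
        dbms_rowid.rowid_block_number(rowid)    block#,
        dbms_rowid.rowid_row_number(rowid)      row#
from    t1
where   rownum <= 4
;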

So how do things change if you randomise the ordering of the key ? Here’s a variant on Richard’s code:


rem
rem	Script:		index_compression_12c_2.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Dec 2016
rem
rem	Last tested 
rem		12.2.0.1
rem

execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4
)
select
	rownum				id,
	lpad('x',130,'x')		padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
order by
	dbms_random.value
;

select table_name, blocks, round(num_rows/blocks,0) rpb from user_tables where table_name = 'T1';

drop index t1_i1;
create unique index t1_i1 on t1(id);
execute dbms_stats.gather_index_stats(user,'t1_i1');
select index_name, compression, pct_free, leaf_blocks from user_indexes where index_name = 'T1_I1';

drop index t1_i1;
create unique index t1_i1 on t1(id) compress advanced high;
execute dbms_stats.gather_index_stats(user,'t1_i1');
select index_name, compression, pct_free, leaf_blocks from user_indexes where index_name = 'T1_I1';

The initial drop index is obviously redundant, and the calls to gather_index_stats should also be redundant – but they’re there just to make it obvious I haven’t overlooked any checks for correctness in the build and stats.

You’ll notice that my row length is going to be slightly more “real-world” than Richard’s so that the degree of compression I get from nearly identical rowid values is likely to be reduced slightly, and I’ve completely randomised the order of key values.

So what do the results look like?

With the default pctfree = 10, in a tablespace of uniform 1MB extents with 8KB blocks, utilising ASSM, I get this:


TABLE_NAME               BLOCKS        RPB
-------------------- ---------- ----------
T1                        19782         51

INDEX_NAME           COMPRESSION     PCT_FREE LEAF_BLOCKS
-------------------- ------------- ---------- -----------
T1_I1                DISABLED              10        2088
T1_I1                ADVANCED HIGH         10        1303

Unsurprisingly the uncompressed index is exactly the same size as Richard’s (well, it was just the integers from 1 to 1M in both cases) but the compression ratio is significantly less – though still pretty impressive.

Of course, for this type of index my example probably goes to the opposite extreme from Richard's. Realistically, if you have a sequence-based key with an OLTP pattern of data arrival then consecutive key values are likely to be scattered within a few blocks of each other rather than being scattered completely randomly across the entire width of the table; so a more subtle model (using a suitable number of concurrent processes to insert ids based on a sequence, perhaps) would probably get a better compression ratio than I did, though a worse one than Richard's. There's also the issue of the size of the key value itself: once you get to values in the order of 10 million to 100 million you're looking at mostly 4 bytes (internal format) of storage, where for large runs of values the first 3 bytes match, possibly leading to a better compression ratio.
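
The internal storage lengths are easy to check with vsize() and dump() (an aside, easily run from SQL*Plus):

select id, vsize(id) bytes, dump(id) internal_format
from   t1
where  id in (1, 999999, 1000000)
;

The dump() output shows the exponent byte followed by the base-100 mantissa bytes, which makes it easy to see how many leading bytes a run of consecutive key values actually shares.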

Of course the question of globally partitioned indexes will be relevant for some people, since the principal reason for global indexes on partitioned tables is to enforce uniqueness on column combinations that don't include the partition key, and that introduces another possible benefit: the rowid goes up to 10 bytes, of which the first 4 bytes are the object id of the table partition. Depending on the nature of the partitioning, that repeated 4 bytes per row may be close to non-existent after compression, giving you a better compression ratio on globally partitioned indexes than you get on any other type of single-column unique index.

Once you start looking at the details there are a surprising number of factors that can affect how well advanced compression high can work.

Footnote:

Once you've created the index, you can start poking around in all sorts of ways to try to work out what the compression algorithm does. A simple block dump is very informative, with lots of clues in the descriptive details – and lots of puzzles when you start looking too closely. There are hints that this type of index compression adopts a strategy similar to "OLTP compression" for tables, in that compression occurs only as the leaf block becomes full – and possibly allows some sort of batching process within a leaf block before finally compressing to a single contiguous unit. (This is just conjecture, at present: the only firm statement I'll make about the actual implementation of index compression is that it uses a lot of CPU; in my example the baseline create index took about 1.5 seconds of CPU, the compressed create took about 4.5 seconds of CPU.)
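
For anyone who wants to follow suit, the dumps are easy to produce (the file and block numbers are placeholders; a treedump of the index is a convenient way to pick a leaf block first):

alter session set events 'immediate trace name treedump level &index_object_id';
alter system dump datafile &file_no block &block_no;

The resulting trace file in the session's trace directory shows the leaf block contents, including the compression unit headers.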

There are also a couple of amusing side effects that may confound those who use the old “validate index / query index_stats” two-step to decide when to rebuild indexes. Here’s what I got on the compressed index:


SQL> validate index t1_i1;

SQL> select blocks, lf_rows, lf_rows_len, btree_space, used_space, pct_used from index_stats;

    BLOCKS    LF_ROWS LF_ROWS_LEN BTREE_SPACE USED_SPACE   PCT_USED
---------- ---------- ----------- ----------- ---------- ----------
      1408    1000000    14979802    10416812   14994105        144

My index is using 144% of the space that it has been allocated. You don’t have to be very good at maths (or math, even) to realise that something’s wrong with that number.

