Pakistan's First Oracle Blog

I Love What I Do: A Blog by Fahd Mirza Chughtai

Oracle Cloud Machine: Your Own Cloud Under Your Own Control

Fri, 2017-07-07 04:12
Yes, every company wants to be on the cloud, but not everyone wants that cloud to be out in the wild, no matter how secure it is. Some want their cloud trapped within their own premises, under their own control.

Enter Oracle Cloud Machine.

Some of the key reasons why this makes sense are sovereignty, residency, compliance, and other business requirements. Moreover, the cloud benefits are still there, like turnkey solutions and the same IaaS and PaaS environments for development, test, and production.

Cost might be a factor here for some organizations, so a hybrid solution might be the way to go for the majority of corporations. Having a private cloud machine alongside public cloud systems would be the solution for many. One advantage here is that the integration of this private cloud with the public one would be streamlined.
Categories: DBA Blogs

Oracle GoldenGate Cloud Service

Wed, 2017-06-28 18:37
Even on Amazon AWS, my tool of choice for migrating Oracle databases from on-prem to the cloud is GoldenGate. The general steps I took for this migration were to create an extract on the on-prem source, which sent data to a replicat running on an EC2 server in the AWS Cloud, which in turn applied the data to the cloud database in RDS.

I was intrigued to see this new product from Oracle: Oracle GoldenGate Cloud Service (GGCS).

So in GGCS, we have the extract, extract trail, and data pump running on-prem, sending data to a Replication VM node in the Oracle Cloud. This Replication VM node has a process called the Collector, which collects the incoming data from on-prem. The Collector then writes this data to a trail file, from which the data is consumed by a Replicat process and applied to the cloud database.
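To make that concrete, the on-prem side boils down to a couple of parameter files along these lines (a sketch with hypothetical names; credentials, trail paths, and the replication node address vary by setup):

-- ext1.prm: local extract capturing changes to a local trail
EXTRACT ext1
USERID ggadmin, PASSWORD ********
EXTTRAIL ./dirdat/lt
TABLE hr.*;

-- pump1.prm: data pump shipping the trail to the Replication VM node
EXTRACT pump1
RMTHOST replication-vm.example.com, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.*;

On the cloud side, the Collector and a Replicat with a MAP hr.*, TARGET hr.*; clause complete the pipeline.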

This product looks good as it leverages existing robust technologies, and it should become the default way to migrate or replicate data between Oracle databases on-prem and in the cloud.
Categories: DBA Blogs

Steps for Moving ASM Disk from FRA to DATA

Sun, 2017-06-18 21:11
Due to an unexpected data load, space in the DATA diskgroup became critically low on one of the production systems in the middle of the night on the weekend. There was no time to get a new disk, and we needed the space to make room for a new data load scheduled to run after 3 hours.

I looked at the tablespaces in the DATA diskgroup, and there wasn't much hope in terms of moving, shrinking, or deleting anything. Also, the upcoming data load was a direct path load, which always writes above the high water mark in segments, so shrinking wasn't of much help.

I looked at the FRA diskgroup and found that there was plenty of space there, so I decided to rob Peter to pay Paul. The plan was to remove a disk from the FRA diskgroup and add it to DATA. This was all done online, and these were the general steps:

Steps for Moving ASM Disk from FRA to DATA:

1) Remove Disk from FRA diskgroup

SQL> alter diskgroup FRA drop disk FRA_06;

Diskgroup altered.

2) Wait for Rebalance to finish

SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;
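Before adding the dropped disk to DATA, it's worth confirming that the rebalance has finished and the disk header now shows FORMER (a quick sanity check I'd add here; the path is the one from this example):

SQL> SELECT name, path, header_status FROM v$asm_disk WHERE path = '/dev/myasm/superdb_fra_06';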

3) Add disk to the DATA diskgroup

SQL> alter diskgroup DATA add disk '/dev/myasm/superdb_fra_06' name DATA_06 rebalance power 8;

4) Wait for Rebalance to finish

SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;

This provided a much-needed breather for the weekend, and the data load ran successfully. We will make sure to provision more disks for the DATA diskgroup and return the borrowed disk to FRA, with thanks.


Categories: DBA Blogs

12c Patching: Resume with Nonrolling Option While Analyzing - JSON Magic

Mon, 2017-05-15 01:41
I was engaged in an interesting Oracle 12c patching exercise today. Patch applicability was checked by using:

"$GRID_HOME/OPatch/opatchauto apply /u01/app/oracle/software/24436306 -analyze"

and it failed because it's a non-rolling patch:

"OPATCHAUTO-72085: Cannot execute in rolling mode, as execution mode is set to non-rolling for patch ID 24315824.
OPATCHAUTO-72085: Execute in non-rolling mode by adding option '-nonrolling' during execution. e.g. /OPatch/opatchauto apply -nonrolling
After fixing the cause of failure Run opatchauto resume with session id "F7ET "]"

So then I wanted to analyze the patch with the non-rolling option:


$GRID_HOME/OPatch/opatchauto apply /u01/app/oracle/software/24436306 -analyze -nonrolling

OPatchauto session is initiated at Mon May 15 01:32:43 2017
Exception in thread "main" java.lang.NoClassDefFoundError: oracle/ops/mgmt/cluster/NoSuchExecutableException
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.loadOptions(SystemInfoGenerator.java:322)
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.validateOptions(SystemInfoGenerator.java:280)
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.main(SystemInfoGenerator.java:134)
Caused by: java.lang.ClassNotFoundException: oracle.ops.mgmt.cluster.NoSuchExecutableException
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 3 more

OPatchauto session completed at Mon May 15 01:32:44 2017
Time taken to complete the session 0 minute, 1 second

opatchauto bootstrapping failed with error code 1.
"


Solution:

In 12c, opatchauto keeps the patching sessions and their configuration in JSON files.

So go to the directory $GRID_HOME/OPatch/auto/dbsessioninfo/ and find the JSON file with session id F7ET, which was given in the above error.
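One quick way to locate the right file (assuming the session id appears in the file name or its contents, so either of these should find it):

$ cd $GRID_HOME/OPatch/auto/dbsessioninfo
$ ls *F7ET*
$ grep -l F7ET *.json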

Edit this file and change the nonrolling flag to true. The file contains:

{
      "key" : "nonrolling",
      "value" : "false"
    },

Change the above to:

{
      "key" : "nonrolling",
      "value" : "true"
    },

Save the file and run the opatchauto analyze again by resuming the session:

$GRID_HOME/OPatch/opatchauto resume -session F7ET

and it works!!!

Happy Patching!!!

Categories: DBA Blogs

Love Your Data Conference in NYC on 31st May

Fri, 2017-04-21 19:11
In this InfoEra, it's all about data. Whether it's in the cloud or on-premises, everything truly revolves around, and exists for, data. Pythian understood that decades ago and has been loving their customers' data since day one. They are showcasing this love on 31st May in NYC.

http://promo.pythian.com/love-your-data-conference/


To help you turn your organization into a truly data-driven business, this interactive 1-day event in New York City on May 31, 2017, combines presentations, practical interactive panel sessions and open discussions across business and technical tracks.

This event is for CIOs and IT business leaders interested in learning how to better empower their companies to drive business outcomes with analytics. Pythian's Love Your Data Conference will focus on practical ways to:
  • Transform your organization using data and self-service analytics
  • Align IT to the business by giving all users access to data
  • Add data intelligence and automation to business decisions
  • Get a 360-degree view of your customer and promote innovation 
If you want to attend only one event this year, then this must be the one.
Categories: DBA Blogs

Google BigQuery and Oracle Smart Scan

Tue, 2017-04-04 23:26
Marveling at technology is my pastime, and lately there are two technologies which have truly made me say 'Simply wow.' One is Google's BigQuery and the other is Oracle's Exadata Smart Scan.

I have been managing data in different databases long enough to appreciate how critical it is for the client to get results out of their data as fast as possible. In the end, it's all about the results returned after issuing a query or clicking a button.

End users and developers don't really care how many terabytes of data are there. DBAs and data architects might love to boast about the humongous volumes of data they store and support, but there is nothing to write home about if that data cannot be retrieved as quickly as possible.

When I first migrated a 9TB database to Oracle Exadata a few years back and ran a heavy report for the first time, it returned results in a jiffy, while my jaw dropped. Without smart scan, this report used to take at least 70 minutes. I had to bring in the developer to double-check whether the results were correct. Oracle's Exadata smart scan is phenomenal.
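If you want to verify that smart scan is actually kicking in for a workload, the cell statistics give a quick indication (a generic check, not from that original migration):

SQL> SELECT name, value FROM v$sysstat
     WHERE name IN ('cell physical IO bytes eligible for predicate offload',
                    'cell physical IO interconnect bytes returned by smart scan');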

I had a similar jaw-dropping experience yesterday when I saw Google Cloud Platform's BigQuery in action during an Onboard session in Sydney. A SQL query with a regex was run on a multi-terabyte dataset with lots of sorting, and it returned the results from the cloud in a few seconds. The best thing about BigQuery is that the all-familiar SQL is used and nothing fancy is needed. You get your petabytes of data warehouse in Google's cloud and then use your SQL to analyze that dataset. The sweet part is the agility and transparency with which that data is returned.
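Just to give a flavor of that kind of query, here is a small regex-plus-aggregation example against one of Google's public sample datasets (standard SQL; not the query from the session):

SELECT word, SUM(word_count) AS total
FROM `bigquery-public-data.samples.shakespeare`
WHERE REGEXP_CONTAINS(word, r'^[Ll]ove')
GROUP BY word
ORDER BY total DESC;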

Simply beautiful.
Categories: DBA Blogs

What Do Oracle DBAs Need to Learn in the Oracle Cloud Platform?

Mon, 2017-03-27 01:28
The transition from on-premises Oracle DBA to Oracle Cloud DBA is imminent for many of us. In fact, IMHO, existing Oracle DBAs will have to manage databases both on-premises and in the cloud for a long time.

So what does an Oracle DBA need to learn in the Oracle Cloud Platform? If you visit the Oracle Cloud website, it's a mouthful and more. It's very easy to get bogged down, as at first there seem to be lots of things to learn.

The good news is that, as an experienced Oracle DBA, you know most of these things already. So just brush up on your basic concepts of cloud computing and then start with the following cloud offerings from the Oracle Cloud Platform:

  • Database Cloud Schema Service
  • Database Cloud Database as a Service
  • Database Backup Cloud Service

Read about the above as much as possible and, if you get a chance, play with them. You will be surprised to find that you already know almost everything about them, as they are built upon existing Oracle technologies.

One thing which is a must for this brave new world of the Oracle Cloud DBA (OCDBA), in the Oracle Cloud Platform or any other cloud platform, is knowing how to migrate an Oracle database to the Oracle Cloud (or any other cloud, for that matter). For this purpose, make sure you understand the following concepts:

  • Oracle Goldengate
  • Oracle Datapump
  • Oracle Secure External Password Store (see the sketch after this list)
  • Oracle Connection Manager
  • RMAN
  • Oracle Cloud Control
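As a taster for the Secure External Password Store, which keeps cloud database credentials out of your migration scripts, the wallet setup is just a couple of commands (a sketch; the wallet path and TNS alias are hypothetical):

$ mkstore -wrl /u01/app/oracle/wallet -create
$ mkstore -wrl /u01/app/oracle/wallet -createCredential clouddb system

With WALLET_LOCATION and SQLNET.WALLET_OVERRIDE=TRUE set in sqlnet.ora, sqlplus /@clouddb then connects without exposing a password.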

If all of the above is ready, you are good to go! :)
Categories: DBA Blogs

Is Oracle Database in Cloud PaaS, IaaS, SaaS, or DBaaS?

Sat, 2017-03-25 01:15
Question: Is Oracle Database in Cloud PaaS, IaaS, SaaS, or DBaaS?
Answer:
  • If you install and manage an Oracle database in the cloud by yourself, then you are using it on IaaS.
  • If you are just using it in the cloud without installing or managing it, then it's PaaS.
  • If you are configuring the database instance and have access to it through SQL*Net, then it's DBaaS.
  • SaaS is not really relevant when it comes to the Oracle database in the cloud, as databases mostly reside at the backend of applications, whereas SaaS is primarily all about applications.


Categories: DBA Blogs

Upgrade of Oracle GI 11.2.0.3 to Oracle 12.1.0.2 Steps

Thu, 2017-03-23 22:14
Just noting down the high-level steps which were performed for the upgrade of Oracle GI 11.2.0.3 to Oracle 12.1.0.2 on RHEL 64-bit Linux.

Create a backup of the 11g GI home as the root user:

. oraenv <<< +ASM
sudo su -
cd /oracle/backup
tar -cvf backup_GI.tar /u01/grid/product/11.2.0.3
tar -cvf backup_inventory.tar /var/opt/ora/oraInventory/

1) Stop all DB instances and the listener.

2) Run the GI 12c runInstaller in silent mode, using the UPGRADE option in the response file (see the sketch after these steps).

3) Run the rootupgrade.sh script as the root user.

4) Check the HAS version and oratab:

crsctl query has releaseversion
cat /etc/oratab

5) Start the listener and DB instances.

6) Clean up the backup files (this can be done a few days later).
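For step 2, the silent invocation would look roughly like this (a sketch; the staging paths are placeholders of mine, and the response file is one prepared with oracle.install.option=UPGRADE):

$ /stage/grid12c/runInstaller -silent -responseFile /stage/grid_upgrade.rsp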

Hope that helps.
Categories: DBA Blogs

Google Cloud Platform Fundamentals in Sydney

Sun, 2016-12-11 22:58
Just finished up a one-day training at Google's Sydney office on Google Cloud Platform Fundamentals. GCP is pretty cool, and I think I like it.

Lots of our customers at Pythian are already hosting on the cloud, migrating to it, or thinking of doing so. Pythian already has a huge presence in the cloud using various technologies.

So it was good to learn something about Google's cloud offering. It was a pleasant surprise, as it all made sense. From App Engine to Compute Engine and from Bigtable to BigQuery, the features are sound, mature, and ready to use.

The dashboard is simple too. I will be blogging more about it as I play with it in the coming days.
Categories: DBA Blogs

Speaking at APAC OTN TOUR 2016 in Wellington, New Zealand

Sun, 2016-10-23 19:44
The APAC OTN Tour 2016 will be running from October 26th until November 11th, visiting 4 countries and 7 cities in the Asia Pacific region.

I will be speaking at the APAC OTN Tour 2016 in Wellington, New Zealand on 26th October, on a topic which is very near and dear to me: Exadata and the cloud.

My session is "12c Multi-Tenancy and Exadata IORM: An Ideal Cloud-Based Resource Management."

Hope to see you there!

Categories: DBA Blogs

ORA-01156 When Adding Standby Redo Log in Dataguard Configuration

Sat, 2016-10-08 20:20
Database version = 11.2.0.3.0

If you get the following error while adding a standby redo log (SRL) to a standby database in a Data Guard configuration:

SQL> alter database add standby logfile group 3 '/logs/test/sbyredo3a.log' size 500M;
alter database add standby logfile group 3 '/logs/test/sbyredo3a.log' size 500M
*
ERROR at line 1:
ORA-01156: recovery or flashback in progress may need access to files


Then it means that first you have to stop the redo apply, add the SRL, and then restart the redo apply. The best way to do this is from DGMGRL, like this:

DGMGRL> connect /
Connected.

DGMGRL> edit database 'test' set state='APPLY-OFF';
Succeeded.

SQL> alter database add standby logfile group 3 '/logs/test/sbyredo3a.log' size 500M;

Database altered.

DGMGRL> edit database 'test' set state='APPLY-ON';
Succeeded.
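You can then verify the new standby redo log from SQL*Plus (an optional check, not part of the original steps):

SQL> SELECT group#, thread#, bytes/1024/1024 AS mb, status FROM v$standby_log;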

HTH.

Categories: DBA Blogs

Wordpress Fatal error: Call to undefined method PHPMailer::getCustomHeaders()

Sat, 2016-07-30 03:06

With numerous new themes and new versions of WordPress, when you try to publish a post or a page, send an email, or use a contact form, you might get the following error:


Fatal error: Call to undefined method PHPMailer::getCustomHeaders()

This error normally occurs if you are using incompatible WordPress, theme, and plugin versions. The best course of action is to update WordPress, your theme, and all the plugins, and make sure that the Contact Form 7 plugin isn't complaining about the WordPress version.

If you cannot upgrade your versions, or if the error occurs even after the upgrade, then you can use the following workaround:

Workaround:

Make the following change in the mail.php file in the directory wordpress/wp-content/plugins/contact-form-7/includes.

Go to line 171 and change:
 
add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );


to
 

//add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );

Hope that helps.
Categories: DBA Blogs

Opatchauto Session failed: Parameter validation failed

Wed, 2016-02-10 20:12
While applying a PSU on a Grid home in 12c, you might have to roll back a few patches due to patch conflicts before you can apply the PSU.

After rolling back the patches from the grid home, when you try to run the opatch analyze command again, you might encounter the following error:


[root ~]# $Grid_Home/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

OPatch Automation Tool

Copyright (c)2014, Oracle Corporation. All rights reserved.
OPatchauto Version : 12.1.0.1.10
OUI Version        : 12.1.0.2.0
Running from       : $Grid_Home

opatchauto log file: $Grid_Home/cfgtoollogs/opatchauto/22191349/opatch_gianalyze.log
NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.
OCM RSP file has been ignored in analyze mode. 

Clusterware is either not running or not configured. You have the following 2 options:
1. Configure and start the Clusterware on this node and re-run the tool
2. Run the tool with '-oh <GI_HOME>' to first patch the Grid Home, then invoke tool with '-database <database name>' or '-oh <RAC_HOME>' to patch the RAC home
Parameter Validation: FAILED

Opatchauto Session failed: Parameter validation failed
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Stream closed
                at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:432)
                at oracle.opatchauto.gi.GILogger.printStackTrace(GILogger.java:447)
                at oracle.opatchauto.gi.OPatchauto.main(OPatchauto.java:97)
Caused by: java.io.IOException: Stream closed
                at java.io.BufferedWriter.ensureOpen(BufferedWriter.java:98)
                at java.io.BufferedWriter.write(BufferedWriter.java:203)
                at java.io.Writer.write(Writer.java:140)
                at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:426)
                ... 2 more

opatchauto failed with error code 1.

Then, if you try to start the HAS services, you get the following error:

[root ~]# $Grid_Home/bin/crsctl start has
CRS-6706: Oracle Clusterware Release patch level ('3749979535') does not match Software patch level ('2278979115'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.

SOLUTION:

So in order to resolve this, you need to issue the following command as the root user:
$ORA_GI_HOME/crs/install/roothas.pl -postpatch

It will start the HAS services too.
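To confirm the stack is back, a quick check (not part of the original fix):

[root ~]# $Grid_Home/bin/crsctl check has
CRS-4638: Oracle High Availability Services is online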

Then run the analyze command again as given above, and it will work.
Categories: DBA Blogs

Step by Step Jan 2016 PSU Patch Apply on 12c Grid and RDBMS Homes in Linux

Tue, 2016-02-09 20:05

The following step-by-step action plan is for a single instance database stored on ASM in 12.1.0.2 on Linux (OEL 6 64-bit in this case).

Step 1 (ETA: 15 min): Update the OPatch utility:

For Database home:

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/product/12.1.0/db_1
$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch version

For Grid home:

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/12.1.0.2/grid
$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch version
Step 2 (ETA: 5 min): Create the ocm.rsp file:

Note: press the Enter/Return key without providing any input, and answer Yes when asked to confirm.

$ export ORACLE_HOME=/u01/app/oracle/12.1.0.2/grid
$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /stage/ocm.rsp
Step 3 (ETA: 5 min): Validate the Oracle inventory.

Before beginning patch application, check the consistency of inventory information for GI home and each database home to be patched. Run the following command as respective Oracle home owner to check the consistency.

For database home:

$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/db_1

For Grid home:

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/12.1.0.2/grid

If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have the status prior to the patch apply.
Step 4 (ETA: 5 min): Stage the patch:

$ mkdir /stage/PSUpatch
$ cp /stage/p22191349_121020_Linux-x86-64.zip /stage/PSUpatch

Check that the directory is empty.
$ cd /stage/PSUpatch
$ ls

Unzip the patch as the grid home owner.

$ unzip p22191349_121020_Linux-x86-64.zip
Step 5 (ETA: 10 min): One-off patch conflict detection and resolution.

Run it as the root user:

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

It will ask you to roll back identical patches, like this:

Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...
Patch "/stage/PSUpatch/22191349/21436941" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948341" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948344" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948354" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.

So first roll back the above 4 patches by issuing the following as the grid owner from the grid home:

opatch rollback -id 21948354 -local -oh /u01/app/oracle/12.1.0.2/grid (Repeat for all 4 patches)

Note: In some cases, weirdly, I had to shut down the HAS services as the root user before the patch rollback by using:

/u01/app/oracle/12.1.0.2/grid/bin/crsctl stop has -f

After this again run:

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

If the analyze command fails, then run this as the root user:

$ORA_GI_HOME/crs/install/roothas.pl -postpatch

It will start the HAS services too.

Then run the analyze command again as given above.

It will show something like:

Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...
Patch "/stage/PSUpatch/22191349/21436941" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948341" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948344" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948354" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.

Now you are good to apply the patch. Proceed to the next step.

Step 6 (ETA: 60 min): Apply the patch. (Note: this should apply the patch in both the GI and RDBMS homes, but it is unreliable in that sense, so after it completes, check opatch lsinventory to make sure it also applied the patches in the RDBMS home.)

As root user, execute the following command:

# /u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -ocmrf /stage/ocm.rsp

If it doesn't apply in the RDBMS home, then run:

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatchauto apply /stage/PSUpatch/22191349 -oh /u01/app/oracle/product/12.1.0/db_1 -ocmrf /stage/ocm.rsp

Make sure the above applies both the OCW and PSU patches. You can verify that from opatch lsinventory. If only the OCW patch is present in the output and no PSU (which is likely the case), then issue the following from the Oracle home, as the oracle database owner, after shutting down the database:

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatch apply -oh /u01/app/oracle/product/12.1.0/db_1 -local /stage/PSUpatch/22191349/21948354
Step 7 (ETA: 60 min): Load the modified SQL files into the database:

% sqlplus /nolog
SQL> Connect / as sysdba
SQL> startup
SQL> quit
% cd $ORACLE_HOME/OPatch
% ./datapatch -verbose
Step 8 (ETA: 5 min): Check the list of patches applied to the database:

SQL> select action_time, patch_id, patch_uid, version, status, bundle_series, description from dba_registry_sqlpatch;
Categories: DBA Blogs

exec_as_oracle_script

Sat, 2015-10-17 01:04
There might be a situation where executing some DDL in a pluggable database causes the following error:

ORA-65040: operation not allowed from within a pluggable database

This error could occur if a tablespace is being dropped from within a PDB and that tablespace is a former default tablespace holding some system objects. Even system objects cannot be moved with simple alter statements from within PDBs.

So in order to move these objects from within PDBs, you should use the procedure dbms_pdb.exec_as_oracle_script, which is undocumented so far.

For example:

exec dbms_pdb.exec_as_oracle_script('alter table <owner>.<table_name> move tablespace <tablespace_name>');
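With hypothetical object names filled in, it would look like:

exec dbms_pdb.exec_as_oracle_script('alter table SYSTEM.AUD_HIST move tablespace USERS');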

From My Oracle Support, Doc ID 1943303.1 lists:

--   This procedure enables execution of certain restricted statements (most DDLs) on metadata-linked objects, from within a PDB.
Categories: DBA Blogs

Got Published in AUSOUG's Foresight Online Spring 2015

Tue, 2015-10-13 01:47
AUSOUG's Foresight Online Spring 2015 edition is the premier publication of the Australian Oracle User Group.

Following are the highlights of this edition:

  • President's Message
  • DBA Article: Automated Testing of Oracle BPM Suite 12c Processes with SOAP UI - Peter Kemp, State Revenue Office, Victoria
  • DBA Article: Best Practices for Oracle on Pure Storage
  • Apps Article: Performance Review Data Capture - Brad Sayer, More4Apps
  • DBA / Dev Article: Database Developers – Feeling Left Out of Agile? - D Nowrood, Dell Software
  • Apps Article:  Cost-effective alternative to Oracle Discoverer and BI Suite - Wilhelm Hamman, Excel4apps
  • DBA Article: DBA101 - characterset corruption - Paul Guerin, HP
  • Quick Tips 1: Five Reasons to Upgrade to APEX 5.0 - Scott Wesley, Sage Computing Services
  • Quick Tips 2: Last Successful login time in 12c - Fahd Mirza Chughtai, The Pythian Group
Categories: DBA Blogs

Sharding in Oracle 12c Database

Mon, 2015-10-12 22:22
Sharding is still pretty much an alien, or at least fairly new, concept for Oracle DBAs. In the realm of big data, though, this term is used quite extensively.

What is Sharding in simple words:

Sharding is partitioning. Horizontal partitioning to be exact.

Sharding means partitioning a table's rows on the basis of some criteria and storing those partitioned rows of the table (i.e. a shard) on different database servers. These database servers are cheap, low-cost commodity servers.

The benefits include smaller data to manage, smaller backups, faster reads, and faster response time for the queries.

Just like the existing partitioning options in the Oracle database, there are generally three kinds of sharding (a partitioning analogy follows this list):

Range Sharding
List Sharding
Hash Sharding
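As a single-database analogy, hash partitioning spreads rows across partitions by a hash of the key, the same way hash sharding spreads rows across servers (plain Oracle partitioning DDL, not sharding syntax):

CREATE TABLE customers (
  cust_id NUMBER,
  name    VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 4;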

The news out there on social media is that the next version of Oracle 12c is coming up with a sharding option. That is pretty exciting; let's see what they come up with in this regard.


Categories: DBA Blogs

ORA-01917: user or role 'PDB_DBA' does not exist

Sat, 2015-09-19 01:42
I manually created a container database (CDB) 12.1.0.2.0 in my Linux 6.4 based virtual machine. After creating it, I tried to create a pluggable database but got an ORA-01917 error complaining that user or role 'PDB_DBA' does not exist:

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
cdbtest          OPEN

1 row selected.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select * from cdb_pdbs;

    PDB_ID
----------
PDB_NAME
--------------------------------------------------------------------------------
      DBID    CON_UID GUID                             STATUS    CREATION_SCN
---------- ---------- -------------------------------- --------- ------------
       VSN LOGGING   FOR FOR     CON_ID
---------- --------- --- --- ----------
         2
PDB$SEED
3972600152 3972600152 201432B4FDA10CEBE0530900000AFE92 NORMAL             213
 202375680 LOGGING   NO  NO           2


So I tried different options, but it always gave that error about the PDB_DBA role. Searching MOS turned up a document.

This document states that if we create a CDB manually, we might have omitted some options, and so we need to create it from DBCA instead; a CDB and its PDBs need to be installed with all the options. I had created the CDB manually in the first place because I had low disk space in my virtual machine, and I wasn't intending to use all the options.

So if you get that error, the solution is to create the database through DBCA.
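For reference, a minimal silent DBCA invocation for a CDB looks roughly like this (a sketch; the names are mine and the passwords are placeholders):

$ dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName cdbtest -sid cdbtest -createAsContainerDatabase true \
  -sysPassword MyPassw0rd -systemPassword MyPassw0rd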
Categories: DBA Blogs

Amazon S3 to Glacier - Cloud ILM

Wed, 2015-09-09 19:27
Falling in love with Kate Upton is easy, but even easier is to be swept off your feet by information lifecycle management (ILM) in Amazon Web Services (AWS). Simple, easily configurable, fast, reliable, cost-effective, and proven are the words which describe it.

Pythian has been involved with ILM for a long time. Across various flavors of databases and systems, Pythian has been overseeing the creation, alteration, and flow of data until it becomes obsolete. That is why AWS's ILM resonates perfectly well with Pythian's expertise.

Amazon S3 is an object store for short-term storage, whereas Amazon Glacier is their cloud archiving offering, i.e. storage for the long term. Rules can be defined on the information to specify and automate its lifecycle.

The following screenshot shows the rules being configured on objects, from the S3 bucket to Glacier and then to permanent deletion: 90 days after the creation of an object, it is moved to Glacier, and then after 1 year it is permanently deleted. The graphical representation of the lifecycle is quite intuitive.
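The screenshot is not reproduced here, but an equivalent rule can also be set from the AWS CLI. A sketch, where the bucket name is mine and I'm reading "after 1 year" as one year after the move to Glacier (so day 455 from creation):

$ aws s3api put-bucket-lifecycle-configuration --bucket my-ilm-bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "to-glacier-then-delete",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 455}
  }]
}'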



Categories: DBA Blogs
