Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

List of Networking Concepts to Pass AWS Cloud Architect Associate Exam

Wed, 2017-11-08 16:31
Networking is a pivotal concept in cloud computing, and knowing it is a must to be a successful cloud architect. Of course, you won't be physically peeling cables to crimp on RJ45 connectors, but you must know the various facets of logical networking.


You never know exactly what's going to be in the exam, but that's what exams are all about. To prepare for the AWS Cloud Architect Associate exam, you must thoroughly read and understand the following from the AWS documentation:


Before you read the above, it would be very beneficial to also learn the following networking concepts:

  • LAN
  • WAN
  • IP addressing
  • Difference between IPv4 and IPv6
  • CIDR
  • SUBNET
  • VPN
  • NAT
  • DNS
  • OSI Layers
  • TCP
  • UDP
  • ICMP
  • Router, Switch
  • HTTP
  • NACL
  • Internet Gateway
  • Virtual Private Gateway
  • Caching, Latency
  • Networking commands like route, netstat, ping, tracert, etc.
Feel free to add in the comments any other networking concept which I might have missed.
Categories: DBA Blogs

Guaranteed Way to Pass AWS Cloud Architect Certification Exam

Tue, 2017-11-07 06:00
Today, and for some time to come, one of the hottest IT certifications to hold is the AWS Cloud Architect certification. There are various reasons for that:



  • If you pass it, it really means you know the material properly.
  • AWS is the cloud platform of choice the world over, and it's not going anywhere.
  • There is literally a mad rush out there as companies scramble to shift or extend their infrastructure to the cloud to stay relevant and to cut costs.
  • There is a huge shortage of professionals with theoretical and hands-on know-how of the cloud, and this shortage is growing alarmingly.
So it's not surprising that sysadmins, developers, DBAs and other IT professionals are yearning to achieve cloud credentials, and there is no better way to do that than getting AWS certified.

So is there any guaranteed way to pass the AWS Cloud Architect certification exam?

I say yes, and here is how:

Read the AWS documentation for the following AWS services. Read about these services, then read about them again and again. Learn them like you know your own name. Get a free account and play with these services. When you feel comfortable enough with them and can recite them to anyone inside out, go ahead and sit the exam; you will pass it for sure. So read and learn all the services under these sections:


  • Compute
  • Storage
  • Database 
  • Network & Content Delivery
  • Messaging
  • Identity and Access Management
Also make sure to read the FAQs of all the above services. Also read and remember what AWS Kinesis, WAF, Data Pipeline, EMR, and WorkSpaces are. No details are necessary for these; just what they stand for and what they do.

Best of Luck.
Categories: DBA Blogs

Passed the AWS Certified Solutions Architect - Associate Exam

Tue, 2017-11-07 05:11
Well, it was quite an enriching experience to take the AWS certification exam, and I am humbled to say that I passed it. It was the first time I had taken any AWS exam, and I must say the quality was high; it was challenging and interesting enough.

I will be writing soon about how I prepared and my tips for passing this exam.

Good night for now.
Categories: DBA Blogs

CIDR for Dummies DBA in Cloud

Sun, 2017-10-01 02:00
For cloud DBAs, it's imperative to learn various networking concepts, and CIDR is one of them. Without going into much detail, I will just post a quick note here on what CIDR is and how to use it.



A CIDR looks something like this:

10.0.0.0/28

10.0.0.0/28 represents a range of IP addresses, and no, it's NOT from 10.0.0.0 to 10.0.0.28. Here is what it is:

To find out how many IP addresses are in that range, and where it starts and ends, the formula is:

2 ^ (32 - prefix length)

So for the CIDR 10.0.0.0/28 :

2 ^ (32 - 28) = 2 ^ 4 = 2 * 2 * 2 * 2 = 16

So in the CIDR range 10.0.0.0/28, we have 16 IP addresses, in which:

Start IP = 10.0.0.0
End IP  = 10.0.0.15



Also, cloud providers normally reserve a few IPs out of this CIDR range for different services like DNS, NAT, etc. For example, AWS reserves the first four and the last IP address of any subnet's CIDR range (five in total). So in our example, we would have just 11 IP addresses to work with in AWS.

So in the case of AWS, we would have a region containing a VPC, and the CIDR is assigned to that VPC. In that VPC, say we have 2 subnets; we can distribute the usable IPs from our CIDR 10.0.0.0/28 across both subnets. Below I am giving 5 IPs to each subnet. A subnet is just a logically separate network.

For example we can give:

Subnet 1:

10.0.0.5 to 10.0.0.9

Subnet 2:

10.0.0.10 to 10.0.0.14 
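The arithmetic above can be checked with Python's standard ipaddress module. A minimal sketch — note that in real AWS a VPC CIDR is carved into subnets as smaller CIDR blocks (e.g. two /29s), rather than arbitrary IP ranges like the simplified split above:

```python
import ipaddress

# The /28 prefix leaves 32 - 28 = 4 host bits, so the block
# contains 2 ** 4 = 16 addresses.
net = ipaddress.ip_network("10.0.0.0/28")

print(net.num_addresses)  # 16
print(net[0])             # 10.0.0.0  (start of the range)
print(net[-1])            # 10.0.0.15 (end of the range)

# Splitting the /28 into two equal halves yields two /29 subnets:
for subnet in net.subnets(prefixlen_diff=1):
    print(subnet)         # 10.0.0.0/29, then 10.0.0.8/29
```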

Hope that helps.

PS. And oh, CIDR stands for Classless Inter-Domain Routing (also known as supernetting).
Categories: DBA Blogs

Idempotent and Nullipotent in Cloud

Tue, 2017-09-19 04:50
I was going through the documentation of Oracle Cloud IaaS when I came across the vaguely familiar term idempotent.



One great thing which I have felt very strongly amid all this cloud-mania is the recall of various theoretical computing concepts which we learned in university courses way back. From networking through web concepts to operating systems, there is a plethora of concepts coming back into active practice in the everyday life of cloud professionals.

Two such mouthful words are idempotent and nullipotent. These are types of actions; the difference between an idempotent and a nullipotent action is the result they return when performed.

In simple terms;

    When executed, an idempotent action produces a result the first time, and that result remains the same no matter how many times the action is repeated after that first time.

    A nullipotent action always produces the same result whether it is executed several times or not executed at all.
 
So in the world of cloud, where REST (Representational State Transfer) APIs and HTTP (Hypertext Transfer Protocol) are the norm, these two concepts of idempotent and nullipotent are very important. To manage resources in the cloud (through URIs), various HTTP actions can be performed. Some of these actions are idempotent and some are nullipotent.

For example, the GET action of HTTP is nullipotent: no matter how many times you execute it, it doesn't affect the state of the resource and returns the same result. PUT is an idempotent HTTP action: it changes the state of the resource the first time it's executed, and all subsequent executions of the same PUT leave the state exactly as the first one did.
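The distinction can be sketched with a toy in-memory "resource store" in Python — the names and structure here are made up purely for illustration, not any real cloud API:

```python
# Toy resource store standing in for the server-side state behind a REST API.
store = {}

def put(resource_id, body):
    """Idempotent: the first call changes state; repeating the exact
    same call leaves the state as the first call left it."""
    store[resource_id] = body
    return store[resource_id]

def get(resource_id):
    """Nullipotent: never changes state, so the result is the same
    whether it runs once, many times, or not at all."""
    return store.get(resource_id)

put("vm-1", {"shape": "small"})
snapshot = dict(store)
put("vm-1", {"shape": "small"})    # repeating the same PUT...
assert store == snapshot           # ...changes nothing further
assert get("vm-1") == get("vm-1")  # GET is repeatable and side-effect free
```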
Categories: DBA Blogs

SRVCTL Status Doesn't Show RAC instances Running Unlike SQLPLUS

Mon, 2017-09-18 18:34
Yesterday, I converted a single-instance 12.1.0.2.0 physical standby database to a cluster database with 2 nodes.

After converting it to a RAC database, I brought both instances up in mount state on both nodes, and they came up fine. I then started managed recovery on one node; it worked perfectly and got in sync with the primary.


Then I added them as cluster resources with srvctl like this:

$ srvctl add database -d mystb -o /d01/app/oracle/product/12.1.0.2/db_1 -r PHYSICAL_STANDBY -s MOUNT
$ srvctl add instance -d mystb -i mystb1 -n node1
$ srvctl add instance -d mystb -i mystb2 -n node2

But srvctl status didn't show it running:

$ srvctl status database -d mystb -v
Instance mystb1 is not running on node node1
Instance mystb2 is not running on node node2

While from SQL*Plus, I could see both instances mounted:

SQL> select instance_name,status,host_name from gv$instance;

INSTANCE_NAME    STATUS   HOST_NAME
---------------- -------- ---------
mystb1           MOUNTED  node1
mystb2           MOUNTED  node2

So I needed to start the database with srvctl (though it was already started and mounted) just to please srvctl.

So I ran this:

$ srvctl start database -d mystb

The command didn't do anything but change the status of the resource in the cluster. After running the above, it worked:


$ srvctl status database -d mystb -v
Instance mystb1 is running on node node1
Instance mystb2 is running on node node2
Categories: DBA Blogs

Attended Google Cloud Summit in Sydney

Wed, 2017-09-13 00:50
The day event at the picturesque Pier One Autograph Collection, just under the shadow of Sydney's iconic Harbour Bridge, was very interesting, to say the least.


Key points from the event:

  • Google is investing heavily in the APAC region for cloud.
  • The Sydney region for Google Cloud Platform is up and running.
  • In 3 or 4 years, it will be all about containers.
  • Machine learning is a big thing and is finally here in the true sense.
  • Also lots of tips and advice for partners.
  • Training and security are top concerns for cloud customers.
  • Companies simply have no reason to manage their own data centers when the cloud is here.
Machine learning is terrific, especially in the demo by Google where DeepMind teaches itself to walk.
Categories: DBA Blogs

SPX86-8002-VP - The /var/log filesystem has exceeded the filesystem capacity limit.

Wed, 2017-09-13 00:38
The following error message sounds ominous:

SPX86-8002-VP - The /var/log filesystem has exceeded the filesystem capacity limit.

and from Cloud Control:




A processor component is suspected of causing a fault with a 100% certainty. Component Name : /SYS/SP Fault class : fault.chassis.device.misconfig

But in fact, most of the time it's not as bad as it sounds.

More often than not, rebooting the ILOM does the trick and then this error goes away.

Just go to /SP in the ILOM and reset it. The next ILOM snapshot, which takes some time, will clear the error away.
Categories: DBA Blogs

Presented at CLOUG OTN Day 2017, Chile stop of the 2017 LAD OTN Tour

Sun, 2017-07-30 20:37
Amidst lots of empanadas and lomo saltados, I presented at CLOUG OTN Day 2017, the Chile stop of the 2017 LAD OTN Tour, last week, and it was great to see a very passionate audience.




Despite the long flight and the opposite time zone, Santiago, Chile came across as very welcoming and lively. The event was very well organized and studded with international speakers, including fellow Pythianite Bjoern Rost and various other well-known speakers like Markus Michalewicz, Ricardo Gonzalez, Craig Shallahamer, and so on.





Categories: DBA Blogs

Oracle Cloud Machine ; Your Own Cloud Under Your Own Control

Fri, 2017-07-07 04:12
Yes, every company wants to be on the cloud, but not everyone wants that cloud to be out there in the wild, no matter how secure it is. Some want their cloud kept within their own premises, under their own control.




Enter Oracle Cloud Machine.

Some of the key reasons why this makes sense are sovereignty, residency, compliance, and other business requirements. Moreover, the cloud benefits are still there: turnkey solutions and the same IaaS and PaaS environments for development, test, and production.

Cost might be a factor here for some organizations, so a hybrid solution might be the way to go for the majority of corporations. Having a private cloud machine alongside public cloud systems would be the solution for many. One advantage here is that the integration of this private cloud with the public one would be streamlined.
Categories: DBA Blogs

Oracle GoldenGate Cloud Service

Wed, 2017-06-28 18:37
Even on Amazon AWS, my tool of choice for migrating Oracle databases from on-prem to the cloud is GoldenGate. The general steps I took for this migration were to create an extract on the on-prem source, which sent data to a replicat running on an EC2 server in the AWS cloud, which in turn applied the data to the cloud database in RDS.




I was intrigued to see this new product from Oracle which is Oracle GoldenGate Cloud Service (GGCS).

So in GGCS, we have the extract, extract trail, and data pump running on-prem, sending data to a replication VM node in the Oracle Cloud. This replication VM node has a process called the collector, which collects the incoming data from on-prem. The collector then writes this data to a trail file, from which the data is consumed by a replicat process and applied to the cloud database.

This product looks good, as it leverages existing robust technologies, and it should become the default way to migrate or replicate data between on-prem and cloud Oracle databases.
Categories: DBA Blogs

Steps for Moving ASM Disk from FRA to DATA

Sun, 2017-06-18 21:11
Due to some unexpected data load, space in the DATA diskgroup became critically low on one of the production systems in the middle of the night on the weekend. There was no time to get a new disk, and we needed the space to make room for a new data load scheduled to run 3 hours later.

Looking at the tablespaces in the DATA diskgroup, there wasn't much hope in terms of moving, shrinking, or deleting anything. Also, the upcoming data load was a direct-path load, which always writes above the high water mark in segments, so shrinking wasn't much help either.

Looking at the FRA diskgroup, I found there was plenty of space there, so I decided to rob Peter to pay Paul. The plan was to remove a disk from the FRA diskgroup and add it to DATA. This was all done online, and these were the general steps:

Steps for Moving ASM Disk from FRA to DATA :

1) Remove Disk from FRA diskgroup

SQL> alter diskgroup FRA drop disk FRA_06;

Diskgroup altered.

2) Wait for Rebalance to finish

SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;

3) Add disk to the DATA diskgroup

SQL> alter diskgroup DATA add disk '/dev/myasm/superdb_fra_06' name DATA_06 rebalance power 8;

4) Wait for Rebalance to finish

SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;

This provided a much-needed breather for the weekend, and the data load ran successfully. We will be making sure to provision more disks to the DATA diskgroup and return the FRA disk to FRA, with thanks.


Categories: DBA Blogs

12c Patching Resume with Nonrolling Option While Analyze - JSON Magic

Mon, 2017-05-15 01:41
I was engaged in some interesting Oracle 12c patching today. Patch applicability was checked using:

"$GRID_HOME/OPatch/opatchauto apply /u01/app/oracle/software/24436306 -analyze"

and it failed because it's a non-rolling patch:





"OPATCHAUTO-72085: Cannot execute in rolling mode, as execution mode is set to non-rolling for patch ID 24315824.
OPATCHAUTO-72085: Execute in non-rolling mode by adding option '-nonrolling' during execution. e.g. /OPatch/opatchauto apply -nonrolling
After fixing the cause of failure Run opatchauto resume with session id "F7ET "]"

So now I wanted to analyze the patch with the non-rolling option:


$GRID_HOME/OPatch/opatchauto apply /u01/app/oracle/software/24436306 -analyze -nonrolling

OPatchauto session is initiated at Mon May 15 01:32:43 2017
Exception in thread "main" java.lang.NoClassDefFoundError: oracle/ops/mgmt/cluster/NoSuchExecutableException
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.loadOptions(SystemInfoGenerator.java:322)
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.validateOptions(SystemInfoGenerator.java:280)
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.main(SystemInfoGenerator.java:134)
Caused by: java.lang.ClassNotFoundException: oracle.ops.mgmt.cluster.NoSuchExecutableException
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 3 more

OPatchauto session completed at Mon May 15 01:32:44 2017
Time taken to complete the session 0 minute, 1 second

opatchauto bootstrapping failed with error code 1.
"


Solution:

In 12c, patching sessions and their configuration are kept in JSON files.

So go to the directory $GRID_HOME/OPatch/auto/dbsessioninfo/

and find the JSON file with session id F7ET, which was given in the error above.

Edit this file and change the nonrolling flag to true.

{
      "key" : "nonrolling",
      "value" : "false"
    },

Change the above to:

{
      "key" : "nonrolling",
      "value" : "true"
    },

Save the file and run the opatchauto analyze again, resuming the session:

$GRID_HOME/OPatch/opatchauto resume -session F7ET

and it works!!!
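The manual edit above can also be scripted. A minimal sketch, assuming the nonrolling flag sits in a JSON list of key/value entries as shown in the snippet — the actual file name and surrounding structure vary, so inspect the real session file under $GRID_HOME/OPatch/auto/dbsessioninfo/ first:

```python
import json

def set_nonrolling(entries, value="true"):
    """Flip every entry whose "key" is "nonrolling" in a list of
    key/value dicts to the given value and return the modified list."""
    for entry in entries:
        if entry.get("key") == "nonrolling":
            entry["value"] = value
    return entries

# Hypothetical usage against the session file (path and name assumed):
# with open("F7ET.json") as f:
#     config = json.load(f)
# set_nonrolling(config)
# with open("F7ET.json", "w") as f:
#     json.dump(config, f, indent=2)

demo = [{"key": "nonrolling", "value": "false"}]
print(set_nonrolling(demo))  # [{'key': 'nonrolling', 'value': 'true'}]
```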

Happy Patching!!!

Categories: DBA Blogs

Love Your Data Conference in NYC on 31st May

Fri, 2017-04-21 19:11
In this InfoEra, it's all about data. Whether it's in the cloud or on-premises, everything truly revolves around, and exists for, data. Pythian understood that decades ago and has been loving its customers' data since day one. They are showcasing this love on 31st May in NYC.

http://promo.pythian.com/love-your-data-conference/


To help you turn your organization into a truly data-driven business, this interactive 1-day event in New York City on May 31, 2017, combines presentations, practical interactive panel sessions and open discussions across business and technical tracks.

This event is for CIOs and IT business leaders interested in learning how to better empower their company to drive business outcomes with analytics. Pythian's Love Your Data Conference will focus on practical ways to:
  • Transform your organization using data and self-service analytics
  • Align IT to the business by giving all users access to data
  • Add data intelligence and automation to business decisions
  • Get a 360-degree view of your customer and promote innovation 
If you want to attend only one event this year, then this must be the one.
Categories: DBA Blogs

Google Big Query and Oracle Smart Scan

Tue, 2017-04-04 23:26
Marveling at technology is my pastime, and lately there are 2 technologies which have truly made me say, 'Simply wow.' One is Google's Big Query and the other is Oracle's Exadata Smart Scan.

I have been managing data in different databases long enough to appreciate how critical it is for the client to get results out of their data as fast as possible. In the end, it's all about the results returned after issuing a query or clicking a button.

End users and developers don't really care how many terabytes of data are there. DBAs and data architects might love to boast about the humongous volumes of data they store and support, but there is nothing to write home about if that data cannot be retrieved as quickly as possible.

When I first migrated a 9TB database to Oracle Exadata a few years back and ran a heavy report for the first time, it returned results in a jiffy, and my jaw dropped. This report used to take at least 70 minutes without Smart Scan. I had to bring in the developer to double-check whether the results were correct. Oracle's Exadata Smart Scan is phenomenal.

I had a similar jaw-dropping experience yesterday when I saw Google Cloud Platform's Big Query in action during an onboard session in Sydney. A SQL query with a regex was run on a multi-terabyte dataset with lots of sorting, and it returned the results from the cloud in a few seconds. The best thing about Big Query is that all-familiar SQL is used and nothing fancy is needed. You get your petabytes of data warehouse in Google's cloud and then use your SQL to analyze that dataset. The sweet part is the agility and transparency with which that data is returned.

Simply beautiful.
Categories: DBA Blogs

What Oracle DBAs Need to Learn in Oracle Cloud Platform?

Mon, 2017-03-27 01:28
The transition from on-premises Oracle DBA to Oracle Cloud DBA is imminent for many of us. In fact, IMHO, existing Oracle DBAs will have to manage databases both on-premises and in the cloud for a long time.

So what does an Oracle DBA need to learn on the Oracle Cloud Platform? If you visit the Oracle Cloud website, it's a mouthful and more. It's very easy to get bogged down, as at first there seem to be lots of things to learn.

The good news is that as an experienced Oracle DBA you know most of these things already. So just brush up on your basic cloud computing concepts and then start with the following offerings from the Oracle Cloud Platform:

  • Database Cloud Schema Service
  • Database Cloud Database as a Service
  • Database Backup Cloud Service

Read about the above as much as possible, and if you get a chance, play with them. You will be surprised to find that you know almost everything about these services already, as they are built upon existing Oracle technologies.

One thing which is a must for this brave new world of the Oracle Cloud DBA (OCDBA), on the Oracle Cloud Platform or any other cloud platform, is knowing how to migrate an Oracle database to the Oracle Cloud (or any other cloud, for that matter). For this purpose, make sure you understand the following:

  • Oracle Goldengate
  • Oracle Datapump
  • Oracle Secure External Password Store
  • Oracle Connection Manager
  • RMAN
  • Oracle Cloud Control

If all of the above is ready, you are good to go! :)
Categories: DBA Blogs

Is Oracle Database in Cloud PaaS, IaaS, SaaS, or DBaaS?

Sat, 2017-03-25 01:15
Question: Is Oracle Database in Cloud PaaS, IaaS, SaaS, or DBaaS?
Answer:
  • If you install and manage an Oracle database in the cloud yourself, then you are using it on IaaS.
  • If you are just using it in the cloud without installing or managing it, then it's PaaS.
  • If you are configuring the database instance and have access to it through SQL*Net, then it's DBaaS.
  • SaaS is not really relevant when it comes to the Oracle database in the cloud, as the database mostly resides at the backend of applications, whereas SaaS is primarily all about applications.


Categories: DBA Blogs

Upgrade of Oracle GI 11.2.0.3 to Oracle 12.1.0.2 Steps

Thu, 2017-03-23 22:14
Just noting down the high-level steps performed for the upgrade of Oracle GI 11.2.0.3 to Oracle 12.1.0.2 on RHEL 64-bit Linux.

Create a backup of the 11g GI home as the root user:

. oraenv <<< +ASM
 sudo su -
cd /oracle/backup
tar -cvf backup_GI.tar /u01/grid/product/11.2.0.3
tar -cvf backup_inventory.tar  /var/opt/ora/oraInventory/



Stop all DB instances and the listener.

Run the GI 12c runInstaller in silent mode using the UPGRADE option in the response file.

Run the rootupgrade.sh script as the root user.

Check the HAS version and oratab:

crsctl query has releaseversion
cat /etc/oratab





Start the listener and DB instances.

Clean up the backup files (this can be done a few days later).

Hope that helps.
Categories: DBA Blogs

Google Cloud Platform Fundamentals in Sydney

Sun, 2016-12-11 22:58
Just finished a one-day training at Google's Sydney office on Google Cloud Platform Fundamentals. GCP is pretty cool, and I think I like it.

Lots of our customers at Pythian are already hosting on the cloud, migrating to it, or thinking of doing so. Pythian already has a huge presence in the cloud using various technologies.

So it was good to learn something about Google's cloud offering. It was a pleasant surprise, as it all made sense. From App Engine to Compute Engine, and from Bigtable to BigQuery, the features are sound, mature, and ready to use.

The dashboard is simple too. I will be blogging more about it as I play with it in the coming days.
Categories: DBA Blogs

Speaking at APAC OTN TOUR 2016 in Wellington, New Zealand

Sun, 2016-10-23 19:44
The APAC OTN Tour 2016 will run from October 26th until November 11th, visiting 4 countries and 7 cities in the Asia Pacific region.

I will be speaking at the APAC OTN Tour 2016 in Wellington, New Zealand on 26th October on a topic which is very near and dear to me: Exadata and the cloud.

My session is 12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management with Fahd Mirza

Hope to see you there!

Categories: DBA Blogs
