Seamless cloning of an application stack is an outstanding goal. Seamless cloning of an application stack including the full production database, application server, and webserver in a few minutes with next to zero disk space used or configuration required is the best goal since Alexander Graham Bell decided he wanted a better way to tell Mr. Watson to “come here.”
So in the spirit of discovery, I’ve installed Oracle REST Data Services (ORDS) 2.0 and Oracle Application Express (APEX) 4.2 to a source Oracle database environment in my home Delphix setup. I’m going to:
- Sync the ORDS binaries with Delphix as a file source
- Sync the APEX binaries with Delphix as a file source
- Sync the ORCL database with Delphix as a database source
- Provision a clone of the ORCL database to a target Linux system as DBDEV
- Provision a clone of the ORDS and APEX binaries to the target system
Some of you may be scratching your head right now thinking “What is Delphix?” I’ve written a few words on it in the past, and Kyle Hailey has quite a bit of information about it along with other links such as Jonathan Lewis explaining Delphix at OOW14.
If you’re into the whole brevity thing, here’s a short summation: Delphix is a technology you can sync nearly any kind of source data into and provision on demand from any point in time to any target, near instantly and at the click of a button, all without incurring additional disk space. What that means for your business is incredibly efficient development, faster time to market, and improved application quality. And if you want to see this in action, you can try it for yourself with Delphix Developer Edition.
Let’s use Delphix to deploy APEX to a target system.

Step 1. A look at the source
On the source environment (linuxsource, 172.16.180.11) I have an Oracle 11.2.0 database called “orcl”.
In the /u01/app/oracle/product directory are ./apex and ./ords, holding the APEX and ORDS installations respectively.
When ORDS is started, I am able to see the APEX magic by browsing to http://172.16.180.11:8080/apex and logging in to my InvestPLUS workspace. Here are the pre-packaged apps I have installed:
Sweet. Let’s check out what I have set up in Delphix.

Step 2. Check out the Delphix Sources
You can see that I have the ORCL database (named InvestPLUS DB Prod), Oracle REST Data Services, and APEX homes all loaded into Delphix here:
When I say they’re loaded into Delphix, I mean they’ve been synced. The ORCL database is synced over time with RMAN and archive logs and compressed about 3x on the base snapshot and 60x on the incremental changes. The /u01/app/oracle/product/apex and /u01/app/oracle/product/ords directories have also been synced with Delphix and are kept up to date over time. From these synced copies we can provision one or more Virtual Databases (VDBs) or Virtual Files (vFiles) to any target we choose.

Step 3. Deploy
Provisioning both VDBs and vFiles is very quick with Delphix and takes only a few button clicks. Just check out my awesomely dramatized video of the provisioning process. For this demo, first I provisioned a clone of the ORCL database to linuxtarget (172.16.180.12) with the name DBDEV.
Next I provisioned a copy of the ORDS home to the target at the same location as the source (/u01/app/oracle/product/ords) with the name ORDS Dev:
And lastly I provisioned a copy of the APEX home to the target at the same location as the source (/u01/app/oracle/product/apex) with the name APEX Dev:
In hindsight I probably could have just synced /u01/app/oracle/product and excluded the ./11.2.0 directory to get both ORDS and APEX, but hey, I like modularity. By having them separately synced, I can rewind or refresh either one on my target system.
Here’s the final provisioned set of clones on the target (you can see them under the “InvestPLUS Dev/QA” group on the left nav):
Let’s see what all this looks like on the target system. Looking at the /u01/app/oracle/product directory on the target shows us the same directories as the source:
I’ve also got the DBDEV database up on the target:
To give you a glimpse of how Delphix provisioned the clone, check this out. Here’s a “df -h” on the linuxtarget environment:
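The same listing can be pulled programmatically. Here’s a small hypothetical helper (not part of Delphix) that filters a mount table, such as /proc/mounts, down to just the mount points served from the Delphix engine; the engine IP (172.16.180.3) is the one from this setup:

```shell
# delphix_mounts: given mount-table lines on stdin (device in the first
# field, mount point in the second, as in /proc/mounts), print the mount
# points whose device is exported by the Delphix engine IP.
delphix_mounts() {
  awk -v engine="172.16.180.3" '$1 ~ "^" engine ":" { print $2 }'
}

# Example on the target:
# delphix_mounts < /proc/mounts
```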
What this is showing us is that the APEX home, ORDS home, and DBDEV clone are all being served over NFS from Delphix (172.16.180.3). This is how Delphix performs a clone operation, and why we call it virtual: data is synced and compressed from sources into Delphix, and when you provision a clone, Delphix creates virtual sets of files that are presented over the wire to the target system. You can think of Delphix as a backup destination for source databases/filesystems, and as network-attached storage for targets. The clever bit is that Delphix uses the same storage for both purposes, with no block copies at all unless data is changed on the target VDBs or vFiles. Cool, right? On a side note and for the curious, Delphix can also use dNFS for your Oracle VDBs.

Step 5. Reconfigure ORDS
On the source environment, ORDS is configured to connect to the ORCL database. On the target we’re connecting to the DBDEV database. So the one quick change we’ll need to make is to update the SID in the /u01/app/oracle/product/ords/config/apex/defaults.xml file.
```
[delphix@linuxtarget ords]$ vi config/apex/defaults.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Saved on Wed Jan 14 08:38:04 EST 2015</comment>
<entry key="cache.caching">false</entry>
<entry key="cache.directory">/tmp/apex/cache</entry>
<entry key="cache.duration">days</entry>
<entry key="cache.expiration">7</entry>
<entry key="cache.maxEntries">500</entry>
<entry key="cache.monitorInterval">60</entry>
<entry key="cache.procedureNameList"/>
<entry key="cache.type">lru</entry>
<entry key="db.hostname">localhost</entry>
<entry key="db.password">@050784E0F3307C86A62BF4C58EE984BC49</entry>
<entry key="db.port">1521</entry>
<entry key="db.sid">DBDEV</entry>
<entry key="debug.debugger">false</entry>
<entry key="debug.printDebugToScreen">false</entry>
<entry key="error.keepErrorMessages">true</entry>
<entry key="error.maxEntries">50</entry>
<entry key="jdbc.DriverType">thin</entry>
<entry key="jdbc.InactivityTimeout">1800</entry>
<entry key="jdbc.InitialLimit">3</entry>
<entry key="jdbc.MaxConnectionReuseCount">1000</entry>
<entry key="jdbc.MaxLimit">10</entry>
<entry key="jdbc.MaxStatementsLimit">10</entry>
<entry key="jdbc.MinLimit">1</entry>
<entry key="jdbc.statementTimeout">900</entry>
<entry key="log.logging">false</entry>
<entry key="log.maxEntries">50</entry>
<entry key="misc.compress"/>
<entry key="misc.defaultPage">apex</entry>
<entry key="security.disableDefaultExclusionList">false</entry>
<entry key="security.maxEntries">2000</entry>
</properties>
```
Note the only line I had to change was this one: `<entry key="db.sid">DBDEV</entry>`
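That one-line change is easy to script if you’re cloning often. Here’s a minimal sed sketch (the path is the one from this demo; the `change_sid` helper is hypothetical, and it keeps a .bak copy since the attribute layout can vary between ORDS versions):

```shell
# Path to the ORDS config on the target, as used in this demo.
cfg=/u01/app/oracle/product/ords/config/apex/defaults.xml

# change_sid: rewrite the db.sid entry in a defaults.xml-style file.
# usage: change_sid <new_sid> <file>   (leaves a <file>.bak backup)
change_sid() {
  sed -i.bak "s|<entry key=\"db.sid\">[^<]*</entry>|<entry key=\"db.sid\">$1</entry>|" "$2"
}

# Example:
# change_sid DBDEV "$cfg"
```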
After the config change, I just had to start ORDS on the target:
```
[delphix@linuxtarget ords]$ java -jar apex.war
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Standalone execute
INFO: NOTE: Standalone mode is designed for use in development and test environments. It is not supported for use in production environments.
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Standalone execute
INFO: Starting standalone Web Container in: /u01/app/oracle/product/ords/config/apex
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Deployer deploy
INFO: Will deploy application path = /u01/app/oracle/product/ords/config/apex/apex/WEB-INF/web.xml
Jan 21, 2015 1:18:22 PM oracle.dbtools.standalone.Deployer deploy
INFO: Deployed application path = /u01/app/oracle/product/ords/config/apex/apex/WEB-INF/web.xml
Jan 21, 2015 1:18:22 PM oracle.dbtools.common.config.file.ConfigurationFolder logConfigFolder
INFO: Using configuration folder: /u01/app/oracle/product/ords/config/apex
Configuration properties for: apex
cache.caching=false
cache.directory=/tmp/apex/cache
cache.duration=days
cache.expiration=7
cache.maxEntries=500
cache.monitorInterval=60
cache.procedureNameList=
cache.type=lru
db.hostname=localhost
db.password=******
db.port=1521
db.sid=DBDEV
debug.debugger=false
debug.printDebugToScreen=false
error.keepErrorMessages=true
error.maxEntries=50
jdbc.DriverType=thin
jdbc.InactivityTimeout=1800
jdbc.InitialLimit=3
jdbc.MaxConnectionReuseCount=1000
jdbc.MaxLimit=10
jdbc.MaxStatementsLimit=10
jdbc.MinLimit=1
jdbc.statementTimeout=900
log.logging=false
log.maxEntries=50
misc.compress=
misc.defaultPage=apex
security.disableDefaultExclusionList=false
security.maxEntries=2000
db.username=APEX_PUBLIC_USER
Jan 21, 2015 1:18:58 PM oracle.dbtools.common.config.db.ConfigurationValues intValue
WARNING: *** jdbc.MaxLimit in configuration apex is using a value of 10, this setting may not be sized adequately for a production environment ***
Jan 21, 2015 1:18:58 PM oracle.dbtools.common.config.db.ConfigurationValues intValue
WARNING: *** jdbc.InitialLimit in configuration apex is using a value of 3, this setting may not be sized adequately for a production environment ***
Using JDBC driver: Oracle JDBC driver version: 188.8.131.52.0
Jan 21, 2015 1:18:59 PM oracle.dbtools.rt.web.SCListener contextInitialized
INFO: Oracle REST Data Services initialized
Oracle REST Data Services version : 184.108.40.2069.08.09
Oracle REST Data Services server info: Grizzly/1.9.49
Jan 21, 2015 1:18:59 PM com.sun.grizzly.Controller logVersion
INFO: GRIZZLY0001: Starting Grizzly Framework 1.9.49 - 1/21/15 1:18 PM
Jan 21, 2015 1:18:59 PM oracle.dbtools.standalone.Standalone execute
INFO: http://localhost:8080/apex/ started.
```
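As the log shows, standalone ORDS takes a few seconds to deploy the web container, so a small poll loop is handy before pointing a browser at it. A sketch, assuming the URL from this demo and standard curl; `wait_for_url` is a hypothetical helper, not an ORDS tool:

```shell
# wait_for_url: poll a URL until it answers or the attempt budget runs out.
# usage: wait_for_url <url> [attempts]   (returns 0 on success, 1 on timeout)
wait_for_url() {
  local url=$1 tries=${2:-30}
  while (( tries-- > 0 )); do
    # -s silences progress output; -o /dev/null discards the body.
    curl -s -o /dev/null "$url" && return 0
    sleep 2
  done
  return 1
}

# Example:
# wait_for_url http://localhost:8080/apex/ && echo "ORDS is up"
```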
Step 6. Victory
With ORDS started, I’m now able to access APEX on my target and log in to see my applications.
The cloned ORDS and APEX homes on the target and the DBDEV database are 100% full clones of their respective sources; block-for-block copies, if you will. No matter how big the source data, these clones take only a few clicks and a few minutes, use barely any disk space (megabytes, not gigabytes), and can be refreshed from the source or rewound in minutes.
Delphix is capable of deploying not just database clones, but the whole app stack. Because Delphix stores incremental data changes (based on a retention period you decide), applications can be provisioned from any point in time or multiple points in time. And you can provision as many clones as you want to as many targets as you want, CPU and RAM on the targets permitting. All in all a fairly powerful capability and one I’ll be experimenting on quite a bit to see how the process and benefits can be improved. I’m thinking multi-VDB development deployments and a rewindable QA suite next!
Only two things are really certain: network latency over long distances, and the fact that humanity will soon rapidly degenerate into undead brain-eaters.
When that day comes, when the dead are crowding at your door and the windows are busted out and ripped up rotted arms are clawing at the inside of your home, I know what you’ll be thinking: is my database protected?
Don’t worry, my friends. The Oracle Alchemist has you covered. We just need to zombie-proof your DR plan. Let’s get started.

Getting the Power Back
Hopefully you did the smart thing and figured out how much battery and generator power you’d need to survive multiple years of failing power systems due to zombies. I know I did.
However, if you didn’t get this critical task done you may still have some options. Statistics show that the demand for U.S. gasoline was 8.73 million barrels in 2012. That comes out to 23,917.80821917808219 barrels per day of fuel that’s out there just waiting for you to snatch it up. The problem is going to be getting it. You’ll need to load yourself down with lots of weaponry and strike out in a fuel truck a few times a week, which will definitely take away from your database administration time. It’s a smart idea to enable a lot of automation and monitoring to take care of things while you’re out.
You’re going to need to fight other groups of surviving IT for fuel. This means you’re going to need friends. The way I see it, you have two choices: SysAdmins and Developers. They’re the two groups you work closest with as a DBA, so they’re the most likely to have your back when the dead walk. Start your planning now. If you want to get the developers on your side, tune some queries for them. Seriously, nothing will convince a developer to slice through the brain base of a walker like adding some key indexes when a query goes south during testing. However, if you think the SysAdmins are more likely to fight off rival gangs of resource hogs on the prowl for food and fuel, you can make them feel good by keeping all your filesystems cleaned up and RAM usage at a minimum.

The Problem with Zombies
Remember, the walking dead are tenacious. You remember before the apocalypse when a bunch of reporting users would all log into the database and run huge ad hoc queries against PROD without thinking about what they were doing? That was nothing. Zombies are the real deal. They will tear through a database faster than a multi-terabyte cartesian product. You can deploy outside the box now to increase your chances of having a clone out there somewhere, just in case. If you want that database to survive, you’re going to need standbys. Lots of them.
I’d recommend a hub and spoke configuration. One central production database, at least 5 standby databases. As everybody knows, the chances of a zombie bringing down a database are roughly 89.375%. With 5 standby environments, you can drastically reduce the odds of being left without a standby system. On the plus side, zombies are completely brainless. What this means is that you don’t have to worry about masking or obfuscating your backup data in any way. Even on the off chance one of them kicks off a query (even zombies can figure out SQL Developer), they won’t be able to comprehend your users’ personal data, and with the complete downfall of the dollar it won’t matter if they see any credit information. So rest easy.

When All Else Fails
At some point, the zombies are going to come for you. Sorry, but it’s a statistical fact and there’s not much we can do about that. At that moment, when all hope is lost, you’re really going to need to protect your database because once you become a zombie too there really won’t be anyone left and you won’t be focused on maintaining it anymore; you’ll be focused on acquiring copious amounts of human flesh.
So make your last stand count. You’re a soon-to-be-undead DBA, act like it! Remember how we tune. Eliminate the wait, punch through the bottlenecks, make efficient use of processing power. Don’t get trapped between a rack and a hard place. If you have to play a game of circle-the-Exadata in order to get away, go for it, but don’t let them corner you. And whatever you do, make sure you keep your badge with you. The last thing you need is to hit a door you can’t get through without the proper credentials. Above all else: remember to kick off a backup before they finally take you. I’d recommend having the script ready and running on the console just in case you have to hit a quick key.
Good luck. You’re going to need it.
Last week I attended Oracle OpenWorld 2014, and it was an outstanding event filled with great people, awesome sessions, and a few outstanding notable experiences.
Personally I thought the messaging behind the conference itself wasn’t as amazing and upbeat as OpenWorld 2013, but that’s almost to be expected. Last year there was a ton of buzz around the introduction of Oracle 12c, Big Data was a buzzword that people were totally excited and not too horribly burnt out on, and there was barely a cloud in the sky. This year cloud it was cloud all about cloud the Cloud cloud (Spoiler alert: it was the Cloud all along) which just didn’t have that same excitement factor.
But it’s still OpenWorld, set in the heart of San Francisco with tens of thousands of buzzing Oracle faithful. And therefore it was still a pretty awesome time.
This year I went representing Delphix, and man did we represent. The enthusiasm and technical curiosity were evident as our booth filled up for three days straight with folks eager to hear the good news of data virtualization. I have to say, the DBA in me finds the promise of syncing databases to a software platform that can provision full-size, read/write clones in a couple minutes with no additional disk usage quite alluring. But there was more to the message than the technology behind the platform; there was also a plethora of use cases that captured people’s attention. Faster and more on-time business intelligence and analytics, application and database testing, regulatory compliance, and more. If that wasn’t enough, we also had Jonathan Lewis, Tim Gorman, Kyle Hailey, Ben Prusinski, and yours truly speaking at the booth, which was a great bit of fun and drew a lot of folks that wanted to learn more.
On Monday I was honored to be invited back on SiliconAngle’s conference web show theCUBE to talk about copy data, Delphix, the Cloud (that should be fun for people running Cloud to Butt), Oracle’s strategy, and more. They had not one but two booths at OpenWorld this year. The always charismatic and ever savvy Dave Vellante and I had an outstanding chat, which you can see right here!
Another fantastic part of the conference was OakTable World, which is technically not part of OpenWorld…rather, it is a “secret” conference-within-a-conference. Held at the Children’s Museum nestled in the bosom of the Moscone Center (yay visuals), this conference features a lineup of incredibly technical folks talking about incredibly technical things to the wonder and amazement of all. This year was no different, with a great assortment of no-nonsense presentations. On the 2nd day of OakTable World there was also something I liked to call the Attack of the Attacks: #CloneAttack, #RepAttack, and #MonitorAttack. This event featured Delphix, DBVisit, and SolarWinds Confio and allowed people to get the software installed on their own laptops for tinkering, learning, and testing.
Pythian put on a couple exciting events as always, with the Friends of Pythian party on Monday night and the OTN Blogger Meetup on Wednesday. Both events were a blast, with a huge assortment of members of the Oracle community and beyond. Honestly, it’s worth going just for the good food and to see Alex Gorbachev stand up on a booth bench and try to hush a crowd of buzzing datafiends.
All in all it was an outstanding OpenWorld and it was great catching up with some amazing and brilliant people. I can’t wait to see you all again next year!