Feed aggregator

Database as a Virtual Image

Pat Shuff - Mon, 2016-05-16 02:07
The question that we are going to dive into this week is what it really means to be platform as a service vs infrastructure as a service. Why not go to Amazon and spin up an EC2 instance or search for an Oracle provided AMI on Amazon or Virtual Image on Azure? What benefit do I get from PaaS? To answer that we need to look at the key differences. Let's look at the two options when you provision a database in the Oracle DBaaS. When you provision a database you have a choice of service levels: Database Cloud Service and Database Cloud Service - Virtual Image. We looked at the provisioning of the cloud service. It provisions a database, creates the network rules, and spins up an instance for us. What happens when we select Virtual Image?

The release and version screens are the same. We selected 12c for the release and High Performance for the version. Note that the questions are much simpler. We are not asked about how much storage. We are not asked for an SID or sys password. We are not asked about backup options. We are not given the option of DataGuard, RAC, or GoldenGate. We are only asked to name the instance, pick a compute shape, and provide an ssh public key.

This seems much simpler and better. Unfortunately, this isn't true. What happens from here is that a Linux 6.6 instance is created and a tarball is dropped into a staging area. The database is not provisioned. The file system is not prepared. The network ports are not configured and enabled. True, the virtual instance creation only takes a few minutes but all we are doing is provisioning a Linux instance and copying a tarball into a directory. Details on the installation process can be found at Database Cloud Installation - Virtual Image Documentation.

If you look at the detailed information about a system that is being created with a virtual image and a system that is being created as a service there are vast differences.

The first key difference is the amount of information displayed. Both instances have the same edition, Enterprise Edition - High Performance. Both will display this edition in the database as well as in the banner if asked what version the database is. The Service Level is different, with the virtual image shown as part of the service level. This affects the billing: the virtual image costs less because less is done for you.

Product (per OCPU)            General Purpose        High-Memory
                              Per Month  Per Hour    Per Month  Per Hour
Standard Edition Service      $600       $1.008      $700       $1.176
Enterprise Edition Service    $3,000     $5.040      $3,100     $5.208
High Performance Service      $4,000     $6.720      $4,100     $6.888
Extreme Performance Service   $5,000     $8.401      $5,100     $8.569

Virtual Image Product (per OCPU)   General Purpose        High-Memory
                                   Per Month  Per Hour    Per Month  Per Hour
Standard Edition Service           $400       $0.672      $500       $0.840
Enterprise Edition Service         $1,500     $2.520      $1,600     $2.688
High Performance Service           $2,000     $3.360      $2,100     $3.528
Extreme Performance Service        $3,000     $5.040      $3,100     $5.208

The only other information that we get from the management screen is that the instance consumes 30 GB rather than the 100 GB that the database service instance consumes. Note that the database service instance also has the container name and a connection string for connecting to the database. Both will eventually show an IP address, and we should look into the operating system to see the differences. The menu to the right of the instance is also different. If we look at the virtual machine instance we only see ssh access, access rules, and deletion of the instance as options.

The ssh access option allows us to upload the public key or look at the existing public key that is used to access the instance. The access rules option takes us to a new screen that shows the security rules that have been defined for this instance, which is only ssh and nothing else.

If we look at a database as a service instance, the menu is different and allows us to look at things like the DBaaS Monitor, APEX, Enterprise Manager monitor, as well as the ssh and access rules.

Note that the database as a service instance has a lot more security rules defined, with most of them disabled. We can open up ports 80, 443, 4848, 1158, 5500, and 1521. We don't have to define these rules, just enable them if we are accessing them from a whitelist, an IP address range, or the public internet.
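
Once connected over ssh, a quick way to sanity-check which of those ports actually have something listening on the instance is a small loop like the one below. This is only a rough sketch; the port list simply mirrors the rules above, and netstat output formats vary slightly between Linux releases.

#!/bin/sh
# rough check of which DBaaS console/listener ports have a local listener
for PORT in 80 443 1158 1521 4848 5500
do
   if netstat -ln | grep ":$PORT " > /dev/null
   then
      echo "port $PORT : listening"
   else
      echo "port $PORT : nothing listening"
   fi
done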

Once we connect to both instances we can see that both are running

Linux hostname 3.8.13-68.2.2.2.el6uek.x86_64 #2 SMP Fri Jun 19 16:29:40 PDT 2015  x86_64 x86_64 x86_64 GNU/Linux
We can see that the file system is different with the /u01, /u02, /u03, and /u04 partitions not mounted in the screen shots below.

If we look at the installation instructions we see that we have to create the /u01, /u02, /u03, and /u04 disks by hand. These are not created for us. We also need to create a logical volume as well as the storage services. Step one is to scale up the service by adding a disk. We need to grow the existing file system by first attaching a volume and then laying out/expanding the logical volume that we have. Note that we can exactly mirror our on-premise system at this point. If we put everything into a 1 TB /u01 partition and blend the log files and data files onto one disk (not really recommended) we can do this.

To add the /u01 disk we need to scale up the service and add storage. Note that we can only add a raw disk and cannot grow the data volume as we can with the database service.

Note that this scale up does require a reboot of the service. We have the option of adding one logical unit or a full 1 TB disk and then partitioning it, or we can add the different volumes as different disks. The drawback is the way attached storage is charged: at $50/TB/month, adding four disks that consume 20 GB each still costs $200/month, because each attached disk is billed as a full 1 TB even though we only allocate 20 GB on it. The disk is not subdivided when it is attached, and we are charged on a per TB basis, not a per GB basis. To save money it is recommended to allocate a full TB rather than a smaller amount. To improve performance and reliability it is recommended to allocate multiple disks and stripe data across multiple spindles and logical units. This can be done at the logical volume management part of disk management detailed in the documentation on provisioning the virtual image instance.
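
As an illustration of that striping advice, a minimal sketch of turning two newly attached disks into a single striped file system might look like the following. The device names (/dev/xvdc and /dev/xvdd), the volume group and logical volume names, and the ext4 file system are assumptions; follow the virtual image documentation for the layout Oracle actually expects.

# run as root: label the two attached disks as LVM physical volumes
pvcreate /dev/xvdc /dev/xvdd
# group both physical volumes into one volume group
vgcreate vg_u02 /dev/xvdc /dev/xvdd
# carve out a logical volume striped across both devices (-i 2 = two stripes)
lvcreate -n lv_u02 -i 2 -l 100%FREE vg_u02
# lay down a file system and mount it as /u02
mkfs.ext4 /dev/vg_u02/lv_u02
mkdir -p /u02
mount /dev/vg_u02/lv_u02 /u02

An entry in /etc/fstab is still needed if the mount is to survive the next reboot.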

We can look at the logical volume configuration with lvm pvdisplay, lvm vgdisplay, and lvm lvdisplay. These let us look at the physical volume mapping (to map physical volumes to logical unit numbers), the logical volumes (for mirroring and striping options), and the volume group options, which get mapped to the data, reco, and fra areas.

Once our instance has rebooted we note that we added /dev/xvdc, which is 21.5 GB in size. After we format this disk it partitions down to the 20 GB we asked for. If we add a second disk we will get /dev/xvdd, and we can map these two new disks into a logical volume that we can mount as /u01 and /u02. A nicer command to use to look at this is lsblk, which does not require elevated root privileges to run.
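
Before and after laying out the volumes, the state of the disks can be verified with a few read-only commands; a quick sketch:

# list block devices, sizes, and mount points (no root privileges required)
lsblk
# physical volume to volume group mapping
lvm pvdisplay
# volume group sizes and free extents
lvm vgdisplay
# logical volumes, including striping and mirroring attributes
lvm lvdisplay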

Once we go through the mapping of the /u01, /u02, /u03, and /u04 disks (the documentation only goes into single disks with no mirroring to mount /u01 and /u02) we can expand the binary bits located in /scratch/db. There are two files in this directory, db12102_bits.tar.gz and db12102_se2bits.tar.gz. These are the enterprise edition and standard edition versions of the database.
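
Unpacking those bits is a plain tar extraction. A minimal sketch, assuming the usual /u01/app/oracle layout as the target and an oracle:oinstall owner (the actual target directory and ownership should come from the Virtual Image installation documentation):

# create an Oracle home on the newly mounted /u01 and give it to the oracle user
mkdir -p /u01/app/oracle/product/12.1.0/dbhome_1
chown -R oracle:oinstall /u01/app/oracle
# unpack the staged Enterprise Edition bits as the oracle user
su - oracle -c "tar -xzf /scratch/db/db12102_bits.tar.gz -C /u01/app/oracle/product/12.1.0/dbhome_1"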

We are not going to go through the full installation but look at some of the key differences between IaaS with a tarball (or EC2 with an AMI) and a DBaaS installation. The primary delta is that the database is fully configured and ready to run in about an hour with DBaaS. With IaaS we need to create and mount a file system, untar and install the database, configure network ports, define security rules, and write scripts to automatically start the database upon restarting the operating system. We lose the menu items in the management page for the DBaaS Monitor, Enterprise Manager monitor, and Application Express interface. We also lose the patching options that appear in the DBaaS management screen. We lose the automated backups and the automatic database instance and PDB creation that are done with DBaaS.
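
The "start the database after an operating system restart" piece is usually a small wrapper around the Oracle-supplied dbstart and dbshut scripts. A minimal sketch, assuming the instance is flagged :Y in /etc/oratab and the Oracle home path used above:

#!/bin/sh
# /etc/init.d/dbora style sketch: start and stop the database and listener at boot
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
ORACLE_OWNER=oracle

case "$1" in
start)
   su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
   ;;
stop)
   su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
   ;;
esac

The script still has to be registered with chkconfig (or an equivalent) so the operating system actually calls it at boot; with DBaaS none of this is our problem.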

In summary, the PaaS/DBaaS provisioning is not only a shortcut; it also removes manual steps in configuring the service as well as in daily operations. We could have just as easily provisioned a compute service, attached storage, and downloaded the tarball that we want to use from edelivery.oracle.com. The key reasons that we don't want to do this are, first, pricing and, second, patching. If we provision a virtual image of database as a service, the operating system is ready to accept the tarball and we don't need to install the odbc drivers and other kernel modules. We also get to lease the database on an hourly or monthly basis rather than purchasing a perpetual license to run on our compute instance.

Up next, selecting a pre-configured AMI on Amazon and running it in AWS compared to a virtual image on the Oracle Public Cloud.

Forms and Reports Developer 10g Certified on Windows 10 for EBS 12.x

Steven Chan - Mon, 2016-05-16 02:05

Forms Developer 10g and Reports Developer 10g are now certified on Windows 10 desktops for E-Business Suite 12.1 and 12.2. See:

Windows Compatibility Mode

The Forms Developer 10g and Reports Developer 10g applications are part of Oracle Developer Suite 10g.  These two tools are 32-bit applications that can be installed on 32-bit and 64-bit Windows 10 versions.  They must be installed with "Windows compatibility mode" set to "Windows XP (Service Pack 2)" or "Windows XP (Service Pack 3)".

EBS Server-side updates for Forms & Reports

For the latest Forms and Reports server-side updates for your E-Business Suite 12.1 environments, see:

E-Business Suite 12.2 already contains the latest Forms and Reports server-side updates. No additional server-side patches are required.

Related Articles


Categories: APPS Blogs

Links for 2016-05-15 [del.icio.us]

Categories: DBA Blogs

Six Months with the iPad Pro

Oracle AppsLab - Sun, 2016-05-15 23:26
My first session with the iPad Pro

At first I was skeptical. I was perfectly happy with my iPad Air and the Pro seemed too big and too expensive. Six months later I wouldn’t dream of going back. The iPad Pro has become my primary computing device.

Does the Pro eliminate the need for a laptop or desktop? Almost, but for me not quite yet. I still need my Mac Air for NodeBox coding and a few other things; since they are both exactly the same size I now carry them together in a messenger bag.

iPad Pro and Mac Air share the same bag

The Pro is lighter than it looks and, with a little practice, balances easily on my lap. It fits perfectly on an airplane tray table.

Flying over Vegas using Apple Maps

Does the 12.9-inch screen really make that much of a difference? Yes! The effect is surprising; after all, it’s the same size as an ordinary laptop screen. But there is something addictive about holding large, high resolution photos and videos in your hands. I *much* prefer photo editing on the iPad. 3D flyovers in Apple Maps are almost like being there.

Coda and Safari sharing the screen

The extra screen real estate also makes iOS 9’s split screen feature much more practical. Above is a screenshot of me editing a webpage using Coda. By splitting the screen with Safari, I can update code and instantly see the results as I go.

Bloomberg Professional with picture-in-picture

Enterprise users can see more numbers and charts at once. Bloomberg Professional uses the picture-in-picture feature to let you watch the news while perusing a large portfolio display. WunderStation makes dashboards big enough to get lost in.

WunderStation weather dashboard

For web conferences, a major part of my working life at Oracle, the iPad Pro both exceeds and falls short. The participant experience is superb. When others are presenting screenshots I can lean back in my chair and pinch-zoom to see details I would sometimes miss on my desktop. When videoconferencing I can easily adjust the camera or flip it to point at a whiteboard.

But my options for presenting content from the iPad are still limited. I can present images, but cannot easily pull content from inside other apps. (Zoom lets you share web pages and cloud content on Box, Dropbox or Google Drive, but we are supposed to keep sensitive data inside our firewall.) The one-app-at-a-time iOS model becomes a nuisance in situations like this. Until this limitation is overcome I don’t see desktops and laptops on the endangered species list.

Smart Keyboard and Apple Pencil

Accessories

The iPad Pro offers two accessories not available with a normal iPad: a “smart keyboard” that uses the new magnetic connector, and the deceptively simple Apple Pencil.

I tried the keyboard and threw it back. It was perfectly fine but I’m just not a keyboard guy. This may seem odd for someone who spends most of his time writing – I’m typing this blog on the iPad right now – but I have a theory about this that may explain who will adopt tablets in the workplace and how they will be used.

I think there are two types of workers: those who sit bolt upright at their desks and those who slump as close to horizontal as they can get; I am a slumper. And there are two kinds of typists: touch typists who type with their fingers and hunt-and-peckers who type with their eyes; I am a, uh, hunter. This places me squarely in the slumper-hunter quadrant.

Slumper-hunters like me love love love tablets and don’t need no stinking keyboards. The virtual keyboard offers a word tray that guesses my words before I do, lets me slide two fingers across the keyboard to precisely reposition the cursor, and has a dictate button that works surprisingly well.

Touch-slumpers are torn: they love tablets but can’t abide typing on glass; for them the smart keyboard – hard to use while slumping – is an imperfect compromise. Upright-hunters could go either way on the keyboard but may not see the point in using a tablet in the first place. Upright-touchers will insist on the smart keyboard and will not use a tablet without one.

Running Horse by Anna Budovsky

If you are an artist, or even just an inveterate doodler, you must immediately hock your Wacom tablet, toss your other high-end styli, and buy the Apple Pencil (with the full-sized Pro as an accessory). It’s the first stylus that actually works. No more circles with dents and monkey-with-big-stick writing. Your doodles will look natural and your signature will be picture perfect.

The above drawing was done in under sixty seconds by my colleague Anna Budovsky. She had never used the iPad Pro before, had never used the app (Paper), and had never before picked up an Apple Pencil. For someone with talent, the Apple Pencil is a natural.

If you are not an artist you can probably skip the Pencil. It’s a bit of a nuisance to pack around and needs recharging once a week (fast and easy but still a nuisance). I carry one anyway just so I can pretend I’m an artist.

The Future

For now the iPad Pro is just a big iPad (and the new Pro isn’t even big). Most apps don’t treat it any differently yet and some older apps still don’t even fully support it. But I am seeing some early signs this may be starting to change.

The iPad Pro has one other advantage: processing power. Normal iPad apps don’t really need it (except to keep up with the hi-res screen). Some new apps, though, are being written specifically for the Pro and are taking things to a new level.

Fractals generated by Frax HD

Zooming into infinitely complex fractals is not a business application, but it sure is a test of raw processing power. I’ve been exploring fractals since the eighties and have never seen anything remotely as smooth and deep and effortless as Frax HD. Pinch-zooming forever and changing color schemes with a swirl of your hand is a jaw-dropping experience.

The emerging class of mobile CAD apps, like Shapr3D, is more useful but no less stunning. You would think a CAD app would need not just a desktop machine but also a keyboard on steroids and a 3D mouse. Shapr3D uses the Apple Pencil in ingenious ways to replace all that.

A 3D Doodle using Shapr3D

Sketch curves and lines with ease and then press down (with a satisfying click) to make inflection points. Wiggle the pencil to change modes (sounds crazy but it works). Use the pencil for drawing and your fingers for stretching – Shapr3D keeps up without faltering. I made the strange but complicated contraption above in my first session with almost no instruction – and had fun doing it.

I hesitate to make any predictions about the transition to tablets in the workplace. But I would recommend keeping an eye on the iPad Pro – it may be a sleeping giant.

Partner Webcast – Oracle Database In-Memory: Accelerate Business

Businesses must compete in today’s high-speed, always-on world where requirements are more demanding than ever. That’s easier said than done, especially when decision-makers must wait hours—in some...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Looking for Nominees for the 2016 Oracle Sustainability Innovation Awards

Linda Fishman Hoyle - Sun, 2016-05-15 16:01

Is Oracle helping you save energy, gas, or paper?

For example, are you using Oracle Cloud solutions to help drive down power consumption?

Or Oracle Transportation Management to reduce fleet emissions?

Or Oracle Asset Lifecycle Management to reduce energy costs and extend the life of assets by managing them more efficiently?

Or Oracle Procurement to ensure the use of sustainable suppliers? And the list goes on.

If so, you might be a good nominee for the 2016 Oracle Sustainability Innovation Award. Jeff Henley and Jon Chorley will present these awards at Oracle OpenWorld San Francisco 2016. Winning customers will receive a complimentary registration pass to OpenWorld.

Nomination Process

Submit nomination forms by June 7. We’re looking for companies that are using any Oracle product to take an environmental lead, as well as to reduce costs and improve business efficiencies using green business practices. Either a customer, its partner, or an Oracle representative can submit the nomination form on behalf of a customer.

Questions? Contact Evelyn Neumayr at evelyn.neumayr@oracle.com.

Automating DG Broker

Michael Dinh - Sat, 2016-05-14 21:11

I have been applying PSUs lately, and what's so hard about it?

Four+ databases running on Primary with DG Broker for standby.

There are no naming conventions, as some standby databases have dr appended to the primary name while others have 2 appended.

I wanted to view the DG configuration for currently active instances and show_dg_config.sh will show me this.

Next, I want a faster way to shut down DG by having the syntax generated, and gen_dg_cmd.sh does this.

Guess I could have taken it further by creating a shell script to create shell scripts to shut down DG.

One day when I am really bored I might, or maybe you will be so nice as to complete my mission.

Tested on AIX 7.1

Note: the ps -ef syntax is for AIX and will not work with Linux.

See below for the Linux alternative.

$ ps -ef -o args|grep ora_smon|grep -v grep|awk -F"_smon_" '{print $2}'
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ

$ ps -eo args|grep ora_smon|grep -v grep|awk -F"_smon_" '{print $2}'
thor
hulk

show_dg_config.sh

#!/bin/sh -e
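# list the running instances, then show the DG Broker configuration for each one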
ps -ef -o args|grep ora_smon|grep -v grep|awk -F"_smon_" '{print $2}'
export ORAENV_ASK=NO
for SID in `ps -ef -o args|grep ora_smon|grep -v grep|awk -F"_smon_" '{print $2}'`
do
export ORACLE_SID=$SID
. /usr/local/bin/oraenv
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
echo "+++: " $ORACLE_SID $ORACLE_HOME
sysresv
dgmgrl -echo << END
connect /
show configuration
exit
END
done
exit

gen_dg_cmd.sh

#!/bin/sh -e
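# turn the 'show configuration' output saved in /tmp/dg.log into dgmgrl edit/show commands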
for XB in `egrep 'Primary|Physical' /tmp/dg.log |sort |awk -F" " '{print $3 $1}'`
do
#echo $XB
#echo $XB|awk '{print substr($1,1,7)}'
if [ "`echo $XB|awk '{print substr($1,1,7)}'`" == "Primary" ]
then
PRI=`echo $XB|awk '{print substr($1,8)}'`
echo "edit database $PRI set state='LOG-TRANSPORT-OFF';"
echo "show database $PRI"
echo "edit database $PRI set state='ONLINE';"
echo "show database $PRI"
fi
if [ "`echo $XB|awk '{print substr($1,1,8)}'`" == "Physical" ]
then
SBY=`echo $XB|awk '{print substr($1,9)}'`
echo "edit database $SBY set state='APPLY-OFF';"
echo "show database $SBY"
echo "edit database $SBY set state='APPLY-ON';"
echo "show database $SBY"
fi
done
exit

./show_dg_config.sh > /tmp/dg.log

egrep 'Primary|Physical' /tmp/dg.log |sort |awk -F" " '{print $3 $1}'

Primarydb02
Physicaldb02dr
Primarydb01
Physicaldb01dr
Primarystageqa
Physicalstageqa2
Primarytest
Physicaltestdr

./gen_dg_cmd.sh

edit database db01 set state='LOG-TRANSPORT-OFF';
show database db01
edit database db01 set state='ONLINE';
show database db01
edit database db01dr set state='APPLY-OFF';
show database db01dr
edit database db01dr set state='APPLY-ON';
show database db01dr
edit database db02 set state='LOG-TRANSPORT-OFF';
show database db02
edit database db02 set state='ONLINE';
show database db02
edit database db02dr set state='APPLY-OFF';
show database db02dr
edit database db02dr set state='APPLY-ON';
show database db02dr
edit database stageqa set state='LOG-TRANSPORT-OFF';
show database stageqa
edit database stageqa set state='ONLINE';
show database stageqa
edit database stageqa2 set state='APPLY-OFF';
show database stageqa2
edit database stageqa2 set state='APPLY-ON';
show database stageqa2
edit database test set state='LOG-TRANSPORT-OFF';
show database test
edit database test set state='ONLINE';
show database test
edit database testdr set state='APPLY-OFF';
show database testdr
edit database testdr set state='APPLY-ON';
show database testdr
oracle:/home/oracle/working/dinh$
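
For anyone who wants to take up that mission, a minimal sketch of the missing last step is below: it loops over the running instances the same way show_dg_config.sh does and feeds only the APPLY-OFF edits for that instance's standby into dgmgrl. It assumes the gen_dg_cmd.sh output has been saved to /tmp/dg_cmd.log and that each standby name starts with its primary name (db01 -> db01dr), as in this environment; the ps syntax is the AIX form used above.

#!/bin/sh -e
# stop_dg_apply.sh - apply the generated APPLY-OFF commands, one instance at a time
export ORAENV_ASK=NO
for SID in `ps -ef -o args|grep ora_smon|grep -v grep|awk -F"_smon_" '{print $2}'`
do
export ORACLE_SID=$SID
. /usr/local/bin/oraenv
dgmgrl -echo << END
connect /
`grep "set state='APPLY-OFF'" /tmp/dg_cmd.log | grep "database $SID"`
exit
END
done
exit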

FREE Training : Learn Oracle Access Manager (OAM) for Single Sign-On (SSO)

Online Apps DBA - Sat, 2016-05-14 16:43

In this post I am going to cover why you should learn Oracle Access Manager (OAM), what to learn in OAM, and how you can learn it. I'll also share a link to my FREE Oracle Access Manager (OAM) 11gR2 Mini Course. In this FREE OAM Mini Training I'll send 3-4 mails every week over the next 4 weeks for […]

The post FREE Training : Learn Oracle Access Manager (OAM) for Single Sign-On (SSO) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle JET Input Search with ADF BC REST

Andrejus Baranovski - Sat, 2016-05-14 14:33
LOV is a popular component in ADF; it allows you to search for a data entry in a list, select it, and assign it to an attribute. I was researching how a similar concept can be implemented in Oracle JET, based on data from an ADF BC REST service. The JET Input Search component seems useful for implementing LOV-like behavior.

The Job ID field is implemented with Input Search. It is based on a value/label pair: the user enters the label, and in the background the selected value is returned and assigned to the attribute:


Watch this recording to see how it works. Search is performed on the client side and value selection is instant:


The list is filtered as the user types a value (you can configure it to start filtering only after the user enters a certain number of characters):


Try to select a value from the list and update the record:


In the background it is using the key value SA_REP for the update; we can track it in the ADF BC log, where the actual DB update takes place (through REST PATCH):


Let's take a look at the implementation. In the HTML I'm using the ojInputSearch component with value and options properties. The options property provides the list entries and the value property holds the selected value key:


Options are defined in JavaScript as an observableArray, which allows collection data to be synchronized to the UI. There is a collection for the options and a REST service URL (pointing to the Jobs ADF BC REST resource):


The data structure is defined by the parseJob function; it contains JobId and JobTitle attributes, which helps map the REST response into the JET collection:


The JET collection is configured with the REST URL for Jobs, an unlimited fetch size (to fetch the list of all jobs), and the data structure mapping for the REST resource:


The main part: we need to populate the JET collection with data, which can be done by executing the fetch method (see the JET API documentation). In the success callback (executed asynchronously), we can access the returned collection and push all entries into the observableArray variable attached to the Input Search UI component:


Make sure to set RangeSize = -1 in the ADF BC REST service resource definition for Jobs. This forces ADF BC to return all rows:


Download sample application (archive contains ADF BC REST sample and JET implementation with NetBeans, you must add JET runtime distribution to run JET sample) - JETCRUDApp_v7.zip.

Video : JSON Support in Oracle Database 12c

Tim Hall - Sat, 2016-05-14 09:58

Today’s video is a sprint through some of the JSON support in Oracle Database 12c.

If videos aren’t your thing, you might want to read these instead.

The cameo in this video comes courtesy of Yves Colin, who I’ll see again in a couple of weeks at the Paris Province Oracle Meetup. A couple of extras (Bertrand Drouvot and Osama Mustafa) wanted to get in on the act too.

Email, where art thou?

Tim Hall - Sat, 2016-05-14 05:17

Followers of the blog will know I’ve recently migrated the website to AWS. Yesterday I bit the bullet and cancelled my dedicated server.

As part of that process I had to move my email account from that service too. I always pull all my emails into Gmail, so there is no point paying for something cool. A little POP account is fine.

I started the process yesterday afternoon/evening, thinking it would be a quick drop on the old service and recreate on the new one. Unfortunately the old service held on to the domain reference overnight, so it was a quiet evening on the email front.

Fail Fast

Cary Millsap - Fri, 2016-05-13 17:22
Among movements like Agile, Lean Startup, and Design Thinking these days, you hear the term fail fast. The principle of failing fast is vital to efficiency, but I’ve seen project managers and business partners be offended or even agitated by the term fail fast. I’ve seen it come out like, “Why the hell would I want to fail fast?! I don’t want to fail at all.” The implication, of course: “Failing is for losers. If you’re planning to fail, then I don’t want you on my team.”

I think I can help explain why the principle of “fail fast” is so important, and maybe I can help you explain it, too.

Software developers know about fail fast already, whether they realize it or not. Yesterday was a prime example for me. It was a really long day. I didn’t leave my office until after 9pm, and then I turned my laptop back on as soon as I got home to work another three hours. I had been fighting a bug all afternoon. It was a program that ran about 90 seconds normally, but when I tried a code path that should have been much faster, I could let it run 50 times that long and it still wouldn’t finish.

At home, I ran it again and left it running while I watched the Thunder beat the Spurs, assuming the program would finish eventually, so I could see the log file (which we’re not flushing often enough, which is another problem). My MacBook Pro ran so hard that the fan compelled my son to ask me why my laptop was suddenly so loud. I was wishing the whole time, “I wish this thing would fail faster.” And there it is.

When you know your code is destined to fail, you want it to fail faster. Debugging is hard enough as it is, without your stupid code forcing you to wait an hour just to see your log file, so you might gain an idea of what you need to go fix. If I could fail faster, I could fix my problem earlier, get more work done, and ship my improvements sooner.

But how does that relate to wanting my business idea to fail faster? Well, imagine that a given business idea is in fact destined to fail. When would you rather find out? (a) In a week, before you invest millions of dollars and thousands of hours investing into the idea? Or (b) In a year, after you’ve invested millions of dollars and thousands of hours?

I’ll take option (a) a million times out of a million. It’s like asking if I’d like a crystal ball. Um, yes.

The operative principle here is “destined to fail.” When I’m fixing a reported bug, I know that once I create a reproducible test case for that bug, my software will fail. It is destined to fail on that test case. So, of course, I want my process of creating the reproducible test case, my software build process, and my program execution itself to all happen as fast as possible. Even better, I wish I had come up with the reproducible test case a year or two ago, so I wouldn’t be under so much pressure now. Because seeing the failure earlier—failing fast—will help me improve my product earlier.

But back to that business idea... Why would you want a business idea to fail fast? Why would you want it to fail at all? Well, of course, you don’t want it to fail, but it doesn’t matter what you want. What if it is destined to fail? It’s really important for you to know that. So how can you know?

Here’s a little trick I can teach you. Your business idea is destined to fail. It is. No matter how awesome your idea is, if you implement your current vision of some non-trivial business idea that will take you, say, a month or more to implement, not refining or evolving your original idea at all, your idea will fail. It will. Seriously. If your brain won’t permit you to conceive of this as a possibility, then your brain is actually increasing the probability that your idea will fail.

You need to figure out what will make your idea fail. If you can’t find it, then find smart people who can. Then, don’t fear it. Don’t try to pretend that it’s not there. Don’t work for a year on the easy parts of your idea, delaying the inevitable hard stuff, hoping and praying that the hard stuff will work its way out. Attack that hard stuff first. That takes courage, but you need to do it.

Find your worst bottleneck, and make it your highest priority. If you cannot solve your idea’s worst problem, then get a new idea. You’ll do yourself a favor by killing a bad idea before it kills you. If you solve your worst problem, then find the next one. Iterate. Shorter iterations are better. You’re done when you’ve proven that your idea actually works. In reality. And then, because life keeps moving, you have to keep iterating.

That’s what fail fast means. It’s about shortening your feedback loop. It’s about learning the most you can about the most important things you need to know, as soon as possible.

So, when I wish you fail fast, it’s a blessing; not a curse.

The April That Was and Our Plans for May and June

Oracle AppsLab - Fri, 2016-05-13 14:31

Hi there, remember me? Wow, April was a busy month for us, and looking ahead, it’s getting busy again.

Busy is good, and also good, is the emergence of new voices here at the ‘Lab. They’ve done a great job holding down the fort. Since my last post in late March, you’ve heard from Raymond (@yuhuaxie), Os (@vaini11a), Tawny (@iheartthannie), Ben (@goldenmean1618) and Mark (@mvilrokx).

Because it’s been a while, here comes an update post on what we’ve been doing, what we’re going to be doing in the near future, and some nuggets you might have missed.

What we’ve been doing

Conference season, like tax season in the US, consumes the Spring. April kicked off for me at Oracle HCM World in Chicago, where Aylin (@aylinuysal) and I had a great session. We showed a couple of our cool voice demos, powered by Noel’s (@noelportugal) favorite gadget, the Amazon Echo, and the audience was visibly impressed.

@jkuramot @theappslab #oaux wows @OracleHCM customers with emerging tech demos & HR tasks use cases #OracleHCMWorld pic.twitter.com/CwkzVeKdVJ

— Gozel Aamoth (@gozelaamoth) April 7, 2016

I like that picture. Looks like I’m wearing the Echo as a tie.

Collaborate 16 was next, where Ben and Tawny collected VR research and ran a focus group on bots. VR is still very much a niche technology. Many Collaborate attendees hadn’t even heard of VR at all and were eager to take the Samsung Gear VR for a test drive.

During the bots focus group, Ben and Tawny tried out some new methods, like Business Origami, which fostered some really interesting ideas among the group.

Business origami taking shape #oaux #CLV16 pic.twitter.com/PJARBrZGka

— The AppsLab (@theappslab) April 12, 2016

Next, Ben headed out directly for the annual Oracle Benelux User Group (OBUG) conference in Arnhem to do more VR research. Our research needs to include international participants, and Ben found more of the same reactions we’ve seen Stateside. With something as new and different as VR, we cast a wide net to get as many perspectives and collect as much data as possible before moving forward with the project.

Oracle Modern Customer Experience was next for us, where we showed several of our demos to a group of students from the Lee Business School at UNLV (@lbsunlv), who then talked about those demos and a range of other topics in a panel session, hosted by Rebecca Wettemann (@rebeccawettemann) of Nucleus Research.

#UNLV #MBA students discuss the future of work with @theappslab @usableapps #ModernCX #SalesX16 pic.twitter.com/5tPl8Y6c95

— Geet (@geet_s) April 28, 2016

The feedback we got on our demos was very interesting. These students belong to a demographic we don’t typically get to hear from, and their commentary gave me some lightning bolts of insight that will be valuable to our work.

As with VR, some of the demos we showed were on devices they had not seen or used yet, and it’s always nice to see someone enjoy a device or demo that has become old hat to me.

Because we live and breathe emerging technologies, we tend to get jaded about new devices far too quickly. So, a reset is always welcome.

What we’re going to be doing in the near future

Next week, we’re back on the road to support an internal IoT hackathon that Laurie’s (@lsptahoe) Apps UX Innovation team (@InnovateOracle) is hosting in the Bay Area.

The countdown has started! Register at https://t.co/Gyx8d2Oh7k and be part of this huge Oracle Conference in June. pic.twitter.com/18fGbG9L74

— AMIS, Oracle & Java (@AMISnl) May 9, 2016

Then, June 2-3, we’re returning to the Netherlands to attend and support AMIS 25. The event celebrates the 25th anniversary of AMIS (@AMISnl), and they’ve decided to throw an awesome conference at what sounds like a sweet venue, “Hangaar 2” at the former military airport Valkenburg in Katwijk outside Amsterdam.

Our GVP, Jeremy Ashley (@jrwashley) will be speaking, as will Mark. Noel will be showing the Smart Office, Mark will be showing his Developer Experience (DX) tools, and Tawny will be conducting some VR research, all in the Experience Zone.

I’ve really enjoyed collaborating with AMIS in the past, and I’m very excited for this conference/celebration.

After a brief stint at home, we’re on the road again in late June for Kscope16, which is both an awesome conference and happily, the last show of the conference year. OpenWorld doesn’t count.

We have big fun plans this year, as always, so stay tuned for details.

Stuff you might have missed

Finally, here are some interesting tidbits I collected in my absence from blogging.

I love me some bio-hacking, so here’s an ingestible meat robot prototype and flexible electronic skin.


And here are some OAUX (@usableapps) links you should read:


Oracle and Benefit Management LLC to Help Ease the Complexity of Bundled Payments Amidst Healthcare Reform

Oracle Press Releases - Fri, 2016-05-13 14:30
Press Release
Oracle and Benefit Management LLC to Help Ease the Complexity of Bundled Payments Amidst Healthcare Reform Oracle and Benefit Management LLC Enable Healthcare Payors and Providers to Shift from Retrospective Payments to Prospective Bundled Payments

Redwood Shores, Calif.—May 13, 2016

Oracle today announced that Benefit Management LLC, a progressive health Third Party Administrator (TPA) and joint venture of NueHealth, has selected Oracle Health Insurance Value Based Payment Cloud Service to deliver a cloud-based prospective bundled payments system for healthcare payors and providers.

While the goal of healthcare reform has been to decrease costs and improve patient outcomes, it has healthcare payors and providers grappling with how to master the complexity of bundled payment models—specifically the shift from retrospective payments (paying for healthcare services after they are completed) to prospective payments (paying an agreed flat fee at the time the service is requested).

Benefit Management LLC will harness the power of Oracle Health Insurance Value Based Payments Cloud Service to help payors seamlessly maneuver and scale complex bundled payments, and help ensure timely and accurate payment to providers as they move to a prospective bundled payments model.

“The transition to value-based payments is one of the biggest challenges facing payors and providers today,” said Chad Somers, CEO of Benefit Management. “It is imperative that all aspects of value-based payments be considered before making this change. Never before has combining the clinical outcomes with payment been so important. Oracle Health Insurance Value Based Payment Cloud Service will allow us to revolutionize bundled payments by processing payment at the time service is rendered instead of months later.”

The combination of Benefit Management’s services, backed by 20 years of success with a diverse client base of self-funded employers, and Oracle’s offering will allow Benefit Management customers to:

  • Easily navigate the transition to prospective bundled payments
  • Reduce the provider risk making it easier for providers to participate
  • Decrease the total cost of bundled procedures
  • Increase the quality of care for bundled procedures

“The Oracle insurance cloud solution can also be leveraged by Benefit Management’s self-funded employers,” Somers said. “Value-based programs, especially when you’re talking about bundles, are of tremendous benefit to self-funded employers, and the capability to manage and adjudicate claims based on those kinds of value-based contracts will be something employers can take advantage of to great effect.”

“Oracle Health Insurance Value Based Payment Cloud Service provides the technology platform to administer many different non fee-for-service arrangements in a single solution,” said Srini Venkatasanthanam, Vice President for Oracle Insurance. “Oracle’s health insurance solution enables organizations such as Benefit Management to reduce time-to-market and significantly lower overall cost of ownership. We look forward to enabling Benefit Management to deliver an innovative prospective bundling offering to the market.”

To learn more about Oracle’s Value Based Payment Component and the opportunities it will provide to providers, payors, and employers, visit Benefit Management online at www.BenefitManagementLLC.com

 
Contact Info
Valerie Beaudett
Oracle
+1.650.400.7833
valerie.beaudett@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Benefit Management LLC

Benefit Management provides customized, high-quality health benefits administration programs to partially self-insured companies and association plans nationwide, and also delivers value-based administration solutions to providers and payors. The company, founded in 1995, has long been regarded as one of the Midwest’s leading third party administrators, with a reputation for flexibility, innovative services and outstanding customer service. Benefit Management is headquartered in Great Bend, Kan., with offices in Wichita, Kansas City and St. Louis, and administers insured lives in all 50 states.

About NueHealth

NueHealth is building a nationwide system of clinically integrated provider networks that puts healthcare into the hands of consumers. With a vast network of purpose-driven surgical centers and hospitals, NueHealth connects providers directly to consumers and aids them in delivering value-based payment options and improved outcomes. To deliver this improved value, NueHealth leverages proprietary technologies, online platforms, bundled payments and targeted programs and services. NueHealth gives providers and payors the tools and resources to stay ahead of healthcare’s continued evolution, while giving employers, insurance companies and patients access to a simplified model and affordable, high-quality, streamlined care.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Valerie Beaudett

  • +1.650.400.7833

JDeveloper 10g Certified on Windows 10 for EBS 12.1

Steven Chan - Fri, 2016-05-13 14:19

JDeveloper 10g is now certified for Windows 10 desktops for Oracle E-Business Suite 12.1. See:

When you create extensions to Oracle E-Business Suite OA Framework pages, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product team.

The version of Oracle JDeveloper is specific to the Oracle E-Business Suite Applications Technology patch level, so there is a new version of Oracle JDeveloper with each new release of the Oracle E-Business Suite Applications Technology patchset.

This Note lists the JDeveloper with OA Extension updates for EBS 11i, 12.0, 12.1, and 12.2:

Pending Certification 

Our certification of JDeveloper 10g on Windows 10 for EBS 12.2 is still underway. 

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

Related Articles

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Categories: APPS Blogs

Win a FREE PASS to Oracle OpenWorld 2016: Oracle Cloud Platform Innovation Awards - Submit Nominations by June 20th 2016!

WebCenter Team - Fri, 2016-05-13 11:49


Calling all Oracle Cloud Platform Innovators

We invite you to Submit Nominations for the 2016
Oracle Excellence Awards:
Oracle Cloud Platform Innovation



Click here, to submit your nomination today!


Call for Nominations:
Oracle Cloud Platform Innovation 2016

Are you using Oracle Cloud Platform to deliver unique business value? If so, submit a nomination today for the 2016 Oracle Excellence Awards for Oracle Cloud Platform Innovation. These highly coveted awards honor customers and their partners for their cutting-edge solutions using Oracle Cloud Platform. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.


Customer Winners receive a free pass to Oracle OpenWorld 2016 in San Francisco (September 18-September 22) and will be honored during a special event at OpenWorld.

Our 2016 Award Categories are:

To be considered for this award, complete the online nomination forms and submit before June 20th, 2016. For any questions email: innovation-cloud-platform_ww_grp@oracle.com

NOTE: The deadline to submit all nominations is 5pm Pacific on June 20th, 2016. Customers don't have to be in production to submit a nomination and nominations are for both Cloud and on-premise solutions.

Analyze Index Validate Structure – The Dark Side

Pythian Group - Fri, 2016-05-13 09:38

Recently a co-worker wanted to discuss a problem he had encountered after upgrading a database.

The upgrade plan included steps to verify object integrity; this was being done with analyze table <tablename> validate structure cascade. All was fine until one particular table was being analyzed.  Suddenly it seemed the process entered a hung state.

The job was killed and separate commands were created to analyze each object individually.  That went well up until one of the last indexes was reached.

Me: How long has it been running?

Coworker: Three days.

Yes, you read that correctly, it had been running for three days.

My friend ran a 10046 trace to see what the process was doing; nearly all the work was ‘db file sequential read’ on the table.

At this time I suspected it was related to the clustering_factor for the index in question. The analyze process for an index verifies each row in the index. If the table is well ordered relative to the index then the number of blocks read from the table will be similar to the number of blocks making up the table.

If however the table is not well ordered relative to the columns in the index the number of blocks read from the table can be many times the total number of blocks that are actually in the table.

Consider for a moment that we have rows with an  ID of 1,2,3,4 and 5.  Let’s assume that our index is created on the ID column.

If these rows are stored in order in the table, it is very likely these rows will all be in the same block, and that a single block read will fetch all of these rows.

If however the rows are stored in some random order, it may be that a separate block read is required for each lookup.

ID   Block Number
1    22
2    75
3    16
4    25
5    104

In this case 5 separate blocks must be read to retrieve these rows.

In the course of walking the index, some  minutes later these rows must also be read:

ID        Block Number
1048576   22
1048577   75
1048578   16
1048579   25
1048580   104

The blocks where these rows reside are the same blocks as the earlier example. The problem of course is that quite likely the blocks have been removed from cache by this time, and must be read again from disk.

Now imagine performing this for millions of rows. With a poor clustering factor the analyze command on an index could take quite some time to complete.

This seemed worthy of a test so we could get a better idea of just how bad this issue might be.

The test was run with 1E7 rows. The SQL shown below creates 1E7 rows, but you can simply change the value of level_2 to 1e3 to reduce the total rows to 1E6, or even smaller if you like.

 


-- keep this table small and the rows easily identifiable
-- or not...

-- 1e3 x 1e4 = 1e7
def level_1=1e3
def level_2=1e4

drop table validate_me purge;

create table validate_me
tablespace
   alloctest_a -- EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO
   --alloctest_m -- EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT MANUAL
   --alloctest_u -- EXTENT MANAGEMENT LOCAL UNIFORM SIZE 65536 SEGMENT SPACE MANAGEMENT AUTO
pctfree 0
as
select
   -- for a good clustering factor
   --id
   --
   -- for a bad clustering factor
   floor(dbms_random.value(1,1e6)) id
   , substr('ABCDEFGHIJKLMNOPQRSTUVWXYZ',mod(id,10),15) search_data
   , to_char(id,'99') || '-' || rpad('x',100,'x') padded_data
from (
   select rownum id
   from
   (
      select null
      from dual
      connect by level <= &level_1
   ) a,
   (
      select null
      from dual
      connect by level <= &level_2
   ) b
)
/

create index validate_me_idx1 on validate_me(id,search_data);

exec dbms_stats.gather_table_stats(user,'VALIDATE_ME',method_opt => 'for all columns size 1')


 

Let’s see just what the clustering factor is for this index. The following script cluster-factor.sql will get this information for us.

 


col v_tablename new_value v_tablename noprint
col v_owner new_value v_owner noprint

col table_name format a20 head 'TABLE NAME'
col index_name format a20 head 'INDEX NAME'
col index_rows format 9,999,999,999 head 'INDEX ROWS'
col table_rows format 9,999,999,999 head 'TABLE ROWS'
col clustering_factor format 9,999,999,999 head 'CLUSTERING|FACTOR'
col leaf_blocks format 99,999,999 head 'LEAF|BLOCKS'
col table_blocks format 99,999,999 head 'TABLE|BLOCKS'


prompt
prompt Owner:
prompt

set term off feed off verify off
select upper('&1') v_owner from dual;
set term on feed on

prompt
prompt Table:
prompt

set term off feed off verify off
select upper('&2') v_tablename from dual;
set term on feed on


select
   t.table_name
   , t.num_rows table_rows
   , t.blocks table_blocks
   , i.index_name
   , t.num_rows index_rows
   , i.leaf_blocks
   , clustering_factor
from all_tables t
   join all_indexes i
      on i.table_owner = t.owner
      and i.table_name = t.table_name
where t.owner = '&v_owner'
   and t.table_name = '&v_tablename'

/

undef 1 2

 

Output from the script:

 

SQL> @cluster-factor jkstill validate_me

Owner:

Table:


                                          TABLE                                            LEAF     CLUSTERING
TABLE NAME               TABLE ROWS      BLOCKS INDEX NAME               INDEX ROWS      BLOCKS         FACTOR
-------------------- -------------- ----------- -------------------- -------------- ----------- --------------
VALIDATE_ME              10,000,000     164,587 VALIDATE_ME_IDX1         10,000,000      45,346     10,160,089

1 row selected.

Elapsed: 00:00:00.05

 

On my test system creating the table for 1E7 rows required about 2 minutes and 15 seconds, while creating the index took 28 seconds.

You may be surprised at just how long it takes to analyze that index.

 

SQL> analyze index jkstill.validate_me_idx1 validate structure online;

Index analyzed.

Elapsed: 00:46:06.49

 

Prior to executing this command a 10046 trace had been enabled, so there is a record of how Oracle spent its time on this command.
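
For reference, the extended SQL trace behind that file is the standard 10046 event; enabling it ahead of the analyze looks roughly like the sketch below (level 8 includes the wait events, and the tracefile_identifier simply makes the resulting trace file easy to spot; connect as whichever account will run the analyze).

sqlplus / as sysdba << END
-- tag the trace file name so it is easy to find
alter session set tracefile_identifier = 'VALIDATE';
-- 10046 level 8 = SQL trace plus wait events
alter session set events '10046 trace name context forever, level 8';
analyze index jkstill.validate_me_idx1 validate structure online;
alter session set events '10046 trace name context off';
exit
END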

 

If you are wondering how much of the 46 minutes was consumed by the tracing and writing the trace file, it was about 6 minutes:

 

$> grep "WAIT #48004569509552: nam='db file sequential read'" oravm1_ora_2377_VALIDATE.trc | awk '{ x=x+$8 } END { printf ("%3.2f\n",x/1000000/60) }'
  40.74

A Well Ordered Table

Now let's see how analyze index validate structure performs when the table is well ordered. The table uses the DDL from the previous example, but rather than using dbms_random to generate the ID column, the table is created with the rows loaded in ID order. This is done by uncommenting id in the DDL and commenting out the call to dbms_random.

 

SQL> analyze index jkstill.validate_me_idx1 validate structure online;

Index analyzed.

Elapsed: 00:01:40.53

That was a lot faster than before: 1 minute and 40 seconds, whereas previously the same command ran for about 46 minutes.

 

Using some simple command line tools we can see how many times each block was visited.

 

First find the cursor and check whether it was used only once in the session:

$> grep -B1 '^analyze index' oravm1_ora_19987_VALIDATE.trc
PARSING IN CURSOR #47305432305952 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462922977143796 hv=2128321230 ad='b69cfe10' sqlid='318avy9zdr6qf'
analyze index jkstill.validate_me_idx1 validate structure online


$> grep -nA1 'PARSING IN CURSOR #47305432305952' oravm1_ora_19987_VALIDATE.trc
63:PARSING IN CURSOR #47305432305952 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462922977143796 hv=2128321230 ad='b69cfe10' sqlid='318avy9zdr6qf'
64-analyze index jkstill.validate_me_idx1 validate structure online
--
276105:PARSING IN CURSOR #47305432305952 len=55 dep=0 uid=90 oct=42 lid=90 tim=1462923077576482 hv=2217940283 ad='0' sqlid='06nvwn223659v'
276106-alter session set events '10046 trace name context off'

As this cursor number was reused, we need to limit the lines we consider from the trace file.

 

One wait line appears like this:

WAIT #47305432305952: nam='db file sequential read' ela= 317 file#=8 block#=632358 blocks=1 obj#=335456 tim=1462923043050233

As it is already known that the entire table resides in one file, it is not necessary to check the file number.

From the following command it is clear that no block was read more than once during the analyze index validate structure when the table was well ordered in relation to the index.

 

$> tail -n +64 oravm1_ora_19987_VALIDATE.trc| head -n +$((276105-64)) | grep "WAIT #47305432305952: nam='db file sequential read'" | awk '{ print $10 }' | awk -F= '{ print $2 }' | sort | uniq -c | sort -n | tail
      1 742993
      1 742994
      1 742995
      1 742996
      1 742997
      1 742998
      1 742999
      1 743000
      1 743001
      1 743002

 

That command line may look a little daunting, but it is really not difficult when each bit is considered separately.

From the grep command that searched for cursors we know that the cursor we are interested in first appeared at line 64 in the trace file.

tail -n +64 oravm1_ora_19987_VALIDATE.trc

The cursor was reused at line 276105, so tell the tail command to output only the lines up to that point in the file.

head -n +$((276105-64))

The interesting information in this case is for ‘db file sequential read’ on the cursor of interest.

grep "WAIT #47305432305952: nam='db file sequential read'"

Next awk is used to output the block=N portion of each line.

awk '{ print $10 }'

awk is again used, but this time to split the block=N output at the ‘=’ operator, and output only the block number.

awk -F= '{ print $2 }'

The cut command could have been used here as well, e.g. cut -d= -f2

Sort the block numbers

sort

Use the uniq command to get a count of how many times each value appears in the output.

uniq -c

Use sort -n to sort the output from uniq.  If there are any counts greater than 1, they will appear at the end of the output.

sort -n

And pipe the output through tail. We only care if any block was read more than once.

tail

Now for the same procedure on the trace file generated from the poorly ordered table.

 


$> grep -B1 '^analyze index' oravm1_ora_2377_VALIDATE.trc
PARSING IN CURSOR #48004569509552 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462547433220254 hv=2128321230 ad='aad620f0' sqlid='318avy9zdr6qf'
analyze index jkstill.validate_me_idx1 validate structure online

$> grep -nA1 'PARSING IN CURSOR #48004569509552' oravm1_ora_2377_VALIDATE.trc
51:PARSING IN CURSOR #48004569509552 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462547433220254 hv=2128321230 ad='aad620f0' sqlid='318avy9zdr6qf'
52-analyze index jkstill.validate_me_idx1 validate structure online
--
6076836:PARSING IN CURSOR #48004569509552 len=55 dep=0 uid=90 oct=42 lid=90 tim=1462550199668869 hv=2217940283 ad='0' sqlid='06nvwn223659v'
6076837-alter session set events '10046 trace name context off'

 

The top 30 most active blocks were each read 53 or more times when the table was not well ordered in relation to the index.

$> tail -n +51 oravm1_ora_2377_VALIDATE.trc | head -n +$((6076836-51)) | grep "WAIT #48004569509552: nam='db file sequential read'" | awk '{ print $10 }' | awk -F= '{ print $2 }' | sort | uniq -c | sort -n | tail -30
     53 599927
     53 612399
     53 613340
     53 633506
     53 640409
     53 644099
     53 649054
     53 659198
     53 659620
     53 662600
     53 669176
     53 678119
     53 682177
     53 683409
     54 533294
     54 533624
     54 537977
     54 549041
     54 550178
     54 563206
     54 568045
     54 590132
     54 594809
     54 635330
     55 523616
     55 530064
     55 532693
     55 626066
     55 638284
     55 680250

 

Use RMAN

There is a feature of RMAN that allows checking for logical and physical corruption of an Oracle database via the  command backup check logical validate database.  This command does not actually create a backup, but just reads the database looking for corrupt blocks. Following is an (edited) execution of running this command on the same database where the analyze index commands were run.

A portion of the block corruption report is included.

RMAN> backup check logical validate database;
2>
Starting backup at 06-MAY-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 instance=oravm1 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00008 name=+DATA/oravm/datafile/alloctest_a.273.789580415
input datafile file number=00009 name=+DATA/oravm/datafile/alloctest_u.272.789582305
input datafile file number=00024 name=+DATA/oravm/datafile/swingbench.375.821472595
input datafile file number=00023 name=+DATA/oravm/datafile/swingbench.374.821472577
input datafile file number=00019 name=+DATA/oravm/datafile/bh08.281.778786819
input datafile file number=00002 name=+DATA/oravm/datafile/sysaux.257.770316147
input datafile file number=00004 name=+DATA/oravm/datafile/users.259.770316149
input datafile file number=00001 name=+DATA/oravm/datafile/system.256.770316143
input datafile file number=00011 name=+DATA/oravm/datafile/alloctest_m.270.801310167
input datafile file number=00021 name=+DATA/oravm/datafile/ggs_data.317.820313833
input datafile file number=00006 name=+DATA/oravm/datafile/undotbs2.265.770316553
input datafile file number=00026 name=+DATA/oravm/datafile/undotbs1a.667.850134899
input datafile file number=00005 name=+DATA/oravm/datafile/example.264.770316313
input datafile file number=00014 name=+DATA/oravm/datafile/bh03.276.778786795
input datafile file number=00003 name=+DATA/oravm/datafile/rcat.258.861110361
input datafile file number=00012 name=+DATA/oravm/datafile/bh01.274.778786785
input datafile file number=00013 name=+DATA/oravm/datafile/bh02.275.778786791
input datafile file number=00022 name=+DATA/oravm/datafile/ccdata.379.821460707
input datafile file number=00007 name=+DATA/oravm/datafile/hdrtest.269.771846069
input datafile file number=00010 name=+DATA/oravm/datafile/users.271.790861829
input datafile file number=00015 name=+DATA/oravm/datafile/bh04.277.778786801
input datafile file number=00016 name=+DATA/oravm/datafile/bh05.278.778786805
input datafile file number=00017 name=+DATA/oravm/datafile/bh06.279.778786809
input datafile file number=00018 name=+DATA/oravm/datafile/bh07.280.778786815
input datafile file number=00020 name=+DATA/oravm/datafile/bh_legacy.282.778787059
input datafile file number=00025 name=+DATA/oravm/datafile/baseline_dat.681.821717827
input datafile file number=00027 name=+DATA/oravm/datafile/sqlt.668.867171675
input datafile file number=00028 name=+DATA/oravm/datafile/bh05.670.878914399

channel ORA_DISK_1: backup set complete, elapsed time: 00:25:27

List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1    OK     0              75632        256074          375655477
  File Name: +DATA/oravm/datafile/system.256.770316143
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              158478
  Index      0              17160
  Other      0              4730

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     0              36332        394240          375655476
  File Name: +DATA/oravm/datafile/sysaux.257.770316147
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              170007
  Index      0              138603
  Other      0              49298

 

As shown in the report, only about 25 minutes were required to check the entire database for physically or logically corrupt blocks, as opposed to the 40 minutes needed to run analyze index validate structure on a single index.

While the RMAN corruption check is not the same as the check performed by analyze index validate structure, it is a test that can be completed in a much more timely manner, particularly if some indexes are both large and have a high value for the clustering factor.
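
For scripting purposes, the same check can be run non-interactively and the results inspected from the data dictionary. The following is only a minimal sketch; the OS-authenticated connections (rman target /, sqlplus / as sysdba) are assumptions, and V$DATABASE_BLOCK_CORRUPTION is where RMAN records any corrupt blocks it finds:

#!/bin/bash

# Run the RMAN logical check without creating a backup
rman target / <<'EOF'
backup check logical validate database;
EOF

# List any corrupt blocks that the validation recorded
sqlplus -S / as sysdba <<'EOF'
set lines 200 pages 100
select file#, block#, blocks, corruption_type
from v$database_block_corruption;
EOF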

Rebuild the Index?

If you have strong suspicions that a large index with an unfavorable clustering factor has corrupt blocks, it may be more expedient to just rebuild the index.  If the database is on Oracle Enterprise Edition, the rebuild can also be done with the ONLINE option.
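
If you want to see in advance which large indexes have an unfavorable clustering factor (a value close to the number of table blocks is good, while a value approaching the number of rows is not), a quick look at DBA_INDEXES can help identify candidates. This is a minimal sketch only; the size and ratio thresholds are arbitrary assumptions, not recommendations:

#!/bin/bash

# Larger indexes whose clustering factor is much higher than the table block count
sqlplus -S / as sysdba <<'EOF'
set lines 200 pages 100
col owner format a20
col index_name format a30
select i.owner, i.index_name, i.num_rows, i.clustering_factor, t.blocks table_blocks
from dba_indexes i
join dba_tables t on t.owner = i.table_owner and t.table_name = i.table_name
where i.leaf_blocks > 10000                 -- arbitrary size threshold
  and i.clustering_factor > t.blocks * 10   -- arbitrary "unfavorable" threshold
order by i.clustering_factor;
EOF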

Consider again the index on the test table with 1E7 rows.  Creating the index required 28 seconds, while validating the structure required 40 minutes.

 

 SQL> alter index validate_me_idx1 rebuild online;

Index altered.

Elapsed: 00:00:59.88


 

The conclusion is quite clear: the use of analyze index validate structure needs to be carefully considered when its use is contemplated for large indexes. The command can be very resource intensive and take quite some time to complete, so it is worthwhile to consider alternatives that may be much less resource intensive and time consuming.

Categories: DBA Blogs

MySQL encrypted streaming backups directly into AWS S3

Pythian Group - Fri, 2016-05-13 09:25
Overview

Cloud storage is becoming more and more popular for offsite storage and DR solutions for many businesses. This post will help those who want to stream MySQL backups directly into Amazon S3 storage. These steps can probably also be adapted for other processes that may not be MySQL oriented.

Steps

In order to perform this task we need to be able to stream the data, encrypt it, and then upload it to S3. There are a number of ways to do each step, and I will dive into multiple examples so that you can mix and match the solution to your desired results. The AWS S3 CLI tools that I will be using to do the upload also allow encryption, but to keep these steps open for customization I am going to do the encryption in the stream.

  1. Stream MySQL backup
  2. Encrypt the stream
  3. Upload the stream to AWS S3
Step 1 : Stream MySQL Backup

There are a number of ways to stream the MySQL backup, and a lot depends on your method of backup. We can stream the output of mysqldump, or we can use the file-level backup tool Percona XtraBackup to stream the backup. Here are some examples of how these would be performed.

mysqldump

mysqldump naturally streams its results; this is why we have to add the greater-than sign to redirect the data into a .sql file. Since mysqldump is already streaming the data, we will pipe the results into our next step.

[root@node1 ~]# mysqldump --all-databases > employee.sql

becomes

[root@node1 ~]# mysqldump --all-databases |
xtrabackup

xtrabackup will stream the backup, but with a little more assistance to tell it to do so. You can reference Percona's online documentation (https://www.percona.com/doc/percona-xtrabackup/2.4/innobackupex/streaming_backups_innobackupex.html) for all of the different ways to stream and compress the backups using xtrabackup. We will be using the stream-to-tar method.

innobackupex --stream=tar /root > /root/out.tar

becomes

innobackupex --stream=tar ./ |
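
If you also want to compress the backup in the stream, a compression tool can be added to the pipeline before the encryption step that follows. This is an optional sketch, assuming gzip is installed; the Percona documentation linked above also covers xtrabackup's own compression options.

[root@node1 ~]# mysqldump --all-databases | gzip |
or
[root@node1 ~]# innobackupex --stream=tar ./ | gzip |
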
Step 2 : Encrypt The Stream

Now that we have the backup process in place, we will then want to make sure that our data is secure. We will want to encrypt the data that we are going to be sending up to AWS S3 as to make sure the data is protected. We can accomplish this a couple of ways. The first tool I am going to look at is GnuPG (https://www.gnupg.org/), which is the open source version of PGP encryption. The second tool I will look at is another very popular tool OpenSSL (https://www.openssl.org/).  Below are examples of how I set them up and tested their execution with streaming.

GnuPG

I will be creating a public and private key pair with a password that will be used to encrypt and decrypt the data. If you are going to do this for your production and sensitive data, please ensure that your private key is safe and secure. When creating the keypair I was asked to provide a password, and when decrypting the data I was asked for the password again to complete the process. It was an interactive step and is not shown in the example below. To encrypt a stream, you simply don't provide a file name to encrypt; to stream the output, you just don't provide an output parameter.

KEY PAIR CREATION
[root@node1 ~]# gpg --gen-key
gpg (GnuPG) 2.0.14; Copyright (C) 2009 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: root
Name must be at least 5 characters long
Real name: root@kmarkwardt
Email address: markwardt@pythian.com
Comment:
You selected this USER-ID:
    "root@kmarkwardt <markwardt@pythian.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
gpg-agent[1776]: directory `/root/.gnupg/private-keys-v1.d' created
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

After typing for what felt like FOREVER, to generate enough entropy

gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 1EFB61B1 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/1EFB61B1 2016-04-29
      Key fingerprint = 8D98 2D23 3C49 F1E7 9CD2  CD0F 7163 EB03 1EFB 61B1
uid                  root@kmarkwardt <markwardt@pythian.com>
sub   2048R/577322A0 2016-04-29

[root@node1 ~]#

 

SAMPLE USAGE
ENCRYPT
[root@node1 openssl]# echo "test" | gpg --output install.log.gpg --encrypt -r root 
[root@node1 openssl]# cat install.log.gpg
?
 ???    Ws"???l?
??g             ?w??g?C}P
   ?5A??f?6?p?
???Qq?m??&?rKE??*}5.?4XTj?????Th????}A???: ^V?/w?$???"?<'?;
?Y?|?W????v?R??a?8o<BG??!?R???f?u?????????e??????/?X?y?S7??H??@???Y?X~x>qoA0??L?????*???I?;I?l??]??Gs?G'?!??
                                                                                                            ??k>?
DECRYPT
[root@node1 ~]# gpg --decrypt -r root --output install.log.decrypted install.log.gpg
install.log.decrypted
You need a passphrase to unlock the secret key for
user: "root@kmarkwardt <markwardt@pythian.com>"
2048-bit RSA key, ID 577322A0, created 2016-04-29 (main key ID 1EFB61B1)

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
gpg: encrypted with 2048-bit RSA key, ID 577322A0, created 2016-04-29
     "root@kmarkwardt <markwardt@pythian.com>"
[root@node1 ~]# ls
install.log.decrypted
install.log.gpg

ENCRYPT STREAM

[root@node1 ~]# mysqldump --all-databases | gpg --encrypt -r root 
or
[root@node1 ~]# innobackupex --stream=tar ./ | gpg --encrypt -r root 
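
When the time comes to restore, the direction is simply reversed: gpg decrypts to STDOUT, which can feed the mysql client directly. A minimal sketch; the file name is only an example, and gpg will prompt for the key passphrase unless an agent is configured.

[root@node1 ~]# gpg --decrypt mysqldump.sql.gpg | mysql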

 

OpenSSL

As with GPG, we will generate a public and private key with a passphrase. There are other ways to use openssl to encrypt and decrypt the data, such as using just a password with no keys, using just keys with no password, or encrypting with no password or keys. I am using keys with a password as this is a very secure method.

KEY PAIR CREATION
[root@node1 openssl]# openssl req -newkey rsa:2048 -keyout privkey.pem -out req.pem
Generating a 2048 bit RSA private key
.......................................+++
........+++
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

[root@node1 openssl]# openssl x509 -req -in req.pem -signkey privkey.pem -out cert.pem
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
Enter pass phrase for privkey.pem:
[root@node1 openssl]# ls -al
total 20
drwxr-xr-x  2 root root 4096 May  5 10:47 .
dr-xr-x---. 9 root root 4096 May  4 04:38 ..
-rw-r--r--  1 root root 1103 May  5 10:47 cert.pem
-rw-r--r--  1 root root 1834 May  5 10:43 privkey.pem
-rw-r--r--  1 root root  952 May  5 10:43 req.pem
[root@node1 openssl]# rm -rf req.pem 
SAMPLE USAGE
ENCRYPT
[root@node1 openssl]# echo "test" | openssl smime -encrypt -aes256 -binary -outform DER cert.pem > test.dat
[root@node1 openssl]# cat test.dat 
???0??1?k0?g0O0B1
                 0    UXX10U

                              Default City10U

?V??p?A$????PO??+???q@t??????\"%:0
??J?????5???0?D/?1z-?xO??&?#?;???E>^?g??#7??#m????lA???'??{)?*xM
P?l????]iz/???H???????[root@node1 openssl]#
DECRYPT
[root@node1 openssl]# openssl smime -decrypt -in test.dat -inform DER -inkey privkey.pem -out test.txt
Enter pass phrase for privkey.pem:
[root@node1 openssl]# cat test.txt 
test

ENCRYPT STREAM

[root@node1 ~]# mysqldump --all-databases | openssl smime -encrypt -aes256 -binary -outform DER cert.pem
or 
[root@node1 ~]# innobackupex --stream=tar ./ | openssl smime -encrypt -aes256 -binary -outform DER cert.pem
Step 3 : Stream to Amazon AWS S3

Now that we have secured the data, we will want to pipe it into an Amazon AWS S3 bucket. This will provide an offsite copy of the MySQL backup that you can convert to long-term storage, or restore into an EC2 instance. For this step I will only be looking at one method: the Amazon-provided AWS CLI tools, which incorporate S3 support and allow you to copy your files into S3 while streaming the input.

AWS CLI

In order to tell the AWS CLI S3 copy command to accept STDIN input, you just have to put a dash in place of the source file. This allows the command to accept a stream to copy. The AWS CLI tools for copying into S3 also allow for encryption, but I wanted to provide other methods as well so you can customize your own solution. You can also stream the download of the S3 bucket item, which could allow for decompression as you download the data, or any number of other options.

UPLOAD STREAM

echo "test" | aws s3 cp - s3://pythian-test-bucket/incoming.txt 

BACKUP / ENCRYPT / UPLOAD STREAM

-- MySQL Dump -> OpenSSL Encryption -> AWS S3 Upload
[root@node1 ~]# mysqldump --all-databases | openssl smime -encrypt -aes256 -binary -outform DER cert.pem | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.dat
-- Xtrabackup -> OpenSSL Encryption -> AWS S3 Upload
[root@node1 ~]# innobackupex --stream=tar ./ | openssl smime -encrypt -aes256 -binary -outform DER cert.pem |aws s3 cp - s3://pythian-test-bucket/mysqldump.tar.dat
-- MySQL Dump -> GPG Encryption -> AWS S3 Upload
[root@node1 ~]# mysqldump --all-databases | gpg --encrypt -r root | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.gpg
-- Xtrabackup -> GPG Encryption -> AWS S3 Upload
[root@node1 ~]# innobackupex --stream=tar ./ | gpg --encrypt -r root | aws s3 cp - s3://pythian-test-bucket/mysqldump.tar.gpg
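
RESTORE FROM STREAM

The same idea works in reverse for a restore: stream the object back out of S3 by using a dash as the destination, decrypt it on the fly, and feed it to the mysql client. This is a minimal sketch using the example object names from above; both openssl smime and gpg read STDIN when no input file is given, and both will prompt for the key passphrase.

-- AWS S3 Download -> OpenSSL Decryption -> MySQL Restore
[root@node1 ~]# aws s3 cp s3://pythian-test-bucket/mysqldump.sql.dat - | openssl smime -decrypt -inform DER -inkey privkey.pem | mysql
-- AWS S3 Download -> GPG Decryption -> MySQL Restore
[root@node1 ~]# aws s3 cp s3://pythian-test-bucket/mysqldump.sql.gpg - | gpg --decrypt | mysql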

References

  • https://www.percona.com/doc/percona-xtrabackup/2.4/innobackupex/streaming_backups_innobackupex.html
  • https://linuxconfig.org/using-openssl-to-encrypt-messages-and-files-on-linux
  • https://www.gnupg.org/gph/en/manual/c14.html
  • https://www.gnupg.org/gph/en/manual/x110.html
  • https://www.openssl.org/docs/manmaster/apps/openssl.html
  • http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html

 

 

Categories: DBA Blogs

Healthcare Organizations Turn to Oracle ERP Cloud for Industry’s Most Complete, Modern, and Proven Solution

Oracle Press Releases - Fri, 2016-05-13 07:00
Press Release
Healthcare Organizations Turn to Oracle ERP Cloud for Industry’s Most Complete, Modern, and Proven Solution
Oracle expands portfolio of healthcare customers with Adventist Health, Family Health, Presbyterian Medical Services, and Southern New Hampshire Health

Redwood Shores, Calif.—May 13, 2016

Oracle today announced that an increasing number of hospitals and healthcare systems worldwide are choosing Oracle Enterprise Resource Planning (ERP) Cloud to help increase productivity, lower costs, and improve controls. Adventist Health, Family Health, Presbyterian Medical Services, and Southern New Hampshire Health are just a few of the more than 1,800 organizations that have recently turned to Oracle ERP Cloud for a collaborative, efficient, and intuitive back-office hub with rich financial and operational capabilities to help reduce costs and modernize business practices to gain insight and productivity.

Healthcare organizations are under increasing pressure from regulatory agencies, consumers, employers, and governing boards to drive cost efficiency while maintaining quality patient care. Today’s healthcare systems are slow and expensive to maintain, especially when integrating a merged or acquired company. By standardizing on Oracle Modern Best Practice for Healthcare using embedded analytics, contextual social collaboration, and mobile technologies, healthcare organizations can achieve economies of scale faster to lower costs and access the innovative technologies to stay ahead of industry changes.

“Presbyterian Medical Services was looking for a complete cloud ERP solution that would help us drive efficiencies while maintaining quality patient care in a diverse, complex industry," said Chad Morris, senior ERP systems analyst, Presbyterian Medical Services. "In just 14 weeks, we were able to go live on Oracle ERP Cloud implementing a wide range of efficiencies with our 90+ locations across New Mexico. We are now in the process of going live on Oracle HCM Cloud."

“Hospitals and healthcare systems are contending with increased financial pressures resulting from rising costs coupled with reduced collections, an influx of data, and changing patient interactions due to high-deductible healthcare plans,” said Rod Johnson, senior vice president, Oracle. “We are committed to delivering innovative cloud solutions that help these healthcare organizations thrive and gain competitive advantage amid shifting industry conditions. Oracle ERP Cloud increases operational efficiencies and identifies growth areas to help healthcare organizations cost-effectively meet these challenges across their complex organizations and workforces.”

Built on a secure and scalable architecture, Oracle ERP Cloud provides customers with flexible alternatives via full or modular adoption, seamless integration across clouds, on-premise and third party applications, standardized best practices and rapid implementation templates, as well as extensive support for global companies in a wide variety of industries, including healthcare. Oracle ERP Cloud delivers complete ERP capabilities across financials, procurement, and project portfolio management, as well as Enterprise Performance Management (EPM), Governance Risk and Compliance (GRC) and Supply Chain Management (SCM). The portfolio includes deep global and industry-specific capabilities and is fully integrated with Oracle Human Capital Management (HCM) and Customer Relationship Management (CRM) solutions.

Contact Info
Nicole Maloney
Oracle
1.650.506.0806
nicole.maloney@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

