Feed aggregator

SLOB 2.3 Is Getting Close!

Kevin Closson - Thu, 2015-05-28 15:30

SLOB 2.3 is soon to be released. This version has a lot of new, important features but also a significant amount of tuning in the data loading kit. Before sharing where the progress is on that front, I’ll quickly list some of the new important features that will be in SLOB 2.3:

  1. Single Schema Support. SLOB historically avoids application-level contention by having database sessions perform the SLOB workload against a private schema. The idea behind SLOB is to exert maximum I/O pressure on storage while utilizing the minimum amount of host CPU possible. This lowers the barrier to entry for proper testing as one doesn’t require dozens of processors festering in transactional SQL code just to perform physical I/O. That said, there are cases where a single, large active data set is desirable–if not preferred. SLOB 2.3 allows one to load massive data sets quickly and run large numbers of SLOB threads (database sessions) to drive up the load on the system.
  2. Advanced Hot Spot Testing. SLOB 2.3 supports configuring each SLOB thread such that every Nth SQL statement operates on a hot spot sized in megabytes as specified in the slob.conf file. Moreover, this version of SLOB allows one to dictate the offset for the hot spot within the active data set. This allows one to easily move the hot spot from one test execution to the next. This sort of testing is crucial for platform experts studying hybrid storage arrays that identify and promote “hot” data into flash for example.
  3. Threaded SLOB. SLOB 2.3 allows one to use either multiple SLOB schemas or the new Single Schema; to drive up the load, one can specify how many SLOB threads per schema will be active (see the illustrative configuration sketch after this list).
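
For illustration, here is a hedged sketch of what the relevant slob.conf settings might look like. The parameter names below are only indicative of the knobs described above - consult the slob.conf shipped with the release for the authoritative names and defaults:

# Illustrative slob.conf fragment - names and values are assumptions
THREADS_PER_SCHEMA=4     # Threaded SLOB: number of SLOB threads per schema
DO_HOTSPOT=TRUE          # enable Advanced Hot Spot Testing
HOTSPOT_MB=8             # size of the hot spot in megabytes
HOTSPOT_OFFSET_MB=16     # offset of the hot spot within the active data set
HOTSPOT_FREQUENCY=3      # every Nth SQL statement operates on the hot spot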

 

To close out this short blog entry I'll make note that the SLOB 2.3 data loader is now loading a 1TB-scale Single Schema in just short of one hour (55.9 minutes, to be exact). This procedure includes data loading, index creation and CBO statistics gathering. The following was achieved with a moderate IVB-EP 2s20c40t server running Oracle Linux 6.5 and Oracle Database 12c, connected to an EMC XtremIO array via 8GFC Fibre Channel. I think this shows that even the SLOB data loader is a worthwhile workload in its own right.

SLOB 2.3 Data Loading 1TB/h


Filed under: oracle

Deep-Diving Oracle UX PaaS4SaaS Partner Enablement in Asia

Usable Apps - Wed, 2015-05-27 15:03

Just back from our fantastic Oracle Applications User Experience (OAUX) communications outreach events in Asia. The VoX blog has great recaps of the highlights and takeaways from two tremendous events in Singapore and Beijing.

 Chinese visa shown

Oracle Applications Cloud user experience in Asia: Enabling a local user experience. Empowering global capabilities.

The second day in each location consisted of a deep-dive workshop on Oracle Applications Cloud user experience extensibility and PaaS4SaaS capabilities. Together, they're a powerful competitive differentiator that empowers customers and partners to really make that cloud their own.

It’s worth calling out the comments of co-worker Greg Nerpouni (@gnerpouni) again. Greg really nails the excitement and enthusiasm for what we shared when he says:

"The extensibility and PaaS4SaaS stations were mobbed by our Chinese and Korean partners, especially when they realized the combined power of our extensibility and PaaS4SaaS capabilities. At the extensibility station, they saw tangible ways to increase end user participation and overall success of their cloud rollout for our mutual customers. And at the PaaS4SaaS station, they saw immediate value in being able to leverage the UX rapid development kit to emulate Oracle’s user experience in their own PaaS implementations - and seamlessly integrate their PaaS applications into Oracle Cloud Applications."

Greg Nerpouni deep-dives Oracle Applications Cloud Extensibility in Singapore

Greg stylin' the Cloud UX extensibility deep-dive action in Singapore. (Singapore image via Shy Meei Siow [@shymeeisiow])

Now, the “Why Should I Care?” business propositions for the OAUX PaaS4SaaS Oracle Partner enablement, and the requirements for it, are clear (read them again). If you've seen our roadshow you'll know that part of my PaaS4SaaS story includes “the wisdom of the cloud crowd”.

That wisdom is PaaS and SaaS insight and knowledge from Oracle Partner leaders such as Debra Lilley (@debralilley) of Certus Solutions, who have proven the business proposition, and from cloud influencers and shapers such as Mark Hurd (@markvhurd) and Steve Miranda (@stevenrmiranda).

The latest addition to the celestial book of wisdom comes from Oracle CIO Mark Sunday. Mark, explaining that enterprise applications aren't a siloed concept, underpins the need for partners to integrate fast and shows why SaaS with PaaS is a must-have differentiator when he declares, in his own inimitable way (using HCM by way of example):

“Absolutely without a doubt, the integration of a suite always wins... I think it’s more important than any given function. If you think HCM stands alone as a silo inside of an enterprise, you’re nuts.”

If you're a partner, then, I think you'd be somewhat remiss not to take up the enablement opportunities and make PaaS4SaaS happen for you too!

Ultan O'Broin storytelling and selling the UX message in Beijing

Storytelling that UX. Winning more business with our Cloud enablement. (Beijing image via Shy Meei Siow)

So, if you're a partner in the Asia region (or elsewhere for that matter) who wants to go places, start that enablement conversation by following @usableapps on Twitter, or reach out to us through your Oracle Alliances and Channels or Oracle PartnerNetwork contacts.

Ultan in action in Beijing

Come on Beijing, you know you want that enablement! (Beijing image via Shy Meei Siow)

Oh, did I mention I did some running in Beijing, by way of UX research into smartwatches?

AWS EC2 API tools: Create snapshot & Check Data in snapshot

Surachart Opun - Wed, 2015-05-27 02:38
Having installed the AWS EC2 API tools, it's time for an example of creating and deleting snapshots.
- Creating a snapshot.
ubuntu@ip-x-x-x-x~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
ubuntu@ip-x-x-x-x:~$ ec2-create-snapshot  -d vol-41885f55-$(date +%Y%m%d%H%M) vol-41885f55
SNAPSHOT        snap-b20a8c87   vol-41885f55    pending 2015-05-27T05:46:58+0000                843870022970    8       vol-41885f55-201505270546
ubuntu@ip-x-x-x-x:~$ ec2-describe-snapshots
SNAPSHOT        snap-b20a8c87   vol-41885f55    pending 2015-05-27T05:46:58+0000        0%      843870022970    8       vol-41885f55-201505270546
ubuntu@ip-x-x-x-x:~$ ec2-create-snapshot  -d vol-41885f55-$(date +%Y%m%d%H%M) vol-41885f55
SNAPSHOT        snap-bea0d28b   vol-41885f55    pending 2015-05-27T05:50:11+0000                843870022970    8       vol-41885f55-201505270550
ubuntu@ip-x-x-x-x:~$ ec2-describe-snapshots
SNAPSHOT        snap-b20a8c87   vol-41885f55    completed       2015-05-27T05:46:58+0000        100%    843870022970    8       vol-41885f55-201505270546
SNAPSHOT        snap-bea0d28b   vol-41885f55    completed       2015-05-27T05:50:11+0000        100%    843870022970    8       vol-41885f55-201505270550
- Deleting a snapshot (delete snap-b20a8c87).
ubuntu@ip-x-x-x-x:~$ ec2-describe-snapshots  |head -1| awk '{print $2}'|xargs ec2-delete-snapshot
SNAPSHOT        snap-b20a8c87
ubuntu@ip-x-x-x-x:~$ ec2-describe-snapshots
SNAPSHOT        snap-bea0d28b   vol-41885f55    completed       2015-05-27T05:50:11+0000        100%    843870022970    8       vol-41885f55-201505270550
How do we check the data in "snap-bea0d28b"? From the AWS documentation, it looks like we must create a volume from the snapshot and attach it to an instance.
- Creating a volume, attaching it to the instance, and mounting it.
ubuntu@ip-x-x-x-x:~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
ubuntu@ip-x-x-x-x:~$ ec2-describe-availability-zones
AVAILABILITYZONE        ap-southeast-1a available       ap-southeast-1
AVAILABILITYZONE        ap-southeast-1b available       ap-southeast-1
ubuntu@ip-x-x-x-x:~$ ec2-create-volume -s 8 --snapshot snap-bea0d28b -z ap-southeast-1a
VOLUME  vol-d15087c5    8       snap-bea0d28b   ap-southeast-1a creating        2015-05-27T06:24:00+0000        standard
ubuntu@ip-x-x-x-x:~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
VOLUME  vol-d15087c5    8       snap-bea0d28b   ap-southeast-1a available       2015-05-27T06:24:00+0000        standard
ubuntu@ip-x-x-x-x:~$ sudo fdisk -l
Disk /dev/xvda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065    16771859     8377897+  83  Linux
ubuntu@ip-x-x-x-x:~$ ec2-attach-volume vol-d15087c5 -i  i-d6cdb71a  -d sdf
ATTACHMENT      vol-d15087c5    i-d6cdb71a      sdf     attaching       2015-05-27T06:31:16+0000
ubuntu@ip-x-x-x-x:~$ sudo fdisk -l
Disk /dev/xvda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065    16771859     8377897+  83  Linux
Disk /dev/xvdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
    Device Boot      Start         End      Blocks   Id  System
/dev/xvdf1   *       16065    16771859     8377897+  83  Linux
ubuntu@ip-x-x-x-x:~$

ubuntu@ip-x-x-x-x:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8115168 1212140   6467752  16% /
none                   4       0         4   0% /sys/fs/cgroup
udev              503188      12    503176   1% /dev
tmpfs             101632     332    101300   1% /run
none                5120       0      5120   0% /run/lock
none              508144       0    508144   0% /run/shm
none              102400       0    102400   0% /run/user
ubuntu@ip-x-x-x-x:~$ sudo mount /dev/xvdf1 /mnt/
ubuntu@ip-x-x-x-x:~$ ls -l /mnt/
total 92
drwxr-xr-x   2 root root  4096 May 26 09:35 bin
drwxr-xr-x   3 root root  4096 Mar 25 11:52 boot
drwxr-xr-x   5 root root  4096 Mar 25 11:53 dev
drwxr-xr-x 105 root root  4096 May 26 09:35 etc
drwxr-xr-x   3 root root  4096 May 26 09:07 home
lrwxrwxrwx   1 root root    33 Mar 25 11:51 initrd.img -> boot/initrd.img-3.13.0-48-generic
drwxr-xr-x  21 root root  4096 May 26 09:35 lib
drwxr-xr-x   2 root root  4096 Mar 25 11:50 lib64
drwx------   2 root root 16384 Mar 25 11:53 lost+found
drwxr-xr-x   2 root root  4096 Mar 25 11:50 media
drwxr-xr-x   2 root root  4096 Apr 10  2014 mnt
drwxr-xr-x   2 root root  4096 Mar 25 11:50 opt
drwxr-xr-x   2 root root  4096 Apr 10  2014 proc
drwx------   3 root root  4096 May 26 09:07 root
drwxr-xr-x   3 root root  4096 Mar 25 11:53 run
drwxr-xr-x   2 root root  4096 May 26 09:35 sbin
drwxr-xr-x   2 root root  4096 Mar 25 11:50 srv
drwxr-xr-x   2 root root  4096 Mar 13  2014 sys
drwxrwxrwt   6 root root  4096 May 27 05:38 tmp
drwxr-xr-x  10 root root  4096 Mar 25 11:50 usr
drwxr-xr-x  12 root root  4096 Mar 25 11:52 var
lrwxrwxrwx   1 root root    30 Mar 25 11:51 vmlinuz -> boot/vmlinuz-3.13.0-48-generic
ubuntu@ip-x-x-x-x:~$ ls /mnt/
bin  boot  dev  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  vmlinuz
ubuntu@ip-x-x-x-x:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.2G  6.2G  16% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            492M   12K  492M   1% /dev
tmpfs           100M  332K   99M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            497M     0  497M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/xvdf1      7.8G  1.2G  6.2G  16% /mnt
- After checking the data, we can unmount the volume and remove it.
ubuntu@ip-x-x-x-x:~$ sudo umount /mnt
ubuntu@ip-x-x-x-x:~$
ubuntu@ip-x-x-x-x:~$
ubuntu@ip-x-x-x-x:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8115168 1212140   6467752  16% /
none                   4       0         4   0% /sys/fs/cgroup
udev              503188      12    503176   1% /dev
tmpfs             101632     332    101300   1% /run
none                5120       0      5120   0% /run/lock
none              508144       0    508144   0% /run/shm
none              102400       0    102400   0% /run/user

ubuntu@ip-x-x-x-x:~$ sudo fdisk -l
Disk /dev/xvda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065    16771859     8377897+  83  Linux
Disk /dev/xvdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
    Device Boot      Start         End      Blocks   Id  System
/dev/xvdf1   *       16065    16771859     8377897+  83  Linux
ubuntu@ip-x-x-x-x:~$
ubuntu@ip-x-x-x-x:~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
VOLUME  vol-d15087c5    8       snap-bea0d28b   ap-southeast-1a in-use  2015-05-27T06:24:00+0000        standard
ATTACHMENT      vol-d15087c5    i-d6cdb71a      sdf     attached        2015-05-27T06:31:16+0000        false

ubuntu@ip-x-x-x-x:~$ ec2-detach-volume vol-d15087c5 -i  i-d6cdb71a
ATTACHMENT      vol-d15087c5    i-d6cdb71a      sdf     detaching       2015-05-27T06:31:16+0000
ubuntu@ip-x-x-x-x:~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
VOLUME  vol-d15087c5    8       snap-bea0d28b   ap-southeast-1a in-use  2015-05-27T06:24:00+0000        standard
ATTACHMENT      vol-d15087c5    i-d6cdb71a      sdf     detaching       2015-05-27T06:31:16+0000        false
ubuntu@ip-x-x-x-x:~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
VOLUME  vol-d15087c5    8       snap-bea0d28b   ap-southeast-1a available       2015-05-27T06:24:00+0000        standard
ubuntu@ip-x-x-x-x:~$ ec2-delete-volume vol-d15087c5
VOLUME  vol-d15087c5
ubuntu@ip-x-x-x-x:~$ ec2-describe-volumes
VOLUME  vol-41885f55    8       snap-d00ac9e4   ap-southeast-1a in-use  2015-05-26T09:07:04+0000        gp2     24
ATTACHMENT      vol-41885f55    i-d6cdb71a      /dev/sda1       attached        2015-05-26T09:07:04+0000        true
It looks easy to use and to adapt into a script - for example, something like the hedged sketch below.
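
As a rough sketch of that idea (the volume ID and retention count are placeholders, and the awk field positions assume the ec2-describe-snapshots output format shown above), a nightly snapshot-and-prune script could look like this:

#!/bin/bash
# Hedged example: snapshot a volume, then keep only the newest KEEP snapshots.
VOLUME_ID=vol-41885f55   # placeholder - the volume to back up
KEEP=7                   # placeholder - number of snapshots to retain

# Take a new snapshot, described with the volume id and a timestamp
ec2-create-snapshot -d "${VOLUME_ID}-$(date +%Y%m%d%H%M)" "$VOLUME_ID"

# List this volume's snapshots as "start-time snapshot-id", sort oldest first,
# drop the newest KEEP lines, and delete whatever remains
ec2-describe-snapshots \
  | awk -v vol="$VOLUME_ID" '$1 == "SNAPSHOT" && $3 == vol {print $5, $2}' \
  | sort \
  | head -n -"$KEEP" \
  | awk '{print $2}' \
  | xargs -r -n1 ec2-delete-snapshot
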
Categories: DBA Blogs

AWS EC2 API tools: Installation

Surachart Opun - Wed, 2015-05-27 02:08
The AWS EC2 API tools are a big help when working with Amazon EC2: registering and launching instances, manipulating security groups, and more. Someone asked me to back up an EC2 instance, so I thought I'd use the tools in a backup script. Anyway, there's no need to explain at length how to install the Amazon EC2 API tools on Ubuntu; thanks to the EC2StartersGuide, I followed that link and installed them easily. Additionally, I used this link for more background on Java.
- Adding Repository and Install EC2 API tools.
ubuntu@ip-x-x-x-x:~$ sudo apt-add-repository ppa:awstools-dev/awstools
 Up to date versions of several tools from AWS.
 Use this repository by:
 sudo apt-add-repository ppa:awstools-dev/awstools
 sudo apt-get update
 sudo apt-get install ec2-api-tools
.
.
.
ubuntu@ip-x-x-x-x:~$ sudo apt-get update
ubuntu@ip-x-x-x-x:~$ sudo apt-get install ec2-api-tools
ubuntu@ip-x-x-x-x:~$ sudo apt-get install -y openjdk-7-jre
ubuntu@ip-x-x-x-x:~$ file $(which java)
/usr/bin/java: symbolic link to `/etc/alternatives/java'
ubuntu@ip-x-x-x-x:~$ file /etc/alternatives/java
/etc/alternatives/java: symbolic link to `/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java'
- Adding variables to the ~/.bashrc file. We need an "Access Key" from Security Credentials.
ubuntu@ip-x-x-x-x:~$ vi ~/.bashrc
.
.
.
export EC2_KEYPAIR=***
export EC2_URL=https://ec2.ap-southeast-1.amazonaws.com
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-***.pem
export EC2_CERT=$HOME/.ec2/cert-***.pem
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre

ubuntu@ip-x-x-x-x:~$ source ~/.bashrc
- If everything's all right, it's time to use it.
ubuntu@ip-x-x-x-x:~$ ec2-describe-regions
REGION  eu-central-1    ec2.eu-central-1.amazonaws.com
REGION  sa-east-1       ec2.sa-east-1.amazonaws.com
REGION  ap-northeast-1  ec2.ap-northeast-1.amazonaws.com
REGION  eu-west-1       ec2.eu-west-1.amazonaws.com
REGION  us-east-1       ec2.us-east-1.amazonaws.com
REGION  us-west-1       ec2.us-west-1.amazonaws.com
REGION  us-west-2       ec2.us-west-2.amazonaws.com
REGION  ap-southeast-2  ec2.ap-southeast-2.amazonaws.com
REGION  ap-southeast-1  ec2.ap-southeast-1.amazonaws.com
ubuntu@ip-x-x-x-x:~$ ec2-describe-availability-zones
AVAILABILITYZONE        ap-southeast-1a available       ap-southeast-1
AVAILABILITYZONE        ap-southeast-1b available       ap-southeast-1
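
Before wiring these commands into a script, a quick sanity check along these lines (a hedged sketch) can catch a missing variable early:

#!/bin/bash
# Hedged sketch: verify the EC2 API tools environment before scripting with it.
for v in EC2_URL EC2_PRIVATE_KEY EC2_CERT JAVA_HOME; do
  if [ -z "${!v}" ]; then
    echo "$v is not set - check ~/.bashrc" >&2
    exit 1
  fi
done
# A cheap API round trip to confirm the key and certificate actually work
ec2-describe-regions >/dev/null || echo "EC2 API call failed - check credentials" >&2
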
Categories: DBA Blogs

node-oracledb 0.6.0 is on NPM (Node.js driver for Oracle Database)

Christopher Jones - Tue, 2015-05-26 17:31

Node-oracledb 0.6.0 is now out on NPM. The Oracle Database Node.js driver powers high performance Node.js applications.

There is one feature change in this release: node-oracledb now builds with Node.js 0.10, 0.12 and with io.js. Huge thanks to Richard Natal for his GitHub pull request that added support.
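
If you want to try it, here is a hedged install sketch. The client paths below are illustrative only - point OCI_LIB_DIR and OCI_INC_DIR at your own Oracle client installation if the libraries are not in a default location:

# Assumed environment: an Oracle client is already installed on the machine
export OCI_LIB_DIR=/usr/lib/oracle/12.1/client64/lib     # example path
export OCI_INC_DIR=/usr/include/oracle/12.1/client64     # example path
npm install oracledb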

For more information about node-oracledb see the node-oracledb GitHub page.

Leveraging Oracle Developer Cloud Service in SQL and PL/SQL Projects - lifecycle and team collaboration

Shay Shmeltzer - Tue, 2015-05-26 12:37

Usually my demos are targeted at Java developers, but I realize that a lot of developers out there are not using Java; in the Oracle install base, for example, there is a huge population of PL/SQL developers. This, however, doesn't change what they require from a development platform. They can still benefit from version management and code review functionality. They still need to track bugs/issues and requirements from their users, and they still need to collaborate in a team environment.

So I decided to try out and see what would be the development lifecycle experience for a PL/SQL developer if they'll leverage the services provided by the Oracle Developer Cloud Service - here is a demo that shows a potential experience. 

What you'll see in the demo:

  • Using JDeveloper to create DB diagrams, tables and PL/SQL code
  • Version-managing PL/SQL and SQL with Git
  • Defining a cloud project and adding users
  • Checking code in and branching PL/SQL functions
  • Tracking tasks for developers
  • Reviewing code with team members
  • Automating builds (with Ant) - and almost deploying to the DB

As you can see, it is quite a nice, complete solution that is very quick to set up and use.

It seems that the concepts of continuous integration are not yet common in the world of PL/SQL development. In the demo I use the Ant SQL task to show how you could run a SQL script you created to create the objects directly in the database - which is probably the equivalent of doing a deployment in the world of Java. If you prefer, though, you can use Ant to copy files, zip them, or do many other tasks, such as running automated testing frameworks.

The Ant task I used is this:

  <path id="antclasspath">
    <fileset dir=".">
      <include name="ojdbc7.jar"/>
    </fileset>
  </path>

  <target name="deploy">
    <sql driver="oracle.jdbc.OracleDriver" userid="hr2" password="hr"
         url="jdbc:oracle:thin:@//server:1521/sid" src="./script1.sql"
         classpathref="antclasspath"/>
  </target>

I had both the ojdbc7.jar file and the script file at the root of the project for convenience. 

While my demo uses JDeveloper, you should be able to achieve similar functionality with any tool that supports Git. In fact, if you'd rather not use a tool, you can simply use the command line to check your files directly into the cloud.
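
For instance, a hedged sketch of that command-line route - the remote URL is a placeholder for the Git repository your Oracle Developer Cloud Service project exposes, and script1.sql is the deployment script from the Ant example above:

# Hypothetical example - substitute the Git URL shown in your cloud project
git init
git add script1.sql
git commit -m "Add table creation and PL/SQL deployment script"
git remote add origin https://developer.us.oraclecloud.com/yourproject/myrepo.git
git push origin master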

Categories: Development

Lab Report: Oracle Database on EMC XtremIO. A Compression Technology Case Study.

Kevin Closson - Tue, 2015-05-26 02:26

If you are interested in array-level data reduction services and how such technology mixes with Oracle Database application-level compression (such as Advanced Compression Option), I offer the link below to an EMC Lab Report on this very topic.

To read the entire Lab Report please click the following link:   Click Here.

The following is an excerpt from the Lab Report:

Executive Summary
EMC XtremIO storage array offers powerful data reduction features. In addition to thin provisioning, XtremIO applies both deduplication and compression algorithms to blocks of data when they are ingested into the array. These features are always on and intrinsic to the array. There is no added licensing, no tuning nor configuration involved when it comes to XtremIO data reduction.

Oracle Database also supports compression. The most common form of Oracle Database compression is the Advanced Compression Option—commonly referred to as ACO. With Oracle Database most “options” are separately licensed features and ACO is one such option. As of the publication date of this Lab Report, ACO is licensed at $11,000 per processor core on the database host. Compressing Oracle Database blocks with ACO can offer benefits beyond simple storage savings. Blocks compressed with ACO remain compressed as they pass through the database host. In short, blocks compressed with ACO will hold more rows of data per block. This can be either a blessing or a curse. Allowing Oracle to store more rows per block has the positive benefit of caching more application data in main memory (i.e., the Oracle SGA buffer pool). On the other hand, compacting more data into each block often results in increased block contention.

Oracle offers tuning advice to address this contention in My Oracle Support note 1223705.1. However, the tuning recommendations for reducing block contention with ACO also lower the compression ratios. Oracle also warns users to expect higher CPU overhead with ACO, as per the following statement in the Oracle Database product documentation:

Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.

Application vendors, such as SAP, also produce literature to further assist database administrators in making sensible choices about how and when to employ Advanced Compression Option. The importance of understanding the possible performance impact of ACO is made quite clear in publications such as SAP Note 1436352, which states the following about SAP performance with ACO:

Overall system throughput is not negatively impacted and may improve. Should you experience very long runtimes (i.e. 5-10 times slower) for certain operations (like mass inserts in BW PSA or ODS tables/partitions) then you should set the event 10447 level 50 in the spfile/init.ora. This will reduce the overhead for insertion into compressed tables/partitions.

The SAP note offers further words of caution regarding transaction logging (a.k.a., redo) in the following quote:

Amount of redo data generated can be up to 30% higher

Oracle Database Administrators with prior ACO experience are largely aware of the trade-offs where ACO is concerned. Database Administrators who have customarily used ACO in their Oracle Database deployments may wish to continue to use ACO after adopting EMC XtremIO. For this reason, Database Administrators are interested in learning how XtremIO compression and Advanced Compression Option interact.

This Lab Report offers an analysis of space savings with and without ACO on XtremIO. In addition, a performance characterization of an OLTP workload manipulating the same application data in ACO and non-ACO tablespaces will be covered…please click the link above to continue reading…

 


Filed under: oracle

Temp Table Transformation Cardinality Estimates - 1

Randolf Geist - Mon, 2015-05-25 14:26
Having recently published two notes about the Temp Table Transformation, highlighting the heuristics-based decision and other weaknesses (for example regarding the projection of columns), it's time to publish some more notes about it.

The transformation can also have significant impact on cardinality estimates, both join and single table cardinality.

Looking at the difference in the join cardinality estimates of the following simple example:

create table t1
as
select
rownum as id
, mod(rownum, 10) + 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't1')

alter session set tracefile_identifier = 'temp_trans_join_card';

alter session set events '10053 trace name context forever, level 1';

explain plan for
with
cte as (
select /* inline */ id + 1 as id from t1 t
where 1 = 1
)
select /*+
--opt_estimate(@"SEL$2" join("A"@"SEL$2" "B"@"SEL$2") rows=1000)
no_merge(a) no_merge(b)
*/ * from cte a, cte b
where a.id = b.id
;

alter session set events '10053 trace name context off';

-- 11.2.0.x Plan with TEMP transformation
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 26 |
| 1 | TEMP TABLE TRANSFORMATION | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D660C_27269C | | |
| 3 | TABLE ACCESS FULL | T1 | 1000 | 4000 |
|* 4 | HASH JOIN | | 1 | 26 |
| 5 | VIEW | | 1000 | 13000 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660C_27269C | 1000 | 4000 |
| 7 | VIEW | | 1000 | 13000 |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660C_27269C | 1000 | 4000 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("A"."ID"="B"."ID")

-- 11.2.0.x Plan with INLINE hint
----------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 10000 | 253K|
|* 1 | HASH JOIN | | 10000 | 253K|
| 2 | VIEW | | 1000 | 13000 |
| 3 | TABLE ACCESS FULL| T1 | 1000 | 4000 |
| 4 | VIEW | | 1000 | 13000 |
| 5 | TABLE ACCESS FULL| T1 | 1000 | 4000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("A"."ID"="B"."ID")
the following becomes obvious:

- There are vastly different cardinality estimates possible depending on whether the transformation gets used or not. In principle due to the NO_MERGE hints used, the transformation shouldn't have any impact on the estimates, but it does

- Looking at the optimizer trace file some information seems to get lost when the transformation gets used, in particular column related statistics

- This information loss, like in the example here, can lead to join cardinality estimates of 0 (rounded to 1 in the plan output)

- And even worse, at least in pre-12c versions, such a 0 cardinality estimate can't be corrected via OPT_ESTIMATE hints, since somehow the correction gets ignored/lost although being mentioned in the optimizer trace:

11.2.0.1:
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Join Card: 0.000000 = = outer (1000.000000) * inner (1000.000000) * sel (0.000000)
>> Join Card adjusted from 0.000000 to: 1000.000000, prelen=2
Adjusted Join Cards: adjRatio=1.00 cardHjSmj=0.000000 cardHjSmjNPF=0.000000 cardNlj=0.000000 cardNSQ=1000.000000 cardNSQ_na=0.000000
Join Card - Rounded: 1 Computed: 0.00
The behaviour regarding the OPT_ESTIMATE hint changes in 12c, but then there are other oddities introduced in 12c that are not there in pre-12c - have a look at the "Query Block" section when using the INLINE variant of the query - there are two identical fully qualified object names, clearly a bug, making hinting using global hint syntax impossible for that query block.

Although my simple example here can be corrected via extended statistics on the join column expression used in the CTE query (a sketch follows at the end of this post), my point here is that, depending on whether the transformation gets used or not, vastly different and extreme cardinality estimates are possible - and those extreme cases can't even be corrected in pre-12c versions.

For example I recently had a real life case where two columns were joined that had a significant number of NULL values, one coming from a temp table transformation row source. Without the transformation the join cardinality estimates were reasonable, but the transformation again lead to such a 0 cardinality estimate (that couldn't be corrected via a (correctly specified) OPT_ESTIMATE hint), ruining the whole plan.
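
For completeness, here is a sketch of the extended statistics fix mentioned above for the simple example: create an extension on the expression used in the CTE's join column, then re-gather statistics so the new virtual column gets populated (the expression text must match the one used in the query):

-- Sketch: extended statistics on the CTE's join expression
select dbms_stats.create_extended_stats(null, 't1', '(id + 1)') from dual;

exec dbms_stats.gather_table_stats(null, 't1')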

Oracle BI Publisher 11.1.1.9.0 is available !!!

Tim Dexter - Mon, 2015-05-25 14:12

Hi Everyone,

I am happy to announce that Oracle BI Publisher 11.1.1.9.0 is released, although I admit that this is almost week-old news now, and I am sure some of you may already know it from the BI Publisher homepage on OTN or from other sources. My sincere apologies for the delay.

Thank you, Tim, for helping me get back into this blog membership and for being patient while I put the word out. My activity on the blog has been so sparse in the past that I am as good as a new member here. When I tried to log in last week, I was greeted with this message:

"Sorry, you do not have the privileges necessary to access the page you requested. This system is available to Oracle Employees only. Oracle Employees who would like to request a blog account should click here."

So I had to start all over and get fresh access created. Thanks to Tim and Phil from IT support for helping me with this. I will now make sure to use this space more often and share more features, tips and tricks.

Oracle BI Publisher 11.1.1.9.0 went GA on May 19th, and you can get the download, documentation, certification matrix and release notes here at the BI Publisher home page on OTN. Here is a quick snapshot of the new features in this release. The download is also available at the Oracle Software Delivery Cloud site. The documentation page has been given a fresh new structure: in the left navigation you will notice two menu items, "Tasks" and "Books". "Tasks" provides quick references to role-based activities under sub-menus such as "View & Publish" and "Design Reports", while "Books" takes you to the complete set of books, where you can select the Administrator's Guide, Developer's Guide, Data Modeling Guide, Report Designer's Guide and User's Guide for BI Publisher. If you are looking for a feature and do not find it under "Tasks", check under "Books" or use the search option.

Stay tuned for more updates on new features. I wish you a good time exploring them!

Categories: BI & Warehousing


Recover Oracle Undo Tablespace without Backup

Pakistan's First Oracle Blog - Sun, 2015-05-24 21:10
Woke up to an issue with an Oracle 10.2.0 database on Linux complaining about an undo datafile on startup.


sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri May 22 20:11:07 2015

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup pfile='init.ora'
ORACLE instance started.

Total System Global Area 2801795072 bytes
Fixed Size                  2075504 bytes
Variable Size            1275069584 bytes
Database Buffers         1509949440 bytes
Redo Buffers               14700544 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 244 - see DBWR trace file
ORA-01110: data file 244: '/test/ORADATATEST/test/test_undo2a.dbf'


SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      MANUAL
undo_retention                       integer     21600
undo_tablespace                      string      test_UNDO02
SQL>



SQL> drop tablespace test_UNDO02  including contents and datafiles;
drop tablespace test_UNDO02  including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU4$' found, terminate dropping tablespace

 Check for active rollback segments:

 select segment_id, segment_name,status,tablespace_name from dba_rollback_segs where status not in ('ONLINE','OFFLINE');



Set the following parameter in the pfile.



*._offline_rollback_segments=(_SYSSMU4$)


And now try dropping the UNDO tablespace again.

drop tablespace test_UNDO02  including contents and datafiles;

Tablespace dropped.

Now create a new UNDO tablespace:

create UNDO tablespace test_UNDO05 datafile '/test/oradata18/test/test_undo05_file1.dbf' size 500m autoextend on next 1m maxsize 1500m;


Tablespace created.




SQL> startup pfile='inittest.ora'
ORACLE instance started.

Total System Global Area 2801795072 bytes
Fixed Size                  2075504 bytes
Variable Size            1392510096 bytes
Database Buffers         1392508928 bytes
Redo Buffers               14700544 bytes
Database mounted.
Database opened.

All good now.
Categories: DBA Blogs

Fixing Super LOV in Universal Theme

Dimitri Gielis - Thu, 2015-05-21 15:08
When you migrate to APEX 5.0 and the Universal Theme you might see that some plugins are not behaving correctly anymore. In this post I'll discuss the Enkitec Modal LOV plugin.

When I ran the plugin in my APEX 5.0 app with the Universal Theme it looked like this:


There's too much space in the search bar and the close button is not showing up with an icon.

Here are the steps I took to fix it. First you need to download the CSS file of the style you use and the JS file from the plugin in Shared Components. I use the smoothness.css style most of the time, so I'll use that as an example.

To fix the close icon, add !important to the png:

.ek-ml .ui-state-default .ui-icon {
  background-image: url(ui-icons_888888_256x240.png) !important;
}

Note: you can do that for all of those png references on lines 467 through 489.

To fix the height, add following css to smoothness.css:

.superlov-button-container {
  height:50px;
}

And finally in enkitec_modal_lov.min.js change the height of the searchContainer from a dynamic height (r) to 0px:

$searchContainer.css("height","0px")

Next upload those files again to the plugin.

When you run the plugin it should give you this result:


Now the bigger question is: do we still need that plugin? In APEX 5.0 there are native Modal Pages, so you could create an Interactive Report and set the page as a Modal Page. Next you can hook that up to a button or link, and you've just built your own Modal LOV.

I still like to use the plugin at the moment (as it's just one item on the page), but it could use a refresh to make it look nicer and more in line with the Universal Theme.

Wonder what you think - would you build your own Modal LOV in APEX 5.0, or would you still prefer to use a plugin?

Categories: Development

AeroGear Germany tour 2015

Matthias Wessendorf - Thu, 2015-05-21 05:50

Over the last three days Sébastien Blanc and I have been touring through Germany to visit a few JUGs.

The talks

We had the same setup for every evening: first, Sebi talked about JBoss Tools and Forge and showed how to quickly create a Java EE based backend. Afterwards the audience saw how to create a good-looking Apache Cordova mobile app, which he also generated using Forge! At the end the solution was protected using Keycloak. Then I gave a talk about push notifications in general. During the talk I ran various demos, like our AeroDoc server and iOS client, to demonstrate rich push interaction between different clients and servers using geolocation and user segmentation. I was also happy that I could demo some hot new UI features by showing off code from different pull requests.

The cities

We had three different cities on the agenda, and the start of the tour was Berlin. Unfortunately, I forgot my power plug at home… :sob: But on arriving in Berlin I could borrow one from Simon Willnauer. THANKS DUDE! :heart:

Berlin

The event took place at VOTUM GmbH, and it was a good start to our tour. We had a packed room and lots of questions during both talks, so we ended up talking a bit longer. Afterwards there was time to socialize with a drink or two. Here are some impressions from the evening.

Dortmund

After we arrived in Dortmund, Hendrik Ebbers gave us a ride to the BVB training center. It was funny that one guy thought Sebi looks like the coach (Juergen Klopp) :joy: However, the real Juergen was a few hundred meters away, watching the team do their training. Before the event started we did some preparation at Hendrik's awesome home office. Here is a picture of me talking about the push server, showing the latest and greatest from a pending pull request by Lukáš Fryč. The talks went well, and while we enjoyed some pizza we had some good conversations with the attendees!

Stuttgart

On our way to the JUG Stuttgart we were affected by the strike, which turned out to be a very positive thing: we got an almost empty ICE train :v: This time the talks took place in the Stuttgart Red Hat office, and Heiko was already awaiting us at the S-Bahn station. After a little introduction by Heiko it was again Sebi's part before I took over, talking about push.

Summary

It was a great tour, and lots of questions during and after the talks showed we had the right content for the different events. I am already looking forward to seeing some of the attendees getting in touch with our AeroGear community for more debates! On the last evening Heiko, Sebi and I went for a few :beers: in Stuttgart. Traveling back home, I had another encounter with the strike, which was again very positive: they had to change the train and all seats were opened up, so I ended up sitting in a nice and comfortable seat in first class for free :stuck_out_tongue_winking_eye:


What is the APEX Open Mic Night at Kscope15?

Joel Kallman - Wed, 2015-05-20 21:32
At the upcoming ODTUG Kscope15 conference, on Monday night, June 22, there will be the Monday Community Events.  The Community Event for the Oracle Application Express track at Kscope15 is the ever-popular Open Mic Night.  Without a doubt, this is one of my favorite events at the Kscope conference.

An Oracle employee sent me an email today, inquiring about the Open Mic Night.  This employee, who is a user of Oracle Application Express at Oracle, will be attending the Kscope conference for the very first time.  As I replied to him in email:

Open Mic night will be on Monday evening, from 8:00P - 10:00P.  You would think that most people would call it a day (after a long day), but it's usually a packed room.

Open Mic night is the attendees' night to shine in front of their fellow attendees. People are given roughly 5 - 10 minutes to show off what they've done with APEX - it's timed. No PPT. If you show a PowerPoint, you will be booed. You're on stage, you plug your laptop into a projector, and you present on a big screen. It's just a great way for people in the #orclapex community to proudly show what they've accomplished. I've seen some extraordinarily creative and professional solutions from our customers.

The time goes by fast, so you have to come prepared.  And the Oracle APEX team usually sponsors the beer for this event, so it can get a bit rowdy. ;)

If you're at Kscope15 for the APEX track, or even half-curious about APEX, it's a "must attend" event.

Here's a shot from last year's Open Mic Night:



BIP scheduleReport with Parameters

Tim Dexter - Wed, 2015-05-20 15:37

I have just spent an hour or so getting a sample scheduleReport web service call working with parameter values. There are a lot of examples out there, but none I have found set the parameters. Our doc is a little light on details about how to set them up :) In lieu of that, here's this!

        // Set the parameter values for the report. In this example we have
        // 'dept' and 'emp' parameters. We could easily query the params dynamically
 
        //Handle 'dept' parameter
        ParamNameValue deptParamNameVal = new ParamNameValue();
        deptParamNameVal= new ParamNameValue();
        deptParamNameVal.setName("dept");
        // Create the string array to hold the parameter value(s)
        ArrayOfXsdString deptVal = new ArrayOfXsdString();
        // For individual values or multiples, add values to the 
        // string array e.g. 10,20,30
        deptVal.getItem().add("10");
        deptVal.getItem().add("20");
        deptVal.getItem().add("30");
 
        // Asterisk used for a null value ie 'All'
        //deptVal.getItem().add("*");

        // add the array to the parameter object
        deptParamNameVal.setValues(deptVal);
 
        //Handle 'emp' parameter
        ParamNameValue empParamNameVal = new ParamNameValue();
        empParamNameVal= new ParamNameValue();
        empParamNameVal.setName("emp");
        ArrayOfXsdString empVal = new ArrayOfXsdString();
        // For individual values or multiples, add values to the string array 
        // empVal.getItem().add("Jennifer Whalen");
        // empVal.getItem().add("Michael Hartstein");

        // Asterisk used for a null value ie 'All'
        empVal.getItem().add("*");
        empParamNameVal.setValues(empVal);
 

        // add parameter values to parameter array        
        ArrayOfParamNameValue paramArr = new ArrayOfParamNameValue();
        paramArr.getItem().add(deptParamNameVal);
        paramArr.getItem().add(empParamNameVal);
 
        //Now add array to values obj
        ParamNameValues pVals = new ParamNameValues();
        pVals.setListOfParamNameValues(paramArr);

 The pVals object can then be added to the report request object.

        req.setParameterNameValues(pVals);

Hopefully you can extrapolate to your code. The JDev application is available here; unzip and open the application.
Just the schedule report class is available here.

Categories: BI & Warehousing
