
Feed aggregator

Partner Webcast – Managing Exadata with Oracle Enterprise Manager 12c

Oracle Enterprise Manager 12c is system management software that delivers centralized monitoring, administration, and life cycle management functionality for the complete Oracle IT infrastructure,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

OBIEE SampleApp in The Cloud: Importing VirtualBox Machines to AWS EC2

Rittman Mead Consulting - Wed, 2014-09-10 01:40

Virtualisation has revolutionised how we work as developers. A decade ago, using new software would mean trying to find room on a real tin server to install it, hoping it worked, and if it didn’t, picking apart the pieces and probably leaving the server in a worse state than it was to begin with. Nowadays, we can just launch a virtual machine to get a clean environment, and if it doesn’t work – trash it and start again.
The sting in the tail of virtualisation is that full-blown VMs are heavy – on disk we need several GB just for a blank OS, and dozens of GB for a software stack such as Fusion Middleware (FMW), and the host machine needs the RAM and CPU to support it all too. Technologies such as Linux Containers go some way to making things lighter by abstracting out a chunk of the OS, but this isn’t something that’s reached the common desktop yet.

So whilst VMs are awesome, it’s not always practical to maintain a library of all of them on your local laptop (even 1TB drives fill up pretty quickly), nor will your laptop have the grunt to run more than one or two VMs at most. VMs like this are also local to your laptop or server – but wouldn’t it be neat if you could duplicate that VM and make a server based on it instantly available to anyone in the world with an internet connection? And that’s where The Cloud comes in, because it enables us to store as much data as we can eat (and pay for), and provision “hardware” at the click of a button for just as long as we need it, accessible from anywhere.

Here at Rittman Mead we make extensive use of Amazon Web Services (AWS) and their Elastic Computing Cloud (EC2) offering. Our website runs on it, our training servers run on it, and it scales just as we need it to. A class of 3 students is as easy to provision for as a class of 24 – no hunting around for spare servers or laptops, no hardware sat idle in a cupboard as spare capacity “just in case”.

One of the challenges that we’ve faced up until now is that all servers have had to be built from scratch in the cloud. Obviously we work with development VMs on local machines too, so wouldn’t it be nice if we could build VMs locally and then push them to the cloud? Well, now we can. Amazon offer a route to import virtual machines, and in this article I’m going to show how that works. I’ll use the superb SampleApp v406 VM that Oracle provide, because it is a great real-life example of a VM that is extremely useful but that many developers find too memory-intensive to run on their local machines all the time.

This tutorial is based on exporting a Linux guest VM from a Linux host server. A Windows guest probably behaves differently, but a Mac or Windows host should work fine since VirtualBox is supported on both. The specifics are based on SampleApp, but the process should be broadly the same for all VMs. 

Obtain the VM

We’re going to use SampleApp, which can be downloaded from Oracle.

  1. Download the six-part archive from http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
  2. Verify the md5 checksums against those published on the download page (a scripted check is sketched after this list):
    [oracle@asgard sampleapp406]$ ll
    total 30490752
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 01:33 SampleAppv406.zip.001
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 01:30 SampleAppv406.zip.002
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:03 SampleAppv406.zip.003
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:34 SampleAppv406.zip.004
    -rw-r--r-- 1 oracle oinstall 5242880000 Sep  9 02:19 SampleAppv406.zip.005
    -rw-r--r-- 1 oracle oinstall 4977591522 Sep  9 02:53 SampleAppv406.zip.006
    [oracle@asgard sampleapp406]$ md5sum *
    2b9e11f69ada5f889088dd74b5229322  SampleAppv406.zip.001
    f8a1a5ae6162b20b3e9c6c888698c071  SampleAppv406.zip.002
    68438cfea87e8d3a2e2f15ff00dadf12  SampleAppv406.zip.003
    b71d9ace4f75951198fc8197da1cfe62  SampleAppv406.zip.004
    4f1a5389c9e0addc19dce6bbc759ec20  SampleAppv406.zip.005
    2c430f87e22ff9718d5528247eff2da4  SampleAppv406.zip.006
  3. Unpack the archive using 7zip — the instructions for SampleApp are very clear that you must use 7zip, and not another archive tool such as WinZip.
    [oracle@asgard sampleapp406]$ time 7za x SampleAppv406.zip.001
    7-Zip (A) [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
    p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,80 CPUs)
    
    Processing archive: SampleAppv406.zip.001
    
    Extracting SampleAppv406Appliance
    Extracting SampleAppv406Appliance/SampleAppv406ga-disk1.vmdk
    Extracting SampleAppv406Appliance/SampleAppv406ga.ovf
    
    Everything is Ok
    
    Folders: 1
    Files: 2
    Size: 31191990916
    Compressed: 5242880000
    
    real 1m53.685s
    user 0m16.562s
    sys 1m15.578s
  4. Because we need to change a couple of things on the VM first (see below), we’ll have to import the VM to VirtualBox so that we can boot it up and make these changes. You can import using the VirtualBox GUI, or as I prefer, the VBoxManage command line interface. I like to time all these things (just because, numbers), so stick a time command on the front:
    time VBoxManage import --vsys 0 --eula accept SampleAppv406Appliance/SampleAppv406ga.ovf

    This took 12 minutes or so, but that was on a high-spec system, so YMMV.
    [...]
    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
    Successfully imported the appliance.
    
    real    12m15.434s
    user    0m1.674s
    sys     0m2.807s
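
As mentioned in step 2, if you would rather script the checksum comparison than eyeball it, a minimal sketch is below. The checksum values are the published ones copied from the md5sum listing above, and the filename SampleAppv406.md5 is an arbitrary choice.

cat > SampleAppv406.md5 <<'EOF'
2b9e11f69ada5f889088dd74b5229322  SampleAppv406.zip.001
f8a1a5ae6162b20b3e9c6c888698c071  SampleAppv406.zip.002
68438cfea87e8d3a2e2f15ff00dadf12  SampleAppv406.zip.003
b71d9ace4f75951198fc8197da1cfe62  SampleAppv406.zip.004
4f1a5389c9e0addc19dce6bbc759ec20  SampleAppv406.zip.005
2c430f87e22ff9718d5528247eff2da4  SampleAppv406.zip.006
EOF
# md5sum -c re-computes each file's checksum and reports OK or FAILED per file
md5sum -c SampleAppv406.md5
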
Preparing the VM

Importing Linux VMs to Amazon EC2 will only work if the kernel is supported, which according to an AWS blog post includes Red Hat Enterprise Linux 5.1 – 6.5. Whilst SampleApp v406 is built on Oracle Linux 6.5 (which isn’t listed by AWS as supported), we have the option of telling the VM to use a kernel that is Red Hat Enterprise Linux compatible (instead of the default Unbreakable Enterprise Kernel – UEK). There are some other pre-requisites that you need to check if you’re trying this with your own VM, including a network adaptor configured to use DHCP. The aforementioned blog post has details.
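
A quick way to check which kernel a VM is currently booted into is uname; the interpretation in the comment below is an assumption based on typical Oracle Linux naming, not output captured from SampleApp.

# A kernel version containing "uek" (e.g. ending in el6uek.x86_64) is the Unbreakable
# Enterprise Kernel, which the EC2 import does not support; one without "uek" is the
# Red Hat compatible kernel
uname -r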

  1. Boot the VirtualBox VM, which should land you straight in the desktop environment, logged in as the oracle user.
  2. We need to modify a file as root (superuser). Here’s how to do it graphically, or use vi if you’re a real programmer:
    1. Open a Terminal window from the toolbar at the top of the screen
    2. Enter
      sudo gedit /etc/grub.conf

      The sudo bit is important, because it tells Linux to run the command as root. (I’m on an xkcd-roll here: 1, 2)

    3. In the text editor that opens, you will see a header to the file and then a set of repeating sections beginning with title. These are the available kernels that the machine can run under. The default is 3, which is zero-based, so it’s the fourth title section. Note that the kernel version details include uek which stands for Unbreakable Enterprise Kernel – and is not going to work on EC2.
    4. Change the default to 0, so that we’ll instead boot to a Red Hat Compatible Kernel, which will work on EC2 (a non-interactive sketch of this change follows after this list).
    5. Save the file
  3. Optional steps:
    1. Whilst you’ve got the server running, add your SSH key to the image so that you can connect to it easily once it is up on EC2. For more information about SSH keys, see my previous blog post here, and a step-by-step for doing it on SampleApp here.
    2. Disable non-SSH key logins (in /etc/ssh/sshd_config, set PasswordAuthentication no and PubkeyAuthentication yes), so that your server, once on EC2, is less vulnerable to attack – this is also covered in the sketch after this list. It is particularly important if you’re using the stock image with Admin123 as the root password.
    3. Set up screen, and OBIEE and the database as a Linux service, both covered in my article here.
  4. Shutdown the instance by entering this at a Terminal window:

    sudo shutdown -h now
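
For reference, here is a non-interactive sketch of the same changes, equivalent to the manual edits in steps 2 and 3 and to be run before the shutdown in step 4. It assumes the stock SampleApp v406 files – i.e. that /etc/grub.conf contains a default=3 line as described above, and that sshd_config already has PasswordAuthentication and PubkeyAuthentication entries (possibly commented out) – so check the files first and keep the backups.

# Boot the Red Hat compatible kernel instead of UEK (assumes default=3 as described above)
sudo cp /etc/grub.conf /etc/grub.conf.bak
sudo sed -i 's/^default=3/default=0/' /etc/grub.conf

# Optional hardening: key-based SSH logins only
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PubkeyAuthentication .*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sudo service sshd restart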

Export the VirtualBox VM to Amazon EC2

Now we’re ready to really get going. The first step is to export the VirtualBox VM to a format that Amazon EC2 can work with. Whilst they don’t explicitly support VMs from VirtualBox, they do support the VMDK format – which VirtualBox can create. You can do the export from the graphical interface, or as before, from the command line:

time VBoxManage export "OBIEE SampleApp v406" --output OBIEE-SampleApp-v406.ovf

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully exported 1 machine(s).

real    56m51.426s
user    0m6.971s
sys     0m12.162s

If you compare the result of this to what we downloaded from Oracle it looks pretty similar – an OVF file and a VMDK file. The only difference is that the VMDK file is updated with the changes we made above, including the modified kernel settings which are crucial for the success of the next step.

[oracle@asgard sampleapp406]$ ls -lh
total 59G
-rw------- 1 oracle oinstall  30G Sep  9 10:55 OBIEE-SampleApp-v406-disk1.vmdk
-rw------- 1 oracle oinstall  15K Sep  9 09:58 OBIEE-SampleApp-v406.ovf

We’re ready now to get all cloudy. For this, you’ll need:

  1. An AWS account
    1. You’ll also need your AWS account’s Access Key and Secret Key
  2. AWS EC2 command line tools installed, along with a Java Runtime Environment (JRE) 1.7 or greater (a quick sanity check of the install is sketched after this list):

    wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
    sudo mkdir /usr/local/ec2
    sudo unzip ec2-api-tools.zip -d /usr/local/ec2
    # You might need to fiddle with the following paths and version numbers: 
    sudo yum install -y java-1.7.0-openjdk.x86_64
    cat >> ~/.bash_profile <<'EOF'
    export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64/jre"
    export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.7.1.1/
    export PATH=$PATH:$EC2_HOME/bin
    EOF

  3. Set your credentials as environment variables:
    export AWS_ACCESS_KEY=xxxxxxxxxxxxxx
    export AWS_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxx
  4. Ideally a nice fat pipe to upload the VM file over, because at 30GB it is not trivial (not in 2014, anyway).
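
With the tools unpacked and the environment variables in place, a quick sanity check is worth doing before the long upload. As far as I recall, ec2-version ships with the API tools; if it is not found, re-check the paths and version numbers used above.

# Pick up the new environment variables in the current shell
source ~/.bash_profile
# Both of these should print version information rather than "command not found"
java -version
ec2-version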

What’s going to happen now is we use an EC2 command line tool to upload our VMDK (virtual disk) file to Amazon S3 (a storage platform), from where it gets converted into an EBS volume (Elastic Block Store, i.e. an EC2 virtual disk), and from there attached to a new EC2 instance (a “server”/”VM”).

Before we can do the upload we need an S3 “bucket” to put the disk image in that we’re uploading. You can create one from https://console.aws.amazon.com/s3/. In this example, I’ve got one called rmc-vms – but you’ll need your own.

Once the bucket has been created, we build the command line upload statement using ec2-import-instance:

time ec2-import-instance OBIEE-SampleApp-v406-disk1.vmdk --instance-type m3.large --format VMDK --architecture x86_64 --platform Linux --bucket rmc-vms --region eu-west-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY

Points to note:

  • m3.large is the spec for the VM. You can see the available list here. In the AWS blog post it suggests only a subset will work with the import method, but I’ve not hit this limitation yet.
  • region is the AWS Region in which the EBS volume and EC2 instance will be built. I’m using eu-west-1 (Ireland), and it makes sense to use the one geographically closest to where you or your users are located. Still waiting for uk-yorks-1…
  • architecture and platform relate to the type of VM you’re importing.

The upload process took just over 45 minutes for me, and that’s from a data centre with a decent upload:

[oracle@asgard sampleapp406]$ time ec2-import-instance OBIEE-SampleApp-v406-disk1.vmdk --instance-type m3.large --format VMDK --architecture x86_64 --platform Linux --bucket rmc-vms --region eu-west-1 --owner-akid $AWS_ACCESS_KEY --owner-sak $AWS_SECRET_KEY
Requesting volume size: 200 GB
TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       0       Status       active  StatusMessage   Pending : Downloaded 0
Creating new manifest at rmc-vms/d77672aa-0e0b-4555-b368-79d386842112/OBIEE-SampleApp-v406-disk1.vmdkmanifest.xml
Uploading the manifest file
Uploading 31191914496 bytes across 2975 parts
0% |--------------------------------------------------| 100%
   |==================================================|
Done
Average speed was 11.088 MBps
The disk image for import-i-fh08xcya has been uploaded to Amazon S3
where it is being converted into an EC2 instance.  You may monitor the
progress of this task by running ec2-describe-conversion-tasks.  When
the task is completed, you may use ec2-delete-disk-image to remove the
image from S3.

real    46m59.871s
user    10m31.996s
sys     3m2.560s

Once the upload has finished Amazon automatically converts the VMDK (now residing on S3) into an EBS volume, and then attaches it to a new EC2 instance (i.e. a VM). You can monitor the status of this task using ec2-describe-conversion-tasks, optionally filtered on the TaskId returned by the import command above:

ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       3898992128
Status  active  StatusMessage   Pending : Downloaded 31149971456

Now is an ideal time to mention, as a side note, the Linux utility watch, which simply re-issues a command for you every x seconds (2 by default). This way you can leave a window open and keep an eye on the progress of what is going to be a long-running job.

watch ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya

Every 2.0s: ec2-describe-conversion-tasks --region eu-west-1 import-i-fh08xcya                                                             Tue Sep  9 12:03:24 2014

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  active  StatusMessage   Pending InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted       5848511808
Status  active  StatusMessage   Pending : Downloaded 31149971456

And whilst we’re at it, if you’re using a remote server to do this (as I am, to take advantage of the large bandwidth), you will find screen invaluable for keeping tasks running and being able to reconnect at will. You can read more about screen and watch here.
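
For reference, the basic screen workflow for a long-running upload like this looks something like the following; the session name is arbitrary.

screen -S vm-upload        # start a named session and run the upload inside it
# detach with Ctrl-A then D - the job keeps running on the server
screen -ls                 # list sessions if you forget the name
screen -r vm-upload        # reattach later, even from a new ssh connection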

So back to our EC2 import job. To start with, the task will be Pending: (NB unlike lots of CLI tools, you read the output of this one left-to-right, rather than as columns with headings)

$ ec2-describe-conversion-tasks --region eu-west-1
TaskType        IMPORTINSTANCE  TaskId  import-i-ffvx6z86       ExpirationTime  2014-09-12T15:32:01Z    Status  active  StatusMessage   Pending InstanceID      i-b2245ef2
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5021144064      VolumeSize      60      AvailabilityZone        eu-west-1a      ApproximateBytesConverted       4707330352      Status  active  StatusMessage   Pending : Downloaded 5010658304

After a few moments it gets underway, and you can see a Progress percentage indicator (scroll right in the code snippet below to see it):

TaskType        IMPORTINSTANCE  TaskId  import-i-fgr0djcc       ExpirationTime  2014-09-15T15:39:28Z    Status  active  StatusMessage   Progress: 53%   InstanceID      i-c7692e87
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5582545920      VolumeId        vol-f71368f0    VolumeSize      20      AvailabilityZone        eu-west-1a      ApproximateBytesConverted       5582536640      Status  completed

Note that at this point you’ll also see an Instance in the EC2 list, but it won’t launch (no attached disk – because it’s still being imported!).

If something goes wrong you’ll see the Status as cancelled, such as in this example here where the kernel in the VM was not a supported one (observe it is the UEK kernel, which isn’t supported by Amazon):

TaskType        IMPORTINSTANCE  TaskId  import-i-ffvx6z86       ExpirationTime  2014-09-12T15:32:01Z    Status  cancelled       StatusMessage   ClientError: Unsupported kernel version 2.6.32-300.32.1.el5uek       InstanceID      i-b2245ef2
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   5021144064      VolumeId        vol-91b1c896    VolumeSize      60      AvailabilityZone        eu-west-1a      ApproximateBytesConverted    5021128688      Status  completed

After an hour or so, the task should complete:

TaskType        IMPORTINSTANCE  TaskId  import-i-fh08xcya       ExpirationTime  2014-09-16T10:07:44Z    Status  completed       InstanceID      i-b07d3bf0
DISKIMAGE       DiskImageFormat VMDK    DiskImageSize   31191914496     VolumeId        vol-a383f8a4    VolumeSize      200     AvailabilityZone        eu-west-1a      ApproximateBytesConverted    31191855472     Status  completed

At this point you can remove the VMDK from S3 (and should do, or else you’ll continue to be charged for it), following the instructions for ec2-delete-disk-image.
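
For what it’s worth, the clean-up call looks something like the sketch below, but the flag names are my assumption based on the other ec2-* tools rather than something I have verified – check ec2-delete-disk-image --help or the linked instructions before relying on it.

# Hypothetical invocation - verify the flags against ec2-delete-disk-image --help
ec2-delete-disk-image -t import-i-fh08xcya --region eu-west-1 \
  -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY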

Booting the new server on EC2

Go to your EC2 control panel, where you should see an instance (EC2 term for “server”) in Stopped state and with no name.

Select the instance, and click Start on the Actions menu. After a few moments a Public IP will be shown in the details pane. But, we’re not home free quite yet…read on.

Firewalls

So this is where it gets a bit tricky. By default, the instance will have launched with Amazon’s firewall (known as a Security Group) in place, which – unless you have an existing AWS account and have modified the default security group’s configuration – is only open on port 22, for SSH traffic.

You need to head over to the Security Group configuration page, which can be reached in several ways; the easiest is to click the security group name in the instance details pane:

Click on the Inbound tab and then Edit, and add “Custom TCP Rule” for the following ports:

  • 7780 (OBIEE front end)
  • 7001 (WLS Console / EM)
  • 5902 (oracle VNC)

You can make things more secure by allowing access to the WLS admin (7001) and VNC port (5902) to a specific IP address or range only.
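
If you prefer the command line, the same rules can be added with ec2-authorize from the API tools installed earlier; the security group name and the IP address below are placeholders for your own values.

# "default" and 203.0.113.10/32 are placeholders - substitute your own group and address
ec2-authorize default -P tcp -p 7780 -s 0.0.0.0/0 --region eu-west-1
ec2-authorize default -P tcp -p 7001 -s 203.0.113.10/32 --region eu-west-1
ec2-authorize default -P tcp -p 5902 -s 203.0.113.10/32 --region eu-west-1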

Whilst we’re talking about security: your server is now open to the internet and all the nefarious persons out there, so you’ll want to harden it, not least by resetting all the passwords to ones which aren’t publicly documented in the SampleApp user documentation!

Once you’ve updated your Security Group, you can connect to your server! If you installed the OBIEE and database auto start scripts (and if not, why not??) you should find OBIEE running just nicely on http://[your ip]:7780/analytics – note that the port is 7780, not 9704.


If you didn’t install the script, you will need to start the services manually per the SampleApp documentation. To connect to the server you can ssh (using Terminal, PuTTY, etc) to the server or connect on VNC (Admin123 is the password). For VNC clients try Screen Share on Macs (installed by default), or RealVNC on Windows.
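
As a minimal example of the ssh route, assuming you added your public key to the oracle user while the VM was still local (the key filename here is a placeholder):

ssh -i ~/.ssh/sampleapp_key oracle@[your ip]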

Caveats & Disclaimers
  • Running a server on AWS EC2 costs real money, so watch out. Once you’ve put your credit card details in, Amazon will continue to charge your card whilst there are chargeable items on your account (EBS volumes, instances – running or not – and so on). You can get an idea of the scale of charges here.
  • As mentioned above, a server on the open internet is a lot more vulnerable than one virtualised on your local machine. You will get poked and probed, usually by automated scripts looking for open ports, weak passwords, and so on. SampleApp is designed to open the toybox of a pimped-out OBIEE deployment to you; it is not “hardened”, and you risk learning the hard way about the need for hardening if you’re not careful.
Cloning

Amazon EC2 supports taking a snapshot of a server, either for backup/rollback purposes or spinning up as a clone, using an Amazon Machine Image (AMI). From the Instances page, simply select “Create an Image” to build your AMI. You can then build another instance (or ten) from this AMI as needed, exact replicas of the server as it was at the point that you created the image.
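
If you would rather script it, ec2-create-image from the same API tools does the equivalent of the console’s “Create an Image” action. The instance ID and image name below are examples only, and the flags are as I remember them – confirm with ec2-create-image --help.

# Instance ID and name are examples; the new AMI ID is printed on completion
ec2-create-image i-b07d3bf0 -n "sampleapp-v406-baseline" --region eu-west-1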

Lather, Rinse, and Repeat

There’s a whole host of VirtualBox “appliances” out there, and some of them, such as the developer-tools-focused ones, only really make sense as local VMs. But there are plenty that would benefit from a bit of “Cloud-isation”, where they’re too big or heavy to keep on your laptop all the time, but are handy to be able to spin up at will. A prime example of this for me is the EBS Vision demo database that we use for our BI Apps training. Oracle used to provide a pre-built Amazon image (known as an AMI) of this, but has since withdrawn it. However, Oracle do publish Oracle VM VirtualBox templates for EBS 12.1.3 and 12.2.3 (related blog), so with a bit of leg-work and a big upload pipe, it’s a simple matter to brew your own AWS version of it — ready to run whenever you need it.

Categories: BI & Warehousing

CIFS performance problem

Bas Klaassen - Wed, 2014-09-10 01:04
Today we encountered some performance problems at our customer site. After checking with the customer it seemed that especially the CIFS shares were having problems. At first CIFS was getting slower, until the shares were not even accessible anymore. Restarting the CIFS on the filer did solve the problem for a few minutes, but within half an hour the problems were back again. Checking the …
Categories: APPS Blogs

2009 honda s2000 ultimate edition for sale

Ameed Taylor - Tue, 2014-09-09 18:40
Drive the S2000 tenderly and you presumably won't be satisfied with the buzzy powertrain and occupied ride. Tuned to perform on tight clips, the S2000 can feel rigid and jittery on open streets. Wind out the motor and push its points of confinement in corners, and you're in for a totally diverse, smile actuating background; that is the thing that the Honda S2000 is about.

Mazda's Miata feels very nearly large in correlation to the S2000. The cockpit is confined regardless of how little the tenants. The high shoulders of the S2000 keep the driver and traveler, and the controlling wheel sits low even at its most noteworthy alteration point. Strangely for Honda, the controls aren't laid out neatly (there's not a considerable measure of dash space to do so), and the enormous red Start catch appears to be more like a contrivance. There's a lot of dark plastic, as well, for the sake of sparing weight.

The 2009 Honda S2000 is one of the slightest reasonable large scale manufacture autos on the planet. There's practically no inside or trunk stockpiling, the cockpit's more confined than the mentor situates on a Boeing 757, and its evaluated above $30,000. It is an exemplary roadster sportscar with back wheel drive, a ragtop to open on sunny days, a six-pace manual transmission, and a rev-cheerful four-barrel motor.
2009 red honda convertible s2000
A year ago Honda presented the S2000 CR, the club-racer adaptation of the standard S2000. The CR gets a full-body flight optimized unit, superior Bridgestone tires, firmer suspension settings, a thicker hostile to move bar, and new wheels. A lightweight aluminum hardtop that cuts weight by around 90 pounds replaces the delicate top component. Inside, the CR gets different material seats with yellow sewing, another aluminum shifter handle, and carbon-fiber resemble the other much the same trim boards.

Standard supplies on the 2009 Honda S2000 incorporates electronic dependability control and non-freezing stopping devices, however side airbags—a gimmick now found on almost all new vehicles—aren't accessible.

although the 2009 Honda S2000 has a dated design, the bottom edition stands out for its spectacular mix of fashion and performance, regardless of the overwhelming additions on the CR.

automobiles.com studies other exterior highlights embody trendy “excessive-intensity-discharge headlamps and 17-inch alloy wheels” that come usual on the 2009 Honda S2000. Edmunds resorts essentially the most distinguished criticism of the exterior of the 2009 Honda S2000, noting that whereas the brand new aerodynamic items on the CR “reduce excessive-velocity aerodynamic lift by way of about 70 p.c,” additionally they “cut back the car’s overall visual appeal with the aid of, oh, 79 %.” evaluations read through ebizsig.blogspot.com convey that the exterior styling of the 2009 Honda S2000 is a large success, and Kelley Blue e-book says the Honda S2000 “strikes an awfully un-Honda like, slightly depraved poise” that may “resemble an angry cobra about to strike.”
honda s2000 fiche technique 2009
Kelley Blue e book notes that “CR models include an aerodynamic physique kit,” together with “raise-reducing front and rear spoilers and a removable aluminum onerous high instead of the traditional cloth” model on the standard Honda S2000.
according to the reviewers at Edmunds, the “2009 Honda S2000 is a compact two-seat roadster that’s provided in two trims: same old and CR.” each trims share the same normal profile, which automobiles.com calls a “wedge-formed profile that stands except for different roadsters.”

ConsumerGuide approves of the internal structure on the 2009 Honda S2000, claiming that the “S2000 has a comfortable cockpit, so everything is shut at hand,” and whereas the “digital bar-graph tachometer and digital speedometer usually are not the sports activities-automotive norm,” they're “simple to learn.” Edmunds chimes in, noting that “just about all the controls you’ll ever want are set up inside a finger’s extension of the guidance wheel.” one of the most cooler interior features to find its method right into a manufacturing car is the “new top-power Indicator” on the 2009 Honda S2000 CR, a feature that cars.com says will flash “a inexperienced light when top power is reached.” Kelley Blue ebook gushes the 2009 Honda S2000’s “inside is stuffed with excellent surprises,” including a “giant pink start button on the sprint” and “the long heart console [that] sits up excessive, affording you the perfect perch on which to rest your arm.”
2009 honda s2000 performance specs
The 2009 Honda S2000 enjoys better handling because of the quicker guidance ratio and new tires, and the CR variation is a monitor-necessary contender that can hold its personal against more expensive European and American competition.

The EPA estimates that the 2009 Honda S2000, whether in standard or CR kind, will get 18 mpg within the city and 25 on the highway. Most cars as robust because the 2009 Honda S2000 pay a big penalty on the gasoline pump, however the small engine blended with lightweight development on the Honda S2000 yields a moderately frugal efficiency machine.

evaluations read by way of ebizsig.blogspot.com convey that the engine is happiest when operating flat-out. cars.com notes that “once it reaches 5,000 rpm or so, the S2000 lunges ahead like a rocket,” and Edmunds adds that “piloting the 2009 Honda S2000 takes some getting used to, on the grounds that height energy is delivered at nearly eight,000 rpm.” ConsumerGuide reviewers love the engine and find the Honda S2000 “offers a stunning provide of usable power across a extensive rpm vary, mixed with ultrahigh-revving excitement.” although two diverse versions of the 2009 Honda S2000 are on hand, Edmunds studies that the only engine offered is a “2.2-liter four-cylinder that churns out 237 hp at a lofty 7,800 rpm and 162 pound-feet of torque at 6,800 rpm.” Honda has tuned the engine on the Honda S2000 almost to the breaking point, with automobile and Driver commenting that “the S2000’s 2.2-liter four is mainly maxed out.”
modified honda s2000 turbo 2009 picture
evaluations learn by using ebizsig.blogspot.com additionally compliment the S2000’s transmission for its easy shifts and brief throws. Kelley Blue e book claims that the engine and transmission combination makes for “startlingly-quick efficiency,” whereas the chassis provides “outstanding nimbleness” to the 2009 Honda S2000 package deal. vehicles.com states that the four-cylinder engine on the S2000 Honda “mates with a six-speed handbook transmission” that ConsumerGuide says will supply “manageable take hold of motion” and a “slick, quick-throw gearbox.”

As excellent as the engine/transmission mixture is, coping with continues to be a trademark of the 2009 S2000. automobiles.com holds nothing back in praising the “razor-sharp steerage, disciplined coping with and athletic cornering ability” of the 2009 Honda S2000. Kelley Blue e book reviewers rave about the “just about flat cornering conduct and intensely crisp response that allows” the 2009 Honda S2000 “to barter the corners with positive tenacity.” The membership Racer is even more impressive, with automotive and Driver reporting it “is simply harder and sharper, with much less physique roll and tire scrubbing and extra nook composure and stability underneath braking.” sadly, the associated fee for all that efficiency is bad journey quality, and ConsumerGuide points out that “nearly every small bump and tar strip registers during the seats.” On the positive aspect, ConsumerGuide also comments that “braking is swift and simply modulated” whether or not you might be driving on the street or the monitor.
2009 honda s2000 horsepower
2009 honda s2000 owner's manual
2009 honda s2000 pictures
2009 honda s2000 price new
Categories: DBA Blogs

Open World Sessions Covering PeopleSoft's New Fluid UI

PeopleSoft Technology Blog - Tue, 2014-09-09 15:51

The new Fluid user interface is groundbreaking for us and our customers.  For that reason, we are offering several presentations and demos at Open World that cover the Fluid UI from different angles.  Some are general sessions of which Fluid is only a part, while some deal more directly and specifically with the subject.  If you are interested in learning about the Fluid UI from top to bottom, here are a selection of Open World sessions that should provide all you want to know.

General Sessions

GEN7438 -- PeopleSoft Strategy and Roadmap: Modern Solutions Delivered with Ease -- In this session, Oracle’s Paco Aubrejuan (senior vice president and general manager of PeopleSoft Development) shares the Oracle strategy for ongoing investment in its PeopleSoft product family.    Monday  12:00 - 12:45 MW   3004/6

CON7587 -- PeopleSoft General Session: Technology Update and Roadmap Oracle has produced game-changing results for PeopleSoft customers with the release of 9.2 applications.  Come and hear how PeopleSoft plans to further increase the value of applications by embracing technology.   Monday   1:45 - 2:30 MW   3004/6

PeopleTools Sessions

CON7595 -- PeopleSoft PeopleTools 8.54: PeopleSoft Fluid User Interface in Action    This session covers more than just Fluid, but it will provide information on how to get started with Fluid once you take PeopleTools 8.54.    Monday    3:00-3:45 MW 2022

CON7567 -- A Closer Look at the New PeopleSoft Fluid User Experience If you want to learn about the various features of the Fluid UI from a functional perspective and how Fluid will coexist with classic PeopleSoft, this is the session for you.    Monday   5:30 - 6:15 MW 2022

CON7588 -- PeopleSoft Mobility Deep Dive: PeopleSoft Fluid User Interface and More   This session is focused on mobile and how Fluid supports that, but it also covers Fluid from a developer's perspective     Wednesday      11:30 - 12:15      MW 2022

Application Sessions

CON7667 -- PeopleSoft Fluid User Interface: A Modern User Experience for PeopleSoft HCM on Any Device Tuesday   3:45   Palace Gold

CON7584 -- PeopleSoft 9.2 and Beyond: Unbelievable Innovation in Projects and Staffing (ESA) Wednesday    10:15   Westin Olympic

There are many more sessions that will touch on the Fluid UI to varying degrees.  Consult the Agenda Builder on Oracle's Open World site.  We recommend that you attend the panel discussions with PeopleTools development leaders as well as customer panel discussions.  You can also learn more at the PeopleSoft demo pods and Meet the Experts sessions. 

See you at Oracle Open World! 


Big Value from CPQ Cloud for OpenWorld 2014 Attendees

Linda Fishman Hoyle - Tue, 2014-09-09 15:34

A Guest Post by Chris Haussler, Sr. Principal Product Manager, Oracle CPQ Cloud (pictured left)

This will be CPQ Cloud’s first Oracle OpenWorld as a member of the Oracle CX Applications Suite.

It will give sales professionals from around the globe a preview of how the industry-leading configure, price, and quote application and the Oracle Customer Experience portfolio work together today, and going forward.

Attendees will hear directly from us about where CPQ is headed—and from top companies about the transformative value they’ve achieved in their sales processes as a result of implementing Oracle CPQ Cloud alone, and integrated with a holistic front office solution.

CPQ Cloud Integration Sessions in Moscone West
We are offering a total of twenty sessions that revolve around, or are related to, CPQ Cloud implementations. Sessions detailing new and existing CPQ Cloud integrations include:

  • “Commerce at Oracle: Commerce + CPQ Cloud Vision and Strategy,” Tuesday, 10:15 a.m. and 12 noon PT, Room 3003 [TGS8714]
  • “Implement a Virtual Engineer: Oracle BigMachines CPQ Cloud Service with Oracle E-Business Suite,” Wednesday, 2:00 p.m. PT, Room 3005 [CON6213]
  • “Sign and Send Documents with a Click of a Button: DocuSign/Oracle BigMachines CPQ Cloud Service,” Wednesday, 10:15 a.m. PT, Room 3005 [CON5558]
  • “You Can Have Both: Integrating CPQ Cloud (BigMachines) with On-Premises Siebel,” Wednesday, 12:45 p.m. PT, Room 3005 [CON5850]
  • “What’s New for Oracle BigMachines CPQ Cloud Service in 2014–2015,” Tuesday, 4:15 p.m. PT, Room 2016 [CON7497]

More CPQ Cloud Sessions
To round out the offering for our B2B and B2C customers, here is a sample of other sessions:

  • “Collaboration in a Complex CPQ Environment: A One-Stop-Shop Model” (Siemens Energy)
  • “Configuration 2.0 with Oracle BigMachines CPQ Cloud Service” (SPX Process Equipment)
  • “Enable Your Channel Partners with Oracle BigMachines CPQ Cloud Service” (Deloitte)
  • “Join the Revenue Revolution: CPQ Is the Key to Maximizing Customer Lifetime Value” (PWC)
  • “Oracle BigMachines CPQ Cloud Service Introduces Cross-Industry Quote to Cash”
  • “Simple, Familiar, Fast: Midmarket CPQ for Salesforce.com”
  • “Deployment Options for the Efficient Delivery of B2B Communications Services”

Meet the Oracle CPQ Cloud Experts
Meet Chris Shutts, CPQ Cloud VP of Product Development and co-founder of BigMachines, and Erik Abernathy, CPQ Cloud director of software engineering, to gain insights on CPQ Cloud or to ask questions specific about your implementation—Thursday, 11:30 a.m. PT, Room 3012 [MTE9166]

See It All
Click here to view a full schedule of CPQ Cloud sessions at Oracle Open World 2014.

Oracle OpenWorld 2014 takes place from September 28–October 2, 2014 in San Francisco’s Moscone Center. We look forward to seeing you there!

Chris Haussler
Senior Principal Product Manager
Oracle CPQ Cloud

Check Out the ACE Book Signings in the Oracle OpenWorld Bookstore!

OTN TechBlog - Tue, 2014-09-09 15:03

Make sure to leave room in your suitcase for a signed copy of a book written by one of our amazing Oracle ACEs and Oracle ACE Directors! You can find the Oracle OpenWorld Bookstore in the Moscone South Upper Lobby (right next to the OTN Lounge). Here is a list of ACEs and the times when they will be doing a book signing in the Oracle OpenWorld Bookstore:

Monday, September 29th, 2014 - 1:00pm to 1:30pm - Meet and Greet Session with Oracle Press Authors
  • Luc Bors, Oracle ACE - Oracle Mobile Application Framework Developer Guide
  • Gustavo Gonzalez, ACE Director - Oracle E-Business Suite Financials Handbook, 3rd Ed.
  • Michelle Malcher, ACE Director - Oracle Database 12c: Install, Configure & Maintain Like a Professional
  • Michael McLaughlin, Oracle ACE - Oracle Database 12c PL/SQL Programming
  • Harshad Oak - ACE Director - Java EE Applications on Oracle Java Cloud
Tuesday, September 30th, 2014 - 12:30pm to 1:00pm
  • Matjaz B. Juric, ACE Director - WS BPEL Beginner's Guide - Packt Publishing
Wednesday, October 1st, 2014 - 1:00pm to 1:30pm - Meet and Greet Session with Oracle Press Authors
  • Ian Abramson, Oracle ACE - Oracle Database 12c: Install, Configure & Maintain Like a Professional 
  • Arun Gupta, Java Champion - Java EE and HTML5 Enterprise Application Development
  • Michael Rosenblum, Oracle ACE - Oracle PL/SQL Performance Tuning Tips & Techniques
  • Brendan Tierney, ACE Director - Predictive Analytics Using Oracle Data Miner
Thursday, October 2nd, 2014 - 1:00pm to 1:30pm - Meet and Greet Session with Oracle Press Authors
  • Michelle Malcher, ACE Director - Oracle Database 12c: Install, Configure & Maintain Like a Professional

Sunday Times Tech Track 100

Rittman Mead Consulting - Tue, 2014-09-09 14:35

Over the weekend, Rittman Mead was listed in the Sunday Times Tech Track 100. We are extremely proud to get recognition for the business as well as our technical capability and expertise.

A lot of the public face of Rittman Mead focuses on the tools and technologies we work with. Since day one we have had a core policy to share as much information as possible. Even before the advent of social media, we shared pretty much everything we knew, either through our blog or by speaking at conferences, but we very rarely talk about the business itself. However, a lot of the journey we have gone through over the last 7 years has been about the growth and maintenance of a successful, sustainable, multi-national business. We have been able to talk about, educate and evangelise about the tools and technologies as a result of having the successful business to support this.

I remember during one interview we did several years ago the candidate asked (and I’m paraphrasing): “How do you guys make any money, all I see/read is people sitting in airports writing blog posts about leading edge technologies?”.

One massive benefit from this is we often face the same problems (albeit on a different scale) to those that we talk about with customers, so we have been able to better understand the underlying drivers and proposed solutions for our clients.

From a personal point of view, this has meant spending a lot more time looking at contracts as opposed to code and reading business books/blogs as opposed to technical ones. However, it has been well worth it and I would like to say thanks to all of those both inside and outside of the company who have helped contribute to this success.

Categories: BI & Warehousing

Wearables Should be Stylish

Oracle AppsLab - Tue, 2014-09-09 13:18

To no one’s surprise, Apple announced the Apple Watch today.

Very apropos because I just read Sandra Lee’s (@SandraLee0415) post over on Usable Apps about fashionable tech, one of Ultan’s (@ultan) main talking points about wearables.

Ultan, our wearables whisperer, has style and flair; if you’ve ever met him, you know this. His (and Sandra’s) point about wearable tech needing to be stylish is one that Apple has made, again, to precisely no one’s surprise. Appearance matters to people, and smartwatches and other wearables are accessories that should be stylish and functional.

The market has spoken on this. To the point, the Android Wear smartwatch people want is the round Moto 360, which sold out in less than a day earlier this week.

The Apple Watch looks very sleek, and if nothing else, the array of custom bands alone differentiate it from smartwatches like the Samsung Gear Live and the LG G, both of which are also glass rectangles, but with boring rubber wristbands.

I failed to act quickly enough to get a Moto 360 and settled instead on a Gear Live, which is just as well, given I really don’t like wearing watches. We’ve been building for the Pebble for a while now, and since the announcement of Android Wear earlier this year, we’ve been building for it as well, comparing the two watches and their SDKs.

IMG_20140909_121201

Like Google Glass, the Gear Live will be a demo device, not a piece of personal tech. However, for Anthony, his Android Wear watch has replaced Glass as his smartphone accessory of choice. Stay tuned for the skinny on that one.

I haven’t read much about the Apple Watch yet, but I’m sure there will be coverage aplenty as people get excited for its release early in 2015. Now that Apple’s in the game, wearables are surely even more of a thing than they were yesterday.

And they’re much more stylish.

Find the comments.

Maximize your ERP Cloud Experience at Oracle OpenWorld 2014

Linda Fishman Hoyle - Tue, 2014-09-09 12:05

A Guest Post by Senior Director Amrita Mehrok, Oracle Financials Applications Strategy (pictured left)

Oracle OpenWorld 2014 is less than a month away, starting on Sunday, September 28. We are centralizing all Oracle ERP conference sessions at the Westin San Francisco Market Street this year to make it easier for our customers to find everything ERP in a single place. Headlining the track are Oracle’s Cloud Applications—Oracle Financials Cloud, Oracle Procurement Cloud, and Oracle Project Portfolio Management Cloud. Registered participants can attend sessions and experience self-service demonstrations, conveniently located in the Westin. This will help participants easily connect with Oracle product experts, implementation partners, and customers with similar interests and challenges.

Additional detailed demos are available in the Central Applications Demo zone in Moscone West.

Kick-Off General Session in The Westin San Francisco Market Street
SVP Rondy Ng, Applications Development, will kick off the ERP track with a General Session titled “Empowering Modern ERP” on Monday at 10:30 a.m. PT in Room Metropolitan III. He will be joined by Peter Cavallo, VP and North America Practice Lead for IBM, our sponsoring partner. Also joining the session will be Patrick Benson, CIO of Ovation Brands, who will talk about Ovation’s implementation of the ERP Cloud.

Conference Sessions
The ERP track will host more than 80 conference sessions this year including a day (Wednesday) dedicated to everything Cloud. Topics will include overviews of the latest product enhancements and roadmap, case studies, demonstrations, and shared insights by customers and partners. The ERP track will cover Oracle’s Cloud products, as well as sessions on E-Business Suite, PeopleSoft, JD Edwards, Oracle Primavera, and Governance Risk & Compliance. Various Focus on Docs provide participants with a handy list of all relevant ERP sessions taking place during OpenWorld 2014 and are available here.

Meet the Experts
Conference goers can meet informally with Oracle experts in Metropolitan III at the Westin Market Street. The Meet the Experts sessions cover all products and product lines under the ERP umbrella.

Demo Grounds
Oracle ERP will be showcased at more than 20 separate pods in Moscone West. Be sure to check out the demo pods.

Hope to see you there!

Amrita Mehrok, Senior Director
Oracle Financials Applications Strategy

Quiz Night

Jonathan Lewis - Tue, 2014-09-09 11:46

I have a table with several indexes on it, and I have two versions of a query that I might run against that table. Examine them carefully, then come up with some plausible reason why it’s possible (with no intervening DDL, DML, stats collection, parameter fiddling etc., etc., etc.) for the second form of the query to be inherently more efficient than the first.


select
        bit_1, id, small_vc, rowid
from
        bit_tab
where
        bit_1 between 1 and 3
;

prompt  ===========
prompt  Split query
prompt  ===========

select
        bit_1, id, small_vc, rowid
from
        bit_tab
where
        bit_1 = 1
or      bit_1 > 1 and bit_1 <= 3
;

Update / Answers

I avoided giving any details about the data and indexes in this example as I wanted to allow free rein to readers’ imagination – and I haven’t been disappointed with the resulting suggestions. The general principles of allowing more options to the optimizer, effects of partitioning, and effects of skew are all worth considering when the optimizer CAN’T use an execution path that you think makes sense. (Note: I didn’t make it clear in my original question, but I wasn’t looking for cases where you could get a better path by hinting (or profiling); I was after cases where Oracle literally could not do what you wanted.)

The specific strategy I was thinking of when I posed the question was based on a follow-up to some experiments I had done with the cluster_by_rowid() hint, and (there was a little hint in the “several indexes” and, more particularly, the column name “bit_1”) I was looking at a data warehouse table with a number of bitmap indexes. So here’s the execution plan for the first version of the query when there’s a simple bitmap index on bit_1.


------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |   600 | 18000 |    96 |
|   1 |  TABLE ACCESS BY INDEX ROWID | BIT_TAB |   600 | 18000 |    96 |
|   2 |   BITMAP CONVERSION TO ROWIDS|         |       |       |       |
|*  3 |    BITMAP INDEX RANGE SCAN   | BT1     |       |       |       |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("BIT_1">=1 AND "BIT_1"<=3)

And here’s the plan for the second query:


------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |   560 | 16800 |    91 |
|   1 |  TABLE ACCESS BY INDEX ROWID | BIT_TAB |   560 | 16800 |    91 |
|   2 |   BITMAP CONVERSION TO ROWIDS|         |       |       |       |
|   3 |    BITMAP OR                 |         |       |       |       |
|*  4 |     BITMAP INDEX SINGLE VALUE| BT1     |       |       |       |
|   5 |     BITMAP MERGE             |         |       |       |       |
|*  6 |      BITMAP INDEX RANGE SCAN | BT1     |       |       |       |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("BIT_1"=1)
   6 - access("BIT_1">1 AND "BIT_1"<=3)

Clearly the second plan is more complex than the first – moreover the added complexity had resulted in the optimizer getting a different cardinality estimate – but, with my data set, there’s a potential efficiency gain. Notice how lines 5 and 6 show a bitmap range scan followed by a bitmap merge: to do the merge Oracle has to “superimpose” the bitmaps for the different key values in the range scan to produce a single bitmap that it can then OR with the bitmap for bit_1 = 1 (“bitmap merge” is effectively the same as “bitmap or” except all the bitmaps come from the same index). The result of this is that when we convert to rowids the rowids are in table order. You can see the consequences in the ordering of the result set or, more importantly for my demo, in the autotrace statistics:


For the original query:
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        604  consistent gets
          0  physical reads
          0  redo size
      27153  bytes sent via SQL*Net to client
        777  bytes received via SQL*Net from client
         25  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        600  rows processed


For the modified query
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        218  consistent gets
          0  physical reads
          0  redo size
      26714  bytes sent via SQL*Net to client
        777  bytes received via SQL*Net from client
         25  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        600  rows processed

Note, particularly, the change in the number of consistent gets. Each table block I visited held two or three rows that I needed: in the first query I visit the data in order of (bit_1, rowid) and get each table block three times; in the second I visit the data in order of rowid and get each table block only once (with a "buffer is pinned count" for subsequent rows from the same block).

Here's the starting output from each query; I've added the rowid to the original select statements so that you can see the block ordering:


Original query
     BIT_1         ID SMALL_VC   ROWID
---------- ---------- ---------- ------------------
         1          2 2          AAAmeCAAFAAAAEBAAB
         1         12 12         AAAmeCAAFAAAAECAAB
         1         22 22         AAAmeCAAFAAAAEDAAB
         1         32 32         AAAmeCAAFAAAAEEAAB
         1         42 42         AAAmeCAAFAAAAEFAAB
         1         52 52         AAAmeCAAFAAAAEGAAB

Modified query
     BIT_1         ID SMALL_VC   ROWID
---------- ---------- ---------- ------------------
         1          2 2          AAAmeCAAFAAAAEBAAB
         2          3 3          AAAmeCAAFAAAAEBAAC
         3          4 4          AAAmeCAAFAAAAEBAAD
         1         12 12         AAAmeCAAFAAAAECAAB
         2         13 13         AAAmeCAAFAAAAECAAC
         3         14 14         AAAmeCAAFAAAAECAAD
         1         22 22         AAAmeCAAFAAAAEDAAB
         2         23 23         AAAmeCAAFAAAAEDAAC
         3         24 24         AAAmeCAAFAAAAEDAAD

By rewriting the query I’ve managed to force a “cluster by rowid” on the data access. Of course, the simpler solution would be to add the /*+ cluster_by_rowid() */ hint to the original query – but it doesn’t work for bitmap indexes, and when I found that it worked for B-tree indexes the next test I did was to try a single bitmap index, which resulted in my writing this note.
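
For anyone who hasn't met the hint, its use in the B-tree case looks roughly like the sketch below. This is illustration only: the index name is invented, and, as noted above, the hint doesn't work when the access path is a bitmap index.

-- Hypothetical B-tree example: bt1_btree is an assumed index name, not one from the test case.
select  /*+ index(t bt1_btree) cluster_by_rowid(t) */
        bit_1, id, small_vc
from    bit_tab t
where   bit_1 between 1 and 3
;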

Footnote: I don't really expect Oracle Corp. to modify their code to make the hint work with bitmaps; after all, it's only relevant in the special case where it would be needed – a bitmap index range scan with no subsequent bitmap AND/OR/MINUS operations – and you're not really expected to use a single bitmap index to access a table; we engineer bitmaps to take advantage of combinations.


Database manufacturers include JSON in latest provisions

Chris Foot - Tue, 2014-09-09 10:13

JavaScript Object Notation (JSON) has been lauded as one of the easiest data-interchange formats to understand, and it has been a boon to professionals managing Web-based data.

Database administration services and Web developers alike favor the format when handling complex information because it's easy for people to read and write, JSON.org noted. Programmers also appreciate that it uses conventions familiar from C, C++, Java, JavaScript, Python and other languages. JSON is constructed on two foundations:

  • A collection of name/value pairs, known in other languages as an object
  • An ordered list of values, also called an array

Why add JSON support to databases? 
Unstructured data, a type of information that is ubiquitous in the current Digital Age, is typically stored as documents, which is exactly how JSON organizes data. Many NoSQL and big data platforms such as MongoDB, Couchbase and Hadoop work with this document model, which has made it a favorite among Web developers, InfoWorld noted.

In order to compete with such architectures, software giant Oracle added JSON support to the company's Oracle Database 12c, a move outlined at the NoSQL Now conference in San Jose, California, last month. This is a departure from the conventional relational database management system architecture, and it's presented as an alternative to PostgreSQL, which has been regarded as the open source alternative to Oracle.
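
As a rough illustration of what that support looks like in Oracle Database 12c (12.1.0.2), the sketch below stores JSON documents in an ordinary table and queries them with SQL; the table, column and document contents are invented for the example.

-- Hypothetical table: documents live in a CLOB column, and the IS JSON
-- check constraint makes the database reject malformed documents.
CREATE TABLE customer_docs (
  id   NUMBER PRIMARY KEY,
  doc  CLOB CHECK (doc IS JSON)
);

INSERT INTO customer_docs VALUES (
  1,
  '{"name": "Acme", "contacts": [ {"type": "email", "value": "info@acme.example"} ]}'
);

-- JSON_VALUE extracts scalar values with a path expression, so JSON attributes
-- can be queried alongside ordinary relational columns.
SELECT id,
       JSON_VALUE(doc, '$.name')              AS customer_name,
       JSON_VALUE(doc, '$.contacts[0].value') AS first_contact
FROM   customer_docs;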

Is it a valid option? 
Still, DBA services may advise their clients to keep using Oracle 12c for tabular data and conventional NoSQL solutions for semi-structured information. InfoWorld noted that the latter follow a "scale out" model as opposed to a "scale up" approach.

Scaling out enables NoSQL solutions to leverage commodity servers to enhance performance, as opposed to bulking up a single massive database server. In addition, the way a document-based database distributes information across multiple servers makes deployments highly resistant to failure.

When will the day come? 
InfoWorld classified modern databases into three types: 

  • RDBMS, which handle structured data
  • NoSQL, which manage semi-structured information
  • Hadoop, which organizes unstructured data

The source proposed an interesting scenario: that all three systems be unified into a single solution. JSON could potentially provide a structure for just such a database, but it's unknown whether Oracle, IBM or another tech company would be able to develop it successfully (the profits for whichever enterprise did would be huge).

Yet it's more likely that the open source community will build a database capable of seamlessly handling structured, semi-structured and unstructured data. Just look at how monumental Hadoop has been.

The post Database manufacturers include JSON in latest provisions appeared first on Remote DBA Experts.

OOW - Focus On Support and Services for Database

Chris Warticki - Tue, 2014-09-09 08:00
Focus On Support and Services for Database

Conference Sessions

Monday, Sep 29, 2014

  • Best Practices for Maintaining and Supporting Oracle Database (CON8270)
    Balaji Bashyam, Vice President, Oracle; Roderick Manalac, Consulting Tech Advisor, Oracle
    11:45 AM - 12:30 PM, Moscone South - 310

  • Best Practices for Maintaining and Supporting Oracle Enterprise Manager (CON8567)
    Farouk Abushaban, Senior Principal Technical Analyst, Oracle
    2:45 PM - 3:30 PM, Intercontinental - Grand Ballroom C

  • Integrating PeopleSoft for Seamless IT Service Delivery: Tips from UCF (CON2541)
    Robert Yanckello, CTO, UCF; Sastry Vempati, Director ACS Customer Service Management, Oracle
    2:45 PM - 3:30 PM, Moscone West - 2024

  • Oracle Exadata: Maintenance and Support Best Practices (CON8259)
    Christian Trieb, CDO, Paragon Data GmbH; Jaime Figueroa, Senior Principal Technical Support Engineer, Oracle; Bennett Fleisher, Customer Support Director, Oracle
    4:00 PM - 4:45 PM, Moscone South - 310

  • Upgrading to Oracle E-Business Suite 12.1.3: Tips from ADP (CON5061)
    Mukarram Mohammed, DBA Manager, ADP; Ed Fleming, Director, ACS Service Management, Oracle; Sushil Motwani, Senior Principal Technical Account Manager, Oracle
    5:15 PM - 6:00 PM, Intercontinental - Grand Ballroom C

  • Effective Client Failover in an Oracle Data Guard Environment (CON8595)
    Sung I Kim, Senior Principal Instructor, Oracle
    5:15 PM - 6:00 PM, Moscone South - 310

Tuesday, Sep 30, 2014

  • Fast-Track Big Data Implementation with the Oracle Big Data Platform (CON7183)
    Suraj Krishnan, Director, Applications & Middleware, Oracle; Jegannath Sundarapandian, Technical Lead, Oracle
    10:45 AM - 11:30 AM, Intercontinental - Union Square

  • Wells Fargo Uses Cascaded Physical Standby Databases for Preproduction Staging (CON6636)
    Burley Patterson, DBA, Wells Fargo; Joy Watkins, Database Analyst, Wells Fargo; Dave LaPoint, Senior Principal Advanced Support Engineer, Oracle
    12:00 PM - 12:45 PM, Intercontinental - Union Square

  • Best Practices for Deploying a DBaaS in a Private Cloud Model (CON2586)
    Vinod Haval, Enterprise Architect, Oracle; Bharat Patel, Director - Cloud & Enterprise Architecture, Oracle
    12:00 PM - 12:45 PM, Moscone South - 310

  • Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support (CON8236)
    Agrim Pandit, Principal Software Engineer, Oracle
    5:00 PM - 5:45 PM, Moscone South - 310

Wednesday, Oct 01, 2014

  • Best Practices: SQL Tuning Made Easier with SQLTXPLAIN (SQLT) (CON8266)
    Mauro Pagano, Senior Principal Technical Support Engineer, Oracle
    12:45 PM - 1:30 PM, Moscone South - 310

  • Oracle Analytics and Big Data: Unleash the Value (CON3811)
    Lisa Dearnley-Davison, EMEA Consulting Director for Big Data, Oracle; Gary Young, Senior Director, Big Data / Analytics, Oracle
    2:00 PM - 2:45 PM, Intercontinental - Telegraph Hill

  • Taming the Wild West with Oracle Database Options (CON3910)
    Mike Brotherton, JP Morgan Chase; Ashok Pandya, Consulting Solutions Director, Oracle
    4:45 PM - 5:30 PM, Intercontinental - Union Square

  • Dell Shares Tips on Using Oracle Database 12c Multitenancy in a Private Cloud (CON6331)
    Janardhana Korapala, Database Admin Consultant, Dell Inc; Kurian Abraham, Technical Leader, Database Technologies, Oracle; Nirmal Bommireddypalli, SR Principal Support Account Manager, Oracle
    4:45 PM - 5:30 PM, Intercontinental - Intercontinental C

Thursday, Oct 02, 2014

  • Real-World Oracle Maximum Availability Architecture with Oracle Engineered Systems (CON2335)
    Bill Callahan, Director, Products and Technology, CCC Information Services; Jim Mckinstry, Consulting Practice Director, Oracle
    9:30 AM - 10:15 AM, Intercontinental - Grand Ballroom B

  • Optimize Oracle SuperCluster with Oracle Advanced Monitoring and Resolution (CON2388)
    Erik Carlson, Vice President IT, Jabil Circuit, Inc.; George Mccormick, Field Sales Representative, Oracle
    9:30 AM - 10:15 AM, Marriott Marquis - Salon 4/5/6*

  • Best Practices for Maintaining Your Oracle RAC Cluster (CON8252)
    William Burton, Consulting Member of Technical Staff, Oracle; Scott Jesse, Customer Support Director, RAC, Storage & RAC Assurance, Oracle; Bryan Vongray, Senior Principal Technical Support Engineer, Oracle
    12:00 PM - 12:45 PM, Moscone South - 310

  • Oracle E-Business Suite Architecture Best Practices: Tips from CBS (CON3829)
    John Basone, CBS; Greg Jerry, Director - Oracle Enterprise Architecture, Oracle
    12:00 PM - 12:45 PM, Marriott Marquis - Salon 4/5/6*

  • Parallel Upgrade of PeopleSoft Applications and Oracle Database: Tips from MetLife (CON6106)
    Gopi Kotha, Software Systems Specialist, MetLife; Asha Santosh, Lead PeopleSoft DBA, Metropolitan Life Insurance Company (inc); Navin Lobo, Principal Advanced Support Engineer, Oracle
    1:15 PM - 2:00 PM, Moscone West - 2020

  • Optimizing Oracle Exadata with Oracle Support Services: A Client View from KPN (CON7054)
    Eric Zonneveld, Ing., KPN NV; Jan Dijken, Principal Advanced Support Engineer, Oracle
    1:15 PM - 2:00 PM, Moscone South - 305

  • Near-Zero Downtime Database Migration: Case Study Presented by Starwood Hotels (CON5325)
    Harish Patel, Database Architect, Starwood Hotels and Resorts; Shampa Mukhopadhyay, Technical Acct Manager, Oracle; Nalin Sahoo, Senior Principal Engineer, Oracle
    2:30 PM - 3:15 PM, Intercontinental - Grand Ballroom B

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year's gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3-minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29 and Tuesday, Sep 30: 9:45 AM - 6:00 PM
Wednesday, Oct 01: 9:45 AM - 3:45 PM

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

You are invited! See the Power of Innovation with Oracle Fusion Middleware

WebCenter Team - Tue, 2014-09-09 06:00

The Oracle Fusion Middleware team is very excited to recognize the 2014 Oracle Excellence Awards for Fusion Middleware Innovation winners with a special Awards Ceremony on Tuesday September 30th during Oracle OpenWorld.

Oracle Fusion Middleware Innovation Awards honor customers with cutting-edge use of Oracle Fusion Middleware technologies to solve unique business challenges or create business value. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.

If you are planning to attend Oracle OpenWorld in San Francisco or plan to be in the area during Oracle OpenWorld, we hope you can join us, and bring back to your organization real-life examples of Fusion Middleware in action.

   Oracle Excellence Awards Ceremony: Oracle Fusion Middleware: Meet This Year’s Most Impressive Innovators (Session ID: CON7029)

   When: Tuesday September 30, 2014

   Time: Champagne Reception 4:30 pm, Ceremony 5-5:45 pm PT

   Where: Yerba Buena Center for the Arts, YBCA Theater (next to Moscone North) 700 Howard St., San Francisco, CA, 94103


To learn more about last year’s award winners please read our blog post: Innovation Award Winners Celebrated at A Grand Ceremony at OOW: Event Highlights

To attend this Award Ceremony, Oracle OpenWorld Badges are required. You can register for this session through the Schedule Builder on the Oracle OpenWorld website. If you are not attending the conference, but will be in the area and would like to join the celebration – please RSVP HERE and we will provide a complimentary Discover Pass code that you can use to register, pick up your badge, and attend the Award Ceremony session.

We hope to see you there!

Learn More About PeopleSoft Activity Guides at Open World

PeopleSoft Technology Blog - Mon, 2014-09-08 17:02

PeopleSoft activity guides offer a wonderful means to guide your users through complex or infrequently performed business processes.  PeopleSoft applications deliver some great activity guides out of the box, but you can also configure your own quickly and with little effort using the PeopleTools Activity Guide Framework.  We'll be presenting a session at Oracle Open World '14 dedicated to Activity Guides.  Come to our session to see some of the activity guides that we deliver.  

We'll also demonstrate how to configure your own using the new Activity Guide WorkCenter in PeopleTools 8.54.

Interested in how Activity Guides will look in the new Fluid User Experience?  We'll provide a preview of that as well.

If you would like to improve user satisfaction and productivity while reducing the need for training and support, come to this session:

Session ID: CON7570
Session Title: PeopleSoft Activity Guides: Simplifying a Complex Process
Venue / Room: Moscone West - 2022
Date and Time: 9/30/14, 10:45 - 11:30

Analyzing Twitter Data using Datasift, MongoDB, Hive and ODI12c

Rittman Mead Consulting - Mon, 2014-09-08 14:39

Last week I posted an article on the blog around analysing Twitter data using Datasift, MongoDB and Pig, where I used the Datasift service to stream tweets about Rittman Mead into a MongoDB NoSQL database, and then queried the dataset using Pig. The context for this is the idea of a "data reservoir", where we supplement the more traditional file and relational datasets we find in data warehouses with other data, typically machine generated, unstructured or very low-level, to add context to the numbers in our reporting system. In the example I quoted in the article, it'd be very interesting to take the activity we record against our blog and website and correlate that with the "conversation" that happens about it in the social media world; for example, were the hits for a particular article due to it being mentioned in a tweet, and did a spike in activity correspond to a particularly influential Twitter user retweeting something we'd tweeted?

[Image]

In that previous article I’d used Pig to access and analyse the data, in part because I saw a match between the nested datasets in a typical DataSift Twitter message and the relations, tuples and bags you get in a Pig schema. For example, if you look at the Tweet from Borkur in the screenshot below from RoboMongo, a Mac OS X client for MongoDB that I’ve found useful, you can see the author details nested inside the interaction details, and the Type attribute having many values under the Trends parent attribute – these map well onto Pig tuples and bags respectively.

[Image]

What I’d really like to do with this dataset, though, is to take certain elements of it and use that to supplement the data I’m loading using ODI12c. Whilst ODI can run arbitrary R, Pig and shell scripts using the ODI Procedure feature (as I did here to make use of Sqoop, before Oracle added Sqoop KMs to ODI12.1.3), it gets the best out of Hadoop when it can access data using Hive, the SQL layer over Hadoop that represents HDFS data as rows and columns, and allows us to SELECT and INSERT data using SQL commands – or to be precise, a dialect of SQL called HiveQL. But how will Hive cope with the nested and repeating data structures in a DataSift Twitter message, and allow us to get just the data out that we’re interested in?

In fact, the MongoDB connector for Hadoop that I used for Pig the other day also comes with Hive connectivity, in the form of a SerDe that lets Hive report against data in a MongoDB database (David Allen blogged about another MongoDB Hive storage handler a while ago, in an article about MongoDB and ODI). What's more, this Hive connector for MongoDB is actually easier to work with than the Pig connector, as instead of worrying about Tuples and Bags you can just pick out the nested attributes that you're interested in using a dot notation. For example, if I'm only interested in the InteractionID, username, tweet content and number of followers within a particular Twitter dataset, I can create a table that looks like this in Hive:

CREATE TABLE tweet_data(
  interactionId string,
  username string,
  content string,
  author_followers int)
ROW FORMAT SERDE 
  'com.mongodb.hadoop.hive.BSONSerDe' 
STORED BY 
  'com.mongodb.hadoop.hive.MongoStorageHandler' 
WITH SERDEPROPERTIES ( 
  'mongo.columns.mapping'='{"interactionId":"interactionId",
  "username":"interaction.interaction.author.username",
  "content":"interaction.interaction.content",
  "author_followers":"interaction.twitter.user.followers_count"}'
  )
TBLPROPERTIES (
  'mongo.uri'='mongodb://cdh51-node1:27017/datasiftmongodb.rm_tweets'
  )

And at that point, it’s pretty easy to bring the dataset into ODI12c, through the IKM Hive to Hive Control Append knowledge module, and join up the Twitter dataset with the website log data that’s coming in via Flume. ODI can connect to Hive via JDBC drivers supplied with CDH4/5, and once you register the Hive connection and reverse-engineer the Hive metastore metadata into ODI’s repository, the complexity of the underlying Hive storage is hidden and you’re just presented with tables and columns, just like any other datastore type.

[Image]

Starting with the Twitter data first, I create a Hive table outside of ODI that returns the precise set of tweet attributes that I’m interested in, and then filter that dataset down to just the tweets that link to content on our website, by filtering on the tweet link’s URL matching the start of our website address.
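
A minimal sketch of that filter in HiveQL might look like the query below, assuming the link shared in the tweet has been mapped to a column in the same way as the columns in the earlier CREATE TABLE; the column name tweet_link and the site address are assumptions for the illustration.

-- Sketch: keep only the tweets whose shared link points at our site.
SELECT interactionId,
       username,
       content,
       tweet_link
FROM   tweet_data
WHERE  tweet_link LIKE 'http://www.rittmanmead.com%';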

[Image]

Then I load up the hits from the Rittman Mead website, previously landed into Hadoop using Flume and exposed to ODI as another Hive table, filter out all the non-blog page accesses and keep just the URL part of the Apache weblog request field, removing the transport mechanism and other bits around it.
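
Something along these lines, as a sketch only: the Hive table and column names for the Flume-landed log data are assumptions, and the LIKE filter is just one plausible way of keeping blog-post requests if the permalinks are date-based.

-- Sketch: the Apache request field looks like 'GET /2014/09/some-post/ HTTP/1.1',
-- so regexp_extract() keeps just the URL part and drops the method and protocol.
SELECT regexp_extract(request, '^[A-Z]+ (.*?) HTTP', 1) AS page_url,
       COUNT(*)                                         AS page_hits
FROM   apache_access_log
WHERE  request LIKE 'GET /20%'
GROUP  BY regexp_extract(request, '^[A-Z]+ (.*?) HTTP', 1);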

[Image]

Then, I use a final ODI mapping to join the two datasets together, using ODI's ability to apply HiveQL expressions to the incoming datasets so that they've got the same format – trailing '/' at the end of the URL, no ampersand and query text at the end of the URL, and so on. Both this and the previous transformation are great examples of where ODI can help with this sort of work, making it pretty easy to munge and correct your data so that you're then able to match up the two different sources.
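
Conceptually, the HiveQL that gets generated ends up doing something like the sketch below (table and column names are assumed, and the real code is produced by the ODI knowledge module): normalise the two URL formats so they line up (the sketch strips the trailing '/' rather than adding one, but the effect is the same), then aggregate hits and tweet mentions per blog post.

-- Sketch: blog_page_hits is assumed to be the aggregated output of the previous
-- mapping (one row per blog URL plus a hit count); rm_blog_tweets is one row per
-- matching tweet. parse_url() keeps just the path of the tweeted link, and the
-- trailing '/' is stripped from both sides so the join keys have the same shape.
SELECT w.page_url,
       w.page_hits                     AS page_requests,
       COUNT(DISTINCT t.interactionId) AS tweet_mentions
FROM   blog_page_hits w
JOIN   rm_blog_tweets t
  ON   regexp_replace(w.page_url, '/$', '')
     = regexp_replace(parse_url(t.tweet_link, 'PATH'), '/$', '')
GROUP  BY w.page_url, w.page_hits;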

[Image]

Then it’s just a case of creating a package or load plan to sequence the mappings, and then run them using the local or standalone agent. You can see the individual KM steps running on the left-hand side, with ODI generating HiveQL queries which in turn are translated into MapReduce and run in parallel across the Hadoop cluster.

[Image]

And then, at the end of the process, I’ve got a Hive table of all of our blog articles that have been mentioned on Twitter (since we started consuming the tweet feed, a day or so ago), with the number of page requests and the number of times that page got mentioned in tweets.

[Image]

Obviously there’s a lot more we can do with this; we can access the number of followers each twitter user has, along with their location, gender and the sentiment (positive, negative, neutral) of the tweet. From that we can work out some impact from the twitter activity, and we can also add to it data from other sources such as Facebook, LinkedIn and so on to get a fuller picture of the activity around our site. Then, the data we’re gathering in can either be left in MongoDB, or I can use these ODI mappings to either archive it in Hive tables, or export the highlights out to Oracle Database using Sqoop or Oracle Loader for Hadoop.

Categories: BI & Warehousing

Getting the Whole DB2 package, Additional Services Series Pt. 8 [VIDEO]

Chris Foot - Mon, 2014-09-08 14:09

Transcript

Need to give your databases a boost?

Hi, welcome back to RDX! If your organization's handling large, data-intensive workloads, IBM's DB2 for Linux, Unix and Windows is an attractive alternative.

RDX has worked with DB2 since the beginning, and our DB2 solutions are architected to provide a superior level of DB2 database support. From day-to-day operations to strategic decision making, our DB2 solutions arm customers with the experience, skillsets and best practices required to maximize their critical DB2 environments.

RDX also provides support for IBM’s IMS product set which offers the availability, protection, performance and scalability companies need to process online transactions.

Thanks for watching, and be sure to refer to our company resume for more information on our DB2 and IMS services!
 

The post Getting the Whole DB2 package, Additional Services Series Pt. 8 [VIDEO] appeared first on Remote DBA Experts.

Fashionable Tech

Usable Apps - Mon, 2014-09-08 14:07

By Sandra Lee (@SandraLee0415), Oracle Applications User Experience Communications and Outreach Team

“You don’t have to be first; you just have to be better” is a marketing phrase I’ve heard over the years, and it really is true. Take social media hero Facebook. Sure, Myspace and Friendster came first, but Facebook quickly made its way to the top. This trend happens in almost every market that fills a void without consumers even knowing it.

Such is the case with wearable technology.

By now, we are all familiar with the leading wearable devices like Google Glass and Fitbit, but some haven’t caught on in the general public as much as developer and marketing executives would have liked. The lack of buy-in has a lot to do with price, but ease of use plays a part, too. There’s no question that we, as a technology-needy society, want our devices to be fast, efficient, and attractive, while providing real-life benefits. We’ve got socks that give us real-time health stats, collars that track your puppy’s every move, and bands that let you know when your newborn baby is about to wake up. And these are just the beginning.

The one trend in wearables that I’m really excited about is fashion. Geeky glasses and pocket protectors are being replaced by sleek jackets, statement necklaces, and beautiful rings. It takes the saying “he put a ring on it” to a whole new level.

Below are some new ones that might really be game changers:

Cuff

Cuff

This beautiful piece of jewelry doubles as an activity tracker and phone notification system. But what I like most about the Cuff is that it can keep you safe. Being aware of your surroundings is a great start, but I love the feature that actually alerts people if you ever feel threatened walking to your car at night. At prices starting at just $50, it’s one that’s easy to get on board with.

Ringly

Ringly

Keeping in touch with important people has never been more beautiful. Whether you’re in a quiet museum or cheering on the San Francisco 49ers in a loud stadium, this ring will vibrate softly, alerting you to a phone call, text, or important upcoming event.

Epiphany Eyewear

Epiphany Eyewear

These glasses are the perfect kind of nerdy because the cool part is hidden. Camera and HD video recording capabilities let you use these glasses as shades or as prescription glasses.

Will these three featured wearables be the game changers the wearable technology industry has been looking for? And what will the impact be of more fashion and style-conscious wearable technology on enterprise adoption?

What do you think?

Join the Oracle Applications User Experience team and friends on Tuesday, September 23, 2014, for the Oracle Wearable Technology Meetup at the Oracle Technology Network (OTN) Lounge at Oracle OpenWorld 2014, and let us know your thoughts in person. Don your best wearables and discuss the finer points of enterprise use cases, APIs, integrations, user experience, fashion and style considerations for creating wearable tech, and lots more!

While supplies last, there’ll be inexpensive, yet tasteful, gifts for attendees sporting wearable tech.

For more on wearable technology and OAUX, see our Usable Apps story at https://storify.com/usableapps/wearables.

Contribution by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-09-08 13:12
Contribution by Angela Golla, Infogram Deputy Editor

My Oracle Support Patch Conflict Checker Tool
A new My Oracle Support Conflict Checker tool is available from the My Oracle Support Patches & Updates Patch Search results page.

This tool enables you to upload an OPatch inventory and check the patches that you want to apply to your environment for conflicts. If no conflicts are found, you can download the patches. If conflicts are found, the tool looks for an existing resolution that you can download. If no resolution is found, you can request a solution and monitor your request in the Plans region. The details and a training video can be found in Note 1091294.1.