Feed aggregator

A Lot To Listen To

FeuerThoughts - Mon, 2015-07-20 07:40
A Lot To Listen To

Sometimes, if you're lucky,
there is nothing to hear
but the sound of the wind
blowing through trees.

Now you could say:
"That's not much to listen to."
Or you could listen...

Listen
to the rustling, hissing, whispering, sometimes angry sound
of thousands 
of almost silent brushings of leaf against leaf,
of feather-light taps of twig striking twig,
any single act nothing to hear at all
but when the tree is big enough
and the leaves are numerous enough
and the branches reach out 
thinner and thinner
poking out toward the sun
carrying leaves to their destiny,

then you might be able to hear
the sound of the wind
blowing through trees.

It's a lot to listen to,
if you can hear it.




Copyright 2015 Steven Feuerstein
Categories: Development

Focusing on Ext4 and XFS TRIM Operations – Part I.

Kevin Closson - Sun, 2015-07-19 09:29

I’ve been doing some testing that requires rather large file systems. I have an EMC XtremIO Dual X-Brick array from which I provision a 10 terabyte volume. Volumes in XtremIO are always thinly provisioned. The testing I’m doing required me to scrutinize default Linux mkfs(8) behavior for both Ext4 and XFS. This is part 1 in a short series and it is about Ext4.

Discard the Discard Option

The first thing I noticed in this testing was the fantastical “throughput” demonstrated at the array while running the mkfs(8) command with the “-t ext4” option/arg pair. As the following screen shot shows, the “throughput” at the array level was just shy of 72GB/s.

That’s not real I/O…I’ll explain…

EMC XtremIO Dual X-Brick Array During Ext4 mkfs(8). Default Options.

The default options for Ext4 include the discard (TRIM under the covers) option. The mkfs(8) manpage has this to say about the discard option:

Attempt to discard blocks at mkfs time (discarding blocks initially is useful on solid state devices and sparse / thin-provisioned storage). When the device advertises that discard also zeroes data (any subsequent read after the discard and before write returns zero), then mark all not-yet-zeroed inode tables as zeroed. This significantly speeds up filesystem initialization. This is set as default.

I’ve read that quoted text at least eleventeen times but the wording still sounds like gibberish-scented gobbledygook to me–well, except for the bit about significantly speeding up filesystem initialization.

Since XtremIO volumes are created thin I don’t see any reason for mkfs to take action to make it, what, thinner?  Please let me share test results challenging the assertion that the discard mkfs option results in faster file system initialization. This is the default functionality after all.

In the following terminal output you’ll see that the default mkfs options take 152 seconds to make a file system on a freshly-created 10TB XtremIO volume:


# time mkfs -t ext4 /dev/xtremio/fs/test
mke2fs 1.43-WIP (20-Jun-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
real 2m32.055s
user 0m3.648s
sys 0m17.280s
#

The mkfs(8) Command Without Default Discard Functionality

Please bear in mind that the default 152-second result is not due to languishing on pathetic physical I/O. The storage is fast. Please consider the following terminal output where I passed in the non-default -E option with the nodiscard argument. The file system creation took 4.8 seconds:

# time mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

real 0m4.856s
user 0m4.264s
sys 0m0.415s
#

I think 152 seconds down to 4.8 makes the point that with proper, thinly-provisioned storage the mkfs discard option does not “significantly speed up filesystem initialization.” But initializing file systems is not something one does frequently so investigation into the discard mount(8) option was in order.
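
As an aside, before relying on discard behavior at mkfs or mount time it can be worth confirming that the block device actually advertises TRIM support to the host. A minimal sketch using the util-linux lsblk(8) command; the device path matches the volume used above, and the check itself is generic rather than part of the original test:

# lsblk --discard /dev/xtremio/fs/test

Non-zero values in the DISC-GRAN and DISC-MAX columns indicate that the device reports discard support to the kernel.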

Taking Ext4 For A Drive

Since I had this 10TB Ext4 file system–and a fresh focus on file system discard (storage TRIM) features–I thought I’d take it for a drive.

Discarded the Default Discard But Added The Non-Default Discard

While the default mkfs(8) command includes discard, the mount(8) command does not. I decided to investigate this option while unlinking a reasonable number of large files. To do so I ran a simple script (shown below) that copies 64 files of 16 gigabytes each–in parallel–into the Ext4 file system. I then timed a single invocation of the rm(1) command to remove all 64 of these files. Unlinking a file in a Linux file system is a metadata operation; however, when the discard option is used to mount the file system, each unlink operation includes TRIM operations sent to storage. The following screen shot of the XtremIO performance dashboard was taken while the rm(1) command was running. The discard mount option turns a metadata operation into a rather costly storage operation.

Array Level Activity During Bulk rm(1) Command Processing. Ext4 (discard mount option)

The following terminal output shows the test step sequence used to test the discard mount option:

# umount /mnt ; mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test; mount -t ext4 -o discard /dev/xtremio/fs/test /mnt
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# cd mnt
# cat > cpit
for i in {1..64}; do ( dd if=/data1/tape of=file$i bs=1M oflag=direct )& done
wait
# time sh ./cpit > /dev/null 2>&1 

real 5m31.530s
user 0m2.906s
sys 8m45.292s
# du -sh .
1018G .
# time rm -f file*

real 4m52.608s
user 0m0.000s
sys 0m0.497s
#

The following terminal output shows the same test repeated with the file system being mounted with the default (thus no discard) mount options:

# cd ..
# umount /mnt ; mkfs -t ext4 -E nodiscard /dev/xtremio/fs/test; mount -t ext4 /dev/xtremio/fs/test /mnt
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=16 blocks
335544320 inodes, 2684354560 blocks
134217728 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81920 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
 2560000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# cd mnt
# cat > cpit
for i in {1..64}; do ( dd if=/data1/tape of=file$i bs=1M oflag=direct )& done
wait
#
# time sh ./cpit > /dev/null 2>&1 

real 5m31.526s
user 0m2.957s
sys 8m50.317s
# time rm -f file*

real 0m16.398s
user 0m0.001s
sys 0m0.750s
#

This testing shows that mounting an Ext4 file system with the discard mount option dramatically impacts file removal operations. The default mount options (thus no discard option) performed the rm(1) command in 16 seconds whereas the same test took 292 seconds when mounted with the discard mount option.

So how can one perform the important house-cleaning that comes with TRIM operations?

The fstrim(8) Command

Ext4 supports user-invoked, online TRIM operations on mounted file systems. I would advise people to forego the discard mount option and opt for occasionally running the fstrim(8) command. The following is an example of how long it takes to execute fstrim on the same 10TB file system stored in an EMC XtremIO array. I think that foregoing the taxation of commands like rm(1) is a good thing–especially since running fstrim is allowed on mounted file systems and only takes roughly 11 minutes on a 10TB file system.

# time fstrim -v /mnt
/mnt: 10908310835200 bytes were trimmed

real 11m29.325s
user 0m0.000s
sys 2m31.370s
#
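
If you go the fstrim(8) route, the run can be scheduled so nobody has to remember it. The following is only a sketch of one way to do it with cron; the schedule, mount point and log file are assumptions rather than anything from the testing above:

# Root crontab entry (sketch): trim /mnt every Sunday at 02:30 and keep the output
30 2 * * 0 /usr/sbin/fstrim -v /mnt >> /var/log/fstrim.log 2>&1
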
Summary

If you use thinly-provisioned storage and want file deletion in Ext4 to return space to the array you have a choice. You can choose to take serious performance hits when you create the file system (default mkfs(8) options) and when you delete files (optional discard mount(8) option) or you can occasionally execute the fstrim(8) command on a mounted file system.

Up Next

The next post in this series will focus on XFS.


Filed under: oracle

Announcing “SLOB Recipes”

Kevin Closson - Fri, 2015-07-17 11:28

I’ve started updating the SLOB Resources page with links to “recipes” for certain SLOB testing. The first installment is the recipe for loading 8TB scale SLOB 2.3 Multiple Schema Model with a 2-Socket Linux host attached to EMC XtremIO. Recipes will include (at a minimum) the relevant SLOB program output (e.g., setup.sh or runit.sh), init.ora and slob.conf.

Please keep an eye on the SLOB Resources page for updates…and don’t miss the first installment. It’s quite interesting.

SLOB-recipes


Filed under: oracle

Beat 39

Floyd Teter - Thu, 2015-07-16 18:00
Let's start today's thought with a tidbit from the Standish Group's 2013 Chaos Report.  In that report, the Standish Group cheerfully shares that IT project success came in at 39%...cheerful because that is an improvement.  In other words, 6 out of 10 IT projects are failing to meet schedule, cost and quality objectives and we're thinking that's good news.  Yikes!!!

If we look at the numbers in SaaS carefully - regardless of vendor - we see a pretty consistent gap between sales and "go live".  Guess how large the gap is?  Yeah, about 61%.  Arithmetic anybody?  Granted that my access to data is somewhat limited here but, even with my small sample size, it's one of those things that make me "stop and go hmmm".

The upshot?  In the developing space of SaaS, I think we may have all underestimated the level of difficulty in implementing those nifty SaaS applications.  At the very least, it seems like we're missing the boat on how to move from vision to achievement.

Enablement.  SaaS customers need tools that ease the implementation and use of the applications.  And preferably things that scale...inventing the tool every time you tackle the project buys nothing but headaches.  But I think good tools for enablement are the key if we're ever going to "Beat 39".

More on this in later posts.  I think I may be focusing on this for a bit.



Oracle Priority Support Infogram for 16-JUL-2015

Oracle Infogram - Thu, 2015-07-16 16:33

Time to Patch!

The British actor David Niven once said: “After 50 it’s just patch, patch, patch”. Technology doesn’t wait until you are 50:

Oracle Critical Patch Update for July 2015

Dear Oracle Security Alert Subscriber,

The Critical Patch Update for July 2015 was released on July 14th, 2015.
Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes the list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities for each product suite, and links to other important documents. Supported products that are not listed in the "Affected Products and Components" section of the advisory do not require new patches to be applied.

Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. Critical Patch Update Advisories are available at the following location:

Oracle Technology Network: http://www.oracle.com/technetwork/topics/security/alerts-086861.html

The Critical Patch Update Advisory for July 2015 is available at the following location:

Oracle Technology Network: http://www.oracle.com/technetwork/topics/security/cpujul2015-2367936.html

Important information can also be found at: https://blogs.oracle.com/security/

The next four dates for Critical Patch Updates are:

October 20, 2015
January 19, 2016
April 19, 2016
July 19, 2016

RDBMS


Big Data


WebLogic

OTD active/standby failover, from the WebLogic Partner Community EMEA blog.

And from the same source:


Java


SOA

Managing Idempotence in SOA Suite, from the SOA & BPM Partner Community Blog.

From the same blog comes this interesting series on some of the controversies in the SOA community: SOA Mythbusters

WebCenter

Part 1: An Overview of the Oracle.com Localization Framework in WCS, from PDIT Collaborative Application Services.

From Proactive Support - WebCenter Content: Oracle WebCenter Content (WCC) 11.1.1.8.13 Bundle Patch is Here!

OBIEE


Hyperion

Patch Set Update: Hyperion Calculation Manager 11.1.2.4.003, from Business Analytics - Proactive Support.

Ops Center

New Books in 12.3, from the Ops Center blog.

Oracle Technology

It’s hard to put Jeff Taylor’s Weblog into a single product box, as you can tell from the wide-ranging list of blog posts in this Table of Contents at his blog.

EBS

From the Oracle E-Business Suite Support blog:


From the Oracle E-Business Suite Technology blog:

First patch set (5.0.1) released for APEX 5.0

Dimitri Gielis - Thu, 2015-07-16 16:22
I know some people wait for the first patch set that becomes available after a major Oracle APEX release... today you no longer have to wait: APEX 5.0.1 is now available.

In the patch set notes you can read what changed.

If you're still on APEX 4.x you can go immediately to APEX 5.0.1; you only need to download the latest version from OTN.

If you're already on APEX 5.0, you can download the patch set from support.oracle.com, search for patch number 21364820. Applying the patch took less than 5 minutes in my environment.



This patch set updates the Universal Theme too, so don't forget to update your images folder. When you log in to APEX after the patch, it will check whether you have the correct images folder; if not, it will give you an alert. Although I updated the images directory, I still got that alert due to browser caching. Refresh your page and it should be OK.

Note that it's important to be on APEX 5.0.1 when you use APEX Office Print - currently available in beta for a select audience, with public release at the end of this month (July 2015). Behind the scenes we use the APEX_JSON and APEX_WEB_SERVICE packages, which got an update in APEX 5.0.1.
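
For anyone who hasn't used APEX_JSON yet, here is a minimal sketch of generating a JSON document with the package; the attribute names and values are made up purely for illustration:

DECLARE
  l_output CLOB;
BEGIN
  apex_json.initialize_clob_output;
  apex_json.open_object;                          -- {
  apex_json.write('product', 'APEX Office Print');
  apex_json.write('version', '5.0.1');
  apex_json.close_object;                         -- }
  l_output := apex_json.get_clob_output;
  apex_json.free_output;
  dbms_output.put_line(l_output);
END;
/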

And finally, there's a nice new D3 chart available in APEX 5.0.1 called "D3 Collapsible Treemap Chart".


Happy upgrading...

Update 18-JUL-2015: there are still a couple of known issues, so please read those too and install the patch set exceptions where necessary.
Categories: Development

How to Hide Actions in OBPM 12c Workspace

Jan Kettenis - Thu, 2015-07-16 13:17
In this article I explain how to hide the actions in the drop-down in Workspace.

In some situations you may need to hide the actions that are shown in the Actions drop-down in Workspace.


One way to do so is by configuring the access that users with a specific Workspace role have for a specific task (not to be confused with a swim-lane role), by going to the task definition -> Access -> Actions. For example, if you want to disable that an assignee can acquire or reassign a task, you can uncheck the "Acquire" and "Reassign" check boxes in the "Assignees" column.


You can also uncheck the outcomes, for example the "APPROVE" and "REJECT" actions in the picture above. However, this means that the assignee cannot choose the outcomes at all, because the buttons are then not rendered either. When you uncheck all outcomes, the assignee practically cannot execute the activity at all, which is probably not what you want. As a matter of fact, you will also not be able to commit the task using the updateTaskOutcome() operation on the TaskService, as you will get an error when trying to do so.



A more practical case for hiding the outcomes from the drop-down menu is where the user should not be able to choose them from there, but should be able to choose the actions using buttons on the screen. An example would be where you need to submit data through the form, because it has to update data in the database directly (instead of via a service call in the process). This you can do through the Configure option in the task definition.


When you check "Require payload review before approval" the user will not be able to chose any action from the drop down. However, the buttons will be available on the screen.

Oracle APEX 5.0.1 now available

Patrick Wolf - Thu, 2015-07-16 06:48
Oracle Application Express 5.0.1 is now released and available for download. If you wish to download the full release of Oracle Application Express 5.0.1, you can get it from the Downloads page on OTN. If you have Oracle APEX 5.0.0 … Continue reading
Categories: Development

Oracle LTRIM Function with Examples

Complete IT Professional - Thu, 2015-07-16 06:00

The LTRIM function removes characters from the left side of a string. Learn more about it and see some examples in this article.

Purpose of the Oracle LTRIM Function

The purpose of the LTRIM function is to remove a specified character from the left side of a string. The L in LTRIM stands for “Left”, and is the opposite of the RTRIM or “Right” Trim function.

 

Syntax

The syntax of the Oracle LTRIM function is:

LTRIM( input_string, [trim_string] )

 

Parameters

The parameters of the LTRIM function are:

  • input_string (mandatory): This is the string to remove characters from the left-hand side of.
  • trim_string (optional): This is the string to be removed from the input_string. If it is not specified, a space is used, and all spaces are removed from the left of the input_string.

Some points to note about LTRIM:

  • If the trim_string is a literal value, you need to include it inside single quotes. For example, to remove an underscore, you need to specify it as ‘_’
  • Both input_string and trim_string can be of data type CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB.
  • The returned value is a VARCHAR2 data type if the input types are CHAR, VARCHAR2, NCHAR or NVARCHAR2, and the returned value is a LOB data type if the input types are LOB or CLOB.
  • The trim_string can be more than one character
  • The function removes each individual value inside trim_string, not the string as a whole. See the Examples section below for more information.

 

Can You Use Oracle LTRIM To Remove Leading Zeroes?

Yes, you can. It’s one of the more common uses for the function that I’ve seen.

This can be done as:

LTRIM(value, ‘0’)

See the Examples section below for more information.

 

Can You Use Oracle LTRIM with RTRIM?

Yes, you can, and it works in the same way as just using the TRIM function.

You’ll need to use one inside the other, and it doesn’t really matter which one is used first.

So, you can use either LTRIM(RTRIM(value)) or RTRIM(LTRIM(value)).

See the Examples section below for more information.

 

Are There Other Ways for Oracle to Trim Strings?

Yes, there are a few ways you can trim strings in Oracle:

  • Use LTRIM or RTRIM
  • Use TRIM
  • Use REPLACE
  • Use SUBSTR if you need more advanced trimming features
  • Use regular expressions

 

Examples of the LTRIM Function

Here are some examples of the Oracle LTRIM function. I find that examples are the best way for me to learn about code, even with the explanation above.

Example 1

This example demonstrates a simple LTRIM with no trim value specified.

SELECT LTRIM('    Complete IT Professional')
AS LTRIM_EXAMPLE FROM DUAL;

Result:

LTRIM_EXAMPLE
-------------------------
Complete IT Professional

The extra spaces are removed from the original value.

 

Example 2

This example uses a specific value to trim.

SELECT LTRIM('___Complete IT Professional', '_')
AS LTRIM_EXAMPLE FROM DUAL;

Result:

LTRIM_EXAMPLE
-------------------------
Complete IT Professional

The underscores are removed from the original value.

Example 3

This example uses LTRIM with several characters as the string to trim.

SELECT LTRIM('; ; ; ; ; Complete IT Professional', ' ; ')
AS LTRIM_EXAMPLE FROM DUAL;

Result:

LTRIM_EXAMPLE
-------------------------
Complete IT Professional

Both the spaces and semicolons are removed from the original value.

 

Example 4

This example uses LTRIM  on data in a table, instead of providing a value.

SELECT country, LTRIM(country, 'U')
AS LTRIM_EXAMPLE FROM customers;

Result:

COUNTRY   LTRIM_EXAMPLE
--------  -------------
USA       SA
USA       SA
Canada    Canada
UK        K
USA       SA
(null)    (null)
France    France
(null)    (null)

The capital U is removed from several values.

 

Example 5

This example uses LTRIM on data in a table with several characters in the trim parameter.

SELECT full_address, LTRIM(full_address, '1')
AS LTRIM_EXAMPLE FROM customers;

Result:

FULL_ADDRESS        LTRIM_EXAMPLE
------------------  ------------------
10 Long Road        0 Long Road
50 Market Street    50 Market Street
201 Flinders Lane   201 Flinders Lane
8 Smith Street      8 Smith Street
14 Wellington Road  4 Wellington Road
80 Victoria Street  80 Victoria Street
5 Johnson St        5 Johnson St
155 Long Road       55 Long Road

The “1” characters are removed from several address values.

 

Example 6

This example uses LTRIM  with 0 as the parameter.

SELECT LTRIM('000Complete IT Professional', 0)
AS LTRIM_EXAMPLE FROM DUAL;

Result:

LTRIM_EXAMPLE
-------------------------
Complete IT Professional

The zeroes are removed from the original value.

 

Example 7

This example uses both LTRIM and RTRIM in the one expression.

SELECT LTRIM(RTRIM('___Complete IT Professional__', '_'), '_')
AS RTRIM_EXAMPLE FROM DUAL;

Result:

RTRIM_EXAMPLE
-------------------------
Complete IT Professional

The underscores are removed from both sides of the original value.

 

Example 8

This example uses Unicode characters as the trim parameter.

SELECT LTRIM('ééComplete IT Professional', 'é')
AS LTRIM_EXAMPLE FROM DUAL;

Result:

LTRIM_EXAMPLE
-------------------------
Complete IT Professional

The accented “e” character is removed from the original value.

Similar Functions

Some functions which are similar to the LTRIM function are:

  • RTRIM – Trims characters from the right of the string. The opposite of the LTRIM function.
  • TRIM – Trims characters from both the left and right side of the string. A combination of LTRIM and RTRIM.
  • SUBSTR – Extracts one value from a larger value. Not really a TRIM function but does something similar.
  • REPLACE – Replaces occurrences of one text value with another.

You can find a full list of Oracle functions here.

Lastly, if you enjoy the information and career advice I’ve been providing, sign up to my newsletter below to stay up-to-date on my articles. You’ll also receive a fantastic bonus. Thanks!

Image courtesy of digitalart / FreeDigitalPhotos.net

Categories: Development

Were you at Alliance, Collaborate, Interact this year, or wished you were?

PeopleSoft Technology Blog - Wed, 2015-07-15 18:40

This year, as well as uploading my PDF presentation, I've uploaded a couple of additional files.

The one you may find interesting is a short form Security Check List.

You can find it here: 

This is a supplement to the Securing Your PeopleSoft Application Red Paper (it includes the link) and it covers a number of points I've discussed with customers over the years. I include most of the check list as slides in my session but the PDF is an expanded set. The check list also contains a number of useful links.

In the discussions with customers we frequently find there are topics they have overlooked because they don't appear directly related to PeopleSoft security, but they are part of the overall infrastructure security and are often managed by people outside of the PeopleSoft team. It's all the more important, as teams are reduced in size, that you build collaborative, virtual teams in the rest of your organization. I hope the check list will also provide the conversation starters to help build those virtual teams.

If you think some of the points are  topics by themselves, let me know and I can work on building out the information.

I appreciate any and all feedback. 

Connecting to DBaaS, did you know this trick?

Kris Rice - Wed, 2015-07-15 16:10
SSH Tunneling Trick

The new command line is a must try, says 10 out of 10 people that built it. The tool has SSH tunneling of ports built in, as described by Barry. This means you can script opening your SSH tunnel from the command line and run sql very quickly. Here's the one I used recently at Kscope15. Now the trick is that once this port is forwarded, any tool can now use it. In case
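
The post is cut off in the aggregator feed, but the general shape of the trick is standard SSH local port forwarding. A hedged sketch; the user, host name and service name below are assumptions, not values from the post:

# Forward local port 1521 to the database listener on the DBaaS host
ssh -L 1521:localhost:1521 opc@my-dbaas-host.example.com

# Once the tunnel is up, any tool can use the forwarded port, e.g. SQLcl
sql klrice@//localhost:1521/PDB1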

Starting a Process using a Timer with a Duration in Oracle BPM

Jan Kettenis - Wed, 2015-07-15 10:34
In this blog article I explain three options to configure a timer start event based upon some configurable duration.

As far as I know firing a timer based on a duration is only applicable in case of a Timer Event Sub-process. Let me know if you think otherwise.

In case of an Event Sub-process the timer starts at the same moment the process instance starts. There is no way to change it at any point after that. Given this, you can use one of the following three options that I discuss below. If you know of some other way, again: let me know!

Input Argument
You can use an element that is part of the request of the process. In the following example there is one input argument called 'expiry' of type duration which is mapped to a process variable:

The process variable can then be used to start the timer using a straightforward XPath assignment:



Preference in composite.xml
You can also configure a preference in the composite.xml file. Such a preference belongs to a specific component, and starts with "preference" (or "bpel.preference", but you can leave "bpel." out). Using the dot as a delimiter, you can postfix that with the preference name to use:

You can then set the timer using the ora:getPreference() XPath function. All these preferences are strings, but if the value is an ISO duration it will automatically be converted to a duration.
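
As an illustration of this option, the preference and the timer expression might look as follows; the component name, preference name and duration are assumptions, and this is a sketch rather than a copy of the actual project:

<!-- composite.xml (sketch): ISO-8601 duration stored as a component preference -->
<component name="MyTimerProcess">
  ...
  <property name="bpel.preference.expiry">PT15M</property>
</component>

Timer expression: ora:getPreference('expiry')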


Domain Value Map
A third option is to configure the duration using a Domain Value Map or DVM for short. In the following example a DVM file is used for configuration parameters as a name-value pair:

 

The timer can be instantiated using the dvm:lookupValue() XPath function, as shown in the following picture:


What to Choose?
This depends on the requirements.

If your consumer should be able to determine the duration, you should pass it on as a request parameter.

If the business wants to change it at run time, then using the DVM is the best option. The initial value is determined at design time but can be changed at run time via SOA Composer (the same tool via which business rules can be changed).

Otherwise the composite preference is your weapon of choice. For this preference too, the initial value is determined at design time, but it can still be changed after deployment by IT using the MBean Browser in Enterprise Manager.

Another new APEX-based public website goes live

Tony Andrews - Wed, 2015-07-15 07:48
Another APEX public website I worked on with Northgate Public Services has just gone live: https://londontribunals.org.uk/ This is a website to handle appeals against parking fines and other traffic/environmental fines issued by London local authorities. It is built on APEX 4.2 using a bespoke theme that uses the Bootstrap framework. A responsive design has been used so that the site works

APEX 5 - Opening and Closing Modal Window

Denes Kubicek - Wed, 2015-07-15 05:56
This example shows how to open a Modal Page from any element in your application. It is easy to get it working using standard elements like a button or a link in a report. However, it is not 100% clear how to get it working with some other elements which don't have the redirect functionality built in (item, region title, custom links, etc.). This example also shows how to get the success message displayed on the parent page after closing the Modal Page.

Categories: Development

Shift Command in Shell Script in AIX and Linux

Pakistan's First Oracle Blog - Tue, 2015-07-14 22:42
Shell in Unix never ceases to surprise. I stumbled upon the 'shift 2' command in AIX a few hours ago and it's very useful.

The 'shift n' command shifts the parameters passed to a shell script by 'n' positions to the left.

For example:

if you have a shell script which takes 3 parameters like:

./mytest.sh arg1 arg2 arg3

and you use shift 2 in your shell script, then the values of arg1 and arg2 will be lost and the value of arg3 will get assigned to $1.

For example:

if you have a shell script which takes 2 parameters like:

./mytest.sh arg1 arg2

and you use shift 2, then the values of both arg1 and arg2 will be lost.

Following is a working example of shift command in AIX:

testsrv>touch shifttest.sh

testsrv>chmod a+x shifttest.sh

testsrv>vi shifttest.sh

testsrv>cat shifttest.sh
#!/bin/ksh
SID=$1
BACKUP_TYPE=$2
echo "Before Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"
shift 2
echo "After Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"


testsrv>./shifttest.sh orc daily

Before Shift: orc and daily => SID=orc and BACKUPTYPE=daily
After Shift:  and  => SID=orc and BACKUPTYPE=daily


Note that the values of the arguments passed have been shifted to the left, but the values of the variables have remained intact.
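
A common use of shift is to walk through all of the positional parameters one at a time, for instance when a script accepts a variable number of arguments. A small illustrative sketch (not from the original example):

#!/bin/ksh
# Process every argument passed to the script, one per loop iteration.
while [ $# -gt 0 ]; do
    echo "Processing argument: $1"
    shift    # drop $1; $2 becomes $1 and $# decreases by one
done
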
Categories: DBA Blogs

This Is Not Glossy Marketing But You Still Won’t Believe Your Eyes. EMC XtremIO 4.0 Snapshot Refresh For Agile Test / Dev Storage Provisioning in Oracle Database Environments.

Kevin Closson - Tue, 2015-07-14 19:18

This is just a quick blog post to direct readers to a YouTube video I recently created to help explain to someone how flexible EMC XtremIO Snapshots are. The power of this array capability is probably most appreciated in the realm of provisioning storage for Test and Development environments.

Although this is a silent motion picture I think it will speak volumes–or at least 1,000 words.

Please note: This is just a video demonstration to show the base mechanisms and how they relate to Oracle Database with Automatic Storage Management. This is not a scale demonstration. XtremIO snapshots are supported in the thousands, and extremely powerful “sibling trees” are fully supported.

Not Your Father’s Snapshot Technology

No storage array on the market is as flexible as XtremIO in the area of writable snapshots. This video demonstration shows how snapshots allow the administrator of a “DEV” host–using Oracle ASM–to quickly refresh to current or past versions of ASM disk group contents from the “PROD” environment.

The principles involved in this demonstration are:

  1. XtremIO snapshots are crash consistent.
  2. XtremIO snapshots are immediately created, writeable and space efficient. There is no fixed “donor” relationship. Snapshots can be created from other snapshots and refreshes can go in any direction.
  3. XtremIO snapshot refresh does not involve the host operating system. Snapshot and volume contents can be immediately “swapped” (refreshed) at the array level without any action on the host.

Regarding number 3 on that list, I’ll point out that while the operating system does not play a role in the snapshot operations per se, applications will be sensitive to contents of storage immediately changing. It is only for this reason that there are any host actions at all.

Are Host Operations Involved? Crash Consistent Does Not Mean Application-Coherent

The act of refreshing XtremIO snapshots does not change the SCSI WWN information so hosts do not have any way of knowing the contents of a LUN have changed. In the Oracle Database use case the following must be considered:

  1. With a file system based database one must unmount the file systems before refreshing a snapshot otherwise the file system will be corrupted. This should not alarm anyone. A snapshot refresh is an instantaneous content replacement at the array level. Operationally speaking, file system based databases only require database instance shutdown and the unmounting of the file system in preparation for application-coherent snapshot refresh.
  2. With an ASM based database one must dismount the ASM disk group in preparation for snapshot refresh. Beyond that, ASM database snapshot restore does not involve system administration in any way (a command sketch follows this list).
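
For the ASM case, a hedged sketch of the host-side sequence around an array-level refresh is shown below; the disk group name is hypothetical, and the refresh itself is performed via the XtremIO UI/CLI and is not shown:

# Sketch only: stop the DEV instance and dismount the disk group
# (run asmcmd with the Grid Infrastructure environment set)
sqlplus -s / as sysdba <<EOF
shutdown immediate
EOF
asmcmd umount DATA_DEV

# ... refresh the snapshot / volume contents at the array level ...

# Remount the refreshed disk group and restart the instance
asmcmd mount DATA_DEV
sqlplus -s / as sysdba <<EOF
startup
EOF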

The video is 5 minutes long and it will show you the following happenings along a timeline:

  1. “PROD” and “DEV” database hosts (one physical and one virtual) each showing the same Oracle database (identical DBID) and database creation time as per dictionary views. This establishes the “donor”<->clone relationship. DEV is a snapshot of PROD. It is begat of a snapshot of a PROD consistency group
  2. A single-row token table called  “test” in the PROD database has value “1.” The DEV database does not even have the token table (DEV is independent of PROD…it’s been changing..but its origins are rooted in PROD as per point #1)
  3. At approximately 41 seconds into the video I take a snapshot of the PROD consistency group with “value 1” in the token table. This step prepares for “time travel” later in the demonstration
  4. I then update the PROD token table to contain the value “42”
  5. At ~2:02 into the video I have already dismounted DEV ASM disk groups and started clobbering DEV with the current state of PROD via a snapshot refresh. This is “catching up to PROD”
    1. Please note: No action at all was needed on the PROD side. The refresh of DEV from PROD is a logical, crash-consistent point in time image
  6. At ~2:53 into the video you’ll see that the DEV database instance has already been booted and that it has value “42” (step #4). This means DEV has “caught up to PROD”
  7. At ~3:32 you’ll see that I use dd(1) to copy the redo LUN over the data LUN on the DEV host to introduce ASM-level corruption
  8. At 3:57 the DEV database is shown as corrupted. In actuality, the ASM disk group holding the DEV database is corrupted
  9. In order to demonstrate traveling back in time, and to recover from the dd(1) corrupting of the ASM disk group,  you’ll see at 4:31 I chose to refresh from the snapshot I took at step #3
  10. At 5:11 you’ll see that DEV has healed from the dd(1) destruction of the ASM disk group, the database instance is booted, and the value in the token table is reverted to 1 (step #3) thus DEV has traveled back in time

Please note: In the YouTube box you can click to view full screen or on youtube.com if the video quality is a problem:

More Information

For information on the fundamentals of EMC XtremIO snapshot technology please refer to the following EMC paper: The fundamentals of XtremIO snapshot technology

For independent validation of XtremIO snapshot technology in a highly-virtualized environment with Oracle Database 12c please click on the following link: Principled Technologies, Inc Whitepaper

For a proven solution whitepaper showing massive scale data sharing with XtremIO snapshots please click on the following link: EMC Whitepaper on massive scale database consolidation via XtremIO


Filed under: oracle

Coming Soon - PeopleTools Customer Beta Program

PeopleSoft Technology Blog - Tue, 2015-07-14 15:07
The PeopleTools team continues to push forward, ever improving the features and capabilities of PeopleTools.  Recently, you may have seen some of the planned enhancements for PeopleTools 8.55 discussed on MyOracleSupport in the Planned Features and Enhancements area.  This document has replaced the Release Value Proposition that has been used previously to highlight features to look for in the upcoming PeopleTools release. 

There are a number of cool features that we’re working on, including the Cloud Deployment Architecture (CDA) which will provide greater flexibility in the installation and patching of environments.  Additional planned features include Analytics for PeopleSoft Update Manager (PUM), Fluid dashboards/homepages and Simplified Analytics….just to name a few.

 We plan to kick off the PeopleTools 8.55 Beta Program in the relatively near future, and have an opening for a customer who’s willing to closely partner with us.  If you are looking to get your hands on the next release so that you can thoroughly test out some of these features in your own environment to see the benefits, perhaps you are the one we’re looking for.  Does your team have the skills and desire to take beta code and run with it?  Can your organization get a standard beta trial license agreement signed promptly?  We want to work with a customer that’s going to dive in, and really exercise the new features - If that’s you, email me (mark.hoernemann@oracle.com) and let’s talk.  Please keep in mind that this is a small beta – I’ve only got room for one, maybe two customers.   

July 2015 Critical Patch Update Released

Oracle Security Team - Tue, 2015-07-14 14:59

Hello, this is Eric Maurice.

Oracle today released the July 2015 Critical Patch Update. The Critical Patch Update program is Oracle’s primary mechanism for the release of security fixes across all Oracle products, including security fixes intended to address vulnerabilities in third-party components included in Oracle’s product distributions.

The July 2015 Critical Patch Update provides fixes for 193 new security vulnerabilities across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager, Oracle E-Business Suite, Oracle Supply Chain Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Communications Applications, Oracle Java SE, Oracle Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.

Out of these 193 fixes, 44 are for third-party components included in Oracle products distributions (e.g., Qemu, Glibc, etc.)

This Critical Patch Update provides 10 fixes for the Oracle Database, and 2 of the Database vulnerabilities fixed in today’s Critical Patch Update are remotely exploitable without authentication. The most severe of these database vulnerabilities has received a CVSS Base Score of 9.0 for the Windows platform and 6.5 for Linux and Unix platforms. This vulnerability (CVE-2015-2629) reflects the availability of new Java fixes for the Java VM in the database.

With this Critical Patch Update, Oracle Fusion Middleware receives 39 new security fixes, 36 of which are for vulnerabilities which are remotely exploitable without authentication. The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5.

This Critical Patch Update also includes a number of fixes for Oracle applications. Oracle E-Business Suite gets 13 fixes, Oracle Supply Chain Suite gets 7, PeopleSoft Enterprise gets 8, and Siebel gets 5 fixes. Rounding up this list are 2 fixes for the Oracle Commerce Platform.

The Oracle Communications Applications receive 2 new security fixes. The highest CVSS Base Score for these vulnerabilities is 10.0; this score is for vulnerability CVE-2015-0235, which affects Glibc, a component used in the Oracle Communications Session Border Controller. Note that this same Glibc vulnerability is also addressed in a number of Oracle Sun Systems products.

Also included in this Critical Patch Update are 25 fixes for Oracle Java SE. 23 of these Java SE vulnerabilities are remotely exploitable without authentication. 16 of these Java SE fixes are for Java client-only, including one fix for the client installation of Java SE. 5 of the Java fixes are for client and server deployment. One fix is specific to the Mac platform. And 4 fixes are for JSSE client and server deployments. Please note that this Critical Patch Update also addresses a recently announced 0-day vulnerability (CVE-2015-2590), which was being reported as actively exploited in the wild.

This Critical Patch Update addresses 25 vulnerabilities in Oracle Berkeley DB, and none of these vulnerabilities are remotely exploitable without authentication. The highest CVSS Base score reported for these vulnerabilities is 6.9.

Note that the CVSS standard was recently updated to version 3.0. In a previous blog entry, Darius Wiles highlighted some of the enhancements introduced by this new version. Darius will soon publish another blog entry to discuss this updated CVSS standard and its implication for Oracle’s future security advisories. Note that the CVSS Base Score reported in the risk matrices in today’s Critical Patch Update were based on CVSS v2.0.

For More Information:

The July 2015 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpujul2015-2367936.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance

ORDS - Auto REST table feature

Kris Rice - Tue, 2015-07-14 10:47
Got a question on how easy it is to use ORDS to perform insert | update | delete on a table. Here are the steps.

1) Install ORDS (cmd line or there's a new wizard in sqldev)

2) Enable the schema and table, in this case klrice.emp (again there's a wizard in sqldev):

BEGIN ORDS.ENABLE_SCHEMA(p_enabled => TRUE, p_schema => 'KLRICE',
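
The feed truncates the post in the middle of the PL/SQL block. For reference, a complete pair of enable calls of this general shape might look like the sketch below; the parameter values are assumptions and the exact signatures should be checked against your ORDS version:

BEGIN
  ORDS.ENABLE_SCHEMA(p_enabled             => TRUE,
                     p_schema              => 'KLRICE',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'klrice',
                     p_auto_rest_auth      => FALSE);

  -- AutoREST-enable the table so GET/POST/PUT/DELETE map to select/insert/update/delete
  ORDS.ENABLE_OBJECT(p_enabled      => TRUE,
                     p_schema       => 'KLRICE',
                     p_object       => 'EMP',
                     p_object_type  => 'TABLE',
                     p_object_alias => 'emp');
  COMMIT;
END;
/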
