Feed aggregator

Passing more than 10 values with apex.server.process

Andrew Tulley - Fri, 2014-02-07 15:53

You may be familiar with the apex.server.process function exposed by Apex’s Javascript API. It allows you to asynchronously interact with Apex Application Processes.

A simple example would be:

Apex Application Process


HTP.p('You passed "'||APEX_APPLICATION.g_x01 ||'" as the value for x01. ');

HTP.p('You passed "'||APEX_APPLICATION.g_x02 ||'" as the value for x02. ');

HTP.p('You passed "'||APEX_APPLICATION.g_x03 ||'" as the value for x03. ');

Javascript

apex.server.process (
  "MY_APP_PROCESS"
,   {   x01: 'my first custom value'
    ,   x02: 'my second custom value'
    ,   x03: 'my third custom value'
    }
,   {   dataType: 'text'
    ,   success: function(pData){ alert(pData); }
    }
);

If you were to create the Application Process “MY_APP_PROCESS” and run the Javascript above, you’d see an alert popup:

--------

You passed "my first custom value" as the value for x01.
You passed "my second custom value" as the value for x02.
You passed "my third custom value" as the value for x03.

--------

You can use x01 through x10 to pass up to 10 parameters to your Application Process. What if you want to pass more than 10 parameters, though? To do this, you first need to create a number of Application Items. You might like to call them G_11, G_12, G_13, etc.

You can then set the values of these items in session state (and hence make them available in your Application Process) by doing the following:

apex.server.process ( 
  "MY_APP_PROCESS"
,   {   x01: 'my first custom value'
    ,   x02: 'my second custom value'
    ,   x03: 'my third custom value'
    ,   p_arg_names: ['G_11','G_12','G_13']
    ,   p_arg_values: ['My 11th custom value','My 12th custom value','My 13th custom value']
    }
 , {    dataType: 'text'
    ,   success: function(pData){alert(pData)}
    }
);

Referencing these values inside your Application Process is simply a case of using Bind Variable syntax, e.g.:


HTP.p('You passed "'||APEX_APPLICATION.g_x01 ||'" as the value for x01. ');

HTP.p('You passed "'||APEX_APPLICATION.g_x02 ||'" as the value for x02. ');

HTP.p('You passed "'||APEX_APPLICATION.g_x03 ||'" as the value for x03. ');

HTP.p('You passed "'||:G_11 ||'" as the value for G_11. ');


Command Line Attachment to Oracle Support Service Request

Jeremy Schneider - Fri, 2014-02-07 13:55

For those who haven’t looked at this in a while: these days, it’s dirt simple to attach a file to your SR directly from the server command line.

curl -T /path/to/attachment.tgz \
     -u "your.oracle.support.login@domain.com" \
     "https://transport.oracle.com/upload/issue/0-0000000000/"

Or to use a proxy server,

curl -T /path/to/attachment.tgz \
     -u "your.oracle.support.login@domain.com" \
     "https://transport.oracle.com/upload/issue/0-0000000000/" \
     -px proxyserver:port \
     -U proxyuser

There is lots of info on MOS (really old people call it MetaLink); doc 1547088.2 is a good place to start. There are some other ways to do this too, but really you can skip all that - you just need the single command above!
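If you have several files to attach, a small wrapper can loop over them. This is only a sketch with hypothetical names (the `upload_sr_files` function, SR number and file paths are placeholders); it echoes each curl command rather than running it, so you can inspect the commands before executing them for real.

```shell
#!/bin/sh
# Sketch: upload several attachments to one SR.
# SR_NUMBER, MOS_LOGIN and the file paths are placeholders -- substitute your own.
SR_NUMBER="0-0000000000"
MOS_LOGIN="your.oracle.support.login@domain.com"

upload_sr_files() {
  for f in "$@"; do
    # Echo the command instead of running it; drop the leading 'echo'
    # once you are happy with what it will send.
    echo curl -T "$f" -u "$MOS_LOGIN" \
      "https://transport.oracle.com/upload/issue/${SR_NUMBER}/"
  done
}

upload_sr_files /tmp/alert_PROD1.log /tmp/rda_output.tgz
```

Remove the `echo` once the printed commands look right; each file then goes up in its own curl call.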

Impala Doc Reorg - January 2014

Tahiti Views - Thu, 2014-02-06 17:34
When Impala started, many of the early adopters were experts in HiveQL, so the Impala documentation assumed some familiarity with Hive SQL syntax and behavior. The Impala SQL reference info was relatively low-volume, more of a quick reference. Now there's more functionality to talk about, and more users are starting with Impala straight from an RDBMS background and need the full details from the…

Rocky Mountain OUG: what's up Feb 6th @ 8:30am in Room 504?

Kuassi Mensah - Wed, 2014-02-05 20:17
 Why get up early and show up in Room 504? http://bit.ly/1cgzgy1 #Java #db12c #OracleRac

Configure Coherence HotCache

Edwin Biemond - Tue, 2014-02-04 22:29
Coherence can really accelerate and improve your application because it's fast, highly available, easy to set up and scalable. But when you use it together with the JCache framework of Java 8, or the new Coherence Adapter in Oracle SOA Suite and OSB 12c, it becomes even easier to use Coherence as your main HA cache. Before Coherence 12.1.2, when you want to use Coherence together with…

Upcoming Conferences for 2014

Scott Spendolini - Tue, 2014-02-04 12:43

It’s that time of year again: Conference Season!  There are a few conferences that fall in the first few months of the year that I try to present at each year, and this year is no different.  Here’s where I’ll be presenting over the next few months:

  • RMOUG - Denver, CO - February 5th - 7th
    At RMOUG this year, I’ll be co-presenting a new session called “Creating a Business UI in APEX” with Jorge Rimblas.  I’m very excited about this session, as there is a lot of practical and easy to use information packed into it about user interface design - something most APEX developers have little experience in.  I’ll also be a part of the Oracle ACE Lunch & Learn on Friday, so if you want to talk APEX, come and find my table.
  • UTOUG - Sandy, UT - March 12th & 13th
    This year at UTOUG, I will be presenting “Intro to APEX Security”.  Given that APEX 5.0 is out in at least an EA release, I hope to incorporate what’s new in addition to what APEX 4.2 and prior have to offer.

  • GLOC - Cleveland, OH - May 12th & 13th
    I’ll be quite busy at GLOC this year, with at least two sessions: the aforementioned “Intro to APEX Security”, as well as a 3-hour hands-on session entitled “APEX Crash Course”.  This session will be aimed at those new or relatively new to APEX, and walk the participants through building a few working applications - both desktop and mobile.

    Note: GLOC abstract submission closes this Friday, so there’s still time to submit if you’re interested!

  • KScope - Seattle, WA - June 22nd - 26th
    As usual, KScope will be the busiest conference of the year for me, with 3 sessions, Open Mic Night, Lunch & Learns, booth duty and who knows what else.  In addition to “Creating a Business UI in APEX” and “Intro to APEX Security”, I’ll be holding a Deep Dive session on Thursday entitled “APEX Security Deep Dive”.  This session will take a more thorough look at the inner workings of APEX’s security, and is meant for those who are comfortable with APEX.

I’m sure as the year goes by, there will be additional conferences: MAOP, VOUG, ECOUG and OOW are all events that I typically attend.  Hope to see some of you at one of them!

Memory Guard

Tim Dexter - Mon, 2014-02-03 12:38

Happy New ... err .. Chinese Year! Yeah, it's been a while; it's also been danged busy and we're only in February, just! A question came up on one of our internal mailing lists concerning out-of-memory errors. Pieter, support guru extraordinaire, jumped on it with reference to a support note covering the relatively new 'BI Publisher Memory Guard'. Sounds grand, eh?

As many a BIP user knows, at BIP's heart lives an XSLT engine. XSLT engines are notoriously memory hungry. Oracle's wee beastie has come a long way in terms of taming its appetite for bits and bytes since we started using it. BIP allows you to take advantage of this via 'scalable mode.' It's a check box on the data model which essentially says 'XSLT engine, stop stuffing your face with memory doughnuts and get on with the salad and chicken train for this job', i.e. it gets a limited memory stack within which to work and makes use of disk if needed - think Windows' 'virtual memory'.

Now that switch is all well and good, for a known big report that you would typically mark as 'schedule only.' You do not want users sitting in front of their screen waiting for a 10,000 page document to appear, right? How about those reports that are borderline 'big' or you have a potentially big report but expect users to filter the heck out of it and they choose not to? It would be nice to be able to set some limits on reports in case a user kicks off a monster donut binge session. Enter 'BI Publisher Memory Guard'!

It essentially lets you set those limits on memory and report size so that users cannot run a report that brings the server to its knees. More information is on the support web site; search for 'BI Publisher Memory Guard a New Feature to Prevent out-of-memory Errors (Doc ID 1599935.1)', or you can get Leslie's white paper covering the same here.

Categories: BI & Warehousing

Collaborate 14 and Vegas

Fuad Arshad - Mon, 2014-02-03 12:30
Collaborate 14 is coming soon, and I can tell you that it is an excellent content, learning and networking opportunity.  I have been going to Collaborate for a while and have found it to be not only a place to learn but also to network with my peers.  We built the team to write Practical Oracle Database Appliance (available here) at Collaborate 13, and were able to deliver a book with authors from all across the world as a team effort.
I would highly encourage everyone to consider Collaborate 14 as a way to be part of the wonderful Oracle Community: talk to people, listen to people, learn and teach others, and foremost, volunteer. Hey, did I mention it's in VEGAS?
Early Bird Registration ends February 12, so please pass this along and use my name, Fuad Arshad, as a referrer.  Adam Savage is the keynote speaker, which is going to be AWESOME.

RMOUG Training Days 2014

Galo Balda's Blog - Sun, 2014-02-02 11:29


I’ll be presenting on February 6th. Here are the details of my session:

SQL Pattern Matching in Oracle 12c

Room 501, 11:15 – 12:15

Last year I had a great time, so I can’t wait to be back in Denver.

I hope to see you there!

Updated on 02/17/2014: The presentation is available on Slideshare


Filed under: 12C, RMOUG, SQL Tagged: 12C, RMOUG, SQL
Categories: DBA Blogs

Oracle Application Express 5.0 Early Adopter 1 now available

Joel Kallman - Fri, 2014-01-31 16:39


We are quite happy to announce the beginning of the Oracle Application Express 5.0 Early Adopter program, at https://apexea.oracle.com.  This is our open-to-the-public beta program where we encourage our customers (new and old), and also those just interested in Oracle Application Express, to kick the tires of our forthcoming release.  Click the big blue "Request a Workspace" button to get started.

You'll notice right away that the authentication for Oracle Application Express requires an Oracle account.  This is the same account you would use for many Oracle sites, including the OTN Community discussion forums.  If you don't have an account, then simply follow the instructions on the login page to "Sign up for a free Oracle Web account".  However, ensure that you specify the same email address as your Oracle Web account when requesting a new workspace.

The list of new features in Oracle Application Express 5.0 Early Adopter 1 can be reviewed here.  Not everything is ready for prime time, so these are the features for which we are specifically looking for feedback.

We plan on having an Oracle Application Express 5.0 Early Adopter 2 program.  When that happens, the entire instance will be rebuilt, so don't get too married to any of the data or applications - they will be removed.  Also, there is no guarantee that the applications you create can be imported into any future release of APEX.

The Known Issues will be populated soon, as well as the application to review your submitted feedback.  However, we encourage you to use this Early Adopter instance and provide your unvarnished comments.  We still have some miles to travel for Oracle Application Express, but we believe that this will eventually become one of the watershed releases for APEX and the community.

Thank you for all of your support.


Gamification to level 80

Rob Baillie - Fri, 2014-01-31 04:14
Since the end of July last year I've been test driving one of the latest online tools that hopes to change your life by giving you the ability to store your task lists.

Wow. What could be more underwhelming, and less worthy of a blog post?

Well, this one is different.  This one takes some of the huge amount of thinking on the behaviour of "millenials" and "Generation Y", adds a big dose of social context and ends up with something quite spectacular.

This is the gamification of task lists, this is experience points and levelling up, buying armour and using potions, this is World of Warcraft where the grinding is calling your mam, avoiding junk food or writing a blog post.

This is HabitRPG.
The concept is simple, you manage different styles of task lists.
  • If you complete entries on them you get experience points and coins.
  • If you fail to do entries on them, you lose hit points.

Depending on whether you're setting yourself realistic targets and completing them, you either level up, or die and start again.
Get enough coins and you can buy armour (reduce the effect of not hitting your targets), weapons (increase the effect of achieving things) or customised perks (real world treats that you give yourself).
There's a wealth of other treats in there too, but I don't want to spoil it for you, because as each of them appears you get a real jolt of surprise and delight (look out for the flying pigs).
So, what do I mean by "different styles of task lists"? Well, the lists are split into three - Habits, Dailies and Todos:
Habits
These are repeating things that you want to get into the habit of doing, or bad habits you want to break.

They have no schedule, or immediate urgency, they just hang around and you come back every now and again to say "yup, did that".  You can set things up as positive or negative, and so state if they are a good or bad habit.

Examples might be:
  • Phone mother (positive)
  • Get a takeaway (negative)
  • Empty the bins (both - positive if you do it, negative if your partner does it)

Dailies
Suffering from a bit of a misnomer, dailies are repetitive tasks with some form of weekly schedule. Things that you want to do regularly, and on particular days. You can set a task to be required every day, only every Tuesday, or anything in between.

Whilst un-actioned habits are benign, if you don't tick off a daily then you get hurt.  With habits you're gently encouraged to complete them as often as possible. Dailies come with a big stick.
Examples might be:
  • Go to the gym
  • Do an uninterrupted hour of productive work

Todos
The classic task: the one-off thing that you've got to do, and once it's done you can cross it off and move on to the next thing.

In terms of functionality, they're pretty much the same as dailies - If you don't do a task it hurts.

Examples might be:
  • Write a blog post about HabitRPG
  • Book a holiday cottage in Wales

Other bits
They have a mobile app on both iOS and Android.  I use Android, and it does the job - nothing fancy, but it works.  Most of what you need is available on the move.

It's missing the ability to work offline, though that's not a huge problem.  I can imagine it being added soon, and I really hope it is.  Sometimes, sitting on the tube, I think of things that I need to do, and it would be great to be able to add them to my task list without waiting until I get over-ground again.

Functionality is added regularly, and there is clearly a strong community spirit in the developers who are producing the site.  A kickstarter provided a boost to funds, but they seem to have worked out how to monetise the site and it looks like it'll keep being developed for some time - which is obviously good news!

There are a few community plug-ins out there (they made the good choice of using the public API to hook their UI up, meaning any functionality in the site is available in the API), including one that works like "stayfocused", monitoring your internet browsing habits and rewarding or punishing your HabitRPG character appropriately.

The APIs also open up the idea of a sales system driven by some of the concepts in HabitRPG, if not HabitRPG itself (though maybe with Ferraris instead of Golden Swords).  I'd be amazed if this wasn't picked up by a Salesforce developer sometime soon...


Conclusion
I have to admit, I was excited about this idea the moment I heard about it, though I didn't want to blog about it straight away - I wanted to see if it had some legs first.

Sure, there are other sites doing similar things - take a look at blah blah and blah. But, excuse the pun, this is another level.

When I first started using HabitRPG I had very short term goals. Your character is fragile, so naturally I did what I could to avoid getting hurt. I avoided unrealistic goals, or even goals that I might not get around to for a couple of days. Only todos I was likely to do that day got added.

As I've got further through, I have found that I am more inclined to set longer-target todos. They hurt you less as you have armour, and the longer you leave them the more XP you get. It sounds like cheating, but it's not. It's simply that I've matured the way in which I use my task manager.

It's missing some things that I might expect from a really rich task manager - tags can be used to group items and tasks can be split with simple sub-tasks, but there's nothing more advanced than that - no dependent tasks, or chains of tasks for example.

But maybe the simplicity is key to its success. I rarely need more than a simple reminder, so why complicate things?

You have to be careful with the habits. It can be tempting to add a bad habit in there that you've already pretty much broken, but if Steven Levitt and Stephen J. Dubner are right then you'll end up replacing an intangible moral cost with a tangible HabitRPG cost, and result in picking up that bad habit again.

It differs from sites like Strava, in that this is not primarily a competitive site - it needs to focus on the individual as it is trivially easy to "cheat".  You can add arbitrary tasks and complete them immediately - though it really defeats the purpose.  It relies on you entering a contract with yourself to use the site productively.  For that reason, any fundamental introduction to the site of competitiveness is flawed.

However, there is the concept of "challenges" - the idea that you can set goals, assign a prize and then invite people to compete.  It works, but only on the principle that people entering the challenges can be trusted.

All in all this has proven to be a pretty successful experiment for me - since I've started using it I've hardly missed a day at the gym, my washing basket is empty, all my shirts are ironed, I've managed to make it to yoga and I even call my dad more often.

And with a character at level 32 I'm becoming a god!

Implementing Deferred Segment Creation After an Upgrade

David Kurtz - Thu, 2014-01-30 13:02
I have written previously about Deferred Segment Creation. Each empty table and its indexes use only 64Kb (assuming an 8Kb block size and locally managed tablespaces), but in a PeopleSoft system there can be tens of thousands of such tables, and that adds up to a saving worth making.

If you are upgrading your database to 11gR2, you might want to make sure that you are using it.  Deferred segment creation was introduced in Oracle 11.2.0.2 and it became the default in 11.2.0.3.  However, any table created in a previous version will have a physical segment.

This problem could affect any system, but it also manifests itself in PeopleSoft in a particular way.

When you run the alter scripts in PeopleTools a table may be recreated.  If it is a regular table (record type 0) then the CREATE TABLE command will not specify a segment creation clause and so the segment creation will be deferred until rows are inserted.

CREATE TABLE PS_TL_IPT1 (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
   EMPLID VARCHAR2(11) NOT NULL,
...
   INITIAL_SEQ_NBR DECIMAL(15) NOT NULL) TABLESPACE TLWORK STORAGE
 (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
 PCTFREE 10 PCTUSED 80
/
However, from PeopleTools 8.51, Application Designer uses the Oracle-delivered DBMS_METADATA package to extract the DDL to recreate the object from the actual object.  This behaviour only occurs for Temporary working storage tables (record type 7).  Yes, these are exactly the tables that would benefit most from deferred segment creation, because in many systems there are many unused temporary table instances.  If the table was created under a version of the database prior to 11.2.0.2 then the segment will exist and DBMS_METADATA will generate the DDL with the SEGMENT CREATION IMMEDIATE clause.

-- Create temporary table 
CREATE TABLE PSYPERSON (EMPLID VARCHAR2(11) NOT NULL,
...
   LAST_CHILD_UPDDTM TIMESTAMP) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 80 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "HRLARGE"
/
You can use DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS to remove the segments for any empty tables (and their indexes) for which the segment has been previously created.  There would be no harm in simply running this program for every table in the system.  If there are rows then DBMS_SPACE_ADMIN will take no action.

The following script identifies candidate tables where the statistics suggest that there are no rows, or where there are no statistics.  I am indebted to Tim Hall for the idea for this tip.

set serveroutput on 
BEGIN
 FOR i IN (
  SELECT owner, table_name
  FROM   all_tables
  WHERE  owner = 'SYSADM'
  AND    segment_created = 'YES'
  AND    temporary = 'N'
  AND   (num_rows = 0 OR num_rows IS NULL)
 ) LOOP
  dbms_output.put_line(i.owner||'.'||i.table_name);
  dbms_space_admin.drop_empty_segments (
    schema_name    => i.owner,
    table_name     => i.table_name);
 END LOOP;
END;
/
As this package drops the empty segments, the SEGMENT_CREATED column on USER_TABLES changes to NO and if you were to extract the DDL with DBMS_METADATA the SEGMENT CREATION clause would have changed to DEFERRED.

As soon as any data is inserted, the segment is created, SEGMENT_CREATED changes to YES and the DDL generated by DBMS_METADATA would have SEGMENT CREATION IMMEDIATE.

The result is that 64Kb of space (assuming a block size of 8Kb) will be freed up for each empty table and index segment that is dropped. Your mileage may vary, but in my demo HR database that is over 20000 tables and 25000 indexes. 2.7Gb isn't a vast amount these days, but it is an easy win.

Added 1.2.2014:
To answer Noons' question below: so long as the table or partition doesn't have any rows, the segment will be dropped and it will be as if the segment creation had been deferred.  You don't have to do anything special to the table.  There is no problem applying this to any empty tables created with their segments.  Here is a simple test with my results on 11.2.0.3:

I will create a table with the segment explicitly created immediately, then insert a row, commit the insert and delete the row.  I haven't even bothered to commit the delete.
SQL> create table t(a number) segment creation immediate;
SQL> insert into t values(42);
SQL> commit;
SQL> delete from t;
SQL> select segment_type, segment_name, tablespace_name from user_Segments where segment_name = 'T';

SEGMENT_TYPE SEGMENT_NAME TABLESPACE_NAME
------------------ --------------- ------------------------------
TABLE T PSDEFAULT

SQL> select table_name, segment_created from user_tables where table_name = 'T';

TABLE_NAME SEG
------------------------------ ---
T YES

SQL> select dbms_metadata.get_ddl('TABLE','T') from dual;

DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYSADM"."T"
( "A" NUMBER
) SEGMENT CREATION IMMEDIATE


So, at the moment the segment exists, it has had rows in it, but they have been deleted and the table is empty. If I run DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS the segment is dropped.

SQL> execute dbms_space_admin.drop_empty_segments (user,'T');
SQL> select segment_type, segment_name, tablespace_name from user_Segments where segment_name = 'T';

no rows selected

SQL> select table_name, segment_created from user_tables where table_name = 'T';

TABLE_NAME SEG
------------------------------ ---
T NO

SQL> select dbms_metadata.get_ddl('TABLE','T') from dual;

DBMS_METADATA.GET_DDL('TABLE','T')
--------------------------------------------------------------------------------
CREATE TABLE "SYSADM"."T"
( "A" NUMBER
) SEGMENT CREATION DEFERRED

TRUCEConf

Catherine Devlin - Wed, 2014-01-29 23:47

Please consider participating in TRUCEConf (March 18-19 in Cincinnati)!

The goal is to help the tech community heal, through learning from others outside our industry and having an open dialogue on how we can be better humans to each other in the world of tech.

You may remember fierce controversy around TRUCEConf when virtually nothing was known about it but its name; without solid information, it was easy to read bad connotations into the name. I would have been uneasy myself if I hadn't known the founder, Elizabeth Naramore.

But now there's plenty of information, including the schedule, that should replace those concerns with enthusiasm. I think the format - a day of mind-opening speakers from all over, followed by an unconference day - should be very productive!

I'm really looking forward to it and hope that many of you can come. If you can't come in person, consider supporting the conference with a donation - they're going without corporate sponsors so your individual support means a ton. Thanks!

Speaking at OTN Yathra (India)

Kuassi Mensah - Wed, 2014-01-29 12:01
Speaking @ http://t.co/E0KUudWOLn Mumbai/Pune/Chennai In-Database#MapReduce for DBAs and Database Developers using SQL or #Hadoop

Whatsapp on 2 devices: "does not work, tell why!"

Dietrich Schroff - Tue, 2014-01-28 13:58
Since nearly every one of my friends is using WhatsApp, it was overdue for me to download this app.
Why is this app used by everyone? What's cool about this app?
  • You have to do nothing after the download except the SMS verification
  • No account creation
  • No new password (or reuse of another one... nooo)
  • No need for searching addresses
  • You do not have to take care of your contact list
  • Just start messaging
Why does this work? WhatsApp uses your phone number to build the account identifier. The app scans your phone address book and builds its own contact list. Fine.

Really?

Not at all: if you want to add WhatsApp on a second device with no SIM card (e.g. your tablet), you can find several instructions, but after adding WhatsApp to this device, WhatsApp on your phone no longer works...
WhatsApp says:
Your WhatsApp account can only be verified with one number, on one device.
If you attempt to frequently switch your WhatsApp account between different devices, at a certain point, you may be blocked from re-verifying your account. So please do not repeatedly switch between different devices.
Cheers,
WhatsApp Support Team

Finally, I would say: the biggest advantage is also the biggest disadvantage. Using the phone number as the account id makes everything easy but prevents the usage of more than one device...

Changes and Book

Fuad Arshad - Mon, 2014-01-27 16:35
I just realized that I have not blogged in a very long time. This has been partly because I switched jobs and started working for Oracle.  It has been an interesting six months, and I have been enjoying the challenge of working with various customers and helping them solve problems.
The other important milestone in my career is the publishing of a book that I collaborated on with a very fine team of individuals. The book is a collection of our experiences and passion with the Oracle Database and is called Practical Oracle Database Appliance. You can pre-order the book at Amazon with a link available below. I will be trying to blog more about various aspects of my new job and interesting stuff about Exadata as I learn it.


CRS-1615:No I/O has completed after 50% of the maximum interval

Syed Jaffar - Mon, 2014-01-27 03:01
A few days back, on one of the production cluster nodes, the clusterware became unhealthy and caused instance termination on the node. If it had been a node eviction, everything would have come back automatically after the node reboot; however, in this scenario only the cluster components were in an unhealthy state, hence no instances started on the node.

Upon reviewing the ocssd.log file, the following error was found:

cssd(21532)]CRS-1615:No I/O has completed after 50% of the maximum interval. Voting file /dev/rdisk/oracle/ocr/ora_ocr_01 will be considered not functional in 13169 milliseconds

The node was unable to communicate with the rest of the cluster nodes. If so, why wasn't the node crashed/evicted? Isn't that the question that comes to your mind?

Well, from the error it is clear that the node suffered I/O issues: it was unable to access the voting disk and started complaining (in the ocssd.log) that it couldn't see the other nodes in the cluster. When we contacted the storage and OS teams, the OS team was quick to identify an issue with a PCI card. For some reason, the I/O channel to the storage from the node was suspended for 10 seconds, and the connection was re-established after 10 seconds. In the meantime, the cluster on the node became unhealthy and couldn't proceed with any other action.

The only workaround we had to perform was restarting the CSS component using the following command:

crsctl start res ora.cssd -init

Once the CSS was started, everything came back to a normal state. Luckily, we didn't have to do a lot of research into why the cluster became unhealthy or why the instances crashed.

It's a drag...

Tony Andrews - Sun, 2014-01-26 06:54
It really used to be a "drag" putting together a set list for my band using my Oracle database of songs we play: the easiest way was to download all the songs to an Excel spreadsheet, manipulate them there, then re-import back into the database.  I tried various techniques within APEX but none was easier than that - until now.  I have just discovered jQuery UI's Sortable interaction, and in a…

AUTO Vs Manual PSU patch - when and why?

Syed Jaffar - Sun, 2014-01-26 03:46
The purpose of this blog entry is to share my thoughts on AUTO vs. manual PSU patch deployment, with when and why scenarios, and also a success story of reducing overall patching time by almost 50% in recent times. I thought this would help you as well.

From my own perspective, I think we are one of the organizations in the Middle East with the largest and most complex Oracle cluster setups in place. With six (06) cluster environments, including production, non-production and DR, maintaining them is certainly an uphill and challenging task. One of the tasks that requires the most attention and effort is none other than PSU patch deployment across all six environments at our premises.

We are currently in the process of applying the 11.2.0.2.11 PSU patch in all our environments, and the challenge in front of us is to bring down the patching time on each server. Our past patching experience says that AUTO patching on each node needs a minimum of 2 hours, so if you are going to patch a 10-node cluster, you need at least 22 hours to complete the deployment.

AUTO Patching
No doubt AUTO patching is the coolest enhancement from Oracle, automating the entire patching procedure smoothly and, more importantly, without much human intervention. The downside is the following cases, where AUTO patching the GI and RDBMS homes together fails or simply isn't possible:

  1. When you have multiple Oracle homes with different software owners; for example, we have an Oracle E-Business Suite home and typical RDBMS homes (10g and 11g) under different ownership.
  2. If you have multiple versions of Oracle databases running; for example, we have Oracle 10g and Oracle 11g databases.
  3. During the course of patching, if one of the files fails to copy or roll back because it is busy with another OS process, the patch will be rolled back, and the cluster and the other services on the node will automatically restart.
In the above circumstances, one may choose to run the AUTO patch separately for the GI home and then for the RDBMS homes. However, when you hit point 3 above, the same thing is going to happen, which is of course time consuming. While patching one particular node, AUTO patching on the GI home failed 3 times due to an unsuccessful cluster shutdown, and we ended up rebooting the node 3 times.

Manual Patching
In contrast, manual patching requires heavy human intervention during the course of patch deployment. A set of steps needs to be followed carefully.

Since we were challenged to look at all possibilities for reducing the overall patching time, we started off analyzing the various options between AUTO and manual patch deployment and where the time was being consumed. We figured out that after each successful or unsuccessful AUTO patching attempt, the cluster and the services on the node have to restart, and this was the time-consuming factor. In a complex/large cluster environment with many instances, ASM disks and disk groups, it certainly takes a good amount of time to start everything. This caught our attention, and we thought of giving the manual patching method a try.

When we tried the manual patching method, we managed to patch the GI and RDBMS homes in about 1 hour, almost 50% less than the AUTO patching time frame. Imagine: we finished 6 nodes in about 7 hours, in contrast to a 12-13 hour time frame.

In a nutshell, if you have a small cluster environment, say a 2-node cluster, you may feel you are not gaining much in terms of saved time. However, if you are going to patch a large/complex cluster environment, think of the manual method, which could save a pretty huge amount of patching downtime. At the same time, keep in mind that this method requires more DBA intervention than the AUTO patching method.
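For reference, the manual per-node sequence looks roughly like the sketch below. The home paths, database/instance names and patch directory are placeholders, and the authoritative order of steps (and which OS user runs each one) is always the patch README for your exact version; the script only prints the commands, so nothing is executed:

```shell
#!/bin/sh
# Sketch of manual PSU application on one node (placeholders throughout;
# follow the patch README for your version -- this is not the exact sequence).
GI_HOME=/u01/app/11.2.0/grid
DB_HOME=/u01/app/oracle/product/11.2.0/db_1
PATCH_DIR=/stage/psu_11.2.0.2.11

run() { echo "$@"; }   # dry run: print each command instead of executing it

# 1. Stop the database instances on this node only.
run srvctl stop instance -d MYDB -i MYDB1

# 2. Unlock the GI home so it can be patched (run as root).
run "$GI_HOME/crs/install/rootcrs.pl" -unlock

# 3. Apply the patch to the GI home on this node (as the GI owner).
run opatch napply -oh "$GI_HOME" -local "$PATCH_DIR"

# 4. Re-lock the GI home and restart the stack on this node (as root).
run "$GI_HOME/crs/install/rootcrs.pl" -patch

# 5. Apply the patch to the RDBMS home on this node (as the DB owner).
run opatch napply -oh "$DB_HOME" -local "$PATCH_DIR"

# 6. Restart the instance, then move on to the next node.
run srvctl start instance -d MYDB -i MYDB1
```

The time saving comes from step 4: only one stack restart per node, instead of the extra restarts AUTO patching can trigger on a rollback.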

