
Feed aggregator

HOWTO: Building a JSON Database API (3)

Marco Gralike - Fri, 2015-05-01 12:54
Be aware (!), while following through on the following, that there might be a change…

Oracle Product Support Advisor Webcasts - May

Chris Warticki - Fri, 2015-05-01 12:41
Oracle Corporation
Oracle Product Support Advisor Webcasts May 2015

This Month:


Oracle Product Support Advisor Webcasts for May

Dear Valued Support Customer,
We are pleased to invite you to our Advisor Webcast series for May 2015. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts:
Many of the Oracle Support product teams maintain support blogs. You may access the Support Blogs directly, or via the Product Support Blog Index. Watch this short video to see how to subscribe to a support blog of interest.

Sincerely,
Oracle Support

May Featured Webcasts by Product Area:

  • CRM – Introduction To Siebel TaskUI – May 21
  • Database – Oracle Database 12c - Dataguard New Features (Mandarin only) – May 21
  • Database – Oracle 12c: Datapump New Features – May 21
  • E-Business Suite – Understanding the Output of the Discrete Job Value Report – May 12
  • E-Business Suite – Demantra Certification. Are you attempting to get certified? Let's walk through the process! – May 13
  • E-Business Suite – Utilizing API For Pick Release And Ship Confirm – May 19
  • E-Business Suite – Getting Started with Document Sequencing in Oracle Receivables – May 20
  • E-Business Suite – Basic Troubleshooting Information for Duplicate Sales Order Transactions Stuck in Inventory Tables – May 20
  • E-Business Suite – An Overview of Construction Estimates in R12 – May 21
  • E-Business Suite – A Diagnosis on OPM - ASCP Data Collection Entities – May 26
  • Eng System – Exadata Disk Management and Troubleshooting Tips (Mandarin only) – May 20
  • Fusion Applications – Want to find out All secrets on GRC Transaction User Defined Objects? – May 7
  • Fusion Applications – Understanding Required Setup and Approval Rules for Invoice Approval Workflow – May 14
  • Fusion Applications – Overview on Fusion Business Intelligence: Let's shed some light on OTBI & OBIA – May 20
  • JD Edwards – JD Edwards World and EnterpriseOne HCM Affordable Care Act (ACA) Updates – May 5
  • JD Edwards – JD Edwards EnterpriseOne: Periods of Supply explained – May 6
  • JD Edwards – JD Edwards World - Troubleshooting Sales Update P42800 – May 7
  • JD Edwards – JD Edwards EnterpriseOne – Understanding Taxes in Accounts Payable and Accounts Receivable – May 14
  • JD Edwards – JD Edwards World: Approvals Management for Procurement – May 19
  • JD Edwards – JD Edwards EnterpriseOne Cash Basis Accounting – May 21
  • JD Edwards – JD Edwards World: Troubleshooting Fixed Assets when Upgrading from A7.3 to A9.x – May 26
  • JD Edwards – JD Edwards World: Advanced Pricing Formulas and Variable Tables – May 28
  • JD Edwards EnterpriseOne – JD Edwards EnterpriseOne: Using Lot Status Groups to Segregate Inventory – May 13
  • JD Edwards EnterpriseOne – JD Edwards EnterpriseOne: Automating Timecard Creation With Timecard Templates – May 20
  • JD Edwards EnterpriseOne – JD Edwards EnterpriseOne: Compensation Management Fundamentals – May 27
  • PGBU – PGBU Unifier Tips and Tricks Advisor Webcast – May 6
  • PeopleSoft Enterprise – Knowledge Center Load Balancing – May 12
  • PeopleSoft Enterprise – PeopleSoft Payroll for North America – Setup and Troubleshooting for Pennsylvania Act 32 – May 12
  • PeopleSoft Enterprise – Financial Aid Regulatory 2015-2016 Release 3 (9.0 Bundle #37) – May 13
  • PeopleSoft Enterprise – PeopleSoft Enterprise Support for Crystal Reports – May 19

Hardware and Software Engineered to Work Together
Copyright © 2015, Oracle Corporation and/or its affiliates.
All rights reserved.
Contact Us | Legal Notices and Terms of Use | Privacy Statement

SQLcl - Code editing on the console

Barry McGillin - Fri, 2015-05-01 12:08
We've been playing with our console drawing in SQLcl for a while now and this week, we hooked up some keys to make editing and running much easier.  The video will show the following keys for managing your buffer in the console.  This will make it into the next Early Access candidate soon.

  • up arrow - previous history (this will keep stepping back through your history unless you move into the text to edit it)
  • down arrow - next history, which works the same way in the other direction.
If we are editing and not showing history, then the up and down arrow will move up and down the buffer. 
  • ctrl-W will take you to the top left of the buffer and ctrl-S will take you to the bottom of the buffer.
  • left arrow moves left, with ctrl-A taking you to the extreme left of that line
  • right arrow moves right and ctrl-E takes you to the extreme right of that line
  • ESC takes you out of edit mode, back to the SQL> prompt
  • ctrl-R will execute your buffer if you are editing it.

Editing SQL in SQLcl
At the start of the video, we paste in a large piece of SQL from Kris' blog and all NBSP get stripped out so you get the full SQL and none of the dross. 
If you are at the end of the buffer and terminate your statement correctly, the next CR will run the contents of your buffer.  If you are anywhere else in the buffer, ctrl-R will run the buffer for you.
Check out the latest one on OTN and come back for these features when we drop the new version of SQLcl on OTN.

Strategies for Minimising SQL Execution Plan Instability

Dominic Brooks - Fri, 2015-05-01 10:42
Execution Plan Instability – What is the problem?

The Oracle Optimizer is a complex piece of software and with every release it becomes more complex.

RBO

In the beginning, the Optimizer was rule-based.

The Optimizer had a ranked list of heuristics used to optimize a query, picking the lowest ranked rule available.

This rule-based mode, whilst still in use with some internal Oracle dictionary queries, has been unsupported since version 10.1.

This means that no code changes have been officially made to the RBO and no bug fixes are provided. There are many features that the RBO is unaware of.

Applications should not still be using the rule-based optimizer.

CBO

The Cost-Based Optimizer is designed to evaluate a number of execution plans and pick the one estimated to be the fastest.

Many of the CBO features are designed to combat common optimization problems and many of these problems occur where development teams are not necessarily aware of the full implications of the features that they are using.

These built-in default behaviours roughly conform to the infamous 80%:20% rule in that most of the time they do a good job but they are not infallible.

Bind Variables, Literals and Histograms

Most of the features which make plan instability difficult to avoid stem from the decision to use bind variables or literals (NOT that they should be mutually exclusive) and from the interaction of the former with histograms.

In article 1 of his Philosophy series, Jonathan Lewis neatly sums this up:

Histograms and bind variables exist for diametrically opposed reasons – they won’t work well together without help

Fundamentally, bind variables exist to provide shareable plans.

Bind variables should be used where we are not interested in getting specific optimizations for differing parameters.

Literals should be used where we want the Optimizer to pay particular attention to data distribution and skew for the specific parameters supplied.

A SQL statement will/should often have some parameters which should be literals and some which should be binds.
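
As a purely hypothetical illustration (the ORDERS table, its columns and the skew are invented), a single statement can sensibly mix both:

-- CUSTOMER_ID is evenly distributed, so a bind keeps the plan shareable;
-- STATUS is heavily skewed, so a literal lets the optimizer use its histogram.
SELECT o.order_id, o.order_total
FROM   orders o
WHERE  o.customer_id = :cust_id
AND    o.status      = 'CANCELLED';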

From this point onwards, there has been a whole raft of features designed to treat the misuse of one or the other.

In Oracle 8.1.6, histograms were introduced.

Histograms exist to provide specific data distribution information, particularly relevant to specific parameters.

Also, in that version, we got CURSOR_SHARING – targeted at applications using literals instead of binds – such that SQL which was identical apart from the literals was rewritten to use system-generated bind variables.
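
For illustration (the table is hypothetical), with CURSOR_SHARING=FORCE a literal-only statement is rewritten with system-generated binds before it is shared:

ALTER SESSION SET cursor_sharing = FORCE;

-- Submitted as:
SELECT * FROM orders WHERE status = 'OPEN';

-- Stored in the shared pool as something like:
SELECT * FROM orders WHERE status = :"SYS_B_0";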

Then in 9.2, we got bind variable peeking.

This feature was introduced so that the optimizer could peek at the values supplied at parse time and use data distribution information specific to these parse-time values to generate an execution plan which suited those values.

In addition, and at the same time, through these various versions to the present day, we have had the default DBMS_STATS statistics-gathering behaviour of letting the database decide which columns it will create histograms on, based on the SQL which has been running.

This means that new histograms can suddenly spring up – or existing histograms unexpectedly disappear – on all sorts of columns. This can be problematic on columns with large numbers of distinct values AND particularly so on join cardinalities where there may be a mismatch of histograms on both sides of the join.
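
A quick way to see which histograms the automatic behaviour has (or has not) created is to query the statistics views; a sketch against a hypothetical ORDERS table:

SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'ORDERS'
ORDER  BY column_name;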

Ever since this point, we have had a conflict of interest in feature usage and an ever-increasing amount of additional functionality to battle against this conflict – adaptive cursor sharing, cardinality feedback, etc, etc.

Finally, the education message got blurred or lost somewhere along the line to the extent that a lot of systems blindly overuse bind variables because of the perceived performance impact of using literals.

This situation is not helped by the fact that PL/SQL is designed to encourage bind variables.

Using supplied parameters as literals means using some construct of dynamic SQL, not difficult but nevertheless an added complexity and also another feature which is often blindly discouraged.

SQL Execution Plan Instability – Is SPM a viable approach?

SPM Overview

In Oracle 11g, Oracle introduced SQL Plan Baselines as a SQL Plan Management feature.

The core attraction of this functionality is that you can “lock in” a particular plan or plans for a SQL statement. This stores a set of outline hints and a specific plan hash value in relation to a specific SQL signature.

The Optimizer then uses that set of hints to try to reproduce the desired plan. If it can it will, if it can’t it will reject the hintset.

Additionally, the Optimizer completes its best-cost execution plan optimization anyway so that it can provide the ability to do a controlled evolution of baselined plans in the event that the lower-cost plan that it would have used is different and better performing than the baselined plan.

To use this the database parameters (session or system) just need to be configured to capture plans into a baseline and then use them.

There is flexibility to this capture/use. You can capture everything as it is generated; you could capture everything from memory now and/or at regular intervals and import into a baseline; you could capture everything from AWR into a SQL Tuning Set and then import into a baseline; or you could capture everything from another environment and export/import it into another.

And at any point, you can turn off capture and continue to use those which you currently have – this usage continues to capture any lower-cost plans that the optimizer would have generated for any existing baselined plans.
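
As a minimal sketch of one capture route (the SQL_ID is hypothetical), plans already in the cursor cache can be loaded into a baseline, and capture/use is controlled by two parameters:

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- load the cursor-cache plans for this (hypothetical) SQL_ID into a baseline
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/

-- automatic capture / use of baselines:
-- ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
-- ALTER SYSTEM SET optimizer_use_sql_plan_baselines     = TRUE;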

For a more detailed look at SPM and a comparison with SQL Profiles, see documentation.

Sounds good – why isn’t it the silver bullet for system-wide stability?

This approach might be good enough, but it is not a cast-iron guarantee of system-wide stability.

There are a number of reasons why not.

New & Changed SQL

Firstly, you need to have captured all your plans already. If you get brand new SQL, then there will be nothing in SPM.

Depending on your application, this may be a significant concern.

For example, consider an application making heavy usage of Hibernate which generates SQL statements.

A minor change in the domain model can mean a change of system-generated table alias in many statements.

As a result, you may suddenly get a lot of brand new SQL and significant numbers of baselined statements which you will now never see again.

What are the baselined plans based on? Are they the best plans? The only plans ever used?

If you suddenly baseline the plans for a large number of SQL statements, you are dictating which plan is to be used.

The plans will be based on the parsed plans in memory or in AWR at the time.
Are these the best plans?

Does/should this SQL statement always use this plan?

Are there normally multiple plans for different bindsets?

What if you normally get benefit from adaptive cursor sharing?

ACS and baselines

What if your application benefits from adaptive cursor sharing?

Sure, you can baseline multiple plans but these plans have no ACS information.

As soon as that ACS information is no longer in memory (as happens), there is no shortcut in a baseline to regain it; you still have to have the multiple executions required for the optimizer to recognize which plans to use for which bind sets.

Parsing overhead

Widespread usage of baselines might, depending on your system performance profile, have a significant impact on the parsing resources.

This is because it always generates a best-cost plan anyway.

Then if that is not the baselined plan, it will use the baselined hintset to try to generate the specific plan hash.

In addition, if that is not possible, it will use just the optimizer_features_enable hint to try to generate the required plan.

So, in a heavily-baselined system, you might be doing 2x the parse work of a non-baselined system.

This might well be easily accommodated but there are systems where this would cause a problem.

Integration with development and testing processes

A SQL Plan Baseline is tied to a SQL statement based on the statement’s EXACT_MATCHING_SIGNATURE – a hash of the SQL statement which has been case and space normalized.

If a SQL statement materially changes, the baseline no longer applies.
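
The signature can be computed directly, which makes it easy to see that case and whitespace changes are ignored while material edits are not (a sketch; the statements are invented):

SELECT DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('select * from orders where id = :1') AS sig1,
       DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('SELECT  *  FROM  ORDERS  WHERE  id = :1') AS sig2
FROM   dual;
-- sig1 = sig2 here; add a hint or change a predicate and the signature (and the baseline match) changes.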

How aware are developers of the presence of baselines?

And how to integrate with the development process?

How will our release process deal with baselines?

And if baselining large numbers of plans is being considered, then we have to think about where these will be generated.

The natural implication (of widespread baseline usage) is that new functionality being promoted to Prod would have a set of tested, baselined plans accompanying it and these would presumably have to be generated in a Prod-like environment which included Prod-like volumes.

SQL Execution Plan Instability – Decision Time

There is a decision to be made.

(And/or perhaps there is often a conceptual over-simplification by senior management to combat? Or at least a lack of deep understanding of the beast that we’re dealing with here?)

Do you want the Optimizer to try to get a better execution plan sometimes?

If the answer is yes, then you have to accept that it will get it wrong from time to time.

In particular, the various feedback and adaptive mechanisms are designed to recognize that they have got it wrong.

BUT they need that problematic execution in the first place – sometimes more than one – to recognize that fact.

That one problematic execution could be your next Priority 1 incident.

In addition, the feedback mechanism is not perfect and it still can make subsequent executions worse in some circumstances.

SQL Execution Plan Instability – Turn it off?

IF your primary goal is plan stability – and I think many teams would claim this is their goal but do not embrace the implications – then perhaps a better decision is to turn off the various features which cause, or combine to cause, most of the problems of instability.

Appropriate usage of binds, appropriate usage of literals

Choose whether to use a bind variable or a literal as is appropriate for the value / column / SQL statement.

A SQL statement might have a mix of both.

DBMS_STATS defaults

A METHOD_OPT of FOR ALL INDEXED COLUMNS SIZE AUTO is an immediate red flag. This is never a good setting.

FOR ALL COLUMNS SIZE AUTO without table-level preferences (SET_TABLE_PREFS) is another red flag.

As an interim step, consider using FOR ALL COLUMNS SIZE REPEAT to lock in the current histogram usage.

The end goal should be to have table level preferences set for all tables.

This relies on knowing your data, your data distribution, your code, and knowing which histograms make sense (i.e. for skewed columns) – it will be far fewer than are gathered by default.

For columns with significant numbers of distinct skewed values, it may be necessary to manually craft the histograms.
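
A minimal sketch of setting such a table-level preference (schema, table and column names are hypothetical): no histograms by default, with one deliberately kept on a known skewed column.

BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'APP',
    tabname => 'ORDERS',
    pname   => 'METHOD_OPT',
    pvalue  => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS STATUS SIZE 254');
END;
/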

Volatile tables

Volatile tables should have stats set to an appropriate setting to generate appropriate plans for all situations and then those stats should be locked.

Stats which are gathered at any point during the volatility cycle may be good or may be problematic.

Similarly dynamic sampling can only see the data at the time of hard parse – you might be lucky and this is better than stats which say 0 rows but it can be a time bomb.
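
A minimal sketch of the set-and-lock approach for a volatile table (schema, table and values are hypothetical):

BEGIN
  -- set stats representative of the table when it is populated
  DBMS_STATS.SET_TABLE_STATS(ownname => 'APP', tabname => 'STAGING_ORDERS',
                             numrows => 100000, numblks => 2000);
  -- prevent subsequent gathers from overwriting them
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP', tabname => 'STAGING_ORDERS');
END;
/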

Turn off optimizer features

Turning off optimizer features might be best done via a LOGON trigger, turning them off only for a subset of application users (a minimal sketch of such a trigger follows the parameter lists below). These features include:

  • Bind Variable Peeking – off via _optim_peek_user_binds = false
  • Cardinality Feedback and ACS – should be disabled by turning off bind variable peeking but off via _optimizer_use_feedback = false, _optimizer_adaptive_cursor_sharing = false, _optimizer_extended_cursor_sharing_rel = “none”
  • Dynamic Sampling – optimizer_dynamic_sampling to 0
  • 12c additional adaptive features – e.g. adaptive execution plans

Additionally, it probably makes sense to turn off the adaptive direct path read behaviour, or anything else with the word adaptive or dynamic in it or associated with it.

This functionality decides on whether to do full segment scans via the buffer cache or not and the behaviour is a runtime decision depending on the size of the object relative to the buffer cache AND depending on how much of the segment is currently in the cache.

  • Adaptive direct path reads – _serial_direct_read = always

All too often I’ve seen a concurrently executed SQL statement switch to a “bad” plan involving a full table scan delivered via direct path reads, stressing the IO subsystem because of the number of concurrent executions of that query, which then affects performance across the DB.
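
A minimal sketch of such a LOGON trigger, restricting the changes to one application user (APP_USER is hypothetical, and the underscore parameters above should only be set after appropriate testing and/or Oracle Support guidance):

CREATE OR REPLACE TRIGGER trg_plan_stability_logon
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APP_USER' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET "_optim_peek_user_binds" = FALSE';
    EXECUTE IMMEDIATE 'ALTER SESSION SET "_optimizer_use_feedback" = FALSE';
    EXECUTE IMMEDIATE 'ALTER SESSION SET "_optimizer_adaptive_cursor_sharing" = FALSE';
    EXECUTE IMMEDIATE 'ALTER SESSION SET "_optimizer_extended_cursor_sharing_rel" = ''NONE''';
    EXECUTE IMMEDIATE 'ALTER SESSION SET optimizer_dynamic_sampling = 0';
    EXECUTE IMMEDIATE 'ALTER SESSION SET "_serial_direct_read" = ''ALWAYS''';
  END IF;
END;
/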

Caveat

The actions above are still not sufficient to guarantee plan stability but, for this goal above all else, this is likely to be the most appropriate action.

However, to further guarantee stability it is still likely that some degree of hinting – whether via manual hints, SQL profiles or baselines – might be necessary for small numbers of SQL statements where the initial cost-based plan is not appropriate (e.g. GTTs and other situations), but it should be a small number of statements.

SQL Execution Plan Instability – Summary & Opinion

The actions discussed above are made on the basis that we want to minimise the chance of execution plan instability at all costs.

By making this decision, we are prioritizing stability over all the features within Oracle designed to generate better plans for specific situations, sets of binds, etc.

Personally, I always recommend going with the default behaviour until such time as it causes a significant problem.

I also always recommend matching the scope of a solution to the scope of a problem.

For example, if we have a problem with one or two SQL statements, the potential solutions should be limited to those SQL statements.

We should never be making changes with a potential system-wide impact for the sake of a couple of SQL statements.

And even parameter changes can be injected into specific SQL statements, either via a SQL Patch or via manual hints.
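
A hedged sketch of the SQL Patch route (the API differs by version; this is the 11g/12.1-era internal call, and the statement text and hint are invented):

BEGIN
  SYS.DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH(
    sql_text  => 'select o.order_id from orders o where o.status = :1',
    hint_text => 'OPT_PARAM(''optimizer_dynamic_sampling'' 0)',
    name      => 'orders_no_dyn_sampling');
END;
/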

In my opinion, having multiple plans for a SQL statement is expected.

But what is under-appreciated is the extent to which this is normal.

These are normally only noticed when they cause a problem, and the significant number of statements which regularly change plans without impact tends to go unnoticed.

It is also my opinion that SQL execution issues occur mostly when the SQL is badly written or when incompatible features are combined – e.g. peeked binds together with histograms – and I prefer to tackle that root cause rather than a generic concept of plan instability being inherently problematic.

A future path?

Might it be a good idea for there to be a built-in choice of optimizer “umbrella” modes – maximum performance vs maximum stability which would turn on/off some of the features above and/or adjust their sensitivity/aggressiveness?


Vice President Scott Howley on Upcoming Executive Webcast

WebCenter Team - Fri, 2015-05-01 10:13

Happy Friday! You are all invited to the video executive webcast on Documents Cloud Service coming up on Wednesday, May 13 at 10 am PT/1 pm ET. Need a reason to attend? If the evite hasn't convinced you yet, take a look at the star cast for the webcast and enjoy this brief preview video from Oracle Vice President and our webcast host, Scott Howley.



Parallel Execution -- 5 Parallel INSERT

Hemant K Chitale - Fri, 2015-05-01 09:57
Oracle permits Parallel DML (as with Parallel Query, this requires the Enterprise Edition).

Unlike Parallel Query, Parallel DML is *not* enabled by default.  You must explicitly enable it with an ALTER SESSION ENABLE PARALLEL DML.
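
As a quick optional check, the session's parallel settings can be read from the dictionary (a minimal sketch):

SELECT pq_status, pdml_status, pddl_status
FROM   v$session
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');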

The most common usage is Parallel INSERT.

Parallel Insert uses PX servers to execute the Insert.  Ideally, it makes sense to use Parallel Query to drive the Parallel Insert.  Each PX server doing the Insert executes a Direct Path Insert --- it allocates one or more extents to itself and inserts rows into that extent.  Effectively, the Parallel Insert creates a temporary segment.  When the whole Insert is successful, these extents of the temporary segment are merged into the target table (and the temporary segment loses its existence).

Note that there are four consequences of this behaviour :

(a) Any empty or usable blocks in the existing extents are NOT used for the new rows.  The table *always* grows in allocated space even if there are empty blocks.

(b) Depending on the number of PX servers used, this method allocates more new extents than would a normal (Serial) Insert.

(c) The rows inserted are not visible even to the session that executed the Insert until and unless it issues a COMMIT.  Actually, the session cannot even re-query the same table (even if the query would hit only pre-existing rows) without a COMMIT.  (This does not prevent the session from querying some other table before the COMMIT).

(d) The Direct Path Insert does not require large Undo space.  It does not track all the rowids into Undo.  It only needs to track the temporary segment and extents to be discarded should a ROLLBACK be issued.  So, it uses minimal Undo space.


[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Fri May 1 22:46:46 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>alter session enable parallel dml;

Session altered.

HEMANT>create table another_large_table as select * from large_table where 1=2;

Table created.

HEMANT>insert /*+ PARALLEL */
2 into another_large_table
3 select /*+ PARALLEL */ *
4 from large_table;

4802944 rows created.

HEMANT>!ps -ef |grep ora_p0
oracle 3637 1 0 22:47 ? 00:00:00 ora_p000_orcl
oracle 3639 1 0 22:47 ? 00:00:00 ora_p001_orcl
oracle 3641 1 0 22:47 ? 00:00:00 ora_p002_orcl
oracle 3643 1 0 22:47 ? 00:00:00 ora_p003_orcl
oracle 3680 3618 0 22:50 pts/1 00:00:00 /bin/bash -c ps -ef |grep ora_p0
oracle 3682 3680 0 22:50 pts/1 00:00:00 grep ora_p0

HEMANT>
HEMANT>select count(*) from another_large_table;
select count(*) from another_large_table
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


HEMANT>


So, we see that 4 PX servers were used. We also see that the session cannot re-query the table.
What evidence do we have of the temporary segment and extents ?

SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_type = 'TEMPORARY'
4 /

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ------------ --------------- ---------- -------------
HEMANT 11.54579 HEMANT 141 536.9375

SYS>

HEMANT>commit;

Commit complete.

HEMANT>

SYS>/

no rows selected

SYS>

The temporary segment no longer exists after the inserting session issues a COMMIT.  The extents of the temporary segment have been merged into the target table segment.

SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_name = 'ANOTHER_LARGE_TABLE'
4 /

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 142 537

SYS>

Now, let's see if another Parallel Insert would be able to reuse usable table blocks.  We DELETE (*not* TRUNCATE !) the rows in the table and re-attempt a Parallel Insert.

HEMANT>delete another_large_table;

4802944 rows deleted.

HEMANT>commit;

Commit complete.

HEMANT>

SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_name = 'ANOTHER_LARGE_TABLE';

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 142 537

SYS>

HEMANT>insert /*+ PARALLEL */
2 into another_large_table
3 select /*+ PARALLEL */ *
4 from large_table;

4802944 rows created.

HEMANT>

SYS>l
1 select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3* where segment_name = 'ANOTHER_LARGE_TABLE'
SYS>/

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 142 537

SYS>

HEMANT>commit;

Commit complete.

HEMANT>

SYS>l
1 select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3* where segment_name = 'ANOTHER_LARGE_TABLE'
SYS>
SYS>/

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 281 1073.9375

SYS>

We see that the inserted rows took another 139 extents and did NOT reuse any of the existing blocks even though they were all candidates for new rows.

This is something you must be extremely careful about !!  A Parallel Insert will "grow" the table by allocating new extents, ignoring all usable blocks in the table.  The only exception is if you have TRUNCATEd the target table.

HEMANT>truncate table another_large_table reuse storage;

Table truncated.

HEMANT>insert /*+ PARALLEL */
2 into another_large_table
3 select /*+ PARALLEL */ *
4 from large_table;

4802944 rows created.

HEMANT>

SYS>select s.username, s.sql_id, t.used_ublk
2 from v$session s, v$transaction t
3 where s.taddr=t.addr
4 /

USERNAME SQL_ID USED_UBLK
------------------------------ ------------- ----------
HEMANT 8g72bx3jy79gy 1

SYS>select sql_fulltext
2 from v$sqlstats
3 where sql_id = '8g72bx3jy79gy';

SQL_FULLTEXT
--------------------------------------------------------------------------------
insert /*+ PARALLEL */
into another_large_table
select /*+ PARALLEL */ *
from la


SYS>

Note how the 4.8 million row Insert used only 1 Undo Block.
HEMANT>commit;

Commit complete.

HEMANT>

SYS>select s.username, s.sql_id, t.used_ublk
2 from v$session s, v$transaction t
3 where s.taddr=t.addr
4 /

no rows selected

SYS>
SYS>select owner, segment_name, tablespace_name, extents, bytes/1048576
2 from dba_segments
3 where segment_name = 'ANOTHER_LARGE_TABLE'
4 /

OWNER SEGMENT_NAME TABLESPACE_NAME EXTENTS BYTES/1048576
------------ ---------------------- --------------- ---------- -------------
HEMANT ANOTHER_LARGE_TABLE HEMANT 140 537

SYS>

The TRUNCATE allowed the next Parallel Insert to reuse the extents.

.
.
.

Categories: DBA Blogs

HOWTO: Building a JSON Database API (2)

Marco Gralike - Fri, 2015-05-01 09:12
After creating the initial structures to scrape some JSON content of Wikipedia, as mentioned in…

ControlCenter for WebCenter Content: Controlled Document Management for any Device

Fishbowl Solutions recently held a webinar about our newest product, ControlCenter for Oracle WebCenter Content. Product manager Kim Negaard discussed ControlCenter’s unique features and advantages for controlled document management. If you missed the webinar, you can now view it on YouTube:

Below is a summary of questions Kim answered during the webinar. If you have any unanswered questions, or would just like to learn more, feel free to contact Fishbowl by emailing cmsales@fishbowlsolutions.com.

Is this a custom component that I can apply to current documents and workflow processes, or are there additional customizations that need to be done?

ControlCenter is installed as a custom component and will work with current documents. You can identify which document types and workflows you want to be able to access through ControlCenter and additional customizations are not necessary.

Does the metadata sync with the header information in both Microsoft Word and Adobe PDF documents?

The metadata synchronization works with PDF and Word documents.

Do you need to have a specific template in order to synchronize?

Not really. There are two ways you can insert metadata into a document:

  1. You can use the header or footer area and replace anything existing in the current header with a standard header.
  2. You can use properties fields: for example, anytime the Microsoft Office properties field “document ID” occurs in a document, the dDoc name value could be inserted into that field.

In either of these cases, the formatting standards wouldn’t be rigid, but you would ideally want to know before rolling this out which approach you’d like to take.

What version of WebCenter do you need for this?

This is supported on WebCenter Content version 11.1.8 or above.

Is it completely built on component architecture as a custom component?

Yes.

Does ControlCenter require the Site Studio component to be enabled in order to work?

No, Site Studio doesn’t have to be enabled.

Does ControlCenter require Records Management to be installed on top of WCC?

No.

Does the custom component require additional metadata specific to ControlCenter?

Not exactly. We’ve made it pretty flexible; for example, with the scheduled reviews, we don’t force you to create a field called “review date”. We allow you to pick any date field you want to use for the scheduled reviews, so that if you already have something in place you could use it.

Where do you put ControlCenter if you don’t already have an existing server?

You do need to have an Oracle WebCenter Content server to run ControlCenter. If you don’t have a server, you’ll need to purchase a license for WebCenter Content. However, you don’t need any additional servers besides your WebCenter Content server.

Does the notification have to go to a specific author, or could you send it to a group or list in case that author is no longer with the organization?

The notification system is very flexible in terms of who you can send documents to. You can send it to a group, like an entire department or group of team leads, or it can be configured to send to just one person, like the document author or owner.

How does this work with profiles?

ControlCenter fully supports profiles. When you view content information for a document, it will display the metadata using the profile. If you check in a document using a check-in profile, then all of the metadata and values from that profile will be respected and enforced within ControlCenter. I should also mention that ControlCenter does support DCLs, so if you’re using DCLs those will be respected, both from a check in perspective but also in the metadata on the left. So as you are creating a browse navigation in ControlCenter, it will recognize your DCLs and allow you to filter with the proper relationships.

Does it integrate with or support OID (Oracle Internet Directory)/OAM (Oracle Access Manager)?

ControlCenter will use whatever authentication configuration you already have set up. So if you’re using OAM with WebCenter Content, then that’s what ControlCenter will use as well.

Does it support any custom metadata that has already been created?

Yes, if you have custom metadata fields that are already created, any of those can be exposed in ControlCenter.

Does it support any other customizations that have already been defined in the WebCenter Content instance?

It will depend on the nature of the customization. In general, if you have customized the WebCenter UI, those customizations would not show up in ControlCenter because ControlCenter has a separate UI; however, customizations on the back end, like workflow or security, would likely carry over into ControlCenter.

Does ControlCenter integrate with URM?

The ControlCenter interface isn’t specifically integrated with URM.

In case of a cluster environment, does ControlCenter need to be installed on both WebCenter Content servers?

Yes, if you have a clustered WebCenter Content environment, you would need to install ControlCenter on both/all nodes of the clustered environment.

Does it change anything within core WebCenter?

Not really. The only change to the core UI is an additional button in the Browse Content menu that will take you to the ControlCenter interface. But other than that, ControlCenter doesn’t change or prevent you from using the regular Content Server interface.

Can you customize the look and feel (icons, colors, etc.)?

Yes. We will work with you to customize the look and feel, widgets, etc. The architecture that we used when we created this supports customization.

The post ControlCenter for WebCenter Content: Controlled Document Management for any Device appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

HOWTO: Building a JSON Database API (1)

Marco Gralike - Fri, 2015-05-01 04:01
When learning new things, I always try to apply it to a hopefully useful and…

Links for 2015-04-30 [del.icio.us]

  • www.slideshare.net
    via Blogs.Oracle.com/IMC - Slideshows by User: oracle_imc_team http://www.slideshare.net/
Categories: DBA Blogs

Partners, Don’t PaaS On an Opportunity to Grow Your Oracle Applications Cloud Business

Usable Apps - Thu, 2015-04-30 21:56

I attended the Oracle HCM Cloud Partner Enablement Summit near Milan, Italy to explain the Oracle Applications User Experience (OAUX) enablement strategy of using Oracle Platform as a Service (PaaS) to extend the Oracle Applications Cloud. We enable partners to offer their customers even more: a great user experience across the Software as a Service (SaaS) applications portfolio. We call this PaaS4SaaS.

The central part of my charter is to drive an OAUX PaaS4SaaS strategy that resonates with the business needs of the Oracle PartnerNetwork (OPN) and our own sales enablement worldwide, but with the EMEA region as focus.

We have a great team that delivers Oracle PaaS and SaaS enablement and direct deal support, scaling our outreach message and running events so that the proven resources to win more business get into the hands of our partners and our sales teams.

Rapid Development Kit available for PaaS4SaaS

The OAUX team PaaS4SaaS enablement is based on a rapid development kit (RDK) strategy of simple development, design, and business materials. After a few hours, partners walk away from one of our events with Cloud solutions they can sell to customers.

Let me explain more broadly why our PaaS4SaaS approach is a partner differentiator and a competitive must-have, and about how you can be in on the action!

RDK simplified UI deployed to Oracle Java Cloud Service - SaaS Extension live!

During the event in Italy, I deployed live a tablet-first Oracle Applications Cloud simplified UI from the RDK to the Oracle Java Cloud Service - SaaS Extension (JCS-SX), demonstrating that our apps are not only simple to use, but easy to build, and therefore easy for partners to sell.

Make no mistake; PaaS4SaaS is the partner differentiator when it comes to competing in the cloud. Our enablement means partners can:  

  • Build Oracle Applications Cloud simplified UIs productively using Oracle PaaS and SaaS.
  • Offer customization and integration confidence to customers in the cloud: they’ll get the same great user experience that Oracle delivers out of the box.
  • Identify new reusable business opportunities in the cloud and win more deals.
  • Accelerate innovation and SaaS adoption and increase the range of value-add PaaS solutions offered to customers.
  • Sharpen sales and consulting strategies using the user experience message, and take your position in the partner world to a new level.

But don’t just take it from me, check out the wisdom of the cloud, and what our partners, the press, and Oracle’s leadership team have to say about PaaS and SaaS:  

Here are the partner requirements to start that conversation with you about OAUX enablement: 

  • Do you have use cases for PaaS and the Oracle Applications Cloud? 
  • Do you want user experience (UX) as the partner differentiator? 
  • Are you an Oracle Applications Cloud (ERP, HCM, Sales) partner who wants to lead and influence? 
  • Do you have Oracle ADF, Oracle Fusion Middleware, SOA, Cloud or other development skills in-house or by way of an alliance? 
  • Are you willing to participate jointly in Oracle outreach and communications about your enablement and the outcome? 

So, if you want what other partners in EMEA and the U.S. have already experienced, and you tick all the requirements boxes, get in touch with me.

For more information on our PaaS4SaaS enablement, check out these links:  

Simple test of DB server CPU speed

Bobby Durrett's DBA Blog - Thu, 2015-04-30 16:55

I’m trying to compare two types of database servers and it looks like one has a faster CPU than the other.  But, the benchmark I have used runs a complicated variety of SQL so it is hard to really pin down the CPU performance.  So, I made up a simple query that eats up a lot of CPU and does not need to read from disk.

First I created a small table with five rows:

create table test (a number);

insert into test values (1);
insert into test values (1);
insert into test values (1);
insert into test values (1);
insert into test values (1);

commit;

Then I ran a query Cartesian joining that table to itself multiple times:

select
sum(t1.a)+
sum(t2.a)+
sum(t3.a)+
sum(t4.a)+
sum(t5.a)+
sum(t6.a)+
sum(t7.a)+
sum(t8.a)+
sum(t9.a)+
sum(t10.a)
from 
test t1,
test t2,
test t3,
test t4,
test t5,
test t6,
test t7,
test t8,
test t9,
test t10;

Then I used one of my profile scripts to extract the CPU time.  Here is a typical output:

SUBSTR(TIMESOURCE,1,30)           SECONDS PERCENTAGE
------------------------------ ---------- ----------
TOTAL_TIME                             32        100
CPU                                    32        100

I edited the output to make it fit.  The profile shows the time that the query spent on the CPU in seconds.
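
As a rough cross-check, the CPU and elapsed times can also be pulled straight from V$SQL (the LIKE pattern below is an assumption about how the test query text starts):

SELECT sql_id,
       ROUND(cpu_time / 1000000, 1)     AS cpu_seconds,
       ROUND(elapsed_time / 1000000, 1) AS elapsed_seconds
FROM   v$sql
WHERE  sql_text LIKE 'select%sum(t1.a)%';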

I tried multiple runs of the same query and kept adding tables to the join to make the query longer.

This zip includes the sql scripts that I ran and my spreadsheet with the results: zip

I was comparing an Itanium and a Xeon processor and the test query ran in about half the time on the Xeon.  I realize that this is not a complete benchmark, but it is some information.  My other testing is not targeted specifically to CPU but I also saw a significant CPU speed-up there as well.  So, this simple query adds to the evidence that the Xeon processor that I am evaluating is faster than the Itanium one.

– Bobby

Categories: DBA Blogs

Using Windows 2012 R2 & dynamic witness feature with minimal configurations - part II

Yann Neuhaus - Thu, 2015-04-30 11:53

I wrote a blog post some time ago about using a file share witness with a minimal Windows failover cluster configuration that consists of two cluster nodes. In that blog post, I said I was reluctant to use a witness in this case because it introduces a weakness in the availability process. Indeed, the system is not able to adjust node weight in this configuration, but that does not mean that we don't need a witness in this case, and this is what I want to clarify here.

Let’s set the scene with a pretty simple Windows failover cluster architecture that includes two nodes and with dynamic quorum but without a configured witness. The node vote configuration is as follows:

 

blog_38_-_1_-_cluster_nodes_state

 

At this point the system will randomly assign the node weight to one of the currently available nodes. For instance, in my context the vote is assigned to the SQL143 node, but there is a weakness in this configuration. Let's first say the node SQL141 goes down in an unplanned scenario. In this case the cluster keeps functioning because the node SQL143 has the vote (last man standing). Now, let's say this time the node SQL143 goes down in an unplanned scenario. In this case the cluster will lose the quorum because the node SQL141 doesn't have the vote to survive. You will find related entries in the cluster event log, as shown in the next picture, with two specific event ids (1135 and 1177).

 

blog_38_-_2_-_event_viewer

 

However, in the event the node SQL143 is gracefully shut down, the cluster will be able to remove the vote of the node SQL143 and give it to the node SQL141. But you know, I'm a follower of Murphy's law: anything that can go wrong will go wrong, and it is particularly true in the IT world.

So we don't have the choice here. To protect against unplanned failure with two nodes, we should add a witness, and at this point you may use either a disk or a file share witness. My preference is to promote the disk quorum type first, but it is often not suitable for customers, especially for geo-cluster configurations. In this case using a file share witness is very useful, but it might introduce some important considerations about quorum resiliency. First of all, I want to exclude scenarios where the cluster resides in one datacenter. There are no real considerations here because the loss of the datacenter implies the unavailability of the entire cluster (and surely other components).

Let's talk about geo-location clusters, often used with SQL Server availability groups, where important considerations must be made about the file share witness location. Indeed, most of my customers are dealing with only two datacenters, and in this case the $100 question is: where to place it? Most of the time, we will place the witness in the location of what we can call the primary datacenter. If the connectivity is lost between the two datacenters, the service keeps functioning in the primary datacenter. However, a manual activation will be required in the event of a full primary datacenter failure.

 

blog_38_-_3_-_geo_clust_primary_without_change

blog_38_-_3_-_geo_clust_primary_

 

 

 

Another scenario consists of placing the witness in the secondary datacenter. Unlike our first scenario, a network failure between the two datacenters will trigger an automatic failover of the resources to the secondary datacenter, but in the event of a complete failure of the secondary datacenter, the cluster will lose the quorum (as a reminder, the remaining node is not able to survive).

 

blog_38_-_4_-_geo_clust_secondary_

        blog_38_-_4_-_geo_clust_secondary_failover

 

 

As you can see, each of the aforementioned scenarios has its advantages and drawbacks. A better situation would be to have a third datacenter to host the witness. Indeed, in the event of a network failure between the two datacenters that host the cluster nodes, the vote will this time be assigned to the node which first successfully locks the file share witness.

Keep in mind that even in this third case, losing the witness, because of either a network failure between the two main datacenters and the third datacenter or the file share used by the witness being deleted accidentally by an administrator, can compromise the entire cluster's availability in the event of a failure of the node which has the vote. So be sure to monitor this critical resource correctly.

So, I would finish with a personal thought. I always wondered why, in the case of a minimal configuration (only 2 cluster nodes and a FSW), the cluster was not able to perform weight adjustment. Until now, I haven't got a response from Microsoft, but after some time, I think this weird behavior is quite normal. Let's imagine the scenario where your file share witness resource is in a failed state and the cluster is able to perform weight adjustment. Which of the nodes should it choose? The primary or the secondary? In fact it doesn't matter, because in both cases the next failure of the node which has the vote will also shut down the cluster. Finally, it is just delaying an inevitable situation …

Happy clustering !

Oracle Priority Support Infogram for 30-APR-2015

Oracle Infogram - Thu, 2015-04-30 10:50

Linux
From Oracle’s Linux Blog: Friday Spotlight: Oracle Linux 7 - Optimizing Deployment Flexibility and Increasing ROI.
And from the same source: The April 2015 Oracle Linux Newsletter is Now Available
Fusion
From Fusion Applications Performance Issues, How to Increase Performance With Business Events in Fusion Applications
And from Fusion Applications Developer Relations: JDeveloper Versions Revisited
Java
Less Noisy NetBeans, from Geertjan’s Blog.
Mobile Computing
Oracle Database Mobile Server 12c (12.1.0.0.0) Released, from Oracle Partner Hub: ISV Migration Center Team.
SOA
Implementing Coherence and OWSM out-of-process on OSB, from the SOA & BPM Partner Community Blog.
Social Technology
What Does Facebook’s API Change Mean?, from Oracle Social Spotlight.
MySQL
From Oracle’s MySQL Blog: Importing and Backing Up with mysqldump
Demantra
From the Oracle Demantra blog: The table_reorg procedure ran for SALES_DATA errored due to lack of tablespace. Can I delete the RDF segments?
EBS
From the Oracle E-Business Suite Support blog:
Critical Update to the EBS Procurement Approvals Analyzer (Version 200.3)
So really why should I apply the March 2015 12.1.3: Procurement Family Update - Rollup Patch?
From the Oracle E-Business Suite Technology blog:
High Availability Configuration for Integrated SOA Gateway R12.2
Database Migration Using 12cR1 Transportable Tablespaces Now Certified for EBS 12.1.3
Best Practices for Optimizing Oracle XML Gateway Performance
…and Finally

Now you can listen to Wikipedia. No, not sounds on Wikimedia, but the sounds of edits being entered in various languages on Wikipedia itself. There’s a list on the bottom to add and remove languages to the stream.

SQLcl connections - Lazy mans SQL*Net completion

Barry McGillin - Thu, 2015-04-30 10:30
Turloch posted this today, which is like aliases for SQL*Net connection URLs, which are used in connections like this:

connect <USERNAME>/<Password>@URL


This works great and you can simplify the connection strings that you use.  Vadim wired this into the code completion, and we can now code-complete, via a key, a connection string that you have used before, or you can set up a new one using the net command.


jQuery Toggle Image

Denes Kubicek - Thu, 2015-04-30 06:20
For this, you will need two lines of code. In case you want to save the values, there is a bit more work to do but everything can be done using standards.

Categories: Development

Windows Server Next, AKA Windows Server 2016!

Yann Neuhaus - Thu, 2015-04-30 02:10

The first technical preview of the future version of Windows Server was, since last October, available (here) and a second one with more new features should be available in May.
The final version, which should normally have been released in 2015, has recently been postponed to 2016. It will be the first time that the client and server releases are decoupled.

This new version of Windows Server will include:

  • new and changed functionalities for Hyper-V
  • improvements for Remote Desktop Services
  • new and updated functionalities for Failover Clustering
  • significant new features with PowerShell 5.0
  • directory services, Web application proxy and other features
  • ...

According to Microsoft employee Jeffrey Snover, the next version of Windows Server has been deeply refactored to really build a Cloud-optimized server: a server which is deeply refactored for a cloud scenario.

b2ap3_thumbnail_WinowsServer2016_Roadmap.jpg

The goal is to scope out my needs in order to use only the required components.

On top of this Cloud-Optimized server, the full server will be built: the same server that we have for the moment, compatible with what we have, but with two application profiles:

  • the first application profile will target the existing set of server APIs
  • the second will be a subset of APIs which will be cloud-optimized

Microsoft is also working to further clarify the difference between Server and Client, to avoid mixing client APIs and server APIs, for example.

Microsoft will also introduce Docker containers to its new Windows Server 2016! A container is a compute environment, also called a compute container.

We will have two flavors of compute containers:

  • one for application compatibility (server running in a container)
  • a second optimized for the cloud (cloud-optimized server)


Docker container

The goal of Docker is to embed an application into a virtual container. Via the container, the application will be able to be executed without any problem on Windows or Linux servers. This technology will facilitate the deployment of applications and is offered as open source under the Apache license by an American company called Docker.

b2ap3_thumbnail_WinowsServer2016_Docker1.jpgb2ap3_thumbnail_WinowsServer2016_Docker2.jpg

A container is very lightweight as it does not contain its own operating system. In fact, it uses the host machine for all of its system calls.

Migration of Docker containers will be easier as they are small.

The biggest cloud providers, like Amazon with AWS, Microsoft with Azure, and Google with Google Compute, have already integrated this new technology... Dealing with Docker containers gives the possibility to migrate from one cloud to another easily.


Nano server

In addition, the Docker container technology which will come with Windows Server 2016 will be part of a set of application deployment services, called Nano Server.

According to an internal Microsoft presentation published by WZor, Nano Server is presented as “The future of Windows Server”.

Nano Server will follow a zero-footprint model: server roles and optional features will reside outside of it. There will be no binaries or metadata in the image; they will just be standalone packages.

Hyper-V, Clustering, Storage, Core CLR, ASP.NET V.Next, PaaS v2, containers will be part of the new roles and features.

The goal will also be to change the mentality of server management: the trend is towards remote management and process automation via Core PowerShell and WMI. In order to facilitate remote management, local tools like Task Manager, Registry Editor, Event Viewer... will be replaced by web-based tools accessible via a remote connection.

This new solution will be integrated also in Visual Studio.

 

In conclusion, WZor summarized Nano Server as “a nucleus of next-gen cloud infrastructure and applications”. This shows the direction that Microsoft wants to give to Windows Server 2016: even better integration to the cloud, optimization for new distributed applications and management facilitation.

Links for 2015-04-29 [del.icio.us]

Categories: DBA Blogs

getting started with postgres plus advanced server (2) – setting up a backup and recovery server

Yann Neuhaus - Wed, 2015-04-29 23:50

The first post in this series explained how to get ppas installed on a linux system. Now that the database cluster is up and running, we should immediately take care of backup and recovery. For this I'll use another system where I'll install and configure bart. So, the system overview for now is:

server      ip address        purpose
ppas        192.168.56.243    ppas database cluster
ppasbart    192.168.56.245    backup and recovery server


As bart requires the postgres binaries I'll just repeat the ppas installation on the bart server. Check the first post on how to do that.

tip: there is a "--extract-only" switch which only extracts the binaries without bringing up a database cluster.

After that just install the bart rpm:

yum localinstall edb-bart-1.0.2-1.rhel6.x86_64.rpm

All the files will be installed under:

ls -la /usr/edb-bart-1.0/
total 20
drwxr-xr-x.  4 root root    44 Apr 23 13:41 .
drwxr-xr-x. 14 root root  4096 Apr 23 13:41 ..
drwxr-xr-x.  2 root root    17 Apr 23 13:41 bin
drwxr-xr-x.  2 root root    21 Apr 23 13:41 etc
-rw-r--r--.  1 root root 15225 Jan 27 15:24 license.txt

Having a dedicated user for bart is a good idea:

# groupadd bart
# useradd -g bart bart
# passwd bart
Changing password for user bart.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

As backups need some space a top level directory for all the bart backups needs to be created:

# mkdir /opt/backup
chown bart:bart /opt/backup
chmod 700 /opt/backup
mkdir -p /opt/backup/ppas94/archived_wals

Now everything is in place to start the bart configuration. A minimal configuration file would look like this:

cat /usr/edb-bart-1.0/etc/bart.cfg
[BART]
bart-host = bart@192.168.56.245
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = enterprisedb
description = "PPAS 94 server"

The BART section is the global section while the next sections are specific to the database clusters to back up and restore. As bart requires passwordless ssh authentication between the bart host and the database host to be backed up, let's set this up. On the bart host (ppasbart):

su - bart
ssh-keygen -t rsa

On the host where database runs ( ppas ):

su -
cd /opt/PostgresPlus/9.4AS
mkdir .ssh
chown enterprisedb:enterprisedb .ssh/
chmod 700 .ssh/
su - enterprisedb
ssh-keygen -t rsa

As the public keys are now available we'll need to make them available on each host. On the ppas host:

cat .ssh/id_rsa.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys

Add the public key from the bart host to the authorized_keys file above. Example: get the public key from the bart host:

[bart@ppasbart ~]$ id
uid=1001(bart) gid=1001(bart) groups=1001(bart) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[bart@ppasbart ~]$ cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local

Copy/paste this key into the authorized_keys file for the enterprisedb user on the database host, so that the file looks similar to this:

cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local
[bart@ppasbart ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQZWeegLpqVB20c3cIN0Bc7pN6OjFM5pBsunDbO6SQ0+UYxZGScwjnX9FSOlmYzqrlz62jxV2dOJBHgaJj/mbFs5XbmvFw6Z4Zj224aBOXAfej4nHqVnn1Tpuum4HIrbsau3rI+jLCNP+MKnumwM7JiG06dsoG4PeUOghCLyFrItq2/uCIDHWoeQCqqnLD/lLG5y1YXQCSR4VkiQm62tU0aTUBQdZWnvtgskKkHWyVRERfLOmlz2puvmmc5YxmQ5XBVMN5dIcIZntTfx3JC3imjrUl10L3hkiPkV0eAt3KtC1M0n9DDao3SfHFfKfEfp5p69vvpZM2uGFbcpkQrtN l
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN
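
If ssh-copy-id is available, the same key exchange can be done without manual copy/paste. A minimal sketch (assuming password authentication is still enabled at this point): as bart on the bart host push the key to the database host, and as enterprisedb on the database host push the key to the bart host:

[bart@ppasbart ~]$ ssh-copy-id enterprisedb@ppas
-bash-4.2$ ssh-copy-id bart@ppasbart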

Make the authorized_keys file identical on the bart host and test that you can connect without a password:

[bart@ppasbart ~]$ hostname
ppasbart.local
[bart@ppasbart ~]$ ssh bart@ppasbart
Last login: Thu Apr 23 14:24:39 2015 from ppas
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
[bart@ppasbart ~]$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:24:47 2015 from ppas
-bash-4.2$ logout
Connection to ppas closed.

Do the same test on the ppas host:

bash-4.2$ hostname
ppas.local
-bash-4.2$ ssh bart@ppasbart
Last login: Thu Apr 23 14:22:07 2015 from ppasbart
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
-bash-4.2$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:22:18 2015 from ppasbart
-bash-4.2$ logout
Connection to ppas closed.
-bash-4.2$ 

Once this works we need to set up a replication user in the database being backed up. So create the user in the database running on the ppas host (I'll do that as the enterprisedb user instead of the postgres user, as we'll need to adjust the pg_hba.conf file right after creating the user):

[root@ppas 9.4AS]# su - enterprisedb
Last login: Thu Apr 23 14:25:50 CEST 2015 from ppasbart on pts/1
-bash-4.2$ . pgplus_env.sh
-bash-4.2$ psql -U enterprisedb
psql.bin (9.4.1.3)
Type "help" for help.

edb=# CREATE ROLE bart WITH LOGIN REPLICATION PASSWORD 'bart';       
CREATE ROLE
edb=# exit
-bash-4.2$ echo "host    all     bart         192.168.56.245/32          md5" >> data/pg_hba.conf
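
Note that pg_basebackup connects over the replication protocol, so depending on what your pg_hba.conf already contains you may also need a replication entry for the bart user (this is an assumption about the existing configuration, adjust as needed):

-bash-4.2$ echo "host    replication     bart         192.168.56.245/32          md5" >> data/pg_hba.conf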

Make sure that the IP matches your bart host. Then adjust the bart.cfg file on the bart host to match your configuration:

cat /usr/edb-bart-1.0/etc/bart.cfg
[BART]
bart-host = bart@192.168.56.245
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = bart
remote-host = enterprisedb@192.168.56.243
description = "PPAS 94 remote server"

Another requirement is that the bart database user must be able to connect to the database without being prompted for a password. Thus we create the .pgpass file on the bart host, which is used for reading the password:

[bart@ppasbart ~]$ cat .pgpass 
192.168.56.243:5444:*:bart:bart
[bart@ppasbart ~]$ chmod 600 .pgpass
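
To check that the password file is actually picked up, you can connect once from the bart host with psql in no-prompt mode (this assumes the PPAS client binaries are installed on the bart host, which the pg_basebackup_path in bart.cfg suggests):

[bart@ppasbart ~]$ /opt/PostgresPlus/9.4AS/bin/psql -w -h 192.168.56.243 -p 5444 -U bart -d edb -c "select version();"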

As a last step we need to enable WAL archiving on the database cluster that should be backed up. The following parameters need to be set in the postgresql.conf file:

wal_level = archive  # or higher
archive_mode = on
archive_command = 'scp %p bart@192.168.56.245:/opt/backup/ppas94/archived_wals/%f'
max_wal_senders = 1  # or higher

Once done, restart the database cluster:

su -
service ppas-9.4 restart
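
To verify that archiving really works, you can force a WAL switch on the ppas host (as the enterprisedb user, with pgplus_env.sh sourced) and check the archiver statistics; pg_stat_archiver is available as of 9.4:

-bash-4.2$ psql -U enterprisedb -c "select pg_switch_xlog();"
-bash-4.2$ psql -U enterprisedb -c "select last_archived_wal, failed_count from pg_stat_archiver;"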

Let's see if bart can see anything on the bart server:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-SERVERS -s PPAS94 
Server name         : ppas94
Host name           : 192.168.56.243
User name           : bart
Port                : 5444
Remote host         : enterprisedb@192.168.56.243
Archive path        : /opt/backup/ppas94/archived_wals
WARNING: xlog-method is empty, defaulting to global policy
Xlog Method         : fetch
Tablespace path(s)  : 
Description         : "PPAS 94 remote server"

Looks fine. So let's do a backup:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s PPAS94

INFO:  creating backup for server 'ppas94'
INFO:  backup identifier: '1429795268774'
WARNING: xlog-method is empty, defaulting to global policy
56357/56357 kB (100%), 1/1 tablespace

INFO:  backup checksum: 6e614f981902c99326a7625a9c262d98
INFO:  backup completed successfully

Cool. Let's see what is in the backup catalog:

[root@ppasbart tmp]# ls -la /opt/backup/
total 0
drwx------. 3 bart bart 19 Apr 23 15:02 .
drwxr-xr-x. 4 root root 38 Apr 23 13:49 ..
drwx------. 4 bart bart 46 Apr 23 15:21 ppas94
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/
total 4
drwx------. 4 bart bart   46 Apr 23 15:21 .
drwx------. 3 bart bart   19 Apr 23 15:02 ..
drwx------. 2 bart bart   36 Apr 23 15:21 1429795268774
drwx------. 2 bart bart 4096 Apr 23 15:21 archived_wals
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/1429795268774/
total 56364
drwx------. 2 bart bart       36 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-rw-r--. 1 bart bart       33 Apr 23 15:21 base.md5
-rw-rw-r--. 1 bart bart 57710592 Apr 23 15:21 base.tar
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/archived_wals/
total 81928
drwx------. 2 bart bart     4096 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-------. 1 bart bart 16777216 Apr 23 15:10 000000010000000000000002
-rw-------. 1 bart bart 16777216 Apr 23 15:13 000000010000000000000003
-rw-------. 1 bart bart 16777216 Apr 23 15:20 000000010000000000000004
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000005
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000006
-rw-------. 1 bart bart      304 Apr 23 15:21 000000010000000000000006.00000028.backup
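
If you want to double-check the integrity of a stored backup, the md5 of base.tar should match the checksum bart reported during the backup (and stored in base.md5):

[root@ppasbart tmp]# md5sum /opt/backup/ppas94/1429795268774/base.tar
[root@ppasbart tmp]# cat /opt/backup/ppas94/1429795268774/base.md5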

Use the SHOW-BACKUPS switch to get an overview of the available backups:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-BACKUPS 
 Server Name   Backup ID       Backup Time           Backup Size  
                                                                  
 ppas94        1429795268774   2015-04-23 15:21:23   55.0371 MB   
 ppas94        1429795515326   2015-04-23 15:25:18   5.72567 MB   
 ppas94        1429795614916   2015-04-23 15:26:58   5.72567 MB   
                                                                  

A backup without a restore proves nothing, so let's try to restore one of the backups to a different directory on the ppas server:

[root@ppas 9.4AS]# mkdir /opt/PostgresPlus/9.4AS/data2
[root@ppas 9.4AS]# chown enterprisedb:enterprisedb /opt/PostgresPlus/9.4AS/data2

On the ppasbart host do the restore:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2
INFO:  restoring backup '1429795614916' of server 'ppas94'
INFO:  restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO:  base backup restored
INFO:  archiving is disabled
INFO:  backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2

Looks good. Let's see what is in the data2 directory on the ppas host:

[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2
backup_label  dbms_pipe  pg_clog      pg_hba.conf    pg_log      pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.auto.conf
base          global     pg_dynshmem  pg_ident.conf  pg_logical  pg_notify     pg_serial    pg_stat       pg_subtrans  pg_twophase  pg_xlog     postgresql.conf
[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2/pg_xlog
000000010000000000000008  archive_status

Looks good, too. As the restored cluster lives on the same server as the original one, we need to change the port in postgresql.conf before bringing up the database:
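
One way to do this (assuming the restored postgresql.conf still carries the original port 5444) is a quick sed as the enterprisedb user:

-bash-4.2$ sed -i 's/^port = 5444/port = 5445/' /opt/PostgresPlus/9.4AS/data2/postgresql.conf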

-bash-4.2$ grep port postgresql.conf  | head  -1
port = 5445				# (change requires restart)
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:01:30 CEST FATAL:  data directory "/opt/PostgresPlus/9.4AS/data2" has group or world access
2015-04-23 16:01:30 CEST DETAIL:  Permissions should be u=rwx (0700).

Ok, fine. Change it:

-bash-4.2$ chmod 700 /opt/PostgresPlus/9.4AS/data2
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:02:00 CEST LOG:  redirecting log output to logging collector process
2015-04-23 16:02:00 CEST HINT:  Future log output will appear in directory "pg_log".

Seems ok, let's connect:

-bash-4.2$ psql -p 5445 -U bart
Password for user bart: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=> \l
                                           List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    | ICU |       Access privileges       
-----------+--------------+----------+-------------+-------------+-----+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
(4 rows)

Cool. Works. But: archiving is disabled and you'll need to enable it again. This is the default behavior of bart, as it appends "archive_mode = off" to the end of the restored postgresql.conf. Take care to adjust the archive_command parameter as well, otherwise the restored cluster would scp its archived WALs to the same directory on the ppasbart server as the original database does. Can we do a point-in-time recovery? Let's try (I'll destroy the restored database cluster and use the same data2 directory):
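
If you do want the restored cluster to archive WALs again, re-enable archive_mode and point archive_command at a separate directory on the bart host; the directory below is only an illustration and would have to exist beforehand:

archive_mode = on
archive_command = 'scp %p bart@192.168.56.245:/opt/backup/ppas94_data2/archived_wals/%f'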

-bash-4.2$ pg_ctl -D data2 stop -m fast
waiting for server to shut down.... done
server stopped
-bash-4.2$ rm -rf data2/*
-bash-4.2$ 

Let's try the restore to a specific point in time:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2 -g '2015-04-03 15:23:00'
INFO:  restoring backup '1429795614916' of server 'ppas94'
INFO:  restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO:  base backup restored
INFO:  creating recovery.conf file
INFO:  archiving is disabled
INFO:  backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2

Seems ok, but what is the difference? When a point in time is specified, a recovery.conf file is created for the restored database cluster:

-bash-4.2$ cat data2/recovery.conf
restore_command = 'scp -o BatchMode=yes -o PasswordAuthentication=no bart@192.168.56.245:/opt/backup/ppas94/archived_wals/%f %p'
recovery_target_time = '2015-04-03 15:23:00'

Let's start the database (after changing the port again in postgresql.conf):

-bash-4.2$ pg_ctl -D data2 start
server starting
-bash-4.2$ 2015-04-23 16:16:12 CEST LOG:  redirecting log output to logging collector process
2015-04-23 16:16:12 CEST HINT:  Future log output will appear in directory "pg_log".

Are we able to connect?

-bash-4.2$ psql -U bart -p 5445 
Password for user bart: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=>

Works, too. So now we have a central backup server for our PostgreSQL infrastructure from which backups and restores can be executed. Combine this with backup software (like NetBackup, etc.) that picks up the backups from the bart server and you should be fine. In the next post we'll set up a hot standby database server.

R12.2 Single file system

Vikram Das - Wed, 2015-04-29 23:21
With the release of AD and TXK Delta 6, Oracle has provided the option of a single file system on development instances for R12.2. Here's what is mentioned in the support.oracle.com article Oracle E-Business Suite Applications DBA and Technology Stack Release Notes for R12.AD.C.Delta.6 and R12.TXK.C.Delta.6 (Doc ID 1983782.1):
Enhancements in AD and TXK Delta 6
4. New and Changed Features
Oracle E-Business Suite Technology Stack and Oracle E-Business Suite Applications DBA contain the following new or changed features in R12.AD.C.Delta.6 and R12.TXK.C.Delta.6.
4.1 Support for single file system development environments
  • A normal Release 12.2 online patching environment requires one application tier file system for the run edition, and another for the patch edition. This dual file system architecture is fundamental to the patching of Oracle E-Business Suite Release 12.2 and is necessary for production environments and test environments that are meant to be representative of production. This enhancement makes it possible to have a development environment with a single file system, where custom code can be built and tested. A limited set of adop phases and modes are available to support downtime patching of such a development environment. Code should then be tested in standard dual file system test environments before being applied to production.
More details are provided in the Oracle E-Business Suite Maintenance Guide (Chapter: Patching Procedures):
http://docs.oracle.com/cd/E26401_01/doc.122/e22954/T202991T531065.htm#6169002 

Support for Single File System Development Environments
A normal Release 12.2 online patching environment requires two application tier file systems, one for the run edition and another for the patch edition. This dual file system architecture is fundamental to patching of Oracle E-Business Suite Release 12.2, and is necessary both for production environments and test environments that are intended to be representative of production. This feature makes it possible to create a development environment with a single file system, where custom code can be built and tested. The code should then always be tested in a standard dual file system test environment before being applied to production.
You can set up a single file system development environment by installing Oracle E-Business Suite Release 12.2 in the normal way, and then deleting the $PATCH_BASE directory with the command:
$ rm -rf $PATCH_BASE
A limited set of adop phases and modes are available to support patching of a single file system development environment. These are:
·         apply phase in downtime mode
·         cleanup phase
Specification of any other phase or mode will cause adop to exit with an error.
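For example, a downtime patch cycle on such a development environment could look like this (the patch number is just a placeholder):
adop phase=apply patches=123456 apply_mode=downtime
adop phase=cleanup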
The following restrictions apply to using a single file system environment:
·         You can only use a single file system environment for development purposes.
·         You cannot use online patching on a single file system environment.
·         You can only convert an existing dual file system environment to a single file system: you cannot directly create a single file system environment via Rapid Install or cloning.
·         There is no way to convert a single file system environment back into a dual file system.
·         You cannot clone from a single file system environment.
Categories: APPS Blogs