Feed aggregator

OCP 12C – Oracle Data Redaction

DBA Scripts and Articles - Mon, 2014-10-27 08:00

What is Oracle Data Redaction? Oracle Data Redaction is meant to mask (redact) sensitive data returned from application queries. Oracle Data Redaction doesn’t make changes to the data on disk; the sensitive data is redacted on the fly before it is returned to the application. You can redact column data by using one of the following methods: Full … Continue reading OCP 12C – Oracle Data Redaction
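For illustration, here is a minimal, hypothetical sketch of a full redaction policy on an HR.EMPLOYEES.SALARY column (schema, table and policy names are made up); with full redaction, the column is simply returned as 0 to non-exempt sessions:

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',                  -- hypothetical schema
    object_name   => 'EMPLOYEES',           -- hypothetical table
    column_name   => 'SALARY',
    policy_name   => 'redact_salary',
    function_type => DBMS_REDACT.FULL,      -- full redaction: SALARY is returned as 0
    expression    => '1=1');                -- policy applies to every query
END;
/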

The post OCP 12C – Oracle Data Redaction appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs


PeopleTools 8.54 Feature: Application Engine Trace File Enhancements

Javier Delgado - Mon, 2014-10-27 04:01
In this blog, we have been reviewing the new features of PeopleTools 8.54. Today it is Application Engine's turn, particularly its troubleshooting. This release of PeopleTools includes several enhancements to Application Engine tracing, which are outlined below:


  • The .AET trace file can now include the PeopleCode trace. This removes the need to check the .AET file for the non-PeopleCode steps and the .TRC file for the PeopleCode steps. Granted, the .TRC files could already contain the SQL executed in non-PeopleCode steps if needed, but they were significantly more difficult to read, as the SQL statements were not formatted.


This new feature is enabled by setting the TraceAECombineOutput parameter in the server configuration file for Application Server or Process Scheduler Server.

TraceAECombineOutput=Y


  • You can set the file size of the Application Engine trace file. This way, if the trace file exceeds the threshold, it is split into a new file. For certain processes this can be quite handy, as trace files sometimes become unmanageably large.

This new feature is enabled by setting the AETFileSize parameter in the server configuration file for Application Server or Process Scheduler Server. The size is measured in Megabytes.

AETFileSize=20

  • You can select which sections of an Application Engine program should be traced and which should not. This helps reduce unneeded trace information by focusing only on the potential error areas.


This new feature is enabled by setting the TraceAEEnableSection parameter in the server configuration file for Application Server or Process Scheduler Server.


TraceAEEnableSection=Y


Then, using Application Designer, you should mark the sections you want to trace. Keep in mind that by default all sections are unmarked:

In order to enable the flag in Application Designer, the Enable Section Trace(g) setting has to be enabled in Configuration Manager:


Note: As far as I can tell, you can only set this flag when you create a new section. If you need to modify an existing one, you would need to copy and paste, and then remove the original one. Have any of you found a more efficient way of setting the flag?


  • The Application Engine Trace file name now includes the Date/Time stamp.


These enhancements should simplify troubleshooting of Application Engine program issues, particularly those containing a significant amount of PeopleCode processing or generating very large trace files.

Getting Started with Impala Interactive SQL for Apache Hadoop by John Russell; O'Reilly Media

Surachart Opun - Sat, 2014-10-25 12:52
Impala is an open source query engine that runs on Apache Hadoop. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. If you are looking for a book to get started with it, consider Getting Started with Impala: Interactive SQL for Apache Hadoop by John Russell (@max_webster). It helps readers write, tune, and port SQL queries and other statements for a Big Data environment using Impala. The SQL examples in this book start from a simple base for easy comprehension, then build toward best practices that demonstrate high performance and scalability. Readers can download the QuickStart VMs, install them, and then work through the examples in the book.
The book does not cover installing Impala or troubleshooting installation and configuration issues. It has five chapters and is not long, but it is enough to show how to use Impala (interactive SQL) and it has good examples. Chapter 5 – Tutorials and Deep Dives – is the highlight of the book, and its examples are very useful.
Free Sampler.

This book helps readers:
  • Learn how Impala integrates with a wide range of Hadoop components
  • Attain high performance and scalability for huge data sets on production clusters
  • Explore common developer tasks, such as porting code to Impala and optimizing performance
  • Use tutorials for working with billion-row tables, date- and time-based values, and other techniques
  • Learn how to transition from rigid schemas to a flexible model that evolves as needs change
  • Take a deep dive into joins and the roles of statistics (see the short sketch below)
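As a taste of that last bullet, here is a hedged sketch (the orders and customers tables are made up) of the statistics workflow that lets the Impala planner choose a sensible join strategy:

-- Gather table and column statistics on the hypothetical tables:
COMPUTE STATS orders;
COMPUTE STATS customers;

-- With statistics in place, the planner can pick a better join order:
SELECT c.name, SUM(o.total) AS total_spend
FROM orders o JOIN customers c ON o.customer_id = c.id
GROUP BY c.name
ORDER BY total_spend DESC
LIMIT 10;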
[test01:21000] > select "Surachart Opun" Name,  NOW() ;
Query: select "Surachart Opun" Name,  NOW()
+----------------+-------------------------------+
| name           | now()                         |
+----------------+-------------------------------+
| Surachart Opun | 2014-10-25 23:34:03.217635000 |
+----------------+-------------------------------+
Returned 1 row(s) in 0.14s

Author: John Russell (@max_webster)
Categories: DBA Blogs

Minimising Parse Time in Application Engine with ReUseStatement

David Kurtz - Fri, 2014-10-24 10:15
This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUse attribute.
ReUse Statement Flag
I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then.  There are still many places in the delivered PeopleSoft application that would benefit from it.  However, it is not possible to simply set the attribute across all Application Engine programs because it will cause errors in steps where SQL is dynamically generated, so each case must be considered.  Where the ReUse attribute is changed, it must be tested and migrated like any other customisation.

Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.
Application Engine Batch Timings
If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit Type of tracing
; --- ---------------
...
; 128 - Timings Report to AET file
...
; 1024 - Timings Report to tables
...
TraceAE=1152
;------------------------------------------------------------------------

Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

NB: The settings in the configuration file can be overridden by command line options.  If you specify -TRACE parameter in the Process Scheduler definitions remember to also set these flags.

Batch timings are written to the AE trace file at the end of an Application Engine program, and to the PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program.
It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database. As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them). This is a useful source of performance metrics.
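As a quick sketch of what those tables contain (using only the columns referenced in the profiling queries later in this article), the slowest recorded steps over the last day could be listed like this:

REM Illustration only: list the slowest steps recorded in the batch
REM timings tables for processes that ended in the last day.
SELECT l.process_name
,      d.bat_program_name||'.'||d.detail_id detail_id
,      d.compile_count, d.execute_count
,      (d.compile_time+d.execute_time)/1000 step_secs
FROM   ps_bat_timings_log l
,      ps_bat_timings_dtl d
WHERE  d.process_instance = l.process_instance
AND    l.enddttm > SYSDATE-1
ORDER BY step_secs DESC
/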
Compilations, Execution and ReUse
In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                          PeopleSoft Application Engine Timings
(All timings in seconds)

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                  8453      2.8    8453    685.6       0      0.0    688.4
...

With ReUseStatement enabled, there is now only a single compilation, and most of the time saved comes from execution rather than compilation.

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                       1      0.0    8453    342.3       0      0.0    342.3
...

So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.
Profile Compilations
This query produces a simple profile of batch timings for statements.
  • In sub-query x it extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.
  • There is a long-standing bug in batch timings where negative timings can be returned when the clock that returns milliseconds recycles back to zero every 2^16 milliseconds.  Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.
  • Arbitrarily, I am only looking at statements with more than a total of 10000 compilations.

REM ReUseCand.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN detail_id FORMAT a32
COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
SPOOL ReUseCand
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_Timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm > SYSDATE-7
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
FROM y
), a as (
SELECT process_name, detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
FROM z
GROUP BY process_name, detail_id)
SELECT * FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
SPOOL OFF

So now I have a list of steps with lots of compilations.  I know how long they took, but I don't know how much time I might save by enabling ReUseStatement. Doing so will save some time in Application Engine, but it will also cut down database parse time.  So now I need to determine the parse time from ASH data.

Process                                         Step    Compile    Process
Name         DETAIL_ID                          Secs      Count  Instances
------------ -------------------------------- ------ ---------- ----------
AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19

This query is based on the previous one, but includes scalar queries on the ASH data for each step.
  • WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.
  • This time in sub-query x I am looking for a rolling 7-day period up to the last hour, because ASH data will have been flushed to the ASH repository.
  • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry.  These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
  • The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.

REM ReUseCandASH.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
COLUMN detail_id FORMAT a32
COLUMN step_time HEADING 'AE|Step|SEcs' FORMAT 999990
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
SPOOL ReUseCandASH
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.begindttm, l.enddttm
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm >= TRUNC(SYSDATE-7,'HH24')
AND l.enddttm < TRUNC(SYSDATE,'HH24')
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_Elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END AS time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = x.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
AND h.in_hard_parse = 'Y'
) hard_parse_secs
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = X.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
) ash_secs
FROM y
), a AS (
SELECT /*process_name ,*/ detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
, SUM(hard_parse_secs) hard_parse_secs
, SUM(ash_secs) ash_secs
FROM z
GROUP BY /*process_name,*/ detail_id)
SELECT a.*
FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
spool off

Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything.  The Comment column in this profile was added manually as I did that.  Some statements I can change, some I have to accept the overhead.

                                   Step    Compile    Process      Parse         DB
DETAIL_ID                           Secs      Count  Instances       Secs       Time Comment
-------------------------------- ------ ---------- ---------- ---------- ---------- -------------------------------------
AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10      11820      11920 Set ReUseStatement
FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679        620       5870 Dynamic SQL, can’t set ReUseStatement
APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27       4020       4100 Set ReUseStatement
FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3       3150       3860 Dynamic SQL, can’t set ReUseStatement
FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716       2130       3520 Dynamic SQL, can’t set ReUseStatement
APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27       2730       2820 Set ReUseStatement
PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10        230       1920 Set ReUseStatement
FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3          0          0 Dynamic SQL, can’t set ReUseStatement
FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679        510       1290 Dynamic SQL, can’t set ReUseStatement
AP_APY2015.V_CREATE.Step14.H       1169      11094         19          0          0 Set ReUseStatement
GL_JETSE.GA100.CHKEDT.S            1121      15776        569        860        930 Dynamic SQL, can’t set ReUseStatement
FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679        450        990 Dynamic SQL, can’t set ReUseStatement
FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679         50        670 Dynamic SQL, can’t set ReUseStatement
FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679         40        790 Dynamic SQL, can’t set ReUseStatement
  • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
  • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.
Conclusion
Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set.  This technique picks out the relatively few places where doing so could potentially have a significant effect.

    APEX Tabular Forms Deep Dive at #ORCLAPEX NYC Meetup

    Marc Sewtz - Fri, 2014-10-24 09:35
    Ever heard of APEX$ROW_NUM, APEX$ROW_SELECTOR and APEX$ROW_STATUS? Did you know you can reference tabular form columns using bind variable syntax in your page processes? Are you familiar with the execution scope of tabular form processes for modified vs. submitted rows? Ever struggled using apex_application.g_fxx arrays with checkboxes? And do you really need to use the apex_item APIs? In APEX 4.0, 4.1 and 4.2 we've introduced lots of new tabular form features, including declarative tabular form validations and page processes, yet there are still PL/SQL processes being written that loop through the apex_application.g_fxx arrays, and tabular form regions that mix apex_item API calls with built-in form elements.
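    For context, here is a hedged sketch of the "old" pattern mentioned above: a page process looping over the apex_application.g_fxx arrays (the demo_orders table and the column-to-array mapping are hypothetical).

    BEGIN
      -- g_f01 is assumed to be the row-selector checkbox array, holding the row
      -- numbers of the checked rows; g_f02/g_f03 are assumed to map to the
      -- ORDER_ID and ORDER_TOTAL tabular form columns.
      FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
        UPDATE demo_orders
           SET order_total = apex_application.g_f03(apex_application.g_f01(i))
         WHERE order_id    = apex_application.g_f02(apex_application.g_f01(i));
      END LOOP;
    END;

    With the declarative tabular form processes introduced in 4.x, the same columns can be referenced with bind variable syntax instead, which is exactly the kind of thing this series will cover.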

    So when planning our next #ORCLAPEX NYC Meetup, we figured this would be a great topic to cover. And since we don't want to talk you to death during a single meetup, we decided to do a three-part series: starting slow and covering the basics during our next meetup on 11/06, digging a little deeper in part two, and concluding with a grand finale during the third meetup.

    Join us on Thursday, November 6th at the Oracle office in Manhattan, at 120 Park Ave. We're starting at 6pm and we'll serve pizza this time - thanks to everyone's generous donations last time.

    RSVP today:  #ORCLAPEX NYC Meetup

    OCP 12C – Resource Manager and Performance Enhancements

    DBA Scripts and Articles - Fri, 2014-10-24 08:16

    Use Resource Manager for a CDB and a PDB Managing Resources between PDBs The Resource Manager uses Shares and Utilization limits to manage resources allocated to PDBs. The more “Shares” you allocate to a PDB, the more resources it will have. Shares are allocated through DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE. One directive can only concern one PDB and you can’t … Continue reading OCP 12C – Resource Manager and Performance Enhancements
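    As a hedged illustration (the plan and PDB names are made up), allocating shares and a utilization limit to a PDB looks roughly like this:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
        plan    => 'my_cdb_plan',                 -- hypothetical plan name
        comment => 'Share-based CDB plan');
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
        plan               => 'my_cdb_plan',
        pluggable_database => 'pdb1',             -- hypothetical PDB; one directive per PDB
        shares             => 3,                  -- relative weight against the other PDBs
        utilization_limit  => 70);                -- cap at 70% of the CDB's CPU
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /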

    The post OCP 12C – Resource Manager and Performance Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

    Categories: DBA Blogs

    PeopleTools 8.54 Feature: ExcelToCI Errors and Warnings Worksheet

    Javier Delgado - Fri, 2014-10-24 03:58
    Some years ago, I wrote this post on ExcelToCI limitations. One of the limitations I found annoying in the past was the need to move the mouse over each Warning or Error result cell. It was not just annoying; it actually didn't allow users to easily work with the different error types or analyze useful information such as the most common error messages, how many rows would go through if they solved a particular issue, etc.



    PeopleTools 8.54 has introduced a new worksheet showing all the warning and error messages. The following screenshot provides a clear view on how the information is presented:



    From that point on, users may analyze the messages using Excel pivot tables, filters, etc. Yet there is some room for improvement. The most obvious one would be to put each particular error in a different Excel row. That would make error analysis much richer.

    Let's see how this evolves with the next releases of PeopleTools.

    The new %SelectDummyTable MetaSQL

    Javier Delgado - Fri, 2014-10-24 03:57
    Does anyone know a PeopleSoft developer who has never used a SQL statement like the following one?

    select %CurrentDateOut
    from PS_INSTALLATION;

    Where PS_INSTALLATION could be any single-row table in the PeopleSoft data model.

    If you look at the previous statement, the SELECT clause is not retrieving any field from the PS_INSTALLATION table, but just using it to comply with ANSI SQL. The same statement could be written in Microsoft SQL Server like this:

    select %CurrentDateOut;

    In Oracle Database, as:

    select %CurrentDateOut
    from dual;

    In both cases, the statements perform better, as neither of them requires accessing any physical table.

    The problem with these solutions is that they are platform specific, and we want to avoid platform-specific syntax. Believe me, when you perform a platform migration, the ancestors of the programmers who used this type of syntax are suddenly very present in your mind. So, up to now, we had to stick with the SELECT ... FROM PS_INSTALLATION solution.

    Until now. PeopleTools 8.54 introduces a new MetaSQL, %SelectDummyTable, which automatically translates into a platform-specific statement. Our previous sample would be written as:

    select %CurrentDateOut
    from %SelectDummyTable

    We now have a platform independent and well performing solution. What else can we ask for? ;-)

    Note: I've checked the online PeopleBooks from Oracle and at this point there is no documentation on this Meta SQL. Still, I've conducted some tests and it seems to be working correctly.

    New Integration Network Utilities in PeopleTools 8.54

    Javier Delgado - Fri, 2014-10-24 03:57
    The new integration features available in PeopleTools 8.54 include better support for REST services and the new Integration Network WorkCenter. There are plenty of things to test and eventually use that may be of interest to anyone upgrading to this PeopleTools release. However, today I will focus on two simple but quite handy utilities:

    Saving Gateway Metadata

    There is new functionality that saves the integrationgateway.properties configuration file in the database for future use or deployment on other gateway instances.



    It has happened to me a couple of times that a PIA redeploy reset the configuration file to the default version. OK, it wasn't very clever of me, as I could easily have taken a backup of the file before doing the redeploy, but this save-to-database button seems easier to use than navigating through the endless PIA directory structure.

    Node Lockdown

    Another handy feature allows us to lock certain attributes of Nodes, so they are not overwritten when performing an Application Designer project copy.

    The page used to lock the attributes is the following:



    You just need to pick which attributes should be locked and for which nodes.

    Both are nice and useful utilities delivered with PeopleTools 8.54. I hope you also find them of interest.

    PeopleSoft's PS_HOME evolution

    Javier Delgado - Fri, 2014-10-24 03:56
    One of the new features of PeopleTools 8.54 is the portability of the PS_HOME directory. Before going into the analysis of its benefits, let's take a look back at how PS_HOME has evolved.

    One Directory for Everything

    PS_HOME is the name of the environment variable holding the PeopleSoft installation directory. Before PeopleTools 8.50, the full PeopleSoft installation was done in a single directory, including PeopleTools binaries, application external files, customized files, logs, etc. Also, in installations using WebLogic or WebSphere, the J2EE deployment was normally located at PS_HOME/webserv (this was not the case for Oracle Application Server, which used its own directories for that purpose).

    The main issue with this approach is that the Ops team would normally go nuts when they saw how the directories were structured in PeopleSoft. Very often, keeping read-only binary files and constantly changing log files in the same directory structure would not comply with the internal policies of many organizations. With some degree of manual configuration and symbolic linking this issue could be tackled, but the solution increased the maintenance costs, particularly when a PeopleTools or application upgrade came into the picture.

    Splitting Logic and Data

    PeopleTools 8.50 provided the ability to split the PS_HOME directory contents into three different places:
    • PIA_HOME: contained the J2EE deployment, equivalent to the former PS_HOME/webserv directory.
    • PS_CFG_HOME: contained logs, traces and search indexes. Basically, any file created, modified or deleted at run time.
    • PS_HOME: contained the binaries and external programs such as Crystal Reports, COBOL programs and SQRs.
    This was a major improvement. Now the binaries could be kept as read-only except when an external program was migrated. Moreover, the monitoring of disk space could now be restricted to PIA_HOME and PS_CFG_HOME.

    PeopleTools and Applications in Different Rooms

    PeopleTools 8.52, together with the PeopleSoft 9.1 applications, introduced a new directory: PS_APP_HOME. This directory contained exclusively the application binaries and external program files, leaving PS_HOME just for the specific PeopleTools files.

    This approach allowed a simpler maintenance of the product. For instance, you could use the same PS_HOME for both PeopleSoft HCM and FSCM, keeping the specific application files in their own PS_APP_HOME directories. This way, when you applied a PeopleTools patch on PS_HOME, it would be available for all applications.

    Clearly Identify your Customizations

    The natural evolution of PS_APP_HOME was PS_CUST_HOME, which was introduced with PeopleTools 8.53. This new directory was meant to hold all the customized external files. This helped not only in keeping PS_HOME and PS_APP_HOME almost read-only (they would be updated only by PeopleTools or application upgrades), but also in clearly identifying the customizations, which is a tremendous gain when performing an application upgrade.

    And now... Portable PS_HOME

    PeopleTools 8.54 has gone a step further in simplifying the maintenance of the PeopleSoft installation. One of the problems we still faced with PS_HOME was that we could not move it to a different directory without breaking things, as there were some symbolic links and files containing absolute directory references within it.

    This could be solved by adjusting the symbolic links and directory references, but it was a time consuming process. The alternative was to reinstall PS_HOME from the delivered install images, but in the best scenario, this could take a couple of hours.

    In the latest PeopleTools release, all symbolic links were removed, and all the directory references are relative, not absolute. This allows the system administrator to easily move the directory to another location, or even to another server. Actually, you may not even need to move it: just mounting the PS_HOME directory installed on one server into all the different PeopleSoft servers would do the trick, so you only need to apply changes in a single place.

    I'm sure System Administrators and Installers will love this new feature. At BNB we are also analyzing other potential uses for it, but let me keep the secret for the moment ;).

    Tip: One of the symbolic links removed on UNIX/Linux platforms was the PS_HOME/appserv/psadmin link. If you have any maintenance script that boots or shuts down services using this path, you will need to adjust it to point to the actual location, PS_HOME/bin/psadmin, or just call psadmin after executing psconfig.sh.

    My Oracle Support Upgrade Complete

    Joshua Solomin - Thu, 2014-10-23 10:34
    We upgraded My Oracle Support on October 10, 2014. This upgrade brings changes to help you work more effectively with Oracle Support.

    Among the areas you will notice enhancements are:

    • The My Oracle Support customer experience
    • My Oracle Support Chat
    • Knowledge Management
    • Cloud Portal
    For details about the latest features visit the My Oracle Support User Resource Center.

    OCP 12C – SQL Enhancements

    DBA Scripts and Articles - Thu, 2014-10-23 09:20

    Extended Character Data Type Columns In this release Oracle changed the maximum size of three data types. In Oracle 12c, if you set a VARCHAR2 to 4000 bytes or less it is stored inline; if you set it to more than 4000 bytes it is transformed into an extended character data type and stored out … Continue reading OCP 12C – SQL Enhancements
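    As a hedged sketch, extended character data types are controlled by the MAX_STRING_SIZE initialization parameter; enabling it is a one-way change that also requires running utl32k.sql with the database started in UPGRADE mode (the table below is hypothetical):

    -- Enable extended data types (irreversible; see the documentation for the full procedure)
    ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;

    -- Once enabled, a VARCHAR2 longer than 4000 bytes becomes an extended
    -- character data type and is stored out of line:
    CREATE TABLE demo_docs (
      id   NUMBER,
      body VARCHAR2(32767)
    );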

    The post OCP 12C – SQL Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

    Categories: DBA Blogs

    Configure Oracle Exadata Write Back Flash Cache

    VitalSoftTech - Thu, 2014-10-23 00:37
    In addition to improving read I/Os, Oracle Exadata Write back flash cache also provides the ability to cache write I/Os directly to PCI flash. Exadata storage software version 11.2.3.2.1 is the minimum version required to use write back flash cache. Grid infrastructure and database homes must run 11.2.0.3.9 or later to use with Write-back Smart […]
    Categories: DBA Blogs

    ORA-16534 When Converting to/from Snapshot Standby with DataGuard Broker

    Don Seiler - Wed, 2014-10-22 23:00
    We here at Seilerwerks Industries (not really) have been using snapshot standby databases to refresh an array of unit test databases from a common primary. During the business day, these are converted to snapshot standby databases for testing; overnight they are converted back to physical standby and recovered up to the master again.
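    As a hedged aside, the same daily cycle can also be driven with plain SQL rather than the broker's CONVERT DATABASE command used below; with SQL*Plus the database must be mounted first, whereas the broker handles the restarts for you:

    -- Morning: open the standby for unit testing as a snapshot standby
    ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

    -- Overnight: discard the test changes and resume recovery from the primary
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;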

    However we ran into one problem the other week. I noticed that the test3 database was still in physical standby mode well into the business day. Trying to manually convert returned this error:

    DGMGRL> convert database test3 to snapshot standby
    Converting database "test3" to a Snapshot Standby database, please wait...
    Error:
    ORA-16534: switchover, failover or convert operation in progress
    ORA-06512: at "SYS.DBMS_DRS", line 157
    ORA-06512: at line 1

    A quick search of MOS yielded bug 13716797 (ORA-16534 from the broker when setting apply-off), which simply suggested restarting the problem database when encountering that error. However, doing so did not get me any further. That's when I checked the Data Guard Broker configuration:

    DGMGRL> show configuration;

    Configuration - testdb

      Protection Mode: MaxPerformance
      Databases:
        test1 - Primary database
        test5 - Physical standby database
        test6 - Snapshot standby database
        test3 - Physical standby database
        test4 - Snapshot standby database

    Fast-Start Failover: DISABLED

    Configuration Status:
    ORA-16610: command "CONVERT DATABASE test6" in progress
    DGM-17017: unable to determine configuration status

    Looks like I have two databases stuck in physical standby mode, test3 and also test6. And the configuration is specifically complaining about test6. So I restarted that database and, sure enough, I was then able to convert both back to snapshots:

    DGMGRL> show configuration;

    Configuration - testdb

      Protection Mode: MaxPerformance
      Databases:
        test1 - Primary database
        test5 - Snapshot standby database
        test6 - Snapshot standby database
        test3 - Snapshot standby database
        test4 - Snapshot standby database

    Fast-Start Failover: DISABLED

    Configuration Status:
    SUCCESS

    It was very interesting to me to see one member of the Data Guard configuration prevent me from performing an operation on a different member. Hopefully this helps one of you in the future.

    Categories: DBA Blogs

    OCP 12C – DataPump, SQL*Loader, External Tables Enhancements

    DBA Scripts and Articles - Wed, 2014-10-22 14:57

    Oracle DataPump Enhancements Full Transportable Export and Import of Data In Oracle 12c you now have the possibility to create full transportable exports and imports. A full transportable export contains all objects and data needed to create a copy of the database. To create a fully transportable export of your database you need to specify … Continue reading OCP 12C – DataPump, SQL*Loader, External Tables Enhancements

    The post OCP 12C – DataPump, SQL*Loader, External Tables Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

    Categories: DBA Blogs

    Old Castles

    Pete Scott - Mon, 2014-10-20 07:12
    Living here on the Kent Coast we are quite blessed with the number of castles within half an hour’s drive of our cottage. English Heritage manages several nearby castles or forts. The nearest, Richborough, is out and out Roman. We had a lot of Romans roaming around here, they even strolled past my cottage along […]

    Learning Spark Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia; O'Reilly Media

    Surachart Opun - Sat, 2014-10-18 13:45
    Apache Spark started as a research project at UC Berkeley in the AMPLab, which focuses on big data analytics. Spark is an open source cluster computing platform designed to be fast and general-purpose for data analytics - it's both fast to run and fast to write. Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, much quicker than with disk-based systems like Hadoop MapReduce. Users can write applications quickly in Java, Scala or Python. In addition, it's easy to run standalone or on EC2 or Mesos, and it can read data from HDFS, HBase, Cassandra, and any Hadoop data source.
    If you would like a book about Spark, consider Learning Spark: Lightning-Fast Big Data Analytics by Holden Karau, Andy Konwinski, Patrick Wendell and Matei Zaharia. It's a great book for anyone who is interested in Spark development and getting started with it. Readers will learn how to express MapReduce jobs with just a few simple lines of Spark code, and more...
    • Quickly dive into Spark capabilities such as collect, count, reduce, and save
    • Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
    • Learn how to run interactive, iterative, and incremental analyses
    • Integrate with Scala to manipulate distributed datasets like local collections
    • Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization
    • Use other languages by means of pipe() to achieve the equivalent of Hadoop streaming
    The Early Release has 7 chapters. They cover an Apache Spark overview, downloading Spark and the commands you should know, programming with RDDs (and more advanced topics), and working with key-value pairs, etc. It is easy to read and has good examples. For people who want to learn Apache Spark or use Spark for data analytics, it's a book worth keeping on the shelf.

    Book: Learning Spark Lightning-Fast Big Data Analytics
    Authors: Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia
    Categories: DBA Blogs

    My global view on Oracle OpenWorld 2014

    Javier Delgado - Thu, 2014-10-16 07:41
    For those who can read Spanish, I just posted in our company blog an entry describing a general overview of Oracle OpenWorld announcements. A couple of weeks ago I made a post on this blog describing the most important outcomes from a PeopleSoft point of view. This new post gives a broader view. 
