Feed aggregator

ORA-04021: timeout occurred while waiting to lock object ORA-06512: at "SYS.DBMS_STATS",

Tom Kyte - Sat, 2017-03-18 13:06
Hi, I am performing the below tasks by using ODI(Oracle data integrator): Step-1: Truncate the Table1 Step2: Truncate table 2 Step-3: Drop the INdex's on Table1 & Table 2 Step4: Insert the data into Table1 Commit Step-5: Insert the data i...
Categories: DBA Blogs

point in time clone

Tom Kyte - Sat, 2017-03-18 13:06
Hi Tom, I started hot backup rman at 9am and it got completed at 2pm,In the middle there was update query running more than 5 hrs which is started @10am and completed by @5pm (at end committed) so if I want to clone this database with this fu...
Categories: DBA Blogs

is there any performance impact if a table columns shares different tablespaces?

Tom Kyte - Sat, 2017-03-18 13:06
Hi Tom, is there any performance impact if a table columns shares different tablespaces? e.g. <code> CREATE TABLE DATA ( DATA_ID NUMBER, PROPERTY_DATA CLOB, PROPERTY_DATA_BIN BLOB, BIN_TYPE VARCHAR2(10 BYTE), SOURCE_INST...
Categories: DBA Blogs

executable jobs in scheduler

Tom Kyte - Sat, 2017-03-18 13:06
Hello, Tom! Tell me plz how i can copies file from server to another computer throught executable jobs ? Code of my job: begin dbms_scheduler.create_job ( job_name =>'tstcmd', job_type =>'executable', job_action =>'D:\Copy_export_data.bat'...
Categories: DBA Blogs

Can Sys object have 2 passwords?

Tom Kyte - Sat, 2017-03-18 13:06
Hi Tom, we have an instance of oracle running on one of our client server . But I found a strange thing, that I can login into the oracle server using 2 different password and it let us do that . Is it possible that Sys object have two passwo...
Categories: DBA Blogs

how to reclaim space from table fnd_lobs?

Tom Kyte - Sat, 2017-03-18 13:06
I want to reclaim the space from the table "fnd_lobs", which measures 100 gb, but it actually measures 50 gb, I have not found any method to recover the other 50 gb. they are looking for in google, and in a few forums, there was a user who said that...
Categories: DBA Blogs

performance problem

Tom Kyte - Sat, 2017-03-18 13:06
Hi Tom , Good Afternoon . I have one stored procedure which is running for long time . Could you please suggest how to optimize the code . There are having four tables. 1.staging_product 2.stagin_retail 3.product 4.curr_retail The pr...
Categories: DBA Blogs

How many members for standby redo logs?

Yann Neuhaus - Sat, 2017-03-18 11:43

I see a lot of databases with two members for online redo logs and also two members for standby redo logs. Why not, but when asking I realized that there are some misconceptions about it. And what was recommended 10 years ago may be different today.

Online and Standby redo logs

Your transactions happen on the primary database and are written to the online redo logs before the modification is done on the datafiles. And when you commit, you wait to be sure that the redo is on persistent storage. If you lose the current redo log group, then your database crashes and you lose the latest transactions. This is why we multiplex the online redo logs. Even if you are 100% confident in your storage high availability, the risk of a human error dropping a file exists, and it is considerably lower if there are two files.
For additional protection, in case you lose all the primary redo members, Data Guard synchronizes the transactions to a second site by shipping the redo stream. There, on the standby site, the redo is written to the standby redo logs.

The online redo logs are used only on the primary site, and would better be named primary redo logs. You create them on the standby site only to be prepared for failover, when it becomes the primary and is opened read-write. But let's be clear: online redo logs are used only when the database is open, and a mounted standby is not open.

The standby redo logs are not standby at all. They are actively used on the standby site, and this is why they are called 'standby'. On the primary, they are not used; they are just there to be ready when the primary becomes a standby after a failover.

Members

We have seen why we multiplex the online redo logs:

  • it protects the transactions, because without multiplexing you lose transactions when losing one group
  • it protects the instance availability, because without multiplexing you crash the instance when losing one group

But this is different with standby redo logs:

  • they are an additional protection: transactions are still persistent on the primary even if you lose a standby log group
  • the primary is still available even if one standby cannot be in SYNC

Of course, in Maximum Protection mode the availability of the primary is compromised when the standby cannot apply the redo in SYNC. But in this protection mode you probably have multiple standbys, and the loss of one standby redo log on one standby site is not a problem.

Redo transport and redo apply

I said that transactions are still persisted on the primary, but even without standby redo logs they are still shipped to the standby site, in ASYNC mode. This means that in order to lose transactions because of the loss of a standby redo log group, you need to experience this file loss, a primary site failure, and a network failure all at the same time. The probability of this is very low, and having an additional member does not lower the risk.

Of course, I’ve tested what happens. I have two standby redo log members and I removed all of them one minute ago:

DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Mar 17 14:47:45 2017
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
Connected to "ORCLA"
Connected as SYSDBA.
 
Database - orclb
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 1 minute 30 seconds (computed 0 seconds ago)
Average Apply Rate: 0 Byte/s
Active Apply Rate: 0 Byte/s
Maximum Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
ORCLB
 
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
ORA-16857: member disconnected from redo source for longer than specified threshold
ORA-16826: apply service state is inconsistent with the DelayMins property
ORA-16789: standby redo logs configured incorrectly

As you can see, when there is no member remaining, the APPLY is stuck but transport still happens, in ASYNC to archived logs.
The standby alert log mentions the failure:
2017-03-17T14:51:21.209611+01:00
Errors in file /u01/app/oracle/diag/rdbms/orclb/ORCLB/trace/ORCLB_rsm0_6568.trc:
ORA-00313: open failed for members of log group 5 of thread 1
ORA-00312: online log 5 thread 1: '/u01/oradata/ORCLB/onlinelog/m5.log'
ORA-27048: skgfifi: file header information is invalid
Additional information: 2
ORA-00312: online log 5 thread 1: '/u01/oradata/ORCLB/onlinelog/o1_mf_5_dbvmxd52_.log'
ORA-27048: skgfifi: file header information is invalid
Additional information: 2

and the SYNC mode cannot continue without standby redo logs:

03/17/2017 14:58:15
Failed to open SRL files. ORA-313
Redo Apply is running without NODELAY option while DelayMins=0
Failed to open SRL files. ORA-313
Error: LogXptMode value 'SYNC' of requires this database to have standby redo logs, but they are not configured correctly.
03/17/2017 14:59:15
Failed to open SRL files. ORA-313
Redo Apply is running without NODELAY option while DelayMins=0
Failed to open SRL files. ORA-313
Error: LogXptMode value 'SYNC' of requires this database to have standby redo logs, but they are not configured correctly.

Sure, you don’t want to lose the standby redo members. But the risk is not higher than losing any other file, and this is why there is no reason to multiplex them. Standby redo logs are not the same as the primary online redo logs. On similar hardware, you need the same size and you need one more group, but there is no reason to multiplex them in the same way.
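As a sketch of what this sizing rule means in practice (the group numbers, file paths, and 200M size below are illustrative, not from a real configuration; this assumes a single-thread database whose online redo logs are 200M):

```sql
-- Illustrative sketch: standby redo logs the same size as the online
-- redo logs, one more group than the online groups, and a single
-- member per group -- no multiplexing.
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5
  ('/u01/oradata/ORCLB/onlinelog/srl5.log') SIZE 200M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 6
  ('/u01/oradata/ORCLB/onlinelog/srl6.log') SIZE 200M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 7
  ('/u01/oradata/ORCLB/onlinelog/srl7.log') SIZE 200M;
```

You would run the same statements on both sites, so that each database is ready to receive redo after a role transition.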

Documentation

The confusion may come from old documentation. The 10g documentation says:
For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
This documentation dates from 2005, and systems have changed regarding the availability of files.

More recent documentation is the white paper on Best Practices for Synchronous Redo Transport, which mentions: "It is critical for performance that standby redo log groups only contain a single member."

So what?

At the time of 10g we had a LUN for redo logs and were not concerned by the size, but rather by its unavailability. Things change. Losing a file, and only one file, is extremely rare today. We are more concerned about consolidation and performance. Having 4 online groups of 200MB or 500MB, and 5 standby groups, all multiplexed, for many databases will take space. And you want to allocate this space on the fastest disks, because user commits wait on log writes (on primary and standby, except in Max Performance). You don't want to over-allocate space here. Better to have larger online redo logs. And your goal is that network shipping + standby log writing takes no longer than the local write to the online redo logs, so that Data Guard protection does not increase commit latency. Multiplexing standby redo logs increases the risk of longer writes on the standby site.

So if your standby redo logs are multiplexed, it's not wrong. But things change, and today you may prefer to save space and avoid performance overhead with only one member.
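To see how many members your standby redo log groups currently have, a quick sketch against the standard v$standby_log and v$logfile views (run it on both primary and standby):

```sql
-- One row per standby redo log group; MEMBERS > 1 means multiplexed.
SELECT sl.thread#, sl.group#, sl.bytes/1024/1024 AS size_mb,
       COUNT(lf.member) AS members
FROM   v$standby_log sl
JOIN   v$logfile lf ON lf.group# = sl.group#
GROUP  BY sl.thread#, sl.group#, sl.bytes
ORDER  BY sl.thread#, sl.group#;
```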

Before writing this blog post, my poll on Twitter had 40 votes. Only 28% mentioned no multiplexing. But a Twitter poll is not exact science, as you can see that 8 people answered 42 members ;)

Because I've seen lot of misunderstanding about it, I'm curious to know how many members you have in your standby redo logs

— Franck Pachot (@FranckPachot) March 17, 2017

 

The article How many members for standby redo logs? appeared first on Blog dbi services.

Apache Kafka on the Oracle Cloud: My First experiences with Oracle Event Hub Cloud Service

Amis Blog - Sat, 2017-03-18 06:46

Oracle recently made their 'Kafka in the Cloud' service available: the Event Hub Cloud Service – offered as part of the Big Data Compute Cloud Service. In this article, I will briefly show the steps I went through to get up and running with this service. To be frank, it was not entirely intuitive – so this may be handy the next time I have to do it, or perhaps when you are struggling. One obvious point of confusion is the overloaded use of the term 'service' – which is applied, among other things, to Kafka Topics.

The steps are:

  1. Initialize the Oracle BigData Compute CS | Oracle Event Hub Cloud Service – Platform – expose REST Proxy
  2. Create Network Access Rule to enable access to the Event Hub Platform instance from the public internet
  3. Create an Event Hub service instance on the Event Hub Platform – corresponding to a Kafka Topic
  4. Inspect Event Hub using PSM
  5. Interact with Event Hub through CURL
  6. Create application (e.g. NodeJS) to access the Event Hub Service (aka Kafka Topic): produce message, create consumer, consume messages (through the REST proxy) – discussed in a subsequent blog article
Initialize the Oracle BigData Compute CS | Oracle Event Hub Cloud Service – Platform – expose REST Proxy

 

From the Dashboard, open the BigData Compute Cloud Service. Open the dropdown menu. Select option Oracle Event Hub Cloud Service – Platform.

 


 


 

Provide the name of the service – the Kafka Event Hub Cluster: soaringEventsHub. Click Next


Specify Basic deployment type (for simple explorations). Upload the SSH Public Key.

For a trial, select just a single node (I started out with 5 nodes and it cost me a huge pile of resources).

Enable REST Access – to be able to communicate with the Event Hub through the REST API. Specify the username and password for accessing the REST proxy. (in hindsight, I probably should have picked a different username than admin).


 

Click Next. The service will be created – with a generous memory and storage allocation.

When the service creation is complete, details are shown, including the public IP for the Kafka cluster and the ZooKeeper instance.

And the Rest Proxy details:


Create Network Access Rule to enable access to the Event Hub Platform instance from the public internet

To be perfectly honest – I am not sure anymore that this step is really required. I had trouble accessing the Event Hub service from my laptop – and I am still not able to connect to the Kafka broker from a regular Kafka client – so I tried many different things, including exposing the public IP address on the Event Hub (Kafka) node. Whether it helped or made any difference, I am not entirely sure. I will share the steps I took, and you can figure out whether you need them. (I am still hoping that accessing the Event Hub from applications running inside the same identity domain on Oracle PaaS will be easier.)

I opened the Access Rule for the Event Hub service:



I created an access rule


And I enabled it: in red, the new access rule; in the blue rectangle, the access rule that opens up the REST Proxy for the Event Hub service.


 

Create Event Hub Service on the Event Hub Service Platform aka Create a Kafka Topic

Open the Oracle Event Hub Cloud Service console (not the Oracle Event Hub Cloud Service – Platform!)

 


Click on Create Service. This corresponds to creating a Kafka Topic. The naming is awfully confusing here: creating an instance of the Event Hub Service on top of an instance of the Event Hub Platform Service actually means creating a Kafka message topic. Now that is much clearer.

 


 

Specify the name for the service – which will be part of the name of the Kafka Topic. Specify the number of partitions and the default retention period for messages published to the topic. Indicate on which instance of the Event Hub Cloud Service Platform – created in the first step in this article – the topic should be created.

Click Next.

Click Create.

The Topic is created with a name that is composed from the name of the identity domain and the name specified for the Event Hub Service instance: partnercloud17-SoaringEventBus.

 


The popup details for the "service" (aka Topic) indicate the main configuration details as well as the REST endpoint for this Topic. Messages can be produced at this endpoint.

The log of all Service activity (again, 'Service' here is at the same level as a Kafka Topic):


The Service Console now lists these instances:


 

 

Inspect Event Hub using PSM

The PSM (PaaS Service Manager) command line interface for the Oracle Cloud (read here on how to install it and get going) can be used to inspect the state of the Event Hub Service. It cannot be used to produce and consume messages though.

Using PSM for Event Hub is described in the documentation:  http://docs.oracle.com/en/cloud/paas/java-cloud/pscli/oehcs-commands.html

To list all services (topics):

psm oehcs services

— details for a single service:

psm oehcs service --service-name SoaringEventBus

— to create a new Topic:

psm oehcs create-service

— to update the retention time for a topic:

psm oehcs update-service

 

Interact with Event Hub through CURL

    One way to produce messages to the topic and/or consume messages from a topic is by using cURL.

    It took me some time to figure out the correct syntax for each of these operations. It does not seem abundantly well documented/explained at present. Anyway, here it goes. (Note: the -k option tells cURL to accept an unknown certificate.)

    * To produce a message to the topic on Event Hub, use:

    curl -X POST -k -u username:password -H "Content-Type: application/vnd.kafka.json.v1+json" -d "{ \"records\": [ { \"key\": \"myKey\", \"value\": \"mySpecialValue\"}] }" https://public_ip_REST-PROXY:1080/restproxy/topics/NAME_OF_EVENT_HUB_SERVICE

    * Create a consumer group: in order to consume messages, you first need to create a consumer group with a consumer instance. Subsequently, you can consume messages through the consumer:

    curl -X POST -k -u username:password -H "Content-Type: application/vnd.kafka.json.v1+json" -d "{ \"name\": \"soaring-eventbus-consumer\", \"format\": \"json\" }" https://public_ip_REST-PROXY:1080/restproxy/consumers/soaring-eventbus-consumer-group

    * To consume messages – leveraging the consumer group soaring-eventbus-consumer-group and the consumer instance soaring-eventbus-consumer:

    curl -X POST -k -u username:password -H "Accept: application/vnd.kafka.json.v1+json" https://public_ip_REST-PROXY:1080/restproxy/consumers/soaring-eventbus-consumer-group/instances/soaring-eventbus-consumer/topics/partnercloud17-SoaringEventBus

     

    Here are three commands: create a consumer group, produce a message, and consume (all) available messages – all on the same topic:


    The post Apache Kafka on the Oracle Cloud: My First experiences with Oracle Event Hub Cloud Service appeared first on AMIS Oracle and Java Blog.

    bulk update

    Tom Kyte - Fri, 2017-03-17 18:46
    Hi Tom, I have table called customers with data below SQL> select * from customers; ID NAME AGE ADDRESS SALARY ---------- -------------------- ---------- --------...
    Categories: DBA Blogs

    Unsuccessful addition of column with NOT NULL constraint ends with creation hidden columns SYS_C[...] on Oracle12c

    Tom Kyte - Fri, 2017-03-17 18:46
    Hello! Recently I've encountered a problem with adding new column to existing, not empty table. New column had a NOT NULL constraint, so ALTER TABLE command failed with ORA-01758 (that's clear). But having looked at DBA/USER_TAB_COLS dictionary ...
    Categories: DBA Blogs

    SQL Query to split rows based on dates

    Tom Kyte - Fri, 2017-03-17 18:46
    Could you please help me with a SQL that would generate the desired output? <code> create table user_tmp (vname varchar(15) , external_pgm varchar(1), eff_date date, exp_date date); insert into user_tmp values('JOHN','A', '01-JAN-1...
    Categories: DBA Blogs

    How can we identify a session in GV_$SESSION with DBMS_SESSION.UNIQUE_SESSION_ID ?

    Tom Kyte - Fri, 2017-03-17 18:46
    Actually (one node RAC) I can identify a session in V_$SESSION with AUDSID = "sys_context('USERENV', 'SESSIONID')". We are switching to multiple node RAC and I have to use GV_$SESSION and DBMS_SESSION.UNIQUE_SESSION_ID to get a unique id across al...
    Categories: DBA Blogs

    Convert BLOB file to readable format

    Tom Kyte - Fri, 2017-03-17 18:46
    Hi, There is one table which has data type as BLOB for one of the columns. We need to open that file and see the context. Please suggest, how can we convert that file and make it in readable format? Thanks.
    Categories: DBA Blogs

    java.sql.SQLRecoverableException: ORA-01034: ORACLE not available

    Tom Kyte - Fri, 2017-03-17 18:46
    I try to make connection to oracle db on Linux from java application. T got the following error. What is the cause and hot to resolve it? java.sql.SQLRecoverableException: ORA-01034: ORACLE not available ORA-27101: shared memory realm does not ex...
    Categories: DBA Blogs

    Welcome to the new Fishbowl Solutions Blog

    Out with the old and in with the new. Welcome to the new home of the Fishbowl Solutions blog! Please enjoy upgraded functionality and integration with our website. Check back often for new and exciting posts from our talented staff. If you want automatic updates, click the subscribe link to the right and be notified whenever a new post appears.

    The post Welcome to the new Fishbowl Solutions Blog appeared first on Fishbowl Solutions.

    Categories: Fusion Middleware, Other

    Oracle BPM: Loops and Gateway Struggles

    Jan Kettenis - Fri, 2017-03-17 11:40
    If there is one issue that I see people often struggle with, then it is the use of loops in combination with gateways. The following discusses a few cases.

    The following picture shows several loops in combination with a Parallel gateway, of which some are valid and some not. The same holds for the Inclusive gateway.

    To understand why some loops are valid and others not, you have to realize that at the beginning of a Parallel or Inclusive gateway, as many tokens are generated as there are parallel flows between the start and the end of the gateway. To the BPM engine this translates to 1 or more threads that are instantiated.

    No such restrictions exist for an Exclusive gateway, because there is only one token (thread) active at any time.

    So in BPMN the following flows are not valid:
    • From "crossover?", because you are going to another thread that may already have passed the point that the flow goes to. However, JDeveloper does not prevent you from doing so.
    • From "loop back inside to beginning", because at the beginning of the gateway new threads would have to be instantiated for flows of which some threads may already run. JDeveloper should fail validation of such a construct.
    • From "loop back inside from outside", because you would then have to go back to a thread already ended in the merge. JDeveloper should fail validation of such a construct.

    The flows that are valid in BPMN are:
    • From "loop back inside", as you loop back within the same thread.
    • From "loop back outside to beginning" as you are re-instantiating a new set of threads for which the previous set already ended.

    In case the latter does not work, apply patch 23230734.

    EBS Support Implications for Discoverer 11gR1 in June 2017

    Steven Chan - Fri, 2017-03-17 10:57

    What happens to Discoverer support in June 2017?

    The Oracle Lifetime Support Policy: Oracle Fusion Middleware Products document states:

    • Premier Support for Discoverer 11gR1 ended on June 30, 2014. 
    • Extended Support for Discoverer 11gR1 ends on June 30, 2017. 

    No new patches for Discoverer 11gR1 or its E-Business Suite (EBS) Discoverer-based content will be created after June 30, 2017.  EBS customers will continue to have access to existing released patches and other published resources.

    Which EBS releases are affected?

    E-Business Suite 12.1 and 12.2 included workbooks, business areas, and folders built for Discoverer 11gR1.  Both EBS 12.1 and 12.2 are affected by this.

    What should EBS users use for analytics now?

    This document was published in March 2014:

    That Note recommends that Discoverer users migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence for Applications (OBIA), or Oracle Endeca Information Discovery.

    Are there automated tools for migrating from Discoverer to other Oracle analytics tools?

    No, there are no automated tools for migrating Discoverer content to OBIEE, OBIA, or Oracle Endeca Information Discovery.

    Can EBS customers request new patches after June 2017?

    No, Oracle will not produce new patches or documentation for Discoverer, EBS content for Discoverer, or Discoverer certifications with EBS 12.1 or 12.2 after June 30, 2017. 

    Can EBS customers access existing Discoverer-related resources after June 2017?

    Yes, EBS customers will still be able to download existing Discoverer patches.  For example, Discoverer 11.1.1.7 was certified in June 2013 and is certified for EBS 12.1 and 12.2.  Customers will continue to be able to download Discoverer 11.1.1.7 and Discoverer-related documentation for EBS environments (Note 1380591.1 for EBS 12.2, Note 1074326.1 for EBS 12.1).

    Can EBS customers continue to use Discoverer after June 2017?

    Yes, but Oracle's ability to assist with questions will be increasingly limited as environments with Discoverer are retired. Customers should minimize changes to their Discoverer-related infrastructure with the goal of keeping Discoverer environments stable: e.g. limiting changes that might affect load, hardware infrastructure, or business processes.

    Will Discoverer work with new desktop client updates after June 2017?

    This is unknown.  No new certifications for Discoverer will be performed after June 2017.  Desktop client updates such as new JRE releases, new Windows updates, and new browsers may have unpredictable effects on Discoverer.  Oracle will not issue new compatibility patches for these types of issues after June 2017.


    Categories: APPS Blogs

    Integrigy COLLABORATE 17 Sessions - Presentations on Oracle Database, Oracle E-Business Suite, and PeopleSoft Security

    Integrigy is presenting nine papers this year at COLLABORATE 17 (https://collaborate.oaug.org/). The COLLABORATE 17 conference is a joint conference for the Oracle Applications User Group (OAUG), Independent Oracle Users Group (IOUG), and Quest International Users Group.

    Here is our schedule. If you have questions or would like to meet with us while at COLLABORATE 17, please contact us at info@integrigy.com.

    Sunday Apr 02, 2017

    1:45 PM - 2:45 PM

    Oracle E-Business Suite 12.2 Security Enhancements

    https://app.attendcollaborate.com/event/member?item_id=5621519

    Banyan E

    Speaker: Stephen Kost

    1:45 PM - 2:45 PM

    How to Control and Secure Your DBAs and Developers in Oracle E- Business Suite

    https://app.attendcollaborate.com/event/member?item_id=5740411

    South Seas F

    Speaker: Michael Miller

    Monday Apr 03, 2017

    9:45 AM - 10:45 AM

    The Thrifty DBA Does Database Security

    https://app.attendcollaborate.com/event/member?item_id=5660960

    Jasmine D

    Speaker: Stephen Kost

    1:00 PM - 4:30 PM

    Integrigy team available for meetings and discussions. Contact us at info@integrigy.com to arrange.

     

     

    Tuesday Apr 04, 2017

    9:45 AM - 10:45 AM

    Solving Application Security Challenges with Database Vault

    https://app.attendcollaborate.com/event/member?item_id=5660961

    Jasmine D

    Speaker: Stephen Kost

    1:00 PM - 4:30 PM

    Integrigy team available for meetings and discussions. Contact us at info@integrigy.com to arrange.

     

     

    Wednesday Apr 05, 2017

    9:45 AM - 10:45 AM

    When You Can't Apply Database Security Patches

    https://app.attendcollaborate.com/event/member?item_id=5660962

    Jasmine D

    Speaker: Stephen Kost

    11:00 AM - 12:00 PM

    Common Mistakes When Deploying Oracle E-Business Suite to the Internet

    https://app.attendcollaborate.com/event/member?item_id=5621520

    South Seas B

    Speaker: Stephen Kost

    1:30 PM - 2:30 PM

    Securing Oracle 12c Multitenant Pluggable Databases

    https://app.attendcollaborate.com/event/member?item_id=5660950

    Palm A

     

    Speaker: Michael Miller

    2:45 PM - 3:45 PM

    How to Control and Secure Your DBAs and Developers in PeopleSoft

    https://app.attendcollaborate.com/event/member?item_id=5617942

    Ballroom  J

    Speaker: Michael Miller

    Thursday Apr 06, 2017

    8:30 AM - 9:30 AM

    Oracle E-Business Suite Mobile and Web Services Security

    https://app.attendcollaborate.com/event/member?item_id=5621407

    South Seas B

    Speaker: Michael Miller

     

    You can download a complete listing of Integrigy's sessions at Integrigy COLLABORATE 17 Sessions.

    Oracle Database, Oracle E-Business Suite, Oracle PeopleSoft
    Categories: APPS Blogs, Security Blogs

    PeopleSoft Security

    This is a quick summary of Integrigy’s latest research on PeopleSoft. Was sending this to a client and decided it was a good posting:

    Guide to PeopleSoft Logging and Auditing

    How to Control and Secure PeopleSoft DBAs and Developers

    PeopleSoft Database Security

    PeopleSoft Database Secure Baseline Configuration

    PeopleSoft Security Quick Reference

    If you have any questions, please contact us at info@integrigy.com

     

     
     
    Oracle PeopleSoft, Whitepaper
    Categories: APPS Blogs, Security Blogs
