Feed aggregator

Union All MV

Jonathan Lewis - Tue, 2016-07-12 04:10

In an article I wrote last week about Bloom filters disappearing as you changed a SELECT to a (conventional) INSERT/SELECT, I suggested using the subquery_pruning() hint to make the optimizer fall back to an older strategy of partition pruning. My example showed this working with a range-partitioned table, but one of the readers reported a problem when trying to apply the strategy to a composite range/hash partitioned table, and followed this up with an execution plan of a select statement with a Bloom filter where the subquery_pruning() hint didn’t introduce subquery pruning when the select was used for an insert.

A couple of standard ways to work around this problem are to embed the select statement in a pipelined function so that we can “insert into table select from table(pipelined_function)”, or to write a pl/sql block that opens a cursor to do a select with bulk collect and loops through an array insert. The overhead in both cases is likely to be relatively small (especially when compared with the overhead of failing to filter). In this case, however, the reader suggested that maybe the problem appeared because the driving table (i.e. the one that would have been queried to derive the pruning values) was actually an inline view with a union all.

After modifying my working model to try a couple of different tests I was inclined to agree. Since the two tables in the view looked as if they were likely to be relatively tiny and static I suggested that it would be safe to create a materialized view defined to “refresh on commit” and then use the materialized view explicitly in the query. This, finally, brings me to the point of today’s article – how do you create such a materialized view ?

I’m going to start by creating a couple of small base tables from a familiar object:


create table tt as select * from all_objects where object_type = 'TABLE';
create table tv as select * from all_objects where object_type = 'VIEW';

alter table tt add constraint tt_pk primary key (object_id);
alter table tv add constraint tv_pk primary key (object_id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'TT',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'TV',
                method_opt       => 'for all columns size 1'
        );
end;
/

Assume, now, that I need an inline view that is interested in the things you will recognise from the above as the tables owned by OUTLN (which will appear in tt) and the views owned by SYSTEM (which will appear in tv) – in the 11.2.0.4 system I’m playing on at the moment that’s three rows from each of the two tables. Here’s the SQL I’d put into the inline view:


select
        object_id, object_type, object_name
from    tt
where   owner = 'OUTLN'
union all
select
        object_id, object_type, object_name
from    tv
where   owner = 'SYSTEM'
;

Since this view won’t give me partition pruning I have to replace it with a table, and because I want to ensure that the table is always up to date I have to generate it as the container for a materialized view that refreshes on commit. First I need some materialized view logs so that I can do a fast refresh:


create materialized view log on tt
with
        rowid, primary key
        (object_type, object_name, owner)
including new values
;

create materialized view log on tv
with
        rowid, primary key
        (object_type, object_name, owner)
including new values
;

I’ve included the primary key in the definition because I happen to want the object_id column in the log – but I could just have included it as a column in the filter list. I’ve included the rowid in the definition because Oracle needs the rowid if it’s going to be able to do a fast refresh. I can now create a materialized view:


create materialized view mv_t
        build immediate
        refresh fast on commit
as
select
        'T' mv_marker,
        rowid rid,
        object_id, object_type, object_name
from    tt
where   owner = 'OUTLN'
union all
select
        'V' mv_marker,
        rowid rid,
        object_id, object_type, object_name
from    tv
where   owner = 'SYSTEM'
;

I’ve taken the option to “build immediate” and specified – most importantly for my needs – “refresh on commit”. You’ll notice I haven’t chosen to “enable query rewrite”; for the purposes of this demo I don’t need that particular feature.

There are two key features to the materialized view that are a little special – first I’ve included the rowid of each source table as a named column in the materialized view; as I mentioned above Oracle will not allow the view to be fast refreshable without the rowid. The second feature is that I’ve introduced a literal value into the view which I’ve named mv_marker; this makes it easy to see which table a row comes from when you query the materialized view … and Oracle needs to see this.
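The need for the marker becomes clearer if you think about how a fast refresh has to apply changes from the logs: a logged rowid identifies a row only within its own source table, so the refresh needs the marker to know which branch of the UNION ALL a materialized row belongs to. A toy model of that bookkeeping (plain Python with made-up rowids, not Oracle internals):

```python
# Simulate fast refresh of a UNION ALL materialized view.
# The MV is keyed by (marker, rowid): the same rowid value can exist in
# both source tables, so the marker keeps the two branches apart.
mv = {
    ('T', 'AAA1'): 'OL$',
    ('V', 'AAA1'): 'PRODUCT_PRIVS',   # same rowid, different source table
}

def apply_delete(mv, marker, rid):
    """Apply a logged DELETE from one source table to the MV."""
    mv.pop((marker, rid), None)

# A delete of rowid 'AAA1' logged against tt must touch only the 'T' branch.
apply_delete(mv, 'T', 'AAA1')
assert ('T', 'AAA1') not in mv
assert mv[('V', 'AAA1')] == 'PRODUCT_PRIVS'   # the tv row survives
```

Without the marker the refresh could not tell the two 'AAA1' rows apart, which is exactly why Oracle refuses the fast refresh.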

That’s the job done. Just to demonstrate that my materialized view is working as required here’s a little more SQL (followed by the output):


select * from mv_t;

delete from tt where object_name = 'OL$';
update tv set object_name = 'PRODUCT_PRIVILEGES' where object_name = 'PRODUCT_PRIVS';

commit;

select * from mv_t;

=======================================

M RID                 OBJECT_ID OBJECT_TYPE         OBJECT_NAME
- ------------------ ---------- ------------------- --------------------------------
T AAA6tXAAFAAAAEBAAI        471 TABLE               OL$
T AAA6tXAAFAAAAEBAAJ        474 TABLE               OL$HINTS
T AAA6tXAAFAAAAEBAAK        478 TABLE               OL$NODES
V AAA6tWAAFAAAACgABI       8260 VIEW                SCHEDULER_PROGRAM_ARGS
V AAA6tWAAFAAAACgABJ       8261 VIEW                SCHEDULER_JOB_ARGS
V AAA6tWAAFAAAACuAA7      14233 VIEW                PRODUCT_PRIVS

6 rows selected.

2 rows deleted.


1 row updated.


Commit complete.


M RID                 OBJECT_ID OBJECT_TYPE         OBJECT_NAME
- ------------------ ---------- ------------------- --------------------------------
T AAA6tXAAFAAAAEBAAJ        474 TABLE               OL$HINTS
T AAA6tXAAFAAAAEBAAK        478 TABLE               OL$NODES
V AAA6tWAAFAAAACgABI       8260 VIEW                SCHEDULER_PROGRAM_ARGS
V AAA6tWAAFAAAACgABJ       8261 VIEW                SCHEDULER_JOB_ARGS
V AAA6tWAAFAAAACuAA7      14233 VIEW                PRODUCT_PRIVILEGES

5 rows selected.

If you’re wondering why you see “2 rows deleted” but a reduction by just one row in the final output, remember that we’re deleting from table tt but the materialized view holds information about just the subset of tables owned by OUTLN – I happen to have a row in tt that says SYSTEM also owns a table called OL$.

Assistance

If you have trouble working out why your attempts to create a particular materialized view aren’t working the dbms_mview package has a procedure called explain_mview that may give you enough ideas to work out what you’re doing wrong. For example, here’s how I could find out that I needed a literal column to tag the two parts of my union all view:


@$ORACLE_HOME/rdbms/admin/utlxmv.sql

begin
        dbms_mview.explain_mview (
                q'{
                create materialized view mv_t
                        build immediate
                        refresh fast
                        enable query rewrite
                as
                select  -- 'T' mv_marker,
                        rowid rid,
                        object_id, object_type, object_name from tt
                union all
                select  -- 'V' mv_marker,
                        rowid rid,
                        object_id, object_type, object_name from tv
                }'
        );
end;
/

column cap_class noprint
column related_text format a7
column short_msg format a72
break on cap_class skip 1

select
        substr(capability_name,1,3) cap_class,
        capability_name, possible, related_text, substr(msgtxt,1,70) short_msg
from
        mv_capabilities_table
where
        mvname = 'MV_T'
order by
        substr(capability_name,1,3), related_num, seq
;

The first line calls a supplied script to create a table called mv_capabilities_table in the current schema. The call to dbms_mview.explain_mview passes the text of a “create materialized view” statement to the procedure (there are a couple of variations possible) then, after a couple of SQL*Plus formatting commands I’ve queried the table to see Oracle’s analysis for the statement. (You can tag each call to this procedure using a second parameter that I haven’t bothered to use.)

Here’s the output for the failed attempt above, which has commented out the literals that tag the two parts of the UNION ALL:

CAPABILITY_NAME                POS RELATED SHORT_MSG
------------------------------ --- ------- ------------------------------------------------------------------------
PCT_TABLE                      N   TT      relation is not a partitioned table
PCT_TABLE_REWRITE              N   TT      relation is not a partitioned table
PCT_TABLE                      N   TV      relation is not a partitioned table
PCT_TABLE_REWRITE              N   TV      relation is not a partitioned table
PCT                            N

REFRESH_COMPLETE               Y
REFRESH_FAST                   N
REFRESH_FAST_AFTER_INSERT      N           the materialized view does not have a UNION ALL marker column
REFRESH_FAST_AFTER_INSERT      N           set operator in a context not supported for fast refresh
REFRESH_FAST_AFTER_ONETAB_DML  N           see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ANY_DML     N           see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT               N           PCT FAST REFRESH is not possible if query has set operand query blocks

REWRITE                        Y
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     N           set operator encountered in mv
REWRITE_GENERAL                N           set operator encountered in mv
REWRITE_PCT                    N           general rewrite is not possible or PCT is not possible on any of the d


17 rows selected.

The query manages to split the output into three sections (but that depends on a side-effect in a way that I would normally call bad design): elements relating to “Partition Change Tracking”, elements relating to “Materialized View Refresh” and elements relating to “Query Rewrite”. You’ll notice that the rewrite section tells me that (even though I haven’t chosen to enable it) my view could be enabled to do query rewrite.

Critically, though, this version of the materialized view can’t be fast refreshed, and we see the key reason in the first “Refresh fast after insert” line: “the materialized view does not have a UNION ALL marker column”. That’s how I know I have to include a literal column that has a different value in each of the two parts of the UNION ALL.


Links for 2016-07-11 [del.icio.us]

Categories: DBA Blogs

Decide Query performance

Tom Kyte - Mon, 2016-07-11 23:06
How we define which query is run slower? Means how we can say that query takes more time and require performance tuning?
Categories: DBA Blogs

find an error at online document

Tom Kyte - Mon, 2016-07-11 23:06
Hi, I found an error in the online documentation while studying Oracle: https://docs.oracle.com/database/121/CWADD/crsref.htm#CWADD91788 I do not know where to submit this, so I write it here; maybe it can help. crsctl config has: Use the crsctl ...
Categories: DBA Blogs

Grant Access on DB_Link to other User

Tom Kyte - Mon, 2016-07-11 23:06
I need to give access to other user on current DB_LINK, can you provide command ? Current Link: sittest Grant to User: Z9XBIQ1_DW Regards Chaudhry
Categories: DBA Blogs

Forcing a query to timeout

Tom Kyte - Mon, 2016-07-11 23:06
I have several PL/SQL stored procedure reports that monitor various aspects of our database environment (ASM, tablespace, backups, alerts, ...). These all have the same framework, loop thru the entries in a table, using a db_link, query the remote sy...
Categories: DBA Blogs

Network-based IMPDP with local SCN?

Tom Kyte - Mon, 2016-07-11 23:06
Hi, I found a procedure for a network based impdp. The procedure sets the actual (flashback) scn of the target system (see below). That?s a mistake, isn?t it? What will impdp do with this invalid scn? Simply ignore it? Cheers, Markus ...
Categories: DBA Blogs

Creating a Service within IntelliJ IDEA to be used by the Service Registry for Pivotal Cloud Foundry

Pas Apicella - Mon, 2016-07-11 19:59
In this example I am going to show how to use IntelliJ IDEA 15 to create a service application from the IDE to be consumed by the Service Registry service in Pivotal Cloud Foundry (PCF). For more information on this service view the docs page below.

http://docs.pivotal.io/spring-cloud-services/service-registry/index.html

Service Registry for Pivotal Cloud Foundry® (PCF) provides your applications with an implementation of the Service Discovery pattern, one of the key tenets of a microservice-based architecture. Trying to hand-configure each client of a service or adopt some form of access convention can be difficult and prove to be brittle in production. Instead, your applications can use the Service Registry to dynamically discover and call registered services

1. Start IntelliJ IDEA and either "Create a New project" or add a "New Module" to an existing project.

2. Ensure you select "Spring Initializer" as shown below


3. Click Next

4. Describe your project or module, I normally use Maven and generate a JAR file



5. Click Next

6. At the minimum here we only need to select "Service Registry (PCF)" as shown below for the dependency. Of course you would select other dependencies depending on what the service needs, such as REST, JPA, H2 or MySQL etc


7. Click Next

8. Name your new model or project


9. Click Finish

10. Click Finish

11. Your service application must include the @EnableDiscoveryClient annotation on a configuration class. To do that we simply add the annotation to our main class as follows


Java Code
  
package pas.au.pivotal.service.hr;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

import javax.annotation.PostConstruct;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication
{
    @Autowired
    private EmployeeRepository employeeRepository;

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    @PostConstruct
    public void init()
    {
        employeeRepository.save(new Employee("pas"));
        employeeRepository.save(new Employee("lucia"));
        employeeRepository.save(new Employee("siena"));
        employeeRepository.save(new Employee("lucas"));
    }
}

12. Set the spring.application.name property in application.yml. (It might have been generated as an application.properties file; rename it to .yml, as I know that works.) Below I not only set the application name, I also set the registrationMethod to "route", which is the default, and then turn off security, as it is enabled by default.

spring:
  application:
    name: employee-service

cloud:
  services:
    registrationMethod: route

security:
  basic:
    enabled: false

So that's all we really need to do here. Of course we will need to add code to our service to do what it needs to do BUT all the config required to enable this service to automatically register itself with the "Service Registry" in PCF is done.

13. Before we deploy this to our PCF instance we have to be sure we have a "Service Registry" service created, as shown below using the CF CLI (mine is already created).


14. Create a manifest.yml file for the service to be deployed. Notice how it binds to the service registry "apples-service-registery"; this ensures it automatically gets registered with the Service Registry service on deployment

---
applications:
- name: apples-employee-service
  memory: 512M
  instances: 1
  host: apples-employee-service-${random-word}
  path: ./target/EmployeeService-0.0.1-SNAPSHOT.jar
  services:
    - apples-service-registery

15. Push the service application to PCF as shown below


.....


16. Log into your PCF instance's App Manager UI (in this demo I am using the PWS instance run.pivotal.io), find your "Service Registry" service and click on it as shown below



17. Click on the "Manage" link as shown below


18. Verify your service is registered as shown below


More Information

http://docs.pivotal.io/spring-cloud-services/service-registry/index.html

https://docs.pivotal.io/spring-cloud-services/service-registry/resources.html

http://docs.pivotal.io/spring-cloud-services/service-registry/writing-client-applications.html
Categories: Fusion Middleware

Application Management Pack 13.1.1.1 for EM 13c Now Available

Steven Chan - Mon, 2016-07-11 15:24

Application Management Pack (AMP) 13.1.1.1 for Oracle E-Business Suite is now available.  This E-Business Suite plug-in for Oracle Enterprise Manager Cloud Control 13c can be used to manage Oracle E-Business Suite 11.5.10.2, 12.0, 12.1, and 12.2 environments.

What's new in this release?

  • Enterprise Manager 13c Compatibility
EM 13c includes several enhancements, including a new Fusion Middleware technology stack, the Alta skin user interface and Edition-Based Redefinition (EBR) support.


  • Hybrid Cloud Management
Monitor and manage E-Business Suite environments on Oracle Cloud and on-premises. These capabilities are delivered through the Enterprise Manager command line interface (EMCLI) to manage, provision, migrate, back up, restore, clone, patch, and lift-and-shift E-Business Suite on Oracle Cloud.
  • System Management
    • Real User Experience Insight (RUEI)  regions integrated within E-Business Suite Summary Dashboard
    • Automated host aliasing

  • Change Management
    • Patch Recommendations: Deploy recommended E-Business Suite technology stack patches, including Database and WebLogic Server patches, using EM patch plans. 
    • Customization Management: View the inventory of customizations in an Oracle E-Business Suite environment. View or download a spreadsheet of discovered customizations by customization type. Customization Discovery & Reporting process now includes discovery of database objects. 
    • Cloning: Save cloning interview process as templates that can be used for future purposes for Smart Clone for R12, 12.2.x procedure, and add custom parameters to the cloning procedure. 

References

Downloads
The Oracle Application Management Suite for Oracle E-Business Suite can be downloaded and installed by using the Self Update feature of Oracle Enterprise Manager.

Related Articles

Categories: APPS Blogs

SharePlex Compare and Repair commands

Yann Neuhaus - Mon, 2016-07-11 13:31

Reading Franck Pachot's blog about comparing source and target in a Dbvisit replication (http://blog.dbi-services.com/compare-source-and-target-in-a-dbvisit-replication/), I decided to write a small article about how we can do the same thing with SharePlex. Indeed, SharePlex provides built-in commands to compare and repair synchronization.
Below is our configuration file. We are just replicating the table article to article_rep. Both tables contain 2000000 rows.

datasource:o.SPLEXDB

#source tables target tables routing map
titi.article titi.article_rep atlasrep2@o.SPLEXSTR2

From the source let’s insert 2 rows to verify that replication is working fine.

SQL> insert into article values (2000001,'2000001_designation',2000000001);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into article values (2000002,'2000002_designation',2000000002);

1 row created.

SQL> commit;

Commit complete.


SQL> select * from article where idart in (2000001,2000002);

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000001 2000001_designation            2000000001
   2000002 2000002_designation            2000000002

From the target let’s verify that the 2 rows are replicated

SQL> select * from article_rep where idart in (2000001,2000002);

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000001 2000001_designation            2000000001
   2000002 2000002_designation            2000000002

To compare source and target in SharePlex, the compare command is used. Let's see a demonstration.
From the source 

sp_ctrl (atlas:2104)>  compare titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

  comparing 1 of 1 objects

  compare started; job id 7

sp_ctrl (atlas:2104)>

And now let’s see the status of our replication

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 7
PID       : 8644
Host      : atlas.localdomain
Started   : 06-JUL-16 11:34:48
Job Type  : Compare
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     In Sync           N/A    2000002   100       0:31


sp_ctrl (atlas:2104)>

We see that tables are in sync. When running the compare command, SharePlex obtains a brief exclusive lock on the source table to get read consistency for its row selection.
On the target system, SharePlex obtains an exclusive lock on the target table and retains the lock for the duration of the comparison of that table.
Now let's force the replication out of sync.
From the target, let's delete a row (note that we have one-directional replication only, from source to target)

SQL> delete from article_rep where idart=2000002;

1 row deleted.

SQL> commit;

Commit complete.

SQL>

And from the source let’s insert a row

SQL> insert into article values (2000003,'2000003_designation',2000000003);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from article where idart=2000003;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000003 2000003_designation            2000000003


SQL> select count(*) from article;

  COUNT(*)
----------
   2000003

SQL>

From the target we can verify that the last row with idart=2000003 is present, but the number of rows now differs between the two tables: we deleted the row with idart=2000002 on the target, and that deletion is not replicated back to the source. The replication is now out of sync.

SQL> select * from article_rep where idart=2000003;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000003 2000003_designation            2000000003

SQL> select count(*) from article_rep;

  COUNT(*)
----------
   2000002

Let's run the compare command again on the source and see the output now

sp_ctrl (atlas:2104)> compare titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

We can see that the status is now Out Sync (note that the comparison of 2000000 rows was very quick)

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 9
PID       : 12583
Host      : atlas.localdomain
Started   : 06-JUL-16 13:24:48
Job Type  : Compare
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     Out Sync          N/A    2000003   100       0:25

To bring the replication back in sync we just have to use the repair command. This command is magic.

sp_ctrl (atlas:2104)> repair  titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

  repairing 1 of 1 objects

  repair started; job id 10

In the log file on the target we can see that SharePlex uses ORA_HASH to compare the rows

[oracle@atlasrep2 log]$ less declt_10-1_SPLEXDB_192.168.1.40_p9883.log
declt    2016-07-06 13:36:45.228937 9883 1352263552 NOTE: setting up HASH column info (de_table_setup_ora_hash_compare,L2915)
declt    2016-07-06 13:36:45.228971 9883 1352263552 Key Compare : Off
declt    2016-07-06 13:36:45.229009 9883 1352263552 Job ID      : 10
declt    2016-07-06 13:36:45.229017 9883 1352263552 Source Table: "TITI"."ARTICLE"
declt    2016-07-06 13:36:45.229122 9883 1352263552 Source Node : atlas.localdomain
declt    2016-07-06 13:36:45.229130 9883 1352263552 Target Table: "TITI"."ARTICLE_REP"
declt    2016-07-06 13:36:45.229135 9883 1352263552 Target Route: atlasrep2@SPLEXSTR2
declt    2016-07-06 13:36:45.229140 9883 1352263552 Batch Size  : 100000 rows
declt    2016-07-06 13:36:45.229145 9883 1352263552 Repair      : On (2 Pass)
declt    2016-07-06 13:36:45.229150 9883 1352263552 sp_declt PID: 9883
declt    2016-07-06 13:36:45.229155 9883 1352263552 Read Buffer Size : 1 mbytes
declt    2016-07-06 13:36:45.237727 9883 1352263552 Current status: ready to fetch batch,of rows; including the order by sorting.
declt    2016-07-06 13:36:45.237782 9883 1352263552 running sql statement : (de_select_prepare_to_fetch,L1384)
declt    2016-07-06 13:36:45.237795 9883 1352263552   SELECT /*+ PARALLEL (A,2) */ ROWID,ORA_HASH("IDART"||'~'||"PRIX"), ORA_HASH("DESIGNATION") FROM "TITI"."ARTICLE_REP" A ORDER BY 2, 3
declt    2016-07-06 13:36:54.064711 9883 1352263552 Current status: got batch of rows
declt    2016-07-06 13:37:04.527252 9883 1352263552 NOTE: total number of rows fetched = 2000002 (../src/deqtr/de_select.cpp,L1630)
declt    2016-07-06 13:37:04.533329 9883 1352263552 Notice: starting the second phase in the repair process
declt    2016-07-06 13:37:04.616704 9883 1352263552 Leaving de_msg_clt_preCompareMessaging1_5,L1297
declt    2016-07-06 13:37:06.659513 9883 1352263552 Leaving de_msg_clt_preCompareMessaging2,L614
declt    2016-07-06 13:37:06.659716 9883 1352263552 No new key column order, using default column order for TITI.ARTICLE_REP
declt    2016-07-06 13:37:06.721957 9883 1352263552 Current status: ready to fetch batch,of rows; including the order by sorting.
declt    2016-07-06 13:37:06.721996 9883 1352263552 running sql statement : (de_select_prepare_to_fetch,L1384)
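The ORA_HASH approach visible in the log can be mimicked in a few lines: hash the key columns and the remaining columns separately, then any key present on only one side, or present on both sides with different data hashes, identifies a row to repair. A rough Python analogue with made-up rows (illustrative only, not SharePlex's actual algorithm):

```python
import hashlib

def row_hashes(rows):
    """Map key-hash -> data-hash, mirroring the two ORA_HASH calls in the
    log (one over IDART||'~'||PRIX, one over DESIGNATION)."""
    out = {}
    for idart, designation, prix in rows:
        key = hashlib.md5(f"{idart}~{prix}".encode()).hexdigest()
        out[key] = hashlib.md5(designation.encode()).hexdigest()
    return out

def compare(source, target):
    """Return keys missing on the target and keys whose data differs."""
    s, t = row_hashes(source), row_hashes(target)
    missing = set(s) - set(t)
    differs = {k for k in set(s) & set(t) if s[k] != t[k]}
    return missing, differs

source = [(1, 'a_designation', 10), (2, 'b_designation', 20)]
target = [(1, 'a_designation', 10)]        # row 2 was deleted on the target
missing, differs = compare(source, target)
assert len(missing) == 1 and not differs   # one row to re-insert on the target
```

Sorting by the hashes, as the logged SQL does with its ORDER BY 2, 3, lets the real implementation stream and match the two row sets in batches instead of holding them all in memory.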

When issuing the show compare command again, we can see that the status is now Repaired.

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 10
PID       : 13008
Host      : atlas.localdomain
Started   : 06-JUL-16 13:36:37
Job Type  : Repair
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     Repaired          N/A    2000003   100       0:29

From the source we can verify the number of rows

SQL> select count(*) from article;

  COUNT(*)
----------
   2000003

From the target we can also verify that we have the same number of rows and that the previously deleted row (idart=2000002) is present again.

SQL> select count(*) from article_rep;

  COUNT(*)
----------
   2000003


SQL> select * from article_rep  where idart=2000002;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000002 2000002_designation            2000000002
Conclusion

We will retain that SharePlex provides powerful compare/repair commands to fix out-of-sync replication. Just note that during the execution of these commands the tables are locked (briefly on the source, longer on the target), meaning that no transactions are possible during this period.
Note that the compare and repair commands can also be used for an initial load (not recommended, because of the locks).


 

The article SharePlex Compare and Repair commands first appeared on the dbi services blog.

How to Avoid The “ORA-00979 not a group by expression” Error

Complete IT Professional - Mon, 2016-07-11 06:00
Have you tried to run a query and got the “ORA-00979: not a group by expression” error? Learn what it is and how to resolve the error in this article. Generating the ORA-00979 Error This error can be generated when your run SELECT queries on your database. Here’s an example of a query that will […]
Categories: Development

PeopleSoft Integration Broker (IB) Security

Securing the PeopleSoft Integration Broker (IB) ensures the security of messaging both within PeopleSoft applications and among third-party systems. The following are several of the key tasks that Integrigy performs during our PeopleSoft security configuration assessments - take a look today at your settings:

  • Ensure all inbound requests are required to use Secure Socket Layer security/Transport Layer Security (SSL/TLS)
  • Ensure that the default PSKEY password has been changed - The PSKEY keystore contains all root and node certificates used by the Integration Gateway and PIA. Using the default or a weak password is not best practice.
  • Ensure the IB node ANONYMOUS is appropriately privileged. If IB connections do not specify a node name and credentials, IB will try to use the ANONYMOUS node and the “default user ID” tied to that node. This default user must not be a highly privileged user and should be granted the least privilege possible.
  • Review all other nodes for permissions appropriate to the business services supported by the node. Best practice is to use a unique UserID for each node, with permissions only to the required objects or related sets of operations.

The following attributes, which govern IB activity, are also reviewed:

Integration Broker Profile Values

  • IB_PROFILESTATUS – IB Profile Status. If enabled, IB will show performance information. Recommendation: for production or Internet-facing systems, set to off.
  • IB_ENABLELOG – Enables logging. Recommendation: for production or Internet-facing systems, set to off.
  • IB_LOGLEVEL – Log level (if logging is enabled): 1 = standard gateway exception errors; 2 = all errors and warnings (default); 3 = errors, warnings and important information; 4 = errors, warnings, important and standard information; 5 = errors, warnings, important, standard and low-importance information. Recommendation: default (2).
  • IB_DEPTHLIMIT – Checks for recursion within messages (number of levels) to ensure that messages do not reference themselves. Recommendation: a value between 3 and 50; default is 20.
  • IB_MASTER_OVERRIDE – Determines whether Master processing returns statistics in the Output Information section after a Post. Recommendation: for production or Internet-facing systems, set to off.
  • IB_PRE_848 – Pre-8.48 Tools release flag. Recommendation: default is N.
  • IB_MULTIACT_DOMAIN – By default, only one domain may be active in the Integration Broker system, but PeopleSoft provides the option to enable the activation of multiple domains. Recommendation: off unless required.
  • IB_USEIPADDRESS – Determines whether the application server URL for a synchronous slave template uses the application server IP address (i.e. changes the URL format from <machine name>:<jolt port> to an IP address). Recommendation: on.
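For a production or Internet-facing system the recommendations above amount to a short checklist, which is easy to audit mechanically. A minimal sketch (plain Python; the setting names come from the list above, the sample values are made up):

```python
# Recommended values for a production / Internet-facing system,
# taken from the Integration Broker profile values above.
POLICY = {
    'IB_PROFILESTATUS':   'off',
    'IB_ENABLELOG':       'off',
    'IB_MASTER_OVERRIDE': 'off',
    'IB_MULTIACT_DOMAIN': 'off',   # off unless multiple domains are required
}

def audit(settings):
    """Return the settings that deviate from the recommendation."""
    return {k: v for k, v in settings.items()
            if k in POLICY and v != POLICY[k]}

# Hypothetical current configuration of an instance under review.
current = {'IB_PROFILESTATUS': 'on', 'IB_ENABLELOG': 'off',
           'IB_MASTER_OVERRIDE': 'off', 'IB_MULTIACT_DOMAIN': 'off'}
findings = audit(current)
assert findings == {'IB_PROFILESTATUS': 'on'}   # one deviation flagged
```

The same pattern extends naturally to the value-range settings (IB_DEPTHLIMIT, IB_LOGLEVEL) by swapping the equality check for a range check.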

 

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Reset your Datasources

Darwin IT - Mon, 2016-07-11 05:09
In most SOASuite and Oracle ServiceBus projects the Database Adapter is used, often to invoke PL/SQL functions and procedures. Actually, it's my favorite way to interact with the database, since the re-entrancy of the Database Adapter wizard works best there: re-entrantly updating a select or DML operation when, for instance, a column is added is often quite problematic.

But the thing with PL/SQL is that when you update a package that has state, the existing state is discarded, and the next call from a session that already used the package raises 'ORA-04068: existing state of packages has been discarded'. This will occur once for every connection in the pool of your datasource.
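A minimal sketch of the mechanism (package and variable names are made up for illustration):

```sql
-- A package with state: any package-level variable counts.
create or replace package pkg_state as
  g_counter number := 0;   -- this variable is the "state"
  procedure bump;
end;
/
create or replace package body pkg_state as
  procedure bump is begin g_counter := g_counter + 1; end;
end;
/
-- Session A calls it, instantiating the state:
exec pkg_state.bump

-- Another session now recompiles the package:
--   alter package pkg_state compile body;

-- The next call from session A raises:
--   ORA-04068: existing state of packages has been discarded
-- A second call in the same session succeeds, because the state has been
-- re-instantiated -- which is why recreating all pooled connections
-- clears the error in one go.
```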

The solution is to reset the datasource. This can be done easily in the WebLogic Administration Console (http://adminserver:port/console). But nowadays you can do it just as easily in Enterprise Manager Fusion Middleware Control, which is quite convenient for developers, since you often have EM open already for your SOASuite tests.

To do this, open the Weblogic Domain menu and select the option 'JDBC DataSources':

Select the DataSource you want to reset. For the example I chose the 'EDNDataSource', but you'd probably do this for a custom DataSource rather than this one:

Click the Control tab, select the DataSource in the table and click Reset:

Reset drives WebLogic to recreate all the connections in the DataSource, which prevents the message from popping up repeatedly.

The Configuring, Monitoring and Controlling functionality of the DataSource is similar to that in the WebLogic Admin Console; only the layout is a little different.
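The same reset can be scripted, which is handy if it has to happen after every package deployment. An untested WLST (Jython) sketch — URL, credentials, server and datasource names are placeholders — using the runtime MBean's reset operation:

```python
# WLST sketch: reset a datasource's connection pool from the command line.
# Run with: wlst.sh reset_ds.py
connect('weblogic', 'welcome1', 't3://adminserver:7001')
domainRuntime()
# Navigate to the runtime MBean of the pool on the target managed server.
cd('ServerRuntimes/soa_server1/JDBCServiceRuntime/soa_server1'
   '/JDBCDataSourceRuntimeMBeans/EDNDataSource')
cmo.reset()   # recreates all connections in the pool
disconnect()
```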

Question on multiple DML in FORALL

Tom Kyte - Mon, 2016-07-11 04:46
Hi, I have below question on FORALL 1) <code> forall i in 1 .. l_var.count delete tab1 where id=l_var(i); forall i in 1 .. l_var.count delete tab2 where id=l_var(i); forall i in 1 .. l_var.count delete tab3 where id=l_var(i); </code> 2)...
Categories: DBA Blogs
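Each FORALL drives exactly one DML statement, so three separate FORALLs over the same collection is the right shape. A runnable sketch (table names tab1..tab3 come from the question; the collection type is an assumption):

```sql
declare
  type t_ids is table of number;
  l_var t_ids := t_ids(1, 2, 3);
begin
  -- FORALL needs an index variable: "forall i in 1 .. l_var.count".
  forall i in 1 .. l_var.count
    delete from tab1 where id = l_var(i);
  forall i in 1 .. l_var.count
    delete from tab2 where id = l_var(i);
  forall i in 1 .. l_var.count
    delete from tab3 where id = l_var(i);
end;
/
```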

Error when import large data from 11.2.0.3 into 11.2.0.4 Oracle Exadata

Tom Kyte - Mon, 2016-07-11 04:46
Hi, We hit this and no clue how it suddenly happen. Issue cronology as below (using Toad version 10.2, 10.5 and 11). 1) Export out from Exadata DB version 11.2.0.3 table A with 4million records - ended successfully. 2) Import into Exadata DB ...
Categories: DBA Blogs

Adding subpartitions

Tom Kyte - Mon, 2016-07-11 04:46
Hello Sir I have a table partitioned on RANGE (daily) (interval partitioning) - there are about 300 partitions . Now I want to add a sub-partition to each of these based on a value in a different column. Should I come up with a script with 300 A...
Categories: DBA Blogs
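Since a non-composite table cannot simply have subpartitions bolted on, one common route is online redefinition into an interim table that is already composite-partitioned. A hedged sketch (table names are illustrative, and the interim table must be created first with the desired RANGE/LIST or RANGE/HASH layout):

```sql
declare
  l_errors pls_integer;
begin
  dbms_redefinition.start_redef_table(user, 'ORIG_TAB', 'INTERIM_TAB');
  -- Copy indexes, constraints, triggers, grants onto the interim table.
  dbms_redefinition.copy_table_dependents(user, 'ORIG_TAB', 'INTERIM_TAB',
                                          num_errors => l_errors);
  dbms_redefinition.finish_redef_table(user, 'ORIG_TAB', 'INTERIM_TAB');
end;
/
```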

Distinct Values from Multidimensional Collection

Tom Kyte - Mon, 2016-07-11 04:46
Hi Tom, Is there a way to retrieve Unique records from a multidimensional collection? DISTINCT and MULTISET operators seem to work on collections only when they have 1 field. I have the below <b>TYPE TYP_TABLE_REC IS RECORD ( SCHEMA_NA...
Categories: DBA Blogs
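One approach worth sketching (names illustrative, not the asker's exact types): lift the record type to a SQL-level object type with a MAP method so Oracle can compare elements; the SET function can then de-duplicate the nested table.

```sql
create type typ_rec as object (
  schema_na varchar2(30),
  table_na  varchar2(30),
  -- MAP method makes elements comparable, which DISTINCT/SET requires.
  map member function key return varchar2
);
/
create type body typ_rec as
  map member function key return varchar2 is
  begin
    return schema_na || '.' || table_na;
  end;
end;
/
create type typ_tab as table of typ_rec;
/
declare
  l_all  typ_tab := typ_tab(typ_rec('HR','EMP'),
                            typ_rec('HR','EMP'),
                            typ_rec('HR','DEPT'));
  l_uniq typ_tab;
begin
  l_uniq := set(l_all);   -- 3 elements in, 2 distinct out
end;
/
```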

Best Way to Design a Table

Tom Kyte - Mon, 2016-07-11 04:46
I've to design Professional Tax table for Payroll(INDIA) , For every state it has its own prof tax according to their gross salary Consider One state Say MAHARASHTRA IF PR-GROSS < 2500.00 TAX= 30.00 IF PR-GROSS < 3500.00 TAX= 60.00 IF PR...
Categories: DBA Blogs
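The usual answer to band-based rules like these is to store the bands as data, not as code. A sketch with illustrative names, using the MAHARASHTRA figures from the question:

```sql
create table prof_tax_band (
  state_code  varchar2(2) not null,
  gross_from  number      not null,  -- inclusive lower bound
  gross_to    number,                -- exclusive upper bound; null = no cap
  tax         number      not null
);

insert into prof_tax_band values ('MH',    0, 2500, 30);
insert into prof_tax_band values ('MH', 2500, 3500, 60);
-- ... one row per band per state ...

-- Lookup for one employee:
select tax
from   prof_tax_band
where  state_code = :state
and    :gross >= gross_from
and    (:gross < gross_to or gross_to is null);
```

New states or revised rates then become inserts/updates rather than code changes.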

oracle SQL — Replace comma separated string with another set of values

Tom Kyte - Mon, 2016-07-11 04:46
I have the following table: <code>B_ID I_ID R_ID W00001 1234 1235,1237 B00001 1235 1236,1235 T00001 1236 1235,1235,1235 X00001 1237 1234,1236,1238 M00001 1238 1238</code> I need output like below using sql <code>B_ID I_ID New...
Categories: DBA Blogs
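The usual shape of a solution: split R_ID into one row per id, map each id to its B_ID via a self-join, and re-assemble with LISTAGG. A sketch assuming the table is called t (the LATERAL clause needs 12c; on older versions the same split can be done with the TABLE(CAST(MULTISET(...))) trick):

```sql
with pieces as (
  select t.b_id, t.i_id,
         regexp_substr(t.r_id, '[^,]+', 1, n.pos) as one_id,
         n.pos
  from   t
  cross join lateral (
           select level as pos
           from   dual
           connect by level <= regexp_count(t.r_id, ',') + 1
         ) n
)
select p.b_id, p.i_id,
       listagg(m.b_id, ',') within group (order by p.pos) as new_r_id
from   pieces p
join   t m on m.i_id = to_number(trim(p.one_id))
group  by p.b_id, p.i_id;
```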

Clob and minus - alternatives for Diff Deltas

Tom Kyte - Mon, 2016-07-11 04:46
Hello Great Asktom Team, We can't do a minus query with a clob column. So what are the alternatives. I have ETL process which finds deltas of table based on minus. Can't do that with a column having data type clob So how to do it in pure...
Categories: DBA Blogs
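One workable alternative: MINUS cannot compare CLOBs directly, but it can compare hashes of them. A sketch assuming EXECUTE privilege on DBMS_CRYPTO (whose HASH function accepts a CLOB; HASH_SH256 is available in recent releases, older ones have HASH_SH1):

```sql
-- Rows in NEW_TAB whose (id, clob content) pair is not in OLD_TAB.
select id, dbms_crypto.hash(clob_col, dbms_crypto.hash_sh256) as clob_hash
from   new_tab
minus
select id, dbms_crypto.hash(clob_col, dbms_crypto.hash_sh256)
from   old_tab;
```

For a delta ETL, the hash can also be stored alongside the row so that only changed CLOBs need to be re-read.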
