Feed aggregator

From BI to AI: Whose Intelligence is Better?

Oracle AppsLab - Tue, 2016-07-12 08:54

Numbers are a property of the universe. Once Earthians figured that out, there was no stopping them. They went as far as the Moon.

We use numbers in business and life. We measure, we look for oddities, we plan. We think of ourselves as rational.

I, for one, like to look at the thermometer before deciding whether I should go out in flip-flops or Uggs. But I cannot convince my daughter to do the same. She tosses a coin.


More often than we like to think, business decisions are made the way my daughter decides what to wear. I need an illustration here, so let me pick on workers’ compensation. If you have workers, you want to reward them for good work, and by doing that, encourage the behaviors you want to see more of – you want them to work harder, better, quicker, and be happy. You can measure productivity by the amount of, say, shoes sold. You can measure quality by, say, the number of customers who came back to buy more shoes. You can measure happiness by, say . . . okay, let’s not measure happiness. How do you calculate what the workers’ compensation should be, based on these two measures?

50/50? 75/25? 25/75? Why? Why not? This is where most businesses toss a coin.

Here is an inventory of types of questions people tend to answer by tossing a coin:

  • Should you monitor the dollar amount of sales, or the percentage of sales increase?
  • Which of the two measures lets you better predict future performance?
  • Why would it?
  • How accurate are the predictions?
  • How large can the errors be before you feel the measure doesn’t make accurate predictions? Why?
  • Which measures should be combined and looked at together?
  • In which way?
  • Where would you set up thresholds between good, bad, and ugly?
  • Why? Why not?
  • If some numbers are way off, how do you know it is an exception and not part of some pattern that you don’t see?

If not tossing a coin, it is common practice to answer these kinds of questions based on a gut feeling. To answer them based on evidence instead, there must be a way to evaluate the gut feeling, together with a bunch of other hypotheses, in order to choose the hypothesis that is actually true and works. This is hard for humans, and not only because it requires a lot of modeling and computation.

Conceptually, as humans, we tend to look for reasons and explain things. It is hard for us to see a pattern if we don’t see why it works. “I wouldn’t have seen it if I hadn’t believed it,” as one wise person said. Admit it, we are biased. We won’t even consider evaluating a hypothesis that looks like complete nonsense.

Computers, on the other hand, don’t have such a problem. Machine learning can create and test thousands of crazy hypotheses for you and select the best one – the best one at predicting, not explaining. It can also keep updating the hypotheses as conditions change.

That’s why I believe AI is the new BI. It is more thorough and less biased than we humans are. Therefore, it is often more rational.

I am fascinated to learn about ML algorithms and what they can do for us. Applying the little I learned about Decision Trees to the workers’ compensation dilemma above, this is what I get. Let’s pretend the workers get a bonus at the end of the year. The maximum amount of the bonus is based on their salary, but the exact amount is a percentage of the maximum based on performance – partially on the amount of sales, partially on the number of returning customers. These are your predictors. Your goal in paying out the bonus is that next year your workers increase the amount of sales AND the number of returning customers at the same time. That’s your outcome.


A Decision Tree algorithm will look at each possible split of your predictors, and will measure which one best divides your outcomes into categories. (They say it is the split that minimizes entropy and maximizes information gain.)


Had we tried to do that “by hand,” it would have taken a very long time. But here we have the most effective bonus recipes figured out for us. Some of the recipes may look counter-intuitive; we may find out that the largest bonus is not the best encouragement, or some such. But, again, figuring out the “whys” is a different problem.

And here is my little classification of business intelligence tasks that I believe AI can take over and improve upon.

[chart: classification of business intelligence tasks that AI can take over]

As a human and a designer who welcomes our machine learning overlords, I see their biggest challenge in overcoming our biggest bias: the belief in our own superior rationality.

Set the SQL Native Client Default Port with PowerShell

Yann Neuhaus - Tue, 2016-07-12 08:06

I wrote an article about “SQL Server 2012: Configuring your TCP Port via PowerShell” and I received a question from PaulJ:
“How do you set the port for the SQL Native Client 11.0 Configuration (32bit) – as seen in the SQL Configuration Manager?”

This is a very good question and I decided to write this blog as an answer to it.
The first step is always the same: initialization of my object:

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null

$wmi = New-Object ("Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer")

The second step is to find out which client protocol the setting belongs to.
In the class “Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer”, you find a property “ClientProtocols”, as you can see on the MSDN web page:
[screenshot: the ClientProtocols property in the MSDN documentation]
I display the name and the protocol properties with this command:

$wmi.ClientProtocols | Select displayname -ExpandProperty ProtocolProperties

[screenshot: client protocol names and their properties]
As you can see, I have 4 client protocols (Named Pipes, default port, KEEPALIVE and KEEPALIVEINTERVAL).
The next step is to select the default port:

$tcp_list = $wmi.ClientProtocols  | Where-Object {$_.displayname -eq "TCP/IP"}
$default_tcp = $tcp_list.ProtocolProperties | Where-Object {$_.Name -eq "Default Port"}
$default_tcp

[screenshot: the Default Port property showing 1433]
As you can see, the default client port is set to 1433. Now I will set another value for this port:

$default_tcp.value=50100

Note: The port has a System.Int32 type
Validate this change with an Alter:

$tcp_list.alter()

To finish, do not forget to restart your services to activate the port change:

$sql_service = ($wmi.Services | Where-Object { $_.Type -eq "SqlServer" })
$sql_service.alter()
$sql_service.stop()
$sql_service.start()

[screenshot: the default port after the change]
Et voilà! The default port for the client protocol is changed!

 

The article Set the SQL Native Client Default Port with PowerShell appeared first on Blog dbi services.

How to view database patching information in Oracle 12.1.0 and higher

Syed Jaffar - Tue, 2016-07-12 06:58
We are currently engaged with an assignment to migrate over 50 production databases of different sizes, operating systems and business priorities to Exadata. Each passing day we encounter some technical challenges, and of course each situation educates us and adds more knowledge.

I would like to share a small post on how to list/view the patching information in Oracle 12.1.0 and higher:

As of 12.1.0.2, you can extract the database patches information using the following methods:
1. The traditional 'opatch lsinventory'
2. Review database alert.log
3. Use the dbms_qopatch package

Review database alert.log

We noticed that an Oracle 12.1.0.2 instance (at least on Exadata X5-2) dumps current patch information into its alert.log. This is pretty handy: you no longer need to run lsinventory to know which patches are applied; just review the alert.log for the patching information.

Below is the excerpt of the alert.log:

Wed May 25 17:59:05 2016
===========================================================
Dumping current patch information
===========================================================
Patch Id: 20243804
Patch Description: DATABASE BUNDLE PATCH: 12.1.0.2.5 (20243804)
Patch Apply Time: 2016-04-19 17:01:42 GMT+03:00
Bugs Fixed: 13498243,13640676,14165431,14254610,14643995,15894842,15996428,
16010876,16293223,16356176,16359751,16556115,16619249,16870214,16938780,


Use the dbms_qopatch package:

with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
select x.*
from a,
xmltable('InventoryInstance/patches/*'
passing a.patch_output
columns
patch_id number path 'patchID',
patch_uid number path 'uniquePatchID',
description varchar2(80) path 'patchDescription',
applied_date varchar2(30) path 'appliedDate',
sql_patch varchar2(8) path 'sqlPatch',
rollbackable varchar2(8) path 'rollbackable'
) x;

Apparently there is a bug when the query is executed on 12.1.0.1. Read "12.1.0.1 datapatch issue : ORA-27477: "SYS"."LOAD_OPATCH_INVENTORY_1" already exists (Doc ID 1934882.1)".

Nulls in composite keys

Yann Neuhaus - Tue, 2016-07-12 06:36

Comparisons with NULL can be misleading, and it’s even worse for unique constraint validation. Having partial nulls in a composite key can be tricky because the SQL ANSI specification is not very easy to understand, and the implementation can depend on the RDBMS. Here is an example with a composite unique key and foreign key on Oracle.

Unique constraint

I create a table with a composite unique constraint:
SQL> create table TABLE1 (a char, b char, unique(a,b));
Table TABLE1 created.

I can insert a row with a=’X’ and b=’X':
SQL> insert into TABLE1 values ('X','X');
1 row inserted.

I cannot insert the same row:
SQL> insert into TABLE1 values ('X','X');
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C0015464) violated

I insert another row with same value for column a but different value for column b:
SQL> insert into TABLE1 values ('X','Y');
1 row inserted.

And another row with same value for column a but a null for column b:
SQL> insert into TABLE1 values ('X',null);
1 row inserted.

However, I cannot insert the same row a second time:
SQL> insert into TABLE1 values ('X',null);
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C0015464) violated

If you look at documentation, this is documented as:
Because of the search mechanism for unique key constraints on multiple columns, you cannot have identical values in the non-null columns of a partially null composite unique key constraint.

It looks like an implementation reason (the search mechanism is the index that enforces the unique constraint). What is documented in SQL-92?
A unique constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns.

How to interpret this? We cannot insert two (‘X’,null) rows because that would be two rows with the same non-null value (a=’X’), and the Oracle implementation is compliant.

Or is it? We can also read the definition as the unique constraint being violated only when we find rows that have non-null values and those values are the same. This is what MySQL and PostgreSQL do: they accept duplicates when there is at least one null.
This is also what I find more intuitive: I usually consider NULL as a value that is not known at insert time but that will be assigned a value later during the lifecycle of the row. Thus, I expect to be able to insert rows where there is a null and to check the constraint only when all columns have a value.

It is probably an implementation choice from Oracle which stores nulls as a zero-length string and then cannot have two identical entries in a unique index.

Now inserting a row where a is null and b is null:
SQL> insert into TABLE1 values (null,null);
1 row inserted.

And because that does not violate the rule whichever way we read it (the non-null values are not the same, as there are no non-null values at all here), I can insert a second one:
SQL> insert into TABLE1 values (null,null);
1 row inserted.

This is documented as
Unless a NOT NULL constraint is also defined, a null always satisfies a unique key constraint

About the implementation, there is no problem because entirely null entries are not stored in the index. They are stored in bitmap indexes, but bitmap indexes cannot be used to enforce a unique constraint.

In summary, here is what can be stored on a table where (a,b) is unique but nullable:

SQL> select rownum,TABLE1.* from TABLE1;
 
ROWNUM A B
---------- - -
1 X X
2 X Y
3 X
4
5

Foreign key

Now that I have a unique key, I can reference it:
SQL> create table TABLE2 (a char, b char, foreign key(a,b) references TABLE1(a,b));
Table TABLE2 created.

Yes. You don’t need to reference the primary key. Any unique key, even with nullable columns, can be referenced.

I can insert a row where parent exists:
SQL> insert into TABLE2 values('X','X');
1 row inserted.

As I’ve no unique key on the child, it’s a many-to-one relationship:
SQL> insert into TABLE2 values('X','X');
1 row inserted.

I also have a parent with a=’X’ and b=’Y':
SQL> insert into TABLE2 values('X','Y');
1 row inserted.

But I’ve no parent with a=’Y':
SQL> insert into TABLE2 values('Y','Y');
*
ERROR at line 1:
ORA-02291: integrity constraint (SYS.SYS_C0015465) violated - parent key not found

So far so good. I said that I have a many-to-one relationship, but it’s actually many to one-or-zero because my columns are nullable:
SQL> insert into TABLE2 values(null,null);
1 row inserted.

So far so good. But I have a composite key with nullable columns here, and I can insert a row where a=’X’ and b is null:
SQL> insert into TABLE2 values('X',null);
1 row inserted.

But do you think that all non-null parent values must exist?
SQL> insert into TABLE2 values('Y',null);
1 row inserted.

Once again, this is documented as:
If any column of a composite foreign key is null, then the non-null portions of the key do not have to match any corresponding portion of a parent key.

And this is what is specified in SQL-92:
If no <match type> was specified then, for each row R1 of the referencing table, either at least one of the values of the referencing columns in R1 shall be a null value, or the value of each referencing column in R1 shall be equal to the value of the corresponding referenced column in some row of the referenced table.
There is more detail about the other match types in the Oracle Development Guide.

That may look strange, but, still thinking about NULLS as unknown values, you can consider that constraints cannot be validated until we know all values.

Here is what I was able to insert into my table even with no a=’Y’ in the parent:

SQL> select rownum,TABLE2.* from TABLE2;
 
ROWNUM A B
---------- - -
1 X X
2 X X
3 X Y
4 X
5
6 Y

So what?

Having nulls in a composite unique key or foreign key can be misleading, so it’s better to ensure that what you define fits what you expect. It’s probably better to prevent partial nulls in the foreign key (a check constraint can ensure that if one column is null then all columns must be null) or to have an additional referential integrity constraint which ensures that you can set only the allowed values for a subset of columns (in our case, a table with column a as a primary key that we can reference).
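
As a minimal sketch of the first suggestion (the constraint name is illustrative, and it assumes the offending rows from the demo above have been removed first), a check constraint on the child table rejects partially null keys:

SQL> alter table TABLE2 add constraint TABLE2_A_B_ALL_OR_NONE
  check ( (a is null and b is null) or (a is not null and b is not null) );

With that in place, an insert such as (‘Y’,null) is rejected with an ORA-02290 check constraint violation instead of being silently accepted without a matching parent.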

 

The article Nulls in composite keys appeared first on Blog dbi services.

Union All MV

Jonathan Lewis - Tue, 2016-07-12 04:10

In an article I wrote last week about Bloom filters disappearing as you changed a SELECT to a (conventional) INSERT/SELECT I suggested using the subquery_pruning() hint to make the optimizer fall back to an older strategy of partition pruning. My example showed this working with a range partitioned table but one of the readers reported a problem when trying to apply the strategy to a composite range/hash partitioned table and followed this up with an execution plan of a select statement with a Bloom filter where the subquery_pruning() hint didn’t introduce subquery pruning when the select was used for an insert.

A couple of standard ways to work around this problem are to embed the select statement in a pipelined function so that we can “insert into table select from table(pipeline_function)”, or to write a pl/sql block that opens a cursor to do a select with bulk collect and loops through an array insert. The overhead in both cases is likely to be relatively small (especially when compared with the overhead of failing to filter). In this case, however, the reader suggested that maybe the problem appeared because the driving table (i.e. the one that would have been queried to derive the pruning values) was actually an inline view with a union all.

After modifying my working model to try a couple of different tests I was inclined to agree. Since the two tables in the view looked as if they were likely to be relatively tiny and static I suggested that it would be safe to create a materialized view defined to “refresh on commit” and then use the materialized view explicitly in the query. This, finally, brings me to the point of today’s article – how do you create such a materialized view ?

I’m going to start by creating a couple of small base tables from a familiar object:


create table tt as select * from all_objects where object_type = 'TABLE';
create table tv as select * from all_objects where object_type = 'VIEW';

alter table tt add constraint tt_pk primary key (object_id);
alter table tv add constraint tv_pk primary key (object_id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'TT',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'TV',
                method_opt       => 'for all columns size 1'
        );
end;
/

Assume, now, that I need an inline view that is interested in the things you will recognise from the above as the tables owned by OUTLN (which will appear in tt) and the views owned by SYSTEM (which will appear in tv) – in the 11.2.0.4 system I’m playing on at the moment that’s three rows from each of the two tables. Here’s the SQL I’d put into the inline view:


select
        object_id, object_type, object_name
from    tt
where   owner = 'OUTLN'
union all
select
        object_id, object_type, object_name
from    tv
where   owner = 'SYSTEM'
;

Since this view won’t give me partition pruning I have to replace it with a table and because I want to ensure that the table is always up to date I have to generate it as the container for a materialized view with refresh on commit. First I need some materialized view logs so that I can do a fast refresh:


create materialized view log on tt
with
        rowid, primary key
        (object_type, object_name, owner)
including new values
;

create materialized view log on tv
with
        rowid, primary key
        (object_type, object_name, owner)
including new values
;

I’ve included the primary key in the definition because I happen to want the object_id column in the log – but I could just have included it as a column in the filter list. I’ve included the rowid in the definition because Oracle needs the rowid if it’s going to be able to do a fast refresh. I can now create a materialized view:


create materialized view mv_t
        build immediate
        refresh fast on commit
as
select
        'T' mv_marker,
        rowid rid,
        object_id, object_type, object_name
from    tt
where   owner = 'OUTLN'
union all
select
        'V' mv_marker,
        rowid rid,
        object_id, object_type, object_name
from    tv
where   owner = 'SYSTEM'
;

I’ve taken the option to “build immediate” and specified – most importantly for my needs – “refresh on commit”. You’ll notice I haven’t chosen to “enable query rewrite”; for the purposes of this demo I don’t need that particular feature.

There are two key features to the materialized view that are a little special – first I’ve included the rowid of each source table as a named column in the materialized view; as I mentioned above Oracle will not allow the view to be fast refreshable without the rowid. The second feature is that I’ve introduced a literal value into the view which I’ve named mv_marker; this makes it easy to see which table a row comes from when you query the materialized view … and Oracle needs to see this.

That’s the job done. Just to demonstrate that my materialized view is working as required here’s a little more SQL (followed by the output):


select * from mv_t;

delete from tt where object_name = 'OL$';
update tv set object_name = 'PRODUCT_PRIVILEGES' where object_name = 'PRODUCT_PRIVS';

commit;

select * from mv_t;

=======================================

M RID                 OBJECT_ID OBJECT_TYPE         OBJECT_NAME
- ------------------ ---------- ------------------- --------------------------------
T AAA6tXAAFAAAAEBAAI        471 TABLE               OL$
T AAA6tXAAFAAAAEBAAJ        474 TABLE               OL$HINTS
T AAA6tXAAFAAAAEBAAK        478 TABLE               OL$NODES
V AAA6tWAAFAAAACgABI       8260 VIEW                SCHEDULER_PROGRAM_ARGS
V AAA6tWAAFAAAACgABJ       8261 VIEW                SCHEDULER_JOB_ARGS
V AAA6tWAAFAAAACuAA7      14233 VIEW                PRODUCT_PRIVS

6 rows selected.

2 rows deleted.


1 row updated.


Commit complete.


M RID                 OBJECT_ID OBJECT_TYPE         OBJECT_NAME
- ------------------ ---------- ------------------- --------------------------------
T AAA6tXAAFAAAAEBAAJ        474 TABLE               OL$HINTS
T AAA6tXAAFAAAAEBAAK        478 TABLE               OL$NODES
V AAA6tWAAFAAAACgABI       8260 VIEW                SCHEDULER_PROGRAM_ARGS
V AAA6tWAAFAAAACgABJ       8261 VIEW                SCHEDULER_JOB_ARGS
V AAA6tWAAFAAAACuAA7      14233 VIEW                PRODUCT_PRIVILEGES

5 rows selected.

If you’re wondering why you see “2 rows deleted” but a reduction by just one row in the final output, remember that we’re deleting from table tt but the materialized view holds information about just the subset of tables owned by OUTLN – I happen to have a row in tt that says SYSTEM also owns a table called OL$.

Assistance

If you have trouble working out why your attempts to create a particular materialized view aren’t working, the dbms_mview package has a procedure called explain_mview that may give you enough ideas to work out what you’re doing wrong. For example, here’s how I could find out that I needed a literal column to tag the two parts of my union all view:


@$ORACLE_HOME/rdbms/admin/utlxmv.sql

begin
        dbms_mview.explain_mview (
                q'{
                create materialized view mv_t
                        build immediate
                        refresh fast
                        enable query rewrite
                as
                select  -- 'T' mv_marker,
                        rowid rid,
                        object_id, object_type, object_name from tt
                union all
                select  -- 'V' mv_marker,
                        rowid rid,
                        object_id, object_type, object_name from tv
                }'
        );
end;
/

column cap_class noprint
column related_text format a7
column short_msg format a72
break on cap_class skip 1

select
        substr(capability_name,1,3) cap_class,
        capability_name, possible, related_text, substr(msgtxt,1,70) short_msg
from
        mv_capabilities_table
where
        mvname = 'MV_T'
order by
        substr(capability_name,1,3), related_num, seq
;

The first line calls a supplied script to create a table called mv_capabilities_table in the current schema. The call to dbms_mview.explain_mview passes the text of a “create materialized view” statement to the procedure (there are a couple of variations possible) then, after a couple of SQL*Plus formatting commands I’ve queried the table to see Oracle’s analysis for the statement. (You can tag each call to this procedure using a second parameter that I haven’t bothered to use.)

Here’s the output for the failed attempt above, which has commented out the literals that tag the two parts of the UNION ALL:

CAPABILITY_NAME                POS RELATED SHORT_MSG
------------------------------ --- ------- ------------------------------------------------------------------------
PCT_TABLE                      N   TT      relation is not a partitioned table
PCT_TABLE_REWRITE              N   TT      relation is not a partitioned table
PCT_TABLE                      N   TV      relation is not a partitioned table
PCT_TABLE_REWRITE              N   TV      relation is not a partitioned table
PCT                            N

REFRESH_COMPLETE               Y
REFRESH_FAST                   N
REFRESH_FAST_AFTER_INSERT      N           the materialized view does not have a UNION ALL marker column
REFRESH_FAST_AFTER_INSERT      N           set operator in a context not supported for fast refresh
REFRESH_FAST_AFTER_ONETAB_DML  N           see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ANY_DML     N           see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT               N           PCT FAST REFRESH is not possible if query has set operand query blocks

REWRITE                        Y
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     N           set operator encountered in mv
REWRITE_GENERAL                N           set operator encountered in mv
REWRITE_PCT                    N           general rewrite is not possible or PCT is not possible on any of the d


17 rows selected.

The query manages to split the output into three sections (but that depends on a side-effect in a way that I would normally call bad design): elements relating to “Partition Change Tracking”, elements relating to “Materialized View Refresh” and elements relating to “Query Rewrite”. You’ll notice that the rewrite section tells me that (even though I haven’t chosen to enable it) my view could be enabled to do query rewrite.

Critically, though, this version of the materialized view can’t be fast refreshed, and we see the key reason in the first “Refresh fast after insert” line: “the materialized view does not have a UNION ALL marker column”. That’s how I know I have to include a literal column that has a different value in each of the two parts of the UNION ALL.


Links for 2016-07-11 [del.icio.us]

Categories: DBA Blogs

Decide Query performance

Tom Kyte - Mon, 2016-07-11 23:06
How do we define which query runs slower? That is, how can we say that a query takes more time and requires performance tuning?
Categories: DBA Blogs

find an error at online document

Tom Kyte - Mon, 2016-07-11 23:06
Hi, I found an error in the online documentation while studying Oracle. https://docs.oracle.com/database/121/CWADD/crsref.htm#CWADD91788 I do not know where to submit this, so I write it here; maybe it can help. crsctl config has: Use the crsctl ...
Categories: DBA Blogs

Grant Access on DB_Link to other User

Tom Kyte - Mon, 2016-07-11 23:06
I need to give access to other user on current DB_LINK, can you provide command ? Current Link: sittest Grant to User: Z9XBIQ1_DW Regards Chaudhry
Categories: DBA Blogs

Forcing a query to timeout

Tom Kyte - Mon, 2016-07-11 23:06
I have several PL/SQL stored procedure reports that monitor various aspects of our database environment (ASM, tablespace, backups, alerts, ...). These all have the same framework, loop thru the entries in a table, using a db_link, query the remote sy...
Categories: DBA Blogs

Network-based IMPDP with local SCN?

Tom Kyte - Mon, 2016-07-11 23:06
Hi, I found a procedure for a network based impdp. The procedure sets the actual (flashback) scn of the target system (see below). That's a mistake, isn't it? What will impdp do with this invalid scn? Simply ignore it? Cheers, Markus ...
Categories: DBA Blogs

Creating a Service within IntelliJ IDEA to be used by the Service Registry for Pivotal Cloud Foundry

Pas Apicella - Mon, 2016-07-11 19:59
In this example I am going to show how to use IntelliJ IDEA 15 to create a service application from the IDE to be consumed by the Service Registry service in Pivotal Cloud Foundry (PCF). For more information on this service, view the docs page below.

http://docs.pivotal.io/spring-cloud-services/service-registry/index.html

Service Registry for Pivotal Cloud Foundry® (PCF) provides your applications with an implementation of the Service Discovery pattern, one of the key tenets of a microservice-based architecture. Trying to hand-configure each client of a service or adopt some form of access convention can be difficult and prove to be brittle in production. Instead, your applications can use the Service Registry to dynamically discover and call registered services

1. Start IntelliJ IDEA and either "Create a New project" or add a "New Module" to an existing project.

2. Ensure you select "Spring Initializer" as shown below


3. Click Next

4. Describe your project or module, I normally use Maven and generate a JAR file



5. Click Next

6. At the minimum here we only need to select "Service Registry (PCF)" as shown below for the dependency. Of course you would select other dependencies depending on what the service needs, such as REST, JPA, H2 or MySQL, etc.


7. Click Next

8. Name your new module or project


9. Click Finish

10. Click Finish

11. Your service application must include the @EnableDiscoveryClient annotation on a configuration class. To do that, we simply add the annotation to our main class as follows:


Java Code
  
package pas.au.pivotal.service.hr;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

import javax.annotation.PostConstruct;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication
{
    @Autowired
    private EmployeeRepository employeeRepository;

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    @PostConstruct
    public void init()
    {
        employeeRepository.save(new Employee("pas"));
        employeeRepository.save(new Employee("lucia"));
        employeeRepository.save(new Employee("siena"));
        employeeRepository.save(new Employee("lucas"));
    }
}

12. Set the spring.application.name property in application.yml. It might be an application.properties file, BUT rename it to YML, as I know that works. Below I not only set the application name, I also set the registrationMethod to "route", which is the default, and then turn off security, as it is enabled by default.

spring:
  application:
    name: employee-service

cloud:
  services:
    registrationMethod: route

security:
  basic:
    enabled: false

So that's all we really need to do here. Of course we will need to add code to our service to do what it needs to do, BUT all the config required to enable this service to automatically register itself with the "Service Registry" in PCF is done.

13. Before we deploy this to our PCF instance, we have to be sure we have a "Service Registry" service created using the CF CLI; as shown below, mine is already created.


14. Create a manifest.yml file for the service to be deployed. Notice how it binds to the service registry "apples-service-registery"; this will ensure the application automatically gets registered with the Service Registry service on deployment.

---
applications:
- name: apples-employee-service
  memory: 512M
  instances: 1
  host: apples-employee-service-${random-word}
  path: ./target/EmployeeService-0.0.1-SNAPSHOT.jar
  services:
    - apples-service-registery

15. Push the service application to PCF as shown below


.....


16. Log in to your PCF instance Apps Manager UI (in this demo I am using the PWS instance, run.pivotal.io), find your "Service Registry" service and click on it as shown below



17. Click on the "Manage" link as shown below


18. Verify your service is registered as shown below


More Information

http://docs.pivotal.io/spring-cloud-services/service-registry/index.html

https://docs.pivotal.io/spring-cloud-services/service-registry/resources.html

http://docs.pivotal.io/spring-cloud-services/service-registry/writing-client-applications.html
Categories: Fusion Middleware

Application Management Pack 13.1.1.1 for EM 13c Now Available

Steven Chan - Mon, 2016-07-11 15:24

Application Management Pack (AMP) 13.1.1.1 for Oracle E-Business Suite is now available.  This E-Business Suite plug-in for Oracle Enterprise Manager Cloud Control 13c can be used to manage Oracle E-Business Suite 11.5.10.2, 12.0, 12.1, and 12.2 environments.

What's new in this release?

  • Enterprise Manager 13c Compatibility
EM 13c includes several enhancements, including a new Fusion Middleware technology stack, the Alta skin user interface and Edition-Based Redefinition (EBR) support.


  • Hybrid Cloud Management
Monitor and manage E-Business Suite environments on Oracle Cloud and on-premises. These capabilities are delivered through the Enterprise Manager command line interface (EMCLI) to manage, provision, migrate, back up, restore, clone, patch, and lift-and-shift E-Business Suite on Oracle Cloud. 
  • System Management
    • Real User Experience Insight (RUEI)  regions integrated within E-Business Suite Summary Dashboard
    • Automated host aliasing

  • Change Management
    • Patch Recommendations: Deploy recommended E-Business Suite technology stack patches, including Database and WebLogic Server patches, using EM patch plans. 
    • Customization Management: View the inventory of customizations in an Oracle E-Business Suite environment. View or download a spreadsheet of discovered customizations by customization type. Customization Discovery & Reporting process now includes discovery of database objects. 
    • Cloning: Save the cloning interview process as templates that can be reused with the Smart Clone for R12 (12.2.x) procedure, and add custom parameters to the cloning procedure. 

References

Downloads
The Oracle Application Management Suite for Oracle E-Business Suite can be downloaded and installed by using the Self Update feature of Oracle Enterprise Manager.

Related Articles

Categories: APPS Blogs

SharePlex Compare and Repair commands

Yann Neuhaus - Mon, 2016-07-11 13:31

Reading Franck Pachot’s blog about comparing source and target in a Dbvisit replication (http://blog.dbi-services.com/compare-source-and-target-in-a-dbvisit-replication/), I decided to write a small article about how we can do the same thing with SharePlex. Indeed, SharePlex provides built-in commands to compare and repair synchronization.
Below is our configuration file. We are just replicating table article to article_rep. Both tables contain 2000000 rows.

datasource:o.SPLEXDB

#source tables target tables routing map
titi.article titi.article_rep atlasrep2@o.SPLEXSTR2

From the source let’s insert 2 rows to verify that replication is working fine.

SQL> insert into article values (2000001,'2000001_designation',2000000001);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into article values (2000002,'2000002_designation',2000000002);

1 row created.

SQL> commit;

Commit complete.


SQL> select * from article where idart in (2000001,2000002);

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000001 2000001_designation            2000000001
   2000002 2000002_designation            2000000002

From the target let’s verify that the 2 rows are replicated

SQL> select * from article_rep where idart in (2000001,2000002);

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000001 2000001_designation            2000000001
   2000002 2000002_designation            2000000002

To compare source and target in SharePlex, the command compare is used. Let’s see a demonstration.
From the source 

sp_ctrl (atlas:2104)>  compare titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

  comparing 1 of 1 objects

  compare started; job id 7

sp_ctrl (atlas:2104)>

And now let’s see the status of our replication

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 7
PID       : 8644
Host      : atlas.localdomain
Started   : 06-JUL-16 11:34:48
Job Type  : Compare
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     In Sync           N/A    2000002   100       0:31


sp_ctrl (atlas:2104)>

We see that the tables are in sync. When running the compare command, SharePlex obtains a brief exclusive lock on the source table to get read consistency for its row selection.
On the target system, SharePlex obtains an exclusive lock on the target table and retains the lock for the duration of the comparison of that table.
Now let’s put our replication out of sync.
From the target, let’s delete a row (note that we have one-directional replication only, from source to target).

SQL> delete from article_rep where idart=2000002;

1 row deleted.

SQL> commit;

Commit complete.

SQL>

And from the source let’s insert a row

SQL> insert into article values (2000003,'2000003_designation',2000000003);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from article where idart=2000003;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000003 2000003_designation            2000000003


SQL> select count(*) from article;

  COUNT(*)
----------
   2000003

SQL>

From the target we can verify that the last row with idart=2000003 is present, but the number of rows is now different between the 2 tables. Indeed, we deleted the row with idart=2000002 on the target and this deletion is not replicated back to the source. The replication is now out of sync.

SQL> select * from article_rep where idart=2000003;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000003 2000003_designation            2000000003

SQL> select count(*) from article_rep;

  COUNT(*)
----------
   2000002

Let’s run the compare command again on the source and see the output now.

sp_ctrl (atlas:2104)> compare titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

We can see that the status is now Out Sync (note that the comparison of 2000000 rows was very quick).

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 9
PID       : 12583
Host      : atlas.localdomain
Started   : 06-JUL-16 13:24:48
Job Type  : Compare
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     Out Sync          N/A    2000003   100       0:25

To bring the replication back in sync, we just have to use the repair command. This command is magic.

sp_ctrl (atlas:2104)> repair  titi.article to titi.article_rep at atlasrep2@o.SPLEXSTR2 for o.SPLEXDB

  repairing 1 of 1 objects

  repair started; job id 10

In the log file on the target, we can see that SharePlex uses ORA_HASH to compare:

[oracle@atlasrep2 log]$ less declt_10-1_SPLEXDB_192.168.1.40_p9883.log
declt    2016-07-06 13:36:45.228937 9883 1352263552 NOTE: setting up HASH column info (de_table_setup_ora_hash_compare,L2915)
declt    2016-07-06 13:36:45.228971 9883 1352263552 Key Compare : Off
declt    2016-07-06 13:36:45.229009 9883 1352263552 Job ID      : 10
declt    2016-07-06 13:36:45.229017 9883 1352263552 Source Table: "TITI"."ARTICLE"
declt    2016-07-06 13:36:45.229122 9883 1352263552 Source Node : atlas.localdomain
declt    2016-07-06 13:36:45.229130 9883 1352263552 Target Table: "TITI"."ARTICLE_REP"
declt    2016-07-06 13:36:45.229135 9883 1352263552 Target Route: atlasrep2@SPLEXSTR2
declt    2016-07-06 13:36:45.229140 9883 1352263552 Batch Size  : 100000 rows
declt    2016-07-06 13:36:45.229145 9883 1352263552 Repair      : On (2 Pass)
declt    2016-07-06 13:36:45.229150 9883 1352263552 sp_declt PID: 9883
declt    2016-07-06 13:36:45.229155 9883 1352263552 Read Buffer Size : 1 mbytes
declt    2016-07-06 13:36:45.237727 9883 1352263552 Current status: ready to fetch batch,of rows; including the order by sorting.
declt    2016-07-06 13:36:45.237782 9883 1352263552 running sql statement : (de_select_prepare_to_fetch,L1384)
declt    2016-07-06 13:36:45.237795 9883 1352263552   SELECT /*+ PARALLEL (A,2) */ ROWID,ORA_HASH("IDART"||'~'||"PRIX"), ORA_HASH("DESIGNATION") FROM "TITI"."ARTICLE_REP" A ORDER BY 2, 3
declt    2016-07-06 13:36:54.064711 9883 1352263552 Current status: got batch of rows
declt    2016-07-06 13:37:04.527252 9883 1352263552 NOTE: total number of rows fetched = 2000002 (../src/deqtr/de_select.cpp,L1630)
declt    2016-07-06 13:37:04.533329 9883 1352263552 Notice: starting the second phase in the repair process
declt    2016-07-06 13:37:04.616704 9883 1352263552 Leaving de_msg_clt_preCompareMessaging1_5,L1297
declt    2016-07-06 13:37:06.659513 9883 1352263552 Leaving de_msg_clt_preCompareMessaging2,L614
declt    2016-07-06 13:37:06.659716 9883 1352263552 No new key column order, using default column order for TITI.ARTICLE_REP
declt    2016-07-06 13:37:06.721957 9883 1352263552 Current status: ready to fetch batch,of rows; including the order by sorting.
declt    2016-07-06 13:37:06.721996 9883 1352263552 running sql statement : (de_select_prepare_to_fetch,L1384)

When issuing the show compare command again, we can see that the status is now Repaired.

sp_ctrl (atlas:2104)> show compare

   The syntax to get status for the compare command has changed
   Instead of 'show compare' use the 'compare status' or 'repair status' command


Job ID    : 10
PID       : 13008
Host      : atlas.localdomain
Started   : 06-JUL-16 13:36:37
Job Type  : Repair
Status    : Done - 1 object completed

ID     Tablename                            Status     Time       Total Rows %Comp Total Time
------ ------------------------------------ ---------- ---------- ---------- ----- ----------
1      "TITI"."ARTICLE"                     Repaired          N/A    2000003   100       0:29

From the source we can verify the number of rows

SQL> select count(*) from article;

  COUNT(*)
----------
   2000003

From the target we also can verify that we have the same number of rows and that the row deleted (idart=2000002) is now present.

SQL> select count(*) from article_rep;

  COUNT(*)
----------
   2000003


SQL> select * from article_rep  where idart=2000002;

     IDART DESIGNATION                          PRIX
---------- ------------------------------ ----------
   2000002 2000002_designation            2000000002
Conclusion

We note that SharePlex provides powerful compare/repair commands to fix out-of-sync replications. Just be aware that while these commands run, the tables are locked (briefly on the source and longer on the target), meaning that no transactions are possible on them during this period.
Note that the compare and repair commands can also be used for an initial load (not recommended because of the locks).


 

The article SharePlex Compare and Repair commands appeared first on Blog dbi services.

How to Avoid The “ORA-00979 not a group by expression” Error

Complete IT Professional - Mon, 2016-07-11 06:00
Have you tried to run a query and got the “ORA-00979: not a group by expression” error? Learn what it is and how to resolve the error in this article. Generating the ORA-00979 Error This error can be generated when you run SELECT queries on your database. Here’s an example of a query that will […]
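
For readers seeing only this excerpt in the aggregator, here is a minimal illustration of the rule behind the error (it assumes the standard HR sample schema): every non-aggregated expression in the SELECT list must also appear in the GROUP BY clause.

-- raises ORA-00979: department_id is selected but only job_id is grouped
select department_id, count(*) from employees group by job_id;

-- works: every non-aggregated column in the select list is in the group by
select department_id, count(*) from employees group by department_id;
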
Categories: Development

PeopleSoft Integration Broker (IB) Security

Securing the PeopleSoft Integration Broker (IB) ensures the security of messaging both within PeopleSoft applications and among third-party systems. The following are several of the key tasks that Integrigy performs during our PeopleSoft security configuration assessments - take a look at your settings today:

  • Ensure all inbound requests are required to use Secure Socket Layer/Transport Layer Security (SSL/TLS)
  • Ensure that the default PSKEY password has been changed - the PSKEY keystore contains all root and node certificates used by the Integration Gateway and PIA. Using the default or a weak password is not best practice.
  • Ensure the IB node ANONYMOUS is appropriately privileged. If IB connections do not specify a node name and credentials, IB will try to use the ANONYMOUS node and the “default user ID” tied to that node. This default user must not be a highly privileged user and should be granted the least privilege possible.
  • Review all other nodes for permissions appropriate for the business services supported by the node. Best practice is to use a unique user ID for each node that has permissions only to the required objects or related sets of operations.

The following profile attributes that govern IB activity are also reviewed:

Integration Broker Profile Values

  • IB_PROFILESTATUS: IB Profile Status. If enabled, IB will show performance information. Recommendation: for production or Internet-facing systems, set to off.
  • IB_ENABLELOG: Enables logging. Recommendation: for production or Internet-facing systems, set to off.
  • IB_LOGLEVEL: Log level (if logging is enabled): 1 = standard gateway exception errors; 2 = all errors and warnings (default); 3 = errors, warnings and important information; 4 = errors, warnings, important and standard information; 5 = errors, warnings, important, standard and low-importance information. Recommendation: default of 2.
  • IB_DEPTHLIMIT: Checks for recursion within messages (number of levels) to ensure that messages do not reference themselves. Value between 3 and 50. Recommendation: default of 20.
  • IB_MASTER_OVERRIDE: Determines if Master processing returns statistics in the Output Information section after a Post. Recommendation: for production or Internet-facing systems, set to off.
  • IB_PRE_848: Pre-8.48 Tools release flag. Recommendation: default of N.
  • IB_MULTIACT_DOMAIN: By default, only one domain may be active in the Integration Broker system; however, PeopleSoft provides the option to enable the activation of multiple domains. Recommendation: off unless required.
  • IB_USEIPADDRESS: Determines if the application server URL for a synchronous slave template uses the application server IP address, e.g. the URL format changes from <machine name>:<jolt port> to the IP address. Recommendation: on.

 

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Reset your Datasources

Darwin IT - Mon, 2016-07-11 05:09
In most SOASuite and Oracle ServiceBus projects the Database Adapter is used, and often it is used to invoke PL/SQL functions and procedures. Actually, it's my favorite interaction method with the database, since the re-entrancy of the Database Adapter wizard is best there. Doing a re-entrant update of a select or DML operation when, for instance, a column is added is, simply put, often quite problematic.

But the thing with PL/SQL is that when you update a PL/SQL package, the package state is altered, and when calling it from another session you'll get an 'ORA-04068: existing state of packages has been discarded' error. And this will occur for every connection in the pool of your datasource.
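
A minimal sketch of how that error shows up (the package and its names are purely illustrative): a package that holds session state is executed in one session, recompiled from another, and the next call from the first session is rejected once before the state is re-initialized.

-- session A: create and call a package that keeps state in a package variable
create or replace package pkg_state as
  g_calls number := 0;
  procedure bump;
end;
/
create or replace package body pkg_state as
  procedure bump is begin g_calls := g_calls + 1; end;
end;
/
exec pkg_state.bump   -- session A now holds package state

-- session B: redeploy / recompile the package body
alter package pkg_state compile body;

-- session A: the next call fails once, then the package state is reset
exec pkg_state.bump
-- ORA-04068: existing state of packages has been discarded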

The solution is to reset the datasource. This can be done easily in the WebLogic Administration Console (http://adminserver:port/console). But nowadays you can do it just as easily in Enterprise Manager Fusion Middleware Control. This can be quite convenient for developers, since you often have EM open already because of your SOASuite tests.

To do this, open the WebLogic Domain menu and select the option 'JDBC DataSources':
Select the DataSource you want to reset. For the example I choose the 'EDNDataSource', but you'd probably not do this for that one, but for a custom DataSource:

Click the Control tab, select the DataSource in the table and click Reset:

Reset will make WebLogic recreate all the connections in the DataSource. This prevents the message from popping up multiple times.

The functionality for configuring, monitoring and controlling the DataSource is similar to that in the WebLogic Admin console; only the layout is a little different.

Question on multiple DML in FORALL

Tom Kyte - Mon, 2016-07-11 04:46
Hi, I have below question on FORALL 1) forall 1 .. l_var.count delete tab1 where id=l_var(i); forall 1 .. l_var.count delete tab2 where id=l_var(i); forall 1 .. l_var.count delete tab3 where id=l_var(i); 2)...
Categories: DBA Blogs

Error when import large data from 11.2.0.3 into 11.2.0.4 Oracle Exadata

Tom Kyte - Mon, 2016-07-11 04:46
Hi, We hit this and no clue how it suddenly happen. Issue cronology as below (using Toad version 10.2, 10.5 and 11). 1) Export out from Exadata DB version 11.2.0.3 table A with 4million records - ended successfully. 2) Import into Exadata DB ...
Categories: DBA Blogs

Adding subpartitions

Tom Kyte - Mon, 2016-07-11 04:46
Hello Sir I have a table partitioned on RANGE (daily) (interval partitioning) - there are about 300 partitions . Now I want to add a sub-partition to each of these based on a value in a different column. Should I come up with a script with 300 A...
Categories: DBA Blogs

