Feed aggregator

BPEL 12.2.1.3 Certified for Prebuilt EBS 12.1 SOA Integrations

Steven Chan - Wed, 2018-10-03 12:01

Service Oriented Architecture (SOA) integrations with Oracle E-Business Suite can either be custom integrations that you build yourself or prebuilt integrations from Oracle.  For more information about the differences between the two options for SOA integrations, see this previously-published certification announcement.

The prebuilt BPEL business processes in Oracle E-Business Suite Release 12.1 are:

  • Oracle Price Protection (DPP)
  • Advanced Supply Chain Planning (MSC)
  • Oracle Transportation Management: Oracle Warehouse Management (WMS)
  • Oracle Transportation Management: Oracle Shipping Execution (WSH)
  • Oracle Transportation Management: Oracle Purchasing (PO)
  • Complex Maintenance, Repair & Overhaul (CMRO/AHL)

BPEL integration architecture diagram example


These prebuilt BPEL processes have now been certified with Oracle BPEL Process Manager 12c version 12.2.1.3 (in Oracle Fusion Middleware SOA Suite 12c) for Oracle E-Business Suite Release 12.1.3.

References

Certified Platforms

Oracle SOA Suite 12c (12.2.1.3) is certified to run on any operating system on which Oracle WebLogic Server 12c is certified. For information on the operating systems supported by Oracle SOA Suite, refer to the Oracle Fusion Middleware certification documentation.

Integration with Oracle SOA Suite involves components spanning several different suites of Oracle products. There are no restrictions on which platform any particular component may be installed so long as the platform is supported for that component.

Getting Support

If you need support for the prebuilt EBS BPEL business processes, you can log Service Requests against the Applications Technology Group product family.

Categories: APPS Blogs

taking many days to merge clob column

Tom Kyte - Wed, 2018-10-03 09:46
Hi Tom, I have two tables which have clob data, Trying to merge the clob data from work table to Live table. its taking more and more days.. total data size 72GB. What is the best way to run the merge in this case to complete the task quickly. ...
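
A hedged sketch of the usual first step for this kind of question, assuming hypothetical names (live_t, work_t, id, clob_col): make the merge a single set-based statement and enable parallel DML, rather than processing rows one at a time.

-- Sketch only: table and column names are placeholders.
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ PARALLEL(t 8) */ INTO live_t t
USING work_t w
ON (t.id = w.id)
WHEN MATCHED THEN
    UPDATE SET t.clob_col = w.clob_col
WHEN NOT MATCHED THEN
    INSERT (id, clob_col) VALUES (w.id, w.clob_col);

COMMIT;
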
Categories: DBA Blogs

What is the Oracle recommendation for Oracle RAC database and use of Sophos anti-virus software on LINUX?

Tom Kyte - Wed, 2018-10-03 09:46
What is the Oracle recommendation for Oracle RAC database and use of Sophos anti-virus software on LINUX? I have read nearly every Oracle document but can find nothing but Windows recommendation. Should i use that as a base for exclusions (? Oracle...
Categories: DBA Blogs

Calling a stored procedure to send a mail from after insert trigger

Tom Kyte - Wed, 2018-10-03 09:46
I have a requirement to send mail from an after insert trigger and I am passing the :NEW.MYID to the procedure. The procedure contains a query with aggregate functions and storing to an 'INTO mynvarchar2var' as a generated string to be sent as email...
Categories: DBA Blogs

How to retrieve data from a quarter?

Tom Kyte - Wed, 2018-10-03 09:46
Right now am using the below query to extract the date. FROM c WHERE date <=TO_DATE ('30-SEP-18', 'DD-MON-YY'). Can you suggest me a way where I dont need to hardcode the date like 30-sep-18. Note: the example date is 30 sep 2018 because I'm ...
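
A hedged sketch of one way to avoid the hardcoded literal, assuming the goal is "everything up to the end of the previous quarter": TRUNC(SYSDATE, 'Q') returns the first day of the current quarter, so subtracting one day gives the previous quarter's end.

-- Sketch only: c and date_col are placeholders from the question.
SELECT *
FROM   c
WHERE  date_col <= TRUNC(SYSDATE, 'Q') - 1;
-- On 03-OCT-2018: TRUNC(SYSDATE, 'Q') = 01-OCT-2018, so the predicate
-- resolves to date_col <= 30-SEP-2018 with no hardcoded date.
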
Categories: DBA Blogs

Fatal NI connect error 12170

Tom Kyte - Wed, 2018-10-03 09:46
Dear Tom, Please help to advice weather the below attachment is weather network error or not?? on our production database, lots of such TNS-12535: TNS:operation timed out errors. *************************************************************...
Categories: DBA Blogs

Foreign Key violation not happening

Tom Kyte - Wed, 2018-10-03 09:46
Hi, I have parent table X (Code table), and a child table CUSTOMER. I have a FK constraint enabled on CUSTOMER. But still database is accepting/inserting CDE in table CUSTOMER table which are not present in Table X. When FK Constraint is enab...
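
A hedged diagnostic sketch: an ENABLED foreign key rejects orphan rows on new DML, so the first thing worth checking is the constraint's actual state. A constraint enabled with NOVALIDATE still checks new inserts but tolerates pre-existing orphans; a DISABLED one checks nothing. Table name taken from the question:

-- List the referential constraints on CUSTOMER and their state.
SELECT constraint_name,
       status,        -- ENABLED / DISABLED
       validated      -- VALIDATED / NOT VALIDATED
FROM   user_constraints
WHERE  table_name = 'CUSTOMER'
AND    constraint_type = 'R';
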
Categories: DBA Blogs

The Future of Partitioning

Tom Kyte - Wed, 2018-10-03 09:46
Hi Tom, The 11g release did nicely complete (nearly all) possible combination of composite partitioning schema. The missing hash - hash partitioning schema is probably not very interesting. The question is, does it mean that the development of ...
Categories: DBA Blogs

Oracle GoldenGate: How to replicate compressed table in database

Online Apps DBA - Wed, 2018-10-03 07:58

[Q/A Blog] Oracle GoldenGate: How to replicate a compressed table in the database? Visit: https://k21academy.com/goldengate27 & learn about: ✔ Common queries/issues faced by trainees while supporting a GoldenGate environment ✔ How to troubleshoot the issue ✔ What measures & source settings in the database should be used to handle or prevent the issue raised [Q/A […]

The post Oracle GoldenGate: How to replicate compressed table in database appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Promises and Lightning Components

Rob Baillie - Wed, 2018-10-03 07:56
In 2015, the ECMA specification included the introduction of Promises, and finally (pun intended) the Javascript world had a way of escaping from callback hell and moving towards a much richer syntax for asynchronous processes.

So, what are promises?
In short, it’s a syntax that allows you to specify callbacks that should execute when a function either ’succeeds’ or ‘fails’ (is resolved, or rejected, in Promise terminology).

For many, they're a way of implementing callbacks that makes a little more sense syntactically, but for others they're a new way of looking at how asynchronous code can be structured that reduces the dependencies between its parts and provides you with some pretty clever mechanisms.

However, this article isn’t about what promises are, but rather:

How can Promises be used in Lightning Components, and why would you want to use them?
As with any new feature of Javascript, make sure you double check the browser compatibility to make sure it covers your target browser before implementing anything.

If you want some in depth info on what they are, the best introduction I’ve found is this article on developers.google.com

In addition, Salesforce have provided some very limited documentation on how to use them in Lightning, here.

Whilst the documentation's inclusion can give us hope (Salesforce knows what Promises are and expects them to be used), the documentation itself is pretty slim and doesn't really go into any depth on when you would use them.

When to use Promises
Promises are the prime candidate for use when executing anything that is asynchronous, and there’s an argument to say that any asynchronous Javascript that you write should return a Promise.

For Lightning Components, the most common example is probably when calling Apex.

The standard pattern for Apex would be something along the lines of:


getData : function( component ) {
    let action = component.get( "c.getData" );

    action.setCallback( this, function( response ) {

        let state = response.getState();

        if ( state === "SUCCESS" ) {
            let result = response.getReturnValue();
            // do your success thing
        }
        else if ( state === "INCOMPLETE" ) {
            // do your incomplete thing
        }
        else if ( state === "ERROR" ) {
            // do your error thing
        }
    });
    $A.enqueueAction( action );
}

In order to utilise Promises in such a function you would:
  1. Ensure the function returned a Promise object
  2. Call 'resolve' or 'reject' based on whether the function was successful


getData : function( component ) {
    return new Promise( $A.getCallback(
        ( resolve, reject ) => {

            let action = component.get( "c.getData" );

            action.setCallback( this, function( response ) {

                let state = response.getState();

                if ( state === "SUCCESS" ) {
                    let result = response.getReturnValue();
                    // do your success thing
                    resolve();
                }
                else if ( state === "INCOMPLETE" ) {
                    // do your incomplete thing
                    reject();
                }
                else if ( state === "ERROR" ) {
                    // do your error thing
                    reject();
                }
            });
            $A.enqueueAction( action );
        }
    ));
}

You would then call the helper method in the same way as usual


doInit : function( component, event, helper ) {
    helper.getData( component );
}

So, what are we doing here?

We have updated the helper function so that it now returns a Promise that is constructed with a new function that has two parameters 'resolve' and 'reject'. When the function is called, the Promise is returned and the function that we passed in is immediately executed.

When our function reaches its notional 'success' state (inside the state === "SUCCESS" branch), we call the 'resolve' function that was passed in.

Similarly, when we get to an error condition, we call 'reject'.

In this simple case, you'll find it hard to see where 'resolve' and 'reject' are defined - because they're not. The Promise creates empty functions for you, and it essentially operates as if it wasn't there at all; the functionality hasn't changed.

Aside - if you're unfamiliar with the 'Arrow Function' notation - E.g. () => { doThing() } - then look here or here. And don't forget to check the browser compatibility.
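
For instance, a minimal illustration of the equivalence (nothing Lightning-specific here):

// A classic anonymous function expression...
var doubleItClassic = function( x ) { return x * 2; };

// ...and its arrow function equivalent.
var doubleItArrow = ( x ) => x * 2;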

So the obvious question is.. Why?
What does a Promise give you in such a situation?

Well, if all you are doing is calling a single function that has no dependent children, then nothing. But let's say that you wanted to call "getConfiguration", which called some Apex, and then *only once that was complete* you called "getData".

Without Promises, you'd have 2 obvious solutions:
  1. Call "getData" from the 'Success' path of "getConfiguration".
  2. Pass "getData" in as a callback on "getConfiguration" and call the callback in the 'Success' path of "getConfiguration"
Neither of these solutions is ideal, though the second is far better than the first.

That is - in the first we introduce an explicit dependency between getConfiguration and getData. Ideally, this would not be expressed in getConfiguration, but rather in the doInit (or a helper function called by doInit). It is *that* function which decides that the dependency is important.

The second solution *looks* much better (and is), but it's still not quite right. We now have an extra parameter on getConfiguration for the callback. We *should* also have another callback for the failure path - otherwise we are expressing that only success has a further dependency, which is a partial leaking of knowledge.

Fulfilling your Promise - resolve and reject
When we introduce Promises, we introduce the notion of 'then'. That is, when we 'call' the Promise, we are able to state that something should happen on 'resolve' (success) or 'reject' (failure), and we do it from *outside* the called function.

Or, to put it another way, 'then' allows us to define the functions 'resolve' and 'reject' that will get passed into our Promise's function when it is constructed.

E.g.

We can pass a single function into 'then', and this will be the 'resolve' function that gets called on success.


doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => { helper.getData( component ) } );
}

Or, if we wanted a failure path that resulted in us calling 'helper.setError', we would pass a second function, which will become the 'reject' function.


doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => { helper.getData( component ) }
, () => { helper.setError( component ) } );
}

Aside - It's possible that the functions should be wrapped in a call to '$A.getCallback'. You will have seen this in the definition of the Promise above. This is to ensure that any callback is guaranteed to remain within the context of the Lightning Framework, as defined here. I've not witnessed any problem with not including it, although it's worth bearing in mind if you start to get issues on long running operations.

Now, this solution isn't vastly different to passing the two functions directly into the helper function. E.g. like this:


doInit : function( component, event, helper ) {
helper.getConfiguration( component
, () => { helper.getData( component ) }
, () => { helper.setError( component ) } );
}

And whilst I might say that I personally don't like the act of passing in the two callbacks directly into the function, personal dislike is probably not a good enough reason to use a new language feature in a business critical system.

So is there a better reason for doing it?

Promising everything, or just something
Thankfully, Promises are more than just a mechanism for callbacks: they are a generic mechanism for *guaranteeing* that specified behaviour occurs once a Promise becomes 'settled' (fulfilled or rejected).

When using a simple Promise, we are simply saying that the behaviour should be that the 'resolve' or 'reject' functions get called. But that's not the only option. For example, we also have:

  • Promise.all: will 'resolve' only when *all* of the passed-in Promises resolve, and will 'reject' if and when *any* of the Promises reject.
  • Promise.race: will 'resolve' or 'reject' when the first Promise to respond comes back with a 'resolve' or 'reject'.

Once we add that to the mix, we can do something a little clever...

How about having the component load with a 'loading spinner' that is only switched off when all three calls to Apex respond with success:


doInit : function( component, event, helper ) {
Promise.all( [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ] )
.then( () => { helper.setIsLoaded( component ) } );
}

Or even better - how about we call getConfiguration, then once that’s done we call each of the getData functions, and only when all three of those are finished do we set the flag:


doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
        // The arrow function defers the getData calls until
        // getConfiguration has resolved.
        .then( () => Promise.all( [ helper.getDataOne( component )
                                  , helper.getDataTwo( component )
                                  , helper.getDataThree( component ) ] ) )
        .then( () => { helper.setIsLoaded( component ) } );
}

Or how about - we normally call three functions to get data, unless a flag is set, at which point we want to call a fourth function, and only when all four are complete do we set the flag:


doInit : function( component, event, helper ) {

    helper.getConfiguration( component )
        .then( () => {
            // Build the list of initialisations only once the
            // configuration has been loaded.
            let initialisations = [ helper.getDataOne( component )
                                  , helper.getDataTwo( component )
                                  , helper.getDataThree( component ) ];

            if ( component.get( 'v.runGetDataFour' ) ) {
                initialisations.push( helper.getDataFour( component ) );
            }

            return Promise.all( initialisations );
        })
        .then( () => { helper.setIsLoaded( component ) } );
}

Now, just for a second, think about how you would do that without Promises...
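
As a closing aside, Promise.race was mentioned above but never demonstrated. A hedged sketch of a common use, a client-side timeout around a data call, might look like this ('timeoutAfter' is a hypothetical helper, not part of the framework):


timeoutAfter : function( milliseconds ) {
    // Rejects after the given delay; pairs with Promise.race below.
    return new Promise( $A.getCallback( ( resolve, reject ) => {
        window.setTimeout( () => { reject( 'Timed out' ) }, milliseconds );
    }));
}

doInit : function( component, event, helper ) {
    Promise.race( [ helper.getData( component )
                  , helper.timeoutAfter( 5000 ) ] )
        .then( () => { helper.setIsLoaded( component ) }
             , () => { helper.setError( component ) } );
}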

Join Cardinality

Jonathan Lewis - Wed, 2018-10-03 06:01

Following up my “Hacking for Skew” article from a couple of days ago, Chinar Aliyev has written an article about a method for persuading the optimizer to calculate the correct cardinality estimate without using any undocumented, or otherwise dubious, mechanisms. His method essentially relies on the optimizer’s mechanism for estimating join cardinality when there are histograms at both ends of the join, so I thought I’d write a short note describing the simplest possible example of the calculation – an example where the query is a single-column equi-join with no nulls in either column and a perfect frequency histogram at both ends of the join. (For a detailed description of more general cases I always refer to the work done by Alberto Dell’Era a few years ago.) We start with two data sets that exhibit a strong skew in their data distributions:


execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      n1
from
        generator       v1
;

create table t2
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      n1
from
        generator       v1
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 254'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 254'
        );
end;
/


I’ve generated two tables of 10,000 randomly generated values using the dbms_random.normal() function, but I’ve scaled the value up by a factor of three and taken the absolute value – which has given me a range of 12 distinct integer values with a nicely skewed distribution. Then I’ve gathered stats requesting histograms of up to 254 buckets. Since I’ve tested this only on versions from 11.2.0.4 onwards this means I’ll get a perfect histogram on the n1 columns on both tables.
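
If you want to eyeball the skew before involving the histograms, a quick aggregate (not part of the original test) does the job:


select n1, count(*)
from t1
group by n1
order by n1
;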

Now I’m going run a query that reports the values and frequencies from the two tables by querying user_tab_histograms using a variant of an analytic query I published a long time ago to convert the cumulative frequencies recorded as the endpoint values into simple frequencies. If, for some reason, this query doesn’t run very efficiently in your tests you could always /*+ materialize */ the two factored subqueries (CTEs – common table expressions):


prompt  =======================================================================
prompt  Multiply and sum matching frequencies. An outer join is NOT needed
prompt  because rows that don't match won't contribute to the join cardinality
prompt  =======================================================================

break on report skip 1
compute sum of product on report
column product format 999,999,999

with f1 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'N1'
order by
        endpoint_value
),
f2 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'N1'
order by
        endpoint_value
)
select
        f1.value, f1.frequency, f2.frequency, f1.frequency * f2.frequency product
from
        f1, f2
where
        f2.value = f1.value
;


     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2658       2532    6,730,056
         1       2341       2428    5,683,948
         2       1828       1968    3,597,504
         3       1305       1270    1,657,350
         4        856        845      723,320
         5        513        513      263,169
         6        294        249       73,206
         7        133        117       15,561
         8         40         54        2,160
         9         23         17          391
        10          5          5           25
        11          4          2            8
                                 ------------
sum                                18,746,698

As you can see, the two columns do have a highly skewed data distribution. The pattern of the two data sets is similar though the frequencies aren’t identical, of course. The total I get from this calculation is (I claim) the cardinality (rows) estimate that the optimizer will produce for doing an equi-join on these two tables – so let’s see the test:


set serveroutput off
alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
alter session set statistics_level = typical;
alter session set events '10053 trace name context off';

And the resulting output:

Session altered.
Session altered.


  COUNT(*)
----------
  18746698


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0wxytnyqs4b5j, child number 0
-------------------------------------
select  count(*) from  t1, t2 where  t1.n1 = t2.n1

Plan hash value: 906334482
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:03.23 |      40 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:03.23 |      40 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |     18M|     18M|00:00:02.96 |      40 |  2616K|  2616K| 2098K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |  10000 |  10000 |00:00:00.01 |      20 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |  10000 |  10000 |00:00:00.01 |      20 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")

As we can see, the estimate for the hash join is “18M”, which is in the right ballpark but, in its current format, isn’t entirely helpful, which is why I’ve enabled the 10053 trace to get an exact figure from the trace file. This is what we see:


***********************
Best so far:  Table#: 0  cost: 4.352468  card: 9487.000000  bytes: 28461.000000
              Table#: 1  cost: 378.482370  card: 18467968.000000  bytes: 110807808.000000
***********************

The optimizer’s estimate is exactly the sum of the products of the frequencies of matching values from the (frequency) histogram data. There is a simple rationale for this – it gets the right answer. For each row in t1 with value ‘X’ the (frequency) histogram on t2 tells Oracle how many rows will appear in the join, so multiplying the frequency of ‘X’ in t1 by the frequency of ‘X’ in t2 tells Oracle how many rows the ‘X’s will contribute to the join. Repeat for every distinct value that appears in both (frequency) histograms and sum the results.
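
In symbols, for the set V of values that appear in both frequency histograms, the estimate is simply:

    join cardinality = sum over v in V of ( freq_t1(v) * freq_t2(v) )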

As a refinement on this (very simple) example, let’s delete data from the two tables so that we have rows in t1 that won’t join to anything in t2, and vice versa – then re-gather stats, query the histograms, and check the new prediction. We want to check whether a value that appears in the t1 histogram contributes to the join cardinality estimate even if there are no matching values in the t2 histogram (and vice versa):


delete from t1 where n1 = 4;
delete from t2 where n1 = 6;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 254', no_invalidate=>false)
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for all columns size 254', no_invalidate=>false)

with f1 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'N1'
order by
        endpoint_value
),
f2 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'N1'
order by
        endpoint_value
)
select
        f1.value, f1.frequency, f2.frequency, f1.frequency * f2.frequency product
from
        f1, f2
where
        f2.value = f1.value
;


set serveroutput off
alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
alter session set statistics_level = typical;
alter session set events '10053 trace name context off';

And the output – with a little cosmetic tidying:


856 rows deleted.
249 rows deleted.

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.


     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2658       2532    6,730,056
         1       2341       2428    5,683,948
         2       1828       1968    3,597,504
         3       1305       1270    1,657,350
         5        513        513      263,169
         7        133        117       15,561
         8         40         54        2,160
         9         23         17          391
        10          5          5           25
        11          4          2            8
                                 ------------
sum                                17,950,172


Session altered.
Session altered.


  COUNT(*)
----------
  17950172


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0wxytnyqs4b5j, child number 0
-------------------------------------
select  count(*) from  t1, t2 where  t1.n1 = t2.n1

Plan hash value: 906334482
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:02.89 |      40 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:02.89 |      40 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |     17M|     17M|00:00:02.61 |      40 |  2616K|  2616K| 2134K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |   9144 |   9144 |00:00:00.01 |      20 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |   9751 |   9751 |00:00:00.01 |      20 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")


From the 10053 trace file:
***********************
Best so far:  Table#: 0  cost: 4.340806  card: 9144.000000  bytes: 27432.000000
              Table#: 1  cost: 368.100010  card: 17950172.000000  bytes: 107701032.000000
***********************

You can see from the frequency histogram report that we “lost” values 4 and 6; the total from the report again matches the actual number of rows returned by the query, and the cardinality estimate in the plan is in the right ballpark, with the trace file showing an exact match.

I’ve run this test on 11.2.0.4,  12.1.0.2,  12.2.0.1 and  18.3.0.0 – and there’s an anomaly that appears in 11.2.0.4 (though maybe that should be “disappeared from”): the optimizer’s estimate for the cardinality was 17,952,157 – too large by 1,985. If you read the original document by Alberto Dell’Era you’ll see that he does mention a component of the calculation that seems to be redundant and possibly shouldn’t be there. I haven’t checked the details yet, but maybe that little difference in 11.2.0.4 is exactly the component that shouldn’t be there and isn’t there any more.

Conclusion:

For an incredibly simple class of queries with perfect frequency histograms there’s a very simple way to calculate the cardinality estimate that the optimizer will predict. Match up rows from the two frequency histograms, multiply the corresponding frequencies (making sure you don’t multiply the cumulative frequencies), and sum.

This is, of course, only a tiny step in the direction of seeing how Oracle uses histograms and covers only a type of query that is probably too simple to appear in a production system, but it’s a basis on which I may build in future notes over the next few weeks.

Oracle Cloud Infrastructure (OCI): Networking

Online Apps DBA - Wed, 2018-10-03 01:23

Do you know that “Networking” in OCI is one of the most interesting topics covered in the Oracle Cloud Infrastructure Architect Associate exam (1Z0-932)? Watch the video at https://k21academy.com/oci18 & get an overview of the OCI networking architecture & the components of networking in OCI, such as: ✔ VCN ✔ Subnet & Router ✔ Gateways, DRG, IGW & much […]

The post Oracle Cloud Infrastructure (OCI): Networking appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Foglight: Monitoring solution for databases [Part 01]

Yann Neuhaus - Wed, 2018-10-03 01:00
What is Foglight?

Foglight is a solution from Quest which promises to provide visibility into issues affecting the application and the end-user experience.

The solution also helps you quickly find the root cause of an issue in the application, database, infrastructure, or network, by providing “intuitive workflows”.

Let’s give it a try!

Preparing the installation

Requirements for installing Foglight are:

  • A machine to host the Management Server. Ideally dedicated
  • Administrator or root access to all machines requiring a Foglight agent
  • An administrator password for Foglight
  • A user account on the machine where you are installing Foglight
  • The IATEMPDIR environment variable is set to a location with sufficient space for installer self-extraction
Architecture

Foglight requires 2 components:

  • a Management Server: the data collection and processing server
  • a database repository: either the PostgreSQL database embedded in the installation process (Standard installation) or a supported external database: MySQL, Oracle, PostgreSQL or Microsoft SQL Server (Custom installation)
    • If you chose the embedded database, it is automatically stopped or started with the Management Server
    • You can start with the embedded database and then migrate to an external one. The procedure is available here
Important considerations
  • For this test I chose to download and install Foglight with the embedded PostgreSQL database
  • I will use the 45 days trial license which is by default activated at the installation. It is possible to install the license during the installation if you perform the custom install
  • I will perform a silent installation, given that the Foglight installer can be started in command-line mode using either console mode or silent mode
Installation

After unzipping the downloaded file, we can adjust the installation parameters according to our needs. This is done by editing the installation parameter file (a description of each parameter can be found here):


[foglight@mgt-server Installers]$ egrep -v "^#|^$" fms_silent_install.properties
INSTALLER_UI=SILENT
USER_INSTALL_DIR=/foglight/app
FMS_LICENSE_AGREEMENT=yes
FMS_SERVICE=false
FMS_UPGRADE=1
FMS_ADMIN_PASSWORD=foglight
FMS_HTTPS_ONLY=0
FMS_HA_MODE=0
FMS_DB_USER=foglight
FMS_DB_USER_PASSWORD=foglight
FMS_DB=embedded
FMS_DB_HOST=127.0.0.1
FMS_DB_PORT=15432
FMS_DB_SETUPNOW=1
FMS_RUN_NOW=false
FMS_CLUSTER_MCAST_PORT=45566
FMS_HTTP_PORT=8080
FMS_HTTPS_PORT=8443
FMS_FEDERATION_PORT=1099
FMS_QP5APP_PORT=8448
FMS_SERVICE_LINUX_ENABLED=0
FMS_SERVICE_LINUX_VALID_PLATFORM=false

Then we can run the installation in silent mode as shown below:


[foglight@mgt-server Installers]$ ./foglight-5.9.2-foglightfordatabaseslinux-x86_64.bin -i silent -f fms_silent_install.properties
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Preparing SILENT Mode Installation...

===============================================================================
Foglight 5.9.2 (created with InstallAnywhere by Macrovision)
-------------------------------------------------------------------------------

===============================================================================
Installing...
-------------

[==================|==================|==================|==================] [------------------|------------------|------------------|------------------]

Installation Complete.

At this stage the installation should have succeeded. If that is not the case, have a look at the log files below, located in the user's home directory:

[foglight@mgt-server ~]$ ll ~
total 8
-rw-rw-r-- 1 foglight foglight 573 Oct 1 14:47 Foglight_5.9.3_Install_2018-10-01_144723_001.log
-rw-rw-r-- 1 foglight foglight 4026 Oct 1 14:47 Foglight_5.9.3_InstallLog.log

Start, Stop and Login

Start

Now we can start our installation:

[foglight@mgt-server app]$ fms -d
2018-10-01 15:00:08.000 INFO [native] Attempting to start Foglight as a daemon.
The startup may take some time to complete. Please check the log file for more
information. Use the '--stop' command line option to shut down a running
daemon.
2018-10-01 15:00:08.000 INFO [native] Daemon process for 'Foglight' started.

Then we can check the running processes. There is one process for the Management Server and several for the PostgreSQL database, since this is an embedded installation:

[foglight@mgt-server app]$ ps -ef | grep foglight
foglight 23601 1 74 23:01 pts/0 00:02:22 Foglight 5.9.2: Foglight Daemon
foglight 23669 1 0 23:01 pts/0 00:00:00 /foglight/app/postgresql/bin/postgres -D /foglight/app/state/postgresql-data --port=15432
foglight 23670 23669 0 23:01 ? 00:00:00 postgres: logger process
foglight 23672 23669 0 23:01 ? 00:00:00 postgres: checkpointer process
foglight 23673 23669 0 23:01 ? 00:00:00 postgres: writer process
foglight 23674 23669 0 23:01 ? 00:00:00 postgres: wal writer process
foglight 23675 23669 0 23:01 ? 00:00:00 postgres: autovacuum launcher process
foglight 23676 23669 0 23:01 ? 00:00:00 postgres: stats collector process
foglight 23687 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48463) idle
foglight 23688 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48464) idle
foglight 23689 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48465) idle
foglight 23690 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48466) idle
foglight 23691 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48467) idle
foglight 23692 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48468) idle
foglight 23693 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48469) idle
foglight 23694 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48470) idle
foglight 23695 23669 1 23:02 ? 00:00:03 postgres: foglight foglight 127.0.0.1(48471) idle
foglight 23853 23669 1 23:04 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48474) idle
foglight 23868 23601 47 23:04 pts/0 00:00:11 FoglightAgentManager 5.9.2: FglAM /foglight/app/fglam/state/default on server
foglight 23876 23868 0 23:04 ? 00:00:00 Quest Application Watchdog 5.9.5: Monitoring PID 23868
foglight 23943 1 0 23:04 pts/0 00:00:00 Quest Application Relauncher 5.9.5: /foglight/app/fglam/bin/fglam

Another option is to start the Management Server using initialization scripts. This is particularly useful when you want the Management Server to start automatically after the host has been rebooted:

[root@mgt-server ~]# cp /foglight/app/scripts/init.d/Linux/foglight /etc/init.d/
[root@mgt-server ~]# ll /etc/init.d/foglight
-rwxr-xr-x 1 root root 2084 Oct 1 15:13 /etc/init.d/foglight
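
Note that copying the script alone doesn't register it with the init system. On a RHEL-style SysV setup (an assumption here) you would also enable it:

[root@mgt-server ~]# chkconfig --add foglight
[root@mgt-server ~]# chkconfig foglight on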

Login

Given the parameters provided in the installation file, I can reach the web console at https://192.168.56.100:8443. My user/password is the default foglight/foglight, and here I am:

(screenshot: the Foglight web console after login)

Stop

[foglight@mgt-server ~]$ export PATH=$PATH:/foglight/app/bin

[foglight@mgt-server ~]$ fms --stop
2018-10-01 15:15:17.000 INFO [native] Sending stop request to 'Foglight'
process running in /foglight/app/state (pid 12570).
2018-10-01 15:15:17.000 INFO [native] Shutdown request transmitted.

After a few seconds you can observe that the PostgreSQL database and the Management Server are down:

[foglight@mgt-server ~]$ ps -ef | grep foglight
root 13182 2656 0 15:15 pts/0 00:00:00 su - foglight
foglight 13183 13182 0 15:15 pts/0 00:00:00 -bash
foglight 13251 13183 0 15:15 pts/0 00:00:00 ps -ef
foglight 13252 13183 0 15:15 pts/0 00:00:00 grep --color=auto foglight

I hope this helps and please do not hesitate to contact us for more details.

 

The post Foglight: Monitoring solution for databases [Part 01] appeared first on the dbi services blog.

Get the Real-world Advice You Need—Add Oracle Support Experts to Your Schedule

Chris Warticki - Tue, 2018-10-02 19:01
Oracle Support has the expertise and resources to help you optimize and innovate within your business, whether you're an executive leader, enterprise resource manager, database admin, IT service engineer, or anything in between.
Visit us at Oracle OpenWorld 2018 in San Francisco, October 22 to 25, and discover tools, knowledge, and best practices for getting the most from your Oracle solutions—on-premises, in the cloud, or a hybrid of both.
Come and engage with Oracle Support and Services leaders and engineers at Oracle OpenWorld.
We can help you discover available support resources and see what's coming next for your Oracle Cloud and on-premises solutions.
See sessions and demos from:
  • Oracle Support
  • Oracle Advanced Customer Services
  • Oracle University
  • And more!
Connect with support experts across Oracle’s product portfolio and gain valuable insights at the Oracle Support Stars Bar, located on the third floor of Moscone West.
Support personnel are available for walk-in consultations, or you can schedule a one-on-one engagement ahead of time to fit your event schedule.
Attend a 15-minute Mini-Briefing session on key topics at our Mini-Briefing Center (adjacent to the Stars Bar) and enter a drawing for a chance to win a prize.
Learn about resources to help drive productivity and proactively avoid business obstacles.
Oracle Support offers dozens of dedicated conference sessions for your specific products and solutions, with support tricks and tips for Oracle Database, Oracle E-Business Suite, Oracle Fusion Applications, and more.
Many Oracle Support sessions have historically been some of the best-attended at OpenWorld, with a focus on practical, real-world expertise that you can take home and benefit from immediately.
Oracle Support's proactive program helps organizations adopt and integrate the latest product capabilities, streamline support resolution, and optimize systems and software performance. Oracle Support annually recognizes customers and partners who have excelled in driving business value by adopting Oracle's proactive support offerings.
The Proactive Support Individual Champion awards are presented at the Stars Bar on Monday, October 22, at 1:00 p.m. Join us for the award presentation and find out how real-world peers have had success using Proactive tools and resources.
Join us for the annual My Oracle Support Monday Mix, our unique customer appreciation event that complements your Oracle OpenWorld experience and offers another dimension to the sessions and demos you'll attend all week.
Connect and engage with Oracle Support executives and engineers on Monday, October 22 from 6:15 to 8:30 p.m. at Fang Restaurant, 660 Howard Street in San Francisco (just a three-minute walk from Moscone).
Admission is free for Oracle Support customers with an Oracle OpenWorld badge and photo ID.
Oracle Support has provided award-winning service for more than 40 years, and continues to help our more than 430,000 customers find new possibilities to fuel their success and protect their technology investments. We look forward to seeing you at Oracle OpenWorld 2018.
Oracle Support
Trusted | Secure | Comprehensive

Migrating spatial data from SQL Server to Oracle

Tom Kyte - Tue, 2018-10-02 15:26
Hi, I would want to migrate spatial columns of data from SQL server to Oracle database server. Can you please provide how to begin. The sql developer tool does not support migration of spatial data. Could you provide appropriate tool/steps necessa...
Categories: DBA Blogs

Compressing LOB data increases table size

Tom Kyte - Tue, 2018-10-02 15:26
Hello, I think this a more conceptual question: In our core production database system, we have a pretty big table that weights 15,5TB. Of these, about 14.4TB are XML Data stored in a LOB column. We decided to reduce the size of the database...
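
A hedged sketch of the direction such questions usually take, assuming the LOB can be rebuilt as a SecureFiles LOB (SecureFiles compression requires the Advanced Compression option; names are placeholders):

-- Sketch only: big_table and xml_col are placeholders.
ALTER TABLE big_table
    MOVE LOB (xml_col)
    STORE AS SECUREFILE (COMPRESS HIGH);
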
Categories: DBA Blogs

Delete on billions record table

Tom Kyte - Tue, 2018-10-02 15:26
Hi Chris, Hi Connor, Appology for puttingy query in theoritical manner, but I would request you to please suggest something which I can try in below scenario. There is requirement when I need to cleanup a table (e.g. orders) having approx. 25 b...
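
A hedged sketch of the approach usually suggested for removing most of a very large table: rather than deleting billions of rows, create a table of the rows to keep, then swap it in (predicate and names are placeholders):

-- Sketch only: the keep-predicate and names are placeholders.
CREATE TABLE orders_keep
NOLOGGING PARALLEL 8
AS
SELECT * FROM orders
WHERE  order_date >= DATE '2018-01-01';

-- Then rebuild indexes and constraints on orders_keep,
-- rename the old table away, and rename orders_keep to orders.
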
Categories: DBA Blogs

Database Password Change

Tom Kyte - Tue, 2018-10-02 15:26
Hello Experts, I am just curious about changing passwords via cmd sql for SYS user. I issued the below command: alter user SYS identified by new_password; commit; Then restart the instance and OS. When I tried to use the new password, it ...
Categories: DBA Blogs

Regarding Undo tablespace

Tom Kyte - Tue, 2018-10-02 15:26
Which background process writes the copy of the "old image" from the "db buffer cache" to the undo segment of the undo tablespace?
Categories: DBA Blogs
