Feed aggregator

E-rows / A-rows

Jonathan Lewis - Wed, 2019-12-04 07:17

This note was prompted by an error I made at the UKOUG TechFest19 yesterday. It’s fairly well-known that when you read an execution plan that includes the rowsource execution stats – so you get the E-rows (estimated) and A-rows (Actual) reported – then a sensible check of the quality of the optimizer’s calculations is to compare the estimates and actuals allowing for the fact that the E-rows is “per start” and the A-rows is “cumulative”, so A-rows = E-rows * Starts.

The error I made yesterday was to forget that this relationship isn’t always true. In particular, partitioning and parallel query introduced the need for a little flexibility in reading the numbers – which I’ll demonstrate with a couple of simple examples running under 12.2.0.1.


rem
rem     Script:         estimate_actual.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem

create table pt_composite_1 (
        id,
        grp,
        small_vc,
        padding
)
nologging
partition by range(id) 
subpartition by hash (grp)
subpartitions 4
(
        partition p1 values less than (  4000),
        partition p2 values less than (  8000),
        partition p3 values less than ( 16000),
        partition p4 values less than ( 32000),
        partition p5 values less than ( 64000),
        partition p6 values less than (128000)
)
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                            id,
        trunc(rownum/100)                                 grp,
        cast(to_char(trunc(rownum/20)) as varchar2(10))   small_vc,
        cast(rpad('x',100) as varchar2(100))              padding
from
        generator       g1,
        generator       g2
where 
        rownum <= 1e5 -- > comment to avoid WordPress format issue
/

create table t3 
nologging pctfree 80
storage (initial 1M next 1M)
as
select * from pt_composite_1
/

All I’ve done is create a couple of tables with 100,000 rows each – and now I’m going to count the rows and see what I get from the execution plans with rowsource execution stats enabled:


set serveroutput off
alter session set statistics_level = all;

prompt  =================
prompt  Partition effects
prompt  =================

select count(id) from pt_composite_1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  ================
prompt  Parallel effects
prompt  ================

select /*+ parallel (t3 4) */ count(id) from t3;
select * from table(dbms_xplan.display_cursor(null,null,'allstats'));

With a little cosmetic tidying, here are the two execution plans (note that I haven’t used the “last” format option when reporting the parallel plan):


=================
Partition effects
=================

select count(id) from pt_composite_1

-------------------------------------------------------------------------------------------------
| Id  | Operation            | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                |      1 |        |      1 |00:00:00.04 |    1866 |
|   1 |  SORT AGGREGATE      |                |      1 |      1 |      1 |00:00:00.04 |    1866 |
|   2 |   PARTITION RANGE ALL|                |      1 |    100K|    100K|00:00:00.04 |    1866 |
|   3 |    PARTITION HASH ALL|                |      6 |    100K|    100K|00:00:00.04 |    1866 |
|   4 |     TABLE ACCESS FULL| PT_COMPOSITE_1 |     24 |    100K|    100K|00:00:00.04 |    1866 |
-------------------------------------------------------------------------------------------------


================
Parallel effects
================

select /*+ parallel (t3 4) */ count(id) from t3

------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |      1 |        |      1 |00:00:00.04 |      20 |      0 |
|   1 |  SORT AGGREGATE        |          |      1 |      1 |      1 |00:00:00.04 |      20 |      0 |
|   2 |   PX COORDINATOR       |          |      1 |        |      4 |00:00:00.04 |      20 |      0 |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
|   4 |     SORT AGGREGATE     |          |      4 |      1 |      4 |00:00:00.11 |    8424 |   7692 |
|   5 |      PX BLOCK ITERATOR |          |      4 |    100K|    100K|00:00:00.11 |    8424 |   7692 |
|*  6 |       TABLE ACCESS FULL| T3       |     61 |    100K|    100K|00:00:00.06 |    8424 |   7692 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access(:Z>=:Z AND :Z<=:Z)

As you can see, the lines specifying partition selection report E-Rows for the whole table, not for any partition-level approximation, so for operations 3 and 4 we shouldn’t multiply Starts by E-rows to compare with A-rows. (Starts = 6 for operation 3 because we have 6 range partitions, and Starts = 24 for operation 4 because at the lowest level we have a total of 24 data segments – 6 range partitions, each with 4 hash subpartitions.)

For the parallel query we see the same pattern – every parallel slave reports the expected total number of rows, and every “block iterator” (rowid range) reports the expected total number of rows. Again we see that using multiplication to compare E-rows and A-rows would not be valid.

In fact it’s not just partitioning and parallelism that can cause confusion. Even something as simple as a serial nested loop join has a couple of surprises (largely thanks to the evolution of the mechanics – without a matching adjustment to the execution plans – over time). Here’s a script to generate a couple of tables, which we will then join – hinting various mechanisms for the nested loop.


create table t1
as
select
        rownum           id,
        mod(rownum,100)  n1,
        cast(lpad(rownum,20) as varchar2(20)) v1 
from
        dual
connect by
        level <= 1000 -- > comment to avoid WordPress format issue
;

create table t2
as
select  * from t1
union all
select  * from t1
union all
select  * from t1
;

create index t2_i1 on t2(id);

It’s not a subtle test – as you can see we have 3 rows in table t2 for every row in t1. So let’s pick some t1 rows and join to t2 on id. Again it’s 12.2.0.1:

set serveroutput off
alter session set statistics_level = all;

prompt  ==============================
prompt  Nested loop join (traditional)
prompt  ==============================

select
        /*+ 
                leading(t1 t2) use_nl_with_index(t2 t2_i1) 
                opt_param('_nlj_batching_enabled', 0)
                no_nlj_prefetch(t2)
        */
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n1 = 4
and
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  ==============================
prompt  Nested loop join with prefetch
prompt  ==============================

select
        /*+ 
                leading(t1 t2) use_nl_with_index(t2 t2_i1) 
                nlj_prefetch(t2) 
        */
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n1 = 4
and
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  ==============================
prompt  Nested loop join with batching
prompt  ==============================

select
        /*+ 
                leading(t1 t2) use_nl_with_index(t2 t2_i1) 
                nlj_batching(t2) 
        */
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n1 = 4
and
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Here are the three plans, with a couple of comments after each – all three queries returned the same 30 (= 10 * 3) rows.


==============================
Nested loop join (traditional)
==============================

--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |     30 |00:00:00.01 |      60 |
|   1 |  NESTED LOOPS                        |       |      1 |     30 |     30 |00:00:00.01 |      60 |
|*  2 |   TABLE ACCESS FULL                  | T1    |      1 |     10 |     10 |00:00:00.01 |      15 |
|   3 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     10 |      3 |     30 |00:00:00.01 |      45 |
|*  4 |    INDEX RANGE SCAN                  | T2_I1 |     10 |      3 |     30 |00:00:00.01 |      15 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."N1"=4)
   4 - access("T2"."ID"="T1"."ID")

This is the original nested loop structure (apart from the “batched” option that appeared in 12c) and it follows the rule/guideline:

  • Operation 2 operates once and returns the 10 rows predicted.
  • Operation 3 is started 10 times by operation 1, with a prediction of 3 rows per start – and the actual comes out at 30 rows.
  • Operation 4 is started 10 times (once for each start of operation 3), with a prediction of 3 rowids per start – and the actual comes out at 30 rows.

==============================
Nested loop join with prefetch
==============================

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T2    |      1 |      3 |     30 |00:00:00.01 |      60 |
|   2 |   NESTED LOOPS                      |       |      1 |     30 |     30 |00:00:00.01 |      30 |
|*  3 |    TABLE ACCESS FULL                | T1    |      1 |     10 |     10 |00:00:00.01 |      15 |
|*  4 |    INDEX RANGE SCAN                 | T2_I1 |     10 |      3 |     30 |00:00:00.01 |      15 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."N1"=4)
   4 - access("T2"."ID"="T1"."ID")
 

Again, in the order of rowsource generation:

  • Operation 3 starts once with a prediction of 10 rows and the rule works.
  • Operation 4 is started 10 times by operation 2, with a prediction of 3 rows (rowids) per start, and the rule works.
  • Operation 2 was started once by operation 1, with a prediction of 30 rows (rowids), and the rule works.
  • Operation 1 starts once, but the prediction is reported as the value you would have got from the original NLJ shape – and breaks the rule.

==============================
Nested loop join with batching
==============================

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     30 |00:00:00.01 |      60 |
|   1 |  NESTED LOOPS                |       |      1 |     30 |     30 |00:00:00.01 |      60 |
|   2 |   NESTED LOOPS               |       |      1 |     30 |     30 |00:00:00.01 |      30 |
|*  3 |    TABLE ACCESS FULL         | T1    |      1 |     10 |     10 |00:00:00.01 |      15 |
|*  4 |    INDEX RANGE SCAN          | T2_I1 |     10 |      3 |     30 |00:00:00.01 |      15 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T2    |     30 |      3 |     30 |00:00:00.01 |      30 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."N1"=4)
   4 - access("T2"."ID"="T1"."ID")


In the order in which rowsources are created:

  • Operation 3 starts once with a prediction of 10 rows – and the A-rows matches the rule.
  • Operation 4 is started 10 times by operation 2, with a prediction of 3 rows per start – and the A-rows matches the rule.
  • Operation 5 is started 30 times by operation 1, with a prediction of 3 rows per start – again reporting the value that you would have seen from the original representation of the NLJ. The prediction obviously should be 1 per start – so the rule is broken again.

tl;dr

It is important to remember that the basic rule of “A-rows = starts * E-rows” does not hold for the partition-related lines or the PX related lines of partitioned and parallel execution plans.

You may also find a few other cases where you need to be a little cautious about trusting the rule without first thinking carefully about the mechanics of what the shape of the plan is telling you.

Amazon AWS | Microsoft AZURE | Oracle Cloud (Confused ?): Right Choice for DBA’s

Online Apps DBA - Wed, 2019-12-04 01:41

Amazon AWS | Microsoft AZURE | Oracle Cloud (Confused ?): Right Choice for DBA’s Check this video by Oracle ACE & Cloud Expert, Atul Kumar from team K21Academy in which he discussed which is the right Cloud to learn if you’re a DBA, AppsDBA or working on Oracle applications like E-business Suite, PeopleSoft, JD Edwards, […]

Categories: APPS Blogs

Autonomous Database Choices to Fit Your Needs

Oracle Press Releases - Tue, 2019-12-03 12:09

By Juan Loaiza, Executive Vice President, Mission-Critical Database Technologies—Dec 3, 2019

The role of the enterprise database has evolved, and its importance has increased tremendously since Oracle pioneered the technology more than four decades ago. Today's databases perform near real-time processing for the most important enterprise workloads in the world, such as at telecoms and financial institutions. They handle petabytes of data, and that data comes in many forms.

At Oracle, we are bringing enterprise-grade database technology to the cloud and making it dramatically simpler and more cost effective. Now every user in enterprises of every size can get the benefits of database technology that was previously restricted to mission-critical systems. The world’s most sophisticated database is now also the world’s simplest.

Automation, Security, and Dynamic Elasticity

Oracle Autonomous Database supports all kinds of workloads—transaction processing, data warehousing, and mixed workloads. And with Autonomous Database, most of the work of managing them can be automated.

Databases—especially mission-critical enterprise databases—have traditionally required a lot of oversight and manual management. With Autonomous Database, we’ve automated most of these operational tasks. The Autonomous Database uses machine learning and automation to help eliminate complexity and human error, enabling higher reliability and more efficiency.

This automation also makes Autonomous Database much more secure than other databases because it constantly updates and patches itself in response to the latest threats. Additionally, Autonomous Database can automatically scale up or down, and you pay only for the compute and storage you use. It's a dramatically better database experience.

Bringing the Cloud to You

Despite the many benefits of the public cloud, we recognize that customers — particularly large enterprises — can’t move everything to the cloud. There are many reasons for that, including regulatory compliance concerns, latency issues, and application integration complexity. That's why at OpenWorld 2019 we announced Oracle Gen 2 Exadata Cloud at Customer.

The idea of Exadata Cloud at Customer is, if you can't come to the public cloud, we'll bring the cloud to you. We take the same hardware, software, and APIs that we have in our Exadata Cloud Service in the public cloud, and we deploy them in the customer's data center. The customer stays in control of all their data, and all of their security policies can remain the same, so it makes it a lot easier to adopt. Customers get all the benefits of cloud technology, but they’re also able to control the compliance and policies that had previously prevented them from going to the cloud.

Today we have Oracle Database available as a cloud service on Exadata inside customers’ data centers, and in 2020 we also plan to bring Autonomous Database to customers’ data centers.

Get the Best for Free

Another Autonomous Database option available to customers is Oracle Cloud Free Tier. The free tier can be used by anyone, but it is particularly interesting for developers, students, and small businesses. The free tier makes it super easy to learn Autonomous Database, or to use it to develop new applications.

A lot of clouds have a free tier, but you normally don’t get their best database. You normally get a lower-end commodity database. In the Oracle Cloud Free Tier, Oracle is providing the full Autonomous Database running on the Exadata platform—the same database and platform that is used by some of the biggest enterprises in the world.

With support for online transaction processing or data warehousing, public cloud or Cloud at Customer deployment, and a free tier for students and developers, Oracle Autonomous Database supports any kind of workload, deployment, or user with the simplest ease of use and the highest elasticity and security.

All the previously existing barriers to running databases in the cloud have now been eliminated.

k8s info: VMware Tanzu Octant - A web-based, highly extensible platform for developers to better understand the complexity of Kubernetes clusters

Pas Apicella - Tue, 2019-12-03 10:33
Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities.

So how would I install this?

1. First, on my k8s cluster, let's create a deployment and a service. You can skip this step if you already have workloads on your cluster. These commands will work on any cluster, as the image exists on DockerHub, so as long as you can reach DockerHub these kubectl commands will work.

$ kubectl run pbs-demo --image=pasapples/pbs-demo-image --replicas=2 --port=8080
$ kubectl expose deploy pbs-demo --type=LoadBalancer --port=80 --target-port=8080
$ http http://101.195.48.144/customers/1

HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Tue, 03 Dec 2019 16:11:54 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://101.195.48.144/customers/1"
        },
        "self": {
            "href": "http://101.195.48.144/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

2. To install Octant you can view instructions on the GitHub page as follows

https://github.com/vmware-tanzu/octant

Given I am on a Mac, it's installed using brew as shown below. For other operating systems refer to the link above.

$ brew install octant

3. That's it. You can now launch the UI as shown below.

$  octant

2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "deployment/configuration", "module-name": "overview"}
2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/containerEditor", "module-name": "overview"}
2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/serviceEditor", "module-name": "overview"}
2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "octant/deleteObject", "module-name": "configuration"}
2019-12-03T21:47:56.272+0530 INFO dash/dash.go:370 Using embedded Octant frontend
2019-12-03T21:47:56.277+0530 INFO dash/dash.go:349 Dashboard is available at http://127.0.0.1:7777

Octant should immediately launch your default web browser on 127.0.0.1:7777
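
If you'd rather Octant didn't auto-open the browser, or you need it listening on a different address/port, the project README documents environment variables for this – the names below are from memory, so treat them as an assumption to verify on the GitHub page:

$ # assumed settings: don't auto-open a browser, listen on all interfaces on port 9000
$ OCTANT_DISABLE_OPEN_BROWSER=true OCTANT_LISTENER_ADDR=0.0.0.0:9000 octant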

And to view our deployed application!!!!

It's a nice UI and it even has the ability to switch to a different k8s context from the menu bar itself.

More Information

1. Seeing is Believing: Octant Reveals the Objects Running in Kubernetes Clusters
https://blogs.vmware.com/cloudnative/2019/08/12/octant-reveals-objects-running-in-kubernetes-clusters/

2. GitHub project page
https://github.com/vmware-tanzu/octant

Categories: Fusion Middleware

k8s info: kubectx and kubens to the rescue

Pas Apicella - Tue, 2019-12-03 05:54
kubectx is a utility to manage and switch between kubectl(1) contexts. To me this is so handy I can't live without it. I am constantly using k8s everywhere from PKS (Pivotal Container Service) clusters, GKE clusters, minikube and wherever I can get my hands on a cluster.

So when I heard about kubectx I gave it a try, and now I can't live without it; it makes my life so much easier. Here's how.

Where is my current k8s context and potentially what other contexts could I switch to?


Ok, so I am in the k8s cluster with the context of "apples". Let's switch to "lemons" then.


It's really as simple as that. In my world every k8s cluster is named after a FRUIT.

Finally, if you wish to set the current context's namespace, you can use "kubens" to do that just as easily, as shown below.
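
For reference, here's roughly what those screenshots show on the command line (the fruit-named contexts are the ones mentioned above; the "dev" namespace is just a made-up example, and the output is approximated from memory):

$ kubectx                 # list all contexts, the current one is highlighted
apples
lemons
$ kubectx lemons          # switch to the "lemons" context
Switched to context "lemons".
$ kubens dev              # set the default namespace for the current context
Context "lemons" modified.
Active namespace is "dev".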



More Information

https://github.com/ahmetb/kubectx

https://formulae.brew.sh/formula/kubectx
Categories: Fusion Middleware

Joined the ranks of the 100+ CKA/CKAD certified Pivotal Platform Architects

Pas Apicella - Tue, 2019-12-03 05:22
I am now officially CKAD certified; in fact, I am Cloud Foundry certified as well. Great to be certified with the leaders in container technology, both with PaaS and CaaS.

Categories: Fusion Middleware

Observation regarding Interval partitioning

Tom Kyte - Mon, 2019-12-02 17:52
Hi, I had some observations regarding Interval partitioning when I was looking into one issue. Below is the use case. For example, we have two tables: CREATE TABLE TEST_GURU_1 ( ENAME VARCHAR2(500), EMPDATE DATE ) ...
Categories: DBA Blogs

Parallel execution of procedure like multithreading in java

Tom Kyte - Mon, 2019-12-02 17:52
Hi All, Consider below sample Data model: Application is maintaining information of different countries - States -Cities (each of this is individual tables). At the end of quarter we are doing assessment and calculating different metrics at countr...
Categories: DBA Blogs

A stored procedure taking 98% CPU time as a percentage of Elapsed Time

Tom Kyte - Mon, 2019-12-02 17:52
A stored procedure is taking 98% CPU time as a percentage of Elapsed Time. Please provide us with the fix and recommendations to check.
Categories: DBA Blogs

Password change of remote database user by dblink but showing error as "ORA-06550: line 1, column 8: PLS-00352: Unable to access another database 'SYS_DBALINK"

Tom Kyte - Mon, 2019-12-02 17:52
Hi, I created a form in Oracle APEX and when I click the submit button a stored procedure runs that changes the remote database user password over a dblink, but I am getting the error "ORA-06550: line 1, column 8: PLS-00352: Unable to access another...
Categories: DBA Blogs

insert records into multiple tables

Tom Kyte - Mon, 2019-12-02 17:52
Hi, how can I insert records into multiple tables without using INSERT ALL, with a single INSERT statement only – is that possible?
Categories: DBA Blogs

DDL related parse, execute, fetch

Tom Kyte - Mon, 2019-12-02 17:52
Dear Mr. Tom, when we issue a DML statement it undergoes the PARSE, EXECUTE, FETCH process and the work happens in the database buffer cache. Kindly tell me: when we issue a DDL statement, PARSE and EXECUTE are understandable, but what about FETCH? What it will...
Categories: DBA Blogs

Plan changes when json_arrayagg function got added

Tom Kyte - Mon, 2019-12-02 17:52
Team, could you please help us to understand why the plan changes when the json_arrayagg function is added? demo@PDB1> create table t as select * from all_objects; Table created. demo@PDB1> alter table t add constraint t_pk primary ...
Categories: DBA Blogs

transportable DBF Import in 12c

Tom Kyte - Mon, 2019-12-02 17:52
I'm trying to import transportable data files to Oracle DB 12.2. These files were exported as transportable from Oracle DB 11.1. I receive the following error: ORA-39123: Data Pump transportable tablespace job aborted ORA-19721: Cannot find dat...
Categories: DBA Blogs

How to extract table data into CSV file dynamically using generic procedure

Tom Kyte - Mon, 2019-12-02 17:52
Hi, I need help on how to generate a CSV file for a given table name dynamically using a PL/SQL procedure. I understand we can use the UTL_FILE Oracle package to generate the CSV file; however, I would like to know how we can create a generic script which ...
Categories: DBA Blogs

Multiple Schema Oracle Wallet

Tom Kyte - Mon, 2019-12-02 17:52
Dear AskTom, I have a shell script that connects as several different users to the same database. From Oracle: You can store multiple credentials for multiple databases in one client wallet. You cannot store multiple credentials (for logging i...
Categories: DBA Blogs

Practical Application Performance Tuning: An nVision Case Study

David Kurtz - Mon, 2019-12-02 16:41
I gave this presentation at the UKOUG Techfest 19 conference.  It is closely based on a previous presentation about PeopleSoft nVision performance tuning, and uses the experience of a PeopleSoft project as a case study, so I am also posting here on my PeopleSoft blog.
This video was produced as a part of the preparation for this session.  The slide deck is also available on my website.

Learning about and understanding the principles and mechanics of the Oracle database is fundamentally important for both DBAs and developers. It is one of the reasons we still need physical conferences.
This presentation tells the story of a performance tuning project for the GL reporting on a Financials system on an engineered system. It required various techniques and features to be brought to bear. Having a theoretical understanding of how the database and various features work allowed us to make reasonable predictions about whether they would be effective in our environment. Some ideas were discounted, some were taken forward.
We will look at instrumentation, ASH, statistics collection, partitioning, hybrid columnar compression, Bloom filtering, and SQL profiles. All of them played a part in the solution; some added further complications that had to be worked around, some had to be carefully integrated with the application, and some required some reconfiguration of the application in order to work properly.
Ultimately, performance improvement is an experimental science, and it requires a similarly rigorous thought process.

Create a Vagrant box with Oracle Linux 7 Update 7 Server with GUI

Darwin IT - Mon, 2019-12-02 12:31
Yesterday and today I have been attending the UKOUG TechFest '19 in Brighton. It got me eager to try things out, for instance with new Oracle DB 19c features, and therefore I should update my Vagrant boxes to be able to install one. But I realized my base box is still on Oracle Linux 7U5, and so I wanted to have a neatly fresh, latest OL 7U7 box.
Use Oracle's base box
Now, last year I wrote about how to create your own Vagrant Base Box: Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box. So I could create my own, but already quite some time ago I found out that Oracle supplies those base boxes.

They're made available at https://yum.oracle.com/boxes, and there are boxes for OL6, OL7 and even OL8. I want to use OL 7U7, and thus I got started with that one. It's neatly described at the mentioned link and it all comes down to:

$ vagrant box add --name <name> <url>
$ vagrant init <name>
$ vagrant up
$ vagrant ssh

And in my case:

$ vagrant box add --name ol77 https://yum.oracle.com/boxes/oraclelinux/ol77/ol77.box
$ vagrant init ol77
$ vagrant up
$ vagrant ssh

Before you do that vagrant up, you might want to edit your Vagrantfile to add a name for your VM:
BOX_NAME="ol77"
VM_NAME="ol77"
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = BOX_NAME

  ...

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    vb.name = VM_NAME
    # # Display the VirtualBox GUI when booting the machine
    # vb.gui = true
    #
    # # Customize the amount of memory on the VM:
    # vb.memory = "1024"
  end
  #
  ...

Otherwise your VM name in VirtualBox would be something like ol7_default_1235897983, something cryptic with a random number.

If you do a vagrant up now it will boot up nicely.

VirtualBox Guest Additions
The VirtualBox Guest Additions in the box are at version 6.12, while my VirtualBox installation already has 6.14. I found it handy to have a plugin that auto-updates them. My co-Oracle-ACE Maarten Smeets wrote about that earlier. It comes down to executing the following on a command line:
vagrant plugin install vagrant-vbguest

If you do a vagrant up now, it will update the guest additions. However, to be able to do so, it needs to install all kinds of kernel packages to compile the drivers. So be aware that this might take some time, and you'll need an internet connection.
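
The plugin can also be steered from the Vagrantfile; a minimal sketch (auto_update is the vagrant-vbguest option I know of – verify it against the plugin's README):

config.vbguest.auto_update = true  # check and, if needed, rebuild the Guest Additions on every vagrant up
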
Server with GUI
The downloaded box is a Linux Server install, without a UI. This is probably fine for most of the installations you do. But I like to be able to log on to the desktop from time to time, I want to be able to connect to it using MobaXterm, and I want to be able to run a UI-based installer or application. A bit of X support is handy. How to do that I found at this link.

GUI support is one of the package groups that Oracle Linux 7 supports, and this works exactly the same as in RHEL7 (wonder why that is?).

To list the available package groups, you can do:

[vagrant@localhost ~]$ sudo yum group list
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Available Environment Groups:
   Minimal Install
   Infrastructure Server
   File and Print Server
   Cinnamon Desktop
   MATE Desktop
   Basic Web Server
   Virtualization Host
   Server with GUI
Available Groups:
   Backup Client
   Base
   Cinnamon
   Compatibility Libraries
   Console internet tools
   Development tools
   E-mail server
   Educational Software
   Electronic Lab
   Fedora Packager
   Fonts
   General Purpose Desktop
   Graphical Administration Tools
   Graphics Creation Tools
   Hardware monitoring utilities
   Haskell
   Input Methods
   Internet Applications
   KDE Desktop
   Legacy UNIX Compatibility
   MATE
   Milkymist
   Network Infrastructure Server
   Networking Tools
   Office Suite and Productivity
   Performance Tools
   Scientific support
   Security Tools
   Smart card support
   System Management
   System administration tools
   Technical Writing
   TurboGears application framework
   Web Server
   Web Servlet Engine
   Xfce
Done

(After having executed vagrant ssh.)
You'll find 'Server with GUI' as one of the options. This will install all the necessary packages to run Gnome. But if you want KDE, there's also a package group for that.

To install it you would run:
[vagrant@localhost ~]$ sudo yum groupinstall 'Server with GUI'
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Resolving Dependencies
--> Running transaction check
---> Package ModemManager.x86_64 0:1.6.10-3.el7_6 will be installed
--> Processing Dependency: ModemManager-glib(x86-64) = 1.6.10-3.el7_6 for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libmbim-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-glib.so.5()(64bit) for package: ModemManager-1.6.10-3.el7_6.x86_64
....
....
python-firewall noarch 0.6.3-2.0.1.el7_7.2 ol7_latest 352 k
systemd x86_64 219-67.0.1.el7_7.2 ol7_latest 5.1 M
systemd-libs x86_64 219-67.0.1.el7_7.2 ol7_latest 411 k
systemd-sysv x86_64 219-67.0.1.el7_7.2 ol7_latest 88 k

Transaction Summary
========================================================================================================================
Install 303 Packages (+770 Dependent packages)
Upgrade ( 7 Dependent packages)

Total download size: 821 M
Is this ok [y/d/N]:


It will list a whole bunch of packages with dependencies that it will install. If you're up to it, at this point you confirm with 'y'. Notice that there will be a bit over 1,000 packages installed, so it will be busy with that for a while.
This is because it will install the complete Gnome Desktop environment.
You could also do:
[vagrant@localhost ~]$ sudo yum groupinstall 'X Window System' 'GNOME'

That will install only the minimum necessary packages to run Gnome. I did not try that yet.
If it finished installing all the packages, the one thing left is to change the default runlevel, since obviously you want to start in the GUI by default. In most cases, at least.
This is done by:
[vagrant@localhost ~]$ sudo systemctl set-default graphical.target

I could have put that in a provision script, like I've done before. And maybe I will do that.
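
A minimal sketch of what that provision step could look like in the Vagrantfile – just the two commands from above wrapped in Vagrant's standard shell provisioner:

config.vm.provision "shell", inline: <<-SHELL
  yum -y groupinstall 'Server with GUI'
  systemctl set-default graphical.target
SHELL
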
Package the box
You will have noticed that it took quite some time to update the kernel packages for installing the latest Guest Additions and to install the GUI desktop. To prevent us from doing that over and over again, I thought it wise to package the box into an ol77SwGUI box (Server with GUI). I described that in my previous article last year:
vagrant package --base ol77_default_1575298630482_71883 --output d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box
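
Once packaged, the box can be registered and used like any other, following the same add/init/up pattern as earlier (the box name ol77swgui is just my choice here):

$ vagrant box add --name ol77swgui d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box
$ vagrant init ol77swgui
$ vagrant up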

The result
This will deliver you a Vagrant Box/VirtualBox image with:
  • Provider: VirtualBox
  • 64 bit
  • 2 vCPUs
  • 2048 MB RAM
  • Minimal package set installed
  • 32 GiB root volume
  • 4 GiB swap
  • XFS root filesystem
  • Extra 16GiB VirtualBox disk image attached, dynamically allocated
  • Guest additions installed
  • Yum configured for Oracle Linux yum server. _latest and _addons repos enabled as well as _optional_latest, _developer, _developer_EPEL where available.
  • And as an extra addon: Server with GUI installed.
Or basically more or less what I have in my own base box. What I'm less happy with is the 16GiB extra disk image attached. I want a bigger disk for my installations, or at least for the data. I'll need to figure out what I want to do with that. Maybe I'll add an extra disk and reformat the lot with a disk-spanning Logical Volume based filesystem.

Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS

Yann Neuhaus - Mon, 2019-12-02 07:15

I’ve done quite some real time logical replication projects in the past, either using Oracle GoldenGate or EDB Replication Server. Built-in logical replication in PostgreSQL (which is available since PostgreSQL 10) can be used as well when both the source and the target are PostgreSQL instances. While being at the DOAG conference and exhibition 2019 I got in contact with people from DBPLUS, and they provide a product called “Data Replicator”. The interesting use case for me is real time replication from Oracle to PostgreSQL, as the next project for such a setup is already in the pipeline, so I thought I’d give it a try.

The “Data Replicator” software needs to be installed on a Windows machine and all traffic will go through that machine. The following picture is stolen from the official “Data Replicator” documentation and it pretty well describes the architecture when the source system is Oracle:

As “Data Replicator” will use Oracle LogMiner, no triggers need to be installed on the source system. Installing something on a validated system might become tricky, so this already is a huge benefit compared to some other solutions, e.g. SymmetricDS. If you know GoldenGate the overall architecture is not so much different: what GoldenGate calls the extract is the “Reader” in Data Replicator, and the replicat becomes the “Applier”.

The installation on the Windows machine is so simple that I’ll just provide the screenshots without any further comments:





In the background three new services have been created and started by the installation program:

There is the replication manager, which is responsible for creating replication processes, and there are two more services for reading from the source and writing data to the target. In addition, the graphical user interface was installed (it could also run on another Windows machine), which looks like this once you start it up:

Before connecting with the GUI you should do the basic configuration by using the “DBPLUS Replication Manager Configuration” utility:

Once that is done you can go back to the client and connect:

The initial screen does not have much content, except for the possibility to create a new replication, and I really like that: no overloaded, hard-to-understand interface, but easy and tidy. With only one choice it is easy to go forward, so let’s create a new replication:

Same concept here: a very clean interface, only 5 steps to follow. My source system is Oracle 19.3 EE and all I have to do is provide the connection parameters, the admin user, and a new user/password combination I want to use for the logical replication:

Asking “Data Replicator” to create the replication user, and all is fine:

SQL> r
  1* select username,profile from dba_users where username = 'REPLUSR'

USERNAME                       PROFILE
------------------------------ ------------------------------
REPLUSR                        DEFAULT

Of course some system privileges have been granted to the user that got created:

SQL> select privilege from dba_sys_privs where grantee = 'REPLUSR';

PRIVILEGE
----------------------------------------
SELECT ANY TRANSACTION
LOGMINING
SELECT ANY DICTIONARY
SELECT ANY TABLE

Proceeding with the target database, which is PostgreSQL 12.1 in my case:

As you can see there is no option to create a user on the target. What I did is this:

postgres=# create user replusr with login password 'xxxxxxx';
CREATE ROLE
postgres=# create database offloadoracle with owner = 'replusr';
CREATE DATABASE
postgres=# 

Once done, the connection succeeds and can be saved:

That’s all for the first step and we can proceed to step two:

I have installed the Oracle sample schemas for this little demo and, as I only want to replicate these, I’ve changed the selection to “REPLICATE ONLY SELECTED SCHEMAS AND TABLES”.

Once more this is all that needs to be done, and the next step is to generate the report to get an idea of possible issues:

The reported issues totally make sense and you even get the commands to fix them – except for the complaints about the unique keys, of course (if you go for logical replication you should make sure anyway that each table contains either a primary key or at least a unique key). Once the Oracle database is in archive mode and supplemental log data has been added, the screen looks fine (I will ignore the two warnings as they are not important for this demo):
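
For reference, the fix commands the report hands out are essentially the standard Oracle statements, something along these lines (a sketch – take the exact commands from the report; the schema list in the query is just this demo's sample schemas), plus a quick dictionary query to spot tables lacking a primary or unique key:

-- put the source database into archivelog mode (requires a restart)
shutdown immediate
startup mount
alter database archivelog;
alter database open;

-- add minimal supplemental logging so LogMiner can reconstruct the changes
alter database add supplemental log data;

-- list tables (here: the sample schemas) without a primary or unique constraint
select t.owner, t.table_name
from   dba_tables t
where  t.owner in ('HR','SH','OE')
and    not exists ( select null
                    from   dba_constraints c
                    where  c.owner = t.owner
                    and    c.table_name = t.table_name
                    and    c.constraint_type in ('P','U') );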

The next step is to define the “Start Options” and when you select “automatic” you’ll have to specify the options for the transfer server:

There is a small configuration utility for that as well:

When you are happy with it, provide the details in the previous screen and complete the replication setup by providing a name in the last step:

That’s all you need to do and the replication is ready to be started:

… and then it immediately fails because we do not have a valid license. For getting a trial license you need to provide the computer ID which can be found in the information section:

Provide that to DBPLUS and request a trial license. Usually they respond very fast:

Starting the replication once more:

You’ll see new processes on the PostgreSQL side:

postgres@centos8pg:/home/postgres/ [121] ps -ef | grep postgres
root      1248   769  0 12:58 ?        00:00:00 sshd: postgres [priv]
postgres  1252     1  0 12:58 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1256  1252  0 12:58 ?        00:00:00 (sd-pam)
postgres  1262  1248  0 12:58 ?        00:00:00 sshd: postgres@pts/0
postgres  1263  1262  0 12:58 pts/0    00:00:00 -bash
postgres  1667     1  0 12:58 ?        00:00:00 /u01/app/postgres/product/12/db_0/bin/postgres -D /u02/pgdata/12
postgres  1669  1667  0 12:58 ?        00:00:00 postgres: checkpointer   
postgres  1670  1667  0 12:58 ?        00:00:00 postgres: background writer   
postgres  1671  1667  0 12:58 ?        00:00:00 postgres: walwriter   
postgres  1672  1667  0 12:58 ?        00:00:00 postgres: autovacuum launcher   
postgres  1673  1667  0 12:58 ?        00:00:00 postgres: stats collector   
postgres  1674  1667  0 12:58 ?        00:00:00 postgres: logical replication launcher   
postgres  2560  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40790) idle
postgres  2562  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40800) idle
postgres  2588  1263  0 14:40 pts/0    00:00:00 ps -ef
postgres  2589  1263  0 14:40 pts/0    00:00:00 grep --color=auto postgres

… and you’ll see LogMiner processes on the Oracle side:

LOGMINER: summary for session# = 2147710977
LOGMINER: StartScn: 2261972 (0x00000000002283d4)
LOGMINER: EndScn: 18446744073709551615 (0xffffffffffffffff)
LOGMINER: HighConsumedScn: 0
LOGMINER: PSR flags: 0x0
LOGMINER: Session Flags: 0x4000441
LOGMINER: Session Flags2: 0x0
LOGMINER: Read buffers: 4
LOGMINER: Region Queue size: 256
LOGMINER: Redo Queue size: 4096
LOGMINER: Memory LWM: limit 10M, LWM 12M, 80%
LOGMINER: Memory Release Limit: 0M
LOGMINER: Max Decomp Region Memory: 1M
LOGMINER: Transaction Queue Size: 1024
2019-11-22T14:05:54.735533+01:00
LOGMINER: Begin mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log
2019-11-22T14:05:54.759197+01:00
LOGMINER: End   mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log

In the details tab there is more information about what is currently going on:

Although it looked quite good at the beginning, here is the first issue:

Oracle data type is unknown: OE.CUST_ADDRESS_TYP
Stack trace:
System.ArgumentException: Oracle data type is unknown: OE.CUST_ADDRESS_TYP
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

As with all logical replication solutions, custom types are tricky and usually not supported. What I will be doing now is replicating only the “HR” and “SH” schemas, which do not contain any custom types:
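
(As a side note: a quick way to check which tables reference a user-defined type is a dictionary query along these lines – DATA_TYPE_OWNER is only populated when the column's type is an object type, and the schema list is again just this demo's sample schemas:

select owner, table_name, column_name, data_type
from   dba_tab_columns
where  owner in ('OE','HR','SH')
and    data_type_owner is not null;

The OE schema is the one that triggered the error above.)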

Once again, starting the replication, next issue:

Oracle data type is unknown: ROWID
Stack trace:
System.ArgumentException: Oracle data type is unknown: ROWID
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

Let’s check which columns and tables those are:

SQL> SELECT owner, table_name, column_name from dba_tab_columns where data_type = 'ROWID' and owner in ('HR','SH');

OWNER                TABLE_NAME                     COLUMN_NAME
-------------------- ------------------------------ ------------------------------
SH                   DR$SUP_TEXT_IDX$U              RID
SH                   DR$SUP_TEXT_IDX$K              TEXTKEY

Such columns can be easily excluded:

Starting over again, next issue:

At least the schemas need to exist on the target, so:

postgres=# \c offloadoracle postgres
You are now connected to database "offloadoracle" as user "postgres".
offloadoracle=# create schema sh;
CREATE SCHEMA
offloadoracle=# create schema hr;
CREATE SCHEMA
offloadoracle=# 

Next try:

On the source side:

SQL> grant flashback any table to REPLUSR;

Grant succeeded.

SQL> 

On the target side:

offloadoracle=# grant all on schema hr to replusr;
GRANT
offloadoracle=# grant all on schema sh to replusr;
GRANT

Finally most of the tables are replicating fine now:

There are a few warnings about missing unique keys, and some tables cannot be replicated at all:

For now I am just going to exclude the failed tables as this is fine for the scope of this post:

… and my replication is fine. A quick check on the target:

offloadoracle=# select * from sh.products limit 3;
 prod_id |           prod_name           |           prod_desc           | prod_subcategory | prod_subcategory_id | prod_subcategory_desc |        prod_category        | prod_category_id |     prod_category_desc      | prod_weight_class | prod_unit_of_measure | prod_pac>
---------+-------------------------------+-------------------------------+------------------+---------------------+-----------------------+-----------------------------+------------------+-----------------------------+-------------------+----------------------+--------->
      13 | 5MP Telephoto Digital Camera  | 5MP Telephoto Digital Camera  | Cameras          |     2044.0000000000 | Cameras               | Photo                       |   204.0000000000 | Photo                       |                 1 | U                    | P       >
      14 | 17" LCD w/built-in HDTV Tuner | 17" LCD w/built-in HDTV Tuner | Monitors         |     2035.0000000000 | Monitors              | Peripherals and Accessories |   203.0000000000 | Peripherals and Accessories |                 1 | U                    | P       >
      15 | Envoy 256MB - 40GB            | Envoy 256MB - 40Gb            | Desktop PCs      |     2021.0000000000 | Desktop PCs           | Hardware                    |   202.0000000000 | Hardware                    |                 1 | U                    | P       >
(3 rows)


… confirms the data is there. As this post is already long enough, here are some final thoughts: the installation of “Data Replicator” is a no-brainer. I really like the simple interface, and setting up a replication between Oracle and PostgreSQL is quite easy. Of course you need to know the issues you can run into with logical replication (missing unique or primary keys, unsupported data types, …) but that is the same topic for all solutions. What I can say for sure is that I have never set up a demo replication as fast as with “Data Replicator”. More testing to come …


Video : SQLcl and Oracle REST Data Services (ORDS)

Tim Hall - Mon, 2019-12-02 02:41

In today’s video we’ll demonstrate the ORDS functionality built into Oracle SQLcl.

This is based on this article.

There are loads of other ORDS articles here.

The star of today’s video is Arman Sharma, captured at Sangam 2015. Seems like yesterday.

Cheers

Tim…

