Feed aggregator

Amazon AWS instances and Oracle database performance

Yann Neuhaus - Wed, 2017-02-01 03:19

When you run Oracle Database on Amazon AWS you Bring Your Own Licenses, and the licence count depends on the number of virtual cores (the number of cores allocated to your vCPUs). Behind the instance types, you have different processors and hyper-threading settings. So when choosing an instance type, you want to know which processor offers the best performance for your Oracle workload. Here is an example comparing logical reads on T2, M4, R4 and C4 instances.

My comparison is done by running cached SLOB (https://kevinclosson.net/slob/) to measure the maximum number of logical reads per second when running the same workload on the different instance types.
I compared what you can get with 2 Oracle Database processor licences, which cover 2 cores (there is no core factor on AWS). That means 2 vCPUs on T2, which is not hyper-threaded, and 4 vCPUs on the others.

T2.large: 2vCPU, 8GB RAM, monthly cost about 100$

I was on Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz

With one session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.37
DB CPU(s): 1.0 13.0 0.00 5.34
Logical read (blocks): 747,004.5 9,760,555.7

With 2 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 11.12
DB CPU(s): 2.0 27.1 0.00 11.04
Logical read (blocks): 1,398,124.7 19,111,284.0

T2 instances are not hyper-threaded, which is why we nearly double the LIOPS with two sessions. So with 2 Oracle licences on T2 we get about 1.4 million logical reads per second.

M4.xlarge: 4vCPU, 16GB RAM, monthly cost about 180$

M4 is the latest general-purpose instance family in EC2. It is hyper-threaded, so with 2 Oracle processor licences we can use 4 vCPUs.
Here I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz: 2 cores with 2 threads each.

With 1 session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.46
DB CPU(s): 1.0 13.1 0.00 5.46
Logical read (blocks): 874,326.7 11,420,189.2

With 2 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.24
DB CPU(s): 2.0 27.2 0.00 9.22
Logical read (blocks): 1,540,116.9 21,047,307.6

With 3 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 12.33
DB CPU(s): 3.0 40.8 0.00 12.30
Logical read (blocks): 1,645,128.2 22,469,983.6

With 4 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 14.46
DB CPU(s): 4.0 54.3 0.00 14.39
Logical read (blocks): 1,779,361.3 24,326,538.0

Those CPUs are faster than the T2 ones. With a single session, we can do 17% more LIOPS, and running on all 4 threads we reach about 1.8 million LIOPS, which is 27% more than T2 for the same Oracle licences.
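The 17% and 27% figures can be recomputed from the "Logical read (blocks)" per-second values in the AWR extracts above; a quick sanity check:

```python
# Sanity-check of the speedup percentages, using the "Logical read (blocks)"
# per-second values from the AWR extracts above.
t2_1s, t2_2s = 747_004.5, 1_398_124.7   # T2.large: 1 and 2 sessions
m4_1s, m4_4s = 874_326.7, 1_779_361.3   # M4.xlarge: 1 and 4 sessions

single_gain = (m4_1s / t2_1s - 1) * 100   # single-session gain vs T2
full_gain = (m4_4s / t2_2s - 1) * 100     # all-threads gain vs T2 (same licences)

print(f"M4 vs T2, single session: +{single_gain:.0f}%")
print(f"M4 vs T2, full load:      +{full_gain:.0f}%")
```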

R4.xlarge: 4vCPU, 30.5GB RAM, monthly cost about 200$

R4 is the memory-optimized instance family. I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, so I expected about the same performance as M4.

With 1 session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 6.01
DB CPU(s): 1.0 13.7 0.00 6.01
Logical read (blocks): 864,113.9 11,798,650.6

With 2 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.38
DB CPU(s): 2.0 27.2 0.00 9.36
Logical read (blocks): 1,546,138.8 21,115,125.5

With 3 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 14.07
DB CPU(s): 3.0 40.9 0.00 14.05
Logical read (blocks): 1,686,595.4 23,033,987.3

With 4 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 15.00
DB CPU(s): 4.0 54.3 0.00 14.93
Logical read (blocks): 1,837,289.9 25,114,082.1

This one looks a little faster. It is the same CPU, but cached SLOB does not test only CPU frequency; it also exercises memory access, and R4 instances have DDR4 memory.

C4.xlarge: 4vCPU, 7.5GB RAM, monthly cost about 170$

For my last test I chose the compute-optimized C4, with Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz.

With 1 session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 6.83
DB CPU(s): 1.0 13.7 0.00 6.83
Logical read (blocks): 923,185.0 12,606,636.8

With 2 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.38
DB CPU(s): 2.0 27.2 0.00 9.36
Logical read (blocks): 1,632,424.3 22,296,021.5

With 3 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 39.2 0.00 13.64
DB CPU(s): 3.0 39.1 0.00 13.61
Logical read (blocks): 1,744,709.5 22,793,491.7

With 4 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 15.79
DB CPU(s): 4.0 54.3 0.00 15.71
Logical read (blocks): 1,857,692.6 25,396,599.8

According to https://aws.amazon.com/ec2/instance-types/ C4 instances offer the lowest price/compute performance in EC2. The clock frequency is about 26% higher than R4 (2.90GHz vs 2.30GHz), yet we get similar LIOPS: CPU frequency is not the only parameter for a database workload.

So what?

You should not compare only the EC2 instance cost (I’ve indicated the approximate cost for RHEL in Europe; you can check pricing at https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/). You should also estimate the Oracle licences you need to run your workload. Creating an EC2 instance takes only a few minutes. Installing Oracle from an ORACLE_HOME clone is also very fast, and creating a database with the SLOB create_database_kit is easy. Fully automated, you can run the same SLOB tests on an instance and get results within 2 hours. It is highly recommended to do that before choosing the instance type for your database. The number of cores determines the Oracle licences to buy, which is an acquisition cost plus a yearly maintenance fee. The goal is to run on the processors that give the best performance for your workload.
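As a rough illustration of instance cost versus throughput only (ignoring licence costs, which usually dominate), here is a sketch combining the peak LIOPS measured above with the approximate monthly costs quoted for each instance type. The costs are ballpark figures from this post, not official pricing:

```python
# Instance cost vs measured throughput only (licence costs excluded).
# Peak LIOPS are the cached-SLOB figures measured in this post; monthly
# costs are the approximate RHEL-in-Europe numbers quoted above.
instances = {
    "t2.large":  {"usd_month": 100, "peak_lios": 1_398_124.7},
    "m4.xlarge": {"usd_month": 180, "peak_lios": 1_779_361.3},
    "r4.xlarge": {"usd_month": 200, "peak_lios": 1_837_289.9},
    "c4.xlarge": {"usd_month": 170, "peak_lios": 1_857_692.6},
}
for name, d in sorted(instances.items(),
                      key=lambda kv: kv[1]["peak_lios"] / kv[1]["usd_month"],
                      reverse=True):
    ratio = d["peak_lios"] / d["usd_month"]
    print(f"{name:10s} {ratio:10,.0f} LIO/s per monthly dollar")
```

On this metric alone the T2 looks attractive per instance dollar, but it exposes only 2 vCPUs per 2 licences; this reinforces the point above that you should measure with your own workload before committing to licences.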

 

The article Amazon AWS instances and Oracle database performance appeared first on Blog dbi services.

MD5 Signed JAR Files Treated as Unsigned in April 2017

Steven Chan - Wed, 2017-02-01 02:05

Oracle currently plans to disable MD5 signed JARs in the upcoming Critical Patch Update slated for April 18, 2017.  JAR files signed with MD5 algorithms will be treated as unsigned JARs.

MD5 JAR file signing screenshot

Does this affect EBS environments?

Yes. This applies to Java 6, 7, and 8 used in EBS 12.1 and 12.2.  Oracle E-Business Suite uses Java, notably for running Forms-based content via the Java Runtime Environment (JRE) browser plug-in.  Java-based content is delivered in JAR files.  Customers must sign E-Business Suite JAR files with a code signing certificate from a trusted Certificate Authority (CA). 

A code signing certificate from a trusted CA is required to sign your Java content securely. It allows you to deliver signed code (e.g. JAR files) from your server to users' desktops, verifies you as the publisher and trusted provider of that code, and verifies that the code has not been altered. A single code signing certificate lets you sign any amount of code across multiple EBS environments. This is a different type of certificate from the commonly used SSL certificate, which authorizes a server on a per-environment basis. You cannot use an SSL certificate to sign JAR files.
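To get an idea of whether an existing JAR was signed with MD5, one can inspect its META-INF signature files, which record the digest algorithm in their headers. A rough, hypothetical Python check (an illustration only, not an Oracle-provided tool):

```python
import zipfile

def jar_uses_md5_signing(jar_file):
    """Return True if any META-INF/*.SF signature file mentions an MD5 digest.

    Hypothetical helper: JAR signature files record the digest algorithm in
    headers such as 'MD5-Digest-Manifest:' or 'SHA-256-Digest-Manifest:'.
    A JAR flagged here would be treated as unsigned once MD5-signed JARs
    are disabled.
    """
    with zipfile.ZipFile(jar_file) as jar:
        for name in jar.namelist():
            upper = name.upper()
            if upper.startswith("META-INF/") and upper.endswith(".SF"):
                text = jar.read(name).decode("utf-8", errors="replace")
                if "MD5-Digest" in text:
                    return True
    return False
```

Such a scan only flags candidates for re-signing; the authoritative fix is to re-sign the JARs with a SHA-2 digest following the published EBS signing instructions.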

Instructions on how to sign EBS JARs are published here:

Where can I get more information?

Oracle's plans for changes to the security algorithms and associated policies/settings in the Oracle Java Runtime Environment (JRE) and Java SE Development Kit (JDK) are published here:

More information about Java security is available here:

Getting help

If you have questions about Java Security, please log a Service Request with Java Support.

If you need assistance with the steps for signing EBS JAR files, please log a Service Request against the "Oracle Applications Technology Stack (TXK)" > "Java."

Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


Categories: APPS Blogs

Batch Scheduler Resources

Anthony Shorten - Tue, 2017-01-31 20:16

In the latest release of the Oracle Utilities Application Framework, we released an integration with DBMS_SCHEDULER to manage and execute our batch processes. We supply a PL/SQL-based interface to our batch processes.

DBMS_SCHEDULER is part of the database, so there is plenty of advice on the internet to help you use the scheduler effectively. I have compiled a list of some of the resources that may be useful when using this scheduler:

This list is not exhaustive, so look for other resources you might find useful (search for DBMS_SCHEDULER in the search engine of your choice). Those coming to the Oracle Utilities Edge Conference should note that I am conducting a session on the scheduler and the integration on Feb 14 at the conference if you want more information.

Concat all columns

Tom Kyte - Tue, 2017-01-31 16:46
Hello Tom. I want to concat all columns of a row into one string. <code>select * from table</code> should bring out one column per row, including all field values as one string. The use of || didn't work, because I want it for different table...
Categories: DBA Blogs

When do these wait events occur and what are their causes: enq: IM - contention for blr and enq: TA - contention

Tom Kyte - Tue, 2017-01-31 16:46
When do these wait events occur, and what are the causes and solutions for them? 1) enq: IM - contention for blr 2) enq: TA - contention 3) undo segment tx slot
Categories: DBA Blogs

Database sync method

Tom Kyte - Tue, 2017-01-31 16:46
Hi, I have two databases in production: one is a DEV db (source DB) and the other a QA DB (target DB). If any change is applied to the source DB it has to be reflected in the target DB, but not vice versa. The TARGET DB is in read-write mode. Please let me know the ea...
Categories: DBA Blogs

Oracle Database Licensing in the Cloud

Tom Kyte - Tue, 2017-01-31 16:46
Hi Just read the latest release of the Oracle Database licensing in cloud (http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf), has little concern! Does this mean, we will face doubling our licensing cost if we host our Oracle d...
Categories: DBA Blogs

Uploading Files into Server

Tom Kyte - Tue, 2017-01-31 16:46
Hi, Is it possible to transfer files using SQL*Plus to a remote database from a client (a local machine having SQL*Plus)?
Categories: DBA Blogs

Analytics/MODEL to consolidate order lines by value

Tom Kyte - Tue, 2017-01-31 16:46
Given a manual business process when customers accept a minimum dollar value for shipments, I need to write a query that will consolidate the order lines for the customer such that we show the date their order lines accumulate the minimum shipment va...
Categories: DBA Blogs

Introducing high-availability and multi-subnet scenarios with SQL Server on Linux

Yann Neuhaus - Tue, 2017-01-31 15:56

On my first blog about SQL Server on Linux, I introduced the new high-availability feature, which so far concerns only SQL Server failover cluster instances. During this discovery, I had the support of Mihaela Blendea (@MihaelaBlendea) at Microsoft to clarify some aspects of this new kind of architecture. Firstly, I would like to thank her; it is always a great pleasure to have the Microsoft team available in such cases. After achieving the installation of my SQL Server FCI environment on Linux, I was interested in doing the same in a more complex scenario, like the multi-subnet failover clusters I sometimes see at customer shops. The installation process will surely change over time, and this is not intended as official documentation. It is only an exercise that is part of my Linux immersion experience.

So I decided to evolve my current architecture (two cluster nodes with Pacemaker on the same subnet) by introducing a third node on a different subnet. Here is a picture of the architecture I wanted to install.

blog 115 - 1 - sqlfci multisubnet architecture

So basically, referring to my previous architecture, the task to perform was as follows:

  • Make the initial heartbeat configuration redundant. Even if nowadays redundant network paths are mostly handled by modern infrastructure and virtualization layers, I still believe it is a best practice to make the heartbeat redundant at the cluster level, in order to avoid unexpected behaviors like split-brain (for instance with two nodes, as in this case). I will have the opportunity to talk about quorum in a future post.
  • Introduce a third node on a different subnet into the existing architecture and add it to the cluster. You may follow the Microsoft documentation to perform this task. The main challenge here was to add the third node in a multi-subnet scenario and to ensure the communication paths work between cluster nodes on both networks (public and private).
  • Find a way to make the existing SQL Server FCI resource multi-subnet compliant, i.e. to get the same kind of behavior we have with a WSFC on Windows when the resource fails over nodes on different subnets. On Windows, we would configure an OR-based resource dependency which includes the second virtual IP address.
  • Check that applications are able to connect in the event of a multi-subnet failover.

You may notice that I didn’t introduce redundancy at the storage layer. Indeed, the NFS server becomes the SPOF, but I didn’t want to make my architecture more complex for the moment. In a more realistic scenario at customer shops, this aspect would probably be covered by storage vendor solutions.

So let’s begin with the heartbeat configuration. In my existing infrastructure, only one ring was configured, running on top of the eth0 interfaces on both nodes (respectively 192.168.5.17 for the linux01 node and 192.168.5.18 for the linux02 node).

 [mikedavem@linux01 ~]$ sudo pcs cluster corosync
…
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        nodeid: 1
    }

    node {
        ring0_addr: linux02.dbi-services.test
        nodeid: 2
    }
}
…

 

So I added another network interface (eth1) on each cluster node, on a different subnet (192.168.20.0). These interfaces will be dedicated to running the second Corosync link (ring 1).

  • Linux01
[mikedavem@linux01 ~]$ ip addr show eth1
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:2b:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.17/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe00:2bd4/64 scope link
       valid_lft forever preferred_lft forever

 

  • Linux02
[mikedavem@linux01 ~]$ sudo ssh linux02 ip addr show eth1
…
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:2b:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.18/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::36d8:d6f9:1b7a:cebd/64 scope link
       valid_lft forever preferred_lft forever

 

At this point I bound each new IP address to a corresponding hostname. We may store the new configuration either in the /etc/hosts file or in the DNS server(s).

Then I updated corosync.conf on both nodes, adding the new ring configuration as follows. Note that configuration changes are not synchronized automatically across nodes as they are in a Windows failover cluster. To enable the redundant ring protocol, I added the rrp_mode parameter, set to active on both network interfaces (eth0 and eth1), and a new ring address for each node (ring1_addr).

totem {
    version: 2
    secauth: off
    cluster_name: linux_cluster
    transport: udpu
    rrp_mode: active
}
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        ring1_addr: linux01H2.dbi-services.test
        nodeid: 1
    }
    node {
        ring0_addr: linux02.dbi-services.test
        ring1_addr: linux02H2.dbi-services.test
        nodeid: 2
    }
}

After restarting the Corosync service on both nodes, I checked the new ring status on both nodes

 [mikedavem@linux01 ~]# sudo corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
        id      = 192.168.5.17
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.20.17
        status  = Marking seqid 23 ringid 1 interface 192.168.20.17 FAULTY
[root@linux01 ~]#
 [root@linux01 ~]# ssh linux02 corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
        id      = 192.168.5.18
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.20.18
        status  = ring 1 active with no faults

 

At this point, my Pacemaker cluster was able to use all the network interfaces for the heartbeat.

Following the Microsoft documentation, I added a new node LINUX03 with the same heartbeat configuration, and the general Corosync configuration was updated as follows:

[mikedavem@linux01 ~]# sudo pcs cluster node add linux03.dbi-services.test,linux03H2.dbi-services.test

nodelist {
…
    node {
        ring0_addr: linux01.dbi-services.test
        ring1_addr: linux01H2.dbi-services.test
        nodeid: 1
    }
    node {
        ring0_addr: linux02.dbi-services.test
        ring1_addr: linux02H2.dbi-services.test
        nodeid: 2
    }
    node {
        ring0_addr: linux03.dbi-services.test
        ring1_addr: linux03H2.dbi-services.test
        nodeid: 3
    }
}

 

Obviously, the communication paths worked once the routes between nodes on different subnets were configured correctly. The default gateways were already configured for the eth0 interfaces, but we have to add static routes for the eth1 interfaces, as shown below:

  • LINUX01 and LINUX02 (eth0 – subnet 192.168.5.0 – default gateway 192.168.5.10 / eth1 – subnet 192.168.20.0 – static route to 192.168.30.0 subnet by using 192.168.20.10).
[mikedavem@linux01 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.10    0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.20.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.30.0    192.168.20.10   255.255.255.0   UG    0      0        0 eth1

 

  • LINUX03 (eth0 – subnet 192.168.50.0 – default gateway 192.168.50.10 / eth1 – subnet 192.168.30.0 – static route to 192.168.20.0 subnet by using 192.168.30.10).
[mikedavem@linux03 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.10   0.0.0.0         UG    0      0        0 eth0
0.0.0.0         192.168.50.10   0.0.0.0         UG    100    0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
192.168.20.0    192.168.30.10   255.255.255.0   UG    100    0        0 eth1
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 eth1
192.168.50.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0

 

Let’s have a look at the cluster status:

[root@linux01 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
 Last updated: Mon Jan 30 12:47:00 2017         Last change: Mon Jan 30 12:45:01 2017 by hacluster via crmd on linux01.dbi-services.test
 3 nodes and 3 resources configured

PCSD Status:
  linux01.dbi-services.test: Online
  linux03.dbi-services.test: Online
  linux02.dbi-services.test: Online

 

To enable the NFS share to be mounted from the new cluster node LINUX03 on the 192.168.50.0 subnet, we have to add the new configuration to the /etc/exports file and export it afterwards.

[root@nfs ~]# exportfs -rav
exporting 192.168.5.0/24:/mnt/sql_log_nfs
exporting 192.168.5.0/24:/mnt/sql_data_nfs
exporting 192.168.50.0/24:/mnt/sql_data_nfs

[root@nfs ~]# showmount -e
Export list for nfs.dbi-services.com:
/mnt/sql_log_nfs  192.168.5.0/24
/mnt/sql_data_nfs 192.168.50.0/24,192.168.5.0/24

 

Well, after checking that everything was OK from the cluster side, the next challenge was to find a way to configure the SQL Server FCI resource to be multi-subnet compliant. As stated by Microsoft, the SQL Server FCI is not as tightly coupled with the Pacemaker add-on as it is with a Windows failover cluster. Based on my Windows failover experience, I wondered if I had to go the same way with the Pacemaker cluster on Linux, so I tried to find a way to add a second VIP and include it in an OR-based dependency, but I found nothing on this front. However, Pacemaker offers location / colocation constraints and scores to control resource placement during failover events. My intention is not to go into the details of the Pacemaker documentation, but by playing with these three concepts I was able to address the need. Again, please feel free to comment if you have a better method to meet this requirement.

Let’s first add a second virtual IP address for the 192.168.50.0 subnet (virtualipdr), and then add a new colocation constraint between it and the SQL Server resource (sqllinuxfci):

[mikedavem@linux01 ~]$ sudo pcs cluster cib cfg
[mikedavem@linux01 ~]$ sudo pcs -f cfg resource create virtualipdr ocf:heartbeat:IPaddr2 ip=192.168.50.20
[mikedavem@linux01 ~]$ sudo pcs -f cfg constraint colocation add virtualipdr sqllinuxfci
[mikedavem@linux01 ~]$ sudo pcs cluster cib-push cfg
[mikedavem@linux01 ~]$ sudo pcs constraint location

 

Now, to avoid starting the virtualip or virtualipdr resources on the wrong subnet, let’s configure an “opt-out” scenario: a symmetric cluster that allows resources to run everywhere, plus location constraints to avoid running a resource on specific nodes.

[mikedavem@linux01 ~]$ sudo pcs property set symmetric-cluster=true
[mikedavem@linux01 ~]$ sudo pcs constraint location virtualipdr avoids linux01.dbi-services.test=-1
[mikedavem@linux01 ~]$ sudo pcs constraint location virtualipdr avoids linux02.dbi-services.test=-1
[mikedavem@linux01 ~]$ sudo pcs constraint location virtualip avoids linux03.dbi-services.test=-1

 

The new constraint topology is as follows

[mikedavem@linux01 ~]$ sudo pcs constraint
Location Constraints:
  Resource: sqllinuxfci
    Enabled on: linux01.dbi-services.test (score:INFINITY) (role: Started)
  Resource: virtualip
    Disabled on: linux03.dbi-services.test (score:-1)
  Resource: virtualipdr
    Disabled on: linux01.dbi-services.test (score:-1)
    Disabled on: linux02.dbi-services.test (score:-1)
Ordering Constraints:
Colocation Constraints:
  FS with sqllinuxfci (score:INFINITY)
  virtualip with sqllinuxfci (score:INFINITY)
  virtualipdr with sqllinuxfci (score:INFINITY)
Ticket Constraints:

 

Let’s have a look at the Pacemaker status. At this point all SQL Server resources are running on LINUX01, on the 192.168.5.0 subnet. We may notice that virtualipdr is in a stopped state in this case.

[mikedavem@linux01 ~]$ sudo pcs status
Cluster name: linux_cluster
Stack: corosync
Current DC: linux02.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 31 22:28:57 2017          Last change: Mon Jan 30 16:57:10 2017 by root via crm_resource on linux01.dbi-services.test

3 nodes and 4 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test linux03.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::mssql:fci):       Started linux01.dbi-services.test
 FS     (ocf::heartbeat:Filesystem):    Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test
 virtualipdr    (ocf::heartbeat:IPaddr2):       Stopped

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

Now let’s try to move the resources to the LINUX03 node, on the 192.168.50.0 subnet:

[mikedavem@linux01 ~]$ sudo pcs resource move sqllinuxfci linux03.dbi-services.test

 

The new Pacemaker status becomes:

[mikedavem@linux01 ~]$ sudo pcs status
Cluster name: linux_cluster
Stack: corosync
Current DC: linux02.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 31 22:33:21 2017          Last change: Tue Jan 31 22:32:53 2017 by root via crm_resource on linux01.dbi-services.test

3 nodes and 4 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test linux03.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::mssql:fci):       Stopped
 FS     (ocf::heartbeat:Filesystem):    Started linux03.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Stopped
 virtualipdr    (ocf::heartbeat:IPaddr2):       Started linux03.dbi-services.test

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

In turn, virtualipdr was brought online and virtualip taken offline, because we are now on the 192.168.50.0 subnet. Here we go!

OK, at this point our SQL Server failover cluster instance seems to behave as expected, but how do we deal with client connections in this case? Referring to previous Windows failover cluster experience, I can think of two scenarios using DNS.

  • We are able to use the SqlClient / Java / ODBC support for HA with the MultiSubnetFailover parameter in the connection string. In this case, good news: we may simply register both addresses for the corresponding DNS record and the magic operates by itself (similar to the RegisterAllProvidersIP property with availability groups). The client automatically reaches the first available address and everything should be fine.
  • We cannot modify or use MultiSubnetFailover, in which case we may lower the TTL value manually for the corresponding DNS record (similar to the HostRecordTTL parameter with availability groups). We will experience timeout issues on the first connection attempt, but the second one should work.
  • Other scenarios?? Please feel free to comment
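The first scenario essentially has the driver try each address behind the DNS name, with a short timeout, until one answers. A minimal, hypothetical Python sketch of that behavior (an illustration only, not actual driver code):

```python
import socket

def connect_first_available(endpoints, timeout=2.0):
    """Try each (host, port) endpoint in order and return the first open socket.

    Rough illustration of MultiSubnetFailover-style behavior: instead of
    hanging on the unreachable subnet's address until a long TCP timeout,
    the client moves on quickly to the next candidate address.
    """
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:   # refused, unreachable, or timed out
            last_error = exc
    raise ConnectionError(f"no endpoint reachable, last error: {last_error}")
```

Real drivers may also try the candidate addresses in parallel; the sequential version above is just the simplest way to picture why the MultiSubnetFailover path avoids the DNS TTL workaround of the second scenario.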

In my lab environment, SqlClient-based connections seem to work well in the aforementioned cases. I will perform further tests in the near future and update this blog with the results.

I’m looking forward to seeing other improvements and features in the next SQL Server CTPs.

Happy clustering on Linux!

The article Introducing high-availability and multi-subnet scenarios with SQL Server on Linux appeared first on Blog dbi services.

McColl’s Chooses Oracle Retail Stores Solutions and Hardware

Oracle Press Releases - Tue, 2017-01-31 11:05
Press Release
McColl’s Chooses Oracle Retail Stores Solutions and Hardware U.K. Convenience Retailer Leverages Scale and Agility

Redwood Shores, Calif.—Jan 31, 2017

Today Oracle announced that McColl’s has invested in Oracle Retail Xstore Point-of-Service and Oracle MICROS Family Workstation 6 to improve the in-store guest experience. With 1,375 stores, McColl’s is the UK’s leading neighbourhood retailer serving the convenience and newsagent sectors. As part of its growth strategy, McColl’s is investing to improve store standards and the customer experience.
 
McColl’s sees significant growth opportunity in the convenience market and is increasing its store portfolio with the acquisition of 298 convenience stores during 2017.  Approval was gained from the Competition & Markets Authority (CMA) in December 2016 to acquire 298 convenience stores from the Co-operative Group Limited, the rollout of which will be completed by August 2017. The convenience sector requires software and hardware with speed, scale and agility to support the complete customer offer across the store portfolio. 
 
“McColl’s performs over 4 million customer transactions per week through 2,700 tills and we need fast and reliable store systems to support our customers and store colleagues,” said Neil Hodge, Information Technology Director, McColl’s. “We chose Oracle Retail Xstore Point-of-Service and the Oracle MICROS Family Workstation because it is an adaptable solution capable of supporting our growth as operational requirements change.”
 
“As a long time customer of MICROS using a Torex POS solution, we are delighted to be continuing our relationship with Oracle. The Oracle MICROS Workstation 6 performance impressed both the technical teams and the store colleagues alike,” said Neil Hodge, Information Technology Director, McColl’s. “We are excited about the future and the capabilities available with Oracle Retail Xstore Point-of-Service.”
 
“We are honored to welcome McColl’s into the Oracle Retail community and are committed to their success,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “The Oracle MICROS Family Workstation 6 is engineered to work seamlessly with the Oracle Retail Xstore Point-of-Service to ensure that we deliver superior POS performance and reliability for the busy retail convenience store environment.”
 
Oracle Retail Industry Connect 2017
Join us at Oracle Industry Connect this spring. The program is designed for and delivered by retailers. On March 20-22, 2017 we will gather as a community in Orlando, FL to share best practices, talk strategy and discover new innovations for retail. Limited to retailers and paying sponsors. Register today: http://www.oracleindustryconnect.com
 
About McColl's
McColl's is a leading neighbourhood retailer in the independent managed sector running 1,375 convenience and newsagent stores. We operate 1,001 McColl's branded UK convenience stores as well as 374 newsagents branded Martin's, except in Scotland where we operate under our heritage brand, RS McColl. In addition we are also the largest operator of Post Offices in the UK.
 
Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Omni Financeiras Speeds Pace of Business with Cloud

WebCenter Team - Tue, 2017-01-31 09:55

Omni Financeira is a financial institution providing consumer credit for pre-owned vehicles, construction materials, furniture, home appliances and groceries to consumers across all income ranges. With more than one million financed contracts today, distributing business risk guarantees flexibility and profitability to partners and customers.

Omni uses Oracle Documents Cloud Service to store the customer documents used in the credit request process, managed by Oracle WebCenter Content. A customized portal allows the 10,000 credit agents to easily submit documents for credit analysis. Each year, 1.4 terabytes of documents are stored in the cloud. Oracle Documents Cloud Service provides REST APIs that enable access to documents from multiple applications, such as Omni's internal systems, and from multiple channels, including a mobile app. To date, Omni has realized an 80% savings in infrastructure costs. Document storage capacity, scalability and growth are no longer a concern.

View this video to hear Edi Nilson Piovezani, Director of Infrastructure at Omni Financeira in Brazil, speak about their content management journey to the cloud to reduce cost, drive efficiency, and create a dynamic digital experience for their credit agents.

EBS 12.2.6 OA Extensions for JDeveloper 10g Updated

Steven Chan - Tue, 2017-01-31 02:04
When you create extensions to Oracle E-Business Suite OA Framework pages, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product team.

The version of Oracle JDeveloper is specific to the Oracle E-Business Suite Applications Technology patch level, so there is a new version of Oracle JDeveloper with each new release of the Oracle E-Business Suite Applications Technology patchset.

The Oracle Applications (OA) Extensions for JDeveloper 10g for E-Business Suite Release 12.2.6 have recently been updated.  For details, see:

The same Note also lists the latest OA Extension updates for EBS 11i, 12.0, 12.1, and 12.2.

Related Articles

Categories: APPS Blogs

No listener error

Tom Kyte - Mon, 2017-01-30 22:26
Hi Tom, I want to read mail using a PL/SQL function, so I executed this: create or replace type TStrings is table of varchar2(4000); / Type created. create or replace function xx_pop3( userName varchar2, password varchar2,...
Categories: DBA Blogs

Retrieving ATM locations using the NAB API

Pas Apicella - Mon, 2017-01-30 21:21
NAB has released an API to determine, amongst other things, ATM locations based on a GEO location. It's documented here.

https://developer.nab.com.au/docs#locations-api-get-locations-by-geo-coordinates

Here we use this API, but I wanted to highlight a few things you would need to know to consume it.

1. You will need to provide the following, and this has to be calculated based on a LAT/LONG. The screenshot below shows what a GEO location with a radius of 0.5 km would look like. You will see its starting point is your current location, in this case in the Melbourne CBD.



2. The NAB API requires the following to be set, which can be obtained using a calculation as per the screenshot above:

  • swLat: South-West latitude coordinate, expressed as a decimal
  • neLat: North-East latitude coordinate, expressed as a decimal
  • neLng: North-East longitude coordinate, expressed as a decimal
  • swLng: South-West longitude coordinate, expressed as a decimal
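These four values are just the corners of a bounding box around your centre point. As a rough illustration (my own sketch, not part of the NAB API), a few lines of Python can derive them from a centre latitude/longitude and a radius in kilometres, using the approximation that one degree of latitude is about 111.32 km:

```python
import math

def bounding_box(lat, lng, radius_km):
    """Return (swLat, swLng, neLat, neLng) for a square box around a point.

    Uses the approximation that 1 degree of latitude is ~111.32 km and that
    a degree of longitude shrinks by cos(latitude). Good enough for the
    small radii (e.g. 0.5 km) used when searching for nearby ATMs.
    """
    d_lat = radius_km / 111.32
    d_lng = radius_km / (111.32 * math.cos(math.radians(lat)))
    return (lat - d_lat, lng - d_lng, lat + d_lat, lng + d_lng)

# Centre of the Melbourne CBD with a 0.5 km radius
sw_lat, sw_lng, ne_lat, ne_lng = bounding_box(-37.8154, 144.9638, 0.5)
print(sw_lat, sw_lng, ne_lat, ne_lng)
```

This flat-earth approximation breaks down near the poles and for large radii, but for a sub-kilometre search box it is well within the accuracy the API needs.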

3. The attribute locationType allows you to filter the type of location you're after. I set this to "atm" to find only ATM locations.

4. I also set the attribute fields to extended, as this gives me detailed information.

5. Once you have the coordinates, here is an example of getting detailed information about ATM locations using them. In this example curl is good enough to illustrate that:

pasapicella@pas-macbook:~/pivotal$ curl -H "x-nab-key: NABAPI-KEY" "https://api.developer.nab.com.au/v2/locations?locationType=atm&searchCriteria=geo&swLat=-37.81851471355399&swLng=144.95235719310358&neLat=-37.812155549503025&neLng=144.96040686020137&fields=extended&v=1" | jq -r
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10371  100 10371    0     0   2016      0  0:00:05  0:00:05 --:--:--  2871
{
  "locationSearchResponse": {
    "totalRecords": 16,
    "viewport": {
      "swLat": -37.81586424048582,
      "swLng": 144.9589117502319,
      "neLat": -37.81109231077813,
      "neLng": 144.96758064976802
    },
    "locations": [
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Melbourne Central",
          "address4": "Lower Ground floor",
          "id": 5058548,
          "key": "atm_3B46",
          "description": "Melbourne Central",
          "address1": "300 Elizabeth Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.812031,
          "longitude": 144.9621768,
          "hours": "Mon-Thu 10.00am-06.00pm, Fri 10.00am-09.00pm, Sat 10.00am-06.00pm, Sun 10.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Melbourne Central",
          "address4": "Ground Floor",
          "address5": "Near La Trobe St Entrance under escalator",
          "id": 5058552,
          "key": "atm_3B56",
          "description": "Melbourne Central",
          "address1": "300 Elizabeth Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.812031,
          "longitude": 144.9621768,
          "hours": "Mon-Thu 10.00am-06.00pm, Fri 10.00am-09.00pm, Sat 10.00am-06.00pm, Sun 10.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Queen Victoria Market",
          "address5": "Outside the market facing the street",
          "id": 5058555,
          "key": "atm_3B61",
          "description": "Queen Victoria Market",
          "address1": "Queen Street",
          "address2": "Corner Therry Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8130009,
          "longitude": 144.9597905,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Target Centre",
          "id": 5058577,
          "key": "atm_3CC7",
          "description": "Target Centre",
          "address1": "236 Bourke Street Mall",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8132227,
          "longitude": 144.9665518,
          "hours": "Mon-Fri 09.00am-05.00pm, Sat-Sun 10.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Queen Victoria Centre",
          "id": 5058614,
          "key": "atm_3F07",
          "description": "Queen Victoria",
          "address1": "228-234 Lonsdale Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8122729,
          "longitude": 144.9622383,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "University of Melbourne",
          "address5": "Kenneth Myer Building",
          "id": 5058653,
          "key": "atm_3G28",
          "description": "KMB Foyer",
          "address1": "30 Royal Parade",
          "suburb": "Parkville",
          "state": "VIC",
          "postcode": "3052",
          "latitude": -37.8149256,
          "longitude": 144.9643156,
          "hours": "Mon-Fri 07.00am-07.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Midtown Plaza",
          "address4": "Shop 8",
          "id": 5058783,
          "key": "atm_3S02",
          "description": "Midtown Plaza",
          "address1": "186 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8131315,
          "longitude": 144.9654723,
          "hours": "Mon-Fri 09.30am-05.00pm, Sat 10.00am-02.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Midtown Plaza",
          "address4": "Shop 8",
          "id": 5058784,
          "key": "atm_3S03",
          "description": "Midtown Plaza",
          "address1": "186 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8131315,
          "longitude": 144.9654723,
          "hours": "Mon-Fri 09.30am-05.00pm, Sat 10.00am-02.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Melbourne NAB House",
          "id": 5058814,
          "key": "atm_3S38",
          "description": "National Bank House",
          "address1": "500 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8154128,
          "longitude": 144.9590017
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Melbourne NAB House",
          "id": 5058815,
          "key": "atm_3S39",
          "description": "National Bank House",
          "address1": "500 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8154128,
          "longitude": 144.9590017,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Melbourne NAB House",
          "id": 5058837,
          "key": "atm_3S67",
          "description": "National Bank House",
          "address1": "500 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8154128,
          "longitude": 144.9590017,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "SmartATM",
          "address3": "Midtown Plaza",
          "address4": "Shop 8",
          "id": 5058842,
          "key": "atm_3S72",
          "description": "Midtown Plaza",
          "address1": "186 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8131315,
          "longitude": 144.9654723,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "NAB ATM",
          "address3": "Midtown Plaza",
          "id": 5059024,
          "key": "atm_4G04",
          "description": "Midtown Plaza",
          "address1": "194 Swanston Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.8130332,
          "longitude": 144.9654279,
          "hours": "Mon-Tue 09.30am-05.30pm, Wed-Fri 09.30am-09.00pm, Sat 09.30am-05.30pm, Sun 11.00am-05.00pm"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": true,
          "isDisabilityApproved": true,
          "isAudio": false,
          "source": "BOQ ATM",
          "id": 5059452,
          "key": "atm_9036021I",
          "description": "455 Bourke Street",
          "address1": "455 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.81518,
          "longitude": 144.960583
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Branch",
          "isDeposit": false,
          "isDisabilityApproved": false,
          "isAudio": false,
          "source": "rediATM",
          "id": 5060659,
          "key": "atm_C11243",
          "description": "Bourke Street",
          "address1": "460 Bourke Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.81494,
          "longitude": 144.96002,
          "hours": "24/7"
        }
      },
      {
        "apiStructType": "atm",
        "atm": {
          "location": "Offsite",
          "isDeposit": false,
          "isDisabilityApproved": false,
          "isAudio": false,
          "source": "rediATM",
          "address3": "Emporium Melbourne",
          "address5": "ATM 02",
          "id": 5060908,
          "key": "atm_C11662",
          "description": "Emporium Melbourne",
          "address1": "269-321 Lonsdale Street",
          "suburb": "Melbourne",
          "state": "VIC",
          "postcode": "3000",
          "latitude": -37.811932,
          "longitude": 144.963648,
          "hours": "Mon-Wed 10.00am-07.00pm, Thu-Fri 10.00am-09.00pm, Sat-Sun 10.00am-07.00pm"
        }
      }
    ]
  },
  "status": {
    "code": "API-200",
    "message": "Success"
  }
}
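Beyond what jq offers, the response is plain JSON and easy to post-process. As a hedged sketch (my own example, with a trimmed-down stand-in for the payload above), here is how you might pull out just the deposit-capable ATMs that are open 24/7:

```python
# A trimmed-down stand-in for the locationSearchResponse payload shown above;
# in practice you would load the full curl output, e.g. with json.load(open(...)).
response = {
    "locationSearchResponse": {
        "locations": [
            {"apiStructType": "atm",
             "atm": {"description": "Queen Victoria", "isDeposit": True,
                     "hours": "24/7"}},
            {"apiStructType": "atm",
             "atm": {"description": "Target Centre", "isDeposit": False,
                     "hours": "Mon-Fri 09.00am-05.00pm"}},
        ]
    }
}

def always_open_deposit_atms(payload):
    """Yield descriptions of ATMs that take deposits and are open 24/7."""
    for loc in payload["locationSearchResponse"]["locations"]:
        atm = loc.get("atm", {})
        if atm.get("isDeposit") and atm.get("hours") == "24/7":
            yield atm["description"]

print(list(always_open_deposit_atms(response)))  # ['Queen Victoria']
```

Note that some entries in the real response (e.g. the 455 Bourke Street ATM) have no "hours" key at all, which is why the sketch uses .get() rather than direct indexing.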
Categories: Fusion Middleware

New Product Launch: Oracle Database Programming Interface for C (ODPI-C)

Christopher Jones - Mon, 2017-01-30 20:36

Today Oracle released a great new GitHub project - Oracle Database Programming Interface for C. It sits on top of OCI and offers an alternative programming experience.

ODPI-C is a C library that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications.

ODPI-C Goal

ODPI-C's goal is to expose common OCI functionality in a readily consumable way to the C or C++ developer. OCI's API is extremely flexible and is highly efficient. It gives a lot of fine-grained control to the developer and has a very wide range of use cases. ODPI-C is also flexible but is aimed primarily at language driver creators. These creators are programming within the confines of a scripting language's type system and semantics. The languages often expose simplified data access to users through cross-platform, 'common-denominator' APIs. Therefore it makes sense for ODPI-C to provide easy to use functionality for common data access, while still allowing the power of Oracle Database to be used.

Of course ODPI-C isn't just restricted to driver usage. If ODPI-C has the functionality you need for accessing Oracle Database, you can add it to your own custom projects.

ODPI-C is a refactored and greatly enhanced version of the "DPI" data access layer used in our very successful node-oracledb driver.

Releasing ODPI-C as a new, standalone project means its code can be consumed and reused more easily. For database drivers it allows Oracle features to be exposed more rapidly and in a consistent way. This will allow greater cross-language driver feature compatibility, which is always useful in today's multi-language world.

ODPI-C Features

Oracle's Anthony Tuininga has been leading the ODPI-C effort, making full use of his extensive driver knowledge as creator and maintainer of the extremely popular, and full featured, Python cx_Oracle driver.

The ODPI-C feature list currently includes all the normal calls you'd expect to manage connections and to execute SQL and PL/SQL efficiently. It also has such gems as SQL and PL/SQL object support, scrollable cursors, Advanced Queuing, and Continuous Query Notification. The full list in this initial Beta release, in no particular order, is:

  • 11.2, 12.1 and 12.2 Oracle Client support

  • SQL and PL/SQL execution

  • REF cursors

  • Large objects (CLOB, NCLOB, BLOB, BFILE)

  • Timestamps (naive, with time zone, with local time zone)

  • JSON objects

  • PL/SQL arrays (index-by integer tables)

  • Objects (types in SQL, records in PL/SQL)

  • Array fetch

  • Array bind/execute

  • Array DML Row Counts

  • Standalone connections

  • Connections via Session pools (homogeneous and non-homogeneous)

  • Session Tagging in session pools

  • Database Resident Connection Pooling (DRCP)

  • Connection Validation (when acquired from session pool or DRCP)

  • Proxy authentication

  • External authentication

  • Statement caching (with tagging)

  • Scrollable cursors

  • DML RETURNING clause

  • Privileged connection support (SYSDBA, SYSOPER, SYSASM, PRELIM_AUTH)

  • Database Startup/Shutdown

  • End-to-end tracing, mid-tier authentication and auditing (action, module, client identifier, client info, database operation)

  • Batch Errors

  • Query Result Caching

  • Application Continuity (with some limitations)

  • Query Metadata

  • Password Change

  • OCI Client Version and Server Version

  • Implicit Result Sets

  • Continuous Query Notification

  • Advanced Queuing

  • Edition Based Redefinition

  • Two Phase Commit

In case you want to access other OCI calls without having to modify ODPI-C code, there is a call to get the underlying OCI service context handle.

ODPI-C applications can make full advantage of OCI features which don't require API access, such as the oraaccess.xml configuration for enabling statement cache auto-tuning. Similarly, Oracle Database features controlled by SQL and PL/SQL, such as partitioning, can be used in applications, as you would expect.

Communication to the database is handled by Oracle Net, so features such as encrypted communication and LDAP can be configured.

ODPI-C's API makes memory and resource management simpler, particularly for 'binding' and 'defining'. A reference counting mechanism adds resiliency by stopping applications destroying in-use OCI resources. To offer an alternative programming experience from OCI, the ODPI-C API uses a multiple getter/setter model for handling attributes.

Using ODPI-C

ODPI-C is released as source-code on GitHub. The code makes OCI calls and so requires an Oracle client, which must be installed separately. Version 11.2 or later of the client is required. This allows applications to connect to Oracle Database 9.2 or later. The free Oracle Instant Client is the standard way to obtain standalone Oracle client libraries and header files.

The project is licensed under the Apache 2.0 and/or the Oracle UPL licenses, so the code is readily available for adoption into your own projects.

ODPI-C code can be included in your C or C++ applications and compiled like any OCI application. Or, if you want to use ODPI-C as a shared library, a sample Makefile for building on Linux, OS X and Windows is provided.

Support for ODPI-C is via logging GitHub Issues only - but this does have the advantage of giving you direct access to ODPI-C developers. Also remember the underlying OCI libraries (which do all the hard work) are extremely widely used, tested and supported.

If you want to do more than view the code, you can build ODPI-C as a library using the sample Makefile, and then build the current standalone sample programs. These show a number of ODPI-C features.

ODPI-C Plans

The ODPI-C release today is 2.0.0-beta.1, indicating we're happy with the general design but want to get your wider review. We also need to complete some testing and add some polish.

We aim to stabilize ODPI-C relatively quickly and then continue adding functionality, such as support for the new Oracle Database 12.2 Sharding feature.

Future Node.js node-oracledb and Python cx_Oracle drivers will use ODPI-C. There is active work on these updates.

I know Kubo Takehiro, who does a fantastic job maintaining the ruby-oci8 driver, has been keen to see what ODPI-C can do for his driver. I look forward to seeing how he uses it.

I think you'll be pleased with the direction and plans for scripting languages in 2017.

We really welcome your feedback on this big step forward.

ODPI-C References

Home page is: https://oracle.github.io/odpi/

Code is at https://github.com/oracle/odpi

Documentation is at https://oracle.github.io/odpi/doc/index.html

Issues and comments can be reported at https://github.com/oracle/odpi/issues

Partner Webcast – Docker Agility in Cloud: Introducing Oracle Container Cloud Service

In modern IT, "containers" is not simply a buzzword; it is a proven way of improving developer productivity and minimizing deployment costs. It allows you to quickly create ready-to-run packaged...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator