Feed aggregator

Oracle Utilities Unveils Network Management Platform of the Future

Oracle Press Releases - Wed, 2017-02-01 10:00
Press Release
Oracle Utilities Unveils Network Management Platform of the Future
Oracle Utilities Network Management System 2.3 supports transition to a customer-centric grid

DistribuTECH, San Diego, Calif.—Feb 1, 2017

Oracle today introduced Oracle Utilities Network Management System 2.3, an expanded network management platform that enables utilities to make the transition to a customer-centric grid by giving operators real-time visibility across all grid and pipeline assets and eliminating the complexities of siloed data and applications.

Utilities are facing major shifts in their operations caused by a rapid increase in distributed energy resources (DERs) such as solar PVs, storage systems, electric vehicles and connected home devices, as well as continuing issues with aging water and gas pipelines.

These new energy sources and increased monitoring of aging pipelines are creating more data silos than ever, making a seamless transition to a customer-centric grid extremely challenging.

Oracle Utilities Network Management System 2.3 now enables utilities to aggregate data from various network assets into a single interface with more detailed visibility into network operations. For the first time, utilities have network asset data—two-way distributed energy resource (DER) data, advanced metering infrastructure (AMI) data, distribution management system (DMS) data and outage management system (OMS) data—all in a single interface for more meaningful analysis and decision making.

The enhanced system gives utility operators the visibility they need to respond to changing network conditions—whether that’s a flurry of new electric vehicles in a neighborhood requiring transformers to be resized, or the need to respond faster during a large storm outage or gas leak.

New features include a modern user interface, consistent across all applications. The mobile interface enables grid operators to communicate with field crews for faster issue resolution. This version also includes new safety features that alert operators about unsafe operations on the distribution lines due to DER. This release supports:

  • Complete visibility of the distribution network with AMI, distribution sensors, GIS, SCADA, weather and other edge devices in a single platform.
  • A scalable and future-proof platform that doesn’t lock utilities into a monolithic and stand-alone solution.
  • Opportunities to improve reliability scores such as CAIDI and SAIDI even as the network evolves.
  • Decreased restoration expenses by enabling field crews and operators to interact in real time and resolve issues via a mobile application.
  • Support for gas, water, and electric utilities.

"The most successful utilities will embrace the emergence of a customer-centric network, and have a strategic vision of how technology can support it," said Rodger Smith, senior Vice President and general manager, Oracle Utilities. "In the first transitional phase, utilities built billions of dollars of infrastructure over several decades. Then with the advent of sensors and SCADA, we got better data and point-in-time visibility. Now we are in an age of predictive analytics, multiple new data streams from edge devices, and more complex real-time switching. We view our NMS solution as a platform that manages all aspects of the grid—from the customer’s living room to the utility back office.”

New Features in Oracle Utilities Network Management System
  • Enhances damage assessment and storm management features such as estimated restoration times for accurate communication to the customer.
  • Improves grid operators’ ability to prepare systems for outages with newly available data from GIS, weather, AMI, and SCADA systems.
  • Gives network managers advanced distribution management features that improve fault isolation, load shedding, and load profiling capabilities.
Contact Info
Valerie Beaudett
Oracle Corporation
+1 650.400.7833
valerie.beaudett@oracle.com
Christina McDonald
Burson-Marsteller
+1 212.614.4221
Christina.McDonald@bm.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Christina McDonald

  • +1 212.614.4221

Running SLOB on Exadata Express Cloud Service

Yann Neuhaus - Wed, 2017-02-01 07:06

The Exadata Express Cloud Service is a managed PDBaaS: Oracle is the system admin and the CDB database administrator; you are the PDB administrator. You connect with a local user, PDB_ADMIN, which has no SYSDBA privilege but is granted the PDB_DBA role, which carries nearly all DBA rights, with some features disabled by a lockdown profile.
I have no worries about the performance of this service: it is an Exadata X5 half rack, bare metal (my guess), with the CDB running as RAC One Node on 2 compute nodes and accessing the 7 storage cells. Smart Scan is disabled, and given the small size of the PDB and the low usage of the CDB, I/O is fast (most single-block latency is between 128 and 256µs, coming from the cell flash cache).
But I like to run SLOB on new platforms, and I wanted to see if I could run it here, without the SYSDBA role, and connecting to a CDB.

TNS_ADMIN

I’ll connect remotely because SLOB will be running on my computer (I have no access to the host for this managed PDBaaS service). In order to connect remotely, through SQL*Net over SSL, you have to download the client credentials kit.

It is a zip file that contains the encryption wallet and the sqlnet.ora and tnsnames.ora to use. I unzip it into a directory (/var/tmp/client_credentials in this example) and I use it by setting the TNS_ADMIN environment variable to this directory.
Thus I change the wallet location in the sqlnet.ora:
$ cat sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file)
(METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))

and I use the provided tnsnames.ora which defines the ‘dbaccess’ service:
$ cat tnsnames.ora
dbaccess = (description=
(address=(protocol=tcps)(port=1522)(host=dbaccess.us2.oraclecloudapps.com))
(connect_data=(service_name=eptdojjm1ag.usdc2.oraclecloud.com))
(security=(ssl_server_cert_dn="CN=dbaccess.us2.oraclecloudapps.com,O=Oracle Corporation,L=Redwood Shores,ST=California,C=US"))
)
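
With the kit unzipped, here is a minimal sketch of how the client is pointed at it and the connection tested (the password is the one set for PDB_ADMIN; output omitted):

$ export TNS_ADMIN=/var/tmp/client_credentials
$ sqlplus pdb_admin@dbaccess

SQL> show con_name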

Changes in slob.conf

The default slob.conf connects with a bequeath connection. Here I need to connect through the network service described above. I uncommented:

#ADMIN_SQLNET_SERVICE=slob
#SQLNET_SERVICE_BASE=slob
#SYSDBA_PASSWD=change_on_install

and replaced it with:

ADMIN_SQLNET_SERVICE=dbaccess
SQLNET_SERVICE_BASE=dbaccess
SYSDBA_PASSWD=xxxxxxxxx

Where xxxxxxxxx is the PDB_ADMIN password that I set up in the service console.

Of course we need an Oracle Client. Download instant client if you don’t have one.

Changes in setup.sh

SLOB creates the users and grants them RESOURCE and DBA.
In Exadata Express Cloud Service, I don’t have the DBA role, but CREATE SESSION and the PDB_DBA role are sufficient. In addition to that, I need to give a quota on the IOPS tablespace because PDB_ADMIN does not have UNLIMITED TABLESPACE. Finally, here is the create user part after my modifications:


msg NOTIFY "Creating user: $user "
 
sqlplus "$constring" <<EOF
WHENEVER SQLERROR EXIT 2;
SET TERMOUT ON
SET ECHO ON
PROMPT User grants for $user
WHENEVER SQLERROR EXIT 2;
GRANT CREATE SESSION TO $user IDENTIFIED BY $user;
GRANT PDB_DBA TO $user;
ALTER USER $user DEFAULT TABLESPACE $tablespace ;
ALTER USER $user QUOTA UNLIMITED ON $tablespace ;
EXIT;
EOF

The setup.sh script starts by dropping the SLOB schemas up to MAX_SLOB_SCHEMAS, just in case they are there. The default is 4096, but that’s too much for me because of the latency to the server in the US (it is planned to have Exadata Express Cloud Service in Europe in the future), so I reduced it to 8 (as I have only 1 CPU in this service, there’s no need for more users):

export MAX_SLOB_SCHEMAS=8

And the last modification in setup.sh was to disable the multitenant check, because PDBaaS is by definition a multitenant database:


if ( ! check_mto "$ADMIN_CONNECT_STRING")
then
msg FATAL ""
msg FATAL "This version of SLOB does not support Oracle Multitenant Option"
msg FATAL
#exit 1
fi

I’ve not seen any problem. Oracle designed multitenant so that everything you used to do on a database is possible to do on a PDB.

Changes in runit.sh

First, in order to connect as PDB_ADMIN instead of SYS as SYSDBA, you have to change the admin_connect_string:

export admin_connect_string="pdb_admin/${SYSDBA_PASSWD}@${ADMIN_SQLNET_SERVICE}"

Second, SLOB calls UTL_FILE, which is not allowed in Exadata Express Cloud Service. Because I’ve seen in the code that this is bypassed when the ‘nt’ argument is 1, I’ve set it to this value in runit.sh:

arg9=1 #was: $nt

Finally, because we can’t create manual AWR snapshots in Exadata Express Cloud Service, I replaced the EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT with the creation of a small view to get the basic statistics:

create or replace view FPAV as
select 'STAT' class,name,value from V$SYSSTAT
union all
select 'WAIT' class,event,total_waits from V$SYSTEM_EVENT
union all
select 'TIME' class,stat_name,value from V$SYS_TIME_MODEL
union all
select 'V$OS' class,stat_name,value from V$OSSTAT
union all
select 'MICRO' class,event||' - '||wait_time_format,wait_count from V$EVENT_HISTOGRAM_MICRO
;
create table FPAT as select sysdate time,FPAV.* from FPAV where 1=0;
insert into FPAT select sysdate time,FPAV.* from FPAV;
commit;

and I replaced the part that runs the AWR report with a simple query:


($admin_conn <<EOF > awr.txt
column per_sec format 999G999G999G999G999
spool awr.txt append
select to_char(btim,'hh24:mi:ss') btime,to_char(time,'hh24:mi:ss')etime,seconds,class,name,round(delta/seconds) per_second , decode(class||' '||name
,'STAT session logical reads','LIO/s'
,'STAT physical read total IO requests','PIO/s'
,'TIME DB CPU','CPU us'
,'TIME DB time','DB time us'
,'STAT physical read total bytes','Read B/s'
,'STAT physical write total bytes','Write B/s'
,'V$OS BUSY_TIME','OS busy cs'
,'WAIT resmgr:cpu quantum','resmgr'
) main from (
select
FPAT.*
,max(time)over(order by time rows between unbounded preceding and unbounded following) last
,value-lag(value)over(partition by class,name order by time) delta
,(time-lag(time)over(partition by class,name order by time))*24*60*60 seconds
,lag(time)over(partition by class,name order by time) btim
from FPAT
) where time=last and delta>0 order by main,per_second desc;
spool off
exit
EOF
)

Statspack is not an alternative here because it requires some views and grants from SYS, which you cannot create on Exadata Express Cloud Service. I really don’t understand why we cannot use AWR locally, because in 12.2 you can have AWR snapshots at PDB level. But any use of dbms_workload_repository is disabled by the lockdown profile.
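
To make the workflow explicit: the insert into FPAT plays the role of DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT, taken once before and once after the run, and the query above only reports the deltas of the last interval (where time=last). A sketch of the sequence, using the objects defined above:

insert into FPAT select sysdate time,FPAV.* from FPAV; -- snapshot before the run
commit;
-- ... the SLOB sessions run here ...
insert into FPAT select sysdate time,FPAV.* from FPAV; -- snapshot after the run
commit;
-- the spooled query then shows per-second rates between these two snapshots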

Result

So this is the kind of output I get when running 2 SLOB sessions for 10 minutes:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
11:16:09 11:26:15 606 TIME DB CPU 1968151 CPU us
11:16:09 11:26:15 606 TIME DB time 1981479 DB time us
11:16:09 11:26:15 606 STAT session logical reads 1228557 LIO/s
11:16:09 11:26:15 606 STAT physical read total IO requests 0 PIO/s
11:16:09 11:26:15 606 STAT physical read total bytes 189 Read B/s
11:16:09 11:26:15 606 STAT physical write total bytes 189 Write B/s
11:16:09 11:26:15 606 WAIT resmgr:cpu quantum 0 resmgr
11:16:09 11:26:15 606 STAT logical read bytes from cache 1.0064E+10
11:16:09 11:26:15 606 TIME sql execute elapsed time 1980376
11:16:09 11:26:15 606 V$OS IDLE_TIME 6787
11:16:09 11:26:15 606 V$OS BUSY_TIME 391
11:16:09 11:26:15 606 V$OS USER_TIME 311

1.96 CPU seconds per second and 1.98 seconds of DB time per second: those are my 2 cached SLOB sessions.
About 1.2 million LIO per second. This is what I expect from this test on Exadata X5 (PCT_UPDATE=0 and WORK_UNIT=64).
Resource Manager is ready to kick in, as CPU_COUNT is set to 2 in my PDB (set by the lockdown profile; you cannot change it).
The overall system hosting the CDB has been mostly idle: 3.91 / (67.87 + 3.91) is about 5.4% busy. I’m responsible for 2 sessions over 72 threads (36 cores): 2/72 = 2.78% CPU usage.

The Exadata Express Cloud Service allocates 1 core (2 threads), but because the system is nearly idle, I didn’t use 2 threads of the same core. Let’s compare with a single-session run:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
12:15:49 12:25:55 606 TIME DB CPU 989008 CPU us
12:15:49 12:25:55 606 TIME DB time 991198 DB time us
12:15:49 12:25:55 606 STAT session logical reads 668368 LIO/s
12:15:49 12:25:55 606 STAT physical read total IO requests 0 PIO/s
12:15:49 12:25:55 606 STAT physical read total bytes 203 Read B/s
12:15:49 12:25:55 606 STAT physical write total bytes 189 Write B/s
12:15:49 12:25:55 606 STAT logical read bytes from cache 5475272359
12:15:49 12:25:55 606 TIME sql execute elapsed time 990290
12:15:49 12:25:55 606 STAT consistent gets 668368

Comparing logical reads, 1,228,557/668,368 = 1.8, so I was probably not running the two sessions on the same core.

So what?

SLOB (https://kevinclosson.net/slob/) doesn’t need to be installed on the database server because it is mostly PL/SQL, so there are no round trips. And this is a way to run it when you have access only to a PDB, without SYSDBA privileges.

 

The article Running SLOB on Exadata Express Cloud Service appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 9 – Temporary tables

Yann Neuhaus - Wed, 2017-02-01 06:47

It has been quite a while since the last posts in this series, so here is what we looked at until now:

In this post we’ll look at temporary tables. Temporary tables hold data for the scope of a session or a transaction and the data is lost afterwards, so what are they good for, then? Usually you will use them to store intermediate results that you need for further processing, without the need to store these permanently. A typical use case is a business report that requires some intermediate aggregations. Can we do this in PostgreSQL? Yes, of course. Let’s go.

When we look at the create table syntax the keywords TEMP or TEMPORARY are there:

(postgres@[local]:5439) [postgres] > \h create table
Command:     CREATE TABLE
Description: define a new table
Syntax:
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
  { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    | table_constraint
    | LIKE source_table [ like_option ... ] }
    [, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

Then we should be able to create a temporary table, shouldn’t we?

(postgres@[local]:5439) [postgres] > create temporary table tmp1 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d tmp1
           Table "pg_temp_2.tmp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

Easy. Have you looked at the schema it got created in? What is pg_temp_2? I do not have a schema named pg_temp_2:

(postgres@[local]:5439) [postgres] > \dn
  List of schemas
  Name  |  Owner   
--------+----------
 public | postgres
(1 row)

Temporary tables get created in a special schema. You can see it if you query pg_namespace directly:

(postgres@[local]:5439) [postgres] > select nspname from pg_namespace where nspname = 'pg_temp_2';
  nspname  
-----------
 pg_temp_2
(1 row)
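
As a side note, if you want to confirm which of these special schemas your own session uses, pg_my_temp_schema() returns its OID; casting it to regnamespace (available since PostgreSQL 9.5) shows the name, which for the session above would be pg_temp_2:

(postgres@[local]:5439) [postgres] > select pg_my_temp_schema()::regnamespace;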

Important to know: When you exit from your session the temporary table is gone:

(postgres@[local]:5439) [postgres] > \d tmp1
           Table "pg_temp_2.tmp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

(postgres@[local]:5439) [postgres] > \q
postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > \d tmp1
Did not find any relation named "tmp1".

What about the visibility of the data in a temporary table? It depends on how you create the table. When you want the data to be visible for the entire lifetime of the session, you do it like above:

(postgres@[local]:5439) [postgres] > create temporary table tmp1 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into tmp1 values (1,'1');
INSERT 0 1
(postgres@[local]:5439) [postgres] > select * from tmp1;
 a | b 
---+---
 1 | 1
(1 row)

When you want the data to be visible only for the duration of the current transaction you do it like this:

(postgres@[local]:5439) [postgres] > create temporary table tmp2 ( a int, b varchar(10)) on commit delete rows;
CREATE TABLE
(postgres@[local]:5439) [postgres] > begin;
BEGIN
(postgres@[local]:5439) [postgres] > insert into tmp2 values (1,'1');
INSERT 0 1
(postgres@[local]:5439) [postgres] > end;
COMMIT
(postgres@[local]:5439) [postgres] > select count(*) from tmp2;
 count 
-------
     0
(1 row)
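
The syntax shown earlier also offers a third option, ON COMMIT DROP, where the table itself only lives until the end of the transaction that created it. A quick sketch (tmp2b is a new, throwaway name):

(postgres@[local]:5439) [postgres] > begin;
BEGIN
(postgres@[local]:5439) [postgres] > create temporary table tmp2b ( a int ) on commit drop;
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into tmp2b values (1);
INSERT 0 1
(postgres@[local]:5439) [postgres] > commit;
COMMIT
(postgres@[local]:5439) [postgres] > \d tmp2b
Did not find any relation named "tmp2b".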

Another important point to know: when you create a temporary table with the same name as an existing table, you will not see the existing table anymore unless you prefix the normal table with the schema name:

(postgres@[local]:5439) [postgres] > create table tmp3 ( a int );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create temporary table tmp3 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d tmp3
           Table "pg_temp_2.tmp3"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

(postgres@[local]:5439) [postgres] > \d public.tmp3
     Table "public.tmp3"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

Be careful with this. What about performance? Is a temporary table faster to insert into than a normal table?

(postgres@[local]:5439) [postgres] > create table tmp1 ( a int, b varchar(10) );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create temporary table tmp2 ( a int, b varchar(10) );
CREATE TABLE

This is the script:

(postgres@[local]:5439) [postgres] > \! cat a.sql
truncate tmp1;
truncate tmp2;
with aa as
( select generate_series ( 1, 1000000 ) a  )
insert into tmp1 (a,b)
       select a, a
         from aa;

with aa as
( select generate_series ( 1, 1000000 ) a  )
insert into tmp2 (a,b)
       select a, a
         from aa;

This is the result:

(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 19.626 ms
TRUNCATE TABLE
Time: 35.665 ms
INSERT 0 1000000
Time: 7777.034 ms
INSERT 0 1000000
Time: 3327.984 ms
(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 46.221 ms
TRUNCATE TABLE
Time: 33.286 ms
INSERT 0 1000000
Time: 7425.957 ms
INSERT 0 1000000
Time: 3241.140 ms
(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 44.365 ms
TRUNCATE TABLE
Time: 35.992 ms
INSERT 0 1000000
Time: 8732.566 ms
INSERT 0 1000000
Time: 3888.795 ms

The temporary table is almost twice as fast to write to as the normal table, mostly because temporary tables are not WAL-logged. Of course you can create indexes on temporary tables as well:

(postgres@[local]:5439) [postgres] > create temporary table tmp4 ( a int, b varchar );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create index tmpi1 on tmp4(a);
CREATE INDEX

.. and of course the index is gone as well once you end your session.

Another point to remember: temporary tables are not visible to the autovacuum daemon. You might think this is not a problem, as they disappear anyway, but remember that autovacuum is also responsible for gathering the statistics (kicking off analyze).
Depending on what you want to do with the data you loaded into the temporary table, it might be wise to issue a manual analyze on it:

(postgres@[local]:5439) [postgres] > analyze verbose tmp4;

I hope I do not need to say that temporary tables are not crash safe :) Have fun …

 

The article Can I do it with PostgreSQL? – 9 – Temporary tables appeared first on Blog dbi services.

ORA-01017 invalid username/password; logon denied Solution

Complete IT Professional - Wed, 2017-02-01 05:00
Have you gotten an ORA-01017 error in your Oracle database? Learn what caused it and how you can resolve it in this article. ORA-01017 Cause So you’re trying to connect to a database, or run a query across a database link, and you get this message: ORA-01017: invalid username/password; logon denied There are a few […]
Categories: Development

Amazon AWS instances and Oracle database performance

Yann Neuhaus - Wed, 2017-02-01 03:19

When you run Oracle Database on Amazon AWS, you Bring Your Own Licenses depending on the number of virtual cores (which is the number of cores allocated to your vCPUs). Behind the instance types, you have different processors and hyper-threading. So, when choosing which instance type to run, you want to know which processor offers the best performance for your Oracle workload. Here is an example comparing logical reads on T2, M4, R4 and C4 instances.

My comparison is done by running cached SLOB (https://kevinclosson.net/slob/) to measure the maximum number of logical reads per second when running the same workload on the different instance types.
I’ve compared what you can get with 2 Oracle Database processor licences, which cover 2 cores (no core factor on AWS): that means 2 vCPUs for T2, which is not hyper-threaded, and 4 vCPUs for the others.

T2.large: 2vCPU, 8GB RAM, monthly cost about 100$

I was on Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz

With one session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.37
DB CPU(s): 1.0 13.0 0.00 5.34
Logical read (blocks): 747,004.5 9,760,555.7

With 2 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 11.12
DB CPU(s): 2.0 27.1 0.00 11.04
Logical read (blocks): 1,398,124.7 19,111,284.0

T2 is not hyper-threaded, which is why we double the LIOPS with two sessions. So with 2 Oracle licences on T2 we get 1.4 million LIO/s.

M4.xlarge: 4vCPU, 16GB RAM, monthly cost about 180$

M4 is the latest General Purpose instance in EC2. It is hyper-threaded so with 2 Oracle processor licences we can use 4 vCPU.
Here I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, 2 cores with 2 threads each.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.46
DB CPU(s): 1.0 13.1 0.00 5.46
Logical read (blocks): 874,326.7 11,420,189.2

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.24
DB CPU(s): 2.0 27.2 0.00 9.22
Logical read (blocks): 1,540,116.9 21,047,307.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 12.33
DB CPU(s): 3.0 40.8 0.00 12.30
Logical read (blocks): 1,645,128.2 22,469,983.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 14.46
DB CPU(s): 4.0 54.3 0.00 14.39
Logical read (blocks): 1,779,361.3 24,326,538.0

Those CPUs are faster than the T2 ones. With a single session, we can do 17% more LIOPS. And running on all 4 threads, we can reach 1.8 million LIOPS, which is 27% more than T2 for the same Oracle licences.

R4.xlarge: 4vCPU, 30.5GB RAM, monthly cost about 200$

R4 is the memory-intensive instance. I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz so I expect about the same performance as M4.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 6.01
DB CPU(s): 1.0 13.7 0.00 6.01
Logical read (blocks): 864,113.9 11,798,650.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.38
DB CPU(s): 2.0 27.2 0.00 9.36
Logical read (blocks): 1,546,138.8 21,115,125.5

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 14.07
DB CPU(s): 3.0 40.9 0.00 14.05
Logical read (blocks): 1,686,595.4 23,033,987.3

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 15.00
DB CPU(s): 4.0 54.3 0.00 14.93
Logical read (blocks): 1,837,289.9 25,114,082.1

This one looks a little faster. It is the same CPU but cached SLOB does not test only CPU frequency but also memory access. R4 instances have DDR4 memory.

C4.xlarge: 4vCPU, 7.5GB RAM, monthly cost about 170$

For my last test I chose the compute-optimized C4 with Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 6.83
DB CPU(s): 1.0 13.7 0.00 6.83
Logical read (blocks): 923,185.0 12,606,636.8

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.38
DB CPU(s): 2.0 27.2 0.00 9.36
Logical read (blocks): 1,632,424.3 22,296,021.5

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 39.2 0.00 13.64
DB CPU(s): 3.0 39.1 0.00 13.61
Logical read (blocks): 1,744,709.5 22,793,491.7

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 15.79
DB CPU(s): 4.0 54.3 0.00 15.71
Logical read (blocks): 1,857,692.6 25,396,599.8

According to https://aws.amazon.com/ec2/instance-types/, C4 instances have the lowest price/compute performance in EC2. The frequency is 20% higher than R4, but we get similar LIOPS. CPU frequency is not the only parameter for a database workload.
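
To put the measurements side by side (LIO/s rounded; approximate monthly costs as quoted in the section headers above; the maximum is with 2 sessions on T2 and 4 sessions on the others):

Instance    vCPUs   ~$/month   LIO/s (1 session)   LIO/s (max sessions)
T2.large      2        100          747,000             1,398,000
M4.xlarge     4        180          874,000             1,779,000
R4.xlarge     4        200          864,000             1,837,000
C4.xlarge     4        170          923,000             1,858,000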

So what?

You should not compare only the EC2 instance cost (I’ve indicated the approximate cost for RHEL in Europe, but you can check pricing at https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/). You should also estimate the Oracle licences you need to run your workload. Creating an EC2 instance takes only a few minutes. Installing Oracle from an ORACLE_HOME clone is also very fast, and creating a database with the SLOB create_database_kit is easy. Fully automated, you can run the same SLOB tests on an instance and get results after 2 hours. It is highly recommended to do that before choosing the instance type for your database. The number of cores will determine the Oracle licences to buy, which is an acquisition cost plus a yearly maintenance fee. The goal is to run on the processor that gives the best performance for your workload.

 

The article Amazon AWS instances and Oracle database performance appeared first on Blog dbi services.

MD5 Signed JAR Files Treated as Unsigned in April 2017

Steven Chan - Wed, 2017-02-01 02:05

Oracle currently plans to disable MD5 signed JARs in the upcoming Critical Patch Update slated for April 18, 2017.  JAR files signed with MD5 algorithms will be treated as unsigned JARs.


Does this affect EBS environments?

Yes. This applies to Java 6, 7, and 8 used in EBS 12.1 and 12.2.  Oracle E-Business Suite uses Java, notably for running Forms-based content via the Java Runtime Environment (JRE) browser plug-in.  Java-based content is delivered in JAR files.  Customers must sign E-Business Suite JAR files with a code signing certificate from a trusted Certificate Authority (CA). 

A code signing certificate from a trusted CA is required to sign your Java content securely. It allows you to deliver signed code from your server (e.g. JAR files) to users' desktops, verifying you as the publisher and trusted provider of that code, and also verifying that the code has not been altered. A single code signing certificate allows you to verify any amount of code across multiple EBS environments. This is a different type of certificate from the commonly used SSL certificate, which is used to authorize a server on a per-environment basis. You cannot use an SSL certificate for the purpose of signing JAR files.
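
As a generic illustration only (not the EBS-specific procedure, which is covered in the signing instructions referenced below), a JAR can be signed with SHA-256 algorithms instead of MD5 using the JDK's jarsigner tool; the keystore path, JAR name and alias here are placeholders:

# placeholders: keystore, JAR and alias; use the code signing certificate from your trusted CA
jarsigner -keystore /path/to/codesign.jks -sigalg SHA256withRSA -digestalg SHA-256 myapp.jar my_codesign_alias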

Instructions on how to sign EBS JARs are published here:

Where can I get more information?

Oracle's plans for changes to the security algorithms and associated policies/settings in the Oracle Java Runtime Environment (JRE) and Java SE Development Kit (JDK) are published here:

More information about Java security is available here:

Getting help

If you have questions about Java Security, please log a Service Request with Java Support.

If you need assistance with the steps for signing EBS JAR files, please log a Service Request against the "Oracle Applications Technology Stack (TXK)" > "Java."

Disclaimer

The preceding is intended to outline our general product direction.  It is intended for information purposes only, and may not be incorporated into any contract.   It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


Categories: APPS Blogs

Batch Scheduler Resources

Anthony Shorten - Tue, 2017-01-31 20:16

In the latest release of the Oracle Utilities Application Framework, we shipped an integration with DBMS_SCHEDULER to manage and execute our batch processes. We supply a PL/SQL-based interface to our batch processes.
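
For readers who have not used the database scheduler before, here is a generic DBMS_SCHEDULER example (plain scheduler usage, not the Oracle Utilities-supplied PL/SQL interface; the job name and procedure are placeholders):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MY_BATCH_JOB',             -- placeholder job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MY_SCHEMA.MY_BATCH_PROC',  -- placeholder procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',      -- run daily at 02:00
    enabled         => TRUE);
END;
/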

DBMS_SCHEDULER is part of the database, and therefore there is lots of advice on the internet to help you use the scheduler effectively. I have compiled a list of some of the resources on the internet that may be useful when using this scheduler:

This list is not exhaustive, so take a look at other resources you might find useful (look for DBMS_SCHEDULER in the search engine of your choice). If you want more information, those coming to the Oracle Utilities Edge Conference should note that I am conducting a session on the scheduler and the integration on Feb 14 at the conference.

Concat all columns

Tom Kyte - Tue, 2017-01-31 16:46
Hello Tom. I want to concatenate all columns of a row into one string. <code>select * from table</code> should bring out one column per row, including all field values as one string. The use of || didn't work, because I want it for different table...
Categories: DBA Blogs

When do these wait events occur and what are the causes: enq: IM - contention for blr and enq: TA - contention

Tom Kyte - Tue, 2017-01-31 16:46
When do these wait events occur, and what are the causes and solutions for them? 1) enq: IM - contention for blr 2) enq: TA - contention 3) undo segment tx slot
Categories: DBA Blogs

Database sync method

Tom Kyte - Tue, 2017-01-31 16:46
Hi, I have two databases in production: one is a DEV db (source DB) and the other a QA DB (target DB). If any change is applied to the source DB, then it has to be reflected in the target DB, but not vice versa. The TARGET DB is in read-write mode. Please let me know the ea...
Categories: DBA Blogs

Oracle Database Licensing in the Cloud

Tom Kyte - Tue, 2017-01-31 16:46
Hi, I just read the latest release of the Oracle Database licensing in the cloud (http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf), and have a little concern! Does this mean we will face doubling our licensing cost if we host our Oracle d...
Categories: DBA Blogs

Uploading Files into Server

Tom Kyte - Tue, 2017-01-31 16:46
Hi, Is it possible to transfer files using SQL*Plus to a remote database from a client (local machine having SQL*Plus)?
Categories: DBA Blogs

Analytics/MODEL to consolidate order lines by value

Tom Kyte - Tue, 2017-01-31 16:46
Given a manual business process where customers accept a minimum dollar value for shipments, I need to write a query that will consolidate the order lines for the customer such that we show the date their order lines accumulate the minimum shipment va...
Categories: DBA Blogs

Introducing high-availability and multi-subnet scenarios with SQL Server on Linux

Yann Neuhaus - Tue, 2017-01-31 15:56

In my first blog about SQL Server on Linux, I introduced the new high-availability feature, which so far concerns only SQL Server failover cluster instances. During this discovery time, I had the support of Mihaela Blendea (@MihaelaBlendea) at Microsoft to clarify some aspects of this new kind of architecture. Firstly, I would like to thank her: it’s always a great pleasure to have the Microsoft team available in such cases. After achieving the installation of my SQL Server FCI environment on Linux, I was interested in performing the same in a more complex scenario, like the multi-subnet failover clusters I notice at some customer shops. The installation process will surely change over time and it is not intended as official documentation, of course. This is only an exercise that is part of my Linux immersion experience.

So I decided to evolve my current architecture (two cluster nodes with Pacemaker on the same subnet) by introducing a third node on a different subnet. Here is a picture of the architecture I wanted to install.

[Figure: SQL Server FCI multi-subnet architecture]

So basically, referring to my previous architecture, the task to perform was as follows:

  • Make the initial heartbeat configuration redundant. Even if having redundant network paths is nowadays mostly handled by modern infrastructures and virtualization layers, I still believe it is always a best practice to make the heartbeat redundant at the cluster level in order to avoid unexpected behaviors like split brain (for instance with two nodes in this case). I will have the opportunity to talk about quorum topics in a future post.
  • Introduce a third node on a different subnet to the existing architecture and then add it to the cluster. You may follow the Microsoft documentation to perform this task. The main challenge here was to add the third node in a multi-subnet scenario and to ensure the communication paths work well between cluster nodes on both networks (public and private).
  • Find a way to make the existing SQL Server FCI resource multi-subnet compliant. I mean getting the same kind of behavior we have with a WSFC on Windows when the resource fails over between nodes on different subnets. In that case, we configure an OR-based resource dependency which includes the second virtual IP address.
  • Check if applications are able to connect in the context of multi-subnet failover event.

You may notice that I didn’t introduce redundancy at the storage layer. Indeed, the NFS server becomes a SPOF, but I didn’t want to make my architecture more complex for the moment. In a more realistic scenario at customer shops, this aspect would probably be covered by other storage vendor solutions.

So let’s begin with the heartbeat configuration. In my existing infrastructure, only one ring was configured and ran on top of the eth0 interfaces on both nodes (respectively 192.168.5.17 for the linux01 node and 192.168.5.18 for the linux02 node).

 [mikedavem@linux01 ~]$ sudo pcs cluster corosync
…
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        nodeid: 1
    }

    node {
        ring0_addr: linux02.dbi-services.test
        nodeid: 2
    }
}
…

 

So I added another network interface (eth1) on each cluster node, on a different subnet (192.168.20.0). Those interfaces will be dedicated to running the second Corosync link (ring 1).

  • Linux01
[mikedavem@linux01 ~]$ ip addr show eth1
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:2b:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.17/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe00:2bd4/64 scope link
       valid_lft forever preferred_lft forever

 

  • Linux02
[mikedavem@linux01 ~]$ sudo ssh linux02 ip addr show eth1
…
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:2b:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.18/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::36d8:d6f9:1b7a:cebd/64 scope link
       valid_lft forever preferred_lft forever

 

At this point I bound each new IP address to a corresponding hostname. We may store the new configuration either in the /etc/hosts file or in the DNS server(s).
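
For example, the /etc/hosts entries for the new heartbeat interfaces could look like this (a sketch based on the addresses above and the ring1 names used in the Corosync configuration below; adapt if you rely on DNS instead):

# heartbeat interfaces (Corosync ring 1)
192.168.20.17   linux01H2.dbi-services.test
192.168.20.18   linux02H2.dbi-services.test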

Then I updated corosync.conf on both nodes by adding the new ring configuration as follows. The point here is that configuration changes are not synchronized automatically across nodes as they are with Windows failover clusters. To enable the redundant ring protocol, I added the rrp_mode parameter set to active on both network interfaces (eth0 and eth1) and a new ring address for each node (ring1_addr).

totem {
    version: 2
    secauth: off
    cluster_name: linux_cluster
    transport: udpu
    rrp_mode: active
}
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        ring1_addr: linux01H2.dbi-services.test
        nodeid: 1
    }
    node {
        ring0_addr: linux02.dbi-services.test
        ring1_addr: linux02H2.dbi-services.test
        nodeid: 2
    }
}

After restarting the Corosync service, I checked the new ring status on both nodes:

 [mikedavem@linux01 ~]# sudo corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
        id      = 192.168.5.17
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.20.17
        status  = Marking seqid 23 ringid 1 interface 192.168.20.17 FAULTY
[root@linux01 ~]#
 [root@linux01 ~]# ssh linux02 corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
        id      = 192.168.5.18
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.20.18
        status  = ring 1 active with no faults

 

At this point, my pacemaker cluster was able to use all the network interfaces to execute heartbeat.

Following the Microsoft documentation, I added a new node, LINUX03, with the same heartbeat configuration, and the general Corosync configuration was updated as follows:

[mikedavem@linux01 ~]# sudo pcs cluster node add linux03.dbi-services.test,linux03H2.dbi-services.test
…
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        ring1_addr: linux01H2.dbi-services.test
        nodeid: 1
    }
    node {
        ring0_addr: linux02.dbi-services.test
        ring1_addr: linux02H2.dbi-services.test
        nodeid: 2
    }
    node {
        ring0_addr: linux03.dbi-services.test
        ring1_addr: linux03H2.dbi-services.test
        nodeid: 3
    }
}

 

Obviously, the communication paths worked successfully after correctly configuring the routes between the nodes on different subnets. The corresponding default gateways are already configured for the eth0 interfaces, but we have to add static routes for the eth1 interfaces, as shown below (a sketch of how to persist such a route follows the outputs):

  • LINUX01 and LINUX02 (eth0 – subnet 192.168.5.0 – default gateway 192.168.5.10 / eth1 – subnet 192.168.20.0 – static route to 192.168.30.0 subnet by using 192.168.20.10).
[mikedavem@linux01 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.10    0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.20.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.30.0    192.168.20.10   255.255.255.0   UG    0      0        0 eth1

 

  • LINUX03 (eth0 – subnet 192.168.50.0 – default gateway 192.168.50.10 / eth1 – subnet 192.168.30.0 – static route to 192.168.20.0 subnet by using 192.168.30.10).
[mikedavem@linux03 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.10   0.0.0.0         UG    0      0        0 eth0
0.0.0.0         192.168.50.10   0.0.0.0         UG    100    0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
192.168.20.0    192.168.30.10   255.255.255.0   UG    100    0        0 eth1
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 eth1
192.168.50.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0
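
For reference, here is a hedged sketch of how such a static route can be added and persisted on a RHEL/CentOS-style system (taking the LINUX01/LINUX02 case; addresses come from the tables above, the file name follows the usual network-scripts convention):

# add the route immediately (reach 192.168.30.0/24 via 192.168.20.10 on eth1)
sudo ip route add 192.168.30.0/24 via 192.168.20.10 dev eth1

# persist it across reboots
echo "192.168.30.0/24 via 192.168.20.10 dev eth1" | sudo tee /etc/sysconfig/network-scripts/route-eth1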

 

Let’s have a look at the cluster status:

[root@linux01 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
 Last updated: Mon Jan 30 12:47:00 2017         Last change: Mon Jan 30 12:45:01 2017 by hacluster via crmd on linux01.dbi-services.test
 3 nodes and 3 resources configured

PCSD Status:
  linux01.dbi-services.test: Online
  linux03.dbi-services.test: Online
  linux02.dbi-services.test: Online

 

To enable the NFS share to be mounted from the new cluster node LINUX03 on the 192.168.50.0 subnet, we have to add the new configuration to the /etc/exports file and export it afterwards; a sketch of the corresponding entries is shown below.
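
A sketch of what the /etc/exports entries could look like (the export options rw,sync,no_root_squash are an assumption; the paths and networks come from the output below):

/mnt/sql_data_nfs   192.168.5.0/24(rw,sync,no_root_squash) 192.168.50.0/24(rw,sync,no_root_squash)
/mnt/sql_log_nfs    192.168.5.0/24(rw,sync,no_root_squash)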

[root@nfs ~]# exportfs -rav
exporting 192.168.5.0/24:/mnt/sql_log_nfs
exporting 192.168.5.0/24:/mnt/sql_data_nfs
exporting 192.168.50.0/24:/mnt/sql_data_nfs

[root@nfs ~]# showmount -e
Export list for nfs.dbi-services.com:
/mnt/sql_log_nfs  192.168.5.0/24
/mnt/sql_data_nfs 192.168.50.0/24,192.168.5.0/24

 

Well, after checking that everything was OK from the cluster side, the next challenge was to find a way to configure the SQL Server FCI resource to be multi-subnet compliant. As stated by Microsoft, the SQL Server FCI is not as tightly coupled with the Pacemaker add-on as it is with the Windows Failover Cluster. Based on my Windows failover experience, I wondered if I had to go the same way with the Pacemaker cluster on Linux, and I tried to find a way to add a second VIP and then include it as part of an OR dependency, but I found nothing in this area. However, Pacemaker offers concepts such as location / colocation constraints and scores to control how resources behave during failover events. My intention is not to go into the details of the Pacemaker documentation, but by playing with these 3 concepts I was able to address our need. Again, please feel free to comment if you have a better method to meet this requirement.

Let’s first add a second virtual IP address for the 192.168.50.0 subnet (virtualipdr) and then add a new colocation constraint between it and the SQL Server resource (sqllinuxfci):

[mikedavem@linux01 ~]$sudo pcs cluster cib cfg
[mikedavem@linux01 ~]$sudo pcs -f cfg resource create virtualipdr ocf:heartbeat:IPaddr2 ip=192.168.50.20
[mikedavem@linux01 ~]$sudo pcs -f cfg constraint colocation add virtualipdr sqllinuxfci
[mikedavem@linux01 ~]$sudo pcs cluster cib-push cfg
[mikedavem@linux01 ~]$sudo pcs constraint location

 

Now, to avoid starting the virtualip or virtualipdr resources on the wrong subnet, let’s configure an “opt-out” scenario: a symmetric cluster to allow resources to run anywhere, plus location constraints to avoid running a resource on a specific node.

[mikedavem@linux01 ~]$sudo pcs property set symmetric-cluster=true
[mikedavem@linux01 ~]$pcs constraint location virtualipdr avoids linux01.dbi-services.test=-1
[mikedavem@linux01 ~]$pcs constraint location virtualipdr avoids linux02.dbi-services.test=-1
[mikedavem@linux01 ~]$pcs constraint location virtualip avoids linux03.dbi-services.test=-1

 

The new constraint topology is as follows

[mikedavem@linux01 ~]$ sudo pcs constraint
Location Constraints:
  Resource: sqllinuxfci
    Enabled on: linux01.dbi-services.test (score:INFINITY) (role: Started)
  Resource: virtualip
    Disabled on: linux03.dbi-services.test (score:-1)
  Resource: virtualipdr
    Disabled on: linux01.dbi-services.test (score:-1)
    Disabled on: linux02.dbi-services.test (score:-1)
Ordering Constraints:
Colocation Constraints:
  FS with sqllinuxfci (score:INFINITY)
  virtualip with sqllinuxfci (score:INFINITY)
  virtualipdr with sqllinuxfci (score:INFINITY)
Ticket Constraints:

 

Let’s have a look at the Pacemaker status. At this point all SQL Server resources are running on LINUX01, on the 192.168.5.0 subnet. We may notice that virtualipdr is in the stopped state in this case.

[mikedavem@linux01 ~]$ sudo pcs status
Cluster name: linux_cluster
Stack: corosync
Current DC: linux02.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 31 22:28:57 2017          Last change: Mon Jan 30 16:57:10 2017 by root via crm_resource on linux01.dbi-services.test

3 nodes and 4 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test linux03.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::mssql:fci):       Started linux01.dbi-services.test
 FS     (ocf::heartbeat:Filesystem):    Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test
 virtualipdr    (ocf::heartbeat:IPaddr2):       Stopped

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

Go ahead and let’s try to move the resources to the LINUX03 node – 192.168.50.0 subnet:

[mikedavem@linux01 ~]$ sudo pcs resource move sqllinuxfci linux03.dbi-services.test

 

The new Pacemaker status becomes:

[mikedavem@linux01 ~]$ sudo pcs status
Cluster name: linux_cluster
Stack: corosync
Current DC: linux02.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 31 22:33:21 2017          Last change: Tue Jan 31 22:32:53 2017 by root via crm_resource on linux01.dbi-services.test

3 nodes and 4 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test linux03.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::mssql:fci):       Stopped
 FS     (ocf::heartbeat:Filesystem):    Started linux03.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Stopped
 virtualipdr    (ocf::heartbeat:IPaddr2):       Started linux03.dbi-services.test

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

In turn, virtualipdr was brought online and virtualip was brought offline, because we are now located on the 192.168.50.0 subnet. Here we go!

OK, at this point our SQL Server failover cluster instance seems to behave as expected, but how do we deal with client connections in this case? If I refer to previous Windows Failover Cluster experience, I can think of two scenarios using DNS servers.

  • We are able to use the SqlClient / JAVA / ODBC support for HA with the MultiSubnetFailover parameter in the connection string (see the client-side sketch after this list). In this case, good news: we may simply register both virtual IP addresses under the corresponding DNS record and the magic will operate by itself (similar to the RegisterAllProvidersIP property with availability groups). The client will automatically reach the first available address and everything should be fine.
  • We cannot modify or use the MultiSubnetFailover parameter, and in this case we may set the TTL value manually for the corresponding DNS record (similar to the HostRecordTTL parameter with availability groups). We will experience timeout issues on the first connection attempt, but the second one should work.
  • Other scenarios?? Please feel free to comment
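
For illustration, here is what the client side could look like (the DNS name sqlvip.dbi-services.test and the credentials are placeholders, not names from this setup; -M is the sqlcmd switch for multi-subnet failover, and MultiSubnetFailover=True is the equivalent SqlClient connection-string keyword):

# sqlcmd with -M against a DNS name that carries both virtual IP addresses
sqlcmd -S sqlvip.dbi-services.test -U sa -P '********' -M -Q "SELECT @@SERVERNAME"

# equivalent .NET SqlClient connection string (placeholders)
# Server=tcp:sqlvip.dbi-services.test,1433;Database=master;User Id=sa;Password=********;MultiSubnetFailover=True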

In my lab environment, SqlClient-based connections seem to work well in the aforementioned cases. I will perform further tests in the near future and update this blog with the results.

I’m looking forward to seeing other improvements and features in the next SQL Server CTPs.

Happy clustering on Linux!

The article Introducing high-availability and multi-subnet scenarios with SQL Server on Linux appeared first on Blog dbi services.

McColl’s Chooses Oracle Retail Stores Solutions and Hardware

Oracle Press Releases - Tue, 2017-01-31 11:05
Press Release
McColl’s Chooses Oracle Retail Stores Solutions and Hardware U.K. Convenience Retailer Leverages Scale and Agility

Redwood Shores, Calif.—Jan 31, 2017

Today Oracle announced that McColl’s has invested in Oracle Retail Xstore Point-of-Service and Oracle MICROS Family Workstation 6 to improve the in-store guest experience. With 1,375 stores, McColl’s is the UK’s leading neighbourhood retailer serving the convenience and newsagent sectors. As part of its growth strategy, McColl’s is investing to improve store standards and the customer experience.
 
McColl’s sees significant growth opportunity in the convenience market and is increasing its store portfolio with the acquisition of 298 convenience stores during 2017.  Approval was gained from the Competition & Markets Authority (CMA) in December 2016 to acquire 298 convenience stores from the Co-operative Group Limited, the rollout of which will be completed by August 2017. The convenience sector requires software and hardware with speed, scale and agility to support the complete customer offer across the store portfolio. 
 
“McColl’s performs over 4 million customer transactions per week through 2,700 tills and we need fast and reliable store systems to support our customers and store colleagues,” said Neil Hodge, Information Technology Director, McColl’s. “We chose Oracle Retail Xstore Point-of-Service and the Oracle MICROS Family Workstation because it is an adaptable solution capable of supporting our growth as operational requirements change.”
 
“As a long time customer of MICROS using a Torex POS solution, we are delighted to be continuing our relationship with Oracle. The Oracle MICROS Workstation 6 performance impressed both the technical teams and the store colleagues alike,” said Neil Hodge, Information Technology Director, McColl’s. “We are excited about the future and the capabilities available with Oracle Retail Xstore Point-of-Service.”
 
“We are honored to welcome McColl’s into the Oracle Retail community and are committed to their success,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “The Oracle MICROS Family Workstation 6 is engineered to work seamlessly with the Oracle Retail Xstore Point-of-Service to ensure that we deliver superior POS performance and reliability for the busy retail convenience store environment.”
 
Oracle Retail Industry Connect 2017
Join us at Oracle Industry Connect this spring. The program is designed for and delivered by retailers. On March 20-22, 2017 we will gather as a community in Orlando, FL to share best practices, talk strategy and discover new innovations for retail. Limited to retailers and paying sponsors. Register today: http://www.oracleindustryconnect.com
 
About McColl's
McColl's is a leading neighbourhood retailer in the independent managed sector running 1,375 convenience and newsagent stores. We operate 1,001 McColl's branded UK convenience stores as well as 374 newsagents branded Martin's, except in Scotland where we operate under our heritage brand, RS McColl. In addition we are also the largest operator of Post Offices in the UK.
 
Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • 4155951584

Omni Financeiras Speeds Pace of Business with Cloud

WebCenter Team - Tue, 2017-01-31 09:55

Omni Financeira is a financial institution providing consumer credit for pre-owned vehicles, construction materials, furniture, home appliances and groceries for consumers across all income ranges. Distributing business risks today with more than one million financed contracts guarantees flexibility and profitability to partners and customers.

Omni uses Oracle Documents Cloud Services to store customer documents used in the credit request process, managed by Oracle WebCenter Content. A customized portal allows the 10,000 credit agents to easily submit the documents for credit analysis. Each year, 1.4 Terabytes of documents are stored in the cloud. Oracle Documents Cloud Service provides REST APIs to enable access to documents for multiple applications, as Omni internal systems, and from multiple channels, including a mobile app. To date, Omni has realized an 80% savings in infrastructure costs. Capacity, scalability and growth for document storage is no longer a concern.

View this video to hear Edi Nilson Piovezani, Director of Infrastructure at Omni Financeira in Brazil, speak about their content management journey to the Cloud to reduce cost, drive efficiency, and create a dynamic digital experience for their credit agents.

EBS 12.2.6 OA Extensions for Jdeveloper 10g Updated

Steven Chan - Tue, 2017-01-31 02:04
When you create extensions to Oracle E-Business Suite OA Framework pages, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product team.

The version of Oracle JDeveloper is specific to the Oracle E-Business Suite Applications Technology patch level, so there is a new version of Oracle JDeveloper with each new release of the Oracle E-Business Suite Applications Technology patchset.

The Oracle Applications (OA) Extensions for JDeveloper 10g for E-Business Suite Release 12.2.6 have recently been updated.  For details, see:

The same Note also lists the latest OA Extension updates for EBS 11i, 12.0, 12.1, and 12.2.

Related Articles

Categories: APPS Blogs

No listener error

Tom Kyte - Mon, 2017-01-30 22:26
Hi Tom, I want to read mail using a PL/SQL function, so I executed this function: create or replace type TStrings is table of varchar2(4000); / Type created. create or replace function xx_pop3( userName varchar2, password varchar2,...
Categories: DBA Blogs
