Feed aggregator

Can Oracle 8i client connect to Oracle 12c server?

Tom Kyte - Wed, 2016-06-29 00:06
Can Oracle 8i client connect to Oracle 12c server? If yes, can anybody please help me with what changes need to be made in the Oracle 8i client's files and the Oracle 12c server's files...
Categories: DBA Blogs

Generating Dates between two date ranges

Tom Kyte - Wed, 2016-06-29 00:06
Hi Tom, I need to write a query which will give me a list of dates between two date ranges (start date and end date). For example, if my start date is 12/01/2003 and end date is 12/31/2003, I need to generate my output as Date ---...
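A common answer to this kind of question is a row-generator query. Here is a minimal sketch using the dates from the question (an illustration, not necessarily the answer given in the thread):

-- one row per day between the two dates, via CONNECT BY LEVEL
select date '2003-12-01' + level - 1 as dt
  from dual
connect by level <= date '2003-12-31' - date '2003-12-01' + 1;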
Categories: DBA Blogs

Refresh Materialized view when there are records in the source table

Tom Kyte - Wed, 2016-06-29 00:06
Hi, I need help to refresh a materialized view only when there are records in the source table. For example: create materialized view MABCD as select field1,field2,field3 from abcdef; my requirement is: if abcdef has records greater than zer...
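One way to approach this, sketched with the names from the question (an illustration, not the thread's answer):

-- refresh the materialized view only when the source table has at least one row
declare
  l_cnt number;
begin
  select count(*) into l_cnt from abcdef where rownum = 1;
  if l_cnt > 0 then
    dbms_mview.refresh('MABCD');
  end if;
end;
/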
Categories: DBA Blogs

Expecting Sum-Of-Parts Performance From Shared Solid State Storage? I Didn’t Think So. Neither Should Exadata Customers. Here’s Why.

Kevin Closson - Tue, 2016-06-28 19:55

 


Last month I had the privilege of delivering the keynote session at the quarterly gathering of the Northern California Oracle User Group. My session was a set of vignettes on a theme of modern storage advancements. I was mistaken about how much time I had for the session, so I skipped over a section about how we sometimes still expect systems performance to add up to a sum of its parts. This blog post aims to dive into that topic.

To the best of my knowledge there is no marketing literature about XtremIO Storage Array that suggests the array performance is due to the number of solid state disk (SSD) drives found in the device. Generally speaking, enterprise all-flash storage arrays are built to offer features and performance–otherwise they’d be more aptly named Just a Bunch of Flash (JBOF).  The scope of this blog post is strictly targeting enterprise storage.

Wild, And Crazy, Claims

Lately I’ve seen a particular slide–bearing Oracle’s logo and copyright notice–popping up to suggest that Exadata is vastly superior to EMC and Pure Storage arrays because of Exadata’s supposed unique ability to leverage all flash bandwidth in the Exadata X6 family. You might be able to guess by now that I aim to expose how invalid this claim is. To start things off I’ll show a screenshot of the slide as I’ve seen it. Throughout the post there will be references to materials I’m citing.

DISCLAIMER: The slide I am about to show was not a fair-use sample of content from oracle.com, and it therefore may not, in fact, represent the official position of Oracle on the matter. That said, the slide does bear Oracle's logo and copyright notice! So, then, the slide:

[Slide: X6-sum-of-parts]

I’ll start by listing a few objections. My objections are always based on science and fact so objecting to content–in particular–is certainly appropriate.

  1. The slide suggests an EMC XtremIO 4 X-Brick array is limited to 60 megabytes per second per “flash drive.”
    1. An XtremIO 4 X-Brick array has 100 Solid State Disks (SSD)–25 per X-Brick. I don’t know where the author got this data, but it is grossly mistaken. No, a 4 X-Brick array is not limited to 60 * 100 megabytes per second (6,000MB/s). An XtremIO 4 X-Brick array is a 12GB/s array: click here. In fact, even way back in 2014 I used Oracle Database 11g Real Application Clusters to scan at 10.5GB/s with Parallel Query (click here). Remember, Parallel Query spends a non-trivial amount of IPC and work-brokering setup time at the beginning of a scan involving multiple Real Application Clusters nodes. That query startup time impacts total scan elapsed time, thus 10.5 GB/s reflects the average scan rate including this “dead air” query startup time. Everyone who uses Parallel Query Option is familiar with this overhead.
  2. The slide suggests that 60 MB/s is “spinning disk level throughput.”
    1. Any 15K RPM SAS (12Gb) or FC hard disk drive easily delivers sequential scan throughput of more than 200 MB/s.
  3. The slide suggests XtremIO cannot scale out.
    1. XtremIO architecture is 100% scale out. One can start with a single X-Brick and add up to 7 more. In the current generation scaling out in this fashion with XtremIO adds 25 more SSDs, storage controllers (CPU) and 4 more Fibre Channel ports per X-Brick.
  4. The slide suggests “bottlenecks at server inputs” further retard throughput when using Fibre Channel.
    1. This is just silly. There are 4 x 8GFC host-side FC ports per XtremIO X-Brick. I routinely test Haswell-EP 2-socket hosts with 6 active 8GFC ports (3 cards) per host. Can a measly 2-socket host really drive 12 GB/s of Oracle scan bandwidth? Yes! No question. In fact, challenge me on that and I’ll show AWR proof of a single 2-socket host sustaining Oracle table scan bandwidth at 18 GB/s. No, actually, I won’t make anyone go to that much trouble. Click the following link for AWR proof that a single host with two 6-core Haswell-EP (2s12c24t) processors can sustain Oracle Database 12c scan bandwidth of 18 GB/s: click here. I don’t say it frequently enough, but it’s true; you most likely do not know how powerful modern servers are!
  5. The slide says Exadata achieves “full flash throughput.”
    1. I’m laughing, but that claim is, in fact, the perfect segue to the next section.
Full Flash Throughput: Scan Bandwidth

The slide accurately states that the NVMe flash cards in the Exadata X6 models are rated at 5.5GB/s. This can be seen in the F320 datasheet. Click the following link for a screenshot of the datasheet: click here. So the question becomes: can Exadata really achieve full utilization of all of the NVMe flash cards configured in the Exadata X6? The answer is no, but sort of. Please allow me to explain.

The following graph depicts the reality of how close a full-rack Exadata X6 comes to realizing full flash potential. As we know, a full-rack Exadata has 14 storage servers. The High Capacity (HC) model has 4 NVMe cards per storage server purposed as a flash cache. The HC model also comes with 12 7,200 RPM hard drives per storage server, as per the datasheet. The following graph shows that yes, indeed, Exadata X6 does realize full flash potential when performing a fully-offloaded scan (Smart Scan). After all, 4 * 14 * 5.5 is 308, and the datasheet cites 301 GB/s scan performance for the HC model. This is fine and dandy, but it means you have to put up with 168 (12 * 14) howling 7,200 RPM hard disks if you are really intent on harnessing the magic power of full flash potential! Why the sarcasm? It’s simple, really: just take a look at the graph and notice that the all-flash EF model realizes just a bit more than 50% of the full flash performance potential. Indeed, the EF model has 14 * 8 * 5.5 == 616 GB/s of full potential available–but not realizable.
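To make the sum-of-parts arithmetic explicit, the figures above can be reproduced with a trivial query:

-- aggregate flash potential per full rack: cards per cell x 14 cells x 5.5 GB/s
select 4 * 14 * 5.5 as hc_flash_potential_gbps,  -- 308; the datasheet cites 301 GB/s
       8 * 14 * 5.5 as ef_flash_potential_gbps   -- 616; only about half is realizable
from dual;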

No, Exadata X6 does not–as the above slide suggests–harness the full potential of flash. Well, not unless you’re willing to put up with 168 round, brown, spinning thingies in the configuration. Ironically, it’s the HDD-Flash hybrid HC model that enjoys the “full flash potential.” I doubt the presenter points this bit out when slinging the above slide.

[Graph BW: Exadata X6 scan bandwidth versus full flash potential, HC and EF models]

IOPS

The above slide doesn’t actually suggest that Exadata X6 achieves full flash potential for IOPS, but since these people made me crack open the datasheets and use my brain for a moment or two, I took it upon myself to do the calculations. The following graph shows the delta between full flash IOPS potential and the datasheet IOPS for the full-rack HC and EF Exadata X6 models.

No…it doesn’t realize full flash potential in terms of IOPS either.

[Graph IOPS: Exadata X6 IOPS versus full flash potential, HC and EF models]

References

Here is a link to the full slide deck containing the slide I focused on in this post: http://konferenciak.advalorem.hu/uploads/files/INFR_Sarecz_Lajos.pdf.

Just in case that copy of the deck disappears, I pushed a copy up to the Wayback Machine: click here.

Summary

XtremIO Storage Array literature does not suggest that the performance characteristics of the array are a simple product of how many component SSDs the array is configured with. To the best of my knowledge neither does Pure Storage suggest such a thing.

Oracle shouldn’t either. I made that point clear.


Filed under: oracle

New Version Of XPLAN_ASH Utility

Randolf Geist - Tue, 2016-06-28 18:23
A new version 4.23 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

This version comes only with minor changes, see the change log below.

Here are the notes from the change log:

- Finally corrected the very old and wrong description of "wait times" in the script comments, which talked about "in-flight" wait events; that is not correct. ASH performs a "fix-up" of the last 255 samples or so and updates them with the time waited, so these wait events are not "in-flight"

- Removed some of the clean-up code added in 4.22 at the beginning of the script, because it didn't really help much, and spooled script output always contained error messages about non-existent column definitions being cleared

- The "Concurrent I/O" sections will now also be skipped in LIMITED_ASH mode

- Some more fixes to the I/O figures in the "Activity Timeline based on ASH" - the spreading introduced in 4.22 needed some further refinement (see 4.22 change log for more details)

The Importance of Routine

Complete IT Professional - Tue, 2016-06-28 17:08
In the last two weeks, I’ve learnt about the importance of a routine. How does it relate to your career, and why should you care? The Last Two Weeks So, you might have noticed there hasn’t been a post on Complete IT Professional in the last week or so. I mentioned there was a delay […]
Categories: Development

TLS 1.2 Certified with E-Business Suite 12.2

Steven Chan - Tue, 2016-06-28 16:57

I'm pleased to announce that Oracle E-Business Suite 12.2 inbound, outbound, and loopback connections are now certified with TLS 1.2, 1.1, and 1.0. If you have not already migrated from SSL to TLS, you should begin planning the migration for your environment. 

For more information on patching and configuration requirements when migrating to TLS 1.2 from TLS 1.0 or SSL, refer to the following My Oracle Support Knowledge Document:

The steps and configuration outlined in MOS Note 1367293.1 will address recent security vulnerabilities (e.g. Weak Cipher Suites/FREAK, POODLE, DROWN).

Configuration Options

  • Certification with TLS 1.2, 1.1 and 1.0

    The default Oracle E-Business Suite 12.2 configuration allows for the handshake between the client and server to negotiate and use the highest version of TLS (either 1.2, 1.1, or 1.0) supported end-to-end by all parties.

For example, the outbound connection used by iProcurement is by default configured for TLS 1.2, 1.1 and 1.0. If a call is made from Oracle E-Business Suite iProcurement to an external site that supports TLS 1.2 and a common cipher suite is found, then TLS 1.2 will be used end-to-end. If a call is made to an external site that supports at most TLS 1.1 and a common cipher suite is found, then the handshake negotiation will resolve to TLS 1.1 for the connection encryption.

  • Restricting the TLS Protocol (Optional Configuration)

You may optionally configure Oracle E-Business Suite to use the highest level of TLS certified with Oracle E-Business Suite Release 12.2. This option currently allows you to configure TLS 1.2 for all inbound, outbound and loopback connections. 

Warning: If you restrict Oracle E-Business Suite 12.2 to use only TLS 1.2, this configuration could result in the inability to connect to other sites or browsers that do not support TLS 1.2.

  • Restricting Connections to HTTPS Only (Optional Configuration)

You also may optionally configure Oracle E-Business Suite to disable the HTTP port and use the HTTPS port only. 

Where can I learn more?
There are several guides and documents that cover Oracle E-Business Suite 12.2 secure configuration and encryption. You can learn more by reading the following:

SSL or TLS 1.0 Reference Note

If you are using SSL or TLS 1.0 and need to review your current configuration or renew your certificate, you may refer to the following:

Related Articles

Categories: APPS Blogs

PostgreSQL as a central reporting hub? – Yes, of course

Yann Neuhaus - Tue, 2016-06-28 15:31

For every client we do projects for, there are usually database systems from various vendors. In most places you will at least find some Oracle and MS SQL Server databases, and probably somewhere there are even some MySQL/MariaDB or PostgreSQL instances running. The Oracle and MS SQL Server stuff is almost always there because vendors require it. For MySQL/MariaDB the common use cases are web applications, ticketing systems and home-grown developments. At some point in time there might be a requirement to centralize the important data of all these databases into a central reporting hub. The more data you have, the more there is a requirement to analyze it and to create reports that drive some sort of decision. PostgreSQL is very well prepared to assist in this. If you have read the last posts about connecting your PostgreSQL instance to either Oracle, MS SQL Server, MariaDB/MySQL or other PostgreSQL instances, you might already know in which direction this post will go.

This is a sample use case: There is a requirement to centralize all the sales figures of a company into one big reporting database. The (simplified) database landscape in this company looks exactly as described above:

[Diagram pg_reporting_1: the current database landscape]

There is one Oracle database, one MS SQL Server instance, one MariaDB/MySQL instance and one PostgreSQL instance. Each of them holds some sales figures which shall be integrated into the reporting database. For the demo setup I use the Dell DVD Store Database sample schema. You can read how to set this up for:

As the goal is to have a PostgreSQL instance as a central reporting hub the desired database landscape will approximately look like this:

[Diagram pg_reporting_2: PostgreSQL as the central reporting hub]

Let's assume all the foreign data wrappers are already set up in the central PostgreSQL instance:

(postgres@[local]:4445) [postgres] > \dx
                                                 List of installed extensions
     Name     |    Version    |   Schema   |                                    Description                                    
--------------+---------------+------------+-----------------------------------------------------------------------------------
 mysql_fdw    | 1.0           | public     | Foreign data wrapper for querying a MySQL server
 oracle_fdw   | 1.1           | public     | foreign data wrapper for Oracle access
 plpgsql      | 1.0           | pg_catalog | PL/pgSQL procedural language
 postgres_fdw | 1.0           | ds2_oracle | foreign-data wrapper for remote PostgreSQL servers
 tds_fdw      | 2.0.0-alpha.1 | public     | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server)

In addition, let's assume all the foreign servers and all the user mappings are there and are working:

(postgres@[local]:4445) [postgres] > select srvname,srvoptions from pg_foreign_server;
   srvname    |                                      srvoptions                                       
--------------+---------------------------------------------------------------------------------------
 mysql_server | {host=192.168.22.41,port=3306}
 mssql_svr    | {servername=192.168.22.102,port=1433,database=ds2,tds_version=7.3,msg_handler=notice}
 oracle_srv   | {dbserver=//192.168.22.42/DELLDVD}
 postgres_srv | {host=192.168.22.40,port=5432,dbname=ds2}
(4 rows)

(postgres@[local]:4445) [postgres] > select * from pg_user_mappings;
 umid  | srvid |   srvname    | umuser | usename  |               umoptions                
-------+-------+--------------+--------+----------+----------------------------------------
 65547 | 65546 | mysql_server |     10 | postgres | {username=web,password=web}
 65587 | 65586 | mssql_svr    |     10 | postgres | {username=ds2user,password=ds2}
 65615 | 65614 | oracle_srv   |     10 | postgres | {user=DS2,password=ds2}
 65676 | 65675 | postgres_srv |     10 | postgres | {user=ds2,password=ds2}

As the central reporting database is already connected to all the other database systems how could you organize the fetching of the data? One approach is to create a separate schema for each of the foreign databases and one additional schema that combines the data:

(postgres@[local]:4445) [postgres] > create schema ds2_mssql;
(postgres@[local]:4445) [postgres] > create schema ds2_oracle;
(postgres@[local]:4445) [postgres] > create schema ds2_mysql;
(postgres@[local]:4445) [postgres] > create schema ds2_postgresql;
(postgres@[local]:4445) [postgres] > create schema ds2_combined;
(postgres@[local]:4445) [postgres] > \dn
      List of schemas
      Name      |  Owner   
----------------+----------
 ds2_combined   | postgres
 ds2_mssql      | postgres
 ds2_mysql      | postgres
 ds2_oracle     | postgres
 ds2_postgresql | postgres
 public         | postgres
(6 rows)

The next step is to create the foreign tables we want to work with.

For Oracle, MySQL/MariaDB and PostgreSQL we can use the import foreign schema command:

postgres=# import foreign schema "DS2" from server oracle_srv into ds2_oracle;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "DS2" from server mysql_server into ds2_mysql;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "public" from server postgres_srv into ds2_postgresql;
IMPORT FOREIGN SCHEMA

For MS SQL Server we need to specify the foreign tables on our own (there is a feature request to implement import foreign schema):

create foreign table ds2_mssql.orders 
  ( orderid numeric not null
  , orderdate timestamp(0) without time zone not null
  , customerid numeric                                
  , netamount numeric(12,2) not null 
  , tax numeric(12,2) not null 
  , totalamount numeric(12,2) not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orders', row_estimate_method 'showplan_all');

create foreign table ds2_mssql.orderlines
  ( orderlineid numeric not null 
  , orderid numeric not null 
  , prod_id numeric not null 
  , quantity numeric not null 
  , orderdate timestamp(0) without time zone not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orderlines', row_estimate_method 'showplan_all');

Having all the foreign tables available we can start to work with the data either by creating materialized views:

create materialized view ds2_combined.mv_orders as
  select * from ds2_mssql.orders
  union all
  select * from ds2_mysql."ORDERS"
  union all
  select * from ds2_oracle.orders
  union all
  select * from ds2_postgresql.orders
  with no data;
refresh materialized view ds2_combined.mv_orders with data;

… or by importing the data into PostgreSQL and then build reports on top of that:

BEGIN;
  insert into ds2_combined.orders select * from ds2_mssql.orders;
  insert into ds2_combined.orders select * from ds2_mysql."ORDERS";
  insert into ds2_combined.orders select * from ds2_oracle.orders;
  insert into ds2_combined.orders select * from ds2_postgresql.orders;
END;
BEGIN;
  insert into ds2_combined.orderlines 
         select * from ds2_mssql.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_mysql."ORDERLINES";
  insert into ds2_combined.orderlines 
         select * from ds2_oracle.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_postgresql.orderlines;
END;

Having all the data locally available, you can do whatever you want with it. Have fun reporting on your data …
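For example, a first report against the combined schema could look like this (a sketch; the column names are taken from the orders definition above):

-- hypothetical report: monthly order count and sales across all four sources
select date_trunc('month', orderdate) as order_month
     , count(*)                       as orders
     , sum(totalamount)               as total_sales
  from ds2_combined.orders
 group by date_trunc('month', orderdate)
 order by order_month;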

Btw: The source for this was a session at the Swiss PGDAY 2016. You can download the slides there.

 

The article PostgreSQL as a central reporting hub? – Yes, of course appeared first on Blog dbi services.

Statspack Idle Events

Yann Neuhaus - Tue, 2016-06-28 11:26

When you don’t have Enterprise Edition + Diagnostic Pack, you cannot use AWR, but you can, and should, install Statspack. Statspack is still there, but unfortunately it does not evolve much. The most important section, the ‘Top 5 Timed Events’, shows only foreground events, or is supposed to. When a user process waits on a background process, this section must count only the foreground wait and not the activity of the background process, or we would have double accounting. Background activity is included in ‘Idle’ events in order to be excluded from this section.
But unfortunately, new versions come with new wait events, and the list of Statspack idle events is not up to date anymore.

Here is the ‘Top 5 Timed Events’ I got from a database between 22:00 and 23:00 where there is no application activity:

Top 5 Timed Events                                                   Avg %Total
~~~~~~~~~~~~~~~~~~                                                  wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
LGWR worker group idle                               6      22,049 ######   65.2
AQPC idle                                          120       3,602  30014   10.7
heartbeat redo informer                          3,568       3,601   1009   10.7
lreg timer                                       1,195       3,598   3011   10.6
direct path read                                31,221         466     15    1.4
-------------------------------------------------------------

Humm. What can you do with that? Idle events and timers are at the top. Direct path read seems to be minimal. And no CPU usage?
Obviously, something is wrong here.

Statspack uses a fixed list of wait events that are considered ‘idle’ events; it is stored at Statspack installation time into STATS$IDLE_EVENT.
This comes from an age when wait classes did not exist. In current versions, a more realistic list of idle wait events is in V$EVENT_NAME where wait_class=’Idle’.

Let’s compare them (that’s in 12.1.0.1)

First, are there some non-idle events that are considered as ‘idle’ by Statspack?


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s left outer join V$EVENT_NAME v on s.event=v.name where v.wait_class<>'Idle';
 
STATSPACK_IDLE_EVENT                    NAME                                    WAIT_CLASS
--------------------------------------- --------------------------------------- ----------
null event                              null event                              Other
SQL*Net message to client               SQL*Net message to client               Network
SQL*Net more data from client           SQL*Net more data from client           Network
KSV master wait                         KSV master wait                         Other
parallel recovery slave wait for change parallel recovery slave wait for change Other

The goal of this blog post is not to detail the meaning of each of those events (search for them on tanelpoder.com as a good start for that), but if they are now considered as non-idle, Statspack should obey the same rule.

Then we can check which idle events are not in the Statspack list:


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s right outer join V$EVENT_NAME v on s.event=v.name where v.wait_class='Idle' and s.event is null;
 
STATSPACK_IDLE_EVENT NAME                                       WAIT_CLASS
-------------------- ------------------------------------------ ----------
                     OFS idle                                   Idle
                     heartbeat redo informer                    Idle
                     LGWR worker group idle                     Idle
                     Recovery Server waiting for work           Idle
                     Recovery Server waiting restore start      Idle
                     Recovery Server Surrogate wait             Idle
                     Recovery Server Servlet wait               Idle
                     Recovery Server Comm SGA setup wait        Idle
                     parallel recovery coordinator idle wait    Idle
                     recovery sender idle wait                  Idle
                     recovery receiver idle wait                Idle
                     recovery merger idle wait                  Idle
                     virtual circuit next request               Idle
                     lreg timer                                 Idle
                     REPL Apply: txns                           Idle
                     REPL Capture/Apply: messages               Idle
                     REPL Capture: archive log                  Idle
                     PL/SQL lock timer                          Idle
                     Emon coordinator main loop                 Idle
                     Emon slave main loop                       Idle
                     AQ: 12c message cache init wait            Idle
                     AQ Cross Master idle                       Idle
                     AQPC idle                                  Idle
                     Streams AQ: load balancer idle             Idle
                     Sharded Queues : Part Maintenance idle     Idle
                     REPL Capture/Apply: RAC AQ qmn coordinator Idle
                     iowp msg                                   Idle
                     iowp file id                               Idle
                     netp network                               Idle
                     gopp msg                                   Idle

There are a lot of them. We can see that many idle events have been introduced in recent versions.

The Statspack list is an old list. Here is how to refresh it:


delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
commit;

Once I did that and ran a new Statspack report on the same snapshots as above, I got a more realistic ‘Top 5 Timed Events':


Top 5 Timed Events                                                   Avg %Total
~~~~~~~~~~~~~~~~~~                                                  wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
direct path read                                31,221         466     15   48.7
CPU time                                                       310          32.4
db file sequential read                         49,137          77      2    8.0
SQL*Net vector data to client                   15,189          31      2    3.3
enq: TM - contention                                 1          24  23937    2.5

Actually, this 22:00 to 23:00 time period is when the maintenance jobs are running. dbms_space.auto_space_advisor_job_proc likes to read your tables in bulk in order to see if there is some free space. And I don’t like this 24-second TM lock wait at a time when I thought the database was not in use. This was hidden in the original report.

Patch?

Statspack is still supported and there’s a patch to add the following events as idle:
"virtual circuit next request" "AQ Cross Master idle" "AQ: 12c message cache init wait" "AQPC idle" "Emon coordinator main loop" "Emon slave main loop" "LGWR worker group idle" "OFS idle" "REPL Apply: txns" "REPL Capture/Apply: RAC AQ qmn coordinator" "REPL Capture/Apply: messages" "REPL Capture: archive log" "Recovery Server Comm SGA setup wait" "Recovery Server Servlet wait" "Recovery Server Surrogate wait" "Recovery Server waiting for work" "Recovery Server waiting restore start" "Sharded Queues : Part Maintenance idle" "Streams AQ: load balancer idle" "gopp msggopp msg" "heartbeat redo informer" "iowp file id" "iowp msg" "lreg timer" "netp network" "parallel recovery coordinator idle wait" "recovery merger idle wait" "recovery receiver idle wait" "recovery sender idle wait" "imco timer" "process in prespawned state"
(Nice way to be referenced by google for all those improbable wait events, isn’t it?)

However, I think that filling STATS$IDLE_EVENT from V$EVENT_NAME, or maybe even replacing it with a view, would be a better long-term solution. Each version comes with new wait events, and it seems that Statspack evolves only through patches.
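A sketch of that view-based approach (untested; verify first that nothing in the Statspack code writes to STATS$IDLE_EVENT):

-- as PERFSTAT; a view on V$EVENT_NAME needs a direct grant of SELECT on V_$EVENT_NAME
drop table STATS$IDLE_EVENT;
create view STATS$IDLE_EVENT as
  select name event from V$EVENT_NAME where wait_class = 'Idle';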

 

The article Statspack Idle Events appeared first on Blog dbi services.

New Cloud-based Banking Solution Targets Default Debtors, Improves Customer Experiences

Oracle Press Releases - Tue, 2016-06-28 10:00
Press Release
New Cloud-based Banking Solution Targets Default Debtors, Improves Customer Experiences

Redwood Shores, Calif.—Jun 28, 2016

Financial institutions looking to recoup delinquent accounts while maintaining a positive association with their brands have a powerful new tool: Oracle Banking Collections. The cloud-enabled solution announced today provides lenders a more comprehensive look at debtors’ total financial pictures, along with features that let debtors more easily repay their loans.

While credit card and mortgage delinquencies are currently stable, there is a significant rise in student and auto loan defaults. In fact, Fitch Ratings stated that the 60-plus day delinquency rate for subprime car loans climbed to 5.16% in February of this year, the highest level in nearly two decades. The collections process is complex, inefficient and riddled with fraudulent activity.

"With today’s choices and technology available, consumers demand solutions that put them at the center and in control of their options. It is a unique situation to apply a customer first mindset when it comes to handling delinquent accounts, but if done correctly, the benefits of such an offering should help to lead to improved contract rates and be a real help to delinquent borrowers,” said Marc DeCastro, Research Director, Consumer Banking at IDC.

“The Federal Reserve reported that as of March 31, 2016, total household debt in North America alone was $12.25 trillion. Oracle Banking Collections provides financial institutions a single system for comprehensive collection activity across all products related to a delinquent account. The collections process is designed for the digital world with efficient workflows, improved decision making and intuitive customer centric capabilities that empower the consumer with self-service remediation,” said Chet Kamat, senior vice president, Banking Products, Oracle Financial Services. “Financial institutions can choose to deploy the solution on the Oracle Cloud or on premises.”

Each year financial institutions waste millions of dollars attempting to solve their debt collection problem with the age-old tactic of throwing third-party debt collection agents, who may use decades-old technology riddled with inaccurate borrower data, at the problem. The actions of collection agents, who know nothing about the customer or the customer's relationship with the financial institution, can border on harassment, with phone calls at all hours, dunning letters and the threat of lawsuits. The new Oracle Banking Collections offering is helping financial institutions and their collections agencies improve collections on delinquent accounts by offering three unique and powerful capabilities:

1. Detailed insights into a borrower’s entire relationship with the financial institution, enabling collections agents to know more about who they are communicating with and how best to have a meaningful conversation about remediating the account.

2. A single system of record for all collections activity, helping to support compliance with strict collections regulations including those set forth by the Consumer Financial Protection Bureau.

3. Unique self-help features that enable the consumer to proactively determine a remediation plan for their account and avoid the collections process altogether.

  • The unique features of the Oracle Banking Collections can be found here.

Contact Info
Valerie Beaudett
Oracle Corporate Communications
+1.650.400.7833
valerie.beaudett@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Valerie Beaudett

  • +1.650.400.7833

Online redefinition of a table

Learn DB Concepts with me... - Tue, 2016-06-28 09:55

Online table redefinition
Let's create a table with some dependent objects for this setup.

DROP PROCEDURE get_description;
DROP VIEW redef_tab_v;
DROP SEQUENCE redef_tab_seq;
DROP TABLE redef_tab PURGE;

CREATE TABLE redef_tab (
  id           NUMBER,
  description  VARCHAR2(50),
  CONSTRAINT redef_tab_pk PRIMARY KEY (id)
);

CREATE VIEW redef_tab_v AS
SELECT * FROM redef_tab;

CREATE SEQUENCE redef_tab_seq;

CREATE OR REPLACE PROCEDURE get_description (
  p_id          IN  redef_tab.id%TYPE,
  p_description OUT redef_tab.description%TYPE) AS
BEGIN
  SELECT description
  INTO   p_description
  FROM   redef_tab
  WHERE  id = p_id;
END;
/

CREATE OR REPLACE TRIGGER redef_tab_bir
BEFORE INSERT ON redef_tab
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  :new.id := redef_tab_seq.NEXTVAL;
END;
/


Let's confirm that the objects are created:


COLUMN object_name FORMAT A20
SELECT object_name, object_type, status FROM user_objects ORDER BY object_name;

OBJECT_NAME          OBJECT_TYPE         STATUS
-------------------- ------------------- -------
GET_DESCRIPTION      PROCEDURE           VALID
REDEF_TAB            TABLE               VALID
REDEF_TAB_BIR        TRIGGER             VALID
REDEF_TAB_PK         INDEX               VALID
REDEF_TAB_SEQ        SEQUENCE            VALID
REDEF_TAB_V          VIEW                VALID

6 rows selected.


Basic Online Table Redefinition

Now we perform an online table redefinition.

-- Check table can be redefined

EXEC DBMS_REDEFINITION.can_redef_table('ATOORPU', 'REDEF_TAB'); 
--- SCHEMA = ATOORPU, TABLE_NAME=REDEF_TAB


CREATE TABLE redef_tab2 AS
SELECT *
FROM   redef_tab WHERE 1=2;

-- Start Redefinition
EXEC DBMS_REDEFINITION.start_redef_table('ATOORPU', 'REDEF_TAB', 'REDEF_TAB2');


-- Optionally synchronize new table with interim data before index creation
EXEC DBMS_REDEFINITION.sync_interim_table('ATOORPU', 'REDEF_TAB', 'REDEF_TAB2');

-- Add new PK.
ALTER TABLE redef_tab2 ADD (CONSTRAINT redef_tab2_pk PRIMARY KEY (id));

-- Complete redefinition
EXEC DBMS_REDEFINITION.finish_redef_table('ATOORPU', 'REDEF_TAB', 'REDEF_TAB2');

-- Remove original table which now has the name of the new table
DROP TABLE redef_tab2;



Finally, we verify the status of the schema objects.


COLUMN object_name FORMAT A20
SELECT object_name, object_type, status FROM user_objects ORDER BY object_name;

OBJECT_NAME          OBJECT_TYPE         STATUS
-------------------- ------------------- -------
GET_DESCRIPTION      PROCEDURE           VALID
REDEF_TAB            TABLE               VALID
REDEF_TAB2_PK        INDEX               VALID
REDEF_TAB_SEQ        SEQUENCE            VALID
REDEF_TAB_V          VIEW                VALID


Note : The GET_DESCRIPTION procedure and REDEF_TAB_V view are still valid, but the REDEF_TAB_BIR trigger is gone. The trigger was still associated with the original table, renamed to REDEF_TAB2, so when the original table was dropped, the trigger was dropped with it.
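To carry the trigger (and other dependent objects) across, DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS can be called between the sync and finish steps; a minimal sketch (if dependents are copied this way, the manual PK creation above would be skipped):

-- copy triggers, constraints and privileges from REDEF_TAB to the interim table
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.copy_table_dependents(
    uname            => 'ATOORPU',
    orig_table       => 'REDEF_TAB',
    int_table        => 'REDEF_TAB2',
    copy_indexes     => DBMS_REDEFINITION.cons_orig_params,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    num_errors       => l_errors);
  DBMS_OUTPUT.put_line('Errors: ' || l_errors);
END;
/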
Categories: DBA Blogs

Chicago Oracle Meetup

Gerger Consulting - Tue, 2016-06-28 09:00
Are you in Chicago? We have an Oracle Professionals Meet-Up on June 30. I'll be there and would be happy to get to know you. Register at: http://meetu.ps/2XCD1J



See you there!
Yalim K. Gerger
Categories: Development

SQL Server 2016 – Query Store: retrieve query that doesn’t appear!

Yann Neuhaus - Tue, 2016-06-28 07:25

For our SQL Server 2016 event in September, I am studying the new Query Store feature.
My colleague David Barbarin wrote about Query Store and how it works a few months ago.

Simple SELECT Query

To begin, I execute a simple SELECT on a table with 3 different methods:

SELECT * FROM QS_test;
exec sp_executesql N'SELECT * FROM QS_test'
EXEC sp_GetQS_test;

“sp_GetQS_test” is a stored procedure containing the SELECT statement.
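The procedure itself is not shown in the post; presumably it looks something like this:

-- hypothetical definition of the stored procedure used above
CREATE PROCEDURE sp_GetQS_test
AS
  SELECT * FROM QS_test;
GO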
I wrote a little query against the Query Store DMVs to analyze these executions:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id 
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id   
  WHERE query_sql_text='SELECT * FROM QS_test';

[Screenshot QS_01: all three executions found in the Query Store]
As you can see in the result, all 3 queries are present in the Query Store with the same query text (‘SELECT * FROM QS_test’).

SELECT Query with a Where clause

I continue my test with a select and a where clause:

  SELECT * FROM QS_test WHERE rid=5

I run my query to find the query in the query store:
[Screenshot QS_02: no rows returned]
And, oh surprise, no query found! The query does not appear in the Query Store….
I rerun my query without the WHERE clause to see if I find something:
[Screenshot QS_03: the parameterized query text]
The result gives me the query written differently:
(@1 tinyint)SELECT * FROM [QS_test] WHERE [rid]=@1

This query has gone through parameterization, and to retrieve this information we use a new function in SQL Server 2016: fn_stmt_sql_handle_from_sql_stmt

Function sys.fn_stmt_sql_handle_from_sql_stmt

This function gives us the SQL handle for the query:
[Screenshot QS_04: the statement SQL handle returned by the function]
Then I add the function to my query to find the statement in the Query Store:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id
  CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5',NULL) fsshfss
  WHERE qsqt.statement_sql_handle=fsshfss.statement_sql_handle;

[Screenshot QS_05: the query found via its SQL handle]

It’s done: I retrieve the query thanks to this new function.
Notice that I use the statement_sql_handle column, and not the query_sql_text column, in the WHERE clause.

I also tested with the query_sql_text column, and you can see here the error that I get…
[Screenshot QS_06: collation conflict error]
The query_sql_text from the function uses SQL_Latin1_General_CP1_CI_AS, while both my database and my instance use French_CI_AS. This is not hopeless…
If you want to use the query with query_sql_text, you just need to specify the collation with the COLLATE keyword:
[Screenshot QS_07: the query using COLLATE]
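Since the screenshot is not reproduced here, this is roughly what the last two lines of the query become (a sketch; the collation name is taken from the text above):

-- coerce the function output to the database collation before comparing
CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5',NULL) fsshfss
WHERE qsqt.query_sql_text = fsshfss.query_sql_text COLLATE French_CI_AS;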

 

The article SQL Server 2016 – Query Store: retrieve query that doesn’t appear! appeared first on Blog dbi services.

BT and Oracle Collaborate to Make It Easier for Global Organizations to Move to the Cloud

Oracle Press Releases - Tue, 2016-06-28 07:00
Press Release
BT and Oracle Collaborate to Make It Easier for Global Organizations to Move to the Cloud BT becomes the first global network services provider to offer direct access to the Oracle Cloud

Redwood Shores, Calif.—Jun 28, 2016

BT and Oracle today announced that they are working together to help organizations across the world take advantage of performance, cost and innovation benefits of the cloud. Oracle customers will be able to leverage the high performance, reliability, and security features of the BT Cloud Connect environment to gain direct connectivity to the Oracle Cloud.

While organizations understand the promise and many benefits of the cloud, many are concerned about security and performance issues associated with cloud technologies. This is especially true for business-critical enterprise workloads that frequently demand high levels of availability, security and performance. Together, BT and Oracle are aiming to remove those barriers to cloud adoption by providing options for connectivity from hybrid enterprise data centers to the Oracle Cloud. 

“Cloud is the fastest growing part of Oracle’s business,” said Thomas Kurian, president of Product Development, Oracle. “This partnership will help our customers leveraging the Oracle Cloud execute on their business strategies by taking advantage of the breadth and depth of BT’s global network and its Cloud Connect offering for their high bandwidth connectivity needs.”

With BT Cloud Connect for Oracle FastConnect, customers can use a private connection based on BT’s IP Connect VPN service, which is designed to predictably, securely and consistently exchange large volumes of data between the Oracle Cloud and their own on-premises environment. BT will connect directly to Oracle’s Cloud locations in Amsterdam and London, giving customers access from anywhere in the world to services provided in European data centers. Through pre-provisioning via BT Cloud Connect, BT can cut the typical dedicated connection deployment time from months to days.

Luis Alvarez, CEO Global Services, BT, said: “Direct and reliable access to data and applications hosted in cloud environments has become critical to organizations as they embark on their digital transformation journeys. We are accelerating our drive to be the world’s leading cloud services integrator and I am proud that BT is becoming the first global network services provider to offer direct access to the Oracle Cloud.”

The Oracle Cloud is used by many of the world’s largest organizations. It supports 70 million users and more than 34 billion transactions each day and runs in 19 data centers around the world. With the Oracle Cloud, customers can take advantage of the broadest and most integrated public cloud portfolio across all layers of the IT stack, while also having the choice to bring the Oracle Cloud into their own data centers via the recently announced “Oracle Cloud at Customer” offering. Oracle FastConnect is part of Oracle IaaS, and intended to enable organizations to quickly meet IT and business goals by leveraging a set of core capabilities such as elastic computing, networking and storage.

This news builds on BT’s Cloud of Clouds portfolio strategy by allowing its multinational customers to connect to the cloud through a centrally managed high-speed network with highly predictable performance, reliability and embedded security features.

BT and Oracle aim to offer the service in the final quarter of calendar year 2016.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About BT

BT’s purpose is to use the power of communications to make a better world. It is one of the world’s leading providers of communications services and solutions, serving customers in 180 countries. Its principal activities include the provision of networked IT services globally; local, national and international telecommunications services to its customers for use at home, at work and on the move; broadband, TV and internet products and services; and converged fixed-mobile products and services. BT consists of six customer-facing lines of business: Global Services, Business and Public Sector, Consumer, EE, Wholesale and Ventures, and Openreach.

For the year ended 31 March 2016, BT Group’s reported revenue was £19,042m with reported profit before taxation of £3,029m.

British Telecommunications plc (BT) is a wholly-owned subsidiary of BT Group plc and encompasses virtually all businesses and assets of the BT Group. BT Group plc is listed on stock exchanges in London and New York.

For more information, visit www.btplc.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

How are you?

FeuerThoughts - Tue, 2016-06-28 06:26
I get this question a lot.

You probably do, too.

Sometimes nothing more is meant by it than "Hello."

Sometimes they really mean it, they really want to know.

Generally, my answer is "Great!"

'Cause no matter what relatively small irritations I have in my life, the bottom line is that my life is quite wonderful.

But I've decided that perhaps I should not simply say "Great!" I should explain why my life is so great.

So here goes:

Great!

I am an organic life form living on the only planet we know of in the entire universe that supports organic life. How wonderful is that?

Plus, I am self-aware, so I know that I am alive and can appreciate rainbows and the sound of wind moving through trees, and so on.* How incredible is that?

And, best of all, I have two smart, beautiful, hilarious, stubborn and funny granddaughters.**

So how am I?

GREAT!

* Note: just in case that sentence sounds as though I am celebrating the uniqueness of human beings, I must clarify that I happen to believe that many, many living creatures from thousands of species are self-aware and appreciate the world around them. Birds, spiders, squirrels, "etc" ....

** Photos are, of course, required to back this up:

Loey Lucille Silva


Juna Josephine Silva



Categories: Development

Difference between FGA and Virtual Private Database?

Tom Kyte - Tue, 2016-06-28 05:46
I am very confused by some terms, such as the difference between Fine-Grained Auditing and Virtual Private Database.
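For context (a sketch, not the thread's answer): FGA records who accessed what, while VPD silently rewrites queries to restrict rows. The contrast shows up in how each policy is declared (all names here are illustrative):

-- FGA: audit statements that touch the SALARY column
begin
  dbms_fga.add_policy(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    policy_name   => 'AUDIT_SAL',
    audit_column  => 'SALARY');
end;
/

-- VPD: append a predicate returned by the EMP_PREDICATE function to every query
begin
  dbms_rls.add_policy(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'EMP_VPD',
    function_schema => 'HR',
    policy_function => 'EMP_PREDICATE');
end;
/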
Categories: DBA Blogs

UTL EMAIL (HTML)

Tom Kyte - Tue, 2016-06-28 05:46
Hi Tom, I have an issue with my UTL email: when I am sending the email, the text all comes out on the same line. Please check the comments in the code for where the issue is: create or replace PROCEDURE ...
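The title suggests an HTML body; in HTML, newline characters are not rendered, so line breaks must be sent as <br> tags (for plain text, CRLF via chr(13)||chr(10) is needed instead). A minimal, illustrative UTL_MAIL sketch, since the thread's actual code is truncated above (all names hypothetical):

-- illustrative only: send the body as HTML so <br> produces the line breaks;
-- UTL_MAIL must be installed and smtp_out_server configured
begin
  utl_mail.send(
    sender     => 'app@example.com',
    recipients => 'someone@example.com',
    subject    => 'Test',
    message    => 'line one<br>line two<br>line three',
    mime_type  => 'text/html; charset=us-ascii');
end;
/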
Categories: DBA Blogs

Split a int value in multiple rows based on percentage and in multiple of another number

Tom Kyte - Tue, 2016-06-28 05:46
Hi Tom, I have a situation where I have to divide an integer into multiple rows depending on the % value given and the rounding specified. The value in the ASSIGNMENT table needs to be divided in multiples of ROUNDING depending on the CONSIGNMENT records. In...
Categories: DBA Blogs

Optimal number of partitions

Tom Kyte - Tue, 2016-06-28 05:46
Tom, I have this table with around one million records. The table has the potential to grow to 2 million, 3 million or even more... I am planning on partitioning this table using hash partitioning on the most queried column. A few questions: ...
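For reference, hash partition counts are usually chosen as a power of two so that rows spread evenly across partitions; a sketch with hypothetical names:

-- hypothetical example: 8 hash partitions (a power of 2) on the most-queried column
create table customer_txn (
  txn_id   number,
  cust_id  number,
  txn_date date
)
partition by hash (cust_id) partitions 8;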
Categories: DBA Blogs

Adaptive Cursor Sharing

Tom Kyte - Tue, 2016-06-28 05:46
Hi Tom, I am aware you have already clarified this subject many times. I have a table which stores customer transaction information. Data loads happen continuously into the table. Since it is a transaction table, given a day,...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator