Feed aggregator

How Pythian Helped Nav Canada Migrate Their Oracle Database to a Microsoft Azure SQL Database

Pythian Group - Wed, 2017-08-09 15:20

A common concern for businesses is optimizing database speed and performance, without raising costs.

Nav Canada supports flight planning with aviation weather, aeronautical information, and online flight planning. To address the need for a more cost-efficient environment for their critical flight planning data, while optimizing performance, they decided to migrate from an on-premises Oracle system to Microsoft Azure SQL Database. They turned to the database and cloud experts at Pythian to assist them.

“Pythian’s breadth of expertise spans both traditional on-premises and cloud environments, making Pythian the perfect partner as we migrated critical workloads to the cloud.”

Pascal Chamma, Manager, Nav Canada Database Technology Services.

Pythian was tasked with creating a Proof of Concept (PoC) on an Azure database, while simulating the central SQL database. The project team demonstrated that the same level of performance could be achieved within a more cost-effective cloud environment. The PoC proved that the new system could run a process (3000 lines of PL/SQL code) every 20 seconds, performing calculations on up to 3 minutes of collected data.

Nav Canada is currently implementing the system across a production environment, with Pythian advising them as needed.

Read the full case study to learn how Nav Canada was able to migrate to the cloud to reduce costs, without sacrificing performance.

Categories: DBA Blogs

Postgres vs. Oracle access paths VI – Index Scan

Yann Neuhaus - Wed, 2017-08-09 13:58

In the previous post my queries were still reading the indexed column only, from a table which had no modifications since the last vacuum, and then didn’t need to read table pages: it was Index Only Scan. However, we often need more columns than the ones that are in the index. Here is the Index Scan access path.

I’m continuing on the table that I’ve created in the first post of the series. I’ve run VACUUM (the lazy one, not the full one) and did not do any modification after that, as we have seen that Index Only Access is efficient only when there are no modifications.

create table demo1 as select generate_series n , 1 a , lpad('x',1000,'x') x from generate_series(1,10000);
SELECT 10000
create unique index demo1_n on demo1(n);
CREATE INDEX
vacuum demo1;
VACUUM

I have 10000 rows, a unique column N with decimal numbers, indexed and another column A which is not indexed.

Index Only Scan

I’ll now query one row, the one with N=1000.

explain (analyze,verbose,costs,buffers) select n from demo1 where n=1000 ;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..4.30 rows=1 width=4) (actual time=0.123..0.124 rows=1 loops=1)
Output: n
Index Cond: (demo1.n = 1000)
Heap Fetches: 0
Buffers: shared hit=3
Planning time: 0.625 ms
Execution time: 0.137 ms

It seems that the query planner estimates that only one block has to be read:

  • The startup cost of 0.29 as we have seen before
  • Read one index page, cost=4 (random_page_cost=4)
  • 1 result row to process, estimated at cpu_tuple_cost=0.01

As the index is a B*Tree with 30 pages, I expect to read at least one branch page in addition to the leaf block. The execution has actually read 3 blocks (Buffers: shared hit=3). Here it seems that Postgres decides to ignore the branches and to count only the leaf blocks in its estimation.
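As a side note, the planner constants used in these estimations can be checked on your own instance; here is a minimal sketch (the database name demo is an assumption):

psql -d demo -c "select name, setting from pg_settings
                 where name in ('seq_page_cost','random_page_cost','cpu_tuple_cost',
                                'cpu_index_tuple_cost','cpu_operator_cost')"

The defaults (seq_page_cost=1, random_page_cost=4, cpu_tuple_cost=0.01, cpu_index_tuple_cost=0.005, cpu_operator_cost=0.0025) are the values used in the calculations of this series.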

In Oracle, the estimation cost=1 and execution has read 2 blocks:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gusay436hpzck, child number 0
-------------------------------------
select /*+ */ n from demo1 where n=1000
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 2 |
|* 1 | INDEX UNIQUE SCAN| DEMO1_N | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 2 |
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("N"=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Both Oracle and Postgres read only the index here. This is the fastest access to one indexed column: no need to read the table because the column is in the index. The use case is quite limited here: just testing the existence of the value. I will now select a column other than the one used in the where clause.

Select another column

I filter on N but now query the column A which is not in the index. The Index Only Scan changes to an Index Scan:

explain (analyze,verbose,costs,buffers) select a from demo1 where n=1000 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Index Scan using demo1_n on public.demo1 (cost=0.29..8.30 rows=1 width=4) (actual time=0.010..0.010 rows=1 loops=1)
Output: a
Index Cond: (demo1.n = 1000)
Buffers: shared hit=3
Planning time: 0.639 ms
Execution time: 0.030 ms

The cost is the same except that there is one additional page to read, which pushes it to cost=8.30:

  • The startup cost of 0.29 as we have seen before
  • Read one index page, and one table page: cost=8 (random_page_cost=4)
  • 1 result row to process, estimated at cpu_tuple_cost=0.01

In Oracle the index operation is not different: we still have the INDEX UNIQUE SCAN, but on top of it there is an additional operation to read the table: TABLE ACCESS BY INDEX ROWID. The index entry returns the ROWID (the physical address of the table block, equivalent to the Postgres TID). And then we have the detail of the cost, and the execution buffer reads: one more block.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8q4tcxgk1n1vn, child number 0
-------------------------------------
select /*+ */ a from demo1 where n=1000
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 1 |00:00:00.01 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 1 | 1 | 2 (0)| 1 |00:00:00.01 | 3 |
|* 2 | INDEX UNIQUE SCAN | DEMO1_N | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 2 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("N"=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22] 2 - "DEMO1".ROWID[ROWID,10]

The important thing here is in the predicate information, where we see the part of the where clause which is not a filter applied after the scan but is used for optimal access by the index. It is displayed as access() in the Oracle execution plan:

access("N"=1000)

In the PostgreSQL execution plan, the same information is displayed as 'Index Cond':

Index Cond: (demo1.n = 1000)

Postgres Range Scan

That was retrieving only one row with an equality predicate on a unique index column. The index scan helps to get directly to the value because of the B*Tree structure. As the index is sorted, an inequality predicate can also use the index to find the rows in a range of values.

The Postgres plan looks the same, with Index Scan:

explain (analyze,verbose,costs,buffers) select a from demo1 where n<=1000 ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
Index Scan using demo1_n on public.demo1 (cost=0.29..175.78 rows=1000 width=4) (actual time=0.029..0.780 rows=1000 loops=1)
Output: a
Index Cond: (demo1.n <= 1000)
Buffers: shared hit=147
Planning time: 1.019 ms
Execution time: 0.884 ms

Same plan but of course we have more index blocks to scan, and more rows to fetch from the table, which is why the cost is higher.

In order to understand the cost, I’ve changed the query planner constants one by one. Here is what I got:

  • (cost=0.29..33.78 rows=1000 width=4) when seq_page_cost=0 instead of 1, which means that it estimates (175.78-33.78)/1=142 sequential reads
  • (cost=0.29..159.78 rows=1000 width=4) when random_page_cost=0 instead of 4, which means that it estimates (175.78-159.78)/4=4 random reads
  • (cost=0.29..165.78 rows=1000 width=4) when cpu_tuple_cost=0 instead of 0.01, which means that it estimates (175.78-165.78)/0.01=1000 rows
  • (cost=0.29..170.78 rows=1000 width=4) when cpu_index_tuple_cost=0 instead of 0.005, which means that it estimates (175.78-170.78)/0.005=1000 index entries
  • (cost=0.00..173.00 rows=1000 width=4) when cpu_operator_cost=0 instead of 0.0025, which means that it estimates (175.78-173.00)/0.0025=1112 cpu operations (116 for initial cost + 996 to get all rows)

I understand the 4 random reads from the index pages. However, I expected random reads, and not sequential reads, to fetch the rows from the table. But this is a case where the clustering factor is very good: the rows have been inserted in the same order as the indexed column, which means that those reads from the table probably access consecutive pages.

In order to validate this guess, I've traced the system calls on Linux.
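The exact tracing command is not shown here; a typical way to capture such calls is to attach strace to the session's backend process (a sketch; the database name and the pid placeholder are to be adapted):

# get the pid of the backend serving the session
psql -d demo -c "select pg_backend_pid()"
# from another terminal, attach strace to that backend and run the query
strace -f -e trace=open,lseek,read -p <backend_pid>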

25734 open("base/12924/42427", O_RDWR) = 42
25734 lseek(42, 0, SEEK_END) = 11706368
25734 open("base/12924/42433", O_RDWR) = 43
25734 lseek(43, 0, SEEK_END) = 245760

The file descriptor 42 is my table (demo1) and the descriptor 43 is the index (demo1_n). The file name is in the open() call and it includes the file id:

select relname,relfilenode from pg_class where relname='demo1';
-[ RECORD 1 ]--+------
relname | demo1
relfilenode | 42427
 
select relname,relfilenode from pg_class where relname='demo1_n';
-[ RECORD 1 ]--+--------
relname | demo1_n
relfilenode | 42433

Then we see some random reads from the index (branches and first leaf):

25734 lseek(43, 0, SEEK_SET) = 0
25734 read(43, "100036037360374 b152"..., 8192) = 8192
25734 lseek(43, 24576, SEEK_SET) = 24576
25734 read(43, "121000836360374 35023720330237 "..., 8192) = 8192
25734 lseek(43, 8192, SEEK_SET) = 8192
25734 read(43, "13245t360374 211 340237 "..., 8192) = 8192

Then we see 53 reads from the table:

25734 lseek(42, 0, SEEK_SET) = 0
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
...

Only one lseek. The other reads are all single block (8k) I/O calls but without seek, which means that they are sequential. When relying on filesystem prefetching, this may avoid the latency for each I/O call.

Then the next leaf block from the index is read, and then 52 reads from the table (no lseek):

25734 read(43, "13245t360374 211 340237 "..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
...

And again, one index block and 38 contiguous table blocks:

25734 lseek(43, 32768, SEEK_SET) = 32768
25734 read(43, "13245t360374 211 340237 "..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
25734 read(42, "40042203 4 36023330103402273010"..., 8192) = 8192
...

Here is the summary of the cost of 175.78 (a quick arithmetic check of it follows the list):

  • The startup cost of 0.29 as we have seen before
  • Estimates 4 random reads (reading 1000 rows from a 30-page index which contains 10000 rows): cost=16 (random_page_cost=4)
  • Estimates 142 sequential reads: cost=142 (seq_page_cost=1)
  • 1000 index entries to process, estimated at cost=5 (cpu_index_tuple_cost=0.005)
  • 1000 result rows to process, estimated at cost=10 (cpu_tuple_cost=0.01)
  • about 1000 operators or functions, estimated at cpu_operator_cost=0.0025
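Putting those numbers together reproduces the planner's total; this is plain arithmetic and can be verified with a one-liner:

awk 'BEGIN { print 4*4 + 142*1 + 1000*0.005 + 1000*0.01 + (116+996)*0.0025 }'
# -> 175.78

The 116 initial operations correspond to the startup cost of 0.29 (116 x 0.0025) and the remaining 996 operations to the rows returned.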

The very interesting thing here is that the query planner is totally aware of the clustering factor and uses sequential read estimation.

Oracle Range Scan

Here is the same query on the similar table on Oracle:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID a3gqx19xs9wxq, child number 0
-------------------------------------
select /*+ */ a from demo1 where n<=1000
----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 147 (100)| 1000 |00:00:00.01 | 148 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| DEMO1 | 1 | 1000 | 147 (0)| 1000 |00:00:00.01 | 148 |
|* 2 | INDEX RANGE SCAN | DEMO1_N | 1 | 1000 | 4 (0)| 1000 |00:00:00.01 | 4 |
----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("N"<=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22] 2 - "DEMO1".ROWID[ROWID,10]

The strace output shows calls to pread:

open("/u01/oradata/CDB1A/PDB/users01.dbf", O_RDWR|O_DSYNC) = 7
fcntl(7, F_SETFD, FD_CLOEXEC) = 0
fcntl(7, F_DUPFD, 256) = 258
fcntl(258, F_SETFD, FD_CLOEXEC) = 0
close(7) = 0
pread(258, "62422313G275"142532'!1?275""..., 8192, 2252800 ) = 8192
pread(258, "62422413C275"14x2432'!1?275""..., 8192, 2260992 ) = 8192
pread(258, "6242313v3362274"24b+1&!1354274""..., 8192, 24731648 ) = 8192
pread(258, "6242314v3362274"24e*1&!1354274""..., 8192, 24739840 ) = 8192
pread(258, "6242315v3362274"24d51&!1354274""..., 8192, 24748032 ) = 8192
pread(258, "6242316v3362274"24g41&!1354274""..., 8192, 24756224 ) = 8192
pread(258, "6242317v3362274"24f71&!1354274""..., 8192, 24764416 ) = 8192
pread(258, "6242320v3362274"24y71&!1354274""..., 8192, 24772608 ) = 8192

pread is similar to lseek()+read() here and, as far as I know, Linux detects when there is no need to seek, which allows prefetching as well. Oracle also has its own prefetching, but I'll not go into the details here (read Timur Akhmadeev on the Pythian blog about this).

With Oracle, there is no need to run strace because all system calls are instrumented as ‘wait events’ and here is a trace:

PARSE #140375247563104:c=2000,e=1872,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=187737470,tim=53267437268
EXEC #140375247563104:c=0,e=147,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=187737470,tim=53267437481
WAIT #140375247563104: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=74022 tim=53267437532
WAIT #140375247563104: nam='db file sequential read' ela= 8 file#=12 block#=275 blocks=1 obj#=74023 tim=53267437679
WAIT #140375247563104: nam='db file sequential read' ela= 5 file#=12 block#=276 blocks=1 obj#=74023 tim=53267437785
WAIT #140375247563104: nam='db file sequential read' ela= 5 file#=12 block#=3019 blocks=1 obj#=74022 tim=53267437902
FETCH #140375247563104:c=0,e=368,p=3,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=187737470,tim=53267437977
WAIT #140375247563104: nam='PGA memory operation' ela= 14 p1=0 p2=0 p3=0 obj#=74022 tim=53267438017
WAIT #140375247563104: nam='SQL*Net message from client' ela= 280 driver id=1413697536 #bytes=1 p3=0 obj#=74022 tim=53267438385
WAIT #140375247563104: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=74022 tim=53267438419
WAIT #140375247563104: nam='db file sequential read' ela= 3 file#=12 block#=3020 blocks=1 obj#=74022 tim=53267438443
WAIT #140375247563104: nam='PGA memory operation' ela= 7 p1=1114112 p2=2 p3=0 obj#=74022 tim=53267438475
WAIT #140375247563104: nam='db file sequential read' ela= 5 file#=12 block#=3021 blocks=1 obj#=74022 tim=53267438504
WAIT #140375247563104: nam='db file sequential read' ela= 3 file#=12 block#=3022 blocks=1 obj#=74022 tim=53267438532
WAIT #140375247563104: nam='db file sequential read' ela= 2 file#=12 block#=3023 blocks=1 obj#=74022 tim=53267438552
WAIT #140375247563104: nam='db file sequential read' ela= 3 file#=12 block#=3024 blocks=1 obj#=74022 tim=53267438576
WAIT #140375247563104: nam='db file sequential read' ela= 4 file#=12 block#=3025 blocks=1 obj#=74022 tim=53267438603
WAIT #140375247563104: nam='db file sequential read' ela= 26 file#=12 block#=3026 blocks=1 obj#=74022 tim=53267438647
WAIT #140375247563104: nam='db file sequential read' ela= 4 file#=12 block#=3027 blocks=1 obj#=74022 tim=53267438680
WAIT #140375247563104: nam='db file sequential read' ela= 2 file#=12 block#=3028 blocks=1 obj#=74022 tim=53267438699
WAIT #140375247563104: nam='db file sequential read' ela= 4 file#=12 block#=3029 blocks=1 obj#=74022 tim=53267438781
WAIT #140375247563104: nam='db file sequential read' ela= 3 file#=12 block#=3030 blocks=1 obj#=74022 tim=53267438807
WAIT #140375247563104: nam='db file sequential read' ela= 28 file#=12 block#=3031 blocks=1 obj#=74022 tim=53267438878
...

The name 'sequential read' does not mean the same thing as the Postgres 'sequential read': it only means single-block reads that are done one after the other, but they are actually random reads. However, looking at the block# they appear to be reading contiguous blocks.

At the end, because I have an index with a good clustering factor, and because I'm using the defaults on Linux without direct reads and asynchronous I/O, the execution is very similar to the Postgres one: read the few index blocks and follow the pointers to the 140 blocks of the table.

The cost estimation looks similar (same number) between Postgres and Oracle but it is not the same unit. Postgres estimates the cost with sequential reads, but Oracle estimates the cost as random reads. In addition to that, Postgres, with its default planner parameters, gives more importance than Oracle to the CPU usage.

This is the good case of Index Access where we have a good clustering/correlation factor between the physical order of the table and the logical order of the index. The random reads are finally behaving as sequential read because there is no seek() between them. You can imagine that in the next post I’ll try the same with a very bad clustering factor.

 

This article, Postgres vs. Oracle access paths VI – Index Scan, appeared first on Blog dbi services.

Dubai Duty Free Completes Upgrade of Oracle Retail Release 16 in 8 Months

Oracle Press Releases - Wed, 2017-08-09 10:30
Press Release
Dubai Duty Free Completes Upgrade of Oracle Retail Release 16 in 8 Months Airport Retailer Deploys New Tools That Provide Accurate Inventory Visibility and Operational Productivity

Redwood Shores, Calif.—Aug 9, 2017

Dubai Duty Free is the first customer to upgrade to Oracle Retail Release 16, Oracle announced today. One of the largest airport retailers in the world, Dubai Duty Free provides customers with first-class service, excellent value, a wide range of quality products and a world-class shopping environment. Now employing almost 6,000 people, the brand has consistently raised the benchmark for airport retailing while welcoming more than 78 million shoppers in 2016.

In less than 8 months, Dubai Duty Free completed its upgrade of Oracle Retail Merchandising System, Price Management, Invoice Match, Sales Audit, Warehouse Management, Store Inventory Management and Retail Insights from Release 12 to Release 16. The upgrade project also included interfaces to e-commerce, the point-of-sale system, warehouse automation and the Oracle E-business suite.

The new functionality of Release 16 will increase user productivity through exception-based retailing, persona specific dashboards, operational efficiencies, and widespread mobility. Additionally, the new technology allows click and collect with accurate inventory visibility and best-in-class processes with Oracle Commerce and Oracle Retail.

"Oracle understands the Middle Eastern market and is constantly extending the breadth and depth of its applications to match the challenges of modern organizations in all industry sectors,” said Ramesh Cidambi, Chief Operating Officer, Dubai Duty Free. “Dubai Duty Free incorporates Oracle Retail solutions that reflect the retail expertise, knowledge, and understanding that we need as our business continues to grow.”

“Tata Consulting Services (TCS) partnered with my team and Oracle Retail to ensure the project was delivered on time and within budget. Dubai Duty Free established a Train the Trainer program and anointed a User Acceptance Team to ensure a smooth roll out across the enterprise,” said Cidambi. “Setting up an Oracle Customer Management Office for better turnaround time on Service Requests has helped in timely resolution of priority issues. TCS provided expertise, enthusiasm and a collaborative approach to the implementation. The implementation of Oracle Retail is one of the smoothest implementations of my retail career.”

“Airports are by definition the retail environment where speed matters most,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “To deploy Oracle Retail Release 16 in such rapid time reflects the pace at which technology disruption is changing the ways people shop and is an incredible achievement by Dubai Duty Free and TCS.”

About Dubai Duty Free

Founded in 1983, Dubai Duty Free recorded first-year sales of US$20 million and has grown into one of the biggest travel retail operators in the world, with a sales turnover of US$1.85 billion in 2016.

Now employing almost 6,000 people, the operation has consistently raised the benchmark for airport retailing and it continues to grow. Dubai Duty Free currently operates some 36,000 sqm of retail space at Dubai International Airport and 2,500 sqm at Al Maktoum International that will grow in line with the massive development plans of Dubai South, which ultimately cover some 80,000 sqm of retail. 

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • 415-595-1584

Applying the first RU (Release Update) for Oracle Database 12.2.0.1 on Docker environment

Marcelo Ochoa - Wed, 2017-08-09 08:30
There is a great post on Applying the first RU for Oracle Database 12.2.0.1 written by my friend Mike.
But this post is about how to apply this important patch in a Docker environment using the official Oracle Docker images.
First, DBAs are sometimes afraid of applying patches, for two important reasons:

  • What happens if something goes wrong? How do I roll back this process quickly in production?
  • What is my downtime? This matters especially on production systems, where minutes mean money.

Well, in a Docker environment these two points are quickly answered:

  • You can roll back the patch on the binaries by simply stopping the Docker container started with the RU image and going back to the official image. Remember that in a Docker environment the binaries and libraries are part of the layered filesystem from which your container starts, and it acts like an SVN branch: if you want to undo the changes made to the binary files by a patch, you can roll them back by stopping your container and starting it again from the previously used image (see the sketch after this list).
  • The downtime, as I will show below, is only the time involved in shutting down your database and starting it again with the new image. The time for patching your binaries is NOT part of your downtime: when the database starts for the first time with a new patched image, there is only the fixed time of the datapatch utility, which is called once.
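As a rough illustration of that rollback (the container name demo is taken from the example later in this post; the volume path and port mapping are assumptions that depend on your own run script):

# stop and remove the container that was started from the patched image
docker rm -f demo
# start it again from the official base image, reusing the same external datafiles
docker run -d --name demo -p 1521:1521 \
  -v /oradata:/opt/oracle/oradata \
  oracle/database:12.2.0.1-ee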

Let's see it in action. First, check that you have the two Oracle Database Release 2 images ready in your local repository:
If you have a running container with the official R2 base image oracle/database:12.2.0.1-ee, the start script may look like:
Note that your datafiles reside outside the container, as usual, to survive the stop/start/rm container life-cycle, which obviously makes sense for a production database.
Now I will patch this database with the first RU patchset. To do that, the run command will point to the image patched with the 26123830 patchset; here is a screenshot:
Here is the process for applying the patch:
The database downtime is about:

  • 0m4.869s (docker rm -f demo)
  • 0m8.503s (docker run with new image, ./run-demo-patch.sh file)
  • 1m20s (time taken by SQL Patching tool starting time 12:43:06, ending time 12:44:26)

Not bad for my modest laptop.
But what is under the hood of the new patched image with the RDBMS RU included?
First, there is an official script provided by Oracle to apply generic patches to the official image (here is the link at GitHub), but this script doesn't take into account the post-processing task at first startup, so I decided to provide my own, using this Dockerfile and a modified startDB.sh script.
Here is the explanation of the Dockerfile. First, the Docker image tagged oracle/database:12.2.0.1-ee-26123830 is built on top of oracle/database:12.2.0.1-ee:
FROM oracle/database:12.2.0.1-ee
Next, we have to put the downloaded patch zip file in this directory; it is referenced with the line:
ENV PATCH_FILE=p26123830_122010_Linux-x86-64.zip
Then the Dockerfile continues with Mike's suggestions and leaves a similar entry point for this container as in the official Dockerfile.ee script.
Finally, the startDB.sh script was modified with a hook which controls whether the post-apply stage of the patch has already been run; here is the logic (shown as an image in the original post):
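Below is only a hypothetical sketch of such a hook, with an invented marker file, assuming datapatch handles the post-apply step:

# run the SQL patching step only once, at the first startup with the patched image
if [ ! -f "$ORACLE_BASE/oradata/.ru_26123830_applied" ]; then
   "$ORACLE_HOME/OPatch/datapatch" -verbose
   touch "$ORACLE_BASE/oradata/.ru_26123830_applied"
fi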

Building a new image with the patch applied to the binaries/libraries is as simple as calling docker with:
docker build -t "oracle/database:12.2.0.1-ee-26123830" .

Note that the process described above could be executed by an experienced DBA or a DevOps operator, and the process of stopping a production database and starting it again with the patch installed could be executed by a novice DBA or any other operator.

SQL Server on Linux: Introduction to DBFS experimental tool

Yann Neuhaus - Wed, 2017-08-09 08:14

A couple of months ago, Microsoft announced two additional command line tools for SQL Server: mssql-scripter and DBFS. The latter has drawn my attention because it exposes live data from SQL Server DMVs as virtual files in a virtual directory on the Linux operating system. Microsoft has probably taken another positive step in SQL Server's adoption on Linux. Indeed, in a Linux world, we may get kernel performance and configuration data either directly from procfs or indirectly by using tools that rely on procfs in the background.


DBFS uses the FUSE filesystem module to expose DMVs and, according to the Microsoft blog, reading data from each concerned DMV file is a live process that ensures you always get fresh data.

[mikedavem@sql server]$ rpm -qR $(rpm -qa | grep dbfs)
glibc
fuse
fuse-devel
freetds
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadIsXz) <= 5.2-1

After installing DBFS, let’s start the tool by using the following command line:

[mikedavem@sql mssql]$ dbfs -m /var/opt/mssql/dmv -c /var/opt/mssql/secrets/dmvtool.conf -v      
1: Processing entry for section server in configuration file:
SUCCESSFULLY added entry for server server.

The configuration file (-c parameter) stores sensitive information used to connect to the SQL Server instance (hostname, user and "insecure" password). You should consider protecting this file with suitable permissions.

DBFS runs as a background process by default, but you may change this behavior by using the -f parameter at startup.

[mikedavem@sql mssql]$ ps -e | grep dbfs
  2673 ?        00:00:00 dbfs

At this stage, the negative point is that you have to start DBFS manually if the server restarts. I had thought of wrapping DBFS in a cron job that runs at server startup, but my preference would be the ability to control DBFS through systemd as a service.
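For reference, a minimal systemd unit could look like the following sketch (the dbfs binary path and the service user are assumptions; the mount point and configuration file are the ones used above):

sudo tee /etc/systemd/system/dbfs.service <<'EOF'
[Unit]
Description=DBFS - SQL Server DMVs exposed as a filesystem
After=network.target

[Service]
Type=simple
User=mikedavem
ExecStart=/usr/bin/dbfs -f -m /var/opt/mssql/dmv -c /var/opt/mssql/secrets/dmvtool.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable dbfs.service
sudo systemctl start dbfs.service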

Exploring the DMV mounted filesystem

In fact, every DMV is exposed in two formats (normal file and JSON file).

[mikedavem@sql server]$ pwd
/var/opt/mssql/dmv/server
[mikedavem@sql server]$ ls -I "*.json" | wc -l
494

DBFS exposes a lot of DMVs, as we notice above. We may also want to filter by the DMV categories described in the BOL (Books Online) by using grep.

For instance, AlwaysOn related DMVs …

[mikedavem@sql server]$ ls -I "*.json" | grep dm_hadr
dm_hadr_automatic_seeding
dm_hadr_auto_page_repair
dm_hadr_availability_group_states
dm_hadr_availability_replica_cluster_nodes
dm_hadr_availability_replica_cluster_states
dm_hadr_availability_replica_states
dm_hadr_cluster
dm_hadr_cluster_members
dm_hadr_cluster_networks
dm_hadr_database_replica_cluster_states
dm_hadr_database_replica_states
dm_hadr_instance_node_map
dm_hadr_name_id_map
dm_hadr_physical_seeding_stats
…

… or database related DMVs

[mikedavem@sql server]$ ls -I "*.json" | grep dm_db
dm_db_column_store_row_group_operational_stats
dm_db_column_store_row_group_physical_stats
dm_db_file_space_usage
dm_db_fts_index_physical_stats
dm_db_index_usage_stats
dm_db_log_space_usage
dm_db_mirroring_auto_page_repair
dm_db_mirroring_connections
dm_db_mirroring_past_actions
dm_db_missing_index_details
dm_db_missing_index_groups
dm_db_missing_index_group_stats
dm_db_partition_stats
dm_db_persisted_sku_features
dm_db_rda_migration_status
dm_db_rda_schema_update_status
dm_db_script_level
dm_db_session_space_usage
dm_db_task_space_usage 
…

The schema name prefix (sys) is not present in the DMV-related file names.

Extracting data from DMV related files

Linux provides very powerful tools to consume data from files, like tr, cut, split, sort, join, cat, grep and awk, to cite a few of them. You may find some examples on GitHub, but I decided to get my own experience by attempting to address some usual DBA queries.

  • Extracting data from sys.dm_os_sys_info to retrieve CPU, memory available information on the server as well as the memory manager consumption

We may use the cut command to extract the required columns from the dm_os_sys_info file.

[mikedavem@sql server]$ cut -d$'\t' -f3,4,5,7,8,9,15,27 dm_os_sys_info
cpu_count       hyperthread_ratio       physical_memory_kb      committed_kb    committed_target_kb      visible_target_kb       scheduler_count virtual_machine_type_desc
4       4       3918848 207160  3914240 3914240 4       HYPERVISOR

But the above output is not very readable, and we may want to display the information differently (in columns rather than in rows). Using awk may be the best solution in this case:

[mikedavem@sql server]$ for ((i=1;i<$(cut -d$'\t' -f3,4,5,7,8,9,15,27 dm_os_sys_info | head -n 1 | wc -w);i++)); do awk '{print $'$i'}' <(cut -d$'\t' -f3,4,5,7,8,9,15,27 dm_os_sys_info) | tr '\n' ' ';echo;done | column -t
cpu_count			4
hyperthread_ratio	4
physical_memory_kb	3918848
committed_kb		207296
committed_target_kb	3914240
visible_target_kb	3914240
scheduler_count		4

 

  • Extracting information from sys.databases

A typical query I may see is to retrieve database information including name, id, state, recovery model and owner. Basically, we have to join two DMVs to get all the requested information: sys.databases and sys.server_principals. In the Linux world, you may also use the join command to gather information from different files, but it requires pre-sorting each data file first.

But getting the desired output may be challenging. Indeed, we will probably first choose the grep / cut tools to extract only the rows and columns we want. But using such tools prevents keeping the column header context, and getting only column values from the DMV-related files may be meaningless in this case, as shown below:

[mikedavem@sql server]$ join -1 2 -2 3 -t$'\t' -o 2.1,2.2,2.3,2.4,2.5,1.1 \
<(grep -Ev '^[NT|##]|SERVER_ROLE' server_principals | cut -d$'\t' -f 1,3 | sort -t$'\t' -k2) \
<(cut -d$'\t' -f 1,2,4,14,22 databases | sort -t$'\t' -k3) | column -t
model				3  01  ONLINE  FULL    sa
test				6  01  ONLINE  FULL    sa
ApplixEnterprise	5  01  ONLINE  SIMPLE  sa
master				1  01  ONLINE  SIMPLE  sa
msdb				4  01  ONLINE  SIMPLE  sa
tempdb				2  01  ONLINE  SIMPLE  sa

But preserving the column context may also become a concern if you want to sort column data values. Indeed, let's say you want to sort the above output by the database_id column value. You will quickly notice that the header line is included by the sort operator. Definitely not the result we expect in this case. So, in order to meet our requirement, we may again use the very powerful awk command, as shown below:

[mikedavem@sql server]$ join --header -1 2 -2 3 -t$'\t' -o 2.1,2.2,2.3,2.4,2.5,1.1 \
<(head -n1 server_principals | cut -d$'\t' -f 1,3;grep -Ev '^[NT|##]|SERVER_ROLE' server_principals | cut -d$'\t' -f 1,3 | sort -t$'\t' -k2) \
<(head -n 1 databases | cut -d$'\t' -f 1,2,4,14,22;cut -d$'\t' -f 1,2,4,14,22 databases | sort -t$'\t' -k3) \
| awk 'NR<2{print $0;next}{print $0 | "sort -k2"}' | column -t
name				database_id  	owner_sid	state_desc	recovery_model_desc	name
master				1				01		ONLINE      	SIMPLE				sa
tempdb				2            	01		ONLINE      	SIMPLE				sa
model				3            	01		ONLINE      	FULL				sa
msdb				4            	01		ONLINE      	SIMPLE				sa
ApplixEnterprise	5            	01		ONLINE      	SIMPLE				sa
test				6            	01		ONLINE      	FULL				sa

We finally managed to display the desired output, but my feeling is that we worked hard for few results, and usually we have to deal with more DMVs and more complex joins in the same query than in the previous example. Furthermore, we often have to aggregate data from DMVs to get relevant results, and I can easily imagine the additional effort needed to produce the corresponding scripts if we extend the scenarios we have to deal with.

There are probably other tools on Linux to make the task easier, but my opinion is that DBFS should include the ability to execute the custom queries DBAs already use day-to-day, to go beyond the current capabilities. We didn't cover the JSON format in this blog. This is another way to consume data from the DMV-related files but, in my opinion, it targets the developer audience more than the DBAs.
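Just as a quick illustration of that JSON side (assuming the jq utility is installed), each DMV also has a .json counterpart that can be pretty-printed or post-processed:

jq '.' /var/opt/mssql/dmv/server/dm_os_sys_info.json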

I noticed that I was not the first and only one to think about those enhancements, judging by the DBFS GitHub repository and its issues section. Anyway, I like Microsoft's initiative to give us the ability to consume DMV-related data from the command line in a Linux world, and I'm looking forward to the next features of this tool!

 

 

 

This article, SQL Server on Linux: Introduction to DBFS experimental tool, appeared first on Blog dbi services.

A Fishbowl Success Story: The Benefits of Consolidating Disparate CAD Databases

A large medical device manufacturer wanted to fully integrate their R&D, engineering, and manufacturing organizations. This would allow a more efficient, capable and robust product development system that would help the flow of new, innovative products and never fall short on quality.

One key obstacle was the amount of data scattered across the globe in various PDM, PLM and network folders. This data needed to be organized and consolidated into a unified system with quality processes that would achieve FDA certification. This consolidation would enable individuals to access accurate data from any location at any time. From a CAD data perspective alone, there were hundreds of thousands of SolidWorks files across 7+ locations around the world in 4+ PDM/PLM systems, plus random network file folders.

The company partnered with Fishbowl to migrate the Solidworks PDM, PLM, CAD systems into their single global Windchill PDMLink system.  A key criterion for them choosing Fishbowl was Fishbowl’s LinkExport and LinkLoader family of products.  LinkExport automates the data extraction from PDMWorks and Enterprise PDM and LinkLoader automates the bulk loading into Windchill.

THE PLAN

The migration plan was to have separate migrations for each location. Each production migration could be completed over a weekend to minimize business impact (e.g. users would check files into PDMWorks – or whatever system they were using – on Friday and then check them out of Windchill on Monday). This approach spread out the work and lowered risk, since each location also needed to comply with quality audits as part of their test and production migration passes.

RESULTS

Fishbowl successfully executed 7 migrations that consisted of 100,000+ files in total. 60,000+ files came from five separate Enterprise PDM and PDMWorks systems and another 40,000+ files from network file folders. All data was bulk loaded into a single Windchill PDMLink system, and each migration was completed over a weekend, so minimal disruption occurred. The project ROI was less than 6 months, and the increased efficiencies and innovation have resulted in huge corporate gains.

 

Contact Rick Passolt for more information on LinkLoader and LinkExport
Webinar: Automate and Expedite PTC Windchill Bulk Loading

 

Date: August 17th, 2017

Time: 1:00-2:00pm CST

Speaker: Rick Passolt – Senior Account Executive

Register

The post A Fishbowl Success Story: The Benefits of Consolidating Disparate CAD Databases appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

MariaDB – How to migrate quickly MySQL to MariaDB over different hosts with mydumper/myloader and ncat

Yann Neuhaus - Wed, 2017-08-09 07:27

A lot of possibilities exist to migrate MySQL to MariaDB. In this blog I would like to show a scenario where I migrate MySQL 5.7.19 with a lot of databases to MariaDB 10.2.7, which is on a different host. The tools I am using are mydumper/myloader and ncat.

Please be aware that mydumper does not come out of the box. It has to be installed beforehand, like explained in my previous blog: https://blog.dbi-services.com/mariadb-speed-up-your-logical-mariadb-backups-with-mydumper/

The same applies to ncat. It might not be on your system. However, the installation of ncat is quite simple. Just run

# yum install nmap-ncat

Ncat is just a networking utility which reads and writes data across networks from the command line. A quite underestimated tool, from my point of view. And please take care: ncat does not encrypt your network traffic by default. That's why it might be faster than copying the dump via scp/sftp. Ncat operates in one of two primary modes: connect mode and listen mode. So, we need to install it on both hosts. In my case, on node mysql01 (connect mode) and on mysql02 (listen mode). If you want to learn more about ncat, take a look at the following web site: http://nmap.org/ncat
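To illustrate the two modes with a trivial example (hypothetical file names; the port and options are the same ones used later in this post):

# on the receiving host (listen mode)
nc -4 -l 33333 > received.tar.gz
# on the sending host (connect mode)
nc --send-only mysql02 33333 < backup.tar.gz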

Ok. Let’s get started. First of all, we take a look at the current MySQL instance. As you can see in the output below, I do have a lot of databases that I want to migrate, e.g. air, dbi and a couple of sakila databases.

mysql@mysql01:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] u | grep -A 1 mysqld6
mysqld6 is running        : 5.7.19 (mysql-5.7.19)
Port                      : 33006

mysqld6-(root@localhost) [(none)]> select @@version;
+------------+
| @@version  |
+------------+
| 5.7.19-log |
+------------+
1 row in set (0.00 sec)

mysqld6-(root@localhost) [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| air                |
| dbi                |
| mysql              |
| performance_schema |
| sakila             |
| sakila01           |
| sakila02           |
| sakila03           |
| sakila04           |
| sakila05           |
| sakila06           |
| sys                |
| wrs                |
+--------------------+
14 rows in set (0.00 sec)

MyDumper comes with a quite cool parameter called --regex. By using the --regex parameter, we can dump out all databases except the ones we don't want in an elegant way, e.g. we might not want to dump (mysql|test|information_schema|sys|performance_schema)

mysql@mysql01:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] /u00/app/mysql/product/tools/mydumper/bin/mydumper --defaults-file=/u00/app/mysql/admin/mysqld6/.my.cnf \
--threads=6 --regex '^(?!(mysql|test|information_schema|sys|performance_schema))' \
--outputdir=/u99/mysqlbackup/mysqld6/mydumper_mysqld6

Ok. We have now dumped out all the databases and we can copy them over to the MariaDB host with ssh, ftp, rsync or other tools. Or we do it via ncat directly. To do so, we need to start ncat in listen mode (-l) on the destination host mysql02.

-- On host mysql02:

mysql@mysql02:/u00/app/mysql/ [mysqld6] cd /u99/mysqlbackup/mysqld6/mydumper_mysqld6
mysql@mysql02:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6]
mysql@mysql02:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] nc -4 -l 33333 | tar -xzvp
...
this host is waiting now for network packets

On our source node, we start mydumper, pipe it to tar and send it to ncat.

On host mysql01:

mysql@mysql01:/u00/app/mysql/ [mysqld6] cd /u99/mysqlbackup/mysqld6/mydumper_mysqld6
mysql@mysql01:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6]
mysql@mysql01:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] /u00/app/mysql/product/tools/mydumper/bin/mydumper --defaults-file=/u00/app/mysql/admin/mysqld6/.my.cnf \
--threads=6 --regex '^(?!(mysql|test|information_schema|sys|performance_schema))' | tar -czv -f - .  | nc --send-only mysql02 33333
./
./export-20170809-062635/
./export-20170809-062635/air-schema-create.sql
./export-20170809-062635/dbi-schema-create.sql
./export-20170809-062635/sakila-schema-create.sql
./export-20170809-062635/sakila01-schema-create.sql
./export-20170809-062635/sakila02-schema-create.sql
./export-20170809-062635/sakila03-schema-create.sql
./export-20170809-062635/sakila04-schema-create.sql
./export-20170809-062635/sakila05-schema-create.sql
./export-20170809-062635/sakila06-schema-create.sql
./export-20170809-062635/wrs-schema-create.sql
./export-20170809-062635/metadata
./export-20170809-062635/dbi.dbi_t.sql
./export-20170809-062635/sakila05.category.sql
./export-20170809-062635/sakila.actor.sql
./export-20170809-062635/sakila.address.sql
./export-20170809-062635/sakila.category.sql
./export-20170809-062635/sakila.city.sql
./export-20170809-062635/sakila.country.sql
./export-20170809-062635/sakila.customer.sql
./export-20170809-062635/sakila.film.sql
./export-20170809-062635/sakila.film_actor.sql
./export-20170809-062635/sakila.film_category.sql
./export-20170809-062635/sakila.film_text.sql
./export-20170809-062635/sakila.inventory.sql
./export-20170809-062635/sakila.language.sql
./export-20170809-062635/sakila.payment.sql
./export-20170809-062635/sakila.rental.sql
./export-20170809-062635/sakila.staff.sql
./export-20170809-062635/sakila.store.sql
./export-20170809-062635/sakila01.actor.sql
./export-20170809-062635/sakila01.address.sql
...
...

On the other terminal, you can see that the files are being received.

On host mysql02:

mysql@mysql02:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] nc -4 -l 33333 | tar -xzvp
./
./export-20170809-062635/
./export-20170809-062635/air-schema-create.sql
./export-20170809-062635/dbi-schema-create.sql
./export-20170809-062635/sakila-schema-create.sql
./export-20170809-062635/sakila01-schema-create.sql
./export-20170809-062635/sakila02-schema-create.sql
./export-20170809-062635/sakila03-schema-create.sql
./export-20170809-062635/sakila04-schema-create.sql
./export-20170809-062635/sakila05-schema-create.sql
./export-20170809-062635/sakila06-schema-create.sql
./export-20170809-062635/wrs-schema-create.sql
./export-20170809-062635/metadata
./export-20170809-062635/dbi.dbi_t.sql
./export-20170809-062635/sakila05.category.sql
./export-20170809-062635/sakila.actor.sql
./export-20170809-062635/sakila.address.sql
./export-20170809-062635/sakila.category.sql
./export-20170809-062635/sakila.city.sql
./export-20170809-062635/sakila.country.sql
./export-20170809-062635/sakila.customer.sql
./export-20170809-062635/sakila.film.sql
./export-20170809-062635/sakila.film_actor.sql
./export-20170809-062635/sakila.film_category.sql
./export-20170809-062635/sakila.film_text.sql
./export-20170809-062635/sakila.inventory.sql
./export-20170809-062635/sakila.language.sql
./export-20170809-062635/sakila.payment.sql
./export-20170809-062635/sakila.rental.sql
./export-20170809-062635/sakila.staff.sql
./export-20170809-062635/sakila.store.sql
./export-20170809-062635/sakila01.actor.sql
./export-20170809-062635/sakila01.address.sql
...
...
...

mysql@mysql02:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] ls -l
total 20
drwx------ 2 mysql mysql 16384 Aug  9 06:26 export-20170809-062635

That's it. We now have all the files on our destination host mysql02. Now it's time to import the data via myloader into the new MariaDB 10.2.7 instance, which is empty at the moment.

mysql@mysql02:/u00/app/mysql/ [mysqld6] u | grep -A 1 mysqld6
mysqld6 is running        : 10.2.7-MariaDB (mariadb-10.2.7)
Port                      : 33006

mysql@mysql02:/u00/app/mysql/product/tools/mydumper-0.9.2/bin/ [mysqld6] mq
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.2.7-MariaDB-log MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysqld6-(root@localhost) [(none)]> select @@version;
+--------------------+
| @@version          |
+--------------------+
| 10.2.7-MariaDB-log |
+--------------------+
1 row in set (0.00 sec)

By default, the myloader tool starts 4 parallel threads and runs in verbose mode 2, which means that only warning messages are shown. In case you want it to be more verbose, you can specify --verbose=3

mysql@mysql02:/u99/mysqlbackup/mysqld6/mydumper_mysqld6/ [mysqld6] /u00/app/mysql/product/tools/mydumper/bin/myloader \
--defaults-file=/u00/app/mysql/admin/mysqld6/.my.cnf --threads=6 --verbose=3 \
--directory /u99/mysqlbackup/mysqld6/mydumper_mysqld6/export-20170809-062635
** Message: 6 threads created
** Message: Creating database `air`
** Message: Creating table `air`.`muc`
** Message: Creating database `dbi`
** Message: Creating table `dbi`.`dbi_t`
** Message: Creating table `dbi`.`dbi_t2`
** Message: Creating database `sakila`
** Message: Creating table `sakila`.`actor`
** Message: Creating table `sakila`.`address`
** Message: Creating table `sakila`.`category`
** Message: Creating table `sakila`.`city`
** Message: Creating table `sakila`.`country`
** Message: Creating table `sakila`.`customer`
** Message: Creating table `sakila`.`film`
...
...

If you take a look at the process list, you should see 6 threads doing the work.

mysqld6-(root@localhost) [(none)]> show processlist;
...
...
| 30 | root        | localhost | sakila01 | Query   |    0 | Opening tables          | INSERT INTO `city` VALUES
(1,"A Corua (La Corua)",87,"2006-02-15 03:45:25"),
(2,"Abha",82,"2006-02-1 |    0.000 |
| 31 | root        | localhost | sakila   | Query   |    6 | update                  | INSERT INTO `film_actor` VALUES
(1,1,"2006-02-15 04:05:03"),
(1,23,"2006-02-15 04:05:03"),
(1,25,"20 |    0.000 |
| 32 | root        | localhost | sakila   | Query   |    4 | update                  | INSERT INTO `payment` VALUES
(1,1,1,76,2.99,"2005-05-25 11:30:37","2006-02-15 21:12:30"),
(2,1,1,573 |    0.000 |
| 33 | root        | localhost | sakila   | Query   |    3 | update                  | INSERT INTO `rental` VALUES
(1,"2005-05-24 22:53:30",367,130,"2005-05-26 22:04:30",1,"2006-02-15 20: |    0.000 |
| 34 | root        | localhost | sakila01 | Query   |    2 | update                  | INSERT INTO `address` VALUES
(1,"47 MySakila Drive",NULL,"Alberta",300,"","",">\n2]c |    0.000 |
| 35 | root        | localhost | sakila   | Query   |    4 | update                  | INSERT INTO `inventory` VALUES

That's it. The data is now migrated to MariaDB 10.2.7.

Conclusion

Many ways exist to migrate MySQL to MariaDB. Using mydumper/myloader in combination with ncat is just one of them. However, from my point of view, a quite cool one.

 

This article, MariaDB – How to migrate quickly MySQL to MariaDB over different hosts with mydumper/myloader and ncat, appeared first on Blog dbi services.

DB Tablespace dropped

Tom Kyte - Wed, 2017-08-09 01:26
Hi team, We have allocated huge sizes to the system, sysaux, undo and temp tablespaces, and now there is a space issue at the OS level. So, can I drop and re-create these Oracle tablespaces? Is it a downtime activity? How can we do this? Note...
Categories: DBA Blogs

More Oracle Security Training Manuals for Sale

Pete Finnigan - Wed, 2017-08-09 01:26
I advertised here some months ago a small number of printed manuals that I found in our company storage for some of my Oracle security classes. We had these printed over the years for various classes that I taught and....[Read More]

Posted by Pete On 08/08/17 At 01:57 PM

Categories: Security Blogs

Result cache latch contention

Bobby Durrett's DBA Blog - Tue, 2017-08-08 11:29

I recently saw a dramatic example of result cache latch contention. I had just upgraded a database to 11.2.0.4 and several hours later processing pretty much came to a halt.

Of course I am telling you the end before I tell you the beginning. It all started the morning of July 22nd, 2017, a few weeks back. We had worked for a couple of months on an 11.2.0.4 upgrade. I mentioned some issues with the same upgrade in my earlier post. I spent several hours Saturday morning upgrading the database and it all went very smoothly. Then we kicked off the normal batch processing and things seemed fine. Around 1 am the next morning I get paged about an issue. All of the batch processing had ground to a halt on the upgraded database. Needless to say, I was freaking out and afraid that we would have to back out the upgrade. This would have been ugly and time-consuming.

At first I  focused on the top SQL statements in the AWR report. Here are the top few from that morning:

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  %Total  %CPU   %IO   SQL Id         SQL Module  SQL Text
17,983.20         452,555     0.04                       43.98   10.64  0.00  4yw0zv7ty2t47  SQL*Plus    SELECT SLS_EXCL_RSN FROM ( SEL…
3,643.96          0                                      8.91    13.68  0.19  2fxnrcamtbcc2  SQL*Plus    DECLARE return_code number := …
3,637.60          0                                      8.90    3.67   0.06  18pdd22fh15dc  SQL*Plus    INSERT /*+ APPEND PARALLEL(TGT…

The top query dominates the others because it takes 43.98% of the total run time so it made sense that this query was the problem. I expected that some queries would change plan with the upgrade. I had used SQL Profiles to lock in other plans that I knew were problematic but I would not be surprised to see new ones. But, looking at the top SQL, sql id 4yw0zv7ty2t47, the plan was the same as before the upgrade. Here is some execution history of the query before and after the July 22 upgrade:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
------------- --------------- ------------------------- ---------------- ------------------
4yw0zv7ty2t47       848116227 15-JUL-17 11.00.28.230 PM           192430         .822087045
4yw0zv7ty2t47       848116227 16-JUL-17 02.00.24.144 AM          2522505         .691503149
4yw0zv7ty2t47       848116227 16-JUL-17 03.00.42.580 AM          2787002         .793723611
4yw0zv7ty2t47       848116227 22-JUL-17 10.00.12.787 PM           220581         23.4686899
4yw0zv7ty2t47       848116227 22-JUL-17 11.00.30.251 PM           455887         20.1393904
4yw0zv7ty2t47       848116227 23-JUL-17 12.00.47.394 AM           445173         20.5407455
4yw0zv7ty2t47       848116227 23-JUL-17 01.00.05.711 AM           369333         28.3195315
4yw0zv7ty2t47       848116227 23-JUL-17 02.00.22.811 AM           465591         39.0232079
4yw0zv7ty2t47       848116227 23-JUL-17 03.00.40.758 AM           452555         39.7370594
4yw0zv7ty2t47       848116227 23-JUL-17 04.00.57.968 AM           458328         39.3421407
4yw0zv7ty2t47       848116227 23-JUL-17 04.09.32.144 AM            10055         39.1518787
4yw0zv7ty2t47       848116227 23-JUL-17 04.11.58.484 AM            18507         39.6002968
4yw0zv7ty2t47       848116227 23-JUL-17 04.15.24.661 AM             5215         39.4672715
4yw0zv7ty2t47       848116227 23-JUL-17 04.16.30.441 AM             8542         39.1123689
4yw0zv7ty2t47       848116227 23-JUL-17 05.00.15.147 AM           321635         39.9827259

So, I stared at this for a couple of hours, getting more and more stressed because my cure-all SQL Profile was not going to help in this situation. The plan had not changed. I could not think, in my sleep deprived state, of a way to resolve this issue that morning. Then, after a quick prayer for help, I noticed the %CPU and %IO columns in the query’s line in the AWR report. 10% CPU, 0% I/O. The query was spending 90% of its time waiting and doing nothing. I had noticed some latch waits on the AWR report but I thought that the CPU on the system was just busy so we had some latch waits.

These were the top three events:

Event                   Waits      Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
latch free              3,658,537  26.9K                  7             65.8       Other
DB CPU                             4749                                 11.6
db file scattered read  71,549     142.1                  2             .3         User I/O

But, now finally I looked back at the latch waits. I went down to the latch section of the AWR report and the Result Cache: RC Latch latch showed up big:

Latch Sleep Breakdown

Latch Name              Get Requests  Misses     Sleeps     Spin Gets
Result Cache: RC Latch  6,742,176     4,142,777  4,143,709  669,430
row cache objects       5,804,568     169,324    6,087      163,272
cache buffers chains    107,393,594   36,532     5,859      30,976
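Outside of an AWR report, the same latch can also be watched live with a query against v$latch (a quick sketch, to be run from a privileged session):

sqlplus -s / as sysdba <<'EOF'
select name, gets, misses, sleeps, spin_gets
  from v$latch
 where name like 'Result Cache%';
EOF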

I had noticed the RESULT_CACHE hint in some queries on our test database but never saw the latch contention. Here is the first part of the problem query with the RESULT_CACHE hint.

SQL_ID 4yw0zv7ty2t47
--------------------
SELECT SLS_EXCL_RSN FROM ( SELECT /*+ PARALLEL RESULT_CACHE */ DISTINCT

Here is part of the plan with the result cache step:

Plan hash value: 848116227
------------------------------------------
| Id  | Operation                        |
------------------------------------------
|  11 |  COUNT STOPKEY                   |
|  12 |   VIEW                           |
|  13 |    RESULT CACHE                  |
------------------------------------------

Early in the morning I got the idea of disabling the result cache. I ran this command:

alter system set result_cache_max_size=0 scope=both;

All of the running queries immediately died with this error:

ORA-00600: internal error code, arguments: [qesrcDO_AddRO],…

But, when the jobs were rerun they quickly went to completion. I checked the query performance before and after disabling the result cache and before the upgrade as well and found that post upgrade without the result cache the query ran ten times faster than before.

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
------------- --------------- ------------------------- ---------------- ------------------
4yw0zv7ty2t47       848116227 15-JUL-17 11.00.28.230 PM           192430         .822087045
4yw0zv7ty2t47       848116227 16-JUL-17 02.00.24.144 AM          2522505         .691503149
4yw0zv7ty2t47       848116227 16-JUL-17 03.00.42.580 AM          2787002         .793723611
4yw0zv7ty2t47       848116227 22-JUL-17 10.00.12.787 PM           220581         23.4686899
4yw0zv7ty2t47       848116227 22-JUL-17 11.00.30.251 PM           455887         20.1393904
4yw0zv7ty2t47       848116227 23-JUL-17 12.00.47.394 AM           445173         20.5407455
4yw0zv7ty2t47       848116227 23-JUL-17 01.00.05.711 AM           369333         28.3195315
4yw0zv7ty2t47       848116227 23-JUL-17 02.00.22.811 AM           465591         39.0232079
4yw0zv7ty2t47       848116227 23-JUL-17 03.00.40.758 AM           452555         39.7370594
4yw0zv7ty2t47       848116227 23-JUL-17 04.00.57.968 AM           458328         39.3421407
4yw0zv7ty2t47       848116227 23-JUL-17 04.09.32.144 AM            10055         39.1518787
4yw0zv7ty2t47       848116227 23-JUL-17 04.11.58.484 AM            18507         39.6002968
4yw0zv7ty2t47       848116227 23-JUL-17 04.15.24.661 AM             5215         39.4672715
4yw0zv7ty2t47       848116227 23-JUL-17 04.16.30.441 AM             8542         39.1123689
4yw0zv7ty2t47       848116227 23-JUL-17 05.00.15.147 AM           321635         39.9827259
4yw0zv7ty2t47       848116227 23-JUL-17 05.20.37.524 AM           329457          17.895581
4yw0zv7ty2t47       848116227 23-JUL-17 05.21.15.363 AM           205154         .050141323
4yw0zv7ty2t47       848116227 23-JUL-17 05.25.07.159 AM          1023657         .049949389

If you look through the output you will see that pre-upgrade, on July 16th, the query averaged about .69 to .79 milliseconds. During the latch contention on July 23rd it averaged about 39 milliseconds, a lot worse. But after disabling the result cache it averaged .05 milliseconds, which is at least 10 times faster than with the result cache before the upgrade.

So it seems that the RESULT_CACHE hint on this query has always slowed it down, and the upgrade and the load afterward caused some intense latch contention that we had not seen before. Still, it is very cool that disabling the result cache actually made the query faster than it has ever been.

I don’t think it makes sense to put a RESULT_CACHE hint in a query that runs in .05 milliseconds without it. The overhead of the result cache made the query run at least 10 times slower. Something about the upgrade resulted in latch contention that caused a system-wide problem, but disabling the result cache made the query run faster than it ever had. We could have avoided this problem by leaving off the RESULT_CACHE hint, but it was nice that I could resolve the problem quickly by disabling the result cache with a parameter change.
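
If you want to confirm the state of the result cache after a change like this, the DBMS_RESULT_CACHE package and the parameter itself should show it; something along these lines (the exact status text can vary by version):

-- quick check that the result cache is really disabled
select dbms_result_cache.status from dual;
show parameter result_cache_max_size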

Bobby

Categories: DBA Blogs

Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX

Yann Neuhaus - Tue, 2017-08-08 08:58

We have seen how an index can help to avoid a sorting operation in the previous post. This avoids a blocking operation: the startup cost is minimal and the first rows can be immediately returned. This is often desired when displaying rows to the user screen. Here is more about Postgres startup cost, Oracle first_rows costing, and fetching first rows only.

Here is the execution plan we had in Oracle to get the values of N sorted. The cost for Oracle is the cost to read the index leaves: estimated at 46 random reads:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dbck3rgnqbakg, child number 0
-------------------------------------
select /*+ */ n from demo1 where n is not null order by n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 46 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 46 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

In PostgreSQL, we have two costs (cost=0.29..295.29):

explain (analyze,verbose,costs,buffers) select n from demo1 where n is not null order by n ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..295.29 rows=10000 width=4) (actual time=0.194..2.026 rows=10000 loops=1)
Output: n
Index Cond: (demo1.n IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=30
Planning time: 1.190 ms
Execution time: 2.966 ms

I explained where the total cost (295.29) comes from:

  • The index on the column N has 30 blocks, which is estimated at cost=120 (random_page_cost=4)
  • We have 10000 index entries to process, estimated at cost=50 (cpu_index_tuple_cost=0.005)
  • We have 10000 result rows to process, estimated at cost=100 (cpu_tuple_cost=0.01)
  • We have evaluated 10000 ‘is not null’ conditions, estimated at cost=25 (cpu_operator_cost=0.0025)

But the Postgres EXPLAIN also shows the startup cost (0.29), which is the cost before returning the first rows (only a few cpu_operator_cost here).
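
As a side note, the planner constants I'm using in these calculations can be checked on your own instance with a query like this one (the values above are the defaults; yours may differ if they have been tuned):

select name, setting
from pg_settings
where name in ('seq_page_cost','random_page_cost',
               'cpu_tuple_cost','cpu_index_tuple_cost','cpu_operator_cost');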

From that, I can guess that fetching 1 row will have the following cost:

  • The startup cost of 0.29
  • Read the first index page, cost=4 (random_page_cost=4)
  • 1 index entry to process at cpu_index_tuple_cost=0.005
  • 1 result row to process, estimated at cpu_tuple_cost=0.01
  • 1 ‘is not null’ condition, estimated at cpu_operator_cost=0.0025

This adds up to approximately cost=4.3075 for one row, roughly the cost to read one index page. We will see later that the query planner does not count this first index page.

Oracle First Rows

In Oracle, we have only the total cost in the execution plan, but we can estimate the cost to retrieve 1 row with the FIRST_ROWS(1) hint:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0fjk9vv4g1q1w, child number 0
-------------------------------------
select /*+ first_rows(1) */ n from demo1 where n is not null order by
n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 2 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

The cost here is small, estimated at 2 random reads (1 B*Tree branch and 1 leaf), which is sufficient to get the first row. Of course, I’ve estimated it for 1 row but I finally retrieved all rows (A-Rows=10000), reading all blocks (Buffers=48). However, my execution plan is optimized for fetching one row.

Fetch first rows

I can run the previous query and finally fetch only one row, but I can also explicitly filter the result to get one row only. If you use older versions of Oracle, you may have used the ‘rownum’ way of limiting rows, which implicitly adds the first_rows hint. Here I’m using the FETCH FIRST syntax and I need to explicitly add the FIRST_ROWS() hint to get the plan optimized for that.
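
For comparison, before looking at the plan, here is roughly what that older rownum form would look like (just a sketch, I have not re-run it here):

select n from (
  select n from demo1 where n is not null order by n
) where rownum <= 1;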

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 9bcm542sk64az, child number 0
-------------------------------------
select /*+ first_rows(1) */ n from demo1 where n is not null order by n fetch first 1 row only
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 1 |00:00:00.01 | 3 |
|* 1 | VIEW | | 1 | 1 | 2 (0)| 1 |00:00:00.01 | 3 |
|* 2 | WINDOW NOSORT STOPKEY| | 1 | 1 | 2 (0)| 1 |00:00:00.01 | 3 |
| 3 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 2 (0)| 2 |00:00:00.01 | 3 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "N")<=1)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "from$_subquery$_002"."N"[NUMBER,22], "from$_subquery$_002"."rowlimit_$$_rownumber"[NUMBER,22] 2 - (#keys=1) "N"[NUMBER,22], "DEMO1".ROWID[ROWID,10], ROW_NUMBER() OVER ( ORDER BY "N")[22] 3 - "DEMO1".ROWID[ROWID,10], "N"[NUMBER,22]

The cost is the same, estimated at 2 random reads, but we see how Oracle implements the FETCH FIRST: with window functions. And only one row has been fetched (A-Rows=1), reading 3 blocks (Buffers=3). Note that because the index is sorted, the window function is a NOSORT operation.
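
To make that transformation visible, we can write the window function by hand; this is a sketch of what the rewrite shown in the Predicate Information section corresponds to, and I would expect an equivalent plan:

select n from (
  select n, row_number() over (order by n) rn
  from demo1
  where n is not null
) where rn <= 1;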

Postgres

I can run the same query on PostgreSQL and get the execution plan:

explain (analyze,verbose,costs,buffers) select n from demo1 where n is not null order by n fetch first 1 row only;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.29..0.31 rows=1 width=4) (actual time=0.124..0.124 rows=1 loops=1)
Output: n
Buffers: shared hit=3
-> Index Only Scan using demo1_n on public.demo1 (cost=0.29..295.29 rows=10000 width=4) (actual time=0.124..0.124 rows=1 loops=1)
Output: n
Index Cond: (demo1.n IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=3
Planning time: 0.576 ms
Execution time: 0.143 ms

Here, the total cost of the query is lower than the total cost of the Index Only Scan, because we know we will not read all index entries. Then the total cost of the query (0.31) is based on the startup cost (0.29) of the index access. I suppose there is 0.01 for the cpu_tuple_cost but I expected to see the cost to get the first page because we cannot get a row without reading the whole page. My guess is that Postgres divides the total cost (295) by the number of rows (10000) and uses that as a per-row estimation. This makes sense for a lot of rows but underestimates the cost to get the first page.

In order to validate my guess, I force a Seq Scan to have a higher cost and fetch first 5 rows:

explain (analyze,verbose,costs,buffers) select n from demo1 where n is not null fetch first 5 row only ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.76 rows=5 width=4) (actual time=0.026..0.029 rows=5 loops=1)
Output: n
Buffers: shared hit=1
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.022..0.024 rows=5 loops=1)
Output: n
Filter: (demo1.n IS NOT NULL)
Buffers: shared hit=1
Planning time: 1.958 ms
Execution time: 0.057 ms

My guess is: ( 1529.00 / 10000 ) * 5 = 0.7645, which is exactly the cost estimated for the Limit operation. This approximation does not take the page granularity into account.

MIN/MAX

The “order by n fetch first 1 row only” finally reads only one index entry, the first one, and returns the indexed value. We can get the same value with a “select min(N)” and Oracle has a special operation for that:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 29bsqfg69nudp, child number 0
-------------------------------------
select /*+ */ min(n) from demo1
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 1 |00:00:00.01 | 2 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 2 |
| 2 | INDEX FULL SCAN (MIN/MAX)| DEMO1_N | 1 | 1 | 2 (0)| 1 |00:00:00.01 | 2 |
-------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) MIN("N")[22]
2 - "N"[NUMBER,22]

This goes through the index branches (blevel=1 here in this small index, so the root is the only branch level) to the first leaf, in order to get the value in the first entry. This has read 2 blocks here. The same can be done to get the last index entry in the case of a “select max(N)”.
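
To illustrate that last point, the max() variant is simply the following; I would expect the same INDEX FULL SCAN (MIN/MAX) operation, just landing on the last leaf entry instead of the first one (plan not shown here):

select /*+ */ max(n) from demo1;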

Postgres does not show a special operation for it, but a plan which is very similar to the one we have seen above when fetching the first row: Index Only Scan, with a Limit:


explain (analyze,verbose,costs,buffers) select min(n) from demo1 ;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Result (cost=0.31..0.32 rows=1 width=4) (actual time=0.123..0.124 rows=1 loops=1)
Output: $0
Buffers: shared hit=3
InitPlan 1 (returns $0)
-> Limit (cost=0.29..0.31 rows=1 width=4) (actual time=0.121..0.121 rows=1 loops=1)
Output: demo1.n
Buffers: shared hit=3
-> Index Only Scan using demo1_n on public.demo1 (cost=0.29..295.29 rows=10000 width=4) (actual time=0.119..0.119 rows=1 loops=1)
Output: demo1.n
Index Cond: (demo1.n IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=3
Planning time: 0.415 ms
Execution time: 0.140 ms

If we look at the ‘Index Only Scan’ we see exactly what I had at the top of this post with “select n from demo1 where n is not null order by n”.

Above it, there’s the Limit node, which is exactly the same as the one with the “fetch first 1 row only”, because the query planner understands that getting the MIN(N) is the same as getting the first value from the ordered index on N.

This is processed as a non-correlated subquery (query block), also called an InitPlan. Its result ($0) is used by the Result node, with an additional cost of 0.01 for the cpu_tuple_cost in this extra step. I don’t really know the reason for this additional step here, but anyway, the cost is minimal. Basically, both Oracle and Postgres take advantage of the index structure to get the minimum – or first value – from the sorted index entries.
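
The max() case should be symmetrical: I would expect the planner to read the same index backwards (an ‘Index Only Scan Backward’ under the same Limit and InitPlan structure), though I have not pasted that plan here:

explain (analyze,verbose,costs,buffers) select max(n) from demo1 ;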

In this series, I’m running very simple queries in order to show how it works. In this post, we reached the minimum: one column and one row. The next post will finally select one additional column, which is not in the index.

 

The article Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX appeared first on Blog dbi services.

LIVE TODAY! (August 8th) Six Hour Event on New Features of Oracle 12cR2

Gerger Consulting - Tue, 2017-08-08 08:33
A six-hour live stream of two ACE Directors, Rich Niemiec and Nitin Vengurlekar, talking about their favorite new features of Oracle 12c Release 2. We are at 332 signups already. Register at this link.


Categories: Development

Oracle Announces Oracle Banking Payments Offering

Oracle Press Releases - Tue, 2017-08-08 08:00
Press Release
Oracle Announces Oracle Banking Payments Offering
New solution built on ISO standards enables end-to-end payments innovation for banks

Redwood Shores, Calif.—Aug 8, 2017

Oracle today announced the worldwide release of its Oracle Banking Payments solution. The offering was built from the ground up leveraging ISO 20022 and is designed to help banks compartmentalize payments messaging, message transformation and payment processing while providing high fidelity insight. Using the Oracle Banking Payments API, banks can also innovate within the Internet of Payments, collaborate with third parties, or curate new business models in collaboration with Fintechs.

“As a core banking vendor we have been supporting payments for over two decades. We run payments for over 500 banks across 140 countries. In many instances our applications cater to a significant proportion of a country or a jurisdiction’s volumes; for example, we enable over 20% of the inward and 22% of the outward real-time settlements in India,” said Chet Kamat, senior vice president, Oracle Financial Services. “With Oracle Banking Payments, financial institutions can improve straight through processing, support real-time and immediate payment settlement and reduce time-to-market while driving innovation.”

Digitization has brought banks to a payments arena tipping point where customers demand constant availability for real-time, frictionless payments. Grappling with increasing levels of operational complexities, banks have increasingly adopted siloed structures and multiple messaging standards leading to a disharmonized payment landscape. At the same time, new players are challenging incumbents by offering personalized products, exemplary efficiency and security. To remain competitive, financial institutions need to adopt a transformational approach that addresses the changing dynamics of the payments marketplace.

With Oracle Financial Services Analytical Applications (OFSAA), Oracle Banking Digital Experience, Oracle FLEXCUBE, Oracle Banking Platform and Oracle Financial Services Revenue management and Billing, today’s announcement of Oracle Banking Payments makes Oracle the most comprehensive solution provider for the payments industry.

Contact Info
Alex Moriconi
Oracle Corporation
+1-650-607-6598
alex.moriconi@oracle.com
Additional Information

Oracle Banking Payments supports global as well as local payment standards like SEPA, SWIFT 2017, Fedwire, US ACH and SWIFT GPI. Banks can easily extend the system themselves to comply with any local and international clearing and settlement messages.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Alex Moriconi

  • +1-650-607-6598

Oracle Announces Oracle Banking Liquidity Management and Oracle Banking Corporate Lending

Oracle Press Releases - Tue, 2017-08-08 08:00
Press Release
Oracle Announces Oracle Banking Liquidity Management and Oracle Banking Corporate Lending
New solutions to enable digitization of corporate banking

Redwood Shores, Calif.—Aug 8, 2017

Oracle today announced the launch of two new solutions: Oracle Banking Liquidity Management and Oracle Banking Corporate Lending. These offerings are designed to meet the needs of banks looking to provide real-time, multi-currency, cross border, multi-entity banking services while maintaining high fidelity insight and multi-jurisdictional compliance.

“The constraints of geographies, currencies, time-zones, cash and credit have lesser relevance in the world of digital corporates,” said Chet Kamat, senior vice president, Oracle Financial Services. “With the shift towards digital, it is critical that banks transform to cater to the emerging needs of their corporate customers. It is in this context that we have developed next-generation liquidity and corporate credit solutions. We abstracted our experience of working across more than 140 countries, transforming the business of some of the largest corporate banks globally, to bring together functionalities that enable our customers to be leaders in the corporate banking space.”

With the launch of Oracle Banking Corporate Lending, Oracle now offers banks a full spectrum of assets to enable a performance-driven corporate credit business. The solution supports the entire credit lifecycle from customer onboarding to credit management and loan processing. Banks can now structure profitable financing deals, lower credit risk and embed optimized credit support at every stage of the customer’s business.

Oracle Banking Liquidity Management helps banks adopt techniques such as multi-level sweeping and pooling, interest optimization and reallocations, across currencies, geographies and customer entities.

Oracle is committed to the global banking sector, and with the addition of these innovative technologies to an already extensive cadre of offerings, banks may rest assured that their customers will be even more efficient. Oracle’s new corporate banking offerings are another step toward financial excellence in a globalized economy while ensuring that banks remain a valued partner to their corporate customers.

Contact Info
Alex Moriconi
Oracle Corporation
+1-650-607-6598
alex.moriconi@oracle.com
Additional Information

Oracle’s solutions for corporate banking include Oracle Banking Liquidity Management, Oracle Banking Corporate Lending, Oracle FLEXCUBE, Oracle FLEXCUBE Enterprise Limits and Collateral Management, Oracle Revenue Management and Billing, Oracle Banking Digital Experience and Oracle Financial Services Analytical Applications.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Alex Moriconi

  • +1-650-607-6598

update table A using data from table B when more than one update record available

Tom Kyte - Tue, 2017-08-08 07:06
I have two tables, catalog and interface, both of which have an item_sku column. The catalog table has only one row for each unique item_sku. The interface table may have one, two or three rows for each item_sku. I need to update the catalog table using the inte...
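
The rest of the question and the answer are truncated in this feed, but a common shape for this kind of requirement is a MERGE driven by an aggregated view of the detail table. This is only a sketch: the price column and the MAX() rule for collapsing the two or three interface rows per item_sku are assumptions, not taken from the original thread.

merge into catalog c
using (
        -- collapse the 1-3 interface rows per item_sku (the MAX rule is an assumption)
        select item_sku, max(price) as price
        from interface
        group by item_sku
      ) i
on (c.item_sku = i.item_sku)
when matched then
  update set c.price = i.price;
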
Categories: DBA Blogs

IRR Calculations in SQL or PLSQL

Tom Kyte - Tue, 2017-08-08 07:06
Hello Tom, I would like to know if there is any way to calculate IRR (internal rate of return) with the help of a stored function or procedure. IRR is a standard function available in Excel which allows the user to calculate the IRR on the basis of cash flows. For E...
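
The full answer is truncated here, but since the question is whether IRR can be computed in a stored function: a simple bisection over the NPV function works for conventional cash flows (first flow negative, later flows positive). This is only a sketch under those assumptions, using the built-in sys.odcinumberlist collection type:

create or replace function irr (p_cashflows in sys.odcinumberlist)
  return number
is
  l_lo  number := -0.99;  -- lower bound for the rate (avoid -100%)
  l_hi  number := 10;     -- upper bound for the rate (1000%)
  l_mid number;
  -- NPV of the cash flows at a given rate, first flow at period 0
  function npv (p_rate in number) return number is
    l_npv number := 0;
  begin
    for i in 1 .. p_cashflows.count loop
      l_npv := l_npv + p_cashflows(i) / power(1 + p_rate, i - 1);
    end loop;
    return l_npv;
  end npv;
begin
  -- bisection: for a conventional cash flow the NPV decreases as the rate increases
  for i in 1 .. 100 loop
    l_mid := (l_lo + l_hi) / 2;
    if npv(l_mid) > 0 then
      l_lo := l_mid;   -- NPV still positive: the IRR is higher than l_mid
    else
      l_hi := l_mid;   -- NPV negative or zero: the IRR is at or below l_mid
    end if;
  end loop;
  return l_mid;
end irr;
/

An Excel-style call would then look like: select irr(sys.odcinumberlist(-1000, 300, 400, 500)) from dual;
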
Categories: DBA Blogs

Prefixing a packaged procedure call with the package name from inside said package ?

Tom Kyte - Tue, 2017-08-08 07:06
Within a single package, two procedures call a function in the package. One prefixes the call with the package name and the other does not. Is there any difference? Thank you. CREATE OR REPLACE PACKAGE BODY demo AS FUNCTION f1 RETURN NUMBE...
Categories: DBA Blogs

Oracle Scheduler job continue to run though is set to stop after 240 minutes/4 hours

Tom Kyte - Tue, 2017-08-08 07:06
Hi, Although they had set the max job duration to 240 minutes/4 hours, the Oracle Scheduler job continued to run for more than 15 hours until they killed it. How could we get this job to stop once past the max job duration? I had already set tha...
Categories: DBA Blogs

SQL Baseline impact

Tom Kyte - Tue, 2017-08-08 07:06
A user reported that a SQL had been running slow for about a month, so I checked the AWR snapshots and generated a baseline from an AWR snapshot because the good plan was recorded 20 days ago. The related SQL ran faster the next day but the baseline didn't be us...
Categories: DBA Blogs

Rebuilding the same index in multiple sessions(parallel)

Tom Kyte - Tue, 2017-08-08 07:06
Tried rebuilding the index in multiple sessions, but got an error. Initiated the rebuild in two sessions using the query below: ALTER INDEX indexname REBUILD. One of the sessions completed fine but the other one failed with the error below: ORA-08104: this ...
Categories: DBA Blogs
