Feed aggregator

Security Alert CVE-2018-3110 Released

Oracle Security Team - Fri, 2018-08-10 15:02

Oracle just released Security Alert CVE-2018-3110.  This vulnerability affects the Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows.  It has received a CVSS Base Score of 9.9, and it is not remotely exploitable without authentication.  Vulnerability CVE-2018-3110 also affects Oracle Database version 12.1.0.2 on Windows as well as Oracle Database on Linux and Unix; however, patches for those versions and platforms were included in the July 2018 Critical Patch Update.

Due to the nature of this vulnerability, Oracle recommends that customers apply these patches as soon as possible.  This means that:

  • Customers running Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows should apply the patches provided by the Security Alert.
  • Customers running version 12.1.0.2 on Windows or any version of the database on Linux or Unix should apply the July 2018 Critical Patch Update if they have not already done so. 

For More Information:
• The Advisory for Security Alert CVE-2018-3110 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2018-3110-5032149.html
• The Advisory for the July 2018 Critical Patch Update is located at http://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT

Yann Neuhaus - Fri, 2018-08-10 09:47

When you have a Data Guard configuration, you want the application to connect to the right server, where the primary is, without taking too much time. The default TCP timeout is 1 minute, which is too long. When you don’t want to configure a virtual IP address (VIP), you can simply list all the addresses in the client connection string, but then you need to reduce the timeout. A short duration of 1 to 5 seconds will be fine most of the time, but in case of a network issue you want to give a chance to retry with a longer timeout. This post is about the connection string parameters that define this. Of course, all of this is documented, but the goal of this post is also to show how to quickly test it, because a reliable understanding of how it works relies on both documentation and testing.

Here is a simple client failover configuration where the connection tries 10.10.10.10 and, if it fails, tries 10.10.10.11:

DEFAULT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The problem with that is that when 10.10.10.10 is down, 10.10.10.11 will be tried only after 60 seconds, the default TCP timeout. You can completely avoid waiting for the timeout by using a virtual IP that is always up, started on the failed-over server. But you can also reduce the TCP timeout to a few seconds.

Here is a tnsping with the above tnsnames.ora entry and when both servers are down:


$ time tnsping DEFAULT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 2m0.051s
user 0m0.005s
sys 0m0.011s

That’s 2 minutes because there is a 1 minute timeout for each address.

TRANSPORT_CONNECT_TIMEOUT

Now, just adding the TRANSPORT_CONNECT_TIMEOUT to the connection string description reduces the timeout to 4 seconds:


DESCRIPTION=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The total time to get the answer from both addresses is 8 seconds, 4 seconds for each:

$ time tnsping DESCRIPTION
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m8.023s
user 0m0.010s
sys 0m0.006s

RETRY_COUNT

If you lower the timeout, you may give a chance to retry a few times with RETRY_COUNT. Here, RETRY_COUNT=2 will give 3 attempts (1 + 2 retries) to the address list:
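For reference, the tnsnames.ora entry used for this test was not shown above, but it can be reconstructed from the description that tnsping displays below:

RETRY_COUNT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(RETRY_COUNT=2)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)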

$ time tnsping RETRY_COUNT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:49:34
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (RETRY_COUNT=2) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m24.049s
user 0m0.011s
sys 0m0.010s

This has tried 10.10.10.10 and then 10.10.10.11 for 4 seconds each, and then retried the whole address list 2 more times, which in total takes 3x2x4=24 seconds.

DESCRIPTION_LIST

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT are used only within a DESCRIPTION. You may want to give several attempts with an increasing timeout. For example: try each address for one second to get a quick connection to the primary, wherever it is, when the network is in good health. Then give two attempts with a 5-second timeout for times when the network is degraded. And then one final attempt to each address with the default timeout, to be sure that the servers are really down.

You can use a DESCRIPTION_LIST for this:

INCREASING=
(DESCRIPTION_LIST=
(LOAD_BALANCE=off)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=5)
(RETRY_COUNT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
)

Rather than just timing the total attempts, I’ll strace each connection attempt:

$ strace -tT tnsping INCREASING 2>&1 | grep -C1 --color=auto -E 'poll.*|inet_addr[()".0-9]*'
 
16:15:49 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000008>
16:15:49 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000087>
16:15:49 times(NULL) = 434920117 <0.000011>
16:15:49 mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efce31bc000 <0.000013>
16:15:49 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.001435>
16:15:50 close(4) = 0 <0.000256>
--
16:15:50 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000060>
16:15:50 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000495>
16:15:50 times(NULL) = 434920218 <0.000062>
16:15:50 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.000768>
16:15:51 close(4) = 0 <0.000050>
--
16:15:51 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000015>
16:15:51 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000060>
16:15:51 times(NULL) = 434920318 <0.000010>
16:15:51 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.005563>
16:15:56 close(4) = 0 <0.000027>
--
16:15:56 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000012>
16:15:56 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000081>
16:15:56 times(NULL) = 434920819 <0.000015>
16:15:56 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.006265>
16:16:01 close(4) = 0 <0.000192>
--
16:16:01 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000079>
16:16:01 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000486>
16:16:01 times(NULL) = 434921320 <0.000087>
16:16:01 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.004660>
16:16:06 close(4) = 0 <0.000611>
--
16:16:06 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000114>
16:16:06 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000536>
16:16:06 times(NULL) = 434921822 <0.000097>
16:16:06 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.008128>
16:16:11 close(4) = 0 <0.000135>
--
16:16:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000137>
16:16:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000584>
16:16:11 times(NULL) = 434922323 <0.000079>
16:16:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.053782>
16:17:11 close(4) = 0 <0.000166>
--
16:17:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000195>
16:17:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000549>
16:17:11 times(NULL) = 434928329 <0.000488>
16:17:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.007246>
16:18:11 close(4) = 0 <0.000043>

With ‘-T’, strace shows the duration of each system call between angle brackets after the return value. You can see here the 1-second timeout attempts to each address, then 2 attempts per address with a 5-second timeout, and finally one attempt per address with the default 60-second timeout.

Note that I have added (LOAD_BALANCE=off) here because the default is ON in a DESCRIPTION_LIST, but I want the descriptions to be tried in the order I specified them.

 

The post TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT appeared first on the dbi services blog.

Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information

A supplier of equipment, vessels, and services for offshore dredging and wet-mining markets, based in Europe with over 3,000 employees and 39 global locations, was struggling to get the most out of their enterprise business applications.

Business Problem

In 2012, the company started a transformation initiative, and as part of the project, they replaced most of their enterprise business applications.  The company had over 10 different business applications and wanted to provide employees with access to information through a single web experience or portal view. For example, field engineers may need information on a ship’s parts from the PLM system (TeamCenter), as well as customer-specific cost information for parts from the company’s ERP system (IFS Applications). It was critical to the business that employees could quickly navigate, search, and view information regardless of which system it is stored in. The company’s business is built on ships dredging, laying cable, and so on, so the sooner field engineers can find information on servicing a broken part, the sooner the company can drive revenue.

Integrating Oracle WebCenter

The company chose Oracle WebCenter Portal because it had the best capabilities to integrate their various business systems, as well as the ability to scale. WebCenter enabled them to build a data integration portal that provided a single pane of glass to all enterprise information. Unfortunately, this single pane of glass did not perform as well as expected. The integrations, menu navigation, and the ability to render part drawings in the portal were all developed using Oracle Application Development Framework (Oracle ADF). Oracle ADF is great for serving up content to WebCenter Portal using taskflows, but it requires a specialized development skill set. The company had limited Oracle ADF development resources, so each time a change or update was requested for the portal, it took them weeks and sometimes months to implement the enhancement. Additionally, every change to the portal required a restart, and these took in excess of forty minutes.

Platform Goals

The company wanted to shorten the time-to-market for portal changes, as well as reduce its dependency on and the overall development and design limitations with Oracle ADF. They wanted to modernize their portal and leverage a more designer-friendly, front-end development framework. They contacted Fishbowl Solutions after searching for Oracle WebCenter Portal partners and finding out about their single page application approach (SPA) to front-end portal development.

Fishbowl Solutions’ SPA for Oracle WebCenter Portal is a framework that overhauls the Oracle ADF UI with Oracle JET (JavaScript Extension Toolkit) or other front-end design technologies such as Angular or React. The SPA framework includes components (taskflows) that act as progressive web applications and can be dropped onto pages from the portal resource catalog, meaning that no Oracle ADF development is necessary. Fishbowl’s SPA also enables portal components to be rendered on the client side with a single page load. This decreases the amount of processing being done on the portal application server, as well as how many times the page has to reload. This all leads to an improved experience for the user, as well as the ability for design and development teams to view changes or updates to the portal almost instantaneously.

Outcome

Fishbowl Solutions helped the company implement its SPA framework in under two weeks. Since the implementation, they have observed more return visits to the portal, as well as fewer support issues. They are also no longer constrained by the 40-minute portal restart after changes to the portal, and overall portal downtime has been significantly reduced. Lastly, Fishbowl’s SPA framework provided them with a go-forward design and development approach for portal projects, which will enable them to continue to evolve their portal to best serve their employees and customers alike.

The post Fishbowl Solutions Helps Global Dredging Company Reduce WebCenter Portal Development Costs while Enhancing the Overall Experience to Access Information appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Dear Oracle Customers & Partners, Thank YOU!

Chris Warticki - Fri, 2018-08-10 08:00

Dear Oracle Customers & Partners,

After a deeply rewarding 20-year career here at Oracle, this serves as my final blog post.  I wanted to take a moment and thank all of you for reading, subscribing and sharing the information provided here about Oracle Support.

My 20-year career began serving customers in our Network Support group, then quickly transitioned to managing the Unix Install and RDBMS Support teams under Server Technologies.  After 7 years in the Support delivery organization, I began working for the Customer Support Education team, then the Proactive Support team, and finished within the Global Customer Management organization. 

I've been privileged to have traveled the world evangelizing the evolution, best practices and value of Oracle Support throughout the years.  I've been blessed to meet many of you at conferences, events, user groups and 1on1 (both online and in-person).  I've thoroughly enjoyed every minute of it.

We've been through a lot together.  From the early days of telephone, call-center support to web enabled, online support. From TAR to SR.  From MetaLink to MOS.  We survived FlashMOS. We survived Y2K.  Through it all, you know that my mantra has always been FOR THE CUSTOMER.

As for the future of this blog - stay tuned.  As for my personal next chapter, you can follow me on Twitter and LinkedIn.

Thank You all. Thank you for your time.  Thank you for your feedback. Thank you for your business.

Sincerely,

Chris Warticki



 

OPS$Oracle user after Wallet Creation in Oracle 12c

Pakistan's First Oracle Blog - Fri, 2018-08-10 00:37

----- In Oracle 12.1.0.2, I created the wallet by using the commands below:

TEST$ orapki wallet create -wallet "/u01/app/oracle/admin/TEST/wallet" -pwd ****  -auto_login_local
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.


TEST$ mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys ********
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1

----- But when I logged into the database with the sys user, show user displayed OPS$ORACLE instead of SYS:

TEST$ sqlplus /@TEST2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 9 13:09:38 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Aug 09 2018 03:18:20 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> sho user
USER is "OPS$ORACLE"
SQL>

----- So I made the following changes and it worked fine:

Put the following entry in the sqlnet.ora file:

SQLNET.WALLET_OVERRIDE = TRUE
The SQLNET.WALLET_OVERRIDE entry allows this method to override any existing OS authentication configuration.

and used mkstore to create the credential in the wallet:

TEST$  mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys
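
For completeness, here is a minimal sqlnet.ora sketch for the secure external password store; the WALLET_LOCATION directory is assumed to be the wallet created above:

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/TEST/wallet)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE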

Categories: DBA Blogs

ATP vs ADW – the Autonomous Database lockdown profiles

Yann Neuhaus - Thu, 2018-08-09 14:49

The Oracle database has always distinguished two types of workloads: transactional (OLTP) and datawarehouse (VLDB, DWH, DSS, BI, analytics). There is the same idea in the managed Oracle Cloud with two autonomous database services.

To show how old this distinction is, here is how they were defined in the Oracle7 Tuning Book:

[Screenshot: OLTP vs. DSS definitions from the Oracle7 Tuning Book]

The definition has not changed a lot. But the technology behind DSS/DWH has improved. Now, with In-Memory Column Store, Smart Scan, and Result Cache, we can even see that indexes, materialized views, star transformation, hints, etc. are disabled in the Autonomous Datawarehouse cloud service.

The differences between the two autonomous cloud services, ATP (Autonomous Transaction Processing) for OLTP and ADW (Autonomous Datawarehouse) for analytics, have been described by Maria Colgan after Larry Ellison’s announcement:
https://sqlmaria.com/2018/08/07/how-does-autonomous-transaction-processing-differ-from-the-autonomous-data-warehouse/

PDBaaS

Those autonomous services are PDB as a Service. They are using the consolidation and isolation features of 12cR2 multitenant. And we can even see that the ATP and ADW services can run within the same CDB and same instance. They are different PDBs, and they differ only by their resource management plans and PDB lockdown profiles. So let’s see the differences from this point of view: the ATP lockdown profile is called ‘OLTP’ and the ADW lockdown profile is called ‘DWCS’.

Options

The only difference is about partitioning, which is enabled for ATP and disabled for ADW:

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION';
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
OLTP OPTION PARTITIONING ENABLE ALL 284
 
SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION' order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------- ----- ------
DWCS OPTION PARTITIONING DISABLE ALL 73

Features

All disabled features are the same:
Disabled for all users: COMMON_SCHEMA_ACCESS, CONNECTIONS, CTX_LOGGING, NETWORK_ACCESS, OS_ACCESS, SYSTEM_DATA
Disabled for local users only: USER_PASSWORD_VERIFIERS
Enabled for all users: COMMON_USER_CONNECT, TRACE_VIEW_ACCESS, UTL_FILE

Those last ones are good news. We can query V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS, V$DIAG_TRACE_FILE_CONTENTS to show some diagnostics. Unfortunately, I’ve seen no allowed way to enable SQL Trace.

DDL Statements

In the ADW service, local users cannot create indexes and materialized views. They are supposed to be created autonomously when required (probably by a common user):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where regexp_like(rule,'(INDEX|MATERIALIZED)') order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
DWCS STATEMENT ALTER INDEX DISABLE LOCAL 73
DWCS STATEMENT ALTER INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEX DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT DROP INDEX DISABLE LOCAL 73
DWCS STATEMENT DROP INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW LOG DISABLE LOCAL 73

The ATP service enables all those for all users.

System parameters

Both services disable all ALTER SYSTEM statements except KILL SESSION. They allow all ALTER SYSTEM SET for the common users but allow only a few parameters to be changed by local users.

Basically, the ADW service sets the following:


_default_pct_free=1
_ldr_io_size=33554432
_ldr_io_size2=33554432
_max_io_size=33554432
_optimizer_allow_all_access_paths=FALSE
_optimizer_answering_query_using_stats=TRUE
optimizer_ignore_hints=TRUE
optimizer_ignore_parallel_hints=TRUE
parallel_degree_policy=AUTO
parallel_min_degree=CPU
result_cache_max_result=1
result_cache_mode=FORCE

The ATP service keeps the defaults.
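
If you want to verify this yourself from a session in the PDB, here is a quick sketch (the parameter names are simply the non-underscore ones listed above):

select name, value, isdefault
from v$parameter
where name in ('optimizer_ignore_hints','optimizer_ignore_parallel_hints',
'parallel_degree_policy','parallel_min_degree','result_cache_mode','result_cache_max_result');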

Basically, all the features for OLTP have been there for years in the Oracle Database. Under the ‘autonomous’ umbrella we can see some well-known features:

  • Automatic provisioning: pluggable databases
  • Automatic scaling: PDB resource manager
  • Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor, Storage Indexes
  • Automatic security: Rolling patches, Encryption, Database Vault
  • Automatic Fault Tolerant Failover: RAC, Data Guard
  • Automatic Backup and Recovery: RMAN, Flashback

All these proven features are enhanced to work together in a consolidated multitenant environment. Provisioning is as fast as a CREATE PDB. The ATP and ADW services enable and configure the right set of features for each workload.

 

The post ATP vs ADW – the Autonomous Database lockdown profiles appeared first on the dbi services blog.

Oracle Offline Persistence Toolkit - Controlling Online Replay

Andrejus Baranovski - Thu, 2018-08-09 13:25
A few months ago I wrote a post about the Oracle Offline Persistence Toolkit, which integrates well with Oracle JET (the JavaScript toolkit from Oracle) - Oracle JET Offline Persistence Toolkit - Offline Update Handling. I'm back to this topic with the sample application upgraded to JET 5.1 and the offline toolkit upgraded to 1.1.5. In this post I will describe how to control online replay by filtering out some of the requests so that they are excluded from replay.

The source code is available on GitHub. Below I describe the changes and functionality in the latest commit.

To test online replay, go offline and execute some actions in the sample app - change a few records, try to search by first name, and also try to use the page navigation buttons. You will be able to save changes in offline mode, but if this is your first time loading the app and data from other pages wasn't fetched yet, then page navigation will not bring any new results in offline mode (make sure to load more records while online and then go offline):


In the online replay manager, I'm intentionally filtering out GET requests. Once the app goes online, I replay only PATCH requests. This is done mainly as a test, to learn how to control the replay process. The PATCH requests are executed during replay:


Each GET request which was removed from the replay loop is printed out in the log:


Replay implementation (I would recommend reading the Offline Persistence Toolkit usage doc for more info):


This code is executed after the transition to online status. Calling the getSyncLog method from the Sync Manager returns a list of requests pending replay: the promise resolves with an array of requests waiting for online replay. I have marked the function as async; this allows implementing a sequential loop where each GET request is removed one by one, in order. This is needed since removeRequest from the Sync Manager is executed in a promise and a plain loop would complete too late - after we have already passed the replay execution phase. Read more about sequential loop implementation in JS when promises are used - JavaScript - Method to Call Backend Logic in Sequential Loop. Once all GET requests are removed, we execute the sync method; this forces all remaining requests in the queue to be replayed. A minimal sketch of this logic is shown below.
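
Since the replay code itself appears only as a screenshot, here is a minimal sketch of the logic described above, assuming the offline-persistence-toolkit SyncManager API (getSyncLog, removeRequest, sync) documented in the usage doc; the function name and log message are illustrative only:

// called after the transition to online status
async function replayPendingChanges() {
  var syncManager = persistenceManager.getSyncManager();
  // getSyncLog resolves with the array of requests pending replay
  var syncLog = await syncManager.getSyncLog();
  for (var i = 0; i < syncLog.length; i++) {
    if (syncLog[i].request.method === 'GET') {
      console.log('Removing GET request from replay: ' + syncLog[i].request.url);
      // remove GET requests one by one, in order, before starting the replay
      await syncManager.removeRequest(syncLog[i].requestId);
    }
  }
  // replay whatever is left in the sync log (the PATCH requests)
  return syncManager.sync();
}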

Oracle Ksplice for Oracle Linux in Oracle Cloud

Wim Coekaerts - Thu, 2018-08-09 11:38

My favorite topic.. Ksplice! Just a friendly reminder that every Oracle Linux instance in Oracle Cloud comes with Oracle Ksplice installed/enabled by default at no additional cost beyond basic compute.

When you run an OL instance, the uptrack tools are on the base image. (uptrack-upgrade, uptrack-uname, etc..). The config file (/etc/uptrack/uptrack.conf) contains an access-key that enables any cloud instance to talk to our Ksplice service without registration. So as soon as you log into your system you can run # uptrack-upgrade or # uptrack-show .

uptrack doesn't run automatically by default. You are expected to manually type # uptrack-upgrade. What this does is the following: it goes to our service, looks at which Ksplice patches are available for your running kernel, and asks if you want to install them. If you add -y then it will just go ahead and install whatever is available without prompting you.

uptrack-show lists the patches that are already applied on your running kernel/system.

uptrack-uname shows the 'effective' kernel version. What this means is which kernel version you are effectively updated to with relevant CVEs and critical issues.

Here's a concrete example of my OCI instance:

 

# uname -a
Linux devel 4.1.12-124.14.5.el7uek.x86_64 #2 SMP Fri May 4 15:26:53 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

My instance runs UEK R4 (4.1.12-124.14.5); that's the actual RPM that's installed and the actual kernel that I booted the instance with.

 

# uptrack-uname -a
Linux devel 4.1.12-124.15.1.el7uek.x86_64 #2 SMP Tue May 8 16:27:00 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

I already ran uptrack-upgrade before so a number of patches are already applied and installed up to the same level as 4.1.12-124.15.1. So instead of installing the 4.1.12-124.15.1 kernel-uek RPM and rebooting, when I ran uptrack-upgrade a while back, it got me right to that level without affecting my availability one bit.

I did not enable auto-install, so since I ran that command a while back I have not done it again. A good number of (some serious) CVEs have been fixed and released since, so it's time to update... but I so hate reboots! Luckily... no need.

What's already installed? Let's see...

 

# uptrack-show
Installed updates:
[1zkgpvff] KAISER/KPTI enablement for Ksplice.
[1ozdguag] Improve the interface to freeze tasks.
[nw9iml90] CVE-2017-15129: Use-after-free in network namespace when getting namespace ids.
[i9x5u5uf] CVE-2018-5332: Out-of-bounds write when sending messages through Reliable Datagram Sockets.
[dwwke2ym] CVE-2017-7294: Denial-of-service when creating surface using DRM driver for VMware Virtual GPU.
[cxke2gao] CVE-2017-15299: Denial-of-service in uninstantiated key configuration.
[nwtwa8b3] CVE-2017-16994: Information leak when using mincore system call.
[hfehp9m0] CVE-2017-17449: Missing permission check in netlink monitoring.
[7x9spq2j] CVE-2017-17448: Unprivileged access to netlink namespace creation.
[lvyij5z2] NULL pointer dereference when rebuilding caches in Reliable Datagram Sockets protocol.
[s31vmh6q] CVE-2017-17741: Denial-of-service in kvm_mmio tracepoint.
[3x6jix1s] Denial-of-service of KVM L1 nested hypervisor when exiting L2 guest.
[d22dawa6] Improved CPU feature detection on microcode updates.
[fszq2l5k] CVE-2018-3639: Speculative Store Bypass information leak.
[58rtgwo2] Device Mapper encrypted target Support big-endian plain64 IV.
[oita8o1p] CVE-2017-16939: Denial-of-service in IPSEC transform policy netlink dump.
[qenhqrfo] CVE-2017-1000410: Information leak in Bluetooth L2CAP messages.
[965vypan] CVE-2018-10323: NULL pointer dereference when converting extents-format to B+tree in XFS filesystem.
[drgt70ax] CVE-2018-8781: Integer overflow when mapping memory in USB Display Link video driver.
[fa0wqzlw] CVE-2018-10675: Use-after-free in get_mempolicy due to incorrect reference counting.
[bghp5z31] Denial-of-service in NFS dentry invalidation.
[7n6p7i4h] CVE-2017-18203: Denial-of-service during device mapper destruction.
[okbvjnaf] CVE-2018-6927: Integer overflow when re queuing a futex.
[pzuay984] CVE-2018-5750: Information leak when registering ACPI Smart Battery System driver.
[j5pxwei9] CVE-2018-5333: NULL pointer dereference when freeing resources in Reliable Datagram Sockets driver.
Effective kernel version is 4.1.12-124.15.1.el7uek

so the above patches were installed last time. Quite a few! All applied, without affecting availability.

Ok, what else is available... a whole bunch, best apply them!

 

# uptrack-upgrade
The following steps will be taken:
Install [f9c8g2hm] CVE-2018-3665: Information leak in floating point registers.
Install [eeqhvdh8] Repeated IBRS/IBPB noise in kernel log on Xen Dom0 or old microcode.
Install [s3g55ums] DMA memory exhaustion in Xen software IO TLB.
Install [nne9ju4x] CVE-2018-10087: Denial-of-service when using wait() syscall with a too big pid.
Install [3xsxgabo] CVE-2017-18017: Use-after-free when processing TCP packets in netfliter TCPMSS target.
Install [rt4hra3j] CVE-2018-5803: Denial-of-service when receiving forged packet over SCTP socket.
Install [2ycvrhs6] Improved fix to CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Install [rjklau8v] Incorrect sequence numbers in RDS/TCP.
Install [qc163oh5] CVE-2018-10124: Denial-of-service when using kill() syscall with a too big pid.
Install [5g4kpl3f] Denial-of-service when removing USB3 device.
Install [lhr4t7eg] CVE-2017-7616: Information leak when setting memory policy.
Install [mpc40pom] CVE-2017-11600: Denial-of-service in IP transformation configuration.
Install [s77tq4wi] CVE-2018-1130: Denial-of-service in DCCP message send.
Install [fli7048b] Incorrect failover group parsing in RDS/IP.
Install [lu9ofhmo] Kernel crash in OCFS2 Distributed Lock Manager lock resource initialization.
Install [dbhfmo13] Fail-over delay in Reliable Datagram Sockets.
Install [7ag5j1qq] Device mapper path setup failure on queue limit change.
Install [8l28npgh] Performance loss with incorrect IBRS usage when retpoline enabled.
Install [sbq777bi] Improved fix to Performance loss with incorrect IBRS usage when retpoline enabled.
Install [ls429any] Denial-of-service in RDS user copying error.
Install [u79kngd9] Denial of service in RDS TCP socket shutdown.
Go ahead [y/N]? y
Installing [f9c8g2hm] CVE-2018-3665: Information leak in floating point registers.
Installing [eeqhvdh8] Repeated IBRS/IBPB noise in kernel log on Xen Dom0 or old microcode.
Installing [s3g55ums] DMA memory exhaustion in Xen software IO TLB.
Installing [nne9ju4x] CVE-2018-10087: Denial-of-service when using wait() syscall with a too big pid.
Installing [3xsxgabo] CVE-2017-18017: Use-after-free when processing TCP packets in netfliter TCPMSS target.
Installing [rt4hra3j] CVE-2018-5803: Denial-of-service when receiving forged packet over SCTP socket.
Installing [2ycvrhs6] Improved fix to CVE-2018-1093: Denial-of-service in ext4 bitmap block validity check.
Installing [rjklau8v] Incorrect sequence numbers in RDS/TCP.
Installing [qc163oh5] CVE-2018-10124: Denial-of-service when using kill() syscall with a too big pid.
Installing [5g4kpl3f] Denial-of-service when removing USB3 device.
Installing [lhr4t7eg] CVE-2017-7616: Information leak when setting memory policy.
Installing [mpc40pom] CVE-2017-11600: Denial-of-service in IP transformation configuration.
Installing [s77tq4wi] CVE-2018-1130: Denial-of-service in DCCP message send.
Installing [fli7048b] Incorrect failover group parsing in RDS/IP.
Installing [lu9ofhmo] Kernel crash in OCFS2 Distributed Lock Manager lock resource initialization.
Installing [dbhfmo13] Fail-over delay in Reliable Datagram Sockets.
Installing [7ag5j1qq] Device mapper path setup failure on queue limit change.
Installing [8l28npgh] Performance loss with incorrect IBRS usage when retpoline enabled.
Installing [sbq777bi] Improved fix to Performance loss with incorrect IBRS usage when retpoline enabled.
Installing [ls429any] Denial-of-service in RDS user copying error.
Installing [u79kngd9] Denial of service in RDS TCP socket shutdown.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.17.2.el7uek
Done!

I now have a total of 46 Ksplice updates applied on this running kernel.

 

# uptrack-uname -a
Linux devel 4.1.12-124.17.2.el7uek.x86_64 #2 SMP Tue Jul 17 20:28:07 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

This is current with the 'latest' UEKR4 version in terms of CVEs.

Note that we don't provide driver 'updates' or similar in these patches, only critical fixes and security fixes. So the kernel is not -identical- to 4.1.12-124.17.2 in every sense. But as far as bad things that could happen are concerned, your current system certainly is at that level!

Since I don't want to forget running the update, I am going to just enable Ksplice to run through a cron job. Just edit /etc/uptrack/uptrack.conf and change autoinstall = no to autoinstall = yes.

A few other things:

When Ksplice patches are installed and you do end up doing a reboot, the installed patches will be automatically applied again right at boot time if you reboot into the same original kernel. Note - it will not automatically go look for new patches.

If you want it to also go and check for new updates at boot, you can uncomment the #upgrade_on_reboot = yes line; this will make that happen.
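
To summarize, the relevant lines of /etc/uptrack/uptrack.conf would then look like this (a sketch showing only the two settings discussed; the rest of the file, including the access-key, stays as shipped):

autoinstall = yes
upgrade_on_reboot = yes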

I removed all installed Ksplice updates (online, using # uptrack-remove --all) and now will time reapplying all 46:

 

# time uptrack-upgrade -y
...
real 0m11.705s
user 0m4.273s
sys 0m4.807s

So 11.7 seconds to apply all 46. Each patch gets applied one after the other; there is no system halt for that long at all. For each individual patch it just halts for a few microseconds (not noticeable) and then has a short pause before continuing to the next, but this pause is just the uptrack tool, not your server instance.

So enable autoinstall, enable upgrade_on_reboot = yes, and you have an Oracle Linux system that you can just leave running: you automatically are current with CVEs/critical fixes without having to worry... Autonomous Oracle Linux patching. Pretty cool!

Some vendors are trying to offer 'live patching' but those things don't even come close. It validates the importance of this technology and feature set, but it's not anywhere near a viable alternative.

Have fun!

 

SQL JSON ORA-40459 Exception

Tom Kyte - Thu, 2018-08-09 10:46
I am trying to use JSON functions like JSON_OBJECT and JSON_ARRAYAGG to generate a JSON string through SQL. We have a lot of columns in our tables that have a lot more data than 4000 bytes, which I am trying to parse through and generate a JSON string. Ev...
Categories: DBA Blogs

Merge Two Rows Into One Row With 2 Columns

Tom Kyte - Thu, 2018-08-09 10:46
Is there a way to simply merge via a single pivot or analytical query the results from two rows into one? We have a legacy table that is essentially storing key-value pairs in columnar format. As a test case, here is an example: create tabl...
Categories: DBA Blogs

Using Oratcptest

Yann Neuhaus - Thu, 2018-08-09 10:32

Recently I did some tests with Oratcptest. This is a command-line tool which can be used to measure network bandwidth and network latency between a client and a server.
In this blog I am going to show how to use this tool in a Data Guard environment.
I am using a Data Guard environment with the following configuration.


Configuration - PROD

  Protection Mode: MaxAvailability
  Members:
  PROD_SITE1 - Primary database
    PROD_SITE2 - Physical standby database
    PROD_SITE3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

DGMGRL> show database 'PROD_SITE1' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE2' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE3' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL>

The installation of Oratcptest is very simple. We just have to download the oratcptest.jar file from the Oracle Support site onto the servers. Note that JRE 6 or higher is required. In my case I have JRE 8 on all servers:

[oracle@primaserver oratcptest]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
[oracle@primaserver oratcptest]$

[oracle@standserver1 oratcptest]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver1 oratcptest]$

[oracle@standserver2 ~]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver2 ~]$

We can invoke the help option to see all the options available for oratcptest:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar -help

Now we can assess the network bandwidth for our Data Guard configuration. Note that I am using simple virtual machines, but the steps will be the same on real production servers.
We first have to determine the highest redo generation rate in the database. The following query can be used:

select thread#, sequence#, blocks*block_size/1024/1024 MB,
       (next_time-first_time)*86400 sec,
       blocks*block_size/1024/1024/((next_time-first_time)*86400) "MB/s"
from v$archived_log
where ((next_time-first_time)*86400 <> 0)
  and first_time between to_date('2018/08/09 08:00:00','YYYY/MM/DD HH24:MI:SS')
                     and to_date('2018/08/09 11:00:00','YYYY/MM/DD HH24:MI:SS')
  and dest_id=2
order by first_time;

   THREAD#  SEQUENCE#         MB        SEC       MB/s
---------- ---------- ---------- ---------- ----------
         1        124 .003417969          9 .000379774
         1        125 .271972656        184 .001478112
         1        126 .001953125          5 .000390625
         1        127 11.3662109        915 .012422088
         1        128 10.8466797       6353 .001707332

We can see that the highest value is .012422088 MB/s. The goal is to see if our network bandwidth can support this rate.
As we are using SYNC mode, the primary database will wait for a confirmation from standby databases that they have written the change to disk before informing the application of the commit success.
For SYNC transport, we then have to collect the average redo write size, which is calculated using the following formula:

Average=redo size / redo writes

These metrics can be obtained from an AWR report. In our case the value is:

Average=15924844/4015=3967
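
If you don't have an AWR report at hand, a rough sketch of the same calculation can be done from the cumulative instance statistics (values are since instance startup, so this is only an approximation of the AWR interval):

select (select value from v$sysstat where name = 'redo size')
     / (select value from v$sysstat where name = 'redo writes') "Average redo write size"
from dual;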

Now we are going to simulate SYNC writes over the network using Oratcptest. Note that I need the location of my standby redo logs:

SQL> select member from v$logfile where type='STANDBY';

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/stredo01.log
/u01/app/oracle/oradata/PROD/stredo02.log
/u01/app/oracle/oradata/PROD/stredo03.log
/u01/app/oracle/oradata/PROD/stredo04.log

From the standby server I can run the following command:

[oracle@standserver1 oratcptest]$ java -jar oratcptest.jar -server -port=5555 -file=/u01/app/oracle/oradata/PROD/myoratcp.tmp
OraTcpTest server started.

From the primary server

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555  -write  -mode=sync -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = YES
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:26:16) The server is ready.
                    Throughput             Latency
(14:26:18)      1.124 Mbytes/s            3.375 ms   (disk-write 2.537 ms)
(14:26:20)      0.813 Mbytes/s            4.668 ms   (disk-write 3.775 ms)
(14:26:22)      1.094 Mbytes/s            3.467 ms   (disk-write 2.773 ms)
(14:26:24)      1.004 Mbytes/s            3.778 ms   (disk-write 2.991 ms)
(14:26:26)      0.560 Mbytes/s            6.779 ms   (disk-write 5.623 ms)
(14:26:26) Test finished.
               Socket send buffer = 11700 bytes
                  Avg. throughput = 0.920 Mbytes/s
                     Avg. latency = 4.126 ms (disk-write 3.280 ms)

[oracle@primaserver oratcptest]$

We can see that the average throughput is 0.920 MB/s, which is sufficient to handle our highest peak rate of .012422088 MB/s. We can also note the latency, which includes the time to send the message to the server host, the optional disk write at the server host, and the acknowledgment back to the client process.
If we are using ASYNC mode, the test looks like this:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555    -mode=async -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = ASYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:58:03) The server is ready.
                    Throughput
(14:58:05)     13.897 Mbytes/s
(14:58:09)      5.193 Mbytes/s
(14:58:11)     40.007 Mbytes/s
(14:58:13)     21.475 Mbytes/s
(14:58:13) Test finished.
               Socket send buffer = 210600 bytes
                  Avg. throughput = 16.901 Mbytes/s

[oracle@primaserver oratcptest]$
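
The configuration also includes a second standby (PROD_SITE3 on standserver2), so the same measurement should be repeated against it. A sketch, assuming the server side is started there the same way and with the same directory layout:

[oracle@standserver2 oratcptest]$ java -jar oratcptest.jar -server -port=5555 -file=/u01/app/oracle/oradata/PROD/myoratcp.tmp

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver2 -port=5555 -write -mode=sync -length=3967 -duration=10s -interval=2s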

Conclusion
In this blog we have talked about Oratcptest, a simple tool which can help to assess network bandwidth. Oratcptest can help before performing RMAN backups and file copies, or when setting up a Data Guard environment.
Reference: Oracle Doc ID 2064368.1

 

The post Using Oratcptest appeared first on the dbi services blog.

Partitioning -- 2 : Simple Range Partitioning -- by DATE

Hemant K Chitale - Thu, 2018-08-09 03:35
Range Partitioning allows you to separate a logical table into a number of distinct physical segments, each segment holding data that maps to a range of values.
(I encourage you to read the Introduction in the first post in this series)

The simplest and most common implementation is Range Partitioning by a DATE column.

SQL> l
1 create table sales_data
2 (sale_id number primary key,
3 sale_date date,
4 invoice_number varchar2(21),
5 customer_id number,
6 product_id number,
7 sale_value number)
8 partition by range (sale_date)
9 (partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY'))
10 tablespace TBS_YEAR_2015,
11 partition P_2016 values less than (to_date('01-JAN-2017','DD-MON-YYYY'))
12 tablespace TBS_YEAR_2016,
13 partition P_2017 values less than (to_date('01-JAN-2018','DD-MON-YYYY'))
14 tablespace TBS_YEAR_2017,
15 partition P_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY'))
16 tablespace TBS_YEAR_2018,
17 partition P_2019 values less than (to_date('01-JAN-2020','DD-MON-YYYY'))
18 tablespace TBS_YEAR_2019,
19 partition P_MAXVALUE values less than (MAXVALUE)
20 tablespace USERS
21* )
SQL> /

Table created.

SQL>


Here, I have created each Partition in a separate tablespace.  Note that the Partition Key (SALE_DATE) does not have to be the same as the Primary Key (SALE_ID)

I have also created a MAXVALUE Partition. (Some DBAs/Developers may mistakenly assume this to be a *default* partition. Range Partitioning, unlike List Partitioning, does not have the concept of a "default" partition. This simply is the Partition for incoming rows that have a Partition Key value higher than the last (highest) defined Partition Key Upper Bound (31-Dec-2019 23:59:59 in this case).)
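
As a quick illustration (a hypothetical sample row, not part of the original demo), a SALE_DATE beyond the highest defined Upper Bound goes into this Partition; the last statement rolls the test row back:

insert into sales_data values (1, to_date('15-MAR-2025','DD-MON-YYYY'), 'INV-00001', 100, 200, 50);

select sale_id, sale_date from sales_data partition (P_MAXVALUE);

rollback;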

I can look up the data dictionary for these partitions in this manner:

SQL> select partition_name, tablespace_name   
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2015 TBS_YEAR_2015
P_2016 TBS_YEAR_2016
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_MAXVALUE USERS

6 rows selected.

SQL>


Partitions are ordered by Partition *Position*, not Name.

How do I add a new partition for data for the year 2020 ?  By "splitting" the MAXVALUE partition.

SQL> alter table sales_data                 
2 split partition P_MAXVALUE
3 at (to_date('01-JAN-2021','DD-MON-YYYY'))
4 into
5 (partition P_2020 tablespace TBS_YEAR_2020, partition P_MAXVALUE)
6 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE
------------------------------ ---------------------------------------------
P_2015 TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD
P_2016 TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD
P_2017 TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD
P_2018 TO_DATE(' 2019-01-01 00:00:00', 'SYYYY-MM-DD
P_2019 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD
P_2020 TO_DATE(' 2021-01-01 00:00:00', 'SYYYY-MM-DD
P_MAXVALUE MAXVALUE

7 rows selected.

SQL>
SQL> l
1 select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4* order by partition_position
SQL> /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2015 TBS_YEAR_2015
P_2016 TBS_YEAR_2016
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

7 rows selected.

SQL>


Note that, irrespective of the date format I specify in the CREATE or SPLIT commands, Oracle presents the Upper Bound Date (HIGH_VALUE) in its own format, using a Gregorian Calendar.

How do I remove an older partition ?

SQL> alter table sales_data
2 drop partition P_2015
3 /

Table altered.

SQL>


A DROP command is very simple.

In my next post, I will add Indexes to this table.



Categories: DBA Blogs

Partitioning -- 1 : Introduction

Hemant K Chitale - Thu, 2018-08-09 02:54
I am beginning a new series of Blog Posts on Partitioning in Oracle.  I plan to cover 11g and 12c.   I might add posts on changes in 18c  (which is really 12.2.0.2 currently)

First, this is my presentation at AIOUG Sangam 11
and this is the corresponding article.

This series of posts will have new examples, from the simple to the complex, not present in the above presentation / article.
Categories: DBA Blogs

Facebook : My Recent Experience

Tim Hall - Thu, 2018-08-09 02:25

Here’s a little story of what has happened to me recently on Facebook.

First a little history lesson. For a long time I had an extremely small list of friends on Facebook. I would only accept friend requests from people I really knew, like IRL friends and a few work colleagues. That was it. No Oracle people were allowed… The wife has a rule that only people she would let stay in her house are friends on Facebook. Nobody is allowed in my house, so my definition had to be a little different than that.

Some time ago I changed my stance on Facebook friends and started to accept other people, mostly assigning them to the “Restricted” list, and so it went on for some time.

Recently I tweeted that I was getting a lot of friend requests and wondered what was going on. I figured I have a lot of readers, so it’s natural people would reach out, and I didn’t think too much about it. After a while I started to see some really odd things happening, so I did a little digging and found some rather “interesting” people in my friend list. I don’t really want to say more about it than that.

The long and short of it was I decided to remove several thousand friends and I’ve returned to something close to my original policy. I’m sorry if you are a decent person and feel offended that I have unfriended you, but if I don’t really know you, that’s the way it is.

By the way, Facebook used to let you mass delete friends, but that is no longer possible. What’s more, if you delete a lot of them at once they lock certain features of your account. I had to write to Facebook to explain what I was doing and why before they would let me unfriend people again. I know it’s an automatic check for suspicious behaviour, but it would be nice if they spent more effort checking what people are saying and doing on their platform…

Cheers

Tim…

Facebook : My Recent Experience was first posted on August 9, 2018 at 8:25 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

MERGE JOIN CARTESIAN: a join method or a join type?

Yann Neuhaus - Wed, 2018-08-08 17:00

I’ll present about join methods at POUG and DOAG. I’ll show how the different join methods work in order to better understand them. The idea is to show Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian on the same query. I’ll run a simple join between DEPT and EMP with the USE_NL, USE_HASH, USE_MERGE and USE_MERGE_CARTESIAN hints. I’ll show the execution plan, with SQL Monitoring in text mode. And I’ll put some gdb breakpoints on the ‘qer’ (query execution rowsource) functions to run the plan operations step by step. Then I’ll do the same on a different query in order to show in detail the 12c adaptive plans.

But wait, I listed Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian… but is Merge Cartesian Join really a join method? I mean, my query is not a cartesian join. I have all join predicates here. But for sure you can also do an inner join by starting with a cartesian join and then filtering on the join predicate. It is as if you do physically what the old join syntax of Oracle does logically: put no predicates in the from clause and add the join predicates in the where clause to filter over the cartesian product.

If I look at the 12.2 documentation, it is a Join method
[Screenshot: 12.2 documentation - Merge Join Cartesian listed under join methods]

For the definition, a Join Method is how the join will be executed. It is not a decision of the SQL developer because SQL is declarative: you declare the result you want, and the optimizer will decide how to do it. And this is why hints are in comments: they are not part of the declarative syntax. Forcing how to do it is not part of SQL.

Just after listing the join methods, the documentation lists the join types, which are part of the SQL because they declare the join result you expect. Inner join to get all matching rows. Semi join to get only the first matching row. Anti join to get all rows which do not match. Outer join to get the non-matching rows in addition to the matching ones. The syntax is INNER JOIN, OUTER JOIN, EXISTS or IN, NOT EXISTS or NOT IN. Join type is not ‘how’ but ‘what’. For the cartesian join type, both ways to declare it are recalled just below.
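
As a reminder of that ‘what’ (standard syntax, not part of the demo below), a cartesian join is declared either with the ANSI CROSS JOIN or with the old syntax without a join predicate:

-- ANSI syntax: explicit cartesian product
select * from DEPT cross join EMP;

-- old Oracle syntax: no join predicate in the where clause
select * from DEPT, EMP;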

Ok, so back to the join method. Let’s force it on my inner join between DEPT and EMP:

11g


SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
10 RESEARCH DALLAS 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 RESEARCH DALLAS 7839 KING PRESIDENT 17-NOV-81 5000
10 RESEARCH DALLAS 7934 MILLER CLERK 7782 23-JAN-82 1300
10 SALES CHICAGO 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 SALES CHICAGO 7839 KING PRESIDENT 17-NOV-81 5000
10 SALES CHICAGO 7934 MILLER CLERK 7782 23-JAN-82 1300
10 OPERATIONS BOSTON 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 OPERATIONS BOSTON 7839 KING PRESIDENT 17-NOV-81 5000
10 OPERATIONS BOSTON 7934 MILLER CLERK 7782 23-JAN-82 1300
 
12 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 2034389985
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 7 | | | |
| 1 | MERGE JOIN CARTESIAN| | 1 | 14 | 12 |00:00:00.01 | 7 | | | |
| 2 | TABLE ACCESS FULL | DEPT | 1 | 4 | 4 |00:00:00.01 | 4 | | | |
| 3 | BUFFER SORT | | 4 | 4 | 12 |00:00:00.01 | 3 | 2048 | 2048 | 2048 (0)|
|* 4 | TABLE ACCESS FULL | EMP | 1 | 4 | 3 |00:00:00.01 | 3 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - filter("DEPT"."DEPTNO"="EMP"."DEPTNO")
 

Ok, then I declared my result with an inner join query, and I forced the join method with a hint to show that it is possible. But look at the result. 12 rows? Only DEPTNO 10, when the SCOTT schema has employees in departments 10, 20 and 30? And only 3 employees, repeated for each of the 4 department names? That’s a wrong result.

NEVER FORCE A CARTESIAN JOIN WITH USE_MERGE_CARTESIAN!

That’s a very old bug: Bug 17064391 Wrong result with USE_MERGE_CARTESIAN hint finally fixed in 12c (12.2 and backported in 12.1 PSU)

Then how is it fixed?

18c

With the fix, the hint is just ignored and a SORT MERGE JOIN is used here:

SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
20 RESEARCH DALLAS 7566 JONES MANAGER 7839 02-APR-81 2975
20 RESEARCH DALLAS 7902 FORD ANALYST 7566 03-DEC-81 3000
20 RESEARCH DALLAS 7876 ADAMS CLERK 7788 23-MAY-87 1100
20 RESEARCH DALLAS 7369 SMITH CLERK 7902 17-DEC-80 800
20 RESEARCH DALLAS 7788 SCOTT ANALYST 7566 19-APR-87 3000
30 SALES CHICAGO 7521 WARD SALESMAN 7698 22-FEB-81 1250 500
30 SALES CHICAGO 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30 SALES CHICAGO 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300
30 SALES CHICAGO 7900 JAMES CLERK 7698 03-DEC-81 950
30 SALES CHICAGO 7698 BLAKE MANAGER 7839 01-MAY-81 2850
30 SALES CHICAGO 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
 
14 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 1407029907
 
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 12 | 12 | | | |
| 1 | MERGE JOIN | | 1 | 14 | 14 |00:00:00.01 | 12 | 12 | | | |
| 2 | SORT JOIN | | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 3 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | | | |
|* 4 | SORT JOIN | | 4 | 14 | 14 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | 6 | | | |
--------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
filter("DEPT"."DEPTNO"="EMP"."DEPTNO")

So here the result is good, thanks to the fix, and we clearly see how it is fixed: the USE_MERGE_CARTESIAN hint has been ignored.

And the funny thing is that when you look at the 18c documentation, the Merge Join Cartesian is not a join method anymore but a join type:
[Image CaptureJoinMethods183: the 18c documentation, where Cartesian Joins now appear under the join types]

Exactly the same paragraph, but now under join types (the 'what') rather than under join methods (the 'how').

What or How?

Actually, in my opinion, it is both. When you explicitly want a cartesian join, that's a join type, described by CROSS JOIN in the ANSI join syntax, or by the lack of related predicates in the old syntax. This is 'what'. But you may also encounter a MERGE JOIN CARTESIAN for a non-cartesian join just because the optimizer decides it is more efficient. When you have very few rows on both sides, it may be faster to start with a cartesian product of the small rowsources. This can be part of star transformation, where fact rows are joined back to the cartesian product of the filtered dimensions in order to project the dimension attributes. This is 'how' it will be executed. We also see it when the optimizer underestimates the cardinalities, and then it is often followed by a long nested loop.

When?

So, let’s look at the documentation “When the Optimizer Considers Cartesian Joins”:
[Image CaptureJoinMethods183-when: the 18c documentation section 'When the Optimizer Considers Cartesian Joins']

  • No join condition exists: that’s when cartesian product is what we want
  • The ORDERED hint specifies a table before its join table is specified: that’s when it is the only join method possible with the specified join order
  • A Cartesian join is an efficient method: in that case it is clearly used as a method, even though it is now documented under join types.

In conclusion, a cartesian join is a join type. It can also be used as a join method when the optimizer decides to. But you cannot decide it yourself by hinting since 12c, and trying to do so in previous versions is a very bad idea and can return wrong results.

So, for this one I’ll explicitely run a CROSS JOIN:
[Image CaptureUSEMERGECARTESIAN: the CROSS JOIN query, its SQL Monitor report, and the 'qer' function call stack]
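
As the screenshot is not reproduced here, here is a minimal sketch of such an explicit cartesian product on the SCOTT schema (the query in the demo may differ slightly):

-- explicit cartesian product with the ANSI syntax
select * from DEPT cross join EMP;
-- equivalent to the pre-ANSI syntax with no join predicate:
-- select * from DEPT, EMP;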

The query is on top. The SQL Monitor report is in the middle, showing that we are currently active reading rows from EMP. The bottom shows the 'qer' functions backtrace: the fetch call is propagated from opifch2 for the SELECT STATEMENT, through the MERGE JOIN CARTESIAN (qerjo) and the BUFFER SORT (qerso), down to the TABLE ACCESS (qertb).

So basically, the goal of this full-demo presentation is to show how to read the execution plan by understanding how it is executed. This qertbFetch on the inner table EMP is executed only for the first row coming from the outer table DEPT. As the rows are returned to a buffer, the further iterations will fetch only from this buffer and will not go further than qersoFetchSimple. The qersoProcessULS ('process underlying row source' – see Frits Hoogland's annotations) is run only once. This is the big difference with a Nested Loop, where the inner loop on the underlying rowsource is run for each outer loop iteration: those two loops are nested – thus the name. But the function for the join part is the same for Nested Loop, Sort Merge Join and Merge Join Cartesian: qerjo. Only the underlying operations differentiate the join methods.

Last comment, we don’t see any function which really sort the rows in this buffer (as we will see for the Sort Merge Join method) because there is no sorting despites the name of the BUFFER SORT operation. More info on Jonathan Lewis blog.

 

Cet article MERGE JOIN CARTESIAN: a join method or a join type? est apparu en premier sur Blog dbi services.

Design question around automatic data change notifications

Tom Kyte - Wed, 2018-08-08 16:26
Hello Oracle experts. Thank you in advance for going through my question. In our organisation, we are planning to implement a solution to automatically push the data that is changed in our databases to consumers. They are not a high traffic system...
Categories: DBA Blogs

Four Options for Creating Mindbreeze Search Interfaces

A well-designed search interface is a critical component of an engaging search experience. Mindbreeze provides a nice combination of both pre-built search apps and tools for customization. This post explores the following approaches to building a Mindbreeze search interface:

  • The Mindbreeze Default Search Client
  • The Mindbreeze Search App Designer
  • Custom Mindbreeze Web Applications
  • The Mindbreeze REST API
Option 1: The Mindbreeze Default Search Client Flexibility: Low | Development Effort: None

Mindbreeze includes a built-in search client which offers a feature-rich, mobile-friendly search interface out of the box. Built-in controls exist to configure filter facets, define suggestion sources, and enable or disable export. Features are enabled and disabled via the Client Service configuration interface within the Mindbreeze Management Center. The metadata displayed within the default client is determined by the value of the “visible” property set in the Category Descriptor for the respective data sources. Some of the Mindbreeze features exposed through the default client are not available via a designer-built search app (discussed in Option 2). These include saved searches, result groupings (i.e. summarize-by), the sort-by picker, sources filters, and tabs. Organizations that wish to use these features without much effort would be wise to consider the Mindbreeze Default Search Client.

In order to integrate the built-in client with a website or application, users are typically redirected from the primary website to the Mindbreeze client when performing a search. The default client is served directly from the search appliance, and the query term can be passed in the URL from the website’s search box to the Mindbreeze client. Alternatively, the built-in client can be embedded directly into a website using an iframe.

What is a Category Descriptor?

Mindbreeze uses an XML file called the Category Descriptor (categorydescriptor.xml) to control various aspects of both indexing and serving for each data source category (e.g. Web, SharePoint, Google Drive, etc.). Each category plugin includes a default Category Descriptor which can be extended or modified to meet your needs. Common modifications include adding localized display labels for metadata field names, boosting the overall impact of a metadata field on relevancy, and changing which fields are visible within the default search client.

Option 2: The Mindbreeze Search App Designer Flexibility: Moderate | Development Effort: None to Moderate

The Mindbreeze Search App Designer provides a drag-and-drop interface for creating modular, mobile-friendly search applications. Some of the most popular modules include filters, maps, charts, and galleries. Many of these features are not enabled on the aforementioned default client, so a search app is the easiest way to use them. This drag-and-drop configuration allows for layout adjustments, widget selection, and basic configuration without coding or technical knowledge. To further customize search apps, users can modify the mustache templates that control the rendering of each search widget within the search app. Common modifications include conditionally adjusting visible metadata, removing actions, or adding custom callouts or icons for certain result types.

A key feature is the ability to export the code needed to embed a search app into a website or application from the Search Apps page in the Mindbreeze Management Center. That code can then be placed directly in a div or iframe on the target website, eliminating the need to redirect users to the appliance. Custom CSS files may be used to style the results to match the rest of the website. Although you can add a search box directly to a search app, webpages usually have their own search box in the header. You can utilize query terms from an existing search box by passing them as a URL parameter, where they will be picked up by the embedded search app.

Did you know? This website uses a search app for Mindbreeze-powered website search. For a deep-dive look at that integration, check out our blog post on How We Integrated this Website with Mindbreeze InSpire.

Option 3: Custom Mindbreeze Web Applications Flexibility: High | Development Effort: Low to Moderate

The default client mentioned in Option 1 can also be copied to create a new custom version of a Mindbreeze Web Application. The most common alteration is to add a reference to a custom CSS file which modifies the look and feel of the search results without changing the underlying data or DOM structure. This modification is easy and low risk. It is also very easy to isolate issues related to such a change, as you can always attempt to reproduce an issue using the default client without your custom CSS.

More substantial modifications to the application's index.html or JavaScript files can also be made to significantly customize and alter the behavior of the search experience. Examples include adding custom business logic to manipulate search constraints or applying dynamic boosting to alter relevancy at search time. Other Mindbreeze UI elements can also be added to customized web apps using Mindbreeze HTML building blocks; this includes many of the elements exposed through the Search App Designer such as graphs, maps, and timelines. While these types of alterations require deeper technical knowledge than simply adding custom CSS, they are still often less effort than building a custom UI from scratch (as described in Option 4). These changes may require refactoring to remain compatible with future versions or to integrate new features over time, so this should be considered when implementing your results page.

Option 4: The Mindbreeze REST API Flexibility: High | Development Effort: Moderate to High

For customers seeking a more customized integration, the Mindbreeze REST API allows search results to be returned as JSON, giving you full control over their presentation. Custom search pages also allow for dynamic alterations to the query, constraints, or other parameters based on custom business logic. Filters, spelling suggestions, preview URLs, and other Mindbreeze features are all available in the JSON response, but it is up to the front-end developers to determine which features to render on the page, how to arrange them, and what styling to use. This approach allows for the most control and the tightest integration with the containing site, but it also requires the most effort. That said, the fact that custom search pages generally require the greatest effort does not mean that selecting this option will always result in a lengthy deployment. In fact, one of our clients used the Mindbreeze API to power their custom search page and went from racking to go-live in 37 days.

Mindbreeze offers an excellent combination of built-in features with tools for extending capabilities when necessary. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us.

The post Four Options for Creating Mindbreeze Search Interfaces appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Backing up and restoring EDB containers in MiniShift/OpenShift

Yann Neuhaus - Wed, 2018-08-08 09:38

The last blogs in the series are already some days old: Setting up MiniShift, Deploying EDB containers in MiniShift/OpenShift, Customizing PostgreSQL parameters in EDB containers in MiniShift/OpenShift, Scaling the EDB containers in MiniShift/OpenShift, EDB Failover Manager in EDB containers in Minishift/OpenShift and EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers. What is missing is how you can backup and restore instances running in this container deployment and that is the topic of this post.

What you usually use to back up and restore EDB Postgres is BART, and the container world is no exception to that. Let's see how that works.

My current deployment looks like this:
[Image Selection_013: the current OpenShift deployment overview]

Two pgpool containers are serving three database containers which you can also check on the command line:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-b8lvj   1/1       Running   0          3m        172.17.0.9   localhost   masterdb
edb-as10-0-1-gj76h   1/1       Running   0          1m        172.17.0.5   localhost   standbydb
edb-as10-0-1-sb5lt   1/1       Running   0          2m        172.17.0.4   localhost   standbydb
edb-pgpool-1-qzk5v   1/1       Running   0          3m        172.17.0.7   localhost   queryrouter
edb-pgpool-1-rvtl6   1/1       Running   0          3m        172.17.0.6   localhost   queryrouter

What we want to do is to back up the database instances, or at least one of them. What you need to prepare before deploying the BART container is shared storage between the database containers and the BART container. This is especially important for the restore case, as the restore procedure needs to access the backup, which is hosted in the BART container. Notice that this storage configuration has the “Read-Write-Many” attributes:
[Image Selection_030: the backup storage claim and volume with ReadWriteMany access mode]
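
As a rough sketch (the names and size are assumptions, not the exact objects from the screenshot), such a shared claim could look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edb-backup-pvc        # assumed name, use the claim of your project
spec:
  accessModes:
    - ReadWriteMany           # required so that BART and the database pods can mount it at the same time
  resources:
    requests:
      storage: 10Gi           # assumed size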

When I initially deployed the database containers I provided exactly this storage claim and volume as parameters, so the backup volume is available in the database containers as well:
[Image Selection_031: the database deployment referencing the same backup claim and volume]

This means in any of the database containers I will be able to see the backup volume:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/
total 12
drwxrwx---  3 root    root 4096 Aug  6 11:49 .
drwxr-xr-x 86 root    root 4096 Aug  8 14:03 ..
drwxrwxr-x  4 edbuser root 4096 Aug  6 11:49 edb-bart-1-89k7s
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/
total 16
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 .
drwxrwx--- 3 root    root 4096 Aug  6 11:49 ..
drwxrwxr-x 2 edbuser root 4096 Aug  6 11:49 bart_log
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 pgbackup
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/
total 12
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 .
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 edb
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/edb/
total 16
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 .
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:52 1533556356576
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:49 archived_wals

The same storage configuration then needs to be provided to the BART deployment. Here is the yaml file for the BART deployment:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
              
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Once that is imported we can deploy the BART container:
[Image Selection_032: deploying the BART container from the imported template]
Notice that this is actually the same storage configuration as was used to set up the database containers.
[Image Selection_033: the BART deployment using the same backup claim and volume]

What I didn’t tell you is that you need to do one more step before that. As the BART container is supposed to back up all the instances in a project, we need to pass the BART configuration file to the container via a configMap. In this setup I only have one instance, so the configMap looks like this:
[Image Selection_034: the configMap holding the bart.cfg for the 'edb' instance]
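
A hedged sketch of what such a configMap contains (the object name is an assumption and the exact bart.cfg settings may differ; host, port and user must match the database service of the project):

apiVersion: v1
kind: ConfigMap
metadata:
  name: bart-config            # assumed name
data:
  bart.cfg: |
    [BART]
    bart_host = edbuser@127.0.0.1
    backup_path = /edbbackup
    pg_basebackup_path = /usr/edb/as10/bin/pg_basebackup

    [EDB]
    host = edb-service
    port = 5444
    user = enterprisedb
    cluster_owner = enterprisedb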

Here you would add all the instances you need to back up in the project. Once the BART container is ready:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-b8lvj   1/1       Running   0          17m
edb-as10-0-1-gj76h   1/1       Running   0          14m
edb-as10-0-1-sb5lt   1/1       Running   0          16m
edb-bart-1-7cgfv     1/1       Running   0          19s
edb-pgpool-1-qzk5v   1/1       Running   0          17m
edb-pgpool-1-rvtl6   1/1       Running   0          17m

… you can connect to it and perform a manual backup:

dwe@dwe:~$ oc rsh edb-bart-1-7cgfv
sh-4.2$ bart backup -s edb
INFO:  creating backup for server 'edb'
INFO:  backup identifier: '1533738106320'
65043/65043 kB (100%), 1/1 tablespace

INFO:  backup completed successfully
INFO:  backup checksum: 16fba63925ac3e77d474a36496c2a902 of base.tar
INFO:  
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1533738106320
BACKUP NAME: none
BACKUP PARENT: none
BACKUP LOCATION: /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320
BACKUP SIZE: 63.52 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: UTC
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
 ChkSum                             File      
 16fba63925ac3e77d474a36496c2a902   base.tar  

TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000008
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-08-08 14:21:46 UTC
STOP TIME: 2018-08-08 14:21:47 UTC
TOTAL DURATION: 1 sec(s)
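
To list what is in the backup catalog later on, you could run the following from within the BART container (a hedged example; the exact output depends on the BART version):

sh-4.2$ bart SHOW-BACKUPS -s edb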

This backup is now available in the BART container and, in addition, it is accessible from the database containers:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/
total 65060
drwxr-xr-x 2 edbuser root     4096 Aug  8 14:21 .
drwxr-xr-x 4 edbuser root     4096 Aug  8 14:21 ..
-rwxr-xr-x 1 edbuser root      664 Aug  8 14:21 backupinfo
-rwxr-xr-x 1 edbuser root 66605568 Aug  8 14:21 base.tar

In case you’d need to restore that you would deploy a new database configuration specifying this backup as the “Restore file”:
[Image Selection_035: a new database deployment with the backup specified as the Restore file]
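
In terms of the template parameters shown earlier, this boils down to something like the following (a sketch only; whether RESTORE_FILE expects the backup directory or the base.tar itself depends on the container image version):

- name: RESTORE_FILE
  value: '/edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/base.tar'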

One downside with the current versions of the containers: you cannot do point-in-time recovery. Only restores from full backups are supported for now. This will change in the next release, though.

Have fun with the containers …

 

Cet article Backing up and restoring EDB containers in MiniShift/OpenShift est apparu en premier sur Blog dbi services.

MobaXTerm 10.9

Tim Hall - Wed, 2018-08-08 07:06

Once again I’m late to the party. About a week ago MobaXTerm 10.9 was released.

The downloads and changelog are in the usual places.

This is a great tool!

Cheers

Tim…


MASH Continues European Expansion with Oracle Food and Beverage Cloud

Oracle Press Releases - Wed, 2018-08-08 07:00
Press Release
MASH Continues European Expansion with Oracle Food and Beverage Cloud Oracle Simphony Cloud Enables Danish Steak House Chain to Reduce IT Complexity, Streamline Operations and Drive Strategic Expansion at Scale

Redwood Shores, Calif.—Aug 8, 2018

Modern American Steak House (MASH), a high-end Danish restaurant chain, has chosen Oracle Simphony Cloud to power its steak house restaurants across Europe. With 13 restaurant locations across Denmark, Germany and the UK, and plans for additional international growth, MASH needed a technology platform that could support the strategic expansion of its business at scale. In addition, the restaurant needed a solution that would help reduce the complexity of its IT infrastructure and streamline both front and back-office operations across all of its restaurant locations.

“Our biggest challenge was scalability—since opening MASH in 2009, our expansion into other countries proved to be a very difficult and complex undertaking due to obstacles around IT infrastructure, regulations, taxes, language and currency, so finding a technology solution that would help us maneuver around these obstacles and enable us to grow at scale was a top priority,” said Mirek Nørkjær, Assistant COO, Copenhagen Concepts, the parent company of MASH restaurants. “Implementing Oracle Simphony Cloud has enabled us to minimize IT complexity through one centralized system, streamline front and back-office operations with real-time data and localization capabilities, and support our vision for expansion, all at scale.”

Oracle Simphony Cloud provides MASH with one centralized system to manage operations across all 13 of its restaurants, reducing the cost and complexity of IT, ensuring high performance scalability and allowing for more business efficiency and agility. In addition, Oracle Simphony Cloud enables MASH to maintain brand standards across its restaurants globally by enforcing brand, menu and employee management standards and allowing for localization. It also ensures menu and pricing consistency, within a country, a region, or a single location, and offers multilanguage and currency support.

“Both large restaurant chains and independent restaurants need to be able to remove obstacles in their IT infrastructure that prevent them from creating a positive foundation for growth,” said Chris Adams, vice president strategy and solutions management, Oracle Food and Beverage. “MASH demonstrates the value and potential of Oracle Simphony Cloud to transform IT processes, streamline operations and drive growth at scale.”

For more information learn how MASH delivers prime cuts and premium service.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584
