Feed aggregator

Oracle Enables Smart Manufacturing with New Artificial Intelligence Cloud Applications

Oracle Press Releases - Mon, 2018-05-14 07:00
Press Release
Oracle Enables Smart Manufacturing with New Artificial Intelligence Cloud Applications
Intelligent applications help improve overall business performance in manufacturing by driving smarter decisions, increasing yields and enhancing production efficiency

Redwood Shores, Calif.—May 14, 2018

Oracle today announced new artificial intelligence (AI) cloud applications that enable manufacturing organizations to reduce costs and increase yields by providing rapid analysis and actionable insights that can improve production efficiency and performance. The new Oracle Adaptive Intelligent Applications for Manufacturing leverage machine learning and AI to process vast amounts of data from production environments and rapidly identify issues, enabling improved operational efficiency.

Oracle Adaptive Intelligent Applications for Manufacturing enables manufacturers to spot anomalies during production, pinpoint the root cause of issues, and predict events before they occur. The applications enable manufacturers to look into every stage of the production process, foresee faulty processes and elements, and trace the impact of issues from production through to customer delivery.

Built on the robust and scalable Oracle Cloud Platform with embedded machine learning capabilities, this solution includes a manufacturing-aware data lake that brings together and analyzes structured, semi-structured, and unstructured data from multiple data sources on the shop floor.

Oracle Adaptive Intelligent Applications for Manufacturing include:

  • Pattern and correlation analysis: Discover key patterns and correlations between a complex set of multi-variate influencing factors across manpower, machine, method, material, and management related information. Users can then align these insights with manufacturing business metrics such as yield, quality, cycle time, cost, scrap, rework, and returns to help quickly identify root causes.
  • Genealogy and traceability analysis: Using highly intuitive user interfaces and a self-driven ad-hoc analysis paradigm, the solution sets the foundation for “smart recall” analysis by providing comprehensive capabilities for backward and forward tracing of products and processes to quickly identify impacted products, services, and customers.
  • Predictive analysis: Leveraging the foundation of patterns and correlations analysis driven by machine learning and AI algorithms, this solution predicts the likelihood of occurrence of critical outcomes such as yield, defects, scrap, rework, cycle time and costs for ongoing production activities. This provides business users with the lead-time needed to intervene in a timely fashion to minimize losses.
 

Oracle Adaptive Intelligent Applications for Manufacturing are designed to work in a complex and heterogeneous mix of IT systems such as Manufacturing Execution Systems (MES), Quality Management, Enterprise Resource Planning (ERP), Human Capital Management (HCM), Customer Relationship Management (CRM) and Operational Technology (OT) systems that include sensor and log data from equipment and machines as well as external environmental data such as humidity, temperature etc.

“Traditionally, pattern and correlation analysis and predictive analysis are done by a small group of specialist data scientists,” said Ramchand Raman, Vice President, Oracle Product Development. “Oracle Adaptive Intelligent Applications for Manufacturing dramatically simplify the output of complex machine learning and AI algorithms and present these insights to average business users to drive better, faster decision making.”

Contact Info
Bill Rundle
Oracle PR
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.


ADWC – connect from your premises

Yann Neuhaus - Mon, 2018-05-14 05:44

In the previous post about the Autonomous Data Warehouse Service, I’ve run queries through the Machine Learning Notebooks. But you obviously want to connect to it from your premises, with SQL*Net.

Of course the connection, going through the public internet, must be secured. If you already use a managed service like the Oracle Exadata Express Cloud Service, you already know what to do: download a .zip containing the connection strings and the wallet and certificate for SQL*Net encryption.

You get it from the Service Console, logged in as the ADMIN user, under the Administration tab. The Download Client Credentials button asks you for a wallet password. However, this is not a password that protects the .zip file, and the .zip contains an auto-login wallet, so keep it secured.

SQL Developer

The simplest use of this file is with SQL Developer because you don’t even have to unzip it. Just choose the ‘Cloud PDB’ connection type, enter the path of the .zip file as Configuration File, the password as Keystore Password, and the ADMIN user (or any user you have created via the Oracle ML Users page).

In the tnsnames.ora provided in the .zip file there are 3 network service names connecting to 3 different services: _low, _medium and _high. They map to the resource manager plan so that you can run your queries with different priorities.

SQLcl thin

With SQLcl you do not need to unzip the credentials file, at least when you are using thin JDBC (the default).
You just register it with:

18:53:12 SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
Using temp directory:/tmp/oracle_cloud_config4174171941677611695

and you are ready to connect to the _low, _medium and _high services.

As you see, it unzips the file into a temporary directory, so you have to do this each time you run SQLcl. You can add the command to login.sql, and you may want to add some housekeeping as these temporary directories may remain. Or run all of this in a docker container.
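A minimal login.sql sketch (the wallet path is the one used in this demo; the cleanup pattern is an assumption and supposes no other SQLcl session is still using one of those directories):

-- login.sql: remove leftovers of previous runs, then register the wallet again
host rm -rf /tmp/oracle_cloud_config*
set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip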

This is simple, at least if you are running the latest Java 8, which includes the Java Cryptography Extension (JCE). If that is not the case, as when you use the Java Home provided with 18c (1.8.0_152), you have to add the jars yourself. But don’t worry, all is explained:

SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
***** JCE NOT INSTALLED ****
***** CAN NOT CONNECT TO PDB Service without it ****
Current Java: /u01/app/oracle/product/18.0.0/dbhome_1/jdk/jre
Follow instructions on http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
SQL>
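For reference, adding the JCE policy files is just a matter of copying two jars into the JRE security directory. A sketch, assuming the jce_policy-8.zip download from the URL above (the archive and folder names come from that download; the JRE path is the one shown in the message):

$ unzip jce_policy-8.zip
$ cp UnlimitedJCEPolicyJDK8/local_policy.jar \
     UnlimitedJCEPolicyJDK8/US_export_policy.jar \
     /u01/app/oracle/product/18.0.0/dbhome_1/jdk/jre/lib/security/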

Of course, the alternative is to install the latest Java:

$ sudo rpm -i jre-8u171-linux-x64.rpm
Unpacking JAR files...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...

and set JAVA_HOME to it before starting SQLcl:

$ export JAVA_HOME=/usr/java/jre1.8.0_171-amd64
$ SQLPATH=~/sql bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql /nolog

Anyway, in all cases, once the credential .zip is provided you can connect with Thin JDBC (the default) with a user/password that has been created in the ADWC:

SQL> connect admin@adwc_high
Password? (**********?) ****************
AArray = [B@24959ca4
AArray = [B@10289886
AArray = [B@32115b28
AArray = [B@2ad48653
Connected.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
6 esj1pod6 12.2.0.1.0 13-MAY-18 OPEN YES 6 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE RAC

What’s in the .zip

The tnsnames.ora has entries for the low, medium, high services.

adwc_high = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_low = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_low.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_medium = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_medium.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )

The sqlnet.ora mentions the wallet used for SQL*Net encryption:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

Note that the directory here is relative to the Oracle Home (‘?’ expands to ORACLE_HOME), and you probably want to change it to the TNS_ADMIN directory where you unzipped the file.
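For example, after unzipping the credentials to the directory used later in this post (an illustration, not the only possible location), the entry would read:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/media/sf_share/ADWC/wallet_ADWC")))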

In the wallet location, you find the ewallet.p12, which contains the certificates and private keys protected by the password, and the cwallet.sso, which does not require the password to be opened, so protect them with file permissions.

You also find keystore.jks, which contains the self-signed certificate in JKS truststore format, and is referenced from the ojdbc.properties file:

oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))

Once you have unzipped the credentials, you can use them to connect with OCI.

SQL*Plus or SQLcl -oci

If you want to connect with an OCI client, you have to unzip this file to your TNS_ADMIN directory. That can be $ORACLE_HOME/network/admin, but be careful not to overwrite existing files, or it can be a new directory you will use by setting the TNS_ADMIN environment variable (or registry entry) to it.

Here are some examples where I set TNS_ADMIN to the directory where I unzipped the credentials:

TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlcl -oci /nolog
TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlplus /nolog

Any application using OCI (the oracle client, which can be the InstantClient or a full database installation) can use this without providing any password.

Database Link

With the unzipped credentials you can access through OCI, which means that you can also have a database link to the ADWC database. The credentials must be unzipped (or merged) in the TNS_ADMIN (or default ?/network/admin) of the instance:

SQL> create database link ADWC connect to ADMIN identified by "Ach1z0#dAch1z0#d" using 'adwc_high';
Database link created.
 
SQL> select banner from v$version@ADWC;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Client Credential password

When downloading the .zip you are asked for a password to ‘protect this file to prevent unauthorized database access‘. But that is misleading. The .zip is not password protected: anyone can open it. And it contains an auto-login wallet, so anybody can use it. You can access the database without this password. Of course, you cannot connect if you don’t have a user/password with create session privileges, but you can reach the database for user credential verification.

So what is this password used for? We have seen that SQL Developer needs the password (or you will get a failure such as java.io.IOException: Keystore was tampered with, or password was incorrect). So you may remove the .sso auto-login wallet from the .zip file when it is used only by SQL Developer. But of course, you have to think about where the password is stored in SQL Developer. Is it more secure than the .sso?

As long as the auto-login wallet is there, you do not need to store the wallet password. But of course, you will have to protect the credential files.

 

The article ADWC – connect from your premises first appeared on Blog dbi services.

Sichuan Kelun Pharmaceutical Automates Oncology Clinical Trial Set Up and Management with Clinical One Randomization and Supplies Management Cloud Service

Oracle Press Releases - Sun, 2018-05-13 14:43
Press Release
Sichuan Kelun Pharmaceutical Automates Oncology Clinical Trial Set Up and Management with Clinical One Randomization and Supplies Management Cloud Service

Sichuan, China—May 13, 2018

Oracle Health Sciences today announced that Sichuan Kelun Pharmaceutical Research Institute Co., Ltd (“Kelun”) has implemented Oracle Health Sciences Clinical One Randomization and Supplies Management Cloud Service to fully automate the set up and management of its multi-site, double blind colorectal cancer study.

The China Food and Drug Administration (CFDA) has been driving reforms to improve the quality of drugs and to encourage innovation in the development of new drugs and medical devices. Kelun Pharmaceutical has successfully conducted generic drug research since 1996, but recently launched its first new oncology drug study. As with any new clinical trial, tremendous time and resources are required to set up a new study, including trial design and validation, which can take weeks to months. ORS enables coordinators to quickly add patients to a trial, collect screening information and ensure eligibility for randomization in record time. In an effort to set up its own trial and eliminate manual, repetitive data entry, Kelun implemented Clinical One Randomization and Supplies Management Cloud Service.

“We needed an established, standards-based cloud system to simplify the drug supply management process, and we have already determined we made a great choice by selecting Oracle’s Clinical One Randomization and Supplies Management. Within 29 days, we were fully implemented and are now in phase III of our oncology trial,” said Xiaoping Zhang, VP of Biostatistics and Data Management, Clinical Research Hub.

With Clinical One Randomization and Supplies Management, Kelun has been able to manage its trial supply and randomization more effectively and efficiently and has enabled its CRO partner and service providers, including its supply shipping specialist vendor, to automate their processes by leveraging Oracle Health Science’s cloud-based system. In addition to simplifying the drug supply management process, ORS has provided Kelun with role-based analytics to track and manage supply status in real time.

“Our eClinical platform, Clinical One and randomization and supplies management capability was designed for pharmaceutical and biopharma companies of all sizes, across all treatment areas and for all phases of research and development. We are excited to see Kelun leveraging technology to speed the setup and management of their trial and are hopeful that this oncology therapy will make it to market,” said Steve Rosenberg, general manager and senior vice president, Oracle Health Sciences.

Contact Info
Valerie Beaudett
Oracle Corporation
+1 650.400.7833
valerie.beaudett@oracle.com
Meghan Roman
Blanc and Otus
+1 415 260 1857
meghan.roman@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Ubuntu 16.04: Installation of chrome-browser fails with libnss3 (>= 2:3.22)

Dietrich Schroff - Sun, 2018-05-13 10:55
On my old laptop with Ubuntu 16.04 I tried to install the Chrome browser:
# dpkg -i google-chrome-stable_current_amd64\ \(1\).deb 
Selecting previously unselected package google-chrome-stable.
(Reading database ... 137351 files and directories currently installed.)
Preparing to unpack google-chrome-stable_current_amd64 (1).deb ...
Unpacking google-chrome-stable (64.0.3282.186-1) ...
dpkg: dependency problems prevent configuration of google-chrome-stable:
 google-chrome-stable depends on libnss3 (>= 2:3.22); however:
  Version of libnss3:amd64 on system is 2:3.21-1ubuntu4.

dpkg: error processing package google-chrome-stable (--install):
 dependency problems - leaving unconfigured
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160415-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Errors were encountered while processing:
 google-chrome-stable

But as you can see, this does not work ;-(
So let's clean up the installation:
# apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages will be REMOVED:
  google-chrome-stable
0 upgraded, 0 newly installed, 1 to remove and 148 not upgraded.
1 not fully installed or removed.
After this operation, 186 MB disk space will be freed.
Do you want to continue? [Y/n]
(Reading database ... 137451 files and directories currently installed.)
Removing google-chrome-stable (64.0.3282.186-1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160415-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Several searches via Google did not help.

But after a while I realized that chromium-browser is integrated into Ubuntu. With this knowledge it was easy:
# apt install chromium-browser
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libappindicator1 libcurl3 libdbusmenu-gtk4 libindicator7
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  chromium-browser-l10n chromium-codecs-ffmpeg-extra
Suggested packages:
  webaccounts-chromium-extension unity-chromium-extension adobe-flashplugin
The following NEW packages will be installed:
  chromium-browser chromium-browser-l10n chromium-codecs-ffmpeg-extra
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 70.8 MB/71.7 MB of archives.
After this operation, 287 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 chromium-browser amd64 49.0.2623.108-0ubuntu1.1233 [67.5 MB]
Get:2 http://de.archive.ubuntu.com/ubuntu xenial/universe amd64 chromium-browser-l10n all 49.0.2623.108-0ubuntu1.1233 [3,355 kB]
Fetched 42.6 MB in 7s (5,541 kB/s)
Selecting previously unselected package chromium-codecs-ffmpeg-extra.
(Reading database ... 137350 files and directories currently installed.)
Preparing to unpack .../chromium-codecs-ffmpeg-extra_49.0.2623.108-0ubuntu1.1233_amd64.deb ...
Unpacking chromium-codecs-ffmpeg-extra (49.0.2623.108-0ubuntu1.1233) ...
Selecting previously unselected package chromium-browser.
Preparing to unpack .../chromium-browser_49.0.2623.108-0ubuntu1.1233_amd64.deb ...
Unpacking chromium-browser (49.0.2623.108-0ubuntu1.1233) ...
Selecting previously unselected package chromium-browser-l10n.
Preparing to unpack .../chromium-browser-l10n_49.0.2623.108-0ubuntu1.1233_all.deb ...
Unpacking chromium-browser-l10n (49.0.2623.108-0ubuntu1.1233) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160415-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Processing triggers for hicolor-icon-theme (0.15-0ubuntu1) ...
Setting up chromium-codecs-ffmpeg-extra (49.0.2623.108-0ubuntu1.1233) ...
Setting up chromium-browser (49.0.2623.108-0ubuntu1.1233) ...
Setting up chromium-browser-l10n (49.0.2623.108-0ubuntu1.1233) ...


XMLSERIALIZE dynamic order by statement

Tom Kyte - Sat, 2018-05-12 02:26
Hi, I have the following problem. I call the function with the order-by parameter, but it is not used. When I use the order-by hardcoded, the values are sorted. Do you have any idea what is wrong in the function or the call? Than...
Categories: DBA Blogs

Oracle Names Charles W. Moorman IV and William G. Parrett to the Board of Directors

Oracle Press Releases - Fri, 2018-05-11 15:30
Press Release
Oracle Names Charles W. Moorman IV and William G. Parrett to the Board of Directors

Redwood Shores, Calif.—May 11, 2018

The Oracle Board of Directors today announced that it has unanimously elected Charles (Wick) Moorman IV and William G. Parrett to the company’s Board of Directors. The election is effective as of May 9, 2018 and increases the size of the Board to 14 directors.

“We are very pleased to have two exceptional leaders join our Board,” said Larry Ellison, Chairman of the Board of Directors and Chief Technology Officer. Bruce Chizen, Chair of the Nomination and Governance Committee, added, “Wick brings significant technology, risk management and regulatory experience to our Board, while Bill brings valuable auditing and financial expertise. Both Wick and Bill are accomplished executives with extensive experience leading large, complex organizations. We are excited to add two additional independent directors to the Board and we look forward to working with both Wick and Bill.”

Mr. Moorman, 66, is a Senior Advisor to Amtrak. He previously served as President and CEO of Amtrak from August 2016 until January 2018. He was previously Chairman from February 2006, and CEO from November 2005, of Norfolk Southern Corporation until 2015. Prior to 2005, he held various positions in operations, information technology and human resources at Norfolk Southern Corporation after joining in 1975. Mr. Moorman serves as a director of Chevron Corporation and Duke Energy Corporation, and previously served as a director of Norfolk Southern Corporation.

Mr. Parrett, 72, served as the Chief Executive Officer of Deloitte Touche Tohmatsu from 2003 until May 2007. Mr. Parrett joined Deloitte in 1967 and served in a series of roles of increasing responsibility until his retirement in 2007. Mr. Parrett serves as a director of The Blackstone Group L.P., Eastman Kodak Company, Conduent Incorporated and Thermo Fisher Scientific Inc. (through May 23, 2018), and previously served as a director of UBS AG and iGATE Corporation. Mr. Parrett is a Certified Public Accountant with an active license.

All members of Oracle’s Board of Directors serve one-year terms and are expected to stand for election at the company’s next annual meeting of stockholders in November 2018.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
1.650.607.0349
ken.bond@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of May 11, 2018. Oracle undertakes no duty to update any statement in light of new information or future events.


Deploying EDB containers in MiniShift/OpenShift

Yann Neuhaus - Fri, 2018-05-11 10:15

In this post we’ll look at how we can deploy EnterpriseDB containers in MiniShift. When you need to setup MiniShift have a look here. In this post we’ll do the setup with the MiniShift console, in a next post we’ll do the same by using the command line tools.

As a few containers will be running at the end, MiniShift was given more resources when it was started:

dwe@dwe:/opt$ minishift delete
dwe@dwe:/opt$ minishift start --cpus 4 --disk-size 30GB --memory 4GB

Once MiniShift is up and running open the MiniShift console and login as developer/admin:

dwe@dwe:/opt$ minishift console


The first thing we need to do, after stepping into “My Project”, is to grant the necessary permissions.

The permissions are under Resources->Membership. Add admin, edit and view to the default account.

For accessing the EnterpriseDB container repository, a new secret needs to be created which contains the connection details. Secrets are under Resources->Secrets.
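If you prefer the command line for this step (the console is used throughout this post), such a secret could be created roughly like this sketch; the secret name and the credential placeholders are assumptions:

oc create secret docker-registry edb-repo-secret \
  --docker-server=containers.enterprisedb.com \
  --docker-username=<your-user> \
  --docker-password=<your-password> \
  --docker-email=<your-email>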

As databases are happy when they can store their data on persistent storage, we need a volume. Volumes can be created under “Storage”.

Now we need a local registry where we can push the EnterpriseDB containers to:

dwe@dwe:~$ minishift ssh
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802
docker@minishift:~$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
Unable to find image 'registry:2' locally
2: Pulling from library/registry
81033e7c1d6a: Pull complete 
...
Status: Downloaded newer image for registry:2
14e85f4e2a36e727a0584803e49bbd690ffdb092c02238a241bd2ad003680625
docker@minishift:~$ docker login containers.enterprisedb.com
Username: dbi-services
Password: 
Login Succeeded
docker@minishift:~$ docker pull containers.enterprisedb.com/test/edb-as:v10.3
v10.3: Pulling from test/edb-as
d9aaf4d82f24: Pulling fs layer 
...
Status: Downloaded newer image for containers.enterprisedb.com/test/edb-as:v10.3
docker@minishift:~$ docker tag containers.enterprisedb.com/test/edb-as:v10.3 localhost:5000/test/edb-as:v10.3
docker@minishift:~$ docker push localhost:5000/test/edb-as:v10.3
The push refers to a repository [localhost:5000/test/edb-as]
274db5c4ff47: Preparing 
...
docker@minishift:~$ docker pull containers.enterprisedb.com/test/edb-pgpool:v3.5
v3.5: Pulling from test/edb-pgpool
...
docker@minishift:~$ docker tag containers.enterprisedb.com/test/edb-pgpool:v3.5 localhost:5000/test/edb-pgpool:v3.5
docker@minishift:~$ docker push localhost:5000/test/edb-pgpool:v3.5
The push refers to a repository [localhost:5000/test/edb-pgpool]
8a7df26eb139: Pushed 
...

This is all that is required for the preparation. The next step is to import the template which specifies the setup. For this little demo we’ll use this one:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-0
   annotations:
    description: "Standard EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/test/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres' 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/test/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

For importing that into OpenShift, go to “Overview” and select “Import YAML/JSON”.

This imports the template but does not process it right away. When you go back to “Overview” you should see the new template, which you can provision.

Selecting the new template brings you to the specification of the variables. The only bits you need to adjust are the values for the volume and the volume claim.

A few moments later the EDB containers are up and running:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-fdr5j   1/1       Running   0          1m
edb-pgpool-1-9twmc   1/1       Running   0          1m
edb-pgpool-1-m5x44   1/1       Running   0          1m

Currently there are two pgpool instances and one database instance container. You can double-check that the instance is really running with:

dwe@dwe:~$ oc rsh edb-as10-0-1-fdr5j
sh-4.2$ psql postgres
psql.bin (10.3.8)
Type "help" for help.

postgres=# select version();
                                                   version                                                   
-------------------------------------------------------------------------------------------------------------
 EnterpriseDB 10.3.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

Going back to the “Overview” page in the console shows the same information.

In the next post we’ll scale up the deployment by adding two replicas and configure access from outside the cluster.

 

The article Deploying EDB containers in MiniShift/OpenShift first appeared on Blog dbi services.

SP2 for SQL Server 2016 is available with new helpful DMVs

Yann Neuhaus - Fri, 2018-05-11 09:23

Last month (April 24, 2018), Service Pack 2 for SQL Server 2016 was released.
This Service Pack brings new DMVs that were already available in SQL Server 2017 RTM.

In this article, I will just write a few words about 2 DMVs (sys.dm_db_log_stats & sys.dm_db_log_info) and a new column (modified_extent_page_count) in the DMV sys.dm_db_file_space_usage that I presented during our last event about SQL Server 2017. I think they are really helpful for DBAs.
It’s also the opportunity to present the demo that I created for our event.

Preparation

First, I create the database smart_backup_2016 and a table Herge_Heros

CREATE DATABASE [smart_backup_2016]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'smart_backup_2016', FILENAME = N'G:\MSSQL\Data\smart_backup_2016.mdf' )
 LOG ON
( NAME = N'smart_backup_2016_log', FILENAME = N'G:\MSSQL\Log\smart_backup_2016_log.ldf' )
GO

USE smart_backup_2016
GO

CREATE TABLE [dbo].[Herge_Heros]
   (
   [ID] [int] NULL,
   [Name] [nchar](10) NULL
   ) ON [PRIMARY]
GO

I do a small insert and run a first full backup and a first transaction log backup:

INSERT INTO [Herge_Heros] VALUES(1,'Tintin') -- Tim
INSERT INTO [Herge_Heros] VALUES(2,'Milou') -- Struppi


BACKUP DATABASE [smart_backup_2016] TO  DISK = N'C:\Temp\smart_backup.bak' WITH NOFORMAT, NOINIT,  NAME = N'smart_backup-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO
BACKUP Log [smart_backup_2016] TO  DISK = N'C:\Temp\smart_backup.log' WITH NOFORMAT, NOINIT,  NAME = N'smart_backup-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

Then, I insert a lot of rows to get more than 50% modified pages:

INSERT INTO [Herge_Heros] VALUES(3,'Quick') --Strups
INSERT INTO [Herge_Heros] VALUES(4,'Flupke')  --Stepppke
GO 100000

Now, the demo is ready!

New column modified_extent_page_count in sys.dm_db_file_space_usage

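The original screenshot showed the column in action; a quick standalone check might look like this sketch (run in the demo database, using the same percentage arithmetic as the procedure below):

USE smart_backup_2016;
SELECT SUM(modified_extent_page_count) AS modified_pages,
       SUM(total_page_count)           AS total_pages,
       100 * SUM(modified_extent_page_count) / SUM(total_page_count) AS modified_pct
FROM sys.dm_db_file_space_usage;
GO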
The column really exists in SQL Server 2016 SP2 (13.0.5026.0).
With it you can, as we do in our DMK maintenance, create a backup strategy that adapts to the amount of change rather than depending only on time.
In the stored procedure below, if more than 50% of the total pages were modified, it takes a full backup; if less, it takes a differential backup.

USE [dbi_tools]
GO

CREATE or ALTER PROCEDURE [maintenance].[dbi_smart_backup] @database_name sysname
as
DECLARE @pages_changes Numeric(10,0)
DECLARE @full_backup_threshold INT
DECLARE @diff_backup_threshold INT
DECLARE @sql_query nvarchar(max)
DECLARE @page_change_text nvarchar(20)
DECLARE @param nvarchar(50)
DECLARE @backupfile nvarchar(2000)
SET @full_backup_threshold=50
SET @diff_backup_threshold=0
SET @param = N'@pages_changesOUT nvarchar(20) OUTPUT'
SET @sql_query =N'SELECT @pages_changesOUT=( 100 * Sum(modified_extent_page_count) / Sum(total_page_count) ) FROM ['+@database_name+'].sys.dm_db_file_space_usage'

EXECUTE sp_executesql @sql_query,@param ,@pages_changesOUT=@page_change_text OUTPUT; 
SET @pages_changes = CAST(@page_change_text AS Numeric(10,0)) 
IF @pages_changes > @full_backup_threshold
  BEGIN
     --Full Backup threshold exceeded, take a full backup
     Print 'Full Backup Threshold exceeded, take a full backup'
     SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.bak'
   BACKUP DATABASE @database_name TO DISK=@backupfile
  END
  ELSE
  BEGIN
	   IF @pages_changes >= @diff_backup_threshold
		BEGIN
			-- Diff Backup threshold exceeded, take a differential backup
			Print 'Diff Backup threshold exceeded, take a differential backup'
			SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.dif'
			BACKUP DATABASE @database_name TO DISK=@backupfile WITH differential
		END
	ELSE
		BEGIN
			-- No threshold exceeded, No backup
		PRINT 'No threshold exceeded, No backup'   
		END
  END
GO

Now, I run the stored procedure [maintenance].[dbi_smart_backup] from the dbi_tools database:

USE smart_backup_2016;
GO
EXEC [dbi_tools].[maintenance].[dbi_smart_backup] @database_name = N'smart_backup_2016'

The dbi backup stored procedure takes a full backup in this case because 64% of the pages were modified.
I check the status of the modified pages again: they are now at 5%.
If I rerun the stored procedure, it takes a differential backup.
My backup strategy is now really adapted to the page changes in the database and no longer based only on time (RTO vs RPO).
Let’s move on to the new DMV sys.dm_db_log_stats to do the same with the transaction log backup.

DMV sys.dm_db_log_stats

This DMV gives really good information about the transaction log file and can help to adapt the backup strategy and also to control the growth of the file.
The DMV is very easy to use; for example, if you want to know how much the log has grown since the last log backup, use the column log_since_last_log_backup_mb:

SELECT log_since_last_log_backup_mb from sys.dm_db_log_stats(DB_ID('smart_backup_2016'))
GO

In our DMK maintenance I created an adapted transaction log backup procedure, [dbi_smart_tlog_backup], sketched below.
If the transaction log has grown by more than 5 MB since the last log backup, it takes a log backup; if not, it takes none.
In my example, the growth is 548 MB, so a log backup is necessary.
Afterwards, I check the size again: the growth since the last log backup is 0.07 MB.
As you can see, no further log backup is taken… My backup strategy is adapted to the load! ;-)
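The original post showed [dbi_smart_tlog_backup] only as a screenshot; here is a minimal sketch of what such a procedure might look like, modeled on [dbi_smart_backup] above (the 5 MB threshold comes from the text; the structure and file naming are assumptions):

USE [dbi_tools]
GO

CREATE OR ALTER PROCEDURE [maintenance].[dbi_smart_tlog_backup] @database_name sysname
AS
DECLARE @log_growth_mb NUMERIC(10,2)
DECLARE @tlog_backup_threshold_mb INT = 5
DECLARE @backupfile NVARCHAR(2000)

-- read the log growth since the last log backup from the new DMV
SELECT @log_growth_mb = log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(@database_name));

IF @log_growth_mb > @tlog_backup_threshold_mb
BEGIN
   -- threshold exceeded, take a log backup
   PRINT 'TLog backup threshold exceeded, take a log backup'
   SET @backupfile = N'C:\Temp\' + @database_name + N'_' + REPLACE(CONVERT(NVARCHAR(50), GETDATE(), 120), ':', '_') + N'.trn'
   BACKUP LOG @database_name TO DISK = @backupfile
END
ELSE
   PRINT 'No threshold exceeded, no log backup'
GO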

DMV sys.dm_db_log_info

This DMV gives us all VLF (Virtual Log File) information without having to use DBCC LOGINFO anymore.
You can use this DMV very easily, like this:

SELECT [name] AS 'Database Name', COUNT(l.database_id) AS 'VLF Count'
FROM sys.databases s
CROSS APPLY sys.dm_db_log_info(s.database_id) l
GROUP BY [name]


These DMVs are very helpful and it is a good thing to have them in SQL Server 2016 now as well.

 

The article SP2 for SQL Server 2016 is available with new helpful DMVs first appeared on Blog dbi services.

Skip Scan 3

Jonathan Lewis - Fri, 2018-05-11 08:26

If you’ve come across any references to the “index skip scan” operation for execution plans you’ve probably got some idea that this can appear when the number of distinct values for the first column (or columns – since you can skip multiple columns) is small. If so, what do you make of this demonstration:


rem
rem     Script:         skip_scan_cunning.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',1000);
end;
/

create table t1
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          id1,
        rownum                          id2,
        lpad(rownum,10,'0')             v1,
        lpad('x',150,'x')               padding
/*
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
*/
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(id1, id2);

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

For repeatability I’ve set some system statistics, but if you’ve left the system stats to default you should see the same effect. All I’ve done is create a table and an index on that table. The way I’ve defined the id1 and id2 columns means they could individually support unique constraints and the index clearly has 1 million distinct values for id1 in the million index entries. So what execution plan do you think I’m likely to get from the following simple query:


set serveroutput off
alter session set statistics_level = all;

prompt  =======
prompt  Default
prompt  =======

select  id 
from    t1
where   id2 = 999
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

You’re probably not expecting an index skip scan to appear, but given the title of this posting you may have a suspicion that it will; so here’s the plan I got running this test on 12.2.0.1:


SQL_ID  8r5xghdx1m3hn, child number 0
-------------------------------------
select id from t1 where id2 = 999

Plan hash value: 400488565

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |  2929 (100)|      1 |00:00:00.17 |    2932 |      5 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |  2929   (1)|      1 |00:00:00.17 |    2932 |      5 |
|*  2 |   INDEX SKIP SCAN                   | T1_I1 |      1 |      1 |  2928   (1)|      1 |00:00:00.17 |    2931 |      4 |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID2"=999)
       filter("ID2"=999)


So, an index skip scan doesn’t require a small number of distinct values for the first column of the index (unless you’re running a version older than 11.2.0.2 where a code change appeared that could be disabled by setting fix_control 9195582 off).

When the optimizer doesn’t do what you expect it’s always worth hinting the code to follow the plan you were expecting – so here’s the effect of hinting a full tablescan (which happened to do direct path reads):

SQL_ID  bxqwhsjwqfm7q, child number 0
-------------------------------------
select  /*+ full(t1) */  id from t1 where id2 = 999

Plan hash value: 3617692013

----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |  3317 (100)|      1 |00:00:00.12 |   25652 |  25635 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |      1 |  3317   (3)|      1 |00:00:00.12 |   25652 |  25635 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID2"=999)

Note that the cost is actually more expensive than the cost of the indexed access path.  For reference you need to know that the blocks statistic for the table was 25,842 while the number of index leaf blocks was 2,922. The latter figure (combined with a couple of other details regarding the clustering_factor and undeclared uniqueness of the index) explains why the cost of the skip scan was only 2,928: the change that appeared in 11.2.0.2 limited the I/O cost of an index skip scan to the total number of leaf blocks in the index.  The tablescan cost (with my system stats) was basically dividing my table block count by 16 (to get the number of multi-block reads) and then doubling (because the multiblock read time is twice the single block read time).
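As a rough check of that tablescan arithmetic, a sketch using the figures quoted above (rounded; the reported cost also includes a small CPU component):

multiblock reads : 25,842 blocks / 16 (MBRC)              ~ 1,616
I/O cost         : 1,616 * (MREADTIM 10ms / SREADTIM 5ms) ~ 3,232
reported cost    : 3,317 (3% CPU) -- in the same ballpark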

As a quick demo of how older versions of Oracle would behave after setting “_fix_control”=’9195582:OFF’:


SQL_ID	bn0p9072w9vfc, child number 1
-------------------------------------
select	/*+ index_ss(t1) */  id from t1 where id2 = 999

Plan hash value: 400488565

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation			    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |	A-Time	 | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		    |	    |	   1 |	      |  1001K(100)|	  1 |00:00:00.13 |    2932 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |	   1 |	    1 |  1001K	(1)|	  1 |00:00:00.13 |    2932 |
|*  2 |   INDEX SKIP SCAN		    | T1_I1 |	   1 |	    1 |  1001K	(1)|	  1 |00:00:00.13 |    2931 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID2"=999)
       filter("ID2"=999)

The cost of the skip scan is now a little over 1,000,000 – corresponding (approximately) to the 1 million index probes that will have to take place. You’ll notice that the number of buffer visits recorded is 2931 for the index operation, though: this is the result of the run-time optimisation that keeps buffers pinned very aggressively for skip scan – you might expect to see a huge number of visits recorded as “buffer is pinned count”, but for some reason that doesn’t happen. The cost is essentially Oracle calculating (with pinned root and branch) the cost of “id1 = {constant} and id2 = 999” and multiplying by ndv(id1).
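Putting rough numbers on that calculation (a sketch; the per-probe figure assumes the pinned root and branch, so each probe costs about one leaf block visit):

cost per probe ("id1 = {constant} and id2 = 999")  ~ 1
number of probes = ndv(id1)                        = 1,000,000
skip scan cost   ~ 1,000,000 I/O (+ CPU)           ~ 1001K reported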

Footnote:

Ideally, of course, the optimizer ought to work out that an index fast full scan followed by a table access ought to have a lower cost (using multi-block reads rather than walking the index in leaf block order one block at a time, which is what this particular skip scan will have to do) – but that’s not (yet) an acceptable execution plan, though it does now appear as a plan for deleting data.

tl;dr

If you have an index that is very much smaller than the table you may find examples where the optimizer does what appears to be an insanely stupid index skip scan when you were expecting a tablescan or, possibly, some other less efficient index to be used. There is a rationale for this, but such a plan may be much more CPU and read intensive than it really ought to be.

 

All Parent - Child tables in the database

Tom Kyte - Fri, 2018-05-11 08:06
Hi Tom, Can you please explain the way to get a list of all parent-child relationships in the database. The list should have the grandparent as the first item and the grandchild as the last item. For Example, Parent ...
Categories: DBA Blogs

performance tuning - sql slows down after gather stats

Tom Kyte - Fri, 2018-05-11 08:06
Hi, I have faced a situation where a SQL ID's plan hash value changed due to a stats gather on one of the tables. Currently I don't understand why this stats gathering caused a change in plan, due to which the execution time is now poor. Can you guide...
Categories: DBA Blogs

insert into local table with select from multiple database links in a loop

Tom Kyte - Fri, 2018-05-11 08:06
Hi Tom, I would like to apply the original SQL statements from Oracle MOS DOC ID 1317265.1 and 1309070.1 for license and healthcheck purposes on all of my database instances. My goal is to create a centralized repository with information about my databases. Un...
Categories: DBA Blogs

Impdp not failing even if target table have missing column

Tom Kyte - Fri, 2018-05-11 08:06
My question is why the import is not failing even though the source and target have different table structures. The Source DB has the table below (with additional column COL3, and populated) SQL> desc tab1 Name Null? Type ---------------------------...
Categories: DBA Blogs

DataGuard Convention

Michael Dinh - Fri, 2018-05-11 06:58

Good conventions and implementations make life and automation so much simpler, leaving more time for golfing.

I have seen some really poor and some really good implementations, and here’s a good one.

I wish I could take credit for it, but unfortunately I cannot.

The scripts were created by whoa.

The scripts can be run from the primary or the standby for any instance, provided a profile to source the database environment exists on the host.
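A sketch of what such a profile (e.g. ~/oracle_staging, sourced by the scripts below) might contain; the values are assumptions matching the standby example that follows, where ORACLE_SID is the local instance (db_name) and ORACLE_UNQNAME the standby db_unique_name:

# ~/oracle_staging (hypothetical)
export ORACLE_HOME=/u01/app/oracle/product/11g/db_1
export ORACLE_SID=test
export ORACLE_UNQNAME=testdr
export PATH=$ORACLE_HOME/bin:$PATH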

Use ORACLE_UNQNAME for DataGuard Environment

====================================================================================================
+++ PRIMARY RACONENODE
====================================================================================================
SQL> show parameter db%name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      test
db_unique_name                       string      test
SQL> 

$ sysresv|tail -1
Oracle Instance alive for sid "test_1"

$ env|grep ORACLE

ORACLE_SID=test_1 (db_name)
ORACLE_UNQNAME=test (db_unique_name)

$ srvctl config database -d $ORACLE_UNQNAME
Database unique name: test
Database name: test
Oracle home: /u01/app/oracle/product/11g/db_1
Oracle user: oracle
Spfile: +FLASH/test/spfiletest.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: test
Database instances:
Disk Groups: FLASH,DATA
Mount point paths:
Services: testsvc
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: test
Candidate servers: host01,host02
Database is administrator managed

====================================================================================================
+++ STANDBY NON-RAC
====================================================================================================
SQL> show parameter db%name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      test
db_unique_name                       string      testdr
SQL> 

$ sysresv|tail -1
Oracle Instance alive for sid "test"

$ env|grep ORACLE
ORACLE_SID=test (db_name)
ORACLE_UNQNAME=testdr (db_unique_name)

$ srvctl config database -d $ORACLE_UNQNAME
Database unique name: testdr
Database name: test
Oracle home: /u01/app/oracle/product/11g/db_1
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Database instance: test
Disk Groups: DATA,FLASH
Services:

====================================================================================================
DATAGUARD BROKER CONFIGURATION
====================================================================================================
DGMGRL> show configuration

Configuration - dg_test (db_name)

  Protection Mode: MaxPerformance
  Databases:
    test   - Primary database (db_unique_name)
    testdr - Physical standby database (db_unique_name)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database test

Database - test

  Role:            PRIMARY
  Intended State:  TRANSPORT-OFF
  Instance(s):
    test_1
    test_2

Database Status:
SUCCESS

DGMGRL> show database testdr

Database - testdr

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-OFF
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       7 seconds (computed 0 seconds ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    test

Database Status:
SUCCESS

DGMGRL> exit

====================================================================================================
ls -l dg*.sh
====================================================================================================
-rwxr-xr-x    1 oracle   dba             377 May 08 21:50 dg_lag.sh
-rwxr-x---    1 oracle   dba             445 May 08 20:12 dg_start.sh
-rwxr-xr-x    1 oracle   dba             337 May 08 20:05 dg_status.sh
-rwxr-x---    1 oracle   dba             447 May 08 20:12 dg_stop.sh

====================================================================================================
dg_lag.sh
====================================================================================================
#!/bin/sh -e
# check_dg: show the redo send queue, receive queue, and overall status via the broker
check_dg()
{
dgmgrl -echo << END
connect /
show database ${ORACLE_SID} SendQEntries
show database ${ORACLE_UNQNAME} RecvQEntries
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit
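
The same pattern repeats in all four scripts: source a profile to set ORACLE_SID and ORACLE_UNQNAME, then drive dgmgrl through a here-document, so one script serves every database pair configured on the host.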

====================================================================================================
dg_start.sh
====================================================================================================
#!/bin/sh -e
# check_dg: turn redo transport on for ${ORACLE_SID} and redo apply on for ${ORACLE_UNQNAME}, then show status
check_dg()
{
dgmgrl -echo << END
connect /
edit database ${ORACLE_SID} set state='TRANSPORT-ON';
edit database ${ORACLE_UNQNAME} set state='APPLY-ON';
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit

====================================================================================================
dg_status.sh
====================================================================================================
#!/bin/sh -e
# check_dg: show the broker configuration and the status of both databases
check_dg()
{
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit

====================================================================================================
dg_stop.sh
====================================================================================================
#!/bin/sh -e
# check_dg: turn redo transport off for ${ORACLE_SID} and redo apply off for ${ORACLE_UNQNAME}, then show status
check_dg()
{
dgmgrl -echo << END
connect /
edit database ${ORACLE_SID} set state='TRANSPORT-OFF';
edit database ${ORACLE_UNQNAME} set state='APPLY-OFF';
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
}
. ~/oracle_staging
check_dg
. ~/oracle_testing
check_dg
exit
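
Since the profiles carry the whole environment, scheduling a periodic check is a one-liner; a minimal sketch using cron, assuming the scripts live in /home/oracle/scripts (a hypothetical path):

# hourly DataGuard lag report (hypothetical path and schedule)
0 * * * * /home/oracle/scripts/dg_lag.sh > /home/oracle/scripts/dg_lag.log 2>&1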


Using Oracle Ksplice for CVE-2018-8897 and CVE-2018-1087

Wim Coekaerts - Thu, 2018-05-10 17:15
Just the other day I was talking about using Ksplice again, and right after that, these two new CVEs hit that are pretty significant. So, another quick # uptrack-upgrade and I don't have to worry about these CVEs any more. Sure beats rebooting all those 'other' Linux OS servers.

[root@vm1-phx opc]# uname -a
Linux vm1-phx 4.1.12-112.16.4.el7uek.x86_64 #2 SMP Mon Mar 12 23:57:12 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-uname -a
Linux vm1-phx 4.1.12-124.14.3.el7uek.x86_64 #2 SMP Mon Apr 30 18:03:45 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@vm1-phx opc]# uptrack-upgrade
The following steps will be taken:
Install [92m63il8] CVE-2018-8897: Denial-of-service in KVM breakpoint handling.
Install [3rt72vtm] CVE-2018-1087: KVM guest breakpoint privilege escalation.
Go ahead [y/N]? y
Installing [92m63il8] CVE-2018-8897: Denial-of-service in KVM breakpoint handling.
Installing [3rt72vtm] CVE-2018-1087: KVM guest breakpoint privilege escalation.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.14.5.el7uek

Unique key across tables

Tom Kyte - Thu, 2018-05-10 13:46
Dear Tom, how can I enforce a unique key across multiple tables? Table1 and Table2 both have an ID primary key column. Is it possible to ensure, while inserting into and updating these tables, that the union of ID values from the two tables stays unique? r...
Categories: DBA Blogs

SEQUENCE

Tom Kyte - Thu, 2018-05-10 13:46
hi tom, during one interview I got a question on sequences, i.e. if there is one sequence whose max value is 40, but after getting nextval 20. Without executing the select query 20,000 times and without altering the sequence, I want to get 20,000 in the nextva...
Categories: DBA Blogs

How much data is there in your database?

Kubilay Çilkara - Thu, 2018-05-10 13:35
Have you ever wondered how much of your database is actually data?

Sometimes you need to ask this simple question about your database to figure out what the real size of your data is.

Databases store loads of auxiliary data, such as indexes, materialized views, and other structures in which the original data is repeated. Databases often repeat data in indexes and materialized views for the sake of better performance for the applications they serve, and this repetition is legitimate. (A quick breakdown of where the space goes is sketched below.)
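
To get a feel for how much of a schema is tables versus such auxiliary structures, here is a minimal sqlplus sketch (run as a DBA user; replace '<YOURSCHEMA>' as in the script further down):

sqlplus -s / as sysdba << EOF
-- allocated space per segment type: TABLE vs INDEX vs LOBSEGMENT etc.
SELECT segment_type,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) gb
FROM   dba_segments
WHERE  owner = '<YOURSCHEMA>'
GROUP  BY segment_type
ORDER  BY gb DESC;
EOF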

But should this repetition be measured and counted as database size?

To make things worse, many databases, as a result of updates and deletes, build up white space in their storage layer over time. This white space is fragmented free space which often cannot be re-used by new data entries. It might even end up being scanned unnecessarily in full table scans, eating up your resources. But most unfortunate of all, it will appear as data in your database size measurements when usually it is not: white space is just void.

There are mechanisms in databases to remedy the white space and re-organise the storage of data. Here is a link which talks about this at length: https://oracle-base.com/articles/misc/reclaiming-unused-space
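
For example, one of the options covered there is segment shrink; a minimal sketch, assuming a heap table in an ASSM tablespace (the schema and table names are hypothetical):

sqlplus -s / as sysdba << EOF
-- row movement must be enabled before a table can be shrunk
ALTER TABLE myschema.mytable ENABLE ROW MOVEMENT;
-- compact the segment and release the freed space; CASCADE also shrinks dependent indexes
ALTER TABLE myschema.mytable SHRINK SPACE CASCADE;
EOF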

One should be diligent when measuring database sizes: much of what is counted is repeated data, and some of it is just blank void due to fragmentation and white space.

So, how do we measure?

Below is a database-size measuring SQL script which can be used with Oracle to show the data (excluding the indexes) in tables and partitions. It also tries to estimate the real storage (in the actual_gb column), excluding the white space, by multiplying the number of rows in each table by the average row size; both figures come from optimizer statistics, so gather stats first for accurate numbers. Replace the '<YOURSCHEMA>' bit with the schema you wish to measure.

SELECT
    SUM(actual_gb)  AS total_actual_gb,
    SUM(segment_gb) AS total_segment_gb
FROM
    (
        SELECT
            s.owner,
            t.table_name,
            s.segment_name,
            s.segment_type,
            t.num_rows,
            t.avg_row_len,
            -- estimated "real" data size: rows x average row length (from optimizer stats)
            t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 actual_gb,
            -- allocated segment size, including any white space
            SUM(s.bytes) / 1024 / 1024 / 1024 segment_gb
        FROM
            dba_segments s,
            dba_tables t
        WHERE
            s.owner = '<YOURSCHEMA>'
            AND   t.owner = s.owner   -- join on owner too, in case another schema has a same-named table
            AND   t.table_name = s.segment_name
            AND   s.segment_type IN (
                 'TABLE'
                ,'TABLE PARTITION'
                ,'TABLE SUBPARTITION'
            )
        GROUP BY
            s.owner,
            t.table_name,
            s.segment_name,
            s.segment_type,
            t.num_rows,
            t.avg_row_len
    );
Categories: DBA Blogs

SQL Developer Web on the Oracle Cloud

Yann Neuhaus - Thu, 2018-05-10 12:21

You like SQL Developer because it is easy to install (just unzip a jar) and has a lot of features? Me too. It can be even easier when provided as a web application: no installation, and no Java to take all my laptop RAM…
When I say no installation, you will see that there are some little things to set up here in DBaaS. That will probably be done for you in the managed services (PDBaaS) such as the ‘Express’ and ‘Autonomous’ ones.

Be careful: Oracle is a top-down deployment company. It seems that new products are announced first and then people have to work hard to make them available, which means that if, like me, you want to test them immediately, you may encounter some disappointment.
The announcement was there. The documentation was there, mentioning that the Cloud Tooling must be upgraded to 18.2.3. But 18.2.3 arrived only a few days later. You can check from the place where the DBaaS looks for its software, https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/dbaas_patch, if you are not sure.

So, before you can see SQL Developer in the colorful DBaaS landing page (where you can also access APEX, for example), there's a bit of command-line work to do as root.

Install the latest Cloud Tooling

SQL Developer Web needs to be installed with the latest version of ORDS, which is installed with the latest version of Cloud Tooling aka dbaastools.rpm

You need to connect as root, so ssh as opc and then sudo:

ssh opc@144.21.89.223
sudo su

Check if there is a new version to install:

dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}'

If something is returned (such as 18.2.3.1.0_180505.1604), you install it:

dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')

Actually I got an error, and I had to ^C:

[root@DB18c opc]# dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')
DBAAS CLI version 1.0.0
Executing command dbpatchm --run -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
/var/opt/oracle/patch/dbpatchm -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
Use of uninitialized value in concatenation (.) or string at /var/opt/oracle/patch/dbpatchm line 4773.
^C

But it was installed nonetheless, because the 'list_tools' check above now returns nothing.
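
To double-check, you can also query the RPM database directly (the dbaastools package name appears in the assistant output further down):

rpm -qa | grep dbaastools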

Enable SQL Developer Web

SQL Developer Web (SDW) runs in ORDS (Oracle REST Data Services) and must be enabled with the ORDS assistant, using the enable_schema_for_sdw action.
Here I'll enable it at CDB level. I provide a password for the SDW schema, which I create in a file:

cat > password.txt <<<'Ach1z0#d'

You may secure that better than I do, as I'm leaving the password in a plain-text file here. But this is only a test.
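
At a minimum, you can restrict the file to its owner:

chmod 600 password.txt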

Then, still as root, I call the ORDS assistant to install SDW in C##SQLDEVWEB (as I’m installing it in CDB$ROOT I need a common user name).


/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema="C##SQLDEVWEB" -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true

Here is the output. The last lines are important:

WARNING: Couldn't obtain the "dbname" value from the assistant parameters nor the "$OCDE_DBNAME" environment variable
Starting ORDS
Logfile is /var/opt/oracle/log/ords/ords_2018-05-10_10:44:12.log
Config file is /var/opt/oracle/ocde/assistants/ords/ords.cfg
INFO: Starting environment summary checks...
INFO: Database version : 18000
INFO: Database CDB : yes
INFO: Original DBaaS Tools RPM installed : dbaastools-1.0-1+18.1.4.0.0_180123.1336.x86_64
INFO: Actual DBaaS Tools RPM installed : dbaastools-1.0-1+18.2.3.1.0_180505.1604.x86_64
INFO: DBTools JDK RPM installed : dbtools_jdk-1.8.0-2.74.el6.x86_64
INFO: DBTools JDK RPM "/var/opt/oracle/rpms/dbtools/dbtools_jdk-1.8.0-2.74.el6.x86_64.rpm" MD5 : 48f13bb401677bfc7cf0748eb1a6990d
INFO: DBTools ORDS Standalone RPM installed : dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64
INFO: DBTools ORDS Standalone RPM "/var/opt/oracle/rpms/dbtools/dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64.rpm" MD5 : 480355ac3ce0f357d5741c2c2f688901
INFO: DBTools DBaaS Landing Page RPM installed : dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64
INFO: DBTools DBaaS Landing Page RPM "/var/opt/oracle/rpms/dbtools/dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64.rpm" MD5 : af79e128a56b38de1c3406cfcec966db
INFO: Environment summary completed...
INFO: Action mode is "full"
INFO: Database Role is "PRIMARY"
INFO: Enabling "C##SQLDEVWEB" schema in "CDB$ROOT" container for SQL Developer Web...
 
SQL*Plus: Release 18.0.0.0.0 Production on Thu May 10 10:44:27 2018
Version 18.1.0.0.0
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
 
SQL> SQL> SQL> SQL> SQL> SQL> SQL> SQL Developer Web user enable starting...
Enabling "C##SQLDEVWEB" user for SQL Developer Web...
 
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Creating "C##SQLDEVWEB" user
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Call completed.
Commit complete.
PL/SQL procedure successfully completed.
Session altered.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
 
"C##SQLDEVWEB" user enabled successfully. The schema to access SQL Developer Web
is "c_sqldevweb"...
 
PL/SQL procedure successfully completed.
 
SQL Developer Web user enable finished...
Disconnected from Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
INFO: To access SQL Developer Web through DBaaS Landing Page, the schema "c_sqldevweb" needs to be provided...
INFO: "C##SQLDEVWEB" schema in the "CDB$ROOT" container for SQL Developer Web was enabled successfully...
 

The information to remember here is that I will have to provide c_sqldevweb as the schema: it is the schema name I provided, lowercased, with each run of 'special' characters replaced by an underscore. The URL alias is lowercase, but the user name itself, as we will see on the login page, has to be provided in uppercase.

Basically, what has been done is quite simple: create the C##SQLDEVWEB user and call ORDS.ENABLE_SCHEMA to enable it and map it to the URL.
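
A minimal sketch of that call, assuming the mapping shown in the log above; the exact parameter values and privileges the assistant uses are an assumption on my part:

sqlplus / as sysdba << EOF
BEGIN
  -- map the C##SQLDEVWEB schema to the /ords/c_sqldevweb/... URL path
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'C##SQLDEVWEB',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'c_sqldevweb',
    p_auto_rest_auth      => TRUE);
  COMMIT;
END;
/
EOF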

DBCS Landing Page 2.0.0

Now I’m ready to see SQL Developer on the DBCS Landing Page. You access this page by:

  1. Enabling HTTPS access from the internet (in Access Rules, enable ora_p2_httpssl)
  2. Going to the default web page of your service, in my case https://144.21.89.223

You may have to accept some self-signed certificates.

And here it is, with SQL Developer Web in the middle:
[Screenshot: DBCS landing page with SQL Developer Web]

The screenshot above shows PDB1/pdbadmin for the schema, but I installed SDW at CDB level, and the log above tells me that the schema alias is c_sqldevweb, so I change the schema to c_sqldevweb on the login page. The direct URL in my example is https://144.21.89.223/ords/c_sqldevweb/_sdw.

I enter C##SQLDEVWEB (uppercase here) as the user and Ach1z0#d as the password.

And here is the Dashboard:
[Screenshot: SQL Developer Web dashboard]

Do not worry about the 97% storage used, which tells me that SYSTEM is nearly full: my datafiles are autoextensible.

Just go to the SQL Worksheet and check your files:

select tablespace_name,bytes/1024/1024 "MBytes", maxbytes/1024/1024/1024 "MaxGB", autoextensible from dba_data_files

Enable SDW for local PDB user

To enable a local PDB user, I run the ORDS assistant with a local user name (PDBADMIN here) and an additional parameter giving the PDB name (PDB1 here).


cat > password.txt <<<'Ach1z0#d'
/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema=PDBADMIN -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true -ords_sdw_schema_container=PDB1

Now I can connect to it with PDB1/pdbadmin as the schema name.

Error handling

[Screenshot: a failed REST call shown in the message bar]
If, like me, you are not used to ORDS applications, you may waste some minutes staring at a splash screen, waiting for a result. Always look at the message bar: all actions are REST calls, and the message bar shows whether a call is running, has completed successfully, or has failed. The example here shows 'call failed'; you can click on it to see the REST call and the error.

This article, SQL Developer Web on the Oracle Cloud, appeared first on Blog dbi services.

Creating a Custom Component in Oracle JET - Gökhan

Introduction Oracle JET (JavaScript Extension Toolkit) is a collection of open source libraries for JavaScript developers to develop client-side applications. It comes with lots of responsive...

Categories: DBA Blogs
