Feed aggregator

MATERIALISED VIEW ISSUE

Tom Kyte - Fri, 2018-09-14 07:46
Error starting at line : 12 in command - CREATE MATERIALIZED VIEW EMP_MV BUILD IMMEDIATE REFRESH FORCE ON COMMIT AS SELECT EMPID,EMPNAME FROM EMP Error report - ORA-12054: cannot set the ON COMMIT refresh attribute for the materializ...
Categories: DBA Blogs

Recommended partition size

Tom Kyte - Fri, 2018-09-14 07:46
We want to partition some tables using interval partitioning on the creation date. Partitioning is for manageability - we want to drop older partitions eventually - and partition pruning for improving performance. What is the Recommended partition si...
Categories: DBA Blogs

ODPI-C 3.0 Introduces SODA Document Storage

Christopher Jones - Fri, 2018-09-14 06:59
ODPI-C logo

Release 3.0 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.


This release introduces support for Simple Oracle Document Access (SODA) when using Oracle Database 18c. SODA provides a non-SQL API for storing and accessing documents. Documents are commonly JSON, but other types can also be used. In this release the SODA API is a Preview. More details about SODA are in the companion release announcement for Python cx_Oracle 7.0.

Also introduced is a call timeout feature for Oracle Client 18c users. This gives applications direct control over how long database operations are allowed to run, making it easier to bound outcomes and keep user interaction responsive. Again, details are in the cx_Oracle announcement.

If you're creating Windows applications for distribution, a change to how Oracle client libraries are located will be helpful. ODPI-C will try to load the Oracle client from the same directory as the ODPI-C binary, before defaulting to the standard search, i.e. using PATH. This means you can bundle a specific version of Instant Client with your application and know that it will be used in preference to any other Oracle libraries on the system.

There is a raft of other tweaks and improvements, which can be found in the release notes.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: https://oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Exadata and Capacity on Demand (CoD)

Syed Jaffar - Fri, 2018-09-14 06:10
As most of us know, the Exadata Database Machine comes in different sizes with different resource capacities. What you may not be aware of is that the Capacity on Demand (CoD) option enables customers to start with a limited number of active processor cores and increase them dynamically on demand. If the CoD option is not enabled during the initial EDM configuration, all cores are active by default and cannot be reduced later.

With X4-2 or higher, the number of active cores can be reduced during installation and increased later based on demand. For X4-2, cores are increased in increments of two (2), whereas for X4-8 they are increased in increments of eight (8); see the table below.

The example below demonstrates the procedure for increasing the active cores:

Using DBMCLI utility:

DBMCLI> LIST DBSERVER attributes coreCount

DBMCLI> ALTER DBSERVER pendingCoreCount = new_core_count

DBMCLI> LIST DBSERVER attributes coreCount

Note: Once active cores are enabled (increased), there is no procedure to reduce them again.

Restart the database servers after increasing the core count.

The table below depicts the capacity-on-demand core configuration for various EDM types and releases:



Living the Dream

Tim Hall - Fri, 2018-09-14 02:13

I was watching a rerun of X Factor at my brother’s house at the weekend. Most of the time I was wincing at all the bum notes sung by the people the judges were saying were fantastic, or wincing at everyone who was doing it for their { mom | dad | grandma | grandad | dead parrot } in an attempt to get me emotionally invested. Apart from all that cringe, the other thing I noticed was people saying things like,

“This is my dream!”

This really gets on my nerves because invariably they’ve done nothing to make their dream become a reality, other than turn up to audition on the day. They haven’t put in the hours practising their craft. They’ve not gone out looking for constructive criticism, then using that to improve. They’ve not tried to get some training to perfect their skills. They’ve just turned up thinking that singing a bunch of off key runs will make everyone think they are Mariah.

Getting good at anything takes time and effort. If you enjoy it, you might not notice how much effort you’ve put in, but that doesn’t negate the effort you’ve put in. We always hear people speaking about natural talent, but invariably you see those “winners” put in the effort, as well as having natural gifts.

I remember hearing someone saying you should praise effort, not results. From my experience, life is a grind and the people who succeed are the people that are prepared to work hard. Natural talent doesn’t go that far in life.

Next time you hear someone talk about “their dream”, ask them what they’ve done to make it a reality. If they’ve done nothing, I suggest you tell them they are full of shit and need to get off their lazy ass and make it a reality!

Cheers

Tim…

Living the Dream was first posted on September 14, 2018 at 8:13 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Python cx_Oracle 7 Introduces SODA Document Storage

Christopher Jones - Thu, 2018-09-13 22:48

cx_Oracle logo

cx_Oracle 7.0, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that conforms to the Python Database API specification, with many additions to support advanced Oracle features.


Anthony Tuininga has just released cx_Oracle 7.0. This release brings some key technologies and new features to the Python developer:

  • Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See below.

  • Added Connection.callTimeout to support call timeouts when cx_Oracle is using Oracle Client 18.1 and higher. This is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution. A short usage sketch follows this list.

    The main code layer beneath cx_Oracle's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by cx_Oracle, zero or more 'round-trips' to the database can occur - calling the database and getting a response back.

    The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in cx_Oracle before or after the completion of each round-trip is not counted.

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and error "DPI-1067: call timeout of N ms exceeded with ORA-XXX" is returned.

    • In the case where a cx_Oracle operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout.

    • If no round-trip is required, the operation will never be interrupted.

    After a timeout occurs, cx_Oracle attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds.

    If the cleanup was successful, the DPI-1067 error will be returned and the application can continue to use the connection.

    For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed.

  • Added support for closing a session pool via the function SessionPool.close(). This is useful for being 'nice' to the database and making sure that database sessions are not left dangling until the database cleans them up. In particular the optional 'force' argument is handy when you need to suddenly halt a Python application and immediately free all the sessions in the database.

  • Added support for getting the contents of a SQL collection object as a dictionary, where the keys are the indices of the collection and the values are the elements of the collection. See function Object.asdict().

  • On Windows, cx_Oracle will now attempt to load the Oracle client libraries from the same directory as the cx_Oracle module before doing the standard Windows library location search, e.g. in the directories in the PATH environment variable. This new feature could be useful if you are bundling up applications and want to include the Oracle Instant Client. By putting the client in the same directory as the cx_Oracle library there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle client libraries in PATH.

  • A change in cx_Oracle 7 is that when a DML RETURNING statement is executed, variables bound to it will return an array when calling Variable.getvalue(). Attempts to set cx_Oracle.__future__.dml_ret_array_val are now ignored.

  • When a connection is used as a context manager, the connection is now closed when the block ends. Attempts to set cx_Oracle.__future__.ctx_mgr_close are now ignored.
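
The following is a minimal sketch, not taken from the release announcement, showing the call timeout, SessionPool.close() and Object.asdict() features in action (see the callTimeout item above). The user, password and DSN are placeholders, callTimeout requires Oracle Client 18.1 or later, and the built-in SYS.ODCINUMBERLIST collection type is assumed to be available:

import cx_Oracle

# Pool and connection details are placeholders for illustration only
pool = cx_Oracle.SessionPool("hr", "hr_password", "dbhost/orclpdb1",
                             min=1, max=4, increment=1)

conn = pool.acquire()
conn.callTimeout = 3000          # milliseconds, applied to each round-trip

cur = conn.cursor()
try:
    cur.execute("select count(*) from all_objects")
    print("Row count:", cur.fetchone()[0])
except cx_Oracle.DatabaseError as e:
    # DPI-1067 is raised if any single round-trip exceeds the timeout
    print("Call failed or timed out:", e)

# Object.asdict(): view a SQL collection as {index: element}
list_type = conn.gettype("SYS.ODCINUMBERLIST")   # assumed built-in VARRAY type
nums = list_type.newobject()
for n in (10, 20, 30):
    nums.append(n)
print(nums.asdict())             # e.g. {1: 10, 2: 20, 3: 30}

pool.release(conn)
pool.close(force=True)           # terminate pooled sessions immediately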

The full release notes show the other new features and changes. Review this list before you upgrade:

python -m pip install cx_Oracle --upgrade

SODA in Python cx_Oracle

Oracle Simple Document Access (SODA) support was originally introduced in Java and recently exposed to C. Python support for SODA is now available in cx_Oracle 7 when using Oracle client 18.3 libraries and connecting to Oracle Database 18.1 or higher. SODA is all hot and new and under rapid development. For this cx_Oracle release we're labelling SODA support as a 'preview'. With a future version of the Oracle Client libraries this will change.

SODA is typically used to store JSON documents in Oracle Database, but has flexibility to let you store other types of content.

Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents in them. Some basic examples are:

# Create the parent object for SODA
soda = connection.getSodaDatabase()

# Create a new SODA collection
# This will open an existing collection, if the name is already in use.
collection = soda.createCollection("mycollection")

# Insert a document
# A system generated key is created by default.
content = {'name': 'Matilda', 'address': {'city': 'Melbourne'}}
doc = collection.insertOneAndGet(content)
key = doc.key
print('The key of the new SODA document is: ', key)

You can then get documents back via a key look up, or by a search. A key lookup is straightforward:

# Fetch the document back
doc = collection.find().key(key).getOne()  # A SodaDocument
content = doc.getContent()                 # A Python dictionary
print('Retrieved SODA document dictionary is:')
print(content)

For documents that can be converted to JSON you can alternatively get them as string:

content = doc.getContentAsString()  # A JSON string
print('Retrieved SODA document string is:')
print(content)

The find() method is an operation builder, with methods that allow progressive filtering criteria to be set, limiting the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() and count().

With JSON documents, a complete filtering specification language can be used to pattern match documents. A brief example is:

# Find all documents with names like 'Ma%'
print("Names matching 'Ma%'")
documents = collection.find().filter({'name': {'$like': 'Ma%'}}).getDocuments()
for d in documents:
    content = d.getContent()
    print(content["name"])
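
When only the number of matches is needed, the same chained filter can end with count() instead of getDocuments() - a small sketch reusing the collection from the examples above, not from the original post:

# Count documents matching a QBE filter without fetching them
matches = collection.find().filter({'name': {'$like': 'Ma%'}}).count()
print('Documents with names like Ma%:', matches)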

A runnable example is in SodaBasic.py

Check out the cx_Oracle SODA manual and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity.

Check it out!

PS the photo is one I took last weekend on a beach in Australia, which is coming into Spring. I thought you'd like it better than corporate clip art.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Identify the missing object in an ora-08103 error

Tom Kyte - Thu, 2018-09-13 13:26
We have a large cursor that runs nightly and on a random night we get the following error from the job that calls the cursor: 180911 180019 (APPS.GAINS_COMMON,1194) Begin export_data 180911 194348 (APPS.GAINS_COMMON,4735) BEGIN export_sla 1809...
Categories: DBA Blogs

Oracle locking a table while deleting set of rows

Tom Kyte - Thu, 2018-09-13 13:26
I have 2 scripts running at the same time deleting different set of rows from same table. Would this cause any locking issues or contention? Please advise. Thank you very much.
Categories: DBA Blogs

What SQL is currently running in the database

Tom Kyte - Thu, 2018-09-13 13:26
I am using an application that submits SQL queries to the Oracle database. These queries can have hundreds of bind variables in them. I want to be able to see the SQL that is running with the bind variables substituted. I have used this query to g...
Categories: DBA Blogs

Exchange Partition - Error with VIRTUAL & CLOB columns

Tom Kyte - Thu, 2018-09-13 13:26
Hi Folks, Need your expert advice on this. Could you please guide me through the below issue - Issue : Exchange partition doesn't seem to work when both Virtual column and CLOB column exists together in a table :( . However it works fine on it ...
Categories: DBA Blogs

Masking Data With PostgreSQL

Yann Neuhaus - Thu, 2018-09-13 10:01

I was searching for a tool to anonymize data in a PostgreSQL database and tested the extension pg_anonymizer.
PostgreSQL_anonymizer is a set of SQL functions that remove personally identifiable values from a PostgreSQL table and replace them with random-but-plausible values. The goal is to avoid any identification from the data records while remaining suitable for testing, data analysis and data processing.
In this blog I show how this extension can be used, using a PostgreSQL 10 database.
The first step is to install the pg_anonymizer extension; in my case I did it with the pgxn client:

[postgres@pgserver2 ~]$ pgxn install postgresql_anonymizer --pg_config /u01/app/postgres/product/10/db_1/bin/pg_config
INFO: best version: postgresql_anonymizer 0.0.3
INFO: saving /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: unpacking: /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: building extension
gmake: Nothing to be done for `all'.
INFO: installing extension
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension/anon'
/usr/bin/install -c -m 644 .//anon.control '/u01/app/postgres/product/10/db_1/share/extension/'
/usr/bin/install -c -m 644 .//anon/anon--0.0.3.sql  '/u01/app/postgres/product/10/db_1/share/extension/anon/'
[postgres@pgserver2 ~]$

We can then verify that under /u01/app/postgres/product/10/db_1/share/extension we have a file anon.control and a directory named anon

[postgres@pgserver2 extension]$ ls -ltra anon*
-rw-r--r--. 1 postgres postgres 167 Sep 13 10:54 anon.control

anon:
total 18552
drwxrwxr-x. 3 postgres postgres    12288 Sep 13 10:54 ..
drwxrwxr-x. 2 postgres postgres       28 Sep 13 10:54 .
-rw-r--r--. 1 postgres postgres 18980156 Sep 13 10:54 anon--0.0.3.sql

Let’s create a database named prod and create the required extensions in it; tsm_system_rows should be delivered by the contrib modules.

prod=# \c prod
You are now connected to database "prod" as user "postgres".
prod=#
prod=# CREATE EXTENSION tsm_system_rows;;
CREATE EXTENSION
prod=#

prod=# CREATE EXTENSION anon;
CREATE EXTENSION
prod=#


prod=# \dx
                                 List of installed extensions
      Name       | Version |   Schema   |                         Description
-----------------+---------+------------+-------------------------------------------------------------
 anon            | 0.0.3   | anon       | Data anonymization tools
 plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
 tsm_system_rows | 1.0     | public     | TABLESAMPLE method which accepts number of rows as a limit
(3 rows)

prod=#

The extension creates the following functions in the schema anon. These functions can be used to mask data.

prod=# set search_path=anon;
SET
prod=# \df
                                                   List of functions
 Schema |           Name           |     Result data type     |                        Argument data types                              |  Type
--------+--------------------------+--------------------------+--------------------------------------------------------------------------+--------
 anon   | random_city              | text                     |                                                                          | normal
 anon   | random_city_in_country   | text                     | country_name text                                                        | normal
 anon   | random_company           | text                     |                                                                          | normal
 anon   | random_country           | text                     |                                                                          | normal
 anon   | random_date              | timestamp with time zone |                                                                          | normal
 anon   | random_date_between      | timestamp with time zone | date_start timestamp with time zone, date_end timestamp with time zone  | normal
 anon   | random_email             | text                     |                                                                          | normal
 anon   | random_first_name        | text                     |                                                                          | normal
 anon   | random_iban              | text                     |                                                                          | normal
 anon   | random_int_between       | integer                  | int_start integer, int_stop integer                                      | normal
 anon   | random_last_name         | text                     |                                                                          | normal
 anon   | random_phone             | text                     | phone_prefix text DEFAULT '0'::text                                      | normal
 anon   | random_region            | text                     |                                                                          | normal
 anon   | random_region_in_country | text                     | country_name text                                                        | normal
 anon   | random_siren             | text                     |                                                                          | normal
 anon   | random_siret             | text                     |                                                                          | normal
 anon   | random_string            | text                     | l integer                                                                | normal
 anon   | random_zip               | text                     |                                                                          | normal
(18 rows)

prod=#

Now in the database prod let’s create a table with some data.

prod=# \d customers
                      Table "public.customers"
   Column   |         Type          | Collation | Nullable | Default
------------+-----------------------+-----------+----------+---------
 first_name | character varying(30) |           |          |
 last_name  | character varying(30) |           |          |
 email_add  | character varying(30) |           |          |
 country    | character varying(60) |           |          |
 iban       | character varying(60) |           |          |
 amount     | integer               |           |          |

prod=#

prod=# table customers;
 first_name | last_name |        email_add        |   country    |            iban            |   amount
------------+-----------+-------------------------+--------------+----------------------------+------------
 Michel     | Delco     | michel.delco@yaaa.fr    | FRANCE       | FR763000600001123456890189 |    5000000
 Denise     | Blanchot  | denise.blanchot@yaaa.de | GERMANY      | DE91100000000123456789     | 1000000000
 Farid      | Dim       | farid.dim@yaaa.sa       | Saudi Arabia | SA4420000001234567891234   |    2500000
(3 rows)

prod=#

Let’s say I want some people to have access to all the data in this table, but I don’t want them to see the customers’ real email, country and IBAN.
One solution is to create a view that anonymizes these columns, replacing their contents with random-but-plausible values:

prod=# create view Customers_anon as select first_name as Firstname ,last_name  as Lastnmame,anon.random_email() as Email ,anon.random_country() as Country, anon.random_iban() as Iban ,amount as Amount from customers;
CREATE VIEW

And then grant access on the view to the people concerned. Querying the view returns the masked data:

prod=# select * from customers_anon ;
 firstname | lastnmame |             email             | country |            iban            |   amount
-----------+-----------+-------------------------------+---------+----------------------------+------------
 Michel    | Delco     | wlothean0@springer.com        | Spain   |  AD1111112222C3C3C3C3C3C3  |    5000000
 Denise    | Blanchot  | emoraloy@dropbox.com          | Myanmar |  AD1111112222C3C3C3C3C3C3  | 1000000000
 Farid     | Dim       | vbritlandkt@deliciousdays.com | India   |  AD1111112222C3C3C3C3C3C3  |    2500000
(3 rows)

prod=#
 

This article Masking Data With PostgreSQL appeared first on Blog dbi services.

Cloud Access Security Broker (CASB) for Oracle Cloud Infrastructure (OCI)

Syed Jaffar - Thu, 2018-09-13 08:33
Customer adoption of cloud services (IaaS, PaaS, SaaS) has grown rapidly and continues to grow. The most challenging aspect of moving to the cloud is securing the applications and data placed there. Oracle's heterogeneous security solution, Cloud Access Security Broker (CASB), helps customers protect their cloud-based infrastructure, platforms and applications across vendors. CASBs have emerged as the go-to cloud security solution, with the ability to secure the entire cloud footprint (SaaS, PaaS, IaaS).

Most essentially, for all Oracle Cloud Infrastructure (OCI) deployments it provides visibility, threat protection, data security and compliance. The following are a few key advantages of CASB:


  • Governance of privileged activities
  • Unified incident management
  • Proactive remediation
  • Continuous security compliance for OCI deployments
As part of complete visibility, it provides a holistic view of the entire cloud environment, including users and devices.

Threat detection with User Behavior Analytics (UBA) builds a baseline for typical behavior, down to the individual user and application, and logs when and how a user deviates from that baseline. With the help of predictive analytics, you can easily identify risky users who change folder permissions, modify user privileges, or tamper with configuration settings.

All your cloud compliance configuration settings can be easily maintained: once the settings are made, CASB monitors them and alerts you whenever a setting changes.
CASB provides three key components to secure your data in the cloud:
  1. Data visibility
  2. Data inspection
  3. Data Accessibility
It integrates easily with existing cloud security solutions such as SWG, NGF, IDaaS, DLP and SIEM.

For more details and information, visit the Oracle website:

https://www.oracle.com/cloud/paas/casb-cloud-service.html


Giving grant role to invoker from stored procedure

Tom Kyte - Wed, 2018-09-12 19:06
Hello there, As schema owner, I give grant execute on package to another user, who is considered as executor of the package. The package contains security part (procedure with invoker rights (authid current_user)), which checks whether the caller ...
Categories: DBA Blogs

How to strip off characters upto a specific character in a BLOB type column

Tom Kyte - Wed, 2018-09-12 19:06
I have a column of type BLOB which is being used to store images. A typical value that is currently getting stored in this table is: data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBY...
Categories: DBA Blogs

Parsing the CLOB data seperated by delimiters

Tom Kyte - Wed, 2018-09-12 19:06
CREATE TABLE T ( DT_FIELD DATE, SKEY NUMBER, FIELD_VALUES CLOB ) INSERT INTO T VALUES('06-SEP-18',10,'68|} 88026 |}ABC|}101010|}ADSD|}'); INSERT INTO T VALUES('07-SEP-18',11,'70|} 88027 |}DEF|}'); INSERT INTO T VALUES('07-SEP-18',12,'...
Categories: DBA Blogs

As per Oracle recommendation we are using Automatic Segment Space Management . We see contentions when we route traffic for DML on multiple RAC nodes

Tom Kyte - Wed, 2018-09-12 19:06
As per Oracle recommendation we are using Automatic Segment Space Management . We see contentions when we route traffic for DML on multiple RAC nodes. The only way I could think we can solve this Manual segment space management for LARGE table wit...
Categories: DBA Blogs

Oracle Utilities Testing Accelerator Whitepaper Updates

Anthony Shorten - Wed, 2018-09-12 18:07

The Oracle Utilities Testing Accelerator Whitepaper has been updated with the latest information about the Testing capability optimized for Oracle Utilities.

The documentation is available at Oracle Utilities Testing Accelerator for Oracle Utilities (Doc Id: 2014163.1) from My Oracle Support.

The article includes the following updated documents:

  • Oracle Utilities Testing Accelerator Overview - Overview of the testing solution and how it optimizes the test experience.
  • Oracle Utilities Testing Accelerator Frequently Asked Questions - Set of common questions and answers about the Oracle Utilities Testing Accelerator including migration from the previous Oracle Application Testing Suite based solution.
  • Oracle Utilities Testing Accelerator Data Sheet (New) -  A brochure about the Oracle Utilities Testing Accelerator.

Oracle Utilities Testing Accelerator training is now available on demand via Oracle University.

Implementing Handle Patch Method in JET Offline Toolkit

Andrejus Baranovski - Wed, 2018-09-12 13:41
When executing PATCH requests offline, JET Offline Persistence Toolkit will record the request and synch it to the backend once online. But it will not update the data stored in the cache; this is by design. Since cached data is not updated, search queries against the offline cache would not return results based on the latest changes. To solve this we need to implement the cache update ourselves by providing a handle patch method.

Handle patch is configured through the requestHandlerOverride property while registering the persistence manager:


Sample implementation of handle patch: this method is invoked only when a PATCH is executed while offline. We must read information from the request and pass it to the cache store - search for the entry in the cache by key, update the record and write the updated information back to the store:


Let's do an offline test - switch the browser tab to be offline (you can do it in Chrome browser developer tools). Do a search and check the log from JET Offline Persistence Toolkit - it executes the search automatically against the cache store:


Update the same record while offline - the PATCH request will be recorded for later synchronisation. Our handle patch method will be invoked to write the changes to the cache store:


You will notice in the log the actions executed by the handle patch method. It finds the record by key in the cache and updates it:


Search by the updated value - it is found and returned from the cache store:


The code is available in the GitHub repository.

Redhat Forum 2018 – everything is OpenShift

Yann Neuhaus - Wed, 2018-09-12 11:31

I had an exciting and informative day at the Redhat Forum in Zurich.

After a short welcome in the packed movie theater at Zurich Sihlcity, we had the great pleasure of listening to Jim Whitehurst. With humor, he talked about the success of Redhat over the last 25 years.


The partner and success stories of Vorwerk / Microsoft / Accenture / Avaloq / Swisscom and SAP impressively showed the potential and the usage of OpenShift.

After the lunch break, which was great for networking and talking to some of the partners, the breakout sessions started.
The range of sessions showed the importance of OpenShift for agile businesses.

Here is a short summary of three sessions:
Philippe Bürgisser (acceleris) used a practical example to show the sticking points of bringing OpenShift into production.
PuzzleIT, adcubum and Helsana gave amazing insights into their journey of moving adcubum syrius to APPUiO.
RedHat and acceleris explained how Cloud/OpenShift simplifies and improves development cycles.

During the closing note, Redhat took up the cudgels for women in IT and their importance, a surprising and appreciated aspect – (Red)Hats off!
Thank you for that great event! Almost 900 participants this year can’t be wrong.

 

This article Redhat Forum 2018 – everything is OpenShift appeared first on Blog dbi services.

Column Stats

Jonathan Lewis - Wed, 2018-09-12 07:46

A little while ago I added a post-script to a note I’d written five years ago about gathering stats on a virtual column, updating it with a reference to a problem on the Oracle database forum that complained that stats collection had taken much longer after the addition of a function-based index. The problem related to the fact that the function-based index was supported by a virtual column that used an instr() function on a CLOB (XML) column – and gathering stats on the virtual column meant applying the function to every CLOB in the table.

So – my post-script, added about a month ago, suggested adding a preference (dbms_stats.set_table_preference) to avoid gathering stats on that column. There’s a problem with this suggestion – it doesn’t work because Oracle doesn’t play nicely when you try to limit the stats collection to a few columns – even in version 18.3. Here’s a demonstration of the effect; first we create a table that includes a column group (extended stats), a virtual column, and a function-based index – i.e. the three different ways of generating user-related virtual columns.


rem
rem     Script:         stats_struggle_06.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2018
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad(rownum,10,'0')             v2
from
        generator       v1
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

execute dbms_stats.delete_table_stats(user,'t1')

begin
        dbms_output.put_line(
                dbms_stats.create_extended_stats(
                        ownname         => user,
                        tabname         => 'T1',
                        extension       => '(v1, v2)'
                )
        );
end;
/

alter table t1 add id_12 
        generated always as (mod(id,12)) virtual
;

create index t1_id on t1(mod(id,10));


Since I’ve run this on 12c and 18c I’ve included a call to delete table stats after creating the table. So the next step is to enable SQL trace and see what Oracle does under the covers when we gather stats on just a couple of columns in the table:


alter session set events '10046 trace name context forever';

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 't1',
                method_opt  => 'for columns size 1 id v1',
                cascade     => false
        );
end;
/

alter session set events '10046 trace name context off';

column column_name  format a32
column data_default format a32

select 
        column_name, data_default,
        num_nulls, num_distinct, to_char(last_analyzed,'hh24:mi:ss') gathered
from    user_tab_cols 
where   table_name = 'T1' 
order by 
        internal_column_id
;

COLUMN_NAME                      DATA_DEFAULT                      NUM_NULLS NUM_DISTINCT GATHERED
-------------------------------- -------------------------------- ---------- ------------ --------
ID                                                                         0        10000 16:13:12
V1                                                                         0        10000 16:13:12
V2
SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7   SYS_OP_COMBINED_HASH("V1","V2")
ID_12                            MOD("ID",12)
SYS_NC00006$                     MOD("ID",10)

According to the output of the last query we’ve gathered stats only on the two columns specified. But have we really avoided the work ? Here, with some cosmetic tidying, is the SQL executed by the package:

select 
        /*+
                full(t) no_parallel(t) no_parallel_index(t) dbms_stats
                cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
                xmlindex_sel_idx_tbl no_substrb_pad 
         */
        to_char(count(ID)),
        substrb(dump(min(ID),16,0,64),1,240),
        substrb(dump(max(ID),16,0,64),1,240),
        to_char(count(V1)),
        substrb(dump(min(V1),16,0,64),1,240),
        substrb(dump(max(V1),16,0,64),1,240),
        to_char(count(V2)),
        to_char(count(SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7)),
        to_char(count(ID_12)),
        to_char(count(SYS_NC00006$))
from
        TEST_USER.T1 t  /* NDV,NIL,NIL,NDV,NIL,NIL,ACL,ACL,ACL,ACL*/

We can see that Oracle has done a count(), min() and max() on ID and V1, and the “comment” at the end of the text tells us that it’s applied the approximate_ndv mechanism to the first two columns queried but not the rest. However it has count()ed all the other columns – which means it’s evaluated their underlying expressions. So if you were hoping that limiting the columns gathered would avoid a really expensive function call, bad luck.

Threat / Bug alert

One odd little detail that came up when I ran a test case that used a deterministic function to generate a virtual column: in 12.1.0.2 the function was called once per row (possibly because every row had a different value) whether or not it was in the list of columns for gathering stats; in 18.3 the function was called nearly twice per row when I didn’t specify stats gathering for the column and nearly 4 times per row when I did. This looks like it might be a change (possibly accidental) to how deterministic functions can cache their inputs and outputs – possibly something as “minor” as the size of the cache.

 
