Feed aggregator

Identify the missing object in an ora-08103 error

Tom Kyte - Thu, 2018-09-13 13:26
We have a large cursor that runs nightly and on a random night we get the following error from the job that calls the cursor: 180911 180019 (APPS.GAINS_COMMON,1194) Begin export_data 180911 194348 (APPS.GAINS_COMMON,4735) BEGIN export_sla 1809...
Categories: DBA Blogs

Oracle locking a table while deleting set of rows

Tom Kyte - Thu, 2018-09-13 13:26
I have 2 scripts running at the same time deleting different set of rows from same table. Would this cause any locking issues or contention? Please advise. Thank you very much.
Categories: DBA Blogs

What SQL is currently running in the database

Tom Kyte - Thu, 2018-09-13 13:26
I am using an application that submits SQL queries to the Oracle database. These queries can have hundreds of bind variables in them. I want to be able to see the SQL that is running with the bind variables substituted. I have used this query to g...
Categories: DBA Blogs

Exchange Partition - Error with VIRTUAL & CLOB columns

Tom Kyte - Thu, 2018-09-13 13:26
Hi Folks, Need your expert advice on this. Could you please guide me through the below issue - Issue : Exchange partition doesn't seem to work when both Virtual column and CLOB column exists together in a table :( . However it works fine on it ...
Categories: DBA Blogs

Masking Data With PostgreSQL

Yann Neuhaus - Thu, 2018-09-13 10:01

I was searching for a tool to anonymize data in a PostgreSQL database and tested the extension pg_anonymizer.
PostgreSQL_anonymizer is a set of SQL functions that remove personally identifiable values from a PostgreSQL table and replace them with random-but-plausible values. The goal is to avoid any identification from the data record while remaining suitable for testing, data analysis and data processing.
In this blog I am showing how this extension can be used. I am using a PostgreSQL 10 database.
The first step is to install the extension pg_anonymizer. In my case I did it with the pgxn client:

[postgres@pgserver2 ~]$ pgxn install postgresql_anonymizer --pg_config /u01/app/postgres/product/10/db_1/bin/pg_config
INFO: best version: postgresql_anonymizer 0.0.3
INFO: saving /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: unpacking: /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: building extension
gmake: Nothing to be done for `all'.
INFO: installing extension
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension/anon'
/usr/bin/install -c -m 644 .//anon.control '/u01/app/postgres/product/10/db_1/share/extension/'
/usr/bin/install -c -m 644 .//anon/anon--0.0.3.sql  '/u01/app/postgres/product/10/db_1/share/extension/anon/'
[postgres@pgserver2 ~]$

We can then verify that under /u01/app/postgres/product/10/db_1/share/extension we have a file anon.control and a directory named anon:

[postgres@pgserver2 extension]$ ls -ltra anon*
-rw-r--r--. 1 postgres postgres 167 Sep 13 10:54 anon.control

anon:
total 18552
drwxrwxr-x. 3 postgres postgres    12288 Sep 13 10:54 ..
drwxrwxr-x. 2 postgres postgres       28 Sep 13 10:54 .
-rw-r--r--. 1 postgres postgres 18980156 Sep 13 10:54 anon--0.0.3.sql

Let’s create a database named prod and create the required extensions in it. tsm_system_rows should be delivered by the contrib modules.
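
The creation of the database itself is not shown in the transcript below; a minimal sketch would be:

-- not part of the original transcript
CREATE DATABASE prod;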

prod=# \c prod
You are now connected to database "prod" as user "postgres".
prod=#
prod=# CREATE EXTENSION tsm_system_rows;;
CREATE EXTENSION
prod=#

prod=# CREATE EXTENSION anon;
CREATE EXTENSION
prod=#


prod=# \dx
                                     List of installed extensions
      Name       | Version |   Schema   |                         Description
-----------------+---------+------------+--------------------------------------------------------------
 anon            | 0.0.3   | anon       | Data anonymization tools
 plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
 tsm_system_rows | 1.0     | public     | TABLESAMPLE method which accepts number of rows as a limit
(3 rows)

prod=#

The extension creates the following functions in the schema anon. These functions can be used to mask data.

prod=# set search_path=anon;
SET
prod=# \df
                                                      List of functions
 Schema |           Name           |     Result data type     |                           Argument data types                           |  Type
--------+--------------------------+--------------------------+--------------------------------------------------------------------------+--------
 anon   | random_city              | text                     |                                                                          | normal
 anon   | random_city_in_country   | text                     | country_name text                                                        | normal
 anon   | random_company           | text                     |                                                                          | normal
 anon   | random_country           | text                     |                                                                          | normal
 anon   | random_date              | timestamp with time zone |                                                                          | normal
 anon   | random_date_between      | timestamp with time zone | date_start timestamp with time zone, date_end timestamp with time zone  | normal
 anon   | random_email             | text                     |                                                                          | normal
 anon   | random_first_name        | text                     |                                                                          | normal
 anon   | random_iban              | text                     |                                                                          | normal
 anon   | random_int_between       | integer                  | int_start integer, int_stop integer                                      | normal
 anon   | random_last_name         | text                     |                                                                          | normal
 anon   | random_phone             | text                     | phone_prefix text DEFAULT '0'::text                                      | normal
 anon   | random_region            | text                     |                                                                          | normal
 anon   | random_region_in_country | text                     | country_name text                                                        | normal
 anon   | random_siren             | text                     |                                                                          | normal
 anon   | random_siret             | text                     |                                                                          | normal
 anon   | random_string            | text                     | l integer                                                                | normal
 anon   | random_zip               | text                     |                                                                          | normal
(18 rows)

prod=#
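
A quick sanity check of a couple of these functions (a sketch; the values are random, so the output will differ on every call):

-- example only: the returned values are random
select anon.random_email() as email, anon.random_country() as country;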

Now in the database prod let’s create a table with some data.

prod=# \d customers
                      Table "public.customers"
   Column   |         Type          | Collation | Nullable | Default
------------+-----------------------+-----------+----------+---------
 first_name | character varying(30) |           |          |
 last_name  | character varying(30) |           |          |
 email_add  | character varying(30) |           |          |
 country    | character varying(60) |           |          |
 iban       | character varying(60) |           |          |
 amount     | integer               |           |          |

prod=#

prod=# table customers;
 first_name | last_name |        email_add        |   country    |            iban            |   amount
------------+-----------+-------------------------+--------------+----------------------------+------------
 Michel     | Delco     | michel.delco@yaaa.fr    | FRANCE       | FR763000600001123456890189 |    5000000
 Denise     | Blanchot  | denise.blanchot@yaaa.de | GERMANY      | DE91100000000123456789     | 1000000000
 Farid      | Dim       | farid.dim@yaaa.sa       | Saudi Arabia | SA4420000001234567891234   |    2500000
(3 rows)

prod=#

Let’s say that I want some people to be able to access all the data in this table, but I don’t want them to see the customers’ real email, country and iban.
One solution is to create a view that anonymizes these columns, replacing them with random-but-plausible values:

prod=# create view Customers_anon as select first_name as Firstname, last_name as Lastname, anon.random_email() as Email, anon.random_country() as Country, anon.random_iban() as Iban, amount as Amount from customers;
CREATE VIEW

We can then grant access on the view to the relevant users.
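
For example (report_reader is a hypothetical role, not part of the original setup):

-- report_reader is a hypothetical role used for illustration
grant select on customers_anon to report_reader;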

prod=# select * from customers_anon ;
 firstname | lastname  |             email             | country |            iban            |   amount
-----------+-----------+-------------------------------+---------+----------------------------+------------
 Michel    | Delco     | wlothean0@springer.com        | Spain   |  AD1111112222C3C3C3C3C3C3  |    5000000
 Denise    | Blanchot  | emoraloy@dropbox.com          | Myanmar |  AD1111112222C3C3C3C3C3C3  | 1000000000
 Farid     | Dim       | vbritlandkt@deliciousdays.com | India   |  AD1111112222C3C3C3C3C3C3  |    2500000
(3 rows)

prod=#
 

The post Masking Data With PostgreSQL appeared first on Blog dbi services.

Cloud Access Security Broker (CASB) for Oracle Cloud Infrastructure (OCI)

Syed Jaffar - Thu, 2018-09-13 08:33
Customer adoption of cloud services (IaaS, PaaS, SaaS) has grown rapidly and continues to grow. The most challenging aspect of moving to the cloud is securing the applications and data placed there. Oracle's heterogeneous security solution, Cloud Access Security Broker (CASB), helps customers protect their cloud-based infrastructure, platforms and applications across vendors. CASBs have emerged as the go-to cloud security solution, able to secure the entire cloud footprint (SaaS, PaaS, IaaS).

Most importantly, for all Oracle Cloud Infrastructure (OCI) deployments, it provides visibility, threat protection, data security and compliance. The following are a few key advantages of CASB:


  • Governance of privileged activities
  • Unified incident management
  • Proactive remediation
  • Continuous security compliance for OCI deployments
As part of complete visibility, it provides a holistic view of the entire cloud environment, including users and devices.

Threat detection with User Behavior Analytics (UBA) builds a baseline of typical behavior, down to the individual user and application, and logs when and how a user deviates from that baseline. With the help of predictive analytics, you can easily identify risky users who change folder permissions, modify user privileges, or tamper with configuration settings.

All your cloud compliance configuration settings can be easily maintained. Once the settings are made, CASB monitors them and alerts you whenever a setting changes.
CASB provides three key components to secure your data in the cloud:
  1. Data visibility
  2. Data inspection
  3. Data accessibility
It integrates easily with existing cloud security solutions such as SWG, NGF, IDaaS, DLP and SIEM.

For more details and information, visit Oracle website:

https://www.oracle.com/cloud/paas/casb-cloud-service.html


Giving grant role to invoker from stored procedure

Tom Kyte - Wed, 2018-09-12 19:06
Hello there, As schema owner, I give grant execute on package to another user, who is considered as executor of the package. The package contains security part (procedure with invoker rights (authid current_user)), which checks whether the caller ...
Categories: DBA Blogs

How to strip off characters upto a specific character in a BLOB type column

Tom Kyte - Wed, 2018-09-12 19:06
I have a column of type BLOB which is being used to store images. A typical value that is currently getting stored in this table is: data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBY...
Categories: DBA Blogs

Parsing the CLOB data seperated by delimiters

Tom Kyte - Wed, 2018-09-12 19:06
<code> CREATE TABLE T ( DT_FIELD DATE, SKEY NUMBER, FIELD_VALUES CLOB ) INSERT INTO T VALUES('06-SEP-18',10,'68|} 88026 |}ABC|}101010|}ADSD|}'); INSERT INTO T VALUES('07-SEP-18',11,'70|} 88027 |}DEF|}'); INSERT INTO T VALUES('07-SEP-18',12,'...
Categories: DBA Blogs

As per Oracle recommendation we are using Automatic Segment Space Management . We see contentions when we route traffic for DML on multiple RAC nodes

Tom Kyte - Wed, 2018-09-12 19:06
As per Oracle recommendation we are using Automatic Segment Space Management . We see contentions when we route traffic for DML on multiple RAC nodes. The only way I could think we can solve this Manual segment space management for LARGE table wit...
Categories: DBA Blogs

Oracle Utilities Testing Accelerator Whitepaper Updates

Anthony Shorten - Wed, 2018-09-12 18:07

The Oracle Utilities Testing Accelerator Whitepaper has been updated with the latest information about the Testing capability optimized for Oracle Utilities.

The documentation is available at Oracle Utilities Testing Accelerator for Oracle Utilities (Doc Id: 2014163.1) from My Oracle Support.

The article includes the following updated documents:

  • Oracle Utilities Testing Accelerator Overview - Overview of the testing solution and how it optimizes the test experience.
  • Oracle Utilities Testing Accelerator Frequently Asked Questions - Set of common questions and answers about the Oracle Utilities Testing Accelerator including migration from the previous Oracle Application Testing Suite based solution.
  • Oracle Utilities Testing Accelerator Data Sheet (New) -  A brochure about the Oracle Utilities Testing Accelerator.

Oracle Utilities Testing Accelerator training is now available on demand via Oracle University.

Implementing Handle Patch Method in JET Offline Toolkit

Andrejus Baranovski - Wed, 2018-09-12 13:41
When executing PATCH requests offline, JET Offline Persistence Toolkit will record the request and sync it to the backend once the application is online again. But it will not update the data stored in the cache; this is by design. Since the cached data is not updated, search queries against the offline cache would not return results based on the latest changes. To solve this, we need to implement the cache update ourselves by providing a handle patch method.

The handle patch method is configured through the requestHandlerOverride property while registering the persistence manager:


Here is a sample implementation of handle patch. This method is invoked only when a PATCH is executed while offline. We must read the information from the request and pass it to the cache store: search for the entry in the cache by key, update the record, and write the updated information back to the store:


Let's do an offline test - switch the browser tab to offline (you can do this in the Chrome browser developer tools). Do a search and check the log from JET Offline Persistence Toolkit - it executes the search automatically against the cache store:


Update the same record while offline - the PATCH request will be recorded for later synchronisation. Our handle patch method will be invoked to write the changes to the cache store:


You will notice in the log the actions executed by the handle patch method. It finds the record by key in the cache and updates it:


Search by the updated value - the updated value is found and returned from the cache store:


The code is available in a GitHub repository.

Redhat Forum 2018 – everything is OpenShift

Yann Neuhaus - Wed, 2018-09-12 11:31

I had an exciting and informative day at the Red Hat Forum in Zurich.

After a short welcome in the packed movie theater at Zurich Sihlcity, we had the great pleasure of listening to Jim Whitehurst. With humor, he spoke about Red Hat's success over the last 25 years.


The partner and success stories from Vorwerk, Microsoft, Accenture, Avaloq, Swisscom and SAP impressively showed the potential and the usage of OpenShift.

After the lunch break, which was great for networking and talking to some of the partners, the breakout sessions started.
The range of sessions showed the importance of OpenShift for agile businesses.

Here is a short summary of three sessions:
Philippe Bürgisser (acceleris) used a practical example to show the sticking points of bringing OpenShift into production.
PuzzleIT, adcubum and Helsana gave amazing insights into their journey of moving adcubum syrius to APPUiO.
Red Hat and acceleris explained how Cloud/OpenShift simplifies and improves development cycles.

During the closing note, Red Hat took up the cudgels for women in IT and their importance, a surprising and appreciated aspect – (Red)Hats off!
Thank you for that great event! Almost 900 participants this year can’t be wrong.

 

The post Redhat Forum 2018 – everything is OpenShift appeared first on Blog dbi services.

Column Stats

Jonathan Lewis - Wed, 2018-09-12 07:46

A little while ago I added a post-script to a note I’d written five years ago about gathering stats on a virtual column and had updated with a reference to a problem on the Oracle database forum that complained that stats collection had taken much longer after the addition of a function-based index. The problem related to the fact that the function-based index was supported by a virtual column that used an instr() function on a CLOB (XML) column – and gathering stats on the virtual column meant applying the function to every CLOB in the table.

So – my post-script, added about a month ago, suggested adding a preference (dbms_stats.set_table_prefs) to avoid gathering stats on that column. There’s a problem with this suggestion – it doesn’t work because Oracle doesn’t play nicely when you try to limit the stats collection to a few columns – even in version 18.3. Here’s a demonstration of the effect; first we create a table that includes a column group (extended stats), a virtual column, and a function-based index – i.e. the three different ways of generating user-related virtual columns.


rem
rem     Script:         stats_struggle_06.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2018
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad(rownum,10,'0')             v2
from
        generator       v1
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

execute dbms_stats.delete_table_stats(user,'t1')

begin
        dbms_output.put_line(
                dbms_stats.create_extended_stats(
                        ownname         => user,
                        tabname         => 'T1',
                        extension       => '(v1, v2)'
                )
        );
end;
/

alter table t1 add id_12 
        generated always as (mod(id,12)) virtual
;

create index t1_id on t1(mod(id,10));


Since I’ve run this on 12c and 18c I’ve included a call to delete table stats after creating the table. So the next step is to enable SQL trace and see what Oracle does under the covers when we gather stats on just a couple of columns in the table:


alter session set events '10046 trace name context forever';

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 't1',
                method_opt  => 'for columns size 1 id v1',
                cascade     => false
        );
end;
/

alter session set events '10046 trace name context off';

column column_name  format a32
column data_default format a32

select 
        column_name, data_default,
        num_nulls, num_distinct, to_char(last_analyzed,'hh24:mi:ss') gathered
from    user_tab_cols 
where   table_name = 'T1' 
order by 
        internal_column_id
;

COLUMN_NAME                      DATA_DEFAULT                      NUM_NULLS NUM_DISTINCT GATHERED
-------------------------------- -------------------------------- ---------- ------------ --------
ID                                                                         0        10000 16:13:12
V1                                                                         0        10000 16:13:12
V2
SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7   SYS_OP_COMBINED_HASH("V1","V2")
ID_12                            MOD("ID",12)
SYS_NC00006$                     MOD("ID",10)

According to the output of the last query we’ve gathered stats only on the two columns specified. But have we really avoided the work ? Here, with some cosmetic tidying, is the SQL executed by the package:

select 
        /*+
                full(t) no_parallel(t) no_parallel_index(t) dbms_stats
                cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
                xmlindex_sel_idx_tbl no_substrb_pad 
         */
        to_char(count(ID)),
        substrb(dump(min(ID),16,0,64),1,240),
        substrb(dump(max(ID),16,0,64),1,240),
        to_char(count(V1)),
        substrb(dump(min(V1),16,0,64),1,240),
        substrb(dump(max(V1),16,0,64),1,240),
        to_char(count(V2)),
        to_char(count(SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7)),
        to_char(count(ID_12)),
        to_char(count(SYS_NC00006$))
from
        TEST_USER.T1 t  /* NDV,NIL,NIL,NDV,NIL,NIL,ACL,ACL,ACL,ACL*/

We can see that Oracle has done a count(), min() and max() on ID and V1, and the “comment” at the end of the text tells us that it’s applied the approximate_ndv mechanism to the first two columns queried but not the rest. However it has count()ed all the other columns – which means it’s evaluated their underlying expressions. So if you were hoping that limiting the columns gathered would avoid a really expensive function call, bad luck.
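
For reference, the kind of table preference the post-script suggested would be set roughly as follows – a sketch only, mirroring the method_opt used above and, as the test shows, not a way to avoid the expression evaluation:

rem     A sketch only: set the method_opt preference for T1
rem     (this does not avoid the evaluation of the other columns' expressions)

begin
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'METHOD_OPT',
                pvalue  => 'for columns size 1 id v1'
        );
end;
/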

Threat / Bug alert

One odd little detail that came up when I ran a test case that used a deterministic function to generate a virtual column: in 12.1.0.2 the function was called once per row (possibly because every row had a different value) whether or not it was in the list of columns for gathering stats; in 18.3 the function was called nearly twice per row when I didn’t specify stats gathering for the column and nearly 4 times per row when I did. This looks like it might be a change (possibly accidental) to how deterministic functions can cache their inputs and outputs – possibly something as “minor” as the size of the cache.

 

All about 'Autonomous Transaction Processing' - Part I

Syed Jaffar - Wed, 2018-09-12 03:47
There has been a lot of buzz about the 'self-driving and self-tuning database', 'autonomous', 'automation', etc. since Oracle 18c was announced. I have decided to do my homework and test/validate some of these claims. This blog post will focus on 'Autonomous Transaction Processing (ATP)', how it is helpful to an organization and what role a DBA can play.

It is essentially another cloud offering from Oracle. To begin with, Oracle ATP is built upon the Oracle database and is designed and optimized to deliver scalable transaction performance across all standard business applications. As a service, ATP requires no DBA intervention for any installation, configuration or management activities. It handles all the DB-related activities, such as DB creation, backup, patching, upgrade and space management.

It is a completely elastic service, where you can dynamically increase and decrease the resources (OCPUs and storage capacity) without any service interruption or database downtime. Using the cloud-based console, you can easily manage the service, for example scaling and monitoring it. Additionally, a cloud-based notebook application provides easy querying, collaboration and data-visualization capabilities.

The picture below (source: Oracle documentation) describes the ATP architecture:

Below are some of the key features of Oracle Autonomous Transaction Processing:

  • Simplified management: rapid provisioning of new databases, dynamic resource management (allocation and de-allocation of CPU and storage), patching & upgrades, and backup & recovery
  • Completely elastic service
  • Support for existing, cloud and on-premises applications
  • High query performance and support for concurrent workloads
  • Easy data migration
  • BI tools support
  • Building reports and dashboards with analytics
  • All data stored in encrypted format to secure the data
  • Strong authentication for connections and data access control
In Part II, I will discuss subscription, creation and user management in detail.
I am trying to get a 30-day free account on Oracle Cloud. If I manage to get the credentials, I will go through it practically and post about the configuration and management tasks.

Remap_Data Mutiple Tables- Cannot be applied in expdp?

Tom Kyte - Wed, 2018-09-12 00:46
Hi Tom, Trying to apply remap_data to multiple tables it always fails. The column name is same across both tables where i am applying the function, it exports and imports without any errors from the source schema to the target schema, but it...
Categories: DBA Blogs

give range of ISO weeks allowed for a year.

Tom Kyte - Wed, 2018-09-12 00:46
Hi, I have to include startweek and endweek validation in an oracle form. Here, startweek and endweek includes year and a week number like '201801'. Whenever user enters a startweek or endweek value beyond the allowed range then I have to throw ...
Categories: DBA Blogs

Split multiple strings into rows.

Tom Kyte - Wed, 2018-09-12 00:46
Some records in First Name and Last Name are delimited by "/" and would like to split them into rows. CONNECT BY Clause works perfectly for one column. How do split the strings in multiple columns into rows as given below? Sample Table : ...
Categories: DBA Blogs

Convert comma separated values in a column into rows and Join the result set with another table

Tom Kyte - Wed, 2018-09-12 00:46
I have a table as below R_ID R_Site R_Name R_Role ----- ------------- ------ ------ 1 123,-456,-789 qwer Owner 2 56,-741-852 qaz Manager 3 369,-741,-987 wsx Employee 4 All eddc Employee 5 All ...
Categories: DBA Blogs

Problem reading data from a flexible attribute based XML

Tom Kyte - Wed, 2018-09-12 00:46
Hi Tom, I am not an XML expert. I have a flexible attribute based XML stored into a table in XMLTYPE column TBTest.C1(XMLTYPE) which is a message from upstream queue. The XPATHs of message are like - <code>/*[name()='Message'] /*[name()='Message']...
Categories: DBA Blogs
