Feed aggregator

Oracle 12c – How to correct the error: “RMAN-20005: target database name is ambiguous”

Yann Neuhaus - Tue, 2017-02-21 08:09

I have a Data Guard environment in which I have configured the RMAN DB_UNIQUE_NAME persistent setting for the primary and the standby. With the RMAN DB_UNIQUE_NAME settings I can run reports against my Oracle Data Guard environment from any database. For example, I can list all archivelogs for SITE1 from SITE2, or the other way around.
I can also show all persistent settings for SITE1 from SITE2, and of course vice versa. The only prerequisite for this feature is the RMAN catalog. If you are not connected to the RMAN catalog, you end up with the following error:

RMAN> SHOW ARCHIVELOG DELETION POLICY FOR DB_UNIQUE_NAME 'DBIT121_SITE2';

using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 02/21/2017 13:58:53
RMAN-05037: FOR DB_UNIQUE_NAME option cannot be used in nocatalog mode

After connecting to the catalog, you can use this feature, e.g. to show the archive deletion policy.

$ rman target sys/welcome1 catalog /@rcat

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Feb 21 14:25:10 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=644484523)
connected to recovery catalog database

RMAN> SHOW ARCHIVELOG DELETION POLICY FOR DB_UNIQUE_NAME 'DBIT121_SITE1';
RMAN configuration parameters for database with db_unique_name DBIT121_SITE1 are:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK;

RMAN> SHOW ARCHIVELOG DELETION POLICY FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
RMAN configuration parameters for database with db_unique_name DBIT121_SITE2 are:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

There are quite a lot of options that can be combined with the DB_UNIQUE_NAME clause, such as the following:

LIST ARCHIVELOG ALL FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
REPORT SCHEMA FOR DB_UNIQUE_NAME 'DBIT121_SITE2';
SHOW ALL FOR DB_UNIQUE_NAME 'DBIT121_SITE2';

But back to my issue: I was running a catalog resync from my standby database and ended up with the following error:

RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME 'DBIT121_SITE1';

resyncing from database with DB_UNIQUE_NAME DBIT121_SITE1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of resync from db_unique_name command on default channel at 02/21/2017 13:08:42
RMAN-20005: target database name is ambiguous

RMAN says that the target database name is ambiguous. But what does this mean? Let’s take a look at the RMAN error with the oerr utility. The oerr utility can be used not only with “ORA” error codes, like “oerr ora 01555”, but also with “RMAN” error codes.

$ oerr rman 20005
20005, 1, "target database name is ambiguous"
// *Cause: two or more databases in the recovery catalog match this name
// *Action:
//

OK, this error is much more precise. It looks like RMAN found more than one database called DBIT121 in the catalog, and so it does not know against which DBID to perform the requested command. So let’s connect to the RMAN catalog and check whether this is really the case.

SQL> SELECT DB.DB_KEY, DB.DB_ID, DB.CURR_DBINC_KEY, DBINC.DB_NAME
        FROM DB, DBINC
       WHERE DB.CURR_DBINC_KEY = DBINC.DBINC_KEY
         AND DBINC.DB_NAME = 'DBIT121';

    DB_KEY      DB_ID CURR_DBINC_KEY DB_NAME
---------- ---------- -------------- --------
         1  642589239              2 DBIT121
    546780  644484523         546781 DBIT121

Indeed. I have two different DBIDs pointing to the same DB_NAME. Kind of confusing for RMAN. But which one is the one that has been backed up? We can query the RC_BACKUP_SET and RC_BACKUP_PIECE views to find out.

SQL> SELECT RBS.DB_KEY
         , RD.NAME
         , RBS.DB_ID
         , RBS.BS_KEY
         , RBS.RECID
         , RBS.STAMP
         , RBS.BACKUP_TYPE
         , RBS.START_TIME, STATUS
      FROM RC_BACKUP_SET RBS, RC_DATABASE RD
     WHERE RBS.DB_KEY = RD.DB_KEY
       AND RBS.DB_ID = RD.DBID
       AND RD.NAME = 'DBIT121';
...
...

    DB_KEY NAME          DB_ID     BS_KEY      RECID      STAMP B START_TIM S
---------- -------- ---------- ---------- ---------- ---------- - --------- -
    546780 DBIT121   644484523     555608       3070  936496831 I 21-FEB-17 A
    546780 DBIT121   644484523     555609       3071  936496832 I 21-FEB-17 A
    546780 DBIT121   644484523     555610       3072  936496836 D 21-FEB-17 A
    546780 DBIT121   644484523     555611       3073  936496860 D 21-FEB-17 A
    546780 DBIT121   644484523     555612       3074  936496875 D 21-FEB-17 A
    546780 DBIT121   644484523     555613       3075  936496884 D 21-FEB-17 A
    546780 DBIT121   644484523     555614       3076  936496890 D 21-FEB-17 A
    546780 DBIT121   644484523     555615       3077  936496895 L 21-FEB-17 A
    546780 DBIT121   644484523     555616       3078  936496897 L 21-FEB-17 A
    546780 DBIT121   644484523     555617       3079  936496897 L 21-FEB-17 A
    546780 DBIT121   644484523     555618       3080  936496898 D 21-FEB-17 A

    DB_KEY NAME          DB_ID     BS_KEY      RECID      STAMP B START_TIM S
---------- -------- ---------- ---------- ---------- ---------- - --------- -
    546780 DBIT121   644484523     555619       3081  936496900 D 21-FEB-17 A
    546780 DBIT121   644484523     555620       3082  936498788 D 21-FEB-17 A
    546780 DBIT121   644484523     555621       3083  936502389 D 21-FEB-17 A
    546780 DBIT121   644484523     555622       3084  936505991 D 21-FEB-17 A
    546780 DBIT121   644484523     555623       3085  936509589 D 21-FEB-17 A
    546780 DBIT121   644484523     555624       3086  936513189 D 21-FEB-17 A
    546780 DBIT121   644484523     555625       3087  936516788 D 21-FEB-17 A
    546780 DBIT121   644484523     555626       3088  936520387 D 21-FEB-17 A
    546780 DBIT121   644484523     555627       3089  936523988 D 21-FEB-17 A
    546780 DBIT121   644484523     555628       3090  936527608 D 21-FEB-17 A
    546780 DBIT121   644484523     555629       3091  936531188 D 21-FEB-17 A
...
...

After checking the output, I see that DBID 644484523 is the correct one, and DBID 642589239 is the one I want to get rid of.

To do so, we can shut down the standby database and start it up in nomount. The reason is that you cannot issue the SET DBID command against a database that is mounted or open.

RMAN> SET DBID=642589239;

executing command: SET DBID
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of set command at 02/21/2017 13:15:26
RMAN-06188: cannot use command when connected to a mounted target database

OK. Let’s go to nomount and execute the “unregister database;” command once the correct DBID is set.
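The intermediate shutdown and restart are not shown in the listing; on the standby they would look roughly like this (a sketch, assuming a SQL*Plus session connected as sysdba):

```
SQL> shutdown immediate;
SQL> startup nomount;
```

After that, reconnect with RMAN to the target and the catalog.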

$ rman target sys/welcome1 catalog /@rcat

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Feb 21 14:25:10 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (not mounted)
connected to recovery catalog database

RMAN> SET DBID=642589239;

executing command: SET DBID
database name is "DBIT121" and DBID is 642589239

RMAN> unregister database;

database name is "DBIT121" and DBID is 642589239

Do you really want to unregister the database (enter YES or NO)? YES
database unregistered from the recovery catalog

RMAN>

Let’s check the RMAN catalog again.

SQL> SELECT DB.DB_KEY, DB.DB_ID, DB.CURR_DBINC_KEY, DBINC.DB_NAME
        FROM DB, DBINC
       WHERE DB.CURR_DBINC_KEY = DBINC.DBINC_KEY
         AND DBINC.DB_NAME = 'DBIT121';

    DB_KEY      DB_ID CURR_DBINC_KEY DB_NAME
---------- ---------- -------------- --------
    556718  644484523         556719 DBIT121

Cool. That looks much better. :-) Now my catalog resync from SITE1, issued from SITE2, works again.

RMAN> LIST DB_UNIQUE_NAME OF DATABASE;

List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------- ------- ----------------- ---------------  ------------------
556718  DBIT121  644484523        PRIMARY          DBIT121_SITE1
556718  DBIT121  644484523        STANDBY          DBIT121_SITE2

RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME 'DBIT121_SITE1';

resyncing from database with DB_UNIQUE_NAME DBIT121_SITE1
starting full resync of recovery catalog
full resync complete
Conclusion

The RMAN DB_UNIQUE_NAME persistent setting is quite a cool feature, and something I would really recommend when working with RMAN and Data Guard. It allows you to run actions against the primary from the standby, or against the standby from the primary; it doesn’t matter. But take care that you don’t end up with multiple DBIDs pointing to the same DB_NAME in your RMAN catalog.
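To spot this situation early, you can check the catalog for DB_NAME values that are registered under more than one DBID — a sketch against the same DB and DBINC base tables queried earlier:

```
SELECT DBINC.DB_NAME, COUNT(DISTINCT DB.DB_ID) AS DBIDS
  FROM DB, DBINC
 WHERE DB.CURR_DBINC_KEY = DBINC.DBINC_KEY
 GROUP BY DBINC.DB_NAME
HAVING COUNT(DISTINCT DB.DB_ID) > 1;
```

Any row returned here is a candidate for the RMAN-20005 error described above.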

 

The article Oracle 12c – How to correct the error: “RMAN-20005: target database name is ambiguous” appeared first on Blog dbi services.

TKPROF analysis: parses and executions

Tom Kyte - Tue, 2017-02-21 07:26
Hi Tom, Below is the execution plan for one of the sql. When i verify the query execution plan it is using index range scan and index unique scan. By seeing the below output how we will start the analysis. and can you advise on why is parse co...
Categories: DBA Blogs

Uncommitted transactions are committed after running DDL

Tom Kyte - Tue, 2017-02-21 07:26
Hi Tom, Why uncommitted DML transactions are committing after DDL transaction in oracle? Please help me to understand.
Categories: DBA Blogs

Log List of columns updated

Tom Kyte - Tue, 2017-02-21 07:26
Hi Chris/Connor, Please have a look at below scenario - Table ----- tb_temp_0001 Columns ------- order_id number -- PK cust_fname cust_lname dob address_1 address_2 address_3 debit_amnt credit_amnt Table ----- tb_app_0001 ...
Categories: DBA Blogs

Oracle utl_smtp to send e-mail Issue while increasing the number of emails in CC

Tom Kyte - Tue, 2017-02-21 07:26
<code>Recently I moved to oracle database Release 12.1.0.2.0, while using below mentioned code for send email through utl_smtp it works fine but when the number of emails in cc increase it generate the error ora-29278 smtp transient error 421 service...
Categories: DBA Blogs

12c Pro*C precompiler no longer supports ezconnect syntax?

Tom Kyte - Tue, 2017-02-21 07:26
Hi Tom, Did Oracle remove support for the ezconnect connect syntax in the userid= option on the 12c Pro*C precompiler? I recently attempted to update a Pro*C application that is compiled using the instant client and precompiler (instantclient-p...
Categories: DBA Blogs

Assessment for platform that uses both Oracle and MongoDB

Tom Kyte - Tue, 2017-02-21 07:26
Team, Very recently I got this question, not sure how to respond. could you help us on this ? <code>We are going to do an assessment of a platform that uses both Oracle and MongoDB. Can you please list down the artifacts that are required f...
Categories: DBA Blogs

SEQUENCE Contention

Tom Kyte - Tue, 2017-02-21 07:26
I need a query to find the sequence contention in oracle. Can you please provide it? Thanks in advance. Sam
Categories: DBA Blogs

Advantages and disadvantages of using Shareplex vs Dataguard for disaster recovery

Tom Kyte - Tue, 2017-02-21 07:26
Hi AskTom, Just need some advise on the setup of our database environment. We will have production instances A and B. A is live for external business and B is an exact copy of A but is used as operation data storage (B is live datawarehouse env)....
Categories: DBA Blogs

Latches and mutex

Tom Kyte - Tue, 2017-02-21 07:26
Dear Team, Can you please let me know few demonstration of latches and mutex types. one more thing can we take any preliminary action to avoid latches and mutex in future. want to know exact logic behind this. Thank's Pradeep
Categories: DBA Blogs

Webcast: "Simplified and Touch-Friendly User Interface in EBS"

Steven Chan - Tue, 2017-02-21 02:05

Oracle University has a wealth of free webcasts for Oracle E-Business Suite.  If you're looking for the latest updates on our rapidly-evolving OA Framework (OAF) user interface capabilities, see:

Senthilkumar Ramalingam, Group Manager Product Development, shares the latest Oracle Applications Framework (OAF) which includes the Oracle Alta UI, a redesigned home page and a set of new components optimized for display on mobile devices such as tablets.  Oracle Alta UI is the next generation user interface standards from Oracle that offer a modern and compelling UI for both cloud and on-premise applications. In addition, the OAF UI components offer several touch-friendly gestures for common actions, for a smarter and more efficient end user experience. The session also covers major UI enhancements in components like tables, search and attachments. Come see the new components, new gesture-based touch interactions, and a modernized UI that completely transform the Oracle E-Business Suite end user experience. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

APEX 5.1 New Features - new date

Denes Kubicek - Tue, 2017-02-21 00:33
APEX 5.1 is finally here. Oracle Application Express gets better with every release and is drawing more and more developers worldwide under its spell. It is easy to build both simple and complex applications on top of the Oracle stack. It is even genuinely fun!

With APEX 5.1, Interactive Grids have arrived as an important new feature. We have waited a very long time for a modern way to implement Excel-like functionality on a web page with APEX. Now the time has finally come :) . Even master-detail-detail-detail-... relationships can be built; our users will be delighted.

Beyond that, there are important new features in many other areas that make our lives easier. At the same time, there are a few important things to keep in mind so that the upgrade runs smoothly.

In this course you will get to know the new features of Oracle Application Express 5.1, and in particular how to use them successfully in practice.

Learn from and discuss with the world-renowned Oracle APEX experts:

Denes Kubicek, with many years of project experience with Oracle and APEX in multisite applications, winner of Oracle Magazine's "Oracle APEX Developer of the Year 2008" award, an Oracle ACE Director, and well known in the Oracle APEX OTN forum for his sample application, and

Dietmar Aust, an experienced Oracle consultant specializing in Oracle APEX, an Oracle ACE, active in the OTN forums on APEX and Oracle XE, and a regular presenter at the relevant Oracle conferences (DOAG, ODTUG, Oracle Open World).



Categories: Development

AMIS Tools Showroom – The Sequel – Thursday, March 16, 2017

Amis Blog - Mon, 2017-02-20 23:25

Thursday, March 16

17:00-21:00

AMIS, Nieuwegein

Register via: bu.om@amis.nl

On Thursday, March 16, the second AMIS Tools Showroom session takes place. The first session was on December 13: there, 16 AMIS colleagues showed all kinds of handy tools and utilities in short and very short presentations and demonstrations. The emphasis in that session was on tools for monitoring, communication, and collaboration.

In this second session we go looking for another collection of tools. This invitation therefore has two aspects:

· Do you want to be there on March 16 to have tools presented to you by your peers?

· Do you have a tool you would like to present during this session? Think, for example, of tools around web conferencing & video streaming, screen cams, text editing, chat, image editing, data visualization, document sharing, and voice recognition. Or other tools, apps, and plugins that you find useful in your work and would like to show to your peers – in a short presentation (5-15 min) – preferably with a demo.

Could you use the following form to indicate which tools are interesting to you and which tool you would be willing to present: https://docs.google.com/forms/d/e/1FAIpQLSdNPwUACXxWaZGfs911UraVFQp5aWqeJVEx0xrSRFQTcYnYXA/viewform .
Based on the results of this survey we can put together the agenda for this session.

The post AMIS Tools Showroom – The Sequel – Donderdag 16 maart 2017 appeared first on AMIS Oracle and Java Blog.

Database Star Academy Membership is Now Open

Complete IT Professional - Mon, 2017-02-20 21:15
The Database Star Academy membership is now open! Here’s what you need to know. What Is the Database Star Academy Membership? It’s a membership site with a monthly fee, that gives you access to many different online video courses and PDF guides related to Oracle database development. What’s Included? As part of your membership, you […]
Categories: Development

Using bitmap indexes in OLTP database with mostly inserts

Tom Kyte - Mon, 2017-02-20 13:06
Hi, We have a table for logging metadata about processed messages in a production system. The table have approx 32M rows today and 25 columns. The <b>total number of rows is expected to be around 100M</b> in the future. When the processing of a...
Categories: DBA Blogs

Trigger based on set of data

Tom Kyte - Mon, 2017-02-20 13:06
Hi All, i have a scenario where I want to create a trigger which will generate a flat file whenever a set of data like department number's(10,20,30,40,50...) changes(insert/updates) on a particular date(sysdate).
Categories: DBA Blogs

How to fetch up to 5MB of text/string/data from a larger a clob in oracle

Tom Kyte - Mon, 2017-02-20 13:06
Hi Oracle, <code> CREATE TABLE XMLISSUE ( xmltablecolumn clob ); </code> Created a table As shown below code I have inserted some data to clob <code> DECLARE XMLCLO...
Categories: DBA Blogs

when I drop a plsql function, 1 view goes invalid, 1 view and 1 procedure remain valid

Tom Kyte - Mon, 2017-02-20 13:06
We recently upgraded to database version 12c... We found an anomaly that we can't explain and wondering if this is a bug or a misunderstanding of new functionality within 12c. We can reproduce this issue on demand with the following example....
Categories: DBA Blogs

use of Block change tracking file for offline incremental backup

Tom Kyte - Mon, 2017-02-20 13:06
Hi , If block change tracking is ENABLED, Does oracle uses this file when we take an offline incremental backup ( in mount mode ) ? As the Db is not open , I think it can not access the file ( change tracking file), I believe the feature is only...
Categories: DBA Blogs

Converting a column from one data type to another in PostgreSQL

Yann Neuhaus - Mon, 2017-02-20 11:15

Last week at a customer the following question came up: We have a varchar2 column in Oracle that contains a date but is actually stored as a string. When we want to convert this column to a real date data type in Oracle we can use dbms_redefinition. How can we do that in PostgreSQL? The first answer that came to my mind: add a new column (of data type date), populate it with the converted string from the source column, drop the source column, and rename the new column to the name of the dropped one. This will certainly work, but it is not the most convenient way: what happens if the application somehow depends on the order of the columns? That shouldn’t be the case if the application is written well, but you never know.
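That manual add/populate/drop/rename approach would look roughly like this (a sketch, assuming a table t1 with a varchar column a holding DDMMYYYY strings, as in the test data that follows):

```
ALTER TABLE t1 ADD COLUMN a_new date;          -- new column with the target type
UPDATE t1 SET a_new = to_date(a, 'DDMMYYYY');  -- convert and copy the values
ALTER TABLE t1 DROP COLUMN a;                  -- drop the old varchar column
ALTER TABLE t1 RENAME COLUMN a_new TO a;       -- restore the original name
```

Note that afterwards a ends up as the last column of the table, which is exactly the column-order concern mentioned above.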

To start with, let’s generate some test data:

drop table if exists t1;
create table t1 ( a varchar(20) );
insert into t1 (a) values ('01012017');
insert into t1 (a) values ('02012017');
insert into t1 (a) values ('03012017');
insert into t1 (a) values ('04012017');
insert into t1 (a) values ('05012017');
insert into t1 (a) values ('06012017');
insert into t1 (a) values ('07012017');
insert into t1 (a) values ('08012017');
insert into t1 (a) values ('09012017');
insert into t1 (a) values ('10012017');
insert into t1 (a) values ('11012017');
(postgres@[local]:5440) [postgres] > select * from t1;
    a     
----------
 01012017
 02012017
 03012017
 04012017
 05012017
 06012017
 07012017
 08012017
 09012017
 10012017
 11012017
(11 rows)

We now have a varchar column holding the date values as strings. When you look at the “alter table” command in PostgreSQL you’ll notice something like this:

where action is one of:

    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
    ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]

What will really help here is the “using” keyword because you can do things like this:

(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE
(postgres@[local]:5440) [postgres] > \d t1
     Table "public.t1"
 Column | Type | Modifiers 
--------+------+-----------
 a      | date | 

That’s really cool: you can pass a function (this means a self-written function as well) to the alter command to specify how you want to do the conversion.
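To illustrate the self-written-function case, here is a sketch with a hypothetical helper my_to_date (the name and the fallback logic are made up for illustration) that tries a second format when the first one fails:

```
-- Hypothetical helper: try DDMMYYYY first, fall back to YYYYMMDD
CREATE OR REPLACE FUNCTION my_to_date(s text) RETURNS date AS $$
BEGIN
  RETURN to_date(s, 'DDMMYYYY');
EXCEPTION WHEN others THEN
  RETURN to_date(s, 'YYYYMMDD');
END;
$$ LANGUAGE plpgsql;

ALTER TABLE t1 ALTER COLUMN a TYPE date USING my_to_date(a);
```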

When there is an index on the column, what happens to the index?

(postgres@[local]:5440) [postgres] > \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | character varying(20) | 
Indexes:
    "i1" btree (a)
(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE
Time: 5.931 ms
(postgres@[local]:5440) [postgres] > \d t1
     Table "public.t1"
 Column | Type | Modifiers 
--------+------+-----------
 a      | date | 
Indexes:
    "i1" btree (a)
(postgres@[local]:5440) [postgres] > select indisvalid,indisready,indislive from pg_index where indexrelid = 'i1'::regclass; 
 indisvalid | indisready | indislive 
------------+------------+-----------
 t          | t          | t
(1 row)

Looks fine as well; let’s do a quick test to see if the index is really usable:

Time: 0.453 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 11
Time: 2.373 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 22
Time: 39.653 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 44
Time: 1.110 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 88
Time: 1.072 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 176
Time: 1.455 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 352
Time: 1.432 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 704
Time: 3.344 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 1408
Time: 20.972 ms
(postgres@[local]:5440) [postgres] > explain analyze select a from t1 where a = to_date('01012017','dd.mm.yyyy');
                                                 QUERY PLAN                                                 
------------------------------------------------------------------------------------------------------------
 Index Only Scan using i1 on t1  (cost=0.27..8.29 rows=1 width=4) (actual time=0.012..0.012 rows=0 loops=1)
   Index Cond: (a = to_date('01012017'::text, 'dd.mm.yyyy'::text))
   Heap Fetches: 0
 Planning time: 0.062 ms
 Execution time: 0.029 ms
(5 rows)

Perfect. The remaining question is: does this operation block others from reading the table while it is being executed? Let’s generate some more data to make the select operation take a bit longer and then “watch” the statement in a separate session while we execute the conversion:

drop table if exists t1;
create table t1 ( a varchar(20) );
insert into t1 (a) values ('01012017');
insert into t1 (a) values ('02012017');
insert into t1 (a) values ('03012017');
insert into t1 (a) values ('04012017');
insert into t1 (a) values ('05012017');
insert into t1 (a) values ('06012017');
insert into t1 (a) values ('07012017');
insert into t1 (a) values ('08012017');
insert into t1 (a) values ('09012017');
insert into t1 (a) values ('10012017');
insert into t1 (a) values ('11012017');
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 11
-- and so on and so on ...
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 360448

Then, in session 1 I am doing this:

(postgres@[local]:5440) [postgres] > select count(*) from t1 where a = '01012017';
 count  
--------
 131072
(1 row)
(postgres@[local]:5440) [postgres] > \watch 0.1

In session 2 I am doing the conversion:

(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE

In session 1 you’ll notice that the statement is blocked, and finally you get this:

ERROR:  date/time field value out of range: "01012017"
LINE 1: select count(*) from t1 where a = '01012017';
                                          ^
HINT:  Perhaps you need a different "datestyle" setting.
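The blocking can also be confirmed from a third session while the ALTER is running, by looking at the locks held on the table — a sketch using the standard pg_locks view:

```
-- Show lock modes currently held or requested on t1
SELECT l.pid, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
 WHERE c.relname = 't1';
```

While the conversion runs you would expect to see an AccessExclusiveLock granted to the altering session, which is why the concurrent select has to wait.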

Conclusion: Converting a column from one data type to another is not something you want to do while your system is live. If you can afford some downtime, the “using” syntax is what you want, and it works quite fast. Hope this helps …

 

The article Converting a column from one data type to another in PostgreSQL appeared first on Blog dbi services.
