Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 5 hours 50 min ago

Cannot read external table or read using utl_file.get_line on Windows 10 with Oracle 18c

5 hours 50 min ago
I just upgraded to Oracle 18c XE from 11g. I have an external table that worked fine on 11g, but I keep getting the following errors on 18c:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11604: no directory object specified for log file

The directory does exist and I have the correct grants.

<code>SELECT * FROM all_directories WHERE directory_name = 'MYDIR';

OWNER  DIRECTORY_NAME  DIRECTORY_PATH                                 ORIGIN_CON_ID
SYS    MYDIR           C:\Users\sneuf\OneDrive\PLSQL\OracleDirectory  1

SELECT * FROM all_tab_privs WHERE table_name = 'MYDIR' AND grantee = 'C##_SNEUF';

GRANTOR  GRANTEE    TABLE_SCHEMA  TABLE_NAME  PRIVILEGE  GRANTABLE  HIERARCHY  COMMON  TYPE       INHERITED
SYS      C##_SNEUF  SYS           MYDIR       EXECUTE    YES        NO         NO      DIRECTORY  NO
SYS      C##_SNEUF  SYS           MYDIR       READ       YES        NO         NO      DIRECTORY  NO
SYS      C##_SNEUF  SYS           MYDIR       WRITE      YES        NO         NO      DIRECTORY  NO</code>

I'm pretty sure I'm missing a grant somewhere, but I can't figure out what. Here is my table:

<code>CREATE TABLE C##_SNEUF.CHECKING_TBL_EXT2
(
  DB_KEY             NUMBER,
  CHECK_NUM          VARCHAR2(10),
  TRANS_DATE         TIMESTAMP (6),
  DESCRIPTION        VARCHAR2(100),
  DEPOSIT_WITHDRAWAL VARCHAR2(1),
  AMOUNT             VARCHAR2(12),
  MEMO               VARCHAR2(200)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY MYDIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    BADFILE MYDIR: 'checking.bad'
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    MISSING FIELD VALUES ARE NULL
    (
      DB_key             CHAR,
      check_num          CHAR(10),
      trans_date         CHAR(21) DATE_FORMAT DATE MASK 'MM/DD/YYYY HH24:MI:SS',
      description        CHAR(100),
      deposit_withdrawal CHAR(1),
      amount             CHAR(12),
      memo               CHAR(200)
    )
  )
  LOCATION ( MYDIR: 'checking.csv' )
)
REJECT LIMIT UNLIMITED;</code>

Thanks, Steve
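One thing worth checking (a sketch, not a verified resolution): KUP-11604 complains about the log file, not the data file, and the access parameters above name a directory only for the bad file. Naming a directory for the log file explicitly, or suppressing it, may be all that is needed:

<code>ACCESS PARAMETERS
(
  RECORDS DELIMITED BY NEWLINE
  BADFILE MYDIR: 'checking.bad'
  LOGFILE MYDIR: 'checking.log'   -- or simply: NOLOGFILE
  FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
  ...
)</code>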
Categories: DBA Blogs

Delete 40%(4TB out of 10TB) of unwanted data using Partitioning techniques from Oracle db(11g) and reclaim space.

5 hours 50 min ago
Problem statement: delete 40% (4 TB out of 10 TB) of unwanted data from an Oracle 11g database using partitioning techniques, and reclaim the space.

Details:
- My application DB schema is 10 TB, on Oracle 11g, and the schema is co-located with many other schemas, so no option involving database downtime is allowed.
- The top tables by size start at 3 TB (Table_1), 2 TB (Table_2), and so on.
- We plan to delete 40% of unwanted data from each table (with a WHERE criteria).
- Plain DELETE scripts are practically impossible on tables this large: they hang the database server and impact the other schemas' services.

The following are not options, due to the downtime constraints and lack of filesystem space to create temporary tables and switch back:
- Removing data by creating a new table
- Switching the tables over (CTAS)

Proposed deletion strategy:
- Partition (split partition / sub-partition) these tables based on the data patterns (WHERE clause) to separate the purge-eligible data.
- After separating the purge-eligible data, truncate or drop the unwanted partitions in the background during off-business hours.
- After truncating the partitions, convert the table back to its original, un-partitioned form.

Reclaiming the space -- which is the best option to choose?
1. alter database datafile xxx.dbf resize
2. alter tablespace xxx coalesce
3. export-import
4. dbms_redefinition
5. alter table xxx shrink

Any guidance on the proposed solution?
- What partitioning should be used for a pattern whose joins/WHERE clause involves 3 to 4 tables?
- Are there examples of pattern-based partitioning, and is it supported in 11g?
- How do we un-partition the main table back to its original form after truncating the unwanted partitions?
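As an illustration of the split-and-truncate idea (a sketch only -- the table, partition, and boundary values are hypothetical, and getting an 11g heap table partitioned in the first place without downtime would itself need online redefinition):

<code>-- Hypothetical: TABLE_1 already range-partitioned; carve off the purge-eligible rows
ALTER TABLE table_1 SPLIT PARTITION p_maxvalue
  AT ( TO_DATE('01-JAN-2019','DD-MON-YYYY') )
  INTO ( PARTITION p_purge, PARTITION p_maxvalue );

-- Off-hours: discard the unwanted rows without row-by-row undo/redo
ALTER TABLE table_1 DROP PARTITION p_purge UPDATE GLOBAL INDEXES;

-- Then try to shrink the datafile back toward the high-water mark
ALTER DATABASE DATAFILE '/u01/oradata/xxx.dbf' RESIZE 6T;</code>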
Categories: DBA Blogs

Local web app (ASP.NET Core with EF Core) works flawlessly, but fails after publishing to Azure

5 hours 50 min ago
When I publish the ASP.NET Core web app to Azure, I get the following error when trying to use the Oracle database:

<code>NetworkException: ORA-12154: TNS:could not resolve the connect identifier specified
OracleInternal.Network.AddressResolution..ctor(string TNSAlias, SqlNetOraConfig SNOConfig, Hashtable ObTnsHT, Hashtable ObLdpHT, string instanceName, ConnectionOption CO)

OracleException: ORA-12154: TNS:could not resolve the connect identifier specified
OracleInternal.ConnectionPool.PoolManager<PM, CP, PR>.Get(ConnectionString csWithDiffOrNewPwd, bool bGetForApp, OracleConnection connRefForCriteria, string affinityInstanceName, bool bForceMatch)</code>

I fixed these errors by setting TNS_ADMIN and WALLET_LOCATION in my application settings (environment variables), and I have also set WEBSITE_LOAD_USER_PROFILE. Then I get a new error when trying to run Oracle DB code:

<code>An unhandled exception occurred while processing the request.
OracleException: Connection request timed out
OracleInternal.ConnectionPool.PoolManager<PM, CP, PR>.Get(ConnectionString csWithDiffOrNewPwd, bool bGetForApp, OracleConnection connRefForCriteria, string affinityInstanceName, bool bForceMatch)</code>

This one I can't seem to figure out. Does anyone have any pointers or things I can try? My connection string looks like this and works perfectly on the local setup:

<code>"User Id=user;Password=password;Data Source=db202011081718_medium;"</code>

and is injected here in startup.cs:

<code>services.AddDbContext<ApplicationDbContext>(options =>
    options.UseOracle(
        Configuration.GetConnectionString("OracleDBContextConnection")));</code>

I must be missing something related to the Azure/Oracle specifics that isn't playing nice. I've watched this video 3 or 4 times: https://www.youtube.com/watch?v=-IkDuJy30mY
Categories: DBA Blogs

Delete on referenced partitioned table taking time

Wed, 2020-11-25 21:26
Hi Connor, Chris, I have reference-partitioned tables as below. It's a daily range partition on the PART_DATE column.

<code>CREATE TABLE ORDERS
(
  ORDER_ID  NUMBER(11),
  PART_DATE DATE DEFAULT trunc(sysdate),
  .... 150 columns ....
)
PARTITION BY RANGE (PART_DATE)
(
  partition ORDERS_P_01_JAN_2018 values less than (to_date('02-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_02_JAN_2018 values less than (to_date('03-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_03_JAN_2018 values less than (to_date('04-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_04_JAN_2018 values less than (to_date('05-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_MAXVALUE    values less than (maxvalue)
);

CREATE TABLE ORDERS_DETAILS
(
  ORDER_ID NUMBER(11) NOT NULL,
  .... 70 columns ....
  CONSTRAINT FK_ORDER_ID FOREIGN KEY (ORDER_ID) REFERENCES ORDERS (ORDER_ID)
)
PARTITION BY REFERENCE (FK_ORDER_ID);</code>

The issue here is that it takes significant time to delete records from both the child table ORDERS_DETAILS and the parent ORDERS. The daily partition record count is around 800k in ORDERS and 2.7 million in ORDERS_DETAILS; these tables hold 2 months of data, and the rest gets purged on a daily basis (using native DELETE statements). Earlier we deleted these records 50k rows per iteration, followed by a commit. Then we tried a limit counter of 5k, but it took the same time to finish purging all records for a given day. The next option we are planning is to truncate partitions instead of using DELETE statements. But before doing so, we would like to seek your help so we can investigate why the delete is taking so long. Could you please suggest what we can check at our end?
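If the retention rule is simply "keep 2 months", truncating partitions avoids the per-row delete cost entirely. A sketch (partition names are illustrative; by default reference-partitioned child partitions inherit the parent partition names):

<code>-- 11g: truncate the child partition first, then the parent
ALTER TABLE orders_details TRUNCATE PARTITION orders_p_01_jan_2018 UPDATE GLOBAL INDEXES;
ALTER TABLE orders         TRUNCATE PARTITION orders_p_01_jan_2018 UPDATE GLOBAL INDEXES;

-- 12c and later: one statement handles the reference-partitioned child
ALTER TABLE orders TRUNCATE PARTITION orders_p_01_jan_2018 CASCADE UPDATE GLOBAL INDEXES;</code>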
Categories: DBA Blogs

Directory List including modify date and times - casting the date problem

Wed, 2020-11-25 21:26
hi Tom, I started with your dir listing code and it's working out very nicely, but I am trying to figure out how to get the rest of the file info using the File object. I am having a problem with the dates specifically. Here's my attempt, but as you can see there's a problem with the dates and I can't seem to figure it out.

<code>GRANT JAVAUSERPRIV to SOLIVER;

drop table DIR_LIST;
create global temporary table DIR_LIST
( filename     varchar2(255),
  lastmodified date )
on commit delete rows;

create or replace and compile java source named "DirList" as
import java.io.*;
import java.sql.*;

public class DirList
{
    public static void getList(String directory) throws SQLException
    {
        File path = new File(directory);
        File[] FileList = path.listFiles();
        String TheFile;
        Date ModiDate;

        #sql { DELETE FROM DIR_LIST };

        for (int i = 0; i < FileList.length; i++)
        {
            TheFile = FileList[i].getAbsolutePath();
            ModiDate = new Date(FileList[i].lastModified());
            #sql { INSERT INTO DIR_LIST (FILENAME, LASTMODIFIED)
                   VALUES (:TheFile, :ModiDate) };
        }
    }
}
/

create or replace procedure get_dir_list( p_directory in varchar2 )
as language java
name 'DirList.getList( java.lang.String )';
/

-- implementation
EXEC dbms_java.grant_permission( 'SOLIVER', 'java.io.FilePermission', 'c:\temp\*', 'read' );
exec get_dir_list( 'c:\temp' );

select to_char(lastmodified,'YYYY-MM-DD HH24:MI:SS'), filename
  from dir_list
 order by lastmodified;</code>

I would appreciate any help...
Categories: DBA Blogs

Normal Forms

Wed, 2020-11-25 21:26
Hi Tom, I just wanted to know whether the Oracle database supports all the normal forms we know, i.e. 1st, 2nd, 3rd, BCNF, 4th, and 5th. http://psoug.org/reference/normalization.html Is the SCOTT schema we use fully normalized, or can it be normalized up to 5th normal form? In general, up to which normal form is it sufficient to conclude that the created database is in agreement with the standards? Thanks, Raghavendra B
Categories: DBA Blogs

Oracle PL/SQL Procedure/function to dynamically create view with dynamic columns

Wed, 2020-11-25 03:06
I had a similar requirement earlier, but now I have been given a clear approach that needs to be implemented. I need to write a procedure (preferably) or function which, on the basis of an app_id passed as a parameter, changes the column headers for the corresponding values pivoted in a dynamic view.

***The entire sample data and structure is available here:*** https://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=63c08123fbf2fb13de81df0bff360c0b

The table with the column headers is DATA_HEADER. The table with the values is DATA_VALUE. The column headers and values need to be pivoted in order. Both tables have app_id in common. So for app_id=1, labels from DATA_HEADER would be used against values in DATA_VALUE. However, the values change on the basis of PID only, so they remain constant; only the headers change per app_id. When app_id is passed to the proc/function, the expected view should be:

<code>PID  Title  Group  Aspect  EPT  IT  BU  Section  Class  Label  Value  Options  Comments
120  Robert   Deputy  IT  Synopsis is good  Standard  Code  Green  Needs Improvement for the details
120  Stanley  Deputy  IT  Synopsis is good  Standard  Code  Green  Needs Improvement for the details  Average</code>

call proc(app_id=>2)

<code>PID  Project  Idea  Perspective  Analysis  Status  Dept  Aspect 3  Class  Label  Value  Options  Comments  Remarks
120  Robert   Deputy  IT  Synopsis is good  Standard  Code  Green  Needs Improvement for the details
121  Stanley  Deputy  IT  Synopsis is good  Standard  Code  Green  Needs Improvement for the details  Average</code>

(Apologies for the structure, but I did not know how to format a table; I hope the expected result is clear.)

So basically, the headers change for each app_id, and the maximum number of column headers is 20. The number and names vary as in the table DATA_HEADER. The values are uniquely identified on the basis of PID. The order of the column headers is given by the SEQ column in DATA_HEADER. Similarly, the order of the values is given by the SEQ column in DATA_VALUE, so the sequence must be followed and pivoted accordingly.
Oracle Version: 12.1
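One way such a view could be generated (a sketch under assumptions -- it presumes columns along the lines of DATA_HEADER(APP_ID, SEQ, LABEL) and DATA_VALUE(PID, SEQ, VAL) as in the fiddle, and quotes each label so it may contain spaces):

<code>create or replace procedure build_pivot_view( p_app_id in number )
as
  l_cols varchar2(32767);
begin
  -- turn the ordered labels for this app_id into pivoted column expressions
  for r in ( select label, seq
               from data_header
              where app_id = p_app_id
              order by seq )
  loop
    l_cols := l_cols || ', max(case when v.seq = ' || r.seq ||
              ' then v.val end) as "' || r.label || '"';
  end loop;

  execute immediate
    'create or replace view pivoted_data as ' ||
    'select v.pid' || l_cols ||
    '  from data_value v group by v.pid';
end;
/</code>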
Categories: DBA Blogs

Objects auto validated intermittently

Wed, 2020-11-25 03:06
Hi Connor, Chris, for the past few days we have been facing issues with database packages getting invalidated, and all application processes fail with "ORA-06508: PL/SQL: could not find program unit being called". We can see the invalid objects in user_objects for some time, and they get validated again after a while. Could you please provide any lead on how I can investigate this further?
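A first step worth trying (a sketch): correlate the invalidations with recent DDL, since recompiling or altering any referenced object invalidates its dependents:

<code>select object_name, object_type, status, last_ddl_time
  from user_objects
 where last_ddl_time > sysdate - 1
 order by last_ddl_time desc;

-- and list what an invalidated package depends on
select name, referenced_name, referenced_type
  from user_dependencies
 where name = 'MY_INVALID_PACKAGE';   -- hypothetical package name</code>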
Categories: DBA Blogs

Finding the number of rows in each table by a single sql

Wed, 2020-11-25 03:06
hi tom, i have a database with 125 tables. i can find the total number of tables in the database with the sql "select * from tab;". now i would like to know the number of rows in each table of my database, without executing "select count(*) from ..." each time. please help me. sincerely, rajesh
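One common approach (a sketch, for when the counts must be exact rather than the possibly-stale NUM_ROWS in user_tables) is to let dbms_xmlgen run the count for every table in a single query:

<code>select table_name,
       to_number(
         extractvalue(
           xmltype( dbms_xmlgen.getxml(
             'select count(*) c from "' || table_name || '"' ) ),
           '/ROWSET/ROW/C' ) ) as row_count
  from user_tables
 order by table_name;</code>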
Categories: DBA Blogs

Extract and delete records from table by only one session

Tue, 2020-11-24 09:06
Hello, we have a table with records whose values are filled in batch and should be consumed by a client application. The various sessions should each read a record and delete it, and it must be guaranteed that each record is used by only one session. We tried SELECT FOR UPDATE but had locking problems. Is there another way to "consume" the records with the guarantee that no two sessions get the same record? Regards, Andreas
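A common pattern for this kind of work queue (a sketch -- the table and column names are illustrative, and Advanced Queuing is the other standard answer) is SELECT ... FOR UPDATE SKIP LOCKED, so each session locks a different row instead of waiting on rows other sessions hold:

<code>declare
  cursor c is
    select id, payload
      from work_queue            -- hypothetical table
       for update skip locked;
  l_rec c%rowtype;
begin
  open c;
  fetch c into l_rec;            -- each session gets a row no one else has locked
  if c%found then
    -- ... process l_rec.payload ...
    delete from work_queue where current of c;
  end if;
  close c;
  commit;
end;
/</code>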
Categories: DBA Blogs

Operations preserving row order

Tue, 2020-11-24 09:06
Hi Tom, a fan of your work. Have a question: are there any operations in Oracle preserving row order? For example, can I expect that <code> select * from (select tag from test order by tag) </code> Will return in sorted order? Or if a pipelined table function produces a dozen rows in certain order, can I use "select * from table(f())" to see them in the same order? Will a cursor read rows from a pipelined function in the same order they are piped? Basically, looking for exceptions to the general rule "any operation destroys row order".
Categories: DBA Blogs

"Compute" inside a "group by"

Tue, 2020-11-24 09:06
Hi. Does anyone know how to add a "Compute" inside a "group by" already set up through the action button in an Interactive Report? Is it possible to do it, has someone done something similar? Thank you
Categories: DBA Blogs

Is there a way to find occupied sizes of all tables(of all columns of each table) in oracle db ?

Tue, 2020-11-24 09:06
I need the sizes of all columns of a table, and the same details for all user tables in the db.

<code>-- Query to find # of rows and sizes of all data in TABLE1
select count(1),
       sum(length(column1)),
       sum(length(column2)),
       sum(length(column3)),
       sum(length(column4)),
       sum(length(column5))
  from TABLE1;</code>

Should I construct a similar query for every table and collect the info, or is there a way to pull all table sizes automatically?
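One way to avoid hand-writing a query per table (a sketch; note that LENGTH counts characters, not bytes) is to generate the SUM(LENGTH(...)) statements from the dictionary:

<code>select 'select ''' || table_name || ''' tab, count(*) cnt, ' ||
       listagg( 'sum(length(' || column_name || '))', ' + ' )
         within group ( order by column_id ) ||
       ' total_len from ' || table_name || ';' as generated_sql
  from user_tab_columns
 group by table_name;

-- or, if optimizer statistics are fresh enough, an approximation without scanning:
select table_name, num_rows, num_rows * avg_row_len as approx_bytes
  from user_tables;</code>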
Categories: DBA Blogs

Questions about on commit refresh Fast MVs

Mon, 2020-11-23 14:46
Team: here is my test case, used for the demo below; this was on an 18c database.

Questions:
Q1 - Why is the error ORA-10980 reported in this trace file? What was the problem with my test case?
Q2 - All three DELETE statements having a predicate like "where rid1/rid2 in (...)" are not using the index on either of the columns. Why?
Q3 - Please see the "insert into T1_T2_MV..." where it joins MLOG$_T2 with T2: even with the HASH_SJ hint, the optimizer is still not accessing T2 by ROWID with nested loops for card=2; instead it uses a hash join. What else could be done to avoid the full scan on T2?

<code>create table t1 as
  select a.*, rownum r
    from all_objects a, all_users
   where rownum <= 2000000;

create table t2 as select * from t1;

alter table t1 add constraint t1_pk primary key(r);
alter table t2 add constraint t2_pk primary key(r);

create materialized view log on t1
  with primary key, rowid, sequence
  (object_type, object_name, created, last_ddl_time, timestamp, status)
  including new values;

create materialized view log on t2
  with primary key, rowid, sequence
  (object_type, object_name, created, last_ddl_time, timestamp, status)
  including new values;

create materialized view t1_t2_mv
  build immediate
  refresh fast on demand
  enable query rewrite
as
select t1.object_type, t1.object_name, t1.created, t1.last_ddl_time,
       t1.timestamp, t1.status, 1 as umarker,
       t1.rowid as rid1, t1.rowid as rid2
  from t1
 where owner in ('PUBLIC','APEX_200100','ORDSYS','MDSYS','XDB','SYSTEM','CTXSYS')
union all
select t1.object_type, t1.object_name, t1.created, t2.last_ddl_time,
       t2.timestamp, t1.status, 2 as umarker,
       t1.rowid as rid1, t2.rowid as rid2
  from t1, t2
 where t1.r = t2.r
   and t1.owner = 'SYS';

create index t1_t2_mv_idx_01 on t1_t2_mv( rid1 ) nologging;
create index t1_t2_mv_idx_02 on t1_t2_mv( rid2 ) nologging;

update t2 set object_type = lower(object_type) where rownum = 1;
delete from t1 where rownum <= 5;
commit;

exec dbms_stats.gather_table_stats(user,'mlog$_t1',no_invalidate=>false);
exec dbms_stats.gather_table_stats(user,'mlog$_t2',no_invalidate=>false);

demo@XEPDB1> select count(*) from mlog$_t1;

  COUNT(*)
----------
         5

demo@XEPDB1> select count(*) from mlog$_t2;

  COUNT(*)
----------
         2

demo@XEPDB1> @tkfilename.sql
D:\APP\VNAMEIT\ORA18C_XE\diag\rdbms\xe\xe\trace\xe_ora_8468.trc

demo@XEPDB1> @tktrace.sql
PL/SQL procedure successfully completed.

demo@XEPDB1> set timing on
demo@XEPDB1> exec dbms_mview.refresh('T1_T2_MV','F');
PL/SQL procedure successfully completed.
Elapsed: 00:02:19.47

demo@XEPDB1> exit</code>

And the TKPROF shows this:

<code>The following statements encountered a error during parse:

select t1.object_type, t1.object_name, t1.created, t1.last_ddl_time,
       t1.timestamp, t1.status, 1 as umarker,
       t1.rowid as rid1, t1.rowid as rid2
  from t1
 where owner in ('PUBLIC','APEX_200100','ORDSYS','MDSYS','XDB','SYSTEM','CTXSYS')
union all
select t1.object_type, t1.object_name, t1.created, t2.last_ddl_time,
       t2.timestamp, t1.status, 2 as umarker,
       t1.rowid as rid1, t2.rowid as rid2
  from t1, t2
 where t1.r = t2.r
   and t1.owner = 'SYS'

Error encountered: ORA-10980
--------------------------------------------------------------------------------
select t1.object_type, t1.object_name, t1.created, t1.last_ddl_time,
t1.timestamp, t1.status, 1 as umarker, t1.rowid as rid1, t1.rowid as rid2
from t1 where owner in ('PUBLIC','APEX_200100','ORDSYS','MDSYS','XDB','SYSTEM','CTXSYS')
union all select t1.object_type, t1.object_name, t1.created, t2.last_ddl_time,
t2.timestamp, t1.status, 2 as umarker, t1...</code>
Categories: DBA Blogs

IMPDP statement with multiple where and table clause

Mon, 2020-11-23 14:46
I have more than 20 tables to restore from an exported dump file, so my question is how to import the 20 tables in one impdp statement, so my DBA can save time. Here I post two different impdp statements, which contain different WHERE clauses and different tables to import.

<code>impdp username/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=dmp_file_name.dmp LOGFILE=dmp_log_file.txt TABLES=HR.emp_log query=HR.emp_log:\"where dept_id in ( select a.dept_id from HR.remote_data_emp_log a where a.log_date = '31-DEC-2019' ) \"

impdp username/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=dmp_file_name.dmp LOGFILE=dmp_log_file.txt TABLES=HR.dept_log query=HR.dept_log:\"where dept_id in ( select a.dept_id from HR.remote_data_dept_log a where a.dept_log_date = '31-DEC-2019' ) \"</code>
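Data Pump accepts a comma-separated TABLES list and one QUERY clause per table, which is easiest to manage in a parameter file (a sketch; the file name is hypothetical, the table names follow the examples above, and shell escaping of the quotes is not needed inside a parfile):

<code># import_20_tables.par -- hypothetical parameter file; extend TABLES/QUERY for all 20 tables
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=dmp_file_name.dmp
LOGFILE=dmp_log_file.txt
TABLES=HR.emp_log,HR.dept_log
QUERY=HR.emp_log:"where dept_id in ( select a.dept_id from HR.remote_data_emp_log a where a.log_date = '31-DEC-2019' )"
QUERY=HR.dept_log:"where dept_id in ( select a.dept_id from HR.remote_data_dept_log a where a.dept_log_date = '31-DEC-2019' )"</code>

and then a single invocation: <code>impdp username/password parfile=import_20_tables.par</code>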
Categories: DBA Blogs

How to extract specific tags from a CLOB column storing XML

Mon, 2020-11-23 14:46
I have a CLOB column that has different tags in it, like the example below. I am trying to get the comments tag of all the rows; one of them is returning null. I am assuming it is because it has the word "comments" more than once. This is the query I am using:

<code>select d.d1_activity_id,
       dbms_lob.substr(d.bo_data_area, dbms_lob.getlength(d.bo_data_area), 1) as DCLOB,
       extractValue(xmlparse(content d.bo_data_area), 'comments') AS comnt
  from d1_activity d
 where dbms_lob.instr(d.bo_data_area, 'comments') > 0</code>

This is an example of the data we have in that column:

<code><comments>C2M Test Exchange Meter</comments>
<instructions>C2M Test Exchange Meter</instructions>
<replyToExternalSystem>D1YS</replyToExternalSystem>
<retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails>
<contactDetails/>
<connectNewDevice>D1CN</connectNewDevice>
<oldDeviceId>061840493997</oldDeviceId>
<isFieldActivityCompleted>D1NO</isFieldActivityCompleted>
<isAppointmentNecessary>N</isAppointmentNecessary>
<appointmentWindow/>
<comments>C2M Test for M-Exchange Orch to PragmaCad</comments>
<instructions>C2M Test for M-Exchange Orch to PragmaCad</instructions>
<isMeasurementFound>D1NO</isMeasurementFound>
<replyToExternalSystem>D1YS</replyToExternalSystem>
<retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails>
<allowParentTransition>true</allowParentTransition>
<overrideRestrictions>D1NA</overrideRestrictions>
<fieldWorkSystemAddress><address1>3456 BOWDEN CIR W</address1><address4>15305034560000&gt;&lt;193954</address4><crossStreet>6249</crossStreet><city>JACKSONVILLE</city><county>DUVAL</county><postal>32216</postal><country>USA</country><state>FL</state><geocodeLatitude>0.000000</geocodeLatitude><geocodeLongitude>0.000000</geocodeLongitude></fieldWorkSystemAddress>
<contactDetails/>
<updateSpecificActivity>D1YS</updateSpecificActivity>
<updateableItems><comments>Editing comments</comments><instructions>Editing comments</instructions><startDateTime>2020-10-27-00.00.00</startDateTime></updateableItems>
<isAppointmentNecessary>N</isAppointmentNecessary>
<appointmentWindow/>
<allowParentTransition>true</allowParentTransition>
<replyToExternalSystem>D1YS</replyToExternalSystem>
<retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails></code>
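Because the fragment contains several <comments> elements at different levels, a single extractValue (which expects exactly one matching node) is not enough. A sketch of one alternative, using XMLTABLE to return every occurrence, wrapping the fragment in a dummy root element so it parses as a document:

<code>select d.d1_activity_id, x.comnt
  from d1_activity d,
       xmltable(
         '/root//comments'
         passing xmltype('<root>' || d.bo_data_area || '</root>')
         columns comnt varchar2(4000) path '.'
       ) x
 where dbms_lob.instr(d.bo_data_area, 'comments') > 0;</code>

This returns one row per <comments> element per activity; an XPath such as '/root/comments' would restrict it to the top-level ones only.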
Categories: DBA Blogs

Periodically Test Recovery Procedures

Mon, 2020-11-23 14:46
Hello Team, I need to document our RMAN backup and restore best practices, and I'm struggling to answer this question: how often (months, years, etc.) should we test our restore procedures? That is, what is the maximum acceptable time between restore tests, even if only a "RESTORE ... VALIDATE"? Thanks. Here is an old document: https://docs.oracle.com/cd/B12037_01/server.101/b10726/configbp.htm#1007459
Categories: DBA Blogs

How to prevent sqlldr from aborting with WHEN clause

Fri, 2020-11-20 13:26
The file names will be the same. A bad file has ONLY the 1000 record; a good file has many records. I can load the first record from both the good file and the bad file, but the good file aborts because it skips all the other record types. I was using the WHEN clause to load JUST the 1000 record.

This is an example of a bad file:

<code>1000,payment file failed,002 - Duplicate File.</code>

This is an example of a good file:

<code>1000,1.0,TEMPSUA,10142020071021,10162020172131
4000,1.0,814605760,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605770,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605780,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605790,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605810,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605820,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605830,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605840,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
4000,1.0,814605850,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,,
5000,9,0,9</code>

I have a table with one header record and multiple detail records. A good file and a bad file will have the same file name. Only the header record will alert us to a failed file. I have a 'PRETEST' table that I only want to load the header record into, and then a 'PRETEST' script to test the header record. I use a WHEN clause and load only one record, but the step aborts because it creates a discard file with all the other good records. How can I prevent this from aborting? Thank you, Sherry Borden

<code>SQL*Loader: Release 19.0.0.0.0 - Production on Wed Nov 18 10:44:55 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Control File:   C:\AppWorx\sql\F_JPM_TF_RESPONSE_PRETEST_LOAD.ctl
Data File:      3140121.tmp
Bad File:       3140121.bad
Discard File:   3140121.dis
(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array:     250 rows, maximum of 1048576 bytes
Continuation:   none specified
Path used:      Conventional

Table TEMPLE_FINANCE.JPM_SUA_RESPONSE_PRETEST, loaded when JPM_REC_TYPE = 0X31303030(character '1000')
Insert option in effect for this table: REPLACE
TRAILING NULLCOLS option in effect

   Column Name                  Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
JPM_REC_TYPE                        FIRST     *   ,  O(") CHARACTER
JPM_S_OR_F                           NEXT     *   ,  O(") CHARACTER
JPM_ERRCODE_DESCRIP                  NEXT     *   ,  O(") CHARACTER

Record 2: Discarded - failed all WHEN clauses.
Record 3: Discarded - failed all WHEN clauses.
Record 4: Discarded - failed all WHEN clauses.
Record 5: Discarded - fa...</code>
Categories: DBA Blogs

Performing sum of all matched substrings from a string using regular expression

Fri, 2020-11-20 13:26
I have a database table named T_USAGE with a column named GENERAL_1. The GENERAL_1 field contains values like the following:

<code>14348860:1T:24:|120|1120|2000*14348860:1T:24:|120|1220|3000*14348860:1T:24:|120|1120|879609299148</code>

I have to sum the substrings enclosed between | (pipe) and * (asterisk). In the above input value there are two such substrings (2000 and 3000). Using the regexp_substr() function I am able to identify only the first substring:

<code>select regexp_substr('input', '\|([0-9])+\*') test from dual;</code>

How do I identify all occurrences and perform the addition? Please provide an SQL query if possible. The expected output should be 2000 + 3000 = 5000.
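One way (a sketch, assuming a single row as in the example; with multiple rows the matches would need to be generated per row, e.g. via a lateral subquery) is to iterate over the matches with the occurrence argument of regexp_substr, using regexp_count to know how many there are and the subexpression argument to keep just the digits:

<code>select sum( to_number(
         regexp_substr( general_1, '\|(\d+)\*', 1, level, null, 1 ) ) ) as total
  from t_usage
 connect by level <= regexp_count( general_1, '\|\d+\*' );
-- for the sample value, the matches are 2000 and 3000</code>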
Categories: DBA Blogs

How to audit all Select and DML by a user?

Fri, 2020-11-20 13:26
Good afternoon, how can we audit all SELECT and DML statements by a user? I tried this: <code>AUDIT ALL BY JCANTU;</code> Then I ran a few selects, but they didn't appear in the audit trail, so I ended up just doing a SQL trace. Is AUDIT ALL supposed to create an audit log entry when I select from a table, so that the audit trail shows that I performed a SELECT and which table was selected? Thanks,
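A few things worth checking (a sketch; this assumes traditional auditing rather than Unified Auditing): the AUDIT ALL shortcut may not cover SELECT the way the explicit SELECT TABLE option does, auditing only takes effect for sessions started after the AUDIT command, and nothing is recorded unless the static audit_trail initialization parameter is set (changing it requires a restart):

<code>show parameter audit_trail   -- must be DB, DB,EXTENDED, OS or XML, not NONE

AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY jcantu BY ACCESS;

-- then, from a NEW session as JCANTU, run some statements and check:
select username, obj_name, action_name, timestamp
  from dba_audit_trail
 where username = 'JCANTU'
 order by timestamp;</code>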
Categories: DBA Blogs
