DBA Blogs
Testing a new feature in 23ai that allows you to define a DEFAULT value for a column that is applied when a user/front-end/application sends a NULL:
[oracle@oel9 ~]$ sqlplus hemant/hemant@freepdb1
SQL*Plus: Release 23.0.0.0.0 - Production on Thu May 9 14:54:25 2024
Version 23.4.0.24.05
Copyright (c) 1982, 2024, Oracle. All rights reserved.
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05
SQL> create table my_new_employees(
2 employee_id number(12) primary key,
3 employee_name varchar2(48),
4 department_id number(12)
5 )
6 /
Table created.
SQL>
SQL> insert into my_new_employees
2 values (1,'Hemant',NULL)
3 /
1 row created.
SQL>
SQL> select * from my_new_employees;
EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID
----------- ------------------------------------------------ -------------
1 Hemant
SQL>
SQL> update my_new_employees
2 set department_id=100 -- setting a non-NULL value
3 where employee_id=1
4 /
1 row updated.
SQL> select * from my_new_employees;
EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID
----------- ------------------------------------------------ -------------
1 Hemant 100
SQL>
SQL> alter table my_new_employees
2 modify (department_id default on null for insert and update 512);
Table altered.
SQL> insert into my_new_employees
2 values (2,'Larry'); -- I am not specifying a value for DEPARTMENT_ID
insert into my_new_employees
*
ERROR at line 1:
ORA-00947: not enough values
Help: https://docs.oracle.com/error-help/db/ora-00947/
SQL> insert into my_new_employees
2 values(2,'Larry', NULL); -- I explicitly specify NULL for DEPARTMENT_ID
1 row created.
SQL> select * from my_new_employees order by 1;
EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID
----------- ------------------------------------------------ -------------
1 Hemant 100
2 Larry 512 -- it got set to 512 ON INSERT
SQL>
SQL> update my_new_employees
2 set employee_name = 'Hemant Chitale', department_id=NULL -- I explicitly specify NULL for DEPARMTENT_ID
3 where employee_id=1
4 /
1 row updated.
SQL> select * from my_new_employees order by 1;
EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID
----------- ------------------------------------------------ -------------
1 Hemant Chitale 512 -- it got set to 512 ON UPDATE
2 Larry 512
SQL>
SQL> commit;
Commit complete.
SQL> select * from my_new_employees order by 1;
EMPLOYEE_ID EMPLOYEE_NAME DEPARTMENT_ID
----------- ------------------------------------------------ -------------
1 Hemant Chitale 512
2 Larry 512
SQL>
So, now the column has a proper DEFAULT value when an INSERT or UPDATE statement sends a NULL. This allows the developer to avoid having to write a front-end value check or a database table trigger to convert an incoming NULL to a DEFAULT value.
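Assuming the ALTER TABLE syntax demonstrated above also works inline at CREATE time (an assumption; I have not verified this form), the column could be declared with the default from the start:

```sql
-- Sketch (23ai): declare the ON NULL default for both INSERT and UPDATE
-- directly in the CREATE TABLE, instead of ALTERing it afterwards.
create table my_new_employees(
  employee_id   number(12) primary key,
  employee_name varchar2(48),
  department_id number(12) default on null for insert and update 512
);
```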
I have done a little searching and not found anything that speaks to this, so I thought I would ask the experts:
Recently I had to build an SQL script to be run in SQL*Plus, and this script invoked another with @, passing a derived value as an argument (received as &1 in the second-level script). I wondered whether I was misunderstanding scope, and was hoping you could tell the world for sure.
Test case (no tables or other types involved):
1st-level SQL
<code>
set pages 0 feed off head off verify off trims on
variable roll_qtr1 VARCHAR2(6)
exec :roll_qtr1 := to_char(sysdate,'yyyy')||'0'||to_char(sysdate,'q');
col filename1 new_val filename1
SELECT 'test_file_'||:roll_qtr1||'.dat' filename1 FROM dual;
--spool &filename1
@ get_file_data.sql :roll_qtr1
--spool off
</code>
2nd-level SQL (@ get_file_data.sql from above)
<code>
set pages 0 feed off head off verify on lines 9000 colsep ','
variable parm_qtr varchar2(6)
exec :parm_qtr := '&1';
SELECT :parm_qtr FROM dual;
</code>
Now removing the single quotes off the &1 in the 2nd-level SQL gets the value I expect, whereas the code as it is gives:
BEGIN :parm_qtr := <b>':roll_qtr1'</b>; END;
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 1
<b>What gives?</b> It passed the actual bind variable name instead of the value? Which says to me that running an SQL file with "@" is very much the same as, say, an <i>include</i> command in a C program, or running a KSH file in UNIX using the "." notation for execution - all making whatever happens as if it's all in one process/session. Wouldn't this negate the value of command line parameters within the SQLPlus session? Is there a by-reference vs. by-value thing going on?
I sure would value a chance to learn the right understanding.
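For what it's worth, a workaround consistent with the behaviour described above: @ performs textual substitution, so &1 receives whatever text was typed, and a bind variable name arrives as a literal string. Passing the computed value through a substitution variable (the same COLUMN ... NEW_VALUE technique the first script already uses for filename1) sidesteps the problem. A hedged sketch:

```sql
-- Sketch: capture the computed value into a substitution variable and pass
-- *that* to the child script, so &1 receives the value itself.
col qtr1 new_val qtr1 noprint
select to_char(sysdate,'yyyy')||'0'||to_char(sysdate,'q') qtr1 from dual;
@ get_file_data.sql &qtr1
```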
We have many files available in our Linux mount point and want to load them into a table. The solution I'm seeking is to load all the files (no header, a different set of columns in each file, comma-separated, with newline record terminators).
<b>Sample file data:</b>
files1.csv
1,"Test1"
2,"Test2"
-----
files2.csv
1,123,"Case0"
2,456,"MyName"
-----
files3.csv
1234234,"2024-01-01","foo"
5894234,"2024-02-01","foo3"
I'm looking for a way to load these files into a single table, as given below. Is there a way we can achieve this using SQL*Loader?
<b>Oracle Table: </b>
Create table generic_files(COLUMN_1 VARCHAR2(4000), COLUMN_2 VARCHAR2(4000), COLUMN_3 VARCHAR2(4000), FILE_NAME VARCHAR2(4000), INSERT_DT DATE default SYSDATE)
COLUMN_1 | COLUMN_2   | COLUMN_3 | FILE_NAME  | INSERT_DT (will have sysdate)
1        | Test1      | null     | files1.csv |
2        | Test2      | null     | files1.csv |
1        | 123        | Case0    | files2.csv |
2        | 456        | MyName   | files2.csv |
1234234  | 2024-01-01 | foo      | files3.csv |
5894234  | 2024-02-01 | foo3     | files3.csv |
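SQL*Loader itself cannot discover the input file name, but a small driver script can generate one control file per CSV with the name baked in as a CONSTANT, and TRAILING NULLCOLS lets the two-column files leave COLUMN_3 null. A sketch under assumptions (table name taken from the question; credentials are placeholders; the actual sqlldr call is left commented out):

```shell
#!/bin/sh
# Sketch: generate one SQL*Loader control file per CSV, baking the file
# name into the FILE_NAME column as a CONSTANT.

# sample files from the question, so the loop below has input
printf '1,"Test1"\n2,"Test2"\n' > files1.csv
printf '1,123,"Case0"\n2,456,"MyName"\n' > files2.csv

for f in files*.csv; do
  ctl="${f%.csv}.ctl"
  cat > "$ctl" <<EOF
LOAD DATA
INFILE '$f'
APPEND
INTO TABLE generic_files
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(COLUMN_1, COLUMN_2, COLUMN_3, FILE_NAME CONSTANT '$f')
EOF
  # sqlldr userid=scott/tiger control="$ctl"   # assumed credentials
done
```

INSERT_DT is omitted from the column list so the table's SYSDATE default applies.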
I have been wondering for a long time why a select like "SELECT COUNT(*) INTO l_count FROM foo WHERE c_now >= ts + 1" in the following example issues a PLW-07204 warning but for example "SELECT COUNT(*) INTO l_count FROM foo WHERE ts < c_now - 1" does not.
<code>
CREATE TABLE foo (ts DATE, tx VARCHAR2(30));
INSERT INTO foo VALUES (SYSDATE - 1, 'foo');
INSERT INTO foo VALUES (SYSDATE - 2, 'bar');
COMMIT;
ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:7204';
CREATE OR REPLACE
FUNCTION new_foo RETURN NUMBER
IS
c_now CONSTANT DATE := SYSDATE;
l_count NUMBER;
BEGIN
SELECT COUNT(*) INTO l_count FROM foo WHERE c_now >= ts + 1;
-- SELECT COUNT(*) INTO l_count FROM foo WHERE ts < c_now - 1;
RETURN l_count;
END;
/
SELECT * FROM user_errors WHERE name = 'NEW_FOO';
</code>
Pre-23ai, Database Flashback Logs were always created in the DB_RECOVERY_FILE_DEST location (i.e., the well-known "Fast Recovery Area", also known as the "FRA"). However, these would share space with ArchiveLogs (if you've configured ArchiveLogs to the same location) and RMAN Backups (if you aren't specifically writing RMAN Backups to a different location). 23ai introduces the DB_FLASHBACK_LOG_DEST (and corresponding DB_FLASHBACK_LOG_DEST_SIZE) parameter to allow you to separate your Database Flashback Logs into a dedicated location.

The 23ai New Features documentation has this to say:

"In previous releases, you could store flashback database logs only in the fast recovery area. Now you can optionally designate a separate location for flashback logging. For example, if you have write-intensive database workloads, then flashback database logging can slow down the database if the fast recovery area is not fast enough. In this scenario, you can now choose to write the flashback logs to faster disks. Using a separate destination also eliminates the manual administration to manage the free space in the fast recovery area. Managing flashback database logs outside the fast recovery area lowers the operational costs related to space management and guarantees the best performance for workloads that are typically impacted by flashback logging on traditional storage."
And it provides a link to the documentation on the parameter.
You might think that DB_FLASHBACK_LOG_DEST is now completely independent of DB_RECOVERY_FILE_DEST.
Here is my test run where I configured DB_FLASHBACK_LOG_DEST without configuring DB_RECOVERY_FILE_DEST :
sh-4.4$ cd /opt/oracle
sh-4.4$ mkdir FBL
sh-4.4$ mkdir FRA
sh-4.4$ sqlplus / as sysdba
SQL*Plus: Release 23.0.0.0.0 - Production on Sun May 5 10:26:26 2024
Version 23.4.0.24.05
Copyright (c) 1982, 2024, Oracle. All rights reserved.
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1603726344 bytes
Fixed Size 5360648 bytes
Variable Size 402653184 bytes
Database Buffers 1191182336 bytes
Redo Buffers 4530176 bytes
Database mounted.
SQL> show parameter spfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /opt/oracle/product/23ai/dbhom
eFree/dbs/spfileFREE.ora
SQL>
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> show parameter db_flashback
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_log_dest string
db_flashback_log_dest_size big integer 0
SQL> alter system set db_flashback_log_dest_size=10G;
System altered.
SQL> alter system set db_flashback_log_dest='/opt/oracle/FBL';
System altered.
SQL> create restore point MY_FIRST_RP ;
Restore point created.
SQL> alter system archive log current;
System altered.
SQL>
SQL> create table x as select * from cdb_objects;
Table created.
SQL> insert into x select * from x;
141420 rows created.
SQL> delete x;
282840 rows deleted.
SQL> commit;
Commit complete.
SQL> alter system archive log current;
System altered.
SQL>
SQL> alter database flashback on; -- only here I enable Flashback
Database altered.
==============================================
alert log messages :
2024-05-05T10:38:35.262274+00:00
alter database flashback on
2024-05-05T10:38:35.423698+00:00
Allocating 8388608 bytes in shared pool for flashback generation buffer.
Allocated 8388608 bytes in shared pool for flashback generation buffer
Flashback Database Enabled at SCN 3124894
===============================================
SQL> create restore point MY_FIRST_RP; -- testing if I can create another RP with the same name
create restore point MY_FIRST_RP
*
ERROR at line 1:
ORA-38778: Restore point 'MY_FIRST_RP' already exists.
Help: https://docs.oracle.com/error-help/db/ora-38778/
SQL> drop restore point MY_FIRST_RP;
Restore point dropped.
SQL> create restore point MY_FIRST_RP;
Restore point created.
SQL> drop table x;
Table dropped.
SQL> create table x as select * from cdb_objects;
Table created.
SQL>
SQL> alter system archive log current;
System altered.
SQL> delete x;
141420 rows deleted.
SQL> insert into x select * from cdb_objects;
141421 rows created.
SQL> commit;
Commit complete.
SQL> alter system archive log current;
System altered.
SQL>
SQL> select substr(name,1,32), scn, time from v$restore_point; -- identify the RP that has been created
SUBSTR(NAME,1,32)
--------------------------------------------------------------------------------------------------------------------------------
SCN TIME
---------- ---------------------------------------------------------------------------
MY_FIRST_RP
3124955 05-MAY-24 10.39.30.000000000 AM
SQL> select * from v$flashback_database_log; -- identify the FBDB Logs Size
OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE CON_ID
-------------------- --------- ---------------- -------------- ------------------------ ----------
3124893 05-MAY-24 1440 419430400 0 0
SQL> select * from v$flashback_log_dest; -- identify the FB Log Dest (why isn't the view named V$FLASHBACK_DATABASE_LOG_DEST ?)
NAME
------------------------------------------------------------------------------------------------------------------------------------
SPACE_LIMIT SPACE_USED NUMBER_OF_FILES CON_ID
----------- ---------- --------------- ----------
/opt/oracle/FBL
1.0737E+10 419430400 2 0
SQL>
SQL> !sh
sh-4.4$ cd /opt/oracle/FBL
sh-4.4$ du -sh *
401M FREE
sh-4.4$ cd FREE
sh-4.4$ ls
flashback
sh-4.4$ cd flashback
sh-4.4$ ls -l
total 409620
-rw-r----- 1 oracle oinstall 209723392 May 5 10:41 o1_mf_m3grfc8t_.flb
-rw-r----- 1 oracle oinstall 209723392 May 5 10:38 o1_mf_m3grfg1v_.flb
sh-4.4$
sh-4.4$ cd $ORACLE_HOME/dbs
sh-4.4$ ls -l arch1*
-rw-r----- 1 oracle oinstall 98164736 May 5 10:31 arch1_2_1167168121.dbf
-rw-r----- 1 oracle oinstall 106480640 May 5 10:33 arch1_3_1167168121.dbf
-rw-r----- 1 oracle oinstall 37506048 May 5 10:40 arch1_4_1167168121.dbf
-rw-r----- 1 oracle oinstall 52515840 May 5 10:40 arch1_5_1167168121.dbf
sh-4.4$
sh-4.4$ exit
exit
SQL> select count(*) from x;
COUNT(*)
----------
141421
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1603726344 bytes
Fixed Size 5360648 bytes
Variable Size 419430400 bytes
Database Buffers 1174405120 bytes
Redo Buffers 4530176 bytes
Database mounted.
SQL> flashback database to restore point MY_FIRST_RP; -- try to Flashback the Database
flashback database to restore point MY_FIRST_RP
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database
Help: https://docs.oracle.com/error-help/db/ora-38760/
============================================
alert log messages :
2024-05-05T10:45:28.380285+00:00
Successful mount of redo thread 1, with mount id 1440201864
2024-05-05T10:45:28.380506+00:00
Allocating 8388608 bytes in shared pool for flashback generation buffer.
Allocated 5807328 bytes in shared pool for flashback generation buffer
RVWR could not begin generation of flashback log data because
DB_RECOVERY_FILE_DEST is not set.
2024-05-05T10:45:28.392865+00:00
Errors in file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_rvwr_2343.trc:
ORA-38776: cannot begin flashback generation - recovery area is disabled
2024-05-05T10:45:28.392899+00:00
WARNING: Cannot open the flashback thread for this instance due to the above error.
WARNING: Flashback thread open failed - to resolve this, either correct the reported error or turn off database flashback.
2024-05-05T10:45:28.393060+00:00
Database mounted in Exclusive Mode
Lost write protection mode set to "auto"
Completed: ALTER DATABASE MOUNT
2024-05-05T10:46:04.458087+00:00
flashback database to restore point MY_FIRST_RP
ORA-38760 signalled during: flashback database to restore point MY_FIRST_RP...
2024-05-05T10:50:43.887137+00:00
==============================================
Explanation of the Error :
===========================
38776, 00000, "cannot begin flashback generation - recovery area is disabled"
// *Cause: During a database mount, the RVWR process discovered that the
// recovery area was disabled. DB_RECOVERY_FILE_DEST must have
// been set null or removed from the INIT.ORA file while the database
// was unmounted.
// *Action: Flashback database requires the recovery area to be enabled.
// Either enable the recovery area by setting the
// DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE initialization
// parameters, or turn off flashback database with the
// ALTER DATABASE FLASHBACK OFF command.
So, Oracle:
1. allows me to create a Restore Point
2. generates Flashback Logs
3. confirms that they exist
BUT DOES NOT ALLOW ME TO FLASHBACK THE DATABASE.
DB_RECOVERY_FILE_DEST is still mandatory (even if you have ArchiveLogs being written elsewhere -- my ArchiveLogs were going to the default location under $ORACLE_HOME/dbs).
If you look at the documentation (column "Required" in Table 5-4), it does say that DB_RECOVERY_FILE_DEST is a required parameter -- although it doesn't explicitly say that it is required for the Flashback Logs. Also, my database was happy to generate ArchiveLogs in another, default, location ($ORACLE_HOME/dbs).
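So the practical takeaway seems to be: set both destinations. A hedged sketch (the paths match the directories created earlier in this test; the sizes are illustrative only):

```sql
-- Sketch: DB_RECOVERY_FILE_DEST must still be configured for Flashback Database,
-- even when the flashback logs themselves are redirected to DB_FLASHBACK_LOG_DEST.
alter system set db_recovery_file_dest_size=20G;
alter system set db_recovery_file_dest='/opt/oracle/FRA';
alter system set db_flashback_log_dest_size=10G;
alter system set db_flashback_log_dest='/opt/oracle/FBL';
```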
I was trying to tune a MySQL query this week. I ran the same query against Oracle with the same data and got a much faster runtime on Oracle. I couldn’t get MySQL to do a range scan on the column that Oracle was doing it on. So, I just started barely scratching the surface with a simple test of when MySQL will use an index versus a full table scan in a range query. In my test MySQL always uses an index except on extreme out of range conditions. This is funny because in my real problem query it was the opposite. But I might as well document what I found for what it’s worth. I haven’t blogged much lately.
Here is my testcase and its output:
https://www.bobbydurrettdba.com/uploads/mysqlindexuserangequeries.zip
This is on 8.0.26 as part of an AWS Aurora MySQL RDS instance with 2 cores and 16 gigabytes of RAM.
I created a simple test table and put 10485760 rows in it:
create table test
(a integer NOT NULL AUTO_INCREMENT,
b integer,
PRIMARY KEY (a));
The value of b is always 1 and a ranges from 1 to 10878873.
This query uses a range query using the index:
select
sum(b)
from
test
where
a > -2147483648;
This query uses a full table scan:
select
sum(b)
from
test
where
a > -2147483649;
The full scan is slightly faster.
Somehow when you are 2147483650 units away from the smallest value of a the MySQL optimizer suddenly thinks you need a full scan.
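One plausible explanation (my assumption, not verified against the MySQL source): -2147483648 is the minimum value of a signed 32-bit INT, so -2147483649 cannot be represented in the column's type, and the optimizer may treat the folded constant differently once it falls outside the type's range. EXPLAIN on both queries shows the switch:

```sql
-- The boundary is the signed INT minimum, not a distance from the data.
EXPLAIN SELECT SUM(b) FROM test WHERE a > -2147483648;  -- reported: range scan
EXPLAIN SELECT SUM(b) FROM test WHERE a > -2147483649;  -- reported: full table scan
```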
There are a million more tests I could do, varying any number of things, but I thought I might as well put this out there. I'm not really any the wiser, but it is a type of test that might be worth mentioning.
Bobby
I've just uploaded a new video on monitoring DataGuard where the Standby is a Single Instance environment for a 2-node RAC Cluster.
The scripts used in the demo are in this ZIP (script files with extension .TXT).
Hi Tom,
We run a multi-user OLTP system on an Exadata Quarter Rack (Linux version). Though the system response time is consistent and the system is performing well, we observed run queues with CPU utilization at 70% on both nodes. What could be the reason?
My understanding has always been that run queues form only if system utilization exceeds 100%. But in this case the CPU on both nodes is 65% utilized and 30% is free.
But maybe my understanding is flawed.
Could you please explain the concept of CPU utilization and run queues vis-a-vis CPU count, especially for an OLTP workload?
I am looking for read consistency across multiple cursors in a packaged procedure. In the past I have opened the cursors that I wanted to be consistent at the start of the procedure, used them, and closed them at the end. I am starting to think that the time the first cursor takes to open and resolve its result set is making the subsequent cursor inconsistent, although this seems to have worked 99% of the time.
Example:
DECLARE
CURSOR Cur1 IS
SELECT SUM(X) FROM A WHERE SummaryFlag = 'N';
CURSOR Cur2 IS
SELECT ROWID FROM A WHERE SummaryFlag = 'N';
BEGIN
OPEN Cur1;
OPEN Cur2;
.
FOR Rows IN Cur1 LOOP
UPDATE ASummary ...
END LOOP;
.
FOR Rows IN Cur2 LOOP
UPDATE A SET SummaryFlag = 'Y' WHERE RowId = Rows.ROWID;
END LOOP;
I have had a few occasions where the summary table does not contain the information that has now been flagged as summarized.
Does opening the cursors one right after the other guarantee a consistent result set, and if not why? Will using "ALTER TRANSACTION ISOLATION LEVEL SERIALIZABLE" fix this? How can I set my ISOLATION LEVEL and ROLLBACK segment at the same time?
Thanks in advance.
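One way to make the two cursors mutually consistent, sketched against the example above: take a single transaction-level snapshot before opening either cursor. Note the statement is SET TRANSACTION (not ALTER TRANSACTION), it must be the first statement of the transaction, and as far as I know it accepts only one clause at a time, so the isolation level and a rollback segment cannot be set in the same statement.

```sql
-- Sketch: under SERIALIZABLE, every statement in the transaction sees the
-- same snapshot, so Cur1 and Cur2 agree even though they are opened at
-- different times.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- open Cur1, update ASummary, open Cur2, flag the summarized rows ...
COMMIT;
```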
Got information from your archives, BUT the solution is not provided - or is there no solution?
Archive : "Why the trigger disappears... May 28, 2003
Reviewer: Kamal Kishore from New Jersey, USA "
Hi Tom,
After you re-create the view definition using CREATE OR REPLACE (maybe to change its condition), the trigger on the view disappears. Is this expected behaviour?
SQL> create or replace view emp_view
2 as
3 select * from emp
4
SQL> /
View created.
SQL> create or replace trigger trig_emp_view
2 instead of insert or update on emp_view
3 for each row
4 begin
5 Null ;
6 end ;
7 /
Trigger created.
SQL> show errors
No errors.
SQL> select ut.trigger_name, ut.table_owner, ut.table_name
2 from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
3 /
TRIGGER_NAME TABLE_OWNER TABLE_NAME
------------------------------ ------------------------------
------------------------------
TRIG_EMP_VIEW KKISHORE EMP_VIEW
1 row selected.
SQL> create or replace view emp_view
2 as
3 select * from emp
4 /
View created.
SQL> select ut.trigger_name, ut.table_owner, ut.table_name
2 from user_triggers ut where trigger_name = 'TRIG_EMP_VIEW'
3 /
no rows selected
Followup:
the "or replace" is replacing the view and all related things. the create or
replace preserves grants -- not the triggers. it is a "new view"
====>> So what should I do if I have views with INSTEAD OF triggers that became invalid? What syntax can I use to alter the view without my trigger disappearing?
If "create or replace" cannot be used, what syntax can I use?
It took me 25 years to get used to Oracle using 1-based indexing in pretty much all APIs.
Now the rather new json_array_t data structure uses 0-based indexing, and it drives me crazy.
Is there any reason behind this "strange anomaly", or did someone just want to drive people crazy?
The following example only returns 2 and 3 because it must be written "FOR i IN 0 .. c_json.get_size() - 1 LOOP ":
<code>
DECLARE
c_json CONSTANT json_array_t := json_array_t('[1, 2, 3]');
BEGIN
FOR i IN 1 .. c_json.get_size() LOOP
dbms_output.put_line(c_json.get_number(i));
END LOOP;
END;
/
</code>
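For completeness, the corrected 0-based loop, which prints all three elements:

```sql
DECLARE
  c_json CONSTANT json_array_t := json_array_t('[1, 2, 3]');
BEGIN
  -- json_array_t positions run from 0 to get_size() - 1
  FOR i IN 0 .. c_json.get_size() - 1 LOOP
    dbms_output.put_line(c_json.get_number(i));
  END LOOP;
END;
/
```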
Here is my challenge. I am developing an application that receives webhook notifications when events occur. I have successfully used the RESTful Services functionality in APEX (SQL Workshop > RESTful Services) to retrieve data at the first (root?) level. From the "request" sent from Stripe below, I can use parameters to retrieve the id, object, api_version, created, etc., but fail to retrieve data.object.id or anything nested at a lower level (apologies if I am using the wrong descriptors here).
I have tried two approaches unsuccessfully:
1) a number of ways to identify the field as a parameter in the handler without success
2) retrieve the full json payload using :body, :body_text, :payload, :json_payload, etc.
Any guidance on how I could identify specific fields lower in the hierarchy (example: the data.object.id with value "cus_PfPbVdZHzvJq0E" below) as a parameter? Or, any guidance on how I could grab the full json payload?
Any guidance is appreciated.
Dwain
{
"id": "evt_1Oq4mjJ861pVT3w2L6jYiwce",
"object": "event",
"api_version": "2018-02-28",
"created": 1709432897,
"data": {
"object": {
"id": "cus_PfPbVdZHzvJq0E",
"object": "customer",
"account_balance": 0,
"address": null,
"balance": 0,
"created": 1709432896,
"currency": null,
"default_currency": null,
"default_source": null,
"delinquent": false,
"description": null,
"discount": null,
"email": "mike@dc.com",
"invoice_prefix": "2420987A",
"invoice_settings": {
"custom_fields": null,
"default_payment_method": null,
"footer": null,
"rendering_options": null
},
"livemode": false,
"metadata": {
},
"name": "mike",
"next_invoice_sequence": 1,
"phone": null,
"preferred_locales": [
],
"shipping": null,
"sources": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_PfPbVdZHzvJq0E/sources"
},
"subscriptions": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_PfPbVdZHzvJq0E/subscriptions"
},
"tax_exempt": "none",
"tax_ids": {
"object": "list",
"data": [
],
"has_more": false,
"total_count": 0,
"url": "/v1/customers/cus_PfPbVdZHzvJq0E/tax_ids"
},
"tax_info": null,
"tax_info_verification": null,
"test_clock": null
}
},
"livemode": false,
"pending_webhooks": 1,
"request": {
"id": "req_KtKtxAnXwioenZ",
"idempotency_key": "7263ed4a-0295-4a4e-a0b8-d7d3bf7f03b3"
},
"type": "customer.created"
}
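A sketch of one approach for the ORDS POST handler (the staging table is an assumption; my understanding is that a handler can read the raw payload as a CLOB through the :body_text bind, and that :body and :body_text should not both be read in the same handler). Nested attributes can then be pulled out with JSON path expressions:

```sql
-- Sketch for the POST handler source: grab the whole payload once, then use
-- JSON_VALUE with a path to reach nested fields such as data.object.id.
DECLARE
  l_payload CLOB := :body_text;
  l_cust_id VARCHAR2(100);
BEGIN
  l_cust_id := JSON_VALUE(l_payload, '$.data.object.id');
  INSERT INTO webhook_events (payload, cust_id)   -- assumed staging table
  VALUES (l_payload, l_cust_id);
END;
```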
I have a table like below.
<code>create table t2 ( id varchar2(1),val number) ;
insert into t2 values ('a',1);
insert into t2 values ('a',2);
insert into t2 values ('a',3);
insert into t2 values ('a',4);
insert into t2 values ('b',1);
insert into t2 values ('b',2);
insert into t2 values ('b',3);
insert into t2 values ('c',1);
insert into t2 values ('c',2);
insert into t2 values ('c',4);
insert into t2 values ('d',1);
insert into t2 values ('d',2);</code>
We have to print output like below.
<code>id x
--- -------
a 1,2,3,4
b 1,2,3
c 1,2,4
d 1,2</code>
This can be achieved by the query below:
<code>select id,LISTAGG(val, ',') WITHIN GROUP (ORDER BY val ) as x
from t2
group by id</code>
Here the x column is of character datatype. But I need to convert it to a varray of number / nested table of number (not a varray/nested table of character).
I tried like below:
<code>CREATE TYPE varchar_TT AS TABLE OF varchar(10);
with z as (
select id,varchar_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x ,varchar_TT('1,2') y
from t2
group by id )
select id , x ,y from z ;
o/p
----
id x y
---- ------------- ---------------
a C##SIVA.<b>VARCHAR_TT('1,2,3,4')</b>C##SIVA.VARCHAR_TT('1,2')
b C##SIVA.<b>VARCHAR_TT('1,2,3') </b>C##SIVA.VARCHAR_TT('1,2')
c C##SIVA.<b>VARCHAR_TT('1,2,4') </b>C##SIVA.VARCHAR_TT('1,2')
d C##SIVA.<b>VARCHAR_TT('1,2') </b>C##SIVA.VARCHAR_TT('1,2')</code>
If I add the condition below, I do not get any results:
<b>where y member of x ;</b>
So I tried to convert to a number array:
<code>CREATE TYPE number_TT AS TABLE OF number;
with z as (
select id,number_TT(LISTAGG(val, ',') WITHIN GROUP (ORDER BY val)) as x ,number_TT(1,2) y
from t2
group by id )
select id , x ,y from z ;
ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.</code>
1) Here I need output like below, to use the <b><u>member</u></b> and <b><u>submultiset</u></b> conditions.
<code>o/p
----
id x y
---- ------------- ---------------
a C##SIVA.<b>NUMBER_TT(1,2,3,4) </b>C##SIVA.NUMBER_TT(1,2)
b C##SIVA.<b>NUMBER_TT(1,2,3) </b>C##SIVA.NUMBER_TT(1,2)
c C##SIVA.<b>NUMBER_TT(1,2,4) </b>C##SIVA.NUMBER_TT(1,2)
d C##SIVA.<b>NUMBER_TT(1,2) </b>C##SIVA.NUMBER_TT(1,2)
select varchar_tt('1,2') x ,number_TT(1,2) y from dual;
x y
-------------------- ----------------
C##SIVA.VARCHAR_TT('1,2') C##SIVA.NUMBER_TT(1,2)</code>
Please let me know how to convert a character array to a number array.
2)
<code>create table t4 ( id VARCHAR2(1) , val number_tt )
NESTED TABLE val STORE AS val_2 ;</code>
How do I insert into the t4 table from t2?
The expected output of a query on the t4 table should be like...
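For both questions, one approach that avoids parsing the LISTAGG string altogether (a sketch, assuming the NUMBER_TT type created above): aggregate the numbers directly into the collection with CAST(COLLECT(...)), which makes MEMBER OF and SUBMULTISET usable and can also feed the INSERT into t4.

```sql
-- Sketch: build NUMBER_TT directly, no string round-trip.
select id,
       cast(collect(val order by val) as number_tt) as x,
       number_tt(1,2) as y
from   t2
group  by id;

-- The same aggregate populates the nested-table column of t4:
insert into t4 (id, val)
select id, cast(collect(val order by val) as number_tt)
from   t2
group  by id;
```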
Tom,
I create databases, then I run catalog.sql and catproc.sql. Sometimes I do not run pupbld.sql. Users may get a warning message, but they can log in and work.
But my friend says that if pupbld.sql is not run as SYSTEM, then users will get error messages and cannot log into the database at all. Is that true?
Is it a must to run pupbld.sql? I could not see in the documentation whether it is a must. If it is a must, how am I able to log in?
Is it being called by any other script, like catalog.sql or catproc.sql? I grepped both files for pupbld.sql. It does not appear.
Please clarify.
Regards
Ravi
According to MOS (Metalink) note 785347.1 it seems possible to have a Data Guard configuration with an 11.2 primary and a 12.2 (or even later) standby, but the note is really very condensed.
Could you please just confirm that 11.2 -> 12.2 is really possible?
If so, what about 11.2 -> 19.x ?
Or 12.2 -> 19.x ?
Of course the idea is to upgrade to a later version with a very short downtime; after switching over to the newer version, the old one would be discarded and the Data Guard configuration no longer used.
Best regards
Mauro
How do I call a REST API that accepts x-www-form-urlencoded content from a PL/SQL procedure in APEX?
I am calling
https://api.textlocal.in/docs/sendsms
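A hedged sketch using APEX_WEB_SERVICE (the endpoint URL and the apikey/numbers/message parameter names are my reading of the textlocal documentation page above; the credential values are placeholders): set the Content-Type header and send the form-encoded parameters as the request body.

```sql
-- Sketch: POST x-www-form-urlencoded data from PL/SQL in APEX.
DECLARE
  l_response CLOB;
BEGIN
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://api.textlocal.in/send/',
                  p_http_method => 'POST',
                  p_body        => 'apikey=YOUR_KEY'
                               || '&' || 'numbers=911234567890'
                               || '&' || 'message=' || apex_util.url_encode('Test'));
  dbms_output.put_line(l_response);
END;
```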
When I try to run this code:
DECLARE
STUDENT_ID NUMBER;
BEGIN
-- Generate the next value for the sequence
SELECT LMS_STUDENT_DETAILS_SEQ.nextval;
-- Insert data into LMS_STUDENT_DETAILS table
INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);
-- Insert data into LMS_BORROWER table
INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
VALUES (LMS_BORROWER_SEQ.nextval, STUDENT_ID, 'STUDENT');
END;
I faced this error:
ORA-06550: line 1, column 106: PLS-00103: Encountered the symbol "DECLARE" when expecting one of the following: ( - + case mod new not null <an identifier> <a double-quoted delimited-identifier> <a bind variable> continue avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string literal with character set specification> <a number> <a single-quoted SQL string> pipe <an alternatively-quoted string literal with character set
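The immediate problem is the bare `SELECT LMS_STUDENT_DETAILS_SEQ.nextval;` statement: in PL/SQL a SELECT needs an INTO clause (and from 11g on, a sequence can simply be assigned in an expression). A sketch of the corrected block:

```sql
DECLARE
  STUDENT_ID NUMBER;
BEGIN
  -- 11g+: assign the sequence directly
  -- (pre-11g: SELECT LMS_STUDENT_DETAILS_SEQ.NEXTVAL INTO STUDENT_ID FROM dual;)
  STUDENT_ID := LMS_STUDENT_DETAILS_SEQ.NEXTVAL;

  INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
  VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);

  INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
  VALUES (LMS_BORROWER_SEQ.NEXTVAL, STUDENT_ID, 'STUDENT');
END;
```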
I have a user-defined database package which is used quite heavily. When I need to update the package body, I get several errors like:
<code>ORA-04061: existing state of package body "CS.PACKAGE" has been invalidated
ORA-04065: not executed, altered or dropped package body "CS.PACKAGE"
ORA-06508: PL/SQL: could not find program unit being called: "CS.PACKAGE"
ORA-06512: at "CS.PROCEDURE", line 228</code>
We are using a connection pool. How do I put the changes into PACKAGE, without getting several of the above errors?
I cannot control the use of the package, and it is very heavily used.