Feed aggregator
hello guys,
I'm trying to create a hierarchy for an analytic view using a parent-child table, but I'm struggling.
The steps:
1. Create an attribute dimension using the table --> (id, name, parent_id)
(I succeeded, but I don't know the correct way to handle the parent-child structure.)
2. Create a hierarchy using the attribute dimension from the previous step.
(I succeeded in creating a regular, level-based hierarchy, but not one based on parent-child.)
I don't know how to do it; I tried 3 chatbots and the Oracle docs with no answer.
I'm doing this in Toad 19 against Oracle 19c.
Hope someone can help me.
thanks!
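One approach sometimes used for this (not necessarily the only one): analytic-view hierarchies in 19c are level-based, so the parent-child table can first be flattened into fixed level columns with CONNECT BY, and the attribute dimension and hierarchy are then built on the flattened view. A minimal sketch, assuming a maximum depth of 3; the table name my_parent_child_table is a placeholder:
<code>
-- Hedged sketch: flatten a parent-child table (id, name, parent_id) into
-- fixed level columns; my_parent_child_table and the depth of 3 are assumptions.
create or replace view dim_flat as
select id,
       name,
       level                                                        as depth,
       regexp_substr(sys_connect_by_path(name, '/'), '[^/]+', 1, 1) as level1_name,
       regexp_substr(sys_connect_by_path(name, '/'), '[^/]+', 1, 2) as level2_name,
       regexp_substr(sys_connect_by_path(name, '/'), '[^/]+', 1, 3) as level3_name
from   my_parent_child_table
start  with parent_id is null
connect by prior id = parent_id;
</code>
The attribute dimension can then declare LEVEL1/LEVEL2/LEVEL3 as levels, and CREATE HIERARCHY works over them as for any level-based dimension.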
Ask Tom:
I would like to migrate a PL/SQL Web Toolkit application from an on-prem database instance to the Oracle Autonomous Database. I see references to the PL/SQL Gateway via ORDS (I think) in documentation and administration screens but I'm struggling to connect the dots. Most documentation and articles I find focus on the ORDS APIs but not a simple HTTP to PL/SQL connection.
I have created a User, CAAT_CODE, and can create a procedure through the web-based SQL Developer. I want to execute a procedure something like
<code>
create or replace procedure gateway_test
as
begin
htp.print( '<html><body>Hello, World.</body></html>' );
end;
/
</code>
with a URL that looks like
https://g3958a9838612f9-caat.adb.us-chicago-1.oraclecloudapps.com/ords/caat_code/gateway_test
So far, all I get is an HTTP 404 Not Found error.
Please suggest documentation or a set of steps to configure the gateway in the Oracle Autonomous Database environment or let me know that what I'm trying to do is not supported.
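For reference, a 404 at /ords/&lt;schema&gt;/... often just means the schema has not been REST-enabled yet. A minimal sketch of that step follows (the mapping pattern caat_code comes from the URL above; everything else is an assumption, and additional PL/SQL gateway configuration may still be required on Autonomous):
<code>
-- Hedged sketch: REST-enable the CAAT_CODE schema so /ords/caat_code/... can resolve.
-- Run while connected as CAAT_CODE (or use ORDS_ADMIN.ENABLE_SCHEMA from ADMIN).
begin
  ords.enable_schema(
    p_enabled             => true,
    p_schema              => 'CAAT_CODE',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'caat_code',
    p_auto_rest_auth      => false );
  commit;
end;
/
</code>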
Thanks for many years of Ask Tom. I have found it to be valuable in many Oracle development cases.
John Jeunnette
johnjeunnette@prairiesystemsgroup.com
Hi everyone,
We're developing an APEX app that handles Excel files, and we've run into a problem when trying to export the app from one Workspace and import it into another.
<b>The main issue:
When we export the app and attempt to import it into a different Workspace, the import fails with various errors, and ultimately the app does not load.</b>
Has anyone experienced similar issues or have any advice on how to resolve this? Being able to seamlessly transfer the app across Workspaces would greatly improve our workflow.
Additional details:
We export the entire application using default export options.
During import, we select "Import as Application" in the Import Wizard.
After some loading time, an error appears, although the exact error message varies. Examples include:
- <b>Internal Server Error - Write</b>
The server encountered an internal error or misconfiguration and was unable to complete your request.
Reference #4.73a2f17.1748513046.f1d44d4
https://errors.edgesuite.net/4.73a2f17.1748513046.f1d44d4
- <b>Execution of the statement was unsuccessful. ORA-02091: transaction rolled back</b>
<code>
begin wwv_flow_imp.import_end(p_auto_install_sup_obj => nvl(wwv_flow_application_install.get_auto_install_sup_obj, false));
commit;
end;
</code>
We encounter the same problem even when exporting and importing a single page into a blank app. Clicking "Install Page" triggers the same ORA-02091 error as above.
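(For what it's worth, one way sometimes used to narrow down whether the export file or the target instance is at fault is to export and import with SQLcl instead of the Builder wizard. A hedged sketch; the application id 100 and the workspace name TARGET_WS are placeholders:)
<code>
-- In SQLcl, connected to the source workspace's parsing schema:
--   apex export -applicationid 100        -- writes f100.sql

-- In SQLcl, connected to the target workspace's parsing schema:
begin
  apex_application_install.set_workspace('TARGET_WS');   -- placeholder workspace name
  apex_application_install.generate_application_id;      -- avoid id clashes in the target
  apex_application_install.generate_offset;
end;
/
@f100.sql
</code>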
Any guidance, tips, or workarounds would be highly appreciated. Thanks so much for your help!
The blood of the martyrs is the seed of the Church.
I sat upon the shore Fishing, with the arid plain behind me
Shall I at least set my lands in order?
London Bridge is falling down falling down falling down
Poi s’ascose nel foco che gli affina
Quando fiam uti chelidon—O swallow swallow
Le Prince d’Aquitaine à la tour abolie
These fragments I have shored against my ruins
I thought I could take what Chris showed me in my original question (https://asktom.oracle.com/ords/asktom.search?tag=looking-for-help-with-json-object).
However, after 2 days, I have not made any progress.
The JSON data -
{
"recall_control":
[
{
"item_id":"item1",
"item_cd1":"CA",
"item_cd2":"AP",
"item_cd3":"CO",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL3", "VAL5"]
},
{
"item_id":"item2",
"item_cd1":"CA",
"item_cd2":"AP",
"item_cd3":"EX",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2"]
},
{
"item_id":"item3",
"item_cd1":"CA",
"item_cd2":"TW",
"item_cd3":"CO",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2", "VAL3"]
},
{
"item_id":"item4",
"item_cd1":"CA",
"item_cd2":"TW",
"item_cd3":"EX",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2", "VAL4"]
}
]
}
With Chris's help - I can get the item_id values
<b><code>with jdata as (
select treat ( '{
"recall_control": [
{
"item_id":"item1",
"item_cd1":"CA",
"item_cd2":"AP",
"item_cd3":"CO",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL3", "VAL5"]
},
{
"item_id":"item2",
"item_cd1":"CA",
"item_cd2":"AP",
"item_cd3":"EX",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2"]
},
{
"item_id":"item3",
"item_cd1":"CA",
"item_cd2":"TW",
"item_cd3":"CO",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2", "VAL3"]
},
{
"item_id":"item4",
"item_cd1":"CA",
"item_cd2":"TW",
"item_cd3":"EX",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2", "VAL4"]
}
]
}' as json ) j from dual
)
select json_query ( j,
'$.recall_control[*] ? (
exists ( @.recall_list[*] ? ( @ == "VAL3" ) )
).item_id' with wrapper ) items
from jdata;</code>
ITEMS
-----------------
["item1","item3"]</b>
I took what he gave me and, over the last 2 days, I have tried to figure out the second half of my issue: how to ADD/REMOVE items in the RECALL_LIST array based on the values obtained in the original query.
I thought I could use the JSON_VALUE method to find where ITEM_ID = "item1" and create an update query to remove VAL3 from the array or add VAL6 to it. However, the query reports no rows updated.
I tried to use Chris's search advice - <b>"You can do this by nesting search expressions, first to locate the recall_control. Then using the exists method to see search the recall_list array. e.g.:"</b>
That didn't work.
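For comparison, here is a minimal PL/SQL sketch of the add/remove step using the JSON_OBJECT_T / JSON_ARRAY_T types (the document is shown inline and trimmed to one item for brevity; in practice the CLOB would be read from the table, modified, and written back with an UPDATE):
<code>
-- Hedged sketch: add/remove values in recall_list for a given item_id.
declare
  l_doc   clob := '{"recall_control":[{"item_id":"item1","recall_list":["VAL1","VAL3","VAL5"]}]}';
  l_obj   json_object_t;
  l_items json_array_t;
  l_item  json_object_t;
  l_list  json_array_t;
begin
  l_obj   := json_object_t.parse(l_doc);
  l_items := l_obj.get_array('recall_control');

  for i in 0 .. l_items.get_size - 1 loop
    l_item := treat(l_items.get(i) as json_object_t);

    if l_item.get_string('item_id') = 'item1' then
      l_list := l_item.get_array('recall_list');

      -- remove VAL3 (scan backwards so positions stay valid after a removal)
      for j in reverse 0 .. l_list.get_size - 1 loop
        if l_list.get_string(j) = 'VAL3' then
          l_list.remove(j);
        end if;
      end loop;

      -- add VAL6
      l_list.append('VAL6');
    end if;
  end loop;

  dbms_output.put_line(l_obj.to_string);
end;
/
</code>
On newer releases, JSON_TRANSFORM can append to or remove from the array directly inside the UPDATE statement instead.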
What a...
Hello,
I am executing a package on multiple threads. Our package is for deleting data in bulk.
Whenever I run the package on multiple threads, I get a deadlock.
On checking the trace file, I found the delete statement that is causing the deadlock, but the deadlock graph shows:
Rows waited on:
Session 38: no row
Session 70: no row
So I do not understand the root cause of this deadlock.
There are indexes on all the foreign key columns of the table being deleted from.
The deletion is based on the primary key as well.
Can you please help me figure out what the probable issue could be?
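Since the deadlock graph shows no rows waited on, which is more typical of table-level (TM) lock deadlocks than of row-level ones, a simplified check for unindexed foreign keys may still be worth running (a sketch only; a thorough check would also verify column order and leading-edge coverage):
<code>
-- Hedged sketch: list foreign key constraints that have a column with no
-- matching index column in the same position (simplified check).
select c.owner, c.table_name, c.constraint_name
from   dba_constraints c
where  c.constraint_type = 'R'
and    exists (
         select 1
         from   dba_cons_columns cc
         where  cc.owner           = c.owner
         and    cc.constraint_name = c.constraint_name
         and    not exists (
                  select 1
                  from   dba_ind_columns ic
                  where  ic.table_owner     = cc.owner
                  and    ic.table_name      = cc.table_name
                  and    ic.column_name     = cc.column_name
                  and    ic.column_position = cc.position ) );
</code>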
Discovery of the day on Oracle 19 version EE 19.21.0.0.0:
The JSON_OBJECT_T.GET_CLOB(key) member function returns a CLOB containing the string 'null' (lowercase), not a NULL value, when the JSON value is null. As you can see below, the 'null' string is matched by NULLIF, which replaces it with a real NULL value:
set serveroutput on
declare
l_procedure clob;
j_job_desc json_object_t;
begin
j_job_desc := json_object_t.parse('{"procedure":null}');
l_procedure := nullif(j_job_desc.get_clob('procedure'),'null');
if l_procedure is null then
dbms_output.put_line('nothing');
else
dbms_output.put_line(l_procedure);
end if;
end;
/
nothing
PL/SQL procedure successfully completed.
The documentation doesn't explain this behavior at all; it just says that the member function will create a CLOB implicitly, and that's all.
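An alternative sketch, testing the JSON value explicitly with JSON_ELEMENT_T.IS_NULL instead of comparing the returned CLOB against the string 'null':
<code>
-- Hedged sketch: check for JSON null (and for a missing key) before calling GET_CLOB.
declare
  j_job_desc  json_object_t;
  l_elem      json_element_t;
  l_procedure clob;
begin
  j_job_desc := json_object_t.parse('{"procedure":null}');
  l_elem     := j_job_desc.get('procedure');

  if l_elem is null or l_elem.is_null then
    dbms_output.put_line('nothing');
  else
    l_procedure := j_job_desc.get_clob('procedure');
    dbms_output.put_line(l_procedure);
  end if;
end;
/
</code>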
INTRODUCTION
S3 is one of the most durable and highly available storage services in AWS, and a trusted choice for storing backups over the long term.
From a Windows Server machine, in AWS EC2 or elsewhere, here is an easy and reliable way to back up SQL Server to S3:
1- Concept
For data that must be retained over a long period of time (finance, insurance, medical), less costly archiving can be achieved as follows:
- By optimizing the storage class
- By reducing the volume of backups with a retention strategy
a- Backup Retention
For a customer project, we chose to implement the following strategy:
- All backups with 30-day retention (Standard storage class)
- One Full backup per month with 12-month retention (Glacier storage class)
- One Full backup per year with 15-year retention (Deep Archive storage class)
To do this, we decided to create, in a Bucket, the following 3 Prefixes:
- Daily: contains all backups (.bak, .trn…) from the last 30 days
- Monthly: contains the first FULL backup of the month in the last 12 months
- Yearly: contains the first FULL backup of the year in the last 15 years
b- S3 Lifecycle Policy
To implement the retention, we set up a lifecycle rule on each of the 3 prefixes. These rules include the following parameters:
- A Transition Action: the object is converted to the default storage class for the prefix, based on the backup retention decided above. Of course, it is more efficient to create objects in the right storage class in the first place
- An Expiration Action: the object is deleted after a defined period
c- Archiving backups
While lifecycle management exists natively in S3 with Lifecycle Policies, there is no native mechanism that moves S3 objects between prefixes. Generating an extra backup once a month and/or once a year from SQL Server to S3 is not efficient, since the backups are already present in the Daily prefix. Instead, you need to set up a custom script that archives backups from Daily to Monthly and Yearly.
This script can be scheduled, for example, with AWS Lambda or another scheduling tool. In our context, we chose SQL Server Agent to schedule the archiving after the backup step of a job, for the following reasons:
- Archiving runs as soon as a new (eligible) backup is generated
- The script is easy to access and modify, if necessary, from the database server. Of course, the script is server- and instance-agnostic… but you may want to temporarily modify it on one server as a workaround
2- Implementation
a- Accessing S3
First, an IAM role must be attached to the EC2 instance to authorize access to the S3 Bucket where backups are stored. Keep in mind that this access is granted at the machine level, which is why I recommend restricting it to read/write only. This role is accessible, from the OS, through a metadata token as follows (an IMDSv2 requirement):
[string]$token = Invoke-RestMethod -Headers @{"X-aws-ec2-metadata-token-ttl-seconds" = "21600"} -Method PUT -Uri http://169.254.169.254/latest/api/token
# This sends a PUT request to get a token valid for 6 hours
Once the token is retrieved, you can securely access the role assigned to the EC2 instance:
$roleName = Invoke-RestMethod -Uri "http://169.254.169.254/latest/meta-data/iam/security-credentials/" -Headers @{"X-aws-ec2-metadata-token"=$token}
From there, you can get temporary AWS security credentials for this role:
$credentials = Invoke-RestMethod -Uri "http://169.254.169.254/latest/meta-data/iam/security-credentials/$roleName" -Headers @{"X-aws-ec2-metadata-token"=$token}
Then, you can set the credentials using the AWS Tools for PowerShell module:
Set-AWSCredential -AccessKey $credentials.AccessKeyId -SecretKey $credentials.SecretAccessKey -SessionToken $credentials.Token
# Credentials set for the current PowerShell session
b- Retrieving backups to archive
Using the dbatools PowerShell module, we retrieve the first backup after a reference date. Here is an example where a Storage Gateway with File Gateway is used as the destination:
# $RefDate = New-Object DateTime( $Today.Year, $Today.Month, 1 ); # First backup of the month
# $RefDate = New-Object DateTime( $Today.Year, 1, 1 ); # First backup of the year
$BackupHistory = Get-DbaDBBackupHistory -SqlInstance $ConnectionString -Since $RefDate -Type Full -DeviceType Disk | Group-Object Database;
foreach ($Database in $BackupHistory){
$Backup = ($Database.Group | Sort-Object End)[0];
$Files = $Backup.Path;
foreach ($File in $Files){
# Process your copy
}
}
c- Archiving backups
To process the copy, we must construct the source key and the destination key in S3 based on the information retrieved from msdb. This mainly depends on the prefix structure in your bucket, which must remain the same over time. Here is an example where the structure is RetentionPeriod\InstanceFolder\DatabaseName\FileName:
# Process copy for each file
$Leaf = Split-Path -Path $File -Leaf;
# Construct Source Key in S3
$SourceKey = "$($RefSource)/$($InstanceFolder)/$($Backup.Database)/$($Leaf)";
# Construct Destination Key in S3
$DestinationKey = "$($RefDestination)/$($InstanceFolder)/$($Backup.Database)/$($Leaf)";
# Copy the backup to the new Prefix
Copy-S3Object -BucketName $bucketName -Key $SourceKey -DestinationKey $DestinationKey -StorageClass $StorageClass -Region $Region;
CONCLUSION
In this blog, we demystified all the concepts and steps required to implement a backup strategy in AWS EC2 for SQL Server. For other RDBMS running in EC2, the approach and code used for archiving would be very similar.
If you restrict access to a Storage Gateway (File Gateway) to READ/WRITE permissions and you enable COMPRESSION in SQL Server, then I recommend reading this blog I wrote: Random permission denied when backing up SQL Server databases to AWS Storage Gateway
The article Guidance for a SQL Server backup strategy in AWS EC2 first appeared on dbi Blog.
I am using the following:
IIS Server as the app server.
ODP.Net
Oracle 10.2.0
After 30 minutes of idle time I get the error ORA-03135: connection lost contact.
The error is the same even if I use SQL*Plus.
I asked the network group to increase the firewall timeout to 4 hours, which they did for port 1521.
I am not sure whether timeout values can be set for the return ports.
Since it's a web service on IIS, the w3wp.exe worker process opens an active connection, and we have default settings for connection pooling, so one connection is maintained.
After 30 minutes idle, on the next call I get the error. I tried the same with SQL*Plus and I get the same error.
Is there a way or work around for the same?
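(One commonly suggested server-side mitigation, sketched below as an assumption about the environment rather than a confirmed fix, is dead connection detection via sqlnet.ora, so that idle connections keep generating some traffic through the firewall:)
<code>
# sqlnet.ora on the database server -- hedged sketch, not a confirmed fix:
# probe idle sessions every 10 minutes (dead connection detection),
# which also keeps the connection active in the firewall's state table.
SQLNET.EXPIRE_TIME = 10
</code>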
Appreciate your help.
Dear Tom,
I have a question related to Oracle EBR.
We are using Oracle 19c (Enterprise Edition) and are now moving to become EBR compliant.
Currently we have DML triggers defined on tables.
These tables have foreign key constraints defined with the ON DELETE CASCADE option.
For instance, ORDER_LINE_TABLE has a foreign key referencing ORDER_TABLE with ON DELETE CASCADE (so deleting a record in ORDER_TABLE deletes its child records).
As per the Oracle guidelines, the recommended approach for creating triggers in general seems to be to define them on editioning views (EVs) instead of on the real tables, for many good reasons.
But in this case it is not possible to create the triggers on EVs, because the cascade operation initiated by the FK constraint is performed at the real-table level and is therefore not visible to triggers on the EV.
This limitation has been mentioned by Oren Nakdimon in the document below:
https://db-oriented.com/2025/01/08/ebr-part-13-the-trouble-with-foreign-keys-with-on-delete-clause-and-related-triggers/
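For concreteness, a minimal sketch of the setup described above (table names from the question; the trigger body is a placeholder):
<code>
-- Hedged sketch of the scenario: a child table with ON DELETE CASCADE and a
-- DML trigger created on the real table rather than on the editioning view,
-- because the cascading delete bypasses triggers defined on the EV.
create table order_table (
  order_id  number primary key
);

create table order_line_table (
  line_id   number primary key,
  order_id  number not null
            references order_table (order_id) on delete cascade
);

create or replace trigger order_line_ad_trg
after delete on order_line_table
for each row
begin
  null;  -- placeholder for the real trigger logic
end;
/
</code>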
As a workaround, we have been testing defining the DML triggers on the real tables in scenarios like this. But we see some challenges when they are present, especially while using cross-edition (XE) triggers during upgrades. Since XE triggers operate at the real-table level, the DML triggers would see those DML operations happening on the real tables; hence it is hard for us to achieve isolation during the upgrade period.
Even though the above document says so, I doubt that the combination of FKs with ON DELETE CASCADE and DML triggers is rare.
Do you have a better approach for defining DML triggers in such scenarios, instead of the approach I mentioned above?
Thank you & Kind Regards,
Navinth
Hello,
We have a migration project from Oracle Database 11g on an Exadata server to 19c on an ODA server. When we tested queries on the new server with client tools such as SQL Developer, SQL*Plus and SQLcl, the connection was lost after 1 hour of inactivity. The only difference between Exadata and ODA is in the network architecture: the traceroute to Exadata goes through switch A, but for ODA we go first through a firewall and then through switch B to reach the ODA. We increased the firewall timeout for port 1521 to 24h, but we have the same issue. When we connect to the ODA directly, there is no timeout.
I added this line, SQLNET.EXPIRE_TIME = 10, to my sqlnet.ora file, and it seems to work perfectly. My question: is this the right solution, or is there a more efficient one? If it is the only solution, do I need other configuration for connections that don't use sqlnet.ora, such as SSIS (which uses ODBC) and other clients that use JDBC?
Thanks in advance
We use OCI GoldenGate Microservice to replicate tables from Oracle Database to BigData.
I was tasked with creating a process to replicate any "ALTER TABLE ADD COLUMN" statements so the BigData team can modify their schema safely.
A tested solution using a database trigger in a dedicated schema in the test environment was rejected because:
1. Triggers are evil
2. Database patching can have unpredictable consequences when a database trigger is present. I wonder if that is true.
I used a similar trigger to log my own DDL for journaling for 5 years, in a database that went through multiple patchings without any issues.
Still, my experience is not statistically valid proof.
My colleague suggested using the AUD$UNIFIED table instead, but that involves creating a procedure executed by a frequently running scheduled job, because releases are not 100% regular.
Is there a better approach?
With regards,
Alex
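For reference, a rough sketch of the audit-polling alternative mentioned above, assuming unified auditing captures the ALTER TABLE statements (column names as in UNIFIED_AUDIT_TRAIL; the 15-minute window is an arbitrary placeholder):
<code>
-- Hedged sketch: a scheduled job could poll the unified audit trail for
-- recent ALTER TABLE ... ADD statements instead of using a DDL trigger.
select event_timestamp,
       dbusername,
       object_schema,
       object_name,
       sql_text
from   unified_audit_trail
where  action_name = 'ALTER TABLE'
and    upper(sql_text) like '%ADD%'
and    event_timestamp > systimestamp - interval '15' minute
order  by event_timestamp;
</code>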
Here is the setup:
prompt create table utility.ddl_log
--drop table utility.ddl_log purge;
create table utility.ddl_log (
ddl_time timestamp(6) default systimestamp not null
, ddl_by varchar2(30) default sys_context('userenv', 'current_user')
, os_user varchar2(100) default sys_context('userenv','os_user')
, host varchar2(100) default sys_context('userenv','host')
, ip_address varchar2(100) default sys_context('userenv','ip_address')
, module varchar2(100) default sys_context('userenv','module')
, terminal varchar2(100) default sys_context('userenv','terminal')
, operation varchar2(100)
, owner varchar2(50)
, object_name varchar2(50)
, object_type varchar2(50)
, sqltext clob
, prevsqltext clob -- referenced by the insert statement in the trigger below
);
create or replace trigger utility.after_gg_table_ddl
after alter on database
declare
l_2run binary_integer := 0;
l_sql_id varchar2(16);
l_sql clob;
begin
--p('ora_dict_obj_name = '||ora_dict_obj_name);
select count(*)
into l_2run
from dba_users u
where 1 = 1
and profile = 'APP_USER' -- only application schemas to exclude any other schemas
and ora_dict_obj_type = 'TABLE'
and exists (
select 1
from all_log_groups a
where a.owner = u.username
and a.table_name = ora_dict_obj_name
and log_group_type is not null
)
and ora_dict_obj_owner = u.username
;
--p('l_2run = '||to_char(l_2run)||'; sid = '|| sys_context('userenv','sid'));
if l_2run > 0 then
select sql_id
into l_sql_id
from gv$session
where sid = sys_context('userenv','sid')
and rownum = 1
;
select lower(sql_fulltext)
into l_sql
from gv$sql
where sql_id = l_sql_id
and rownum = 1; --to be on the safe side as many sessions and children can have the same sql_id
-- This part is not required
insert into utility.ddl_log (operation, owner, object_name, object_type, sqltext, prevsqltext)
values (ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name, ora...