DBA Blogs

NUMBER data type declarations with different lengths and the performance impact

Tom Kyte - 2 hours 54 min ago
1. I have a few NUMBER columns declared variously as NUMBER, NUMBER(5), INTEGER, and NUMERIC(10). In some cases the data is at most 2 digits, yet the column is declared as NUMBER(38), NUMBER, NUMERIC(30), or INTEGER. If I leave the declaration wide like that instead of using NUMBER(2), will there be any performance impact on a table with millions of records when the column is updated or used in a WHERE clause?
2. I have a VARCHAR2 column holding a single character (Y/N). If I declare it as VARCHAR2(1 CHAR) instead of VARCHAR2(1 BYTE), will there be any performance impact when this column is used in a WHERE condition against millions of records?
3. Is it advisable to use ANSI data types in table declarations, or is it always preferable to use Oracle data types? Is there any performance difference?
Please advise.
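For illustration, a minimal sketch (table and column names are made up) showing that the declared precision of a NUMBER column does not change how a given value is physically stored; VSIZE reports the bytes actually used:

<code>
-- Hypothetical test table: the same value under different NUMBER declarations
create table t_num_test (
  n_free number,        -- unconstrained
  n_38   number(38),
  n_2    number(2),
  n_int  integer        -- ANSI INTEGER maps to NUMBER(38)
);

insert into t_num_test values (42, 42, 42, 42);

-- VSIZE shows the bytes stored for each column value; the declared
-- precision acts as a constraint, not a storage directive
select vsize(n_free), vsize(n_38), vsize(n_2), vsize(n_int) from t_num_test;
</code>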
Categories: DBA Blogs

Updating a partitioned table from another partitioned table: performance issue

Tom Kyte - 2 hours 54 min ago
Hi, I am migrating from Sybase IQ to Oracle 19c, and there are many updates driven from one or more source tables.

My target table (Target_TBL) has 18 million records per partition and there are thousands of partitions (partitioned by VersionID). APP_ID is another key column in this table. I have 10 partitioned source tables, partitioned by APP_ID, each with around 10 to 15 million records, and 5 smaller non-partitioned lookup tables.

I have rewritten all the UPDATE statements as MERGE statements in Oracle 19c. Every update touches a single VersionID (given in the WHERE clause), and I join the source tables on APP_ID and another key column, updating 70 to 100% of the records in each pass:

1. The target table is updated from the partitioned source tables (10 to 15 million rows each) on one set of key columns; this takes 10 separate MERGE statements.
2. The target table is updated from the non-partitioned lookup tables on different key columns; this takes 5 separate MERGE statements.

In Sybase IQ all of these updates complete in 10 minutes; in Oracle 19c they take more than 5 hours, even though I have enabled parallel query and parallel DML.

A) Can you suggest a better way to handle these kinds of updates?
B) In a few places the explain plan shows "PDML disabled because single fragment or non partitioned table used".
C) I let the updates from the large source tables use hash joins.
D) I force the updates from the lookup tables to use nested loops. Is this good or not?
E) If I need indexes, should I go with local or global indexes on the other key columns referenced for the lookup tables?

I would appreciate any other suggestions for handling these scenarios. Example:

<code>
MERGE INTO Target_TBL
USING SOURCE_A
ON (SOURCE_A.APP_ID = Target_TBL.APP_ID AND SOURCE_A.JOB_ID = Target_TBL.JOB_ID)
WHEN MATCHED THEN UPDATE SET Target_TBL.email = SOURCE_A.email
WHERE Target_TBL.VersionID = 100 AND SOURCE_A.APP_ID = 9876;

MERGE INTO Target_TBL
USING SECOND_B
ON (SECOND_B.APP_ID = Target_TBL.APP_ID AND SECOND_B.DEPT_ID = Target_TBL.DEPT_ID)
WHEN MATCHED THEN UPDATE SET Target_TBL.salary = SECOND_B.salary
WHERE Target_TBL.VersionID = 100 AND SECOND_B.APP_ID = 9876;

MERGE INTO Target_TBL
USING Lookup_C
ON (Lookup_C.Country_ID = Target_TBL.Country_ID)
WHEN MATCHED THEN UPDATE SET Target_TBL.Amount_LOCAL = Lookup_C.Amount_LOCAL
WHERE Target_TBL.VersionID = 100;
</code>
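As one hedged illustration only (object names are taken from the question; the hint and degree are assumptions): a session-level parallel DML setting combined with a MERGE that pre-filters the source and addresses the single target partition through the PARTITION FOR extension clause:

<code>
alter session enable parallel dml;

merge /*+ parallel(8) */ into Target_TBL partition for (100) t
using (select app_id, job_id, email
       from   SOURCE_A
       where  app_id = 9876) s
on (s.app_id = t.app_id and s.job_id = t.job_id)
when matched then update set t.email = s.email
  where t.VersionID = 100;

commit;  -- parallel DML requires a commit before the session can query the table again
</code>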
Categories: DBA Blogs

Gathering stats on a partitioned table, and parallel DML with partitioned tables

Tom Kyte - 2 hours 54 min ago
Hi, I have a table list-partitioned by VERSION_ID with around 15 million rows per partition. Each day a new partition key is created, we bulk insert about 15 million rows (500 columns), and then run roughly 10 MERGE updates of multiple columns from multiple other tables.

Is it good practice to gather stats once after the insert and then once more after the updates? What is the recommended approach to gathering stats on a partitioned table in this scenario?

Second question: when I MERGE into the partitioned table from another partitioned table with a parallel DML hint, the explain plan shows: "PDML disabled because single fragment or non partitioned table used".
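A minimal sketch of one possible approach (owner, table and partition names are placeholders): enable incremental statistics once, then gather only the affected partition after the load and update cycle so global stats are aggregated from partition-level synopses:

<code>
begin
  -- one-time preference: derive global stats from partition-level synopses
  dbms_stats.set_table_prefs('APP_OWNER', 'TARGET_TBL', 'INCREMENTAL', 'TRUE');

  -- after the daily insert + MERGE updates, gather just that partition
  dbms_stats.gather_table_stats(
    ownname     => 'APP_OWNER',
    tabname     => 'TARGET_TBL',
    partname    => 'P_VERSION_100',
    granularity => 'PARTITION',
    degree      => 8);
end;
/
</code>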
Categories: DBA Blogs

DR setup involving replicated database

Tom Kyte - 2 hours 54 min ago
Howdy, the current setup I'm looking at is an OLTP production system running Oracle 19.20 (4-instance RAC) with Active Data Guard. This system seeds a data warehouse, also running Oracle 19.20, by way of Oracle GoldenGate via an integrated extract. At present the warehouse does not have a DR solution in place, and that is the point of this post: what would the best DR strategy for the warehouse be when GoldenGate is in play like this? I assume Data Guard again, but I'm happy to hear other thoughts. The bulk of my questions involve the GoldenGate component: I'm not sure how it would need to be set up and configured to minimize the complexity of any role transitions on the transactional side, the warehouse side, or both, and which scenarios can be handled seamlessly versus requiring manual intervention. Thanks a bunch! Cheers,
Categories: DBA Blogs

1, 2, 3 – Free!

The Oracle Instructor - Sat, 2024-03-16 06:35

Pickleball drill: 1, 2, 3 – Free!

A nice warm-up drill that is also well suited for beginners:

All four players stand at the NVZ (non-volley zone). Serving and scoring are the same as in a normal game.

The first three balls, including the serve, must land in the NVZ. After that, the ball is free for offensive dinks, speed-ups, and lobs:

Example: player A starts with the diagonal serve, D dinks (not necessarily) to B, and B plays the third ball into the kitchen to C. C then plays a long ball into the gap.

Background: we used to play a similar drill, but with 5 balls that had to be played into the kitchen before the ball was released.

In my view that has two drawbacks:

  1. It teaches participants the wrong kind of dinks, namely harmless "dead dinks" into the kitchen. In serious play, however, dinking is not primarily about hitting the kitchen at all costs. A dink should be as unattackable as possible and as unpleasant as possible for the opponent, so that they return a high ball that we in turn can attack. That can very well be a ball that bounces just behind the NVZ. Under the old drill model, though, that counts as a fault. Later on it is often difficult to teach people the right kind of dinks.
  2. You have to count the balls up to 5. That often does not work well, so you end up unsure: was that already 5?

With 1, 2, 3 – Free! it is easier to keep track. The serving team is still at a slight disadvantage (as in the full game), because the receivers get to play an offensive ball first, for example a forceful dink just behind the NVZ.

Categories: DBA Blogs

How to Create Urdu Hindi AI Model and Dataset from New Dataset

Pakistan's First Oracle Blog - Fri, 2024-03-15 21:54

This video is a hands-on, step-by-step tutorial on creating a new dataset and an AI model, fine-tuning the model on the dataset, and then pushing it to Hugging Face.




Code:

%%capture
import torch
major_version, minor_version = torch.cuda.get_device_capability()
# Must install separately since Colab has torch 2.2.1, which breaks packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass

!pip install einops

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-7b-bnb-4bit", # Choose ANY! eg teknium/OpenHermes-2.5-Mistral-7B
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    token = " ", # use one if using gated models like meta-llama/Llama-2-7b-hf
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    use_gradient_checkpointing = True,
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)


alpaca_prompt = """ذیل میں ایک ہدایت ہے جو فلم کے نام کی وضاحت کرتی ہے، اس کے ساتھ ایک ان پٹ بھی ہے جو مزید دستاویزات فراہم کرتا ہے۔ گانے کے بول لکھنے کے لیے ایک لمحہ نکالیں جو فلم کے نام کے معنی سے میل کھاتا ہے۔


### Instruction:

{}


### Input:

{}


### Response:

{}"""


EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN
def formatting_prompts_func(examples):
    instructions = examples["urdu_instruction"]
    inputs       = examples["urdu_input"]
    outputs      = examples["urdu_output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # Must add EOS_TOKEN, otherwise your generation will go on forever!
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return { "text" : texts, }
pass

from datasets import load_dataset
dataset = load_dataset("fahdmirzac/urdu_bollywood_songs_dataset", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True,)

from huggingface_hub import login
access_token = "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"
login(token=access_token)


from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False, # Can make training 5x faster for short sequences.
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 100,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)

trainer_stats = trainer.train()


FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "دیے گئے فلم کے نام کے بارے میں ایک مختصر گیت کے بول لکھیں۔", # instruction
        "کیوں پیار ہو گیا", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = True)
tokenizer.batch_decode(outputs)

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "دیے گئے فلم کے نام کے بارے میں ایک مختصر گیت کے بول لکھیں۔", # instruction
        "رنگ", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = True)
tokenizer.batch_decode(outputs)

model.push_to_hub("fahdmirzac/Gemma_Urdu_Hindi_Bollywood_Songs", token = "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")

Categories: DBA Blogs

Using Claude 3 Haiku Vision with Amazon Bedrock Locally

Pakistan's First Oracle Blog - Fri, 2024-03-15 02:58

This video is a hands-on guide on how to use the vision features of Anthropic's Claude 3 Haiku AI model with Amazon Bedrock.



Code Used:

import boto3
import json
import base64
from botocore.exceptions import ClientError

bedrock = boto3.client(service_name="bedrock-runtime",region_name='us-east-1')

modelId = "anthropic.claude-3-haiku-20240307-v1:0"

accept = "application/json"
contentType = "application/json"


# prompt = "What is written in this image?"
# image_path = "./images/ab55.png"

# prompt = "How many faces are there in this image and what are the expressions of those faces?"
# image_path = "./images/expression.png"

# prompt = "Tell me a short story about this image."
# image_path = "./images/hiking.png"

prompt = "What's the location in this image?"
image_path = "./images/whereisthis.png"


with open(image_path, "rb") as image_file:
    image = base64.b64encode(image_file.read()).decode("utf8")

request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 2048,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt,
                },
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image,
                    },
                },
            ],
        }
    ],
}

try:
    response = bedrock.invoke_model(
        modelId=modelId,
        body=json.dumps(request_body),
    )

    # Process and print the response
    result = json.loads(response.get("body").read())
    input_tokens = result["usage"]["input_tokens"]
    output_tokens = result["usage"]["output_tokens"]
    output_list = result.get("content", [])

    # print("Invocation details:")
    # print(f"- The input length is {input_tokens} tokens.")
    # print(f"- The output length is {output_tokens} tokens.")

    # print(f"- The model returned {len(output_list)} response(s):")
    for output in output_list:
        print(output["text"])

except ClientError as err:
    # f-string formatting so the error code and message are actually interpolated
    print(
        "Couldn't invoke Claude 3 Haiku Vision. Here's why: "
        f"{err.response['Error']['Code']}: {err.response['Error']['Message']}"
    )
    raise
Categories: DBA Blogs

Compact Pickleball Course for Beginners

The Oracle Instructor - Fri, 2024-03-15 01:41

Everything you need to get a qualified start in pickleball, brought to the point in a single day!
This 4-hour compact course is ideal for working professionals and athletically ambitious seniors.

The course costs 40 euros per person and is led by Uwe Hesse, an experienced player and DPB-certified coach.
It takes place in Düsseldorf.
The number of participants per course is limited to 8 so that intensive, individual coaching can be guaranteed.

Topics include:
Basic strokes: serve, return, volley, dink
Rules
Basic doubles strategies
Scoring
The role of the non-volley zone
3rd shot drop
Participant doubles with coach feedback

Current dates:
Saturday, April 27
Saturday, May 4

Each course starts at 10:00 a.m.
Please register exclusively by email to info@uhesse.com

For members of DJK Agon 08 the course costs only 20 euros.
If you subsequently join the club, 20 euros will be refunded.

Categories: DBA Blogs

Create AI Agent in AWS with Boto3 Code

Pakistan's First Oracle Blog - Thu, 2024-03-14 22:03

This video is a step-by-step tutorial, with code, on how to create Amazon Bedrock AI agents with boto3 in Python and integrate them with Lambda.



Code used (pair it with any Lambda function of your choice):


import logging
import boto3
import time
import yaml
import json
import io
from botocore.exceptions import ClientError

def create_agent(bedrock, agent_name, foundation_model, role_arn, instruction):
    try:
        # Create a low-level client with the service name
        response = bedrock.create_agent(
            agentName=agent_name,
            foundationModel=foundation_model,
            agentResourceRoleArn=role_arn,
            instruction=instruction,
        )
    except ClientError as e:
        logging.error(f"Couldn't create agent due to: {e}")
        raise
    else:
        return response["agent"]

def create_agent_action_group(bedrock, name, description, agent_id, agent_version, function_arn, api_schema):
    try:
        response = bedrock.create_agent_action_group(
            actionGroupName=name,
            description=description,
            agentId=agent_id,
            agentVersion=agent_version,
            actionGroupExecutor={"lambda": function_arn},
            apiSchema={"payload": api_schema},
        )
        agent_action_group = response["agentActionGroup"]
    except ClientError as e:
        print(f"Error: Couldn't create agent action group. Here's why: {e}")
        raise
    else:
        return agent_action_group

def prepare_agent(bedrock, agent_id):
    try:
        prepared_agent_details = bedrock.prepare_agent(agentId=agent_id)
    except ClientError as e:
        print(f"Couldn't prepare agent. {e}")
        raise
    else:
        return prepared_agent_details

def create_agent_alias(bedrock, name, agent_id):
    try:
        response = bedrock.create_agent_alias(
            agentAliasName=name, agentId=agent_id
        )
        agent_alias = response["agentAlias"]
    except ClientError as e:
        print(f"Couldn't create agent alias. {e}")
        raise
    else:
        return agent_alias



def main():
    # Define your parameters
    bedrock = boto3.client(service_name='bedrock-agent',region_name='us-east-1')
    agent_name = 'AstroAI'
    foundation_model = 'anthropic.claude-v2'
    role_arn = 'bedrock role arn'
    instruction = 'Your task is to generate unique and insightful daily horoscopes for individuals \
                   based on their zodiac sign. Start by analyzing the general characteristics and common \
                   themes associated with each zodiac sign. Consider traits, challenges, opportunities, \
                   and the emotional and physical wellbeing of individuals under each sign. Use this \
                   understanding to create personalized, relevant, and engaging horoscopes that offer \
                   guidance, reflection, and encouragement for the day ahead. Ensure the horoscopes \
                   are varied and resonate with the unique qualities of each sign, contributing \
                   positively to the users day.'

    # Call the create_agent function
    try:
        agent = create_agent(bedrock, agent_name, foundation_model, role_arn, instruction)
        agent_id = agent['agentId']
        print(f"Agent created successfully: {agent_id}")
    except ClientError:
        print("Failed to create the agent.")
        return  # agent_id is undefined if creation failed, so stop here

    time.sleep(10)

    try:
        with open("api_schema.yaml") as file:
            api_schema=json.dumps(yaml.safe_load(file))
            name="AstroGroup"
            description="AI Astrologer"
            agent_version="DRAFT"
            function_arn="arn:aws:lambda:us-east-1::function:horoscope"
            agentgroup = create_agent_action_group(bedrock, name, description, agent_id, agent_version, function_arn, api_schema)                
            print(agentgroup['actionGroupId'])
    except ClientError as e:
        print(f"Couldn't create agent action group. Here's why: {e}")
        raise        

    time.sleep(5)

    agentprepared = prepare_agent(bedrock, agent_id)                
    print(agentprepared)

    time.sleep(20)

    agentalias = create_agent_alias(bedrock, name, agent_id)
    print(agentalias['agentAliasId'])

if __name__ == "__main__":
    main()

Categories: DBA Blogs

PLS-00103: Encountered the symbol "RECORD" when expecting one of the following: array varray table object fixed varying opaque sparse

Tom Kyte - Thu, 2024-03-14 13:06
Here I am creating a record type for EMP and DEPT data with the following syntax: <code>CREATE TYPE emp_dept_data IS RECORD (empno NUMBER(4), ename VARCHAR2(10), job VARCHAR2(9), hiredate DATE, sal NUMBER(7,2), dname VARCHAR2(14));</code> I am getting the error: PLS-00103: Encountered the symbol "RECORD" when expecting one of the following: array varray table object fixed varying opaque sparse. Please tell me how to fix it. I am using Oracle 19c, and the record type is used in a pipelined function.
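For context, RECORD is a PL/SQL-only construct and cannot be created as a standalone SQL type. A hedged sketch of the two usual alternatives (column definitions copied from the question; package and type names are illustrative):

<code>
-- Option 1: declare the record (and a collection over it) in a package spec;
-- a pipelined function can then return the collection type
create or replace package emp_dept_pkg as
  type emp_dept_rec is record (
    empno    number(4),
    ename    varchar2(10),
    job      varchar2(9),
    hiredate date,
    sal      number(7,2),
    dname    varchar2(14));
  type emp_dept_tab is table of emp_dept_rec;
end emp_dept_pkg;
/

-- Option 2: a SQL object type, which CREATE TYPE does accept
create or replace type emp_dept_data as object (
  empno    number(4),
  ename    varchar2(10),
  job      varchar2(9),
  hiredate date,
  sal      number(7,2),
  dname    varchar2(14));
/
</code>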
Categories: DBA Blogs

How to find out if a database exists in a given system

Tom Kyte - Tue, 2024-03-12 06:26
Tom, suppose you are given a server and asked to check whether a database has been created on it (in other words, whether a database exists on that server). What is the quickest way, and what are the different ways, to find out, on both Windows and Unix? Thanks in advance. Rag
Categories: DBA Blogs

Daylight Saving Time

Tom Kyte - Tue, 2024-03-12 06:26
Hi Tom, here is the problem description: a reporting table holds data with its date columns in EST. When the report is viewed, the same columns are reported in CST. I can add a fixed offset and present the columns in CST, but what happens during the DST changes? Will this strategy work during the DST transition period? Thanks
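A minimal sketch (table and column names are assumptions) of the usual alternative to a fixed offset: convert using time zone region names, so Oracle applies the correct DST rule for each individual date:

<code>
select report_date,
       from_tz(cast(report_date as timestamp), 'America/New_York')
         at time zone 'America/Chicago' as report_date_central
from   reporting_table;
</code>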
Categories: DBA Blogs

Date format containing time zone data

Tom Kyte - Tue, 2024-03-12 06:26
I would like to know if it is possible to configure the Oracle date format to also capture the time zone in which the date and time originated.
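As a hedged illustration (names are made up): a plain DATE cannot hold a time zone, but a TIMESTAMP WITH TIME ZONE column preserves the offset or region in which the value originated:

<code>
create table event_log (
  event_name varchar2(50),
  event_ts   timestamp with time zone
);

insert into event_log values ('login', systimestamp);

select event_name,
       to_char(event_ts, 'YYYY-MM-DD HH24:MI:SS TZR TZD') as captured_at
from   event_log;
</code>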
Categories: DBA Blogs

How to find current_schema from within PL/SQL

Tom Kyte - Tue, 2024-03-12 06:26
I'm trying to figure out why my application occasionally receives the error ORA-00942: table or view does not exist. As the object in question clearly _does_ exist, I assume that the application changes the current_schema to some other schema which doesn't have said object. To prove that, I created an <b>after servererror on database</b> trigger from which I would like to write the current_schema to the alert.log.

I first tried <b>SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')</b> to get the schema name, but that always returns the schema holding the trigger (SYS in this case). My second guess was the SQL query <b>select SCHEMANAME from V$SESSION where SID = SYS_CONTEXT('USERENV', 'SID')</b>, but that also returned SYS instead of the session's current schema. Then I tried <b>AUTHID CURRENT_USER</b>, but that didn't solve it either. Is there a way? Thanks!

Here is what I tried:
1. connect to my application schema "DSCAN"
2. switch the session to some arbitrary other schema: <code>alter session set current_schema = MDDATA;</code>
3. try to select from a non-existing table: <code>select * from abc;</code>

Trigger output in alert.log (what I'd like to see is <b>SN:MDDATA</b> instead of <b>SN:SYS</b>):

<code>
2024-03-01 14:23:23 2024-03-01T13:23:22.415858+00:00
2024-03-01 14:23:23 PDB1(3):*** SERVERERROR ORA-00942 U:DSCAN SN:SYS #453 * T:unknown * P:DataGrip * M:DataGrip * A: * ID: * CI: ***
2024-03-01 14:23:23 PDB1(3):select * from abc
2024-03-01 14:23:23 PDB1(3):--------------------------------------------------------
2024-03-01 14:23:23 PDB1(3):ORA-00942: table or view does not exist
2024-03-01 14:23:23 PDB1(3):********************************************************
</code>

Here is my trigger code:

<code>
CREATE OR REPLACE TRIGGER SYS.AFTER_SERVERERROR
after servererror on database
begin
  execute immediate ('begin afterservererror; end;');
exception
  when others then
    -- ignore any further exceptions to prevent infinite loop
    null;
end;

create or replace procedure sys.afterServerError is
  a  ora_name_list_t;
  s  varchar2(32767 byte); -- SQL text
  s1 varchar2(32767 byte); -- SQL text
  s2 varchar2(32767 byte); -- SQL text
  sn varchar2(32767 byte); -- schema name
  m  varchar2(32767 byte); -- Error Message
  n  number;
begin
  if ORA_SERVER_ERROR(1) = 22289 then
    return;
  end if;
  execute immediate 'select SCHEMANAME from V$SESSION where SID = SYS_CONTEXT(''USERENV'', ''SID'')' INTO SN;
  s := s ||'*** '||ORA_SYSEVENT||' ORA-'||ltrim(to_char(ORA_SERVER_ERROR(1), '00000'))
         ||' U:'||user
         ||' SN:'||SN
         ||' #'||SYS_CONTEXT('USERENV', 'SID')
         ||' * T:'||SYS_CONTEXT('USERENV', 'TERMINAL')
         ||' * P:'||SYS_CONTEXT('USERENV', 'CLIENT_PROGRAM_NAME')
         ||' * M:'||SYS_CONTEXT('USERENV', 'MODULE')
         ||' * A:'||SYS_CONTEXT('USERENV', 'ACTION')
  ...
</code>
Categories: DBA Blogs

ORDS error 404

Tom Kyte - Mon, 2024-03-11 12:06
The ORDS landing page shows "ok" for APEX and SQL Developer instead of a "GO" button, and I can't access the APEX workspace page. When I click "ok" it takes me to a page showing: "Not Found. HTTP status code: 404. Impossible to meet the request with a database. Verify that the URL of the request is correct and that the corresponding URL-to-database mapping has been correctly configured." I am working on Windows 10 Pro and installed the following: Oracle 19c, APEX 23.2, ORDS 23.4, Tomcat 9.0.86.
Categories: DBA Blogs

Full Import default schema

Tom Kyte - Mon, 2024-03-11 12:06
I am doing a full import of a 10g export into 19c. The export/import is done by a user with the EXP_FULL_DATABASE / IMP_FULL_DATABASE privileges. A few questions:
1. Is it safe to import with the FULL=Y option? Will it overwrite procedures with the 10g versions or duplicate data in the SYSTEM schema? What about the other default schemas, for example the SYS schemas, XDB and APEX: will they be overwritten? What about other component dependencies (e.g. Workspace Manager)? How do I safely import only the application schemas, including their dependencies on other components?
2. How do I find out which components an application schema depends on (DBA_FEATURE_USAGE_STATISTICS?)?
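For the second question, a minimal sketch (the schema name is a placeholder) of two dictionary queries that help map what an application schema depends on before deciding between FULL=Y and a schema-level import:

<code>
-- database options/features that have actually been used
select name, version, detected_usages, currently_used
from   dba_feature_usage_statistics
order  by name;

-- objects outside the application schema that its code and views reference
select distinct referenced_owner, referenced_type
from   dba_dependencies
where  owner = 'APP_OWNER'
and    referenced_owner not in ('APP_OWNER', 'PUBLIC');
</code>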
Categories: DBA Blogs

What indexes and partitions are best to manage inserts and updates of 100 crore records in a table

Tom Kyte - Mon, 2024-03-11 12:06
In our project we have to create a table that will contain around 100 crore (1 billion) records, and about 60% of those records will be updated at a later time. Date is the key column for this table. What types of indexes and partitioning are best suited to manage this table and improve performance when querying it? Thanks in advance.
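A minimal sketch under stated assumptions (all names and columns are invented; monthly intervals are chosen arbitrarily): range/interval partitioning on the date key with a local index, so date-driven updates and queries touch only the relevant partitions:

<code>
create table txn_history (
  txn_id   number       not null,
  txn_date date         not null,
  status   varchar2(10),
  amount   number(12,2)
)
partition by range (txn_date)
interval (numtoyminterval(1, 'MONTH'))
(partition p_initial values less than (date '2024-01-01'));

-- local index: each index partition covers exactly one table partition
create index txn_history_dt_ix on txn_history (txn_date, status) local;
</code>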
Categories: DBA Blogs

SQL Developer v4: how to extract schema / user creation DDL?

Tom Kyte - Thu, 2024-03-07 11:46
Hello, I am just lost here. I have a very simple requirement in SQL Developer: I need to extract the user-creation script for a list of users. How on earth do I do this? Please help. Why have they made such simple things so difficult to do, or even to find? When I click on Other Users there is no export or "extract script" option. There is a Database Export under Tools, but that creates the objects within the schema, not the schema-creation script itself. Thanks
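Outside of SQL Developer, a hedged sketch using DBMS_METADATA (the username is a placeholder; run as a suitably privileged user) that produces the user-creation statement plus grants:

<code>
set long 100000 longchunksize 100000 pagesize 0

select dbms_metadata.get_ddl('USER', 'SCOTT')                 from dual;
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT', 'SCOTT') from dual;
select dbms_metadata.get_granted_ddl('OBJECT_GRANT', 'SCOTT') from dual;
select dbms_metadata.get_granted_ddl('ROLE_GRANT', 'SCOTT')   from dual;
</code>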
Categories: DBA Blogs

Global variables for schema name

Tom Kyte - Tue, 2024-03-05 23:06
Hi Tom, we have two schemas: one for customers and one for the transactions done by customers. I have a requirement where I have to access tables of Schema-1 from a function created in Schema-2. My question: can we declare Schema-1 as a global variable and refer to it wherever necessary? I have several instances like this in many of the functions in the package. A further problem is that I have to move this package from the development instance to QA, UAT and production, where the schema names differ. Right now I am hard-coding the Schema-1 prefix, which I want to get away from. Example (this function is created in Schema-2):
<code>
create or replace function fnc_comm_history(p_loan_code number) return sys_refcursor
as
  c_ref sys_refcursor;
begin
  open c_ref for
    select a.name, a.event, a.date, b.price
    from   table1 a, Schema-1.table2 b
    where  a.loan_code = b.loan_code
    and    a.loan_code = p_loan_code;
  return c_ref;
end;
</code>
Regards, Dora.
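One hedged alternative (synonym, schema and column names are placeholders): resolve the cross-schema reference through a synonym created in Schema-2, so only the synonym, not the package code, changes between DEV, QA, UAT and production:

<code>
-- created once per environment, pointing at that environment's Schema-1
create or replace synonym table2 for schema1_dev.table2;

create or replace function fnc_comm_history(p_loan_code number) return sys_refcursor
as
  c_ref sys_refcursor;
begin
  open c_ref for
    select a.name, a.event, a.event_date, b.price
    from   table1 a
           join table2 b on b.loan_code = a.loan_code  -- resolved via the synonym
    where  a.loan_code = p_loan_code;
  return c_ref;
end;
/
</code>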
Categories: DBA Blogs
