Feed aggregator

ASSM Argh 2

Jonathan Lewis - Wed, 2018-01-10 07:24

After yesterday’s post one of the obvious follow-up questions was whether the problem I demonstrated was a side effect of my use of PL/SQL arrays and loops to load data. What would happen with a pure “insert select” statement? It’s easy enough to check:


rem
rem     Script:         assm_argh2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

drop table t2 purge;
drop table t1 purge;

create table t2
segment creation immediate
tablespace test_8k_assm
as
select * from all_objects where rownum <= 50000 -- >comment to avoid WordPress anomaly
;

create table t1
segment creation immediate
tablespace test_8k_assm
as
select * from all_objects where rownum = 0
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

insert into t1
select t2.*
from
        (
         select /*+ cardinality(40) */ rownum id
         from dual connect by level <= 40 -- > comment to avoid WordPress anomaly
        ) d,
        t2
;

commit;

declare
        m_unformatted_blocks    number;
        m_unformatted_bytes     number;
        m_fs1_blocks            number;
        m_fs1_bytes             number;
        m_fs2_blocks            number;
        m_fs2_bytes             number;

        m_fs3_blocks            number;
        m_fs3_bytes             number;
        m_fs4_blocks            number;
        m_fs4_bytes             number;
        m_full_blocks           number;
        m_full_bytes            number;

begin
        dbms_space.SPACE_USAGE(
                segment_owner           => 'TEST_USER',
                segment_name            => 'T1',
                segment_type            => 'TABLE',
                unformatted_blocks      => m_unformatted_blocks,
                unformatted_bytes       => m_unformatted_bytes,
                fs1_blocks              => m_fs1_blocks ,
                fs1_bytes               => m_fs1_bytes,
                fs2_blocks              => m_fs2_blocks,
                fs2_bytes               => m_fs2_bytes,
                fs3_blocks              => m_fs3_blocks,
                fs3_bytes               => m_fs3_bytes,
                fs4_blocks              => m_fs4_blocks,
                fs4_bytes               => m_fs4_bytes,
                full_blocks             => m_full_blocks,
                full_bytes              => m_full_bytes
        );

        dbms_output.put_line('Unformatted                   : ' || to_char(m_unformatted_blocks,'999,999,990') || ' / ' || to_char(m_unformatted_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 1 (  0 -  25% free) : ' || to_char(m_fs1_blocks,'999,999,990') || ' / ' || to_char(m_fs1_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 2 ( 25 -  50% free) : ' || to_char(m_fs2_blocks,'999,999,990') || ' / ' || to_char(m_fs2_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 3 ( 50 -  75% free) : ' || to_char(m_fs3_blocks,'999,999,990') || ' / ' || to_char(m_fs3_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 4 ( 75 - 100% free) : ' || to_char(m_fs4_blocks,'999,999,990') || ' / ' || to_char(m_fs4_bytes,'999,999,999,990'));
        dbms_output.put_line('Full                          : ' || to_char(m_full_blocks,'999,999,990') || ' / ' || to_char(m_full_bytes,'999,999,999,990'));
end;
/

I’ve copied the first 50,000 rows from all_objects as a way of generating data, then cloned it 40 times into the main table to give me a total of 2,000,000 rows.

A comment on yesterday’s blog reported that the behaviour I described has been fixed in the October bundle patch for 12.1.0.2, but I haven’t patched my copy yet. So here are the results (with a little cosmetic editing) from running the insert and reporting on space usage from 11.2.0.4, 12.1.0.2, and 12.2.0.1 in order:


11.2.0.4
========
2000000 rows created.

Unformatted                   :          764 /        6,258,688
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            1 /            8,192
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :          133 /        1,089,536
Full                          :       28,579 /      234,119,168

12.1.0.2
========
2000000 rows created.

Unformatted                   :          256 /        2,097,152
Freespace 1 (  0 -  25% free) :       32,810 /      268,779,520
Freespace 2 ( 25 -  50% free) :            0 /                0
Freespace 3 ( 50 -  75% free) :            1 /            8,192
Freespace 4 ( 75 - 100% free) :           47 /          385,024
Full                          :          443 /        3,629,056

12.2.0.1
========
2000000 rows created.

Unformatted                   :          764 /        6,258,688
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            1 /            8,192
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :          226 /        1,851,392
Full                          :       39,706 /      325,271,552

The total number of blocks involved changes from version to version, of course, thanks to the huge differences in the contents of all_objects, but the headline message is clear – 12.1.0.2 is broken for this basic requirement. On the plus side, though, this is what you get from 12.1.0.2 if you change that insert to include the /*+ append */ hint:
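For clarity, that is the same statement as before with only the hint added, turning the load into a direct-path insert:


insert /*+ append */ into t1
select t2.*
from
        (
         select /*+ cardinality(40) */ rownum id
         from dual connect by level <= 40
        ) d,
        t2
;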


2000000 rows created.

Unformatted                   :            0 /                0
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            0 /                0
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :            0 /                0
Full                          :       33,380 /      273,448,960

Unsurprisingly, 11.2.0.4 and 12.2.0.1 also behave and report 100% Full. This is a slightly special case, of course, since there was no previous data in the table, but even when I started the big insert after inserting and committing a few rows all three versions behaved.
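A sketch of that last variant (the exact number of pre-committed rows isn’t critical):


truncate table t1;

insert into t1 select * from t2 where rownum <= 10;
commit;

-- now repeat the big insert and the dbms_space report as before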


50K views on my YouTube Channel

Hemant K Chitale - Wed, 2018-01-10 01:06
My YouTube channel on Oracle has now exceeded 50 thousand views.  A few more subscribers and the subscriber count will exceed 500.

Thank you all !

I have been busy for the past few months but, hopefully, in 2018 I will keep adding to my YouTube and BlogSpot content.

Categories: DBA Blogs

Transfer table from sql server to oracle db

Tom Kyte - Tue, 2018-01-09 21:26
Hi Team, Thanks for your continues help. I want transfer one table(iccpayment) from sql server to oracle db. And also i want create schedule to this table because everyday user load data to sql server table same also happen in oracle. Pleas...
Categories: DBA Blogs

querying a clob

Tom Kyte - Tue, 2018-01-09 21:26
Hi Tom I have a table t_request (request_number number, request_detail clob) For example the clob could have a value of <REQUEST_DETAIL><GROUP_TYPE>PR</GROUP_TYPE><GROUP_NAME>DATA ENGINEERING ASSET MANAGEMENT</GROUP_NAME><BUS_UNIT_ACRN>S...
Categories: DBA Blogs

Single Json object is splitting into multiple lines when the length is more than 32767.

Tom Kyte - Tue, 2018-01-09 21:26
Hi Team, Happy New Year. I am connecting to an External database from unix server with that external database details and executing a select statement to fetch data from one of the table in the external database and then the output will be wri...
Categories: DBA Blogs

Dynamically Discover Ref Cursor Attributes Like Column Name, Column Value

Tom Kyte - Tue, 2018-01-09 21:26
Hi, Is it possible to dynamically discover Ref Cursor attributes like, for example, Column Name? The attached LiveSQL Link contains many details about my question. It is my first time to use LiveSQL Link. If it does not work, please let me kno...
Categories: DBA Blogs

Can you use Oracle Database Unified Auditing with Oracle APEX?

Joel Kallman - Tue, 2018-01-09 16:24
A customer asked me this morning:

"Can we monitor the table access from an APEX application?"

This is a security-minded customer, who was also interested in the use of Oracle Database Vault to help protect against unauthorized access to personally identifiable information maintained in their database tables.  The only access to these tables would be through their APEX applications (for now), and they wanted to be able to monitor access to these tables.  While Oracle offers a very robust, enterprise solution in this problem domain named Oracle Audit Vault and Database Firewall, auditing via the native database functionality would be sufficient for now.

This blog post is not intended to be a complete treatise on the subject of database auditing in the Oracle Database; there is already a lot of documentation on this topic.  This blog post is a simple proof that auditing can be enabled to track accesses from an APEX application, and how to see the audit log entries.

  1.   I created a simple APEX application based upon an updated version of the TASKS CSV file, used in this 2015 Oracle Magazine article.  This was done in the JOEL_DB schema.  The Interactive Report on page 1 is based upon the query:

    select id, project, task_name, start_date, end_date, status, assigned_to, cost, budget
    from tasks

    The application looked like this: (screenshot of the Interactive Report omitted)
  2. I created an audit policy named JOEL_TASKS_POLICY and enabled it using the following SQL:

    create audit policy joel_tasks_policy actions 
    select on joel_db.tasks,
    update on joel_db.tasks,
    delete on joel_db.tasks,
    insert on joel_db.tasks;

    Note:  This is Oracle Database 12cR2.  Before beginning this step, I ensured that the database was configured for Unified Auditing.
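    One detail worth making explicit: "create audit policy" only defines the policy; it still has to be switched on. Presumably the enabling step (not shown above) was simply:

    audit policy joel_tasks_policy;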

  3. I ran the following SQL statement to validate existence of the audit policy:

    select audit_option, object_schema, object_name
    from audit_unified_policies
    where policy_name = 'JOEL_TASKS_POLICY';

    AUDIT_OPTION   OBJECT_SCHEMA   OBJECT_NAME
    ------------   -------------   -----------
    DELETE         JOEL_DB         TASKS
    INSERT         JOEL_DB         TASKS
    SELECT         JOEL_DB         TASKS
    UPDATE         JOEL_DB         TASKS

  4. I stopped and restarted Oracle REST Data Services (ORDS).  This is important, because the policy will only take effect for database sessions established after the audit policy was enabled.  Since ORDS maintains a pool of database sessions, I needed to nuke the existing sessions and establish new ones.

  5. I ran the page a few times in my APEX application using the Interactive Report.

  6. As a DBA, I queried the database view UNIFIED_AUDIT_TRAIL using the following query:
    select dbusername, action_name, event_timestamp, scn, 
    object_schema, object_name, sql_text, sql_binds, client_identifier
    from unified_audit_trail
    where unified_audit_policies = 'JOEL_TASKS_POLICY'
    and object_schema = 'JOEL_DB'
    order by event_timestamp desc;

    (screenshot of the query results omitted)

    and voila!  You can easily see that the SELECT statements against the TASKS table are properly being audited, complete with any bind values used in the query.  The username of the database session is APEX_PUBLIC_USER as expected, because this is what the database sessions in the ORDS pool are connected as.  In the CLIENT_IDENTIFIER column, you can see that the name of the authenticated user to the APEX application along with the APEX session identifier are also recorded.

Does this give you everything you wish?  Probably not.  I'd like to know the APEX application ID, the page number, the actual parsing schema, and more.  But using the APEX session ID and correlating it with entries in the APEX_ACTIVITY_LOG database view, I can find so much more about this activity.  There are also many other columns in the UNIFIED_AUDIT_TRAIL database view which are used in conjunction with Oracle Database Vault and Real Application Security, and which would capture still more information about access of this underlying database object.
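As a rough illustration of that correlation, here is a sketch only: APEX sets CLIENT_IDENTIFIER to USERNAME:SESSION_ID, and the APEX_ACTIVITY_LOG column names below are assumptions that may vary across APEX versions.

select aal.apex_user, aal.application_id, aal.page_id,
       uat.event_timestamp, uat.sql_text, uat.sql_binds
from   unified_audit_trail uat
join   apex_activity_log aal
on     aal.apex_session_id =
       to_number(substr(uat.client_identifier,
                        instr(uat.client_identifier, ':') + 1))  -- session id part of USER:SESSION
where  uat.unified_audit_policies = 'JOEL_TASKS_POLICY'
order by uat.event_timestamp desc;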

A great benefit of using database auditing instead of simply implementing "poor-man's" logging in the APEX application is that any access will be recorded from any application, not just the APEX app.



Spectre and Meltdown, Oracle Database, AWS, SLOB

Yann Neuhaus - Tue, 2018-01-09 15:23

Last year, I measured the CPU performance for an Oracle Database on several types of AWS instances. Just out of curiosity, I’ve run the same test (SLOB cached reads) now that Amazon has applied all the Spectre and Meltdown mitigation patches.

I must admit that I wanted to test this on the Oracle Cloud first. I’ve updated an IaaS instance to the latest kernel, but the Oracle Unbreakable Enterprise Kernel does not include the Meltdown fix yet, and booting on the Red Hat Compatible Kernel quickly ends in a kernel panic, unable to find the root LVM volume.

This is not a benchmark you can rely on to estimate the CPU usage overhead on your application. This test is not doing system calls (so the KPTI fix should be at its minimal impact). If your application is bound on system calls (network roundtrips, physical reads) the consequences can be worse. But in that case you have a design problem which was merely masked by hardware that was optimized, but insecure: a processor speculatively running code before testing whether it should.

Figures from last year: M4.xlarge: 4vCPU, 16GB RAM

M4 is hyper-threaded so with 2 Oracle processor licenses we can use 4 vCPU.
Here I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, 2 cores with 2 threads each.
Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             1.0              13.1       0.00       5.46
DB CPU(s):                              1.0              13.1       0.00       5.46
Logical read (blocks):            874,326.7      11,420,189.2

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             2.0              27.3       0.00       9.24
DB CPU(s):                              2.0              27.2       0.00       9.22
Logical read (blocks):          1,540,116.9      21,047,307.6

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             4.0              54.6       0.00      14.46
DB CPU(s):                              4.0              54.3       0.00      14.39
Logical read (blocks):          1,779,361.3      24,326,538.0

Jan. 2018 with Spectre and Meltdown mitigation:

Same CPU now with the latest RedHat kernel.

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8

Here is the LIOPS result for the same runs.
Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             1.0              13.7       0.00       4.69
DB CPU(s):                              1.0              13.7       0.00       4.69
Logical read (blocks):            808,954.0      11,048,988.1

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             2.0              27.3       0.00       8.00
DB CPU(s):                              2.0              27.1       0.00       7.96
Logical read (blocks):          1,343,662.0      18,351,369.1

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             4.0              42.9       0.00      13.49
DB CPU(s):                              4.0              42.5       0.00      13.37
Logical read (blocks):          1,684,204.6      18,106,823.6

Jan. 2018, with Spectre and Meltdown patches, but with IBRS, IBPB and KPTI disabled

The RedHat kernel has options to disable Indirect Branch Restricted Speculation, Indirect Branch Prediction Barriers and Kernel Page Table Isolation

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 nopti noibrs noibpb

Here are the same runs after rebooting with nopti noibrs noibpb kernel options:
Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             1.0              30.1       0.00       4.86
DB CPU(s):                              1.0              29.8       0.00       4.80
Logical read (blocks):            861,138.5      25,937,061.0

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             2.0              27.3       0.00       8.00
DB CPU(s):                              2.0              27.0       0.00       7.92
Logical read (blocks):          1,493,336.8      20,395,790.6

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             4.0              42.9       0.00      13.49
DB CPU(s):                              4.0              42.4       0.00      13.34
Logical read (blocks):          1,760,218.4      18,911,346.0
Read IO requests:                      33.5             360.2

Then with only KPTI disabled, but all Spectre mitigation enabled

Here only the page table isolation is disabled.

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 nopti

Here are the same runs with only the nopti kernel option:
Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             1.0              30.1       0.00       3.91
DB CPU(s):                              1.0              29.8       0.00       3.87
Logical read (blocks):            873,451.2      26,303,984.2

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             2.0              23.1       0.00       7.60
DB CPU(s):                              2.0              22.9       0.00       7.54
Logical read (blocks):          1,502,151.4      17,360,883.8

Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             4.0              42.9       0.00      12.64
DB CPU(s):                              4.0              42.4       0.00      12.50
Logical read (blocks):          1,764,293.0      18,954,682.3

Large pages

The previous tests were using small pages. I did a quick test with KPTI enabled and SGA using large pages:
Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             1.0              30.1       0.00       4.85
DB CPU(s):                              1.0              30.1       0.00       4.85
Logical read (blocks):            854,682.1      27,672,906.8

Here is the same but with KPTI disabled:
Load Profile                     Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~                     ----------   ---------------   --------   --------
DB Time(s):                             1.0              30.1       0.00       4.85
DB CPU(s):                              1.0              30.1       0.00       4.85
Logical read (blocks):            920,129.9      27,672,906.8

So what?

This is just a test on a synthetic workload, nothing similar to a production database situation. However, those cached SLOB runs are doing what an optimized database application should do most of the time: read blocks from the buffer cache. At least this test is much better than the unexplained graphs, or the “SELECT 1” timings, that I have seen on social media these days.

Some interesting food for thought in those numbers, by the way.

Now vs. last year: between 5% and 12% degradation, which is what people have generally been reporting these days. That looks high, but when we do database performance troubleshooting we are usually there to address queries with 10x to 100x CPU usage doing unnecessary work because of bad design or a suboptimal execution plan.

If you disable KPTI: the degradation is less than 1%, so that’s an easy way to get the old performance back if you are sure that you control all the software running – at least temporarily, until some database tuning is done.

If you disable KPTI, IBRS and IBPB: no better than disabling KPTI alone. I have no explanation for that… It makes me wonder whether this predictive branching is always a good idea.

In any case, if you are not allocating the SGA with large pages, you should. The KPTI degradation is lower with large pages, which makes sense as the page table is smaller. And if you are not yet using large pages, the benefit will probably balance out the KPTI degradation.
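On the database side that is a one-parameter change – a sketch, assuming the hugepages themselves have already been reserved at the OS level before the instance restart:

-- refuse to start the instance unless the whole SGA fits into large pages
alter system set use_large_pages='ONLY' scope=spfile;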

This is not a benchmark, and your application may see a higher degradation if it is doing a lot of system calls. If you upgrade from an old kernel, you may even see an improvement, thanks to other features compensating for the mitigation ones.

 

The article Spectre and Meltdown, Oracle Database, AWS, SLOB appeared first on Blog dbi services.

Running the Oracle Client on a Raspberry Pi

Yann Neuhaus - Tue, 2018-01-09 11:57

What are the possibilities to use a Raspberry Pi computer as an Oracle client?


Among other things, I’ll show in this blog a way to run the fat/thick Oracle Client on a Raspberry Pi!

REMARK: All examples below were made with a Raspberry Pi 3 and the OS Raspbian, which can be downloaded from

https://www.raspberrypi.org/downloads

First of all, what’s possible with Java and thin clients?
Running the Java programs sqldeveloper, or its command-line counterpart sqlcl, is of course possible on a Raspberry Pi:

1.) sqldeveloper

The prerequisite for running sqldeveloper 17.4 (the current version as of writing this blog) is an installed JDK 1.8. As I had that installed by default, I could run sqldeveloper as documented. I.e.


pi@raspberrypi:~ $ sudo apt list --installed | grep jdk
oracle-java8-jdk/stable,now 8u65 armhf [installed]
pi@raspberrypi:~ $ cd Oracle
pi@raspberrypi:~/Oracle $ unzip sqldeveloper-17.4.0.355.2349-no-jre.zip
...
pi@raspberrypi:~/Oracle $ cd sqldeveloper/
pi@raspberrypi:~/Oracle/sqldeveloper $ ./sqldeveloper.sh
 
Oracle SQL Developer
Copyright (c) 1997, 2017, Oracle and/or its affiliates. All rights reserved.

(screenshot of SQL Developer running on the Pi omitted)

2.) sqlcl

Installing sqlcl is as easy as installing sqldeveloper:


pi@raspberrypi:~/Oracle $ unzip sqlcl-17.4.0.354.2224-no-jre.zip
...
pi@raspberrypi:~/Oracle $ alias sqlcl='/home/pi/Oracle/sqlcl/bin/sql'
pi@raspberrypi:~/Oracle $ sqlcl cbleile/cbleile@192.168.178.65:1521/prem122.localdomain
 
SQLcl: Release 17.4.0 Production on Tue Jan 09 14:28:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Last Successful login time: Thu Jan 04 2018 22:15:36 +01:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> set sqlformat ansiconsole
SQL> select table_name, tablespace_name from tabs;
TABLE_NAME   TABLESPACE_NAME
T1           USERS
 
SQL>

3.) Running Java-Code using the JDBC Thin driver

Running Java-Code with access to Oracle is easy as well. Just download the JDBC Thin Driver ojdbc8.jar and put it somewhere on the Pi. In the example below I actually do use the ojdbc8.jar from sqlcl:


pi@raspberrypi:~/Oracle/Java $ more Conn.java
import java.sql.*;

class Conn {
    public static void main (String[] args) throws Exception {
        Class.forName("oracle.jdbc.OracleDriver");

        // @//machineName:port/SID, userid, password
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@//192.168.178.65:1521/prem122.localdomain",
            "cbleile", "cbleile");
        try {
            Statement stmt = conn.createStatement();
            try {
                ResultSet rset = stmt.executeQuery(
                    "select BANNER from SYS.V_$VERSION where BANNER like '%Enterprise Edition%'");
                try {
                    while (rset.next())
                        System.out.println("Connected to " + rset.getString(1));
                } finally {
                    try { rset.close(); } catch (Exception ignore) {}
                }
            } finally {
                try { stmt.close(); } catch (Exception ignore) {}
            }
        } finally {
            try { conn.close(); } catch (Exception ignore) {}
        }
    }
}
pi@raspberrypi:~/Oracle/Java $ javac Conn.java
pi@raspberrypi:~/Oracle/Java $ java -cp ../sqlcl/lib/ojdbc8.jar:. Conn
Connected to Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

4.) Running a fat Client on the Raspberry Pi

Is it actually possible to run the normal Oracle thick/fat Client on the Pi? As the Oracle Client binaries are not available for the ARM processor it seems impossible, but by emulating the x86 platform you actually can do it.

The easiest way to run x86 code on a Raspberry Pi is to buy the product ExaGear Desktop from Eltechs ( https://eltechs.com ) for around 20 Euros (they usually sell it for 16 Euros).
REMARK: You can also install the QEMU image from https://github.com/AlbrechtL/RPi-QEMU-x86-wine , but that’s far more effort.

What is ExaGear? ExaGear is an emulator (i.e. a virtual machine) which emulates x86 Debian Linux on your Raspberry Pi. After downloading and unzipping ExaGear, it is easily installed on the Pi with just


$ sudo ./install-exagear.sh

Afterwards you can start it with the command exagear:


pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
pi@raspberrypi:~ $ exagear
Starting /bin/bash in the guest image /opt/exagear/images/debian-8
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 i686 GNU/Linux
pi@raspberrypi:~ $ arch
i686

I am now inside the x86 world, i.e. I can use this guest shell as if it were running on an x86 machine.
First I update the repositories:


pi@raspberrypi:~ $ sudo apt-get update
...

The next step is to download the Oracle 32-bit 12.2 client software to the Raspberry Pi. To be able to install the Oracle software, a couple of libraries and programs need to be installed first. I.e. inside exagear:


pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo apt-get install libxrender1 libxtst6 libxi6 libaio1 make gcc gawk

To avoid some errors I also had to create 3 symbolic links:


pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/lib/i386-linux-gnu/libpthread_nonshared.a /usr/lib/libpthread_nonshared.a
pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/lib/i386-linux-gnu/libc_nonshared.a /usr/lib/libc_nonshared.a
pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/bin/awk /bin/awk

At that point I could install the Oracle Client software as usual:


pi@raspberrypi:~/Oracle/Downloads/client32 $ ./runInstaller
Starting Oracle Universal Installer...
 
Checking Temp space: must be greater than 415 MB. Actual 1522 MB Passed
Checking swap space: must be greater than 150 MB. Actual 828 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute /usr/bin/xdpyinfo Failed <<<<
 
Some requirement checks failed. You must fulfill these requirements before
 
continuing with the installation,
 
Continue? (y/n) [n] y

xdpyinfo is provided by the x11-utils package on Debian, but it’s actually not necessary here, so just continue by answering “y” at the prompt.

The rest is a normal Oracle Client installation (the installer screenshots are omitted here).


I created a small script to set the environment and was then able to run the client software:


pi@raspberrypi:~ $ more oraclient.sh
#!/bin/bash
 
export ORACLE_HOME=/home/pi/Oracle/app/pi/product/12.2.0/client_1
export PATH=$ORACLE_HOME/bin:$PATH
pi@raspberrypi:~ $ . ./oraclient.sh
pi@raspberrypi:~ $ sqlplus cbleile/cbleile@192.168.178.65:1521/prem122.localdomain
 
SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 9 16:04:36 2018
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Last Successful login time: Thu Jan 04 2018 22:33:09 +01:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> select count(*) from all_objects;
 
COUNT(*)
----------
19640
 
SQL>

Everything worked as expected. I actually haven’t found any issue with the available Oracle programs:


pi@raspberrypi:~ $ ls $ORACLE_HOME/bin
adapters expdp lcsscan orapki.bat statusnc
adrci expdpO linkshlib oraxml symfind
adrciO extjob lmsgen oraxsl sysresv
aqxmlctl extjobo loadjava orion tkprof
aqxmlctl.pl extproc loadpsp osdbagrp tkprofO
bndlchk extprocO loadpspO osh tnsping
coraenv genagtsh lxchknlb ott tnsping0
dbfs_client genclntsh lxegen owm trcasst
dbgeu_run_action.pl genclntst lxinst platform_common trcroute
dbhome genezi mkstore plshprof trcroute0
dbshut geneziO mkstore.bat plshprofO uidrvci
dbstart gennfgt ncomp proc uidrvciO
deploync gennttab netmgr rconfig umu
dg4pwd genoccish oerr relink unzip
dg4pwdO genorasdksh oerr.pl rman wrap
dgmgrl gensyslib ojvmjava rmanO wrc
diagsetup imp ojvmtc roohctl wrcO
diskmon.bin impO orabase schema xml
dropjava impdp orabaseconfig skgxpinfo xmlwf
echodo impdpO orabasehome sqlldr zip
eusm kgmgr oraenv sqlldrO
exp kgmgrO orajaxb sqlplus
expO lbuilder orapki srvctl
pi@raspberrypi:~ $

Just for fun I started a Data Guard observer on my Raspberry Pi and let it perform a fast-start failover followed by reinstating the previous primary DB:


pi@raspberrypi:~ $ dgmgrl
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 9 17:08:31 2018
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/manager@tismeds1
Connected to "TISMEDS1"
Connected as SYSDBA.
DGMGRL> show configuration;
 
Configuration - TISMED
 
Protection Mode: MaxAvailability
Members:
TISMEDS1 - Primary database
TISMEDS2 - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 49 seconds ago)
 
DGMGRL> ENABLE FAST_START FAILOVER ;
Enabled.
DGMGRL> start observer;
[W000 01/09 17:09:16.20] FSFO target standby is TISMEDS2
[W000 01/09 17:09:20.18] Observer trace level is set to USER
[W000 01/09 17:09:20.19] Try to connect to the primary.
[W000 01/09 17:09:20.20] Try to connect to the primary TISMEDS1.
[W000 01/09 17:09:20.28] The standby TISMEDS2 is ready to be a FSFO target
[W000 01/09 17:09:22.29] Connection to the primary restored!
[W000 01/09 17:09:24.35] Disconnecting from database TISMEDS1.
[W000 01/09 17:10:32.84] Primary database cannot be reached.
[W000 01/09 17:10:49.29] Primary database cannot be reached.
[W000 01/09 17:10:49.30] Fast-Start Failover threshold has expired.
[W000 01/09 17:10:49.31] Try to connect to the standby.
[W000 01/09 17:10:49.32] Making a last connection attempt to primary database before proceeding with Fast-Start Failover.
[W000 01/09 17:10:49.33] Check if the standby is ready for failover.
[S002 01/09 17:10:50.03] Fast-Start Failover started...
 
17:10:50.04 Tuesday, January 09, 2018
Initiating Fast-Start Failover to database "TISMEDS2"...
[S002 01/09 17:10:50.05] Initiating Fast-start Failover.
Performing failover NOW, please wait...
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
Failover succeeded, new primary is "TISMEDS2"
17:10:52.79 Tuesday, January 09, 2018
[S002 01/09 17:10:52.80] Fast-Start Failover finished...
[W000 01/09 17:10:52.81] Failover succeeded. Restart pinging.
[W000 01/09 17:10:52.88] Primary database has changed to TISMEDS2.
[W000 01/09 17:10:52.91] Try to connect to the primary.
[W000 01/09 17:10:52.92] Try to connect to the primary TISMEDS2.
[W000 01/09 17:10:54.33] The standby TISMEDS1 needs to be reinstated
[W000 01/09 17:10:54.34] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:10:54.35] Connection to the primary restored!
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
[W000 01/09 17:10:56.36] Disconnecting from database TISMEDS2.
[W000 01/09 17:11:24.84] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
[W000 01/09 17:11:54.85] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12541: TNS:no listener
 
[W000 01/09 17:12:24.17] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12541: TNS:no listener
 
[W000 01/09 17:12:54.54] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:12:57.58] Connection to the new standby restored!
[W000 01/09 17:12:57.63] Failed to ping the new standby.
[W000 01/09 17:12:58.64] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:13:00.65] Connection to the new standby restored!
[W000 01/09 17:13:32.32] Try to connect to the primary TISMEDS2.
[W000 01/09 17:13:34.36] Connection to the primary restored!
[W000 01/09 17:13:35.37] Wait for new primary to be ready to reinstate.
[W000 01/09 17:13:36.38] New primary is now ready to reinstate.
[W000 01/09 17:13:36.38] Issuing REINSTATE command.
 
17:13:36.39 Tuesday, January 09, 2018
Initiating reinstatement for database "TISMEDS1"...
Reinstating database "TISMEDS1", please wait...
[W000 01/09 17:13:54.64] The standby TISMEDS1 is ready to be a FSFO target
Reinstatement of database "TISMEDS1" succeeded
17:13:56.24 Tuesday, January 09, 2018
[W000 01/09 17:13:56.65] Successfully reinstated database TISMEDS1.
[W000 01/09 17:13:57.70] The reinstatement of standby TISMEDS1 was just done

Summary: Is it possible to run an Oracle client on the Raspberry Pi? Yes, it is! Running native Java applications using JDBC thin connections is not a problem at all. Running a fat Oracle Client is possible as well, using x86 emulation software. Is this supported by Oracle? I assume that, as with any other non-Oracle VM solution, you would have to reproduce possible issues on a bare-metal x86 platform to be able to open a Service Request.

Anyway, if you plan to run an Oracle 18c XE DB at home (see e.g. here
https://ora-00001.blogspot.de/2017/10/oracle-xe-12c-becomes-oracle-xe-18c.html )
then you might consider running your client on a Raspberry Pi ;-)

 

The article Running the Oracle Client on a Raspberry Pi appeared first on Blog dbi services.

ASSM argh!

Jonathan Lewis - Tue, 2018-01-09 11:53

Here’s a problem with ASSM that used to exist in older versions of Oracle, had disappeared by 11.2.0.4, and then re-appeared in 12.1.0.2 – disappearing again by 12.2.0.1. It showed up on MoS a few days ago under the heading: “Insert is running long with more waits on db file sequential read”.

The obvious response to this heading is to question the number of indexes on the table – because big tables with lots of indexes tend to give you lots of random I/O as Oracle maintains the indexes – but this table had no indexes. The owner of the problem supplied several bits of information in the initial post, with further material in response to follow-up questions, including the tkprof summary of the 10046/level 12 trace of the insert and two extracts from the trace file showing some of the “db file sequential read” waits. The first extract made me wonder if there might be some issue involving 16KB blocks, but the second one dispelled that illusion.

There are several buggy things that can appear with ASSM and large-scale DML operations, and sometimes the problems can appear long after the original DML had done the dirty deed, so I thought I’d create a simple model based on the information supplied to date – and discovered what the problem (probably) was. Here’s how it starts – I’ve created a tablespace using ASSM, and in this tablespace I’ve created a table which has 48 columns with a row length of 290 bytes (roughly matching the OP’s table), and I’ve hacked out a simple PL/SQL block that loops around inserting arrays of 100 rows at a time into the table for a total of 1M rows before committing.


rem
rem     Script:         assm_cleanout.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2        Lots of blocks left "not full"
rem             11.2.0.4
rem

rem
rem     using OMF, so no file-name needed
rem     Ran this bit as SYS, and altered test user to have unlimited quota
rem

/*
create
        tablespace test_8k_assm
        datafile size 1G
        extent management local
        autoallocate
        segment space management auto
;
*/

rem
rem     Create the table, pre-allocate some space.
rem     This means we should get consistent 8M extents and not initial little ones
rem

create table t1 (
        v001 varchar2(5), v002 varchar2(5), v003 varchar2(5), v004 varchar2(5), v005 varchar2(5),
        v006 varchar2(5), v007 varchar2(5), v008 varchar2(5), v009 varchar2(5), v010 varchar2(5),
        v011 varchar2(5), v012 varchar2(5), v013 varchar2(5), v014 varchar2(5), v015 varchar2(5),
        v016 varchar2(5), v017 varchar2(5), v018 varchar2(5), v019 varchar2(5), v020 varchar2(5),
        v021 varchar2(5), v022 varchar2(5), v023 varchar2(5), v024 varchar2(5), v025 varchar2(5),
        v026 varchar2(5), v027 varchar2(5), v028 varchar2(5), v029 varchar2(5), v030 varchar2(5),
        v031 varchar2(5), v032 varchar2(5), v033 varchar2(5), v034 varchar2(5), v035 varchar2(5),
        v036 varchar2(5), v037 varchar2(5), v038 varchar2(5), v039 varchar2(5), v040 varchar2(5),
        v041 varchar2(5), v042 varchar2(5), v043 varchar2(5), v044 varchar2(5), v045 varchar2(5),
        v046 varchar2(5), v047 varchar2(5), v048 varchar2(5)
)
segment creation immediate
tablespace test_8k_assm
storage(initial 8M)
;

alter table t1 allocate extent (size 8M);
alter table t1 allocate extent (size 8M);

rem
rem     Simple anonymous pl/sql block
rem     Large insert, handled with array inserts
rem     Can modify loop count and array size very easily
rem

declare
        type tab_array is table of t1%rowtype;
        junk_array tab_array;
begin

        select
                'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx',
                'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx',
                'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx',
                'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx',
                'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx', 'xxxxx'
        bulk collect into
                junk_array
        from
                all_objects
        where
                rownum  <= 100 -- > comment to avoid WordPress format issue
        ;

        for i in 1..10000 loop
                forall j in 1..junk_array.count
                        insert into t1 values junk_array(j) ;
        end loop;

end;
/

commit;

The number of rows per block after this insert is 24, with 1038 bytes free space left (808 due to the pctfree = 10, then the bit that was too small to take a 25th row before breaching the pctfree barrier). This means we should report 1M/24 = 41,666 full blocks and one block with some free space. So we query the table using the dbms_space package:


declare
        m_unformatted_blocks    number;
        m_unformatted_bytes     number;
        m_fs1_blocks            number;
        m_fs1_bytes             number;
        m_fs2_blocks            number;
        m_fs2_bytes             number;

        m_fs3_blocks            number;
        m_fs3_bytes             number;
        m_fs4_blocks            number;
        m_fs4_bytes             number;
        m_full_blocks           number;
        m_full_bytes            number;

begin
        dbms_space.SPACE_USAGE(
                segment_owner           => 'TEST_USER',
                segment_name            => 'T1',
                segment_type            => 'TABLE',
                unformatted_blocks      => m_unformatted_blocks,
                unformatted_bytes       => m_unformatted_bytes,
                fs1_blocks              => m_fs1_blocks ,
                fs1_bytes               => m_fs1_bytes,
                fs2_blocks              => m_fs2_blocks,
                fs2_bytes               => m_fs2_bytes,
                fs3_blocks              => m_fs3_blocks,
                fs3_bytes               => m_fs3_bytes,
                fs4_blocks              => m_fs4_blocks,
                fs4_bytes               => m_fs4_bytes,
                full_blocks             => m_full_blocks,
                full_bytes              => m_full_bytes
        );

        dbms_output.new_line;
        dbms_output.put_line('Unformatted                   : ' || to_char(m_unformatted_blocks,'999,999,990') || ' / ' || to_char(m_unformatted_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 1 (  0 -  25% free) : ' || to_char(m_fs1_blocks,'999,999,990') || ' / ' || to_char(m_fs1_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 2 ( 25 -  50% free) : ' || to_char(m_fs2_blocks,'999,999,990') || ' / ' || to_char(m_fs2_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 3 ( 50 -  75% free) : ' || to_char(m_fs3_blocks,'999,999,990') || ' / ' || to_char(m_fs3_bytes,'999,999,999,990'));
        dbms_output.put_line('Freespace 4 ( 75 - 100% free) : ' || to_char(m_fs4_blocks,'999,999,990') || ' / ' || to_char(m_fs4_bytes,'999,999,999,990'));
        dbms_output.put_line('Full                          : ' || to_char(m_full_blocks,'999,999,990') || ' / ' || to_char(m_full_bytes,'999,999,999,990'));

end;
/

The results aren’t what we expect:


Unformatted                   :            0 /                0
Freespace 1 (  0 -  25% free) :       35,001 /      286,728,192
Freespace 2 ( 25 -  50% free) :            1 /            8,192
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :           67 /          548,864
Full                          :        6,665 /       54,599,680

We have one block marked as 25 – 50% free (that’s the one block with 16 rows in it, which means about 40% space currently free) but our 41,666 full blocks are actually reported as 6,665 full blocks and 35,001 blocks with some space available. That’s going to hurt eventually if some process wants to insert more rows and finds that it has to fail its way through 35,001 blocks before finding a block which has enough free space.
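If you want to cross-check the row distribution independently of dbms_space, a quick sketch using dbms_rowid counts the rows in each block:


select  rows_per_block, count(*) blocks
from    (
        select  dbms_rowid.rowid_relative_fno(rowid)    rel_fno,
                dbms_rowid.rowid_block_number(rowid)    block_no,
                count(*)                                rows_per_block
        from    t1
        group by
                dbms_rowid.rowid_relative_fno(rowid),
                dbms_rowid.rowid_block_number(rowid)
        )
group by rows_per_block
order by rows_per_block;

With the data above this should show 41,666 blocks holding 24 rows and one block holding 16 rows, whatever the bitmap flags say.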

So what happens when I repeat the PL/SQL block (and commit)? Here are the results from calls to dbms_space after the next two cycles:


Unformatted                   :            0 /                0
Freespace 1 (  0 -  25% free) :       70,002 /      573,456,384
Freespace 2 ( 25 -  50% free) :            2 /           16,384
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :          220 /        1,802,240
Full                          :       13,330 /      109,199,360

Unformatted                   :          256 /        2,097,152
Freespace 1 (  0 -  25% free) :      105,003 /      860,184,576
Freespace 2 ( 25 -  50% free) :            3 /           24,576
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :          117 /          958,464
Full                          :       19,995 /      163,799,040

Every time we execute the PL/SQL block we leave a trail of 35,001 more blocks which are flagged as “not quite full”.

Looking at the session stats while running the insert loop I can tell that Oracle isn’t checking to see whether or not it should be using those blocks. (A quick way of proving this is to flush the buffer cache before each execution of the PL/SQL and note that Oracle doesn’t read back the 105,000 blocks before inserting any data). So somehow, sometime, someone might get a nasty surprise – and here’s one way that it might happen:

Since I know my data fits 24 rows per block I’m going to modify my PL/SQL block to select one row into the array then loop round the insert 25 times – so I know I’m inserting a little bit more than one block’s worth of data. Starting from the state with 105,003 blocks marked as “Freespace 1” this is what I saw – first, the free space report after inserting 25 rows:


Unformatted                   :          240 /        1,966,080
Freespace 1 (  0 -  25% free) :        1,074 /        8,798,208
Freespace 2 ( 25 -  50% free) :            0 /                0
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :          133 /        1,089,536
Full                          :      123,927 /    1,015,209,984

Then a few wait events and session statistics for the insert:


---------------------------------------------------------
SID:    39:TEST_USER - jonathan
Session Events - 09-Jan 16:57:18
Interval:-      6 seconds
---------------------------------------------------------
Event                                             Waits   Time_outs        Csec    Avg Csec    Max Csec
-----                                             -----   ---------        ----    --------    --------
db file sequential read                          15,308           0         128        .008           3
db file scattered read                           20,086           0         271        .014           4

---------------------------------
Session stats - 09-Jan 16:57:18
Interval:-  6 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
session logical reads                                                  269,537
physical read total IO requests                                         35,401
db block gets                                                          229,522
consistent gets                                                         40,015
physical reads                                                         124,687
physical reads cache                                                   124,687
db block changes                                                       208,489
physical reads cache prefetch                                           89,293
redo entries                                                           207,892
redo size                                                           16,262,724
undo change vector size                                                  1,720
deferred (CURRENT) block cleanout applications                         103,932
table scan blocks gotten                                                20,797
HSC Heap Segment Block Changes                                              25

The session has read and updated almost all of the level 1 bitmap blocks. I don’t know exactly what triggered this re-read, but it seems to be related to the number of rows inserted (or, perhaps, the amount of space used rather than the row count) as an insert crosses the pctfree boundary and fails over to the next block. I’ve only done a couple of little tests to try and get a better idea of why an insert sometimes sweeps through the bitmap blocks – so I know that inserting 2 or 3 rows at a time will also trigger the cleanout – but there are probably several little details involved that need to be identified.

You might note a couple of details in the stats:

  • Because I had flushed the buffer cache before the insert Oracle did its “cache warmup” tablescanning trick – if this had not happened I would probably have done a single block read for every single bitmap block I touched.
  • There are 103,932 block cleanout applications – but 208,000 db block changes and redo entries. Roughly half the latter are for data block cleanouts (OP code 4.1) and half are the state changes on the level 1 bitmap blocks (OP code 13.22). You’ll notice that neither change produces any undo.
  • I’ve also included the HSC Heap Segment Block Changes statistics to show you that not all changes to Heap Segment Blocks show up where you might expect them.
And finally:

If you re-run the tests on 11.2.0.4 and 12.2.0.1 you get the following results after the initial script run – the problem doesn’t exist:


11.2.0.4
========
Unformatted                   :            0 /                0
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            1 /            8,192
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :           67 /          548,864
Full                          :       41,666 /      341,327,872

12.2.0.1
========
Unformatted                   :            0 /                0
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            1 /            8,192
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :           67 /          548,864
Full                          :       41,666 /      341,327,872

So how does this help the OP?
  • First, there may be a huge mess still waiting to be cleaned in the table – but at 34M blocks I’m not too keen to suggest running the dbms_space routine to find out what it looks like – but maybe that’s necessary.
  • Secondly – an upgrade to 12.2 will probably avoid the problem in future.
  • Thirdly – if the number of rows per block is very close to uniform, write a little code to do a loop that inserts (say) 2 * expected number of rows per block as single row inserts and rolls back (see the sketch after this list); the inserts will probably trigger a massive though perhaps not complete cleanout, so rinse and repeat until the cleanout is complete. Try to find a time when you don’t mind the extra load to get this job done.
  • Finally – on the big job that does the bulk insert – repeat the dummy insert/rollback at the end of the job to clean up the mess made by the job.
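Here is a sketch of the insert/rollback idea from the third point, using the 24 rows per block figure from this test case:


begin
        for i in 1..48 loop             -- 2 x 24: two blocks' worth of single-row inserts
                insert into t1(v001) values('xxxxx');
        end loop;
        rollback;
end;
/

The rollback means no data is really added; it’s the single-row inserts crossing block boundaries that trigger the cleanout, and the block can be re-run until the space report looks sane.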
Addenda

Prompted by comment #2 below, I should add that if the problem has been fixed in 12.2 then possibly there’s a bug report and patch for it already. If there isn’t then the OP could raise an SR (referencing this blog note), and request a bug fix or back-port from 12.2.

And within 24 hours of publication, comment #4 (from Yury Pudovchenko) tells us that the bug is fixed by the Oct 2017 Bundle Patch.

get file names in a directory not working anymore due to SQJ is no longer supported by 12.2 or later version

Tom Kyte - Tue, 2018-01-09 03:06
Hi Tom, I have been using your script as listed below to get the list of the file names in a directory and insert them into a global temporary table. Everything was working fine until last week I have upgraded my database to 12.2 and the script i...
Categories: DBA Blogs

Will DataPump STOP_JOB or KILL_JOB do data rollback upon table-space issues?

Tom Kyte - Tue, 2018-01-09 03:06
Half way through data load using impdp (DataPump) got index partition space error. So I did the following: 1. Attached to job. 2. Issued KILL_JOB 3. Restarted DataPump load. After this got duplicates. How do I rollback partial DataPump ...
Categories: DBA Blogs

alter table xxx shrink not reclaiming space

Tom Kyte - Tue, 2018-01-09 03:06
Dear Ask Tom Team, I hope you had wonderful vacations and happy new year!!! I have question related to Table's Space Reclaiming using "shrink" command. There are some fragmented tables in our DB. I found their current size and wasted space u...
Categories: DBA Blogs

Changing dbms_job to dbms_scheduler

Tom Kyte - Tue, 2018-01-09 03:06
Hi Tom, I have a requirement where I need to change dbms_job functionality to dbms_scheduler.Regarding that I have few query : 1.The existing dbms_job.submit_job is called within loop which calls certain procedure ex: for I in 1..10 dbms_job.su...
Categories: DBA Blogs

Export application through command prompt

Tom Kyte - Tue, 2018-01-09 03:06
I have moved some pages from 1 application to another but when exporting application through the application itself. Then some component is not working of the application. So can you provide an alternative way to export application through command p...
Categories: DBA Blogs

ORA-01722: invalid number ORA-06512: in "SYS.DBMS_SQL" - Fetching cursor with bind variables

Tom Kyte - Tue, 2018-01-09 03:06
Hi, I'm trying do run the following PL/SQL block: <code>DECLARE v_trad_cur CLOB; v_trad_par VARCHAR2 (1000); cur PLS_INTEGER := DBMS_SQL.open_cursor; fdbk NUMBER; retorn VARCHAR2 (1000); vd...
Categories: DBA Blogs

pdb_to_apppdb.sql returns ORA-65021- A workaround

Oracle in Action - Tue, 2018-01-09 01:26


While exploring Oracle Multitenant Application Containers, I learnt that in order to convert a regular PDB to an application PDB you must:

  • Clone a regular PDB  into an application root
  • Connect to the cloned PDB and execute $ORACLE_HOME/rdbms/admin/pdb_to_apppdb.sql to convert the cloned regular PDB to an application PDB

However, when I connected to the cloned PDB remotely using @… and executed the script pdb_to_apppdb.sql, I got ORA-65021:

SQL>@$ORACLE_HOME/rdbms/admin/pdb_to_apppdb

...

SQL>create or replace view sys.cdb$common_root_objects&pdbid sharing=object as
  2  select u.name owner, o.name object_name, o.type# object_type, o.namespace nsp,
  3         o.subname object_subname, o.signature object_sig,
  4         decode(bitand(o.flags, &sharing_bits),
  5                &edl+&mdl, 'EDL', &dl, 'DL', 'MDL') sharing
  6    from sys.obj$ o, sys.user$ u
  7   where o.owner#=u.user# and bitand(o.flags, &sharing_bits) <> 0
  8     and bitand(o.flags,&fedobjflag)=&fedobjflag;
old   1: create or replace view sys.cdb$common_root_objects&pdbid sharing=object as
new   1: create or replace view sys.cdb$common_root_objects4 sharing=object as
old   4:        decode(bitand(o.flags, &sharing_bits),
new   4:        decode(bitand(o.flags, (65536+131072+4294967296)),
old   5:               &edl+&mdl, 'EDL', &dl, 'DL', 'MDL') sharing
new   5:               4294967296+65536, 'EDL', 131072, 'DL', 'MDL') sharing
old   7:  where o.owner#=u.user# and bitand(o.flags, &sharing_bits) <> 0
new   7:  where o.owner#=u.user# and bitand(o.flags, (65536+131072+4294967296)) <> 0
old   8:    and bitand(o.flags,&fedobjflag)=&fedobjflag
new   8:    and bitand(o.flags,134217728)=134217728
create or replace view sys.cdb$common_root_objects4 sharing=object as
*
ERROR at line 1:
ORA-65021: illegal use of SHARING clause

After various repeated trials, I realized that connecting to the cloned PDB using “alter session set container …” results in successful execution.

Here is the full article where I encountered this error and found the workaround.

I faced a similar issue while creating an application seed from the application root, and resolved it by connecting to the application seed using alter session set container …

Conclusion: In order to execute the script $ORACLE_HOME/rdbms/admin/pdb_to_apppdb.sql to convert a cloned regular PDB to an application PDB, connect to the target PDB by switching the container. (Do not connect remotely using @.)
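In other words, the working sequence looks like this (container name is illustrative):

connect / as sysdba
alter session set container = cloned_pdb;
@$ORACLE_HOME/rdbms/admin/pdb_to_apppdb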


Copyright © ORACLE IN ACTION [pdb_to_apppdb.sql returns ORA-65021- A workaround], All Right Reserved. 2018.

The post pdb_to_apppdb.sql returns ORA-65021- A workaround appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Partner Webcast – Accelerate Your Digital Transformation in the Cloud with Oracle Visual ...

With increasing demands for modern business applications that will serve specific business needs, and the proliferation of data sources, the speed at which IT departments address line of business...

We share our skills to maximize your revenue!
Categories: DBA Blogs

