Feed aggregator

Oracle Utilities (SQL Loader) for Ubuntu

Tom Kyte - 5 hours 56 min ago
I need to install SQL Loader in Ubuntu to access Oracle 12c R1 on a different box. My understanding is that Ubuntu is not certified for Oracle. Want to confirm my understanding. If yes, please provide the link to install instructions.
Categories: DBA Blogs

Double quotes found on the cloud which are not in the create script

Tom Kyte - 5 hours 56 min ago
Mr. Kyte... We have purchased a few instances of Oracle 12c in the cloud and most recently have encountered a curious double quoting in our tables. create table t_is_it_quoted ( is_this_text_quoted varchar2(20) ); when we look at...
Categories: DBA Blogs

Insert/update table based on condition

Tom Kyte - 5 hours 56 min ago
We have 2 tables one having 4 columns and second has 2 columns. Create table A(Currency_old varchar(20),Currency_indicator_old varchar(20),Currency_New varchar(20),Currency_indicator_new varchar(20)); Create table B (Currency_old varchar(20),C...
Categories: DBA Blogs

DML operations in before/after trigger

Tom Kyte - 5 hours 56 min ago
SQL> CREATE OR REPLACE package DB_ADMIN.TRI_pkg is 2 type t_dblogin is table of t1%rowtype index by pls_integer; 3 type t_audit is table of audit_tab%rowtype index by pls_integer; 4 5 v_dblogon tri_pkg.T_DBLOGIN; 6 v_audit tri_pkg.t_audit; ...
Categories: DBA Blogs

SQL Loader to load multiple tables from CSV input file

Tom Kyte - 5 hours 56 min ago
I have a data file which has comma separated values: col1,col2,col3,col4,Key1,Value1,Key2,Value2,Key3,Value3............ Requirement is to load col1-col4 in Tab1(col1,col2,col3,col4) AND the second part of the data is key/value pairs; the number of key/value...
Categories: DBA Blogs

Non-pipelined table function with dynamic sql statement

Tom Kyte - 5 hours 56 min ago
Hello, Ask Tom Team. Would you mind telling me how I can create a table function without the pipelined option (I think it is unnecessary, because I do not have big data in it) at a package level, using dynamic SQL and a table of records collection. I know ...
Categories: DBA Blogs

Can we update a locked row column in other session ?

Tom Kyte - 5 hours 56 min ago
Hi Tom, I have created one table for example like CREATE TABLE reservation ( Reservation_Id NUMBER, Reservation_number NUMBER ); INSERT INTO reservation (Reservation_Id, Reservation_number) VALUES(1, 100001); INSERT INTO reservat...
Categories: DBA Blogs

Change Decimal pointer

Tom Kyte - 5 hours 56 min ago
Hi, Please inform how I can replace the decimal separator dot (.) with comma (,) in the Oracle database. I want to change it in the database permanently. Also no separator is required for numeric values. All decimal fields of the table will accept comma (,) i...
Categories: DBA Blogs

How to use Clob parameter inside dynamic sql statement to execute immediate

Tom Kyte - 5 hours 56 min ago
Hello, Ask Tom team! Here is my problem. I have a package procedure with some parameters: procedure updt(p_col varchar2,p_value clob) is v_update_clob clob; begin v_update_clob:= 'update table set :p_col=:p_value where cond = ''some_conditi...
Categories: DBA Blogs

Video - Live Webinar - Master Class - ADF Bindings Explained

Andrejus Baranovski - 8 hours 15 min ago
Posting the 2-hour-long video recording of yesterday's webinar - Master Class - ADF Bindings Explained. Thanks to everyone for attending. It was a large crowd, nearly 100 attendees live. I enjoyed answering all of your questions; hopefully this will be useful in your daily work.


Webinar topics:

1. ADF Bindings overview. Why ADF Bindings are required and how they are useful
2. Drill down into ADF Bindings. Explanation of how a binding object is executed from the UI fragment down to the Page Definition
3. ADF Binding types explanation. Information about the different bindings generated when using JDeveloper wizards. What happens with ADF Bindings when using LOV, table, ADF Query and Task Flow wizards.
4. Declarative ADF binding access with expressions
5. Programmatic ADF binding access from managed beans
6. ADF binding sharing and access from ADF Task Flows. How to create a binding layer for Task Flow method call or router activities.
7. Best practices for ADF Bindings
8. Your questions

Download webinar ADF 12c app from GitHub - ADFAltaApp

Oracle TAN Function with Examples

Complete IT Professional - 13 hours 2 min ago
In this article, I’ll explain the Oracle TAN function, what it is, and show you some examples. Purpose of the Oracle TAN Function The purpose of the Oracle TAN function is to calculate the tangent of a number. The tangent is the ratio of the length of the side opposite the angle to the length of the […]
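
The excerpt above is cut off; as a minimal illustration (mine, not from the article), TAN takes an angle in radians, so ATAN(1), which is pi/4 radians or 45 degrees, gives a tangent of 1:

SELECT TAN(0)       AS tan_zero,      -- 0
       TAN(ATAN(1)) AS tan_45_degrees -- ATAN(1) = pi/4 radians, so this returns 1
FROM   dual;
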
Categories: Development

Can I do it with PostgreSQL? – 7 – Partitioning

Yann Neuhaus - 13 hours 43 min ago

PostgreSQL supports tables up to 32TB. Do you want to be the one responsible for managing such a table? I guess not. Usually you start to partition your tables when they grow very fast and consume more than hundreds of gigabytes. Can PostgreSQL do this? Do you know what table inheritance is? No? PostgreSQL implements partitioning by using table inheritance and constraint exclusion. Sounds strange? Let's have a look …

As usual I am running the currently latest version of PostgreSQL:

postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 0.513 ms
(postgres@[local]:5439) [postgres] > 

So, what is table inheritance? In PostgreSQL you do things like this:

(postgres@[local]:5439) [postgres] > create table databases ( name varchar(10), vendor varchar(10) );
CREATE TABLE
Time: 20.477 ms
(postgres@[local]:5439) [postgres] > create table databases_rdbms ( rdbms boolean ) inherits (databases);
CREATE TABLE
Time: 20.080 ms
(postgres@[local]:5439) [postgres] > create table databases_nosql ( nosql boolean ) inherits (databases);
CREATE TABLE
Time: 22.048 ms

What we've done here is: we created three tables in total. The "databases_rdbms" and "databases_nosql" tables inherit from the "databases" table. What does that mean? Let's insert some data into the tables that inherit from the "databases" table:

(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('PostgreSQL','Community',true);
INSERT 0 1
Time: 20.215 ms
(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('MariaDB','MariaDB',true);
INSERT 0 1
Time: 1.666 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('MongoDB','MongoDB',true);
INSERT 0 1
Time: 1.619 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('Cassandra','Apache',true);
INSERT 0 1
Time: 0.833 ms

Note that we did not insert any data into the “databases” table, but when we query the “databases” table we get this result:

(postgres@[local]:5439) [postgres] > select * from databases;
    name    |  vendor   
------------+-----------
 PostgreSQL | Community
 MariaDB    | MariaDB
 MongoDB    | MongoDB
 Cassandra  | Apache
(4 rows)

All the data from all child tables has been retrieved (of course without the additional columns of the child tables). We can still query the child tables:

(postgres@[local]:5439) [postgres] > select * from databases_rdbms;
    name    |  vendor   | rdbms 
------------+-----------+-------
 PostgreSQL | Community | t
 MariaDB    | MariaDB   | t
(2 rows)

Time: 0.224 ms
(postgres@[local]:5439) [postgres] > select * from databases_nosql;
   name    | vendor  | nosql 
-----------+---------+-------
 MongoDB   | MongoDB | t
 Cassandra | Apache  | t
(2 rows)

But when we query only the master table (using the ONLY keyword), there is no result:

(postgres@[local]:5439) [postgres] > select * from only databases;
 name | vendor 
------+--------
(0 rows)

Of course for this specific example it would be better to add an additional column to the master table which specifies if a database is a NoSQL database or not. This is just to show how it works. There is a good example for another use case in the documentation.
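
One more property worth knowing (my addition, not from the original post): DDL on the parent cascades to the children, so adding such a column to the "databases" table would immediately show up in both child tables. A quick sketch, with "open_source" as a made-up column name:

(postgres@[local]:5439) [postgres] > alter table databases add column open_source boolean;
ALTER TABLE
(postgres@[local]:5439) [postgres] > select * from databases_nosql;
   name    | vendor  | nosql | open_source
-----------+---------+-------+-------------
 MongoDB   | MongoDB | t     |
 Cassandra | Apache  | t     |
(2 rows)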

What does all this have to do with partitioning? When you want to partition your tables in PostgreSQL you’ll do exactly the same thing:

(postgres@[local]:5439) [postgres] > create table log_data ( id int, some_data varchar(10), ts date );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2016() inherits ( log_data );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2015() inherits ( log_data );
CREATE TABLE

We want to partition our log data by year, so we create a child table for each year we know we have data for. What we additionally need is a check constraint on each of the child tables:
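
The excerpt jumps straight to the \d+ output, but the "cs1" constraints shown there can be created like this (a sketch matching the definitions below):

(postgres@[local]:5439) [postgres] > alter table log_data_2016 add constraint cs1 check ( ts >= date '2016-01-01' and ts < date '2017-01-01' );
ALTER TABLE
(postgres@[local]:5439) [postgres] > alter table log_data_2015 add constraint cs1 check ( ts >= date '2015-01-01' and ts < date '2016-01-01' );
ALTER TABLE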

(postgres@[local]:5439) [postgres] > \d+ log_data_2016
                             Table "public.log_data_2016"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2016-01-01'::date AND ts < '2017-01-01'::date)
Inherits: log_data

(postgres@[local]:5439) [postgres] > \d+ log_data_2015
                             Table "public.log_data_2015"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2015-01-01'::date AND ts < '2016-01-01'::date)
Inherits: log_data

This guarantees that the child tables only get data for a specific year. So far so good. But how does PostgreSQL know that inserts into the master table should get routed to the corresponding child table? This is done by using triggers:

(postgres@[local]:5439) [postgres] > CREATE OR REPLACE FUNCTION log_data_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( NEW.ts >= DATE '2015-01-01' AND
         NEW.ts < DATE '2016-01-01' ) THEN
        INSERT INTO log_data_2015 VALUES (NEW.*);
    ELSIF ( NEW.ts >= DATE '2016-01-01' AND
            NEW.ts < DATE '2017-01-01' ) THEN
        INSERT INTO log_data_2016 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range!';
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_log_data_trigger
BEFORE INSERT ON log_data
FOR EACH ROW EXECUTE PROCEDURE log_data_insert_trigger();

From now on, inserts against the master table go to the corresponding child table:

(postgres@[local]:5439) [postgres] > insert into log_data values ( 1, 'aaaa', date('2016.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > insert into log_data values ( 2, 'aaaa', date('2015.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > select * from log_data;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
  2 | aaaa      | 2015-03-03
(2 rows)
(postgres@[local]:5439) [postgres] > select * from log_data_2015;
 id | some_data |     ts     
----+-----------+------------
  2 | aaaa      | 2015-03-03
(1 row)

(postgres@[local]:5439) [postgres] > select * from log_data_2016;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
(1 row)

Selects against the master table that use the ts column in the where condition now scan only the (empty) master table itself and the matching child table:

(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2016.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.006..0.006 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2016-03-03'::date)
   ->  Seq Scan on log_data_2016  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2016-03-03'::date)
 Planning time: 0.131 ms
 Execution time: 0.019 ms
(7 rows)
(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2015.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.007..0.007 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2015-03-03'::date)
   ->  Seq Scan on log_data_2015  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2015-03-03'::date)
 Planning time: 0.102 ms
 Execution time: 0.019 ms
(7 rows)
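
This pruning relies on constraint exclusion, which is controlled by the constraint_exclusion parameter; its default value, partition, applies the check constraints to inheritance children like ours. A quick way to verify the setting (my sketch, not from the post); setting it to off would make the planner scan every child table again:

(postgres@[local]:5439) [postgres] > show constraint_exclusion;
 constraint_exclusion
----------------------
 partition
(1 row)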

Of course you can create indexes on the child tables as well. This is how partitioning basically works in PostgreSQL. To be honest, this is not the most beautiful way to do partitioning and this can become tricky to manage. But as always there are projects that assist you, e.g. pg_partman or pg_pathman.

Wouldn’t it be nice to have a SQL syntax to do table partitioning? Exactly this was committed yesterday and will probably be there in PostgreSQL 10 next year. The development documentation already describes the syntax:

[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
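
To give an idea of where this is heading, here is a sketch of what the log_data example could look like with the declarative syntax. It is based on the development documentation, so the details may still change before the release:

create table log_data ( id int, some_data varchar(10), ts date )
partition by range (ts);

create table log_data_2016 partition of log_data
for values from ('2016-01-01') to ('2017-01-01');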
 

The article Can I do it with PostgreSQL? – 7 – Partitioning appeared first on Blog dbi services.

OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7

Yann Neuhaus - 15 hours 18 min ago

User and group quotas have existed for quite a while for all kinds of file systems, like ext4 or vxfs and many others.

However, for my use case, I need quotas for different directories on the same file system. My mount point is /u01, and in that file system I have different Oracle Homes which belong to the same user, oracle. I do not want my 11.2.0.4 Oracle Home to influence my 12.1.0.2 Oracle Home in terms of file system usage: e.g. if the 11.2.0.4 home is core dumping, it should not fill up the space of the 12.1.0.2 home, and the other way around. But how can I do that?

The idea is to create XFS project quotas, and this is how it works.

First of all, we need to enable project quotas on the XFS file system.  /u01 is currently mounted with XFS default options, which are (rw,relatime,attr2,inode64,noquota). As you can see, the default is “noquota”.

[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

We can enable project quotas by adding the “prjquota” mount option to the /etc/fstab and afterwards remounting the file system.
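
The corresponding /etc/fstab entry would then look something like this (a sketch; the dump and fsck fields are assumed to be 0):

/dev/mapper/vg_ora_u01-lv_ora_u01  /u01  xfs  rw,relatime,attr2,inode64,prjquota  0 0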


Unfortunately, the remount option does not work with the XFS filesystem. Meaning, I can remount the file system, but my new option “prjquota” is not reflected.

[root@dbidg01 ~]# mount -o remount,rw,relatime,attr2,inode64,prjquota /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

So I have to do an umount and a mount. Not a good thing from my point of view, because it means that I cannot enable project quotas online for my /u01 directory, where my different Oracle homes are located. In other words, I need to shut down all Oracle databases.

[root@dbidg01 ~]# umount /u01
[root@dbidg01 ~]# mount /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,prjquota)

Ok. Now it looks better. The next step is to define unique project IDs for the different directory hierarchies in the /etc/projects file.

For example, to set a project ID of 11 for the directory hierarchy /u01/app/oracle/product/11.2.0, and a project ID of 12 for the directory hierarchy /u01/app/oracle/product/12.1.0, we can do the following.

# echo "11:/u01/app/oracle/product/11.2.0" >> /etc/projects
# echo "12:/u01/app/oracle/product/12.1.0" >> /etc/projects

[root@dbidg01 ~]# cat /etc/projects
11:/u01/app/oracle/product/11.2.0
12:/u01/app/oracle/product/12.1.0

If you don't want to work with IDs, you can map project names to the project IDs in your /etc/projid file. That makes reporting quota usage much easier, as we will see later.

For example, to map the project name oracle11gR2 to project ID 11 and the project name oracle12cR1 to ID 12, do the following.

# echo "oracle11gR2:11" >> /etc/projid
# echo "oracle12cR1:12" >> /etc/projid

Now use the project subcommand of xfs_quota to define a managed tree in the XFS file system for the different projects.

For example, to define a managed tree in the /u01 file system for the project oracle11gR2, which corresponds to the directory hierarchy /u01/app/oracle/product/11.2.0, do the following.

# xfs_quota -x -c 'project -s oracle11gR2' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle11gR2' /u01
Setting up project oracle11gR2 (path /u01/app/oracle/product/11.2.0)...
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libodm11.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdk.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdkbase.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle11gR2 with recursion depth infinite (-1).

The same applies to project oracle12cR1.

# xfs_quota -x -c 'project -s oracle12cR1' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle12cR1' /u01
Setting up project oracle12cR1 (path /u01/app/oracle/product/12.1.0)...
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/classes.bin
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/cbp.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/libjtcjt.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/US_export_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/cacerts
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/java.security
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/local_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/jce.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/sunjce_provider.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/javaws/javaws
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libjavavm12.a
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libodm12.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntshcore.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.11.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle12cR1 with recursion depth infinite (-1).

Now the fun part begins, and we can start using the limit subcommand to set limits on the disk usage of the different projects. My 11.2.0.4 Oracle home is currently 5.7G in size, and the 12.1.0.2 Oracle home is 7.1G. I want to configure a soft limit of 8G and a hard limit of 12G for the 11g home, and a soft limit of 10G and a hard limit of 16G for the 12c home.

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 11.2.0
5.7G    11.2.0

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 12.1.0
7.1G    12.1.0

# xfs_quota -x -c 'limit -p bsoft=8g bhard=12g oracle11gR2' /u01
# xfs_quota -x -c 'limit -p bsoft=10g bhard=16g oracle12cR1' /u01

[root@dbidg01 ~]# xfs_quota -x -c 'report -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0            8776636          0          0     00 [--------]
oracle11gR2    5883604    8388608   12582912     00 [--------]
oracle12cR1    7415292   10485760   16777216     00 [--------]

[root@dbidg01 ~]# xfs_quota -x -c 'report -h -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                        Blocks
Project ID   Used   Soft   Hard Warn/Grace
---------- ---------------------------------
#0           8.4G      0      0  00 [------]
oracle11gR2   5.6G     8G    12G  00 [------]
oracle12cR1   7.1G    10G    16G  00 [------]

The quotas are immediately seen by the df command. df on /u01 shows the full file system size, which is 50G; however, if I navigate to my 11g or 12c home, I see the soft limit which I configured beforehand.

[root@dbidg01 product]# df -h /u01
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   50G   22G   29G  43% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/11.2.0
[root@dbidg01 11.2.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01  8.0G  5.7G  2.4G  71% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/12.1.0
[root@dbidg01 12.1.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   10G  7.1G  3.0G  71% /u01
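
To convince yourself that the limits are really enforced, a quick test could look like this (a sketch; quota_test is a made-up file name). The write should abort with "No space left on device" as soon as project oracle11gR2 hits its 12G hard limit:

[root@dbidg01 ~]# dd if=/dev/zero of=/u01/app/oracle/product/11.2.0/quota_test bs=1M count=8192
[root@dbidg01 ~]# rm -f /u01/app/oracle/product/11.2.0/quota_test
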
Conclusion

Project quotas with XFS are a quite cool feature. Limiting file system usage per project gives you huge flexibility. Setting quotas on individual Oracle homes is one thing, but you could also limit file system usage for your Oracle databases in the DIAG directory based on ORACLE_SID, so that /u01/app/oracle/diag/rdbms/<SID1> can never fill up /u01/app/oracle/diag/rdbms/<SID2>. The only drawback I see is that enabling project quotas is not an online operation. You need to umount and mount the file system to activate this feature.
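
That per-SID setup would follow exactly the same pattern as above (a sketch; db1 and db2 stand in for your ORACLE_SIDs, and the IDs 31/32 are arbitrary unused project IDs):

# echo "31:/u01/app/oracle/diag/rdbms/db1" >> /etc/projects
# echo "32:/u01/app/oracle/diag/rdbms/db2" >> /etc/projects
# echo "diag_db1:31" >> /etc/projid
# echo "diag_db2:32" >> /etc/projid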

Have fun with XFS quotas.

Cheers,
William

 

The article OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7 appeared first on Blog dbi services.

Updated: Why Does EBS Integration with Oracle Access Manager Require Oracle Internet Directory?

Steven Chan - 15 hours 57 min ago

[The article applies to EBS 12.1 and 12.2; it is an updated version of this older article that covered EBS 12.0 and 11i]

The E-Business Suite has its own security and user-management capabilities.  You can use the E-Business Suite's native features to authenticate users, authorize users (i.e. assign responsibilities to them), and manage your EBS user repository.  The majority of E-Business Suite system administrators simply use these built-in capabilities for enabling access to the E-Business Suite.

When EBS built-in capabilities aren't enough

Some organisations have third-party user authentication systems in place.  These include CA Netegrity SiteMinder, Windows Kerberos, and others.  These organisations frequently use third-party LDAP directory solutions such as Microsoft Active Directory, OpenLDAP, and others. 

We don't certify the E-Business Suite with those third-party products directly, and we don't have any plans to do so.  This article is intended to explain why Oracle Internet Directory (OID) is required when integrating with Oracle Access Manager (OAM). 

OAM and OID are mandatory for third-party integration

Oracle Internet Directory and Oracle Access Manager are mandatory requirements when integrating third-party authentication products directly with the E-Business Suite.

It is not possible to integrate the E-Business Suite directly with Microsoft Active Directory, Windows Kerberos, or CA Netegrity SiteMinder.

It's possible to integrate the E-Business Suite with those third-party solutions via Oracle Access Manager and Oracle Internet Directory.  See these articles:

Before going on, I'd recommend reading those third-party integration articles.  If you don't have those concepts under your belt, the rest of this article isn't going to make much sense.

[Architecture diagram: Oracle Access Manager, Oracle Internet Directory, E-Business Suite AccessGate and WebGate]

Why does EBS require OID with OAM?

Oracle Access Manager itself doesn't require Oracle Internet Directory.  However, Oracle Internet Directory is a mandatory requirement when Oracle Access Manager is integrated with the E-Business Suite.

Why?  The short answer is that the E-Business Suite has hardcoded dependencies on Oracle Internet Directory for this configuration. These dependencies mean that you cannot replace Oracle Internet Directory with any third-party LDAP directory for this particular configuration. 

There are two cases of hardcoded dependencies on Oracle Internet Directory:

1. Reliance on Oracle GUIDs

From the articles linked above, you know that user authentication is handled by Oracle Access Manager, and user authorization is handled by the E-Business Suite itself.  This means that there are two different user namespaces. 

These namespaces must be linked and coordinated somehow, to ensure that a particular user logging in via Oracle Access Manager is the same user represented within the E-Business Suite's own internal FNDUSER repository.

We associate externally-managed Oracle Access Manager users with internally-managed E-Business Suite users via a Global Unique Identifier (GUID).  These Global Unique Identifiers are generated exclusively by Oracle Internet Directory. 

The E-Business Suite has hardcoded functions to handle the mapping of these Global Unique Identifiers between Oracle Access Manager and the E-Business Suite.  These mapping functions are specific to Oracle Internet Directory; it isn't possible to replace Oracle Internet Directory with a generic third-party LDAP directory and still preserve this functionality.

2. Synchronous user account creation

The E-Business Suite is predominantly used internally within an organisation.  Certain E-Business Suite application modules can be made visible to users outside of an organisation.  These include iStore, iRecruitment, iSupplier, and other application modules where the users aren't necessarily restricted to an organisation's own employees.

Users of some of those application modules expect to be able to register for a new account and use it immediately.  This makes sense.  If you're posting job openings via iRecruitment, potential applicants shouldn't need to hold off on submitting their resumes while your E-Business Suite sysadmin creates an account manually, assigns EBS responsibilities, and emails them the account login details. They'll be long gone before that happens.

This means that EBS application modules that support self-registration must create user accounts synchronously.  A new account must be created within the E-Business Suite and the externalized directory at the same time, on demand.

The E-Business Suite has hardcoded dependencies upon Oracle Internet Directory function calls that handle these synchronous account creation tasks.  These function calls are specific to Oracle Internet Directory; it isn't possible to replace Oracle Internet Directory with a generic third-party LDAP directory and still preserve this functionality.



Categories: APPS Blogs

Retrieving DBMS_OUTPUT.put_line from JDBC?

Tom Kyte - Wed, 2016-12-07 17:46
I'm using a Java application to process an incoming feed. The app inserts data into some Oracle tables, then runs an Oracle stored procedure. The Java application is called from a UNIX script, connects to Oracle using Oracle's thin client driver an...
Categories: DBA Blogs

index issue with our partitioned table ora14196 and question if plan OK?

Tom Kyte - Wed, 2016-12-07 17:46
Hello Tom, I have an index issue with our partitioned table. This table cw_tb_zvlist11 has about 500 million rows inserted and the insert process is still active. I see that unfortunately we created the primary index as a nonunique one. Now I tr...
Categories: DBA Blogs

adding column with default value 0

Tom Kyte - Wed, 2016-12-07 17:46
Hi Tom, I have read a lot of posts where you have discussed the subject of adding a column with some default value. Thanks a lot for such a great service you render to the Oracle community. Unfortunately one of my friends informed me some bugs are as...
Categories: DBA Blogs

Is there another option for Next Value For

Tom Kyte - Wed, 2016-12-07 17:46
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production PL/SQL Release 11.2.0.1.0 - Production CORE 11.2.0.1.0 Production TNS for 64-bit Windows: Version 11.2.0.1.0 - Production NLSRTL Version 11.2.0.1.0 - Production How would I go abo...
Categories: DBA Blogs

Using ADO and HeatMap in the Utilities ILM Solution

Anthony Shorten - Wed, 2016-12-07 16:19

The ILM features of the Oracle Database are used in the Oracle Utilities ILM capability to implement the technical side of the solution. In Oracle 12, two new facilities were added to the already available ILM features to make the implementation of ILM easier. These features are Automatic Data Optimization (ADO) and Heat Map.

The Heat Map feature allows Oracle itself to track the use of blocks and segments in your database. Every time a program or user touches a row in the database, such as with a SELECT, UPDATE or DELETE SQL statement, Heat Map records that it was touched. This information is important as it helps profile the actual usage of the data in your database. This information can be used by Automatic Data Optimization. Heat Map is disabled by default and requires a database initialization parameter to be changed.

Automatic Data Optimization is a facility where DBAs can set ILM rules, known as policies, to perform certain ILM actions on the data. For example: if the data has not been touched within X months (using Heat Map data), then COMPRESS it to save space; if ILM_ARCH_SW is set to Y, move the data to partition X. There are a lot of combinations and facilities in the ADO rules to give DBAs flexibility in their rules. ADO allows DBAs to specify the rules and then supplies a procedure that can be scheduled, at the convenience of the site, to implement the rules.
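
The article itself contains no code; as a rough sketch of what this looks like in SQL (heat_map is the initialization parameter mentioned above, while my_ilm_table and the 3-month window are made-up examples):

-- enable Heat Map tracking at the instance level
ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

-- an ADO policy: compress segments that have seen no modification for 3 months
ALTER TABLE my_ilm_table ILM ADD POLICY
    ROW STORE COMPRESS ADVANCED SEGMENT
    AFTER 3 MONTHS OF NO MODIFICATION;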

ADO and Heat Map are powerful data management tools that DBAs should get used to. They allow simple specification of rules and use features in the database to help you manage your data.

For more information about Heat Map and ADO, refer to the following information:

Extended Stats

Jonathan Lewis - Wed, 2016-12-07 09:54

After my Masterclass on indexes at the UKOUG Tech2016 conference this morning I got into a conversation about creating extended stats on a table. I had pointed out in the masterclass that each time you drop an index you really ought to be prepared to create a set of extended stats (specifically a column group) on the list of columns that had defined the index, just in case the optimizer had been using the distinct_keys statistic from the index to help it calculate cardinalities.
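
As an aside (my example, not part of the original argument): a column group is created with the same dbms_stats.create_extended_stats call as the expression extensions used below, just with a column list as the extension, e.g. for a dropped index on (c002, c003):

select
        dbms_stats.create_extended_stats(
                ownname         => user,
                tabname         => 'T1',
                extension       => '(c002,c003)'
        )
from    dual;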

Unfortunately there is a limit on the number of column groups (or any other type of extended stats) you can have on a table and that limit is the larger of 20 and ceiling(number of columns / 10) – so you typically run into a problem if you want to take defensive action after dropping more than twenty (multi-column) indexes. (And you wonder how Oracle’s adaptive dynamic stats process that silently creates column groups overnight handles the problem of needing far more column groups than are allowed.)

The conversation led on to the oddity that the column count includes the virtual columns representing the column groups so, for example, if you have 253 columns in your table you can create 26 column groups; but if you have 26 column groups that means you have a total of 279 columns, so you can actually create a total of 28 groups (an extra 2); but if you create those two column groups you now have a total of 281 columns in the table which means you’re allowed a total of 29 column groups so you can add one more column group for a total of 282 columns. Here’s some code (which I’ve run only on 11.2.0.4) to play with – to keep things very simple I’ve generated some trivial extended stats rather than column groups:

      
rem
rem     Script:         extended_stats_limit2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2016
rem

drop table t1 purge;

rem     create t1 with its first column; the loop below adds c002 - c253
create table t1 (c001 number);

begin
        for i in 2..253 loop
                execute immediate
                'alter table t1 add (c' || to_char(i,'FM000') || ' number)';
        end loop;
end;
/

desc t1

prompt  ============================================================================================
prompt  This will raise an error on the 30th addition
prompt  ORA-20008: Number of extensions in table TEST_USER.T1 already reaches the upper limit (28.2)
prompt  ============================================================================================

declare
        ext_string varchar2(20);
begin
        for i in 1..30 loop
                ext_string := '(c001 + ' || i || ')';
                dbms_output.put_line(
                        dbms_stats.create_extended_stats(
                                ownname         => user,
                                tabname         => 'T1',
                                extension       => ext_string
                        )
                );
        end loop;
end;
/

column column_name format a32

select
        column_name, hidden_column, virtual_column, segment_column_id, internal_column_id
from
        user_tab_cols
where
        table_name = 'T1'
order by
        internal_column_id
;
      
      

This code results in a table with 253 segment columns, and 29 hidden, virtual columns (with names like SYS_STU0#$2X$X1M4NFZVM2O_5A3FC) representing the extended stats. What if I want more extended stats? There is no limit on virtual columns in general, beyond the inherent table limit of 1,000 columns in total, so what if I create a few virtual columns (another 39, say, taking my total column count to 321): would this allow me to increase the number of extended stats to 33 – and if so, what would happen if I then dropped the virtual columns:

      
prompt  ============================================
prompt  Now we add some virtual columns after which
prompt  we will be able to add more extended stats
prompt  and drop the virtual columns
prompt  ============================================

begin
        for i in 1..39 loop
                execute immediate
                'alter table t1 add (virt' || to_char(i,'fm000') ||
                        ' generated always as ( c002 + ' || i || ') virtual)'
                ;
        end loop;
end;
/

select
        column_name, hidden_column, virtual_column, segment_column_id, internal_column_id
from
        user_tab_cols
where
        table_name = 'T1'
order by
        internal_column_id
;

prompt  ============================================================================================
prompt  We can now get up to 33 extended stats
prompt  This will raise an error on the attempt to add the 34th set
prompt  ORA-20008: Number of extensions in table TEST_USER.T1 already reaches the upper limit (32.5)
prompt  ============================================================================================

declare
        ext_string varchar2(20);
begin
        for i in 30..34 loop
                ext_string := '(c001 + ' || i || ')';
                dbms_output.put_line(
                        dbms_stats.create_extended_stats(
                                ownname         => user,
                                tabname         => 'T1',
                                extension       => ext_string
                        )
                );
        end loop;
end;
/

select
        column_name, hidden_column, virtual_column, segment_column_id, internal_column_id
from
        user_tab_cols
where
        table_name = 'T1'
order by
        internal_column_id
;

select
        column_name, internal_column_id
from
        user_tab_cols
where
        table_name = 'T1'
and     hidden_column = 'YES'
and     virtual_column = 'YES'
order by
        internal_column_id
;

prompt  ============================
prompt  Now drop the virtual columns
prompt  ============================

begin
        for r in (
                select column_name from user_tab_cols
                where  column_name like 'VIRT%'
        ) loop
                execute immediate
                'alter table t1 drop column ' || r.column_name;
        end loop;
end;
/

select
        column_name, internal_column_id
from
        user_tab_cols
where
        table_name = 'T1'
and     virtual_column = 'YES'
order by
        internal_column_id
;
      
      

When I ran this code I ended up with a table consisting of 286 columns, of which 253 were my original columns and 33 – with internal column ids of 254 to 286 inclusive – were the extended stats. It seems there is a way to bypass the limit if you really want to – though I’m not sure I’d really want to do it on a production system.

Left as Exercise for the Reader:

Create a table with 5 real columns and the 26 column groups needed to represent all (multi-column) combinations of those five columns. (Remember that the order of columns in a column group is not really significant). (The 26 groups consist of: 1 x 5 column, 5 x 4 column, 10 x 3 column, 10 x 2 column – this may remind some of you of binomial expansions, others may remember it as a row from Pascal’s triangle, you could also view it as a particular subset of the binary representations of the integers from 1 to 31.)

       

